{"text": "\\section{Introduction}\n\\label{sec:intro}\n\n\\emph{Gender diversity}, or more often its lack thereof, among participants to\nsoftware development activities has been thoroughly studied in recent years. In\nparticular, the presence of, effects of, and countermeasures for \\emph{gender\n bias} in Free/Open Source Software (FOSS) have received a lot of attention\nover the past decade~\\cite{david2008fossdevs, qiu2010kdewomen,\n nafus2012patches, kuechler2012genderfoss, vasilescu2014gender,\n oneil2016debiansurvey, robles2016womeninfoss, terrell2017gender,\n zacchiroli2021gender}. \\emph{Geographic diversity} is on the other hand the\nkind of diversity that stems from participants in some global activity coming\nfrom different world regions and cultures.\n\nGeographic diversity in FOSS has received relatively little attention in scholarly\nworks. In particular, while seminal survey-based and\npoint-in-time medium-scale studies of the geographic origins of FOSS\ncontributors exist~\\cite{ghosh2005understanding, david2008fossdevs,\n barahona2008geodiversity, takhteyev2010ossgeography, robles2014surveydataset,\n wachs2021ossgeography}, large-scale longitudinal studies of the geographic\norigin of FOSS contributors are still lacking. 
Such a quantitative\ncharacterization would be useful to inform decisions related to global\ndevelopment teams~\\cite{herbsleb2007globalsweng} and hiring strategies in the\ninformation technology (IT) market, as well as contribute factual information\nto the debates on the economic impact and sociology of FOSS around the world.\n\n\n\\paragraph{Contributions}\n\nWith this work we contribute to closing this gap by conducting \\textbf{the first\n longitudinal study of the geographic origin of contributors to public code\n over 50 years.} Specifically, we provide a preliminary answer to the\nfollowing research question:\n\\begin{researchquestion}\n From which world regions do authors of publicly available commits come,\n and how has this changed over the past 50 years?\n \\label{rq:geodiversity}\n\\end{researchquestion}\nWe use as dataset the \\SWH/ archive~\\cite{swhipres2017} and analyze from it\n2.2 billion\\xspace commits archived from 160 million\\xspace projects and authored by\n43 million\\xspace authors during the 1971--2021 time period. \nWe geolocate developers to\n\\DATAWorldRegions/ world regions, using as signals email country code top-level domains (ccTLDs), \nauthor (first/last) names compared with name distributions around the world, and UTC offsets \nmined from commit metadata.\n\nWe find evidence of the early dominance of North America in open source\nsoftware, later joined by Europe. 
After that period, the geographic diversity \nin public code has been constantly increasing.\nWe also identify relevant historical shifts\nrelated to the end of the UNIX wars and the increase of coding literacy in\nCentral and South Asia, as well as broader phenomena like colonialism and\nthe movement of people across countries (immigration/emigration).\n\n\n\n\n\\paragraph{Data availability.}\n\nA replication package for this paper is available from Zenodo at\n\\url{https://doi.org/10.5281/zenodo.6390355}~\\cite{replication-package}.\n\n\n \\section{Related Work}\n\\label{sec:related}\n\nBoth early and recent works~\\cite{ghosh2005understanding, david2008fossdevs,\n robles2014surveydataset, oneil2016debiansurvey} have characterized the\ngeography of Free/Open Source Software (FOSS) using \\emph{developer surveys},\nwhich provide high-quality answers but are limited in size (2-5\\,K developers)\nand can be biased by participant sampling.\n\nIn 2008 Barahona et al.~\\cite{barahona2008geodiversity} conducted a seminal\nlarge-scale (for the time) study on FOSS \\emph{geography using mining software\n repositories (MSR) techniques}. They analyzed the origin of 1\\,M contributors\nusing the SourceForge user database and mailing list archives over the\n1999--2005 period, using as signals information similar to ours: email domains\nand UTC offsets. \nThe studied period (7 years) in~\\cite{barahona2008geodiversity} is shorter than \nwhat is studied in the present paper (50 years) and the data sources are \nlargely different; with that in mind, our results show a slightly larger share of \nEuropean v.~North American contributions.\n\nAnother empirical work from 2010 by Takhteyev and\nHilts~\\cite{takhteyev2010ossgeography} harvested self-declared geographic\nlocations of GitHub accounts recursively following their connections,\ncollecting information for $\\approx$\\,70\\,K GitHub users. 
A very recent\nwork~\\cite{wachs2021ossgeography} by Wachs et al.~has geolocated half a million\nGitHub users who contributed at least 100 commits each and\nself-declare locations on their GitHub profiles. While the study is\npoint-in-time as of 2021, the authors compare their findings\nagainst~\\cite{barahona2008geodiversity, takhteyev2010ossgeography} to\ncharacterize the evolution of FOSS geography over the time snapshots taken by\nthe three studies.\n\nCompared with previous empirical works, our study is much larger in scale---having\nanalyzed 43 million\\xspace authors of 2.2 billion\\xspace commits from 160 million\\xspace\nprojects---longitudinal over 50 years of public code contributions rather than\npoint in time, and also more fine-grained (with year-by-year granularity over\nthe observed period). Methodologically, our study relies on Version Control\nSystem (VCS) commit data rather than platform-declared location information.\n\n\nOther works---in particular the work by Daniel~\\cite{daniel2013ossdiversity}\nand, more recently, Rastogi et al.~\\cite{rastogi2016geobias,\n rastogi2018geobias, prana2021geogenderdiversity}---have studied geographic\n\\emph{diversity and bias}, i.e., the extent to which the origin of FOSS\ndevelopers affects their collaborative coding activities.\nIn this work we characterize geographic diversity in public code for the first\ntime at this scale, both in terms of contributors and observation period. 
We do\nnot tackle the bias angle, but provide empirical data and findings that can be\nleveraged to that end as future work.\n\n\\emph{Global software engineering}~\\cite{herbsleb2007globalsweng} is the\nsub-field of software engineering that has analyzed the challenges of scaling\ndeveloper collaboration globally, including the specific concern of how to deal\nwith geographic diversity~\\cite{holmstrom2006globaldev, fraser2014eastwest}.\nDecades later the present study provides evidence that can be used, in the\nspecific case of public code and at a very large scale, to verify which\npromises of global software engineering have borne fruit.\n\n\n\n\n\n\n \\section{Methodology}\n\\label{sec:method}\n\n\n\\newif\\ifgrowthfig \\growthfigtrue\n\\ifgrowthfig\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{yearly-commits}\n \\caption{Yearly public commits over time (log scale).\n}\n \\label{fig:growth}\n\\end{figure}\n\\fi\n\n\\paragraph{Dataset}\n\nWe retrieved from \\SWH/~\\cite{swh-msr2019-dataset} all commits archived until \\DATALastCommitDate/.\nThey amount to \\DATACommitsRaw/ commits, unique by SHA1 identifier, harvested from \\DATATotalCommitsInSH/ public projects coming from major development forges (GitHub, GitLab, etc.) 
and package repositories (Debian, PyPI, NPM, etc.).\nCommits in the dataset are by \\DATAAuthorsRaw/ authors, unique by $\\langle$name, email$\\rangle$ pairs.\nThe dataset came as two relational tables, one for commits and one for authors, with the former referencing the latter via a foreign key.\n\\iflong\nEach row in the commit table contains the following fields: commit SHA1 identifier, author and committer timestamps, author and committer identifiers (referencing the author table).\nThe distinction between commit authors and committers comes from Git, which allows committing a change authored by someone else.\nFor this study we focused on authors and ignored committers, as the difference between the two is not relevant for our research questions and the number of commits with a committer other than their author is negligible.\n\\fi\nFor each entry in the author table we have author full name and email as two separate strings of raw bytes.\n\nWe removed implausible or unusable names that: are not decodable as UTF-8 (\\DATAAuthorsRmNondecodable/ author names removed), are email addresses instead of names (\\DATAAuthorsRmEmail/ ``names''), consist of only blank characters (\\DATAAuthorsRmBlank/), contain more than 10\\% non-letters (\\DATAAuthorsRmNonletter/), are longer than 100 characters (\\DATAAuthorsRmToolong/).\nAfter filtering, about \\DATAAuthorsPlausibleApprox/ authors (\\DATAAuthorsPlausiblePct/ of the initial dataset) remained for further analysis.\n\nNote that the amount of public code commits (and authors) contained in the\ninitial dataset grows exponentially over\ntime~\\cite{swh-provenance-emse}\\ifgrowthfig, as shown for commits in\n\\Cref{fig:growth}\\else: from $10^4$ commits in 1971, to $10^6$ in 1998, to\nalmost $10^9$ in 2020\\fi. 
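The author-name filtering heuristics described above can be sketched as follows. This is a minimal illustration rather than the authors' actual implementation; in particular, how blank characters are counted in the 10 percent non-letter rule is an assumption.

```python
import re

# Hypothetical sketch of the name-plausibility filter described in the text.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def is_plausible_author_name(raw: bytes) -> bool:
    # Reject names that are not decodable as UTF-8.
    try:
        name = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False
    stripped = name.strip()
    # Reject names consisting of only blank characters.
    if not stripped:
        return False
    # Reject email addresses used in place of names.
    if EMAIL_RE.fullmatch(stripped):
        return False
    # Reject names longer than 100 characters.
    if len(name) > 100:
        return False
    # Reject names with more than 10% non-letters
    # (blanks ignored -- an assumption on our part).
    chars = [c for c in name if not c.isspace()]
    non_letters = sum(1 for c in chars if not c.isalpha())
    return non_letters <= 0.10 * len(chars)
```

A name passes only if it survives every check, mirroring the order in which the filters are listed in the text.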
As a consequence the observed trends tend to be more\nstable in recent decades than in 40+ year-old ones, due to statistics taken on\nexponentially larger populations.\n\n\n\\paragraph{Geolocation}\n\n\\begin{figure}\n \\centering\n \\includegraphics[clip,trim=6cm 6cm 0 0,width=\\linewidth]{subregions-ours}\n \\caption{The \\DATAWorldRegions/ world regions used as geolocation targets.}\n \\label{fig:worldmap}\n\\end{figure}\n\nAs geolocation targets we use macro world regions derived from the United Nations geoscheme~\\cite{un1999geoscheme}.\nTo avoid domination by large countries (e.g., China or Russia) within macro regions, we merged and split some regions based on geographic proximity and the sharing of preeminent cultural identification features, such as spoken language.\n\\Cref{fig:worldmap} shows the final list of \\DATAWorldRegions/ world regions used as geolocation targets in this study.\n\nGeolocation of commit authors to world regions uses the two complementary techniques introduced in~\\cite{icse-seis-2022-gender}, briefly recalled below.\nThe first one relies on the country code top-level domain (ccTLD) of email addresses extracted from commit metadata, e.g., \\texttt{.fr}, \\texttt{.ru}, \\texttt{.cn}, etc.\nWe started from the IANA list of Latin character ccTLDs~\\cite{wikipedia-cctld} and manually mapped each corresponding territory to a target world region.\n\nThe second geolocation technique uses the UTC offset of commit timestamps (e.g., UTC-05:00) and author names to determine the most likely world region of the commit author.\nFor each UTC offset we determine a list of compatible places (country, state, or dependent territory) in the world that, at the time of that commit, had that UTC offset; commit time is key here, as country UTC offsets vary over time due to timezone changes.\nTo make this determination we use the IANA time zone database~\\cite{tzdata}.\n\nThen we assign to each place a score that captures the likelihood that a given author 
name is characteristic of it.\nTo this end we use the Forebears dataset of the frequencies of the most common first and family names which, quoting from~\\cite{forebear-names}: {\\itshape ``provides the approximate incidence of forenames and surnames produced from a database of \\num{4 044 546 938} people (55.5\\% of living people in 2014). As of September 2019 it covers \\num{27 662 801} forenames and \\num{27 206 821} surnames in 236 jurisdictions.''}\nAs in our dataset authors are full name strings (rather than split by first/family name), we first tokenize names (by blanks and case changes) and then look up individual tokens in both first and family names frequency lists.\nFor each element found in name lists we multiply the place population\\footnotemark{} by the name frequency to obtain a measure that is proportional to the number of persons bearing that name (token) in the specific place.\n\\footnotetext{To obtain population totals---as the notion of ``place'' is heterogeneous: full countries v.~slices of large countries spanning multiple timezones---we use a mixture of primary sources (e.g., government websites), and non-primary ones (e.g., Wikipedia articles).}\nWe sum this figure for all elements to obtain a place score, ending up with a list of $\\langle$place, score$\\rangle$ pairs.\nWe then partition this list by the world region that a place belongs to and sum the score for all the places in each region to obtain an overall score, corresponding to the likelihood that the commit belongs to a given world region.\nWe finally assign the commit to the world region with the highest score.\n\nThe email-based technique suffers from the limited and unbalanced use of ccTLDs: most developers use generic TLDs such as \\texttt{.com}, \\texttt{.org}, or \\texttt{.net}.\nMoreover, this does not happen uniformly across zones: US-based developers, for example, use the \\texttt{.us} ccTLD much more rarely than their European counterparts.\nOn the other 
hand the offset/name-based technique relies on the UTC offset of the commit timestamps.\nDue to tool configurations on developer setups, a large number of commits in the dataset have a UTC offset equal to zero.\nThis affects recent commits less (\\DATACommitsTZZTwoThousandTwenty/ of 2020s commits have a zero offset) than older ones (\\DATACommitsTZZTwoThousand/ in 2000).\nAs a result the offset/name-based technique could end up detecting a large share of older commits as authored by African developers, and to a lesser extent Europeans.\n\nTo counter these issues we combine the two geolocation techniques by applying the offset/name-based technique to all commits with a non-zero UTC offset, and the email-based one to all other commits.\n\n\n \\section{Results and Discussion}\n\\label{sec:results}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{stacked.pdf}\n \\caption{Ratio of commits (above) and active authors (below) by world zone over the 1971--2020 period.}\n \\Description[Chart]{Stacked bar chart showing the world zone ratios for commits and authors over the 1971--2020 period.}\n \\label{fig:results}\n\\end{figure*}\n\n\n \nTo answer \\cref{rq:geodiversity} we gathered the number of commits and distinct authors per year and per world zone.\nWe present the obtained results in \\Cref{fig:results} as two stacked bar charts, showing yearly breakdowns for commits and authors, respectively.\nEvery bar represents a year and is partitioned into slices showing the commit/author ratio for each of the world regions of \\Cref{fig:worldmap} in that year.\nTo avoid outliers due to sporadic contributors, in the author chart we only consider authors having contributed at least 5 commits in a given year.\n\nWhile observing trends in the charts, remember that the total numbers of commits and authors grow exponentially over time.\nHence for the first years in the charts, the number of data points in some world regions can be extremely small, with 
negative consequences on the stability of trends.\n\n\n\n\n\\paragraph{Geographic diversity over time}\n\nOverall, the general trend appears to be that the \\textbf{geographic diversity in public code is increasing}: North America and Europe alternated their ``dominance'' until the middle of the 90s; from that moment on most other world regions show a slow but steady increase.\nThis trend of increased participation in public code development includes Central and South Asia (comprising India), Russia, Africa, Central and South America.\nNote that even zones that do not seem to follow this trend, such as Australia and New Zealand, are increasing their participation, just at a lower speed than other zones.\nFor example, Australia and New Zealand increased the absolute number of their commits by about 3 orders of magnitude from 2000 to the present day.\n\nAnother interesting phenomenon that can be appreciated in both charts is the sudden contraction of contributions from North America in 1995; since the charts depict ratios, this corresponds to other zones, and Europe in particular, increasing their share.\nAn analysis of the top contributors in the years right before the contraction shows that nine out of ten have \\texttt{ucbvax.Berkeley.EDU} as author email domain, and the tenth is Keith Bostic, one of the leading Unix BSD developers, appearing with email \\texttt{bostic}.\nNo developer with the same email domain appears among the top hundred contributors in 1996.\nThis shows the relevance that BSD Unix and the Computer Systems Research Group at the University of California at Berkeley had in the history of open source software.\nThe group was disbanded in 1995, partially as a consequence of the so-called UNIX wars~\\cite{kernighan2019unixhistory}, and this contributed significantly---also because of the relatively low amount of public code circulating at the time---to the sudden drop of contributions from North America in subsequent 
years.\nDescendant UNIX operating systems based on BSD, such as OpenBSD, FreeBSD, and NetBSD, had smaller relevance to world trends due to (i) the increasing amount of open source code coming from elsewhere and (ii) their more geographically diverse developer community.\n\nAnother time frame in which the ratios for Europe and North America are subject to large, sudden changes is 1975--79.\nA preliminary analysis shows that these ratios are erratic due to the very limited number of commits in that time period, but we were unable to detect a specific root cause.\nTrends for those years should be subject to further studies, in collaboration with software historians.\n\n\n\\paragraph{Colonialism}\n\nAnother trend that stands out from the charts is that Africa appears to be well represented.\nTo assess if this results from a methodological bias, we double-checked the commits detected as originating from Africa for timezones included in the $[0, 3]$ range using both the email- and the offset/name-based methods.\nThe results show that the offset/name-based approach assigns 22.7\\% of the commits to Africa whereas the email-based one only assigns 2.7\\% of them.\nWhile a deeper investigation is in order, it is our opinion that the phenomenon we are witnessing here is a consequence of colonialism, specifically the adoption of European names in African countries.\nFor example, the name Eric, derived from Old Norse, is more popular in Ghana than it is in France or in the UK.\nThis challenges the ability of the offset/name-based method to correctly differentiate between candidate places.\nTogether with the fact that several African countries have large populations, the offset/name-based method could detect European names as originating from Africa.\nWhile this cuts both ways, the likelihood of a random person contributing to public code is very different between European countries, all having a well-developed software industry, and African countries that do not all share this 
trait.\n\n\n\\paragraph{Immigration/emigration}\n\nAnother area where a similar phenomenon could be at play is the evolution of Central and South America.\nContributions from this macro region appear to be growing steadily.\nTo assess if this is the result of a bias introduced by the name-based detection, we analyzed the evolution of offset/name-based assignment over time for authors whose email domain is among the top-ten US-based entities in terms of overall contributions (estimated in turn by analyzing the most frequent email domains and manually selecting those belonging to US-based entities).\nIn 1971 no author with an email from top US-based entities is detected as belonging to Central and South America, whereas in 2019 the ratio is 12\\%.\nNowadays more than one tenth of the people email-associated with top US-based entities have popular Central and South American names, which we posit as a likely consequence of immigration into the US (emigration from Central and South America).\nSince immigration has a much longer history than what we are studying here, what we are witnessing probably includes long-term consequences of it, such as second- and third-generation immigrants employed in white-collar jobs like software development.\n\n\n\n\n \\section{Limitations and Future Work}\n\\label{sec:conclusion}\n\nWe have performed an exploratory, yet very large scale, empirical study of the geographic diversity in public code commits over time.\nWe have analyzed 2.2 billion\\xspace public commits covering the \\DATAYearRange/ time period.\nWe have geolocated developers to \\DATAWorldRegions/ world regions using as signals email domains, timezone offsets, and author names.\nOur findings show that the geographic diversity in public code is increasing over time, and markedly so over the past 20--25 years.\nObserved trends also co-occur with historical events and macro phenomena like the end of the UNIX wars, the increase of coding literacy around the world, colonialism, and 
immigration.\n\n\n\\medskip\n\\emph{Limitations.}\nThis study relies on a combination of two geolocation methods: one based on email domains, another based on commit UTC offsets and author names.\nWe discussed some of the limitations of either method in \\Cref{sec:method}, motivating our decision to restrict the use of the email-based method to commits with a zero UTC offset.\nAs a consequence, for most commits in the dataset the offset/name-based method is used.\nWith this method, the frequencies of forenames and surnames are used to rank candidate zones that have a compatible UTC offset at commit time.\n\nA practical consequence of this is that for commits with, say, offset UTC+09:00 the candidate places can be Russia, Japan and Australia, depending on the specific date due to daylight saving time.\nPopular forenames and surnames in these regions tend to be quite different, so the likelihood that the method provides a reliable detection is high.\nFor other offsets the set of popular forenames and surnames from candidate zones can exhibit more substantial overlaps, negatively impacting detection accuracy.\nWe have discussed some of these cases in \\Cref{sec:results}, but others might be lingering in the results, impacting observed trends.\n\nThe choice of using the email-based method for commits with zero UTC offset, and the offset/name-based method elsewhere, has allowed us to study all developers not having a country-specific email domain (ccTLD), but comes with the risk of under-representing the world zones that have (in part, and at some times of the year) an actual UTC offset of zero.\n\nA potential bias in this study could be introduced by the fact that the name database used for offset/name-based geolocation only contains names formed using Latin alphabet characters.\nWe looked for names containing Chinese, Japanese, and Korean characters in the original dataset, finding only a negligible number of authors who use non-Latin characters in their VCS names, 
which leads us to believe that the impact of this issue is minimal.\n\nWe did not apply identity merging (e.g., using state-of-the-art tools like SortingHat~\\cite{moreno2019sortinghat}), but we do not expect this to be a significant issue because: (a) to introduce bias in author trends the distribution of identity merges around the world should be uneven, which seems unlikely; and (b) the observed commit trends (which would be unaffected by identity merging) are very similar to observed author trends.\n\nWe did not systematically remove known bot accounts~\\cite{lebeuf2018swbots} from the author dataset, but we did check for the presence of software bots among the top committers of each year. We only found limited traces of continuous integration (CI) bots, used primarily to automate merge commits. After removing CI bots from the dataset the observed global trends were unchanged; therefore, this paper presents unfiltered data.\n\n\n\\medskip\n\\emph{Future work.}\nTo some extent the above limitations are the price to pay to study such a large dataset: there exists a trade-off between large-scale analysis and accuracy.\nWe plan nonetheless to further investigate and mitigate them in future work.\nMulti-method approaches, merging data mining with social science methods, could be applied to address some of the questions raised in this exploratory study.\nWhile they do not scale to the whole dataset, multi-method approaches can be adopted to dig deeper into specific aspects, specifically those related to social phenomena.\nSoftware is a social artifact; it is no wonder that aspects related to sociocultural evolution emerge when analyzing its evolution at this scale.\n\n\n\n\n \n\\clearpage\n\n\n", "meta": {"timestamp": "2022-03-30T02:27:00", "yymm": "2203", "arxiv_id": "2203.15369", "language": "en", "url": "https://arxiv.org/abs/2203.15369"}} {"text": "\\section{Introduction}\n\nOne of the fundamental ingredients in the theory of non-commutative or\nquantum geometry is the 
notion of a differential calculus.\nIn the framework of quantum groups the natural notion\nis that of a\nbicovariant differential calculus as introduced by Woronowicz\n\\cite{Wor_calculi}. Since non-commutativity is allowed,\nthe uniqueness of a canonical calculus is lost.\nIt is therefore desirable to classify the possible choices.\nThe most important piece is the space of one-forms or ``first\norder differential calculus'' to which we will restrict our attention\nin the following. (From this point on we will use the term\n``differential calculus'' to denote a\nbicovariant first order differential calculus.)\n\nMuch attention has been devoted to the investigation of differential\ncalculi on quantum groups $C_q(G)$ of function algebra type for\n$G$ a simple Lie group.\nNatural differential calculi on matrix quantum groups were obtained by\nJurco \\cite{Jur} and\nCarow-Watamura et al.\\\n\\cite{CaScWaWe}. A partial classification of calculi of the same\ndimension as the natural ones\nwas obtained by\nSchm\\\"udgen and Sch\\\"uler \\cite{ScSc2}.\nMore recently, a classification theorem for factorisable\ncosemisimple quantum groups was obtained by Majid \\cite{Majid_calculi},\ncovering the general $C_q(G)$ case. A similar result was\nobtained later by Baumann and Schmitt \\cite{BaSc}.\nAlso, Heckenberger and Schm\\\"udgen \\cite{HeSc} gave a\ncomplete classification on $C_q(SL(N))$ and $C_q(Sp(N))$. \n\n\nIn contrast, for $G$ not simple or semisimple the differential calculi\non $C_q(G)$\nare largely unknown. A particularly basic case is the Lie group $B_+$\nassociated with the Lie algebra $\\lalg{b_+}$ generated by two elements\n$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra\n\\ensuremath{U_q(\\lalg{b_+})}{}\nis self-dual, i.e.\\ is non-degenerately paired with itself \\cite{Drinfeld}.\nThis has an interesting consequence: \\ensuremath{U_q(\\lalg{b_+})}{} may be identified with (a\ncertain algebraic model of) \\ensuremath{C_q(B_+)}. 
The differential calculi on this\nquantum group and on its ``classical limits'' \\ensuremath{C(B_+)}{} and \\ensuremath{U(\\lalg{b_+})}{}\nwill be the main concern of this paper. We pay hereby equal attention\nto the dual notion of ``quantum tangent space''.\n\nIn section \\ref{sec:q} we obtain the complete classification of differential\ncalculi on \\ensuremath{C_q(B_+)}{}. It turns out that (finite\ndimensional) differential\ncalculi are characterised by finite subsets $I\\subset\\mathbb{N}$.\nThese\nsets determine the decomposition into coirreducible (i.e.\\ not\nadmitting quotients) differential calculi\ncharacterised by single integers. For the coirreducible calculi the\nexplicit formulas for the commutation relations and braided\nderivations are given.\n\nIn section \\ref{sec:class} we give the complete classification for the\nclassical function algebra \\ensuremath{C(B_+)}{}. It is essentially the same as in the\n$q$-deformed setting and we stress this by giving an almost\none-to-one correspondence of differential calculi to those obtained in\nthe previous section. In contrast, however, the decomposition and\ncoirreducibility properties do not hold at all. (One may even say that\nthey are maximally violated). We give the explicit formulas for those\ncalculi corresponding to coirreducible ones.\n\nMore interesting perhaps is the ``dual'' classical limit. I.e.\\ we\nview \\ensuremath{U(\\lalg{b_+})}{} as a quantum function algebra with quantum enveloping\nalgebra \\ensuremath{C(B_+)}{}. This is investigated in section \\ref{sec:dual}. It\nturns out that in this setting we have considerably more freedom in\nchoosing a\ndifferential calculus since the bicovariance condition becomes much\nweaker. This shows that this dual classical limit is in a sense\n``unnatural'' as compared to the ordinary classical limit of section\n\\ref{sec:class}. \nHowever, we can still establish a correspondence of certain\ndifferential calculi to those of section \\ref{sec:q}. 
The\ndecomposition properties are conserved while the coirreducibility\nproperties are not.\nWe give the\nformulas for the calculi corresponding to coirreducible ones.\n\nAnother interesting aspect of viewing \\ensuremath{U(\\lalg{b_+})}{} as a quantum function\nalgebra is the connection to quantum deformed models of space-time and\nits symmetries. In particular, the $\\kappa$-deformed Minkowski space\ncoming from the $\\kappa$-deformed Poincar\\'e algebra\n\\cite{LuNoRu}\\cite{MaRu} is just a simple generalisation of \\ensuremath{U(\\lalg{b_+})}.\nWe use this in section \\ref{sec:kappa} to give\na natural $4$-dimensional differential calculus. Then we show (in a\nformal context) that integration is given by\nthe usual Lebesgue integral on $\\mathbb{R}^n$ after normal ordering.\nThis is obtained in an intrinsic context different from the standard\n$\\kappa$-Poincar\\'e approach.\n\nA further important motivation for the investigation of differential\ncalculi on\n\\ensuremath{U(\\lalg{b_+})}{} and \\ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale\nHopf algebra \\cite{Majid_Planck}\\cite{Majid_book}. This shall be\ndeveloped elsewhere.\n\nIn the remaining parts of this introduction we will specify our\nconventions and provide preliminaries on the quantum group \\ensuremath{U_q(\\lalg{b_+})}, its\ndeformations, and differential calculi.\n\n\n\\subsection{Conventions}\n\nThroughout, $\\k$ denotes a field of characteristic 0 and\n$\\k(q)$ denotes the field of rational\nfunctions in one parameter $q$ over $\\k$.\n$\\k(q)$ is our ground field in\nthe $q$-deformed setting, while $\\k$ is the\nground field in the ``classical'' settings.\nWithin section \\ref{sec:q} one could equally well view $\\k$ as the ground\nfield with $q\\in\\k^*$ not a root of unity. 
This point of view is\nproblematic, however, when obtaining ``classical limits'' as\nin sections \\ref{sec:class} and \\ref{sec:dual}.\n\nThe positive integers are denoted by $\\mathbb{N}$ while the non-negative\nintegers are denoted by $\\mathbb{N}_0$.\nWe define $q$-integers, $q$-factorials and\n$q$-binomials as follows:\n\\begin{gather*}\n[n]_q=\\sum_{i=0}^{n-1} q^i\\qquad\n[n]_q!=[1]_q [2]_q\\cdots [n]_q\\qquad\n\\binomq{n}{m}=\\frac{[n]_q!}{[m]_q! [n-m]_q!}\n\\end{gather*}\nFor a function of several variables (among\nthem $x$) over $\\k$ we define\n\\begin{gather*}\n(T_{a,x} f)(x) = f(x+a)\\\\\n(\\fdiff_{a,x} f)(x) = \\frac{f(x+a)-f(x)}{a}\n\\end{gather*}\nwith $a\\in\\k$ and similarly over $\\k(q)$\n\\begin{gather*}\n(Q_{m,x} f)(x) = f(q^m x)\\\\\n(\\partial_{q,x} f)(x) = \\frac{f(x)-f(qx)}{x(1-q)}\\\\\n\\end{gather*}\nwith $m\\in\\mathbb{Z}$.\n\nWe frequently use the notion of a polynomial in an extended\nsense. Namely, if we have an algebra with an element $g$ and its\ninverse $g^{-1}$ (as\nin \\ensuremath{U_q(\\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power\nseries in $g$ with exponents in $\\mathbb{Z}$. 
The length of such a polynomial
is the difference between highest and lowest degree.

If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra
with the opposite product.

\subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits}
\label{sec:intro_limits}

We recall that,
in the framework of quantum groups, the duality between the enveloping
algebra $U(\lalg{g})$ of a Lie algebra and the algebra of functions
$C(G)$ on the corresponding Lie group carries over to $q$-deformations.
In the case of
$\lalg{b_+}$, the
$q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{}, defined over $\k(q)$ as
\begin{gather*}
U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad
\text{with relations} \\
g g^{-1}=1 \qquad Xg=qgX \\
\cop X=X\otimes 1 + g\otimes X \qquad
\cop g=g\otimes g \\
\cou (X)=0 \qquad \cou (g)=1 \qquad
\antip X=-g^{-1}X \qquad \antip g=g^{-1}
\end{gather*}
is self-dual. Consequently, it
may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of
functions on the Lie group $B_+$ associated with $\lalg{b_+}$.
It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{}
and the function algebra $C(B_+)$.
The transition to the classical enveloping algebra is achieved by
replacing $q$
by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in
$t$, introducing a new generator $H$.
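For instance, expanding to lowest order in $t$, the relation $Xg=qgX$ reproduces the classical commutator of $\lalg{b_+}$ (an illustrative check, consistent with the relations above):
\begin{gather*}
X e^{tH} = e^{-t}\, e^{tH} X
\quad\Longrightarrow\quad
X(1+tH) = (1-t)(1+tH)X + O(t^2)
\quad\Longrightarrow\quad
HX-XH = X,
\end{gather*}
i.e.\ $[H,X]=X$ in \ensuremath{U(\lalg{b_+})}{}.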
Now, all expressions are written in\nthe form $\\sum_j a_j t^j$ and only the lowest order in $t$ is kept.\nThe transition to the classical function algebra on the other hand is\nachieved by setting $q=1$.\nThis may be depicted as follows:\n\\[\\begin{array}{c @{} c @{} c @{} c}\n& \\ensuremath{U_q(\\lalg{b_+})} \\cong \\ensuremath{C_q(B_+)} && \\\\\n& \\diagup \\hspace{\\stretch{1}} \\diagdown && \\\\\n \\begin{array}{l} q=e^{-t} \\\\ g=e^{tH} \\end{array} \\Big| _{t\\to 0} \n && q=1 &\\\\\n \\swarrow &&& \\searrow \\\\\n \\ensuremath{U(\\lalg{b_+})} & <\\cdots\\textrm{dual}\\cdots> && \\ensuremath{C(B_+)}\n\\end{array}\\]\nThe self-duality of \\ensuremath{U_q(\\lalg{b_+})}{} is expressed as a pairing\n$\\ensuremath{U_q(\\lalg{b_+})}\\times\\ensuremath{U_q(\\lalg{b_+})}\\to\\k$\nwith\nitself:\n\\[\\langle X^n g^m, X^r g^s\\rangle =\n \\delta_{n,r} [n]_q!\\, q^{-n(n-1)/2} q^{-ms}\n \\qquad\\forall n,r\\in\\mathbb{N}_0\\: m,s\\in\\mathbb{Z}\\]\nIn the classical limit this becomes the pairing $\\ensuremath{U(\\lalg{b_+})}\\times\\ensuremath{C(B_+)}\\to\\k$\n\\begin{equation}\n\\langle X^n H^m, X^r g^s\\rangle =\n \\delta_{n,r} n!\\, s^m\\qquad \\forall n,m,r\\in\\mathbb{N}_0\\: s\\in\\mathbb{Z}\n\\label{eq:pair_class}\n\\end{equation} \n\n\n\n\\subsection{Differential Calculi and Quantum Tangent Spaces}\n\nIn this section we recall some facts about differential calculi\nalong the lines of Majid's treatment in \\cite{Majid_calculi}.\n\nFollowing Woronowicz \\cite{Wor_calculi}, first order bicovariant differential\ncalculi on a quantum group $A$ (of\nfunction algebra type) are in one-to-one correspondence to submodules\n$M$ of $\\ker\\cou\\subset A$ in the category $^A_A\\cal{M}$ of (say) left\ncrossed modules of $A$ via left multiplication and left adjoint\ncoaction:\n\\[\na\\triangleright v = av \\qquad \\mathrm{Ad_L}(v)\n =v_{(1)}\\antip v_{(3)}\\otimes v_{(2)}\n\\qquad \\forall a\\in A, v\\in A\n\\]\nMore precisely, given a crossed submodule $M$, the 
corresponding
calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a =
\pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection).
The right action and coaction on $\Gamma$ are given by
the right multiplication and coproduct on $A$, the left action and
coaction by the tensor product ones with $\ker\cou/M$ as a left
crossed module. In all of what follows, ``differential calculus'' will
mean ``bicovariant first order differential calculus''.

Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$
dually paired with $A$
(which we might think of as being of enveloping algebra type), we can
express the coaction of $A$ on
itself as an action of $H^{op}$ using the pairing:
\[
h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)}
\qquad \forall h\in H^{op}, v\in A
\]
Thereby we change from the category of (left) crossed $A$-modules to
the category of left modules of the quantum double $A\!\bowtie\! H^{op}$.

In this picture the pairing between $A$ and $H$ descends to a pairing
between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and
$\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in
$A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou\subset H$
that annihilates $M$. $L$ is called a ``quantum tangent space''
and is dual to the differential calculus $\Gamma$ generated by $M$ in
the sense that $\Gamma\cong \Lin(L,A)$ via
\begin{equation}
A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad
v\otimes a \mapsto \langle \cdot, v\rangle a
\label{eq:eval}
\end{equation}
if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate.

The quantum tangent spaces are obtained directly by dualising the
(left) action of the quantum double on $A$ to a (right) action on
$H$.
Explicitly, this is the adjoint action and the coregular action\n\\[\nh \\triangleright x = h_{(1)} x \\antip h_{(2)} \\qquad\na \\triangleright x = \\langle x_{(1)}, a \\rangle x_{(2)}\\qquad\n \\forall h\\in H, a\\in A^{op},x\\in A\n\\]\nwhere we have converted the right action to a left action by going\nfrom \\mbox{$A\\!\\bowtie\\! H^{op}$}-modules to \\mbox{$H\\!\\bowtie\\! A^{op}$}-modules.\nQuantum tangent spaces are subspaces of $\\ker\\cou\\subset H$ invariant\nunder the projection of this action to $\\ker\\cou$ via \\mbox{$x\\mapsto\nx-\\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be\nconverted to a left coaction of $H$ being the comultiplication (with\nsubsequent projection onto $H\\otimes\\ker\\cou$).\n\nWe can use the evaluation map (\\ref{eq:eval})\nto define a ``braided derivation'' on elements of the quantum tangent\nspace via\n\\[\\partial_x:A\\to A\\qquad \\partial_x(a)={\\diff a}(x)=\\langle\nx,a_{(1)}\\rangle a_{(2)}\\qquad\\forall x\\in L, a\\in A\\]\nThis obeys the braided derivation rule\n\\[\\partial_x(a b)=(\\partial_x a) b\n + a_{(2)} \\partial_{a_{(1)}\\triangleright x}b\\qquad\\forall x\\in L, a\\in A\\]\n\nGiven a right invariant basis $\\{\\eta_i\\}_{i\\in I}$ of $\\Gamma$ with a\ndual basis $\\{\\phi_i\\}_{i\\in I}$ of $L$ we have\n\\[{\\diff a}=\\sum_{i\\in I} \\eta_i\\cdot \\partial_i(a)\\qquad\\forall a\\in A\\]\nwhere we denote $\\partial_i=\\partial_{\\phi_i}$. (This can be easily\nseen to hold by evaluation against $\\phi_i\\ \\forall i$.)\n\n\n\\section{Classification on \\ensuremath{C_q(B_+)}{} and \\ensuremath{U_q(\\lalg{b_+})}{}}\n\\label{sec:q}\n\nIn this section we completely classify differential calculi on \\ensuremath{C_q(B_+)}{}\nand, dually, quantum tangent spaces on \\ensuremath{U_q(\\lalg{b_+})}{}. 
We start by\nclassifying the relevant crossed modules and then proceed to a\ndetailed description of the calculi.\n\n\\begin{lem}\n\\label{lem:cqbp_class}\n(a) Left crossed \\ensuremath{C_q(B_+)}-submodules $M\\subseteq\\ensuremath{C_q(B_+)}$ by left\nmultiplication and left\nadjoint coaction are in one-to-one correspondence to\npairs $(P,I)$\nwhere $P\\in\\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\\subset\\mathbb{N}$ is\nfinite.\n$\\codim M<\\infty$ iff $P=1$. In particular $\\codim M=\\sum_{n\\in I}n$\nif $P=1$.\n\n(b) The finite codimensional maximal $M$\ncorrespond to the pairs $(1,\\{n\\})$ with $n$ the\ncodimension. The infinite codimensional maximal $M$ are characterised by\n$(P,\\emptyset)$ with $P$ irreducible and $P(g)\\neq 1-q^{-k}g$ for any\n$k\\in\\mathbb{N}_0$.\n\n(c) Crossed submodules $M$ of finite\ncodimension are intersections of maximal ones.\nIn particular $M=\\bigcap_{n\\in I} M^n$, with $M^n$ corresponding to\n$(1,\\{n\\})$.\n\\end{lem}\n\\begin{proof}\n(a) Let $M\\subseteq\\ensuremath{C_q(B_+)}$ be a crossed \\ensuremath{C_q(B_+)}-submodule by left\nmultiplication and left adjoint coaction and let\n$\\sum_n X^n P_n(g) \\in M$, where $P_n$ are polynomials in $g,g^{-1}$\n(every element of \\ensuremath{C_q(B_+)}{} can be expressed in\nthis form). From the formula for the coaction ((\\ref{eq:adl}), see appendix)\nwe observe that for all $n$ and for all $t\\le n$ the element\n\\[X^t P_n(g) \\prod_{s=1}^{n-t} (1-q^{s-n}g)\\]\nlies in $M$.\nIn particular\nthis is true for $t=n$, meaning that elements of constant degree in $X$\nlie separately in $M$. It is therefore enough to consider such\nelements.\n\nLet now $X^n P(g) \\in M$.\nBy left multiplication $X^n P(g)$ generates any element of the form\n$X^k P(g) Q(g)$, where $k\\ge n$ and $Q$ is any polynomial in\n$g,g^{-1}$. 
(Note that $Q(q^kg) X^k=X^k Q(g)$.)\nWe see that $M$ contains the following elements:\n\\[\\begin{array}{ll}\n\\vdots & \\\\\nX^{n+2} & P(g) \\\\\nX^{n+1} & P(g) \\\\\nX^n & P(g) \\\\\nX^{n-1} & P(g) (1-q^{1-n}g) \\\\\nX^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\\\\n\\vdots & \\\\\nX & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\ldots (1-q^{-1}g) \\\\\n& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\ldots (1-q^{-1}g)(1-g) \n\\end{array}\n\\]\nMoreover, if $M$ is generated by $X^n P(g)$ as a module\nthen these elements generate a basis for $M$ as a vector\nspace by left\nmultiplication with polynomials in $g,g^{-1}$. (Observe that the\napplication of the coaction to any of the elements shown does not\ngenerate elements of new type.)\n\nNow, let $M$ be a given crossed submodule. We pick, among the\nelements in $M$ of the form $X^n P(g)$ with $P$ of minimal\nlength,\none\nwith lowest degree in $X$. Then certainly the elements listed above are\nin $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must\ncontain $P$ as a factor and for $k0 \\}$ in the crossed submodule or not. In\nparticular, the crossed submodule characterised by \\{1\\} in lemma\n\\ref{lem:uqbp_class} is projected out.\n\\end{proof}\n\nDifferential calculi in the original sense of Woronowicz are\nclassified by corollary \\ref{cor:cqbp_eclass} while from the quantum\ntangent space\npoint of view the\nclassification is given by corollary \\ref{cor:uqbp_eclass}.\nIn the finite dimensional case the duality is strict in the sense of a\none-to-one correspondence.\nThe infinite dimensional case on the other hand depends strongly on\nthe algebraic models we use for the function or enveloping\nalgebras. It is therefore not surprising that in the present purely\nalgebraic context the classifications are quite different in this\ncase. 
We will restrict ourselves to the finite dimensional
case in the following description of the differential calculi.


\begin{thm}
\label{thm:q_calc}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and
corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are
in one-to-one correspondence to
finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular
$\dim\Gamma=\dim L=\sum_{n\in I}n$.

(b) Coirreducible $\Gamma$ and irreducible $L$ correspond to
$\{n\}$ with $n\ge 2$ the dimension.
Such a $\Gamma$ has a
right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations
\begin{gather*}
\diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad
 \diff g=(q^{n-1}-1)\eta_0 g\\
[a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\
[g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad
[X,\eta_i]_{q^{n-1-i}}=\begin{cases}
 \eta_{i+1} & \text{if}\ i<n-1\\
 0 & \text{if}\ i=n-1
\end{cases}
\end{gather*}
hold.
\end{thm}

For the transition from the $q$-deformed (lemma
\ref{lem:uqbp_class}) to the classical case we
observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$
different integers $s_i\in\mathbb{Z}$ maps to the space spanned by
$1, H, \dots, H^{m-1}$ in the
prescription of the classical limit (as described in section
\ref{sec:intro_limits}). I.e.\ the classical crossed submodule
characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes
from a crossed submodule characterised by this same $I$ and additionally $l$
other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In
particular, we have a one-to-one correspondence in the finite
dimensional case.

To formulate the analogue of corollary \ref{cor:uqbp_eclass} for the
classical case is essentially straightforward now. However, as for
\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed
setting.
This is due to the degeneracy introduced by forgetting the
powers of $g$ and just retaining the number of different powers.

\begin{cor}
\label{cor:ubp_eclass}
(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules
$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the
left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$
or $I\neq\emptyset$.
$\dim L<\infty$ iff $l=0$. In particular $\dim
L=(\sum_{n\in I}n)-1$ if $l=0$.
\end{cor}


As in the $q$-deformed setting, we give a description of the finite
dimensional differential calculi where we have a strict duality to
quantum tangent spaces.

\begin{prop}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and
finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are
in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.
In particular $\dim\Gamma=\dim L=(\sum_{n\in I} n)-1$.

The $\Gamma$ with $1\notin I$ are in
one-to-one correspondence to the finite dimensional
calculi and quantum tangent spaces of the $q$-deformed setting
(theorem \ref{thm:q_calc}(a)).

(b) The differential calculus $\Gamma$ of dimension $n\ge 2$
corresponding to the
coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right
invariant
basis $\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad
 \diff g=\eta_0 g\\
[g, \eta_i]=0\ \forall i \qquad
[X, \eta_i]=\begin{cases}
 0 & \text{if}\ i=0\ \text{or}\ i=n-1\\
 \eta_{i+1} & \text{if}\ 0<i<n-1
\end{cases}
\end{gather*}
\end{prop}

\begin{table*}
\centering
\begin{tabular}{cc}
\begin{tabular}{lc}
\toprule
\multicolumn{2}{l}{interestingness measures} \\\midrule

Kulczynski2 \cite{Kulczynski1927,Naish2011} & %

\( \frac{ 1 }{ 2 } \times ( \frac{ \Aef }{ \Aef + \Anf } + \frac{ \Aef }{ \Aef + \Aep } ) \)

\\

Failed only & %

\( \left\{\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
1 & \text{if~} \Ncs = 0 \\
0 & \text{otherwise~} \\
\end{array}\)}
\right. \)

\\
\bottomrule

\end{tabular} &
\begin{tabular}{lp{2.99cm}}
\toprule
\multicolumn{2}{l}{notation used} \\\midrule
\Ncf & number of \emph{failing} logs \\ & that \emph{include} the event \\
\Nuf & number of \emph{failing} logs \\ & that \emph{exclude} the event \\
\Ncs & number of \emph{passing} logs \\ & that \emph{include} the event \\
\Nus & number of \emph{passing} logs \\ & that \emph{exclude} the event \\
\bottomrule
\end{tabular}
\end{tabular}\vspace*{1ex}
\caption{\label{table:measures}The 10 interestingness measures under consideration in this paper.}
\vspace*{-3ex}
\end{table*}

\head{Analyzing a target log file} Using our database of event scores,
we first identify the events occurring in the target log file and the
interestingness scores associated with these events. Then, we group
similarly scored events together using a clustering algorithm. Finally,
we present the best performing cluster of events to the end user. The
clustering step helps us make a meaningful selection of events rather
than setting an often arbitrary window selection size. Among other
things, it prevents two identically scored events from falling at
opposite sides of the selection threshold. If the user suspects that
the best performing cluster did not report all relevant events, she can
inspect additional event clusters in order of decreasing
aggregate interestingness score.
To perform the clustering step we use Hierarchical Agglomerative\nClustering (HAC) with Complete linkage~\\cite{manning2008introduction}, where\nsub-clusters are merged until the maximal distance between members of\neach candidate cluster exceeds some specified threshold. In SBLD,\nthis threshold is the uncorrected sample standard deviation of the event\nscores for the events being clustered.\\footnote{~Specifically, \nwe use the \\texttt{numpy.std} procedure from the SciPy framework~\\cite{2020SciPy-NMeth},\nin which the uncorrected sample standard deviation is given by\n$ \\sqrt{\\frac{1}{N} \\sum_{i=1}^{N}\\lvert x_{i} - \\bar{x} \\rvert^2} $ where\n$\\bar{x}$ is the sample mean of the interestingness scores obtained for the\nevents in the log being analyzed and $N$ is the number of events in the log.} \nThis ensures that the ``interestingness-distance'' between two events \nin a cluster never exceeds the uncorrected sample standard deviation observed in the set.\n\n %\n\n\\section{Research Questions}\n\\label{sec:rqs}\n\nThe goal of this paper is to present SBLD and help practitioners make\nan informed decision whether SBLD meets their needs. 
To this end, we have identified
three research questions that encompass several concerns practitioners
are likely to have and that are also of interest to the research community at
large:
\begin{enumerate}[\bfseries RQ1]

\item How well does SBLD reduce the effort needed to identify all
 known-to-be relevant events (``does it work?'')?

\item How is the efficacy of SBLD impacted by increased evidence in the form of
 additional failing and passing logs (``how much data do we need before
 running the analysis?'')?

\item How does SBLD perform compared to a strategy based on searching for
 common textual patterns with a tool like \texttt{grep} (``is it better than doing the obvious thing?'')?
\end{enumerate}
RQ1 looks at the aggregated performance of SBLD to assess its viability.
With RQ2 we assess how sensitive the performance is to the amount of
available data: How many logs should you have before you can expect the
analysis to yield good results? Is more data unequivocally a good thing?
What type of log is more informative: a passing log or a failing log?
Finally, we compare SBLD's performance to a more traditional method for
finding relevant segments in logs: using a textual search for strings
one expects to occur near informative segments, like
``failure'' and ``error''. The next section details the dataset used, our
chosen quality measures for assessment and our methodology for answering
each research question.

 %

\section{Experimental Design}
\label{sec:expdesign}

\begin{table}
\centering
\caption{The key per-test attributes of our dataset. Two events are considered
 distinct if they are treated as separate events after the abstraction
 step.
A \"mixed\" event is an event that occurs in logs of both failing and\n passing runs.}\n\\vspace*{-1ex}\n\\label{table:descriptive}\n\\renewcommand{\\tabcolsep}{0.11cm}\\small\n\\begin{tabular}{rcrrrrrr}\n\\toprule\n & & \\# fail & \\# pass & distinct & fail-only & mixed & pass-only \\\\\ntest & signature & logs & logs & events & events & events & events \\\\\n\\midrule\n 1 & C & 24 & 100 & 36391 & 21870 & 207 & 14314 \\\\\n 2 & E & 11 & 25 & 380 & 79 & 100 & 201 \\\\\n 3 & E & 11 & 25 & 679 & 174 & 43 & 462 \\\\\n 4 & E & 4 & 25 & 227 & 49 & 39 & 139 \\\\\n 5 & C & 2 & 100 & 33420 & 2034 & 82 & 31304 \\\\\n 6 & C & 19 & 100 & 49155 & 15684 & 893 & 32578 \\\\\n 7 & C & 21 & 100 & 37316 & 17881 & 154 & 19281 \\\\\n 8 & C & 4 & 100 & 26614 & 3976 & 67 & 22571 \\\\\n 9 & C & 21 & 100 & 36828 & 19240 & 228 & 17360 \\\\\n 10 & C & 22 & 100 & 110479 & 19134 & 1135 & 90210 \\\\\n 11 & E & 5 & 25 & 586 & 95 & 47 & 444 \\\\\n 12 & E & 7 & 25 & 532 & 66 & 18 & 448 \\\\\n 13 & C & 2 & 100 & 15351 & 2048 & 232 & 13071 \\\\\n 14 & C & 3 & 100 & 16318 & 2991 & 237 & 13090 \\\\\n 15 & C & 26 & 100 & 60362 & 20964 & 1395 & 38003 \\\\\n 16 & C & 12 & 100 & 2206 & 159 & 112 & 1935 \\\\\n 17 & E & 8 & 25 & 271 & 58 & 98 & 115 \\\\\n 18 & A & 23 & 75 & 3209 & 570 & 156 & 2483 \\\\\n 19 & C & 13 & 100 & 36268 & 13544 & 411 & 22313 \\\\\n 20 & B & 3 & 19 & 688 & 69 & 31 & 588 \\\\\n 21 & B & 22 & 25 & 540 & 187 & 94 & 259 \\\\\n 22 & E & 1 & 25 & 276 & 11 & 13 & 252 \\\\\n 23 & C & 13 & 100 & 28395 & 13629 & 114 & 14652 \\\\\n 24 & E & 7 & 26 & 655 & 117 & 56 & 482 \\\\\n 25 & C & 21 & 100 & 44693 & 18461 & 543 & 25689 \\\\\n 26 & C & 21 & 100 & 42259 & 19434 & 408 & 22417 \\\\\n 27 & C & 21 & 100 & 44229 & 18115 & 396 & 25718 \\\\\n 28 & C & 20 & 100 & 43862 & 16922 & 642 & 26298 \\\\\n 29 & C & 28 & 100 & 54003 & 24216 & 1226 & 28561 \\\\\n 30 & C & 31 & 100 & 53482 & 26997 & 1063 & 25422 \\\\\n 31 & C & 27 & 100 & 53092 & 23283 & 463 & 29346 \\\\\n 32 & C & 21 & 100 & 55195 & 19817 & 
768 & 34610 \\\\\n 33 & E & 9 & 25 & 291 & 70 & 30 & 191 \\\\\n 34 & D & 2 & 13 & 697 & 76 & 92 & 529 \\\\\n 35 & E & 9 & 25 & 479 & 141 & 47 & 291 \\\\\n 36 & E & 10 & 75 & 1026 & 137 & 68 & 821 \\\\\n 37 & E & 7 & 25 & 7165 & 1804 & 94 & 5267 \\\\\n 38 & E & 4 & 25 & 647 & 67 & 49 & 531 \\\\\n 39 & G & 47 & 333 & 3350 & 428 & 144 & 2778 \\\\\n 40 & G & 26 & 333 & 3599 & 240 & 157 & 3202 \\\\\n 41 & G & 26 & 332 & 4918 & 239 & 145 & 4534 \\\\\n 42 & C & 17 & 100 & 30411 & 14844 & 348 & 15219 \\\\\n 43 & F & 267 & 477 & 10002 & 3204 & 1519 & 5279 \\\\\n 44 & C & 9 & 100 & 29906 & 8260 & 274 & 21372 \\\\\n 45 & E & 3 & 25 & 380 & 44 & 43 & 293 \\\\\n\\bottomrule\n\\end{tabular}\n\\vspace*{-2ex}\n\\end{table}\n %\n\n\\begin{table}\n\\centering\n\\caption{Ground-truth signatures and their occurrences in distinct events.}\n\\label{table:signature}\n\\vspace*{-1ex}\n\\small\n\\begin{tabular}{ccrrrc}\n\\toprule\n & sub- & fail-only & pass-only & fail \\& & failure \\\\\nsignature & pattern & events & events & pass & strings* \\\\\n\\midrule\n A & 1 & 1 & 0 & 0 & yes \\\\\n A & 2 & 2 & 0 & 0 & no \\\\\n B & 1 & 2 & 0 & 0 & yes \\\\\n C & 1 & 21 & 0 & 0 & yes \\\\\n C & 2 & 21 & 0 & 0 & yes \\\\\n D & 1 & 4 & 0 & 0 & yes \\\\\n \\textbf{D$^{\\#}$} & \\textbf{2} & 69 & 267 & 115 & no \\\\\n \\textbf{D$^{\\#}$} & \\textbf{3} & 2 & 10 & 13 & no \\\\\n \\textbf{E$^{\\#}$} & \\textbf{1} & 24 & 239 & 171 & no \\\\\n E & 1 & 1 & 0 & 0 & no \\\\\n E & 2 & 9 & 0 & 0 & no \\\\\n E & 3 & 9 & 0 & 0 & yes \\\\\n E & 4 & 23 & 0 & 0 & yes \\\\\n F & 1 & 19 & 0 & 0 & yes \\\\\n F & 2 & 19 & 0 & 0 & no \\\\\n F & 3 & 19 & 0 & 0 & yes \\\\\n F & 4 & 14 & 0 & 0 & yes \\\\\n G & 1 & 2 & 0 & 0 & yes \\\\\n G & 2 & 1 & 0 & 0 & no \\\\\n G & 3 & 1 & 0 & 0 & no \\\\\n\\bottomrule\n\\multicolumn{6}{l}{* signature contains the lexical patterns 'error', 'fault' or 'fail*'}\\\\\n\\multicolumn{6}{l}{$^{\\#}$ sub-patterns that were removed to ensure a clean ground 
truth}
\end{tabular}
\vspace*{-3ex}
\end{table}

\subsection{Dataset and ground truth}
\label{sec:dataset}

Our dataset, provided by \CiscoNorway{our industrial partner}, consists
of failing and passing log files from 45 different end-to-end integration
tests. In addition to the log text we also have data on when a given
log file was produced. Most test-sets span a time-period of 38 days, while
the largest set (test 43 in Table~\ref{table:descriptive}) spans 112
days. Each failing log is known to exemplify symptoms of one of seven
known errors, and \CiscoNorway{our industrial partner} has given us a
set of regular expressions that help determine which events are relevant
for a given known error. We refer to the set of regular expressions
that identify a known error as a \emph{signature} for that error. These
signatures help us construct a ground truth for our investigation.
Moreover, an important motivation for developing SBLD is to help create
signatures for novel problems: the events highlighted by SBLD should be
characteristic of the observed failure, and the textual contents of the
events can be used in new signature expressions.

Descriptive facts about our dataset are listed in
Table~\ref{table:descriptive} while Table~\ref{table:signature}
summarizes key insights about the signatures used.

Ideally, our ground truth should highlight exactly and \emph{only} the
log events that an end user would find relevant for troubleshooting
an error. However, the signatures used in this investigation were
designed to find sufficient evidence that the \emph{entire log} in
question belongs to a certain error class: the log might contain other
events that a human user would find equally relevant for diagnosing
a problem, but the signature in question might not encompass these
events.
Nevertheless, the events that constitute sufficient evidence
for assigning the log to a given error class are presumably relevant
and should be presented as soon as possible to the end user. However,
if our method cannot differentiate between these signature events and
other events we cannot say anything certain about the relevance of
those other events. This fact is reflected in our choice of quality
measures, specifically in how we assess the precision of the approach. This
is explained in detail in the next section.

When producing the ground truth, we first ensured that a log would only be
associated with a signature if the entire log taken as a whole satisfied all
the sub-patterns of that signature. If so, we then determined which events
the patterns were matching on. These events constitute the known-to-be relevant
set of events for a given log. However, we identified some problems with two of the provided
signatures that made them unsuitable for assessing SBLD. Signature \emph{E}
(see Table~\ref{table:signature}) had a sub-pattern that searched for a ``starting test''-prefix that necessarily
matches on the first event in all logs due to the structure of the logs.
Similarly, signature \emph{D} contained two sub-patterns that necessarily
match all logs in the set--in this case by searching for whether the test
was run on a given machine, which was true for all logs for the corresponding
test. We therefore elected to remove these sub-patterns from the signatures
before conducting the analysis.

\subsection{Quality Measures}

As a measure of how well SBLD reports all known-to-be relevant log
events, we measure \emph{recall in best cluster}, which we for brevity refer to
as simply \emph{recall}.
This is an adaptation of the classic recall measure used in information retrieval,
which tracks the proportion of all relevant events that were retrieved
by the system~\cite{manning2008introduction}.
\nAs our method presents events to the user in a series of ranked clusters, \nwe ideally want all known-to-be relevant events to appear in the highest ranked cluster. \nWe therefore track the overall recall obtained as if the first cluster were the only events retrieved.\nNote, however, that SBLD ranks all clusters, and a user can retrieve additional clusters if desired. \nWe explore whether this could improve SBLD's performance on a\nspecific problematic test-set in Section~\\ref{sec:testfourtythree}.\n\nIt is trivial to obtain a perfect recall by simply retrieving all events\nin the log, but such a method would obviously be of little help to a user\nwho wants to reduce the effort needed to diagnose failures.\nWe therefore also track the \\emph{effort reduction} (ER), defined as\n\\[ \\text{ER} = 1 - \\frac{\\text{number of events in first cluster}}{\\text{number of events in log}} \\]\n\nMuch like effective information retrieval systems aim for high recall and\nprecision, we want our method to score a perfect recall while obtaining the\nhighest effort reduction possible. \n\n\\subsection{Recording the impact of added data}\n\nTo study the impact of added data on SBLD's performance, we need to measure how\nSBLD's performance on a target log $t$ is affected by adding an extra\nfailing log $f$ or a passing log $p$. There are several strategies\nfor accomplishing this. One way is to try all combinations in the\ndataset i.e.\\ compute the performance on any $t$ using any choice of\nfailing and passing logs to produce the interestingness scores. This\napproach does not account for the fact that the logs in the data are\nproduced at different points in time and is also extremely expensive\ncomputationally. 
We opted instead to order the logs chronologically and
simulate a step-wise increase in data as time progresses, as shown in
Algorithm~\ref{alg:time}.

\begin{algorithm}[b]
\caption{Pseudo-code illustrating how we simulate a step-wise increase in data
 as time progresses and account for variability in choice of
 interestingness measure.}
\label{alg:time}
\begin{algorithmic}\small
\STATE $F$ is the set of failing logs for a given test
\STATE $P$ is the set of passing logs for a given test
\STATE $M$ is the set of interestingness measures considered
\STATE sort $F$ chronologically
\STATE sort $P$ chronologically
\FOR{$i=0$ to $i=\lvert F \rvert$}
 \FOR{$j=0$ to $j=\lvert P \rvert$}
 \STATE $f = F[:i]$ \COMMENT{get all elements in F up to and including position i}
 \STATE $p = P[:j]$
 \FORALL{$l$ in $f$}
 \STATE initialize $er\_scores$ as an empty list
 \STATE initialize $recall\_scores$ as an empty list
 \FORALL{$m$ in $M$}
 \STATE perform SBLD on $l$ using $m$ as measure \\ \hspace*{1.75cm} and $f$ and $p$ as spectrum data
 \STATE append recorded effort reduction score to $er\_scores$
 \STATE append recorded recall score to $recall\_scores$
 \ENDFOR
 \STATE record median of $er\_scores$
 \STATE record median of $recall\_scores$
 \ENDFOR
 \ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}

\subsection{Variability in interestingness measures}
\label{sec:imvars}

As mentioned in Section~\ref{sec:approach}, SBLD requires a
choice of interestingness measure for scoring the events,
which can have a considerable impact on SBLD's performance.
Considering that the best choice of interestingness measure is context-dependent,
there is no global optimum;
it is up to the user to decide which interestingness metric best reflects their
notion of event relevance.

Consequently, we want to empirically study SBLD in a way
that captures the variability introduced by this decision.
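The stepwise scheme of Algorithm~\ref{alg:time} can be condensed into Python as follows. Here \texttt{run\_sbld} and the log dictionaries are hypothetical stand-ins for the real analysis; unlike the pseudo-code, the outer loop starts at one failing log, since at least one failing log is needed as an analysis target.

```python
import statistics

def stepwise_medians(fail_logs, pass_logs, measures, run_sbld):
    """Grow the evidence chronologically and, for each (i, j) prefix of
    failing/passing logs, record per-log median scores over all
    interestingness measures.  `run_sbld(log, fails, passes, measure)`
    must return an (effort_reduction, recall) pair."""
    fail_logs = sorted(fail_logs, key=lambda log: log["time"])
    pass_logs = sorted(pass_logs, key=lambda log: log["time"])
    results = {}
    for i in range(1, len(fail_logs) + 1):   # at least one failing log
        for j in range(len(pass_logs) + 1):
            fails, passes = fail_logs[:i], pass_logs[:j]
            for log in fails:
                pairs = [run_sbld(log, fails, passes, m) for m in measures]
                ers, recalls = zip(*pairs)
                results[(i, j, log["id"])] = (statistics.median(ers),
                                              statistics.median(recalls))
    return results
```

Each `results` entry thus holds the median effort-reduction and recall scores for one failing log analyzed with a given amount of evidence.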
\nTo this end, we record the median score obtained by performing SBLD for every possible choice of\ninterestingness measure from those listed in Table~\\ref{table:measures}.\nAlgorithm~\\ref{alg:time} demonstrates the procedure in pseudo-code.\n\n\\subsection{Comparing alternatives}\n\\label{sec:comps}\n\nTo answer RQ2 and RQ3, we use pairwise comparisons of\ndifferent configurations of SBLD with a method that searches for regular expressions. \nThe alternatives are compared\non each individual failing log in the set in a paired fashion. An\nimportant consequence of this is that the statistical comparisons have\nno concept of which test the failing log belongs to, and thus the test\nfor which there is most data has the highest impact on the result of the\ncomparison.\n\nThe pairwise comparisons are conducted using paired Wilcoxon signed-rank\ntests~\\cite{wilcoxon1945} where the Pratt correction~\\cite{Pratt1959}\nis used to handle ties. We apply Holm's correction~\\cite{Holm1979}\nto the obtained p-values to account for the family-wise error\nrate arising from multiple comparisons. We declare a comparison\n\\emph{statistically significant} if the Holm-adjusted p-value is below\n$\\alpha=0.05$. The Wilcoxon tests check the two-sided null hypothesis of\nno difference between the alternatives. We report the Vargha-Delaney $A_{12}$ and\n$A_{21}$~\\cite{Vargha2000} measures of stochastic superiority to\nindicate which alternative is the strongest. Conventionally, $A_{12}=0.56$ is\nconsidered a small difference, $A_{12}=.64$ is considered a medium difference\nand $A_{12}=.71$ or greater is considered large~\\cite{Vargha2000}. Observe\nalso that $A_{21} = 1 - A_{12}$.\n\n\\begin{figure*}\n \\includegraphics[width=0.8\\textwidth]{rq1_boxplot.png}\n %\n \\caption{The overall performance of SBLD in terms of effort reduction\n and recall. 
On many tests, SBLD exhibited perfect recall for
 all observations in the inter-quartile range and thus the box collapses to a single line on the $1.0$ mark.\label{fig:rq1boxplot}}
\end{figure*}

\subsection{Analysis procedures}

We implement the SBLD approach in a prototype tool
DAIM (Diagnosis and Analysis using Interestingness Measures),
and use DAIM to empirically evaluate the idea.

\head{RQ1 - overall performance} We investigate the overall performance
of SBLD by analyzing a boxplot for each test in our dataset. Every individual
datum that forms the basis of the plot is the median performance of SBLD over
all choices of interestingness measures for a given set of failing and passing
logs subject to the chronological ordering scheme outlined above.

\head{RQ2 - impact of data} We analyze the impact of added data by
producing and evaluating heatmaps that show the obtained performance
as a function of the number of failing logs (y-axis) and number of
passing logs (x-axis). The color intensity of each tile in the heatmaps
is calculated by taking the median of the scores obtained for each
failing log analyzed with the given number of failing and passing logs
as data for the spectrum inference, wherein the score for each log is
the median over all the interestingness measures considered as outlined in
Section~\ref{sec:imvars}.

Furthermore, we compare three variant configurations
of SBLD that give an overall impression of the influence of added
data. The three configurations considered are \emph{minimal evidence},
\emph{median evidence} and \emph{maximal evidence}, where minimal
evidence uses only events from the log being analyzed and one additional
passing log, median evidence uses the median number of failing and
passing logs available, respectively, while maximal evidence uses
all available data for a given test.
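As an illustration of the statistical scheme in Section~\ref{sec:comps}, the Vargha-Delaney $A_{12}$ measure and Holm's p-value adjustment are straightforward to compute. The helpers below are a minimal sketch, not our analysis code; the paired Wilcoxon test itself is available in standard statistics packages (e.g. SciPy's \texttt{wilcoxon} with the Pratt zero-handling method).

```python
def vargha_delaney_a12(x, y):
    """Vargha-Delaney A12: probability that a random draw from x
    exceeds a random draw from y, counting ties as one half.
    Note that A21 = 1 - A12."""
    greater = sum(1 for a in x for b in y if a > b)
    ties = sum(1 for a in x for b in y if a == b)
    return (greater + 0.5 * ties) / (len(x) * len(y))

def holm_adjust(pvalues):
    """Holm's step-down adjustment for multiple comparisons."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda k: pvalues[k])
    adjusted = [0.0] * m
    running = 0.0
    for rank, k in enumerate(order):
        # each sorted p-value is scaled by the remaining hypothesis count,
        # keeping the adjusted values monotone and capped at 1
        running = max(running, (m - rank) * pvalues[k])
        adjusted[k] = min(1.0, running)
    return adjusted
```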
The comparisons are conducted with the
statistical scheme described above in Section~\ref{sec:comps}.

\head{RQ3 - SBLD versus pattern-based search} To compare SBLD
against a pattern-based search, we record the effort reduction and
recall obtained when only selecting events in the log that match on the
case-insensitive regular expression \texttt{"error|fault|fail*"}, where
\texttt{*} matches zero or more repetitions of the preceding character and
\texttt{|} denotes alternation (logical OR). This simulates the results that a user would obtain by using
a tool like \texttt{grep} to search for words like `error' and `failure'.
Sometimes the ground-truth signature expressions contain words from this
pattern, and we indicate this in Table~\ref{table:signature}. If so, the
regular expression-based method is guaranteed to retrieve the event.
Similarly to RQ2, we compare the three configurations of SBLD described
above (minimal, median and maximal evidence) against the pattern-based
search using the statistical scheme described in Section~\ref{sec:comps}.

 %

\section{Results and Discussion}
\label{sec:resdiscuss}

This section gradually dissects Figure~\ref{fig:rq1boxplot}, which breaks down SBLD's performance per test for both recall
and effort reduction; Figures \ref{fig:erheat} and \ref{fig:recallheat},
which show SBLD's performance as a function of the number of failing and passing
logs used; and Table~\ref{table:comparisons}, which shows the results
of the statistical comparisons we have performed.

\begin{figure*}
\includegraphics[width=\textwidth]{er_heatmap.pdf}
 \caption{Effort reduction score obtained when SBLD is run on a given number of failing and passing logs. The tests not listed in this figure all obtained a lowest median effort reduction score of 90\% or greater and are thus not shown for space considerations.
\\label{fig:erheat}}\n\\vspace*{-2ex}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{Statistical comparisons performed in this investigation. The\nbold p-values are those for which no statistically significant difference under $\\alpha=0.05$\n could be established.}\n\\label{table:comparisons}\n{\\small%\n\\begin{tabular}{lllrrrr}\n\\toprule\n variant 1 & variant 2 & quality measure & Wilcoxon statistic & $A_{12}$ & $A_{21}$ & Holm-adjusted p-value\\\\\n\\midrule\n pattern-based search & minimal evidence & effort reduction & 29568.5 & 0.777 & 0.223 & $\\ll$ 0.001 \\\\\n pattern-based search & maximal evidence & effort reduction & 202413.0 & 0.506 & 0.494 & \\textbf{1.000} \\\\\n pattern-based search & median evidence & effort reduction & 170870.5 & 0.496 & 0.504 & $\\ll$ 0.001 \\\\\n minimal evidence & maximal evidence & effort reduction & 832.0 & 0.145 & 0.855 & $\\ll$ 0.001 \\\\\n minimal evidence & median evidence & effort reduction & 2666.0 & 0.125 & 0.875 & $\\ll$ 0.001 \\\\\n maximal evidence & median evidence & effort reduction & 164674.0 & 0.521 & 0.479 & \\textbf{1.000} \\\\\n pattern-based search & minimal evidence & recall & 57707.0 & 0.610 & 0.390 & $\\ll$ 0.001 \\\\\n pattern-based search & maximal evidence & recall & 67296.0 & 0.599 & 0.401 & $\\ll$ 0.001 \\\\\n pattern-based search & median evidence & recall & 58663.5 & 0.609 & 0.391 & $\\ll$ 0.001 \\\\\n minimal evidence & maximal evidence & recall & 867.5 & 0.481 & 0.519 & $\\ll$ 0.001 \\\\\n minimal evidence & median evidence & recall & 909.0 & 0.498 & 0.502 & 0.020 \\\\\n maximal evidence & median evidence & recall & 0.0 & 0.518 & 0.482 & $\\ll$ 0.001 \\\\\n\\bottomrule\n\\end{tabular}\n %\n}\n\\end{table*}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{recall_heatmap.pdf}\n \\caption{Recall score obtained when SBLD is run on a given number of failing and passing logs. 
For space
 considerations, we only show tests for which the minimum observed
 median recall was smaller than 1 (SBLD attained perfect median recall for all configurations in the other tests). \label{fig:recallheat}}
\vspace*{-3ex}
\end{figure}

\subsection{RQ1: The overall performance of SBLD}

Figure~\ref{fig:rq1boxplot} suggests that SBLD's overall performance is strong,
since it obtains near-perfect recall while retaining a high degree of effort
reduction. In terms of recall, SBLD obtains a perfect performance on all except
four tests: 18, 34, 42 and 43, with the lower quartile stationed at perfect recall for all tests
except 43 (which we discuss in detail in Section~\ref{sec:testfourtythree}).
For test 18, only 75 out of 20700 observations ($0.36\%$) obtained a recall score
of $0.5$ while the rest obtained a perfect score. On test 34 (the smallest in our
dataset), 4 out of 39 observations obtained a score of zero recall while the
others obtained perfect recall.
For test 42, 700 out of 15300 observations ($4.6\%$) obtained a score of zero recall while the rest obtained perfect recall.
Hence, with the exception of test 43, which is discussed later,
SBLD obtains very strong recall scores overall with only a few outliers.

The performance is also strong in terms of effort reduction, albeit
more varied. To a certain extent this is expected, since the attainable
effort reduction on any log will vary with the length of the log and
the number of ground-truth relevant events in the log. As can be seen
in Figure~\ref{fig:rq1boxplot}, most of the observations fall well
over the 75\% mark, the exceptions being tests 4 and 22. For test
4, Figure~\ref{fig:erheat} suggests that one or more of the latest
passing logs helped SBLD refine the interestingness scores. A similar
but less pronounced effect seems to have happened for test 22.
However,
as reported in Table~\ref{table:descriptive}, test 22 consists of only
\emph{one} failing log. Manual inspection reveals that the log consists
of 30 events, of which 11 are fail-only events. Without additional
failing logs, most interestingness measures will give a high score to
all events that are unique to that singular failing log, which is likely
to include many events that are not ground-truth relevant. Reporting 11
out of 30 events to the user yields a meager effort reduction of around
63\%. Nevertheless, the general trend is that SBLD retrieves a compact
set of events for the user, which yields a high effort reduction score.

In summary, the overall performance shows that SBLD
retrieves the majority of all known-to-be-relevant events
in compact clusters, which dramatically reduces the analysis burden for the
end user. The major exception is Test 43, which we return to in
Section~\ref{sec:testfourtythree}.

\subsection{RQ2: On the impact of evidence}

The heatmaps suggest that the effort reduction is generally not
adversely affected by adding more \emph{passing logs}. If the
assumptions underlying our interestingness measures are correct,
this is to be expected: Each additional passing log either gives us
reason to devalue certain events that co-occur in failing and passing
logs or contains passing-only events that are deemed uninteresting.
Most interestingness measures highly value events that
exclusively occur in failing logs, and additional passing logs help
reduce the number of events that satisfy this criterion. However, since
our method bases itself on clustering similarly scored events, it is
vulnerable to \emph{ties} in interestingness scores. It is possible that
an additional passing log introduces ties where there previously were
none. This is likely to have an exaggerated effect in situations with
little data, where each additional log can have a dramatic impact on the
interestingness scores.
This might explain the gradual dip in effort
reduction seen in Test 34, for which there are only two failing logs.

Adding more failing logs, on the other hand, draws a more nuanced
picture: When the number of failing logs (y-axis) is high relative
to the number of passing logs (x-axis), effort reduction seems to suffer.
Again, while most interestingness measures will prioritize events that
only occur in failing logs, this strategy only works if there is a
sufficient corpus of passing logs to weed out false positives. When
there are far fewer passing than failing logs, many events will be
unique to the failing logs even though they merely reflect a different
valid execution path that the test can take. This is especially true for
complex integration tests like the ones in our dataset, which might test
a system's ability to recover from an error, or in other ways have many
valid execution paths.

The statistical comparisons summarized in Table~\ref{table:comparisons}
suggest that the minimal evidence strategy performs poorly compared to the
median and maximal evidence strategies. This is especially
pronounced for effort reduction, where the Vargha-Delaney
metric exceeds $0.85$ in favor of the maximal and median
strategies. For recall, the difference between the minimal strategy and
the other variants is small, albeit statistically significant. Furthermore,
the jump from minimal evidence to median evidence is much more
pronounced than the jump from median evidence to maximal evidence.
For effort reduction, there is in fact no statistically discernible
difference between the median and maximal strategies. For recall, the maximal
strategy seems slightly better, but the $A_{12}$ measure suggests that the
magnitude of the difference is small.

Overall, SBLD seems to benefit from extra data, especially additional passing
logs. Failing logs also help, but depend on a proportional amount of passing
logs for SBLD to fully benefit.
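To make this concrete, consider the \emph{Tarantula} measure, one of the interestingness measures SBLD supports. The helper below is an illustrative sketch (not our implementation) of the standard Tarantula score, which contrasts how often an event occurs in failing versus passing logs:

```python
def tarantula(ef, ep, total_fail, total_pass):
    """Tarantula interestingness for one event.

    ef / ep: number of failing / passing logs containing the event;
    total_fail / total_pass: total number of failing / passing logs.
    An event seen only in failing logs scores the maximum of 1.0.
    """
    fail_rate = ef / total_fail
    pass_rate = ep / total_pass
    if fail_rate + pass_rate == 0:
        return 0.0
    return fail_rate / (fail_rate + pass_rate)

# A benign alternate-path event seen in half of 10 failing logs: with a
# single passing log that happens not to contain it, the event looks
# maximally interesting ...
print(tarantula(ef=5, ep=0, total_fail=10, total_pass=1))    # 1.0
# ... but with 10 passing logs, half of which contain it, the score
# drops to chance level.
print(tarantula(ef=5, ep=5, total_fail=10, total_pass=10))   # 0.5
```

With too few passing logs, fail-exclusive false positives are scored as maximally interesting; only a sufficiently large passing corpus pulls their scores back down.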
The performance increase when going from minimal data to some data is more pronounced than when going from some data to
maximal data. This suggests that there may be diminishing returns to
collecting extra logs, but our investigation can neither prove nor disprove this.

\subsection{RQ3: SBLD versus simple pattern-search}

In terms of effort reduction, Table~\ref{table:comparisons} shows that
the pattern-based search clearly beats the minimal evidence variant of
SBLD. It does not, however, beat the median and maximal variants: The
comparison to median evidence suggests a statistically significant win
in favor of median evidence, but the effect reported by $A_{12}$ is
so small that it is unlikely to matter in practice. No statistically
significant difference could be established between the pattern-based
search and SBLD with maximal evidence.

In one sense, it is to be expected that the pattern-based search does
well on effort reduction, assuming that events containing words like
``fault'' and ``error'' are rare. The fact that the pattern-based search
works so well could indicate that \CiscoNorway{our industrial partner}
has a well-designed logging infrastructure where such words are
rare and occur at relevant positions in the logs. On the other
hand, it is then notable that the median and maximal variants of SBLD perform
comparably on effort reduction without having any concept of the textual
content in the events.

In terms of recall, however, pattern-based search beats all variants of
SBLD in a statistically significant manner, where the effect size of the
differences is small to medium. One likely explanation for this better performance is that the
pattern-based search performs very well on Test 43, which SBLD generally
performs less well on. Since the comparisons are run per failing log and test
43 constitutes 29\% of the failing logs (specifically, 267 out of 910 logs), the
performance of test 43 has a massive impact.
We return to test 43 and its
impact on our results in Section~\ref{sec:testfourtythree}.

On the whole, SBLD performs similarly to pattern-based search, obtaining
slightly poorer results on recall for reasons that are likely due
to a particular test we discuss below. At any rate, there is no
contradiction in combining SBLD with a traditional pattern-based search.
Analysts could start by issuing a set of pattern-based searches and
run SBLD afterward if the pattern search returned unhelpful results.
Indeed, an excellent and intended use of SBLD is to suggest candidate
signature patterns that, once proven reliable, can be incorporated in a
regular-expression based search to automatically identify known issues
in future runs.

\subsection{What happens in Test 43?}
\label{sec:testfourtythree}

SBLD's performance is much worse on Test 43 than on the other tests, which
warrants a dedicated investigation. The first thing we observed in the
results for Test 43 is that all of the ground-truth-relevant events
occurred \emph{exclusively} in failing logs and were often singular
(11 of the 33 events occurred in only one failing log) or infrequent (30 of the 33 events occurred in at most 10\%
of the failing logs). Consequently, we observed a strong
performance from the \emph{Tarantula} and \emph{Failed only} measures
that put a high premium on failure-exclusive events. Most of the
interestingness measures, on the other hand, will prefer an event that
is very frequent in the failing logs and sometimes occurs in passing logs
over a very rare event that only occurs in failing logs. This goes a
long way toward explaining the poor performance on recall. The abundance of
singular events might also suggest that there is an error in the event
abstraction framework, where several events that should be treated as
instances of the same abstract event are treated as separate events.
We
discuss this further in Section~\ref{sec:ttv}.

\begin{sloppypar}%
Another observation we made is that the failing logs each contained only \emph{two}
ground-truth relevant events, which means that the recorded recall can quickly
fluctuate between $0$, $0.5$ and $1$.
\end{sloppypar}

Would the overall performance improve by retrieving an additional
cluster? A priori, retrieving an extra cluster would either improve
or leave recall unchanged, since more events are retrieved without removing
the previously retrieved events. Furthermore, retrieving an additional
cluster necessarily decreases the effort reduction. We re-ran the
analysis on Test 43 and collected effort reduction and recall scores
for SBLD when retrieving \emph{two} clusters, and found that the added
cluster increased median recall from $0$ to $0.5$ while the median
effort reduction decreased from $0.97$ to $0.72$. While the proportional
increase in recall is larger than the decrease in effort reduction,
this should in our view not be seen as an improvement: As previously
mentioned, the failing logs in this set contain only two ground-truth
relevant events, and thus recall is expected to fluctuate greatly.
Moreover, an effort reduction of $0.72$ implies that you still have to
manually inspect 28\% of the data, which in most information retrieval
contexts is unacceptable. An unfortunate aspect of our analysis in this
regard is that we do not account for event \emph{lengths}: An abstracted
event is treated as one atomic entity, but could in reality vary from a
single line to a stack trace that spans several pages.
A better measure
of effort reduction should incorporate a notion of event length to
better reflect the real-world effect of retrieving more events.

All in all, Test 43 exhibits a challenge that SBLD is not suited for:
It asks SBLD to prioritize rare events that are exclusive to failing
logs over events that frequently occur in failing logs but might
occasionally occur in passing logs. The majority of interestingness
measures supported by SBLD would prioritize the latter category of
events. In a way, this might suggest that SBLD is not suited for finding
\emph{outliers} and rare events: Rather, it is useful for finding
events that are \emph{characteristic} of failures that have occurred
several times -- a ``recurring suspect'', if you will. An avenue for future
research is to explore ways of letting the user combine a search for
``recurring suspects'' with a search for outliers.

 %

\section{Related Work}
\label{sec:relwork}

We distinguish two main lines of related work:
First, there is other work aimed at automated analysis of log files,
i.e., our problem domain,
and second, there is other work that shares similarities with our technical approach,
i.e., our solution domain.

\head{Automated log analysis}
Automated log analysis originates in \emph{system and network monitoring} for security and administration~\cite{lin1990:error,Oliner2007},
and saw a revival in recent years due to the needs of \emph{modern software development}, \emph{CE} and \emph{DevOps}~\cite{Hilton2017,Laukkanen2017,Debbiche2014,Olsson2012,Shahin2017,candido2019:contemporary}.

A considerable amount of research has focused on automated \emph{log parsing} or \emph{log abstraction},
which aims to reduce and organize log data by recognizing latent structures or templates in the events in a log~\cite{zhu2019:tools,el-masri2020:systematic}.
He et al.
analyze the quality of these log parsers and conclude that many of them are not accurate or efficient enough for parsing the logs of modern software systems~\cite{he2018:automated}.
In contrast to these automated approaches,
our study uses a handcrafted log abstracter developed by \CiscoNorway{our industrial collaborator}.

\emph{Anomaly detection} has traditionally been used for intrusion detection and computer security~\cite{liao2013:intrusion,ramaki2016:survey,ramaki2018:systematic}.
Application-level anomaly detection has been investigated for troubleshooting~\cite{chen2004:failure,zhang2019:robust},
and to assess compliance with service-level agreements~\cite{banerjee2010:logbased,He2018,sauvanaud2018:anomaly}.
Gunter et al. present an infrastructure for troubleshooting of large distributed systems, %
by first (distributively) summarizing high-volume event streams before submitting those summaries to a centralized anomaly detector.
This helps them achieve the fidelity needed for detailed troubleshooting,
without suffering from the overhead that such detailed instrumentation would bring~\cite{Gunter2007}.
DeepLog by Du et al. enables execution-path and performance anomaly detection in system logs by training a Long Short-Term Memory neural network to model the system's expected behavior from the logs, and using that model to flag events and parameter values in the logs that deviate from the model's expectations~\cite{Du2017}.
Similarly, LogRobust by Zhang et al. performs anomaly detection using a bi-LSTM neural network but also detects events that are likely evolved versions of previously seen events, making the learned model more robust to updates in the target logging infrastructure~\cite{zhang2019:robust}.

In earlier work, we use \emph{log clustering} to reduce the effort needed to process a backlog of failing CE logs
by grouping those logs that failed for similar reasons~\cite{rosenberg2018:use,rosenberg:2018:improving}.
That work builds on earlier research that uses log clustering to identify problems in system logs~\cite{Lin2016,Shang2013}.
Common to these approaches is how the contrast between passing and failing logs is used to improve accuracy,
which is closely related to how SBLD highlights failure-relevant events.

Nagaraj et al.~\cite{nagaraj:2012} explore the use of dependency networks to exploit the contrast between two sets of logs,
one with good and one with bad performance,
to help developers understand which component(s) likely contain the root cause of performance issues.

An often-occurring challenge is the need to (re)construct an interpretable model of a system's execution.
To this end, several authors investigate the combination of log analysis with (static) source code analysis,
where they try to (partially) match events in logs to log statements in the code,
and then use these statements to reconstruct a path through the source code to help determine
what happened in a failed execution~\cite{Xu2009,yuan:2010:sherlog,zhao2014:lprof,schipper2019:tracing}.
Gadler et al. employ Hidden Markov Models to create a model of a system's usage patterns from logged events~\cite{gadler2017:mining}, while
Pettinato et al. model and analyze the behavior of a complex telescope system using Latent Dirichlet Allocation~\cite{pettinato2019:log}.

Other researchers have analyzed the logs of successful and failing builds
to warn about anti-patterns and decay~\cite{vassallo2019:automated},
give build repair hints~\cite{Vassallo2018},
and automatically repair build scripts~\cite{hassan2018:hirebuild, tarlow2019:learning}.
In contrast to our work,
these techniques exploit the \emph{overlap} in build systems used by many projects to mine patterns that hint at decay or help repair a failing build,
whereas we exploit the \emph{contrast} with passing runs for the same project to highlight failure-relevant events.

\begin{sloppypar}
\head{Fault Localization}
As mentioned, our approach was inspired by Spectrum-Based Fault Localization (SBFL),
where the fault-proneness of a statement is computed as a function of
the number of times that the statement was executed in a failing test case, combined with
the number of times that the statement was skipped in a passing test case~\cite{Jones2002,Chen2002,Abreu2007,Abreu2009,Naish2011}.
This more or less directly translates to the inclusion or exclusion of events in failing, resp. passing, logs,
where the difference is that SBLD adds clustering of the results to enable step-wise presentation of results to the user.
\end{sloppypar}

A recent survey of Software Fault Localization includes the SBFL literature up to 2014~\cite{Wong2016}.
De Souza et al. extend this with SBFL work up to 2017, and add an overview of seminal work on automated debugging from 1950 to 1977~\cite{deSouza2017}.
By reflecting on the information-theoretic foundations of fault localization, Perez proposes the DDU metric,
which can be used to evaluate test suites and predict their diagnostic performance when used in SBFL~\cite{Perez2018}.
One avenue for future work is exploring how a metric like this can be adapted to our context,
and seeing if it helps to explain what happened with test 43.

A recent evaluation of \emph{pure} SBFL on large-scale software systems found that it under-performs in these situations
(only 33--40\% of the bugs are identified within the top-10 ranked results)~\cite{heiden2019:evaluation}.
\nThe authors discuss several directions beyond pure SBFL, such as combining it with dynamic program analysis techniques, \nincluding additional text analysis/IR techniques~\\cite{Wang2015a}, mutation based fault localization, \nand using SBFL in an interactive feedback-based process, such as whyline-debugging~\\cite{ko2008:debugging}.\nPure SBFL is closely related to the Spectrum-Based Log Diagnosis proposed here, \nso we may see similar challenges (in fact, test 43 may already show some of this). \nOf the proposed directions to go beyond pure SBFL, \nboth the inclusion of additional text analysis/IR techniques, \nand the application of Spectrum-Based Log Diagnosis in an interactive feedback-based process\nare plausible avenues to extend our approach. \nClosely related to the latter option, \nde Souza et al.~\\cite{deSouza2018b} assess guidance and filtering strategies to \\emph{contextualize} the fault localization process.\nTheir results suggest that contextualization by guidance and filtering can improve the effectiveness of SBFL,\nby classifying more actual bugs in the top ranked results.\n\n\\begin{comment}\n\nDirect comparison~\\cite{He2018, jiang2017:what, Jones:2007:DP:1273463.1273468,\nXu2009, Hwa-YouHsu:2008:RIB:1642931.1642994}. \n\nHsu et\nal~\\cite{Hwa-YouHsu:2008:RIB:1642931.1642994} discuss methods for extracting\nfailure signatures as sequences of code executions, which in spirit is rather\nsimilar to what we are trying to accomplish.\n\nAn interesting data-structure, the event correlation\ngraph, is explores in~\\cite{Fu2012a}. 
An FL metric that takes frequencies into
account~\cite{Shu2016}.
\end{comment}

 %
\section{Threats to Validity}
\label{sec:ttv}

\head{Construct Validity} %
The signatures that provide our ground truth were devised to determine whether a given log \emph{in its entirety} showed symptoms of a known error.
As discussed in Section~\ref{sec:dataset}, we have used these signatures to detect events that give sufficient evidence for a symptom,
but there may be other events that could be useful to the user that are not part of our ground truth.
We also assume that the logs exhibit exactly the failures described by the signature expression.
In reality, the logs could contain symptoms of multiple failures beyond the ones described by the signature.

Furthermore, we currently do not distinguish between events that consist of a single line of text
and events that contain a multi-line stack trace, although these clearly represent different comprehension efforts.
This threat could be addressed by tracking the \emph{length} of the event contents,
and using it to further improve the accuracy of our effort reduction measure.

The choice of clustering algorithm and parameters affects the events retrieved,
but our investigation currently only considers HAC with complete linkage.
While we chose complete linkage to favor compact clusters,
outliers in the dataset could cause unfavorable clustering outcomes.
Furthermore, using the uncorrected sample standard deviation as threshold criterion
may be too lenient if the variance in the scores is high.
This threat could be addressed by investigating alternative clustering algorithms and parameter choices.

Moreover, as for the majority of log analysis frameworks, the performance of SBLD strongly depends on the quality of log abstraction.
An error in the abstraction will directly propagate to SBLD:
For example, if abstraction fails to identify two concrete events as being instances of the same generic event,
their aggregated frequencies will be smaller and consequently treated as less interesting by SBLD.
Similarly, the accuracy will suffer if two events that represent distinct generic events are treated as instances of the same generic event.
Future work could investigate alternative log abstraction approaches.

\head{Internal Validity} %
While our heatmaps illustrate the interaction between additional data and SBLD performance,
they are not sufficient to prove a causal relationship between performance and added data.
Our statistical comparisons suggest that a strategy of maximizing data is generally preferable,
but they are not sufficient for discussing the respective contributions of failing and passing logs.

\head{External Validity} %
This investigation is concerned with a single dataset from one industrial partner.
Studies using additional datasets from other contexts are needed to assess the generalizability of SBLD to other domains.
Moreover, while SBLD is made to help users diagnose problems that are not already well understood,
we are assessing it on a dataset of \emph{known} problems.
It could be that these errors, being known, are of a kind that is generally easier to identify than most errors.
Studying SBLD in situ over time and directly assessing whether end users find it helpful
in diagnosis would better indicate the generalizability of our approach.

 %

\section{Concluding Remarks}
\label{sec:conclusion}

\head{Contributions}
This paper presents and evaluates Spectrum-Based Log Diagnosis (SBLD),
a method for automatically identifying segments of failing logs
that are likely to help users diagnose failures.
Our empirical investigation of SBLD addresses the following questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?

\head{Results}
In response to (i),
we find that SBLD generally retrieves the failure-relevant events in a compact manner
that effectively reduces the effort needed to identify failure-relevant events.
In response to (ii),
we find that SBLD benefits from additional data, especially more logs from successful runs.
SBLD also benefits from additional logs from failing runs if there is a proportional amount of successful runs in the set.
We also find that the effect of added data is most pronounced when going from little data to \emph{some} data rather than from \emph{some} data to maximal data.
In response to (iii),
we find that SBLD achieves roughly the same effort reduction as traditional search-based methods but obtains slightly lower recall.
We trace the likely cause of this discrepancy in recall to a prominent part of our dataset, whose ground truth emphasizes rare events.
A lesson learned in this regard is that SBLD is not suited for finding statistical outliers but rather \emph{recurring suspects}
that characterize the observed failures.
Furthermore, the investigation highlights that traditional pattern-based search and SBLD can complement each other nicely:
Users can resort to SBLD if they are unhappy with what the pattern-based searches turn
up, and SBLD is an excellent method for finding characteristic textual patterns
that can form the basis of automated failure identification methods.

\head{Conclusions}
We conclude that SBLD shows promise as a method for diagnosing failing runs,
that its performance is positively affected by additional data,
but that it does not outperform textual search on the dataset considered.

\head{Future work}
We see the following directions for future work:
(a) investigate SBLD's performance on other datasets, to better assess generalizability,
(b) explore the impact of alternative log abstraction mechanisms,
(c) explore ways of combining SBLD with outlier detection, to accommodate different user needs,
(d) adapt Perez' DDU metric to our context and see if it can help predict diagnostic efficiency,
(e) experiment with extensions of \emph{pure SBLD} that include additional text analysis/IR techniques,
 or apply it in an interactive feedback-based process, and
(f) rigorously assess (extensions of) SBLD in in-situ experiments.

\begin{acks}
We thank Marius Liaaen and Thomas Nornes of Cisco Systems Norway for help with obtaining and understanding the dataset, for developing the log abstraction
mechanisms and for extensive discussions.
This work is supported by the \grantsponsor{RCN}{Research Council of Norway}{https://www.rcn.no} through the
Certus SFI (\grantnum{RCN}{\#203461/030}).
The empirical evaluation was performed on resources provided by \textsc{uninett} Sigma2,
the national infrastructure for high performance computing and data
storage in Norway.
\end{acks}

 \printbibliography

\end{document}
\section{Introduction}
When granular material in a cubic container is shaken horizontally, one observes experimentally different types of instabilities, e.g., spontaneous formation of ripples in shallow beds~\cite{StrassburgerBetatSchererRehberg:1996}, liquefaction~\cite{RistowStrassburgerRehberg:1997,Ristow:1997}, convective motion~\cite{TennakoonBehringer:1997,Jaeger} and recurrent swelling of shaken material where the period of swelling decouples from the forcing period~\cite{RosenkranzPoeschel:1996}. Other interesting experimental results concerning simultaneously vertically and horizontally vibrated granular systems~\cite{TennakoonBehringer:1998} and enhanced packing of spheres due to horizontal vibrations~\cite{PouliquenNicolasWeidman:1997} have been reported recently. Horizontally shaken granular systems have been simulated numerically using cellular automata~\cite{StrassburgerBetatSchererRehberg:1996} as well as molecular dynamics techniques~\cite{RistowStrassburgerRehberg:1997,Ristow:1997,IwashitaEtAl:1988,LiffmanMetcalfeCleary:1997,SaluenaEsipovPoeschel:1997,SPEpre99}. Theoretical work on horizontal shaking can be found in~\cite{SaluenaEsipovPoeschel:1997}, and the dynamics of a single particle in a horizontally shaken box has been discussed in~\cite{DrosselPrellberg:1997}.

\begin{figure}[htbp]
  \centerline{\psfig{file=sketch.eps,width=7cm,clip=}}
  \caption{Sketch of the simulated system.}
  \label{fig:sketch}
\end{figure}

Recently, the effect of convection in a horizontally shaken box filled with granular material has attracted much attention, and presently the effect is studied experimentally by different groups~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}. Unlike convective motion in vertically shaken granular material, which has been studied intensively experimentally, analytically and by means of computer
simulations (see, e.g.,~\cite{vertikalEX,JaegerVert,vertikalANA,vertikalMD}), there exist only a few references on horizontal shaking. Different from the vertical case, where the ``architecture'' of the convection pattern is very simple~\cite{BizonEtAl:1998}, in horizontally shaken containers one observes a variety of different patterns, convecting in different directions, parallel as well as perpendicular to the direction of forcing~\cite{TennakoonBehringer:1997}. Under certain conditions one observes several convection rolls on top of each other~\cite{Jaeger}. An impression of the complicated convection can be found on the Internet~\cite{movies}.

Whereas the properties of convection in vertically sha\-ken systems can be reproduced by two-dimensional molecular dynamics simulations with good reliability, for the case of horizontal motion the results of simulations are inconsistent with the experimental results: in {\em all} experimental investigations it was reported that the material flows downwards close to the vertical walls~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996,movies}, but reported numerical simulations systematically show surface rolls in the opposite direction accompanying the more realistic deeper rolls, or even replacing them completely~\cite{LiffmanMetcalfeCleary:1997}.

Our investigation is thus concerned with the convection pattern, i.e., the number and direction of the convection rolls, in a two-dimensional molecular dynamics simulation.
We will show that the choice of the dissipative material parameters has a crucial influence on the convection pattern and, in particular, that the type of convection rolls observed experimentally can be reproduced by using sufficiently high dissipation constants.

\section{Numerical Model}
The system under consideration is sketched in Fig.~\ref{fig:sketch}: we simulate a two-dimensional vertical cross section of a three-dimensional container. This rectangular section of width $L=100$ (all units in cgs system) and infinite height contains $N=1000$ spherical particles. The system is periodically driven by an external oscillator $x(t) = A \sin (2\pi f t)$ along a horizontal plane. For the effect we want to show, a working frequency $f=10$ and amplitude $A=4$ are selected. These values give an acceleration amplitude of approximately $16 g$. Lower accelerations affect the intensity of the convection but do not change the basic features of the convection pattern which we want to discuss. As has been shown in~\cite{SPEpre99}, past the fluidization point a much better indicator of the convective state is the dimensionless velocity $A 2\pi f/ \sqrt{Lg}$. This means that in small containers motion saturates earlier; hence, results for different container lengths at the same values of the acceleration amplitude cannot be compared directly. Our acceleration amplitude $\approx 16g$ corresponds to $\approx 3g$ in a 10 cm container (provided that the frequency is the same and particle sizes have been scaled by the same amount).

The radii of the particles, of density $2$, are homogeneously distributed in the interval $[0.6, 1.4]$. The rough inner walls of the container are simulated by attaching additional particles of the same radii and material properties (this simulation technique is similar to ``real'' experiments, e.g.~\cite{JaegerVert}).
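As a quick sanity check on the driving parameters quoted above, the peak acceleration of $x(t) = A \sin(2\pi f t)$ is $A(2\pi f)^2$, and the dimensionless velocity of~\cite{SPEpre99} follows directly (a minimal sketch in cgs units; only $A$, $f$, $L$, and $g$ from the text are used):

```python
import math

# Driving parameters from the simulation (cgs units):
A, f, L, g = 4.0, 10.0, 100.0, 981.0

a_max = A * (2 * math.pi * f) ** 2              # peak acceleration of x(t)
gamma = a_max / g                               # acceleration amplitude in units of g
v_star = A * 2 * math.pi * f / math.sqrt(L * g)  # dimensionless velocity of [SPEpre99]
```

Evaluating this gives `gamma` of roughly 16, consistent with the "approximately $16 g$" stated above.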
\n\nFor the molecular dynamics simulations, we apply a modified\nsoft-particle model by Cundall and Strack~\\cite{CundallStrack:1979}:\nTwo particles $i$ and $j$, with radii $R_i$ and $R_j$ and at positions\n$\\vec{r}_i$ and $\\vec{r}_j$, interact if their compression $\\xi_{ij}=\nR_i+R_j-\\left|\\vec{r}_i -\\vec{r}_j\\right|$ is positive. In this case\nthe colliding spheres feel the force\n $F_{ij}^{N} \\vec{n}^N + F_{ij}^{S} \\vec{n}^S$, \nwith $\\vec{n}^N$ and $\\vec{n}^S$ being the unit vectors in normal and shear\ndirection. The normal force acting between colliding spheres reads\n\\begin{equation}\n F_{ij}^N = \\frac{Y\\sqrt{R^{\\,\\mbox{\\it\\footnotesize\\it eff}}_{ij}}}{1-\\nu^2} \n~\\left(\\frac{2}{3}\\xi_{ij}^{3/2} + B \\sqrt{\\xi_{ij}}\\, \n\\frac{d {\\xi_{ij}}}{dt} \\right)\n\\label{normal}\n\\end{equation}\nwhere $Y$ is the Young modulus, $\\nu$ is the Poisson ratio and $B$ \nis a material constant which characterizes the dissipative\ncharacter of the material~\\cite{BSHP}. \n\\begin{equation}\nR^{\\,\\mbox{\\it\\footnotesize\\it\n eff}}_{ij} = \\left(R_i R_j\\right)/\\left(R_i + R_j\\right) \n\\end{equation}\n is the\neffective radius. 
For a strict derivation of (\ref{normal}) see~\cite{BSHP,KuwabaraKono}.

For the shear force we apply the model by Haff and Werner~\cite{HaffWerner}
\begin{equation}
F_{ij}^S = \mbox{sign}\left({v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right)
\min \left\{\gamma_s m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
\left|{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right|~,~\mu
\left|F_{ij}^N\right| \right\}
\label{shear}
\end{equation}
with the effective mass $m_{ij}^{\,\mbox{\it\footnotesize\it eff}} = \left(m_i m_j\right)/\left(m_i + m_j\right)$ and the relative velocity at the point of contact
\begin{equation}
{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}} = \left(\dot{\vec{r}}_i -
\dot{\vec{r}}_j\right)\cdot \vec{n}^S + R_i {\Omega}_i + R_j {\Omega}_j ~.
\end{equation}
$\Omega_i$ and $\Omega_j$ are the angular velocities of the particles.

The resulting torques $M_i$ and $M_j$ acting upon the particles are $M_i = F_{ij}^S R_i$ and $M_j = - F_{ij}^S R_j$. Eq.~(\ref{shear}) takes into account that the particles slide upon each other when the Coulomb condition $\mu \left| F_{ij}^N \right| < \left| F_{ij}^S \right|$ holds; otherwise they feel viscous friction. By means of $\gamma _{n} \equiv BY/(1-\nu ^2)$ and $\gamma _{s}$, the normal and shear damping coefficients, energy loss during particle contact is taken into account~\cite{restitution}.

The equations of motion for translation and rotation have been solved using a Gear predictor-corrector scheme of sixth order (e.g.~\cite{AllenTildesley:1987}).

The values of the coefficients used in the simulations are $Y/(1-\nu ^2)=1\times 10^{8}$, $\gamma _{s}=1\times 10^{3}$, $\mu =0.5$.
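The contact law of Eqs.~(\ref{normal})--(\ref{shear}) can be sketched compactly in code. The following is a minimal, illustrative implementation (cgs units; the particle masses passed in are assumptions for two mean-sized grains of density 2, not values quoted in the text):

```python
import math

# Material constants as used in the simulations (cgs units);
# B is the dissipative constant, with gamma_n = B*Y/(1-nu^2).
Y_EFF = 1.0e8      # Y/(1-nu^2)
GAMMA_S = 1.0e3    # shear damping coefficient
MU = 0.5           # Coulomb friction coefficient

def contact_forces(xi, xi_dot, v_rel, R_i, R_j, m_i, m_j, B):
    """Normal and shear contact forces of the soft-particle model.

    xi     -- compression R_i + R_j - |r_i - r_j|
    xi_dot -- its time derivative
    v_rel  -- relative surface velocity at the contact point
    """
    if xi <= 0.0:
        return 0.0, 0.0                      # no overlap, no contact
    R_eff = R_i * R_j / (R_i + R_j)
    m_eff = m_i * m_j / (m_i + m_j)
    # Hertzian elasticity plus viscoelastic dissipation, Eq. (1):
    F_n = Y_EFF * math.sqrt(R_eff) * (2.0 / 3.0 * xi ** 1.5
                                      + B * math.sqrt(xi) * xi_dot)
    # Haff-Werner shear force, capped by the Coulomb limit, Eq. (3):
    F_s = math.copysign(min(GAMMA_S * m_eff * abs(v_rel),
                            MU * abs(F_n)), v_rel)
    return F_n, F_s
```

For small relative surface velocities the viscous branch of the `min` is active; for large ones the force saturates at the Coulomb limit $\mu |F^N|$, i.e. the particles slide.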
For the effect we want to show, the coefficient $\gamma _{n}$ takes values within the range $\left[10^2,10^4\right]$.

\section{Results}
The mechanisms for convection under horizontal shaking have been discussed in \cite{LiffmanMetcalfeCleary:1997}. Now we can show that these mechanisms can be better understood by taking into account the particular role of dissipation in this problem. The most striking consequence of varying the normal damping coefficient is the change in the organization of the convective pattern, i.e., the direction and number of rolls in the stationary regime. This is shown in Fig.~\ref{fig1}, which has been obtained after averaging particle displacements over 200 cycles (2 snapshots per cycle). The asymmetry of compression and expansion of particles close to the walls (where the material becomes highly compressible) explains the large transverse velocities shown in the figure. Note, however, that the upward and downward motion at the walls cannot be altered by this particular averaging procedure.

The first frame shows a convection pattern with only two rolls, where the arrows indicate that the grains slide down the walls, with at most a slight expansion of the material at the surface. There are no surface rolls. This is very similar to what has been observed in experiments~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}. In this case, dissipation is high enough to damp most of the sloshing induced by the vertical walls, and not even the grains just below the surface can overcome the pressure gradient directed downwards.

For lower damping, we see the development of surface rolls, which coexist with the inner rolls circulating in the opposite way. Some energy is now available for upward motion when the walls compress the material fluidized during the opening of the wall ``gap'' (empty space which is created alternately during the shaking motion).
This is the case reported in \cite{LiffmanMetcalfeCleary:1997}. The last frames demonstrate how the original rolls vanish while the surface rolls grow, occupying a significant part of the system. Another feature shown in the figure is the thin layer of material, involving 3 particle rows close to the bottom, which performs a different kind of motion. This effect, which can be seen in all frames, is due to the presence of the constraining boundaries but has not been analyzed separately.
\onecolumn
\begin{figure}
\centerline{\psfig{file=fric1nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric2nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric3nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric4nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric5nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric6nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric7nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric8nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric9nn.eps,width=5.7cm,clip=}}
\vspace{0.3cm}
\caption{Velocity field obtained after cycle averaging of
  particle displacements, for different values of the normal damping
  coefficient, $\gamma_n$. The first frame corresponds to $1\times 10^4$, and for
  each subsequent frame the coefficient has been divided by
  two. The frames are ordered from left to right and from top to
  bottom. The cell size for averaging is approximately one particle diameter.}
\label{fig1}
\vspace*{-0.2cm}
\end{figure}
\twocolumn

With decreasing normal damping $\gamma_n$ there are two transitions observable in Fig.~\ref{fig1}, meaning that the convection pattern changes qualitatively at these two particular values of $\gamma_n$: the first transition leads to the appearance of two surface rolls lying on top of the bulk cells and circulating in the opposite direction. The second transition eliminates the bulk rolls.
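The cycle-averaging procedure used for Fig.~\ref{fig1} amounts to binning particle displacements between consecutive snapshots onto a spatial grid and averaging per cell. A minimal sketch (the snapshot layout and the cell size of one particle diameter are taken from the text; everything else is an illustrative assumption):

```python
def cycle_averaged_field(snapshots, cell=2.0):
    """Bin particle displacements between consecutive snapshots onto a
    grid of square cells and return the mean displacement per cell.

    snapshots -- list of snapshots, each a list of (x, y) particle positions
    cell      -- cell edge length (~ one particle diameter in the text)
    """
    acc, cnt = {}, {}
    for a, b in zip(snapshots[:-1], snapshots[1:]):
        for (x0, y0), (x1, y1) in zip(a, b):
            key = (int(x0 // cell), int(y0 // cell))   # cell of the start position
            dx, dy = acc.get(key, (0.0, 0.0))
            acc[key] = (dx + x1 - x0, dy + y1 - y0)
            cnt[key] = cnt.get(key, 0) + 1
    return {k: (vx / cnt[k], vy / cnt[k]) for k, (vx, vy) in acc.items()}
```

Averaging over many cycles (here, pairs of snapshots) cancels the oscillatory part of the motion, so the per-cell means expose the slow convective circulation rather than the driven sloshing.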
A more detailed analysis of the displacement fields (Fig.~\ref{fig2}) allows us to locate the transitions much more precisely. In Fig.~\ref{fig2} we have represented in grey-scale the horizontal and vertical components of the displacement vectors pictured in Fig.~\ref{fig1}, but in a denser sampling, analyzing data from 30 simulations corresponding to values of the normal damping coefficient within the interval [50,10000]. For horizontal displacements, we have chosen vertical sections at a representative position in the horizontal direction ($x=30$). For the vertical displacements, vertical sections of the leftmost part of the container were selected ($x=10$); see Fig.~\ref{fig2}, lower part.
\begin{figure}
  \centerline{\psfig{file=vx.eps,width=4.5cm,clip=}\hspace{-0.5cm}
    \psfig{file=vy.eps,width=4.5cm,clip=}}

\centerline{\psfig{file=sectionn.eps,height=4.2cm,bbllx=7pt,bblly=16pt,bburx=507pt,bbury=544pt,clip=}}
\vspace*{0.2cm}
\caption{Horizontal (left) and vertical (right) displacements at
  selected positions of the frames in Fig.~\ref{fig1} (see the text
  for details), for decreasing normal damping and as a function of
  depth. White indicates strongest flow along positive axis directions
  (up, right), and black the corresponding negative ones. The black region
  at the bottom of the left picture corresponds to the complex boundary
  effect observed in Fig.~\ref{fig1}, involving only two particle layers.
  The figure below shows a typical convection pattern together with the sections
  at $x=10$ and $x=30$ at which the displacements were recorded.}
\label{fig2}
\vspace*{-0.1cm}
\end{figure}

The horizontal axis shows the values of the normal damping coefficient scaled logarithmically in decreasing sequence. The vertical axis represents the position in the vertical direction, with the free surface of the system located at $y \approx 60$.
One observes first that white surface shades, complemented by subsurface black ones, appear quite clearly at about $\gamma_n \approx 2000$ in Fig.~\ref{fig2} (left), indicating the appearance of surface rolls. On the other hand, Fig.~\ref{fig2} (right) shows a black area (indicative of downward flow along the vertical wall) that vanishes at $\gamma_n \approx 200$ (at this point the grey shade represents vanishing vertical velocity). The dashed lines in Fig.~\ref{fig2} lead the eye to identify the transition values. In the interval $200 \lesssim \gamma_n \lesssim 2000$, surface and inner rolls coexist, rotating in opposite directions.

One can analyze the situation in terms of the restitution coefficient. From Eq.~(\ref{normal}), the equation of motion for the displacement $\xi_{ij}$ can be integrated and the relative energy loss in a collision, $\eta=(E_0-E)/E_0$ (with $E$ and $E_0$ being the energy of the relative motion of the particles), can be evaluated approximately. To lowest order in the expansion parameter, one finds~\cite{Thomas-Thorsten}
\begin{equation}
\eta = 1.78 \left( \frac{\tau}{\ell} v_0\right)^{1/5}\;,
\label{energyloss}
\end{equation}
where $v_0$ is the relative initial velocity in normal direction, and $\tau$, $\ell$ are time and length scales associated with the problem (see~\cite{Thomas-Thorsten} for details),

\begin{equation}
\tau = \frac{3}{2} B\; ,~~~~~~~~~
\ell = \left(\frac{1}{3} \frac{m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
}{\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}
B \gamma_{n}}\right)^{2}.
\end{equation}
For $\gamma_n = 10^4$ (the highest value analyzed) and the values of the parameters specified above ($v_0 \approx A 2\pi f$ for collisions with the incoming wall), $B= 10^{-4}$ and $\eta$ is typically 50\%.
This means that after three more collisions the particle leaves with an energy insufficient to overcome the height of a single particle in the gravity field. For $\gamma_n = 10^3$ and the other parameters kept constant, $B=10^{-5}$ and $\eta$ is reduced to 5\%, so that the number of collisions needed for the particle to have its kinetic energy reduced to the same residual fraction increases roughly by an order of magnitude. On the other hand, given the weak dependence of Eq.~(\ref{energyloss}) on the velocity, one expects that the transitions shown in Fig.~\ref{fig2} will also depend only weakly on the amplitude of the shaking velocity. The reduction of the inelasticity $\eta$ by an order of magnitude seems enough for particles to ``climb'' the walls and develop the characteristic surface rolls observed in numerical simulations.

\section{Discussion}
We have shown that the value of the normal damping coefficient influences the convective pattern of horizontally shaken granular materials. By means of molecular dynamics simulations in two dimensions we can reproduce the pattern observed in real experiments, which corresponds to a situation of comparatively high damping, characterized by inelasticity parameters $\eta$ larger than 5\%. For lower damping, the upper layers of the material develop additional surface rolls, as has been reported previously. As normal damping decreases, the lower rolls descend and finally disappear completely at inelasticities of the order of 1\%.

\begin{acknowledgement}
The authors want to thank R. P. Behringer, H. M. Jaeger, M. Medved, and D. Rosenkranz for providing experimental results prior to publication, and V. Buchholtz, S. E. Esipov, and L. Schimansky-Geier for discussions.
The calculations have been done on the parallel machine {\it KATJA} (http://summa.physik.hu-berlin.de/KATJA/) of the medical department {\em Charit\'e} of the Humboldt University Berlin. The work was supported by Deut\-sche Forschungsgemeinschaft through grant Po 472/3-2.
\end{acknowledgement}

% Source: arXiv:cond-mat/9807071

\section{\label{sec:intro}Introduction}

Demonstration of non-abelian exchange statistics is one of the most active areas of condensed matter research, and yet experimental realization of braiding of Majorana modes remains elusive~\cite{RevModPhys.80.1083,zhang2019next}. Most efforts so far have focused on superconductor/semiconductor nanowire hybrids, where Majorana bound states (MBS) are expected to form at the ends of a wire or at boundaries between topologically trivial and non-trivial regions~\cite{rokhinson2012fractional, deng2012anomalous, mourik2012signatures, LutchynReview}. Recently, it became clear that abrupt interfaces may also host topologically trivial Andreev states with experimental signatures similar to MBS \cite{pan2020generic,Yu2021}, which makes demonstrating braiding in nanowire-based platforms challenging. Phase-controlled long Josephson junctions (JJ) open a much wider phase space to realize MBS, with a promise to solve some problems of the nanowire platform, such as enabling zero-field operation to avoid detrimental flux focusing for in-plane fields \cite{pientka2017topological, ren2019topological}. However, MBSs in long JJs suffer from the same problems as in the original Fu-Kane proposal for topological insulator/superconductor JJs, such as poor control of flux motion along the junction and the presence of sharp interfaces in the vicinity of MBS-carrying vortices, which may host Andreev states and trap quasiparticles.
For instance, MBS spectroscopy in both HgTe and InAs-based JJs shows a soft gap \cite{fornieri2019evidence}, despite a hard SC gap in the underlying InAs/Al heterostructure.

\begin{figure*}[t]
\centering
\begin{subfigure}{0.95\textwidth}
\includegraphics[width=1\textwidth]{Schematic.pdf}
\caption{\label{fig:schematic}}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=1\textwidth]{stack_2.pdf}
\caption{\label{fig:layers}}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=1\textwidth]{Flow_2.pdf}
\caption{\label{fig:flow}}
\end{subfigure}
\caption{\label{fig:one} (a) Schematic of the Majorana braiding platform. The magnetic multilayer (MML) is patterned into a track and is separated from the TSC by a thin insulating layer. Green lines represent on-chip microwave resonators for a dispersive parity readout setup. The left inset shows a magnified view of a SVP and the right inset shows the role of each layer. (b) Expanded view of the composition of an MML. (c) Process flow diagram for our Majorana braiding scheme. Here, $T_c$ is the superconducting transition temperature and $T_{BKT}$ is the Berezinskii--Kosterlitz--Thouless transition temperature for the TSC.}

\end{figure*}

In the search for alternate platforms to realize Majorana braiding, spectroscopic signatures of MBS have recently been reported in STM studies of vortex cores in iron-based topological superconductors (TSC) \cite{wang2018evidence}. Notably, a hard gap surrounding the zero-bias peak at a relatively high temperature of $0.55$ K, and a $5$ K separation gap from trivial Caroli-de Gennes-Matricon (CdGM) states were observed \cite{chen2020observation, chen2018discrete}. Moreover, vortices in a TSC can be field-coupled to a skyrmion in an electrically-separated magnetic multilayer (MML) \cite{volkov,petrovic2021skyrmion}, which can be used to manipulate the vortex.
This allows for physical separation of the manipulation layer from the layer wherein MBS reside, eliminating the problem of abrupt interfaces faced by nanowire hybrids and JJs. Finally, recent advances in the field of spintronics provide a flexible toolbox to design MML in which skyrmions of various sizes can be stabilized in zero external magnetic field and at low temperatures \\cite{petrovic2021skyrmion, buttner2018theory, dupe2016engineering}. Under the right conditions, stray fields from these skyrmions alone can nucleate vortices in the adjacent superconducting layer. In this paper, we propose TSC--MML heterostructures hosting skyrmion-vortex pairs (SVP) as a viable platform to realize Majorana braiding. By patterning the MML into a track and by driving skyrmions in the MML with local spin-orbit torques (SOT), we show that the SVPs can be effectively moved along the track, thereby facilitating braiding of MBS bound to vortices.\n\nThe notion of coupling skyrmions (Sk) and superconducting vortices (Vx) through magnetic fields has been studied before \\cite{volkov, baumard2019generation, zhou_fusion_2022, PhysRevLett.117.077002, PhysRevB.105.224509, PhysRevB.100.064504, PhysRevB.93.224505, PhysRevB.99.134505, PhysRevApplied.12.034048}. Menezes et al. \\cite{menezes2019manipulation} performed numerical simulations to study the motion of a skyrmion--vortex pair when the vortex is dragged via supercurrents and Hals et al. \\cite{hals2016composite} proposed an analytical model for the motion of such a pair where a skyrmion and a vortex are coupled via exchange fields. However, the dynamics of a SVP in the context of Majorana braiding remains largely unexplored. Furthermore, no \\textit{in-situ} non-demolition experimental technique has been proposed to measure MBS in these TSC--MML heterostructures. 
In this paper, through micromagnetic simulations and analytical calculations within London and Thiele formalisms, we study the dynamics of a SVP subjected to external spin torques. We demonstrate that the SVP moves without dissociation up to speeds necessary to complete Majorana braiding within estimated quasiparticle poisoning time. We further eliminate the problem of \\textit{in-situ} MBS measurements by proposing a novel on-chip microwave readout technique. By coupling the electric field of the microwave cavity to dipole-moments of transitions from Majorana modes to CdGM modes, we show that a topological non-demolition dispersive readout of the MBS parity can be realized. Moreover, we show that our platform can be used to make the first experimental observations of quasiparticle poisoning times in topological superconducting vortices.\n\nThe paper is organized as follows: in Section~\\ref{sec:plat} we present a schematic and describe our platform. In Section~\\ref{sec:initial} we present the conditions for initializing a skyrmion--vortex pair and discuss its equilibrium properties. In particular, we characterize the skyrmion--vortex binding strength. In Section~\\ref{sec:braid} we discuss the dynamics of a SVP in the context of braiding. Then in Section~\\ref{sec:read}, we present details of our microwave readout technique. 
Finally, we discuss the scope of our platform in Section~\\ref{sec:summ}.\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{energies.jpg}\n \\caption{\\label{fig:energies}}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{forces.jpg}\n \\caption{\\label{fig:forces}}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\textwidth}\n \\includegraphics[width=1\\textwidth]{fvav.jpg}\n \\caption{\\label{fig:fvav}}\n \\end{subfigure}\n \\caption{\\label{fig:onenew} (a -- b) Normalized energies and forces for Sk--Vx interaction between a Pearl vortex and a N\\'eel skyrmion of varying thickness. (c) Attractive $F_{Vx-Avx}$ and repulsive $F_{Sk-Avx}$ (colored lines) for the example materials in Appendix~\\ref{app:A}: $M_{0}=1450$ emu/cc, $r_{sk}=35$ nm, $d_s = 50$ nm, $\\Lambda = 5$ $\\mu$m and $\\xi=15$ nm.}\n\n\\end{figure*}\n\n\\section{\\label{sec:plat}Platform Description}\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.59\\textwidth}\n \\includegraphics[width=1\\textwidth]{Braiding.jpg}\n \\caption{\\label{fig:braiding}}\n \\end{subfigure}\n \\begin{subfigure}{0.39\\textwidth}\n \\includegraphics[width=1\\textwidth]{t0.jpg}\n \\caption{\\label{fig:t0}}\n \\end{subfigure}\n \n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t1.jpg}\n \\caption{\\label{fig:t1}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t2.jpg}\n \\caption{\\label{fig:t2}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t3.jpg}\n \\caption{\\label{fig:t3}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t4.jpg}\n \\caption{\\label{fig:t4}}\n \\end{subfigure}\n \\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t55.jpg}\n \\caption{\\label{fig:t5}}\n \\end{subfigure}\n 
\\begin{subfigure}{0.15\\textwidth}\n \\includegraphics[width=1\\textwidth]{t6.jpg}\n \\caption{\\label{fig:t6}}\n \\end{subfigure}\n\\caption{\\label{fig:two} (a) Schematic of our braiding process: manipulations of four skyrmions in the MML track are shown. MBS at the centers of vortices bound to each of these skyrmions are labeled $\\gamma_1$--$\\gamma_4$. Ohmic contacts in HM layers of the MML are shown in brown and rf readout lines are shown in green. II--VI show the steps involved in braiding $\\gamma_2$ and $\\gamma_4$. In step II, $\\gamma_1$ and $\\gamma_2$ are brought close to rf lines by applying charge currents from C to A and D to B, respectively. $\\gamma_1$ and $\\gamma_2$ are then initialized by performing a dispersive readout of their parity (see Section~\\ref{sec:read}). Similarly, $\\gamma_3$ and $\\gamma_4$ are initialized after applying charge currents along P to R and Q to S, respectively. In step III, $\\gamma_2$ is moved aside to make room for $\\gamma_4$ by applying currents from B to X followed by applying currents from X to C. In step IV, $\\gamma_4$ is braided with $\\gamma_2$ by applying currents along S to X and X to B. Finally, in step V, the braiding process is completed by bringing $\\gamma_2$ to S by applying currents from A to X and from X to S. Parities (i.e., fusion outcomes) of $\\gamma_1$ and $\\gamma_4$, and $\\gamma_3$ and $\\gamma_2$ are then measured in step VI. Fusion outcomes in each pair of MBS indicate the presence or absence of a fermion corresponding to a parity of $\\pm1$ \\cite{PhysRevApplied.12.054035, PhysRevX.6.031016}. (b) Initial position of the skyrmions labeled A and B in the micromagnetic simulation for skyrmion braiding (see Appendix.~\\ref{app:A}) (c--h) Positions of the two skyrmions at the given times as the braiding progresses. 
Charge current $j = 2\times 10^{12}$ A/m$^2$ was applied.}

\end{figure*}

Our setup consists of a thin, vortex-hosting TSC layer grown on top of a skyrmion-hosting MML, as shown in Fig.~\ref{fig:schematic}. A thin insulating layer separates the magnetic and superconducting layers, ensuring electrical separation between the two. Vortices in a TSC are expected to host MBS at their cores \cite{wang2018evidence,chen2020observation, chen2018discrete}. Stray fields from a skyrmion in the MML nucleate such a vortex in the TSC, forming a bound skyrmion--vortex pair under favorable energy conditions (see Sec.~\ref{sec:initial}). This phenomenon has recently been demonstrated experimentally in Ref.~\cite{petrovic2021skyrmion}, where stray fields from N\'eel skyrmions in Ir/Fe/Co/Ni magnetic multilayers nucleated vortices in a bare niobium superconducting film.

The MML consists of alternating magnetic and heavy metal (HM) layers, as shown in Fig.~\ref{fig:layers}. The size of a skyrmion in a MML is determined by a delicate balance between exchange, magnetostatic, anisotropy and Dzyaloshinskii--Moriya interaction (DMI) energies \cite{wang2018theory, romming2015field} -- and the balance is highly tunable, thanks to advances in spintronics \cite{buttner2018theory, dupe2016engineering, soumyanarayanan2017tunable}. Given a TSC, this tunability allows us to find a variety of magnetic materials and skyrmion sizes that can satisfy the vortex nucleation condition [to be detailed in Eq.~(\ref{eqn:nuc})]. In Appendix~\ref{app:A}, we provide a specific example of an FeTeSe topological superconductor coupled with Ir/Fe/Co/Ni magnetic multilayers.

Due to large intrinsic spin-orbit coupling, a charge current through the heavy metal layers of a MML exerts spin-orbit torques (SOT) on the magnetic moments in the MML, which have been shown to drive skyrmions along magnetic tracks \cite{fert2013skyrmions, woo2017spin}.
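Within the Thiele (rigid-skyrmion) formalism invoked later in the paper, the steady-state drift of a SOT-driven skyrmion follows from balancing the gyrotropic and dissipative terms against the driving force, $\vec{G}\times\vec{v} + \alpha D \vec{v} = \vec{F}$. The sketch below solves this 2x2 linear system; the numerical values of $G$, $\alpha D$, and $\vec{F}$ are illustrative assumptions, not parameters from the paper:

```python
import math

def thiele_velocity(Fx, Fy, G, alpha_D):
    """Steady-state skyrmion velocity from the Thiele equation
    G z x v + alpha_D * v = F (rigid-skyrmion approximation).

    G       -- z-component of the gyrocoupling vector
    alpha_D -- damping times dissipative-tensor component
    """
    # Componentwise:  alpha_D*vx - G*vy = Fx ;  G*vx + alpha_D*vy = Fy
    det = G ** 2 + alpha_D ** 2
    vx = (alpha_D * Fx + G * Fy) / det
    vy = (-G * Fx + alpha_D * Fy) / det
    return vx, vy

# Illustrative drive along +x:
vx, vy = thiele_velocity(1.0, 0.0, G=1.0, alpha_D=0.3)
hall_angle = math.degrees(math.atan2(vy, vx))   # skyrmion Hall angle
```

The transverse component `vy` (the skyrmion Hall effect, with $\tan$ of the Hall angle equal to $-G/\alpha D$ here) is one reason track geometry matters when steering SVPs along the legs of the MML track.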
In our platform, to realize Majorana braiding we propose to pattern the MML into a track as shown in Fig.~\\ref{fig:schematic} and use local spin-orbit torques to move skyrmions along each leg of the track. If skyrmions are braided on the MML track, and if skyrmion-vortex binding force is stronger than total pinning force on the SVPs, then the MBS hosting vortices in TSC will closely follow the motion of skyrmions, resulting in the braiding of MBS. We note here that there is an upper threshold speed with which a SVP can be moved as detailed in Sec.~\\ref{sec:braid}. By using experimentally-relevant parameters for TSC and MML in Appendix~\\ref{app:A}, we show that our Majorana braiding scheme can be realized with existing materials.\n\nWe propose a non-demolition microwave measurement technique for the readout of the quantum information encoded in a pair of vortex Majorana bound states (MBS). A similar method has been proposed for the parity readout in topological Josephson junctions~\\cite{PhysRevB.92.245432,Vayrynen2015,Yavilberg2015,PhysRevB.99.235420,PRXQuantum.1.020313} and in Coulomb blockaded Majorana islands~\\cite{PhysRevB.95.235305}. Dipole moments of transitions from MBS to CdGM levels couple dispersively to electric fields in a microwave cavity, producing a parity-dependent dispersive shift in the cavity resonator frequency. Thus by probing the change in the resonator's natural frequency, the state of the Majorana modes can be inferred. Virtual transitions from Majorana subspace to excited CdGM subspace induced due to coupling to the cavity electric field are truly parity conserving, making our readout scheme a so-called topological quantum non-demolition technique \\cite{PRXQuantum.1.020313, PhysRevB.99.235420}. The readout scheme is explained in greater detail in Sec.~\\ref{sec:read}.\n\nAs discussed above, in our platform we consider coupling between a thin superconducting layer and magnetic multilayers. 
We note that in thin superconducting films, vortices are characterized by the Pearl penetration depth $\Lambda = \lambda^{2}/d_{s}$, where $\lambda$ is the London penetration depth and $d_{s}$ is the thickness of the TSC film. Typically, these penetration depths $\Lambda$ are much larger than the skyrmion radii $r_{sk}$ in MMLs of interest. Further, the interfacial DMI in the MML stabilizes a N\'eel skyrmion as opposed to a Bloch skyrmion. From here on, we therefore study only the coupling between a N\'eel skyrmion and a Pearl vortex in the limit $\Lambda\gg r_{sk}$.

\section{\label{sec:initial}Initialization and SVP in Equilibrium}

Fig.~\ref{fig:flow} illustrates the process flow of our initialization scheme. Skyrmions can be generated individually in the MML by locally modifying the magnetic anisotropy through an artificially created defect center and applying a current through the adjacent heavy metal layers \cite{zhang2020skyrmion}. Such defect centers have been experimentally observed to act as skyrmion creation sites \cite{buttner2017field}. When the TSC--MML heterostructure is cooled below the superconducting transition temperature (SC $T_{C}$), stray fields from a skyrmion in the MML will nucleate a vortex and an antivortex in the superconducting layer if the nucleation leads to a lowering of the overall free energy of the system \cite{volkov}.
An analytical expression has been obtained for the nucleation condition in Ref.~\\cite{NeelInteraction} ignoring contributions of dipolar and Zeeman energies to total magnetic energy: a N\\'eel skyrmion nucleates a vortex directly on top of it if \n\\begin{equation}\n d_{m}\\left[ \\alpha _{K}\\frac{Kr_{sk}^{2}}{2} -\\alpha _{A} A-M_{0} \\phi _{0}\\right] \\geq \\frac{{\\phi _{0}}^2}{8 \\pi^2 \\lambda} \\ln\\left(\\frac{\\Lambda }{\\xi }\\right).\n \\label{eqn:nuc}\n\\end{equation}\n\\noindent Here, $d_{m}$ is the effective thickness, $M_{0}$ is the saturation magnetization, $A$ is the exchange stiffness and $K$ is the perpendicular anisotropy constant of the MML; $\\alpha_K$ and $\\alpha_A$ are positive constants that depend on skyrmion's spatial profile (see Appendix~\\ref{app:A}), $r_{sk}$ is the radius of the skyrmion in the presence of a Pearl vortex \\footnote{The radius of a skyrmion is not expected to change significantly in the presence of a vortex \\cite{NeelInteraction}. We verified this claim with micromagnetic simulations. For the materials in Appendix~\\ref{app:A}, when vortex fields are applied on a bare skyrmion, its radius increased by less than $10\\%$. So, for numerical calculations in this paper, we use bare skyrmion radius for $r_{sk}$.}, $\\phi _{0}$ is the magnetic flux quantum, and $\\Lambda$ ($\\xi$) is the Pearl depth (coherence length) of the TSC. Although a complete solution of the nucleation condition must include contributions from dipolar and Zeeman energies to total energy of a MML, such a calculation can only be done numerically and Eq.~(\\ref{eqn:nuc}) can still be used as an approximate estimate. For the choice of materials listed in the Appendix, the left side of the equation exceeds the right side by $400\\%$, strongly suggesting the nucleation of a vortex for every skyrmion in the MML. 
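Eq.~(\ref{eqn:nuc}) is straightforward to check numerically for any candidate material stack. A minimal sketch (purely illustrative, dimensionless parameter values -- not the material values of Appendix~\ref{app:A}) that evaluates both sides of the inequality:

```python
import math

def nucleates(d_m, alpha_K, K, r_sk, alpha_A, A, M0, phi0, lam, Lambda, xi):
    """Evaluate the vortex-nucleation inequality, Eq. (nuc).

    Returns (lhs, rhs, condition); all quantities must be supplied in a
    consistent unit system (here: arbitrary, illustrative units).
    """
    lhs = d_m * (alpha_K * K * r_sk**2 / 2 - alpha_A * A - M0 * phi0)
    rhs = phi0**2 / (8 * math.pi**2 * lam) * math.log(Lambda / xi)
    return lhs, rhs, lhs >= rhs

# Illustrative numbers only -- NOT the FeTeSe / Ir/Fe/Co/Ni values.
params = dict(d_m=1.0, alpha_K=1.0, K=1.0, alpha_A=0.1, A=0.1,
              M0=0.05, phi0=1.0, lam=1.0, Lambda=math.e, xi=1.0)

for r_sk in (0.1, 1.0):
    lhs, rhs, ok = nucleates(r_sk=r_sk, **params)
    print(f"r_sk={r_sk}: lhs={lhs:.3f}, rhs={rhs:.4f}, nucleates={ok}")
```

With these illustrative numbers the anisotropy term grows as $r_{sk}^{2}$, so the condition flips from unsatisfied to satisfied as the skyrmion radius increases.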
Furthermore, skyrmions in Ir/Fe/Co/Ni heterostructures have also been experimentally shown to nucleate vortices in niobium superconducting films \cite{petrovic2021skyrmion}.

We proceed to characterize the strength of the skyrmion (Sk)--vortex (Vx) binding force, as it plays a crucial role in determining the feasibility of moving the skyrmion and the vortex as a single object. The spatial magnetic profile of a N\'eel skyrmion is given by $\boldsymbol{M}_{sk} =M_{0}[\zeta \sin\theta(r) \boldsymbol{\hat{r}}+ \cos\theta(r) \boldsymbol{\hat{z}}]$, where $\zeta=\pm 1$ is the chirality and $\theta(r)$ is the polar angle of the magnetization. For $\Lambda\gg r_{sk}$, the interaction energy between a vortex and a skyrmion is given by \cite{NeelInteraction}:
\begin{equation}
 E_{Sk-Vx} =\frac{M_{0} \phi _{0} r_{sk}^{2}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q^2}(e^{-q\tilde{d}}-1) J_{0}(qR) m_{z,\theta}(q) \,dq,
 \label{eqn:energy}
\end{equation}

\noindent where $\tilde{d} = d_m \slash r_{sk}$, $J_{n}$ is the $n$th-order Bessel function of the first kind, and $R=r/r_{sk}$ is the normalized horizontal displacement $r$ between the centers of the skyrmion and the vortex.
$m_{z,\theta}(q)$ contains information about the skyrmion's spatial profile and is given by \cite{NeelInteraction}: $m_{z,\theta}(q) = \int_{0}^{\infty} x [\zeta q + \theta^\prime ( x )] J_{1}( qx) \sin\theta(x) \,dx$, where $\theta ( x )$ is determined by the skyrmion ansatz.

We now derive an expression for the skyrmion--vortex restoring force by differentiating Eq.~(\ref{eqn:energy}) with respect to $r$:
\begin{equation}
 F_{Sk-Vx} =-\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(qR) m_{z,\theta}(q) \,dq.
 \label{eqn:force}
\end{equation}
For small horizontal displacements $r\ll r_{sk}$ between the centers of the skyrmion and the vortex, we can approximate the Sk--Vx energy as:
\begin{equation}
 E_{Sk-Vx} =\frac{1}{2} kr^{2},
 \label{eqn:springconstant}
\end{equation}
\noindent with an effective spring constant
\begin{equation}
 k =\frac{M_{0} \phi _{0}}{4\Lambda }\int_{0}^{\infty} (1- e^{-q\tilde{d}}) m_{z,\theta}(q) \,dq,
 \label{eqn:spring}
\end{equation}
which follows from expanding $J_{1}(qR)\approx qR/2$ to first order in $R$ in Eq.~(\ref{eqn:force}).

Figs.~\ref{fig:energies}--\ref{fig:forces} show the binding energy and the restoring force between a vortex and skyrmions of varying thickness for the materials listed in Appendix~\ref{app:A}. Here we used the domain-wall ansatz for the skyrmion, $\theta(x) = 2\tan^{-1}[\frac{\sinh(r_{sk}/\delta)}{\sinh(r_{sk}x/\delta)}]$, where $r_{sk}/\delta$ is the ratio of the skyrmion radius to its domain-wall width and $x$ is the distance from the center of the skyrmion normalized by $r_{sk}$. As seen in Fig.~\ref{fig:forces}, the restoring force between a skyrmion and a vortex increases with increasing separation between their centers until it reaches a maximum value, $F_{max}$, and then decreases with further increase in separation. We note that $F_{max}$ occurs when the Sk--Vx separation is equal to the radius of the skyrmion, i.e.
when $R=1$ in Eq.~(\ref{eqn:force}):
\begin{equation}
 F_{max} = -\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(q) m_{z,\theta}(q) \,dq.
 \label{eqn:fmax}
\end{equation}

\noindent As the size of the skyrmion increases, the maximum binding force $F_{max}$ of the SVP increases. For a given skyrmion size, increasing the skyrmion thickness increases the attractive force until the thickness reaches the size of the skyrmion. Further increase in MML thickness does not lead to an appreciable increase in stray fields outside the MML layer and, as a result, the Sk--Vx force saturates.

It is important to note that stray fields from a skyrmion nucleate both a vortex and an antivortex (Avx) in the superconducting layer \cite{volkov, PhysRevLett.88.017001, milosevic_guided_2010, PhysRevLett.93.267006}. While the skyrmion attracts the vortex, it repels the antivortex. Eqs.~(\ref{eqn:energy}) and (\ref{eqn:force}) remain valid for the Sk--Avx interaction, but with opposite sign. The equilibrium position of the antivortex is the location where the repulsive skyrmion--antivortex force, $F_{Sk-Avx}$, is balanced by the attractive vortex--antivortex force, $F_{Vx-Avx}$~\cite{lemberger2013theory, ge2017controlled}. Fig.~\ref{fig:fvav} shows $F_{Vx-Avx}$ against $F_{Sk-Avx}$ for the platform in the Appendix. We see that for thicker magnets, the equilibrium location of the antivortex is far away from that of the vortex, where the Avx can be pinned with artificially implanted pinning centers \cite{aichner2019ultradense, gonzalez2018vortex}. For thin magnetic films, where the antivortex is expected to be nucleated just outside the skyrmion radius, we can leverage the Berezinskii--Kosterlitz--Thouless (BKT) transition to negate $F_{Vx-Avx}$ for Vx--Avx distances $r<\Lambda$ \cite{PhysRevB.104.024509, schneider_excess_2014, goldman2013berezinskii, zhao2013evidence}.
Namely, when a Pearl superconducting film is cooled to a temperature below $T_C$ but above $T_{BKT}$, vortices and antivortices dissociate to gain entropy, which minimizes the overall free energy of the system \cite{beasley1979possibility}. While the attractive force between a vortex and an antivortex is nullified, a skyrmion in the MML still attracts the vortex and pushes the antivortex towards the edge of the sample, where it can be pinned. We therefore assume that the antivortices are located far away and neglect their presence in our braiding and readout schemes.

\section{\label{sec:braid}Braiding}

Majorana braiding statistics can be probed by braiding a pair of MBS \cite{RevModPhys.80.1083}, which involves swapping the positions of the two vortices hosting the MBS. We propose to pattern the MML into interconnected Y-junctions, as shown in Fig.~\ref{fig:two}, to enable this swapping. Ohmic contacts in the HM layers across each leg of the Y-junctions enable independent application of charge currents along each leg of the track. These charge currents in turn apply spin-orbit torques on the adjacent magnetic layers and allow skyrmions to be moved independently along each leg of the track. As long as the skyrmion and the vortex move as a collective object, braiding of skyrmions in the MML leads to braiding of the MBS-hosting vortices in the superconducting layer. Below we study the dynamics of an SVP subjected to spin torques for braiding.
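At the level of logistics, exchanging two skyrmions on a Y-junction is a three-step manoeuvre: park one skyrmion on the spare leg, move the other through the junction, then complete the swap. A toy bookkeeping sketch of this move sequence (labels on a graph, no dynamics; leg and skyrmion names purely illustrative):

```python
def exchange(occ, spare):
    """Swap the two occupied legs of a Y-junction using the spare leg.

    occ: dict mapping skyrmion name -> leg label; spare: the empty leg.
    Returns the move sequence; mutates occ into the exchanged configuration.
    """
    (s1, leg1), (s2, leg2) = occ.items()
    moves = [(s1, leg1, spare),   # park skyrmion 1 on the spare leg
             (s2, leg2, leg1),    # move skyrmion 2 through the junction
             (s1, spare, leg2)]   # complete the swap
    for name, _, dst in moves:
        occ[name] = dst
    return moves

occ = {"sk_A": "left", "sk_B": "right"}
moves = exchange(occ, spare="bottom")
print(occ)   # -> {'sk_A': 'right', 'sk_B': 'left'}
```

Each abstract move corresponds to a current pulse through the Ohmic contacts of one leg; a full braid chains such exchanges across the interconnected junctions.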
We calculate all external forces acting on the SVP in the process and discuss the limits in which the skyrmion and the vortex move as a collective object.

For a charge current $\bm{J}$ in the HM layer, the dynamics in the magnetic layer is given by the modified Landau--Lifshitz--Gilbert (LLG) equation \cite{hayashi2014quantitative, slonczewski1996current}:
\begin{equation}
 \partial _{t}\bm{m} =-\gamma (\bm{m} \times {{\bm H}_{eff}} +\eta J\ \bm{m} \times \bm{m} \times \bm{p}) +\alpha \bm{m} \times \partial _{t}\bm{m},
 \label{eqn:llg}
\end{equation}
\noindent where we have included the damping-like term from the SOT and neglected the field-like term, as it does not induce motion of N\'eel skyrmions for our geometry \cite{jiang_blowing_2015}. Here, $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping parameter, and ${{\bm H}_{eff}}$ is the effective field from the dipole, exchange, anisotropy and DMI interactions. $\bm{p}=\mathrm{sgn}(\Theta _{SH})\,\bm{\hat{J}} \times \hat{\bm{n}}$ is the direction of polarization of the spin current, where $\Theta _{SH}$ is the spin Hall angle, $\bm{\hat{J}}$ is the direction of the charge current in the HM layer and $\hat{\bm{n}}$ is the unit vector normal to the MML. $\eta=\hbar \Theta _{SH}/2eM_{0} d_{m}$ quantifies the strength of the torque, where $\hbar$ is the reduced Planck constant and $e$ is the charge of an electron.
Assuming the skyrmion and the vortex move as a collective object, semiclassical equations of motion for their centers of mass can be written using the collective coordinate approach, as done in Ref.~\cite{hals2016composite}:
\begin{eqnarray}
 m_{sk}\ddot{\bm{R}}_{sk}= {\bf{F}}_{SOT} - \frac{\partial U_{sk,\ pin}}{\partial \bm{R}_{sk}} - & {\bm{G}}_{sk}\times \dot{\bm{R}}_{sk} - 4\pi s \alpha \dot{\bm{R}}_{sk} \nonumber \\
 &- k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
 \label{eqn:skmotion}
\end{eqnarray}
and
\begin{eqnarray}
 m_{vx}\ddot{\bm{R}}_{vx} = - \frac{\partial U_{vx,\ pin}}{\partial \bm{R}_{vx}} - &{\bm{G}}_{vx}\times \dot{\bm{R}}_{vx} - {\alpha}_{vx} \dot{\bm{R}}_{vx} \nonumber \\
 & + k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
 \label{eqn:vxmotion}
\end{eqnarray}
\noindent where ${\bm{R}}_{sk}$ (${\bm{R}}_{vx}$), $m_{sk}$ ($m_{vx}$) and $q_{sk}$ ($q_{vx}$) are the position, mass and chirality of the skyrmion (vortex), and $k$ is the effective spring constant of the Sk--Vx system, given in Eq.~(\ref{eqn:spring}). ${\bm{F}}_{SOT}=\pi ^{2} \gamma \eta r_{sk} s\bm{{J}} \times \hat{\bm{n}}$ is the force on the skyrmion due to spin torques in the Thiele formalism, where $s=M_0 d_m/\gamma$ is the spin density \cite{upadhyaya2015electric, thiele1970theory}. The third term on the right side of Eq.~(\ref{eqn:skmotion}) gives the Magnus force on the skyrmion, with ${\bm{G}}_{sk} = 4\pi s q_{sk}\hat{\bm{z}}$, and the fourth term characterizes the dissipative force due to Gilbert damping. Similarly, the second term on the right side of Eq.~(\ref{eqn:vxmotion}) gives the Magnus force on the vortex, with ${\bm{G}}_{vx} = 2\pi s n_{vx} q_{vx} \hat{\bm{z}}$, where $n_{vx}$ is the superfluid density of the TSC, and the third term characterizes a viscous force with friction coefficient ${\alpha}_{vx}$. $U_{sk,\ pin}$ ($U_{vx,\ pin}$) gives the pinning potential landscape for the skyrmion (vortex).
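In the absence of pinning ($U_{pin}=0$), Eqs.~(\ref{eqn:skmotion}) and (\ref{eqn:vxmotion}) can be integrated directly. The sketch below (dimensionless, order-one illustrative parameters, not the material values of Appendix~\ref{app:A}) drives the skyrmion with a constant ${\bm F}_{SOT}$ and shows that, once transients decay, the vortex locks to the skyrmion and both drift with a common velocity set by the balance of the drive against the Magnus and friction terms:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative dimensionless parameters (not material values)
M_SK = M_VX = 1.0          # effective masses
G_SK, G_VX = 2.0, 1.0      # Magnus coefficients (along z-hat)
A_SK, A_VX = 0.5, 0.3      # dissipative coefficients
K = 1.0                    # Sk--Vx spring constant, Eq. (spring)
F_SOT = np.array([1.0, 0.0])

def zcross(g, v):
    """g * (z-hat x v) for a 2D in-plane vector v."""
    return g * np.array([-v[1], v[0]])

def rhs(t, y):
    r_sk, v_sk, r_vx, v_vx = y[0:2], y[2:4], y[4:6], y[6:8]
    spring = K * (r_sk - r_vx)
    a_sk = (F_SOT - zcross(G_SK, v_sk) - A_SK * v_sk - spring) / M_SK
    a_vx = (-zcross(G_VX, v_vx) - A_VX * v_vx + spring) / M_VX
    return np.concatenate([v_sk, a_sk, v_vx, a_vx])

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(8), rtol=1e-8, atol=1e-10)
v_sk, v_vx = sol.y[2:4, -1], sol.y[6:8, -1]
lag = np.linalg.norm(sol.y[0:2, -1] - sol.y[4:6, -1])
print("common drift velocity:", v_sk, "Sk--Vx lag:", lag)
```

The steady-state drift follows from summing the two equations, ${\bm F}_{SOT} = (G_{sk}+G_{vx})\,\hat{\bm z}\times{\bm v} + (\alpha_{sk}+\alpha_{vx}){\bm v}$, and the residual Sk--Vx lag is the spring stretch needed to drag the vortex; the SVP stays bound as long as this stretch stays below the $F_{max}$ separation.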
The last term in Eq.~(\ref{eqn:vxmotion}) represents the restoring force on the vortex due to its separation from the skyrmion and is valid when $\mid{\bm{R}}_{sk}-{\bm{R}}_{vx}\mid \ll r_{sk}$.

Events with reconstructed core distances $> 100$~m were rejected,
corresponding to the area near the fifth telescope currently
not included in the system.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize8.0cm
\epsffile{coreloc.eps}}
\end{center}
\caption
{Distribution of the core locations of events, after the cuts to
enhance the fraction of $\gamma$-rays. Also indicated are the
selection region and the telescope locations.}
\label{fig_core}
\end{figure}
After these cuts, a sample of 11874 on-source events remained, including
a background of 1543 cosmic-ray events, as estimated using the equal-sized
off-source region.

For such a sample of events at TeV energies,
the core location is measured with a
precision of about 6~m to 7~m for events with cores within a
distance of up to 100~m from the central telescope; for larger
distances, the resolution degrades gradually, due to
the smaller angles between the different views
and the reduced image {\em size} (see Fig.~\ref{fig_coreres}).
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{res.ps}}
\end{center}
\caption
{Resolution in the core position as a function of the distance
between the shower core and the central telescope, as determined
from Monte Carlo simulations of $\gamma$-ray showers with
energies between 1 and 2 TeV. The resolution is defined by
fitting a Gaussian to the distribution of differences between the true and
reconstructed coordinates of the shower impact point, projected
onto the $x$ and $y$ axes of the coordinate system.
Due to slight\nnon-Gaussian tails, the rms widths of the distributions are about\n20\\% larger.}\n\\label{fig_coreres}\n\\end{figure}\n\n\\section{The shape of the Cherenkov light pool for $\\gamma$-ray\nevents}\n\nUsing the technique described in the introduction, the intensity\ndistribution in the Cherenkov light pool can now simply be traced\nby selecting events with the shower core at given distance $r_i$ from\na `reference' \ntelescope $i$ and with a fixed image {\\em size} $a_i$, and plotting the\nmean amplitude $a_j$ of telescope $j$ as a function of $r_j$.\nHowever, in this simplest form, the procedure is not very practical,\ngiven the small sample of events remaining after such additional\ncuts. To be able to use a larger sample of events, one has to\n\\begin{itemize}\n\\item select events with $a_i$ in a certain range, $a_{min} < a_i \n< a_{max}$, and plot $a_j/a_i$ vs $r_j$, assuming that the shape of\nthe light pool does not change rapidly with energy, and that one\ncan average over a certain energy range\n\\item repeat the measurement of $a_j(r_j)/a_i$ for different (small) bins \nin $r_i$, and combine these measurements after normalizing the distributions\nat some fixed distance\n\\item Combine the results obtained for different pairs of telescopes $i,j$.\n\\end{itemize}\nCare has to be taken not to introduce a bias due to the trigger\ncondition. For example, one has to ensure that the selection\ncriterion of at least three triggered telescopes is fulfilled regardless\nof whether telescope $j$ has triggered or not, otherwise the selection\nmight enforce a minimum image {\\em size} in telescope $j$. \n\nTo avoid truncation of images by the border of the camera, only images\nwith a maximum distance of $1.5^\\circ$ between the image centroid and\nthe camera center were included, leaving a $0.6^\\circ$ margin to\nthe edge of the field of view. 
Since
the image of the source is offset by $0.5^\circ$ from the camera
center, a maximum distance of $2.0^\circ$ is possible between the source
image and the centroid of the shower image.

Even after these selections, the comparison between data and shower models
is not completely straightforward. One should not, e.g., simply compare
data to the predicted photon flux at ground level, since
\begin{itemize}
\item as is well known, the radial dependence
of the density of Cherenkov light depends on the solid angle over which
the light is collected, i.e., on the field of view of the camera
\item the experimental resolution in the
reconstruction of the shower core position causes a
certain smearing, which is visible in particular near the break
in the light distribution
at the Cherenkov radius
\item the selection of image pixels using the tail cuts results in a
certain loss of photons; this loss is more significant the lower
the intensity in the image and the more diffuse the image.
\end{itemize}
While the distortion in the measured radial distribution of Cherenkov
light due to the latter two effects is relatively modest (see
Fig.~\ref{fig_pool}), a detailed
comparison with Monte Carlo simulations should take these effects into account by
processing Monte-Carlo generated events using the same procedure as
real data, i.e., by plotting the distance to the reconstructed core
position rather than the true core position, and by applying the same
tail cuts, etc.
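The event selection and ratio binning outlined above are simple to implement. The toy sketch below (fully synthetic events, an assumed plateau-plus-falloff light pool, and illustrative numbers only) selects events by image {\em size} in a reference telescope $i$ and recovers the radial profile from the binned mean of $a_j/a_i$, normalized at $r \approx 100$~m:

```python
import numpy as np

rng = np.random.default_rng(0)

def pool(r):
    """Toy light pool: flat plateau inside ~120 m, power-law falloff beyond."""
    r = np.asarray(r, float)
    return np.where(r < 120.0, 1.0, (r / 120.0) ** -3)

# Synthetic events: two telescopes 150 m apart, cores uniform in a square
n = 200_000
tel_i, tel_j = np.array([0.0, 0.0]), np.array([150.0, 0.0])
cores = (rng.random((n, 2)) - 0.5) * 500.0
r_i = np.linalg.norm(cores - tel_i, axis=1)
r_j = np.linalg.norm(cores - tel_j, axis=1)
energy = rng.uniform(100.0, 2000.0, n)
noise = rng.lognormal(0.0, 0.05, (2, n))          # 5% per-image fluctuations
a_i, a_j = energy * pool(r_i) * noise[0], energy * pool(r_j) * noise[1]

# Event selection: plateau distances and a fixed size range in telescope i
sel = (r_i > 50) & (r_i < 120) & (a_i > 100) & (a_i < 200)
ratio, rj = a_j[sel] / a_i[sel], r_j[sel]

# Bin the size ratio in r_j and normalize at r ~ 100 m
edges = np.arange(0.0, 260.0, 20.0)
idx = np.digitize(rj, edges) - 1
prof = np.array([ratio[idx == k].mean() if np.any(idx == k) else np.nan
                 for k in range(len(edges) - 1)])
prof /= prof[np.searchsorted(edges, 100.0) - 1]
print(np.round(prof, 2))
```

In this toy setup the binned, normalized ratio recovers the input pool shape: close to unity on the plateau and falling as $(r/120\,\mathrm{m})^{-3}$ beyond it.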
\n\\begin{figure}[htb]\n\\begin{center}\n\\mbox{\n\\epsfxsize11.0cm\n\\epsffile{mc_final.eps}}\n\\end{center}\n\\caption\n{Radial distribution of Cherenkov light for TeV $\\gamma$-ray\nshowers, for unrestricted aperture of the photon detector (full line),\nfor a $2^\\circ$ aperture (dashed), and\nincluding the full camera simulation and image processing (shaded).\nThe curves are normalized at $r \\approx $100~m.}\n\\label{fig_pool}\n\\end{figure}\n\nFor a first comparison between data and simulation,\nshowers from the zenith (zenith angle between\n$10^\\circ$ and $15^\\circ$) were selected. \nThe range of distances $r_i$ from the shower core \nto the reference telescope was restricted to the plateau region\nbetween 50~m and 120~m. Smaller\ndistances were not used because of the large fluctuations of image\n{\\em size} close to the shower core, and larger distances were excluded\nbecause of the relatively steep variation of light yield with \ndistance. The showers were further selected on an amplitude in the `reference'\ntelescope $i$ between 100 and 200 photoelectrons, corresponding to\na mean energy of about 1.3~TeV. \nContamination of the Mrk 501 on-source data sample by cosmic\nrays was subtracted using an off-source region displaced from\nthe optical axis by the same amount as the source, but in\nthe opposite direction. The measured radial distribution\n(Fig.~\\ref{fig_dat2}(a))\nshows the expected features: a relatively flat plateau out to distances\nof 120~m, and a rapid decrease in light yield for larger distances.\n\nThe errors given in the Figure are purely statistical. To estimate the\ninfluence of systematic errors, one can look at the consistency of\nthe data for different ranges in distance $r_i$ to the `reference' \ntelescope, one can compare results for different telescope combinations,\nand one can study the dependence on the cuts applied. 
Usually,\nthe different data sets were consistent to better than $\\pm 0.05$ units;\nsystematic effects certainly do not exceed a level of $\\pm 0.1$ units. \nWithin these\nerrors, the measured distribution is reasonably well reproduced\nby the Monte-Carlo\nsimulations.\n\n\\begin{figure}[p]\n\\begin{center}\n\\mbox{\n\\epsfysize18.0cm\n\\epsffile{reng1.eps}}\n\\end{center}\n\\caption\n{Light yield as a function of shower energy, for image {\\em size} in \nthe reference telescope between 100 and 200 photoelectrons (a),\n200 and 400 photoelectrons (b), and 400 to 800 photoelectrons (c).\nEvents were selected \nwith a distance range between 50~m and 120~m from the reference telescope,\nfor zenith angles between $10^\\circ$ and $15^\\circ$.\nThe shaded bands indicate the Monte-Carlo results.\nThe distributions are normalized at $r \\approx 100$~m. Only \nstatistical errors are shown.}\n\\label{fig_dat2}\n\\end{figure}\n\\begin{figure}[p]\n\\begin{center}\n\\mbox{\n\\epsfysize20.0cm\n\\epsffile{rall1.eps}}\n\\end{center}\n\\caption\n{Light yield as a function of core distance, for zenith angles between\n$10^\\circ$ and $15^\\circ$ (a), $15^\\circ$ and $25^\\circ$ (b), $25^\\circ$ and\n$35^\\circ$ (c), and $35^\\circ$ and $45^\\circ$ (d). Events were selected \nwith a distance range between 50~m and 120~m from the reference telescope,\nand an image {\\em size} between 100 and 200 photoelectrons in the reference\ntelescope. \nThe shaded bands indicate the Monte-Carlo results.\nThe distributions are normalized at $r \\approx 100$~m.\nOnly statistical errors are shown.}\n\\label{fig_dat3}\n\\end{figure}\n\nShower models predict that the distribution\nof light intensity varies (slowly) with the shower\nenergy and with the zenith angle. 
Fig.~\ref{fig_dat2} compares the
distributions obtained for different {\em size} ranges $a_i$ of
100 to 200, 200 to 400, and 400 to 800 photoelectrons at distances
between 50~m and 120~m, corresponding
to mean shower energies of about 1.3, 2.5, and 4.5 TeV, respectively.
We note that the intensity close to the shower core increases with
increasing energy. This component of the Cherenkov light is generated
by penetrating particles near the shower core. Their number grows
rapidly with increasing shower energy, and correspondingly decreasing
height of the shower maximum. The increase in the mean light intensity
at small distances from the shower core is primarily caused by
the long tail of the distribution of image {\em sizes} towards large {\em sizes}; the
median {\em size} is more or less constant.
The observed trends are well reproduced by the
Monte-Carlo simulations.

The dependence on zenith angle is
illustrated in Fig.~\ref{fig_dat3}, where zenith angles between
$10^\circ$ and $15^\circ$, $15^\circ$ and $25^\circ$, $25^\circ$ and
$35^\circ$, and $35^\circ$ and $45^\circ$ are compared. Events were
again selected for an image {\em size} in the `reference' telescope
between 100 and 200 photoelectrons, in a distance range of 50~m to
120~m\footnote{Core
distance is always measured in the plane perpendicular to the shower
axis.}. The corresponding
mean shower energies for the four ranges in zenith angle are about
1.3~TeV, 1.5~TeV, 2~TeV, and 3~TeV.
For increasing zenith angles, the distribution of Cherenkov light
flattens at small radii, and the diameter of the light pool
increases. Both effects are expected, since for larger zenith
angles the distance between the telescope and the shower maximum
grows, reducing the number of penetrating particles and resulting
in a larger Cherenkov radius.
The simulations properly account for \nthis behaviour.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\mbox{\n\\epsfxsize7.0cm\n\\epsffile{rms.eps}}\n\\end{center}\n\\caption\n{Relative variation in the {\\em size} ratio $a_j/a_i$ as a function\nof $r_j$, for $r_i$ in the range 50~m to 120~m, and for image {\\em size}\nin the `reference' telescope between 100 and 200 photoelectrons.\nFull circles refer to zenith angles between $10^\\circ$ and $15^\\circ$, \nopen circles to zenith angles between $25^\\circ$ and $35^\\circ$.}\n\\label{fig_rms}\n\\end{figure}\nIt is also of some interest to consider the fluctuations of\nimage {\\em size}, $\\Delta(a_j/a_i)$.\nFig.~\\ref{fig_rms} shows the relative rms fluctuation in the\n{\\em size} ratio, as a function of $r_j$, for small ($10^\\circ$ to\n$15^\\circ$) and for larger ($25^\\circ$ and $35^\\circ$) zenith\nangles. The fluctuations are minimal near the Cherenkov radius;\nthey increase for larger distances, primarily due to the smaller\nlight yield and hence larger relative fluctuations in the number\nof photoelectrons. In particular for the small zenith angles,\nthe fluctuations also increase for small radii, reflecting the\nlarge fluctuations associated with the penetrating tail of the\nair showers. For larger zenith angles, this effect is much reduced,\nsince now all shower particles are absorbed well above the telescopes;\nmore detailed studies show that already zenith angles of $20^\\circ$\nmake a significant difference. \n\n\\section{Summary}\n\nThe stereoscopic observation of $\\gamma$-ray induced air showers\nwith the HEGRA Cherenkov telescopes allowed for the first time\nthe measurement of the light distribution in the Cherenkov light \npool at TeV energies, providing a consistency check of one of the\nkey inputs for the calculation of shower energies based on the \nintensity of the Cherenkov images. 
The light distribution shows a
characteristic variation with shower energy and with zenith angle.
Data are well reproduced by the Monte-Carlo
simulations.

\section*{Acknowledgements}

The support of the German Ministry for Research
and Technology BMBF and of the Spanish Research Council
CYCIT is gratefully acknowledged. We thank the Instituto
de Astrofisica de Canarias for the use of the site and
for providing excellent working conditions. We gratefully
acknowledge the technical support staff of Heidelberg,
Kiel, Munich, and Yerevan.

\section{Introduction}
\label{sec:introduction}
A plethora of observations have confirmed the standard $\Lambda$CDM framework as the most economical and successful model describing our current universe.
This simple picture (pressureless dark matter, baryons and a cosmological constant representing the vacuum energy) has been shown to provide an excellent fit to cosmological data.
However, there are a number of inconsistencies that persist and, instead of diluting with improved precision measurements, gain significance~\cite{Freedman:2017yms,DiValentino:2020zio,DiValentino:2020vvd,DiValentino:2020srs,Freedman:2021ahq,DiValentino:2021izs,Schoneberg:2021qvd,Nunes:2021ipq,Perivolaropoulos:2021jda,Shah:2021onj}.

The most exciting (i.e.\ probably not due to systematics) and most statistically significant ($4-6\sigma$) tension in the literature is the so-called Hubble constant tension, which refers to the discrepancy between cosmological predictions and low-redshift estimates of $H_0$~\cite{Verde:2019ivm,Riess:2019qba,DiValentino:2020vnx}.
Within the $\Lambda$CDM scenario, Cosmic Microwave Background (CMB) measurements from the Planck satellite provide a value of $H_0=67.36\pm 0.54$~km s$^{-1}$ Mpc$^{-1}$ at
68\%~CL~\cite{Planck:2018vyg}.
In the nearby universe, local measurements of $H_0$, using the cosmic distance ladder calibration of Type Ia Supernovae with Cepheids, such as those carried out by the SH0ES team, provide a measurement of the Hubble constant $H_0=73.2\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ at 68$\%$~CL~\cite{Riess:2020fzl}.
This problematic $\sim 4\sigma$ discrepancy is aggravated when considering other late-time estimates of $H_0$.
For instance, measurements from the Megamaser Cosmology Project~\cite{Pesce:2020xfe}, or those exploiting Surface Brightness Fluctuations~\cite{Blakeslee:2021rqi}, only exacerbate this tension~\footnote{%
Other estimates are unable to disentangle between nearby-universe and CMB measurements. These include results from the Tip of the Red Giant Branch~\cite{Freedman:2021ahq},
from astrophysical strong lensing observations~\cite{Birrer:2020tax}
or from gravitational wave events~\cite{Abbott:2017xzu}.}.

As previously mentioned, the SH0ES collaboration exploits the cosmic distance ladder calibration of Type Ia Supernovae, which means that these observations do not provide a direct extraction of the Hubble parameter.
More concretely, the SH0ES team measures the absolute peak magnitude $M_B$ of Type Ia Supernovae \emph{standard candles} and then translates these measurements into an estimate of $H_0$ by means of the magnitude-redshift relation of the Pantheon Type Ia Supernovae sample~\cite{Scolnic:2017caz}.
Therefore, strictly speaking, the SH0ES team does not directly extract the value of $H_0$, and there have been arguments in the literature aiming to recast the Hubble constant tension as a Type Ia Supernovae absolute magnitude tension in $M_B$~\cite{Camarena:2019rmj,Efstathiou:2021ocp,Camarena:2021jlr}.
In this regard, late-time exotic cosmologies have been questioned as possible solutions to the Hubble constant tension~\cite{Efstathiou:2021ocp,Camarena:2021jlr}, since within these scenarios it is possible that the
supernova absolute magnitude $M_B$ used to derive the low-redshift estimate of $H_0$ is no longer compatible with the $M_B$ needed to fit supernovae, BAO and CMB data.

A number of studies have advocated using in the statistical analyses a prior on the intrinsic magnitude rather than on the Hubble constant $H_0$~\cite{Camarena:2021jlr,Schoneberg:2021qvd}.
Following the very same logic of these previous analyses, we reassess here the potential of interacting dark matter-dark energy cosmologies~\cite{Amendola:1999er}
in resolving the Hubble constant and/or intrinsic magnitude $M_B$ tensions
(see \cite{Kumar:2016zpg, Murgia:2016ccp, Kumar:2017dnp, DiValentino:2017iww, Yang:2018ubt, Yang:2018euj, Yang:2019uzo, Kumar:2019wfs, Pan:2019gop, Pan:2019jqh, DiValentino:2019ffd, DiValentino:2019jae, DiValentino:2020leo, DiValentino:2020kpf, Gomez-Valent:2020mqn, Yang:2019uog, Lucca:2020zjb, Martinelli:2019dau, Yang:2020uga, Yao:2020hkw, Pan:2020bur, DiValentino:2020vnx, Yao:2020pji, Amirhashchi:2020qep, Yang:2021hxg, Gao:2021xnk, Lucca:2021dxo, Kumar:2021eev,Yang:2021oxc,Lucca:2021eqy,Halder:2021jiv}
and references therein),
by demonstrating explicitly from a full analysis that the results are completely independent of whether a prior on $M_B$ or $H_0$ is assumed (see also the recent~\cite{Nunes:2021zzi}).


\section{Theoretical framework}
\label{sec:theory}
We adopt a flat cosmological model described by the Friedmann-Lema\^{i}tre-Robertson-Walker metric.
A possible parameterization of a dark matter-dark energy interaction is provided by the following expressions~\cite{Valiviita:2008iv,Gavela:2009cy}:

\begin{eqnarray}
 \label{eq:conservDM}
\nabla_\mu T^\mu_{(dm)\nu} &=& Q \,u_{\nu}^{(dm)}/a~, \\
 \label{eq:conservDE}
\nabla_\mu T^\mu_{(de)\nu} &=&-Q \,u_{\nu}^{(dm)}/a~.
\end{eqnarray}
In the equations above, $T^\mu_{(dm)\nu}$ and $T^\mu_{(de)\nu}$ represent the energy-momentum tensors for the dark matter and dark energy
components, respectively, the function $Q$ is the interaction rate between the two dark components, and $u_{\nu}^{(dm)}$ represents the dark matter four-velocity.
In what follows we shall restrict ourselves to the case in which the
interaction rate is proportional to the dark energy density $\rho_{de}$~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{equation}
Q=\ensuremath{\delta{}_{DMDE}}\mathcal{H} \rho_{de}~,
\label{rate}
\end{equation}
where $\ensuremath{\delta{}_{DMDE}}$ is a dimensionless coupling parameter and
$\mathcal{H}=\dot{a}/a$~\footnote{The dot indicates a derivative with respect to conformal time $d\tau=dt/a$.}.
The background evolution equations in the coupled model considered
here read~\cite{Gavela:2010tm}
\begin{eqnarray}
\label{eq:backDM}
\dot{{\rho}}_{dm}+3{\mathcal H}{\rho}_{dm}
&=&
\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~,
\\
\label{eq:backDE}
\dot{{\rho}}_{de}+3{\mathcal H}(1+\ensuremath{w_{\rm 0,fld}}){\rho}_{de}
&=&
-\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~.
\end{eqnarray}
The evolution of the dark matter and dark energy density perturbations and velocity divergence fields is described in \cite{DiValentino:2019jae} and references therein.

It has been shown in the literature that this model is free of instabilities
if the sign of the coupling $\ensuremath{\delta{}_{DMDE}}$ and the sign of $(1+\ensuremath{w_{\rm 0,fld}})$ are opposite,
where $\ensuremath{w_{\rm 0,fld}}$ refers to the dark energy equation of state~\cite{He:2008si,Gavela:2009cy}.
In order to satisfy such stability conditions, we explore three possible scenarios, all of them with a redshift-independent equation of state.
In Model A, the equation of state $\ensuremath{w_{\rm 0,fld}}$ is fixed to $-0.999$.
Consequently, since $(1+\ensuremath{w_{\rm 0,fld}}) >0$, in order to ensure an instability-free perturbation evolution, the dark matter-dark energy coupling
$\\ensuremath{\\delta{}_{DMDE}}$ is allowed to vary in a negative range.\nIn Model B, $\\ensuremath{w_{\\rm 0,fld}}$ is allowed to vary but we ensure that the condition $(1+\\ensuremath{w_{\\rm 0,fld}})>0$ is always satisfied.\nTherefore, the coupling parameter $\\ensuremath{\\delta{}_{DMDE}}$ is also negative.\nIn Model C, instead, the dark energy equation of state is phantom ($\\ensuremath{w_{\\rm 0,fld}}<-1$), therefore the dark matter-dark energy coupling is taken as positive to avoid early-time instabilities.\nWe shall present separately the cosmological constraints for these three models, together with those corresponding to the canonical $\\Lambda$CDM.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{c|c|c}\n Model & Prior $\\ensuremath{w_{\\rm 0,fld}}$ & Prior $\\ensuremath{\\delta{}_{DMDE}}$ \\\\\n \\hline\n A & -0.999 & [-1.0, 0.0]\\\\\n B & [-0.999, -0.333] & [-1.0, 0.0] \\\\\n C & [-3, -1.001]& [0.0, 1.0] \\\\\n \\end{tabular}\n \\caption{Priors of $\\ensuremath{w_{\\rm 0,fld}}$, $\\delta$ in models A, B, C.}\n \\label{tab:priors}\n\\end{table}\n\n\n\\section{Datasets and Methodology}\n\\label{sec:data}\n\nIn this Section, we present the data sets and methodology employed to obtain the observational constraints on the model parameters by performing Bayesian Monte Carlo Markov Chain (MCMC) analyses.\nIn order to constrain the parameters, we use the following data sets:\n\\begin{itemize}\n\\item The Cosmic Microwave Background (CMB) temperature and polarization power spectra from the final release of Planck 2018, in particular we adopt the plikTTTEEE+lowl+lowE likelihood \\cite{Aghanim:2018eyx,Aghanim:2019ame}, plus the CMB lensing reconstruction from the four-point correlation function~\\cite{Aghanim:2018oex}.\n\\item Type Ia Supernovae distance moduli measurements from the \\textit{Pantheon} sample~\\cite{Scolnic:2017caz}. 
These measurements constrain the uncalibrated luminosity distance $H_0d_L(z)$, or in other words the slope of the late-time expansion rate (which in turn constrains the current matter energy density, $\\Omega_{\\rm 0,m}$). We refer to this dataset as \\textit{SN}. \n\\item Baryon Acoustic Oscillations (BAO) distance and expansion rate measurements from the 6dFGS~\\cite{Beutler:2011hx}, SDSS-DR7 MGS~\\cite{Ross:2014qpa}, BOSS DR12~\\cite{Alam:2016hwk} galaxy surveys,\nas well as from the eBOSS DR14 Lyman-$\\alpha$ (Ly$\\alpha$) absorption~\\cite{Agathe:2019vsu} and Ly$\\alpha$-quasars cross-correlation~\\cite{Blomqvist:2019rah}.\nThese consist of isotropic BAO measurements of $D_V(z)/r_d$\n(with $D_V(z)$ and $r_d$ the spherically averaged volume distance and sound horizon at baryon drag, respectively)\nfor 6dFGS and MGS, and anisotropic BAO measurements of $D_M(z)/r_d$ and $D_H(z)/r_d$\n(with $D_M(z)$ the comoving angular diameter distance and $D_H(z)=c/H(z)$ the radial distance)\nfor BOSS DR12, eBOSS DR14 Ly$\\alpha$, and eBOSS DR14 Ly$\\alpha$-quasars cross-correlation. 
\n\\item A gaussian prior on $M_B= -19.244 \\pm 0.037$~mag~\\cite{Camarena:2021jlr}, corresponding to the SN measurements from SH0ES.\n\\item A gaussian prior on the Hubble constant $H_0=73.2\\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ in\nagreement with the measurement obtained by the\nSH0ES collaboration in~\\cite{Riess:2020fzl}.\n\\end{itemize}\nFor the sake of brevity, data combinations are indicated as CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH) and CMB+SN+BAO+$M_B$ (CSBM).\n\nCosmological observables are computed with \\texttt{CLASS}~\\cite{Blas:2011rf,Lesgourgues:2011re}.\nIn order to derive bounds on the proposed scenarios, we modify the efficient and well-known cosmological package \\texttt{MontePython}~\\cite{Brinckmann:2018cvx}, supporting the Planck 2018 likelihood~\\cite{Planck:2019nip}.\nWe make use of CalPriorSNIa, a module for \\texttt{MontePython}, publicly available at \\url{https://github.com/valerio-marra/CalPriorSNIa}, that implements an effective calibration prior on the absolute magnitude of Type Ia Supernovae~\\cite{Camarena:2019moy,Camarena:2021jlr}.\n\n\n\n\\section{Main results and discussion}\n\\label{sec:results}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{H0.pdf} \n\\caption{Posterior distribution of the Hubble parameter in the $\\Lambda$CDM model (black) and in interacting cosmologies, with priors on the parameters as given in Tab.~\\ref{tab:priors}. 
We show the constraints obtained within model A (green), model B (red) and model C (blue)
for the CMB+SN+BAO data combination (solid lines),
CMB+SN+BAO+$H_0$ (dashed lines)
and CMB+SN+BAO+$M_B$ (dotted lines).}
\label{fig:h0}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{0_PlSB-vs-0_PlSBH-vs-0_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within the canonical $\Lambda$CDM picture, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_LCDM}
\end{center}
\end{figure*}

\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.1193\pm0.0010$ & $0.1183\pm0.0009$ & $0.1183_{-0.0009}^{+0.0008}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.6889_{-0.0061}^{+0.0057}$ & $0.6958_{-0.0050}^{+0.0056}$ & $0.6956_{-0.0049}^{+0.0057}$ \\
$\Omega_{\rm 0,m}$ & $0.3111_{-0.0057}^{+0.0061}$ & $0.3042_{-0.0056}^{+0.0050}$ & $0.3044_{-0.0057}^{+0.0049}$ \\
$M_B$ & $-19.42\pm0.01$ & $-19.40\pm0.01$ & $-19.40\pm0.01$ \\
$H_0$ & $67.68_{-0.46}^{+0.41}$ & $68.21_{-0.41}^{+0.42}$ & $68.20_{-0.41}^{+0.41}$ \\
$\sigma_8$ & $0.8108_{-0.0058}^{+0.0061}$ & $0.8092_{-0.0065}^{+0.0060}$ & $0.8090_{-0.0059}^{+0.0064}$ \\
\hline
minimum $\chi^2$ & $3819.46$ & $3836.50$ & $3840.44$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the standard $\Lambda$CDM paradigm.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_LCDM}
\end{table}

We start by discussing the results obtained within the canonical $\Lambda$CDM scenario. Table~\ref{tab:model_LCDM} presents the mean values and the $1\sigma$ errors on a number of different cosmological parameters.
Namely, we show the constraints on
$\omega_{cdm }\equiv\Omega_{0,cdm} h^2$,
the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$,
the current matter energy density $\Omega_{\rm 0,m}$,
the Supernovae Ia intrinsic magnitude $M_B$,
the Hubble constant $H_0$ and the clustering parameter $\sigma_8$,
arising from the three data combinations described above:
CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH) and CMB+SN+BAO+$M_B$ (CSBM).
Interestingly, \emph{all} the parameters experience the very same shift regardless of whether the prior is adopted on the Hubble constant or on the intrinsic Supernovae Ia magnitude $M_B$.
The mean value of $H_0$ coincides for the CSBH and the CSBM data combinations, as one can clearly see from the dashed and dotted black lines in Fig.~\ref{fig:h0}.
Figure~\ref{fig:triangle_LCDM} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities on the parameters shown in Tab.~\ref{tab:model_LCDM}.
Notice that all the parameters are equally shifted when adding the prior on $H_0$ or on $M_B$, except for $\sigma_8$, which remains almost unchanged.
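The numerical equivalence of the two priors can be checked directly. For Type Ia Supernovae, the absolute magnitude and the Hubble constant are tied by $5\log_{10} H_0 = M_B + 5 a_B + 25$, where $a_B$ is the intercept of the SN magnitude-redshift relation. The sketch below is our own illustration: the value $a_B \simeq 0.71273$ from the SH0ES analysis of Riess et al. (2016) is an assumed input, not quoted in the text.

```python
import math

# Intercept of the SN magnitude-redshift relation (assumed input, from
# Riess et al. 2016); M_B prior from Camarena & Marra as quoted in the text.
a_B = 0.71273
M_B, sigma_MB = -19.244, 0.037

# 5 log10(H0) = M_B + 5 a_B + 25  (H0 in km/s/Mpc)
log10_H0 = (M_B + 5.0 * a_B + 25.0) / 5.0
H0 = 10.0 ** log10_H0
sigma_H0 = H0 * math.log(10.0) / 5.0 * sigma_MB  # error propagated from M_B only

print(f"H0 = {H0:.1f} +/- {sigma_H0:.1f} km/s/Mpc")  # ~73.1 +/- 1.2
```

The recovered value, $H_0 \simeq 73.1$~km s$^{-1}$ Mpc$^{-1}$, is indeed consistent with the SH0ES $H_0$ prior, which is why the CSBH and CSBM posteriors coincide.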
Notice also that the value of the current matter density, $\Omega_{\rm 0,m}$, is smaller when a prior from SN measurements is considered:
due to the larger $H_0$ value that these measurements imply, in order to keep the CMB peak structure unaltered, the value of $\Omega_{\rm 0,m}$ must be smaller, so that the physical matter density $\omega_m\equiv\Omega_{\rm 0,m} h^2$ is barely shifted.

\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.107_{-0.005}^{+0.011}$ & $0.09\pm0.01$ & $0.096_{-0.009}^{+0.011}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.723_{-0.028}^{+0.017}$ & $0.758_{-0.024}^{+0.026}$ & $0.754_{-0.028}^{+0.025}$ \\
$\Omega_{\rm 0,m}$ & $0.277_{-0.017}^{+0.028}$ & $0.242_{-0.026}^{+0.024}$ & $0.246_{-0.025}^{+0.028}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.116_{-0.044}^{+0.100}$ & $-0.219_{-0.086}^{+0.083}$ & $-0.203_{-0.087}^{+0.093}$ \\
$M_B$ & $-19.40\pm0.02$ & $-19.38_{-0.01}^{+0.02}$ & $-19.37\pm0.02$ \\
$H_0$ & $68.59_{-0.79}^{+0.65}$ & $69.73_{-0.72}^{+0.71}$ & $69.67_{-0.85}^{+0.75}$ \\
$\sigma_8$ & $0.90_{-0.08}^{+0.04}$ & $1.01_{-0.11}^{+0.08}$ & $1.00_{-0.12}^{+0.07}$ \\
\hline
minimum $\chi^2$ & $3819.86$ & $3831.90$ & $3835.86$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model A, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_A}
\end{table}

\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{A_PlSB-vs-A_PlSBH-vs-A_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model A, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_A}
\end{center}
\end{figure*}

We focus now on Model A, which refers to an interacting cosmology with $\ensuremath{w_{\rm 0,fld}}=-0.999$ and $\ensuremath{\delta{}_{DMDE}}<0$.
Table~\ref{tab:model_A} presents the mean values and the $1\sigma$ errors on the same cosmological parameters listed above, with the addition of the coupling parameter $\ensuremath{\delta{}_{DMDE}}$, for the same three data combinations already discussed.

Notice again that all the parameters are equally shifted to either smaller or larger values, regardless of whether the prior is adopted on $H_0$ or on $M_B$.
In this case the shift on the Hubble parameter is larger than that observed within the $\Lambda$CDM model, as one can notice from the green curves depicted in
Fig.~\ref{fig:h0}.
Interestingly, we observe a $2\sigma$ indication in favor of a non-zero value of the coupling $\ensuremath{\delta{}_{DMDE}}$ when considering the CSBH and the CSBM data combinations.
Indeed, while the value of the minimum $\chi^2$ is almost equal to that obtained in the $\Lambda$CDM framework for the CSB data analyses, when adding either a prior on $H_0$ or on $M_B$
the minimum $\chi^2$ value is \emph{smaller} than that obtained for the standard cosmological picture: therefore, the addition of a coupling \emph{improves} the overall fit.
Figure~\ref{fig:triangle_A} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model A.
Notice that the priors on the Hubble constant and on the intrinsic magnitude lead to the very same shift, and the main conclusion is therefore prior-independent:
there is a $\sim 2\sigma$ indication for a non-zero dark matter-dark energy coupling when considering either $H_0$ or $M_B$ measurements,
\emph{and} the value of the Hubble constant is considerably larger, alleviating the $H_0$ tension.

\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.077_{-0.014}^{+0.036}$ & $0.061_{-0.019}^{+0.034}$ & $0.065_{-0.017}^{+0.036}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.785_{-0.081}^{+0.034}$ & $0.825_{-0.070}^{+0.045}$ & $0.818_{-0.075}^{+0.041}$ \\
$\Omega_{\rm 0,m}$ & $0.215_{-0.034}^{+0.081}$ & $0.174_{-0.044}^{+0.069}$ & $0.182_{-0.041}^{+0.075}$ \\
$\ensuremath{w_{\rm 0,fld}}$ & $-0.909_{-0.090}^{+0.026}$ & $-0.917_{-0.082}^{+0.026}$ & $-0.918_{-0.081}^{+0.026}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.35_{-0.14}^{+0.26}$ & $-0.45_{-0.16}^{+0.22}$ &
$-0.43_{-0.15}^{+0.24}$ \\
$M_B$ & $-19.41\pm0.02$ & $-19.38\pm0.02$ & $-19.38\pm0.02$ \\
$H_0$ & $68.28_{-0.85}^{+0.79}$ & $69.68_{-0.75}^{+0.71}$ & $69.57_{-0.76}^{+0.75}$ \\
$\sigma_8$ & $1.30_{-0.51}^{+0.01}$ & $1.60_{-0.76}^{+0.06}$ & $1.53_{-0.71}^{+0.03}$ \\
\hline
minimum $\chi^2$ & $3819.96$ & $3832.28$ & $3836.24$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$,
the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model B, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_B}
\end{table}

\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{B_PlSB-vs-B_PlSBH-vs-B_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model B, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_B}
\end{center}
\end{figure*}

Focusing now on Model B, which assumes a negative coupling $\ensuremath{\delta{}_{DMDE}}$ and a constant, but freely varying, dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$ within the $\ensuremath{w_{\rm 0,fld}}>-1$ region,
we notice again the same shift in the cosmological parameters, regardless of whether the prior is imposed on the Hubble constant ($H_0$) or on the Supernovae Ia intrinsic magnitude ($M_B$), as can be noticed from Tab.~\ref{tab:model_B}.
As in Model A, the value of
$H_0$ in this interacting cosmology is larger than within the $\\Lambda$CDM framework (see the red curves in Fig.~\\ref{fig:h0}),\nalbeit slightly smaller than in Model A, due to the strong anti-correlation between $\\ensuremath{w_{\\rm 0,fld}}$ and $H_0$~\\cite{DiValentino:2016hlg,DiValentino:2019jae}.\nConsequently, a larger value of $\\ensuremath{w_{\\rm 0,fld}}>-1$ implies a lower value of $H_0$.\nNevertheless, a $2\\sigma$ preference for a non-zero value of the dark matter-dark energy coupling is present also in this case, and also when the CSB dataset is considered:\nfor the three data combinations presented here, there is always a preference for a non-zero dark matter-dark energy coupling. \nNotice that the minimum $\\chi^2$ in Model B is smaller than that corresponding to the minimal $\\Lambda$CDM framework, but slightly larger than that of Model A, which is nested in Model B. The differences between the minimum $\\chi^2$ in Model A and Model B, however, are small\nenough to be considered as numerical fluctuations. Since, as previously stated, $\\ensuremath{w_{\\rm 0,fld}}$ and $H_0$ are strongly anti-correlated, a more negative value of the dark energy equation of state (i.e.\\ $\\ensuremath{w_{\\rm 0,fld}}=-0.999$ as in Model A, close to the prior limit) is preferred by both the CSBH and the CSBM data combinations. 
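A useful closed form helps to visualize this degeneracy. For constant $\ensuremath{w_{\rm 0,fld}}$ and $\ensuremath{\delta{}_{DMDE}}$, Eq.~(\ref{eq:backDE}) integrates directly to
\begin{equation}
\rho_{de}(a)=\rho_{de,0}\, a^{-3(1+\ensuremath{w_{\rm 0,fld}})-\ensuremath{\delta{}_{DMDE}}}~,
\end{equation}
so that, at the background level, the coupling simply shifts the effective dark energy equation of state, $w_{\rm eff}=\ensuremath{w_{\rm 0,fld}}+\ensuremath{\delta{}_{DMDE}}/3$. A negative coupling therefore mimics a more phantom-like fluid, which pulls $H_0$ upwards, in line with the results discussed above.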

In Fig.~\ref{fig:triangle_B} we depict the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained for Model B.
Comparing with Fig.~\ref{fig:triangle_LCDM}, and confronting the mean values of Tab.~\ref{tab:model_B} with those shown in Tab.~\ref{tab:model_LCDM} (and, to a minor extent, with those in Tab.~\ref{tab:model_A}),
one can notice that the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much larger.
The reason for this is related to the lower value of the present matter energy density $\Omega_{\rm 0,m}$ (the values are also shown in the tables), which is required within interacting cosmologies when the dark matter-dark energy coupling is negative.
In a universe with a negative dark coupling, indeed, there is an energy flow from dark matter to dark energy.
Consequently, the (dark) matter content in the past is higher than in the standard $\Lambda$CDM scenario, and the amount of intrinsic (dark) matter needed today is lower, because of the extra contribution from the dark energy sector.
In a flat universe, this translates into a much higher value of $\ensuremath{\Omega_{\rm 0,fld}}$.
On the other hand, a lower value of $\Omega_{\rm 0,m}$ requires a larger value of the clustering parameter $\sigma_8$ to satisfy the overall normalization of the matter power spectrum. In any case, we find again that the addition of a prior on either $H_0$ or $M_B$ leads to exactly the very same shift of all the cosmological parameters.
Therefore, Model B also provides an excellent solution to the Hubble constant tension,
although at the expense of a very large $\sigma_8$.
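The energy-flow argument above can be made quantitative with a small numerical sketch (our own illustration; the parameter values are indicative only, loosely inspired by the Model A/B posteriors). Integrating the background equations (\ref{eq:backDM})-(\ref{eq:backDE}), rewritten in $x=\ln a$, backwards from today shows that for a negative coupling the comoving dark matter density $\rho_{dm}a^3$ was indeed higher in the past:

```python
import math

# Background evolution of Eqs. (backDM)/(backDE) in x = ln a:
#   d rho_dm/dx = -3 rho_dm + delta * rho_de
#   d rho_de/dx = -[3(1+w) + delta] * rho_de
# For delta < 0, the comoving dark matter density rho_dm * a^3 was higher in
# the past than today: energy flows from dark matter to dark energy.
# All parameter values below are illustrative assumptions.
def past_comoving_dm_ratio(delta=-0.2, w=-0.999, om_dm=0.26, om_de=0.69,
                           a_past=0.01, steps=200_000):
    rho_dm, rho_de = om_dm, om_de       # in units of the critical density today
    h = math.log(a_past) / steps        # negative step: integrate into the past
    for _ in range(steps):              # simple Euler integration
        d_dm = -3.0 * rho_dm + delta * rho_de
        d_de = -(3.0 * (1.0 + w) + delta) * rho_de
        rho_dm += h * d_dm
        rho_de += h * d_de
    return (rho_dm * a_past ** 3) / om_dm  # comoving DM density, past vs today

print(past_comoving_dm_ratio())            # ~1.17 for delta = -0.2
print(past_comoving_dm_ratio(delta=0.0))   # ~1.00: no coupling, pure a^-3 dilution
```

For a coupling of order the best-fit values above, the comoving dark matter density at $a=0.01$ exceeds today's by roughly 17\%, which is precisely why less intrinsic dark matter is needed today and $\ensuremath{\Omega_{\rm 0,fld}}$ comes out larger.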
\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|c|} \n\\hline \nParameter & CSB & CSBH & CSBM \\\\\n\\hline\n$\\omega{}_{cdm }$ & $0.138_{-0.015}^{+0.008}$ & $0.137_{-0.016}^{+0.007}$ & $0.135_{-0.013}^{+0.008}$ \\\\\n$\\ensuremath{\\Omega_{\\rm 0,fld}}$ & $0.655_{-0.021}^{+0.032}$ & $0.671_{-0.018}^{+0.031}$ & $0.675_{-0.018}^{+0.027}$ \\\\\n$\\Omega_{\\rm 0,m}$ & $0.345_{-0.032}^{+0.021}$ & $0.329_{-0.031}^{+0.018}$ & $0.325_{-0.027}^{+0.018}$ \\\\\n$\\ensuremath{w_{\\rm 0,fld}}$ & $-1.087_{-0.042}^{+0.051}$ & $-1.131_{-0.044}^{+0.053}$ & $-1.117_{-0.044}^{+0.048}$ \\\\\n$\\ensuremath{\\delta{}_{DMDE}}$ & $0.183_{-0.180}^{+0.061}$ & $0.173_{-0.170}^{+0.051}$ & $0.150_{-0.150}^{+0.051}$ \\\\\n$M_B$ & $-19.41\\pm0.02$ & $-19.38\\pm0.02$ & $-19.37\\pm0.02$ \\\\\n$H_0$ & $68.29_{-0.91}^{+0.66}$ & $69.74_{-0.73}^{+0.75}$ & $69.67_{-0.77}^{+0.78}$ \\\\\n$\\sigma_8$ & $0.735_{-0.057}^{+0.045}$ & $0.748_{-0.041}^{+0.068}$ & $0.755_{-0.047}^{+0.051}$ \\\\\n\\hline\nminimum $\\chi^2$ & $3818.24$ & $3830.56$ & $3835.10$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Mean values and 68\\% CL errors on $\\omega_{cdm }\\equiv\\Omega_{cdm} h^2$, the current dark energy density $\\ensuremath{\\Omega_{\\rm 0,fld}}$, the current matter energy density $\\Omega_{\\rm 0,m}$, the dark energy equation of state $\\ensuremath{w_{\\rm 0,fld}}$,\nthe dimensionless dark matter-dark energy coupling $\\ensuremath{\\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\\sigma_8$ within the interacting model C, see Tab.~\\ref{tab:priors}.\nWe also report the minimum value of the $\\chi^2$ function obtained for each of the data combinations.}\n\\label{tab:model_C}\n\\end{table}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{C_PlSB-vs-C_PlSBH-vs-C_PlSBM_triangle.pdf} \n\\caption{68\\% CL and 95\\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological 
parameters within model C, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_C}
\end{center}
\end{figure*}

Finally, Tab.~\ref{tab:model_C} shows the mean values and the $1\sigma$ errors on the usual cosmological parameters explored throughout this study, for Model C.
Notice that this model benefits both from its interacting nature and from the fact that $\ensuremath{w_{\rm 0,fld}}<-1$ and $\ensuremath{\delta{}_{DMDE}}>0$.
Both features of the dark energy sector have been shown to be excellent solutions to the Hubble constant problem.
As in the previous cases, the shift in the cosmological parameters induced by the addition of a prior is independent of its nature, i.e.\ it is independent of whether a prior on $H_0$ or on $M_B$ is adopted.
Within this model, the value of the Hubble constant is naturally larger than within the $\Lambda$CDM model (see the blue lines in Fig.~\ref{fig:h0}),
regardless of the data sets assumed in the analyses.
Despite its phantom nature, required in this particular case ($\ensuremath{w_{\rm 0,fld}}<-1$) to ensure an instability-free evolution of the perturbations, Model C provides the \emph{best fits to all of the data combinations explored here, performing even better than} the minimal $\Lambda$CDM picture,
as one can clearly notice from the last row of Tab.~\ref{tab:model_C}.
This fact makes Model C a very attractive cosmological scenario which can provide a solution to the long-standing $H_0$ tension.
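To put the fit improvements and the extra parameters on the same footing, one can compare Akaike information criterion values built from the minimum $\chi^2$ of Tabs.~\ref{tab:model_LCDM}-\ref{tab:model_C}. This AIC comparison ($\mathrm{AIC}=\chi^2_{\rm min}+2k$) is our own illustrative addition, not part of the analysis above:

```python
# Minimum chi^2 values copied from the tables; k counts the parameters each
# interacting model has beyond LambdaCDM. The AIC penalty (AIC = chi2 + 2k)
# is an illustrative choice of ours, not used in the paper's analysis.
CHI2 = {
    "LCDM": {"CSB": 3819.46, "CSBH": 3836.50, "CSBM": 3840.44},
    "A":    {"CSB": 3819.86, "CSBH": 3831.90, "CSBM": 3835.86},
    "B":    {"CSB": 3819.96, "CSBH": 3832.28, "CSBM": 3836.24},
    "C":    {"CSB": 3818.24, "CSBH": 3830.56, "CSBM": 3835.10},
}
EXTRA_PARAMS = {"A": 1, "B": 2, "C": 2}

def delta_aic(model, data):
    """AIC(model) - AIC(LCDM); negative values favour the interacting model."""
    dchi2 = CHI2[model][data] - CHI2["LCDM"][data]
    return dchi2 + 2 * EXTRA_PARAMS[model]

for model in ("A", "B", "C"):
    print(model, {d: round(delta_aic(model, d), 2) for d in ("CSB", "CSBH", "CSBM")})
```

With the $H_0$ or $M_B$ priors included, all three interacting models remain (mildly) favoured even after penalizing the additional degrees of freedom, while for the CSB combination alone the penalty tips the balance back towards $\Lambda$CDM.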
We must remember, however, that Model C has two more degrees of freedom than the standard $\Lambda$CDM paradigm.
Figure~\ref{fig:triangle_C} illustrates the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model C.
Notice that here the situation is just the opposite of that of Model B: the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much smaller than in standard scenarios,
due to the larger value required for the present matter energy density $\Omega_{\rm 0,m}$ when the dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}>0$ and $\ensuremath{w_{\rm 0,fld}}<-1$.
This larger value of the present matter energy density also implies a lower value of the clustering parameter $\sigma_8$, in contrast to what was required within Model B.


\section{Final Remarks}
\label{sec:conclusions}

In this study we have reassessed the ability of interacting dark matter-dark energy cosmologies to alleviate the long-standing and highly significant Hubble constant tension.
Although in the past these models have been shown to provide an excellent solution to the discrepancy between local measurements and high-redshift, Cosmic Microwave Background estimates of $H_0$, recent works in the literature have questioned their effectiveness, arguing that adopting a prior on $H_0$ misinterprets the SH0ES data, which do not directly measure the value of $H_0$.
We have therefore assessed the ability of interacting cosmologies to reduce the Hubble tension by means of two different priors in the cosmological analyses:
a prior on the Hubble constant and, separately, a prior on the Type Ia Supernova absolute magnitude.
We combine these priors with Cosmic Microwave Background (CMB), Type Ia Supernovae (SN) and Baryon Acoustic Oscillation (BAO) measurements,
showing that the constraints on the cosmological parameters are independent of the choice of prior, and that the Hubble constant tension
is always alleviated.
Furthermore, one of the interacting cosmologies considered here,
of a phantom nature, provides a better fit than the canonical $\Lambda$CDM framework for all the considered data combinations, albeit with two extra degrees of freedom.
We therefore conclude that interacting dark matter-dark energy cosmologies still provide a very attractive and viable theoretical and phenomenological scenario
in which the Hubble constant tension is robustly relieved,
regardless of the method one adopts to process the SH0ES data.


\begin{acknowledgments}
\noindent
SG acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754496 (project FELLINI).
EDV is supported by a Royal Society Dorothy Hodgkin Research Fellowship.
OM is supported by the Spanish grants PID2020-113644GB-I00, PROMETEO/2019/083 and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019//860881-HIDDeN).
RCN acknowledges financial support from the Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP, S\~{a}o Paulo Research Foundation) under the project No.
2018/18036-5.
\end{acknowledgments}


\section{Introduction} \label{sec:introduction} \input{introduction}
\section{Related Work} \label{sec:related_work} \input{relatedWork}
\section{Model Description} \label{sec:model} \input{modelDescription}
\section{Experiments} \label{sec:experiments} \input{experiments}
\section{Conclusions and Future Work} \label{sec:conclusions} \input{conclusion}

{\small
\textbf{Acknowledgements}
\input{acknowledgements}
}

{\small
\bibliographystyle{ieee}

\subsection{Composable Activities Dataset} \label{subsec:composableActivities}

\subsection{Inference of per-frame annotations.}
\label{subsec:action_annotation}
The hierarchical structure and compositional properties of our model enable it to output a predicted global activity, as well as per-frame annotations of predicted atomic actions and poses for each body region.
It is important to highlight that no prior temporal segmentation of atomic actions is needed to generate the per-frame annotations. Also, no post-processing of the output is performed. The ability of our model to produce per-frame annotated data, enabling temporal and spatial action detection, makes it unique.

Figure \ref{fig:annotation} illustrates the capability of our model to provide per-frame annotation of the atomic actions that compose each activity. The accuracy of the mid-level action prediction can be evaluated as in \cite{Wei2013}. Specifically, we first obtain segments of the same predicted action in each sequence, and then compare these segments with ground truth action labels.
The estimated label of a segment is assumed correct if the detected segment is completely contained in a ground truth segment with the same label, or if the Jaccard index between the segment and the ground truth segment is greater than 0.6. Using these criteria, the accuracy of the mid-level actions is 79.4\%. In many cases, the wrong action prediction is only highly local in time or space, and the model is still able to correctly predict the activity label of the sequence. Taking only the videos whose global activity is correctly predicted, the accuracy of action labeling reaches 83.3\%. When considering this number, it is important to note that not every ground truth action label is accurate: the videos were hand-labeled by volunteers, so there is a chance of mistakes in the exact temporal boundaries of the actions. Indeed, in our experiments we observe cases where the predicted labels show more accurate temporal boundaries than the ground truth.

\begin{figure*}[th]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_all_sequences_red.pdf}
\end{center}
\caption{Per-frame predictions of atomic actions for selected activities, showing 20 frames of each video. Each frame is joined with the predicted action annotations of left arm, right arm, left leg and right leg.
Besides the prediction of the global activity of the video, our algorithm is able to correctly predict the atomic actions that compose each activity in each frame, as well as the body regions that are active during the execution of the action. Note that in the example video of the activity \emph{Walking while calling with hands}, the \emph{calling with hands} action is correctly annotated even when the subject changes the waving hand during the execution of the activity.}
\label{fig:annotation}
\end{figure*}

\subsection{Robustness to occlusion and noisy joints.}
Our method is also capable of inferring action and activity labels even if some joints are not observed. This is a common situation in practice, as body motions induce temporal self-occlusions of body regions. Nevertheless, due to the joint estimation of poses, actions, and activities, our model is able to reduce the effect of this problem. To illustrate this, we simulate a totally occluded region by fixing its geometry to the position observed in the first frame. The region to be completely occluded in each sequence is selected uniformly at random. In this scenario, the accuracy of our preliminary model in \cite{Lillo2014} drops by 7.2\%. Using our new SR setup including NI handling, the accuracy only drops by 4.3\%, showing that the detection of non-informative poses helps the model to deal with occluded regions. In fact, as we show in Section \ref{subsec:exp_non_info_handling}, many of the truly occluded regions in the videos are identified using NI handling. In contrast, the drop in performance of BoW is 12.5\% and of HMM 10.3\%: simpler models are less capable of robustly dealing with occluded regions, since their pose assignments rely only on the descriptor itself, while in our model the assigned pose depends on the descriptor, the sequences of poses and actions, and the activity evaluated, making inference more robust. Fig.
\ref{fig:occlusions} shows some qualitative results for occluded regions.

\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./subject_1_6.pdf} \\
{\footnotesize Right arm occluded} \\
\includegraphics[width=0.999\linewidth]{./subject_1_23.pdf}\\
{\footnotesize Left leg occluded} \\
\includegraphics[width=0.999\linewidth]{./subject_1_8.pdf}\\
{\footnotesize Left arm occluded}\\
\end{center}
\caption{The occluded body regions are depicted in light blue. When an arm or leg is occluded, our method still provides a good estimation of the underlying actions in each frame.}
\label{fig:occlusions}
\end{figure}

In terms of noisy joints, we add random Gaussian noise to the 3D joint locations of the testing videos, using the SR setup and the GEO descriptor, to isolate the effect of joint noise from that of the motion descriptor. Figure \ref{fig:joint_noise} shows the accuracy on the testing videos as a function of the noise dispersion $\sigma_{noise}$, measured in inches. For small amounts of noise, the model accuracy is barely affected, as expected given the robustness of the geometric descriptor. However, for more drastic noise added to every joint, the accuracy drops dramatically. This behavior is expected, since with highly noisy joints the model can no longer predict the sequence of actions and poses well.

\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_acc_vs_noise.pdf} \\
\end{center}
\caption{Performance of our model in the presence of simulated Gaussian noise in every joint, as a function of $\sigma_{noise}$ measured in inches. When the average noise is below 3 inches, the model performance is not much affected, while for larger noise dispersion the model accuracy is drastically
It is important no note that in our simulation, every joint is\naffected to noise, while in a real setup, noisy joint estimation tend to occur\nmore rarely. } \\label{fig:joint_noise}\n\\end{figure}\n\n\\subsection{Early activity prediction.}\nOur model needs the complete video to make an accurate activity and action\nprediction of a query video. In this section, we analyze the number of frames\n(as a percentage of a complete activity sequence) needed\nto make an accurate activity prediction. Figure \\ref{fig:accuracy_reduced_frames}\nshows the mean accuracy over the dataset (using leave-one-subject-out\ncross-validation) in function of the\npercentage of frames used by the classifier to label each video. We note that\nconsidering 30\\% of the frames, the classifier performs reasonable predictions,\nwhile 70\\% of frames are needed to closely match the\naccuracy of using all frames.\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.999\\linewidth]{./fig_acc_vs_frame_reduction.pdf}\n\\end{center}\n\\caption{Accuracy of activity recognition versus percentage of frames used in\nComposable Activities dataset. In general, 30\\% of the frames are needed to\nperform reasonable predictions, while 70\\% of frames are needed to closely match the\naccuracy of using all frames.}\n\\label{fig:accuracy_reduced_frames}\n\\end{figure}\n\n\\subsection{Failure cases.}\n\nWe also study some of the failure cases that we observe during the\nexperimentation with our model.\nFigure \\ref{fig:errors} shows some error cases. It is interesting that\nthe sequences are confusing even for humans when only the skeleton is available\nas in the figure. 
These errors are unlikely to be overcome by the model
itself; doing so will require additional sources of information, such as object
detectors capable of distinguishing a cup from a cellphone, as in the
third row of Figure \ref{fig:errors}.

\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]
{./sbj1_1.pdf} \\
{\footnotesize Ground truth: Walking while calling with hands\\
Prediction: Walking while waving hand} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_4.pdf}\\
{\footnotesize Ground truth: Composed activity 1\\
Prediction: Talking on cellphone and drinking} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_6.pdf}\\
{\footnotesize Ground truth: Waving hand and drinking\\
Prediction: Talking on cellphone and scratching head} \\
\end{center}
\caption{Failure cases. Our algorithm tends to confuse activities that share very similar
body postures.}
\label{fig:errors}
\end{figure}


\begin{comment}
\subsubsection{New activity characterization}
As we mentioned in a previous section, our model, using sparse regularization and
non-negative weights on activity ($\alpha$) classifiers and action ($\beta$)
classifiers, does not \emph{punish} poses that have no influence on the
activities. For this reason, our model is able to represent a new composed activity
by just combining the coefficients of two known activities, leaving the rest of
the parameters of the model untouched. We use a heuristic approach to combine
two models: given two classes $c_1$ and $c_2$, their coefficients for a region
$r$ and action $a$ are $\alpha^r_{c_1,a}$ and $\alpha^r_{c_2,a}$
respectively.
For a new class $c_{new}$ composed of classes $c_1$ and $c_2$, we
use the mean value of the coefficients
\begin{equation}
\alpha^r_{{c_{new},a}} = \frac{(\alpha^r_{c_1,a} + \alpha^r_{c_2,a})}{2}
\end{equation}
only when the corresponding coefficients are positive; otherwise, we
use the maximum value of the two coefficients. For all subjects of the dataset,
we create all the combinations of two activities, and test the new model
using three composed videos per subject. The average accuracy of activity
$16+1$ is 90.2\%, and on average the activities that compose the new activity
drop their accuracy by 12.3\%, showing that we effectively incorporate a new
composed activity into the model at the small cost of some additional confusion over
the original activities. Moreover, the accuracy of action labeling for the new
class is 74.2\%, similar to the action labeling accuracy of the
original model, so we can effectively transfer the learning of atomic action
classifiers to new compositions of activities.

\begin{table}
\begin{tabular}
\hline
Activity group & Accuracy of new class & \\
\hline
Simple & 92.
Complex & 87.2\% & \\
\hline
All & 90.2\% & \\
\end{tabular}
\caption{}
\label{tab:acc_new_class}
\end{table}

\end{comment}




\subsection{Classification of Simple and Isolated Actions}

As a first experiment,
we evaluate the performance of our model on the task of simple and
isolated human action recognition in the MSR-Action3D dataset
\cite{WanLi2010}.
Although our model is tailored to recognizing complex
actions, this experiment verifies its performance in the
simpler scenario of isolated atomic action classification.

The MSR-Action3D dataset provides pre-trimmed depth videos and estimated body poses
for isolated actors performing actions from 20
categories.
We use 557 videos
in a setup similar to
\cite{Wang2012}, where videos from subjects 1, 3, 5, 7, 9 are used for
training and the rest for testing. Table \ref{tab:msr3d} shows that in this
dataset our model achieves classification accuracies comparable to
state-of-the-art methods.

\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 93.0\% \\
\hline
L. Tao \etal \cite{Tao2015} & 93.6\% \\
C. Wang \etal \cite{Wang2013} & 90.2\% \\
Vemulapalli \etal \cite{Vemulapalli2014} & 89.5\% \\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the MSR-Action3D
dataset.}
\label{tab:msr3d}
\end{table}




\subsection{Detection of Concurrent Actions}
Our second experiment evaluates the performance of our model in a concurrent
action recognition setting. In this scenario, the goal is to predict
the temporal localization of actions that may occur concurrently in a long
video. We evaluate this task on the Concurrent Actions dataset \cite{Wei2013},
which
provides 61 RGBD videos and pose estimation data annotated with 12
action categories.
We follow an evaluation setup similar to the one proposed by the authors.
We split the dataset into training and testing sets with a 50\%-50\% ratio.
We evaluate performance by measuring precision-recall: a detected action
is declared a true positive if its temporal overlap with the ground
truth action interval is larger than 60\% of their union, or if
the detected interval is completely covered by the ground truth annotation.

Our model is tailored to recognizing complex actions that are composed
of atomic components. However, in this scenario, only atomic actions are
provided and no compositions are explicitly defined.
Therefore, we apply
a simple preprocessing step: we cluster the training videos into groups
by comparing the occurrence of atomic actions within each video.
The resulting groups are used as complex action labels for the training
videos of this dataset.
At inference time, our model outputs a single labeling per video,
which corresponds to the atomic action labeling that maximizes the energy of
our model.
Since there are no thresholds to adjust, our model produces the single
precision-recall measurement reported in Table \ref{tab:concurrent}.
Our model outperforms the state-of-the-art method on this
dataset at that recall level.

\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|}
\hline
\textbf{Algorithm} & \textbf{Precision} & \textbf{Recall}\\
\hline
Our full model & 0.92 & 0.81 \\
\hline
Wei \etal \cite{Wei2013} & 0.85 & 0.81 \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Concurrent Actions dataset. }
\label{tab:concurrent}
\end{table}

\subsection{Recognition of Composable Activities}
In this experiment, we evaluate the performance of our model in recognizing
complex and composable human actions. In the evaluation, we use the Composable
Activities dataset \cite{Lillo2014},
which provides 693 videos of 14 subjects performing 16 activities.
Each activity is a spatio-temporal composition of atomic actions.
The dataset provides a total of 26 atomic actions that are shared across
activities. We train our model under two levels of supervision:
i) spatial annotations that map body regions to the execution of each action are made available;
and ii) spatial supervision is not available, and the labels $\vec{v}$ that assign spatial regions to actionlets
are treated as latent variables.

Table \ref{tab:composable} summarizes our results. We observe that under both
training conditions, our model achieves comparable performance.
This indicates
that our weakly supervised model can recover part of the missing
information while still performing well at the activity categorization task.
In spite of using less
supervision at training time, our method outperforms state-of-the-art
methods that are trained with full spatial supervision.


\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Base model + GC, GEO desc. only, spatial supervision & 88.5\%\\
Base model + GC, with spatial supervision & 91.8\% \\
Our full model, no spatial supervision (latent $\vec{v}$) & 91.1\%\\
\hline
Lillo \etal \cite{Lillo2014} (without GC) & 85.7\% \\
Cao \etal \cite{cao2015spatio} & 79.0\% \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Composable Activities
dataset.}
\label{tab:composable}
\end{table}

\subsection{Action Recognition in RGB Videos}
Our experiments so far have evaluated the performance of our model
on the task of human action recognition in RGBD videos.
In this experiment, we explore the use of our model for human
action recognition in RGB videos. For this purpose, we use the sub-JHMDB
dataset \cite{Jhuang2013}, which focuses on videos depicting 12 actions in
which most of the actor's body is visible in the image frames.
In our validation, we use the 2D body pose configurations provided by the
authors and compare against previous methods that also use them. Given that
this dataset only includes 2D image coordinates for each body joint, we obtain
the geometric descriptor by adding a depth coordinate with value $z = d$ for
joints corresponding to wrists and knees, $z = -d$ for elbows, and $z = 0$ for all other joints,
so that we can compute angles between segments; the value $d = 30$ is fixed via cross-validation.
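This pseudo-depth lifting can be sketched in a few lines; the snippet below is a minimal illustration in Python, where the joint indices, array layout, and function names are our assumptions rather than the paper's actual code:

```python
import numpy as np

# Hypothetical joint ordering; the exact indices depend on the skeleton format.
WRIST_KNEE = [4, 7, 10, 13]   # assumed indices of wrists and knees
ELBOW = [3, 6]                # assumed indices of elbows

def lift_to_pseudo_3d(joints_2d, d=30.0):
    """Append a fixed pseudo-depth so that angles between limb segments
    are well defined even though only 2D coordinates are observed."""
    n = joints_2d.shape[0]
    z = np.zeros(n)
    z[WRIST_KNEE] = d   # wrists and knees pushed forward
    z[ELBOW] = -d       # elbows pushed backward
    return np.column_stack([joints_2d, z])

def segment_angle(p_a, p_b, p_c):
    """Angle at joint p_b formed by segments (p_b -> p_a) and (p_b -> p_c)."""
    u, v = p_a - p_b, p_c - p_b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

With this lifting, collinear 2D segments no longer yield degenerate angles, which is the purpose of the fixed offset $d$.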
We summarize the results in Table
\ref{tab:subjhmdb},
which shows that our method outperforms alternative state-of-the-art techniques.



\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 77.5\% \\
\hline
Jhuang \etal \cite{Jhuang2013} & 75.6\% \\
Ch\'eron \etal \cite{Cheron2015} & 72.5\%\\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the sub-JHMDB dataset.}
\label{tab:subjhmdb}
\end{table}


\subsection{Spatio-temporal Annotation of Atomic Actions}
In this experiment, we study the ability of our model to provide spatial and
temporal annotations of relevant atomic actions. Table \ref{tab:annotation}
summarizes our results. We report precision-recall rates
for the spatio-temporal annotations predicted by our model in the
testing videos (first and second rows). Notice that this is a
very challenging task: the testing videos do not provide any labels, and
the model needs to predict both the temporal extent of each action and the
body regions associated with the execution of each action. Despite the
difficulty of the task, our model shows satisfactory results, inferring
suitable spatio-temporal annotations.

We also study the capability of the model to provide spatial and temporal
annotations during training. In a first experiment, each video
is provided
with the temporal extent of each action, so the model only needs to infer the
spatial annotations (third row in Table \ref{tab:annotation}). In a
second experiment, we do not provide any temporal or spatial annotation,
but only the global action label of each video (fourth row in Table
\ref{tab:annotation}).
In both experiments, we observe that the model is
still able to infer suitable spatio-temporal annotations.


\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Videos} & \textbf{Annotation inferred} & \textbf{Precision} & \textbf{Recall}\\
\hline
Testing set & Spatio-temporal, no GC & 0.59 & 0.77 \\
Testing set & Spatio-temporal & 0.62 & 0.78 \\
\hline
Training set & Spatial only & 0.86 & 0.90\\
Training set & Spatio-temporal & 0.67 & 0.85 \\
\hline
\end{tabular}
\caption{
\footnotesize
Atomic action annotation performance in the Composable Activities
dataset. The results show that our model is able to recover spatio-temporal
annotations both at training and testing time.}
\label{tab:annotation}
\end{table}


\subsection{Effect of Model Components}
In this experiment,
we study the contribution of key components of the
proposed model. First, using the sub-JHMDB dataset,
we measure the impact of three components of our model: the garbage collector for
motion poselets (GC), the multimodal modeling of actionlets, and the use of latent
variables to infer spatial annotations of body regions (latent $\vec{v}$).
\nTable \\ref{tab:components} shows that the full version\nof our model achieves the best performance, with each of the components \nmentioned above contributing to the overall success of the method.\n\n\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nBase model, GEO descriptor only & 66.9\\%\\\\\nBase Model & 70.6\\%\\\\\nBase Model + GC & 72.7\\% \\\\\nBase Model + Actionlets & 75.3\\%\\\\\nOur full model (Actionlets + GC + latent $\\vec{v}$) & 77.5\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nAnalysis of contribution to recognition performance from\neach model component in the sub-JHMDB dataset.}\n\\label{tab:components}\n\\end{table}\n\nSecond, using the Composable Activities dataset, we also analyze the \ncontribution of the proposed self-paced learning scheme for initializing and \ntraining our model. We summarize our results in\nTable \\ref{tab:initialization} by reporting action\nrecognition accuracy under different initialization schemes: i) Random: random \ninitialization of latent variables $\\vec{v}$, ii) Clustering: initialize \n$\\vec{v}$ by first computing a BoW descriptor for the atomic action intervals \nand then perform $k$-means clustering, assigning the action intervals to the \ncloser cluster center, and iii) Ours: initialize $\\vec{v}$ using the proposed \nself-paced learning scheme. Our proposed initialization scheme helps the model to achieve its best\nperformance.\n\n\n\n\\begin{table}[tb]\n\\footnotesize\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\n\\textbf{Initialization Algorithm} & \\textbf{Accuracy}\\\\\n\\hline\nRandom & 46.3\\% \\\\\nClustering & 54.8\\% \\\\\nOurs & 91.1\\% \\\\\n\\hline\nOurs, fully supervised & 91.8\\%\\\\\n\\hline\n\\end{tabular}\n\\caption{\n\\footnotesize\nResults in Composable Activities dataset, with latent $\\vec{v}$ and different initializations. 
}
\label{tab:initialization}
\end{table}

\subsection{Qualitative Results}
Finally, we provide a qualitative analysis of
relevant properties of our model. Figure \ref{fig:poselets_img}
shows examples of motion poselets learned in the Composable
Activities dataset. We observe that each motion poselet captures
a salient body configuration that helps to discriminate among atomic
actions. To further illustrate this, Figure \ref{fig:poselets_img}
indicates the most likely underlying atomic action for each motion poselet.
Figure \ref{fig:poselets_skel} presents a similar analysis for motion
poselets learned in the MSR-Action3D dataset.

We also visualize the action annotations produced by our model.
Figure \ref{fig:actionlabels} (top) shows the action labels associated
with each body part in a video from the Composable Activities dataset.
Figure \ref{fig:actionlabels} (bottom) illustrates per-body-part action
annotations for a video in the Concurrent Actions dataset.
These
examples illustrate the capability of our model to correctly
annotate the body parts involved in the execution of each action,
despite not having that information during training.


\begin{figure}[tb]
\begin{center}
\scriptsize
 Motion poselet \#4 - most likely action: talking on cellphone\\
 \includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets1}

 Motion poselet \#7 - most likely action: erasing on board\\
 \includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets2}

 Motion poselet \#19 - most likely action: waving hand\\
 \includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets3}
\end{center}
\caption{
\footnotesize
Motion poselets learned from the Composable Activities
dataset.}
\label{fig:poselets_img}
\end{figure}


\begin{figure}[tb]
\begin{center}
\scriptsize
 Motion poselet \#16 - most likely action: tennis swing\\
 \includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets4}

 Motion poselet \#34 - most likely action: golf swing\\
 \includegraphics[trim=0 0 0cm 0cm,clip, width=0.49\textwidth]{Fig/poselets5}

 Motion poselet \#160 - most likely action: bend\\
 \includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets6}

\end{center}
\caption{
\footnotesize
Motion poselets learned from the MSR-Action3D
dataset.}
\label{fig:poselets_skel}
\end{figure}



\begin{figure}[tb]
\begin{center}
\scriptsize
\includegraphics[]{Fig/labels_acciones}
\end{center}
\caption{
\footnotesize
Automatic spatio-temporal annotation of atomic actions.
Our method\ndetects the temporal span and spatial body regions that are involved in\nthe performance of atomic actions in videos.}\n\\label{fig:actionlabels}\n\\end{figure}\n\n\n\\begin{comment}\n\n[GENERAL IDEA]\n\nWhat we want to show:\n\\begin{itemize}\n\\item Show tables of results that can be useful to compare the model.\n\\item Show how the model is useful for videos of simple and composed actions, since now the level of annotations is similar.\n\\item Show how the inference produces annotated data (poses, actions, etc). In particular, show in Composable Activities and Concurrent actions how the action compositions are handled by the model without post-processing.\n\\item Show results in sub-JHMDB,showing how the model detects the action in the videos and also which part of the body performs the action (search for well-behaved videos). It could be interesting to show the annotated data over real RGB videos. \n\\item Show examples of poses (like poselets) and sequences of 3 or 5 poses for actions (Actionlets?)\n\\end{itemize}\n\n\\subsection{Figures}\nThe list of figures should include:\n\\begin{itemize}\n\\item A figure showing the recognition and mid-level labels of Composable Activities, using RGB videos\n\\item Comparison of action annotations, real v/s inferred in training set, showing we can recover (almost) the original annotations.\n\\item Show a figure similar to Concurrent Actions paper, with a timeline showing the actions in color. 
We can show that our inference is more stable than proposed in that paper, and it is visually more similar to the ground truth than the other methods.\n\\item Show a figure for sub-JHMDB dataset, where we can detect temporally and spatially the action without annotations in the training set.\n\\item Show Composable Activities and sub-JHMDB the most representative poses and actions.\n\\end{itemize}\n\n\n\\paragraph{Composable Activities Dataset}\nIn this dataset we show several results.\n(1) Comparing TRAJ descriptor (HOF over trajectory);\n(2) Compare the results using latent variables for action assignations to\nregions, with different initializations;\n(3) Show results of the annotations of the videos in inference.\n\nWe must include figures comparing the real annotations\nand the inferred annotations for training data, to show we are able to get the\nannotations only from data.\n\n\n\n\\subsection{Recognition of composable activities}\n\\label{subsec:experiments_summary}\n\n\\subsection{Impact of including motion features}\n\\label{subsec:exp_motionfeats}\n\n\\subsection{Impact of latent spatial assignment of actions}\n\\label{subsec:exp_vlatent}\n\n\\subsection{Impact of using multiple classifiers per semantic action}\n\\label{subsec:exp_multiple}\n\n\\subsection{Impact of handling non-informative poses}\n\\label{subsec:exp_non_info_handling}\n\\end{comment}\n\n\n\n\n\\begin{comment}\n\\subsection{CAD120 Dataset}\nThe CAD120 dataset is introduced in \\cite{Koppula2012}. It is composed of 124\nvideos that contain activities in 10 clases performed by 4 actors. Activities\nare related to daily living: \\emph{making cereal}, \\emph{stacking objects}, or\n\\emph{taking a meal}. Each activity is composed of simpler actions like\n\\emph{reaching}, \\emph{moving}, or \\emph{eating}. In this database, human-object\ninteractions are an important cue to identify the actions, so object\nlocations and object affordances are provided as annotations. 
Performance
evaluation is made through leave-one-subject-out cross-validation. Given
that our method does not consider objects, we use only
the data corresponding to the 3D joints of the skeletons. As shown in Table
\ref{Table-CAD120},
our method outperforms the results reported in
\cite{Koppula2012} using the same experimental setup. It is clear that using
only 3D joints is not enough to characterize each action or activity in this
dataset. As part of our future work, we expect that adding information related
to objects will further improve accuracy.
\begin{table}
\centering
{\small
\begin{tabular}{|c|c|c|}
\hline
\textbf{Algorithm} & \textbf{Average precision} & \textbf{Average recall}\\
\hline
Our method & 32.6\% & 34.58\% \\
\hline
\cite{Koppula2012} & 27.4\% & 31.2\%\\
\cite{Sung2012} & 23.7\% & 23.7\% \\
\hline
\end{tabular}
}
\caption{Recognition accuracy of our method compared to state-of-the-art methods
using CAD120 dataset.}
\label{Table-CAD120}
\end{table}
\end{comment}


\subsection{Latent spatial actions for hierarchical action detection}

\subsection{Hierarchical activity model}

Suppose we have a video $D$ with $T$ frames, each frame described by a feature vector $x_t$. Assume we have available $K$ classifiers $\{w_k\}_{k=1}^K$ over the frame descriptors, such that each frame descriptor can be associated with a single classifier. If we choose the maximum response for every frame, encoded as $z_t = \argmax_k\{w_k^\top x_t\}$, we can build a BoW representation to feed linear action classifiers $\beta$, computing the histogram $h(Z)$ of $Z = \{z_1,z_2,\dots,z_T\}$ and using these histograms as a feature vector for the complete video to recognize single actions. Imagine now that we would like to use the scores of the maximum responses, $w_{z_t}^\top x_t$, as a potential to help discriminate videos that present reliable poses from videos that do not.
We can build a joint energy function, combining the action classifier score and the aggregated frame classifier scores, as
\begin{equation}
\label{eq:2-levels}
\begin{split}
E(D) &= \beta_{a}^\top h(Z) + \sum_{t=1}^T w_{z_t}^\top x_t \\ & = \sum_{t=1}^T\sum_{k=1}^K\left(\beta_{a,k} + w_k^\top x_t \right)\delta(z_t=k)
\end{split}
\end{equation}
What is interesting about Eq. (\ref{eq:2-levels}) is that every term in the sum is tied to the value of $z_t$, creating a model in which all components depend on the labeling $Z$. We can expand the previous model to more levels using the same philosophy. In fact, for a new level, we can create for every frame a new indicator $v_t$ that selects which classifier $\beta$ will be used (just as $z_t$ indicates which classifier $w$ is used). If we call $w$ the \emph{pose classifiers} and $\beta$ the \emph{action classifiers}, we can create a hierarchical model where multiple poses and actions can be present in a single video. Suppose we have $A$ actions; the energy for a three-level hierarchy is then, for an \emph{activity} $l$,
\begin{equation}
E(D) =\alpha_l^\top h(V) + \sum_{a=1}^A \beta_{a}^\top h^a(Z,V) + \sum_{t=1}^T w_{z_t}^\top x_t
\end{equation}
where $h^a(Z,V)$ refers to the BoW representation of $Z$ for those frames labeled as action $v_t = a$.

Recent work in action recognition \cite{Cheron2015,Tao2015, Wang2011,Jhuang2013} shows a resurgence of describing human actions as a collection of dynamic spatial parts that resemble poselets. In line with this research, we split the human body into $R$ semantic regions. As modeling actions using the whole body is hard, separating the body into groups of limbs helps in the recognition of actions, especially for complex datasets \cite{Tao2015}.
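As a concrete illustration, the two-level energy of Eq. (\ref{eq:2-levels}) can be evaluated directly from the frame descriptors. The following is a toy sketch, not the paper's implementation; the array shapes and names are our assumptions:

```python
import numpy as np

def two_level_energy(X, W, beta_a):
    """Evaluate E(D) = beta_a^T h(Z) + sum_t w_{z_t}^T x_t, with
    z_t chosen as the maximum frame-classifier response.

    X:      (T, D) frame descriptors x_t
    W:      (K, D) pose classifiers w_k
    beta_a: (K,)   action classifier over the BoW histogram h(Z)
    """
    scores = X @ W.T                           # (T, K) responses w_k^T x_t
    z = scores.argmax(axis=1)                  # z_t = argmax_k w_k^T x_t
    h = np.bincount(z, minlength=W.shape[0])   # BoW histogram h(Z)
    return beta_a @ h + scores[np.arange(len(z)), z].sum()
```

Note that both terms are driven by the same labeling $Z$, which is exactly the coupling the text points out: changing a single $z_t$ moves both the histogram term and the pose-score term.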
Our view is that while poses are in general well defined in most research, little effort has been made to mine actions from videos, in terms of detecting both the temporal span of each action (action detection) and its localization. In addition, most action datasets contain only single actions, and there is a lack of research in the general setup where several actions are combined in the same video. Nevertheless, a few works have noticed that humans usually perform complex actions in real life \cite{Wei2013, Lillo2014}, providing their own datasets based on RGB-D cameras. In our work, we aim to bring together the two worlds of single and composed actions in a single hierarchical model with three semantic levels, using human body regions to improve representativeness.

During training, we assume that temporal annotations of actions are available. As we want our model to perform action localization, we model the action assignments $V_r$ in each region as latent variables during training, allowing the model to infer which human part executes the action without needing this kind of annotation in the training set, and including a model for the initialization of action labels. In this way, we go beyond a simple detection problem and also infer \emph{how} the subject executes the action, which is important in surveillance applications, health monitoring, and other domains. We also expand the modeling of recurrent patterns of poses to construct a general model for shared actions, aiming to handle multimodal information, which is produced by actions with the same label but different execution patterns, or by changes in the representation of actions such as a varying camera view. We handle this problem by augmenting the number of action classifiers, where each original action acts as a parent node of several non-overlapping child actions. Finally, as we are using local information for poses, some frames could be noisy or represent an uncommon pose that is not useful to build the pose models.
We attack this issue by adding a garbage collector for poses, so that only the most informative poses are used by the pose classifiers during learning. We describe these contributions in the following paragraphs.

\paragraph{Latent assignments of actions to human regions}

Knowing the parts of the body involved in each action is highly appealing. Suppose we have $M$ videos, each video annotated with $Q_m$ action intervals. Each action interval can be associated with any number of regions, from $1$ to all $R$ regions. For example, a \emph{waving hand} action could be associated only with \emph{right\_arm}, while the action \emph{jogging} could be associated with the whole body. We want to learn the associations between actions and human parts in the training videos, and we model these associations using latent variables. The main problem to solve is how to get a proper initialization for the actions, since there is a very high chance of getting stuck in a local minimum far from the optimum, producing poor results.

Our first contribution is a method to obtain a proper initialization of fine-grained spatial action labels, knowing only the time span of the actions. Using the known action intervals, we formulate the problem of action-to-region assignment as an optimization problem, constrained using structural information: action intervals must not overlap in the same region, and every action interval must be present in at least one region. We formulate this labeling problem as a binary Integer Linear Programming (ILP) problem. We define $v_{r,q}^m=1$ when the action interval $q \in \{1,\dots,Q_m\}$ appears in region $r$ of video $m$, and $v_{r,q}^m=0$ otherwise. We assume we have pose labels $z_{t,r}$ in each frame, independent for each region, learned by clustering the poses over all frames in all videos.
For an action interval $q$, we use as descriptor the histogram of pose labels for each region in the action interval, defined for video $m$ as $h_{r,q}^m$. We can solve the problem of finding the correspondence between action intervals and regions with a formulation similar to $k$-means, using the structure of the problem as constraints on the labels, and using the $\chi^2$ distance between the action interval descriptors and the cluster centers:
\begin{equation}
\begin{split}
P1) \quad \min_{v,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} v_{r,q}^m d( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} v_{r,q}^m\\
 \text{s. to}
\quad
& \sum_{r=1}^R v_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\
& v_{r,q_1}^m + v_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset \text{, }\forall r\text{, }\forall m\\
& v_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m
\end{split}
\end{equation}
with
\begin{equation}
d( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] - \mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).
\end{equation}

The centers $\mu_{a_q}^r$ are computed as the mean of the descriptors with the same action label within the same region. We solve $P1$ iteratively, as in $k$-means: we find the cluster centers $\mu_{a}^r$ for each region $r$ using the labels $v_{r,q}^m$, and then find the best labeling given the cluster centers by solving an ILP problem. Note that the first term of the objective function is similar to a $k$-means model, while the second term resembles the objective function of \emph{self-paced} learning as in \cite{Kumar2010}, balancing between assigning a single region to every action and assigning all possible regions to the action intervals when possible.
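The assignment step behind $P1$ can be sketched as follows. For brevity, this toy version handles a single video and a single action class, and replaces the exact ILP solve with a greedy feasible assignment; all names, shapes, and the greedy substitution are our assumptions, not the actual solver:

```python
import numpy as np

def chi2(h, mu, eps=1e-12):
    """Chi-squared distance between a histogram and a cluster center."""
    return np.sum((h - mu) ** 2 / (h + mu + eps))

def overlaps(q1, q2):
    """True if two (start, end) action intervals intersect in time."""
    return max(q1[0], q2[0]) < min(q1[1], q2[1])

def greedy_assignment(H, intervals, mu, R):
    """Greedy stand-in for the ILP step of P1: every interval gets a region,
    and no two temporally overlapping intervals share a region.

    H:         (Q, K) pose-label histograms, one per action interval
    intervals: list of (start, end) pairs
    mu:        (R, K) cluster centers, one per region
    """
    Q = len(intervals)
    v = np.zeros((R, Q), dtype=bool)
    for q in range(Q):
        # try regions from closest to farthest cluster center
        for r in np.argsort([chi2(H[q], mu[r]) for r in range(R)]):
            if not any(v[r, p] and overlaps(intervals[q], intervals[p])
                       for p in range(q)):
                v[r, q] = True
                break
    return v
```

In the full method this assignment alternates with re-estimating the centers $\mu$ from the current labels, exactly as in $k$-means; the greedy step above merely illustrates the feasibility constraints of $P1$.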
\n\n[IL: INCLUDE FIGURE TO SHOW P1 GRAPHICALLY]\n\nWe describe the further changes in the hierarchical model of \\cite{Lillo2014} in the learning and inference sections.\n \\paragraph{[EDIT] Representing semantic actions with multiple atomic sequences}.\n\n\nAs the poses and atomic actions in \\cite{Lillo2014} model are shared, a single classifier is generally not enough to model multimodal representations, that occur usually in complex videos. We modify the original hierarchical model of \\cite{Lillo2014} to include multiple linear classifiers per action. We create two new concepts: \\textbf{semantic actions}, that refer to actions \\emph{names} that compose an activity; and \\textbf{atomic sequences}, that refers to the sequence of poses that conform an action. Several atomic sequences can be associated to a single semantic action, creating disjoint sets of atomic sequences, each set associated to a single semantic action. The main idea is that the action annotations in the datasets are associated to semantic actions, whereas for each semantic action we learn several atomic sequence classifiers. With this formulation, we can handle the multimodal nature of semantic actions, covering the changes in motion, poses , or even changes in meaning of the action according to the context (e.g. the semantic action ``open'' can be associated to opening a can, opening a door, etc.). \n\nInspired by \\cite{Raptis2012}, we first use the \\emph{Cattell's Scree test} for finding a suitable number of atomic sequence for every semantic action. Using the semantic action labels, we compute a descriptor for every interval using normalized histograms of pose labels. Then, for a particular semantic action $u$, we compute the the eigenvalues $\\lambda_u$ of the affinity matrix of the semantic action descriptors, using $\\chi^2$ distance. 
For each semantic action $u \in \{1,\dots,U\}$ we find the number of atomic sequences $G_u$ as $G_u = \argmin_i \lambda_{i+1}^2 / (\sum_{j=1}^i \lambda_j) + c\cdot i$, with $c=2\cdot 10^{-3}$. Finally, we cluster the descriptors corresponding to each semantic action using $k$-means, with a different number of clusters $G_u$ for each semantic action $u$. This approach generates non-overlapping atomic sequences, each associated with a single semantic action.

To transfer the new labels to the model, we define $u(v)$ as the function that, given the atomic sequence label $v$, returns the corresponding semantic action label $u$. The energy for the activity level is then
\begin{equation}
E_{\text{activity}} = \sum_{u=1}^U\sum_{t=1}^T \alpha_{y,u}\delta(u(v_t)=u)
\end{equation}

For the action and pose levels the model remains unchanged. Using the new atomic sequences allows a richer representation of actions, while at the activity level several atomic sequences map to a single semantic action. This behavior resembles a max-pooling operation: at inference we choose the atomic sequences that best describe the actions performed in the video, keeping the semantics of the original labels.

\paragraph{Towards a better representation of poses: adding a garbage collector}

The model in \cite{Lillo2014} uses all poses to feed the action classifiers. Our intuition is that only a subset of the poses in each video is really discriminative or informative for the performed actions, while many poses are noisy or non-informative. Frames with low pose scores (i.e., a low value of $w_{z_t}^\top x_t$ in Eq. (\ref{eq:energy2014})) make the same contribution as high-scored poses at higher levels of the model, while degrading the pose classifiers, since low-scored poses are likely to correspond to non-informative frames.
We propose to include a new pose label that explicitly handles those low-scored frames, keeping them apart from the pose classifiers $w$, while still adding a fixed score to the energy function to avoid normalization issues and to help in the specialization of the pose classifiers. We call this change in the model a \emph{garbage collector}, since it gathers all low-scored frames and assigns them a fixed energy score $\theta$. In practice, we use a special pose entry $K+1$ to identify the non-informative poses. The energy for the pose level becomes\n\begin{equation} \label{Eq_poseEnergy}\nE_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta \n\delta(z_{t}=K+1)\right] \n\end{equation}\nwhere $\delta(\ell) = 1$ if $\ell$ is true and $\delta(\ell) = 0$ if\n$\ell$ is false. The energy of the action level also changes:\n\begin{equation}\n\begin{split}\n \label{Eq_actionEnergy}\nE_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).\n\end{split}\n\end{equation}\n\n\begin{comment}\nIntegrating all contribution detailed in previous sections, the model is written as:\nEnergy function:\n\begin{equation}\nE = E_{\text{activity}} + E_{\text{action}} + E_{\text{pose}}\n + E_{\text{action transition}} + E_{\text{pose transition}}.\n\end{equation}\n\n\begin{equation}\nE_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta \n\delta(z_{t}=K+1)\right] \n\end{equation}\n\n\begin{equation}\nE_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).\n\end{equation}\n\n\begin{equation}\nh_g^{r}(U) = \sum_{t} \delta_{u_{t,r}}^g\n\end{equation}\n\nSo the energy in the activity level is\n\begin{equation}\nE_{\text{activity}} = \sum_{r} {\alpha^r_{y}}^\top h^{r}(U) = \sum_{r,g,t} \alpha^r_{y,g} 
\\delta_{u_{t,r}}^g\n\\end{equation}\n\n\\begin{equation}\nE_{\\text{action transition}} = \\sum_{r,a,a'} \\gamma^r_{a',a} \\sum_{t} \\delta_{v_{t-1,r}}^{a'}\\delta_{v_{t,r}}^a \n\\end{equation}\n\n\\begin{equation}\nE_{\\text{pose transition}} =\\sum_{r,k,k'} \\eta^r_{k',k}\\sum_{t}\\delta_{z_{t-1,r}}^{k'}\\delta_{z_{t,r}}^{k}\n\\end{equation}\n\\end{comment}\n\n\n\n\\subsection{Inference}\n\\label{subsec:inference}\nThe input to the inference algorithm is a new video sequence with features\n$\\vec{x}$. The task is to infer the best complex action label $\\hat y$, and to \nproduce the best labeling of actionlets $\\hat{\\vec{v}}$ and motion poselets $\\hat{\\vec{z}}$.\n{\\small\n\\begin{equation}\n \\hat y, \\hat{\\vec{v}}, \\hat{\\vec{z}} = \\argmax_{y, \\vec{v},\\vec{z}} E(\\vec{x}, \\vec{v}, \\vec{z}, y)\n\\end{equation}}\nWe can solve this by exhaustively enumerating all values of complex actions $y$, and solving for $\\hat{\\vec{v}}$ and $\\hat{\\vec{z}}$ using:\n\\small\n\\begin{equation}\n\\begin{split}\n \\hat{\\vec{v}}, \\hat{\\vec{z}} | y ~ =~ & \\argmax_{\\vec{v},\\vec{z}} ~ \\sum_{r=1}^R \\sum_{t=1}^T \\left( \\alpha^r_{y,u(v{(t,r)})} \n + \\beta^r_{v_{(t,r)},z_{(t,r)}}\\right. \\\\\n\t\t\t\t&\\quad\\quad \\left.+ {w^r_{z_{(t,r)}}}^\\top x_{t,r} \\delta(z_{(t,r)} \\le K) + \\theta^r \\delta_{z_{(t,r)}}^{K+1} \\right. \\\\ \n\t\t\t\t& \\quad\\quad \\left.+ \\gamma^r_{v_{({t-1},r)},v_{(t,r)}} + \\eta^r_{z_{({t-1},r)},z_{(t,r)}} \\vphantom{{w^r_{z_{(t,r)}}}^\\top x_{t,r}} \\right). \\\\\n\\end{split}\n\\label{eq:classify_inference}\n\\end{equation}\n\\normalsize\n\n\n\n\\subsection{Learning} \\label{subsec:learning}\n\\textbf{Initial actionlet labels.} An important step in the training process is\nthe initialization of latent variables. 
This is challenging due to the lack of spatial supervision: at each time instant, the available atomic actions can be associated with any of the $R$ body regions.\nWe adopt the machinery of self-paced learning \cite{Kumar:EtAl:2010} to provide a suitable solution and formulate the association between actions and body regions as an optimization problem. We constrain this optimization using two structural restrictions:\ni) atomic action intervals must not overlap in the same region, and\nii) a labeled atomic action must be present in at least one region. We formulate the labeling process as a binary Integer Linear Programming (ILP) problem, where we define $b_{r,q}^m=1$ when action interval $q \in \{1,\dots,Q_m\}$ is active in region $r$ of video $m$, and $b_{r,q}^m=0$ otherwise. Each action interval $q$ is associated with a single atomic action. We assume that we have initial motion poselet labels $z_{t,r}$ in each frame and region.\nWe describe the action interval $q$ and region $r$ using the histogram $h_{r,q}^m$ of motion poselet labels.
We can find the correspondence between action intervals and regions using a formulation that resembles the operation of $k$-means, but using the structure of the problem to constrain the labels:\n\small\n\begin{equation}\n\begin{split}\n\text{P1}) \quad \min_{b,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} b_{r,q}^m \nd( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} b_{r,q}^m\\ \n \text{s.t.} \n\quad \n& \sum_{r=1}^R b_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\ \n& b_{r,q_1}^m + b_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset \n\text{, \n}\forall r\text{, }\forall m\\ \n& b_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m\n\end{split}\n\end{equation}\nwith\n\begin{equation}\nd( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] - \n\mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).\n\end{equation}\n\normalsize\nHere, $\mu_{a_q}^r$ are the means of the descriptors with action label $a_q$ within region $r$. We solve $\text{P1}$ iteratively using a block coordinate descent scheme, alternating between solving for $\mu_{a}^r$ with $b_{r,q}^m$ fixed, which has a trivial solution, and solving for $b_{r,q}^m$ with $\mu_{a}^r$ fixed, relaxing $\text{P1}$ to a linear program. Note that the second term of the objective function in $\text{P1}$ resembles the objective function of \emph{self-paced} learning \cite{Kumar:EtAl:2010}, managing the balance between assigning a single region to every action and assigning all possible regions to the respective action interval.\n\n\textbf{Learning model parameters.}\nWe formulate learning the model parameters as a Latent Structural SVM problem \cite{Yu:Joachims:2010}, with latent variables for motion poselets $\vec{z}$ and actionlets $\vec{v}$.
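As a small illustration of the alternating scheme for $\text{P1}$, the $\chi^2$ distance and the trivial mean-update step can be sketched as follows (the helper names are ours, and the LP relaxation step for $b$ is omitted):

```python
import numpy as np

def chi2_dist(h, mu, eps=1e-12):
    """Chi-squared distance between a histogram h and a mean descriptor mu:
    sum_k (h[k] - mu[k])^2 / (h[k] + mu[k])."""
    h, mu = np.asarray(h, float), np.asarray(mu, float)
    return float(np.sum((h - mu) ** 2 / (h + mu + eps)))

def update_means(descriptors, labels, num_actions):
    """Trivial mu-step of the block-coordinate scheme: mean descriptor
    per atomic-action label (for a single region)."""
    descriptors = [np.asarray(d, float) for d in descriptors]
    return {a: np.mean([d for d, l in zip(descriptors, labels) if l == a], axis=0)
            for a in range(num_actions)}
```

The $b$-step then scores each (interval, region) pair with `chi2_dist` against the current means and solves the relaxed linear program under the two structural restrictions.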
We find values for the parameters in equations (\ref{eq:motionposelets}--\ref{eq:actionletstransition}), slack variables $\xi_i$, motion poselet labels $\vec{z}_i$, and actionlet labels $\vec{v}_i$ by solving:\n{\small\n\begin{equation}\n\label{eq:big_problem}\n\min_{W,\xi_i,~i=\{1,\dots,M\}} \frac{1}{2}||W||_2^2 + \frac{C}{M} \sum_{i=1}^M\xi_i ,\n\end{equation}}\nwhere\n{\small \begin{equation}\nW^\top=[\alpha^\top, \beta^\top, w^\top, \gamma^\top, \eta^\top, \theta^\top],\n\end{equation}}\nand\n{\small\n\begin{equation} \label{eq:slags}\n\begin{split}\n\xi_i = \max_{\vec{z},\vec{v},y} \{ & E(\vec{x}_i, \vec{z}, \vec{v}, y) + \Delta( (y_i,\vec{v}_i), (y, \vec{v})) \\\n & - \max_{\vec{z}_i}{ E(\vec{x}_i, \vec{z}_i, \vec{v}_i, y_i)} \}, \; \;\; i\in\{1,\dots,M\}.\n\end{split}\n\end{equation}}\nIn Equation (\ref{eq:slags}), each slack variable $\xi_i$ quantifies the error of the inferred labeling for video $i$. We solve Equation (\ref{eq:big_problem}) iteratively using the CCCP algorithm \cite{Yuille:Rangarajan:03}, by solving for the latent labels $\vec{z}_i$ and $\vec{v}_i$ given the model parameters $W$, the temporal atomic action annotations (when available), and the labels of the complex actions occurring in the training videos (see Section \ref{subsec:inference}). Then, we solve for $W$ via the 1-slack formulation using the cutting plane algorithm \cite{Joachims2009}.\n\nThe role of the loss function $\Delta((y_i,\vec{v}_i),(y,\vec{v}))$ is to penalize inference errors during training. If the true actionlet labels are known in advance, the loss function is the same as in \cite{Lillo2014}, using actionlets instead of atomic actions:\n\small \begin{equation}\n\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T \n\delta({v_t}_{i} \neq v_t),\n\end{equation}\n\normalsize\n\noindent where ${v_t}_{i}$ is the true actionlet label.
If the spatial ordering of actionlets is unknown (hence the latent actionlet formulation), but the temporal composition is known, we can compute a list $A_t$ of possible actionlets for frame $t$, and include that information in the loss function as\n\small \begin{equation}\n\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T \n\delta(v_t \notin A_t)\n\end{equation}\n\normalsize\n\n\subsection{Body regions}\nWe divide the body pose into $R$ fixed spatial regions and independently compute a pose feature vector for each region. Figure \ref{fig:skeleton_limbs_regions} illustrates the case of $R = 4$, which we use in all our experiments. Our body pose feature vector consists of the concatenation of two descriptors. At frame $t$ and region $r$, a descriptor $x^{g}_{t,r}$ encodes geometric information about the spatial configuration of body joints, and a descriptor $x^{m}_{t,r}$ encodes local motion information around each body joint position.\nWe use the geometric descriptor from \cite{Lillo2014}:\nwe construct six segments that connect pairs of joints at each region\footnote{Arm segments: wrist-elbow, elbow-shoulder, shoulder-neck, wrist-shoulder, wrist-head, and neck-torso; Leg segments: ankle-knee, knee-hip, hip-hip center, ankle-hip, ankle-torso, and hip center-torso.}\nand compute 15 angles between those segments.\nAlso, three angles are calculated between a plane formed by three segments\footnote{Arm plane: shoulder-elbow-wrist; Leg plane: hip-knee-ankle.} and the remaining three non-coplanar segments, totaling an 18-D geometric descriptor (GEO) for every region.\nOur motion descriptor is based on tracking the motion trajectories of key points \cite{WangCVPR2011}, which in our case coincide with the body joint positions.\nWe extract a HOF descriptor using $32\times32$ RGB patches centered at the joint location over a temporal window of 15 frames.
At each joint location, this produces a 108-D descriptor, which we concatenate across all joints in each region to obtain our motion descriptor. Finally, we apply PCA to reduce the dimensionality of our concatenated motion descriptor to 20. The final descriptor is the concatenation of the geometric and motion descriptors, $x_{t,r} = [x_{t,r}^g ; x_{t,r}^m]$.\n\n\n\subsection{Hierarchical compositional model}\n\nWe propose a hierarchical compositional model that spans three semantic levels. Figure \ref{fig:overview} shows a schematic of our model. At the top level, our model assumes that each input video has a single complex action label $y$. Each complex action is composed of a temporal and spatial arrangement of atomic actions with labels $\vec{u}=[u_1,\dots,u_T]$, $u_i \in \{1,\dots,S\}$.\nIn turn, each atomic action consists of several non-shared \emph{actionlets}, which correspond to representative sets of pose configurations for action identification, modeling the multimodality of each atomic action.\nWe capture actionlet assignments in $\vec{v}=[v_1,\dots,v_T]$, $v_i \in \{1,\dots,A\}$.\nEach actionlet index $v_i$ corresponds to a unique and known atomic action label $u_i$, so they are related by a mapping $\vec{u} = \vec{u}(\vec{v})$. At the intermediate level, our model assumes that each actionlet is composed of a temporal arrangement of a subset of the $K$ body poses, encoded in $\vec{z} = [z_1,\dots,z_T]$, $z_i \in \{1,\dots,K\}$, where $K$ is a hyperparameter of the model.\nThese subsets capture pose geometry and local motion, so we call them \emph{motion poselets}.\nFinally, at the bottom level, our model identifies motion poselets using a bank of linear classifiers that are applied to the incoming frame descriptors.\n\n\nWe build each layer of our hierarchical model on top of BoW representations of labels.
To this end, at the bottom level of our hierarchy, and for \neach body region, we learn a dictionary of motion poselets. Similarly, at the mid-level of our hierarchy, we learn a dictionary of actionlets, using the BoW representation of motion poselets as inputs. At each of these levels, \nspatio-temporal activations of the respective dictionary words are used \nto obtain the corresponding histogram encoding the BoW representation. \nThe next two sections provide\ndetails on the process to represent and learn the dictionaries of motion \nposelets and actionlets. Here we discuss our\nintegrated hierarchical model.\n\nWe formulate our hierarchical model using an energy function.\nGiven a video of $T$ frames corresponding to complex action $y$ encoded by descriptors $\\vec{x}$, with the label vectors $\\vec{z}$ for motion poselets,\n$\\vec{v}$ for actionlets and $\\vec{u}$ for atomic actions, we\ndefine an energy function for a video as:\n\\small\n\\begin{align}\\label{Eq_energy}\nE(\\vec{x},&\\vec{v},\\vec{z},y) = E_{\\text{motion poselets}}(\\vec{z},\\vec{x}) \\nonumber \\\\&+ E_{\\text{motion poselets BoW}}(\\vec{v},\\vec{z}) + \nE_{\\text{atomic actions BoW}}(\\vec{u}(\\vec{v}),y) \\nonumber \\\\ \n& + E_{\\text{motion poselets transition}}(\\vec{z}) + E_{\\text{actionlets \ntransition}}(\\vec{v}).\n\\end{align}\n\\normalsize\nBesides the BoW representations and motion poselet classifiers\ndescribed above, Equation (\\ref{Eq_energy}) includes\ntwo energy potentials that encode information related to\ntemporal\ntransitions between pairs of motion poselets ($E_{\\text{motion poselets \ntransition}}$) and \nactionlets ($E_{\\text{actionlets transition}}$). \nThe energy potentials are given by:\n{\\small\n\\begin{align}\n\\label{eq:motionposelets}\n&E_{\\text{mot. poselet}}(\\vec{z},\\vec{x}) = \\sum_{r,t} \\left[ \\sum_{k} {w^r_k}^\\top \nx_{t,r}\\delta_{z_{(t,r)}}^{k} + \\theta^r \\delta_{z_{(t,r)}}^{K+1}\\right] \\\\\n&E_{\\text{mot. 
poselet BoW}}(\\vec{v},\\vec{z}) = \\sum_{r,a,k} {\\beta^r_{a,k}}\\delta_{v_{(t,r)}}^{a}\\delta_{z_{(t,r)}}^{k}\\\\\n\\label{eq:actionlets_BoW} \n&E_{\\text{atomic act. BoW}}(\\vec{u}(\\vec{v}),y) =\\sum_{r,s} {\\alpha^r_{y,s}}\\delta_{u(v_{(t,r)})}^{s} \\\\\n&E_{\\text{mot. pos. trans.}}(\\vec{z}) = \n\\sum_{r,k_{+1},k'_{+1}} \\eta^r_{k,k'} \n\\sum_{t} \\delta_{z_{(t-1,r)}}^{k}\\delta_{z_{(t,r)}}^{k'} \\\\\n\\label{eq:actionletstransition}\n&E_{\\text{acttionlet trans.}}(\\vec{v}) =\\sum_{r,a,a'} \\gamma^r_{a,a'} \n\\sum_{t} \n\\delta_{v_{(t-1,r)}}^{a}\\delta_{v_{(t,r)}}^{a'} \n\\end{align}\n}\n\nOur goal is to \nmaximize $E(\\vec{x},\\vec{v},\\vec{z},y)$, and obtain the \nspatial and temporal arrangement \nof motion poselets $\\vec{z}$ and actionlets $\\vec{v}$, as well as, the underlying \ncomplex action $y$.\n\nIn the previous equations, we use $\\delta_a^b$ to indicate the Kronecker delta function $\\delta(a = b)$, and use indexes $k \\in \\{1,\\dots,K\\}$ for motion poselets, $a \\in \\{1,\\dots,A\\}$ for actionlets, and $s \\in \\{1,\\dots,S\\}$ for atomic actions.\nIn the energy term for motion poselets,\n$w^r_k$ are a set of $K$ linear pose classifiers applied to frame \ndescriptors $x_{t,r}$, according to the label of the latent variable $z_{t,r}$. \nNote that there is a special label $K+1$; the role of this label will be \nexplained in Section \\ref{subsec:garbage_collector}.\nIn the energy potential associated to \nthe BoW representation for motion poselets, $\\vec{\\beta}^r$ denotes a set of $A$ \nmid-level classifiers, whose inputs are histograms of motion \nposelet labels at those frame annotated as actionlet $a$. At the highest level, \n$\\alpha^r_{y}$ is a linear classifier associated with complex action $y$, whose \ninput is the histogram of atomic action labels,\nwhich are related to actionlet assignments by the mapping function $\\vec{u}(\\vec{v})$. Note that all classifiers \nand labels here correspond to a single region $r$. 
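For a single region $r$, the sum of these potentials over time can be accumulated directly, as in the following sketch (our variable names; labels are 0-based, with index $K$ reserved for the garbage entry of Section \ref{subsec:garbage_collector}):

```python
import numpy as np

def region_energy(x, z, v, y, w, theta, beta, alpha, eta, gamma, u_of_v):
    """Energy of one region: pose, poselet-BoW, atomic-action-BoW and transition terms.

    x: (T, D) frame descriptors; z, v: length-T motion-poselet / actionlet labels;
    y: complex-action index; w: (K, D) poselet classifiers; theta: garbage score;
    beta: (A, K+1) and alpha: (Y, S) BoW classifiers; eta, gamma: transition scores;
    u_of_v: maps an actionlet label to its atomic-action label.
    """
    K = w.shape[0]
    E = 0.0
    for t in range(len(z)):
        E += w[z[t]] @ x[t] if z[t] < K else theta      # pose term or garbage score
        E += beta[v[t], z[t]]                           # motion-poselet BoW term
        E += alpha[y, u_of_v[v[t]]]                     # atomic-action BoW term
        if t > 0:
            E += eta[z[t - 1], z[t]] + gamma[v[t - 1], v[t]]  # transition terms
    return float(E)

# Tiny example: K = 1 poselet, one actionlet, T = 2 frames (second frame is garbage).
E = region_energy(x=np.array([[1.0], [1.0]]), z=[0, 1], v=[0, 0], y=0,
                  w=np.array([[2.0]]), theta=-1.0, beta=np.array([[0.5, 0.3]]),
                  alpha=np.array([[1.0]]), eta=np.zeros((2, 2)),
                  gamma=np.zeros((1, 1)), u_of_v=[0])
```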
We add the contributions of all regions to compute the global energy of the video. The transition terms act as linear classifiers $\eta^r$ and $\gamma^r$ over histograms of temporal transitions of motion poselets and temporal transitions of actionlets, respectively. As we have a special label $K+1$ for motion poselets, the summation index $k_{+1}$ indicates the interval $\lbrack 1,\dots,K+1 \rbrack$.\n\n\subsection{Learning motion poselets}\nIn our model, motion poselets are learned by treating them as latent variables during training. Before training, we fix the number of motion poselets per region to $K$.\nIn every region $r$, we learn an independent set of pose classifiers $\{w^r_k\}_{k=1}^K$, initializing the motion poselet labels using the $k$-means algorithm. We learn pose, actionlet, and complex action classifiers jointly, allowing the model to discover discriminative motion poselets useful for detecting and recognizing complex actions. As shown in previous work, jointly learning linear classifiers to identify body parts and atomic actions improves recognition rates \cite{Lillo2014,Wang2008}, so here we follow a similar hierarchical approach and integrate the learning of motion poselets with the learning of actionlets.\n\n\subsection{Learning actionlets}\n\label{sec:learningactionlets}\nA single linear classifier does not offer enough flexibility to identify atomic actions that exhibit high visual variability. As an example, the atomic action ``open'' can be associated with ``opening a can'' or ``opening a book'', displaying high variability in action execution. Consequently, we augment our hierarchical model by including multiple classifiers to identify different modes of action execution.\n\nInspired by \cite{Raptis2012}, we use \emph{Cattell's Scree test} to find a suitable number of actionlets to model each atomic action.
Specifically, using the atomic action labels, we compute a descriptor for every video interval using normalized histograms of the initial pose labels obtained with $k$-means. Then, for a particular atomic action $s$, we compute the eigenvalues $\lambda(s)$ of the affinity matrix of the atomic action descriptors, which is built using the $\chi^2$ distance. For each atomic action $s \in \{1,\dots,S\}$, we find the number of actionlets $G_s$ as $G_s = \argmin_i {\lambda(s)}_{i+1}^2 / (\sum_{j=1}^i {\lambda(s)}_j) + c\cdot i$, with $c=2\cdot 10^{-3}$. Finally, we cluster the descriptors from each atomic action $s$ by running $k$-means with $k = G_s$. This scheme generates a set of non-overlapping actionlets to model each single atomic action. In our experiments, we notice that the number of actionlets used to model each atomic action typically varies from 1 to 8.\n\nTo transfer the new labels to the model, we define $u(v)$ as a function that maps from actionlet label $v$ to the corresponding atomic action label $u$. A dictionary of actionlets provides a richer representation for actions, where several actionlets map to a single atomic action. This behavior resembles a max-pooling operation, where at inference time we choose the set of actionlets that best describe the actions performed in the video, keeping the semantics of the original atomic action labels.\n\n\subsection{A garbage collector for motion poselets}\n\label{subsec:garbage_collector}\nWhile poses are highly informative for action recognition, an input video might contain irrelevant or idle zones, where the underlying poses are noisy or non-discriminative for identifying the actions being performed in the video. As a result, low-scoring motion poselets could degrade the pose classifiers during training, decreasing their performance. To deal with this problem, we include in our model a \emph{garbage collector} mechanism for motion poselets.
This mechanism operates by assigning all low-scoring motion poselets to the $(K+1)$-th pose dictionary entry. These collected poses are associated with a learned score lower than $\theta^r$, as in Equation (\ref{eq:motionposelets}). Our experiments show that this mechanism leads to learning more discriminative motion poselet classifiers.\n\n\n\input{learning}\n\input{inference}\n\n\n\n\n\n\n\n\n\subsection{Video Representation} \label{subsec:videorepresentation}\n\nOur model is based on skeleton information encoded in joint annotations. We use the same geometric descriptor as in \cite{Lillo2014}, using angles between segments connecting two joints, and angles between these segments and a plane formed by three joints. In addition to geometry, other authors \cite{Zanfir2013,Tao2015,Wang2014} have noticed that including local motion information is beneficial for the categorization of videos. Moreover, in \cite{zhu2013fusing} the authors create a fused descriptor using spatio-temporal descriptors and joint descriptors, showing that the combination performs better than either descriptor separately. With this in mind, we augment the original geometric descriptor with motion information: when only skeleton joint data is available, we use the joint displacement vectors (velocity) as a motion descriptor. If RGB video is available, we use the HOF descriptor extracted from the trajectory of each joint in a small temporal window.\n\nFor the geometric descriptor, we use 6 segments per body region (see Fig. XXXX). The descriptor is composed of the angles between the segments (15 angles), and the angles between a plane formed by three segments and the non-coplanar segments (3 angles).
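The two angle computations behind this 18-D geometric descriptor can be sketched as follows (a minimal sketch; the helper names are ours, and joints are 3-D points):

```python
import numpy as np

def angle_between(u, v):
    """Angle (radians) between two 3-D segments."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def plane_segment_angle(p1, p2, p3, seg):
    """Angle between the plane through joints p1, p2, p3 and a non-coplanar segment."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p2)  # plane normal from two in-plane segments
    return abs(np.pi / 2 - angle_between(normal, np.asarray(seg, float)))
```

Evaluating `angle_between` on the 15 segment pairs and `plane_segment_angle` on the 3 remaining segments of a region yields its GEO descriptor.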
For the motion descriptor, we use either the 3D velocity of every joint in each region as a concatenated vector (18 dimensions), or the concatenated HOF descriptor of the joint trajectories, transformed to a low-dimensional space using PCA (20 dimensions).\n\n\section{Introduction}\nThe recent discovery of Weyl semimetals (WSMs)~\cite{Lv2015TaAs,Xu2015TaAs,Yang2015TaAs} in realistic materials has stimulated tremendous research interest in topological semimetals, such as WSMs, Dirac semimetals, and nodal line semimetals~\cite{volovik2003universe,Wan2011,Balents2011,Burkov2011,Hosur2013,Vafek2014}, as a new frontier of condensed matter physics after the discovery of topological insulators~\cite{qi2011RMP, Hasan2010}.\nWSMs are of particular interest not only because of their exotic Fermi-arc-type surface states but also because of their appealing bulk chiral magneto-transport properties, such as the chiral anomaly effect~\cite{Xiong2015,Huang2015anomaly,Arnold2015}, nonlocal transport~\cite{Parameswaran2014,Baum2015}, large magnetoresistance, and high mobility~\cite{Shekhar2015}.\nCurrently discovered WSM materials can be classified into two groups. One group breaks crystal inversion symmetry but preserves time-reversal symmetry (e.g., TaAs-family transition-metal pnictides~\cite{Weng2015,Huang2015} and WTe$_2$- and MoTe$_2$-family transition-metal dichalcogenides~\cite{Soluyanov2015WTe2,Sun2015MoTe2,Wang2016MoTe2,Koepernik2016,Deng2016,Jiang2016}). The other group breaks time-reversal symmetry in ferromagnets with possibly tilted moments (e.g., magnetic Heusler GdPtBi~\cite{Hirschberger2016,Shekhar2016} and YbMnBi$_2$~\cite{Borisenko2015}).
An antiferromagnetic (AFM) WSM compound has yet to be found, although Y$_2$Ir$_2$O$_7$ with a noncoplanar AFM structure was theoretically predicted to be a WSM candidate~\cite{Wan2011}.\n\nIn a WSM, the conduction and valence bands cross each other linearly through nodes called Weyl points. Between a pair of Weyl points with opposite chiralities (sink or source of the Berry curvature)~\cite{volovik2003universe}, the emerging Berry flux can lead to the anomalous Hall effect (AHE)~\cite{Burkov2014}, as observed in GdPtBi~\cite{Hirschberger2016,Shekhar2016}, and an intrinsic spin Hall effect (SHE), as predicted in TaAs-type materials~\cite{Sun2016}, for systems without and with time-reversal symmetry, respectively. Herein, we propose a simple recipe to search for WSM candidates among materials that host a strong AHE or SHE.\n\nRecently, Mn$_3$X (where $\rm X=Sn$, Ge, and Ir), which exhibit noncollinear AFM phases at room temperature, have been found to show a large AHE~\cite{Kubler2014,Chen2014,Nakatsuji2015,Nayak2016} and SHE~\cite{Zhang2016}, provoking our interest in investigating their band structures. In this work, we report the existence of Weyl fermions in the Mn$_3$Ge and Mn$_3$Sn compounds and the resultant Fermi arcs on the surface by \textit{ab initio} calculations, awaiting experimental verification. Dozens of Weyl points exist near the Fermi energy in their band structures, and these can be well understood with the assistance of the lattice symmetry.\n\n\n\section{Methods}\n\n\nThe electronic ground states of Mn$_3$Ge and Mn$_3$Sn were calculated by using density-functional theory (DFT) within the Perdew-Burke-Ernzerhof-type generalized-gradient approximation (GGA)~\cite{Perdew1996} using the Vienna {\it ab initio} Simulation Package (\textsc{vasp})~\cite{Kresse1996}. The $3d^6 4s^1$, $4s^24p^2$, and $5s^2 5p^2$ electrons were considered as valence electrons for the Mn, Ge, and Sn atoms, respectively.
The primitive cell with experimental lattice parameters $a=b=5.352$ and $c=4.312$ \AA~ for Mn$_3$Ge\nand $a=b=5.67$ and $c=4.53$ \AA~ for Mn$_3$Sn\nwas adopted. Spin-orbit coupling (SOC) was included in all calculations.\n\nTo identify the Weyl points with the monopole feature, we calculated the Berry curvature distribution in momentum space.\nThe Berry curvature was calculated from a tight-binding Hamiltonian based on localized Wannier functions~\cite{Mostofi2008} projected from the DFT Bloch wave functions. We chose atomic-orbital-like Wannier functions, which include Mn-$spd$ and Ge-$sp$/Sn-$p$ orbitals, so that the tight-binding Hamiltonian is consistent with the symmetry of the \textit{ab initio} calculations.\nFrom such a Hamiltonian, the Berry curvature can be calculated using the Kubo-formula approach~\cite{Xiao2010},\n\begin{equation}\n\begin{aligned}\label{equation1}\n\Omega^{\gamma}_n(\vec{k})= 2i\hbar^2 \sum_{m \ne n} \dfrac{\langle u_{n}(\vec{k})|\hat{v}_\alpha|u_{m}(\vec{k})\rangle \langle u_{m}(\vec{k})|\hat{v}_\beta|u_{n}(\vec{k})\rangle}{(E_{n}(\vec{k})-E_{m}(\vec{k}))^2},\n\end{aligned}\n\end{equation}\nwhere $\Omega^{\gamma}_n(\vec{k})$ is the Berry curvature in momentum space for a given band $n$,\n$\hat{v}_{\alpha (\beta, \gamma)}=\frac{1}{\hbar}\frac{\partial\hat{H}}{\partial k_{\alpha (\beta, \gamma)}}$ is the velocity operator with $\alpha,\beta,\gamma=x,y,z$, and $|u_{n}(\vec{k})\rangle$ and $E_{n}(\vec{k})$ are the eigenvector and eigenvalue of the Hamiltonian $\hat{H}(\vec{k})$, respectively. The summation of $\Omega^{\gamma}_n(\vec{k})$ over all valence bands gives the Berry curvature vector $\mathbf{\Omega}~(\Omega^x,\Omega^y,\Omega^z)$.\n\nIn addition, the surface states that demonstrate the Fermi arcs were calculated on a semi-infinite surface, where the momentum-resolved local density of states (LDOS) on the surface layer was evaluated based on the Green's function method.
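As a minimal numerical check of the Kubo formula in Eq.~(\ref{equation1}), consider a toy two-band Weyl Hamiltonian $H(\vec{k})=\vec{k}\cdot\vec{\sigma}$, with $\hbar=1$ and velocity operators $\hat{v}_\alpha=\sigma_\alpha$; the $2i$ prefactor is implemented by taking $-2\,\mathrm{Im}$ of the band sum. This sketch and its names are ours, not the production Wannier-based code:

```python
import numpy as np

# Pauli matrices (hbar = 1 units).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def berry_curvature_z(H, vx, vy, n):
    """Omega^z_n = -2 Im sum_{m != n} <n|vx|m><m|vy|n> / (E_n - E_m)^2."""
    E, U = np.linalg.eigh(H)  # eigenvalues ascending; columns of U are eigenvectors
    total = 0.0j
    for m in range(len(E)):
        if m != n:
            total += (U[:, n].conj() @ vx @ U[:, m]) * (U[:, m].conj() @ vy @ U[:, n]) \
                     / (E[n] - E[m]) ** 2
    return -2.0 * total.imag

# At k = (0, 0, 1) the lower band (n = 0) of H = k . sigma carries Omega^z = 1/2,
# the expected monopole-like Berry curvature around a Weyl node.
H = 1.0 * sz
omega = berry_curvature_z(H, sx, sy, n=0)
```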
We note that the current surface band structure corresponds to the bottom surface of a half-infinite system.\n\n\section{Results and Discussion}\n\subsection{Symmetry analysis of the antiferromagnetic structure}\n\nMn$_3$Ge and Mn$_3$Sn share the same layered hexagonal lattice (space group $P6_3/mmc$, No. 194).\nInside a layer, the Mn atoms form a kagome-type lattice with mixed triangles and hexagons, and the Ge/Sn atoms are located at the centers of these hexagons.\nEach Mn atom carries a magnetic moment of 3.2 $\mu_B$ in Mn$_3$Sn and 2.7 $\mu_B$ in Mn$_3$Ge.\nAs revealed in a previous study~\cite{Zhang2013}, the magnetic ground state is a noncollinear AFM state, where the Mn moments align inside the $ab$ plane and form 120-degree angles with neighboring moment vectors, as shown in Fig.~\ref{stru}b. Along the $c$ axis, stacking two layers leads to the primitive unit cell.\nGiven the magnetic lattice, these two layers can be transformed into each other by inversion symmetry or by a mirror reflection ($M_y$) combined with a half-lattice ($c/2$) translation, i.e., the nonsymmorphic symmetry $\{M_y|\tau = c/2\}$. In addition, two other mirror reflections ($M_x$ and $M_z$) combined with time reversal ($T$), $M_x T$ and $M_z T$, exist.\n\nIn momentum space, we can utilize three important symmetries, $M_x T$, $M_z T$, and $M_y$, to understand the electronic structure and locate the Weyl points. Suppose a Weyl point with chirality $\chi$ (+ or $-$) exists at a generic position $\mathbf{k}~(k_x,k_y,k_z)$.\nMirror reflection reverses $\chi$, while time reversal does not; both act on $\mathbf{k}$.
The transformation is as follows:\n\begin{equation}\n\begin{aligned}\nM_x T : & ~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, -k_z); &~\chi &\rightarrow -\chi \\\nM_z T : &~ (k_x,k_y,k_z) \rightarrow (-k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\\nM_y : &~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\\n\end{aligned}\n\label{symmetry}\n\end{equation}\nEach of the above three operations doubles the number of Weyl points. Thus, eight nonequivalent Weyl points can be generated at $(\pm k_x,+k_y,\pm k_z)$ with chirality $\chi$ and $(\pm k_x,-k_y,\pm k_z)$ with chirality $-\chi$ (see Fig.~\ref{stru}c). We note that the $k_x=0/\pi$ and $k_z=0/\pi$ planes can host Weyl points. However, the $k_y=0/\pi$ planes cannot host Weyl points, because $M_y$ simply reverses the chirality and annihilates a Weyl point with its mirror image if it exists. Similarly, the $M_y$ mirror reflection requires that a nonzero anomalous Hall conductivity can only exist in the $xz$ plane (i.e., $\sigma_{xz}$), as already shown in Ref.~\onlinecite{Nayak2016}.\n\nIn addition, the symmetry of the 120-degree AFM state is slightly broken in these materials, owing to the existence of a tiny net moment ($\sim 0.003~\mu_B$ per unit cell)~\cite{Nakatsuji2015,Nayak2016,Zhang2013}. Such weak symmetry breaking seems to induce negligible effects in transport measurements. However, it gives rise to a perturbation of the band structure, for example, slightly shifting the mirror image of a Weyl point from its expected position, as we will see in the surface states of Mn$_3$Ge.\n\n\begin{figure}\n \begin{center}\n \includegraphics[width=0.45\textwidth]{figure1.png}\n \end{center}\n \caption{ Crystal and magnetic structures of Mn$_3X$ (where $\rm X = Sn$ or Ge) and related symmetry.\n(a) Crystal structure of Mn$_3$X.
Three mirror planes are shown in purple, corresponding to the \{$M_y|\tau=c/2$\}, $M_xT$, and $M_zT$ symmetries.\n(b) Top view along the $c$ axis of the Mn sublattice. A chiral AFM order with angles of 120 degrees between neighboring magnetic moments is formed in each Mn layer.\nThe mirror planes that correspond to $M_xT$ and \{$M_y|\tau=c/2$\} are marked by dashed lines.\n(c) Symmetries in momentum space, $M_y$, $M_xT$, and $M_zT$.\nIf a Weyl point appears at $(k_x,k_y,k_z)$, eight Weyl points in total can be generated at $(\pm k_x,\pm k_y,\pm k_z)$ by the above three symmetry operations. For convenience, we choose the $k_y=\pi$ plane for $M_y$ here.\n }\n \label{stru}\n\end{figure}\n\n\begin{table}\n\caption{\nPositions and energies of the Weyl points in the first Brillouin zone of Mn$_3$Sn.\nThe positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.\nEnergies are relative to the Fermi energy $E_F$.\nEach type of Weyl point has four copies whose coordinates can be generated\nfrom the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.\n}\n\label{table:Mn3Sn}\n\centering\n\begin{tabular}{cccccc}\n\toprule\n\hline\nWeyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\\n\hline\nW$_1$ & $-0.325$ & 0.405 & 0.000 & $-$ & 86 \\\nW$_2$ & $-0.230$ & 0.356 & 0.003 & + & 158 \\\nW$_3$ & $-0.107$ & 0.133 & 0.000 & $-$ & 493 \\\n\hline\n\end{tabular}\n\end{table}\n\begin{table}\n\caption{\nPositions and energies of the Weyl points in the first Brillouin zone of Mn$_3$Ge.\nThe positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.\nEnergies are relative to the Fermi energy $E_F$.\nEach of W$_{1,2,7}$ has four copies whose coordinates can be generated\nfrom the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.\nW$_4$ has four copies at $(k_x \approx 0, \pm k_y, \pm k_z)$ and\nW$_9$ has two copies at $(k_x \approx 0, \pm k_y, k_z =0)$.\nEach of the other Weyl points has four copies whose coordinates can be generated\nfrom the symmetry as $(\pm k_x, \pm k_y, 
\\pm k_z)$.\n} \\label{table:Mn3Ge}\n\\centering\n\\begin{tabular}{@{}cccccc@{}}\n\\toprule\n\\hline\nWeyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\\\\n\\hline\nW$_1$ & $-0.333$ & 0.388 & $-0.000$ & $-$ & 57 \\\\\nW$_2$ & 0.255 & 0.378 & $-0.000$ & + & 111 \\\\\nW$_3$ & $-0.101$ & 0.405 & 0.097 & $-$ & 48 \\\\\nW$_4$ & $-0.004$ & 0.419 & 0.131 & + & 8 \\\\\nW$_5$ & $-0.048$ & 0.306 & 0.164 & + & 77 \\\\\nW$_6$ & 0.002 & 0.314 & 0.171 & $-$ & 59 \\\\\nW$_7$ & $-0.081$ & 0.109 & 0.000 & + & 479 \\\\\nW$_8$ & 0.069 & $-0.128$ & 0.117 & + & 330 \\\\\nW$_9$ & 0.004 & $-0.149$ & $-0.000$ & + & 470 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Weyl points in the bulk band structure}\n\nThe bulk band structures of Mn$_3$Ge and Mn$_3$Sn are shown along high-symmetry lines in Fig.~\\ref{bandstrucure}. It is not surprising that the two materials exhibit similar band dispersions.\nAt first glance, one can find two apparent band-degeneracy points at the $Z$ and $K$ points, which lie below the Fermi energy. Because of $M_z T$ and the nonsymmorphic symmetry \\{$M_y|\\tauup=c/2$\\}, the bands are supposed to be quadruply degenerate at the Brillouin zone boundary $Z$, forming a Dirac point protected by the nonsymmorphic space group~\\cite{Young2012,Schoop2015,Tang2016}. Given the slight mirror symmetry breaking by the residual net magnetic moment, this Dirac point is gapped at $Z$ (as shown in the enlarged panel) and splits into four Weyl points, which are very close to each other in $k$ space. A tiny gap also appears at the $K$ point, near which two additional Weyl points appear. Since the separations between these Weyl points are very small near both the $Z$ and $K$ points, they may produce few observable consequences in experiments such as those studying Fermi arcs. 
Therefore, we will not focus on them in the following investigation.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{figure2.png}\n\\end{center}\n\\caption{\nBulk band structures for (a) Mn$_3$Sn and (b) Mn$_3$Ge along high-symmetry lines with SOC.\nThe bands near the $Z$ and $K$ points (indicated by red circles) are expanded to show details in (a).\nThe Fermi energy is set to zero.}\n\\label{bandstrucure}\n\\end{figure}\n\nMn$_3$Sn and Mn$_3$Ge are actually metallic, as seen from the band structures. However, we retain the terminology of Weyl semimetal for simplicity and consistency. The valence and conduction bands cross each other many times near the Fermi energy, generating multiple pairs of Weyl points. We first investigate the Sn compound. Supposing that the total valence electron number is $N_v$, we search for the crossing points between the $N_v ^{\\rm th}$ and $(N_v +1) ^{\\rm th}$ bands.\n\nAs shown in Fig.~\\ref{bc_Mn3Sn}a, there are six pairs of Weyl points in the first Brillouin zone; these can be classified into three groups according to their positions, denoted W$_1$, W$_2$, and W$_3$. These Weyl points lie in the $M_z$ plane (with W$_2$ points being only slightly off this plane owing to the residual-moment-induced symmetry breaking) and slightly above the Fermi energy. Therefore, each of them has four copies according to the symmetry analysis in Eq.~\\ref{symmetry}.\n Their representative coordinates and energies are listed in Table~\\ref{table:Mn3Sn} and also indicated in Fig.~\\ref{bc_Mn3Sn}a. A Weyl point (e.g., W$_1$ in Figs.~\\ref{bc_Mn3Sn}b and~\\ref{bc_Mn3Sn}c) acts as a source or sink of the Berry curvature $\\mathbf{\\Omega}$, clearly showing the monopole feature with a definite chirality.\n\nIn contrast to Mn$_3$Sn, Mn$_3$Ge displays many more Weyl points. As shown in Fig.~\\ref{bc_Mn3Ge}a and listed in Table~\\ref{table:Mn3Ge}, there are nine groups of Weyl points. 
Here W$_{1,2,7,9}$ lie in the $M_z$ plane with W$_9$ on the $k_y$ axis, W$_4$ appears in the $M_x$ plane, and the others are in generic positions. Therefore, there are four copies of each of W$_{1,2,4,7}$, two copies of W$_9$, and eight copies of each of the other Weyl points.\nAlthough there are many other Weyl points at higher energies owing to different band crossings, we mainly focus on the Weyl points listed above, which are close to the Fermi energy. The monopole-like distribution of the Berry curvature near these Weyl points is verified; see W$_1$ in Fig.~\\ref{bc_Mn3Ge} as an example.\nWithout including SOC, we observe a nodal-ring-like band crossing in the band structures of both Mn$_3$Sn and Mn$_3$Ge. SOC gaps the nodal rings but leaves isolated band-touching points, i.e., Weyl points. Since Mn$_3$Sn exhibits stronger SOC than Mn$_3$Ge, many Weyl points with opposite chirality may be pushed together by the strong SOC in Mn$_3$Sn and annihilate each other. This might be why Mn$_3$Sn exhibits fewer Weyl points than Mn$_3$Ge.\n\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{figure3.png}\n \\end{center}\n \\caption{Surface states of Mn$_3$Sn.\n(a) Distribution of Weyl points in momentum space.\nBlack and white points represent Weyl points with $-$ and $+$ chirality, respectively. \n(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.\n(d) Fermi surface at $E_F= 86$ meV crossing the W$_1$ Weyl points.\nThe color represents the surface LDOS.\nTwo pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.\n(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.\n(f) Surface band structure along the white horizontal line indicated in (d). 
Here p1 and p2 are the chiral states corresponding to the Fermi arcs.\n}\n \\label{bc_Mn3Sn}\n\\end{figure}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{figure4.png}\n \\end{center}\n \\caption{ Surface states of Mn$_3$Ge.\n(a) Distribution of Weyl points in momentum space.\nBlack and white points represent Weyl points with $-$ and $+$ chirality, respectively. Larger points indicate two Weyl points ($\\pm k_z$) projected into this plane.\n(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.\n(d) Fermi surface at $E_F= 55$ meV crossing the W$_1$ Weyl points.\nThe color represents the surface LDOS.\nTwo pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.\n(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.\n(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.\n}\n \\label{bc_Mn3Ge}\n\\end{figure}\n\n\n\\subsection{Fermi arcs on the surface}\n\nThe existence of Fermi arcs on the surface is one of the most significant consequences of Weyl points inside the three-dimensional (3D) bulk. We first investigate the surface states of Mn$_3$Sn, which has a simpler bulk band structure with fewer Weyl points. When the W$_{2,3}$ Weyl points are projected onto the (001) surface, they overlap with other bulk bands, which overwhelm the surface states. Fortunately, the W$_1$ Weyl points are visible on the Fermi surface. When the Fermi energy crosses them, the W$_1$ Weyl points appear as the touching points of neighboring hole and electron pockets. Therefore, they are typical type-II Weyl points~\\cite{Soluyanov2015WTe2}. Indeed, their energy dispersions demonstrate strongly tilted Weyl cones.\n\nThe Fermi surface of the surface band structure is shown in Fig.~\\ref{bc_Mn3Sn}d for the Sn compound. 
In each corner of the surface Brillouin zone, a pair of W$_1$ Weyl points exists with opposite chirality. Connecting such a pair of Weyl points, a long Fermi arc appears in both the Fermi surface (Fig.~\\ref{bc_Mn3Sn}d) and the band structure (Fig.~\\ref{bc_Mn3Sn}e). Although the projection of the bulk bands exhibits the pseudo-symmetry of a hexagonal lattice, the surface Fermi arcs do not. It is clear that the Fermi arcs originating from two neighboring Weyl pairs (see Fig.~\\ref{bc_Mn3Sn}d) do not exhibit $M_x$ reflection symmetry, because the chirality of the Weyl points manifestly violates $M_x$ symmetry. For a generic $k_x$--$k_z$ plane between each pair of W$_1$ Weyl points, the net Berry flux points in the $-k_y$ direction. As a consequence, the Fermi velocities of both Fermi arcs point in the $+k_x$ direction on the bottom surface (see Fig.~\\ref{bc_Mn3Sn}f). These two right movers are consistent with the nonzero net Berry flux, i.e., a Chern number of $2$.\n\nFor Mn$_3$Ge, we also focus on the W$_1$-type Weyl points at the corners of the hexagonal Brillouin zone. In contrast to Mn$_3$Sn, Mn$_3$Ge exhibits a more complicated Fermi surface. Fermi arcs exist that connect a pair of W$_1$-type Weyl points with opposite chirality, but they are divided into three pieces as shown in Fig.~\\ref{bc_Mn3Ge}d. In the band structures (see Figs.~\\ref{bc_Mn3Ge}e and f), these three pieces are indeed connected together as a single surface state. Crossing a line between two pairs of W$_1$ points, one can find two right movers in the band structure, which are indicated as p1 and p2 in Fig.~\\ref{bc_Mn3Ge}f. 
The existence of two chiral surface bands is consistent with a nontrivial Chern number between these two pairs of Weyl points.\n\n\\section{Summary}\n\nIn summary, we have discovered the Weyl semimetal state in the chiral AFM compounds Mn$_3$Sn and Mn$_3$Ge by {\\it ab~initio} band structure calculations.\nMultiple Weyl points were observed in the bulk band structures, most of which are type II.\nThe positions and chirality of the Weyl points are in accordance with the symmetry of the magnetic lattice.\nFor both compounds, Fermi arcs were found on the surface, each of which connects a pair of Weyl points with opposite chirality, calling for further experimental investigations such as angle-resolved photoemission spectroscopy.\nThe discovery of Weyl points verifies the large anomalous Hall conductivity observed recently in the title compounds.\nOur work further reveals a guiding principle for searching for Weyl semimetals among materials\nthat exhibit a strong anomalous Hall effect.\n\n\\begin{acknowledgments}\nWe thank Claudia Felser, J{\\\"u}rgen K{\\\"u}bler and Ajaya K. Nayak for helpful discussions.\nWe acknowledge the Max Planck Computing and Data Facility (MPCDF) and Shanghai Supercomputer Center for computational resources and the German Research Foundation (DFG) SFB-1143 for financial support.\n\\end{acknowledgments}\n\n\n", "meta": {"timestamp": "2016-08-18T02:05:38", "yymm": "1608", "arxiv_id": "1608.03404", "language": "en", "url": "https://arxiv.org/abs/1608.03404"}} {"text": "\\section{Introduction}\n\nConformal invariance was first recognized to be of physical interest when it was realized that the Maxwell equations are covariant under the $15$-dimensional conformal group \\cite{Cu,Bat}, a fact that motivated a more detailed analysis of conformal invariance in other physical contexts such as General Relativity, Quantum Mechanics or high energy physics \\cite{Ful}. 
These applications further suggested studying conformal invariance in connection with the physically relevant groups, among which the Poincar\\'e and Galilei groups were the first to be considered. In this context, conformal extensions of the Galilei group have been considered in Galilei-invariant field theories, in the study of possible dynamics of interacting particles, as well as in the nonrelativistic AdS/CFT correspondence\n\\cite{Bar54,Hag,Hav,Zak,Fig}. Special cases such as the (centrally extended) Schr\\\"odinger algebra $\\widehat{\\mathcal{S}}(n)$, corresponding to the maximal invariance group of the \nfree Schr\\\"odinger equation, have been studied in detail by various authors, motivated by different applications such as the kinematical invariance of hierarchies of partial differential equations, Appell systems, quantum groups or representation theory \\cite{Ni72,Ni73,Do97,Fra}. The class of Schr\\\"odinger algebras can be generalized in a natural manner to the so-called conformal Galilei algebras $\\mathfrak{g}_{\\ell}(d)$ for (half-integer) values $\\ell\\geq \\frac{1}{2}$, \nalso corresponding to semidirect products of the semisimple Lie algebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ with a Heisenberg algebra but with a higher-dimensional characteristic representation.\\footnote{By characteristic representation we mean the representation of $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ that describes the action on the Heisenberg algebra.} Such algebras, which can be interpreted as a nonrelativistic analogue of the conformal algebra, have been used in a variety of contexts, ranging from classical (nonrelativistic) mechanics, electrodynamics and fluid dynamics to higher-order Lagrangian mechanics \\cite{Ai12,Tac,Du11,St13}.\nThe algebraic structure of the conformal Galilei algebra $\\mathfrak{g}_{\\ell}(d)$ for values of $\\ell\\geq \\frac{3}{2}$ and its representations have been analyzed in some detail, and algorithmic procedures to 
compute their Casimir operators have been proposed (see e.g. \\cite{Als17,Als19} and references therein). In the recent note \\cite{raub}, a synthetic formula for the Casimir operators of the $\\mathfrak{g}_{\\ell}(d)$ algebra has been given. Although not cited explicitly, the \nprocedure used there corresponds to the so-called ``virtual-copy'' method, a well-known technique that makes it possible to compute the Casimir operators of a Lie algebra using those of its maximal semisimple subalgebra (\\cite{Que,C23,C45,SL3} and references therein). \n\n\\medskip\n\\noindent \nIn this work, we first propose a further generalization of the conformal Galilei algebras $\\mathfrak{g}_{\\ell}(d)$, replacing the $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(d)$ subalgebra of the latter by the semisimple Lie algebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$. As the defining representation $\\rho_d$ of $\\mathfrak{so}(p,q)$ is real for all values $p+q=d$ \\cite{Tits}, the structure of a semidirect product with a Heisenberg Lie algebra remains unaltered. The Lie algebras $\\mathfrak{Gal}_{\\ell}(p,q)$ describe a class of semidirect products of semisimple and Heisenberg Lie algebras among which $\\mathfrak{g}_{\\ell}(d)$ corresponds to the case with the largest maximal compact subalgebra. \nUsing the method developed in \\cite{C45}, we construct a virtual copy of $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$ in the enveloping algebra of $\\mathfrak{Gal}_{\\ell}(p,q)$ for all half-integer values of $\\ell$ and any $d=p+q\\geq 3$. The Casimir operators of these Lie algebras are determined by combining the analytical and matrix trace methods, showing how to compute them explicitly in terms of the determinant of a polynomial matrix. 
\n\n\n\\medskip\n\\noindent We further determine the exact number of Casimir operators for the unextended Lie algebras $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ obtained by factorizing \n$\\mathfrak{Gal}_{\\ell}(p,q)$ by its centre. Using the reformulation of the Beltrametti-Blasi formula in terms of the Maurer-Cartan equations, we show that, although the number $\\mathcal{N}$ of invariants increases considerably for fixed $\\ell$ and varying $d$, a generic polynomial formula, at most quadratic in $\\ell$ and $d$, can be established that gives the exact value of $\\mathcal{N}$. Depending on whether the relation $d\\leq 2\\ell+2$ is satisfied, it is shown that $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ admits a complete set of invariants formed by operators that do not depend on the generators of the Levi subalgebra. An algorithmic procedure to compute these invariants by means of a reduction to a linear system is proposed. \n \n\n\\section{Maurer-Cartan equations of Lie algebras and Casimir operators }\n\nGiven a Lie algebra $ \\frak{g}=\\left\\{X_{1},..,X_{n}\\; |\\;\n\\left[X_{i},X_{j}\\right]=C_{ij}^{k}X_{k}\\right\\}$ in terms of\ngenerators and commutation relations, we are principally interested\nin (polynomial) operators\n$C_{p}=\\alpha^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$ in the\ngenerators of $\\frak{g}$ such that the constraint $\n\\left[X_{i},C_{p}\\right]=0$,\\; ($i=1,..,n$) is satisfied. Such an\noperator can be shown to lie in the centre of the enveloping\nalgebra of $\\frak{g}$ and is called a (generalized) Casimir\noperator. For semisimple Lie algebras, the determination of\nCasimir operators can be done using structural properties\n\\cite{Ra,Gel}. However, for non-semisimple Lie algebras the relevant\ninvariant functions are often rational or even transcendental\nfunctions \\cite{Bo1,Bo2}. This suggests developing a method that\ncovers arbitrary Lie algebras. One convenient approach is the\nanalytical realization. 
The generators of the Lie algebra\n$\\frak{g}$ are realized in the space $C^{\\infty }\\left(\n\\frak{g}^{\\ast }\\right) $ by means of the differential operators:\n\\begin{equation}\n\\widehat{X}_{i}=C_{ij}^{k}x_{k}\\frac{\\partial }{\\partial x_{j}},\n\\label{Rep1}\n\\end{equation}\nwhere $\\left\\{ x_{1},..,x_{n}\\right\\}$ are the coordinates in a dual basis of\n$\\left\\{X_{1},..,X_{n}\\right\\} $. The invariants of $\\frak{g}$ hence correspond to solutions of the following\nsystem of partial differential equations:\n\\begin{equation}\n\\widehat{X}_{i}F=0,\\quad 1\\leq i\\leq n. \\label{sys}\n\\end{equation}\nWhenever we have a polynomial solution of (\\ref{sys}), the\nsymmetrization map defined by\n\\begin{equation}\n{\\rm Sym(}x_{i_{1}}^{a_{1}}..x_{i_{p}}^{a_{p}})=\\frac{1}{p!}\\sum_{\\sigma\\in\nS_{p}}x_{\\sigma(i_{1})}^{a_{1}}..x_{\\sigma(i_{p})}^{a_{p}}\\label{syma}\n\\end{equation}\nallows one to rewrite the Casimir operators in their usual form \nas central elements in the enveloping algebra of $\\frak{g}$,\nafter replacing the variables $x_{i}$ by the corresponding\ngenerators $X_{i}$. A maximal set of functionally\nindependent invariants is usually called a fundamental basis. The\nnumber $\\mathcal{N}(\\frak{g})$ of functionally independent\nsolutions of (\\ref{sys}) is obtained from the classical criteria\nfor differential equations, and is given by the formula \n\\begin{equation}\n\\mathcal{N}(\\frak{g}):=\\dim \\,\\frak{g}- {\\rm\nsup}_{x_{1},..,x_{n}}{\\rm rank}\\left( C_{ij}^{k}x_{k}\\right),\n\\label{BB}\n\\end{equation}\nwhere $A(\\frak{g}):=\\left(C_{ij}^{k}x_{k}\\right)$ is the matrix\nassociated to the commutator table of $\\frak{g}$ over the given\nbasis \\cite{Be}.\\newline \nThe reformulation of condition (\\ref{BB}) in terms of differential forms (see e.g. \\cite{C43})\nallows one to compute $\\mathcal{N}(\\frak{g})$ quite efficiently and even to \nobtain the Casimir\noperators under special circumstances \\cite{Peci,C72}. 
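As a small illustration of formulas (\ref{Rep1})--(\ref{BB}), the following sketch (in Python with SymPy; the variable names and the choice of $\frak{sl}(2,\mathbb{R})$ as test algebra are ours) checks that the generic rank of $A(\frak{g})$ is $2$, so that $\mathcal{N}(\frak{g})=1$, and that the quadratic invariant $F=h^{2}+4ef$ solves the system (\ref{sys}):

```python
import sympy as sp

# sl(2,R) with ordered basis (H, E, F): [H,E] = 2E, [H,F] = -2F, [E,F] = H.
# (h, e, f) are the coordinates on the dual space g*.
h, e, f = sp.symbols('h e f')
xs = [h, e, f]
n = 3
C = {(0, 1, 1): 2, (1, 0, 1): -2,    # structure constants C_ij^k
     (0, 2, 2): -2, (2, 0, 2): 2,
     (1, 2, 0): 1, (2, 1, 0): -1}

def c(i, j, k):
    return C.get((i, j, k), 0)

# Commutator matrix A(g)_ij = C_ij^k x_k of formula (BB)
A = sp.Matrix(n, n, lambda i, j: sum(c(i, j, k) * xs[k] for k in range(n)))

# Beltrametti-Blasi formula: N(g) = dim g - generic rank of A(g)
N = n - A.rank()
print(N)  # -> 1: one functionally independent Casimir operator

# The operators (Rep1) annihilate F = h^2 + 4ef, i.e. F solves (sys);
# symmetrization (syma) then yields the Casimir C = H^2 + 2(EF + FE).
F = h**2 + 4*e*f
for i in range(n):
    XiF = sum(c(i, j, k) * xs[k] * sp.diff(F, xs[j])
              for j in range(n) for k in range(n))
    assert sp.expand(XiF) == 0
```

The same two-step computation (generic rank of $A(\frak{g})$, then a polynomial ansatz for the solutions of the PDE system) is what becomes computationally demanding for the algebras $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ discussed below.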
In terms of the\nMaurer-Cartan equations, the Lie algebra $\\frak{g}$\nis described as follows: If $\\left\\{ C_{ij}\n^{k}\\right\\} $ denotes the structure tensor over the basis $\\left\\{ X_{1},..,X_{n}\\right\\} $,\nthe identification of the dual space $\\frak{g}^{\\ast}$ with the\nleft-invariant 1-forms on the simply connected Lie group the Lie algebra of which is isomorphic to $\\frak{g}$ allows one to define an exterior\ndifferential $d$ on $\\frak{g}^{\\ast}$ by\n\\begin{equation}\nd\\omega\\left( X_{i},X_{j}\\right) =-C_{ij}^{k}\\omega\\left(\nX_{k}\\right) ,\\;\\omega\\in\\frak{g}^{\\ast}.\\label{MCG}\n\\end{equation}\nUsing the coboundary operator $d$, we rewrite $\\frak{g}$ as a\nclosed system of $2$-forms%\n\\begin{equation}\nd\\omega_{k}=-C_{ij}^{k}\\omega_{i}\\wedge\\omega_{j},\\;1\\leq\ni<j\\leq n.\\label{MCA}\n\\end{equation}\nIn the following, we distinguish the cases $d\\leq 2\\ell+2$ and $d>2\\ell+2$. \n\n\n\\begin{enumerate}\n\\item Let $d=p+q\\leq 2\\ell +2$. In this case the dimension of the characteristic representation $\\Gamma$ is clearly larger than that of the Levi subalgebra, so that a 2-form of maximal rank can be constructed using only the differential forms associated to the generators $P_{n,k}$. 
Consider the 2-form in (\\ref{MCA}) given by $\\Theta=\\Theta_1+\\Theta_2$, where \n\\begin{eqnarray}\n\\Theta_1=d\\sigma_{0,1}+d\\sigma_{2\\ell,d}+d\\sigma_{2\\ell-1,d-1},\\; \n\\Theta_2=\\sum_{s=1}^{d-4} d\\sigma_{s,s+1}.\\label{difo1}\n\\end{eqnarray}\nUsing the decomposition formula $\\bigwedge^{a}\\Theta=\\sum_{r=0}^{a} \\left(\\bigwedge^{r}\\Theta_1\\right) \\wedge \\left(\\bigwedge^{a-r}\\Theta_2\\right)$ we obtain that \n\\begin{eqnarray}\n\\fl \\bigwedge^{\\frac{1}{2}\\left(6-d+d^2\\right)}\\Theta= &\\bigwedge^{d+1}d\\sigma_{0,1}\\wedge\\bigwedge^{d-1}d\\sigma_{2\\ell,d}\\wedge\\bigwedge^{d-3}d\\sigma_{2\\ell-1,d-1}\\wedge\n\\bigwedge^{d-4}d\\sigma_{1,2}\\wedge\\nonumber\\\\\n& \\wedge\\bigwedge^{d-5}d\\sigma_{2,3}\\wedge\\bigwedge^{d-6}d\\sigma_{3,4}\\wedge\\cdots \\bigwedge^{2}d\\sigma_{d-5,d-4}\\wedge d\\sigma_{d-4,d-3}+\\cdots \\neq 0.\\label{pro2}\n\\end{eqnarray}\nAs $\\frac{1}{2}\\left(6-d+d^2\\right)=\\dim\\left(\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)\\right)$, the 2-form $\\Theta$ is necessarily of maximal rank, as all the generators of the Levi subalgebra appear in some term of the product (\\ref{pro2}) and no products of higher rank are possible due to the Abelian nilradical. We therefore conclude that $j(\\mathfrak{g})=\\frac{1}{2}\\left(6-d+d^2\\right)$ and by formula (\\ref{BB1}) we have \n\\begin{equation}\n\\mathcal{N}(\\mathfrak{g})= \\frac{1}{2}\\left(4\\ell d+3d-d^2-6\\right).\\label{inva1}\n\\end{equation}\n\n\\item Now let $d \\geq 2\\ell +3$. The main difference with respect to the previous case is that a generic form $\\omega\\in\\mathcal{L}(\\mathfrak{g})$ of maximal rank must necessarily contain linear combinations of the 2-forms $d\\omega_{i,j}$ corresponding to the semisimple part of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$. 
Let us consider first the 2-form \n\\begin{equation}\n\\Xi_1= \\Theta_1+\\Theta_2,\n\\end{equation}\nwhere $\\Theta_1$ is the same as in (\\ref{difo1}) and $\\Theta_2$ is defined as\n\\begin{equation}\n\\Theta_2=\\sum_{s=0}^{2\\ell-3} d\\sigma_{1+s,2+s}.\n\\end{equation}\nIn analogy with the previous case, for the index $\\mu_1=(2\\ell+1)d+(\\ell+2)(1-2\\ell)$ the first term of the following product does not vanish: \n\\begin{equation}\n\\fl \\bigwedge^{\\mu_1}\\Xi_1=\\bigwedge^{d+1}d\\sigma_{0,1}\\bigwedge^{d-1}d\\sigma_{2\\ell,d}\\bigwedge^{d-3}d\\sigma_{2\\ell-1,d-1} \n\\bigwedge^{d-4}d\\sigma_{1,2}\\cdots \\bigwedge^{d-1-2\\ell}d\\sigma_{2\\ell-2,2\\ell-1}+\\cdots \\neq 0.\\label{Pot1}\n\\end{equation}\nThis form, although not maximal in $\\mathcal{L}(\\mathfrak{g})$, is indeed of maximal rank when restricted to the subspace $\\mathcal{L}(\\mathfrak{r})$ generated by the 2-forms $d\\sigma_{n,k}$ with $0\\leq n\\leq 2\\ell$, $1\\leq k\\leq d$. \nThis means that the wedge product of $\\bigwedge^{\\mu_1}\\Xi_1$ with any other $d\\sigma_{n,k}$ is identically zero. Hence, in order to construct a 2-form of maximal rank in $\\mathcal{L}(\\mathfrak{g})$, we have to consider a 2-form $\\Xi_2$ that is a linear combination of the differential forms associated to the generators of the Levi subalgebra of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$. As follows at once from (\\ref{Pot1}), the forms $\\theta_1,\\theta_2,\\theta_3$ associated to $\\mathfrak{sl}(2,\\mathbb{R})$-generators have already appeared, thus it suffices to restrict our analysis to linear combinations of the forms $d\\omega_{i,j}$ corresponding to the pseudo-orthogonal Lie algebra $\\mathfrak{so}(p,q)$. 
Specifically, we make the choice \n\\begin{equation}\n\\Xi_2= \\sum_{s=0}^{\\nu}d\\omega_{3+2s,4+2s},\\quad \\nu=\\frac{1}{4}\\left(2d-4\\ell-9+(-1)^{1+d}\\right).\n\\end{equation} \nConsider the integer $\\mu_2=\\frac{1}{4}\\left(11+(d-4\\ell)(1+d)-4\\ell^2-2\\left[\\frac{d}{2}\\right]\\right)$ and take the 2-form $\\Xi=\\Xi_1+\\Xi_2$. A long but routine computation shows that the following identity is satisfied:\n\\begin{eqnarray}\n\\fl \\bigwedge^{\\mu_1+\\mu_2}\\Xi =& \\left(\\bigwedge^{\\mu_1}\\Xi_1\\right)\\wedge \\left(\\bigwedge^{\\mu_2}\\Xi_2\\right) \\nonumber\\\\\n& = \\left(\\bigwedge^{\\mu_1}\\Xi_1\\right)\\wedge\\bigwedge^{d-6}d\\omega_{3,4}\\bigwedge^{d-8}d\\omega_{5,6}\\cdots \\bigwedge^{d-6-2\\nu}d\\omega_{3+2\\nu,4+2\\nu}+\\cdots \\neq 0.\\label{pro1}\n\\end{eqnarray}\nWe observe that this form involves $\\mu_1+2\\mu_2$ forms $\\omega_{i,j}$ from $\\mathfrak{so}(p,q)$, hence there remain $\\frac{d(d-1)}{2}-\\mu_1-2\\mu_2$ elements of the pseudo-orthogonal algebra that do not appear in the first term in (\\ref{pro1}). From this product and (\\ref{MCA}) it can be seen that these uncovered elements are of the type $\\left\\{\\omega_{i_1,i_1+1},\\omega_{i_2,i_2+1},\\cdots \\omega_{i_r,i_r+1}\\right\\}$ with the subindices satisfying $i_{\\alpha+1}-i_{\\alpha}\\geq 2$ for $1\\leq \\alpha\\leq r$, from which we deduce that no other 2-form $d\\omega_{i_\\alpha,i_\\alpha+1}$, when multiplied with $\\bigwedge^{\\mu_1+\\mu_2}\\Xi $, will be different from zero. 
\nWe conclude that $\\Xi$ has maximal rank equal to $j_0(\\mathfrak{g})=\\mu_1+\\mu_2$, thus applying (\\ref{BB1}) we find that \n\\begin{equation}\n\\fl \\mathcal{N}(\\mathfrak{g})= 3 + \\frac{d(d-1)}{2}+ (2 \\ell + 1) d-2(\\mu_1+\\mu_2)= 2\\ell^2+2\\ell-\\frac{5}{2}+\\left[\\frac{d}{2}\\right],\n\\end{equation}\nas asserted.\n\\end{enumerate}\n\n\\medskip\n\\noindent In Table \\ref{Tabelle1} we give the numerical values for the number of Casimir operators of the Lie algebras $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ with $d=p+q\\leq 12$, and where the linear increment with respect to $\\ell$ can be easily recognized. \n \n\\smallskip\n\\begin{table}[h!] \n\\caption{\\label{Tabelle1} Number of Casimir operators for $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$.}\n\\begin{indented}\\item[]\n\\begin{tabular}{c||cccccccccc}\n$\\;d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\\\\\hline \n{$\\ell=\\frac{1}{2}$} & $2$ & $3$ & $3$ & $4$ & $4$ & $5$ & $5$\n& $6$ & $6$ & $7$ \\\\ \n{$\\ell=\\frac{3}{2}$} & $6$ & $7$ & $7$ & $8$ & $8$ & $9$ & $9$\n& $10$ & $10$ & $11$ \\\\ \n{$\\ell=\\frac{5}{2}$} & $12$ & $15$ & $17$ & $18$ & $18$ & $19$\n& $19$ & $20$ & $20$ & $21$ \\\\ \n{$\\ell=\\frac{7}{2}$} & $18$ & $23$ & $27$ & $30$ & $32$ & $33$\n& $33$ & $34$ & $34$ & $35$ \\\\ \n{$\\ell=\\frac{9}{2}$} & $24$ & $31$ & $37$ & $42$ & $46$ & $49$\n& $51$ & $52$ & $52$ & $53$ \\\\ \n{$\\ell=\\frac{11}{2}$} & $30$ & $39$ & $47$ & $54$ & $60$ & $65\n$ & $69$ & $72$ & $74$ & $75$%\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\medskip\n\\noindent As follows from a general property concerning virtual copies \\cite{C45}, Lie algebras of the type $\\mathfrak{g}=\\mathfrak{s}\\overrightarrow{\\oplus} \\mathfrak{r}$ with an Abelian radical $\\mathfrak{r}$ do not admit virtual copies of $\\mathfrak{s}$ in $\\mathcal{U}\\left(\\mathfrak{g}\\right)$. 
Thus for Lie algebras of this type the Casimir invariants must be computed either directly from system (\\ref{sys}) or by some other procedure. Among the class $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$, an exception is given by the unextended (pseudo-)Schr\\\"odinger algebra $\\overline{\\mathfrak{Gal}}_{\\frac{1}{2}}(p,q)\\simeq \\mathcal{S}(p,q)$, whose invariants can be deduced from those of the central extension $\\widehat{\\mathcal{S}}(p,q)$ by the widely used method of contractions (see e.g. \\cite{IW,We}). For the remaining values $\\ell\\geq \\frac{3}{2}$ the contraction procedure is useless in practice, given the high number of invariants. However, an interesting property concerning the invariants of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ emerges when we try to find the Casimir operators $F$ that only depend on the variables $p_{n,k}$ associated to the generators $P_{n,k}$ of the radical, i.e., such that the condition \n\\begin{equation}\n\\frac{\\partial F}{\\partial x}=0,\\quad \\forall x\\in\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)\\label{kond}\n\\end{equation}\nis satisfied. As will be shown next, the number of such solutions tends to stabilize for high values of $d=p+q$, showing that almost any invariant will depend on all of the variables in $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$, implying that finding a complete set of invariants is a computationally formidable task, as there is currently no general method to derive these invariants in closed form. \n\n\\begin{proposition}\nLet $\\ell\\geq \\frac{3}{2}$. 
For sufficiently large $d$, the number of Casimir invariants of $\\overline{\\mathfrak{Gal}}_{\\ell}(p,q)$ depending only on the variables $p_{n,k}$ of the Abelian radical is constant and given by \n\\begin{equation}\n\\mathcal{N}_1(S)=2\\ell^2+3\\ell-2.\\label{sr2}\n\\end{equation}\n\\end{proposition}\n\n\\noindent The proof follows by analyzing the rank of the subsystem of (\\ref{sys}) corresponding to the differential operators $\\widehat{X}$ associated to the generators of the Levi subalgebra $\\mathfrak{sl}(2,\\mathbb{R})\\oplus\\mathfrak{so}(p,q)$ and such that condition (\\ref{kond}) is fulfilled. Specifically, this leads to the system $S$ of PDEs\n\\begin{eqnarray}\n\\widehat{D}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} (2\\ell-n)p_{n,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\; \n\\widehat{H}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} n p_{n-1,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\nonumber\\\\\n\\widehat{C}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} (2\\ell-n)p_{n+1,i}\\frac{\\partial F}{\\partial p_{n,i}}=0,\\label{kond2}\\\\\n\\widehat{E}_{j,k}^{\\prime}(F):=\\sum_{n=0}^{2\\ell}\\sum_{i=1}^{d} \\left( g_{ij} p_{n,k} -g_{ik} p_{n,j}\\right) \\frac{\\partial F}{\\partial p_{n,i}}=0,\\; 1\\leq j<k\\leq d.\\nonumber\n\\end{eqnarray}\nFor $d>2\\ell+2$, those invariants of $\\mathfrak{Gal}_{\\ell}(p,q)$ satisfying the condition (\\ref{kond}) can be easily computed by means of a reduction argument that leads to a linear system. To this end, consider the last of the equations in (\\ref{kond2}). As the generators of $\\mathfrak{so}(p,q)$ permute the generators of the Abelian radical, it is straightforward to verify that the quadratic polynomials \n\\begin{equation}\n\\Phi_{n,s}= \\sum_{k=1}^{d} \\frac{g_{11}}{g_{kk}}\\;p_{n,k}p_{n+s,k},\\quad 0\\leq n\\leq 2\\ell,\\; 0\\leq s\\leq 2\\ell-n,\\label{ELE}\n\\end{equation}\nare actually solutions of these equations. Indeed, any solution of the type (\\ref{kond}) is built up from these functions. 
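As a consistency check of (\ref{ELE}), the following sketch (Python with SymPy; the conventions are ours, with indices shifted to start at zero and the illustrative choice $d=3$, $\ell=3/2$, $g={\rm diag}(1,1,-1)$) verifies symbolically that $\Phi_{0,1}$ is annihilated by all pseudo-orthogonal operators $\widehat{E}'_{j,k}$, and that $\widehat{H}'(\Phi_{0,1})=\Phi_{0,0}$, in line with the invariance of the set of the $\Phi_{n,s}$ under $\mathfrak{sl}(2,\mathbb{R})$ stated below:

```python
import sympy as sp

# Illustrative case d = 3, l = 3/2 (so 2l = 3), metric g = diag(1, 1, -1),
# i.e. so(2,1); p[n][i] are the coordinates of the Abelian radical.
d, two_l = 3, 3
g = [1, 1, -1]
p = [[sp.Symbol(f'p{n}{i}') for i in range(d)] for n in range(two_l + 1)]

def Phi(n, s):
    # The quadratic polynomials (ELE)
    return sum(sp.Rational(g[0], g[k]) * p[n][k] * p[n + s][k] for k in range(d))

def E_op(F, j, k):
    # Operator E'_{j,k} of system (kond2), for a diagonal metric g
    return sum((g[i]*(i == j)*p[n][k] - g[i]*(i == k)*p[n][j]) * sp.diff(F, p[n][i])
               for n in range(two_l + 1) for i in range(d))

def H_op(F):
    # Operator H' of system (kond2)
    return sum(n * p[n - 1][i] * sp.diff(F, p[n][i])
               for n in range(1, two_l + 1) for i in range(d))

# Phi_{0,1} solves all pseudo-orthogonal equations of (kond2) ...
for j in range(d):
    for k in range(j + 1, d):
        assert sp.expand(E_op(Phi(0, 1), j, k)) == 0
# ... and sl(2,R) maps the Phi's into one another, e.g. H'(Phi_{0,1}) = Phi_{0,0}
assert sp.expand(H_op(Phi(0, 1)) - Phi(0, 0)) == 0
```

The same computation goes through for any signature $(p,q)$ of the diagonal metric, since the cancellations do not depend on the values $g_{kk}$.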
Let $\\mathcal{M}_d=\\left\\{\\Phi_{n,s},\\; 0\\leq n\\leq 2\\ell,\\; 0\\leq s\\leq 2\\ell-n\\right\\}$. The cardinality of this set is $2\\ell^2+3\\ell+1$, and we observe that not all of the elements in $\\mathcal{M}_d$ are independent. It follows by a short computation that \n\\begin{equation}\n\\widehat{D}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\; \\widehat{H}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\; \\widehat{C}^{\\prime}(\\mathcal{M}_d)\\subset \\mathcal{M}_d,\\label{ELE2}\n\\end{equation}\nshowing that this set is invariant under the action of $\\mathfrak{sl}(2,\\mathbb{R})$. Therefore, we can construct the solutions of system (\\ref{kond2}) recursively using polynomials in the new variables $\\Phi_{n,s}$. Specifically, renumbering the elements in $\\mathcal{M}_d$ as $\\left\\{u_{1},\\cdots ,u_{2\\ell^2+3\\ell+1}\\right\\}$, for any $r\\geq 2$ we define a polynomial of degree $2r$ as \n\\begin{equation}\n\\Psi_r= \\sum_{1\\leq i_1< \\cdots < i_r\\leq 2\\ell^2+3\\ell+1} u_{i_1}\\cdots u_{i_r}.\n\\end{equation}\n\\begin{thm}\\label{thm01}\nThe Hermite coefficient $\\lambda_b$ \nhas the following properties:\n\\begin{equation}\\label{eqn035}\n\\lambda_b\\geq 1~\\mathrm{and}~\\lim_{b\\rightarrow\\infty}\\lambda_b=1.\n\\end{equation}\n\\end{thm}\n\\begin{IEEEproof}\nSee Appendix \\ref{A}.\n\\end{IEEEproof}\n\nWith the ideal AGC, we assume that the input and output signals can be optimally scaled to meet the quantization boundaries.\n{\\em Theorem \\ref{thm01}} provides two implications: {\\em 1)} low-resolution quantizers can introduce a scalar ambiguity $\\lambda_b$, \nwhich often amplifies the input signal in the digital domain. The principle of how the signal is amplified is analytically explained in \nAppendix \\ref{A}; {\\em 2)} in the SOHE model, the scalar ambiguity vanishes as the resolution ($b$ or $M$) increases. This is in line \nwith the phenomenon that can be observed in reality. 
In other words, the SOHE model, together with the proof in Appendix \ref{A}, explains the scalar-ambiguity phenomenon observed in practice.

{\em 2)} Unlike other linear approximation models, the SOHE model does not impose the assumptions A1) and A2) (see Section \ref{sec2b}) onto the quantization noise $q_b$. Instead, $q_b$ is described as a function of the input signal $x$, and their statistical behaviors are analytically studied here.
\begin{thm}\label{thm02}
Suppose C1) the input signal $x$ satisfies $\mathbb{E}(x)=0$. Then the cross-correlation between $x$ and $q_b$ depends on the third-order central moments of $x$. When the input signal $x$ is AWGN, the quantization noise can be considered uncorrelated with the input signal. Moreover, for the case of $b\rightarrow\infty$, the following result holds
\begin{equation}\label{eqn036}
\lim_{b\rightarrow\infty}q_b(x)=0.
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{B}.
\end{IEEEproof}

The implication of {\em Theorem \ref{thm02}} is twofold: {\em 1)} the quantization noise cannot simply be assumed to be uncorrelated with the input signal; {\em Theorem \ref{thm02}} provides sufficient conditions for this hypothesis to hold; {\em 2)} due to the use of a second-order expansion of the quantization function, the SOHE-based quantization noise may not fully capture the characteristics of ideal quantization such as \eqref{eqn036}. However, {\em Theorem \ref{thm02}} confirms that, as the resolution increases, the quantization noise, which is a function of the input signal, approaches zero.

{\em Remark 2:}
It is worthwhile to note that, for complex-valued signals, the quantization process is applied individually in the real and imaginary domains.
Therefore, {\em Theorems \ref{thm01}--\ref{thm02}} apply straightforwardly to complex-valued input signals.


\subsection{The Vector-SOHE Model and Characteristics}
The vector representation of the SOHE model has no fundamental difference from the scalar-SOHE model presented in \eqref{eqn031}. It can be obtained by substituting \eqref{eqn031} into \eqref{eqn004}
\begin{IEEEeqnarray}{ll}\label{eqn037}
\mathbf{y}&=\lambda_b\mathbf{r}+\mathbf{q}_b,\\
&=\lambda_b\mathbf{H}\mathbf{s}+\underbrace{\lambda_b\mathbf{v}+\mathbf{q}_b}_{\triangleq\mat{\varepsilon}_b}.\label{eqn038}
\end{IEEEeqnarray}
The vector form of the quantization noise is specified by
\begin{equation}\label{eqn039}
\mathbf{q}_b=4\omega_2\Big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\Big)-2\omega_2,
\end{equation}
where $\Re(\mathbf{r})^2$ and $\Im(\mathbf{r})^2$ denote the element-wise (Hadamard) squares of the corresponding real vectors.
With {\em Theorem \ref{thm02}}, we can reach the following conclusion about the vector-SOHE model.

\begin{cor}\label{cor1}
Suppose that C2) each element of $\mathbf{H}$ is independently generated; and C3) the number of transmit antennas ($N$) is sufficiently large. Then the following cross-covariance matrix holds
\begin{equation}\label{eqn040}
\mathbf{C}_{qv}=\mathbb{E}(\mathbf{q}_b\mathbf{v}^H)=\mathbf{0}.
\end{equation}
\end{cor}
\begin{IEEEproof}
The condition C2) ensures that each element of the vector $[\mathbf{Hs}]$ is a sum of $N$ independently generated random variables. With the condition C3), the central limit theorem tells us that each element of $[\mathbf{Hs}]$ is asymptotically AWGN. Since the thermal noise $\mathbf{v}$ is AWGN and independent of $[\mathbf{Hs}]$, the received signal $\mathbf{r}$ is approximately AWGN.
In this case, {\em Theorem \ref{thm02}} tells us
\begin{equation}\label{eqn041}
\mathbf{C}_{qr}=\mathbb{E}(\mathbf{q}_b\mathbf{r}^H)=\mathbf{0}.
\end{equation}
Plugging \eqref{eqn003} into \eqref{eqn041} results in
\begin{IEEEeqnarray}{ll}\label{eqn042}
\mathbf{C}_{qr}&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs}+\mathbf{v})^H),\\
&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs})^H)+\mathbf{C}_{qv}=\mathbf{0}.\label{eqn043}
\end{IEEEeqnarray}
Since $\mathbf{v}$ is independent of $[\mathbf{Hs}]$, the only way for \eqref{eqn043} to hold is that both cross-covariance terms are zero. \eqref{eqn040} is therefore proved.
\end{IEEEproof}

\begin{cor}\label{cor2}
Given the conditions C2) and C3), the auto-covariance matrix of the quantization noise ($\mathbf{C}_{qq}$) has the following asymptotic form
\begin{equation}\label{eqn044}
\mathbf{C}_{qq}=4\omega_2^2\Big(4\sigma_r^4\mathbf{I}+(2\sigma_r^4-\sigma_r^2+1)(\mathbf{1}\otimes\mathbf{1}^T)\Big),
\end{equation}
where $\sigma_{r}^2$ denotes the variance of $r_k,\,\forall k$, when $N\rightarrow\infty$.
\end{cor}
\begin{IEEEproof}
See Appendix \ref{C}.
\end{IEEEproof}
\begin{thm}\label{thm03}
Suppose that C4) the information-bearing symbols $s_n,\,\forall n$, have their third-order central moments fulfilling the condition:
$\mathbb{E}(\Re(s_n)^3)=0$ and $\mathbb{E}(\Im(s_n)^3)=0$.
Then, the following cross-covariance holds
\begin{equation}\label{eqn045}
\mathbf{C}_{\varepsilon s}=\mathbb{E}(\mat{\varepsilon}_b\mathbf{s}^H)=\mathbf{0}.
\end{equation}
\end{thm}
\begin{IEEEproof}
The cross-covariance in \eqref{eqn045} can be computed as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{\varepsilon s}&=\mathbb{E}((\lambda_b\mathbf{v}+\mathbf{q}_b)\mathbf{s}^H),\label{eqn046}\\
&=\lambda_b\mathbf{C}_{vs}+\mathbb{E}(\mathbf{q}_b\mathbf{s}^H),\label{eqn047}\\
&=\mathbf{C}_{qs}\label{eqn048}.
\end{IEEEeqnarray}
The derivation from \eqref{eqn047} to \eqref{eqn048} is due to the mutual independence between $\mathbf{s}$ and $\mathbf{v}$. Appendix \ref{D} shows
\begin{equation}\label{eqn049}
\mathbf{C}_{qs}=\mathbf{0}.
\end{equation}
The result \eqref{eqn045} is therefore proved.
\end{IEEEproof}

It is perhaps worthwhile to note that in wireless communications, $s_n$ is normally centrosymmetric (such as M-PSK and M-QAM) and equally probable. In this case, the condition C4) does hold in practice.

In summary, {\em Corollary \ref{cor1}} establishes the conditions for the quantization noise to be uncorrelated with the thermal noise as well as with the noiseless part of the received signal. The condition C3) indicates the need for a sufficiently large number of transmit antennas ($N$). However, this does not require a very large $N$ in practice. Take the example of $N=8$: each element of $\mathbf{r}$ is then a superposition of $(2N)=16$ independently generated real random variables, which already leads to a reasonable asymptotic behavior.

{\em Corollary \ref{cor2}} gives the auto-covariance matrix of $\mathbf{q}_b$, which is an asymptotic result for $N\rightarrow\infty$. The exact form of $\mathbf{C}_{qq}$ is tedious and lacks a closed form.
Nevertheless, \eqref{eqn044} already captures enough of the physical essence for us to conduct the LMMSE analysis.

Finally, {\em Theorem \ref{thm03}} shows that the quantization noise is uncorrelated with the information-bearing symbols. All of these results are useful tools for the LMMSE analysis in Section \ref{sec4}.

\section{LMMSE Analysis with The Vector-SOHE Model}\label{sec4}
The primary aim of this section is to employ the vector-SOHE model \eqref{eqn037}--\eqref{eqn038} to conduct the LMMSE analysis, with which the interesting phenomena observed in the current LMMSE algorithm can be well explained. In addition, a better understanding of the behavior of the current LMMSE algorithm helps us find an enhanced version, particularly for signals with non-constant-modulus modulations.

\subsection{The SOHE-Based LMMSE Analysis}\label{sec4a}
Vector-SOHE is still a linear model. It does not change the classical form of the LMMSE, i.e., $\mathbf{G}^\star=\mathbf{C}_{sy}\mathbf{C}_{yy}^{-1}$ still holds. Nevertheless, the cross-covariance matrix $\mathbf{C}_{sy}$ must now be computed by
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{sy}&=\mathbb{E}\Big(\mathbf{s}(\lambda_b\mathbf{H}\mathbf{s}+\mat{\varepsilon}_b)^H\Big),\label{eqn050}\\
&=\lambda_b\mathbf{C}_{ss}\mathbf{H}^H+\mathbf{C}_{s\varepsilon},\label{eqn051}\\
&=\lambda_b\mathbf{H}^H.\label{eqn052}
\end{IEEEeqnarray}
The derivation from \eqref{eqn051} to \eqref{eqn052} is due to the fact $\mathbf{C}_{s\varepsilon}=\mathbf{0}$ (see {\em Theorem \ref{thm03}}) as well as the assumption that $s_n,\forall n$, are uncorrelated with respect to $n$ (see the assumption above \eqref{eqn002}).
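As a numerical illustration of \eqref{eqn052}, the following sketch (our own, with illustrative values for $\lambda_b$ and the array sizes) estimates $\mathbf{C}_{sy}=\mathbb{E}(\mathbf{s}\mathbf{y}^H)$ by Monte Carlo, treating the effective noise $\mat{\varepsilon}_b$ as a stand-in process uncorrelated with $\mathbf{s}$, as guaranteed by {\em Theorem \ref{thm03}}:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 8, 4, 200_000   # receive antennas, transmit antennas, Monte Carlo trials
lam = 1.3                 # illustrative value of the Hermite coefficient lambda_b

H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
# Unit-variance QPSK symbols, uncorrelated across antennas, so C_ss = I
S = (rng.choice([1.0, -1.0], (N, T)) + 1j * rng.choice([1.0, -1.0], (N, T))) / np.sqrt(2)
# Stand-in for eps_b = lam*v + q_b: zero-mean noise uncorrelated with s
E = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)

Y = lam * (H @ S) + E
C_sy = (S @ Y.conj().T) / T          # empirical E[ s y^H ]
# Matches lambda_b * H^H up to Monte Carlo error, as in (52)
assert np.allclose(C_sy, lam * H.conj().T, atol=0.05)
```

The check confirms that, once $\mathbf{C}_{s\varepsilon}=\mathbf{0}$ and $\mathbf{C}_{ss}=\mathbf{I}$ hold, the cross-covariance collapses to $\lambda_b\mathbf{H}^H$.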

The auto-covariance matrix $\mathbf{C}_{yy}$ can be represented by
\begin{equation}
\mathbf{C}_{yy}=\lambda_b^2\mathbf{HH}^H+\mathbf{C}_{\varepsilon\varepsilon},\label{eqn053}
\end{equation}
where
\begin{IEEEeqnarray}{ll}\label{eqn054}
\mathbf{C}_{\varepsilon\varepsilon}&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+\lambda_b(\mathbf{C}_{qv}+\mathbf{C}_{vq}),\\
&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+2\lambda_b\Re(\mathbf{C}_{qv}).\label{eqn055}
\end{IEEEeqnarray}
Then, the LMMSE formula can be represented by
\begin{equation}\label{eqn056}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+\lambda_b^{-2}\mathbf{C}_{\varepsilon\varepsilon})^{-1}.
\end{equation}
Provided the conditions C2) and C3), \eqref{eqn056} turns into (see {\em Corollary \ref{cor1}})
\begin{equation}\label{eqn057}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+N_0\mathbf{I}+\lambda_b^{-2}\mathbf{C}_{qq})^{-1},
\end{equation}
where $\mathbf{C}_{qq}$ can be substituted by \eqref{eqn044} in {\em Corollary \ref{cor2}}.

\subsection{Comparison between Various LMMSE Formulas}\label{sec4b}
Given that the generalized-AQNM model (see Section \ref{sec2b3}) was only studied for the $1$-bit quantizer, we mainly conduct the LMMSE comparison between the SOHE model and the (modified) AQNM model. As shown in Section \ref{sec2b2}, the modified-AQNM model does not yield a different LMMSE formula from the AQNM model when the Gaussian quantization noise is assumed. Therefore, our comparison focuses on the AQNM model.

Basically, there are two major differences in their LMMSE forms:

{\em 1)} The SOHE-LMMSE formula has a scaling factor $\lambda_b^{-1}$, which plays the role of equalizing the scalar ambiguity inherent in the SOHE model (see \eqref{eqn037}--\eqref{eqn038}).
As shown in {\em Theorem \ref{thm01}}, this scalar ambiguity is introduced in the low-resolution quantization procedure. It amplifies the signal energy in the digital domain and vanishes with increasing resolution. This theoretical conclusion coincides well with the phenomenon observed in the literature (e.g., \cite{nguyen2019linear,9144509}).

{\em 2)} In the AQNM-LMMSE formula \eqref{eqn009}, the impact of the quantization noise is described by the term $\mathrm{nondiag}(\rho\mathbf{HH}^H)$. This implies that the quantization noise is modeled as a linear distortion. However, such is not the case for the SOHE-LMMSE formula. As shown in \eqref{eqn044} and \eqref{eqn057}, the auto-covariance matrix $\mathbf{C}_{qq}$ involves the terms $\sigma_r^2$ and $\sigma_r^4$, while higher-order components are approximated in the SOHE model. Although \eqref{eqn044} is only an asymptotic and approximate result, it carries a good implication in the sense that the quantization noise introduces non-linear effects to the LMMSE. Due to this modeling mismatch, the AQNM-LMMSE algorithm can suffer additional performance degradation.

Let $\mathbf{G}^\star_{\eqref{eqn009}}$ and $\mathbf{G}^\star_{\eqref{eqn057}}$ denote the LMMSE formulas corresponding to the AQNM and SOHE models, respectively. Section \ref{sec2a} indicates that they share the same size, i.e., $N\times K$. Assuming that $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, we can find an $N\times N$ matrix $\mathbf{\Theta}$ fulfilling
\begin{equation}\label{eqn058}
\mathbf{\Theta}\mathbf{G}^\star_{\eqref{eqn009}}=\mathbf{G}^\star_{\eqref{eqn057}}.
\end{equation}
Let $(\mathbf{G}^\star_{\eqref{eqn009}})^\dagger$ denote the pseudo-inverse of $\mathbf{G}^\star_{\eqref{eqn009}}$.
The matrix $\mathbf{\Theta}$ can then be obtained through
\begin{equation}\label{eqn059}
\mathbf{\Theta}=\mathbf{G}^\star_{\eqref{eqn057}}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\Big)^\dagger.
\end{equation}
Therefore, provided that $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, the modeling-mismatch-induced performance degradation inherent in the AQNM-LMMSE algorithm can be mitigated through the linear transform specified in \eqref{eqn058}, where the scaling factor $\lambda_b$ is incorporated in the matrix $\mathbf{\Theta}$.

\subsection{Enhancement of The AQNM-LMMSE Algorithm}
The SOHE-LMMSE formula describes more explicitly the impact of non-linear distortion on the channel equalization. However, the SOHE-LMMSE formula cannot be directly employed for the channel equalization, mainly for two reasons: {\em 1)} the auto-covariance matrix $\mathbf{C}_{qq}$ does not have a closed form in general; and {\em 2)} the scalar $\lambda_b$ defined in \eqref{eqn032} comes only from the first-order Hermite kernel, whereas other odd-order Hermite kernels also contribute to $\lambda_b$. The omission of the third- and higher-order Hermite kernels can make the computation of $\lambda_b$ inaccurate. Fortunately, the analysis in \eqref{eqn058} and \eqref{eqn059} shows that the SOHE-LMMSE formula can be translated into the AQNM-LMMSE formula through a linear transform.
In other words, there is a potential to enhance the AQNM-LMMSE algorithm by identifying the linear transform $\mathbf{\Theta}$.

Denote $\hat{\mathbf{s}}_{\eqref{eqn057}}\triangleq\mathbf{G}^\star_{\eqref{eqn057}}\mathbf{y}$ and $\hat{\mathbf{s}}_{\eqref{eqn009}}\triangleq\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}$ to be the outputs of the SOHE-LMMSE channel equalizer and the AQNM-LMMSE channel equalizer, respectively. Applying the results \eqref{eqn058}-\eqref{eqn059} yields
\begin{equation}\label{eqn060}
\hat{\mathbf{s}}_{\eqref{eqn009}}=\mathbf{\Theta}^{-1}\hat{\mathbf{s}}_{\eqref{eqn057}}.
\end{equation}
Generally, it is not easy to identify $\mathbf{\Theta}$ and remove it from $\hat{\mathbf{s}}_{\eqref{eqn009}}$. On the other hand, if $\mathbf{G}^\star_{\eqref{eqn057}}$ and $\mathbf{G}^\star_{\eqref{eqn009}}$ are not too different, \eqref{eqn059} implies that $\mathbf{\Theta}$ can be considered approximately diagonal. In this case, the linear transform reduces to symbol-level scalar ambiguities. Assume that the channel-equalized result $\hat{\mathbf{s}}_{\eqref{eqn057}}$ does not have such scalar ambiguities. It is then easy to see that the scalar ambiguities of $\hat{\mathbf{s}}_{\eqref{eqn009}}$ come from $\lambda_b\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}$. In other words, we have the following approximation
\begin{equation}\label{eqn061}
\mathbf{\Theta}^{-1}\approx\lambda_b\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big).
\end{equation}
In \eqref{eqn061}, $\lambda_b$ is the only unknown quantity that must be determined. {\em Theorem \ref{thm01}} shows that the effect of $\lambda_b$ is a block-level energy amplification, whose value can be computed using \eqref{appa6}.
Finally, we conclude with the following form of the enhanced LMMSE channel equalizer (e-LMMSE)
\begin{equation}\label{eqn063}
\mathbf{G}_e=\frac{1}{\lambda_b}\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big)^{-1}\mathbf{G}^\star_{\eqref{eqn009}}.
\end{equation}


\section{Simulation Results and Discussion}\label{sec5}
Computer simulations were carried out to corroborate our theoretical work in Section \ref{sec3} and Section \ref{sec4}. Similar to the AQNM models, the SOHE model cannot be directly evaluated through computer simulations. Nevertheless, their features can be indirectly demonstrated through the evaluation of their corresponding LMMSE channel equalizers. Given the various LMMSE channel equalizers discussed in Section \ref{sec2} and Section \ref{sec4}, it is perhaps useful to provide a brief summary here for the sake of clarity:
\begin{itemize}
\item AQNM-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn009}. As shown in Section \ref{sec2b2}, the LMMSE channel equalizer \eqref{eqn017} is equivalent to \eqref{eqn009}, and thus it is not demonstrated in our simulation results.
\item B-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn024}. This channel equalizer is specially designed and optimized for the $1$-bit quantizer. Therefore, it is only demonstrated in our simulation results for the $1$-bit quantizer.
\item N-LMMSE: this is the AQNM-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$.
\item NB-LMMSE: this is the B-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn024}}\mathbf{y}\|$. Both the N-LMMSE and NB-LMMSE channel equalizers have been studied in \cite{7439790,nguyen2019linear,tsefunda}.
\item e-LMMSE: this is the e-LMMSE channel equalizer proposed in \eqref{eqn063}. As shown in Section \ref{sec4}, this channel equalizer is driven by the SOHE model.
\end{itemize}
\begin{figure}[tb]
	\centering
	\includegraphics[scale=0.25]{1bit_MSE_comparisons_dB.eps}
	\caption{
		The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $1$-bit quantizers,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig01}
\end{figure}
In our computer simulations, the e-LMMSE channel equalizer is compared to the SOTA approaches (i.e., AQNM-LMMSE, B-LMMSE, N-LMMSE and NB-LMMSE) in terms of their MSE as well as bit-error-rate (BER) performances. The MSE is defined by
\begin{equation}\label{eqn064}
\mathrm{MSE}\triangleq\frac{1}{NI}\sum_{i=0}^{I-1}\|\mathbf{G}_i^\star\mathbf{y}_i-\mathbf{s}_i\|^2,
\end{equation}
where $I$ denotes the number of Monte Carlo trials. All the simulation results were obtained by averaging over a sufficient number of Monte Carlo trials. For each trial, the narrowband wireless MIMO channel was generated with independent complex Gaussian entries (Rayleigh in amplitude), which is the commonly used simulation setup in the literature \cite{7458830, 6987288}. In addition, the signal-to-noise ratio (SNR) is defined as the average received bit-energy per receive antenna to noise ratio (Eb/N0), and the transmit power of every transmit antenna is set to be identical.
The low-resolution quantization process follows the design in \cite{7037311}: for the $1$-bit quantizer, binary quantization is taken; for quantizers other than $1$-bit (i.e., $2$- and $3$-bit), the ideal AGC is assumed and the quantization is determined by the quantization steps \cite{1057548}.
\begin{figure*}[t]
	\centering
	\includegraphics[scale=0.35]{MSE_23bit_comparisons_dB.eps}
	\caption{The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $2$- and $3$-bit quantizers,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig02}
\end{figure*}
According to the measures used in computer simulations, we divide the simulation work into two experiments: one is designed to examine the MSE performance, and the other the BER performance. In our simulation results, we demonstrate the performances mainly for $16$-QAM. This is due to two reasons: {\em 1)} all types of LMMSE channel equalizers offer the same performances for M-PSK modulations; this phenomenon has already been reported in the literature and discussed in Section \ref{sec1}; and {\em 2)} higher-order QAM modulations exhibit almost the same basic features as $16$-QAM, while performing worse than $16$-QAM due to their increased demand on the resolution of quantizers. Those observations are not really novel and are thus omitted.

\subsubsection*{Experiment 1}\label{exp1}

The objective of this experiment is to examine the MSE performance of various LMMSE channel equalizers. For all simulations, we keep the transmit-antenna to receive-antenna ratio constant (e.g., $N/K=1/16$).
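To make the metric \eqref{eqn064} concrete, the sketch below (our own; quantization is omitted and the classical unquantized LMMSE equalizer is used, so the numbers are purely illustrative) evaluates the MSE by Monte Carlo for one $N/K=1/16$ configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, I = 4, 64, 500          # transmitters, receive antennas, Monte Carlo trials
N0 = 0.1                      # noise variance (illustrative SNR point)

mse_acc = 0.0
for _ in range(I):
    # Rayleigh-fading channel: i.i.d. complex Gaussian entries
    H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    s = (rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)) / np.sqrt(2)
    v = np.sqrt(N0 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    y = H @ s + v
    # Classical (unquantized) LMMSE equalizer G = H^H (H H^H + N0 I)^{-1}
    G = H.conj().T @ np.linalg.inv(H @ H.conj().T + N0 * np.eye(K))
    mse_acc += np.linalg.norm(G @ y - s) ** 2

mse = mse_acc / (N * I)       # MSE as defined in (64)
```

With $K\gg N$ the equalized error is small, so `mse` lands well below the levels reported for the quantized equalizers.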

\figref{fig01} depicts the MSE performances of various LMMSE channel equalizers as far as the $1$-bit quantizer is concerned. Generally, it can be observed that all the MSE performances improve with the increasing size of MIMO. This phenomenon is fully in line with the principle of mMIMO.

It can also be observed that both the AQNM-LMMSE and the B-LMMSE channel equalizers perform poorly throughout the whole SNR range. This is because the AQNM models do not capture the scaling ambiguity described in the SOHE model. When the normalization operation is applied, the AQNM-LMMSE and the B-LMMSE channel equalizers turn into their corresponding N-LMMSE and NB-LMMSE equalizers, respectively. Interestingly, their performances improve significantly, thereby outperforming the e-LMMSE channel equalizer in most cases. On one hand, this is additional evidence of the missing scaling ambiguity in the AQNM models; on the other hand, it shows that the NB-LMMSE is indeed the optimized LMMSE channel equalizer for the $1$-bit quantizer. Nevertheless, we can see that the e-LMMSE approach still offers MSE performances very comparable to those of the N-LMMSE and NB-LMMSE approaches. This provides indirect evidence that the SOHE model offers a good approximation for the $1$-bit quantizer.

We then carry on our simulations for the $2$- and $3$-bit low-resolution quantizers, respectively, and illustrate their MSE performances in \figref{fig02}. It is perhaps worth emphasizing that the B-LMMSE and NB-LMMSE channel equalizers are not examined here, since they are devised only for the $1$-bit quantizer.

The first observation is that the e-LMMSE shows the best MSE performance in almost all the demonstrated cases. This is strong evidence supporting our theoretical work on the SOHE model as well as the SOHE-based LMMSE analysis.
\begin{figure*}[t]
	\centering
	\includegraphics[scale=0.27]{two_123bit_enhanced_comparison.eps}
	\caption{The BER performance as a function of Eb/N0 for $N=2$ transmitters, $16$-QAM systems with different resolutions of quantizers,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=16$ receive antennas.}\label{fig03}
\end{figure*}
\begin{figure*}[t]
	\centering
	\includegraphics[scale=0.27]{four_123bit_enhanced_comparison.eps}
	\caption{The BER performance as a function of Eb/N0 for $N=4$ transmitters, $16$-QAM systems with different resolutions of quantizers,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig04}
\end{figure*}

Looking at the details, specifically for the $2$-bit quantizer, the N-LMMSE approach demonstrates performance very comparable to the e-LMMSE approach in the case of larger MIMO (i.e., $(N/K)=(8/128)$). However, its performance degrades quickly as the MIMO size decreases. Take the example of Eb/N0 $=5$ dB. For the case of $(N/K)=(8/128)$, both the e-LMMSE and the N-LMMSE approaches have their MSEs at around $-22.6$ dB, while the AQNM-LMMSE has its MSE at around $-16.8$ dB; both the e-LMMSE and the N-LMMSE thus outperform the AQNM-LMMSE by around $6$ dB. When the size of MIMO reduces to $(N/K)=(4/64)$, the e-LMMSE shows the best MSE (around $-21.2$ dB). The MSEs for the N-LMMSE and the AQNM-LMMSE become $-18.9$ dB and $-17.7$ dB, respectively. The N-LMMSE underperforms the e-LMMSE by around $2.3$ dB, although it still outperforms the AQNM-LMMSE by around $1.2$ dB.
By further reducing the size of MIMO to $(N/K)=(2/32)$, the e-LMMSE has its MSE performance degraded to $-19.6$ dB. The MSEs for the N-LMMSE and the AQNM-LMMSE now become $-14.9$ dB and $-17.4$ dB, respectively. The e-LMMSE outperforms the AQNM-LMMSE by around $2.2$ dB and the N-LMMSE by around $4.7$ dB. The major reason for this phenomenon is that the AQNM model assumes the quantization distortion and the input signal to be Gaussian. This assumption becomes less accurate with fewer transmit antennas. Moreover, the use of fewer receive antennas reduces the spatial de-noising ability, so the term used for normalization is more negatively influenced by the noise as well as the quantization distortion. The SOHE model does not assume the input signal and the quantization distortion to be Gaussian, and thus it suffers the least negative impact.
\begin{figure*}[t]
	\centering
	\includegraphics[scale=0.27]{eight_123bit_enhanced_comparison.eps}

	\caption{The BER performance as a function of Eb/N0 for $N=8$ transmitters, $16$-QAM systems with different resolutions of quantizers,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=128$ receive antennas,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas.}\label{fig05}
\end{figure*}
Due to the same rationale, a similar phenomenon can also be observed for the $3$-bit quantizer. Again, the e-LMMSE approach shows the best performance in almost all the cases. Apart from that, there are two notable differences worth mentioning: {\em 1)} the performance of the AQNM-LMMSE is quite close to that of the e-LMMSE for all sizes of MIMO.
This is because the $3$-bit quantizer has a reasonably good resolution for $16$-QAM modulations, which largely reduces the discrepancy between the AQNM model and the SOHE model; and {\em 2)} the N-LMMSE performs rather poorly compared with the others. This implies the inaccuracy of using the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$ for the normalization.

Overall, the experiment on the MSE evaluation confirms our theoretical work in Sections \ref{sec2}-\ref{sec4} and demonstrates the major advantages of the SOHE model as well as the e-LMMSE channel equalizer from the MSE perspective.

\subsubsection*{Experiment 2}\label{exp2}
It is common knowledge that an MMSE-optimized approach is not necessarily optimized for the detection performance. This motivates us to examine the average-BER performance of various LMMSE channel equalizers in this experiment. Basically, this experiment is divided into three sub-tasks, each having a fixed number of transmit antennas.

\figref{fig03} depicts the case of $N=2$ transmit antennas. Generally, the use of more receive antennas can largely improve the BER performance. This conclusion is true for all examined types of low-resolution quantizers. In other words, all LMMSE channel equalizers can enjoy the receiver-side spatial diversity.

Specifically for the $1$-bit quantizer, the AQNM-based LMMSE approaches (i.e., AQNM-LMMSE and B-LMMSE) generally underperform their corresponding normalized versions (i.e., N-LMMSE and NB-LMMSE). This phenomenon fully coincides with their MSE behaviors shown in {\em Experiment 1}, \figref{fig01}. The e-LMMSE approach does not demonstrate remarkable advantages in this special case. It offers the best BER at the SNR range around Eb/N0 $=2$ dB, and then the BER grows with the increase of SNR.
Such behavior is not anomalous; it occurs quite often in systems with low-resolution quantizers and other non-linear systems due to the physical phenomenon called stochastic resonance \cite{RevModPhys.70.223}. A similar phenomenon also occurs in the AQNM-LMMSE approach. It means that, for low-resolution quantized systems, additive noise can be constructive to the signal detection at certain SNRs, especially for QAM constellations (e.g., \cite{7247358, 7894211, 9145094, jacobsson2019massive, She2016The}). The theoretical analysis of constructive noise in signal detection can be found in Kay's work \cite{809511} (interested readers are referred to Appendix \ref{E} for an elaboration of the constructive-noise phenomenon). Interestingly, the normalized approaches do not exhibit a considerable stochastic-resonance effect within the observed SNR range.
\begin{figure*}[t]
	\centering
	\includegraphics[scale=0.27]{four_123bit_SE_comparison.eps}

	\caption{The sum SE as a function of Eb/N0 for $N=4$ transmitters for systems with different resolutions of quantizers, different LMMSE-based channel estimators and the ZF channel equalizer.
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
		\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig06}
\end{figure*}
When the resolution of the quantizer increases to $b=2$ bits, the e-LMMSE approach demonstrates a significant performance gain in most cases. For instance, the e-LMMSE significantly outperforms the AQNM-LMMSE over the higher SNR range (i.e., Eb/N0 $>0$ dB). The N-LMMSE approach performs the worst in all cases. This observation is well in line with our observation of the MSE performance (see \figref{fig02}), and they share the same rationale.

When the resolution of the quantizer increases to $b=3$ bits, both the e-LMMSE and the AQNM-LMMSE approaches offer excellent BER performances. Their performances are very close to each other, and the e-LMMSE only slightly outperforms the AQNM-LMMSE for the case of $K=16$. The reason for this phenomenon is the same as that for the MSE performance, which has been explained in {\em Experiment 1}.

In short, the e-LMMSE approach shows significant advantages for $2$-bit quantizers. This is the case where the SOHE model offers a better mathematical description than the AQNM models, while at the same time the resolution is not high enough to support higher-order modulations. This is particularly true for the case of $N=2$ transmit antennas, where the input signal and the quantization distortion can hardly be assumed to be white Gaussian.

Now, we increase the number of transmit antennas $(N)$ to investigate how the BER performance is influenced. Accordingly, the number of receive antennas $(K)$ is also increased. The BER results for the case of $N=4$ are plotted in \figref{fig04}.

Let us begin with the $3$-bit quantizer. We have almost the same observations as for the case of $N=2$ transmit antennas. The e-LMMSE approach performs slightly better than the AQNM-LMMSE approach, but the performance difference is not really considerable. When it comes to the $2$-bit quantizer, their difference in BER increases considerably, and the e-LMMSE approach demonstrates significant advantages. It is worth noting that the N-LMMSE approach offers performances comparable to the AQNM-LMMSE approach. This is because the increased number of transmit antennas brings the input signal and the quantization distortion closer to white Gaussian. This rationale has also been explained in the MSE analysis.
For the case of the $1$-bit quantizer, there is not much new to observe \nin comparison with the case of $N=2$ transmit antennas, apart from the stochastic resonance effect becoming less significant. \n\n\nWhen the number of transmit antennas increases to $N=8$, the BER results are plotted in \\figref{fig05}. \nFor the case of the $3$-bit quantizer, the e-LMMSE approach demonstrates a slightly larger gain, \nand the N-LMMSE approach brings its performance even closer to the others. \nA similar phenomenon can be observed for the case of the $2$-bit quantizer, where the N-LMMSE offers performance close to that of \nthe e-LMMSE approach. The AQNM-LMMSE approach performs the worst. This phenomenon is also observed in the MSE analysis. \nAgain, for the $1$-bit quantizer, the NB-LMMSE approach offers the best BER performance, as it is devised and optimized for this special case. \n\nSimilar to the phenomenon observed in {\\em Experiment 1}, the performance of the e-LMMSE is not the best for the $1$-bit quantized system. This is because, for the $1$-bit quantized system, there exists an optimum LMMSE channel equalizer using the arcsine law \\cite{Mezghani2012,Papoulis_2002}. \nNevertheless, the proposed e-LMMSE approach can still provide performance comparable to the closed-form approach. When it comes to the $3$-bit quantizer, it can be found that the e-LMMSE has only a slight BER gain over the AQNM-LMMSE. One of the characteristics of the SOHE model is that it is not based on the Gaussian quantization noise assumption. However, when the resolution of the quantizer rises to $3$ bits, the distribution of the quantization noise closely approximates the Gaussian distribution, which results in similar performances between the e-LMMSE and the AQNM-LMMSE.\n\n\n\\subsubsection*{Experiment 3}\\label{exp3}\nThis experiment examines the SOHE-based channel estimation and its corresponding channel equalization. 
\nFor this experiment, the state-of-the-art (SOTA) approaches include those reported in \\cite{7931630, 7894211,rao2021massive}.\nIt is worth noting that \\cite{rao2021massive} considers the use of a sigma-delta quantizer, which takes advantage of oversampling to achieve an enhanced performance. \nThis is, however, not the case for our work or for those in \\cite{7931630, 7894211}. \nFor the sake of fair comparison, we only compare our work with the results in \\cite{7931630, 7894211}.\nIn this experiment, the performance is evaluated through the sum SE defined by \\cite{rao2021massive}\n\\begin{equation}\\label{eqn067}\n\\mathrm{SE} =\\frac{T-P}{T}\\sum_{n=1}^{N}R_n,\n\\end{equation}\nwhere $T$ is the length of the coherence interval, and $R_n$ is the achievable rate for each transmitter-to-receiver link defined in \\cite{7931630, 7894211}. \nThe sum SE is the metric widely considered in the SOTA \\cite{7931630, 7894211,rao2021massive}, where $T$ is commonly set to $200$.\n\nSimilar to \\eqref{eqn003}, the mathematical model for low-resolution quantized mMIMO channel estimation is given in the vectorized form\n\\begin{equation}\\label{eqn065}\n\\mathbf{r}_p = \\bar{\\mathbf{\\Phi}}\\bar{\\mathbf{h}}+\\bar{\\mathbf{v}}_p,\n\\end{equation}\nwhere $\\bar{\\mathbf{h}}=\\mathrm{vec}(\\mathbf{H})$, $\\bar{\\mathbf{\\Phi}}=(\\mathbf{\\Phi} \\otimes \\mathbf{I}_K)$ and $\\mathbf{\\Phi}\\in \\mathbb{C}^{N\\times P}$ is the pairwise orthogonal pilot matrix, which is composed of submatrices of the discrete Fourier transform (DFT) operator \\cite{Biguesh_1bit}. During training, all $N$ users simultaneously transmit their pilot sequences of $P$ symbols to the BS.\nFeeding \\eqref{eqn065} to the low-resolution quantizer, we obtain the output $\\mathbf{y}_p \\in \\mathbb{C}^{KP\\times 1}$, which is similar to \\eqref{eqn004}. 
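This training model can be sketched numerically. The snippet below is a hypothetical NumPy illustration with small, arbitrary dimensions; it builds the block pilot matrix through the identity $\\mathrm{vec}(\\mathbf{H}\\mathbf{\\Phi})=(\\mathbf{\\Phi}^T\\otimes\\mathbf{I}_K)\\,\\mathrm{vec}(\\mathbf{H})$ (so the Kronecker factor appears transposed relative to the notation above) and uses a sign-based $1$-bit quantizer as a stand-in for $\\mathcal{Q}_b(\\cdot)$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, P = 4, 2, 2   # receive antennas, users, pilot length (illustrative sizes)

# Pairwise-orthogonal pilots taken from rows of a unitary DFT matrix
F = np.fft.fft(np.eye(P)) / np.sqrt(P)
Phi = F[:N, :]                                  # N x P, unit-power entries

H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Vectorized training model: vec(H @ Phi) = (Phi.T kron I_K) @ vec(H)
h_bar = H.reshape(-1, order="F")                # vec(H)
Phi_bar = np.kron(Phi.T, np.eye(K))             # KP x KN block pilot matrix
r_p = Phi_bar @ h_bar                           # noiseless received pilot block

# Stand-in low-resolution quantizer (1-bit on the real and imaginary parts)
y_p = np.sign(r_p.real) + 1j * np.sign(r_p.imag)
```

Here $K$, $N$, $P$ and the $1$-bit stand-in are illustrative only; the experiments above use $K=32, 64$, $N=4$ and $b\in\{1,2,3\}$, and would add the noise term $\\bar{\\mathbf{v}}_p$ per scenario.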
\nRegarding the LMMSE channel estimation algorithms, we use the closed-form B-LMMSE estimator for the 1-bit quantized model in \\cite{7931630}, and the AQNM-LMMSE and N-LMMSE estimators for the other resolutions. \nThese channel estimators are compared with the SOHE-LMMSE channel estimator in \\eqref{eqn063}.\nGiven the LMMSE estimator $\\mathbf{W}^*$, the channel estimate can be expressed as $\\hat{\\mathbf{H}}=\\mathrm{unvec}(\\mathbf{W}^*\\mathbf{y}_b)$. For the sake of fairness, we employ the zero-forcing (ZF) algorithm for the channel equalization, as it has been used by the SOTA, i.e., \n$\\mathbf{G}_{\\text{ZF}} = (\\hat{\\mathbf{H}}^H\\hat{\\mathbf{H}})^{-1}\\hat{\\mathbf{H}}^H$.\n\n\n\\figref{fig06} depicts the sum SE performance of various LMMSE channel estimators for $N=4$ transmitters and $K= 32, 64$ receive antennas. The pilot length is set to $P=N$. Similar to the phenomena observed in the above experiments, increasing the number of receive antennas and the resolution of the quantizers offers significant SE gains.\n\nWhen the resolution of the quantizer is $b=1$ bit, the B-LMMSE algorithm achieves the best sum SE among the LMMSE channel estimators, and the gap can be approximately 4 bit/s/Hz. This phenomenon is not surprising, as the B-LMMSE is the closed-form algorithm for the 1-bit quantized model \\cite{7931630}. The SOHE-LMMSE and AQNM-LMMSE channel estimators do not demonstrate advantages in this special scenario, but it can be found that the SOHE-LMMSE achieves almost the same sum SE as the N-LMMSE channel estimator, while the AQNM-LMMSE approach performs the worst in this model.\n\nWhen the resolution of the quantizer increases to $b=2$ bit, all three types (i.e., SOHE-LMMSE, AQNM-LMMSE and N-LMMSE) of channel estimators share a similar sum SE. For instance, their sum SE reaches 16 bit/s/Hz for $K=32$ and 20 bit/s/Hz for $K=64$ in the four-user system. 
When it comes to the case of the 3-bit quantizer, we have almost the same observation as for the case of the $b=2$ bit quantizer. The performance difference between all three types of channel estimators is not really considerable at high Eb/N0. When Eb/N0 $>0$ dB, for $K=64$, the AQNM-LMMSE channel estimator can slightly outperform the N-LMMSE and SOHE-LMMSE channel estimators. As discussed in Sections \\ref{sec4}-\\ref{sec5}, the scalar ambiguity is detrimental for QAM modulations. However, each element of the pilot matrix $\\mathbf{\\Phi}$ has unit power and all pilot sequences are pairwise orthogonal; similar to the analysis of LMMSE channel equalization for PSK constellations, the scalar ambiguity has no side effect in this case. This explains why the SOHE-LMMSE channel estimator achieves the same sum SE as the existing LMMSE algorithms. \n\n\\section{Conclusion}\nIn this paper, a novel linear approximation method, namely SOHE, has been proposed to model low-resolution quantizers. \nThe SOHE model was then extended from the real-scalar form to the complex-vector form, and the latter was applied and extensively studied in \nthe low-resolution quantized multiuser-MIMO uplink signal reception. It has been shown that the SOHE model does not require\nthe assumptions employed in the AQNM model and its variations. Instead, it uses the first-order Hermite kernel to model the \nsignal part and the second-order Hermite kernel to model the quantization distortion. This equips us with sufficient flexibility and \ncapacity to develop a deeper and novel understanding of the stochastic behavior and correlation characteristics of the quantized signal \nas well as the non-linear distortion. Through our analytical work, it has been revealed that low-resolution quantization can result \nin a scalar ambiguity. 
In the SOHE model, this scalar ambiguity is characterized by the coefficient of the first-order Hermite kernel. \nHowever, it is not accurately characterized in other linear approximation models due to the white-Gaussian assumption. \nWhen applying the SOHE model to the LMMSE analysis, \nit has been found that the SOHE-LMMSE formula carries the Hermite coefficient, which equips the SOHE-LMMSE channel equalizer with \nthe distinct ability to remove the scalar ambiguity in the channel equalization. It has been shown that the SOHE-LMMSE formula involves \nhigher-order correlations, and this prevents the direct implementation of the SOHE-LMMSE channel equalizer. Nevertheless, it was also found that \nthe SOHE-LMMSE formula could be related to the AQNM-LMMSE formula through a certain linear transform. This finding motivated the \ndevelopment of the e-LMMSE channel equalizer, which demonstrated significant advantages in the MSE and BER performance evaluation. All of \nthe above conclusions have been elaborated through extensive computer simulations in independent Rayleigh-fading channels. \n\n\\appendices\n\\section{Proof of Theorem \\ref{thm01}}\\label{A}\nWith the equations \\eqref{eqn028} and \\eqref{eqn032}, the coefficient $\\lambda_b$ can be computed as follows\n\\begin{IEEEeqnarray}{ll}\\label{appa1}\n\\lambda_b&=\\frac{-1}{\\sqrt{\\pi}}\\sum_{m=0}^{M-1}x_m\\int_{\\tau_m}^{\\tau_{m+1}}\n\\Big[\\frac{\\partial}{\\partial x}\\exp(-x^2)\\Big]\\mathrm{d}x,\\\\\n&=\\frac{-1}{\\sqrt{\\pi}}\\sum_{m=0}^{M-1}x_m\\int_{\\tau_m}^{\\tau_{m+1}}(-2x)\\exp(-x^2)\\mathrm{d}x,\\label{appa2}\\\\\n&=\\frac{1}{\\sqrt{\\pi}}\\sum_{m=0}^{M-1}x_m\\Big(\n\\exp(-\\tau_m^2)-\\exp(-\\tau_{m+1}^2)\n\\Big).\\label{appa3}\n\\end{IEEEeqnarray}\nWe first examine the limit of $\\lambda_b$ as $b\\rightarrow\\infty$. 
It is equivalent to the following case\n\\begin{IEEEeqnarray}{ll}\n\\lim_{b\\rightarrow\\infty}\\lambda_b\n&=\\frac{1}{\\sqrt{\\pi}}\\lim_{M\\rightarrow\\infty}\\sum_{m=0}^{M-1}x_m\\Big(\n\\exp(-\\tau_m^2)\\nonumber\n\\\\&\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad-\\exp(-\\tau_{m+1}^2)\\Big). \\label{appa4}\n\\end{IEEEeqnarray}\nFor $M\\rightarrow\\infty$, the discrete-time summation in \\eqref{appa4} reduces to the integral in \\eqref{eqn028}. \nSince it is an ideal quantization, we have $x_m=x$, and thereby obtain\n\\begin{equation}\\label{appa5}\n\\lim_{b\\rightarrow\\infty}\\lambda_b=\\frac{2}{\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}x^2\\exp(-x^2)\\mathrm{d}x=1.\n\\end{equation}\nThe derivation of \\eqref{appa5} can be found in \\cite[p. 148]{Papoulis_2002}. \n\nFor the symmetric quantization, \\eqref{appa3} can be rewritten as\n\\begin{equation}\\label{appa6}\n\\lambda_b\n=\\frac{2}{\\sqrt{\\pi}}\\sum_{m=M/2}^{M-1}x_m\\Big(\n\\exp(-\\tau_m^2)-\\exp(-\\tau_{m+1}^2)\n\\Big).\n\\end{equation}\nConsider the particular range of $x\\in(\\tau_m, \\tau_{m+1}]$ with $\\tau_m>0$, in which $\\exp(-x^2)$ is a monotonically \ndecreasing function of $x$. 
Then, we have \n\\begin{equation}\\label{appa7}\n\\exp(-\\tau_m^2)\\geq\\exp(-x^2),~x\\in(\\tau_m, \\tau_{m+1}],\n\\end{equation}\nand consequently\n\\begin{equation}\\label{appa8}\n(\\tau_{m+1})\\exp(-\\tau_m^2)\\geq\\int_0^{\\tau_{m+1}}\\exp(-x^2)\\mathrm{d}x.\n\\end{equation}\nApplying \\eqref{eqn034} and \\eqref{appa8} into \\eqref{appa6} results in\n\\begin{IEEEeqnarray}{ll}\\label{appa9}\n\\lambda_b&=\\frac{2}{\\sqrt{\\pi}}\\sum_{m=M/2}^{M-1}\\tau_{m+1}\\Big(\n\\exp(-\\tau_m^2)-\\exp(-\\tau_{m+1}^2)\\Big),\\\\\n&\\geq\\frac{2}{\\sqrt{\\pi}}\\sum_{m=M/2}^{M-1}\\Big[\\int_0^{\\tau_{m+1}}\\exp(-x^2)\\mathrm{d}x\\nonumber\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad-(\\tau_{m+1})\\exp(-\\tau_{m+1}^2)\\Big],\\\\\n&\\geq\\frac{2}{\\sqrt{\\pi}}\\int_0^{\\infty}\\exp(-x^2)\\mathrm{d}x=1.\\label{appa11}\n\\end{IEEEeqnarray}\n{\\em Theorem \\ref{thm01}} is therefore proved.\n\n\n\\section{Proof of Theorem \\ref{thm02}}\\label{B}\nWith the quantization noise model \\eqref{eqn033}, the cross-correlation between $x$ and $q_b$ can be computed as\n\\begin{IEEEeqnarray}{ll}\\label{appb1}\n\\mathbb{E}(xq_b)&\\approx\\mathbb{E}(x(4\\omega_2x^2-2\\omega_2)),\\\\\n&\\approx4\\omega_2\\mathbb{E}(x^3)-2\\omega_2\\mathbb{E}(x).\\label{appb2}\n\\end{IEEEeqnarray}\nWith the condition C1), \\eqref{appb2} is equivalent to \n\\begin{equation}\\label{appb3}\n\\mathbb{E}(xq_b)\\approx 4\\omega_2\\mathbb{E}(x^3).\n\\end{equation}\nWhen $x$ is AWGN, the third-order term $\\mathbb{E}(x^3)$ in \\eqref{appb3} equals 0 (see \\cite[p. 148]{Papoulis_2002}).\nThis leads to the observation that $\\mathbb{E}(xq_b)=0$,\nand the first part of {\\em Theorem \\ref{thm02}} is therefore proved.\n\nTo prove the limit \\eqref{eqn036}, we first study the coefficient $\\omega_2$ in \\eqref{eqn033}. For $b\\rightarrow\\infty$, \n$\\omega_2$ reduces to the formula specified in \\eqref{eqn028}. 
Then, we can compute $\\omega_2$ as follows\n\\begin{IEEEeqnarray}{ll}\n\\omega_2&=\\frac{1}{8\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}x\\Big[\\frac{\\partial^2}{\\partial x^2}\\exp(-x^2)\\Big]\\mathrm{d}x,\\label{appb4}\\\\\n&=\\frac{1}{8\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}x\\Big[(-2+4x^2)\\exp(-x^2)\\Big]\\mathrm{d}x,\\label{appb5}\\\\\n&=-\\frac{1}{4\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}x\\exp(-x^2)\\mathrm{d}x\\nonumber\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad+\\frac{1}{2\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}x^3\\exp(-x^2)\\mathrm{d}x.\\label{appb6}\n\\end{IEEEeqnarray}\nIt is well known that (see also \\cite[p. 148]{Papoulis_2002})\n\\begin{equation}\\label{appb7}\n\\int_{-\\infty}^{\\infty}x^l\\exp(-x^2)\\mathrm{d}x=0, ~l=1, 3;\n\\end{equation}\nand thus we obtain $\\omega_2=0$ for the case of $b\\rightarrow\\infty$. Applying this result to \\eqref{eqn033} leads to \nthe conclusion in \\eqref{eqn036}. \n\n\\section{Proof of {\\em Corollary \\ref{cor2}}}\\label{C}\nWith \\eqref{eqn039}, we can compute $\\mathbf{C}_{qq}$ as follows\n\\begin{IEEEeqnarray}{ll}\n\\mathbf{C}_{qq}\n&=\\mathbb{E}(\\mathbf{q}_b\\mathbf{q}_b^H),\\label{app08}\\\\\n&=4\\omega_2^2\\Big(4\\underbrace{\\mathbb{E}\\Big(\\big(\\Re(\\mathbf{r})^2+j\\Im(\\mathbf{r})^2\\big)\\big(\\Re(\\mathbf{r})^2-j\\Im(\\mathbf{r})^2\\big)^T\\Big)}_{\\triangleq\\mathbf{C}_{qq}^{(1)}}-\\nonumber\\\\\n&\\quad2\\underbrace{\\mathbb{E}\\Big(\\Big(\\Re(\\mathbf{r})^2+j\\Im(\\mathbf{r})^2\\Big)\\otimes\\mathbf{1}^T\\Big)}_{\\triangleq\\mathbf{C}_{qq}^{(2)}}-\\nonumber\\\\\n&\\quad2\\underbrace{\\mathbb{E}\\Big(\\mathbf{1}\\otimes\\Big(\\Re(\\mathbf{r})^2-j\\Im(\\mathbf{r})^2\\Big)^T\\Big)}_{\\triangleq\\mathbf{C}_{qq}^{(3)}}+\\mathbf{1}\\otimes\\mathbf{1}^T\\Big).\n\\label{app09}\n\\end{IEEEeqnarray}\nWe start from $\\mathbf{C}_{qq}^{(2)}$ in \\eqref{app09}. 
Given the conditions C3) and C4), the proof of {\\em Corollary \\ref{cor1}} shows \nthat $\\mathbf{r}$ is asymptotically zero-mean complex \nGaussian with covariance approximately $\\sigma_r^2\\mathbf{I}$. Hence,\n\\begin{IEEEeqnarray}{ll}\n\\mathbf{C}_{qq}^{(2)}&=\\Big(\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Big)+j\\mathbb{E}\\Big(\\Im(\\mathbf{r})^2\\Big)\\Big)\\otimes\\mathbf{1}^T,\\label{app10}\\\\\n&=\\frac{\\sigma_r^2}{2}(\\mathbf{1}+j\\mathbf{1})\\otimes\\mathbf{1}^T.\\label{app11}\n\\end{IEEEeqnarray}\nAnalogously, the following result holds\n\\begin{equation}\n\\mathbf{C}_{qq}^{(3)}=\\frac{\\sigma_r^2}{2}\\mathbf{1}\\otimes(\\mathbf{1}-j\\mathbf{1})^T.\\label{app12}\n\\end{equation}\nThen, we can obtain\n\\begin{equation}\\label{app13}\n2\\Big(\\mathbf{C}_{qq}^{(2)}+\\mathbf{C}_{qq}^{(3)}\\Big)=\\sigma_r^2\\mathbf{1}\\otimes\\mathbf{1}^T.\n\\end{equation}\nNow, we come to the last term $\\mathbf{C}_{qq}^{(1)}$, which can be computed as follows\n\\begin{IEEEeqnarray}{ll}\n\\mathbf{C}_{qq}^{(1)}&=\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big)\n+\\mathbb{E}\\Big(\\Im(\\mathbf{r})^2\\Im(\\mathbf{r}^T)^2\\Big)+\\nonumber\\\\\n&\\quad j\\Big(\\mathbb{E}\\Big(\\Im(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big)-\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Im(\\mathbf{r}^T)^2\\Big)\\Big).\n\\label{app14}\n\\end{IEEEeqnarray}\nSince $\\Re(\\mathbf{r})$ and $\\Im(\\mathbf{r})$ follow the identical distribution, we can easily justify\n\\begin{IEEEeqnarray}{ll}\\label{app15}\n\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big)&=\\mathbb{E}\\Big(\\Im(\\mathbf{r})^2\\Im(\\mathbf{r}^T)^2\\Big), \\\\\n\\mathbb{E}\\Big(\\Im(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big)&=\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Im(\\mathbf{r}^T)^2\\Big).\n\\label{app16}\n\\end{IEEEeqnarray}\nApplying \\eqref{app15} into \\eqref{app14} results 
in\n\\begin{equation}\\label{app17}\n\\mathbf{C}_{qq}^{(1)}=2\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big).\n\\end{equation} \nPlugging \\eqref{app17} and \\eqref{app13} into \\eqref{app09} yields\n\\begin{equation}\\label{app17a}\n\\mathbf{C}_{qq}=4\\omega_2^2\\Big(8\\mathbb{E}\\Big(\\Re(\\mathbf{r})^2\\Re(\\mathbf{r}^T)^2\\Big)+(1-\\sigma_r^2)(\\mathbf{1}\\otimes\\mathbf{1}^T)\\Big).\n\\end{equation}\nIt is not hard to derive (see \\cite[p. 148]{Papoulis_2002})\n\\begin{equation}\\label{app18}\n\\mathbb{E}\\Big(\\Re(r_k)^4\\Big)=\\frac{3\\sigma_r^4}{4}.\n\\end{equation}\nMoreover, for distinct entries, the independence of $\\Re(r_k)$ and $\\Re(r_m)$ gives\n\\begin{IEEEeqnarray}{ll}\n\\mathbb{E}\\Big(\\Re(r_k)^2\\Re(r_m)^2\\Big)&=\\mathbb{E}\\Big(\\Re(r_k)^2\\Big)\\mathbb{E}\\Big(\\Re(r_m)^2\\Big),~\\forall k\\neq m,\\label{app19}\\\\\n&=\\frac{\\sigma_r^4}{4}.\\label{app20}\n\\end{IEEEeqnarray}\nApplying \\eqref{app18} and \\eqref{app20} into \\eqref{app17} yields\n\\begin{equation}\\label{app21}\n\\mathbf{C}_{qq}^{(1)}=\\frac{\\sigma_r^4}{2}(2\\mathbf{I}+\\mathbf{1}\\otimes\\mathbf{1}^T).\n\\end{equation}\nFurther applying \\eqref{app21} into \\eqref{app17a} yields the result \\eqref{eqn044}. 
{\\em Corollary \\ref{cor2}} is therefore proved.\n\n\\section{Proof of \\eqref{eqn049}}\\label{D}\nConsider the element-wise cross-correlation between the $m^\\mathrm{th}$ element of $\\mathbf{q}_b$ (denoted by $q_m$) and the \n$k^\\mathrm{th}$ element of $\\mathbf{s}$, i.e.,\n\\begin{IEEEeqnarray}{ll}\n\\mathbb{E}\\Big(q_ms_k^*\\Big)&=\\mathbb{E}\\Big(\\Re(s_k)\\Re(q_m)+\\Im(s_k)\\Im(q_m)\\Big)+\\nonumber\\\\\n&\\quad j\\mathbb{E}\\Big(\\Re(s_k)\\Im(q_m)-\\Im(s_k)\\Re(q_m)\\Big),\\label{app01}\\\\\n&=2\\mathbb{E}\\Big(\\Re(s_k)\\Re(q_m)\\Big).\\label{app02}\n\\end{IEEEeqnarray}\nUsing \\eqref{eqn033}, we can obtain \n\\begin{IEEEeqnarray}{ll}\n\\mathbb{E}\\Big(\\Re(s_k)\\Re(q_m)\\Big)&=\\mathbb{E}\\Big(\\Re(s_k)(4\\omega_2\\Re(r_m)^2-2\\omega_2)\\Big),\\nonumber\\label{app03}\\\\\n&=4\\omega_2\\mathbb{E}\\Big(\\Re(s_k)\\Re(r_m)^2\\Big).\\label{app04}\n\\end{IEEEeqnarray}\nThe term $\\Re(r_m)$ can be represented by\n\\begin{equation}\\label{app05}\n\\Re(r_m)=\\Re(s_k)\\Re(h_{m,k})+\\gamma_m+\\Re(v_m),\n\\end{equation}\nwhere $\\gamma_m$ is the sum of all corresponding terms that are uncorrelated with $\\Re(s_k)$, and $h_{m,k}$ is the $(m,k)^\\mathrm{th}$\nentry of $\\mathbf{H}$. Define $\\epsilon_m\\triangleq\\gamma_m+\\Re(v_m)$. Substituting \\eqref{app05} into \\eqref{app04}, we obtain\n\\begin{IEEEeqnarray}{ll}\n\\mathbb{E}&\\Big(\\Re(s_k)\\Re(r_m)^2\\Big)=\\Re(h_{m,k})^2\\mathbb{E}\\Big(\\Re(s_k)^3\\Big)+\\nonumber\\\\\n&\\quad\\quad\\underbrace{2\\Re(h_{m,k})\\mathbb{E}\\Big(\\Re(s_k)^2\\epsilon_m\\Big)+\\mathbb{E}\\Big(\\Re(s_k)\\epsilon_m^2\\Big)}_{=0}.\\label{app06}\n\\end{IEEEeqnarray}\nPlugging \\eqref{app06} into \\eqref{app04} yields\n\\begin{equation}\\label{app07}\n\\mathbb{E}\\Big(\\Re(s_k)\\Re(q_m)\\Big)=4\\omega_2\\Re(h_{m,k})^2\\mathbb{E}\\Big(\\Re(s_k)^3\\Big).\n\\end{equation}\nThe condition C4) ensures that the third-order central moment $\\mathbb{E}\\Big(\\Re(s_k)^3\\Big)=0$. 
Hence, we can conclude \n$\\mathbb{E}\\Big(q_ms_k^*\\Big)=0, \\forall m,k$. The result \\eqref{eqn049} is therefore proved. \n\n\n\\section{Elaborative Explanation of the Phenomenon of Constructive Noise}\\label{E}\nIn this appendix, we elaborate on the phenomenon of constructive noise in low-resolution signal detection. To better explain the phenomenon, we consider the case where two different information-bearing symbol blocks, denoted by $\\mathbf{s}^{(1)}$ and $\\mathbf{s}^{(2)}$ with $\\mathbf{s}^{(1)}\\neq\\mathbf{s}^{(2)}$, are transmitted to the receiver separately. \nIn the case of very high SNR or, in the extreme, the noiseless case, their corresponding received blocks can be expressed as \n\\begin{equation}\\label{appe1}\n\\mathbf{r}^{(1)}=\\mathbf{H}\\mathbf{s}^{(1)},~\\mathbf{r}^{(2)}=\\mathbf{H}\\mathbf{s}^{(2)},\n\\end{equation}\nwhere the noise $\\mathbf{v}$ is omitted because it is negligibly small. \nIn this linear system, there exists a perfect bijection between $\\mathbf{s}$ and $\\mathbf{r}$, and we have $\\mathbf{r}^{(1)}\\neq \\mathbf{r}^{(2)}$. \nFor this reason, the receiver can reconstruct the information-bearing symbol block from $\\mathbf{r}$ without error. \nThe noise only introduces a detrimental impact on the signal detection. \nHowever, this is not the case for systems with low-resolution ADCs.\n\nTo make the concept accessible, we consider the special case of the $1$-bit ADC, the output of which is \n\\begin{equation}\\label{appe2}\n\\mathbf{y}^{(1)}=\\mathcal{Q}_b(\\mathbf{H}\\mathbf{s}^{(1)}),~\\mathbf{y}^{(2)}=\\mathcal{Q}_b(\\mathbf{H}\\mathbf{s}^{(2)}). \n\\end{equation}\nThe nonlinear function $\\mathcal{Q}_b(\\cdot)$ can destroy the input-output bijection that holds in the linear system. \nHere, we use a simple numerical example to explain the phenomenon. 
\nTo fulfill the condition $\\mathbf{s}^{(1)}\\neq\\mathbf{s}^{(2)}$, we let $\\mathbf{s}^{(1)}=[-1+3j, 3-j]^T$ and $\\mathbf{s}^{(2)}=[-3+1j, 1-3j]^T$.\nMoreover, to simplify our discussion, we let $\\mathbf{H}=[\\mathbf{I}_{2\\times2}, \\mathbf{I}_{2\\times2}]^T$. \nThen, the output of the $1$-bit ADC is given by\n\\begin{equation}\\label{appe3}\n\\mathbf{y}^{(1)}=\\mathbf{y}^{(2)}=[-1+j, 1-j, -1+j, 1-j]^T.\n\\end{equation}\nIn the probability domain, we have \n\\begin{equation}\\label{appe4}\n\\mathrm{Pr}(\\mathbf{y}^{(1)}\\neq\\mathbf{y}^{(2)}|\\mathbf{H}, \\mathbf{s}^{(1)}, \\mathbf{s}^{(2)})=0.\n\\end{equation}\nIt means that there is no bijection between $\\mathbf{y}$ and $\\mathbf{s}$ in this case; for this reason, the receiver is not able to successfully reconstruct $\\mathbf{s}$ from $\\mathbf{y}$ even in the noiseless case. \n\nNow, we increase the noise power (or equivalently reduce the SNR). \nDue to the increased randomness, a positive-amplitude signal can become a negative-amplitude one. \nLet $s$ be a real scalar drawn from the discrete finite set $\\{-3, -1, 1, 3\\}$ and $v$ the Gaussian noise. It is straightforward to see that\n\\begin{equation}\\label{appe5}\n\\mathrm{Pr}(s+v>0|s=-1)>\\mathrm{Pr}(s+v>0|s=-3).\n\\end{equation}\nAs shown in \\cite{9145094}, with the decrease of SNR from a large value (e.g., the noiseless case), the difference between these two probabilities quickly increases at the beginning, and then converges to a certain non-zero value. \nIt means that the noise helps to discriminate the ADC outputs $\\mathbf{y}^{(1)}$ and $\\mathbf{y}^{(2)}$, i.e.,\n\\begin{equation}\\label{appe6}\n\\mathrm{Pr}(\\mathbf{y}^{(1)}\\neq\\mathbf{y}^{(2)}|\\mathbf{H}, \\mathbf{s}^{(1)}, \\mathbf{s}^{(2)})\\neq 0,\n\\end{equation}\nand this probability increases as the SNR decreases, which helps the signal detectability \\cite{809511}. 
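The numerical example above is easy to reproduce. The following sketch (a hypothetical illustration assuming a sign-based $1$-bit quantizer applied to the real and imaginary parts) confirms that the two noiseless outputs coincide, and that adding noise makes them distinguishable in a nonzero fraction of trials:

```python
import numpy as np

rng = np.random.default_rng(1)

def q1(x):
    # 1-bit ADC: sign of the real and imaginary parts
    return np.sign(x.real) + 1j * np.sign(x.imag)

H = np.vstack([np.eye(2), np.eye(2)])        # H = [I, I]^T
s1 = np.array([-1 + 3j, 3 - 1j])
s2 = np.array([-3 + 1j, 1 - 3j])

y1, y2 = q1(H @ s1), q1(H @ s2)              # noiseless: y1 == y2

# With noise, the quantizer outputs differ in some trials
sigma = 2.0
diff_trials = 0
for _ in range(2000):
    v = sigma * (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
    if not np.array_equal(q1(H @ s1 + v), q1(H @ s2 + v)):
        diff_trials += 1
```

Here the noise power and trial count are arbitrary choices for illustration; a nonzero `diff_trials` matches the behaviour described around \\eqref{appe6}.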
\nSince the probability converges to a certain value at some SNR, further reducing the SNR will not improve the signal detectability but will only degrade the detection performance. For the general case, the converged probability of \\eqref{appe6} can be found in \\cite{9145094}, i.e.,\n\\begin{equation}\\label{rev01}\n\t\\mathrm{Pr}(\\mathbf{y}^{(1)}=\\mathbf{y}^{(2)}|\\mathbf{s}^{(1)}\\neq\\mathbf{s}^{(2)})\n\t=\\frac{(\\mathcal{L}^N)(\\mathcal{L}^N-1)}{2^{(2K+1)}},\n\\end{equation} \nwhere $\\mathcal{L}$ is the modulation order. Finally, when the resolution of the quantizer increases, the communication system becomes closer to linear, and the noise becomes less constructive. \n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction}\n\n\nA covering array $CA(n,k,g)$ is a $k\\times n$ array on $\\mathbb{Z}_g$ with the property that any two rows are qualitatively independent, i.e., every ordered pair in $\\mathbb{Z}_g\\times\\mathbb{Z}_g$ appears in some column of the two rows. The number $n$ of columns \nin such an array is called its size. The smallest possible size of a covering array is denoted \n\\begin{equation*}\nCAN(k,g)=\\min_{n\\in \\mathbb{N}}\\{n~:~ \\mbox{there exists a } CA(n,k,g)\\}.\n\\end{equation*}\nCovering arrays are generalisations of both orthogonal arrays and Sperner systems. Bounds and constructions of covering arrays have been derived from algebra, design theory, graph theory, set systems\nand intersecting codes \\cite{chatea, kleitman, sloane, stevens1}. Covering arrays have industrial applications in many disparate areas in which factors or components interact, for example, software and circuit testing, switching networks, drug screening and data compression \\cite{korner,ser,Cohen}. In \\cite{karen}, the definition of a covering array has been extended to include a graph structure. 
\n\n\\begin{definition}\\rm (Covering array on a graph). A covering array on a graph $G$ with alphabet size $g$ and $k=|V(G)|$ is a $k\\times n$ array on $\\mathbb{Z}_g$. \nEach row in the array corresponds to a vertex in the graph $G$. The array has the property that any two rows which correspond to adjacent vertices in $G$ are qualitatively independent. \n\\end{definition}\n\n\\noindent A covering array on a graph $G$ will be denoted by $CA(n,G,g)$. The smallest possible size of a covering array on a graph $G$ will be denoted\n\\begin{equation*}\nCAN(G,g)=\\min_{n\\in \\mathbb{N}}\\{n~:~ \\mbox{there exists a } CA(n,G,g)\\}.\n\\end{equation*}\nGiven a graph $G$ and a positive integer $g$, a covering array on $G$ with minimum size is called {\\it optimal}. Seroussi and Bshouty proved that determining the existence of an optimal binary \ncovering array on a graph is an NP-complete problem \\cite{ser}. We start with a review of some definitions and results on product graphs in Section \\ref{productgraph}. In Section \\ref{bound}, \nwe show that for all graphs $G_1$ and $G_2$, \n$$\\max_{i=1,2}\\{CAN(G_i,g)\\}\\leq CAN(G_1\\Box G_2,g)\\leq CAN(\\max_{i=1,2}\\{\\chi(G_i)\\},g).$$ We look for graphs $G_1$ and $G_2$ where the lower bound on $CAN(G_1\\Box G_2,g)$ is\nachieved. In Section \\ref{Cayley}, we give families of Cayley graphs that achieve this lower bound on the covering array number of the graph product. In Section \\ref{Approx}, we present a polynomial-time \napproximation algorithm with approximation ratio $\\log(\\frac{V}{2^{k-1}})$ for constructing a covering array on a \ngraph $G=(V,E)$ having more than one prime factor with respect to the Cartesian product. \n\n\n\\section{Preliminaries} \\label{productgraph}\nIn this section, we give several definitions from product graphs that we use in this article. \nA graph product is a binary operation on the set of all finite graphs. 
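The defining property — qualitative independence between the rows of adjacent vertices — can be checked mechanically. The sketch below is a hypothetical helper (not from the paper); the example array is a $CA(4,C_4,2)$ on the $4$-cycle, where the two non-adjacent vertex pairs reuse identical rows:

```python
def is_covering_array_on_graph(A, edges, g):
    """Check that for every edge (u, v), rows u and v of A are
    qualitatively independent: all g*g ordered symbol pairs occur."""
    for u, v in edges:
        pairs = {(a, b) for a, b in zip(A[u], A[v])}
        if len(pairs) < g * g:
            return False
    return True

# A CA(4, C4, 2): vertices 0-1-2-3-0 form a 4-cycle;
# the non-adjacent pairs (0, 2) and (1, 3) share identical rows.
A = [
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
]
C4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

Note how the graph relaxation pays off: four columns suffice here, whereas a covering array on the complete graph $K_4$ (i.e., a classical $CA(n,4,2)$) needs five columns.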
However, among all possible associative graph products, \nthe most extensively studied in the literature are the Cartesian product, the direct product,\n the strong product and the lexicographic product. \n\n\\begin{definition}\\rm\n The Cartesian product of graphs $G$ and $H$, denoted by $G\\Box H$, is the graph with \n \\begin{center}\n $V(G\\Box H) = \\{(g, h) \\lvert g\\in V(G) \\mbox{ and } h \\in V(H)\\}$,\n \\\\ $E(G\\Box H) = \\{ (g, h)(g', h') \\lvert g = g', hh' \\in E(H), \\mbox{ or } gg' \\in E(G), h=h' \\}$.\n \\end{center}\nThe graphs $G$ and $H$ are called the {\\it factors} of the product $G \\Box H$.\n\\end{definition}\n\\noindent In general, given graphs $G_1,G_2,...,G_k$, then $G_1 \\Box G_2 \\Box \\cdots \\Box G_k$ is the graph with vertex set\n$V(G_1) \\times V(G_2) \\times \\cdots \\times V(G_k) $, and two vertices $(x_1,x_2,\\ldots, x_k)$ and\n$(y_1, y_2,\\ldots,y_k)$ are adjacent if and only if $x_iy_i \\in E(G_i)$ for exactly one index $1\\leq i\\leq k$ and $x_j = y_j$ for each index $j \\not= i$.\\\\\n\n\\begin{definition}\\rm\nThe direct product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\\times G_2\\times \\cdots \\times G_k$, is the graph with vertex \nset $V(G_1) \\times V(G_2) \\times \\cdots \\times V(G_k) $, and for which vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are \nadjacent precisely if $x_iy_i \\in E(G_i)$ for each index $i$. \n\\end{definition}\n\n\\begin{definition}\\rm\nThe strong product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\\boxtimes G_2\\boxtimes \\cdots \\boxtimes G_k$, is the graph with vertex set \n$V(G_1) \\times V(G_2) \\times \\cdots \\times V(G_k) $, and distinct vertices $(x_1,x_2,\\ldots,x_k)$ and $(y_1,y_2,\\ldots,y_k)$ are adjacent if and only if \neither $x_iy_i\\in E(G_i)$ or $x_i=y_i$ for each $1\\leq i\\leq k$. 
We note that in general $E(\\boxtimes_{i=1}^k {G_i}) \\neq E(\\Box_{i=1}^k G_i) \\cup E(\\times_{i=1}^k G_i)$, unless $k=2$.\n\\end{definition}\n\n\\begin{definition}\\rm\n The lexicographic product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\\circ G_2\\circ \\cdots \\circ G_k$, is the graph with \n vertex set $V(G_1) \\times V(G_2) \\times \\cdots \\times V(G_k) $, and two vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are \nadjacent if and only if for some index $j\\in \\{1,2,...,k\\}$ we have $x_jy_j \\in E(G_j)$ and $x_i =y_i$ for each index $1\\leq i < j$. \n\\end{definition}\n\nLet $G$ and $H$ be graphs with vertex sets $V(G)$ and $V(H)$, respectively. A {\\it homomorphism} from $G$ to $H$ is a map \n$\\varphi~:~V(G)\\rightarrow V(H)$ that preserves adjacency: if $uv$ is an edge in $G$, then $\\varphi(u)\\varphi(v)$ is an edge in $H$. \nWe say $G\\rightarrow H$ if there is a homomorphism from $G$ to $H$, and $G \\equiv H$ if $G\\rightarrow H$ and $H\\rightarrow G$. \nA {\\it weak homomorphism} from $G$ to $H$ is a map $\\varphi~:~V(G)\\rightarrow V(H)$ such that if $uv$ is an edge in $G$, then either \n$\\varphi(u)\\varphi(v)$ is an edge in $H$, or $\\varphi(u)=\\varphi(v)$. Clearly every homomorphism is automatically a weak homomorphism. \n\n\nLet $\\ast$ represent either the Cartesian, the direct or the strong product of graphs, and consider a product $G_1\\ast G_2\\ast \\ldots\\ast G_k$. \nFor any index $i$, $1\\leq i\\leq k$, a {\\it projection map} is defined as:\n$$p_i~:~G_1\\ast G_2\\ast \\ldots\\ast G_k \\rightarrow G_i ~\\mbox{where} ~p_i(x_1,x_2,\\ldots,x_k)=x_i.$$ By the definition of the Cartesian, the direct, and the strong product of \ngraphs, each $p_i$ is a weak homomorphism. In the case of the direct product, as $(x_1,x_2,\\ldots,x_k)(y_1,y_2,\\ldots,y_k)$ is an edge of $G_1\\times G_2\\times\\cdots\\times G_k$ \nif and only if $x_iy_i\\in E(G_i)$ for each $1\\leq i\\leq k$, each projection $p_i$ is actually a homomorphism. 
In the case of the lexicographic product, the first projection map, i.e., the projection onto the first component, is a weak homomorphism, whereas in general the projections onto the other \ncomponents are not weak homomorphisms. \\\\\n\n\n\n\nA graph is {\\it prime} with respect to a given graph product if it is nontrivial and cannot be represented as the product of two nontrivial \ngraphs. For the Cartesian product,\nthis means that a nontrivial graph $G$ is prime if $G=G_1\\Box G_2$ implies that either $G_1$ or $G_2$ is $K_1$. A similar observation holds \nfor the other three products. The uniqueness of the prime factor decomposition of connected graphs with respect to the\n Cartesian product was first shown by Sabidussi $(1960)$, and independently by Vizing $(1963)$. Prime factorization is not unique \n for the Cartesian product in the class of possibly disconnected simple graphs \\cite{HBGP}. It is known that any connected graph factors \n uniquely into prime graphs with respect to the Cartesian product. \n \n \\begin{theorem}(Sabidussi-Vizing)\nEvery connected graph has a unique representation as a product of prime graphs, up to isomorphism and the order of the factors. The number of prime factors is \nat most $\\log_2 |V|$.\n \\end{theorem}\n\\noindent For any connected graph $G=(V,E)$, the prime factors of $G$ with respect to the Cartesian product can be computed in $O(E \\log V)$ time and $O(E)$ space. See Chapter 23, \\cite{HBGP}. \n\n\n\n\\section{Graph products and covering arrays}\\label{bound}\nLet $\\ast$ represent either the Cartesian, the direct, the strong, or the lexicographic product operation. \nGiven covering arrays $CA(n_1,G_1,g)$ and $CA(n_2,G_2,g)$, one can construct a covering array on $G_1 \\ast G_2$ as follows: the row corresponding\n to the vertex $(a,b)$ is obtained by horizontally concatenating the row corresponding to the vertex $a$ in $CA(n_1,G_1,g)$ with the row\n corresponding to the vertex $b$ in $CA(n_2,G_2,g)$. 
Hence an obvious upper bound for the covering array number is given by\n \\begin{center}\n $CAN(G_1 \\ast G_2, g) \\leq CAN(G_1, g) + CAN(G_2, g). $\n \\end{center}\n We now propose some improvements of this bound. A column of a covering array is {\\it constant} if, for some symbol $v$, every entry in the \n column is $v$. In a {\\it standardized} $CA(n,G,g)$ the first column is constant. Because symbols within each row can be permuted independently, \n if a $CA(n,G,g)$ exists, then a standardized $CA(n,G,g)$ exists. \n\\begin{theorem}\n Let $G=G_1\\boxtimes G_2\\boxtimes \\cdots \\boxtimes G_k$, $k\\geq 2$, and let $g$ be a positive integer. \n Suppose that for each $1\\leq i\\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a \n $CA(n,G,g)$ where $n=\\underset{i=1}{\\overset{k}\\sum} n_i -k$. Hence,\n $CAN(G,g)\\leq \\underset{i=1}{\\overset{k}\\sum} CAN(G_i,g)-k$.\n \n\\end{theorem}\n\n\\begin{proof} Without loss of generality, we assume that for each $1\\leq i\\leq g$ the first column of $CA(n_i,G_i,g)$\n is a constant column on symbol $i$, and for each $g+1\\leq i\\leq k$ the first column of $CA(n_i,G_i,g)$ is a constant \n column on symbol 1. \n Let $C_i$ be the array \n obtained from $CA(n_i,G_i,g)$ by removing the first column. Form an array $A$ with \n $\\underset{i=1}{\\overset{k}\\prod} |V(G_i)|$ rows and \n $\\underset{i=1}{\\overset{k}\\sum} n_i -k$ columns, indexing rows as $(v_1,v_2,\\ldots,v_k)$, where $v_i\\in V(G_i)$.\n Row $(v_1,v_2,\\ldots,v_k)$ is \n obtained by horizontally concatenating the rows corresponding to the vertices $v_i$ of $C_i$, for $1\\leq i\\leq k$. \n Consider two distinct rows $(u_1,u_2,\\ldots,u_k)$ and $(v_1,v_2,\\ldots,v_k)$ of $A$ which correspond to adjacent vertices in $G$. \n Two distinct vertices $(u_1,u_2,\\ldots,u_k)$ and $(v_1,v_2,\\ldots,v_k)$ are adjacent if and only if, \nfor each $1\\leq i\\leq k$, either $u_iv_i\\in E(G_i)$ or $u_i=v_i$. 
Since the vertices are distinct, $u_iv_i\\in E(G_i)$ for at least one index $i$.\nWhen $u_i=v_i$, the two rows agree on the block $C_i$, so only pairs of the form $(a,a)$ arise there. When $u_iv_i\\in E(G_i)$, all remaining pairs are covered, because two distinct rows of $C_i$ corresponding to adjacent vertices in $G_i$ are selected. \n \n \n\\end{proof}\n\n\\noindent Using the definition of the strong product of graphs, we have the following result as a corollary.\n\\begin{corollary}\n Let $G=G_1\\ast G_2\\ast \\cdots \\ast G_k$, $k\\geq 2$, and let $g$ be a positive integer, where $\\ast\\in\\{\\Box,\\times\\}$. Then\n $CAN(G,g)\\leq \\underset{i=1}{\\overset{k}\\sum} CAN(G_i,g)-k$.\n \n\\end{corollary}\n\n\\noindent The lemma given below will be used in Theorem \\ref{product}. \n\n \\begin{lemma}\\label{karenlemma} (Meagher and Stevens \\cite{karen})\n Let $G$ and $H$ be graphs. If $G\\rightarrow H$ then $CAN(G,g)\\leq CAN(H,g)$.\n \n \\end{lemma}\n \n\n\\begin{theorem}\\label{product}\n Let $G=G_1\\times G_2\\times \\cdots \\times G_k$, $k\\geq 2$, and let $g$ be a positive integer. \n Suppose that for each $1\\leq i\\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a \n $CA(n,G,g)$ where $n=\\min\\limits_{i} n_i$. Hence, $CAN(G,g)\\leq \\min\\limits_{i} CAN(G_i,g)$.\n \n\\end{theorem}\n\\begin{proof}\n Without loss of generality assume that $n_1 = \\min\\limits_{i} {n_i}$. It is known that $G_1\\times G_2\\times \\cdots \\times G_k\\rightarrow G_1$. Using Lemma \\ref{karenlemma}, we have $CAN(G,g)\\leq CAN(G_1,g)$.\n\n\\end{proof}\n\n\\begin{theorem}\n Let $G=G_1\\circ G_2\\circ \\cdots \\circ G_k$, $k\\geq 2$, and let $g$ be a positive integer. \n Suppose that for each $1\\leq i\\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a \n $CA(n,G,g)$ where $n=\\underset{i=1}{\\overset{k}\\sum} n_i -k+1$. 
Hence, \n $CAN(G,g)\\leq \\underset{i=1}{\\overset{k}\\sum} CAN(G_i,g)-k+1$.\n \\end{theorem}\n\\begin{proof} Without loss of generality, we assume that for each $1\\leq i\\leq k$ the first column of $CA(n_i,G_i,g)$\n is a constant column on symbol $1$.\n Let $C_1= CA(n_1,G_1,g)$, and for each $2\\leq i\\leq k$ let $C_i$ be the array obtained from $CA(n_i,G_i,g)$ by removing the first column, so that $C_i$ has $n_i-1$ columns. Form an array $A$ with $\\underset{i=1}{\\overset{k}\\prod} |V(G_i)|$ rows and $\\underset{i=1}{\\overset{k}\\sum} n_i -k+1$ columns, indexing \n rows as $(v_1,v_2,\\ldots,v_k)$, $v_i\\in V(G_i)$. Row $(v_1,v_2,\\ldots,v_k)$ is obtained by horizontally\n concatenating the rows corresponding to the vertices $v_i$ of $C_i$, for $1\\leq i\\leq k$. If two vertices \n $(v_1,v_2,\\ldots,v_k)$ and $(u_1,u_2,\\ldots,u_k)$ are adjacent in $G$ then either $v_1u_1\\in E(G_1)$, or $v_ju_j\\in E(G_j)$ for \n some $j\\geq 2$ and $v_i=u_i$ for each $i< j$. In the first case the rows from $C_1$ cover each ordered pair of symbols, while in the second case \n the rows from $C_j$ cover each ordered pair of symbols except possibly $(1,1)$. 
But this pair is covered in the block $C_1$: since $j\\geq 2$ we have $u_1=v_1$, so the two rows agree on the constant first column of $C_1$, where both carry the symbol $1$.\n\\end{proof}\n\n\\begin{figure}\n \\begin{center}\n \\caption{$Cay(Q_8, \\{-1,\\pm i, \\pm j\\})\\Box K_3$}\n \\end{center}\n\\end{figure}\n\n\n\\section{Approximation algorithm for covering array on graph}\\label{Approx}\nIn this section, we present an approximation algorithm for the construction of a covering array on a given graph $G=(V,E)$ with \n$k>1$ prime factors with respect to the Cartesian product. \nIn 1988, G. Seroussi and N. H. Bshouty proved that the decision problem of whether there exists a binary \ncovering array of strength $t\\geq 2$ and size $2^t$ on a given $t$-uniform hypergraph is NP-complete \\cite{VS}. \nAlso, constructing \nan optimal covering array on a graph is at least as hard as finding its optimal size. \n \n\\noindent We give an approximation algorithm for the Cartesian product with approximation ratio $O(\\log_s |V|)$, where $s$ is obtained from the \nnumber of symbols associated with each vertex. The following result by Bush is used in our approximation algorithm. \n\n\\begin{theorem}\\rm{\\cite{GT}}\\label{B} Let $g$ be a positive integer. 
If $g$ is written in standard form: $$g=p_1^{n_1}p_2^{n_2}\\ldots p_l^{n_l}$$ where $p_1,p_2,\\ldots,p_l$ are distinct primes, and if \n$$r=\\mbox{min}(p_1^{n_1},p_2^{n_2},\\ldots, p_l^{n_l}),$$ then one can construct $OA(s,g)$ where \n $s =1+ \\max{(2,r)}$.\n\\end{theorem}\nWe are given a weighted connected graph $G=(V,E)$ with each vertex having the same weight $g$. \nIn our approximation algorithm, we use a technique from \\cite{HBGP} for the prime factorization of $G$ with respect to the Cartesian product. \n This can be done in $O(E \\log V)$ time; for details see \\cite{HBGP}. After obtaining the prime factors of $G$, we construct a\n strength-two covering array $C_1$ on the largest prime factor. Then, \n using the rows of $C_1$, we produce a covering array on $G$.\\\\\n\n\n\\noindent\\textbf{APPROX $CA(G,g)$:}\n\\\\\\textbf{Input:} A weighted connected graph $G=(V,E)$ with $k>1$ prime factors with respect to the Cartesian product. Each vertex has weight $g$; $g=p_1^{n_1}p_2^{n_2}\\ldots p_l^{n_l}$ where \n$p_1$, $p_2, \\ldots, p_l$ are primes. \n \\\\\\textbf{Output:} $CA(ug^2,G,g)$.\n\\\\\\textbf{Step 1:} Compute $s = 1 + \\mbox{max}\\{2,r\\}$ where $r=\\mbox{min}(p_1^{n_1},p_2^{n_2},\\ldots, p_l^{n_l})$.\n\\\\\\textbf{Step 2:} Factorize $G$ into prime factors with respect to the Cartesian product;\nsay $G = \\Box_{i=1} ^{k} G_i$ where $G_i= (V_i,E_i)$ is a prime factor.\n\\\\\\textbf{Step 3:} Suppose $V_1\\geq V_2\\geq \\ldots\\geq V_k$. For the prime factor $G_1=(V_1, E_1)$ \\textbf{do} \n\\begin{enumerate}\n\\item Find the smallest positive integer $u$ such that $s^u\\geq V_1$. That is, $u=\\lceil \\mbox{log}_s V_1\\rceil$. \n\\item Let $OA(s,g)$ be an orthogonal array and denote its $i$th row by $R_i$ for $i=1,2,\\ldots,s$. In total, $s^u$ row vectors $(R_{i_1}, R_{i_2},\\ldots, R_{i_u})$, each of length $ug^2$, are formed by horizontally concatenating $u$ rows \n$R_{i_1}$, $ R_{i_2}$, $\\ldots,$ $ R_{i_u}$ where $1\\leq i_1, \\ldots, i_u\\leq s$. 
\n\\item Form a $V_1 \\times ug^2$ array $C_1$ by choosing any $V_1$ rows out of the $s^u$ concatenated row vectors. \nEach row in the array corresponds to a vertex in the graph $G_1$. \\end{enumerate}\n\\textbf{Step 4:}\nFrom $C_1$ we construct a $V\\times ug^2$ array $C$. Index the rows of $C$ by $(u_1,u_2,\\ldots,u_k)$, $u_i\\in V(G_i)$. \nSet the row $(u_1,u_2,\\ldots,u_k)$ to be identical to the row corresponding to $u_1+u_2+\\ldots+u_k ~ \\mbox{mod } V_1$ in $C_1$. Return $C$. \n\n\n\\vspace{1cm}\\begin{theorem}\n Algorithm APPROX $CA(G,g)$ is a polynomial-time $\\rho(V)$ approximation algorithm for the covering array on graph problem, where \n $$\\rho(V) \\leq \\lceil \\log_s \\frac{V}{2^{k-1}} \\rceil.$$\n \\end{theorem}\n\\begin{proof}\n\\textbf{Correctness:} The verification that $C$ is a $CA(ug^2,G,g)$ is straightforward. First, we show that $C_1$ is a covering array of strength two with $ |V_1|$ parameters. \nPick any two distinct rows of $C_1$ and consider the submatrix induced by these two rows. In the submatrix, there must be a column $(R_i, R_j)^T$ where $i \\neq j$. \nHence each ordered pair of values appears at least once. \n Now, to show that $C$ is a covering array on $G$, it is sufficient to show that the rows in $C$ for any pair of adjacent vertices $u=(u_1,u_2,\\ldots,u_k)$ and $v=(v_1,v_2,\\ldots,v_k)$ in $G$ will be qualitatively \n independent. We know $u$ and $v$ are adjacent if and only if $(u_i,v_i)\\in E(G_i)$ for exactly one index $1\\leq i\\leq k$ and \n $u_j=v_j$ for $j\\neq i$. \n Hence $ u_1+u_2+ \\ldots+u_k \\neq v_1+v_2+\\ldots+v_k ~ \\mbox{mod } V_1$ and in Step 4, \n two distinct rows from $C_1$ are assigned to the vertices $u$ and $v$.\\\\\n \\textbf{Complexity:} The average order of $l$ in Step 1 is $\\ln\\ln g$ \\cite{Riesel}. Thus, the time to find $s$ in Step 1 is $O(\\ln \\ln g)$. \n The time to factorize the graph $G=(V,E)$ in Step 2 is $O(E \\log V)$. 
In Step 3(1), the smallest positive integer $u$ can be found in \n $O(\\mbox{log}_s V_1)$ time. In Step 3(2), forming one row vector requires $\\mbox{log}_sV_1$ assignments; hence, forming $V_1$ row vectors requires $O(V_1\\mbox{log}_s V_1)$ time. \n Thus the total running time of APPROX $CA(G,g)$ is $O(E \\log V+\\ln \\ln g)$. Observing that, in practice, $\\ln \\ln g \\leq E \\log V$, we can restate the running time of \n APPROX $CA(G,g)$ as $O(E \\log V)$. \\\\\n \\textbf{Approximation ratio:} We show that APPROX $CA(G,g)$ returns a covering array that is at most $\\rho(V)$ times the size of an optimal covering array on $G$. \n We know that the smallest $n$ for which a $CA(n,G,g)$ exists is at least $g^2$, that is, $CAN(G,g)\\geq g^2$. The algorithm returns a covering array on $G$ of size $ug^2$ where\n $$u=\\lceil \\log_s V_1\\rceil.$$ As $G$ has $k$ prime factors, the maximum number of vertices in a factor can be $\\frac{V}{2^{k-1}}$, that is, $V_1\\leq \\frac{V}{2^{k-1}}$. \nHence $$u= \\lceil \\log_s V_1\\rceil \\leq \\lceil \\log_s \\frac{V}{2^{k-1}}\\rceil.$$ By relating the size of the covering array returned to the optimal size, we obtain our approximation ratio \n$$\\rho(V)\\leq \\lceil \\log_s \\frac{V}{2^{k-1}}\\rceil.$$ \\end{proof}\n\n\\section{Conclusions} One motivation for introducing a graph structure was to optimise covering arrays for their use in testing software and networks based on internal structure. Our primary \nconcern in this paper is with constructions that make optimal covering arrays on large graphs from smaller ones. Large graphs are obtained by considering either the Cartesian, the direct, the strong, or the lexicographic product of small graphs. Using graph homomorphisms, we have \n$$\\max_{i=1,2}\\{CAN(G_i,g)\\}\\leq CAN(G_1\\Box G_2,g)\\leq CAN(K_{\\max_{i=1,2}\\{\\chi(G_i)\\}},g).$$ We gave several classes of Cayley graphs where the lower bound on the covering array number $CAN(G_1\\Box G_2,g)$ is achieved. 
It is an interesting problem to find other classes of graphs for which the lower bound on the covering array number of a product graph is achieved. We gave an approximation algorithm \nfor the construction of a covering array on a graph $G$ having more than one prime factor with respect to the Cartesian product. Clearly, another area to explore is to consider in detail the other graph products, that is, the direct, the strong, and the lexicographic product. \n\n\n\\section{Introduction}\\label{sec:introduction}\nGalaxy clusters are the ultimate result of the hierarchical bottom-up process of cosmic structure formation. Hosted in massive dark matter haloes that formed through subsequent phases of mass accretion and mergers, galaxy clusters carry information on the underlying cosmological scenario as well as the astrophysical processes that shape the properties of the intra-cluster medium (ICM) \\citep[for a review, see e.g.][]{2005RvMP...77..207V,2011ARA&A..49..409A,2012ARA&A..50..353K}. \n\nBeing at the top of the pyramid of cosmic structures, galaxy clusters are mostly found in the late-time universe. 
These can be observed using a variety of techniques that probe either the distribution of the hot intra-cluster gas through its X-ray emission \\citep[see e.g.][]{2005ApJ...628..655V,2010MNRAS.407...83E,2016A&A...592A...1P,2021A&A...650A.104C}, the scattering of the Cosmic Microwave Background (CMB) radiation due to the Sunyaev-Zeldovich effect \\citep[see e.g.][]{2009ApJ...701...32S,2013ApJ...765...67M,2013ApJ...763..127R,2014A&A...571A..29P,2015ApJS..216...27B}, or through measurements of galaxy overdensities and of the gravitational lensing effect caused by the cluster's gravitational mass on background sources \\citep{2016ApJS..224....1R,2019MNRAS.485..498M,2011ApJ...738...41U,2012ApJS..199...25P}.\n\nThe mass distribution of galaxy clusters primarily depends on the dynamical state of the system. Observations of relaxed clusters have shown that the matter density profile at large radii is consistent with the universal Navarro-Frenk-White profile \\citep[NFW,][]{NFW1997}, while deviations have been found in the inner regions \\citep[][]{2013ApJ...765...24N,2017ApJ...851...81A,2017ApJ...843..148C,2020A&A...637A..34S}. In relaxed systems, the gas falls into the dark matter dominated gravitational potential and thermalises through the propagation of shock waves. This sets the gas in a hydrostatic equilibrium (HE) that is entirely controlled by gravity. Hence, aside from astrophysical processes affecting the baryon distribution in the cluster core, the thermodynamic properties of the outer ICM are expected to be self-similar \\citep[see e.g.][]{2019A&A...621A..39E,2019A&A...621A..41G,2021ApJ...910...14G}. This is not the case for clusters undergoing major mergers, for which the virial equilibrium is strongly altered \\citep[see e.g.][]{2016ApJ...827..112B}. 
Such systems exhibit deviations from self-similarity, such that the scaling relations between the ICM temperature, the cluster mass and the X-ray luminosity differ from those of relaxed clusters \\citep[see e.g.][]{2009MNRAS.399..410P,2011ApJ...729...45R,2019MNRAS.490.2380C}. \n\nA direct consequence of merger events is that the mass estimates inferred assuming the HE hypothesis or through scaling relations may be biased. This may induce systematic errors in cosmological analyses that rely upon accurate cluster mass measurements. On the other hand, merging clusters can provide a unique opportunity to investigate the physics of the ICM \\citep{2007PhR...443....1M,2016JPlPh..82c5301Z} and test the dark matter paradigm \\citep[as in the case of the Bullet Cluster][]{2004ApJ...604..596C,2004ApJ...606..819M}. This underlies the importance of identifying merging events in large cluster survey catalogues.\n\nThe identification of unrelaxed clusters relies upon a variety of proxies specifically defined for each type of observation \\citep[for a review see e.g.][]{2016FrASS...2....7M}. As an example, the detection of radio haloes and relics in clusters is usually associated with the presence of mergers. Similarly, the offset between the position of the brightest central galaxy and the peak of the X-ray surface brightness, or the centroid of the SZ signal, is used as a proxy of merger events. This is because the merging process alters the distribution of the various matter constituents of the cluster in different ways.\n\nThe growth of dark matter haloes through cosmic time has been investigated extensively in a vast literature using results from N-body simulations. \\citet{2003MNRAS.339...12Z} found that haloes build up their mass through an initial phase of fast accretion followed by a slow one. 
\\citet{2007MNRAS.379..689L} have shown that during the fast-accretion phase the mass assembly occurs primarily through major mergers, that is, mergers in which the mass of the less massive progenitor is at least one third of the more massive one. Moreover, they found that the greater the mass of the halo, the later the time when the major merger occurred. In contrast, slow accretion is a quiescent phase dominated by minor mergers. Subsequent studies have mostly focused on the relation between the halo mass accretion history and the concentration parameter of the NFW profile \\citep[see e.g.][]{2007MNRAS.381.1450N,2009ApJ...707..354Z,2012MNRAS.427.1322L,2016MNRAS.460.1214L,2017MNRAS.466.3834L,2019MNRAS.485.1906R}. Recently, \\citet{Wang2020} have shown that major mergers have a universal impact on the evolution of the median concentration. In particular, after a large initial response, in which the concentration undergoes a large excursion, the halo recovers a more quiescent dynamical state within a few dynamical times. Surprisingly, the authors have also found that even minor mergers can have a non-negligible impact on the mass distribution of haloes, contributing to the scatter of the concentration parameter. \n\nThe use of concentration as a proxy of halo mergers is nevertheless challenging for multiple reasons. Firstly, the concentration exhibits a large scatter across the merger phase, and the value inferred from the analysis of galaxy cluster observations may be sensitive to the quality of the NFW fit. Secondly, astrophysical processes may alter the mass distribution in the inner region of the halo, thus resulting in values of the concentration that differ from those estimated from N-body simulations \\citep[see e.g.][]{2010MNRAS.406..434M,2011MNRAS.416.2539K}, which could be especially the case for merging clusters. 
\n\nAlternatively, a non-parametric approach to characterise the mass distribution in haloes has been proposed by \\citet{Balmes2014} in terms of simple mass ratios, dubbed halo {\\it sparsity}:\n\\begin{equation}\\label{sparsdef}\ns_{\\Delta_1,\\Delta_2} = \\frac{M_{\\Delta_1}}{M_{\\Delta_2}},\n\\end{equation} \nwhere $M_{\\Delta_1}$ and $M_{\\Delta_2}$ are the masses within spheres enclosing respectively the overdensity $\\Delta_1$ and $\\Delta_2$ (with $\\Delta_1<\\Delta_2$) in units of the critical density (or equivalently the background density). This statistic presents a number of interesting properties that overcome many of the limitations of the concentration parameter. First of all, the sparsity can be estimated directly from cluster mass estimates without having to rely on the assumption of a specific parametric profile, such as the NFW profile. Secondly, for any given choice of $\\Delta_1$ and $\\Delta_2$, the sparsity is found to be weakly dependent on the overall halo mass, with a much smaller scatter than the concentration \\citep{Balmes2014,Corasaniti2018,Corasaniti2019}. Thirdly, these mass ratios retain cosmological information encoded in the mass profile, thus providing an independent cosmological proxy. Finally, the halo ensemble average sparsity can be predicted from prior knowledge of the halo mass functions at the overdensities of interest, which allows one to infer cosmological parameter constraints from cluster sparsity measurements \\citep[see e.g.][]{Corasaniti2018,Corasaniti2021}. 
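As a concrete illustration of Eq.~(\ref{sparsdef}), the sparsity is a plain mass ratio; the masses in the short sketch below are made up for the example and do not come from any catalogue:

```python
def sparsity(m_delta1, m_delta2):
    """Halo sparsity s_{D1,D2} = M_{D1} / M_{D2}; with D1 < D2 the first
    mass encloses the larger radius, so s >= 1 for physical profiles."""
    if m_delta1 < m_delta2:
        raise ValueError("expected M_{Delta_1} >= M_{Delta_2} for Delta_1 < Delta_2")
    return m_delta1 / m_delta2

# Illustrative masses in units of 1e14 Msun/h (invented for the example).
M200c, M500c, M2500c = 5.0, 3.6, 1.2
s_200_500 = sparsity(M200c, M500c)
s_200_2500 = sparsity(M200c, M2500c)
s_500_2500 = sparsity(M500c, M2500c)

# Internal consistency: s_{500,2500} = s_{200,2500} / s_{200,500}.
assert abs(s_500_2500 - s_200_2500 / s_200_500) < 1e-12
```

No profile fitting enters at any point, which is the first of the properties listed above.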
\n\nAs haloes grow from inside out, such that newly accreted mass is redistributed in concentric shells within a few dynamical times \\citep[see e.g.][for a review]{2011MNRAS.413.1373W,2011AdAst2011E...6T}, it is natural to expect that major mergers can significantly disrupt the onion structure of haloes and result in values of the sparsity that significantly differ from those of the population of haloes that have had sufficient time to rearrange their mass distribution and reach the virial equilibrium. \n\nHere, we perform a thorough analysis of the relation between halo sparsity and the halo mass accretion history using numerical halo catalogues from large volume high-resolution N-body simulations. We show that haloes which undergo a major merger in their recent history form a distinct population of haloes characterised by large sparsity values. Quite importantly, we are able to fully characterise the statistical distributions of such populations in terms of the halo sparsity and the time of their last major merger. Thus, building upon these results, we have developed a statistical tool which uses cluster sparsity measurements to test whether a galaxy cluster has undergone a recent major merger and, if so, when such an event took place.\n\nThe paper is organised as follows. In Section~\\ref{halocat} we describe the numerical halo catalogues used in the analysis, while in Section~\\ref{sparsmah} we present the results of the study of the relation between halo sparsity and major mergers. In Section~\\ref{calistat} we present the statistical tests devised to identify the imprint of mergers in galaxy clusters and discuss the statistical estimation of the major merger epoch from sparsity measurements. In Section~\\ref{cosmo_imp} we discuss the implications of these results for cosmological parameter estimation studies using halo sparsity. 
In Section~\\ref{testcase} we validate our approach using similar data, assess its robustness to observational biases and describe the application of our methodology to the analysis of known galaxy clusters. Finally, in Section~\\ref{conclu} we discuss the conclusions.\n\n\n\n\\section{Numerical Simulation Dataset}\\label{halocat}\n\\subsection{N-body Halo catalogues}\nWe use N-body halo catalogues from the MultiDark-Planck2 (MDPL2) simulation \\citep{Klypin2016}, which consists of $3840^3$ particles in a $(1 \\,h^{-1}\\,\\textrm{Gpc})^3$ comoving volume (corresponding to a particle mass resolution of $m_p=1.51\\cdot 10^{9}\\,h^{-1} \\text{M}_{\\odot}$) of a flat $\\Lambda$CDM cosmology run with the \\textsc{Gadget-2}\\footnote{\\href{https://wwwmpa.mpa-garching.mpg.de/gadget/}{https://wwwmpa.mpa-garching.mpg.de/gadget/}} code \\citep{2005MNRAS.364.1105S}. The cosmological parameters have been set to the values of the \\textit{Planck} cosmological analysis of the Cosmic Microwave Background (CMB) anisotropy power spectra \\citep{2014A&A...571A..16P}: $\\Omega_m=0.3071$, $\\Omega_b=0.0482$, $h=0.6776$, $n_s=0.96$ and $\\sigma_8=0.8228$. Halo catalogues and merger trees at each redshift snapshot were generated using the friends-of-friends (FoF) halo finder code \\textsc{rockstar}\\footnote{\\href{https://code.google.com/archive/p/rockstar/}{https://code.google.com/archive/p/rockstar/}} \\citep{Behroozi2013a,Behroozi2013b}. We consider the default setup, with the detected haloes consisting of gravitationally bound particles only. We specifically focus on haloes in the mass range of galaxy groups and clusters, corresponding to $M_{200\\text{c}}>10^{13}\\,h^{-1} \\text{M}_{\\odot}$. 
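Given the particle masses quoted above, the $M_{200\text{c}}$ selection threshold translates into a minimum number of particles per selected halo; a quick back-of-the-envelope check (the numbers are those quoted in the text, the variable names are ours):

```python
# Particle masses of the two simulations, in Msun/h (as quoted above).
m_p = {"MDPL2": 1.51e9, "Uchuu": 3.27e8}
M_min = 1e13  # selection threshold: M_200c > 1e13 Msun/h

# Minimum number of particles within r_200c for a halo to enter the sample.
n_min = {sim: int(M_min / m) for sim, m in m_p.items()}
# MDPL2 haloes are resolved with at least ~6.6e3 particles,
# Uchuu haloes with at least ~3.1e4 particles.
```

This makes explicit why the Uchuu suite, mentioned below, offers the higher mass resolution of the two datasets.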
\n\nFor each halo in the MDPL2 catalogues we build a dataset containing the following set of variables: the halo masses $M_{200\\text{c}}$, $M_{500\\text{c}}$ and $M_{2500\\text{c}}$ estimated from the number of N-body particles within spheres enclosing overdensities $\\Delta=200,500$ and $2500$ (in units of the critical density) respectively; the scale radius, $r_s$, of the best-fitting NFW profile; the virial radius, $r_{\\rm vir}$; the ratio of the kinetic to the potential energy, $K/U$; the offset of the density peak from the average particle position, $x_{\\rm off}$; and the scale factor (redshift) of the last major merger, $a_{\\rm LMM}$ ($z_{\\rm LMM}$). From these variables we additionally compute the following set of quantities: the halo sparsities $s_{200,500}$, $s_{200,2500}$ and $s_{500,2500}$; the offset in units of the virial radius, $\\Delta_r=x_{\\rm off}/r_{\\rm vir}$; and the concentration parameter of the best-fit NFW profile, $c_{200\\text{c}}=r_{200\\text{c}}/r_s$, with $r_{200\\text{c}}$ being the radius enclosing an overdensity $\\Delta=200$ (in units of the critical density). In our analysis we also use the mass accretion history of MDPL2 haloes.\n\nIn addition to the MDPL2 catalogues, we also use data from the Uchuu simulations \\citep{Ishiyama2021}, which cover a larger cosmic volume with higher mass resolution. We use these catalogues to calibrate the sparsity statistics that provide the basis for practical applications of halo sparsity measurements as cosmic chronometers of galaxy cluster mergers. The Uchuu simulation suite consists of N-body simulations of a flat $\\Lambda$CDM model realised with the \\textsc{GreeM} code \\citep{2009PASJ...61.1319I,2012arXiv1211.4406I} with cosmological parameters set to the values of a later \\textit{Planck}-CMB cosmological analysis \\citep{2016A&A...594A..13P}: $\\Omega_m=0.3089$, $\\Omega_b=0.0486$, $h=0.6774$, $n_s=0.9667$ and $\\sigma_8=0.8159$. 
In particular, we use the halo catalogues from the $(2\\,\\textrm{Gpc}\\,h^{-1})^3$ comoving volume simulation with $12800^3$ particles (corresponding to a particle mass resolution of $m_p=3.27\\cdot 10^{8}\\,h^{-1}\\text{M}_{\\odot}$) that, as for MDPL2, were also generated using the \\textsc{rockstar} halo finder.\n\nIt is important to stress that the major merger epoch to which we refer in this work is that defined by the \\textsc{rockstar} halo finder, that is, the time when the particles of the merging halo and those of the parent one are within the same iso-density contour in phase-space. Hence, this should not be confused with the first core-passage time usually estimated in Bullet-like clusters.\n\n\\begin{table}\n\\centering\n\\caption{Characteristics of the selected halo samples at $z=0,0.2,0.4$ and $0.6$ (columns from left to right). Quoted in the rows are the number of haloes in the samples and the redshift of the last major merger $z_{\\rm LMM}$ used to select the haloes for each sample.}\n\\begin{tabular}{ccccc}\n\\hline\n\\hline\n& \\multicolumn{4}{c}{Merging Halo Sample ($T>-1/2$)} \\\\ \n\\hline\n\\hline\n & $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\\\\n \\hline\n$\\#$-haloes & $23164$ & $28506$ & $31903$ & $32769$ \\\\\n$z_{\\rm LMM}$ & $<0.113$ & $<0.326$ & $<0.540$ & $<0.754$ \\\\\n\\hline\n\\hline\n& \\multicolumn{4}{c}{Quiescent Halo Sample ($T<-4$)} \\\\ \n\\hline\n\\hline\n & $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\\\\n \\hline\n$\\#$-haloes & $199853$ & $169490$ & $140464$ & $113829$ \\\\\n$z_{\\rm LMM}$ & $>1.15$ & $>1.50$ & $>1.86$ & $>2.22$ \\\\\n\\hline\n\\end{tabular}\n\\label{tab:samples}\n\\end{table}\n\n\\subsection{Halo Sample Selection}\\label{haloeselection}\nWe aim to study the impact of merger events on the halo mass profile. To this purpose, we focus on haloes which undergo their last major merger at different epochs. 
In such a case, it is convenient to introduce a time variable that characterises the backward time interval between the redshift $z$ (scale factor $a$) at which a halo is investigated and that of its last major merger $z_{\\rm LMM}$ ($a_{\\rm LMM}$) in units of the dynamical time \\citep{Jiang2016, Wang2020},\n\\begin{equation}\\label{backwardtime}\nT(z|z_\\text{LMM})= \\frac{\\sqrt{2}}{\\pi}\\int_{z_{\\text{LMM}}}^{z}\\frac{\\sqrt{\\Delta_\\text{vir}(z')}}{1+z'}dz',\n\\end{equation}\nwhere $\\Delta_{\\rm vir}(z)$ is the virial overdensity, which we estimate using the spherical collapse model approximated formula $\\Delta_{\\rm vir}(z)=18\\pi^2+82[\\Omega_m(z)-1]-39[\\Omega_m(z)-1]^2$ \\citep{Bryan1998}. Hence, one has $T=0$ for haloes which undergo a major merger at the time they are investigated (i.e. $z_{\\rm LMM}=z$), and $T<0$ for haloes that had their last major merger at earlier times (i.e. $z_{\\rm LMM}>z$). Notice that the definition used here differs by a minus sign from that of \\citet{Wang2020}, where the authors have found that merging haloes recover a quiescent state within $|T| \\sim 2$ dynamical times. 
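Eq.~(\ref{backwardtime}) is straightforward to evaluate numerically. The sketch below uses simple trapezoidal integration together with the Bryan \& Norman formula, assuming a flat $\Lambda$CDM background with the MDPL2 value $\Omega_m=0.3071$ (our choice for illustration):

```python
import math

OM0 = 0.3071  # present-day matter density (MDPL2 value, for illustration)

def omega_m(z):
    """Omega_m(z) for a flat LCDM background."""
    a3 = (1.0 + z) ** 3
    return OM0 * a3 / (OM0 * a3 + 1.0 - OM0)

def delta_vir(z):
    """Bryan & Norman (1998) fit, with x = Omega_m(z) - 1."""
    x = omega_m(z) - 1.0
    return 18.0 * math.pi ** 2 + 82.0 * x - 39.0 * x ** 2

def T(z, z_lmm, n=2000):
    """Backward time since the last major merger in dynamical times:
    T(z|z_lmm) = sqrt(2)/pi * int_{z_lmm}^{z} sqrt(Dvir(z'))/(1+z') dz'."""
    zs = [z_lmm + (z - z_lmm) * i / n for i in range(n + 1)]
    f = [math.sqrt(delta_vir(zz)) / (1.0 + zz) for zz in zs]
    h = (z - z_lmm) / n
    return math.sqrt(2.0) / math.pi * h * (sum(f) - 0.5 * (f[0] + f[-1]))

t_now = T(0.0, 0.0)    # merger observed as it happens: T = 0
t_sel = T(0.0, 1.155)  # z_LMM = 1.155 seen at z = 0: roughly -4
```

The second value is consistent with the $T\le -4$ quiescent-sample threshold corresponding to $z_{\rm LMM}>1.155$ at $z=0$ discussed below.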
\n\nIn Section~\\ref{sparsprof} we investigate the differences in the halo mass profile between merging haloes and quiescent ones. To maximise the differences, we select the halo samples as follows:\n\\begin{itemize}\n\\item {\\it Merging haloes}: a sample of haloes that are at less than one half the dynamical time since their last major merger ($T> -1/2$), and therefore still in the process of rearranging their mass distribution;\n\\item {\\it Quiescent haloes}: a sample of haloes for which the last major merger occurred far in the past ($T\\le -4$), so that they had sufficient time to rearrange their mass distribution to an equilibrium state. \n\\end{itemize}\n\nIn the case of the $z=0$ catalogue, the sample of merging haloes with $T>-1/2$ consists of all haloes for which the last major merger, as tagged by the \\textsc{rockstar} algorithm, occurred at $a_{\\rm LMM}>0.897$ ($z_{\\rm LMM}<0.115$), while the sample of quiescent\nhaloes with $T\\le -4$ in the same catalogue is characterised by a last major merger at $a_{\\rm LMM}<0.464$ ($z_{\\rm LMM}>1.155$). In order to study the redshift dependence, we perform a similar selection for the catalogues at $z=0.2,0.4$ and $0.6$ respectively. 
In Table~\\ref{tab:samples} we quote the characteristics of the different samples selected in the various catalogues.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=.8\\linewidth]{figures/concentration_sparsities_lines.pdf}\n \\caption{Distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value for $\\delta_{200,500}=1-s^{\\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\\delta_{200,2500}=1-s^{\\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines) in the case of the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel) respectively.}\n \\label{fig:relative_spars_conc}\n\\end{figure*}\n\n\\section{Halo Sparsity \\& Major Mergers}\\label{sparsmah}\n\\subsection{Halo Sparsity Profile}\\label{sparsprof}\nHere, we seek to investigate the mass profile of haloes undergoing a major merger, as traced by halo sparsity, and evaluate to what extent the NFW profile can account for the estimated sparsities at different overdensities. To this purpose, we compute for each halo in the selected samples the halo sparsities $s_{200,500}$ and $s_{200,2500}$ from their spherical overdensity (SOD) estimated masses, as well as the values obtained assuming the NFW profile using the best-fit concentration parameter $c_{200\\text{c}}$, which we denote as $s^{\\rm NFW}_{200,500}$ and $s^{\\rm NFW}_{200,2500}$ respectively. These can be inferred from the sparsity-concentration relation \\citep{Balmes2014}:\n\\begin{equation}\nx^3_{\\Delta}\\frac{\\Delta}{200}=\\frac{\\ln{(1+c_{200\\text{c}}x_{\\Delta})}-\\frac{c_{200\\text{c}}x_{\\Delta}}{1+c_{200\\text{c}}x_{\\Delta}}}{\\ln{(1+c_{200\\text{c}})}-\\frac{c_{200\\text{c}}}{1+c_{200\\text{c}}}},\\label{sparconc}\n\\end{equation}\nwhere $x_{\\Delta}=r_{\\Delta}/r_{200\\text{c}}$, with $r_{\\Delta}$ being the radius enclosing $\\Delta$ times the critical density. 
Hence, for any value of $\Delta$ and given the value of $c_{200\text{c}}$ for which the NFW profile best fits the density profile of the halo of interest, we can solve Eq.~(\ref{sparconc}) numerically to obtain $x_{\Delta}$ and then derive the value of the NFW halo sparsity given by:\n\begin{equation}\ns^{\rm NFW}_{200,\Delta}=\frac{200}{\Delta}x_{\Delta}^{-3}.\n\end{equation}\nIt is worth emphasising that such a relation holds true only for haloes whose density profile is well described by the NFW formula. In such a case, the higher the concentration, the smaller the value of the sparsity, and conversely, the lower the concentration, the higher the sparsity. Because of this, the mass ratio defined by Eq.~(\ref{sparsdef}) provides information on the level of sparseness of the mass distribution within haloes, which justifies its being dubbed halo sparsity. Notice that from Eq.~(\ref{sparconc}) we can compute $s_{200,\Delta}$ for any $\Delta>200$, and this is sufficient to estimate the sparsity at any other pair of overdensities $\Delta_1\ne\Delta_2>200$, as given by $s_{\Delta_1,\Delta_2}=s_{200,\Delta_2}/s_{200,\Delta_1}$. Haloes whose mass profile deviates from the NFW prediction will have sparsity values that differ from those given by Eq.~(\ref{sparconc}).\n\nThis is emphasised in Fig.~\ref{fig:relative_spars_conc}, where we plot the distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value for $\delta_{200,500}=1-s^{\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\delta_{200,2500}=1-s^{\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines) in the case of the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0,0.2,0.4$ and $0.6$ respectively. We can see that for quiescent haloes the distributions are nearly Gaussian. 
More specifically, in the case of $\delta_{200,500}$ we can see that the distribution has a narrow scatter, with a peak that is centred at the origin at $z=0.6$ and slightly shifts toward positive values at smaller redshifts, with a maximal displacement at $z=0$. This corresponds to an average bias of the NFW-estimated sparsity $s^{\rm NFW}_{200,500}$ of order $\sim 4\%$ at $z=0$. A similar trend occurs for the distribution of $\delta_{200,2500}$, though with a larger scatter and a larger shift in the location of the peak of the distribution at $z=0$, which corresponds to an average bias of $s^{\rm NFW}_{200,2500}$ of order $\sim 14\%$ at $z=0$. Such systematic differences are indicative of the limits of the NFW profile in reproducing the halo mass distribution both in the outskirts and in the inner regions. Moreover, the redshift trend is consistent with the results of the analysis of the mass profile of stacked haloes presented in \citet{2018ApJ...859...55C}, which show that the NFW profile reproduces the halo mass distribution better at $z=3$ than at $z=0$ (see top panels of their Fig.~8). The case of the merging halo sample is very different: we find the distributions of $\delta_{200,500}$ and $\delta_{200,2500}$ to be highly non-Gaussian and irregular. In particular, in the case of $\delta_{200,500}$ we find the distribution to be characterised by a main peak located near the origin with a very heavy tail extending up to relative differences of order $20\%$. The effect is even more dramatic for $\delta_{200,2500}$, in which case the distribution loses the main peak and becomes nearly bimodal, while being shifted over a positive range of values that extends up to relative variations of $\sim 40\%$. 
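For reference, the NFW sparsities entering these comparisons follow from the numerical inversion of Eq.~(\ref{sparconc}). A minimal sketch in Python, assuming SciPy is available (function and variable names are ours, for illustration only):

```python
import numpy as np
from scipy.optimize import brentq

def mu(x):
    """NFW mass profile shape: ln(1+x) - x/(1+x)."""
    return np.log(1.0 + x) - x / (1.0 + x)

def nfw_sparsity(c200, delta):
    """Sparsity s_{200,Delta} of an NFW halo with concentration c200,
    obtained by solving the sparsity-concentration relation
        x^3 * Delta/200 = mu(c200 * x) / mu(c200)
    for x_Delta = r_Delta / r_200c, then s = (200/Delta) * x_Delta^-3."""
    f = lambda x: x**3 * delta / 200.0 - mu(c200 * x) / mu(c200)
    # for Delta > 200 the root lies in (0, 1), since r_Delta < r_200c
    x_delta = brentq(f, 1e-6, 1.0)
    return (200.0 / delta) * x_delta**-3

# e.g. an NFW halo with c_200c = 4
s_200_500 = nfw_sparsity(4.0, 500.0)
s_200_2500 = nfw_sparsity(4.0, 2500.0)
```

Consistently with the sparsity-concentration relation, a higher concentration yields a lower sparsity at fixed overdensity pair.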
Overall this suggests that sparsity provides a more reliable proxy of the halo mass profile than that inferred from the NFW concentration.\n\n\begin{figure}\n \centering\n \includegraphics[width = \linewidth]{figures/sparsity_concentration_histories.pdf}\n \caption{Evolution with scale factor $a$ (redshift $z$) of the median sparsity $s_{200,500}$ (top panels), $s_{500,2500}$ (middle panels) and $s_{200,2500}$ (bottom panels) for a sample of $10^4$ randomly selected haloes from the MDPL2 halo catalogue at $z=0$ (left panels) and the sample of all haloes with a last major merger event at $a_{\rm LMM} = 0.67$ (right panels). The solid lines correspond to the median sparsity computed from the mass accretion histories of the individual haloes, while the shaded area corresponds to the $68\%$ region around the median.}\n \label{fig:sparsity_histories_1}\n\end{figure}\n\begin{figure}\n \centering\n \includegraphics[width = 0.8\linewidth]{figures/sparsity_vs_T.pdf}\n \caption{Median sparsity histories as a function of the backward time interval $T$ since the major merger event (in units of dynamical time) for halo samples from the MDPL2 catalogue at $z=0$ with different last major merger redshifts $z_{\rm LMM}=0.2,0.4,0.6,0.8$ and $1$ (curves from bottom to top). Notice that the backward time interval used here differs by a minus sign from that given by Eq.~(\ref{backwardtime}) to be consistent with the definition by \citet{Wang2020}.}\n \label{fig:sparsity_histories_2}\n\end{figure}\n\n\n\subsection{Halo Sparsity Evolution}\nIn contrast to the previous analysis, we now investigate the evolution of the halo mass profile as traced by halo sparsity, which we reconstruct from the mass accretion histories of the haloes in the MDPL2 catalogue at $z=0$. In Fig.~\ref{fig:sparsity_histories_1}, we plot the median sparsity evolution of $s_{200,500}$ (top panel), $s_{500,2500}$ (middle panel) and $s_{200,2500}$ (bottom panel) as a function of the scale factor. 
In the left panels we show the case of a sample of $10^4$ randomly selected haloes, thus behaving as quiescent haloes in the redshift range considered, while in the right panels we plot the evolution of the sparsity of all haloes in the $z=0$ catalogue undergoing a major merger at $a_{\rm LMM}=0.67$. The shaded area corresponds to the $68\%$ sparsity excursion around the median, while the vertical dashed line marks the value of the scale factor of the last major merger.\n\nIt is worth remarking that the sparsity provides us with an estimate of the fraction of mass in the shell between the radii $R_{\Delta_1}$ and $R_{\Delta_2}$ relative to the mass enclosed within the inner radius $R_{\Delta_2}$, i.e. Eq.~(\ref{sparsdef}) can be rewritten as $s_{\Delta_1,\Delta_2}=\Delta{M}/M_{\Delta_2}+1$. As such, $s_{200,500}$ is a more sensitive probe of the mass distribution in the external region of the halo, while $s_{500,2500}$ and $s_{200,2500}$ are more sensitive to the inner part of the halo. \n\nAs we can see from Fig.~\ref{fig:sparsity_histories_1}, the evolution of the sparsity of merging haloes matches that of the quiescent sample before the major merger event. In particular, during the quiescent phase of evolution, we notice that $s_{200,500}$ remains nearly constant, while $s_{500,2500}$ and $s_{200,2500}$ are decreasing functions of the scale factor. This is consistent with the picture that haloes grow from the inside out, with the mass in the inner region (in our case $M_{2500\text{c}}$) increasing relative to that in the external shell ($\Delta{M}=M_{\Delta_1}-M_{2500\text{c}}$, with $\Delta_1=200$ and $500$ in units of the critical density), thus effectively reducing the value of the sparsity. This effect is compensated in $s_{200,500}$, resulting in a nearly constant evolution. \nWe can see that the onset of the major merger event induces a pulse-like response in the evolution of the halo sparsities at the different overdensities with respect to the quiescent evolution. 
These trends are consistent with the evolution of the median concentration during major mergers found by \citet{Wang2020}, in which the concentration rapidly drops to a minimum before bouncing back. Here, the evolution of the sparsity allows us to follow how the merger alters the mass profile of the halo throughout the merging process. In fact, we may notice that the sparsities rapidly increase to a maximum, signalling the arrival of the merger in the external region of the parent halo, which increases the mass $\Delta{M}$ in the outer shell relative to the inner mass. Then, the sparsities decrease to a minimum, indicating that the merged mass has reached the inner region, after which the sparsities increase to a second maximum, indicating that the merged mass has been redistributed outside the $R_{2500\text{c}}$ radius. However, notice that in the case of $s_{200,2500}$ and $s_{500,2500}$ the second peak is more pronounced than the first one, while the opposite occurs for $s_{200,500}$, which suggests that the accreted mass remains confined within $R_{500\text{c}}$. Afterwards, a quiescent state of evolution is recovered.\n\nIn Fig.~\ref{fig:sparsity_histories_2} we plot the median sparsities of haloes in the MDPL2 catalogue at $z=0$ that are characterised by different major merger redshifts $z_{\rm LMM}$ as a function of the backward time interval $T$ (in units of the dynamical time) since the last major merger. Notice that the $T$ used in this plot differs by a minus sign from that given by Eq.~(\ref{backwardtime}) to conform to the definition by \citet{Wang2020}. We can see that after the onset of the major merger (at $T\ge 0$), the different curves superimpose on one another, indicating that the imprint of the major merger on the profile of haloes is universal, producing the same pulse-like feature in the evolution of the halo sparsity. Furthermore, all haloes recover a quiescent evolution within two dynamical times, i.e. for $T\ge 2$. 
Conversely, on smaller time scales, $T<2$, haloes are still perturbed by the major merger event. These results are consistent with the findings of \citet{Wang2020}, who have shown that the impact of mergers on the median concentration of haloes leads to a time pattern that is universal and also dissipates within two dynamical times. Notice that this distinct pattern due to the major merger is the result of gravitational interactions only. Hence, it is possible that such a feature may be sensitive to the underlying theory of gravity or the physics of dark matter particles.\n\nAs we will see next, the universality of the pulse-like imprint of the merger event on the evolution of the halo sparsity, as well as its limited duration in time, have quite important consequences, since these leave a distinct feature on the statistical distribution of sparsity values, which can be exploited to use sparsity measurements as a time proxy of major mergers in clusters.\n\n\begin{figure*}\n \centering\n \includegraphics[width = 0.8\linewidth]{figures/T_aLMM_vs_s200500.pdf}\n \caption{\label{fig:s_almm} Iso-probability contours of the joint probability distribution in the $s_{200,500}-T$ plane for the haloes from the MDPL2 catalogues at $z=0.0,0.2,0.4$ and $0.6$ respectively. The solid horizontal line marks the value $T=-2$. The inset plots show the marginal probability distributions for haloes with $T>-2$ (blue histograms) and $T<-2$ (beige histograms) respectively.}\n \label{fig:sva}\n\end{figure*}\n\n\n\subsection{Halo Sparsity Distribution}\nWe have seen that the sparsity of different haloes evolves following the same pattern after the onset of the major merger, such that the universal imprint of the merger event is best highlighted in terms of the backward time interval $T$. 
Hence, we aim to investigate the joint statistical distribution of halo sparsity values for haloes characterised by different times $T$ since their last major merger in the MDPL2 catalogues at different redshifts. Here, we revert to the definition of $T$ given by Eq.~(\ref{backwardtime}), where the time interval is measured relative to the time at which the haloes are investigated, that is the redshift $z$ of the halo catalogue. Hence, $T=0$ for haloes undergoing a major merger at $z_{\rm LMM}=z$ and $T<0$ for those with $z_{\rm LMM}>z$.\n\nFor conciseness, here we only describe the features of the joint distribution $p(s_{200,500},T)$ shown in Fig.~\ref{fig:s_almm} in the form of iso-probability contours in the $s_{200,500}-T$ plane at $z=0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel). We find a similar structure of the distributions at other redshift snapshots and for the halo sparsities $s_{200,2500}$ and $s_{500,2500}$. In each panel the horizontal solid line marks the characteristic time interval $|T|=2$. As shown by the analysis of the evolution of the halo sparsity, haloes with $|T|>2$ have recovered a quiescent state, while those with $|T|<2$ are still undergoing the merging process. The marginal conditional probability distributions $p(s_{200,500}|T<-2)$ and $p(s_{200,500}|T>-2)$ are shown in the inset plots. \n\nFirstly, we may notice that the joint probability distribution has a universal structure that is the same at the different redshift snapshots. Moreover, it is characterised by two distinct regions: the region with $T\le -2$, corresponding to haloes that are several dynamical times past their last major merger event ($|T|\ge 2$) and are thus in a quiescent state of sparsity evolution; and the region with $-2<T\le 0$, corresponding to haloes that are still perturbed by the merging process. What this indicates is that we can use $\Gamma = s_{200,500}$ to efficiently differentiate haloes that have undergone a recent major merger from a population at rest. 
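In practice, such a binary classification amounts to thresholding the statistic $\Gamma$. A minimal sketch, using mock log-normal sparsity distributions purely for illustration (the actual calibration relies on the simulated quiescent and merging samples, not on these assumed values):

```python
import numpy as np

rng = np.random.default_rng(42)
# illustrative mock sparsity samples (NOT the calibrated MDPL2 distributions):
# quiescent haloes (H0) scatter around s ~ 1.4, recently merged ones (H1)
# around s ~ 1.8, mimicking the two regions of the joint distribution
s_h0 = rng.lognormal(np.log(1.4), 0.08, 100_000)
s_h1 = rng.lognormal(np.log(1.8), 0.15, 100_000)

def threshold(s_null, false_alarm_rate):
    """Sparsity threshold s_th such that Pr(s > s_th | H0) = false_alarm_rate."""
    return np.quantile(s_null, 1.0 - false_alarm_rate)

s_th = threshold(s_h0, 0.01)
detection_rate = np.mean(s_h1 > s_th)  # Pr(Gamma > s_th | H1)
```

The same quantile construction, run on the calibrated null distribution, is what underlies the redshift-dependent thresholds discussed next.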
In addition to this result, one can estimate a simple ${\rm p}$-value,\n\begin{equation}\n {\rm p} = \text{P}_\text{r}(\Gamma > s_{200,500}|\mathcal{H}_0) = 1 - \int_0^{s_{200,500}-1}\rho(x|\mathcal{H}_0)dx,\n\end{equation}\ni.e. the probability of finding a higher value of $s_{200,500}$ in a halo at equilibrium. Conversely, one can estimate the threshold corresponding to a given ${\rm p}$-value by inverting this relation. \nIn Fig.~\ref{fig:xi_of_z} we show the thresholds corresponding to three key ${\rm p}$-values at increasingly higher redshifts.\nHere, each point is estimated using the sparsity distributions from the numerical halo catalogues. This figure allows one to quickly estimate the values of sparsity above which a halo at some redshift $z$ should be considered as recently perturbed. \n\n\begin{figure}\n \centering\n \includegraphics[width = 0.9\linewidth]{figures/xi_200_500_z.pdf}\n \caption{Sparsity thresholds $s^{\rm th}_{200,500}$ as a function of redshift for ${\rm p}$-values of $0.05$ (purple solid line), $0.01$ (orange solid line) and $0.005$ (green solid line) computed using the Frequentist Likelihood-Ratio approach.}\n \label{fig:xi_of_z}\n\end{figure}\n\nIt is worth noticing that these thresholds are derived from sparsity estimates based on N-body halo masses. In contrast, sparsities of observed galaxy clusters are obtained from mass measurements that may be affected by systematic uncertainties that differ depending on the type of observations. The impact of mass biases is reduced in the mass ratio, but it could still be present. As an example, using results from hydro/N-body simulations for an extreme AGN feedback model, \citet{Corasaniti2018} have shown that baryonic processes on average can bias the estimates of the sparsity $s_{200,500}$ by up to $\lesssim 4\%$ and $s_{200,2500}$ by up to $\lesssim 15\%$ at the low mass end. 
This being said, as long as the mass estimator is unbiased we expect our analysis to hold, albeit with a modification of the fitting parameters. In Section~\ref{testcase} we present a preliminary analysis of the impact of mass biases on our approach; however, we leave more in-depth investigations of this topic, as well as of modifications that could arise from non-gravitational physics, to upcoming work.\n\n\subsection{Bayesian approach}\n\label{sec:Bayesisan}\nAn alternative way of tackling this problem is through the Bayesian flavour of detection theory. In this case, instead of looking directly at how likely the data $\boldsymbol{x}$ are to be described by a model characterised by the model parameters $\boldsymbol{\theta}$ in terms of the likelihood function $p(\boldsymbol{x}|\boldsymbol{\theta})$, one is interested in how likely the model is given the observed data, that is the posterior function $p(\boldsymbol{\theta}|\boldsymbol{x})$. \n\nBayes' theorem allows us to relate these two quantities:\n\begin{equation}\n p(\bmath{\theta}|x) = \frac{p(x|\bmath{\theta})\pi(\bmath{\theta})}{\pi(x)},\n \label{eq:posterior}\n\end{equation}\nwhere $\pi(\bmath{\theta})$ is the prior distribution for the parameter vector $\bmath{\theta}$ and \n\begin{equation}\n \pi(x) = \int p(x|\bmath{\theta})\pi(\bmath{\theta}) d\bmath{\theta},\n\end{equation}\nis a normalisation factor known as the evidence.\n\nWhile this opens up the possibility of estimating the parameter vector, which we will discuss in sub-section~\ref{statmergerepoch}, this approach also allows one to systematically define a test statistic known as the Bayes Factor,\n\begin{equation}\n B_\text{f} = \frac{\int_{V_1} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}}{\int_{V_0} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}},\n\end{equation}\nassociated with the binary test. 
Here, we have denoted by $V_1$ and $V_0$ the volumes of the parameter space attributed to hypotheses $\mathcal{H}_1$ and $\mathcal{H}_0$ respectively.\n\nIn practice, to evaluate this statistic we first need to model the likelihood. Again we use the numerical halo catalogues as calibrators. We find that the distribution of $s_{200,500}$ for a given value of the scale factor at the epoch of the last major merger, $a_\text{LMM}$, is well described by a generalised $\beta '$ pdf. In particular, we fit the set of parameters $\bmath{\theta} = [\alpha, \beta, p, q]^\top$ that depend solely on $a_\text{LMM}$ by sampling the posterior distribution using Markov Chain Monte Carlo (MCMC) with a uniform prior $a_\text{LMM}\sim \mathcal{U}(0; a(z))$\footnote{The upper bound is the scale factor at the epoch at which the halo is observed.}. This is done using the \textsc{emcee}\footnote{\href{https://emcee.readthedocs.io/en/stable/}{https://emcee.readthedocs.io/en/stable/}} library \citep{Emcee2013}. The resulting values of $B_\text{f}$ can then be treated in exactly the same fashion as the Frequentist statistic. It is however important to note that the Bayes factor is often associated with a standard ``rule of thumb'' interpretation \citep[see e.g.][]{Trotta2007}, making this statistic particularly convenient to interpret.\n\nOne way of comparing the efficiency of different tests is to draw their respective Receiver Operating Characteristic (ROC) curves \citep{Fawcett2006}, which show the probability of a true detection, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_1)$, plotted against the probability of a false one, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_0)$, for the same threshold. 
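Operationally, a ROC curve is traced by sweeping the threshold $\Gamma_{\rm th}$ over the range of the test statistic. A self-contained sketch, again with mock distributions standing in for the calibrated ones (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(7)
# mock test statistics under the two hypotheses (illustrative values only)
g_h0 = rng.lognormal(np.log(1.4), 0.08, 50_000)  # quiescent haloes (H0)
g_h1 = rng.lognormal(np.log(1.8), 0.15, 50_000)  # recently merged haloes (H1)

def roc(stat_h0, stat_h1, thresholds):
    """False- and true-detection rates Pr(Gamma > th | H0/H1) per threshold."""
    fpr = np.array([np.mean(stat_h0 > th) for th in thresholds])
    tpr = np.array([np.mean(stat_h1 > th) for th in thresholds])
    return fpr, tpr

th = np.linspace(1.0, 3.5, 251)
fpr, tpr = roc(g_h0, g_h1, th)
# trapezoidal area under the curve (fpr decreases with threshold)
auc = -np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
```

The area under the curve (AUC) summarises the test in a single number, a quantity used again in the systematic-bias analysis below.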
In other words, we are simply plotting the probability of finding a value of $\Gamma$ larger than the threshold under the alternative hypothesis against that of finding a value of $\Gamma$ larger than the same threshold under the null hypothesis. The simplest graphical interpretation of this type of figure is that the closer a curve gets to the top left corner, the more powerful the test is at differentiating between the two cases. \n\nIn Fig.~\ref{fig:roc_curves} we plot the ROC curves corresponding to all the tests we have studied in the context of this work. These curves have been evaluated using a subsample of $10^4$ randomly selected haloes from the MDPL2 catalogues at $z=0$ with masses $M_{200\text{c}} > 10^{13}\,h^{-1}\text{M}_{\odot}$. Let us focus on the comparison between the Frequentist direct sparsity approach (S 1D) and the Bayes Factor obtained using a single sparsity measurement (BF 1D). We can see that both tests have very similar ROC curves at low false alarm rates. This indicates that we do not gain any substantial power from the additional computational work done to estimate the Bayes factor using a single value of sparsity.\n\n\begin{figure}\n \centering\n \includegraphics[width = 0.9\linewidth]{figures/roc_curves.pdf}\n \caption{ROC curves associated with the binary tests studied in this work: the Frequentist sparsity test (S 1D, solid orange line), the Bayes Factor based on a single sparsity value (BF 1D, dashed green line) and using three values (BF 3D, dash-dotted magenta line), the Support Vector Machines with one sparsity value (SVM 1D, dotted purple line) and three sparsities (SVM 3D, dotted yellow line). What can be observed is that all 1D tests are equivalent at small false alarm rates and the only way to significantly increase the power of the test is to increase the amount of input data, i.e. 
adding a third mass measurement as in the BF 3D and SVM 3D cases.}\n \label{fig:roc_curves}\n\end{figure}\n\nWhile this may seem to be the end of the line for the method based on the Bayes factor, the latter presents the significant advantage of being easily expanded to include additional data. In our case this comes in the form of adding sparsity measurements at different overdensities. Simply including a third mass measurement, here $M_{2500\text{c}}$, gives us access to two additional sparsities from the three possible pairs, $s_{200,500},\,s_{200,2500}$ and $s_{500,2500}$. This leads us to define each halo as a point in a 3-dimensional space with coordinates\n\begin{equation}\n \begin{cases}\n x = s_{200,500} - 1 \\\n y = s_{200,2500} -1 \\\n z = s_{500,2500} -1\n \end{cases}\n\end{equation}\nAfter estimating the likelihood in this coordinate system, one quickly observes that switching to a spherical-like coordinate system, $\mathbfit{r} = [r, \vartheta, \varphi]^\top$, allows for a much simpler description. The resulting likelihood model,\n\begin{equation}\n L(\mathbfit{r};\bmath{\theta},\bmath{\mu},\mathbfss{C}) = \frac{f(r;\bmath{\theta})}{2\pi\sqrt{|\mathbfss{C}|}}\exp\left[-\frac{1}{2}(\bmath{\alpha} - \bmath{\mu})^\top\mathbfss{C}^{-1}(\bmath{\alpha} - \bmath{\mu})\right],\n \label{eq:like3D}\n\end{equation}\ntreats $r$ as independent of the two angular coordinates, which are placed within the 2-vector $\bmath{\alpha} = [\vartheta, \varphi]^\top$. Making the radial coordinate independent allows us to constrain $f(r;\bmath{\theta})$ simply from the marginalised distribution. 
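The change of variables just described can be sketched as follows; note that the assignment of the three sparsities to the $x$, $y$, $z$ axes and the angle conventions are our own illustrative choice:

```python
import numpy as np

def sparsity_to_spherical(s200_500, s200_2500, s500_2500):
    """Map a sparsity triplet to spherical-like coordinates (r, theta, phi)
    via x = s_200,500 - 1, y = s_200,2500 - 1, z = s_500,2500 - 1.
    (Axis assignment and angle conventions are illustrative.)"""
    x, y, z = s200_500 - 1.0, s200_2500 - 1.0, s500_2500 - 1.0
    r = np.sqrt(x**2 + y**2 + z**2)       # radial coordinate
    theta = np.arccos(z / r)              # polar angle
    phi = np.arctan2(y, x)                # azimuthal angle
    return r, theta, phi

# e.g. a halo with s_200,500 = 1.45, s_200,2500 = 4.9, s_500,2500 = 3.4
r, theta, phi = sparsity_to_spherical(1.45, 4.9, 3.4)
```

Since all sparsities exceed unity, the mapped points lie in the first octant, so both angles fall in $(0, \pi/2)$.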
Doing so we found that the latter is best described by a Burr type XII \citep{10.1214/aoms/1177731607} distribution,\n\begin{equation}\n f(x;c,k,\lambda,\sigma) = \frac{ck}{\sigma}\left(\frac{x-\lambda}{\sigma}\right)^{c-1}\left[1+\left(\frac{x-\lambda}{\sigma}\right)^{c}\right]^{-k-1},\n\end{equation}\nwith additional displacement, $\lambda$, and scale, $\sigma$, parameters. In total the likelihood function is described by 9 parameters: 4 in $f$, 3 of which are constrained by fitting the marginalised distribution of $r$ realisations assuming $\lambda = 0$, and 5 others, 2 in $\bmath{\mu}$ and 3 in $\mathbfss{C}$, measured through unbiased sample means and covariances. \n\nIn a similar fashion to the single sparsity case, we evaluate these parameters as functions of $a_\text{LMM}$ and thus recover a posterior likelihood for the epoch of the last major merger using MCMC, again applying a flat prior on $a_\text{LMM}$. This posterior in turn allows us to measure the corresponding Bayes Factor. We calculate these Bayes factors for the same test sample used previously and evaluate the corresponding ROC curve (BF 3D in Fig.~\ref{fig:roc_curves}). As intended, the additional mass measurement has the effect of increasing the detection power of the test, raising the ROC curve with respect to the 1D tests and increasing the true detection rate from 40 to 50 percent at a false positive rate of 10 percent. We have tested that the same trends hold at $z>0$.\n\n\subsection{Support Vector Machines}\nAn alternative to the Frequentist -- Bayesian duo is to use machine learning techniques designed for classification. 
While Convolutional Neural Networks \citep[see e.g.][for a review]{2015Natur.521..436L} are very efficient and have been profusely used to classify large datasets, both in terms of dimensionality and size (recent examples in extra-galactic astronomy include galaxy morphology classification \citep[e.g.][]{Hocking2018,Martin2020,Abul_Hayat2020,Cheng2021,Spindler2021}, detection of strong gravitational lenses \citep[e.g.][]{Jacobs2017,Jacobs2019, Lanusse2018,Canameras2020,Huang2020,Huang2021,He2020,Gentile2021,Stein2021}, galaxy merger detection \citep{Ciprijanovic2021} and galaxy cluster merger time estimation \citep{Koppula2021}), they may not be the tool of choice when dealing with datasets of small dimensionality, like the case at hand. A simpler option for this problem is to use Support Vector Machines (SVM) \citep[see e.g.][]{Cristianini2000} as classifiers for the hypotheses defined in Eq.~(\ref{eq:hypothesis}), using as training data the sparsities measured from the halo catalogues.\n\nA SVM works on the simple principle of finding the boundary that best separates the two hypotheses. In contrast to Random Forests \citep[see e.g.][]{Breiman2001}, which can only define a set of horizontal and vertical boundaries, albeit to arbitrary complexity, the SVM maps the data points to a new Euclidean space and solves for the plane best separating the two sub-classes. This definition of a new Euclidean space allows for a non-linear boundary between the classes. For large datasets, however, the optimisation of the non-linear transformation can be slow to converge, and thus we restrict ourselves to linear transformations. To do so we make use of the \textsc{scikit-learn}\footnote{\href{https://scikit-learn.org/}{https://scikit-learn.org/stable/}} \citep{scikit-learn} python package. 
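A minimal linear-SVM classifier of this kind, trained on mock one-dimensional sparsity data (illustrative values only, not the MDPL2 calibration), might look like:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n = 2000
# mock 1D training set (illustrative, NOT the calibrated distributions):
# sparsity s_200,500 for quiescent (label 0) and recently merged (label 1) haloes
s_quiet = rng.lognormal(np.log(1.4), 0.08, n)
s_merge = rng.lognormal(np.log(1.8), 0.15, n)
X = np.concatenate([s_quiet, s_merge]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# linear support vector classifier: finds the best separating boundary
clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
accuracy = clf.score(X, y)  # fraction of correctly classified training haloes
```

In the three-sparsity case, the only change is that `X` gains two extra columns, one per additional sparsity.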
The ``user friendly'' design of this package allows for fast implementations with little required knowledge of python and little input from the user, giving this method an advantage over its Frequentist and Bayesian counterparts.\n\nIn order to compare the effectiveness of the SVM tests, with 1 and 3 sparsities, against those previously presented, we again plot the corresponding ROC curves\footnote{Note that the test data used for the ROC curves was excluded from the training set.} in Fig.~\ref{fig:roc_curves}. What can be seen is that the SVM tests reach a differentiating power comparable to both the Bayesian and Frequentist tests for 1 sparsity, and are only slightly outperformed by the Bayesian test using 3 sparsities. This shows that a statistical test based on the sparsity can be designed in a simple fashion without significant loss of differentiation power, making sparsity an all the more viable proxy for identifying recent major mergers.\n\n\subsection{Estimating cluster major merger epoch}\label{statmergerepoch}\n\nIn the previous sections we have investigated the possibility of using halo sparsity as a statistic to identify clusters that have undergone a recent major merger. We will now expand the Bayesian formulation of the binary test to \emph{estimate} when this last major merger took place. This can be achieved by using the posterior distributions which we have previously computed to calculate the Bayes Factor statistics. These distributions allow us to define the most likely epoch of the last major merger as well as the credible interval around this epoch. \n\n\begin{figure}\n \centering\n \includegraphics[width = 0.95\linewidth]{figures/sparsity1d_posteriors.pdf}\n \caption{Posterior distributions for different values of the sparsity $s_{200,500}=1.2$ (dash-dotted green line), $1.7$ (dashed orange line), $2$ (dotted purple line) and $3$ (solid magenta line). 
We can see that for large sparsity values, the distributions are bimodal at recent epochs, while low values produce both a continuous distribution at low scale factor values and a single peak at recent epochs corresponding to a confusion region. This induces a degeneracy that needs to be broken if we are to accurately estimate $a_\text{LMM}$.}\n \label{fig:post_1sparsity}\n\end{figure}\n\nBeginning with the one sparsity estimate, in Fig.~\ref{fig:post_1sparsity} we plot the resulting posterior distributions $p(a_{\rm LMM}|s_{200,500})$ obtained assuming four different values of $s_{200,500}=1.2,1.7,2$ and $3$ at $z=0$. As we can see, in the case of large sparsity values ($s_{200,500}\ge 1.7$), we find a bimodal distribution in the posterior, caused by the pulse-like feature in the structure of the joint distribution shown in Fig.~\ref{fig:sva}, which is a consequence of the universal imprint of the major merger on the halo sparsity evolution shown in Fig.~\ref{fig:sparsity_histories_2}. In particular, we notice that the higher the measured sparsity, the lower the likelihood that the last major merger occurred in the distant past. A consequence of this pulse-like feature is that a considerable population of haloes with a recent major merger, characterised by $-1/2<T\le 0$, falls in the confusion region of the posterior. In Fig.~\ref{fig:test_metrics} we assess the performance of the single-sparsity (1S) and three-sparsity (3S) estimators of the last major merger epoch on a validation sample. Both estimators perform well for $a_{\rm LMM} > 0.8$, with the 3S estimator being much more accurate at recovering the scale factor of the last major merger and with restricted error margins (see blue curves in the top and bottom panels respectively). Nevertheless, from the middle panel we may notice that both the 1S and 3S estimators have an area of confusion around the dip of the pulse feature in the $\hat{a}_{\rm LMM}$ plot. In both cases, we see that the estimator disfavours very recent mergers (at $a_{\rm LMM}\approx 0.8$) in favour of placing them in the second bump of the pulse, thus causing the median value and the $68\%$ region of $\hat{a}_{\rm LMM}$ to be lower than the true value of the last major merger epoch. 
This effect should be kept in mind when using the pipeline.\n\n\begin{figure}\n \centering\n \includegraphics[width = .9\linewidth]{figures/estimator_tests.pdf}\n \caption{\textit{Top:} Accuracy of the estimation of the epoch of the last major merger, $\alpha_{\rm cc}$, as a function of the true value $a_{\rm LMM}$ for the haloes in the validation sample, for the 1S (orange solid line) and 3S (blue solid line) estimators respectively. \textit{Middle:} Median value of the estimated epoch of the last major merger, $\hat{a}_{\rm LMM}$, as a function of the true value for the 1S (orange curves) and 3S (blue curves) estimators respectively. The shaded areas correspond to the $68\%$ interval around the median, while the dashed diagonal line gives the ideal value of the estimator $\hat{a}_{\rm LMM}=a_{\rm LMM}$. \textit{Bottom:} Relative width of the $68\%$ interval around the median value of $\hat{a}_{\rm LMM}$ as a function of the true value $a_{\rm LMM}$ for the 1S (orange curves) and 3S (blue curves) estimators respectively. We refer the reader to the text for a detailed discussion of the various trends.}\n \label{fig:test_metrics}\n\end{figure}\n\n\subsection{Systematic Bias}\n\nThe statistical methodology we have developed here relies on sparsity estimates obtained from N-body halo masses. However, these masses are not directly comparable to galaxy cluster mass measurements, since the latter involve systematic uncertainties that may bias the cluster mass estimates compared with those from dark-matter-only simulations. Hence, before applying the sparsity test to real observations, we check the robustness of our approach against observational mass biases. 
More specifically, we review conservative estimates of these biases for various mass estimation techniques and quantify the effect that they have on the sparsity.\n\n\subsubsection{Weak Lensing Mass Bias}\nA well known source of systematic error in weak lensing mass estimates comes from fitting the observed tangential shear profile of a cluster with a spherically symmetric NFW-inferred shear profile. In such a case, deviations from sphericity of the mass distribution within the cluster, as well as projection effects, induce a systematic error on the estimated cluster mass that may vary with radius, consequently biasing the evaluation of the sparsity.\n\n\citet{Becker2011} have investigated the impact of this effect on weak lensing estimated masses. They modelled the observed mass at overdensity $\Delta$ as:\n\begin{equation}\n M_{\Delta}^{\text{WL}} = M_\Delta \exp(\beta_\Delta)\exp(\sigma_\Delta X),\n\end{equation}\nwhere $M_{\Delta}$ is the unbiased mass, $\beta_{\Delta}$ is a deterministic bias term, while the third factor is a stochastic term with $\sigma_{\Delta}$ quantifying the spread of a log-normal distribution and $X\sim\mathcal{N}(0,1)$. Under the pessimistic assumption of independent scatter on both mass measurements, the resulting bias on the sparsity then reads:\n\begin{equation}\label{spars_wl_bias}\ns_{\Delta_1,\Delta_2}^{\rm WL} = s_{\Delta_1,\Delta_2} \left(b^{\rm WL}_{\Delta_1,\Delta_2} +1\right) \exp\left(\sigma^{\rm WL}_{\Delta_1,\Delta_2} X\right), \n\end{equation}\nwhere $b^{\rm WL}_{\Delta_1,\Delta_2} = \exp(\beta_{\Delta_1} - \beta_{\Delta_2}) - 1$ and $\sigma^{\rm WL}_{\Delta_1,\Delta_2} = \sqrt{\sigma_{\Delta_1}^2 + \sigma_{\Delta_2}^2}$, with the errors being propagated from the errors quoted on the mass biases. 
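A mock realisation of the biased sparsities of Eq.~(\ref{spars_wl_bias}) can be sketched as follows; here the physical constraint $s^{\rm WL}_{200,500}>1$ is enforced by clipping, which is our simplifying choice (one could also resample), and the input distribution is illustrative:

```python
import numpy as np

def biased_sparsity(s_true, b_wl, sigma_wl, rng):
    """Weak-lensing-biased sparsities following the bias model:
    s_WL = s * (1 + b_WL) * exp(sigma_WL * X), with X ~ N(0, 1).
    The constraint s_WL > 1 is enforced by clipping (a simplification)."""
    x = rng.standard_normal(np.size(s_true))
    s_wl = s_true * (1.0 + b_wl) * np.exp(sigma_wl * x)
    return np.maximum(s_wl, 1.0 + 1e-6)

rng = np.random.default_rng(5)
s_true = rng.lognormal(np.log(1.4), 0.1, 100_000)  # mock true sparsities
# bias parameters of the order of those quoted below for z = 0.25, n_gal = 20
s_wl = biased_sparsity(s_true, b_wl=0.01, sigma_wl=0.40, rng=rng)
```

Since the stochastic term is log-normal, the deterministic bias shifts the median only mildly, while the scatter broadens the distribution substantially, which is what drives the loss of detection power discussed below.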
\citet{Becker2011} have estimated the mass bias model parameters at $\Delta_1=200$ and $\Delta_2=500$; using the values quoted in their Tabs.~3 and 4, we compute the sparsity bias $b^{\rm WL}_{200,500}$ and the scatter $\sigma^{\rm WL}_{200,500}$, which we quote in Tab.~\ref{tab:WL_bias}, for different redshifts and galaxy number densities, $n_\text{gal}$, in units of galaxies per arcmin$^{2}$. Notice that the original mass bias estimates have been obtained assuming an intrinsic shape noise $\sigma_e = 0.3$. \n\n\n\begin{table}\n \centering\n \caption{Sparsity bias and scatter obtained from the weak lensing mass bias estimates by \citet{Becker2011}.}\n \begin{tabular}{cccc}\n \hline\n & $n_\text{gal}$ & $b^\text{WL}_\text{200,500}$ & $\sigma^\text{WL}_\text{200,500}$\\\n \hline\n & $10$ & $0.04\pm0.02$ & $ 0.51\pm0.03 $\\\n $z=0.25$ & $20$ & $ 0.01\pm0.01 $ & $ 0.40\pm0.02 $\\\n & $40$ & $ 0.03\pm0.01 $ & $ 0.35\pm0.02 $\\\n & & &\\\n & $10$ & $0.07\pm0.07$ & $ 0.76\pm0.03 $\\\n $z=0.5$ & $20$ & $ 0.02\pm0.02 $ & $ 0.58\pm0.04 $\\\n & $40$ & $ 0.03\pm0.01 $ & $ 0.49\pm0.03 $\\\n \hline\n \end{tabular}\n \n \label{tab:WL_bias}\n\end{table}\nWe may notice that although the deterministic sparsity bias is smaller than that on individual mass estimates, the scatter can be large. In order to evaluate the impact of such biases on the identification of merging clusters using sparsity estimates, we use the values of the bias parameters quoted in Tab.~\ref{tab:WL_bias} to generate a population of biased sparsities using Eq.~(\ref{spars_wl_bias}) with the constraint that $s_{200,500}^\text{WL} > 1$ for our validation sample at $z=0.25$. We then performed the frequentist test for a single sparsity measurement (the Bayesian estimator has a detection power similar to that of the frequentist one)
and evaluated the Area Under the ROC curve (AUC) as a function of the scatter $\sigma^{\rm WL}_{200,500}$ to quantify the efficiency of the estimator at detecting recent major merger events. This is shown in Fig.~\ref{fig:AUC-scatter}. Notice that a useful classifier should have values of AUC$>0.5$ \citep{Fawcett2006}. Hence, we can see that the scatter can greatly reduce the detection power of the sparsity estimator and render the method ineffective at detecting recent mergers for $\sigma^{\rm WL}_{200,500}>0.2$. In contrast, the estimator is a valuable classifier for smaller values of the scatter.\n\n\begin{figure}\n \centering\n \includegraphics[width = 0.9\linewidth]{figures/AUC_scatter.pdf}\n \caption{Area Under the ROC Curve (AUC) as a function of the scatter on the measured sparsity for WL mass estimates. A random classifier has an AUC$=0.5$. The vertical and horizontal lines denote AUC = 0.6 and the corresponding scatter $\sigma^{\rm WL}_{200,500}=0.2$, marking the threshold, $\sigma^\text{WL}_{200,500} > 0.2$, beyond which the detector can be considered ineffective at detecting recent mergers.}\n \label{fig:AUC-scatter}\n\end{figure}\n\n\n\subsubsection{Hydrostatic Mass Bias}\nMeasurements of galaxy cluster masses from X-ray observations rely on the hypothesis that the intra-cluster gas is in hydrostatic equilibrium. Deviations from this condition can induce a radially dependent bias on the cluster masses \citep[see e.g.][]{2016ApJ...827..112B,Eckert2019,Ettori2022}, thus affecting the estimation of the cluster's sparsity.
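The qualitative behaviour of an AUC-versus-scatter curve can be reproduced with a rank-based (Mann-Whitney) AUC on mock merging and quiescent populations. The sketch below assumes NumPy; the population means and widths are invented for illustration and are not the calibrated distributions used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a merging halo scores higher
    than a quiescent one (Mann-Whitney U normalised by n_pos * n_neg)."""
    scores = np.concatenate([scores_pos, scores_neg])
    ranks = scores.argsort().argsort() + 1.0
    n_pos = len(scores_pos)
    u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * len(scores_neg))

# Mock sparsity populations (illustrative values only): recent mergers
# sit at systematically higher sparsity than quiescent haloes.
n = 20_000
s_merger = rng.normal(1.9, 0.25, n)
s_quiet = rng.normal(1.55, 0.15, n)

for scatter in (0.05, 0.2, 0.5):
    # Multiplicative log-normal observational scatter, as in the WL model.
    obs_m = s_merger * np.exp(scatter * rng.standard_normal(n))
    obs_q = s_quiet * np.exp(scatter * rng.standard_normal(n))
    print(scatter, round(auc(obs_m, obs_q), 3))   # AUC drops as scatter grows
```

The rank formulation avoids building an explicit ROC curve: for continuous scores the normalised Mann-Whitney statistic equals the area under it.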
The hydrostatic mass bias has been studied in \citet{2016ApJ...827..112B}, who have performed cosmological zoom N-body/hydro simulations of 29 clusters to evaluate the bias of masses at overdensities $\Delta=200, 500$ and $2500$ (in units of the critical density) for Cool Core (CC) and No Cool Core (NCC) clusters, as defined with respect to the entropy in the core of their sample, as well as for Regular and Disturbed clusters defined by the offset of the centre of mass and the fraction of substructures.\n\n\begin{table}\n \centering\n \caption{Sparsity bias from the hydrostatic mass bias estimates of \citet{2016ApJ...827..112B} for different categories of simulated clusters.}\n \begin{tabular}{lccc}\n \hline\n & $b_{200,500}^\text{HE}$ & $b_{500,2500}^\text{HE}$ & $b_{200,2500}^\text{HE}$ \\\n \hline\n All & $0.003\pm0.032$ & $-0.037\pm0.025$ & $-0.033\pm0.034$ \\\n CC & $-0.009\pm0.031$ & $-0.151\pm0.038$ & $-0.162\pm0.041$ \\\n NCC & $0.019\pm0.046$ & $0.005\pm0.027$ & $0.023\pm0.041$ \\\n Regular & $0.032\pm0.089$ & $0.025\pm0.037$ & $0.057\pm0.082$ \\\n Disturbed & $-0.017\pm0.077$ & $-0.080\pm0.086$ & $-0.098\pm0.052$\\\n \hline\n \end{tabular}\n \label{tab:hydro_biasses}\n\end{table}\n\nFollowing the evaluation presented in \citet{Corasaniti2018}, we use the hydrostatic mass bias estimates given in Tab.~1 of \citet{2016ApJ...827..112B} to estimate the bias on cluster sparsities; these are quoted in Tab.~\ref{tab:hydro_biasses}. Overall, we can see that the hydrostatic mass bias does not significantly affect the estimated sparsity, with a bias of the order of a few percent and in most cases compatible with a vanishing bias, with only a few exceptions. This is consistent with the results of the recent analysis based on observed X-ray clusters presented in \citet{Ettori2022}, which yield sparsity biases at the percent level, consistent with having no bias at all.
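The propagation behind such a table is straightforward to sketch: if the hydrostatic mass at each overdensity is biased as $M^{\rm HE}_{\Delta} = (1+b_\Delta)M_\Delta$, the sparsity bias is the ratio of the two mass-bias factors minus one. The snippet below assumes NumPy, uses invented mass-bias values (not those of \citet{2016ApJ...827..112B}), and assumes independent errors combined in quadrature; sign conventions for the hydrostatic bias vary in the literature.

```python
import numpy as np

def sparsity_bias(b1, db1, b2, db2):
    """Propagate mass biases M^HE_Di = (1 + b_i) * M_Di to the sparsity.

    s^HE = M^HE_D1 / M^HE_D2 = s * (1 + b1) / (1 + b2), so the sparsity
    bias is b_s = (1 + b1) / (1 + b2) - 1; db1, db2 are the 1-sigma
    uncertainties on b1, b2, combined here in quadrature (a
    simplification that ignores correlated errors).
    """
    ratio = (1.0 + b1) / (1.0 + b2)
    err = ratio * np.hypot(db1 / (1.0 + b1), db2 / (1.0 + b2))
    return ratio - 1.0, err

# Invented example: a 2 per cent and a 4 per cent mass underestimate
# nearly cancel in the ratio, leaving only a ~2 per cent sparsity bias.
b_s, err_s = sparsity_bias(-0.02, 0.03, -0.04, 0.03)
print(f"b_s = {b_s:.3f} +/- {err_s:.3f}")
```

The near-cancellation in the ratio is the reason the sparsity is much less sensitive to the hydrostatic bias than the individual masses are.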
However, we have seen in the case of the WL mass bias that even though the effect on the measured sparsity remains small, the scatter around the true sparsity can severely affect the efficiency of the detector at identifying recent mergers. Unfortunately, the limited sample from \citet{2016ApJ...827..112B} does not allow us to compute the hydrostatic mass bias scatter of the sparsity. If the latter behaves in the same manner as in the WL case, then we can expect the estimator to respond to the increasing scatter as in Fig.~\ref{fig:AUC-scatter}. Consequently, as long as the scatter remains small, $\sigma^{\rm HE}_{\Delta_1,\Delta_2} < 0.1$, the efficiency of the estimator will remain unaffected.\n\n\subsubsection{Concentration Mass Bias}\n\nWe have seen in Section~\ref{sparsprof} that sparsities deduced from the concentration parameter of an NFW profile fitted to the halo density profile are biased compared to those measured using N-body masses. In particular, as seen in Fig.~\ref{fig:relative_spars_conc}, concentration-deduced sparsities tend to underestimate their N-body counterparts. Hence, they are more likely to be associated with relaxed clusters than systems in a perturbed state characterised by higher values. A notable exception is the case of haloes undergoing recent mergers, which are associated with lower concentration values, or equivalently higher sparsity, even though the N-body estimated sparsity is low. This effect is most likely due to poor fit agreement \citep{Balmes2014}, and systematically increases the population of perturbed haloes above the detection threshold.
The concurrence of these two effects leads to an apparent increase in detection power for the 1S estimators when using NFW-concentration estimated masses, as can be seen for the solid lines in Fig.~\ref{fig:validation_roc_curves}.\n\nIn contrast, when looking at the 3S case in Fig.~\ref{fig:validation_roc_curves}, there is a clear decrease in the detection power for the concentration-based sparsity estimates. This is due to the differences in the pulse patterns deduced from concentration compared to the direct measurement of the sparsity, which results in a shape of the pulse at inner radii that is significantly different from that obtained using the N-body masses. Similarly to the 1S estimator, the sparsities measured using the NFW concentration are on average shifted towards smaller values. As such, the effect of using concentration-based estimates results in an overestimation of the likelihood that a halo has not undergone a recent merger.\n\nKeeping the above discussions in mind, we now present example applications to two well studied galaxy clusters.\n\n\n\subsection{Abell 383}\nAbell 383 is a cluster at $z=0.187$ that has been observed in X-ray \citep{2004A&A...425..367B,2006ApJ...640..691V} and optical bands \citep{2002PASJ...54..833M,2012ApJS..199...25P}, with numerous studies devoted to measurements of the cluster mass from gravitational lensing analyses \citep[e.g.][]{2016MNRAS.461.3794O,2016ApJ...821..116U,2019MNRAS.488.1704K}. The cluster appears to be a relaxed system with HE masses $M_{500\text{c}}=(3.10\pm 0.32)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(1.68\pm 0.15)\cdot 10^{14}\,\text{M}_{\odot}$ from Chandra X-ray observations \citep{2006ApJ...640..691V}, corresponding to the halo sparsity $s_{500,2500}=1.84\pm 0.25$, which is close to the median of the halo sparsity distribution.
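The quoted sparsity follows from simple error propagation on the mass ratio. A minimal check, assuming NumPy and independent mass errors (a simplifying assumption), reproduces the value derived from the Chandra masses:

```python
import numpy as np

def sparsity(m1, dm1, m2, dm2):
    """s = M_D1 / M_D2 with independent-error propagation; correlated
    mass errors would change the quoted uncertainty."""
    s = m1 / m2
    ds = s * np.hypot(dm1 / m1, dm2 / m2)
    return s, ds

# Chandra HE masses of Abell 383 (in units of 1e14 Msun).
s, ds = sparsity(3.10, 0.32, 1.68, 0.15)
print(f"s_500,2500 = {s:.2f} +/- {ds:.2f}")   # consistent with 1.84 +/- 0.25
```

The same propagation applied to the lensing masses quoted below reproduces the corresponding sparsity uncertainties as well.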
We compute the merger test statistics of Abell 383 using the lensing mass estimates from the latest version of the Literature catalogues of Lensing Clusters \citep[LC$^2$;][]{2015MNRAS.450.3665S}. In particular, we use the mass estimates obtained from the analysis of the latest profile data of \citet{2019MNRAS.488.1704K}: $M_{2500\text{c}}=(2.221\pm 0.439)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(5.82\pm 1.15)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{200\text{c}}=(8.55\pm 1.7)\cdot 10^{14}\,\text{M}_{\odot}$. These give the following set of sparsity values: $s_{200,500}=1.47\pm 0.41$, $s_{200,2500}=3.85\pm 1.08$ and $s_{500,2500}=2.62\pm 0.73$. We obtain a p-value ${\rm p}=0.21$ and Bayes Factor $B_\text{f}=0.84$; incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=0.40$, which can be interpreted as an effective sparsity of $s^\text{eff}_{200,500} = 1.40$. These results disfavour the hypothesis that the cluster has gone through a major merger in its recent history.\n\n\subsection{Abell 2345}\nAbell 2345 is a cluster at $z=0.179$ that has been identified as a perturbed system by a variety of studies that have investigated the distribution of the galaxy members in optical bands \citep{2002ApJS..139..313D,2010A&A...521A..78B} as well as the properties of the gas through radio and X-ray observations \citep[e.g.][]{1999NewA....4..141G,2009A&A...494..429B,2017ApJ...846...51L,2019ApJ...882...69G,2021MNRAS.502.2518S}. The detection of radio relics and the disturbed morphology of the gas emission indicate that the cluster is dynamically disturbed. Furthermore, the analysis by \citet{2010A&A...521A..78B} suggests that the system is composed of three sub-clusters. \citet{2002ApJS..139..313D} have conducted a weak lensing study on a small field of view centred on the main sub-cluster and found that the density distribution is roughly peaked on the bright central galaxy.
This is also confirmed by the study of \citet{2004ApJ...613...95C}; however, the analysis by \citet{2010PASJ...62..811O} on a larger field of view has shown that Abell 2345 has a complex structure. The shear data have been re-analysed to infer lensing masses that are reported in the latest version of the LC$^2$ catalogue \citep{2015MNRAS.450.3665S}: $M_{200\text{c}}=(28.44\pm 10.76)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(6.52\pm 2.47)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(0.32\pm 0.12)\cdot 10^{14}\,\text{M}_{\odot}$. These mass estimates give the following set of sparsity values: $s_{200,500}= 4.36\pm 2.33$, $s_{200,2500}=87.51\pm 46.83$ and $s_{500,2500}=20.06\pm 10.74$. Using only the $s_{200,500}$ estimate results in a very small p-value, ${\rm p}=4.6\cdot 10^{-5}$. Incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=7.5\cdot10^{-4}$, which can be interpreted as an effective sparsity of $s^\text{eff}_{200,500} = 2.76$, significantly lower than the measured value; however, both strongly favour the signature of a major merger event, which is confirmed by the combined analysis of the three sparsity measurements, for which we find a divergent Bayes factor. In Fig.~\ref{fig:post_A2345} we plot the marginal posterior for the single sparsity $s_{200,500}$ (orange solid line) and for the ensemble of sparsity estimates (purple solid line). In the former case we obtain a median redshift $z_{\rm LMM}=0.30^{+0.03}_{-0.06}$, while in the latter case we find $z_\text{LMM} = 0.39\pm 0.02$, which suggests that a major merger event occurred $t_\text{LMM} = 2.1\pm 0.2$ Gyr ago.
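The conversion from $z_{\rm LMM}$ to an elapsed time can be sketched by a direct integration of the Friedmann equation. The snippet below assumes NumPy and Planck-like flat $\Lambda$CDM parameters close to those of the MultiDark-Planck2 simulation ($H_0 = 67.7$ km/s/Mpc, $\Omega_{\rm m}=0.31$); these parameter values are our assumptions for illustration, not values restated from the text.

```python
import numpy as np

H0_GYR = 67.7 / 978.0   # H0 in Gyr^-1 (1 km/s/Mpc ~ 1/978 Gyr^-1)
OMEGA_M = 0.31

def lookback_time(z, n=4097):
    """Lookback time in Gyr: t = int_0^z dz' / [(1 + z') E(z') H0],
    with E(z) = sqrt(Om (1 + z)^3 + 1 - Om) for flat LCDM."""
    zp = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + 1.0 - OMEGA_M)
    f = 1.0 / ((1.0 + zp) * ez)
    # Trapezoidal rule over the integrand.
    return float(np.sum((f[1:] + f[:-1]) * np.diff(zp)) / 2.0) / H0_GYR

# Time elapsed between the inferred merger epoch and the cluster epoch.
dt = lookback_time(0.39) - lookback_time(0.179)
print(f"t_LMM ~ {dt:.1f} Gyr")   # ~ 2 Gyr, consistent with 2.1 +/- 0.2
```

The difference of lookback times between the merger redshift and the cluster redshift is what sets the quoted elapsed time.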
One should, however, note that, in light of the discussions presented above, this result could be associated with a more recent merger event which, as can be seen in Fig.~\ref{fig:test_metrics}, is artificially disfavoured by our method.\n\n\begin{figure}\n \centering\n \includegraphics[width = 0.9\linewidth]{figures/fig_A2345.pdf}\n \caption{Posterior distributions for Abell 2345 obtained using three sparsity measurements from the lensing cluster masses in the LC$^2$ catalogue \citep{2015MNRAS.450.3665S} using the shear data from \citet{2010PASJ...62..811O}. The vertical lines indicate the median value of $z_{\rm LMM}$, while the shaded area corresponds to the $68\%$ credible region around the median.}\n \label{fig:post_A2345}\n\end{figure}\n\n\section{Conclusions}\label{conclu}\nIn this work we have investigated the properties of the mass profile of massive dark matter haloes hosting galaxy clusters. We have focused on haloes undergoing major merger events with the intent of finding observational proxies of the halo mass distribution that can provide hints of recent mergers in galaxy clusters. To this purpose we have performed a thorough analysis of N-body halo catalogues from the MultiDark-Planck2 simulation. \n\nWe have shown that halo sparsity provides a good proxy of the halo mass profile, especially in the case of merging haloes whose density profile significantly deviates from the NFW formula. We have found that major mergers leave a characteristic universal imprint on the evolution of the halo sparsity. This manifests as a rapid pulse response to the major merger event with a shape that is independent of the time at which the major merger occurs. The onset of the merger systematically increases the value of the sparsity, suggesting that mass in the inner part of the halo is displaced relative to the mass in the external region.
Following the pulse in the value of the sparsity, a quiescent evolution of the halo mass distribution is recovered within only $\sim 2$ dynamical times, which is consistent with the findings of the concentration analysis by \citet{Wang2020}.\n\nThe universal imprint of major mergers on the evolution of halo sparsity implies the universality of the distributions of halo sparsities of merging and quiescent haloes respectively. That is to say, at any given redshift it is possible to distinctly characterise the distributions of merging and quiescent haloes. This is because the distribution of sparsity values of haloes that have undergone their last major merger within $|T|\lesssim 2$ dynamical times differs from that of quiescent haloes that had their last major merger at earlier epochs, $|T|\gtrsim 2$. The former constitute a sub-sample of the whole halo population that largely contributes to the scatter of the halo sparsity distribution through their large sparsity values. \n\nThe characterisation of these distributions enables us to devise statistical tests to evaluate whether a cluster at a given redshift and with given sparsity estimates has gone through a major merger in its recent history and, if so, at which epoch. To this purpose we have developed different metrics based on a standard binary frequentist test, Bayes Factors and Support Vector Machines. We have shown that having access to cluster mass estimates at three different overdensities, which allows one to obtain three sparsity estimates, provides more robust conclusions. In the light of these results we have developed a numerical code that can be used to investigate the presence of major mergers in observed clusters. As an example case, we have considered Abell 2345, a known perturbed cluster, as well as Abell 383, a known quiescent cluster. \n\nIn the future we plan to expand this work in several new directions.
On the one hand, it will be interesting to assess the impact of baryons on halo sparsity estimates, especially for merging haloes. This should be possible through the analysis of N-body/hydro simulations of clusters. On the other hand, it may also be useful to investigate whether the universality of the imprint of major mergers on the evolution of halo sparsity depends on the underlying cosmological model. The analysis of N-body halo catalogues from simulations of non-standard cosmological scenarios, such as the RayGalGroupSims suite \citep{Corasaniti2018,2021arXiv211108745R}, may allow us to address this point. \n\nIt is important to stress that the study presented here focuses on the statistical relation between halo sparsity and the epoch of last major merger, defined as the time when the parent halo merges with a smaller mass halo that has at least one third of its mass. This is different from the collision time, or the central passage time of two massive haloes, which occur on a much shorter time scale. Hence, the methodology presented here cannot be applied to Bullet-like clusters that have just gone through a collision, since the distribution of the collisionless dark matter component in the colliding clusters has not been disrupted and their merger has yet to be completed. Overall, our results open the way to timing major mergers in perturbed galaxy clusters through measurements of dark matter halo sparsity. \n\n\n\section*{Acknowledgements}\nWe are grateful to Stefano Ettori, Mauro Sereno and the anonymous referee for carefully reading the manuscript and their valuable comments. \n\nThe CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP).\nThe MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. \nThe authors gratefully acknowledge the Gauss Centre for Supercomputing e.V.
(www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de).\n\nWe thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1 in the Skies \& Universes site for cosmological simulations. The Uchuu simulations were carried out on the Aterui II supercomputer at the Center for Computational Astrophysics, CfCA, of the National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu DR1 effort has made use of the skun@IAA\_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).\n\section*{Data Availability}\n\nDuring this work we have used publicly available data from the MDPL2 simulation suite \citep{Klypin2016}, provided by the CosmoSim database \href{https://www.cosmosim.org/}{https://www.cosmosim.org/}, in conjunction with publicly available data from the Uchuu simulation suite \citep{Ishiyama2021}, provided by the Skies and Universes database \href{http://skiesanduniverses.org/}{http://skiesanduniverses.org/}.\n\nThe numerical code \textsc{lammas} used for this analysis is available at \href{https://gitlab.obspm.fr/trichardson/lammas}{https://gitlab.obspm.fr/trichardson/lammas}. The package also contains the detailed fitting parameters of the 1S and 3S likelihood distributions for all Uchuu snapshots up to z = 2.
\n\n\bibliographystyle{mnras}\n\n\section{Introduction}\n\nThe performance of a sea-going ship is important not only to keep the fuel and operational costs in check but also to reduce global emissions from the shipping industry. Analyzing the performance of a ship is also of great interest for charter parties to estimate the potential of a ship and the profit that can be made out of it. Therefore, driven by both economic and social incentives, the trade of ship performance analysis and monitoring has been booming substantially in recent times. The importance of in-service data in this context is very well understood by most of the stakeholders, as is clearly reflected by the amount of investment made by them in onboard sensors, data acquisition systems, and onshore operational performance monitoring and control centers.\n\nThe traditional way to evaluate the performance of a ship is using the noon report data provided by the ship's crew. A more exact approach, but not very feasible for commercial vessels, was suggested by \citet{Walker2007}: conducting in-service sea trials in calm-water conditions on a regular basis. With the advent of sensor-based continuous monitoring systems, the current trend is to directly or indirectly observe the evolution of the calm-water speed-power curve over time. ISO 19030 \cite{ISO19030}, along with several researchers (\citet{Koboevic2019}; \citet{Coraddu2019DigTwin}), recommends observing the horizontal shift (along the speed axis) of the calm-water speed-power curve, termed the speed-loss, over time to monitor the performance of a sea-going ship using the in-service data.
Alternatively, it is suggested to observe the vertical shift of the calm-water speed-power curve, often termed the change in power demand (adopted by \citet{Gupta2021PrefMon} and \citet{CARCHEN2020}). Some researchers have also formulated and used indirect performance indicators like fuel consumption (\citet{Koboevic2019}), resistance (or fouling) coefficient (\citet{Munk2006}; \citet{Foteinos2017}; \citet{CARCHEN2020}), (generalized) admiralty coefficient (\citet{Ejdfors2019}; \citet{Gupta2021}), wake fraction (\citet{CARCHEN2020}), fuel efficiency (\citet{Kim2021}), etc. In each of these cases, it is clearly seen (and most of the time acknowledged) that the results are quite sensitive to the quality of the data used to estimate the ship's performance.\n\nThe ship's performance-related data obtained from various sources usually inherits some irregularities due to several factors, like sensor inaccuracies, vibration of the sensor mountings, electrical noise, variation of the environment, etc., as pointed out in the Guide for Smart Functions for Marine Vessels and Offshore Units (Smart Guide) published recently by \citet{ABS2020guide}. The quality of the data used to carry out ship performance analysis, and the results obtained further, can be significantly improved by adopting some rational data processing techniques, as shown by \citet{Liu2020} and \citet{Kim2020}. Another important factor is the source of data, as it may also be possible to obtain such datasets using the publicly available AIS data (\citet{You2017}). \citet{Dalheim2020DataPrep} presented a data preparation toolkit based on the in-service data recorded onboard 2 ships. The presented toolkit was developed for a specific type of dataset, where the variables were recorded asynchronously and had to be synchronized before carrying out ship performance analysis.
The current work rather focuses on challenges faced while processing an already synchronized dataset.\n\nThe current paper presents a review of different data sources used for ship performance analysis and monitoring, namely, onboard recorded in-service data, AIS data, and noon reports, along with the characteristics of each of these data sources. Finally, a data processing framework is outlined which can be used to prepare these datasets for ship performance analysis and monitoring. Although the data processing framework is developed for the performance monitoring of ships, it may easily be recast for several other purposes. With the easy availability of data from ships, the concept of creating digital twins for sea-going ships is becoming quite popular. \citet{Major2021} presented the concept of a digital twin for a ship and the cranes onboard it. The digital twin established by \citet{Major2021} can be used to perform three main offshore operations, namely, remote monitoring of the ship, maneuvering in harsh weather, and crane operations, from an onshore control center. Moreover, as pointed out by \citet{Major2021}, the digital twin technology can also be adopted for several other purposes, like predictive maintenance, ship autonomy, etc. Nevertheless, the data processing framework presented here can also be used to efficiently process the real-time data obtained to create digital twins for ships. \n\nThe following section discusses the art of ship performance analysis and the bare minimum characteristics of a dataset required to do such an analysis. Section \ref{sec:dataSources} presents the above mentioned sources of data used for ship performance analysis, their characteristics, and the tools required to process these datasets. Section \ref{sec:results} presents the data processing framework which can be used to process and prepare these datasets for ship performance monitoring.
Finally, section \\ref{sec:conclusion} finishes the paper with concluding remarks.\n\n\\section{Ship Performance Analysis}\n\nThe performance of a ship-in-service can be assessed by observing its current performance and, then, comparing it to a benchmarking standard. There are several ways to establish (or obtain) a benchmarking standard, like model test experiments, full-scale sea trials, CFD analysis, etc. It may even be possible to establish a benchmarking standard using the in-service data recorded onboard a newly built ship, as suggested by \\citet{Coraddu2019DigTwin} and \\citet{Gupta2021}. On the other hand, evaluating the current performance of a ship requires a good amount of data processing as the raw data collected during various voyages of a ship is susceptible to noise and errors. Moreover, the benchmarking standard is, generally, established for only a given environmental condition, most likely the calm-water condition. In order to draw a comparison between the current performance and the benchmarking standard, the current performance must be translated to the same environmental condition, therefore, increasing the complexity of the problem.\n\n\\subsection{Bare Minimum Variables}\n\nFor translating the current performance data to the benchmarking standard's environmental condition and carrying-out a reliable ship performance analysis, a list of bare minimum variables must be recorded (or observed) at a good enough sampling rate. The bare minimum list of variables must provide the following information about each sampling instant for the ship: (a) Operational control, (b) Loading condition, (c) Operational environment, and (d) Operating point. 
The variables containing the above information must either be directly recorded (or observed) onboard the ship, collected from regulatory data sources such as AIS, or derived using additional data sources; for instance, the operational environment can easily be derived using the ship's location and timestamp with the help of an appropriate weather hindcast (or metocean) data repository.\n\nThe operational control information should contain the values of the propulsion-related control parameters set by the ship's captain on the bridge, like shaft rpm, rudder angle, propeller pitch, etc. The shaft rpm (or propeller pitch, in the case of ships equipped with controllable pitch propellers running at constant rpm) is by far the most important variable here, as it directly correlates with the ship's speed-through-water. It should be noted that even in the case of constant power or speed mode, the shaft rpm (or propeller pitch) continues to be the primary control parameter, as the set power or speed is actually achieved by a real-time optimizer (incorporated in the governor) which optimizes the shaft rpm (or propeller pitch) to get to the set power or speed. Nevertheless, in case the shaft rpm (or propeller pitch) is not available, it may be appropriate to use the ship's speed-through-water as an operational control parameter, as done by several researchers (\citet{FARAG2020}; \citet{Laurie2021}; \citet{Minoura2020}; \citet{Liang2019}), but in this case, it should be kept in mind that, unlike the shaft rpm (or propeller pitch), the speed-through-water is a dependent variable strongly influenced by the loading condition and the operational environment. \n\nThe loading condition should contain the information regarding the ship's fore and aft draft, which can be easily recorded onboard the ship.
Although the wetted surface area and underwater hull form are more appropriate for a hydrodynamic analysis, these can be derived easily using the ship's hull form if the fore and aft draft is known. The operational environment should at least contain variables indicating the intensity of wind and wave loads acting on the ship, like wind speed and direction, significant wave height, mean wave direction, mean wave period, etc. Finally, the operating point should contain the information regarding the speed-power operating point for the sampling instant. Table \ref{tab:bareMinVars} presents the list of bare minimum variables required for ship performance analysis. The list given in the table may have to be modified according to ship specifications; for example, the propeller pitch is only relevant for a ship equipped with a controllable pitch propeller. \n\n\begin{table}[ht]\n\caption{The list of bare minimum data variables required for ship performance analysis.} \label{tab:bareMinVars}\n\centering\n\begin{tabular}{l|l}\n\hline\n\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Variables}} \\\n\hline\nOperational Control & Shaft rpm, Rudder angle, \& Propeller pitch \\\n\hline\nLoading Condition & Fore and aft draft \\\n\hline\nOperational Environment & \begin{tabular}[l]{@{}l@{}}Longitudinal and transverse wind speed, Significant wave height,\\ Relative mean wave direction, \& Mean wave period\end{tabular} \\\n\hline\nOperating Point & Shaft power \& Speed-through-water \\\n\hline\n\end{tabular}\n\end{table}\n\n\subsection{Best Practices} \label{sec:bestPractices}\n\nIt is well-known that the accuracy of various measurements is not the same. It also depends on the source of the measurements. The measurements recorded using onboard sensors are generally more reliable as compared to the manually recorded noon report measurements, due to the possibility of human error in the latter.
Even in the case of onboard recorded sensor measurements, the accuracy varies from sensor to sensor and case to case. Some sensors can be inherently faulty, whereas others can give incorrect measurements due to unfavorable installation and operational conditions, and even the best ones are known to have some measurement noise. Thus, it is recommended to establish and follow some best practices for a reliable and robust ship performance analysis.\n\nThe onboard measurements for shaft rpm ($n$) and shaft torque ($\tau$) are generally obtained using a torsion meter installed on the propeller shaft, which is considered to be quite reliable. The shaft power ($P_s$) measurements are also derived from the same, as the shaft power is related to the shaft rpm and torque through the following identity: $P_s = 2\pi n\tau$. It should be noted that no approximation is assumed in this formulation and, therefore, it should be validated with the data if all three variables ($n, \tau, P_s$) are available. On the other hand, the measurements for speed-through-water are known to have several problems, as presented by \citet{DALHEIM2021}. Thus, it is recommended to use shaft rpm (and not speed-through-water) as the independent variable while creating data-driven regression models to predict the shaft power. For the same reason, it may also be a good idea to quantify the change in a ship's performance in terms of the change in power demand rather than the speed-loss (or speed-gain) recommended by ISO 19030 \cite{ISO19030}.\n\nFurther, it is also quite common to use fuel oil consumption as a key performance indicator for ship performance analysis (\citet{Karagiannidis2021}). The fuel oil consumption can be easily calculated from the engine delivered torque and engine rpm, if the specific fuel consumption (SFC) curve for the engine is known.
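The identity $P_s = 2\pi n \tau$ above lends itself to a simple consistency check across the three logged channels. A minimal sketch, assuming NumPy; the sample values and the helper names are ours, not taken from a real dataset:

```python
import numpy as np

def shaft_power_kw(rpm, torque_knm):
    """Shaft power P_s = 2 * pi * n * tau, with n converted from rpm to
    rev/s and tau in kNm, so the result comes out in kW."""
    return 2.0 * np.pi * (rpm / 60.0) * torque_knm

def power_residual(rpm, torque_knm, logged_power_kw):
    """Relative mismatch between the logged power channel and 2*pi*n*tau.
    A persistently large residual flags a faulty torque or power channel."""
    expected = shaft_power_kw(rpm, torque_knm)
    return (logged_power_kw - expected) / expected

# Hypothetical logged sample: 80 rpm at 1200 kNm is ~10.05 MW.
print(shaft_power_kw(80.0, 1200.0))
```

Since the identity holds exactly, any persistent non-zero residual must come from the sensors or the logging chain, which makes it a cheap screening test before further analysis.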
Even though the SFC curve is established and supplied by the engine manufacturer, it is only valid for a specific operating environment, and it is known to evolve over time due to engine degradation and maintenance. Thus, including the fuel oil consumption in ship performance analysis increases the complexity of the problem, which requires taking engine health into account. If the objective of ship performance analysis is also to take into account the engine performance, then it may be beneficial to divide the problem into two parts: (a) Evaluate the change in power demand (for hydrodynamic performance analysis), and (b) Evaluate the change in engine SFC (for engine performance analysis). Now, the latter can be formulated as an independent problem with a completely new set of variables-of-interest, like engine delivered torque, engine rpm, ambient air temperature, calorific value of fuel, turbocharger health, etc. This would not only improve the accuracy of the ship's hydrodynamic performance analysis but would also allow the user to develop a more comprehensive and, probably, more accurate analysis model. The current work is focused on the hydrodynamic performance analysis.\n\n\\subsection{Sampling Frequency}\nAlmost all electronics-based sensors are known to have some noise in their measurements. The simplest way to subdue this noise is to take an average over a number of measurements (known as a `sample' in statistics), recorded over a very short period of time (milliseconds). It is also known that the statistical mean of a `sample' converges to the true mean (i.e., the mean of the entire population), thereby eliminating the noise, as the number of measurements in the `sample' is increased, provided the observations follow a symmetrical distribution. 
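This noise-cancelling effect of averaging can be illustrated numerically: for zero-mean noise, the standard deviation of the averaged values shrinks roughly as $1/\sqrt{k}$ for $k$ averaged measurements. A minimal sketch with synthetic Gaussian noise (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0
k = 100           # instantaneous measurements averaged per recorded value
n_samples = 1000  # number of recorded (averaged) values

# Raw instantaneous measurements: true value plus zero-mean noise.
raw = true_value + rng.normal(0.0, 1.0, size=(n_samples, k))

# Each recorded value is the mean of k instantaneous measurements.
recorded = raw.mean(axis=1)

raw_std = raw.std()            # ~1.0: noise level of single readings
recorded_std = recorded.std()  # ~1/sqrt(k) = 0.1: much less noisy
```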
Nevertheless, it is observed that the high frequency data still retains some noise, probably because the number of measurements in each `sample' is small, i.e., the average is taken over only a few measurements recorded over a very short period of time. On the other hand, as seen in the case of noon reports and most of the in-service datasets, time-averaging the measurements over a longer period of time obscures the effect of moderately varying influential factors, for example, instantaneous incident wind and waves, response motions, etc. Thus, data with a very high sampling frequency may retain noise, whereas data with a very low sampling frequency, containing time-averaged values, may obscure important effects in the time-series. Furthermore, in a third scenario, it may be possible that the data acquisition (DAQ) system onboard the ship is simply using a low sampling frequency, recording instantaneous values instead of time-averaged ones, saving a good amount of storage and bandwidth while transmitting the data to the shore-based control centers. These low frequency instantaneous values may result in even more degraded data quality, as they would contain noise as well as obscure the moderately varying effects.\n\nThe ideal sampling frequency would also depend on the objective of the analysis and the recorded variables. For example, if the objective of the analysis is to predict the motion response of a ship or analyze its seakeeping characteristics, the data should be recorded at a high enough sampling frequency such that it is able to capture such effects. \\citet{hansen2011performance} analyzed the ship's rudder movement and the resulting resistance, and demonstrated that if the sampling interval were too large, the overall dynamics of the rudder movement would not be captured, resulting in a difference in resistance. 
One criterion for selecting the data sampling rate is the Nyquist frequency criterion (\\citet{jerri1977shannon}), which is widely used in signal processing. According to this criterion, the sampling frequency must be more than twice the frequency of the observed phenomenon to sufficiently capture the information regarding the phenomenon. Therefore, if the aim is not to record any information regarding the above-mentioned moderately varying effects (instantaneous incident wind and waves, response motions, etc.), it may be acceptable to just obtain low frequency time-averaged values so that such effects are subdued. But it may still be useful to obtain high frequency data, in this case, as it can be advantageous from a data cleaning point of view. For example, the legs of the time-series showing very high variance, due to noise or moderately varying effects, can be removed from the analysis to increase the reliability of results. \n\n\\section{Data Sources, Characteristics \\& Processing Tools} \\label{sec:dataSources}\n\n\n\\subsection{In-service Data}\n\nThe in-service data, referred to here, is recorded onboard a ship during its voyages. This is achieved by installing various sensors onboard the ship, collecting the measurements from these sensors on a regular basis (at a predefined sampling rate) using a data acquisition (DAQ) system, and transferring the collected data to onshore control centers. The two most important features of in-service data are the sampling rate (or, alternatively, sampling frequency) and the list of recorded variables. Unfortunately, there is no proper guide or standard followed while defining these features for a ship. Thus, the in-service data processing has to be adapted to each case individually. \n\nThe in-service datasets used here are recorded over a uniform (across all recorded variables) and evenly-spaced sampling interval, which makes it easier to adopt and apply data processing techniques. 
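When a dataset does not come with such a uniform interval, it can be resampled onto one. A minimal sketch with pandas (timestamps and values are illustrative; the binning interval is a free choice):

```python
import pandas as pd

# Unevenly sampled signal: timestamps drift and jump.
idx = pd.to_datetime(["2021-01-01 00:00:04", "2021-01-01 00:00:58",
                      "2021-01-01 00:02:03", "2021-01-01 00:02:59"])
s = pd.Series([10.0, 11.0, 13.0, 12.0], index=idx)

# Down-sample onto a uniform 1-minute grid: the mean of all samples
# falling in each bin; empty bins become NaN (here, minute 00:01).
uniform = s.resample("1min").mean()
```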
Otherwise, where the data is sampled at a non-uniform and uneven sampling interval, some additional pre-processing has to be done in order to prepare it for further analysis, as demonstrated by \\citet{Dalheim2020DataPrep}, who presented a detailed algorithm to deal with time vector jumps and to synchronize non-uniformly recorded data variables. The problem of synchronization can, alternatively, be looked at using the well-known dynamic time warping (DTW) technique, which is generally used for aligning the measurements taken by two sensors, measuring the same or highly correlated features. In a different approach, \\citet{virtanen2020scipy} demonstrated that the collected data can be down-sampled or up-sampled (resampled) to obtain a uniform and evenly sampled dataset. \n\n\\subsubsection{Inherently Faulty \\& Incorrect Measurements} \\label{sec:incorrMeasureInServData}\n\nSome of the sensors onboard a ship can be inherently faulty and provide incorrect measurements due to unfavorable installation or operational conditions. Many of these can actually be fixed quite easily. For instance, \\citet{Wahl2019} presented the case of faulty installation of the wind anemometer onboard a ship, resulting in missing measurements for the head-wind condition, probably due to the presence of an obstacle right in front of the sensor. Such a fault is fairly simple to deal with, say, by fixing the installation of the sensor, and it is even possible to fix the already recorded dataset using the wind measurements from one of the publicly available weather hindcast datasets. Such an instance also reflects the importance of data exploration and validation for ship performance analysis. Unlike the above, the case of the draft and speed-through-water measurement sensors is not as fortunate and easy to resolve.\n\nThe ship's draft is, generally, recorded using a pressure transducer installed onboard the ship. 
The pressure transducer measures the hydrostatic pressure acting on the bottom plate of the ship, which is further converted into the corresponding water level height or the draft measurement. When the ship starts to move and the layer of water in contact with the ship develops a relative velocity with respect to the ship, the total pressure at the ship's bottom reduces due to the negative hydrodynamic pressure and, therefore, further measurements taken by the draft sensor are incorrect. This is known as the Venturi effect. It may seem like a simple case, and one may argue that the measurements can be fixed by just adding the water level height equivalent to the hydrodynamic pressure, which may be calculated using the ship's speed-through-water. Here, it should be noted that, firstly, to accurately calculate the hydrodynamic pressure, one would need the localized relative velocity of the flow (and not the ship's speed-through-water), which is impractical to measure, and secondly, the speed-through-water measurements are also known to have several sources of inaccuracy. Alternatively, it may be possible to obtain the correct draft measurements from the ship's loading computer. The loading computer can calculate the draft and trim in real-time based on information such as the ship's lightweight, cargo weight and distribution, and ballast water loading configuration.\n\nThe state-of-the-art speed-through-water measurement device uses the Doppler acoustic speed log principle. Here, the relative speed of water around the hull (i.e., the speed-through-water) is measured by observing the shift in frequency (popularly known as the Doppler shift) of the ultrasound pulses emitted from the ship's hull, due to its motion. The ultrasonic pulses are reflected by the ocean bottom, impurities in the surrounding water, marine life, and even the liquid-liquid interface between layers of different density in the deep ocean. 
The speed of water surrounding the ship is influenced by the boundary layer around the hull, so only the ultrasonic pulses reflected by particles outside the boundary layer should be used to estimate the speed-through-water. Therefore, a minimum pulse travelling distance has to be prescribed for the sensor. If the prescribed distance is too large or if the ship is sailing in shallow waters, the Doppler shift is calculated using the reflection from the ocean bottom, i.e., the sensor is in ground-tracking mode, and it therefore records the ship's speed-over-ground instead of the speed-through-water. \\citet{DALHEIM2021} presented a detailed account regarding the uncertainty in the speed-through-water measurements for a ship, commenting that the speed log sensors are considered to be among the most inaccurate ones onboard the ship. \n\nIt may also be possible to estimate the speed-through-water of a ship using the ship's speed-over-ground and incident longitudinal water current speed. The speed-over-ground of a ship is measured using a GPS sensor, which is considered to be quite accurate, but unfortunately, the water current speed is seldom recorded onboard the ship. It is certainly possible to obtain the water current speed from a weather hindcast data source, but the hindcast measurements are not accurate enough to obtain a good estimate for speed-through-water, as indicated by \\citet{Antola2017}. It should also be noted that the temporal and spatial resolution of weather hindcast data is considerably coarser than the sampling interval of the data recorded onboard the ship. Moreover, the water current speed varies along the depth of the sea; therefore, the incident longitudinal water current speed must be calculated as an integral of the water current speed profile over the depth of the ship. 
Thus, in order to obtain accurate estimates of speed-through-water, the water current speed has to be measured or estimated up to a certain depth of the sea with good enough accuracy, which is not possible with the current state-of-the-art.\n\n\\subsubsection{Outliers} \\label{sec:outliers}\n\nAnother major challenge with data processing is the problem of detecting and handling outliers. As suggested by \\citet{Olofsson2020}, it may be possible to categorize outliers into the following two broad categories: (a) Contextual outliers, and (b) Correlation-defying outliers\\footnote{Called collective outliers by \\citet{Olofsson2020}.}. \\citet{Dalheim2020DataPrep} presented methods to detect and remove contextual outliers, further categorized as (i) obvious (or invalid) outliers, (ii) repeated values, (iii) drop-outs, and (iv) spikes. Contextual outliers are easily identifiable as they either violate the known validity limits of one or more recorded variables (as seen in the case of obvious outliers and spikes) or present an easily identifiable but anomalous pattern (as seen in the case of repeated values and drop-outs). \n\nThe case of correlation-defying outliers is much more difficult to handle, as they can easily blend into the cleaned data pool. The two most popular methods which can be used to identify correlation-defying outliers are Principal Component Analysis (PCA) and autoencoders. Both these methods try to reconstruct the data samples after learning the correlations between the variables. A correlation-defying outlier would result in an abnormally high reconstruction error and can, therefore, be detected using such techniques. 
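A minimal sketch of the PCA reconstruction-error idea, in pure numpy on synthetic data (the variable pairing, component count, and flagging threshold are all illustrative):

```python
import numpy as np

def pca_reconstruction_error(X, n_components):
    """Per-sample reconstruction error after projecting the data onto
    its first n_components principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T
    X_rec = Xc @ V @ V.T + mu
    return np.linalg.norm(X - X_rec, axis=1)

rng = np.random.default_rng(0)
# Two strongly correlated variables (think shaft rpm vs. shaft power).
x = rng.normal(0.0, 1.0, 500)
X = np.column_stack([x, 2.0 * x + rng.normal(0.0, 0.05, 500)])
# A correlation-defying outlier: both values individually plausible,
# but the pairing violates the learned correlation.
X[0] = [2.0, -4.0]

err = pca_reconstruction_error(X, n_components=1)
is_outlier = err > err.mean() + 5 * err.std()
```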
In a recent attempt, \\citet{Thomas2021} demonstrated an ensemble method combining PCA and autoencoders coupled with isolation forest to detect such outliers.\n\n\\subsubsection{Time-Averaging Problem} \\label{sec:timeAvgProb}\n\nAs aforementioned, the onboard recorded in-service data can be supplied as time-averaged values over a short period of time (generally up to around 15 minutes). Although the time-averaging method eliminates white noise and reduces the variability in the data samples, it introduces a new problem in the case of angular measurements. The angular measurements are, generally, recorded in the range of 0 to 360 degrees. When the measurement is around 0 or 360 degrees, it is obvious that the instantaneous measurements, reported by the sensor, will fluctuate in the vicinity of 0 and 360 degrees. For instance, assuming that the sensor reports a value of about 0 degrees for half of the averaging time and about 360 degrees for the remaining time, the time-averaged value recorded by the data acquisition (DAQ) system will be around 180 degrees, which is significantly incorrect. Most of the angular measurements recorded onboard a ship, like relative wind direction, ship heading, etc., are known to suffer from this problem, and it should be noted that, unlike the example given here, the incorrect time-averaged angle can take any value between 0 and 360 degrees, depending on the instantaneous values over which the average is calculated. \n\nAlthough it may be possible to fix these incorrect values using a carefully designed algorithm, there is no established method available at the moment. Thus, it is suggested to fix these measurements using an alternative source for the data variables. For example, the wind direction can be gathered easily from a weather hindcast data source. Thus, it can be used to correct or just replace the relative wind direction measurements, recorded onboard the ship. 
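One candidate for such a carefully designed algorithm, applicable at the DAQ stage rather than as a repair of already-averaged values, is the circular (vector) mean: average the unit vectors of the instantaneous angles and convert back. This is a standard construction from directional statistics, not an established ship-data practice; a sketch:

```python
import numpy as np

def circular_mean_deg(angles_deg):
    """Mean of angles in degrees, immune to the 0/360 wrap-around."""
    rad = np.radians(np.asarray(angles_deg, dtype=float))
    s, c = np.sin(rad).mean(), np.cos(rad).mean()
    return np.degrees(np.arctan2(s, c)) % 360.0

# Instantaneous headings fluctuating around north: the naive average
# lands at 180 degrees, the circular mean stays near 0/360 degrees.
angles = [358.0, 2.0, 359.0, 1.0]
naive = sum(angles) / len(angles)   # 180.0 -- wrong
circ = circular_mean_deg(angles)    # ~0 (or ~360) -- correct
```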
The ship's heading, on the other hand, can be estimated using the latitude and longitude measurements from the GPS sensor.\n\n\n\\subsection{AIS Data}\nAIS is an automatic tracking system that uses transceivers to help ships and maritime authorities identify and monitor ship movements. It is generally used as a tool for ship transportation services to prevent collisions during navigation. Ships over 300 gross tons must be equipped with transponders capable of transmitting and receiving all message types of AIS under the SOLAS Convention. AIS data is divided into dynamic (position, course, speed, etc.), static (ship name, dimensions, etc.), and voyage-related data (draft, destination, ETA, etc.). Dynamic data is automatically transmitted every 2-10 seconds depending on the speed and course of the ship, and if anchored, such information is automatically transmitted every 6 minutes. On the other hand, static and voyage-related data is provided by the ship's crew, and it is transmitted every 6 minutes regardless of the ship's movement state.\n\nSince dynamic information is automatically updated based on sensor data, it is susceptible to faults and errors, similar to those described in section \\ref{sec:incorrMeasureInServData}. In addition, problems may occur even in the process of collecting and transmitting data between AIS stations, as noted by \\citet{weng2020exploring}. The AIS signal can also be influenced by external factors, such as weather conditions and Earth's magnetic field, due to their interference with the very high frequency (VHF) equipment. Therefore, some of the AIS messages are lost or get mixed up. Moreover, the receiving station has a short time slot during which the data must be received, and in regions of heavy traffic, it may fail to receive the data from all the ships in that time. In some cases, small ships deliver inaccurate information due to incorrectly calibrated transmitters, as shown by \\citet{weng2020exploring}. 
In a case study, \\citet{harati2007automatic} observed that 2\\% of the MMSI (Maritime Mobile Service Identity) information was incorrect and 30\\% of the ships were not properly marked with the correct navigation status. In the case of ship dimensions, about 18\\% of the information was found to be inaccurate. Therefore, before using AIS raw data for ship performance analysis, it is necessary to check key parameters such as GPS position, speed, and course, and the data identified as incorrect must be fixed.\n\n\n\n\\subsubsection{Irrational Speed Data}\nThe GPS speed (or speed-over-ground) measurements from AIS data may contain samples that show a sudden jump compared to adjacent samples or an excessively higher or lower value than the normal operating range. This type of inaccurate data can be identified through comparison with the location and speed data of adjacent samples. The distance covered by the ship at the corresponding speed during the time between the two adjacent AIS messages is calculated, and the distance between the actual two coordinates is calculated using the Haversine formula (given by equation \\ref{eq:havsineDistance}) to compare the two values. If the difference between the two values is negligible, the GPS speed can be said to be normal, but if not, it is recommended to replace it with the GPS speed value of an adjacent sample. It should be noted that if the time difference between the samples is too short, the deviation of the distance calculated through this method may be large. In such a case, it is necessary to consider the average trend for several samples. 
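The described comparison can be sketched as follows: derive an implied speed from the haversine distance between consecutive AIS positions and compare it with the reported speed-over-ground (the coordinates, time gap, and tolerance below are illustrative):

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two coordinates given
    in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def implied_speed_knots(lat1, lon1, t1_s, lat2, lon2, t2_s):
    """Speed implied by two consecutive position fixes, in knots."""
    dist_m = haversine_m(lat1, lon1, lat2, lon2)
    return dist_m / (t2_s - t1_s) / 0.5144  # m/s -> knots

# Two fixes 10 minutes apart, ~3.1 km along a meridian -> ~10 knots.
v = implied_speed_knots(57.70, 11.90, 0, 57.7278, 11.90, 600)
plausible = abs(v - 10.0) < 2.0  # compare with a reported SOG of 10 kn
```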
If there are no valid samples nearby or the GPS coordinate data is problematic, one can refer to the normal service speed according to the ship type, as shown in table \\ref{tab:vParams}, or, if available, a more specific method such as a normalcy box (\\citet{rhodes2005maritime,tu2017exploiting}), which defines the speed range of ships according to the geographic location, may be applied.\n\n\\begin{equation}\\label{eq:havsineDistance}\n{D = 2r\\sin^{-1} \\left(\\sqrt{\\sin^{2}\\left(\\frac{y_{i+1}-y_{i}}{2}\\right)+\\cos{\\left(y_i\\right)}\\cos{\\left(y_{i+1}\\right)}\\sin^{2}\\left(\\frac{x_{i+1}-x_{i}}{2}\\right)}\\right)}\n\\end{equation}\n\nwhere $D$ is the distance between two coordinates ($x_i$, $y_i$) and ($x_{i+1}$, $y_{i+1}$), $r$ is the radius of Earth, and ($x_i$, $y_i$) are the longitude and latitude at timestamp $i$.\n\n\\begin{table}[ht]\n\\caption{Typical service speed range of different ship types, given by \\citet{solutions2018basic}.} \\label{tab:vParams}\n\\centering\n\\begin{tabular}{l|l|l}\n\\hline\n\\multicolumn{1}{c|}{\\textbf{Category}} & \\multicolumn{1}{c|}{\\textbf{Type}} & \\multicolumn{1}{c}{\\textbf{Service speed (knot)}}\\\\\n\\hline\nTanker & Crude oil carrier & 13-17\\\\\n & Gas tanker/LNG carrier & 16-20\\\\\n & Product & 13-16\\\\\n & Chemical & 15-18\\\\\n\\hline\nBulk carrier & Ore carrier & 14-15\\\\\n & Regular & 12-15\\\\\n\\hline\nContainer & Line carrier & 20-23\\\\\n & Feeder & 18-21\\\\\n\\hline\nGeneral cargo & General cargo & 14-20\\\\\n & Coaster & 13-16\\\\\n\\hline\nRoll-on/roll-off cargo & Ro-Ro/Ro-Pax & 18-23\\\\\n\\hline\nPassenger ship & Cruise ship & 20-23\\\\\n & Ferry & 16-23\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsubsection{Uncertainty due to Human Error}\nAIS data, excluding dynamic information, is not automatically updated by the sensors, but it is logged by the ship's crew manually, so there is a possibility of human error. 
This includes information such as the draft, navigation status, destination, and estimated time of arrival (ETA) of the ship. Although it is difficult to clearly distinguish the incorrectly entered information, it is possible to indirectly determine whether the manual input values have been updated using the automatically logged dynamic information. Each number in the navigation status represents a ship activity such as `under way using engine (0)', `at anchorage (1)', and `moored (5)'. If this field is being updated normally, it should be `0' if the ship is in-trip and `5' if it is at berth. If the navigation status of the collected AIS data is `1' or `5' above a certain GPS speed (or speed-over-ground), or if the status is set to `0' even when the speed is 0 and the location is within a port, the AIS data has not been updated on time, and the other manually entered information should also be questioned.\n\n\\subsection{Noon Report Data}\nShips of more than 500 gross tons engaged in international navigation are required to send a noon report to the company, which briefly records what happened on the ship from the previous noon to the present noon. The noon report must basically contain sufficient information regarding the location, course, speed, and internal and external conditions affecting the vessel's voyage. Additionally, the shipping company collects information related to fuel consumption and remaining fuel onboard, propeller slip, average RPM, etc. as needed. Such information is often used as a ship's management tool and reference data, such as for monitoring and evaluating the ship's performance, calculating energy efficiency operating indicators, and obtaining fuel and freshwater order information. Despite its customary use, the standardized information in the noon reports may not be sufficient to accurately assess the performance of the ship, due to several problems discussed as follows. This information is based on the average values from noon to noon. 
For an accurate ship performance analysis, higher frequency samples and additional data may be recommended.\n\n\\subsubsection{Uncertainties due to Averaging Measurements \\& Human Error} \\label{sec:noonReportsAvgProb}\nBasically, the information reported through the noon reports is based on the measurement values of the onboard sensors. Therefore, it may also involve the problems of inherently faulty sensors and incorrect measurements, as discussed in section \\ref{sec:incorrMeasureInServData}. Apart from the problems caused by sensors, the noon report data may have problems caused by the use of 24-hour averaged values and human errors. The data collection interval is once a day and the average of the values recorded for 24 hours is reported; thus, significant inaccuracies may be included in the data. \\citet{aldous2015uncertainty} performed a sensitivity analysis to assess the uncertainty due to the input data for ship performance analysis using continuously recorded in-service data and noon reports. It was observed here that the uncertainty of the outcome was significantly sensitive to the number of samples in the dataset. In other words, such uncertainty can be mitigated through the use of data representing longer time-series, data collection with higher frequency, and data processing. These results were also confirmed by \\citet{park2017comparative} and \\citet{themelis2018comparative}. \\citet{park2017comparative} demonstrated in a case study that the power consumption reported in the noon reports and recorded by the sensors differed by 6.2\\% and 17.8\\% in ballast and laden voyages, respectively. \n\n\nWhen averaged values over a long time period are used, as in the case of noon reports, the variations due to acceleration/deceleration and maneuvering cannot be captured. 
In particular, in the case of ships that sail relatively short voyages, such as feeder ships and ferries, inappropriate data for performance analysis may be provided due to frequent changes in the operational state. In the case of information regarding the weather and sea states, the information generally corresponds to the condition right before the noon report is sent from the ship; therefore, it is not easy to account for the changes in the performance of the ship due to the variation of weather conditions during the last 24 hours. In general, the information to be logged in the noon report is read from onboard sensors and noted by a person. Thus, the time at which the values are read from the sensors every day may differ, and different sensors may be used to log values for the same field. In addition, there may be cases when the observed value is incorrectly entered into the noon report. Thus, if the process of preparing the noon reports is not automated, there would always be a possibility of human errors in the data. \n\n\\section{Results: Data Processing Framework} \\label{sec:results}\n\nThe results here are presented in the form of the developed data processing framework, which can be used to process raw data obtained from one of the above-mentioned data sources (in section \\ref{sec:dataSources}) for ship performance analysis. The data processing framework is designed to resolve most of the problems cited in the above section. Figure \\ref{fig:flowDiag} shows the flow diagram for the data processing framework. The following sections briefly explain the consecutive processing steps of the given flow diagram. 
The user may not be able to carry out each step due to the unavailability of some information or features in the dataset; for example, without GPS data (latitude, longitude, and timestamp variables), it may not be possible to interpolate weather hindcast data. In such a case, it is recommended to skip the corresponding step and continue with the next one. \n\nThe data processing framework has been outlined in such a manner that, after being implemented, it can be executed in a semi-automatic manner, i.e., requiring limited intervention from the user. The semi-autonomous nature of the framework would also result in fast data processing, which can be important for very large datasets. The careful implementation of the framework in executable code is also quite important to obtain such a semi-automatic and fast data processing workflow. Therefore, it is recommended to adopt best practices and optimized algorithms for each individual processing step according to the programming language in use. On another note, the reliability of the data processing activity is also quite critical to obtain good results. Therefore, it is important to carry out the validation of the work done in each processing step by creating visualizations (or plots) and inspecting them for any undesired errors. The usual practice adopted here, while processing the data using the framework, is to create several such visualizations, like time-series plots of data variables in a trip-wise manner (explained later in section \\ref{sec:divideIntoTrips}), at the end of each processing step and then inspecting them to validate the outcome. 
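The skip-and-continue execution just described can be sketched as a thin driver that chains the steps and calls a validation hook (e.g., plotting) after each one. All step functions below are placeholders standing in for the processing steps of the framework, not actual implementations:

```python
def ensure_uniform_time_steps(data):
    # Placeholder: insert empty rows for missing timestamps.
    return data

def divide_into_trips(data):
    # Placeholder: enumerate port-to-port trips.
    return data

def interpolate_hindcast(data):
    # Placeholder: requires GPS data; raises KeyError when it is missing.
    _ = data["latitude"], data["longitude"], data["timestamp"]
    return data

def derive_new_features(data):
    # Placeholder: e.g., relative wind components, encounter frequency.
    return data

def run_framework(data, steps, validate=lambda name, d: None):
    """Run steps in order, skipping any whose inputs are unavailable,
    and call a validation hook (e.g., plotting) after each step."""
    for name, step in steps:
        try:
            data = step(data)
        except KeyError:
            continue  # e.g., no GPS data -> skip hindcast interpolation
        validate(name, data)
    return data

steps = [
    ("uniform time steps", ensure_uniform_time_steps),
    ("divide into trips", divide_into_trips),
    ("interpolate hindcast", interpolate_hindcast),
    ("derive features", derive_new_features),
]
# A dataset without GPS columns: the hindcast step is skipped.
result = run_framework({"shaft_rpm": [80, 81]}, steps)
```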
\n\n\\begin{figure}\n\\centering\n\n\\begin{tikzpicture}[font=\\small,thick, node distance = 0.35cm]\n\n\\node[draw,\n rounded rectangle,\n minimum width = 2.5cm,\n minimum height = 1cm\n] (block1) {Raw Data};\n\n\\node[draw,\n below=of block1,\n minimum width=3.5cm,\n minimum height=1cm,\n align=center\n] (block2) {Ensure Uniform \\\\ Time Steps};\n\n\\node[draw,\n below=of block2,\n minimum width=3.5cm,\n minimum height=1cm\n] (block3) {Divide into Trips};\n\n\\node[draw,\n below=of block3,\n minimum width=3.5cm,\n minimum height=1cm,\n align=center\n] (block4) {Interpolate Hindcast \\\\ (Using GPS Data)};\n\n\\node[draw,\n trapezium, \n trapezium left angle = 65,\n trapezium right angle = 115,\n trapezium stretches,\n left=of block4,\n minimum width=3.5cm,\n minimum height=1cm\n] (block5) {Weather Hindcast};\n\n\\node[draw,\n below=of block4,\n minimum width=3.5cm,\n minimum height=1cm\n] (block6) {Derive New Features};\n\n\\node[draw,\n diamond,\n right=of block6,\n minimum width=2.5cm,\n inner sep=1,\n align=center\n] (block17) {Interpolation \\\\ Error?};\n\n\\node[draw,\n below=of block6,\n minimum width=3.5cm,\n minimum height=1cm\n] (block7) {Validation Checks};\n\n\\node[draw,\n diamond,\n below=of block7,\n minimum width=2.5cm,\n inner sep=1,\n align=center\n] (block8) {Data Processing \\\\ Errors Detected?};\n\n\\node[coordinate,right=1.8cm of block8] (block9) {};\n\\node[coordinate,right=1.6cm of block4] (block10) {};\n\n\\node[draw,\n below=of block8,\n minimum width=3.5cm,\n minimum height=1cm\n] (block11) {Fix Draft \\& Trim};\n\n\\node[draw,\n below=of block11,\n minimum width=3.5cm,\n minimum height=1cm,\n align=center\n] (block12) {Calculate Hydrostatics \\\\ (Displacement, WSA, etc.)};\n\n\\node[draw,\n trapezium, \n trapezium left angle = 65,\n trapezium right angle = 115,\n trapezium stretches,\n left=of block12,\n minimum width=3.5cm,\n minimum height=1cm\n] (block15) {Ship Particulars};\n\n\\node[draw,\n below=of block12,\n minimum 
width=3.5cm,\n minimum height=1cm,\n align=center\n] (block13) {Calculate Resistance \\\\ Components};\n\n\\node[draw,\n below=of block13,\n minimum width=3.5cm,\n minimum height=1cm,\n align=center\n] (block16) {Data Cleaning \\& \\\\ Outlier Detection};\n\n\\node[draw,\n rounded rectangle,\n below=of block16,\n minimum width = 2.5cm,\n minimum height = 1cm,\n inner sep=0.25cm\n] (block14) {Processed Data};\n\n\\draw[-latex] (block1) edge (block2)\n (block2) edge (block3)\n (block3) edge (block4)\n (block4) edge (block6)\n (block6) edge (block7)\n (block7) edge (block8)\n (block8) edge node[anchor=east,pos=0.25,inner sep=2.5]{No} (block11)\n (block11) edge (block12)\n (block12) edge (block13)\n (block13) edge (block16)\n (block16) edge (block14);\n\n\\draw[-latex] (block5) edge (block4);\n\\draw[-latex] (block15) edge (block12);\n\n\\draw[-latex] (block8) -| (block9) node[anchor=south,pos=0.1,inner sep=2.5]{Yes}\n (block9) -| (block17);\n\n\\draw[-latex] (block17) |- (block10) \n (block10) |- (block4) node[anchor=south,pos=0.1,inner sep=2.5]{Yes};\n\n\\draw[-latex] (block17) -- (block6) node[anchor=south,pos=0.4,inner sep=2.5]{No};\n\n\\end{tikzpicture}\n\\caption{Data processing framework flow diagram.} \\label{fig:flowDiag}\n\\end{figure}\n\n\\subsection{Ensure Uniform Time Steps}\n\nEnsuring uniform and evenly-spaced samples would not only make it easier to apply time-gradient-based data processing or analysis steps but would also help avoid any misunderstanding while visualizing the data, by clearly showing a gap in the time-series plots (when plotted against sample numbers) and removing any abrupt jumps in the data values. Depending on the data acquisition (DAQ) system, the in-service data recorded onboard a ship is generally recorded with a uniform and evenly spaced sampling interval. Nevertheless, it is observed that the extracted sub-dataset from the main database may contain several missing time steps (or timestamps). 
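With pandas, such missing timestamps can be detected from the gradient of the time vector and filled with empty rows by reindexing onto a uniform grid; a sketch assuming a nominal 15-minute interval (all values illustrative):

```python
import pandas as pd

# Nominal 15-minute interval with one missing timestamp (00:30).
idx = pd.to_datetime(["2021-01-01 00:00", "2021-01-01 00:15",
                      "2021-01-01 00:45"])
df = pd.DataFrame({"shaft_rpm": [80.0, 81.0, 79.0]}, index=idx)

# Detect gaps via the gradient (difference) of the timestamps...
gaps = df.index.to_series().diff() > pd.Timedelta("15min")

# ...and insert empty (NaN) rows by reindexing onto a uniform,
# sorted grid of timestamps.
full_idx = pd.date_range(df.index[0], df.index[-1], freq="15min")
df_uniform = df.reindex(full_idx).sort_index()
```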
In such a case, it is recommended to check for such missing timestamps by simply calculating the gradient of the timestamps, and for each missing timestamp, just add an empty row consisting of only the missing timestamp value. Finally, the dataset should be sorted according to the timestamps, resulting in a uniform and evenly-spaced list of samples. \n\nA similar procedure can be adopted for a noon report dataset. The noon reports are generally recorded every 24 hours, but it may sometimes be more or less than 24 hours if the vessel's local time zone is adjusted, especially on the day of arrival or departure. The same procedure may not be feasible in the case of AIS data, as the samples here are sporadically distributed in general. Here, the samples are collected at different frequencies depending on the ship's moving state, surrounding environment, traffic, and the type of AIS receiving station (land-based or satellite). It is observed here that the data is collected in short and continuous sections of the time-series, leaving some large gaps between samples, as shown in figure \\ref{fig:resampleSOG}. Here, it is recommended to first resample the short and continuous sections of AIS data to a uniform sampling interval through data resampling techniques, i.e., up-sampling or down-sampling (as demonstrated by \\citet{virtanen2020scipy}), and then fill the remaining large gaps with empty rows.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{Figures/resample.png}\n\\caption{Down-sampling the collected AIS data to a 15-minute interval.} \\label{fig:resampleSOG}\n\\end{figure}\n\n\\subsection{Divide Into Trips} \\label{sec:divideIntoTrips}\n\nUsing conventional tools, data visualization becomes a challenge if the number of samples in the dataset is enormously large. It may simply not be practical to plot the whole time-series in a single plot. 
Moreover, dividing the time-series into individual trips helps discretize the time-series into sensible sections which may be treated individually for further data processing and analysis. Plotting an individual trip would also give a complete overview of a port-to-port journey of the ship. Dividing the data into trips and at-berth legs would also make further data processing computationally less expensive, as it may be possible to ignore a large number of samples (for further steps) where the ship is not in a trip. For such samples, it may not be necessary to interpolate hindcast, calculate hydrostatics, calculate resistance components, etc. Lastly, identifying individual trips would also make the draft and trim correction step easier.\n\nDividing data into trips is substantially easier for noon reports and AIS data, as they are generally supplied with a source and/or destination port name. In the case of in-service data, it is possible that no such information is available. In such a case, if the GPS data (latitudes and longitudes) is available, it may be possible to simply plot the samples on the world map and obtain individual trips by looking at the port calls. Alternatively, if the in-service data is supplied with a `State' variable\\footnote{Generally available for ships equipped with Marorka systems (www.marorka.com).} (mentioned by \\citet{Gupta2019}), indicating the propulsive state of the ship, like `Sea Passage', `At Berth', `Maneuvering', etc., it is recommended to find the continuous legs of the `At Berth' state and enumerate the gaps in these legs with trip numbers, containing the rest of the states, as shown in figure \\ref{fig:splitTSviaState}. Otherwise, it is recommended to use the shaft rpm and GPS speed (or speed-over-ground) time-series to identify the start and end of each port-to-port trip. Here, a threshold value can be adopted for the shaft rpm and GPS speed.
All the samples above these threshold values (either or both) are considered to be in-trip samples, as shown in figure \\ref{fig:splitTS}. Thus, continuous legs of such in-trip samples can simply be identified and enumerated. It may also be possible to append a few samples before and after each of these identified trips to obtain a proper trip, starting from zero and ending at zero. Such a process is designed keeping in mind the noise in the shaft rpm and GPS speed variables when the ship is actually static. Finally, if the GPS data is available, further adjustments can be made by looking at the port calls on the world map plotted with the GPS data.\n\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/Split_TS_J3.png}\n \\caption{Splitting time-series into trips using the `State' variable.} \\label{fig:splitTSviaState}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/Static_Indices_J3.png}\n \\caption{Splitting time-series into trips using threshold values (indicated by dashed red lines) for shaft rpm (10 rpm) and GPS speed (3 knots) variables.} \\label{fig:splitTS}\n \\end{subfigure}\n \\caption{Splitting time-series into trips.}\n\\end{figure}\n\n\\subsection{Interpolate Hindcast \\& GPS Position Correction} \\label{sec:interpolateHindcast}\n\nEven if the raw data contains information regarding the state of the weather for each data sample, it may be a good idea to interpolate weather hindcast (or metocean) data available from one of the well-established sources. The interpolated hindcast data would not only provide a quantitative measure of the weather conditions (and, consequently, the environmental loads) experienced by the ship, but it would also help carry out some important validation checks (discussed later in section \\ref{sec:resultsValChecks}).
In order to interpolate hindcast data, the information regarding the location (latitude and longitude) and recording timestamp must be available in the ship's dataset. For ship performance analysis, it should be ensured that, at a minimum, information regarding the three main environmental load factors, i.e., wind, waves, and sea currents, is gathered from the weather hindcast sources. For a more detailed analysis, it may also be a good idea to obtain additional variables, like sea water temperature (both at the surface and its gradient along the depth of the ship), salinity, etc.\n\nBefore interpolating the weather hindcast data to the ship's location and timestamps, it is recommended to ensure that the available GPS (or navigation) data is validated and corrected (if possible) for any errors. If the GPS data is inaccurate, the weather information is obtained at the wrong location, resulting in incorrect values for further analysis. For instance, the ship's original trajectory obtained from the GPS data, presented in figure \\ref{fig:gps_outlier}, shows that the ship proceeds in a certain direction while occasionally jumping to an off-route location. The ship, of course, may have gone off-route as shown here, but referring to the GPS speed and heading of the ship at the corresponding time, shown in figure \\ref{fig:gps_condition}, it is obvious that the navigation data is incorrect. Here, such an irrational position change can be detected through the two-stage steady-state (or stationarity) filter suggested by \\citet{Gupta2021}, based on the method developed by \\citet{Dalheim2020}. The first stage of the filter uses a sliding window to remove unsteady samples by performing a t-test on the slope of the data values, while the second stage performs an additional gradient check on the samples failing the first stage in order to retain any misidentified samples.
The `irrational position' in figure \\ref{fig:gps_outlier} shows the coordinates identified as unsteady when the above two-stage filter is applied to the longitude and latitude time-series. The filtered trajectory is further obtained after removing the samples with `irrational position' from the original data. \n\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/GPS_outlier.png}\n \\caption{Original trajectory and filtered trajectory with irrational GPS position.} \\label{fig:gps_outlier}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/GPS_condition.png}\n \\caption{Trends of GPS speed, heading, and position of the ship over the corresponding period.} \\label{fig:gps_condition}\n \\end{subfigure}\n \\caption{GPS position cleaning using the steady-state detection algorithm.}\n\\end{figure}\n\nThe hindcast data sources generally allow downloading a subset of the variables, timestamps, and a sub-grid of latitudes and longitudes, i.e., the geographical location. Depending on the hindcast source, the datasets can be downloaded manually (by filling a form), using an automated API script, or even by directly accessing their FTP servers. It may also be possible to select the temporal and spatial resolution of the variables being downloaded. In some cases, the hindcast web servers allow the users to send a single query, in terms of location, timestamp, and list of variables, to extract the required data for an individual sample. But every query received by these servers is generally queued for processing, causing substantially long waiting times, as the servers face a good amount of traffic from all over the world. Thus, it is recommended to simply download the required subset of data to a local machine for faster interpolation.
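The spatial and temporal extent of the subset to download can be computed directly from the ship's dataset. The following is a minimal sketch (with hypothetical column names, assuming the ship data is held in a pandas DataFrame with latitude, longitude, and timestamp columns) of how a padded bounding box and time range for such a download query could be derived:

```python
import pandas as pd

def hindcast_subset_bounds(df, pad_deg=1.0, pad_hours=6):
    """Compute a padded lat/lon bounding box and time range covering
    the ship's trajectory, for use in a hindcast download query."""
    return {
        "lat_min": df["latitude"].min() - pad_deg,
        "lat_max": df["latitude"].max() + pad_deg,
        "lon_min": df["longitude"].min() - pad_deg,
        "lon_max": df["longitude"].max() + pad_deg,
        "t_start": df["timestamp"].min() - pd.Timedelta(hours=pad_hours),
        "t_end": df["timestamp"].max() + pd.Timedelta(hours=pad_hours),
    }

# Toy trajectory for illustration
df = pd.DataFrame({
    "latitude": [57.0, 57.5, 58.1],
    "longitude": [10.2, 11.0, 11.8],
    "timestamp": pd.to_datetime(["2020-01-01 00:00",
                                 "2020-01-01 06:00",
                                 "2020-01-01 12:00"]),
})
print(hindcast_subset_bounds(df))
```

The padding ensures the downloaded grid fully encloses the trajectory, so the spatial and temporal interpolation later never has to extrapolate at the edges.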
\n\nOnce the hindcast data files are available offline, the main task at hand is to understand the cryptic (but highly efficient) data packaging format. Nowadays, the two most popular formats for such data files are GRIdded Binary data (GRIB) and NetCDF. GRIB (available as GRIB1 or GRIB2) is the international standard accepted by the World Meteorological Organization (WMO), but due to some compatibility issues with Windows operating systems, it may be preferable to use the NetCDF format.\n\nFinally, a step-by-step interpolation has to be carried out for each data sample from the ship's dataset. Algorithm \\ref{algo:hindcastInterp} shows a simple procedure for an n-th order (in time) interpolation scheme. Here, the spatial and temporal interpolation is performed in steps \\ref{algoStep:spatialInterp} and \\ref{algoStep:temporalInterp}, respectively. For a simple and reliable procedure, it is recommended to perform the spatial interpolation using a grid of latitudes and longitudes around the ship's location, after fitting a linear or non-linear 2D surface over the hindcast grid. It may be best to use a linear surface here as, firstly, the hindcast data may not be so accurate that performing a higher order interpolation would provide any better estimates, and secondly, in some cases, a higher order interpolation may result in highly inaccurate estimates due to the waviness of the over-fitted non-linear surface. Similar arguments can be given in the case of temporal interpolation, and therefore, a linear interpolation in time can also be considered acceptable. The advantage of using the given algorithm is that the interpolation steps here can be easily validated by plotting contours (for spatial interpolation) and time-series (for temporal interpolation).
\n\n\\begin{algorithm}\n\\caption{A simple algorithm for n-th order interpolation of weather hindcast data variables.}\\label{algo:hindcastInterp}\n\\begin{algorithmic}[1]\n\n\\State $wData \\gets $ weather hindcast data\n\\State $x \\gets $ data variables to interpolate from hindcast\n\\State $wT \\gets $ timestamps in $wData$\n\n\\ForAll{timestamps in ship's dataset}\n \n \\State $t \\gets $ current ship timestamp\n \\State $loc \\gets $ current ship location (latitude \\& longitude)\n \\State $i \\gets n+1$ indices of $wT$ around $t$\n \n \\ForAll{$x$}\n \\ForAll{$i$}\n \\State $x[i] \\gets $ 2D spatial interpolation at $loc$ using $wData[x][i, :, :]$ \\label{algoStep:spatialInterp}\n \\EndFor\n \\State $X \\gets $ n-th order temporal interpolation at $t$ using $x[i]$ \\label{algoStep:temporalInterp}\n \\EndFor\n\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nAn important feature of hindcast datasets is masking of invalid values. For instance, the significant wave height should only be predicted by the hindcast model for the grid nodes which fall in the sea; requesting the value of such a variable on land should result in an invalid value. Such invalid values (or nodes) are by default masked in the downloaded hindcast data files, probably for efficient storage of the data. These masked nodes may be filled with zeros before carrying out the spatial interpolation in step \\ref{algoStep:spatialInterp}, as one or more of these nodes may be contributing to the interpolation. Alternatively, if a particular masked node is contributing to the interpolation, it can be set to the mean of the other nodes surrounding the point of interpolation, as suggested by \\citet{Ejdfors2019}. It is argued by \\citet{Ejdfors2019} that this would help avoid artificially low (zero) values during the interpolation, but if the grid resolution is fine enough, it is expected that the calculated mean (of unmasked surrounding nodes) would also not be much higher than zero.
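As a concrete (and simplified) counterpart to the algorithm above, the following sketch implements a linear (i.e., first-order in time) version of the interpolation using standard scientific Python tools. The variable names and array layout are illustrative only, assuming the hindcast variable is stored as a (time, latitude, longitude) array with the masked nodes already filled with zeros:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def interpolate_hindcast_linear(w_data, w_times, lats, lons, t, loc):
    """Spatio-temporal interpolation of one hindcast variable.

    w_data : (time, lat, lon) array, masked nodes pre-filled with zeros
    w_times: 1D array of hindcast times (as floats, e.g. hours)
    t, loc : ship timestamp (float) and (latitude, longitude) position
    """
    # Indices of the two hindcast time steps bracketing t (n = 1)
    i1 = int(np.clip(np.searchsorted(w_times, t), 1, len(w_times) - 1))
    i0 = i1 - 1
    # Linear 2D spatial interpolation at the ship's location,
    # once per bracketing hindcast time step
    vals = [float(RegularGridInterpolator((lats, lons), w_data[i])([loc])[0])
            for i in (i0, i1)]
    # Linear temporal interpolation between the two spatial estimates
    return float(np.interp(t, [w_times[i0], w_times[i1]], vals))
```

The two interpolation stages mirror the spatial and temporal steps of the algorithm, and each stage can be validated separately by plotting contours (spatial) or time-series (temporal), as suggested in the text.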
\n\n\\subsection{Derive New Features}\n\nInterpolating the weather hindcast variables to the ship's location at a given time would provide the hindcast variables in the global (or the hindcast model's) reference frame. For further analysis, it may be appropriate to translate these variables to the ship's frame of reference, and furthermore, it may be desired to calculate some new variables which could be more relevant for the analysis or could help validate the assimilated (ship and hindcast) dataset. The wind and sea current variables, obtained from the hindcast source and the ship's dataset, can be resolved into the longitudinal and transverse speed components for validation and further analysis. Unfortunately, the wave load variables cannot be resolved in a similar manner, but the mean wave direction should be translated into the relative mean wave direction (relative to the ship's heading or course). \n\n\\subsection{Validation Checks} \\label{sec:resultsValChecks}\n\nAlthough it is recommended to validate each processing step by visualizing (or plotting) the task being done, it may be a good idea to take an intermediate pause and perform all types of possible validation checks. These validation checks would not only help assess the dataset from a reliability point of view but can also be used to understand the correlation between various features. The validation checks can be done top-down, starting from the most critical feature down to the least critical one. As explained in section \\ref{sec:bestPractices}, the shaft power measurements can be validated against the shaft rpm and shaft torque measurements, if these are available; otherwise, just plotting the shaft rpm against the shaft power can also provide a good insight into the quality of the data. For a better assessment, it is suggested to visualize the shaft rpm vs shaft power overlaid with the engine operational envelope and propeller curves, as presented by \\citet{Liu2020} (in figure 11).
Any samples falling outside the shaft power overload envelope (especially at high shaft rpm) should be removed from the analysis, as they may contain measurement errors. It may also be possible to make corrections if the shaft power data seems to be shifted (up or down) with respect to the propeller curves due to sensor bias.\n\nThe quality of the speed-through-water measurements can be assessed by validating them against an estimate, obtained as the difference between the speed-over-ground and the longitudinal current speed. Here, it should be kept in mind that the two values may not be a very good match due to several problems cited in section \\ref{sec:incorrMeasureInServData}. Visualizing the speed-through-water vs shaft power along with all the available estimates of the speed-power calm-water curve is also an important validation step (shown in figure \\ref{fig:speedVsPowerWSPCurves}). Here, the majority of measurement data should accumulate around these curves. In the case of disparity between the curves, the curve obtained through the sea trial of the actual ship may take precedence. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{Figures/Log_speed_vs_Power_0_J3.png}\n\\caption{Speed-through-water (log speed) vs shaft power with various estimates of the speed-power calm-water curves.} \\label{fig:speedVsPowerWSPCurves}\n\\end{figure}\n\nThe interpolated weather hindcast data variables must also be validated against the measurements taken onboard the ship. This is quite critical as the sign and direction notations assumed by the hindcast models and the ship's sensors (or data acquisition system) are probably not the same, which may cause mistakes during the interpolation step. Moreover, most ships are generally equipped with anemometers that can measure the actual and relative wind speed and directions, and these two modes (actual or relative) can be switched through a simple manipulation by the crew onboard.
It is possible that this mode change may have occurred during the data recording duration, resulting in errors in the recorded data. In addition, there may be a difference between the reference height of the wind hindcast data and the vertical position of the installed anemometer, which may lead to somewhat different results even at the same location at sea. The wind speed at the reference height (${V_{WT}}_{ref}$) can be estimated from the anemometer-recorded wind speed ($V_{WT}$), assuming a wind speed profile, as follows (as recommended by \\citet{ITTC2017}):\n\n\\begin{equation}\\label{eq:referenceHeight}\n{V_{WT}}_{ref} = V_{WT}\\left(\\frac{Z_{ref}}{Z_{a}}\\right)^{\\frac{1}{9}}\n\\end{equation}\n\nwhere $Z_{ref}$ is the reference height above sea level and $Z_a$ is the height of the anemometer.\n\nFinally, these wind measurements can be translated into the longitudinal and transverse relative components. The obtained transverse relative wind speed can be validated against the transverse wind speed, obtained from the hindcast source, as they are basically the same. Similarly, the difference between the longitudinal relative wind speed and the speed-over-ground of the ship can be validated against the longitudinal wind speed measurements from the hindcast, as shown in figure \\ref{fig:longWindSpeedValidation}. In the case of time-averaged in-service data, the problem of faulty averaging of angular measurements when the measurement values are near 0 or 360 degrees (i.e., the angular limits), explained in section \\ref{sec:timeAvgProb}, must also be verified, and appropriate corrective measures should be taken. From figure \\ref{fig:longWindSpeedValidation}, it can be clearly seen that the time-averaging problem (in relative wind direction) causes the longitudinal wind speed (estimated using the ship data) to jump from positive to negative, resulting in a mismatch with the corresponding hindcast values.
In such a case, it is recommended to either fix these faulty measurements, which may be difficult as there is no proven way to do it, or just use the hindcast measurements for further analysis. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{Figures/LongWindSpeed_J3.png}\n\\caption{Validating longitudinal wind speed obtained using the ship data against the values obtained from the hindcast. The time-averaging problem with angular measurements around 0 or 360 degrees (explained in section \\ref{sec:timeAvgProb}) is clearly visible here.} \\label{fig:longWindSpeedValidation}\n\\end{figure}\n\nAs discussed in the case of noon reports in section \\ref{sec:noonReportsAvgProb}, weather information generally refers to the state of the weather at the time the report is logged, which is probably not the average state from noon to noon. Furthermore, the wind loads here are observed based on the Beaufort scale; therefore, the deviation may be somewhat large when converted to the velocity scale. In this case, it is recommended to consider the daily average values obtained from the weather hindcast data, over the travel region, rather than the noon report values.\n\n\\subsection{Data Processing Errors}\n\nThe validation step is very critical in finding out any processing mistakes or inherent problems with the dataset, as demonstrated in the previous section. Such problems or mistakes, if detected, must be corrected or amended before moving forward with the processing and analysis. The main mistakes found at this step are generally either interpolation mistakes or incorrect formulations of the newly derived features. These mistakes should be rectified accordingly, as shown in the flow diagram (figure \\ref{fig:flowDiag}).
\n\n\\subsection{Fix Draft \\& Trim} \\label{sec:fixDraft}\n\nThe draft measurements recorded onboard the ship are often found to be incorrect due to the Venturi effect, explained briefly in section \\ref{sec:incorrMeasureInServData}. The Venturi effect causes the draft measurements to drop to a lower value due to a non-zero negative dynamic pressure as soon as the ship develops a relative velocity with respect to the water around the hull. Thus, the simplest way to fix these incorrect measurements is to interpolate the draft during a voyage using the draft measured just before and after the voyage. Such a simple solution provides good results for a simple case where the draft of the ship basically remains unchanged during the voyage, except for the reduction of draft due to consumed fuel, as shown in figure \\ref{fig:simpleDraftCorr}.\n\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/Trip_014.png}\n \\caption{Simple draft correction.} \\label{fig:simpleDraftCorr}\n \\end{subfigure}\n \\begin{subfigure}{0.48\\linewidth}\n \\includegraphics[width=\\linewidth]{Figures/Trip_TS_033_Corr_J3.png}\n \\caption{Complex draft correction.} \\label{fig:complexDraftCorr}\n \\end{subfigure}\n \\caption{Correcting in-service measured draft.}\n\\end{figure}\n\nIn a more complex case where the draft of the ship is changed in the middle of the voyage while the ship is still moving, i.e., conducting ballasting operations or trim adjustments during transit, the simple draft interpolation would result in corrections which can be far from the actual draft of the vessel. As shown in figure \\ref{fig:complexDraftCorr}, the fore draft is seen to be dropping and the aft draft increasing in the middle of the voyage without much change in the vessel speed, indicating trim adjustments during transit. In this case, a more complex correction is applied after taking into account the change in draft during the transit.
Here, first of all, a draft change operation is identified (marked by green and red vertical lines in figure \\ref{fig:complexDraftCorr}); then, the difference between the measurements before and after the operation is calculated by taking an average over a number of samples. Finally, a ramp is created between the start of the draft change operation (green line) and the end of the operation (red line). The slope of the ramp is calculated using the difference between the draft measurements before and after the draft change operation. The draft change operation can either be identified manually, by looking at the time-series plots, or using the steady-state (or stationarity) filter developed by \\citet{Dalheim2020}.\n\nIn the case of AIS data, \\citet{bailey2008training} reported that 31\\% of the investigated AIS messages had obvious errors in the draft information. The draft information from AIS data generally corresponds to the condition of ships while arriving at or departing from the port, and changes due to fuel consumption and ballast adjustment onboard are rarely updated. Since the draft obtained from the AIS as well as noon reports has a long update cycle and is acquired by humans, it is practically difficult to precisely fix the draft values as in the case of in-service data. However, by comparing the obtained draft with a reference value, it may be possible to gauge whether the obtained draft is, in fact, correct. If the obtained draft excessively deviates from the reference, it may be possible to remove the corresponding data samples from further analysis or replace the obtained draft value with a more appropriate value. Table \\ref{tab:draftRatio} shows the results of investigating the average draft ratio, which is the ratio of the actual draft ($T_c$) to the design draft ($T_d$), for various ship types from 2013 to 2015 by \\citet{olmer2017greenhouse}. As summarized in the table, the draft ratio varies depending on the ship type and the voyage type.
By using these values as the above-mentioned reference, the draft obtained from the AIS data and noon reports can be roughly checked and corrected.\n\n\\begin{table}[ht]\n\\caption{Average draft ratio ($T_c/T_d$) for different ship types. $T_c$ = actual draft during a voyage; $T_d$ = design draft of the ship.} \\label{tab:draftRatio}\n\\centering\n\\begin{tabular}{l|c|c}\n\\hline\n\\multicolumn{1}{c|}{\\textbf{Ship types}} & \\multicolumn{1}{c|}{\\textbf{Ballast Voyage}} & \\multicolumn{1}{c}{\\textbf{Laden Voyage}}\\\\\n\\hline\nLiquefied gas tanker & 0.67 & 0.89\\\\\nChemical tanker & 0.66 & 0.88\\\\\nOil tanker & 0.60 & 0.89\\\\\nBulk carrier & 0.58 & 0.91\\\\\nGeneral cargo & 0.65 & 0.89\\\\\n\\hline\n\\multicolumn{3}{c}{\\textit{The following ship types do not generally have ballast-only voyages.}} \\\\\n\\hline\nContainer & \\multicolumn{2}{c}{0.82}\\\\\nRo-Ro & \\multicolumn{2}{c}{0.87}\\\\\nCruise & \\multicolumn{2}{c}{0.98}\\\\\nFerry pax & \\multicolumn{2}{c}{0.90}\\\\\nFerry ro-pax & \\multicolumn{2}{c}{0.93}\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Calculate Hydrostatics}\n\nDepending on the type of performance analysis, it may be necessary to have features like displacement, wetted surface area (WSA), etc. in the dataset, as they are more relevant from a hydrodynamic point of view. Moreover, most of the empirical or physics-based methods for resistance calculations (to be done in the next step) require these features. Unfortunately, these features cannot be directly recorded onboard the ship. But it is fairly convenient to estimate them using the ship's hydrostatic table or hull form (or offset table) for the corresponding mean draft and trim of each data sample. Here, it is recommended to use the corrected draft and trim values, obtained in the previous step. If the detailed hull form is not available, the wetted surface area can also be estimated using the empirical formulas shown in table \\ref{tab:wsaParams}.
The displacement at design draft, on the other hand, can be estimated using the ship particulars and typical range of block coefficient ($C_B$), presented in table \\ref{tab:cbParams}.\n\n\\begin{table}[ht]\n\\caption{Estimation formulas for wetted surface area of different ship types.} \\label{tab:wsaParams}\n\\centering\n\\begin{tabular}{l|l|l}\n\\hline\n\\multicolumn{1}{c|}{\\textbf{Category}} & \\multicolumn{1}{c|}{\\textbf{Formula}} & \\multicolumn{1}{c}{\\textbf{Reference}}\\\\\n\\hline\nTanker/Bulk carrier & $WSA = 0.99\\cdot(\\frac{\\nabla}{T}+1.9\\cdot L_{WL}\\cdot T)$ & \\citet{Kristensen2017} \\\\\nContainer & $WSA = 0.995\\cdot(\\frac{\\nabla}{T}+1.9\\cdot L_{WL}\\cdot T)$ & \\citet{Kristensen2017} \\\\\nOther (General) & $WSA = 1.025\\cdot(\\frac{\\nabla}{T}+1.7\\cdot L_{PP}\\cdot T)$ & \\citet{molland2011maritime} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[ht]\n\\caption{Typical block coefficient ($C_B$) range at design draft for different ship types, given by \\citet{solutions2018basic}.} \\label{tab:cbParams}\n\\centering\n\\begin{tabular}{l|l|c}\n\\hline\n\\multicolumn{1}{c|}{\\textbf{Category}} & \\multicolumn{1}{c|}{\\textbf{Type}} & \\multicolumn{1}{c}{\\textbf{Block coefficient ($C_B$)}}\\\\\n\\hline\nTanker & Crude oil carrier & 0.78-0.83\\\\\n & Gas tanker/LNG carrier & 0.65-0.75\\\\\n & Product & 0.75-0.80\\\\\n & Chemical & 0.70-0.78\\\\\n\\hline\nBulk carrier & Ore carrier & 0.80-0.85\\\\\n & Regular & 0.75-0.85\\\\\n\\hline\nContainer & Line carrier & 0.62-0.72\\\\\n & Feeder & 0.60-0.70\\\\\n\\hline\nGeneral cargo & General cargo/Coaster & 0.70-0.85\\\\\n\\hline\nRoll-on/roll-off cargo & Ro-Ro cargo & 0.55-0.70\\\\\n & Ro-pax & 0.50-0.70\\\\\n\\hline\nPassenger ship & Cruise ship & 0.60-0.70\\\\\n & Ferry & 0.50-0.70\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Calculate Resistance Components}\n\nThere are several components of the ship's total resistance, and there are several methods to estimate each of 
these components. The three main resistance components which generally constitute the majority of the ship's total resistance are calm-water, added wind, and added wave resistance. It is possible to further divide the calm-water resistance into sub-components, namely, skin friction and residual resistance. The total calm-water resistance can be calculated using one of the many well-known empirical methods, like Guldhammer and Harvald (\\citet{Guldhammer1970}), updated Guldhammer and Harvald (\\citet{Kristensen2017}), Hollenbach (\\citet{Hollenbach1998}), Holtrop and Mennen (\\citet{Holtrop1982}), etc. These empirical methods are developed using the data from numerous model test results of different types of ships, and each one has been shown to fit well for several different ship types, which makes choosing the right method for a given ship quite complicated. \n\nThe easiest way to choose the right calm-water resistance estimation method is to calculate the calm-water resistance from each of these methods and compare it with the corresponding data obtained for the given ship. The calm-water data for a given ship can be obtained from model tests, sea trials, or even by filtering the operational data, obtained from one of the sources discussed here (in section \\ref{sec:dataSources}), for near-calm-water conditions. The usual practice here is to use the sea trial data, as it is obtained and corrected for near-calm-water conditions and does not suffer from the scale effects seen in model test results. But the sea trials are sometimes conducted only at the high range of speed and at ballast displacement (as shown in figure \\ref{fig:speedVsPowerWSPCurves}).
Thus, it is recommended to use the near-calm-water filtered (and corrected) operational data to choose the right method so that a good fit can be ensured for a complete range of speed and displacement.\n\nAccording to \\citet{ITTC2017}, the increase in resistance due to wind loads can be obtained by applying one of the three suggested methods, namely, wind tunnel model tests, STA-JIP, and Fujiwara's method. If the wind tunnel model test results for the vessel are available, they may be considered the most accurate basis for estimating the added wind resistance. Otherwise, the database of wind resistance coefficients established by STA-JIP (\\citet{van2013new}) or the regression formula presented by \\citet{Fujiwara2005} is recommended. From the STA-JIP database, experimental values according to the specific ship type can be obtained, whereas Fujiwara's method is based on the regression analysis of data obtained from several wind tunnel model tests for different ship types. \n\nThe two main sets of parameters required to estimate the added wind resistance using any of the above three methods are the incident wind parameters and information regarding the area exposed to the wind. The incident wind parameters, i.e., relative wind speed and direction, can be obtained from onboard measurements or weather hindcast data. In the case of weather hindcast data, the relative wind measurements can be calculated from the hindcast values according to the formulation outlined by \\citet{ITTC2017} in section E.1, and in the case of onboard measurements, the relative wind measurements should be corrected for the vertical position of the anemometer according to the instructions given by \\citet{ITTC2017} in section E.2, also explained here in section \\ref{sec:resultsValChecks}.
The information regarding the area exposed to the wind can either be estimated using the general arrangement drawing of the ship or approximately obtained using a regression formula based on data from several ships, presented by \\citet{kitamura2017estimation}.\n\nThe added wave resistance ($R_{AW}$) can also be obtained in a similar manner using one of the several well-established methods for estimating $R_{AW}$. \\citet{ITTC2017} recommends conducting seakeeping model tests in regular waves to obtain $R_{AW}$ transfer functions, which can further be used to estimate $R_{AW}$ for the ship in irregular seas. To empirically obtain these transfer functions or $R_{AW}$ for a given ship, it is possible to use physics-based empirical methods like STAWAVE1 and STAWAVE2 (recommended by \\citet{ITTC2017}). STAWAVE1 is a simplified method for directly estimating $R_{AW}$ in head wave conditions only, and it requires limited input, including the ship's waterline length, breadth, and the significant wave height. STAWAVE2 is an advanced method to empirically estimate parametric $R_{AW}$ transfer functions for a ship. The method is developed using an extensive database of seakeeping model test results from numerous ships, but unfortunately, it only provides transfer functions for approximate head wave conditions (0 to $\\pm$45 degrees from bow). A method proposed by DTU (\\citet{Martinsen2016}; \\citet{Taskar2019}; \\citet{Taskar2021}) provides transfer functions for head to beam seas, i.e., 0 to $\\pm$90 degrees from bow. Finally, for all wave headings, it may be recommended to use the newly established method by \\citet{Liu2020}. There have been several studies to assess and compare the efficacy of each of these methods and several other methods, but no consistent guidelines are provided regarding their applicability.
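As an illustration of the wind part of these calculations, the sketch below evaluates the added wind resistance from the generic drag formulation $R_{AA} = \frac{1}{2}\rho_{air} C_{AA}(\psi) A_{XV} V_{rel}^2$ that underlies both the STA-JIP coefficient database and Fujiwara's regression. The coefficient curve used here is purely hypothetical; in practice it would be taken from wind tunnel tests, the STA-JIP database, or Fujiwara's method:

```python
import numpy as np

RHO_AIR = 1.225  # air density, kg/m^3

# Hypothetical wind resistance coefficient curve vs relative wind angle
# (0 deg = head wind); real values come from wind tunnel tests, the
# STA-JIP database, or Fujiwara's regression.
ANGLES_DEG = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
C_AA = np.array([0.85, 0.65, 0.10, -0.45, -0.70])

def added_wind_resistance(v_rel, angle_deg, area_front):
    """Added wind resistance [N] from the generic drag formulation:
    R_AA = 0.5 * rho_air * C_AA(angle) * A_front * V_rel^2,
    with the coefficient linearly interpolated from the tabulated curve."""
    c = np.interp(angle_deg, ANGLES_DEG, C_AA)
    return 0.5 * RHO_AIR * c * area_front * v_rel ** 2
```

Note that the coefficient becomes negative for following winds, so the "added" resistance can turn into a net propulsive contribution, which the sign convention above preserves.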
\n\n\n\subsection{Data Cleaning \& Outlier Detection}\n\nIt may be argued that the process of data cleaning and outlier detection should be carried out much earlier in the data processing framework, as proposed by \citet{Dalheim2020DataPrep}, but it should be noted that all the processing steps proposed above have to be performed only once for a given dataset, whereas data cleaning is done based on the features selected for further analysis. Since the same dataset can be used for several different analyses, which may use different sets of features, some part of the data cleaning has to be repeated before each analysis to obtain a clean dataset with as many data samples as possible. Moreover, the additional features acquired during the above listed processing steps may help determine more reliably whether a suspected sample is actually an outlier or not. \n\nNevertheless, it may be possible to reduce the workload of the above processing steps by performing some basic data cleaning before some of these steps. For instance, while calculating the resistance components for in-trip data samples, it is possible to filter out samples with invalid values for one or more of the ship data variables used to calculate these components, like speed-through-water, mean draft (or displacement), etc. This would reduce the number of samples for which the new feature has to be calculated. It should also be noted that even if such simple data cleaning (before each step) is not performed, these invalid samples would easily be filtered out in the present step. Thus, the reliability and efficacy of the data processing framework are not affected by performing the data cleaning and outlier detection step at the end.\n\nMost of the methods developed for ship performance monitoring assume that the ship is in a quasi-steady state for each data sample. 
The quasi-steady assumption indicates that the propulsive state of the ship remains more or less constant during the sample recording duration, i.e., the ship is neither accelerating nor decelerating. This is especially critical for the aforementioned time-averaged datasets, as the averaging duration can be substantially long, hiding the effects of accelerations and decelerations. Here, the two-stage steady-state filter, explained in section \ref{sec:interpolateHindcast}, can be applied to the shaft rpm time-series to remove the samples with accelerations and decelerations, resulting in quasi-steady samples. In tandem with the steady-state filter on the shaft rpm time-series, it may also be possible to use the steady-state filter, with relaxed settings, on the speed-over-ground time-series to filter out samples where the GPS speed (or speed-over-ground) signal suddenly drops or recovers from a dead state, resulting in measurement errors.\n\nAs discussed in section \ref{sec:outliers}, the outliers can be divided into two broad categories: (a) contextual outliers, and (b) correlation-defying outliers. The contextual outliers can be identified and resolved by the methods presented and demonstrated by \citet{Dalheim2020DataPrep}, and for correlation-defying outliers, methods like Principal Component Analysis (PCA) and autoencoders can be used. Figure \ref{fig:corrDefyingOutliers} shows the in-service data samples recorded onboard a ship. The data here has already been filtered for the quasi-steady assumption, explained above, and for contextual outliers, according to the methods suggested by \citet{Dalheim2020DataPrep}. Thus, the samples highlighted by red circles (around 6.4 MW shaft power in figure \ref{fig:corrDefyingOutliersSP}) can be classified as correlation-defying outliers. 
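A minimal sketch of how PCA can flag such correlation-defying outliers via reconstruction error is given below. Samples consistent with the dominant linear correlations project well onto the leading principal components, while faulty samples (e.g. speed-log dropouts) leave a large residual. The threshold and the synthetic speed-power data are illustrative assumptions, not values from the present dataset.

```python
import numpy as np

def pca_outliers(X, n_components=1, z_thresh=3.0):
    """Flag outliers whose PCA reconstruction error exceeds
    mean + z_thresh * std of the per-sample residual norms.
    Columns of X are standardized before the decomposition."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    # principal directions from the SVD of the standardized data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T
    resid = Xc - Xc @ P @ P.T
    err = np.linalg.norm(resid, axis=1)
    return err > err.mean() + z_thresh * err.std()

# strongly correlated speed/power samples plus one speed-sensor dropout
rng = np.random.default_rng(0)
stw = rng.uniform(8.0, 14.0, 200)
power = 3.5 * stw**3 + rng.normal(0.0, 200.0, 200)
X = np.column_stack([stw, power])
X[50, 0] = 0.0                     # stw drops to zero while power stays normal
print(np.flatnonzero(pca_outliers(X)))
```

The dropout sample sits far from the speed-power correlation line and therefore produces a reconstruction error well above the inlier residuals, so it is flagged even though its power value alone looks plausible.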
The time-series plot (shown in figure \ref{fig:corrDefyingOutliersTS}) clearly indicates that the detected outliers have faulty measurements for the speed-through-water (stw) and speed-over-ground (sog), defying the correlation between these variables and the rest. It is also quite surprising that the same fault occurs in both speed measurements at the same time, considering that they are probably obtained from different sensors. \n\n\begin{figure}[ht]\n\centering\n\begin{subfigure}[]{0.42\linewidth}\n\includegraphics[width=\linewidth]{Figures/stw_vs_power_J3.png}\n\caption{Log speed (or stw) vs shaft power.} \label{fig:corrDefyingOutliersSP}\n\end{subfigure}\n\begin{subfigure}[]{0.57\linewidth}\n\includegraphics[width=\linewidth]{Figures/Trip_TS_128_J3.png}\n\caption{Time-series.} \label{fig:corrDefyingOutliersTS}\n\end{subfigure}\n\caption{Correlation-defying outliers marked with red circles.} \label{fig:corrDefyingOutliers}\n\end{figure}\n\n\n\section{Conclusion} \label{sec:conclusion}\n\nThe quality of data is critical for estimating the performance of a ship. In this study, a streamlined semi-automatic data processing framework is developed for ship performance analysis. The data processing framework can be used to process data from several different sources, like onboard recorded in-service data, AIS data, and noon reports. These three data sources are discussed here in detail along with their inherent problems and associated examples. It is recommended to use the onboard recorded in-service data for ship performance monitoring over the other data sources, as it is considered more reliable due to its consistent and higher sampling rate. Moreover, the AIS data and noon reports lack some of the critical variables required for ship performance analysis, and they are also susceptible to human error, as some of the data variables recorded here are manually logged by the ship's crew. 
Nevertheless, all three data sources are known to have several problems and should be processed carefully before any further analysis. \n\nThe data processing framework, presented in the current work, is designed to address and resolve most of the problems found in the above three data sources. It is first recommended to divide the data into trips so that further processing can be performed in a more systematic manner. A simple logic to divide the data into individual trips is outlined here for the case where port call information is not available. The weather hindcast (metocean) data is considered important supplementary information, which can be used for data validation and for estimating the environmental loads experienced by the ship. A simple algorithm to effectively interpolate the hindcast data at a specific time and location of a ship is presented within the data processing framework. The problem of erroneous draft measurements, caused by the Venturi effect, is discussed in detail, and simple interpolation is recommended to fix these measurements. A more complex case, where the draft or trim is voluntarily adjusted during the voyage without reducing the vessel speed, is also presented. Such a case cannot be resolved with simple interpolation, and therefore, an alternative method is suggested for the same problem. \n\nChoosing the most suitable methods for estimating resistance components may also be critical for ship performance analysis. It is, therefore, recommended to carry out some validation checks to find the most suitable methods before adopting them into practice. Such validation checks should be done, wherever possible, using the data obtained from the ship while in service rather than just the sea trial or model test results. Data cleaning and outlier detection are also considered an important step for processing the data. 
Since cleaning the data requires selecting a subset of features relevant for the analysis, it is recommended to perform this as the last step of the data processing framework, and some part of it should be reiterated before carrying out a new type of analysis. The presented data processing framework can be systematically and efficiently adopted to process datasets for ship performance analysis. Moreover, the various data processing methods or steps mentioned here can also be used elsewhere to process time-series data from ships or similar sources, which can further be used for a variety of tasks.\n\n\section{Introduction} \n\label{sec:introduction}\n\nThe coupled interaction between free flow and porous medium flow forms an active area of research due to its appearance in a wide variety of applications. Examples include biomedical applications such as blood filtration, engineering situations such as air filters and PEM fuel cells, as well as environmental considerations such as the drying of soils. In all mentioned applications, it is essential to properly capture the mutual interaction between a free, possibly turbulent, flow and the creeping flow inside the porous medium. \n\nWe consider the simplified setting in which the free-flow regime is described by Stokes flow and we let Darcy's law govern the porous medium flow. Moreover, we only consider the case of stationary, single-phase flow and assume a sharp interface between the two flow regimes. These considerations are simplifications of a more general framework of models coupling free flow with porous medium flow. Such models have been the topic of a variety of scientific work in recent years, with focuses including mathematical analysis, discretization methods, and iterative solution techniques. 
Different formulations of the Stokes-Darcy problem have been analyzed in \cite{discacciati2004domain,layton2002coupling,gatica2011analysis}. Examples in the context of discretization methods include the use of Finite Volume methods \cite{iliev2004a, mosthaf2011a, rybak2015a, masson2016a, fetzer2017a, schneider2020coupling} and (Mixed) Finite Element methods, both in a coupled \cite{layton2002coupling,discacciati2009navier,riviere2005locally,gatica2009conforming} and in a unified \cite{armentano2019unified,karper2009unified} setting. Moreover, iterative methods for this problem are considered in e.g. \cite{discacciati2007robin,discacciati2005iterative,discacciati2018optimized,ganderderivation,galvis2007balancing,cao2011robin}. We refer the reader to the works \cite{discacciati2009navier,rybak2016mathematical,discacciati2004domain} and references therein for more comprehensive overviews of the results concerning the Stokes-Darcy model.\n\nIn order to distinguish this work from existing results, we formulate the following objective: \\\nThe goal of this work is to create an iterative numerical method that solves the stationary Stokes-Darcy problem with the following three properties:\n\begin{enumerate}\n\t\item \label{goal: mass conservation}\n\tThe solution retains \textbf{local mass conservation} after each iteration. \n\tSince mass balance is a physical conservation law, we emphasize its importance over all other constitutive relationships in the model. Hence, the first aim is to produce a solution that respects local mass conservation and to use iterations to improve the accuracy of the solution with respect to the remaining equations. Importantly, we aim to obtain a conservative flow field in the case that the iterative scheme is terminated before reaching convergence.\n\n\tWe present two main ideas to achieve this. First, we limit ourselves to discretization methods capable of ensuring local mass conservation within each flow regime. 
Secondly, we ensure that no mass is lost across the Stokes-Darcy interface by introducing a single variable describing this interfacial flux. Our contribution in this context is to pose and analyze the Stokes-Darcy problem using function spaces that ensure normal flux continuity (Section~\\ref{sub:functional_setting}), both in the continuous (Section~\\ref{sec:well_posedness}) and discretized (Section~\\ref{sec:discretization}) settings. Our approach is closely related to the ``global'' approach suggested in \\cite[Remark 2.3.2]{discacciati2004domain} which we further develop in a functional setting. We moreover note that our construction is, in a sense, dual to the more conventional approach in which a mortar variable representing the interface pressure is introduced, see e.g. \\cite{layton2002coupling,gatica2009conforming}.\n\t\n\t\\item\n\tThe performance of the iterative solution scheme is \\textbf{robust with respect to physical and mesh parameters}. In this respect, the first aim is to obtain sufficient accuracy of the solution within a given number of iterations that is robust with respect to given material parameters such as the permeability of the porous medium and the viscosity of the fluid. This will allow the scheme to handle wide ranges of material parameters that arise either from the physical problem or due to the chosen non-dimensionalization.\n\n\tRobustness with respect to mesh size is advantageous from a computational perspective. If the scheme reaches sufficient accuracy within a number of iterations on a coarse grid, then this robustness provides a prediction on the necessary computational time on refined grids. We note that the analysis in this work is restricted to shape-regular meshes, hence the typical mesh size $h$ becomes the only relevant mesh parameter.\n\n\tTo attain this goal, we pay special attention to the influence of the material and mesh parameters in the a priori analysis of the problem. 
We derive stability bounds of the solution in terms of functional norms weighted with the material parameters. One of the main contributions is thus the derivation of a properly weighted norm for the normal flux on the Stokes-Darcy interface, presented in equation \\eqref{eq: norm phi}. In turn, this norm is used to construct an optimal preconditioner a priori.\n\t\n\t\\item\n\tThe method is easily extendable to a \\textbf{wide range of discretization methods} for the Stokes and Darcy subproblems. Aside from compliance with aim (1), we impose as few restrictions as possible on the underlying choice of discretization methods, thus allowing the presented iterative scheme to be highly adaptable. Moreover, the scheme is able to benefit from existing numerical implementations that are tailored to solving the Stokes and Darcy subproblems efficiently. This work employs a conforming Mixed Finite Element method, keeping in mind that extensions can readily be made to other locally conservative methods such as e.g. Finite Volume Methods or Discontinuous Galerkin methods.\n\n\tIn order to achieve this third goal, we first derive the properties of the problem in the continuous setting and apply the discretization afterward. The key strategy here is to reformulate the problem into a Steklov-Poincar\\'e system concerning only the normal flux across the interface, similar to the strategy presented in \\cite[Sec. 2.5]{discacciati2004domain}. We then propose a preconditioner for this problem that is independent of the chosen discretization methods for the subproblems.\n\\end{enumerate}\n\nOur formulation and analysis of the Stokes-Darcy problem therefore has three distinguishing properties. Most importantly, we consider a mixed formulation of the coupled problem using a function space that strongly imposes normal flux continuity at the interface. 
In contrast, existing approaches often use a primal formulation for the Darcy subproblem \\cite{discacciati2004domain} or enforce flux continuity using Lagrange multipliers \\cite{layton2002coupling}. In the context of Mixed Finite Element Methods, this directly leads to different choices of discrete spaces. Secondly, our analysis employs weighted norms and we derive an estimate for the interface flux that has, to our knowledge, not been exploited in existing literature. Third, we propose a preconditioner in Section~\\ref{sub:parameter_robust_preconditioning} that is entirely local to the interface and does not require additional subproblem solves, in contrast to more conventional approaches such as the Neumann-Neumann method presented in Section~\\ref{sub:comparison_to_NN_method}. The construction of this preconditioner does, however, require solving a generalized eigenvalue problem, which is done in the a priori, or ``off-line'', stage. As an additional feature, our set-up does not require choosing any acceleration parameters.\n\nThe article is structured as follows. Section~\\ref{sec:the_model} introduces the coupled Stokes-Darcy model and its variational formulation as well as the notational conventions used throughout this work. Well-posedness of the model is shown in Section~\\ref{sec:well_posedness} with the use of weighted norms. Section~\\ref{sec:the_steklov_poincare_system} shows the reduction to an interface problem concerning only the normal flux defined there. A conforming discretization is proposed in Section~\\ref{sec:discretization} with the use of the Mixed Finite Element method. Using the ingredients of these sections, Section~\\ref{sec:iterative_solvers} describes the proposed iterative scheme and the optimal preconditioner it relies on. The theoretical results are confirmed numerically in Section~\\ref{sec:numerical_results}. 
Finally, Section~\\ref{sec:conclusions} contains concluding remarks.\n\n\\section{The Coupled Stokes-Darcy Model} \n\\label{sec:the_model}\n\nConsider an open, bounded domain $\\Omega \\subset \\mathbb{R}^n$, $n \\in \\{2, 3\\}$, decomposed into two disjoint, Lipschitz subdomains $\\Omega_S$ and $\\Omega_D$. Here, and throughout this work, the subscript $S$ or $D$ is used on subdomains and variables to denote its association to the Stokes or Darcy subproblem, respectively. Let the interface be denoted by $\\Gamma := \\partial{\\Omega}_S \\cap \\partial{\\Omega}_D$ and let $\\bm{n}$ denote the unit vector normal to $\\Gamma$ oriented outward with respect to $\\Omega_S$. An illustration of these definitions is given in Figure~\\ref{fig:figure1}.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{Fig1.pdf}\n\t\\caption{Decomposition of the domain into $\\Omega_S$ and $\\Omega_D$.}\n\t\\label{fig:figure1}\n\\end{figure}\n\nWe introduce the model problem following the description of \\cite{layton2002coupling}. The main variables are given by the velocity $\\bm{u}$ and pressure $p$. A subscript denotes the restriction of a variable to the corresponding subdomain. The model is formed by considering Stokes flow for $(\\bm{u}_S, p_S)$ in $\\Omega_S$, Darcy flow for $(\\bm{u}_D, p_D)$ in $\\Omega_D$, and mass conservation laws for $\\bm{u}$ in both subdomains: \n\\begin{subequations} \\label{eq: SD strong form}\n\t\\begin{align}\n\t\t-\\nabla \\cdot \\sigma(\\bm{u}_S, p_S) &= \\bm{f}_S, & \n\t\t\\nabla \\cdot \\bm{u}_S &= 0, & \\text{in }\\Omega_S, \\\\\n\t\t\\bm{u}_D + K \\nabla p_D &= 0, & \n\t\t\\nabla \\cdot \\bm{u}_D &= f_D, & \\text{in }\\Omega_D.\n\t\\end{align}\nIn this setting, $K$ is the hydraulic conductivity of the porous medium. For simplicity, we assume that $K$ is homogeneous and isotropic and thus given by a positive scalar. 
On the right-hand side, $\\bm{f}_S$ represents a body force and $f_D$ corresponds to a mass source.\nIn the governing equations for Stokes flow, let the strain $\\varepsilon$ and stress $\\sigma$ be given by\n\\begin{align*}\n\t\\varepsilon(\\bm{u}_S) &:= \\frac{1}{2}\\left(\\nabla \\bm{u}_S + (\\nabla \\bm{u}_S)^T \\right), &\n\t\\sigma(\\bm{u}_S, p_S) &:= \\mu \\varepsilon(\\bm{u}_S) - p_S I.\n\\end{align*}\nThe parameter $\\mu > 0$ is the viscosity. \n\nNext, we introduce two coupling conditions on the interface $\\Gamma$ that describe mass conservation and the balance of forces, respectively:\n\t\\begin{align}\n\t\t\\bm{n} \\cdot \\bm{u}_S &= \\bm{n} \\cdot \\bm{u}_D, \n\t\t& \\text{on } \\Gamma, \\label{eq: coupling_mass}\\\\\n\t\t\\bm{n} \\cdot \\sigma(\\bm{u}_S, p_S) \\cdot \\bm{n} &= -p_D,\n\t\t& \\text{on } \\Gamma. \n\t\\end{align}\nAs remarked in the introduction, we keep a particular focus on conservation of mass. To ensure that no mass is lost across the interface, we will prioritize condition \\eqref{eq: coupling_mass} at a later stage.\n\nTo close the model, we consider the following boundary conditions. First, for the Stokes subproblem, we impose the Beavers-Joseph-Saffman condition on the interface $\\Gamma$, given by\n\\begin{align}\n\t\t\\bm{n} \\cdot \\sigma(\\bm{u}_S, p_S) \\cdot \\bm{\\tau}\n\t\t&= - \\beta \\bm{\\tau} \\cdot \\bm{u}_S, \n\t\t& \\text{on } \\Gamma. \n\t\t\\label{eq: BJS}\n\\end{align}\nHere, we define $\\beta := \\alpha \\frac{\\mu}{\\sqrt{\\bm{\\tau} \\cdot \\kappa \\cdot \\bm{\\tau}}} $ with $\\kappa := \\mu K$ the permeability and $\\alpha$ a proportionality constant to be determined experimentally. Moreover, the unit vector $\\bm{\\tau}$ is obtained from the tangent bundle of $\\Gamma$. 
Thus, for $n = 2$, equation \eqref{eq: BJS} corresponds to a single condition on the one-dimensional interface $\Gamma$ whereas for $n = 3$, it describes two separate coupling conditions.\n\nThe boundary of $\Omega$ is decomposed into the disjoint unions $\partial \Omega_S \setminus \Gamma = \partial_u \Omega_S \cup \partial_\sigma \Omega_S$ and $\partial \Omega_D \setminus \Gamma = \partial_u \Omega_D \cup \partial_p \Omega_D$. The subscript denotes the type of boundary condition imposed on that portion of the boundary. Specifically, we set\n\t\begin{align}\n\t\t\bm{u}_S &= 0, & \text{on } &\partial_u \Omega_S, &\n\t\t\bm{n} \cdot \bm{u}_D &= 0, & \text{on } &\partial_u \Omega_D, \label{eq: BC essential} \\\n\t\t\bm{n} \cdot \sigma(\bm{u}_S, p_S) &= 0, & \text{on } &\partial_\sigma \Omega_S, &\n\t\tp_D &= g_p, & \text{on } &\partial_p \Omega_D, \label{eq: BC natural} \n\t\end{align}\nwith $g_p$ a given pressure distribution.\n\end{subequations}\n\nIn the following, we assume that the interface $\Gamma$ touches the portion of the boundary $\partial \Omega_S$ where homogeneous flux conditions are imposed, i.e. $\partial \Gamma \subseteq \overline{\partial_u \Omega_S}$. \nWe note that this assumption excludes the case in which $\Omega_D$ is completely surrounded by $\Omega_S$.\nMoreover, we assume that $|\partial_\sigma \Omega_S \cup \partial_p \Omega_D| > 0$ to ensure unique solvability of the coupled problem and we focus on the case in which $|\partial_\sigma \Omega_S| > 0$.\n\n\subsection{Functional Setting} \n\label{sub:functional_setting}\n\nIn this section, we introduce the function spaces in which we search for a weak solution of problem \eqref{eq: SD strong form}. We start by considering the space for the velocity variable $\bm{u}$. 
With the aim of deriving mixed formulations for both subproblems, we introduce the following spaces:\n\\begin{subequations}\n\t\\begin{align}\n\t\t\\bm{V}_S &:= \\left\\{ \\bm{v}_S \\in (H^1(\\Omega_S))^n :\\ \n\t\t\\bm{v}_S|_{\\partial_u \\Omega_S} = 0 \\right\\}, \\\\\n\t\t\\bm{V}_D &:= \\left\\{ \\bm{v}_D \\in H(\\div, \\Omega_D) :\\ \n\t\t\\bm{n} \\cdot \\bm{v}_D|_{\\partial_u \\Omega_D} = 0 \\right\\}.\n\t\\end{align}\n\\end{subequations}\n\n\nNote that these spaces incorporate the boundary conditions \\eqref{eq: BC essential} on $\\partial_u \\Omega$ which become essential boundary conditions in our mixed formulation. Similarly, the normal flux continuity across $\\Gamma$ \\eqref{eq: coupling_mass} needs to be incorporated as an essential boundary condition. For that, we introduce a single function $\\phi \\in \\Lambda$, defined on $\\Gamma$ to represent the normal flux across the interface. The next step is then to define the following three function spaces:\n\\begin{subequations}\n\t\\begin{align}\n\t\t\\bm{V}_S^0 &:= \\left\\{ \\bm{v}_S \\in \\bm{V}_S :\\ \n\t\t\\bm{n} \\cdot \\bm{v}_S|_{\\Gamma} = 0 \\right\\}, \\\\\n\t\t\\Lambda &:= H^{1/2}_{00}(\\Gamma), \\\\\n\t\t\\bm{V}_D^0 &:= \\left\\{ \\bm{v}_D \\in \\bm{V}_D :\\ \n\t\t\\bm{n} \\cdot \\bm{v}_D|_{\\Gamma} = 0 \\right\\}.\n\t\\end{align}\n\\end{subequations}\nWe note that $\\Lambda$ is the normal trace space of $\\bm{V}_S$ on $\\Gamma$. From the previous section, we recall that $\\Gamma$ touches the boundary $\\partial \\Omega$ where zero velocity conditions are imposed for the Stokes problem. The trace space is therefore characterized as the fractional Sobolev space $H^{1/2}_{00}(\\Gamma)$, containing distributions that can be continuously extended by zero on $\\partial \\Omega$. We refer the reader to \\cite{lions2012non} for more details on this type of trace spaces. 
For the purpose of our analysis, we note that the inclusion $H_0^1(\\Gamma) \\subset \\Lambda \\subset L^2(\\Gamma)$ holds and we let $H^{-\\frac{1}{2}}(\\Gamma)$ denote the dual of $\\Lambda$.\n\nFor the incorporation of the interface condition \\eqref{eq: coupling_mass} in our weak formulation, we introduce continuous operators that extend a given flux distribution on the interface to the two subdomains. The extension operators $\\mathcal{R}_S: \\Lambda \\to \\bm{V}_S$ and $\\mathcal{R}_D: \\Lambda \\to \\bm{V}_D$ are chosen such that\n\\begin{align} \\label{eq: extension property}\n\t(\\bm{n} \\cdot \\mathcal{R}_S \\varphi)|_{\\Gamma} \n\t&= \n\t(\\bm{n} \\cdot \\mathcal{R}_D \\varphi)|_{\\Gamma} \n\t= \\varphi.\n\\end{align}\nWe use $\\| \\cdot \\|_{s, \\Omega}$ as short-hand notation for the norm on $H^s(\\Omega)$. With this notation, the continuity of $\\mathcal{R}_i$ implies that the following inequalities hold\n\\begin{align} \\label{eq: continuity R_i}\n\t\\| \\mathcal{R}_S \\varphi \\|_{1, \\Omega_S} &\\lesssim \\| \\varphi \\|_{\\frac{1}{2}, \\Gamma}, &\n\t\\| \\mathcal{R}_D \\varphi \\|_{0, \\Omega_D} + \\| \\nabla \\cdot \\mathcal{R}_D \\varphi \\|_{0, \\Omega_D} &\\lesssim \\| \\varphi \\|_{-\\frac{1}{2}, \\Gamma}.\n\\end{align}\nExamples of continuous extension operators can be found in \\cite[Sec. 4.1.2]{quarteroni1999domain}. The notation $A \\lesssim B$ implies that a constant $c > 0$ exists, independent of material parameters and the mesh size $h$ such that $A \\le cB$. The relationship $\\gtrsim$ is defined analogously.\n\nThese definitions allow us to create a function space $\\bm{V}$ containing velocities with normal trace continuity on $\\Gamma$. 
Let this function space be defined as\n\begin{align}\n\t\bm{V} &:= \left\{ \bm{v} \in (L^2(\Omega))^n :\ \n\t\exists (\bm{v}_S^0, \varphi, \bm{v}_D^0) \in \bm{V}_S^0 \times \Lambda \times \bm{V}_D^0\n\t\text{ such that } \bm{v}|_{\Omega_i} = \bm{v}_i^0 + \mathcal{R}_i \varphi, \text{ for } i \in \{S, D\} \right\}.\n\end{align}\n\nSecond, the function space for the pressure variable is given by $W := L^2(\Omega)$ and we define $W_S := L^2(\Omega_S)$ and $W_D := L^2(\Omega_D)$.\n\nAs before, we use the subscript $i \in \{S, D\}$ to denote the restriction to a subdomain $\Omega_i$. Thus, for $(\bm{v}, w) \in \bm{V} \times W$, we have\n\begin{align}\n\n\t\bm{v}_i &:= \bm{v}|_{\Omega_i} \in \bm{V}_i, & \n\tw_i &:= w|_{\Omega_i} \in W_i, &\n\ti &\in \{S, D\}.\n\end{align}\nDespite the fact that each function in $\bm{V}$ can be decomposed into components in $\bm{V}_S$ and $\bm{V}_D$, we emphasize that $\bm{V}$ is a strict subspace of $\bm{V}_S \times \bm{V}_D$ due to the continuity of normal traces on $\Gamma$. \n\nA key concept in our functional setting is to consider a decomposition of $\bm{V}$ comprising a function with zero normal trace on $\Gamma$ and an extension of the normal flux distribution. For that purpose, let $\bm{V}^0$ be the subspace of $\bm{V}$ consisting of functions with zero normal trace over $\Gamma$:\n\begin{align}\n\t\bm{V}^0 := \left\{ \bm{v}^0 \in \bm{V} :\ \n\t\t\exists (\bm{v}_S^0, \bm{v}_D^0) \in \bm{V}_S^0 \times \bm{V}_D^0\n\t\t\text{ such that } \bm{v}^0|_{\Omega_i} = \bm{v}_i^0, \text{ for } i \in \{S, D\} \right\}.\n\end{align}\n\nNext, we define the composite extension operator $\mathcal{R}: \Lambda \to \bm{V}$ such that $\mathcal{R} \varphi|_{\Omega_i} = \mathcal{R}_i \varphi$ for $i \in \{S, D\}$. 
Combined with the subspace $\\bm{V}^0$, we obtain the decomposition\n\\begin{align} \\label{eq: decomposition V}\n\t\\bm{V} = \\bm{V}^0 \\oplus \\mathcal{R} \\Lambda.\n\\end{align}\n\nIt is important to emphasize that the function space $\\bm{V}$ is independent of the choice of extension operators. On the other hand, each choice of $\\mathcal{R}$ leads to a specific decomposition of the form \\eqref{eq: decomposition V}.\n\n\\subsection{Variational Formulation} \n\\label{sub:variational_Formulation}\n\nWith the function spaces defined, we continue by deriving the variational formulation of \\eqref{eq: SD strong form}. The first step is to consider the Stokes and Darcy flow equations. We test these with $\\bm{v} \\in \\bm{V}$ and integrate over the corresponding subdomain. Using $(\\cdot, \\cdot)_{\\Omega}$ to denote the $L^2$ inner product on $\\Omega$, we apply integration by parts and use the boundary conditions to derive\n\\begin{align*}\n\t-(\\nabla \\cdot \\sigma(\\bm{u}_S, p_S), \\bm{v}_S)_{\\Omega_S} &= \\nonumber \\\\\n\t(\\sigma(\\bm{u}_S, p_S), \\nabla \\bm{v}_S)_{\\Omega_S} \n\t- ( \\bm{n} \\cdot \\sigma(\\bm{u}_S, p_S), \\bm{v}_S )_{\\Gamma} \n\n\t&= \\nonumber\\\\\n\n\n\n\n\n\n\n\n\n\n\t(\\mu \\varepsilon(\\bm{u}_S), \\varepsilon(\\bm{v}_S))_{\\Omega_S} \n\t- (p_SI, \\nabla \\bm{v}_S)_{\\Omega_S} \n\t+ (\\beta \\bm{\\tau} \\cdot \\bm{u}_S, \\bm{\\tau} \\cdot \\bm{v}_S )_{\\Gamma}\n\t+ ( p_D, \\bm{n} \\cdot \\bm{v}_S )_{\\Gamma} \n\n\t&= ( \\bm{f}_S, \\bm{v}_S )_{\\Omega}.\n\\end{align*}\n\nOn the other hand, we test Darcy's law in the porous medium $\\Omega_D$ and use similar steps to obtain\n\\begin{align*}\n\t(K^{-1} \\bm{u}_D, \\bm{v}_D)_{\\Omega_D} \n\t- (p_D, \\nabla \\cdot \\bm{v}_D)_{\\Omega_D} \n\t- ( p_D, \\bm{n} \\cdot \\bm{v}_D )_{\\Gamma} \n\t- ( g_p, \\bm{n} \\cdot \\bm{v}_D )_{\\partial_p \\Omega_D}\n\t&= 0.\n\\end{align*}\n\n\nThe normal trace continuity imposed in the space $\\bm{V}$ gives us\n$( p_D, \\bm{n} \\cdot \\bm{v}_S 
)_{\\Gamma} - ( p_D, \\bm{n} \\cdot \\bm{v}_D )_{\\Gamma} = 0$.\n\nIn turn, after supplementing the system with the equations for mass conservation, we arrive at the following variational formulation: \\\\\nFind the pair $(\\bm{u}, p) \\in \\bm{V} \\times W$ that satisfies\n\\begin{subequations}\n\\begin{align}\n\t(\\mu \\varepsilon(\\bm{u}_S), \\varepsilon(\\bm{v}_S))_{\\Omega_S} \n\t+ (\\beta \\bm{\\tau} \\cdot \\bm{u}_S, \\bm{\\tau} \\cdot \\bm{v}_S )_{\\Gamma} \n\t+ (K^{-1} \\bm{u}_D, \\bm{v}_D)_{\\Omega_D} \n\t& \\nonumber\\\\\n\t- (p_S, \\nabla \\cdot \\bm{v}_S)_{\\Omega_S} \n\t- (p_D, \\nabla \\cdot \\bm{v}_D)_{\\Omega_D} \n\t&= ( \\bm{f}_S, \\bm{v}_S )_{\\Omega_S}\n\t+ ( g_p, \\bm{n} \\cdot \\bm{v}_D )_{\\partial_p \\Omega_D}, \n\t& \\forall \\bm{v} &\\in \\bm{V}, \\\\\n\n\t(\\nabla \\cdot \\bm{u}_S, w_S)_{\\Omega_S} \n\t+ (\\nabla \\cdot \\bm{u}_D, w_D)_{\\Omega_D} \n\t&= \n\n\t(f_D, w_D)_{\\Omega_D},\n\t& \\forall w &\\in W.\n\\end{align}\n\\end{subequations}\n\nWe note that this system has a characteristic saddle-point structure, allowing us to rewrite the problem as:\\\\\nFind the pair $(\\bm{u}, p) \\in \\bm{V} \\times W$ that satisfies\n\\begin{subequations} \\label{eq: variational formulation}\n\t\\begin{align} \n\t\n\t\ta(\\bm{u}, \\bm{v}) + b(\\bm{v}, p) &= f_u(\\bm{v}),\n\t\t& \\forall \\bm{v} &\\in \\bm{V}, \\label{eq: variational formulation 1st eq}\\\\\n\t\tb(\\bm{u}, w) &= f_p(w), \\label{eq: variational formulation 2nd eq}\n\t\t& \\forall w &\\in W.\n\t\\end{align}\n\\end{subequations}\n\nThe bilinear forms $a: \\bm{V} \\times \\bm{V} \\to \\mathbb{R}$ and $b: \\bm{V} \\times W \\to \\mathbb{R}$, and the functionals $f_u: \\bm{V} \\to \\mathbb{R}$ and $f_p: W \\to \\mathbb{R}$ are given by\n\\begin{subequations} \\label{eq: bilinear forms}\n\t\\begin{align}\n\t\ta(\\bm{u}, \\bm{v}) &:= (\\mu \\varepsilon(\\bm{u}_S), \\varepsilon(\\bm{v}_S))_{\\Omega_S} \n\t\t+ (\\beta \\bm{\\tau} \\cdot \\bm{u}_S, \\bm{\\tau} \\cdot \\bm{v}_S )_{\\Gamma} 
\n\t\t+ (K^{-1} \\bm{u}_D, \\bm{v}_D)_{\\Omega_D}, \\\\\n\t\tb(\\bm{u}, w) &:= -(\\nabla \\cdot \\bm{u}_S, w_S)_{\\Omega_S}\n\t\t-(\\nabla \\cdot \\bm{u}_D, w_D)_{\\Omega_D}, \\\\\n\t\tf_u(\\bm{v}) &:= ( \\bm{f}_S, \\bm{v}_S )_{\\Omega_S} \n\t\n\t\t+ ( g_p, \\bm{n} \\cdot \\bm{v}_D )_{\\partial_p \\Omega_D}, \\\\\n\t\tf_p(w) &:= \n\t\t-(f_D, w_D)_{\\Omega_D}.\n\t\n\t\\end{align}\n\\end{subequations}\n\n\n\n\\section{Well-Posedness Analysis} \n\\label{sec:well_posedness}\n\n\nIn this section, we analyze problem \\eqref{eq: variational formulation} with the use of weighted norms. The main goal is to show that a unique solution exists that is bounded in norms that depend on the material parameters. Consequently, this result allows us to construct an iterative method that is robust with respect to material parameters. \nWe start by deriving the appropriate norms, for which we first make two assumptions on the material parameters. \n\t\n\nFirst, the constant $\\beta$ in the Beavers-Joseph-Saffman condition \\eqref{eq: BJS} is assumed to be bounded as\n\\begin{subequations} \\label{eqs: material parameter bounds}\n\\begin{align}\n\t\\beta = \\alpha \\frac{\\mu}{\\sqrt{\\bm{\\tau} \\cdot \\kappa \\cdot \\bm{\\tau}}} \\lesssim \\mu.\n\\end{align}\nIn the special case of $\\alpha = 0$, this condition is trivially satisfied.\n\nSecond, we assume that the permeability $\\kappa := \\mu K$ is bounded from above in the sense that\n\\begin{align}\n\t\\mu K \\lesssim 1.\n\\end{align}\n\\end{subequations}\n\nWe are now ready to define the weighted norms for $\\bm{v} \\in \\bm{V}$ and $w \\in W$, respectively, given by\n\\begin{subequations} \\label{eq: norms}\n\t\\begin{align}\n\t\t\\| \\bm{v} \\|_V^2 &:= \n\t\t\\| \\mu^{\\frac{1}{2}} \\bm{v}_S \\|_{1, \\Omega_S}^2\n\t\t+ \\| K^{-\\frac{1}{2}} \\bm{v}_D \\|_{0, \\Omega_D}^2\n\t\t+ \\| K^{-\\frac{1}{2}} \\nabla \\cdot \\bm{v}_D \\|_{0, \\Omega_D}^2 \\\\\n\t\t\\| w \\|_W^2 &:= \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S}^2\n\t\t+ 
\\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D}^2.\n\t\\end{align}\n\\end{subequations}\n\nThe next step is to analyze the problem using these norms. For that purpose, we recall the identification of \\eqref{eq: variational formulation} as a saddle-point problem. Using saddle-point theory \\cite{boffi2013mixed}, well-posedness is shown by proving the four sufficient conditions presented in the following lemma.\n\\begin{lemma} \\label{lem: inequalities}\n\tThe bilinear forms defined in \\eqref{eq: bilinear forms} satisfy the following inequalities:\n\t\\begin{subequations}\n\t\\begin{align}\n\t\t\t& \\text{For } \\bm{u}, \\bm{v} \\in \\bm{V}: \n\t\t\t& a(\\bm{u}, \\bm{v}) &\\lesssim \\| \\bm{u} \\|_V \\| \\bm{v} \\|_V. \\label{ineq: a_cont}\\\\\n\t\t\t& \\text{For } (\\bm{v}, w) \\in \\bm{V} \\times W: \n\t\t\t& b(\\bm{v}, w) &\\lesssim \\| \\bm{v} \\|_V \\| w \\|_W. \\label{ineq: b_cont}\\\\\n\t\t\t& \\text{For } \\bm{v} \\in \\bm{V} \\text{ with } b(\\bm{v}, w) = 0 \\ \\forall w \\in W: \n\t\t\t& a(\\bm{v}, \\bm{v}) &\\gtrsim \\| \\bm{v} \\|_V^2. 
\label{ineq: a_coercive}\\
			& \text{For } w \in W, \ \exists \bm{v} \in \bm{V} \text{ with } \bm{v} \ne 0, \text{ such that}: 
			& b(\bm{v}, w) &\gtrsim \| \bm{v} \|_V \| w \|_W.\label{ineq: b_infsup}
		\end{align}
	\end{subequations}
	\end{lemma}
\begin{proof}
	Using the Cauchy-Schwarz inequality, the assumptions \eqref{eqs: material parameter bounds}, and a trace inequality for $H^1$, we obtain the continuity bounds \eqref{ineq: a_cont} and \eqref{ineq: b_cont}:	
\begin{subequations} \label{ineqs: continuity}
	\begin{align}
		a(\bm{u}, \bm{v}) 
		&\lesssim \|\mu^{\frac{1}{2}} \bm{u}_S \|_{1, \Omega_S} 
		\|\mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S} 
		+ \| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{u}_S \|_{0, \Gamma}
		\| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{v}_S \|_{0, \Gamma} 
		+ \| K^{-\frac{1}{2}} \bm{u}_D \|_{0, \Omega_D}
		\| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D} \nonumber\\
		&\lesssim \| \bm{u} \|_V \| \bm{v} \|_V, \\
		b(\bm{v}, w) 
		&\lesssim \| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S} \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S} 
		+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D} \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \nonumber\\
		&\lesssim \| \bm{v} \|_V \| w \|_W.
	\end{align}
\end{subequations}

	For the proof of inequality \eqref{ineq: a_coercive}, we first note that if $b(\bm{v}, w) = 0$ for all $w \in W$, then $\nabla \cdot \bm{v}_D = 0$.
Combining this observation with Korn's inequality gives us:
	\begin{align} \label{eq: proof 3.5c}
		a(\bm{v}, \bm{v}) &\gtrsim
		\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
		+ \| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{v}_S \|_{0, \Gamma}^2
		+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2 \nonumber\\
		&\ge
		\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
		+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D}^2 
		+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2
		= \| \bm{v} \|_V^2.
	\end{align}

	Inequality \eqref{ineq: b_infsup} is the inf-sup condition relevant for this formulation. For a given $w = (w_S, w_D) \in W$, let us construct $\bm{v} = (\bm{v}_S^0, \phi, \bm{v}_D^0) \in \bm{V}$ in the following manner. First, let the interface function $\phi \in H_0^1(\Gamma)$ solve the following constrained minimization problem:
	\begin{align} \label{eq: phi constraint}
		\min_{\varphi \in H_0^1(\Gamma)} \tfrac12 &\| \varphi \|_{1, \Gamma}^2,
		&\text{subject to } (\varphi, 1)_{\Gamma} &= (K w_D, 1)_{\Omega_D}.
	\end{align}
	The solution $\phi$ then satisfies the two key properties:
	\begin{subequations}
	\begin{align}
		\| \phi \|_{1, \Gamma} &\lesssim \| K w_D \|_{0, \Omega_D}, \label{eq: bound on phi} \\
		(\nabla \cdot \mathcal{R}_D \phi, 1)_{\Omega_D} &=
		(- \bm{n} \cdot \mathcal{R}_D \phi, 1)_{\Gamma} = 
		(- \phi, 1)_{\Gamma} = 
		- (K w_D, 1)_{\Omega_D}. \label{eq: compatibility of phi}
	\end{align}
	\end{subequations}
	Bound \eqref{eq: bound on phi} can be deduced by constructing a function $\psi \in H_0^1(\Gamma)$ that satisfies the constraint in \eqref{eq: phi constraint} and is bounded in the sense of \eqref{eq: bound on phi}.
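	For instance, let $\hat\varphi \in H_0^1(\Gamma)$ be any fixed function with unit mean, $(\hat\varphi, 1)_{\Gamma} = 1$, and set $\psi := (K w_D, 1)_{\Omega_D} \, \hat\varphi$. Then $\psi$ satisfies the constraint in \eqref{eq: phi constraint} by construction, and the Cauchy-Schwarz inequality yields
	\begin{align*}
		\| \psi \|_{1, \Gamma} 
		= |(K w_D, 1)_{\Omega_D}| \, \| \hat\varphi \|_{1, \Gamma} 
		\lesssim \| K w_D \|_{0, \Omega_D}.
	\end{align*}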
It then follows that the minimizer $\\phi$ satisfies \\eqref{eq: bound on phi} as well.\n\t\n\tNext, we construct $\\bm{v}_i^0 \\in \\bm{V}_i^0$ for $i \\in \\{S, D\\}$. For that, we first introduce $p_S \\in H^2(\\Omega_S)$ as the solution to the following auxiliary problem\n\t\\begin{subequations} \\label{eqs: aux prob p_S}\n\t\\begin{align}\n\t\t- \\nabla \\cdot \\nabla p_S &= \\mu^{-1} w_S + \\nabla \\cdot \\mathcal{R}_S \\phi, \\\\\n\t\tp_S|_{\\partial_\\sigma \\Omega_S} &= 0, \\\\\n\t\t(\\bm{n} \\cdot \\nabla p_S)|_{\\partial \\Omega_S \\setminus \\partial_\\sigma \\Omega_S} &= 0.\n\t\\end{align}\n\t\\end{subequations}\n\tSimilarly, we define $p_D \\in H^2(\\Omega_D)$ such that\n\t\\begin{subequations} \\label{eqs: aux prob p_D}\n\t\\begin{align}\n\t\t- \\nabla \\cdot \\nabla p_D &= K w_D + \\nabla \\cdot \\mathcal{R}_D \\phi, \\\\\n\t\t(\\bm{n} \\cdot \\nabla p_D)|_{\\partial \\Omega_D} &= 0.\n\t\\end{align}\n\t\\end{subequations}\n\tWe note that \\eqref{eqs: aux prob p_D} is a Neumann problem. We therefore verify the compatibility of the right hand side by using \\eqref{eq: compatibility of phi} in the following derivation:\n\t\\begin{align*}\n\t\t( K w_D + \\nabla \\cdot \\mathcal{R}_D \\phi, 1)_{\\Omega_D}\n\t\t= ( K w_D - K w_D, 1)_{\\Omega_D}\n\t\t= 0.\n\t\\end{align*}\n\t\n\tLet $\\bm{v}_S^0 := \\nabla p_S$ and $\\bm{v}_D^0:= \\nabla p_D$. From the elliptic regularity of the auxiliary problems, see e.g. \\cite{evans2010partial}, we obtain the bounds\n\t\\begin{subequations}\n\t\\begin{align}\n\t\t\\| \\bm{v}_S^0 \\|_{1, \\Omega_S} \\lesssim \n\t\t\\| \\mu^{-1} w_S \\|_{0, \\Omega_S} + \\| \\nabla \\cdot \\mathcal{R}_S \\phi \\|_{0, \\Omega_S}, \\\\\n\t\t\\| \\bm{v}_D^0 \\|_{1, \\Omega_D} \\lesssim \n\t\t\\| K w_D \\|_{0, \\Omega_D} + \\| \\nabla \\cdot \\mathcal{R}_D \\phi \\|_{0, \\Omega_D}.\n\t\\end{align}\n\t\\end{subequations}\n\n\tNext, we set $\\bm{v}_S := \\bm{v}_S^0 + \\mathcal{R}_S \\phi$ and $\\bm{v}_D := \\bm{v}_D^0 + \\mathcal{R}_D \\phi$. 
Combining the bounds on $\\bm{v}_S^0$ and $\\phi$ with the continuity of the extension operators from \\eqref{eq: continuity R_i} and the material parameter bounds \\eqref{eqs: material parameter bounds}, we derive\n\\begin{subequations}\n\t\\begin{align}\n\t\t\\| \\mu^{\\frac{1}{2}} \\bm{v}_S \\|_{1, \\Omega_S}\n\t\t&\\le \\| \\mu^{\\frac{1}{2}} \\bm{v}_S^0 \\|_{1, \\Omega_S} \n\t\t+ \\| \\mu^{\\frac{1}{2}} \\mathcal{R}_S \\phi \\|_{1, \\Omega_S} \\nonumber \\\\\n\t\t&\\lesssim \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S} \n\t\t+ \\| \\mu^{\\frac{1}{2}} \\nabla \\cdot \\mathcal{R}_S \\phi \\|_{0, \\Omega_S} \n\t\t+ \\| \\mu^{\\frac{1}{2}} \\mathcal{R}_S \\phi \\|_{1, \\Omega_S} \\nonumber \\\\\n\t\t&\\lesssim \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S} \n\t\t+ \\| \\mu^{\\frac{1}{2}} \\phi \\|_{\\frac{1}{2}, \\Gamma} \\nonumber \\\\\n\t\t&\\lesssim \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S} \n\t\t+ \\| \\mu^{\\frac{1}{2}} K w_D \\|_{0, \\Omega_D} \\nonumber \\\\\n\t\t&\\lesssim \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S} \n\t\t+ \\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D}.\n\t\\end{align}\n\tSimilarly, $\\bm{v}_D$ is bounded in the following sense:\n\t\\begin{align}\n\t\t%\n\t\t\\| K^{-\\frac{1}{2}} \\bm{v}_D \\|_{0, \\Omega_D}\n\t\t+ \\| K^{-\\frac{1}{2}} \\nabla \\cdot \\bm{v}_D \\|_{0, \\Omega_D}\n\t\t&\\le \n\t\t\\| K^{-\\frac{1}{2}} \\bm{v}_D^0 \\|_{0, \\Omega_D}\n\t\t+ \\| K^{-\\frac{1}{2}} \\mathcal{R}_D \\phi \\|_{0, \\Omega_D}\n\t\t+ \\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D} \\nonumber \\\\\n\t\t&\\lesssim\n\t\t\\| K^{-\\frac{1}{2}} \\nabla \\cdot \\mathcal{R}_D \\phi \\|_{0, \\Omega_D}\n\t\t+ \\| K^{-\\frac{1}{2}} \\mathcal{R}_D \\phi \\|_{0, \\Omega_D}\n\t\t+ \\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D} \\nonumber \\\\\n\t\t&\\lesssim\n\t\t\\| K^{-\\frac{1}{2}} \\phi \\|_{-\\frac{1}{2}, \\Gamma}\n\t\t+ \\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D} \\nonumber \\\\\n\t\t&\\lesssim\n\t\t\\| K^{\\frac{1}{2}} w_D \\|_{0, 
\\Omega_D}.\n\t\\end{align}\n\t\\end{subequations}\n\tIn the final step, we have used that $H^1(\\Gamma) \\subseteq H^{-\\frac{1}{2}}(\\Gamma)$ and \\eqref{eq: bound on phi}.\n\t\n\tBy construction, $\\bm{v}$ now satisfies the following two properties:\n\t\\begin{subequations} \\label{eqs: proof b inf sup}\n\t\\begin{align}\n\t\t\\| \\bm{v} \\|_V \n\t\t&= \\left(\\| \\mu^{\\frac{1}{2}} \\bm{v}_S \\|_{1, \\Omega_S}^2\n\t\t\t\t+ \\| K^{-\\frac{1}{2}} \\bm{v}_D \\|_{0, \\Omega_D}^2\n\t\t\t\t+ \\| K^{-\\frac{1}{2}} \\nabla \\cdot \\bm{v}_D \\|_{0, \\Omega_D}^2 \\right)^\\frac{1}{2}\n\t\t\\lesssim \\| w \\|_W, \\\\\n\t\tb(\\bm{v}, w) &= -(\\nabla \\cdot \\bm{v}_S, w_S)_{\\Omega_S}\n\t\t-(\\nabla \\cdot \\bm{v}_D, w_D)_{\\Omega_D} \\nonumber\\\\\n\t\t&= -(\\nabla \\cdot (\\nabla p_S + \\mathcal{R}_S \\phi), w_S)_{\\Omega_S}\n\t\t-(\\nabla \\cdot (\\nabla p_D + \\mathcal{R}_D \\phi), w_D)_{\\Omega_D}\\nonumber\\\\\n\t\t&= \\| \\mu^{-\\frac{1}{2}} w_S \\|_{0, \\Omega_S}^2\n\t\t+ \\| K^{\\frac{1}{2}} w_D \\|_{0, \\Omega_D}^2 \\nonumber\\\\\n\t\t&=\\| w \\|_W^2.\n\t\\end{align}\n\t\\end{subequations}\n\tThe proof is concluded by gathering \\eqref{eqs: proof b inf sup}.\n\\end{proof}\n\nIn the special case of $|\\partial_p \\Omega_D| > 0$, the Darcy subproblem is itself well-posed. This can be used to our advantage in the proof of \\eqref{ineq: b_infsup}. In particular, the construction of $\\phi \\in \\Lambda$ becomes obsolete, as shown in the following corollary.\n\n\\begin{corollary} \\label{cor: infsup V0}\n\tIf $|\\partial_p \\Omega_D| > 0$, then for each $w \\in W$, there exists $\\bm{v}^0 \\in \\bm{V}^0$ with $\\bm{v}^0 \\ne 0$ such that \n\t\\begin{align*}\n\t\tb(\\bm{v}^0, w) \\gtrsim \\| \\bm{v}^0 \\|_V \\| w \\|_W.\n\t\\end{align*}\n\\end{corollary}\n\\begin{proof}\n\tWe follow the same arguments as for \\eqref{ineq: b_infsup} in Lemma~\\ref{lem: inequalities}. 
The main difference is that we now set $\phi = 0$ and solve auxiliary Poisson problems to obtain $(\bm{v}_S^0, \bm{v}_D^0) \in \bm{V}_S^0 \times \bm{V}_D^0$ such that
	\begin{align*}
		- \nabla \cdot \bm{v}_S^0 &= \mu^{-1} w_S, &
		- \nabla \cdot \bm{v}_D^0 &= K w_D, \\
		(\bm{n} \cdot \bm{v}_S^0)|_{\partial \Omega_S \setminus \partial_\sigma \Omega_S} &= 0, &
		(\bm{n} \cdot \bm{v}_D^0)|_{\partial \Omega_D \setminus \partial_p \Omega_D} &= 0.
	\end{align*}
	Since both $\partial_\sigma \Omega_S$ and $\partial_p \Omega_D$ have positive measure, these two subproblems are well-posed and the statement follows by elliptic regularity.
\end{proof}

We are now ready to present the main result of this section, namely that problem \eqref{eq: variational formulation} is well-posed with respect to the weighted norms of \eqref{eq: norms}.
\begin{theorem} \label{thm: well-posedness}
	Problem \eqref{eq: variational formulation} is well-posed, i.e. a unique solution $(\bm{u}, p) \in \bm{V} \times W$ exists satisfying
	\begin{align}
		\| \bm{u} \|_V + \| p \|_W 
		\lesssim 
		\| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
		+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
		+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D}.
	\end{align}
\end{theorem}
\begin{proof}
	With the inequalities from \Cref{lem: inequalities}, it suffices to show continuity of the right-hand side.
Let us therefore apply the Cauchy-Schwarz inequality followed by a trace inequality:
	\begin{align*}
		f_u(\bm{v}) + f_p(w) &= ( \bm{f}_S, \bm{v}_S )_{\Omega_S} 
		- (f_D, w_D)_{\Omega_D} 
		+ ( g_p, \bm{n} \cdot \bm{v}_D )_{\partial_p \Omega_D} \\
		&\le \| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S} \| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}
		+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D} \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \\
		& \ \ + \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D} \| K^{-\frac{1}{2}} \bm{n} \cdot \bm{v}_D \|_{-\frac{1}{2}, \partial_p \Omega_D} \\
		&\lesssim \left( \| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S} 
		+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D} 
		+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D} \right) 
		\left( \| \bm{v} \|_{V} + \| w \|_W \right).
	\end{align*}
	With the continuity of the right-hand side shown, all requirements are satisfied to invoke standard saddle point theory \cite{boffi2013mixed}, proving the claim.
\end{proof}

\section{The Steklov-Poincar\'e System}
\label{sec:the_steklov_poincare_system}

The strategy is to introduce the Steklov-Poincar\'e operator $\Sigma$ and reduce the system \eqref{eq: variational formulation} to a problem concerning only the interface flux $\phi$. The reason for this is twofold. First, since the interface is a lower-dimensional manifold, the problem is reduced in dimensionality and is therefore expected to be easier to solve. Second, we show that the resulting system is symmetric and positive-definite and hence amenable to a large class of iterative solvers, including the Minimal Residual (MinRes) and the Conjugate Gradient (CG) method.

We start with the case in which both the pressure and stress boundary conditions are prescribed on a part of the boundary with positive measure, i.e.
we assume that $| \\partial_\\sigma \\Omega_S | > 0$ and $| \\partial_p \\Omega_D | > 0$. The cases in which one, or both, of the subproblems have pure Neumann boundary conditions are considered afterward.\n\nIn order to construct the reduced problem, we use the bilinear forms and functionals from \\eqref{eq: bilinear forms} and the extension operator $\\mathcal{R}$ from Section~\\ref{sub:functional_setting} and define the operator $\\Sigma: \\Lambda \\to \\Lambda^*$ and $\\chi \\in \\Lambda^*$ as\n\\begin{subequations} \n\\begin{align}\n\t\\langle \\Sigma \\phi, \\varphi \\rangle &:=\n\ta(\\bm{u}_\\star^0 + \\mathcal{R} \\phi, \\mathcal{R} \\varphi) + b(\\mathcal{R} \\varphi, p_\\star), \\label{eq: def Sigma}\\\\\n\t\\langle \\chi, \\varphi \\rangle &:=\n\tf_u(\\mathcal{R} \\varphi) - a(\\bm{u}_0^0, \\mathcal{R} \\varphi) - b(\\mathcal{R} \\varphi, p_0),\n\\end{align}\n\\end{subequations}\t\nin which $\\langle \\cdot, \\cdot \\rangle$ denotes the duality pairing on $\\Lambda^* \\times \\Lambda$. 
Here, the pair $(\\bm{u}_\\star^0, p_\\star) \\in \\bm{V}^0 \\times W$ satisfies\n\\begin{subequations} \\label{eq: auxiliary problem _phi}\n\t\\begin{align}\n\t\ta(\\bm{u}_\\star^0, \\bm{v}^0) + b(\\bm{v}^0, p_\\star)\n\t\t&= - a(\\mathcal{R} \\phi, \\bm{v}^0), \n\t\t& \\forall \\bm{v}^0 &\\in \\bm{V}^0, \\\\\n\t\tb(\\bm{u}_\\star^0, w) \n\t\t&= - b(\\mathcal{R} \\phi, w), \n\t\t& \\forall w &\\in W.\n\t\t\\label{eq aux problem eq2}\n\t\\end{align}\n\\end{subequations}\nand the pair $(\\bm{u}_0^0, p_0) \\in \\bm{V}^0 \\times W$ is defined such that\n\\begin{subequations} \\label{eq: auxiliary problem _0}\n\\begin{align}\n\t\ta(\\bm{u}_0^0, \\bm{v}^0) + b(\\bm{v}^0, p_0)\n\t\t&= f_u(\\bm{v}^0),\n\t\t& \\forall \\bm{v}^0 &\\in \\bm{V}^0, \\\\\n\t\tb(\\bm{u}_0^0, w) \n\t\t&= f_p(w), \n\t\t& \\forall w &\\in W.\n\\end{align}\n\\end{subequations}\n\nWith the above definitions, we introduce the reduced interface problem as: \\\\\nFind $\\phi \\in \\Lambda$ such that \n\\begin{align} \\label{eq: poincare steklov}\n\t\\langle \\Sigma \\phi, \\varphi \\rangle &=\n\t\\langle \\chi, \\varphi \\rangle,\n\t& \\forall \\varphi &\\in \\Lambda.\n\\end{align}\n\nNote that setting $\\bm{u} := \\bm{u}_\\star^0 + \\bm{u}_0^0 + \\mathcal{R} \\phi$ and $p := p_\\star + p_0$ yields the solution to the original problem \\eqref{eq: variational formulation}. Hence, if this problem admits a unique solution, then \\eqref{eq: poincare steklov} and \\eqref{eq: variational formulation} are equivalent. \n\nSimilar to the analysis of problem~\\eqref{eq: variational formulation} in Section~\\ref{sec:well_posedness}, we require an appropriate, parameter-dependent norm on functions $\\varphi \\in \\Lambda$ in order to analyze \\eqref{eq: poincare steklov}. 
Let us therefore define
\begin{align} \label{eq: norm phi}
		\| \varphi \|_{\Lambda}^2 &:= 
		\| \mu^{\frac{1}{2}} \varphi \|_{\frac{1}{2}, \Gamma}^2
		+ \| K^{-\frac{1}{2}} \varphi \|_{-\frac{1}{2}, \Gamma}^2.
\end{align}
We justify this choice by proving two bounds with respect to $\| \cdot \|_V$ in the following lemma. These results are then used in a subsequent theorem to show that $\Sigma$ is continuous and coercive with respect to $\| \cdot \|_\Lambda$.

\begin{lemma} \label{lem: norm equivalences}
	Given $\phi \in \Lambda$, the following bounds hold for any $\bm{u}^0 \in \bm{V}^0$:
	\begin{align}
		\| \phi \|_{\Lambda}
		&\lesssim \| \bm{u}^0 + \mathcal{R} \phi \|_V, &
		\| \mathcal{R} \phi \|_V &\lesssim 
		\| \phi \|_{\Lambda}.
	\end{align}
\end{lemma}
\begin{proof}
	We apply trace inequalities in $H^1(\Omega_S)$ and $H(\div, \Omega_D)$:
	\begin{align*}
			\| \phi \|_{\Lambda}^2
			&= \| \mu^{\frac{1}{2}} \phi \|_{\frac{1}{2}, \Gamma}^2
			+ \| K^{-\frac{1}{2}} \phi \|_{-\frac{1}{2}, \Gamma}^2 \nonumber\\
			&\lesssim \| \mu^{\frac{1}{2}} (\bm{u}_S^0 + \mathcal{R}_S \phi) \|_{1, \Omega_S}^2
			+ \| K^{-\frac{1}{2}} (\bm{u}_D^0 + \mathcal{R}_D \phi) \|_{0, \Omega_D}^2
			+ \| K^{-\frac{1}{2}} \nabla \cdot (\bm{u}_D^0 + \mathcal{R}_D \phi) \|_{0, \Omega_D}^2
			= \| \bm{u}^0 + \mathcal{R} \phi \|_V^2.
	\end{align*}
	Thus, the first inequality is shown.
On the other hand, the continuity of $\mathcal{R}_i$ for $i \in \{S, D\}$ from \eqref{eq: continuity R_i} gives us
	\begin{align*}
		\| \mathcal{R} \phi \|_V^2
		&\le \| \mu^{\frac{1}{2}} \mathcal{R}_S \phi \|_{1, \Omega_S}^2
		+ \| K^{-\frac{1}{2}} \mathcal{R}_D \phi \|_{0, \Omega_D}^2
		+ \| K^{-\frac{1}{2}} \nabla \cdot \mathcal{R}_D \phi \|_{0, \Omega_D}^2 \nonumber\\
		&\lesssim \| \mu^{\frac{1}{2}} \phi \|_{\frac{1}{2}, \Gamma}^2
		+ \| K^{-\frac{1}{2}} \phi \|_{-\frac{1}{2}, \Gamma}^2
		= \| \phi \|_{\Lambda}^2.
	\end{align*}
\end{proof}

\begin{theorem} \label{thm: sigma SPD}
	The operator $\Sigma: \Lambda \to \Lambda^*$ is symmetric, continuous, and coercive with respect to the norm $\| \cdot \|_{\Lambda}$. 
\end{theorem}
\begin{proof}
	We first note that the auxiliary problem \eqref{eq: auxiliary problem _phi} is well-posed by Lemma~\ref{lem: inequalities}, Corollary~\ref{cor: infsup V0}, and saddle point theory. Moreover, the right-hand side is continuous due to \eqref{ineq: a_cont} and \eqref{ineq: b_cont}. For given $\phi$, the pair $(\bm{u}_\star^0, p_\star)$ therefore exists uniquely and satisfies
	\begin{align} \label{eq: bound u_phi}
		\| \bm{u}_\star^0 \|_V + \| p_\star \|_W 
		\lesssim 
		\| \mathcal{R} \phi \|_V.
	\end{align}

	Symmetry is considered next. Let $(\bm{u}_\varphi^0, p_\varphi)$ be the solution to \eqref{eq: auxiliary problem _phi} with data $\varphi$.
By setting $(\\bm{v}^0, w) = (\\bm{u}_\\star^0, p_\\star)$ in the corresponding problem, it follows that\n\t\\begin{align*}\n\t\ta(\\bm{u}_\\varphi^0, \\bm{u}_\\star^0) \n\t\t+ b(\\bm{u}_\\star^0, p_\\varphi)\n\t\t+ b(\\bm{u}_\\varphi^0, p_\\star) \n\t\t&=\n\t\t- a(\\mathcal{R} \\varphi, \\bm{u}_\\star^0) - b(\\mathcal{R} \\varphi, p_\\star) \n\t\\end{align*}\n\tSubstituting this in definition \\eqref{eq: def Sigma} and using the symmetry of $a$, we obtain\n\t\\begin{align}\n\t\t\\langle \\Sigma \\phi, \\varphi \\rangle\n\t\t&= a(\\mathcal{R} \\phi, \\mathcal{R} \\varphi) + a(\\bm{u}_\\star^0, \\mathcal{R} \\varphi) + b(\\mathcal{R} \\varphi, p_\\star) \n\t\t= a(\\mathcal{R} \\phi, \\mathcal{R} \\varphi) - a(\\bm{u}_\\star^0, \\bm{u}_\\varphi^0) - b(\\bm{u}_\\varphi^0, p_\\star) \n\t\t - b(\\bm{u}_\\star^0, p_\\varphi),\n\t\\end{align}\n\tand symmetry of $\\Sigma$ is shown.\n\n\tWe continue by proving continuity of $\\Sigma$. Employing \\eqref{ineq: a_cont} and \\eqref{ineq: b_cont} once again, it follows that\n\t\\begin{align}\n\t\t\\langle \\Sigma \\phi, \\varphi \\rangle\n\t\t\\lesssim \n\t\t(\\| \\bm{u}_\\star^0 \\|_V + \\| \\mathcal{R} \\phi \\|_V + \\| p_\\star \\|_W)\n\t\t\\| \\mathcal{R} \\varphi \\|_V\n\t\t\\lesssim \n\t\t\\| \\mathcal{R} \\phi \\|_V\n\t\t\\| \\mathcal{R} \\varphi \\|_V\n\t\t\\lesssim \n\t\t\\| \\phi \\|_\\Lambda\n\t\t\\| \\varphi \\|_\\Lambda\n\t\\end{align}\n\tin which the second and third inequalities follow from \\eqref{eq: bound u_phi} and Lemma~\\ref{lem: norm equivalences}, respectively.\n\n\tIt remains to show coercivity, which we derive by setting $\\varphi = \\phi$ and $(\\bm{v}^0, w) = (\\bm{u}_\\star^0, p_\\star)$ in \\eqref{eq: auxiliary problem _phi}:\n\t\\begin{align}\n\t\t\\langle \\Sigma \\phi, \\phi \\rangle\n\t\t&= a(\\bm{u}_\\star^0 + \\mathcal{R} \\phi, \\mathcal{R} \\phi) + b(\\mathcal{R} \\phi, p_\\star) \n\t\n\t\t= a(\\bm{u}_\\star^0 + \\mathcal{R} \\phi, \\mathcal{R} \\phi) + a(\\bm{u}_\\star^0 + \\mathcal{R} 
\\phi, \\bm{u}_\\star^0) \n\t\n\t\t= a(\\bm{u}_\\star^0 + \\mathcal{R} \\phi, \\bm{u}_\\star^0 + \\mathcal{R} \\phi).\n\t\\end{align}\n\tNext, we observe that \\eqref{eq aux problem eq2} gives us $b(\\bm{u}_\\star^0 + \\mathcal{R} \\phi, w) = 0$ for all $w \\in W$. Thus, we use \\eqref{ineq: a_coercive} and Lemma~\\ref{lem: norm equivalences} to conclude that\n\t\\begin{align}\n\t\t\\langle \\Sigma \\phi, \\phi \\rangle \n\t\t&\\gtrsim \\| \\bm{u}_\\star^0 + \\mathcal{R} \\phi \\|_V^2\n\t\t\\gtrsim \\| \\phi \\|_\\Lambda^2.\n\t\\end{align}\n\\end{proof}\n\n\\begin{corollary}\n\tProblem \\eqref{eq: poincare steklov} is well-posed and the solution $\\phi \\in \\Lambda$ satisfies\n\t\\begin{align}\n\t\t\\| \\phi \\|_{\\Lambda}\n\t\t\\lesssim \n\t\t\\| \\mu^{-\\frac{1}{2}} \\bm{f}_S \\|_{-1, \\Omega_S}\n\t\n\t\t+ \\| K^{-\\frac{1}{2}} f_D \\|_{0, \\Omega_D}\n\t\t+ \\| K^{\\frac{1}{2}} g_p \\|_{\\frac{1}{2}, \\partial_p \\Omega_D}.\n\t\\end{align}\n\\end{corollary}\n\\begin{proof} \n\tAs shown in Theorem~\\ref{thm: sigma SPD}, $\\Sigma$ is symmetric and positive-definite. Therefore, \\eqref{eq: poincare steklov} admits a unique solution. We then set $\\bm{u} = \\bm{u}_\\star^0 + \\bm{u}_0^0 + \\mathcal{R} \\phi$ and $p = p_\\star + p_0$ and note that $(\\bm{u}, p)$ is the solution to \\eqref{eq: variational formulation}. By employing Lemma~\\ref{lem: norm equivalences}, we note that $\\| \\phi \\|_{\\Lambda} \\lesssim \\| \\bm{u} \\|_V + \\| p \\|_W$ and the proof is concluded using the result from Theorem~\\ref{thm: well-posedness}.\n\\end{proof}\n\n\\subsection{Neumann Problems}\n\\label{sub:neumann_cases}\n\nIn this section, we consider the case in which one, or both, of the subproblems have flux boundary conditions prescribed on the entire boundary. In other words, the cases in which $| \\partial_\\sigma \\Omega_S | = 0$ or $| \\partial_p \\Omega_D | = 0$. 
\nWe first introduce the setting in which one of the subdomains corresponds to a Neumann problem, followed by the case of $| \\partial_\\sigma \\Omega_S | = | \\partial_p \\Omega_D | = 0$.\n\n\\subsubsection{Single Neumann Subproblem}\n\\label{ssub:single_neumann_problem}\n\nLet us consider $| \\partial_p \\Omega_D | = 0$ and $| \\partial_\\sigma \\Omega_S | > 0$, noting that the converse case follows by symmetry. The complication in this case is that solving the Darcy subproblem results in a pressure distribution that is defined up to a constant. Thus, several preparatory steps need to be made before the interface problem can be formulated and solved. \n\nThe key idea is to pose the interface problem on the subspace of $\\Lambda$ containing functions with zero mean. This is done by introducing a function $\\phi_\\star$ that balances the source term in $\\Omega_D$ and subtracting this from the problem. The modified interface problem produces a pressure distribution with zero mean in $\\Omega_D$ and we obtain the true pressure average obtained afterwards.\n\nLet us first define the subspace $\\Lambda_0 \\subset \\Lambda$ of functions with zero mean, i.e.\n\\begin{align} \\label{def: Lambda_0}\n\t\\Lambda_0 \n\t:= \\left\\{ \\varphi_0 \\in \\Lambda :\\ (\\varphi_0, 1)_{\\Gamma} = 0 \\right\\}.\n\\end{align}\n\nWe continue by constructing $\\phi_\\star \\in \\Lambda \\setminus \\Lambda_0$. For that, we follow \\cite[Sec. 5.3]{quarteroni1999domain} and introduce $\\zeta \\in \\Lambda$ as an interface flux with non-zero mean. For convenience, we choose $\\zeta$ such that \n\\begin{align} \\label{eq: avg equal one}\n\t(\\zeta, 1)_{\\Gamma} = 1.\n\\end{align}\nAny bounded $\\zeta$ with this property will suffice for our purposes.\nAs a concrete example, we uniquely define $\\zeta$ by solving a minimization problem in $H_0^1(\\Gamma)$ with \\eqref{eq: avg equal one} as a constraint, similar to the construction \\eqref{eq: phi constraint} in Lemma~\\ref{lem: inequalities}. 
\n\nNext, we test the mass conservation equation \\eqref{eq: variational formulation 2nd eq} with $w = 1_{\\Omega_D}$, the indicator function of $\\Omega_D$. Due to the assumption $| \\partial_p \\Omega_D | = 0$, the divergence theorem gives us\n\\begin{align*}\n\tf_p(1_{\\Omega_D})\n\t= -(f_D, 1)_{\\Omega_D} \n\t= -(\\nabla \\cdot (\\bm{u}_D^0 + \\mathcal{R}_D \\phi), 1)_{\\Omega_D} \n\t= (\\phi, 1)_{\\Gamma}.\n\\end{align*}\nUsing this observation, we define the function $\\phi_\\star \\in \\Lambda \\setminus \\Lambda_0$ such that $(\\phi_\\star, 1)_\\Gamma = (\\phi, 1)_\\Gamma$ by setting\n\\begin{align}\n\t\\phi_\\star\n\t&:= \\zeta f_p(1_{\\Omega_D}).\n\\end{align}\n\nThe next step is to pose the interface problem, similar to \\eqref{eq: poincare steklov}, in this subspace: \\\\\nFind $\\phi_0 \\in \\Lambda_0$ such that \n\\begin{align} \\label{eq: poincare steklov_0}\n\t\\langle \\Sigma \\phi_0, \\varphi_0 \\rangle &=\n\t\\langle \\chi, \\varphi_0 \\rangle,\n\t& \\forall \\varphi_0 &\\in \\Lambda_0,\n\\end{align}\nwith $\\Sigma: \\Lambda \\to \\Lambda^*$ and $\\chi \\in \\Lambda^*$ redefined as\n\\begin{subequations} \\label{eqs: def sigma chi 0}\n\\begin{align}\n\t\\langle \\Sigma \\phi_0, \\varphi_0 \\rangle \n\t&:=\n\ta(\\bm{u}_0^0 + \\mathcal{R} \\phi_0, \\mathcal{R} \\varphi_0) + b(\\mathcal{R} \\varphi_0, p_0), \\\\\n\t\\langle \\chi, \\varphi_0 \\rangle\n\t&:= f_u(\\mathcal{R} \\varphi_0) - a(\\bm{u}_\\star^0 + \\mathcal{R} \\varphi_\\star, \\mathcal{R} \\varphi_0) - b(\\mathcal{R} \\varphi_0, p_\\star).\n\\end{align}\n\\end{subequations}\nThe construction of the pairs $(\\bm{u}_\\star^0, p_\\star)$ and $(\\bm{u}_0^0, p_0)$ now require solving the Darcy subproblem with pure Neumann conditions. 
We emphasize that due to the nature of these problems, the pressure distributions are defined up to a constant and we therefore enforce $p_\\star$ and $p_0$ to have mean zero with the use of Lagrange multipliers.\n\nIn particular, let $(\\bm{u}_0^0, p_0, r_0) \\in \\bm{V}^0 \\times W \\times \\mathbb{R}$ satisfy the following:\n\\begin{subequations} \\label{eqs: aux problem u0 p0}\n\t\\begin{align}\n\t\ta(\\bm{u}_0^0, \\bm{v}^0) + b(\\bm{v}^0, p_0)\n\t\t&= -a(\\mathcal{R} \\phi_0, \\bm{v}^0), \n\t\t& \\forall \\bm{v}^0 &\\in \\bm{V}^0, \\\\\n\t\t(r_0, w)_{\\Omega_D} + b(\\bm{u}_0^0, w)\n\t\t&= -b(\\mathcal{R} \\phi_0, w), \n\t\t& \\forall w &\\in W, \\label{eq: conservation aux 0}\\\\\n\t\t(p_0, s)_{\\Omega_D} &= 0,\n\t\t& \\forall s &\\in \\mathbb{R}.\n\t\\end{align}\n\\end{subequations}\nSimilarly, we let $(\\bm{u}_\\star^0, p_\\star, r_\\star) \\in \\bm{V}^0 \\times W \\times \\mathbb{R}$ solve\n\\begin{subequations} \\label{eqs: aux problem u* p*}\n\t\\begin{align}\n\t\ta(\\bm{u}_\\star^0, \\bm{v}^0) + b(\\bm{v}^0, p_\\star)\n\t\t&= -a(\\mathcal{R} \\phi_\\star, \\bm{v}^0) + f_u(\\bm{v}^0), \n\t\t& \\forall \\bm{v}^0 &\\in \\bm{V}^0, \\\\\n\t\t(r_\\star, w)_{\\Omega_D} + b(\\bm{u}_\\star^0, w)\n\t\t&= - b(\\mathcal{R} \\phi_\\star, w) + f_p(w), \n\t\t& \\forall w &\\in W, \\label{eq: conservation aux *}\\\\\n\t\t(p_\\star, s)_{\\Omega_D} &= 0,\n\t\t& \\forall s &\\in \\mathbb{R}.\n\t\\end{align}\n\\end{subequations}\nWe emphasize that setting $w = 1_{\\Omega_D}$ in the conservation equations \\eqref{eq: conservation aux 0} and \\eqref{eq: conservation aux *} yields $r_\\star = r_0 = 0$. Hence, these terms have no contribution to the mass balance.\n\nThe solution to problem \\eqref{eq: poincare steklov_0} allows us to construct the velocity distribution:\n\\begin{align}\n\t\\bm{u} := \\bm{u}_0^0 + \\bm{u}_\\star^0 + \\mathcal{R} (\\phi_0 + \\phi_\\star). 
\\label{eq: reconstructed flux}\n\\end{align}\n\nThe next step is to recover the correct pressure average in $\\Omega_D$. For that, we presume that the pressure solution is given by $p = p_0 + p_\\star + \\bar{p}$ with $\\bar{p} := c_D 1_{\\Omega_D}$ for some $c_D \\in \\mathbb{R}$. In other words, $\\bar{p}$ is zero in $\\Omega_S$ and a constant $c_D$ on $\\Omega_D$, to be determined next.\n\nUsing $\\zeta$ from \\eqref{eq: avg equal one}, we substitute this function in \\eqref{eq: variational formulation 1st eq} and choose the test function $\\bm{v} = \\mathcal{R} \\zeta$:\n\\begin{align*}\n\ta(\\bm{u}, \\mathcal{R} \\zeta) + b(\\mathcal{R} \\zeta, p_0 + p_\\star + \\bar{p}) = f_u(\\mathcal{R} \\zeta).\n\\end{align*}\nUsing this relationship and the divergence theorem, we make the following two observations:\n\\begin{subequations} \\label{eq: def c_D}\n\t\\begin{align}\n\t\tb(\\mathcal{R} \\zeta, \\bar{p}) \n\t\t&= \n\t\tf_u(\\mathcal{R} \\zeta)\n\t\t- a(\\bm{u}, \\mathcal{R} \\zeta) - b(\\mathcal{R} \\zeta, p_0 + p_\\star) \n\t\t= \\langle \\chi - \\Sigma \\phi_0, \\zeta \\rangle, \\\\\n\t\tb(\\mathcal{R} \\zeta, \\bar{p})\n\t\t&= - (\\nabla \\cdot \\mathcal{R} \\zeta, \\bar{p})_{\\Omega_D}\n\t\t= c_D(\\zeta, 1)_\\Gamma\n\t\t= c_D.\n\t\\end{align}\n\\end{subequations}\nCombining these two equations yields $c_D = \\langle \\chi - \\Sigma \\phi_0, \\zeta \\rangle$ and we set\n\\begin{align}\\label{eq: def p bar single}\n\t\\bar{p} := \\langle \\chi - \\Sigma \\phi_0, \\zeta \\rangle 1_{\\Omega_D}.\n\\end{align}\n\nFinally, by setting\n$p := p_0 + p_\\star + \\bar{p}$,\nwe have obtained $(\\bm{u}, p) \\in \\bm{V} \\times W$, the solution to \\eqref{eq: variational formulation}. We remark that the well-posedness of \\eqref{eq: poincare steklov_0} follows by the same arguments as in \\Cref{thm: sigma SPD}. 
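As a consistency check for the reconstruction \eqref{eq: reconstructed flux}, we note that the interface flux carries the correct total mass: since $\phi_0$ has zero mean and $(\zeta, 1)_{\Gamma} = 1$, it follows that
\begin{align*}
	(\phi_0 + \phi_\star, 1)_{\Gamma}
	= (\phi_\star, 1)_{\Gamma}
	= f_p(1_{\Omega_D}) (\zeta, 1)_{\Gamma}
	= -(f_D, 1)_{\Omega_D},
\end{align*}
so the flux through $\Gamma$ balances the source in $\Omega_D$, in accordance with the divergence theorem.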
\n\n\\subsubsection{Coupled Neumann Problems} \n\\label{ssub:coupled_neumann_problems}\n\nIn this case, we have flux conditions prescribed on the entire boundary, i.e. $\\partial \\Omega = \\partial_u \\Omega_S \\cup \\partial_u \\Omega_D$. \nWe follow the same steps as in Section~\\ref{ssub:single_neumann_problem}, and highlight the differences required to treat this case.\n\nLet us consider a slightly more general case than \\eqref{eq: bilinear forms} by including a source function $f_S$ in the Stokes subdomain. In other words, the right-hand side of the mass balance equations is given by\n\\begin{align}\n\tf_p(w) := -(f_S, w_S)_{\\Omega_S} -(f_D, w_D)_{\\Omega_D}.\n\\end{align}\nBy compatibility of the source function with the boundary conditions, it follows that $f_p(1) = 0$ and therefore\n\\begin{align}\n\tf_p(1_{\\Omega_D})\n\t= (f_D, 1)_{\\Omega_D}\n\t= -(f_S, 1)_{\\Omega_S}\n\t= - f_p(1_{\\Omega_S}).\n\\end{align}\n\nLet $\\Lambda_0 \\subset \\Lambda$ be defined as in \\eqref{def: Lambda_0} and $\\zeta$ as in \\eqref{eq: avg equal one}. Using the same arguments as in the previous section, we define\n\\begin{align}\n\t\\phi_\\star\n\t&:= \\zeta f_p(1_{\\Omega_D})\n\t= - \\zeta f_p(1_{\\Omega_S}).\n\\end{align}\n\nThe operators $\\Sigma$ and $\\chi$ are defined as in \\eqref{eqs: def sigma chi 0} with the only difference being in the pairs of functions $(\\bm{u}_0^0, p_0)$ and $(\\bm{u}_\\star^0, p_\\star)$. As before, these pairs are constructing by solving the separate subproblems. Since these correspond to Neumann problems, it follows that the pressure distributions $p_0$ and $p_\\star$ are defined up to a constant in each subdomain. \nWe therefore enforce zero mean of these variables in each subdomain with the use of a Lagrange multiplier $s \\in S$. 
Let $S$ be the space of piecewise constant functions given by\n\\begin{align}\n\tS := \\operatorname{span}\\{ 1_{\\Omega_S}, 1_{\\Omega_D} \\}.\n\\end{align}\n\nWe augment problem \\eqref{eqs: aux problem u0 p0} to: \\\\\nFind $(\\bm{u}_0^0, p_0, r_0) \\in \\bm{V}^0 \\times W \\times S$ such that\n\\begin{subequations}\n\t\\begin{align}\n\t\ta(\\bm{u}_0^0, \\bm{v}^0) + b(\\bm{v}^0, p_0)\n\t\t&= -a(\\mathcal{R} \\phi_0, \\bm{v}^0), \n\t\t& \\forall \\bm{v}^0 &\\in \\bm{V}^0, \\\\\n\t\t(r_0, w)_{\\Omega} + b(\\bm{u}_0^0, w)\n\t\t&= -b(\\mathcal{R} \\phi_0, w), \n\t\t& \\forall w &\\in W,\\\\\n\t\t(p_0, s)_{\\Omega} &= 0,\n\t\t& \\forall s &\\in S.\n\t\\end{align}\n\\end{subequations}\nProblem \\eqref{eqs: aux problem u* p*} is changed analogously to produce $(\\bm{u}_\\star^0, p_\\star, r_\\star) \\in \\bm{V}^0 \\times W \\times S$. After solving the interface problem, all ingredients are available to construct the velocity $\\bm{u}$ as in \\eqref{eq: reconstructed flux}.\n\nIn the construction of the pressure $p$, we compute $c_D = \\langle \\chi - \\Sigma \\phi_0, \\zeta \\rangle$ using the same arguments as in \\eqref{eq: def c_D}. Since the pressure is globally defined up to a constant, we ensure that the pressure distribution has mean zero on $\\Omega$ by setting\n\\begin{align} \\label{eq: def p bar pure}\n\t\\bar{p} := \\langle \\chi - \\Sigma \\phi_0, \\zeta \\rangle \\left(\n\t1_{\\Omega_D} - \\frac{| \\Omega_D |}{| \\Omega |}\n\t\\right).\n\\end{align}\nAs before, we set\n$p := p_0 + p_\\star + \\bar{p}$ and obtain the solution $(\\bm{u}, p)$ of the original problem \\eqref{eq: variational formulation}.\n\n\\section{Discretization} \n\\label{sec:discretization}\n\nThis section presents the discretization of problem \\eqref{eq: variational formulation} with the use of the Mixed Finite Element method. 
By introducing the interface flux as a separate variable, we derive a mortar method reminiscent of \cite{boon2018robust,nordbotten2019unified}, presented there in the context of fracture flow. The focus in this section is to introduce this flux-mortar method for the coupled Stokes-Darcy problem and show its stability.

Let $\Omega_{S, h}$, $\Omega_{D, h}$, and $\Gamma_h$ be shape-regular tessellations of $\Omega_S$, $\Omega_D$, and $\Gamma$, respectively. Let $\Omega_{S, h}$ and $\Omega_{D, h}$ be constructed independently and consist of simplicial or quadrangular (hexahedral in 3D) elements. Similarly, $\Gamma_h$ is a simplicial or quadrangular mesh of dimension $n - 1$, constructed according to the restrictions mentioned below. 

We impose the following three restrictions on the Mixed Finite Element discretization:
\begin{enumerate}
	\item
	For the purpose of structure preservation, the finite element spaces are chosen such that
	\begin{subequations} \label{eq: inclusions}
		\begin{align}
			\bm{V}_{S, h} 
			&\subset \bm{V}_S, &
			\bm{V}_{D, h} 
			&\subset \bm{V}_D, &
			\Lambda_h 
			&\subset \Lambda, \\
			W_{S, h} 
			&\subset W_S, & 
			W_{D, h} 
			&\subset W_D.
		\end{align}
	\end{subequations}
	It is convenient, but not necessary, to define $\Gamma_h$ as the trace mesh of $\Omega_{S, h}$ and $\Lambda_h = (\bm{n} \cdot \bm{V}_{S, h})|_\Gamma$. In this case, it follows that $\Lambda_h \subset H_0^1(\Gamma) \subset H_{00}^{1/2}(\Gamma) = \Lambda$.
We moreover define
	\begin{align}
		\bm{V}_{S, h}^0 
			&:= \bm{V}_{S, h} \cap \bm{V}_S^0, &
		\bm{V}_{D, h}^0 
			&:= \bm{V}_{D, h} \cap \bm{V}_D^0.
	\end{align}

	\item
	The Mixed Finite Element spaces $\bm{V}_{i, h} \times W_{i, h}$ with $i \in \{S, D\}$ are chosen to form stable pairs for the Stokes and Darcy (sub)systems, respectively.
	In particular, bounded interpolation operators $\Pi_{V_i}$ exist for $i \in \{S, D\}$ such that for all $\bm{v} \in \bm{V} \cap H^\epsilon(\Omega)$ with $\epsilon > 0$:
	\begin{align} \label{eq: commutative}
		\Pi_{W_S} \nabla \cdot ((I - \Pi_{V_S}) \bm{v}_S) &= 0, & 
		\nabla \cdot (\Pi_{V_D} \bm{v}_D) &= \Pi_{W_D} \nabla \cdot \bm{v}_D,
	\end{align}
	in which $\Pi_{W_i}$ is the $L^2$-projection onto $W_{i, h}$. 
	Moreover, to ensure local mass conservation, we assume that the space of piecewise constants $(P_0)$ is contained in $W_h$.

	Examples for the Stokes subproblem include $\bm{P}_2-P_0$ in the two-dimensional case as well as the Bernardi-Raugel pair \cite{bernardi1985analysis}. 
	For the Darcy subproblem, stable choices of low-order finite elements include the Raviart-Thomas pair $RT_0-P_0$ \cite{raviart1977mixed} and the Brezzi-Douglas-Marini pair $BDM_1-P_0$ \cite{brezzi1985two}.
For more examples of stable Mixed Finite Element pairs, we refer the reader to \cite{boffi2013mixed}.

	\item
	For $\phi_h \in \Lambda_h$, let the discrete extension operators $\mathcal{R}_{i, h}: \Lambda_h \to \bm{V}_{i, h}$ with $i \in \{S, D\}$ satisfy
	\begin{align} \label{eq: extension property h}
		(\phi_h - \bm{n} \cdot \mathcal{R}_{i, h} \phi_h, \bm{n} \cdot \bm{v}_{i, h} )_{\Gamma} &= 0, & \forall \bm{v}_{i, h} &\in \bm{V}_{i, h}.
	\end{align}
	The extension operators are continuous in the sense that for $\phi_h \in \Lambda_h$, we have
	\begin{align} \label{eq: continuity R_h}
		\| \mathcal{R}_{S, h} \phi_h \|_{1, \Omega_S} &\lesssim \| \phi_h \|_{\frac{1}{2}, \Gamma}, &
		\| \mathcal{R}_{D, h} \phi_h \|_{0, \Omega_D} + \| \nabla \cdot \mathcal{R}_{D, h} \phi_h \|_{0, \Omega_D} &\lesssim \| \phi_h \|_{-\frac{1}{2}, \Gamma}.
	\end{align}
	We define $\mathcal{R}_h := \mathcal{R}_{S, h} \oplus \mathcal{R}_{D, h}$. Let the mesh $\Gamma_h$ and function space $\Lambda_h$ be chosen such that the kernel of $\mathcal{R}_h$ is zero:
	\begin{align}
		\mathcal{R}_h \phi_h = 0 \text{ if and only if } \phi_h = 0.
	\end{align}
	We remark that this is a common restriction encountered in mortar methods (see e.g. \cite{arbogast2000mixed}) and can be satisfied by choosing $\Gamma_h$ sufficiently coarse or constructing $\Lambda_h$ using polynomials of lower order.
\n\\end{enumerate}\n\nWith the above restrictions in place, we define the discretizations of the combined spaces $\\bm{V}$ and $W$ as\n\\begin{subequations}\n\\begin{align}\n\t\\bm{V}_h &:= \\left\\{ \\bm{v}_h :\\ \n\t\\exists (\\bm{v}_{S, h}^0, \\varphi_h, \\bm{v}_{D, h}^0) \\in \\bm{V}_{S, h}^0 \\times \\Lambda_h \\times \\bm{V}_{D, h}^0\n\t\\text{ such that } \\bm{v}_h|_{\\Omega_i} = \\bm{v}_{i, h}^0 + \\mathcal{R}_{i, h} \\varphi_h, \\text{ for } i \\in \\{S, D\\} \\right\\}\n\n\t, \\\\\n\tW_h &:= W_{S, h} \\times W_{D, h}.\n\t\n\\end{align}\n\\end{subequations}\n\nAs in the continuous case, the function space $\\bm{V}_h$ is independent of the choice of extension operators. We remark that in the case of non-matching grids or if different polynomial orders are chosen for $\\bm{V}_{S, h}$ and $\\bm{V}_{D, h}$, we have $V_h \\not \\subset V$ due to the weaker property of $\\mathcal{R}_h$ in \\eqref{eq: extension property h} as opposed to $\\mathcal{R}$ defined by \\eqref{eq: extension property}. 
Nevertheless, normal flux continuity is imposed in the sense that the normal traces of $\bm{v}_{S, h}$ and $\bm{v}_{D, h}$ are $L^2$ projections of a single variable $\phi_h$.

Again, the subscript $i \in \{S, D\}$ distinguishes the restrictions of $(\bm{v}, w) \in \bm{V}_h \times W_h$ to the different subdomains:
\begin{align}
	\bm{v}_{i, h} &:= \bm{v}_{i, h}^0 + \mathcal{R}_{i, h} \varphi_h \in \bm{V}_{i, h}, & 
	w_{i, h} &:= w_h|_{\Omega_i} \in W_{i, h}.
\end{align}

We finish this section by formally stating the discrete problem: \\
Find the pair $(\bm{u}_h, p_h) \in \bm{V}_h \times W_h$ such that
\begin{subequations} \label{eq: variational formulation_h}
	\begin{align}
		a(\bm{u}_h, \bm{v}_h) + b(\bm{v}_h, p_h) &= f_u(\bm{v}_h),
		& \forall \bm{v}_h &\in \bm{V}_h, \\
		b(\bm{u}_h, w_h) &= f_p(w_h), 
		& \forall w_h &\in W_h,
	\end{align}
\end{subequations}
with the bilinear forms and functionals defined in \eqref{eq: bilinear forms}.

With the chosen spaces, the discretizations on $\Omega_{S, h}$ and $\Omega_{D, h}$ are stable and consistent for the Stokes and Darcy subproblems, respectively. However, in order to show stability of the method for the fully coupled problem \eqref{eq: variational formulation_h}, we briefly confirm that the relevant inequalities hold independently of the mesh parameter $h$.

\begin{lemma} \label{lem: inequalities_h}
	The following inequalities are satisfied:
	\begin{subequations}
	\begin{align}
			& \text{For } \bm{u}_h, \bm{v}_h \in \bm{V}_h: 
			& a(\bm{u}_h, \bm{v}_h) &\lesssim \| \bm{u}_h \|_V \| \bm{v}_h \|_V. \label{ineq: a_cont_h}\\
			& \text{For } (\bm{v}_h, w_h) \in \bm{V}_h \times W_h: 
			& b(\bm{v}_h, w_h) &\lesssim \| \bm{v}_h \|_V \| w_h \|_W.
\\label{ineq: b_cont_h}\\\\\n\t\t\t& \\text{For } \\bm{v}_h \\in \\bm{V}_h \\text{ with } b(\\bm{v}_h, w_h) = 0 \\ \\forall w_h \\in W_h: \n\t\t\t& a(\\bm{v}_h, \\bm{v}_h) &\\gtrsim \\| \\bm{v}_h \\|_V^2. \\label{ineq: a_coercive_h}\\\\\n\t\t\t& \\text{For } w_h \\in W_h, \\ \\exists \\bm{v}_h \\in \\bm{V}_h \\text{ with } \\bm{v}_h \\ne 0, \\text{ such that}: \n\t\t\t& b(\\bm{v}_h, w_h) &\\gtrsim \\| \\bm{v}_h \\|_V \\| w_h \\|_W.\\label{ineq: b_infsup_h}\n\t\t\\end{align}\n\t\\end{subequations}\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tInequalities \\eqref{ineq: a_cont_h} and \\eqref{ineq: b_cont_h} follow using the same arguments as \\eqref{ineqs: continuity} in \\Cref{lem: inequalities}. Continuing with \\eqref{ineq: a_coercive_h}, we note from \\eqref{eq: commutative} that $b(\\bm{v}_h, w_h) = 0$ implies\n\t\t\\begin{align*}\n\t\t\t0 = \\Pi_{W_D} \\nabla \\cdot \\bm{v}_{D, h} = \\nabla \\cdot (\\Pi_{V_D} \\bm{v}_{D, h}) = \\nabla \\cdot \\bm{v}_{D, h}.\n\t\t\\end{align*}\n\t\tHence, the same derivation as \\eqref{eq: proof 3.5c} is followed to give us \\eqref{ineq: a_coercive_h}. \n\n\t\tFor the final inequality, we consider $w_h \\in W_h$ given and follow the strategy of \\Cref{lem: inequalities}. First, we set up a minimization problem in $\\Lambda_h$ analogous to \\eqref{eq: phi constraint} to obtain a bounded $\\phi_h \\in \\Lambda_h$ such that\n\t\t\\begin{subequations}\n\t\t\\begin{align}\n\t\t\t\\| \\phi_h \\|_{1, \\Gamma} &\\lesssim \\| K w_{D, h} \\|_{0, \\Omega_D}, \\\\\n\t\t\t(\\nabla \\cdot \\mathcal{R}_{D, h} \\phi_h, 1)_{\\Omega_D} &=\n\t\t\t(- \\bm{n} \\cdot \\mathcal{R}_{D, h} \\phi_h, 1)_{\\Gamma} = \n\t\t\t(- \\phi_h, 1)_{\\Gamma} = \n\t\t\t- (K w_{D, h}, 1)_{\\Omega_D}. \\label{eq: compatibility of phi_h}\n\t\t\\end{align}\n\t\t\\end{subequations}\n\n\t\tNext, we note that $w_h \\in W_h \\subset W$. 
In turn, we use the auxiliary problems \\eqref{eqs: aux prob p_S} and \\eqref{eqs: aux prob p_D} to construct $\\bm{v}_S^0 \\in \\bm{V}_S^0$ and $\\bm{v}_D^0 \\in \\bm{V}_D^0$ such that\n\t\t\\begin{subequations}\n\t\t\\begin{align}\n\t\t\t- \\nabla \\cdot \\bm{v}_S^0 &= \\mu^{-1} w_{S, h} + \\nabla \\cdot \\mathcal{R}_{S, h} \\phi_h, \\\\\n\t\t\t- \\nabla \\cdot \\bm{v}_D^0 &= K w_{D, h} + \\nabla \\cdot \\mathcal{R}_{D, h} \\phi_h, \\\\\n\t\t\t\\| \\bm{v}_S^0 \\|_{1, \\Omega_S} &\\lesssim \n\t\t\t\\| \\mu^{-1} w_{S, h} \\|_{0, \\Omega_S} + \\| \\nabla \\cdot \\mathcal{R}_{S, h} \\phi_h \\|_{0, \\Omega_S}, \\\\\n\t\t\t\\| \\bm{v}_D^0 \\|_{1, \\Omega_D} &\\lesssim \n\t\t\t\\| K w_{D, h} \\|_{0, \\Omega_D} + \\| \\nabla \\cdot \\mathcal{R}_{D, h} \\phi_h \\|_{0, \\Omega_D}.\n\t\t\\end{align}\n\t\t\\end{subequations}\n\n\t\tWe then employ the interpolation operators from \\eqref{eq: commutative} to create $\\bm{v}_{S, h}^0 = \\Pi_{V_S} \\bm{v}_S^0$ and $\\bm{v}_{D, h}^0 = \\Pi_{V_D}\\bm{v}_D^0$. Using the commutative properties, we obtain\n\t\t\\begin{align*}\n\t\t\tb(\\bm{v}_h, w_h) &\n\t\t\t= - \\sum_{i \\in \\{S, D\\}} (\\nabla \\cdot (\\Pi_{V_i} \\bm{v}_i^0 + \\mathcal{R}_{i, h} \\phi_h), w_{i, h})_{\\Omega_i}\n\t\t\t= (\\mu^{-1} w_{S, h}, w_{S, h})_{\\Omega_S} + (K w_{D, h}, w_{D, h})_{\\Omega_D}\n\t\t\t= \\| w_h \\|_W^2.\n\t\t\\end{align*}\n\t\tMoreover, by the boundedness of these interpolation operators, we have\n\t\t\\begin{align*}\n\t\t\t\\| \\bm{v}_h \\|_V \n\t\t\t\\le \\| \\bm{v}_h^0 \\|_V + \\| \\mathcal{R}_h \\phi_h \\|_V\n\t\t\t\\lesssim \\| \\bm{v}^0 \\|_V + \\| \\phi_h \\|_\\Lambda\n\t\t\t\\lesssim \\| w_h \\|_W,\n\t\t\\end{align*}\n\t\tproving the final inequality \\eqref{ineq: b_infsup_h}.\n\t\\end{proof}\n\n\t\\begin{theorem}\n\t\tIf the three conditions presented at the beginning of this section are satisfied, then the discretization method is stable, i.e. 
a unique solution $(\bm{u}_h, p_h) \in \bm{V}_h \times W_h$ exists for \eqref{eq: variational formulation_h} satisfying
	\begin{align}
		\| \bm{u}_h \|_V + \| p_h \|_W 
		\lesssim 
		\| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
		+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
		+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D}.
	\end{align}
	\end{theorem}
	\begin{proof}
		This result follows from \Cref{lem: inequalities_h}, the continuity of the right-hand side from \Cref{thm: well-posedness}, and saddle point theory.
	\end{proof}

\section{Iterative Solution Method} 
\label{sec:iterative_solvers}

With well-posedness of the discrete system shown in the previous section, we continue by constructing an efficient method to solve the coupled system in an iterative manner. The scheme is introduced in three steps. We first present the discrete Steklov-Poincar\'e system that we aim to solve using a Krylov subspace method. Second, a parameter-robust preconditioner is introduced for the reduced system. The third step combines these two ideas to form an iterative method that respects mass conservation at each iteration.

\subsection{Discrete Steklov-Poincar\'e System} 
\label{sub:discrete_poincar}

Similar to the continuous case in Section~\ref{sec:the_steklov_poincare_system}, we reduce the problem to the interface flux variable $\phi_h \in \Lambda_h$. The reduced system is a direct translation of \eqref{eq: poincare steklov} to the discrete setting: \\
Find $\phi_h \in \Lambda_h$ such that
\begin{align} \label{eq: poincare steklov_h}
	\left \langle \Sigma_h \phi_h, \varphi_h \right \rangle &= \left \langle \chi_h, \varphi_h \right \rangle, &
	\forall \varphi_h &\in \Lambda_h.
\end{align}

To ease the implementation of the scheme, we focus particularly on the structure of the operator $\Sigma_h$.
For that, we note that the space $\bm{V}_h^0$ can be decomposed orthogonally into $\bm{V}_{S, h}^0 \times \bm{V}_{D, h}^0$, and a similar decomposition holds for $W_h$. The aim is to propose a solution method that exploits this property. For brevity, the subscript $h$ is omitted on all variables and operators, keeping in mind that the remainder of this section concerns the discretized setting.

Let us rewrite the bilinear forms $a$ and $b$ in terms of duality pairings, thereby revealing the matrix structure of the problem. For that, we group the terms according to the subdomains and let the operators $A_i: \bm{V}_{i, h} \to \bm{V}_{i, h}^*$ and $B_i: \bm{V}_{i, h} \to W_{i, h}^*$ be defined for $i \in \{S, D\}$ such that
\begin{align*}
	\left \langle A_S \bm{u}_S, \bm{v}_S \right \rangle 
	&= (\mu \varepsilon(\bm{u}_S), \varepsilon(\bm{v}_S))_{\Omega_S} 
		+ (\beta \bm{\tau} \cdot \bm{u}_S, \bm{\tau} \cdot \bm{v}_S )_{\Gamma}, \\
	\left \langle A_D \bm{u}_D, \bm{v}_D \right \rangle 
	&= (K^{-1} \bm{u}_D, \bm{v}_D)_{\Omega_D}, \\
	\left \langle B_S \bm{u}_S, w_S \right \rangle 
	&= -(\nabla \cdot \bm{u}_S, w_S)_{\Omega_S}, \\
	\left \langle B_D \bm{u}_D, w_D \right \rangle 
	&= -(\nabla \cdot \bm{u}_D, w_D)_{\Omega_D}.
\end{align*}

Let $A_{i, 0}$ and $B_{i, 0}$ be the respective restrictions of the above to the subspace $\bm{V}_{i, h}^0$.
With these operators and the decomposition $\bm{u}_i = \bm{u}_i^0 + \mathcal{R}_i \phi$ for the trial and test functions, problem \eqref{eq: variational formulation_h} attains the following matrix form:
\begin{align} \label{eq: matrix form}
	\begin{bmatrix}
		A_{S, 0} & B_{S, 0}^T & & & A_S \mathcal{R}_S \\[5pt]
		B_{S, 0} & & & & B_S \mathcal{R}_S \\[5pt]
		 & & A_{D, 0} & B_{D, 0}^T & A_D \mathcal{R}_D \\[5pt]
		 & & B_{D, 0} & & B_D \mathcal{R}_D \\[5pt]
		(A_S
\\mathcal{R}_S)^T & (B_S \\mathcal{R}_S)^T & (A_D \\mathcal{R}_D)^T & (B_D \\mathcal{R}_D)^T & \\sum_i \\mathcal{R}_i^T A_i \\mathcal{R}_i\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\t\t\\bm{u}_S^0 \\\\[5pt]\n\t\tp_S \\\\[5pt]\n\t\t\\bm{u}_D^0 \\\\[5pt]\n\t\tp_D \\\\[5pt]\n\t\t\\phi\n\t\\end{bmatrix}\n\t&= \n\t\\begin{bmatrix}\n\t\tf_{S, u} \\\\[5pt]\n\t\tf_{S, p} \\\\[5pt]\n\t\tf_{D, u} \\\\[5pt]\n\t\tf_{D, p} \\\\[5pt]\n\t\tf_{\\phi} \n\t\\end{bmatrix}.\n\\end{align}\nIn practice, the discrete extension operators are chosen to only have support in the elements adjacent to $\\Gamma$, leading to a favorable sparsity pattern.\nWe moreover note that the final row corresponds to test functions $\\varphi \\in \\Lambda_h$.\nThe right-hand side of \\eqref{eq: matrix form} is defined such that\n\\begin{align}\n\t\\left \\langle f_{S, u}, \\bm{v}_S^0 \\right \\rangle \n\t+ \\left \\langle f_{S, p}, w_S \\right \\rangle \n\t+ \\left \\langle f_{D, u}, \\bm{v}_D^0 \\right \\rangle\n\t+ \\left \\langle f_{D, p}, w_D \\right \\rangle\n\t+ \\left \\langle f_{\\phi}, \\varphi \\right \\rangle\n\t&=\n\tf_u(\\bm{v}) + f_p(w), &\n\t\\forall (\\bm{v}, w) &\\in \\bm{V}_h \\times W_h.\n\\end{align}\n\nThe discrete Steklov-Poincar\\'e system is obtained by taking a Schur-complement of this system. 
In particular, we obtain $\Sigma_h$ and the right-hand side $\chi_h$ as
\begin{subequations}
\begin{align} 
	\label{eq: def sigma_h}
	\Sigma_h &:= \sum_{i \in \{S, D\} } 
	\mathcal{R}_i^T A_i \mathcal{R}_i
	- \mathcal{R}_i^T [A_i \ B_i^T] 
	\begin{bmatrix}
	A_{i, 0} & B_{i, 0}^T \\[5pt]
	B_{i, 0} & \end{bmatrix}^{-1}
	\begin{bmatrix}
		A_i \\[5pt]
		B_i
	\end{bmatrix}
	\mathcal{R}_i, \\
	\chi_h &:= f_{\phi}
	- \sum_{i \in \{S, D\} } \mathcal{R}_i^T [A_i \ B_i^T] 
	\begin{bmatrix}
	A_{i, 0} & B_{i, 0}^T \\[5pt]
	B_{i, 0} & \end{bmatrix}^{-1}
	\begin{bmatrix}
		f_{i, u} \\[5pt]
		f_{i, p}
	\end{bmatrix}.
	\label{eq: def chi_h}
\end{align}
\end{subequations}

We employ a Krylov subspace method to solve \eqref{eq: poincare steklov_h} iteratively, thereby avoiding the computationally costly assembly of $\Sigma_h$. In order to obtain a parameter-robust iterative method, the next step is to introduce an appropriate preconditioner, as presented in the next section. 

\subsection{Parameter-Robust Preconditioning}
\label{sub:parameter_robust_preconditioning}

In this section, we construct the preconditioner such that the resulting iterative method is robust with respect to the material parameters ($K$ and $\mu$) and the mesh size ($h$). For that, we use the parameter-dependent norm $\| \cdot \|_\Lambda$ from \eqref{eq: norm phi} and follow the framework presented in \cite{mardal2011preconditioning} to form a norm-equivalent preconditioner. In particular, we use the following result from that work:
\begin{lemma}
	Let $\Sigma: \Lambda \to \Lambda^*$ be a bounded, symmetric, positive-definite operator and let $\mathcal{P}: \Lambda^* \to \Lambda$ be a symmetric positive definite operator.
If the induced norm $\\| \\phi \\|_{\\mathcal{P}^{-1}}^2 := \\left \\langle \\mathcal{P}^{-1} \\phi, \\phi \\right \\rangle$ satisfies\n\t\\begin{align} \\label{eq: norm equivalence precond}\n\t\t\\| \\phi \\|_{\\Lambda}^2 \\lesssim\n\t\t\\| \\phi \\|_{\\mathcal{P}^{-1}}^2 \\lesssim\n\t\t\\| \\phi \\|_{\\Lambda}^2,\n\t\\end{align}\n\tthen $\\mathcal{P}$ is a robust preconditioner in the sense that the condition number of $\\mathcal{P} \\Sigma$ is bounded independent of the material and mesh parameters.\n\\end{lemma}\n\nNote that symmetry of $\\Sigma_h$ is apparent from \\eqref{eq: def sigma_h}. Positive definiteness follows using the same arguments as in \\Cref{thm: sigma SPD}. The next step is therefore to create an operator $\\mathcal{P}^{-1}$ that generates a norm which is equivalent to $\\| \\cdot \\|_\\Lambda$ on $\\Lambda_h$. Recall from \\eqref{eq: norm phi} that $\\| \\cdot \\|_\\Lambda$ is composed of fractional Sobolev norms. The key idea is to introduce a matrix $\\mathsf{H}(s)$ that induces a norm which is equivalent to $H^s(\\Gamma)$ for $s = \\pm \\frac{1}{2}$. 
We apply the strategy explained in \cite{kuchta2016preconditioners} to achieve this, of which a short description follows.

For a given basis $\{\phi_i\}_{i = 1}^{n_\Lambda} \subset \Lambda_h$ with $n_\Lambda$ the dimension of $\Lambda_h$, let the mass matrix $\mathsf{M}$ and stiffness matrix $\mathsf{A}$ be defined such that
\begin{align}
	\mathsf{M}_{ij} &:= ( \phi_j, \phi_i )_{\Gamma}, & 
	\mathsf{A}_{ij} &:= ( \nabla_\Gamma \phi_j, \nabla_\Gamma \phi_i )_{\Gamma}.
\end{align}

Then, a complete set of eigenvectors $\mathsf{v}_i \in \mathbb{R}^{n_\Lambda}$ and eigenvalues $\lambda_i \in \mathbb{R}$ exists, solving the generalized eigenvalue problem 
\begin{align} \label{eq: generalized eigenvalue problem}
	\mathsf{A} \mathsf{v}_i = \lambda_i \mathsf{M} \mathsf{v}_i.
\end{align}
The eigenvectors satisfy $\mathsf{v}_i^\mathsf{T} \mathsf{M} \mathsf{v}_j = \delta_{ij}$ with $\delta_{ij}$ the Kronecker delta. Let $\mathsf{\Lambda} := \operatorname{diag}([\lambda_i]_{i = 1}^{n_\Lambda})$ be the diagonal matrix of eigenvalues and let $\mathsf{V}$ be the matrix with $\mathsf{v}_i$ as its columns. The following eigendecomposition then holds:
\begin{align}
	\mathsf{A} = \mathsf{(MV) \Lambda (MV)^T}.
\end{align}

Using the matrices $\mathsf{M}$, $\mathsf{V}$, and $\mathsf{\Lambda}$, we define the operator $\mathsf{H}: \mathbb{R} \to \mathbb{R}^{n_\Lambda \times n_\Lambda}$ as
\begin{align}
	\mathsf{H}(s) = \mathsf{(MV) \Lambda}^s \mathsf{(MV)^T}.
\end{align}
An advantage of this construction is that its inverse can be directly computed as $\mathsf{H}(s)^{-1} = \mathsf{V \Lambda}^{-s} \mathsf{V^T}$ due to $\mathsf{V^TMV = I}$. Next, we emphasize that $\mathsf{H}(0) = \mathsf{M}$ and $\mathsf{H}(1) = \mathsf{A}$, i.e. the discrete $L^2(\Gamma)$ and $H^1(\Gamma)$ norms on $\Lambda_h$ are generated for $s = 0$ and $s = 1$, respectively.
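A minimal NumPy/SciPy sketch of this construction may be useful; it uses hypothetical 1D $P_1$ mass and stiffness matrices as stand-ins for $\mathsf{M}$ and $\mathsf{A}$, not the actual interface assembly.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 1D P1 mass matrix M and stiffness matrix A on a uniform
# interface mesh with n interior nodes and mesh size h.
n, h = 8, 1.0 / 8
M = h / 6 * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
A = 1 / h * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Generalized eigenproblem A v = lambda M v; eigh returns eigenvectors
# that are M-orthonormal, i.e. V.T @ M @ V = I.
lam, V = eigh(A, M)

def H(s):
    """(MV) Lambda^s (MV)^T: induces a norm equivalent to H^s(Gamma)."""
    MV = M @ V
    return MV @ np.diag(lam**s) @ MV.T

# Sanity checks: H(0) is the L2 matrix, H(1) the H1 matrix, and the
# inverse is available in closed form as V Lambda^{-s} V^T.
assert np.allclose(H(0), M)
assert np.allclose(H(1), A)
assert np.allclose(np.linalg.inv(H(0.5)), V @ np.diag(lam**-0.5) @ V.T)
```

The closed-form inverse is what makes the fractional powers $s = \pm \tfrac{1}{2}$ practical in the preconditioner.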
As a generalization, the norm induced by the matrix $\\mathsf{H}(s)$ is equivalent to the $H^s(\\Gamma)$ norm on the discrete space $\\Lambda_h$ \\cite{kuchta2016preconditioners}. In other words, \n\\begin{align}\n\t\\| \\phi \\|_{s, \\Gamma}^2\n\t\\lesssim (\\pi \\phi)^T \\mathsf{H}(s) (\\pi \\phi)\n\t\\lesssim \\| \\phi \\|_{s, \\Gamma}^2,\n\\end{align}\nin which $\\pi$ is the representation operator in the basis $\\{\\phi_i\\}_{i = 1}^{n_\\Lambda}$.\n\nNext, we use these tools to define our preconditioner following the strategy of \\cite{mardal2011preconditioning}. The operator $\\mathsf{P}^{-1}: \\mathbb{R}^{n_{\\Lambda}} \\to \\mathbb{R}^{n_{\\Lambda}}$ is defined according to the norm $\\| \\cdot \\|_{\\Lambda}$ from \\eqref{eq: norm phi}:\n\\begin{align}\n\t\\mathsf{P}^{-1} := \\mu \\mathsf{H} \\left(\\tfrac{1}{2}\\right) + K^{-1} \\mathsf{H} \\left(-\\tfrac{1}{2}\\right).\n\\end{align}\n\nDefining $\\mathcal{P}^{-1} := \\pi^T \\mathsf{P}^{-1} \\pi$, we obtain the equivalence relation \\eqref{eq: norm equivalence precond} by construction. In turn, the inverse operator $\\mathcal{P}$ is an optimal preconditioner for the system $\\eqref{eq: poincare steklov_h}$. The matrix $\\mathsf{P}$ is explicitly computed using the properties of $\\mathsf{V}$ and $\\mathsf{M}$:\n\\begin{align} \\label{eq: preconditioner}\n\t\\mathsf{P} = \n\t\\left(\n\t\\mu \\mathsf{H} \\left(\\tfrac{1}{2}\\right) + K^{-1} \\mathsf{H} \\left(-\\tfrac{1}{2}\\right)\n\t\\right)^{-1}\n\t=\n\t\\mathsf{V} \\left(\n\t\\mu \\mathsf{\\Lambda}^{\\frac{1}{2}} + K^{-1} \\mathsf{\\Lambda}^{-\\frac{1}{2}}\n\t\\right)^{-1}\n\t\\mathsf{V^T}.\n\\end{align}\n\n\\subsection{An Iterative Method Respecting Mass Conservation} \n\\label{sub:a_conservative_method}\n\nThe Steklov-Poincar\\'e system from \\cref{sub:discrete_poincar} and the preconditioner from \\cref{sub:parameter_robust_preconditioning} form the main ingredients of the iterative scheme proposed next. 
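The structure of \eqref{eq: def sigma_h} lends itself to a matrix-free implementation: each application of $\Sigma_h$ requires one interior saddle-point solve per subdomain. A minimal sketch with random stand-in matrices follows (one subdomain only, preconditioning omitted; all names are illustrative, not the actual discretization):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)

# Random stand-ins for one subdomain: K0 plays the role of the interior
# saddle-point block [[A0, B0^T], [B0, 0]], C the coupling columns [A; B] R,
# and RAR the term R^T A R acting on the interface unknowns.
nu, npr, nl = 8, 4, 3
A0 = rng.standard_normal((nu, nu)); A0 = A0 @ A0.T + nu * np.eye(nu)  # SPD
B0 = rng.standard_normal((npr, nu))
K0 = np.block([[A0, B0.T], [B0, np.zeros((npr, npr))]])
RAR = np.eye(nl)
C = 0.1 * rng.standard_normal((nu + npr, nl))

def sigma_matvec(phi):
    """Schur-complement action: one interior solve per Krylov iteration."""
    return RAR @ phi - C.T @ np.linalg.solve(K0, C @ phi)

Sigma = LinearOperator((nl, nl), matvec=sigma_matvec, dtype=float)
chi = rng.standard_normal(nl)
phi, info = gmres(Sigma, chi)            # Sigma is never assembled
assert info == 0
assert np.linalg.norm(sigma_matvec(phi) - chi) < 1e-4 * np.linalg.norm(chi)
```

In the actual method the interior solves are the Stokes and Darcy subproblem solves, performed independently per subdomain, and GMRes is applied with the preconditioner $\mathsf{P}$ of \eqref{eq: preconditioner}.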
As mentioned before, we aim to use a Krylov subspace method on the reduced system \\eqref{eq: poincare steklov_h}. We turn to the Generalized Minimal Residual (GMRes) method \\cite{saad1986gmres} and propose the scheme we refer to as \\Cref{alg: GMRes}, described below.\n\n\\begin{algorithm}[ht]\n\n\t\\caption{}\n\t\\label{alg: GMRes}\n\t\\begin{enumerate}\n\t\t\\item Set the tolerance $\\epsilon > 0$, choose an initial guess $\\phi_h^0 \\in \\Lambda_h$, and construct the right-hand side $\\chi_h$ from \\eqref{eq: def chi_h} and $\\mathsf{P}$ from \\eqref{eq: preconditioner}.\n\t\t\\item \\label{step: SD solves}\n\t\tUsing $\\mathsf{P}$ as a preconditioner, apply GMRes to the discrete Steklov-Poincar\\'e system \\eqref{eq: poincare steklov_h} until the relative, preconditioned residual is smaller than $\\epsilon$. This involves solving a Stokes and a Darcy subproblem in \\eqref{eq: def sigma_h} at each iteration.\n\t\t\\item \\label{step: reconstruction}\n\t\tConstruct $(\\bm{u}_{S, h}, p_{S, h}) \\in \\bm{V}_{S, h} \\times W_{S, h}$ and $(\\bm{u}_{D, h}, p_{D, h}) \\in \\bm{V}_{D, h} \\times W_{D, h}$ by solving the independent Stokes and Darcy subproblems with $\\phi_h$ as the normal flux on $\\Gamma$.\n\t\t\\item In the case of Neumann problems, reconstruct the mean of the pressure in $\\Omega_D$ using \\eqref{eq: def p bar single} or \\eqref{eq: def p bar pure}.\n\t\\end{enumerate}\n\\end{algorithm}\n\nWe make three observations concerning this algorithm. Most importantly, the solution $(\\bm{u}_h, p_h)$ produced by \\Cref{alg: GMRes} conserves mass locally, independent of the number of GMRes iterations. In particular, \nthe definition of the space $\\bm{V}_h$ ensures that no mass is lost across the interface. 
Moreover, with the flux $\phi_h$ given on the interface, the reconstruction in step \ref{step: reconstruction} ensures that mass is conserved in each subdomain.

Second, we emphasize that the solves for the Stokes and Darcy subproblems in step \ref{step: SD solves} can be performed in parallel by optimized solvers. Moreover, the preconditioner is local to the interface and is agnostic to the choice of extension operators and discretization methods. In turn, this scheme can directly leverage a wide range of well-established ``legacy'' codes tailored to solving Stokes and Darcy flow problems. 

Third, we have made the implicit assumption that obtaining $\mathsf{P}$ by solving \eqref{eq: generalized eigenvalue problem} is computationally feasible. This is typically the case if the dimension of the interface space $\Lambda_h$ is sufficiently small. The generalized eigenvalue problem is solved once in an a priori, or ``off-line'', stage and the assembled matrix $\mathsf{P}$ is then applied in each iteration of the GMRes method.

\section{Numerical Results} 
\label{sec:numerical_results}

In this section, we present numerical experiments that verify the theoretical results presented above. By setting up artificial coupled Stokes-Darcy problems in two dimensions, we investigate the dependency of \Cref{alg: GMRes} on physical and discretization parameters in Section~\ref{sub:parameter_robustness}. Afterward, Section~\ref{sub:comparison_to_NN_method} presents a comparison of the proposed scheme to a Neumann-Neumann method.

\subsection{Parameter Robustness}
\label{sub:parameter_robustness}

Let the subdomains be given by $\Omega_S := (0, 1) \times (0, 1)$, $\Omega_D := (0, 1) \times (-1, 0)$, and $\Gamma := [0, 1] \times \{ 0 \}$. Two test cases are considered, defined by different boundary conditions. The first concerns the setting in which both the Stokes and Darcy subproblems are well-posed.
On the other hand, test case 2 illustrates the scenario in which a pure Neumann problem is imposed on the porous medium. 

For test case 1, let $\partial_\sigma \Omega_S$ be the top boundary ($x_2 = 1$) and $\partial_u \Omega_D$ the bottom boundary ($x_2 = -1$). The remaining portions of the boundary $\partial \Omega$ form $\partial_u \Omega_S$ and $\partial_p \Omega_D$. On $\partial_u \Omega_S$, zero velocity is imposed as described by \eqref{eq: BC essential}. The pressure data is set to $g_p(x_1, x_2) := x_2$ on $\partial_p \Omega_D$ to stimulate a flow field that infiltrates the porous medium.

Test case 2 simulates parallel flow over a porous medium. We impose no-flux conditions on $\partial \Omega_D \setminus \Gamma$, thereby ensuring that all mass transfer to and from the porous medium occurs at the interface $\Gamma$. The flow is stimulated by prescribing the velocity at the left and right boundaries of $\Omega_S$ using the parabolic profile $\bm{u}_S(x_1, x_2) := [x_2 (2 - x_2), 0]$. As in test case 1, the top boundary represents $\partial_\sigma \Omega_S$, at which zero stress is prescribed.

Both test cases consider zero body force and mass source, i.e. $\bm{f}_S := 0$ and $f_D := 0$. Moreover, we set the parameter $\alpha$ in the Beavers-Joseph-Saffman condition to zero for simplicity. 

The meshes $\Omega_{S, h}$ and $\Omega_{D, h}$ are chosen to be matching at $\Gamma$ and we set $\Gamma_h$ as the coinciding trace mesh. Following \Cref{sec:discretization}, we choose the Mixed Finite Element method in both subdomains, implemented with the use of FEniCS \cite{logg2012automated}. The spaces are given by a vector of quadratic Lagrange elements $(\bm{P}_2)$ for the Stokes velocity $\bm{V}_{S, h}$ and lowest-order Raviart-Thomas elements $(RT_0)$ for the Darcy velocity $\bm{V}_{D, h}$. The pressure space $W_h$ is given by piecewise constants $(P_0)$.
The interface space $\\Lambda_h$ is chosen to be the normal trace of $\\bm{V}_{S, h}$ on $\\Gamma_h$, and therefore consists of quadratic Lagrange elements.\n\nFor the sake of efficiency, the matrices $\\mathsf{V}$ and $\\mathsf{\\Lambda}$ used in the preconditioner $\\mathsf{P}$ are computed a priori. Moreover, we pre-compute the $\\mathsf{LU}$-decompositions of the Darcy and Stokes subproblems. These decompositions serve as surrogates for optimized ``legacy'' codes. The iterative solver is terminated when a relative residual of $\\epsilon = 10^{-6}$ is reached, i.e. when the ratio of Euclidean norms of the preconditioned residual and the preconditioned right-hand side is smaller than $\\epsilon$. \n\nWe first consider the dependency of the iterative solver on the mesh size. We set unit viscosity and permeability and start with a coarse mesh with $h = 1/8$. The mesh is refined four times and at each refinement, the number of iterations in \\Cref{alg: GMRes} is reported. The results for both test cases are shown in \\Cref{tab: Robustness}. \n\nThe results show that the number of iterations is robust with respect to the mesh size. Moreover, the second and third columns indicate the reduction from a fully coupled problem of size $n_{total}$ to a significantly smaller interface problem of size $n_\\Lambda$. As shown in \\Cref{sec:the_steklov_poincare_system}, this interface problem is symmetric and positive definite.\n\n\\begin{table}[ht]\n\t\\caption{The number of iterations necessary to reach the given tolerance with respect to the mesh size. The material parameters are given by $\\kappa = \\mu = 1$. 
}\n\t\\label{tab: Robustness}\n\t\\centering\n\t\\begin{tabular}{|r|r|r|c|c|}\n\t\t\\hline\n\n\t\t\\hline\n\t\t\t$1 / h$ &\n\t\t\t$n_{total}$ &\n\t\t\t$n_\\Lambda$ &\n\t\t\tCase 1 &\n\t\t\tCase 2 \\\\\n\t\t\\hline\n\t\t\t 8 & 1,042 & 15 & 8 & 6 \\\\\n\t\t\t 16 & 4,002 & 31 & 9 & 8 \\\\\n\t\t\t 32 & 15,682 & 63 & 8 & 9 \\\\\n\t\t\t 64 & 62,082 & 127 & 8 & 9 \\\\\n\t\t\t128 & 247,042 & 255 & 8 & 8 \\\\\n\t\t\\hline\n\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\nWe investigate the robustness of \\Cref{alg: GMRes} with respect to physical parameters by varying both the (scalar) permeability $\\kappa$ and the viscosity $\\mu$ over a range of eight orders of magnitude. The number of iterations is reported in Table~\\ref{tab: Robustness parameters}. \n\nIt is apparent that the scheme is robust with respect to both parameters, reaching the desired tolerance within a maximum of eleven iterations for the two test cases. Minor deviations in the iteration numbers can be observed for low permeabilities. The scheme may require an extra iteration in that case due to the higher sensitivity of the Darcy subproblem to flux boundary data. \n\n\\begin{table}[ht]\n\t\\caption{Number of iterations necessary to reach the tolerance with respect to the physical parameters. 
For both cases, the mesh size is set to $h = 1/64$.}\n\t\\label{tab: Robustness parameters}\n\t\\centering\n\t\\begin{tabular}{|r|r|rrrrr|}\n\t\t\\hline\n\n\t\t\\hline\n\t\t\\multicolumn{2}{|c|}{\\multirow{2}{*}{Case 1}}\n\t\t& \n\t\t\\multicolumn{5}{c|}{log$_{10}(\\mu)$} \\\\\n\t\t\\cline{3-7}\n\t\t\\multicolumn{2}{|c|}{} \n\t\t& $-4$ & $-2$ & $\\phantom{-}0$ & $\\phantom{-}2$ & $\\phantom{-}4$ \\\\\n\t\t\\hline\n\t\t\t\\multirow{5}{*}{log$_{10}(\\kappa)$}\n\t\t\t& 4 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t\t& 2 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t\t& 0 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t\t& $-2$ & 7 & 7 & 7 & 7 & 7 \\\\\n\t\t\t& $-4$ & 7 & 7 & 7 & 7 & 7 \\\\\n\t\t\\hline\n\n\t\t\\hline\n\t\\end{tabular}\n\t\\hspace{50 pt}\n\t\\begin{tabular}{|r|r|rrrrr|}\n\t\t\\hline\n\n\t\t\\hline\n\t\t\\multicolumn{2}{|c|}{\\multirow{2}{*}{Case 2}}\n\t\t& \n\t\t\\multicolumn{5}{c|}{log$_{10}(\\mu)$} \\\\\n\t\t\\cline{3-7}\n\t\t\\multicolumn{2}{|c|}{} \n\t\t& $-4$ & $-2$ & $\\phantom{-}0$ & $\\phantom{-}2$ & $\\phantom{-}4$ \\\\\n\t\t\\hline\n\t\t\t\\multirow{5}{*}{log$_{10}(\\kappa)$}\n\t\t\t& 4 & 9 & 9 & 9 & 9 & 9 \\\\\n\t\t\t& 2 & 9 & 9 & 9 & 9 & 9 \\\\\n\t\t\t& 0 & 9 & 9 & 9 & 9 & 9 \\\\\n\t\t\t& $-2$ & 9 & 9 & 9 & 9 & 9 \\\\\n\t\t\t& $-4$ & 11 & 11 & 11 & 11 & 10 \\\\\n\t\t\\hline\n \n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\n\\subsection{Comparison to a Neumann-Neumann Method} \n\\label{sub:comparison_to_NN_method}\n\nIn order to compare the performance of Algorithm~\\ref{alg: GMRes} to more conventional domain decomposition methods, we consider the closely related Neumann-Neumann method. This method, as remarked in \\cite[Remark 3.1]{discacciati2007robin}, solves the Steklov-Poincar\\'e system \\eqref{eq: poincare steklov_h} in the following iterative manner. Given the current residual, we solve the Stokes and Darcy subproblems by interpreting this residual as a normal stress and a pressure boundary condition, respectively. 
The computed fluxes normal to $\\Gamma$ then update $\\phi$ through the following operator:\n\\begin{align}\n\t\\mathcal{P}_{NN} := \\Sigma_{S, h}^{-1} + \\Sigma_{D, h}^{-1},\n\\end{align}\nwith $\\Sigma_{S, h} + \\Sigma_{D, h} = \\Sigma_h$ the decomposition in \\eqref{eq: def sigma_h}. \n\nNoting that $\\mathcal{P}_{NN}$ is an approximation of $\\Sigma_h^{-1}$, we define the Neumann-Neumann method by replacing the preconditioner $\\mathsf{P}$ by $\\mathcal{P}_{NN}$ in Algorithm~\\ref{alg: GMRes}. Moreover, we choose the same Krylov subspace method (GMRes) and stopping criterion, in order to make the comparison as fair as possible. \n\nWe consider the numerical experiment from \\cite{discacciati2005iterative,discacciati2007robin} posed on $\\Omega := (0, 1) \\times (0, 2)$ with $\\Omega_S := (0, 1) \\times (1, 2)$, $\\Omega_D := (0, 1) \\times (0, 1)$, and $\\Gamma = (0, 1) \\times \\{ 1 \\}$.\nThe solution is given by $\\bm{u}_S := \\left[(x_2 - 1)^2, \\ x_1(x_1 - 1)\\right]$, $p_S = \\mu (x_1 + x_2 - 1) + (3K)^{-1}$, and $p_D = \\left(x_1(1 - x_1)(x_2 - 1) + \\frac13 x_2^3 - x_2^2 + x_2 \\right) K^{-1} + \\mu x_1$. The boundary conditions are chosen to comply with this solution and we impose the pressure on $\\partial \\Omega_D \\setminus \\Gamma$, the normal stress on the top boundary, and the velocity on the remaining boundaries of $\\Omega_S$. Finally, the Beavers-Joseph-Saffman condition is replaced by a no-slip condition for the tangential Stokes velocity on $\\Gamma$.\n\nUsing the same Mixed Finite Element discretization as in the previous section, we vary the material and discretization parameters and report the number of iterations necessary to reach a relative residual of $\\epsilon = 10^{-6}$ to the preconditioned problem. The results are presented in Table~\\ref{tab: comparison with NN}.\n\n\\begin{table}[ht]\n\t\n\t\\caption{Iteration counts of the proposed scheme compared to a Neumann-Neumann method. 
The initial mesh has $h_0 = 1/7$ and each refinement is such that $h_{i + 1} = h_i/2$.}\n\t\\label{tab: comparison with NN}\n\t\\centering\n\t\\begin{tabular}{|c|c|rrrrr|rrrrr|}\n\t\t\\hline\n\n\t\t\\hline\n\t\t\\multirow{2}{*}{log$_{10}(\\mu)$} & \n\t\t\\multirow{2}{*}{log$_{10}(K)$} & \n\t\t\\multicolumn{5}{c|}{Neumann-Neumann} &\n\t\t\\multicolumn{5}{c|}{Algorithm~\\ref{alg: GMRes}} \\\\\n\t\t\\cline{3-12}\n\t\t& &\n\t\t$h_0$ & $h_1$ & $h_2$ & $h_3$ & $h_4$ &\n\t\t$h_0$ & $h_1$ & $h_2$ & $h_3$ & $h_4$ \\\\\n\t\t\\hline\t\t\t\n\t\t\\multirow{3}{*}{$\\phantom{-}0$}\n\t\t& $\\phantom{-}0$ & 3 & 3 & 3 & 3 & 3 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t& $-1$ & 5 & 5 & 5 & 5 & 5 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t& $-2$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\\\\n\t\t\\hline \n\t\t\\multirow{3}{*}{$-1$}\n\t\t& $\\phantom{-}0$ & 5 & 5 & 5 & 5 & 5 & 8 & 8 & 8 & 8 & 8 \\\\\n\t\t& $-1$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\\\\n\t\t& $-2$ & 8 & 9 & 12 & 16 & 13 & 10 & 9 & 8 & 7 & 7 \\\\\n\t\t\\hline \n\t\t\\multirow{3}{*}{$-2$}\n\t\t& $\\phantom{-}0$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\\\\n\t\t& $-1$ & 8 & 9 & 12 & 16 & 13 & 10 & 9 & 8 & 7 & 7 \\\\\n\t\t& $-2$ & 14 & 9 & 18 & 25 & 29 & 13 & 14 & 12 & 8 & 7 \\\\\n\t\t\\hline\n\n\t\t\\hline\n\t\\end{tabular}\n\n\\end{table}\n\n\nWe observe that the two methods behave oppositely as the mesh size decreases. Whereas the Neumann-Neumann method requires more iterations for finer grids, the performance of our proposed scheme appears to improve, requiring at most eight iterations on the finest levels.\n\nIn general, Algorithm~\\ref{alg: GMRes} outperforms the Neumann-Neumann method in terms of robustness, with the only deviation occurring in the case of a low permeability and a coarse grid. 
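For concreteness, the action of $\mathcal{P}_{NN} = \Sigma_{S, h}^{-1} + \Sigma_{D, h}^{-1}$ as a preconditioner can be sketched as follows. Dense stand-in SPD matrices replace the discrete Steklov-Poincar\'e operators; all names and sizes are synthetic placeholders, not the actual discretization.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 31
A = rng.standard_normal((n, n))
Sigma_S = A @ A.T + n * np.eye(n)       # stand-in for Sigma_{S,h}
B = rng.standard_normal((n, n))
Sigma_D = B @ B.T + n * np.eye(n)       # stand-in for Sigma_{D,h}
Sigma = Sigma_S + Sigma_D               # Sigma_h = Sigma_{S,h} + Sigma_{D,h}

# Each application of P_NN solves one "Stokes" and one "Darcy" subproblem
# with the current residual supplied as boundary data.
P_NN = LinearOperator(
    (n, n),
    matvec=lambda r: np.linalg.solve(Sigma_S, r) + np.linalg.solve(Sigma_D, r),
)

b = rng.standard_normal(n)
phi, info = gmres(Sigma, b, M=P_NN)
```

The sketch makes the cost structure visible: the two extra subproblem solves happen inside every iteration, whereas the proposed scheme pays its eigenproblem cost once, off-line.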
The Neumann-Neumann method appears more sensitive to both material and discretization parameters and only converges faster for material parameters close to unity.\n\nIn terms of computational cost, we emphasize that our proposed scheme requires an off-line computation to construct the preconditioner and contains a solve for the Stokes and Darcy subproblems at each iteration. On the other hand, the Neumann-Neumann method requires an additional solve for the subproblems in the preconditioner $\\mathcal{P}_{NN}$. These additional solves will likely become prohibitively expensive for finer grids, since each solve is more costly and more iterations become necessary. Thus, if the preconditioner $\\mathsf{P}$ can be formed efficiently, then Algorithm~\\ref{alg: GMRes} forms an attractive alternative for such problems.\n\nAlthough these results do not allow for a thorough, quantitative comparison with the Robin-Robin methods presented in \\cite{discacciati2007robin}, we do make an important, qualitative observation. In particular, our proposed method does not require setting any acceleration parameters and its performance is a direct consequence of the constructed preconditioner. This is advantageous because finding the optimal values for such parameters can be a non-trivial task.\n\n\n\\section{Conclusion}\n\\label{sec:conclusions}\n\nIn this work, we proposed an iterative method for solving coupled Stokes-Darcy problems that retains local mass conservation at each iteration. By introducing the normal flux at the interface, the original problem is reduced to a smaller problem concerning only this variable. Through a priori analysis with the use of weighted norms, a preconditioner is formed to ensure that the scheme is robust with respect to physical and discretization parameters.\n\nFuture research will focus on four main ideas. 
First, we are interested in investigating the application of this scheme to different discretization methods, including the pairing of an MPFA finite volume method with the MAC-scheme as in \\cite{schneider2020coupling}. \n\nSecond, we note that the use of non-matching grids forms another natural extension. In that case, we aim to investigate how such a mismatch affects the discretization error. However, such an analysis heavily depends on the chosen discretization method and is therefore reserved as a topic for future investigation.\n\nThird, the generalization of these ideas to non-linear problems forms another area of our interest. By considering the Navier-Stokes equations in the free-flow subdomain, for example, the reduction to an interface problem will inherit the non-linearity. An iterative method that solves this reduced problem may benefit from a similarly constructed preconditioner. \n\nFinally, as remarked in Section~\\ref{sub:a_conservative_method}, we have assumed that the generalized eigenvalue problem \\eqref{eq: generalized eigenvalue problem} can be solved efficiently. However, if the assembly of $\\mathsf{P}$ is too costly, then more efficient, spectrally equivalent preconditioners are required. A promising example may be to employ the recent work on multi-grid preconditioners for fractional diffusion problems \\cite{baerland2019multigrid}.\n\nTo conclude, we have presented this iterative method in a basic setting so that it may form a foundation for a variety of research topics that we aim to pursue in future work.\n\n\\begin{acknowledgement}\n\tThe author expresses his gratitude to Prof. Rainer Helmig, Prof. 
Ivan Yotov, Dennis Gl\\"aser, and Kilian Weishaupt for valuable discussions on closely related topics.\n\\end{acknowledgement}\n\n\\bibliographystyle{siam}\n\n\\section{Introduction}\r\n\r\nInterplay between interactions and disorder has been one of the\r\ncentral issues in modern condensed matter physics\r\n\\cite{Interaction_Disorder_Book,RMP_Disorder_Interaction}. In the\r\nweakly disordered metal the lowest-order interaction correction\r\nwas shown to modify the density of states at the Fermi energy in\r\nthe diffusive regime \\cite{AAL}, giving rise to non-Fermi liquid\r\nphysics particularly in dimensions below $d = 3$, while\r\nfurther enhancement of electron correlations was predicted to\r\ncause ferromagnetism \\cite{Disorder_FM}. In an insulating phase\r\nspin glass appears ubiquitously, where the average of the spin\r\nmoment vanishes in the long time scale, but local spin\r\ncorrelations remain finite, keeping the system out of\r\nequilibrium \\cite{SG_Review}.\r\n\r\nAn outstanding question is the role of disorder in the vicinity of\r\nquantum phase transitions\r\n\\cite{Disorder_QCP_Review1,Disorder_QCP_Review2}, where effective\r\nlong-range interactions associated with critical fluctuations\r\nappear to cause non-Fermi liquid physics\r\n\\cite{Disorder_QCP_Review2,QCP_Review}. Unfortunately, the complexity\r\nof this problem has not allowed comprehensive understanding until\r\nnow. In the vicinity of the weakly disordered ferromagnetic\r\nquantum critical point, an electrical transport coefficient has\r\nbeen studied, where the crossover temperature from the ballistic\r\nto diffusive regimes is much lowered due to critical fluctuations,\r\ncompared with the disordered Fermi liquid\r\n\\cite{Paul_Disorder_FMQCP}. 
Generally speaking, the stability of\r\nthe quantum critical point should be addressed, as governed by the\r\nHarris criterion \\cite{Harris}. When the Harris criterion is not\r\nsatisfied, three possibilities are expected to arise\r\n\\cite{Disorder_QCP_Review2}. The first two possibilities are the\r\nemergence of new fixed points, associated with either a\r\nfinite-randomness fixed point satisfying the Harris criterion at\r\nthis new fixed point or an infinite-randomness fixed point\r\nexhibiting activated scaling behavior. The last possibility is\r\nthat quantum criticality can be destroyed, replaced with a smooth\r\ncrossover. In addition, even away from the quantum critical point\r\nthe disordered system may show non-universal power-law physics,\r\ncalled the Griffiths phase \\cite{Griffiths}. Effects of rare\r\nregions are expected to be strong near the infinite-randomness\r\nfixed point and the disorder-driven crossover region\r\n\\cite{Disorder_QCP_Review2}.\r\n\r\nThis study focuses on the role of strong randomness in the heavy\r\nfermion quantum transition. Heavy fermion quantum criticality is\r\nbelieved to result from competition between Kondo and RKKY\r\n(Ruderman-Kittel-Kasuya-Yosida) interactions, where larger Kondo\r\ncouplings give rise to a heavy fermion Fermi liquid while larger\r\nRKKY interactions cause an antiferromagnetic metal\r\n\\cite{Disorder_QCP_Review2,QCP_Review,HF_Review}. Generally\r\nspeaking, there are two competing viewpoints on this problem.\r\nThe first direction is to regard the heavy fermion transition as\r\nan antiferromagnetic transition, where critical spin fluctuations\r\nappear from heavy fermions. The second viewpoint is that the\r\ntransition is identified with breakdown of the Kondo effect, where\r\nFermi surface fluctuations are critical excitations. 
The first\r\nscenario is described by the Hertz-Moriya-Millis (HMM) theory in\r\nterms of heavy electrons coupled with antiferromagnetic spin\r\nfluctuations, the standard model for quantum criticality\r\n\\cite{HMM}. There are two ways to realize the second scenario,\r\ndepending on how Fermi surface fluctuations are described. The first\r\nway is to express Fermi surface fluctuations in terms of a\r\nhybridization order parameter, called the holon in the slave-boson\r\ncontext \\cite{KB_z2,KB_z3}. This is usually referred to as the Kondo\r\nbreakdown scenario. The second is to map the lattice problem\r\nonto a single-site problem, resorting to the dynamical mean-field\r\ntheory (DMFT) approximation \\cite{DMFT_Review}, where order\r\nparameter fluctuations are critical only in the time direction.\r\nThis description is called the locally critical scenario\r\n\\cite{EDMFT}.\r\n\r\nEach scenario predicts its own critical physics. Both the HMM\r\ntheory and the Kondo breakdown model are based on the standard\r\npicture that quantum criticality arises from long-wavelength\r\ncritical fluctuations while the locally critical scenario\r\nhas its special structure, that is, locally (space) critical\r\n(time). Critical fluctuations are described by $z = 2$ in the HMM\r\ntheory due to finite-wave-vector ordering \\cite{HMM} and by $z =\r\n3$ in the Kondo breakdown scenario associated with uniform\r\n"ordering" \\cite{KB_z3}, where $z$ is the dynamical exponent\r\nexpressing the dispersion relation of critical excitations. Thus,\r\nquantum critical physics characterized by scaling exponents is\r\ncompletely different between these two models. In addition to\r\nqualitative agreements with experiments depending on compounds\r\n\\cite{Disorder_QCP_Review2}, these two theories do not allow the\r\n$\\omega/T$ scaling in the dynamic susceptibility of their critical\r\nmodes because both theories live above their upper critical\r\ndimensions. 
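To make the last statement explicit (a standard counting argument, added here for clarity): critical modes disperse as $\\omega \\sim q^{z}$, so quantum fluctuations effectively live in $d_{eff} = d + z$ dimensions, and a $\\phi^{4}$-type critical theory becomes Gaussian above $d_{eff} = 4$. In three spatial dimensions \\bqa d + z = 3 + 2 = 5 , ~~~~~ d + z = 3 + 3 = 6 \\nonumber \\eqa for the HMM and Kondo breakdown theories, respectively, both above the upper critical dimension, which precludes anomalous $\\omega/T$ scaling.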
On the other hand, the locally critical scenario gives\r\nrise to the $\\omega/T$ scaling behavior of the dynamic spin\r\nsusceptibility \\cite{EDMFT}, while it seems to have some\r\ndifficulties associated with predictions for transport\r\ncoefficients.\r\n\r\n\r\n\r\nWe start by discussing an Ising model with Gaussian randomness in\r\nits exchange coupling, called the Edwards-Anderson model\r\n\\cite{SG_Review}. Using the replica trick and performing the\r\nsaddle-point analysis, one can find a spin glass phase when the\r\naverage value of the exchange interaction vanishes, characterized\r\nby the Edwards-Anderson order parameter without magnetization.\r\nApplying this concept to the Heisenberg model with Gaussian\r\nrandomness, quantum fluctuations should be incorporated to take\r\ninto account the Berry phase contribution carefully. It was\r\ndemonstrated that quantum corrections in the DMFT approximation\r\nrender the spin glass phase unstable at finite temperatures,\r\nresulting in a spin liquid state when the average value of the\r\nexchange coupling vanishes \\cite{Sachdev_SG}. It should be noted\r\nthat this spin liquid state differs from the spin liquid phase in\r\nfrustrated spin systems in that the former state\r\noriginates from critical single-impurity dynamics while the latter\r\nphase results from non-trivial spatial spin correlations described\r\nby gauge fluctuations \\cite{Spin_Liquid_Review}. The spin liquid\r\nphase driven by strong randomness is characterized by its critical\r\nspin spectrum, given by the $\\omega/T$ scaling of the local spin\r\nsusceptibility \\cite{Sachdev_SG}.\r\n\r\nIntroducing hole doping into the spin liquid state, Parcollet and\r\nGeorges examined the disordered t-J model within the DMFT\r\napproximation \\cite{Olivier}. 
Using the U(1) slave-boson\r\nrepresentation, they found marginal Fermi-liquid phenomenology,\r\nwhere the electrical transport is described by the $T$-linear\r\nresistivity, resulting from the marginal Fermi-liquid spectrum of\r\ncollective modes, here the $\\omega/T$ scaling in the local spin\r\nsusceptibility. They tried to connect this result with the physics of\r\nhigh T$_{c}$ cuprates.\r\n\r\nIn this study we introduce random hybridization with conduction\r\nelectrons into the spin liquid state. Our original motivation was\r\nto explain both the $\\omega/T$ scaling in the spin spectrum\r\n\\cite{INS_Local_AF} and the typical $T$-linear resistivity\r\n\\cite{LGW_F_QPT_Nature} near the heavy fermion quantum critical\r\npoint. In particular, the presence of disorder leads us to the\r\nDMFT approximation naturally \\cite{Moore_Dis_DMFT}, expected to\r\nresult in the $\\omega/T$ scaling of the spin spectrum\r\n\\cite{Sachdev_SG}.\r\n\r\n\r\n\r\nStarting from an Anderson lattice model with disorder, we derive\r\nan effective local field theory in the DMFT approximation, where\r\nrandomness is introduced into both hybridization and RKKY\r\ninteractions. Performing the saddle-point analysis in the U(1)\r\nslave-boson representation, we obtain its phase diagram, which\r\nshows a quantum phase transition from a spin liquid state to a\r\nlocal Fermi liquid phase. In contrast to the clean limit of the\r\nAnderson lattice model \\cite{KB_z2,KB_z3}, the effective\r\nhybridization given by holon condensation turns out to vanish,\r\nresulting from the zero mean value of the hybridization coupling\r\nconstant. However, we show that the holon density becomes finite\r\nwhen the variance of the hybridization is sufficiently larger than that of\r\nthe RKKY coupling, giving rise to the Kondo effect. 
On the other\r\nhand, when the variance of the hybridization becomes smaller than that\r\nof the RKKY coupling, the Kondo effect disappears, resulting in a\r\nfully symmetric paramagnetic state, adiabatically connected with\r\nthe spin liquid state of the disordered Heisenberg model\r\n\\cite{Sachdev_SG}.\r\n\r\nOur contribution compared with the previous works\r\n\\cite{Kondo_Disorder} is to introduce RKKY interactions between\r\nlocalized spins and to observe the quantum phase transition in the\r\nheavy fermion system with strong randomness. The previous works\r\nfocused on how non-Fermi liquid physics can appear in the\r\nKondo singlet phase away from quantum criticality\r\n\\cite{Kondo_Disorder}. A broad distribution of the Kondo\r\ntemperature $T_{K}$ turns out to cause such non-Fermi liquid\r\nphysics, originating from the finite density of unscreened local\r\nmoments with almost vanishing $T_K$, where the $T_{K}$\r\ndistribution may result from either the Kondo disorder for\r\nlocalized electrons or the proximity of the Anderson localization\r\nfor conduction electrons. Because RKKY interactions are not\r\nintroduced in these studies, there always exist finite $T_{K}$\r\ncontributions. On the other hand, the presence of RKKY\r\ninteractions gives rise to breakdown of the Kondo effect, making\r\n$T_{K} = 0$ identically in the strong RKKY coupling phase.\r\n\r\nIn Ref. 
[\\onlinecite{Kondo_RKKY_Disorder}] the role of random RKKY\r\ninteractions was examined, where the Kondo coupling is fixed while\r\nthe chemical potential for conduction electrons is introduced as a\r\nrandom variable with variance $W$.\r\nUpon increasing the randomness of the electron chemical potential, the\r\nFermi liquid state for $W < W_{c}$ turns into the spin liquid phase\r\nfor $W > W_{c}$, which displays the marginal Fermi-liquid\r\nphenomenology due to random RKKY interactions\r\n\\cite{Kondo_RKKY_Disorder}, where the Kondo effect is suppressed\r\ndue to the proximity of the Anderson localization for conduction\r\nelectrons \\cite{Kondo_Disorder}. However, the presence of finite\r\nKondo couplings still gives rise to Kondo screening, although the\r\n$T_{K}$ distribution differs from that in the Fermi liquid state,\r\nassociated with the presence of random RKKY interactions. In\r\naddition, the spin liquid state was argued to be unstable against\r\nthe spin glass phase at low temperatures, possibly resulting from the\r\nfixed Kondo interaction. On the other hand, we do not take into\r\naccount the Anderson localization for conduction electrons, and\r\nintroduce random hybridization couplings instead. As a result, the Kondo\r\neffect is completely destroyed in the spin liquid phase, and thus the\r\nquantum critical physics differs from that of the previous study of Ref.\r\n[\\onlinecite{Kondo_RKKY_Disorder}]. In addition, the spin liquid\r\nphase is stable at finite temperatures in the present study\r\n\\cite{Sachdev_SG}.\r\n\r\nWe investigate the quantum critical point beyond the mean-field\r\napproximation. Introducing quantum corrections fully\r\nself-consistently in the non-crossing approximation\r\n\\cite{Hewson_Book}, we prove that the local charge susceptibility\r\nhas exactly the same critical exponent as the local spin\r\nsusceptibility. This is quite unusual because these correlation\r\nfunctions are symmetry-unrelated on the lattice scale. 
This\r\nreminds us of deconfined quantum criticality \\cite{Senthil_DQCP},\r\nwhere a Landau-Ginzburg-Wilson-forbidden continuous transition\r\nmay appear with an enhanced emergent symmetry. Indeed, such a\r\ncontinuous quantum transition was proposed between the\r\nantiferromagnetic phase and the valence bond solid state\r\n\\cite{Senthil_DQCP}. In the vicinity of the quantum critical point\r\nthe spin-spin correlation function of the antiferromagnetic\r\nchannel has the same scaling exponent as the valence-bond\r\ncorrelation function, suggesting an emergent O(5) symmetry beyond\r\nthe symmetry O(3)$\\times$Z$_{4}$ of the lattice model\r\n\\cite{Tanaka_SO5}, confirmed by the Monte Carlo simulation of\r\nthe extended Heisenberg model \\cite{Sandvik}. Tanaka and Hu\r\nproposed an effective O(5) nonlinear $\\sigma$ model with the\r\nWess-Zumino-Witten term as an effective field theory for the\r\nLandau-Ginzburg-Wilson-forbidden quantum critical point\r\n\\cite{Tanaka_SO5}, expected to allow fractionalized spin\r\nexcitations due to the topological term. This proposal can be\r\nconsidered as a generalization of an antiferromagnetic spin chain,\r\nwhere an effective field theory is given by an O(4) nonlinear\r\n$\\sigma$ model with the Wess-Zumino-Witten term, which gives rise\r\nto fractionalized spin excitations called spinons, identified with\r\ntopological solitons \\cite{Tsvelik_Book}. Applying this concept to\r\nthe present quantum critical point, the enhanced emergent symmetry\r\nbetween charge (holon) and spin (spinon) local modes leads us to\r\npropose a novel duality between the Kondo singlet phase and the\r\ncritical local moment state beyond the Landau-Ginzburg-Wilson\r\nparadigm. We suggest an O(4) nonlinear $\\sigma$ model in a\r\nnontrivial manifold as an effective field theory for this local\r\nquantum critical point, where the local spin and charge densities\r\nform an O(4) vector with a constraint. 
The symmetry enhancement\r\nprovides the mechanism for electron fractionalization in critical\r\nimpurity dynamics, where such fractionalized excitations are\r\nidentified with topological excitations.\r\n\r\nThis paper is organized as follows. In section II we introduce an\r\neffective disordered Anderson lattice model and perform the DMFT\r\napproximation with the replica trick. Equation (\\ref{DMFT_Action})\r\nis the main result of this section. In section III we perform the\r\nsaddle-point analysis based on the slave-boson representation and\r\nobtain the phase diagram showing breakdown of the Kondo effect\r\ndriven by the RKKY interaction. We show spectral functions,\r\nself-energies, and the local spin susceptibility in the Kondo phase.\r\nFigures (1)-(3) with Eqs. (\\ref{Sigma_C_MFT})-(\\ref{Sigma_FC_MFT})\r\nand (\\ref{Lambda_MFT})-(\\ref{Constraint_MFT}) are the main results of\r\nthis section. In section IV we investigate the nature of the\r\nimpurity quantum critical point based on the non-crossing\r\napproximation, going beyond the previous mean-field analysis. We solve the\r\nself-consistent equations analytically and find power-law scaling\r\nsolutions. As a result, we uncover the marginal Fermi-liquid\r\nspectrum for the local spin susceptibility. We propose an\r\neffective field theory for the quantum critical point and discuss\r\nthe possible relationship with the deconfined quantum critical\r\npoint. 
In section V we summarize our results.\r\n\r\nThe present study extends our recent publication\r\n\\cite{Tien_Kim_PRL}, providing both physical and mathematical\r\ndetails.\r\n\r\n\r\n\r\n\r\n\r\n\\section{An effective DMFT action from an Anderson lattice model with strong randomness}\r\n\r\nWe start from an effective Anderson lattice model \\bqa H &=& -\r\n\\sum_{ij,\\sigma} t_{ij} c^{\\dagger}_{i\\sigma} c_{j\\sigma} + E_{d}\r\n\\sum_{i\\sigma} d^{\\dagger}_{i\\sigma} d_{i\\sigma} \\nn &+& \\sum_{ij}\r\nJ_{ij} \\mathbf{S}_{i} \\cdot \\mathbf{S}_{j} + \\sum_{i\\sigma} (V_{i}\r\nc^{\\dagger}_{i\\sigma} d_{i\\sigma} + {\\rm H.c.}) , \\label{ALM} \\eqa\r\nwhere $t_{ij} = \\frac{t}{M \\sqrt{z}}$ is the hopping integral for\r\nconduction electrons and \\bqa && J_{ij} = \\frac{J}{\\sqrt{z M}}\r\n\\varepsilon_{i}\\varepsilon_{j} , ~~~~~ V_{i} = \\frac{V}{\\sqrt{M}}\r\n\\varepsilon_{i} \\nonumber \\eqa are random RKKY and hybridization\r\ncoupling constants, respectively. Here, $M$ is the spin degeneracy\r\nand $z$ is the coordination number. Randomness is given by the\r\nGaussian distribution \\bqa \\overline{\\varepsilon_{i}} = 0 , ~~~~~\r\n\\overline{\\varepsilon_{i}\\varepsilon_{j}} = \\delta_{ij} . \\eqa\r\n\r\nThe disorder average can be performed using the replica trick\r\n\\cite{SG_Review}. 
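For clarity, we recall the two standard identities underlying this step (a textbook sketch, see e.g. Ref. \\cite{SG_Review}): the replica limit and the Gaussian average, \\bqa \\overline{\\ln Z} = \\lim_{n \\rightarrow 0} \\frac{\\overline{Z^{n}} - 1}{n} , ~~~~~ \\overline{e^{\\varepsilon_{i} X}} = \\int \\frac{d \\varepsilon_{i}}{\\sqrt{2 \\pi}} e^{- \\varepsilon_{i}^{2}/2} e^{\\varepsilon_{i} X} = e^{X^{2}/2} , \\nonumber \\eqa consistent with $\\overline{\\varepsilon_{i}} = 0$ and $\\overline{\\varepsilon_{i} \\varepsilon_{j}} = \\delta_{ij}$. Applying the second identity, together with its multivariate generalization for the terms bilinear in $\\varepsilon_{i} \\varepsilon_{j}$, to the disorder-dependent couplings generates the quartic, replica-mixing interactions appearing below.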
Performing the disorder average in the Gaussian\r\ndistribution function, we reach the following expression for the\r\nreplicated effective action\r\n\\begin{eqnarray}\r\n&& \\overline{Z^n} = \\int \\mathcal{D}c_{i\\sigma}^{a}\r\n\\mathcal{D}d_{i\\sigma}^{a} e^{-\\bar{S}_n } , \\nn &&\r\n\\overline{S}_{n} = \\int\\limits_{0}^{\\beta} d\\tau \\sum_{ij\\sigma a}\r\nc^{\\dagger a}_{i\\sigma}(\\tau) ((\\partial_{\\tau} - \\mu)\\delta_{ij}\r\n+ t_{ij}) c^{a}_{j\\sigma}(\\tau) \\nn && + \\int\\limits_{0}^{\\beta}\r\nd\\tau \\sum_{i\\sigma a}d^{\\dagger a}_{i\\sigma}(\\tau)\r\n(\\partial_{\\tau} + E_d) d^{a}_{i\\sigma}(\\tau) \\nn && -\r\n\\frac{J^2}{2 z M} \\int\\limits_{0}^{\\beta} d\\tau\r\n\\int\\limits_{0}^{\\beta} d\\tau' \\sum_{ijab}\r\n\\mathbf{S}^{a}_{i}(\\tau) \\cdot \\mathbf{S}^{a}_{j}(\\tau) \\;\\;\r\n\\mathbf{S}^{b}_{i}(\\tau') \\cdot \\mathbf{S}^{b}_{j}(\\tau') \\nn && -\r\n\\frac{V^{2}}{2 M} \\int\\limits_{0}^{\\beta} d\\tau\r\n\\int\\limits_{0}^{\\beta} d\\tau' \\sum_{i \\sigma \\sigma' ab} \\big(\r\nc^{\\dagger a}_{i\\sigma}(\\tau) d^{a}_{i\\sigma}(\\tau) + d^{\\dagger\r\na}_{i\\sigma}(\\tau) c^{a}_{i\\sigma}(\\tau)\\big) \\nn &&\r\n~~~~~~~~~~~~~~~ \\times \\big( c^{\\dagger b}_{i\\sigma'}(\\tau')\r\nd^{b}_{i\\sigma'}(\\tau') + d^{\\dagger b}_{i\\sigma'}(\\tau')\r\nc^{b}_{i\\sigma'}(\\tau')\\big) , \\label{DALM}\r\n\\end{eqnarray}\r\nwhere $\\sigma, \\sigma' = 1, ..., M$ is the spin index and $a, b =\r\n1, ..., n$ is the replica index. In appendix A we derive this\r\nreplicated action from Eq. (\\ref{ALM}).\r\n\r\nOne may ask the role of randomness of $E_{d}$, generating \\bqa &&\r\n- \\int_{0}^{\\beta} d\\tau \\int_{0}^{\\beta} d\\tau'\r\n\\sum_{i\\sigma\\sigma' ab} d^{\\dagger a}_{i\\sigma}(\\tau)\r\nd^{a}_{i\\sigma}(\\tau) d^{\\dagger b}_{i\\sigma'}(\\tau')\r\nd^{b}_{i\\sigma'}(\\tau') , \\nonumber \\eqa where density\r\nfluctuations are involved. 
This contribution is expected to\r\nsupport the Kondo effect because such local density fluctuations\r\nhelp hybridization with conduction electrons. In this paper we fix\r\n$E_{d}$ to a constant value in the Kondo limit, which is allowed as long as\r\nthe variance of $E_{d}$ is not large enough to drive the system out of the Kondo limit.\r\n\r\nOne can also introduce randomness in the hopping integral of conduction\r\nelectrons. However, this contribution gives rise to the same effect as\r\nthe DMFT approximation in the $z\\rightarrow \\infty$ Bethe lattice\r\n\\cite{Olivier}. In this respect randomness in the hopping integral\r\nis naturally incorporated into the present DMFT study.\r\n\r\nThe last disorder contribution can arise from randomness in the\r\nelectron chemical potential, expected to cause the Anderson\r\nlocalization of conduction electrons. Actually, this results in\r\na metal-insulator transition at a critical disorder strength,\r\nsuppressing the Kondo effect in the insulating phase. Previously,\r\nthe Griffiths phase for non-Fermi liquid physics has been\r\nattributed to the proximity effect of the Anderson localization\r\n\\cite{Kondo_Disorder}. 
In this work we do not consider the\r\nAnderson localization for conduction electrons.\r\n\r\n\r\n\r\nWe observe that the disorder average neutralizes spatial\r\ncorrelations except for the hopping term of conduction electrons.\r\nThis leads us to the DMFT formulation, resulting in an effective\r\nlocal action for the strong random Anderson lattice model\r\n\\begin{eqnarray}\r\n&& \\bar{S}_{n}^{\\rm eff} = \\int_{0}^{\\beta} d\\tau \\Bigl\\{\r\n\\sum_{\\sigma a} c^{\\dagger a}_{\\sigma}(\\tau) (\\partial_{\\tau} -\r\n\\mu) c^{a}_{\\sigma}(\\tau) \\nn && + \\sum_{\\sigma a}d^{\\dagger\r\na}_{\\sigma}(\\tau) (\\partial_{\\tau} + E_d) d^{a}_{\\sigma}(\\tau)\r\n\\Bigr\\} \\nn && -\\frac{V^2}{2 M} \\int_{0}^{\\beta} d\\tau\r\n\\int_{0}^{\\beta} d\\tau' \\sum_{\\sigma \\sigma' a b} \\big[ c^{\\dagger\r\na}_{\\sigma}(\\tau) d^{a}_{\\sigma}(\\tau) + d^{\\dagger\r\na}_{\\sigma}(\\tau) c^{a}_{\\sigma}(\\tau)\\big] \\nn &&\r\n~~~~~~~~~~~~~~~~~~~~~~~~~ \\times \\big[ c^{\\dagger\r\nb}_{\\sigma'}(\\tau') d^{b}_{\\sigma'}(\\tau') + d^{\\dagger\r\nb}_{\\sigma'}(\\tau') c^{b}_{\\sigma'}(\\tau')\\big] \\nn && -\r\n\\frac{J^2}{2 M} \\int_{0}^{\\beta} d\\tau \\int_{0}^{\\beta} d\\tau'\r\n\\sum_{ab} \\sum_{\\alpha\\beta\\gamma\\delta} S^{a}_{\\alpha\\beta}(\\tau)\r\nR^{ab}_{\\beta\\alpha\\gamma\\delta}(\\tau-\\tau')\r\nS^{b}_{\\delta\\gamma}(\\tau') \\nn && + \\frac{t^2}{M^2}\r\n\\int_{0}^{\\beta} d\\tau \\int_{0}^{\\beta} d\\tau' \\sum_{ab\\sigma}\r\nc^{\\dagger a}_{\\sigma}(\\tau) G^{ab}_{c \\;\r\n\\sigma\\sigma}(\\tau-\\tau') c^{b}_{\\sigma}(\\tau' ) ,\r\n\\label{DMFT_Action}\r\n\\end{eqnarray}\r\nwhere $G^{ab}_{c \\; ij\\sigma\\sigma}(\\tau-\\tau')$ is the local\r\nGreen's function for conduction electrons and $R^{ab}_{\\beta\r\n\\alpha \\gamma \\delta}(\\tau-\\tau')$ is the local spin\r\nsusceptibility for localized spins, given by \\bqa G^{ab}_{c \\;\r\nij\\sigma\\sigma}(\\tau-\\tau') &=& - \\langle T_{\\tau} [\r\nc^{a}_{i\\sigma}(\\tau) c^{\\dagger b}_{j\\sigma}(\\tau') ] 
\rangle ,
\nn R^{ab}_{\beta \alpha \gamma \delta}(\tau-\tau') &=& \langle
T_{\tau} [S^{a}_{\beta\alpha}(\tau) S^{b}_{\gamma\delta}(\tau')]
\rangle , \label{Local_Green_Functions} \eqa respectively. Eq.
(\ref{DMFT_Action}) with Eq. (\ref{Local_Green_Functions}) serves
as a completely self-consistent framework for this problem. The
derivation of Eq. (\ref{DMFT_Action}) from Eq. (\ref{DALM}) is
shown in appendix B.

This effective model has two well-known limits, corresponding to
the disordered Heisenberg model \cite{Sachdev_SG} and the
disordered Anderson lattice model without RKKY interactions
\cite{Kondo_Disorder}, respectively. In the former case a spin
liquid state emerges due to strong quantum fluctuations, while in
the latter case a local Fermi liquid phase appears at low
temperatures as long as the $T_{K}$ distribution is not too broad.
In this respect it is natural to consider a quantum phase
transition driven by the ratio between the variances of the RKKY
and hybridization couplings.

\section{Phase diagram}

\subsection{Slave boson representation and mean field approximation}

We solve the effective DMFT action based on the U(1) slave boson
representation
\begin{eqnarray}
d^{a}_{\sigma} &=& \hat{b}^{\dagger a} f^{a}_{\sigma} , \label{SB_Electron} \\
S_{\sigma\sigma'}^{a} &=& f^{a\dagger}_{\sigma} f_{\sigma'}^{a} -
q_{0}^{a} \delta_{\sigma \sigma'} \label{SB_Spin}
\end{eqnarray}
with the single occupancy constraint $|b^{a}|^2 + \sum_{\sigma}
f^{\dagger a}_{\sigma}(\tau) f^{a}_{\sigma}(\tau) = 1$, where $q_{0}^{a} =
\sum_{\sigma}f^{a\dagger}_{\sigma} f_{\sigma}^{a}/M $.

In the mean field approximation we replace the holon operator
$\hat{b}^{a}$ with its expectation value $\langle \hat{b}^{a}
\rangle \equiv b^{a}$.
Then, the effective action Eq.
(\ref{DMFT_Action}) becomes
\begin{widetext}
\begin{eqnarray}
&& \bar{S}_{n}^{\rm eff} = \int_{0}^{\beta} d\tau \Bigl\{
\sum_{\sigma a} c^{\dagger a}_{\sigma}(\tau) (\partial_{\tau} -
\mu) c^{a}_{\sigma}(\tau) + \sum_{\sigma a} f^{\dagger
a}_{\sigma}(\tau) (\partial_{\tau} + E_d) f^{a}_{\sigma}(\tau) +
\sum_{a} \lambda^{a} (|b^{a}|^2 + \sum_{\sigma} f^{\dagger
a}_{\sigma}(\tau) f^{a}_{\sigma}(\tau)- 1) \Bigr\} \nonumber \\
&& -\frac{V^2}{2 M} \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{\sigma \sigma' a b} \big[ c^{\dagger a}_{\sigma}(\tau)
f^{a}_{\sigma}(\tau) (b^{a})^{*} + b^{a} f^{\dagger
a}_{\sigma}(\tau) c^{a}_{\sigma}(\tau)\big] \big[ c^{\dagger
b}_{\sigma'}(\tau') f^{b}_{\sigma'}(\tau') (b^{b})^{*} + b^{b}
f^{\dagger b}_{\sigma'}(\tau') c^{b}_{\sigma'}(\tau')\big]
\nonumber \\ &&-\frac{J^2}{2 M} \int_{0}^{\beta} d\tau
\int_{0}^{\beta} d\tau' \sum_{ab} \sum_{\alpha\beta\gamma\delta}
\big[f^{\dagger a}_{\alpha}(\tau) f^{a}_{\beta}(\tau) -
q_{\alpha}^{a} \delta_{\alpha\beta} \big]
R^{ab}_{\beta\alpha\gamma\delta}(\tau-\tau') \big[f^{\dagger
b}_{\delta}(\tau') f^{b}_{\gamma}(\tau') - q_{\gamma}^{b}
\delta_{\gamma\delta} \big] \nonumber \\ && + \frac{t^2}{M^2}
\int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau' \sum_{ab\sigma}
c^{\dagger a}_{\sigma}(\tau) G^{ab}_{\sigma}(\tau-\tau')
c^{b}_{\sigma}(\tau' ) , \label{SB_MFT}
\end{eqnarray}
\end{widetext}
where $\lambda^{a}$ is a Lagrange multiplier field imposing the
constraint and $q_{\alpha}^{a} =\langle f^{\dagger a}_{\alpha}
f^{a}_{\alpha} \rangle$.

Taking the $M\rightarrow \infty$ limit, we obtain the self-consistent
equations for the self-energy corrections,
\begin{eqnarray}
\Sigma_{c \;\sigma\sigma'}^{\;ab}(\tau) &=&
\\frac{V^2}{M} G_{f \\;\r\n\\sigma\\sigma'}^{\\; a b}(\\tau) (b^{a})^{*} b^b + \\frac{t^2}{M^2}\r\n\\delta_{\\sigma\\sigma'} G_{c \\; \\sigma}^{\\; a b}(\\tau) ,\r\n\\\\ \\Sigma_{f \\;\\sigma\\sigma'}^{\\;ab}(\\tau) &=& \\frac{V^2}{M} G_{c\r\n\\; \\sigma\\sigma'}^{\\; a b}(\\tau) (b^{b})^{*} b^a \\nn &+&\r\n\\frac{J^2}{2 M} \\sum_{s s'} G_{f \\; s s'}^{\\; a b}(\\tau) [\r\nR^{ab}_{s\\sigma \\sigma' s'}(\\tau) + R^{ba}_{\\sigma' s' s\r\n\\sigma}(-\\tau) ] , \\nn \\\\ \\Sigma_{cf \\; \\sigma\\sigma'}^{\\;\\;\r\nab}(\\tau) &=& - \\delta_{ab} \\delta_{\\sigma\\sigma'}\\delta(\\tau)\r\n\\frac{V^2}{M} \\sum_{s c} [\\langle f^{\\dagger c}_{s} c^{c}_{s}\r\n\\rangle b^c + {\\rm c.c.} ] (b^{a})^{*} \\nn &+& \\frac{V^2}{M}\r\nG_{fc \\; \\sigma\\sigma'}^{\\;\\; ab}(\\tau) (b^a b^b)^{*} , \\\\\r\n\\Sigma_{fc \\; \\sigma\\sigma'}^{\\;\\; ab}(\\tau) &=& - \\delta_{ab}\r\n\\delta_{\\sigma\\sigma'}\\delta(\\tau) \\frac{V^2}{M} \\sum_{s c}\r\n[\\langle f^{\\dagger c}_{s} c^{c}_{s} \\rangle b^c + {\\rm c.c.} ]\r\nb^{a} \\nn &+& \\frac{V^2}{M} G_{cf \\; \\sigma\\sigma'}^{\\;\\;\r\nab}(\\tau) b^a b^b ,\r\n\\end{eqnarray} respectively, where local Green's functions are given by\r\n\\begin{eqnarray}\r\nG_{c \\; \\sigma\\sigma'}^{\\; ab}(\\tau) &=& - \\langle T_c\r\nc^{a}_{\\sigma}(\\tau) c^{\\dagger b}_{\\sigma'} (0) \\rangle ,\r\n\\\\\r\nG_{f \\; \\sigma\\sigma'}^{\\; ab}(\\tau) &=& - \\langle T_c\r\nf^{a}_{\\sigma}(\\tau) f^{\\dagger b}_{\\sigma'} (0) \\rangle ,\r\n\\\\\r\nG_{cf \\; \\sigma\\sigma'}^{\\; ab}(\\tau) &=& - \\langle T_c\r\nc^{a}_{\\sigma}(\\tau) f^{\\dagger b}_{\\sigma'} (0) \\rangle ,\r\n\\\\\r\nG_{fc \\; \\sigma\\sigma'}^{\\; ab}(\\tau) &=& - \\langle T_c\r\nf^{a}_{\\sigma}(\\tau) c^{\\dagger b}_{\\sigma'} (0) \\rangle .\r\n\\end{eqnarray}\r\n\r\nIn the paramagnetic and symmetric replica phase these Green's\r\nfunctions are diagonal in the spin and replica indices, i.e.,\r\n$G^{ab}_{x \\sigma\\sigma'}(\\tau)=\\delta_{ab}\\delta_{\\sigma\\sigma'}\r\nG_{x}(\\tau)$ with 
$x=c,f,cf,fc$. Then, we obtain the Dyson
equation
\begin{widetext}
\begin{eqnarray}
\left(\begin{array}{cc} G_{c}(i \omega_l) & G_{fc}(i \omega_l) \\
G_{cf}(i \omega_l) & G_{f}(i \omega_l)
\end{array} \right) = \left( \begin{array}{cc}
i\omega_l + \mu - \Sigma_{c}(i \omega_l) & - \Sigma_{cf}(i
\omega_l) \\
- \Sigma_{fc}(i \omega_l) & i\omega_l - E_d -\lambda -
\Sigma_{f}(i \omega_l)
\end{array} \right)^{-1} ,
\end{eqnarray}
\end{widetext}
where $\omega_l=(2 l+1) \pi T$ with integer $l$. Accordingly, the
self-energy equations above are simplified as follows
\begin{eqnarray}
\Sigma_{c}(i\omega_l) &=& \frac{V^2}{M} G_{f}(i\omega_l) |b|^2 +
\frac{t^2}{M^2} G_{c}(i\omega_l) , \label{Sigma_C_MFT} \\
\Sigma_{f}(i\omega_l) &=& \frac{V^2}{M} G_{c}(i\omega_l) |b|^2 +
\frac{J^2}{2 M} T \sum_{s} \sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn
&\times& [R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s
s\sigma}(-i\nu_m) ] , \label{Sigma_F_MFT} \\
\Sigma_{cf}(i\omega_l) &=& \frac{V^2}{M} G_{fc}(i\omega_l)
(b^2)^{*} - n \frac{V^2}{M} (b^2)^{*} \sum_s \langle
f^{\dagger}_{s} c_{s}
+ c^{\dagger}_{s} f_{s} \rangle , \label{Sigma_CF_MFT} \nn \\
\Sigma_{fc}(i\omega_l) &=& \frac{V^2}{M} G_{cf}(i\omega_l) b^2 - n
\frac{V^2}{M} b^2 \sum_s \langle f^{\dagger}_{s} c_{s} +
c^{\dagger}_{s} f_{s} \rangle \label{Sigma_FC_MFT}
\end{eqnarray} in frequency space.
Note that $n$ is the number of replicas and that the last terms in
Eqs.~(\ref{Sigma_CF_MFT})-(\ref{Sigma_FC_MFT}) vanish in the limit
of $n \rightarrow 0$.
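To make the structure of this self-consistency concrete, the following minimal sketch iterates the replica-diagonal equations on a fermionic Matsubara grid in the simplified $J=0$ case, with $|b|^2$ and $\lambda$ frozen at illustrative values. All parameter values here are hypothetical, and the spin-susceptibility convolution in $\Sigma_f$ as well as the updates of $b$ and $\lambda$ are omitted for brevity; this is not the full calculation used for the figures.

```python
import numpy as np

# Sketch of the replica-diagonal self-consistency loop on a Matsubara grid.
# Hypothetical illustration values; the J-dependent convolution in Sigma_f
# and the updates of b and lambda are omitted.
T, mu, E_d, lam = 0.01, 0.0, -0.7, 1.0
t_hop, V, M, b2 = 1.0, 0.5, 2, 0.1                 # b2 stands for |b|^2
wl = (2 * np.arange(-512, 512) + 1) * np.pi * T    # fermionic frequencies
iw = 1j * wl

Sc = np.zeros_like(iw)                             # Sigma_c(i w_l)
Sf = np.zeros_like(iw)                             # Sigma_f(i w_l)
for _ in range(500):
    Gc = 1.0 / (iw + mu - Sc)                      # Dyson eqs with G_cf = 0
    Gf = 1.0 / (iw - E_d - lam - Sf)
    Sc_new = (V**2 / M) * b2 * Gf + (t_hop**2 / M**2) * Gc
    Sf_new = (V**2 / M) * b2 * Gc                  # J = 0: spin part dropped
    if max(np.max(np.abs(Sc_new - Sc)),
           np.max(np.abs(Sf_new - Sf))) < 1e-12:
        break
    Sc = 0.5 * Sc + 0.5 * Sc_new                   # linear mixing for stability
    Sf = 0.5 * Sf + 0.5 * Sf_new

Gc = 1.0 / (iw + mu - Sc)                          # converged Green's functions
Gf = 1.0 / (iw - E_d - lam - Sf)
```

The linear mixing damps the alternating transient of the continued-fraction-like map and selects the causal branch, for which ${\rm Im}\, G(i\omega_l) < 0$ at positive frequencies.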
$R_{s\\sigma\\sigma s}(i\\nu_m)$ is the local\r\nspin susceptibility, given by\r\n\\begin{eqnarray}\r\nR_{\\sigma s s \\sigma}(\\tau) = - G_{f \\sigma}(-\\tau) G_{f s}(\\tau)\r\n\\label{Spin_Corr_MFT}\r\n\\end{eqnarray} in the Fourier transformation.\r\n\r\nThe self-consistent equation for boson condensation is\r\n\\begin{eqnarray}\r\n&& b \\Big[ \\lambda + 2 V^2 T \\sum_{\\omega_l} G_{c}(i\\omega_l)\r\nG_{f}(i\\omega_l) \\nn && + V^2 T \\sum_{\\omega_l} \\Bigl\\{\r\nG_{fc}(i\\omega_l) G_{fc}(i\\omega_l) + G_{cf}(i\\omega_l)\r\nG_{cf}(i\\omega_l)\\Bigr\\} \\Big] =0 . \\label{Lambda_MFT} \\nn\r\n\\end{eqnarray}\r\nThe constraint equation is given by\r\n\\begin{eqnarray}\r\n|b|^2 + \\sum_{\\sigma} \\langle f^{\\dagger}_{\\sigma} f_{\\sigma}\r\n\\rangle = 1 . \\label{Constraint_MFT}\r\n\\end{eqnarray}\r\n\r\nThe main difference between the clean and disordered cases is that\r\nthe off diagonal Green's function $G_{fc}(i\\omega_l)$ should\r\nvanish in the presence of randomness in $V$ with its zero mean\r\nvalue while it is proportional to the condensation $b$ when the\r\naverage value of $V$ is finite. In the present situation we find\r\n$b^{a} = \\langle f^{a\\dagger}_{\\sigma} c_{\\sigma}^{a} \\rangle = 0$\r\nwhile $(b^{a})^{*}b^{b} = \\langle f^{a\\dagger}_{\\sigma}\r\nc_{\\sigma}^{a} c_{\\sigma'}^{b\\dagger} f_{\\sigma'}^{b} \\rangle\r\n\\equiv |b|^{2} \\delta_{ab} \\not= 0$. As a result, Eqs.\r\n(\\ref{Sigma_CF_MFT}) and (\\ref{Sigma_FC_MFT}) are identically\r\nvanishing in both left and right hand sides. This implies that the\r\nKondo phase is not characterized by the holon condensation but\r\ndescribed by finite density of holons. 
It is important to notice
that this gauge-invariant order parameter does not involve any
kind of symmetry breaking, as appropriate for the Kondo effect.

\subsection{Numerical analysis}

We use an iteration method in order to solve the mean field
equations (\ref{Sigma_C_MFT}), (\ref{Sigma_F_MFT}),
(\ref{Sigma_CF_MFT}), (\ref{Sigma_FC_MFT}), (\ref{Lambda_MFT}),
and (\ref{Constraint_MFT}). For a given $E_d+\lambda$, we iterate
to find all Green's functions from Eqs.
(\ref{Sigma_C_MFT})-(\ref{Sigma_FC_MFT}) with Eq.
(\ref{Spin_Corr_MFT}) and $b^2$ from Eq.~(\ref{Lambda_MFT}). Then,
we use Eq.~(\ref{Spin_Corr_MFT}) to calculate $\lambda$ and $E_d$.
We adjust the value of $E_d+\lambda$ in order to obtain the
desired value of $E_d$. Using the obtained $\lambda$ and $b^2$,
we calculate the Green's functions on the real frequency axis by
iteration. In the real-frequency calculation we introduce the
following functions \cite{Saso}
\begin{eqnarray}
\alpha_{\pm}(t)=\int_{-\infty}^{\infty} d\omega e^{-i \omega t}
\rho_{f}(\omega) f(\pm \omega/T),
\end{eqnarray}
where $\rho_{f}(\omega) = - {\rm Im} G_{f}(\omega+i0^{+})/\pi$ is
the density of states of f-electrons, and $f(x)=1/(\exp(x)+1)$ is
the Fermi-Dirac distribution function. Then, the self-energy
correction from spin correlations is expressed as follows
\begin{eqnarray}
&& \Sigma_{J}(i\omega_l) \equiv \frac{J^2}{2 M} T \sum_{s}
\sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn && ~~~~~~~~~~ \times
[R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s s\sigma}(-i\nu_m) ] \nn
&& = - i J^2 \int_{0}^{\infty} d t e^{i\omega t} \Bigl( [
\alpha_{+}(t)]^2 \alpha_{-}^{*}(t) + [ \alpha_{-}(t)]^2
\alpha_{+}^{*}(t) \Bigr) .
\\nn\r\n\\end{eqnarray} Performing the Fourier transformation, we\r\ncalculate $\\alpha_{\\pm}(t)$ and obtain $\\Sigma_{J}(\\omega)$.\r\n\r\n\\begin{figure}[h]\r\n\\includegraphics[width=0.48\\textwidth]{crit.eps}\r\n\\caption{The phase diagram of the strongly disordered Anderson\r\nlattice model in the DMFT approximation ($E_d=-1$, $\\mu=0$,\r\n$T=0.01$, $t=1$, $M=2$).} \\label{fig1}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n\\includegraphics[width=0.48\\textwidth]{imself.eps}\r\n\\caption{The imaginary part of the self-energy of conduction\r\nelectrons and that of localized electrons for various values of\r\n$J$ ($V=0.5$, $E_d=-0.7$, $\\mu=0$, $T=0.01$, $t=1$, $M$=2).}\r\n\\label{fig2}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n\\includegraphics[width=0.48\\textwidth]{dosfcbw.eps}\r\n\\caption{Density of states of conduction ($\\rho_{c}(\\omega)$) and\r\nlocalized ($\\rho_{f}(\\omega)$) electrons for various values of $J$\r\n($V=0.5$, $E_d=-0.7$, $\\mu=0$, $T=0.01$, $t=1$, $M=2$). }\r\n\\label{fig3}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n\\includegraphics[width=0.48\\textwidth]{imchi.eps}\r\n\\caption{Local spin susceptibility for various values of $J$\r\n($V=0.5$, $E_d=-0.7$, $\\mu=0$, $T=0.01$, $t=1$, $M=2$).}\r\n\\label{fig4}\r\n\\end{figure}\r\n\r\nFigure \\ref{fig1} shows the phase diagram of the strongly\r\ndisordered Anderson lattice model in the plane of $(V, J)$, where\r\n$V$ and $J$ are variances for the Kondo and RKKY interactions,\r\nrespectively. The phase boundary is characterized by $|b|^{2} =\r\n0$, below which $|b|^{2} \\not= 0$ appears to cause effective\r\nhybridization between conduction electrons and localized fermions\r\nalthough our numerical analysis shows $\\langle\r\nf^{\\dagger}_{\\sigma} c_{\\sigma} \\rangle =0$, meaning\r\n$\\Sigma_{cf(fc)}(i\\omega) = 0$ and $G_{cf(fc)}(i\\omega) = 0$ in\r\nEqs. (\\ref{Sigma_CF_MFT}) and (\\ref{Sigma_FC_MFT}).\r\n\r\nIn Fig. 
\ref{fig2} one finds that the effective hybridization
dramatically enhances the scattering rate of conduction electrons
around the Fermi energy, while the scattering rate of localized
electrons is reduced at the resonance energy. The enhancement of
the imaginary part of the conduction-electron self-energy results
from the Kondo effect; in the clean case it is given by a delta
function associated with the Kondo effect \cite{Hewson_Book}. This
self-energy effect is reflected in the spectral functions shown in
Fig. \ref{fig3}, where a pseudogap feature arises for conduction
electrons while a sharply defined peak appears for localized
electrons, identified with the Kondo resonance, although the
description of the Kondo effect differs from the clean case.
Increasing the RKKY coupling suppresses the Kondo effect, as
expected. In this Kondo phase the local spin susceptibility is
shown in Fig. \ref{fig4}, displaying the typical $\omega$-linear
behavior in the low frequency limit, which is nothing but Fermi
liquid physics for spin correlations \cite{Olivier}. Increasing
$J$, incoherent spin correlations are enhanced, consistent with
spin liquid physics \cite{Olivier}.

One can check our calculation by considering the $J = 0$ limit,
where the known result should be recovered. In this limit we
obtain an analytic expression for $V_c$ at half filling ($\mu=0$)
\begin{eqnarray}
V_c(J=0) &=& \sqrt{\frac{E_d}{2 P_c }}, \\
P_c &=& \int_{-1}^{1} d\omega \rho_{0}(\omega)
\frac{f(\omega/T)-f(0)}{\omega} ,
\end{eqnarray}
where $\rho_{0}(\omega)=\frac{2}{\pi} \sqrt{1-\omega^2}$ is the
bare density of states of conduction electrons.
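As a sanity check on this expression, $P_c$ can be evaluated by simple quadrature and $V_c(J=0)$ obtained from it. The sketch below uses illustrative parameter values; note that $E_d < 0$ and $P_c < 0$ here, so the argument of the square root is positive.

```python
import numpy as np

def fermi(x):
    # Numerically stable Fermi-Dirac function 1/(e^x + 1).
    return 0.5 * (1.0 - np.tanh(0.5 * x))

def V_c(E_d, T, n=200000):
    # P_c = \int_{-1}^{1} dw rho_0(w) [f(w/T) - f(0)] / w with
    # rho_0(w) = (2/pi) sqrt(1 - w^2); the integrand is finite at w = 0,
    # and an even number of grid points keeps w = 0 off the grid.
    w = np.linspace(-1.0, 1.0, n)
    rho0 = (2.0 / np.pi) * np.sqrt(1.0 - w**2)
    integrand = rho0 * (fermi(w / T) - 0.5) / w
    dw = w[1] - w[0]
    P_c = 0.5 * dw * np.sum(integrand[:-1] + integrand[1:])  # trapezoid rule
    return np.sqrt(E_d / (2.0 * P_c)), P_c
```

Since $|P_c|$ grows logarithmically as $T$ is lowered, $V_c(J=0)$ decreases with decreasing temperature, consistent with the statement below that $V_c(J=0) \rightarrow 0$ at zero temperature.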
One can check that
$V_c(J=0) \rightarrow 0$ in the zero temperature limit because
$|P_{c}| \rightarrow \infty$.

\section{Nature of quantum criticality}

\subsection{Beyond the saddle-point analysis: Non-crossing approximation}

Resorting to the slave-boson mean-field approximation, we
discussed the phase diagram of the strongly disordered Anderson
lattice model, where a quantum phase transition from a spin liquid
state to a dirty ``heavy-fermion'' Fermi liquid phase appears upon
increasing $V/J$, the ratio of the variances of the hybridization
and RKKY interactions. Differently from the heavy-fermion quantum
transition in the clean case, the order parameter turns out
to be the density of holons instead of the holon condensate.
Evaluating the self-energies of both conduction electrons and
localized electrons, we could identify the Kondo effect from each
spectral function. In addition, we obtained a local spin
susceptibility consistent with Fermi liquid physics.

The next task concerns the nature of quantum criticality between
the Kondo and spin liquid phases. This question should be
addressed beyond the saddle-point analysis.
Introducing quantum\r\ncorrections in the non-crossing approximation, justified in the\r\n$M\\rightarrow \\infty$ limit, we investigate the quantum critical\r\npoint, where density fluctuations of holons are critical.\r\n\r\nReleasing the slave-boson mean-field approximation to take into\r\naccount holon excitations, we reach the following self-consistent\r\nequations for self-energy corrections,\r\n\\begin{eqnarray}\r\n\\Sigma_{c \\;\\sigma\\sigma'}^{\\;ab}(\\tau) = \\frac{V^2}{M} G_{f \\;\r\n\\sigma\\sigma'}^{\\; a b}(\\tau) G_{b}^{a b}(-\\tau) + \\frac{t^2}{M^2}\r\n\\delta_{\\sigma\\sigma'} G_{c \\; \\sigma}^{\\; a b}(\\tau) ,\r\n\\label{Sigma_C_NCA}\r\n\\end{eqnarray}\r\n\\begin{eqnarray}\r\n\\Sigma_{f \\;\\sigma\\sigma'}^{\\;ab}(\\tau) &=& \\frac{V^2}{M} G_{c \\;\r\n\\sigma\\sigma'}^{\\; a b}(\\tau) G_{b}^{a b}(\\tau) \\nn &+&\r\n\\frac{J^2}{2 M} \\sum_{s s'} G_{f \\; s s'}^{\\; a b}(\\tau) [\r\nR^{ab}_{s\\sigma \\sigma' s'}(\\tau) + R^{ba}_{\\sigma' s' s\r\n\\sigma}(-\\tau) ] , \\label{Sigma_F_NCA} \\nn\r\n\\end{eqnarray}\r\n\\begin{eqnarray}\r\n\\Sigma_{cf \\; \\sigma\\sigma'}^{\\;\\; ab}(\\tau) = - \\delta_{ab}\r\n\\delta_{\\sigma\\sigma'}\\delta(\\tau) \\frac{V^2}{M} \\sum_{s c} \\int\r\nd\\tau_1 \\langle f^{\\dagger c}_{s} c^{c}_{s} \\rangle G_{b}^{c\r\na}(\\tau_1-\\tau') , \\label{Sigma_CF_NCA} \\nn\r\n\\end{eqnarray}\r\n\\begin{eqnarray}\r\n\\Sigma_{fc \\; \\sigma\\sigma'}^{\\;\\; ab}(\\tau) = - \\delta_{ab}\r\n\\delta_{\\sigma\\sigma'}\\delta(\\tau) \\frac{V^2}{M} \\sum_{s c}\\int\r\nd\\tau_1 \\langle c^{\\dagger c}_{s} f^{c}_{s} \\rangle G_{b}^{a\r\nc}(\\tau-\\tau_1) , \\label{Sigma_FC_NCA} \\nn\r\n\\end{eqnarray}\r\n\\begin{eqnarray}\r\n\\Sigma_{b}^{a b}(\\tau) = \\frac{V^2}{M} \\sum_{\\sigma\\sigma'} G_{f\r\n\\; \\sigma\\sigma'}^{\\; b a}(\\tau) G_{c \\; \\sigma'\\sigma}^{\\; b\r\na}(-\\tau) . 
\label{Sigma_B_NCA}
\end{eqnarray}

Since we have considered the paramagnetic and replica-symmetric
phase, it is natural to assume these symmetries at the quantum
critical point. Note that the off-diagonal self-energies,
$\Sigma_{cf}(i\omega_l)$ and $\Sigma_{fc}(i\omega_l)$, are just
constants, proportional to $\langle f^{\dagger}_{\sigma}
c_{\sigma} \rangle$ and $\langle c^{\dagger}_{\sigma} f_{\sigma}
\rangle$, respectively. As a result, $\Sigma_{cf}(i\omega_l) =
\Sigma_{fc}(i\omega_l) = 0$ should be satisfied at the quantum
critical point, as in the Kondo phase, because $\langle
f^{\dagger}_{\sigma} c_{\sigma} \rangle = \langle
c^{\dagger}_{\sigma} f_{\sigma} \rangle = 0$. Then, we reach the
following self-consistent equations, called the non-crossing
approximation,
\begin{eqnarray}
\Sigma_{c}(\tau) &=& \frac{V^2}{M} G_{f}(\tau) G_{b}(-\tau) +
\frac{t^2}{M^2} G_{c}(\tau) ,
\label{Sigma_C_NCA_GF} \\
\Sigma_{f}(\tau) &=& \frac{V^2}{M} G_{c}(\tau) G_{b}(\tau) - J^2
[G_{f}(\tau)]^2 G_{f}(-\tau) , \label{Sigma_F_NCA_GF} \\
\Sigma_{b}(\tau) &=& V^2 G_{c}(-\tau) G_{f}(\tau) .
\label{Sigma_B_NCA_GF}
\end{eqnarray}
Local Green's functions are given by
\begin{eqnarray}
G_{c}(i\omega_l) &=& \Big[i\omega_l + \mu - \Sigma_{c}(i\omega_l)
\Big]^{-1} , \label{Dyson_Gc} \\
G_{f}(i\omega_l) &=& \Big[i\omega_l - E_d -\lambda -
\Sigma_{f}(i\omega_l) \Big]^{-1} , \label{Dyson_Gf} \\
G_{b}(i \nu_{l}) &=& \Big[ i\nu_{l} -\lambda -\Sigma_{b}(i\nu_l)
\Big]^{-1} , \label{Dyson_Gb}
\end{eqnarray}
where $\omega_l=(2 l+1) \pi T$ is for fermions and $\nu_{l} = 2 l
\pi T$ is for bosons.

\subsection{Asymptotic behavior at zero temperature}

At quantum criticality, power-law scaling solutions are
expected.
Actually, if the second term is neglected in Eq.
(\ref{Sigma_F_NCA_GF}), Eqs. (\ref{Sigma_F_NCA_GF}) and
(\ref{Sigma_B_NCA_GF}) reduce to those of the multi-channel
Kondo effect in the non-crossing approximation \cite{Hewson_Book}.
Power-law solutions are well known in the regime $1/T_K \ll
\tau \ll \beta=1/T \rightarrow \infty$, where $T_{K} =
D[\Gamma_{c}/\pi D]^{1/M} \exp[\pi E_{d}/M \Gamma_{c}]$ is an
effective Kondo temperature \cite{Tien_Kim}, with $D$ the
conduction bandwidth and $\Gamma_{c} = \pi \rho_{c}
\frac{V^{2}}{M}$ the effective hybridization. In the presence of
the RKKY interaction [the second term in Eq.
(\ref{Sigma_F_NCA_GF})], the effective hybridization is reduced,
with $\Gamma_{c}$ replaced by $\Gamma_{c}^{J} \approx \pi
\rho_{c} (\frac{V^{2}}{M} - J^{2})$.

Our power-law ansatz is as follows
\begin{eqnarray}
G_{c} &=& \frac{A_c}{\tau^{\Delta_c}} , \\
G_{f} &=& \frac{A_f}{\tau^{\Delta_f}} , \\
G_{b} &=& \frac{A_b}{\tau^{\Delta_b}} ,
\end{eqnarray} where $A_{c}$, $A_{f}$, and $A_{b}$ are positive
numerical constants. In frequency space these become
\begin{eqnarray}
G_{c}(\omega) &=& A_c C_{\Delta_{c}-1} \omega^{\Delta_c-1}, \label{Dyson_W_Gc} \\
G_{f}(\omega) &=& A_f C_{\Delta_{f}-1} \omega^{\Delta_f-1}, \label{Dyson_W_Gf} \\
G_{b}(\omega) &=& A_b C_{\Delta_{b}-1} \omega^{\Delta_b-1},
\label{Dyson_W_Gb}
\end{eqnarray}
where $C_{\Delta_{c,f,b}} = \int_{-\infty}^{\infty} d x \frac{e^{i
x}}{x^{\Delta_{c,f,b}+1}}.$

Inserting Eqs. (\ref{Dyson_W_Gc})-(\ref{Dyson_W_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we obtain the
scaling exponents $\Delta_{c}$, $\Delta_{f}$, and $\Delta_{b}$. In
appendix C-1 we show in detail how to find these critical
exponents. Two fixed points are allowed.
One coincides with the
multi-channel Kondo effect, given by $\Delta_{c} = 1$,
$\Delta_{f} = \frac{M}{M+1}$, and $\Delta_{b} = \frac{1}{M+1}$ with $M
= 2$, where contributions from spin fluctuations to self-energy
corrections are irrelevant compared with holon fluctuations. The
other is $\Delta_{c} = 1$ and $\Delta_{f} = \Delta_{b} =
\frac{1}{2}$, where spin correlations are as critical as
holon fluctuations.

One can understand the critical exponent $\Delta_{f} = 1/2$ as
reflecting the proximity to spin liquid physics \cite{Sachdev_SG}.
Considering the $V \rightarrow 0$ limit, we obtain the scaling
exponents $\Delta_c = 1$ and $\Delta_f = 1/2$ from the scaling
equations (\ref{92}) and (\ref{93}). Thus, $G_{c}(\omega) \sim
\mbox{sgn}(\omega)$ and $G_{f}(\omega) \sim 1/\sqrt{\omega}$
result for $\omega \rightarrow 0$. In this respect both spin
fluctuations and holon excitations are critical with equal strength
at this quantum critical point.

\subsection{Finite temperature scaling behavior}

We solve Eqs. (\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}) in the
regime $\tau, \beta \gg 1/T_K$ with arbitrary $\tau/\beta$, where
the scaling ansatz at zero temperature is generalized as follows
\begin{eqnarray}
G_{c}(\tau) &=& A_{c} \beta^{-\Delta_{c}}
g_{c}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gc} \\
G_{f}(\tau) &=& A_{f} \beta^{-\Delta_{f}}
g_{f}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gf} \\
G_{b}(\tau) &=& A_{b} \beta^{-\Delta_{b}}
g_{b}\Big(\frac{\tau}{\beta} \Big) . \label{Dyson_T_Gb}
\end{eqnarray}
Here
\begin{eqnarray}
g_{\alpha}(x) = \bigg(\frac{\pi}{\sin(\pi
x)}\bigg)^{\Delta_\alpha} \label{T_Scaling}
\end{eqnarray}
with $\alpha=c,f,b$ is the scaling function at finite
temperatures.
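As a quick numerical consistency check on this scaling function, for $\Delta_\alpha = 1/2$ (the exponent of $G_f$ and $G_b$ at the critical point) the zeroth Fourier coefficient $\int_0^1 dt\, g_\alpha(t)$ has the closed form $\Gamma(1/4)/\Gamma(3/4)$. The sketch below verifies this by quadrature; the substitution $t = u^2$, together with the $t \rightarrow 1-t$ symmetry of $\sin(\pi t)$, removes the integrable endpoint singularities.

```python
import math
import numpy as np

# Check: \int_0^1 (pi / sin(pi t))^{1/2} dt = Gamma(1/4) / Gamma(3/4).
# With t = u^2 the integrand 2 u (pi / sin(pi u^2))^{1/2} tends to the
# finite value 2 as u -> 0, so the trapezoid rule applies directly.
u = np.linspace(1e-8, 1.0 / math.sqrt(2.0), 200001)
integrand = np.sqrt(np.pi / np.sin(np.pi * u**2)) * 2.0 * u
du = u[1] - u[0]
I_half = 0.5 * du * np.sum(integrand[:-1] + integrand[1:])
I = 2.0 * I_half          # symmetry about t = 1/2 doubles the half-integral

exact = math.gamma(0.25) / math.gamma(0.75)   # = 2.9587...
```

The same change of variables is useful when evaluating the finite-frequency coefficients $\Phi_\alpha$ numerically.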
In the frequency space we obtain
\begin{eqnarray}
G_{c}(i\omega_l) &=& A_c \beta^{1-\Delta_c}
\Phi_c(i\bar{\omega}_l) , \label{Dyson_TW_Gc} \\
G_{f}(i\omega_l) &=& A_f \beta^{1-\Delta_f}
\Phi_f(i\bar{\omega}_l) , \label{Dyson_TW_Gf} \\
G_{b}(i\nu_l) &=& A_b \beta^{1-\Delta_b} \Phi_b(i\bar{\nu}_l) ,
\label{Dyson_TW_Gb}
\end{eqnarray}
where $\bar{\omega}_l=(2 l+1) \pi$, $\bar{\nu}_l= 2 l \pi$, and
\begin{eqnarray}
\Phi_{\alpha}(i\bar{x}) = \int_{0}^{1} d t e^{i \bar{x} t}
g_{\alpha}(t) . \label{Phi_alpha}
\end{eqnarray}

Inserting Eqs. (\ref{Dyson_TW_Gc})-(\ref{Dyson_TW_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we find two fixed
points, essentially the same as in the $T = 0$ case. However, the
scaling functions $\Phi_c(i\bar{\omega}_l)$,
$\Phi_f(i\bar{\omega}_l)$, and $\Phi_b(i\bar{\nu}_l)$ are somewhat
complicated. All scaling functions are derived in appendix C-2.

\subsection{Spin susceptibility}

We evaluate the local spin susceptibility, given by
\begin{eqnarray}
\chi(\tau) &=& G_{f}(\tau) G_{f}(-\tau) , \nonumber \\
&=& A_f^2 \beta^{-2 \Delta_f} \bigg(\frac{\pi}{\sin(\pi
\tau/\beta)} \bigg)^{2\Delta_f} . \label{126}
\end{eqnarray}
The imaginary part of the spin susceptibility
$\chi^{''}(\omega)={\rm Im} \; \chi(\omega+ i0^{+})$ can be found
from
\begin{eqnarray}
\chi(\tau) = \int \frac{d \omega}{\pi} \frac{e^{-\tau
\omega}}{1-e^{-\beta \omega}} \chi^{''}(\omega) . \label{127}
\end{eqnarray}

Inserting the scaling ansatz
\begin{eqnarray}
\chi^{''}(\omega) = A_f^2 \beta^{1-2\Delta_f}
\phi\Big(\frac{\omega}{T}\Big) \label{128}
\end{eqnarray}
into Eq. (\ref{127}) with Eq.
(\ref{126}), we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} \frac{e^{-x \tau/\beta}}{1-e^{-x}} \phi(x) =
\bigg(\frac{\pi}{\sin(\pi \tau/\beta)} \bigg)^{2\Delta_f} .
\end{eqnarray}
Changing variables to $t=i(\tau/\beta -1/2)$, we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} e^{i x t} \frac{\phi(x)}{e^{x}-e^{-x}} =
\bigg(\frac{\pi}{\cosh(\pi t)} \bigg)^{2\Delta_f} .
\end{eqnarray}
As a result, we find the scaling function
\begin{eqnarray}
\phi(x) = 2 (2\pi)^{2 \Delta_f-1} \sinh\Big(\frac{x}{2}\Big)
\frac{\Gamma(\Delta_f+i x/2 \pi)\Gamma(\Delta_f - i
x/2\pi)}{\Gamma(2\Delta_f)} . \nn
\end{eqnarray}
This coincides with the spin spectrum of the spin liquid state
when $V = 0$ \cite{Olivier}.

\subsection{Discussion: Deconfined local quantum criticality}

The local quantum critical point characterized by $\Delta_{c} = 1$
and $\Delta_{f} = \Delta_{b} = 1/2$ is the genuine critical point
of the spin-liquid to local-Fermi-liquid transition because such a
fixed point can be connected naturally to the spin liquid state
($\Delta_{c} = 1$ and $\Delta_{f} = 1/2$). This fixed point
results from the fact that the spinon self-energy correction from
RKKY spin fluctuations is exactly of the same order as that from
critical holon excitations. It is straightforward to see that the
critical exponent of the local spin susceptibility is exactly the
same as that of the local charge susceptibility ($2\Delta_{f} =
2\Delta_{b} = 1$), both proportional to $1/\tau$. Since the spinon
spin-density operator differs from the holon charge-density
operator with respect to symmetry at the lattice scale, the same
critical exponent implies enhancement of the original symmetry at
low energies.
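At the critical point $\Delta_f = 1/2$ the scaling function simplifies: using $|\Gamma(1/2 + iy)|^2 = \pi/\cosh(\pi y)$ one finds $\phi(x) = 2\pi \tanh(x/2)$, a familiar $\omega/T$-scaling form. A quick numerical check of this reduction (using the complex Gamma function from scipy):

```python
import math
from scipy.special import gamma

def phi(x, Delta_f=0.5):
    # Scaling function of the local spin susceptibility, as derived above:
    # phi(x) = 2 (2 pi)^{2 Df - 1} sinh(x/2) |Gamma(Df + i x / 2 pi)|^2 / Gamma(2 Df).
    pref = 2.0 * (2.0 * math.pi) ** (2.0 * Delta_f - 1.0)
    g = gamma(Delta_f + 1j * x / (2.0 * math.pi))
    return pref * math.sinh(x / 2.0) * abs(g) ** 2 / gamma(2.0 * Delta_f)
```

For $\Delta_f = 1/2$ the prefactor equals $2$ and $|\Gamma(1/2 + ix/2\pi)|^2 = \pi/\cosh(x/2)$, so the Gamma-function expression collapses to $2\pi \tanh(x/2)$ identically.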
The symmetry enhancement sometimes allows a\r\ntopological term, which assigns a nontrivial quantum number to a\r\ntopological soliton, identified with an excitation of quantum\r\nnumber fractionalization. This mathematical structure is actually\r\nrealized in an antiferromagnetic spin chain \\cite{Tsvelik_Book},\r\ngeneralized into the two dimensional case\r\n\\cite{Senthil_DQCP,Tanaka_SO5}.\r\n\r\n\r\n\r\n\r\n\r\n\r\nWe propose the following local field theory in terms of physically\r\nobservable fields \\bqa Z_{eff} &=& \\int D\r\n\\boldsymbol{\\Psi}^{a}(\\tau)\r\n\\delta\\Bigl(|\\boldsymbol{\\Psi}^{a}(\\tau)|^{2} - 1\\Bigr) e^{-\r\n\\mathcal{S}_{eff}} , \\nn \\mathcal{S}_{eff} &=& - \\frac{g^{2}}{2M}\r\n\\int_{0}^{\\beta} d \\tau \\int_{0}^{\\beta} d \\tau'\r\n\\boldsymbol{\\Psi}^{a T}(\\tau)\r\n\\boldsymbol{\\Upsilon}^{ab}(\\tau-\\tau')\r\n\\boldsymbol{\\Psi}^{b}(\\tau') \\nn &+& \\mathcal{S}_{top} ,\r\n\\label{O4_Sigma_Model} \\eqa where \\bqa &&\r\n\\boldsymbol{\\Psi}^{a}(\\tau) = \\left(\r\n\\begin{array}{c} \\boldsymbol{S}^{a}(\\tau) \\\\ \\rho^{a}(\\tau)\r\n\\end{array} \\right) \\eqa represents an $O(4)$ vector, satisfying\r\nthe constraint of the delta function.\r\n$\\boldsymbol{\\Upsilon}^{ab}(\\tau-\\tau')$ determines dynamics of\r\nthe $O(4)$ vector, resulting from spin and holon dynamics in\r\nprinciple. However, it is extremely difficult to derive Eq.\r\n(\\ref{O4_Sigma_Model}) from Eq. (\\ref{DMFT_Action}) because the\r\ndensity part for the holon field in Eq. (\\ref{O4_Sigma_Model})\r\ncannot result from Eq. (\\ref{DMFT_Action}) in a standard way. What\r\nwe have shown is that the renormalized dynamics for the O(4)\r\nvector field follows $1/\\tau$ asymptotically, where $\\tau$ is the\r\nimaginary time. This information should be introduced in\r\n$\\boldsymbol{\\Upsilon}^{ab}(\\tau-\\tau')$. 
$g \\propto V/J$ is an\r\neffective coupling constant, and $\\mathcal{S}_{top}$ is a possible\r\ntopological term.\r\n\r\nOne can represent the O(4) vector generally as follows\r\n\\begin{widetext} \\bqa \\boldsymbol{\\Psi}^{a} : \\tau \\longrightarrow\r\n\\Bigl( \\sin \\theta^{a}(\\tau) \\sin \\phi^{a}(\\tau) \\cos\r\n\\varphi^{a}(\\tau) , \\sin \\theta^{a}(\\tau) \\sin \\phi^{a}(\\tau) \\sin\r\n\\varphi^{a}(\\tau) , \\sin \\theta^{a}(\\tau) \\cos \\phi^{a}(\\tau) ,\r\n\\cos \\theta^{a}(\\tau) \\Bigr) , \\label{O4_Vector} \\eqa\r\n\\end{widetext} where $\\theta^{a}(\\tau), \\phi^{a}(\\tau),\r\n\\varphi^{a}(\\tau)$ are three angle coordinates for the O(4)\r\nvector. It is essential to observe that the target manifold for\r\nthe O(4) vector is not a simple sphere type, but more complicated\r\nbecause the last component of the O(4) vector is the charge\r\ndensity field, where three spin components lie in $- 1 \\leq\r\nS^{a}_{x}(\\tau), S^{a}_{y}(\\tau), S^{a}_{z}(\\tau) \\leq 1$ while\r\nthe charge density should be positive, $0 \\leq \\rho^{a}(\\tau) \\leq\r\n1$. This leads us to identify the lower half sphere with the upper\r\nhalf sphere. Considering that $\\sin\\theta^{a}(\\tau)$ can be folded\r\non $\\pi/2$, we are allowed to construct our target manifold to\r\nhave a periodicity, given by\r\n$\\boldsymbol{\\Psi}^{a}(\\theta^{a},\\phi^{a},\\varphi^{a}) =\r\n\\boldsymbol{\\Psi}^{a}(\\pi - \\theta^{a},\\phi^{a},\\varphi^{a})$.\r\nThis folded space allows a nontrivial topological excitation.\r\n\r\nSuppose the boundary configuration of\r\n$\\boldsymbol{\\Psi}^{a}(0,\\phi^{a},\\varphi^{a}; \\tau = 0)$ and\r\n$\\boldsymbol{\\Psi}^{a}(\\pi,\\phi^{a},\\varphi^{a}; \\tau = \\beta)$,\r\nconnected by $\\boldsymbol{\\Psi}^{a}(\\pi/2,\\phi^{a},\\varphi^{a}; 0\r\n< \\tau < \\beta)$. 
Interestingly, this configuration is {\it\r\ntopologically} distinguishable from the configuration of\r\n$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = 0)$ and\r\n$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = \beta)$ with\r\n$\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <\r\n\beta)$ because of the folded structure. The second configuration\r\ncan shrink to a point while the first cannot; the first is therefore\r\nidentified with a topologically nontrivial excitation. This topological\r\nexcitation carries a spin quantum number $1/2$ in its core, given\r\nby $\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <\r\n\beta) = \Bigl( \sin \phi^{a}(\tau) \cos \varphi^{a}(\tau) , \sin\r\n\phi^{a}(\tau) \sin \varphi^{a}(\tau) , \cos \phi^{a}(\tau) , 0\r\n\Bigr)$. This is the spinon excitation, described by an O(3)\r\nnonlinear $\sigma$ model with the nontrivial spin correlation\r\nfunction $\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$, where the\r\ntopological term reduces to the single-spin Berry phase term in\r\nthe instanton core.\r\n\r\nIn this local impurity picture the local Fermi liquid phase is\r\ndescribed by gapping of instantons while the spin liquid state is\r\ncharacterized by condensation of instantons. Of course, the low\r\ndimensionality does not allow condensation, resulting in critical\r\ndynamics for spinons.
This scenario clarifies the\r\nLandau-Ginzburg-Wilson-forbidden duality between the Kondo singlet\r\nand the critical local moment of the impurity state, made possible by\r\nthe presence of the topological term.\r\n\r\nIf the symmetry enhancement does not occur, the effective local\r\nfield theory will be given by \bqa Z_{eff} &=& \int\r\nD\boldsymbol{S}^{a}(\tau) D \rho^{a}(\tau) e^{- \mathcal{S}_{eff}}\r\n, \nn \mathcal{S}_{eff} &=& - \int_{0}^{\beta} d \tau\r\n\int_{0}^{\beta} d \tau' \Bigl\{ \frac{V^{2}}{2M} \rho^{a}(\tau)\r\n\chi^{ab}(\tau-\tau') \rho^{b}(\tau') \nn &+& \frac{J^{2}}{2M}\r\n\boldsymbol{S}^{a}(\tau) R^{ab} (\tau-\tau')\r\n\boldsymbol{S}^{b}(\tau') \Bigr\} + \mathcal{S}_{B} \eqa with the\r\nsingle-spin Berry phase term \bqa \mathcal{S}_{B} = - 2 \pi i S\r\n\int_{0}^{1} d u \int_{0}^{\beta} d \tau \frac{1}{4\pi}\r\n\boldsymbol{S}^{a}(u,\tau) \cdot\r\n\partial_{u} \boldsymbol{S}^{a}(u,\tau) \times\r\n\partial_{\tau} \boldsymbol{S}^{a}(u,\tau) , \nonumber \eqa where the charge\r\ndynamics $\chi^{ab}(\tau-\tau')$ will differ from the spin\r\ndynamics $R^{ab} (\tau-\tau')$. This forbids spin\r\nfractionalization in the critical impurity dynamics, where the\r\ninstanton construction is not realized due to the absence of the\r\nsymmetry enhancement.\r\n\r\n\section{Summary}\r\n\r\nIn this paper we have studied the Anderson lattice model with\r\nstrong randomness in both hybridization and RKKY interactions,\r\nwhere their average values are zero. In the absence of random\r\nhybridization, quantum fluctuations in the spin dynamics render the spin\r\nglass phase unstable at finite temperatures, giving rise to the\r\nspin liquid state, characterized by the $\omega/T$ scaling spin\r\nspectrum consistent with the marginal Fermi-liquid phenomenology\r\n\cite{Sachdev_SG}.
In the absence of random RKKY interactions the\r\nKondo effect arises \cite{Kondo_Disorder}, but it differs from\r\nthat in the clean case. The dirty ``heavy fermion'' phase at\r\nstrongly disordered Kondo coupling is characterized by a finite\r\ndensity of holons instead of holon condensation. Nevertheless,\r\neffective hybridization does exist, causing the Kondo resonance\r\npeak in the spectral function. As long as the variation of the\r\neffective Kondo temperature is not too large, this disordered Kondo\r\nphase is identified with the local Fermi liquid state because\r\nthe essential physics results from single-impurity dynamics,\r\nin contrast to the clean lattice model.\r\n\r\nTaking into account both random hybridization and RKKY\r\ninteractions, we find a quantum phase transition from the spin\r\nliquid state to the local Fermi liquid phase at a critical\r\n$(V_{c}, J_{c})$. Each phase turns out to be adiabatically\r\nconnected to its respective limit, i.e., the spin liquid phase to $V =\r\n0$ and the local Fermi liquid phase to $J = 0$.\r\nWe have checked this explicitly by considering the local spin\r\nsusceptibility and the spectral function for localized electrons.\r\n\r\nIn order to investigate the quantum critical physics, we introduce\r\nquantum corrections from critical holon fluctuations in the\r\nnon-crossing approximation beyond the slave-boson mean-field\r\nanalysis. We find two kinds of power-law scaling solutions for the\r\nself-energy corrections of conduction electrons, spinons, and\r\nholons. The first solution turns out to coincide with that of the\r\nmulti-channel Kondo effect, where the effects of spin fluctuations are\r\nsubleading compared with critical holon fluctuations. In this\r\nrespect, this quantum critical point is characterized by breakdown\r\nof the Kondo effect while spin fluctuations can be neglected.
On\r\nthe other hand, the second scaling solution shows that both holon\r\nexcitations and spinon fluctuations are critical with the same\r\nstrength, reflected in the fact that the density-density\r\ncorrelation function of holons has exactly the same critical\r\nexponent as the local spin-spin correlation function of spinons.\r\n\r\nWe argued that the second quantum critical point implies an\r\nenhanced emergent symmetry from O(3)$\times$O(2)\r\n(spin$\otimes$charge) to O(4) at low energies, forcing us to\r\nconstruct an O(4) nonlinear $\sigma$ model on the folded target\r\nmanifold as an effective field theory for this disorder-driven\r\nlocal quantum critical point. Our effective local field theory\r\nidentifies spinons with instantons, describing the local\r\nFermi-liquid to spin-liquid transition as a condensation\r\ntransition of instantons, although due to the low dimensionality the\r\ninstanton dynamics remains critical in the spin liquid state\r\ninstead of condensing. This construction completes a novel duality\r\nbetween the Kondo and critical local moment phases in the strongly\r\ndisordered Anderson lattice model.\r\n\r\nWe explicitly checked that a similar result can be found in the\r\nextended DMFT for the clean Kondo lattice model, where two fixed\r\npoint solutions are allowed \cite{EDMFT_Spin,EDMFT_NCA}. One is\r\nthe same as the multi-channel Kondo effect and the other is\r\nessentially the same as the second solution in this paper. In this\r\nrespect we believe that the present scenario works in the extended\r\nDMFT framework, although it is applicable only in two spatial dimensions\r\n\cite{EDMFT}.\r\n\r\nOne may question the applicability of the DMFT framework to this\r\ndisorder problem.
However, the hybridization term turns out to be\r\nexactly local in the case of strong randomness, while the RKKY term\r\nis safely approximated as local for the spin liquid state,\r\nwhich is expected to be stable against the spin glass phase in the case of\r\nquantum spins. This situation should be distinguished from the\r\nclean case, where the DMFT approximation causes several problems,\r\nsuch as the stability of the spin liquid state \cite{EDMFT_Rosch}\r\nand the strong dependence of the spin dynamics on dimensionality\r\n\cite{EDMFT}.\r\n\r\n\section*{Acknowledgement}\r\n\r\nThis work was supported by the National Research Foundation of\r\nKorea (NRF) grant funded by the Korea government (MEST) (No.\r\n2010-0074542). M.-T. was also supported by the Vietnamese\r\nNAFOSTED.\r\n\r\n\\section{Introduction}\n\nThe moment problem is a classical question in analysis, well studied because of its \nimportance and variety of applications. A simple example is the (univariate) Hamburger \nmoment problem: when does a given sequence of real numbers represent the successive\nmoments $\\int\\! x^n\\, d\\mu(x)$ of a positive Borel measure $\\mu$ on $\\mathbb R$?\nEquivalently, which linear functionals $L$ on univariate real polynomials are\nintegration with respect to some $\\mu$? By Haviland's theorem \\cite{Hav}\nthis is the case if and only if $L$ is nonnegative on all polynomials nonnegative on \n$\\mathbb R$. Thus Haviland's theorem relates the moment problem to positive polynomials. It\nholds in several variables and also if we are interested in restricting the support of \n$\\mu$.
For details we refer the reader to one of the many beautiful expositions of this \nclassical branch of functional analysis, e.g.~\\cite{Akh,KN,ST}.\n\nSince Schm\\\"udgen's celebrated solution of the moment problem\non compact basic closed semialgebraic sets \\cite{Smu},\nthe moment problem has played a prominent role in real algebra,\nexploiting this duality between positive polynomials and the\nmoment problem, cf.~\\cite{KM,PS,Put,PV}.\nThe survey of Laurent \\cite{laurent2} gives a nice presentation of\nup-to-date results and applications;\nsee also \\cite{Mar,PD} for more on positive polynomials.\n\nOur main motivation is trace-positive polynomials in non-commuting\nvariables. A polynomial is called \\emph{trace-positive} if all\nits matrix evaluations (of \\emph{all} sizes) have nonnegative trace.\nTrace-positive polynomials have been employed to investigate\nproblems on \noperator algebras (Connes' embedding conjecture \\cite{connes,ksconnes})\nand mathematical physics (the Bessis-Moussa-Villani conjecture \n\\cite{bmv,ksbmv}), so a good understanding of this set is desired.\nBy duality this leads us to consider the tracial moment problem\nintroduced below.\nWe mention that the free non-commutative moment problem \nhas been studied and solved by\nMcCullough \\cite{McC} and Helton \\cite{helton}.\nHadwin \\cite{had} considered\nmoments involving traces on von Neumann algebras.\n\nThis paper is organized as follows. The short Section \\ref{sec:basic}\nfixes notation and terminology involving non-commuting variables used in the sequel. \nSection \\ref{sec:ttmp} introduces \ntracial moment sequences,\ntracial moment matrices,\nthe tracial moment problem, and their truncated counterparts.\nOur main results in this section relate the truncated tracial moment problem\nto flat extensions of tracial moment matrices and resemble the \nresults of Curto and Fialkow \\cite{cffinite,cfflat} on the (classical)\ntruncated moment problem.
For example,\nwe prove\nthat a tracial sequence can be represented with tracial moments of \nmatrices\nif its corresponding tracial moment matrix is positive semidefinite and of finite\nrank (Theorem \\ref{thm:finiterank}). \nA truncated tracial sequence allows for such a representation\nif and only if one of its extensions admits a flat extension (Corollary\n\\ref{cor:flatt}).\nFinally, in Section \\ref{sec:poly} we \nexplore the duality\nbetween the tracial moment problem and trace-positivity of polynomials.\nThroughout the paper several examples are given\nto illustrate the theory.\n\n\\section{Basic notions}\\label{sec:basic}\n\nLet $\\mathbb R\\ax$ denote the unital associative $\\mathbb R$-algebra freely generated \nby $\\ushort X=(X_1,\\dots,X_n)$. The elements of $\\mathbb R\\ax$ are polynomials in the non-commuting \nvariables $X_1,\\dots,X_n$ with coefficients in $\\mathbb R$. \nAn element $w$ of the monoid $\\ax$, freely generated by $\\ushort X$, \nis called a \\textit{word}. An element of the form $aw$, where $0\\neq a\\in\\mathbb R$ \nand $w\\in\\ax$, is called a \\textit{monomial} and $a$ its \\textit{coefficient}.\nWe endow $\\mathbb R\\ax$ with the \\textit{involution} $p\\mapsto p^*$ fixing $\\mathbb R\\cup\\{\\ushort X\\}$ \npointwise. Hence for each word $w\\in\\ax$, $w^*$ is its reverse. As an example, we have \n$(X_1X_2^2-X_2X_1)^*=X_2^2X_1-X_1X_2$. \n\nFor $f\\in\\mathbb R\\ax$ we will substitute symmetric matrices\n$\\ushort A=(A_1,\\dots,A_n)$ of the same size for the variables $\\ushort X$ \nand obtain a matrix $f(\\ushort A)$. Since $f(\\ushort A)$ is\nnot well-defined if the $A_i$ do not have the \nsame size, we will assume this condition implicitly without further mention in the sequel.
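As a quick sanity check of these conventions, the following numerical sketch (not part of the original development; the random symmetric matrices standing in for $X_1, X_2$ are arbitrary) evaluates $f = X_1X_2^2 - X_2X_1$ and its involution $f^* = X_2^2X_1 - X_1X_2$ at symmetric arguments, and confirms that $f^*$ evaluates to the transpose of $f$, so both have the same trace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric 4x4 matrices standing in for the variables X1, X2.
A1 = rng.standard_normal((4, 4))
A1 = (A1 + A1.T) / 2
A2 = rng.standard_normal((4, 4))
A2 = (A2 + A2.T) / 2

# f = X1 X2^2 - X2 X1 and its involution f* = X2^2 X1 - X1 X2
# (the involution reverses each word).
f_val = A1 @ A2 @ A2 - A2 @ A1
f_star_val = A2 @ A2 @ A1 - A1 @ A2

# For symmetric arguments, evaluating f* amounts to transposing f(A), ...
assert np.allclose(f_star_val, f_val.T)
# ... so f and f* always have the same trace.
assert np.isclose(np.trace(f_val), np.trace(f_star_val))
```

The transpose identity holds because each $A_i$ is symmetric, so reversing a word corresponds exactly to transposing its matrix evaluation.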
\n\nLet $\\sym \\mathbb R\\ax$ denote the set of \\emph{symmetric elements} in $\\mathbb R\\ax$, i.e., \n$$\\sym \\mathbb R\\ax=\\{f\\in \\mathbb R\\ax\\mid f^*=f\\}.$$\nSimilarly, we use $\\sym \\mathbb R^{t\\times t}$ to denote the set of all symmetric $t\\times t$ matrices. \n\nIn this paper we will mostly consider the \\emph{normalized} trace $\\Tr$,\ni.e.,\n$$\\Tr(A)=\\frac 1t\\tr(A)\\quad\\text{for } A\\in\\mathbb R^{t\\times t}.$$\nThe invariance of the trace under cyclic permutations motivates the\nfollowing definition of cyclic equivalence \\cite[p.~1817]{ksconnes}. \n\n\\begin{dfn}\nTwo polynomials $f,g\\in \\mathbb R\\ax$ are \\emph{cyclically equivalent}\nif $f-g$ is a sum of commutators:\n$$f-g=\\sum_{i=1}^k(p_iq_i-q_ip_i) \\text{ for some } k\\in\\mathbb N\n\\text{ and } p_i,q_i \\in \\mathbb R\\ax.$$\n\\end{dfn}\n\n\\begin{remark}\\label{rem:csim}\n\\mbox{}\\par\n\\begin{enumerate}[(a)]\n\\item Two words $v,w\\in\\ax$ are cyclically equivalent if and only if $w$ \nis a cyclic permutation of $v$. 
\nEquivalently: there exist $u_1,u_2\\in\\ax$ such that \n$v=u_1u_2$ and $w=u_2u_1$.\n\\item If $f\\stackrel{\\mathrm{cyc}}{\\thicksim} g$ then $\\Tr(f(\\ushort A))=\\Tr(g(\\ushort A))$ for all tuples\n$\\ushort A$ of symmetric matrices.\nLess obvious is the converse: if $\\Tr(f(\\ushort A))=\\Tr(g(\\ushort A))$\nfor all $\\ushort A$ and $f-g\\in\\sym\\mathbb R\\ax$, then $f\\stackrel{\\mathrm{cyc}}{\\thicksim} g$ \\cite[Theorem 2.1]{ksconnes}.\n\\item Although $f\\stackrel{\\mathrm{cyc}}{\\nsim} f^*$ in general, we still have \n$$\\Tr(f(\\ushort A))=\\Tr(f^*(\\ushort A))$$\nfor all $f\\in\\mathbb R \\ax$ and all $\\ushort A\\in (\\sym\\mathbb R^{t\\times t})^n$.\n\\end{enumerate}\n\\end{remark}\n\nThe length of the longest word in a polynomial $f\\in\\mathbb R\\ax$ is the\n\\textit{degree} of $f$ and is denoted by $\\deg f$.\nWe write $\\mathbb R\\ax_{\\leq k}$ for the set of all polynomials of degree $\\leq k$.\n\n\n\\section{The truncated tracial moment problem}\\label{sec:ttmp}\n\nIn this section we define tracial (moment) sequences,\ntracial moment matrices,\nthe tracial moment problem, and their truncated analogs.\nAfter a few motivating examples we proceed to show that the \nkernel of a tracial moment matrix has some real-radical-like\nproperties (Proposition \\ref{prop:radical}). \nWe then prove that a tracial moment matrix of finite\nrank has a tracial moment representation, i.e., the tracial moment problem\nfor the associated tracial sequence is solvable (Theorem \\ref{thm:finiterank}). 
\nFinally, we give the solution of \nthe truncated tracial moment problem: a truncated tracial sequence has\na tracial representation if and only if one of its extensions has a tracial moment matrix that \nadmits a flat extension (Corollary \\ref{cor:flatt}).\n\nFor an overview of the classical (commutative) moment problem in several \nvariables we refer \nthe reader to Akhiezer \\cite{Akh} (for the analytic theory) and\nto the survey of Laurent \\cite{laurent} and references therein for a more\nalgebraic approach.\nThe standard references on the truncated moment problems are\n\\cite{cffinite,cfflat}.\nFor the non-commutative moment problem with \\emph{free} (i.e.,\nunconstrained) moments see\n\\cite{McC,helton}.\n\n\n\n\\begin{dfn}\nA sequence of real numbers $(y_w)$ indexed by words $w\\in \\ax$ satisfying \n\\begin{equation}\n\ty_w=y_u \\text{ whenever } w\\stackrel{\\mathrm{cyc}}{\\thicksim} u, \\label{cyc}\n\\end{equation}\n\\begin{equation}\n\ty_w=y_{w^*} \\text{ for all } w, \\label{cycstar}\n\\end{equation}\nand $y_\\emptyset=1$, is called a (normalized) \\emph{tracial sequence}. \n\\end{dfn} \n\n\\begin{example}\nGiven $t\\in\\mathbb N$ and symmetric matrices $A_1,\\dots,A_n\\in \\sym \\mathbb R^{t\\times t}$,\nthe sequence given by $$y_w:= \\Tr(w(A_1,\\dots,A_n))=\\frac 1t \\tr(w(A_1,\\dots,A_n))$$ \nis a tracial sequence since by Remark \\ref{rem:csim}, the traces of cyclically \nequivalent words coincide. \n\\end{example}\n\nWe are interested in the converse of this example (the \\emph{tracial moment problem}): \n\\emph{For which sequences $(y_w)$ do there exist $N\\in \\mathbb N$, $t\\in \\mathbb N$,\n$\\lambda_i\\in \\mathbb R_{\\geq0}$ with $\\sum_i^N \\lambda_i=1$ and \nvectors $\\ushort A^{(i)}=(A_1^{(i)},\\dots,A_n^{(i)})\\in (\\sym \\mathbb R^{t\\times t})^n$, such that\n\\begin{equation}\n\ty_w=\\sum_{i=1}^N \\lambda_i \\Tr(w(\\ushort A^{(i)}))\\,? 
\\label{rep}\n\\end{equation}}\nWe then say that $(y_w)$ has a \\emph{tracial moment representation}\nand call it a \\emph{tracial moment sequence}.\n\n\nThe \\emph{truncated tracial moment problem} is the study of (finite) tracial sequences \n$(y_w)_{\\leq k}$ \nwhere $w$ is constrained by $\\deg w\\leq k$ for some $k\\in\\mathbb N$,\nand properties \\eqref{cyc} and \\eqref{cycstar} hold for these $w$.\nFor instance, which sequences $(y_w)_{\\leq k}$ have a tracial moment \nrepresentation, i.e., when does there \nexist a representation of the values $y_w$ as in \\eqref{rep} for $\\deg w\\leq k$? \nIf this is the case, then \nthe sequence $(y_w)_{\\leq k}$ is called a \\emph{truncated tracial moment sequence}.\n\n\n\\begin{remark}\n\\mbox{}\\par\n\\begin{enumerate}[(a)]\n\\item \nTo keep a perfect analogy with the classical moment problem, \none would need to consider the existence of a positive\nBorel measure $\\mu$ on $(\\sym \\mathbb R^{t\\times t})^n$ (for some\n$t\\in\\mathbb N$) satisfying\n\\begin{equation}\\label{eq:gewidmetmarkus}\ny_w = \\int \\! w(\\ushort A) \\, d\\mu(\\ushort A).\n\\end{equation}\nAs we shall mostly focus on the \\emph{truncated}\ntracial moment problem in the sequel, the\nfinitary representations \\eqref{rep} seem to be the\nproper setting. \nWe look forward to studying the more general representations\n\\eqref{eq:gewidmetmarkus} in the future.\n\\item\nAnother natural extension of our tracial moment problem\nwith respect to matrices would be to consider moments obtained by \ntraces in finite \\emph{von Neumann algebras} as\ndone by Hadwin \\cite{had}.\nHowever, our\nprimary motivation was trace-positive polynomials\ndefined via traces of matrices (see Definition \\ref{def:trpos}),\na theme we expand upon in Section \\ref{sec:poly}.
Understanding these\nis one of the approaches to Connes' embedding conjecture \\cite{ksconnes}.\nThe notion dual to that of trace-positive polynomials is\nthe tracial moment problem as defined above.\n\\item The tracial moment problem\nis a natural extension of the classical quadrature problem\ndealing with \nrepresentability via atomic positive measures in\nthe commutative case. Taking $\\ushort a^{(i)}$\nconsisting of $1\\times 1$ matrices $a_j^{(i)}\\in\\mathbb R$ \nfor the $\\ushort A^{(i)}$ \nin \\eqref{rep}, we have\n$$y_w=\\sum_i \\lambda_i w(\\ushort a^{(i)})= \\int \\!x^w \\, d\\mu(x),$$\nwhere $x^w$ denotes the commutative collapse of $w\\in\\ax$.\nThe measure $\\mu$ is the convex combination \n$\\sum \\lambda_i\\delta_{\\ushort a^{(i)}}$\nof the atomic measures $\\delta_{\\ushort a^{(i)}}$.\n\\end{enumerate}\n\\end{remark}\n\n\nThe next example shows that there are (truncated) tracial moment sequences $(y_w)$ \nwhich\ncannot be written as $$y_w=\\Tr(w(\\ushort A)).$$ \n\n\\begin{example}\\label{exconv} \nLet $X$ be a single free (non-commutative) variable.\nWe take the index set $J=(1,X,X^2,X^3,X^4)$ and $y=(1,1-\\sqrt2,1,1-\\sqrt2,1)$. Then\n$$y_w=\\frac{\\sqrt2}{2}w(-1)+(1-\\frac{\\sqrt2}{2})w(1),$$ i.e., \n$\\lambda_1=\\frac{\\sqrt2}{2}$, $\\lambda_2=1-\\lambda_1$ and $A^{(1)}=-1$, $A^{(2)}=1$. \nBut there is no symmetric matrix $A\\in \\mathbb R^{t\\times t}$ for any $t\\in\\mathbb N$ such that \n$y_w=\\Tr(w(A))$ for all $w\\in J$. 
The proof is given in the appendix.\n\\end{example}\n\nThe (infinite) \\emph{tracial moment matrix} $M(y)$ of a tracial \nsequence $y=(y_w)$ is defined by \n$$M(y)=(y_{u^*v})_{u,v}.$$\nThis matrix is symmetric due to the condition \\eqref{cycstar} in the \ndefinition of a tracial sequence.\nA necessary condition for $y$ to be a tracial moment sequence is positive \nsemidefiniteness of $M(y)$ which in general is not sufficient.\n\nThe tracial moment matrix of \\emph{order $k$} is the tracial moment matrix $M_k(y)$ \nindexed by words $u,v$ with $\\deg u,\\deg v\\leq k$.\nIf $y$ is a truncated tracial moment sequence, then $M_k(y)$ is positive \nsemidefinite. Here is an easy example showing the converse is false:\n\n\\begin{example}\\label{expsd}\nWhen dealing with two variables, we write $(X,Y)$ instead of $(X_1,X_2)$.\nTaking the index set \n$$(1,X,Y,X^2,XY,Y^2,X^3,X^2Y,XY^2,Y^3,X^4,X^3Y,X^2Y^2,XYXY,XY^3,Y^4)$$\nthe truncated moment sequence $$y=(1,0,0,1,1,1,0,0,0,0,4,0,2,1,0,4) $$ yields the \ntracial moment matrix \n$$M_2(y)=\\left(\\begin{smallmatrix}\n\t1&0&0&1&1&1&1\\\\ 0&1&1&0&0&0&0\\\\ 0&1&1&0&0&0&0\\\\ 1&0&0&4&0&0&2\\\\\n\t1&0&0&0&2&1&0\\\\ 1&0&0&0&1&2&0\\\\ 1&0&0&2&0&0&4\n\\end{smallmatrix}\\right)$$\nwith respect to the basis $(1,X,Y,X^2,XY,YX,Y^2)$. \n$M_2(y)$ is positive semidefinite but $y$ has no tracial representation.\nAgain, we postpone the proof until the appendix.\n\\end{example}\n\n\nFor a given polynomial $p=\\sum_{w\\in \\ax} p_w w\\in \\mathbb R \\ax$ let $\\vv p$ be the\n(column) vector of coefficients $p_w$ in a given fixed order.\nOne can identify $\\mathbb R \\ax_{\\leq k}$ with $\\mathbb R^\\eta$\nfor $\\eta=\\eta(k)=\\dim\\mathbb R\\ax_{\\leq k}<\\infty$ by sending each $p\\in \\mathbb R \\ax_{\\leq k}$ to the vector \n$\\vv p$ of its entries with $\\deg w\\leq k$. 
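Both examples above can also be checked numerically. The following sketch (a verification aid, not part of the original text; it uses the data exactly as stated in the examples) confirms the convex-combination representation of the first example and the symmetry and positive semidefiniteness of $M_2(y)$ in the second.

```python
import numpy as np

# First example: words 1, X, X^2, X^3, X^4 evaluated at the atoms
# A(1) = -1 and A(2) = 1 with weights sqrt(2)/2 and 1 - sqrt(2)/2.
lam = np.sqrt(2) / 2
y = [lam * (-1) ** n + (1 - lam) * 1 ** n for n in range(5)]
assert np.allclose(y, [1, 1 - np.sqrt(2), 1, 1 - np.sqrt(2), 1])

# Second example: M_2(y) in the basis (1, X, Y, X^2, XY, YX, Y^2).
M2 = np.array([
    [1, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 4, 0, 0, 2],
    [1, 0, 0, 0, 2, 1, 0],
    [1, 0, 0, 0, 1, 2, 0],
    [1, 0, 0, 2, 0, 0, 4],
], dtype=float)

assert np.allclose(M2, M2.T)                  # symmetric
assert np.linalg.eigvalsh(M2).min() > -1e-8   # positive semidefinite
```

Of course, such numerics only confirm the stated data; the non-existence claims of both examples require the proofs deferred to the appendix.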
\nThe tracial moment matrix $M(y)$ induces the linear map \n$$\\varphi_M:\\mathbb R\\ax\\to \\mathbb R^\\mathbb N,\\quad p\\mapsto M\\vv p.$$ The tracial moment matrices $M_k(y)$, \nindexed by $w$ with $\\deg w\\leq k$, can be regarded as linear maps \n$\\varphi_{M_k}:\\mathbb R^\\eta\\to \\mathbb R^\\eta$, $\\vv p\\mapsto M_k\\vv p$. \n\n\n\\begin{lemma}\\label{lem:mk}\nLet $M=M(y)$ be a tracial moment matrix. Then the following holds:\n\\begin{enumerate}[\\rm (1)]\n\\item $p(y):=\\sum_w p_w y_w={\\vv{1}}^*M\\vv{p}$. In particular,\n\t${\\vv{1}}^*M\\vv{p}={\\vv{1}}^*M\\vv{q}$ if $p\\stackrel{\\mathrm{cyc}}{\\thicksim} q$;\n\\item ${\\vv{p}}^*M\\vv{q}={\\vv{1}}^*M\\vv{p^*q}$.\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet $p,q\\in \\mathbb R \\ax$. For $k:=\\max \\{\\deg p,\\deg q\\}$, we have \n\\begin{equation}\n{\\vv{p}}^*M(y)\\vv{q}={\\vv{p}}^*M_k(y)\\vv{q}.\n\\end{equation}\nBoth statements now follow by direct calculation. \n\\end{proof}\n\nWe can identify the kernel of a tracial moment matrix $M$ with the subset of $\\mathbb R \\ax$\ngiven by \n\\begin{equation}\\label{eq:momKer}\n\tI:=\\{p\\in \\mathbb R \\ax\\mid M\\vv p=0\\}.\n\\end{equation}\n\n\n\\begin{prop}\\label{lem:kerideal} Let $M\\succeq0$ be a tracial moment matrix. Then \n\t\\begin{equation}\\label{kerideal}\n\tI=\\{p\\in \\mathbb R \\ax\\mid \\langle M\\vv{p},\\vv{p}\\rangle=0\\}.\n\t\\end{equation} \n\tFurther, $I$ \n\tis a two-sided ideal of $\\mathbb R \\ax$ invariant under the involution. \n\\end{prop}\n\\begin{proof}\n\tLet $J:=\\{p\\in \\mathbb R \\ax\\mid \\langle M\\vv{p},\\vv{p}\\rangle=0\\}$. The implication\n$I\\subseteq J$ is obvious. Let $p\\in J$ be given and $k=\\deg p$.\nSince $M$ and thus $M_k$ for each $k\\in \\mathbb N$ is positive semidefinite, the square root \n$\\sqrt{M_k}$ of $M_k$ exists. Then\n$0=\\langle M_k\\vv{p},\\vv p\\rangle=\\langle\\sqrt{M_k}\\vv{p}, \\sqrt{M_k}\\vv{p}\\rangle$ implies\n$\\sqrt{M_k}\\vv{p}=0$. 
This leads to $M_k\\vv{p}=M\\vv p=0$, thus $p\\in I$.\n\nTo prove that $I$ is a two-sided ideal, it suffices to show that $I$ is a right-ideal \nwhich is closed under *. To do this, consider the bilinear map \n$$ \\langle p,q\\rangle_M:= \\langle M\\vv{p},\\vv{q}\\rangle$$ on $\\mathbb R \\ax$, which is a semi-scalar \nproduct. By Lemma \\ref{lem:mk}, we get that\n$$\\langle pq,pq\\rangle_M=((pq)^*pq)(y)=(qq^*p^*p)(y)= \\langle pqq^*,p\\rangle_M.$$\nThen by the Cauchy-Schwarz inequality it follows that for $p\\in I$, we have\n$$0\\leq \\langle pq,pq\\rangle_M^2=\\langle pqq^*,p\\rangle_M^2\\leq \n\\langle pqq^*,pqq^*\\rangle_M\\langle p,p\\rangle_M=0.$$\nHence $pq\\in I$, i.e., $I$ is a right-ideal.\n\nSince $p^*p\\stackrel{\\mathrm{cyc}}{\\thicksim} pp^*$, we obtain from Lemma \\ref{lem:mk} that\n$$\\langle M\\vv{p},\\vv{p} \\rangle=\\langle p,p \\rangle_M=(p^*p)(y)=(pp^*)(y)=\\langle p^*,p^* \n\\rangle_M=\n\\langle M{\\vv p}^*,{\\vv p}^* \\rangle.$$ Thus if $p\\in I$ then also $p^*\\in I$. \n\\end{proof}\n\nIn the \\emph{commutative} context, the kernel of $M$ is a real radical ideal if $M$ is positive\nsemidefinite as observed by Scheiderer (cf.~\\cite[p.~2974]{laurent2}).\nThe next proposition gives a description of \nthe kernel of $M$ in the non-commutative setting, and could be helpful in \ndefining a non-commutative real radical ideal.\n\n\\begin{prop}\\label{prop:radical}\nFor the ideal $I$ in \\eqref{eq:momKer} we have\n$$I=\\{f\\in \\mathbb R \\ax\\mid (f^*f)^k\\in I \\;\\text{for some}\\;k\\in \\mathbb N\\}.$$ \nFurther, \n$$I=\\{f\\in \\mathbb R \\ax\\mid (f^*f)^{2k}+\\sum g_i^*g_i\\in I \\;\\text{for some}\\;k\\in \\mathbb N, g_i\\in \\mathbb R \\ax\\}.\n$$\n\\end{prop}\n\n\\begin{proof}\nIf $f\\in I$ then also $f^*f\\in I$ since $I$ is an ideal. If $f^*f\\in I$ we have\n$M\\vv{f^*f}=0$ which implies by Lemma \\ref{lem:mk} that\n$$0={\\vv 1}^*M\\vv{f^*f}={\\vv f}^*M\\vv{f}=\\langle Mf,f\\rangle.$$ \nThus $f\\in I$. 
\nIf $(f^*f)^k\\in I$ then also $(f^*f)^{k+1}\\in I$. So without loss of generality let $k$ be even. \nFrom $(f^*f)^k\\in I$ we obtain \n$$0={\\vv 1}^*M\\vv{(f^*f)^k}={\\vv{(f^*f)^{k/2}}}^*M\\vv{(f^*f)^{k/2}},$$ implying \n$(f^*f)^{k/2}\\in I$. This leads to $f\\in I$ by induction.\n\nTo show the second statement let $(f^*f)^{2k}+\\sum g_i^*g_i\\in I$. This leads to\n$${\\vv{(f^*f)^k}}^*M\\vv{(f^*f)^k}+\\sum_i {\\vv{g_i}}^*M\\vv{g_i}=0.$$ Since \n$M(y)\\succeq0$ we have ${\\vv{(f^*f)^k}}^*M\\vv{(f^*f)^k}\\geq 0$ and \n${\\vv{g_i}}^*M\\vv{g_i}\\geq 0.$ Thus ${\\vv{(f^*f)^k}}^*M\\vv{(f^*f)^k}=0$ \n(and ${\\vv{g_i}}^*M\\vv{g_i}= 0$) which implies $f\\in I$ as above. \n\\end{proof}\n\nIn the commutative setting one uses the Riesz representation theorem for \nsome set of continuous functions (vanishing at infinity or with compact support) \nto show the existence of a representing measure. We will use the Riesz \nrepresentation theorem for positive linear functionals on a \nfinite-dimensional Hilbert space. \n\n\\begin{dfn}\nLet $\\mathcal A$ be an $\\mathbb R$-algebra with involution. We call a linear map \n$L:\\mathcal A\\to \\mathbb R$ a \\emph{state} if \n$L(1)=1$, $L(a^*a)\\geq0$ and $L(a^*)=L(a)$ for all $a\\in\\mathcal A$. 
\nIf all the commutators have value $0$, i.e., if $L(ab)=L(ba)$ for all \n$a,b\\in \\mathcal A$, then $L$ is called a \\emph{tracial state}.\n\\end{dfn}\n\nWith the aid of the Artin-Wedderburn theorem we shall\ncharacterize tracial states on matrix $*$-algebras in Proposition\n\\ref{prop:convtrace}.\nThis will enable us to prove the existence of a tracial moment representation for\ntracial sequences with a finite rank tracial moment matrix; see Theorem\n\\ref{thm:finiterank}.\n\n\\begin{remark}\\label{rem:aw}\nThe only central simple algebras over $\\mathbb R$ are full matrix\nalgebras over $\\mathbb R$, $\\mathbb C$ or $\\mathbb H$ (combine the Frobenius theorem \n\\cite[(13.12)]{Lam} with the Artin-Wedderburn theorem \\cite[(3.5)]{Lam}).\nIn order to understand ($\\mathbb R$-linear) tracial states on these, we recall\nsome basic Galois theory.\n\nLet \n$$\\Trd_{\\mathbb C/\\mathbb R}:\\mathbb C\\to\\mathbb R, \\quad z\\mapsto\\frac 12(z+\\bar z) $$ \ndenote the \\emph{field trace} and \n$$\\Trd_{\\mathbb H/\\mathbb R}:\\mathbb H\\to\\mathbb R,\\quad z\\mapsto\\frac12(z+\\bar z)$$\nthe \\emph{reduced trace} \\cite[p.~5]{boi}.\nHere the Hamilton quaternions $\\mathbb H$ are endowed with the \\emph{standard\ninvolution}\n$$\nz=a+\\mathbbm i b+\\mathbbm j c+\\mathbbm k d \\mapsto a-\\mathbbm i b-\\mathbbm j c-\\mathbbm k d = \\bar z\n$$\nfor $a,b,c,d\\in\\mathbb R$.\nWe extend the canonical involution on $\\mathbb C$ and $\\mathbb H$ to the conjugate\ntranspose involution $*$ on matrices\nover $\\mathbb C$ and $\\mathbb H$, respectively.\n\nComposing the field trace and reduced trace, respectively, with the normalized\ntrace, yields an $\\mathbb R$-linear map from $\\mathbb C^{t\\times t}$ and\n$\\mathbb H^{t\\times t}$, respectively, to $\\mathbb R$. We will denote it simply\nby $\\Tr$.
A word of \\emph{caution}: \n$\\Tr(A)$ does not denote the (normalized) matricial trace \nover $\\mathbb K$\nif $A\\in \\mathbb K^{t\\times t}$ and $\\mathbb K\\in\\{\\mathbb C,\\mathbb H\\}$.\n\\end{remark}\n\nAn alternative description of $\\Tr$ is given by the following lemma:\n\n\\begin{lemma}\\label{lem:convtrace}\nLet $\\mathbb K\\in\\{\\mathbb R,\\mathbb C,\\mathbb H\\}$. Then\nthe only $(\\mathbb R$-linear$)$ tracial state on $\\mathbb K^{t\\times t}$ is $\\Tr$.\n\\end{lemma}\n\n\\begin{proof}\nAn easy calculation shows that $\\Tr$ is indeed a tracial state.\n\nLet $L$ be a tracial state on $\\mathbb R^{t\\times t}$.\nBy the Riesz representation theorem there exists a positive \nsemidefinite matrix $B$ with $\\Tr(B)=1$ such that $$L(A)=\\Tr(BA)$$ for all \n$A\\in\\mathbb R^{t\\times t}$.\n\nWrite $B=\\begin{pmatrix}b_{ij}\\end{pmatrix}_{i,j=1}^{t}$.\nLet \n$i\\neq j$.\nThen $A=\\lambda E_{ij}$ has zero trace for every \n$\\lambda\\in \\mathbb R$ and is thus a sum of commutators. \n(Here $E_{ij}$ denotes the $t\\times t$ \\emph{matrix unit} with a one\nin the $(i,j)$-position and zeros elsewhere.)\nHence \n$$\\lambda b_{ij} = L(A) = 0.$$\nSince $\\lambda\\in\\mathbb R$ was arbitrary, $b_{ij}=0$.\n\nNow let $A=\\lambda (E_{ii}-E_{jj})$. Clearly, \n$\\Tr(A)=0$ and hence $$\\lambda(b_{ii}-b_{jj})= L(A)= 0.$$\nAs before, this gives $b_{ii}=b_{jj}$. So $B$ is scalar,\nand $\\Tr(B)=1$. Hence it is the\nidentity matrix. In particular, $L=\\Tr$.\n\nIf $L$ is a tracial state on $\\mathbb C^{t\\times t}$, \nthen $L$ induces a tracial state on $\\mathbb R^{t\\times t}$,\nso $L_0:=L|_{\\mathbb R^{t\\times t}}=\\Tr$ by the above.\nExtend $L_0$ to \n$$L_1:\\mathbb C^{t\\times t} \\to \\mathbb R,\n\\quad A+\\mathbbm i B\\mapsto L_0(A)=\\Tr(A) \\quad\\text{for } A,B\\in\\mathbb R^{t\\times t}.\n$$\n$L_1$ is a tracial state on $\\mathbb C^{t\\times t}$ as a \nstraightforward computation\nshows. 
As $\\Tr(A)=\\Tr(A+\\mathbbm i B)$, all we need to show is that $L_1=L$.\n\nClearly, $L_1$ and $L$ agree on the vector space spanned\nby all commutators in $\\mathbb C^{t\\times t}$. This space is (over $\\mathbb R$)\nof codimension $2$. By construction, $L_1(1)=L(1)=1$ and\n$L_1(\\mathbbm i)=0$. On the other hand,\n$$L(\\mathbbm i)=L(\\mathbbm i^*)=-L(\\mathbbm i)$$ implying $L(\\mathbbm i)=0$.\nThis shows $L=L_1=\\Tr$.\n\nThe remaining case of tracial states over $\\mathbb H$ is dealt \nwith\nsimilarly and is left as an exercise for the reader.\n\\end{proof}\n\n\\begin{remark}\\label{rem:real}\nEvery complex number $z=a+\\mathbbm i b$ can be represented\nas a $2\\times 2$ real matrix \n$z'=\\left(\\begin{smallmatrix} a & b \\\\ -b & a\\end{smallmatrix}\\right)$.\nThis gives rise to \nan $\\mathbb R$-linear $*$-map\n$\\mathbb C^{t\\times t}\\to \\mathbb R^{(2t)\\times(2t)}$ that commutes with $\\Tr$. \nA similar property holds if quaternions\n$a+\\mathbbm i b+\\mathbbm j c+\\mathbbm k d$ \nare represented by the $4\\times 4$ real matrix\n$$\\left(\\begin{smallmatrix}\n a & b & c & d \\\\ \n -b & a & -d & c \\\\\n -c & d & a & -b \\\\\n -d & -c & b & a \n\\end{smallmatrix}\\right).$$\n\\end{remark}\n\n\\begin{prop}\\label{prop:convtrace}\n\tLet $\\mathcal A$ be a $*$-subalgebra of $ \\mathbb R^{t\\times t}$ for some $t\\in \\mathbb N$ and\n\t$L:\\mathcal A\\to \\mathbb R$ a tracial state.\n\tThen there exist \nfull matrix algebras $\\mathcal A^{(i)}$ over $\\mathbb R$, $\\mathbb C$ or $\\mathbb H$, \na $*$-isomorphism \n\\begin{equation}\\label{eq:iso}\n\\mathcal A\\to\\bigoplus_{i=1}^N \\mathcal A^{(i)},\n\\end{equation}\nand $\\lambda_1,\\dots, \\lambda_N\\in \\mathbb R_{\\geq0}$ with $\\sum_i \\lambda_i=1$, such that for all \n$A\\in \\mathcal A$,\n\t $$L(A)=\\sum_i^N \\lambda_i\\Tr(A^{(i)}).$$\nHere, $\\bigoplus_i A^{(i)} =\\left(\\begin{smallmatrix} A^{(1)} \\\\ & \\ddots \\\\ & & A^{(N)}\n\\end{smallmatrix}\\right)$ denotes the image of $A$ under the 
isomorphism\n\\eqref{eq:iso}. The size of $($the real representation of$)$ $\\bigoplus_i A^{(i)}$ is \nat most $t$.\n\\end{prop}\n\n\\begin{proof}\nSince $L$ is tracial, \n$L(U^*AU)=L(A)$ for all orthogonal $U\\in\\mathbb R^{t\\times t}$.\nHence we can apply orthogonal transformations to $\\mathcal A$ \nwithout changing the values of $L$. \nSo $\\mathcal A$ can be transformed into block diagonal form\nas in \\eqref{eq:iso}\naccording to its invariant subspaces.\nThat is, each of the blocks $\\mathcal A^{(i)}$ \nacts irreducibly on a subspace of $\\mathbb R^t$ and is thus \na central \nsimple algebra (with involution) over $\\mathbb R$.\nThe involution on $\\mathcal A^{(i)}$ is induced by the\nconjugate transpose involution. (Equivalently, by the\ntranspose on the real matrix representation in the complex\nor quaternion case.)\n\nNow $L$ induces (after a possible normalization) a tracial state on the block\n$\\mathcal A^{(i)}$ and hence by Lemma \\ref{lem:convtrace}, we have\n$L_i:=L|_{\\mathcal A^{(i)}}=\\lambda_i \\Tr$ for some $\\lambda_i\\in\\mathbb R_{\\geq0}$.\nThen\n\\[\nL(A)=L\\big(\\bigoplus_i A^{(i)}\\big)=\\sum_i L_i\\big(A^{(i)}\\big)\n= \\sum_i \\lambda_i \\Tr\\big(A^{(i)}\\big)\n\\]\nand\n$1=L(1)=\\sum_i \\lambda_i$.\n\\end{proof}\n\nThe following theorem is the tracial version of the representation theorem \nof Curto and Fialkow for moment matrices with finite rank \\cite{cffinite}.\n\n\\begin{thm}\\label{thm:finiterank}\nLet $y=(y_w)$ be a tracial sequence with positive semidefinite \nmoment matrix $M(y)$ of finite rank $t$. Then $y$ is a tracial moment\nsequence, i.e., there exist vectors \n$\\ushort A^{(i)}=(A_1^{(i)},\\dots,A_n^{(i)})$ of symmetric matrices $A_j^{(i)}$ \nof size at most $t$ and $\\lambda_i\\in \\mathbb R_{\\geq0}$ with $\\sum \\lambda_i=1$ \nsuch that $$y_w=\\sum \\lambda_i \\Tr(w(\\ushort A^{(i)})).$$ \n\\end{thm}\n\n\\begin{proof}\nLet $M:=M(y)$. 
We equip $\\mathbb R\\ax$ with the bilinear form given by\n$$\\langle p,q\\rangle_M:=\\langle M\\vv{p},\\vv{q} \\rangle={\\vv{q}}^*M\\vv p.$$ Let \n$I=\\{p\\in \\mathbb R\\ax\\mid \\langle p,p\\rangle_M=0\\}.$ Then by Proposition \\ref{lem:kerideal}, \n$I$ is an ideal of $\\mathbb R \\ax$. In particular, $I=\\ker \\varphi_M$ for \n$$\\varphi_M:\\mathbb R \\ax\\to \\ran M,\\quad p\\mapsto M\\vv{p}.$$ Thus if we define\n$E:=\\mathbb R \\ax/I$, the induced linear map \n$$\\overline\\varphi_M:E\\to \\ran M,\\quad \\overline p\\mapsto M\\vv{p}$$\nis an isomorphism and $$\\dim E=\\dim(\\ran M)=\\rank M=t<\\infty.$$ Hence \n$(E,\\langle$\\textvisiblespace ,\\textvisiblespace $\\rangle_E)$ is a finite-dimensional \nHilbert space for\n$\\langle \\bar p,\\bar q\\rangle_E={\\vv{q}}^*M\\vv{p}$. \n\nLet $\\hat X_i$ be the right multiplication with $X_i$ on $E$, i.e., \n$\\hat X_i \\overline p:=\\overline{pX_i}$. Since \n$I$ is a right ideal of $\\mathbb R \\ax$, the operator $\\hat X_i$ is well defined.\nFurther, $\\hat X_i$ is symmetric since\n\\begin{align*}\n\\langle \\hat X_i \\overline p,\\overline q \\rangle_E&=\\langle M \\vv{pX_i},\\vv{q} \\rangle\n= (X_ip^*q)(y)\\\\\n&=(p^*qX_i)(y)=\\langle M \\vv{p},\\vv{qX_i} \\rangle=\\langle\\overline p,\\hat X_i\\overline q \\rangle_E.\n\\end{align*}\nThus each $\\hat X_i$, acting on a $t$-dimensional vector space, has a representation matrix \n$A_i\\in \\sym \\mathbb R^{t\\times t}$.\n \nLet $\\mathcal B=B(\\hat X_1,\\dots,\\hat X_n)=B(A_1,\\dots,A_n)$ be the algebra of \noperators generated by $\\hat X_1,\\dots,\\hat X_n$. 
These operators can be written\nas $$\\hat p=\\sum_{w\\in\\ax} p_w \\hat{w}$$ for some $p_w\\in \\mathbb R$, \nwhere $\\hat w=\\hat X_{w_1}\\cdots \\hat X_{w_s}$ for $w=X_{w_1}\\cdots X_{w_s}$.\nObserve that $\\hat{w}=w(A_1,\\dots,A_n)$.\nWe define the linear functional $$L:\\mathcal B\\to\\mathbb R,\\quad\n\\hat p\\mapsto {\\vv{1}}^*M\\vv p=p(y),$$\nwhich is a state on $\\mathcal B$.\nSince $y_w=y_u$ for $w\\stackrel{\\mathrm{cyc}}{\\thicksim} u$, it follows that $L$ is tracial. Thus by Proposition \n\\ref{prop:convtrace} (and Remark \\ref{rem:real}), there exist\n$\\lambda_1,\\dots, \\lambda_N\\in \\mathbb R_{\\geq0}$ with $\\sum_i\\lambda_i=1$ and real symmetric matrices $A_j^{(i)}$\n$(i=1,\\ldots,N)$ \nfor each $A_j\\in \\sym \\mathbb R^{t\\times t}$, such that for all $w\\in \\ax$, \n$$y_w=w(y)=L(\\hat w)=\\sum_i \\lambda_i \\Tr(w(\\ushort A^{(i)})),$$\nas desired. \n\\end{proof}\n\nThe sufficient conditions on $M(y)$ in Theorem \\ref{thm:finiterank} are also \nnecessary for $y$ to be a tracial moment sequence. Thus we get our first \ncharacterization of tracial moment sequences:\n\n\\begin{cor}\\label{cor:finite}\nLet $y=(y_ w)$ be a tracial sequence. Then $y$ is a tracial moment sequence\nif and only if $M(y)$ is positive semidefinite and of finite rank.\n\\end{cor}\n\n\\begin{proof}\nIf $y_ w=\\Tr( w(\\ushort A))$ for some $\\ushort A=(A_1,\\dots,A_n)\\in(\\sym \\mathbb R^{t\\times t})^n$, \nthen $$L(p)=\\sum_ w p_ w y_ w=\\sum_ w p_ w \\Tr( w(\\ushort A))=\n \\Tr(p(\\ushort A)).$$\nHence \n\\begin{align*}\n{\\vv p}^*M(y)\\vv{p}&=L(p^*p)=\\Tr(p^*(\\ushort A)p(\\ushort A))\\geq0\n\\end{align*}\nfor all $p \\in \\mathbb R\\ax$. \n\nFurther, the tracial moment matrix $M(y)$ has rank at most $t^2$.\nThis can be seen as follows: \n$M$ induces a bilinear map \n$$\\Phi:\\mathbb R \\ax\\rightarrow\\mathbb R \\ax^*,\\quad p\\mapsto\\Big(q\\mapsto \\Tr\\big((q^*p)(\\ushort A)\\big)\\Big),$$\nwhere $\\mathbb R \\ax^*$ is the dual space of $\\mathbb R \\ax$. 
This implies \n$$\\rank M=\\dim (\\ran\\Phi)=\\dim(\\mathbb R \\ax/\\ker\\Phi).$$\nThe kernel of the evaluation map \n$\\varepsilon_{\\ushort A}:\\mathbb R\\ax\\rightarrow\\mathbb R^{t\\times t}$, $p\\mapsto p(\\ushort A)$\nis a subset of $\\ker \\Phi$. In particular, \n\\[\\dim(\\mathbb R\\ax/\\ker\\Phi)\\leq \\dim(\\mathbb R\\ax/\\ker\\varepsilon_{\\ushort A})=\\dim(\\ran \\varepsilon_{\\ushort A})\\leq t^2. \\]\nThe same holds true for each convex combination $y_w=\\sum_i \\lambda_i \\Tr( w(\\ushort A^{(i)}))$.\n\nThe converse is Theorem \\ref{thm:finiterank}.\n\\end{proof}\n\n\n\\begin{dfn}\\label{defflat}\nLet $A\\in \\sym\\mathbb R^{t\\times t}$ be given. A (symmetric) extension of $A$ is a matrix \n$\\tilde A\\in \\sym\\mathbb R^{(t+s)\\times (t+s)}$ of the form\n$$\\tilde A=\\begin{pmatrix} A &B \\\\ B^* & C\\end{pmatrix} $$\nfor some $B\\in \\mathbb R^{t\\times s}$ and $C\\in \\mathbb R^{s\\times s}$.\nSuch an extension is \\emph{flat} if $\\rank A=\\rank\\tilde A$,\nor, equivalently, if $B = AW$ and $C = W^*AW$ for some matrix $W$.\n\\end{dfn}\n\n\nThe kernel of a flat extension $M_k$ of a tracial moment matrix $M_{k-1}$ \nhas some (truncated) \\emph{ideal-like properties} as \nshown in the following lemma. \n\n\\begin{lemma}\\label{lem:flatrideal}\nLet $f\\in \\mathbb R \\ax$ with $\\deg f\\leq k-1$ and let $M_k$ be a flat extension of $M_{k-1}$. \nIf $f\\in\\ker M_k$ then $fX_i,X_if\\in \\ker M_k$.\n\\end{lemma}\n\n\\begin{proof}\nLet $f=\\sum_w f_w w$. Then for $v\\in \\ax_{k-1}$, we have\n\\begin{equation}\\label{eqker}\n(M_k\\vv{fX_i})_v =\\sum_w f_w y_{v^*wX_i}= \n\\sum_w f_w y_{(vX_i)^*w}=(M_k \\vv f)_{vX_i}=0.\n\\end{equation}\n\nThe matrix $M_k$ is of the form $M_k=\\left(\\begin{smallmatrix} M_{k-1}&B\\\\B^*&C\\end{smallmatrix}\\right)$. \nSince $M_k$ is a flat extension, \n$\\ker M_k=\\ker \\begin{pmatrix} M_{k-1}&B\\end{pmatrix}$. \nThus by \\eqref{eqker}, \n$fX_i\\in \\ker \\begin{pmatrix} M_{k-1}&B\\end{pmatrix}=\\ker M_k$. 
\nFor $X_if$ we obtain analogously that\n$$(M_k\\vv{X_if})_v =\\sum_w f_w y_{v^*X_iw}=\n\\sum_w f_w y_{(X_iv)^*w}=(M_k \\vv f)_{X_iv}=0$$\nfor $v\\in \\ax_{k-1}$, which implies $X_if\\in \\ker M_k$.\n\\end{proof}\n\n\nWe are now ready to prove the tracial version of the flat extension theorem of\nCurto and Fialkow \\cite{cfflat}.\n\n\n\\begin{thm}\\label{thm:flatextension}\nLet $y=(y_w)_{\\leq 2k}$ be a truncated tracial sequence of order $2k$. If \n$\\rank M_k(y)=\\rank M_{k-1}(y)$, then there exists\na unique tracial extension $\\tilde y=(\\tilde y_w)_{\\leq 2k+2}$ of $y$ such that \n$M_{k+1}(\\tilde y)$ is a flat extension of $M_k(y)$.\n\\end{thm}\n\n\\begin{proof}\nLet $M_k:=M_k(y)$.\nWe will construct a flat extension $M_{k+1}:=\\left(\\begin{smallmatrix} M_k&B\\\\B^*&C\\end{smallmatrix}\\right)$ \nsuch that $M_{k+1}$ is a tracial moment matrix. Since \n$M_k$ is a flat extension of $M_{k-1}(y)$ we can find a basis $b$ of \n$\\ran M_k$ consisting of columns of $M_k$ labeled by $w$ with $\\deg w\\leq k-1$.\nThus the range of $M_k$ is completely determined by the range of $M_k|_{\\spann b}$, \ni.e., for each $p\\in \\mathbb R \\ax$ with $\\deg p\\leq k$ there exists a \\emph{unique} \n$r\\in \\spann b$ such that\n$M_k\\vv p=M_k \\vv r$; equivalently, $p-r\\in \\ker M_k$. \n\nLet $v\\in\\ax$, $\\deg v=k+1$, $v=v'X_i$ for some $i\\in \\{1,\\dots,n\\}$ and $v'\\in \\ax$ \nwith $\\deg v'=k$. \nFor $v'$ there exists an $r\\in \\spann b$ such that $v'-r\\in \\ker M_k$. \n\n\\emph{If} there exists a flat extension $M_{k+1}$, then by Lemma \\ref{lem:flatrideal},\nfrom $v'-r\\in \\ker M_k\\subseteq\\ker M_{k+1}$ it\nfollows that $(v'-r)X_i\\in \\ker M_{k+1}$. 
Hence the desired flat extension \nhas to satisfy \n\\begin{equation}\\label{eqflatcond}\n\tM_{k+1}\\vv{v}=M_{k+1}\\vv{rX_i}=M_k\\vv{rX_i}.\n\\end{equation}\nTherefore we define \n\\begin{equation}\\label{eq:sabinedefinesB}\nB\\vv{v}:=M_k\\vv{rX_i}.\n\\end{equation}\n\nMore precisely, let $(w_1,\\dots,w_\\ell)$ be the \nbasis of $M_k$, i.e., $(M_k)_{i,j}=y_{w_i^*w_j}$. Let $r_{w_i}$\nbe the unique element in $\\spann b$ with $ w_i-r_{ w_i}\\in \\ker M_k$.\nThen $B=M_kW$ with \n$W=(r_{ w_1X_{i_1}},\\dots,r_{ w_\\ell X_{i_\\ell}})$ and we define \n\\begin{equation}\\label{eq:sabinedefinesC}\nC:=W^*M_kW. \n\\end{equation}\nSince the $r_{ w_i}$ are uniquely determined, \n\\begin{equation}\\label{eq:sabinedefinesMk+1}\nM_{k+1}=\\left(\\begin{smallmatrix} M_k&B\\\\B^*&C\\end{smallmatrix}\\right)\n\\end{equation} \nis well-defined. The constructed $M_{k+1}$ is a flat extension of \n$M_k$, and \n$M_{k+1}\\succeq0$ if and only if $M_k\\succeq0$, cf.~\\cite[Proposition 2.1]{cfflat}.\nMoreover, once $B$ is chosen, there is only one $C$ making\n$M_{k+1}$ as in \\eqref{eq:sabinedefinesMk+1} a flat extension of $M_k$. \nThis follows from general\nlinear algebra, see e.g.~\\cite[p.~11]{cfflat}. Hence $M_{k+1}$ is the \n\\emph{only} candidate for a flat extension.\n\nTherefore we are done if $M_{k+1}$ is a tracial moment matrix, i.e., \n\\begin{equation}\n (M_{k+1})_w=(M_{k+1})_v \\;\\text{ whenever}\\; w\\stackrel{\\mathrm{cyc}}{\\thicksim} v. \\label{mm}\n\\end{equation}\nTo show this we prove that $(M_{k+1})_{X_iw}=(M_{k+1})_{wX_i}$. Then \\eqref{mm} \nfollows recursively. \n\nLet $w=u^*v$. If $\\deg u,\\deg vX_i\\leq k$ there is nothing to show since \n$M_k$ is a tracial moment matrix. If $\\deg u\\leq k$ and $\\deg vX_i=k+1$ there exists\nan $r\\in \\spann b$ such that $r-v\\in \\ker M_{k-1}$, and by Lemma \\ref{lem:flatrideal},\nalso $vX_i-rX_i\\in \\ker M_k$. 
Then we get\n\\begin{align*}\n(M_{k+1})_{u^*vX_i}&=\\vv{u}^*M_{k+1}\\vv{vX_i}=\\vv{u}^*M_{k+1}\\vv{rX_i}\n=\\vv{u}^*M_{k}\\vv{rX_i}\\\\\n&=(M_k)_{u^*rX_i}\n=(M_k)_{X_iu^*r}\n=(M_k)_{(uX_i)^*r}\\\\\n&\\overset{(\\ast)}{=}{\\vv{uX_i}}^*M_{k+1}\\vv{v}=(M_{k+1})_{(uX_i)^*v}\n=(M_{k+1})_{X_iw},\n\\end{align*}\nwhere equality $(\\ast)$ holds by \\eqref{eqflatcond}, which is built into the construction of\n$M_{k+1}$ (cf.~Lemma \\ref{lem:flatrideal}).\n\nIf $\\deg u=\\deg vX_i=k+1$, write $u=X_ju'$. Further, there exist $s,r\\in \\spann b$ with \n$u'-s\\in \\ker M_{k-1}$ and $r-v\\in \\ker M_{k-1}$. Then \n\\begin{align*}\n(M_{k+1})_{u^*vX_i}&=\\vv{X_ju'}^*M_{k+1}\\vv{vX_i}=\\vv{X_js}^*M_{k}\\vv{rX_i}\\\\\n&=(M_k)_{s^*X_jrX_i}=(M_k)_{(sX_i)^*(X_jr)}\\\\\n&\\overset{(*)}{=}\\vv{uX_i}^*M_{k+1}\\vv{X_jv}=(M_{k+1})_{(uX_i)^*X_jv}\n=(M_{k+1})_{X_i w}.\n\\end{align*}\n\nFinally, the construction of $\\tilde y$ from $M_{k+1}$ is clear. \n\\end{proof}\n\n\n\\begin{cor}\\label{cor:flat}\nLet $y=(y_ w)_{\\leq 2k}$ be a truncated tracial sequence. If \n$M_k(y)$ is positive semidefinite\nand $M_k(y)$ is a flat extension of $M_{k-1}(y)$, then $y$ \nis a truncated tracial moment sequence.\n\\end{cor}\n\n\\begin{proof}\n\tBy Theorem \\ref{thm:flatextension} we can extend $M_k(y)$ inductively\n\tto a positive semidefinite moment matrix $M(\\tilde y)$ with \n\t$\\rank M(\\tilde y)=\\rank M_k(y)<\\infty$. Thus $M(\\tilde y)$ has finite \n\trank and by Theorem \\ref{thm:finiterank}, there exists a tracial moment \n\trepresentation \n\tof $\\tilde y$. Therefore $y$ is a truncated tracial moment sequence.\n\\end{proof}\n\nThe following two corollaries give characterizations of tracial \nmoment matrices coming from tracial moment sequences.\n\n\\begin{cor}\\label{cor:flatall}\nLet $y=(y_ w)$ be a tracial sequence. 
Then $y$ \nis a tracial moment sequence if and only if $M(y)$ is positive semidefinite and there\nexists some $N\\in \\mathbb N$ such that $M_{k+1}(y)$ is a flat extension of \n$M_{k}(y)$ for all $k\\geq N$.\n\\end{cor}\n\n\\begin{proof}\nIf $y$ is a tracial moment sequence then by Corollary \\ref{cor:finite},\n$M(y)$ is positive semidefinite and has finite rank $t$. Thus there exists an \n$N\\in \\mathbb N$ such that $t=\\rank M_N(y)$. \nIn particular, $\\rank M_k(y)=\\rank M_{k+1}(y)=t$ for all $k\\geq N$, i.e., $M_{k+1}(y)$ \nis a flat extension of $M_k(y)$ for all $k\\geq N$.\n\nFor the converse, let $N$ be given such that $M_{k+1}(y)$ is a flat extension of \n$M_{k}(y)$ for all $k\\geq N$. By Theorem \\ref{thm:flatextension}, the (iterated) \nunique extension $\\tilde y$ of $(y_w)_{\\leq 2k}$ for $k\\geq N$ is equal to $y$.\nOtherwise there exists a flat extension $\\tilde y$ of $(y_w)_{\\leq 2\\ell}$ \nfor some $\\ell\\geq N$ such that $M_{\\ell+1}(\\tilde y)\\succeq 0$ is a flat extension\nof $M_\\ell(y)$ and $M_{\\ell+1}(\\tilde y)\\neq M_{\\ell+1}(y)$ contradicting the \nuniqueness of the extension in Theorem \\ref{thm:flatextension}.\n\nThus $M(y)\\succeq 0$ and $\\rank M(y)=\\rank M_N(y)<\\infty$. Hence by Theorem \\ref{thm:finiterank}, \n$y$ is a tracial moment sequence.\n\\end{proof}\n\n\n\\begin{cor}\\label{cor:flatt}\nLet $y=(y_ w)$ be a tracial sequence. Then $y$ \nhas a tracial moment representation with matrices of size at most \n$t:=\\rank M(y)$ if \n$M_N(y)$ is positive semidefinite and $M_{N+1}(y)$ is \na flat extension of $M_{N}(y)$ for some $N\\in \\mathbb N$ with $\\rank M_N(y)=t$.\n\\end{cor}\n\n\\begin{proof}\nSince $\\rank M(y)=\\rank M_N(y)=t,$\neach $M_{k+1}(y)$ with $k\\geq N$ is a flat extension of $M_k(y)$.\nAs $M_N(y)\\succeq0$, all $M_k(y)$ \nare positive semidefinite. \nThus $M(y)$ is also positive semidefinite. Indeed, let\n$p\\in\\mathbb R\\ax$ \nand $\\ell=\\max\\{\\deg p,N\\}$. 
Then \n${\\vv p}^*M(y)\\vv p={\\vv p}^*M_\\ell(y)\\vv p\\geq0$.\n\nThus by Corollary \\ref{cor:flatall}, $y$ is a tracial moment sequence. The \nrepresenting matrices can be chosen to be of size at most $\\rank M(y)=t$. \n\\end{proof}\n\n\n\\section{Positive definite moment matrices and trace-positive polynomials}\\label{sec:poly}\n\nIn this section we explain how the representability of \\emph{positive definite}\ntracial moment matrices relates \nto sum of hermitian squares representations of\ntrace-positive polynomials. We start by introducing some terminology.\n\n\nAn element of the form $g^*g$ for some $g\\in\\mathbb R\\ax$ is called a \n\\textit{hermitian square} and we denote the set of all sums of hermitian \nsquares by \n$$\\Sigma^2=\\{f\\in\\mathbb R\\ax\\mid f=\\sum g_i^*g_i \\;\\text{for some}\\; g_i\\in\\mathbb R\\ax\\}.$$\nA polynomial $f\\in \\mathbb R \\ax$ is \\emph{matrix-positive} if $f(\\ushort A)$ is positive \nsemidefinite for all tuples $\\ushort A$ of symmetric matrices \n$A_i\\in \\sym \\mathbb R^{t\\times t}$, $t\\in\\mathbb N$. 
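The positive semidefiniteness of hermitian squares under symmetric matrix evaluations is also easy to probe numerically. The following sketch (our own illustration, not part of the paper; the helper name is ours) checks it for $g=XY$, where $g^*g=(XY)^*(XY)=YX^2Y$:

```python
import numpy as np

# Our own illustration: for symmetric A, B, the hermitian square
# (XY)^*(XY) = YX^2Y evaluates to B A^2 B, which is positive semidefinite
# because v^T B A^2 B v = |A(Bv)|^2 >= 0 for every vector v.
def hermitian_square_eval(A, B):
    return B @ A @ A @ B

rng = np.random.default_rng(1)
for _ in range(50):
    A = rng.standard_normal((6, 6)); A = (A + A.T) / 2
    B = rng.standard_normal((6, 6)); B = (B + B.T) / 2
    # smallest eigenvalue is nonnegative up to floating-point error
    assert np.linalg.eigvalsh(hermitian_square_eval(A, B)).min() >= -1e-8
```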
Helton \\cite{helton} proved that $f\\in\\mathbb R\\ax$ is \nmatrix-positive if and only if $f\\in \\Sigma^2$ by solving a non-commutative \nmoment problem; see also \\cite{McC}.\n\nWe are interested in a different type of positivity induced by\nthe trace.\n\\begin{dfn}\\label{def:trpos}\nA polynomial $f\\in \\mathbb R \\ax$ is called \\emph{trace-positive} if \n$$\\Tr(f(\\ushort A))\\geq 0\\;\\text{ for all}\\; \\ushort A\\in(\\sym\\mathbb R^{t\\times t})^n,\\; t\\in\\mathbb N.$$\n\\end{dfn}\n\nTrace-positive polynomials are intimately connected to deep open\nproblems from \ne.g.~operator algebras (Connes' embedding conjecture \\cite{ksconnes})\nand mathematical physics (the Bessis-Moussa-Villani conjecture \n\\cite{ksbmv}), so a good understanding of this set is needed.\nA distinguished subset is formed by sums of hermitian squares and\ncommutators.\n\n\\begin{dfn}\nLet $\\Theta^2$ be the set of all polynomials which are cyclically \nequivalent to a sum of hermitian squares, i.e.,\n\\begin{equation}\\label{eq:defcycsohs}\n\\Theta^2=\\{f\\in \\mathbb R\\ax\\mid f\\stackrel{\\mathrm{cyc}}{\\thicksim}\\sum g_i^*g_i\\;\\text{for some}\\;g_i \\in\\mathbb R\\ax\\}.\n\\end{equation}\n\\end{dfn}\n\nObviously, all $f\\in \\Theta^2$ are trace-positive. However, in contrast to \nHelton's sum of squares theorem mentioned above, the following\nnon-commutative version of the well-known Motzkin polynomial \\cite[p.~5]{Mar} shows that \na trace-positive polynomial need not be a member of $\\Theta^2$ \\cite{ksconnes}. \n\n\\begin{example}\\label{motznc}\nLet $$M_{\\rm nc}=XY^4X+YX^4Y-3XY^2X+1\\in\\mathbb R\\axy.$$ Then $M_{\\rm nc}\\notin \\Theta^2$ since \nthe commutative Motzkin polynomial is not a (commutative) sum of squares \\cite[p.~5]{Mar}. 
\nThe fact that $M_{\\rm nc}(A,B)$ has nonnegative trace for all symmetric matrices $A,B$ \nhas been shown by Schweighofer and the second author \\cite[Example 4.4]{ksconnes} using \nPutinar's\nPositivstellensatz \\cite{Put}.\n\\end{example}\n\nLet $\\Sigma_k^2:=\\Sigma^2\\cap \\mathbb R \\ax_{\\leq 2k}$ and $\\Theta_k^2:=\\Theta^2\\cap \\mathbb R \\ax_{\\leq 2k}$.\nThese are convex cones in $\\mathbb R \\ax_{\\leq 2k}$. \nBy duality there exists a connection \nbetween $\\Theta_k^2$ and positive semidefinite tracial moment matrices of order $k$. \nIf every tracial moment matrix $M_k(y)\\succeq0$ of order $k$ has a tracial representation \nthen every trace-positive polynomial of degree at most $2k$ lies in $\\Theta_k^2$. \nIn fact:\n\n\\begin{thm}\\label{thm:posdefmm}\n\tThe following statements are equivalent:\n\t\\begin{enumerate}[\\rm (i)]\n\t\\item all truncated tracial sequences $(y_ w)_{\\leq 2k}$ with \n\t{\\rm{positive definite}} tracial moment matrix $M_k(y)$ have a tracial moment representation \\eqref{rep};\n\t\\item all trace-positive polynomials of degree $\\leq2k$ are elements of $\\Theta^2_k$.\n\t\\end{enumerate}\n\\end{thm}\n\nFor the proof we need some preliminary work.\n\\begin{lemma}\\label{lem:thetaclosed}\n\t$\\Theta_k^2$ is a closed convex cone in $\\mathbb R \\ax_{\\leq 2k}$.\n\\end{lemma}\n\n\\begin{proof}\nEndow $\\mathbb R\\ax_{\\leq 2k}$ with a norm \n$\\|$\\textvisiblespace $\\|$ and the quotient space $\\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}$ \nwith the quotient norm\n\\begin{equation}\\label{eq:qnorm}\n\\| \\pi(f) \\| := \\inf \\big\\{ \\| f+h \\| \\mid h\\stackrel{\\mathrm{cyc}}{\\thicksim} 0\\big\\}, \\quad\nf\\in\\mathbb R\\ax_{\\leq 2k}.\n\\end{equation}\nHere $\\pi:\\mathbb R\\ax_{\\leq 2k}\\to \\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}$ denotes \nthe quotient map. 
(Note: due to the finite-dimensionality of $\\mathbb R\\ax_{\\leq 2k}$,\nthe infimum on the right-hand side of \\eqref{eq:qnorm} is attained.)\n\nSince $\\Theta_k^2= \\pi^{-1} \\big( \\pi(\\Theta_k^2)\\big)$, it suffices\nto show that $\\pi(\\Theta_k^2)$ is closed.\nLet $d_k=\\dim \\mathbb R \\ax_{\\leq 2k}$. Since by Carath\\'eodory's theorem \\cite[p.~10]{bar} each element\n\t$f\\in \\Sigma^2_k$ can be written as a sum of $d_k+1$ hermitian squares, the image of\n\\begin{align*}\n\\varphi:\\left(\\mathbb R \\ax_{\\leq k}\\right)^{d_k}\n&\\to\n\\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}\\\\\n(g_i)_{i=0,\\dots,d_k}\n&\\mapsto \n\\pi\\big(\\sum_{i=0}^{d_k}g_i^*g_i\\big)\n\\end{align*}\nequals $\\pi(\\Sigma^2_k)=\\pi(\\Theta_k^2)$. In $\\left(\\mathbb R \\ax_{\\leq k}\\right)^{d_k}$ we define \n$\\mathcal S:=\\{g=(g_i)\\mid \\|g\\|=1\\}$. Note that $\\mathcal S$ is compact, thus \n$V:=\\varphi(\\mathcal S)\\subseteq \\pi(\\Theta_k^2)$ is compact as well. \nSince $0\\notin \\mathcal S$,\nand a sum of hermitian squares cannot be cyclically equivalent to $0$ by \n\\cite[Lemma 3.2 (b)]{ksbmv}, we see that\n$0\\notin V$.\n\nLet $(f_\\ell)_\\ell$ be a sequence in $\\pi(\\Theta^2_k)$ which converges to $\\pi(f)$ \nfor some $f\\in\\mathbb R \\ax_{\\leq 2k}$. \nWrite $f_\\ell=\\lambda_\\ell v_\\ell$ for $\\lambda_\\ell\\in\\mathbb R_{\\geq 0}$ and $v_\\ell\\in V$. \nSince $V$ is compact there exists a subsequence $(v_{\\ell_j})_j$ of $v_\\ell$ converging \nto $v\\in V$. 
Then\n$$\\lambda_{\\ell_j}=\\frac{\\|f_{\\ell_j}\\|}{\\|v_{\\ell_j}\\|}\\stackrel{j\\rightarrow \\infty}{\\longrightarrow }\\frac{\\|f\\|}{\\|v\\|}.$$ \nThus $f_\\ell\\rightarrow f=\\frac{\\|f\\|}{\\|v\\|}v\\in\\pi(\\Theta^2_k)$.\n\\end{proof}\n\n\\begin{dfn}\nTo a truncated tracial sequence $(y_ w)_{\\leq k}$ we\nassociate\nthe \\emph{$($tracial$)$ Riesz functional} $L_y:\\mathbb R \\ax_{\\leq k}\\to\\mathbb R$ defined by\n$$L_y(p):=\\sum_ w p_ w y_ w\\quad\\text{for } p=\\sum_ w p_ w w\\in \\mathbb R\\ax_{\\leq k}.$$\nWe say that $L_y$ is \\emph{strictly positive} ($L_y>0$), if \n$$L_y(p)>0 \\text{ for all trace-positive } p\\in\\mathbb R \\ax_{\\leq k},\\, p\\stackrel{\\mathrm{cyc}}{\\nsim} 0.$$ \nIf $L_y(p)\\geq0$ for all trace-positive $p\\in\\mathbb R \\ax_{\\leq k}$, then\n$L_y$ is \\emph{positive} ($L_y\\geq0$).\n\\end{dfn}\n\nEquivalently, a tracial Riesz functional $L_y$\nis positive (resp., strictly positive) if and only if the map \n$\\bar L_y$ it induces on $ \\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}$ is \nnonnegative (resp., positive) on \nthe nonzero images of trace-positive polynomials in $ \\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}$.\n\n\nWe shall prove that strictly positive Riesz functionals lie in the interior of the cone\nof positive Riesz functionals,\nand that truncated tracial sequences $y$ with \\emph{strictly}\npositive $L_y$ are truncated tracial moment sequences (Theorem \\ref{thm:Lrep} below). 
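To make the Riesz functional concrete, here is a small numerical illustration (our own sketch, with helper names of our choosing, not from the paper): for a sequence $y_w=\Tr(w(A,B))$ generated by a single pair of symmetric matrices, $L_y$ is automatically nonnegative on hermitian squares, since $L_y(p^*p)=\Tr(p(A,B)^Tp(A,B))\geq 0$:

```python
import numpy as np
from itertools import product

# Our own illustration: a truncated tracial sequence y_w = Tr(w(A, B))
# (normalized trace) generated by one pair of symmetric matrices, and its
# Riesz functional L_y evaluated on a hermitian square p*p.
def moment_sequence(A, B, max_deg=2):
    t = A.shape[0]
    mats = {"X": A, "Y": B}
    y = {}
    for k in range(max_deg + 1):
        for w in map("".join, product("XY", repeat=k)):
            M = np.eye(t)
            for ch in w:
                M = M @ mats[ch]
            y[w] = np.trace(M) / t  # normalized trace
    return y

def riesz_on_square(p, y):
    # L_y(p*p) = sum_{u,v} p_u p_v y_{u* v}; the involution reverses words
    # (the variables are symmetric, the coefficients real).
    return sum(p[u] * p[v] * y[u[::-1] + v] for u in p for v in p)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
y = moment_sequence(A, B)
p = {"": 0.3, "X": -1.2, "Y": 0.7}  # p = 0.3 - 1.2 X + 0.7 Y
assert riesz_on_square(p, y) >= -1e-9
```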
\nThese results are motivated by and resemble the \nresults of Fialkow and Nie \n\\cite[Section 2]{fnie} in the commutative context.\n\n\\begin{lemma}\\label{lem:Linner}\n\tIf $L_y>0$ then there exists an $\\varepsilon>0$ such that $L_{\\tilde y}>0$ for all\n\t$\\tilde y$ with $\\|y-\\tilde y\\|_1<\\varepsilon$.\n\\end{lemma}\n\n\\begin{proof}\nWe equip $\\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}$ with a quotient norm as in \\eqref{eq:qnorm}.\nThen $$\\mathcal S:=\\{\\pi(p)\\in \\mathbb R \\ax_{\\leq 2k}/_{\\stackrel{\\mathrm{cyc}}{\\thicksim}}\\mid p\\in\\mathcal C_k,\\;\\|\\pi(p)\\|=1\\}$$ is compact.\nBy a scaling argument, it suffices to show that $\\bar L_{\\tilde y}>0$ on $\\mathcal S$ for $\\tilde y$ close to $y$. \nThe map $y\\mapsto \\bar L_y$ is linear between finite-dimensional vector spaces.\nThus \n$$|\\bar L_{y'}(\\pi(p))-\\bar L_{y''}(\\pi(p))|\\leq C \\|y'-y''\\|_1$$ for all $\\pi(p)\\in \\mathcal S$, \ntruncated tracial moment sequences $y',y''$, and some $C\\in\\mathbb R_{>0}$.\n\nSince $\\bar L_y$ is continuous and strictly positive on $\\mathcal S$,\n there exists an $\\varepsilon>0$ such \nthat $\\bar L_y(\\pi(p))\\geq2\\varepsilon$ for all $\\pi(p)\\in \\mathcal S$. \nLet $\\tilde y$ satisfy $\\|y-\\tilde y\\|_1<\\frac {\\varepsilon}C$.\nThen\n\\[\\bar L_{\\tilde y}(\\pi(p))\\geq \\bar L_y(\\pi(p))-C \\|y-\\tilde y\\|_1\\geq\\varepsilon>0. \\hfill\\qedhere \\]\n\\end{proof}\n\n\\begin{thm}\\label{thm:Lrep}\n\tLet $y=(y_ w)_{\\leq k}$ be a truncated tracial sequence of order $k$.\n\tIf $L_y>0$, then $y$ is a truncated tracial moment sequence.\n\\end{thm}\n\n\\begin{proof}\nWe show first that \n$y\\in \\overline T$, where $\\overline T$ is the closure of\n$$T=\\big\\{(y_ w)_{\\leq k}\\mid \\exists \\ushort A^{(i)}\\;\\exists \\lambda_i\\in \\mathbb R_{\\geq0} :\\; y_ w=\\sum \\lambda_i\\Tr( w(\\ushort A^{(i)}))\\big\\}.$$\n\nAssume $L_y>0$ but $y\\notin \\overline T$. 
Since $\\overline T$ is a closed \nconvex cone in $\\mathbb R^\\eta$ (for some $\\eta\\in \\mathbb N$), by the Minkowski separation \ntheorem there exists a vector $\\vv{p}\\in \\mathbb R^\\eta$ such that $\\vv{p}^*y<0$ \nand $\\vv{p}^*w\\geq 0$ for all $w\\in \\overline T$. The non-commutative \npolynomial $p$ corresponding to $\\vv{p}$ is\ntrace-positive since $\\vv{p}^*z\\geq 0$ for all $z\\in \\overline T$. Thus\n$0>\\vv{p}^*y=L_y(p)\\geq 0$, a contradiction. Hence $y\\in\\overline T$.\nBy Lemma \\ref{lem:Linner} there is an $\\varepsilon>0$ such that $L_{\\tilde y}>0$ for all \n$\\tilde y$ with $\\|y-\\tilde y\\|_1<\\varepsilon$; by the above, all these $\\tilde y$ lie in \n$\\overline T$ as well. Hence $y$ is an interior point of the convex cone $\\overline T$, and \nsince the interior of the closure of a convex set coincides with the interior of the set \nitself, $y\\in T$, i.e., $y$ is a truncated tracial moment sequence.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:posdefmm}]\n(i) $\\Rightarrow$ (ii): Let $f\\in\\mathbb R \\ax_{\\leq 2k}$ be trace-positive and assume \n$f\\notin\\Theta_k^2$. Since $\\Theta_k^2$ is a closed convex cone by Lemma \\ref{lem:thetaclosed}, \nthe Minkowski separation theorem yields a linear functional \n$L:\\mathbb R \\ax_{\\leq 2k}\\to\\mathbb R$ with $L(f)<0$ and $L(p)>0$ \nfor all $p\\in \\Theta_k^2$, $p\\stackrel{\\mathrm{cyc}}{\\nsim} 0$.\nHence\nthe bilinear form given by $$(p,q)\\mapsto L(pq)$$ can be written as \n$ L(pq)={\\vv q}^*M\\vv{p}$ for some truncated tracial moment matrix $M\\succ0$. \nBy assumption, the corresponding truncated tracial sequence\n$y$ has a tracial moment representation $$y_ w=\\sum \\lambda_i \\Tr( w(\\ushort A^{(i)}))$$\nfor some tuples $\\ushort A^{(i)}$ of symmetric matrices $A_j^{(i)}$ and $\\lambda_i\\in \\mathbb R_{\\geq0}$, \nwhich implies the contradiction\n$$0>L(f)=\\sum \\lambda_i \\Tr(f(\\ushort A^{(i)}))\\geq 0.$$ \n\nConversely, if (ii) holds,\nthen $L_y>0$ if and only if $M(y)\\succ0$. Thus a positive definite moment matrix $M(y)$\ndefines a strictly positive functional $L_y$ which by Theorem \\ref{thm:Lrep} has a tracial\nrepresentation. \n\nAs mentioned above, the Motzkin polynomial $M_{\\rm nc}$\nis trace-positive but $M_{\\rm nc}\\notin \\Theta^2$. Thus by Theorem \\ref{thm:posdefmm}\nthere exists at least one truncated tracial moment matrix which is positive definite but has \nno tracial representation. 
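The trace-positivity of $M_{\rm nc}$ can also be probed numerically; the following sketch (our own sanity check, not part of the formal development) samples random symmetric matrices of several sizes:

```python
import numpy as np

# Our own numerical sanity check: the noncommutative Motzkin polynomial
#   M_nc = X Y^4 X + Y X^4 Y - 3 X Y^2 X + 1
# from Example (motznc) has nonnegative (normalized) trace at every pair of
# symmetric matrices; we test this on random samples.
def motzkin_nc(A, B):
    t = A.shape[0]
    B2 = B @ B
    return A @ B2 @ B2 @ A + B @ A @ A @ A @ A @ B - 3 * A @ B2 @ A + np.eye(t)

rng = np.random.default_rng(0)
for t in (1, 2, 5, 10):
    for _ in range(200):
        A = rng.standard_normal((t, t)); A = (A + A.T) / 2
        B = rng.standard_normal((t, t)); B = (B + B.T) / 2
        assert np.trace(motzkin_nc(A, B)) / t >= -1e-9  # normalized trace
```

Note that the bound is tight: at $A=B=I$ the polynomial evaluates to the zero matrix.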
\n\\begin{example}\nTaking the index set \n$$(1,X,Y,X^2,XY,YX,Y^2,X^2Y,XY^2,YX^2,Y^2X,X^3,Y^3,XYX,YXY),$$\n the \nmatrix \n$$M_3(y):=\\left(\\begin{smallmatrix}\n1 & 0 & 0 & \\frac74 & 0 & 0 & \\frac74 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \n0 & \\frac74 & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{19}{16} & 0 & \\frac{19}{16} & \\frac{21}4 & 0 & 0 & 0 \\\\ \n0 & 0 & \\frac74 & 0 & 0 & 0 & 0 & \\frac{19}{16} & 0 & \\frac{19}{16} & 0 & 0 & \\frac{21}4 & 0 & 0 \\\\ \n\\frac74 & 0 & 0 & \\frac{21}4 & 0 & 0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \n0 & 0 & 0 & 0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \n0 & 0 & 0 & 0 & 0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \n\\frac74 & 0 & 0 & \\frac{19}{16} & 0 & 0 &\\frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \n0 & 0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & \\frac{9}8 & 0 & \\frac{5}6 & 0 & 0 & \\frac{9}8 & 0 & 0 \\\\ \n0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{9}8 & 0 & \\frac{5}6 & \\frac{9}8 & 0 & 0 & 0 \\\\ \n0 & 0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & \\frac{5}6 & 0 & \\frac{9}8 & 0 & 0 & \\frac{9}8 & 0 & 0 \\\\ \n0 & \\frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{5}6 & 0 & \\frac{9}8 & \\frac{9}8 & 0 & 0 & 0 \\\\ \n0 & \\frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{9}8 & 0 & \\frac{9}8 & 51 & 0 & 0 & 0 \\\\ \n0 & 0 & \\frac{21}4 & 0 & 0 & 0 & 0 & \\frac{9}8 & 0 & \\frac{9}8 & 0 & 0 & 51 & 0 & 0 \\\\ \n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{5}6 & 0 \\\\ \n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{5}6\n\\end{smallmatrix} \\right)$$\nis a tracial moment matrix of degree 3 in 2 variables and is positive definite.\nBut $$L_y(M_{\\rm nc})=M_{\\rm nc}(y)=-\\frac5{16}<0.$$ Thus $y$ \nis not a truncated tracial moment sequence,\nsince otherwise $L_y(p)\\geq 0$ for all trace-positive polynomials $p\\in \\mathbb R\\axy_{\\leq 6}$.\n\nOn the other hand, the (free) non-commutative moment problem is always\nsolvable for positive definite moment matrices 
\\cite[Theorem 2.1]{McC}.\nIn our example this means\nthere are symmetric matrices $A,B\\in\\mathbb R^{15\\times 15}$ and a vector\n$v\\in\\mathbb R^{15}$ such that\n$$y_ w=\\langle w(A,B)v,v\\rangle$$\nfor all $ w\\in\\axy_{\\leq 3}$.\n\\end{example}\n\n\\begin{remark}\nA trace-positive polynomial $f\\in \\mathbb R \\ax$ of degree $2k$ lies in $\\Theta^2_k$ if\nand only if $L_y(f)\\geq 0$ for all truncated tracial sequences $(y_w)_{\\leq 2k}$ with\n $M_k(y)\\succeq0$. \nThis condition is obviously satisfied if all truncated tracial sequences $(y_w)_{\\leq 2k}$ with \n$M_k(y)\\succeq0$ have a tracial representation. \n\nUsing this we can prove that trace-positive binary quartics, i.e., \nhomogeneous polynomials of degree $4$ in $\\mathbb R \\langle X,Y\\rangle$, lie in $\\Theta_2^2$.\nEquivalently, truncated tracial sequences $(y_w)$ indexed by words of degree $4$ with a \npositive definite tracial \nmoment matrix have a tracial moment representation.\n\nFurthermore,\ntrace-positive binary biquadratic polynomials, i.e., polynomials $f\\in \\mathbb R \\axy$ with \n$\\deg_X f,\\deg_Y f\\leq 2$,\nare cyclically equivalent to a sum of hermitian squares. \nExample \\ref{expsd} then shows that a polynomial $f$ can satisfy $L_y(f)\\geq 0$ although there \nare truncated tracial sequences $(y_w)_{\\leq 2k}$ with $M_k(y)\\succeq0$ and no \ntracial representation.\n\nStudying extremal points of the convex cone $$\\{(y_w)_{\\leq 2k}\\mid M_k(y)\\succeq 0\\}$$ \nof truncated tracial sequences with positive semidefinite tracial moment matrices, we are able \nto impose a concrete block structure on the matrices needed in a tracial moment representation. \n\nThese statements and concrete sum of hermitian squares and commutators representations of trace-positive polynomials \nof low degree will be published elsewhere \\cite{sb}. 
\n\\end{remark}\n\n\n", "meta": {"timestamp": "2010-01-20T22:15:12", "yymm": "1001", "arxiv_id": "1001.3679", "language": "en", "url": "https://arxiv.org/abs/1001.3679"}} {"text": "\\section{Introduction}\n\nSome tasks, due to their complexity, cannot be carried out by single individuals. They need the concourse of sets of people composing teams. Teams provide a structure and means of bringing together people with a suitable mix of individual properties (such as competences or personality). This can encourage the exchange of ideas, their creativity, their motivation and job satisfaction and can actually extend individual capabilities. In turn, a suitable team can improve the overall productivity, and the quality of the performed tasks. However, sometimes teams work less effectively than initially expected due to several reasons: a bad balance of their capacities, incorrect team dynamics, lack of communication, or difficult social situations. Team composition is thus a problem that has attracted the interest of research groups all over the world, also in the area of multiagent systems. MAS research has widely acknowledged competences as important for performing tasks of different nature \\cite{Anagnostopoulos12onlineteam,Chen2015,Okimoto,Rangapuram2015}. However, the majority of the approaches represent capabilities of agents in a Boolean way (i.e., an agent either has a required skill or not). This is a simplistic way to model an agent's set of capabilities as it ignores any skill degree. In real life, capabilities are not binary since every individual (e.g. human or software) shows different performances for each competence. Additionally, the MAS literature has typically disregarded significant organizational psychology findings (with the exception of several recent, preliminary attempts like \\cite{FarhangianPPS15} or \\cite{alberola2016artificial}). 
Numerous studies in organizational psychology \\cite{Arnold,Mount,White} underline the importance of personality traits or \\emph{types} for team composition. Other studies have focused on how team members should differ or converge in their characteristics, such as experience, personality, level of skill, or gender, among others \\cite{West}, in order to increase performance. \n\nIn this paper, we focus on scenarios where a complex task requires the collaboration of individuals within a team. More precisely, we consider a scenario where there are \\emph{multiple instances of the same complex task}. The task has a task type and a set of competence requests with competence levels needed to solve the task. We have a pool of human agents characterized by gender, personality, and a set of competences with competence levels. \nOur goal is to partition agents into teams so that within a task all competence requirements are covered (whenever possible) and team members work well together. That is, each resulting team is both \\emph{proficient} (covers the required competences) and \\emph{congenial} (balances gender and psychological traits). We refer to these teams as \\emph{synergistic teams}. We define the \\emph{synergistic value} of a team as its balance in terms of competence, personality and gender. Each synergistic team works on the very same task. This scenario is present in many real-life settings, for instance, a classroom or a crowdsourcing task.\nFor this purpose, we design an algorithm that uses a greedy technique to match the required competences while balancing the psychological traits of team members. \n\nThis paper makes the following contributions. To start with, we formalise the synergistic team formation problem as the problem of partitioning a group of individuals into teams with limited size.\nWe provide an approximate local algorithm to solve the team composition problem. 
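To fix intuition, the greedy idea can be sketched as follows. This is a toy illustration of our own: the function names, the equal weighting of proficiency and congeniality, and the scoring formulas are simplifying assumptions, not the actual algorithm of Section~\ref{sec:TeamForm}:

```python
import itertools

# Toy sketch (ours, NOT the authors' algorithm): greedily partition agents
# into teams of size m, scoring a candidate team by how well it covers the
# task's required competence levels and how gender-balanced it is.
# An agent is (name, gender, {competence: level}); req is {competence: level}.

def coverage(team, req):
    # proficiency: per required competence, the best level present in the team
    return sum(min(max(a[2].get(c, 0.0) for a in team), lvl)
               for c, lvl in req.items()) / sum(req.values())

def balance(team):
    # congeniality proxy: reward mixed-gender teams
    w = sum(1 for a in team if a[1] == "w")
    return 1.0 - abs(w - (len(team) - w)) / len(team)

def synergistic_value(team, req):
    return 0.5 * coverage(team, req) + 0.5 * balance(team)

def greedy_partition(agents, req, m):
    pool, teams = list(agents), []
    while pool:
        best = max(itertools.combinations(pool, min(m, len(pool))),
                   key=lambda t: synergistic_value(t, req))
        teams.append(list(best))
        for a in best:
            pool.remove(a)
    return teams
```

Each round exhaustively scores all candidate teams from the remaining pool, which is only feasible for small team sizes; it merely illustrates the local, greedy flavor of the approach.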
We empirically evaluate the algorithm using real data. Preliminary results show that our algorithm predicts team performance better than experts who know the students' social situation, background and competences. \n\n\textbf{Outline.} The remainder of this paper is structured as follows. Section~\ref{related} opens with an overview of the related work. Section~\ref{pers} gives the personality background for our model. Section~\ref{sec:model} describes the synergistic team composition problem and Section~\ref{sec:TeamForm} presents our algorithm to solve it. Then, Section~\ref{sec:results} presents the results of our algorithm in the context of team composition in the classroom. Finally, Section~\ref{sec:discuss} discusses our approach and future work.\n\vspace{-2mm}\n\section{Background} \label{related}\nTo the best of our knowledge, \cite{farhangian2015agent} is the only model that considers both personality and competences while composing teams. There, the influence of personality on different task allocation strategies (minimizing either undercompetence or overcompetence) is studied. Hence, this work is the most relevant to ours; however, there are substantial differences between our work and \cite{farhangian2015agent}. First, its authors do not propose an algorithm to compose teams based on \emph{both} personality and competences. Second, gender balance is not considered in their setting.
Finally, \\cite{farhangian2015agent} does not provide an evaluation involving real data (only an agent-based simulation is presented).\n\nThe rest of the literature relevant to this article is divided into two categories as proposed in \\cite{andrejczuk}: those that consider agent capacities (individual and social capabilities of agents) and those that deal with agent personality (individual behaviour models).\n\n\\textbf{Capacity.}\nThe capacity dimension has been exploited by numerous previous works \\cite{Anagnostopoulos12onlineteam,Chalkiadakis2012,Chen2015,Crawford,Liemhetcharat2014,Okimoto,JAR2015,Rangapuram2015}. In contrast to our work, where the competences are graded, in the majority of works agents are assumed to have multiple binary skills (i.e., the agent either has a skill or not). For instance, \\cite{Okimoto,Crawford} use agents' capabilities to compose one k-robust team for a single task. A team is $k$-robust if removing any $k$ members from the team does not affect the completion of the task. \\cite{Anagnostopoulos12onlineteam} uses competences and communication cost in a context where tasks sequentially arrive and teams have to be composed to perform them. Each task requires a specific set of competences and the team composition algorithm is such that the workload per agent is fair across teams.\n\n\\textbf{Personality.}\nIn the team formation literature, the only two models to our knowledge considering personality to compose teams are \\cite{FarhangianPPS15} and \\cite{alberola2016artificial}. \\cite{alberola2016artificial} uses Belbin theory to obtain human predominant \\emph{roles} (we discuss this method in Section \\ref{pers}). Additionally, the gender is not taken into account while composing heterogeneous teams, which we believe may be important for team congeniality. Regarding \\cite{FarhangianPPS15}, Farhangian et al. use the classical MBTI personality test (this method is discussed in Section \\ref{pers}). 
They look for the best possible team built around a selected leader. In other words, the \emph{best} team for a particular task is composed. Gender balance is not considered in this setting. Finally, although \cite{FarhangianPPS15}'s team composition considered real data, the resulting teams' performance was not validated in any real setting (Bayesian theory was used to predict the probability of success under various team composition conditions).\n\vspace{-3mm}\n\section{Personality} \label{pers}\nIn this section, we discuss the most prominent approaches to measuring human personality and we explain the details of the method we have decided to examine.\n\nPersonality determines people's behaviour, cognition and emotion. Different personality theorists present their own definitions of personality and different ways to measure it based on their theoretical positions. \n\nThe most popular approach is to determine personality through a set of questions. Several simplified schemes have been developed over the years to profile human personality. The most popular ones are:\n\begin{enumerate}\n\vspace{-1.5mm}\n\item the Five Factor Model (aka FFM or ``Big Five''), which uses five broad dimensions to describe human personality \cite{Costa};\n\vspace{-1.5mm}\n\item Belbin theory \cite{belbin}, which provides a theory on how different role types influence teamwork; and \n\vspace{-1.5mm}\n\item the Myers-Briggs Type Indicator (MBTI) scheme, designed to indicate psychological preferences in how people perceive the world and make decisions \cite{Myers}. \n\end{enumerate}\n\vspace{-1.5mm}\nAccording to \cite{Poropat}, FFM personality instruments fail to detect significant sex differences in personality structures. It is also argued that the Big Five dimensions are too broad and heterogeneous, and lack the specificity to make accurate predictions in many real-life settings \cite{Boyle,johnson2004genetic}.
\n\nRegarding Belbin theory, the results of previous studies considering the correlation between team composition and team performance are ambiguous. Even though some research shows weak support for this theory, or no support at all \cite{batenburg2013belbin,van2008belbin,partington1999belbin}, it remains popular.\n\nFinally, the MBTI measure consists of four dimensions on a binary scale (e.g. a person is either Extrovert or Introvert). Within this approach, every person falls into one of the sixteen possible combinations of the four-letter codes, one letter representing one dimension. This approach is easy for non-psychologists to interpret, though its reliance on dichotomous preference scores rather than continuous scores excessively restricts the level of statistical analysis \cite{devito}.\n\nHaving considered the arguments above, we have decided to explore a novel method: the Post-Jungian Personality Theory, a modified version of the Myers-Briggs Type Indicator (MBTI) \cite{Myers}, namely the ``Step II'' version of Quenk, Hammer and Majors \cite{Wilde2013}. The questionnaire to determine personality is short: it contains only 20 quick questions (compared to the 93 MBTI questions). This is very convenient both for experts wanting to design teams and for individuals taking the test, since completing it takes just a few minutes (for details of the questionnaire, see \cite[p.21]{Wilde2013}). Douglass J. Wilde claims that it covers the same psychological territory as MBTI \cite{Wilde2009}. In contrast to the MBTI measure, which consists of four binary dimensions, the Post-Jungian Personality Theory uses the \emph{numerical} data collected using the questionnaire \cite{Wilde2011}.
The results of this method seem promising, since within a decade this novel approach has tripled the fraction of Stanford teams awarded national prizes by the Lincoln Foundation \\cite{Wilde2009}.\n\nThe test is based on the pioneering psychiatrist Carl Gustav Jung's cognitive-mode personality model \\cite{PT}. It has two sets of variable pairs called psychological functions: \n\\vspace{-1.5mm}\n\\begin{itemize}\n\\item {\\bf Sensing / Intuition (SN)} --- describes the way of approaching problems\n\\vspace{-1.5mm}\n\\item {\\bf Thinking / Feeling (TF)} --- describes the way of making decisions\n\\end{itemize} \n\\vspace{-1.5mm}\nand two sets of psychological attitudes:\n\\vspace{-1.5mm}\n\\begin{itemize}\n\\item {\\bf Perception / Judgment (PJ)} --- describes the way of living\n\\vspace{-1.5mm}\n\\item {\\bf Extroversion / Introversion (EI)} --- describes the way of interacting with the world\n\\end{itemize} \n\\vspace{-1.5mm}\nFor instance, for the Feeling-Thinking (TF) dimension, a value between -1 and 0 means that a person is of the feeling type, and a value between 0 and 1 means she is of the thinking type. Psychological functions and psychological attitudes compose together a personality. Every dimension of a personality (EI, SN, TF, PJ) is tested by five multiple choice true/false questions.\n\\vspace{-2mm}\n\\section{Team Composition Model}\\label{sec:model}\n\nIn this section we introduce and formalise our team composition problem. First, section \\ref{ssec:basic} introduces the basic notions of agent, personality, competence, and team, upon which we formalise our problem. Next, we formalise the notion of task assignment for a single team and a single task, and we characterise different types of assignments. Sections \\ref{ssec:proficiency} and \\ref{ssec:congeniality} show how to evaluate the proficiency and congeniality degrees of a team. 
Based on these measures, in section \ref{ssec:synergisticProblem} we formalise the \emph{synergistic team composition problem}.\n\subsection{Basic definitions} \n\label{ssec:basic}\n\nIn our model, we consider that each agent is a human. We characterise each agent by the following properties:\n\begin{itemize}\n\vspace{-1.5mm}\n\item A unique \emph{identifier} that distinguishes an agent from others (e.g. ID card number, passport number, employee ID, or student ID).\n\vspace{-1.5mm}\n\item A \emph{gender}. Each human agent is either a man or a woman.\n\item A \emph{personality} represented by four personality traits. Each personality trait is a number between -1 and 1. \n\item A \emph{set of competences}. A competence integrates knowledge, skills, personal values, and attitudes that enable an agent to act correctly in a job, task or situation \cite{roe2002competences}. Each agent is assumed to possess a set of competences with associated competence levels. This set may vary over time as an agent evolves. \n\end{itemize}\n\vspace{-1.5mm}\nNext, we formalise the above-introduced concepts.\n\vspace{-1.5mm}\n\begin{mydef}\nA \emph{personality profile} is a vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$, where each of $sn, \mathit{tf}, ei, pj$ represents one personality trait.\n\end{mydef}\n\nWe denote by $C = \{c_1, \dots , c_m\}$ the whole set of competences, where each element $c_i \in C$ stands for a competence.\n\n\begin{mydef}\nA \emph{human agent} is represented as a tuple $\langle id, g, \emph{{\bf p}}, l \rangle$ such that:\n\begin{itemize}\n\item $id$ is the agent's identifier;\n\item $g \in \{man, {\mathit woman}\}$ stands for their gender;\n\item $\emph{\bf{p}}$ is a personality profile vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$;\n\item $l: C \to{[0,1]}$ is a function that assigns to each competence $c \in C$ the probability that the agent will successfully show competence $c$.
We will refer to $l(c)$ as the \emph{competence level} of the agent for competence $c$. We assume that when an agent does not have a competence (or we do not know about it), the level of that competence is zero.\n\end{itemize}\n\end{mydef}\n\nHenceforth, we will denote the set of agents by $A =\{a_1,\ldots, \linebreak a_n\}$. Moreover, we will use superscripts to refer to agents' components. For instance, given an agent $a \in A$, $id^{a}$ will refer to the $id$ component of agent $a$. We will employ a matrix $L \in [0,1]^{n \times m}$ to represent the competence levels of each agent for each competence.\n\vspace{-2mm}\n\begin{mydef}[Team] A \emph{team} is any non-empty subset of $A$ with at least two agents. We denote by $\cal{K_A}$ $ = (2^A \setminus \{\emptyset\})\setminus \{\{a_i\}| a_i \in A\}$ the set of all possible teams in $A$. \n\end{mydef}\n\vspace{-2mm}\nWe assume that agents in teams coordinate their activities for mutual benefit. \n\n\subsection{The task assignment problem} \n\label{ssec:assignment}\n\nIn this section we focus on how to assign a team to a task.\nA task type determines the competence levels required for the task, as well as the importance of each competence with respect to the others. For instance, some tasks may require a high level of creativity because they have never been performed before (so there are no agents qualified in the matter). Others may require a highly skilled team with a high degree of coordination and teamwork (as is the case for rescue teams).
Therefore, we define a task type as:\n\begin{mydef}\nA task type $\tau$ is defined as a tuple \\ $\langle \lambda, \mu, {\{(c_{i},l_{i}, w_{i})\}_{i \in I_{\tau}}} \rangle$ such that:\n\begin{itemize}\n\item $\lambda \in [0,1]$ is the importance given to proficiency;\n\item $\mu \in [-1,1]$ is the importance given to congeniality;\n\item $c_{i} \in C$ is a competence required to perform the task;\n\item $l_{i} \in [0,1]$ is the required competence level for competence $c_i$; \n\vspace{-1.5mm}\n\item $w_{i} \in [0,1]$ is the importance of competence $c_i$ for the success of a task of type $\tau$; and\n\vspace{-1.5mm}\n\item $\sum_{i \in I_{\tau}} w_i = 1$.\n\end{itemize}\n\end{mydef}\nWe will discuss the meaning of $\lambda$ and $\mu$ further ahead, when defining synergistic team composition (see subsection \ref{ssec:synergisticProblem}).\nThen, we define a task as:\n\vspace{-1.5mm}\n\begin{mydef}A \emph{task} $t$ is a tuple $\langle \tau, m \rangle$ such that $\tau$ is a task type and $m$ is the required number of agents, where $m\geq 2$.\n\end{mydef}\n\nHenceforth, we denote by $T$ the set of tasks and by $\mathcal{T}$ the set of task types. Moreover, we will note as $C_{\tau} =\{c_{i} | i \in I_{\tau}\}$ the set of competences required by task type $\tau$.\n\nGiven a team and a task type, we must consider how to assign competences to team members (agents). Our first, weak notion of task assignment only requires that every competence in a task type be assigned to some agent(s) in the team.\n\n\begin{mydef}Given a task type $\tau$ and a team $K \in \cal{}K_A$, an assignment is a function $\eta: K \to 2^{C_{\tau}}$ satisfying that\n$C_{\tau} \subseteq \bigcup_{a \in K} \eta(a)$. \n\end{mydef}\n\n\subsection{Evaluating team proficiency} \label{ssec:prof} \n\label{ssec:proficiency}\n\nGiven a task assignment for a team, next we will measure the \emph{degree of competence} of the team as a whole.
This measure will combine both the degree of under-competence and the degree of over-competence, which we formally define first. Before that, we must formally identify the agents that are assigned to each competence, as follows.\n\vspace{-1.5mm}\n\begin{mydef}\nGiven a task type $\tau$, a team $K$, and an assignment $\eta$, the set $\delta(c_{i}) = \{a \in K | c_{i} \in \eta(a)\}$ stands for the agents assigned to cover competence $c_{i}$.\n\end{mydef}\n\vspace{-1.5mm}\nNow we are ready to define the degrees of undercompetence and overcompetence. \n\vspace{-1.5mm}\n\begin{mydef}[Degree of undercompetence] \item\n\vspace{-1.6mm}\nGiven a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of undercompetence of the team for the task as:\n\vspace{-2.5mm}\n\begin{equation*}\nu(\eta)=\n\sum_{i \in I_{\tau}} w_{i} \cdot \frac{\sum_{a \in \delta(c_{i})} |\min(l^{a}(c_{i}) - l_{i},0)|}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} < 0\}|}\n\end{equation*}\n\end{mydef}\n\vspace{-2.5mm}\n\begin{mydef}[Degree of overcompetence] \item\n\vspace{-1.6mm}\nGiven a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of overcompetence of the team for the task as:\n\vspace{-2.5mm}\n\begin{equation*}\no(\eta)=\n\sum_{i \in I_{\tau}} w_i \cdot \frac{\sum_{a \in \delta(c_{i})} \max(l^{a}(c_{i}) - l_{i},0)}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} > 0\}|}\n\end{equation*}\n\end{mydef}\n\vspace{-1.5mm}\nGiven a task assignment for a team, we can calculate its competence degree to perform the task by combining its overcompetence and undercompetence as follows.
\n\\vspace{-1.5mm}\n\\begin{mydef}Given a task type $\\tau$, a team $K$ and an assignment $\\eta$, the competence degree of the team to perform the task is defined as:\n\\begin{equation}\n\\label{eq:uprof}\nu_{\\mathit{prof}}(\\eta) = 1-(\\upsilon \\cdot u(\\eta)+(1-\\upsilon) \\cdot o(\\eta))\n\\end{equation}\nwhere $\\upsilon \\in [0,1]$ is the penalty given to the undercompetence of team $K$. \n\\end{mydef}\n\\vspace{-1.5mm}\nNotice that the larger the value of $\\upsilon$ the higher the importance of the competence degree of team $K$, while the lower the value $\\upsilon$, the less important its undercompetence. The intuition here is that we might want to penalize more the undercompetency of teams, as some tasks strictly require teams to be at least as competent as defined in the task type.\n\\vspace{-1.5mm}\n\\begin{proposition}\nFor any $\\eta$, $u(\\eta) + o(\\eta) \\in [0,1]$.\n\\label{prop1}\n\\end{proposition}\n\n\\begin{proof}\nGiven that (1) $l^{a}(c_{i}) \\in [0,1]$ and $l_{i} \\in [0,1]$; \n(2) If $\\min(l^{a}(c_{i}) - l_{i},0)<0$ then $\\max(l^{a}(c_{i}) -l_{i},0) = 0$; and\n(3) If $\\max(l^{a}(c_{i})-l_{i},0) > 0$ then $\\min(l^{a}(c_{i}) - l_{i},0)=0$. 
Thus, from (1--3) \nwe have\n$|\\min(l^{a}(c_{i}) - l_{i},0)|$ + $\\max(l^{a}(c_{i})-l_{i},0) \\in [0,1]$.\nLet $n=|\\{a \\in \\delta(c_{i})|l^{a}(c_{i})-l_{i} > 0\\}|$, then obviously it holds that\n$\\frac{n \\cdot (|\\min(l^{a}(c_{i}) - l_{i},0)| + \\max(l^{a}(c_{i})-l_{i},0))}{n} \\in [0,1]$ and as $|\\delta(c_i)| \\leq n$ then\n$\\frac{\\sum_{a \\in \\delta(c_{i})}(|\\min(l^{a}(c_{i}) - l_{i},0)| + \\max(l^{a}(c_{i})-l_{i},0))}{n} \\in [0,1]$ holds; and \nsince $\\sum_{i \\in I_{\\tau}} w_i = 1$ then \\\\\n$\\sum_{i \\in I_{\\tau}} w_i \\cdot \\frac{\\sum_{a \\in \\delta(c_{i})}(|\\min(l^{a}(c_{i}) - l_{i},0)| + \\max(l^{a}(c_{i})-l_{i},0))}{n} \\in [0,1]$;\nFinally, distributing, this equation is equivalent to: \\\\\n$\\sum_{i \\in I_{\\tau}} w_i \\frac{\\sum_{a \\in \\delta(c_{i})}(|\\min(l^{a}(c_{i}) - l_{i},0)|}{n} \\\\\n+ \\sum_{i \\in I_{\\tau}} w_i \\frac{\\sum_{a \\in \\delta(c_{i})}(\\max(l^{a}(c_{i})-l_{i},0))}{n} \\in [0,1]$ which in turn is equivalent to $ u(\\eta) + o(\\eta) \\in [0,1]$.\n\\end{proof}\n\\vspace{-1.5mm}\nFunction $u_{\\mathit{prof}}$ is used to measure how proficient a team is for a given task assignment. However, counting on the required competences to perform a task does not guarantee that the team will succeed at performing it. Therefore, in the next subsection we present an evaluation function to measure \\emph{congeniality} within teams. Unlike our measure for proficiency, which is based on considering a particular task assignment, our congeniality measure will solely rely on the personalities and genders of the members of a team. \n\\subsection{Evaluating team congeniality} \\label{ssec:con} \n\\label{ssec:congeniality}\n\nInspired by the experiments of Douglass J. 
Wilde \\cite{Wilde2009} we will define the team utility function for congeniality $u_{con}(K)$, such that:\n\\begin{itemize}\n\\vspace{-1.5mm}\n\\item it values more teams whose SN and TF personality dimensions are as diverse as possible;\n\\vspace{-1.5mm}\n\\item it prefers teams with at least one agent with positive EI and TF dimensions and negative PJ dimension, namely an extrovert, thinking and judging agent (called ETJ personality),\n\\vspace{-1.5mm}\n\\item it values more teams with at least one introvert agent;\n\\vspace{-2.5mm}\n\\item it values gender balance in a team.\n\\end{itemize}\nTherefore, the higher the value of function $u_{con}(K)$, the more diverse the team is. \nFormally, this team utility function is defined as follows:\n\\vspace{-1mm}\n\\begin{equation}\n\\label{eq:ucon}\n\\begin{aligned}\nu_{con}(K) = & \\sigma_{SN}(K) \\cdot \\sigma_{TF}(K) + \\max_{a_i \\in K}{((0,\\alpha, \\alpha, \\alpha) \\cdot {\\bf p_i}, 0)} \\\\ \n & + {\\max_{a_i \\in K}{((0,0,-\\beta,0) \\cdot {\\bf p_i}, 0)}} + \\gamma \\cdot \\sin{(\\pi \\cdot g(K))}\n\\end{aligned}\n\\vspace{-2.5mm}\n\\end{equation}\nwhere the different parameters are explained next. \n\\begin{itemize}\n\\vspace{-1.5mm}\n\\item $\\sigma_{SN}(K)$ and $\\sigma_{TF}(K)$: These variances are computed over the SN and TF personality dimensions of the members of team $K$. Since we want to maximise $u_{con}$, we want these variances to be as large as possible. The larger the values of $\\sigma_{SN}$ and $\\sigma_{TF}$ the larger their product will be, and hence the larger team diversity too. \n\\vspace{-4mm}\n\\item $\\alpha$: The maximum variance of any distribution over an interval $[a,b]$ corresponds to a distribution with the elements evenly situated at the extremes of the interval. The variance will always be $\\sigma^2 \\le ((b-a)/2)^2$. In our case with $b=1$ and $a=-1$ we have $\\sigma \\le 1$. 
Then, to make the four factors equally important, and given that the maximum value for ${\bf p_i}$ (the personality profile vector of agent $a_i$) would be $(1, 1, 1, 1)$, a maximum value for $\alpha$ would be $3 \alpha = ((1-(-1))/2)^2 = 1$, as we have the factor $\sigma_{SN} \cdot \sigma_{TF}$, so $\alpha \le 0.33(3)$. For values situated in the middle of the interval the variance will be $\sigma^2 \le \frac{(b-a)^2}{12}$, hence a reasonable value for $\alpha$ would be $\alpha = \frac{\sqrt{(1-(-1))^2/12}}{3} \approx 0.19$.\n\vspace{-1.5mm}\n\item $\beta$: A similar reasoning shows that $\beta \le 1$.\n\vspace{-1.5mm}\n\item $\gamma$ is a parameter to weigh the importance of gender balance, and $g(K) = \frac{w(K)}{w(K) + m(K)}$, where $w(K)$ and $m(K)$ stand for the number of women and men in team $K$, respectively. Notice that for a perfectly gender-balanced team, with $w(K) = m(K)$, we have that\n$\sin{(\pi \cdot g(K))} = 1$. The higher the value of $\gamma$, the more important it is that the team is gender balanced. Similarly to the reasoning about $\alpha$ and $\beta$, we assess $\gamma \leq 1$. In order to make this factor less important than the others in the equation, we experimentally assessed that $\gamma = 0.1$ is a good compromise.\n\end{itemize}\n\vspace{-1.5mm}\nIn summary, we will use a utility function $u_{con}$ such that: $\alpha = \frac{\sigma_{SN}(K) \cdot \sigma_{TF}(K)}{3}$, $\beta = 3 \cdot \alpha $ and $\gamma = 0.1$. \n\n\subsection{Evaluating synergistic teams}\n\nDepending on the task type, different degrees of importance should be given to congeniality and proficiency. For instance, creative tasks require a high level of communication and exchange of ideas, and hence teams require a certain level of congeniality. Repetitive tasks, in contrast, require good proficiency and less communication. The importance of proficiency ($\lambda$) and congeniality ($\mu$) is therefore a fundamental aspect of the task type.
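As a concrete illustration, the congeniality utility $u_{con}$ above can be sketched in a few lines. This is a minimal sketch under our own assumptions (none of these names come from an actual implementation): agents are modelled as `(gender, (sn, tf, ei, pj))` tuples, the $\sigma_{SN}$ and $\sigma_{TF}$ terms are taken as population variances, and the default parameters follow the summary above ($\alpha = 0.19$, $\beta = 3\alpha$, $\gamma = 0.1$).

```python
import math
from statistics import pvariance

# Hedged sketch of u_con (the congeniality utility): agents are hypothetical
# (gender, (sn, tf, ei, pj)) tuples; parameter defaults follow the text.
def u_con(team, alpha=0.19, beta=0.57, gamma=0.1):
    sn = [p[0] for _, p in team]
    tf = [p[1] for _, p in team]
    # diversity term: product of variances over the SN and TF dimensions
    diversity = pvariance(sn) * pvariance(tf)
    # reward having at least one "ETJ-like" agent: max of alpha*(tf+ei+pj), floored at 0
    etj = max(max(alpha * (p[1] + p[2] + p[3]) for _, p in team), 0.0)
    # reward having at least one introvert: max of -beta*ei, floored at 0
    intro = max(max(-beta * p[2] for _, p in team), 0.0)
    # gender-balance term: sin(pi * w/(w+m)) peaks at 1 for a 50/50 split
    women = sum(1 for g, _ in team if g == 'woman')
    balance = gamma * math.sin(math.pi * women / len(team))
    return diversity + etj + intro + balance
```

For a two-agent team with opposite extreme personalities and one woman, each variance term equals 1 and the gender term equals $\gamma$, so all four summands contribute.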
Now, given a team, we can combine its competence value (in equation \\ref{eq:uprof}) with its congeniality value (in equation \\ref{eq:ucon}) to measure its \\emph{synergistic value}. \n\\vspace{-1.5mm}\n\\begin{mydef}\nGiven a team $K$, a task type $\\tau = \\linebreak \\langle \\lambda, \\mu, {\\{(c_{i},l_{i}, w_{i})\\}_{i \\in I_{\\tau}}} \\rangle$ and a task assignment $\\eta: K \\rightarrow 2^{C_{\\tau}}$, the synergistic value of team $K$ is defined as:\n\\vspace{-1.5mm}\n\\begin{equation}\ns(K,\\eta) = \\lambda \\cdot u_{\\mathit{prof}}(\\eta) + \\mu \\cdot u_{con}(K)\n\\end{equation}\nwhere $\\lambda \\in [0,1]$ is the grade to which the proficiency of team $K$ is important, and $\\mu \\in [-1,1]$ is the grade to which the task requires diverse personalities.\n\\end{mydef}\n\n\\begin{figure}\n\\caption{Values of congeniality and proficiency with respect to the task type.}\n\\begin{tikzpicture}\n\\begin{axis}[\n axis line style={->},\n x label style={at={(axis description cs:0.5,-0.1)},anchor=north},\n y label style={at={(axis description cs:-0.1,.5)},anchor=south},\n xlabel=Proficiency ($\\lambda$),\n ylabel=Congeniality ($\\mu$),\n xmin=0,\n xmax=1,\n ymin=-1,\n ymax=1,\n \n unit vector ratio=6 1,\n]\n \\node[black] at (axis cs:0.25,0.5) {\n \\begin{tabular}{c}\n Creative \\\\ General tasks\n \\end{tabular}};\n \\node[black] at (axis cs:0.25,-0.5) {\\begin{tabular}{c}\n Structured \\\\ General tasks\n \\end{tabular}};\n \\node[black] at (axis cs:0.75,0.5) {\\begin{tabular}{c}\n Creative \\\\ Specialized tasks\n \\end{tabular}};\n \\node[black] at (axis cs:0.75,-0.5) {\\begin{tabular}{c}\n Structured \\\\ Specialized tasks\n \\end{tabular}};\n \n \\draw [black, thick] (axis cs:0,-1) rectangle (axis cs:0.5,1);\n\t\\draw (0,0) -- (1,0);\n \n\\end{axis}\n\\end{tikzpicture}\n\\label{tbl:parameters}\n\\vspace{-6mm}\n\\end{figure}\n\nFigure \\ref{tbl:parameters} shows the relation between the parameters $\\lambda$ and $\\mu$.\nIn general, the higher the 
$\lambda$, the higher the importance given to the proficiency of a team. The higher the $\mu$, the more important personality diversity is. Notice that $\mu$ can be lower than zero. With a negative $\mu$, maximising $s(K,\eta)$ drives the congeniality value to be as low as possible, and so team homogeneity is preferred. This situation may arise when performing tasks in unconventional performance environments that have serious consequences associated with failure. In order to quickly resolve issues, a team needs to be proficient and to have team-mates who understand one another with minimum communication cost (which is associated with team homogeneity). \n\n\subsection{The synergistic team composition problem}\n\label{ssec:synergisticProblem}\n\nIn what follows we consider that there are multiple instances of the same task to perform. Given a set of agents $A$, our goal is to split them into teams so that each team, and the whole partition of agents into teams, is balanced in terms of competences, personality and gender. \nWe shall refer to these balanced teams as \emph{synergistic teams}, meaning that they are both congenial and proficient. \n\nTherefore, we can regard our team composition problem as a particular type of set partition problem. We will refer to any partition of $A$ as a team partition. However, we are interested in a particular type of team partition, namely those whose teams are constrained by size $m$ as follows.\n\n\begin{mydef}\nGiven a set of agents $A$, we say that a team partition $P_m$ of $A$ is constrained by size $m$ iff: (i) for every team $K_i \in P_m$, $K_i \in \cal{K_A}$ and $\max(m-1, 2) \leq |K_i| \leq m+1$ hold; and (ii) for every pair of teams $K_i, K_j \in P_m$, $||K_i| - |K_j|| \le 1$. \n\end{mydef}\n\nAs $|A| / m$ is not necessarily a natural number, we may need to allow for some flexibility in team size within a partition.
This is why we introduced above the condition $\\max(m-1, 2) \\leq |K| \\leq m+1$. In practical terms, in a partition we may have teams differing by one agent. We note by ${\\cal P}_m(A)$ the set of all team partitions of $A$ constrained by size $m$. Henceforth, we will focus on team partitions constrained by some size. Since our goal is to find the most competence-balanced and psychologically-balanced team partition, we need a way to measure the synergistic value of a team partition, which we define as follows: \n\n\\begin{mydef}\nGiven a task $t = \\langle \\tau, m \\rangle$, a team partition $P_m$ and an assignment $\\eta_i$ for each team $K_i \\in P_m$, the synergistic value of $P_m$ is computed by:\n\\vspace{-1.5mm}\n\\begin{equation}\nu(P_m,\\bm{\\eta}) = \\prod_{i =1}^{|P_m|} s(K_i,\\eta_i)\n\\end{equation}\n\\vspace{-1.5mm}\nwhere $\\bm{\\eta}$ stands for the vector of task assignments $\\eta_1,\\ldots, \\linebreak \\eta_{|P_m|}$.\n\\end{mydef}\n\nNotice that the use of a Bernoulli-Nash function over the synergistic values of teams will favour team partitions whose synergistic values are balanced.\n\n\n\n\n\n\n \n\n\n\n\nNow we are ready to cast the synergistic team composition problem as the following optimisation problem:\n\n\\begin{mydef}\nGiven task $t = \\langle \\tau, m \\rangle$ and set of agents $A$ the \\textbf{synergistic team formation problem (STFP)} is the problem of finding a team partition constrained by size $m$, together with competence assignment for its teams, whose synergistic value is maximal. 
Formally, the STFP is the problem of finding a partition $P_m \in \mathcal{P}_m(A)$ and the task assignments $\bm{\eta}$ for the teams in $P_m$ that maximise $u(P_m,\bm{\eta})$.\n\end{mydef}\n\n\vspace{-2mm}\n\section{Solving STFP}\label{sec:TeamForm}\nIn this section we detail an algorithm, the so-called \emph{SynTeam}, which solves the synergistic team formation problem described above. We start by describing how to split agents into a partition (see subsection \ref{ssec:dist}). Next, we move on to the problem of assigning competences in a task to team members (see subsection \ref{ssec:asg}) so that the synergistic utility is maximal. Finally, we explain \emph{SynTeam}, a greedy algorithm that quickly finds a first, local solution and subsequently improves it, hoping to reach a global optimum.\n\n\subsection{How do we split agents?} \label{ssec:dist}\n\nWe note by $n = |A|$ the number of agents in $A$, by $m \in \mathbb{N}$ the target number of agents in each team, and by $b$ the minimum total number of teams, $b = \left\lfloor n/m\right\rfloor$.
We define the quantity distribution of agents in teams of a partition, noted $T: \\mathbb{N} \\times \\mathbb{N} \\to \\mathbb{N} \\times \\mathbb{N} \\cup (\\mathbb{N} \\times \\mathbb{N})^2 $ as:\n\\vspace{-2mm}\n\\begin{equation}\n\\begin{multlined}\nT(n,m) = \\\\\n\\begin{cases}\n\\{(b, m)\\} & \\text{if } n \\geq m \\textit{ and } n \\bmod m = 0\n\\\\\n \\{(n \\bmod m,m + 1), \\\\(b - (n \\bmod m),m)\\}\n & \\text{if } n \\geq m \\textit{ and } n \\bmod m \\le b\n\\\\\n\\{(b, m),(1, n \\bmod m)\\} & \\text{if } n \\geq m \\textit{ and } n \\bmod m > b\n\\\\\n\\{(0,m)\\} & \\text{otherwise}\n\\end{cases}\n\\end{multlined}\n\\end{equation}\n\nNote that depending on the cardinality of $A$ and the desired team size, the number of agents in each team may vary by one individual (for instance if there are $n=7$ agents in $A$ and we want to compose duets ($m=2$), we split agents into two duets and one triplet).\n\n\\subsection{Solving an Assignment} \\label{ssec:asg}\n\n\nThere are different methods to build an assignment. We have decided to solve our assignment problem by using the minimum cost flow model \\cite{ahuja1993network}. This is one of the most fundamental problems within network flow theory and it can be efficiently solved. For instance, in \\cite{orlin1993faster}, it was proven that the minimum cost flow problem can be solved in $O(m \\cdot log(n) \\cdot (m + n \\cdot log(n)))$ time with $n$ nodes and $m$ arcs.\n\nOur problem is as follows: \nThere are a number of agents in team $K$ and a number of competence requests in task $t$. Any agent can be assigned to any competence, incurring some cost that varies depending on the agent competence level of the assigned competence. 
We want each competence to be assigned to at least one agent and each agent to be assigned to at least one competence, in such a way that the total cost (covering both undercompetence and overcompetence) of the assignment is minimal over all such assignments.\n\nFormally, let $G = (N, E)$ be a directed network defined by a set $N$ of $n$ nodes and a set $E$ of $e$ directed arcs. There are four types of nodes: (1) one source node; (2) $|K|$ nodes that represent the agents in team $K$; (3) $|C_{\tau}|$ nodes that represent the competence requests of task type $\tau$; and (4) one sink node. Each arc $(i, j) \in E$ has an associated cost $p_{ij} \in \mathbb{R}^+$ that denotes the cost per unit flow on that arc. We also associate with each arc $(i, j) \in E$ a capacity $u_{ij} \in \mathbb{R}^+$ that denotes the maximum amount that can flow on the arc. In particular, we have three kinds of arcs: (1) Supply arcs. These connect the source to the agent nodes. Each of these arcs has zero cost and a positive capacity $u_{ij}$, which defines how many competences at most can be assigned to each agent. (2) Transportation arcs. These are used to ship supplies. Every transportation arc $(i, j) \in E$ is associated with a shipment cost $p_{ij}$ that is equal to:\n\begin{equation*}\np_{ij} =\n\begin{cases}\n(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot (1-\upsilon) \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} > 0\\\n-(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot \upsilon \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} \leq 0\n\end{cases}\n\label{costeq}\n\end{equation*}\n\noindent\nwhere $\upsilon \in [0,1]$ is the penalty given to the undercompetence of team $K$ (see subsection \ref{ssec:prof} for the definition). \n(3) Demand arcs. These arcs connect the competence request nodes to the sink node. They have zero cost and positive capacities $u_{ij}$, which equal the demand for each competence.
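The transportation-arc cost above can be sketched as follows. This is a minimal sketch: `arc_cost` and `integer_cost` are hypothetical helper names of ours, and the integer scaling mirrors the scaling by 1000 that the paper applies before handing costs to the integer solver.

```python
# Hedged sketch of the transportation-arc cost p_ij: `level` stands for the
# agent's competence level l^a(c_j), `required` for l_j, `weight` for w_j,
# and `upsilon` for the undercompetence penalty.
def arc_cost(level, required, weight, upsilon):
    diff = level - required
    if diff > 0:    # overcompetent: penalised with weight (1 - upsilon)
        return diff * (1 - upsilon) * weight
    else:           # undercompetent (or exact match, cost 0): weight upsilon
        return -diff * upsilon * weight

# Integer version for a min-cost-flow solver that requires integer costs.
def integer_cost(level, required, weight, upsilon, scale=1000):
    return round(arc_cost(level, required, weight, upsilon) * scale)
```

With $l^{a}(c) = 0.9$, $l_j = 0.8$, $w_j = 0.25$ and $\upsilon = 0.6$, the cost is $0.1 \cdot 0.4 \cdot 0.25 = 0.01$, i.e. an integer cost of 10 after scaling.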
\n\nThus, a network is denoted by $(G, p, u, b)$. We associate with each node $i \\in N$ an integer number $b(i)$ representing its supply. If $b(i) > 0$ then $i$ is a source node; if $b(i) < 0$ then $i$ is a sink node. In order to solve a task assignment problem, we use the implementation of \\cite{goldberg1990finding} provided in or-tools.\\footnote{\\url{https://github.com/google/or-tools/blob/master/src/graph/min_cost_flow.h}} \n\\vspace{-2mm}\n\\begin{figure}\n\\includegraphics[max size={\\textwidth}{10.35cm}]{attach/asg.png}\n\\caption{An example of an assignment graph $G(N,E)$}\\label{asg}\n\\vspace{-6mm}\n\\end{figure}\n\n\\paragraph{Example} Let us consider a team of three agents $K = \\{a_1, a_2, a_3\\}$:\n\\begin{itemize}\n\\vspace{-1.5mm}\n\\item $a_1 = \\langle id_1, `woman', p_1, [l(c_1) = 0.9, l(c_2) = 0.5]\\rangle$\n\\vspace{-1.5mm}\n\\item $a_2 = \\langle id_2, `man', p_2, [l(c_2) = 0.2, l(c_3) = 0.8]\\rangle$\n\\vspace{-1.5mm}\n\\item $a_3 = \\langle id_3, `man', p_3, [l(c_2) = 0.4, l(c_4) = 0.6]\\rangle$\n\\end{itemize}\nand task type $\\tau$ containing four competence requests \\\\ $\\{(c_{1},0.8, 0.25), (c_{2}, 0.6, 0.25), (c_{3},0.6, 0.25),(c_{4},0.6, 0.25)\\}$. \\\\ The penalty given to undercompetence is equal to $\\upsilon=0.6$.\n\nOur goal is to assign agents to competence requests so that: (1) every agent is responsible for at least one competence, (2) every competence is covered by at least one agent, and (3) the overall ``cost'' is minimal.\nAs shown in Figure \\ref{asg}, we build a graph out of $n = 9$ nodes, that is: one source node ($N_0$), three agent nodes ($N_1 - N_3$), four competence nodes ($N_4 - N_7$) and a sink node ($N_8$).
Next, we add edges: (1) between source node $N_0$ and all agent nodes $N_1 - N_3$, with cost $p_{si} = 0$ and capacity $u_{si} = 2$ for all $i$, since the maximum number of competences assigned to one agent cannot be greater than two if we want to make sure that all agents are assigned to at least one competence; (2) between agent nodes $N_1 - N_3$ and competence nodes ($N_4 - N_7$), where each capacity $u_{ij} = 1$ and costs are calculated according to Equation \\ref{costeq}. For instance, the cost between $N_1$ and $N_4$ is equal to: $(0.9 - 0.8) \\cdot (1-0.6) \\cdot 0.25 = 0.01$. We multiply all costs by $1000$ to meet the requirements of the solver (edge costs need to be integers). Hence, the final cost is $p_{14}=10$; (3) edges between competence nodes $N_4 - N_7$ and sink node $N_8$, with costs $p_{jw} = 0$ and capacities $u_{jw} = 1$ to impose that each competence is assigned.\nOnce the graph is built, we pass it to the solver to get the assignment: $c_1$ and $c_2$ are assigned to $a_1$, $c_3$ to $a_2$ and $c_4$ to $a_3$.\n\n\\subsection{SynTeam algorithm} \\label{ssec:SynTeam} \n\nAlgorithm \\ref{alg:teamDistribution} shows the SynTeam pseudocode. It is divided into two parts:\n\n{\\bf 1. \\textsl{Find a first team partition}}. This part of the algorithm simply builds a partition by randomly assigning agents to teams of particular team sizes. It goes as follows. Given a list of agents $A$, we start by shuffling the list so that the order of agents in the list is random (line~1). Next, we determine the quantitative distribution of individuals among teams of size $m$ using function $T(|A|,m)$ as defined in section \\ref{ssec:dist} (line~2). We start from the top of the shuffled list of agents (line~3). For each number of teams (line~4), we define a temporary set $team$ to store the current team (line~5). We add to $team$ the subsequent $size$ agents from the shuffled list of agents (line~7).
We add the newly created team to the team partition $P_{\\mathit{best}}$ that we intend to build (line~10). When reaching line~14, $P_{\\mathit{best}}$ will contain a first disjoint subset of teams (a team partition). \n\n{\\bf 2. \\textsl{Improve the current best team partition}}. The second part of the algorithm consists of improving the current best team partition. The idea is to obtain a better team partition by performing crossovers of two randomly selected teams to yield two better teams. In this part, we took inspiration from simulated annealing methods, where the algorithm might accept swaps that actually decrease the solution quality with a certain probability. The probability of accepting worse solutions slowly decreases as the algorithm explores the solution space (as the number of iterations increases). The annealing schedule is defined by the $\\mathit{cooling\\_rate}$ parameter. We have modified this method to store the partition with the highest synergistic evaluation found so far.\nIn detail, the second part works as follows. First, we select two random teams, $K_1$ and $K_2$, in the current team partition (line~15). Then we compute all partitions of $K_1 \\cup K_2$ into teams of size $m$ (line~19), and we select the best candidate team partition, named $P_{\\mathit{bestCandidate}}$ (lines~19~to~26). If the best candidate synergistic utility is larger than the utility contribution of $K_1$ and $K_2$ to the current best partition $P_{\\mathit{best}}$ (line~27), then we replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~28). If the best candidate team partition utility is lower, then we check if the probability of accepting a worse solution is higher than a value uniformly sampled from $[0,1]$ (line~29).\nIf so,\nwe replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~30) and we lower $heat$ by the cooling rate.
This part of the algorithm continues until the value of $heat$ reaches $1$ (line~13). We also store the best partition found so far (line~34) to make sure we do not end up with a worse solution. Finally, we return the best partition found, $P_{\\mathit{bestEver}}$, as well as the assignment $\\eta$ for each team.\n\\begin{algorithm}[h]\n\\small\n\\caption{\\quad SynTeam}\n\\label{alg:teamDistribution}\n\\begin{algorithmic}[1]\n \\Require $A$ \\Comment{The list of agents}\n\t\\Require $T(|A|,m)$ \\Comment{Quantitative team distribution}\n \\Require $P_{\\mathit{best}} = \\emptyset$ \\Comment{Initialize best partition}\n \\Require $\\mathit{heat=10}$ \\Comment{Initial temperature for second step}\n \\Require $\\mathit{Cooling\\_rate}$ \\Comment{Temperature decrease per iteration}\n \\Ensure $(P, \\bm{\\eta})$ \\Comment{Best partition found and best assignments}\n \\State $\\mathit{random.shuffle(A)}$\n \\If {$T(|A|,m) \\ne \\{(0,m)\\}$}\n \\State $\\mathit{index} = 0$ \\Comment{Used to iterate over the agent list}\n \\ForAll{$(\\mathit{numberOfTeams}, \\mathit{size}) \\in T(|A|,m)$}\n \\State $team = \\emptyset$\n \\For {$i \\in (0,\\dots,\\mathit{size}-1)$}\n \\State $team = team \\cup A[\\mathit{index}]$\n \\State $\\mathit{index}=\\mathit{index} + 1$\n \\EndFor\n \\State $P_{\\mathit{best}} = P_{\\mathit{best}} \\cup \\{team\\}$\n \\EndFor\n \\State $\\bm{ \\eta_{\\mathit{best}}} = \\mathit{assign\\_agents}(P_{\\mathit{best}})$ \\Comment{see Subsection \\ref{ssec:asg}}\n \\State $(P_{\\mathit{bestEver}}, \\mathit{bestValueEver}) = (P_{\\mathit{best}},u(P_{\\mathit{best}},\\bm{ \\eta_{\\mathit{best}}}))$\n \\While{$\\mathit{heat} > 1$} \n \\State $(K_1,K_2) = \\mathit{selectRandomTeams}(P_{\\mathit{best}})$\n \\State $(\\eta_1,\\eta_2) = \\mathit{assign\\_agents}(\\{K_1,K_2\\})$\n \\State $\\mathit{contrValue} = u(\\{K_1,K_2\\},(\\eta_1,\\eta_2))$\n \\State $(P_{\\mathit{bestCandidate}}, \\mathit{bestCandidateValue}) = (\\emptyset,0)$\n \\ForAll {$P_{\\mathit{candidate}} \\in P_m(K_1 \\cup K_2)
 \\setminus \\{K_1,K_2\\}$}\n \\State $(\\eta_1,\\eta_2) = \\mathit{assign\\_agents}(P_{\\mathit{candidate}})$ \n \\State $\\mathit{candidateValue} = u(P_{\\mathit{candidate}},(\\eta_1,\\eta_2))$\n \\If{$\\mathit{candidateValue} > \\mathit{bestCandidateValue}$}\n \\State $P_{\\mathit{bestCandidate}} = P_{\\mathit{candidate}}$\n \\State $\\mathit{bestCandidateValue} = \\mathit{candidateValue}$\n \\EndIf\n \\EndFor\n \\If{$\\mathit{bestCandidateValue} > \\mathit{contrValue}$}\n \\State $P_{\\mathit{best}} = replace(\\{K_1,K_2\\},P_{\\mathit{bestCandidate}}, P_{\\mathit{best}})$\n \\ElsIf{$\\mathbb{P}(\\mathit{bestCandidateValue}, \\mathit{contrValue}, heat)$ \\StatexIndent[2] $\\geq \\mathit{random}(0, 1)$}\n \\State $P_{\\mathit{best}} = replace(\\{K_1,K_2\\},P_{\\mathit{bestCandidate}},P_{\\mathit{best}})$\n \\EndIf\n \\State $\\bm{ \\eta_{\\mathit{best}}} = \\mathit{assign\\_agents}(P_{\\mathit{best}})$\n \\If {$\\mathit{bestValueEver} < u(P_{\\mathit{best}},\\bm{ \\eta_{\\mathit{best}}})$}\n \\State $(P_{\\mathit{bestEver}}, \\mathit{bestValueEver}) = (P_{\\mathit{best}}, u(P_{\\mathit{best}},\\bm{ \\eta_{\\mathit{best}}}))$\n \\EndIf\n \t\\State $heat$ = $heat-\\mathit{Cooling\\_rate}$\n \\EndWhile\n \\State $\\mathbf{return}\\ (P_{\\mathit{bestEver}},\\mathit{assign\\_agents}(P_{\\mathit{bestEver}}))$\n \\EndIf\n\\end{algorithmic}\n\\end{algorithm}\n\\vspace{-4mm}\n\\section{Experimental Results} \\label{sec:results}\n\n\\subsection{Experimental Setting}\n``Institut Torras i Bages'' is a state school near Barcelona. Collaborative work has been implemented there for the last 5 years in the final assignment (``Treball de S\\'{\\i}ntesi''), with a steady and significant increase in the scores and quality of the final product that students are asked to deliver. This assignment takes one week and is designed to check whether, and to what extent, students have achieved the objectives set in the various curricular areas. It encourages teamwork and research, and tests students' relationship with their environment.
Students work in teams and at the end of every activity present their work in front of a panel of teachers that assess the content, the presentation and the cooperation between team members. This is a creative task, although it requires a high level of competences. \n\\subsection{Data Collection} \nIn current school practice, teachers group students according to their own manual method, based on their knowledge of the students, their competences, background and social situation. This year we have used our grouping system based only on personality (SynTeam\\ with $\\lambda = 0, \\mu = 1$) with two groups of students: `3r ESO A' (24 students) and `3r ESO C' (24 students). Using computers and/or mobile phones, students answered the questionnaire (described in section \\ref{pers}), which allowed us to divide them into teams of size three for each class. Tutors evaluated each team in each partition, giving an integer value $v \\in \\{1,\\dots,10\\}$ representing their expectation of the performance of that team.\nEach student team was asked to undertake the set of interdisciplinary activities (``Treball de S\\'{\\i}ntesi'') described above. We collected each student's final mark for ``Treball de S\\'{\\i}ntesi'' as well as the final marks obtained for all subjects, that is: Catalan, Spanish, English, Nature, Physics and Chemistry, Social Science, Math, Physical Education, Plastic Arts, and Technology. We used a matrix provided by the tutors to relate each subject to the different kinds of intelligence (which in education are understood as competences) needed for that subject. There are eight types of human intelligence \\cite{gardner1987theory}, each representing different ways of processing information: Naturalist, Interpersonal, Logical/Mathematical, Visual/Spatial, Body/Kinaesthetic, Musical, Intrapersonal and Verbal/Linguistic.
This matrix, with one row per subject and one column per intelligence, is shown in Figure \\ref{matrix}.\n\n\\begin{figure}[h]\n\\centering\n$\\begin{bmatrix}\n 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1\n\\\\0 & 1 & 0 & 1 & 0 & 1 & 1 & 1\n\\\\0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \n\\\\1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \n\\\\1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \n\\\\1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \n\\\\0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \n\\\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \n\\\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \n\\\\1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 \n\\end{bmatrix}$\n\\caption{Matrix matching intelligences with subjects (each row corresponds to a subject, each column to an intelligence)}\n\\label{matrix}\n\\end{figure}\n\n\\noindent Subjects are represented by rows and intelligences by columns of the matrix, in the order given above. Based on this matrix, we calculate the value of each intelligence for every student by averaging all the marks obtained by the student in the subjects relevant for that intelligence. For instance, for Body/Kinaesthetic intelligence, we calculate the average of the student's marks in Nature, Physical Education, Plastic Arts and Technology. An alternative way to measure students' competence levels is to calculate the collective assessments of each competence (as proposed in \\cite{andrejczukCompetences}).\n\nFinally, having the competences (intelligences), personality and actual performance of all students, we are able to calculate synergistic values for each team. We also calculate the average of the marks obtained by the students in a team to get the team's performance value.\n\n\\subsection{Results}\n\\noindent \nGiven several team composition methods, we are interested in comparing them to know which method better predicts team performance. Hence, we generate several team rankings using the evaluation values obtained through the different methods. First, we generate a ranking based on actual team performance that will be our base to compare other rankings against. Second, we generate a ranking based on the expert evaluations.
Finally, we generate several rankings based on the calculated synergistic values with varying importance of congeniality and proficiency. Since ``Treball de S\\'{\\i}ntesi'' is a creative task, we want to examine the evaluation function with parameters $\\mu > 0$ and $\\lambda = 1-\\mu$. In particular, we want to observe how the rankings change when increasing the importance of competences. \nNotice that teacher and actual performance rankings may include ties since the pool of possible marks is discrete (which is highly improbable in the case of SynTeam\\ rankings). Therefore, before generating rankings based on synergistic values, we round them to two digits to discretize the evaluation space. An ordering with ties is also known as a \\emph{partial ranking}. \n\nNext, we compare the teacher and SynTeam\\ rankings with the actual performance ranking using the standardized Kendall Tau distance. For implementation details, refer to the work by Fagin et al.~\\cite{Fagin:2004:CAR,fagin2006comparing}, which also provides sound mathematical principles to compare partial rankings. The results of the comparison are shown in Figure \\ref{kendall}. Notice that the lower the value of Kendall Tau, the more similar the rankings. We observe that the SynTeam\\ ranking improves as the importance of competences increases, and it is best at predicting students' performance for $\\lambda = 0.8$ and $\\mu = 0.2$ (Kendall Tau equal to $0.15$). The standardized Kendall Tau distance for the teacher ranking is equal to $0.28$, which shows that SynTeam\\ predicts the performance better than the teachers when competences are included ($\\lambda > 0.2$). We also calculate the values of Kendall Tau for random ($0.42$) and reversed ($0.9$) rankings to benchmark the teacher and SynTeam\\ grouping methods. The results show that both teachers and SynTeam\\ are better at predicting students' performance than the random method.
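A normalized Kendall Tau distance between partial rankings can be sketched as follows. This is a minimal sketch of the Fagin et al.\ family of distances, with the tie penalty fixed at $p = 1/2$ (our assumption); each ranking is given as a list of scores, ties allowed.

```python
from itertools import combinations

def kendall_tau_distance(scores_a, scores_b, p=0.5):
    """Normalized Kendall Tau distance between two partial rankings.

    Each ranking is a list of scores for the same items; ties are allowed.
    A pair ordered oppositely in the two rankings costs 1; a pair tied in
    exactly one of the two rankings costs p (the tie penalty).
    """
    n = len(scores_a)
    penalty = 0.0
    for i, j in combinations(range(n), 2):
        da = scores_a[i] - scores_a[j]
        db = scores_b[i] - scores_b[j]
        if da * db < 0:                  # discordant pair
            penalty += 1.0
        elif (da == 0) != (db == 0):     # tied in exactly one ranking
            penalty += p
    return penalty / (n * (n - 1) / 2)   # normalize to [0, 1]

# identical rankings -> 0; fully reversed rankings -> 1
assert kendall_tau_distance([4, 3, 2, 1], [4, 3, 2, 1]) == 0.0
assert kendall_tau_distance([4, 3, 2, 1], [1, 2, 3, 4]) == 1.0
```

Lower values mean more similar rankings, matching the reading of the distances reported here.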
\n\n\\begin{figure}\n\\includegraphics[max size={\\textwidth}{10.35cm}]{attach/KendallTauComparison.png}\n\\caption{Comparison of Kendall Tau distances between the different methods.}\\vspace{-2mm}\n\\label{kendall}\n\\vspace{-2mm}\n\\end{figure}\n\n\\section{Discussion} \\label{sec:discuss}\nIn this paper we introduced SynTeam, an algorithm for partitioning groups of humans into competent, gender and psychologically balanced teams. \n\nTo our knowledge, SynTeam\\ is the first computational model to build synergistic teams that not only work well together, but are also competent enough to perform an assignment requiring particular expertise. \n\nWe decided to evaluate our algorithm in the context of a classroom. Besides the obvious advantages of observing students work in person, this scenario gave us the opportunity to compare our results with real-life, currently used practice. The results show that SynTeam\\ is able to predict team performance better than the experts who know the students, their social background, competences, and cognitive capabilities. \n\nThe algorithm is potentially useful for any organisation that faces the need to optimise its problem solving teams (e.g.\\ a classroom, a company, a research unit). The algorithm composes teams in a purely automatic way without consulting experts, which is a huge advantage for environments where experts are lacking.\n\n\nRegarding future work, we would like to investigate how to determine quality guarantees for the algorithm. \n\nAdditionally, there is a need to consider richer and more sophisticated models to capture the various factors that influence the team composition process in the real world. We will consider how our problem relates to the constrained coalition formation framework \\cite{Rahwan}. This may help add constraints and preferences coming from experts that cannot be established by any algorithm, e.g.
Anna cannot be in the same team as Jos\\'e because they used to have a romantic relationship.\n\n\\newpage\n\\bibliographystyle{plain}\n\n\\section{Introduction}\nFor more than three decades, understanding the mechanism of superconductivity observed at high critical temperature (HTC) in\nstrongly correlated cuprates~\\cite{LaCuO2_Bednorz_86} has been the ``holy grail'' \nof many theoretical and experimental condensed matter researchers.\nIn this context, the observation of superconductivity in \n nickelates $Ln$NiO$_2$, $Ln$=\\{La, Nd and Pr\\}~\\cite{li_superconductivity_2019,osada_superconducting_2020,osada_nickelate_2021}, upon doping with holes is remarkable. \n These superconducting nickelates are isostructural as well as isoelectronic to \nHTC cuprate superconductors and thus enable the comparison of\nthe essential physical features that may be playing a crucial role in the mechanism driving superconductivity.\n\nThe $Ln$NiO$_2$ family of compounds is synthesized in the so-called infinite-layer structure, where NiO$_2$ and $Ln$ layers are stacked alternately~\\cite{li_superconductivity_2019}. \nThe NiO$_2$ planes are identical to the CuO$_2$ planes in HTC cuprates, which host much of the physics leading to superconductivity~\\cite{keimer_quantum_2015}. \nA simple valence counting of these nickelates reveals a {1+} oxidation state for Ni ({2-} for O and {3+} for $Ln$) with 9 electrons in the $3d$ manifold. \nIn the cuprates, the Cu$^{2+}$ oxidation state gives rise to the same $3d^9$ electronic configuration.\nContrary to many nickel oxides where the Ni atom sits in an octahedral cage of oxygens, in the infinite-layer structure square planar NiO$_4$ plaques are formed, without the apical oxygens.
\nThe crystal field due to the square-planar oxygen coordination stabilizes the $d_{z^2}$ orbital of the $e_g$ manifold, making its energy close to the $t_{2g}$ orbitals (the $3d$ orbitals split into threefold $t_{2g}$ and twofold $e_g$ sub-shells in an octahedral environment). With $d^9$ occupation, a half-filled $d_{x^2-y^2}$-orbital system is realized, as in cuprates.\nIn fact, recent resonant inelastic X-ray scattering (RIXS) experiments~\\cite{rossi2020orbital} as well as {\\it ab initio} correlated multiplet calculations~\\cite{katukuri_electronic_2020} confirm that the Ni$^{1+}$ $d$-$d$ excitations in NdNiO$_2$\\ are similar to those of the Cu$^{2+}$ ions in cuprates~\\cite{moretti_sala_energy_2011}.\n\n Several electronic structure calculations based on density-functional theory (DFT) have shown that in monovalent nickelates the Ni 3$d_{x^2-y^2}$ states sit at the Fermi level~\\cite{lee_infinite-layer_2004,liu_electronic_njpqm_2020,zhang_effective_prl_2020}.\n These calculations further show that the nickelates are closer to the Mott-Hubbard insulating limit, with a decreased Ni $3d$--O $2p$ hybridization compared to cuprates. \n The latter are considered to be charge transfer insulators~\\cite{zsa_mott_charge_transfer_1985}, where excitations across the electronic band gap involve O $2p$ to Cu $3d$ electron transfer.\nCorrelated wavefunction-based calculations~\\cite{katukuri_electronic_2020} indeed find that the contribution of the O $2p$ hole configuration to the ground state wavefunction in NdNiO$_2$\\ is four times smaller than in the cuprate analogue CaCuO$_2$.\n X-ray absorption and photoemission spectroscopy experiments~\\cite{hepting2020a,goodge-a} confirm the Mott behavior of the nickelates. \n \nIn the cuprate charge-transfer insulators, the strong hybridization of the Cu 3$d_{x^2-y^2}$\\ and O $2p$ orbitals results in O $2p$ dominated bonding and Cu 3$d_{x^2-y^2}$\\ -like antibonding orbitals.
As a consequence, the doped holes primarily reside on the bonding O $2p$ orbitals, making them singly occupied. \nThe unpaired electrons on the Cu $d_{x^2-y^2}$\\ and the O $2p$ orbitals are coupled antiferromagnetically, resulting in the famous Zhang-Rice (ZR) spin singlet state~\\cite{zhang_effective_1988}. \nIn the monovalent nickelates, it is unclear where the doped holes reside. Do they form a ZR singlet as in cuprates? If, instead, the holes reside on the Ni site, do they form a high-spin local triplet, with two singly occupied Ni $3d$ orbitals aligned ferromagnetically, or a low-spin singlet, with either both holes residing in the Ni 3$d_{x^2-y^2}$\\ orbital or two singly occupied Ni $3d$ orbitals aligned antiparallel?\nWhile Ni L-edge XAS and RIXS measurements~\\cite{rossi2020orbital} conclude that an orbitally polarized singlet state is predominant, where the doped holes reside on the Ni 3$d_{x^2-y^2}$\\ orbital, O K-edge electron energy loss spectroscopy~\\cite{goodge-a} reveals that some of the holes also reside on the O $2p$ orbitals. \nOn the other hand, calculations based on multi-band $d$-$p$ Hubbard models show that the fate of the doped holes is determined by a subtle interplay of the Ni onsite ($U_{dd}$) and Ni $3d$--O $2p$ intersite ($U_{dp}$) Coulomb interactions and the Hund's coupling, along with the charge transfer gap~\\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021}. \nHowever, with the lack of extensive experimental data, it is difficult to identify the appropriate interaction parameters for a model Hamiltonian study, let alone identify the model that best describes the physics of superconducting nickelates.\n\n \nDespite the efforts to discern the similarities and differences between the monovalent nickelates and superconducting cuprates, there is no clear understanding of the nature of the doped holes in NdNiO$_2$.\nParticularly, there is no reliable parameter-free \\textit{ab initio} analysis of the hole-doped situation.
\nIn this work, we investigate the hole-doped ground state in NdNiO$_2$\\ and draw parallels to the hole-doped ground state of the cuprate analogue CaCuO$_2$. \nWe use fully {\\it ab initio} many-body wavefunction-based quantum chemistry methodology\nto compute the ground state wavefunctions for hole-doped NdNiO$_2$\\ and CaCuO$_2$. \nWe find that the doped hole in NdNiO$_2$ mainly localizes on the Ni 3$d_{x^2-y^2}$\\ orbital to form a closed-shell singlet, and this singlet configuration contributes to $\\sim$40\\% of the wavefunction. \nIn contrast, in CaCuO$_2$ the Zhang-Rice singlet configurations contribute to $\\sim$65\\% of the wavefunction. \nThe persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects than in CaCuO$_2$,\nand consequently the additional hole footprint is more three-dimensional in NdNiO$_2$. \nOur analysis shows that the three-band Hubbard model most commonly used to describe the doped scenario in cuprates represents $\\sim$90\\% of the $d^8$ wavefunction for CaCuO$_2$, but grossly approximates the $d^8$ wavefunction for NdNiO$_2$, as it accounts for only $\\sim$60\\% of the wavefunction.\n\n \n\n\nIn what follows, we first describe the computational methodology we employ in this work, highlighting the novel features of the methods, and provide all the computational details.\nWe then present the results of our calculations and conclude with a discussion. \n\n\\section{The wavefunction quantum chemistry method}\n{\\it Ab initio} configuration interaction (CI) wavefunction-based quantum chemistry methods, in particular\nthe post Hartree-Fock (HF) complete active space self-consistent field (CASSCF) method and multireference perturbation theory (MRPT), are employed.\nThese methods not only facilitate the systematic inclusion of electron correlations, but also make it possible to quantify different types of correlations, static vs.\\ dynamic~\\cite{helgaker_molecular_2000}.
\nThese calculations do not use any \\textit{ad hoc} parameters to incorporate electron-electron interactions, unlike other many-body methods; instead, the interactions are computed fully {\\it ab initio} from the kinetic and Coulomb integrals. \nSuch \\textit{ab initio} calculations provide techniques to systematically analyze electron correlation effects and offer insights into the electronic structure of correlated solids that go substantially beyond standard DFT approaches, see, e.g., Refs.~\\cite{Munoz_afm_htc_qc_prl_2000,CuO2_dd_hozoi11,book_Liviu_Fulde,Bogdanov_Ti_12,katukuri_electronic_2020} for $3d$ TM oxides and Refs.~\\cite{katukuri_PRB_2012,Os227_bogdanov_12,213_rixs_gretarsson_2012,Katukuri_ba214_prx_2014,Katukuri_njp_2014} for $5d$ compounds.\n\\subsection{Embedded cluster approach}\nSince strong electronic correlations are short-ranged in nature~\\cite{fulde_new_book}, a local approach for the calculation of the $N$- and $N\\pm1$-electron wavefunctions is a very attractive option for transition metal compounds. \nIn the embedded cluster approach, a finite set of atoms, which we call the quantum cluster (QC), is cut out from the infinite solid, and many-body quantum chemistry methods are used to calculate the electronic structure of the atoms within the QC.
\nThe cluster is ``embedded'' in a potential that accounts for the part of the crystal that is not treated explicitly.\nIn this work, we represent the embedding potential by an array of point charges (PCs) at the lattice positions, fitted to reproduce the Madelung crystal field in the cluster region~\\cite{ewald}.\nSuch a procedure enables the use of quantum chemistry calculations for solids involving transition-metal or lanthanide ions, see Refs.~\\cite{katukuri_ab_2012,katukuri_electronic_2014,babkevich_magnetic_2016}.\n\n\n\n\\subsection{Complete active space self-consistent field}\nThe CASSCF method~\\cite{book_QC_00} is a specific type of multi-configurational (MC) self-consistent field technique in which the CI wavefunction is expanded in a complete set of Slater determinants or configuration state functions (CSFs) defined within a constrained orbital space, called the active space. \nIn the CASSCF(n,m) approach, a subset of $n$ active electrons is\nfully correlated among an active set of $m$ orbitals, leading to a highly multi-configurational (CAS) reference wavefunction.\nThe CASSCF method with a properly chosen active space guarantees a qualitatively correct wavefunction for strongly correlated systems, where static correlation~\\cite{book_QC_00} effects are taken into account. \n %\nWe consider active spaces as large as CAS(24,30) in this work. \nBecause conventional CASSCF implementations based on deterministic CI solvers (acting on the Hilbert space of all possible configurations within the active space) are limited to active spaces of about 18 active electrons in 18 orbitals,\nwe use the full configuration interaction quantum Monte Carlo (FCIQMC)~\\cite{booth_fermion_2009,cleland_survival_2010,guther_neci_2020} and density matrix renormalization group (DMRG)~\\cite{chan_density_2011,sharma_spin-adapted_2012} algorithms to solve the eigenvalue problem defined within the active space.
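The combinatorial growth of the CI space explains why stochastic (FCIQMC) and DMRG solvers are needed for the larger active spaces used here. A rough determinant count for an $S_z=0$ CAS($n$,$m$) space can be sketched as follows (a simple estimate that ignores spin adaptation and point-group symmetry, which would reduce the count):

```python
from math import comb

def cas_dimension(n_electrons, n_orbitals):
    """Number of Slater determinants in a CAS(n_electrons, n_orbitals)
    space with S_z = 0, i.e. equal numbers of alpha and beta electrons.
    Spin adaptation and point-group symmetry would reduce this count.
    """
    n_alpha = n_electrons // 2
    n_beta = n_electrons - n_alpha
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

# CAS(18,18), roughly the practical limit of deterministic CI solvers:
assert cas_dimension(18, 18) == 48620 ** 2     # ~2.4e9 determinants
# CAS(24,30), the largest active space used in this work:
assert cas_dimension(24, 30) > 10 ** 15        # ~7.5e15 determinants
```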
\n\n\\subsection{Multireference perturbation theory}\nWhile a CASSCF calculation provides a qualitatively correct wavefunction, for a quantitative description of a strongly correlated system, dynamic correlations~\\cite{book_QC_00} (contributions to the wavefunction from configurations related to excitations from inactive to active and virtual orbitals, and from active to virtual orbitals) are also important and must be accounted for. \nA natural choice is the variational multireference CI (MRCI) approach, where the CI wavefunction is extended with excitations involving orbitals that are doubly occupied or empty in the reference CASSCF wavefunction~\\cite{book_QC_00}. \nAn alternative and computationally less demanding approach to take into account dynamic correlations is based on perturbation theory at second and higher orders. \nIn multireference perturbation theory (MRPT), an MC zeroth-order wavefunction is employed and excitations to the virtual space are accounted for by means of perturbation theory. \nIf the initial choice of the MC wavefunction is good enough to capture a large part of the correlation energy, then the perturbation corrections are typically small. \nThe most common variants of MRPT are complete active space second-order perturbation theory (CASPT2)~\\cite{anderson_caspt2_1992} and n-electron valence second-order perturbation theory (NEVPT2)~\\cite{angeli_nevpt2_2001}, which differ in the type of zeroth-order Hamiltonian $H_0$ employed.\n\n\\begin{figure}[!t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.450\\textwidth]{fig1.pdf}\t\n\t\t\\caption{Quantum cluster of five NiO$_4$ (a) and CuO$_4$ (b) plaques considered in our calculations. The point-charge embedding is not shown. \n\t\t\tThe symmetry-adapted localized 3$d_{x^2-y^2}$\\ and oxygen Zhang-Rice-like 2$p$ orbitals, the basis in which the wavefunction in Table~\\ref{wfn} is presented, are shown in yellow and green.
}\n\t\t\\label{fig1}\n\t\\end{center}\n\\end{figure}\n\n\\section{The {\\em ab initio} model}\nBefore we describe the {\\em ab initio} model we consider, let us summarize the widely used and prominent model Hamiltonian employed to study the nature of the doped hole in HTC cuprates, and lately also for monovalent nickelates. \nIt is the three-band Hubbard model~\\cite{emery_3b_hubbard_prl_1987} with \nthree orbital degrees of freedom (bands), which include the $d$ orbital of Cu with $x^2-y^2$ symmetry and the in-plane oxygen $p$ orbitals aligned in the direction of the nearest Cu neighbours. \nThese belong to the $b_1$ irreducible representation (irrep) of the $D_{4h}$ point group symmetry realized at the Cu site of the CuO$_4$ plaque; the other Cu $d$ orbitals belong to the $a_1$ ($d_{z^2}$), $b_2$ ($d_{xy}$) and $e$ ($d_{xz,yz}$) irreps.\nThe parameters in this Hamiltonian include the most relevant hopping and Coulomb interactions within this set of orbitals. \nMore recently, the role of the Cu $3d$ multiplet structure in the hole-doped ground state has also been studied~\\cite{jiang_cuprates_prb_2020}. \nWhile this model explains certain experimental observations, there is still considerable debate about the minimal model describing the low-energy physics of doped cuprates. \nNevertheless, this model has also been employed to investigate the character of the doped hole in monovalent nickelates~\\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021, Plienbumrung_prb_2021}. \n\nWithin the embedded cluster approach described earlier, \nwe consider a QC of five NiO$_4$ (CuO$_4$) plaques that includes five Ni (Cu) atoms, 16 oxygens and 8 Nd (Ca) atoms.
The 10 Ni (Cu) ions neighbouring the cluster are also included in the QC; however, these are treated as total ion potentials (TIPs).\nThe QC is embedded in point charges that reproduce the electrostatic field of the solid environment.\nWe used the crystal structure parameters for the thin film samples reported in Refs.~\\cite{li_superconductivity_2019,hayward_synthesis_2003,kobayashi_compounds_1997,karpinski_single_1994}.\n\nWe used all-electron atomic natural orbital (ANO)-L basis sets of triple-$\\zeta$ quality with additional polarization functions -- [$7s6p4d2f1g$] for Ni (Cu)~\\cite{roos_new_2005} \nand [$4s3p2d1f$] for oxygen~\\cite{roos_main_2004}.\nFor the eight Nd (Ca) atoms, large-core effective potentials~\\cite{dolg_energy-adjusted_1989,dolg_combination_1993,kaupp_pseudopotential_1991} and the associated [$3s2p2d$] basis functions were used. \nIn the case of Nd, the $f$-electrons were incorporated in the core. \nCu$^{1+}$ (Zn$^{2+}$) TIPs with [$2s1p$] functions were used for the 10 Ni$^{1+}$ (Cu$^{2+}$)~\\cite{ingelmann_thesis}~\\footnote{Energy-consistent Pseudopotentials of Stuttgart/Cologne group, \\url{http://www.tc.uni-koeln.de/cgi-bin/pp.pl?language=en,format=molpro,element=Zn,job=getecp,ecp=ECP28SDF}, [Accessed: 15-Sept-2021]}\nneighbouring ions of the QC. \n\n \\begin{table}[!t]\n\t\\caption{The different active spaces (CAS) considered in this work.\n\t\tNEL is the number of active electrons and NORB is the number of active orbitals.\n\t\tThe numbers in parentheses indicate the orbital numbers in Fig.~\\ref{activespace_orb}.
\n\t} \n\t\\label{activespaces}\n\t\\begin{center}\n\t\t\\begin{tabular}{lcc}\n\t\t\t\\hline\n\t\t\t\\hline\\\\\n\t\t\tCAS & NEL & NORB \\\\\n\t\t\t\\hline\\\\\n\t\t\tCAS-1 & 18 & 24 (1-24) \\\\\n\t\t\tCAS-2 & 24 & 30 (1-30) \\\\\n\t\t\tCAS-3\\footnote{The four neighbouring Ni$^{1+}$ (Cu$^{2+}$) ions in the quantum cluster are treated as closed-shell Cu$^{1+}$ (Zn$^{2+}$) ions.}\n\t\t\t & 12 & 14 (1, 6, 11, 16 and 21-30) \\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n To investigate the role of different interactions in the $d^8$ ground state, \n two different active spaces were considered.\nIn the first active space, CAS-1 in Table~\\ref{activespaces}, only the orbitals in the $b_1$ and $a_1$ irreps are active. \nThese are the $d_{x^2-y^2}$- and $d_{z^2}$-like orbitals, respectively, and the corresponding double-shell $4d$ orbitals of each of the five Ni (Cu) atoms.\nCAS-1 also contains the symmetry-adapted ZR-like composite O 2$p$ and the double-shell 3$p$-like orbitals, numbers 1-20 and 21-24 in Fig.~\\ref{activespace_orb}. \nAt the mean-field HF level of theory, there are 18 electrons within this set of orbitals, resulting in the CAS(18,24) active space.\nIn the second active space, CAS-2, orbitals of the $b_2$ and $e$ irreps from the central Ni (Cu) $d$ manifold are also included. \nThese are the 3$d_{xy}$- and 3$d_{xz,yz}$-like orbitals and the corresponding $4d$ orbitals, together with their six electrons, numbers 25-30 in Fig.~\\ref{activespace_orb}, resulting in a CAS(24,30) active space. \nThe latter active space takes into account the $d^8$ multiplet effects within the $3d$ manifold explicitly. \n\nThe two active spaces considered in this work not only describe all the physical effects included in the above-mentioned three-band Hubbard model but also go beyond it.
\nMore importantly, we do not have any \\textit{ad hoc} input parameters for the calculation, as \nall the physical interactions are implicitly included in the {\\it ab initio} Hamiltonian describing the actual scenario in the real materials. \n We employed the {\\sc OpenMolcas}~\\cite{fdez_galvan_openmolcas_2019} quantum chemistry package for all the calculations. \n \n\\begin{figure}[!t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.480\\textwidth]{cas_orbitals.pdf}\t\n\t\t\\caption{Active orbital basis used in the CASSCF calculations, \n\t\t\tplotted using Jmol~\\cite{jmol}. \n\t\t}\n\t\t\\label{activespace_orb}\n\t\\end{center}\n\\end{figure}\n\n \\section{Results}\n\\subsection{Ground state of the \\boldmath${d^8}$ configuration}\n\nStarting from the electronic structure of the parent compounds, where each Ni (Cu) is in the $d^9$ configuration, we compute the electron-removal (in the photoemission terminology) $d^8$ state to investigate the hole-doped quasiparticle state. \nSince the parent compounds in the $d^9$ configuration have strong nearest-neighbour antiferromagnetic (AF) correlations~\\cite{katukuri_electronic_2020}, the total spin of our QC in the undoped case, with five Ni (Cu) sites, in the AF ground state is $S_{QC}=3/2$. \nUpon introducing an additional hole on (i.e., removing an electron from) the central Ni (Cu) in our QC, the possible $S_{QC}$ values range from 0 to 3. \nTo simplify the analysis of the distribution of the additional hole, we keep the spins on the four neighbouring Ni (Cu) sites aligned parallel in all our calculations, and from now on we only specify the spin multiplicity of the central Ni (Cu)O$_4$ plaque. \nThe multiplet structure of the $d^8$ configuration thus consists of only spin singlet and triplet states, spanned by the four irreps of the $3d$ manifold.
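The spin values quoted above follow from elementary angular-momentum addition. As a purely illustrative aside (not part of the quantum chemistry workflow itself), this bookkeeping can be sketched in a few lines of Python, under the stated assumption that the four neighbouring spins are kept parallel:

```python
# Total-spin bookkeeping for the five-plaque quantum cluster (QC).
# Four neighbouring d^9 sites carry S = 1/2 each; kept ferromagnetically
# aligned, they couple to S_env = 2, as assumed in all our calculations.
S_ENV = 4 * 0.5

def total_spins(s_central, s_env=S_ENV):
    """Allowed total spins S_QC when coupling the central-plaque spin
    s_central to the (parallel) neighbour spins s_env."""
    s_min, s_max = abs(s_env - s_central), s_env + s_central
    n = int(round(s_max - s_min))
    return [s_min + k for k in range(n + 1)]

# Undoped case: the central d^9 site has S = 1/2; the AF ground state
# picks the smaller value, S_QC = 3/2.
undoped = total_spins(0.5)
# Doped d^8 central site: local singlet (S = 0) or local triplet (S = 1);
# the parallel-coupled triplet gives S_QC = 3.
singlet = total_spins(0.0)
triplet = total_spins(1.0)
```

Coupling the central plaque spin to the parallel neighbours in this way reproduces the undoped $S_{QC}=3/2$ AF ground state and the doped $S_{QC}=2$ (local singlet) and $S_{QC}=3$ (local triplet) values used in the text.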
\nThe active spaces we consider in this work allow us to compute accurately the excitations only within the $b_1$ and $a_1$ irreps\n\\footnote{For an accurate quantitative description of the multiplet structure spanned by the other two irreps, $b_2$ and $e$, one would need to extend the active space and include the $3d$ and $4d$ manifolds of the four neighbouring Ni (Cu) atoms as well as the O 2$p$ orbitals of the same symmetry, resulting in a gigantic active space of 68 electrons in 74 orbitals.}\nand we address the full multiplet structure elsewhere.\n\nWhen computing the local excitations, a local singlet state on the central Ni (Cu) corresponds to a total spin on the cluster $S_{QC}=2$. \nHowever, a local triplet state, with the central spin aligned parallel to the neighbouring spins, corresponds to $S_{QC}=3$ and does not satisfy the AF correlations.\nTo avoid the spin coupling between the central $d^8$ Ni (Cu) and the neighbouring $d^9$ Ni (Cu) ions, we replace the latter with closed-shell Cu (Zn) $d^{10}$ ions and freeze them at the mean-field HF level. \nSuch a simplification is justified, as the local excitation energy we compute is an order of magnitude larger than the exchange interaction~\\cite{katukuri_electronic_2020}. \n %\n\nIn Table \\ref{d8-excit}, the relative energies of the lowest local spin singlet $^1\\!A_{1g}$, $^1\\!B_{1g}$ and spin triplet $^3\\!B_{1g}$ states are shown. \nThese are obtained from CASSCF + CASPT2 calculations with a CAS(12,14) active space (CAS-3 in Table~\\ref{activespaces}) which includes the 3$d$ and $4d$ orbitals of the central Ni (Cu) ion and the in-plane O 2$p$ and $3p$ orbitals in the $b_1$ irrep. \nIn the CASPT2 calculation, the remaining doubly occupied O $2p$, the central Ni (Cu) $3s$ and $3p$ orbitals and all the unoccupied virtual orbitals are correlated.
\n\\begin{table}[!t]\n\t\\caption{Relative energies (in eV) of the electron-removal $d^8$ states in NdNiO$_2$\\ and the iso-structural CaCuO$_2$\\ obtained from CAS(12,14)SCF and CASSCF+CASPT2 calculations. \n\t} \n\t\\label{d8-excit}\n\n\t\\begin{center}\n\t\t\\begin{tabular}{lccccl}\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tState & \\multicolumn{2}{c}{NdNiO$_{2}$} & \\multicolumn{2}{c}{CaCuO$_{2}$} \\\\\n & CASSCF & +CASPT2 & CASSCF & +CASPT2 \\\\\n \\hline\n\t\t\t$^1\\!A_{1g}$ & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n $^3\\!B_{1g}$ & 1.35 & 1.88 & 2.26 & 2.50 \\\\\n $^1\\!B_{1g}$ & 2.98 & 3.24 & 3.21 & 3.33 \\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\nIt can be seen that the ground state is of $^1\\!A_{1g}$ symmetry and the lowest triplet excited state, with $^3\\!B_{1g}$ symmetry, is around 1.88 eV and 2.50 eV for NdNiO$_2$\\ and CaCuO$_2$, respectively. \nThe AF magnetic exchange in these two compounds is 76 meV and 208 meV, respectively~\\cite{katukuri_electronic_2020}, and thus we expect that our simplification of making the neighbouring $d^9$ ions closed shell does not over- or underestimate the excitation energies. \nAt the CASSCF level, the $^1\\!A_{1g}$-$^3\\!B_{1g}$ excitation energy is 1.35 eV in NdNiO$_2$, while it is 2.26 eV in CaCuO$_2$. \nInterestingly, upon inclusion of dynamical correlations via the CASPT2 calculation, the $^1\\!A_{1g}$ state in NdNiO$_2$\\ is stabilized by 0.53 eV relative to the $^3\\!B_{1g}$ state. \nHowever, in CaCuO$_2$, the $^1\\!A_{1g}$ state is stabilized by only 0.24 eV. \nThis indicates that the dynamical correlations are more active in the $^1\\!A_{1g}$ state in NdNiO$_2$\\ than in CaCuO$_2$.
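The CASPT2 stabilization energies quoted above are simple differences of the entries in Table~\ref{d8-excit}. The following Python fragment (values hard-coded from the table, in eV; purely a consistency check, not part of the computational workflow) re-derives them and the comparison with the magnetic exchange scale:

```python
# Relative energies (eV) of the d^8 states, copied from Table "d8-excit":
# each entry is (CASSCF, CASSCF+CASPT2), with 1A1g as the zero of energy.
energies = {
    "NdNiO2": {"3B1g": (1.35, 1.88), "1B1g": (2.98, 3.24)},
    "CaCuO2": {"3B1g": (2.26, 2.50), "1B1g": (3.21, 3.33)},
}
# AF exchange J from Ref. katukuri_electronic_2020, converted to eV.
exchange = {"NdNiO2": 0.076, "CaCuO2": 0.208}

def caspt2_stabilization(compound, state="3B1g"):
    """Extra stabilization of the 1A1g ground state relative to `state`
    once dynamical correlation (CASPT2) is included."""
    cas, pt2 = energies[compound][state]
    return round(pt2 - cas, 2)

nd = caspt2_stabilization("NdNiO2")   # 0.53 eV
ca = caspt2_stabilization("CaCuO2")   # 0.24 eV

# The singlet-triplet gap is an order of magnitude above J, which is the
# basis for freezing the neighbouring ions as closed shells.
gap_to_J = {c: energies[c]["3B1g"][1] / exchange[c] for c in energies}
```

The larger stabilization in NdNiO$_2$ (0.53 eV vs.\ 0.24 eV) is the quantitative content of the statement that dynamical correlations are more active in its $^1\!A_{1g}$ state.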
\nWe note that the hole excitations within the $3d$ orbitals in the irreps $b_2$ and $e$, calculated with this limited active space (CAS-3), result in energies lower than the $^3\\!B_{1g}$ and $^1\\!B_{1g}$ states.\nHowever, an accurate description of those states requires an enlarged active space that includes not only the same-symmetry oxygen 2$p$ and $3p$ orbitals from the central NiO$_4$ plaque but also the 3$d$, 4$d$ manifold of the neighbouring Ni (Cu) ions, making the active space prohibitively large. \nHere, we concentrate on the analysis of the $^1\\!A_{1g}$ ground state and address the complete $d^8$ multiplet spectrum elsewhere. \n\\begin{table}[!b]\n\t\\caption{\n\t\tNi and Cu $3d^8$ $^1\\!A_{1g}$ ground-state wavefunction: Weights (\\%) of the leading configurations\n\t\tin the wavefunction computed for NdNiO$_2$\\ and CaCuO$_2$\\ with active spaces CAS-1 and CAS-2 (see Table~\\ref{activespaces}).\n\t\t$d_{b_1}$ and $p_{b_1}$ are the localized Ni (Cu) $3d_{x^2-y^2}$ and the oxygen $2p$ ZR-like orbitals (see Fig.~\\ref{fig1}) in the $b_1$ irrep, respectively.
\n\t\tArrows in the superscript indicate the spin of the electrons and a $\\square$ indicates two holes.\n\t}\n\t\\begin{center}\n\t\t\\begin{tabular}{l llll}\n\t\t\t\\hline\n\t\t\t\\hline\\\\[-0.30cm]\n\t\t\t & \\multicolumn{2}{c}{NdNiO$ _{2} $} & \\multicolumn{2}{c}{CaCuO$ _{2} $} \\\\ \n\t\t\t $^1\\!A_{1g}$ & CAS-1 & CAS-2 & CAS-1 & CAS-2 \\\\\n\t\t\t\\hline\n\t\t\t\\\\[-0.20cm]\n\t\t\t$|d_{b_{1}}^\\square p_{b_{1}}^{\\uparrow \\downarrow} \\rangle$ & 51.87 & 42.40 & 4.20 & 20.25 \\\\[0.3cm]\n\t\t\t$|d_{b_{1}}^{\\uparrow}p_{b_{1}}^{\\downarrow} \\rangle$ & 8.27 & 10.48 & 42.58 & 38.52 \\\\[0.3cm]\n\t\t\t$|d_{b_{1}}^{\\downarrow}p_{b_{1}}^{\\uparrow} \\rangle$ & 6.07 & 7.60 & 25.00 & 25.60 \\\\[0.3cm]\n\t\t\t$|d_{b_{1}}^{\\uparrow \\downarrow}p_{b_{1}}^\\square \\rangle$ & 0.09 & 0.23 & 21.56 & 5.14 \\\\[0.3cm]\n\n\t\t\t\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{wfn}\n\\end{table}\n\n\n\n\\subsection{Wavefunction of the electron-removal \\boldmath$d^8$ ground state} \nThe $^1\\!A_{1g}$ ground wavefunction in terms of \nthe weights of the four leading configurations (in the case of CaCuO$_2$) is shown in Table~\\ref{wfn}.\nThe wavefunctions corresponding to the CASSCF calculations with the active spaces CAS-1 and CAS-2 are shown. \nThe basis in which the wavefunctions are represented is constructed in two steps:\n1) A set of natural orbitals are generated by diagonalising the CASSCF one-body reduced density matrix. \n2) To obtain a set of atomic-like symmetry-adapted localized orbital basis, we localize the Ni (Cu) $3d$ and O $2p$ orbitals on the central NiO$_4$ (CuO$_4$) plaque through a unitary transformation.\nSuch partial localization within the active space keeps the total energy unchanged. 
\nThe resulting 3$d_{x^2-y^2}$\\ and the ZR-like oxygen 2$p$ orbital basis is shown in Fig.~\\ref{fig1}.\nAn FCIQMC calculation was performed in this partially localized basis to obtain the wavefunction as a linear combination of Slater determinants. \nTen million walkers were used to converge the FCIQMC energy to within 0.1 mHartree. \n\nFrom Table~\\ref{wfn} it can be seen that the electron-removal $d^8$ ground-state wavefunction for the two compounds is mostly described by the four configurations spanned by the localized 3$d_{x^2-y^2}$\\ ($d_{b_1}$) and the symmetry-adapted ZR-like oxygen 2$p$ ($p_{b_1}$) orbitals that are shown in Fig.~\\ref{fig1}.\nLet us first discuss the wavefunction obtained from the CAS-1 active space. \nFor NdNiO$_2$, the dominant configuration involves two holes on 3$d_{x^2-y^2}$, $|d_{b_{1}}^\\square p_{b_{1}}^{\\uparrow \\downarrow} \\rangle$, and contributes $\\sim$52\\% of the wavefunction,\nwhile the configurations that make up the ZR singlet, $|d_{b_{1}}^{\\uparrow}p_{b_{1}}^{\\downarrow} \\rangle$ and $|d_{b_{1}}^{\\downarrow}p_{b_{1}}^{\\uparrow} \\rangle$, contribute only $\\sim$14\\%. \nOn the other hand, the $d^8$ $^1\\!A_{1g}$ state in CaCuO$_2$\\ is predominantly the ZR singlet, with $\\sim$68\\% weight.\nIn the CASSCF calculation with the CAS-2 active space, where all the electrons in the 3$d$ manifold are explicitly correlated, \nwe find that the character of the wavefunction remains unchanged in NdNiO$_2$, but the weight of the dominant configurations is slightly reduced. \nOn the other hand, in CaCuO$_2$, while the contribution from the ZR singlet is slightly reduced, the contribution from the $|d_{b_{1}}^\\square p_{b_{1}}^{\\uparrow \\downarrow} \\rangle$ configuration is dramatically increased at the expense of the weight of \n$|d_{b_{1}}^{\\uparrow \\downarrow}p_{b_{1}}^\\square \\rangle$.
\nThis demonstrates that the additional freedom provided by the $d_{xy}$ and $d_{xz/yz}$ orbitals for the electron correlation helps to accommodate the additional hole on the Cu ion.\n\nWe note that the four configurations shown in Table~\\ref{wfn} encompass almost 90\\% of the $d^8$ wavefunction (with the CAS-2 active space) in CaCuO$_2$. \nThus, the use of a three-band Hubbard model~\\cite{emery_3b_hubbard_prl_1987,jiang_cuprates_prb_2020} to investigate the role of doped holes in CuO$_2$ planes is a reasonable choice. \nHowever, for NdNiO$_2$\\ these configurations cover only 60\\% of the $d^8$ wavefunction, hence a three-band Hubbard model is too simple to describe the hole-doped monovalent nickelates. \n\nA more intuitive and visual understanding of the distribution of the additional hole can be obtained by plotting the difference of the $d^8$ and the $d^9$ ground-state electron densities, as shown in Fig.~\\ref{fig2}. \nThe electron density of a multi-configurational state can be computed as a sum of densities arising from the natural orbitals and the corresponding (well-defined) occupation numbers.\nWe used the Multiwfn program \\cite{Multiwfn} to perform this summation.\nThe negative values of the heat map of the electron density difference (blue color) and the positive values (in red) represent, respectively, the extra hole density and the additional electron density in the $d^8$ state compared to the $d^9$ state.\nFrom Figs.~\\ref{fig2}(a)/(c), which show the density difference in the NiO$_2$/CuO$_2$ planes (xy-plane), we conclude the following: \n\\begin{enumerate}\n\\item The hole density is concentrated on the Ni site (darker blue) with $b_1$ ($d_{x^2-y^2}$) symmetry in NdNiO$_2$, whereas \n it is distributed evenly on the four oxygen and the central Cu ions with $b_1$ symmetry in CaCuO$_2$, a result consistent with the wavefunction reported in Table~\\ref{wfn}.\n\\item In NdNiO$_2$, the hole density is spread out over a larger radius around the Ni ion, whereas in CaCuO$_2$ it is more compact.
\n This demonstrates that the $3d$ manifold in Cu is much more localized than in Ni and therefore the onsite Coulomb repulsion $U$ is comparatively smaller for Ni.\n\\item The darker red regions around the Ni site in NdNiO$_2$\\ indicate stronger $d^8$ multiplet effects that result in a rearrangement of the electron density compared to the $d^9$ configuration.\n\\item In CaCuO$_2$, we see darker red regions on the oxygen ions instead, which shows that the significant presence of a hole on these ions results in a noticeable electron redistribution. \n\\end{enumerate}\n\nThe electron density difference in the xz-plane (which is perpendicular to the NiO$_2$/CuO$_2$ planes) is quite different in the two compounds. \nThe hole density in NdNiO$_2$\\ is spread out up to 2\\,\\AA\\ in the $z$-direction, unlike in CaCuO$_2$, where it is confined to within 1\\,\\AA .\nWe attribute this to the strong radial-type correlations in NdNiO$_2$. \nWith the creation of an additional hole in the 3$d_{x^2-y^2}$\\ orbital, the electron density which is spread out in the $d_{z^2}$\\ symmetry via the dynamical correlation between the 3$d_{z^2}$\\ and 4$d_{z^2}$\\ orbitals~\\cite{katukuri_electronic_2020} becomes more compact in the $d_{z^2}$\\ symmetry through the reverse breathing. \nThus, we see a strong red region with a 3$d_{z^2}$\\ profile and a blue region with an expanded 4$d_{z^2}$\\ profile. \n\n\\begin{figure}[!t]\n\\begin{center}\n\t\\includegraphics[width=0.48\\textwidth]{Density_difference_2.pdf}\t\n\t\\caption{Electron density difference of the $d^8$ and $d^9$ ground states ($\\rho(d^8) - \\rho(d^9)$) for NdNiO$_2$\\ in the xy-plane (a) and xz-plane (b), and for CaCuO$_2$\\ in the xy-plane (c) and xz-plane (d). \n\tThe coordinates of the central Ni (Cu) $d^8$ ion are set to (0,0). The scale of the heat-bar is logarithmic between $\\pm$0.001 and $\\pm$1.0 and linear between 0 and $\\pm$0.001.
\n\t(e) Electron density difference integrated over a sphere centered on the central Ni (Cu) atom (full curves) as a function of the radius $r$ shown in (a). \n\tThe result of an additional radial integration (dashed curves) as a function of the upper integration limit.}\n\t\\label{fig2}\n\\end{center}\n\\end{figure}\nTo obtain a quantitative understanding of the charge density differences for the two compounds, in Fig.~\\ref{fig2}(e) we plot the electron density difference integrated over a sphere centered on the central Ni (Cu) atom as a function of the radius $r$ shown in Fig.~\\ref{fig2}(a).\nFour features, which we mark A-D, clearly demonstrate the contrast in the charge density differences in the two compounds. \nFrom feature A, at $r$ close to Ni (Cu), it is evident that\nthe extent of the hole density around Ni in NdNiO$_2$\\ is larger than around Cu in CaCuO$_2$.\nFeatures B and C, which lie on either side of the position of the oxygen ions, show that the hole density is significantly larger on the oxygen atoms in CaCuO$_2$\\ than in NdNiO$_2$.\nIt is interesting to note that we see a jump (feature D) in the electron density above zero at $r$ close to the position of the Nd ions in NdNiO$_2$, while in CaCuO$_2$\\ the curve is flat in the region of the Ca ions. \nThis shows that there is some electron redistribution happening around the Nd ions. \n\nThe hole density within a solid sphere (SS) around the central Ni (Cu) atom, obtained by an additional integration over the radius $r$, is also shown in Fig.~\\ref{fig2}(e) with dashed curves. \nIt can be seen that the total hole density within the SS of $r\\sim$4\\,\\AA, where the neighbouring Ni (Cu) ions are located, is only $\\sim$0.5 in both compounds, with slight differences related to feature D. \nThis is due to the screening of the hole by the electron density pulled in from the farther surroundings.
\nAs one would expect, for a SS with $r$ comparable to the size of the cluster, the total hole density is one in both compounds. \n\n\\begin{figure}[!b]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.480\\textwidth]{entropy_4.pdf}\t\n\t\t\\caption{Single-orbital entanglement entropy, $s(1)_i$, (dots) and mutual orbital entanglement entropy, $I_{i,j}$, (colored lines) of the orbital basis used to expand the $d^8$ wavefunction in Table~\\ref{wfn} for NdNiO$_2$\\ (a) and CaCuO$_2$\\ (b). \n\t\tOnly the entanglement entropies of the orbitals centred on the central NiO$_4$/CuO$_4$ plaque are shown. \n\t\tThe irreps to which the orbitals belong are also shown. \n\t\tThe green and magenta colors represent the two different sets of orbitals, occupied (at the HF level) and the corresponding double-shell (virtual), respectively. \n\t\tThe thickness of the black, blue and green lines denotes the strength of $I_{i,j}$, and the size of the dots is proportional to $s(1)_i$.\n\t\t}\n\t\t\\label{entanglement}\n\t\\end{center}\n\\end{figure}\n\n\n\\subsection{Orbital entanglement entropy}\nTo analyse the different types of correlations active in the two compounds in the $d^8$ configuration, we compute the entanglement entropy~\\cite{boguslawski_entanglement_2012,boguslawski_orbital_2013,boguslawski_orbital_2015}. \nWhile the single-orbital entropy, $s(1)_i$, quantifies the correlation between the $i$-th orbital and the remaining set of orbitals, \nthe mutual information, $I_{i,j}$, is the two-orbital entropy between $i$ and $j$~\\cite{legeza_optimizing_2003,rissler_measuring_2006} and illustrates the correlation of an orbital with another, in the embedded environment comprising all other orbitals. \nWe used {\\sc QCMaquis}~\\cite{keller_an_2015} embedded in the {\\sc OpenMolcas}~\\cite{fdez_galvan_openmolcas_2019} package to compute the entropies.\n\nIn Figure~\\ref{entanglement}, $s(1)_i$ and $I_{i,j}$ extracted from CASSCF calculations with the CAS-2 active space for NdNiO$_2$\\ and CaCuO$_2$\\ are shown.
\nThe orbital basis for which the entropy is computed is the same as the basis in which the wavefunction presented in Table~\\ref{wfn} is expanded. \nAs mentioned previously, this orbital basis is obtained from a partial localization of the natural orbitals, in a way that only the 3$d_{x^2-y^2}$\\ and the O 2$p$ ZR-like orbitals are localized. \nSince a large part of the electron correlation is compressed in natural orbitals, we see a tiny $s(1)_i$ for all orbitals except for the localized 3$d_{x^2-y^2}$\\ and the O 2$p$ ZR-like orbitals, where it is significant. This is consistent with the wavefunction in Table~\\ref{wfn}.\nThe mutual orbital entanglement between pairs of orbitals shows strong entanglement between the 3$d_{x^2-y^2}$\\ and the O 2$p$ ZR-like orbitals for both NdNiO$_2$\\ and CaCuO$_2$, a consequence of the dominant weight of the configurations spanned by these two orbitals in the wavefunction. \nThe next strongest entanglement is between the Ni/Cu 3$d$ valence and their double-shell $4d$ orbitals.\nSuch strong entanglement, also observed for the undoped $d^9$ ground state~\\cite{katukuri_electronic_2020}, is a result of dynamical radial correlation \\cite{helgaker_molecular_2000} and orbital breathing effects~\\cite{gunnarsson_density-functional_1989,bogdanov_natphys_2021}. \nInterestingly, the entanglement entropy in the range 0.001-0.01 (green lines) is quite similar in the two compounds, although one sees more entanglement connections in NdNiO$_2$. \nA comparison of the entropy information between NdNiO$_2$\\ and CaCuO$_2$\\ reveals that the Ni 3$d$- and 4$d$-like orbitals contribute rather significantly (thicker blue lines) to the total entropy, in contrast to the Cu 3$d$- and 4$d$-like orbitals, something that is also seen in the undoped compounds~\\cite{katukuri_electronic_2020}.
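The entropies used in this analysis follow the standard definitions of~\cite{legeza_optimizing_2003,rissler_measuring_2006}. As an illustration (with made-up one-orbital eigenvalues rather than the computed RDM spectra), they can be evaluated as:

```python
from math import log

def s1(weights):
    """Single-orbital entropy s(1)_i = -sum_a w_a ln w_a, where w_a are
    the eigenvalues of the one-orbital reduced density matrix
    (states: empty, spin-up, spin-down, doubly occupied)."""
    return -sum(w * log(w) for w in weights if w > 0.0)

def mutual_information(s_i, s_j, s_ij):
    """Mutual information I_{i,j} = (s(1)_i + s(1)_j - s(2)_{ij}) / 2,
    with s(2)_{ij} the two-orbital entropy of the pair (i, j)."""
    return 0.5 * (s_i + s_j - s_ij)

# A weakly correlated natural orbital has s(1) close to 0; the maximal
# one-orbital entropy for a spatial orbital is ln 4.
weak = s1([0.97, 0.01, 0.01, 0.01])
maximal = s1([0.25] * 4)   # = ln 4
```

This makes explicit why the compressed natural orbitals carry tiny $s(1)_i$ while the localized 3$d_{x^2-y^2}$ and ZR-like orbitals, whose occupations are strongly mixed, carry significant entropy.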
\n\n\\section{Conclusions and discussion}\nIn conclusion, \nour {\\it ab initio} many-body quantum chemistry calculations for the electron-removal ($d^8$) states find a low-spin closed-shell singlet ground state in NdNiO$_2$\\ and that the additional hole is mainly localized on the Ni 3$d_{x^2-y^2}$\\ orbital, unlike in CaCuO$_2$, where a Zhang-Rice singlet is predominant. \nWe emphasise that the $d^8$ wavefunction is highly multi-configurational, with the dominant closed-shell singlet configuration weight being only $\\sim$42\\%.\nThis result is consistent with the experimental evidence~\\cite{rossi2020orbital,goodge-a} of an orbitally polarized singlet state as well as the presence of holes on the O $2p$ orbitals.\nImportantly, the persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects in NdNiO$_2$, and consequently the additional hole footprint is more three-dimensional. \nIn CaCuO$_2$, we find that the electron correlations within the $d_{xy}$ and $d_{xz/yz}$ orbitals change the hole-doped wavefunction significantly. Specifically, the double-hole occupation of Cu $d_{x^2-y^2}$\\ is significantly increased, and this can influence the transport properties. \n\nIt was recently proposed that nickelates could be a legitimate realization of the single-band Hubbard model~\\cite{kitatani_nickelate_2020}. \nHowever, our analysis shows that even the three-band Hubbard model~\\cite{eskes1991a}, which successfully describes the hole-doped scenario in cuprates, falls short of describing hole-doped nickelates, and additional orbital degrees of freedom are indeed necessary for the description of the strong multiplet effects we find.\nMuch has been discussed about the importance of rare-earth atoms for the electronic structure of superconducting nickelates, see, e.g.,~\\cite{nomura2021superconductivity}.
\nThe three-dimensional nature of the hole density we find in NdNiO$_2$\\ may also hint at the importance of the out-of-plane Nd ions. \nIt would be interesting to compare the hole density of NdNiO$_2$\\ with that of other iso-structural nickelates such as LaNiO$_2$, where the La $5d$ states are far from the Fermi energy. \nSince the infinite-layered monovalent nickelates are thin films and often grown on substrates, one could ask how the electronic structure of the undoped and doped compounds changes with varying Ni-O bond length. Would this influence the role of electronic correlations in $d^9$ nickelates? We will address these questions in the near future. \n\n\\section*{Conflict of Interest Statement}\nThe authors declare no conflict of interest. \n\n\\section*{Author Contributions}\nVMK and AA designed the project. VMK and NAB performed the calculations. All the authors analysed the data. VMK wrote the paper with inputs from NAB and AA. \n\n\\section*{Funding}\nWe gratefully acknowledge the Max Planck Society for financial support.
\n\n\\section*{Acknowledgments}\nVMK would like to acknowledge Giovanni Li Manni and Oskar Weser for fruitful discussions.\n\n\n\n\n\n\n\n", "meta": {"timestamp": "2022-01-17T02:18:00", "yymm": "2201", "arxiv_id": "2201.05495", "language": "en", "url": "https://arxiv.org/abs/2201.05495"}} {"text": "\\section{Introduction}\n\nIf $C$ is a general curve of genus $g$, equipped with a general map\n$f \\colon C \\to \\pp^3$ of degree $d$,\nit is natural to ask\nwhether the intersection $f(C) \\cap Q$\nof its image with a general quadric $Q$\nis a general collection of $2d$ points on $Q$.\nInterest in this question historically developed as a result of the \nwork of Hirschowitz \\cite{mrat} on the maximal rank conjecture\nfor rational space curves, and the later extension by Ballico\nand Ellia \\cite{ball} of this method to nonspecial space curves: The\nheart of these arguments revolves precisely around understanding the intersection\nof a general curve with a general quadric.\nIn hopes of both simplifying and extending these results,\nEllingsrud and Hirschowitz \\cite{eh}, and later Perrin \\cite{perrin},\nusing the technique of liaison,\ngave partial results on the generality of this intersection.\nHowever, a complete analysis has so far remained conjectural.\nTo state the problem precisely, we make the following definition:\n\n\\begin{defi}\nWe say a stable map $f \\colon C \\to \\pp^r$ from a curve $C$ to $\\pp^r$\n(with $r \\geq 2$)\nis a \\emph{Weak Brill-Noether curve (WBN-curve)} if it\ncorresponds to a point in a component of\n$\\bar{M}_g(\\pp^r, d)$ which both\ndominates $\\bar{M}_g$,\nand whose generic member is a map\nfrom a smooth curve, which is an immersion if $r \\geq 3$,\nand birational onto its image if $r = 2$;\nand which is either\nnonspecial or nondegenerate.\nIn the latter case, we refer to it as a \\emph{Brill-Noether curve} (\\emph{BN-curve}).\n\\end{defi}\n\n\\noindent\nThe celebrated Brill-Noether theorem\nthen asserts that BN-curves of degree~$d$
and genus~$g$ to~$\\pp^r$ exist if and only if\n\\[\\rho(d, g, r) := (r + 1)d - rg - r(r + 1) \\geq 0.\\]\nMoreover, for $\\rho(d, g, r) \\geq 0$, the parameter space\nof BN-curves is irreducible. (In particular, it makes sense\nto talk about a ``general BN-curve''.)\n\n\\medskip\n\nIn this paper, we give a complete answer to the question posed above:\nFor $f \\colon C \\to \\pp^3$\na general BN-curve of degree $d$ and genus $g$\n(with, of course, $\\rho(d, g, 3) \\geq 0$),\nwe show the intersection $f(C) \\cap Q$ is a general collection of $2d$ points on $Q$\nexcept in exactly six cases. Furthermore, in these six cases, we compute precisely\nwhat the intersection is.\n\nA natural generalization of this problem is to study the intersection of\na general BN-curve $f \\colon C \\to \\pp^r$ (for $r \\geq 2$) with a hypersurface $H$\nof degree $n \\geq 1$: In particular, we ask when this intersection consists\nof a general collection of $dn$ points on $H$ (in all but finitely many cases).\n\nFor $r = 2$, the divisor $f(C) \\cap H$ on $H$ is linearly equivalent\nto $\\oo_H(d)$; in particular, it can only be general if $H$ is rational, i.e.\\ if $n = 1$ or $n = 2$.\nIn general, we note that\nin order for the intersection to be general, it is evidently necessary for\n\\[(r + 1)d - (r - 3)g \\sim (r + 1)d - (r - 3)(g - 1) = \\dim \\bar{M}_g(\\pp^r, d)^\\circ \\geq (r - 1) \\cdot dn.\\]\n(Here $\\bar{M}_g(\\pp^r, d)^\\circ$ denotes the component of $\\bar{M}_g(\\pp^r, d)$\ncorresponding to the BN-curves, and $A \\sim B$ denotes that $A$ differs from $B$ by a quantity bounded by\na function of $r$ alone.)\nIf the genus of $C$ is as large as possible (subject to the constraint\nthat $\\rho(d, g, r) \\geq 0$), i.e.\\ if\n\\[g \\sim \\frac{r + 1}{r} \\cdot d,\\]\nthen the intersection can only be general when\n\\[(r + 1) \\cdot d - (r - 3) \\cdot \\left(\\frac{r + 1}{r} \\cdot d \\right) \\gtrsim (r - 1) n \\cdot d;\\]\nor equivalently if\n\\[(r + 1) - (r - 3) \\cdot \\frac{r + 
1}{r} \\geq (r - 1) n \\quad \\Leftrightarrow \\quad n \\leq \\frac{3r + 3}{r^2 - r}.\\]\n\nFor $r = 3$, this implies $n = 1$ or $n = 2$; for $r = 4$, this implies $n = 1$; and\nfor $r \\geq 5$, this is impossible.\n\n\\medskip\n\nTo summarize, there are only\nfive pairs $(r, n)$ where this intersection could be, with the exception of finitely many\n$(d, g)$ pairs,\na collection of $dn$ general points on $H$: The intersection of a plane curve with a line,\nthe intersection of a plane curve with a conic, the intersection of a space curve with a quadric,\nthe intersection of a space curve with a plane, and the intersection of a curve to $\\pp^4$\nwith a hyperplane.\nOur three main theorems (five counting the first two cases which are trivial)\ngive a complete description of this intersection\nin these cases:\n\n\\begin{thm} \\label{main-2}\nLet $f \\colon C \\to \\pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then\nthe intersection $f(C) \\cap Q$, of $C$ with a general conic $Q$, consists\nof a general collection of $2d$ points on~$Q$.\n\\end{thm}\n\n\\begin{thm} \\label{main-2-1}\nLet $f \\colon C \\to \\pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then\nthe intersection $f(C) \\cap L$, of $C$ with a general line $L$, consists\nof a general collection of $d$ points on~$L$.\n\\end{thm}\n\n\\begin{thm} \\label{main-3}\nLet $f \\colon C \\to \\pp^3$ be a general BN-curve of degree~$d$ and genus~$g$. 
Then\nthe intersection $f(C) \\cap Q$, of $C$ with a general quadric $Q$, consists\nof a general collection of $2d$ points on $Q$, unless\n\\[(d, g) \\in \\{(4, 1), (5, 2), (6, 2), (6, 4), (7, 5), (8, 6)\\}.\\]\nAnd conversely, in the above cases, we may describe the intersection\n$f(C) \\cap Q \\subset Q \\simeq \\pp^1 \\times \\pp^1$ in terms of\nthe intrinsic geometry of $Q \\simeq \\pp^1 \\times \\pp^1$ as follows:\n\n\\begin{itemize}\n\\item If $(d, g) = (4, 1)$, then $f(C) \\cap Q$ is the intersection of two general curves\nof bidegree $(2, 2)$.\n\n\\item If $(d, g) = (5, 2)$, then $f(C) \\cap Q$ is a general collection of $10$ points\non a curve of bidegree~$(2, 2)$.\n\n\\item If $(d, g) = (6, 2)$, then $f(C) \\cap Q$ is a general collection of $12$ points\n$p_1, \\ldots, p_{12}$ lying on a curve $D$ which satisfy:\n\\begin{itemize}\n\\item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$).\n\\item The curve $D$ has two nodes (and so is in particular of geometric genus $2$).\n\\item The divisors $\\oo_D(2,2)$ and $p_1 + \\cdots + p_{12}$ are linearly equivalent\nwhen pulled back to the normalization of $D$.\n\\end{itemize}\n\n\\item If $(d, g) = (6, 4)$, then $f(C) \\cap Q$ is the intersection of\ntwo general curves\nof bidegrees $(2, 2)$ and $(3,3)$ respectively.\n\n\\item If $(d, g) = (7, 5)$, then $f(C) \\cap Q$ is a general collection of $14$ points\n$p_1, \\ldots, p_{14}$ lying on a curve $D$ which satisfy:\n\\begin{itemize}\n\\item The curve $D$ is of bidegree $(3, 3)$.\n\\item The divisor $p_1 + \\cdots + p_{14} - \\oo_D(2, 2)$ on $D$\nis effective.\n\\end{itemize}\n\n\\item If $(d, g) = (8, 6)$, then $f(C) \\cap Q$ is a general collection of $16$\npoints on a curve of bidegree~$(3,3)$.\n\\end{itemize}\nIn particular, the above descriptions show $f(C) \\cap Q$ is not a general collection\nof $2d$ points on~$Q$.\n\\end{thm}\n\n\\begin{thm} \\label{main-3-1}\nLet $f \\colon C \\to \\pp^3$ be a general BN-curve of 
degree~$d$ and genus~$g$. Then\nthe intersection $f(C) \\cap H$, of $C$ with a general plane $H$, consists\nof a general collection of $d$ points on $H$, unless\n\\[(d, g) = (6, 4).\\]\nAnd conversely, for $(d, g) = (6, 4)$, the intersection $f(C) \\cap H$\nis a general collection of $6$ points on a conic in $H \\simeq \\pp^2$; in particular,\nit is not a general collection of $d = 6$ points.\n\\end{thm}\n\n\\begin{thm} \\label{main-4}\nLet $f \\colon C \\to \\pp^4$ be a general BN-curve of degree~$d$ and genus~$g$. Then\nthe intersection $f(C) \\cap H$, of $C$ with a general hyperplane $H$, consists\nof a general collection of $d$ points on $H$, unless\n\\[(d, g) \\in \\{(8, 5), (9, 6), (10, 7)\\}.\\]\nAnd conversely, in the above cases, we may describe the intersection\n$f(C) \\cap H \\subset H \\simeq \\pp^3$ in terms of\nthe intrinsic geometry of $H \\simeq \\pp^3$ as follows:\n\n\\begin{itemize}\n\\item If $(d, g) = (8, 5)$, then $f(C) \\cap H$ is the intersection of three general quadrics.\n\\item If $(d, g) = (9, 6)$, then $f(C) \\cap H$ is a general collection of $9$ points\non a curve $E \\subset \\pp^3$ of degree~$4$ and genus~$1$.\n\n\\item If $(d, g) = (10, 7)$, then $f(C) \\cap H$ is a general collection of $10$ points\non a quadric.\n\\end{itemize}\n\\end{thm}\n\nThe above theorems can be proven by studying the normal bundle of\nthe general BN-curve $f \\colon C \\to \\pp^r$: For any hypersurface $S$ of degree $n$,\nand unramified map $f \\colon C \\to \\pp^r$ dimensionally transverse to $S$,\nbasic deformation theory implies that the map\n\\[f \\mapsto (f(C) \\cap S)\\]\n(from the corresponding Kontsevich space of stable maps, to the\ncorresponding symmetric power of $S$)\nis smooth at $[f]$ if and only if\n\\[H^1(N_f(-n)) = 0.\\]\nHere, $N_f(-n) = N_f \\otimes f^* \\oo_{\\pp^r}(-n)$\ndenotes the twist of the normal bundle $N_f$ of the map $f \\colon C \\to \\pp^r$;\nthis is the vector bundle on the domain $C$ of $f$ defined via\n\\[N_f = \\ker(f^* 
\\Omega_{\\pp^r} \\to \\Omega_C)^\\vee.\\]\n\nSince a map between reduced irreducible varieties is dominant\nif and only if it is generically smooth, the map $f \\mapsto (f(C) \\cap S)$ is therefore dominant if and only if\n$H^1(N_f(-n)) = 0$ for $[f]$ general.\n\nThis last condition being visibly open, our problem is thus to prove\nthe existence of an unramified BN-curve $f \\colon C \\to \\pp^r$ of specified degree and genus,\nfor which $H^1(N_f(-n)) = 0$.\nFor this, we will use a variety of techniques, most crucially specialization\nto a map from a reducible curve $X \\cup_\\Gamma Y \\to \\pp^r$.\n\nWe begin, in\nSection~\\ref{sec:reducible}, by giving several tools\nfor studying the normal bundle of a map from a reducible curve.\nThen in\nSection~\\ref{sec:inter}, we review results on the closely-related\n\\emph{interpolation problem} (c.f.\\ \\cite{firstpaper}).\nIn Section~\\ref{sec:rbn}, we review results about when certain maps from reducible\ncurves, of the type we shall use, are BN-curves.\nUsing these techniques, we then concentrate our attention in Section~\\ref{sec:indarg} on\nmaps from reducible curves $X \\cup_\\Gamma Y \\to \\pp^r$ where $Y$ is a line or canonical curve.\nConsideration of these curves enables us to make an inductive argument\nthat reduces our main theorems to finite casework.\n\nThis finite casework is then taken care of in three steps:\nFirst, in Sections~\\ref{sec:hir}--\\ref{sec:hir-3}, we again use degeneration\nto a map from a reducible curve, considering the special case when $Y \\to \\pp^r$ factors through a\nhyperplane.\nSecond, in Section~\\ref{sec:in-surfaces},\nwe specialize to immersions of smooth curves contained in Del Pezzo surfaces, and study\nthe normal bundle of our curve using the\nnormal bundle exact sequence for a curve in a surface.\nLastly, in Section~\\ref{sec:51} we use the geometry of the cubic scroll in $\\pp^4$ to\nconstruct an example of an immersion of a smooth curve $f \\colon C \\hookrightarrow 
\\pp^3$ of degree $5$\nand genus $1$ with $H^1(N_f(-2)) = 0$.\n\nFinally, in Section~\\ref{sec:converses}, we examine each of the cases\nin our above theorems where the intersection is not general. In each\nof these cases, we work out precisely what the intersection is\n(and show that it is not general).\n\n\\subsection*{Conventions}\n\nIn this paper we make the following conventions:\n\n\\begin{itemize}\n\\item We work over an algebraically closed field of characteristic zero.\n\n\\item\nA \\emph{curve} shall refer to a nodal curve, which is assumed to be connected unless otherwise specified.\n\\end{itemize}\n\n\\subsection*{Acknowledgements}\n\nThe author would like to thank Joe Harris for\nhis guidance throughout this research.\nThe author would also like to thank Gavril Farkas, Isabel Vogt, and\nmembers of the Harvard and MIT mathematics departments\nfor helpful conversations;\nand to acknowledge the generous\nsupport both of the Fannie and John Hertz Foundation,\nand of the Department of Defense\n(NDSEG fellowship).\n\n\n\\section{Normal Bundles of Maps from Reducible Curves \\label{sec:reducible}}\n\nIn order to describe the normal bundle of a map from a reducible curve,\nit will be helpful to introduce some notions concerning modifications\nof vector bundles.\nThe interested reader is encouraged to consult \\cite{firstpaper} (Sections~2, 3, and~5),\nwhere these notions are developed in full; we include here only a brief summary, which will\nsuffice for our purposes.\n\n\\begin{defi}\nIf $f \\colon X \\to \\pp^r$ is a map from a scheme $X$ to $\\pp^r$,\nand $p \\in X$ is a point, we write $[T_p X] \\subset \\pp^r$\nfor the \\emph{projective realization of the tangent space} --- i.e.\\ for the\nlinear subspace $L \\subset \\pp^r$ containing $f(p)$ and satisfying\n$T_{f(p)} L = f_*(T_p X)$.\n\\end{defi}\n\n\n\\begin{defi} Let $\\Lambda \\subset \\pp^r$ be a linear subspace, and $f \\colon C \\to \\pp^r$\nbe an unramified map from a curve.\nWrite $U_{f, 
\\Lambda} \\subset C$ for the open subset of points $p \\in C$ so that\nthe projective realization of the tangent space $[T_p C]$ does not meet $\\Lambda$. Suppose that $U_{f, \\Lambda}$\nis nonempty, and contains the singular locus of $C$. Define\n\\[N_{f \\to \\Lambda}|_{U_{f, \\Lambda}} \\subset N_f|_{U_{f, \\Lambda}}\\]\nas the kernel of the differential of the projection from $\\Lambda$\n(which is regular on a neighborhood of $f(U_{f, \\Lambda})$).\nWe then let $N_{f \\to \\Lambda}$ be the unique extension of $N_{f \\to \\Lambda}|_{U_{f, \\Lambda}}$\nto a sub-vector-bundle (i.e.\\ a subsheaf with locally free quotient) of $N_f$ on $C$.\nFor a more thorough discussion of this construction (written for $f$ an immersion\nbut which readily generalizes),\nsee Section~5 of \\cite{firstpaper}.\n\\end{defi}\n\n\\begin{defi} Given a subbundle $\\mathcal{F} \\subset \\mathcal{E}$ of a vector bundle on a scheme $X$,\nand a Cartier divisor $D$ on $X$, we define\n\\[\\mathcal{E}[D \\to \\mathcal{F}]\\]\nas the kernel of the natural map\n\\[\\mathcal{E} \\to (\\mathcal{E} / \\mathcal{F})|_D.\\]\nNote that $\\mathcal{E}[D \\to \\mathcal{F}]$ is naturally isomorphic to $\\mathcal{E}$\non $X \\smallsetminus D$. 
Additionally, note that $\\mathcal{E}[D \\to \\mathcal{F}]$\ndepends only on $\\mathcal{F}|_D$.\nFor a more thorough discussion of this construction, see Sections~2 and~3 of \\cite{firstpaper}.\n\\end{defi}\n\n\\begin{defi}\nGiven a subspace $\\Lambda \\subset \\pp^r$, an unramified map $f \\colon C \\to \\pp^r$ from a curve, and a Cartier divisor $D$ on $C$,\nwe define\n\\[N_f[D \\to \\Lambda] := N_f[D \\to N_{f \\to \\Lambda}].\\]\n\\end{defi}\n\nWe note that these constructions can be iterated on a smooth curve: Given subbundles $\\mathcal{F}_1, \\mathcal{F}_2 \\subset \\mathcal{E}$\nof a vector bundle on a smooth curve,\nthere is a unique subbundle $\\mathcal{F}_2' \\subset \\mathcal{E}[D_1 \\to \\mathcal{F}_1]$\nwhich agrees with $\\mathcal{F}_2$ away from $D_1$ (c.f.\\ Proposition~3.1 of \\cite{firstpaper}).\nWe may then define:\n\\[\\mathcal{E}[D_1 \\to \\mathcal{F}_1][D_2 \\to \\mathcal{F}_2] := \\mathcal{E}[D_1 \\to \\mathcal{F}_1][D_2 \\to \\mathcal{F}_2'].\\]\n\nBasic properties of this construction (as well as precise conditions when such iterated modifications\nmake sense for higher-dimensional\nvarieties) are investigated in \\cite{firstpaper} (Sections~2 and~3).\nFor example, we have natural isomorphisms $\\mathcal{E}[D_1 \\to \\mathcal{F}_1][D_2 \\to \\mathcal{F}_2] \\simeq \\mathcal{E}[D_2 \\to \\mathcal{F}_2][D_1 \\to \\mathcal{F}_1]$\nin several cases, including when $\\mathcal{F}_1 \\subseteq \\mathcal{F}_2$.\n\nUsing these constructions, we may give a partial characterization of the\nnormal bundle $N_f$ of an unramified map from a reducible curve $f \\colon X \\cup_\\Gamma Y \\to \\pp^r$:\n\n\\begin{prop}[Hartshorne-Hirschowitz]\nLet $f \\colon X \\cup_\\Gamma Y \\to \\pp^r$ be an unramified map from a reducible curve.\nWrite $\\Gamma = \\{p_1, p_2, \\ldots, p_n\\}$,\nand for each $i$ let $q_i \\neq f(p_i)$ be a point on the projective realization\n$[T_{p_i} Y]$ of the tangent space to $Y$ at $p_i$. 
Then we have\n\\[N_f|_X = N_{f|_X}(\\Gamma)[p_1 \\to q_1][p_2 \\to q_2] \\cdots [p_n \\to q_n].\\]\n\\end{prop}\n\\begin{proof}\nThis is Corollary~3.2 of \\cite{hh}, re-expressed in the above\nlanguage. (Hartshorne and Hirschowitz state this only for $r = 3$\nand $f$ an immersion; but the argument they give works for $r$ arbitrary.)\n\\end{proof}\n\nOur basic strategy to study the normal bundle of an unramified map from a\nreducible curve $f \\colon C \\cup_\\Gamma D \\to \\pp^r$\nis given by the following lemma:\n\n\\begin{lm} \\label{glue}\nLet $f \\colon C \\cup_\\Gamma D \\to \\pp^r$ be an unramified map from a reducible curve,\nand let $E$ and $F$ be\ndivisors supported on $C \\smallsetminus \\Gamma$ and $D \\smallsetminus \\Gamma$\nrespectively.\nSuppose that the natural map\n\\[\\alpha \\colon H^0(N_{f|_D}(-F)) \\to \\bigoplus_{p \\in \\Gamma} \\left(\\frac{T_p (\\pp^r)}{f_* (T_p (C \\cup_\\Gamma D))}\\right)\\]\nis surjective (respectively injective), and that\n\\begin{gather*}\nH^1(N_f|_D (-F)) = 0 \\quad \\text{(respectively } H^0(N_f|_D (-F)) = H^0(N_{f|_D} (-F))\\text{)} \\\\\nH^1(N_{f|_C} (-E)) = 0 \\quad \\text{(respectively } H^0(N_{f|_C} (-E)) = 0\\text{)}.\n\\end{gather*}\nThen we have\n\\[H^1(N_f(-E-F)) = 0 \\quad \\text{(respectively } H^0(N_f(-E-F)) = 0\\text{)}.\\]\n\\end{lm}\n\\begin{proof}\nWrite $\\mathcal{K}$ for the sheaf supported along $\\Gamma$ whose\nstalk at $p \\in \\Gamma$ is the quotient of tangent spaces:\n\\[\\mathcal{K}_p = \\frac{T_p(\\pp^r)}{f_*(T_p(C \\cup_\\Gamma D))}.\\]\nAdditionally, write $\\mathcal{N}$ for the (not locally-free) subsheaf of $N_f$\n``corresponding to deformations which do not smooth the nodes $\\Gamma$''; or in\nsymbols, as the kernel of the natural map\n\\[N_f \\to T^1_\\Gamma,\\]\nwhere $T^1$ is the Lichtenbaum-Schlessinger $T^1$-functor.\nWe have the following exact sequences of sheaves:\n\\[\\begin{CD}\n0 @>>> \\mathcal{N} @>>> N_f @>>> T^1_\\Gamma @>>> 0 \\\\\n@. @VVV @VVV @| @. 
\\\\\n0 @>>> N_{f|_D} @>>> N_f|_D @>>> T^1_\\Gamma @>>> 0 \\\\\n@. @. @. @. @. \\\\\n0 @>>> \\mathcal{N} @>>> N_{f|_C} \\oplus N_{f|_D} @>>> \\mathcal{K} @>>> 0. \\\\\n\\end{CD}\\]\n\nThe first sequence above is just the definition of $\\mathcal{N}$.\nRestriction of the first sequence to~$D$ yields the second sequence\n(we have $\\mathcal{N}|_D \\simeq N_{f|_D}$);\nthe map between them being of course the restriction map.\nThe final sequence expresses $\\mathcal{N}$ as the gluing of $\\mathcal{N}|_C \\simeq N_{f|_C}$\nto $\\mathcal{N}|_D \\simeq N_{f|_D}$ along $\\mathcal{N}|_\\Gamma \\simeq \\mathcal{K}$.\n\nTwisting everything in sight by $-E-F$, we obtain new sequences:\n\\[\\begin{CD}\n0 @>>> \\mathcal{N}(-E-F) @>>> N_f(-E-F) @>>> T^1_\\Gamma @>>> 0 \\\\\n@. @VVV @VVV @| @. \\\\\n0 @>>> N_{f|_D}(-F) @>>> N_f|_D(-F) @>>> T^1_\\Gamma @>>> 0 \\\\\n@. @. @. @. @. \\\\\n0 @>>> \\mathcal{N}(-E-F) @>>> N_{f|_C}(-E) \\oplus N_{f|_D}(-F) @>>> \\mathcal{K} @>>> 0. \\\\\n\\end{CD}\\]\n\nThe commutativity of the rightmost square in the first diagram implies that\nthe image of $H^0(N_f(-E-F)) \\to H^0(T^1_\\Gamma)$\nis contained in the image of $H^0(N_f|_D(-F)) \\to H^0(T^1_\\Gamma)$.\nConsequently, we have\n\\begin{align}\n\\dim H^0(N_f(-E-F)) &= \\dim H^0(\\mathcal{N}(-E-F)) + \\dim \\im\\left(H^0(N_f(-E-F)) \\to H^0(T^1_\\Gamma)\\right) \\nonumber \\\\\n&\\leq \\dim H^0(\\mathcal{N}(-E-F)) + \\dim \\im\\left(H^0(N_f|_D(-F)) \\to H^0(T^1_\\Gamma)\\right) \\nonumber \\\\\n&= \\dim H^0(\\mathcal{N}(-E-F)) + \\dim H^0(N_f|_D(-F)) - \\dim H^0(N_{f|_D}(-F)). 
\\label{glue-dim}\n\\end{align}\n\nNext, our assumption that $H^0(N_{f|_D}(-F)) \\to H^0(\\mathcal{K})$ is surjective\n(respectively our assumptions that $H^0(N_{f|_C}(-E)) = 0$ and $H^0(N_{f|_D}(-F)) \\to H^0(\\mathcal{K})$ is injective) implies\nin particular that $H^0(N_{f|_C}(-E) \\oplus N_{f|_D}(-F)) \\to H^0(\\mathcal{K})$ is surjective (respectively injective).\n\nIn the ``respectively'' case, this yields $H^0(\\mathcal{N}(-E-F)) = 0$, which combined with \\eqref{glue-dim}\nand our assumption that $H^0(N_f|_D(-F)) = H^0(N_{f|_D}(-F))$ implies $H^0(N_f(-E-F)) = 0$ as desired.\nIn the other case, we have a bit more work to do; the surjectivity of\n$H^0(N_{f|_D}(-F)) \\to H^0(\\mathcal{K})$ yields\n\\[\\dim H^0(\\mathcal{N}(-E-F)) = \\dim H^0(N_{f|_C}(-E) \\oplus N_{f|_D}(-F)) - \\dim H^0(\\mathcal{K});\\]\nor upon rearrangement,\n\\begin{align*}\n\\dim H^0(\\mathcal{N}(-E-F)) - \\dim H^0(N_{f|_D}(-F)) &= \\dim H^0(N_{f|_C}(-E)) - \\dim H^0(\\mathcal{K}) \\\\\n&= \\chi(N_{f|_C}(-E)) - \\chi(\\mathcal{K}).\n\\end{align*}\n(For the last equality, $\\dim H^0(N_{f|_C}(-E)) = \\chi(N_{f|_C}(-E)) + \\dim H^1(N_{f|_C}(-E)) = \\chi(N_{f|_C}(-E))$\nbecause $H^1(N_{f|_C}(-E)) = 0$ by assumption. Additionally, \n$\\dim H^0(\\mathcal{K}) = \\chi(\\mathcal{K})$\nbecause $\\mathcal{K}$ is punctual.)\n\nSubstituting this into \\eqref{glue-dim}, and noting that\n$\\dim H^0(N_f|_D(-F)) = \\chi(N_f|_D(-F))$ because\n$H^1(N_f|_D(-F)) = 0$ by assumption, we obtain:\n\\begin{align}\n\\dim H^0(N_f(-E-F)) &\\leq \\dim H^0(N_f|_D(-F)) + \\dim H^0(\\mathcal{N}(-E-F)) - \\dim H^0(N_{f|_D}(-F)) \\nonumber \\\\\n&= \\chi(N_f|_D(-F)) + \\chi(N_{f|_C}(-E)) - \\chi(\\mathcal{K}) \\nonumber \\\\\n&= \\chi(N_f|_D(-F)) + \\chi(N_f|_C(-E - \\Gamma)) \\nonumber \\\\\n&= \\chi(N_f(-E - F)). 
\\label{glue-done}\n\\end{align}\nFor the final two equalities, we have used the exact sequences of sheaves\n\\begin{gather*}\n0 \\to N_f|_C(-E - \\Gamma) \\to N_{f|_C}(-E) \\to \\mathcal{K} \\to 0 \\\\[1ex]\n0 \\to N_f|_C(-E - \\Gamma) \\to N_f(-E-F) \\to N_f|_D(-F) \\to 0;\n\\end{gather*}\nwhich are just twists by $-E-F$ of the exact sequences:\n\\begin{gather*}\n0 \\to N_f|_C(-\\Gamma) \\to N_{f|_C} \\to \\mathcal{K} \\to 0 \\\\[1ex]\n0 \\to N_f|_C(-\\Gamma) \\to N_f \\to N_f|_D \\to 0.\n\\end{gather*}\n\n\\noindent\nTo finish, we note that, by \\eqref{glue-done},\n\\[\\dim H^1(N_f(-E-F)) = \\dim H^0(N_f(-E-F)) - \\chi(N_f(-E - F)) \\leq 0,\\]\nand so\n$H^1(N_f(-E-F)) = 0$ as desired.\n\\end{proof}\n\nIn the case where $f|_D$ factors through a hyperplane,\nthe hypotheses of Lemma~\\ref{glue} become easier to check:\n\n\\begin{lm} \\label{hyp-glue}\nLet $f \\colon C \\cup_\\Gamma D \\to \\pp^r$ be an unramified map from a reducible curve,\nsuch that $f|_D$ factors as a composition of $f_D \\colon D \\to H$ with the inclusion of a hyperplane $\\iota \\colon H \\subset \\pp^r$,\nwhile $f|_C$ is transverse to $H$ along $\\Gamma$.\nLet $E$ and $F$ be\ndivisors supported on $C \\smallsetminus \\Gamma$ and $D \\smallsetminus \\Gamma$\nrespectively.\nSuppose that, for some $i \\in \\{0, 1\\}$,\n\\[H^i(N_{f_D}(-\\Gamma-F)) = H^i(\\oo_D(1)(\\Gamma-F)) = H^i(N_{f|_C} (-E)) = 0.\\]\nThen we have\n\\[H^i(N_f(-E-F)) = 0.\\]\n\\end{lm}\n\\begin{proof}\nIf $i = 0$, we note that $H^0(\\oo_D(1)(\\Gamma - F)) = 0$ implies\n$H^0(\\oo_D(1)(-F)) = 0$. In particular, using\nthe exact sequences\n\\[\\begin{CD}\n0 @>>> N_{f_D}(-F) @>>> N_{f|_D}(-F) @>>> \\oo_D(1)(-F) @>>> 0 \\\\\n@. @| @VVV @VVV @. 
\\\\\n0 @>>> N_{f_D}(-F) @>>> N_f|_D(-F) @>>> \\oo_D(1)(\\Gamma - F) @>>> 0,\n\\end{CD}\\]\nwe conclude from the first sequence that\n$H^0(N_{f_D}(-F)) \\to H^0(N_{f|_D}(-F))$ is an isomorphism, and\nfrom the $5$-lemma applied to the corresponding map\nbetween long exact sequences that $H^0(N_{f|_D}(-F)) = H^0(N_f|_D(-F))$.\n\nSimilarly, when $i = 1$, we note that\n$H^1(N_{f_D}(-\\Gamma-F)) = 0$ implies $H^1(N_{f_D}(-F)) = 0$;\nwe thus conclude from the second sequence that $H^1(N_f|_D(-F)) = 0$.\n\nIt thus remains to check that the map $\\alpha$ in Lemma~\\ref{glue}\nis injective if $i = 0$ and surjective if $i = 1$. For this we use\nthe commutative diagram\n\\[\\begin{CD}\n\\displaystyle H^0(N_{f_D}(-F)) @>\\beta>> N_{f_D}|_\\Gamma \\simeq \\displaystyle \\bigoplus_{p \\in \\Gamma} \\left(\\frac{T_p H}{f_*(T_p D)}\\right) \\\\\n@VgVV @VV{\\iota_*}V \\\\\n\\displaystyle H^0(N_{f|_D}(-F)) @>\\alpha>> \\displaystyle \\bigoplus_{p \\in \\Gamma} \\left(\\frac{T_p (\\pp^r)}{f_*(T_p (C \\cup_\\Gamma D))}\\right).\n\\end{CD}\\]\nSince $f|_C$ is transverse to $H$ along $\\Gamma$, the\nmap $\\iota_*$ above is an isomorphism. 
In particular,\nsince $g$ is an isomorphism when $i = 0$, it suffices to check\nthat $\\beta$ is injective if $i = 0$ and surjective if $i = 1$.\nBut using the exact sequence\n\\[0 \\to N_{f_D}(-\\Gamma-F) \\to N_{f_D}(-F) \\to N_{f_D}|_\\Gamma \\to 0,\\]\nthis follows from our assumption that $H^i(N_{f_D}(-\\Gamma-F)) = 0$.\n\\end{proof}\n\n\\section{Interpolation \\label{sec:inter}}\n\nIf we generalize $N_f(-n)$ to $N_f(-D)$, where $D$ is a general effective divisor,\nwe get the problem of ``interpolation.'' Geometrically, this corresponds to\nasking if there is a curve of degree $d$ and genus $g$ which passes through a\ncollection of points which are general in $\\pp^r$\n(as opposed to general in a hypersurface $S$).\nThis condition is analogous in some sense to the conditions\nof semistability and section-semistability\n(see Section~3 of~\\cite{nasko}), as well as to the \nRaynaud condition (property $\\star$ of \\cite{raynaud});\nalthough we shall not make use of these analogies here.\n\n\\begin{defi} \\label{def:inter} We say a vector bundle $\\mathcal{E}$ on a curve $C$ \\emph{satisfies interpolation}\nif it is nonspecial, and for a general effective divisor $D$ of any degree,\n\\[H^0(\\mathcal{E}(-D)) = 0 \\tor H^1(\\mathcal{E}(-D)) = 0.\\]\n\\end{defi}\n\nWe have the following results on interpolation from \\cite{firstpaper}.\nTo rephrase them in our current language,\nnote that if $f \\colon C \\to \\pp^r$ is a general BN-curve for $r \\geq 3$, then $f$ is an immersion,\nso $N_f$ coincides with the normal bundle $N_{f(C)/\\pp^r}$ of the image.\nNote also that, from Brill-Noether theory,\na general BN-curve $f \\colon C \\to \\pp^r$ of degree $d$ and genus $g$\nis nonspecial (i.e.\\ satisfies $H^1(f^* \\oo_{\\pp^r}(1)) = 0$) if and only if\n$d \\geq g + r$.\n\n\\begin{prop}[Theorem~1.3 of~\\cite{firstpaper}] \\label{inter}\nLet $f \\colon C \\to \\pp^r$ (for $r \\geq 3$) be a general BN-curve of degree $d$ and genus $g$, where\n\\[d \\geq g + r.\\]\nThen 
$N_f$ satisfies interpolation, unless\n\\[(d, g, r) \\in \\{(5, 2, 3), (6, 2, 4), (7, 2, 5)\\}.\\]\n\\end{prop}\n\n\\begin{prop}[Proposition~4.12 of~\\cite{firstpaper}] \\label{twist}\nLet $\\mathcal{E}$ be a vector bundle on a curve $C$, and $D$ be a divisor on $C$.\nIf $\\mathcal{E}$ satisfies interpolation and\n\\[\\chi(\\mathcal{E}(-D)) \\geq (\\rk \\mathcal{E}) \\cdot (\\operatorname{genus} C),\\]\nthen $\\mathcal{E}(-D)$ satisfies interpolation. In particular,\n\\[H^1(\\mathcal{E}(-D)) = 0.\\]\n\\end{prop}\n\n\\begin{lm} \\label{g2} Let $f \\colon C \\to \\pp^r$ (for $r \\in \\{3, 4, 5\\}$)\nbe a general BN-curve of degree $r + 2$ and genus $2$.\nThen $H^1(N_f(-1)) = 0$.\n\\end{lm}\n\\begin{proof}\nWe will show that there exists\nan immersion $C \\hookrightarrow \\pp^r$, which is a BN-curve of degree $r + 2$ and genus $2$, and whose image\nmeets a hyperplane $H$ transversely\nin a general collection of $r + 2$ points. For this, we first find a rational normal\ncurve $R \\subset H$ passing through $r + 2$ general points, which is possible\nby Corollary~1.4 of~\\cite{firstpaper}.\nThis rational\nnormal curve is then the hyperplane section of some rational surface scroll $S \\subset \\pp^r$\n(and we can freely choose the projective equivalence class of $S$).\n\nIt thus suffices to prove that there exists a smooth curve $C \\subset S$,\nfor which $C \\subset S \\subset \\pp^r$ is a BN-curve of degree $r + 2$ and genus $2$,\nsuch that $C \\cap (H \\cap S)$ is a set of $r + 2$ general points on $H \\cap S$;\nor alternatively such that the map\n\\[C \\mapsto (C \\cap (H \\cap S)),\\]\nfrom the Hilbert scheme of curves on $S$, to the Hilbert scheme of points\non $H \\cap S$,\nis smooth at $[C]$; this in turn would follow from\n$H^1(N_{C/S}(-1)) = 0$.\n\nBut by Corollary~13.3 of \\cite{firstpaper}, the general BN-curve $C' \\subset \\pp^r$\n(which is an immersion since $r \\geq 3$) of degree $r + 2$ and genus $2$\nin $\\pp^r$ is contained in some rational surface\nscroll $S'$, 
and satisfies $\\chi(N_{C'/S'}) = 11$. Since we can choose $S$ projectively\nequivalent to $S'$,\nwe may thus find a BN-curve $C \\subset S$ of degree~$r + 2$\nand genus~$2$ with $\\chi(N_{C/S}) = 11$. But then,\n\\[\\chi(N_{C/S}(-1)) = 11 - d \\geq g \\quad \\Rightarrow \\quad H^1(N_{C/S}(-1)) = 0. \\qedhere\\]\n\\end{proof}\n\n\\noindent\nCombining these results, we obtain:\n\n\\begin{lm} \\label{from-inter} Let $f \\colon C \\to \\pp^r$ (for $r \\geq 3$)\nbe a general BN-curve of degree $d$ and genus $g$.\nSuppose that $d \\geq g + r$.\n\\begin{itemize}\n\\item If $r = 3$ and $g = 0$, then $H^1(N_f(-2)) = 0$. In fact, $N_f(-2)$ satisfies interpolation.\n\\item If $r = 3$, then $H^1(N_f(-1)) = 0$. In fact, $N_f(-1)$ satisfies interpolation\nexcept when $(d, g) = (5, 2)$.\n\\item If $r = 4$ and $d \\geq 2g$, then $H^1(N_f(-1)) = 0$. In fact, $N_f(-1)$ satisfies interpolation\nexcept when $(d, g) = (6, 2)$.\n\\end{itemize}\n\\end{lm}\n\\begin{proof}\nWhen $(d, g, r) \\in \\{(5, 2, 3), (6, 2, 4)\\}$, the desired result follows from Lemma~\\ref{g2}.\nOtherwise,\nfrom Proposition~\\ref{inter}, we know that $N_f$ satisfies interpolation.\nHence, the desired conclusion follows by applying\nProposition~\\ref{twist}, recalling that $N_f$ has rank $r - 1$ and degree $(r + 1)d + 2g - 2$: If $r = 3$, then\n\\begin{align*}\n\\chi(N_f(-1)) &= 2d \\geq 2g = (r - 1) g\\\\\n\\chi(N_f(-2)) &= 0 = (r - 1)g;\n\\end{align*}\nand if $r = 4$ and $d \\geq 2g$, then\n\\[\\chi(N_f(-1)) = 2d - g + 1 \\geq 3g = (r - 1)g. \\qedhere \\]\n\\end{proof}\n\n\\begin{lm} \\label{addone-raw}\nSuppose $f \\colon C \\cup_u L \\to \\pp^3$ is an unramified map\nfrom a reducible curve, with $L \\simeq \\pp^1$, and $u$ a single point,\nand $f|_L$ of degree~$1$.\nWrite $v \\neq f(u)$ for some other point on $f(L)$. 
If\n\\[H^1(N_{f|_C}(-2)(u)[2u \\to v]) = 0,\\]\nthen we have\n\\[H^1(N_f(-2)) = 0.\\]\n\\end{lm}\n\\begin{proof}\nWe apply Lemma~8.5 of \\cite{firstpaper} (which is stated for $f$ an immersion,\nin which case $N_f = N_{C \\cup L}$ and $N_{f|_C} = N_C$, but the same proof works\nwhenever $f$ is unramified); we take $N_C' = N_{f|_C}(-2)$\nand $\\Lambda_1 = \\Lambda_2 = \\emptyset$. This implies $N_f(-2)$ satisfies\ninterpolation (c.f.\\ Definition~\\ref{def:inter}) provided that $N_{f|_C}(-2)(u)[u \\to v][u \\to v]$ satisfies interpolation.\nBut we have\n\\[\\chi(N_f(-2)) = \\chi(N_{f|_C}(-2)(u)[u \\to v][u \\to v]) = 0;\\]\nso both of these interpolation statements are equivalent to the vanishing of $H^1$.\nThat is, we have $H^1(N_f(-2)) = 0$, provided that\n\\[H^1(N_{f|_C}(-2)(u)[u \\to v][u \\to v]) = H^1(N_{f|_C}(-2)(u)[2u \\to v]) = 0,\\]\nas desired.\n\\end{proof}\n\nWe finish this section with the following proposition,\nwhich immediately implies Theorems~\\ref{main-2} and~\\ref{main-2-1}:\n\n\\begin{prop} \\label{p2}\nLet $f \\colon C \\to \\pp^2$ be an unramified map from a curve. Then $N_f(-2)$ satisfies interpolation.\nIn particular $H^1(N_f(-2)) = H^1(N_f(-1)) = 0$.\n\\end{prop}\n\\begin{proof}\nBy adjunction,\n\\[N_f \\simeq K_C \\otimes f^* K_{\\pp^2}^{-1} \\simeq K_C(3) \\imp N_f(-2) \\simeq K_C(1).\\]\nBy Serre duality,\n\\[H^1(K_C(1)) \\simeq H^0(\\oo_C(-1))^\\vee = 0;\\]\nwhich, since $K_C(1)$ is a line bundle, implies it satisfies interpolation.\n\\end{proof}\n\n\\section{Reducible BN-Curves \\label{sec:rbn}}\n\n\\begin{defi} Let $\\Gamma \\subset \\pp^r$ be a finite set of $n$ points. 
A pair\n$(f \\colon C \\to \\pp^r, \\Delta \\subset C_{\\text{sm}})$,\nwhere $C$ is a curve, $f$ is a map from $C$ to $\\pp^r$, and $\\Delta$ is a subset of $n$ points on the smooth locus $C_{\\text{sm}}$,\nshall be called a \\emph{marked curve (respectively marked BN-curve, respectively marked WBN-curve) passing through $\\Gamma$}\nif $f \\colon C \\to \\pp^r$ is a map from a curve (respectively a BN-curve, respectively a WBN-curve) and $f(\\Delta) = \\Gamma$.\n\nGiven a marked curve $(f \\colon C \\to \\pp^r, \\Delta)$ passing through $\\Gamma$,\nwe realize $\\Gamma$ as a subset of $C$ via\n$\\Gamma \\simeq \\Delta \\subset C$.\n\nFor $p \\in \\Gamma$,\nwe then define the \\emph{tangent line $T_p (f, \\Gamma)$ at $p$} to be the unique line $\\ell \\subset \\pp^r$ through $p$\nwith $T_p \\ell = f_* T_p C$.\n\\end{defi}\n\nLet $\\Gamma \\subset \\pp^r$ be a finite set of $n$ general points,\nand $(f_i \\colon C_i \\to \\pp^r, \\Gamma_i)$ be marked WBN-curves passing through $\\Gamma$.\nWe then write $C_1 \\cup_\\Gamma C_2$ for the curve obtained\nfrom $C_1$ and $C_2$ by gluing \n$\\Gamma_1$ to $\\Gamma_2$ via the isomorphism $\\Gamma_1 \\simeq \\Gamma \\simeq \\Gamma_2$.\nThe maps $f_i$ give rise to a map $f \\colon C_1 \\cup_\\Gamma C_2 \\to \\pp^r$\nfrom a reducible curve.\nThen we have the following result:\n\n\\begin{prop}[Theorem~1.3 of \\cite{rbn}] \\label{prop:glue}\nSuppose that, for at least one $i \\in \\{1, 2\\}$, we have\n\\[(r + 1) d_i - r g_i + r \\geq rn,\\]\nwhere $d_i$ and $g_i$ denote the degree and genus of $f_i$.\nThen\n$f \\colon C_1 \\cup_\\Gamma C_2 \\to \\pp^r$ is a WBN-curve.\n\\end{prop}\n\n\\begin{prop} \\label{prop:interior}\nIn Proposition~\\ref{prop:glue}, suppose that $[f_1, \\Gamma_1]$ is general in some component\nof the space of marked WBN-curves passing through $\\Gamma$,\nand that $H^1(N_{f_2}) = 0$. 
Then $H^1(N_f) = 0$.\n\\end{prop}\n\\begin{proof}\nThis follows from combining Lemmas~3.2 and~3.4 of~\\cite{rbn}.\n\\end{proof}\n\nThe following lemmas give information about the spaces of marked BN-curves\npassing through small numbers of points.\n\n\\begin{lm} \\label{small-irred}\nLet $\\Gamma \\subset \\pp^r$ be a general set of $n \\leq r + 2$ points,\nand $d$ and $g$ be integers with $\\rho(d, g, r) \\geq 0$.\nThen the space of marked BN-curves of degree $d$ and genus $g$ to $\\pp^r$\npassing through $\\Gamma$ is irreducible.\n\\end{lm}\n\\begin{proof}\nFirst note that, since $n \\leq r + 2$, any $n$ points in linear general position\nare related by an automorphism of $\\pp^r$. Fix some ordering on $\\Gamma$.\n\nThe space of BN-curves of degree $d$ and genus $g$\nis irreducible, and the source of the generic BN-curve is irreducible;\nconsequently the space of such BN-curves with an ordered collection of $n$\nmarked points, and the open subset thereof where the images of the marked points\nare in linear general position, is irreducible.\nIt follows that the space of such marked curves endowed with an automorphism\nbringing the images of the ordered marked points to~$\\Gamma$ (respecting our fixed ordering on $\\Gamma$)\nis also irreducible.\nBut by applying the automorphism to the curve and forgetting the order of the marked points,\nthis latter\nspace dominates the space of such BN-curves passing through~$\\Gamma$;\nthe space of such BN-curves passing through~$\\Gamma$ is thus irreducible.\n\\end{proof}\n\n\\begin{lm} \\label{gen-tang-rat}\nLet $\\Gamma \\subset \\pp^r$ be a general set of $n \\leq r + 2$ points, and\n$\\{\\ell_p : p \\in \\Gamma\\}$ be a set of lines with $p \\in \\ell_p$.\n\nThen the general marked rational normal curve\npassing through $\\Gamma$ has tangent lines at each point $p \\in \\Gamma$ distinct from $\\ell_p$.\n\\end{lm}\n\\begin{proof}\nSince the intersection of dense opens is a dense open, it suffices to show\nthe general marked 
rational normal curve $(f \\colon C \\to \\pp^r, \\Delta)$ passing through $\\Gamma$\nhas tangent line at $p$ distinct from $\\ell_p$\nfor any one $p \\in \\Gamma$.\n\nFor this we consider the map, from the space of such marked rational normal curves, to the space\nof lines through $p$, which associates to the curve its tangent line at $p$.\nBasic deformation theory implies this map is smooth (and thus nonconstant) at $(f, \\Delta)$\nso long as $H^1(N_f(-\\Delta)(-q)) = 0$, where $q \\in \\Delta$ is the point sent to $p$ under $f$,\nwhich follows from combining Propositions~\\ref{inter} and~\\ref{twist}.\n\\end{proof}\n\n\\begin{lm} \\label{contains-rat} A general BN-curve $f \\colon C \\to \\pp^r$ can be specialized to an unramified map from a\nreducible curve $f^\\circ \\colon X \\cup_\\Gamma Y \\to \\pp^r$,\nwhere $f^\\circ|_X$ is a rational normal curve.\n\\end{lm}\n\\begin{proof}\nWrite $d$ and $g$ for the degree and genus of $f$.\nWe first note it suffices to produce a marked WBN-curve $(f^\\circ_2 \\colon Y \\to \\pp^r, \\Gamma_2)$ of degree $d - r$\nand genus $g' \\geq g - r - 1$, passing through a set\n$\\Gamma$ of $g + 1 - g'$ general points.\nIndeed, $g + 1 - g' \\leq g + 1 - (g - r - 1) = r + 2$ by assumption;\nby Lemma~\\ref{gen-tang-rat}, there is a marked rational normal curve $(f^\\circ_1 \\colon X \\to \\pp^r, \\Gamma_1)$ passing through $\\Gamma$,\nwhose tangent lines at $\\Gamma$ are distinct from the tangent lines of $(f_2^\\circ, \\Gamma_2)$ at~$\\Gamma$.\nThen $f^\\circ \\colon X \\cup_\\Gamma Y \\to \\pp^r$ is unramified (as promised by our conventions)\nand gives the required specialization by \nProposition~\\ref{prop:glue}.\n\nIt remains to construct $(f_2^\\circ \\colon Y \\to \\pp^r, \\Gamma_2)$. 
If $g \\leq r$, then we note that since\n$d$ and $g$ are integers,\n\\[d \\geq d - \\frac{\\rho(d, g, r)}{r + 1} = g + r - \\frac{g}{r + 1} \\imp d \\geq g + r \\quad \\Leftrightarrow \\quad g + 1 \\leq (d - r) + 1.\\]\nConsequently, by inspection,\nthere is a marked rational curve $(f_2^\\circ \\colon Y \\to \\pp^r, \\Gamma_2)$ of degree $d - r$ passing through a set $\\Gamma$ of $g + 1$ general points.\n\nOn the other hand, if $g \\geq r + 1$, then we\nnote that\n\\[\\rho(d - r, g - r - 1, r) = (r + 1)(d - r) - r(g - r - 1) - r(r + 1) = (r + 1)d - rg - r(r + 1) = \\rho(d, g, r) \\geq 0.\\]\nWe may therefore let $(f_2^\\circ \\colon Y \\to \\pp^r, \\Gamma_2)$ be a marked BN-curve of degree $d - r$ and genus $g - r - 1$\npassing through a set $\\Gamma$ of $r + 2$ general points.\n\\end{proof}\n\n\\begin{lm} \\label{gen-tang}\nLet $\\Gamma \\subset \\pp^r$ be a general set of $n \\leq r + 2$ points,\n$\\{\\ell_p : p \\in \\Gamma\\}$ be a set of lines with $p \\in \\ell_p$,\nand $d$ and $g$ be integers with $\\rho(d, g, r) \\geq 0$.\n\nThen the general marked BN-curve $(f \\colon C \\to \\pp^r, \\Delta)$ of degree $d$ and genus $g$\npassing through $\\Gamma$\nhas tangent lines at every $p \\in \\Gamma$ which are distinct from $\\ell_p$.\n\\end{lm}\n\\begin{proof}\nBy Lemma~\\ref{contains-rat}, we may specialize $f \\colon C \\to \\pp^r$\nto $f^\\circ \\colon X \\cup_\\Gamma Y \\to \\pp^r$ where $f^\\circ|_X$ is a rational\nnormal curve. 
Specializing the marked points $\\Delta$ to lie on $X$\n(which can be done since a marked rational normal curve can pass through $n \\leq r + 2$ general points\nby Proposition~\\ref{inter}),\nit suffices to consider the case when $f$ is a rational\nnormal curve.\nBut this case was already considered in Lemma~\\ref{gen-tang-rat}.\n\\end{proof}\n\n\\begin{lm} \\label{contains-rat-sp}\nLemma~\\ref{contains-rat} remains true\neven if we instead ask\n$f^\\circ|_X$ to be an arbitrary nondegenerate specialization\nof a rational normal curve.\n\\end{lm}\n\\begin{proof}\nWe employ the construction used in the proof of Lemma~\\ref{contains-rat},\nbut flipping the order in which we construct $X$ and $Y$:\nFirst we fix $(f_1^\\circ \\colon X \\to \\pp^r, \\Gamma_1)$; then we construct $(f_2^\\circ \\colon Y \\to \\pp^r, \\Gamma_2)$\npassing through $\\Gamma$,\nwhose tangent lines at\n$\\Gamma$ are distinct from the tangent lines of $(f_1^\\circ, \\Gamma_1)$ at $\\Gamma$\nthanks to Lemma~\\ref{gen-tang}.\n\\end{proof}\n\n\\section{Inductive Arguments \\label{sec:indarg}}\n\nLet $f \\colon C \\cup_u L \\to \\pp^r$ be an unramified map from a reducible curve,\nwith $L \\simeq \\pp^1$, and $u$ a single point,\nand $f|_L$ of degree~$1$.\nBy Proposition~\\ref{prop:glue}, these\ncurves are BN-curves.\n\n\\begin{lm} \\label{p4-add-line} If $H^1(N_{f|_C}(-1)) = 0$,\nthen $H^1(N_f(-1)) = 0$.\n\\end{lm}\n\\begin{proof}\nThis is immediate from Lemma~\\ref{glue} (taking $D = L$).\n\\end{proof}\n\n\\begin{lm} \\label{p3-add-line} If $H^1(N_{f|_C}(-2)) = 0$,\nand $f$ is a general map of the above type extending $f|_C$, then $H^1(N_f(-2)) = 0$.\n\\end{lm}\n\\begin{proof}\nBy Lemma~\\ref{addone-raw}, it suffices to prove that\nfor $(u, v) \\in C \\times \\pp^3$ general,\n\\[H^1(N_{f|_C}(-2)(u)[2u \\to v]) = 0.\\]\nSince $H^1(N_{f|_C}(-2)) = 0$, we also have $H^1(N_{f|_C}(-2)(u)) = 0$;\nin particular, Riemann-Roch implies\n\\begin{align*}\n\\dim H^0(N_{f|_C}(-2)(u)) &= \\chi(N_{f|_C}(-2)(u)) = 
2 \\\\\n\\dim H^0(N_{f|_C}(-2)) &= \\chi(N_{f|_C}(-2)) = 0.\n\\end{align*}\n\nThe above dimension\nestimates imply there is a unique section $s \\in \\pp H^0(N_{f|_C}(-2)(u))$\nwith $s|_u \\in N_{f|_C \\to v}|_u$; it remains to show that for $(u, v)$\ngeneral, $\\langle s|_{2u} \\rangle \\neq N_{f|_C \\to v}|_{2u}$.\nFor this, it suffices to verify that if $v_1$ and $v_2$\nare points with $\\{v_1, v_2, f(2u)\\}$ coplanar --- but\nneither $\\{v_1, v_2, f(u)\\}$, nor $\\{v_1, f(2u)\\}$, nor $\\{v_2, f(2u)\\}$\ncollinear; and $\\{v_1, v_2, f(3u)\\}$ not coplanar --- then\n$N_{f|_C \\to v_1}|_{2u} \\neq N_{f|_C \\to v_2}|_{2u}$.\n\nTo show this, we choose a local coordinate $t$ on $C$,\nand coordinates on an appropriate affine open $\\aa^3 \\subset \\pp^3$, so that:\n\\begin{align*}\nf(t) &= (t, t^2 + O(t^3), O(t^3)) \\\\\nv_1 &= (1 , 0 , 1) \\\\\nv_2 &= (-1 , 0 , 1).\n\\end{align*}\n\nIt remains to check that the vectors\n$f(t) - v_1$, $f(t) - v_2$, and $\\frac{d}{dt} f(t)$\nare linearly independent at first order in $t$. 
That is,\nwe want to check that the determinant\n\\[\\left|\\begin{array}{ccc}\nt - 1 & t^2 + O(t^3) & O(t^3) - 1 \\\\\nt + 1 & t^2 + O(t^3) & O(t^3) - 1 \\\\\n1 & 2t + O(t^2) & O(t^2)\n\\end{array}\\right|\\not\\equiv 0 \\mod t^2.\\]\nOr, reducing the entries of the left-hand side modulo $t^2$, that\n\\[-4t = \\left|\\begin{array}{ccc}\nt - 1 & 0 & - 1 \\\\\nt + 1 & 0 & - 1 \\\\\n1 & 2t & 0\n\\end{array}\\right|\\not\\equiv 0 \\mod t^2,\\]\nwhich is clear.\n\\end{proof}\n\n\\begin{lm} \\label{add-can-3}\nLet $\\Gamma \\subset \\pp^3$ be a set of $5$ general points,\n$(f_1 \\colon C \\to \\pp^3, \\Gamma_1)$ be a general marked BN-curve\npassing through $\\Gamma$, and\n$(f_2 \\colon D \\to \\pp^3, \\Gamma_2)$\nbe a general marked canonical curve\npassing through $\\Gamma$.\nIf $H^1(N_{f_1}(-2)) = 0$,\nthen $f \\colon C \\cup_\\Gamma D \\to \\pp^r$ satisfies $H^1(N_f(-2)) = 0$.\n\\end{lm}\n\n\\begin{rem}\nBy Lemma~\\ref{small-irred}, it makes sense to speak of a\n``general marked BN-curve (respectively general marked canonical curve)\npassing through $\\Gamma$'';\nby Lemma~\\ref{gen-tang}, the resulting curve $f$ is unramified.\n\\end{rem}\n\n\\begin{proof}\nBy Lemma~\\ref{glue}, our problem reduces\nto showing that the natural map\n\\[H^0(N_{f_2} (-2)) \\to \\bigoplus_{p \\in \\Gamma} \\left(\\frac{T_p (\\pp^r)}{f_* (T_p (C \\cup_\\Gamma D))}\\right)\\]\nis surjective, and that\n\\[H^1(N_f|_D (-2)) = 0.\\]\nThese conditions both being open, we may invoke\nLemma~\\ref{contains-rat} to specialize\n$(f_1 \\colon C \\to \\pp^3, \\Gamma_1)$ to a marked BN-curve with reducible source\n$(f_1^\\circ \\colon C_1 \\cup_\\Delta C_2 \\to \\pp^3, \\Gamma_1^\\circ)$,\nwith $f_1^\\circ|_{C_1}$ a rational\nnormal curve and $\\Gamma_1^\\circ \\subset C_1$.\nIt thus suffices to prove the above statements in the case when $f_1 = f_1^\\circ$\nis a rational normal curve.\n\nFor this, we first observe that $f(C) \\cap f(D) = \\Gamma$:\nSince there is a unique rational normal curve 
through any $6$ points,\nand a $1$-dimensional family of possible sixth points on $D$\nonce $D$ and $\\Gamma$ are fixed --- but there is a $2$-dimensional family\nof rational normal curves through $5$ points\nin linear general position --- \ndimension counting shows $f_1(C)$ and $f_2(D)$ cannot meet at a sixth point\nfor $([f_1, \\Gamma_1], [f_2, \\Gamma_2])$ general.\nIn particular, $f$ is an immersion.\n\nNext, we observe that $f(D)$ is contained in a $5$-dimensional space\nof cubics. Since it is one linear condition, for a cubic that vanishes on $f(D)$,\nto be tangent to $f(C)$ at a point of $\\Gamma$, there is necessarily a cubic\nsurface $S$ containing $f(D)$ which is tangent to $f(C)$ at four points of $\\Gamma$.\n\nIf $S$ were a multiple of $Q$, say $Q \\cdot H$ where $H$ is a hyperplane, then\nsince $f(C)$ is transverse to $Q$, it would follow that $H$ contains four points of $\\Gamma$.\nBut any $4$ points on $f(C)$ are in linear general position. Consequently, $S$ is not\na multiple of $Q$. Or equivalently, $f(D) = Q \\cap S$ gives a presentation of $f(D)$\nas a complete intersection.\n\nIf $S$ were tangent to $f(C)$ at all five points of $\\Gamma$, then restricting the\nequation of $S$ to $f(C)$ would give a section of $\\oo_C(3) \\simeq \\oo_{\\pp^1}(9)$\nwhich vanished with multiplicity two at five points. Since the only such section\nis the zero section, we would conclude that $f(C) \\subset S$.\nBut then $f(C)$ would meet $f(D)$ at all $6$ points of $f(C) \\cap Q$,\nwhich we already ruled out above.\nThus, $S$ is tangent to $f(C)$ at precisely four points of $\\Gamma$.\n\nWrite $\\Delta$ for the divisor on $D$ defined by these four points,\nand $p$ for the fifth point. 
Note that for $q \\neq p$ in the tangent line to $(f_1, \\Delta \\cup \\{p\\})$\nat $p$,\n\\begin{align*}\nN_f|_D &\\simeq \\big(N_{f(D)/S}(\\Delta + p) \\oplus N_{f(D)/Q}(p)\\big)[p \\to q] \\\\\n&\\simeq \\big(\\oo_D(2)(\\Delta + p) \\oplus \\oo_D(3)(p)\\big)[p \\to q] \\\\\n\\Rightarrow \\ N_f|_D(-2) &\\simeq \\big(\\oo_D(\\Delta + p) \\oplus \\oo_D(1)(p)\\big)[p \\to q] \\\\\n&\\simeq \\big(\\oo_D(\\Delta + p) \\oplus K_D(p)\\big)[p \\to q].\n\\end{align*}\n\nBy Riemann-Roch, $\\dim H^0(K_D(p)) = 4 = \\dim H^0(K_D)$; so every section\nof $K_D(p)$ vanishes at $p$. Consequently,\nthe fiber of every section of $\\oo_D(\\Delta + p) \\oplus K_D(p)$\nat $p$ lies in the fiber of the first factor. Since the fiber $N_{f_2 \\to q}|_p$\ndoes not lie in the fiber of the first factor, we have an isomorphism\n\\[H^0(N_f|_D(-2)) \\simeq H^0\\Big(\\big(\\oo_D(\\Delta + p) \\oplus K_D(p)\\big)(-p)\\Big) \\simeq H^0(\\oo_D(\\Delta)) \\oplus H^0(K_D).\\]\nConsequently,\n\\[\\dim H^0(N_f|_D(-2)) = \\dim H^0(\\oo_D(\\Delta)) + \\dim H^0(K_D) = 1 + 4 = 5 = \\chi(N_f|_D(-2)),\\]\nwhich implies\n\\[H^1(N_f|_D(-2)) = 0.\\]\n\n\\noindent\nNext, we prove the surjectivity of the evaluation map\n\\[\\text{ev} \\colon H^0(N_{f_2}(-2)) \\to \\bigoplus_{x \\in \\Gamma} \\left(\\frac{T_x (\\pp^r)}{f_* (T_x (C \\cup_\\Gamma D))}\\right)\\]\nFor this, we use the isomorphism\n\\[N_{f_2}(-2) \\simeq N_{f(D)/\\pp^3}(-2) \\simeq N_{f(D)/S}(-2) \\oplus N_{f(D)/Q}(-2) \\simeq \\oo_D \\oplus K_D.\\]\nThe restriction of $\\text{ev}$ to $H^0(N_{f(D)/S}(-2) \\simeq \\oo_D)$\nmaps trivially into the quotient $\\frac{T_x (\\pp^r)}{f_*(T_x (C \\cup_\\Gamma D))}$\nfor $x \\in \\Delta$, since $S$ is tangent to $f(C)$ along $\\Delta$.\nBecause $S$ is not tangent to $f(C)$ at $p$,\nthe restriction of $\\text{ev}$ to $H^0(N_{f(D)/S}(-2) \\simeq \\oo_D)$ thus\nmaps isomorphically onto the factor $\\frac{T_p (\\pp^r)}{f_*(T_p (C \\cup_\\Gamma D))}$.\nIt is therefore sufficient to show that the evaluation 
map
\[H^0(N_{f(D)/Q}(-2) \simeq K_D) \to \bigoplus_{x \in \Delta} \left(\frac{T_x (\pp^r)}{f_*(T_x (C \cup_\Gamma D))}\right)\]
is surjective. Or equivalently, since $Q$ is not tangent to $f(C)$ at any $x \in \Delta$,
that the evaluation map
\[H^0(K_D) \to K_D|_\Delta\]
is surjective. But this is clear since $\dim H^0(K_D) = 4 = \# \Delta$
and $\Delta$ is a general effective divisor of degree~$4$ on $D$.
\end{proof}

\begin{lm} \label{to-3-skew}
Let $f \colon C \to \pp^4$ be a general BN-curve of arbitrary degree and genus.
Then we can specialize $f$ to an unramified map from a reducible curve
$f^\circ \colon C' \cup L_1 \cup L_2 \cup L_3 \to \pp^4$,
so that each $L_i$ is rational, $f^\circ|_{L_i}$ is of degree~$1$,
and the images of the $L_i$ under $f^\circ$ are in linear general position.
\end{lm}

\begin{proof}
By Lemma~\ref{contains-rat-sp},
our problem reduces to the case where $f \colon C \to \pp^4$ is a rational normal curve.

In this case, we begin by taking three general lines in $\pp^4$.
The locus of lines meeting
any one of our lines has class $\sigma_2$ in the Chow ring of
the Grassmannian $\mathbb{G}(1, 4)$ of lines in $\pp^4$.
By the standard calculus of Schubert cycles,
we have $\sigma_2^2 = \sigma_{3,1} + \sigma_{2,2}$, and hence $\sigma_2^3 = \sigma_{3,3} \neq 0$
in the Chow ring of $\mathbb{G}(1, 4)$.
Thus, there exists a line meeting each of our three given lines.
The (immersion of the)
union of these four lines is then a specialization of a rational
normal curve.
\end{proof}

\begin{lm} \label{add-can-4}
Let $\Gamma \subset \pp^4$ be a set of $6$ points in linear general position;
$(f_1 \colon C \to \pp^4, \Gamma_1)$ be either a general marked
immersion of three disjoint lines,
or a general marked BN-curve in $\pp^4$, passing through $\Gamma$;
and $(f_2 \colon D \to \pp^4, \Gamma_2)$ be a general marked canonical curve
passing through~$\Gamma$.
If $H^1(N_{f_1}(-1)) = 0$, then
$f \colon
C \\cup_\\Gamma D \\to \\pp^4$\nsatisfies $H^1(N_f(-1)) = 0$.\n\\end{lm}\n\n\\begin{proof}\nBy Lemma~\\ref{glue}, it suffices to prove that the natural map\n\\[H^0(N_{f_2}(-1)) \\to \\bigoplus_{p \\in \\Gamma} \\left(\\frac{T_p(\\pp^r)}{f_*(T_p(C \\cup_\\Gamma D))}\\right)\\]\nis surjective, and that\n\\[H^1(N_f|_D(-1)) = 0.\\]\nThese conditions both being open,\nwe may apply Lemma~\\ref{to-3-skew}\nto specialize $(f_1, \\Gamma_1)$ to a marked curve\nwith reducible source\n$(f_1^\\circ \\colon C_1 \\cup C_2 \\to \\pp^r, \\Gamma_1^\\circ)$,\nwith $C_1 = L_1 \\cup L_2 \\cup L_3$ a union of $3$ disjoint lines,\nand $\\Gamma_1^\\circ \\subset C_1$ with $2$ points on each line.\nIt thus suffices to prove the above statements in the case when $C = C_1 = L_1 \\cup L_2 \\cup L_3$\nis the union of $3$ general lines.\nWrite $\\Gamma = \\Gamma_1 \\cup \\Gamma_2 \\cup \\Gamma_3$, where $\\Gamma_i \\subset L_i$.\n\nIt is well known that every canonical curve in $\\pp^4$ is the complete intersection of three quadrics;\nwrite $V$ for the vector space of quadrics vanishing along $f(D)$.\nFor any $2$-secant line $L$ to $f(D)$, it is evident that it is one linear condition\non quadrics in $V$ to contain $L$; and moreover, that general lines impose independent\nconditions unless there is a quadric which contains all $2$-secant lines.\nNow the projection from a general line in $\\pp^4$ of $f(D)$ yields a nodal plane curve\nof degree $8$ and geometric genus $5$, which in particular must have\n\\[\\binom{8 - 1}{2} - 5 = 16\\]\nnodes.\nConsequently, the secant variety to $f(D)$\nis a hypersurface of degree $16$; and is thus not contained in a quadric.\nThus, vanishing on general lines impose independent\nconditions on~$V$. 
As $f(L_1)$, $f(L_2)$, and $f(L_3)$ are general,
we may thus choose a basis $V = \langle Q_1, Q_2, Q_3 \rangle$
so that $Q_i$ contains $L_j$ if and only if $i \neq j$
(where the $Q_i$ are uniquely defined up to scaling).
By construction, $f(D)$ is the complete intersection $Q_1 \cap Q_2 \cap Q_3$.

We now consider the direct sum decomposition
\[N_{f_2} \simeq N_{f(D)/\pp^4} \simeq N_{f(D)/(Q_1 \cap Q_2)} \oplus N_{f(D)/(Q_2 \cap Q_3)} \oplus N_{f(D)/(Q_3 \cap Q_1)},\]
which induces a direct sum decomposition
\[N_f|_D \simeq N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3) \oplus N_{f(D)/(Q_2 \cap Q_3)}(\Gamma_1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(\Gamma_2).\]
To show that $H^1(N_f|_D(-1)) = 0$, it is sufficient
by symmetry to show that
\[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) = 0.\]
But we have
\[N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1) \simeq \oo_D(2)(\Gamma_3)(-1) \simeq \oo_D(1)(\Gamma_3) = K_D(\Gamma_3);\]
so by Serre duality,
\[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) \simeq H^0(\oo_D(-\Gamma_3))^\vee = 0.\]

\noindent
Next, we examine the evaluation map
\[H^0(N_{f_2}(-1)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}\right).\]
For this, we use the direct sum decomposition
\[N_{f_2}(-1) \simeq N_{f(D)/\pp^4}(-1) \simeq N_{f(D)/(Q_1 \cap Q_2)}(-1) \oplus N_{f(D)/(Q_2 \cap Q_3)}(-1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(-1),\]
together with the decomposition (for $p \in \Gamma_i$):
\[\frac{T_p (\pp^r)}{f_*(T_p(C \cup_\Gamma D))} \simeq \bigoplus_{j \neq i} N_{f(D)/(Q_i \cap Q_j)}|_p.\]
This reduces our problem to showing (by symmetry) the surjectivity of
\[H^0(N_{f(D)/(Q_1 \cap Q_2)}(-1)) \to \bigoplus_{p \in \Gamma_1 \cup \Gamma_2} N_{f(D)/(Q_1 \cap Q_2)}|_p.\]
But for this, it is sufficient to note that
$\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points
on $D$, and
\[N_{f(D)/(Q_1 \cap Q_2)}(-1) \simeq
\oo_D(2)(-1) = \oo_D(1) \simeq K_D.\]
It thus remains to show
\[H^0(K_D) \to K_D|_{\Gamma_1 \cup \Gamma_2}\]
is surjective, where $\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points
on $D$. But this is clear because $K_D$ is a line bundle
and $\dim H^0(K_D) = 5 \geq 4$.
\end{proof}

\begin{cor} \label{finite} To prove the main theorems (excluding the ``conversely\ldots'' part),
it suffices to verify them in the following special cases:
\begin{enumerate}
\item For Theorem~\ref{main-3}, it suffices to consider the cases where $(d, g)$ is one of:
\begin{gather*}
(5, 1), \quad (7, 2), \quad (6, 3), \quad (7, 4), \quad (8, 5), \quad (9, 6), \quad (9, 7), \\
(10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14).
\end{gather*}
\item For Theorem~\ref{main-3-1}, it suffices to consider the cases where $(d, g)$ is one of:
\[(7, 5), \quad (8, 6).\]
\item For Theorem~\ref{main-4}, it suffices to consider the cases where $(d, g)$ is one of:
\[(9, 5), \quad (10, 6), \quad (11, 7), \quad (12, 9), \quad (16, 15), \quad (17, 16), \quad (18, 17).\]
\end{enumerate}
In proving the theorems in each of these cases, we may suppose the
corresponding theorem holds for curves of smaller genus.
\end{cor}
\begin{proof}
For Theorem~\ref{main-3}, note that by Lemma~\ref{p3-add-line}
and Proposition~\ref{prop:glue}, it suffices to show Theorem~\ref{main-3} for
each pair $(d, g)$, where $d$ is minimal (i.e.,\ where $\rho(d, g) = \rho(d, g, r = 3) \geq 0$
and $(d, g)$ is not in our list of counterexamples; but either $\rho(d - 1, g) < 0$,
or $(d - 1, g)$ is in our list of counterexamples).

If $\rho(d, g) \geq 0$ and $g \geq 15$, then $(d - 6, g - 8)$ is not in our list of counterexamples,
and $\rho(d - 6, g - 8) = \rho(d, g) \geq 0$.
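The invariance $\rho(d - 6, g - 8) = \rho(d, g)$ for $r = 3$ (and the analogous shift $(d - 8, g - 10)$ for $r = 4$ used later in the proof) can be checked symbolically from the standard Brill--Noether number $\rho(d, g, r) = g - (r + 1)(g - d + r)$; a sketch using sympy:

```python
import sympy as sp

d, g = sp.symbols('d g')

def rho(d, g, r):
    # The standard Brill-Noether number rho(d, g, r) = g - (r+1)(g - d + r).
    return g - (r + 1) * (g - d + r)

# The shifts (d, g) -> (d - 6, g - 8) in P^3 and (d, g) -> (d - 8, g - 10)
# in P^4 leave rho unchanged:
print(sp.expand(rho(d - 6, g - 8, 3) - rho(d, g, 3)))   # 0
print(sp.expand(rho(d - 8, g - 10, 4) - rho(d, g, 4)))  # 0
```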
By induction, we know $H^1(N_f(-2)) = 0$
for $f$ a general BN-curve of degree $d - 6$ and genus $g - 8$.
Applying Lemma~\ref{add-can-3} (and Proposition~\ref{prop:glue}), we conclude the desired result.
If $\rho(d, g) \geq 0$ and $g \leq 14$, and $d$ is minimal as above,
then either $(d, g)$ is in our above list, or
$(d, g) \in \{(3, 0), (9, 8), (12, 11)\}$. The case of $(d, g) = (3, 0)$ follows from
Lemma~\ref{from-inter}.
In the remaining two cases,
Lemma~\ref{add-can-3} again implies the desired result (using Theorem~\ref{main-3}
for $(d', g') = (d - 6, g - 8)$ as our inductive hypothesis).

For Theorem~\ref{main-3-1}, we note that if $H^1(N_f(-2)) = 0$, then
it follows that $H^1(N_f(-1)) = 0$. It therefore suffices to check
the list of counterexamples appearing in Theorem~\ref{main-3},
apart from the counterexample $(d, g) = (6, 4)$ listed in Theorem~\ref{main-3-1}.
The cases $(d, g) \in \{(4, 1), (5, 2), (6, 2)\}$ follow from Lemma~\ref{from-inter},
so we only have to consider the remaining cases (which form the given list).

Finally, for Theorem~\ref{main-4}, Lemma~\ref{p4-add-line} implies it suffices
to show Theorem~\ref{main-4} for each pair $(d, g)$ with $d$ minimal.
If $\rho(d, g) \geq 0$ and $g \geq 18$, then $(d - 8, g - 10)$ is not in our list of counterexamples,
and $\rho(d - 8, g - 10) = \rho(d, g) \geq 0$. By induction, we know $H^1(N_f(-1)) = 0$
for $f$ a general BN-curve of degree $d - 8$ and genus $g - 10$.
Applying Lemma~\ref{add-can-4}, we conclude the desired result.
If $\rho(d, g) \geq 0$ and $g \leq 17$, and $d$ is minimal as above,
then either $(d, g)$ is in our above list, or
\[(d, g) \in \{(4, 0), (5, 1), (6, 2), (7, 3), (8, 4)\},\]
or
\[(d, g) \in \{(11, 8), (12, 10), (13, 11), (14, 12), (15, 13), (16, 14)\}.\]

In the first set of cases above, Lemma~\ref{from-inter} implies the desired
result.
But in the last set of cases,\nLemma~\\ref{add-can-4} again implies the desired result. Here, for $(d, g) = (11, 8)$,\nour inductive hypothesis is that\n$H^1(N_f(-1)) = 0$ for $f \\colon L_1 \\cup L_2 \\cup L_3 \\to \\pp^4$\nan immersion of three skew lines.\nIn the remaining cases, we use Theorem~\\ref{main-3}\nfor $(d', g') = (d - 8, g - 10)$ as our inductive hypothesis.\n\\end{proof}\n\n\\section{Adding Curves in a Hyperplane \\label{sec:hir}}\n\nIn this section, we explain an inductive strategy involving adding\ncurves contained in hyperplanes, which will help resolve many of our\nremaining cases.\n\n\\begin{lm} \\label{smoothable} Let $H \\subset \\pp^r$ (for $r \\geq 3$) be a hyperplane,\nand let $(f_1 \\colon C \\to \\pp^r, \\Gamma_1)$ and\n\\mbox{$(f_2 \\colon D \\to H, \\Gamma_2)$} be marked curves,\nboth passing through a set $\\Gamma \\subset H \\subset \\pp^r$ of $n \\geq 1$ points.\n\nAssume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$,\nthat $\\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse\nto $H$ along $\\Gamma$. 
If\n\\[H^1(N_{f_1}(-\\Gamma)) = 0 \\quad \\text{and} \\quad n \\geq g - d + r,\\]\nthen $f \\colon C \\cup_\\Gamma D \\to \\pp^r$\nsatisfies $H^1(N_f) = 0$\nand is a limit of unramified maps from smooth curves.\n\nIf in addition $f_1$ is an immersion,\n$f(C) \\cap f(D)$ is exactly equal to $\\Gamma$, and\n$\\oo_D(1)(\\Gamma)$ is very ample away from $\\Gamma$ --- i.e.\\ if\n$\\dim H^0(\\oo_D(1)(\\Gamma)(-\\Delta)) = \\dim H^0(\\oo_D(1)(\\Gamma)) - 2$\nfor any effective divisor $\\Delta$ of degree $2$ supported on $D \\smallsetminus \\Gamma$ --- then\n$f$ is a limit of immersions of smooth curves.\n\\end{lm}\n\n\\begin{rem} \\label{very-ample-away}\nThe condition that $\\oo_D(1)(\\Gamma)$ is very ample away from $\\Gamma$\nis immediate when $\\oo_D(1)$ is very ample (which in particular happens for $r \\geq 4$).\nIt is also immediate when $n \\geq g$, in which case $\\oo_D(1)(\\Gamma)$ is a general line bundle\nof degree $d + n \\geq g + r \\geq g + 3$ and is thus very ample.\n\\end{rem}\n\n\\begin{proof}\nNote that $N_{f_1}$ is a subsheaf of $N_f|_C$ with punctual quotient\n(supported at $\\Gamma$). 
Twisting down by $\\Gamma$, we obtain a short exact sequence\n\\[0 \\to N_{f_1}(-\\Gamma) \\to N_f|_C(-\\Gamma) \\to * \\to 0,\\]\nwhere $*$ denotes a punctual sheaf, which in particular has vanishing $H^1$.\nSince $H^1(N_{f_1}(-\\Gamma)) = 0$ by assumption,\nwe conclude that $H^1(N_f|_C(-\\Gamma)) = 0$ too.\nSince $f_2$ is a general BN-curve, $H^1(N_{f_2}) = 0$.\nThe exact sequences\n\\begin{gather*}\n0 \\to N_f|_C(-\\Gamma) \\to N_f \\to N_f|_D \\to 0 \\\\\n0 \\to N_{f_2} \\to N_f|_D \\to N_H|_D(\\Gamma) \\simeq \\oo_D(1)(\\Gamma) \\to 0\n\\end{gather*}\nthen imply that, to check $H^1(N_f) = 0$, it suffices to check $H^1(\\oo_D(1)(\\Gamma)) = 0$.\nThey moreover imply that\nevery section of $N_H|_D(\\Gamma) \\simeq \\oo_D(1)(\\Gamma)$ lifts to a section\nof $N_f$, which, as $H^1(N_f) = 0$, lifts to a global deformation\nof $f$.\n\nTo check $f$\nis a limit of unramified maps from smooth curves, it remains to see that the\ngeneric section of $N_H|_D(\\Gamma) \\simeq \\oo_D(1)(\\Gamma)$ corresponds\nto a first-order deformation which smoothes the nodes $\\Gamma$ --- or equivalently does not vanish at $\\Gamma$.\n\nSince by assumption $f_1$ is an immersion\nand there are no other nodes where $f(C)$ and $f(D)$ meet besides $\\Gamma$,\nto see that $f$\nis a limit of immersions of smooth curves, it remains to note in addition that\nthe generic section of $N_H|_D(\\Gamma) \\simeq \\oo_D(1)(\\Gamma)$\nseparates the points of $D$ identified under $f_2$ --- which is true by assumption that $\\oo_D(1)(\\Gamma)$ is very ample\naway from $\\Gamma$.\n\nTo finish the proof, it thus suffices to check $H^1(\\oo_D(1)(\\Gamma)) = 0$,\nand that the generic section of $\\oo_D(1)(\\Gamma)$ does not vanish at any point $p \\in \\Gamma$.\nEquivalently, it suffices to check $H^1(\\oo_D(1)(\\Gamma)(-p)) = 0$ for $p \\in \\Gamma$.\nSince $f_2$ is a general BN-curve, we obtain\n\\[\\dim H^1(\\oo_D(1)) = \\max(0, g - d + (r - 1)) \\leq n - 1.\\]\nTwisting by $\\Gamma \\smallsetminus \\{p\\}$, 
which is a set of $n - 1$ general points, we therefore obtain\n\\[H^1(\\oo_D(1)(\\Gamma \\smallsetminus \\{p\\})) = 0,\\]\nas desired.\n\\end{proof}\n\n\\begin{lm} \\label{lm:hir}\nLet $k \\geq 1$ be an integer, $\\iota \\colon H \\hookrightarrow \\pp^r$ ($r \\geq 3$) be a hyperplane,\nand $(f_1 \\colon C \\to \\pp^r, \\Gamma_1)$ and\n\\mbox{$(f_2 \\colon D \\to H, \\Gamma_2)$} be marked curves,\nboth passing through a set $\\Gamma \\subset H \\subset \\pp^r$ of $n \\geq 1$ points.\n\nAssume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$,\nthat $\\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse\nto $H$ along $\\Gamma$.\nSuppose moreover that:\n\\begin{enumerate}\n\\item The bundle $N_{f_2}(-k)$ satisfies interpolation.\n\\item We have \n$H^1(N_{f_1}(-k)) = 0$.\n\\item We have\n\\[(r - 2) n \\leq rd - (r - 4)(g - 1) - k \\cdot (r - 2) d.\\]\n\\item We have\n\\[n \\geq \\begin{cases}\ng & \\text{if $k = 1$;} \\\\\ng - 1 + (k - 1)d & \\text{if $k > 1$.}\n\\end{cases}\\]\n\\end{enumerate}\nThen $f \\colon C \\cup_\\Gamma D \\to \\pp^r$ satisfies\n\\[H^1(N_f(-k)) = 0.\\]\n\\end{lm}\n\n\\begin{proof}\nSince $N_{f_2}(-k)$ satisfies interpolation by assumption and\n\\[(r - 2) n \\leq \\chi(N_{f_2}(-k)) = rd - (r - 4)(g - 1) - k \\cdot (r - 2) d,\\]\nwe conclude that $H^1(N_{f_2}(-k)(-\\Gamma)) = 0$.\nSince $H^1(N_{f_1} (-k)) = 0$ by assumption,\nto apply Lemma~\\ref{hyp-glue} it remains to check\n\\[H^1(\\oo_D(1 - k)(\\Gamma)) = 0.\\]\nIt is therefore sufficient for\n\\[n = \\#\\Gamma \\geq \\dim H^1(\\oo_D(1 - k)) = \\begin{cases}\ng & \\text{if $k = 1$;} \\\\\ng - 1 + (k - 1)d & \\text{if $k > 1$.}\n\\end{cases}\\]\nBut this is precisely our final assumption.\n\\end{proof}\n\n\\section{Curves of Large Genus \\label{sec:hir-2}}\n\nIn this section, we will deal with a number of our special\ncases, of larger genus. 
Taking care of these cases separately\nis helpful --- since in the remaining cases, we will not\nhave to worry about whether our curve is a BN-curve, thanks to\nresults of~\\cite{iliev} and~\\cite{keem} on the irreducibility\nof the Hilbert scheme of curves.\n\n\\begin{lm} \\label{bn3}\nLet $H \\subset \\pp^3$ be a plane, $\\Gamma \\subset H \\subset \\pp^3$ a set of $6$ general points,\n$(f_1 \\colon C \\to \\pp^3, \\Gamma_1)$ a general marked BN-curve passing through $\\Gamma$\nof degree and genus one of\n\\[(d, g) \\in \\{(6, 1), (7, 2), (8, 4), (9, 5), (10, 6)\\},\\]\nand $(f_2 \\colon D \\to H, \\Gamma_2)$\na general marked canonical curve\npassing through $\\Gamma$.\nThen $f \\colon C \\cup_\\Gamma D \\to \\pp^3$ is a BN-curve which satisfies $H^1(N_f) = 0$.\n\\end{lm}\n\\begin{proof}\nNote that the conclusion is an open condition; we may therefore freely specialize $(f_1, \\Gamma_1)$.\nWrite $\\Gamma = \\{s, t, u, v, w, x\\}$.\n\nIn the case $(d, g) = (6, 1)$, we specialize $(f_1, \\Gamma_1)$\nto \n$(f_1^\\circ \\colon C^\\circ = C_1 \\cup_p C_2 \\cup_{\\{q, r\\}} C_3 \\to \\pp^3, \\Gamma_1^\\circ)$,\nwhere $f_1^\\circ|_{C_1}$ is a conic, $f_1^\\circ|_{C_2}$ is a line with $C_2$ joined to $C_1$\nat one point $p$, and $f_1^\\circ|_{C_3}$ is a rational normal curve with $C_3$ joined to $C_1$ at two points $\\{q, r\\}$;\nnote that $f_1^\\circ$ is a BN-curve by (iterative application of) Proposition~\\ref{prop:glue}.\nWe suppose that $(f_1^\\circ|_{C_1}, \\Gamma_1^\\circ \\cap C_1)$ passes through $\\{s, t\\}$,\nwhile $(f_1^\\circ|_{C_2}, \\Gamma_1^\\circ \\cap C_2)$ passes through $u$,\nand $(f_1^\\circ|_{C_3}, \\Gamma_1^\\circ \\cap C_3)$ passes through $\\{v, w, x\\}$;\nit is clear this can be done so $\\{s, t, u, v, w, x\\}$ are general.\nWriting\n\\[f^\\circ \\colon C^\\circ \\cup_\\Gamma D = C_2 \\cup_{\\{p, u\\}} C_3 \\cup_{\\{q, r, v, w, x\\}} (C_1 \\cup_{\\{s, t\\}} D) \\to \\pp^3,\\]\nit suffices by Propositions~\\ref{prop:glue} 
and~\\ref{prop:interior} to\nshow that $f^\\circ|_{C_1 \\cup D}$ is a BN-curve which satisfies $H^1(N_{f^\\circ|_{C_1 \\cup D}}) = 0$.\n\nFor $(d, g) = (8, 4)$, we specialize $(f_1, \\Gamma_1)$ to\n$(f_1^\\circ \\colon C^\\circ = C_1 \\cup_{\\{p, q, r\\}} C_2 \\cup_{\\{y, z, a\\}} C_3 \\to \\pp^3, \\Gamma_1^\\circ)$,\nwhere $f_1^\\circ|_{C_1}$ is a conic, and\n$f_1^\\circ|_{C_2}$ and $f_1^\\circ|_{C_3}$ are rational normal curves,\nwith both $C_2$ and $C_3$ joined to $C_1$ at $3$ points (at $\\{p, q, r\\}$ and $\\{y, z, a\\}$ respectively);\nnote that $f_1^\\circ$ is a BN-curve by (iterative application of) Proposition~\\ref{prop:glue}.\nWe suppose that $(f_1^\\circ|_{C_1}, \\Gamma_1^\\circ \\cap C_1)$ passes through $\\{s, t\\}$,\nwhile $(f_1^\\circ|_{C_2}, \\Gamma_1^\\circ \\cap C_2)$ passes through $\\{u, v\\}$,\nand $(f_1^\\circ|_{C_3}, \\Gamma_1^\\circ \\cap C_3)$ passes through $\\{w, x\\}$;\nit is clear this can be done so $\\{s, t, u, v, w, x\\}$ are general.\nWriting\n\\[f^\\circ \\colon C^\\circ \\cup_\\Gamma D = C_2 \\cup_{\\{p, q, r, u, v\\}} C_3 \\cup_{\\{w, x, y, z, a\\}} (C_1 \\cup_{\\{s, t\\}} D) \\to \\pp^3,\\]\nit again suffices by Propositions~\\ref{prop:glue} and~\\ref{prop:interior} to\nshow that $f^\\circ|_{C_1 \\cup D}$ is a BN-curve which satisfies $H^1(N_{f^\\circ|_{C_1 \\cup D}}) = 0$.\n\nFor this, we first note that $f^\\circ|_{C_1 \\cup D}$ is a curve of degree $6$ and genus $4$,\nand that the moduli space of smooth curves of degree $6$ and genus $4$ in $\\pp^3$ is\nirreducible (they are all canonical curves).\nMoreover, by Lemma~\\ref{smoothable} (c.f.\\ Remark~\\ref{very-ample-away} and note that\n$\\oo_D(1) \\simeq K_D$ is very ample),\n$f^\\circ|_{C_1 \\cup D}$ is a limit\nof immersions of smooth curves, and satisfies\n$H^1(N_{f^\\circ|_{C_1 \\cup D}}) = 0$; this completes the proof.\n\\end{proof}\n\n\\begin{lm} \\label{bn4}\nLet $H \\subset \\pp^4$ be a hyperplane, $\\Gamma \\subset H \\subset \\pp^4$ a set of $7$ general 
points,\n\\mbox{$(f_1 \\colon C \\to \\pp^4, \\Gamma_1)$} a general marked BN-curve passing through $\\Gamma$\nof degree and genus one of\n\\[(d, g) \\in \\{(7, 3), (8, 4), (9, 5)\\},\\]\nand $(f_2 \\colon D \\to H, \\Gamma_2)$\na general marked BN-curve of degree~$9$ and genus~$6$\npassing through $\\Gamma$.\nThen $f \\colon C \\cup_\\Gamma D \\to \\pp^4$ is a BN-curve which satisfies $H^1(N_f) = 0$.\n\\end{lm}\n\\begin{proof}\nAgain, we note that the conclusion is an open statement; we may therefore freely\nspecialize $(f_1, \\Gamma_1)$. Write $\\Gamma = \\{t, u, v, w, x, y, z\\}$.\n\nFirst, we claim it suffices to consider the case $(d, g) = (7, 3)$.\nIndeed, suppose $(f_1, \\Gamma_1)$ is a marked BN-curve of degree $7$ and genus $3$ passing through $\\Gamma$.\nThen $f_1' \\colon C \\cup_{\\{p, q\\}} L \\to \\pp^4$ and $f_1'' \\colon C \\cup_{\\{p, q\\}} L \\cup_{\\{r, s\\}} L' \\to \\pp^4$\n(where $f_1'|_L$ and $f_1''|_L$ and $f_1''|_{L'}$ are lines with $L$ and $L'$ joined to $C$ at two points)\nare BN-curves by Proposition~\\ref{prop:glue}, of degree and genus\n$(8, 4)$ and $(9, 5)$ respectively. 
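The degree and genus bookkeeping here follows from the usual count for nodal unions: gluing components of genera $g_1$ and $g_2$ at $m$ points yields arithmetic genus $g_1 + g_2 + m - 1$. A quick sketch (the helper \texttt{attach} is hypothetical, for illustration only):

```python
# A sketch; the helper attach is hypothetical, for illustration only.
# Gluing a 2-secant line (degree 1, genus 0) to a curve of degree d and
# genus g at m = 2 points gives degree d + 1 and arithmetic genus
# g + 0 + (m - 1) = g + 1.
def attach(d, g, m=2):
    return d + 1, g + 0 + (m - 1)

print(attach(7, 3))           # (8, 4)
print(attach(*attach(7, 3)))  # (9, 5)
```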
If $f \\colon C \\cup_\\Gamma D \\to \\pp^4$ is a BN-curve\nwith $H^1(N_f) = 0$, then invoking Propositions~\\ref{prop:glue}\nand~\\ref{prop:interior}, both\n\\begin{gather*}\nf' \\colon (C \\cup_{\\{p, q\\}} L) \\cup_\\Gamma D = (C \\cup_\\Gamma D) \\cup_{\\{p, q\\}} L \\to \\pp^4 \\\\\n\\text{and} \\quad f'' \\colon (C \\cup_{\\{p, q\\}} L \\cup_{\\{r, s\\}} L') \\cup_\\Gamma D = (C \\cup_\\Gamma D) \\cup_{\\{p, q\\}} L \\cup_{\\{r, s\\}} L' \\to \\pp^4\n\\end{gather*}\nare BN-curves, which satisfy\n$H^1(N_{f'}) = H^1(N_{f''}) = 0$.\n\nSo it remains to consider the case $(d, g) = (7, 3)$.\nIn this case, we begin by specializing $(f_1, \\Gamma_1)$ to\n$(f_1^\\circ \\colon C^\\circ = C' \\cup_{\\{p, q\\}} L \\to \\pp^4, \\Gamma_1^\\circ)$,\nwhere $f_1^\\circ|_{C'}$ is a general BN-curve of degree $6$ and genus $2$, and $f_1^\\circ|_L$ is a line with $L$\njoined to $C'$ at two points $\\{p, q\\}$.\nWe suppose that $(f_1^\\circ|_L, \\Gamma_1^\\circ \\cap L)$ passes through $t$,\nwhile $(f_1^\\circ|_{C'}, \\Gamma_1^\\circ \\cap C')$ passes through $\\{u, v, w, x, y, z\\}$;\nwe must check this can be done so $\\{t, u, v, w, x, y, z\\}$ are general.\nTo see this, it suffices to show\nthat the intersection $f_1^\\circ(C') \\cap H$ and the points $\\{f_1^\\circ(p), f_1^\\circ(q)\\}$\nindependently general. 
In other words,
we are claiming that the map
\[(f_1^\circ|_{C'} \colon C' \to \pp^4, p, q) \mapsto (f_1^\circ|_{C'}(C') \cap H, f_1^\circ|_{C'}(p), f_1^\circ|_{C'}(q))\]
is dominant; equivalently, that it is smooth at a generic point $(f_1^\circ|_{C'}, p, q)$.
But the obstruction to smoothness lies in
$H^1(N_{f_1^\circ|_{C'}}(-1)(-p-q))$, which vanishes
because $N_{f_1^\circ|_{C'}}(-1)$ satisfies interpolation by Lemma~\ref{from-inter}.

We next specialize $(f_2, \Gamma_2)$ to $(f_2^\circ \colon D^\circ = D' \cup_\Delta D_1 \to H, \Gamma_2^\circ)$,
where $f_2^\circ|_{D'}$ is a general BN-curve of degree
$6$ and genus $3$, and $f_2^\circ|_{D_1}$ is a rational normal curve with $D_1$
joined to $D'$ at a set $\Delta$ of $4$ points;
note that $f_2^\circ$ is a BN-curve by Proposition~\ref{prop:glue}.
We suppose that $(f_2^\circ|_{D_1}, \Gamma_2^\circ \cap D_1)$ passes through $t$,
while $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ passes through $\{u, v, w, x, y, z\}$;
this can be done so $\{t, u, v, w, x, y, z\}$ are still general,
since $f_2^\circ|_{D'}$ (marked at general points of the source) can pass through $6$ general points,
while $f_2^\circ|_{D_1}$ (again marked at general points of the source) can pass through $5$ general points,
both by Corollary~1.4 of~\cite{firstpaper}.

In addition, $(f_2^\circ|_{D_1}, (\hat{t} = \Gamma_2^\circ \cap D_1) \cup \Delta)$ has a general tangent line at $t$;
to see this, note that we are asserting that the map sending $(f_2^\circ|_{D_1}, \hat{t} \cup \Delta)$
to its tangent line at $t$ is dominant;
equivalently, that it is smooth at a generic point of the source.
But the obstruction to smoothness lies in
$H^1(N_{f_2^\circ|_{D_1}}(-\Delta - 2\hat{t} \, ))$, which vanishes because
$N_{f_2^\circ|_{D_1}}(-2\hat{t} \, )$ satisfies interpolation by combining
Propositions~\ref{inter} and~\ref{twist}.

As $\{p,
q\\} \\subset C'$ is general, we thus know that the tangent lines\nto $(f_2^\\circ|_{D_1}, \\hat{t} \\cup \\Delta)$ at $t$, and to $(f_1^\\circ|_{C'}, \\{p, q\\})$ at $f_1^\\circ(p)$ and $f_1^\\circ(q)$,\ntogether span all of $\\pp^4$; write $\\bar{t}$, $\\bar{p}$, and $\\bar{q}$ for points on each of these tangent lines\ndistinct from $t$, $f_1^\\circ(p)$, and $f_1^\\circ(q)$ respectively.\nWe then use the exact sequences\n\\begin{gather*}\n0 \\to N_{f^\\circ}|_L(-\\hat{t} - p - q) \\to N_{f^\\circ} \\to N_{f^\\circ}|_{C' \\cup D^\\circ} \\to 0 \\\\\n0 \\to N_{f^\\circ|_{C' \\cup D^\\circ}} \\to N_{f^\\circ}|_{C' \\cup D^\\circ} \\to * \\to 0,\n\\end{gather*}\nwhere $*$ is a punctual sheaf (which in particular has vanishing $H^1$).\nWrite $H_t$ for the hyperplane spanned by $f_1^\\circ(L)$, $\\bar{p}$, and $\\bar{q}$;\nand $H_p$ for the hyperplane spanned by $f_1^\\circ(L)$, $\\bar{t}$, and $\\bar{q}$;\nand $H_q$ for the hyperplane spanned by $f_1^\\circ(L)$, $\\bar{t}$, and $\\bar{p}$.\nThen $f_1^\\circ(L)$ is the complete intersection $H_t \\cap H_p \\cap H_q$, and so we get a decomposition\n\\[N_{f^\\circ}|_L \\simeq N_{f_1^\\circ(L) / H_t}(\\hat{t} \\, ) \\oplus N_{f_1^\\circ(L) / H_p}(p) \\oplus N_{f_1^\\circ(L) / H_q}(q),\\]\nwhich upon twisting becomes\n\\[N_{f^\\circ}|_L(-\\hat{t} - p - q) \\simeq N_{f_1^\\circ(L) / H_t}(-p-q) \\oplus N_{f_1^\\circ(L) / H_p}(-\\hat{t}-q) \\oplus N_{f_1^\\circ(L) / H_q}(-\\hat{t} - p).\\]\nNote that $N_{f_1^\\circ(L) / H_t}(-p-q) \\simeq \\oo_L(-1)$ has vanishing $H^1$, and similarly for the other factors;\nconsequently, $H^1(N_{f^\\circ}|_L(-\\hat{t} - p - q)) = 0$. 
We conclude that
$H^1(N_{f^\circ}) = 0$ provided that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$.
Moreover,
writing $C' \cup_{\{u, v, w, x, y, z\}} D^\circ = D_1 \cup_\Delta (D' \cup_{\{u, v, w, x, y, z\}} C')$
and applying Proposition~\ref{prop:interior}, we know that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$ provided that
$H^1(N_{f^\circ|_{C' \cup D'}}) = 0$.
And if $f^\circ|_{C' \cup D'}$ is a BN-curve,
then
$f^\circ \colon (C' \cup_{\{u, v, w, x, y, z\}} D') \cup_{\Delta \cup \{p, q\}} (D_1 \cup_t L) \to \pp^4$
is a BN-curve too by Proposition~\ref{prop:glue}.
Putting this all together, it is sufficient to show that
$f^\circ|_{C' \cup D'}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C' \cup D'}}) = 0$.

Our next step is to specialize $(f_1^\circ|_{C'}, \Gamma_1^\circ \cap C')$ to
$(f_1^{\circ\circ} \colon C^{\circ\circ} = C'' \cup_{\{r, s\}} L' \to \pp^4, \Gamma_1^{\circ\circ})$,
where $f_1^{\circ\circ}|_{C''}$
is a general BN-curve of degree~$5$ and genus~$1$, and $f_1^{\circ\circ}|_{L'}$ is a line
with $L'$ joined to $C''$ at two points $\{r, s\}$.
We suppose that $(f_1^{\circ\circ}|_{L'}, \Gamma_1^{\circ\circ} \cap L')$ passes through $u$,
while $(f_1^{\circ\circ}|_{C''}, \Gamma_1^{\circ\circ} \cap C'')$ passes through $\{v, w, x, y, z\}$;
as before this can be done so $\{u, v, w, x, y, z\}$ are general.
We also specialize $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ to
$(f_2^{\circ\circ} \colon D'' \cup_\Delta D_2 \to H, \Gamma_2^{\circ\circ})$,
where $f_2^{\circ\circ}|_{D''}$ and $f_2^{\circ\circ}|_{D_2}$ are both rational normal curves
with $D''$ and $D_2$ joined at a set $\Delta$ of $4$ general points.
We suppose that $(f_2^{\circ\circ}|_{D_2}, \Gamma_2^{\circ\circ} \cap D_2)$
passes through $u$,
while $(f_2^{\circ\circ}|_{D''}, \Gamma_2^{\circ\circ} \cap D'')$
passes through $\{v, w, x, y, z\}$;
as before
this can be done so $\\{u, v, w, x, y, z\\}$ are general.\nThe same argument as above, mutatis mutandis, then implies it is sufficient to show that\n$f^{\\circ\\circ}|_{C'' \\cup D''} \\colon C'' \\cup_{\\{v, w, x, y, z\\}} D'' \\to \\pp^4$ is a BN-curve which satisfies\n$H^1(N_{f^{\\circ\\circ}|_{C'' \\cup D''}}) = 0$.\n\nFor this, we first note that $f^{\\circ\\circ}|_{C'' \\cup D''}$ is a curve of degree $8$ and genus $5$,\nand that the moduli space of smooth curves of degree $8$ and genus $5$ in $\\pp^4$ is\nirreducible (they are all canonical curves).\nTo finish the proof, it suffices to note by Lemma~\\ref{smoothable} that\n$f^{\\circ\\circ}|_{C'' \\cup D''}$ is a limit of immersions of smooth curves and satisfies\n$H^1(N_{f^{\\circ\\circ}|_{C'' \\cup D''}}) = 0$.\n\\end{proof}\n\n\\begin{cor} \\label{smooth-enough} To prove the main theorems (excluding the ``conversely\\ldots'' part),\nit suffices to show the existence of (nondegenerate immersions of) smooth curves, of the following degrees\nand genera, which satisfy the conclusions:\n\\begin{enumerate}\n\\item For Theorem~\\ref{main-3}, it suffices to show the existence of smooth curves, satisfying\nthe conclusions, where $(d, g)$ is one of:\n\\[(5, 1), \\quad (7, 2), \\quad (6, 3), \\quad (7, 4), \\quad (8, 5), \\quad (9, 6), \\quad (9, 7).\\]\n\\item For Theorem~\\ref{main-3-1}, it suffices to show the existence of smooth curves, satisfying\nthe conclusions, where $(d, g)$ is one of:\n\\[(7, 5), \\quad (8, 6).\\]\n\\item For Theorem~\\ref{main-4}, it suffices to show the existence of smooth curves, satisfying\nthe conclusions, where $(d, g)$ is one of:\n\\[(9, 5), \\quad (10, 6), \\quad (11, 7), \\quad (12, 9).\\]\n\\end{enumerate}\n(And in constructing the above smooth curves, we may suppose the\ncorresponding theorem holds for curves of smaller genus.)\n\\end{cor}\n\\begin{proof}\nBy Lemmas~\\ref{bn3} and~\\ref{lm:hir}, and Proposition~\\ref{p2}, we know that Theorem~\\ref{main-3}\nholds for $(d, g)$ 
one of
\[(10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14).\]
Similarly, by Lemmas~\ref{bn4}, \ref{lm:hir}, and~\ref{from-inter}, we know that Theorem~\ref{main-4}
holds for $(d, g)$ one of
\[(16, 15), \quad (17, 16), \quad (18, 17).\]
Eliminating these cases from the lists in Corollary~\ref{finite},
we obtain the given lists of pairs $(d, g)$.

Moreover --- in each of the cases appearing in the statement
of this corollary --- results of \cite{keem} (for $r = 3$) and \cite{iliev} (for $r = 4$)
state that the Hilbert scheme of curves of degree $d$ and genus $g$ in $\pp^r$
has a \emph{unique} component whose points represent smooth irreducible nondegenerate curves.
The condition that our curve be a BN-curve may thus be replaced
with the condition that our curve be smooth irreducible nondegenerate.
\end{proof}

\section{More Curves in a Hyperplane \label{sec:hir-3}}

In this section, we give several more applications
of the technique developed in the previous two sections. Note that from
Corollary~\ref{smooth-enough},
it suffices to show the existence of curves satisfying
the desired conclusions which are limits of immersions of smooth curves;
it is not necessary to check that these
curves are BN-curves.

\begin{lm} \label{lm:ind:3} Suppose $N_f(-2)$ satisfies interpolation, where $f \colon C \to \pp^3$ is a general BN-curve
of degree $d$ and genus $g$.
Then the same is true for some smooth curve of\ndegree and genus:\n\\begin{enumerate}\n\\item \\label{33} $(d + 3, g + 3)$ (provided $d \\geq 3$);\n\\item \\label{42} $(d + 4, g + 2)$ (provided $d \\geq 3$);\n\\item \\label{46} $(d + 4, g + 6)$ (provided $d \\geq 5$).\n\\end{enumerate}\n\\end{lm}\n\\begin{proof}\nWe apply Lemma~\\ref{lm:hir} for $f_2$ a curve of degree up to $4$ (and note that\n$N_{f_2}(-2)$ satisfies interpolation by Proposition~\\ref{p2}), namely:\n\\begin{enumerate}\n\\item $(d_2, g_2) = (3, 1)$ and $n = 3$;\n\\item $(d_2, g_2) = (4, 0)$ and $n = 3$;\n\\item $(d_2, g_2) = (4, 2)$ and $n = 5$.\n\\end{enumerate}\nFinally, we note that $C \\cup_\\Gamma D \\to \\pp^r$ as above is a limit\nof immersions of smooth curves by Lemma~\\ref{smoothable}.\n\\end{proof}\n\n\\begin{cor} Suppose that Theorem~\\ref{main-3} holds for $(d, g) = (5, 1)$. Then\nTheorem~\\ref{main-3} holds for $(d, g)$ one of:\n\\[(7, 2), \\quad (6, 3), \\quad (9, 6), \\quad (9, 7).\\]\n\\end{cor}\n\\begin{proof}\nFor $(d, g) = (7, 2)$, we apply Lemma~\\ref{lm:ind:3}, part~\\ref{42}\n(taking as our inductive hypothesis the truth of Theorem~\\ref{main-3} for $(d', g') = (3, 0)$).\n\nSimilarly, for $(d, g) = (6, 3)$ and $(d, g) = (9, 6)$, we apply\nLemma~\\ref{lm:ind:3}, part~\\ref{33}\n(taking as our inductive hypothesis the truth of Theorem~\\ref{main-3} for $(d', g') = (3, 0)$,\nand the just-established $(d', g') = (6, 3)$, respectively).\n\nFinally, for $(d, g) = (9, 7)$, we apply Lemma~\\ref{lm:ind:3}, part~\\ref{46}\n(taking as our inductive hypothesis the yet-to-be-established truth of Theorem~\\ref{main-3}\nfor $(d', g') = (5, 1)$).\n\\end{proof}\n\n\\begin{lm} Suppose that Theorem~\\ref{main-3-1} holds for $(d, g) = (7, 5)$.\nThen Theorem~\\ref{main-3-1} holds for $(d, g) = (8, 6)$.\n\\end{lm}\n\\begin{proof}\nWe simply apply Lemma~\\ref{glue} with $f\\colon C \\cup_\\Gamma D \\to \\pp^3$\nsuch that $f|_C$ is a general BN-curve of degree $7$ and genus $5$,\nand $f|_D$ is a 
line, with $C$ joined to $D$ at a set $\\Gamma$ of two points.\n\\end{proof}\n\n\\begin{lm} \\label{lm:ind:4} Suppose $N_f(-1)$ satisfies interpolation, where $f$ is a general BN-curve\nof degree $d$ and genus $g$ in $\\pp^4$. Then the same is true for some smooth curve of\ndegree $d + 6$ and genus $g + 6$, provided $d \\geq 4$.\n\\end{lm}\n\\begin{proof}\nWe apply Lemmas~\\ref{lm:hir} and~\\ref{smoothable}\nfor $f_2$ a curve of degree $6$ and genus $3$ to $\\pp^3$,\nwith $n = 4$.\nNote that\n$N_{f_2}(-1)$ satisfies interpolation by Propositions~\\ref{inter} and~\\ref{twist}.\n\\end{proof}\n\n\\begin{lm} Theorem~\\ref{main-4} holds for $(d, g)$ one of:\n\\[(10, 6), \\quad (11, 7), \\quad (12, 9).\\]\n\\end{lm}\n\\begin{proof}\nWe simply apply Lemma~\\ref{lm:ind:4}\n(taking as our inductive hypothesis the truth of Theorem~\\ref{main-4} for\n$(d', g') = (d - 6, g - 6)$).\n\\end{proof}\n\nTo prove the main theorems (excluding the ``conversely\\ldots'' part),\nit thus remains to produce five smooth curves:\n\\begin{enumerate}\n\\item For Theorem~\\ref{main-3}, it suffices to find smooth curves, satisfying\nthe conclusions, of degrees and genera $(5, 1)$, $(7, 4)$, and $(8, 5)$.\n\\item For Theorem~\\ref{main-3-1}, it suffices to find a smooth curve, satisfying\nthe conclusions, of degree $7$ and genus $5$.\n\\item For Theorem~\\ref{main-4}, it suffices to find a smooth curve, satisfying\nthe conclusions, of degree $9$ and genus $5$.\n\\end{enumerate}\n\n\\section{Curves in Del Pezzo Surfaces \\label{sec:in-surfaces}}\n\nIn this section, we analyze the normal bundles of certain curves\nby specializing to immersions $f \\colon C \\hookrightarrow \\pp^r$\nof smooth curves whose images are contained in Del Pezzo\nsurfaces $S \\subset \\pp^r$ (where the Del Pezzo surface is embedded by\nits complete anticanonical series).\nSince $f$ will be an immersion, we shall identify $C = f(C)$ with its image,\nin which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of 
the image.
Our basic method in this section will be to use the normal bundle exact
sequence associated to $C \subset S \subset \pp^r$:
\begin{equation} \label{nb-exact}
0 \to N_{C/S} \to N_C \to N_S|_C \to 0.
\end{equation}
Since $S$ is a Del Pezzo surface, we have by adjunction an isomorphism
\begin{equation} \label{ncs}
N_{C/S} \simeq K_C \otimes K_S^\vee \simeq K_C(1).
\end{equation}

\begin{defi} \label{pic-res}
Let $S \subset \pp^r$ be a Del Pezzo surface, $k$ be an integer with $H^1(N_S(-k)) = 0$,
and $\theta \in \pic S$ be any divisor class.

Let $F$ be a general hypersurface of degree $k$.
We consider the moduli space $\mathcal{M}$ of pairs $(S', \theta')$,
with $S'$ a Del Pezzo surface containing $S \cap F$, and $\theta' \in \pic S'$.
Define $V_{\theta, k} \subseteq \pic(S \cap F)$
to be the subvariety obtained by restricting $\theta'$ to $S \cap F \subseteq S'$,
as $(S', \theta')$ varies over the component of $\mathcal{M}$ containing $(S, \theta)$.

Note that there is a unique such component, since $\mathcal{M}$ is smooth at $[(S, \theta)]$
thanks to our assumption that $H^1(N_S(-k)) = 0$.
\end{defi}

Our essential tool is given by the following lemma,
which uses the above normal bundle sequence together with the varieties
$V_{\theta, k}$ to analyze $N_C$.

\begin{lm} \label{del-pezzo}
Let $C \subset S \subset \pp^r$ be a general curve (of any fixed class)
in a general Del Pezzo surface $S \subset \pp^r$,
and $k$ be a natural number with $H^1(N_S(-k)) = 0$.
Suppose that (for $F$ a general hypersurface of degree $k$):\n\\[\\dim V_{[C], k} = \\dim H^0(\\oo_C(k - 1)) \\quad \\text{and} \\quad H^1(N_S|_C(-k)) = 0,\\]\nand that the natural map\n\\[H^0(N_S(-k)) \\to H^0(N_S|_C(-k))\\]\nis an isomorphism.\nThen,\n\\[H^1(N_{C}(-k)) = 0.\\]\n\\end{lm}\n\\begin{proof}\nTwisting our earlier normal bundle exact sequence \\eqref{nb-exact},\nand using the isomorphism \\eqref{ncs}, we obtain the exact sequence:\n\\[0 \\to K_C(1-k) \\to N_C(-k) \\to N_S|_C(-k) \\to 0.\\]\nThis gives rise to a long exact sequence in cohomology:\n\\[\\cdots \\to H^0(N_C(-k)) \\to H^0(N_S|_C(-k)) \\to H^1(K_C(1 - k)) \\to H^1(N_C(-k)) \\to H^1(N_S|_C(-k)) \\to \\cdots.\\]\nSince $H^1(N_S|_C(-k)) = 0$ by assumption,\nit suffices to show that the image of the natural map\n$H^0(N_C(-k)) \\to H^0(N_S|_C(-k))$ has codimension\n\\[\\dim H^1(K_C(1 - k)) = \\dim H^0(\\oo_C(k - 1)) = \\dim V_{[C], k}.\\]\n\nBecause the natural map $H^0(N_S(-k)) \\to H^0(N_S|_C(-k))$\nis an isomorphism, we may interpret sections\nof $N_S|_C(-k)$ as first-order deformations of the Del Pezzo surface $S$\nfixing $S \\cap F$.\nSo it remains to show that the space of such deformations\ncoming from a deformation of $C$ fixing $C \\cap F$ has codimension\n$\\dim V_{[C], k}$.\n\nThe key point here is that deforming\n$C$ on $S$ does not change its class $[C] \\in \\pic(S)$,\nand every deformation of $S$\ncomes naturally with a deformation of the element $[C] \\in \\pic(S)$.\nIt thus suffices to prove that \nthe space of first-order deformations of $S$ which leave invariant\nthe restriction $[C]|_{S \\cap F} \\in \\pic(S \\cap F)$\nhas codimension $\\dim V_{[C], k}$.\n\nBut since the map $\\mathcal{M} \\to V_{[C], k}$\nis smooth at $(S, [C])$, the vertical tangent space has codimension\nin the full tangent space\nequal to the dimension of the image.\n\\end{proof}\n\nIn applying Lemma~\\ref{del-pezzo},\nwe will first consider the case where $S \\subset \\pp^3$ is a general cubic 
surface,
which is isomorphic to the blowup $\bl_\Gamma \pp^2$ of $\pp^2$ along a set
\[\Gamma = \{p_1, \ldots, p_6\} \subset \pp^2\]
of six general points. Recall that this is a Del Pezzo surface,
which is to say that the embedding $\bl_\Gamma \pp^2 \simeq S \hookrightarrow \pp^3$
as a cubic surface is via the complete linear
system for the inverse of the canonical bundle:
\[-K_{\bl_\Gamma \pp^2} = 3L - E_1 - \cdots - E_6,\]
where $L$ is the class of a line in $\pp^2$ and $E_i$ is the exceptional divisor
in the blowup over $p_i$. Note that by construction,
\[N_S \simeq \oo_S(3).\]
In particular, $H^1(N_S(-1)) = H^1(N_S(-2)) = 0$ by Kodaira vanishing.

\begin{lm} \label{cubclass} Let $C \subset \bl_\Gamma \pp^2 \simeq S \subset \pp^3$ be a general curve of class either:
\begin{enumerate}
\item \label{74} $5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$;
\item \label{85} $5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$;
\item \label{86} $6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$;
\item \label{75} $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$.
\end{enumerate}
Then $C$ is smooth and irreducible.
In the first two cases, $H^1(\oo_C(1)) = 0$.
\end{lm}
\begin{proof}
We first show the above linear series are basepoint-free.
To do this, we write each as a sum of terms which are evidently
basepoint-free:
\begin{align*}
5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + (L - E_1) + (L - E_2) \\
5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + (L - E_1) \\
6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + L + (2L - E_3 - E_4 - E_5 - E_6) \\
6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + (L - E_2) + (2L - E_3 - E_4 - E_5 - E_6).
\end{align*}

Since all our linear series are basepoint-free, the
Bertini
theorem implies that $C$ is smooth. Moreover, by basepoint-freeness,
we know that $C$ does not contain any of our exceptional divisors.
We conclude that $C$ is the proper transform in the blowup of
a curve $C_0 \subset \pp^2$. This curve satisfies:
\begin{itemize}
\item In case~\ref{74}, $C_0$ has exactly two nodes, at $p_1$ and $p_2$, and is otherwise smooth.
In particular, $C_0$ (and thus $C$) must be irreducible, since otherwise (by B\'ezout's theorem) it would have
at least $4$ nodes (where the components meet).

\item In case~\ref{85}, $C_0$ has exactly one node, at $p_1$, and is otherwise smooth.
As above, $C_0$ (and thus $C$) must be irreducible.

\item In case~\ref{86}, $C_0$ has exactly four nodes, at $\{p_3, p_4, p_5, p_6\}$, and is otherwise smooth.
As above, $C_0$ (and thus $C$) must be irreducible.

\item In case~\ref{75}, $C_0$ has exactly five nodes, at $\{p_2, p_3, p_4, p_5, p_6\}$, and is otherwise smooth.
As above, $C_0$ must either be irreducible, or the union of a
line and a quintic.
(Otherwise, it would have at least $8$ nodes.)\nBut in the second case, all $5$ nodes must be collinear,\ncontradicting our assumption that $\\{p_2, p_3, p_4, p_5, p_6\\}$ are general.\nConsequently, $C_0$ (and thus $C$) must be irreducible.\n\\end{itemize}\n\nWe now turn to showing $H^1(\\oo_C(1)) = 0$ in the first two cases.\nIn the first case, we note that $\\Gamma$ contains $4 = \\operatorname{genus}(C)$ general points $\\{p_3, p_4, p_5, p_6\\}$\non $C$; consequently, $E_3 + E_4 + E_5 + E_6$ --- and therefore\n$\\oo_C(1) = (3L - E_1 - E_2) - (E_3 + E_4 + E_5 + E_6)$ --- is a general line bundle of degree $7$,\nwhich implies $H^1(\\oo_C(1)) = 0$.\nSimilarly, in the second case,\nwe note that $\\Gamma$ contains $5 = \\operatorname{genus}(C)$\ngeneral points $\\{p_2, p_3, p_4, p_5, p_6\\}$ on $C$.\nAs in the first case, this implies $H^1(\\oo_C(1)) = 0$, as desired.\n\\end{proof}\n\n\\begin{lm} \\label{foo}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree and genus $(7, 4)$ or $(8, 5)$.\nThen we have $H^1(N_C(-2)) = 0$.\n\\end{lm}\n\\begin{proof}\nWe take $C \\subset S$, as constructed in Lemma~\\ref{cubclass}, parts~\\ref{74} and~\\ref{85}\nrespectively.\nThese curves have degrees and genera $(7, 4)$ and $(8, 5)$ respectively, which can be seen by calculating the\nintersection product with the hyperplane class and using adjunction.\nFor example, for the curve in part~\\ref{74} of class\n$5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$, we calculate\n\\[\\deg C = (5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6) \\cdot (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) = 7,\\]\nand\n\\[\\operatorname{genus} C = 1 + \\frac{K_S \\cdot C + C^2}{2} = 1 + \\frac{-\\deg C + C^2}{2} = 1 + \\frac{-7 + 13}{2} = 4.\\]\nBecause $N_S \\simeq \\oo_S(3)$, we have \n\\[H^1(N_S|_C(-2)) = H^1(\\oo_C(1)) = 0.\\]\nMoreover, $\\oo_S(1)(-C)$ is either\n$-2L + E_1 + E_2$ or $-2L + E_1$ respectively;\nin either case we have $H^0(\\oo_S(1)(-C)) = 0$. 
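To see the vanishing $H^0(\oo_S(1)(-C)) = 0$, note that the pullback class $L$ is nef on $S = \bl_\Gamma \pp^2$, while
\[(-2L + E_1 + E_2) \cdot L = (-2L + E_1) \cdot L = -2 < 0,\]
so neither class can be effective.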
Consequently, the restriction map\n\\[H^0(\\oo_S(1)) \\to H^0(\\oo_C(1))\\]\nis injective. Since\n\\[\\dim H^0(\\oo_S(1)) = 4 = \\dim H^0(\\oo_C(1)),\\]\nthe above restriction map is therefore an isomorphism.\nApplying\nLemma~\\ref{del-pezzo}, it thus suffices to show that\n\\[\\dim V_{[C], 2} = \\dim H^0(\\oo_C(1)) = 4.\\]\n\nTo do this, we first observe that $[C]$ is always a linear combination $aH + bL_1 + cL_2$ of the\nhyperplane class $H$, and two nonintersecting lines $L_1$ and $L_2$, such that both $b$ and $c$\nare nonvanishing. Indeed:\n\\begin{align*}\n5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\\\\n&\\quad - (2L - E_1 - E_3 - E_4 - E_5 - E_6) \\\\\n&\\quad - (2L - E_2 - E_3 - E_4 - E_5 - E_6) \\\\\n5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + E_1 \\\\\n&\\quad - 2(2L - E_2 - E_3 - E_4 - E_5 - E_6).\n\\end{align*}\n\nWriting $F$ for a general quadric hypersurface, and $D = F \\cap S$,\nwe observe that $\\pic(D)$ is $4$-dimensional.\nIt is therefore sufficient to prove that for a general class $\\theta \\in \\pic^{6a + 2b + 2c}(D)$,\nthere exists a smooth cubic surface $S$ containing $D$ and a pair $(L_1, L_2)$ of disjoint lines on $S$,\nsuch that the restriction $(aH + bL_1 + cL_2)|_D = \\theta$.\nSince $H|_D = \\oo_D(1)$ is independent of $S$ and the choice of $(L_1, L_2)$,\nwe may replace $\\theta$ by $\\theta(-a)$ and set $a = 0$.\n\nWe thus seek to show that for $b, c \\neq 0$ and $\\theta \\in \\pic^{2b + 2c}(D)$ general,\nthere exists a smooth cubic surface $S$ containing $D$, and a pair $(L_1, L_2)$ of disjoint lines on $S$,\nwith $(bL_1 + cL_2)|_D = \\theta$.\nEquivalently, we want to show the map\n\\[\\{(S, E_1, E_2) : E_1, E_2 \\subset S \\supset D\\} \\mapsto \\{(E_1, E_2)\\},\\]\nfrom the space of smooth cubic surfaces $S$ containing $D$ with a choice\nof pair of disjoint lines $(E_1, E_2)$, \nto the space of pairs of $2$-secant lines to $D$, is dominant.\nFor 
this, it suffices to check the vanishing of
$H^1(N_S(-D -E_1 - E_2))$,
for any smooth cubic $S$ containing $D$ and disjoint lines $(E_1, E_2)$ on $S$,
in which lies the obstruction to smoothness of this map.
But $N_S(-D -E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$
has no higher cohomology by Kawamata-Viehweg vanishing.
\end{proof}

\begin{lm}
Let $C \subset \pp^3$ be a general BN-curve of degree $7$ and genus $5$.
Then we have $H^1(N_C(-1)) = 0$.
\end{lm}
\begin{proof}
We take $C \subset S$, as constructed in Lemma~\ref{cubclass}, part~\ref{75}.
Because $N_S \simeq \oo_S(3)$, we have
\[H^1(N_S|_C(-1)) = H^1(\oo_C(2)) = 0.\]
Moreover, $\oo_S(2)(-C) \simeq \oo_S(-E_1)$ has no sections.
Consequently, the restriction map
\[H^0(\oo_S(2)) \to H^0(\oo_C(2))\]
is injective. Since
\[\dim H^0(\oo_S(2)) = 10 = \dim H^0(\oo_C(2)),\]
the above restriction map is therefore an isomorphism.
Applying
Lemma~\ref{del-pezzo}, it thus suffices to show that
\[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\]

Writing $F$ for a general hyperplane, and $D = F \cap S$,
we observe that $\pic(D)$ is $1$-dimensional.
Since $[C] = 2H + E_1$,
it is therefore sufficient to prove that for a general class $\theta \in \pic^7(D)$,
there exists a cubic surface $S$ containing $D$ and a line $L$ on $S$,
such that the restriction $(2H + L)|_D = \theta$.
Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $L$,
we may replace $\theta$ by $\theta(-1)$ and look instead for
$L|_D = \theta \in \pic^1(D)$.
Equivalently, we want to show the map
\[\{(S, E_1) : E_1 \subset S \supset D\} \mapsto \{E_1\},\]
from the space of smooth cubic surfaces $S$ containing $D$ with a choice
of line $E_1$,
to the space of $1$-secant lines to $D$, is dominant;
it suffices to check the vanishing of
$H^1(N_S(-D-E_1))$,
for any smooth cubic $S$ containing $D$ and line $E_1$ on $S$,
in which lies the obstruction
to smoothness of this map.\nBut $N_S(-D-E_1) = 6L - 3E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$\nhas no higher cohomology by Kodaira vanishing.\n\\end{proof}\n\nNext, we consider the case where $S \\subset \\pp^4$ is the intersection\nof two quadrics, which is isomorphic to the blowup $\\bl_\\Gamma \\pp^2$\nof $\\pp^2$ along a set\n\\[\\Gamma = \\{p_1, \\ldots, p_5\\}\\]\nof five general points. Recall that this is a Del Pezzo surface,\nwhich is to say that the embedding\n$\\bl_\\Gamma \\pp^2 \\simeq S \\hookrightarrow \\pp^4$ as the intersection\nof two quadrics is via the complete linear\nsystem for the inverse of the canonical bundle:\n\\[-K_{\\bl_\\Gamma \\pp^2} = 3L - E_1 - \\cdots - E_5,\\]\nwhere $L$ is the class of a line in $\\pp^2$ and $E_i$ is the exceptional divisor\nin the blowup over $p_i$. Note that by construction,\n\\[N_S \\simeq \\oo_S(2) \\oplus \\oo_S(2).\\]\nIn particular, $H^1(N_S(-1)) = 0$ by Kodaira vanishing.\n\n\\begin{lm} \\label{qclass} Let $C \\subset \\bl_\\Gamma \\pp^2 \\simeq S \\subset \\pp^4$ be a general curve of class either:\n\\begin{enumerate}\n\\item $5L - 2E_1 - E_2 - E_3 - E_4 - E_5$;\n\\item $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5$.\n\\end{enumerate}\nThen $C$ is smooth and irreducible. In the first case, $H^1(\\oo_C(1)) = 0$.\n\\end{lm}\n\\begin{proof}\nWe first show the above linear series\nare basepoint-free.\nTo do this, we write them as a sum of terms which are evidently\nbasepoint-free:\n\\begin{align*}\n5L - 2E_1 - E_2 - E_3 - E_4 - E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) + (L - E_1) + L \\\\\n6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) \\\\\n&\\qquad + (2L - E_2 - E_3 - E_4 - E_5) + L\n\\end{align*}\nAs in Lemma~\\ref{cubclass}, we conclude that $C$ is smooth and\nirreducible. 
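For concreteness, the degree and genus in the first case can be checked by the same intersection-theoretic computation as in Lemma~\ref{foo}:
\[\deg C = (5L - 2E_1 - E_2 - E_3 - E_4 - E_5) \cdot (3L - E_1 - E_2 - E_3 - E_4 - E_5) = 15 - 2 - 4 = 9,\]
and, using $C^2 = 25 - 4 - 4 = 17$,
\[\operatorname{genus} C = 1 + \frac{-\deg C + C^2}{2} = 1 + \frac{-9 + 17}{2} = 5.\]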
In the first case, we have
$\deg \oo_C(1) = 9 > 8 = 2g - 2$, which implies
$H^1(\oo_C(1)) = 0$ as desired.
\end{proof}

\begin{lm}
Let $C \subset \pp^4$ be a general BN-curve of degree $9$ and genus $5$.
Then we have $H^1(N_C(-1)) = 0$.
\end{lm}
\begin{proof}
We take $C \subset S$, as constructed in Lemma~\ref{qclass} (first case).
Because $N_S \simeq \oo_S(2) \oplus \oo_S(2)$, we have
\[H^1(N_S|_C(-1)) = H^1(\oo_C(1) \oplus \oo_C(1)) = 0.\]
Moreover, $\oo_S(1)(-C) \simeq \oo_S(-2L + E_1)$ has no sections.
Consequently, the restriction map
\[H^0(\oo_S(1) \oplus \oo_S(1)) \to H^0(\oo_C(1) \oplus \oo_C(1))\]
is injective. Since
\[\dim H^0(\oo_S(1) \oplus \oo_S(1)) = 10 = \dim H^0(\oo_C(1) \oplus \oo_C(1)),\]
the above restriction map is therefore an isomorphism.
Applying
Lemma~\ref{del-pezzo}, it thus suffices to show that
\[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\]

Writing $F$ for a general hyperplane, and $D = F \cap S$, we observe that $\pic(D)$ is $1$-dimensional.
Since $[C] = 3(3L - E_1 - E_2 - E_3 - E_4 - E_5) - 2(2L - E_1 - E_2 - E_3 - E_4 - E_5) - E_1$,
it is therefore sufficient to prove that for a general class $\theta \in \pic^9(D)$,
there exists a quartic Del Pezzo surface $S$ containing $D$, and a pair $\{L_1, L_2\}$ of
intersecting lines on $S$,
such that the restriction $(3H - 2L_1 - L_2)|_D = \theta$.
Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $(L_1, L_2)$,
we may replace $\theta$ by $\theta^{-1}(3)$ and look instead for
$(2L_1 + L_2)|_D = \theta \in \pic^3(D)$.
For this, it suffices to show the map
\[\{(S, L_1, L_2) : L_1, L_2 \subset S \supset D\} \mapsto \{(L_1, L_2)\},\]
from the space of smooth quartic Del Pezzo surfaces $S$
containing $D$ with a choice
of pair of intersecting lines $(L_1, L_2)$,
to the space of pairs of intersecting $1$-secant lines to $D$, is dominant.
Taking $[L_1] = E_1$ and $[L_2] = L - E_1 - E_2$,
it suffices to
check the vanishing of the first cohomology of the vector bundle\n$N_S(-D - E_1 - (L - E_1 - E_2))$ --- which is isomorphic to a direct\nsum of two copies of the line bundle $2L - E_1 - E_3 - E_4 - E_5$ --- for\nany smooth quartic Del Pezzo surface $S$ containing $D$,\nin which lies the obstruction to smoothness of this map.\nBut $2L - E_1 - E_3 - E_4 - E_5$ has no higher cohomology by Kodaira vanishing.\n\\end{proof}\n\nTo prove the main theorems (excluding the ``conversely\\ldots'' part),\nit thus remains to produce a smooth curve $C \\subset \\pp^3$ of degree $5$\nand genus $1$, with $H^1(N_C(-2)) = 0$.\n\n\\section{\\boldmath Elliptic Curves of Degree $5$ in $\\pp^3$ \\label{sec:51}}\n\nIn this section, we construct an immersion $f \\colon C \\hookrightarrow \\pp^3$\nof degree~$5$ from a smooth elliptic curve,\nwith $H^1(N_f(-2)) = 0$.\nAs in the previous section,\nwe shall identify $C = f(C)$ with its image,\nin which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of the image.\n\nOur basic method in this section will be to use the geometry of the cubic scroll $S \\subset \\pp^4$.\nRecall that\nthe cubic scroll can be constructed\nin two different ways:\n\\begin{enumerate}\n\\item Let $Q \\subset \\pp^4$ and $M \\subset \\pp^4$ be a plane conic,\nand a line disjoint from the span of $Q$, respectively. As abstract varieties,\n$Q \\simeq \\pp^1 \\simeq M$.\nThen $S$ is the ruled surface swept out by lines joining pairs of points\nidentified under some choice of above isomorphism.\n\n\\item Let $x \\in \\pp^2$ be a point, and consider the blowup $\\bl_x \\pp^2$\nof $\\pp^2$ at the point $\\{x\\}$. 
Then, $S$ is the image of $f \colon \bl_x \pp^2 \hookrightarrow \pp^4$
under the complete linear series attached to the line bundle
\[2L - E,\]
where $L$ is the class of a line in $\pp^2$, and $E$ is the exceptional divisor
in the blowup.
\end{enumerate}

To relate these two constructions, we fix a line $L \subset \pp^2$ not meeting $x$ in the second
construction, and consider the isomorphism $L \simeq \pp^1 \simeq E$
defined by sending $p \in L$ to the intersection with $E$ of the proper transform
of the line joining $p$ and $x$.
Then $f(L)$ and $f(E)$ are $Q$ and $M$ respectively in the first construction;
the proper transforms of lines through $x$ are the lines of the ruling.

\medskip

Now take two points $p, q \in L$. Since $f(L)$ is a plane conic,
the tangent lines to $f(L)$ at $p$ and $q$ intersect; we let $y$
be their point of intersection.

From the first description of $S$, it is clear that any line through
$y$ intersects $S$ quasi-transversely --- except for the lines joining $y$ to $p$ and $q$,
each of which meets $S$ in a degree~$2$ subscheme of $f(L)$.
Write $\bar{S}$ for the image of $S$ under projection from $y$; by construction,
the projection $\pi \colon S \to \bar{S} \subseteq \pp^3$ is unramified away from $\{p, q\}$,
an immersion away from $f(L)$, and when restricted to $f(L)$ is a double cover of its image
with ramification exactly at $\{p, q\}$.
At $\{p, q\}$, the differential drops rank transversely,
with kernel the tangent
space to $f(L)$. (By ``drops rank transversely'', we mean that the section $d\pi$ of
$\hom(T_S, \pi^* T_{\pp^3})$ is transverse to the subvariety
of $\hom(T_S, \pi^* T_{\pp^3})$ of maps with less-than-maximal rank.)

If $C \subset \bl_x \pp^2 \simeq S$ is a curve passing through $p$ and $q$,
but transverse to $L$ at each of these points, then any line through $y$ intersects
$C$ quasi-transversely.
In particular, if $C$ meets $L$ in at most one point outside of $\\{p, q\\}$,\nthe image $\\bar{C}$ of $C$ under projection from $y$\nis smooth. Moreover, the above analysis of $d\\pi$ on $S$ implies that the natural map\n\\[N_{C/S} \\to N_{\\bar{C}/\\pp^3}\\]\ninduced by $\\pi$ is fiberwise injective away from $\\{p, q\\}$, and has a simple\nzero at both $p$ and $q$. That is, we have an exact sequence\n\\begin{equation} \\label{51}\n0 \\to N_{C/S}(p + q) \\to N_{\\bar{C}/\\pp^3} \\to \\mathcal{Q} \\to 0,\n\\end{equation}\nwith $\\mathcal{Q}$ a vector bundle.\n\n\\medskip\n\nWe now specialize to the case where $C$ is the proper transform of a plane cubic, passing through\n$\\{x, p, q\\}$, and transverse to $L$ at $\\{p, q\\}$. By inspection,\n$\\bar{C}$ is an elliptic curve of degree $5$ in $\\pp^3$; it thus suffices to show\n$H^1(N_{\\bar{C}/\\pp^3}(-2)) = 0$.\n\n\\begin{lm} In this case, \n\\begin{align*}\nN_{C/S}(p + q) &\\simeq \\oo_C(3L - E + p + q) \\\\\n\\mathcal{Q} &\\simeq \\oo_C(5L - 3E - p - q).\n\\end{align*}\n\\end{lm}\n\\begin{proof} \nWe first note that\n\\[N_{C/S} \\simeq N_{C/\\pp^2}(-E) \\simeq \\oo_C(3L)(-E) \\quad \\Rightarrow \\quad N_{C/S}(p + q) \\simeq \\oo_C(3L - E + p + q).\\]\nNext, the Euler exact sequence\n\\[0 \\to \\oo_{\\bar{C}} \\to \\oo_{\\bar{C}}(1)^4 \\to T_{\\pp^3}|_{\\bar{C}} \\to 0\\]\nimplies\n\\[\\wedge^3 (T_{\\pp^3}|_{\\bar{C}}) \\simeq \\oo_C(4).\\]\nCombined with the normal bundle exact sequence\n\\[0 \\to T_C \\to T_{\\pp^3}|_{\\bar{C}} \\to N_{\\bar{C}/\\pp^3} \\to 0,\\]\nand the fact that $C$ is of genus $1$, so $T_C \\simeq \\oo_C$, we conclude that\n\\[\\wedge^2(N_{\\bar{C}/\\pp^3}) \\simeq \\oo_C(4) \\otimes T_C^\\vee \\simeq \\oo_C(4) = \\oo_C(4(2L - E)) = \\oo_C(8L - 4E).\\]\nThe exact sequence \\eqref{51} then implies\n\\[\\mathcal{Q} \\simeq \\wedge^2(N_{\\bar{C}/\\pp^3}) \\otimes (N_{C/S}(p + q))^\\vee \\simeq \\oo_C(8L - 4E)(-3L + E - p - q) = \\oo_C(5L - 3E - p - q),\\]\nas 
desired.
\end{proof}

\noindent
Twisting by $\oo_C(-2) \simeq \oo_C(-4L + 2E)$, we obtain isomorphisms:
\begin{align*}
N_{C/S}(p + q) &\simeq \oo_C(-L + E + p + q) \\
\mathcal{Q} &\simeq \oo_C(L - E - p - q).
\end{align*}
We thus have an exact sequence
\[0 \to \oo_C(-L + E + p + q) \to N_{\bar{C}/\pp^3}(-2) \to \oo_C(L - E - p - q) \to 0.\]
Since $\oo_C(-L + E + p + q)$ and $\oo_C(L - E - p - q)$ are both general line bundles
of degree zero on a curve of genus $1$, we have
\[H^1(\oo_C(-L + E + p + q)) = H^1(\oo_C(L - E - p - q)) = 0,\]
which implies
\[H^1(N_{\bar{C}/\pp^3}(-2)) = 0.\]
This completes the proof of the main theorems, except for the ``conversely\ldots'' parts.

\section{The Converses \label{sec:converses}}

In this section, we show that the intersections appearing in our main theorems
fail to be general in all listed exceptional cases.
We actually go further, describing precisely the intersection of a general BN-curve $f \colon C \to \pp^r$
with $Q$ or $H$ in terms of the intrinsic geometry of $Q \simeq \pp^1 \times \pp^1$, $H \simeq \pp^2$,
and $H \simeq \pp^3$ respectively.

Since the general BN-curve $f \colon C \to \pp^r$ is an immersion, we can
identify $C = f(C)$ with its image as in the previous two sections, in which case
the normal bundle $N_f$ becomes the normal bundle $N_C$ of its image.

There are two basic phenomena which explain the majority of our exceptional
cases: cases where $C$ is a complete intersection, and cases where $C$ lies
on a surface of low degree.
The first two subsections will be devoted\nto the exceptional cases that arise for these two reasons respectively.\nIn the final subsection, we will consider the two remaining exceptional\ncases.\n\n\\subsection{Complete Intersections}\n\nWe begin by dealing with those exceptional cases which\nare complete intersections.\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $4$ and genus $1$.\nThen the intersection $C \\cap Q$ is the intersection of two general curves\nof bidegree $(2, 2)$ on $Q \\simeq \\pp^1 \\times \\pp^1$. In particular,\nit is not a collection of $8$ general points.\n\\end{prop}\n\\begin{proof}\nIt is easy to see that $C$ is the complete intersection of two general quadrics.\nRestricting these quadrics to $Q \\simeq \\pp^1 \\times \\pp^1$,\nwe see that $C \\cap Q$ is the intersection of two general curves\nof bidegree $(2, 2)$.\n\nSince general points impose independent conditions on the $9$-dimensional\nspace of curves of bidegree $(2, 2)$, a general collection of $8$ points\nwill lie only on one curve of bidegree $(2, 2)$.\nThe intersections of two general curves of bidegree $(2, 2)$\nis therefore not a collection of $8$ general points.\n\\end{proof}\n\n\\begin{prop} \\label{64-to-Q}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $6$ and genus $4$.\nThen the intersection $C \\cap Q$ is the intersection of two general curves\nof bidegrees $(2, 2)$ and $(3,3)$ respectively on $Q \\simeq \\pp^1 \\times \\pp^1$. 
In particular,\nit is not a collection of $12$ general points.\n\\end{prop}\n\\begin{proof}\nIt is easy to see that $C$ is the complete intersection of a\ngeneral quadric and cubic.\nRestricting these to $Q \\simeq \\pp^1 \\times \\pp^1$,\nwe see that $C \\cap Q$ is the intersection of two general curves\nof bidegrees $(2, 2)$ and $(3,3)$ respectively.\n\nSince general points impose independent conditions on the $9$-dimensional\nspace of curves of bidegree $(2, 2)$, a general collection of $12$ points\nwill not lie on any curve of bidegree $(2,2)$, and in particular will not be\nsuch an intersection.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $6$ and genus $4$.\nThen the intersection $C \\cap H$ is a general collection of $6$ points\nlying on a conic. In particular,\nit is not a collection of $6$ general points.\n\\end{prop}\n\\begin{proof}\nAs in Proposition~\\ref{64-to-Q},\nwe see that $C \\cap H$ is the intersection of a general\nconic and cubic.\n\nIn particular, $C \\cap H$ lies on a conic. Conversely, any $6$ points\nlying on a conic are the complete intersection of a conic and a cubic by Theorem~\\ref{main-2}\n(with $(d, g) = (3, 1)$).\n\nSince general points impose independent conditions on the $6$-dimensional\nspace of plane conics,\na general collection of $6$ points\nwill not lie on a conic. We thus see our intersection\nis not a collection of $6$ general points.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^4$ be a general BN-curve of degree $8$ and genus $5$.\nThen the intersection $C \\cap H$ is the intersection of three general quadrics\nin $H \\simeq \\pp^3$. 
In particular,\nit is not a collection of $8$ general points.\n\\end{prop}\n\\begin{proof}\nIt is easy to see that $C$ is the complete intersection of three general quadrics.\nRestricting these quadrics to $H \\simeq \\pp^3$,\nwe see that $C \\cap H$ is the intersection of three general quadrics.\n\nSince general points impose independent conditions on the $10$-dimensional\nspace of quadrics, a general collection of $8$ points\nwill lie on only a $2$-dimensional space of quadrics.\nThe intersection of three general quadrics\nis therefore not a collection of $8$ general points.\n\\end{proof}\n\n\\subsection{Curves on Surfaces}\n\nNext, we analyze those cases\nwhich are exceptional because $C$ lies on a surface $S$\nof small degree. To show the intersection is general subject to\nthe constraint imposed by $C \\subset S$, it will be useful to have the following lemma:\n\n\\begin{lm} \\label{pic-res-enough}\nLet $D$ be an irreducible curve of genus $g$ on a surface $S$, and $p_1, p_2, \\ldots, p_n$\nbe a collection of $n$ distinct points on $D$. Suppose that $n \\geq g$, and that\n$p_1, p_2, \\ldots, p_g$ are general.\nLet $\\theta \\in \\pic(S)$, with $\\theta|_D \\sim p_1 + p_2 + \\cdots + p_n$. 
Suppose that\n\\[\\dim H^0(\\theta) - \\dim H^0(\\theta(-D)) \\geq n - g + 1.\\]\nThen some curve $C \\subset S$ of class $\\theta$ meets $D$ transversely at $p_1, p_2, \\ldots, p_n$.\n\\end{lm}\n\\begin{proof}\nSince $p_1, p_2, \\ldots, p_g$ are general, and $\\theta|_D = p_1 + p_2 + \\cdots + p_n$,\nit suffices to show there is a curve of class $\\theta$ meeting $D$ dimensionally-transversely\nand passing through $p_{g + 1}, p_{g + 2}, \\ldots, p_n$; the remaining $g$ points\nof intersection are then forced to be $p_1, p_2, \\ldots, p_g$.\n\nFor this, we note there is a $\\dim H^0(\\theta) - (n - g) > \\dim H^0(\\theta(-D))$ dimensional\nspace of sections of $\\theta$ which vanish at $p_{g + 1}, \\ldots, p_n$.\nIn particular, there is some section which does not vanish along $D$.\nIts zero locus then gives the required curve $C$. (The curve $C$ meets $D$ dimensionally-transversely,\nbecause $C$ does not contain $D$ and $D$ is irreducible.)\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $5$ and genus $2$.\nThen the intersection $C \\cap Q$ is a collection of $10$ general points\nlying on a curve of bidegree $(2, 2)$ on $Q \\simeq \\pp^1 \\times \\pp^1$. In particular,\nit is not a collection of $10$ general points.\n\\end{prop}\n\\begin{proof}\nSince $\\dim H^0(\\oo_C(2)) = 9$ and $\\dim H^0(\\oo_{\\pp^3}(2)) = 10$,\nwe conclude that $C$ lies on a quadric.\nRestricting to $Q$, we see that $C \\cap Q$ lies on a curve\nof bidegree $(2,2)$.\n\nConversely, given $10$ points $p_1, p_2, \\ldots, p_{10}$ lying on a curve $D$ of bidegree $(2, 2)$,\nwe may first find a pair of points $\\{x, y\\} \\subset D$ so that\n$x + y + 2H \\sim p_1 + \\cdots + p_{10}$. 
We then claim there is a smooth quadric containing\n$D$ and the general $2$-secant line $\\overline{xy}$ to $D$.\nEquivalently, we want to show the map\n\\[\\{(S, L) : L \\subset S \\supset D\\} \\mapsto \\{L\\},\\]\nfrom the space of smooth quadric surfaces $S$ containing $D$ with a choice\nof line $L$,\nto the space of $2$-secant lines to $D$, is dominant;\nit suffices to check the vanishing of \n$H^1(N_S(-D-L))$,\nfor any smooth quadric $S$ containing $D$ and line $L$ on $S$,\nin which lies the obstruction to smoothness of this map.\nBut $N_S(-D-L) = \\oo_S(0, -1)$\nhas no higher cohomology by Kodaira vanishing.\n\nWriting $L \\in \\pic(S)$ for the class of the line $\\overline{xy}$,\nwe see that $(L + 2H)|_D \\sim p_1 + \\cdots + p_{10}$ as divisor classes.\nApplying Lemma~\\ref{pic-res-enough}, and noting that\n$\\dim H^0(\\oo_{S}(2H + L)) = 12$ while \n$\\dim H^0(\\oo_{S}(L)) = 2$, there is a curve $C$ of class\n$2H + L$ meeting $D$ transversely at $p_1, \\ldots, p_{10}$.\nSince $\\oo_{S}(2H + L)$ is very ample by inspection, $C$\nis smooth (for $p_1, \\ldots, p_{10}$ general). 
By results of \\cite{keem},\nthis implies $C$ is a BN-curve.\n\n\nSince general points impose independent conditions on the $9$-dimensional\nspace of curves of bidegree $(2, 2)$, a general collection of $10$ points\ndoes not lie on a curve of bidegree $(2, 2)$.\nA collection of $10$ general points on a general curve of bidegree $(2,2)$\nis therefore not a collection of $10$ general points.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $7$ and genus $5$.\nThen the intersection $C \\cap Q$ is a collection of $14$\npoints lying on a curve $D \\subset Q \\simeq \\pp^1 \\times \\pp^1$,\nwhich is general subject to the following conditions:\n\\begin{enumerate}\n\\item The curve $D$ is of bidegree $(3, 3)$.\n\\item The divisor $C \\cap Q - 2H$ on $D$ (where $H$ is the hyperplane class)\nis effective.\n\\end{enumerate}\nIn particular, it is not a collection of $14$ general points.\n\\end{prop}\n\\begin{proof}\nFirst we claim the general such curve $C$ lies on a smooth cubic surface $S$ with class\n$2H + E_1 = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$.\nIndeed, by Lemma~\\ref{cubclass} part~\\ref{75}, a general curve of this class is smooth and irreducible;\nsuch a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \\cite{keem}.\nIt remains to see there are no obstructions to lifting a deformation\nof $C$ to a deformation of the pair $(S, C)$,\ni.e.\\ that $H^1(N_S(-C)) = 0$. 
But $N_S(-C) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$,\nwhich has no higher cohomology by Kodaira vanishing.\n\nThus, $C \\cap Q - 2H$ is the restriction to $D$\nof the class of a line on $S$; in particular, $C \\cap Q - 2H$\nis an effective divisor on $D$.\n\nConversely, suppose that $p_1, p_2, \\ldots, p_{14}$\nare a general collection of $14$ points lying on a curve $D$ of bidegree $(3,3)$\nwith $p_1 + \\cdots + p_{14} - 2H \\sim x + y$ effective.\nWe then claim there is a smooth cubic containing\n$D$ and the general $2$-secant line $\\overline{xy}$ to $D$.\nEquivalently, we want to show the map\n\\[\\{(S, L) : L \\subset S \\supset D\\} \\mapsto \\{L\\},\\]\nfrom the space of smooth cubic surfaces $S$ containing $D$ with a choice\nof line $L$,\nto the space of $2$-secant lines to $D$, is dominant;\nfor this it suffices to check the vanishing of \n$H^1(N_S(-D-L))$.\nBut $N_S(-D-L) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$,\nwhich has no higher cohomology by Kodaira vanishing.\n\nChoosing an isomorphism $S \\simeq \\bl_\\Gamma \\pp^2$ where $\\Gamma = \\{q_1, q_2, \\ldots, q_6\\}$,\nso that the line $\\overline{xy} = E_1$ is the exceptional\ndivisor over $q_1$,\nwe now look for a curve $C \\subset S$ of class\n\\[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6.\\]\n\nAgain by Lemma~\\ref{cubclass}, the general such curve is smooth and irreducible;\nsuch a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \\cite{keem}.\nNote that\n\\[\\dim H^0(\\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 12 \\quad \\text{and} \\quad \\dim H^0(\\oo_S(E_1)) = 1.\\]\nApplying Lemma~\\ref{pic-res-enough},\nwe conclude that some curve of our given class meets $D$ transversely\nat $p_1, p_2, \\ldots, p_{14}$, as desired.\n\nIt remains to see from this description that\n$C \\cap Q$ is not a general collection of $14$ points.\nFor this, first note that there is a $15$-dimensional space\nof such curves $D$ (as $\\dim H^0(\\oo_Q(3,3)) = 
16$).\nOn each curve, there is a $2$-dimensional family of effective\ndivisors $\\Delta$; and for fixed $\\Delta$, a $10$-dimensional family of divisors\nlinearly equivalent to $2H + \\Delta$ (because $\\dim H^0(\\oo_D(2H + \\Delta)) = 11$\nby Riemann-Roch). Putting this together,\nthere is an (at most) $15 + 2 + 10 = 27$-dimensional family of such collections\nof points.\nBut $\\sym^{14}(Q)$ has dimension $28$. In particular, collections of such \npoints cannot be general.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $8$ and genus $6$.\nThen the intersection $C \\cap Q$ is a general collection of $16$ points\non a curve of bidegree $(3,3)$ on $Q \\simeq \\pp^1 \\times \\pp^1$. In particular,\nit is not a collection of $16$ general points.\n\\end{prop}\n\\begin{proof}\nSince $\\dim H^0(\\oo_C(3)) = 19$ and $\\dim H^0(\\oo_{\\pp^3}(3)) = 20$,\nwe conclude that $C$ lies on a cubic surface. Restricting this cubic\nto $Q$, we see that $C \\cap Q$ lies on a curve of bidegree $(3,3)$.\n\nConversely, take a general collection $p_1, \\ldots, p_{16}$ of $16$ points on a curve\n$D$ of bidegree $(3,3)$. 
The divisor $p_1 + \\cdots + p_{16} - 2H$ is of degree $4$\non a curve $D$ of genus $4$; it is therefore effective, say\n\[p_1 + \\cdots + p_{16} - 2H \\sim x + y + z + w.\\]\nWe then claim there is a smooth cubic containing\n$D$ and the general $2$-secant lines $\\overline{xy}$ and $\\overline{zw}$ to $D$.\nEquivalently, we want to show the map\n\[\\{(S, E_1, E_2) : E_1, E_2 \\subset S \\supset D\\} \\mapsto \\{(E_1, E_2)\\},\\]\nfrom the space of smooth cubic surfaces $S$ containing $D$ with a choice\nof pair of disjoint lines $(E_1, E_2)$,\nto the space of pairs of $2$-secant lines to $D$, is dominant;\nfor this it suffices to check the vanishing of\n$H^1(N_S(-D-E_1 - E_2))$.\nBut $N_S(-D-E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$,\nwhich has no higher cohomology by Kawamata-Viehweg vanishing.\n\nWe now look for a curve $C \\subset S$ of class\n\[[C] = 6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6,\\]\nwhich is of degree $8$ and genus $6$.\nBy Lemma~\\ref{cubclass}, we conclude that $C$ is smooth and irreducible;\nby results of \\cite{keem}, this implies the general curve of this class is a BN-curve.\nNote that\n\[\\dim H^0(\\oo_S(6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 14 \\quad \\text{and} \\quad \\dim H^0(\\oo_S(E_1 + E_2)) = 1.\\]\nApplying Lemma~\\ref{pic-res-enough},\nwe conclude that some curve of our given class meets $D$ transversely\nat $p_1, p_2, \\ldots, p_{16}$, as desired.\n\nSince general points impose independent conditions on the $16$-dimensional\nspace of curves of bidegree $(3, 3)$, a general collection of $16$ points\nwill not lie on any curve of bidegree $(3,3)$. Our collection of points\nis therefore not general.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^4$ be a general BN-curve of degree $9$ and genus $6$.\nThen the intersection $C \\cap H$ is a general collection of $9$ points\non an elliptic normal curve\nin $H \\simeq \\pp^3$. 
In particular,\nit is not a collection of $9$ general points.\n\\end{prop}\n\\begin{proof}\nSince $\\dim H^0(\\oo_C(2)) = 13$ and $\\dim H^0(\\oo_{\\pp^4}(2)) = 15$,\nwe conclude that $C$ lies on the intersection of two quadrics.\nRestricting these quadrics to $H \\simeq \\pp^3$,\nwe see that $C \\cap H$ lies on the intersection of two quadrics,\nwhich is an elliptic normal curve.\n\nConversely, let $p_1, p_2, \\ldots, p_9$ be a collection of $9$ points\nlying on an elliptic normal curve $D \\subset \\pp^3$.\nSince $D$ is an elliptic curve, there exists (a unique) $x \\in D$\nwith\n\\[\\oo_D(p_1 + \\cdots + p_9)(-2) \\simeq \\oo_D(x).\\]\nLet $M$ be a general line through $x$.\nWe then claim there is a quartic Del Pezzo surface containing\n$D$ and the general $1$-secant line $M$.\nEquivalently, we want to show the map\n\\[\\{(S, E_1) : E_1 \\subset S \\supset D\\} \\mapsto \\{E_1\\},\\]\nfrom the space of smooth Del Pezzo surfaces $S$ containing $D$ with a choice\nof line $E_1$,\nto the space of $1$-secant lines to $D$, is dominant;\nfor this it suffices to check the vanishing of\n$H^1(N_S(-D-E_1))$.\nBut $N_S(-D-E_1)$ is a direct sum of two copies of the line bundle\n$3L - 2E_1 - E_2 - E_3 - E_4 - E_5$,\nwhich has no higher cohomology by Kodaira vanishing.\n\nWe now consider curves $C \\subset S$ of class\n\\[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5,\\]\nwhich are of degree $9$ and genus $6$.\nBy Lemma~\\ref{qclass}, we conclude that $C$ is smooth and irreducible;\nby results of \\cite{iliev}, this implies the general curve of this class is a BN-curve.\nNote that\n\\[\\dim H^0(\\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5)) = 15 \\quad \\text{and} \\quad \\dim H^0(\\oo_S(3L - E_2 - E_3 - E_4 - E_5)) = 6.\\]\nApplying Lemma~\\ref{pic-res-enough},\nwe conclude that some curve of our given class meets $D$ transversely\nat $p_1, p_2, \\ldots, p_9$, as desired.\n\nBy Corollary~1.4 of \\cite{firstpaper}, there does not exist an elliptic\nnormal curve in $\\pp^3$ passing 
through $9$ general points.\n\\end{proof}\n\n\\subsection{The Final Two Exceptional Cases}\n\nWe have exactly two remaining exceptional cases: the intersection\nof a general BN-curve of degree $6$ and genus $2$ in $\\pp^3$ with a quadric,\nand the intersection of a general BN-curve of degree $10$ and genus $7$ in $\\pp^4$\nwith a hyperplane. We will show in the first case that the intersection fails\nto be general since $C$ is the projection of a curve $\\tilde{C} \\subset \\pp^4$,\nwhere $\\tilde{C}$ lies on a surface of small degree (a cubic scroll).\nIn the second case, the intersection fails to be general since $C$\nis contained in a quadric hypersurface.\n\n\\begin{prop}\nLet $C \\subset \\pp^3$ be a general BN-curve of degree $6$ and genus $2$.\nThen the intersection $C \\cap Q$ is a collection of $12$ points\nlying on a curve $D \\subset Q \\simeq \\pp^1 \\times \\pp^1$, which is general subject\nto the following conditions:\n\\begin{enumerate}\n\\item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$).\n\\item The curve $D$ has two nodes (and so is in particular of geometric genus $2$).\n\\item The divisors $\\oo_D(2,2)$ and $C \\cap D$ are linearly equivalent\nwhen pulled back to the normalization of $D$.\n\\end{enumerate}\nIn particular, it is not a collection of $12$ general points.\n\\end{prop}\n\\begin{proof}\nWe first observe that $\\dim H^0(\\oo_C(1)) = 5$, so $C$ is the projection from a point $p \\in \\pp^4$\nof a curve $\\tilde{C} \\subset \\pp^4$ of degree $6$ and genus $2$.\nWrite $\\pi \\colon \\pp^4 \\dashrightarrow \\pp^3$ for the map of projection\nfrom $p$, and define the quadric hypersurface $\\tilde{Q} = \\pi^{-1}(Q)$.\n\nLet $S \\subset \\pp^4$ be the surface swept out by joining pairs\nof points on $\\tilde{C}$ conjugate under the hyperelliptic involution.\nBy Corollary~13.3 of \\cite{firstpaper}, $S$ is a cubic surface;\nin particular, since $S$ has a ruling, $S$ is a cubic scroll.\nWrite $H$ for 
the hyperplane section on $S$, and $F$ for the class\nof a line of the ruling.\n\nThe curve $\\tilde{D} = \\tilde{Q} \\cap S$\n(which for $C$ general is smooth by Kleiman transversality) is of degree $6$ and genus $2$.\nBy construction, the intersection $C \\cap Q$ lies on $D = \\pi(\\tilde{D})$. Since $D = \\pi(S) \\cap Q$,\nit is evidently a curve of bidegree $(3, 3)$ on $Q \\simeq \\pp^1 \\times \\pp^1$.\nMoreover, since $\\tilde{D}$ has genus $2$, the geometric genus of $D$ is $2$.\nIn particular, $D$ has two nodes.\n\nNext, we note that on $S$, the curve $\\tilde{C}$ has class $2H$. Indeed, if $[\\tilde{C}] = a \\cdot H + b \\cdot F$,\nthen $a = \\tilde{C} \\cdot F = 2$ and $3a + b = \\tilde{C} \\cdot H = 6$; solving for $a$ and $b$, we obtain\n$a = 2$ and $b = 0$.\nConsequently, $\\tilde{C} \\cap \\tilde{D}$ has class $2H$ on $\\tilde{D}$.\nOr equivalently, $C \\cap D = \\pi(\\tilde{C} \\cap \\tilde{D})$ has class\nequal to $\\oo_D(2) = \\oo_D(2,2)$ when pulled back to the normalization.\n\nConversely, take $12$ points on $D$ satisfying our assumptions. 
Write\n$\\tilde{D}$ for the normalization of $D$, and $p_1, p_2, \\ldots, p_{12}$\nfor the preimages of our points in $\\tilde{D}$.\nWe begin by noting that $\\dim H^0(\\oo_{\\tilde{D}}(1)) = 5$,\nso $D$ is the projection from a point $p \\in \\pp^4$\nof $\\tilde{D} \\subset \\pp^4$ of degree $6$ and genus $2$.\nAs before, write $\\pi \\colon \\pp^4 \\dashrightarrow \\pp^3$ for the map of projection\nfrom $p$, and define the quadric hypersurface $\\tilde{Q} = \\pi^{-1}(Q)$.\n\nAgain, we let $S \\subset \\pp^4$ be the surface swept out by joining pairs\nof points on $\\tilde{D}$ conjugate under the hyperelliptic involution.\nAs before, $S$ is a cubic scroll;\nwrite $H$ for the hyperplane section on $S$, and $F$ for the class\nof a line of the ruling.\nNote that $\\tilde{D} \\subseteq \\tilde{Q} \\cap S$; and since both\nsides are curves of degree $6$, we have $\\tilde{D} = \\tilde{Q} \\cap S$.\n\nIt now suffices to find a curve $\\tilde{C} \\subset S$ of class $2H$,\nmeeting $\\tilde{D}$ transversely \nin $p_1, \\ldots, p_{12}$. 
\nFor this, note that\n\[\\dim H^0(\\oo_S(2H)) = 12 \\quad \\text{and} \\quad \\dim H^0(\\oo_S) = 1.\\]\nApplying Lemma~\\ref{pic-res-enough} yields the desired conclusion.\n\nIt remains to see from this description that\n$C \\cap Q$ is not a general collection of $12$ points.\nFor this, we first note that such a curve $D \\subset \\pp^1 \\times \\pp^1$\nis the same as specifying an abstract curve of genus $2$, two line bundles\nof degree $3$ (corresponding to the pullbacks of $\\oo_{\\pp^1}(1)$ from each factor),\nand a basis-up-to-scaling for their space of sections (giving us two maps $D \\to \\pp^1$).\nSince there is a $3$-dimensional moduli space of abstract curves $D$ of genus $2$,\nand $\\dim \\pic^3(D) = 2$, and there is a $3$-dimensional family of bases-up-to-scaling\nof a $2$-dimensional vector space, the dimension of the space\nof such curves $D$ is $3 + 2 + 2 + 3 + 3 = 13$.\nOur condition $p_1 + \\cdots + p_{12} \\sim 2H$ then implies\ncollections of such points on a fixed $D$ are in bijection with\nelements of $\\pp \\oo_D(2H) \\simeq \\pp^{10}$. Putting this together,\nthere is an (at most) $13 + 10 = 23$ dimensional family of such collections of points.\nBut $\\sym^{12}(Q)$ has dimension $24$. In particular, collections of such \npoints cannot be general.\n\\end{proof}\n\n\\begin{prop}\nLet $C \\subset \\pp^4$ be a general BN-curve of degree $10$ and genus $7$.\nThen the intersection $C \\cap H$ is a general collection of $10$ points\non a quadric in $H \\simeq \\pp^3$. 
In particular,\nit is not a collection of $10$ general points.\n\\end{prop}\n\\begin{proof}\nSince $\\dim H^0(\\oo_C(2)) = 14$ and $\\dim H^0(\\oo_{\\pp^4}(2)) = 15$,\nwe conclude that $C$ lies on a quadric.\nRestricting this quadric to $H \\simeq \\pp^3$,\nwe see that $C \\cap H$ lies on a quadric.\n\nFor the converse, we take general points $p_1, \\ldots, p_{10}$\nlying on a general (thus smooth) quadric~$Q$.\nSince $\\dim H^0(\\oo_Q(3,3)) = 16$, we may find a curve $D \\subset Q$\nof type $(3,3)$ passing through $p_1, \\ldots, p_{10}$.\nAs divisor classes on $D$, suppose that\n\[p_1 + p_2 + \\cdots + p_{10} - H \\sim x + y + z + w.\\]\nWe now pick a general (quartic) rational normal curve $R \\subset \\pp^4$\nwhose hyperplane section is $\\{x, y, z, w\\}$.\n\nWe then claim there is a smooth sextic K3 surface $S \\subset \\pp^4$\ncontaining $D$ and the general $4$-secant rational normal curve $R$ to $D$.\nEquivalently, we want to show the map\n\[\\{(S, R) : R \\subset S\\} \\mapsto \\{(R, D)\\},\\]\nfrom the space of smooth sextic K3 surfaces $S$,\nto the space of pairs $(R, D)$ where $R$ is a rational normal curve\nmeeting the canonical curve $D = S \\cap H$ in four points, is dominant;\nfor this it suffices to check the vanishing of\n$H^1(N_S(-H-R))$ at any smooth sextic K3 containing a rational normal curve $R$\n(where $H = [D]$ is the hyperplane class on $S$).\nWe first note that a sextic K3 surface $S$ containing a rational normal curve $R$\nexists, by Theorem~1.1 of~\\cite{knutsen}.\nOn this K3 surface, our vector bundle $N_S(-H-R)$ is the direct sum of the line bundles $H - R$ and $2H - R$;\nconsequently, it suffices to show $H^1(\\oo_S(n)(-R)) = 0$ for $n \\geq 1$.\nFor this we use the exact sequence\n\[0 \\to \\oo_S(n)(-R) \\to \\oo_S(n) \\to \\oo_S(n)|_R = \\oo_R(n) \\to 0,\\]\nand note that $H^1(\\oo_S(n)) = 0$ by Kodaira vanishing,\nwhile $H^0(\\oo_S(n)) \\to H^0(\\oo_R(n))$ is surjective since $R$ is projectively normal.\nThis 
shows the existence of the desired K3 surface $S$ containing\n$D$ and the general $4$-secant rational normal curve $R$.\n\nNext, we claim that the linear series $H + R$ on $S$ is basepoint-free.\nTo see this, we first note that $H$ is basepoint free, so any basepoints\nmust lie on the curve $R$. Now the short exact sequence of sheaves\n\\[0 \\to \\oo_S(H) \\to \\oo_S(H + R) \\to \\oo_S(H + R)|_R \\to 0\\]\ngives a long exact sequence in cohomology\n\\[\\cdots \\to H^0(\\oo_S(H + R)) \\to H^0(\\oo_S(H + R)|_R) \\to H^1(\\oo_S(H)) \\to \\cdots.\\]\n\nSince the complete linear series\nattached to $\\oo_S(H + R)|_R \\simeq \\oo_{\\pp^1}(2)$ is basepoint-free,\nit suffices to show that \n$H^0(\\oo_S(H + R)) \\to H^0(\\oo_S(H + R)|_R)$ is surjective. For this,\nit suffices to note that $H^1(\\oo_S(H)) = 0$ by Kodaira vanishing.\n\nThus, $H + R$ is basepoint-free. In particular, the Bertini\ntheorem implies the general curve of class $H + R$ is smooth.\nSuch a curve is of degree~$10$ and genus~$7$;\nin particular it is a BN-curve by results \nof \\cite{iliev}.\nSo it suffices to find a curve of class $H + R$ on $S$\npassing through $p_1, p_2, \\ldots, p_{10}$.\nBy construction, as divisors on $D$, we have\n\\[p_1 + p_2 + \\cdots + p_{10} \\sim H + R.\\]\nBy Lemma~\\ref{pic-res-enough}, it suffices to show\n$\\dim H^0(\\oo_S(H + R)) = 8$ and $\\dim H^0(\\oo_S(R)) = 1$.\n\nMore generally,\nfor any smooth curve $X \\subset S$\nof genus $g$,\nwe claim $\\dim H^0(\\oo_S(X)) = 1 + g$. 
To see this, we use the exact sequence\n\[0 \\to \\oo_S \\to \\oo_S(X) \\to \\oo_S(X)|_X \\to 0,\\]\nwhich gives rise to a long exact sequence in cohomology\n\[0 \\to H^0(\\oo_S) \\to H^0(\\oo_S(X)) \\to H^0(\\oo_S(X)|_X) \\to H^1(\\oo_S) \\to \\cdots.\\]\nBecause $H^1(\\oo_S) = 0$, we thus have\n\\begin{align*}\n\\dim H^0(\\oo_S(X)) &= \\dim H^0(\\oo_S(X)|_X) + \\dim H^0(\\oo_S) \\\\\n&= \\dim H^0(K_S(X)|_X) + 1 \\\\\n&= \\dim H^0(K_X) + 1 \\\\\n&= g + 1.\n\\end{align*}\nIn particular, $\\dim H^0(\\oo_S(H + R)) = 8$ and $\\dim H^0(\\oo_S(R)) = 1$,\nas desired.\n\nSince general points impose independent conditions on the $10$-dimensional\nspace of quadrics, a general collection of $10$ points\nwill not lie on a quadric. In particular, our hyperplane\nsection here is not a general collection of $10$ points.\n\\end{proof}\n\n\\chapter*{Preface}\n\n\\holmes{The scribes didn't have a large enough set from which to determine patterns.}{Brandon Sanderson}{The Hero of Ages}\n\n\\bigskip\\noindent\nThis partial solution manual to our book {\\em Introducing Monte Carlo Methods with R},\npublished by Springer Verlag in the {\\sf Use R!} series, in December 2009, has been compiled \nboth from our own solutions and from homeworks \nwritten by the following Paris-Dauphine students in the 2009-2010 Master in Statistical Information Processing (TSI): \nThomas Bredillet, Anne Sabourin, and Jiazi Tang. Whenever appropriate, the \\R code\nof those students has been identified by a \\verb=# (C.) Name= in the text. 
\nWe are grateful to those students for allowing us to use their solutions.\nA few solutions in Chapter 4 are also taken {\\em verbatim} from\nthe solution manual to {\\em Monte Carlo Statistical Methods} compiled by Roberto Casarin from the University of Brescia\n(and only available to instructors from Springer Verlag). \n\nWe also incorporated in this manual indications about some typos found in the first printing that came to our\nattention while composing this solution manual. Following the new ``print on demand''\nstrategy of Springer Verlag, these typos will not be found in the versions of the book purchased in the coming months and should\nthus be ignored. (Christian Robert's book webpage at Universit\\'e Paris-Dauphine \\verb+www.ceremade.dauphine.fr/~xian/books.html+ \nis a better reference for the ``complete'' list of typos.)\n\nReproducing the warning Jean-Michel Marin and Christian P.~Robert \nwrote at the start of the solution manual to {\\em Bayesian Core}, let us stress here that\nsome self-study readers of {\\em Introducing Monte Carlo Methods with {\\sf R}} may come to the realisation that the solutions provided\nhere are too sketchy for them because the way we wrote those solutions assumes some minimal familiarity with the maths,\nthe probability theory and with the statistics behind the arguments. 
There is unfortunately a limit to the time and\nto the efforts we can put in this solution manual and studying {\\em Introducing Monte Carlo Methods with {\\sf R}} \nrequires some prerequisites in maths\n(such as matrix algebra and Riemann integrals), in probability theory (such as the use of joint and conditional densities)\nand some bases of statistics (such as the notions of inference, sufficiency and confidence sets) that we cannot cover here.\nCasella and Berger (2001) is a good reference in case a reader is lost with the ``basic'' concepts or sketchy math derivations.\n\nWe obviously welcome solutions, comments and questions on possibly erroneous or ambiguous solutions, as well as suggestions for\nmore elegant or more complete solutions: since this manual is distributed both freely and independently\nfrom the book, it can be updated and corrected [almost] in real time! Note however that the {\\sf R} codes given in the following\npages are not optimised because we prefer to use simple and understandable codes, rather than condensed and\nefficient codes, both for time constraints and for pedagogical purposes: some codes were written by our students.\nTherefore, if you find better [meaning, more efficient/faster] codes than those provided along those pages, we would be \nglad to hear from you, but that does not mean that we will automatically substitute your {\\sf R} code for the current one,\nbecause readability is also an important factor.\n\nA final request: this manual comes in two versions, one corresponding to the odd-numbered exercises and \nfreely available to everyone, and another one corresponding to a larger collection of exercises and with restricted access\nto instructors only. Duplication and dissemination of the more extensive ``instructors only'' version are obviously prohibited since,\nif the solutions to most exercises become freely available, the appeal of using our book as a textbook will be severely\nreduced. 
Therefore, if you happen to possess an extended version of the manual, please refrain from distributing\nit and from reproducing it. \n\n\\bigskip\\noindent\n{\\bf Sceaux and Gainesville\\hfil Christian P.~Robert~and~George Casella\\break\n\\today\\hfill}\n\n\\chapter{Gibbs Samplers}\n\n\\subsection{Exercise \\ref{exo:margikov}}\n\nThe density $g_{t}$ of $(X_{t},Y_{t})$ in Algorithm \\ref{al:TSGibbs} is decomposed as\n\\begin{align*}\ng_{t}(X_{t},Y_{t}|X_{t-1},&\\dots X_{0},Y_{t-1},\\dots Y_{0}) \n= g_{t,X|Y}(X_{t}|Y_{t},X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0})\\\\\n&\\times g_{t,Y}(Y_{t}|X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0})\n\\end{align*}\nwith\n$$\ng_{t,Y}(Y_{t}|X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0})=f_{Y|X}(Y_{t}|X_{t-1})\n$$\nwhich only depends on $X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0}$ through\n$X_{t-1}$, according to Step 1. of Algorithm \\ref{al:TSGibbs}. Moreover,\n$$\ng_{t,X|Y}(X_{t}|Y_{t},X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0})=f_{X|Y}(X_{t}|Y_{t})\n$$\nonly depends on $X_{t-2},\\dots X_{0},Y_{t},\\dots Y_{0}$ through $Y_{t}$.\nTherefore, \n$$\ng_{t}(X_{t},Y_{t}|X_{t-1},\\dots X_{0},Y_{t-1},\\dots Y_{0})=g_{t}(X_{t},Y_{t}|X_{t-1})\\,,\n$$\nwhich shows this is truly a homogeneous Markov chain.\n\n\\subsection{Exercise \\ref{pb:multiAR}}\n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item The (normal) full conditionals are defined in Example\n\\ref{ex:normgibbs2}. An \\R program that implements this Gibbs\nsampler is\n\\begin{verbatim}\n# (C.) 
Anne Sabourin, 2009\nT=500 ;p=5 ;r=0.25\nX=cur=rnorm(p)\nfor (t in 1 :T){\n for (j in 1 :p){\n m=sum(cur[-j])/(p-1)\n cur[j]=rnorm(1,(p-1)*r*m/(1+(p-2)*r),\n sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))\n }\n X=cbind(X,cur)\n }\npar(mfrow=c(1,5))\nfor (i in 1:p){\n hist(X[i,],prob=TRUE,col=\"wheat2\",xlab=\"\",main=\"\")\n curve(dnorm(x),add=TRUE,col=\"sienna\",lwd=2)}\n\\end{verbatim}\n\n\\item Using instead\n\\begin{verbatim}\nJ=matrix(1,ncol=5,nrow=5)\nI=diag(c(1,1,1,1,1))\ns=(1-r)*I+r*J\nrmnorm(500,s)\n\\end{verbatim}\nand checking the duration by \\verb+system.time+ shows \\verb=rmnorm= is about five times\nfaster (and exact!).\n\\item If we consider the constraint\n$$\n\\sum_{i=1}^{m} x_i^2 \\le \\sum_{i=m+1}^{p} x_i^2\n$$\nit imposes a truncated normal full conditional on {\\em all} components. Indeed, for $1\\le i\\le m$,\n$$\nx^2_i \\le \\sum_{j=m+1}^{p} x_j^2 - \\sum_{j=1,j\\ne i}^{m} x_j^2\\,,\n$$\nwhile, for $i>m$,\n$$\nx^2_i \\ge \\sum_{j=m+1,j\\ne i}^{p} x_j^2 - \\sum_{j=1}^{m} x_j^2\\,.\n$$\nNote that the upper bound on $x_i^2$ when $i\\le m$ {\\em cannot be negative} if we start the Markov chain under the constraint.\nThe \\verb#cur[j]=rnorm(...# line in the above \\R program thus needs to be modified into a truncated normal distribution. 
\nAn alternative is to use a hybrid solution (see Section \\ref{sec:MwithinG} for the validation): \nwe keep generating the $x_i$'s from the same plain normal full conditionals as before and we only\nchange the components for which the constraint remains valid, i.e.\n\\begin{verbatim}\n for (j in 1:m){\n mea=sum(cur[-j])/(p-1)\n prop=rnorm(1,(p-1)*r*mea/(1+(p-2)*r),\n sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))\n if (sum(cur[(1:m)[-j]]^2+prop^2) qpois(.9999,lam[j-1])\n[1] 6\n\\end{verbatim}\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:Exp-Improper}} \n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item The \\R program that produced Figure \\ref{fig:Exp-Improper} is\n\\begin{verbatim}\nnsim=10^3\nX=Y=rep(0,nsim)\nX[1]=rexp(1) #initialize the chain\nY[1]=rexp(1) #initialize the chain\nfor(i in 2:nsim){\n X[i]=rexp(1,rate=Y[i-1])\n Y[i]=rexp(1,rate=X[i])\n }\nst=0.1*nsim\npar(mfrow=c(1,2),mar=c(4,4,2,1))\nhist(X,col=\"grey\",breaks=25,xlab=\"\",main=\"\")\nplot(cumsum(X)[(st+1):nsim]/(1:(nsim-st)),type=\"l\",ylab=\"\")\n\\end{verbatim}\n\\item Using the Hammersley--Clifford Theorem {\\em per se} means using $f(y|x)/f(x|y)=x/y$ which is {\\em not integrable}.\nIf we omit this major problem, we have\n$$\nf(x,y) = \\frac{x\\,\\exp\\{-xy\\}}{x\\, {\\displaystyle \\int \\dfrac{\\text{d}y}{y}}} \\propto \\exp\\{-xy\\}\n$$\n(except that the proportionality term is infinity!).\n\\item If we constrain both conditionals to $(0,B)$, the Hammersley--Clifford Theorem gives\n\\begin{align*}\nf(x,y) &= \\frac{\\exp\\{-xy\\}/(1-e^{-xB})}{{\\displaystyle \\int \\dfrac{1-e^{-yB}}{y(1-e^{-xB})}\\,\\text{d}y}}\\\\\n &= \\frac{\\exp\\{-xy\\}}{{\\displaystyle \\int \\dfrac{1-e^{-yB}}{y}\\,\\text{d}y}}\\\\\n &\\propto \\exp\\{-xy\\}\\,,\n\\end{align*}\nsince the conditional exponential distributions are truncated. This joint distribution is then well-defined on\n$(0,B)^2$. 
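The truncated exponential conditionals can be simulated by inversion of the cdf: for rate $\\lambda$ and truncation at $B$, the cdf and its inverse are\n\[\nF(x) = \\frac{1-e^{-\\lambda x}}{1-e^{-\\lambda B}}\\,, \\qquad\nF^{-1}(u) = -\\frac{1}{\\lambda}\\,\\log\\left(1-u\\left(1-e^{-\\lambda B}\\right)\\right)\\,, \\quad u\\in(0,1)\\,,\n\\]\nso simulating $U\\sim\\mathcal{U}(0,1)$ and returning $F^{-1}(U)$ produces an exact draw from the truncated distribution.\n\n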
A Gibbs sampler simulating from this joint distribution is for instance
\begin{verbatim}
B=10
X=Y=rep(0,nsim)
X[1]=rexp(1) #initialize the chain
Y[1]=rexp(1) #initialize the chain
for(i in 2:nsim){ #inversion method
 X[i]=-log(1-runif(1)*(1-exp(-B*Y[i-1])))/Y[i-1]
 Y[i]=-log(1-runif(1)*(1-exp(-B*X[i])))/X[i]
 }
st=0.1*nsim
marge=function(x){ (1-exp(-B*x))/x}
nmarge=function(x){ 
 marge(x)/integrate(marge,low=0,up=B)$val}
par(mfrow=c(1,2),mar=c(4,4,2,1))
hist(X,col="wheat2",breaks=25,xlab="",main="",prob=TRUE)
curve(nmarge,add=T,lwd=2,col="sienna")
plot(cumsum(X)[(st+1):nsim]/c(1:(nsim-st)),type="l",
 lwd=1.5,ylab="")
\end{verbatim}
where the simulation of the truncated exponential is done by inverting the cdf (and where the
true marginal is represented against the histogram).
\end{enumerate}

\subsection{Exercise \ref{pb:firsthier}}

Let us define
\begin{eqnarray*}
f(x) & = & \frac{b^{a}x^{a-1}e^{-bx}}{\Gamma(a)}\,,\\
g(x) & = & \frac{1}{x}=y\,,\end{eqnarray*}
then we have
\begin{eqnarray*}
f_{Y}(y) & = & f_{X}\left(g^{-1}(y)\right)\mid\frac{d}{dy}g^{-1}(y)\mid\\
 & = & \frac{b^{a}}{\Gamma(a)}\left({1}/{y}\right)^{a-1}\exp\left(-{b}/{y}\right)\frac{1}{y^{2}}\\
 & = & \frac{b^{a}}{\Gamma(a)}\left({1}/{y}\right)^{a+1}\exp\left(-{b}/{y}\right)\,,
\end{eqnarray*}
which is the ${\cal IG}(a,b)$ density.

\subsection{Exercise \ref{pb:truncnorm}}

{\bf Warning: The function \verb+rtnorm+ requires a predefined \verb+sigma+ that should be part
of the arguments, as in\\ 
\verb+rtnorm=function(n=1,mu=0,lo=-Inf,up=Inf,sigma=1)+.}\\

Since the \verb+rtnorm+ function is exact (within the precision of the \verb+qnorm+ and \verb+pnorm+
functions), the implementation in \R is 
straightforward:
\begin{verbatim}
h1=rtnorm(10^4,lo=-1,up=1)
h2=rtnorm(10^4,up=1)
h3=rtnorm(10^4,lo=3)
par(mfrow=c(1,3),mar=c(4,4,2,1))
hist(h1,freq=FALSE,xlab="x",xlim=c(-1,1),col="wheat2")
dnormt=function(x){ dnorm(x)/(pnorm(1)-pnorm(-1))}
curve(dnormt,add=T,col="sienna")
hist(h2,freq=FALSE,xlab="x",xlim=c(-4,1),col="wheat2")
dnormt=function(x){ dnorm(x)/pnorm(1)}
curve(dnormt,add=T,col="sienna")
hist(h3,freq=FALSE,xlab="x",xlim=c(3,5),col="wheat2")
dnormt=function(x){ dnorm(x)/pnorm(-3)}
curve(dnormt,add=T,col="sienna")
\end{verbatim}

\subsection{Exercise \ref{pb:freq_2}}

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Since $(j=1,2)$
$$
(1-\theta_1-\theta_2)^{x_5+\alpha_3-1} = \sum_{i=0}^{x_5+\alpha_3-1}
{x_5+\alpha_3-1\choose i} (1-\theta_j)^i(-\theta_{3-j})^{x_5+\alpha_3-1-i}\,,
$$
when $\alpha_3$ is an integer, it is clearly possible to express $\pi(\theta_1,\theta_2|x)$ as
a sum of terms that are products of a polynomial function of $\theta_1$ and of a polynomial 
function of $\theta_2$. 
It is therefore straightforward to integrate those terms in either $\theta_1$ 
or $\theta_2$.
\item For the same reason as above, rewriting $\pi(\theta_1,\theta_2|x)$ as a density in $(\theta_1,\xi)$
leads to a product of polynomials in $\theta_1$, all of which can be expanded and integrated in $\theta_1$,
producing in the end a sum of functions of the form 
$$
\xi^{\delta}\big/(1+\xi)^{x_1+x_2+x_5+\alpha_1+\alpha_3-2}\,,
$$
namely a mixture of $F$ densities.
\item The Gibbs sampler based on (\ref{eq:tannerFull}) is available in the \verb+mcsm+ package.
\end{enumerate}

\subsection{Exercise \ref{pb:RBall}} 

{\bf Warning: There is a typo in Example 7.3, \verb+sigma+ should be defined as \verb+sigma2+
and \verb+sigma2{1}+ should be \verb+sigma2[1]+...}\\

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item In Example \ref{ex:betabi}, since $\theta|x\sim {\cal B}e(x+a,n-x+b)$, we clearly have $\BE[\theta \vert x] = (x+a)/(n+a+b)$ (with a missing
parenthesis). The comparison between the empirical average and the Rao--Blackwellized version is of the form
\begin{verbatim}
plot(cumsum(T)/(1:Nsim),type="l",col="grey50",
 xlab="iterations",ylab="",main="Example 7.2")
lines(cumsum((X+a))/((1:Nsim)*(n+a+b)),col="sienna")
\end{verbatim}
All comparisons are gathered in Figure \ref{fig:allrb's}.

\item In Example \ref{ex:Metab-1}, equation (\ref{eq:firstposterior}) defines two standard distributions as full
conditionals. 
Since $\\pi(\\theta|\\bx,\\sigma^2)$ is a normal distribution with mean and variance provided two lines\nbelow, we obviously have \n$$\n\\BE[\\theta | \\bx,\\sigma^2] = \\frac{\\sigma^2}{\\sigma^2+n \\tau^2}\\;\\theta_0 + \\frac{n\\tau^2}{\\sigma^2+n \\tau^2} \\;\\bar x\n$$\nThe modification in the \\R program follows\n\\begin{verbatim}\nplot(cumsum(theta)/(1:Nsim),type=\"l\",col=\"grey50\",\n xlab=\"iterations\",ylab=\"\",main=\"Example 7.3\")\nylab=\"\",main=\"Example 7.3\")\nlines(cumsum(B*theta0+(1-B)*xbar)/(1:Nsim)),col=\"sienna\")\n\\end{verbatim}\n\n\\item The full conditionals of Example \\ref{ex:Metab-2} given in Equation (\\ref{eq:onewayfull})\nare more numerous but similarly standard, therefore \n$$\n\\BE[\\theta_i | \\bar X_i ,\\sigma^2] = \\frac{\\sigma^2 }{\\sigma^2+n_i \\tau^2} \\mu+\\frac{n_i \\tau^2 }{\\sigma^2+n_i \\tau^2}\\bar X_i\n$$\nfollows from this decomposition, with the \\R lines added to the \\verb+mcsm+ \\verb+randomeff+ function\n\\begin{verbatim}\nplot(cumsum(theta1)/(1:nsim),type=\"l\",col=\"grey50\",\n xlab=\"iterations\",ylab=\"\",main=\"Example 7.5\")\nlines(cumsum((mu*sigma2+n1*tau2*x1bar)/(sigma2+n1*tau2))/\n (1:nsim)),col=\"sienna\")\n\\end{verbatim}\n\n\\item In Example \\ref{ex:censoredGibbs}, the complete-data model is a standard normal model with\nvariance one, hence $\\BE[\\theta \\vert x, z ] = \\dfrac{m \\bar x +(n-m) \\bar z}{n}$. The additional lines\nin the \\R code are\n\\begin{verbatim}\nplot(cumsum(that)/(1:Nsim),type=\"l\",col=\"grey50\",\n xlab=\"iterations\",ylab=\"\",main=\"Example 7.6\")\nlines(cumsum((m/n)*xbar+(1-m/n)*zbar)/(1:Nsim)),\n col=\"sienna\")\n\\end{verbatim}\n\n\\item In Example \\ref{ex:5.7}, the full conditional on $\\lambda$,\n$\\lambda_i|\\beta,t_i,x_i \\sim \\CG (x_i+\\alpha,t_i+\\beta)$ and hence \n$\\BE[\\lambda_i|\\beta,t_i,x_i] = (x_i+\\alpha)/(t_i+\\beta)$. 
The corresponding addition
in the \R code is
\begin{verbatim}
plot(cumsum(lambda[,1])/(1:Nsim),type="l",col="grey50",
 xlab="iterations",ylab="",main="Example 7.12")
lines(cumsum((xdata[1]+alpha)/(Time[1]+beta))/(1:Nsim),
 col="sienna")
\end{verbatim}
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{Exercise715.jpg}}
\caption{\label{fig:allrb's}
Comparison of the convergences of the plain average with its Rao-Blackwellized counterpart for
five different examples. The Rao-Blackwellized version is plotted in {\sf sienna} red and is always more
stable than the original version.}
\end{figure}


\chapter{Metropolis-Hastings Algorithms}

\subsection{Exercise \ref{exo:AR}}

A simple \R program to simulate this chain is
\begin{verbatim}
# (C.) Jiazi Tang, 2009
x=1:10^4
x[1]=rnorm(1)
r=0.9
for (i in 2:10^4){
 x[i]=r*x[i-1]+rnorm(1) }
hist(x,freq=F,col="wheat2",main="")
curve(dnorm(x,sd=1/sqrt(1-r^2)),add=T,col="tomato")
\end{verbatim}

\subsection{Exercise \ref{exo:rho}}

When $q(y|x)=g(y)$, we have 
\begin{align*}
\rho(x,y) &= \min\left(\frac{f(y)}{f(x)} \frac{q(x|y)}{q(y|x)},1\right)\\
&= \min\left(\frac{f(y)}{f(x)} \frac{g(x)}{g(y)},1\right)\,.
\end{align*}
Since the acceptance probability satisfies
$$
\frac{f(y)}{f(x)} \frac{g(x)}{g(y)} \ge \frac{f(y)/g(y)}{\max f(x)/g(x)}
$$
it is larger for Metropolis--Hastings than for accept-reject.

\subsection{Exercise \ref{exo:mocho}}

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The first property follows
from a standard property of the normal distribution, namely that a linear transform of a normal
variate is again normal. The second one is a consequence of the decomposition $y = X\beta + \epsilon$, where
$\epsilon\sim\mathcal{N}_n(0,\sigma^2 I_n)$ is independent of $X\beta$. 
\n\\item This derivation is detailed in Marin and Robert (2007, Chapter 3, Exercise 3.9).\n\nSince\n$$\n\\by|\\sigma^2,X\\sim\\mathcal{N}_n(X\\tilde\\beta,\\sigma^2(I_n+n X(X^\\text{T} X)^{-1}X^\\text{T} ))\\,,\n$$\nintegrating in $\\sigma^2$ with $\\pi(\\sigma^2)=1/\\sigma^2$ yields\n\\begin{eqnarray*}\nf(\\by|X) & = & (n+1)^{-(k+1)/2}\\pi^{-n/2}\\Gamma(n/2)\\left[\\by^\\text{T} \\by\n -\\frac{n}{n+1}\\by^\\text{T} X(X^\\text{T} X)^{-1}X^\\text{T} \\by\\right. \\\\\n &&\\qquad -\\left.\\frac{1}{n+1}\\tilde\\beta^\\text{T} X^\\text{T} X\\tilde\\beta\\right]^{-n/2}.\n\\end{eqnarray*}\nUsing the \\R function \\verb+dmt(mnormt)+, we obtain the marginal density for the swiss dataset:\n\\begin{verbatim}\n> y=log(as.vector(swiss[,1]))\n> X=as.matrix(swiss[,2:6])\n> library(mnormt)\n> dmt(y,S=diag(length(y))+\n[1] 2.096078e-63\n\\end{verbatim}\nwith the prior value $\\tilde\\beta=0$.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:beta}} \n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item We generate an Metropolis-Hastings sample from the ${\\cal B}e(2.7,6.3)$ density using uniform simulations:\n\\begin{verbatim}\n# (C.) Thomas Bredillet, 2009\nNsim=10^4\na=2.7;b=6.3\nX=runif(Nsim)\nlast=X[1]\nfor (i in 1:Nsim) {\n cand=rbeta(1,1,1)\n alpha=(dbeta(cand,a,b)/dbeta(last,a,b))/\n\t (dbeta(cand,1,1)/dbeta(last,1,1))\n if (runif(1) length(unique(X))/5000\n[1] 0.458\n\\end{verbatim}\nIf instead we use a ${\\cal B}e(20,60)$ proposal, the modified lines in the \\R program are\n\\begin{verbatim}\ncand=rbeta(20,60,1)\nalpha=(dbeta(cand,a,b)/dbeta(last,a,b))/\n (dbeta(cand,20,60)/dbeta(last,20,60))\n\\end{verbatim}\nand the acceptance rate drops to zero! 
\n\n\\item In the case of a truncated beta, the following \\R program\n\\begin{verbatim}\nNsim=5000\na=2.7;b=6.3;c=0.25;d=0.75\nX=rep(runif(1),Nsim)\ntest2=function(){\n last=X[1]\n for (i in 1:Nsim){\n cand=rbeta(1,2,6)\n alpha=(dbeta(cand,a,b)/dbeta(last,a,b))/\n\t (dbeta(cand,2,6)/dbeta(last,2,6))\n if ((runif(1) length(x)/5000\n[1] 0.8374\n\\end{verbatim}\n\n\\item The Metropolis-Hastings ~algorithm with a Gamma $\\CG(4,7)$ candidate can be implemented as follows\n\\begin{verbatim}\n# (C.) Jiazi Tang, 2009\nX=rep(0,5000)\nX[1]=rgamma(1,4.3,6.2)\nfor (t in 2:5000){\n rho=(dgamma(X[t-1],4,7)*dgamma(g47[t],4.3,6.2))/\n (dgamma(g47[t],4,7)*dgamma(X[t-1],4.3,6.2))\n X[t]=X[t-1]+(g47[t]-X[t-1])*(runif(1) length(unique(X))/5000\n[1] 0.79\n\\end{verbatim}\n\n\\item The Metropolis-Hastings~algorithm with a Gamma $\\CG(5,6)$ candidate can be implemented as follows\n\\begin{verbatim}\n# (C.) Jiazi Tang, 2009\ng56=rgamma(5000,5,6)\nX[1]=rgamma(1,4.3,6.2)\nfor (t in 2:5000){\n rho=(dgamma(X[t-1],5,6)*dgamma(g56[t],4.3,6.2))/\n (dgamma(g56[t],5,6)*dgamma(X[t-1],4.3,6.2))\n X[t]=X[t-1]+(g56[t]-X[t-1])*(runif(1) length(unique(X))/5000\n[1] 0.7678\n\\end{verbatim}\nwhich is therefore quite similar to the previous proposal.\n\\end{enumerate} \n\n\\subsection{Exercise \\ref{exo:brakin}} \n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\arabic{enumi}.}\n\\item Using the candidate given in Example \\ref{ex:braking} mean using the \\verb+Braking+ \\R program of\nour package \\verb+mcsm+. 
In the earlier version, there is a missing link in the \R function which must
then be corrected by changing
\begin{verbatim}
data=read.table("BrakingData.txt",sep = "",header=T)
x=data[,1]
y=data[,2]
\end{verbatim}
into
\begin{verbatim}
x=cars[,1]
y=cars[,2]
\end{verbatim}
In addition, since the original \verb$Braking$ function does not return the simulated chains, a final line
\begin{verbatim}
list(a=b1hat,b=b2hat,c=b3hat,sig=s2hat)
\end{verbatim}
must be added into the function.
\item If we save the chains as \verb+mcmc=Braking()+ (note that we use $10^3$ simulations instead of $500$), 
the graphs assessing convergence can be plotted by
\begin{verbatim}
par(mfrow=c(3,3),mar=c(4,4,2,1))
plot(mcmc$a,type="l",xlab="",ylab="a");acf(mcmc$a) 
hist(mcmc$a,prob=T,main="",yla="",xla="a",col="wheat2")
plot(mcmc$b,type="l",xlab="",ylab="b");acf(mcmc$b) 
hist(mcmc$b,prob=T,main="",yla="",xla="b",col="wheat2")
plot(mcmc$c,type="l",xlab="",ylab="c");acf(mcmc$c) 
hist(mcmc$c,prob=T,main="",yla="",xla="c",col="wheat2")
\end{verbatim}
Autocorrelation graphs provided by \verb+acf+ show a strong correlation across iterations, while the raw plots
of the sequences show poor acceptance rates. The histograms are clearly unstable as well. 
These $10^3$ iterations
do not appear to be sufficient in this case.
\item Using 
\begin{verbatim}
> quantile(mcmc$a,c(.025,.975))
 2.5% 97.5%
-6.462483 12.511916
\end{verbatim}
and the same for $b$ and $c$ provides converging confidence intervals on the three parameters.
\end{enumerate}

\subsection{Exercise \ref{ex:challenger2}}

{\bf Warning: There is a typo in question b in that the candidate must also be a double-exponential for $\alpha$, since
there is no reason for $\alpha$ to be positive...}

\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\item The dataset {\tt challenger} is provided with the \verb+mcsm+ package, thus available as
\begin{verbatim}
> library(mcsm)
> data(challenger)
\end{verbatim}
Running a regular logistic regression is a simple call to \verb+glm+:
\begin{verbatim}
> temper=challenger[,2]
> failur=challenger[,1]
> summary(glm(failur~temper, family = binomial))

Deviance Residuals:
 Min 1Q Median 3Q Max
-1.0611 -0.7613 -0.3783 0.4524 2.2175

Coefficients:
 Estimate Std. Error z value Pr(>|z|)
(Intercept) 15.0429 7.3786 2.039 0.0415 *
temper -0.2322 0.1082 -2.145 0.0320 *
---
Signif. 
codes: 0 "***" 0.001 "**" 0.01 "*" 0.05 "." 0.1 " " 1

(Dispersion parameter for binomial family taken to be 1)

 Null deviance: 28.267 on 22 degrees of freedom
Residual deviance: 20.315 on 21 degrees of freedom
AIC: 24.315
\end{verbatim}
The MLE's and the associated covariance matrix are given by
\begin{verbatim}
> challe=summary(glm(failur~temper, family = binomial))
> beta=as.vector(challe$coef[,1])
> challe$cov.unscaled
 (Intercept) temper
(Intercept) 54.4441826 -0.79638547
temper -0.7963855 0.01171512
\end{verbatim}
The result of this estimation can be checked by
\begin{verbatim}
plot(temper,failur,pch=19,col="red4",
xlab="temperatures",ylab="failures")
curve(1/(1+exp(-beta[1]-beta[2]*x)),add=TRUE,col="gold2",lwd=2)
\end{verbatim}
and the curve shows a very clear impact of the temperature.

\item The Metropolis--Hastings resolution is based on the \verb+challenge(mcsm)+ function, using the same
prior on the coefficients, $\alpha\sim\mathcal{N}(0,25)$, $\beta\sim\mathcal{N}(0,25/s^2_x)$, where $s^2_x$
is the empirical variance of the temperatures.
\begin{verbatim}
Nsim=10^4
x=temper
y=failur
sigmaa=5
sigmab=5/sd(x)

lpost=function(a,b){
 sum(y*(a+b*x)-log(1+exp(a+b*x)))+
 dnorm(a,sd=sigmaa,log=TRUE)+dnorm(b,sd=sigmab,log=TRUE)
 }

a=b=rep(0,Nsim)
a[1]=beta[1]
b[1]=beta[2]
#scale for the proposals
scala=sqrt(challe$cov.un[1,1])
scalb=sqrt(challe$cov.un[2,2])

for (t in 2:Nsim){
 propa=a[t-1]+sample(c(-1,1),1)*rexp(1)*scala
 if (log(runif(1))<lpost(propa,b[t-1])-lpost(a[t-1],b[t-1]))
 a[t]=propa else a[t]=a[t-1]
 propb=b[t-1]+sample(c(-1,1),1)*rexp(1)*scalb
 if (log(runif(1))<lpost(a[t],propb)-lpost(a[t],b[t-1]))
 b[t]=propb else b[t]=b[t-1]
 }
> length(unique(a))/Nsim
[1] 0.1031
> length(unique(b))/Nsim
[1] 0.1006
\end{verbatim}
The resulting acceptance rate is low but still acceptable.
\item Exploring the output can be done via graphs as 
follows
\begin{verbatim}
par(mfrow=c(3,3),mar=c(4,4,2,1))
plot(a,type="l",xlab="iterations",ylab=expression(alpha))
hist(a,prob=TRUE,col="wheat2",xlab=expression(alpha),main="")
acf(a,ylab=expression(alpha))
plot(b,type="l",xlab="iterations",ylab=expression(beta))
hist(b,prob=TRUE,col="wheat2",xlab=expression(beta),main="")
acf(b,ylab=expression(beta))
plot(a,b,type="l",xlab=expression(alpha),ylab=expression(beta))
plot(temper,failur,pch=19,col="red4",
 xlab="temperatures",ylab="failures")
for (t in seq(100,Nsim,le=100)) curve(1/(1+exp(-a[t]-b[t]*x)),
 add=TRUE,col="grey65",lwd=2)
curve(1/(1+exp(-mean(a)-mean(b)*x)),add=TRUE,col="gold2",lwd=2.5)
postal=rep(0,1000);i=1
for (t in seq(100,Nsim,le=1000)){ postal[i]=lpost(a[t],b[t]);i=i+1}
plot(seq(100,Nsim,le=1000),postal,type="l",
 xlab="iterations",ylab="log-posterior")
abline(h=lpost(a[1],b[1]),col="sienna",lty=2)
\end{verbatim}
which shows a slow convergence of the algorithm (see the \verb+acf+ graphs in Figure \ref{fig:mhuttle}!)
\item The predictions of failure are given by
\begin{verbatim}
> mean(1/(1+exp(-a-b*50)))
[1] 0.6898612
> mean(1/(1+exp(-a-b*60)))
[1] 0.4892585
> mean(1/(1+exp(-a-b*70)))
[1] 0.265691
\end{verbatim}
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{mhuttle.jpg}}
\caption{\label{fig:mhuttle}
Graphical checks of the convergence of the Metropolis--Hastings algorithm associated with
the {\sf challenger} dataset and a logistic regression model.}
\end{figure}

\subsection{Exercise \ref{pb:Norm-DE}} 

{\bf Warning: There is a typo in question c, which should involve $\mathcal{N}(0,\omega)$
candidates instead of $\mathcal{L}(0,\omega)$...}\\

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item An \R program to produce the three evaluations is
\begin{verbatim}
# (C.) 
Thomas Bredillet, 2009\nNsim=5000\nA=B=runif(Nsim)\nalpha=1;alpha2=3\nlast=A[1] \na=0;b=1\ncand=ifelse(runif(Nsim)>0.5,1,-1) * rexp(Nsim)/alpha\nfor (i in 1:Nsim){\n rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/\n (exp(-alpha*abs(cand[i]))/exp(-alpha*abs(last)))\n if (runif(1)0.5,1,-1) * rexp(Nsim)/alpha2\nfor (i in 1:Nsim) {\n rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/\n (exp(-alpha2*abs(cand[i]))/exp(-alpha2*abs(last)))\n if (runif(1)0.5,1,-1) * rexp(Nsim)\nacce=rep(0,50)\nfor (j in 1:50){\n cand=cand0/alf[j]\n last=A[1]\n for (i in 2:Nsim){\n rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/\n (exp(-alf[j]*abs(cand[i]))/exp(-alf[j]*abs(last)))\n if (runif(1)0.5,1,-1) * rexp(Nsim)\nacce=rep(0,50)\nfor (j in 1:50){\n eps=cand0/alf[j]\n last=A[1]\n for (i in 2:Nsim){\n cand[i]=last+eps[i]\n rate=dnorm(cand[i],a,b^2)/dnorm(last,a,b^2)\n if (runif(1) heidel.diag(mcmc(alpha))\n\n Stationarity start p-value\n test iteration\nvar1 passed 1 0.261\n\n Halfwidth Mean Halfwidth\n test\nvar1 passed 0.226 0.00163\n> geweke.diag(mcmc(alpha))\n\nFraction in 1st window = 0.1\nFraction in 2nd window = 0.5\n\n var1\n-0.7505\n\\end{verbatim}\nIf we reproduce the Kolmogorov--Smirnov analysis\n\\begin{verbatim}\nks=NULL\nM=10\nfor (t in seq(Nsim/10,Nsim,le=100)){\nalpha1=alpha[1:(t/2)]\nalpha2=alpha[(t/2)+(1:(t/2))]\nalpha1=alpha1[seq(1,t/2,by=M)]\nalpha2=alpha2[seq(1,t/2,by=M)]\nks=c(ks,ks.test(alpha1,alpha2)$p)\n}\n\\end{verbatim}\nPlotting the vector \\verb+ks+ by \\verb+plot(ks,pch=19)+ \nshows no visible pattern that would indicate a lack of uniformity.\n\nComparing the output with the true target in $\\alpha$ follows from the definition\n\\begin{verbatim}\nmarge=function(alpha){\n(alpha^(-3)/(sqrt(1+18*(alpha+sigma2)^(-1))*(alpha+sigma2)^9))*\nexp(-(2/alpha) - (.5/(alpha+sigma2))*sum(baseball^2) +\n.5*(alpha+sigma2)^(-2)*sum(baseball)^2/(1+n*(alpha+sigma2)^(-1)))\n}\n\\end{verbatim}\nFigure \\ref{fig:dafit} shows the fit of the simulated histogram to the above function (when 
normalized
by \verb+integrate+).
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{dafit.jpg}}
\caption{\label{fig:dafit}
Histogram of the $(\alpha^{(t)})$ chain produced by the Gibbs sampler of Example \ref{ex:baseball} 
and fit of the exact marginal $\pi(\alpha|\by)$, based on $10^4$ simulations.}
\end{figure}

\subsection{Exercise \ref{pb:the_far_side}}

\begin{enumerate}
 \renewcommand{\theenumi}{\alph{enumi}}
\item We simply need to check that this transition kernel $K$ satisfies the
detailed balance condition \eqref{eq:db}, $f(x)K(y|x) = f(y) K(x|y)$, when $f$
is the ${\cal B}e(\alpha,1)$ density: when $x\ne y$,
\begin{align*}
f(x)K(x,y) &= \alpha x^{\alpha-1}\,x\,(\alpha+1)\,y^{\alpha}\\
 &= \alpha (\alpha+1) (xy)^\alpha\\
 &= f(y)K(y,x)
\end{align*}
so the ${\cal B}e(\alpha,1)$ distribution is indeed stationary.
\item Simulating the Markov chain is straightforward:
\begin{verbatim}
alpha=.2
Nsim=10^4
x=rep(runif(1),Nsim)
y=rbeta(Nsim,alpha+1,1)
for (t in 2:Nsim){
 if (runif(1)<x[t-1]) x[t]=y[t] else x[t]=x[t-1]
 }
> heidel.diag(mcmc(x))

 Stationarity start p-value
 test iteration
var1 passed 1001 0.169

 Halfwidth Mean Halfwidth
 test
var1 failed 0.225 0.0366
> geweke.diag(mcmc(x))

Fraction in 1st window = 0.1
Fraction in 2nd window = 0.5

 var1
3.277
\end{verbatim}
The \verb+heidel.diag+ and \verb+geweke.diag+ diagnostics are giving dissonant signals. 
The \\verb+effectiveSize(mcmc(x))}+ is then equal to $329$.\nMoving to $10^6$ simulations does not modify the picture (but may cause your system to crash!)\n\\item The corresponding Metropolis--Hastings version is\n\\begin{verbatim}\nalpha=.2\nNsim=10^4\nx=rep(runif(1),Nsim)\ny=rbeta(Nsim,alpha+1,1)\nfor (t in 2:Nsim){\n if (runif(1) heidel.diag(mcmc(x))\n\n Stationarity start p-value\n test iteration\nvar1 passed 1001 0.0569\n\n Halfwidth Mean Halfwidth\n test\nvar1 failed 0.204 0.0268\n> geweke.diag(mcmc(x))\n\nFraction in 1st window = 0.1\nFraction in 2nd window = 0.5\n\n var1\n1.736\n\\end{verbatim}\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:proberge}}\n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item A possible \\R definition of the posterior is\n\\begin{verbatim}\npostit=function(beta,sigma2){\n prod(pnorm(r[d==1]*beta/sigma2))*prod(pnorm(-r[d==0]*beta/sigma2))*\n dnorm(beta,sd=5)*dgamma(1/sigma2,2,1)}\n\\end{verbatim}\nand a possible \\R program is\n\\begin{verbatim}\nr=Pima.tr$ped\nd=as.numeric(Pima.tr$type)-1\nmod=summary(glm(d~r-1,family=\"binomial\"))\nbeta=rep(mod$coef[1],Nsim)\nsigma2=rep(1/runif(1),Nsim)\nfor (t in 2:Nsim){\n prop=beta[t-1]+rnorm(1,sd=sqrt(sigma2[t-1]*mod$cov.unscaled))\n if (runif(1) gelman.diag(mcmc.list(mcmc(beta1),mcmc(beta2),mcmc(beta3),\n+ mcmc(beta4),mcmc(beta5)))\nPotential scale reduction factors:\n Point est. 
97.5% quantile
[1,] 1.02 1.03
\end{verbatim}
Note also the good mixing behavior of the chain:
\begin{verbatim}
> effectiveSize(mcmc.list(mcmc(beta1),mcmc(beta2),
+ mcmc(beta3),mcmc(beta4),mcmc(beta5)))
 var1
954.0543
\end{verbatim}
\item The implementation of the traditional Gibbs sampler with completion is
detailed in \cite{marin:robert:2007}, along with the appropriate \R program.
The only modification that is needed for this problem is the introduction of 
the non-identifiable scale factor $\sigma^2$.
\end{enumerate}

\subsection{Exercise \ref{pb:thin_ks}}

In the \verb+kscheck.R+ program available in \verb+mcsm+, you can modify $G$ by
changing the variable \verb+M+ in
\begin{verbatim}
subbeta=beta[seq(1,T,by=M)]
subold=oldbeta[seq(1,T,by=M)]
ks=NULL
for (t in seq((T/(10*M)),(T/M),le=100)) 
 ks=c(ks,ks.test(subbeta[1:t],subold[1:t])$p)
\end{verbatim}
(As noted by a reader, the syntax \verb+ks=c(ks,res)+ is very inefficient in
system time, as you can check by yourself.)

\subsection{Exercise \ref{pb:tan_cvg}}

Since the Markov chain $(\theta^{(t)})$ is converging to the posterior distribution
(in distribution), the density at time $t$, $\pi_t$, is also converging (pointwise)
to the posterior density $\pi(\theta|x)$, therefore $\omega_t$ is converging to
$$
\dfrac{f(x|\theta^{(\infty)}) \pi(\theta^{(\infty)})}{ \pi(\theta^{(\infty)}|x)} = m(x)\,,
$$
for all values of $\theta^{(\infty)}$. 
(This is connected with Chib's (\citeyear{chib:1995})
method, discussed in Exercise \ref{exo:chibmarge}.)


\subsection{Exercise \ref{pb:essPress}}

If we get back to Example \ref{ex:6.1}, the sequence \verb+beta+ can be checked in terms of
effective sample size via an \R program like
\begin{verbatim}
ess=rep(1,T/10)
for (t in 1:(T/10)) ess[t]=effectiveSize(beta[1:(10*t)])
\end{verbatim}
where the subsampling is justified by the computational time required by \verb&effectiveSize&.
The same principle can be applied to any chain produced by an MCMC algorithm.

Figure \ref{esscomp} compares the results of this evaluation over the first three examples of
this chapter. None of them is strongly conclusive about convergence...
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{esscomp.jpg}}
\caption{\label{esscomp}
Evolution of the effective sample size across iterations for the first three examples of
Chapter 8.}
\end{figure}



\chapter{Controlling and Accelerating Convergence}

\subsection{Exercise \ref{pb:ratio_csts}}

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Since
$$
\pi_1(\theta|x) = \tilde\pi_1(\theta)/c_1
 \mbox{ and }\pi_2(\theta|x) =\tilde\pi_2(\theta)/c_2\,,
$$
where only $\tilde\pi_1$ and $\tilde\pi_2$ are known and where $c_1$ and $c_2$ correspond to
the marginal likelihoods, $m_1(x)$ and $m_2(x)$ (the dependence on $x$ is removed for simplification purposes),
we have that
$$
\varrho=\dfrac{m_1(x)}{m_2(x)}
=\dfrac{\int_{\Theta} \pi_1(\theta) f_1(x|\theta)\,\text{d}\theta}{\int_{\Theta} \pi_2(\theta) f_2(x|\theta)\,\text{d}\theta}
=\int_{\Theta} \dfrac{\pi_1(\theta) f_1(x|\theta)}{\tilde\pi_2(\theta)}\,\frac{\tilde\pi_2(\theta)}{m_2(x)}\,\text{d}\theta 
$$
and therefore $\tilde\pi_1(\theta)/\tilde\pi_2(\theta)$ is an unbiased estimator of $\varrho$ when $\theta\sim\pi_2(\theta|x)$.

\item Quite 
similarly,
$$
\dfrac{\int \tilde\pi_1(\theta) \alpha(\theta) \pi_2(\theta|x) \text{d}\theta }{
\int \tilde\pi_2(\theta) \alpha(\theta) \pi_1(\theta|x) \text{d}\theta} = 
\dfrac{\int \tilde\pi_1(\theta) \alpha(\theta) \tilde\pi_2(\theta)/c_2 \text{d}\theta }{
\int \tilde\pi_2(\theta) \alpha(\theta) \tilde\pi_1(\theta)/c_1 \text{d}\theta} = \frac{c_1}{c_2} = \varrho\,.
$$
\end{enumerate}

\subsection{Exercise \ref{exo:ESSin}}

We have
\begin{align*}
\text{ESS}_{n} &=1\bigg/\sum_{i=1}^{n}\underline{w}_{i}^{2}
=1\bigg/\sum_{i=1}^{n}\left(w_{i}\bigg/\sum_{j=1}^{n}w_{j}\right)^{2}\\
&=\dfrac{\left(\sum_{i=1}^{n}w_{i}\right)^{2}}{\sum_{i=1}^{n}w_{i}^{2}}
=\dfrac{\sum_{i=1}^{n}w_{i}^2+\sum_{i\neq j}w_{i}w_{j}}{\sum_{i=1}^{n}w_{i}^{2}}
\le n
\end{align*}
(This is also a consequence of Jensen's inequality when considering that the $\underline{w}_{i}$ sum up to one.)
Moreover, the last equality shows that
\[
\text{ESS}_{n}=1+\frac{\sum_{i\neq j}w_{i}w_{j}}{\sum_{i=1}^{n}w_{i}^{2}}\ge 1\,,
\]
with equality if and only if a single $w_i$ is different from zero.

\subsection{Exercise \ref{exo:simerin}}

{\bf Warning: There is a slight typo in the above in that $\bar {\mathbf X}_k$ should not be in bold. It should thus read}
\begin{rema}
\noindent Establish that
$$
\text{cov}(\bar {X}_k,\bar { X}_{k^\prime}) = {\sigma^2}\big/{\max\{k, k^\prime\}}.
$$
\end{rema}

Since the $X_{i}$'s are iid with variance $\sigma^2$, for $k'\le k$,
$$
\text{cov}(\bar X_k,\bar X_{k'})
= \frac{1}{k\,k'}\,\text{cov}\left(\sum_{i=1}^{k} X_i,\sum_{i=1}^{k'} X_i\right)
= \frac{k'\,\sigma^2}{k\,k'} = \frac{\sigma^2}{\max\{k,k'\}}\,.
$$

One choice of $c$ is $c(y)=\mathbb{I}_{\{y>y_0\}}$,
which is interesting when $p=P(Y>y_0)$ is known. In this case,
$$\beta^*=\frac{\int_{y>y_0}{hf}-\int_{y>y_0}{hf}\int_{y>y_0}{m}}{\int_{y>y_0}{m}-(\int_{y>y_0}{m})^2}=\frac{\int_{y>y_0}{hf}}{p}.$$
Thus, $\beta^*$ can be estimated using the Accept-reject sample. A
second choice of $c$ is $c(y)=y$, which leads to the two first
moments of $Y$. 
When those two moments $m_1$ and $m_2$ are known
or can be well approximated, the optimal choice of $\beta$ is
$$\beta^*=\frac{\int{yh(y)f(y)dy}-\mathfrak{I}m_1}{m_2}\,,$$
and it can be estimated using the same sample or another instrumental
density, namely when $\mathfrak{I}'=\int{yh(y)f(y)dy}$ is simple to
compute, compared to $\mathfrak{I}$.
\end{enumerate}

\chapter{Basic R programming}

\subsection{Exercise \ref{exo:baby}}

Self-explanatory.

\subsection{Exercise \ref{exo:helpme}}

Self-explanatory.

\subsection{Exercise \ref{exo:seq}}

One problem is the way in which \R handles parentheses. So
\begin{verbatim}
> n=10
> 1:n
\end{verbatim}
produces 
\begin{verbatim}
1 2 3 4 5 6 7 8 9 10
\end{verbatim}
but
\begin{verbatim}
> n=10
> 1:n-1
\end{verbatim}
produces
\begin{verbatim}
0 1 2 3 4 5 6 7 8 9
\end{verbatim}
since the \verb+1:10+ command is executed first, then $1$ is subtracted.

The command \verb+seq(1,n-1,by=1)+ operates just as \verb+1:(n-1)+.
If $n$ is less than $1$ we can use something like \verb@seq(1,.05,by=-.01)@. 
Try it, and try some other variations. 

\subsection{Exercise \ref{pb:boot1}}

\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item To bootstrap the data you can use the code 
\begin{verbatim}
nBoot=2500
B=array(0,dim=c(nBoot, 1))
for (i in 1:nBoot){
 ystar=sample(y,replace=T)
 B[i]=mean(ystar)
 }
\end{verbatim}
The quantile can be estimated with \verb+sort(B)[.95*nBoot]+, which in our case/sample is $5.8478$.
\item To get a confidence interval requires a double bootstrap. That is, for each bootstrap sample we 
can get a point estimate of the $95\%$ quantile. 
We can then run a histogram on these quantiles
with \verb@hist@, and get {\em their} upper and lower quantiles for a confidence region.
\begin{verbatim}
nBoot1=1000
nBoot2=1000
B1=array(0,dim=c(nBoot1, 1))
B2=array(0,dim=c(nBoot2, 1))
for (i in 1:nBoot1){
 ystar=sample(y,replace=T)
 for (j in 1:nBoot2)
 B2[j]=mean(sample(ystar,replace=T))
 B1[i]=sort(B2)[.95*nBoot2]
 }
\end{verbatim}
A $90\%$ confidence interval is given by
\begin{verbatim}
> c(sort(B1)[.05*nBoot1], sort(B1)[.95*nBoot1])
[1] 4.731 6.844
\end{verbatim}
or alternatively
\begin{verbatim}
> quantile(B1,c(.05,.95))
 5% 95% 
4.731 6.844
\end{verbatim}
for the data in the book. The command \verb@hist(B1)@ will give a histogram of the values.
\end{enumerate}

\subsection{Exercise \ref{exo:RvsC}}

If you type
\begin{verbatim}
> mean
function (x, ...)
UseMethod("mean")

\end{verbatim}
you do not get any information about the function \verb+mean+ because it is not written in {\tt R}, while
\begin{verbatim}
> sd
function (x, na.rm = FALSE)
{
 if (is.matrix(x))
 apply(x, 2, sd, na.rm = na.rm)
 else if (is.vector(x))
 sqrt(var(x, na.rm = na.rm))
 else if (is.data.frame(x))
 sapply(x, sd, na.rm = na.rm)
 else sqrt(var(as.vector(x), na.rm = na.rm))
}
\end{verbatim}
shows \verb+sd+ is written in {\tt R}. The same applies to \verb+var+ and \verb+cov+.

\subsection{Exercise \ref{exo:attach}}

When looking at the description of \verb+attach+, you can see that this command allows you to use
variables or functions that are in a database rather than in the current \verb=.RData=. Those
objects can be temporarily modified without altering their original format. 
(This is a fragile command
that we do not personally recommend!)

The function \verb+assign+ is also rather fragile, but it allows for the creation and assignment of
an arbitrary number of objects, as in the documentation example:
\begin{verbatim}
for(i in 1:6) { #-- Create objects 'r.1', 'r.2', ... 'r.6' --
 nam <- paste("r",i, sep=".")
 assign(nam, 1:i)
 }
\end{verbatim}
which allows one to manipulate the \verb+r.1+, \verb+r.2+, ..., variables.

\subsection{Exercise \ref{exo:dump&sink}}

This is mostly self-explanatory. If you type the help on each of those functions,
you will see examples on how they work. The most recommended \R function for saving
\R objects is \verb+save+. Note that, when using \verb+write+, the description states
\begin{verbatim}
The data (usually a matrix) 'x' are written to file 
'file'. If 'x' is a two-dimensional matrix you need 
to transpose it to get the columns in 'file' the same 
as those in the internal representation.
\end{verbatim}
Note also that \verb+dump+ and \verb+sink+ are fairly involved and should be used with caution.

\subsection{Exercise \ref{exo:match}}

Take, for example {\tt a=3;x=c(1,2,3,4,5)} to see that they are the same, 
and, in fact, are the same as \verb|max(which(x == a))|. For 
\verb|y=c(3,4,5,6,7,8)|, try \verb|match(x,y)| and \verb|match(y,x)| to 
see the difference. 
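With the vectors above, \verb+match+ returns the position of the first match of each element of its first argument within its second, and is therefore not symmetric:
\begin{verbatim}
> x=c(1,2,3,4,5); y=c(3,4,5,6,7,8)
> match(x,y)
[1] NA NA  1  2  3
> match(y,x)
[1]  3  4  5 NA NA NA
\end{verbatim}
Elements with no match are reported as \verb+NA+, which is what distinguishes the two calls.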
\n\subsection{Exercise \ref{exo:timin}}\n\nRunning \verb=system.time= on the three sets of commands gives\n\begin{enumerate}\n\item 0.004 0.000 0.07\n\item 0 0 \n\item 0.000 0.000 0.00\n\end{enumerate}\nand the vectorial allocation is therefore the fastest\idxr{system.time@\verb+system.time+}.\n\n\subsection{Exercise \ref{exo:unifix}}\n\nThe \R code is\n\begin{verbatim}\n> A=matrix(runif(4),ncol=2)\n> A=A/apply(A,1,sum)\n> apply(A,1,sum)\n[1] 1 1\n> B=A;for (t in 1:100) B=B%*%B\n> apply(B,1,sum)\n[1] Inf Inf\n\end{verbatim}\nand it shows that numerical inaccuracies in the matrix products lead to a failure of the\nunit row-sum property when the power is high enough.\n\n\subsection{Exercise \ref{exo:orange}}\n\nThe function \verb=xyplot= is part of the \verb+lattice+ library. Then\n\begin{verbatim}\n> xyplot(age ~ circumference, data=Orange)\n> barchart(age ~ circumference, data=Orange)\n> bwplot(age ~ circumference, data=Orange)\n> dotplot(age ~ circumference, data=Orange)\n\end{verbatim}\nproduce different representations of the dataset. Fitting a linear model is\nsimply done by \verb+lm(age ~ circumference, data=Orange)+\nand using the tree index as an extra covariate leads to\n\begin{verbatim}\n>summary(lm(age ~ circumference+Tree, data=Orange))\n\nCoefficients:\n Estimate Std. 
Error t value Pr(>|t|)\n(Intercept) -90.0596 55.5795 -1.620 0.116\ncircumference 8.7366 0.4354 20.066 < 2e-16 ***\nTree.L -348.8982 54.9975 -6.344 6.23e-07 ***\nTree.Q -22.0154 52.1881 -0.422 0.676\nTree.C 72.2267 52.3006 1.381 0.178\nTree^4 41.0233 52.2167 0.786 0.438\n\end{verbatim}\nmeaning that, among the tree contrasts, only \verb=Tree.L= is significant.\n\n\subsection{Exercise \ref{exo:sudoku}}\n\n\begin{enumerate} \n\item A plain representation is\n\begin{verbatim}\n> s\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]\n [1,] 0 0 0 0 0 6 0 4 0\n [2,] 2 7 9 0 0 0 0 5 0\n [3,] 0 5 0 8 0 0 0 0 2\n [4,] 0 0 2 6 0 0 0 0 0\n [5,] 0 0 0 0 0 0 0 0 0\n [6,] 0 0 1 0 9 0 6 7 3\n [7,] 8 0 5 2 0 0 4 0 0\n [8,] 3 0 0 0 0 0 0 8 5\n [9,] 6 0 0 0 0 0 9 0 1\n\end{verbatim}\nwhere empty slots are represented by zeros.\n\n\item A simple cleaning of non-empty (i.e.~certain) slots is\n\begin{verbatim}\nfor (i in 1:9)\nfor (j in 1:9){\n if (s[i,j]>0) pool[i,j,-s[i,j]]=FALSE\n }\n\end{verbatim}\n\n\item In {\tt R}, matrices (and arrays) are stored column by column and can also be treated as vectors. Hence \verb+s[i]+ represents\nthe $((i-1)\,\text{mod}\,9+1,\,1+\lfloor (i-1)/9 \rfloor)$ entry of the grid.\n\n\item This is self-explanatory. For instance,\n\begin{verbatim}\n> a=2;b=5\n> boxa\n[1] 1 2 3\n> boxb\n[1] 4 5 6\n\end{verbatim}\n\n\item The first loop checks whether or not, for each remaining possible integer, there exists\nan identical entry in the same row, in the same column or in the same box. 
The second command\nsets entries for which only one possible integer remains to this integer.\n\n\item A plain \R program solving the grid is\n\begin{verbatim}\nwhile (sum(s==0)>0){\n for (i in sample(1:81)){\n if (s[i]==0){\n a=((i-1)%%9)+1\n b=trunc((i-1)/9)+1\n boxa=3*trunc((a-1)/3)+1\n boxa=boxa:(boxa+2)\n boxb=3*trunc((b-1)/3)+1\n boxb=boxb:(boxb+2)\n\n for (u in (1:9)[pool[a,b,]]){\n pool[a,b,u]=(sum(u==s[a,])+sum(u==s[,b])\n +sum(u==s[boxa,boxb]))==0\n }\n\n if (sum(pool[a,b,])==1){ \n s[i]=(1:9)[pool[a,b,]]\n }\n\n if (sum(pool[a,b,])==0){\n print("wrong sudoku")\n break()\n }\n }\n }\n }\n\end{verbatim}\nand it stops with the outcome\n\begin{verbatim}\n> s\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]\n [1,] 1 3 8 5 2 6 7 4 9\n [2,] 2 7 9 3 4 1 8 5 6\n [3,] 4 5 6 8 7 9 3 1 2\n [4,] 7 4 2 6 3 5 1 9 8\n [5,] 9 6 3 1 8 7 5 2 4\n [6,] 5 8 1 4 9 2 6 7 3\n [7,] 8 9 5 2 1 3 4 6 7\n [8,] 3 1 7 9 6 4 2 8 5\n [9,] 6 2 4 7 5 8 9 3 1\n\end{verbatim}\nwhich is the solved Sudoku.\n\end{enumerate}\n\n\n\chapter{Monte Carlo Integration}\n\n\subsection{Exercise \ref{pb:norm_cauchy}}\n\n\begin{enumerate}\n\item The plot of the integrands follows from a simple \R program:\n\begin{verbatim}\nf1=function(t){ t/(1+t*t)*exp(-(x-t)^2/2)}\nf2=function(t){ 1/(1+t*t)*exp(-(x-t)^2/2)}\nplot(f1,-3,3,col=1,ylim=c(-0.5,1),xlab="t",ylab="",ty="l")\nplot(f2,-3,3,add=TRUE,col=2,ty="l")\nlegend("topright", c("f1=t.f2","f2"), lty=1,col=1:2)\n\end{verbatim}\nBoth numerator and denominator are expectations under the Cauchy distribution. 
They can therefore\nbe approximated directly by\n\begin{verbatim}\nNiter=10^4\nco=rcauchy(Niter)\nI=mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))\n\end{verbatim}\nWe thus get\n\begin{verbatim}\n> x=0\n> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))\n[1] 0.01724\n> x=2\n> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))\n[1] 1.295652\n> x=4\n> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))\n[1] 3.107256\n\end{verbatim}\n\item Plotting the convergence of those integrands can be done via\n\begin{verbatim}\n# (C.) Anne Sabourin, 2009\nx1=dnorm(co,mean=x)\nestint2=cumsum(x1)/(1:Niter)\nesterr2=sqrt(cumsum((x1-estint2)^2))/(1:Niter)\nx1=co*x1\nestint1=cumsum(x1)/(1:Niter)\nesterr1=sqrt(cumsum((x1-estint1)^2))/(1:Niter)\npar(mfrow=c(1,2))\nplot(estint1,type="l",xlab="iteration",ylab="",col="gold")\nlines(estint1-2*esterr1,lty=2,lwd=2)\nlines(estint1+2*esterr1,lty=2,lwd=2)\nplot(estint2,type="l",xlab="iteration",ylab="",col="gold")\nlines(estint2-2*esterr2,lty=2,lwd=2)\nlines(estint2+2*esterr2,lty=2,lwd=2)\n\end{verbatim}\nBecause we have not yet discussed the evaluation of the error for a ratio of estimators, we consider\nboth terms of the ratio separately. The empirical variances $\hat\sigma^2$ are given by \verb+var(co*dnorm(co,m=x))+\nand \verb+var(dnorm(co,m=x))+ and solving $2\hat\sigma/\sqrt{n}<10^{-3}$ leads to an evaluation of the number of\nsimulations necessary to get $3$ digits of accuracy.\n\begin{verbatim}\n> x=0;max(4*var(dnorm(co,m=x))*10^6,\n+ 4*var(co*dnorm(co,m=x))*10^6)\n[1] 97182.02\n> x=2; 4*10^6*max(var(dnorm(co,m=x)),var(co*dnorm(co,m=x)))\n[1] 220778.1\n> x=4; 10^6*4*max(var(dnorm(co,m=x)),var(co*dnorm(co,m=x)))\n[1] 306877.9\n\end{verbatim}\n\item A similar implementation applies for the normal simulation, replacing \verb=dnorm= with \verb=dcauchy= in the\nabove. 
The comparison is clear in that the required number of normal simulations when $x=4$ is $1398.22$, to compare\nwith the above $306878$.\n\end{enumerate}\n\n\subsection{Exercise \ref{exo:tailotwo}}\n\nDue to the identity \n$$\n\mathbb{P}(X>20) = \int_{20}^{\infty}\dfrac{\exp(-\frac{x^2}{2})}{\sqrt{2\pi}}\text{d}x \n= \int_{0}^{1/20}\frac{\exp(-\frac{1}{2u^2})}{20 u^2 \sqrt{2\pi}}\,20\, \text{d}u\,, \n$$\nwe can see this integral as an expectation under the $\mathcal{U}(0,1/20)$\ndistribution and thus use a Monte Carlo approximation to $\mathbb{P}(X>20)$.\nThe following \R code monitors the convergence of the corresponding approximation.\n\begin{verbatim}\n# (C.) Thomas Bredillet, 2009\nh=function(x){ 1/(x^2*sqrt(2*pi)*exp(1/(2*x^2)))}\npar(mfrow=c(2,1))\ncurve(h,from=0,to=1/20,xlab="x",ylab="h(x)",lwd=2)\nI=1/20*h(runif(10^4)/20)\nestint=cumsum(I)/(1:10^4)\nesterr=sqrt(cumsum((I-estint)^2))/(1:10^4)\nplot(estint,xlab="Iterations",ty="l",lwd=2,\nylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")\nlines(estint+2*esterr,col="gold",lwd=2)\nlines(estint-2*esterr,col="gold",lwd=2)\n\end{verbatim}\nThe estimated probability is $2.505\times 10^{-89}$ with an error of $\pm 3.61\times 10^{-90}$, compared with \n\begin{verbatim}\n> integrate(h,0,1/20)\n2.759158e-89 with absolute error < 5.4e-89\n> pnorm(-20)\n[1] 2.753624e-89\n\end{verbatim}\n\n\subsection{Exercise \ref{exo:fperron+}}\n\n{\bf Warning: due to the (late) inclusion of an extra exercise in the book, \nthe ``above exercise'' actually means Exercise \ref{exo:tailotwo}!!!}\\\n\nWhen $Z\sim\mathcal{N}(0,1)$, with density $f$, the quantity of interest is $\mathbb{P}(Z>4.5)$, \ni.e.~$\mathbb{E}^{f}[\mathbb{I}_{Z>4.5}]$. 
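\n\nBefore considering importance proposals, it is worth recalling why crude Monte Carlo is hopeless for such a tail probability (a back-of-the-envelope complement to the original solution): with\n$$\n\hat{p}_n=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}_{Z_i>4.5}\,,\qquad\n\frac{\sqrt{\text{var}(\hat{p}_n)}}{p}=\sqrt{\frac{1-p}{np}}\approx\frac{1}{\sqrt{np}}\,,\n$$\nand $p=\Phi(-4.5)\approx 3.4\times10^{-6}$, a mere $10\%$ relative accuracy already requires\n$n\approx 100/p\approx 3\times 10^{7}$ simulations, almost all of which contribute zero to the sum.\n\n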
When $g$ is the density of\nthe exponential $\mathcal{E}xp(\lambda)$ distribution truncated at $4.5$,\n$$\ng(y)=\frac{\mathbb{I}_{y>4.5}\lambda\exp(-\lambda y)}{\int_{4.5}^{\infty}\lambda\exp(-\lambda y)\,\text{d}y}\n=\lambda e^{-\lambda(y-4.5)}\mathbb{I}_{y>4.5}\,,\n$$\nsimulating iid $Y^{(i)}$'s from $g$ is straightforward. Given that the indicator function\n$\mathbb{I}_{Y>4.5}$ is then always equal to $1$, $\mathbb{P}(Z>4.5)$ is estimated by\n$$\n\hat{h}_{n}=\frac{1}{n}\sum_{i=1}^{n}\frac{f(Y^{(i)})}{g(Y^{(i)})}.\n$$\nA corresponding estimator of its variance is\n$$\nv_{n}=\frac{1}{n^{2}}\sum_{i=1}^{n}(1-\hat{h}_{n})^{2}{f(Y^{(i)})}\big/{g(Y^{(i)})}\,.\n$$\nThe following \R code monitors the convergence of the estimator (with $\lambda=.5,5,50$)\n\begin{verbatim}\n# (C.) Anne Sabourin, 2009\nNsim=5*10^4\nx=rexp(Nsim)\npar(mfcol=c(1,3))\nfor (la in c(.5,5,50)){\n y=(x/la)+4.5\n weit=dnorm(y)/dexp(y-4.5,la)\n est=cumsum(weit)/(1:Nsim)\n varest=cumsum((1-est)^2*weit/(1:Nsim)^2)\n plot(est,type="l",ylim=c(3e-6,4e-6),main="P(X>4.5) estimate",\n sub=paste("based on E(",la,") simulations",sep=""),xlab="",ylab="")\n abline(a=pnorm(-4.5),b=0,col="red")\n }\n\end{verbatim}\nWhen evaluating the impact of $\lambda$ on the variance (and hence on the convergence) of the estimator,\nsimilar graphs can be plotted for different values of $\lambda$. This experiment does not exhibit a clear\npattern, even though large values of $\lambda$, like $\lambda=20$, appear to slow down convergence very much.\nFigure \ref{fig:nortail} shows the output of such a comparison. 
Picking $\lambda=5$ seems however to produce\na very stable approximation of the tail probability.\n\n\begin{figure}\n\begin{center}\n\centerline{\includegraphics[width=.95\textwidth]{nortail.jpg}}\n\caption{\label{fig:nortail}\nComparison of three importance sampling approximations to the normal tail probability $\mathbb{P}(Z>4.5)$ based\non a truncated $\mathcal{E}xp(\lambda)$ distribution with $\lambda=.5,5,50$. The straight red line is the true value.}\n\end{center}\n\end{figure}\n\n\subsection{Exercise \ref{pb:some_jump}}\n\nWhile the expectation of $\sqrt{x/(1-x)}$ is well defined for $\nu>1/2$, the integral of\n$x/(1-x)$ against the $t$ density does not exist for any $\nu$. Using an importance sampling representation,\n$$\n\int \frac{x}{1-x}\,\frac{f^2(x)}{g(x)}\,\text{d}x = \infty\n$$\nif $g(1)$ is finite. The integral will be finite around $1$ when $1/(1-t)g(t)$ is integrable near $1$, which\nrequires $g(t)$ to go to infinity at $t=1$, though possibly at an arbitrarily slow polynomial rate. For instance, if $g(t)\approx(1-t)^{-\alpha}$ around\n$1$, any $\alpha>0$ is acceptable.\\\n\n\subsection{Exercise \ref{pb:bayesAR2}}\n\nAs in Exercise \ref{pb:norm_cauchy}, \nthe quantity of interest is $\delta^{\pi}(x)=\mathbb{E}^{\pi}(\theta|x)=\int\theta\pi(\theta|x)\,\text{d}\theta$\nwhere $x\sim\mathcal{N}(\theta,1)$ and $\theta\sim\mathcal{C}(0,1)$. The target\ndistribution is\n$$\n\pi(\theta|x)\propto{\pi(\theta)e^{-(x-\theta)^{2}/2}} = f_{x}(\theta)\,.\n$$\nA possible importance function is the prior distribution, $$g(\theta)=\frac{1}{\pi(1+\theta^{2})}$$\nand for every $\theta\in\mathbb{R}$, $\frac{f_{x}(\theta)}{g(\theta)}\leq M$, with $M=\pi$.\nTherefore, generating from the prior $g$ and accepting simulations according to the\nAccept-Reject ratio provides a sample from \n$\pi(\theta|x)$. 
The empirical mean of this sample is then a converging estimator of $\mathbb{E}^{\pi}(\theta|x).$\nFurthermore, the estimation error for $\delta$ can be directly deduced from the sample.\nA graphical evaluation of the convergence is given by the following \R program:\n\begin{verbatim}\nf=function(t){ exp(-(t-3)^2/2)/(1+t^2)}\nM=pi\nNsim=2500\npostdist=rep(0,Nsim)\nfor (i in 1:Nsim){\n u=runif(1)*M\n postdist[i]=rcauchy(1)\n while(u>f(postdist[i])/dcauchy(postdist[i])){\n u=runif(1)*M\n postdist[i]=rcauchy(1)\n }}\nestdelta=cumsum(postdist)/(1:Nsim)\nesterrd=sqrt(cumsum((postdist-estdelta)^2))/(1:Nsim)\npar(mfrow=c(1,2))\nC1=matrix(c(estdelta,estdelta+2*esterrd,estdelta-2*esterrd),ncol=3)\nmatplot(C1,ylim=c(1.5,3),type="l",xlab="Iterations",ylab="")\nplot(esterrd,type="l",xlab="Iterations",ylab="")\n\end{verbatim}\n\n\subsection{Exercise \ref{pb:smalltail}}\n\n\begin{enumerate}\n\renewcommand{\theenumi}{\alph{enumi}}\n\item If $X \sim \mathcal{E}xp(1)$ then, for $x \ge a$, \n$$\n\mathbb{P}[a + X < x] = \int_{0}^{x-a} \exp(-t)\,\text{d}t \n= \int_{a}^{x} \exp(-t+a)\,\text{d}t = \mathbb{P}(Y < x) \n$$\nwhen $Y \sim\mathcal{E}xp^{+}(a,1)$.\n\n\item If $ X \sim \chi^{2}_{3}$, then\n\begin{align*}\n\mathbb{P}(X>25) \n&= \int_{25}^{+\infty} \frac{2^{-3/2}}{\Gamma(\frac{3}{2})}\,x^{1/2}\exp(-x/2)\,\text{d}x\\ \n&= \int_{12.5}^{+\infty} \frac{\sqrt{x}\,\exp(-12.5)}{\Gamma(\frac{3}{2})}\exp(-x+12.5)\,\text{d}x\,. \n\end{align*}\nThe corresponding \R code\n\begin{verbatim} \n# (C.) 
Thomas Bredillet, 2009\nh=function(x){ exp(-x)*sqrt(x)/gamma(3/2)}\nX = rexp(10^4,1) + 12.5\nI=exp(-12.5)*sqrt(X)/gamma(3/2)\nestint=cumsum(I)/(1:10^4)\nesterr=sqrt(cumsum((I-estint)^2))/(1:10^4)\nplot(estint,xlab="Iterations",ty="l",lwd=2,\nylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")\nlines(estint+2*esterr,col="gold",lwd=2)\nlines(estint-2*esterr,col="gold",lwd=2)\n\end{verbatim}\ngives an evaluation of the probability as $1.543\times 10^{-5}$ with a $10^{-8}$ error, to compare\nwith\n\begin{verbatim} \n> integrate(h,12.5,Inf)\n1.544033e-05 with absolute error < 3.4e-06\n> pchisq(25,3,low=F)\n[1] 1.544050e-05\n\end{verbatim}\n\nSimilarly, when $X \sim t_{5} $, then\n$$\n\mathbb{P}(X>50) = \int_{50}^{\infty} \dfrac{\Gamma(3)}{\sqrt{5\pi}\,\Gamma(2.5)\,(1+\frac{t^2}{5})^{3}\,\exp(-t+50)}\,\exp(-t+50)\,\text{d}t \n$$\nand a corresponding \R code is\n\begin{verbatim}\n# (C.) Thomas Bredillet, 2009\nh=function(x){ 1/sqrt(5*pi)*gamma(3)/gamma(2.5)*1/(1+x^2/5)^3}\nintegrate(h,50,Inf)\nX = rexp(10^4,1) + 50\nI=1/sqrt(5*pi)*gamma(3)/gamma(2.5)*1/(1+X^2/5)^3*1/exp(-X+50)\nestint=cumsum(I)/(1:10^4)\nesterr=sqrt(cumsum((I-estint)^2))/(1:10^4)\nplot(estint,xlab="Mean and error range",type="l",lwd=2,\nylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")\nlines(estint+2*esterr,col="gold",lwd=2)\nlines(estint-2*esterr,col="gold",lwd=2)\n\end{verbatim}\nAs seen on the graph, this method induces jumps in the convergence patterns. Those jumps are indicative of\nvariance problems, as is to be expected since the estimator does not have a finite variance in this case. 
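\n\nIndeed, writing $f$ for the $t_5$ density and $g(t)=e^{-(t-50)}\mathbb{I}_{t>50}$ for the shifted exponential proposal, the second moment of the importance weight can be checked to diverge (a short complement to the original solution):\n$$\n\int_{50}^{\infty}\frac{f^{2}(t)}{g(t)}\,\text{d}t\n\propto\int_{50}^{\infty}\frac{e^{\,t-50}}{(1+t^{2}/5)^{6}}\,\text{d}t=\infty\,,\n$$\nsince the exponential growth of the numerator dominates any polynomial decay.\n\n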
The value\nreturned by this approach differs from alternative evaluations:\n\begin{verbatim}\n> mean(I)\n[1] 1.529655e-08\n> sd(I)/10^2\n[1] 9.328338e-10\n> integrate(h,50,Inf)\n3.023564e-08 with absolute error < 2e-08\n> pt(50,5,low=F)\n[1] 3.023879e-08\n\end{verbatim}\nand cannot be trusted.\n\n\item {\bf Warning: There is a missing line in the text of this question, which should read:}\n\begin{rema}\n\noindent Explore the gain in efficiency from this method. Take $a=4.5$ in part (a) and run an\nexperiment to determine how many normal $\mathcal{N}(0,1)$ random variables would be needed to calculate $P(Z > 4.5)$\nto the same accuracy obtained from using $100$ random variables in this importance sampler.\n\end{rema}\n\nIf we use the representation\n$$\n\mathbb{P}(Z>4.5) = \int_{4.5}^\infty \varphi(z)\,\text{d}z = \int_0^\infty \varphi(x+4.5)\n\exp(x)\exp(-x)\,\text{d}x\,,\n$$\nthe approximation based on $100$ realisations from an $\mathcal{E}xp(1)$ distribution, $x_1,\ldots,x_{100}$, is\n$$\n\frac{1}{100}\,\sum_{i=1}^{100} \varphi(x_i+4.5) \exp(x_i)\n$$\nand the \R code\n\begin{verbatim}\n> x=rexp(100)\n> mean(dnorm(x+4.5)*exp(x))\n[1] 2.817864e-06\n> var(dnorm(x+4.5)*exp(x))/100\n[1] 1.544983e-13\n\end{verbatim}\nshows that the variance of the resulting estimator is about $10^{-13}$. 
A simple simulation of a normal sample of size $m$, counting\nthe portion of the sample above $4.5$, leads to a binomial estimator with a variance of $\mathbb{P}(Z>4.5)\n\mathbb{P}(Z<4.5)/m$, which results in a lower bound\n$$\nm \ge \mathbb{P}(Z>4.5) \mathbb{P}(Z<4.5) \big/ 1.5\times 10^{-13} \approx 2.3\times 10^{7}\,,\n$$\ni.e.~over twenty million simulations.\n\end{enumerate}\n\n\subsection{Exercise \ref{pb:fitz}}\n\nFor the three choices, the importance weights are easily computed:\n\begin{verbatim}\nx1=sample(c(-1,1),10^4,rep=T)*rexp(10^4)\nw1=exp(-sqrt(abs(x1)))*sin(x1)^2*(x1>0)/(.5*dexp(abs(x1)))\nx2=rcauchy(10^4)*2\nw2=exp(-sqrt(abs(x2)))*sin(x2)^2*(x2>0)/dcauchy(x2,scale=2)\nx3=rnorm(10^4)\nw3=exp(-sqrt(abs(x3)))*sin(x3)^2*(x3>0)/dnorm(x3)\n\end{verbatim}\nThey can be evaluated in many ways, from \n\begin{verbatim}\nboxplot(as.data.frame(cbind(w1,w2,w3)))\n\end{verbatim} \nto computing the effective sample size \verb=1/sum((w1/sum(w1))^2)= introduced in Example \ref{ex:probit}.\nThe preferable choice is then $g_1$. The estimated sizes are given by\n\begin{verbatim}\n> 4*10^6*var(x1*w1/sum(w1))/mean(x1*w1/sum(w1))^2\n[1] 10332203\n> 4*10^6*var(x2*w2/sum(w2))/mean(x2*w2/sum(w2))^2\n[1] 43686697\n> 4*10^6*var(x3*w3/sum(w3))/mean(x3*w3/sum(w3))^2\n[1] 352952159\n\end{verbatim}\nagain showing the appeal of using the double exponential proposal. (Note that efficiency could be\ndoubled by considering the absolute values of the simulations.)\\\n\n\subsection{Exercise \ref{pb:top}}\n\n\begin{enumerate}\n\renewcommand{\theenumi}{\alph{enumi}}\n\item With a positive density $g$ and the representation\n$$ \nm(x) = \int_{\Theta}f(x|\theta)\dfrac{\pi(\theta)}{g(\theta)}g(\theta)\,\text{d}\theta\,,\n$$\nwe can simulate $\theta_i$'s from $g$ to approximate $m(x)$ with\n$$\n\frac{1}{n}\sum_{i=1}^{n} \dfrac{f(x|\theta_{i})\pi(\theta_{i})}{g(\theta_{i})}\,. 
\n$$\n\n\item When $ g(\theta) = \pi(\theta|x) = f(x|\theta)\pi(\theta)/K $, then\n$$\n\frac{1}{n}\sum_{i=1}^{n} \dfrac{f(x|\theta_{i})\pi(\theta_{i})}{f(x|\theta_{i})\pi(\theta_{i})/K} = K \n$$\nand the estimate is exactly the normalising constant $K$. If the normalising constant is unknown, we\nmust use instead the self-normalising version \eqref{eq:Gby}.\n\n\item Since\n$$\n\int_{\Theta} {\tau(\theta) \over f(x|\theta) \pi(\theta)} \pi(\theta|x) \text{d}\theta = \n\int_{\Theta} {\tau(\theta) \over f(x|\theta) \pi(\theta)} \dfrac{f(x|\theta) \pi(\theta)}{m(x)} \n\text{d}\theta = \dfrac{1}{m(x)}\,,\n$$\nwe have an unbiased estimator of $1/m(x)$ based on simulations $\theta_t^*$ from the posterior,\n$$\n{1\over T} \sum_{t=1}^T {\tau(\theta_t^*) \over f(x|\theta_t^*) \pi(\theta_t^*)}\n$$\nand hence a converging (if biased) estimator of $m(x)$. This estimator of the marginal density can\nthen be seen as a harmonic mean estimator, but also as an importance sampling estimator \citep{robert:marin:2010}.\n\end{enumerate}\n\n\subsection{Exercise \ref{pb:margin}}\n\n{\bf Warning: There is a typo in question b, which should read}\n\begin{rema}\n\noindent Let $X|Y=y \sim \CG (1,y)$ and $Y \sim \CE xp(1)$.\n\end{rema}\n\n\begin{enumerate}\n\renewcommand{\theenumi}{\alph{enumi}}\n\item If $(X_i, Y_i) \sim f_{XY}(x,y)$, the Strong Law of Large Numbers tells us that\n\begin{displaymath}\n\lim_n {1\over n}\n\sum_{i=1}^n \frac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\n= \int \int \frac{f_{XY}(x^\ast, y) w(x)}{f_{XY}(x, y)} f_{XY}(x,y) \text{d}x \text{d}y.\n\end{displaymath}\nNow cancel $f_{XY}(x,y)$ and use the fact that $\int w(x)dx=1$ to show\n$$\n\int \int \frac{f_{XY}(x^\ast, y) w(x)}{f_{XY}(x, y)} f_{XY}(x,y) \text{d}x \text{d}y=\int f_{XY}(x^\ast, y) dy= f_X(x^\ast).\n$$\n\item The exact marginal is \n$$\n\int \left[y e^{-yx}\right] e^{-y} dy = \int y^{2-1} e^{-y(1+x)} dy = 
\frac{\Gamma(2)}{(1+x)^2} = \frac{1}{(1+x)^2}.\n$$\nWe tried the following \R version of Monte Carlo marginalization:\n\begin{verbatim}\nX=rep(0,nsim)\nY=rep(0,nsim)\nfor (i in 1:nsim){\n Y[i]=rexp(1)\n X[i]=rgamma(1,1,rate=Y[i])\n }\n\nMCMarg=function(x,X,Y){\n return(mean((dgamma(x,1,rate=Y)/dgamma(X,1,\n rate=Y))*dgamma(X,7,rate=3)))\n }\nTrue=function(x)(1+x)^(-2)\n\end{verbatim}\nwhich uses a $\mathcal{G}a(7,3)$ distribution to marginalize. It works well, as you\ncan check by looking at the plot produced by\n\begin{verbatim}\n> xplot=seq(0,5,.05);plot(xplot,MCMarg(xplot,X,Y)-True(xplot))\n\end{verbatim}\n\item Choosing $w(x) = f_{X}(x)$ leads to the estimator\n\begin{align*}\n\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_{XY}(x^\ast, y_i) f_X(x_i)}{f_{XY}(x_i, y_i)}\n&=\n\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_X(x^\ast)f_{Y|X}(y_i|x^\ast) f_X(x_i)}{f_X(x_i)f_{Y|X}(y_i|x_i)}\n\\&= f_X(x^\ast)\, \n\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_{Y|X}(y_i|x^\ast)}{f_{Y|X}(y_i|x_i)}\n\end{align*}\nwhich produces $f_X(x^\ast)$ modulo an estimate of $1$. If we decompose the variance of the estimator\nin terms of \n$$\n\text{var}\left\{\mathbb{E}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]\right\}+\n\mathbb{E}\left\{\text{var}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]\right\}\,,\n$$\nthe inner expectation in the first term is \n\begin{align*}\n\mathbb{E}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]&=\nf_X(x^\ast)\mathbb{E}\left[\left.\dfrac{f_{Y|X}(y_i|x^\ast)}{f_{Y|X}(y_i|x_i)}\right|x_i\right]\,\dfrac{w(x_i)}{f_X(x_i)}\\\n&= f_X(x^\ast)\dfrac{w(x_i)}{f_X(x_i)}\n\end{align*}\nwhich has zero variance if $w(x) = f_{X}(x)$. If we apply a calculus of variations argument to the whole quantity, we\nend up with\n$$\nw(x) \propto f_X(x) \bigg/ \int \dfrac{f^2_{Y|X}(y|x^\ast)}{f_{Y|X}(y|x)}\,\text{d}y\n$$\nminimizing the variance of the resulting estimator. 
So it is likely $f_X$ is {\em not} optimal...\n\end{enumerate}\n\n\chapter{Monte Carlo Optimization}\n\n\subsection{Exercise \ref{exo:smplmix}}\n\nThis is straightforward in \R:\n\begin{verbatim}\npar(mfrow=c(1,2),mar=c(4,4,1,1))\nimage(mu1,mu2,-lli,xlab=expression(mu[1]),ylab=expression(mu[2]))\ncontour(mu1,mu2,-lli,nle=100,add=T)\nNobs=400\nda=rnorm(Nobs)+2.5*sample(0:1,Nobs,rep=T,prob=c(1,3))\nfor (i in 1:250)\nfor (j in 1:250)\n lli[i,j]=like(c(mu1[i],mu2[j]))\nimage(mu1,mu2,-lli,xlab=expression(mu[1]),ylab=expression(mu[2]))\ncontour(mu1,mu2,-lli,nle=100,add=T)\n\end{verbatim}\nFigure \ref{fig:compamix} shows that the log-likelihood surfaces are quite comparable, despite being\nbased on different samples. Therefore the impact of allocating $100$ and $300$ points to the two components,\nrespectively, instead of the random $79$ and $321$ in the current realisation, is inconsequential. \n\n\begin{figure}\n\centerline{\includegraphics[width=\textwidth,height=5truecm]{mixcomp.jpg}}\n\caption{\label{fig:compamix}\nComparison of two log-likelihood surfaces for the mixture model \eqref{eq:maxmix}\nwhen the data is simulated with a fixed $100/300$ ratio between the two components {\em (left)}\nand when the data is simulated with a binomial $\mathcal{B}(400,1/4)$ random number of points\nin the first component {\em (right)}.}\n\end{figure}\n\n\subsection{Exercise \ref{ex:simpleMCO}}\n\n{\bf Warning: as written, this problem has no simple solution! The constraint should be replaced with}\n\begin{rema}\n$$\nx^2(1+\sin(y/3)\cos(8x))+y^2(2+\cos(5x)\cos(8y)) \le 1\,,\n$$\n\end{rema}\n\nWe need to find a lower bound on the function of $(x,y)$. The coefficient of $y^2$ is obviously bounded\nfrom below by $1$, while the coefficient of $x^2$ is positive. Since the function is bounded from below by $y^2$,\nthis means that $y^2<1$, hence that $\sin(y/3)>\sin(-1/3)>-.33$. Therefore, a lower bound on the function is $0.77x^2+y^2$. 
\nIf we simulate uniformly over the ellipse $0.77x^2+y^2<1$, we can subsample the points that satisfy the constraint.\nSimulating the uniform distribution on $0.77x^2+y^2<1$ is equivalent to simulating the uniform distribution over the unit\ndisk $z^2+y^2<1$ and rescaling $z$ into $x=z/\sqrt{0.77}$.\n\begin{verbatim}\ntheta=runif(10^5)*2*pi\nrho=sqrt(runif(10^5)) #sqrt for uniformity over the disk\nxunif=rho*cos(theta)/sqrt(.77)\nyunif=rho*sin(theta)\nplot(xunif,yunif,pch=19,cex=.4,xlab="x",ylab="y")\nconst=(xunif^2*(1+sin(yunif/3)*cos(xunif*8))+\n yunif^2*(2+cos(5*xunif)*cos(8*yunif))<1)\npoints(xunif[const],yunif[const],col="cornsilk2",pch=19,cex=.4)\n\end{verbatim}\nWhile the ellipse is larger than the region of interest, Figure \ref{fig:alien} shows that it is\nreasonably efficient. The performance of the method is given by \verb+sum(const)/10^5+, which is\nequal to $73\%$ in our experiment.\n\begin{figure}\n\centerline{\includegraphics[width=.75\textwidth]{alien.jpg}}\n\caption{\label{fig:alien}\nSimulation of a uniform distribution over a complex domain via uniform simulation over a\nsimpler encompassing domain, for $10^5$ simulations and an acceptance rate of $0.73$.}\n\end{figure}\n\n\subsection{Exercise \ref{exo:stogramix}}\n\nSince the log-likelihood of the mixture model in Example \ref{ex:maxmix} has been defined by\n\begin{verbatim}\n#minus the log-likelihood function\nlike=function(mu){\n -sum(log((.25*dnorm(da-mu[1])+.75*dnorm(da-mu[2]))))\n }\n\end{verbatim}\nin the {\sf mcsm} package, we can reproduce the \R program of Example \ref{ex:find_max3} with the\nfunction $h$ now defined as \verb+like+. The difference from the function $h$ of Example \ref{ex:find_max3} \nis that the mixture log-likelihood is more variable and thus the factors $\alpha_j$ and $\beta_j$ need to\nbe calibrated against divergent behaviours. 
The following figure shows the impact of the different choices\n$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.5})$,\n$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.5})$,\n$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.1})$,\n$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.1})$,\non the convergence of the gradient optimization. In particular, the second choice exhibits a particularly\nstriking behavior where the sequence of $(\mu_1,\mu_2)$ skirts the true mode of the likelihood in a circular\nmanner. (The stopping rule used in the \R program is \verb@(diff<10^(-5))@.)\n\begin{figure}\n\centerline{\includegraphics[width=.95\textwidth]{mixrad.jpg}}\n\caption{\label{fig:mixrad}\nFour stochastic gradient paths for four different choices \n$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.5})$ (u.l.h.s.),\n$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.5})$ (u.r.h.s.), \n$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.1})$ (l.l.h.s.),\n$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.1})$ (l.r.h.s.).}\n\end{figure}\n\n\subsection{Exercise \ref{exo:freak}}\n\nThe \R function \verb+SA+ provided in Example \ref{ex:mix_sa} can be used in the\nfollowing \R program to test whether or not the final value is closer to the main mode\nor to the secondary mode:\n\begin{verbatim}\nmodes=matrix(0,ncol=2,nrow=100)\nprox=rep(0,100)\nfor (t in 1:100){\n res=SA(mean(da)+rnorm(2))\n modes[t,]=res$the[res$ite,]\n diff=modes[t,]-c(0,2.5)\n duff=modes[t,]-c(2.5,0)\n prox[t]=(sum(t(diff)%*%diff)<sum(t(duff)%*%duff))\n }\n\end{verbatim}\n\nFor each new temperature schedule, the function \verb0SA0 must be modified accordingly (for instance by\nthe on-line change \verb+SA=vi(SA)+). 
Figure \ref{fig:SAvabien} illustrates the output of an experiment\nfor four different schedules.\n\begin{figure}\n\centerline{\includegraphics[width=.95\textwidth]{SAva.jpg}}\n\caption{\label{fig:SAvabien}\nFour simulated annealing outcomes corresponding to the temperature schedules\n$T_t=1/\log(1+t)$, \n$T_t=1/10\log(1+t)$, \n$T_t=1/10\sqrt{\log(1+t)}$, \nand $T_t=(.95)^{1+t}$, based on $100$ replications. (The percentage of recoveries of the main mode\nis indicated in the title of each graph.)}\n\end{figure}\n\n\subsection{Exercise 5.9}\n\nIn principle, $Q(\theta^\prime|\theta,\mathbf{x})$ should also involve the logarithms\nof $1/4$ and $1/3$, raised to the powers $\sum Z_i$ and $\sum (1-Z_i)$, respectively.\nBut, due to the logarithmic transform, the expression does not involve the parameter\n$\theta=(\mu_1,\mu_2)$ and can thus be removed from $Q(\theta^\prime|\theta,\mathbf{x})$\nwith no impact on the optimization problem.\n\n\subsection{Exercise \ref{exo:tan1}}\n\n{\bf Warning: there is a typo in Example \ref{ex:tan_wei}. The EM sequence should be\n$$\n\hat\theta_1 = \displaystyle{\left\{{\theta_0\,x_1\over 2+\theta_0}\n+ x_4\right\}}\bigg/\displaystyle{\left\{{\theta_0\,x_1\over 2+\theta_0} +x_2+x_3+x_4\right\}} \;.\n$$\ninstead of having $x_4$ in the denominator.}\n\nNote first that some $1/4$ factors have been removed from every term as they were not\ncontributing to the likelihood maximisation. 
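\n\nAs a complement (assuming the standard multinomial completion of Example \ref{ex:tan_wei}, with cell probabilities $(1/2+\theta/4,(1-\theta)/4,(1-\theta)/4,\theta/4)$), the corrected update can be recovered directly: splitting the first cell count into a latent $z\sim\mathcal{B}in\left(x_1,\frac{\theta/4}{1/2+\theta/4}\right)$, the complete-data log-likelihood is $(z+x_4)\log\theta+(x_2+x_3)\log(1-\theta)$ up to an additive constant, so the M-step gives\n$$\n\hat\theta_{1}=\frac{\mathbb{E}_{\theta_0}[z]+x_4}{\mathbb{E}_{\theta_0}[z]+x_2+x_3+x_4}\,,\qquad\n\mathbb{E}_{\theta_0}[z]=\frac{\theta_0\,x_1}{2+\theta_0}\,,\n$$\nwhich is the displayed EM sequence.\n\n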
Given a starting point $\theta_0$, the \nEM sequence will always be the same.\n\begin{verbatim}\nx=c(58,12,9,13)\nn=sum(x)\nstart=EM=cur=diff=.1\nwhile (diff>.001){ #stopping rule\n\n EM=c(EM,((cur*x[1]/(2+cur))+x[4])/((cur*x[1]/(2+cur))+x[2]+x[3]+x[4]))\n diff=abs(cur-EM[length(EM)])\n cur=EM[length(EM)]\n }\n\end{verbatim}\nThe Monte Carlo EM version creates a sequence based on a binomial simulation:\n\begin{verbatim}\nM=10^2\nMCEM=matrix(start,ncol=length(EM),nrow=500)\nfor (i in 2:length(EM)){\n MCEM[,i]=1/(1+(x[2]+x[3])/(x[4]+rbinom(500,M*x[1],\n prob=1/(1+2/MCEM[,i-1]))/M))\n }\nplot(EM,type="l",xlab="iterations",ylab="MCEM sequences")\nupp=apply(MCEM,2,max);dow=apply(MCEM,2,min)\npolygon(c(1:length(EM),length(EM):1),c(upp,rev(dow)),col="grey78")\nlines(EM,col="gold",lty=2,lwd=2)\n\end{verbatim}\nand the associated graph shows a range of values that contains the true EM sequence. Increasing\n\verb=M= in the above \R program obviously reduces the range.\n\n\subsection{Exercise \ref{exo:maxmim}}\n\nThe \R function for plotting the (log-)likelihood surface associated\nwith \eqref{eq:maxmix} was provided in Example \ref{ex:maxmix}. \nWe thus simply need to apply this function to the new sample,\nresulting in an output like Figure \ref{fig:fakmix}, with a single mode\ninstead of the usual two modes. 
\n\begin{figure}\n\centerline{\includegraphics[width=.8\textwidth]{fakmix.jpg}}\n\caption{\label{fig:fakmix}\nLog-likelihood surface of a mixture model fitted to a sample of size $400$ from a five-component mixture.}\n\end{figure}\n\n\subsection{Exercise \ref{pb:gyr_tre}}\n\n{\bf Warning: there is a typo in question a where the formula should involve capital\n$Z_i$'s, namely}\n\begin{rema}\n$$\nP(Z_i=1) = 1 - P(Z_i=2) = { p \lambda \exp (-\lambda x_i) \over\np \lambda \exp (-\lambda x_i) +(1-p) \mu \exp (-\mu x_i)}.\n$$\n\end{rema}\n\n\begin{enumerate}\n\renewcommand{\theenumi}{\alph{enumi}}\n\item The likelihood is\n$$L(\theta|{\bf x})=\prod_{i=1}^{12}{[p\lambda e^{-\lambda x_i}+(1-p)\mu e^{-\mu x_i}]},$$\nand the complete-data likelihood is\n$$\nL^c(\theta|{\bf x},{\bf z})=\prod_{i=1}^{12}{[p\lambda e^{-\lambda x_i}\mathbb{I}_{(z_i=1)}+(1-p)\mu e^{-\mu x_i}\mathbb{I}_{(z_i=2)}]},\n$$\nwhere $\theta=(p,\lambda,\mu)$ denotes the parameter, using the same arguments as in Exercise \n\ref{pb:7.4.5.1}.\n\n\item The EM algorithm relies on the optimization of the expected log-likelihood\n\begin{align*}\nQ(\theta|\hat\theta_{(j)},{\bf x})&=\sum_{i=1}^{12} \left[\log{(p\lambda e^{-\lambda x_i})}\nP_{\hat\theta_{(j)}}(Z_i=1|x_i)\right.\\\n&\left.\quad +\log{((1-p)\mu e^{-\mu x_i})}P_{\hat\theta_{(j)}}(Z_i=2|x_i)\right].\n\end{align*}\nThe arguments of the maximization problem are\n$$\n\left\{%\n\begin{array}{lll}\n\hat p_{(j+1)}=\hat P/12\\\n\hat\lambda_{(j+1)}=\hat S_1/\hat P\\\n\hat\mu_{(j+1)}=\hat S_2/\hat P,\n\end{array}%\n\right.\n$$\nwhere\n$$\n\left\{%\n\begin{array}{lll}\n\hat P=\sum_{i=1}^{12}{P_{\hat\theta_{(j)}}(Z_i=1|x_i)}\\\\\n\hat S_1=\sum_{i=1}^{12}{x_iP_{\hat\theta_{(j)}}(Z_i=1|x_i)}\\\\\n\hat S_2=\sum_{i=1}^{12}{x_iP_{\hat\theta_{(j)}}(Z_i=2|x_i)}\\\n\end{array}%\n\right.\n$$\nwith\n$$\nP_{\hat\theta_{(j)}}(Z_i=1|x_i)\n=\frac{\hat
p_{(j)}\\hat\\lambda_{(j)}e^{-\\hat\\lambda_{(j)}x_i}}{\\hat\np_{(j)}\\hat\\lambda_{(j)}e^{-\\hat\\lambda_{(j)}x_i}+(1-\\hat\np_{(j)})\\hat\\mu_{(j)}e^{-\\hat\\mu_{(j)}x_i}}\\,.\n$$\nAn \\R implementation of the algorithm is then\n\\begin{verbatim}\nx=c(0.12,0.17,0.32,0.56,0.98,1.03,1.10,1.18,1.23,1.67,1.68,2.33)\nEM=cur=c(.5,jitter(mean(x),10),jitter(mean(x),10))\ndiff=1\nwhile (diff*10^5>1){\n \n probs=1/(1+(1-cur[1])*dexp(x,cur[3])/(cur[1]*dexp(x,cur[2])))\n phat=sum(probs);S1=sum(x*probs);S2=sum(x*(1-probs))\n EM=rbind(EM,c(phat/12,phat/S1,(12-phat)/S2))\n diff=sum(abs(cur-EM[dim(EM)[1],]))\n cur=EM[dim(EM)[1],]\n }\n\\end{verbatim}\nand it always produces a single component mixture.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:EMCensored}}\n\n{\\bf Warning: Given the notations of Example \\ref{ex:EMCensored2}, the function\n$\\phi$ in question b should be written $\\varphi$...}\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item The question is a bit vague in that the density of the missing data $(Z_{n-m+1},\\ldots,Z_n)$ is a normal\n${\\cal N}(\\theta, 1)$ density if we do not condition on $\\by$. Conditional upon $\\by$, the missing observations \n$Z_i$ are truncated in $a$, i.e.~we know that they are larger than $a$. 
The conditional distribution of the $Z_i$'s\nis therefore a normal ${\\cal N}(\\theta, 1)$ distribution truncated in $a$, with density\n$$\nf(z|\\theta,y) = \\dfrac{\\exp\\{-(z-\\theta)^2/2\\}}{\\sqrt{2\\pi}\\,P_\\theta(Y>a)}\\,\\mathbb{I}_{z\\ge a}\n= \\dfrac{\\varphi(z-\\theta)}{1-\\Phi(a-\\theta)}\\,\\mathbb{I}_{z\\ge a}\\,,\n$$\nwhere $\\varphi$ and $\\Phi$ are the normal pdf and cdf, respectively.\n\\item We have\n\\begin{align*}\n\\BE_{\\theta}[Z_i|Y_i] &= \\int_a^\\infty z\\,\\dfrac{\\varphi(z-\\theta)}{1-\\Phi(a-\\theta)}\\,\\text{d}z\\\\\n&= \\theta + \\int_a^\\infty (z-\\theta)\\,\\dfrac{\\varphi(z-\\theta)}{1-\\Phi(a-\\theta)}\\,\\text{d}z\\\\\n&= \\theta + \\int_{a-\\theta}^\\infty y\\,\\dfrac{\\varphi(y)}{1-\\Phi(a-\\theta)}\\,\\text{d}y\\\\\n&= \\theta + \\frac{\\left[-\\varphi(y)\\right]_{a-\\theta}^\\infty}{1-\\Phi(a-\\theta)}\\\\\n&= \\theta + \\frac{\\varphi(a-\\theta)}{1-\\Phi(a-\\theta)},\n\\end{align*}\nsince $\\varphi^\\prime(x)=-x\\varphi(x)$.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:uniroot}}\n\nRunning \\verb+uniroot+ on both intervals\n\\begin{verbatim}\n> h=function(x){(x-3)*(x+6)*(1+sin(60*x))}\n> uniroot(h,int=c(-2,10))\n$root\n[1] 2.999996\n$f.root\n[1] -6.853102e-06\n> uniroot(h,int=c(-8,1))\n$root\n[1] -5.999977\n$f.root\n[1] -8.463209e-06\n\\end{verbatim}\nmisses all solutions to $1+\\sin(60x)=0$.\n\n\\subsection{Exercise \\ref{pb:used_up1}}\n\n{\\bf Warning: this Exercise duplicates Exercise \\ref{exo:tan1} and should not\nhave been included in the book!}\\\\\n\n\n\\chapter{Random Variable Generation}\n\n\\subsection{Exercise \\ref{pb:discretePIT}}\n\nFor a random variable $X$ with cdf $F$, if\n\\[\nF^{-}(u)=\\inf\\{ x,F(x)\\geq u\\},\n\\] \nthen, for $U\\sim\\mathcal{U}[0,1]$, for all $y \\in \\mathbb{R}$,\n\\begin{eqnarray*}\n\\mathbb{P}(F^{-}(U)\\leq y)&=&\\mathbb{P}(\\inf\\{ x,F(x)\\geq U\\}\\leq y)\\\\\n && =\\mathbb{P}(F(y)\\geq U)\\qquad\\textrm{ as $F$ is non-decreasing }\\\\\n && 
=F(y)\\qquad\\qquad\\textrm{ as $U$ is uniform}\n\\end{eqnarray*}\n\n\\subsection{Exercise \\ref{pb:boxmuller}}\n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item It is easy to see that $\\BE[U_1]=0$, and a standard calculation shows that $\\text{var}(U_1)= 1/12$, from which the result follows.\n\\item Histograms show that the tails of the $12$ uniforms are not long enough. Consider the code\n\\begin{verbatim}\nnsim=10000\nu1=runif(nsim)\nu2=runif(nsim)\nX1=sqrt(-2*log(u1))*cos(2*pi*u2)\nX2=sqrt(-2*log(u1))*sin(2*pi*u2)\nU=array(0,dim=c(nsim,1))\nfor(i in 1:nsim)U[i]=sum(runif(12,-.5,.5))\npar(mfrow=c(1,2))\nhist(X1)\nhist(U)\na=3\nmean(X1>a)\nmean(U>a)\nmean(rnorm(nsim)>a)\n1-pnorm(a)\n\\end{verbatim}\n\n\\item You should see the difference in the tails of the histogram. Also, the numerical output from the above is\n\\begin{verbatim}\n[1] 0.0016\n[1] 5e-04\n[1] 0.0013\n[1] 0.001349898\n\\end{verbatim}\nwhere we see that the Box--Muller and \\verb+rnorm+ are very good when compared with the exact \\verb+pnorm+. \nTry this calculation for a range of \\verb+nsim+ and \\verb+a+.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{exo:acceP}}\n\nFor $U\\sim\\mathcal{U}_{[0,1]}$, $Y\\sim g(y)$, and $X\\sim f(x)$, such that\n$f/g\\leq M$, the acceptance condition in the Accept--Reject algorithm is that $U\\leq f(Y)/(Mg(Y)).$\nThe probability of acceptance is thus\n\\begin{align*}\n\\mathbb{P}(U \\leq f(Y)\\big/ Mg(Y))&=\\int_{-\\infty}^{+\\infty}\\int_{0}^{\\frac{f(y)}{Mg(y)}}\\,\\text{d}u\\,g(y)\\,\\text{d}y\\\\\n & =\\int_{-\\infty}^{+\\infty}\\frac{f(y)}{Mg(y)}g(y)\\,\\text{d}y\\\\\n & =\\frac{1}{M}\\int_{-\\infty}^{+\\infty}f(y)\\,\\text{d}y\\\\\n & =\\frac{1}{M}\\,.\n\\end{align*}\nAssume $f/g$ is only known up to a normalising constant, i.e.\n$f/g=k\\,\\tilde{f}/\\tilde{g}$, with $\\tilde{f}/\\tilde{g}\\leq\\tilde{M}$,\n$\\tilde{M}$ being a well-defined upper bound different from $M$ because of the missing\nnormalising constants. 
Since $Y\\sim g$,\n\\begin{align*}\n\\mathbb{P}(U \\leq \\tilde{f}(Y)\\big/ \\tilde{M}\\tilde{g}(Y))\n & =\\int_{-\\infty}^{+\\infty}\\int_{0}^{\\frac{\\tilde{f}(y)}{\\tilde{M}\\tilde{g}(y)}}\\,\\text{d}u\\,g(y)\\,\\text{d}y\\\\\n & =\\int_{-\\infty}^{+\\infty}\\frac{\\tilde{f}(y)}{\\tilde{M}\\tilde{g}(y)}g(y)\\,\\text{d}y\\\\\n & =\\int_{-\\infty}^{+\\infty}\\frac{f(y)}{k\\tilde{M}g(y)}g(y)\\,\\text{d}y\\\\\n & =\\frac{1}{k\\tilde{M}}\\,.\n\\end{align*}\nTherefore the missing constant is given by\n$$\nk=1\\bigg/ \\tilde{M}\\,\\mathbb{P}(U\\leq\\tilde{f}(Y)\\big/ \\tilde{M}\\tilde{g}(Y))\\,,\n$$\nwhich can be estimated from the empirical acceptance rate.\n\n\n\\subsection{Exercise \\ref{exo:trueMax}}\n\nThe ratio is equal to\n$$\n\\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}\\,\n\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\, x^{\\alpha-a}\\,(1-x)^{\\beta-b}\n$$\nand it will not diverge at $x=0$ only if $a\\le \\alpha$ and at $x=1$ only if $b\\le \\beta$.\nThe maximum is attained for\n$$\n\\frac{\\alpha-a}{x^\\star} = \\frac{\\beta-b}{1-x^\\star}\\,,\n$$\ni.e.~is\n$$\nM_{a,b}=\\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}\\,\n\\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}\\, \\frac{(\\alpha-a)^{\\alpha-a}(\\beta-b)^{\\beta-b}}\n{(\\alpha-a+\\beta-b)^{\\alpha-a+\\beta-b}}\\,.\n$$\nThe analytic study of this quantity as a function of $(a,b)$ is quite delicate but if we define\n\\begin{verbatim}\nmab=function(a,b){\n lgamma(a)+lgamma(b)-lgamma(a+b)+(alph-a)*log(alph-a)+\n (bet-b)*log(bet-b)-(alph+bet-a-b)*log(alph+bet-a-b)}\n\\end{verbatim}\nit is easy to see using \\verb=contour= on a sequence of $a$'s and $b$'s that the minimum of\n$M_{a,b}$ is achieved over integer values when $a=\\lfloor \\alpha \\rfloor$ and\n$b=\\lfloor \\beta \\rfloor$.\n\n\\subsection{Exercise \\ref{exo:inzabove}}\n\nGiven $\\theta$, exiting the loop is driven by $X=x_0$, which indeed has a probability\n$f(x_0|\\theta)$ to occur. 
If $X$ is a discrete random variable, this is truly a probability, while,\nif $X$ is a continuous random variable, this is zero. The distribution of the exiting $\\theta$ is\nthen dependent on the event $X=x_0$ taking place, i.e.~is proportional to $\\pi(\\theta)f(x_0|\\theta)$,\nwhich is exactly $\\pi(\\theta|x_0)$.\n\n\\subsection{Exercise \\ref{pb:hist}}\n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item Try the \\R code\n\\begin{verbatim}\nnsim<-5000\nn=25;p=.2;\ncp=pbinom(c(0:n),n,p)\nX=array(0,c(nsim,1))\nfor(i in 1:nsim){\n u=runif(1)\n X[i]=sum(cp<u)\n }\n\\end{verbatim}\nand compare its output and speed with those of \\verb+rbinom(nsim,n,p)+, using\n\\verb+hist+ and \\verb+system.time+.\n\\item Define the \\R functions\n\\begin{verbatim}\nWait<-function(s0,alpha){\n U=array(0,c(s0,1))\n for (i in 1:s0){\n u=runif(1)\n while (u>alpha) u=runif(1)\n U[i]=u\n }\n return(U)\n }\n\nTrans<-function(s0,alpha){\n U=array(0,c(s0,1))\n for (i in 1:s0) U[i]=alpha*runif(1)\n return(U)\n }\n\\end{verbatim}\nUse \\verb+hist(Wait(1000,.5))+ and \\verb+hist(Trans(1000,.5))+ to see the\ncorresponding histograms. Vary $n$ and $\\alpha$. Use the \\verb+system.time+ command as in part a to see the timing. \nIn particular, \\verb+Wait+ is very bad if $\\alpha$ is small.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:pareto_gen}}\n\nThe cdf of the Pareto $\\CP(\\alpha)$ distribution is\n$$\nF(x)=1-x^{-\\alpha}\n$$\nover $(1,\\infty)$. Therefore, $F^{-1}(U)=(1-U)^{-1/\\alpha}$, which is also\nthe $-1/\\alpha$ power of a uniform variate.\n\n\\subsection{Exercise \\ref{pb:specific}}\n\nDefine the \\R functions\n\\begin{verbatim}\n Pois1<-function(s0,lam0){\n spread=3*sqrt(lam0)\n t=round(seq(max(0,lam0-spread),lam0+spread,1))\n prob=ppois(t,lam0)\n X=rep(0,s0)\n for (i in 1:s0){\n u=runif(1)\n X[i]=max(t[1],0)+sum(prob<u)\n }\n return(X)\n }\n\n Pois2<-function(s0,lam0){\n X=rep(0,s0)\n for (i in 1:s0){\n sum=rexp(1,lam0)\n k=0\n while (sum<1){\n sum=sum+rexp(1,lam0)\n k=k+1\n }\n X[i]=k\n }\n return(X)\n }\n\\end{verbatim}\nwhere \\verb+Pois2+ counts the exponential interarrival times of a Poisson process falling before time $1$. Run comparisons such as\n\\begin{verbatim}\n> nsim=100\n> lambda=3.4\n> system.time(Pois1(nsim,lambda))\n user system elapsed\n 0.004 0.000 0.005\n> system.time(Pois2(nsim,lambda))\n user system elapsed\n 0.004 0.000 0.004\n> system.time(rpois(nsim,lambda))\n user system elapsed\n 0 0 0\n\\end{verbatim}\nfor other values of \\verb+nsim+ and \\verb+lambda+. 
You will see that \\verb@rpois@ is by far the best, with the exponential generator \n(\\verb#Pois2#) not being very good for large $\\lambda$'s. Note also that \\verb@Pois1@ is not appropriate for small $\\lambda$'s since \nit could then return negative values.\n\n\\subsection{Exercise \\ref{pb:gammaAR}}\n\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item Since, if $X\\sim \\mathcal{G}a(\\alpha,\\beta)$, then $\\beta X=\\sum_{j=1}^{\\alpha} \\beta X_{j} \\sim \\mathcal{G}a(\\alpha,1)$,\n$\\beta$ is the inverse of a scale parameter.\n\\item The Accept-Reject ratio is given by\n$$\n\\dfrac{f(x)}{g(x)} \\propto \\dfrac{x^{n-1}\\,e^{-x}}{\\lambda\\,e^{-\\lambda x}}=\\lambda^{-1} x^{n-1} e^{-(1-\\lambda)x}\\,.\n$$\nThe maximum of this ratio is obtained for \n$$\n\\dfrac{n-1}{x^\\star} - (1-\\lambda) = 0\\,,\\quad\\text{i.e. for}\\quad x^\\star = \\dfrac{n-1}{1-\\lambda}\\,.\n$$\nTherefore, \n$$\nM\\propto \\lambda^{-1} \\left( \\dfrac{n-1}{1-\\lambda} \\right)^{n-1} \\,e^{-(n-1)}\n$$\nand this upper bound is minimised in $\\lambda$ when $\\lambda=1/n$.\n\n\\item If $g$ is the density of the $\\mathcal{G}a(a,b)$ distribution and $f$ the\ndensity of the $\\mathcal{G}a(\\alpha,1)$ distribution, \n$$\ng(x) = \\frac{x^{a-1}e^{-bx}b^{a}}{\\Gamma(a)} \\quad\\text{and}\\quad f(x) = \\frac{x^{\\alpha-1}e^{-x}}{\\Gamma(\\alpha)} \n$$\nthe Accept-Reject ratio is given by\n$$ \n\\dfrac{f(x)}{g(x)} = \\dfrac{x^{\\alpha-1}e^{-x} \\Gamma(a)}{\\Gamma(\\alpha) b^{a} x^{a-1}e^{-bx}} \\propto\nb^{-a}x^{\\alpha-a}e^{-x(1-b)} \\,.\n$$\nTherefore,\n$$\n\\dfrac{\\partial}{\\partial x} \\dfrac{f}{g} = b^{-a} e^{-x(1-b)}x^{\\alpha-a-1}\\left\\{(\\alpha-a)-(1-b)x\\right\\} \n$$\nprovides $x^\\star = {\\alpha-a}\\big/{1-b} $ as the argument of the maximum of the ratio, since $\\frac{f}{g} (0)= 0$.\nThe upper bound $M$ is thus given by\n$$\nM(a,b)=b^{-a}\\left(\\dfrac{\\alpha-a}{1-b}\\right)^{\\alpha-a}e^{-\\left(\\frac{\\alpha-a}{1-b}\\right)(1-b)} \n 
=b^{-a}\\left(\\frac{\\alpha-a}{(1-b) e}\\right)^{\\alpha-a}\\,. \n$$\nIt obviously requires $b<1$ and $a<\\alpha$.\n\n\\item {\\bf Warning: there is a typo in the text of the first printing, it should be:}\n\\begin{rema}\nShow that the maximum of $b^{-a}(1 - b)^{a-\\alpha}$ is attained at $b = a/\\alpha$, and hence the optimal choice of $b$\nfor simulating ${\\cal{G}}a(\\alpha,1)$ is $b=a/\\alpha$, which gives the same mean for both ${\\cal{G}}a(\\alpha,1)$ and ${\\cal{G}}a(a,b)$.\n\\end{rema}\nWith this modification, the maximum of $M(a,b)$ in $b$ is obtained by derivation, i.e.~for $b$ solution of\n$$\n\\dfrac{a}{b}-\\dfrac{\\alpha-a}{1-b}=0\\,,\n$$\nwhich leads to $b = a/\\alpha$ as the optimal choice of $b$. Both ${\\cal{G}}a(\\alpha,1)$ and ${\\cal{G}}a(a,a/\\alpha)$ have the same mean $\\alpha$.\n\n\\item Since\n$$\nM(a,a/\\alpha) = (a/\\alpha)^{-a}\\left(\\frac{\\alpha-a}{(1-a/\\alpha) e}\\right)^{\\alpha-a}\n = (a/\\alpha)^{-a} \\left(\\alpha/e\\right)^{\\alpha-a} = \\alpha^\\alpha\\big/a^a e^{\\alpha-a}\\,,\n$$\n$M$ is decreasing in $a$ and the largest possible value is indeed $a=\\lfloor \\alpha \\rfloor$.\n\\end{enumerate}\n\n\\subsection{Exercise \\ref{pb:Norm-DEAR}}\n\nThe ratio $f/g$ is\n$$\n\\dfrac{f(x)}{g(x)} = \\dfrac{\\exp\\{-x^2/2\\}/\\sqrt{2\\pi}}{\\alpha\\exp\\{-\\alpha|x|\\}/2}\n=\\dfrac{\\sqrt{2/\\pi}}{\\alpha}\\,\\exp\\{\\alpha|x|-x^2/2\\}\n$$\nand it is maximal when $x=\\pm\\alpha$, so $M=\\sqrt{2/\\pi}\\exp\\{\\alpha^2/2\\}/\\alpha$.\nTaking the derivative in $\\alpha$ leads to the equation\n$$\n\\alpha-\\frac{1}{\\alpha} =0\\,,\n$$\nthat is, indeed, to $\\alpha=1$.\n\n\\subsection{Exercise \\ref{pb:noncen_chi}}\n\n{\\bf Warning: There is a typo in this exercise, it should be:}\n\\begin{rema}\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{(\\roman{enumi})}\n\\item a mixture representation (\\ref{eq:mixture_def}), where\n$g(x \\vert y)$ is the density of $\\chi_{p+2y}^{2}$ and $p(y)$ is the density of $\\CP(\\lambda/2)$, and\n\\item the sum of a $\\chi_{p-1}^{2}$ random 
variable and the square of a ${\\cal N}(\\sqrt{\\lambda},1)$.\n\\end{enumerate}\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\item Show that both those representations hold.\n\\item Compare the corresponding algorithms that can be derived from these\nrepresentations among themselves and also with {\\tt rchisq} for small and large values of $\\lambda$.\n\\end{enumerate}\n\\end{rema}\nIf we use the definition of the noncentral chi squared distribution, $\\chi_{p}^{2}(\\lambda)$ as corresponding to\nthe distribution of the squared norm $||x||^2$ of a normal vector $x\\sim\\mathcal{N}_p(\\theta,I_p)$\nwhen $\\lambda=||\\theta||^2$, this distribution is invariant under rotations of the normal vector and it is therefore\nthe same as when $x\\sim\\mathcal{N}_p((0,\\ldots,0,\\sqrt{\\lambda}),I_p)$, hence leading to the representation (ii),\ni.e.~as a sum of a $\\chi_{p-1}^{2}$ random variable and of the square of a ${\\cal N}(||\\theta||,1)$ variable.\nRepresentation (i) holds by a regular mathematical argument based on the series expansion of the modified\nBessel function since the density of a non-central chi-squared distribution is\n$$\nf(x|\\lambda) = {1\\over 2} (x/\\lambda)^{(p-2)/4} I_{(p-2)/2}(\\sqrt{\\lambda x}) e^{-(\\lambda+x)/2}\\,,\n$$\nwhere\n$$\nI_\\nu (t) = \\left({t\\over 2}\\right)^{\\nu} \\sum_{k=0}^\\infty\n{(t/2)^{2k} \\over k! \\Gamma(\\nu+k+1)}.\n$$\n\nSince \\verb=rchisq= includes an optional non-centrality parameter \\verb=ncp=, it can be used\nto simulate directly a noncentral chi-squared distribution. 
The two scenarios (i) and (ii) lead\nto the following \\R codes.\n\\begin{verbatim}\n> system.time({x=rchisq(10^6,df=5,ncp=3)})\n user system elapsed\n> system.time({x=rchisq(10^6,df=4)+rnorm(10^6,mean=sqrt(3))^2})\n user system elapsed\n 1.700 0.056 1.757\n> system.time({x=rchisq(10^6,df=5+2*rpois(10^6,3/2))})\n user system elapsed\n 1.168 0.048 1.221\n\\end{verbatim}\nRepeated experiments with other values of $p$ and $\\lambda$ lead to the same conclusion that the\nPoisson mixture representation is the fastest.\\\\\n\n\\subsection{Exercise \\ref{pb:bayesAR}}\n\nSince the ratio $\\pi(\\theta|{\\mathbf x})/\\pi(\\theta)$ is the likelihood, it is obvious that\nthe optimal bound $M$ is the likelihood function evaluated at the MLE (assuming $\\pi$ is a true\ndensity and not an improper prior).\n\nSimulating from the posterior can then be done via\n\\begin{verbatim}\ntheta0=3;n=100;N=10^4\nx=rnorm(n)+theta0\nlik=function(the){prod(dnorm(x,mean=the))}\nM=optimise(f=function(the){prod(dnorm(x,mean=the))},\n int=range(x),max=T)$obj\ntheta=rcauchy(N)\nres=(M*runif(N)>apply(as.matrix(theta),1,lik));print(sum(res)/N)\nwhile (sum(res)>0){le=sum(res);theta[res]=rcauchy(le)\nres[res]=(M*runif(le)>apply(as.matrix(theta[res]),1,lik))}\n\\end{verbatim}\nThe rejection rate is given by $0.9785$, which means that the Cauchy proposal is quite inefficient.\nAn empirical confidence (or credible) interval at the level $95\\%$ on $\\theta$ is $(2.73,3.799)$.\nRepeating the experiment with $n=100$ leads (after a while) to the interval $(2.994,3.321)$; there is\ntherefore an improvement.\n\n\n", "meta": {"timestamp": "2010-01-17T18:43:52", "yymm": "1001", "arxiv_id": "1001.2906", "language": "en", "url": "https://arxiv.org/abs/1001.2906"}} {"text": "\\section{Introduction}\n\nSince it was initiated by the Brazilian Workers' Party~\\cite{wainwright2003making} in the 1990s, participatory budgeting (PB)~\\cite{cabannes2004participatory} has been gaining increased attention all over the 
world.\nEssentially, the idea behind PB is a direct democracy approach in which the way to utilize a common budget (most usually a municipality budget) is decided upon by the stakeholders themselves (most usually city residents).\nIn particular, given a set of proposed projects with their costs, and a designated total budget to be used, voters express their preferences over the projects and then an aggregation method takes the votes and decides upon a subset of the projects to be implemented.\n\nAs research on PB from the perspective of computational social choice is accordingly increasing (see, e.g., the survey of Aziz and Shah~\\cite{aziz2020participatory}; as well as some specific recent papers on PB~\\cite{talmon2019framework,pbsub,goel2015knapsack,aziz2017proportionally}), there is a need to have publicly-available datasets;\nthis is the goal behind the \\emph{PArticipatory BUdgeting LIBrary} (in short, \\emph{Pabulib}), which is available at \\url{http://pabulib.org}.\n\nThe main aim of this document is to define the data format that is used in Pabulib.\n\n\n\\section{The \\texttt{.pb} File Format}\n\nThe data concerning one instance of participatory budgeting is to be stored in a single UTF-8 text file with the extension \\texttt{.pb}.\nThe content of the file is to be divided into three sections:\n\\begin{itemize}\n \\item \\textbf{META} section with general metadata like the country, budget, number of votes.\n \\item \\textbf{PROJECTS} section with project costs and possibly some other metadata regarding projects like category, target etc.\n \\item \\textbf{VOTES} section with votes, which can be of one of four types: approval, ordinal, cumulative, scoring; and optionally with metadata regarding voters like age, sex etc.\n\\end{itemize}\n\n\n\\section{A Simple Example}\n\n\\begin{Verbatim}[frame=single]\nMETA \nkey; value\ndescription; Municipal PB in Wieliczka\ncountry; Poland\nunit; Wieliczka\ninstance; 2020\nnum_projects; 5\nnum_votes; 
10\nbudget; 2500\nrule; greedy\nvote_type; approval\nmin_length; 1\nmax_length; 3\nPROJECTS\nproject_id; cost; category \n1; 600; culture, education \n2; 800; sport\n4; 1400; culture\n5; 1000; health, sport\n7; 1200; education\nVOTES\nvoter_id; age; sex; vote\n1; 34; f; 1,2,4\n2; 51; m; 1,2\n3; 23; m; 2,4,5\n4; 19; f; 5,7\n5; 62; f; 1,4,7\n6; 54; m; 1,7\n7; 49; m; 5\n8; 27; f; 4\n9; 39; f; 2,4,5\n10; 44; m; 4,5\n\\end{Verbatim}\n\n\n\\section{Detailed Description}\n\nThe \\textbf{bold} part is obligatory.\n\n\n\\subsection{Section 1: META}\n \n \\begin{itemize}\n \\item \\bftt{key}\n \\begin{itemize}\n \\item \\bftt{description}\n \\item \\bftt{country}\n \\item \\bftt{unit} -- name of the municipality, region, organization, etc., holding the PB process\n \\item \\texttt{subunit} -- name of the sub-jurisdiction or category within which the preferences are aggregated and funds are allocated\n \\begin{itemize}\n \\item \\textit{Example}: in Paris, there are 21 PBs -- a city-wide budget and 20 district-wide budgets. For the city-wide budget, \\texttt{unit} is Paris, and \\texttt{subunit} is undefined, while for the district-wide budgets, \\texttt{unit} is also Paris, and \\texttt{subunit} is the name of the district (e.g., IIIe arrondissement).\n \\item \\textit{Example}: before 2019, in Warsaw there were district-wide and neighborhood-wide PBs. For all of them, \\texttt{unit} is Warsaw, while \\texttt{subunit} is the name of the district for district-wide budgets, and the name of the neighborhood for neighborhood-wide budgets. To associate neighborhoods with districts (if desired), an additional property \\texttt{district} can be used.\n \\item \\textit{Example}: assume that in a given city, there are distinct PBs for each of $n>1$ categories (environmental projects, transportation projects, etc.). 
For all of them, \\texttt{unit} is the city name, while \\texttt{subunit} is the name of the category.\n \\end{itemize}\n \\item \\bftt{instance} -- a unique identifier of the specific edition of the PB process (year, edition number, etc.) used by the organizers to identify that edition; note that \\texttt{instance} will not necessarily correspond to the year in which the vote is actually held, as some organizers identify the edition by the fiscal year in which the PB projects are to be carried out\n \\item \\bftt{num\\_projects}\n \\item \\bftt{num\\_votes}\n \\item \\bftt{budget} -- the total amount of funds to be allocated \n \\item \\bftt{vote\\_type}\n \\begin{itemize}\n \\item \\texttt{approval} -- each vote is a vector of Boolean values, $\\mathbf{v} \\in \\mathbb{B}^{|P|}$, where $P$ is the set of all projects,\n \\item \\texttt{ordinal} -- each vote is a permutation of a subset $S \\subseteq P$ such that $|S| \\in [\\mathtt{min\\_length}, \\mathtt{max\\_length}]$, corresponding to a strict preference ordering,\n \\item \\texttt{cumulative} -- each vote is a vector $\\mathbf{v} \\in \\mathbb{R}_{+}^{|P|}$ such that ${\\lVert\\mathbf{v}\\rVert}_{1} \\le \\mathtt{max\\_sum\\_points} \\in \\mathbb{R}_{+}$,\n \\item \\texttt{scoring} -- each vote is a vector $\\mathbf{v} \\in I^{|P|}$, where $I \\subseteq \\mathbb{R}$.\n \\end{itemize}\n \\item \\bftt{rule} \n \\begin{itemize}\n \\item \\texttt{greedy} -- projects are ordered decreasingly by the value of the aggregation function (i.e., the total score), and are funded until funds are exhausted or there are no more projects\n \\item other rules will be defined in future versions\n \\end{itemize}\n \\item \\texttt{date\\_begin} -- the date on which voting starts\n \\item \\texttt{date\\_end} -- the date on which voting ends\n \\item \\texttt{language} -- language of the description texts (i.e., full project names)\n \\item \\texttt{edition}\n \\item \\texttt{district}\n \\item \\texttt{comment}\n \\item if 
\\texttt{vote\\_type} = \\texttt{approval}:\n \\begin{itemize}\n \\item \\texttt{min\\_length} [default: 1]\n \\item \\texttt{max\\_length} [default: num\\_projects]\n \\item \\texttt{min\\_sum\\_cost} [default: 0]\n \\item \\texttt{max\\_sum\\_cost} [default: $\\infty$]\n \\end{itemize}\n \\item if \\texttt{vote\\_type} = \\texttt{ordinal}:\n \\begin{itemize}\n \\item \\texttt{min\\_length} [default: 1]\n \\item \\texttt{max\\_length} [default: num\\_projects]\n \\item \\texttt{scoring\\_fn} [default: Borda]\n \\end{itemize}\n\n \\item if \\texttt{vote\\_type} = \\texttt{cumulative}:\n \\begin{itemize}\n \\item \\texttt{min\\_length} [default: 1]\n \\item \\texttt{max\\_length} [default: num\\_projects]\n \\item \\texttt{min\\_points} [default: 0]\n \\item \\texttt{max\\_points} [default: max\\_sum\\_points]\n \\item \\texttt{min\\_sum\\_points} [default: 0]\n \\item \\bftt{max\\_sum\\_points} \n \\end{itemize}\n \\item if \\texttt{vote\\_type} = \\texttt{scoring}:\n \\begin{itemize} \n \\item \\texttt{min\\_length} [default: 1]\n \\item \\texttt{max\\_length} [default: num\\_projects]\n \\item \\texttt{min\\_points} [default: $-\\infty$]\n \\item \\texttt{max\\_points} [default: $\\infty$]\n \\item \\texttt{default\\_score} [default: 0]\n \\end{itemize}\n \\item \\texttt{non-standard fields}\n \\end{itemize}\n \\item \\bftt{value}\n \\end{itemize}\n\n\n \n\\subsection{Section 2: PROJECTS}\n \n \\begin{itemize}\n \\item \\bftt{project\\_id}\n \\item \\bftt{cost}\n \\item \\texttt{name} -- full project name\n \\item \\texttt{category} -- for example: education, sport, health, culture, environmental protection, public space, public transit and roads\n \\item \\texttt{target} -- for example: adults, seniors, children, youth, people with disabilities, families with children, animals\n \\item \\texttt{non-standard fields}\n \\end{itemize}\n\n\n\n\n\\subsection{Section 3: VOTES}\n \\begin{itemize}\n \\item \\bftt{voter\\_id}\n \\item \\texttt{age}\n \\item 
\\texttt{sex}\n \\item \\texttt{voting\\_method} (e.g., paper, Internet, mail)\n \\item if \\texttt{vote\\_type} = \\texttt{approval}:\n \\begin{itemize}\n \\item \\bftt{vote} -- ids of the approved projects, separated by commas.\n \\end{itemize}\n \\item if \\texttt{vote\\_type} = \\texttt{ordinal}:\n \\begin{itemize}\n \\item \\bftt{vote} -- ids of the selected projects, from the most preferred one to the least preferred one, separated by commas.\n \\end{itemize}\n \\item if \\texttt{vote\\_type} = \\texttt{cumulative}:\n \\begin{itemize}\n \\item \\bftt{vote} -- project ids, in the decreasing order induced by \\texttt{points}, separated by commas; projects not listed are assumed to be awarded $0$ points.\n \\item \\bftt{points} -- points assigned to the selected projects, listed in the same order as project ids in \\bftt{vote}.\n \\end{itemize}\n \\item if \\texttt{vote\\_type} = \\texttt{scoring}:\n \\begin{itemize}\n \\item \\bftt{vote} -- project ids, in the decreasing order induced by \\texttt{points}, separated by commas; projects not listed are assumed to be awarded \\texttt{default\\_score} points.\n \\item \\bftt{points} -- points assigned to the selected projects, listed in the same order as project ids in \\bftt{vote}.\n \\end{itemize}\n \n \\item \\texttt{non-standard fields}\n \n \\end{itemize}\n \n\n\\section{Outlook}\n\nWe have introduced the PArticipatory BUdgeting LIBrary (Pabulib; available at \\url{http://pabulib.org}), and have described the \\texttt{.pb} file format that is used in it.\n\nWe hope that Pabulib will foster meaningful research on PB, in particular helping the computational social choice community offer better aggregation methods to be used in real-world instances of PB.\n\n\\section*{Acknowledgement}\n\nNimrod Talmon has been supported by the Israel Science Foundation (grant No. 630/19). Dariusz Stolicki and Stanis\\l aw Szufa have been supported under the Polish Ministry of Science and Higher Education grant no. 
0395/DLG/2018/10.\n\n\\bibliographystyle{plain}\n", "meta": {"timestamp": "2020-12-14T02:27:17", "yymm": "2012", "arxiv_id": "2012.06539", "language": "en", "url": "https://arxiv.org/abs/2012.06539"}} {"text": "\\section{Introduction}\n\nAny deformation of a Weyl or Clifford algebra can be realized \nthrough a change of generators in the undeformed algebra \n\\cite{mathias,ducloux}. ``$q$-Deformations''\nof Weyl or Clifford algebras that were covariant under the action of \na simple Lie algebra $\\mbox{\\bf g\\,}$ are characterized by their being \ncovariant under the action of the quantum group $U_h\\mbox{\\bf g\\,}$, where $q=e^h$.\nHere we briefly summarize our systematic construction procedure \n\\cite{fiojmp,fiocmp} of\nall the possible corresponding changes of generators, together with\nthe corresponding realizations of the $U_h\\mbox{\\bf g\\,}$-action.\n\nThis paves the way \\cite{fiojmp} for a physical \ninterpretation of deformed\ngenerators as ``composite operators'', functions of the\nundeformed ones. 
For instance, if the latter act as \ncreators and annihilators on\na bosonic or fermionic Fock space, then the former would act as\ncreators and annihilators of\nsome sort of ``dressed states'' in the same space.\nSince there exists \\cite{fiocmp} a\nbasis of $\\mbox{\\bf g\\,}$-invariants that depend on the undeformed generators in a\nnon-polynomial way, but on the deformed ones in a polynomial way,\nthese changes of generators might be employed\nto simplify the dynamics of some $\\mbox{\\bf g\\,}$-covariant quantum physical systems\nbased on some complicated $\\mbox{\\bf g\\,}$-invariant Hamiltonian.\n\nLet us list the essential ingredients of our construction procedure:\n\\begin{enumerate}\n\n\\item \\mbox{\\bf g\\,}, a simple Lie algebra.\n\\item The cocommutative Hopf algebra $H\\equiv(U\\mbox{\\bf g\\,},\\cdot,\\Delta,\n \\varepsilon,S)$ associated to \n $U\\mbox{\\bf g\\,}$; $\\cdot,\\Delta,\\varepsilon,S$ denote the product,\n coproduct, counit, antipode.\n\\item The quantum group \\cite{dr2} $H_h\\equiv(U_h\\mbox{\\bf g\\,},\\bullet,\\Delta_h,\n \\varepsilon_h,S_h,\\mbox{$\\cal R$\\,})$.\n\\item An algebra isomorphism\\cite{dr3} \n $\\varphi_h:U_h\\mbox{\\bf g\\,}\\rightarrow U\\mbox{\\bf g\\,}[[h]]$,\n $\\varphi_h\\circ\\bullet=\\cdot\\circ(\\varphi_h\\otimes \\varphi_h)$.\n\\item A corresponding Drinfel'd twist\\cite{dr3} \n $\\mbox{$\\cal F$}\\equiv\\mbox{$\\cal F$}^{(1)}\\!\\otimes\\!\\mbox{$\\cal F$}^{(2)}\\!=\\!{\\bf 1}^{\\otimes^2}\\!\\!\n +\\!O(h)\\in U\\mbox{\\bf g\\,}\\![[h]]^{\\otimes^2}$:\n \\[\n (\\varepsilon\\otimes \\mbox{id})\\mbox{$\\cal F$}={\\bf 1}=(\\mbox{id}\\otimes \\varepsilon)\\mbox{$\\cal F$},\n \\qquad\\: \\:\\Delta_h(a)=(\\varphi_h^{-1}\\otimes \\varphi_h^{-1})\\big\n \\{\\mbox{$\\cal F$}\\Delta[\\varphi_h(a)]\\mbox{$\\cal F$}^{-1}\\big\\}.\n \\]\n\\item $\\gamma':=\\mbox{$\\cal F$}^{(2)}\\cdot S\\mbox{$\\cal F$}^{(1)}$ and \n $\\gamma:=S\\mbox{$\\cal F$}^{-1(1)}\\cdot \\mbox{$\\cal F$}^{-1(2)}$.\n\\item The generators 
$a^+_i,a^i$ of an ordinary Weyl \n or Clifford algebra $\\mbox{$\\cal A$}$.\n\\item The action $\\triangleright:U\\mbox{\\bf g\\,}\\times\\mbox{$\\cal A$}\\rightarrow \\mbox{$\\cal A$}$; \n $\\mbox{$\\cal A$}$ is a left module algebra under $\\triangleright$.\n\\item The \n representation $\\rho$ of \\mbox{\\bf g\\,} to which $a^+_i,a^i$ belong:\n \\[\n x\\triangleright a^+_i=\\rho(x)^j_ia^+_j\\qquad\\qquad\n x\\triangleright a^i=\\rho(Sx)_j^ia^j.\n \\]\n\\item The Jordan-Schwinger algebra homomorphism \n $\\sigma:U\\mbox{\\bf g\\,}\\rightarrow\\mbox{$\\cal A$}[[h]]$:\n \\[\n \\sigma(x):=\n \\rho(x)^i_ja^+_ia^j\\qquad\\mbox{if~}x\\in\\mbox{\\bf g\\,}\\qquad\\qquad\\qquad\n \\sigma(yz)=\\sigma(y)\\sigma(z)\n \\]\n\\item The generators $\\tilde A^+_i,\\tilde A^i$ of a deformed Weyl \n or Clifford algebra $\\mbox{${\\cal A}_h$}$.\n\\item The action $\\triangleright_h:U_h\\mbox{\\bf g\\,}\\times\\mbox{${\\cal A}_h$}\\rightarrow \\mbox{${\\cal A}_h$}$; $\\mbox{${\\cal A}_h$}$ is a \n left module algebra under $\\triangleright_h$.\n\\item The representation $\\rho_h=\\rho\\circ \\varphi_h$\n of $U_h\\mbox{\\bf g\\,}$ to which $\\tilde A^+_i,\\tilde A^i$ belong:\n \\[\n X\\triangleright_h \\tilde A^+_i=\\rho_h(X)^j_i\\tilde A^+_j\\qquad\\qquad\n X\\triangleright_h \\tilde A^i=\\rho_h(S_h X)_j^i\\tilde A^j.\n \\]\n\\item $*$-structures $*,*_h,\\star,\\star_h$ in $H,H_h,\\mbox{$\\cal A$},\\mbox{${\\cal A}_h$}$, if any.\n\n\\end{enumerate}\n\n\\section{Constructing the deformed generators}\n\\label{con}\n\n\\begin{prop}\\cite{fiojmp}\nOne can realize the quantum group action $\\triangleright_h$ on $\\mbox{$\\cal A$}[[h]]$ by setting\nfor any $X\\in U_h\\mbox{\\bf g\\,}$ and $\\beta \\in\\mbox{$\\cal A$}[[h]]$\n(with $X_{(\\bar 1)}\\otimes X_{(\\bar 2)}:=\\Delta_h(X)$)\n\\begin{equation}\nX\\triangleright_h \\beta := \\sigma[\\varphi_h(X_{(\\bar 1)})]\\,\\beta\\,\n\\sigma[\\varphi_h(S_hX_{(\\bar 2)})].\n\\end{equation}\n\\end{prop}\n\n\\begin{prop}\\cite{fiojmp,fiocmp}\nFor 
any \\mbox{\\bf g\\,}-invariants $u,v\\in\\mbox{$\\cal A$}[[h]]$ the elements of $\\mbox{$\\cal A$}[[h]]$ \n\\begin{equation}\n\\begin{array}{lll}\nA_i^+ &:= & u\\,\\sigma(\\mbox{$\\cal F$}^{(1)})\\,a_i^+\\,\n\\sigma(S\\mbox{$\\cal F$}^{(2)}\\gamma)u^{-1} \\nonumber\\\\\nA^i &:= &v\\,\\sigma(\\gamma'S\\mbox{$\\cal F$}^{-1(2)})\\,a^i\\,\n\\sigma(\\mbox{$\\cal F$}^{-1(1)})v^{-1}. \n\\end{array}\n\\end{equation}\ntransform under $\\triangleright_h$ as $\\tilde A^+_i,\\tilde A^i$.\n\\end{prop}\nA suitable choice of $uv^{-1}$ may make $A^+_i,A^j$ fulfil also the \nQCR of $\\mbox{${\\cal A}_h$}$ \\cite{fiocmp}. In particular we have shown the\n\n\\begin{prop}\\cite{fiocmp}\nIf $\\rho$ is the defining representation of \\mbox{\\bf g\\,},\n$A^+_i,A^j$ fulfil the corresponding QCR provided\n\\begin{equation}\n\\begin{array}{llll}\nuv^{-1} & = & \\frac{\\Gamma(n\\!+\\!1)}{\\Gamma_{q^2}(n\\!+\\!1)}\\qquad\n\\qquad & \\mbox{\\rm if ~}\\mbox{\\bf g\\,}=sl(N) \\cr\nuv^{-1} & = & \\frac{\\Gamma[\\frac 12(n\\!+\\!1\\!+\\!\\frac N2-l)]\n\\Gamma[\\frac 12(n\\!+\\!1\\!+\\!\\frac N2\\!+\\!l)]}{\\Gamma_{q^2}\n[\\frac 12(n\\!+\\!1\\!+\\!\\frac N2\\!+\\!l)]\n\\Gamma_{q^2}[\\frac 12(n\\!+\\!1\\!+\\!\\frac N2-l)]}\n\\qquad\\qquad & \\mbox{\\rm if ~}\\mbox{\\bf g\\,}=so(N),\n\\end{array}\n\\nonumber\n\\end{equation}\nwhere $\\Gamma,\\Gamma_{q^2}$ are Euler's gamma-function and its\n$q$-deformation,\n$n:=a^+_ia^i$, $l:=\\sqrt{\\sigma({\\cal C}_{so(N)})}$, and\n${\\cal C}_{so(N)}$ is the quadratic Casimir of $so(N)$.\n\\end{prop}\n\nIf $A^+_i,A^j$ fulfil the QCR, then also \n\\begin{equation}\nA^+_{i,\\alpha}:=\\alpha \\,A^+_i\\, \\alpha^{-1}\\qquad\\qquad\nA^{i,\\alpha}:=\\alpha \\,A^i\\, \\alpha^{-1}\n\\end{equation}\nwill do, for any $\\alpha\\in\\mbox{$\\cal A$}[[h]]$ of the form $\\alpha={\\bf 1}+O(h)$.\nBy cohomological arguments one can prove\nthat there are no more elements in $\\mbox{$\\cal A$}[[h]]$ which do \\cite{fiocmp}. 
\n$A^+_{i,\\alpha},A^{i,\\alpha}$ transform as \n$\\tilde A^+_i,\\tilde A^i$ under the following modified realization of\n$\\triangleright_h$:\n\\begin{equation}\nX\\triangleright_h^{\\alpha} \\beta := \\alpha \\sigma[\\varphi_h(X_{(\\bar 1)})]\n\\alpha^{-1}\\,\\beta\\,\n\\alpha \\sigma[\\varphi_h(S_hX_{(\\bar 2)})]\\alpha^{-1}.\n\\end{equation}\nThe algebra homomorphism \n$f_{\\alpha}:\\mbox{${\\cal A}_h$}\\rightarrow \\mbox{$\\cal A$}[[h]]$ such that\n$f_{\\alpha}(\\tilde A^+_i)=A^+_{i,\\alpha}$ and\n$f_{\\alpha}(\\tilde A^i)=A^{i,\\alpha}$ is what is usually\ncalled a ``$q$-deforming map''.\n\nFor a compact section of $U\\mbox{\\bf g\\,}$ one can choose a unitary \\mbox{$\\cal F$},\n$\\mbox{$\\cal F$}^{*\\otimes *}=\\mbox{$\\cal F$}^{-1}$. Then the $U\\mbox{\\bf g\\,}$-covariant $*$-structure \n$(a^i)^{\\star}=a^+_i$ in $\\mbox{$\\cal A$}$\nis also $U_h\\mbox{\\bf g\\,}$-covariant in $\\mbox{$\\cal A$}[[h]]$ and\nhas the form\n$(A^{i,\\alpha})^{\\star}=A^+_{i,\\alpha}$, provided we choose \n$u=v^{-1}$ and $\\alpha^{\\star}=\\alpha^{-1}$. More formally,\nunder this assumption $\\star\\circ f_{\\alpha}=f_{\\alpha}\\circ \\star_h$,\nwith $\\star_h$ defined by $(\\tilde A^i)^{\\star_h}=\\tilde A^+_i$\n\nIf $H_h$ is instead a {\\it triangular} deformation\nof $U\\mbox{\\bf g\\,}$, the previous construction can be equally performed\nand leads essentially to the same results \\cite{fiojmp},\n provided we choose\nin the previous formulae $u\\equiv v\\equiv {\\bf 1}$. This follows\nfrom the triviality of the coassociator \\cite{dr3}, that\ncharacterizes triangular deformations $H_h$.\n\n\\section*{Acknowledgments}\n\nIt is a pleasure to thank J.\\ Wess for\nhis stimulus, support and \nwarm hospitality at his Institute.\nThis work was supported through a TMR fellowship \ngranted by the European Commission, Dir. Gen. 
XII for Science,\nResearch and Development, under the contract ERBFMBICT960921.\n\n\\section*{References}\n\n", "meta": {"timestamp": "1998-01-07T17:48:09", "yymm": "9710", "arxiv_id": "q-alg/9710024", "language": "en", "url": "https://arxiv.org/abs/q-alg/9710024"}} {"text": "\\section{Introduction}\r\nFor all terms related to digraphs which are not defined below, see Bang-Jensen and Gutin \\cite{Bang_Jensen_Gutin}.\r\nIn this paper,\r\nby a {\\it directed graph} (or simply {\\it digraph)}\r\n$D$ we mean a pair $(V,A)$, where\r\n$V=V(D)$ is the set of vertices and $A=A(D)\\subseteq V\\times V$ is the set of arcs.\r\nFor an arc $(u,v)$, the first vertex $u$ is called its {\\it tail} and the second\r\nvertex $v$ is called its {\\it head}; we also denote such an arc by $u\\to v$.\r\nIf $(u,v)$ is an arc, we call $v$ an {\\it out-neighbor} of $u$, and $u$ an {\\it in-neighbor} of $v$.\r\nThe number of out-neighbors of $u$ is called the {\\it out-degree} of $u$, and the number of in-neighbors of $u$ --- the {\\it in-degree} of $u$.\r\nFor an integer $k\\ge 2$, a {\\it walk} $W$ {\\it from} $x_1$ {\\it to} $x_k$ in $D$ is an alternating sequence\r\n$W = x_1 a_1 x_2 a_2 x_3\\dots x_{k-1}a_{k-1}x_k$ of vertices $x_i\\in V$ and arcs $a_j\\in A$\r\nsuch that the tail of $a_i$ is $x_i$ and the head of $a_i$ is $x_{i+1}$ for every\r\n$i$, $1\\le i\\le k-1$.\r\nWhenever the labels of the arcs of a walk are not important, we use the notation\r\n$x_1\\to x_2 \\to \\dotsb \\to x_k$ for the walk, and say that we have an $x_1x_k$-walk.\r\nIn a digraph $D$, a vertex $y$ is {\\it reachable} from a vertex $x$ if $D$ has a walk from $x$ to $y$. In\r\nparticular, a vertex is reachable from itself. 
A digraph $D$ is {\\it strongly connected}\r\n(or, just {\\it strong}) if, for every pair $x,y$ of distinct vertices in $D$,\r\n$y$ is reachable from $x$ and $x$ is reachable from $y$.\r\nA {\\it strong component} of a digraph $D$ is a maximal induced subdigraph of $D$ that is strong.\r\nIf $x$ and $y$ are vertices of a digraph $D$, then the\r\n{\\it distance from $x$ to $y$} in $D$, denoted $\\dist(x,y)$, is the minimum length of\r\nan $xy$-walk, if $y$ is reachable from $x$, and otherwise $\\dist(x,y) = \\infty$.\r\nThe {\\it distance from a set $X$ to a set $Y$} of vertices in $D$ is\r\n\\[\r\n\\dist(X,Y) = \\max\r\n\\{\r\n\\dist(x,y) \\colon x\\in X,y\\in Y\r\n\\}\r\n\\]\r\nThe {\\it diameter} of $D$ is $\\diam(D) = \\dist(V,V)$.\r\n\r\n\r\nLet $p$ be a prime, $e$ a positive integer, and $q = p^e$. Let\r\n$\\fq$ denote the finite field of $q$ elements, and $\\fq^*=\\fq\\setminus\\{0\\}$.\r\n\r\nLet $\\fq^2$ \r\ndenote the Cartesian product $\\fq \\times \\fq$, and let\r\n $f\\colon\\fq^2\\to\\fq$ be an arbitrary function. We define a digraph $D = D(q;f)$ as follows:\r\n $V(D)=\\fq^{2}$, and\r\nthere is an arc from a vertex ${\\bf x} = (x_1,x_2)$ to a vertex\r\n${\\bf y} = (y_1,y_{2})$ if and only if\r\n\\[\r\nx_2 + y_2 = f(x_1,y_1).\r\n\\]\r\n\r\nIf $({\\bf x},{\\bf y})$ is an arc in $D$, then ${\\bf y}$ is uniquely determined by ${\\bf x}$ and $y_1$, and ${\\bf x}$ is uniquely determined by ${\\bf y}$ and $x_1$.\r\nHence, each vertex of $D$ has both its in-degree and out-degree equal to $q$.\r\n\r\nBy Lagrange's interpolation,\r\n $f$ can be uniquely represented by\r\na bivariate polynomial of degree at most $q-1$ in each of the variables. If ${f}(x,y) = x^m y^n$, $1\\le m,n\\le q-1$, we call $D$ a {\\it monomial} digraph, and denote it also by $D(q;m,n)$. Digraph $D(3; 1,2)$ is depicted in Fig.\\ $1.1$. It is clear that ${\\bf x}\\to {\\bf y}$ in $D(q;m,n)$ if and only if ${\\bf y}\\to {\\bf x}$ in $D(q;n,m)$. 
Hence, one digraph is obtained from the other by reversing the direction of every arc. In general, these digraphs are not isomorphic, but if one of them is strong then so is the other and their diameters are equal. As this paper is concerned only with the diameter of $D(q;m,n)$, it is sufficient to assume that $1\\le m\\le n\\le q-1$.\r\n\r\n\\begin{figure}\r\n\\begin{center}\r\n\\begin{tikzpicture}\r\n\r\n\\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\\sffamily\\scriptsize\\bfseries}\r\n\\tikzset{edge/.style = {->,> = triangle 45}}\r\n\\node[vertex] (a) at (0,0) {$(0,2)$};\r\n\\node[vertex] (b) at (4,0) {$(1,1)$};\r\n\\node[vertex] (c) at (8,0) {$(1,0)$};\r\n\\node[vertex] (d) at (0,-4) {$(0,1)$};\r\n\\node[vertex] (e) at (4,-4) {$(2,2)$};\r\n\\node[vertex] (f) at (8,-4) {$(2,0)$};\r\n\\node[vertex] (g) at (4,-1.5) {$(2,1)$};\r\n\\node[vertex] (h) at (4,-2.5) {$(1,2)$};\r\n\\node[vertex] (i) at (8,-2) {$(0,0)$};\r\n\r\n\\draw[edge] (a) to (b);\r\n\\draw[edge] (b) to (a);\r\n\r\n\\draw[edge] (a) to (d);\r\n\\draw[edge] (d) to (a);\r\n\r\n\\draw[edge] (b) to (c);\r\n\\draw[edge] (c) to (b);\r\n\r\n\\draw[edge] (g) to (b);\r\n\r\n\\draw[edge] (h) to (e);\r\n\\draw[edge] (c) to (b);\r\n\r\n\\draw[edge] (d) to (e);\r\n\\draw[edge] (e) to (d);\r\n\r\n\\draw[edge] (e) to (f);\r\n\\draw[edge] (f) to (e);\r\n\r\n\\draw[edge] (c) to (i);\r\n\\draw[edge] (i) to (c);\r\n\r\n\\draw[edge] (f) to (i);\r\n\\draw[edge] (i) to (f);\r\n\r\n\\draw[edge] (g) to (a);\r\n\\draw[edge] (a) to (g);\r\n\r\n\\draw[edge] (c) to (g);\r\n\r\n\\draw[edge] (d) to (h);\r\n\\draw[edge] (h) to (d);\r\n\r\n\\draw[edge] (f) to (h);\r\n\r\n\\path\r\n (g) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=330,in=300,looseness=8] node {} (g);\r\n\\path\r\n (h) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=160,in=130,looseness=8] node {} (h);\r\n\\path\r\n (i) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=210,in=170,looseness=8] node {} 
(i);\r\n\\end{tikzpicture}\r\n\\caption{The digraph $D(3;1,2)$: $x_2+y_2 = x_1y_1^2$.}\r\n\\end{center}\r\n\\end{figure}\r\n\r\nThe digraphs $D(q; {f})$\r\nand $D(q;m,n)$ are directed analogues\r\nof\r\nsome algebraically defined graphs, which have been studied extensively\r\nand have many applications. See\r\nLazebnik and Woldar \\cite{LazWol01} and references therein; for some\r\nsubsequent work see Viglione \\cite{Viglione_thesis},\r\nLazebnik and Mubayi \\cite{Lazebnik_Mubayi},\r\nLazebnik and Viglione \\cite{Lazebnik_Viglione},\r\nLazebnik and Verstra\\\"ete \\cite{Lazebnik_Verstraete},\r\nLazebnik and Thomason \\cite{Lazebnik_Thomason},\r\n Dmytrenko, Lazebnik and Viglione \\cite{DLV05},\r\n Dmytrenko, Lazebnik and Williford \\cite{DLW07},\r\n Ustimenko \\cite{Ust07}, Viglione \\cite{VigDiam08},\r\n Terlep and Williford \\cite{TerWil12}, Kronenthal \\cite{Kron12},\r\n Cioab\\u{a}, Lazebnik and Li \\cite{CLL14},\r\n Kodess \\cite{Kod14},\r\nand Kodess and Lazebnik \\cite{Kod_Laz_15}.\r\n\r\nThe questions of strong connectivity of digraphs $D(q;{f})$ and $D(q; m,n)$ and descriptions of their components were completely answered in\r\n\\cite{Kod_Laz_15}. Determining the diameter of a component of $D(q;{f})$ for an arbitrary prime power $q$ and an arbitrary $f$ seems to be out of reach, and most of our results below are concerned with some instances of this problem for strong monomial digraphs. The following theorems are the main results of this paper.\r\n\r\n\\begin{theorem\n\\label{main}\r\nLet $p$ be a prime, $e,m,n$ be positive integers, $q=p^e$, $1\\le m\\le n\\le q-1$, and $D_q= D(q;m,n)$. 
Then the following statements hold.\r\n\\begin{enumerate}\r\n\\item\\label{gen_lower_bound} If $D_q$ is strong, then $\\diam (D_q)\\ge 3$.\r\n\r\n\\item\\label{gen_upper_bound}\r\nIf $D_q$ is strong, then\r\n\\begin{itemize}\r\n\\item for $e = 2$, $\\diam(D_q)\\le 96\\sqrt{n+1}+1$;\r\n\\item for $e \\ge 3$, $\\diam(D_q)\\le 60\\sqrt{n+1}+1$.\r\n\\end{itemize}\r\n\r\n\\item\\label{diam_le_4} If $\\gcd(m,q-1)=1$ or $\\gcd(n,q-1)=1$, then $\\diam(D_q)\\le 4$.\r\nIf $\\gcd(m,q-1) = \\gcd(n,q-1) = 1$, then $\\diam(D_q) = 3$.\r\n\r\n\\item \\label{main3} If $p$ does not divide $n$, and $q > (n^2-n+1)^2$,\r\nthen $\\diam(D(q;1,n)) = 3$.\r\n\\item If $D_q$ is strong, then:\r\n\\begin{enumerate}\r\n\\item[(a)\\label{bound_q_le25}]\r\n If $q > n^2$, then $\\diam(D_q) \\le 49$. \n\\item[(b)\\label{bound_q_m4n4}]\r\nIf $q > (m-1)^4$, then $\\diam(D_q)\\le 13$.\r\n\\item[(c)]\\label{bound_q_le6} If $q > (n-1)^4$, then $\\diam(D(q;n,n))\\le 9$.\r\n\\end{enumerate}\r\n\\end{enumerate}\r\n\\end{theorem}\r\n\\begin{remark}\r\nThe converse to either of the statements in part (\\ref{diam_le_4}) of Theorem \\ref{main} is not true. Consider, for instance,\r\n$D(9;2,2)$ of diameter $4$, or $D(29;7,12)$ of diameter $3$.\r\n\\end{remark}\r\n\\begin{remark}\r\nThe result of part \\ref{bound_q_le25}a can hold for some $q\\le m^2$.\r\n\\end{remark}\r\nFor prime $q$, some of the results of Theorem \\ref{main} can be strengthened.\r\n\r\n\\begin{theorem}\r\n\\label{thm_diam_p}\r\nLet $p$ be a prime, $1\\le m \\le n\\le p-1$, and $D_p= D(p;m,n)$. Then $D_p$ is strong and the following statements hold.\r\n\\begin{enumerate}\r\n\\item\\label{diam_bound_p}\r\n$\\diam (D_p) \\le 2p-1$ with equality if\r\nand only if\r\n$m=n=p-1$.\r\n\\item\\label{bound_p_sqrt60}\r\nIf $(m,n)\\not\\in\\{((p-1)/2,(p-1)/2),((p-1)/2,p-1), (p-1,p-1)\\}$,\r\n then $\\diam(D_p)\\le 120\\sqrt{m}+1$.\r\n\\item\\label{bound_p_le10}\r\nIf $p > (m-1)^3$,\r\n then $\\diam(D_p) \\le 19$. 
\n\\end{enumerate}\r\n\\end{theorem}\r\n\r\nThe paper is organized as follows. In section \\ref{preres} we present all results which are needed for our proofs of Theorems \\ref{main} and \\ref{thm_diam_p} in sections \\ref{proofs1} and \\ref{proofs2}, respectively. Section \\ref{open} contains concluding remarks and open problems.\r\n\r\n\r\n\\section{Preliminary results.}\\label{preres}\r\nWe begin with a general result that gives necessary and sufficient conditions for a digraph $D(q;m,n)$ to be strong.\r\n\r\n\r\n\\begin{theorem} {\\rm [\\cite{Kod_Laz_15}, Theorem 2]}\r\n\\label{thm_conn}\r\n$D(q;m,n)$ is strong if and only if $\\gcd(q-1,m,n)$ is not divisible by any\r\n$q_d = (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.\r\nIn particular, $D(p;m,n)$ is strong for any $m,n$.\r\n\\end{theorem}\r\n\r\nEvery walk of length $k$ in $D = D(q; m,n)$ originating at $(a,{b})$ is of the form\r\n\\begin{align}\r\n(a, b) &\\to (x_1,- b + a^m x_1^n)\\nonumber\\\\\r\n &\\to (x_2, b - a^m x_1^n + x_1^m x_2^n)\\nonumber\\\\\r\n &\\to \\cdots \\nonumber\\\\\r\n &\\to(x_k, x_{k-1}^m x_k^n- x_{k-2}^m x_{k-1}^n+\\cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b)\\nonumber.\r\n\\end{align}\r\n\r\nTherefore, in order to prove that $\\diam(D)\\le k$, one can show that for any choice of $a,b,u,v\\in\\fq$, there exists $(x_1,\\dotso,x_k)\\in\\fq^k$ so that\r\n\\begin{equation}\r\n\\label{eqn:walk_length_k}\r\n(u,v) = (x_k, x_{k-1}^m x_k^n- \\cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b).\r\n\\end{equation}\r\nIn order to show that $\\diam(D)\\ge l$, one can show that there exist $a,b,u,v\\in~\\fq$ such that\r\n(\\ref{eqn:walk_length_k}) has no solution in $\\fq^k$ for any $k < l$.\r\n\\bigskip\r\n\r\n\r\n\r\n\\subsection{\r\nWaring's Problem\r\n}\r\nIn order to obtain an upper bound on $\\diam(D(q; m,n))$ we will use some results concerning Waring's problem over finite fields.\r\n\r\nWaring's number $\\gamma(r,q)$ over $\\fq$ is defined as the smallest positive integer $s$ (should it 
exist) such that the equation\r\n\\[\r\nx_1^r + x_2^r + \\dotsb + x_s^r = a\r\n\\]\r\nhas a solution $(x_1,\\dotso,x_s)\\in\\fq^s$ for any $a\\in\\fq$.\r\nSimilarly, $\\delta(r,q)$ is defined as the smallest positive integer $s$ (should it exist) such that\r\nfor any $a\\in\\fq$, there exists $(\\epsilon_1,\\dotso,\\epsilon_s)$,\r\neach $\\epsilon_i\\in\\{-1,1\\}\\subseteq\\mathbb{F}_q$,\r\nfor which the equation\r\n\\[\r\n\\epsilon_1 x_1^r + \\epsilon_2 x_2^r + \\dotsb + \\epsilon_s x_s^r = a\r\n\\]\r\n has a solution $(x_1,\\dotso,x_s)\\in\\fq^s$.\r\n\n\nIt is easy to argue that $\\delta(r,q)$ exists if and only if\r\n$\\gamma(r,q)$ exists, and in this case $\\delta(r,q)\\le \\gamma(r,q)$.\r\n\r\nA criterion on the existence of $\\gamma(r,q)$ is the following theorem by Bhashkaran \\cite{Bhashkaran_1966}.\r\n\r\n\\begin{theorem} {\\rm [\\cite{Bhashkaran_1966}, Theorem G]}\r\n\\label{thm:waring_exist}\r\nWaring's number $\\gamma(r,q)$ exists if and only if $r$ is not divisible by any $q_d\r\n = (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.\r\n\\end{theorem}\r\nThe study of various bounds on $\\gamma(r,q)$ has drawn considerable attention. We will use the following two upper bounds on Waring's number due to J.~Cipra \\cite{Cipra_2009}.\r\n\\begin{theorem}{\\rm [\\cite{Cipra_2009}, Theorem 4]}\r\n\\label{thm:waring_bound}\r\nIf $e = 2$ and $\\gamma(r,q)$ exists,\r\nthen $\\gamma(r,q)\\le 16\\sqrt{r+1}$. 
Also, if\r\n$e \\ge 3$ and $\\gamma(r,q)$ exists,\r\nthen $\\gamma(r,q)\\le 10\\sqrt{r+1}$.\r\n\\end{theorem}\r\n\\begin{cor} {\\rm [\\cite{Cipra_2009}, Corollary 7]}\r\n\\label{thm:diam_le_8}\r\nIf $\\gamma(r,q)$ exists and $r < \\sqrt{q}$, then $\\gamma(r,q)\\le 8$.\r\n\\end{cor}\r\nFor the case $q = p$, the following bound will be of interest.\r\n\\begin{theorem}{\\rm [Cochrane, Pinner \\cite{Cochrane_Pinner_2008}, Corollary 10.3]}\r\n\\label{thm:Cochrane_Pinner}\r\nIf $|\\{x^k\\colon x\\in\\mathbb{F}_p^\\ast\\}|>2$, then $\\delta(k,p)\\le 20\\sqrt{k}$.\r\n\\end{theorem}\r\n\r\nThe next two statements concerning very strong bounds on Waring's number in large fields follow from the work of Weil \\cite{Weil}, and Hua and Vandiver \\cite{Hua_Vandiver}.\r\n\\begin{theorem}{\\rm [Small \\cite{Small_1977}]}\r\n\\label{thm:waring_Small_estimates}\r\nIf $q > (k-1)^4$, then $\\gamma(k,q) \\le 2$.\r\n\\end{theorem}\r\n\\begin{theorem} {\\rm [Cipra \\cite{Cipra_thesis}, p.~4]}\r\n\\label{thm:waring_small_estimates}\r\nIf $ p > (k-1)^3$, then $\\gamma(k,p)\\le 3$.\r\n\\end{theorem}\r\n\r\nFor a survey on Waring's number over finite fields, see Castro and Rubio (Section 7.3.4, p.~211),\r\nand Ostafe and Winterhof (Section 6.3.2.3, p.~175)\r\nin Mullen and Panario \\cite{Handbook2013}. See also Cipra \\cite{Cipra_thesis}.\r\n\r\nWe will need the following technical lemma.\r\n\\begin{lemma}\r\n\t\\label{lemma:alt}\r\n\tLet $\\delta = \\delta(r,q)$ exist, and $k \\ge 2\\delta$.\r\n\tThen for every $a\\in\\fq$ the equation\r\n\t\\begin{equation}\r\n\t\\label{eqn:lemma_alt}\r\n\tx_1^r - x_2^r + x_3^r - \\dotsb + (-1)^{k+1} x_k^r = a\r\n\t\\end{equation}\r\n\thas a solution $(x_1,\\dotso,x_k)\\in\\fq^k$.\r\n\\end{lemma}\r\n\\begin{proof}\r\n\tLet $a\\in\\fq$ be arbitrary. 
There exist $\\varepsilon_1,\\dotso,\\varepsilon_\\delta$, each\r\n\t$\\varepsilon_i\\in\\{-1,1\\}\\subseteq \\fq$, such that\r\n\tthe equation\r\n\t$\\sum_{i=1}^{\\delta} \\varepsilon_i y_i^r = a$ has a solution\r\n\t$(y_1,\\dotso,y_{\\delta})\\in\\fq^{\\delta}$.\r\n\tAs $k \\ge 2\\delta$, the alternating sequence\r\n\t$1,-1,1,\\dotso,(-1)^k$ with $k$ terms contains the sequence\r\n\t$\\varepsilon_1,\\dotso,\\varepsilon_\\delta$ as a subsequence.\r\n\tLet the indices of this subsequence be\r\n\t$j_1,j_2,\\dotso,j_{\\delta}$.\r\n\tFor each $l$, $1\\le l\\le k$, let\r\n\t$x_l = 0$ if $l\\neq j_i$ for any $i$, and\r\n\t$x_l = y_i$ for $l = j_i$. Then $(x_1,\\dotso,x_k)$ is a solution of\r\n\t(\\ref{eqn:lemma_alt}).\r\n\\end{proof}\r\n\r\n\\subsection{The Hasse-Weil bound}\r\nIn the next section we will use\r\nthe Hasse-Weil bound,\r\nwhich provides\r\na bound on the number of $\\fq$-points on a plane non-singular absolutely irreducible projective curve over a finite field $\\fq$.\r\nIf the number of points on the curve $C$ of genus $g$ over the\r\nfinite field $\\fq$ is $|C(\\fq)|$, then\r\n\\begin{equation}\r\n\\label{hasse_weil_bound}\r\n||C(\\fq)| - q -1|\r\n\\le\r\n2g\\sqrt{q}.\r\n\\end{equation}\r\n It is also known that for a non-singular curve\r\n defined by a homogeneous polynomial of degree $k$, $g= (k-1)(k-2)/2$. Discussion of all related notions and a proof of this result can be found in\r\n Hirschfeld, Korchm\\'{a}ros, Torres \\cite{Hirschfeld} (Theorem 9.18, p.~343) or in Sz\\H{o}nyi \\cite{Szonyi1997} (p.~197).\r\n\r\n\\section{Proof of Theorem \\ref{main}} \\label{proofs1}\r\n\r\n\\noindent {\\bf (\\ref{gen_lower_bound}).}\r\nAs there is a loop at $(0,0)$, and there are arcs between $(0,0)$ and $(x,0)$ in either direction, for every $x\\in \\fq^*$, the number of vertices in $D_q$ which are at distance at most 2 from $(0,0)$ is\r\nat most $1+ (q-1)+(q-1)^2 < q^2$. 
Thus, there are vertices in $D_q$ which are at distance\r\nat least 3 from $(0,0)$, and so $\\diam(D_q)\\ge 3$.\r\n\r\n\\bigskip\r\n\r\n\\noindent {\\bf (\\ref{gen_upper_bound}).}\r\nAs $D_q$ is strong, by Theorem \\ref{thm_conn},\r\nfor any positive divisor $d$ of $e$, $d0$ for $q$, we obtain a lower bound on $q$ for which $N \\ge 1$.\r\n\r\n\\bigskip\r\n\r\n\\noindent{\\bf (\\ref{bound_q_le25}a).}\nThe result follows from Corollary \\ref{thm:diam_le_8} by an argument similar to that of the proof of part {\\bf (\\ref{gen_upper_bound})}.\r\n\r\n\\bigskip\r\n\r\n\\noindent {\\bf (\\ref{bound_q_m4n4}b).}\r\nFor $k=13$, (\\ref{eqn:walk_length_k}) is equivalent to\r\n\\[\r\n(u,v)\r\n=\r\n(x_{13},\r\n-b + a^m x_1^n -x_1^m x_2^n + x_2^m x_3^n -\\dotsb - x_{11}^m x_{12}^n + x_{12}^m x_{13}^n).\r\n\\]\r\nIf $q > (m-1)^4$, set $x_1 = x_4 = x_7 = x_{10} = 0$,\r\n$x_3 = x_6 = x_9 = x_{12} = 1$. Then\r\n$v - u^n + b = -x_{11}^m + x_8^m - x_5^m + x_2^m$, which has a solution $(x_2,x_5,x_8,x_{11})\\in\\fq^4$ by Theorem \\ref{thm:waring_Small_estimates} and Lemma \\ref{lemma:alt}.\r\n\r\n\\bigskip\r\n\r\n\\noindent {\\bf (\\ref{bound_q_le6}c).}\r\nFor $k=9$, (\\ref{eqn:walk_length_k}) is equivalent to\r\n\\[\r\n(u,v)\r\n=\r\n(x_9,\r\n-b + a^n x_1^n -x_1^n x_2^n + x_2^n x_3^n -\\dotsb - x_7^m x_8^n + x_8^n x_9^n).\r\n\\]\r\nIf $q > (n-1)^4$, set $x_1 = x_4 = x_5 = x_8 = 0$,\r\n$x_3 = x_7 = 1$. Then\r\n$v + b = x_2^n + x_6^n$, which has a solution $(x_2,x_6)\\in\\fq^2$ by Theorem \\ref{thm:waring_Small_estimates}.\r\n\\bigskip\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\section{Proofs of Theorem \\ref{thm_diam_p}} \\label{proofs2}\r\n\r\n\\begin{lemma}\\label{AutoLemma}\r\nLet $D=D(q;m,n)$. Then, for any $\\lambda\\in\\mathbb{F}_q^*$, the function $\\phi:V(D) \\rightarrow V(D)$ given by $\\phi((a,b)) = (\\lambda a, \\lambda^{m+n} b)$ is\r\na digraph automorphism of $D$.\r\n\\end{lemma}\r\n\r\nThe proof of the lemma is straightforward. 
It amounts to showing that $\\phi$ is a bijection and that it preserves adjacency: ${\\bf x} \\to {\\bf y}$ if and only if $\\phi({\\bf x}) \\to \\phi({\\bf y})$. Indeed, $\\phi$ is clearly a bijection, and if ${\\bf x} \\to {\\bf y}$, i.e., $x_2 + y_2 = x_1^my_1^n$, then multiplying both sides by $\\lambda^{m+n}$ gives $\\lambda^{m+n}x_2 + \\lambda^{m+n}y_2 = (\\lambda x_1)^m(\\lambda y_1)^n$, i.e., $\\phi({\\bf x}) \\to \\phi({\\bf y})$; the converse is analogous. Due to Lemma \\ref{AutoLemma}, any walk in $D$ initiated at a vertex $(a,b)$ corresponds to a walk initiated at a vertex $(0,b)$ if $a=0$, or at a vertex $(1,b')$, where $b'= a^{-m-n} b$, if $a\\neq 0$. This implies that if we wish to show that $\\diam (D_p) \\le 2p-1$, it is sufficient to show that the distance from any vertex $(0,b)$ to any other vertex is at most $2p-1$, and that the distance from any vertex $(1,b)$ to any other vertex is at most $2p-1$.\r\n\r\n\r\n\r\nFirst we note that by Theorem \\ref{thm_conn}, $D_p = D(p;m,n)$ is strong for any choice of $m,n$.\r\n\r\nFor $a\\in\\mathbb{F}_p$, let the integer $\\overline{a}$, $0\\le \\overline{a} \\le p-1$, be the representative of the residue class $a$.\r\n\r\n\r\nIt is easy to check that $\\diam (D(2; 1,1)) = 3$.\r\nTherefore, for the remainder of the proof, we may assume that $p$ is odd.\r\n\\bigskip\r\n\r\n\r\n\\noindent{\\bf (\\ref{diam_bound_p}).}\r\nIn order to show that $\\diam(D_p) \\le 2p-1$, we use (\\ref{eqn:walk_length_k}) with $k= 2p-1$, and prove that for any two vertices $(a,b)$ and $(u,v)$ of $D_p$ there\r\nis always a solution $(x_1, \\ldots, x_{2p-1})\\in \\fq^{2p-1}$ of\r\n$$(u,v) = (x_{2p-1}, -b + a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \\dots -\r\nx_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mx_{2p-1}^n),\r\n$$\r\nor, equivalently, a solution ${\\bf x} = (x_1, \\ldots, x_{2p-2})\\in \\fq^{2p-2}$ of\r\n\\begin{equation} \\label{eq:1}\r\na^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \\dots -\r\nx_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mu^n = b+v.\r\n\\end{equation}\r\nAs the upper bound $2p-1$ on the diameter is exact and holds for all $p$, we need a more subtle argument compared to the ones we used before. The only way we can do this is (unfortunately) by performing a case analysis on $\\overline{b+v}$ with a nested case structure. 
In most of the cases we just exhibit a solution ${\\bf x}$ of (\\ref{eq:1}) by describing its components $x_i$.\r\nIt is always a straightforward verification that ${\\bf x}$ satisfies (\\ref{eq:1}), and we will suppress our comments as cases proceed.\r\n\r\nOur first observation is that if $\\overline{b+v} = 0$, then ${\\bf x} = (0,\\dots, 0)$ is a solution to (\\ref{eq:1}).\r\nWe may assume now that $\\overline{b+v}\\ne 0$.\\\\\r\n\r\n\\noindent\\underline{Case 1.1}: $\\overline{b+v}\\ge \\frac{p-1}{2} + 2$\r\n\r\n\\noindent\nWe define the components of ${\\bf x}$ as follows:\r\n\r\nif $1\\le i\\le 4(p-(\\overline{b+v}))$, then $x_i=0$ for $i\\equiv 1,2 \\mod{4}$, and $x_i=1$ for $i\\equiv 0,3 \\mod{4}$;\r\n\r\nif $4(p-(\\overline{b+v}))< i \\le 2p-2$, then $x_i=0$.\r\n\r\n\r\nNote that $x_i^mx_{i+1}^n = 0$ unless $i\\equiv 3 \\mod 4$,\r\nin which case $x_i^mx_{i+1}^n = 1$. If we group the terms\r\nin groups of four so that each group is of the form\r\n\\[\r\n-x_i^mx_{i+1}^n+x_{i+1}^mx_{i+2}^n-x_{i+2}^mx_{i+3}^n+x_{i+3}^mx_{i+4}^n,\r\n\\]\r\nwhere $i\\equiv 1 \\mod 4$, then assuming $i$, $i+1$, $i+2$, $i+3$, and $i+4$ are within the range of\r\n$1\\le i2$, $N_4=\\{(0,1)\\}$, $N_5=(*,-1)$. As there exist two (opposite) arcs between each vertex of $(*,x)$ and each vertex $(*,-x+1)$, these subsets of vertices induce the complete bipartite subdigraph $\\overrightarrow{K}_{p-1,p-1}$ if $x\\ne -x+1$, and the complete subdigraph $\\overrightarrow{K}_{p-1}$ if $x =-x+1$. Note that our $\\overrightarrow{K}_{p-1,p-1}$ has no loops, but $\\overrightarrow{K}_{p-1}$ has a loop on every vertex.\r\nDigraph $D(5;4,4)$ is depicted in Fig. 
$1.2$.\r\n\r\n\\begin{figure}\r\n\\begin{center}\r\n\\begin{tikzpicture}\r\n\r\n\\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\\sffamily\\scriptsize\\bfseries}\r\n\\tikzset{edge/.style = {->,> = stealth'},shorten >=1pt}\r\n\r\n\\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,0)$}] (a) at (0,0) {};\r\n\r\n\\node[vertex] (b1) at (1,1.5) {};\r\n\\node[vertex] (b2) at (1,.5) {};\r\n\\node[vertex] (b3) at (1,-.5) {};\r\n\\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\\ast,0)$}] (b4) at (1,-1.5) {};\r\n\r\n\\node[vertex] (c1) at (2,1.5) {};\r\n\\node[vertex] (c2) at (2,.5) {};\r\n\\node[vertex] (c3) at (2,-.5) {};\r\n\\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\\ast,1)$}] (c4) at (2,-1.5) {};\r\n\r\n\\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-1)$}] (d) at (3,0) {};\r\n\r\n\\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,1)$}] (e) at (4,0) {};\r\n\r\n\\node[vertex] (f1) at (5,1.5) {};\r\n\\node[vertex] (f2) at (5,.5) {};\r\n\\node[vertex] (f3) at (5,-.5) {};\r\n\\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\\ast,-1)$}] (f4) at (5,-1.5) {};\r\n\r\n\\node[vertex] (g1) at (6,1.5) {};\r\n\\node[vertex] (g2) at (6,.5) {};\r\n\\node[vertex] (g3) at (6,-.5) {};\r\n\\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\\ast,2)$}] (g4) at (6,-1.5) {};\r\n\r\n\\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-2)$}] (h) at (7,0) {};\r\n\r\n\\node[vertex,label={[xshift=-0.3cm, yshift=0.00cm]$(0,2)$}] (i) at (8,0) {};\r\n\r\n\\node[vertex] (j1) at (9,1.5) {};\r\n\\node[vertex] (j2) at (9,.5) {};\r\n\\node[vertex] (j3) at (9,-.5) {};\r\n\\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\\ast,-2)$}] (j4) at (9,-1.5) {};\r\n\r\n\\path\r\n \n \n (a) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=240,in=270, looseness = 50] node {} (a);\r\n\r\n\\foreach \\x in {b1,b2,b3,b4}\r\n{\r\n \\draw [edge] (a) to (\\x);\r\n \\draw [edge] (\\x) to (a);\r\n}\r\n\r\n\\foreach \\x in {b1,b2,b3,b4}\r\n{\r\n 
\\foreach \\y in {c1,c2,c3,c4}\r\n {\r\n \\draw [edge] (\\x) to (\\y);\r\n \\draw [edge] (\\y) to (\\x);\r\n }\r\n}\r\n\r\n\\foreach \\x in {c1,c2,c3,c4}\r\n{\r\n \\draw [edge] (d) to (\\x);\r\n \\draw [edge] (\\x) to (d);\r\n}\r\n\r\n\\draw [edge] (d) to (e);\r\n\\draw [edge] (e) to (d);\r\n\r\n\\foreach \\x in {f1,f2,f3,f4}\r\n{\r\n \\draw [edge] (e) to (\\x);\r\n \\draw [edge] (\\x) to (e);\r\n}\r\n\r\n\\foreach \\x in {f1,f2,f3,f4}\r\n{\r\n \\foreach \\y in {g1,g2,g3,g4}\r\n {\r\n \\draw [edge] (\\x) to (\\y);\r\n \\draw [edge] (\\y) to (\\x);\r\n }\r\n}\r\n\r\n\\foreach \\x in {g1,g2,g3,g4}\r\n{\r\n \\draw [edge] (h) to (\\x);\r\n \\draw [edge] (\\x) to (h);\r\n}\r\n\r\n\\draw [edge] (h) to (i);\r\n\\draw [edge] (i) to (h);\r\n\r\n\\foreach \\x in {j1,j2,j3,j4}\r\n{\r\n \\draw [edge] (i) to (\\x);\r\n \\draw [edge] (\\x) to (i);\r\n}\r\n\r\n\\path\r\n (j1) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j1);\r\n\\path\r\n (j2) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j2);\r\n\\path\r\n (j3) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j3);\r\n\\path\r\n (j4) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j4);\r\n\r\n\\path\r\n(j1) edge[bend right,<->,>=stealth'] node [left] {} (j2);\r\n\\path\r\n(j1) edge[bend right = 60,<->,>=stealth'] node [left] {} (j3);\r\n\\path\r\n(j1) edge[bend right = 320,<->,>=stealth'] node [left] {} (j4);\r\n\\path\r\n(j2) edge[bend right,<->,>=stealth'] node [left] {} (j3);\r\n\\path\r\n(j2) edge[bend right = 60,<->,>=stealth'] node [left] {} (j4);\r\n\\path\r\n(j3) edge[bend right,<->,>=stealth'] node [left] {} (j4);\r\n\\end{tikzpicture}\r\n\\caption{The digraph $D(5;4,4)$: $x_2+y_2 = x_1^4y_1^4$.}\r\n\\end{center}\r\n\\end{figure}\r\n\r\n\r\nThe structure of $D(p;p-1,p-1)$ for any other prime $p$ is similar. 
We can describe it as follows: for each $t\\in \\{0,1, \\ldots, (p-1)/2\\}$, let\r\n$$\r\nN_{4{\\overline t}} = \\{(0, t)\\}, \\;\\;\r\nN_{4{\\overline t}+1} = (*, -t),\r\n$$\r\nand for each $t\\in \\{0,1, \\ldots, (p-3)/2\\}$, let\r\n$$\r\nN_{4{\\overline t}+2} = (*, t+1), \\;\r\nN_{4{\\overline t}+3} = \\{(0, -t-1)\\}.\r\n$$\r\nNote that for $0\\le {\\overline t}<(p-1)/2$, $N_{4{\\overline t}+1}\\neq N_{4{\\overline t}+2}$, and for ${\\overline t}=(p-1)/2$, $N_{2p-1}=(*,(p+1)/2)$. Therefore, for $p\\ge 3$, $D(p;p-1,p-1)$ contains $(p-1)/2$ induced copies of\r\n$\\overrightarrow{K}_{p-1,p-1}$ with partitions $N_{4{\\overline t}+1}$ and $N_{4{\\overline t}+2}$, and a copy of $\\overrightarrow{K}_{p-1}$ induced by $N_{2p-1}$. The proof is a trivial induction on $\\overline{t}$. Hence, $\\diam (D(p;p-1,p-1)) = 2p-1$. This ends the proof of Theorem~\\ref{thm_diam_p}~(\\ref{diam_bound_p}).\r\n\r\n\r\n\r\n\\bigskip\r\n\r\n\\noindent{\\bf (\\ref{bound_p_sqrt60}).}\r\nWe follow the argument of the proof of Theorem \\ref{main}, part {\\bf (\\ref{gen_upper_bound})} and use Lemma \\ref{lemma:alt}, with $k = 6\\delta(m,p)+1$. We note, additionally, that if $m\\not\\in\\{p,(p-1)/2\\}$, then $\\gcd(m,p-1) < (p-1)/2$, which implies $|\\{ x^m \\colon x\\in\\mathbb{F}_p^\\ast \\} | > 2$. The result then follows from Theorem \\ref{thm:Cochrane_Pinner}.\r\n\r\n\r\n\\bigskip\r\n\r\n\\noindent{\\bf (\\ref{bound_p_le10}).}\r\nWe follow the argument of the proof of Theorem \\ref{main}, part {\\bf (\\ref{bound_q_m4n4}b)} and use Lemma \\ref{lemma:alt} and Theorem \\ref{thm:waring_small_estimates}.\r\n\\medskip\r\n\r\nThis ends the proof of Theorem~\\ref{thm_diam_p}.\r\n\r\n\\bigskip\r\n\r\n\r\n\r\n\\section{Concluding remarks.}\\label{open}\r\n\r\n\r\n\r\nMany results in this paper follow the same pattern: if Waring's number $\\delta(r,q)$ exists and is bounded above by $\\delta$, then one can show that $\\diam(D(q;m,n))\\le 6\\delta + 1$. 
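The brute-force computations behind this kind of computer work are easy to reproduce. The following sketch (ours, for illustration only; the function names are not from any existing package) builds $D(p;m,n)$ over a prime field, computes its diameter by breadth-first search, and computes Waring's number $\gamma(r,p)$ by iterated sumsets of the $r$-th powers.

```python
from itertools import product

def monomial_digraph(p, m, n):
    """Adjacency lists of D(p; m, n) over the prime field F_p:
    (x1, x2) -> (y1, y2)  iff  x2 + y2 = x1^m * y1^n (mod p)."""
    adj = {}
    for x1, x2 in product(range(p), repeat=2):
        adj[(x1, x2)] = [(y1, (pow(x1, m, p) * pow(y1, n, p) - x2) % p)
                         for y1 in range(p)]
    return adj

def diameter(adj):
    """Largest BFS eccentricity; float('inf') if the digraph is not strong."""
    worst = 0
    for s in adj:
        dist = {s: 0}
        queue = [s]
        for u in queue:              # queue grows during iteration: plain BFS
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if len(dist) < len(adj):     # some vertex unreachable from s
            return float('inf')
        worst = max(worst, max(dist.values()))
    return worst

def waring_gamma(r, p):
    """Smallest s such that every a in F_p is a sum of s r-th powers
    (x_i = 0 is permitted among the summands)."""
    powers = {pow(x, r, p) for x in range(p)}
    sums = {0}
    for s in range(1, p):
        sums = {(a + b) % p for a in sums for b in powers}
        if len(sums) == p:
            return s
    return None  # never reached for prime p, since 1 is always an r-th power

# Example: diam(D(2; 1, 1)) = 3, as noted in the proof of Theorem 2.
assert diameter(monomial_digraph(2, 1, 1)) == 3
```

For instance, $D(3;1,2)$ of Fig.~1.1 has all out-degrees equal to $q=3$ and a diameter between the bounds of Theorem 1, and $D(5;4,4)$ of Fig.~1.2 attains the extremal diameter $2p-1=9$.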
Determining the exact value of $\\delta(r,q)$ is an open problem, and it is likely to be very hard. Also, the upper bound $6\\delta +1$ is not exact in general. Out of all partial results concerning $\\delta(r,q)$, we used only those which helped us deal with the cases of the diameter of $D(q; m,n)$ that we considered, especially where the diameter was small. We left out applications of all asymptotic bounds on $\\delta(r,q)$. Our computer work demonstrates that some upper bounds on the diameter mentioned in this paper are still far from being tight. Here we wish to mention only a few strong patterns that we observed but have not been able to prove so far. We state them as problems.\r\n\\bigskip\r\n\r\n\\noindent{\\bf Problem 1.}\r\n Let $p$ be prime, $q=p^e$, $e \\ge 2$, and suppose $D(q;m,n)$ is strong. Let\r\n $r$ be the largest divisor of $q-1$\r\n not divisible by any\r\n $q_d = (p^e-1)/(p^d-1)$\r\n where $d$ is a positive divisor of $e$ smaller than $e$. Is it true that\r\n\\[\r\n\\max_{1\\le m\\le n\\le q-1}\r\n\\{\r\n\\diam(D(q;m,n))\r\n\\}\r\n=\r\n\\diam(D(q;r,r))?\r\n\\]\r\nFind an upper bound on $\\diam(D(q;r,r))$ better than the one of\r\nTheorem \\ref{main}, part {\\bf (\\ref{bound_q_le6}c)}.\r\n\\bigskip\r\n\r\n\\noindent{\\bf Problem 2.} Is it true that for every prime $p$ and $1\\le m \\le n$,\r\n$(m,n)\\neq (p-1,p-1)$, $\\diam (D(p;m,n)) \\le (p+3)/2$ with equality if and only if $(m,n)=((p-1)/2, (p-1)/2)$ or $(m,n)=((p-1)/2, p-1)$?\r\n\r\n\\bigskip\r\n\r\n\\noindent{\\bf Problem 3.} Is it true that for every prime $p$, $\\diam (D(p;m,n))$ takes only one of two consecutive values which are completely determined by $\\gcd(p-1, m, n)$?\r\n\r\n\\section{Acknowledgement}\r\nThe authors are thankful to the anonymous referee whose careful reading and thoughtful comments led to a number of significant improvements in the paper.\r\n\r\n\r\n", "meta": {"timestamp": "2018-07-31T02:20:52", "yymm": "1807", "arxiv_id": "1807.11360", "language": "en", 
"url": "https://arxiv.org/abs/1807.11360"}} {"text": "\\section{Introduction}\n\n\nIn the last decade Machine Learning (ML) has been rapidly evolving due to the profound performance improvements that Deep Learning (DL) has ushered in. Deep Learning has outperformed previous state-of-the-art methods in many fields of Machine Learning, such as Natural Language Processing (NLP)~\\cite{deng2018feature}, image processing~\\cite{larsson2018robust} and speech generation~\\cite{van2016wavenet}. As the number of new methods incorporating Deep Learning in many scientific fields increases, the proposed solutions begin to span across other disciplines where Machine Learning was previously used in a limited capacity. One such example is the quantitative analysis of the stock markets, where Machine Learning is used to predict price movements, to forecast the volatility of future prices, or to detect anomalous events in the markets.\n\nIn the field of quantitative analysis, the mathematical modelling of the markets has been the de facto approach to model stock price dynamics for trading, market making, hedging, and risk management. By utilizing a time series of values, such as the price fluctuations of financial products being traded in the markets, one can construct statistical models which can assist in the extraction of useful information about the current state of the market, along with a set of probabilities for possible future states, such as price or volatility changes. Many models, such as the Black-Scholes-Merton model~\\cite{black1973pricing}, attempted to mathematically deduce the price of options and can be used to provide useful indications of future price movements. \n\nHowever, at some point, as more market participants started using the same model, the behaviour of the price changed to the point that it could no longer be exploited. 
Newer models, such as the stochastic modelling of limit order book dynamics \\cite{cont2010stochastic}, the jump-diffusion processes for stock dynamics \\cite{bandi2016price} and volatility estimation under market microstructure noise \\cite{ait2009estimating}, have been attempts to predict multiple aspects of the financial markets. However, such models are designed to be tractable, even at the cost of reliability and accuracy, and thus they do not necessarily fit empirical data very well.\n\n\nThe aforementioned properties put handcrafted models at a disadvantage, since the financial markets very frequently exhibit irrational behaviour, mainly due to the large influence of human activity, which frequently causes these models to fail. Combining Machine Learning models with handcrafted features usually improves the forecasting abilities of such models, by overcoming some of the aforementioned limitations, and improving predictions about various aspects of financial markets. This has led many organizations that participate in the financial markets, such as hedge funds and investment firms, to increasingly use ML models, along with the conventional mathematical models, to make crucial decisions.\n\nFurthermore, the introduction of electronic trading, which also led to the automation of trading operations, has magnified the volume of exchanges, producing a wealth of data. Deep Learning models are perfect candidates for analyzing such amounts of data, since they perform significantly better than conventional Machine Learning methodologies when a large amount of data is available. This is one of the reasons that Deep Learning is starting to play a role in analyzing the data coming from financial exchanges~\\cite{kercheval2015modelling, tsantekidis2017using}.\n\nThe most detailed type of data that financial exchanges are gathering is the comprehensive logs of every submitted order and event that is happening within their internal matching engine. 
This log can be used to reconstruct the Limit Order Book (LOB), which is explained further in Section \\ref{data-section}. A basic task that arises from this data is the prediction of future price movements of an asset by examining the current and past supply and demand of limit orders. The comprehensive logs kept by the exchanges are excessively large, and traditional Machine Learning techniques, such as Support Vector Machines (SVMs) \\cite{vapnik1995support}, usually cannot be applied out-of-the-box. \nUtilizing this kind of data directly with existing Deep Learning methods is also not possible due to its non-stationary nature. Prices fluctuate and suffer from stochastic drift, so in order for them to be effectively utilized by DL methods a preprocessing step is required to generate stationary features from them.\n\nThe main contribution of this work is the proposal of a set of stationary features that can be readily extracted from the Limit Order Book. The proposed features are thoroughly evaluated for predicting future mid price movements from large-scale high-frequency Limit Order data using several different Deep Learning models, ranging from simple Multilayer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) to Recurrent Neural Networks (RNNs). We also propose a novel Deep Learning model that combines the feature extraction ability of CNNs with the Long Short-Term Memory (LSTM) networks' power to analyze time series.\n\n\nIn Section~2, related work employing ML models on financial data is briefly presented. Then, the dataset used is described in detail in Section~3. In Section~4 the proposed stationary feature extraction methodology is presented in detail, while in Section~5 the proposed Deep Learning methods are described. In Section~6 the experimental evaluation and comparisons are provided. 
Finally, conclusions are drawn and future work is discussed in Section~7.\n\n\\section{Related Work}\n\nThe task of regressing the future movements of financial assets has been the subject of many recent works such as \\cite{kazem2013support, hsieh2011forecasting, lei2018wavelet}. Proven models such as GARCH are improved and augmented with machine learning components such as Artificial Neural Networks \\cite{michell2018stock}. New hybrid models are employed along with Neural Networks to improve upon previous performance \\cite{huang2012hybrid}. \n\nOne of the most volatile financial markets is FOREX, the currency market. In \\cite{galeshchuk2016neural}, neural networks are used to predict the future exchange rate of major FOREX pairs such as USD/EUR. The model is tested with different prediction steps ranging from daily to yearly, reaching the conclusion that shorter term predictions tend to be more accurate. Other financial metrics, such as cash flow prediction, are very closely correlated to price prediction. \n\nIn \\cite{heaton2016deep}, the authors propose the ``Deep Portfolio Theory'', which applies autoencoders in order to produce optimal portfolios. This approach outperforms several established benchmarks, such as the Biotechnology IBB Index. Likewise, in \\cite{takeuchi2013applying} another type of autoencoder, known as the Restricted Boltzmann Machine (RBM), is applied to encode the end-of-month prices of stocks. Then, the model is fine-tuned to predict whether the price will move more than the median change and the direction of such movement. 
This strategy is able to outperform a benchmark momentum strategy in terms of annualized returns.\n\nAnother approach is to include data sources outside the financial time series, e.g., \\cite{xiong2015deep}, where phrases related to finance, such as ``mortgage'' and ``bankruptcy'', were monitored on the Google Trends platform and included as an input to a recurrent neural network along with the daily S\\&P 500 market fund prices. The training target is the prediction of the future volatility of the market fund's price. This approach can greatly outperform many benchmark methods, such as the autoregressive GARCH and Lasso techniques.\n\nThe surge of DL methods has dramatically improved the performance over many conventional machine learning methods on tasks such as speech recognition \\cite{graves2013speech}, image captioning~\\cite{xu2015show, mao2014deep}, and question answering \\cite{zhu2016visual7w}. The most important building blocks of DL are the Convolutional Neural Networks (CNNs) \\cite{lecun1995convolutional} and the Recurrent Neural Networks (RNNs). Also worth mentioning is the improvement of RNNs with the introduction of Long Short-Term Memory units (LSTMs) \\cite{hochreiter1997long}, which has made the analysis of time series using DL easier and more effective. \n\nUnfortunately, DL methods are prone to overfitting, especially in tasks such as price regression, and many works try to prevent such overfitting \\cite{niu2012short, xi2014new}. Some might attribute overfitting to the lack of the huge amounts of data that other tasks, such as image and speech processing, have available to them. A very rich data source for financial forecasting is the Limit Order Book. One of the few applications of ML on high frequency Limit Order Book data is \\cite{kercheval2015modelling}, where several handcrafted features are created, including price deltas, bid-ask spreads and price and volume derivatives. 
An SVM is then trained to predict the direction of future mid price movements using all the handcrafted features. In \\cite{tran2017temporal}, a neural network architecture incorporating the idea of bilinear projection augmented with a temporal attention mechanism is used to predict the LOB mid price.\n\nSimilarly, \\cite{ntakaris2018mid, tran2017tensor} utilize Limit Order Book data along with ML methods, such as multilinear methods and smart feature selection, to predict future price movements. In our previous work~\\cite{tsantekidis2017forecasting, tsantekidis2017using, passalis2017time} we introduced a large-scale high-frequency Limit Order Book dataset, which is also used in this paper, and we employed three simple DL models, the Convolutional Neural Network (CNN), the Long Short-Term Memory Recurrent Neural Network (LSTM RNN) and the Neural Bag-of-Features (N-BoF) model, to tackle the problem of forecasting the mid price movements. However, these approaches directly used the non-stationary raw Order Book data, making them vulnerable to distribution shifts and harming their ability to generalize on unseen data, as we also experimentally demonstrate in this paper.\n\nTo the best of our knowledge this is the first work that proposes a structured approach for extracting stationary price features from the Limit Order Book that can be effectively combined with Deep Learning models. We also provide an extensive evaluation of the proposed methods on a large-scale dataset with more than 4 million events. Finally, a powerful model that combines the CNN's feature extraction properties with the LSTM's time series modelling capabilities is proposed in order to improve the accuracy of predicting the price movement of stocks. 
The proposed combined model is also compared with the previously introduced methods using the proposed stationary price features.\n\n\n\\section{Limit Order Book Data}\n\\label{data-section}\n\n\nIn an order-driven financial market, a market participant can place two types of buy/sell orders. By posting a {\\em limit order}, a trader promises to buy (sell) a certain amount of an asset at a specified price or less (more). The limit order book comprises the valid limit orders that have not been executed or cancelled yet. \n\nThis Limit Order Book (LOB) contains all existing buy and sell orders that have been submitted and are awaiting to be executed. A limit order is placed on the queue at a given price level, where, in the case of standard limit orders, the execution priority at a given price level is dictated by the arrival time (first in, first out). A {\\em market order} is an order to immediately buy/sell a certain quantity of the asset at the best available price in the limit order book. If the requested price of a limit order is far from the best prices, it may take a long time for the limit order to be executed, in which case the order can finally be cancelled by the trader. The orders are split between two sides, the bid (buy) and the ask (sell) side. Each side contains the orders sorted by their price, in descending order for the bid side and ascending order for the ask side. \n\nFollowing the notation used in \\cite{cont2010stochastic}, a price grid is defined as $\\{\\rho^{(1)}(t),\\dots,\\rho^{(n)}(t)\\}$, where $\\rho^{(j)}(t) > \\rho^{(i)}(t)$ for all $j>i$. The price grid contains all possible prices and each consecutive price level is incremented by a single tick from the previous price level. 
The state of the order book is a continuous-time process $v(t) \\equiv \\left(v^{(1)}(t), v^{(2)}(t), \\dots, v^{(n)}(t) \\right)_{t \\geq 0}$, where $|v^{(i)}(t)|$ is the number of outstanding limit orders at price $\\rho^{(i)}(t)$, $1 \\leq i \\leq n$. If $v^{(i)}(t) < 0$, then there are $-v^{(i)}(t)$ bid orders at price $\\rho^{(i)}(t)$; if $v^{(i)}(t)>0$, then there are $v^{(i)}(t)$ ask orders at price $\\rho^{(i)}(t)$. That is, $v^{(i)}(t) > 0$ refers to ask orders and $v^{(i)}(t) < 0$ to bid orders. \n\n\nThe location of the best ask price in the price grid is defined by:\n\\[\ni_a^{(1)}(t) = \\inf\\{i = 1, \\dots, n\\ ;\\ v^{(i)}(t)>0 \\},\n\\]\nand, correspondingly, the location of the best bid price is defined by:\n\\[\ni_b^{(1)}(t) = \\sup\\{i = 1, \\dots, n\\ ;\\ v^{(i)}(t)<0 \\}.\n\\]\nFor simplicity, we denote the best ask and bid prices as $p_a^{(1)}(t) \\equiv \\rho^{\\left(i_a^{(1)}(t) \\right)}(t)$ and $p_b^{(1)}(t) \\equiv \\rho^{\\left(i_b^{(1)} (t)\\right)}(t)$, respectively. Notice that if there are no ask (bid) orders in the book, the ask (bid) price is not defined. \n\nMore generally, given that the $k$th best ask and bid prices exist, their locations are denoted as $i_a^{(k)}(t) \\equiv i_a^{(1)}(t) + k-1$ and $i_b^{(k)}(t) \\equiv i_b^{(1)}(t) - k+1$. The $k$th best ask and bid prices are correspondingly denoted by $p_a^{(k)}(t) \\equiv \\rho^{\\left(i_a^{(k)}(t) \\right)}(t)$ and $p_b^{(k)}(t) \\equiv \\rho^{\\left(i_b^{(k)}(t) \\right)}(t)$, respectively. Correspondingly, we denote the number of outstanding limit orders at the $k$th best ask and bid levels by $\\upnu_a^{(k)}(t) \\equiv v^{\\left(i_a^{(k)}(t)\\right)}(t)$ and $\\upnu_b^{(k)}(t) \\equiv v^{\\left(i_b^{(k)}(t)\\right)}(t)$, respectively.\n\n\nLimit Order Book data can be used for a variety of tasks, such as the estimation of the future price trend or the regression of useful metrics, like the price volatility. 
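As a concrete illustration of these definitions (a minimal numpy sketch on a toy book of our own, not code from the paper), the best-level locations $i_a^{(1)}(t)$ and $i_b^{(1)}(t)$ can be recovered from the signed depth vector $v(t)$ as follows:

```python
import numpy as np

# Sketch (toy example, not the paper's code): v[i] < 0 encodes bid volume and
# v[i] > 0 ask volume at price level rho^(i), following the notation above.

def best_ask_index(v):
    """i_a^(1) = inf{i : v[i] > 0}, with 0-based indices here."""
    ask = np.flatnonzero(v > 0)
    if ask.size == 0:
        raise ValueError("no ask orders in the book: best ask undefined")
    return int(ask[0])

def best_bid_index(v):
    """i_b^(1) = sup{i : v[i] < 0}, with 0-based indices here."""
    bid = np.flatnonzero(v < 0)
    if bid.size == 0:
        raise ValueError("no bid orders in the book: best bid undefined")
    return int(bid[-1])

# Toy book on a 6-level price grid: two bid levels, an empty gap, then asks.
v = np.array([-5, -3, 0, 0, 2, 7])
i_b, i_a = best_bid_index(v), best_ask_index(v)  # i_b = 1, i_a = 4
```

Since the grid is sorted in ascending price order, $i_b^{(1)} < i_a^{(1)}$ always holds when both sides are non-empty.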
Other possible tasks may include the early prediction of anomalous events, like extreme changes in price which may indicate manipulation in the markets. These examples are a few of the multiple applications which can aid investors in protecting their capital when unfavourable conditions exist in the markets or, in other cases, in taking advantage of them to profit.\n\nMost modern methods that utilize financial time series data employ subsampling techniques, such as the well-known OHLC (Open-High-Low-Close) candles \\cite{yang2000drift}, in order to reduce the number of features of each time interval. Although the OHLC candles preserve useful information, such as the market trend and movement ranges within the specified intervals, they remove possibly important microstructure information. Since the LOB is constantly receiving new orders at inconsistent intervals, it is not possible to subsample time-interval features from it in a way that preserves all the information it contains. This problem can be addressed, to some extent, using recurrent neural network architectures, such as LSTMs, that are capable of natively handling inputs of varying size. This allows the data to be utilized fully without resorting to time interval-based subsampling.\n\n\nThe LOB data used in this work is provided by Nasdaq Nordic and consists of 10 days worth of LOB events for 5 different Finnish company stocks, namely Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki and Wärtsilä Oyj \\cite{ntakaris2017benchmark,siikanen2016limit}. The gathered data covers the period from the 1st of June 2010 to the 14th of June 2010. Also, note that trading only happens during business days.\n\nThe data consists of consecutive snapshots of the LOB state after each state-altering event takes place. This event might be an order insertion, execution or cancellation; after it interacts with the LOB and changes its state, a snapshot of the new state is taken. 
The depth of the LOB data used is $10$ for each side of the Order Book, i.e., 10 active orders (each consisting of a price and a volume) per side, adding up to a total of $40$ values for each LOB snapshot. This sums to a total of $4.5$ million snapshots that can be used to train and evaluate the proposed models.\n\nIn this work the task we aim to accomplish is the prediction of price movements based on current and past changes occurring in the LOB. This problem is formally defined as follows: Let $\\mathbf{x}(t) \\in \\mathbb{R}^q$ denote the feature vector that describes the condition of the LOB at time $t$ for a specific stock, where $q$ is the dimensionality of the corresponding feature vector. The direction of the mid-price of that stock is defined as $l_k(t) \\in \\{-1, 0, 1\\}$ depending on whether the mid price decreased (-1), remained stationary (0) or increased (1) after $k$ LOB events occurred.\nThe number of orders $k$ is also called the \\textit{prediction horizon}. We aim to learn a model $f_k(\\mathbf{x}(t))$, where $f_k: \\mathbb{R}^{q} \\rightarrow \\{-1, 0, 1\\} $, that predicts the direction $l_{k}(t)$ of the mid-price after $k$ orders.\nIn the following section the aforementioned features and labels, as well as the procedure to calculate them, are explained in depth. \n\n\\section{Stationary Feature and Label Extraction}\n\nThe raw LOB data cannot be directly used for any ML task without some kind of preprocessing. The order volume values can be gathered for all stocks' LOBs and normalized together, since they are expected to follow the same distribution. However, this is not true for price values, since the value of a stock or asset may fluctuate and increase with time to never before seen levels. 
This means that the statistics of the price values can change significantly with time, rendering the price time series non-stationary.\n\nSimply normalizing all the price values will not resolve the non-stationarity, since there will always be unseen data that may shift the distribution of values to ranges that are not present in the current data. We present two solutions for this problem: one used in past work, where normalization is constantly applied using past available statistics, and a new approach that completely converts the price data to stationary values.\n\n\\subsection{Input Normalization}\n\\label{sec:input-normalization}\n\nThe most common normalization scheme is standardization (z-score):\n\\begin{equation}\nx_{\\text{norm}} = \\dfrac{{x} - \\bar{x}}{\\sigma_{\\bar{x}}}\n\\label{zscore-eq},\n\\end{equation}\nwhere ${x}$ is a feature to be normalized, $\\bar{x}$ is the mean and $\\sigma_{\\bar{x}}$ is the standard deviation across all samples. Such normalization is separately applied to the order size values and the price values. However, using this kind of ``global'' normalization preserves the different scales between the prices of different stocks, which is exactly what we are trying to avoid. The solution presented in \\cite{tsantekidis2017forecasting,tsantekidis2017using} is to use the z-score to normalize each stock-day worth of data with the means and standard deviations calculated using the previous day's data of the same stock. This avoids a major problem, namely the distribution shift in stock prices that can be caused by events such as stock splits or the large changes in price that can happen over longer periods of time.\n\nUnfortunately, this presents another important issue for learning. The differences between the price values at different LOB levels are almost always minuscule. Since all the price levels are normalized using the z-score with the same statistics, extracting features at that scale is hard. 
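The prior-day z-score scheme just described can be sketched as follows (the synthetic data and the small epsilon guard are our own assumptions, not the paper's code):

```python
import numpy as np

def zscore_with_prev_day(day, prev_day):
    """Standardize one stock-day of features using the *previous* day's
    statistics for the same stock, so no current-day or future statistics
    leak into the normalized input."""
    mu = prev_day.mean(axis=0)
    sigma = prev_day.std(axis=0)
    return (day - mu) / (sigma + 1e-12)  # epsilon guards constant features

rng = np.random.default_rng(0)
prev_day = rng.normal(100.0, 2.0, size=(1000, 4))  # e.g. 4 price levels
today = rng.normal(101.0, 2.0, size=(1000, 4))
normed = zscore_with_prev_day(today, prev_day)
```

Note that the scale problem discussed above remains: the differences between adjacent price levels are tiny compared to the day-level variation used for standardization.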
In this work we propose a novel approach to remedy this problem. Instead of normalizing the raw values of the LOB depth, we modify the price values to be their percentage difference from the current mid price of the Order Book. This removes the non-stationarity from the price values, makes the feature extraction process easier and significantly improves the performance of ML models, as is also experimentally demonstrated in Section~\\ref{sec:experiments}. To compensate for the removal of the price value itself, we add an extra value to each LOB depth sample, which is the percentage change of the mid price since the previous event. \n\nThe mid-price is defined as the mid-point between the best bid and the best ask prices at time $t$:\n\\begin{equation}\np_m^{(1)} (t) = \\dfrac{p_a^{(1)}(t) + p_b^{(1)}(t)}{2} \n\\label{mid-price-def}.\n\\end{equation}\nLet \n\\begin{align}\n{p'}_a^{(i)}(t) =& \\dfrac{p_a^{(i)}(t)}{p_m(t)} - 1, \t\\label{stationary-price-a} \\\\\n{p'}_b^{(i)}(t) =& \\dfrac{p_b^{(i)}(t)}{p_m(t)} - 1,\t\\label{stationary-price-b}\n\\end{align}\nand\n\\begin{equation}\n{p'}_m(t) = \\dfrac{p_m(t)}{p_m(t-1)} - 1.\t\\label{mid-price-change-def}\n\\end{equation}\nEquations (\\ref{stationary-price-a}) and (\\ref{stationary-price-b}) serve as static features that represent the proportional difference between the $i$th price and the mid-price at time $t$. Equation (\\ref{mid-price-change-def}), on the other hand, serves as a dynamic feature that captures the proportional mid-price movement over the time period (that is, it represents the asset's return in terms of mid-prices). 
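On toy numbers, the stationary price features defined above can be computed as follows (a minimal sketch; the example prices are our own, not from the dataset):

```python
import numpy as np

# Sketch of the stationary price features above (not the paper's code):
# prices are re-expressed relative to the current mid price, and the mid
# price itself enters only through its one-step return.

def stationary_price_features(ask, bid, prev_mid):
    """ask/bid: arrays of the k best ask/bid prices, best level first."""
    ask = np.asarray(ask, dtype=float)
    bid = np.asarray(bid, dtype=float)
    mid = (ask[0] + bid[0]) / 2.0      # mid price: midpoint of best quotes
    p_ask = ask / mid - 1.0            # static ask-side features
    p_bid = bid / mid - 1.0            # static bid-side features
    mid_return = mid / prev_mid - 1.0  # dynamic feature: mid-price return
    return p_ask, p_bid, mid, mid_return

p_ask, p_bid, mid, r = stationary_price_features(
    ask=[100.10, 100.20], bid=[100.00, 99.90], prev_mid=100.00)
# mid = 100.05, r = 0.0005; ask features are positive, bid features negative
```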
\n\nWe also use the cumulative sum of the sizes of the price levels as a feature, also known as the Total Depth:\n\\begin{align}\n\\upnu'^{(k)}_a(t) =& \\sum_{i=1}^k{\\upnu_a^{(i)}(t)} \n\\vspace{0.1cm} \t\\label{size-cumsum-a}\\\\\n\\upnu'^{(k)}_b(t) =& \\sum_{i=1}^k{\\upnu_b^{(i)}(t)} \n\\label{size-cumsum-b}\n\\end{align}\nwhere $\\upnu^{(i)}_a(t)$ is the number of outstanding limit orders at the $i$th best ask price level and $\\upnu^{(i)}_b(t)$ is the number of outstanding limit orders at the $i$th best bid price level. \n\nThe proposed stationary features are briefly summarized in Table \\ref{features-table}. After constructing these three types of stationary features, each of them is separately normalized using standardization (z-score), as described in (\\ref{zscore-eq}), and concatenated into a single feature vector $\\myvec{x}_t$, where $t$ denotes the time step.\n\nThe input used for the time-aware models, such as the CNN, LSTM and CNN-LSTM, is the sequence of vectors $\\myvec{X} = \\{\\myvec{x}_0, \\myvec{x}_1, \\dots , \\myvec{x}_w\\}$, where $w$ is the total number of events, each one represented by a different time step input. For the models that need all the input in a single vector, such as the SVM and MLP models, the matrix $\\myvec{X}$ is flattened into a single dimension so it can be used as input for these models.\n\n\n\\begin{table}[t]\n\t\\caption{Brief description of each proposed stationary feature}\n\t\\label{features-table}\n\t\\begin{center}\n\t\t\\begin{tabular}{ | c | c|}\n\t\t\t\\hline\n\t\t\t\\textbf{Feature} & \\textbf{Description} \\\\\n\t\t\t\\hline\\hline\n\t\t\tPrice level difference & \\parbox[c]{10cm}{\\vspace{0.2em}The difference of each price level to the current mid price, see Eq. 
(\\ref{stationary-price-a}),(\\ref{stationary-price-b}) \n\t\t\t\t\\[{p'}^{(i)}(t) = \\dfrac{p^{(i)}(t)}{p_m(t)} - 1 \\]\n\t\t\t} \\\\ \n\t\t\t\\hline\n\t\t\tMid price change & \\parbox[c]{10cm}{\\vspace{0.2em} The change of the current mid price to the mid price of the previous time step, see Eq. (\\ref{mid-price-change-def}) \\\\\n\t\t\t\t\\[\n\t\t\t\t{p'}_m(t) = \\dfrac{p_m(t)}{p_m(t-1)} - 1\n\t\t\t\t\\]\n\t\t\t} \\\\\n\t\t\t\\hline\n\t\t\tDepth size cumsum & \\parbox[c]{10cm}{ \\vspace{0.2em} Total depth at each price level, see Eq. (\\ref{size-cumsum-a}), (\\ref{size-cumsum-b}) \n\t\t\t\t\\[\n\t\t\t\t\\upnu'^{(k)}(t) = \\sum_{i=1}^k{\\upnu^{(i)}(t)} \n\t\t\t\t\\]\n\t\t\t} \\\\ \n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\\subsection{Labels}\n\\label{sec:labels}\nThe proposed models aim to predict the future movements of the mid price. Therefore, the ground truth labels must be appropriately generated to reflect these future movements. Note that the mid price is a ``virtual'' value and no order placed at that exact price can be guaranteed to be immediately executed. However, being able to predict its upward or downward movement provides a good estimate of the price of future orders. A set of discrete choices must be constructed from our data to use as targets for our classification models. The labels describing the movement are denoted by $l_t \\in \\{-1, 0, 1\\}$, where $t$ denotes the time step. \n\n\nSimply using $p_m(t + k) > p_m(t)$ to determine the upward direction of the mid price would introduce an unmanageable amount of noise, since even the smallest change would be registered as an upward or downward movement. To remedy this, in our previous work \\cite{tsantekidis2017forecasting, tsantekidis2017using} the noisy changes of the mid price were filtered by employing two averaging filters. 
One averaging filter was used on a window of size $k$ of the past values of the mid price and another averaging was applied on a future window of size $k$:\n\\begin{align}\nm_b(t) =& \\dfrac{1}{k+1} \\sum_{i=0}^k p_m(t-i) \\label{m-b} \\\\\nm_a(t) =& \\dfrac{1}{k} \\sum_{i=1}^k p_m(t+i) \\label{m-a}\n\\end{align}\nwhere $p_m(t)$ is the mid price as described in Equation~(\\ref{mid-price-def}).\nThe label $l_t$, that expresses the direction of price movement at time $t$, is extracted by comparing the previously defined quantities ($m_b$ and $m_a$). However, using the $m_b$ values to create labels for the samples, as in \\cite{tsantekidis2017forecasting, tsantekidis2017using}, makes the problem significantly easier and more predictable, due to the slower adaptation of the mean filter values to sudden changes in price. Therefore, in this work we remedy this issue by replacing $m_b$ with the mid price, redefining the labels as:\n\\begin{equation}\nl_t =\n\\begin{cases}\n\\ \\ 1, & \\text{if } \\dfrac{m_a(t)}{p_m(t)} > 1 + \\alpha\n\\vspace{0.2cm}\\\\\n-1, & \\text{if } \\dfrac{m_a(t)}{p_m(t)} < 1 - \\alpha\n\\vspace{0.2cm}\\\\\n\\ \\ 0, & \\text{otherwise}\n\\end{cases}\n\\label{direction-eq}\n\\end{equation}\n\nwhere $\\alpha$ is the threshold that determines how significant a change of the mean future mid price $m_a(t)$ must be in order to label the movement as upward or downward. Samples that do not satisfy either inequality are considered insignificant and are labeled as having no price movement, or in other words as being ``stationary''. The resulting labels represent the trend to be predicted. This process is applied across all time steps of the dataset to produce labels for all the depth samples.\n\n\n\\section{Machine Learning Models}\nIn this section we explain the particular inner workings of the CNN and LSTM models that are used and present how they are combined to form the proposed CNN-LSTM model. 
The technical details of each model are explained along with the employed optimization procedure.\n\t\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.4]{CNN}\n\t\\caption{A visual representation of the evaluated CNN model. Each layer includes the filter input size and the number of filters used.}\n\t\\label{fig:cnn-model}\n\\end{figure}\n\n\\subsection{Convolutional Neural Networks}\n\\label{sec:conv-nets}\n\nConvolutional Neural Networks (CNNs) consist of the sequential application of convolutional and pooling layers, usually followed by some fully connected layers, as shown in Figure~\\ref{fig:cnn-model}. Each convolutional layer $i$ is equipped with a set of filters $\\mathbf{W}_i \\in \\mathbb{R} ^{S \\times D \\times N}$ that is convolved with an input tensor, where $S$ is the number of used filters, $D$ is the {filter size}, and $N$ is the number of input channels. The input tensor $\\mathbf{X} \\in \\mathbb{R}^{B \\times T \\times F}$ consists of the temporally ordered features described in Section \\ref{sec:input-normalization}, where $B$ is the batch size, $T$ is the number of time steps and $F$ is the number of features per time step.\n\nIn this work we leverage the causal padding introduced in \\cite{van2016wavenet} to avoid using future information to produce features for the current time step. Using a series of convolutional layers allows for capturing the fine temporal dynamics of the time series as well as correlating temporally distant features. After the last convolutional/pooling layer, a set of fully connected layers is used to classify the input time series. 
The network's output expresses the categorical distribution for the three direction labels (upward, downward and stationary), as described in (\\ref{direction-eq}), for each time step.\n\nWe also employ a temporal batching technique, similar to the one used in LSTMs, to increase the computational efficiency and reduce the memory requirements of our experiments when training with CNNs. Given the above described input tensor $\\myvec{X}$ and convolution filters $\\myvec{W}_i$, the last convolution produces a tensor with dimensions $ (B,T,S,N) $, which in most use cases is flattened to a tensor of size $(B, T \\times S \\times N)$ before being fed to a fully connected layer. Instead, we retain the temporal ordering by only reducing the tensor to dimension $(B, T, S \\times N) $. An identical fully connected network with a softmax output is applied to each of the $T$ vectors of size $S \\times N$, leading to $T$ different predictions. \n\nSince we are using causal convolutions with ``full'' padding, all the convolutional layers preserve the $T$ time steps, hence we do not need to worry about aligning labels to the correct time step. The causal convolutions also ensure that no information from the future leaks to past time step filters. This technique reduces the receptive field of the employed CNN, but this can be easily remedied by using a greater number of convolutional layers and/or a larger filter size $D$.\n\n\n\\subsection{Long Short Term Memory Recurrent Neural Networks}\n\nOne of the most appropriate Neural Network architectures to apply to time series is the Recurrent Neural Network (RNN) architecture. Although powerful in theory, this type of network suffers from the vanishing gradient problem, which makes gradient propagation through a large number of steps impossible. An architecture that was introduced to solve this problem is the Long Short Term Memory (LSTM) network~\\cite{hochreiter1997long}. 
This architecture protects its hidden activation from the decay of unrelated inputs and gradients by using gated functions between its ``transaction'' points. The protected hidden activation is the ``cell state'', which is regulated by said gates in the following manner:\n\n\\begin{align}\n\\myvec{f}_t &= \\sigma(\\myvec{W}_{xf} \\cdot \\myvec{x} + \\myvec{W}_{hf} \\cdot \\myvec{h}_{t-1} + \\myvec{b}_f) \\\\\n\\myvec{i}_t &= \\sigma(\\myvec{W}_{xi} \\cdot \\myvec{x} + \\myvec{W}_{hi} \\cdot \\myvec{h}_{t-1} + \\myvec{b}_i) \\\\\n\\myvec{c}'_t &= \\tanh(\\myvec{W}_{hc} \\cdot \\myvec{h}_{t-1} + \\myvec{W}_{xc} \\cdot \\myvec{x}_t + \\myvec{b}_c) \\\\\n\\myvec{c}_t &= \\myvec{f}_t \\cdot \\myvec{c}_{t-1} + \\myvec{i}_t \\cdot \\myvec{c}'_t \\\\\n\\myvec{o}_t &= \\sigma(\\myvec{W}_{oc} \\cdot \\myvec{c}_t + \\myvec{W}_{oh} \\cdot \\myvec{h}_{t-1} + \\myvec{b}_o) \\\\\n\\myvec{h}_t &= \\myvec{o}_t \\cdot \\sigma(\\myvec{c}_t) \n\\end{align}\nwhere $\\myvec{f}_t$, $\\myvec{i}_t$ and $\\myvec{o}_t$ are the activations of the forget, input and output gates at time-step $t$, which control how much of the input and the previous state will be considered and how much of the cell state will be included in the hidden activation of the network. The protected cell activation at time-step $t$ is denoted by $\\myvec{c}_t$, whereas $\\myvec{h}_t$ is the activation that will be given to other components of the model. The matrices $\\myvec{W}_{xf}, \\myvec{W}_{hf}, \\myvec{W}_{xi}, \\myvec{W}_{hi}, \\myvec{W}_{hc}, \\myvec{W}_{xc}, \\myvec{W}_{oc}, \\myvec{W}_{oh}$ denote the weights connecting each of the activations with the current time step inputs and the previous time step activations.\n\n\\subsection{Combination of models (CNN-LSTM)}\n\nWe also introduce a powerful combination of the two previously described models. 
The CNN model is applied identically as described in Section \\ref{sec:conv-nets}, using causal convolutions and temporal batching to produce a set of features for each time step. In essence, the CNN acts as the feature extractor of the LOB depth time series, producing a new time series of features with the same length as the original one, with their time steps corresponding to one another.\n\nAn LSTM layer is then applied on the time series produced by the CNN, and in turn produces a label for each time step. This works in a very similar way to the fully connected layer described in Section \\ref{sec:conv-nets} for temporal batching, but unlike the fully connected layer, the LSTM allows the model to incorporate the features from past steps. The model architecture is visualized in Figure~\\ref{fig:cnnlstm}.\n\n\n\\subsection{Optimization}\n\\label{sec:optimization}\nThe parameters of the models are learned by minimizing the categorical cross entropy loss defined as: \n\\begin{equation}\n\\mathcal{L}(\\myvec{W}) = -\\sum_{i=1}^{L} y_i \\cdot \\log \\hat{y}_i,\n\\end{equation}\nwhere $L$ is the number of different labels and the notation $\\myvec{W}$ is used to refer to the parameters of the models. The ground truth vector is denoted by $\\mathbf{y}$, while $\\hat{\\mathbf{y}}$ is the predicted label distribution. The loss is summed over all samples in each batch. Due to the unavoidable class imbalance of this type of dataset, a weighted loss is employed to improve the mean recall and precision across all classes:\n\\begin{equation}\n\\label{eq:loss}\n\\mathcal{L}(\\myvec{W}) = -\\sum_{i=1}^{L} c_{y_i} \\cdot y_i \\cdot \\log \\hat{y}_i,\n\\end{equation}\nwhere $c_{y_i}$ is the assigned weight for the class of $y_i$. 
The individual weight $c_i$ assigned to each class $i$ is calculated as:\n\\begin{equation}\nc_i = \\dfrac{|\\mathcal{D}|}{n \\cdot |\\mathcal{D}_i|},\n\\end{equation}\nwhere $ |\\mathcal{D}| $ is the total number of samples in our dataset $\\mathcal{D}$, $n$ is the total number of classes (which in our case is 3) and $\\mathcal{D}_i$ is the set of samples in our dataset labeled as belonging to class $i$.\n\n\nThe most commonly used method to minimize the loss function defined in (\\ref{eq:loss}) and learn the parameters $\\myvec{W}$ of the model is gradient descent \\cite{werbos1990backpropagation}:\n\\begin{equation}\n\\myvec{W}' = \\myvec{W} - \\eta \\cdot \\dfrac{\\partial \\mathcal{L}}{\\partial \\myvec{W}},\n\\end{equation}\nwhere $\\myvec{W}'$ are the parameters of the model after each gradient descent step and $\\eta$ is the learning rate. In this work we utilize the RMSProp optimizer \\cite{tieleman2012lecture}, an adaptive learning rate method that has been shown to improve the training time and performance of DL models. \n\n\n\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.3]{CNNLSTM}\n\t\\caption{CNN-LSTM model}\n\t\\label{fig:cnnlstm}\n\\end{figure}\n\nThe LSTM, CNN and CNN-LSTM models, along with all the training algorithms, were developed using Keras \\cite{chollet2015keras}, a framework built on top of the Tensorflow library \\cite{tensorflow2015-whitepaper}.\n\n\\section{Experimental Evaluation}\n\\label{sec:experiments}\n\nAll the models were tested for step sizes $k = 10, 50, 100,$ and $200$ in (\\ref{m-a}), where the $\\alpha$ value for each was set at $2 \\times 10^{-5},\\ 9 \\times 10^{-5},\\ 3 \\times 10^{-4}$ and $ 3.5 \\times 10^{-4} $ respectively. The parameter $\\alpha$ was chosen in conjunction with the future horizon with the aim of having a relatively balanced distribution of labels across classes. 
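This labeling procedure can be sketched in a few lines. Eqs. (\ref{m-a}) and (\ref{direction-eq}) are defined earlier in the paper and are not reproduced in this excerpt; the sketch below therefore assumes a smoothed-label scheme of that general form, in which the mean of the next $k$ mid-prices is compared against the current mid-price with threshold $\alpha$ (function and variable names are illustrative):

```python
import numpy as np

def make_labels(mid_prices, k, alpha):
    """Assign a label to each time step: +1 (up) if the mean of the next k
    mid-prices exceeds the current mid-price by a fraction alpha, -1 (down)
    if it falls below it by alpha, 0 (stationary) otherwise."""
    p = np.asarray(mid_prices, dtype=float)
    n = len(p) - k                      # last k steps have no full future window
    labels = np.zeros(n, dtype=int)
    for t in range(n):
        m = p[t + 1 : t + k + 1].mean() # smoothed future mid-price
        if m > p[t] * (1 + alpha):
            labels[t] = 1
        elif m < p[t] * (1 - alpha):
            labels[t] = -1
    return labels
```

Sweeping `alpha` over a grid and tabulating the label frequencies would produce class distributions of the kind reported in Table \ref{alpha-table}.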
In a real trading scenario it is not possible to have a profitable strategy that creates as many trade signals as ``no-trade'' signals, because it would accumulate enormous commission costs. For that reason $\\alpha$ is selected with the aim of getting a reasonable ratio of about 20\\% long, 20\\% short and 60\\% stationary labels. The effect of varying the parameter $\\alpha$ on the class distribution of labels is shown in Table \\ref{alpha-table}. Note that increasing $\\alpha$ reduces the number of trade signals; it should be tuned according to the actual commission and slippage costs that are expected to occur.\n\n\\begin{table}\n\t\\caption{Example of sample distribution across classes depending on $\\alpha$ for prediction horizon $k =100$}\n\t\\label{alpha-table}\n\t\\begin{center}\n\t\n\t\t\\begin{tabular}{ | c |c| c| c|}\n\t\t\t\\hline\n\t\t\t\\hspace{2em}$\\alpha$\\hspace{2em} & \\hspace{1em}Down\\hspace{1em} & Stationary & \\hspace{1.5em}Up\\hspace{1.5em} \\\\\n\t\t\t\\hline\\hline\n\t\t\t$1.0 \\times 10^{-5}$ & $0.39$&$0.17$&$0.45$ \\\\ \\hline\n\t\t\t$2.0 \\times 10^{-5}$ & $0.38$&$0.19$&$0.43$ \\\\ \\hline\n\t\t\t$5.0 \\times 10^{-5}$ & $0.35$&$0.25$&$0.41$ \\\\ \\hline\n\t\t\t$1.0 \\times 10^{-4}$ & $0.30$&$0.33$&$0.36$ \\\\ \\hline\n\t\t\t$2.0 \\times 10^{-4}$ & $0.23$&$0.49$&$0.28$ \\\\ \\hline\n\t\t\t$3.0 \\times 10^{-4}$ & $0.18$&$0.60$&$0.22$ \\\\ \\hline\n\t\t\t$3.5 \\times 10^{-4}$ & $0.15$&$0.66$&$0.19$ \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\nWe tested the CNN and LSTM models using the raw features and the proposed stationary features separately and compared the results. The architecture of the three models that were tested is described below. 
\n\n\nThe proposed CNN model consists of the following sequential layers:\n\n\\begin{center}\n\t\\begin{minipage}{0.6\\textwidth}\n\t\t\\begin{enumerate}\n\t\t\t\\item 1D Convolution with 16 filters of size $(10,42)$\n\t\t\t\\item 1D Convolution with 16 filters of size $(10,)$\n\t\t\t\\item 1D Convolution with 32 filters of size $(8,)$\n\t\t\t\\item 1D Convolution with 32 filters of size $(6,)$\n\t\t\t\\item 1D Convolution with 32 filters of size $(4,)$\n\t\t\t\\item Fully connected layer with 32 neurons\n\t\t\t\\item Fully connected layer with 3 neurons \n\t\t\\end{enumerate}\n\t\\end{minipage}\n\\end{center}\nThe activation function used for all the convolutional and fully connected layers of the CNN is the Parametric Rectified Linear Unit (PRELU) \\cite{he2015delving}. The last layer uses the softmax function for the prediction of the probability distribution between the different classes. All the convolutional layers are followed by a Batch Normalization (BN) layer.\n\nThe LSTM network uses 32 hidden neurons followed by a feed-forward layer with 64 neurons using Dropout and PRELU as the activation function. We found experimentally that the hidden layer of the LSTM should contain 64 or fewer hidden neurons to avoid over-fitting the model. Experimenting with a higher number of hidden neurons would be feasible if the dataset were even larger. \n\nFinally, the CNN-LSTM model applies the convolutional feature extraction layers on the input and then feeds them in the correct temporal order to an LSTM model. 
The CNN component comprises the following layers:\n\\begin{center}\n\t\\begin{minipage}{0.6\\textwidth}\n\t\t\\begin{enumerate}\n\t\t\t\\item 1D Convolution with 16 filters of size $(5,42)$ \n\t\t\t\\item 1D Convolution with 16 filters of size $(5,)$ \n\t\t\t\\item 1D Convolution with 32 filters of size $(5,)$ \n\t\t\t\\item 1D Convolution with 32 filters of size $(5,)$ \n\t\t\\end{enumerate}\n\t\\end{minipage}\n\\end{center}\n\nNote that the receptive field of each convolutional filter in the CNN module is smaller than in the standalone CNN, since the LSTM can capture most of the information from past time steps. The LSTM module has exactly the same architecture as the standalone LSTM. A visual representation of this CNN-LSTM model is shown in Figure~\\ref{fig:cnnlstm}. Likewise, PRELU is the activation function used for the CNN and the fully connected layers, while the softmax function is used for the output layer of the network to predict the probability distribution of the classes.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.70]{lstm_cost_per_step}\n\t\\caption{Mean cost per recurrent step of the LSTM network}\n\t\\label{bad-step-score}\n\\end{figure}\n\n\n\n\n\\begin{table*}\n\t\\caption{Experimental results for different prediction horizons $k$. 
The values that are reported are the mean of each metric for the last 20 training epochs.}\n\t\\label{results-table}\n\n\t\t\t\n\n\t\\begin{center}\n\\footnotesize\n\\bgroup\n\\def\\arraystretch{0.8}\n\t\t\\begin{tabular}{ |c|c|c|c|c|c|}\n\t\t\t\\hline\n\n\t\t\t\\multirow{1}{*}{\\textbf{Feature Type}} & \n\t\t\t\\multicolumn{1}{c|}{\\textbf{Model}} &\n\t\t\t\\multicolumn{1}{c|}{\\textbf{Mean Recall}} &\n\t\t\t\\multicolumn{1}{c|}{\\textbf{Mean Precision}} &\n\t\t\t\\multicolumn{1}{c|}{\\textbf{Mean F1}} & \\multicolumn{1}{c|}{\\textbf{Cohen's} $\\kappa$} \\\\ \\cline{1-6} \n\t\t\t\n\t\t\t\\multicolumn{6}{|c|}{\\multirow{2}{*}{Prediction Horizon $k=10$}} \\\\ \n\t\t\t\\multicolumn{6}{|c|}{} \\\\ \\cline{1-6}\n\t\t\t\\multirow{4}{*}{\\textbf{Raw Values}} \n\t\t\t& SVM & $0.35 $ & $0.43 $ & $0.33 $ & $0.04 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.34 }$ & ${ 0.34 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.51 }$ & ${ 0.42 }$ & ${ 0.38 }$ & ${ 0.14 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.49 }$ & ${ 0.41 }$ & ${ 0.35 }$ & ${ 0.12 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\t\\multirow{5}{*}{\\textbf{Stationary Features}} \n\t\t\t& SVM & $0.33 $ & $\\mathbf{0.46 }$ & $0.30 $ & $0.011 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.34 }$ & ${ 0.35 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.54 }$ & ${ 0.44 }$ & ${ 0.43 }$ & ${ 0.19 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.55 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.18 }$ \\\\ \\cline{2-6}\n\t\t\t& CNNLSTM & $\\mathbf{ 0.56 }$ & ${ 0.45 }$ & $\\mathbf{ 0.44 }$ & $\\mathbf{ 0.21 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\\multicolumn{6}{|c|}{\\multirow{2}{*}{Prediction Horizon $k=50$}} \\\\ \n\t\t\t\\multicolumn{6}{|c|}{} \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\\multirow{4}{*}{\\textbf{Raw Values}} \n\t\t\t& SVM & $0.35 $ & $0.41 $ & $0.32 $ & $0.03 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.41 }$ & ${ 0.38 }$ & ${ 0.21 }$ & ${ 0.04 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.50 }$ & ${ 0.42 }$ & ${ 0.37 
}$ & ${ 0.13 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.46 }$ & ${ 0.40 }$ & ${ 0.34 }$ & ${ 0.10 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\t\\multirow{5}{*}{\\textbf{Stationary Features}}\n\t\t\t& SVM & $0.39 $ & $0.41 $ & $0.38 $ & $0.09 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & $0.49 $ & $0.43 $ & $0.38 $ & $0.14 $ \\\\ \\cline{2-6}\n\t\t\t& CNN & $0.55 $ & $0.45 $ & $0.43 $ & $0.20 $ \\\\ \\cline{2-6}\n\t\t\t&LSTM & $\\mathbf{0.56 } $ & $0.46 $ & $0.44 $ & $0.21 $ \\\\ \\cline{2-6}\n\t\t\t& CNNLSTM & $\\mathbf{0.56 }$ & $\\mathbf{0.47 }$ & $\\mathbf{0.47 }$ & $\\mathbf{0.24 } $ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\\multicolumn{6}{|c|}{\\multirow{2}{*}{Prediction Horizon $k=100$}} \\\\ \n\t\t\t\\multicolumn{6}{|c|}{} \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\\multirow{4}{*}{\\textbf{Raw Values}} \n\t\t\t& SVM & $0.35 $ & $0.46 $ & $0.33 $ & $0.05 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.26 }$ & ${ 0.06 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.49 }$ & ${ 0.42 }$ & ${ 0.37 }$ & ${ 0.12 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.34 }$ & ${ 0.09 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\\multirow{5}{*}{\\textbf{Stationary Features}}\n\t\t\t& SVM & $0.36 $ & $0.46 $ & $0.35 $ & $0.07 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.50 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.54 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.21 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & $\\mathbf{ 0.56 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.20 }$ \\\\ \\cline{2-6}\n\t\t\t& CNNLSTM & ${ 0.55 }$ & $\\mathbf{ 0.47 }$ & $\\mathbf{ 0.48 }$ & $\\mathbf{ 0.24 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\t\\multicolumn{6}{|c|}{\\multirow{2}{*}{Prediction Horizon $k=200$}} \\\\ \n\t\t\t\\multicolumn{6}{|c|}{} \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\\multirow{4}{*}{\\textbf{Raw Values}} \n\t\t\t& SVM & $0.35 $ & $0.44 $ & $0.31 $ & $0.04 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.44 }$ & ${ 0.40 }$ & ${ 0.32 }$ & ${ 0.08 }$ \\\\ \\cline{2-6}\n\t\t\t& 
CNN & ${ 0.47 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.42 }$ & ${ 0.39 }$ & ${ 0.36 }$ & ${ 0.08 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\\multirow{5}{*}{\\textbf{Stationary Features}}\n\t\t\t& SVM & $0.38 $ & $0.46 $ & $0.36 $ & $0.10 $ \\\\ \\cline{2-6}\n\t\t\t& MLP & ${ 0.49 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.17 }$ \\\\ \\cline{2-6}\n\t\t\t& CNN & ${ 0.51 }$ & ${ 0.47 }$ & ${ 0.45 }$ & ${ 0.20 }$ \\\\ \\cline{2-6}\n\t\t\t& LSTM & ${ 0.52 }$ & ${ 0.47 }$ & ${ 0.46 }$ & ${ 0.22 }$ \\\\ \\cline{2-6}\n\t\t\t& CNNLSTM & $\\mathbf{ 0.53 }$ & $\\mathbf{ 0.48 }$ & $\\mathbf{ 0.49 }$ & $\\mathbf{ 0.25 }$ \\\\ \\cline{1-6}\n\t\t\t\n\t\t\t\n\t\t\\end{tabular}\n\\egroup\n\t\n\t\\end{center}\n\\end{table*}\n\n\nOne recurring effect we observe when training LSTM networks on LOB data is that for the first few time steps of an input window the predictions yield a higher cross-entropy cost, meaning worse performance on our metrics. We ran a set of experiments where the LSTM was trained on all the steps of the input windows $T$. The resulting mean cost per time step can be observed in Figure \\ref{bad-step-score}. Evidently, trying to predict the price movement from insufficient past information is not feasible and should be avoided, since it leads to noisy gradients. To avoid this, an initial ``burn-in'' segment of the input is used to let the network build its perception of the market before its predictions are taken into account. In essence, the first ``burn-in'' steps of the input are skipped by not allowing any gradient to alter the model until after the 100th time step. We also apply the same method to the CNN-LSTM model.\n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1.02\\linewidth]{all_training}\n\t\\caption{F1 and Cohen's $\\kappa$ metrics during training for prediction horizon $k=100$. 
Plots are smoothed with a mean filter with window=3 to reduce fluctuations.\n\t}\n\t\\label{fig:f1-kappa-training}\n\\end{figure*}\n\nFor training the models, the dataset is split as follows. The first 7 days of each stock are used to train the models, while the final 3 days are used as test data. The experiments were conducted for 4 different prediction horizons $k$, as defined in (\\ref{m-a}) and (\\ref{direction-eq}).\n\nPerformance is measured using Cohen's kappa \\cite{cohen1960coefficient}, which evaluates the concordance between sets of given answers, taking into consideration the possibility of random agreements happening. The mean recall, mean precision and mean F1 score across all 3 classes are also reported. Recall is the number of true positive samples divided by the sum of true positives and false negatives, while precision is the number of true positives divided by the sum of true positives and false positives. The F1 score is the harmonic mean of precision and recall.\n\n\nThe results of the experiments are shown in Table \\ref{results-table}. We compare the models trained on the raw price features with the ones trained using the extracted stationary features. The results confirm that extracting stationary features from the data significantly improves the performance of Deep Learning models such as CNNs and LSTMs.\n\nWe also trained a Linear SVM model and a simple MLP model and compared them to the DL models. The SVM model was trained using Stochastic Gradient Descent, since the size of the dataset is too large to use a regular Quadratic Programming solver. The SVM model implementation is provided by the sklearn library \\cite{pedregosa2011scikit}. The MLP model consists of three fully connected layers with sizes 128, 64, 32, and PRELU as the activation for each layer. 
Dropout is also used to avoid overfitting and the softmax activation function was used in the last layer.\n\nSince both the SVM and the MLP models cannot iterate over time steps to gain the same amount of information as the CNN and LSTM-based models, a window of 50 depth events is flattened into a single sample. This process is applied in a rolling fashion over the whole dataset to generate samples on which the two models can be trained. One important observation is the training fluctuations seen in Figure \\ref{fig:f1-kappa-training}, which are caused by the great class imbalance. Similar issues were observed in initial experiments with the CNN and LSTM models, but with the weighted loss described in Section~\\ref{sec:optimization} the fluctuations subsided.\n\n\nThe proposed stationary price features significantly outperform the raw price features for all the tested models. This can be attributed to a great extent to the stationary nature of the proposed features. The employed price differences provide an intrinsically stationary and normalized price measure that can be used directly. This is in contrast with the raw price values, which require careful normalization to ensure that their values remain within a reasonable range, and which suffer from significant non-stationarity issues when the price rises to levels not seen before. By converting the actual prices to their differences from the mid price and normalizing them, this important feature is amplified rather than suppressed by the much larger price movements through time. The proposed combination model CNN-LSTM also outperforms its individual component models, as shown in Figure \\ref{fig:f1-kappa-training} and Table \\ref{results-table}, showing that it can better exploit the microstructure existing within the LOB data to produce more accurate predictions. 
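The rolling-window flattening used above for the SVM and MLP baselines can be sketched as follows (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def flatten_windows(features, window=50):
    """Flatten a rolling window of depth events into single samples for
    models without temporal structure (SVM, MLP).

    features: array of shape (num_events, num_features).
    Returns an array of shape (num_events - window + 1, window * num_features),
    one flattened sample per window position."""
    n, d = features.shape
    out = np.empty((n - window + 1, window * d), dtype=features.dtype)
    for t in range(n - window + 1):
        out[t] = features[t : t + window].ravel()
    return out
```

Each flattened row can then be paired with the label of its final time step, so that the SVM and MLP see the same 50-event history that the recurrent models consume step by step.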
\n\n\\section{Conclusion}\n\nIn this paper we proposed a novel method for extracting stationary features from raw LOB data, suitable for use with different DL models. Using different ML models, i.e., SVMs, MLPs, CNNs and LSTMs, it was experimentally demonstrated that the proposed features significantly outperform the raw price features. The proposed stationary features achieve this by using the differences between the prices in the LOB depth as the main measure instead of the price itself, which fluctuates through time far more than the relative price levels within the LOB. A novel combined CNN-LSTM model was also proposed for time series predictions, and it was demonstrated that it exhibits more stable behaviour and leads to better results than the standalone CNN and LSTM models.\n\nThere are several interesting future research directions. As with all DL applications, more data would enable the use of bigger models without the risk of over-training that was observed in this work. An RNN-type network could also be used to perform a form of ``intelligent'' re-sampling, extracting useful features from a specific and limited time interval of depth events, which would avoid losing information and allow subsequent models to produce predictions for a certain time period rather than for a number of following events. 
Another important addition would be an attention mechanism \\cite{xu2015show}, \\cite{cho2015describing}, which would allow the network to better weigh the features, ignoring noisy parts of the data and using only the relevant information.\n\n\n\\section*{Acknowledgment}\nThe research leading to these results has received funding from the H2020 Project BigDataFinance MSCA-ITN-ETN 675044 (http://bigdatafinance.eu), Training for Big Data in Financial Research and Risk Management.\n\n\n\n\n\\bibliographystyle{elsarticle-num}\n\n", "meta": {"timestamp": "2018-10-24T02:19:17", "yymm": "1810", "arxiv_id": "1810.09965", "language": "en", "url": "https://arxiv.org/abs/1810.09965"}} {"text": "\\section{Introduction}\nGiven a data set and a model with some unknown parameters, the inverse problem aims to find the values of the model parameters that best fit the data. \nIn this work, in which we focus on systems of interacting elements,\n the inverse problem concerns the statistical inference\n of the underlying interaction network and of its coupling coefficients from observed data on the dynamics of the system. 
\n Versions of this problem are encountered in physics, biology (e.g., \\cite{Balakrishnan11,Ekeberg13,Christoph14}), social sciences and finance (e.g., \\cite{Mastromatteo12,yamanaka_15}), neuroscience (e.g., \\cite{Schneidman06,Roudi09a,tyrcha_13}), just to cite a few, and are becoming more and more important due to the increase in the amount of data available from these fields.\\\\\n \\indent \n A standard approach used in statistical inference is to predict the interaction couplings by maximizing the likelihood function.\n This technique, however, requires the evaluation of the partition function that, in the most general case, requires a number of computations scaling exponentially with the system size.\n Boltzmann machine learning uses Monte Carlo sampling to compute the gradients of the log-likelihood looking for stationary points \\cite{Murphy12}, but this method is computationally manageable only for small systems. A series of faster approximations has since been developed, including naive mean-field, the independent-pair approximation \\cite{Roudi09a, Roudi09b}, inversion of TAP equations \\cite{Kappen98,Tanaka98}, small-correlation expansions \\cite{Sessak09}, adaptive TAP \\cite{Opper01}, adaptive cluster expansion \\cite{Cocco12} and Bethe approximations \\cite{Ricci-Tersenghi12, Nguyen12}. These techniques take as input means and correlations of observed variables, and most of them assume a fully connected graph as the underlying connectivity network, or expand around it by perturbative dilution. In most cases, network reconstruction turns out to be inaccurate for small data sizes and/or when couplings are strong or if the original interaction network is sparse.\\\\\n\\indent\n A further method, substantially improving performance for small data, is the so-called Pseudo-Likelihood Method (PLM) \\cite{Ravikumar10}. In Ref. 
\\cite{Aurell12} Aurell and Ekeberg performed a comparison between PLM and some of the just mentioned mean-field-based algorithms on the pairwise interacting Ising-spin ($\\sigma = \\pm 1$) model, showing how PLM performs significantly better, especially on sparse graphs and in the high-coupling limit, i.e., at low temperature.\n \n In this work, we aim at performing statistical inference on a model whose interacting variables are continuous $XY$ spins, i.e., $\\sigma \\equiv \\left(\\cos \\phi,\\sin \\phi\\right)$ with $\\phi \\in [0, 2\\pi )$. The developed tools can also be straightforwardly applied to the $p$-clock model \\cite{Potts52}, where the phase $\\phi$ takes $p$ discrete equispaced values in the $2 \\pi$ interval, $\\phi_a = a 2 \\pi/p$, with $a= 0,1,\\dots,p-1$. The $p$-clock model, also called the vector Potts model, gives a hierarchy of discretizations of the $XY$ model as $p$ increases. For $p=2$, one recovers the Ising model, for $p=4$ the Ashkin-Teller model \\cite{Ashkin43}, for $p=6$ the ice-type model \\cite{Pauling35,Baxter82} and for $p=8$ the eight-vertex model \\cite{Sutherland70,Fan70,Baxter71}. \nIt turns out to be very useful also for numerical implementations of the continuous $XY$ model. \nRecent analysis of the multi-body $XY$ model has shown that for a limited number of discrete phase values ($p\\sim 16, 32$) the thermodynamic critical properties of the $p\\to\\infty$ $XY$ limit are promptly recovered \\cite{Marruzzo15, Marruzzo16}. \nOur main motivation to study statistical inference is that these kinds of models have recently turned out to be rather useful in describing the behavior of optical systems, \nincluding standard mode-locking lasers \\cite{Gordon02,Gat04,Angelani07,Marruzzo15} and random lasers \\cite{Angelani06a,Leuzzi09a,Antenucci15a,Antenucci15b,Marruzzo16}. \nIn particular, the inverse problem on the pairwise XY model analyzed here might be of help in recovering images from light propagated through random media. 
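The $p$-clock discretization just described is simple to state in code; a minimal sketch (function names are illustrative):

```python
import numpy as np

def clock_phases(p):
    """Equispaced phases of the p-clock (vector Potts) model:
    phi_a = 2*pi*a/p, for a = 0, ..., p-1."""
    return 2 * np.pi * np.arange(p) / p

def clock_spins(p):
    """XY-like spins sigma = (cos phi, sin phi) on the p discrete phases."""
    phi = clock_phases(p)
    return np.stack([np.cos(phi), np.sin(phi)], axis=1)
```

For `p = 2` the spins reduce to the Ising values $\pm 1$ on the first component, and as `p` grows the phases fill the circle, approaching the continuous $XY$ model.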
\n\n\n This paper is organized as follows: in Sec. \\ref{sec:model} we introduce the general model and we discuss its derivation also as a model for light transmission through random scattering media. \n In Sec. \\ref{sec:plm} we introduce the PLM with $l_2$ regularization and with decimation, two variants of the PLM introduced, respectively, in Refs. \\cite{Wainwright06} and \\cite{Aurell12} for the inverse Ising problem. \n Here, we analyze these techniques for continuous $XY$ spins and we test them on thermalized data generated by Exchange Monte Carlo numerical simulations of the original model dynamics. In Sec. \\ref{sec:res_reg} we present the results related to the PLM-$l_2$. In Sec. \\ref{sec:res_dec} the results related to the PLM with decimation are reported and its performances are compared to the PLM-$l_2$ and to a variational mean-field method analyzed in Ref. \\cite{Tyagi15}. In Sec. \\ref{sec:conc}, we outline concluding remarks and perspectives.\n\n \\section{The leading $XY$ model}\n \\label{sec:model}\n The leading model we are considering is defined, for a system of $N$ angular $XY$ variables, by the Hamiltonian \n \\begin{equation}\n \\mathcal{H} = - \\sum_{ik}^{1,N} J_{ik} \\cos{\\left(\\phi_i-\\phi_k\\right)} \n \\label{eq:HXY}\n \n \\end{equation} \n \n The $XY$ model is well known in statistical mechanics, displaying important physical\n insights, starting from the Berezinskii-Kosterlitz-Thouless\n transition in two dimensions~\\cite{Berezinskii70,Berezinskii71,Kosterlitz72} and moving to, e.g., the\n transition of liquid helium to its superfluid state \\cite{Brezin82} and the roughening transition of the interface of a crystal in equilibrium with its vapor \\cite{Cardy96}. 
In the presence of disorder and frustration \\cite{Villain77,Fradkin78} the model has been adopted to describe synchronization problems such as the Kuramoto model \\cite{Kuramoto75} and in the theoretical modeling of Josephson junction arrays \\cite{Teitel83a,Teitel83b} and arrays of coupled lasers \\cite{Nixon13}.\n Besides several derivations and implementations of the model in quantum and classical physics, equilibrium or out of equilibrium, ordered or fully frustrated systems, Eq. (\\ref{eq:HXY}), in its generic form,\n has found applications also in other fields. A rather fascinating example is the behavior of starling flocks \\cite{Reynolds87,Deneubourg89,Huth90,Vicsek95, Cavagna13}.\n Our interest in the $XY$ model lies, though, in optics. Phasor and phase models with pairwise and multi-body interaction terms can, indeed, describe the behavior of electromagnetic modes in both linear and nonlinear optical systems in the analysis of problems such as light propagation and lasing \\cite{Gordon02, Antenucci15c, Antenucci15d}. As couplings are strongly frustrated, these models turn out to be especially useful for the study of optical properties in random media \\cite{Antenucci15a,Antenucci15b}, as in the noticeable case of random lasers \\cite{Wiersma08,Andreasen11,Antenucci15e}, and they might as well be applied to linear scattering problems, e.g., propagation of waves in opaque systems or disordered fibers. \n \n \n \\subsection{A propagating wave model}\n We briefly mention a derivation of the model as a proxy for the propagation of light through random linear media. \n Scattering of light is held responsible for obstructing our view and making objects opaque. Light rays, once they enter the material, exit only after getting scattered multiple times within the material. In such a disordered medium, both the direction and the phase of the propagating waves are random. 
Transmitted light \n yields a disordered interference pattern, typically having low intensity, random phase and almost no resolution, called a speckle. Nevertheless, in recent years it has been realized that disorder is rather a blessing in disguise \\cite{Vellekoop07,Vellekoop08a,Vellekoop08b}. Several experiments have made it possible to control the behavior of light and other optical processes in a given random disordered medium, \n by exploiting, e.g., the tools developed for wavefront shaping to control the propagation of light and to engineer the confinement of light \\cite{Yilmaz13,Riboli14}.\n \\\\\n \\indent\n In a linear dielectric medium, light propagation can be described through a part of the scattering matrix, the transmission matrix $\\mathbb{T}$, linking the outgoing to the incoming fields. \n Consider the case in which there are $N_I$ incoming channels and $N_O$ outgoing ones; we can indicate with $E^{\\rm in,out}_k$ the input/output electromagnetic field phasors of channel $k$. In the most general case, i.e., without making any particular assumptions on the field polarizations, each light mode and its polarization state can be represented by means of the $4$-dimensional Stokes vector. Each $ t_{ki}$ element of $\\mathbb{T}$, thus, is a $4 \\times 4$ Mueller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \\cite{Goodman85,Popoff10a,Akbulut11}:\n \\begin{eqnarray}\n E^{\\rm out}_k = \\sum_{i=1}^{N_I} t_{ki} E^{\\rm in}_i \\qquad \\forall~ k=1,\\ldots,N_O\n \\label{eq:transm}\n \\end{eqnarray}\n We recall that the elements of the transmission matrix are random complex coefficients \\cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. 
\\eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\\\ \nIn the following, for simplicity, we will consider Eq. (\\ref{eq:transm}) as our starting point,\nwhere $E^{\\rm out}_k$, $E^{\\rm in}_i$ and $t_{ki}$ are all complex scalars. \nIf Eq. \\eqref{eq:transm} holds for any $k$, we can write:\n \\begin{eqnarray}\n \\int \\prod_{k=1}^{N_O} dE^{\\rm out}_k \\prod_{k=1}^{N_O}\\delta\\left(E^{\\rm out}_k - \\sum_{j=1}^{N_I} t_{kj} E^{\\rm in}_j \\right) = 1\n \\nonumber\n \\\\\n \\label{eq:deltas}\n \\end{eqnarray}\n\n Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way, \n rather than looking at the precise solutions of the exact equations (whose parameters are unknown). \n To this aim we can introduce Gaussian distributions whose zero-variance limit yields the Dirac deltas in Eq. (\\ref{eq:deltas}).\n Moreover, we consider the ensemble of all possible solutions of Eq. (\\ref{eq:transm}) at given $\\mathbb{T}$, looking at all configurations of input fields. We thus define the function:\n \n \\begin{eqnarray}\n Z &\\equiv &\\int_{{\\cal S}_{\\rm in}} \\prod_{j=1}^{N_I} dE^{\\rm in}_j \\int_{{\\cal S}_{\\rm out}}\\prod_{k=1}^{N_O} dE^{\\rm out}_k \n \\label{def:Z}\n\\\\\n \\times\n &&\\prod_{k=1}^{N_O}\n \\frac{1}{\\sqrt{2\\pi \\Delta^2}} \\exp\\left\\{-\\frac{1}{2 \\Delta^2}\\left|\n E^{\\rm out}_k -\\sum_{j=1}^{N_I} t_{kj} E^{\\rm in}_j\\right|^2\n\\right\\} \n\\nonumber\n \\end{eqnarray}\n We stress that the integral of Eq. \\eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \\eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account. 
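The exponent in Eq. (\ref{def:Z}) can be read as a cost that vanishes exactly on solutions of Eq. (\ref{eq:transm}); a minimal numerical sketch (names are illustrative):

```python
import numpy as np

def transmission_cost(E_out, E_in, T):
    """Sum over output channels of |E_out_k - sum_j t_kj E_in_j|^2, i.e. the
    exponent of Eq. (def:Z) up to the -1/(2 Delta^2) factor.  It is zero
    exactly when the transmission equations hold."""
    residual = E_out - T @ E_in
    return float(np.sum(np.abs(residual) ** 2))
```

Exponentiating `-cost / (2 * delta**2)` gives the (unnormalized) Gaussian weight of a field configuration; as `delta` goes to zero the weight concentrates on the exact solutions, recovering the Dirac deltas of Eq. (\ref{eq:deltas}).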
\n\n The space of solutions is delimited by the total power ${\\cal P}$ received by the system, i.e., \n ${\\cal S}_{\\rm in}: \\{E^{\\rm in} |\\sum_k I^{\\rm in}_k = \\mathcal{P}\\}$, also implying a constraint on the total amount of energy that is transmitted through the medium, i.e., \n ${\\cal S}_{\\rm out}:\\{E^{\\rm out} |\\sum_k I^{\\rm out}_k=c\\mathcal{P}\\}$, where the attenuation factor $c<1$ accounts for total losses.\n As we will see in more detail in the following, being interested in inferring the transmission matrix through the PLM, we can omit these terms from Eq. \\eqref{eq:H_J}, since they do not depend on $\\mathbb{T}$ and hence add no information on the gradients with respect to the elements of $\\mathbb{T}$.\n \n Taking the same number of incoming and outgoing channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function\n \\begin{eqnarray}\n \\label{eq:z}\n && Z =\\int_{\\mathcal S} \\prod_{j=1}^{N} dE_j \\left( \\frac{1}{\\sqrt{2\\pi \\Delta^2}} \\right)^{N/2} \n \\hspace*{-.4cm} \\exp\\left\\{\n -\\frac{ {\\cal H} [\\{E\\};\\mathbb{T}] }{2\\Delta^2}\n \\right\\}\n \\\\\n&&{\\cal H} [\\{E\\};\\mathbb{T}] =\n- \\sum_{k=1}^{N/2}\\sum_{j=N/2+1}^{N} \\left[E^*_j t_{jk} E_k + E_j t^*_{jk} E_k^* \n\\right]\n \\nonumber\n\\\\\n&&\\qquad\\qquad \\qquad + \\sum_{j=N/2+1}^{N} |E_j|^2+ \\sum_{k,l}^{1,N/2}E_k\nU_{kl} E_l^*\n \\nonumber\n \\\\\n \\label{eq:H_J}\n &&\\hspace*{1.88cm } = - \\sum_{nm}^{1,N} E_n J_{nm} E_m^*\n \\end{eqnarray}\n where ${\\cal H}$ is a real-valued function by construction, we have introduced the effective input-input coupling matrix\n\\begin{equation}\nU_{kl} \\equiv \\sum_{j=N/2+1}^{N}t^*_{lj} t_{jk} \n \\label{def:U}\n \\end{equation}\n and the whole interaction matrix reads (here $\\mathbb{T} \\equiv \\{ t_{jk} \\}$)\n \\begin{equation}\n 
\\label{def:J}\n \\mathbb J\\equiv \\left(\\begin{array}{ccc|ccc}\n \\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\phantom{()}&-\\mathbb{U} \\phantom{()}&\\phantom{()}&\\phantom{()}&{\\mathbb{T}}&\\phantom{()}\\\\\n\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\hline\n\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}&\\phantom{()}\\\\\n \\phantom{()}& \\mathbb T^\\dagger&\\phantom{()}&\\phantom{()}& - \\mathbb{I} &\\phantom{()}\\\\\n\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}&\\phantom{a}\\\\\n \\end{array}\\right)\n \\end{equation}\n \n Determining the electromagnetic complex amplitude configurations that minimize the {\\em cost function} ${\\cal H}$, Eq. (\\ref{eq:H_J}), amounts to maximizing the overall distribution, which is peaked around the solutions of the transmission Eqs. (\\ref{eq:transm}). As the variance $\\Delta^2\\to 0$, eventually, the initial set of Eqs. (\\ref{eq:transm}) is recovered. The ${\\cal H}$ function, thus, plays the role of a Hamiltonian and $\\Delta^2$ the role of a noise-inducing temperature. The exact numerical problem corresponds to the zero-temperature limit of the statistical mechanical problem. Working with real data, though, which are noisy, a finite ``temperature''\n allows for a better representation of the ensemble of solutions of the sets of equations of continuous variables. \n \n\n \n \n Now, we can express every phasor in Eq. \\eqref{eq:z} as $E_k = A_k e^{\\imath \\phi_k}$. 
As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or \textit{quenched} with respect to the phases.
The first condition holds, for instance, for the input intensities $|E^{\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \cite{Popoff11}.
By \textit{quenched} we mean, instead, that the intensity of each mode is the same for every solution of Eq.~\eqref{eq:transm} at fixed $\mathbb T$.
We stress that including intensities in the model does not preclude the inference analysis, but it is beyond the focus of the present work and will be considered elsewhere. 

If all intensities are uniform in input and in output, this amounts to a constant rescaling of each of the four sectors of the matrix $\mathbb J$ in Eq.~(\ref{def:J}), which does not change the properties of the matrices.
For instance, if the original transmission matrix is unitary, so will be the rescaled one, and the matrix $\mathbb U$ will be diagonal.
Otherwise, if the intensities are \textit{quenched}, i.e., they can be considered as constants in Eq.~(\ref{eq:transm}),
they are inhomogeneous with respect to the phases. The generic Hamiltonian element will, therefore, rescale as 
 \begin{eqnarray}
 E_n J_{nm} E^*_m = J_{nm} A_n A_m e^{\imath (\phi_n-\phi_m)} \to J_{nm} e^{\imath (\phi_n-\phi_m)}
 \nonumber
 \end{eqnarray}
 and the properties of the original $J_{nm}$ components are not conserved in the rescaled ones.
In particular, we no longer have any argument to set the rescaled $U_{nm}\propto \delta_{nm}$.
 We eventually end up with the complex-coupling $XY$ model, whose real-valued Hamiltonian reads
 \begin{eqnarray}
 \mathcal{H}& = & - \frac{1}{2} \sum_{nm} J_{nm} e^{-\imath (\phi_n - \phi_m)} + \mbox{c.c.} 
 \label{eq:h_im}
\\ &=& - \frac{1}{2} \sum_{nm} \left[J^R_{nm} \cos(\phi_n - \phi_m)+
 J^I_{nm}\sin (\phi_n - \phi_m)\right] 
 \nonumber
 \end{eqnarray}
where $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Since $\mathbb J$ is Hermitian, $J^R_{nm}=J^R_{mn}$ is symmetric and $J_{nm}^I=-J_{mn}^I$ is skew-symmetric.


 \section{Pseudolikelihood Maximization}
 \label{sec:plm}
The inverse problem consists in reconstructing the parameters $J_{nm}$ of the Hamiltonian, Eq.~(\ref{eq:h_im}). 
Given a set of $M$ data configurations of $N$ spins
 $\bm\sigma = \{ \cos \phi_i^{(\mu)},\sin \phi_i^{(\mu)} \}$, $i = 1,\dots,N$ and $\mu=1,\dots,M$, we want to \emph{infer} the couplings:
 \begin{eqnarray}
\bm \sigma \rightarrow \mathbb{J} 
\nonumber
 \end{eqnarray}
 With this purpose in mind,
 in the rest of this section we set up the working equations for the techniques used. 
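For concreteness, the real-valued Hamiltonian of Eq.~(\ref{eq:h_im}), which any numerical sampler of the model must evaluate, can be coded directly. A minimal sketch in Python with \texttt{numpy} (the function name and the small random Hermitian instance are ours, purely illustrative):

```python
import numpy as np

def xy_hamiltonian(J, phi):
    """Energy of the complex-coupling XY model, Eq. (eq:h_im):
    H = -1/2 sum_{nm} [J^R_{nm} cos(phi_n - phi_m) + J^I_{nm} sin(phi_n - phi_m)],
    for a Hermitian coupling matrix J and an array of angles phi."""
    dphi = phi[:, None] - phi[None, :]   # dphi[n, m] = phi_n - phi_m
    return -0.5 * np.sum(J.real * np.cos(dphi) + J.imag * np.sin(dphi))

# Small Hermitian test instance and random phases.
rng = np.random.default_rng(0)
N = 6
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
J = (A + A.conj().T) / 2                 # Hermitian: J^R symmetric, J^I skew-symmetric
phi = rng.uniform(0, 2 * np.pi, size=N)

H = xy_hamiltonian(J, phi)
# For Hermitian J, the complex form -1/2 sum_nm J_{nm} e^{-i(phi_n - phi_m)}
# is already real and agrees with the trigonometric expression above.
assert np.isclose(H, -0.5 * np.sum(J * np.exp(-1j * (phi[:, None] - phi[None, :]))).real)
```

The assertion checks, term by term, the equality between the two lines of Eq.~(\ref{eq:h_im}) for the Hermitian instance at hand.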
\n In order to test our methods, we generate the input data, i.e., the configurations, by Monte-Carlo simulations of the model.\n The joint probability distribution of the $N$ variables $\\bm{\\phi}\\equiv\\{\\phi_1,\\dots,\\phi_N\\}$, follows the Gibbs-Boltzmann distribution:\n \\begin{equation}\\label{eq:p_xy}\n P(\\bm{\\phi}) = \\frac{1}{Z} e^{-\\beta \\mathcal{H\\left(\\bm{\\phi}\\right)}} \\quad \\mbox{ where } \\quad Z = \\int \\prod_{k=1}^N d\\phi_k e^{-\\beta \\mathcal{H\\left(\\bm{\\phi}\\right)}} \n \\end{equation}\n and where we denote $\\beta=\\left( 2\\Delta^2 \\right)^{-1}$ with respect to Eq. (\\ref{def:Z}) formalism.\n In order to stick to usual statistical inference notation, in the following we will rescale the couplings by a factor $\\beta / 2$: $\\beta J_{ij}/2 \\rightarrow J_{ij}$. \n The main idea of the PLM is to work with the conditional probability distribution of one variable $\\phi_i$ given all other variables, \n $\\bm{\\phi}_{\\backslash i}$:\n \n \\begin{eqnarray}\n\t\\nonumber\n P(\\phi_i | \\bm{\\phi}_{\\backslash i}) &=& \\frac{1}{Z_i} \\exp \\left \\{ {H_i^x (\\bm{\\phi}_{\\backslash i})\n \t\\cos \\phi_i + H_i^y (\\bm{\\phi}_{\\backslash i}) \\sin \\phi_i } \\right \\}\n\t\\\\\n \\label{eq:marginal_xy}\n\t&=&\\frac{e^{H_i(\\bm{\\phi}_{\\backslash i}) \\cos{\\left(\\phi_i-\\alpha_i(\\bm{\\phi}_{\\backslash i})\\right)}}}{2 \\pi I_0(H_i)}\n \\end{eqnarray}\n where $H_i^x$ and $H_i^y$ are defined as\n \\begin{eqnarray}\n H_i^x (\\bm{\\phi}_{\\backslash i}) &=& \\sum_{j (\\neq i)} J^R_{ij} \\cos \\phi_j - \\sum_{j (\\neq i) } J_{ij}^{I} \\sin \\phi_j \\phantom{+ h^R_i} \\label{eq:26} \\\\\n H_i^y (\\bm{\\phi}_{\\backslash i}) &=& \\sum_{j (\\neq i)} J^R_{ij} \\sin \\phi_j + \\sum_{j (\\neq i) } J_{ij}^{I} \\cos \\phi_j \\phantom{ + h_i^{I} }\\label{eq:27}\n \\end{eqnarray}\nand $H_i= \\sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\\alpha_i = \\arctan H_i^y/H_i^x$ and we introduced the modified Bessel function of the first kind:\n 
\begin{equation}
 \nonumber
 I_k(x) = \frac{1}{2 \pi}\int_{0}^{2 \pi} d \phi\, e^{x \cos{ \phi}}\cos{k \phi}
 \end{equation}
 
 Given $M$ observation samples $\bm{\phi}^{(\mu)}=\{\phi^\mu_1,\ldots,\phi^\mu_N\}$, $\mu = 1,\dots, M$, the
 pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq.~(\ref{eq:marginal_xy}),
 \begin{eqnarray}
 \label{eq:L_i}
 L_i &=& \frac{1}{M} \sum_{\mu = 1}^M \ln P(\phi_i^{(\mu)}|\bm{\phi}^{(\mu)}_{\backslash i})
 \\
 \nonumber
 & =& \frac{1}{M} \sum_{\mu = 1}^M \left[ H_i^{(\mu)} \cos( \phi_i^{(\mu)} - \alpha_i^{(\mu)}) - \ln 2 \pi I_0\left(H_i^{(\mu)}\right)\right] \, .
 \end{eqnarray}
The underlying idea of the PLM is that an approximation of the true parameters of the model is obtained for the values that maximize the functions $L_i$.
The specific maximization scheme distinguishes the different techniques.


 
 
 \subsection{PLM with $l_2$ regularization}
 Especially in the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine from drifting towards high values of 
 $J_{ij}$ and $h_i$ without converging. We adopt an $l_2$ regularization, so that the pseudolikelihood function (PLF) at site $i$ reads:
 \begin{equation}\label{eq:plf_i}
 {\cal L}_i = L_i
 - \lambda \sum_{i \neq j} \left(J_{ij}^R\right)^2 - \lambda \sum_{i \neq j} \left(J_{ij}^I\right)^2 
 \end{equation}
 with $\lambda>0$.
 Note that the value of $\lambda$ is chosen somewhat arbitrarily; it must not be too large, so that the regularizer does not overcome $L_i$.
 The standard implementation of the PLM consists in maximizing each ${\cal L}_i$, for $i=1,\dots,N$, separately.
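The site-wise objective of Eqs.~(\ref{eq:marginal_xy})--(\ref{eq:plf_i}) can be sketched compactly; the following Python/\texttt{numpy} fragment (function name, $\lambda$ value, and random test data are ours, purely illustrative) evaluates the $l_2$-regularized PLF at one site, vectorized over the $M$ samples:

```python
import numpy as np

def plf_site(i, J, phis, lam=0.01):
    """l2-regularized pseudolikelihood of site i, Eqs. (eq:L_i) and (eq:plf_i).
    J: Hermitian (N, N) coupling matrix; phis: (M, N) array of sampled angles."""
    M, N = phis.shape
    mask = np.arange(N) != i
    Jr, Ji = J.real[i, mask], J.imag[i, mask]
    C, S = np.cos(phis[:, mask]), np.sin(phis[:, mask])
    Hx = C @ Jr - S @ Ji                      # Eq. (26), all samples at once
    Hy = S @ Jr + C @ Ji                      # Eq. (27)
    H = np.hypot(Hx, Hy)
    alpha = np.arctan2(Hy, Hx)
    # Eq. (eq:L_i); np.i0 is the modified Bessel function I_0.
    L_i = np.mean(H * np.cos(phis[:, i] - alpha) - np.log(2 * np.pi * np.i0(H)))
    return L_i - lam * np.sum(Jr**2 + Ji**2)  # Eq. (eq:plf_i)

# Smoke test on random data (not a real inference run).
rng = np.random.default_rng(1)
N, M = 5, 100
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
J = (A + A.conj().T) / 2
phis = rng.uniform(0, 2 * np.pi, size=(M, N))
val = plf_site(0, J, phis)
```

In practice one would maximize each such objective over the row $J_{i\cdot}$ (e.g., with a quasi-Newton routine on its negative). As a sanity check, for $J=0$ the function reduces to $-\ln 2\pi$, the independent-variable value.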
The expected values of the couplings are then:
 \begin{equation}
 \{ J_{i j}^*\}_{j\in \partial i} := \mbox{arg max}_{ \{ J_{ij} \}}
 \left[{\cal L}_i\right]
 \end{equation}
 In this way, we obtain two estimates for the coupling $J_{ij}$: one from the maximization of ${\cal L}_i$, $J_{ij}^{(i)}$, and another one from ${\cal L}_j$, say $J_{ij}^{(j)}$.
 Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric. 
 
 The final estimate for $J_{ij}$ can then be obtained by averaging the two results:
 \begin{equation}\label{eq:symm}
 J_{ij}^{\rm inferred} = \frac{J_{ij}^{(i)} + \bar{J}_{ij}^{(j)}}{2} 
 \end{equation}
 where $\bar{J}$ indicates the complex conjugate.
 It is worth noting that the pseudolikelihood $L_i$, Eq.~\eqref{eq:L_i}, is characterized by the
 following properties: (i) the normalization term of Eq.~\eqref{eq:marginal_xy} can be
 computed analytically, at odds with the {\em full} likelihood case, which
 in general requires a computational time that scales exponentially
 with the size of the system; (ii) the $\ell_2$-regularized pseudolikelihood
 defined in Eq.~\eqref{eq:plf_i} is strictly concave (i.e., it has a single
 maximizer)~\cite{Ravikumar10}; (iii) it is consistent, i.e., if $M$ samples are
 generated by a model $P(\phi | J^*)$, the maximizer tends to $J^*$
 for $M\rightarrow\infty$~\cite{besag1975}. Note also that (iii) guarantees that 
 $|J^{(i)}_{ij}-J^{(j)}_{ij}| \rightarrow 0$ for $M\rightarrow \infty$.
 In Secs.
\ref{sec:res_reg} and \ref{sec:res_dec} 
 we report the results obtained and analyze the performance of the PLM on configurations drawn from Monte-Carlo simulations of models whose details are known.
 

 
 \subsection{PLM with decimation}
 Even though the PLM with $l_2$-regularization allows one to push the inference towards the low-temperature region and the low-sampling regime with better performance than mean-field methods, in some situations some couplings are overestimated and not at all symmetric. Moreover, the technique carries the bias of the $l_2$ regularizer.
 To overcome these problems, Decelle and Ricci-Tersenghi introduced a new method~\cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$,
 \begin{eqnarray}
 {\cal L}\equiv \frac{1}{N}\sum_{i=1}^N \mbox{L}_i
 \end{eqnarray} 
 and then recursively sets to zero the couplings that are estimated to be very small. We expect that, as long as we set to zero couplings that are unnecessary to fit the data, ${\cal L}$ should not change much. As the decimation proceeds, a point is reached where ${\cal L}$ decreases abruptly, indicating that relevant couplings are being decimated and under-fitting is taking place.
 Let us denote by $x$ the fraction of non-decimated couplings. To have a quantitative measure for the halt criterion of the decimation process, a tilted ${\cal L}$ is defined as
 \begin{eqnarray}
 \mathcal{L}_t &\equiv& \mathcal{L} - x \mathcal{L}_{\textup{max}} - (1-x) \mathcal{L}_{\textup{min}} \label{$t$PLF} 
 \end{eqnarray}
 where 
 \begin{itemize}
 \item $\mathcal{L}_{\textup{min}}$ is the pseudolikelihood of a model with independent variables. In the XY case: $\mathcal{L}_{\textup{min}}=-\ln{2 \pi}$.
 \item
 $\mathcal{L}_{\textup{max}}$ is the pseudolikelihood of the fully-connected model, maximized over all the $N(N-1)/2$ possible couplings.
\n \\end{itemize}\n At the first step, when $x=1$, $\\mathcal{L}$ takes value $\\mathcal{L}_{\\rm max}$ and $\\mathcal{L}_t=0$. On the last step, for an empty graph, i.e., $x=0$, $\\mathcal{L}$ takes the value $\\mathcal{L}_{\\rm min}$ and, hence, again $\\mathcal{L}_t =0$. \n In the intermediate steps, during the decimation procedure, as $x$ is decreasing from $1$ to $0$, one observes firstly that $\\mathcal{L}_t$ increases linearly and, then, it displays an abrupt decrease indicating that from this point on relevant couplings are being decimated\\cite{Decelle14}. In Fig. \\ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as \n \\begin{eqnarray}\\label{eq:errj}\n \\mbox{err}_J \\equiv \\sqrt{\\frac{\\sum_{i0,\\\\\n\t\t&\\forall \\ {\\mathbf{x}}_k\\neq{}{\\x}^{\\mathrm{r}}_k,\\ \\u_k\\neq{\\u}^{\\mathrm{r}}_k,\n\t\t\\end{split}\n\t\\end{align}\n\t\tin tracking schemes but not in economic ones. Note that~\\eqref{eq:tracking_cost} can only hold if $\\r={\\mathbf{y}}^{\\mathrm{r}}$, that is, if Assumption~\\ref{a:rec_ref} holds.\n\tConsequently, even if the cost is positive-definite, any MPC scheme formulated with an infeasible reference $\\r$ is an economic MPC. \n\tWe refer to~\\cite{Zanon2018a,Faulwasser2018} for a detailed discussion on the topic.\n\tOn the contrary, if ${\\mathbf{y}}^{\\mathrm{r}}$ is used as reference, we obtain the tracking stage cost $q_{{\\mathbf{y}}^{\\mathrm{r}}}$. 
Since precomputing a feasible reference ${\\mathbf{y}}^{\\mathrm{r}}$ can be impractical or involved, we focus next on the case of \\emph{infeasible references}.\n\t\n\t\n\t\n\tConsider the Lagrangian of the OCP~\\eqref{eq:ocp}\n\t\\begin{align*}\n\t\\mathcal{L}^\\mathrm{O}({\\boldsymbol{\\xi}}, {\\boldsymbol{\\nu}}, {\\boldsymbol{\\lambda}},{\\boldsymbol{\\mu}},\\mathbf{t}) &= {\\boldsymbol{\\lambda}}_0^\\top ({\\boldsymbol{\\xi}}_0 - {\\mathbf{x}}_{0}) +p_\\r({\\boldsymbol{\\xi}}_M,t_M)\\\\\n\t&+\\lim_{M\\rightarrow\\infty}\\sum_{n=0}^{M-1}\n\tq_\\r({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n,t_n) +{\\boldsymbol{\\mu}}_n^\\top h({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n)\\\\\n\t&+\\lim_{M\\rightarrow\\infty}\\sum_{n=0}^{M-1} {\\boldsymbol{\\lambda}}_{n+1}^\\top ({\\boldsymbol{\\xi}}_{n+1} - f_n({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n)),\n\t\\end{align*}\n\tand denote the optimal multipliers as ${\\boldsymbol{\\lambda}}^\\mathrm{r},{\\boldsymbol{\\mu}}^\\mathrm{r}$, and the solution of~\\eqref{eq:ocp} as ${\\mathbf{y}}^{\\mathrm{r}}:=({\\mathbf{x}}^{\\mathrm{r}},\\u^{\\mathrm{r}})$.\tIn order to construct a tracking cost from the economic one, we use the Lagrange multipliers of the OCP~\\eqref{eq:ocp} to construct a \\emph{rotated} problem, which has the same constraints as the original MPC problem~\\eqref{eq:nmpc} and the following \\emph{rotated stage and terminal costs} \n\t\t\\begin{align*}\n\t&\\bar q_\\r(\\xb,\\ub,t_n):=q_\\r(\\xb,\\ub,t_n)-q_\\r({\\mathbf{x}}^{\\mathrm{r}}_n,\\u^{\\mathrm{r}}_n,t_n)\\\\\n\t&\\hspace{1em}+ {\\boldsymbol{\\lambda}}_n^{\\mathrm{r}\\top}(\\xb[n][k]-{\\mathbf{x}}^{\\mathrm{r}}_n)- {\\boldsymbol{\\lambda}}^{{\\mathrm{r}}\\top}_{n+1} (f_n(\\xb[n][k],\\ub[n][k])-f_n({\\mathbf{x}}^{\\mathrm{r}}_n,\\u^{\\mathrm{r}}_n)), \\\\\n\t&\\bar{p}_\\r(\\xb,t_n):= p_\\r(\\xb)-p_\\r({\\mathbf{x}}_{n}^{\\mathrm{r}},t_n)+{\\boldsymbol{\\lambda}}^{{\\mathrm{r}}\\top}_{n}(\\xb-{\\mathbf{x}}^{\\mathrm{r}}_{n}).\n\t\\end{align*}\n\nAs we 
prove in the following Lemma~\\ref{lem:rot_ocp}, adopting the rotated stage cost $\\bar q_\\r$ and terminal cost $\\bar p_\\r$ in the OCP~\\eqref{eq:ocp} does not change its primal solution. Such property of the rotated costs will be exploited next in the formulation of the \\emph{ideal} MPC problem.\n\t\\begin{Lemma}\n\t\t\\label{lem:rot_ocp}\n\t\tIf OCP~\\eqref{eq:ocp} is formulated using the rotated cost instead of the original one, then the Second Order Sufficient optimality Conditions (SOSC) are satisfied~\\cite{Nocedal2006}, and the following claims hold:\n\t\t\\begin{enumerate}\n\t\t \\item[i)] the primal solution is unchanged;\n\t\t \\item[ii)] the rotated cost penalizes deviations from the optimal solution of Problem~\\eqref{eq:ocp}, i.e.,\n\t\t \\begin{align*}\n\t\t \\bar q_\\r({\\mathbf{x}}_n^{\\mathrm{r}},\\u_n^{\\mathrm{r}},t_n) =0,\\ \\bar q_\\r({\\mathbf{x}}_n,\\u_n,t_n)>0,\n\t\t \\end{align*}\n\t\t for all $({\\mathbf{x}}_n,\\u_n) \\neq ({\\x}_n^{\\mathrm{r}},{\\u}_n^{\\mathrm{r}})$ satisfying $h({\\mathbf{x}}_n,\\u_n) \\leq 0$.\n\t\t\\end{enumerate}\n\t\\end{Lemma}\n\t\\begin{proof}\n\t\tFirst, we prove that if Problem~\\eqref{eq:ocp} is formulated using stage cost $\\bar q_\\r$ and terminal cost $\\bar p_\\r$ instead of $q_\\r$ and $p_\\r$, the primal solution remains unchanged. \n\t\tThis is a known result from the literature on economic MPC and is based on the observation that all terms involving ${\\boldsymbol{\\lambda}}^\\mathrm{r}$ in the rotated cost form a telescopic sum and cancel out, such that only ${{\\boldsymbol{\\lambda}}_0^\\mathrm{r}}^\\top ({\\boldsymbol{\\xi}}_0-{\\mathbf{x}}_0^\\mathrm{r})$ remains. Since the initial state is fixed, the cost only differs by a constant term and the primal solution is unchanged. 
The cost $\\bar q_\\r$ being nonnegative is a consequence of the fact that the stage cost Hessian is positive definite by Assumption \\ref{a:cont}, the system dynamics are LTV, and the Lagrange multipliers $\\bar {\\boldsymbol{\\lambda}}$ associated with Problem~\\eqref{eq:ocp} using cost $\\bar q_\\r$ are $0$. \n\t\t\n\t\tTo prove the second claim, we define the Lagrangian of the rotated problem as \n\t\t\\begin{align*}\n\t\t\\mathcal{\\bar L}^\\mathrm{O}({\\boldsymbol{\\xi}}, {\\boldsymbol{\\nu}}, \\bar {\\boldsymbol{\\lambda}},\\bar {\\boldsymbol{\\mu}},\\mathbf{t}) \n\t\t= \\ & \\bar{{\\boldsymbol{\\lambda}}}_0^\\top ({\\boldsymbol{\\xi}}_0 - {\\mathbf{x}}_{0}) + \\bar p_\\r ({\\boldsymbol{\\xi}}_M,t_M)\\\\\n\t\t&\\hspace{-2em}+\\lim_{M\\rightarrow\\infty}\\sum_{n=0}^{M-1}\n\t\t\\bar{q}_\\mathrm{\\r}({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n,t_n) + \\bar {\\boldsymbol{\\mu}}_n^\\top h({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n)\\\\\n\t\t&\\hspace{-2em}+\\lim_{M\\rightarrow\\infty}\\sum_{n=0}^{M-1} \\bar{{\\boldsymbol{\\lambda}}}_{n+1}^\\top ( {\\boldsymbol{\\xi}}_{n+1} - f_n({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n) ).\n\t\t\\end{align*}\n\t\tFor compactness we denote next $\\nabla_n:=\\nabla_{({\\boldsymbol{\\xi}}_n,{\\boldsymbol{\\nu}}_n)}$. 
Since by construction $\nabla_n \bar q_\mathrm{\r}=\nabla_n \mathcal{L}^\mathrm{O} - \nabla_n {\boldsymbol{\mu}}_n^{{\mathrm{r}}\top} h $, we obtain
		\begin{align*}
		\nabla_n \mathcal{\bar L}^\mathrm{O} &= \nabla_n \bar q_\mathrm{\r} + \matr{c}{\bar {\boldsymbol{\lambda}}_n \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n \bar {\boldsymbol{\mu}}_n^\top h \\
		&\hspace{-1.2em}= \nabla_n \mathcal{L}^\mathrm{O} + \matr{c}{\bar {\boldsymbol{\lambda}}_{n} \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n (\bar {\boldsymbol{\mu}}_n-{\boldsymbol{\mu}}_n^\mathrm{r})^\top h.
		\end{align*}
		Therefore, the KKT conditions of the rotated problem are solved by the same primal variables as the original problem, with $\bar {\boldsymbol{\mu}}_n = {\boldsymbol{\mu}}_n^\mathrm{r}$, $\bar {\boldsymbol{\lambda}}_n=0$. With similar steps we show that $\bar{{\boldsymbol{\lambda}}}_M=0$, since $\nabla_M\bar{p}_\r=\nabla_M \mathcal{L}^\mathrm{O}$.
		Because the system dynamics are LTV and the stage cost is quadratic, we have that 
		$\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$.
		Moreover, since the solution satisfies the SOSC,
		we directly have that $\bar q_\mathrm{\r}({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r},t_n) =0$ and $\bar q_\mathrm{\r}({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) > 0$ for all $({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\neq({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r})$ s.t. $h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) \leq 0$.
	\end{proof}
	\begin{Remark}
		\label{rem:nl_sys}
		The only reason our result is limited to LTV systems is that this setting entails $\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$.
It seems plausible that this limitation could be overcome by assuming that OCP~\eqref{eq:ocp} satisfies the SOSC for all initial states at all times. However, because further technicalities would be necessary to obtain the proof, we leave this investigation for future research.
	\end{Remark}
	\begin{Corollary} The rotated value function of OCP~\eqref{eq:ocp}, i.e.,
	 \begin{align*}
		 \bar V^\mathrm{O}({\mathbf{x}}_k,t_k) &=\ V^\mathrm{O}({\mathbf{x}}_k,t_k) + {\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_k ({\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k)\\
		 &-\lim_{M\rightarrow\infty}\sum_{n=k}^{k+M-1}q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n,t_n)-p_\r({\mathbf{x}}^{\mathrm{r}}_{k+M},t_{k+M}),
	 \end{align*}
		is positive definite, and its minimum is $\bar V^\mathrm{O}({\x}_k^\mathrm{r},t_k)=0$.
	\end{Corollary}
	\begin{proof}
	We note from the proof of Lemma~\ref{lem:rot_ocp} that the rotated stage and terminal costs are positive definite and that they are zero at the feasible reference $({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)$; hence, the rotated value function is also positive definite and zero at ${\x}^{\mathrm{r}}_k$.
	\end{proof}
	
	

	\paolor{While Proposition~\ref{prop:stab_feas} proves the stability of system~\eqref{eq:sys} in closed-loop with the solution of~\eqref{eq:nmpc} under Assumptions~\ref{a:rec_ref} and~\ref{a:terminal}, in Theorem~\ref{thm:as_stab_0} we will prove stability in case the reference trajectory does not satisfy Assumption~\ref{a:rec_ref}.
The stability proof in Theorem~\\ref{thm:as_stab_0} \nbuilds on the following \\emph{ideal} formulation} \n\t\\begin{subequations}\n\t\\label{eq:ideal_nmpc}\n\t\\begin{align}\n\t\\begin{split}V^\\mathrm{i}({\\mathbf{x}}_k,t_k) = \\min_{{\\x},{\\u}}&\\sum_{n=k}^{k+N-1} q_\\r(\\xb,\\ub,t_n) \\\\\n\t\t&\\hspace{2em}+p_{\\tilde{\\mathbf{y}}^{\\mathrm{r}}}(\\xb[k+N],t_{k+N})\n\t\\end{split} \\\\\n\t\t\\mathrm{s.t.} \\ \\ &\\eqref{eq:nmpcState}-\\eqref{eq:nmpcInequality_known}, \\ \\xb[k+N] \\in\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}(t_{k+N}),\\label{eq:ideal_nmpc_terminal}\n\t\\end{align}\n\\end{subequations}\nwhere\n\\begin{align}\\label{eq:minimizer_tilde_yr}\n\t\t\\tilde{\\mathbf{y}}^{\\mathrm{r}}_k &:= \\arg\\min_{{\\mathbf{x}}} p_{{\\mathbf{y}}^{\\mathrm{r}}}({\\mathbf{x}},t_k)-{\\boldsymbol{\\lambda}}_k^{{\\mathrm{r}}\\top}({\\mathbf{x}}-{\\mathbf{x}}^{\\mathrm{r}}_k).\n\\end{align}\nThe Problems~\\eqref{eq:nmpc} and~\\eqref{eq:ideal_nmpc} only differ in the terminal cost and constraint: in~\\eqref{eq:ideal_nmpc} they are written with respect to the solution ${\\mathbf{y}}^{\\mathrm{r}}$ and ${\\boldsymbol{\\lambda}}^{\\mathrm{r}}$ of~\\eqref{eq:ocp} rather than~$\\r$. In order to distinguish the solutions of~\\eqref{eq:nmpc} and~\\eqref{eq:ideal_nmpc}, we denote the solution of~\\eqref{eq:nmpc} by ${\\mathbf{x}}^\\star$, $\\u^\\star$, and the solution of~\\eqref{eq:ideal_nmpc} by ${\\mathbf{x}}^\\mathrm{i}$, $\\u^\\mathrm{i}$. 
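The telescoping-sum argument used in the proof of Lemma~\ref{lem:rot_ocp} is easy to check numerically: along any trajectory that satisfies the dynamics, the rotated and original objectives differ by a constant fixed by the initial state and the reference. A self-contained sketch (Python/\texttt{numpy}; the random LTV data, multipliers, and cost choices are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, nx, nu = 8, 3, 2
A = rng.normal(size=(M, nx, nx)) * 0.5       # LTV dynamics matrices
B = rng.normal(size=(M, nx, nu))
lam = rng.normal(size=(M + 1, nx))           # stand-ins for the multipliers lambda^r

def rollout(x0, u):
    """Roll the dynamics x_{n+1} = f_n(x_n, u_n) forward from x0."""
    x = [x0]
    for n in range(M):
        x.append(A[n] @ x[n] + B[n] @ u[n])
    return np.array(x)

def q(x, u):  # arbitrary stage cost
    return np.sum(x**2) + np.sum(u**2)

def p(x):     # arbitrary terminal cost
    return np.sum(x**2)

def total(x, u):      # original objective
    return sum(q(x[n], u[n]) for n in range(M)) + p(x[M])

def total_rot(x, u):  # rotated objective, mirroring the definition of bar q, bar p
    s = 0.0
    for n in range(M):
        s += (q(x[n], u[n]) - q(xr[n], ur[n])
              + lam[n] @ (x[n] - xr[n])
              - lam[n + 1] @ ((A[n] @ x[n] + B[n] @ u[n])
                              - (A[n] @ xr[n] + B[n] @ ur[n])))
    return s + p(x[M]) - p(xr[M]) + lam[M] @ (x[M] - xr[M])

# A feasible "reference" and two feasible candidate trajectories with the same x0.
x0 = rng.normal(size=nx)
ur = rng.normal(size=(M, nu)); xr = rollout(x0, ur)
u1 = rng.normal(size=(M, nu)); x1 = rollout(x0, u1)
u2 = rng.normal(size=(M, nu)); x2 = rollout(x0, u2)

# The rotation shifts every feasible trajectory's cost by the same constant.
d1 = total_rot(x1, u1) - total(x1, u1)
d2 = total_rot(x2, u2) - total(x2, u2)
assert np.isclose(d1, d2)
```

The multiplier terms telescope to $\lambda_0^\top(x_0-x_0^{\rm r})-\lambda_M^\top(x_M-x_M^{\rm r})$, and the terminal rotation cancels the second term, so the offset does not depend on the candidate trajectory.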
In addition, when the stage cost $\\bar{q}_\\r$ and terminal cost $\\bar p_{{\\tilde\\y}^{\\mathrm{r}}}$ are used, we obtain the corresponding \\emph{rotated} formulation of~\\eqref{eq:ideal_nmpc}\n\\begin{align}\n\t\\label{eq:ideal_rot_nmpc}\n\t\\begin{split}\\bar V^\\mathrm{i}({\\mathbf{x}}_k,t_k) = \\min_{{\\mathbf{x}},\\u} &\\sum_{n=k}^{k+N-1} \\bar q_\\r(\\xb,\\ub,t_n) \\\\\n\t\t&\\hspace{2em}+ \\bar p_{\\tilde{\\mathbf{y}}^\\mathrm{r}}(\\xb[k+N],t_{k+N})\n\t\\end{split} \\\\\n\t\\mathrm{s.t.}\\hspace{0em} \\ \\ &\\eqref{eq:nmpcState}-\\eqref{eq:nmpcInequality_known}, \\ \\xb[k+N] \\in\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^\\mathrm{r}}(t_{k+N}),\\nonumber\n\\end{align}\nwhere the rotated terminal cost is defined as\n\\begin{align}\\begin{split}\\label{eq:rot_tilde_terminal_cost}\n\t\\bar{p}_{{\\tilde\\y}^{\\mathrm{r}}}(\\xb,t_n)&:= p_{{\\tilde\\y}^{\\mathrm{r}}}(\\xb,t_n)-p_{{\\tilde\\y}^{\\mathrm{r}}}({\\mathbf{x}}_{n}^{\\mathrm{r}},t_n)\\\\\n\t&+{\\boldsymbol{\\lambda}}^{{\\mathrm{r}}\\top}_{n}(\\xb-{\\mathbf{x}}^{\\mathrm{r}}_{n}).\\end{split}\n\\end{align}\nNote that by Lemma~\\ref{lem:rot_ocp}, the rotated cost $\\bar q_\\r$ penalizes deviations from ${\\mathbf{y}}^{\\mathrm{r}}$, i.e., the solution to \\eqref{eq:ocp}. We will prove next that $\\bar p_{{\\tilde\\y}^\\r}$ also penalizes deviations from ${\\mathbf{y}}^\\r$, implying that \\emph{the rotated ideal MPC formulation is of tracking type}.\n\n\t\n\t\n\n\t\\begin{Lemma}\n\t\t\\label{lem:rot_mpc}\n\t\tConsider the \\emph{rotated} \\emph{ideal} MPC Problem~\\eqref{eq:ideal_rot_nmpc}, formulated using the rotated costs $\\bar q_\\r$ and $\\bar{p}_{{\\tilde\\y}^\\r}$, and the terminal set $\\mathcal{X}_{{\\mathbf{y}}^\\r}^\\mathrm{f}$. 
Then, the primal solution of~\\eqref{eq:ideal_rot_nmpc} coincides with the primal solution of the ideal MPC Problem~\\eqref{eq:ideal_nmpc}.\n\t\\end{Lemma}\n\t\\begin{proof}\n\t\tFrom~\\eqref{eq:minimizer_tilde_yr} and~\\eqref{eq:rot_tilde_terminal_cost} we have that $\\bar p_{{\\tilde\\y}^\\r}({\\mathbf{x}}_k^{\\mathrm{r}},t_k) =0$ and that\n\t\t$\\nabla \\bar p_{{\\tilde\\y}^{\\mathrm{r}}}({\\mathbf{x}}^{\\mathrm{r}}_k,t_k) = \\nabla p_{{\\tilde\\y}^{\\mathrm{r}}}({\\mathbf{x}}^{\\mathrm{r}}_k,t_k) + \\nabla p_{{\\mathbf{y}}^{\\mathrm{r}}}({\\tilde\\y}^{\\mathrm{r}}_k,t_k) = 0$, since the terminal costs are quadratic~\\eqref{eq:terminal_cost}. The proof then follows along the same lines as Lemma~\\ref{lem:rot_ocp} and~\\cite{Diehl2011,Amrit2011a}.\n\t\\end{proof}\n\t\n\t\n\n\tIn order to prove Theorem~\\ref{thm:as_stab_0}, we need that the terminal conditions of the rotated ideal formulation~\\eqref{eq:ideal_rot_nmpc} satisfy Assumption~\\ref{a:terminal}. To that end, we introduce the following assumption.\n\t\\begin{Assumption}\\label{a:terminal_for_rotated}\n\tThere exists a parametric stabilizing terminal set $\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}(t)$ and a terminal control law $\\kappa^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}({\\mathbf{x}},t)$ yielding:\n\t\\begin{align*}\n\t\t\\mathbf{x}_{+}^\\kappa=f_k(\\mathbf{x}_k,\\kappa^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}({\\mathbf{x}}_k,t)), && t_+ = t_k + t_\\mathrm{s},\n\t\\end{align*}\n\tso that\n\t$\\bar p_{{\\tilde\\y}^{\\mathrm{r}}}({\\mathbf{x}}_{+}^\\kappa,t_{+})- \\bar p_{{\\tilde\\y}^{\\mathrm{r}}}({\\mathbf{x}}_k,t_k) \\leq{} - \\bar q_\\r({\\mathbf{x}}_k,\\kappa^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}({\\mathbf{x}}_k,t_k),t_k)$, ${\\mathbf{x}}_k\\in\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}(t_k)\\Rightarrow {\\mathbf{x}}^\\kappa_{+}\\in\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}(t_{+})$, and 
$h({\\mathbf{x}}_k,\\kappa^\\mathrm{f}_{{\\mathbf{y}}^{\\mathrm{r}}}({\\mathbf{x}}_k,t_k)) \\leq{} 0$ hold for all $k\\in\\mathbb{I}_0^\\infty$.\n\\end{Assumption}\t\nNote that Assumption~\\ref{a:terminal_for_rotated} only differs from Assumption~\\ref{a:terminal} by the fact that the set and control law are centered on ${\\mathbf{y}}^{\\mathrm{r}}$ rather than $\\r$, and that the costs are rotated.\n\t\\begin{Theorem}\n\t\t\\label{thm:as_stab_0}\n\t\tSuppose that Assumptions \\ref{a:cont} and~\\ref{a:terminal_for_rotated} hold, and that Problem~\\eqref{eq:ocp} is feasible for initial state $({\\mathbf{x}}_k,t_k)$. Then, system~\\eqref{eq:sys} in closed-loop with the ideal MPC~\\eqref{eq:ideal_nmpc} is asymptotically stabilized to the optimal trajectory ${\\x}^{\\mathrm{r}}$.\n\t\\end{Theorem}\n\t\\begin{proof}\n\t\tBy Lemma~\\ref{lem:rot_mpc}, the rotated ideal MPC problem has positive-definite stage and terminal costs penalizing deviations from the optimal trajectory ${\\mathbf{y}}^{\\mathrm{r}}$. Hence, the rotated ideal MPC problem is of tracking type. \n\t\t\t\n\t\tAssumption~\\ref{a:cont} directly entails a lower bounding by a $\\mathcal{K}_\\infty$ function, and can also be used to prove an upper bound~\\cite[Theorem 2.19]{rawlings2009model}, such that the following holds\n\t\t\\begin{equation*}\n\t\t\t\\alpha_1(\\|{\\mathbf{x}}_k-{\\mathbf{x}}^{\\mathrm{r}}_k\\|) \\leq{} \\bar V^\\mathrm{i}({\\mathbf{x}}_k,t_k)\\leq{} \\alpha_2(\\|{\\mathbf{x}}_k-{\\mathbf{x}}^{\\mathrm{r}}_k\\|),\n\t\t\\end{equation*}\n\t\twhere $\\alpha_1,\\alpha_2\\in\\mathcal{K}_\\infty$. Then, solving Problem~\\eqref{eq:ideal_rot_nmpc}, we obtain $\\bar V^{\\mathrm{i}}({\\mathbf{x}}_k,t_k)$ and the optimal trajectories $\\{\\xb[k]^\\mathrm{i},...,\\xb[k+N]^\\mathrm{i}\\}$, and $\\{\\ub[k]^\\mathrm{i},...,\\ub[k+N-1]^\\mathrm{i}\\}$. 
By relying on Assumptions~\ref{a:rec_ref} and~\ref{a:terminal}, using terminal control law $\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}$, we can construct the feasible sub-optimal trajectories $\{\xb[k+1]^\mathrm{i},...,\xb[k+N]^\mathrm{i},f_{k+N}(\xb[k+N]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^\r})\}$ and $\{\ub[k+1]^\mathrm{i},...,\ub[k+N-1]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}\}$ at time $k+1$, which can be used to derive the decrease condition following standard arguments~\cite{rawlings2009model,borrelli2017predictive}:
		$$\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1},t_{k+1})-\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)\leq{}-\alpha_3(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|).$$ 
		This entails that the \emph{rotated} \emph{ideal} value function $\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)$ is a Lyapunov function, and that the closed-loop system is asymptotically stabilized to ${\mathbf{x}}^{\mathrm{r}}$. 
		Finally, using Lemma~\ref{lem:rot_mpc} we establish asymptotic stability also for the \emph{ideal} MPC scheme~\eqref{eq:ideal_nmpc}, since the primal solutions of the two problems coincide.
	\end{proof}

	Theorem~\ref{thm:as_stab_0} establishes the first step towards the desired result: 
	an MPC problem can be formulated using an \emph{infeasible reference}, which stabilizes system~\eqref{eq:sys} to the optimal trajectory of Problem~\eqref{eq:ocp} provided that the appropriate terminal conditions are used.

	At this stage, the main issue is to express the terminal constraint set as 
	a positive invariant set containing ${\x}^{\mathrm{r}}$, and the terminal control law stabilizing the system to ${\x}^{\mathrm{r}}$.
	To that end, one needs to know the feasible reference trajectory~${\x}^{\mathrm{r}}$, i.e., to solve Problem~\eqref{eq:ocp}.
Since solving Problem~\eqref{eq:ocp} is not practical, we show in the next section how sub-optimal terminal conditions can be used instead to establish ISS for the closed-loop system.
	
	
	
	\subsection{Practical MPC and ISS}\label{sec:iss}
	
	In this subsection, we analyze the case in which the terminal conditions are not enforced based on the feasible reference trajectory, but 
	rather based on an \emph{approximately feasible} reference (see Assumption~\ref{a:approx_feas}). 
	Since in that case asymptotic stability cannot be proven, we will prove ISS for the closed-loop system, where the input is some terminal reference ${\mathbf{y}}^{\mathrm{f}}$. In particular, we are interested in the practical approach ${\mathbf{y}}^{\mathrm{f}}=\r(t_{k+N})$ and the ideal setting ${\mathbf{y}}^{\mathrm{f}}={\mathbf{y}}^{\mathrm{r}}(t_{k+N})$. 
	To that end, we define the following closed-loop dynamics
	\begin{align}\label{eq:iss_dynamics}
	{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}) = f_k({\mathbf{x}}_{k},\u_\mathrm{MPC}({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f})) = \bar f_k({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f}),
	\end{align}
	where we stress that~$\u_\mathrm{MPC}$ is obtained as~$\ub[k]^\star$ solving Problem~\eqref{eq:nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}=\r$ and terminal cost $p_\r$); or as~$\ub[k]^\mathrm{i}$ solving the ideal Problem~\eqref{eq:ideal_nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$ and terminal cost $p_{\tilde{\mathbf{y}}^\r}$). In the following we will use the notation ${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})$ to stress that the terminal reference ${\mathbf{y}}^\mathrm{f}$ is used in the computation of the control yielding the next state.
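As a toy illustration of the closed-loop behavior analyzed in this subsection, consider a scalar LTI system controlled by an unconstrained finite-horizon tracking MPC with an infeasible reference: the loop settles at a feasible steady state, at a bounded offset from the reference, rather than at the reference itself. The sketch below (Python/\texttt{numpy}; all numbers and names are ours, purely illustrative, and it is not the formulation of Problem~\eqref{eq:nmpc}):

```python
import numpy as np

a, b = 0.9, 0.5
N = 10
x_ref, u_ref = 1.0, 1.0   # infeasible pair: x_ref != a*x_ref + b*u_ref

def mpc_input(x0):
    """Unconstrained finite-horizon tracking MPC, solved as a least-squares
    problem in u_0..u_{N-1}, with the states eliminated via the dynamics:
    x_n = a^n x0 + sum_k a^{n-1-k} b u_k."""
    Phi = np.array([a**n for n in range(1, N + 1)])      # free response
    G = np.zeros((N, N))
    for n in range(1, N + 1):
        for k in range(n):
            G[n - 1, k] = a**(n - 1 - k) * b
    # Residuals: (x_n - x_ref) for n = 1..N and (u_k - u_ref) for k = 0..N-1.
    Amat = np.vstack([G, np.eye(N)])
    rhs = np.concatenate([x_ref - Phi * x0, np.full(N, u_ref)])
    u = np.linalg.lstsq(Amat, rhs, rcond=None)[0]
    return u[0]                                          # receding horizon: apply u_0

x = 3.0
traj = [x]
for _ in range(200):
    x = a * x + b * mpc_input(x)
    traj.append(x)

# The closed loop converges to a *feasible* steady state (x_ss, u_ss),
# i.e., x_ss = a x_ss + b u_ss, which the infeasible reference cannot be.
x_ss = traj[-1]
u_ss = mpc_input(x_ss)
assert abs(traj[-1] - traj[-2]) < 1e-8
assert abs(x_ss - (a * x_ss + b * u_ss)) < 1e-6
assert abs(x_ref - (a * x_ref + b * u_ref)) > 0.1
```

The distance between the limit point and the reference plays the role of the ISS input here: the smaller the reference infeasibility, the smaller the asymptotic offset.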
Additionally, we define 
	the following quantities 
	\begin{align*}
	\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N}), \\
	\bar J_\r^{\mathrm{i}}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p_\r(\xb[k+N]^\mathrm{i},t_{k+N}),
	\end{align*} 
	and we recall that
	\begin{align*}
	\bar V({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_\r(\xb[k+N]^\star,t_{k+N}),\\
	\bar V^\mathrm{i}({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p _{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N}).
	\end{align*}
	
	Before formulating the stability result in the next theorem, we need to introduce an additional assumption on the reference infeasibility.
	\begin{Assumption}[Approximate feasibility of the reference]
		\label{a:approx_feas}
		The reference ${\mathbf{y}}^{\mathrm{f}}$ satisfies the constraints \eqref{eq:nmpcInequality_known}, i.e., $h({\x}^{\mathrm{f}}_n,{\u}^{\mathrm{f}}_n) \leq{} 0$, $n\in \mathbb{I}_k^{k+N-1}$, for all $k\in\mathbb{N}^+$. Additionally, recursive feasibility holds for both Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} when the system is controlled in closed-loop using the feedback from Problem~\eqref{eq:nmpc}.
	\end{Assumption}
	
	\begin{Remark}
		Assumption~\ref{a:approx_feas} essentially only requires that the reference used in the definition of the terminal conditions (constraint and cost) is feasible with respect to the system constraints, and not with respect to the system dynamics.
However, recursive feasibility holds if the reference satisfies, e.g., $\\|{\\mathbf{x}}_{n+1}^\\mathrm{f}-f_n({\\mathbf{x}}_n^\\mathrm{f},\\u_n^\\mathrm{f})\\|\\leq{}\\epsilon$, for some small $\\epsilon$, i.e., if the reference satisfies the system dynamics approximately.\n\t\tNote that, if $\\epsilon=0$, then Assumption~\\ref{a:rec_ref} is satisfied and Assumption~\\ref{a:approx_feas} is not needed anymore. Finally, the infeasibility due to $\\epsilon\\neq0$ could be formally accounted for so as to satisfy Assumption~\\ref{a:approx_feas} by taking a robust MPC approach, see, e.g.,~\\cite{Mayne2005,Chisci2001}. \n\t\\end{Remark}\nFrom a practical standpoint, Assumption~\\ref{a:approx_feas} sets a rather mild requirement. In fact, it is not uncommon, for simplicity, to use references that are infeasible, or references that satisfy approximate system dynamics capturing only the most relevant dynamics of the system (keeping $\\epsilon$ small).\n\t\n\t{We are now ready to state the main result of the paper.}\n\n\t\\begin{Theorem}\\label{thm:iss}\n\t\tSuppose that Problem~\\eqref{eq:ocp} is feasible and Assumptions~\\ref{a:cont} and~\\ref{a:terminal} hold for the reference ${\\mathbf{y}}^{\\mathrm{r}}$ with costs $\\bar{q}_\\r$ and $\\bar{p}_{{\\tilde\\y}^\\r}$ and terminal set $\\mathcal{X}_{{\\mathbf{y}}^\\r}$. Suppose moreover that Problem~\\eqref{eq:nmpc} and Problem~\\eqref{eq:ideal_nmpc} are feasible at time $k$ with initial state $({\\mathbf{x}}_k,t_k)$, and that reference ${\\mathbf{y}}^\\mathrm{f}$, with terminal set $\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^\\mathrm{f}}$, satisfies Assumption~\\ref{a:approx_feas}. Then, system~\\eqref{eq:iss_dynamics} obtained from~\\eqref{eq:sys} in closed-loop with MPC formulation~\\eqref{eq:nmpc} is ISS.\n\t\\end{Theorem}\n\t\\begin{proof}\n\t\t\n\t\tWe prove the result using the value function $\\bar V^\\mathrm{i}({\\mathbf{x}}_k,t_k)$ of the rotated ideal Problem~\\eqref{eq:ideal_rot_nmpc} as an ISS-Lyapunov function candidate \\cite{jiang2001input}. 
From the prior analysis in Theorem~\\ref{thm:as_stab_0} we know that Assumption~\\ref{a:rec_ref} holds for ${\\mathbf{y}}^{\\mathrm{r}}$ since Problem~\\eqref{eq:ocp} is feasible, and that $\\bar V^\\mathrm{i}({\\mathbf{x}}_k,t_k)$ is a Lyapunov function {when the \\emph{ideal} terminal conditions} ${\\mathbf{y}}^\\mathrm{f}={\\mathbf{y}}^{\\mathrm{r}}$ are used. Hence, when {we apply the ideal control input $\\ub[k]^\\mathrm{i}$, i.e., use \\eqref{eq:iss_dynamics} to obtain the next state ${\\mathbf{x}}_{k+1}({\\mathbf{y}}^{\\mathrm{r}})=\\bar{f}_k({\\mathbf{x}}_k,{\\mathbf{y}}^{\\mathrm{r}})$}, we have the following relations\n\t\t\\begin{align*}\n\t\t\t\\alpha_1(\\| {\\mathbf{x}}_k-{\\mathbf{x}}^{\\mathrm{r}}_k \\|) \\leq \\bar V^\\mathrm{i}({\\mathbf{x}}_k,t_k) \\leq \\alpha_2(\\| {\\mathbf{x}}_k-{\\mathbf{x}}^{\\mathrm{r}}_k \\|),\\\\\n\t\t\t\\bar V^\\mathrm{i}({\\mathbf{x}}_{k+1}({\\mathbf{y}}^{\\mathrm{r}}),t_{k+1}) - \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_k) \\leq -\\alpha_3(\\| {\\mathbf{x}}_k-{\\mathbf{x}}^{\\mathrm{r}}_k \\|),\n\t\t\\end{align*}\n\t\twith $\\alpha_i\\in \\mathcal{K}_\\infty$, $i=1,2,3$. 
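To make the role of these two inequalities concrete, the ISS mechanism can be illustrated on a toy scalar closed loop (our own example, unrelated to the formulations above), where the mismatch between the terminal reference and the optimal one plays the role of a bounded external input:

```python
import numpy as np

# Toy scalar closed loop (illustrative sketch only, not the paper's system):
#   x+ = 0.5*x + 0.2*w,  with V(x) = |x| an ISS-Lyapunov function, since
#   V(x+) - V(x) <= -0.5*|x| + 0.2*|w|,
# the same structure as the decrease condition with input ||y^f - y^r||.

def step(x, w):
    return 0.5 * x + 0.2 * w

# Zero input (the ideal terminal reference): asymptotic stability.
x = 4.0
for _ in range(60):
    x = step(x, 0.0)
zero_input_ok = abs(x) < 1e-6

# Bounded input |w| <= 1 (an approximately feasible reference): the state
# converges to a ball of radius 0.2 / (1 - 0.5) = 0.4 around the origin.
rng = np.random.default_rng(0)
x = 4.0
tail = []
for k in range(200):
    x = step(x, rng.uniform(-1.0, 1.0))
    if k >= 100:
        tail.append(abs(x))
iss_bound_ok = max(tail) <= 0.4
```

With zero input the trajectory converges to the origin, while under a persistent bounded input it remains in a neighborhood whose size is dictated by the ISS gain, mirroring the practical versus ideal terminal conditions.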
\n\t\t\n\t\tWe are left with proving ISS, i.e., that $\\exists \\, \\sigma\\in\\mathcal{K}$ such that when the reference ${\\mathbf{y}}^\\mathrm{f}$ is treated as an external input, the next state is given by ${\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f})=\\bar f_k({\\mathbf{x}}_k,{\\mathbf{y}}^\\mathrm{f})$, the following holds\n\t\t\\begin{align}\\begin{split}\n\t\t\t\\label{eq:iss_decrease}\n\t\t\t\\bar V^\\mathrm{i}({{\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f})},t_{k+1})- \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_k)\\leq&\\sigma( \\| {\\mathbf{y}}^\\mathrm{f}-{\\mathbf{y}}^{\\mathrm{r}} \\| )\\\\&-\\alpha_3(\\| {\\mathbf{x}}_k-{\\x}^{\\mathrm{r}}_k \\|).\n\t\t\\end{split}\\end{align}\n\t\t\n\t\tIn order to bound $\\bar V^\\mathrm{i}({{\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f})},t_{k+1}) - \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_{k})$, we first derive an upper bound on $\\bar J_\\r^\\mathrm{i}$ which depends on $\\bar V^\\mathrm{i}$.\n\t\tTo that end, we observe that the rotated cost of the ideal trajectory $\\xb^\\mathrm{i}$, $\\ub^\\mathrm{i}$ satisfies \n\t\t\\begin{align*}\n\t\t\t\\bar J_\\r^\\mathrm{i}({\\mathbf{x}}_{k},t_k)&= \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_k)-\\bar p_{{\\tilde\\y}^\\mathrm{r}}(\\xb[k+N]^\\mathrm{i},t_{k+N})\\\\\n\t\t\t&+\\bar p_\\r(\\xb[k+N]^\\mathrm{i},t_{k+N}).\n\t\t\\end{align*}\n\t\tDefining\n\t\t\\begin{align*}\n\t\t\t\\phi({\\mathbf{y}}^\\mathrm{f})&:=\\bar p_{{\\mathbf{y}}^\\mathrm{f}}(\\xb[k+N]^\\mathrm{i},t_{k+N})-\\bar p_{{\\tilde\\y}^\\mathrm{r}}(\\xb[k+N]^\\mathrm{i},t_{k+N}),\n\t\t\\end{align*} \n\t\tthere exists a $\\sigma_1 \\in \\mathcal{K}$ such that $\\phi({\\mathbf{y}}^\\mathrm{f}) \\leq{} \\sigma_1(\\|{\\mathbf{y}}^\\mathrm{f}-{\\mathbf{y}}^\\r\\|)$\n\t\tsince, by~\\eqref{eq:terminal_cost}, $\\phi({\\mathbf{y}}^\\mathrm{f})$ is a continuous function of ${\\mathbf{y}}^\\mathrm{f}$ and $\\phi({\\mathbf{y}}^\\mathrm{r})=0$.\n\t\tThen, the following upper bound is obtained\n\t\t\\begin{align*}\n\t\t\t\\bar 
J_\\r^\\mathrm{i}({\\mathbf{x}}_{k},t_k)&\\leq \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_k) + \\sigma_1(\\| {\\mathbf{y}}^\\mathrm{f}-{\\mathbf{y}}^\\mathrm{r} \\| ).\n\t\t\\end{align*}\n\t\tUpon solving Problem~\\eqref{eq:nmpc}, we obtain $\\bar V({\\mathbf{x}}_{k},t_k)\\leq\\bar J_\\r^\\mathrm{i}({\\mathbf{x}}_{k},t_k)$. Starting from the optimal solution ${\\mathbf{x}}^\\star$, and $\\u^\\star$, we will construct an upper bound on the decrease condition. To that end, we first need to evaluate the cost of this trajectory, i.e., \n\t\t\\begin{align*}\\begin{split}\n\t\t\t\\bar J_{{\\tilde\\y}^\\mathrm{r}}^{\\star}({\\mathbf{x}}_{k},t_k)&=\\bar V({\\mathbf{x}}_{k},t_k)-\\bar p_\\r(\\xb[k+N]^\\star,t_{k+N})\\\\\n\t\t\t&+\\bar p_{{\\tilde\\y}^\\mathrm{r}}(\\xb[k+N]^\\star,t_{k+N}).\n\t\t\\end{split}\\end{align*}\n\t\tUsing the same reasoning as before, there exists $\\sigma_2 \\in \\mathcal{K}$ such that\n\t\t\\begin{align*}\n\t\t\t&\\bar p_{{\\tilde\\y}^\\mathrm{r}}(\\xb[k+N]^\\star,t_{k+N})-\\bar p_\\r(\\xb[k+N]^\\star,t_{k+N})\\\\\n\t\t\t&\\hspace{5em}\\leq \\sigma_2(\\| {\\mathbf{y}}^\\mathrm{f}_{k+N}-{\\mathbf{y}}^\\mathrm{r}_{k+N} \\| ).\n\t\t\\end{align*}\n\t\tThen, we obtain\n\t\t\\begin{align}\\label{eq:jbar}\n\t\t\t\\begin{split}\n\t\t\t\\bar J_{{\\tilde\\y}^\\mathrm{r}}^{\\star}({\\mathbf{x}}_{k},t_k) &\\leq \\bar V({\\mathbf{x}}_{k},t_k) + \\sigma_2(\\| {\\mathbf{y}}^\\mathrm{f}_{k+N}-{\\mathbf{y}}_{k+N}^{\\mathrm{r}} \\| ) \\\\\n\t\t\t&\\leq \\bar J_\\r^\\mathrm{i}({\\mathbf{x}}_{k},t_k) + \\sigma_2(\\| {\\mathbf{y}}^\\mathrm{f}_{k+N}-{\\mathbf{y}}_{k+N}^{\\mathrm{r}} \\| ) \\\\\n\t\t\t& \\leq \\bar V^\\mathrm{i}({\\mathbf{x}}_{k},t_k) + \\sigma(\\| {\\mathbf{y}}^\\mathrm{f}_{k+N}-{\\mathbf{y}}_{k+N}^{\\mathrm{r}} \\| ),\n\t\t\t\\end{split}\n\t\t\\end{align}\n\t\twhere we defined $\\sigma:=\\sigma_1+\\sigma_2$. 
\n\t\t\n\tProceeding similarly as in the proof of Proposition~\\ref{prop:stable}, we apply the control input $\\ub[k]^\\star$ from~\\eqref{eq:nmpc}, i.e., ${\\mathbf{y}}^\\mathrm{f}=\\r$, to obtain $${\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f})=\\bar{f}_k({\\mathbf{x}}_k,{\\mathbf{y}}^\\mathrm{f}).$$\n\t\tIn order to be able to apply this procedure, we first assume that the obtained initial guess is feasible for the ideal problem~\\eqref{eq:ideal_nmpc} and proceed as follows. \n\t\tBy Assumption~3, we use the terminal control law {$\\kappa_{{\\mathbf{y}}^\\r}^\\mathrm{f}({\\mathbf{x}},t)$}, to form a guess at the next time step and upper bound the \\emph{ideal} rotated value function. By optimality\n\t\t\\begin{align}\\label{eq:iss_value_decrease}\n\t\t\t\\bar{V}^\\mathrm{i}&( {\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f}),t_{k+1}) \\leq{}\\\\ &\\nonumber\\leq{}\\sum_{n=k+1}^{N-1}\\bar{q}_\\r(\\xb^\\star,\\ub^\\star,t_n)+\\bar q_\\r(\\xb[k+N]^\\star,\\kappa_{{\\mathbf{y}}^\\r},t_{k+N})\\\\\n\t\t\t&\\nonumber\\hspace{4em}+\\bar p_{{\\tilde\\y}^\\r}(\\xb[k+N+1]^{\\kappa,\\star},t_{k+N+1})\\\\\n\t\t\t&\\nonumber=\\bar{J}_{{\\tilde\\y}^\\r}^\\star({\\mathbf{x}}_k,t_k)-\\bar{q}_\\r(\\xb[k]^\\star,\\ub[k]^\\star,t_k)-\\bar{p}_{{\\tilde\\y}^\\r}(\\xb[k+N]^\\star,t_{k+N})\\\\\n\t\t\t&\\nonumber+\\bar{p}_{{\\tilde\\y}^\\r}(\\xb[k+N+1]^{\\star,\\kappa},t_{k+N+1})+\\bar{q}_{\\r}(\\xb[k+N]^\\star,\\kappa_{{\\mathbf{y}}^\\r},t_{k+N}),\n\t\t\\end{align}\n\t\twhere we used\n\t\t$$\\xb[k+N+1]^{\\star,\\kappa}\\hspace{-0.2em}:= \\hspace{-0.2em}f_{k+N}(\\xb[k+N],\\kappa_{{\\mathbf{y}}^\\r}),\\, \\kappa_{{\\mathbf{y}}^\\r}\\hspace{-0.2em}:=\\hspace{-0.2em}\\kappa_{{\\mathbf{y}}^\\r}(\\xb[k+N]^\\star,t_{k+N}),$$\n\t\t{and assumed that $\\xb[k+N+1]^{\\star,\\kappa}\\in\\mathcal{X}_{{\\mathbf{y}}^\\r}(t_{k+N+1})$}. 
Again, using Assumption~3 we can now upper bound the terms\n\t\t\\begin{align*} \\bar{p}_{{\\tilde\\y}^\\r}(\\xb[k+N+1]^{\\star,\\kappa},t_{k+N+1})-\\bar{p}_{{\\tilde\\y}^\\r}(\\xb[k+N]^\\star,t_{k+N})\\\\\n\t\t+\\bar{q}_\\r(\\xb[k+N]^\\star,\\kappa_{{\\mathbf{y}}^\\r},t_{k+N})\\leq{}0,\n\t\t\\end{align*}\n\t\tso that~\\eqref{eq:iss_value_decrease} can be written as\n\t\t\\begin{align}\n\t\t\t\\bar{V}^\\mathrm{i}({\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f}),t_{k+1}) &\\leq{}\\bar{J}_{{\\tilde\\y}^\\r}^\\star({\\mathbf{x}}_k,t_k)-\\bar{q}_\\r(\\xb[k]^\\star,\\ub[k]^\\star,t_{k}),\\\\\n\t\t\t\\bar{V}^\\mathrm{i}({\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f}),t_{k+1}) &\\leq{}\\bar{J}_{{\\tilde\\y}^\\r}^\\star({\\mathbf{x}}_k,t_k)-\\alpha_3(\\|{\\mathbf{x}}_k-{\\mathbf{x}}^\\r_k\\|),\\label{eq:iss_bound_decr}\n\t\t\\end{align}\n\t\twhich, combined with the bound~\\eqref{eq:jbar}, proves~\\eqref{eq:iss_decrease}.\n\t\t\n\t\t\n\t\tIn case ${\\xb[k+N+1]^{\\star,\\kappa}\\not\\in\\mathcal{X}^\\mathrm{f}_{{\\mathbf{y}}^\\mathrm{r}}(t_{k+N+1})}$, \n\t\twe resort to a relaxation of the terminal constraint with an exact penalty~\\cite{Scokaert1999a,Fletcher1987} in order to compute an upper bound to the cost. This relaxation has the property that the solution of the relaxed formulation coincides with that of the non-relaxed formulation whenever the latter exists. Then, by construction, the cost of an infeasible trajectory is higher than that of the feasible solution. \n\t\t\n\t\t{Finally, from Assumption~\\ref{a:approx_feas} we know that the value functions $\\bar{V}({\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f}),t_{k+1})$ and $\\bar V^\\mathrm{i}({\\mathbf{x}}_{k+1}({\\mathbf{y}}^\\mathrm{f}),t_{k+1})$ are feasible and bounded for time $k+1$.}\n\t\\end{proof}\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{iss_closed.eps} \n\t\\caption{Closed-loop simulation with initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. 
The gray trajectories show the infeasible reference $\\r=(\\r^{\\mathbf{x}},\\r^\\u)$, while the black trajectories show the optimal reference ${\\mathbf{y}}^{\\mathrm{r}}=({\\mathbf{x}}^{\\mathrm{r}},\\u^{\\mathrm{r}})$ obtained from Problem~\\eqref{eq:ocp}. The orange trajectories show the closed-loop behavior for the practical MPC Problem~\\eqref{eq:nmpc}, while the blue trajectories show the closed-loop behavior for the \\emph{ideal} MPC Problem~\\eqref{eq:ideal_nmpc}.}\n\t\\label{fig:mpatc_1_states}\n\\end{figure*}\n\n\n\n\n\tThis theorem proves that one can use an infeasible reference, at the price of not converging exactly to the (unknown) optimal trajectory from OCP~\\eqref{eq:ocp}, with an inaccuracy that depends on how inaccurate the terminal reference is. It is important to remark that, as proven in~\\cite{Zanon2018a,Faulwasser2018}, since the MPC formulation has a turnpike, the effect of the terminal condition on the closed-loop trajectory decreases as the prediction horizon increases. \n\t\\begin{Remark}\n\t\tWe note that it may be possible to prove similar results for general nonlinear systems if there exists a storage function such that strict dissipativity holds for the rotated cost functions~\\cite{muller2014necessity}. Future research will investigate ways to extend the results of Theorems~\\ref{thm:as_stab_0} and~\\ref{thm:iss} to general nonlinear systems.\n\t\\end{Remark}\n\t\n\t\n\t\\section{Simulations}\\label{sec:simulations}\n\tIn this section we implement the robotic example in~\\cite{Faulwasser2009} to illustrate the results of Theorems~\\ref{thm:as_stab_0} and \\ref{thm:iss}. 
We will use the quadratic stage and terminal costs in \\eqref{eq:stage_cost}-\\eqref{eq:terminal_cost}, i.e.,\n\t\\begin{gather*}\n\t\tq_\\r(\\xb,\\ub,t_n) := \\matr{c}{\\xb-\\rx_n\\\\\\ub-\\ru_n}^\\top{}W\\matr{c}{\\xb-\\rx_n\\\\\\ub-\\ru_n},\\\\\n\t\tp_\\r(\\xb,t_{n}) := (\\xb-\\rx_{n})^\\top{}P(\\xb-\\rx_{n}).\n\t\\end{gather*}\n\t\n\t\n\tWe consider the system presented in~\\cite{Faulwasser2009}, i.e., an actuated planar robot with two degrees of freedom with dynamics\n\t\\begin{align}\n\t\t\\matr{c}{\\dot{x}_1\\\\\\dot{x}_2} &= \\matr{c}{ x_2\\\\B^{-1}(x_1)(u-C(x_1,x_2)x_2-g(x_1))},\\label{eq:robot}\n\t\\end{align}\n\twhere $x_1=(q_1,q_2)$ are the joint angles, $x_2=(\\dot{q}_1,\\dot{q}_2)$ the joint velocities, and $B$, $C$, and $g$ are given by\n\t\\begin{subequations}\\label{eq:modelparams}\n\t\t\\begin{align*}\n\t\t\tB(x_1) &:= \\matr{cc}{200+50\\cos(q_2) & 23.5+25\\cos(q_2)\\\\ \n\t\t\t\t23.5+25\\cos(q_2) & 122.5},\\\\\n\t\t\tC(x_1,x_2) &:= 25\\sin(q_2)\\matr{cc}{\\dot{q}_1 & \\dot{q}_1+\\dot{q}_2\\\\\n\t\t\t\t-\\dot{q}_1 & 0},\\\\\n\t\t\tg(x_1) &:= \\matr{c}{784.8\\cos(q_1)+245.3\\cos(q_1+q_2)\\\\\n\t\t\t\t245.3\\cos(q_1+q_2)},\n\t\t\\end{align*}\n\t\\end{subequations}\n\tand with the following constraints on the state and control \n\t\\begin{align}\\label{eq:box_constr}\n\t\t\\|x_2\\|_\\infty\\leq{}3/2\\pi, && \\|u\\|_\\infty\\leq{}4000.\n\t\\end{align}\n\tBy transforming the control input as\n\t$$u = C(x_1,x_2)x_2+g(x_1)+B(x_1)v,$$\n\tsystem~\\eqref{eq:robot} can be rewritten as the linear system\n\t\\begin{align}\n\t\t\\matr{c}{\\dot{x}_1\\\\\\dot{x}_2} &= \\matr{c}{ x_2\\\\v},\\label{eq:robot_linear}\n\t\\end{align}\n\tsubject to the non-linear input constraint\n\t\\begin{equation}\n\t\t\\|C(x_1,x_2)x_2+g(x_1)+B(x_1)v\\|_\\infty\\leq{}4000.\n\t\\end{equation}\n\n\tSimilarly to~\\cite{Faulwasser2009}, we use\n\t\\begin{equation}\\label{eq:path}\n\t\tp(\\theta)=\\left (\\theta-\\frac{\\pi}{3},\\,5\\sin\\left (0.6 \\left (\\theta-\\frac{\\pi}{3}\\right 
)\\right )\\right ),\n\t\\end{equation}\n\twith $\\theta\\in[-5.3,0]$ as the desired path to be tracked, and define the timing law, with $t_0=0\\ \\mathrm{s}$, to be given by\n\t\\begin{align*}\n\t\t\\theta(t_0) = -5.3,\\, \\dot{\\theta}(t) = \\frac{v_\\mathrm{ref}(t) }{\\left \\| \\nabla_\\theta p(\\theta(t))\\right \\|_2},\\, v_\\mathrm{ref}(t) =\\left \\{ \n\t\t\\begin{array}{@{}ll@{}}\n\t\t\t1 & \\hspace{-0.5em}\\text{if } \\theta<0\\\\\n\t\t\t0 & \\hspace{-0.5em}\\text{if }\\theta\\geq{}0\n\t\t\\end{array}\n\t\t\\right . .\n\t\\end{align*}\n\tThis predefined path evolution implies that the norm of the reference trajectory for the joint velocities will be $1\\ \\mathrm{rad/s}$ for all $\\theta<0$ and zero at the end of the path. Hence, we use the following reference trajectories\n\t\\begin{align*}\n\t\t\\r^{\\mathbf{x}}(t) &= \\matr{cc}{p(\\theta(t)) &\\frac{\\partial{p}}{\\partial\\theta}\\dot{\\theta}(t)}^\\top\\hspace{-0.3em},\\ \n\t\t\\r^\\u(t) = \\matr{c}{ \\frac{\\partial^2 p}{\\partial\\theta^2}\\dot{\\theta}^2+\\frac{\\partial p}{\\partial \\theta}\\ddot{\\theta}}^\\top\\hspace{-0.3em},\n\t\\end{align*}\n\twhich have a discontinuity at $\\theta=0$.\n\t\n\tFor the stage cost we use $W = \\mathrm{blockdiag}(Q,R)$ with\n\t\\begin{align*}\n\t\tQ=\\mathrm{diag}(10,10,1,1),\\ \n\t\tR=\\mathrm{diag}(1,1).\n\t\\end{align*}\n\tThe terminal cost matrix is computed using an LQR controller with the cost defined by $Q$ and $R$ and is given by\n\t$$ P = \\matr{cc}{290.34\\cdot{}\\mathbf{1}^2 &105.42\\cdot{}\\mathbf{1}^2\\\\105.42\\cdot{}\\mathbf{1}^2&90.74\\cdot{}\\mathbf{1}^2}\\in\\mathbb{R}^{4\\times4},$$\n\twhere $\\mathbf{1}^2\\in\\mathbb{R}^{2\\times2}$ denotes the identity matrix. 
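The simulations below are implemented in Matlab with CasADi; purely as an illustration, the robot model and the linearizing input transformation above can be sketched in NumPy (function and variable names are ours):

```python
import numpy as np

# NumPy sketch of the planar-robot model above (the paper's simulations use
# Matlab/CasADi; this is only an illustrative translation).

def B(x1):
    q2 = x1[1]
    return np.array([[200 + 50*np.cos(q2), 23.5 + 25*np.cos(q2)],
                     [23.5 + 25*np.cos(q2), 122.5]])

def C(x1, x2):
    q2 = x1[1]
    dq1, dq2 = x2
    return 25*np.sin(q2)*np.array([[dq1, dq1 + dq2],
                                   [-dq1, 0.0]])

def g(x1):
    q1, q2 = x1
    return np.array([784.8*np.cos(q1) + 245.3*np.cos(q1 + q2),
                     245.3*np.cos(q1 + q2)])

def f(x1, x2, u):
    """Robot dynamics: returns (x1_dot, x2_dot)."""
    x2_dot = np.linalg.solve(B(x1), u - C(x1, x2) @ x2 - g(x1))
    return x2, x2_dot

# Feedback linearization: u = C x2 + g + B v turns the model into the
# double integrator x1_dot = x2, x2_dot = v, as stated above.
x1 = np.array([-0.7, 1.2])
x2 = np.array([0.3, -0.5])
v = np.array([1.0, -2.0])
u = C(x1, x2) @ x2 + g(x1) + B(x1) @ v
_, x2_dot = f(x1, x2, u)
linearized = np.allclose(x2_dot, v)
```

Evaluating the transformed input through the nonlinear dynamics recovers exactly the commanded acceleration `v`, confirming the linearization.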
Consequently, the corresponding terminal set is then given by\n\t\\begin{equation*}\n\t\t\\mathcal{X}^\\mathrm{f}_\\r(t_n) =\\{ {\\mathbf{x}}\\, |\\, ({\\mathbf{x}}-\\r^{\\mathbf{x}}_n)^\\top P({\\mathbf{x}}-\\r^{\\mathbf{x}}_n) \\leq{} 61.39\\}.\n\t\\end{equation*}\n\tFor detailed derivations of the terminal cost and terminal set, we refer the reader to the Appendix in~\\cite{Faulwasser2016,batkovic2020safe}.\n\t\n\tIn order to obtain the feasible reference ${\\mathbf{y}}^{\\mathrm{r}}=({\\mathbf{x}}^{\\mathrm{r}},\\u^{\\mathrm{r}})$, we approximate the infinite horizon Problem~\\eqref{eq:ocp} with a prediction horizon of $M=1200$ and sampling time $t_\\mathrm{s}=0.03\\ \\mathrm{s}$. For the closed-loop simulations, we use the control input obtained from formulations~\\eqref{eq:nmpc} and~\\eqref{eq:ideal_nmpc} with horizon $N=10$ and sampling time $t_\\mathrm{s}= 0.03\\ \\mathrm{s}$. Note that we used the linear system~\\eqref{eq:robot_linear} with its corresponding state and input constraints for all problem formulations. Furthermore, all simulations ran on a laptop computer (i5 2GHz, 16GB RAM) and were implemented in Matlab using the CasADi~\\cite{Andersson2019} software together with the IPOPT~\\cite{wachter2006implementation} solver.\n\t\n\tFigure \\ref{fig:mpatc_1_states} shows the closed-loop trajectories for the initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. The gray lines denote the infeasible reference $\\r=(\\r^{\\mathbf{x}},\\r^\\u)$ for each state while the black lines denote the optimal reference ${\\mathbf{y}}^{\\mathrm{r}}=({\\mathbf{x}}^{\\mathrm{r}},\\u^{\\mathrm{r}})$ from~\\eqref{eq:ocp}. The orange lines show the closed-loop evolution for the practical MPC Problem~\\eqref{eq:nmpc}, i.e., when the terminal conditions are based on the infeasible reference ${\\mathbf{y}}^\\mathrm{f}=\\r$. 
The blue lines instead show the closed-loop evolution for the \\emph{ideal} MPC Problem~\\eqref{eq:ideal_nmpc}, where the terminal conditions are based on the optimal reference from Problem~\\eqref{eq:ocp}, i.e., ${\\mathbf{y}}^\\mathrm{f}={\\mathbf{y}}^{\\mathrm{r}}$. The bottom right plot of Figure~\\ref{fig:mpatc_1_states} shows that the closed-loop errors for both the practical MPC (orange lines) and the \\emph{ideal} MPC (blue lines) converge towards the reference $\\r$ for times $t\\leq{}5\\ \\mathrm{s}$. Between $5\\ \\mathrm{s}\\leq{}t\\leq{}9\\ \\mathrm{s}$, we can see that the discontinuity of the reference trajectory $\\r$ affects how the two formulations behave. The \\emph{ideal} formulation manages to track the optimal reference ${\\mathbf{y}}^{\\mathrm{r}}$ (black trajectory), while the practical formulation instead tries to track the infeasible reference $\\r$ and therefore deviates compared to the \\emph{ideal} formulation. After the discontinuity, the rest of the reference trajectory is feasible and both formulations are asymptotically stable.\n\t\t\n\t\n\t\\section{Conclusions}\\label{sec:conclusions}\n\tThe use of infeasible references in MPC formulations is of great interest due to its convenience and simplicity. In this paper, we have discussed how such references affect the tracking performance of MPC formulations. We have proved that MPC formulations can yield asymptotic stability to an optimal trajectory when the terminal conditions are suitably chosen. In addition, we also proved that the stability results can be extended to sub-optimal terminal conditions, in which case the controlled system is stabilized around a neighborhood of the optimal trajectory. 
Future research will investigate ways to extend the stability results to general nonlinear systems.\n\t\t\n\t\n\t\\bibliographystyle{IEEEtran}\n\n\\section{Introduction}\nThe unexpected accelerated expansion of the universe predicted by a recent series of observations is interpreted by cosmologists as a smooth transition from a decelerated era in the recent past \\cite{Riess:1998cb,Perlmutter:1998np,Spergel:2003cb,Allen:2004cd,Riess:2004nr}. Cosmologists are divided about the cause of this transition: one group is of the opinion that the gravity theory should be modified, while the other favours introducing an exotic matter component. Due to two severe drawbacks \\cite{RevModPhys.61.1} of the cosmological constant as a DE candidate, dynamical DE models, namely the quintessence field (canonical scalar field), the phantom field \\cite{Caldwell:2003vq,Vikman:2004dc,Nojiri:2005sr,Saridakis:2009pj,Setare:2008mb} (ghost scalar field), or a unified model named quintom \\cite{Feng:2004ad,Guo:2004fq,Feng:2004ff}, are popular in the literature.\\par \n\tHowever, a new cosmological problem arises due to the dynamical nature of the DE: although vacuum energy and DM scale independently during the cosmic evolution, their energy densities are nearly equal today. To resolve this coincidence problem, cosmologists introduce an interaction between the DE and DM. As the choice of this interaction is purely phenomenological, various models appear to match the observational predictions. Although these models may resolve the above coincidence problem, a non-trivial, almost tuned sequence of cosmological eras \\cite{Amendola:2006qi} appears as a result.
Further, the interacting phantom DE models \\cite{Chen:2008ft,Nunes:2004wn,Clifton:2007tn,Xu:2012jf,Zhang:2005jj,Fadragas:2014mra,Gonzalez:2007ht} deal with some special coupling forms, alleviating the coincidence problem.\\par\n\tAlternatively, cosmologists have put forward a special type of interaction between DE and DM in which the DM particles have a variable mass depending on the scalar field representing the DE \\cite{Anderson:1997un}. Such an interacting model is physically better motivated, as scalar-field-dependent varying-mass models appear in string theory or scalar-tensor theory \\cite{PhysRevLett.64.123}. In cosmology, this type of interacting model considers the mass variation to be linear \\cite{Farrar:2003uw,Anderson:1997un,Hoffman:2003ru}, power-law \\cite{Zhang:2005rg} or exponential \\cite{Berger:2006db,PhysRevD.66.043528,PhysRevD.67.103523,PhysRevD.75.083506,Amendola:1999er,Comelli:2003cv,PhysRevD.69.063517} in the scalar field. Among these, the exponential dependence is the most suitable, as it not only solves the coincidence problem but also gives stable scaling behaviour.\\par\n\tIn the present work, a varying-mass interacting DE/DM model is considered in the background of a homogeneous and isotropic space-time. Due to the highly coupled nonlinear nature of the Einstein field equations, it is not possible to obtain any analytic solution. By using suitable dimensionless variables, the field equations are therefore converted to an autonomous system. The phase-space analysis of the non-hyperbolic equilibrium points has been done by center manifold theory (CMT) for various choices of the mass functions and the scalar field potentials. The paper is organized as follows: Section \\ref{BES} deals with the basic equations for the varying-mass interacting dark energy and dark matter cosmological model. The autonomous system is formed and the critical points are determined in Section \\ref{FASC}. 
Also, the stability analysis of all critical points for various choices of the parameters involved is shown in this section. Possible bifurcation scenarios \\cite{10.1140/epjc/s10052-019-6839-8, 1950261, 1812.01975} are examined by Poincar\\'{e} index theory, together with the global cosmological evolution, in Section \\ref{BAPGCE}. Finally, a brief discussion and important concluding remarks of the present work are presented in Section \\ref{conclusion}. \n\t\n\t\n\t\\section{Varying mass interacting dark energy and dark matter cosmological model : Basic Equations\\label{BES}}\nThroughout this paper, we assume a homogeneous and isotropic universe with the flat Friedmann-Lema\\^{i}tre-Robertson-Walker (FLRW) metric as follows:\n\\begin{equation}\nds^2=-dt^2+a^2(t)~d{\\Sigma}^2,\n\\end{equation}\nwhere `$t$' is the comoving time; $a(t)$ is the scale factor; $d{\\Sigma}^2$ is the 3D flat space line element.\\\\\nThe Friedmann equations in the background of the flat FLRW metric can be expressed as\n\\begin{eqnarray}\n3H^2&=&\\kappa^2(\\rho_\\phi +\\rho_{_{DM}}),\\label{equn2}\\\\\n2\\dot{H}&=&-\\kappa^2(\\rho_\\phi +p_\\phi +\\rho_{_{DM}}),\\label{equn3}\n\\end{eqnarray}\nwhere `$\\cdot $' denotes the derivative with respect to $t$; $\\kappa(=\\sqrt{8\\pi G})$ is the gravitational coupling; $\\{\\rho_\\phi,p_\\phi\\}$ are the energy density and thermodynamic pressure of the phantom scalar field $\\phi$ (considered as DE) having expressions \n\t\\begin{align}\n\t\\begin{split}\n\t\\rho_{\\phi}&=-\\frac{1}{2}\\dot{\\phi}^2+V(\\phi),\\\\\n\tp_\\phi&=-\\frac{1}{2}\\dot{\\phi}^2-V(\\phi),\\label{equn4}\n\t\\end{split}\n\t\\end{align}\n\tand $\\rho_{_{DM}}$ is the energy density for the dark matter in the form of dust having the expression \n\t\t\\begin{align}\n\t\\rho_{_{DM}}=M_{_{DM}}(\\phi)n_{_{DM}},\\label{equn5}\n\t\\end{align}\nwhere $n_{_{DM}}$, the number density \\cite{Leon:2009dt} for DM, satisfies the 
number conservation equation\n\t\\begin{align}\n\t\\dot{n}_{_{DM}}+3H n_{_{DM}}=0.\\label{equn6}\n\t\\end{align}\nNow, differentiating (\\ref{equn5}) and using (\\ref{equn6}), one has the DM conservation equation\n\\begin{align}\n\\dot{\\rho}_{_{DM}}+3H\\rho_{_{DM}}=\\frac{d}{d\\phi}\\left\\{\\ln M_{_{DM}}(\\phi)\\right\\}\\dot{\\phi}\\rho_{_{DM}},\\label{equn7}\n\\end{align}\nwhich shows that mass varying DM (in the form of dust) can be interpreted as a barotropic fluid with variable equation of state: $\\omega_{_{DM}}=\\frac{d}{d\\phi}\\left\\{\\ln M_{_{DM}}(\\phi)\\right\\}\\dot{\\phi}$. Now, due to the Bianchi identity, using the Einstein field equations (\\ref{equn2}) and (\\ref{equn3}), the conservation equation for DE takes the form\n\t\\begin{align}\n\\dot{\\rho}_{\\phi}+ 3H(\\rho_{\\phi}+p_{\\phi})=-\\frac{d}{d\\phi}\\left\\{\\ln M_{_{DM}}(\\phi)~\\right\\}\\dot{\\phi}\\rho_{_{DM}},\\label{equn8}\n\\end{align}\nor, using (\\ref{equn4}),\n\\begin{align}\n\\ddot{\\phi}+3H\\dot{\\phi}-\\frac{\\partial V}{\\partial \\phi}=\\frac{d}{d\\phi}\\left\\{\\ln M_{_{DM}}(\\phi)\\right\\}\\rho_{_{DM}}.\\label{equn9}\n\\end{align}\nThe combination of the conservation equations (\\ref{equn7}) and (\\ref{equn8}) for DM (dust) and phantom DE (scalar) shows that the interaction between these two matter components depends purely on the mass variation, i.e., $Q=\\frac{d}{d\\phi}\\left\\{\\ln M_{_{DM}}(\\phi)\\right\\}\\dot{\\phi}\\rho_{_{DM}}$. So, if $M_{_{DM}}$ increases along the evolution, i.e., $Q>0$, then energy is transferred from DE to DM, while the transfer is in the opposite direction if $M_{_{DM}}$ decreases. 
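As a quick numerical sanity check of the DM balance equation above, one can pick illustrative ingredients (our own choices: a constant Hubble rate, a sinusoidal scalar field and an exponential mass function) and verify the identity by finite differences:

```python
import numpy as np

# Numerical sanity check (illustrative choices, not from the paper) of the
# DM balance equation: with rho_DM = M(phi) n and n' + 3 H n = 0, one has
#   rho_DM' + 3 H rho_DM = (d ln M / d phi) * phi' * rho_DM.
# Example ingredients: H = 1, phi(t) = sin(t), n(t) = exp(-3 H t),
# and the exponential mass function M(phi) = exp(-mu * phi).
mu, Hc = 0.7, 1.0
phi = np.sin
n = lambda t: np.exp(-3.0 * Hc * t)
M = lambda p: np.exp(-mu * p)
rho = lambda t: M(phi(t)) * n(t)

t0, h = 0.3, 1e-6
rho_dot = (rho(t0 + h) - rho(t0 - h)) / (2 * h)   # central difference
lhs = rho_dot + 3 * Hc * rho(t0)
rhs = (-mu) * np.cos(t0) * rho(t0)                # (d ln M/d phi) phi' rho
balance_ok = abs(lhs - rhs) < 1e-6
```

Both sides agree to finite-difference accuracy, as expected from differentiating the product form of the DM energy density.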
Further, combining equations (\\ref{equn7})\tand (\\ref{equn8}) the total matter $\\rho_{tot}=\\rho_{DM}+\\rho_{DE}$ satisfies\n\\begin{align}\n\\dot{\\rho}_{tot}+3H(\\rho_{tot}+p_{tot})=0\n\\end{align}\nwith\n\\begin{align}\n\\omega_{tot}=\\frac{p_{\\phi}}{\\rho_{\\phi}+\\rho_{_{DM}}}=\\omega_{\\phi}\\Omega_{\\phi}.\n\\end{align}\t\nHere $\\omega_{\\phi}=\\frac{p_{\\phi}}{\\rho_{\\phi}}$ is the equation of state parameter for phantom field and $\\Omega_{\\phi}=\\frac{\\rho_{\\phi}}{\\frac{3H^2}{\\kappa^2}}$ is the density parameter for DE.\n\n\\section{Formation of Autonomous System : Critical point and stability analysis\\label{FASC}}\n \tIn the present work the dimensionless variables can be taken as \\cite{Leon:2009dt}\n \t\t\\begin{eqnarray}\n \t\tx:&=&\\frac{\\kappa\\dot{\\phi}}{\\sqrt{6}H}, \\\\\n \t\ty:&=&\\frac{\\kappa\\sqrt{V(\\phi)}}{\\sqrt{3}H}, \\\\\n \t\tz:&=&\\frac{\\sqrt{6}}{\\kappa \\phi}\n \t\t\\end{eqnarray}\n \t\ttogether with $N=\\ln a$ and the expression of the cosmological parameters can be written as\n \t\\begin{align}\n \t\t\\Omega_{\\phi}\\equiv \\frac{{\\kappa}^2\\rho_{\\phi}}{3H^2}&=-x^2+y^2,\\label{eq4}\n \t\t \\end{align}\n \t\t \\begin{equation}\n \t\t\\omega_{\\phi}= \\frac{-x^2-y^2}{-x^2+y^2}\n \t\t\\end{equation}\n \t\tand\n \t\t\\begin{equation}\n \t\t\\omega_{tot}=-x^2-y^2.\n \t\t\\end{equation}\n For the scalar field potential we consider two well\n studied cases in the literature, namely the power-law\n \t\\begin{equation}\n V(\\phi)=V_0 \\phi^{-\\lambda}\n \\end{equation}\n and the exponential dependence as\n \\begin{equation}\n V(\\phi)=V_1 e^{-\\kappa\\lambda \\phi}.\n \\end{equation}\n For the dark matter particle mass we also consider power-law \n \t \\begin{eqnarray}\n \t M_{_{DM}}(\\phi)&=& M_0 {\\phi}^{-\\mu}\n \t \\end{eqnarray} \n \t and the exponential dependence as\n \t \\begin{eqnarray}\n \t M_{_{DM}}(\\phi)&=& M_1 e^{-\\kappa\\mu \\phi},\n \t \\end{eqnarray}\n \t where $V_0,V_1,M_0,M_1 (>0)$ and $\\lambda,\\mu$ 
are constant parameters. Here we study the dynamical analysis of this cosmological system for five possible models. In Model $1$ (\\ref{M1}) we consider $V(\\phi)=V_0\\phi^{-\\lambda}, M_{_{DM}}(\\phi)=M_0\\phi^{-\\mu}$, in Model $2$ (\\ref{M2}) we consider $V(\\phi)=V_0\\phi^{-\\lambda}, M_{_{DM}}(\\phi)=M_1e^{-\\kappa\\mu\\phi}$, in Model $3$ (\\ref{M3}) we consider $V(\\phi)=V_1e^{-\\kappa\\lambda\\phi}, M_{_{DM}}(\\phi)=M_0\\phi^{-\\mu}$, in Model $4$ (\\ref{M4}) we consider $V(\\phi)=V_1 e ^{-\\kappa\\lambda\\phi}, M_{_{DM}}(\\phi)=M_1e^{-\\kappa\\mu\\phi}$ and lastly in Model $5$ (\\ref{M5}) we consider $V(\\phi)=V_2\\phi^{-\\lambda} e ^{-\\kappa\\lambda\\phi}, M_{_{DM}}(\\phi)=M_2\\phi^{-\\mu}e^{-\\kappa\\mu\\phi}$, where $V_2=V_0V_1$ and $M_2=M_0M_1$.\n \t \n \t \\subsection{Model 1: Power-law potential and power-law-dependent dark-matter particle mass \\label{M1}}\n \t\t In this consideration evolution equations in Section \\ref{BES} can be converted to an autonomous system as follows \n \t\t\t\\begin{eqnarray}\n \t\t\tx'&=&-3x+\\frac{3}{2}x(1-x^2-y^2)-\\frac{\\lambda y^2 z}{2}-\\frac{\\mu}{2}z(1+x^2-y^2),\\label{eq9} \\\\\n \t\t\ty'&=&\\frac{3}{2}y(1-x^2-y^2)-\\frac{\\lambda xyz}{2},\\label{eq10} \\\\\n \t\t\tz'&=&-xz^2,\\label{eq11} \n \t\t\t\\end{eqnarray}\n \t\t\t\twhere $\\lq$dash' over a variable denotes differentiation with respect to $ N=\\ln a $.\\bigbreak\n To obtain the stability analysis of the critical points corresponding to the autonomous system $(\\ref{eq9}-\\ref{eq11})$, we consider four possible choices of $\\mu$ and $\\lambda$ as\n$(i)$ $\\mu\\neq0$ and $\\lambda\\neq0$, $~~~(ii)$ $\\mu\\neq0$ and $\\lambda=0$, $(iii)$ $\\mu=0$ and $\\lambda\\neq0$, $(iv)$ $\\mu=0$ and $\\lambda=0$.\n\\subsubsection*{Case-(i)$~$\\underline{$\\mu\\neq0$ and $\\lambda\\neq0$}}\t\n In this case we have three real and physically meaningful critical points $A_1(0, 0, 0)$, $A_2(0, 1, 0)$ and $A_3(0, -1, 0)$. 
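These points can be cross-checked numerically; the following sketch (our own code, with example parameter values for lam and mu in Case (i)) verifies that they are indeed equilibria of the autonomous system above and previews the eigenvalues of the linearization at each point:

```python
import numpy as np

# Numerical sketch (not from the paper): check that A1, A2, A3 are
# equilibria of the autonomous system above and recover the eigenvalues
# reported in Table I. lam and mu are example values for Case (i).
lam, mu = 2.0, 1.0

def rhs(s):
    x, y, z = s
    dx = (-3*x + 1.5*x*(1 - x**2 - y**2)
          - 0.5*lam*y**2*z - 0.5*mu*z*(1 + x**2 - y**2))
    dy = 1.5*y*(1 - x**2 - y**2) - 0.5*lam*x*y*z
    dz = -x*z**2
    return np.array([dx, dy, dz])

def jacobian(s, eps=1e-6):
    """Central-difference Jacobian of rhs at s."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        J[:, j] = (rhs(s + e) - rhs(s - e)) / (2*eps)
    return J

points = {'A1': np.array([0.0, 0.0, 0.0]),
          'A2': np.array([0.0, 1.0, 0.0]),
          'A3': np.array([0.0, -1.0, 0.0])}
expected = {'A1': [-1.5, 0.0, 1.5],
            'A2': [-3.0, -3.0, 0.0],
            'A3': [-3.0, -3.0, 0.0]}
table_matches = all(
    np.allclose(rhs(p), 0.0) and
    np.allclose(np.sort(np.linalg.eigvals(jacobian(p)).real),
                sorted(expected[name]), atol=1e-5)
    for name, p in points.items())
```

Each point has a zero eigenvalue, which is why all three are non-hyperbolic and center manifold theory is needed in the analysis that follows.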
First, we determine the Jacobian matrix at these critical points corresponding to the autonomous system $(\\ref{eq9}-\\ref{eq11})$. Then we find the eigenvalues and corresponding eigenvectors of the Jacobian matrix, and from them the nature of the vector field near each critical point. If the critical point is hyperbolic we use the Hartman-Grobman theorem, and if it is non-hyperbolic we use Center Manifold Theory \\cite{Chakraborty:2020vkp}. For every critical point, the eigenvalues of the Jacobian matrix corresponding to the autonomous system $(\\ref{eq9}-\\ref{eq11})$, the values of the cosmological parameters and the nature of the critical point are shown in Table \\ref{TI}.\n \t\\begin{table}[h]\n \t\t\t\\caption{\\label{TI}Eigenvalues, cosmological parameters and nature of each critical point $(A_1-A_3)$.}\n \t\t\t\n \t\t\t\\begin{tabular}{|c|c c c|c|c|c| c|c|}\n \t\t\t\t\\hline\n \t\t\t\t\\hline\t\n \t\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical~ Points$\\\\$~~$\\end{tabular} ~~ & $ \\lambda_1 $ ~~ & $\\lambda_2$ ~~ & $\\lambda_3$& $~\\Omega_\\phi~$&$~\\omega_\\phi~$ &$~\\omega_{tot}~$& $~q~$ & $Nature~of~critical~points$ \\\\ \\hline\\hline\n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\\\n \t\t\t\t$A_1(0,0,0)$ & $-\\frac{3}{2}$ & $\\frac{3}{2}$ & $0$ & $0$ & Undetermined & $0$ &$\\frac{1}{2}$& Non-hyperbolic\\\\ \n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\\\\hline\n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\\\n \t\t\t\t$A_2(0,1,0)$ & $-3$ & $-3$ & $0$ & $1$ & $-1$ & $-1$&$-1$& Non-hyperbolic\\\\\n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~& ~\\\\ \\hline\n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\\\n \t\t\t\t$A_3(0,-1,0)$ & $-3$ & $-3$ & $0$ & $1$ & $-1$ & $-1$&$-1$& Non-hyperbolic\\\\\n \t\t\t\t~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\\ \\hline\n \t\t\t\\end{tabular}\n \t\t \\end{table}\n\n \t \\begin{center}\n \t $1.~Critical~Point~A_1$\n \t \\end{center}\n 
\t\t\tThe Jacobian matrix at the critical point $A_1$ can be put as\n \t\t\t\\begin{equation}\\renewcommand{\\arraystretch}{1.5}\t\n \t\t\tJ(A_1)=\\begin{bmatrix}\n \t\t\t-\\frac{3}{2} & 0 & -\\frac{\\mu}{2}\\\\\t\n \t\t\t~~0 & \\frac{3}{2} & ~~0\\\\\n \t\t\t~~0 & 0 & ~~0 \n \t\t\t\\end{bmatrix}.\\label{eq12}\t\n \t\t\t\\end{equation}\t\n \t\t\tThe eigenvalues of $J(A_1)$ are $-\\frac{3}{2}$, $\\frac{3}{2}$ and $0$. $[1, 0, 0]^T$ , $[0, 1, 0]^T$ and $[-\\frac{\\mu}{3}, 0, 1]^T$ are the eigenvectors corresponding to the eigenvalues $-\\frac{3}{2}$, $\\frac{3}{2}$ and 0 respectively. Since the critical point $A_1$ is non-hyperbolic in nature, so we use Center Manifold Theory for analyzing the stability of this critical point. From the entries of the Jacobian matrix we can see that there is a linear term of $z$ corresponding to the eqn.(\\ref{eq9}) of the autonomous system $(\\ref{eq9}-\\ref{eq11})$. But the eigen value $0$ of the Jacobian matrix (\\ref{eq12}) is corresponding to (\\ref{eq11}). So we have to introduce another coordinate system $(X,~Y,~Z)$ in terms of $(x,~y,~z)$. 
By using the eigenvectors of the Jacobian matrix (\ref{eq12}), we introduce the following coordinate system
			\begin{equation}\renewcommand{\arraystretch}{1.5}	
			\begin{bmatrix}
			X\\
			Y\\
			Z
			\end{bmatrix}\renewcommand{\arraystretch}{1.5}
			=\begin{bmatrix}
			1 & 0 & \frac{\mu}{3} \\	
			0 & 1 & 0 \\
			0 & 0 & 1
			\end{bmatrix}\renewcommand{\arraystretch}{1.5}
			\begin{bmatrix}
			x\\
			y\\
			z
			\end{bmatrix}\label{eq15}	
			\end{equation}		
			and in this new coordinate system the equations $(\ref{eq9}-\ref{eq11})$ are transformed into	
			\begin{equation}\renewcommand{\arraystretch}{1.5}	
			\begin{bmatrix}
			X'\\
			Y'\\
			Z'
			\end{bmatrix}\renewcommand{\arraystretch}{1.5}
			=\begin{bmatrix}
			-\frac{3}{2} & 0 & 0 \\	
			~~0 & \frac{3}{2} & 0 \\
			~~0 & 0 & 0
			\end{bmatrix}
			\begin{bmatrix}
			X\\
			Y\\
			Z
			\end{bmatrix}		
			+\renewcommand{\arraystretch}{1.5}	
			\begin{bmatrix}
			non\\
			linear\\
			terms
			\end{bmatrix}.	
			\end{equation}	
			By Center Manifold Theory there exists a continuously differentiable function $h:\mathbb{R}\rightarrow\mathbb{R}^2$ such that 
			\begin{align}\renewcommand{\arraystretch}{1.5}
			h(Z)=\begin{bmatrix}
			X \\
			Y \\
			\end{bmatrix}
			=\begin{bmatrix}
			a_1Z^2+a_2Z^3+a_3Z^4 +\mathcal{O}(Z^5)\\
			b_1Z^2+b_2Z^3+b_3Z^4 +\mathcal{O}(Z^5) 
			\end{bmatrix}.
			\end{align}
			Differentiating both sides with respect to $N$, we get 
			\begin{eqnarray}
			X'&=&(2a_1Z+3a_2Z^2+4a_3Z^3)Z',\\
			Y'&=&(2b_1Z+3b_2Z^2+4b_3Z^3)Z',
			\end{eqnarray}	
			where $a_i$, $b_i$ $\in\mathbb{R}$.
We are concerned only with the non-zero coefficients of the lowest-order terms, since Center Manifold Theory describes an arbitrarily small neighbourhood of the origin. Comparing the coefficients of each power of $Z$, we get 
			$a_1=0$, $a_2=\frac{2\mu^2}{27}$, $a_3=0$ and $b_i=0$ for all $i$.	
			So, the center manifold is given by 
			\begin{eqnarray}
			X&=&\frac{2\mu^2}{27}Z^3,\label{eq18}\\
			Y&=&0\label{eq19}
			\end{eqnarray} 
			and the flow on the center manifold is determined by
			\begin{eqnarray}
			Z'&=&\frac{\mu}{3}Z^3+\mathcal{O}(Z^5).\label{eq20}
			\end{eqnarray}
			\begin{figure}[h]
				\includegraphics[width=1\textwidth]{A11}
				\caption{Vector field near the origin for the critical point $A_1$ in the $XZ$-plane. L.H.S. figure is for $\mu>0$ and R.H.S. figure is for $\mu<0$. }
				\label{A_1}
			\end{figure}
 The flow on the center manifold depends on the sign of $\mu$. If $\mu>0$ then $Z'>0$ for $Z>0$ and $Z'<0$ for $Z<0$. Hence, for $\mu>0$ the origin is a saddle node, unstable in nature (FIG.\ref{A_1}(a)). Again, if $\mu<0$ then $Z'<0$ for $Z>0$ and $Z'>0$ for $Z<0$. So, for $\mu<0$ the origin is a stable node, i.e., stable in nature (FIG.\ref{A_1}(b)).	\bigbreak	
	\begin{center}
			$2.~Critical~Point~A_2$
		\end{center}
			The Jacobian matrix at $A_2$ can be put as
			\begin{equation}\renewcommand{\arraystretch}{1.5}	
			J(A_2)=\begin{bmatrix}
		-3 & ~~0 & -\frac{\lambda}{2}\\	
			~~0 & -3 & ~~0\\
			~~0 & ~~0 & ~~0 
			\end{bmatrix}.\label{eq21}	
			\end{equation}
			The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $[1, 0, 0]^T$ and $[0, 1, 0]^T$ are the eigenvectors corresponding to the eigenvalue $-3$, and $\left[-\frac{\lambda}{6}, 0, 1\right]^T$ is the eigenvector corresponding to the eigenvalue $0$.
Since the critical point $A_2$ is non-hyperbolic, we use Center Manifold Theory for analyzing its stability. We first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z$, such that the critical point $A_2$ moves to the origin. By using the eigenvectors of the Jacobian matrix $J(A_2)$, we introduce another set of new coordinates $(u,~v,~w)$ in terms of $(X,~Y,~Z)$ as
			\begin{equation}\renewcommand{\arraystretch}{1.5}	
			\begin{bmatrix}
			u\\
			v\\
			w
			\end{bmatrix}\renewcommand{\arraystretch}{1.5}
			=\begin{bmatrix}
			1 & 0 & \frac{\lambda}{6} \\	
			0 & 1 & 0 \\
			0 & 0 & 1
			\end{bmatrix}\renewcommand{\arraystretch}{1.5}
			\begin{bmatrix}
			X\\
			Y\\
			Z
			\end{bmatrix}\label{eq24}
			\end{equation}		
			and in these new coordinates the equations $(\ref{eq9}-\ref{eq11})$ are transformed into	
			\begin{equation}	\renewcommand{\arraystretch}{1.5}
			\begin{bmatrix}
			u'\\
			v'\\
			w'
			\end{bmatrix}
			=\begin{bmatrix}
			-3 & ~~0 & 0 \\	
			~~0 & -3 & 0 \\
			~~0 & ~~0 & 0
			\end{bmatrix}
			\begin{bmatrix}
			u\\
			v\\
			w
			\end{bmatrix}		
			+	
			\begin{bmatrix}
			non\\
			linear\\
			terms
			\end{bmatrix}.	
			\end{equation}	
			By Center Manifold Theory there exists a continuously differentiable function $h:\mathbb{R}\rightarrow\mathbb{R}^2$ such that 
			\begin{align}\renewcommand{\arraystretch}{1.5}
			h(w)=\begin{bmatrix}
			u \\
			v \\
			\end{bmatrix}
			=\begin{bmatrix}
			a_1w^2+a_2w^3 +\mathcal{O}(w^4)\\
			b_1w^2+b_2w^3 +\mathcal{O}(w^4) 
			\end{bmatrix}.
			\end{align}
			Differentiating both sides with respect to 
$N$, we get 
			\begin{eqnarray}
			u'&=&(2a_1w+3a_2w^2)w'+\mathcal{O}(w^3),\label{eq25}\\
			v'&=&(2b_1w+3b_2w^2)w'+\mathcal{O}(w^3),\label{eq26}
			\end{eqnarray}
			where $a_i$, $b_i$ $\in\mathbb{R}$. We are concerned only with the non-zero coefficients of the lowest-order terms, since Center Manifold Theory describes an arbitrarily small neighbourhood of the origin. Comparing the coefficients of each power of $w$ on both sides of (\ref{eq25}) and (\ref{eq26}), we get	
			$a_1=0$, $a_2=\frac{\lambda^2}{108}$ and $b_1=\frac{\lambda^2}{72}$, $b_2=0$. So, the center manifold can be written as 
			\begin{eqnarray}
			u&=&\frac{\lambda^2}{108}w^3,\label{eqn27}\\
			v&=&\frac{\lambda^2}{72}w^2\label{eqn28}
			\end{eqnarray} 
				\begin{figure}
				\includegraphics[width=1\textwidth]{A12}
				\caption{Vector field near the origin for the critical point $A_2$ in the $(uw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$. }
				\label{19}
			\end{figure}
			\begin{figure}
				\includegraphics[width=1\textwidth]{A22}
				\caption{Vector field near the origin for the critical point $A_2$ in the $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
				\label{20}
			\end{figure}
			and the flow on the center manifold is determined by
			\begin{eqnarray}
			w'&=&\frac{\lambda}{6}w^3+\mathcal{O}(w^4).\label{eq29}
			\end{eqnarray}
			The center manifold and the flow on it coincide with those determined in \cite{1111.6247}, and the stability of the vector field near the origin depends on the sign of $\lambda$. If $\lambda<0$ then $w'<0$ for $w>0$ and $w'>0$ for $w<0$. So, for $\lambda<0$ the origin is a stable node, i.e., stable in nature.
Again if $\\lambda>0$ then $w'>0$ for $w>0$ and $w'<0$ for $w<0$. So, for $\\lambda>0$ the origin is a saddle node, i.e., unstable in nature.\n \t\t\t The vector field near the origin are shown as in FIG.\\ref{19} and FIG.\\ref{20} separately for $(wu)$-plane and $(wv)$-plane respectively. As the new coordinate system $(u,~v,~w)$ is topologically equivalent to the old one, hence the origin in the new coordinate system, i.e., the critical point $A_2$ in the old coordinate system $(x,~y,~z)$ is a stable node for $\\lambda<0$ and a saddle node for $\\lambda>0$. \n\\begin{center}\n\t$3.~Critical~Point~A_3$\n\\end{center}\n \t\t\tThe Jacobian matrix at the critical point $A_3$ is same as (\\ref{eq21}). So, the eigenvalues and corresponding eigenvectors are also same as above. Now we transform the coordinates into a new system $x=X,~ y=Y-1,~ z=Z$, such that the critical point is at the origin. Then by using the matrix transformation (\\ref{eq24}) and after putting similar arguments as above, the expressions of the center manifold can be written as\n \t\t\t\\begin{eqnarray}\n \t\t\tu&=&-\\frac{\\lambda^2}{108}w^3\\label{eqn30},\\\\\n \t\t\tv&=&-\\frac{\\lambda^2}{72}w^2\\label{eqn31}\n \t\t\t\\end{eqnarray} \n \t\t\tand the flow on the center manifold is determined by\n \t\t\t\\begin{eqnarray}\n \t\t\tw'&=&\\frac{\\lambda}{6}w^3+\\mathcal{O}(w^4) .\\label{eqn32}\n \t\t\t\\end{eqnarray}\n \t\t\tHere also the stability of the vector field near the origin depends on the sign of $\\lambda$. Again as the expression of the flow on the center manifold is same as (\\ref{eq29}). So we can conclude as above that for $\\lambda<0$ the origin is a stable node,i.e., stable in nature and for $\\lambda>0$ the origin is unstable due to its saddle nature. The vector fields near the origin on $uw$-plane and $vw$-plane are shown as in FIG.\\ref{24} and FIG.\\ref{25} respectively. 
Hence, the critical point $A_3$ is a stable node for $\lambda<0$ and a saddle node for $\lambda>0$.\bigbreak
	\begin{figure}[h]
	\centering
	\includegraphics[width=1\textwidth]{A21}
	\caption{Vector field near the origin for the critical point $A_3$ in the $(uw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
	\label{24}
\end{figure}
\begin{figure}[h]
	\includegraphics[width=1\textwidth]{A31}
	\caption{Vector field near the origin for the critical point $A_3$ in the $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
	\label{25}
\end{figure}
\newpage
\subsubsection*{Case-(ii)$~$\underline{$\mu\neq0$ and $\lambda=0$}}
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eq33} \\
y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq34} \\
z'&=&-xz^2.\label{eq35} 
\end{eqnarray}
The above autonomous system also has three critical points, two of which are lines of critical points: $C_1(0, 0, 0)$, $C_2(0,1, z_c)$ and $C_3(0,-1,z_c)$, where $z_c$ is any real number. Corresponding to the critical points $C_1$, $C_2$ and $C_3$, the eigenvalues of the Jacobian matrix, the values of the cosmological parameters and the nature of the critical points are the same as for $A_1$, $A_2$ and $A_3$ respectively.			

\begin{center}
	$1.~Critical~Point~C_1$ 
\end{center}
The Jacobian matrix $J(C_1)$ for the autonomous system $(\ref{eq33}-\ref{eq35})$ at this critical point is the same as (\ref{eq12}), so all the eigenvalues and corresponding eigenvectors are the same as those of $J(A_1)$. Arguing as in the stability analysis of the critical point $A_1$, the center manifold can be expressed as $(\ref{eq18}-\ref{eq19})$ and the flow on the center manifold is determined by $(\ref{eq20})$.
So the stability of the vector field near the origin is the same as for the critical point $A_1$.
\begin{center}
	$2.~Critical~Point~C_2$ 
\end{center}
The Jacobian matrix at the critical point $C_2$ can be put as

\begin{equation}\renewcommand{\arraystretch}{1.5}	
J(C_2)=\begin{bmatrix}
-3 & ~~\mu z_c & 0\\	
~~0 & -3 & 0\\
-z_c^2 & ~~0 & 0 
\end{bmatrix}.\label{eq36}	
\end{equation}
The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $\left[1, 0, \frac{z_c^2}{3}\right]^T$ and $[0, 1, 0]^T$ are the eigenvectors corresponding to the eigenvalue $-3$, and $[0, 0, 1]^T$ is the eigenvector corresponding to the eigenvalue $0$. To apply CMT for a fixed $z_c$, we first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z+z_c$, so that the critical point is at the origin; arguing as above to determine the center manifold, it can be written as
\begin{eqnarray}
X&=&0,\label{eq37}\\
Y&=&0\label{eq38}
\end{eqnarray} 
and the flow on the center manifold is determined by
\begin{eqnarray}
Z'&=&0.\label{eq39}
\end{eqnarray}
So the center manifold lies on the $Z$-axis, and the flow on it cannot be determined from (\ref{eq39}). Now, if we project the vector field on a plane
parallel to the $XY$-plane, i.e., the plane $Z=\text{constant}$ (say), then the vector field is as shown in FIG.\ref{z_c}. So every point on the $Z$-axis is a stable star.

\begin{center}
	$3.~Critical~Point~C_3$ 
\end{center}
Arguing as above to obtain the center manifold and the flow on it, we get the same center manifold $(\ref{eq37}-\ref{eq38})$, with the flow again determined by (\ref{eq39}).
In this case we also get the same vector field as in FIG.\ref{z_c}.\bigbreak
From the above discussion we have seen that the lines of critical points $C_2$ and $C_3$ are non-hyperbolic in nature, but using CMT we could not determine the vector field near those critical points, nor the flow on the center manifold. So, in this case the last eqn.(\ref{eq35}) of the autonomous system $(\ref{eq33}-\ref{eq35})$ does not provide any special behaviour. For this reason, and since the expressions for $\Omega_\phi$, $\omega_\phi$ and $\omega_{tot}$ depend only on the $x$ and $y$ coordinates, we take only the first two equations of the autonomous system $(\ref{eq33}-\ref{eq35})$ and analyze the stability of the critical points lying on a plane parallel to the $xy$-plane, i.e., the plane $z=\text{constant}=c$ (say). In the $z=c$ plane the first two equations in $(\ref{eq33}-\ref{eq35})$ can be written as
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}c(1+x^2-y^2),\label{eqn40} \\
y'&=&\frac{3}{2}y(1-x^2-y^2).\label{eqn41} 
\end{eqnarray}
In this case we have five critical points corresponding to the autonomous system $(\ref{eqn40}-\ref{eqn41})$. The set of critical points, their existence and the values of the cosmological parameters are shown in Table \ref{T3}, and the eigenvalues and the nature of the critical points are shown in Table \ref{T4}.
\begin{figure}[h]
	\centering
	\includegraphics[width=0.6\textwidth]{stable_z_c}
	\caption{Vector field near every point on the $Z$-axis for the critical points $C_2$ and $C_3$.}
	\label{z_c}
\end{figure}
\begin{table}[!]
	\caption{\label{T3}Table shows the set of critical points, existence of critical points and the value of cosmological parameters corresponding to the autonomous system $(\ref{eqn40}-\ref{eqn41})$. 
}\n\t\\begin{tabular}{|c|c|c c |c c c c|}\n\t\t\\hline\n\t\t\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ CPs $\\\\$~$\\end{tabular} ~~ & $ Existence $ ~~ & ~~$x$ ~~&~~ $y$& $\\Omega_\\phi$&~~$\\omega_{\\phi}$~~ &$\\omega_{tot}$ & $~~~~q$ \\\\ \\hline\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ E_1 $\\\\$~$\\end{tabular} ~~ & $For~all~\\mu~and~c $&$0$&$~~~1$&$1$&$-1$&$-1$&$~~-1$ \\\\ \\hline \n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ E_2 $\\\\$~$\\end{tabular} ~~ & $For~all~\\mu ~and~c $ ~~ & ~~$0$ &$~~-1$~~&$1$& $~-1$ ~~& $-1$ & $~~-1$\\\\ \\hline \n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ E_3 $\\\\$~$\\end{tabular} ~~ & $For~all ~~\\mu ~and~c $ ~~ & ~~$-\\frac{\\mu c}{3}$ ~~&~~$~~0$~&$-\\frac{\\mu^2 c^2}{9}$ & $~~-1$ ~~&~~ $-\\frac{\\mu^2 c^2}{9}$ &~~ $\\frac{1}{2}\\left(1-\\frac{\\mu^2 c^2}{3}\\right)$\\\\ \\hline \n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ E_4 $\\\\$~$\\end{tabular} ~~ & \\begin{tabular}{@{}c@{}}$ For~c\\neq 0~ and~$\\\\$for~all~\\mu\\in \\left(-\\infty,-\\frac{3}{c}\\right]\\cup\\left[\\frac{3}{c},\\infty\\right)$ \\end{tabular} ~~ & ~~$-\\frac{3}{\\mu c}$ ~~&~~ $\\sqrt{1-\\frac{9}{\\mu^2 c^2}}$~~&~~$\\left(1-\\frac{18}{\\mu^2 c^2}\\right)$&$~~~~\\frac{\\mu^2 c^2}{18-\\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\\\ \\hline \n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ E_5 $\\\\$~$\\end{tabular} ~~ & \\begin{tabular}{@{}c@{}}$ For~c\\neq 0~ and~$\\\\$for~all~\\mu\\in \\left(-\\infty,-\\frac{3}{c}\\right]\\cup\\left[\\frac{3}{c},\\infty\\right)$ \\end{tabular} ~~ & ~~$-\\frac{3}{\\mu c}$ ~~&~~ $-\\sqrt{1-\\frac{9}{\\mu^2 c^2}}$~~&$\\left(1-\\frac{18}{\\mu^2 c^2}\\right)$ &~~$~~\\frac{\\mu^2 c^2}{18-\\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\\\ \\hline \t\n\t\\end{tabular}\t\n\\end{table}\n\n \\begin{table}[!]\n\t\\caption{\\label{T4}Table shows the eigenvalues $(\\lambda_1, \\lambda_2)$ of the Jacobian matrix corresponding to the critical points and the nature of all critical points $(E_1-E_5)$.}\t\t \n\t\\begin{tabular}{|c|c 
c|c|}
		\hline
		\hline	
		\begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & $~~\lambda_2$ & $ Nature~~ of~~ Critical~~ points$ \\ \hline\hline
		\begin{tabular}{@{}c@{}}$~~$\\ $E_1$ \\$~$\end{tabular} & $-3$ & $ -3 $&Hyperbolic\\ \hline
		\begin{tabular}{@{}c@{}}$~~$\\$ E_2 $\\$~$\end{tabular} & $-3$ & $ -3 $& Hyperbolic\\ \hline
		\begin{tabular}{@{}c@{}}$~~$\\$ E_3 $\\$~$\end{tabular} & $-\frac{3}{2}\left(1+\frac{\mu^2c^2}{9}\right)$ & $\frac{3}{2}\left(1-\frac{\mu^2c^2}{9}\right)$& \begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline
		\begin{tabular}{@{}c@{}}$~~$\\$ E_4 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline
		\begin{tabular}{@{}c@{}}$~~$\\$ E_5 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\end{tabular}\\ \hline
	\end{tabular}
\end{table}
\newpage
To avoid repeating the arguments used above for the stability of the critical points, we simply state the stability of these critical points, together with the reasoning behind it, in Table \ref{T_stability}. 
\n\\begin{table}[h]\n\t\\caption{\\label{T_stability}Table shows the stability and the reason behind the stability of the critical points $(E_1-E_5)$}\n\t\\begin{tabular}{|c|c|c|}\n\t\\hline\n\t\\hline\t\n\t\\begin{tabular}{@{}c@{}}$~~$\\\\$ CPs $\\\\$~$\\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\\\ \\hline\\hline\n\t$E_1,~E_2$& Both are stable star & \t\\begin{tabular}{@{}c@{}}$~~$\\\\As both eigenvalues $\\lambda_1$ and $\\lambda_2$ are negative and equal. By Hartman-\\\\Grobman theorem we can conclude that the critical points $E_1$ and \\\\$E_2$ both are stable star.\\\\$~~$\\end{tabular}\\\\ \\hline\n\t$E_3$&\\begin{tabular}{@{}c@{}}$~~$\\\\ Stable node for $\\mu c=-3$,\\\\saddle node for $\\mu c=3$ ,\\\\ stable node for $\\mu c>3~or,~<-3$,\\\\saddle node for $-3<\\mu c<3$ \\\\$~~$\\end{tabular}&\\begin{tabular}{@{}c@{}}$~~$\\\\For $\\mu c=-3:$\\\\After shifting the this critical point into the origin by taking the\\\\ transformation $x= X-\\frac{\\mu c}{3}$, $y= Y$ and by using CMT, the CM \\\\is given by $X=Y^2+\\mathcal{O}(Y^4) $ and the flow on the CM is determined \\\\by $ Y'=-\\frac{3}{2}Y^3+\\mathcal{O}(Y^5)$. $Y'<0$ while $Y>0$ and for $Y<0$, $Y'>0$.\\\\ So, the critical point $E_3$ is a stable node (FIG.\\ref{mu_c_3}(a)).\\\\$~~$\\\\ For $\\mu c=3:$\\\\ The center manifold is given by $X=-Y^2+\\mathcal{O}(Y^4) $ and the flow on\\\\ the center manifold is determined by $ Y'=\\frac{3}{2}Y^3+\\mathcal{O}(Y^5)$. $Y'<0$\\\\ while $Y<0$ and for $Y>0$, $Y'>0$. So, the critical point $E_3$ is a\\\\ saddle node (FIG.\\ref{mu_c_3}(b)).\\\\$~~$\\\\For $\\mu c>3~or,~\\mu c<-3$:\\\\ Both of the eigenvalues $\\lambda_1$ and $\\lambda_2$ are negative and unequal. So by\\\\ Hartman-Grobman theorem the critical point $E_3$ is a stable node.\\\\ $~~$\\\\ For $-3<\\mu c<3:$\\\\$\\lambda_1$ is negative and $\\lambda_2$ is positive. 
So by the Hartman-Grobman theorem\\ the critical point $E_3$ is a saddle point, hence unstable.\\$~~$\end{tabular} \\ \hline
	$E_4,~E_5$ &\begin{tabular}{@{}c@{}}$~~$\\Both are stable nodes for $\mu c=-3$,\\ saddle nodes for $\mu c=3$,\\ saddle nodes for $\mu c>3$ or $\mu c<-3$\\$~~$\end{tabular}&	\begin{tabular}{@{}c@{}}$~~$\\For $\mu c=3$ and $\mu c=-3$:\\ The expressions of the center manifold and the flow on the center\\ manifold are the same as the expressions for the $\mu c=3$ and $\mu c=-3$\\ cases respectively for $E_3$.\\ $~~$\\ For $\mu c>3$ or $\mu c<-3$:\\ One of the eigenvalues is positive and the other is negative.\\ Hence, by the Hartman-Grobman theorem we can conclude that the critical\\ points $E_4$ and $E_5$ are saddle points, unstable in nature.\\$~~$ \end{tabular}\\ \hline
\end{tabular}
\end{table}
Note that $\mu c\geq3$ or $\mu c\leq-3$ is the domain of existence of the critical points $E_4$ and $E_5$. For this reason we do not analyze the stability of the critical points $E_4$ and $E_5$ for $\mu c\in (-3,3)$.
\begin{figure}[!]
	\centering
	\includegraphics[width=1\textwidth]{mu_c_3}
	\caption{Vector field near the origin for the critical point $E_3$. L.H.S. for $\mu c=3$ and R.H.S. for $\mu c=-3$.}
	\label{mu_c_3}
\end{figure}

\newpage
\subsubsection*{Case-(iii)$~$\underline{$\mu=0$ and $\lambda\neq 0$}}	
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2},\label{eq40} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq41} \\
z'&=&-xz^2. \label{eq42} 
\end{eqnarray}
Corresponding to the above autonomous system we have the line of critical points $P_1(0,0,z_c)$, where $z_c$ is any real number, together with the critical points $P_2(0,1,0)$ and $P_3(0,-1,0)$.
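As an illustrative check (assuming sympy, as in the sketches above), one can confirm that $P_1$ is an equilibrium for every value of $z_c$, i.e., a whole line of critical points, while $P_2$ and $P_3$ also annihilate the right-hand sides of $(\ref{eq40}-\ref{eq42})$:

```python
import sympy as sp

x, y, z, lam, zc = sp.symbols('x y z lambda z_c')

# Right-hand sides of the Case-(iii) system (40)-(42) (mu = 0)
xp = -3*x + sp.Rational(3, 2)*x*(1 - x**2 - y**2) - lam*y**2*z/2
yp = sp.Rational(3, 2)*y*(1 - x**2 - y**2) - lam*x*y*z/2
zp = -x*z**2

# P1 = (0, 0, z_c) holds for arbitrary symbolic z_c: a line of critical points
for pt in [(0, 0, zc), (0, 1, 0), (0, -1, 0)]:
    rhs = [sp.simplify(f.subs(dict(zip((x, y, z), pt)))) for f in (xp, yp, zp)]
    assert rhs == [0, 0, 0]
```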
The values of the cosmological parameters, the eigenvalues of the Jacobian matrix at those critical points corresponding to the autonomous system $(\ref{eq40}-\ref{eq42})$ and the nature of the critical points $P_1$, $P_2$ and $P_3$ are the same as for the critical points $A_1$, $A_2$ and $A_3$ respectively, as shown in Table \ref{TI}. \newpage
\begin{center}
	$1.~Critical~Point~P_1$
\end{center}
The Jacobian matrix at the critical point $P_1$ can be put as
\begin{equation}
\renewcommand{\arraystretch}{1.5}	
J(P_1)=\begin{bmatrix}
-\frac{3}{2} & 0 & 0\\	
~~0 & \frac{3}{2}& 0\\
-z_c^2 & 0 & 0 
\end{bmatrix}.\label{eq45}	
\end{equation}
The eigenvalues of the above matrix are $-\frac{3}{2}$, $\frac{3}{2}$ and $0$, and $\left[1, 0, \frac{2}{3}z_c^2\right]^T$, $[0, 1, 0]^T$ and $[0, 0, 1]^T$ are the corresponding eigenvectors respectively.
For a fixed $z_c$, we first shift the critical point $P_1$ to the origin by the coordinate transformation $x=X$, $y=Y$ and $z=Z+z_c$; arguing as above for non-hyperbolic critical points, the center manifold can be written as $(\ref{eq37}-\ref{eq38})$ and the flow on the center manifold is determined by (\ref{eq39}). As in the discussion of stability for the critical point $C_2$, the center manifold for the critical point $P_1$ also lies on the $Z$-axis but the flow on it cannot be determined. Now, if we project the vector field on a plane
parallel to the $XY$-plane, i.e., the plane $Z=\text{constant}$ (say), then the vector field is as shown in FIG.\ref{saddle_z_c}.
So every point on the $Z$-axis is a saddle node.\bigbreak
\begin{figure}[h]
	\centering
	\includegraphics[width=0.7\textwidth]{saddle_z_c}
	\caption{Vector field near every point on the $Z$-axis for the critical point $P_1$.}
	\label{saddle_z_c}
\end{figure}
Again, if we want to obtain the stability of the critical points in a plane parallel to the $xy$-plane, i.e., $z=\text{constant}=c$ (say), we take only the first two equations (\ref{eq40}) and (\ref{eq41}) of the autonomous system $(\ref{eq40}-\ref{eq42})$ and replace $z$ by $c$ in them. We can then see that there exist three real and physically meaningful hyperbolic critical points $B_1(0,0)$, $B_2\left(-\frac{\lambda c}{6}, \sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$ and $B_3\left(-\frac{\lambda c}{6}, -\sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$. Computing the eigenvalues of the Jacobian matrix at those critical points and using the Hartman-Grobman theorem, we state the stability and the values of the cosmological parameters at these critical points in Table \ref{TB}.\bigbreak
For the critical points $P_2$ and $P_3$ we have the same Jacobian matrix (\ref{eq21}); taking the same (shifting and matrix) transformations and using arguments similar to those for $A_2$ and $A_3$ respectively, we conclude that the stability of $P_2$ and $P_3$ is the same as that of $A_2$ and $A_3$ respectively.
 \begin{table}[!]
	\caption{\label{TB}Table shows the eigenvalues $(\lambda_1, \lambda_2)$ of the Jacobian matrix, stability and value of cosmological parameters corresponding to the critical points and the nature of all critical points $(B_1-B_3)$.}		 
	\begin{tabular}{|c|c c|c|c c c c|}
		\hline
		\hline	
		\begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & 
$~~\lambda_2$ & $ Stability$&$~\Omega_\phi~$& ~~$\omega_{\phi}$~~ &$\omega_{tot}$ & ~~$q$ \\ \hline\hline
			\begin{tabular}{@{}c@{}}$~~$\\$B_1$\\$~$\end{tabular} &$-\frac{3}{2}$ & $\frac{3}{2}$&Saddle node&$0$&Undetermined&$0$& $\frac{1}{2}$\\ \hline
			\begin{tabular}{@{}c@{}}$~~$\\$B_2,~B_3$\\$~$\end{tabular}&$-3\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-3\left(1+\frac{\lambda^2 c^2}{36}\right)$&	\begin{tabular}{@{}c@{}}$~~$\\Stable star for $\lambda c=0$\\and\\stable node for $\lambda c\neq 0$\\$~~$\end{tabular}& $1$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{12}\right)$\\ \hline
		\end{tabular}
\end{table}
\newpage
\subsubsection*{Case-(iv)$~$\underline{$\mu=0$ and $\lambda=0$}}
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into

\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2),\label{eq49} \\
y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq50} \\
z'&=&-xz^2.\label{eq51} 
\end{eqnarray}
Corresponding to the above autonomous system we have three lines of critical points $S_1(0,0,z_c)$, $S_2(0,1,z_c)$ and $S_3(0,-1,z_c)$, where $z_c$ is any real number; these are analogous to $C_1$, $C_2$ and $C_3$. In this case also all the critical points are non-hyperbolic in nature. Taking the shifting transformations (for $S_1~(x=X,y=Y,z=Z+z_c)$, for $S_2~(x=X,y=Y+1,z=Z+z_c)$ and for $S_3~(x=X,y=Y-1,z=Z+z_c)$) as above, we conclude that for all the critical points the center manifold is given by $(\ref{eq37}-\ref{eq38})$ and the flow on the center manifold is determined by (\ref{eq39}), i.e., for all the critical points the center manifold lies on the $Z$-axis.
Again, if we plot the vector field in a $Z=\text{constant}$ plane, we can see that for the critical point $S_1$ every point on the $Z$-axis is a saddle node (same as FIG.\ref{saddle_z_c}) and for $S_2$ and $S_3$ every point on the $Z$-axis is a stable star (same as FIG.\ref{z_c}). 

\subsection{Model 2: Power-law potential and
	exponentially-dependent dark-matter particle mass \label{M2}}
	 With this choice the evolution equations in Section \ref{BES} can be recast as the autonomous system
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2}-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq54} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq55} \\
z'&=&-xz^2.\label{eq56} 
\end{eqnarray}
We have five critical points $L_1$, $L_2$, $L_3$, $L_4$ and $L_5$ corresponding to the above autonomous system. The set of critical points, their existence and the values of the cosmological parameters at those critical points are shown in Table \ref{TPLE}, and the eigenvalues of the Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at those critical points and the nature of the critical points are shown in Table \ref{TNE}.\par

Here we consider the stability of the critical points only for $\mu\neq 0$ and $\lambda\neq 0$, since the other possible cases yield results similar to those obtained for Model $1$.
	\begin{table}[h]
	\caption{\label{TPLE}Table shows the set of critical points and their existence, and the values of the cosmological parameters corresponding to those critical points. 
}\n\t\\begin{tabular}{|c|c|c c c|c|c|c| c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\\Omega_\\phi~$&$~\\omega_\\phi~$ &$~\\omega_{tot}~$& $~q~$ \\\\ \\hline\\hline\n\t\\begin{tabular}{@{}c@{}}$~~$\\\\\t$L_1$\\\\$~~$\\end{tabular}& For all $\\mu$ and $\\lambda$&0&1&0 & 1 & $-1$ & $-1$&$-1$\\\\ \\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\\t$L_2$\\\\$~~$\\end{tabular}& For all $\\mu$ and $\\lambda$&0&$-1$&0 & 1 & $-1$ & $-1$&$-1$\\\\ \\hline\n\t\t$L_3$ & \t\\begin{tabular}{@{}c@{}}$~~$\\\\For all \\\\$\\mu\\in\\left(-\\infty,-\\sqrt{\\frac{3}{2}}\\right]\\cup\\left[\\sqrt{\\frac{3}{2}},\\infty\\right)$\\\\and all $\\lambda$\\\\$~~$\\end{tabular}&$-\\frac{1}{\\mu}\\sqrt{\\frac{3}{2}}$&$\\sqrt{1-\\frac{3}{2\\mu^2}}$&0&$1-\\frac{3}{\\mu^2}$&$\\frac{\\mu^2}{3-\\mu^2}$&$-1$&$-1$\\\\ \\hline\n\t\t$L_4$ & \t\\begin{tabular}{@{}c@{}}$~~$\\\\For all \\\\$\\mu\\in\\left(-\\infty,-\\sqrt{\\frac{3}{2}}\\right]\\cup\\left[\\sqrt{\\frac{3}{2}},\\infty\\right)$\\\\and all $\\lambda$\\\\$~~$\\end{tabular}&$-\\frac{1}{\\mu}\\sqrt{\\frac{3}{2}}$&$-\\sqrt{1-\\frac{3}{2\\mu^2}}$&0&$1-\\frac{3}{\\mu^2}$&$\\frac{\\mu^2}{3-\\mu^2}$&$-1$&$-1$\\\\ \\hline\n\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\\t$L_5$\\\\$~~$\\end{tabular}& For all $\\mu$ and $\\lambda$&$-\\sqrt{\\frac{2}{3}}\\mu$&$0$&$0$&$-\\frac{2}{3}\\mu^2$ & $1$&$-\\frac{2}{3}\\mu^2$&$\\frac{1}{2}\\left(1-2\\mu^2\\right)$ \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\\begin{table}[h]\n\t\\caption{\\label{TNE}The eigenvalues $(\\lambda_1,\\lambda_2,\\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\\ref{eq54}-\\ref{eq56})$ at those critical points $(L_1-L_5)$ and the nature of the critical points}\n\t\\begin{tabular}{|c|c c c|c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} &$\\lambda_1$&$\\lambda_2$&$\\lambda_3$&$Nature~ of~ critical~ Points$ \\\\ 
\hline\hline\n\t\t\begin{tabular}{@{}c@{}}$~~$\\$L_1$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline\n\t\t\t\begin{tabular}{@{}c@{}}$~~$\\$L_2$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline\n\t\t\begin{tabular}{@{}c@{}}$~~$\\$L_3$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline\n\t\t\t\begin{tabular}{@{}c@{}}$~~$\\$L_4$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline\n\t\t\t\begin{tabular}{@{}c@{}}$~~$\\$L_5$\\$~~$\end{tabular}&$-\frac{3}{2}$&$\frac{3}{2}$&$0$&Non-hyperbolic \\ \hline \n\t\end{tabular}\n\end{table}\n\n\begin{center}\n\t$1.~Critical~Point~L_1$\n\end{center}\nThe Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_1$ can be written as\n\begin{equation}\n\renewcommand{\arraystretch}{1.5}\nJ(L_1)=\begin{bmatrix}\n-3&\sqrt{6}\mu&-\frac{\lambda}{2}\\\n~~0&-3&~~0\\\n~~0 & ~~0& ~~0\n\end{bmatrix}.\n\end{equation}\nThe eigenvalues of $J(L_1)$ are $-3$, $-3$ and $0$, with eigenvectors $[1,0,0]^T$ and $\left[-\frac{\lambda}{6}, 0,1\right]^T$ corresponding to the eigenvalues $-3$ and $0$ respectively. The algebraic multiplicity of the eigenvalue $-3$ is $2$, but the dimension of the corresponding eigenspace is $1$; since the algebraic and geometric multiplicities differ, the Jacobian matrix $J(L_1)$ is not diagonalizable. In determining the center manifold for this critical point, the only difficulty is the presence of the nonzero entry at the top of the third column of the Jacobian matrix. 
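As a quick sanity check (ours, not part of the paper's derivation), the multiplicity claim for $J(L_1)$ can be verified symbolically for illustrative nonzero values of $\mu$ and $\lambda$, e.g. with SymPy:

```python
# Illustrative check that J(L_1) is not diagonalizable for nonzero mu:
# the eigenvalue -3 has algebraic multiplicity 2 but only a
# one-dimensional eigenspace (geometric multiplicity 1).
import sympy as sp

mu, lam = sp.Integer(1), sp.Integer(1)  # illustrative nonzero parameter values
J_L1 = sp.Matrix([
    [-3, sp.sqrt(6) * mu, -lam / 2],
    [0, -3, 0],
    [0, 0, 0],
])

# eigenvects() yields (eigenvalue, algebraic multiplicity, eigenspace basis)
mults = {val: (alg, len(basis)) for val, alg, basis in J_L1.eigenvects()}
assert mults[-3] == (2, 1)          # algebraic 2, geometric 1
assert mults[0] == (1, 1)
assert not J_L1.is_diagonalizable()
```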
First we apply the coordinate transformation $x=X,y=Y+1,z=Z$, which shifts the critical point $L_1$ to the origin. Next we introduce another coordinate system that removes the entry at the top of the third column. Since there are only two linearly independent eigenvectors, we need one more column vector, linearly independent of them, to construct the new coordinate system; we choose $[0,1,0]^T$, which is linearly independent of the eigenvectors of $J(L_1)$. The new coordinates $(u,v,w)$ can be written in terms of $(X,Y,Z)$ as (\ref{eq24})\n\tand in this new coordinate system the equations $(\ref{eq54}-\ref{eq56})$ are transformed into\t\n\begin{equation}\renewcommand{\arraystretch}{1.5}\t\n\begin{bmatrix}\nu'\\\nv'\\\nw'\n\end{bmatrix}\renewcommand{\arraystretch}{1.5}\n=\begin{bmatrix}\n-3&\sqrt{6}\mu&0\\\n~~0&-3&~~0\\\n~~0 & ~~0& ~~0\n\end{bmatrix}\n\begin{bmatrix}\nu\\\nv\\\nw\n\end{bmatrix}\t\t\n+\renewcommand{\arraystretch}{1.5}\t\n\begin{bmatrix}\nnon\\\nlinear\\\nterms\n\end{bmatrix}.\t\n\end{equation}\t\nBy arguments similar to those used in the stability analysis of the critical point $A_2$, the center manifold can be written as (\ref{eqn27}-\ref{eqn28})\nand the flow on the center manifold is determined by (\ref{eq29}).\nSince the expressions for the center manifold and the flow are the same as for the critical point $A_2$, the stability of $L_1$ is the same as that of $A_2$.\n\begin{center}\n\t$2.~Critical~Point~L_2$\n\end{center}\nAfter shifting the critical point to the origin (via the transformation $(x=X,y=Y-1,z=Z)$ and the matrix transformation (\ref{eq24})) and repeating the arguments given for the analysis of $L_1$, the center manifold can be expressed as $(\ref{eqn30}-\ref{eqn31})$ and the flow on the center manifold is determined by (\ref{eqn32}). 
Hence the stability of the critical point $L_2$ is the same as that of $A_3$.\n\n\n\begin{center}\n\t$3.~Critical~Point~L_3$\n\end{center}\nThe Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_3$ can be written as\n\begin{equation}\n\t\renewcommand{\arraystretch}{3}\n\tJ(L_3)=\begin{bmatrix}\n\t\t-\frac{9}{2\mu^2}&\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\\n\t\t\frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\\n\t\t~~0 & ~~0& ~~0\n\t\end{bmatrix}.\n\end{equation}\nThe eigenvalues of the Jacobian matrix $J(L_3)$ are shown in Table \ref{TNE}. From the existence condition of the critical point $L_3$ we conclude that the eigenvalues of $J(L_3)$ are always real. Since the critical point $L_3$ exists for $\mu\leq -\sqrt{\frac{3}{2}}$ or $\mu\geq \sqrt{\frac{3}{2}}$, our aim is to determine the stability for at least one representative choice of $\mu$ in each of these regions; to this end we consider four choices of $\mu$. We first determine the stability of this critical point at $\mu=\pm\sqrt{\frac{3}{2}}$; then, for $\mu< -\sqrt{\frac{3}{2}}$, we determine the stability of $L_3$ at $\mu=-\sqrt{3}$, and for $\mu>\sqrt{\frac{3}{2}}$, at $\mu=\sqrt{3}$.\par\n\nFor $\mu=\pm\sqrt{\frac{3}{2}}$, the Jacobian matrix $J(L_3)$ reduces to\n$$\n\begin{bmatrix}\n-3&0&0\\~~0&0&0\\~~0&0&0\n\end{bmatrix}\n$$\nand, as the critical point $L_3$ becomes $(\mp 1,0,0)$, we first take the transformation $x=X\mp 1, y= Y, z=Z$ so that $L_3$ moves to the origin. 
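As an illustrative numerical cross-check (ours, not the authors'), the closed-form eigenvalues $-\frac{3}{2}\left(1\pm\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$ and $0$ listed in Table \ref{TNE} can be compared with a direct computation from $J(L_3)$, here at the representative values $\mu=\sqrt{3}$, $\lambda=1$:

```python
# Build J(L_3) numerically and compare its spectrum with Table TNE.
import numpy as np

mu, lam = np.sqrt(3.0), 1.0            # mu inside the existence range |mu| >= sqrt(3/2)
f = 1.0 - 3.0 / (2.0 * mu**2)          # common factor 1 - 3/(2 mu^2)
s = np.sqrt(f)                         # y-coordinate of L_3

J_L3 = np.array([
    [-9.0 / (2.0 * mu**2),
     s * (3.0 / mu * np.sqrt(1.5) + np.sqrt(6.0) * mu),
     -lam / 2.0 * f],
    [3.0 / mu * np.sqrt(1.5) * s, -3.0 * f, lam / (2.0 * mu) * np.sqrt(1.5) * s],
    [0.0, 0.0, 0.0],
])

r = np.sqrt(5.0 * mu**2 - 6.0) / mu
expected = sorted([-1.5 * (1.0 + r), -1.5 * (1.0 - r), 0.0])
computed = sorted(np.linalg.eigvals(J_L3).real)
assert np.allclose(computed, expected)
```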
As the critical point is non-hyperbolic in nature, we use the CMT to determine its stability. By the center manifold theorem there exists a continuously differentiable function\n$h:\mathbb{R}^2\rightarrow\mathbb{R}$ such that $X=h(Y,Z)=aY^2+bYZ+cZ^2+higher~order~terms,$ where $a,~b,~c\in\mathbb{R}$. \\\nNow, differentiating both sides with respect to $N$, we get\n\begin{eqnarray}\n\frac{dX}{dN}=[2aY+bZ ~~~~ bY+2cZ]\begin{bmatrix}\n\frac{dY}{dN}\\\n~\\\n\frac{dZ}{dN}\\\n\end{bmatrix}\label{equn52}\n\end{eqnarray}\nComparing the L.H.S. and R.H.S. of (\ref{equn52}) we get\n$a=\pm 1$ (the upper and lower signs corresponding to $\mu=\pm\sqrt{\frac{3}{2}}$ respectively), $b=0$ and $c=0$, i.e., the center manifold can be written as\n\begin{eqnarray}\nX&=&\pm Y^2+higher~order~terms\label{eq65}\n\end{eqnarray}\nand the flow on the center manifold is determined by \n\begin{eqnarray}\n\frac{dY}{dN}&=&\pm\frac{\lambda}{2}YZ+higher~order~terms,\label{eq66}\\\n\frac{dZ}{dN}&=&\pm Z^2+higher~order~terms\label{eq67}.\n\end{eqnarray}\n In the CMT we are concerned only with the non-zero coefficients of the lowest-power terms, since we analyze an arbitrarily small neighborhood of the origin, and here the lowest-power term in the expression for the center manifold depends only on $Y$. We therefore draw the vector field near the origin only in the $XY$-plane, i.e., the vector field depends on $Z$ implicitly rather than explicitly. We now try to write the flow equations $(\ref{eq66}-\ref{eq67})$ in terms of $Y$ only. 
To this end we divide the corresponding sides of (\ref{eq66}) by those of (\ref{eq67}), which gives \n \begin{align*}\n&\frac{dY}{dZ}=\frac{\lambda}{2}\frac{Y}{Z}\\ \implies& Z=\left(\frac{Y}{C}\right)^{2/\lambda},~~\mbox{where $C$ is an arbitrary positive constant.}\n \end{align*}\nSubstituting this into either of $(\ref{eq66})$ or $(\ref{eq67})$, we get\n\begin{align}\n\frac{dY}{dN}=\frac{\lambda}{2C^{2/\lambda}}Y^{1+2/\lambda}\n\end{align}\nAs the power of $Y$ can be neither negative nor fractional, only two choices of $\lambda$ are admissible, namely $\lambda=1$ and $\lambda=2$. In both cases the origin is a saddle node, i.e., unstable in nature (FIG.\ref{L_21} is for $\mu=\sqrt{\frac{3}{2}}$ and FIG.\ref{L_2_1_1} is for $\mu=-\sqrt{\frac{3}{2}}$). Hence, for $\mu=\pm \sqrt{\frac{3}{2}}$, in the old coordinate system the critical point $L_3$ is unstable due to its saddle nature.\bigbreak\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{L21}\n\t\caption{Vector field near the origin when $\mu=\sqrt{\frac{3}{2}}$, for the critical point $L_3$. L.H.S. phase plot is for $\lambda=1$ and R.H.S. phase plot is for $\lambda=2$.}\n\t\label{L_21}\n\end{figure}\n\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{L211}\n\t\caption{Vector field near the origin when $\mu=-\sqrt{\frac{3}{2}}$, for the critical point $L_3$. L.H.S. phase plot is for $\lambda=1$ and R.H.S. 
phase plot is for $\lambda=2$.}\n\t\label{L_2_1_1}\n\end{figure}\nFor $\mu=\sqrt{3}$, the Jacobian matrix $J(L_3)$ reduces to\n$$\t\renewcommand{\arraystretch}{1.5}\n\begin{bmatrix}\n-\frac{3}{2}&~~\frac{9}{2}&-\frac{\lambda}{4}\\~~\frac{3}{2}&-\frac{3}{2}&~~\frac{\lambda}{4}\\~~0&~~0&~~0\n\end{bmatrix}.\n$$\nThe eigenvalues of the above Jacobian matrix are $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, and the corresponding eigenvectors are $[-\sqrt{3},1,0]^T$, $[\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. Since for $\mu=\sqrt{3}$ the critical point $L_3$ becomes $\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0\right)$, we first take the transformations $x= X-\frac{1}{\sqrt{2}}$, $y= Y+\frac{1}{\sqrt{2}}$ and $z= Z$, which shift the critical point to the origin. Using the eigenvectors of the above Jacobian matrix, we introduce a new coordinate system $(u,v,w)$ in terms of $(X,Y,Z)$ as\n\begin{equation}\renewcommand{\arraystretch}{1.5}\t\n\begin{bmatrix}\nu\\\nv\\\nw\n\end{bmatrix}\renewcommand{\arraystretch}{1.5}\n=\begin{bmatrix}\n-\frac{1}{2\sqrt{3}} & \frac{1}{2} & -\frac{\lambda}{12\sqrt{3}} \\\t\n\frac{1}{2\sqrt{3}} & \frac{1}{2} & \frac{\lambda}{12\sqrt{3}}\\\n0 & 0 & 1\n\end{bmatrix}\renewcommand{\arraystretch}{1.5}\n\begin{bmatrix}\nX\\\nY\\\nZ\n\end{bmatrix}\t\n\end{equation}\t\t\nand in these new coordinates the equations $(\ref{eq54}-\ref{eq56})$ are transformed into\t\n\begin{equation}\t\renewcommand{\arraystretch}{1.5}\n\begin{bmatrix}\n-u'+v'\\\nu'+v'\\\nw'\n\end{bmatrix}\n=\begin{bmatrix}\n\frac{3}{2}(1+\sqrt{3})& -\frac{3}{2}(1-\sqrt{3}) & 0 \\\t\n -\frac{3}{2}(1+\sqrt{3}) & -\frac{3}{2}(1-\sqrt{3}) & 0 \\\n~~0 & ~~0 & 0\n\end{bmatrix}\n\begin{bmatrix}\nu\\\nv\\\nw\n\end{bmatrix}\t\t\n+\t\n\begin{bmatrix}\nnon\\\nlinear\\\nterms\n\end{bmatrix}.\t\n\end{equation}\t\nNow if we 
add the first and second equations of the above matrix equation and then divide both sides by $2$, we get $v'$. Again, if we subtract the first equation from the second and divide both sides by $2$, we get $u'$. Finally, in the new coordinate system the autonomous system can be written in matrix form as\n\begin{equation}\t\renewcommand{\arraystretch}{1.5}\n\begin{bmatrix}\nu'\\\nv'\\\nw'\n\end{bmatrix}\n=\begin{bmatrix}\n-\frac{3}{2}(1+\sqrt{3})& 0 & 0 \\\t\n0 & -\frac{3}{2}(1-\sqrt{3}) & 0 \\\n0 & ~~0 & 0\n\end{bmatrix}\n\begin{bmatrix}\nu\\\nv\\\nw\n\end{bmatrix}\t\t\n+\t\n\begin{bmatrix}\nnon\\\nlinear\\\nterms\n\end{bmatrix}.\t\n\end{equation}\nApplying the same arguments as in the analysis of $A_2$, the center manifold can be expressed as\n\begin{align}\nu&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(\sqrt{3}-1)\lambda^2-4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn72}\\\nv&=-\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn73}\n\end{align}\nand the flow on the center manifold is determined by\n\begin{align}\nw'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn74}.\n\end{align}\nFrom the flow equation we can easily conclude that the origin is a saddle node, and hence unstable in nature. The vector field near the origin in the $uw$-plane is shown in FIG.\ref{L_22} and in the $vw$-plane in FIG.\ref{L_2_2}. Hence, in the old coordinate system $(x,y,z)$, for $\mu=\sqrt{3}$ the critical point $L_3$ is unstable due to its saddle nature.\n\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{L222}\n\t\caption{Vector field near the origin in $uw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. 
For the critical point $L_3$, the phase plot (a) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$. For the critical point $L_4$, the phase plot (a) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$.}\n\t\label{L_22}\n\end{figure}\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{L22}\n\t\caption{Vector field near the origin in $vw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. For the critical point $L_3$, the phase plot (a) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$ and the phase plot (b) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$. For the critical point $L_4$, the phase plot (a) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$ and the phase plot (b) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$.}\n\t\label{L_2_2}\n\end{figure}\nLastly, for $\mu=-\sqrt{3}$, the Jacobian matrix $J(L_3)$ has the same eigenvalues $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, now with corresponding eigenvectors $[\sqrt{3},1,0]^T$, $[-\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. Repeating the arguments given for the $\mu=\sqrt{3}$ case, we obtain the same expressions $(\ref{eqn72}-\ref{eqn73})$ for the center manifold and (\ref{eqn74}) for the flow on the center manifold. 
So, in this case also we conclude that the critical point $L_3$ is a saddle node and hence unstable in nature.\n\newpage\n\begin{center}\n\t$4.~Critical~Point~L_4$\n\end{center}\nThe Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_4$ can be written as\n\begin{equation}\n\renewcommand{\arraystretch}{3}\nJ(L_4)=\begin{bmatrix}\n-\frac{9}{2\mu^2}&-\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\\n-\frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&-\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\\n~~0 & ~~0& ~~0\n\end{bmatrix}.\n\end{equation}\nFor this critical point we also analyze the stability for the above four choices of $\mu$, i.e., $\mu=\pm\sqrt{\frac{3}{2}}$, $\mu=\sqrt{3}$ and $\mu=-\sqrt{3}$. \par\n For $\mu=\pm\sqrt{\frac{3}{2}}$, we obtain the same expression (\ref{eq65}) for the center manifold and $(\ref{eq66}-\ref{eq67})$ for the flow on it. Hence, in this case the critical point $L_4$ is unstable due to its saddle nature. \par\nFor $\mu=\sqrt{3}$, by arguments analogous to those for $L_3$, the center manifold can be written as\n\begin{align}\nu&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(1-\sqrt{3})\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn76}\\\nv&=\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn77}\n\end{align}\nand the flow on the center manifold is determined by\n\begin{align}\nw'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn78}.\n\end{align}\nFrom the flow equation we can conclude that the origin is a saddle node, and hence in the old coordinate system $L_4$ is a saddle node, i.e., unstable in nature. 
The vector field near the origin in the $uw$-plane is shown in FIG.\ref{L_22} and in the $vw$-plane in FIG.\ref{L_2_2}.\par\nFor $\mu=-\sqrt{3}$ we obtain the same expressions for the center manifold and the flow equation as in the $\mu=\sqrt{3}$ case.\n\n\begin{center}\n\t$5.~Critical~Point~L_5$\n\end{center}\nFirst we shift the critical point $L_5$ to the origin by the transformation $x= X-\sqrt{\frac{2}{3}}\mu$, $y=Y$ and $z= Z$. To avoid repeating the arguments given for the previous critical points, we state only the main results, namely the center manifold and the flow equation, for this critical point. The center manifold can be written as\n\begin{align}\nX&=0,\\Y&=0\n\end{align}\nand the flow on the center manifold can be obtained as\n\begin{align}\n\frac{dZ}{dN}=\sqrt{\frac{2}{3}}\mu Z^2 +\mathcal{O}(Z^3).\n\end{align}\nFrom the expressions for the center manifold we conclude that it lies on the $Z$-axis. From the flow on the center manifold (FIG.\ref{z_center_manifold}) we conclude that the origin is unstable both for $\mu>0$ and for $\mu<0$.\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{z_center_manifold}\n\t\caption{Flow on the center manifold near the origin for the critical point $L_5$. 
(a) is for $\mu>0$ and (b) is for $\mu<0$.}\n\t\label{z_center_manifold}\n\end{figure}\n\subsection{Model 3: Exponential potential and\n\tpower-law-dependent dark-matter particle mass \label{M3}}\nIn this case the evolution equations of Section \ref{BES} can be recast as the autonomous system \n\begin{eqnarray}\nx'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\mu}{2}z(1+x^2-y^2),\label{eq82} \\\ny'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy,\label{eq83} \\\nz'&=&-xz^2.\label{eq84} \n\end{eqnarray}\nThis system has three physically meaningful critical points, $R_1$, $R_2$ and $R_3$. The critical points, their existence conditions and the values of the cosmological parameters at these points are shown in Table \ref{TPRE}, while the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq82}-\ref{eq84})$ at these points and the nature of the critical points are shown in Table \ref{TNRE}.\par\n\nHere too we are concerned only with the stability of the critical points for $\mu\neq 0$ and $\lambda\neq 0$, since for the other possible cases we obtain the same types of results as for Model $1$.\n\t\begin{table}[h]\n\t\caption{\label{TPRE}Critical points of the autonomous system $(\ref{eq82}-\ref{eq84})$, their existence conditions, and the values of the cosmological parameters at each point. 
}\n\t\\begin{tabular}{|c|c|c c c|c|c|c| c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\\Omega_X~$&$~\\omega_X~$ &$~\\omega_{tot}~$& $~q~$ \\\\ \\hline\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_1$\\\\$~~$\\end{tabular}& For all $\\mu$ and $\\lambda$&$0$&$0$&$0$&$0$&Undetermined&$0$&$\\frac{1}{2}$\\\\ \\hline\n\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_2$\\\\$~~$\\end{tabular}&For all $\\mu$ and $\\lambda$&$-\\frac{\\lambda}{\\sqrt{6}}$&$\\sqrt{1+\\frac{\\lambda^2}{6}}$&$0$&$1$&$-1-\\frac{\\lambda^2}{3}$&$-1-\\frac{\\lambda^2}{3}$&$-\\frac{1}{2}\\left(2+\\lambda^2\\right)$\\\\ \\hline\n\t\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_3$\\\\$~~$\\end{tabular}&For all $\\mu$ and $\\lambda$&$-\\frac{\\lambda}{\\sqrt{6}}$&$-\\sqrt{1+\\frac{\\lambda^2}{6}}$&$0$&$1$&$-1-\\frac{\\lambda^2}{3}$&$-1-\\frac{\\lambda^2}{3}$&$-\\frac{1}{2}\\left(2+\\lambda^2\\right)$\\\\ \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[h]\n\t\\caption{\\label{TNRE}The eigenvalues $(\\lambda_1,\\lambda_2,\\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\\ref{eq82}-\\ref{eq84})$ at those critical points $(R_1-R_3)$ and the nature of the critical points.}\n\t\\begin{tabular}{|c|c c c|c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} &$\\lambda_1$&$\\lambda_2$&$\\lambda_3$&$Nature~ of~ critical~ Points$ \\\\ \\hline\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_1$\\\\$~~$\\end{tabular}&$-\\frac{3}{2}$&$\\frac{3}{2}$&$0$&Non-hyperbolic\\\\ \\hline\n\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_2$\\\\$~~$\\end{tabular}&$-(3+\\lambda^2)$&$-\\left(3+\\frac{\\lambda^2}{2}\\right)$&$0$&Non-hyperbolic\\\\ \\hline\n\t\t\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$R_3$\\\\$~~$\\end{tabular}&$-(3+\\lambda^2)$&$-\\left(3+\\frac{\\lambda^2}{2}\\right)$&$0$&Non-hyperbolic\\\\ \\hline\n \\end{tabular}\n\\end{table}\n\nFor avoiding 
repetition of similar arguments, we state only the stability of each critical point and the reason behind it, in tabular form, as shown in Table \ref{T_R_stability}.\n\n\n\begin{table}[!]\n\t\caption{\label{T_R_stability}Stability of the critical points $(R_1-R_3)$ and the reason behind it.}\n\t\begin{tabular}{|c|c|c|}\n\t\t\hline\n\t\t\hline\t\n\t\t\t\t\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline\n\t\t\begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu>0$, $R_1$ is a saddle node\\$~~$\\ and \\$~~$\\for $\mu<0$, $R_1$ is a stable node\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\After introducing the coordinate transformation (\ref{eq15}),\\ we obtain the same expressions for the center manifold\\ $(\ref{eq18}-\ref{eq19})$ and the flow on the center manifold is\\ determined by $(\ref{eq20})$ (FIG.\ref{A_1}).\\$~$\end{tabular}\\ \hline\n\t\t\begin{tabular}{@{}c@{}}$~~$\\$R_2,R_3$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\lambda>0$ or $\lambda<0$, \\$~~$\\$R_2$ and $R_3$ are both unstable\\$~$\end{tabular}&\t\begin{tabular}{@{}c@{}}$~~$\\After shifting $R_2$ and $R_3$ to the origin by the coordinate\\ transformations $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y+\sqrt{1+\frac{\lambda^2}{6}},z=Z\right)$ and\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y-\sqrt{1+\frac{\lambda^2}{6}},z=Z \right)$ respectively,\\ we can conclude that the center manifold lies on the $Z$-axis\\ and the flow on the center manifold is determined by\\\n\t\t$\frac{dZ}{dN}=\frac{\lambda}{\sqrt{6}}Z^2+\mathcal{O}(Z^3)$.\\$~~$\\ The origin is unstable for both of the cases $\lambda>0$\\ (same as FIG.\ref{z_center_manifold}\textbf{(a)}) and $\lambda<0$ (same as 
FIG.\ref{z_center_manifold}\textbf{(b)}).\\$~$\end{tabular}\\ \hline\n\end{tabular}\n\end{table}\n\n\subsection{Model 4: Exponential potential and\n\texponentially-dependent dark-matter particle mass \label{M4}}\nIn this case the evolution equations of Section \ref{BES} can be recast as the autonomous system \n\begin{eqnarray}\nx'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq85} \\\ny'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy.\label{eq86}\n\end{eqnarray}\nWe omit the equation for the auxiliary variable $z$ from the above autonomous system because the right-hand sides of $x'$ and $y'$ do not depend on $z$.\par\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=1\textwidth]{M_1}\n\t\caption{Vector field near the origin for the critical point $M_1$. L.H.S. for $\mu>0$ and R.H.S. for $\mu<0$.}\n\t\label{M_1}\n\end{figure}\n\n The above autonomous system has four critical points, $M_1$, $M_2$, $M_3$ and $M_4$. The critical points, their existence conditions and the values of the cosmological parameters at these points are shown in Table \ref{TPME}, while the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq85}-\ref{eq86})$ at these points and the nature of the critical points are shown in Table \ref{TNME}.\bigbreak\n\n\begin{table}[h]\n\t\caption{\label{TPME}Critical points of the autonomous system $(\ref{eq85}-\ref{eq86})$, their existence conditions, and the values of the cosmological parameters at each point. 
}\n\t\\begin{tabular}{|c|c|c c|c|c|c| c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} &$Existence$&$x$&$y$& $~\\Omega_X~$&$~\\omega_X~$ &$~\\omega_{tot}~$& $~q~$ \\\\ \\hline\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\\t$M_1$\\\\$~~$\\end{tabular}& For all $\\mu$ and $\\lambda$&$-\\sqrt{\\frac{2}{3}}\\mu$&$0$&$-\\frac{2}{3}\\mu^2$ & $1$&$-\\frac{2}{3}\\mu^2$&$\\frac{1}{2}\\left(1-2\\mu^2\\right)$ \\\\ \\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_2$\\\\$~~$\\end{tabular}&For all $\\mu$ and $\\lambda$&$-\\frac{\\lambda}{\\sqrt{6}}$&$\\sqrt{1+\\frac{\\lambda^2}{6}}$&$1$&$-1-\\frac{\\lambda^2}{3}$&$-1-\\frac{\\lambda^2}{3}$&$-\\frac{1}{2}\\left(2+\\lambda^2\\right)$\\\\ \\hline\n\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_3$\\\\$~~$\\end{tabular}&For all $\\mu$ and $\\lambda$&$-\\frac{\\lambda}{\\sqrt{6}}$&$-\\sqrt{1+\\frac{\\lambda^2}{6}}$&$1$&$-1-\\frac{\\lambda^2}{3}$&$-1-\\frac{\\lambda^2}{3}$&$-\\frac{1}{2}\\left(2+\\lambda^2\\right)$\\\\ \\hline\n \\begin{tabular}{@{}c@{}}$~~$\\\\$M_4$\\\\$~~$\\end{tabular}&\\begin{tabular}{@{}c@{}}$~~$\\\\For $\\mu\\neq\\lambda$\\\\and\\\\ $\\min\\{\\mu^2-\\frac{3}{2},\\lambda^2+3\\}\\geq\\lambda\\mu$\\\\$~~$\\end{tabular}&$\\frac{\\sqrt{\\frac{3}{2}}}{\\lambda-\\mu}$&$\\frac{\\sqrt{-\\frac{3}{2}-\\mu(\\lambda-\\mu)}}{|\\lambda-\\mu|}$&$\\frac{\\mu^2-\\lambda\\mu-3}{(\\lambda-\\mu)^2}$&$\\frac{\\mu(\\lambda-\\mu)}{\\mu^2-\\lambda\\mu-3}$&$\\frac{\\mu}{\\lambda-\\mu}$&$\\frac{1}{2}\\left(\\frac{\\lambda+2\\mu}{\\lambda-\\mu}\\right)$\\\\ \\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[h]\n\t\\caption{\\label{TNME}The eigenvalues $(\\lambda_1,\\lambda_2)$ of the Jacobian matrix corresponding to the autonomous system $(\\ref{eq85}-\\ref{eq86})$ at those critical points $(M_1-M_4)$ and the nature of the critical points.}\n\t\\begin{tabular}{|c|c c|c|}\n\t\t\\hline\n\t\t\\hline\t\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$~Critical ~Points$\\\\$~~$\\end{tabular} 
&$\\lambda_1$&$\\lambda_2$&$Nature~ of~ critical~ Points$ \\\\ \\hline\\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_1$\\\\$~~$\\end{tabular}&$-\\left(\\frac{3}{2}+\\mu^2\\right)$$~~$&$~~$$-\\left(\\mu^2-\\frac{3}{2}\\right)+\\lambda\\mu$& \\begin{tabular}{@{}c@{}}$~~$\\\\Hyperbolic if $\\left(\\mu^2-\\frac{3}{2}\\right)\\neq\\lambda\\mu$,\\\\$~~$ \\\\ non-hyperbolic if $\\left(\\mu^2-\\frac{3}{2}\\right)=\\lambda\\mu$\\\\$~~$\\end{tabular}\\\\ \\hline\n\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_2$\\\\$~~$\\end{tabular}&$-(3+\\lambda^2)+\\lambda\\mu$&$-\\left(3+\\frac{\\lambda^2}{2}\\right)$&\\begin{tabular}{@{}c@{}}$~~$\\\\Hyperbolic if $(\\lambda^2+3)\\neq\\lambda\\mu$,\\\\$~~$ \\\\ non-hyperbolic if $\\left(\\lambda^2+3\\right)=\\lambda\\mu$\\\\$~~$\\end{tabular}\\\\ \\hline\n\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_3$\\\\$~~$\\end{tabular}&$-(3+\\lambda^2)+\\lambda\\mu$&$-\\left(3+\\frac{\\lambda^2}{2}\\right)$&\\begin{tabular}{@{}c@{}}$~~$\\\\Hyperbolic if $(\\lambda^2+3)\\neq\\lambda\\mu$,\\\\$~~$ \\\\ non-hyperbolic if $\\left(\\lambda^2+3\\right)=\\lambda\\mu$\\\\$~~$\\end{tabular}\\\\ \\hline\n\t\t\t\\begin{tabular}{@{}c@{}}$~~$\\\\$M_4$\\\\$~~$\\end{tabular}&$\\frac{a+d+\\sqrt{(a-d)^2+4bc}}{2}$&$\\frac{a+d-\\sqrt{(a-d)^2+4bc}}{2}$&\\begin{tabular}{@{}c@{}}$~~$\\\\Hyperbolic when $\\mu^2-\\frac{3}{2}>\\lambda\\mu$\\\\ and $\\lambda^2+3>\\lambda\\mu$,\\\\$~~$\\\\non-hyperbolic when $\\mu^2-\\frac{3}{2}=\\lambda\\mu$\\\\ or $\\lambda^2+3=\\lambda\\mu$\\\\$~~$\\end{tabular}\\\\ \\hline\n \\end{tabular}\n\\end{table}\nNote that for the critical point $M_4$ we have written the eigenvalues in terms of $a$, $b$, $c$ and $d$, where $a=-\\frac{3}{2(\\lambda-\\mu)^2}(\\lambda^2+3-\\lambda\\mu)$, $b=\\mp\\sqrt{\\frac{3}{2}}\\left(\\frac{3}{(\\lambda-\\mu)^2}+2\\right)\\sqrt{-\\frac{3}{2}-\\mu(\\lambda-\\mu)}$, $c=\\mp\\sqrt{\\frac{3}{2}}\\left\\{\\frac{\\lambda^2+3-\\lambda\\mu}{(\\lambda-\\mu)^2}\\right\\}\n\\sqrt{-\\frac{3}{2}-\\mu(\\lambda-\\mu)}$, 
$d=-\frac{3}{(\lambda-\mu)^2}\left\{\left(\mu^2-\frac{3}{2}\right)-\lambda\mu\right\}$.\par\n\nAgain, we state only the stability of each critical point $(M_1-M_4)$ and the reason behind it, in tabular form, as shown in Table \ref{T_M_stability}.\n\n\begin{table}[!]\n\t\caption{\label{T_M_stability}Stability of the critical points $(M_1-M_4)$ and the reason behind it.}\n\t\begin{tabular}{|c|c|c|}\n\t\t\hline\n\t\t\hline\t\n\t\t\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline\n\t \begin{tabular}{@{}c@{}}$~~$\\$ M_1 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable node for $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$\\ $~~$\\and\\$~~$\\ saddle node for $\left(\mu^2-\frac{3}{2}\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$, since both eigenvalues \\of the Jacobian matrix at $M_1$ are negative, by the\\ Hartman-Grobman theorem we conclude that\\ the critical point $M_1$ is a stable node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)<\lambda\mu$, since one eigenvalue is positive\\ and the other is negative, by the Hartman-Grobman theorem\\ we conclude that the critical point $M_1$ is a saddle node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$, after shifting the critical point\\ $M_1$ to the origin by the coordinate transformation\\ $\left(x=X-\sqrt{\frac{2}{3}}\mu,y=Y\right)$, the center manifold can be written as \\$X=\frac{1}{\mu}\sqrt{\frac{3}{2}}Y^2+\mathcal{O}(Y^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dY}{dN}=\frac{9}{4\mu^2}Y^3+\mathcal{O}(Y^4)$.\\ Hence, for both of the cases $\mu>0$ and $\mu<0$ the origin\\ is a saddle node and unstable in nature (FIG.\ref{M_1}).\\$~~$\end{tabular}\\ 
\hline\n\t \begin{tabular}{@{}c@{}}$~~$\\$ M_2,M_3 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable nodes for $\left(\lambda^2+3\right)>\lambda\mu$\\$~~$\\ and\\$~~$\\ saddle nodes for $\left(\lambda^2+3\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\lambda^2+3\right)>\lambda\mu$, since both eigenvalues \\of the Jacobian matrix at $M_2$ and $M_3$ are negative, by the\\ Hartman-Grobman theorem we conclude that\\ the critical points $M_2$ and $M_3$ are stable nodes.\\$~~$\\ For $\left(\lambda^2+3\right)<\lambda\mu$, since one eigenvalue is positive\\ and the other is negative, by the Hartman-Grobman theorem\\ we conclude that $M_2$ and $M_3$ are saddle nodes.\\$~~$\\ For $\left(\lambda^2+3\right)=\lambda\mu$, after shifting the critical point\\ $M_2$ or $M_3$ to the origin by the coordinate transformation\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y\pm\sqrt{1+\frac{\lambda^2}{6}}\right)$, the center manifold can be\\ written as $~~Y=\mp\frac{1}{2\sqrt{1+\frac{\lambda^2}{6}}}X^2+\mathcal{O}(X^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dX}{dN}=\frac{\lambda}{2}\sqrt{\frac{3}{2}}\left\{1-\frac{6}{\lambda^2}\pm\frac{12}{\lambda^2}\left(1+\frac{\lambda^2}{6}\right)^{\frac{3}{2}}\right\}X^2+\mathcal{O}(X^4)$.\\ Hence, for all possible values of $\lambda$, due to the even power of $X$\\ in the R.H.S. 
of the flow equation, the origin is\\ a saddle node and unstable in nature.\\$~~$\end{tabular}\\ \hline\n\t \begin{tabular}{@{}c@{}}$~~$\\$ M_4 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Saddle node for both of the cases, i.e.,\\ $\mu^2-\frac{3}{2}=\lambda\mu$ or $\lambda^2+3=\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\ For $\mu^2-\frac{3}{2}=\lambda\mu$, as $M_4$ converts into\\ $M_1$, we get the same stability as $M_1$.\\$~~$\\ For $\lambda^2+3=\lambda\mu$, as $M_4$ converts into $M_2$ and $M_3$, \\we get the same stability as $M_2$ and $M_3$.\\$~$\end{tabular}\\\hline\n \end{tabular}\n\end{table}\nAlso note that, for the hyperbolic case of $M_4$, the components $a,b,c$ and $d$ of the Jacobian matrix are very complicated, and it is very difficult to draw any conclusion about the stability from the eigenvalues; for this reason we skip the stability analysis of this case.\n\n\subsection{Model 5: Product of exponential and power-law potential and\n\tproduct of exponentially-dependent and power-law-dependent dark-matter particle mass \label{M5}}\nWith this choice, the evolution equations of Section \ref{BES} can be recast as the following autonomous system \n\begin{eqnarray}\nx'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\lambda}{2}y^2z-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eqn80} \\\ny'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy-\frac{\lambda}{2}xyz,\label{eqn81}\\\nz'&=&-xz^2\label{eqn82}\n\end{eqnarray}\nTo determine the critical points of the above autonomous system, we first equate the R.H.S. of (\ref{eqn82}) to $0$, which gives either $x=0$ or $z=0$. For $z=0$, the above autonomous system reduces to that of Model 4, so we obtain the same type of results as for Model 4.
When $x=0$, we have three physically meaningful critical points corresponding to the above autonomous system for $\mu\neq 0$ and $\lambda\neq 0$. For other choices of $\mu$ and $\lambda$, as in Model 1, we obtain similar results. The critical points are $N_1(0,0,-\sqrt{6})$, $N_2(0,1,-\sqrt{6})$ and $N_3(0,-1,-\sqrt{6})$, and all are hyperbolic in nature. As the $x$ and $y$ coordinates of these critical points are the same as those of $A_1$, $A_2$ and $A_3$, and the cosmological parameters do not depend on the $z$ coordinate, we get the same values of the cosmological parameters as for $A_1$, $A_2$ and $A_3$ respectively, which are presented in Table \ref{TI}.\n\begin{center}\n\t$1.~Critical~Point~N_1$\n\end{center}\nThe Jacobian matrix corresponding to the autonomous system (\ref{eqn80}-\ref{eqn82}) at the critical point $N_1$ has three eigenvalues $\frac{3}{2}$, $-\frac{1}{4}\left(3+\sqrt{9+48\mu}\right)$ and $-\frac{1}{4}\left(3-\sqrt{9+48\mu}\right)$, and the corresponding eigenvectors are $[0,1,0]^T$, $\left[\frac{1}{24}\left(3+\sqrt{9+48\mu}\right),0,1\right]^T$ and $\left[\frac{1}{24}\left(3-\sqrt{9+48\mu}\right),0,1\right]^T$ respectively. As the critical point is hyperbolic in nature, we use the Hartman-Grobman theorem for analyzing its stability. From the eigenvalues, we conclude that the stability of the critical point $N_1$ depends on $\mu$. For $\mu<-\frac{3}{16}$, the last two eigenvalues are complex conjugate with negative real parts. For $\mu\geq-\frac{3}{16}$, all eigenvalues are real.\par\nFor $\mu<-\frac{3}{16}$, due to the presence of negative real parts of the last two eigenvalues, the $yz$-plane is the stable subspace and, as the first eigenvalue is positive, the $x$-axis is the unstable subspace. Hence, the critical point $N_1$ is a saddle-focus, i.e., unstable in nature.
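These statements admit a quick numerical sanity check. The sketch below (the parameter values $\lambda=1$, $\mu=-1$ are illustrative assumptions, not taken from this work) verifies that $N_1$, $N_2$ and $N_3$ annihilate the right-hand sides of (\ref{eqn80})--(\ref{eqn82}) and that the quoted eigenvalues at $N_1$ are of saddle-focus type there:

```python
# Sanity check of Model 5 (a sketch; lambda = 1, mu = -1 are illustrative).
import math

lam, mu = 1.0, -1.0
s32 = math.sqrt(1.5)          # sqrt(3/2)

def rhs(x, y, z):
    """Right-hand sides of the autonomous system (eqn80)-(eqn82)."""
    xp = (-3*x + 1.5*x*(1 - x*x - y*y) - s32*lam*y*y - 0.5*lam*y*y*z
          - s32*mu*(1 + x*x - y*y) - 0.5*mu*z*(1 + x*x - y*y))
    yp = 1.5*y*(1 - x*x - y*y) - s32*lam*x*y - 0.5*lam*x*y*z
    zp = -x*z*z
    return (xp, yp, zp)

# N1, N2 and N3 all annihilate the right-hand sides.
for pt in [(0, 0, -math.sqrt(6)), (0, 1, -math.sqrt(6)), (0, -1, -math.sqrt(6))]:
    assert all(abs(c) < 1e-12 for c in rhs(*pt)), pt

# For mu = -1 the discriminant 9 + 48*mu of the quoted eigenvalues
# -(3 +/- sqrt(9 + 48*mu))/4 is negative: the last two eigenvalues are
# complex conjugate with real part -3/4, while the first one is 3/2,
# i.e. a saddle-focus.
assert 9 + 48*mu < 0
```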
The phase portrait in the $xyz$ coordinate system is shown in FIG.\ref{focus_1}.\par\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.4\textwidth]{focus11}\n\t\caption{Phase portrait near the origin for the critical point $N_1$ in $xyz$ coordinate system. This phase portrait is drawn for $\mu=-1$.}\n\t\label{focus_1}\n\end{figure}\nFor $\mu\geq-\frac{3}{16}$, we always have at least one positive eigenvalue and at least one negative eigenvalue, and hence we can conclude that the critical point $N_1$ is unstable due to its saddle nature.\n\begin{center}\n\t$2.~Critical~Point~N_2~\&~ N_3$\n\end{center}\nThe Jacobian matrix corresponding to the autonomous system $(\ref{eqn80}-\ref{eqn82})$ at the critical points $N_2$ and $N_3$ has three eigenvalues $-3$, $-\frac{1}{2}\left(3+\sqrt{9+12\lambda}\right)$ and $-\frac{1}{2}\left(3-\sqrt{9+12\lambda}\right)$, and the corresponding eigenvectors are $[0,1,0]^T$, $\left[\frac{1}{12}\left(3+\sqrt{9+12\lambda}\right),0,1\right]^T$ and $\left[\frac{1}{12}\left(3-\sqrt{9+12\lambda}\right),0,1\right]^T$ respectively. From the eigenvalues, we conclude that the last two eigenvalues are complex conjugate when $\lambda<-\frac{3}{4}$, and all eigenvalues are real when $\lambda\geq-\frac{3}{4}$.\par \nFor $\lambda<-\frac{3}{4}$, the last two eigenvalues are complex with negative real parts and the first eigenvalue is always negative. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable focus-nodes in this case. The phase portrait in the $xyz$-coordinate system is shown in FIG.\ref{focus_2}.\par\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.4\textwidth]{focus2}\n\t\caption{Phase portrait near the origin for the critical point $N_2$ and $N_3$ in $xyz$ coordinate system.
This phase portrait is drawn for $\lambda=-1$.}\n\t\label{focus_2}\n\end{figure}\nFor $-\frac{3}{4}\leq\lambda<0$, all eigenvalues are negative. So, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable nodes in this case.\par\nFor $\lambda>0$, we have two negative and one positive eigenvalue. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both saddle nodes and unstable in nature.\bigskip\n\n\n\section{Bifurcation Analysis by Poincar\'{e} index and Global Cosmological evolution \label{BAPGCE}}\n\nThe flat potential plays a crucial role in obtaining the bouncing solution. After the bounce, the flat potential naturally allows the universe to penetrate the slow-roll inflation regime, thereby making the bouncing universe compatible with observations.\par\n\nIn Model 1 (\ref{M1}), for the inflationary scenario, we consider $\lambda$ and $\mu$ to be very small positive numbers, so that $V(\phi) \approx V_0$ and $M_{DM} \approx M_0$. Eqn. (\ref{eq11}) mainly regulates the flow along the $Z$-axis. Due to Eqn. (\ref{eq11}), the overall 3-dimensional phase space splits up into two compartments and the $ZY$-plane becomes the separatrix. In the right compartment, for $x>0$, we have $z' <0$, and $z'>0$ in the left compartment; on the $ZY$-plane $z' \approx 0$. For $\lambda \neq 0$ and $\mu \neq 0$, all critical points are located on the $Y$-axis. As all cosmological parameters can be expressed in terms of $x$ and $y$, we rigorously inspect the vector field on the $XY$-plane. Due to Eqn. (\ref{eq4}), the viable phase-space region (say $S$) satisfies $y^2-x^2 \leqslant 1$, which is the region inside a hyperbola centered at the origin (FIG.\ref{hyperbola}). On the $XY$-plane $z' \approx 0$.
So on the $XY$-plane, by the Hartman-Grobman theorem we can conclude that there are four hyperbolic sectors around $A_1$ ($\alpha$-limit set) and one parabolic sector around each of $A_2$ and $A_3$ ($\omega$-limit sets). So, by the Bendixson theorem, it is to be noted that the index of $A_1|_{XY}$ is $-1$ and the index of $A_2|_{XY}$ and $A_3|_{XY}$ is $1$. If the initial position of the universe is in the left compartment, near the $\alpha$-limit set, then the universe remains in the left compartment and moves towards the $\omega$-limit set asymptotically at late times. A similar phenomenon happens in the right compartment. The universe experiences a fluid dominated non-generic evolution near $A_1$ for $\mu>0$ and a generic evolution for $\mu<0$. For a sufficiently flat potential, near $A_2$ and $A_3$, a scalar field dominated non-generic and generic evolution occurs for $\lambda>0$ and $\lambda<0$ respectively (see FIG. \ref{Model1}).\n\n\n\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=.4\textwidth]{hyperbolic.pdf}\n\t\caption{Vector field on the projective plane obtained by identifying antipodal points of the disk.}\n\t\label{hyperbola}\n\end{figure} \n\n\begin{figure}[htbp!]\n\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{A1.png}\n\t\t\caption{}\n\t\t\label{fig:A1}\n\t\end{subfigure}%\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{A2.png}\n\t\t\caption{}\n\t\t\label{fig:A2}\n\t\end{subfigure}%\n\t\begin{subfigure}{.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{A3.png}\n\t\t\caption{}\n\t\t\label{fig:A3}\n\t\end{subfigure}\n\t\n\t\caption{\label{Model1}\textit{Model 1}: Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values for three sets of initial conditions. (a) The initial condition near the point $A_1$.
(b) The initial condition near the point $A_2$. (c) The initial condition near the point $A_3$. We observe that the limit of the physical parameter $\omega_{total}\rightarrow -1$. In early or present time the scalar field may be in the phantom phase, but the field is attracted to the de-Sitter phase.}\n\t\n\end{figure}\n\n\nThe Poincar\'{e} index theorem \cite{0-387-95116-4} helps us to determine the Euler Poincar\'{e} characteristic, which is $\chi(S)=n+f-s$, where $n$, $f$, $s$ are the numbers of nodes, foci and saddles on $S$. Henceforward we refer to the index as the Poincar\'{e} index. So for the vector field of case-(i)$|_{XY-plane}$, $\chi(S)=1$. This vector field can define a vector field on the projective plane, i.e., in the 3-dimensional phase-space: if we consider a closed disk in the $XY$-plane of radius one centered at the origin, then we have the same vector field on the projective plane with antipodal points identified.\par\nFor a $z=constant (\neq 0)$ plane the above characterization of the vector field changes, as a vertical flow along the $Z$-axis regulates the character of the vector field. Using the Bendixson theorem \cite{0-387-95116-4} we can find the index of a nonhyperbolic critical point by restricting the vector field to a suitable two-dimensional subspace.\par\nIf we restrict ourselves to the $XZ$-plane, $A_1$ is saddle in nature for $\mu > 0$. On the $XZ$-plane the index of $A_1$ is $-1$ for $\mu>0$, as four hyperbolic sectors are separated by two separatrices around $A_1$. For $\mu<0$, there is only one parabolic sector and the index is zero (FIG.\ref{A_1}). On the $YZ$-plane $A_1$ swaps its index with the $XZ$-plane depending on the sign of $\mu$.\par \nOn the $uw$-plane $A_2$ and $A_3$ have index $-1$ for $\lambda>0$ and $1$ for $\lambda < 0$. At $\lambda=0$, the index of $A_2$ is 0 but the index of $A_3$ is 1.
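The sector counting used throughout this section follows Bendixson's index formula for an isolated critical point of a planar vector field, a standard result stated here for reference:

```latex
% Bendixson's index formula: for an isolated critical point of a planar
% vector field with e elliptic and h hyperbolic sectors,
\[
  I \;=\; 1 + \frac{e-h}{2}.
\]
% Example: four hyperbolic sectors and no elliptic sector give
% I = 1 - 4/2 = -1 (a saddle), while a purely parabolic neighbourhood
% (a node or focus) has e = h = 0 and I = 1.
```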
On the $uv$-plane the index of $A_2$ or $A_3$ is 1 and does not depend on $\lambda$. On the $uw$-plane, around $A_2$ the number of hyperbolic sectors is four and there is no elliptic sector. So the index of $A_2$ and $A_3$ $(origin)|_{uw~plane}/ _{vw~plane}$ is $-1$ for $\lambda>0$, while for $\lambda<0$ the index is 1 as there is no hyperbolic or elliptic orbit.\par\nA set of non-isolated equilibrium points is said to be normally hyperbolic if the only eigenvalues with zero real parts are those whose corresponding eigenvectors are tangent to the set. For case (ii) to case (iv), we get normally hyperbolic critical points, as the eigenvector $[0~ 0~ 1]^T$ (in the new $(u,v,w)$ coordinate system), corresponding to the only zero eigenvalue, is tangent to the line of critical points. The stability of a set which is normally hyperbolic can be completely classified by considering the signs of the eigenvalues in the remaining directions. So the character of the flow of the phase space on each $z=constant$ plane is identical to that on the $XY$-plane in the previous case. Thus the system (\ref{eq9}-\ref{eq11}) is structurally unstable \cite{0-387-95116-4} at $\lambda=0$ or $\mu=0$ or both. On the other hand, the potential changes its character from runaway to non-runaway as $\lambda$ crosses zero from positive to negative. Thus $\lambda=0$ and $\mu=0$ are the bifurcation values~\cite{1950261}.\bigbreak\n\nModel 2 (\ref{M2}) contains five critical points $L_1-L_5$. For $\lambda>0$, the flow is unstable, and for $\lambda<0$ the flow on the center manifold is stable. Around $L_2$, the character of the vector field is the same as around $L_1$. For $\mu=\pm \sqrt{\frac{3}{2}}$, the flow on the center manifold at $L_3$ or $L_4$ depends on the sign of $\lambda$ (FIG.\ref{L_21} \& FIG.\ref{L_2_1_1}). On the other hand, for $\mu\neq\pm \sqrt{\frac{3}{2}}$, the flow on the center manifold does not depend on $\lambda$.
For $\mu >0$, the flow on the center manifold at $L_5$ moves in the increasing direction of $z$. On the other hand, for $\mu <0$, the flow on the center manifold is in the decreasing direction of $z$. The index of $L_1$ is the same as that of $A_2$. \nFor $\mu=\pm \sqrt{\frac{3}{2}}$ and $\lambda=1$, the index of $L_2|_{XY plane}$ is $-1$ as there are only four hyperbolic sectors. But for $\lambda=2$, there are two hyperbolic sectors and one parabolic sector, so the index is zero. \nThe index of $L_3$ is the same as that of $L_2$. The index of $L_4$ on the $ZX$ or $XY$ plane is zero, as there are two hyperbolic sectors and one parabolic sector for each of $\mu>0$ and $\mu<0$. So it is to be noted that for $\lambda=0, \pm \sqrt{\frac{3}{2}} $ and $\mu=0$ the system is structurally unstable. \n\begin{figure}[htbp!]\n\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L1.png}\n\t\t\caption{}\n\t\t\label{fig:L1}\n\t\end{subfigure}%\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L2.png}\n\t\t\caption{}\n\t\t\label{fig:L2}\n\t\end{subfigure}%\n\t\begin{subfigure}{.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L3mun.png}\n\t\t\caption{}\n\t\t\label{fig:L3n}\n\t\end{subfigure}\n\t\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L3mup.png}\n\t\t\caption{}\n\t\t\label{fig:L3p}\n\t\end{subfigure}%\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L4mup.png}\n\t\t\caption{}\n\t\t\label{fig:L4p}\n\t\end{subfigure}%\n\t\begin{subfigure}{.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{L5mup.png}\n\t\t\caption{}\n\t\t\label{fig:L5}\n\t\end{subfigure}\n\t\n\t\n\t\caption{\label{Model2}\textit{Model 2}: Some interesting qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values for six sets of initial
conditions. (a) The initial position near the point $L_1$. (b) The initial position near the point $L_2$. (c) The initial position near the point $L_3$ and $\mu<-\sqrt{\frac{3}{2}}$. (d) The initial position near the point $L_3$ and $\mu>\sqrt{\frac{3}{2}}$. (e) The initial position near the point $L_4$ and $\mu>\sqrt{\frac{3}{2}}$. (f) The initial position near the point $L_5$ and $\mu>0$. We observe that the limit of the physical parameter $\omega_{total}\rightarrow -1$. In early or present time the scalar field may be in the phantom phase, but the field is attracted to the de-Sitter phase except for (b) and (e). In (e) the scalar field crosses the phantom boundary line and enters into the phantom phase at late time, which would cause a Big Rip.}\n\end{figure}\n\nThe universe experiences a scalar field dominated non-generic evolution near $L_1$ and $L_2$ for $\lambda>0$, and a scalar field dominated generic evolution for $\lambda<0$ or on the $z$-nullcline. Near $L_3$ and $L_4$, a scalar field dominated non-generic evolution of the universe occurs at $\mu \approx \pm \sqrt{\frac{3}{2}}$. At $\mu \approx 0$ a scaling non-generic evolution occurs near $L_5$ (see FIG.\ref{Model2}).\n\bigbreak\n\nModel 3 (\ref{M3}) contains three critical points $R_1-R_3$. $R_1$ is saddle for all values of $\mu$. On the $xy$-plane the index of $R_1$ is the same as that of $A_1$. On the projection of the $xy$-plane, $R_2$ and $R_3$ are stable nodes for all values of $\lambda$. On the center manifold at $R_2$ or $R_3$, the flow is in the increasing direction along the $z$-axis for $\lambda>0$ and in the decreasing direction along the $z$-axis for $\lambda<0$. On the $XZ$ or $YZ$ plane, the index of $R_2$ or $R_3$ is zero, as around each of them there are two hyperbolic sectors and one parabolic sector. Thus we note that, for $\mu=0$ and $\lambda=0$, the stability of the system bifurcates.\\ \nWe observe that no scaling or tracking solutions exist in this specific model, just as in the quintessence\ntheory.
However, the critical points which describe the de Sitter solution do not exist in the case of quintessence for\nthe exponential potential; the universe experiences a fluid dominated non-generic evolution near the critical point $R_1$ and a scalar field dominated non-generic evolution near the critical points $R_2$ and $R_3$. For a sufficiently flat potential, an early or present phantom/non-phantom universe is attracted to the $\Lambda$CDM cosmological model (see FIG. \ref{fig:Model3}).\bigbreak\n\nModel 4 (\ref{M4}) contains four critical points $M_1-M_4$. $M_1-M_3$ are stable nodes for $\left(\lambda^2+3\right)>\lambda\mu$ (index 1) and saddle nodes (index zero) for $\left(\lambda^2+3\right)\leq\lambda\mu$, i.e., the stability of the system bifurcates at $\left(\lambda^2+3\right)=\lambda\mu$. Thus we find a generic evolution for $\left(\lambda^2+3\right)\neq \lambda\mu$ and a non-generic one otherwise. The kinetic dominated solution ($M_1$) and the scalar field dominated solutions ($M_2$ and $M_3$) are stable for $\left(\lambda^2+3\right)>\lambda\mu$. For the energy density, near $M_2$ and $M_3$, we observe that at late times the scalar field dominates, $\Omega_X=\Omega_\phi \rightarrow 1$ and $\Omega_m \rightarrow 0$, while the equation-of-state parameter $\omega_{tot}$ has the limit $\omega_{tot} \rightarrow -1$ for a sufficiently flat potential.\bigbreak\n\nModel 5 (\ref{M5}) contains three critical points $N_1$, $N_2$, $N_3$. For $\mu< -\frac{3}{16}$, the Shilnikov saddle index \cite{Shilnikov} of $N_1$ is $\nu_{N_1}=\frac{\rho_{N_1}}{\gamma_{N_1}}=0.5$ and the saddle value is $\sigma_{N_1}=-\rho_{N_1}+\gamma_{N_1}=0.75$. So the Shilnikov condition \cite{Shilnikov} is satisfied, as $\nu_{N_1}<1$ and $\sigma_{N_1}>0$. The second Shilnikov saddle value is $\sigma^{(2)}_{N_1}=-2\rho_{N_1}+\gamma_{N_1}=0$. So, by L.
Shilnikov's theorem \cite{Shilnikov} there are countably many saddle periodic orbits in a neighborhood of the homoclinic loop of the saddle-focus $N_1$. As $\nu_{N_1}$ is invariant for any choice of $\mu$, Shilnikov's bifurcation does not appear. For $-\frac{3}{16}<\mu < 0$, the vector field near $N_1$ is saddle in character. On the other hand, $N_1$ is saddle for $\mu>0$. So, $\mu=0$ is a bifurcation value for the critical point $N_1$. Similarly, $\lambda=0$ is a bifurcation value for the critical points $N_2$ and $N_3$. We observe scalar field dominated solutions near $N_2$ and $N_3$, which exist at the bifurcation value, i.e., for a sufficiently flat potential, and are attracted to the $\Lambda$CDM cosmological model. \\\n\n\n\begin{figure}[htbp!]\n\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{R1.png}\n\t\t\caption{}\n\t\t\label{fig:R1}\n\t\end{subfigure}%\n\t\begin{subfigure}{0.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{R2.png}\n\t\t\caption{}\n\t\t\label{fig:R2}\n\t\end{subfigure}%\n\t\begin{subfigure}{.34\textwidth}\n\t\n\t\t\includegraphics[width=.9\linewidth]{R3.png}\n\t\t\caption{}\n\t\t\label{fig:R3}\n\t\end{subfigure}\n\t\n\t\n\t\n\t\caption{Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values for each of \textit{Model 3}, \textit{Model 4} and \textit{Model 5} for three sets of initial conditions. The initial positions in (a), (b) and (c) are near \underline{$R_1$, $R_2$ and $R_3$} (\underline{$M_1$, $M_2$ and $M_3$}/\underline{$N_1$, $N_2$ and $N_3$}) respectively. \label{fig:Model3} }\n\end{figure}\n\n\n\section{Brief discussion and concluding remarks \label{conclusion}}\nThe present work deals with a detailed dynamical system analysis of the interacting DM and DE cosmological model in the background of FLRW geometry.
The DE is chosen as a phantom scalar field with a self-interacting potential, while the varying-mass DM (with mass a function of the scalar field) is chosen as dust. The potential of the scalar field and the varying mass of DM are chosen in exponential or power-law form (or a product of them), and five possible combinations of them are studied.\bigbreak\n\textbf{Model 1: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par\nFor case (i), i.e., $\mu\neq 0, \lambda\neq 0$; there are three non-hyperbolic critical points $A_1$, $A_2$, $A_3$, of which $A_1$ corresponds to a DM dominated decelerating phase (dust era), while $A_2$ and $A_3$ are purely DE dominated and represent the $\Lambda$CDM model (i.e., de-Sitter phase) of the universe.\par\nFor case (ii), i.e., $\mu\neq 0, \lambda=0$; there is one critical point and two spaces of critical points. The cosmological consequences of these critical points are similar to case (i).\par\nFor case (iii), i.e., $\mu= 0, \lambda\neq 0$; there is one space of critical points and two distinct critical points. But as before the cosmological analysis is identical to case (i).\par\nFor the fourth case, i.e., $\mu=0, \lambda=0$; there are three spaces of critical points $(S_1,S_2,S_3)$ which are all non-hyperbolic in nature and are identical to the critical points in case (ii). Further, considering the vector fields in the $Z=constant$ plane, it is found that for the critical point $S_1$ every point on the $Z$-axis is a saddle node, while for the critical points $S_2$ and $S_3$ every point on the $Z$-axis is a stable star.\bigbreak\n\textbf{Model 2: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par\nThe autonomous system for this model has five non-hyperbolic critical points $L_i$, $i=1,\ldots,5$. For $L_1$ and $L_2$, the cosmological model is completely DE dominated and the model describes cosmic evolution at the phantom barrier.
The critical points $L_3$ and $L_4$ are DE dominated cosmological solutions ($\mu^2>3$) representing the $\Lambda$CDM model. The critical point $L_5$ corresponds to a ghost (phantom) scalar field and describes the cosmic evolution in the phantom domain ($2\mu^2>3$).\bigbreak\n\textbf{Model 3: $V(\phi)=V_1e^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par\nThere are three non-hyperbolic critical points in this case. The first one (i.e., $R_1$) corresponds to purely DM dominated cosmic evolution describing the dust era, while the other two critical points (i.e., $R_2$, $R_3$) are fully dominated by DE and both describe the cosmic evolution in the phantom era.\bigbreak\n\textbf{Model 4: $V(\phi)=V_1 e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par\nThe autonomous system so formed in this case has four critical points $M_i$, $i=1,\ldots,4$, which may be hyperbolic/non-hyperbolic depending on the parameters involved. The critical point $M_1$ represents DE as a ghost scalar field and describes the cosmic evolution in the phantom domain. For the critical points $M_2$ and $M_3$, the cosmic evolution is fully DE dominated and is also in the phantom era. The cosmic era corresponding to the critical point $M_4$ describes a scaling solution where both DM and DE contribute to the cosmic evolution.\bigbreak\n\textbf{Model 5: $V(\phi)=V_2\phi^{-\lambda} e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_2\phi^{-\mu}e^{-\kappa\mu\phi}$}\par\nThis model is very similar to either model $4$ or model $1$, depending on the choices of the dimensionless variables $x$ and $z$.
For $z=0$, the model reduces to model $4$, while for $x=0$ the model is very similar to model $1$, and hence the cosmological analysis is very similar to that case.\par\nFinally, using the Poincar\'{e} index theorem, the Euler Poincar\'{e} characteristic is determined for the bifurcation analysis of the above cases from the point of view of the cosmic evolution described by the equilibrium points. Lastly, the inflationary era of cosmic evolution is studied using bifurcation analysis.\n\n\begin{acknowledgements}\n\tThe author Soumya Chakraborty is grateful to CSIR, Govt. of India for awarding a Junior Research Fellowship (CSIR Award No: 09/096(1009)/2020-EMR-I) for the Ph.D work.\n\tThe author S. Mishra is grateful to CSIR, Govt. of India for awarding a Senior Research Fellowship (CSIR Award No: 09/096 (0890)/2017-EMR-I) for the Ph.D work. The author Subenoy Chakraborty is thankful to Science and Engineering Research Board (SERB) for awarding MATRICS Research Grant support (File No: MTR/2017/000407).\\ \n\end{acknowledgements}\n\n\bibliographystyle{unsrt}\n\n\section{Introduction} \label{sec:intro} \nNeutrinos of astrophysical and cosmological origin have been crucial for unraveling neutrino masses and properties. Solar neutrinos provided the first evidence for neutrino oscillations, and hence massive neutrinos.
We know that at least two massive neutrinos should exist, as required by the two distinct squared mass differences measured, the atmospheric $\lvert\Delta m^2_{31}\rvert \approx 2.51\cdot 10^{-3}$~eV$^2$ and the solar $\Delta m^2_{21} \approx 7.42\cdot 10^{-5}$~eV$^2$ splittings~\cite{deSalas:2020pgw,Esteban:2020cvm,Capozzi:2021fjo}~\footnote{The current ignorance of the sign of $\Delta m^2_{31}$ is translated into two possible mass orderings. In the \emph{normal} ordering (NO), the total neutrino mass is $\sum m_\nu \gtrsim 0.06$~eV, while in the \emph{inverted} ordering (IO) it is $\sum m_\nu \gtrsim 0.10 $~eV.}. However, neutrino oscillation experiments are not sensitive to the absolute neutrino mass scale. On the other hand, cosmological observations provide the most constraining recent upper bound on the total neutrino mass via relic neutrinos, $\sum m_\nu<0.09$~eV at $95\%$~CL~\cite{DiValentino:2021hoh}, where the sum runs over the distinct neutrino mass states. However, this limit is model-dependent, see for example~\cite{DiValentino:2015sam,Palanque-Delabrouille:2019iyz,Lorenz:2021alz,Poulin:2018zxs,Ivanov:2019pdj,Giare:2020vzo,Yang:2017amu,Vagnozzi:2018jhn,Gariazzo:2018meg,Vagnozzi:2017ovm,Choudhury:2018byy,Choudhury:2018adz,Gerbino:2016sgw,Yang:2020uga,Yang:2020ope,Yang:2020tax,Vagnozzi:2018pwo,Lorenz:2017fgo,Capozzi:2017ipn,DiValentino:2021zxy,DAmico:2019fhj,Colas:2019ret}. \n\nThe detection of supernova (SN) neutrinos can also provide constraints on the neutrino mass, by exploiting the time-of-flight delay~\cite{Zatsepin:1968ktq} experienced by a neutrino of mass $m_\nu$ and energy $E_\nu$:\n\n\begin{equation}\n \label{eq:delay}\n \Delta t = \frac{D}{2c}\left(\frac{m_\nu}{E_{\nu}}\right)^2~,\n\end{equation}\n\n\noindent where $D$ is the distance travelled by the neutrino.
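To fix orders of magnitude, Eq.~\ref{eq:delay} can be evaluated for a Galactic SN; the sketch below uses illustrative values (10~kpc distance, 1~eV mass, 10~MeV energy), which are assumptions for the example, not results of this work:

```python
# Order-of-magnitude sketch of the time-of-flight delay Delta t = (D/2c)(m/E)^2.
# The 10 kpc distance, 1 eV mass and 10 MeV energy are illustrative values.
KPC_M = 3.0857e19        # 1 kpc in metres
C = 2.9979e8             # speed of light in m/s

def tof_delay(D_kpc, m_nu_eV, E_nu_MeV):
    """Delay in seconds of a massive neutrino relative to a massless one."""
    return (D_kpc * KPC_M / C) * 0.5 * (m_nu_eV / (E_nu_MeV * 1e6))**2

dt = tof_delay(10.0, 1.0, 10.0)
print(f"{dt*1e3:.1f} ms")   # a few milliseconds for these values
```

The quadratic dependence on $m_\nu/E_\nu$ is what makes the low-energy neutronization-burst $\nu_e$ particularly sensitive to the mass.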
This method probes the same neutrino mass constrained via laboratory-based kinematic measurements of beta-decay electrons~\footnote{The current limit from the tritium beta decay experiment KATRIN (Karlsruhe Tritium Neutrino) is $m_{\beta}<0.8$~eV~\cite{Aker:2021gma} and the expected sensitivity is 0.2~eV~\cite{Drexlin:2013lha}, both at 90\% CL.}. Using neutrinos from SN1987A~\cite{Kamiokande-II:1989hkh,Kamiokande-II:1987idp,Bionta:1987qt,Alekseev:1988gp,Alekseev:1987ej}, a $95\%$ confidence level (CL) current upper limit of $m_\nu<5.8$~eV~\cite{Pagliaroli:2010ik} has been derived (see also Ref.~\cite{Loredo:2001rx}). Prospects for future SN explosions may reach the sub-eV level~\cite{Pagliaroli:2010ik,Nardi:2003pr,Nardi:2004zg,Lu:2014zma,Hyper-Kamiokande:2018ofw,Hansen:2019giq}. Nevertheless, these forecasted estimates rely on the detection of inverse $\beta$ decay events in water Cherenkov or liquid scintillator detectors, mostly sensitive to $\bar{\nu}_e$ events. An appealing alternative possibility is the detection of the $\nu_e$ neutronization burst exploiting the liquid argon technology at the DUNE far detector~\cite{DUNE:2020zfm,Rossi-Torres:2015rla}. The large statistics and the very distinctive neutrino signal in time will ensure a unique sensitivity to the neutrino mass signature via time delays. \n\n\section{Supernova electron neutrino events} \label{sec:events}\nCore-collapse supernovae emit $99\%$ of their energy ($\simeq 10^{53}$~ergs) in the form of (anti)neutrinos of all flavors with mean energies of $\mathcal{O}(10~\si{\mega\electronvolt})$. The explosion mechanism of a core-collapse SN can be divided into three main phases: the \emph{neutronization burst}, the \emph{accretion phase} and the \emph{cooling phase}.
The first phase, which lasts approximately 25 milliseconds, is due to a fast \emph{neutronization} of the stellar core via electron capture by free protons, causing an emission of electron neutrinos ($e^- + p\rightarrow \nu_e + n$). The flux of $\nu_e$ stays trapped behind the shock wave until the latter reaches sufficiently low densities for neutrinos to be suddenly released. Unlike subsequent phases, the neutronization burst phase has little dependence on the progenitor star properties. In numerical simulations, there is a second \emph{accretion} phase of $\sim 0.5$~s in which the shock wave leads to a hot accretion mantle around the high density core of the neutron star. High luminosity $\nu_e$ and $\bar{\nu}_e$ fluxes are radiated via the processes $e^- + p\rightarrow \nu_e + n$ and $e^+ + n \rightarrow \bar{\nu}_e + p$ due to the large number of nucleons and the presence of a quasi-thermal $e^+e^-$ plasma. Finally, in the \emph{cooling} phase, a hot neutron star is formed. This phase is characterized by the emission of (anti)neutrino fluxes of all species within tens or hundreds of seconds.\n\nFor numerical purposes, we shall make use here of the following quasi-thermal parametrization, which represents well detailed numerical simulations~\cite{Keil:2002in,Hudepohl:2009tyy,Tamborra:2012ac,Mirizzi:2015eza}: \n\begin{equation}\n\label{eq:differential_flux}\n\Phi^{0}_{\nu_\beta}(t,E) = \frac{L_{\nu_\beta}(t)}{4 \pi D^2}\frac{\varphi_{\nu_\beta}(t,E)}{\langle E_{\nu_\beta}(t)\rangle}\,,\n\end{equation}\nwhich describes the differential flux for each neutrino flavor $\nu_\beta$ at a time $t$ after the core bounce of a SN located at a distance $D$.
In Eq.~\ref{eq:differential_flux}, $L_{\nu_\beta}(t)$ is the $\nu_\beta$ luminosity, $\langle E_{\nu_\beta}(t)\rangle$ is the mean neutrino energy and $\varphi_{\nu_\beta}(t,E)$ is the neutrino energy distribution, defined as:\n\begin{equation}\n\label{eq:nu_energy_distribution}\n\varphi_{\nu_\beta}(t,E) = \xi_\beta(t) \left(\frac{E}{\langle E_{\nu_\beta}(t)\rangle}\right)^{\alpha_\beta(t)} \exp{\left\{\frac{-\left[\alpha_\beta(t) + 1\right] E}{\langle E_{\nu_\beta}(t)\rangle}\right\}}\,,\n\end{equation}\n\n\noindent where $\alpha_\beta(t)$ is a \emph{pinching} parameter and $\xi_\beta(t)$ is a unit-area normalization factor.\n\n\n\nThe input values for the luminosity, mean energy and pinching parameter have been obtained from the \texttt{SNOwGLoBES} software \cite{snowglobes}. \texttt{SNOwGLoBES} includes fluxes from the Garching Core-Collapse Modeling Group~\footnote{\url{https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/index.html}}, providing computationally expensive simulation results for a progenitor star of $8.8 M_\odot$~\cite{Hudepohl:2009tyy}.\n\nNeutrinos experience flavor conversion inside the SN as a consequence of their coherent interactions with electrons, protons and neutrons in the medium, being subject to the MSW (Mikheyev-Smirnov-Wolfenstein) resonances associated with the solar and atmospheric neutrino sectors~\cite{Dighe:1999bi}. After the resonance regions, the neutrino mass eigenstates travel incoherently on their way to the Earth, where they are detected as flavor eigenstates.
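For the pinched spectrum of Eq.~\ref{eq:nu_energy_distribution}, the unit-area requirement fixes the normalization to $\xi_\beta=(\alpha_\beta+1)^{\alpha_\beta+1}/\left[\langle E_{\nu_\beta}\rangle\,\Gamma(\alpha_\beta+1)\right]$, which also guarantees a mean energy of exactly $\langle E_{\nu_\beta}\rangle$. The sketch below checks this numerically; the values $\alpha=2.3$ and $\langle E\rangle=11$~MeV are illustrative assumptions, not the Garching inputs:

```python
# Cross-check of the "alpha fit" spectrum normalization; alpha and <E>
# are illustrative values, not taken from the simulation inputs.
import math

alpha, Emean = 2.3, 11.0   # pinching parameter and mean energy in MeV
# Closed-form unit-area normalization factor xi
xi = (alpha + 1)**(alpha + 1) / (Emean * math.gamma(alpha + 1))

def phi(E):
    """Pinched spectrum phi(E) of Eq. (nu_energy_distribution), in 1/MeV."""
    return xi * (E/Emean)**alpha * math.exp(-(alpha + 1)*E/Emean)

# Riemann sums over 0-200 MeV; the tail above 200 MeV is negligible.
dE = 1e-3
norm = sum(phi(i*dE) for i in range(1, 200000)) * dE
mean = sum((i*dE)*phi(i*dE) for i in range(1, 200000)) * dE
assert abs(norm - 1.0) < 1e-3    # unit area
assert abs(mean - Emean) < 1e-2  # mean energy equals <E>
```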
The neutrino fluxes at the Earth ($\\Phi_{\\nu_e}$ and $\\Phi_{\\nu_\\mu}=\\Phi_{\\nu_\\tau}=\\Phi_{\\nu_x}$) can be written as:\n\n\\begin{eqnarray}\n\\label{eq:nue}\n \\Phi_{\\nu_e}&= &p \\Phi^{0}_{\\nu_e} +(1-p) \\Phi^{0}_{\\nu_x}~;\\\\\n \\Phi_{\\nu_\\mu}+\\Phi_{\\nu_\\tau} \\equiv 2\\Phi_{\\nu_x} & =& (1-p) \\Phi^{0}_{\\nu_e} + (1+p) \\Phi^{0}_{\\nu_x}~,\n\\end{eqnarray}\n\n\\noindent where $\\Phi^{0}$ refers to the neutrino flux in the SN interior, and the $\\nu_e$ survival probability $p$ is given by $p = |U_{e3}|^2= \\sin^2 \\theta_{13}$ ($p \\simeq |U_{e2}|^2 \\simeq \\sin^2 \\theta_{12}$) for NO (IO), due to adiabatic transitions in the $H$ ($L$) resonance, which refer to flavor conversions associated with the atmospheric $\\Delta m^2_{31}$ (solar $ \\Delta m^2_{21}$) mass splitting, see e.g.~\\cite{Dighe:1999bi}. Here we neglect possible non-adiabaticity effects arising when the resonances occur near the shock wave \\cite{Schirato:2002tg,Fogli:2003dw,Fogli:2004ff,Tomas:2004gr,Dasgupta:2005wn,Choubey:2006aq,Kneller:2007kg,Friedland:2020ecy}, as well as the presence of turbulence in the matter density \\cite{Fogli:2006xy,Friedland:2006ta,Kneller:2010sc,Lund:2013uta,Loreti:1995ae,Choubey:2007ga,Benatti:2004hn,Kneller:2013ska}. The presence of non-linear collective effects~\\cite{Mirizzi:2015eza,Chakraborty:2016yeg,Horiuchi:2018ofe,Tamborra:2020cul,Capozzi:2022slf} is suppressed by the large flavor asymmetries of the neutronization burst~\\cite{Mirizzi:2015eza}.\n \n\n\n \nEarth matter regeneration effects also affect the neutrino propagation if the SN is shadowed by the Earth as seen from the DUNE detector. The trajectories of the neutrinos depend on the SN location and on the time of the day at which the neutrino burst reaches the Earth. Neutrinos therefore travel a certain distance through the Earth characterized by a zenith angle $\\theta$, analogous to the one usually defined for atmospheric neutrino studies. 
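The flavor swap of Eq.~(\ref{eq:nue}) can be sketched in a few lines; the mixing-angle values below are typical best-fit numbers used only for illustration, and a built-in check is that the total flux $\Phi_{\nu_e}+2\Phi_{\nu_x}$ is conserved.

```python
# Sketch of the adiabatic MSW flavor swap at Earth; sin^2(theta) values
# are illustrative best-fit placeholders, not fitted quantities.
SIN2_TH12, SIN2_TH13 = 0.31, 0.022

def fluxes_at_earth(phi_e0, phi_x0, ordering):
    """Return (Phi_nue, Phi_nux) at Earth from the SN-interior fluxes."""
    p = SIN2_TH13 if ordering == "NO" else SIN2_TH12   # nu_e survival prob.
    phi_e = p * phi_e0 + (1.0 - p) * phi_x0
    phi_x = 0.5 * ((1.0 - p) * phi_e0 + (1.0 + p) * phi_x0)
    return phi_e, phi_x
```

For a pure $\nu_e$ burst ($\Phi^0_{\nu_x}=0$) this reproduces the strong (partial) suppression of the neutronization peak for NO (IO) discussed below.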
This convention assumes $\\cos \\theta=-1$ for upward-going events, \\emph{i.e.} neutrinos that cross a distance equal to the Earth's diameter, and $\\cos \\theta\\geq 0$ for downward-going neutrinos that are un-shadowed by the Earth. An analytical expression for the electron neutrino fluxes after crossing the Earth~\\footnote{In what follows, we shall focus on electron neutrino events, the dominant channel in DUNE.} yields no modifications for NO. \nIn turn, for IO, an approximate formula for the $\\nu_e$ survival probability in Eq.~\\ref{eq:nue} after crossing the Earth, assuming that SN neutrinos have traveled a distance $L(\\cos\\theta)$ inside the Earth in a medium of constant density, reads as~\\cite{Dighe:1999bi,Lunardini:2001pb}:\n\\begin{widetext}\n\\begin{eqnarray}\n\\label{eq:p2e}\np & = & \\sin^2\\theta_{12} + \\sin2\\theta^m_{12} \\,\n \\sin(2\\theta^m_{12}-2\\theta_{12}) \n\\sin^2\\left( \n\\frac{\\Delta m^2_{21} \\sin2\\theta_{12}}{4 E \\,\\sin2\\theta^m_{12}}\\,L(\\cos\\theta) \n\\right)\\,, \n\\end{eqnarray}\n\\end{widetext}\n\\noindent\nwhere $\\theta^m_{12}$ is the effective value of the mixing angle $\\theta_{12}$ in matter for neutrinos:\n\n\\begin{eqnarray} \n\\sin^2 2\\theta^m_{12} = \\frac{\\sin^2 2\\theta_{12}}\n{\\sin^2 2\\theta_{12}+ \\left(\\cos 2\\theta_{12}- \\frac{2\\sqrt{2}G_F N_e E}{\\Delta m^2_{21}}\\right)^2}~.\n\\end{eqnarray}\nIn the expression above, $N_e$ refers to the electron number density in the medium, $\\sqrt{2}G_F N_e (\\textrm{eV})\\simeq 7.6 \\times 10^{-14} Y_e\\rho$, with $Y_e$ and $\\rho$ the electron fraction and the Earth's density in g/cm$^3$ respectively. \n\nOur numerical results are obtained calculating $p$ in Eq.~\\ref{eq:p2e} in the general case of neutrino propagation in multiple Earth layers, with sharp-edge discontinuities between different layers and a mild density dependence within a layer, see \\cite{Lisi:1997yc,Fogli:2012ua}. 
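For a single layer of constant density, Eq.~(\ref{eq:p2e}) can be sketched numerically as follows; the density, $Y_e$ and oscillation parameters below are illustrative placeholders, and natural units ($\hbar=c=1$) are used for the phase.

```python
import numpy as np

# Single-layer sketch of the IO survival probability after Earth crossing;
# rho, Ye, th12 and dm2 are illustrative placeholder values.
def p_nue_IO(E_MeV, L_km, rho=4.5, Ye=0.5, th12=0.5903, dm2=7.5e-5):
    E = E_MeV * 1e6                       # neutrino energy in eV
    V = 7.6e-14 * Ye * rho                # sqrt(2) G_F N_e in eV
    x = 2.0 * V * E / dm2                 # dimensionless matter term
    s2, c2 = np.sin(2.0 * th12), np.cos(2.0 * th12)
    two_thm = np.arctan2(s2, c2 - x)      # 2*theta_12^m in matter
    s2m = np.sin(two_thm)
    L = L_km * 5.068e9                    # km -> eV^-1 (natural units)
    phase = dm2 * s2 / (4.0 * E * s2m) * L
    return np.sin(th12)**2 + s2m * np.sin(two_thm - 2.0 * th12) * np.sin(phase)**2
```

At $L=0$ the oscillatory term vanishes and $p$ reduces to the vacuum value $\sin^2\theta_{12}$, as it should.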
Our method consists of evaluating the evolution operator for the propagation in a single layer using the Magnus expansion \\cite{Magnus_exp}, where the evolution operator is written as the exponential of an operator series. In our case, we stop at the second order of the series. Approximating the electron density by a fourth-order polynomial in the Earth's radius, the integrals involved in the Magnus expansion become analytical. The evolution operator over the entire trajectory in the Earth is simply the product of the operators corresponding to each crossed layer.\n\nThe neutrino interaction rate per unit time and energy in the DUNE far detector is defined as:\n\\begin{equation}\n\\label{eq:rate_DUNE_fun}\nR(t,E) = N_\\text{target}~\\sigma_{\\nu_e\\text{CC}}(E)~\\epsilon(E)~\\Phi_{\\nu_e}(t,E)~,\n\\end{equation}\n\\noindent where $t$ is the neutrino emission time, $E$ is the neutrino energy, $N_\\text{target}=\\num{6.03e32}$ is the number of argon nuclei for a $40$~kton fiducial mass of liquid argon, $\\sigma_{\\nu_e\\text{CC}}(E)$ is the $\\nu_e$ cross-section, $\\epsilon(E)$ is the DUNE reconstruction efficiency and $\\Phi_{\\nu_e}(t,E)$ is the electron neutrino flux reaching the detector per unit time and energy. The total number of expected events is given by $R\\equiv \\int R(t,E)\\mathop{}\\!\\mathrm{d} t \\mathop{}\\!\\mathrm{d} E$.\n\nAs far as cross-sections are concerned, liquid argon detectors are mainly sensitive to electron neutrinos via their charged-current interactions with $^{40}$Ar nuclei, $\\nu_e + {^{40} Ar} \\rightarrow e^{-} + {^{40} K^{*}}~$, through the observation of the final-state electron plus the de-excitation products (gamma rays, ejected nucleons) from $^{40} K^{*}$. 
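The event-rate integral $R=\int R(t,E)\,\mathrm{d}t\,\mathrm{d}E$ of Eq.~(\ref{eq:rate_DUNE_fun}) amounts to a simple quadrature once the flux, cross-section and efficiency are tabulated; a minimal sketch with toy input arrays (not the MARLEY/\texttt{SNOwGLoBES} tables):

```python
import numpy as np

N_TARGET = 6.03e32          # argon nuclei in a 40 kton fiducial mass

def total_events(t, E, flux, xsec, eff):
    """R = int R(t,E) dt dE on uniform grids, with
    R(t,E) = N_target * sigma(E) * eps(E) * Phi(t,E); toy inputs."""
    rate = N_TARGET * xsec[None, :] * eff[None, :] * flux   # events/(s MeV)
    dt, dE = t[1] - t[0], E[1] - E[0]
    return rate.sum() * dt * dE
```

A Riemann sum suffices here since the tabulated quantities are smooth on the ms/MeV grids used in the analysis.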
We use the MARLEY~\\footnote{MARLEY (Model of Argon Reaction Low Energy Yields) is a Monte Carlo event generator for neutrino interactions on argon nuclei at energies of tens of MeV and below, see \\url{http://www.marleygen.org/} and Ref.~\\cite{Gardiner:2021qfr}.} charged-current $\\nu_e$ cross-section on $^{40}$Ar, implemented in \\texttt{SNOwGLoBES} \\cite{snowglobes}. Concerning event reconstruction, we assume the efficiency curve as a function of neutrino energy given in Ref.~\\cite{DUNE:2020zfm}, for the most conservative case quoted there, namely a 5~MeV deposited-energy threshold. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{IO_eventsperbin_1ms.pdf} \n\\caption{\\label{fig:events}Number of $\\nu_e$ events per unit time in the DUNE far detector. The plot zooms into the first 50~ms since core bounce. A SN distance of 10~kpc is assumed. Several histograms are shown: neglecting oscillations (both in the SN and in the Earth), as well as including oscillations for the NO and IO cases. For IO, we show the variation of the Earth matter effects with zenith angle $\\theta$.}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{fig:events} shows the number of $\\nu_e$ events as a function of emission time at the DUNE far detector from a SN explosion at $10$~kpc from Earth, for negligible time delays due to non-zero neutrino masses. We illustrate the case where no oscillations are considered. We also account for oscillations in the NO and IO cases, the latter for several possible SN locations with respect to the Earth. The neutronization burst is almost entirely (partially) suppressed in the normal (inverted) mass ordering.\n\nFor a SN located at $D=10$~kpc from the Earth and without Earth matter effects, $R$ is found to be 860, 1372 and 1228 for the no oscillations, NO and IO cases, respectively. 
In other words, the largest total event rate is obtained for the largest swap of electron with muon/tau neutrinos in the SN interior, \\emph{i.e.} the smallest value of $p$ in Eq.~\\ref{eq:nue}, corresponding to the NO case. This can be understood from the larger average neutrino energy at production of muon/tau neutrinos compared to electron neutrinos, resulting in a higher (on average) neutrino cross-section and reconstruction efficiency. \n\nFinally, as shown in Fig.~\\ref{fig:events}, Earth matter effects are expected to have a mild impact on the event rate in all cases. The $\\nu_e$ flux is left unchanged in the normal ordering, while Earth matter effects modify slightly the neutronization burst peak in the IO case. The total number of events becomes $R=1206, 1214, 1260, 1200$ for IO and $\\cos\\theta = -0.3,-0.5,-0.7,-1$, respectively. \n\n\n\\section{Neutrino mass sensitivity} \\label{sec:likelihood}\nIn order to compute the DUNE sensitivity to the neutrino mass, we adopt an ``unbinned'' maximum likelihood method similar to the one in \\cite{Pagliaroli:2010ik}. \n\nWe start by generating many DUNE toy experiment datasets (typically a few hundred) for each neutrino oscillation and SN distance scenario, assuming massless neutrinos. For each dataset, the time/energy information of the $R$ generated events is sampled following the parametrization of Eq.~\\ref{eq:rate_DUNE_fun}, and events are sorted in time-ascending order. \n\nFurthermore, we assume a $10\\%$ fractional energy resolution in our $\\mathcal{O}$(10~MeV) energy range of interest, see~\\cite{DUNE:2020zfm}, and smear the neutrino energy of each generated event accordingly. We assume perfect time resolution for our studies. On the one hand, DUNE's photon detection system provides a time resolution better than 1~$\\mu$s~\\cite{DUNE:2020zfm}, implying a completely negligible smearing effect. 
On the other hand, even in the more conservative case of non-perfect matching between TPC and optical flash information, the DUNE charge readout alone yields a time resolution of order 1~ms~\\cite{DUNE:2020ypp}. While not completely negligible, the time smearing is expected to have a small impact also in this case, considering the typical 25~ms duration of the SN neutronization burst.\n\nOnce events are generated for each DUNE dataset, we proceed with our minimization procedure. The two free parameters constrained in our fit are an offset time $t_\\text{off}$ between the moment when the earliest SN burst neutrino reaches the Earth and the detection of the first event $i=1$, and the neutrino mass $m_\\nu$. The fitted emission times $t_{i,\\text{fit}}$ for each event $i$ depend on these two fit parameters as follows:\n\\begin{equation}\n\\label{eq:emission_t}\nt_{i,\\text{fit}} = \\delta t_i - \\Delta t_{i}(m_\\nu) + t_\\text{off}\\,,\n\\end{equation}\nwhere $\\delta t_i $ is the time at which the neutrino interaction $i$ is measured in DUNE (with the convention that $\\delta t_1\\equiv 0$ for the first detected event), $\\Delta t_i(m_\\nu)$ is the delay induced by the non-zero neutrino mass (see Eq.~\\ref{eq:delay}), and $t_\\text{off}$ is the offset time. We do not include any free parameter describing the SN emission model uncertainties in our fit. \n\nNeglecting backgrounds and all constant (irrelevant) factors, our likelihood function $\\mathcal{L}$ \\cite{Pagliaroli:2008ur} reads as\n\\begin{equation}\n\\label{eq:likelihood_fun}\n\\mathcal{L}(m_{\\nu},t_\\text{off}) = \\prod_{i=1}^{R}\\int R(t_i,E)G_i(E)\\mathop{}\\!\\mathrm{d} E~, \n\\end{equation}\n\\noindent where $G_i$ is a Gaussian distribution with mean $E_i$ and sigma $0.1E_i$, accounting for the energy resolution. The estimation of the $m_\\nu$ fit parameter is done by marginalizing over the nuisance parameter $t_\\text{off}$. 
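Equation~(\ref{eq:emission_t}) can be sketched directly; since Eq.~(\ref{eq:delay}) itself is not reproduced in this excerpt, the sketch assumes the standard time-of-flight delay $\Delta t = (D/2c)(m_\nu/E)^2$, which evaluates to about 5.14~ms for $m_\nu=1$~eV, $E=10$~MeV and $D=10$~kpc.

```python
# Assumed standard time-of-flight delay (Eq. eq:delay is not shown in this
# excerpt): Dt = (D/2c)(m/E)^2 ~ 5.14 ms for m=1 eV, E=10 MeV, D=10 kpc.
def delay(E_MeV, m_eV, D_kpc=10.0):
    return 5.14e-3 * (m_eV * 10.0 / E_MeV) ** 2 * (D_kpc / 10.0)   # seconds

def emission_time(dt_i, E_MeV, m_eV, t_off, D_kpc=10.0):
    """t_fit = delta t_i - Delta t_i(m_nu) + t_off, Eq. (eq:emission_t)."""
    return dt_i - delay(E_MeV, m_eV, D_kpc) + t_off
```

The quadratic $1/E^2$ dependence is what makes the low-energy tail of each event's resolution-smeared energy relevant to the fit.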
For each fixed $m_\\nu$ value, we minimize the following $\\chi^2$ function:\n\\begin{equation}\n\\label{eq:chi2_fun}\n\\chi^2(m_{\\nu}) = -2 \\log(\\mathcal{L}(m_{\\nu},t_\\text{off,best}))~,\n\\end{equation}\n\\noindent where $\\mathcal{L}(m_{\\nu},t_\\text{off,best})$ indicates the maximum likelihood at this particular $m_\\nu$ value. \n\nThe final step in our analysis is the combination of all datasets for the same neutrino oscillation and SN distance scenario, in order to evaluate the impact of statistical fluctuations. For each $m_\\nu$ value, we compute the mean and the standard deviation of all toy dataset $\\chi^2$ values. In order to estimate the allowed range in $m_\\nu$, the $\\Delta\\chi^2$ difference between all mean $\\chi^2$ values and the global mean $\\chi^2$ minimum is computed. The mean 95\\% CL sensitivity to $m_\\nu$ is then defined as the largest $m_\\nu$ value satisfying $\\Delta \\chi^2<3.84$. The $\\pm 1\\sigma$ uncertainty on the 95\\% CL $m_\\nu$ sensitivity can be computed similarly, including in the $\\Delta\\chi^2$ evaluation also the contribution from the standard deviation of all toy dataset $\\chi^2$ values.\n\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{chi2_10kpc.pdf}\n\\caption{\\label{fig:chi2}$\\Delta\\chi^2(m_\\nu)$ profiles as a function of neutrino mass $m_\\nu$, for DUNE generated samples assuming massless neutrinos and a SN distance of 10~kpc. We show the no-oscillations case together with the results for NO and IO. The mean sensitivities and their $\\pm 1\\sigma$ uncertainties are shown with solid lines and filled bands, respectively. The horizontal dotted line depicts the $95\\%$~CL.}\n\\end{center}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\caption{Mean and standard deviation of the $95\\%$~CL sensitivity on the neutrino mass from a sample of DUNE SN datasets at $D=10$~kpc, for different neutrino oscillation scenarios. 
For the IO case, we give sensitivities for different zenith angles $\\theta$.}\n\\label{tab:m_nu_mass_bounds}\n\\begin{tabular}{@{\\extracolsep{0.5cm}}ccc@{\\extracolsep{0cm}}}\n\\toprule\nNeutrino mass ordering & $\\cos\\theta$ & $m_\\nu$(eV) \\\\\n\\midrule\nNo oscillations & $0$ & $0.51^{+0.20}_{-0.20}$ \\\\\n\\midrule\nNormal Ordering & $0$ & $2.01^{+0.69}_{-0.55}$ \\\\\n\\midrule\n\\multirow{5}*{Inverted Ordering} & $0$ & $0.91^{+0.31}_{-0.33}$ \\\\\n & $-0.3$ & $0.85^{+0.33}_{-0.30}$ \\\\\n & $-0.5$ & $0.88^{+0.29}_{-0.33}$ \\\\\n & $-0.7$ & $0.91^{+0.30}_{-0.32}$ \\\\\n & $-1$ & $0.87^{+0.32}_{-0.28}$ \\\\\n\\bottomrule \n\\end{tabular}\n\\end{table} \n\n\nOur statistical procedure, and its results for a SN distance of $D=10$~kpc, can be seen in Fig.~\\ref{fig:chi2}. The $\\Delta\\chi^2$ profiles as a function of neutrino mass are shown for no oscillations, and for oscillations in the SN environment assuming either NO or IO. Earth matter effects have been neglected in all cases. After including Earth matter effects as previously described, only the IO expectation is affected. Table~\\ref{tab:m_nu_mass_bounds} reports our results on the mean and standard deviation of the $m_{\\nu}$ sensitivity values for different $\\cos\\theta$ values, that is, for different angular locations of the SN with respect to the Earth.\n\nAs can be seen from Fig.~\\ref{fig:chi2} and Tab.~\\ref{tab:m_nu_mass_bounds}, 95\\% CL sensitivities in the 0.5--2.0~eV range are expected. The best results, with sub-eV reach, are expected for the no-oscillations and IO scenarios. Despite having the largest overall event statistics, $R=1372$, the NO case has the worst reach of the three, of order 2.0~eV. This result clearly indicates the importance of the shape information, in particular of the sharp neutronization burst time structure visible in Fig.~\\ref{fig:events} only for the no-oscillations and IO cases. 
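The profiling and interval extraction described above can be sketched as follows, for a precomputed log-likelihood table on $(m_\nu, t_\text{off})$ grids (the table itself is a toy stand-in for the per-dataset likelihoods of Eq.~\ref{eq:likelihood_fun}):

```python
import numpy as np

# Sketch of the chi^2 profiling of Eq. (eq:chi2_fun): t_off is profiled
# out at each m_nu by maximizing the log-likelihood over its grid.
def delta_chi2(logL):
    """logL[i, j] = log L(m_i, toff_j); return Delta chi^2 versus m."""
    chi2 = -2.0 * logL.max(axis=1)     # chi^2(m) at t_off,best
    return chi2 - chi2.min()

def sensitivity_95(m_grid, dchi2):
    """Largest m_nu on the grid with Delta chi^2 < 3.84 (95% CL, 1 dof)."""
    return m_grid[dchi2 < 3.84].max()
```

For the combined result, the same construction is applied to the mean of the toy-dataset $\chi^2$ curves.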
Table~\\ref{tab:m_nu_mass_bounds} also shows that oscillations in the Earth's interior barely affect the neutrino mass sensitivity. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{mass_sensitivity_comparison_errbar.pdf} \n\\caption{\\label{fig:distance}Dependence of the $95\\%$~CL neutrino mass sensitivity on the distance $D$ from Earth at which the SN explodes. The mean and standard deviation of the expected sensitivity values are shown with solid lines and filled bands, respectively.}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{fig:distance} shows how the $95\\%~$CL sensitivity on the neutrino mass varies with the SN distance $D$. Both the mean and the standard deviation of the expected sensitivity values are shown. In all scenarios, the sensitivity to $m_\\nu$ worsens by about a factor of 2 as the SN distance increases from 5 to 25~kpc. As is well known, as the distance $D$ increases, the reduced event rate ($R\\propto 1/D^2$) tends to be compensated by the increased time delays for a given $m_\\nu$ ($\\Delta t_i(m_\\nu)\\propto D$). Our analysis shows that this compensation is only partial, and better sensitivities are obtained for nearby SNe. \n\n\n\n\\section{Conclusions} \\label{sec:conclusions}\n\nThe capability to detect the electron neutrino flux component from a core-collapse SN in our galactic neighborhood makes large liquid argon detectors powerful observatories to obtain constraints on the absolute value of the neutrino mass via time-of-flight measurements.\nExploiting the signal coming from charged-current interactions of $\\nu_e$ with argon nuclei, a 0.9~eV sensitivity on the absolute value of the neutrino mass has been obtained in DUNE for the inverted ordering (IO) of neutrino masses, a SN distance of 10~kpc and at 95\\% CL. The sensitivity is expected to be significantly worse in the normal ordering (NO) scenario, 2.0~eV for the same SN distance and confidence level. 
The sensitivity difference between the two orderings demonstrates the benefit of detecting the $\\nu_e$ neutronization burst, whose sharp time structure would be almost entirely suppressed in NO, while it should be clearly observable in DUNE if the mass ordering is IO. The mild effects of oscillations induced by the Earth matter, affecting only the inverted mass ordering, and of the SN distance from Earth, have been studied. The DUNE sensitivity reach appears to be competitive with both laboratory-based direct neutrino mass experiments (such as KATRIN) and next-generation SN observatories primarily sensitive to the $\\bar{\\nu}_e$ flux component (such as Hyper-Kamiokande and JUNO).\n\n\n\n\\begin{acknowledgments}\nThis work has been supported by the Spanish grants FPA2017-85985-P, PROMETEO/2019/083 and PROMETEO/2021/087, and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019/860881-HIDDeN). The work of FC is supported by GVA Grant No. CDEIGENT/2020/003.\n\\end{acknowledgments}\n\n\\section{Introduction}\nSpin-orbit interaction (SOI) plays an important role in the widely\nstudied spin-related effects and spintronic devices. In the latter\nit can be either directly utilized to create spatial separation of\nthe spin-polarized charge carriers or indirectly influence the device\nperformance through the spin-decoherence time. In 2D structures two\nkinds of SOI are known to be of most importance, namely the Rashba\nand Dresselhaus mechanisms. The first, characterized by the parameter\n$\\alpha$, is due to structure inversion asymmetry (SIA), while the\nsecond, characterized by $\\beta$, is due to bulk inversion\nasymmetry (BIA). 
Both contributions reveal themselves most clearly when the values of $\\alpha$ and $\\beta$ are comparable.\nIn this case a number of interesting effects occur: the electron\nenergy spectrum becomes strongly anisotropic \\cite{AnisotrSpectrum},\nthe electron spin relaxation rate becomes dependent on the spin\norientation in the plane of the quantum well\n\\cite{AverkievObserved}, and a magnetic breakdown should be observed in\nthe Shubnikov--de Haas effect~\\cite{magn}. The energy spectrum\nsplitting due to SOI can be observed in rather well-developed\nexperiments, such as those based on the Shubnikov--de Haas effect. However,\nthese experiments can hardly distinguish the partial contributions of\nthe two mechanisms, leaving the determination of the relation between\n$\\alpha$ and $\\beta$ a more challenging task. At the same\ntime, in some important cases the spin relaxation time $\\tau_s$ and spin\npolarization strongly depend on the $\\frac{\\alpha}{\\beta}$ ratio. In\nthis paper we consider the tunneling between 2D electron layers,\nwhich turns out to be sensitive to the relation between the Rashba and\nDresselhaus contributions. The specific feature of the tunneling in\nthe system under consideration is that energy and in-plane\nmomentum conservation put tight restrictions on the tunneling.\nWithout SOI the tunneling conductance exhibits a delta-function-like\nmaximum at zero bias, broadened by elastic scattering in the layers\n\\cite{MacDonald} and fluctuations of the layers' width\n\\cite{VaksoFluctuations}. Such a behavior was indeed observed in a\nnumber of experiments \\cite{Eisenstein,Turner,Dubrovski}. Spin-orbit\ninteraction splits the electron spectrum into two subbands in each\nlayer. Energy and momentum conservation can then be fulfilled for\ntunneling between opposite subbands of the layers at a finite\nvoltage corresponding to the subband splitting. 
However, if the\nparameters of SOI are equal for the left and right layers, the tunneling\nremains prohibited due to the orthogonality of the corresponding spinor\neigenstates. In \\cite{Raichev} it was pointed out that this\nrestriction can also be eliminated if the Rashba parameters are\ndifferent for the two layers. A structure design was proposed\n\\cite{Raikh} where exactly opposite values of the Rashba parameters\nresult from the built-in electric field in the left layer being\nopposite to that in the right layer. Because the SOI of Rashba type\nis proportional to the electric field, this results in\n$\\alpha^R=-\\alpha^L$, where $\\alpha^L$ and $\\alpha^R$ are the Rashba\nparameters for the left and right layers respectively. In this case\nthe peak of the conductance should occur at the voltage $U_0$ corresponding\nto the energy of SOI: $eU_0=\\pm2\\alpha k_F$, where $k_F$ is the Fermi\nwavevector. In this paper we consider arbitrary Rashba and\nDresselhaus contributions and show how qualitatively different\nsituations can be realized depending on their partial impact. In\nsome cases the structure of the electron eigenstates suppresses\ntunneling at any voltage. In this case scattering becomes important, as it\nrestores the features of the current-voltage characteristic that contain\ninformation about the SOI parameters. Finally, the parameters $\\alpha$\nand $\\beta$ can be obtained in a tunneling experiment which, unlike\nother spin-related experiments, requires neither a magnetic field nor\npolarized light.\n\\section{Calculations}\nWe consider two 2D electron layers separated by a potential barrier at\nzero temperature (see Fig.\\ref{fig:layers}). 
We shall consider only\none level of size quantization and a not-too-narrow barrier, so that\nthe electron wavefunctions in the left and right layers overlap\nweakly.\nThe system can be described by the phenomenological tunneling Hamiltonian \\cite{MacDonald,MacDonald2,VaksoFluctuations}\n\\begin{figure}[h]\n\\leavevmode\n \\centering\\epsfxsize=180pt \\epsfbox[30 530 500 760]{fig1.eps}\n\\caption{\\label{fig:layers} Energy diagram of two 2D electron\nlayers.}\n\\end{figure}\n\\begin{equation}\n\\label{HT0} H=H_{0}^L+H_{0}^R+H_T,\n\\end{equation}\nwhere $H_{0}^L,H_{0}^R$ are the partial Hamiltonians for the left\nand right layers respectively, and $H_T$ is the tunneling term. Taking into\naccount the elastic scattering and SOI in the layers, the partial\nHamiltonians and the tunneling term have the following form in\nthe secondary quantization representation:\n\\begin{equation}\n\\label{eqH}\n\\begin{array}{l}\nH_{0}^l = \\sum\\limits_{k,\\sigma} {\\varepsilon^l_{k} c^{l+}_{k\\sigma}\n c^l_{k\\sigma } } + \\sum\\limits_{k,k',\\sigma} {V^l_{kk'} c^{l+}_{k\\sigma}c^l_{k'\\sigma }} + H^l_{SO} \\\\\nH_T = \\sum\\limits_{k,k',\\sigma,\\sigma'} {T_{kk'\\sigma\\sigma'}\\left( {c^{L+}_{k\\sigma} c^{R}_{k'\\sigma'} + c^{R+}_{k'\\sigma'} c^L_{k\\sigma} } \\right)}. \\\\\n \\end{array}\n\\end{equation}\nHere the index $l$ designates the layer and takes the\nvalues $l=R$ for the right layer and $l=L$ for the left layer. By $k$,\nhere and throughout the paper, we denote the wavevector\naligned parallel to the layer planes, and $\\sigma$ denotes the spin\npolarization, taking the values $\\sigma=\\pm 1/2$.\n$\\varepsilon_k^l$ is the energy of an electron in the layer $l$\nhaving in-plane wavevector $k$. 
It can be expressed as:\n\\begin{equation}\n\\label{spectrum}\n \\varepsilon _k^l = \\varepsilon+\\varepsilon_0^l+\\Delta^l,\n \\end{equation}\nwhere $\\varepsilon=\\frac{\\hbar^2k^2}{2m}$, $m$ being the electron effective mass, and $\\varepsilon_0^l$ and $\\Delta^l$ are the size\nquantization energy and the energy shift due to the external voltage for\nthe layer $l$. We shall also use the quantity $\\Delta^{ll'}$ defined\nas\n$\\Delta^{ll'}=(\\Delta^l-\\Delta^{l'})+(\\varepsilon_0^l-\\varepsilon_0^{l'})$.\nSimilar\n notation will be used for the spin polarization denoted by the indices $\\sigma$, $\\sigma'$.\n The second term in the Hamiltonian (\\ref{eqH}) contains $V_{kk'}^l$, the matrix element of the scattering operator.\n We consider only elastic scattering. The tunneling\nterm $H_T$ in (\\ref{eqH}) is described by the tunneling constant\n$T_{kk'\\sigma\\sigma'}$, which has the meaning of the size-quantization level splitting due to\nthe wavefunction overlap. By lowercase $t$ we shall denote the\noverlap integral itself. Our consideration is valid only for the\ncase of weak overlap, i.e. $t\\ll1$. Parametrically, $T\\sim\nt\\varepsilon_F$, where $\\varepsilon_F$ is the electron Fermi\nenergy. The term $H^{l}_{SO}$ describes the spin-orbit part of the\nHamiltonian:\n\\begin{equation}\n\\label{eqSOH}\n \\hat{H}^l_{SO}=\\alpha^l \\left( \\bm{\\sigma} \\times \\bm{k}\n\\right)_z + \\beta^{l} \\left( {\\sigma _x k_x - \\sigma _y k_y }\n\\right),\n\\end{equation}\nwhere $\\sigma_i$ are the Pauli matrices and $\\alpha^l,\\beta^l$ are\nrespectively the parameters of the Rashba and Dresselhaus interactions\nfor the layer $l$. 
In the secondary quantization representation:\n\\begin{eqnarray}\n \\hat {H}_{SO}^l =\\alpha^l \\sum\\limits_k {\\left( {k_y\n-ik_x } \\right)c_{k\\sigma }^{l+} c_{k\\sigma '}^l +} \\left( {k_y\n+ik_x }\n\\right)c_{k\\sigma '}^{l+} c_{k,\\sigma }^l \\nonumber \\\\\n +\\beta^l \\sum\\limits_k\n{\\left( {k_x -ik_y } \\right)c_{k\\sigma }^{l+} c_{k\\sigma '}^l +}\n\\left( {k_x +ik_y } \\right)c_{k\\sigma '}^{l+} c_{k\\sigma }^l\n\\label{eqSOHc}\n\\end{eqnarray}\nThe operator of the tunneling current can be expressed as\n\\cite{MacDonald}:\n\\begin{equation}\n\\label{current0}\n \\hat{I} = \\frac{{ie}}{\\hbar\n}\\sum\\limits_{k,k',\\sigma,\\sigma'} T_{kk'\\sigma\\sigma'}\n\\left(\\hat\\rho_{kk'\\sigma\\sigma'}^{RL}-\\hat\\rho_{k'k\\sigma'\\sigma}^{LR}\n\\right),\n\\end{equation}\nwhere\n$\\hat\\rho_{kk'\\sigma\\sigma'}^{ll'}=c_{k,\\sigma}^{l+}c_{k',\\sigma'}^{l'}$.\n We assume that the in-plane momentum and the spin\nprojection are conserved in the tunneling event, so the tunneling\nconstant $T_{kk'\\sigma\\sigma'}$ has the form\n$T_{kk'\\sigma\\sigma'}=T\\delta_{kk'}\\delta_{\\sigma\\sigma'}$, where\n$\\delta$ is the Kronecker delta. The tunneling current is then\ngiven by\n \\begin{equation}\n \\label{current}\n I = \\frac{ie}{\\hbar}\nT \\int dk\\: \\mathrm{Tr} \\left( \\left<\\hat\\rho^{RL}_{k\\sigma}\\right>\n-\\left<\\hat\\rho^{LR}_{k\\sigma}\\right>\\right),\n\\end{equation}\nwhere $\\left< \\cdot \\right>$ denotes the expectation value in the quantum-mechanical\nsense. For further calculations it is convenient to introduce the vector\noperator\n$\\bm{\\hat{S}}^{ll'}_{kk'}=\\left\\{\\hat{S_0},\\bm{\\hat{s}}\\right\\}=\\left\\{\\mathrm{Tr}\\left(\\hat\\rho^{ll'}_{kk'\\sigma\\sigma'}\\right),\\mathrm{Tr}\\left({\\bm\n\\sigma}\\hat\\rho^{ll'}_{kk'\\sigma\\sigma'}\\right) \\right\\}$. This\nvector fully determines the current, because the latter can be\nexpressed through the difference\n$\\hat{S}^{RL}_{0k}-\\hat{S}^{LR}_{0k}$. 
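The spin splitting implied by Eq.~(\ref{eqSOH}) can be checked numerically; a minimal sketch diagonalizes the $2\times2$ single-particle form of $\hat{H}_{SO}$ and compares it with the closed-form splitting $\pm k\sqrt{\alpha^2+\beta^2+2\alpha\beta\sin 2\varphi}$:

```python
import numpy as np

# 2x2 single-particle form of Eq. (eqSOH):
# H_SO = alpha (sigma x k)_z + beta (sigma_x k_x - sigma_y k_y).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def so_levels(alpha, beta, kx, ky):
    H = alpha * (sx * ky - sy * kx) + beta * (sx * kx - sy * ky)
    return np.linalg.eigvalsh(H)            # ascending: [-Delta, +Delta]

def so_split_analytic(alpha, beta, kx, ky):
    k = np.hypot(kx, ky)
    phi = np.arctan2(ky, kx)
    return k * np.sqrt(alpha**2 + beta**2 + 2.0 * alpha * beta * np.sin(2.0 * phi))
```

Note that for $\alpha=\beta$ and $\bm{k}$ along the $[1,\bar{1}]$ direction the splitting vanishes, which is the strong spectrum anisotropy mentioned in the Introduction.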
The time evolution of\n$\\bm{\\hat{S}}^{ll'}_{kk'}$ is governed by:\n\\begin{equation}\n\\label{drodt}\n\\frac{d\\bm{\\hat{S}}_{kk'}^{ll'}}{dt}=\\frac{i}{\\hbar}[H,\\bm{\\hat{S}}_{kk'}^{ll'}]\n\\end{equation}\nFollowing the standard reasoning \\cite{Luttinger}, we assume an\nadiabatic onset of the interaction with characteristic time\n$w^{-1}$. We will set $w=0$ in the final expression. With this,\n(\\ref{drodt}) turns into:\n\\begin{equation}\n\\label{drodt0}\n(\\bm{\\hat{S}}_{kk'}^{ll'}-\\bm{\\hat{S}}_{kk'}^{(0)ll'})w=\\frac{i}{\\hbar}[H,\\bm{\\hat{S}}_{kk'}^{ll'}]\n\\end{equation}\nHere $\\bm{\\hat{S}}_{kk'}^{(0)ll'}$ represents the stationary\nsolution of (\\ref{drodt}) without interaction. By interaction here\nwe mean the tunneling and the elastic scattering by impurities, but\nnot the external voltage. The role of the latter is merely to shift\nthe layers by $eU$ on the energy scale. From the interaction defined in\nthis way it immediately follows that the only non-zero elements\nof $\\bm{\\hat{S}}_{kk'}^{(0)ll'}$ are those with $l=l'$ and $k=k'$. 
In\nfurther abbreviations we will avoid duplication of the indices, i.e.\\\nwrite a single $l$ instead of $ll$ and $k$ instead of $kk$:\n\\begin{equation}\n\\label{Sdiag}\n\\bm{\\hat{S}}_{kk'}^{(0)ll'}=\\bm{\\hat{S}}_{k}^{(0)l}\\delta_{kk'}\\delta_{ll'}\n\\end{equation}\n Using the fermion anticommutation rules\n \\begin{eqnarray*}\n \\left\\{ {c_i c_k } \\right\\} = \\left\\{ {c_i^ + c_k^ + } \\right\\} = 0 \\\\\n \\left\\{ {c_i c_k^ + } \\right\\} = \\delta _{ik}\n \\end{eqnarray*}\nthe calculations, performed in a way similar to \\cite{Luttinger},\nbring us to the following system of equations\n with respect to\n$\\bm{\\hat{S}}_{k}^{ll'}$:\n\\begin{eqnarray}\n 0= \\left( {\\Delta^{ll'}+i\\hbar w } \\right){\\bf{\\hat\nS}}_k^{ll'} + T\\left( {{\\bf{\\hat S}}_k^{l'} - {\\bf{\\hat S}}_k^l }\n\\right)+{\\bf{M(}}k{\\bf{)\\hat S}}_k^{ll'} \\nonumber \\\\\n- \\sum\\limits_{k'} {\\left( {\\frac{{A_{kk'} {\\bf{\\hat S}}_k^{ll'} -\nB_{kk'} {\\bf{\\hat S}}_{k'}^{ll'} }}{{ {\\varepsilon' - \\varepsilon\n-\\Delta^{ll'} } + i\\hbar w}} + \\frac{{B_{kk'} {\\bf{\\hat S}}_k^{ll'}\n- A_{kk'} {\\bf{\\hat S}}_{k'}^{ll'} }}{{ {\\varepsilon -\n\\varepsilon' -\\Delta^{ll'} } + i\\hbar w}}} \\right)}\n \\label{system1}\n \\end{eqnarray}\n\\begin{eqnarray}\ni\\hbar w\\left( {{\\bf{\\hat S}}_k^{\\left( 0 \\right)l} - {\\bf{\\hat\nS}}_k^l } \\right) = T\\left( {{\\bf{\\hat S}}_k^{l'l} - {\\bf{\\hat\nS}}_k^{ll'} } \\right) + {\\bf{M}}(k){\\bf{\\hat S}}_k^l \\nonumber \\\\ +\n\\sum\\limits_{k'} { {\\frac{{2i\\hbar wA_{kk'} \\left( {{\\bf{\\hat\nS}}_k^l - {\\bf{\\hat S}}_{k'}^{l'} } \\right)}}{{\\left( {\\varepsilon'\n- \\varepsilon } \\right)^2 + \\left( {\\hbar w} \\right)^2 }}} },\n\\label{system2}\n\\end{eqnarray}\nwhere $\\bm{M}$ is a known matrix depending on $k$ and the parameters of\nspin-orbit interaction in the layers. 
Here we have also introduced the\nquadratic forms of the impurity potential matrix elements:\n\\begin{eqnarray}\nA_{kk'} \\equiv \\left| {V_{k'k}^{l} } \\right|^2 \\nonumber \\\\\nB_{kk'} \\equiv V_{k'k}^{l} V_{kk'}^{l'}\n\\label{correlators}\n\\end{eqnarray}\nSince (\\ref{system1}) and (\\ref{system2}) comprise a system of linear\nintegral equations, these quantities enter the expression\n(\\ref{current}) for the current linearly and can themselves be\naveraged over the spatial distribution of the impurities. In order to\nperform this averaging we assume a short-range impurity potential:\n\\begin{equation}\n\\label{ImpuritiesPotential} V\\left( r \\right) = \\sum\\limits_a\n{V_0^{} \\delta \\left( {r - r_a } \\right)}\n\\end{equation}\nThe averaging immediately shows that the correlators\n$\\left< A_{kk'} \\right>\\equiv A$ and $\\left< B_{kk'} \\right>\\equiv B$\nhave different parametric dependences on the tunneling transparency\n$t$, namely\n\\begin{equation}\n\\label{T2}\n \\frac{B}{A}\\sim t^{2}\\sim T^2\n \\end{equation}\nWe emphasize that this result holds for a non-correlated distribution\nof the impurities as well as for their strongly correlated\narrangement, such as a thin layer of impurities placed in the middle\nof the barrier. The corresponding expressions for these two cases\nare given below. 
The index `rand' stands for a uniform impurity\ndistribution and `cor' for the correlated arrangement in the\nmiddle of the barrier $(z=0)$:\n\\begin{eqnarray}\n {B^{rand} } = \\frac{{V_0^2 n}}{W}\\int {dz}\nf_l ^2 (z)f_{l'} ^2 (z)\\sim\\frac{{V_0^2\nn}}{W}\\frac{{t^2 }}{d} \\nonumber \\\\\n {A^{rand} }\n = \\frac{{V_0^2 n}}{W}\\int {dz} f_l^4\\left(z\\right)\n\\sim\\frac{{V_0^2 n}}{W}\\frac{1}{d}\n\\nonumber \\\\\n{B^{cor} } = \\frac{{V_0^2 n_s }}{W}f_l ^2 (0)f_{l'} ^2\n(0)\\sim\\frac{{V_0^2 n_s}}{W}\\frac{{t^2 }}{d}\n\\nonumber \\\\\n {A^{cor} } = \\frac{{V_0^2 n_s\n}}{W}f_l ^4 \\left( 0 \\right)\\sim\\frac{{V_0^2 n_s}}{W}\\frac{1}{d},\n\\label{correlators1}\n\\end{eqnarray}\nwhere $n$ and $n_s$ are the bulk and surface concentrations of the\nimpurities, $W$ is the lateral area of the layers, $d$ is the width\nof the barrier, $f(z)$ is the eigenfunction corresponding to the\nsize quantization level, and $z$ is the coordinate in the direction normal\nto the layer planes, with $z=0$ corresponding to the middle of the\nbarrier~\\cite{Raikh}.\n Unlike \\cite{Raikh}, and according to (\\ref{T2}), we\nconclude that the correlator $\\left< B_{kk'} \\right>$ has to be\nneglected, since we are interested in calculating the\ncurrent to order $T^2$. In the method of calculation used here this\nresult appears quite naturally; it can, however, be similarly traced in the\ntechnique used in \\cite{Raikh} (see\nAppendix). For the same reason the tunneling term should be dropped\nfrom (\\ref{system2}), as it would give a second-order contribution in $T$ when\n(\\ref{system2}) is substituted into (\\ref{system1}). 
According to (\\ref{correlators}), $A$ can be expressed in terms of the electron scattering time:\n\\begin{equation}\n\\label{tau} \\frac{1}{\\tau } = \\frac{{2\\pi }}{\\hbar }\\nu\\left\\langle\n{\\left| {V_{kk'} } \\right|^2 } \\right\\rangle = \\frac{{2\\pi\n}}{\\hbar }\\nu A ,\n\\end{equation}\nwhere $\\nu$ is the 2D density of states, $\\nu=\\frac{m}{2\\pi\\hbar^2}$. By means of a Fourier transformation with respect to the energy variable, the system (\\ref{system1}),(\\ref{system2}) can be reduced to a system of linear algebraic equations. Finally ${{\\bf{\\hat S}}_k^{ll'} }$ can be expressed as a function of ${{\\bf{\\hat S}}_k^{\\left( 0 \\right)l} }$. Consequently the current (\\ref{current}) becomes a function of $\\left<\\hat{\\rho}_{k\\sigma}^{(0)R}\\right>$, $\\left<\\hat{\\rho}_{k\\sigma}^{(0)L}\\right>$. For the considered case of zero temperature:\n \\[\n\\left<\\rho _{k\\sigma}^{(0)l}\\right> = \\frac{1}{2W} \\theta \\left(\n{\\varepsilon _F^l + \\Delta ^l - \\varepsilon - \\varepsilon _\\sigma }\n\\right),\n\\]\nwhere\n\\[\n\\varepsilon _\\sigma = \\pm \\left| {\\alpha ^l \\left( {k_x - ik_y }\n\\right) - \\beta ^l \\left( {ik_x - k_y } \\right)} \\right|.\n\\]
Without loss of generality we shall consider the case of identical layers and an external voltage applied as shown in Fig.\\ref{fig:layers}:\n\\begin{eqnarray*}\n\\varepsilon_0^R=\\varepsilon_0^L\\\\\n\\Delta^L=-\\frac{eU}{2}, \\Delta^R=+\\frac{eU}{2}\\\\\n \\Delta^{RL}=-\\Delta^{LR}=eU\n\\end{eqnarray*}\nThe calculations can be simplified by taking into account two small parameters:\n\\begin{eqnarray}\n \\xi=\\frac{\\hbar}{\\varepsilon_F\\tau}\\ll1 \\nonumber \\\\\n\\eta=\\frac{eU}{\\varepsilon_F}\\ll1 \\label{deltaef}\n\\end{eqnarray}\nWith (\\ref{deltaef}) the calculation yields the following expression for the current:\n\\begin{equation}\n\\label{currentfinal0} I = \\frac{{ie}}{{2\\pi \\hbar }}T^2 \\nu\n\\int\\limits_0^\\infty {\\int\\limits_0^{2\\pi } {\\left( {\\zeta ^L +\n\\zeta ^R } \\right)\\mathrm{Tr}\\left( {\\rho _\\sigma ^{\\left( 0\n\\right)R} - \\rho _\\sigma ^{\\left( 0 \\right)L} } \\right)d\\varepsilon\nd\\varphi } },\n\\end{equation}\nwhere\n\\[\n\\zeta ^l = \\frac{{C^l \\left[ {\\left( {C^l }\n\\right)^2 - 2bk^2 \\sin2\\varphi - gk^2 } \\right]}}{{\\mathop {\\left(\n{f + 2d\\sin2\\varphi } \\right)}\\nolimits^2 k^4 - 2\\left( {C^l }\n\\right)^2 \\left( {c + 2a\\sin2\\varphi } \\right)k^2 + \\left( {C^l }\n\\right)^4 }}, \\]\n \\[ C^l\\left(U\\right) = \\Delta ^l + i\\frac{\\hbar\n}{\\tau },\n\\]\n\\begin{eqnarray}\n a = \\alpha ^L \\beta ^L + \\alpha ^R \\beta ^R \\nonumber \\\\\n b = \\left( {\\beta ^L + \\beta ^R } \\right)\\left( {\\alpha ^L + \\alpha ^R } \\right)\\nonumber \\\\\n c = \\left( {\\beta ^L } \\right)^2 + \\left( {\\beta ^R } \\right)^2 + \\left( {\\alpha ^L } \\right)^2 + \\left( {\\alpha ^R } \\right)^2 \\nonumber \\\\\n d = \\alpha ^L \\beta ^L - \\alpha ^R \\beta ^R \\nonumber \\\\\n f = \\left( {\\beta ^L } \\right)^2 - \\left( {\\beta ^R } \\right)^2 + \\left( {\\alpha ^L } \\right)^2 - \\left( {\\alpha ^R } \\right)^2 \\nonumber \\\\\n g = \\mathop {\\left( {\\beta ^L + \\beta ^R } \\right)}\\nolimits^2 + 
\\mathop {\\left( {\\alpha ^L + \\alpha ^R } \\right)}\\nolimits^2 \\nonumber \\\\\n\\label{constants}\n \\end{eqnarray}\nParameters $a$--$g$ are various combinations of the Rashba and Dresselhaus parameters of SOI in the layers. Both types of SOI are known to be small in real structures, so that:\n\\begin{equation}\n\\alpha k_F\\ll\\varepsilon_F, \\; \\beta k_F\\ll\\varepsilon_F\n\\end{equation}\nThis additional assumption together with (\\ref{deltaef}) reduces (\\ref{currentfinal0}) to\n\\begin{equation}\n\\label{currentfinal} I = \\frac{{ie^2 }}{{2\\pi \\hbar }}T^2 \\nu\nWU\\int\\limits_0^{2\\pi } {\\left[ {\\zeta ^L \\left( {\\varepsilon_F }\n\\right) + \\zeta ^R \\left( {\\varepsilon_F } \\right)} \\right]d\\varphi\n}\n\\end{equation}\nThe integral over $\\varphi$ in (\\ref{currentfinal}) can be calculated analytically by means of complex variable integration. However, the final result for arbitrary $\\alpha^l,\\beta^l$ is not given here, as it is rather cumbersome. In the next section some particular cases are discussed.\n\\section{Results and Discussion}\nThe obtained general expression (\\ref{currentfinal}) can be simplified for a few important particular relations between the Rashba and Dresselhaus contributions. These calculations reveal qualitatively different dependencies of the d.c.
tunneling current on the applied voltage.\n\\begin{figure}[h]\n \\leavevmode\n \\centering\\epsfxsize=210pt \\epsfbox[130 350 700 800]{fig2.eps}\n\\caption{\\label{fig:tunnelingmain}Tunneling conductance, a: $\\varepsilon_F=10$ meV, $\\alpha=\\beta=0$, $\\tau=2\\times10^{-11}$ s; b: same as a, but $\\alpha k_F=0.6$ meV; c: same as b, but $\\beta=\\alpha$; d: same as c, but $\\tau=2\\times10^{-12}$ s.}\n\\end{figure}\nThe results of the calculations shown below were obtained using the following parameters: Fermi energy $\\varepsilon_F=10$ meV; the spin-orbit splitting was taken to resemble GaAs structures: $\\alpha k_F=0.6$ meV.\n\\subsection{No Spin-Orbit Interaction}\nIn the absence of SOI ($\\alpha^R=\\alpha^L=0$, $\\beta^R=\\beta^L=0$) the energy spectrum of each layer forms a paraboloid:\n\\begin{equation}\nE^l(k)=\\varepsilon_0+\\frac{\\hbar^2k^2}{2m}\\pm \\frac{eU}{2}.\n\\end{equation}\nAccording to our assumptions (\\ref{current0}),(\\ref{current}), the tunneling takes place at:\n\\begin{eqnarray}\n E^R=E^L\\nonumber \\\\\n k^R=k^L\n \\label{conservation}\n\\end{eqnarray}\nBoth conditions are satisfied only at $U=0$, so that a nonzero external voltage does not produce any current, even though it produces empty states in one layer aligned with the filled states in the other layer (Fig.\\ref{fig:layers}). The momentum conservation restriction in (\\ref{conservation}) is weakened if the electrons scatter at the impurities. Accordingly, one should expect a nonzero tunneling current within a finite voltage range in the vicinity of zero. For the considered case the general formula (\\ref{currentfinal}) simplifies radically, as all the parameters (\\ref{constants}) vanish. Finally we get the well-known result\\cite{MacDonald}:\n\\begin{equation}\n\\label{currentMacDonald}\n I = 2e^2 T^2 \\nu\nWU\\frac{{\\frac{1}{\\tau }}}{{\\left( {eU} \\right)^2 + \\left(\n{\\frac{\\hbar }{\\tau }} \\right)^2 }}.
\\end{equation}\nThe conductance defined as $G(U)=I/U$ has a Lorentz-shaped peak at $U=0$, turning into a delta function as $\\tau\\rightarrow\\infty$. This case is shown in Fig.\\ref{fig:tunnelingmain},a. All the curves in Fig.\\ref{fig:tunnelingmain} show the results of the calculations for very weak scattering; the corresponding scattering time is taken as $\\tau=2\\times10^{-11}$ s.\n\\subsection{Spin-Orbit Interaction of Rashba Type}\nThe spin-orbit interaction gives a qualitatively new possibility for the d.c. conductance to be finite at non-zero voltage. SOI splits the spectrum into two subbands. Now an electron from the first subband of the left layer can tunnel to a state in the second subband of the right layer. Let us consider a particular case when only the Rashba type of SOI exists in the system, its magnitude being the same in both layers, i.e. $|\\alpha^R|=|\\alpha^L|\\equiv \\alpha$, $\\beta^R=\\beta^L=0$. In this case the spectrum splits into two paraboloid-like subbands ``inserted'' into each other. Fig.\\ref{fig:spectraRashba} shows their cross-sections for both layers; the arrows show the spin orientation. By applying a certain external voltage $U_0=\\frac{2\\alpha k_F}{e}$, $k_F=\\frac{\\sqrt{2m\\varepsilon_F}}{\\hbar}$, the layers can be shifted on the energy scale in such a way that the cross-section of the ``outer'' subband of the right layer coincides with that of the ``inner'' subband of the left layer (see the solid circles in Fig.\\ref{fig:spectraRashba}). Then both conditions (\\ref{conservation}) are satisfied. However, if the spin is taken into account, the interlayer transition can still remain forbidden. This happens if the spinor eigenstates involved in the transition are orthogonal. This is exactly the case if $\\alpha^R=\\alpha^L$; consequently the conductance behavior remains the same as without SOI. On the contrary, if the Rashba terms are of opposite signs, i.e.
$\\alpha^R=-\\alpha^L$, the spin orientations in the ``outer'' subband of the right layer and the ``inner'' subband of the left layer are the same, and the tunneling is allowed at a finite voltage but forbidden at $U=0$. This situation, pointed out in \\cite{Raichev,Raikh}, should reveal itself in sharp maxima of the conductance at $U=\\pm U_0$, as shown in Fig.\\ref{fig:tunnelingmain},b. The value of $\\alpha$ can thus be extracted directly from the position of the peak. Evaluating (\\ref{constants}) for this case, and then the expression (\\ref{currentfinal}), we obtain the following result for the current:\n\\begin{equation}\n\\label{currentRaikh} I = \\frac{{2e^2T^2 W\\nu U\\frac{\\hbar }{\\tau\n}\\left[ {\\delta^2 + e^2 U^2 + \\left( {\\frac{\\hbar }{\\tau }}\n\\right)^2 } \\right]}}{{\\left[ {\\left( {eU - \\delta } \\right)^2 +\n\\left( {\\frac{\\hbar }{\\tau }} \\right)^2 } \\right]\\left[ {\\left( {eU\n+ \\delta } \\right)^2 + \\left( {\\frac{\\hbar }{\\tau }} \\right)^2 }\n\\right]}},\n\\end{equation}\nwhere $\\delta=2\\alpha k_F$. The result is in agreement with that derived in \\cite{Raikh}, taken for an uncorrelated spatial arrangement of the impurities. As we have already noted, we do not take into account the interlayer correlator $\\left\\langle B_{kk'} \\right\\rangle$ (\\ref{correlators}), because parametrically it is of higher order in the tunneling overlap integral $t$ than the intralayer correlator $\\left\\langle A_{kk'} \\right\\rangle$.
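The double-peak structure predicted by (\ref{currentRaikh}) can be checked by direct evaluation. The following sketch is an illustration (with $e=1$ and the overall prefactor $2e^2T^2W\nu$ set to 1, so the units are arbitrary); it locates the maximum of $G(U)=I/U$, which sits near $U_0=\delta/e$ once $\hbar/\tau\ll\delta$:

```python
import numpy as np

def conductance(U, delta, gamma):
    # G(U) = I/U from Eq. (currentRaikh) with e = 1 and the prefactor
    # 2 e^2 T^2 W nu dropped (arbitrary units); gamma stands for hbar/tau.
    return gamma*(delta**2 + U**2 + gamma**2) / (
        ((U - delta)**2 + gamma**2)*((U + delta)**2 + gamma**2))

delta, gamma = 1.0, 0.02           # weak scattering: gamma << delta
U = np.linspace(0.0, 2.0, 40001)
G = conductance(U, delta, gamma)
U_peak = U[np.argmax(G)]           # conductance maximum, close to U_0 = delta
```

The peak height scales as $1/\gamma$ while $G(0)\approx\gamma/\delta^2$, reproducing the sharp resonances of Fig.\ref{fig:tunnelingmain},b.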
Therefore the result (\\ref{currentRaikh}) is valid for an arbitrary degree of correlation in the spatial distribution of the impurities in the system.\n\\begin{figure}[h]\n \\leavevmode\n \\centering\\epsfxsize=220pt \\epsfbox[130 500 700 800]{fig3.eps}\n \\caption{\\label{fig:spectraRashba}Cross-sections of the electron energy spectra in the left (a) and right (b) layer for the case $\\alpha^{L}=-\\alpha^{R}, \\beta^{L}=\\beta^{R}=0$.}\n\\end{figure}\nIt is worth noting that the opposite case, when only the Dresselhaus type of SOI exists in the system, leads to the same results. However, it is rather impractical to study the case of different Dresselhaus parameters in the layers, because this type of SOI originates from the crystallographic asymmetry and therefore cannot be varied if the structure composition is fixed. For this case to be realized one would need to make the two layers of different materials.\n\\subsection{Both Rashba and Dresselhaus Contributions}\nThe presence of a Dresselhaus term in addition to the Rashba interaction can further modify the tunneling conductance in a non-trivial way. A special case occurs if the magnitude of the Dresselhaus term is comparable to that of the Rashba term. We shall always assume the Dresselhaus contribution to be the same in both layers: $\\beta^{L}=\\beta^{R}\\equiv\\beta$. Let us add the Dresselhaus contribution to the previously discussed case, so that $\\alpha^{L}=-\\alpha^{R}\\equiv\\alpha,\\;\\alpha=\\beta$. The corresponding energy spectra and spin orientations are shown in Fig.\\ref{fig:spectraRD}. Note that while the spin orientations in the initial and final states are orthogonal for any transition between the layers, the spinor eigenstates are not, so that the transitions are allowed whenever the momentum and energy conservation requirement (\\ref{conservation}) is fulfilled.
It can also be clearly seen from Fig.\\ref{fig:spectraRD} that the condition (\\ref{conservation}), meaning the overlap of the cross-sections (a) and (b), is satisfied only at a few points. This is unlike the previously discussed case, where the overlap occurred over the whole circular cross-section shown by the solid lines in Fig.\\ref{fig:spectraRashba}. One should therefore naturally expect the conductance for the present case to be substantially lower. Using (\\ref{currentfinal}) we arrive at a rather cumbersome expression for the current:\n\\begin{figure}[h]\n \\leavevmode\n \\centering\\epsfxsize=220pt \\epsfbox[130 500 700 810]{fig4.eps}\n\\caption{\\label{fig:spectraRD}Cross-sections of the electron energy spectra in the left (a) and right (b) layer for the case $\\alpha^{R}=-\\alpha^L=\\beta$.}\n\\end{figure}\n\\begin{eqnarray}\n I = eT^2 W\\nu U\\left[ {\\frac{{G_ - \\left(\n{G_ - ^2 - \\delta ^2 } \\right)}}{{\\sqrt {F_ - \\left( {\\delta ^4 +\nF_ - } \\right)} }} - \\frac{{G_ + \\left( {G_ + ^2 - \\delta ^2 }\n\\right)}}{{\\sqrt {F_ + \\left( {\\delta ^4 + F_ + } \\right)} }}}\n\\right], \\label{CurrentSpecial}\n\\end{eqnarray}\nwhere \\begin{eqnarray*}\nG_ \\pm = eU \\pm i\\frac{\\hbar }{\\tau } \\\\\nF_ \\pm = G_ \\pm ^2 \\left( {G_ \\pm ^2 - 2\\delta^2 } \\right).\n\\end{eqnarray*}\nAlternatively, for the case of no interaction with impurities, a precise formula for the transition rate between the layers can be obtained by means of Fermi's golden rule.
We obtained the following expression for the current:\n\\begin{equation}\n\\label{CurrentPrecise} I = \\frac{{2\\pi eT^2 W}}{{\\hbar \\alpha ^2\n}}\\left( {\\sqrt {K + \\frac{{8m\\alpha ^2 eU}}{{\\hbar ^2 }}} - \\sqrt\n{K - \\frac{{8m\\alpha ^2 eU}}{{\\hbar ^2 }}} } \\right),\n\\end{equation} where\n\\[\nK = 2\\delta^2 - e^2 U^2 + \\frac{{16m^2 \\alpha ^4 }}{{\\hbar ^4 }}.\n\\]\nComparing the results obtained from (\\ref{CurrentSpecial}) and (\\ref{CurrentPrecise}) provides an additional test of the correctness of (\\ref{CurrentSpecial}). Both dependencies are presented in Fig.\\ref{fig:goldenRule} and show a good match. The same dependence of the conductance on voltage is shown in Fig.\\ref{fig:tunnelingmain},c. As can be clearly seen in the figure, the conductance is indeed substantially suppressed in the whole voltage range. This is qualitatively different from all the previously mentioned cases. Furthermore, the role of the scattering at impurities appears to be different as well. For the cases considered above, characterized by a resonant behavior of the conductance, the scattering broadens the resonances into Lorentz-shaped peaks with a characteristic voltage width $\\hbar/(e\\tau)$. On the contrary, for the last case the weakening of momentum conservation caused by the elastic scattering increases the conductivity and restores the manifestation of SOI in its dependence on voltage. Fig.\\ref{fig:tunnelingmain},d shows this dependence for a shorter scattering time $\\tau=2\\times10^{-12}$ s. One should now consider the overlap of the spectra cross-sections with the circles in Fig.\\ref{fig:spectraRD} acquiring a certain thickness proportional to $\\tau^{-1}$. This increases the number of points at which the overlap occurs and, consequently, the value of the tunneling current.
As the calculations show, for arbitrary $\\alpha$ and $\\beta$ the dependence of the conductance on voltage can exhibit various complicated shapes with a number of maxima, and is very sensitive to the relation between the two contributions. The origin of this sensitivity is the interference of the angular dependencies of the spinor eigenstates in the layers. A few examples of such interference are shown in Fig.\\ref{fig:variousRD}, a--c. All the dependencies shown were calculated for the scattering time $\\tau=2\\times10^{-12}$ s. Fig.\\ref{fig:variousRD},a summarizes the results for all previously discussed cases of SOI parameters, i.e. no SOI (curve 1), the case $\\alpha_R=-\\alpha_L, \\beta=0$ (curve 2), and $\\alpha_R=-\\alpha_L=\\beta$ (curve 3). In accordance with the magnitude of $\\tau$, all the resonances are broadened compared to those shown in Fig.\\ref{fig:tunnelingmain}. Fig.\\ref{fig:variousRD},b (curve 2) demonstrates the conductance calculated for the case $\\alpha_L=-\\frac{1}{2}\\alpha_R=\\beta$, and Fig.\\ref{fig:variousRD},c (curve 2) for the case $\\alpha_L=\\frac{1}{2}\\alpha_R=\\beta$. Curve 1, corresponding to the case of no SOI, is also shown in all the figures for reference. Despite the significant scattering parameter, all the patterns shown in Fig.\\ref{fig:variousRD} remain very distinctive.
That means that, in principle, the relation between the Rashba and Dresselhaus contributions to SOI can be extracted merely from the I--V curve measured in a proper tunneling experiment.\n\\begin{figure}[h]\n \\leavevmode\n \\centering\\epsfxsize=190pt \\epsfbox[130 350 700 800]{fig5.eps}\n\\caption{\\label{fig:goldenRule}Tunneling conductance calculated for the case $\\alpha^R=-\\alpha^L=\\beta$ and very weak scattering, compared to the precise result obtained through a Fermi's golden rule calculation.}\n\\end{figure}\n\\begin{figure}[h]\n \\hfill\n \\begin{minipage}[t]{0.5\\textwidth}\n \\begin{center}\n \\centering\\epsfxsize=170pt \\epsfbox[70 650 266 801]{fig6a.eps}\n \\nonumber\n \\end{center}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[t]{0.5\\textwidth}\n \\begin{center}\n \\epsfxsize=170pt \\epsfbox[70 650 266 801]{fig6b.eps}\n \\nonumber\n \\end{center}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[t]{0.5\\textwidth}\n \\begin{center}\n \\epsfxsize=170pt \\epsfbox[70 650 266 801]{fig6c.eps}\n \\end{center}\n \\end{minipage}\n \\caption{\\label{fig:variousRD}Tunneling conductance calculated for various parameters of SOI.}\n\\end{figure}\n\\section{Summary}\nAs we have shown, in a system of two 2D electron layers separated by a potential barrier, SOI can reveal itself in the tunneling current. The difference in the spin structure of the eigenstates in the layers results in a kind of interference in the tunneling conductance. The dependence of the tunneling conductance on voltage appears to be very sensitive to the parameters of SOI. Thus, we propose a way to extract the parameters of SOI, and in particular the relation between the Rashba and Dresselhaus contributions, from a tunneling experiment. We emphasize that, unlike many other spin-related experiments, the manifestation of SOI studied in this paper should be observable without an external magnetic field.
Our calculations show that the interference picture may be well resolved for GaAs samples with scattering times down to $\\sim 10^{-12}$ s; in some special cases the scattering even restores the traces of SOI otherwise not seen due to destructive interference.\n\\section*{ACKNOWLEDGEMENTS}\nThis work has been supported in part by RFBR, the President of RF support program (grant MK-8224.2006.2), and the Scientific Programs of RAS.\n\n\\section{Introduction}\nNano-manufacturing by polymer self-assembly has attracted interest in recent decades due to its wide applications~\\cite{FINK:1998}. The numerical simulation of this process can be used to investigate the mechanisms of phase separation of polymer blends and to predict unobservable process states and unmeasurable material properties. The mathematical principles and numerical simulation of self-assembly via phase separation have been extensively studied~\\cite{SCOTT:1949,HSU:1973,CHEN:1994,HUANG:1995,ALTENA:1982,ZHOU:2006,TONG:2002,HE:1997,MUTHUKUMAR:1997,KARIM:1998}. But few specific software toolkits have been developed to investigate this phenomenon efficiently. \\par\n\nA computer program has been developed in MATLAB for the numerical simulation of polymer blend phase separation. With this software, the mechanisms of the phase separation are investigated. The mobility, the gradient energy coefficient, and the surface energy in the experiment are also estimated with the numerical model. The software can evaluate the physical parameters in the numerical model by implementing the real experimental parameters and material properties. The numerical simulation results can be analyzed with the software, and the results from the simulation software can be validated against the experimental results.
\\par\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fg_gui_screen_shot.eps}\n\t\\caption{Screenshot of the simulation program graphical user interface.\n\t\\label{fg_gui_screenshot}}\n\\clearpage\n\\end{figure} \n\n\\section{Fundamentals}\nThe numerical model for phase separation of polymer blends was established and validated with experimental results in previous work~\\cite{SHANG:2010}. The free energy profile during phase separation in an inhomogeneous mixture is described by the Cahn-Hilliard equation~\\cite{CAHN:1958, CAHN:1959, CAHN:1961, CAHN:1965}, shown below,\n\n\\begin{equation}\n\tF(C_1,C_2,C_3)=\\int_{V} \\left\\{ f(C_1,C_2,C_3)+\\displaystyle\\sum_{i=1,2,3} [\\kappa_i (\\nabla C_i)^2] \\right\\} dV \\label{cahn_hilliard_intro}\n\\end{equation}\n\nwhere $f$ is the local free energy density of the homogeneous material, $C_i$ is the lattice volume fraction of component $i$, and $\\kappa_i$ is the gradient energy coefficient for component $i$. The total free energy of the system is composed of two terms, as shown in Equation~\\ref{cahn_hilliard_intro}: the first is the local free energy, and the second is the contribution of the composition gradient to the free energy.
\\par\n\nIn our study, the local free energy takes the form of the Flory-Huggins equation, which is well known and studied for polymer blends~\\cite{HUANG:1999}. The ternary Flory-Huggins equation is as follows,\n\n\\begin{equation}\n\t\\begin{split}\n\t\tf(C_1,C_2,C_3)\n\t\t\t &= \\frac{RT}{v_{site}}\\bigg( \\frac{C_1}{m_1}\\ln{C_1}+\\frac{C_2}{m_2}\\ln{C_2} + C_3\\ln{C_3} \\\\\n\t\t\t& + \\chi_{12}C_1C_2+\\chi_{13}C_1C_3+\\chi_{23}C_2C_3\\bigg) \n\\label{eq_flory_huggins_intro}\n\t\\end{split}\n\\end{equation}\n\nwhere $R$ is the ideal gas constant, $T$ is the absolute temperature, $v_{site}$ is the lattice site volume in the Flory-Huggins model, $m_i$ is the degree of polymerization of component $i$, and $C_i$ is the composition of component $i$. \\par\n\nSome parameters in the numerical model cannot be measured directly, such as the gradient energy coefficient and the mobility; they have to be estimated from the experimental parameters. The gradient energy coefficient, $\\kappa$, determines the influence of the composition gradient on the total free energy of the domain. The value of $\\kappa$ is difficult to measure experimentally. Though efforts have been made by Saxena and Caneba~\\cite{SAXENA:2002} to estimate the gradient energy coefficient of a ternary polymer system by experimental methods, few experimental results have been published for our conditions. Initially, the value of $\\kappa$ can be estimated from the interaction distance between molecules~\\cite{WISE_THESIS:2003},\n\n\\begin{equation}\n\t\\kappa=\\frac{RTa^2}{3v_{site}}\\label{eq_gradient_energy_coefficient}\n\\end{equation} \n\nwhere $a$ is the monomer size. A modified equation for $\\kappa$ that considers the effects of the composition was reported by de Gennes et al.~\\cite{GENNES:1980},\n\n\\begin{equation}\n\t\\kappa_i=\\frac{RTa^2}{36v_{site}C_i}\n\\end{equation} \n\nwhere the subscript $i$ represents component $i$.
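As a concrete illustration of Equation~\ref{eq_flory_huggins_intro}, the sketch below evaluates the local free energy density in Python (rather than the MATLAB used by the program). The temperature, lattice site volume, degrees of polymerization, and interaction parameters are placeholder values chosen for illustration, not values from the paper:

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def flory_huggins_f(C1, C2, C3, m1, m2,
                    chi12, chi13, chi23,
                    T=298.0, v_site=1.0e-4):
    """Ternary Flory-Huggins local free energy density
    (Eq. eq_flory_huggins_intro); T and v_site are illustrative."""
    entropy = C1/m1*math.log(C1) + C2/m2*math.log(C2) + C3*math.log(C3)
    enthalpy = chi12*C1*C2 + chi13*C1*C3 + chi23*C2*C3
    return R*T/v_site*(entropy + enthalpy)

# Athermal mixing (all chi = 0): only the negative entropy of mixing remains.
f_ideal = flory_huggins_f(0.3, 0.3, 0.4, 100, 100, 0.0, 0.0, 0.0)
# An unfavorable polymer-polymer interaction raises the free energy.
f_chi = flory_huggins_f(0.3, 0.3, 0.4, 100, 100, 1.0, 0.0, 0.0)
```

Note how the polymer entropy terms are suppressed by the degrees of polymerization $m_i$, which is what makes long-chain blends so prone to demixing.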
\\par\n\nThe mobility is estimated from the diffusivity of the components. The mobility of polymer blends with long chains can be estimated by the following equation~\\cite{GENNES:1980}, \n\n\\begin{equation}\nM_i=\\frac{C_i}{m_i}\\frac{D_mN_ev_{site}}{RT}\n\\end{equation} \n\nwhere $m_i$ is the degree of polymerization as stated before, $D_m$ is the diffusivity of the monomer, and $N_e$ is the effective number of monomers per entanglement length. Because of the scarce experimental data for $N_e$, a more generalized form is employed in our study, \n\n\\begin{equation}\nM=\\frac{Dv_{site}}{RT}\\label{eq_mobility}\n\\end{equation} \n\nThe time evolution of the composition of component $i$ can be represented as~\\cite{HUANG:1995,BATTACHARYYA:2003,GENNES:1980,SHANG:2009},\\par\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\frac{\\partial C_i}{\\partial t}\n\t\t&= M_{ii}\\left[ \\frac{\\partial f}{\\partial C_i}-\\frac{\\partial f}{\\partial C_3}-2\\kappa_{ii}\\nabla^2C_i-2\\kappa_{ij}\\nabla^2C_j\\right] \\\\\n\t\t& +M_{ij}\\left[ \\frac{\\partial f}{\\partial C_j}-\\frac{\\partial f}{\\partial C_3}-2\\kappa_{ji}\\nabla^2C_i-2\\kappa_{jj}\\nabla^2C_j \\right]\n\t\\end{split}\\label{eq6_paper2}\n\\end{equation}\n\nwhere the subscripts $i$ and $j$ represent components 1 and 2, and\\par\n\n\\begin{equation}\n\t\\begin{aligned}\n\t\tM_{ii}=&(1-\\overline{C}_i)^2M_i+\\overline{C}_i^2\\displaystyle\\sum_{j\\neq i}M_j\\qquad i=1,2;j=1,2,3\\\\\n\t\tM_{ij}=&-\\displaystyle\\sum_{i\\neq j}\\left[(1-\\overline{C}_i)\\overline{C}_j\\right]M_i+\\overline{C}_i\\overline{C}_jM_3\\qquad i=1,2;j=1,2\n\t\\end{aligned}\n\\end{equation}\n\nwhere $\\overline{C}_i$ is the average composition of component $i$. To simplify the solution of Equation \\ref{eq6_paper2}, $\\kappa_{ii}=\\kappa_i+\\kappa_3$ and $\\kappa_{12}=\\kappa_{21}=\\kappa_3$, where $\\kappa_i$ is the gradient energy coefficient in Equation~\\ref{eq_gradient_energy_coefficient}.
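The parameter estimates in Equations~\ref{eq_gradient_energy_coefficient} and \ref{eq_mobility}, together with de Gennes' composition-dependent form of $\kappa_i$, are straightforward to code. The sketch below (placeholder values for $T$, $v_{site}$, $a$, and $D$; the program itself is in MATLAB) also makes a small consistency check explicit: since $\kappa_i = \kappa/(12 C_i)$, the two $\kappa$ formulas coincide at $C_i = 1/12$:

```python
R = 8.314  # ideal gas constant, J/(mol K)

def kappa_bulk(a, T=298.0, v_site=1.0e-4):
    # Eq. (eq_gradient_energy_coefficient): kappa = R T a^2 / (3 v_site)
    return R*T*a**2/(3.0*v_site)

def kappa_component(a, C_i, T=298.0, v_site=1.0e-4):
    # de Gennes' composition-dependent form: kappa_i = R T a^2 / (36 v_site C_i)
    return R*T*a**2/(36.0*v_site*C_i)

def mobility(D, T=298.0, v_site=1.0e-4):
    # generalized mobility, Eq. (eq_mobility): M = D v_site / (R T)
    return D*v_site/(R*T)

a = 1.0e-9   # monomer size, m (illustrative)
D = 1.0e-12  # diffusivity, m^2/s (illustrative)
```

These starting values are exactly what the program hands to the optimization loop described later, which refines them against experimental morphologies.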
\\par\n\nA detailed discussion and practical scientific cases for this software can be found in our previous works~\\cite{SHANG:2008,SHANG:2009,SHANG:2009THESIS}.\\par\n\n\\section{The MATLAB Program for Simulation of Polymer Phase Separation}\n\n\\subsection{Design Principles}\nThe program is developed in MATLAB m-code. A graphical user interface (GUI) is implemented in the program, created with the MATLAB GUI editor. MATLAB is widely used in scientific computation and has many toolkits and commonly used mathematical functionalities. By implementing the software in MATLAB, the development efficiency is greatly improved, and the program is cross-platform. \\par\n \nThe software is designed for the daily use of simulation and experimental scientists. The program is lightweight and programmed for high computational efficiency, so that it can produce significant scientific results on a common PC. It is also extensible to a parallel version, or to code that exploits the high computational performance of GPUs. The GUI is implemented so that users can conveniently input the experimental parameters. The results as well as the user settings can be saved and revisited by the program. Also, for better assistance in a real production environment, the simulation model is carefully designed so that the users provide the real processing and material parameters and the program produces quantitative results comparable to the experimental results. Analytical tools are also provided with the program for post-processing of the results. \\par\n\n\\subsection{Numerical Methods}\nTo solve the partial differential equation, the discrete cosine transform spectral method is employed. The discrete cosine transform (DCT) is applied to the right-hand side and the left-hand side of Equation~\\ref{eq6_paper2}.
The partial differential equation in ordinary space is thereby transformed into an ordinary differential equation (ODE) in frequency space. After the ODE in frequency space is solved, the results are transformed back to ordinary space. \\par\n\nCompared to the conventional finite element method, the spectral method is more efficient and accurate. This method enables the program to solve the equation in a reasonable computation time, so that the changes during phase separation can be followed over a real time span long enough to observe the phase evolution. The spectral method is only applied to the spatial coordinates, since the time length of the evolution is not predictable; in fact, the real time for phase evolution is usually one of the major concerns in the results of the simulation. \\par \n\nThe DCT takes a considerable portion of the computation time. Especially in a 3-dimensional numerical model, the 3-dimensional DCT computed with the conventional approach has a complexity of $O(n^3)$, which is not practical for real applications on a PC. To overcome this computational difficulty, the code can either be translated to C code embedded in MATLAB m-scripts, or a different mathematical approach can be implemented. In this program, the DCT is calculated from the fast Fourier transform (FFT), which is optimized in MATLAB. \\par\n\n\\subsection{Quantitative Simulation with Real Experimental Parameters}\nMany previous numerical simulations of self-assembly by polymer blend phase separation are qualitative rather than quantitative, and their results can only provide non-quantitative suggestions for the experiments. This program, in contrast, implements a numerical model which quantitatively simulates the experimental results with the real processing and material parameters. Most inputs to this program can be directly measured or read from the instrument or material labels.
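The DCT spectral step can be sketched as follows, here in Python with scipy.fft standing in for MATLAB's FFT-based DCT. The semi-implicit splitting, the binary simplification of Equation~\ref{eq6_paper2}, and all parameter values are assumptions made for the sketch, not the program's actual scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def ch_step(C, dt, M, kappa, chi, h=1.0):
    """One semi-implicit DCT spectral step for a simplified *binary*
    Cahn-Hilliard model (a stand-in for the ternary Eq. eq6_paper2):
    the stiff kappa term is treated implicitly, f'(C) explicitly."""
    N1, N2 = C.shape
    # eigenvalues of the discrete Laplacian under DCT-II (reflecting BCs)
    l1 = 2.0*(np.cos(np.pi*np.arange(N1)/N1) - 1.0)/h**2
    l2 = 2.0*(np.cos(np.pi*np.arange(N2)/N2) - 1.0)/h**2
    L = l1[:, None] + l2[None, :]
    # Flory-Huggins-like local free energy derivative (m1 = m2 = 1 here)
    dfdC = np.log(C) - np.log(1.0 - C) + chi*(1.0 - 2.0*C)
    Chat = dctn(C, norm='ortho')
    rhs = Chat + dt*M*L*dctn(dfdC, norm='ortho')
    return idctn(rhs/(1.0 + 2.0*dt*M*kappa*L**2), norm='ortho')
```

Because the $k=0$ mode has $L=0$, the mean composition is conserved exactly, which is a convenient sanity check for any implementation of this kind of scheme.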
For some of the physical parameters, such as $\\kappa$ and the mobility, the program can provide a starting value calculated from the theoretical model. The user may need to validate this value by comparing the simulation results to the experimental results. Eventually, a more accurate estimate can be found with optimization methods by setting the difference between the simulation and experimental results as the cost function. \\par \n\nBesides the parameters in the Cahn-Hilliard equation, other effects such as the evaporation, the substrate functionalization, and the degree of polymerization are also implemented with the real conditions. The final results are saved and summarized. The characteristic length of the resulting pattern from the simulation and its compatibility with the substrate functionalization are calculated. These numbers can be used for comparison with the experimental results. \\par\n\n\\subsection{Data Visualization and Results Analysis} \nWhen running the program, messages from the software are output to the MATLAB working console, showing the current state and real-time results of the simulation. When the simulation is started, the phase pattern is plotted in a real-time plot window. Users can set the frequency of the real-time plot and the scale factor of the domain of the contour plot in the GUI. The results of the simulation are saved to a folder designated by the user. The real-time plot is saved to the result folder, and the quantitative results are saved as several comma-separated values (CSV) text files. The result folder can be loaded into the analysis toolkit of the program, and the user can view assessment values such as the characteristic length, the compatibility parameters, and the composition profile wave in the depth direction with convenient plotting tools.
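One common way to compute a characteristic length like the one reported by the analysis toolkit is the first moment of the structure factor; the sketch below uses that definition (an assumption, since the paper does not spell out its estimator) and recovers the wavelength of a periodic test pattern:

```python
import numpy as np

def characteristic_length(C, h=1.0):
    """Characteristic length from the first moment of the structure factor
    S(k) = |FFT(C - <C>)|^2 (one common definition; the program's exact
    estimator may differ)."""
    dC = C - C.mean()
    S = np.abs(np.fft.fftn(dC))**2
    kx = np.fft.fftfreq(C.shape[0], d=h)
    ky = np.fft.fftfreq(C.shape[1], d=h)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    kmag = 2.0*np.pi*np.hypot(KX, KY)
    mask = kmag > 0.0
    k_mean = (kmag[mask]*S[mask]).sum()/S[mask].sum()
    return 2.0*np.pi/k_mean

# A stripe pattern of period 16 grid units should be recovered as ~16.
N = 128
x = np.arange(N).reshape(-1, 1)*np.ones((1, N))
C = 0.5 + 0.1*np.sin(2.0*np.pi*x/16.0)
L_est = characteristic_length(C)
```

Tracking this length over time is what lets the simulated coarsening dynamics be compared quantitatively with experimental micrographs.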
Usually such results, for example the composition profile along each direction in the domain, are difficult to observe in experiments. \\par\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fg_gui_running.eps}\n\t\\caption{The simulation is running with the real time plot of the \n\tcurrent ternary phase morphology.\n\t\\label{fg_gui_running}}\n\\clearpage\n\\end{figure} \n\n\n\\section{Examples}\nTo demonstrate the capability of this program, example simulation cases are shown in this paper. The results of the numerical simulation have been validated with the experimental results in our previous work~\\cite{SHANG:2010}. To compare the simulated results with a real experimental system, we directed the morphologies of polystyrene (PS) / polyacrylic acid (PAA) blends using chemically heterogeneous patterns. More specifically, alkanethiols with different chemical functionalities were patterned by electron beam lithography, and were then used to direct the assembly of PS/PAA blends during spin coating from their mutual solvent~\\cite{MING:2009}. The experimental conditions are implemented in the numerical simulation, and effects such as the substrate functionalization and the solvent evaporation are included in the numerical model. The parameters that are difficult to measure are acquired with optimization methods~\\cite{SHANG:2009}. \\par\n\nSophisticated techniques are required to investigate the composition profile in the depth of a polymer film~\\cite{GEOGHEGAN:2003}, whereas the numerical simulation provides the composition profile at each position of the film, so that the composition profile in the depth direction can be easily accessed. To investigate the composition wave along the direction perpendicular to the film surface, a thick film is implemented in the numerical simulation.
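The depth-direction composition wave and the substrate-pattern compatibility discussed here reduce to very small post-processing routines. The sketch below is illustrative (the function names, the binarization rule, and the matching-fraction definition of compatibility are assumptions; the program's own analysis toolkit is in MATLAB):

```python
import numpy as np

def depth_profile(C):
    # laterally averaged composition of one component vs. depth,
    # for a 3D field C[x, y, z] with z = 0 at the substrate
    return C.mean(axis=(0, 1))

def pattern_compatibility(C_surface, target):
    # fraction of substrate-layer sites whose binarized phase matches
    # the intended substrate pattern (target is a boolean 2D array)
    phase = C_surface > C_surface.mean()
    return float(np.mean(phase == target))
```

A compatibility of 1.0 means the binarized morphology at the substrate layer replicates the designed pattern exactly; values near 0.5 indicate no correlation with it.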
Such a film is not only difficult to fabricate but also difficult to characterize in experiments; in the numerical model, however, the user only needs to change the mesh grid domain size. The depth profiles with different substrate functionalizations are shown in Figure~\\ref{fg_thick_film}, where $|f_s|$ denotes the surface energy term from the substrate functionalization. This term is added to the total free energy at the interface between the polymer film and the substrate. The initial thickness of the film is 1 mm and decreases to 8 $\\mu m$ due to the evaporation of the solvent. The thickness results are scaled by 0.5 to fit in the figures. It can be seen that a higher surface interaction force results in a faster substrate-directed phase separation in the film. A stronger substrate interface attraction force can direct the phase separation morphology near the substrate surface, while with a lower surface energy the phase separation dynamics in the bulk of the film overcome the substrate attraction force. It can also be seen that at 30 seconds the substrate functionalization has little effect on the morphology at the substrate surface. In addition, the checkerboard structure can be seen near the substrate surface with a higher surface energy~\\cite{KARIM:1998}. \\par

\\begin{figure}[!ht]
	\\centering
	\\includegraphics[width=\\textwidth]{fg_thick_film.eps}
	\\caption{The phase separation in a thick film. \\label{fg_thick_film}}
\\end{figure}
\\clearpage

To investigate the effects of a more complicated pattern, a larger domain is simulated. The pattern applied on the substrate surface is shown in Figure~\\ref{fg_chn_pattern}. The substrate pattern is designed to investigate the effects of various shapes and contains components such as squares, circles, and dead-end lines of different sizes.
The initial surface dimensions of the model are changed to 12 $\\mu m\\times$12 $\\mu m$. The initial thickness of the film is 1 mm and shrinks during the solvent evaporation. The mesh in the model consists of 384$\\times$384$\\times$16 elements. The average composition ratio of PS/PAA is changed to 38/62 to match the pattern. The resulting patterns from the simulation can be seen in Figure~\\ref{fg_complicated_patterns}. \\par

\\begin{figure}[!ht]
	\\centering
	\\includegraphics[width=\\textwidth]{fg_chn_pattern.eps}
	\\caption{The substrate pattern with complicated features.
	\\label{fg_chn_pattern}}
\\end{figure}
\\clearpage

\\begin{figure}[!ht]
	\\centering
	\\includegraphics[width=\\textwidth]{fg_complicated_patterns.eps}
	\\caption{The effects of complicated substrate patterns.
	\\label{fg_complicated_patterns}}
\\end{figure}
\\clearpage

It can be seen that in a larger domain with complicated substrate patterns, the attraction factor has to be increased to obtain a better replication. In general, increasing the attraction factor improves the refinement of the pattern with respect to the substrate pattern. But since the substrate pattern has geometrical features of different sizes, the attraction factor has to be strong enough to force the intrinsic phase separation, with its unified characteristic length, to match the substrate pattern at these different sizes. This is the main challenge in the replication of complicated patterns. It has been reported by Ming et al.~\\cite{MING:2009} that the addition of a copolymer can improve the refinement of the final patterns in experiments.
The reason is that the PAA-b-PS block copolymer concentrates at the interface between the PS and PAA domains during the phase separation, thereby decreasing the mixing free energy. Fundamentally, the addition of the block copolymer increases the miscibility of the two polymers. To simulate these phenomena, the Flory-Huggins interaction parameter is decreased from 0.22 to 0.1 to increase the miscibility of PS/PAA in the model. The resulting pattern is also shown in Figure~\\ref{fg_complicated_patterns}, in comparison to the cases without the addition of block copolymers. It can be seen that the refinement of the phase-separated pattern is improved by the addition of the block copolymer. The $C_s$ values of the phase separation with the complicated pattern are measured and plotted in Figure~\\ref{fg_cs_complicated_patterns}. \\par

\\begin{figure}[!ht]
	\\centering
	\\includegraphics[width=\\textwidth]{fg_cs_complicated_patterns.eps}
	\\caption{The compatibility parameter $C_s$ for complicated substrate patterns. \\label{fg_cs_complicated_patterns}}
\\end{figure}
\\clearpage

An assessment parameter, the compatibility parameter $C_s$, is introduced to evaluate the replication of the morphology with respect to the substrate pattern; a higher $C_s$ value denotes a better replication of the polymer film morphology according to the substrate pattern. It can be seen in Figure~\\ref{fg_cs_complicated_patterns} that the $C_s$ value for the system with the block copolymer is 0.769, which is higher than that of the system without the block copolymer when the attraction forces are the same. The decrease of the Flory-Huggins interaction parameter increases the miscibility of the polymers, which decreases the miscibility gap, as can be seen in Equation~\\ref{eq_flory_huggins_intro}.
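For reference, the Flory-Huggins mixing free energy underlying this argument can be written, for a binary blend of two polymers with degrees of polymerization $N_A$ and $N_B$ (a generic textbook restatement; the program itself treats the ternary polymer/polymer/solvent system), as

```latex
\[
  \frac{\Delta G_{mix}}{k_B T}
  = \frac{\phi}{N_A}\ln\phi
  + \frac{1-\phi}{N_B}\ln(1-\phi)
  + \chi\,\phi\,(1-\phi) ,
\]
```

where $\\phi$ is the volume fraction of one polymer and $\\chi$ the interaction parameter. Decreasing $\\chi$ reduces the unfavourable contact term $\\chi\\phi(1-\\phi)$ and thus narrows the miscibility gap, which is the effect exploited here.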
The two phases at equilibrium will then be less concentrated in the respective polymer types. This issue needs to be considered when the interaction parameter of the two polymers is changed. \\par

\\section{Conclusion}

A computer program for the simulation of polymer self-assembly with phase separation is introduced. The program is developed in MATLAB m-code and designed to assist scientists in real working environments. The program is able to simulate experimental results quantitatively with real experimental parameters. Unmeasurable physical parameters such as the gradient energy coefficient and the mobility can be estimated with the program. The program provides a graphical user interface and analysis toolkits. It can help scientists to study polymer phase separation mechanisms and dynamics with high efficiency, ease of use, quantitative results analysis, and validated reliability.

\\section{Acknowledgement}
The authors would like to thank Liang Fang and Ming Wei for their help with the experimental procedures. The authors also appreciate the valuable suggestions and comments from other users and testers of this program. This project is a part of the research at the Center for High-rate Nanomanufacturing, sponsored by the National Science Foundation (grant number NSF-0425826).

\\bibliographystyle{unsrt}

\\section{Introduction}

Sunspot oscillations are a significant phenomenon observed in the solar atmosphere. The study of these oscillations started in 1969 \\citep{1969SoPh....7..351B}, when non-stationary brightenings in the Ca II H and K lines were discovered. These brightenings were termed umbral flashes (UFs).
Furthermore, \\citet{1972ApJ...178L..85Z} and \\citet{1972SoPh...27...71G}, using observations in the $H\\alpha$ line wing, discovered ring structures in sunspots. These structures propagated from the umbral centre to the outer penumbral boundary with a three-minute periodicity. The authors referred to these background structures as running penumbral waves (RPWs). Lower, at the photospheric level, the oscillation spectrum shows a wide range of frequencies with a peak near five-minute oscillations. These frequencies are coherent, which indicates that the umbral brightness varies within this range as a whole \\citep{2004A&A...424..671K}. There also exist low-frequency 10-40 minute components in sunspots \\citep{2009A&A...505..791S, 2008ASPC..383..279B, 2013A&A...554A.146K}, whose nature has remained in doubt so far.

Observations in \\citet{2002A&A...387L..13D} showed that the emission in magnetic loops anchored in a sunspot has a $\\sim$172-sec periodicity, which indicates that photospheric oscillations in the form of waves can penetrate through the transition region upwards into the corona. According to \\citet{1977A&A....55..239B}, low-frequency waves oscillating at the subphotospheric level (p-modes) propagate through natural waveguides formed by concentrations of magnetic elements (e.g. sunspots and pores). Their oscillation period may be modified by the cut-off frequency mechanism. \\citet{1984A&A...133..333Z} showed that oscillations with a frequency lower than the cut-off frequency fade quickly. The main factor affecting the cut-off frequency is the inclination of the field lines along which the wave propagation occurs. Five-minute oscillations can be observed both in chromospheric spicules \\citep{2004Natur.430..536D} and in the coronal loops of active regions \\citep{2005ApJ...624L..61D, 2009ApJ...702L.168D}.
Further investigations of low-frequency oscillations in the higher layers of the solar atmosphere \\citep{2009ASPC..415...28W, 2009ApJ...697.1674M, 2011SoPh..272..101Y} corroborated the assumption that their emergence at such heights is a consequence of wave channelling along inclined magnetic fields. The observed propagation speed of the disturbances indicates slow magneto-acoustic waves \\citep{2009A&A...505..791S, 2012SoPh..279..427K}.

For high-frequency oscillations, the sources with periods shorter than three minutes are localized in the umbra, and they decrease in size as the period decreases \\citep{2008SoPh..248..395S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2012ApJ...757..160J}. In the central part of the umbra, where the field is almost perpendicular to the solar surface and there is no divergence of the field line bundles, we see the footpoints of elementary magnetic loops in the form of oscillating cells \\citep{2014AstL...40..576Z}. The main mechanism that determines their power is related to the presence of the subphotospheric and chromospheric resonator in the sunspot. Outside the central part, where the field inclination starts to manifest itself, the cut-off frequency change mechanism begins to operate.

Sunspot oscillations are also expressed in the form of UFs \\citep{1969SoPh....7..351B, 1969SoPh....7..366W}, whose emission manifests itself most distinctly in the cores of chromospheric lines. A number of papers \\citep{2007PASJ...59S.631N, 2007A&A...463.1153T, 2003A&A...403..277R, 2001ApJ...552..871L, 2000Sci...288.1396S, 1983SoPh...87....7T, 1981A&A...102..147K} have studied this phenomenon. \\citet{2010ApJ...722..888B} assumed that UFs are induced by upward-propagating magneto-acoustic waves that are converted into shocks. Photospheric oscillations steepen as the waves move into a medium with lower density and transform into a shock front, thus heating the ambient medium.
The temperature in the surroundings of a UF source surpasses the ambient values by 1000 K, which results in the brightening of individual umbral sites of the order of several arcseconds. On these scales, one also observes variations of the sunspot umbral magnetic field, although there is no visible confirmation of variations in the field line inclination or in their overall configuration throughout these processes \\citep{2003A&A...403..277R}. Recent observations have shown the presence of very small jet-like spatial details, less than 0.1 Mm in size, in the sunspot umbra. Their positions are apparently related to the footpoints of single magnetic loops along which sunspot oscillations propagate \\citep{2014ApJ...787...58Y}.

Umbral flashes are also related to the running wave phenomenon in the sunspot penumbra. This phenomenon is observed in the $H\\alpha$ and He lines \\citep{2007ApJ...671.1005B} and in CaII \\citep{2013A&A...556A.115D} in the form of travelling spatial structures moving horizontally and radially from the umbra towards the outer penumbral boundary \\citep{2000A&A...355..375T, 2003A&A...403..277R}. The waves that propagate along the field lines are non-stationary, with changes in the oscillation power both in time and in space \\citep{2010SoPh..266..349S}. This results in a noticeable periodic modulation of the emission by propagating three-minute waves at the footpoints of magnetic loops. A possible consequence of such a modulation is the emergence both of low-frequency wave trains and of individual oscillation brightness maxima as UFs.

In this study, we analysed the association between the spatial distribution of sunspot UF sources and the spatial structure of the field lines anchored in the umbra. To better understand the association between oscillation activation and flash emergence, we studied the dynamics of the three-minute oscillations in UF sources.
To localize the propagating wavefronts with respect to the magnetic waveguides, we used the method of pixelized wavelet filtration (the PWF technique) \\citep{2008SoPh..248..395S}. The paper is arranged as follows: in Section 1, we introduce the subject of the paper; in Section 2, we provide the observational data and processing methods; in Section 3, we describe the data analysis and the obtained results; in Section 4, we discuss the processes of the flash evolution; and in Section 5, we draw conclusions from the obtained results.

\\section{Observations and data processing}

To study the connection between UFs and sunspot oscillations, we used observational data from the Solar Dynamics Observatory (SDO/AIA) \\citep{2012SoPh..275...17L} obtained with high spatial and temporal resolution. We studied four active regions with developed sunspots at the maximum of their wave activity. To obtain the location of the UF sources in space and height, we used the observations of January 26, 2015 (NOAA 12268, 01:00-04:00 UT), January 10, 2016 (NOAA 12480, 01:00-04:00 UT), and March 27, 2016 (NOAA 12526, 01:00-04:00 UT). A more comprehensive analysis was carried out for the observations of December 08, 2010 (NOAA 11131, 00:00-03:20 UT).

We used calibrated and centred images of the Sun (level 1.5) at various wavelengths. The observations were performed in the ultraviolet (UV; 1600 \\AA) and extreme ultraviolet (EUV; 304 \\AA, 171 \\AA) ranges with cadences of 24 sec and 12 sec, respectively. The pixel size was 0.6 \\arcsec. The differential rotation of the investigated regions during the observation was removed by using the Solar SoftWare package.

We built time-distance plots along the detected UF sources to search for a correlation between the background wavefront propagation and the UF emergence. The precise values of the revealed oscillation periods were obtained with the Fourier method.
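The Fourier estimate of an oscillation period can be sketched as follows (an illustrative Python example on a synthetic light curve; the actual analysis was applied to the observed SDO/AIA intensity signals):

```python
import numpy as np

def dominant_period(signal, cadence):
    """Return the period (in seconds) of the strongest Fourier component
    of an evenly sampled 1D intensity signal."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()        # remove the constant (DC) level
    power = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(signal.size, d=cadence)
    peak = np.argmax(power[1:]) + 1        # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic umbral light curve: a 3-minute (180 s) oscillation plus noise,
# sampled at the 12-sec cadence of the 304 and 171 A channels
t = np.arange(0, 3600, 12.0)
rng = np.random.default_rng(0)
sig = np.sin(2*np.pi*t/180.0) + 0.2*rng.normal(size=t.size)
# dominant_period(sig, 12.0) is close to 180 s
```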
For 2D wave processing and for obtaining the time dynamics of the waves, we used the PWF technique. The spectral algorithm applied in this method enabled us to search for waves throughout the sunspot and to trace the direction of their propagation.

Using the helioseismologic method of calculating the time lags of the propagating three-minute wavefronts relative to each other \\citep{2014A&A...569A..72S} enabled us to establish the height assignment of the SDO/AIA temperature channels. The 1600 \\AA ~ultraviolet channel records the emission at the levels of the upper photosphere and the transition region, with temperatures of 6000 K and 100000 K, respectively. However, the main sensitivity of the channel and, correspondingly, the minimum wave lag for upward propagation correspond to the emission arriving from the lower atmosphere. This channel often shows dotted, fine-structure details brightening the regions of magnetic field line footpoints. Regions with a high concentration of field lines appear dark, particularly near sunspots and active regions. The 304 \\AA ~(He II) channel shows bright regions at the level of the upper chromosphere and lower transition region, where the plasma has a high density. The characteristic temperature of the channel is about 50000 K. This channel is best suited to studying various oscillation processes in the solar atmosphere, particularly in sunspots, where the power of the three-minute oscillations reaches its maximum. To observe the coronal magnetic structures, we used observations at the 171 \\AA ~(Fe IX) wavelength. This emission arrives from the quiet corona and from the upper transition region, with a temperature of about 1000000 K.

\\section{Results}

We investigated the emergence of short-lived recurrent umbral brightness flashes by using the unique observational capability of the SDO/AIA temperature channels to receive emission from different heights of the sunspot atmosphere.
This allowed us to obtain, for the first time, information on the UF source distribution throughout an umbra and to understand their height location. To test the stability of the recurrent UF source locations and their visibility at different heights, we built variation maps for the different SDO/AIA temperature channels. These maps show the distribution of the signal variation relative to its mean value at each image point throughout the observational time.

\\subsection{Spatial and height location of UFs}

\\begin{figure}
\\begin{center}
\\includegraphics[width=9.0 cm]{Fig1.eps}
\\end{center}
\\caption{Upper panels: Snapshots of the UFs in sunspot active regions on January 26, 2015 (01:57:54 UT), January 10, 2016 (01:33:52.6 UT), and March 27, 2016 (01:49:28.6 UT) obtained by SDO/AIA (1600 \\AA). The dashed black rectangles show the umbral regions. The arrows indicate the UF sources. Middle panels: The corresponding sunspot regions at 171 \\AA. The original maps (contours) are overlaid on the variation maps (colour background) of the UV emission obtained during the observation. Asterisks denote the localization of the UF sources. Bottom panels: Scaled variation maps of the umbral regions at 1600 \\AA. The small white rectangles show the sources of UFs.}
\\label{1}
\\end{figure}

Figure~\\ref{1} presents a series of sunspot images and their variation maps during the emergence of separate bright UFs obtained by SDO/AIA at 1600 \\AA ~and 171 \\AA. The observational time was about three hours for each of the four days of observation. The number of images obtained per day was 450 frames at a 24-sec temporal resolution. Similar images were also obtained in the 304 \\AA ~and 171 \\AA ~channels, where the temporal resolution was 12 seconds and the number of frames was 900. This observational material is adequate to compile reliable statistics, both on the number of UFs and on their locations in the umbral area.
The umbral regions are shown by the dashed squares. To increase the visibility of the weak umbral brightening sources, we used a logarithmic scale. This enabled us to record the weak processes of umbral background wavefront propagation and to study their association with the UF emergence. This procedure was applied to all studied SDO/AIA temperature channels. It allowed us to obtain time cubes of images and to make films in which the dynamics of the umbral emission intensity are presented.

\\begin{figure}
\\begin{center}
\\includegraphics[width=9.0 cm]{Fig2.eps}
\\end{center}
\\caption{Variation maps of the umbral UV emission in different SDO/AIA temperature channels (1600 \\AA, 304 \\AA~ and 171 \\AA) obtained during the 00:00-03:20 UT observation of NOAA 11131 on December 08, 2010. Squares with numerals indicate the positions of the observed UF sources. The arrows show the scanning direction used when obtaining the time-distance plots. The dashed circle outlines schematically the umbral boundary. The variation intensity is presented by colours on a logarithmic scale.}
\\label{2}
\\end{figure}

Watching and studying frame-by-frame the films obtained at the various ultraviolet wavelengths revealed the presence of two dynamic components in the sunspots. The first is related to the continuous propagation of the background three-minute oscillations in the umbra and of longer periodicities in the penumbra. This component is visible to the naked eye in the form of wavefronts propagating towards the penumbra from a pulsing source located in the sunspot centre. This source agrees well with the centre of the spiral wavefront propagation described previously in \\cite{2014A&A...569A..72S} for the December 08, 2010 event. The other component is related to short-time brightenings of separate parts of the propagating fronts and to the emergence of details of small angular size as UF sources.
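The variation maps described above reduce, in essence, to the per-pixel temporal standard deviation of the image time cube normalized by the mean intensity. A schematic Python illustration on a synthetic cube (not the authors' actual pipeline):

```python
import numpy as np

def variation_map(cube):
    """Variation map of an image time cube with axes (time, y, x): the
    standard deviation of each pixel's light curve relative to its mean."""
    cube = np.asarray(cube, dtype=float)
    return cube.std(axis=0) / cube.mean(axis=0)

# Synthetic cube: one oscillating pixel on a constant background
t = np.arange(100)
cube = np.full((100, 8, 8), 100.0)
cube[:, 4, 4] += 20.0*np.sin(2*np.pi*t/15.0)   # an oscillating "UF source"
vmap = variation_map(cube)
# vmap peaks at pixel (4, 4); constant pixels give zero variation
```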
\n\nWe can see on variation maps at 1600 \\AA ~(Fig.~\\ref{1}, bottom panels) that the UFs sources as local brightenings have different localizations, intensities, and shapes located in the umbral periphery. There are both bright point sources and extended sources that have different spatial orientation. Some localize near to the light bridge for example on January 10, 2016. This type of intensity variation was described in \\cite{2014ApJ...792...41Y}. Watching the obtained films showed that the fast processes of the UF brightening mainly appear in the same umbral site. Also, they manifest themselves both as individual pulses and as a series of modulated pulsations. \n\nWhen we compare the obtained spatial location of bright points of variation inside umbra at 1600 \\AA ~and 171 \\AA, we can see well coinciding UFs sources with footpoints of coronal loops, anchored in the umbra of the sunspots (Fig.~\\ref{1}, middle panels). Mainly variation maps on coronal heights show the elongated details, which can be interpret as magnetic loops along which waves propagate from the bottom layers of the sunspot atmosphere to the corona. The maxima of waves variation distributes along the loops as a bright elongated details. The main behaviour of the oscillation sources at separated periods is determined by the cut-off frequency. \n\nThe UF source visibility varies depending on the height of the ultraviolet emission generation. We can observe a part of the flashes at all heights. The other part manifests itself only lower, at the photospheric level. The angular size of UF sources varies from flash-to-flash by revealing itself as a point or as an extended source.\n\n\\begin{figure\n\\begin{center}\n\\includegraphics[width=9.0 cm]{Fig3.eps}\n\\end{center}\n\\caption{Snapshots of the narrowband maps of umbral region NOAA 11131 with 3-min periodicity on December 08, 2010. The left panel shows \nthe localization of the stable source of the local UFs at 1600 \\AA ~(00:22:17 UT). 
The right panel shows the positions of the bright sources at 304 \\AA ~(00:22:32 UT), which ride the expanding 3-min spiral wavefronts as background UFs. The dashed circle outlines the umbral boundary. The arrows show the positions of the UF sources.}
\\label{3}
\\end{figure}

Figure~\\ref{2} shows the variation maps obtained at the 1600 \\AA, 304 \\AA, and 171 \\AA ~wavelengths on December 08, 2010. One can see that the brightness variation distribution shows an inhomogeneous structure in the umbra, which depends on the SDO/AIA recording channel. Below, at the upper photosphere level (1600 \\AA), there is a well-defined umbra indicated by the dashed circle. The umbral features have a lower level of emission variation; against this background, sources with both point-like and extended shapes stand out.

We found eight UF sources within the umbral boundary. The source size varies from 2 to 8 \\arcsec. These sources are mainly located on the periphery, near the sunspot umbral boundary. Moving upwards to the transition region level (304 \\AA), we observe the disappearance of the point UF sources (No. 1-4) and an increase in the brightness of the extended UF sources (No. 5-8). There is an increase in the emission variation, and accordingly the umbral brightness increases owing to the boost of the background three-minute oscillations. Higher, in the corona (171 \\AA), we see that, along with the UF sources visible below, extended details appear that spatially coincide with the magnetic loops. Propagation of the background three-minute waves along these loops contributes mainly to the increase in emission variation.

For short-time processes of the UF type, the maximal brightness is reached lower, at the photosphere level (1600 \\AA). When comparing the three-minute background component emission variations in the different SDO/AIA temperature channels, the maximal value is reached at the transition region level (304 \\AA).
The obtained variation maps show the values of the signal variance of both the periodic and non-periodic components. To isolate only the periodic signal, we constructed a series of narrowband maps with 3-min signal periodicity in space and time using the PWF technique. Figure~\\ref{3} shows the obtained snapshots of the narrowband oscillation maps (positive half-periods) in the SDO/AIA temperature channels at 1600 \\AA ~(00:22:17 UT) and 304 \\AA ~(00:22:45 UT). These times correspond to the appearance of the maximum brightness in UF source N5. We see that at the 1600 \\AA ~wavelength there is only one bright local UF source associated with periodic oscillations in a limited spatial area. Its position hardly changes with time. In the transition zone (304 \\AA), we see the wavefronts as an evolving spiral with the pulse source in the centre of the umbra. Similar dynamics of the wavefronts were discussed in \\cite{2014A&A...569A..72S}. Contours highlight the details of the fronts whose brightness exceeds 50 \\% of the maximum value in time. As the waves propagate from the umbral centre to its boundary, these details continuously appear and disappear, producing the short-term brightenings of separate parts of the fronts as background UFs. On the variation maps, these changes are connected with the background brightening.

To understand how the UF sources are related to the umbral magnetic structures, we compared their spatial positions with the coronal loops seen in the EUV emission (SDO/AIA, 171 \\AA) and with the magnetic field structure of this active region previously described in \\cite{2012ApJ...756...35R}. Because the considered sunspot is the leading one in the group, the magnetic field configuration shows a well-defined east-west asymmetry. The magnetic field lines anchored in the eastern part of the sunspot are much lower and more compact than the field lines anchored in the western part of the sunspot.
When considering the UF source positions (Fig.~\\ref{2}, 1600 \\AA), we notice that the detected point UF sources (numbered 1-4) are localized in the western part of the umbra, near the footpoints of large magnetic loops. The more extended sources (numbered 5-8) are related to the eastern part and are located near the footpoints of the compact loops connecting the sunspot with its tail part. The size of the extended UF sources is about 7-10 \\arcsec, and that of the point UFs is about 2.5 \\arcsec.

\\begin{figure}
\\begin{center}
\\includegraphics[width=9.0 cm]{Fig4.eps}
\\end{center}
\\caption{Time-distance plots along the N5 UF source obtained by SDO/AIA in the 1600 \\AA ~(left panel) and 304 \\AA ~(right panel) temperature channels on December 08, 2010. The periodic brightness changes are the 3-minute oscillation wavefronts. The arrows show the UFs. The horizontal dashed lines indicate the umbra/penumbra border. The 1D spatial coordinates are in arcsec, the time in UT.}
\\label{4}
\\end{figure}

\\subsection{Time dynamics of UFs on December 08, 2010}

A more comprehensive analysis of the time dynamics of the wave processes was performed for the sunspot of active region NOAA 11131 on December 08, 2010. The wave processes inside this umbra were intensively studied by \\cite{2012A&A...539A..23S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2014AstL...40..576Z}.

The detected compact sources of maximal variation in Fig.~\\ref{2} were studied to reveal the existence of flash and/or oscillation activity. For this purpose, we scanned each of the sources at 1600 \\AA ~and 304 \\AA ~and built the time-distance plots. The arrows show the UF source scan directions.

\\begin{figure}
\\begin{center}
\\includegraphics[width=9.0 cm]{Fig5.eps}
\\end{center}
\\caption{Time dynamics of the UV emission for the N2 and N6 sources at 1600 \\AA. The arrows show the maximum emission of the UFs.
Time in UT.}
\\label{5}
\\end{figure}

Figure~\\ref{4} presents an example of the obtained time-distance plots at 1600 \\AA ~(left panel) and at 304 \\AA ~(right panel) for the N5 extended source. Throughout the entire observational time there are broad background three-minute brightness variations in the umbra that smoothly transition into five-minute oscillations at the boundary of the umbra and penumbra, shown by the dashed line. This type of partial brightening of the wavefronts during propagation in the umbra as UFs was described in \\cite{2014ApJ...792...41Y}. These UFs are exhibited most clearly at the level of the transition region at 304 \\AA ~(Fig.~\\ref{4}, right panel). These oscillations also exist lower, at the level of the upper photosphere (1600 \\AA). Against their background, we note a series of periodically recurring local UFs of various power. The arrows in Fig.~\\ref{4} (left panel) indicate separate pulses. The spatial positions of the flashes coincide with the maximal brightness of the N5 extended source. The fine spatio-temporal structure of the UF sources also coincides with the brightenings of the three-minute oscillation background wavefronts.

When comparing the flash peak values in the lower and upper sunspot atmosphere, we note that the UFs have a shorter duration at the level of the photosphere than at the level of the transition region. Low-frequency modulation of the three-minute oscillations occurs. The brightness change at 304 \\AA ~occurs smoothly, without well-defined peaks. During the flashes, brightenings of the 3-minute wavefronts occur in the source. The brightness contrast decreases as the height of the UF observation increases.
One may assume that the UFs and the background three-minute oscillations have an identical nature, in the form of an increase in wave activity within the magnetic loops, where their propagation occurs on different temporal and spatial scales.

To compare the time profiles of the brightness variation in the different UF sources at one wavelength, we used cuts along the spatial coordinates with the maximal brightness on the time-distance plots (Fig.~\\ref{4}). The profiles for each UF source were obtained. Fig.~\\ref{5} shows an example of the brightness changes for the N2 and N6 sources at the level of the upper photosphere (1600 \\AA), where the UF visibility is maximal.

One can see that, along with the well-defined three-minute oscillations (Fig.~\\ref{5}, left panel), there also exist pulse events in the form of UFs. Their number and duration depend on the flash source. Thus, for the sources numbered 1 through 4, we observed only individual flashes during the three-hour interval of observations. At the same time, on the profiles of sources 5-8, we note series of flashes with different amplitudes and durations (Fig.~\\ref{5}, right panel).

Comparing the shapes of the revealed sources in Fig.~\\ref{2} with the corresponding profiles in Fig.~\\ref{5} showed that the emergence of rare individual UFs is typical of the point sources. The extended UF sources are related to series of periodically recurring pulses of different amplitudes, about 4-14 flashes during the observations. Comparing the peak amplitudes of the various UF sources revealed that the brightness change in the point sources is almost five times smaller than that in the extended sources.

\\begin{figure}
\\begin{center}
\\includegraphics[width=9.0 cm]{Fig6.eps}
\\end{center}
\\caption{Time dynamics of the N5 UF source in various SDO/AIA channels: 1600 \\AA ~(left panel) and 171 \\AA ~(right panel). The blue lines show the brightness changes recorded during the flashes. The red lines show the time profiles of the filtered 3-minute oscillations.
The numerals denote the oscillation train numbers. Bottom panels: Fourier spectra of the UF signals for the corresponding SDO/AIA channels.}\n\label{6}\n\end{figure}\n\n\n\subsubsection{Relation between wave dynamics and UFs}\n\nBased on the obtained 1D time-distance plots for each source (Fig.~\ref{4}), for which the relation between the oscillating 3-minute component and the UF emergence is well traced, we performed a spectral analysis of the time profiles by using the fast Fourier transform (FFT) and the PWF technique. We applied the Fourier transform to provide a good spectral resolution, and the PWF technique to obtain the spatio-temporal structure of the wavefronts propagating in UF sources.\n\nFigure~\ref{6} shows an example of the oscillations detected in the N5 extended source over the 00:10-00:50 UT observational period, when UFs emerged. We can see the profiles with sharp UFs at 1600 \AA. At the corona level, at 171 \AA, there are stable 3-min oscillations without spikes. This served as the main criterion for studying the spectral behaviour of the filtered 3-min oscillations at 171 \AA ~and comparing it with the original signal at 1600 \AA. In this case, the spectral power is not distorted by sharp jumps in the signals.\n\nOne can see that at the level of the upper photosphere (Fig.~\ref{6}a, 1600 \AA, blue lines), there exist periodic brightness changes in the EUV emission. These changes take the shape of a UF series, where the UFs were exhibited as a sequence of low-frequency trains of higher frequency oscillations. Those higher frequency oscillations are particularly pronounced in the higher coronal layers of the sunspot atmosphere at 171 \AA ~(Fig.~\ref{6}b). The Fourier spectrum showed the existence of significant harmonics.
These harmonics are related to an $\sim$ 3-5-minute periodicity and to the $\sim$ 13-min low-frequency oscillations (Fig.~\ref{6}c,d).\n\nTo trace the time dynamics of the detected periodicity, we performed a wavelet filtration of the series in the period band near three minutes. We found four trains of high-frequency oscillations, numbered in Fig.~\ref{6}a. If one compares the behaviour of the filtered three-minute signal (red lines) and the UF emergence (blue lines), it is apparent that the train maxima coincide with the UF brightness maxima. A complex UF time profile (in the form of a series of variable peaks) is related to the existence of oscillations with different amplitudes, phases, and lifetimes in the trains.\n\nWhen comparing the oscillations in UFs, one can see (Fig.~\ref{6}) that the low-frequency trains are well visible in the lower atmosphere. Their power decreases in the upper atmosphere. This is well traced in the Fourier spectra of the signals for different height levels (Fig.~\ref{6}c,d). We note an inverse dependence between the powers of these harmonics. At the level of the upper photosphere, the low-frequency modulation is maximal at a low level of the 3-minute harmonic. In contrast, in the corona, there is a pronounced peak of 3-minute oscillations with a minimal value of the $\sim$ 13-minute component power.\n\nThe increase of oscillations in the source led to the formation of compact brightenings in the form of UFs on the time-distance plot (Fig.~\ref{4}, left panel). As the low-frequency oscillation power decreases, at the corona level a smooth increase occurs in the high-frequency three-minute component in the form of brightenings of separate details of the wavefront (Fig.~\ref{4}, right panel). The mean UF duration for extended sources was $\sim$ 3.7 minutes.
This value is near the value of one period of the three-minute oscillation maximal power.\n\nTo test the obtained association between UFs and oscillations, we calculated the correlation coefficients between the original signal and the three-minute filtered signal in various SDO/AIA channels. There is a direct correlation between the three-minute oscillation power and the UF power. The maximal value of the correlation coefficient is at 1600 \AA, and this value varies within the 0.65 - 0.85 range for different sources of flashes.\n\nOne may assume that the obtained association between the increase in the three-minute oscillations and the UF emergence is characteristic not only of the detected N5 source but of all the detected sources. To test this statement, we calculated the narrowband three-minute oscillation power variations in the N7 and N8 sources above, at the corona level (171 \AA), and compared these variations with the UF emergence in the integral signal below, at the photosphere level (1600 \AA). The observational interval was 00:00-03:20 UT.\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=9.0 cm]{Fig7.eps}\n\end{center}\n\caption{Amplitude variations of the N7 and N8 extended sources of UFs in the 1600 \AA ~and 171 \AA ~temperature channels. Blue lines show the profiles of the original signal at 1600 \AA. Red lines show the 3-min oscillation power at 171 \AA.}\n\label{7}\n\end{figure}\n\nFigure~\ref{7} shows the time profiles of the signals in the N7 and N8 extended sources, and the corresponding variation of the oscillation power in the corona. Apparently, in the sources at the upper photosphere level (blue lines, 1600 \AA), there are recurrent UFs of different amplitude. As in the case of the N5 source, the bulk of the UF peak values are accompanied by an increase in the three-minute oscillation low-frequency trains at the corona level (red lines, 171 \AA).
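The correlation test described above can be sketched as follows (a toy reconstruction: the synthetic signal is ours, and a simple FFT band-pass stands in for the wavelet filtration used in the paper):

```python
import numpy as np

def bandpass_fft(signal, cadence, p_lo, p_hi):
    """Keep only Fourier components with periods in [p_lo, p_hi] seconds.

    A simple FFT filter standing in for the paper's wavelet filtration;
    `cadence` is the sampling step in seconds.
    """
    freq = np.fft.rfftfreq(signal.size, d=cadence)
    spec = np.fft.rfft(signal - signal.mean())
    keep = (freq >= 1.0 / p_hi) & (freq <= 1.0 / p_lo)
    return np.fft.irfft(spec * keep, n=signal.size)

# Toy signal: a 3-min (180 s) oscillation modulated by a ~13-min train,
# sampled at the 24 s SDO/AIA UV cadence, plus noise.
rng = np.random.default_rng(0)
cadence = 24.0
t = np.arange(512) * cadence
raw = (1 + np.sin(2 * np.pi * t / 780.0)) * np.sin(2 * np.pi * t / 180.0)
raw += 0.3 * rng.standard_normal(t.size)

filtered = bandpass_fft(raw, cadence, 150.0, 220.0)
r = np.corrcoef(raw, filtered)[0, 1]
print(r)  # a strong direct correlation between original and filtered signal
```

With these toy settings the coefficient comes out strongly positive, qualitatively consistent with the 0.65 - 0.85 range reported above for the 1600 \AA ~channel.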
There is a well-defined correlation between the signals. Thus, over 01:20-03:20 UT, the emergence of the ``step-like'' signals at the photosphere level, with their gradual steepening and the emergence of UF pulses, is followed by a smoothly varying increase in the power of the three-minute oscillation trains in the corona.\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=9.0 cm]{Fig8.eps}\n\end{center}\n\caption{Amplitude variations of the point N1 and N3 UF sources. Green lines show the original signal at 1600 \AA; blue lines present the signal at 171 \AA. Red lines show the mean power of the 3-min oscillations.}\n\label{8}\n\end{figure}\n\n\begin{figure*}\n\begin{center}\n\includegraphics[width=14.0 cm]{Fig9.eps}\n\end{center}\n\caption{Snapshots of the spatial distribution of travelling wave fronts during the UF for the N5 extended source. The duration of propagation of the 3-min waves along a magnetic waveguide is about one period. The observational wavelength is 1600 \AA. A continuous line represents a positive half-period of the propagating waves, and the dashed line outlines the negative half-period. The background image is the distribution of the brightness of the source at the time of the maximum flash. The minimum of the negative half-period is indicated in green. The time resolution is 24 sec.}\n\label{9}\n\end{figure*}\n\nFor the N1 and N4 point sources, only single pulses with a low intensity level were observed. For these sources, we compared the coronal three-minute oscillation power mean-level variations with the moments of emergence of the single UF burst peaks at the photosphere level. Fig.~\ref{8} shows the original signal profiles at varying height levels (green lines for 1600 \AA, blue lines for 171 \AA) with the superposition of the three-minute oscillation mean power (red lines, 171 \AA). Apparently, the moments of the short flash emergence below the sunspot coincide with the three-minute oscillation power maxima above.
In this, we note a sequence in the signal evolution similar to that for the extended sources. The difference is in the duration of the flashes. Thus, for the N1 source (02:36:15 UT), the UF duration was $\sim$ 1.5 minutes, for N2 (03:07:30 UT) about 1.1 minutes, for N3 (01:01:30 UT) about 1.0 minute, and for N4 (03:12:00 UT) about 1.1 minutes. The mean UF duration for the point sources was $\sim$ 1.2 minutes.\n\n\subsubsection{Wave propagation in UF sources}\n\nTo study the narrowband wave propagation over the UF source space, we used the PWF technique. Fig.~\ref{9} shows the time sequence of the EUV emission wavefront images (SDO/AIA, 1600 \AA) obtained for the N5 source during the second train of the three-minute oscillations (00:18:00 - 00:20:48 UT). The temporal resolution was 24 sec. The positive half-period of the oscillation is shown by the continuous contours; the negative is outlined by the dashed contours. The background is the source image at the instant of the UF maximum at 00:20 UT.\n\nComparing the obtained images (Fig.~\ref{9}) with the profile of the UF maximal brightness variation (Fig.~\ref{6}a), we can clearly see that the brightness increase is accompanied by the onset of wave propagation along the detected direction coinciding in shape with the source of the maximal UF emission. These motion directions towards the penumbra, in turn, coincide with the footpoint of the magnetic loop along which the waves propagate. There are recurrent instances when the fronts emerge at the same site of the limited umbral space. The beginning of the N5 extended source coincides spatially with the pulsing centre of the three-minute waves expanding spirally. One may assume that the wave source coincides with the footpoint of the magnetic bundle that diverges in the upper atmosphere. Separate spiral arms rotate anti-clockwise.
These background waves were studied in \cite{2014A&A...569A..72S} for this active region.\n\nPresumably, the propagation of spiral-shaped waves (Fig.~\ref{3}, 304 \AA) is the initiator of the wave increase in separate magnetic loops. In this case, the bulk of the bright fronts propagates towards the bright extended UF emergences. The projected wave propagation velocities along the waveguide lie within the 20-30 km/s interval. These values agree with the slow magneto-acoustic wave propagation velocity in the sunspot.\n\nFor the different numbered low-frequency trains of the UF in the N5 source (Fig.~\ref{6}a, 1600 \AA), the maximal brightness was located in various parts of the magnetic waveguide, and it varied with time. Each UF series, with a duration of $\sim$ 10-13 minutes, was accompanied by an increase in the low-frequency trains of the 3-minute waves. There are differences between the wave trains. UFs are observed when both propagating and standing waves are visible within one train. The wave velocity can vary from train to train. Mainly, the waves move from the centre of the sunspot towards its boundary.\n\nThe increase in the wave processes for the UF point sources occurs in the form of single pulses in an umbral site limited to several pixels. The emergence of so-called standing waves, without apparent propagation, is characteristic of these sources. Mainly, the 2D time dynamics of the three-minute oscillation sources agrees with the UF source dynamics.\n\n\section{Discussion}\n\nThe results obtained from the SDO/AIA data showed that the investigated phenomenon of UFs is characteristic of all the heights within a sunspot atmosphere. We see a response both below, at the level of the photosphere, and above, at the level of the corona of the sunspot atmosphere.
This means that flashes represent a global process of energy release, and this process encompasses all the layers of a sunspot umbra.\n \nUsually, an umbra is considered a relatively quiet region as compared with a penumbra. This is because the umbral magnetic field represents a vertical bundle of magnetic field lines diverging with height. The inclination of the umbral field lines is minimal. Correspondingly, the magnetic reconnection responsible for the emergence of the flash energy release is unlikely in a homogeneous, vertical field. This conclusion indicates that there are other mechanisms for the emission increase during UFs.\n\nA wave mechanism is an alternative explanation of this increase. It is based on the assumption that the observed brightenings in the form of UFs are short-time increases in the power of wave processes within separate umbral parts. This viewpoint has become common because the well-known three-minute umbral oscillations were revealed to propagate non-uniformly both over the sunspot space and in time \citep{2012A&A...539A..23S, 2014A&A...569A..72S}. Mainly, the waves are modulated by a low-frequency component in the form of $\sim$ 13-15 minute trains, and their power is variable in time. The wave motion direction is determined by the spatial structure of the umbra-anchored magnetic field lines, along which slow magneto-acoustic waves propagate.\n\nThere are instances when a significant increase in the power of the three-minute oscillation trains occurs at separate footpoints of magnetic loops. These processes have an indefinite character, and the source of the next wave increase is impossible to predict. On the other hand, the magnetic loop footpoints are stable over the umbral space over a certain time period.
This enables us to assume that the positions of the UF sources are probably directly related to the magnetic loop footpoints, in which short-time increases in the three-minute waves are observed.\n\nThese assumptions agree well with the spatial localization of the UF sources at the umbral boundary (Fig.~\ref{1}), as well as with the difference in their shapes, i.e. extended and point. Umbral flash sources maintain their spatial stability for about three hours, producing UF series. On the other hand, \cite{2003A&A...403..277R} noted that some flashes possess instability both in space and time.\n \nIn \cite{2014ApJ...792...41Y}, the authors showed that the UFs visible on time-distance plots occur at random locations without a well-established occurrence rate. It has been established that the appearance of new UF sources is associated with trains of three-minute oscillations of much larger amplitude in the sunspot umbra. The individual UFs ride the wave fronts of umbral oscillations. A possible explanation for this is the presence in the umbra of background oscillations in the form of expanding fronts of 3-min waves and their interactions with each other. A similar type of brightening was considered in \cite{2014A&A...569A..72S}. These authors noted that the individual parts of the wave fronts, which are shaped as rings or spirals, can, during propagation along magnetic loops with different spatial configurations and through interactions with each other, lead to the appearance of diffuse brightenings with spatial instability. Such short-lived background UFs are well visible on the time-distance diagrams, constantly appear in the umbra, and do not have stable shapes and localizations in space (Fig.~\ref{3}, 304 \AA). Basically, the pulse source of such wave fronts is located in the centre of the umbra, and is possibly associated with the footpoint of the magnetic bundle whose loops expand with height.
\n\nIn the case of background UFs, we observed the local traces of waves that propagate along loops with different inclinations relative to the solar normal and, correspondingly, different cut-off frequencies. This forms a brightening of wave tracks, which we observed as diffuse UFs during increases in oscillation power in selected areas of the umbra. We can also obtain the same effect from interactions between wave fronts. With height, the visibility and spatial positions of these sources are shifted in the radial direction because of the upward wave propagation.\n\nFor the local UFs discussed in our work, the sources have a small angular size with a periodic 3-min component and a stable location, both over space and height (Fig.~\ref{3}, 1600 \AA). Their appearance is associated with the maximal power of the waves propagating near the footpoints of coronal loops outside the main magnetic bundle. These loops originate at the umbral periphery. Their inclination can differ relative to the configuration of the main magnetic bundle.\n \n \n The existence of a UF fine structure was previously assumed in \cite{2000Sci...288.1396S} and \cite{2005ApJ...635..670C} on the basis of spectroscopic observations. Improvements in the angular and spatial resolution of astronomical instruments have enabled such changes in UF sources to be observed directly. Thus, \cite{2009ApJ...696.1683S} used HINODE data (CaII H line) to find an umbral fine structure in the form of filamentary structures that emerged during UFs. These details were present during an oscillation increase, and formed a system of extended filaments, along which the brightness varied with time in the form of travelling bright and dark details. The calculated horizontal velocity varied within 30-100 km/s.\n \nWe can assume that we observe in UF sources the projected motions (at the footpoints of magnetic field lines) of the three-minute longitudinal wavefronts propagating upwards \citep{2003ApJ...599..626B}.
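The role of the field-line inclination can be made explicit with the standard cut-off relation (a textbook expression quoted here for orientation, not a result of this paper): for an isothermal atmosphere the acoustic cut-off frequency is $\omega_{c}=\gamma g/(2c_{s})$, and for slow magneto-acoustic waves guided along a field line inclined by the angle $\theta$ to the local vertical the effective cut-off is reduced,

```latex
\begin{equation*}
\omega_{c}(\theta)=\frac{\gamma g}{2c_{s}}\,\cos\theta\,,
\end{equation*}
```

so the cut-off period lengthens along inclined loops; this is why three-minute (and even five-minute) waves can escape along strongly inclined field lines while remaining evanescent along vertical ones \citep{1977A&A....55..239B, 1984A&A...133..333Z}.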
Depending on the starting localization of the field line (at the umbral centre or near its boundary) and on its inclination to the solar normal, there is a projection effect in the wave visibility. Near the sunspot boundary, one observes extended UF sources, whereas closer to the centre point sources are observable. This statement is true if we assume a symmetry of the diverging field lines relative to the umbral central part. In reality, there is often an east-west asymmetry of an active group. This asymmetry is related to the presence of the head (sunspot) and tail (floccule) parts.\n\nThe wave path length and, accordingly, the wave visibility at certain oscillation periods are determined by the cut-off frequency \citep{1977A&A....55..239B, 1984A&A...133..333Z}. The path also varies as the cosine of the inclination angle of the magnetic waveguide. Point UF sources with a minimal angular size are related to the footpoints of vertical magnetic field lines. Large, extended UF sources are related to the footpoints of field lines with a large inclination to the solar normal.\n\nComparing the positions of the sources in NOAA 11131 at various heights showed a good agreement of the UF sources underneath (1600 \AA) with the footpoints of the coronal loops (171 \AA), which play the role of magnetic waveguides for the three-minute waves. For the low-lying loops in the eastern part of NOAA 11131 that connect the sunspot with the tail part, we see extended UF sources at their footpoints. For the western part, we see point sources.\n\nThe revealed interconnection between the UF emergence and the increase in the three-minute wave trains indicates that we can consider UFs as events in which peak increases in the oscillation trains at the footpoints of magnetic loops manifest themselves. There is a direct dependence, with a maximal correlation, between the oscillation power and the flash brightness. The higher the magnitude of the three-minute waves, the more powerful the flash.
This dependence concerns both extended and point UF sources. The UF emission maximum coincides with the maximum of the three-minute oscillations within one wave train.\n\nThe 2D spectral PWF-analysis of the SDO/AIA image cube directly showed (Fig.~\ref{9}) that, during UFs, three-minute wave motions emerge in the UF sources along the detected magnetic loops towards the umbral boundary. The wave propagation starts at the footpoint and terminates at the level where the loop inclination angle corresponds to a cut-off frequency beyond the limits of the observed three-minute frequency band. The greater the loop inclination, the greater the projection of the UF source towards the observer, and the more wave trains (UF pulses) we can record. Correspondingly, we will observe extended, bright UF sources. In contrast to the propagating waves, so-called standing waves will be observed for the point sources. An explanation of this is the projection of the waves propagating along vertical magnetic loops towards the observer. In this case, the wavefronts will be observed within a spatially limited cross-section of the loop. Those fronts form UF sources with a small angular size.\n\nThe UF source lifetime will also be different. For point UFs, the source lifetime is about 1-2 minutes; for extended UFs, it is 3-15 minutes. The visibility of the point sources is restricted by the low integral power level of their UF emission and the short observational time of the maximal oscillations (1-2 half-periods). For extended UF sources, we can observe a few travelling wave trains simultaneously, which intensifies their integral brightness and increases the observational time (lifetime).\n\n\section{Conclusions}\n\nWe analysed the association between an increase of wave activity in sunspot active groups and the emergence of UFs. We used UV observational data obtained in various temperature channels of SDO/AIA with high spatial and temporal resolutions.
To detect the oscillations, we used time-distance plots and the Fourier and wavelet transform spectral techniques. The results are as follows:\n\n1) We revealed fast periodic disturbances related to the wave activity in the sunspot umbra during a three-hour observation. These disturbances correlate well with the continuous diffuse brightening of separate details of the propagating three-minute wavefronts as described in \cite{2014ApJ...792...41Y}. Along with this, short-time emergences of small local sources having a periodic component and identified as UFs are observed.\n\n2) We can divide the observed umbral brightenings into two types. The first type are background UFs associated with random brightenings of separate parts of wave fronts during their propagation. These UFs are observed at all times in the umbra as weak diffuse details that ride the wave fronts without stable shapes and localization in space. The second type are local UFs associated with the increase of wave activity near the footpoints of magnetic loops. These sources do not change their spatial position in time and show pronounced wave dynamics during UFs.\n\n3) For the local UFs, we revealed various types of spatial shapes of the sources. We suppose that the point sources are located at the footpoints of large magnetic loops. Their characteristic feature is rare single pulses of low power and short duration. The extended sources are related to the footpoints of low magnetic loops with large inclinations. The features of this source type are series of recurrent UF pulses related to propagating trains of three-minute waves. The flash power depends on the length of the wave path along which the emission is integrated. The wave path and, correspondingly, the projected size of the UF source are determined by the cut-off frequency.\n\n4) The emergence of the main UF maximum is shown to coincide with the maximal value of the power of the three-minute oscillation trains in separate loops.
This type of wave dynamics follows that described in \cite{2014ApJ...792...41Y} for background UFs, but localized in magnetic loops. There is a correlation between the UF emergence at the photosphere level and the increase in the power of the three-minute wave trains in the corona.\n\nThese results explicitly show the correlation between the sunspot three-minute oscillation processes and the UF emergence. These processes are a reflection of the slow magneto-acoustic wave propagation from the subphotospheric level into the corona along the inclined magnetic fields. The dynamics of the wave process power in separate magnetic loops determines the time and site of the UF source emergence. The main mechanism responsible for the observed UF parameters is the wave cut-off frequency. In the future, we plan to study in more detail the relationship between the shape of the local UF sources and the inclination of the magnetic loops near whose footpoints the flashes are observed.\n\n\begin{acknowledgements}\nWe are grateful to the referee for helpful and constructive comments and suggestions. The authors are grateful to the SDO/AIA teams for operating the instruments and performing the basic data reduction, and especially, for the open data policy. This work is partially supported by the Ministry of Education and Science of the Russian Federation, the Siberian Branch of the Russian Academy of Sciences\n(Project II.16.3.2) and by the programme of basic research of the RAS Presidium No.28. The work is carried out as part of Goszadanie 2018, project No. 007-00163-18-00 of 12.01.2018 and supported by the Russian Foundation for Basic Research (RFBR), grants Nos. 14-0291157 and \n17-52-80064 BRICS-a. The research was funded by the Chinese Academy of Sciences President's International Fellowship Initiative, Grant No. 2015VMA014.
\n\end{acknowledgements}\n\n\bibliographystyle{aa}\n\n\section{Introduction}\r\n A process of $e^+e^-$ pair production by a high-energy electron in the atomic field is interesting both from the experimental and theoretical points of view. It is important to know the cross section of this process with high accuracy for data analysis in detectors. Besides, this process gives a substantial contribution to the background in precision experiments devoted to the search for new physics. From the theoretical point of view, the cross section of electroproduction in the field of heavy atoms reveals very interesting properties of the Coulomb corrections, which are the difference between the cross section exact in the parameters of the field and that calculated in the lowest order of the perturbation theory (the Born approximation).\r\n\r\nThe cross sections in the Born approximation, both differential and integrated, have been discussed in numerous papers\r\n\cite{Bhabha2,Racah37,BKW54,MUT56,Johnson65,Brodsky66,BjCh67,Henry67,Homma74}. The Coulomb corrections to the differential cross section of high-energy electroproduction by an ultra-relativistic electron in the atomic field have been obtained only recently in our paper \cite{KM2016}. In that paper, it is shown that the Coulomb corrections significantly modify the differential cross section of the process as compared with the Born result. It turns out that both effects, the exact account for the interaction of the incoming and outgoing electrons with the atomic field and the exact account for the interaction of the produced pair with the atomic field, are very important for the value of the differential cross section.
On the other hand, there are many papers devoted to the calculation of $e^+e^-$ electroproduction by heavy particles (muons or nuclei) in an atomic field \cite{Nikishov82,IKSS1998,SW98,McL98,Gre99,LM2000}. In those papers, the interaction of the heavy particle with the atomic field has been neglected. In our recent paper \cite{KM2017}, it has been shown that the cross section, differential over the angles of the heavy outgoing particle, changes significantly due to the exact account for the interaction of the heavy particle with the atomic field. However, the cross section integrated over these angles is not affected by this interaction. Such unusual properties of the cross section of electroproduction by a heavy particle stimulated us to perform a detailed investigation of the integrated cross section of electroproduction by an ultra-relativistic electron.\r\n\r\nIn the present paper, we investigate in detail the integrated cross section, using the analytical result for the matrix element of the process obtained in our paper \cite{KM2016} with the exact account for the interaction of all charged particles with the atomic field. Our goal is to understand the relative importance of various contributions to the integrated cross section under consideration.\r\n\r\n\section{General discussion}\label{general}\r\n\r\n\begin{figure}[h]\r\n\centering\r\n\includegraphics[width=1.\linewidth]{diagrams.eps}\r\n\caption{Diagrams $T$ (left) and $\widetilde{T}$ (right) for the contributions to the amplitude ${\cal T}$ of the process $e^-Z\to e^- e^+e^-Z$.
The wavy line denotes the photon propagator, and the straight lines denote the wave functions in the atomic field.}\r\n\label{fig:diagrams}\r\n\end{figure}\r\n\r\nThe differential cross section of high-energy electroproduction by an unpolarized electron in the atomic field reads\r\n\begin{equation}\label{eq:cs}\r\nd\sigma=\frac{\alpha^2}{(2\pi)^8}\,d\varepsilon_3d\varepsilon_4\,d\bm p_{2\perp}\,d\bm p_{3\perp}d\bm p_{4\perp}\,\frac{1}{2}\sum_{\mu_i=\pm1}|{\cal T}_{\mu_1\mu_2\mu_3\mu_4}|^{2}\,,\r\n\end{equation}\r\nwhere $\bm p_1$ is the momentum of the initial electron, $\bm p_2$ and $\bm p_3$ are the final electron momenta, $\bm p_4$ is the positron momentum, $\mu_i=\pm 1$ corresponds to the helicity of the particle with the momentum $\bm p_i$, $\bar\mu_i=-\mu_i$, $\varepsilon_{1}=\varepsilon_{2}+\varepsilon_{3}+\varepsilon_{4}$ is the energy of the incoming electron, $\varepsilon_{i}=\sqrt{{p}_{i}^2+m^2}$, $m$ is the electron mass, and $\alpha$ is the fine-structure constant, $\hbar=c=1$. In Eq.~\eqref{eq:cs} the notation $\bm X_\perp=\bm X-(\bm X\cdot\bm \nu)\bm\nu$ for any vector $\bm X$ is used, $\bm\nu=\bm p_1/p_1$.\r\nWe have\r\n\begin{equation}\label{TTT}\r\n{\cal T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_2\mu_3\mu_4}-\widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}\,,\r\n\quad \widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_3\mu_2\mu_4}(\bm p_2\leftrightarrow \bm p_3)\,,\r\n\end{equation}\r\nwhere the contributions $T$ and $\widetilde{T}$ correspond, respectively, to the left and right diagrams in Fig.~\ref{fig:diagrams}.\r\nThe amplitude $T$ has been derived in Ref.~\cite{KM2016} by means of the quasiclassical approximation \cite{KLM2016}. Its explicit form is given in the Appendix with one modification.
Namely, we have introduced the parameter $\lambda$, which is equal to unity if the interaction with the atomic field of the electrons having the momenta $\bm p_1$, $\bm p_2$ in the term $T$ and $\bm p_1$, $\bm p_3$ in the term $\widetilde T$ is taken into account. The parameter $\lambda$ equals zero if one neglects this interaction. The insertion of this parameter allows us to investigate the importance of various contributions to the cross section.\r\n\r\n First of all, we note that the term $T$ is a sum of two contributions, see Appendix,\r\n$$T=T^{(0)}+T^{(1)}\,,$$\r\nwhere $T^{(0)}$ is the contribution to the amplitude in which the produced $e^+e^-$ pair does not interact with the atomic field, while the contribution $T^{(1)}$ contains such interaction.\r\nIn other words, the term $T^{(0)}$ corresponds to bremsstrahlung of the virtual photon decaying into a free $e^+e^-$ pair. In the contribution $T^{(1)}$, the electrons with the momenta $\bm p_1$ and\r\n$\bm p_2$ may or may not interact with the atomic field. The latter case is given by the amplitude $T^{(1)}$ at $\lambda=0$. Below we refer to the result of accounting for such interaction in the term $T^{(1)}$ as the Coulomb corrections to scattering. Note that the contribution $T^{(0)}$ at $\lambda=0$ vanishes.\r\n\r\n\r\nIn the present work we are going to elucidate the following points: the relative contribution of the term $T^{(0)}$ to the cross section, the importance of the Coulomb corrections to scattering, and the importance of the interference between the amplitudes $T$ and $\widetilde{T}$ in the cross section.\r\n\r\nWe begin our analysis with the case of the differential cross section. Let us consider the quantity $S$,\r\n \begin{equation}\label{S}\r\nS=\sum_{\mu_i=\pm1}\Bigg|\frac{\varepsilon_1 m^4 {\cal T}_{\mu_1\mu_2\mu_3\mu_4}}{\eta (2\pi)^2}\Bigg|^2 \,,\r\n\end{equation}\r\nwhere $\eta=Z\alpha$ and $Z$ is the atomic charge number.
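As a quick numerical orientation (our illustration, not taken from the paper): for a gold target the Coulomb parameter $\eta$ is far from small, which is why an expansion in $\eta$, i.e. the Born approximation, can fail badly here.

```python
# Coulomb parameter eta = Z * alpha for a gold target (Z = 79).
# alpha is the fine-structure constant, approximately 1/137.036.
alpha = 1 / 137.036
Z = 79
eta = Z * alpha
print(f"eta = {eta:.3f}")  # ~0.58: not a small expansion parameter
```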
In Fig.~\\ref{dif} the dependence of $S$\r\n on the positron transverse momentum $p_{4\\perp}$ is shown for gold ($Z=79$) at some values of $\\varepsilon_i$, $\\bm p_{2\\perp}$, and $\\bm p_{3\\perp}$. Solid curve is the exact result, long-dashed curve corresponds to $\\lambda=0$, dashed curve is the result obtained without account for the contributions $T^{(0)}$ and $\\widetilde{T}^{(0)}$, dash-dotted curve is the result obtained without account for the interference between $T$ and $\\widetilde{T}$, and dotted curve is the Born result (in the Born approximation $S$ is independent of $\\eta$). One can see for the case considered, that the Born result differs significantly from the exact one, and account for the interference is also very important. The contributions $T^{(0)}$ and $\\widetilde{T}^{(0)}$ are noticeable but not large, and the Coulomb corrections to the contributions $T^{(1)}$ and $\\widetilde{T}^{(1)}$ are essential.\r\nThe effect of screening for the values of the parameters considered in Fig.~\\ref{dif} is unimportant. Note that relative importance of different effects under discussions for the differential cross section strongly depends on the values of $\\bm p_{i}$. However, in all cases a deviation of the Born result from the exact one is substantial even for moderate values of $Z$.\r\n\r\n\\begin{figure}\r\n\\centering\r\n\\includegraphics[width=0.5\\linewidth]{plotdif.eps}\r\n\\caption{The quantity $S$, see Eq. 
\eqref{S}, as a function of $p_{4\perp}/m$ for $Z=79$, $\varepsilon_1=100m$, $\varepsilon_2/\varepsilon_1=0.28$, $\varepsilon_3/\varepsilon_1=0.42$, $\varepsilon_4/\varepsilon_1=0.3$, $p_{2\perp}=1.3 m$, $p_{3\perp}=0.5 m$, $\bm p_{3\perp}$ parallel to $\bm p_{4\perp}$, and the angle between $\bm p_{2\perp}$ and $\bm p_{4\perp}$ equal to $\pi/2$; the solid curve is the exact result, the dotted curve is the Born result, the dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the long-dashed curve corresponds to $\lambda=0$, and the dashed curve is obtained by neglecting the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif}
\end{figure}

Let us now consider the cross section $d\sigma/dp_{2\perp}$, i.e., the cross section differential over the electron transverse momentum $p_{2\perp}$. This cross section for $Z=79$ and $\varepsilon_1=100 m$ is shown in the left panel of Fig.~\ref{dif2}. In this figure the solid curve is the exact result, the dotted curve is the Born result, and the long-dashed curve corresponds to $\lambda=0$.
It is seen that the exact result differs significantly from the Born one, and accounting for the Coulomb corrections to scattering is also essential. The importance of the interference between $T$ and $\widetilde{T}$, as well as of the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, is demonstrated in the right panel of Fig.~\ref{dif2}. This panel shows the quantity $\delta$, the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, and the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.
It is seen that both effects are noticeable.

Our results are obtained under the condition $\varepsilon_i\gg m$, so the question of the limits of integration over the energies arises in the numerical calculation of $d\sigma/dp_{2\perp}$. We have examined this question and found that varying the limits of integration in the vicinity of the threshold changes the result of integration only slightly.
In any case, such a variation does not change the interplay of the various contributions to the cross sections, and we present the results obtained by integrating over the entire allowed kinematical region.

\begin{figure}[H]
	\centering
	\includegraphics[width=0.45\linewidth]{dp2new.eps}
	\includegraphics[width=0.45\linewidth]{dp2_difnew.eps}
	\caption{Left panel: the dependence of $d\sigma/dp_{2\perp}$ on $p_{2\perp}/m$ in units of $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; the solid curve is the exact result, the dotted curve is the Born result, and the long-dashed curve corresponds to $\lambda=0$. Right panel: the quantity $\delta$ as a function of $p_{2\perp}/m$, where $\delta$ is the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, and the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif2}
\end{figure}

It follows from Fig.~\ref{dif2} that the deviation of the results obtained for $\lambda=1$ from those obtained for $\lambda=0$ is noticeable and negative in the vicinity of the peak, and small and positive in the wide region outside the peak. However, these two deviations (positive and negative) strongly compensate each other in the cross section integrated over both electron transverse momenta $\bm p_{2\perp}$ and $\bm p_{3\perp}$.
This statement is illustrated in Fig.~\ref{dif4}, where the cross section differential over the positron transverse momentum, $d\sigma/dp_{4\perp}$, is shown for $Z=79$ and $\varepsilon_1=100 m$.

\begin{figure}[H]
	\centering
	\includegraphics[width=0.45\linewidth]{dp4new.eps}
	\includegraphics[width=0.45\linewidth]{dp4_difnew.eps}
	\caption{Left panel: the dependence of $d\sigma/dp_{4\perp}$ on $p_{4\perp}/m$ in units of $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; the solid curve is the exact result and the dotted curve is the Born result. Right panel: the quantity $\delta_1$ as a function of $p_{4\perp}/m$, where $\delta_1$ is the deviation of the approximate result for $d\sigma/dp_{4\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$.}
\label{dif4}
\end{figure}

Again, the Born result differs significantly from the exact one. It is seen that all the relative deviations $\delta_1$ depicted in the right panel are noticeable. Moreover, the result obtained for $\lambda=0$ and that obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ are very close to each other. This means that accounting for the Coulomb corrections to scattering leads to a very small shift of the integrated cross section $d\sigma/dp_{4\perp}$, in contrast to the cross section $d\sigma/dp_{2\perp}$. Such suppression is similar to that found in our recent paper \cite{KM2017} in the consideration of $e^+e^-$ pair electroproduction by a heavy charged particle in the atomic field.

Finally, let us consider the total cross section $\sigma$ of the process under consideration.
The cross section $\sigma$ for $Z=79$ as a function of $\varepsilon_1/m$ is shown in the left panel of Fig.~\ref{tot}. In this figure the solid curve is the exact result, the dotted curve is the Born result, and the dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Note that the small deviation of our Born result from the asymptotics of the Born result at relatively small energies is due, first, to the uncertainty of our result related to the uncertainty of the lower limit of integration over the energies of the produced particles and, second, to the neglect of the identity of the final electrons in Ref.~\cite{Racah37}.

\begin{figure}[H]
	\centering
	\includegraphics[width=0.45\linewidth]{total_cs.eps}
\includegraphics[width=0.45\linewidth]{total_cs_dif.eps}
		\caption{Left panel: the total cross section $\sigma$ as a function of $\varepsilon_1/m$ in units of $\sigma_0=\alpha^2\eta^2/m^2$ for $Z=79$; the solid curve is the exact result, the dotted curve is the Born result, and the dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Right panel: the quantity $\delta_2$ as a function of $\varepsilon_1/m$, where $\delta_2$ is the deviation of the approximate result for $\sigma$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$.}
\label{tot}
\end{figure}

It is seen that the exact result differs significantly from the Born one. In the right panel of Fig.~\ref{tot} we show the relative deviation $\delta_2$ of the approximate result for $\sigma$ from the exact one.
The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$. The corrections to the total cross section due to the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ and due to the Coulomb corrections to scattering are small even at moderate energies $\varepsilon_1$. The effect of the interference is more important at moderate energies and less important at high energies.

In our recent paper \cite{KM2016} the differential cross section of electroproduction by a relativistic electron was derived. For the differential cross section, we pointed out that the Coulomb corrections to scattering are most noticeable in the region $p_{2\perp}\sim \omega/\gamma$. On the basis of this statement, we evaluated in the leading logarithmic approximation the Coulomb corrections to the total cross section, see Eq.~(33) of Ref.~\cite{KM2016}. However, as shown in the present paper, for the total cross section the contribution of the Coulomb corrections to scattering in the region $p_{2\perp}\sim \omega/\gamma$ is strongly compensated by the contribution of the Coulomb corrections to scattering in the wide region outside $p_{2\perp}\sim \omega/\gamma$. As a result, the Coulomb corrections to the total cross section derived in the leading logarithmic approximation are not affected by the Coulomb corrections to scattering. This means that the coefficient in Eq.~(33) of Ref.~\cite{KM2016} should be two times smaller, equal to that in the Coulomb corrections to the total cross section of $e^+e^-$ electroproduction by a relativistic heavy particle calculated in the leading logarithmic approximation.
Note that the accuracy of the result obtained for the Coulomb corrections to the total cross section is very low, because in electroproduction there is a strong compensation between the leading and next-to-leading terms in the Coulomb corrections, see Ref.~\cite{LM2009}.

\section{Conclusion}
By tabulating the formula for the differential cross section of $e^+e^-$ pair electroproduction by a relativistic electron in the atomic field \cite{KM2016}, we have elucidated the importance of various contributions to the integrated cross sections of the process. It is shown that the Coulomb corrections are very important both for the differential cross section and for the integrated cross sections, even for moderate values of the atomic charge number. This effect is mainly related to the Coulomb corrections to the amplitudes $T^{(1)}$ and ${\widetilde T}^{(1)}$ due to the exact account of the interaction of the produced $e^+e^-$ pair with the atomic field (the Coulomb corrections to the amplitude of $e^+e^-$ pair photoproduction by a virtual photon). There are also other effects. For the cross section differential over the electron transverse momentum, $d\sigma/dp_{2\perp}$, the interference of the amplitudes and the contribution of virtual bremsstrahlung (the contribution of the amplitudes $T^{(0)}$ and ${\widetilde T}^{(0)}$) are noticeable. The Coulomb corrections to scattering are larger than these two effects but essentially smaller than the Coulomb corrections to the amplitude of pair photoproduction by a virtual photon. However, in the cross section differential over the positron transverse momentum, $d\sigma/dp_{4\perp}$, the interference of the amplitudes and the contribution of virtual bremsstrahlung lead to corrections of the same size as the effect of the Coulomb corrections to scattering; they are of the same order as in the case of $d\sigma/dp_{2\perp}$.
This means that there is a strong suppression of the effect of the Coulomb corrections to scattering in the cross section $d\sigma/dp_{4\perp}$. The relative importance of the various effects for the total cross section is the same as in the case of the cross section $d\sigma/dp_{4\perp}$.

\section*{Acknowledgement}
This work has been supported by the Russian Science Foundation (Project No. 14-50-00080). It has also been supported in part by RFBR (Grant No. 16-02-00103).

\section*{Appendix}\label{app}
Here we present the explicit expression for the amplitude $T$ derived in Ref.~\cite{KM2016}, with one modification. Namely, since we are going to investigate the importance of the interaction with the atomic field of the electrons with the momenta $\bm p_1$ and $\bm p_2$, we introduce the parameter $\lambda$, which equals unity if this interaction is taken into account and zero if it is neglected. We write the amplitude $T$ in the form
$$T=T^{(0)}+T^{(1)}\,,\quad T^{(0)}=T^{(0)}_\parallel+T^{(0)}_\perp\,,\quad T^{(1)}=T^{(1)}_\parallel+T^{(1)}_\perp\,,$$
where the helicity amplitudes $T^{(0)}_{\perp\parallel}$ read
\begin{align}\label{T0}
&T_\perp^{(0)}=\frac{8\pi A(\bm\Delta_0)}{\omega(m^2+\zeta^2)} \Big\{\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}
\Big[\frac{\varepsilon_3}{\omega^2}(\bm s_{\mu_3}^*\cdot \bm X)(\bm s_{\mu_3}\cdot\bm\zeta)(\varepsilon_1\delta_{\mu_1\mu_3}+\varepsilon_2\delta_{\mu_1\mu_4})\nonumber\\
&-\frac{\varepsilon_4}{\omega^2}(\bm s_{\mu_4}^*\cdot \bm X)(\bm s_{\mu_4}\cdot\bm\zeta) (\varepsilon_1\delta_{\mu_1\mu_4}+\varepsilon_2\delta_{\mu_1\mu_3})\Big]\nonumber\\
&-\frac{m\mu_1}{\sqrt{2}\varepsilon_1\varepsilon_2}R\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}
(\bm
s_{\\mu_1}\\cdot\\bm\\zeta)(-\\varepsilon_3\\delta_{\\mu_1\\mu_3}+\\varepsilon_4\\delta_{\\mu_1\\mu_4})\\nonumber\\\\\r\n&+\\frac{m\\mu_3}{\\sqrt{2}\\varepsilon_3\\varepsilon_4}\\delta_{\\mu_1\\mu_2}\\delta_{\\mu_3\\mu_4}(\\bm s_{\\mu_3}^*\\cdot\\bm X)(\\varepsilon_1\\delta_{\\mu_3\\mu_1}+\\varepsilon_2\\delta_{\\mu_3\\bar\\mu_1})\r\n+\\frac{m^2\\omega^2}{2\\varepsilon_1\\varepsilon_2\\varepsilon_3\\varepsilon_4}R\\delta_{\\mu_1\\bar\\mu_2}\\delta_{\\mu_3\\mu_4}\\delta_{\\mu_1\\mu_3}\\Big\\}\\,,\\nonumber\\\\\r\n&T_\\parallel^{(0)}=-\\frac{8\\pi }{\\omega^2}A(\\bm\\Delta_0)R\\delta_{\\mu_1\\mu_2}\\delta_{\\mu_3\\bar\\mu_4}\\,.\r\n\\end{align}\r\nHere $\\mu_i=\\pm 1$ corresponds to the helicity of the particle with the momentum $\\bm p_i$, $\\bar\\mu_i=-\\mu_i$, and\r\n\\begin{align}\\label{T0not}\r\n&A(\\bm\\Delta)=-\\frac{i\\lambda}{\\Delta_{\\perp}^2}\\int d\\bm r\\,\\exp[-i\\bm\\Delta\\cdot\\bm r-i\\chi(\\rho)]\\bm\\Delta_{\\perp}\\cdot\\bm\\nabla_\\perp V(r)\\,,\\nonumber\\\\\r\n&\\chi(\\rho)=\\lambda\\int_{-\\infty}^\\infty dz\\,V(\\sqrt{z^2+\\rho^2})\\,,\\quad\\bm\\rho=\\bm r_\\perp\\,,\\quad\\bm\\zeta=\\frac{\\varepsilon_3\\varepsilon_4}{\\omega}\\bm\\theta_{34}\\,,\\nonumber\\\\\r\n&\\omega=\\varepsilon_3+\\varepsilon_4\\,, \\quad \\bm\\Delta_{0\\perp}=\\varepsilon_2\\bm\\theta_{21}+\\varepsilon_3\\bm\\theta_{31}+\\varepsilon_4\\bm\\theta_{41}\\,,\\nonumber\\\\\r\n&\\Delta_{0\\parallel}=-\\frac{1}{2}\\left[m^2\\omega\\left(\\frac{1}{\\varepsilon_1\\varepsilon_2}+\\frac{1}{\\varepsilon_3\\varepsilon_4}\\right)+\\frac{p_{2\\perp}^2}{\\varepsilon_2}+ \\frac{p_{3\\perp}^2}{\\varepsilon_3}+\\frac{p_{4\\perp}^2}{\\varepsilon_4}\\right]\\,,\\nonumber\\\\\r\n&R=\\frac{1}{d_1d_2}[\\Delta^2_{0\\perp} (\\varepsilon_1+\\varepsilon_2)+2\\varepsilon_1\\varepsilon_2(\\bm\\theta_{12}\\cdot\\bm\\Delta_{0\\perp})]\\,,\\nonumber\\\\\r\n&\\bm 
X=\\frac{1}{d_1}(\\varepsilon_3\\bm\\theta_{23}+\\varepsilon_4\\bm\\theta_{24})-\\frac{1}{d_2}(\\varepsilon_3\\bm\\theta_{13}+\\varepsilon_4\\bm\\theta_{14})\\,,\\nonumber\\\\\r\n&d_1=m^2\\omega\\varepsilon_1\\left(\\frac{1}{\\varepsilon_1\\varepsilon_2}+\\frac{1}{\\varepsilon_3\\varepsilon_4}\\right)+\\varepsilon_2\\varepsilon_3\\theta_{23}^2\r\n+\\varepsilon_2\\varepsilon_4\\theta_{24}^2+\\varepsilon_3\\varepsilon_4\\theta_{34}^2\\,,\\nonumber\\\\\r\n&d_2=m^2\\omega\\varepsilon_2\\left(\\frac{1}{\\varepsilon_1\\varepsilon_2}+\\frac{1}{\\varepsilon_3\\varepsilon_4}\\right)+\\varepsilon_2\\varepsilon_3\\theta_{31}^2\r\n+\\varepsilon_2\\varepsilon_4\\theta_{41}^2+(\\varepsilon_3\\bm\\theta_{31}+\\varepsilon_4\\bm\\theta_{41})^2\\,,\\nonumber\\\\\r\n&\\bm\\theta_i=\\bm p_{i\\perp}/p_i \\,,\\quad \\bm\\theta_{ij}=\\bm\\theta_{i}-\\bm\\theta_{j}\\,,\r\n\\end{align}\r\nwith $V(r)$ being the electron potential energy in the atomic field. In the amplitude $T^{(0)}$ the interaction of the produced $e^+e^-$ pair with the atomic field is neglected, so that $T^{(0)}$ depends on the atomic potential in the same way as the bremsstrahlung amplitude, see, e.g., Ref.~\\cite{LMSS2005}.\r\n\r\nThe amplitudes $T^{(1)}_{\\perp\\parallel}$ have the following form\r\n\\begin{align}\\label{T1C}\r\n&T_\\perp^{(1)}=\\frac{8i\\eta}{\\omega \\varepsilon_1}|\\Gamma(1-i\\eta)|^2 \\int\\frac{d\\bm\\Delta_\\perp\\, A(\\bm\\Delta_\\perp+\\bm p_{2\\perp})F_a(Q^2)}{Q^2 M^2\\,(m^2\\omega^2/\\varepsilon_1^2+\\Delta_\\perp^2)}\\left(\\frac{\\xi_2}{\\xi_1}\\right)^{i\\eta}\r\n{\\cal M}\\,, \\nonumber\\\\\r\n&{\\cal M}=-\\frac{\\delta_{\\mu_1\\mu_2}\\delta_{\\mu_3\\bar\\mu_4}}{\\omega} \\big[ \\varepsilon_1(\\varepsilon_3 \\delta_{\\mu_1\\mu_3}-\\varepsilon_4 \\delta_{\\mu_1\\mu_4})\r\n(\\bm s_{\\mu_1}^*\\cdot\\bm \\Delta_\\perp)(\\bm s_{\\mu_1}\\cdot\\bm I_1)\\,\\nonumber\\\\\r\n&+\\varepsilon_2(\\varepsilon_3 \\delta_{\\mu_1\\bar\\mu_3}-\\varepsilon_4 \\delta_{\\mu_1\\bar\\mu_4})(\\bm 
s_{\mu_1}\cdot\bm \Delta_\perp)(\bm s_{\mu_1}^*\cdot\bm I_1) \big]+\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}\frac{m\omega\mu_1}{\sqrt{2}\varepsilon_1 }(\varepsilon_3 \delta_{\mu_1\mu_3}-\varepsilon_4 \delta_{\mu_1\mu_4})(\bm s_{\mu_1}
\cdot\bm I_1)\nonumber\\
&+\delta_{\mu_1\mu_2}\delta_{\mu_3\mu_4}\frac{m\mu_3}{\sqrt{2}}(\varepsilon_1 \delta_{\mu_1\mu_3}+\varepsilon_2 \delta_{\mu_1\bar\mu_3})(\bm s_{\mu_3}^*\cdot\bm \Delta_\perp)I_0
-\frac{m^2\omega^2}{2\varepsilon_1}\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\mu_4}\delta_{\mu_1\mu_3}I_0\,,\nonumber\\
&T_\parallel^{(1)}=-\frac{8i\eta\varepsilon_3\varepsilon_4}{\omega^3}|\Gamma(1-i\eta)|^2 \int \frac{d\bm\Delta_\perp\, A(\bm\Delta_\perp+\bm p_{2\perp})F_a(Q^2)}{Q^2 M^2}\left(\frac{\xi_2}{\xi_1}\right)^{i\eta}\,I_0
\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}\,,
\end{align}
where $F_a(Q^2)$ is an atomic form factor, and the following notation is used:
\begin{align}\label{T1Cnot}
&M^2=m^2\Big(1+\frac{\varepsilon_3\varepsilon_4}{\varepsilon_1\varepsilon_2}\Big)
+\frac{\varepsilon_1\varepsilon_3\varepsilon_4}{\varepsilon_2\omega^2} \Delta_\perp^2\,,\quad
\bm Q_\perp=\bm \Delta_\perp-\bm p_{3\perp}-\bm p_{4\perp}\,, \nonumber\\
&Q^2= Q_\perp^2+\Delta_{0\parallel}^2\,,\quad
\bm q_1=\frac{\varepsilon_3}{\omega}\bm \Delta_\perp- \bm p_{3\perp}\,,\quad \bm q_2=
\frac{\varepsilon_4}{\omega}\bm \Delta_\perp- \bm p_{4\perp} \,,\nonumber\\
&I_0=(\xi_1-\xi_2)F(x)+(\xi_1+\xi_2-1)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\bm I_1=(\xi_1\bm q_1+\xi_2\bm q_2)F(x)+(\xi_1\bm q_1-\xi_2\bm q_2)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\xi_1=\frac{M^2}{M^2+q_1^2}\,,\quad \xi_2=\frac{M^2}{M^2+q_2^2}\,,\quad x=1-\frac{Q_\perp^2\xi_1\xi_2}{M^2}\,,\nonumber\\
&F(x)=F(i\eta,-i\eta,
1,x)\,,\quad F'(x)=\frac{\partial}{\partial x}F(x)\,,\quad \eta=Z\alpha\,.
\end{align}
Note that the parameter $\lambda$ is contained solely in the function $A(\bm\Delta)$, Eq.~\eqref{T0not}.

% End of arXiv:1705.06906 (2017-05-22). A separate document follows.

\section{Introduction}
\label{sec:introduction}

\subsection{Motivation: Road and Terrain Mapping}
\label{subsec:terrain}
There has been a steep rise of interest over the last decade, among researchers in academia and the commercial sector, in autonomous vehicles and self-driving cars. Although adaptive estimation has been studied for some time, applications such as terrain or road mapping continue to challenge researchers to further develop the underlying theory and algorithms in this field. These vehicles are required to sense the environment and navigate the surrounding terrain without any human intervention. The environmental sensing capability of such vehicles must enable them to navigate off-road conditions or to respond to other agents in urban settings. As a key ingredient to achieve these goals, it can be critical to have good {\em a priori} knowledge of the surrounding environment as well as of the position and orientation of the vehicle in the environment.
To collect the data for the construction of terrain maps, mobile vehicles equipped with multiple high-bandwidth, high-resolution imaging sensors are deployed. The mapping sensors retrieve the terrain data relative to the vehicle, and navigation sensors provide georeferencing relative to a fixed coordinate system. The geospatial data, which can include the digital terrain maps acquired from these mobile mapping systems, find applications in emergency response planning and road surface monitoring.
Further, to improve the ride and handling characteristics of an autonomous vehicle, it might be necessary for these digital terrain maps to have accuracy on a sub-centimeter scale.

One of the main areas for improvement in current state-of-the-art terrain modeling technologies is localization. Since localization relies heavily on the quality of the GPS/GNSS and IMU data, it is important to develop novel approaches that can fuse the data from multiple sensors to generate the best possible estimate of the environment. Contemporary data acquisition systems used to map the environment generate data sets that are scattered in time and space. These data sets must be either post-processed or processed online for the construction of three dimensional terrain maps.

Fig.~\ref{fig:vehicle1} and Fig.~\ref{fig:vehicle2} depict a map building vehicle and trailer developed by some of the authors at Virginia Tech. The system generates experimental observations in the form of data that is scattered in time and space. These data sets have extremely high dimensionality: roughly 180 million scattered data points are collected per minute of data acquisition, which corresponds to a data file of roughly $\mathcal{O}(1\,\mathrm{GB})$ in size. Current algorithms and software developed in-house post-process the scattered data to generate road and terrain maps. This offline batch computing problem can take many days of computing time to complete. It remains a challenging task to derive a theory and associated algorithms that would enable adaptive or online estimation of terrain maps from such high dimensional, scattered measurements.

This paper introduces a novel theory and associated algorithms that are amenable to observations that take the form of scattered data. The key attribute of the approach is that the unknown function representing the terrain is viewed as an element of a reproducing kernel Hilbert space (RKHS).
The RKHS is constructed in terms of a kernel function $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$, where $\Omega \subseteq \mathbb{R}^d$ is the domain over which the scattered measurements are made.
The kernel $k$ can often be used to define a collection of radial basis functions (RBFs) $k_x(\cdot):=k(x,\cdot)$, each of which is said to be centered at some point $x\in \Omega$. For example, these RBFs might be exponentials, wavelets, or thin plate splines \cite{wendland}.
By embedding the unknown function that represents the terrain in an RKHS, the new formulation yields a distributed parameter system (DPS); the unknown function, representing the terrain, is the infinite dimensional distributed parameter.
Although the study of infinite dimensional distributed parameter systems can be substantially more difficult than the study of ODEs, a key result is that stability and convergence of the approach can be established succinctly in many cases.
Much of the complexity \cite{bsdr1997,bdrr1998} associated with the construction of Gelfand triples or the analysis of infinitesimal generators and semigroups that define a DPS can be avoided for many examples of the systems in this paper.
The kernel $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$ that defines the RKHS provides a natural collection of bases for approximate estimates of the solution that are based directly on some subset of the scattered measurements $\{ x_i \}_{i=1}^n \subset \mathbb{R}^d$.
It is typical in applications to select the centers $\{x_i\}_{i=1}^n$ that locate the basis functions from some sub-sample of the locations at which the scattered data are measured. Thus, while we do not study the nuances of such methods, the formulation in this paper provides a natural framework for posing so-called ``basis adaptive methods'' such as in~\cite{dzcf2012} and the references therein.
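The scattered-data construction just described can be sketched numerically. The following toy example of ours (not the paper's algorithm; the terrain function, kernel width, and sample sizes are all our own choices) fits a synthetic "terrain" from scattered samples using a Gaussian RBF kernel, with the kernel centers taken as a subsample of the measurement locations, as in the text:

```python
# Illustrative sketch: RBF fit of a synthetic terrain from scattered samples.
# All names and parameter values here are hypothetical choices of ours.
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])  # synthetic "terrain"

X_obs = rng.uniform(0.0, 1.0, size=(200, 2))   # scattered measurement locations
y_obs = f(X_obs)
centers = X_obs[::4]                           # subsample of locations as RBF centers

K = gaussian_kernel(X_obs, centers)            # 200 x 50 design matrix
coef, *_ = np.linalg.lstsq(K, y_obs, rcond=None)  # least-squares coefficients

X_new = rng.uniform(0.0, 1.0, size=(50, 2))    # evaluate the estimate elsewhere
y_hat = gaussian_kernel(X_new, centers) @ coef
print(float(np.max(np.abs(y_hat - f(X_new)))))  # small for this smooth terrain
```

A regularized (ridge) solve is often preferred in practice, since Gaussian kernel matrices can be badly conditioned; `lstsq` with the default singular-value cutoff plays a similar stabilizing role here.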
\n\nWhile our formulation is motivated by this particular application, it is a general construction for framing and generalizing some conventional approaches for online adaptive estimation. This framework introduces sufficient conditions that guarantee convergence of estimates in spatial domain $\\Omega$ to the unknown function $f$. In contrast, nearly all conventional strategies consider stability and convergence in time alone for some fixed finite dimensional space of $\\mathbb{R}^d \\times \\mathbb{R}^n$, with $n$ the number of parameters used to represent the estimate. The remainder of this paper studies the existence and uniqueness of solutions, stability, and convergence of approximate solutions for the infinite dimensional adaptive estimation problem defined over an RKHS. The paper concludes with an example of an RKHS adaptive estimation problem for a simple model of map building from vehicles. The numerical example demonstrates the rate of convergence for finite dimensional models constructed from RBF bases that are centered at a subset of scattered observations. \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.75]{Picture1.png}\n\\caption{Vehicle Terrain Measurement System, Virginia Tech}\n\\label{fig:vehicle1}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.75]{Picture2.png}\n\\caption{Experimental Setup with LMI 3D GO-Locator Lasers}\n\\label{fig:vehicle2}\n\\end{figure} \n\n\\subsection{Related Research}\n\\label{sec:related_research}\nThe general theory derived in this paper has been motivated in part by the terrain mapping application discussed in Section \\ref{sec:introduction}, but also by recent research in a number of fields related to estimation of nonlinear functions. 
In this section we briefly review some of the recent research in probabilistic or Bayesian mapping methods, nonlinear approximation and learning theory, statistics, and nonlinear regression.

\subsubsection{Bayesian and Probabilistic Mapping}
Many popular techniques adopt a probabilistic approach to solving the localization and mapping problem in robotics. The algorithms used to solve this problem fundamentally rely on Bayesian estimation techniques such as particle filters, Kalman filters, and other variants of these methods \cite{Thrun2005Probabilistic, Whyte2006SLAM1, Whyte2006SLAM2}. The computational effort required to implement these algorithms can be substantial, since they involve constructing and updating maps while simultaneously tracking the relative locations of agents with respect to the environment. Over the last three decades significant progress has been made on various frontiers: high-end sensing capabilities, faster data processing hardware, and robust and efficient computational algorithms \cite{Dissanayake2011Review, Dissanayake2000Computational}. However, the usual Kalman filter based approaches implemented in these applications often must address the inconsistency problem in estimation that arises from uncertainties in state estimates \cite{Huang2007Convergence,Julier2001Counter}. Furthermore, it is well acknowledged in the community that these methods suffer from the major drawback of `{\em closing the loop}', which refers to the ability to adaptively update information when a location is revisited; such a capability demands huge memory to store the high-resolution, high-bandwidth data. Moreover, it is highly nontrivial to guarantee that the uncertainties in the estimates converge to their lower bound at suboptimal rates, since matching these rates and bounds significantly constrains the evolution of the states along infeasible trajectories.
While probabilistic methods, and in particular Bayesian estimation techniques, for the construction of terrain maps have flourished over the past few decades, relatively few approaches have appeared for establishing deterministic theoretical error bounds, in the spatial domain, on the unknown function representing the terrain.

\subsubsection{Approximation and Learning Theory}
Approximation theory has a long history, but the subtopics of most relevance to this paper include recent studies in multiresolution analysis (MRA), radial basis function (RBF) approximation, and learning theory. The study of MRA techniques became popular in the late 1980s and early 1990s, and it has flourished since that time. We use only a small part of the general theory of MRAs in this paper, and we urge the interested reader to consult one of the excellent treatises on this topic for a full account; References \cite{Meyer,mallat,daubechies, dl1993} are good examples of such detailed treatments. We briefly summarize the pertinent aspects of MRA here and in Section \ref{sec:MRA}. A multiresolution analysis defines a family of nested approximation spaces $\seq{H_j}_{j\in \mathbb{N}}\subseteq H$ of an abstract space $H$ in terms of a single function $\phi$, the scaling function. The approximation space $H_j$ is defined in terms of bases that are constructed from the dilates and translates $\seq{\phi_{j,k}}_{k\in \mathbb{Z}^d}$, with $\phi_{j,k}(x):=2^{jd/2}\phi(2^jx-k)$ for $x\in \mathbb{R}^d$, of this single function $\phi$. It is for this reason that these spaces are sometimes referred to as shift invariant spaces. While an MRA is ordinarily defined only in terms of the scaling functions, the theory provides a rich set of tools to derive bases $\seq{\psi_{j,k}}_{k\in \mathbb{Z}}$, or wavelets,
for the complement spaces $W_j:=H_{j+1}- H_{j}$.
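The dilation/translation structure of an MRA can be made concrete with the simplest scaling function. A toy sketch of ours (Haar scaling function, $d=1$; not an example from the paper, which uses B-splines) verifies numerically the two-scale refinement relation $\phi(x)=\phi(2x)+\phi(2x-1)$ that underlies the nesting of the approximation spaces:

```python
# Sketch: Haar scaling function, its dilates/translates, and the two-scale
# (refinement) relation that makes the MRA spaces nested. Our own toy example.
import numpy as np

def phi(x):
    # Haar scaling function: indicator function of [0, 1)
    return ((0.0 <= x) & (x < 1.0)).astype(float)

def phi_jk(x, j, k):
    # phi_{j,k}(x) = 2^{j/2} phi(2^j x - k), the d = 1 case of the text
    return 2.0 ** (j / 2.0) * phi(2.0 ** j * x - k)

x = np.linspace(-1.0, 2.0, 2001)
lhs = phi(x)
rhs = phi(2 * x) + phi(2 * x - 1)   # refinement: phi lies in the next-finer space
print(np.max(np.abs(lhs - rhs)))    # identically zero for the Haar function
```

For smoother scaling functions (e.g. B-splines) the refinement relation holds with more terms and non-unit coefficients, but the principle is the same.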
Our interest in multiresolution analysis arises because these methods can be used to develop multiscale kernels for RKHS, as summarized in \cite{opfer1,opfer2}. We only consider approximation spaces defined in terms of the scaling functions in this paper. Specifically, with a parameter $s \in \mathbb{R}^+$ measuring smoothness, we use $s$-regular MRAs to define admissible reproducing kernels that embody the online and adaptive estimation strategies in this paper.
When the MRA bases are smooth enough, the RKHS kernels derived from an MRA can be shown to be equivalent to a scale of Sobolev spaces having well documented approximation properties.
The B-spline bases in the numerical examples yield RKHS embeddings with good condition numbers. The details of the RKHS embedding strategy given in terms of wavelet bases associated with an MRA are treated in a forthcoming paper.
\subsubsection{Learning Theory and Nonlinear Regression}
The methodology defined in this paper for online adaptive estimation can be viewed as similar in philosophy to recent efforts that synthesize learning theory and approximation theory \cite{dkpt2006,kt2007,cdkp2001,t2008}. In these references, independent and identically distributed observations of some unknown function are collected and used to define an estimator of that unknown function. Sharp estimates of the error, guaranteed to hold in probability, are possible using tools familiar from learning theory and thresholding in approximation spaces. The approximation spaces are usually defined in terms of subspaces of an MRA. However, there are a few key differences between these efforts in nonlinear regression and learning theory and this paper. The learning theory approaches to estimation of the unknown function depend on observations of the function itself.
In contrast, the adaptive online estimation framework here assumes that observations are made of the plant states, not directly of the unknown function itself. The learning theory methods also assume a discrete measurement process, instead of the continuous measurement process that characterizes online adaptive estimation. On the other hand, the methods based on learning theory derive sharp function space rates of convergence of the estimates of the unknown function. Such estimates are not available in conventional online adaptive estimation methods. Typically, convergence in adaptive estimation strategies is guaranteed in time in a fixed finite dimensional space. One of the significant contributions of this paper is to construct, for online adaptive estimation, sharp convergence rates of the estimates of the unknown function in function spaces, similar to the rates derived in learning theory.

\subsubsection{Online Adaptive Estimation and Control}

Since the approach in this paper generalizes a standard strategy in online adaptive estimation and control theory, we review this class of methods in some detail. This summary will be crucial in understanding the nuances of the proposed technique and in contrasting the sharp estimates of error available in the new strategy with those of the conventional approach.
Many popular textbooks study online or adaptive estimation within the context of adaptive control theory for systems governed by ordinary differential equations \cite{sb2012,IaSu,PoFar}. The theory has been extended in several directions, each with its subtle assumptions and associated analyses.
Adaptive estimation and control theory has been refined for decades, and significant progress has been made in deriving convergent estimation and stable control strategies that are robust with respect to some classes of uncertainty.
The efforts in \cite{bsdr1997,bdrr1998} are relevant to this paper; there the authors generalize some of the adaptive estimation and model reference adaptive control (MRAC) strategies for ODEs so that they apply to deterministic infinite dimensional evolution systems. In addition, \cite{dmp1994,dp1988,dpg1991,p1992} also investigate adaptive control and estimation problems under various assumptions for classes of stochastic and infinite dimensional systems.
Recent developments in $\mathcal{L}^1$ control theory as presented in \cite{HC}, for example, utilize adaptive estimation and control strategies in obtaining stability and convergence for systems generated by collections of nonlinear ODEs.

To motivate this paper, we consider a model problem in which the plant dynamics are generated by the nonlinear ordinary differential equation
\begin{align}
\dot{x}(t)&= A x(t) + Bf(x(t)), \quad \quad x(0)=x_0
\label{eq:simple_plant}
\end{align}
with state $x(t)\in \mathbb{R}^d$, the known Hurwitz system matrix $ A \in \mathbb{R}^{d\times d}$, the known control influence vector $B\in \mathbb{R}^d$, and the unknown function $f:\mathbb{R}^d \rightarrow \mathbb{R}$.
Although this model problem is an exceedingly simple prototypical example studied in adaptive estimation and control of ODEs \cite{sb2012,IaSu,PoFar}, it has proven to be an effective case study in motivating alternative formulations such as in \cite{HC} and will suffice to motivate the current approach.
Of course, much more general plants are treated in standard methods \cite{sb2012,IaSu,PoFar,naranna} and can be attacked using the strategy that follows.
This structurally simple problem is chosen so as to clearly illustrate the essential constructions of the RKHS embedding method while omitting the nuances associated with more general plants. A typical adaptive estimation problem can often be formulated in terms of an estimator equation and a learning law. One of the simplest estimators for this model problem takes the form
\begin{align}
\dot{\hat{x}}(t)&= A \hat{x}(t) + B\hat{f}(t,x(t)),
\quad \quad
\hat{x}(0)=x_0
\label{eq:sim_estimator}
\end{align}
where $\hat{x}(t)$ is an estimate of the state $x(t)$ and $\hat{f}(t,x(t))$ is a time-varying estimate of the unknown function $f$ that depends on measurement of the state $x(t)$ of the plant at time $t$. When the state error $\tilde{x}:=x-\hat{x}$ and function estimate error $\tilde{f}:=f-\hat{f}$ are defined, the state error equation is simply
\begin{align}
\dot{\tilde{x}}(t)&= A \tilde{x}(t) + B\tilde{f}(t,x(t)), \quad \quad
\tilde{x}(0)=\tilde{x}_0.
\label{eq:sim_error}
\end{align}
The goal of adaptive or online estimation is to determine a learning law that governs the evolution of the function estimate $\hat{f}$ and guarantees that the state estimate $\hat{x}$ converges to the true state $x$,
$
\tilde{x}(t)= x(t)-\hat{x}(t) \to
0 \text{ as } t\to \infty
$.
Perhaps additionally, it is hoped that the function estimates $\hat{f}$ converge to the unknown function $f$,
$
\tilde{f}(t)= f -\hat{f}(t) \to
0 \text{ as } t \to \infty.
$
The choice of the learning law for the update of the adaptive estimate $\hat{f}$ depends intrinsically on what specific information is available about the unknown function $f$.
It is most often the case for ODEs that the estimate $\hat{f}$ depends on a finite set of unknown parameters $\hat{\alpha}_1,\ldots,\hat{\alpha}_n$. The learning law is then expressed as an evolution law for the parameters $\hat{\alpha}_i$, $i=1,\ldots,n$.
The discussion that follows emphasizes that this is a very specific underlying assumption regarding the information available about the unknown function $f$. Much more general prior assumptions are possible.

\subsubsection{Classes of Uncertainty in Adaptive Estimation}
The adaptive estimation task seeks to construct a learning law based on the knowledge that is available regarding the function $f$.
Different methods for solving this problem have been developed depending on the type of information available about the unknown function $f$.
The uncertainty about $f$ is often described as forming a continuum between structured and unstructured uncertainty.
In the most general case, we might know only that $f$ lies in some compact set $\mathcal{C}$ of a particular Hilbert space of functions $H$ over a subset $\Omega \subseteq \mathbb{R}^d$.
This case, which in some sense reflects the least information regarding the unknown function, can be expressed as the condition that
$
f \in \mathcal{C} \subset H,
$
for some compact set of functions $\mathcal{C}$ in a Hilbert space of functions $H$.
In approximation theory, learning theory, or non-parametric estimation problems this information is sometimes referred to as the {\em prior}, and the choice of $H$ is commonly known as the hypothesis space. The selection of the hypothesis space $H$ and set $\mathcal{C}$ often reflects the approximation, smoothness, or compactness properties of the unknown function \cite{dkpt2006}.
This example may in some sense utilize only limited or minimal information regarding the unknown function $f$, and we may refer to the uncertainty as unstructured. Numerous variants of conventional adaptive estimation admit additional knowledge about the unknown function.
In most conventional cases the unknown function $f$ is assumed to be given in terms of some fixed set of parameters.
This situation is similar in philosophy to problems of parametric estimation, which restrict approximants to classes of functions that admit representation in terms of a specific set of parameters.
Suppose the basis $\left \{ \phi_k\right \}_{k=1,\ldots, n}$ is known for a particular finite dimensional subspace $H_n \subseteq H$ in which the function lies, and further that the uncertainty is expressed as the condition that there is a unique set of unknown coefficients $\left \{\alpha_i^*\right\}_{i=1,\ldots,n} $ such that $f:=f^*=\sum_{i=1,\ldots,n} \alpha_i^* \phi_i \in H_n$. Consequently, conventional approaches may restrict the adaptive estimation technique to construct an estimate with the knowledge that $f$ lies in the set
\begin{align}
\label{eq:e2}
f \in \biggl \{ g \in H_n \subseteq H \biggl |
&g = \sum_{i=1,\ldots,n} \alpha_i \phi_i
\text{ with } \\
\notag &\alpha_i \in [a_i,b_i]
\subset \mathbb{R} \text{ for } i=1,\ldots,n
\biggr \}.
\end{align}
\noindent This is an example in which the uncertainty in the estimation problem may be said to be structured. The unknown function is parameterized by the collection of coefficients $\{\alpha_i^*\}_{i=1,\ldots,n}$.
In this case the compact set $\mathcal{C}$ is a subset of $H_n$. As we discuss in Sections~\ref{subsec:Lit}, \ref{sec:RKHS}, and \ref{sec:existence}, the RKHS embedding approach can be characterized by the fact that the uncertainty is more general and even unstructured, in contrast to conventional methods.

\subsubsection{Adaptive Estimation in $\mathbb{R}^d \times \mathbb{R}^n$}
\label{subsec:adapt1}
The development of adaptive estimation strategies when the uncertainty takes the form in Equation~\ref{eq:e2} represents, in some sense, an iconic approach in the adaptive estimation and control community.
Entire volumes \cite{sb2012,IaSu,PoFar,NarPar199D} contain numerous variants of strategies that can be applied to solve adaptive estimation problems in which the uncertainty takes the form in Equation~\ref{eq:e2}.
One canonical approach to such an adaptive estimation problem is governed by three coupled equations: the plant dynamics in Equation~\ref{eq:f}, the estimator Equation~\ref{eq:a2}, and the learning rule in Equation~\ref{eq:a3}.
We organize the basis functions as $\phi:=[\phi_1,\dots,\phi_n]^T$ and the parameters as $\alpha^{*^T}=[\alpha^*_1,\ldots,\alpha^*_n]$,
$\hat{\alpha}^T=[\hat{\alpha}_1,\ldots,\hat{\alpha}_n]$. A common gradient based learning law yields the governing equations that incorporate the plant dynamics, the estimator equation, and the learning rule:
\begin{align}
\label{eq:f}
\dot{x}(t) &= Ax(t) + B \alpha^{*^T} \phi(x(t)),\\
\label{eq:a2}
\dot{\hat{x}}(t) &
=A \hat{x}(t) + B \hat{\alpha}^T(t) \phi(x(t)), \\
\label{eq:a3}
\dot{\hat{\alpha}}(t) &= \Gamma^{-1}\phi(x(t)) B^T P(x-\hat{x}),
\end{align}
where $\Gamma\in \mathbb{R}^{n\times n}$ is symmetric and positive definite. The symmetric positive definite matrix $P\in\mathbb{R}^{d\times d}$ is the unique solution of Lyapunov's equation $A^T P + PA = -Q$ for some selected symmetric positive definite $Q \in \mathbb{R}^{d\times d}$.
\noindent Usually the above equations are summarized in terms of the two error equations
\begin{align}
\label{eq:a4}
\dot{\tilde{x}}(t) &= A \tilde{x}(t) + B \phi^{T}(x(t))\tilde{\alpha}(t),\\
\label{eq:a5}
\dot{\tilde{\alpha}}(t) &= -\Gamma^{-1} \phi(x(t)) B^T P\tilde{x}(t),
\end{align}
with $\tilde{\alpha}:=\alpha^*-\hat{\alpha}$ and $\tilde{x}:=x-\hat{x}$.
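The coupled plant, estimator, and learning law above can be integrated numerically. The sketch below is a minimal illustration; the choices of $A$, $B$, $Q$, the basis functions, and the ``true'' parameters $\alpha^*$ are our own and not taken from the paper's numerical examples. It applies the explicit Euler method with $\Gamma = I$ and checks that the Lyapunov function $V=\frac{1}{2}\tilde{x}^TP\tilde{x}+\frac{1}{2}\tilde{\alpha}^T\Gamma\tilde{\alpha}$ decreases while the state error decays:

```python
import numpy as np

A = np.diag([-1.0, -2.0])            # known Hurwitz system matrix (illustrative)
B = np.array([1.0, 0.0])             # known control influence vector (illustrative)
Q = np.eye(2)
P = np.diag([0.5, 0.25])             # solves A^T P + P A = -Q for this diagonal A
a_star = np.array([0.5, -0.3])       # "unknown" true parameters (illustrative)

def basis(x):                        # bounded basis functions (illustrative)
    return np.array([np.sin(x[0]), np.cos(x[1])])

def lyap(x, xh, ah):
    e, at = x - xh, a_star - ah
    return 0.5 * e @ P @ e + 0.5 * at @ at   # Gamma = I

dt, steps = 1e-3, 30_000
x, xh, ah = np.array([1.0, -1.0]), np.zeros(2), np.zeros(2)
V0 = lyap(x, xh, ah)
for _ in range(steps):
    ph, e = basis(x), x - xh
    x, xh, ah = (x + dt * (A @ x + B * (a_star @ ph)),     # plant
                 xh + dt * (A @ xh + B * (ah @ ph)),       # estimator
                 ah + dt * ph * (B @ (P @ e)))             # gradient learning law
VT = lyap(x, xh, ah)
assert VT < V0                            # Lyapunov function has decreased
assert np.linalg.norm(x - xh) < 1e-2      # state error has decayed
```

Note that $V$ decreases and $\tilde{x}\to 0$ even though, absent persistency of excitation, the parameter error $\tilde{\alpha}$ need not converge to zero.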
Equations~\ref{eq:a4} and \ref{eq:a5} can also be written as
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{\alpha}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B \phi^T (x(t))\\
-\Gamma^{-1} \phi(x(t)) B ^T P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{\alpha}(t)
\end{Bmatrix}.
\label{eq:error_conv}
\end{align}
This equation defines an evolution on $\mathbb{R}^d \times \mathbb{R}^n$
and has been studied in great detail in \cite{naranna,narkud,mornar}.
Standard texts such as \cite{sb2012,IaSu,PoFar,NarPar199D} outline numerous other variants for the online adaptive estimation problem using projection, least squares methods, and other popular approaches.


 \subsection{Overview of Our Results}
 \label{subsec:Lit}
 \subsubsection{Adaptive Estimation in $\mathbb{R}^d \times H$}
 \label{subsec:adapt2}
 In this paper, we study the method of RKHS embedding, which interprets the unknown function $f$ as an element of the RKHS $H$, without any {\em a priori} selection of a particular finite dimensional subspace used for estimation of the unknown function.
The counterparts to Equations~\ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3} are the plant, estimator, and learning laws
\begin{align}
\dot{x}(t) &= Ax(t) + BE_{x(t)}f,\\
\dot{\hat{x}}(t) &= A\hat{x}(t) + BE_{x(t)}\hat{f}(t), \label{eq:rkhs_plant}\\
\dot{\hat{f}}(t) &= \Gamma^{-1}(BE_{x(t)})^*P(x(t) - \hat{x}(t)),
\end{align}
where as before $x,\hat{x}\in \mathbb{R}^d$, but $f$ and $\hat{f}(t)\in H$, $E_{\xi}: H \to \mathbb{R} $ is the evaluation functional given by $E_{\xi}: f \mapsto f(\xi)$ for all $\xi\in \mathbb{R}^d$ and $f \in H$, and $\Gamma\in \mathcal{L}(H,H)$ is a self adjoint, positive definite linear operator. The error equation analogous to Equation~\ref{eq:error_conv} is then given by
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{f}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B E_{x(t)}\\
-\Gamma^{-1}(B E_{x(t)})^*P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\label{eq:eom_rkhs}
\end{align}
which defines an evolution on $\mathbb{R}^d \times H$, instead of on $\mathbb{R}^d \times \mathbb{R}^n$.

\subsubsection{Existence, Stability, and Convergence Rates}
We briefly summarize and compare the conclusions that can be reached for the conventional and RKHS embedding approaches. Let $(\hat{x}, \hat{f})$ be estimates of $(x,f)$ that evolve according to the state, estimator, and learning laws of RKHS embedding. Define the state and distributed parameter errors as $\tilde{x}:=x-\hat{x}$ and $\tilde{f}:=f-\hat{f}$, respectively. Under the assumptions outlined in Theorems \ref{th:unique}, \ref{th:stability}, and \ref{th:PE}, for each $T>0$ there is a unique mild solution for the error $(\tilde{x},\tilde{f})\in C([0,T];\mathbb{R}^d\times H)$ to the DPS described by Equation~\ref{eq:eom_rkhs}.
Moreover, the error in the state estimates $\tilde{x}(t)$ converges to zero,
$\lim_{t \rightarrow \infty} \| \tilde{x}(t)\|=0$. If all the evolutions with initial conditions in an open ball containing the origin exist in $C([0,\infty);\mathbb{R}^d\times H)$, the equilibrium at the origin $(\tilde{x},\tilde{f})=(0,0)$ is stable. The results so far are therefore entirely analogous to those of conventional estimation methods, but are cast in the infinite dimensional RKHS $H$. See the standard texts \cite{sb2012,IaSu,PoFar,NarPar199D} for proofs of existence and convergence for the conventional methods. It must be emphasized again that the conventional results are stated for evolutions in $\mathbb{R}^d\times\mathbb{R}^n$, while the RKHS results hold for evolutions in $\mathbb{R}^d\times H$. Considerably more can be said about the convergence of finite dimensional approximations. For the RKHS embedding approach, the finite dimensional approximations $(\hat{x}_j,\hat{f}_j)$ of the infinite dimensional estimates $(\hat{x},\hat{f})$ on a grid that has resolution level $j$ are governed by Equations \ref{eq:approx_on_est1} and \ref{eq:approx_on_est2}. The finite dimensional estimates $(\hat{x}_j,\hat{f}_j)$ converge to the infinite dimensional estimates $(\hat{x},\hat{f})$ at a rate that depends on $\|I-\Gamma\Pi_j^*\Gamma_j^{-1} \Pi_j\|$ and $\|I - \Pi_j\|$, where $\Pi_j : H \to H_j$ is the $H$-orthogonal projection.

The remainder of this paper studies the existence and uniqueness of solutions, stability, and convergence of approximate solutions for infinite dimensional, online or adaptive estimation problems. The analysis is based on the study of a distributed parameter system (DPS) whose state space contains the RKHS $H$. The paper concludes with an example of an RKHS adaptive estimation problem for a simple model of map building from vehicles.
The numerical example demonstrates the rate of convergence for finite dimensional models constructed from radial basis function (RBF) bases that are centered at a subset of scattered observations.
The discussion focuses on a comparison and contrast of the analysis for the ODE system and the distributed parameter system.
Prior to these discussions, however, we present a brief review of the fundamental properties of RKHSs in the next section.
 \section{Reproducing Kernel Hilbert Space}
 \label{sec:RKHS}
Estimation techniques for distributed parameter systems have been previously studied in \cite{bk1989}, and further developed to incorporate adaptive estimation of parameters in certain infinite dimensional systems in \cite{bsdr1997} and the references therein. These works also presented the necessary conditions required to achieve parameter convergence during online estimation. But both approaches rely on delicate semigroup and evolution equation analysis, or Gelfand triples. The approach herein is much simpler and amenable to a wide class of applications; it appears to be a simpler, more practical approach to generalizing conventional methods. This paper considers estimation problems that are cast in terms of an unknown function $f:\Omega \subseteq \mathbb{R}^d \to \mathbb{R}$, and our approximations will assume that this function is an element of a reproducing kernel Hilbert space. One way to define a reproducing kernel Hilbert space relies on demonstrating the boundedness of evaluation functionals, but we briefly summarize a constructive approach that is helpful in applications and in understanding computations such as those in our numerical examples.

In this paper $\mathbb{R}$ denotes the real numbers, $\mathbb{N}$ the positive integers, $\mathbb{N}_0$ the non-negative integers, and $\mathbb{Z}$ the integers. We follow the convention that $a \gtrsim b$ means that there is a constant $c$, independent of $a$ or $b$, such that $b \leq ca$.
When $a\\gtrsim b $ and $b\\gtrsim a$, we write $a \\approx b $. Several function spaces are used in this paper. The $p$-integrable Lebesgue spaces are denoted $L^p(\\Omega)$ for $1\\leq p \\leq \\infty$, and $C^s (\\Omega)$ is the space of continuous functions on $\\Omega$ all of whose derivatives less than or equal to $s$ are continuous. The space $C_b^s (\\Omega)$ is the normed vector subspace of $C^s (\\Omega)$ and consists of all $f\\in C^s (\\Omega)$ whose derivatives of order less than or equal to $s$ are bounded. The space $C^{s,\\lambda} (\\Omega)\\subseteq C_b^s (\\Omega) \\subseteq C^s (\\Omega)$ is the collection of functions with derivatives $\\frac{\\partial^{|\\alpha|}f}{\\partial x^{|\\alpha|}}$ that are $\\lambda$-Holder continuous, \n\\begin{align*}\n\\|f(x)-f(y)\\| \\leq C\\|x - y\\|^{\\lambda}\n\\end{align*}\nThe Sobolev space of functions that have weak derivatives of the order less than equal to $r$ that lie in $L^p(\\Omega)$ is denoted $H^r_p(\\Omega)$.\n\nA reproducing kernel Hilbert space is constructed in terms of a symmetric, continuous, and positive definite function $k:\\Omega \\times \\Omega \\to \\mathbb{R}$, where positive definiteness requires that for any finite collection of points \n$\\{x_i\\}_{i=1}^n \\subseteq \\Omega $ \n$$\\sum_{i,j=1}^{n}k(x_i , x_j ) \\alpha_i \\alpha_j \\gtrsim \\|\\alpha\\|^{2}_{\\mathbb{R}^n}\n$$\nfor all $\\alpha = \\{\\alpha_1,\\hdots, \\alpha_n \\}^T$.. For each $x\\in \\Omega$, we denote the function $k_x := k_x (\\cdot) = k(x,\\cdot)$ and refer to $k_x$ as the kernel function centered at $x$. In many typical examples ~\\cite{wendland}, $k_x$ can be interpreted literally as a radial basis function centered at $x\\in \\Omega$. For any kernel functions $k_x$ and $k_y$ centered at $x,y \\in \\Omega$, we define the inner product $(k_x,k_y):= k(x,y)$. 
The RKHS $H$ is then defined as the completion of the set of all finite linear combinations of the kernel functions $\{k_x \mid x \in \Omega\}$.
It is well known that this construction guarantees the boundedness of the evaluation functionals $E_x : H \to \mathbb{R}$. In other words, for each $x\in \Omega$ there is a constant $c_x$ such that
$$ |E_x f | = |f(x)| \leq c_x \|f\|_H$$
for all $f\in H$. The reproducing property of the RKHS $H$ plays a crucial role in the analysis here, and it states that
$$E_xf = f(x) = (k_x , f)_H$$
for $x \in \Omega$ and $f\in H$. We will also require the adjoint $E_x^* :\mathbb{R}\to H $ in this paper, which can be calculated directly by noting that
$$ (E_x f,\alpha )_\mathbb{R} = (f,\alpha k_x)_H = (f,E_x^* \alpha)_H $$
for $\alpha \in \mathbb{R}$, $x\in \Omega$, and $f\in H$. Hence, $E_x^* : \alpha \mapsto \alpha k_x \in H$.

Finally, we will be interested in the specific case in which it is possible to show that the RKHS $H$ is a subset of $C(\Omega)$ and, furthermore, that the associated injection $i:H \rightarrow C(\Omega)$ is uniformly bounded.
This uniform embedding is possible, for example, provided that the kernel is bounded by a constant $\tilde{C}^2$,
$
\sup_{x\in \Omega} k(x,x) \leq \tilde{C}^2.
$
This fact follows by first noting that, by the reproducing property of the RKHS,
we can write
\begin{equation}
|f(x)|=|E_x f |= |(k_x, f)_H | \leq \|k_x \|_H \|f\|_H.
\end{equation}
From the definition of the inner product on $H$, we have
$
\|k_x \|_H^2=|(k_x, k_x)_H |=|k(x,x)| \leq \tilde{C}^2.
$
It follows that $\|if\|_{C(\Omega)}:= \|f\|_{C(\Omega)} \leq {\tilde{C}} \|f\|_H$ and thereby that $\|i\|\leq {\tilde{C}}$.
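These constructions are easy to verify numerically for finite sums of kernel functions. The following Python sketch (our own illustration, assuming a Gaussian kernel and arbitrary centers and coefficients) builds the Gram matrix, computes the RKHS norm $\|f\|_H^2=\alpha^TK\alpha$ of $f=\sum_i\alpha_ik_{x_i}$, and checks both the uniform bound $|f(x)|\leq\tilde{C}\|f\|_H$ (here $\tilde{C}=1$) and the adjoint identity $(E_xf,a)_{\mathbb{R}}=(f,ak_x)_H$:

```python
import numpy as np

def k(x, y, sigma=0.5):
    """Gaussian kernel; sup_x k(x, x) = 1, so the embedding constant C~ = 1."""
    return np.exp(-np.abs(x - y) ** 2 / sigma ** 2)

centers = np.array([-0.5, 0.0, 0.8])              # kernel centers x_i (illustrative)
alpha = np.array([1.0, -2.0, 0.5])                # coefficients (illustrative)
K = k(centers[:, None], centers[None, :])         # Gram matrix K_ij = k(x_i, x_j)
assert np.all(np.linalg.eigvalsh(K) > 0)          # positive definite for distinct centers

def f(x):
    """f = sum_i alpha_i k_{x_i}, a finite sum from the dense subspace of H."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return k(x[:, None], centers[None, :]) @ alpha

norm_H = np.sqrt(alpha @ K @ alpha)               # ||f||_H for the finite sum
xs = np.linspace(-3.0, 3.0, 601)
assert np.max(np.abs(f(xs))) <= norm_H + 1e-12    # |f(x)| <= C~ ||f||_H with C~ = 1

# Adjoint identity: (E_x f, a)_R = (f, a k_x)_H, with E_x^* a = a k_x.
x0, a = 0.3, 2.0
lhs = f(x0).item() * a
rhs = a * (alpha @ k(centers, x0))                # uses (k_{x_i}, k_{x0})_H = k(x_i, x0)
assert np.isclose(lhs, rhs)
```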
We next give two examples that will be studied in this paper.

\subsection*{Example: The Exponential Kernel}
A popular example of an RKHS, one that will be used in the numerical examples, is constructed from the family of exponentials $\kappa(x,y):=e^{-\| x-y\|^2/\sigma^2}$ where $\sigma>0$.
Suppose that $\tilde{C} = \sqrt{\sup_{x\in\Omega}\kappa(x,x)}<\infty$. Smale and Zhou in \cite{sz2007} argue that
$$
|f(x)|=|E_x(f)|=|(\kappa_x,f)_H|\leq
\|\kappa_x\|_H \|f\|_H
$$
for all $x\in \Omega$ and $f\in H$, and since
$\|\kappa_x\|^2=|\kappa(x,x)|\leq \tilde{C}^2$, it follows that the embedding $i:H \rightarrow L^\infty(\Omega)$ is bounded,
$$
\|f\|_{L^\infty(\Omega)}:=\|i(f)\|_{L^\infty(\Omega)}\leq \tilde{C} \|f\|_H.
$$
For the exponential kernel above, $\tilde{C}=1$.
Recall that $C^s(\Omega)$ denotes the space of functions on $\Omega$ all of whose partial derivatives of order less than or equal to $s$ are continuous. The space $C^s_b(\Omega)$ is endowed with the norm
$$
\|f\|_{C^s_b(\Omega)}:= \max_{|\alpha|\leq s}
\left \|
\frac{\partial^{|\alpha|}f}{\partial x^\alpha}
\right \|_{L^\infty(\Omega)},
$$
with the maximum taken over multi-indices $\alpha:=\left \{ \alpha_1, \ldots,\alpha_d \right \}\in \mathbb{N}_0^d$, $\partial x^{\alpha}:=\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}$, and $|\alpha|=\sum_{i=1,\ldots,d} \alpha_i$.
Observe that the continuous functions in $C^s(\Omega)$ need not be bounded even if $\Omega$ is a bounded open domain. The space $C^s_b(\Omega)$ is the subspace consisting of functions $f\in C^s(\Omega)$ for which all derivatives of order less than or equal to $s$ are bounded.
The space $C^{s,\lambda}(\Omega)$ is the subspace of functions $f$ in $C^{s}(\Omega)$
for which all of the partial derivatives $\frac{\partial^{|\alpha|} f}{\partial x^\alpha}$ with $|\alpha|\le s$ are
$\lambda$-H\"older continuous.
The norm of $C^{s,\\lambda}(\\Omega)$ for $0 < \\lambda \\leq 1$ is given by\n$$\n\\|f\\|_{C^{s,\\lambda}(\\Omega)} = \\|f\\|_{C^s(\\Omega)}+ \\max_{0 \\leq \\alpha \\leq s} \\sup_{\\substack{x,y\\in \\Omega \\\\x\\ne y}}\\frac{\\left| \\frac{\\partial^{|\\alpha|} f}{\\partial x^{|\\alpha|}}(x) -\\frac{\\partial^{|\\alpha|}f}{\\partial x^{|\\alpha|}}(y) \\right|}{|x-y|^\\lambda}\n$$\nAlso, reference \\cite{sz2007} notes that if $\\kappa(\\cdot,\\cdot)\\in C^{2s,\\lambda}_b(\\Omega \\times \\Omega)$ with $0<\\lambda<2$ and $\\Omega$ is a closed domain, then the inclusion $H\\rightarrow C^{s,\\lambda/2}_b(\\Omega)$ is well defined and continuous. That is the mapping $i:H \\rightarrow C^{s,\\lambda/2}_b$ defined via $f\\mapsto i(f):=f$ satisfies\n$$\n\\| f\\|_{C^{s,\\lambda/2}_b(\\Omega)}\\lesssim \\|f\\|_H.\n$$\nIn fact reference \\cite{sz2007} shows that \n$$\n\\|f \\|_{C^s_b(\\Omega)} \\leq 4^s \\|\\kappa\\|_{{C^{2s}_b}(\\Omega\\times \\Omega)}^{1/2} \\|f\\|_H.\n$$\nThe overall important conclusion to draw from the summary above is that there are many conditions that guarantee that the imbedding $H\\hookrightarrow C_b(\\Omega)$ is continuous. This condition will play a central role in devising simple conditions for existence of solutions of the RKHS embedding technique.\n\n\n\\subsection{Multiscale Kernels Induced by $s$-Regular Scaling Functions}\n\\label{sec:MRA}\nThe characterization of the norm of the Sobolev space $H^{r}_2:=H^{r}_2(\\mathbb{R}^d)$ has appeared in many monographs that discuss multiresolution analysis \\cite{Meyer,mallat,devore1998}. It is also possible to define the Sobolev space $H^{r}_2(\\mathbb{R}^d)$ as the Hilbert space constructed from a reproducing kernel $\\kappa(\\cdot,\\cdot):\\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ that is defined in terms of an $s$-regular scaling function $\\phi$ of an multi-resolution analysis (MRA) \\cite{Meyer,devore1998}. 
The scaling function $\\phi$ is $s$-regular provided that, for $\\frac{d}{2}d/2$ we have the embedding\n$$\nH^r_2 \\hookrightarrow C_b^{r-d/2} \\subset C^{r-d/2}\n$$\nwhere $C_b^r$ is the subspace of functions $f$ in $C^r$ all of whose derivatives up through order $r$ are bounded. In fact, by choosing the $s$-regular MRA with $s$ and $r$ large enough, we have the imbedding \n$H^r_2(\\Omega) \\hookrightarrow C(\\Omega)$ when $\\Omega \\subseteq \\mathbb{R}^d$ \\cite{af2003}.\n\nOne of the simplest examples that meet the conditions of this section includes the normalized B-splines of order $r>0$. We denote by $N^r$ the normalized B-spline of order $r$ with integer knots and define its translated dilates by $N^r_{j,k}:=2^{jd/2}N^r(2^{jd} x - k)$ for $k\\in \\mathbb{Z}^d$ and $j\\in \\mathbb{N}_0$. In this case the kernel is written in the form\n$$\n\\kappa(u,v):=\\sum_{j=0}^\\infty 2^{-2rj}\\sum_{k\\in \\mathbb{Z}^d}N^r_{j,k}(u)N^r_{j,k}(v).\n$$\nFigure \\ref{fig:nbsplines} depicts the translated dilates of the normalized B-splines of order $1$ and $2$ respectively.\n\\begin{center}\n\\begin{figure}[h!]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=.4\\textwidth]{nbsplines_N1}\n&\n\\includegraphics[width=.4\\textwidth]{nbsplines_N2}\\\\\n{ B-splines $N^1$}\n&\n{ B-splines $N^2$}\n\\end{tabular}\n\\caption{Translated Dilates of Normalized B-Splines}\n\\label{fig:nbsplines}\n\\end{figure}\n\\end{center}\n\\section{Existence,Uniqueness and Stability}\n\\label{sec:existence}\nIn the adaptive estimation problem that is cast in terms of a RKHS $H$, we seek a solution $X = (\\tilde{x},\\tilde{f}) \\in \\mathbb{R}^d \\times H \\equiv \\mathbb{X}$ that satisfies Equation \\ref{eq:eom_rkhs}. \nIn general $\\mathbb{X}$ is an infinite dimensional state space for this estimation problem, which can in principle substantially complicate the analysis in comparison to conventional ODE methods. 
We first establish that the adaptive estimation problem in Equation \ref{eq:eom_rkhs} is well-posed.
The result derived below is not the most general possible, but it is emphasized here because its conditions are simple and easily verifiable in many applications.
\begin{theorem}
\label{th:unique}
Suppose that $x \in C([0,T];\mathbb{R}^d)$ and that the embedding $i:H \hookrightarrow C(\Omega)$ is uniform in the sense that there is a constant $C>0$ such that for any $f \in H$,
\begin{equation}
\label{6}
\|f\|_{C(\Omega)}\equiv \|if\|_{C(\Omega)} \leq C\|f\|_H.
\end{equation}
For any $T>0$ there is a unique mild solution $(\tilde{x},\tilde{f}) \in C([0,T],\mathbb{X})$ to Equation \ref{eq:eom_rkhs}, and the map $X_0 \equiv (\tilde{x}_0,\tilde{f}_0) \mapsto (\tilde{x},\tilde{f}) $ is Lipschitz continuous from $\mathbb{X}$ to $C([0,T],\mathbb{X})$.
\end{theorem}
\begin{proof}
We can split the governing Equation \ref{eq:eom_rkhs} into the form
\begin{align}
\begin{split}
\begin{Bmatrix}
\dot{\tilde{x}}(t)\\
\dot{{\tilde{f}}}(t)
\end{Bmatrix}
=
&\begin{bmatrix}
A & 0\\
0 & A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix}+
\begin{bmatrix}
0 & B E_{x(t)}\\
-\Gamma^{-1} (B E_{x(t)})^* P & -A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\end{split}
\end{align}
and write it more concisely as
\begin{equation}
\dot{\tilde{X}} = \mathbb{A}\tilde{X}(t) + \mathbb{F}(t,\tilde{X}(t)),
\end{equation}
where the operator $A_0 \in \mathcal{L}(H,H)$ is arbitrary. It is immediately clear that $\mathbb{A}$ is the infinitesimal generator of a $C_0$ semigroup on $\mathbb{X}\equiv \mathbb{R}^d\times H$ since $\mathbb{A}$ is bounded on $\mathbb{X}$.
In addition, we see the following:
\begin{enumerate}
\item The function $\mathbb{F}: \mathbb{R}^+ \times \mathbb{X} \to \mathbb{X}$ is uniformly globally Lipschitz continuous: there is a constant $L>0$ such that
$$
\|\mathbb{F}(t,X)-\mathbb{F}(t,Y)\| \leq L\|X-Y\|
$$
for all $ X,Y \in \mathbb{X}$ and $t\in [0,T]$.
\item The map $t \mapsto \mathbb{F}(t,X)$ is continuous on $[0,T]$ for each fixed $X\in \mathbb{X}$.
\end{enumerate}
By Theorem 1.2, p.~184, in reference \cite{pazy}, there is a unique mild solution
$$\tilde{X} = \{\tilde{x},\tilde{f}\}^T \in C([0,T];\mathbb{X})\equiv C([0,T];\mathbb{R}^d\times H). $$
In fact, the map $\tilde{X}_0 \mapsto \tilde{X}$ is Lipschitz continuous from $\mathbb{X}$ to $C([0,T];\mathbb{X})$.
\end{proof}
The proof of stability of the equilibrium at the origin of the RKHS
Equation \ref{eq:eom_rkhs} closely resembles the Lyapunov analysis of Equation \ref{eq:error_conv}; only the extension to the infinite dimensional state space $\mathbb{X}$ is required.
It is useful to carry out this analysis in some detail to see how the adjoint $E_x^* :\mathbb{R}\to H $ of the evaluation functional $E_x : H \to \mathbb{R}$ plays a central and indispensable role in the study of the stability of evolution equations on the RKHS.
\begin{theorem}
\label{th:stability}
Suppose that the RKHS Equation \ref{eq:eom_rkhs} has a unique solution in $C([0,\infty);\mathbb{X})$ for every initial condition $X_0$ in some open ball $B_r (0) \subseteq \mathbb{X}$. Then the equilibrium at the origin is Lyapunov stable. Moreover, the state error $\tilde{x}(t) \rightarrow 0$ as $t \rightarrow \infty$.
\n\\end{theorem}\n\\begin{proof}\nDefine the Lyapunov function $V:\\mathbb{X} \\to \\mathbb{R}$ as \n$$ V \\begin{Bmatrix}\n\\tilde{x}\\\\\n\\tilde{f}\n\\end{Bmatrix}\n= \\frac{1}{2}\\tilde{x}^T P\\tilde{x} + \\frac{1}{2}(\\Gamma \\tilde{f},\\tilde{f})_H.\n$$\nThis function is norm continuous and positive definite on any neighborhood of the origin since $ V(X) \\geq \\|X\\|^2_{\\mathbb{X}}$ for all $X \\in \\mathbb{X}$. For any $X$, and in particular over the open set $B_r(0)$, the derivative of the Lyapunov function $V$ along trajectories of the system is given as \n\\begin{align*}\n\\dot{V} &= \\frac{1}{2}(\\dot{\\tilde{x}}^T P\\tilde{x}+\\tilde{x}^TP\\dot{\\tilde{x}})+(\\Gamma \\tilde{f},\\dot{\\tilde{f}})_H\\\\\n&= -\\frac{1}{2}\\tilde{x}^T Q\\tilde{x}+(\\tilde{f},E_x^*B^*P\\tilde{x}+\\Gamma\\dot{\\tilde{f}})_{H}= -\\frac{1}{2}\\tilde{x}^T Q\\tilde{x},\n\\end{align*}\nsince $(\\tilde{f},E_x^*B^*P\\tilde{x}+\\Gamma\\dot{\\tilde{f}})_{H}=0$.\nLet $\\epsilon$ be some constant such that $0 < \\epsilon < r$. Define $\\gamma (\\epsilon)$ and $\\Omega_\\gamma$ according to \n$$\\gamma(\\epsilon) = \\inf_{\\|X\\|_\\mathbb{X}=\\epsilon} V(X),$$\n$$\\Omega_\\gamma = \\{X \\in \\mathbb{X}|V(X)<\\gamma \\}.$$\nWe can picture these quantities as shown in Fig. \\ref{fig:lyapfun} and Fig. \\ref{fig:kernels}.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.35]{fig1Lyap_2}\n\\caption{Lyapunov function, $V(x)$}\n\\label{fig:lyapfun}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.55]{fig2Stability_2}\n\\caption{Stability of the equilibrium}\n\\label{fig:kernels}\n\\end{figure}\nBut $\\Omega_\\gamma=\\{X\\in \\mathbb{X}|V(X)<\\gamma\\}$ is an open set since it is the inverse image of the open set $(-\\infty,\\gamma) \\subset \\mathbb{R}$ under the continuous mapping $V:\\mathbb{X} \\to \\mathbb{R}$. The set $\\Omega_\\gamma$ therefore contains an open neighborhood of each of its elements. 
Let $\\delta>0$ be the radius of such an open ball containing the origin with $B_\\delta(0) \\subset \\Omega_\\gamma$. \nSince $\\overline{\\Omega}_\\gamma:=\\{X\\in \\mathbb{X}|V(X)\\leq \\gamma\\}$ is a level set of $V$ and $V$ is non-increasing, it is a positive invariant set. Given any initial condition $x_0 \\in B_\\delta(0) \\subseteq \\Omega_\\gamma$, we know that the trajectory $x(t)$ starting at $x_0$ satisfies\n$x(t) \\in \\overline{\\Omega}_\\gamma \\subseteq \\overline{B_\\epsilon(0)} \\subseteq B_r(0)$ for all $t\\in [0,\\infty)$. \nThe equilibrium at the origin is stable.\n\nThe convergence of the state estimation error $\\tilde{x}(t) \\rightarrow 0$ as $t\\rightarrow \\infty$ can be based on Barbalat's lemma by modifying the conventional arguments for ODE systems. Since $\\frac{d}{dt}(V(X(t))) = - \\frac{1}{2} \\tilde{x}^T(t) Q \\tilde{x}\\leq 0$, $V(X(t))$ is non-increasing and bounded below by zero. There is a constant $V_\\infty:=\\lim_{t \\rightarrow \\infty}V(X(t))$, and we have\n$$\nV(X_0)-V_\\infty = \\int_0^\\infty \\tilde{x}^T(\\tau)Q\\tilde{x} d\\tau \\gtrsim \\|\\tilde{x}\\|^2_{L^2((0,\\infty);\\mathbb{R}^d)}.\n$$\nSince $V(X(t)) \\leq V(X_0)$, we likewise have $\\|\\tilde{x}\\|_{L^\\infty(0,\\infty)}\\lesssim V(X_0)$ and $\\|\\tilde{f}\\|_{L^\\infty((0,\\infty);H)}\\lesssim V(X_0)$. The equation of motion enables a uniform bound on $\\dot{\\tilde{x}}$ since \n\\begin{align}\n&\\|\\dot{\\tilde{x}}(t)\\|_{\\mathbb{R}^d}\n\\leq \\|A\\| \\| \\tilde{x}(t)\\|_{\\mathbb{R}^d}\n+ \\|B\\| \\|E_{x(t)} \\tilde{f}(t)\\|_{\\mathbb{R}^d}, \\notag \\\\\n&\\leq \\|A\\| \\| \\tilde{x}(t)\\|_{\\mathbb{R}^d}\n+ \\tilde{C} \\|B\\| \\| \\tilde{f}(t) \\|_{H},\\\\\n& \\leq \\|A\\| \\|\\tilde{x}\\|_{L^\\infty((0,\\infty);\\mathbb{R}^d)} \n+ \\tilde{C} \\|B\\| \\| \\tilde{f} \\|_{L^\\infty((0,\\infty),H)}. 
\notag\n\end{align}\nSince $\tilde{x}\in L^\infty((0,\infty);\mathbb{R}^d) \cap L^2((0,\infty);\mathbb{R}^d)$ and $\dot{\tilde{x}} \in L^\infty((0,\infty);\mathbb{R}^d)$, we conclude by generalizations of Barbalat's lemma \cite{Farkas2016Variations} that $\tilde{x}(t) \rightarrow 0$ as $t \to \infty$.\n\end{proof}\n\nIt is evident that Theorem \ref{th:stability} yields results about stability and convergence over the RKHS of the state estimate error to zero that are analogous to typical results for conventional ODE systems. As expected, conclusions for the convergence of the function estimates $\hat{f}$ to $f$ are more difficult to generate, and they rely on {\em persistency of excitation} conditions that are suitably extended to the RKHS framework.\n\begin{mydef}\nWe say that the plant in the RKHS Equation~\ref{eq:rkhs_plant} is {\em strongly persistently exciting} if there exist constants $\Delta>0$, $\gamma>0$, and $T>0$ such that for all $f\in H$ with $\|f\|_H=1$ and all $t>T$, \n$$\n\int_{t}^{t+\Delta}\n\left(E^*_{x(\tau)}E_{x(\tau)}f,f\right)_H d\tau \geq \gamma.\n$$\n\end{mydef}\nAs in the consideration of ODE systems, persistency of excitation is sufficient to guarantee convergence of the function parameter estimates to the true function.\n\n\begin{theorem}\n\label{th:PE}\nSuppose that the plant in Equation \ref{eq:rkhs_plant} is strongly persistently exciting and that either (i) the function $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$, or (ii) the matrix $-A$ is coercive in the sense that $(-Av,v)\geq c\|v\|^2$ for all $v\in\mathbb{R}^d$, and $\Gamma =P=I_d$.
Then the parameter function error $\tilde{f}$ converges strongly to zero,\n$$\n\lim_{t\rightarrow \infty} \| f-\hat{f}(t) \|_H = 0.\n$$\n\end{theorem}\n\begin{proof}\nWe begin by assuming that $(i)$ holds.\nIn the proof of Theorem \ref{th:stability} it is shown that $V$ is bounded below and non-increasing, and therefore approaches a limit\n$$\n\lim_{t\rightarrow \infty} V(t)=V_\infty< \infty.\n$$\nSince $\tilde{x}(t) \rightarrow 0$ as $t\rightarrow \infty$, we conclude that\n$$\n\lim_{t\rightarrow \infty} \| \tilde{f}(t) \|^2_H \lesssim V_\infty.\n$$\nSuppose that $V_\infty \neq 0.$ Then there exists a positive, increasing sequence of times $\left\{ t_k\right \}_{k\in \mathbb{N}}$ with $\lim_{k\rightarrow \infty} t_k = \infty$ and some constant $\delta>0$ \nsuch that \n$$\n\| \tilde{f}(t_k)\|^2_H \ge \delta\n$$\nfor all $k\in\mathbb{N}$. \nSince the plant is strongly persistently exciting, we can write\n\begin{align*}\n\int^{t_k+\Delta}_{t_k} \left(E^{*}_{x(\tau)}E_{x(\tau)}\tilde{f}(t_k),\tilde{f}(t_k)\right)_Hd\tau \geq \gamma \| \tilde{f}{(t_k)}\|_{H}^{2} \geq \gamma \delta\n\end{align*}\nfor each $k\in \mathbb{N}$. By the reproducing property of the RKHS, we then see that \n\begin{align*}\n\gamma \delta \leq \gamma \| \tilde{f}(t_k) \|_H^2 &\leq \int_{t_k}^{t_k + \Delta} \left ( \kappa_{x(\tau)}, \tilde{f}(t_k) \right )_H^2 d\tau\\\n&\leq \|\tilde{f}(t_k)\|_H^2 \int_{t_k}^{t_k + \Delta} \|\kappa_{x(\tau)} \|_H^2 d\tau \\\n&= \| \tilde{f}(t_k) \|_H^2\n\int_{t_k}^{t_k+\Delta} \left (\kappa_{x(\tau)},\kappa_{x(\tau)}\right )_H d\tau \\\n& = \| \tilde{f}(t_k) \|_H^2 \n\int_{t_k}^{t_k+\Delta} \kappa(x(\tau),x(\tau)) d\tau.\n\end{align*}\nSince $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$ by assumption and $\|\tilde{f}(t_k)\|_H$ is bounded, taking the limit as $k\rightarrow \infty$ yields the contradiction $0<\gamma\delta \leq 0$.
We conclude therefore that $V_\infty=0$ and $\lim_{t\rightarrow \infty} \|\tilde{f}(t)\|_H = 0$. \n\nWe outline the proof when (ii) holds, which is based on slight modifications of arguments that appear in \cite{d1993,bsdr1997,dr1994,dr1994pe,bdrr1998,kr1994}, which treat a different class of infinite dimensional nonlinear systems whose state space is cast in terms of a Gelfand triple. \nPerhaps the simplest analysis for this case follows from \cite{bsdr1997}. Our hypothesis that $\Gamma=P=I_d$ reduces Equations \ref{eq:eom_rkhs} to the form of Equations 2.20 in \cite{bsdr1997}. The assumption in our theorem that $-A$ is coercive implies that the coercivity assumption (A4) in \cite{bsdr1997} holds. If we define $\mathbb{X}=\mathbb{Y}:=\mathbb{R}^d \times H$, then it is clear that the imbeddings $\mathbb{Y} \rightarrow \mathbb{X} \rightarrow \mathbb{Y}$ are continuous and dense, so that they define a Gelfand triple. Because of the trivial form of the Gelfand triple in this case, it is immediate that the G\aa rding inequality in Equation 2.17 in \cite{bsdr1997} holds. \nWe identify $BE_{x(t)}$ as the control influence operator $\mathcal{B}^*(\overline{u}(t))$ in \cite{bsdr1997}.\nUnder these conditions, Theorem~\ref{th:PE} follows from Theorem 3.4 in \cite{bsdr1997} as a special case.\n\end{proof}\n \section{Finite Dimensional Approximations}\n \label{sec:finite}\n\subsection{Convergence of Finite Dimensional Approximations}\nThe governing system in Equations \ref{eq:eom_rkhs} constitutes a distributed parameter system since the function $\tilde{f}(t)$ evolves in the infinite dimensional space $H$. In practice these equations must be approximated by some finite dimensional system. Let $\{H_j\}_{j\in\mathbb{N}_0} \subseteq H$ be a nested sequence of subspaces.
Let $\\Pi_j$ be a collection of approximation operators $\n\\Pi_j:{H}\\rightarrow {H}_n$ such that $\\lim_{j\\to \\infty}\\Pi_j f = f$ for all $f\\in H$ and $\\sup_{j\\in \\mathbb{N}_0} \\|\\Pi_j\\| \\leq C $ for a constant $C > 0$. Perhaps the most evident example of such collection might choose $\\Pi_j$ as the $H$-orthogonal projection for a dense collection of subspaces $H_n$. It is also common to choose $\\Pi_j$ as a uniformly bounded family of quasi-interpolants \\cite{devore1998}. We next construct a finite dimensional approximations $\\hat{x}_j$ and $\\hat{f}_j$ of the online estimation equations in\n\\begin{align}\n\\dot{\\hat{x}}_j(t) & = A\\hat{x}_j(t) + \nB E_{x(t)} \\Pi^*_j \\hat{f}_j(t), \\label{eq:approx_on_est1} \\\\\n\\dot{\\hat{f}}_j(t) & = \\Gamma_j^{-1}\\left ( B E_{x(t)} \\Pi^*_j \\right)^* P\\tilde{x}_j(t)\n\\label{eq:approx_on_est2}\n\\end{align}\nwith $\\tilde{x}_j:=x-\\hat{x}_j$. \nIt is important to note that in the above equation $\n\\Pi_j:{H}\\rightarrow {H}_n$, and $\\Pi_j^*:{H}_n\\rightarrow {H}$.\n\\begin{theorem}\nSuppose that $x \\in C([0,T],\\mathbb{R}^d)$ and that the embedding $i:H \\to C(\\Omega)$ is uniform in the sense that \n\\begin{equation}\n\\label{6}\n\\|f\\|_{C(\\Omega)}\\equiv \\|if\\|_{C(\\Omega)} \\leq C\\|f\\|_H.\n\\end{equation}\nThen for any $T>0$, \n\\begin{align*}\n\\| \\hat{x} - \\hat{x}_j\\|_{C([0,T];\\mathbb{R}^d)} &\\rightarrow 0,\\\\\n\\|\\hat{f} - \\hat{f}_j\\|_{C([0,T];H)} &\\rightarrow 0,\n\\end{align*}\nas $j\\rightarrow \\infty$.\n\\end{theorem}\n\\begin{proof}\nDefine the operators $\\Lambda(t):= B E_{x(t)}:H\\rightarrow \\mathbb{R}^d$ and for each $t\\geq 0$, introduce the measures of state estimation error $\\overline{x}_j:=\\hat{x}-\\hat{x}_j$, and define the function estimation error $\\overline{f}_j\n=\\hat{f}-\\hat{f}_j$. 
\nNote that $\tilde{x}_j:=x-\hat{x}_j=x-\hat{x} + \hat{x}-\hat{x}_j=\tilde{x}+ \overline{x}_j$.\nThe time derivative of the error induced by approximation of the estimates can be expanded as follows:\n\begin{align*}\n&\frac{1}{2} \frac{d}{dt}\left (\n( {\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + ({\overline{f}}_j,{\overline{f}}_j )_H \n\right ) = \n( \dot{\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + (\dot{\overline{f}}_j,{\overline{f}}_j )_H \n \\\n&= (A\overline{x}_j + \Lambda \overline{f}_j , \overline{x}_j)_{\mathbb{R}^d} + \n\left ( \n\left (\Gamma^{-1}-\Pi_j^*\Gamma_j^{-1}\Pi_j \right )\n\Lambda^*P \tilde{x}, \overline{f}_j\n\right )_H \n-\left (\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P \overline{x}_j,\overline{f}_j \right)_H\n\\\n&\leq C_A \| \overline{x}_j \|^2_{\mathbb{R}^d} + \|\Lambda\| \| \overline{f}_j \|_{H} \| \overline{x}_j \|_{\mathbb{R}^d} \\\n&\quad \quad \n+ \| \Gamma^{-1} \n(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|_{H} \|\overline{f}_j \|_H \n+\left \| \n\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P \n\right \| \|\overline{x}_j\|_{\mathbb{R}^d} \n\|\overline{f}_j \|_H\n\\\n& \leq \nC_A \| \overline{x}_j \|_{\mathbb{R}^d}^2 + \frac{1}{2}\n\|\Lambda\| \left ( \n\| \overline{f}_j \|_{H}^2\n+ \| \overline{x}_j \|_{\mathbb{R}^d}^2\n\right ) \n+ \frac{1}{2}\|\Pi^*_j \Gamma_j^{-1} \Pi_j\|\n\| \Lambda^*\| \|P\| \left ( \|\overline{x}_j\|^2_{\mathbb{R}^d} + \| \overline{f}_j\|^2_H \right ) \n\\\n&\quad \quad\n+ \frac{1}{2} \left ( \n\| \Gamma^{-1} \n(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|^2_{H}\n+ \n\|\overline{f}_j \|^2_H \n\right) \\\n& \leq\n\frac{1}{2} \|\Gamma^{-1} \|^2 \| \Lambda^*\|^2 \|P\|^2\n\| I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j \|^2\|\tilde{x}\|^2_{\mathbb{R}^d}\n\\\n&\quad \quad\n+\left (C_A + \frac{1}{2} \|\Lambda\| \n+ \frac{1}{2} C_B
\\|\\Lambda^*\\| \\|P\\|\n\\right ) \\|\\overline{x}_j\\|^{2}_{\\mathbb{R}^d}\n+\n\\frac{1}{2} \\left ( \\|\\Lambda\\| + 1\n+ \\frac{1}{2} C_B \\|\\Lambda^*\\| \\|P\\|\\right) \\|\\overline{f}_j\\|^{2}_H \n\\end{align*}\nWe know that $\\|\\Lambda(t)\\|=\\|\\Lambda^*(t)\\|$ is bounded uniformly in time from the assumption that $H$ is uniformly embedded in $C(\\Omega)$.\nWe next consider the operator error that manifests in the term $(\\Gamma^{-1} - \\Pi^*_j \\Gamma_j^{-1} \\Pi_j)$. For any $g\\in H$ we have\n\\begin{align*}\n\\| (\\Gamma^{-1} - \\Pi^*_j \\Gamma_j^{-1} \\Pi_j)g \\|_H & =\n\\| \\Gamma^{-1}( I - \\Gamma \\Pi^*_j \\Gamma_j^{-1} \\Pi_j)g \\|_H \\\\\n&\\leq \n\\| \\Gamma^{-1} \\| \n\\|\\left (\\Pi_j + (I-\\Pi_j)\\right )( I - \\Gamma \\Pi^*_j \\Gamma_j^{-1} \\Pi_j)g \\|_H \\\\\n&\\lesssim \\| I-\\Pi_j \\| \\|g\\|_H.\n\\end{align*}\nThis final inequality follows since $\\Pi_j(I - \\Gamma \\Pi^*_j \\Gamma_j^{-1} \\Pi_j)=0$ and \n$\\Gamma \\Pi^*_j \\Gamma_j^{-1} \\Pi_j\\equiv\\Gamma \\Pi^*_j \\left (\\Pi_j \\Gamma \\Pi_j^* \\right)^{-1} \\Pi_j $ is uniformly bounded.\nWe then can write\n\\begin{align*}\n \\frac{d}{dt}\\left (\n \\|\\overline{x}_j\\|^2_{\\mathbb{R}^d} + \\|\\overline{f}_j\\|^2_H \n\\right )\n&\\leq C_1 \\| I-\\Gamma \\Pi_j^*\\Gamma_j^{-1}\\Pi_j \\|^2 \\\\\n&\\quad \\quad+ C_2 \\left (\\|\\overline{x}_j\\|^2_{\\mathbb{R}^d} + \\|\\overline{f}_j\\|^2_H \\right )\n\\end{align*}\nwhere $C_1,C_2>0$. We integrate this inequality over the interval $[0,T]$ and obtain\n\\begin{align*}\n\\|\\overline{x}_j(t)\\|^2_{\\mathbb{R}^d}\n+ \\|\\overline{f}_j(t)\\|^2_H \n&\\leq \n\\|\\overline{x}_j(0)\\|^2_{\\mathbb{R}^d}\n+ \\|\\overline{f}_j(0)\\|^2_H \\\\\n&\n+ C_1T \\| I-\\Gamma\\Pi_j^*\\Gamma_j^{-1}\\Pi_j \\|^2 \\\\\n&+ C_2\\int_0^T \\left ( \n\\|\\overline{x}_j(\\tau)\\|^2_{\\mathbb{R}^d}\n+ \\|\\overline{f}_j(\\tau)\\|^2_H\n\\right ) d\\tau\n\\end{align*}\nWe can always choose $\\hat{x}(0) = \\hat{x}_j(0)$, so that $\\overline{x}_j(0) = 0$. 
If we choose $\hat{f}_j(0):=\Pi_j\hat{f}(0)$, then\n\begin{align*}\n\|\overline{f}_j(0)\|_H &= \|\hat{f}(0)-\Pi_j\hat{f}(0)\|_H\\\n&\leq \|I-\Pi_j\|_H \|\hat{f}(0)\|_H.\n\end{align*}\nThe $j$-dependent constant term can be bounded as $C_1T \| I-\Gamma\Pi_j^* \Gamma_j^{-1} \Pi_j \|^2 \leq C_3 \|I-\Pi_j\|^2_H$, and we arrive at\n\begin{align}\n\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}\n+ \|\overline{f}_j(t)\|^2_H \n&\leq C_4\|I-\Pi_j\|^2_H+ C_2\int_0^t \left ( \n\|\overline{x}_j(\tau)\|^2_{\mathbb{R}^d}\n+ \|\overline{f}_j(\tau)\|^2_H\n\right ) d\tau.\n\label{eq:gron_last}\n\end{align}\nSet $\alpha_j:=C_4\|I-\Pi_j\|^2_H$, which does not depend on $t$. Applying Gronwall's inequality to Equation \ref{eq:gron_last}, we get\n\begin{align}\n\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}\n+ \|\overline{f}_j(t)\|^2_H \n&\leq \alpha_j e^{C_2 T}.\n\end{align}\nAs $j\to \infty$ we have $\alpha_j \to 0$, which implies that $\overline{x}_j(t)\to 0$ and $\overline{f}_j(t)\to 0$ uniformly on $[0,T]$.\nTherefore the finite dimensional approximations converge to the infinite dimensional states in $\mathbb{R}^d \times H$.\n\end{proof} \n\t\section{Numerical Simulations}\n \label{sec:numerical}\n\begin{figure}\n\centering\n\includegraphics[scale=0.3]{Figure1parta}\n\hspace{1cm}\n\includegraphics[scale=0.3]{Figure1partb}\n\captionsetup{justification=justified,margin=1cm}\n\caption{Experimental setup and definition of basis functions}\n\label{fig:Model}\n\end{figure} \nA schematic representation of a quarter car model consisting of a chassis, suspension, and road measuring device is shown in Fig.~\ref{fig:Model}. In this simple model the displacements of the car suspension and chassis are $x_1$ and $x_2$, respectively. The arc length $s$ measures the distance along the track that the vehicle follows.
The equation of motion for the two DOF model has the form\n\begin{equation}\nM\ddot{x}(t)+C\dot{x}(t)+Kx(t)=bf(s(t))\n\end{equation}\nwith the mass matrix $M \in \mathbb{R}^{2\times2}$, the stiffness matrix $K \in \mathbb{R}^{2\times2}$, the damping matrix $C \in \mathbb{R}^{2\times2}$, and the control influence vector $b \in \mathbb{R}^{2\times 1}$ in this example. The road profile is denoted by the unknown function $f:\mathbb{R} \to \mathbb{R}$. For simulation purposes, the car is assumed to traverse a circular path of radius $R$, so that we restrict attention to periodic road profiles $f : [0,R]\to \mathbb{R}$. To illustrate the methodology, we first assume that the unknown function $f$ is restricted to the class of uncertainty described in Equation~\ref{eq:e2} and therefore can be approximated as\n\begin{equation}\nf(\cdot)=\sum_{i=1}^n{\alpha_i^*k_{x_i}(\cdot)},\n\end{equation}\nwhere $n$ is the number of basis functions, $\alpha_i^*$ are the true unknown coefficients to be estimated, and $k_{x_i}(\cdot)$ are basis functions over the circular domain. \nHence the state space equation can be written in the form\n\begin{equation}\n\dot{x}(t)=Ax(t)+B\sum_{i=1}^n{\alpha_i^*k_{x_i}(s(t))},\n\label{eq:num_sim}\n\end{equation}\nwhere the state vector $x = [\dot{x}_1,x_1,\dot{x}_2,x_2]^T$, the system matrix $A\in \mathbb{R}^{4 \times 4}$, and the control influence matrix $B \in \mathbb{R}^{4 \times 1}$.\nFor the quarter car model shown in Fig.
\ref{fig:Model} we derive the matrices \n$$\nA=\begin{bmatrix}\n\frac{-c_2}{m_1} &\frac{-(k_1+k_2)}{m_1} &\frac{c_2}{m_1} &\frac{k_2}{m_1}\\\n1 &0 &0 &0\\\n\frac{c_2}{m_2} &\frac{k_2}{m_2} &\frac{-c_2}{m_2} &\frac{-k_2}{m_2}\\\n0 &0 &1 &0\n\end{bmatrix}\n\quad \text{and} \quad \nB=\begin{bmatrix}\n\frac{k_1}{m_1}\\\n0\\\n0\\\n0\n\end{bmatrix}.\n$$\nNote that if we augment the state to be $\{x_1,x_2,x_3,x_4,s\}$ and append an ODE that specifies $\dot{s}(t)$ for $t\in \mathbb{R}^+$, Equations~\ref{eq:num_sim} can be written in the form of Equations~\ref{eq:simple_plant}. Then the finite dimensional set of coupled ODEs for the adaptive estimation problem can be written in terms of the plant dynamics, the estimator equation, and the learning law, which are of the form shown in Equations \ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3}, respectively.\n\n \subsection{Synthetic Road Profile}\n The constants in the equation are initialized as follows: $m_1=0.5$ kg, $m_2=0.5$ kg, $k_1=50000$ N/m, $k_2=30000$ N/m, $c_2=200$ Ns/m, and $\Gamma=0.001$. \nThe radius of the path traversed is $R=4$ m, and the road profile to be estimated is assumed to have the shape $f(\cdot)= \kappa\sin(2\pi \nu (\cdot))$, where $\nu =0.04$ Hz and $\kappa=2$. \nThus our adaptive estimation problem is formulated for a synthetic road profile in the RKHS $H = \overline{\mathrm{span}\{k_x(\cdot)\,|\,x\in \Omega\}}$ with $k_x(\cdot)=e^{-\frac{\|x-\cdot\|^2}{2\sigma^2}}$.\nThe radial basis functions, each with standard deviation $\sigma=50$, span a range of $25^\circ$ with their centers $s_i$ evenly separated along the arc length. It is important to note that the method admits a scattered basis located at any collection of centers $\{s_i\}_{i=1}^{n}\subseteq \Omega$; uniformly spaced centers are selected here to illustrate the convergence rates.
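The convergence behavior reported below can be previewed offline. The following sketch is our own illustration, not the paper's adaptive law: it fits the synthetic profile $f(s)=\kappa\sin(2\pi\nu s)$, with $\kappa=2$ and $\nu=0.04$ taken from the text, in a Gaussian radial basis by least squares; the kernel width is tied to the center spacing, an assumption that differs from the $\sigma=50$ used in the paper.

```python
import numpy as np

# Illustrative sketch only (not the paper's adaptive learning law):
# approximate the synthetic road profile f(s) = kappa*sin(2*pi*nu*s)
# in a Gaussian radial-basis expansion by an offline least-squares fit,
# to show the representation error shrinking as the number of centers
# n grows.  kappa, nu, R are the values quoted in the text; the kernel
# width below is tied to the center spacing (our assumption).

kappa, nu, R = 2.0, 0.04, 4.0
L = 2.0 * np.pi * R                     # arc length of one lap

def road(s):
    return kappa * np.sin(2.0 * np.pi * nu * s)

def sup_error(n, samples=2000):
    """Sup-norm error of the least-squares fit with n Gaussian centers."""
    centers = np.linspace(0.0, L, n)
    sigma = 2.0 * L / n                 # assumed width ~ center spacing
    s = np.linspace(0.0, L, samples)
    Phi = np.exp(-(s[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))
    alpha, *_ = np.linalg.lstsq(Phi, road(s), rcond=None)
    return float(np.max(np.abs(Phi @ alpha - road(s))))

errors = {n: sup_error(n) for n in (10, 20, 40)}
```

The dictionary `errors` shrinks as $n$ grows, mirroring the trend of the convergence plots; the adaptive estimator can at best recover this finite dimensional representation of $f$.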
\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.45]{rbf_road}\n\\caption{Road surface estimates for $n=\\{10,20,\\cdots,100\\}$}\n\\label{fig:Sine Road}\n\\end{figure}\nFig.\\ref{fig:Sine Road} shows the finite dimensional estimates $\\hat{f}$ of the road and the true road surface $f$ for different number of basis kernels ranging from $n=\\{10,20,\\cdots,100\\}$. \n\\begin{figure}[h!]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=.5\\textwidth]{L2_example}\n&\n\\includegraphics[width=.5\\textwidth]{C_error_example}\\\\\n\\end{tabular}\n\\caption{Convergence rates using Gaussian kernel for synthetic data}\n\\label{fig:logsup}\n\\end{figure}\nThe plots in Fig.\\ref{fig:logsup} show the rate of convergence of $L^2$ error and the $C(\\Omega)$ error with respect to the number of basis functions. The {\\em{log}} along the axes in the figures refer to the natural logarithm unless explicitly specified.\n\n\\subsection{Experimental Road Profile Data}\nThe road profile to be estimated in this subsection is based on the experimental data obtained from the Vehicle Terrain Measurement System shown in Fig.~\\ref{fig:circle}. The constants in the estimation problem are initialized to the same numerical values as in previous subsection.\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width=0.4\\textwidth]{Road_Run1}\n&\n\\includegraphics[width=0.4\\textwidth]{Circle}\\\\\n{Longitudinal Elevation Profile.}\n&\n{Circular Path followed by VTMS.}\n\\end{tabular}\n\\caption{Experimental Data From VTMS.}\n\\label{fig:circle}\n\\end{figure}\nIn the first study in this section the adaptive estimation problem is formulated in the RKHS $H = \\overline{k_x(\\cdot)|x\\in \\Omega\\}}$ with $k_x(\\cdot)=e^\\frac{-\\|x-{\\cdot}\\|^2}{2\\sigma^2 }$. 
The radial basis functions, each with standard deviation $\sigma=50$, have their centers $\{s_i\}_{i=1}^{n}\subseteq \Omega$ evenly separated along the arc length. This is repeated for kernels defined using B-splines of first order and second order, respectively. \n\nFig.~\ref{fig:Kernels} shows the finite dimensional estimates of the road and the true road surface $f$ for data representing a single lap around the circular track; the finite dimensional estimates $\hat{f}_n$ are plotted for numbers of basis kernels ranging over $n\in\{35,50,\cdots,140\}$ using the Gaussian kernel as well as the second order B-splines. \nThe finite dimensional estimates $\hat{f}_n$ of the road profile and the true road profile $f$ for data representing multiple laps around the circular track are plotted for the first order B-splines in Fig.~\ref{fig:Lsplines Road}. The plots in Fig.~\ref{fig:sup_error_compare} show the rate of convergence of the $L^2$ error and the $C(\Omega)$ error with respect to the number of basis functions. \nIt is seen that the rate of convergence for the second order B-splines is better than that of the other kernels used in these examples. This corroborates the fact that smoother kernels are expected to have better convergence rates. \n\nAlso, the condition number of the Grammian matrix varies with $n$, as illustrated in Table~\ref{table:1} and Fig.~\ref{fig:conditionnumber}. This is an important factor to consider when choosing a specific kernel for the RKHS embedding technique, since it is well known that the error in numerical estimates of solutions to linear systems is bounded above in terms of the condition number. The implementation of the RKHS embedding method requires the solution of such a linear system, which depends on the Grammian matrix of the kernel bases, at each time step.
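The conditioning contrast just described can be reproduced with a small numerical sketch. The centers, widths, and the piecewise linear "hat" kernel used here as a stand-in for a first order B-spline kernel are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Illustrative sketch of the conditioning issue discussed above: the
# Grammian K[i, j] = k(s_i, s_j) for a Gaussian kernel becomes
# ill-conditioned far faster than for a compactly supported piecewise
# linear ("hat") kernel.  Parameters are illustrative assumptions.

def grammian(centers, k):
    return np.array([[k(a, b) for b in centers] for a in centers])

def gaussian(sigma):
    return lambda a, b: np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def hat(h):
    # triangular (first order B-spline type) kernel, support |a - b| < h
    return lambda a, b: max(0.0, 1.0 - abs(a - b) / h)

centers = {n: np.linspace(0.0, 1.0, n) for n in (10, 30)}
cond = {}
for n, c in centers.items():
    cond[("gauss", n)] = np.linalg.cond(grammian(c, gaussian(0.2)))
    cond[("hat", n)] = np.linalg.cond(grammian(c, hat(0.2)))
```

With these assumed parameters the Gaussian Grammian's condition number grows by many orders of magnitude as $n$ increases, while the hat-kernel Grammian remains comparatively tame, consistent with the trend reported in Table~\ref{table:1}.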
We see that the condition number of Grammian matrices for exponentials is $\mathcal{O}(10^{16})$ times greater than that of the corresponding matrices for splines. Since the sensitivity of the solutions of linear equations is bounded in terms of the condition numbers, it is expected that the use of exponentials could suffer from a severe loss of accuracy as the dimensionality increases. The development of preconditioning techniques for Grammian matrices constructed from radial basis functions to address this problem is an area of active research.\n\begin{figure}[H]\n\centering\n\begin{tabular}{cc}\n\includegraphics[width = 0.4 \textwidth]{Exp_RBF_Road}\n&\n\includegraphics[width = 0.4 \textwidth]{Bsplines_Road}\\\n{Road surface estimates for Gaussian kernels}\n&\n{Road surface estimate for second-order B-splines}\n\end{tabular}\n\caption{Road surface estimates for single lap}\n\label{fig:Kernels}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[width = 0.4 \textwidth]{LSpline_Road}\n\caption{Road surface estimate using first-order B-splines}\n\label{fig:Lsplines Road}\n\end{figure}\n\begin{center}\n\begin{figure}[H]\n\centering\n\begin{tabular}{cc}\n\includegraphics[width = 0.5\textwidth]{Compare_L2_Error}\n&\n\includegraphics[width = 0.5\textwidth]{Compare_C_Error}\\\n\end{tabular}\n\caption{Convergence rates for different kernels}\n\label{fig:sup_error_compare}\n\end{figure}\n\end{center}\n\begin{center}\n\centering\n\begin{table}[H]\n\centering\n\begin{tabular}{|p{1cm}|p{2.2cm}|p{2.2cm}|p{2.2cm}|}\n\hline\nNo. of Basis Functions & Condition No.
(First order B-Splines) $\\times 10^3$ & Condition No.(Second order B-Splines) $\\times 10^4$ & Condition No.(Gaussian Kernels) $\\times 10^{20}$\\\\ \n \\hline \\hline\n 10 & 0.6646 & 0.3882 & 0.0001 \\\\ \n 20 & 1.0396 & 0.9336 & 0.0017 \\\\\n 30 & 1.4077 & 1.5045 & 0.0029 \\\\\n 40 & 1.7737 & 2.0784 & 0.0074 \\\\\n 50 & 2.1388 & 2.6535 & 0.0167\\\\ \n 60 & 2.5035 & 3.2293 & 0.0102\\\\ \n 70 & 2.8678 & 3.8054& 0.0542\\\\ \n 80 & 3.2321 & 4.3818& 0.0571\\\\ \n 90 & 3.5962 & 4.9583& 0.7624\\\\ \n 100 & 3.9602 & 5.5350& 1.3630\\\\ \n \\hline\n\\end{tabular}\n\\caption{Condition number of Grammian Matrix vs Number of Basis Functions}\n\\label{table:1}\n\\end{table}\n\\end{center}\n\\begin{figure}[H]\n\\centering\n\\includegraphics[height=0.3\\textheight,width=0.65\\textwidth]{Conditon_Number}\n\\caption{Condition Number of Grammian Matrix vs Number of Basis Functions}\n\\label{fig:conditionnumber}\n\\end{figure}\n\\vspace{-1cm}\n\t\\section{Conclusions}\n\t\\label{sec:conclusions}\n In this paper, we introduced a novel framework based on the use of RKHS embedding to study online adaptive estimation problems. The applicability of this framework to solve estimation problems that involve high dimensional scattered data approximation provides the motivation for the theory and algorithms described in this paper. A quick overview of the background theory on RKHS enables rigorous derivation of the results in Sections \\ref{sec:existence} and \\ref{sec:finite}. In this paper we derive (1) the sufficient conditions for the existence and uniqueness of solutions to the RKHS embedding problem, (2) the stability and convergence of the state estimation error, and (3) the convergence of the finite dimensional approximate solutions to the solution of the infinite dimensional state space. To illustrate the utility of this approach, a simplified numerical example of adaptive estimation of a road profile is studied and the results are critically analyzed. 
It would be of further interest to see the ramifications of using multiscale kernels to achieve semi-optimal convergence rates for functions in a scale of Sobolev spaces. It would likewise be important to extend this framework to adaptive control problems and examine the consequences of {\em persistency of excitation} conditions in the RKHS setting, and further extend the approach to adaptively generate bases over the state space.\n\n\section{Introduction}\n\nGiven a closed Riemannian manifold $(M,g)$ we consider the\nconformal class of the metric $g$, $[g]$. The Yamabe \nconstant of $[g]$, $Y(M,[g])$, is the\ninfimum of the normalized total scalar curvature functional on\nthe conformal class. Namely,\n\n$$Y(M,[g])= \inf_{h\in [g]} \n\frac{\int {\bf s}_h \ dvol(h)}{(Vol(M,h))^{\frac{n-2}{n}}},$$\n\n\noindent\nwhere ${\bf s}_h$ denotes the scalar curvature of the metric $h$\nand $dvol(h)$ its volume element.\n\n\nIf one writes metrics conformal to $g$ as $h=f^{4/(n-2)} \ g$,\none obtains the expression\n\n$$Y(M,[g])= \inf_{f\in C^{\infty} (M)} \n\frac{\int ( \ a_n {\| \nabla f \|}_g^2 + f^2 {\bf s}_g \ ) \n\ dvol(g)}{{\| f\|}_{p_n}^2},$$\n\n\noindent\nwhere $a_n =4(n-1)/(n-2) $ and $p_n =2n/(n-2)$. It is a fundamental\nresult \non the subject that the infimum is actually achieved\n(\cite{Yamabe, Trudinger, Aubin, Schoen}). The functions $f$ achieving\nthe infimum are called {\it Yamabe functions} and the corresponding metrics\n$f^{4/(n-2)} \ g$ are called {\it Yamabe metrics}. Since the critical points \nof the total scalar curvature functional restricted to a conformal \nclass of metrics are precisely the metrics of constant scalar\ncurvature in the conformal class, Yamabe metrics are metrics of\nconstant scalar curvature.
\n\n\n\nIt is well known that by considering functions supported in a small\nnormal neighborhood of a point one can prove that \n$Y(M^n,[g]) \\leq Y(S^n ,[g_0 ])$, where $g_0$ is the round metric\nof radius one on the sphere and $(M^n ,g)$ is any closed n-dimensional \nRiemannian manifold (\\cite{Aubin}). \nWe will use the notation $Y_n = Y(S^n ,[g_0 ])$ and \n$V_n =Vol(S^n ,g_0 )$. Therefore $Y_n =n(n-1)V_n^{\\frac{2}{n}}$.\n\nThen one \ndefines the {\\it Yamabe invariant} of a closed manifold $M$ \n\\cite{Kobayashi, Schoen2} as\n\n$$Y(M)=\\sup_g Y(M,[g]) \\leq Y_n .$$ \n\n\nIt follows that $Y(M)$ is positive if and only if $M$ admits a\nmetric of positive scalar curvature. Moreover, the sign of \n$Y(M)$ determines the technical difficulties in understanding \nthe invariant. When the Yamabe constant of a conformal class\nis non-positive there is a unique metric (up to multiplication\nby a positive constant) of constant scalar curvature in the\nconformal class and if $g$ is any metric in the conformal \nclass, the Yamabe constant is bounded from below by\n$(\\inf_M {\\bf s}_g ) \\ (Vol(M,g))^{2/n}$. This can be used for instance\nto study the behavior of the invariant under surgery and so to\nobtain information using cobordism theory \\cite{Yun, Petean, Botvinnik}.\nNote also that in the non-positive case the Yamabe invariant \ncoincides with Perelman's invariant \\cite{Ishida}.\nThe previous estimate is no longer true\nin the positive case, but one does get a lower bound in the case of\npositive Ricci curvature by a theorem of S. Ilias: \nif $Ricci(g)\\geq \\lambda g $ \n($\\lambda >0$) then $Y(M,[g]) \\geq n \\lambda (Vol(M,g))^{2/n}$\n(\\cite{Ilias}). Then in order to use this inequality to\nfind lower bounds on the Yamabe invariant of a closed \nmanifold $M$ one would try to maximize the volume of the manifold\nunder some positive lower bound of the Ricci curvature. 
\nNamely, if one denotes ${\\bf Rv} (M)= \\sup \\{ Vol(M,g): Ricci(g)\\geq\n(n-1) g \\} $ then one gets $Y(M) \\geq n(n-1) ({\\bf Rv} (M))^{2/n}$ \n(one should define ${\\bf Rv} (M) =0$ if $M$ does not admit \na metric of positive Ricci curvature). Very little is known \nabout the invariant ${\\bf Rv} (M)$. Of course, Bishop's inequality \ntells us that for any n-dimensional closed manifold \n${\\bf Rv} (M^n) \\leq {\\bf Rv} (S^n )$ \n(which is of course attained by the volume\nof the metric of constant sectional curvature 1). Moreover,\nG. Perelman \\cite{Perelman} proved that there is a\nconstant $\\delta =\\delta_n >0$ such that if ${\\bf Rv} (M) \\geq\n{\\bf Rv} (S^n ) -\\delta_n $ then \n$M$ is homeomorphic to $S^n$. Beyond this, results on\n${\\bf Rv} (M)$ have been obtained by computing Yamabe invariants, so\nfor instance ${\\bf Rv} ({\\bf CP}^2 )= 2 \\pi^2 $\n(achieved by the Fubini-Study \nmetric as shown by C. LeBrun \\cite{Lebrun} and M. Gursky and C. \nLeBrun \\cite{Gursky}) and ${\\bf Rv} ({\\bf RP}^3) = \\pi^2$ (achieved by the \nmetric of constant sectional curvature as shown by H. Bray and \nA. Neves \\cite{Bray}).\n\n\nOf course, there is no hope to apply the previous comments directly\nwhen the fundamental group of $M$ is infinite. Nevertheless it\nseems that even in this case the Yamabe invariant is\nrealized by conformal classes of metrics which maximize volume\nwith a fixed positive lower bound on the Ricci curvature\n``in certain sense''. The standard example is $S^{n-1} \\times \nS^1$. The fact that $Y(S^n \\times S^1 ) =Y_{n+1}$ is one\nof the first things we learned about the Yamabe invariant\n\\cite{Kobayashi, Schoen2}. 
One way to see this is as follows:\nfirst one notes that $\lim_{T\rightarrow \infty} \nY(S^n \times S^1 ,[g_0 + T^2 dt^2 ])=\nY(S^n \times {\mathbb R}, [g_0 + dt^2 ])$ \cite{Akutagawa} \n(the Yamabe constant for a non-compact Riemannian manifold\nis computed as the infimum of the Yamabe functional over\ncompactly supported functions).\nBut the Yamabe\nfunction for $g_0 + dt^2$ is precisely the conformal factor\nbetween $S^n \times {\mathbb R}$ and $S^{n+1} -\{ S, N \}$. Therefore \none can think of $Y(S^n \times S^1 ) =Y_{n+1}$ as realized \nby the positive\nEinstein metric on $S^{n+1} -\{ S, N \} $. We will see in this\narticle that a similar situation occurs for any closed positive\nEinstein manifold $(M,g)$ (although we only get the lower\nbound for the invariant). \n\n\vspace{.3cm}\n\nLet $(N,h)$ be a closed Riemannian manifold. An \n{\it isoperimetric region}\nis an open subset $U$ with boundary $\partial U$ such that \n$\partial U$ minimizes area among hypersurfaces bounding a\nregion of volume $Vol(U)$. Given any number $\beta$ with $0<\beta<1$,\none defines the {\it isoperimetric profile} $I_h (\beta)$ by\n\n$$I_h (\beta) =\inf \{ Vol(\partial U)/Vol(N,h) : \nVol(U,h) = \beta Vol(N,h) \},$$\n\n\noindent\nwhere $Vol(\partial U)$ is measured with the Riemannian metric \ninduced by $h$ (on the non-singular part of $\partial U$). \n\nGiven a closed Riemannian manifold $(M,g)$ we will call\nthe {\it spherical cone} on $M$ the space $X$ obtained collapsing\n$M \times \{0 \} $ and $M\times \{ \pi \}$ in\n$M\times [0,\pi ]$ to points $S$ and $N$ (the vertices)\nwith the metric ${\bf g} =\sin^2 (t)g + dt^2$ \n(which is a Riemannian metric on $X-\{ S,N \}$). Now if\n$Ricci(g) \geq (n-1) g$ one can see that $Ricci({\bf g})\n\geq n{\bf g}$. One should compare this with the Euclidean cones \nconsidered by F. Morgan and M. Ritor\'{e} in \cite{Morgan}:\n$\hat{g} =t^2 g + dt^2$ for which $Ricci(g) \geq (n-1)g $\nimplies that $Ricci(\hat{g}) \geq 0$.
The importance of these\nspherical cones for the study of Yamabe constants is that \nif one takes out the vertices the corresponding (non-complete)\nRiemannian manifold is conformal to \n$M\times {\mathbb R}$. But using the warped product version of the\nRos Product Theorem \cite[Proposition 3.6]{Ros} (see \n\cite[Section 3]{Morgan2}) and the Levy-Gromov isoperimetric\ninequality \cite{Gromov} one can understand isoperimetric \nregions in these spherical cones. Namely, \n\n\n\begin{Theorem} Let $(M^n,g)$ be a compact manifold with \nRicci curvature $Ricci(g) \geq (n-1)g$. Let $(X,{\bf g})$ be\nits spherical cone. Then geodesic balls around any of the \nvertices are isoperimetric.\n\end{Theorem}\n\n\nBut now, since the spherical cone over $(M,g)$ \nis conformal to $(M\times {\mathbb R} ,\ng+ dt^2 )$ we can use the previous result \nand symmetrization of a function with respect to the\ngeodesic balls centered at a vertex to prove:\n\n\n\n\n\begin{Theorem} Let $(M,g)$ be a closed Riemannian manifold of \npositive Ricci curvature, $Ricci(g) \geq (n-1)g$ and volume $V$. \nThen \n\n$$Y(M\times {\mathbb R} ,[g+dt^2 ]) \geq \n(V/V_n )^{\frac{2}{n+1}} \ Y_{n+1} .$$\n\end{Theorem}\n\n\vspace{.2cm}\n\n\nAs we mentioned before, one of the differences between the positive\nand non-positive cases in the study of the Yamabe constant is\nthe non-uniqueness of constant scalar curvature metrics on\na conformal class with positive Yamabe constant. And the simplest\nfamily of examples of non-uniqueness comes from Riemannian \nproducts. If $(M,g)$ and $(N^n ,h)$ are closed Riemannian manifolds\nof constant scalar curvature and ${\bf s}_g$ is positive then\nfor small $\delta >0$, $\delta g + h$ is a constant scalar\ncurvature metric on $M \times N$ which cannot be a Yamabe\nmetric. 
If $(M,g)$ is Einstein and $Y(M)=Y(M,[g])$ it seems\nreasonable that $Y(M\times N)= \lim_{\delta \rightarrow 0} \nY(M\times N ,[ \delta g + h ])$.\nMoreover, as is shown in \cite{Akutagawa},\n\n$$ \lim_{\delta \rightarrow 0} Y(M\times N , [\delta g + h ]) =Y(M\times {\mathbb R}^n,[ g+ dt^2 ]).$$\n\nThe only case which is well understood is when $M=S^n$ and $N=S^1$.\nHere every Yamabe function is a function of the $S^1$-factor\n\cite{Schoen2} and the Yamabe function for $(S^n \times {\mathbb R} , g_0 +\ndt^2 )$ is the factor which makes $S^n\times {\mathbb R}$ conformal to\n$S^{n+1} -\{ S, N \}$. It seems possible that under\ncertain conditions on $(M,g)$ the Yamabe functions of \n$(M \times {\mathbb R}^n , g+dt^2 )$ depend only on the second \nvariable. The best case scenario would be that this is true\nif $g$ is a Yamabe metric, but the case when $g$ is Einstein\nseems more attainable. It is a corollary to the previous\ntheorem that this is actually true in the case $n=1$. Namely,\nusing the notation (as in \cite{Akutagawa}) \n$Y_N (M\times N , g +h)$\nto denote the infimum of the $(g+h)$-Yamabe functional restricted\nto functions of the $N$-factor we have:\n\n\n\n\n\begin{Corollary} Let $(M^n,g)$ be a closed positive Einstein manifold \nwith Ricci curvature $Ricci(g)=(n-1)g$. 
Then\n\n$$Y(M\\times {\\mathbb R} , [g+ dt^2 ])=Y_{{\\mathbb R}}(M\\times {\\mathbb R} , g+ dt^2 )=\n{\\left( \\frac{V}{V_n} \\right) }^{\\frac{2}{n+1}} \\ Y_{n+1}.$$ \n\n\\end{Corollary}\n\n\\vspace{.3cm}\n\n\nAs $Y(M\\times {\\mathbb R} , [g+ dt^2 ]) = \\lim_{T\\rightarrow \\infty }\nY(M\\times S^1 ,[g+T dt^2 ])$ it also follows from Theorem 1.2\nthat:\n\n\\begin{Corollary} If $(M^n ,g)$ is a closed Einstein manifold\nwith $Ricci(g) = (n-1)g$ and volume $V$ then\n\n$$Y(M\\times S^1) \\geq (V/V_n )^{\\frac{2}{n+1}} \\ Y_{n+1} .$$\n\n\\end{Corollary}\n\n\\vspace{.3cm}\n\n\nSo for example using the product metric we get\n\n$$Y(S^2 \\times S^2 \\times S^1 )\\geq {\\left(\\frac{2}{3}\n \\right)}^{(2/5)} \\ Y_5 $$\n\n\\noindent\nand using the Fubini-Study metric we get\n\n\n$$Y({\\bf CP}^2 \\times S^1 ) \\geq {\\left(\\frac{3}{4} \\right)}^{(2/5)} \n\\ Y_5 .$$\n\n\n\\vspace{.4cm}\n\n\n\n{\\it Acknowledgements:} The author would like to thank \nManuel Ritor\\'{e}, Kazuo Akutagawa and Frank Morgan\nfor several useful comments on the first drafts of \nthis manuscript.\n\n\n\n\n\n\n\\section{Isoperimetric regions in spherical cones}\n\n\n\n\nAs we mentioned in the introduction, the isoperimetric\nproblem for spherical cones (over manifolds with \nRicci curvature $\\geq n-1$) is understood using\nthe Levy-Gromov isoperimetric inequality \n(to compare the isoperimetric functions of\n$M$ and of $S^n$) and the Ros Product Theorem for warped products\n(to compare then the isoperimetric functions of \nthe spherical cone over $M$ to the isoperimetric function\nof $S^{n+1}$). \nSee for example section 3 of \\cite{Morgan2}\n(in particular {\\bf 3.2} and the remark after it). For the\nreader familiar with isoperimetric problems, this should be\nenough to understand Theorem 1.1. In this section, for the\nconvenience of the reader, we will\ngive a brief outline on these issues. 
We will mostly \ndiscuss and follow section 3 of \cite{Ros} and ideas\nin \cite{Morgan, Montiel} which we think might be useful in\ndealing with other problems arising from the study of Yamabe \nconstants. \n\n \nLet $(M^n ,g)$ be a closed Riemannian manifold\nof volume $V$ and Ricci curvature $Ricci(g) \geq (n-1)g$. \nWe will\nconsider $(X^{n+1}, \bf{g}) $ where as a topological space $X$ is the \nsuspension of $M$ ($X=M\times [0,\pi ]$ with $M\times \{ 0 \}$ and\n$M\times \{ \pi \}$ identified to points $S$ and $N$) \nand $\bf{g}$ $ =\sin^2 (t) \ g \ +\ndt^2$. Of course $X$ is not a manifold (except when $M$ is $S^n$) and\n$\bf{g} $ is a Riemannian metric only on $X-\{ S,N \}$. \n\nThe following is a standard result in geometric measure theory.\n\n\n$\bf{Theorem:}$ For any positive number $r< Vol(X)$ there exists \nan isoperimetric open subset $U$ of $X$ of volume $r$. Moreover\n$\partial U$ is a smooth stable constant mean curvature\nhypersurface of $X$ except for a singular piece $\partial_1 U$\nwhich consists of (possibly)\n$S$, $N$, and a subset of codimension at least 7. \n\n\n\n\nLet us call $\partial_0 U$ the regular part of $\partial U$,\n$\partial_0 U= \partial U - \partial_1 U$. Let\n$X_t$, $t\in (-\varepsilon ,\varepsilon )$, \nbe a variation of $\partial_0 U$ such that the \nvolume of the enclosed region $U_t$ remains constant. \nLet $\lambda (t)$ be the area of $X_t$. Then $\lambda '(0) =0$\nand $\lambda ''(0) \geq 0$. The first condition is satisfied\nby hypersurfaces of constant mean curvature and the ones \nsatisfying the second condition are called ${\it stable}$.\nIf $N$ denotes a normal \nvector field to the hypersurface then variations are obtained \nby picking a function $h$ with compact support on $\partial_0 U$ and\nmoving $\partial_0 U$ in the direction of $h \ N$. Then\nwe have that if the mean of $h$ on $\partial_0 U$\nis 0 then $\lambda_h '(0) =0$\nand $\lambda_h ''(0) \geq 0$. 
This last condition is written as\n\n$$Q(h,h)=-\int_{\partial_0 U} h(\Delta h + (Ricci (N,N) +\n\sigma^2 )h ) dvol(\partial_0 U) \geq 0.$$\n\n\noindent\nHere we consider $\partial_0 U$ as a Riemannian manifold\n(with the induced metric) and use the corresponding Laplacian\nand volume element. $\sigma^2$ is the square of the norm of the second\nfundamental form. \nThis was worked out by J. L. Barbosa, M. do Carmo and \nJ. Eschenburg in \cite{Barbosa,\ndoCarmo}. As we said before, the function $h$ \nshould a priori have compact support \nin $\partial_0 U$ but as shown by F. Morgan and M. Ritor\'{e} \n\cite[Lemma 3.3]{Morgan} it is enough that $h$ is bounded\nand $h\in L^2 (\partial_0 U)$. This is important in order to study\nstable constant mean curvature surfaces on a space like $X$ because\n$X$ admits what is called a ${\it conformal}$ vector field $V=\n\sin (t) \partial /\partial t$ and the function $h$ one wants to\nconsider is $h=div (V-{\bf g}(V,N) \ N )$ where $N$ is the unit\nnormal to the hypersurface (and then $h$ is the divergence of\nthe tangential part of $V$). This has been used for instance in \n\cite{Montiel,Morgan} to classify stable constant mean curvature\nhypersurfaces in Riemannian manifolds with a conformal vector field.\nWhen the hypersurface is smooth this function $h$ has mean 0 by \nthe divergence theorem and one can apply the stability condition. \nBut when the hypersurface has singularities one would a priori need \nthe function $h$ to have compact support on the regular part. This\nwas done by F. Morgan and M. Ritor\'{e} in \n\cite[Lemma 3.3]{Morgan}. \n\n\nWe want to prove that the geodesic balls around $S$ are\nisoperimetric. One could try to apply the techniques of\nMorgan and Ritor\'{e} in \cite{Morgan} and see that they are \nthe only stable constant mean \ncurvature hypersurfaces in $X$. 
This should be possible, and \nactually it might be necessary to deal with isoperimetric regions \nof more general singular spaces that appear naturally in the study of \nYamabe constants of Riemannian products.\nBut in this case we will instead\ntake a more direct approach using the Levy-Gromov \nisoperimetric inequality \cite{Gromov} and the Ros Product Theorem \n\cite{Ros}. \n\n\n\vspace{.3cm}\n\n\nThe sketch of the proof is as follows: First one has to note that \ngeodesic balls centered at the vertices {\it produce} the same \nisoperimetric function as the one of the round sphere. Therefore\nto prove that geodesic balls around the vertices are isoperimetric \nis equivalent to proving that the isoperimetric function of ${\bf g}$ \nis bounded from below by the isoperimetric function of $g_0$. To\ndo this, given any open subset $U$ of $X$ one considers \nits symmetrization\n$U^s \subset S^{n+1}$, so that the {\it slices} of $U^s$ are geodesic\nballs with the same normalized volumes as the slices of $U$. Then\nby the Levy-Gromov isoperimetric inequality we can compare the\nnormalized areas of the boundaries of the slices. We have to \nprove that the normalized area of $\partial U^s$ is at most the\nnormalized area of $\partial U$. \nThis follows from \nthe warped product version of \cite[Proposition 3.6]{Ros}. We will\ngive an outline following Ros' proof for the Riemannian product case. \nWe will use the notion of Minkowski\ncontent. 
This is the bulk of the proof and we will divide it into \nLemma 2.1, Lemma 2.2 and Lemma 2.3.\n\n\n\n\\vspace{.3cm}\n\n\n{\\it Proof of Theorem 1.1 :} \nLet $U\\subset X$ be a closed subset.\nFor any $t\\in (0,\\pi )$ let \n\n$$U_t =U \\cap (M\\times \\{ t \\} ) .$$ \n\n\nFix any point $E\\in S^n$ and let $(U^s )_t$ be the geodesic ball\ncentered at $E$ with volume \n\n$$Vol((U^s )_t , g_0 ) = \\frac{V_n}{V} \\ Vol(U_t ,g).$$ \n\n\\noindent\n(recall that $V=Vol(M,g)$ and $V_n = Vol(S^n ,g_0 )$).\nLet $U^s\n\\subset S^{n+1}$ be the corresponding subset (i.e. we consider\n$S^{n+1} -\\{ S,N \\}$ as $S^n \\times (0,\\pi )$ and $U^s$ is\nsuch that $U^s \\cap (S^n \\times \\{ t \\}) $ =$(U^s )_t$.\nOne might add\n$S$ and/or $N$ to make $U^s$ closed and connected). Note\nthat one can write $(U^s )_t = (U_t )^s = U_t^s$ as long as there\nis no confusion (or no difference) on whether we are considering \nit as a subset of $S^n$ or as a subset of $S^{n+1}$.\n\n\nNow \n\n$$Vol(U)=\\int_0^{\\pi} \\sin^n (t) \\ Vol(U_t ,g) \\ dt $$\n\n$$= \\frac{V}{V_n} \\int_0^{\\pi} \\sin^n (t) \\ Vol((U^s )_t ,g_0 ) \\ dt \n= \\frac{V}{V_n} Vol(U^s ,g_0 ).$$\n\n\n\nAlso if $B(r) =M\\times [0,r]$ (the geodesic ball of radius\n$r$ centered at the vertex at 0) then\n\n$$Vol(B(r))=\\int_0^r \\sin^n (t) V dt = \\frac{V}{V_n} \n\\int_0^r \\sin^n (t) V_n dt = \\frac{V}{V_n} Vol (B_0 (r)) \\ \\ (1)$$\n\n\\noindent\nwhere $B_0 (r)$ is the geodesic ball of radius $r$ in the\nround sphere. And \n\n$$Vol(\\partial B(r))=\\sin^n (r) V =\\frac{V}{V_n} \nVol(\\partial B_0 (r)) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (2).$$\n\n\nFormulas (1) and (2) tell us that the geodesic balls around the \nvertices in $X$ produce the same isoperimetric function as \nthe round metric $g_0$. 
Therefore given any open subset $U \subset\nX$ we want to compare the area of $\partial U$ with the area\nof the boundary \nof the geodesic ball in $S^{n+1}$ with the same normalized volume\nas $U$. \n\n\n\vspace{.3cm}\n\n\nGiven a closed set $W$ let $B(W,r)$ be the set of points at distance\nat most $r$ from $W$. Then one considers the {\it Minkowski content}\nof $W$, \n\n$$\mu^+ (W) = \liminf_{r \rightarrow 0^+} \frac{Vol (B(W,r) ) -Vol(W)}{r}.$$\n\n\noindent \nIf $W$ is a smooth submanifold with boundary then \n$\mu^+ (W) = Vol (\partial W)$. And this is still true if the\nboundary has singularities of codimension $\geq 2$ (and finite \ncodimension 1 Hausdorff measure).\n \nThe Riemannian measure on $(S^n ,g_0 )$, normalized to be a\nprobability measure, is what is called a {\it model measure}:\nif $D^t$, $t\in (0,1)$ is the family of geodesic balls \n(with volume $Vol(D^t )=t$) centered at some fixed point then \nthey are \nisoperimetric regions which are ordered by volume and such \nthat for any $t$, $ B(D^t ,r) =D^{t'}$ for some $t'$. \nSee \cite[Section 3.2]{Ros}. The following result follows \ndirectly from the \nLevy-Gromov isoperimetric inequality \cite[Appendix C]{Gromov}\nand \cite[Proposition 3.5]{Ros} (see the lemma in \n\cite[page 77]{Morgan3} for a more elementary proof and point of view\non \cite[Proposition 3.5]{Ros}).\n\n\n\begin{Lemma} Let $(M,g)$ be a closed Riemannian manifold\nof volume $V$ and Ricci curvature $Ricci(g) \geq (n-1) g$. \nFor any nonempty closed subset $\Omega \subset M$ and\nany $r\geq 0$ if $B_{\Omega}$ is a geodesic ball in\n$(S^n , g_0 )$ with volume $Vol(B_{\Omega})=(V_n /V)\nVol(\Omega )$ then $Vol(B(B_{\Omega} ,r)) \leq \n(V_n /V) Vol(B(\Omega ,r))$.\n\end{Lemma}\n\n\begin{proof} Given any closed Riemannian manifold $(M,g)$, \ndividing the\nRiemannian measure by the volume one obtains a probability \nmeasure which we will denote $\mu_g$. 
\nAs we said before, the round metric on the sphere\ngives a model measure $\mu_{g_0}$. On the other hand the Levy-Gromov\nisoperimetric inequality \cite{Gromov}\nsays that $I_{\mu_g} \geq I_{\mu_{g_0}}$. \nThe definition of $B_{\Omega}$ says that $\mu_g (\Omega )=\mu_{g_0}\n(B_{\Omega})$ and what we want to prove is that $\mu_g (B(\Omega ,r))\n\geq \mu_{g_0}\n(B(B_{\Omega} ,r) )$.\nTherefore the\nstatement of the lemma is precisely \cite[Proposition 3.5]{Ros}.\n\n\n\end{proof}\n\n\n\nFix a positive constant $\lambda$. Note that the previous lemma\nremains unchanged if we replace $g$ and $g_0$ by $\lambda g$\nand $\lambda g_0$: the correspondence $\Omega \rightarrow\nB_{\Omega}$ is the same and $\mu_{\lambda g} = \mu_g$.\n\n\n\n\n\n\begin{Lemma} For any $t_0 \in (0,\pi )$ \n$B((U^s )_{t_0} ,r) \subset (B(U_{t_0} ,r ))^s $.\n\end{Lemma}\n\n\begin{proof} First note that the distance from a point \n$(x,t) \in X$ to a vertex depends only on $t$ and not on $x$ \n(or even on $X$). Therefore if $r$ is greater than the\ndistance $\delta$ between $t_0$ and $0$ or $\pi$ \nthen both sets in the lemma\nwill contain a geodesic ball of radius $r-\delta$ around the \ncorresponding vertex. \n\nAlso observe that the distance between points $(x,t_0 )$ and\n$(y,t)$ depends only on the distance between $x$ and $y$\n(and $t$, $t_0$, and the function in the warped product, \nwhich in this case is $\sin$) but not on $x, y$ or $X$.\nIn particular for any $t$ so that $|t-t_0 |<r$ there exists\n$\rho >0$ such that, considered as subsets of $M$, \n\n$$(B(U_{t_0} ,r ))_t = B(U_{t_0} ,\rho )$$\n\n\noindent\nand, as subsets of $S^n$,\n\n$$(B((U^s )_{t_0} ,r) )_t =B(U^s_{t_0} ,\rho ).$$\n\nThe lemma then follows from Lemma 2.1 (and the comments after it). \n\n\end{proof}\n\n\n\nNow for any closed subset $U\subset X$ let $B_U$ be a \ngeodesic ball in $(S^{n+1} ,g_0 )$ with volume \n$Vol(B_U ,g_0 )= (V_n /V)\nVol(U,{\bf g})$. 
Since geodesic balls in round spheres are isoperimetric\n(and $Vol(B_U ,g_0 )=Vol(U^s ,g_0 )$)\nit follows that $Vol(\\partial B_U )\\leq \\mu^+ (U^s )$.\n\n\n\\begin{Lemma} Given any closed set $U\\subset X$, $\\mu^+(U) \n\\geq (V/V_n ) Vol(\\partial B_U )$.\n\\end{Lemma}\n\n\\begin{proof}\nSince $(B(U,r) )^s$ is closed \nand $B(U^s ,r)$ is the closure of\n$\\cup_{t\\in (0,\\pi )} \\ B(U_t ^s ,r)$\nwe have from the previous lemma that\n\n\n\n$$B(U^s ,r) \\subset (B(U,r ) )^s .$$ \n\n\n\n Then\n\n$$Vol(\\partial B_U )\\leq \\mu^+ (U^s ) \n=\\liminf \\frac{Vol(B(U^s ,r) ) - Vol (U^s )}{r}$$\n\n$$\\leq \\liminf \\frac{Vol((B(U ,r))^s ) - Vol (U^s )}{r}$$\n\n\n$$=(V_n /V)\\liminf \\frac{Vol(B(U,r) ) - Vol (U)}{r}\n =(V_n /V) \\mu^+ (U) $$\n\n\\noindent\nand the lemma follows.\n\n\n\\end{proof}\n\n\n\nNow if we let $B_U^M$ be a geodesic ball around a vertex in $X$ \nwith volume \n\n$$Vol(B_U^M ,{\\bf g}) = Vol(U,{\\bf g} ) = \n\\frac{V}{V_n} Vol(B_U, g_0 )$$\n\n\\noindent \nthen it follows from (1) and (2) in the beginning of the proof that \n\n$$Vol(\\partial B_U^M ,{\\bf g}) = \\frac{V}{V_n} Vol(\\partial B_U ,g_0 ).$$\n\n\\noindent\nand so by Lemma 2.3\n\n$$Vol(\\partial B_U^M ,{\\bf g}) \\leq \\mu^+ (U)$$\n\n\\noindent\nand Theorem 1.1 is proved.\n\n{\\hfill$\\Box$\\medskip}\n\n\n\n\\section{The Yamabe constant of $M\\times {\\mathbb R}$}\n\n\n\n\nNow assume that $g$ is a metric of positive Ricci curvature, \n$Ricci(g) \\geq (n-1)g$ on $M$ and consider as before the\nspherical cone $(X,{\\bf g})$ with ${\\bf g} =\\sin^2 (t) g + dt^2$.\nBy a direct \ncomputation the sectional curvature of ${\\bf g}$ is given by:\n\n$$K_{{\\bf g}} (v_i ,v_j )=\\frac{K_g (v_i ,v_j )-\\cos^2 (t)}{\\sin^2 (t)}$$\n\n$$K_{\\bf g} (v_i ,\\partial /\\partial t)=1,$$\n\n\\noindent\nfor a $g$-orthonormal basis $\\{ v_1 ,...,v_n \\}$. 
And the Ricci\ncurvature is given by:\n\n$$Ricci({\\bf g}) (v_i ,\\partial /\\partial t )=0$$\n\n$$Ricci({\\bf g}) (v_i ,v_j )= Ricci(g) (v_i ,v_j ) - (n-1)\\cos^2 (t)\\delta_i^j\n+\\sin^2 (t) \\delta_i^j$$\n\n$$Ricci({\\bf g}) (\\partial_t ,\\partial_t )=n.$$ \n\nTherefore by picking $\\{ v_1 ,...,v_n \\}$ which diagonalizes $Ricci(g)$ one\neasily sees that if $Ricci(g)\\geq (n-1)g$ then $Ricci({\\bf g})\\geq n \n{\\bf g}$. Moreover, if $g$ is an Einstein metric with Einstein \nconstant $n-1$ the ${\\bf g}$ is Einstein with Einstein constant $n$.\n\n\n\n\n\n\n\n\n\n\n\nLet us recall that for non-compact \nRiemannian manifolds one defines\nthe Yamabe constant of a metric as the infimum of the Yamabe\nfunctional of the metric\nover smooth compactly supported functions (or functions\nin $L_1^2$, of course). So for instance if $g$ is a Riemannian metric\non the closed manifold $M$ then\n\n\n$$Y(M\\times {\\mathbb R} ,[g+dt^2 ]) =\\inf_{f \\in C^{\\infty}_0 (M\\times {\\mathbb R} )}\n\\frac{\\int_{M\\times {\\mathbb R} } \\left( \\ a_{n+1} {\\| \\nabla f \\|}^2 + \n{\\bf s}_g \\ f^2 \\ \\right) \ndvol(g+dt^2)}{\n{\\| f \\|}_{p_{n+1}}^2 } .$$\n\n\n\\vspace{.2cm}\n\n\n{\\it Proof of Theorem 1.2 :}\nWe have a closed Riemannian manifold $(M^n ,g)$\nsuch that $Ricci(g) \\geq (n-1) g$. Let $f_0 (t)= \\cosh^{-2} (t)$\nand consider the diffeomorphism \n\n$$H: M \\times (0, \\pi ) \\rightarrow M \\times {\\mathbb R} $$\n\n\\noindent\ngiven by $H(x,t)=(x,h_0 (t))$, where $h_0 :(0,\\pi ) \\rightarrow {\\mathbb R} $\nis the diffeomorphism defined by $h_0 (t) =cosh^{-1} ( (\\sin\n(t))^{-1})$\non $[\\pi /2, \\pi )$ and $h_0 (t)=-h_0 (\\pi /2 -t)$ if\n$t\\in(0,\\pi /2 )$. \n\nBy a direct computation $H^* ( f_0 (g+dt^2))= \n{\\bf g}= \\sin^2 (t) g +dt^2$ on $M\\times (0,\\pi )$. 
\n\n\nTherefore by conformal invariance if we call $g_{f_0} = f_0 (g+dt^2)$\n\n$$Y(M\times {\mathbb R} , [g+dt^2 ] ) \n=\inf_{ f \in C^{\infty}_0 (M\times {\mathbb R} )}\n\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}_{g+dt^2}^2 + \n{\bf s}_g f^2 \right) \ dvol(g+dt^2)}{\n{\| f \|}_{p_{n+1}}^2 } $$\n\n\n$$=\inf_{f \in C^{\infty}_0 (M\times {\mathbb R} )}\n\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}^2_{g_{f_0}} \n+ {\bf s}_{g_{f_0}} f^2 \ \right) \ \n dvol(g_{f_0} )}{\n{\| f \|}_{p_{n+1}}^2 } $$\n\n$$=\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}\n\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g} \n+ {\bf s}_{\bf g} \nf^2 \ \right) \ dvol({\bf g})}{\n{\| f \|}_{p_{n+1}}^2 } =Y(M\times (0,\pi ),[{\bf g}]).$$\n\nNow, as we showed in the previous section, $Ricci({\bf g})\n\geq n {\bf g}$. Therefore ${\bf s}_{\bf g} \geq n(n+1)$. So we get \n\n$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq \n\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}\n\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g} \n+ n(n+1) \nf^2 \ \right) \ dvol({\bf g})}{\n{\| f \|}_{p_{n+1}}^2 }.$$\n\nTo compute the infimum one needs to consider only non-negative\nfunctions. \nNow for any non-negative function \n$f \in C^{\infty}_0 (M\times (0,\pi ) \ )$ consider its symmetrization\n$f_* :X \rightarrow {\mathbb R}_{\geq 0}$ defined by $f_* (S) =\sup f$ and\n$f_* (x,t) =s$ if and only if $Vol(B(S,t), {\bf g} )=\nVol(\{ f > s \} ,{\bf g})$ (i.e. $f_*$ is a \nnon-increasing function of $t$\nand $Vol(\{ f_* > s \})=Vol(\{ f > s \}) $ for any $s$). \nIt is immediate that the $L^q$-norms of $f_*$ and $f$ are the\nsame for any $q$. 
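The equality of the $L^q$-norms is simply equimeasurability of $f$ and $f_*$. A discrete analogue (an illustration only, not the construction used in the proof): rearranging a finite list of values in decreasing order changes neither its distribution counts nor any power sum.

```python
import random

random.seed(0)
f = [random.random() for _ in range(1000)]   # sample "function values"
f_star = sorted(f, reverse=True)             # decreasing rearrangement

# same distribution function: #{f > s} = #{f_* > s} for every level s
for s in (0.1, 0.5, 0.9):
    assert sum(v > s for v in f) == sum(v > s for v in f_star)

# hence the same L^q power sums for every q
for q in (1, 2, 3.5):
    assert abs(sum(v ** q for v in f) - sum(v ** q for v in f_star)) < 1e-9
```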
Also, by the coarea formula\n\n$$\\int \n \\| \\nabla f \\|_{\\bf g}^2 = \\int_0^{\\infty} \n\\left( \\int_{f^{-1}(t)} \\| \\nabla f \\|_{\\bf g} d\\sigma_t \\right) dt.$$\n\n\n$$ \\geq \\int_0^{\\infty} (\\mu (f^{-1} (t)))^2 \n{\\left( \\int_{f^{-1}(t)} \\| \\nabla f \\|_{\\bf g}^{-1} d\\sigma_t \n\\right)}^{-1} \\ dt$$\n\n\\noindent\nby H\\\"{o}lder's inequality, where $d\\sigma_t$ is the measure induced\nby ${\\bf g}$ on $\\{ f^{-1} (t) \\}$. But \n\n$$\\int_{f^{-1}(t)} \\| \\nabla f \\|_{\\bf g}^{-1} d\\sigma_t \n=-\\frac{d}{dt} (\\mu\\{ f>t \\})$$\n\n$$=-\\frac{d}{dt} (\\mu\\{ f_* >t \\}) = \n\\int_{f_*^{-1}(t)} \\| \\nabla f_* \\|_{\\bf g}^{-1} d\\sigma_t $$\n\n\\noindent \nand since $f^{-1} (t) =\\partial \\{ f>t \\}$ by Theorem 1.1 \nwe have $\\mu (f^{-1} (t))\\geq \\mu (f_*^{-1} (t))$. Therefore\n\n\n\n$$ \\int_0^{\\infty} (\\mu (f^{-1} (t)))^2 \n{\\left( \\int_{f^{-1}(t)} \\| \\nabla f \\|_{\\bf g}^{-1} d\\sigma_t \n\\right)}^{-1} \\ dt$$ \n\n\n$$ \\geq \\int_0^{\\infty} (\\mu (f_*^{-1} (t)))^2 \n{\\left( \\int_{f_*^{-1}(t)} \\| \\nabla f_* \\|_{\\bf g}^{-1} d\\sigma_t \n\\right)}^{-1} \\ dt $$\n\n\\noindent\n(and since $\\| \\nabla f_* \\|_{\\bf g}$ is constant along \n$f_*^{-1}(t)$ )\n\n\n$$=\\int_0^{\\infty}\\mu (f_*^{-1} (t)) \\| \\nabla f_* \\|_{\\bf g} \\ dt$$\n\n$$= \\int_0^{\\infty} \n\\left( \\int_{f_* ^{-1}(t)} \\| \\nabla f_* \\|_{\\bf g} d\\sigma_t \\right)\ndt =\\int \n \\| \\nabla f_* \\|_{\\bf g}^2 .$$\n\nConsidering $S^{n+1}$ as the spherical cone over $S^n$ we have \nthe function $f^0_* : S^{n+1} \\rightarrow {\\mathbb R}_{\\geq 0}$ which\ncorresponds to $f_*$. 
\n\nThen for all $s$ \n\n$$Vol (\{ f_*^0 >s \} ) = \n\left( \frac{V_n}{V} \right) \ Vol( \{ f_* >s \} ),$$\n\n\noindent\nand so for any $q$,\n\n$$\int (f^0_*)^q dvol(g_0 ) = \left( \frac{V_n}{V} \right) \n\int (f_* )^q dvol({\bf g}).$$\n\nAlso for any $s\in (0,\pi )$\n\n$$\mu ( (f_*^0 )^{-1} (s)) = \frac{V_n}{V} \mu (f_*^{-1} (s)),$$\n\n\noindent\nand since ${\| \nabla f_*^0 \|}_{g_0} = {\| \nabla f_* \| }_{\bf g}$\nwe have\n\n$$ \int \n \| \nabla f^0_* \|_{g_0}^2 = \frac{V_n}{V} \int \n \| \nabla f_* \|_{\bf g}^2 .$$\n\nWe obtain \n\n\n\n$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq \n\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}\n\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f \|}^2_{\bf g} \n+ n(n+1) \nf^2 \ dvol({\bf g})}{\n{\| f \|}_{p_{n+1}} ^2 }$$\n\n$$\geq \inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}\n\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f_* \|}^2_{\bf g} \n+ n(n+1) \nf_*^2 \ dvol({\bf g})}{\n{\| f_* \|}_{p_{n+1}}^2 }$$\n\n$$={\left( \frac{V}{V_n} \right)}^{1-(2/p_{n+1})}\n\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}\n\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f^0_* \|}^2_{g_0} \n+ n(n+1) \n{f^0_*}^2 dvol({g_0})}{\n{\| f^0_* \|}_{p_{n+1}}^2 }$$\n\n$$ \geq \n{\left( \frac{V}{V_n} \right)}^{2/(n+1)} \ Y_{n+1} .$$\n\nThis finishes the proof of Theorem 1.2.\n\n{\hfill$\Box$\medskip}\n\n\n{\it Proof of Corollary 1.3 :} Note that if\n${\bf s}_g$ is constant, then $Y_{{\mathbb R}} (M \times {\mathbb R} , g +\ndt^2)$\nonly depends on ${\bf s}_g$ and $V=Vol(M,g)$. Actually,\n\n$$Y_{{\mathbb R}} (M\times {\mathbb R} ,g +dt^2 )=\n\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f\n \|}^2_{dt^2} V\n+ {\bf s}_g V f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p} \ V^{2/p}}$$\n\n$$=V^{1-(2/p)}\n\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f\n \|}^2_{dt^2} 
\n+ {\\bf s}_g f^2 \\ dt^2}{(\\int_{{\\mathbb R}} f^p )^{2/p}}.$$\n\n\nBut as we said \n\n$$\\inf_{f\\in C_0^{\\infty} ( {\\mathbb R} )} \\frac{\\int_{{\\mathbb R}} \\ a_{n+1} {\\|\\nabla f\n \\|}^2_{dt^2} \n+ {\\bf s}_g f^2 \\ dt^2}{(\\int_{{\\mathbb R}} f^p )^{2/p}}$$\n\n\\noindent\nis independent of $(M,g)$ and it is known to be equal to \n$Y_{n+1} V_n^{-2/(n+1)}$. Corollary 1.3 then follows \ndirectly from Theorem 1.2.\n\n{\\hfill$\\Box$\\medskip}\n\n\n", "meta": {"timestamp": "2007-11-09T20:47:52", "yymm": "0710", "arxiv_id": "0710.2536", "language": "en", "url": "https://arxiv.org/abs/0710.2536"}} {"text": "\\section{Introduction}\nThe averaged quantities can be obtained in two different ways in\nmagnetohydrodynamics. The first way is to solve 3D MHD equations\nand then average the results. The second way is to solve some\nsystem of equations on averages. Combination of numerical\nsimulations and averaged theory brings phenomenology that can\ndescribe observations or experimental data.\n\nThe problem of spherically symmetric accretion takes its origin\nfrom Bondi's work \\citep{bondi}. He presented idealized\nhydrodynamic solution with accretion rate $\\dot{M}_B.$ However,\nmagnetic field $\\vec{B}$ always exists in the real systems. Even\nsmall seed $\\vec{B}$ amplifies in spherical infall and becomes\ndynamically important \\citep{schwa}.\n\nMagnetic field inhibits accretion \\citep{schwa}. None of many\ntheories has reasonably calculated the magnetic field evolution\nand how it influences dynamics. These theories have some common\npitfalls. First of all, the direction of magnetic field is usually\ndefined. Secondly, the magnetic field strength is prescribed by\nthermal equipartition assumption. In third, dynamical effect of\nmagnetic field is calculated with conventional magnetic energy and\npressure. 
All these inaccuracies can be eliminated.\n\nIn Section~\ref{section_method} I develop a model that abandons\nthe equipartition prescription, calculates the magnetic field\ndirection and strength, and employs the correct equations of\nmagnetized fluid dynamics. In Section~\ref{results} I show this\naccretion pattern to be in qualitative agreement with Sgr A*\nspectrum models. I discuss my assumptions in Section~\ref{discussion}.\n\n\section{Analytical method}\label{section_method}\n A reasonable turbulence evolution model is the key difference of my\n method. I build an averaged turbulence theory that corresponds to\nnumerical simulations. I start with the model of isotropic\nturbulence that is consistent with simulations of collisional MHD\nin three regimes. Those regimes are decaying hydrodynamic\nturbulence, decaying MHD turbulence and dynamo action. I introduce\neffective isotropization of the magnetic field in the 3D model.\nIsotropization is taken to have a timescale of the order of the\ndissipation timescale, which is a fraction $\gamma\sim1$ of the\nAlfven wave crossing time: $\tau_{\rm diss}=\gamma r/v_A.$\n\nA common misconception exists about the dynamical influence of the\nmagnetic field. Neither magnetic energy nor magnetic pressure can\nrepresent $\vec{B}$ in dynamics. Correct averaged Euler and energy\nequations were derived in \citet{scharlemann} for a radial magnetic\nfield. The magnetic force $\vec{F}_M=[\vec{j}\times\vec{B}]$ can be\naveraged over the solid angle with a proper combination of\n$\vec{\nabla}\cdot\vec{B}=0.$ I extend the derivation to a random\nmagnetic field without a preferred direction. The dynamical effect of\nmagnetic helicity \citep{biskamp03} is also investigated. I\nneglect radiative and mechanical transport processes.\n\nThe derived set of equations requires some modifications and\nboundary conditions to be applicable to real astrophysical\nsystems. 
I add external energy input to turbulence to balance\ndissipative processes in the outer flow. The outer turbulence is\ntaken to be isotropic and has magnetization $\sigma\sim1.$\nThe smooth transonic solution is chosen as the one possessing the highest\naccretion rate, as in \citet{bondi}.\n\n\begin{figure}\n \includegraphics[height=.5\textheight]{velocities}\n \caption{Characteristic velocities of the magnetized flow, normalized to the Keplerian speed. Horizontal lines correspond to the self-similar solution $v\sim r^{-1/2}.$}\label{fig1}\n\end{figure}\n\n\section{Results \& Application to Sgr A*}\label{results}\n\n\begin{figure}\n \includegraphics[height=.5\textheight]{magnetization}\n \caption{Magnetization $\sigma=(E_M+E_K)/E_{Th}$ as a function of radius.}\label{fig2}\n\end{figure}\nThe results of my calculations confirm some known facts about\nspherical magnetized accretion, agree with the results of\nnumerical simulations, and have some previously unidentified\nfeatures.\n\nThe initially isotropic magnetic field exhibits strong anisotropy, with\na larger radial field $B_r.$ The perpendicular magnetic field\n$B_\perp\ll B_r$ is dynamically unimportant in the inner accretion\nregion (Fig.~\ref{fig1}). Because the magnetic field dissipates, infall\nonto the black hole can proceed \citep{schwa}.\n\nTurbulence is supported by external driving in the outer flow\nregions, but internal driving due to freezing-in amplification\ntakes over in the inner flow (Fig.~\ref{fig2}). The magnetization of the\nflow increases in the inner region with decreasing radius,\nconsistently with simulations \citep{igumen06}. The density profile\nappears to be $\rho\sim r^{-1.25},$ which is different from the\ntraditional ADAF scaling $\rho\sim r^{-1.5}$ \citep{narayan}. Thus\nthe idea of self-similar behavior is not supported.\n\nCompared to non-magnetized accretion, the infall rate is 2-5 times\nsmaller, depending on the outer magnetization. 
In turn, the gas density is\n2-5 times smaller in the region close to the black hole, where\nthe synchrotron radiation emerges \citep{narayan}. Sgr A* produces\nrelatively weak synchrotron emission \citep{narayan}. So either the gas\ndensity $n$, the electron temperature $T_e$, or the magnetic field $B$\nis small in the inner flow, or a combination of these factors works. Thus\nthe low gas density in the magnetized model is in qualitative agreement\nwith the results of modelling the spectrum.\n\nThe flow is convectively stable on average in the model of moving\nblobs, where the dissipation heat is released homogeneously in volume.\nThe moving blobs are in radial and perpendicular pressure\nequilibria. They are governed by the same equations as the\nmedium.\n\n\section{Discussion \& Conclusion}\label{discussion}\nThe presented accretion study self-consistently treats turbulence\nin the averaged model. This model introduces many weak assumptions\ninstead of a few strong ones.\n\nI take the dissipation rate to be that of collisional MHD simulations.\nBut the flow in question is rather in the collisionless regime.\nObservations of collisionless flares in the solar corona\n\citep{noglik} give a dissipation rate $20$ times smaller than in\ncollisional simulations \citep{biskamp03}. However, flares in the\nsolar corona may represent a large-scale reconnection event rather\nthan developed turbulence. It is unclear which dissipation rate is\nmore realistic for accretion.\n\nThe magnetic field presents another caveat. 
If\nmatter already slips at the sonic point, the accretion rate should\nbe higher than calculated.\n\nSome other assumptions are more likely to be valid. Diffusion\nshould be weak because of the high Mach number, which approaches unity\nat large radius. Magnetic helicity was found to play a very small\ndynamical role. Only when the initial turbulence is highly\nhelical may magnetic helicity conservation lead to a smaller\naccretion rate. The neglect of radiative cooling is justified a\nposteriori. The line cooling time is about $20$ times larger than the\ninflow time from the outer boundary.\n\nThis study is an extension of the basic theory, but realistic\nanalytical models should include more physics. The work is\nunderway.\n\begin{theacknowledgments}\nI thank my advisor Prof. Ramesh Narayan for fruitful discussions.\n\end{theacknowledgments}\n\n\bibliographystyle{aipproc}\n\n", "meta": {"timestamp": "2007-10-12T22:05:43", "yymm": "0710", "arxiv_id": "0710.2543", "language": "en", "url": "https://arxiv.org/abs/0710.2543"}} {"text": "\section{Introduction}\n\nLocated at about 1$'$ to the NW of the Orion Trapezium, the\nBN/KL region has been, as\nthe closest region of massive star formation, the subject of extensive studies.\nRecently, Rodr\'\i guez et al. (2005) and G\'omez et al. (2005)\nreported large proper motions (equivalent to velocities of the order of\na few tens of km s$^{-1}$) for the radio sources associated with the infrared sources\nBN and n, as well as for the radio source I. All three objects\nare located at the core of the BN/KL region and appear \nto be moving away from a common point where they must all have been \nlocated about 500 years ago.\nEven though these proper motions are now available, there is no\nradial velocity information for these three sources, with the\nexception of the near-infrared spectroscopic study of BN\nmade by Scoville et al.
(1983), which reported an LSR radial\nvelocity of +21 km s$^{-1}$ for this source.\nIn this paper we present 7 mm continuum and H53$\alpha$\nradio recombination line observations of the BN/KL region in an\nattempt to obtain additional information on the radial velocities of\nthese sources.\n\n\section{Observations}\n\n\nThe 7 mm observations were made in the B configuration\nof the VLA of the NRAO\footnote{The National Radio \nAstronomy Observatory is operated by Associated Universities \nInc. under cooperative agreement with the National Science Foundation.},\non 2007 December 14. The central rest frequency observed was\nthat of the H53$\alpha$ line, 42951.97 MHz,\nand we integrated on-source for a total of\napproximately 3 hours. We observed in the spectral line\nmode, with 15 channels of 1.56 MHz each (10.9 km s$^{-1}$)\nand both circular polarizations. The bandpass calibrator was\n0319+415. A continuum channel recorded the\ncentral 75\% of the full spectral window. The absolute amplitude\ncalibrator was 1331+305 \n(with an adopted flux density of 1.47 Jy)\nand the phase calibrator was 0541$-$056 (with a bootstrapped flux density\nof 1.78$\pm$0.08 Jy). The phase noise rms was about 30$^\circ$,\nindicating good weather conditions. The phase center of these observations was at\n$\alpha(2000) = 05^h~35^m~14\rlap.^s13;~\delta(2000) = -05^\circ~22{'}~26\rlap.^{''}6$.\n\nThe data were acquired and reduced using the recommended VLA procedures\nfor high frequency data, including the fast-switching mode with a\ncycle of 120 seconds. \nClean maps were\nobtained using the task IMAGR of AIPS with the ROBUST parameter set to 0.\n\n\n\n\n\n\n\section{Continuum Analysis}\n\n\subsection{Spectral Indices}\n\nIn Figure 1 we show the image obtained from the continuum channel.\nThree sources, BN, I and n, are evident in the image. No other sources\nwere detected above a 5-$\sigma$ lower limit of 1.75 mJy in our $1'$\nfield of view.
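As a quick consistency check, the H53$\alpha$ rest frequency adopted above follows from the hydrogen Rydberg formula, $\nu = cR_{\rm H}\,[1/n^{2} - 1/(n+1)^{2}]$ with $n = 53$; a minimal sketch (the physical constants are standard CODATA values, not taken from this paper):

```python
# Cross-check of the H53-alpha rest frequency (42951.97 MHz) from the
# Rydberg formula; constants are CODATA values, not from the paper.
c_Rinf = 3.2898419603e15          # Rydberg frequency c*R_infinity [Hz]
me_over_mp = 1.0 / 1836.15267343  # electron-to-proton mass ratio

c_RH = c_Rinf / (1.0 + me_over_mp)  # reduced-mass correction for hydrogen
n = 53
nu_Hz = c_RH * (1.0 / n**2 - 1.0 / (n + 1)**2)
nu_MHz = nu_Hz / 1e6
print(f"H53alpha rest frequency: {nu_MHz:.2f} MHz")  # close to 42951.97 MHz
```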
The positions, flux\ndensities, and deconvolved angular sizes of these sources are\ngiven in Table 1. The continuum flux density of the sources\nhas been obtained from the line-free channels.\nThe line emission will be discussed below.\nThe flux density we obtain at 7 mm \nfor BN is in good agreement with the values previously reported in\nthe literature: \nwe obtain a flux density of 28.6$\pm$0.6 mJy, while\nvalues of 31$\pm$5 and 28.0$\pm$0.6 mJy were obtained by Menten \& Reid (1995)\nand Chandler \& Wood (1997), respectively. \nIn the case of source I, the agreement is acceptable,\nsince we obtain a flux density of 14.5$\pm$0.7 mJy,\nwhile values of \n13$\pm$2 and 10.8$\pm$0.6 mJy were reported by Menten \& Reid (1995)\nand Chandler \& Wood (1997), respectively. \nCareful monitoring would be required\nto test if the radio continuum from source I is variable in time. \n\nThe spectral indices determined from our 7 mm observations and the\n3.6 cm observations of G\'omez et al. (2008) are given in the last column of Table 2.\nOur spectral indices for BN and\nI are in excellent agreement in this spectral range with the more detailed analysis \npresented by Plambeck et al. (1995) and Beuther et al. (2004).\n\nWe have detected source n for the first time\nat 7 mm and this detection allows the first estimate of the spectral index of this source\nover a wide frequency range.\nThe value of 0.2$\pm$0.1 suggests marginally thick free-free emission, as expected in\nan ionized outflow. This supports the interpretation of this source\nas an ionized outflow by G\'omez et al. (2008).\nThe position given by us in Table 1 is consistent with the\nextrapolation of the proper motions of this source discussed by G\'omez et al. (2008).
(2004) suggest that this spectral index is either the result of \noptically thick free-free plus dust emission, or $H^-$ free-free emission \nthat gives rise to a power-law spectrum with an index of $\\sim$1.6. \n\nIn the case of the radio source associated with the infrared source n\nwe only have an upper limit to its size at 7 mm. In addition,\nG\\'omez et al. (2008) report important morphological variations\nover time in this source\nthat suggest that comparisons at different frequencies should be made\nonly from simultaneous observations.\n\nIn the case of BN, \nthe frequency dependences of flux density and angular size (this last\nparameter taken to\nbe the geometric mean of the major and minor axes reported in Tables 1 and 2) can be accounted for with\na simple model of a sphere of ionized gas in which \nthe electron density\ndecreases as a power-law function of radius, $n_e \\propto r^{-\\alpha}$. \nIn this case, the flux density of the source is expected to go with\nfrequency as $S_\\nu \\propto \\nu^{(6.2-4\\alpha)/(1-2\\alpha)}$ and the angular size is expected to go with\nfrequency as $\\theta_\\nu \\propto \\nu^{2.1/(1-2\\alpha)}$ (Reynolds 1986).\nThe frequency dependences of flux density ($S_\\nu \\propto \\nu^{1.1\\pm0.1}$) and angular \nsize ($\\theta_\\nu \\propto \\nu^{-0.36\\pm0.12}$) for\nBN are consistent with a steeply declining electron density\ndistribution \nwith power law index of\n$\\alpha = 3.0\\pm0.3$. The continuum spectrum of BN produced \nby Plambeck et al. (1995) indicates that a constant\nspectral index extends from 5 to 100 GHz.\n\n\\section{Analysis of the H53$\\alpha$ Recombination Line Emission}\n\n\\subsection{Radial LSR Velocity}\n\nWe clearly detected the H53$\\alpha$ line emission only from BN.\nThe spectrum is shown in Figure 2. 
The parameters of\nthe Gaussian least squares fit to the profile are given in Table 3.\nWe note that the radial LSR velocity determined by us, $+20.1\pm2.1$\nkm s$^{-1}$, agrees well with the value of $+21$ km s$^{-1}$\nreported by Scoville et al. (1983) from near-IR spectroscopy.\nIn a single-dish study of the H41$\alpha$ line made with an\nangular resolution of 24$''$ toward\nOrion IRc2, Jaffe \& Mart\'\i n-Pintado (1999) report emission\nwith $v_{LSR}$ = -3.6 km s$^{-1}$. \nMost likely, this is emission from the ambient H~II region, since\nits radial velocity practically coincides with the\nvalue determined for the large H~II region (Orion A) ionized by\nthe Trapezium stars (e.g., Peimbert et al. 1988).\nThe single-dish observations of the H51$\alpha$ emission\nof Hasegawa \& Akabane (1984), made with an angular resolution of 33$''$, \nmost probably also come from the ambient ionized gas and not\nfrom BN.\n\n\subsection{LTE Interpretation}\n\nIf we assume that the line emission is optically thin and in LTE,\nthe electron temperature, $T_e^*$, is given by \n(Mezger \& H\"oglund 1967; Gordon 1969; Quireza et al. 2006): \n\n\begin{equation}\Biggl[{{T_e^*} \over {K}}\Biggr] = \Biggl[7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}\n\biggl({{S_C} \over {S_L}} \biggr) \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}\n(1 + y^+)^{-1} \Biggr]^{0.87}, \end{equation}\n\n\noindent where $\nu_L$ is the line frequency, $S_C$ is the continuum flux density,\n$S_L$ is the peak line flux density, $\Delta v$ is the FWHM line width, and\n$y^+$ is the ionized helium to ionized hydrogen abundance ratio.\nIn the case of BN, we can adopt $y^+ \simeq 0$ given that the\nsource is not of very high luminosity, and using the values given in Tables 1 and 3,\nwe obtain $T_e^* \simeq 8,200$ K. This value is similar to that\ndetermined for the nearby Orion A from radio recombination lines (e.g., Lichten, Rodr\'\i guez, \&\nChaisson 1979).
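The equation above can also be inverted to show what integrated line flux the quoted numbers imply; an illustrative sketch, assuming $y^{+} = 0$ and using only the values stated in the text (Table 3 itself is not reproduced in this excerpt):

```python
# Invert the LTE electron-temperature relation for the product
# S_L * Delta v implied by T_e* ~ 8200 K, S_C = 28.6 mJy and
# nu_L = 42.95197 GHz, with y+ = 0 (values quoted in the text).
Te = 8200.0      # K, LTE electron temperature derived for BN
Sc = 28.6        # mJy, 7 mm continuum flux density of BN
nu = 42.95197    # GHz, H53alpha rest frequency
# Te = [7100 nu^1.1 (Sc/SL) dv^-1]^0.87  =>  SL*dv = 7100 nu^1.1 Sc / Te^(1/0.87)
SL_dv = 7100.0 * nu**1.1 * Sc / Te**(1.0 / 0.87)   # mJy km/s
print(f"implied S_L * Delta v ~ {SL_dv:.0f} mJy km/s")
```

For a line width of a few tens of km s$^{-1}$ this corresponds to a peak line flux density of order 10 mJy.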
\n\nIt is somewhat\nsurprising that we get a very reasonable estimate for $T_e^*$ when our previous discussion\nseemed to imply that BN is partially optically thick at 7 mm.\nOne possibility is that we have two effects fortuitously canceling each other. For example, the\noptical thickness of the source will diminish the \nline emission, while maser effects (such as those observed\nin MWC 349; Mart\'\i n-Pintado et al. 1989) will amplify the line.\nHowever, in an attempt to understand this result in LTE conditions, we will discuss the expected\nLTE radio recombination line emission from \na sphere of ionized gas in which the electron density\ndecreases as a power-law function of radius, $n_e \propto r^{-\alpha}$. \nAs noted before, the modeling of the continuum emission from such a source \nwas presented in detail by Panagia \& Felli (1975) and Reynolds (1986). The radio recombination line emission\nfor the case $\alpha = 2$ has been discussed by Altenhoff, Strittmatter, \&\nWendker (1981) and Rodr\'\i guez (1982).\nHere we generalize the derivation of the recombination line emission \nto the case of $\alpha > 1.5$. This lower limit is\nadopted to prevent the total emission from the source from diverging. \n\nFor a sphere of ionized gas, the free-free continuum emission will be given by\n(Panagia \& Felli 1975):\n\n\begin{equation}S_C = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty \n\biggl(1 - exp[-\tau_C(\xi)]\biggr)~ \xi~ d\xi, \end{equation}\n\n\noindent where $r_0$ is a reference radius, $d$ is the distance to the source,\n$B_\nu$ is Planck's function, $\xi$ is the projected radius in units of $r_0$,\nand $\tau_C(\xi)$ is the continuum optical depth along the line of sight with\nprojected radius $\xi$. On the other hand, the free-free continuum plus \nradio recombination line emission will be given by an equation similar to eqn.
(2), but with the\ncontinuum opacity substituted by the continuum plus line opacity (Rodr\'\i guez 1982):\n\n\begin{equation}S_{L+C} = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty \biggl(1 - exp[-\tau_{L+C}(\xi)]\n\biggr) \xi d\xi, \end{equation}\n\n\noindent where $\tau_{L+C}(\xi)$ is the line plus continuum optical depth along the line of sight with\nprojected radius $\xi$.\n\nThe line-to-continuum ratio will be given by:\n\n\begin{equation}{{S_L} \over {S_C}} = {{S_{L+C} - S_C} \over {S_C}}. \end{equation}\n\nThe opacity of these emission processes depends on projected radius as (Panagia \& Felli 1975):\n\n\begin{equation}\tau(\xi) \propto \xi^{-(2 \alpha -1)}. \end{equation}\n\nWe now introduce the definite integral (Gradshteyn \& Ryzhik 1994)\n\n\begin{equation}\int_0^\infty [1- exp(-\mu x^{-p})]~x~ dx = \n- {{1} \over {p}}~ \mu^{{2} \over{p}}~ \Gamma(-{{2} \over{p}}), \end{equation}\n\n\noindent valid for $\mu > 0$ and $p > 2$ and with $\Gamma$ being the Gamma function.\nSubstituting eqns. (2) and (3) in eqn. (4), and using the integral\ndefined in eqn. (6), it can be shown that\n\n\begin{equation}{{S_L} \over {S_C}} = \Biggl[{{\kappa_L + \kappa_C} \n\over {\kappa_C}} \Biggr]^{1/(\alpha -0.5)} - 1, \end{equation}\n\n\noindent where $\kappa_L$ and $\kappa_C$ are the line and continuum absorption coefficients\nat the frequency of observation, respectively.\nIn this last step we have also\nassumed that the opacities of the line and continuum processes are proportional to\nthe line and continuum absorption coefficients, respectively, that is, that the\nphysical depths producing the line and continuum emissions are the\nsame. Under the LTE assumption, we have\nthat\n\n\begin{equation}{{\kappa_L} \over {\kappa_C}} = 7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}\n\biggl({{T_e^*} \over {K}} \biggr)^{-1.1} \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}\n(1 + y^+)^{-1}.
\end{equation}\n\nFor $\nu \leq$ 43 GHz and typical parameters of an H~II region, we\ncan see from eqn. (8) that $\kappa_L<\kappa_C$, and\neqn. (7) can be approximated by:\n\n\begin{equation}{{S_L} \over {S_C}} \simeq {{1} \over \n{(\alpha -0.5)}} \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr]. \end{equation}\n\nThat is, the expected optically-thin, LTE line-to-continuum ratio:\n\n\begin{equation}{{S_L} \over {S_C}} \simeq \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr], \end{equation}\n\n\noindent becomes attenuated by a factor $1/(\alpha -0.5)$. In the case of $\alpha = 2$,\nthe factor is 2/3, and we reproduce the result of Altenhoff, Strittmatter, \&\nWendker (1981) and Rodr\'\i guez (1982). In the case of BN, we have that $\alpha \simeq 3$, and\nwe expect the attenuation factor to be 2/5. If BN can be modeled this way, we would have expected\nto derive electron temperatures under the LTE assumption (see eqn. 1) of order \n\n\begin{equation}T_e^*(\alpha = 3) \simeq 2.2~ T_e^*(thin). \end{equation}\n\nHowever, from the discussion in the first paragraph of this section, we determine\nobservationally that \n\n\begin{equation}T_e^*(\alpha = 3) \simeq T_e^*(thin). \end{equation}\n\nSummarizing: i) BN seems to have significant optical depth in the continuum at\n7 mm, ii) this significant optical depth should attenuate the observed recombination\nline emission with respect to the optically-thin case, but iii) the line emission seems\nto be as strong as in the optically-thin case.
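The numerical factors used in this section for the power-law sphere model can be verified in a few lines; a minimal sketch of the Reynolds (1986) scalings and of the $1/(\alpha - 0.5)$ attenuation factor:

```python
# Consistency checks of the power-law sphere model, n_e ~ r^-alpha.
def flux_index(a):   # S_nu ~ nu^((6.2 - 4a)/(1 - 2a)), Reynolds (1986)
    return (6.2 - 4*a) / (1 - 2*a)

def size_index(a):   # theta_nu ~ nu^(2.1/(1 - 2a))
    return 2.1 / (1 - 2*a)

def attenuation(a):  # factor multiplying the thin line-to-continuum ratio
    return 1.0 / (a - 0.5)

# alpha = 3 (BN): flux index 1.16 and size index -0.42, matching the
# measured 1.1 +/- 0.1 and -0.36 +/- 0.12 within the uncertainties.
print(flux_index(3.0), size_index(3.0))
# Attenuation factors: 2/3 (alpha = 2), 2/5 (alpha = 3), 2/9 (alpha = 5).
print(attenuation(2.0), attenuation(3.0), attenuation(5.0))
# With T_e* ~ (S_C/S_L)^0.87, a 2/5 attenuation of the line raises the
# inferred LTE temperature by (5/2)^0.87 ~ 2.2, the factor quoted above.
print((1.0 / attenuation(3.0))**0.87)
```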
\n\nAs possible explanations for the ``normal'' (apparently optically-thin and in LTE)\nradio recombination line emission\nobserved from BN we can think of two options.\nThe first is that, as noted before, there is a non-LTE line-amplifying\nmechanism that approximately compensates for the optical depth attenuation.\nThe second possibility is that the free-free emission from BN at 7 mm is already optically thin.\nHowever, this last possibility seems to be in contradiction with the results\nof Plambeck et al. (1995) that suggest a single spectral index \nfrom 5 to 100 GHz. Observations of radio recombination lines around\n100 GHz are needed to solve this problem.\n\nA comparison with the H53$\\alpha$ emission from the hypercompact H~II\nregion G28.20-0.04N is also of interest. \nThe continuum flux densities from this source at \n21, 6, 3.6, and 2 cm are 49, 135, 297, and 543 mJy, respectively\n(Sewilo et al. 2004). At 7 mm the continuum flux density is 641 mJy\n(Sewilo et al. 2008), indicating\nthat the source has become optically thin at this wavelength.\nUsing the H53$\\alpha$ line parameters given by (Sewilo et al. 2008)\nwe derive an LTE electron temperature of $T_e^* \\simeq 7,600$ K, \nsimilar to the value for BN and in this case consistent with\nthe optically-thin nature of G28.20-0.04N. \n\nThe non detection of H53$\\alpha$ emission from radio source I is consistent\nwith its expected large optical depth. The formulation above implies $\\alpha \\simeq 5$, and an\nattenuation factor of 2/9. \nThis confirms the notion that BN and radio source I are two sources\nintrinsically very different in nature.\nThis difference is also evident in the brightness temperature of both sources.\nAt 7 mm, the brightness temperature of a source is\n\n\\begin{equation}\\Biggl[{{T_B} \\over {K}} \\Biggr] \\simeq 0.96 \\Biggl[{{S_\\nu} \\over {mJy}} \n\\Biggr] \\Biggl[{{\\theta_{maj} \\times\n\\theta_{min}} \\over {arcsec^2}} \\Biggr]^{-2}. 
\end{equation}\n\nUsing the values of Table 1, we get $T_B \simeq$ 7,800 K for BN, confirming\nits nature as photoionized gas. However, for the radio source I we get\n$T_B \simeq$ 2,600 K. So, even though source I seems to be optically thick, its\nbrightness temperature is substantially lower than that expected for\na photoionized region. Reid et al. (2007) have discussed as possible\nexplanations for this low brightness temperature $H^-$ free-free opacity or \na photoionized disk.\n\nFollowing the discussion of Reid et al. (2007), we consider\nit unlikely that dust emission could be a dominant contributor to the 7 mm emission of BN or\nOrion I. A dense, warm, dusty disk would be expected to show many molecular lines at\nmillimeter/submillimeter wavelengths. While Beuther et al. (2006) and Friedel\n\& Snyder (2008) find numerous, strong,\nmolecular lines toward the nearby ``hot core'', they find no strong lines toward the position of\nOrion I (with the exception of\nthe strong SiO masers slightly offset from Orion I) or BN.\nAlso, the brightness temperatures derived by us at 7 mm (7,800 K for BN and\n2,600 K for source I) are \nhigh enough to sublimate dust and suggest that free-free emission from\nionized gas dominates the continuum emission.\nFinally, the continuum spectra of BN and of source I measured by Plambeck et al. (1995)\nand Beuther et al. (2006), respectively, suggest that the dust\nemission becomes dominant only above $\sim$300 GHz.\n\n\nIn the case of source n, no detection was expected given its\nweakness even in the continuum.\n\n\subsection{Spatial Distribution of the H53$\alpha$ Line Emission}\n\nThe H53$\alpha$ line emission in the individual velocity\nchannels shows evidence of structure but unfortunately the signal-to-noise\nratio is not large enough to reach reliable conclusions from the\nanalysis of these individual channels.
However, an image\nwith good signal-to-noise ratio can be obtained by averaging over the velocity\nrange of -21.2 to +66.1 km s$^{-1}$, using the task MOMNT in\nAIPS. This line image is compared\nin Figure 3 with a continuum image\nmade from the line-free channels.\nThe larger apparent size of the continuum image is simply the\nresult of its much better signal-to-noise ratio.\nFor the total line emission we obtain an upper limit of\n$0\rlap.{''}12$ for its size, which is consistent with the\nsize of the continuum emission given in Table 1.\nWe also show images of the blueshifted (-21.2 to +22.5 km s$^{-1}$)\nand redshifted (+22.5 to +66.1 km s$^{-1}$) line emission in Figure 3.\nThe cross in the figure indicates the centroid of the total line\nemission. The centroid of the line emission does not appear to\ncoincide with the centroid of the continuum emission and\nwe attribute this to opacity effects.\n\nAn interesting conclusion comes from comparing the total\nline emission with the blueshifted and redshifted components.\nThe blueshifted emission seems slightly shifted to the SW, while the\nredshifted emission seems slightly shifted to the NE, suggesting a\nvelocity gradient. This result supports the suggestion of\nJiang et al. (2005) of the presence of an outflow in BN along a\nposition angle of 36$^\circ$. Given the modest signal-to-noise ratio\nof the data, it is difficult to estimate the magnitude\nof the velocity shift and we crudely assume it is of order one\nchannel ($\sim$10 km s$^{-1}$), since most of the line\nemission is concentrated in the central two channels\nof the spectrum (see Figure 2). The position shift between the blueshifted and\nthe redshifted emissions is $0\rlap.{''}028 \pm 0\rlap.{''}007$\n($12 \pm 3$ AU at the distance of 414 pc given by Menten et al. 2007), significant at the \n4-$\sigma$ level. Unfortunately, the data of Jiang et al.
(2005) do not\ninclude line observations and there is no kinematic information in their paper to\ncompare with our results.\n\nThe small velocity gradient observed by us in BN is consistent with a\nslow bipolar outflow but also with Keplerian rotation around a central mass\nof only 0.2 $M_\odot$. \n \n\section{Conclusions}\n\nWe presented observations of the H53$\alpha$ recombination line\nand adjacent continuum toward the Orion BN/KL region.\nIn the continuum we detect the BN object, the radio source \nI (GMR I) and the radio counterpart of the infrared source n \n(Orion-n) and discuss their parameters. \nIn the H53$\alpha$ line we only detect the BN object,\nthe first time that radio recombination lines have been detected from this source.\nThe LSR radial velocity of BN from the H53$\alpha$ line, $v_{LSR} = 20.1 \pm 2.1$\nkm s$^{-1}$,\nis consistent with that found from previous studies in near-infrared lines,\n$v_{LSR} = 21$ km s$^{-1}$.\nWe discuss the line-to-continuum ratio from BN and present evidence\nfor a possible velocity gradient across this source. \n\n\n\n\n\n\n\n\n\acknowledgments\n\nLFR and LAZ acknowledge the support\nof CONACyT, M\'exico and DGAPA, UNAM.\n\n\n\n{\it Facilities:} \facility{VLA}\n\n\n\n\n\n\n", "meta": {"timestamp": "2008-10-28T16:34:26", "yymm": "0810", "arxiv_id": "0810.5055", "language": "en", "url": "https://arxiv.org/abs/0810.5055"}} {"text": "\section{Introduction}\n\nThe study of magnetic models has \ngenerated considerable progress in the understanding \nof magnetic materials, \nand lately it has extended beyond the frontiers of magnetism,\nbeing considered in many areas of knowledge.
\nCertainly, the Ising model represents one of the most \nstudied and important models of magnetism and\nstatistical mechanics~\cite{huang,reichl}, \nand it has also been employed to typify a wide variety of \nphysical systems, like lattice gases, binary alloys, and \nproteins (with a particular interest in the problem of protein \nfolding). \nAlthough real magnetic systems should be properly \ndescribed by means of Heisenberg spins (i.e.,\nthree-dimensional variables), many materials are\ncharacterized by anisotropy fields that make these \nspins prefer given directions in space, explaining \nwhy simple models, characterized by \nbinary variables, became so important \nfor the area of magnetism. \nParticularly, models defined in terms of Ising variables\nhave shown the ability to exhibit a wide variety \nof multicritical behavior\nwhen randomness and/or competing \ninteractions are introduced, which has attracted the attention of many\nresearchers~(see, e.g., Refs.~\cite{aharony,mattis,kaufman,nogueira98,nuno08a,nuno08b,salmon1,morais12}). \n\nMoreover, the simplicity of Ising variables, \nwhich are very suitable for both analytical and numerical\nstudies, has led to proposals of important models outside\nthe scope of magnetism, particularly in the \narea of complex systems.\nThese models have been successful in describing \na wide variety of relevant \nfeatures in such systems, and have raised \ninterest in many fields, like\nfinancial markets, optimization problems, \nbiological membranes, and social behavior.\nIn some cases, more than one Ising variable has been used, \nespecially by considering a coupling between them, as \nproposed within the framework of choice \ntheories~\cite{fernandez}, or in plastic \ncrystals~\cite{plastic1,brandreview,folmer}. \nIn the former case, each set of Ising variables represents\na group of identical individuals, all of which can make two\nindependent binary choices.
\n\n\begin{figure}[htp]\n\begin{center}\n\includegraphics[height=5.5cm]{figure1.eps}\n\end{center}\n\vspace{-1cm}\n\caption{Illustrative pictures of the three phases as the temperature \nincreases: low-temperature (ordered) solid, intermediate \nplastic crystal, and high-temperature (disordered) liquid phase. \nIn the plastic state the centers of mass of the molecules form a \nregular crystalline lattice but the molecules are \ndisordered with respect to the orientational degrees of freedom.} \n\label{fig:fasesdecristais}\n\end{figure}\n\nThe so-called plastic \ncrystals~\cite{plastic1,brandreview,folmer,michel85,michel87,%\ngalam87,galam89,salinas1,salinas2} appear as states \nof some compounds considered to be simpler than those of canonical \nglasses, but still presenting rather nontrivial \nrelaxation and equilibrium properties. Such a plastic\nphase corresponds to an intermediate stable state, between a \nhigh-temperature (disordered) liquid phase and a low-temperature\n(ordered) solid phase, and both transitions, \nnamely, liquid-plastic and plastic-solid, are first order. \nIn this intermediate phase, the rotational disorder coexists\nwith a translationally ordered state, characterized by \nthe centers of mass of the molecules forming a regular crystalline \nlattice with the molecules presenting disorder in their \norientational degrees of freedom, as shown \nin Fig.~\ref{fig:fasesdecristais}. \nMany materials undergo a liquid-plastic phase transition, \nwhere the lower-temperature phase presents such a \npartial orientational order, like the plastic-crystal \nof Fig.~\ref{fig:fasesdecristais}. \nThe property of translational invariance makes the plastic crystals\nmuch simpler to study by both analytical and numerical \nmethods, making them very useful for a proper \nunderstanding of the glass transition~\cite{plastic1,brandreview,folmer}.
\nIn some plastic-crystal models one introduces a coupling \nbetween two Ising models, associating these \nsystems with the translational and rotational degrees of \nfreedom, respectively~\cite{galam87,galam89,salinas1,salinas2}, \nas a proposal to explain satisfactorily the \nthermodynamic properties of the plastic phase. \n\nAccordingly, spin variables $\{t_{i}\}$ and $\{r_{i}\}$ are introduced in \nsuch a way as to mimic the translational \nand rotational degrees of freedom of each molecule $i$, respectively. \nThe following Hamiltonian is \nconsidered~\cite{galam87,galam89,salinas1,salinas2}, \n\n\begin{equation}\n\label{eq:hamplastcrystals}\n{\cal H} = - J_{t}\sum_{\langle ij \rangle}t_{i}t_{j}\n- J_{r} \sum_{\langle ij \rangle}r_{i}r_{j} \n- \sum_{i} (\alpha t_{i} + h_{i})r_{i}~,\n\end{equation}\n\n\vskip \baselineskip\n\noindent\nwhere $\sum_{\langle ij \rangle}$ represents a sum over \ndistinct pairs of nearest-neighbor spins. \nIn the first summation, the Ising variables $t_{i}=\pm 1$ \nmay characterize two lattices A and B (or occupied and vacant sites). \nOne notices that the rotational \nvariables $r_{i}$ could be, in principle, continuous variables, \nalthough the fact that the minimization of the coupling contribution \n$\alpha t_{i}r_{i}$ is achieved \nfor $t_{i}r_{i} =1$ ($\alpha>0$), or \nfor $t_{i}r_{i} =-1$ ($\alpha<0$), suggests that the simpler choice\nof binary variables ($r_{i}=\pm 1$) is appropriate, \nbased on the energy-minimization requirement. \n\nIn the present model the variables $t_{i}$ and \n$r_{i}$ represent very different characteristics of a \nmolecule. Particularly, the rotational variables $r_{i}$\nare expected to change more freely than the translational ones; \nfor this reason, one introduces a random field acting only \non the rotational degrees of freedom.
\nIn fact, the whole contribution $\sum_{i} (\alpha t_{i} + h_{i})r_{i}$\nis known to play a fundamental role in the plastic phase of \nionic plastic crystals, like the alkali cyanides KCN, NaCN and RbCN. \nIn spite of its simplicity, the above Hamiltonian is able to capture \nthe most relevant features of the plastic-crystal phase, as well \nas the associated phase transitions,\nnamely, liquid-plastic and plastic-solid ones~\cite{michel85,michel87,%\ngalam87,galam89,salinas1,salinas2,vives}. \n\nA system described by a Hamiltonian slightly different \nfrom the one of~\eq{eq:hamplastcrystals}, in which the \nwhole contribution\n$\sum_{i} (\alpha t_{i} + h_{i})r_{i}$ was replaced by \n$\sum_{i} \alpha_{i} t_{i}r_{i}$, i.e., with no random \nfield acting on variable $r_{i}$ separately, was considered\nin Ref.~\cite{salinas2}. In such a work one finds a detailed\nanalysis of the phase diagrams and order-parameter behavior \nof the corresponding model. However, to our knowledge, \nprevious investigations on the model defined \nby~\eq{eq:hamplastcrystals} have not \nthoroughly considered the effects of the random \nfield $h_{i}$, with particular attention to the phase diagrams \nfor the case of a randomly distributed bimodal\none, $h_{i}=\pm h_{0}$; \nthis represents the main motivation\nof the present work. \nIn the next section we define the model, determine its free-energy \ndensity, and describe the \nnumerical procedure to be used. \nIn Section III we exhibit typical phase diagrams \nand analyze the behavior of the corresponding order parameters,\nfor both zero and finite temperatures; the ability of the model \nto exhibit a rich variety of phase diagrams, characterized \nby multicritical behavior, is shown.
\n \n\section{The Model and Free-Energy Density} \n\nBased on the discussion of the previous section, herein\nwe consider a system composed of two interacting Ising models, \ndescribed by the Hamiltonian\n\n\begin{equation}\n\label{eq:hamiltonian1}\n{\cal H}(\{h_{i}\}) = - J_{\sigma} \sum_{(ij)}\sigma_{i}\sigma_{j}\n- J_{\tau} \sum_{(ij)}\tau_{i}\tau_{j} + D\sum_{i=1}^{N}\tau_{i}\sigma_{i}\n-\sum_{i=1}^{N}h_{i}\tau_{i}~, \n\end{equation}\n\n\vskip \baselineskip\n\noindent\nwhere $\sum_{(ij)}$ represents a sum over all distinct pairs of spins, \ni.e., infinite-range interactions, a limit in which the mean-field approximation becomes exact. Moreover, \n$\tau_{i}= \pm 1$ and $\sigma_{i}= \pm 1$ ($i=1,2, \cdots , N$) denote \nIsing variables, \n$D$ stands for a real parameter, whereas both $J_{\sigma}$ and \n$J_{\tau}$ are positive coupling constants, which will be \nrestricted herein to \nthe symmetric case, $J_{\sigma}=J_{\tau}=J>0$. Although this latter \ncondition may seem a rather artificial simplification of the \nHamiltonian in~\eq{eq:hamplastcrystals}, the application of a \nrandom field $h_{i}$ acting separately on one set of variables will \nproduce the expected distinct physical behavior associated with \n$\{ \tau_{i} \}$ and $\{ \sigma_{i} \}$. The random fields \n$\{ h_{i} \}$ will be considered as following \na symmetric bimodal probability distribution function, \n\n\begin{equation}\n\label{eq:hpdf}\n P(h_{i}) = \frac{1}{2} \, \delta(h_{i}-h_{0}) +\frac{1}{2} \, \delta(h_{i}+h_{0})~.
\n\end{equation}\n \n\vskip \baselineskip\n\noindent\nThe infinite-range character of the interactions allows one to write the above \nHamiltonian in the form\n\n\begin{equation}\n\label{eq:hamiltonian2}\n{\cal H}(\{h_{i}\})= - \frac{J}{2N}{\left (\sum_{i=1}^{N}\sigma_{i} \right )}^{2} \n- \frac{J}{2N}{\left (\sum_{i=1}^{N}\tau_{i} \right )}^{2} \n+D\sum_{i=1}^{N}\tau_{i}\sigma_{i} -\sum_{i=1}^{N}h_{i}\tau_{i}~,\n\end{equation} \n\n\vskip \baselineskip\n\noindent\nfrom which one may calculate the partition function associated with \na particular configuration of the fields $\{ h_{i}\}$, \n\n\begin{equation}\nZ(\{h_{i}\}) = {\rm Tr} \exp \left[- \beta {\cal H}(\{h_{i}\}) \right]~, \n\end{equation}\n\n\vskip \baselineskip\n\noindent\nwhere $\beta=1/(kT)$ and \n${\rm Tr} \equiv {\rm Tr}_{\{ \tau_{i},\sigma_{i}=\pm 1 \}} $ indicates a sum over \nall spin configurations. One can now make use of \nthe Hubbard-Stratonovich transformation~\cite{dotsenkobook,nishimoribook}\nto linearize the quadratic terms, \n\n\begin{equation}\nZ(\{h_{i}\}) = \frac{1}{\pi} \int_{-\infty}^{\infty}dx dy \exp(-x^{2}-y^{2}) \n\prod_{i=1}^{N} {\rm Tr} \exp [ H_{i}(\tau,\sigma)]~,\n\end{equation}\n \n\vskip \baselineskip\n\noindent\nwhere $H_{i}(\tau,\sigma)$ depends on the random \nfields $\{ h_{i}\}$, \nas well as on the spin variables, being given by\n\n\begin{equation}\nH_{i}(\tau,\sigma) = \sqrt{\frac{2\beta J}{N}} \ x \tau + \sqrt{\frac{2\beta J}{N}} \ y \sigma \n- \beta D \tau \sigma + \beta h_{i} \tau~.
\n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\nPerforming the trace over the spins and defining new variables, related to \nthe respective order parameters, \n\n\\begin{equation}\n\\label{eq:mtausigma}\nm_{\\tau} = \\sqrt{\\frac{2kT}{JN}} \\ x~; \\qquad \nm_{\\sigma} = \\sqrt{\\frac{2kT}{JN}} \\ y~, \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\none obtains\n\n\\begin{equation}\nZ(\\{h_{i}\\})= \\frac{\\beta J N}{2 \\pi} \\int_{-\\infty}^{\\infty} dm_{\\tau} dm_{\\sigma} \\exp[N g_{i} (m_{\\tau},m_{\\sigma})]~, \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\nwhere\n\n\\begin{eqnarray}\ng_{i}(m_{\\tau},m_{\\sigma}) &=& - \\frac{1}{2} \\beta J m_{\\tau}^{2} \n- \\frac{1}{2} \\beta J m_{\\sigma}^{2} + \\log \\left \\{ \n2e^{-\\beta D} \\cosh[\\beta J(m_{\\tau}+m_{\\sigma}+h_{i}/J)]\n\\right. \\nonumber \\\\ \\nonumber \\\\\n\\label{eq:gimtausigma}\n&+& \\left. 2e^{\\beta D} \\cosh[\\beta J(m_{\\tau}-m_{\\sigma}+h_{i}/J)] \\right \\}~.\n\\end{eqnarray}\n\n\\vskip \\baselineskip\n\\noindent\nNow, one takes the thermodynamic limit ($N \\rightarrow \\infty$), and uses the saddle-point\nmethod to obtain \n\n\\begin{equation}\nZ = \\displaystyle \\frac{\\beta J N}{2 \\pi} \\int_{-\\infty}^{\\infty} dm_{\\tau} dm_{\\sigma} \n\\exp[-N \\beta f(m_{\\tau},m_{\\sigma})]~, \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\nwhere the free-energy density functional $f(m_{\\tau},m_{\\sigma})$ results \nfrom a quenched average of \n$g_{i}(m_{\\tau},m_{\\sigma})$ in~\\eq{eq:gimtausigma}, over the \nbimodal probability distribution of~\\eq{eq:hpdf}, \n\n\\begin{equation}\n\\label{eq:freeenergy}\nf(m_{\\tau},m_{\\sigma}) = \\displaystyle \\frac{1}{2} J m_{\\tau}^{2} \n+ \\frac{1}{2} J m_{\\sigma}^{2} - \\frac{1}{2\\beta}\\log Q(h_{0}) \n- \\frac{1}{2\\beta}\\log Q(-h_{0})~, \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\nwith\n\n\\begin{equation}\nQ(h_{0}) = 2e^{-\\beta D} \\cosh[\\beta J(m_{\\tau}+m_{\\sigma} + h_{0}/J)]\n+2e^{\\beta D} 
\\cosh[\\beta J(m_{\\tau}-m_{\\sigma} + h_{0}/J)]~. \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\nThe extremization of the free-energy density above with respect to the \nparameters $m_{\\tau}$ and $m_{\\sigma}$ yields the following equations of state, \n\n\\begin{eqnarray}\n\\label{eq:mtau}\nm_{\\tau} &=& \\frac{1}{2} \\frac{R_{+}(h_{0})}{Q(h_{0})} \n+ \\frac{1}{2} \\frac{R_{+}(-h_{0})}{Q(-h_{0})}~, \n\\\\ \\nonumber \\\\\n\\label{eq:msigma}\nm_{\\sigma} &=& \\frac{1}{2} \\frac{R_{-}(h_{0})}{Q(h_{0})} \n+ \\frac{1}{2} \\frac{R_{-}(-h_{0})}{Q(-h_{0})}~,\n\\end{eqnarray}\n\n\\vskip \\baselineskip\n\\noindent\nwhere \n\n\\begin{equation}\nR_{\\pm}(h_{0}) = e^{-\\beta D} \\sinh[\\beta J(m_{\\tau}+m_{\\sigma} + h_{0}/J)] \n\\pm e^{\\beta D} \\sinh[\\beta J(m_{\\tau}-m_{\\sigma} +h_{0}/J)]~. \n\\end{equation}\n\n\\vskip \\baselineskip\n\\noindent\n\nIn the following section we present numerical results for the \norder parameters and phase diagrams of the model, at both\nzero and finite temperatures. \nAll phase diagrams are represented \nby rescaling conveniently the energy parameters of the system, namely, \n$kT/J$, $h_{0}/J$ and $D/J$. \nTherefore, for given values of these dimensionless parameters,\nthe equations of state [Eqs.(\\ref{eq:mtau}) and~(\\ref{eq:msigma})] \nare solved numerically for $m_{\\tau}$ and $m_{\\sigma}$. \nIn order to avoid metastable states, all solutions obtained for \n$m_{\\tau} \\in [-1,1]$ and $m_{\\sigma} \\in [-1,1]$ are \nsubstituted in~\\eq{eq:freeenergy},\nto check for the minimization of the free-energy density. \nThe continuous (second order) critical frontiers are found by the set \nof input values for which the order parameters fall continuously down to \nzero, whereas the first-order frontiers were found through \nMaxwell constructions. 
\n\nBoth ordered ($m_{\\tau} \\neq 0$ and $m_{\\sigma} \\neq 0$) \nand partially-ordered ($m_{\\tau}=0$ and $m_{\\sigma} \\neq 0$) \nphases have appeared in our analysis, and will be labeled \naccordingly. \nThe usual paramagnetic phase ({\\bf P}), \ngiven by $m_{\\tau}=m_{\\sigma}=0$, always occurs for sufficiently\nhigh temperatures. \nA wide variety of critical points appeared in our analysis \n(herein we follow the classification due to Griffiths~\\cite{griffiths}): \n(i) a tricritical point signals the encounter of a continuous frontier \nwith a first-order line with no change of slope; \n(ii) an ordered critical point corresponds to an isolated critical\npoint inside the ordered region, terminating a first-order line that\nseparates two distinct ordered phases;\n(ii) a triple point, where three distinct phases coexist, signaling the \nencounter of three first-order critical frontiers. \nIn the phase diagrams we shall use distinct symbols and \nrepresentations for the critical points and frontiers, as described below. \n\n\\begin{itemize}\n\n\\item Continuous (second order) critical frontier: continuous line;\n\n\\item First-order critical frontier: dotted line;\n\n\\item Tricritical point: located by a black circle;\n\n\\item Ordered critical point: located by a black asterisk;\n\n\\item Triple point: located by an empty triangle.\n\n\\end{itemize}\n\n\\section{Phase Diagrams and Behavior of Order Parameters}\n\n\\subsection{Zero-Temperature Analisis}\n\nAt $T=0$, one has to analyze the different spin orderings that \nminimize the Hamiltonian of~\\eq{eq:hamiltonian2}.\nDue to the coupling between the two \nsets of spins, the minimum-energy configurations will correspond to \n$\\{\\tau_{i}\\}$ and $\\{\\sigma_{i}\\}$ antiparallel ($D>0$), or parallel ($D<0$).\nTherefore, in the absence of random fields ($h_{0}=0$) one should have \n$m_{\\tau}=-m_{\\sigma}$ ($D>0$), and \n$m_{\\tau}=m_{\\sigma}$ ($D<0$), where $m_{\\sigma}=\\pm1$. 
\nHowever, when random fields act on the $\\{\\tau_{i}\\}$ spins, there will \nbe a competition between these fields and the coupling parameter $D$,\nleading to several phases, as represented in Fig.~\\ref{fig:groundstate}, \nin the plane $h_{0}/J$ versus $D/J$. One finds three ordered \nphases for sufficiently low values of $h_{0}/J$\nand $|D|/J$, in addition to {\\bf P} phases for $(|D|/J)>0.5$ and $(h_{0}/J)>1$. \nAll frontiers shown in Fig.~\\ref{fig:groundstate} are first-order critical lines. \n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[height=5.5cm]{figure2.eps}\n\\end{center}\n\\caption{Phase diagram of the model defined by Hamiltonian \nof~\\eq{eq:hamiltonian2}, at zero temperature. All critical frontiers \nrepresent first-order phase transitions; the empty triangles denote \ntriple points.}\n\\label{fig:groundstate}\n\\end{figure}\n\nWhen $(h_{0}/J) \\leq 1/2$ one finds ordered phases for all values of $D/J$, \nwith a vertical straight line at $D=0$ separating the \nsymmetric state ($D<0$), where $m_{\\tau}=m_{\\sigma}$, from the \nantisymmetric one ($D>0$), characterized by $m_{\\tau}=-m_{\\sigma}$. \nTwo critical frontiers (symmetric under a reflection operation)\nemerge from the triple point at \n$(D/J)=0.0$ and $(h_{0}/J)=0.5$, given, respectively, by \n$(h_{0}/J)=0.5 + (D/J)$ for $D>0$, and \n$(h_{0}/J)=0.5 - (D/J)$ for $D<0$.\nThese critical frontiers terminate at $(h_{0}/J)=1.0$ and\nseparate the low random-field-valued ordered phases from \na partially-ordered \nphase, given by $m_{\\tau}=0$ and $m_{\\sigma}= \\pm 1$. \nAs shown in Fig.~\\ref{fig:groundstate}, three triple points\nappear, each of them signaling the encounter \nof three first-order lines, characterized by a coexistence of three phases, \ndefined by distinct values of the magnetizations $m_{\\tau}$ and\n$m_{\\sigma}$, as described below. 
\n\n\\begin{itemize}\n\n\\item $[(D/J)=-0.5$ and $(h_{0}/J)=1.0]$~: \n$(m_{\\tau},m_{\\sigma})=\\{ (0,0);(0,\\pm 1); (\\pm 1, \\pm 1) \\}$. \n\n\\item $[(D/J)=0.5$ and $(h_{0}/J)=1.0]$~:\n$(m_{\\tau},m_{\\sigma})=\\{ (0,0);(0,\\pm 1); (\\pm 1, \\mp 1) \\}$. \n\n\\item $[(D/J)=0.0$ and $(h_{0}/J)=0.5]$~:\n$(m_{\\tau},m_{\\sigma})=\\{ (\\pm 1,\\pm 1);(\\pm 1, \\mp 1); (0, \\pm 1) \\}$. \n\n\\end{itemize}\n\nSuch a rich critical behavior shown for $T=0$ suggests that interesting \nphase diagrams should occur when the temperature is taken \ninto account. From now on, we investigate the model \ndefined by the Hamiltonian of~\\eq{eq:hamiltonian2} for finite temperatures. \n\n\\subsection{Finite-Temperature Analysis}\n\nAs shown above, the zero-temperature phase diagram presents\na reflection symmetry with respect \nto $D=0$ (cf. Fig.~\\ref{fig:groundstate}). \nThe only difference between the two sides of this phase\ndiagram concerns the magnetization solutions \ncharacterizing the ordered phases for low random-field values,\nwhere one has \n$m_{\\tau}=-m_{\\sigma}$ ($D>0$), or \n$m_{\\tau}=m_{\\sigma}$ ($D<0$). \nThese results come as a consequence\nof the symmetry of the Hamiltonian of~\\eq{eq:hamiltonian2}, \nwhich remains unchanged under the operations, \n$D \\rightarrow -D$, $\\sigma_{i} \\rightarrow -\\sigma_{i}$ $(\\forall i)$, or \n$D \\rightarrow -D$, $\\tau_{i} \\rightarrow -\\tau_{i}$,\n$h_{i} \\rightarrow -h_{i}$ $(\\forall i)$. \nHence, the finite-temperature phase diagrams should present similar \nsymmetries with respect to a change $D \\rightarrow -D$. From now on, \nfor the sake of simplicity, we will restrict ourselves to the \ncase $(D/J) \\geq 0$, for which the zero-temperature and \nlow-random-field magnetizations present \nopposite signals, as shown in Fig.~\\ref{fig:groundstate}, i.e., \n$m_{\\tau}=-m_{\\sigma}$~. 
\n\n\\begin{figure}[htp]\n\\begin{center}\n\\vspace{.5cm}\n\\includegraphics[height=5cm]{figure3a.eps}\n\\hspace{0.5cm} \n\\includegraphics[height=5cm]{figure3b.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{Phase diagrams of the model\ndefined by the Hamiltonian of~\\eq{eq:hamiltonian2} in two \nparticular cases: \n(a) The plane of dimensionless variables $kT/J$ versus $D/J$, \nin the absence of random fields $(h_{0}=0)$; \n(b) The plane of dimensionless variables $kT/J$ \nversus $h_{0}/J$, for $D=0$. \nThe full lines represent continuous phase transitions,\nwhereas the dotted line stands for a \nfirst-order critical frontier.\nFor sufficiently high temperatures one finds a \nparamagnetic phase ({\\bf P}), whereas \nthe magnetizations $m_{\\tau}$ and \n$m_{\\sigma}$ become nonzero \nby lowering the temperature. \nIn case (b), two low-temperature phases appear, namely,\nthe ordered (lower values of $h_{0}$) and \nthe partially-ordered (higher values of $h_{0}$).\nThese two phases are separated by a continuous\ncritical frontier (higher temperatures), which turns into \na first-order critical line (lower temperatures) at a tricritical\npoint (black circle). The type of phase \ndiagram exhibited in (b) will be called herein of topology I.}\n\\label{fig:tdh00}\n\\end{figure}\n\nIn Fig.~\\ref{fig:tdh00} we exhibit phase diagrams of the model\nin two particular cases, namely, in the absence of fields $(h_{0}=0)$\n[Fig.~\\ref{fig:tdh00}(a)] and for zero coupling \n$(D=0)$ [Fig.~\\ref{fig:tdh00}(b)]. \nThese figures provide useful \nreference data in the numerical procedure \nto be employed for constructing phase diagrams in\nmore general situations, e.g., in the plane \n$kT/J$ versus $h_{0}/J$, for several values of $(D/J)>0$. 
\n\nIn Fig.~\\ref{fig:tdh00}(a) we present the phase diagram \nof the model in the plane of dimensionless variables $kT/J$ versus $D/J$, \nin the absence of random fields ($h_{0}=0$), \nwhere one sees the point $D=0$ that corresponds \nto two noninteracting Ising models, leading to the well-known mean-field\ncritical temperature of the Ising model [$(kT_{c}/J)=1$]. \nAlso in Fig.~\\ref{fig:tdh00}(a), \nthe ordered solution $m_{\\tau}=-m_{\\sigma}$ \nminimizes the free energy at low temperatures\nfor any $D>0$; a second-order frontier separates this ordered phase \nfrom the paramagnetic one that appears for sufficiently high temperatures. \nFor high values of $D/J$ one sees that this critical frontier approaches \nasymptotically $(kT/J) = 2$. \nSince the application of a random field results in \na decrease of the critical temperature, when compared with the one\nof the case $h_{0}=0$~\\cite{aharony,mattis,kaufman}, \nthe result of Fig.~\\ref{fig:tdh00}(a) shows that no \nordered phase should occur for $h_{0}>0$ and $(kT/J)>2$. \n\nThe phase diagram for $D=0$ is shown in \nthe plane of dimensionless variables $kT/J$ \nversus $h_{0}/J$ in Fig.~\\ref{fig:tdh00}(b). \nThe {\\bf P} phase occurs for $(kT/J)>1$, whereas \nfor $(kT/J)<1$ two phases appear, namely, \nthe ordered one (characterized by \n$m_{\\sigma} \\neq 0$ and $m_{\\tau} \\neq 0$, with \n$|m_{\\sigma}| \\geq |m_{\\tau}|$), \nas well as the partially-ordered phase \n($m_{\\sigma} \\neq 0$ and $m_{\\tau} = 0$). 
\nSince the two Ising models are uncorrelated for $D=0$\nand the random fields act only on the $\\{\\tau_{i}\\}$ \nvariables, one finds that the critical behavior associated \nwith variables $\\{\\sigma_{i}\\}$ and $\\{\\tau_{i}\\}$ occur \nindependently: \n(i) The variables $\\{\\sigma_{i}\\}$ order at $(kT/J)=1$, for \nall values of $h_{0}$; \n(ii) The critical frontier shown in\nFig.~\\ref{fig:tdh00}(b), separating the two\nlow-temperature phases, is characteristic of an \nIsing ferromagnet in the presence of a bimodal \nrandom field~\\cite{aharony}. The black circle\ndenotes a tricritical point, where the higher-temperature\ncontinuous frontier meets the lower-temperature\nfirst-order critical line. The type of phase \ndiagram exhibited in Fig.~\\ref{fig:tdh00}(b)\nwill be referred herein as topology I. \n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[height=7.0cm,clip,angle=-90]{figure4a.eps}\n\\hspace{0.1cm} \n\\includegraphics[height=7.0cm,clip,angle=-90]{figure4b.eps} \\\\\n\\vspace{0.5cm} \\hspace{-0.5cm}\n\\includegraphics[height=4.5cm,clip]{figure4c.eps}\n\\hspace{1.0cm}\n\\includegraphics[height=4.5cm,clip]{figure4d.eps}\n\\end{center}\n\\vspace{-.2cm}\n\\caption{Phase diagram and order parameters in the case\n$(D/J)=0.1$. \n(a) Phase diagram in the plane of dimensionless variables $kT/J$ \nversus $h_{0}/J$. At low temperatures, a first-order\ncritical frontier that terminates in an \nordered critical point (black asterisk) separates \nthe ordered phase (lower values of $h_{0}/J$) from \nthe partially-ordered phase (higher values of $h_{0}/J$); \nthis type of phase \ndiagram will be referred herein as topology II. 
\nThe order parameters $m_{\\tau}$ and $m_{\\sigma}$\nare represented versus the dimensionless temperature \n$kT/J$ for typical values of $h_{0}/J$: \n(b) As one goes through the ordered \nphase (low temperatures) to the {\\bf P} phase; \n(c) As one goes through the first-order critical \nfrontier, which separates the two ordered phases, \nup to the {\\bf P} phase; \n(d) As one goes through the partially-ordered phase \n(slightly to the right of the first-order critical frontier) up \nto the {\\bf P} phase. Equivalent solutions exist by \ninverting the signs of $m_{\\tau}$ and $m_{\\sigma}$.}\n\\label{fig:d01}\n\\end{figure}\n\nThe effects of a small interaction [$(D/J)=0.1$] \nbetween the variables $\\{\\sigma_{i}\\}$ and \n$\\{\\tau_{i}\\}$ are presented in Fig.~\\ref{fig:d01}, where\none sees that the topology I [Fig.~\\ref{fig:tdh00}(b)] goes \nthrough substantial changes, as shown \nin Fig.~\\ref{fig:d01}(a) (to be called herein as topology II).\nAs expected from the behavior presented \nin Fig.~\\ref{fig:tdh00}(a), one notices that \nthe border of the {\\bf P} phase (a continuous frontier) \nis shifted to higher temperatures. \nHowever, the most significant difference between\ntopologies I and II consists in \nthe low-temperature frontier\nseparating the ordered and partially-ordered phases. \nParticularly, the continuous frontier, as well\nas the tricritical point shown \nin Fig.~\\ref{fig:tdh00}(b), give place to an \nordered critical point~\\cite{griffiths}, at which\nthe low-temperature first-order critical \nfrontier terminates. \nSuch a topology has been found also in some \nrandom magnetic systems, like the Ising and Blume-Capel\nmodels, subject to random fields and/or \ndilution~\\cite{kaufman,salmon1,salmon2,benyoussef,\ncarneiro,kaufmankanner}. 
\nIn the present model, we verified that topology II holds \nfor any $0<(D/J)<1/2$, with \nthe first-order frontier starting at zero temperature and \n$(h_{0}/J)=(D/J)+1/2$, which in Fig.~\\ref{fig:d01}(a)\ncorresponds to $(h_{0}/J)=0.6$. Such a first-order line\nessentially affects the parameter $m_{\\tau}$, as will be \ndiscussed next. \n\nIn Figs.~\\ref{fig:d01}(b)--(d) the order parameters \n$m_{\\tau}$ and $m_{\\sigma}$ are exhibited versus \n$kT/J$ for conveniently chosen values of $h_{0}/J$, \ncorresponding to distinct physical situations of the \nphase diagram for $(D/J)=0.1$. \nA curious behavior is \npresented by the magnetization \n$m_{\\tau}$ by varying $h_{0}/J$, and more \nparticularly, around the first-order critical line. \nFor $(h_{0}/J)=0.59$ [Fig.~\\ref{fig:d01}(c)], \none starts at low temperatures\nessentially to the left of the critical frontier and by increasing \n$kT/J$ one crosses this critical frontier at $(kT/J)=0.499$, \nvery close to the ordered critical point. \nAt this crossing point, \n$|m_{\\tau}|$ presents an abrupt decrease, i.e., \na discontinuity, corresponding \nto a change to the partially-ordered phase; on \nthe other hand, the magnetization $m_{\\sigma}$\nremains unaffected when going through this critical frontier. \nFor higher temperatures, \n$|m_{\\tau}|$ becomes very small, but still finite,\nturning up zero only at the {\\bf P} boundary; in fact, \nthe whole region around the ordered critical point\nis characterized by a finite small value of $|m_{\\tau}|$. \nAnother unusual effect is presented in \nFig.~\\ref{fig:d01}(d), for which $(h_{0}/J)=0.65$, i.e., \nslightly to the right of the first-order critical frontier: \nthe order parameter $m_{\\tau}$ is zero\nfor low temperatures, but becomes nonzero by increasing the \ntemperature, as one becomes closer to the critical ordered \npoint. 
This rather curious phenomenon is directly related to
the correlation between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$: since for $(kT/J) \approx 0.5$ the magnetization
$m_{\sigma}$ is still very close to its maximum value,
a small value of $|m_{\tau}|$ is induced, so that both
order parameters go to zero together only at the {\bf P} frontier.

Behind the results presented in Figs.~\ref{fig:d01}(a)--(d)
one finds a very interesting feature, namely, the
possibility of going continuously from the ordered phase to the
partially-ordered phase by circumventing the ordered critical point.
This is analogous to what happens in many substances, e.g., water,
where one goes continuously (with no latent heat)
from the liquid to the gas
phase by circumventing the critical point~\cite{huang,reichl}.

\begin{figure}[htp]
\begin{center}
\includegraphics[height=6.5cm,angle=-90]{figure5a.eps}
\hspace{0.2cm}
\includegraphics[height=6.5cm,angle=-90]{figure5b.eps}
\end{center}
\vspace{-.5cm}
\caption{The first-order critical line in Fig.~\ref{fig:d01}(a),
corresponding to $(D/J)=0.1$, is amplified, and
the dimensionless free-energy density $f/J$ of~\eq{eq:freeenergy}
(shown in the insets) is analyzed
at two distinct points along this frontier:
(a) A low-temperature point located at $[(h_{0}/J)=0.599,(kT/J)=0.010]$, showing the
coexistence of the ordered ($|m_{\tau}|=1$) and partially-ordered ($m_{\tau}=0$)
solutions;
(b) A higher-temperature point located at $[(h_{0}/J)=0.594,(kT/J)=0.387]$,
showing the coexistence of solutions with $|m_{\tau}|>0$, namely,
$|m_{\tau}|=0.868$ and $|m_{\tau}|=0.100$.
\nIn both cases (a) and (b) the free energy presents four minima,\nassociated with distinct pairs of solutions\n$(m_{\\tau},m_{\\sigma})$: the full lines show the two minima \nwith positive $m_{\\sigma}$, whereas the dashed lines correspond\nto the two minima with negative $m_{\\sigma}$.} \n\\label{fig:freeenergyd01}\n\\end{figure}\n\nIn Fig.~\\ref{fig:freeenergyd01} the free-energy density of~\\eq{eq:freeenergy}\nis analyzed at two different points along the first-order critical frontier of \nFig.~\\ref{fig:d01}(a), namely, a low-temperature \none [Fig.~\\ref{fig:freeenergyd01}(a)], and a point at a higher \ntemperature [Fig.~\\ref{fig:freeenergyd01}(b)].\nIn both cases the free energy presents four minima \nassociated with distinct pairs of solutions\n$(m_{\\tau},m_{\\sigma})$. The point at $(kT/J)=0.010$ presents\n$(m_{\\tau},m_{\\sigma})=\\{(-1,1);(0,1); (0,-1);(1,-1)\\}$, whereas the \npoint at $(kT/J)=0.387$ presents \n$(m_{\\tau},m_{\\sigma})=\\{(-0.868, 0.991); (-0.100,0.991); (0.100, -0.991); \n(0.868, -0.991)\\}$. \nThe lower-temperature point represents a coexistence of the two phases \nshown in the case $D=0$ [cf. Fig.~\\ref{fig:tdh00}(b)], namely, the \nordered ($|m_{\\tau}|=1$) and partially-ordered ($m_{\\tau}=0$) phases. \nHowever, the higher-temperature point typifies the phenomenon\ndiscussed in Fig.~\\ref{fig:d01}, where distinct solutions with \n$|m_{\\tau}|>0$ coexist, leading to a jump in this \norder parameter as one crosses the critical frontier, \nlike illustrated in Fig.~\\ref{fig:d01}(c) for the point \n$[(h_{0}/J)=0.59,(kT/J)=0.499]$. Although the \nmagnetization $m_{\\tau}$ presents a very \ncurious behavior in topology II [cf., e.g., \nFigs.~\\ref{fig:d01}(b)--(d)], \n$m_{\\sigma}$ remains essentially \nunchanged by the presence of the first-order \ncritical frontier of \nFig.~\\ref{fig:d01}(a), as shown also in \nFig.~\\ref{fig:freeenergyd01}. 
\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[height=7cm,angle=-90]{figure6a.eps}\n\\hspace{0.2cm}\n\\includegraphics[height=7cm,angle=-90]{figure6b.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{Phase diagrams in the plane of dimensionless variables $kT/J$ \nversus $h_{0}/J$ for two different values of $D/J$:\n(a) $(D/J)=0.5$, to be referred as topology III;\n(b) $(D/J)=0.7$, to be referred as topology IV.} \n\\label{fig:phasediagd0507}\n\\end{figure}\n\nIn Fig.~\\ref{fig:phasediagd0507} we present two other possible phase\ndiagrams, namely, the cases $(D/J)=0.5$ [Fig.~\\ref{fig:phasediagd0507}(a), \ncalled herein topology III] and \n$(D/J)=0.7$ [Fig.~\\ref{fig:phasediagd0507}(b), called herein topology IV]. \nWhereas topology III represents\na special situation that applies only for $(D/J)=0.5$, exhibiting the \nrichest critical behavior of the present model, topology IV holds \nfor any $(D/J)>0.5$.\nIn Fig.~\\ref{fig:phasediagd0507}(a) one observes the appearance of\nseveral multicritical points, denoted by the black circle (tricritical \npoint), black asterisk (ordered critical point), and\nempty triangles (triple points):\n(i) The tricritical point, which signals the\nencounter of the higher-temperature continuous phase transition\nwith the lower-temperature first-order phase transition,\nfound in the $D=0$ phase diagram [cf. Fig.~\\ref{fig:tdh00}(b)],\nhave curiously disappeared for $0<(D/J)<0.5$, \nand emerged again for $(D/J)=0.5$; \n(ii) The ordered critical point exists for any $0 < (D/J) \\leq 0.5$ \n[as shown in Fig.~\\ref{fig:d01}(a)]; \n(iii) Two triple points, one at a finite temperature, whereas the \nother one occurs at zero temperature. It should be mentioned \nthat such a zero-temperature triple point corresponds \nprecisely to the one of Fig.~\\ref{fig:groundstate}, at \n$(D/J)=0.5$ and $(h_{0}/J)=1.0$. 
\nThe value $(D/J)=0.5$ is very special and will be considered as \na threshold for both multicritical behavior and correlations \nbetween the two systems. We have observed that for \n$(D/J) \\gtrsim 0.5$, the critical points shown in\nFig.~\\ref{fig:phasediagd0507}(a) disappear, except for the\ntricritical point that survives for \n$(D/J)>0.5$ [as shown in Fig.~\\ref{fig:phasediagd0507}(b)].\nChanges similar to those occurring \nherein between topologies II and III, as well as \ntopologies III and IV, \nwere found also in some \nmagnetic systems, like the Ising and Blume-Capel\nmodels, subject to random fields and/or \ndilution~\\cite{kaufman,salmon1,salmon2,benyoussef,\ncarneiro,kaufmankanner}. \nParticularly, the splitting of the\nlow-temperature first-order critical frontier into \ntwo higher-temperature first-order lines that terminate\nin the ordered and tricritical points, \nrespectively [as exhibited in Fig.~\\ref{fig:phasediagd0507}(a)],\nis consistent with results found in \nthe Blume-Capel model under \na bimodal random magnetic, by \nvarying the intensity of the crystal \nfield~\\cite{kaufmankanner}.\n\nAnother important feature of topology III concerns the \nlack of any type of \nmagnetic order at finite temperatures for $(h_{0}/J)>1.1$, \nin contrast to the phase diagrams for \n$0 \\leq (D/J) < 0.5$, for which there is $m_{\\sigma} \\neq 0$ \nfor all $h_{0}/J$\n[see, e.g., Figs.~\\ref{fig:tdh00}(b) and~\\ref{fig:d01}(a)]. \nThis effect shows that $(D/J)=0.5$ represents a threshold value \nfor the coupling between the variables $\\{\\sigma_{i}\\}$ and \n$\\{\\tau_{i}\\}$, so that for $(D/J) \\geq 0.5$ the \ncorrelations among these variables become significant. \nAs a consequence of these correlations, the fact \nof no magnetic \norder on the $\\tau$-system ($m_{\\tau} =0$) \ndrives the the magnetization of the \n$\\sigma$-system to zero as well, for $(h_{0}/J)>1.1$. 
\nIt is important to notice that the $T=0$ phase diagram \nof Fig.~\\ref{fig:groundstate}\npresents a first-order critical line for $(D/J)=0.5$ and \n$(h_{0}/J)>1.0$, at which \n$m_{\\tau} =0$, whereas in the $\\sigma$-system both\n$m_{\\sigma}=0$ and $|m_{\\sigma}|=1$ minimize the Hamiltonian. \nBy analyzing numerically the free-energy density \nof~\\eq{eq:freeenergy} at low temperatures and $(h_{0}/J)>1.0$, \nwe have verified that for any infinitesimal value of \n$kT/J$ destroys such a coexistence of solutions, leading to \na minimum free energy at \n$m_{\\tau}=m_{\\sigma}=0 \\ (\\forall \\, T>0)$. Consequently,\none finds that the low-temperature region in the interval \n$1.0 \\leq (h_{0}/J) \\leq 1.1$ becomes part of the {\\bf P} phase. \nHence, the phase diagram in \nFig.~\\ref{fig:phasediagd0507}(a) presents \na reentrance phenomena for \n$1.0 \\leq (h_{0}/J) \\leq 1.1$. In this region, by lowering\nthe temperature gradually, one goes from a {\\bf P} phase\nto the ordered phase \n($m_{\\tau} \\neq 0$ ; $m_{\\sigma} \\neq 0$), and then back \nto the {\\bf P} phase. This effect appears frequently \nin both theoretical and experimental investigations of \ndisordered magnets~\\cite{dotsenkobook,nishimoribook}. \n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[height=7cm,clip,angle=-90]{figure7a.eps}\n\\hspace{0.5cm} \\vspace{0.7cm}\n\\includegraphics[height=7cm,clip,angle=-90]{figure7b.eps} \\\\\n\\vspace{0cm} \\hspace{-0.8cm}\n\\includegraphics[height=4.5cm,clip]{figure7c.eps}\n\\hspace{1.2cm} \n\\includegraphics[height=4.5cm,clip]{figure7d.eps}\n\\end{center}\n\\vspace{0.2cm}\n\\caption{(a) The region of multicritical points of the phase diagram for \n$(D/J)=0.5$ [Fig.~\\ref{fig:phasediagd0507}(a)] is amplified and three\nthermodynamic paths are chosen for analyzing the magnetizations \n$m_{\\tau}$ and $m_{\\sigma}$. 
\n(b) Order parameters along thermodynamic path (1): \n$(h_{0}/J)=0.97$ and increasing temperatures.\n(c) Order parameters along thermodynamic path (2): \n$(h_{0}/J)=1.05$ and increasing temperatures. \n(d) Order parameters along thermodynamic path (3): \n$(kT/J)=0.42$ and varying the field \nstrength in the interval $0.9 \\leq (h_{0}/J) \\leq 1.15$.\nEquivalent solutions exist by inverting the signs of \n$m_{\\tau}$ and $m_{\\sigma}$.}\n\\label{fig:magpaths123}\n\\end{figure}\n\nIn Fig.~\\ref{fig:magpaths123} we analyze the behavior of the \n$m_{\\tau}$ and $m_{\\sigma}$ for topology III, \nin the region of multicritical \npoints of the phase diagram for \n$(D/J)=0.5$, along three typical thermodynamic paths, as \nshown in Fig.~\\ref{fig:magpaths123}(a). \nIn Fig.~\\ref{fig:magpaths123}(b) we exhibit the behavior of\n$m_{\\tau}$ and $m_{\\sigma}$ along path (1), where one \nsees that both parameters go through a jump by \ncrossing the first-order critical line [$(kT/J)=0.445$], \nexpressing a coexistence of different types \nof solutions for $m_{\\tau}$ and $m_{\\sigma}$ at this \npoint. One notices a larger jump in $m_{\\tau}$, so that \nto the right of the ordered critical point \none finds a behavior similar to the one verified in topology II, \nwhere $|m_{\\tau}|$ becomes very small, whereas \n$m_{\\sigma}$ still presents significant values. \nThen, by further increasing the temperature, these parameters\ntend smoothly to zero at the continuous critical frontier\nseparating the ordered and {\\bf P} phases. 
\nIn Fig.~\\ref{fig:magpaths123}(c) we show the magnetizations\n$m_{\\tau}$ and $m_{\\sigma}$ along path (2), \nwithin the region of the phase diagram \nwhere the reentrance phenomenon occurs; along this path, \none increases the temperature, going from the {\\bf P} phase \nto the ordered phase and then to the {\\bf P} phase again.\nBoth parameters are zero for low enough temperatures, \njumping to nonzero values at $(kT/J)=0.396$, as one \ncrosses the first-order critical line. After such jumps, \nby increasing the temperature, these parameters\ntend smoothly to zero at the border of the \n{\\bf P} phase. The behavior shown in \nFig.~\\ref{fig:magpaths123}(c) confirm the reentrance \neffect, discussed previously. \nFinally, in Fig.~\\ref{fig:magpaths123}(d) we exhibit \nthe order parameters along thermodynamic path (3), \nfor which the temperature is fixed at $(kT/J)=0.42$, with\nthe field varying in the range \n$0.9 \\leq (h_{0}/J) \\leq 1.15$. One sees that both \nmagnetizations $m_{\\tau}$ and $m_{\\sigma}$ display \njumps as one crosses each of the two first-order lines, \nevidencing a \ncoexistence of different ordered states at the lower-temperature\njump, as well as a coexistence of the ordered and {\\bf P} states\nat the higher-temperature jump. \n\nThe behavior presented by the order parameters in \nFigs.~\\ref{fig:magpaths123}(b)--(d) shows clearly \nthe fact that $(D/J)=0.5$ represents a threshold value \nfor the coupling between the variables $\\{\\sigma_{i}\\}$ and \n$\\{\\tau_{i}\\}$. In all these cases, one sees that jumps\nin the magnetization $m_{\\sigma}$ are correlated with \ncorresponding jumps in $m_{\\tau}$. \nThese results should be contrasted with those for the \ncases $(D/J)<0.5$, as illustrated \nin Fig.~\\ref{fig:d01}(c), where a discontinuity \nin $m_{\\tau}$ does not affect the smooth behavior presented\nby $m_{\\sigma}$. 
\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[height=7cm,angle=-90]{figure8a.eps}\n\\hspace{0.2cm}\n\\includegraphics[height=7cm,angle=-90]{figure8b.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{The order parameters $m_{\\tau}$ and $m_{\\sigma}$\nare represented versus the dimensionless temperature \n$kT/J$ for $(D/J)=8.0$ and two typical values of $h_{0}/J$: \n(a) Slightly to the left of the tricritical point; \n(b) Slightly to the right of the tricritical point. \nThe associated phase diagram corresponds \nto topology IV [cf. Fig.~\\ref{fig:phasediagd0507}(b)].\nEquivalent solutions exist by inverting the signs of \n$m_{\\tau}$ and $m_{\\sigma}$.} \n\\label{fig:magd80}\n\\end{figure}\n\nThe phase diagram shown in Fig.~\\ref{fig:phasediagd0507}(b), \nwhich corresponds to topology IV, is valid for any\nfor any $(D/J)>0.5$. Particularly, \nthe critical point where the low-temperature \nfirst-order critical \nfrontier touches the zero-temperature axis \nis kept at $(h_{0}/J)=1$, for all $(D/J)>0.5$, \nin agreement with Fig.~\\ref{fig:groundstate}. \nWe have verified \nonly quantitative changes in such a phase diagram \nby increasing \nthe coupling between the variables $\\{\\sigma_{i}\\}$ and \n$\\{\\tau_{i}\\}$. Essentially, the whole continuous critical \nfrontier moves towards \nhigher temperatures, leading to an increase in the values \nof the critical temperature \nfor $(h_{0}/J)=0$, as well as in the temperature \nassociated with the tricritical point, whereas the abscissa \nof this point remains typically unchanged. Moreover,\nin what concerns the order parameters, \nthe difference between $|m_{\\tau}|$ \nand $m_{\\sigma}$ decreases, in such a way \nthat for $(D/J) \\rightarrow \\infty$, one obtains \n$m_{\\tau}=-m_{\\sigma}$. 
\nThis latter effect is illustrated in \nFig.~\\ref{fig:magd80}, where we represent the \norder parameters $m_{\\tau}$ and $m_{\\sigma}$\nversus temperature, for a sufficiently large value of\n$D/J$, namely, $(D/J)=8.0$, for \ntwo typical choices of $h_{0}/J$, close \nto the tricritical point. \nIn Fig.~\\ref{fig:magd80}(a) $m_{\\tau}$ and $m_{\\sigma}$\nare analyzed slightly to the left of the tricritical \npoint, exhibiting the usual continuous behavior, \nwhereas in Fig.~\\ref{fig:magd80}(b) they \nare considered slightly to the right of the tricritical\npoint, presenting jumps as one crosses the\nfirst-order critical frontier. However, \nthe most important conclusion from Fig.~\\ref{fig:magd80} \nconcerns the fact that in both cases one has essentially \n$m_{\\tau}=-m_{\\sigma}$, showing that the random \nfield applied solely to the $\\tau$-system influences the \n$\\sigma$-system in a similar way, due to the \nhigh value of $D/J$ considered. \nWe have verified that for $(D/J)=8.0$\nthe two systems become so strongly \ncorrelated that \n$m_{\\tau}=-m_{\\sigma}$ holds along \nthe whole phase diagram, \nwithin our numerical accuracy. \n\n\\section{Conclusions}\n\nWe have analyzed the effects of a coupling $D$ \nbetween two Ising models, defined in terms of variables\n$\\{\\tau_{i}\\}$ and $\\{\\sigma_{i}\\}$. \nThe model was considered in the limit of infinite-range\ninteractions, where all spins in each system\ninteract by means of an exchange coupling $J>0$, typical \nof ferromagnetic interactions.\nMotivated by a qualitative description of\nsystems like plastic crystals, \nthe variables $\\{\\tau_{i}\\}$ and $\\{\\sigma_{i}\\}$ would \nrepresent rotational and translational degrees \nof freedom, respectively. 
Since the rotational \ndegrees of freedom are expected to change more \nfreely than the translational ones, \na random field acting only on the variables\n$\\{\\tau_{i}\\}$ was considered.\nFor this purpose, a bimodal random field, \n$h_{i} = \\pm h_{0}$, with equal probabilities,\nwas defined on the $\\tau$-system.\nThe model was investigated through its free energy \nand its two order parameters, namely, \n$m_{\\tau}$ and $m_{\\sigma}$. \n \nWe have shown that such a system presents a very rich \ncritical behavior, depending on the particular choices \nof $D/J$ and $h_{0}/J$. \nParticularly, at zero temperature, the phase diagram in the plane \n$h_{0}/J$ versus $D/J$ exhibits ordered, partially-ordered, \nand disordered phases. This phase diagram is symmetric \naround $(D/J)=0$, so that for sufficiently low values of \n$h_{0}/J$ one finds ordered phases characterized by \n$m_{\\sigma}=m_{\\tau}=\\pm 1$ ($D<0$) and \n$m_{\\sigma}=-m_{\\tau}=\\pm 1$ ($D>0$). \nWe have verified that $|D/J|=1/2$\nplays an important role in the present model, such\nthat at zero temperature one has the disordered \nphase ($m_{\\sigma}=m_{\\tau}=0$)\nfor $|D/J|>1/2$ and $(h_{0}/J)>1$. \nMoreover, the partially-ordered phase, \nwhere $m_{\\sigma}=\\pm 1$ and $m_{\\tau}=0$, \noccurs for $(h_{0}/J)>1/2+|D/J|$ and $|D/J|<1/2$. \nIn this phase diagram all phase transitions are \nof the first-order type, and three triple points were found. 
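The zero-temperature criteria above can be encoded directly (a minimal Python sketch for illustration; the function name is ours, and labelling everything outside the two stated inequalities as "ordered" is our simplification of the ground-state diagram):

```python
def ground_state(D_over_J, h0_over_J):
    """Classify the zero-temperature phase from the stated inequalities.

    Regions not covered by the two inequalities in the text are labelled
    'ordered' by elimination (an assumption, not spelled out in the text).
    """
    d = abs(D_over_J)   # the diagram is symmetric around D/J = 0
    h = h0_over_J
    if d > 0.5 and h > 1.0:
        return "disordered"          # m_sigma = m_tau = 0
    if d < 0.5 and h > 0.5 + d:
        return "partially-ordered"   # m_sigma = +/-1, m_tau = 0
    return "ordered"                 # m_sigma = +/-m_tau = +/-1
```

The symmetry around $(D/J)=0$ is captured by taking the absolute value of the coupling before testing the thresholds.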
\nIn the case of plastic crystals, \nthe sequence of transitions from the disordered \nto the partially-ordered, and then to the\nordered phases, would correspond to the \nsequence of transitions from the liquid to\nthe plastic crystal, and then to ordered crystal\nphases.\n\nDue to the symmetry around $D=0$, the \nfinite-temperature phase diagrams were considered\nonly for $D>0$, for which the ordered\nphase was identified by $m_{\\sigma}>0$ and\n$m_{\\tau}<0$, whereas the partially-ordered phase\nby $m_{\\sigma}>0$ and\n$m_{\\tau}=0$ (equivalent solutions also exist by \ninverting the signs of these order parameters). \nSeveral phase diagrams in the \nplane $kT/J$ versus $h_{0}/J$ were studied, \nby gradually varying $D/J$. We have found\nfour qualitatively different types of phase diagrams, \ndenoted as topologies I [$(D/J)=0$], II [$0<(D/J)<1/2$], \nIII [$(D/J)=1/2$], and IV [$(D/J)>1/2$]. Such a\nclassification reflects the fact that $(D/J)=1/2$ \nrepresents a threshold value \nfor the coupling between the variables $\\{\\sigma_{i}\\}$ and \n$\\{\\tau_{i}\\}$, so that for $(D/J) \\geq 1/2$ the \ncorrelations among these variables become significant, \nas verified through the behavior of the order parameters\n$m_{\\tau}$ and $m_{\\sigma}$. \nAmong all these cases, only topology IV \ncorresponds to a well-known phase diagram, \ncharacterized by a tricritical point, where the \nhigher-temperature continuous frontier meets \nthe lower-temperature first-order critical line.\nThis phase diagram is qualitatively similar to \nthe one found for the \nIsing ferromagnet in the presence of a bimodal \nrandom field~\\cite{aharony}, and it does not \npresent the partially-ordered phase, which is \nphysically relevant here. \nFor $(D/J) \\geq 1/2$, even though the random field\nis applied only to the $\\tau$-system, the correlations \nlead the $\\sigma$-system to follow a qualitatively\nsimilar behavior. 
\n \nThe phase diagrams referred to as topologies I and II \nexhibit all three phases. In the latter case we have found \na first-order critical line terminating at an ordered\ncritical point, leading to the potential physical realization\nof going continuously from the ordered phase to the \npartially-ordered phase by circumventing this \ncritical point.\nIn these two topologies, the sequence of transitions \nfrom the disordered \nto the partially-ordered, and then to the\nordered phase, represents the physical \nsituation that occurs in plastic crystals. \nFor conveniently chosen thermodynamic paths, \ni.e., varying temperature and random field appropriately,\none may go from the liquid phase \n($m_{\\sigma}=m_{\\tau}=0$), to a plastic-crystal phase\n($m_{\\sigma} \\neq 0$; $m_{\\tau}=0$), where the rotational degrees \nof freedom are found in a disordered state, and then, \nto an ordered crystal phase\n($m_{\\sigma} \\neq 0$; $m_{\\tau} \\neq 0$).\n \n From the point of view of multicritical behavior, \ntopology III [$(D/J)=1/2$] corresponds to \nthe richest type of phase diagram, being \ncharacterized by several critical lines and \nmulticritical points; one finds its most \ncomplex criticality around $(h_{0}/J)=1$, signaling \na great competition among the different types of orderings. \nAlthough the partially-ordered phase \ndoes not appear in this particular case, one has also \nthe possibility of circumventing the ordered critical point, \nso as to reach a region of the phase diagram \nalong which $|m_{\\tau}|$ becomes very small,\nresembling a partially-ordered phase. \n\nSince the infinite-range interactions among \nvariables of each Ising system correspond to a limit\nwhere the mean-field approach becomes exact, an immediate \nquestion concerns whether some of the results obtained above\nrepresent an artifact of such a limit. 
\nCertainly, such a relevant point is directly related to the \nexistence of some of these features in the associated\nshort-range three-dimensional magnetic models. For example, the\ntricritical point found in topologies III and IV is essentially \nthe same as the one that appears within the mean-field approach of the \nIsing model in the presence of a bimodal random field. \nThis latter model has been extensively investigated on a cubic \nlattice through different numerical approaches, where the \nexistence of this tricritical point is still very controversial. \nOn the other hand, a first-order critical frontier terminating\nat an ordered critical point, and the fact that one can\ngo from one phase to another by\ncircumventing this point, represents a typical \nphysical situation that occurs in real\nsubstances. The potential for exhibiting such a \nrelevant feature represents an important advantage \nof the present model. \n \nFinally, we emphasize that the rich critical behavior presented \nin the phase diagrams corresponding to topologies II and III\nsuggests the range $0<(D/J) \\leq 1/2$ as appropriate\nfor describing plastic crystals. \nThe potential of exhibiting successive transitions from the \nordered to the partially-ordered and then to the \ndisordered phase should be useful for a better \nunderstanding of these systems. \nFurthermore, the characteristic \nof going continuously from the ordered phase \nto the partially-ordered phase by circumventing an ordered \ncritical point represents a typical physical situation that \noccurs in many substances, \nand opens the possibility for \nthe present model to describe a wider range of materials. \n\n\\vskip 2\\baselineskip\n\n{\\large\\bf Acknowledgments}\n\n\\vskip \\baselineskip\n\\noindent\n The partial financial support from CNPq, \nFAPEAM-Projeto-Universal-Amazonas, \nand FAPERJ (Brazilian agencies) is acknowledged. 
\n\n\\vskip 2\\baselineskip\n\n", "meta": {"timestamp": "2014-06-24T02:06:43", "yymm": "1406", "arxiv_id": "1406.5628", "language": "en", "url": "https://arxiv.org/abs/1406.5628"}} {"text": "\\section{Introduction}\n\\label{intro}\nIt is now a well-established fact that according to our present theory of gravity, 85\\%~of the matter content of our universe is missing. Observational evidence for this discrepancy ranges from macroscopic to microscopic scales, e.g. gravitational lensing in galaxy clusters, galactic rotation curves and fluctuations measured in the Cosmic Microwave Background. This has resulted in the hypothesised existence of a new type of matter called Dark Matter. Particle physics provides a well-motivated explanation for this hypothesis: The existence of (until now unobserved) weakly interacting massive particles (WIMPs). A favorite amongst the several WIMP candidates is the neutralino, the lightest particle predicted by Supersymmetry, itself a well-motivated extension of the Standard Model. \n\nIf Supersymmetry is indeed realised in Nature, Supersymmetric particles would have been copiously produced at the start of our Universe in the Big Bang. Initially these particles would have been in thermal equilibrium. After the temperature of the Universe dropped below the neutralino mass, the neutralino number density would have decreased exponentially. Eventually the expansion rate of the Universe would have overcome the neutralino annihilation rate, resulting in a relic density of neutralinos in our Universe today, analogous to the cosmic microwave background. \n\nThese relic neutralinos could then accumulate in massive celestial bodies in our Universe like our Sun through weak interactions with normal matter and gravity. Over time the neutralino density in the core of the object would increase considerably, thereby increasing the local neutralino annihilation probability. In the annihilation process new particles would be created, amongst which neutrinos. 
This neutrino flux could be detectable as a localised emission with earth-based neutrino telescopes like ANTARES.\n\nThis paper gives a brief overview of the prospects for the detection of neutrinos originating from neutralino annihilation in the Sun with the ANTARES neutrino telescope.\n\n\\begin{figure}[b]\n\\center{\n\\includegraphics[width=0.45\\textwidth,angle=0]{NEA_60kHz0XOFF_off.png}\n\\caption{The ANTARES Neutrino Effective Area vs. $E_\\nu$.}\n\\label{fig:1} \n}\n\\end{figure}\n\n\\begin{figure*}[t]\n\\center{\n\\includegraphics[width=0.8\\textwidth,angle=0]{psflux.png}\n\\caption{Predicted $\\nu_\\mu+\\bar{\\nu}_\\mu$ flux from the Sun in mSUGRA parameter space.}\n\\label{fig:2} \n}\n\\end{figure*}\n\n\\section{The ANTARES neutrino telescope}\n\\label{sec:1} \nThe ANTARES undersea neutrino telescope consists of a 3D~grid of 900~photomultiplier tubes arranged in 12~strings, at a depth of 2475~m in the Mediterranean Sea. Three quarters of the telescope have been deployed and half of the detector is already fully operational, making ANTARES the largest neutrino telescope in the northern hemisphere. The angular resolution of the telescope is of the order of one degree at low energy, relevant to dark matter searches, and reaches 0.3 degree at high energies ($>$~10~TeV).\n\nThe sensitivity of a neutrino detector is conventionally expressed as the Neutrino Effective Area, $A_{\\rm eff}^{\\nu}$. 
The $A_{\\rm eff}^{\\nu}$ is a function of neutrino energy $E_\\nu$ and direction $\\Omega$, and is defined as \n\\begin{equation}\nA_{\\rm eff}^{\\nu}(E_\\nu,\\Omega) \\;=\\; V_{\\rm eff}(E_\\nu,\\Omega)\\;\\sigma(E_\\nu)\\;\\rho N_A\\;P_E(E_\\nu,\\Omega)\n\\label{eq:1}\n\\end{equation}\n\n\\noindent where $\\sigma(E_\\nu)$ is the neutrino interaction cross section, $\\rho\\,N_A$ is the nucleon density in/near the detector,\\linebreak $P_E(E_\\nu,\\Omega)$ is the neutrino transmission probability\\linebreak through the Earth and $V_{\\rm eff}(E_\\nu,\\Omega)$ represents the effective detector volume. This last quantity depends not only on the detector geometry and instrumentation, but also on the efficiency of the trigger and reconstruction algorithms that are used. \n\nThe ANTARES $A_{\\rm eff}^{\\nu}$ for upgoing $\\nu_\\mu$ and $\\bar{\\nu}_\\mu$'s, integrated over all directions as a function of the neutrino energy is shown in Fig.~\\ref{fig:1}. The curves represent the $A_{\\rm eff}^{\\nu}$ after triggering only (``{\\em Trigger level}'', in blue) and after reconstruction and selection (``{\\em Detection level}'', in red). The increase of the $A_{\\rm eff}^{\\nu}$ with neutrino energy is mainly due to the fact that $\\sigma(E_\\nu)$ as well as the muon range are both proportional to the neutrino energy.\n\nThe detection rate $R(t)$ for a certain neutrino flux $\\Phi(E_\\nu,\\Omega,t)$ is then defined as\n\n\\begin{equation}\nR(t) \\;=\\; \\iint\\,A_{\\rm eff}^{\\nu}(E_\\nu,\\Omega)\\;\\frac{d\\Phi(E_\\nu,\\Omega,t)}{dE_\\nu\\,d\\Omega}\\;dE_\\nu\\,d\\Omega\n\\label{eq:2}\n\\end{equation}\n\n\\section{Neutralino annihilation in the Sun}\n\\label{sec:2} \nWe calculated the $\\nu_\\mu+\\bar{\\nu}_\\mu$ flux resulting from neutralino annihilation in the centre of the Sun using the DarkSUSY simulation package \\cite{DarkSUSY}. Furthermore, the effects of neutrino oscillations in matter and vacuum as well as absorption were taken into account. 
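The detection-rate integral of Eq. (2) reduces, for a direction-integrated effective area like the one in Fig. 1, to a one-dimensional quadrature over energy. A minimal Python sketch follows; the toy effective-area curve and the falling power-law flux are illustrative assumptions only, not ANTARES values:

```python
import numpy as np

def detection_rate(a_eff, dphi_dE, energies):
    """Trapezoidal integration of A_eff(E) * dPhi/dE over an energy grid."""
    f = a_eff(energies) * dphi_dE(energies)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(energies)))

a_eff = lambda E: 1e-6 * (E / 1e3) ** 2    # toy effective area, rising with E
dphi_dE = lambda E: 1e4 * E ** (-2.7)      # toy steeply falling neutrino flux
E_grid = np.logspace(1, 4, 500)            # 10 GeV .. 10 TeV
rate = detection_rate(a_eff, dphi_dE, E_grid)   # expected events per unit time
```

A logarithmic grid is used because both the effective area and the flux vary over several orders of magnitude across the relevant energy range.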
The top quark mass was set to 172.5~GeV and the NFW-model for the Dark Matter halo with a local halo density \\mbox{$\\rho_0 = 0.3$~GeV/cm$^3$} was used. Instead of the general Supersymmetric framework, we used the more constrained approach of minimal Supergravity (mSUGRA). In mSUGRA, models are characterized by four parameters and a sign: A common gaugino mass $m_{1/2}$, scalar mass $m_0$ and tri-linear scalar coupling $A_0$ at the GUT scale ($10^{16}$ GeV), the ratio of vacuum expectation values of the two Higgs fields $\\tan(\\beta)$ and the sign of the Higgsino mixing parameter $\\mu$. We considered only $\\mu=+1$ models within the following parameter ranges: \\mbox{$01$, models that are already experimentally excluded, or models where the neutralino is not the lightest superpartner. Models in the so-called ``Focus Point'' region\\,\\footnote{The region of mSUGRA parameter space around $(m_0,m_{1/2}) = (2000,400)$.} produce the highest neutrino flux: In this region the neutralino has a relatively large Higgsino component \\cite{Nerzi}. This enhances the neutralino capture rate through $Z$-boson exchange as well as the neutralino annihilation through the\\linebreak \\mbox{$\\chi\\chi\\rightarrow WW/ZZ$} channel, resulting in a large flux of relatively high-energy neutrinos.\n\nThe $\\nu_\\mu+\\bar{\\nu}_\\mu$ flux can also be plotted against the neutralino mass $m_\\chi$, as is shown in Fig.~\\ref{fig:3}. 
In this plot, the mSUGRA models are subdivided into three categories according to how well their $\\Omega_\\chi h^2$ agrees with $\\Omega_{\\rm CDM} h^2$ as measured by WMAP\\,\\footnote{WMAP: $\\Omega_{\\rm CDM} h^2 = 0.1126_{-0.013}^{+0.008}$}: \\mbox{$|\\Omega_\\chi h^2-\\Omega_{\\rm CDM}h^2|<2\\sigma$} (black), \\mbox{$0< \\Omega_\\chi h^2 < \\Omega_{\\rm CDM} h^2$} (blue) and \\mbox{$\\Omega_{\\rm CDM} h^2 < \\Omega_\\chi h^2 < 1$} (magenta).\n\n\\begin{figure}[b]\n \\includegraphics[width=0.45\\textwidth,angle=0]{detection_rate_relic_density.png}\n\\caption{ANTARES detection rate per 3~years vs. $m_\\chi$.}\n\\label{fig:5} \n\\end{figure}\n\n\\section{ANTARES prospects to detect neutralino annihilation in the Sun}\n\\label{sec:3} \nThe ANTARES detection rate (see Eq.~\\ref{eq:2}) for the detection of neutralino annihilation in the Sun was calculated as follows: For each mSUGRA model considered in Sect.~\\ref{sec:2}, the differential $\\nu_\\mu+\\bar{\\nu}_\\mu$ flux from the Sun was convoluted with the Sun's zenith angle distribution as well as the ANTARES $A_{\\rm eff}^{\\nu}$ (see Eq.~\\ref{eq:1} and Fig.~\\ref{fig:1}). The resulting detection rate in ANTARES per 3~years is shown as a function of the neutralino mass in Fig.~\\ref{fig:5}. The color coding in the plot corresponds to the one used in Fig.~\\ref{fig:3}.\n\nThe ANTARES exclusion limit for the detection of neutralino annihilation in the Sun was calculated as follows: As can be seen from Fig.~\\ref{fig:5}, the expected detection rates for all mSUGRA models considered in Sect.~\\ref{sec:2} are small. Therefore the Feldman-Cousins approach \\cite{FeldmanCousins} was used to calculate 90\\%~CL exclusion limits. The two sources of background were taken into account as follows: Since we know the Sun's position in the sky, the atmospheric neutrino background (Volkova parametrisation) was considered only in a 3~degree radius search cone around the Sun's position. 
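The three relic-density categories used in the color coding of Fig. 3 can be written down directly. A Python sketch follows; the WMAP central value is taken from the footnote, and building a symmetric band from the larger (lower) error bar is our simplification:

```python
OMEGA_CDM = 0.1126          # WMAP central value of Omega_CDM h^2
SIGMA = 0.013               # larger (lower) WMAP error bar, used symmetrically

def relic_category(omega_chi):
    """Classify a model's relic density Omega_chi h^2 as in the text."""
    if abs(omega_chi - OMEGA_CDM) < 2 * SIGMA:
        return "within 2 sigma"    # black points
    if 0 < omega_chi < OMEGA_CDM:
        return "sub-dominant"      # blue points: neutralino is part of DM
    if OMEGA_CDM < omega_chi < 1:
        return "over-dense"        # magenta points
    return "outside range"
```

For example, a model with $\Omega_\chi h^2 = 0.05$ would fall in the blue (sub-dominant) category.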
After applying the event selection criteria used to determine the $A_{\\rm eff}^{\\nu}$ in Fig.~\\ref{fig:1}, the misreconstructed atmospheric muon background was considered to be 10\\% of the atmospheric neutrino background. mSUGRA models that are excludable at 90\\%~CL by ANTARES in 3~years are shown in blue in Fig.~\\ref{fig:6}, those that are non-excludable are shown in red. Bright colors indicate models which have \\mbox{$|\\Omega_\\chi h^2-\\Omega_{\\rm CDM}h^2|<2\\sigma$}. The fraction of ANTARES excludable models in mSUGRA parameter space is shown in Fig.~\\ref{fig:4}.\n\n\n\\begin{figure}[t]\n \\includegraphics[width=0.45\\textwidth,angle=0]{detection_rate_exclusion.png}\n\\caption{mSUGRA models 90\\% CL excludable by ANTARES per 3~years vs. $m_\\chi$.}\n\\label{fig:6} \n\\end{figure}\n\n\\begin{figure}[b]\n \\includegraphics[width=0.45\\textwidth,angle=0]{crossection_exclusion_direct_limits.png}\n\\caption{Spin-independent $\\chi p$~cross section vs. $m_\\chi$.}\n\\label{fig:7} \n\\end{figure}\n\n\\begin{figure}[t]\n \\includegraphics[width=0.44\\textwidth,angle=0]{NEA_triggercomparison_off.png}\n\\caption{The ANTARES Neutrino Effective Area at the trigger level vs. $E_\\nu$.}\n\\label{fig:8} \n\\end{figure}\n\n\\section{Comparison to direct detection}\nTo compare with direct detection experiments, the spin-independent $\\chi p$~cross section versus the neutralino mass for all mSUGRA models considered in Sect.~\\ref{sec:2} is shown in Fig.~\\ref{fig:7}. The color coding in the plot corresponds to the one used in Fig.~\\ref{fig:6}. The limits in this plot were taken from the Dark Matter Limit Plot Generator \\cite{DirectDetection}. The spin-independent cross section is driven by CP-even Higgs boson exchange \\cite{Nerzi}. Therefore, mSUGRA models in which the neutralino is of the mixed gaugino-Higgsino type will produce the largest cross sections. 
This implies a correlation between the models that are excludable by direct detection experiments and models excludable by ANTARES, as can be seen from Fig.~\\ref{fig:7}.\n\n\\section{Conclusion \\& Outlook}\nNearly half of the ANTARES detector has been operational since January this year. The detector is foreseen to be completed in early 2008. The data show that the detector is working within the design specifications. \n\nAs can be seen from Fig.~\\ref{fig:4}, mSUGRA models that are excludable by ANTARES at 90\\%~CL are found in the Focus Point region. The same models should also be excludable by future direct detection experiments, as is shown in Fig.~\\ref{fig:7}.\n\nTo improve the ANTARES sensitivity, a directional trigger algorithm has recently been implemented in the data acquisition system. In this algorithm, the known position of the potential neutrino source is used to lower the trigger condition. This increases the trigger efficiency, resulting in a larger $A_{\\rm eff}^{\\nu}$. In Fig.~\\ref{fig:8}, the $A_{\\rm eff}^{\\nu}$ at the trigger level for the standard- and the directional trigger algorithm are shown in black (``{\\em trigger3D}'') and red (``{\\em triggerMX}'') respectively.\n\n\n\n", "meta": {"timestamp": "2007-10-19T14:19:38", "yymm": "0710", "arxiv_id": "0710.3685", "language": "en", "url": "https://arxiv.org/abs/0710.3685"}} {"text": "\\section{Introduction}\\label{S1}\n\nThe multiple access interference (MAI) is the root of user\nlimitation in CDMA systems \\cite{R1,R3}. The parallel least mean\nsquare-partial parallel interference cancelation (PLMS-PPIC) method\nis a multiuser detector for code division multiple access (CDMA)\nreceivers which reduces the effect of MAI in bit detection. In this\nmethod and similar to its former versions like LMS-PPIC \\cite{R5}\n(see also \\cite{RR5}), a weighted value of the MAI of other users is\nsubtracted before making the decision for a specific user in\ndifferent stages \\cite{cohpaper}. 
In both of these methods, the\nnormalized least mean square (NLMS) algorithm is engaged\n\\cite{Haykin96}. The $m^{\\rm th}$ element of the weight vector in\neach stage is the true transmitted binary value of the $m^{\\rm th}$\nuser divided by its hard estimate value from the previous stage. The\nmagnitudes of all weight elements in all stages are equal to unity.\nUnlike the LMS-PPIC, the PLMS-PPIC method tries to keep this\nproperty in each iteration by using a set of NLMS algorithms with\ndifferent step-sizes instead of one NLMS algorithm used in LMS-PPIC.\nIn each iteration, the parameter estimate of the NLMS algorithm\nwhose cancelation-weight elements have magnitudes closest to unity\nis chosen. In the PLMS-PPIC implementation it is assumed\nthat the receiver knows the phases of all user channels. However in\npractice, these phases are not known and should be estimated. In\nthis paper we improve the PLMS-PPIC procedure \\cite{cohpaper} in\nsuch a way that when only partial information about the\nchannel phases is available, this modified version simultaneously estimates the\nphases and the cancelation weights. The partial information is the\nquarter of $(0,2\\pi)$ in which each channel phase lies.\n\nThe rest of the paper is organized as follows: In section \\ref{S4}\nthe modified version of PLMS-PPIC with capability of channel phase\nestimation is introduced. In section \\ref{S5} some simulation\nexamples illustrate the results of the proposed method. Finally the\npaper is concluded in section \\ref{S6}.\n\n\\section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\\label{S4}\n\nWe assume $M$ users synchronously send their symbols\n$\\alpha_1,\\alpha_2,\\cdots,\\alpha_M$ via a base-band CDMA\ntransmission system where $\\alpha_m\\in\\{-1,1\\}$. The $m^{th}$ user\nhas its own code $p_m(.)$ of length $N$, where $p_m(n)\\in \\{-1,1\\}$,\nfor all $n$. 
It means that for each symbol $N$ bits are transmitted\nby each user and the processing gain is equal to $N$. At the\nreceiver we assume that a perfect power control scheme is applied.\nWithout loss of generality, we also assume that the power gains of\nall channels are equal to unity and users' channels do not change\nduring each symbol transmission (they can change from one symbol\ntransmission to the next) and the channel phase $\\phi_m$ of the\n$m^{th}$ user is unknown for all $m=1,2,\\cdots,M$ (see\n\\cite{cohpaper} for coherent transmission). According to the above\nassumptions the received signal is\n\\begin{equation}\n\\label{e1} r(n)=\\sum\\limits_{m=1}^{M}\\alpha_m\ne^{j\\phi_m}p_m(n)+v(n),~~~~n=1,2,\\cdots,N,\n\\end{equation}\nwhere $v(n)$ is the additive white Gaussian noise with zero mean and\nvariance $\\sigma^2$. The multistage parallel interference cancelation\nmethod uses $\\alpha^{s-1}_1,\\alpha^{s-1}_2,\\cdots,\\alpha^{s-1}_M$,\nthe bit estimates outputs of the previous stage, $s-1$, to estimate\nthe related MAI of each user. It then subtracts it from the received\nsignal $r(n)$ and makes a new decision on each user variable\nindividually to make a new variable set\n$\\alpha^{s}_1,\\alpha^{s}_2,\\cdots,\\alpha^{s}_M$ for the current\nstage $s$. Usually the variable set of the first stage (stage $0$)\nis the output of a conventional detector. The output of the last\nstage is considered as the final estimate of transmitted bits. In\nthe following we explain the structure of a modified version of the\nPLMS-PPIC method \\cite{cohpaper} with simultaneous capability of\nestimating the cancelation weights and the channel phases.\n\nAssume $\\alpha_m^{(s-1)}\\in\\{-1,1\\}$ is a given estimate of\n$\\alpha_m$ from stage $s-1$. 
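The received signal of Eq. (1) is straightforward to simulate. A Python sketch follows, using the paper's notation; the symbols, codes and phases are randomly drawn for illustration, and the unit noise variance corresponds to the 0 dB SNR used later in the simulations:

```python
import numpy as np

# r(n) = sum_m alpha_m * exp(j*phi_m) * p_m(n) + v(n),  n = 1..N
rng = np.random.default_rng(0)
M, N = 15, 64                                  # users, processing gain
alpha = rng.choice([-1, 1], size=M)            # transmitted symbols
phi = rng.uniform(0, 2 * np.pi, size=M)        # unknown channel phases
p = rng.choice([-1, 1], size=(M, N))           # spreading codes p_m(n)
sigma2 = 1.0                                   # noise variance (0 dB SNR)
v = np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                           + 1j * rng.standard_normal(N))
r = (alpha * np.exp(1j * phi)) @ p + v         # received signal, length N
```

Each user contributes one complex-rotated code sequence; the matrix product sums those contributions over all $M$ users at once.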
Define\n\\begin{equation}\n\\label{e6} w^s_{m}=\\frac{\\alpha_m}{\\alpha_m^{(s-1)}}e^{j\\phi_m}.\n\\end{equation}\nFrom (\\ref{e1}) and (\\ref{e6}) we have\n\\begin{equation}\n\\label{e7} r(n)=\\sum\\limits_{m=1}^{M}w^s_m\\alpha^{(s-1)}_m\np_m(n)+v(n).\n\\end{equation}\nDefine\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{e8} W^s&=&[w^s_{1},w^s_{2},\\cdots,w^s_{M}]^T,\\\\\n\\label{e9}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!X^{s}(n)\\!\\!\\!&=&\\!\\!\\![\\alpha^{(s-1)}_1p_1(n),\\alpha^{(s-1)}_2p_2(n),\\cdots,\\alpha^{(s-1)}_Mp_M(n)]^T.\n\\end{eqnarray}\n\\end{subequations}\nwhere $T$ stands for transposition. From equations (\\ref{e7}),\n(\\ref{e8}) and (\\ref{e9}), we have\n\\begin{equation}\n\\label{e10} r(n)=W^{s^T}X^{s}(n)+v(n).\n\\end{equation}\nGiven the observations $\\{r(n),X^{s}(n)\\}^{N}_{n=1}$, in modified\nPLMS-PPIC, like the PLMS-PPIC \\cite{cohpaper}, a set of NLMS\nadaptive algorithms is used to compute\n\\begin{equation}\n\\label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T,\n\\end{equation}\nwhich is an estimate of $W^s$ after iteration $N$. To do so, from\n(\\ref{e6}), we have\n\\begin{equation}\n\\label{e13} |w^s_{m}|=1 ~~~m=1,2,\\cdots,M,\n\\end{equation}\nwhich is equivalent to\n\\begin{equation}\n\\label{e14} \\sum\\limits_{m=1}^{M}||w^s_{m}|-1|=0.\n\\end{equation}\nWe divide $\\Psi=\\left(0,1-\\sqrt{\\frac{M-1}{M}}\\right]$, a sharp\nrange for $\\mu$ (the step-size of the NLMS algorithm) given in\n\\cite{sg2005}, into $L$ subintervals and consider $L$ individual\nstep-sizes $\\Theta=\\{\\mu_1,\\mu_2,\\cdots,\\mu_L\\}$, where\n$\\mu_1=\\frac{1-\\sqrt{\\frac{M-1}{M}}}{L}, \\mu_2=2\\mu_1,\\cdots$, and\n$\\mu_L=L\\mu_1$. In each stage, $L$ individual NLMS algorithms are\nexecuted ($\\mu_l$ is the step-size of the $l^{th}$ algorithm). 
In\nstage $s$ and at iteration $n$, if\n$W^{s}_k(n)=[w^s_{1,k},\\cdots,w^s_{M,k}]^T$, the parameter estimate\nof the $k^{\\rm th}$ algorithm, minimizes our criterion, then it is\nconsidered as the parameter estimate at time iteration $n$. In other\nwords, if the following equation holds\n\\begin{equation}\n\\label{e17} W^s_k(n)=\\arg\\min\\limits_{W^s_l(n)\\in I_{W^s}\n}\\left\\{\\sum\\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\\right\\},\n\\end{equation}\nwhere $W^{s}_l(n)=W^{s}(n-1)+\\mu_l \\frac{X^s(n)}{\\|X^s(n)\\|^2}e(n),\n~~~ l=1,2,\\cdots,k,\\cdots,L-1,L$ and\n$I_{W^s}=\\{W^s_1(n),\\cdots,W^s_L(n)\\}$, then we have\n$W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their\nweight estimate by $W^{s}_k(n)$. At time instant $n=N$, this\nprocedure gives $W^s(N)$, the final estimate of $W^s$, as the true\nparameter of stage $s$.\n\nNow consider $R=(0,2\\pi)$ and divide it into four equal parts\n$R_1=(0,\\frac{\\pi}{2})$, $R_2=(\\frac{\\pi}{2},\\pi)$,\n$R_3=(\\pi,\\frac{3\\pi}{2})$ and $R_4=(\\frac{3\\pi}{2},2\\pi)$. The\npartial information on the channel phases (given to the receiver) \nindicates to which of the four quarters $R_i,~i=1,2,3,4$, each \n$\\phi_m$ ($m=1,2,\\cdots,M$) belongs. Assume\n$W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T$ is the weight\nestimate of the modified algorithm PLMS-PPIC at time instant $N$ of\nthe stage $s$. 
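One iteration of the parallel-NLMS selection rule of Eq. (17) can be sketched as follows (Python; the function name and demo setup are ours). Each of the $L$ candidate updates is computed with its own step-size, and the candidate whose weight magnitudes best match unity is kept:

```python
import numpy as np

def plms_step(W, x, r_n, step_sizes):
    """One parallel-NLMS iteration with the selection rule of Eq. (17).

    W: current weight estimate W^s(n-1), complex, length M
    x: regressor X^s(n), entries +/-1, length M
    r_n: received sample r(n)
    """
    e = r_n - W @ x                                    # a-priori error
    cands = [W + mu * x * e / (x @ x) for mu in step_sizes]
    costs = [np.sum(np.abs(np.abs(Wc) - 1.0)) for Wc in cands]
    return cands[int(np.argmin(costs))]               # best match to |w|=1

# Step-size set Theta from the text: mu_l = l * mu_1, l = 1..L
M, L = 15, 12
mu1 = (1 - np.sqrt((M - 1) / M)) / L
step_sizes = [l * mu1 for l in range(1, L + 1)]
```

Since the regressor entries are real and equal to $\pm 1$, the normalization $\|X^s(n)\|^2$ is simply the inner product `x @ x` (equal to $M$ here).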
From equation (\\ref{e6}) we have\n\\begin{equation}\n\\label{tt3}\n\\phi_m=\\angle({\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m}).\n\\end{equation}\nWe estimate $\\phi_m$ by $\\hat{\\phi}^s_m$, where\n\\begin{equation}\n\\label{ee3}\n\\hat{\\phi}^s_m=\\angle{(\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m(N))}.\n\\end{equation}\nBecause $\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1$ or $-1$, we have\n\\begin{eqnarray}\n\\hat{\\phi}^s_m=\\left\\{\\begin{array}{ll} \\angle{w^s_m(N)} &\n\\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1\\\\\n\\pm\\pi+\\angle{w^s_m(N)} & \\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=-1\\end{array}\\right.\n\\end{eqnarray}\nHence $\\hat{\\phi}^s_m\\in P^s=\\{\\angle{w^s_m(N)},\n\\angle{w^s_m(N)}+\\pi, \\angle{w^s_m(N)}-\\pi\\}$. If $w^s_m(N)$\nsufficiently converges to its true value $w^s_m$, the same region\nfor $\\hat{\\phi}^s_m$ and $\\phi_m$ is expected. In this case only one\nof the three members of $P^s$ has the same region as $\\phi_m$. For\nexample if $\\phi_m \\in (0,\\frac{\\pi}{2})$, then $\\hat{\\phi}^s_m \\in\n(0,\\frac{\\pi}{2})$ and therefore only $\\angle{w^s_m(N)}$ or\n$\\angle{w^s_m(N)}+\\pi$ or $\\angle{w^s_m(N)}-\\pi$ belongs to\n$(0,\\frac{\\pi}{2})$. If, for example, $\\angle{w^s_m(N)}+\\pi$ is the only such\nmember among the three members of $P^s$, it is the best\ncandidate for phase estimation. In other words,\n\\[\\phi_m\\approx\\hat{\\phi}^s_m=\\angle{w^s_m(N)}+\\pi.\\]\nWe take the presence of a member of $P^s$ in the quarter of\n$\\phi_m$ as an indication that $w^s_m(N)$ has converged. What would happen when none of\nthe members of $P^s$ has the same quarter as $\\phi_m$? This\nsituation will happen when the absolute difference between $\\angle\nw^s_m(N)$ and $\\phi_m$ is greater than $\\pi$. It means that\n$w^s_m(N)$ has not converged yet. In this case where we cannot\ncount on $w^s_m(N)$, the expected value is the optimum choice for\nthe channel phase estimation, e.g. 
if $\\phi_m \\in (0,\\frac{\\pi}{2})$\nthen $\\frac{\\pi}{4}$ is the estimation of the channel phase\n$\\phi_m$, or if $\\phi_m \\in (\\frac{\\pi}{2},\\pi)$ then\n$\\frac{3\\pi}{4}$ is the estimation of the channel phase $\\phi_m$.\nThe results of the above discussion are summarized in the following\nequation\n\\begin{eqnarray}\n\\nonumber \\hat{\\phi}^s_m = \\left\\{\\begin{array}{llll} \\angle\n{w^s_m(N)} & \\mbox{if}~\n\\angle{w^s_m(N)}, \\phi_m\\in R_i,~~i=1,2,3,4\\\\\n\\angle{w^s_m(N)}+\\pi & \\mbox{if}~ \\angle{w^s_m(N)}+\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\angle{w^s_m(N)}-\\pi & \\mbox{if}~ \\angle{w^s_m(N)}-\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\frac{(i-1)\\pi+i\\pi}{4} & \\mbox{if}~ \\phi_m\\in\nR_i,~~\\angle{w^s_m(N)},\\angle\n{w^s_m(N)}\\pm\\pi\\notin R_i,~~i=1,2,3,4.\\\\\n\\end{array}\\right.\n\\end{eqnarray}\nHaving an estimation of the channel phases, the rest of the proposed\nmethod is given by estimating $\\alpha^{s}_m$ as follows:\n\\begin{equation}\n\\label{tt4}\n\\alpha^{s}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nq^s_m(n)e^{-j\\hat{\\phi}^s_m}p_m(n)\\right\\}\\right\\},\n\\end{equation}\nwhere\n\\begin{equation} \\label{tt5}\nq^{s}_{m}(n)=r(n)-\\sum\\limits_{m^{'}=1,m^{'}\\ne\nm}^{M}w^{s}_{m^{'}}(N)\\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n).\n\\end{equation}\nThe inputs of the first stage $\\{\\alpha^{0}_m\\}_{m=1}^M$ (needed for\ncomputing $X^1(n)$) are given by\n\\begin{equation}\n\\label{qte5}\n\\alpha^{0}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nr(n)e^{-j\\hat{\\phi}^0_m}p_m(n)\\right\\}\\right\\}.\n\\end{equation}\nAssuming $\\phi_m\\in R_i$, then\n\\begin{equation}\n\\label{qqpp} \\hat{\\phi}^0_m =\\frac{(i-1)\\pi+i\\pi}{4}.\n\\end{equation}\nTable \\ref{tab4} shows the structure of the modified PLMS-PPIC\nmethod. 
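The phase-estimation rule above can be sketched compactly (Python; the helper names are ours): among the three candidates $\{\angle w^s_m(N),\ \angle w^s_m(N)+\pi,\ \angle w^s_m(N)-\pi\}$, the one lying in the known quarter $R_i$ is taken, and the quarter's midpoint $\frac{(i-1)\pi+i\pi}{4}$ serves as the fallback:

```python
import numpy as np

def quarter(phase):
    """Index i = 1..4 of the quarter R_i containing phase (mod 2*pi)."""
    return int(np.mod(phase, 2 * np.pi) // (np.pi / 2)) + 1

def estimate_phase(w, i):
    """Estimate phi_m from the converged weight w = w^s_m(N),
    given only the quarter index i of the true phase."""
    ang = np.angle(w)
    for cand in (ang, ang + np.pi, ang - np.pi):
        if quarter(cand) == i:
            return np.mod(cand, 2 * np.pi)     # candidate in the right quarter
    return ((i - 1) * np.pi + i * np.pi) / 4   # midpoint fallback of Eq. (qqpp)
```

The fallback branch is exactly the first-stage initialization $\hat{\phi}^0_m$: with no converged weight available, the midpoint of the known quarter is used.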
It should be noted that\n\\begin{itemize}\n\\item Equation (\\ref{qte5}) shows the conventional bit detection\nmethod when the receiver knows only the quarter of the channel phase in\n$(0,2\\pi)$. \\item With $L=1$ (i.e. only one NLMS algorithm), the\nmodified PLMS-PPIC can be thought of as a modified version of the\nLMS-PPIC method.\n\\end{itemize}\n\nIn the following section some examples are given to illustrate the\neffectiveness of the proposed method.\n\n\\section{Simulations}\\label{S5}\n\nIn this section we present several simulation examples.\nExamples \\ref{ex2}-\\ref{ex4} compare the conventional, the modified\nLMS-PPIC, and the modified PLMS-PPIC methods in three cases: balanced\nchannels, unbalanced channels, and time varying channels. In all\nexamples, the receivers know only the quarter of each channel phase.\nExample \\ref{ex2} compares the modified LMS-PPIC and the\nPLMS-PPIC methods in the case of balanced channels.\n\n\\begin{example}{\\it Balanced channels}:\n\\label{ex2}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex2})} \\label{tabex5} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & $N$ (Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s = 2 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{2-5} & \\multirow{2}{*}{256}& s = 2 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider the system model (\\ref{e7}) in which $M$ users\nsynchronously send their bits to the receiver through their\nchannels.
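Such a synchronous model can be sketched as follows. This is a minimal illustration under assumptions of our own (BPSK bits $\alpha_m$, unit-gain channels $w_m=e^{j\phi_m}$, binary spreading codes $p_m(n)$ normalized to unit energy, and one of several possible noise normalizations); it is not the paper's simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 15, 64                        # number of users and code length, as in this example

p = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(N)  # unit-energy spreading codes p_m(n)
alpha = rng.choice([-1.0, 1.0], size=M)                # BPSK bits alpha_m
phi = rng.uniform(0.0, 2.0 * np.pi, size=M)            # channel phases phi_m
w = np.exp(1j * phi)                                   # balanced channels: |w_m| = 1

# Received signal r(n) = sum_m w_m alpha_m p_m(n) + v(n), with complex AWGN v(n)
v = np.sqrt(0.5) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = (w * alpha) @ p + v
```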
It is assumed that each user's information consists of\ncodes of length $N$. It is also assumed that the signal-to-noise\nratio (SNR) is $0$~dB. In this example, no power unbalance or\nchannel loss is assumed. The step-size of the NLMS algorithm in the\nmodified LMS-PPIC method is $\\mu=0.1(1-\\sqrt{\\frac{M-1}{M}})$ and\nthe set of step-sizes of the parallel NLMS algorithms in the modified\nPLMS-PPIC method is\n$\\Theta=\\{0.01,0.05,0.1,0.2,\\cdots,1\\}(1-\\sqrt{\\frac{M-1}{M}})$,\ni.e. $\\mu_1=0.01(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_4=0.2(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_{12}=(1-\\sqrt{\\frac{M-1}{M}})$. Figure~\\ref{Figexp1NonCoh}\nillustrates the bit error rate (BER) for the case of two stages and\nfor $N=64$ and $N=256$. Simulations also show that there is no\nremarkable difference between the results of the two-stage and three-stage\nscenarios. Table~\\ref{tabex5} compares the average channel phase\nestimate of the first user in each stage and over $10$ runs of the\nmodified LMS-PPIC and PLMS-PPIC methods, when the number of users is\n$M=15$.\n\\end{example}\n\nAlthough LMS-PPIC and PLMS-PPIC, as well as their modified versions,\nare structured based on the assumption of no near-far problem\n(examples \\ref{ex3} and \\ref{ex4}), these methods, and especially the\nlatter, have remarkable performance in the cases of unbalanced\nand/or time varying channels.\n\n\\begin{example}{\\it Unbalanced channels}:\n\\label{ex3}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex3})} \\label{tabex6} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & $N$ (Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s=2 & $\\hat{\\phi}^s_m=\\frac{2.45\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.36\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.71\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.80\\pi}{8}$ \\\\\n\\cline{2-5} & 
\\multirow{2}{*}{256}& s=2 & $\\hat{\\phi}^s_m=\\frac{3.09\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.86\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.93\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.01\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider example \\ref{ex2} with power unbalance and/or channel loss\nin the transmission system, i.e. the true model at stage $s$ is\n\\begin{equation}\n\\label{ve7} r(n)=\\sum\\limits_{m=1}^{M}\\beta_m\nw^s_m\\alpha^{(s-1)}_m c_m(n)+v(n),\n\\end{equation}\nwhere $0<\\beta_m\\leq 1$ for all $1\\leq m \\leq M$. Both the LMS-PPIC\nand the PLMS-PPIC methods assume the model (\\ref{e7}), and their\nestimations are based on the observations $\\{r(n),X^s(n)\\}$, instead of\n$\\{r(n),\\mathbf{G}X^s(n)\\}$, where the channel gain matrix is\n$\\mathbf{G}=\\mbox{diag}(\\beta_1,\\beta_2,\\cdots,\\beta_M)$. In this\ncase we repeat example \\ref{ex2}, drawing each diagonal element of\n$\\mathbf{G}$ at random from $[0,0.3]$. Figure~\\ref{Figexp2NonCoh} illustrates the BER\nversus the number of users. Table~\\ref{tabex6} compares the channel\nphase estimate of the first user in each stage and over $10$ runs of\nthe modified LMS-PPIC and the modified PLMS-PPIC for $M=15$.\n\\end{example}\n\n\\begin{example}\n\\label{ex4} {\\it Time varying channels}: Consider example \\ref{ex2}\nwith time varying Rayleigh fading channels. In this case we assume\na maximum Doppler shift of $40$~Hz, a three-tap\nfrequency-selective channel with delay vector $\\{2\\times\n10^{-6},2.5\\times 10^{-6},3\\times 10^{-6}\\}$~sec and gain vector\n$\\{-5,-3,-10\\}$~dB. Figure~\\ref{Figexp3NonCoh} shows the average BER\nover all users versus $M$, using two stages.\n\\end{example}\n\n\n\\section{Conclusion}\\label{S6}\n\nIn this paper, a parallel interference cancelation method using an adaptive\nmultistage structure and employing a set of NLMS algorithms with\ndifferent step-sizes has been proposed for the case where only the quarter of the\nchannel phase of each user is known.
The algorithm was originally\nproposed in \\cite{cohpaper} for coherent transmission with full information on the channel\nphases; this paper is a modification of that previously proposed\nalgorithm. Simulation results show that the new\nmethod has remarkable performance in different scenarios,\nincluding Rayleigh fading channels, even if the channels are\nunbalanced.\n\n\\section{Introduction}\\label{sec:intro}\n\n\\IEEEPARstart{H}{uman} action recognition is a fast developing research area due to its wide applications\nin intelligent surveillance, human-computer interaction, robotics, and so on.\nIn recent years, human activity analysis based on human skeletal data has attracted a lot of attention,\nand various methods for feature extraction and classifier learning have been developed for skeleton-based action recognition \\cite{zhu2016handcrafted,presti20163d,han2016review}.\nA hidden Markov model (HMM) is utilized by Xia {\\emph{et~al.}}~ \\cite{HOJ3D} to model the temporal dynamics over a histogram-based representation of joint positions for action recognition.\nThe static postures and dynamics of the motion patterns are represented via eigenjoints by Yang and Tian \\cite{eigenjointsJournal}.\nA Naive-Bayes-Nearest-Neighbor classifier learning approach is also used by \\cite{eigenjointsJournal}.\nVemulapalli {\\emph{et~al.}}~ \\cite{vemulapalli2014liegroup} represent the skeleton configurations and action patterns as points and curves in a Lie group,\nand then an SVM classifier is adopted to classify the actions.\nEvangelidis {\\emph{et~al.}}~ \\cite{skeletalQuads} propose to learn a GMM over the Fisher kernel representation of the skeletal quads feature.\nAn angular body configuration representation over the tree-structured set of joints is proposed in \\cite{hog2-ohnbar}.\nA skeleton-based 
dictionary learning method using geometry constraint and group sparsity is also introduced in \\cite{Luo_2013_ICCV}.\n\nRecently, recurrent neural networks (RNNs), which can handle sequential data with variable lengths \\cite{graves2013speechICASSP,sutskever2014sequence},\nhave shown their strength in language modeling \\cite{mikolov2011extensions,sundermeyer2012lstm,mesnil2013investigation},\nimage captioning \\cite{vinyals2015show,xu2015show},\nvideo analysis \\cite{srivastava2015unsupervised,Singh_2016_CVPR,Jain_2016_CVPR,Alahi_2016_CVPR,Deng_2016_CVPR,Ibrahim_2016_CVPR,Ma_2016_CVPR,Ni_2016_CVPR,li2016online},\nand RGB-based activity recognition \\cite{yue2015beyond,donahue2015long,li2016action,wu2015ACMMM}.\nApplications of these networks have also shown promising achievements in skeleton-based action recognition \\cite{du2015hierarchical,veeriah2015differential,nturgbd}.\n\nIn the current skeleton-based action recognition literature, RNNs are mainly used to model the long-term context information across the temporal dimension by representing motion-based dynamics.\nHowever, there are often strong dependency relations among the skeletal joints in the spatial domain as well,\nand this spatial dependency structure is usually discriminative for action classification.\n\n\nTo model the dynamics and dependency relations in both temporal and spatial domains,\nwe propose a spatio-temporal long short-term memory (ST-LSTM) network in this paper.\nIn our ST-LSTM network,\neach joint can receive context information both from its own data at previous frames and from the neighboring joints at the same frame, representing its incoming spatio-temporal context.\nFeeding a simple chain of joints to a sequence learner limits the performance of the network,\nas the human skeletal joints are not semantically arranged as a chain.\nInstead, the adjacency configuration of the joints in the skeletal data can be better represented by a tree structure.\nConsequently, we propose a 
traversal procedure by following the tree structure of the skeleton\nto exploit the kinematic relationship among the body joints for better modeling of spatial dependencies.\n\nSince the 3D positions of skeletal joints provided by depth sensors are not always very accurate,\nwe further introduce a new gating framework, the so-called ``trust gate'',\nfor our ST-LSTM network to analyze the reliability of the input data at each spatio-temporal step.\nThe proposed trust gate gives the ST-LSTM network better insight into\nwhen and how to update, forget, or remember the internal memory content as the representation of the long-term context information.\n\nIn addition, we introduce a feature fusion method within the ST-LSTM unit to better exploit the multi-modal features extracted for each joint.\n\nWe summarize the main contributions of this paper as follows.\n(1) A novel spatio-temporal LSTM (ST-LSTM) network for skeleton-based action recognition is designed.\n(2) A tree traversal technique is proposed to feed the structured human skeletal data into a sequential LSTM network.\n(3) The functionality of the ST-LSTM framework is further extended by adding the proposed ``trust gate''.\n(4) A multi-modal feature fusion strategy within the ST-LSTM unit is introduced.\n(5) The proposed method achieves state-of-the-art performance on seven benchmark datasets.\n\nThe remainder of this paper is organized as follows.\nIn section \\ref{sec:relatedwork}, we introduce the related works on skeleton-based action recognition which use recurrent neural networks to model the temporal dynamics.\nIn section \\ref{sec:approach}, we introduce our end-to-end trainable spatio-temporal recurrent neural network for action recognition.\nThe experiments are presented in section \\ref{sec:exp}.\nFinally, the paper is concluded in section \\ref{sec:conclusion}.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\n\nSkeleton-based action recognition has been explored in different aspects during recent 
years \\cite{7284883,actionletPAMI,MMMP_PAMI,MMTW,Vemulapalli_2016_CVPR,rahmani2014real,shahroudy2014multi,rahmani2015learning,lillo2014discriminative,\njhuang2013towards,\nchen_2016_icassp,liu2016IVC,cai2016TMM,al2016PRL,Tao_2015_ICCV_Workshops\n}.\nIn this section, we limit our review to more recent approaches which use RNNs or LSTMs for human activity analysis.\n\nDu {\\emph{et~al.}}~ \\cite{du2015hierarchical} proposed a Hierarchical RNN network by utilizing multiple bidirectional RNNs in a novel hierarchical fashion.\nThe human skeletal structure was divided into five major joint groups.\nThen each group was fed into the corresponding bidirectional RNN.\nThe outputs of the RNNs were concatenated to represent the upper body and lower body,\nand each was further fed into another set of RNNs.\nBy concatenating the outputs of two RNNs, the global body representation was obtained, which was fed to the next RNN layer.\nFinally, a softmax classifier was used in \\cite{du2015hierarchical} to perform action classification.\n\nVeeriah {\\emph{et~al.}}~ \\cite{veeriah2015differential} proposed to add a new gating mechanism to the LSTM to model the derivatives of the memory states and explore the salient action patterns.\nIn this method, all of the input features were concatenated at each frame and were fed to the differential LSTM at each step.\n\nZhu {\\emph{et~al.}}~ \\cite{zhu2016co} introduced a regularization term to the objective function of the LSTM\nnetwork to push the entire framework towards learning co-occurrence relations among the joints for action recognition.\nAn internal dropout \\cite{dropout} technique within the LSTM unit was also introduced in \\cite{zhu2016co}.\n\nShahroudy {\\emph{et~al.}}~ \\cite{nturgbd} proposed to split the LSTM's memory cell into sub-cells to push the network towards learning the context representations for each body part separately.\nThe output of the network was learned by concatenating the multiple memory sub-cells.\n\nHarvey and 
Pal \\cite{harvey2015semi} adopted an encoder-decoder recurrent network to reconstruct the skeleton sequence and perform action classification at the same time.\nTheir model showed promising results on motion capture sequences.\n\nMahasseni and Todorovic \\cite{mahasseni2016regularizing} proposed to use LSTM to encode a skeleton sequence as a feature vector.\nAt each step, the input of the LSTM consists of the concatenation of the skeletal joints' 3D locations in a frame.\nThey further constructed a feature manifold by using a set of encoded feature vectors.\nFinally, the manifold was used to assist and regularize the supervised learning of another LSTM for RGB video based action recognition.\n\nDifferent from the aforementioned works,\nour proposed method does not simply concatenate the joint-based input features to build the body-level feature representation.\nInstead, the dependencies between the skeletal joints are explicitly modeled by applying recurrent analysis over temporal and spatial dimensions concurrently.\nFurthermore, a novel trust gate is introduced to make our ST-LSTM network more reliable against the noisy input data.\n\nThis paper is an extension of our preliminary conference version \\cite{liu2016spatio}.\nIn \\cite{liu2016spatio}, we validated the effectiveness of our model on four benchmark datasets.\nIn this paper, we extensively evaluate our model on seven challenging datasets.\nBesides, we further propose an effective feature fusion strategy inside the ST-LSTM unit.\nIn order to improve the learning ability of our ST-LSTM network, a last-to-first link scheme is also introduced.\nIn addition, we provide more empirical analysis of the proposed framework.\n\n\n\\section{Spatio-Temporal Recurrent Networks}\n\\label{sec:approach}\n\n\nIn a generic skeleton-based action recognition problem, the input observations are limited to the 3D locations of the major body joints at each frame.\nRecurrent neural networks have been successfully applied 
to\nthis problem recently \\cite{du2015hierarchical,zhu2016co,nturgbd}.\nLSTM networks \\cite{lstm} are among the most successful extensions of recurrent neural networks.\nA gating mechanism controlling the contents of an internal memory cell is adopted by the LSTM model\nto learn a better and more complex representation of long-term dependencies in the input sequential data.\nConsequently, LSTM networks are very suitable for feature learning over time series data (such as human skeletal sequences over time).\n\nWe will briefly review the original LSTM model in this section,\nand then introduce our ST-LSTM network and the tree-structure based traversal approach.\nWe will also introduce a new gating mechanism for ST-LSTM to handle the noisy measurements in the input data for better action recognition.\nFinally, an internal feature fusion strategy for ST-LSTM will be proposed.\n\n\\subsection{Temporal Modeling with LSTM}\n\\label{sec:approach:lstm}\n\nIn the standard LSTM model, each recurrent unit contains an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, and an internal memory cell state $c_t$, together with a hidden state $h_t$.\nThe input gate $i_{t}$ controls the contributions of the newly arrived input data at time step $t$ for updating the memory cell,\nwhile the forget gate $f_{t}$ determines how much the contents of the previous state $(c_{t-1})$ contribute to deriving the current state $(c_{t})$.\nThe output gate $o_{t}$ learns how the output of the LSTM unit at current time step should be derived from the current state of the internal memory cell.\nThese gates and states can be obtained as follows:\n\n\\begin{eqnarray}\n\\left(\n \\begin{array}{ccc}\n i_{t} \\\\\n f_{t} \\\\\n o_{t} \\\\\n u_{t} \\\\\n \\end{array}\n\\right)\n&=&\n\\left(\n \\begin{array}{ccc}\n \\sigma \\\\\n \\sigma \\\\\n \\sigma \\\\\n \\tanh \\\\\n \\end{array}\n\\right)\n\\left(\n M\n \\left(\n \\begin{array}{ccc}\n x_{t} \\\\\n h_{t-1} \\\\\n \\end{array}\n 
\\right)\n\\right)\\\\\nc_{t} &=& i_{t} \\odot u_{t} + f_{t} \\odot c_{t-1}\n\\label{eq:ct}\\\\\nh_{t} &=& o_{t} \\odot \\tanh( c_{t})\n\\label{eq:ht}\n\\end{eqnarray}\nwhere $x_t$ is the input at time step $t$, $u_t$ is the modulated input, $\\odot$ denotes the element-wise product,\nand $M: \\mathbb{R}^{D+d} \\to \\mathbb{R}^{4d}$ is an affine transformation.\n$d$ is the size of the internal memory cell, and $D$ is the dimension of $x_t$.\n\n\\subsection{Spatio-Temporal LSTM}\n\\label{sec:approach:stlstm}\n\n\n\\begin{figure}\n\t\\begin{minipage}[b]{1.0\\linewidth}\n\t\t\\centering\n\t\t\\centerline{\\includegraphics[scale=.338]{STLSTM.pdf}}\n\t\\end{minipage}\n\n\n\n\n\t\\caption{\n\t\tIllustration of the spatio-temporal LSTM network.\n In temporal dimension, the corresponding body joints are fed over the frames.\n\t\tIn spatial dimension, the skeletal joints in each frame are fed as a sequence.\n\t\tEach unit receives the hidden representation of the previous joints and the same joint from previous frames.}\n\n\t\\label{fig:STLSTM}\n\\end{figure}\n\nRNNs have already shown their strengths in modeling the complex dynamics of human activities as time series data,\nand achieved promising performance in skeleton-based human action recognition \\cite{du2015hierarchical,zhu2016co,veeriah2015differential,nturgbd}.\nIn the existing literature, RNNs are mainly utilized in temporal domain to discover the discriminative dynamics and motion patterns for action recognition.\nHowever, there is also discriminative spatial information encoded in the joints' locations and posture configurations at each video frame,\nand the sequential nature of the body joints makes it possible to apply RNN-based modeling to spatial domain as well.\n\nDifferent from the existing methods which concatenate the joints' information as the entire body's representation,\nwe extend the recurrent analysis to spatial domain by discovering the spatial dependency patterns among different body joints.\nWe 
propose a spatio-temporal LSTM (ST-LSTM) network to simultaneously model the temporal dependencies among different frames and also the spatial dependencies of different joints at the same frame.\nEach ST-LSTM unit, which corresponds to one of the body joints,\nreceives the hidden representation of its own joint from the previous time step\nand also the hidden representation of its previous joint at the current frame.\nA schema of this model is illustrated in \\figurename{ \\ref{fig:STLSTM}}.\n\nIn this section, we assume the joints are arranged in a simple chain sequence, and the order is depicted in \\figurename{ \\ref{fig:tree16joints}(a)}.\nIn section \\ref{sec:approach:skeltree}, we will introduce a more advanced traversal scheme to take advantage of the adjacency structure among the skeletal joints.\n\nWe use $j$ and $t$ to respectively denote the indices of joints and frames,\nwhere $j \\in \\{1,...,J\\}$ and $t \\in \\{1,...,T\\}$.\nEach ST-LSTM unit is fed with the input ($x_{j, t}$, the information of the corresponding joint at current time step),\nthe hidden representation of the previous joint at current time step $(h_{j-1,t})$,\nand the hidden representation of the same joint at the previous time step $(h_{j,t-1})$.\n\nAs depicted in \\figurename{ \\ref{fig:STLSTMFig}},\neach unit also has two forget gates, $f_{j, t}^{T}$ and $f_{j, t}^{S}$, to handle the two sources of context information in temporal and spatial dimensions, respectively.\nThe transition equations of ST-LSTM are formulated as follows:\n\\begin{eqnarray}\n\\left(\n \\begin{array}{ccc}\n i_{j, t} \\\\\n f_{j, t}^{S} \\\\\n f_{j, t}^{T} \\\\\n o_{j, t} \\\\\n u_{j, t} \\\\\n \\end{array}\n\\right)\n&=&\n\\left(\n \\begin{array}{ccc}\n \\sigma \\\\\n \\sigma \\\\\n \\sigma \\\\\n \\sigma \\\\\n \\tanh \\\\\n \\end{array}\n\\right)\n\\left(\n M\n \\left(\n \\begin{array}{ccc}\n x_{j, t} \\\\\n h_{j-1, t} \\\\\n h_{j, t-1} \\\\\n \\end{array}\n \\right)\n\\right)\n\\\\\nc_{j, t} &=& i_{j, t} 
\\odot u_{j, t} + f_{j, t}^{S} \\odot c_{j-1, t} + f_{j, t}^{T} \\odot c_{j, t-1}\n\\\\\nh_{j, t} &=& o_{j, t} \\odot \\tanh( c_{j, t})\n\\end{eqnarray}\n\n\\begin{figure}\n\n\n\t\\centerline{\\includegraphics[scale=0.479]{STLSTMFig.pdf}}\n\n\n\n\n\n\t\\caption{Illustration of the proposed ST-LSTM with one unit.}\n\t\\label{fig:STLSTMFig}\n\\end{figure}\n\n\n\n\n\n\\subsection{Tree-Structure Based Traversal}\n\\label{sec:approach:skeltree}\n\n\n\n\\begin{figure}\n\t\\begin{minipage}[b]{0.32\\linewidth}\n\t\t\\centering\n\t\t\\centerline{\\includegraphics[scale=.27]{Skeleton16Joints.pdf}}\n\t\t\\centerline{(a)}\n\t\\end{minipage}\n\t\\begin{minipage}[b]{0.63\\linewidth}\n\t\t\\centering\n\t\t\\centerline{\\includegraphics[scale=.27]{Tree16Joints.pdf}}\n\t\t\\centerline{(b)}\n\t\\end{minipage}\n\t\\begin{minipage}[b]{0.99\\linewidth}\n\t\t\\centering\n\t\t\\centerline{\\includegraphics[scale=.27]{BiTree16Joints.pdf}}\n\t\t\\centerline{(c)}\n\t\\end{minipage}\n\t\\caption{(a) The skeleton of the human body. In the simple joint chain model, the joint visiting order is 1-2-3-...-16.\n(b) The skeleton is transformed to a tree structure.\n(c) The tree traversal scheme. 
The tree structure can be unfolded to a chain with the traversal scheme, and the joint visiting order is 1-2-3-2-4-5-6-5-4-2-7-8-9-8-7-2-1-10-11-12-13-12-11-10-14-15-16-15-14-10-1.}\n\t\\label{fig:tree16joints}\n\\end{figure}\n\nArranging the skeletal joints in a simple chain order ignores the kinematic interdependencies among the body joints.\nMoreover, it adds several semantically false connections between joints which are not strongly related.\n\nThe body joints are popularly represented as a tree-based pictorial structure \\cite{zou2009automatic,yang2011articulated} in human parsing,\nas shown in \\figurename{ \\ref{fig:tree16joints}(b)}.\nIt is beneficial to utilize the known interdependency relations between various sets of body joints as an adjacency tree structure inside our ST-LSTM network as well.\nFor instance, the hidden representation of the neck joint (joint 2 in \\figurename{ \\ref{fig:tree16joints}(a)})\nis often more informative for the right hand joints (7, 8, and 9) compared to joint 6, which lies before them in the numerically ordered chain-like model.\nAlthough using a tree structure for the skeletal data sounds more reasonable here, tree structures cannot be directly fed into the proposed ST-LSTM network in its current form.\n\nIn order to mitigate the aforementioned limitation, a bidirectional tree traversal scheme is proposed.\nIn this scheme, the joints are visited in a sequence, while the adjacency information in the skeletal tree structure is maintained.\nAt the first spatial step, the root node (central spine joint in \\figurename{ \\ref{fig:tree16joints}(c)}) is fed to our network.\nThen the network follows the depth-first traversal order in the spatial (skeleton tree) domain.\nUpon reaching a leaf node, the traversal backtracks in the tree.\nFinally, the traversal goes back to the root node.\n\nIn our traversal scheme, each connection in the tree is visited twice,\nwhich guarantees the transmission of the context data in 
both top-down and bottom-up directions within the adjacency tree structure.\nIn other words, each node (joint) can obtain the context information from both its ancestors and descendants in the hierarchy defined by the tree structure.\nCompared to the simple joint chain order described in section \\ref{sec:approach:stlstm},\nthis tree traversal strategy, which takes advantage of the joints' adjacency structure, can discover stronger long-term spatial dependency patterns in the skeleton sequence.\n\nOur framework's representation capacity can be further improved by stacking multiple layers of the tree-structured ST-LSTMs and making the network deeper, as shown in \\figurename{ \\ref{fig:stackedTreeSTLSTM}}.\n\nIt is worth noting that at each step of our ST-LSTM framework,\nthe input is limited to the information of a single joint at a time step,\nand its dimension is much smaller compared to the concatenated input features used by other existing methods.\nTherefore, our network has much fewer learning parameters.\nThis can be regarded as a weight sharing regularization for our learning model,\nwhich leads to better generalization in the scenarios with relatively small sets of training samples.\nThis is an important advantage for skeleton-based action recognition, since the numbers of training samples in most existing datasets are limited.\n\n\n\\begin{figure}\n\t\\begin{minipage}[b]{0.99\\linewidth}\n\t\t\\centering\n\t\t\\centerline{\\includegraphics[scale=.38]{StackedTreeSTLSTM.pdf}}\n\t\\end{minipage}\n\t\\caption{\nIllustration of the deep tree-structured ST-LSTM network.\nFor clarity, some arrows are omitted in this figure.\nThe hidden representation of the first ST-LSTM layer is fed to the second ST-LSTM layer as its input.\nThe second ST-LSTM layer's hidden representation is fed to the softmax layer for classification.\n}\n\t\\label{fig:stackedTreeSTLSTM}\n\\end{figure}\n\n\\subsection{Spatio-Temporal LSTM with Trust 
Gates}\n\\label{sec:approach:trustgate}\n\n\nIn our proposed tree-structured ST-LSTM network, the inputs are the positions of body joints provided by depth sensors (such as Kinect),\nwhich are not always accurate because of noisy measurements and occlusion.\nThe unreliable inputs can degrade the performance of the network.\n\nTo circumvent this difficulty, we propose to add a novel additional gate to our ST-LSTM network to analyze the reliability of the input measurements, based on estimations of the input derived from the available context information at each spatio-temporal step.\nOur gating scheme is inspired by works in natural language processing \\cite{sutskever2014sequence}\nwhich use the LSTM representation of the previous words at each step to predict the next word.\nAs there are often strong dependencies among the words in a sentence, this idea works decently.\nSimilarly, in a skeletal sequence, the neighboring body joints often move together,\nand this articulated motion follows common yet complex patterns,\nthus the input data $x_{j,t}$ is expected to be predictable by using the contextual information ($h_{j-1,t}$ and $h_{j,t-1}$) at each spatio-temporal step.\n\nInspired by this predictability concept, we add a new mechanism to our ST-LSTM that calculates a prediction of the input at each step and compares it with the actual input.\nThe amount of estimation error is then used to learn a new ``trust gate''.\nThe activation of this new gate can be used to assist the ST-LSTM network in making better decisions about when and how to remember or forget the contents of the memory cell.\nFor instance, if the trust gate learns that the current joint has wrong measurements,\nthen this gate can block the input gate and prevent the memory cell from being altered by the current unreliable input data.\n\nConcretely, we introduce a function to produce a prediction of the input at step $(j,t)$ based on the available context information 
as:\n\\begin{equation}\np_{j, t} = \\tanh\n\\left(\n M_{p}\n \\left(\n \\begin{array}{ccc}\n h_{j-1, t} \\\\\n h_{j, t-1} \\\\\n \\end{array}\n \\right)\n\\right)\n\\label{eq:p_j_t}\n\\end{equation}\nwhere $M_p$ is an affine transformation mapping the data from $\\mathbb{R}^{2d}$ to $\\mathbb{R}^d$; thus the dimension of $p_{j,t}$ is $d$.\nNote that the context information at each step contains not only the representation of the previous temporal step,\nbut also the hidden state of the previous spatial step.\nThis indicates that the long-term context information of both the same joint at previous frames and the other visited joints at the current frame is seamlessly incorporated.\nThus this function is expected to be capable of generating reasonable predictions.\n\nIn our proposed network, the activation of the trust gate is a vector in $\\mathbb{R}^d$ (similar to the activations of the input gate and forget gate).\nThe trust gate $\\tau_{j, t}$ is calculated as follows:\n\\begin{eqnarray}\nx'_{j, t} &=& \\tanh\n\\left(\n M_{x}\n \\left(\n x_{j, t}\n \\right)\n\\right)\n\\label{eq:x_prime_j_t}\n\\\\\n\\tau_{j, t} &=& G (p_{j, t} - x'_{j, t})\n\\label{eq:tau}\n\\end{eqnarray}\nwhere $M_x: \\mathbb{R}^{D} \\to \\mathbb{R}^{d}$ is an affine transformation.\nThe activation function $G(\\cdot)$ is an element-wise operation calculated as $G(z) = \\exp(-\\lambda z^{2})$,\nfor which $\\lambda$ is a parameter to control the bandwidth of the Gaussian function ($\\lambda > 0$).\n$G(z)$ produces a small response if $z$ has a large absolute value and a large response when $z$ is close to zero.\n\nWith the proposed trust gate added, the cell state of ST-LSTM is updated as:\n\\begin{eqnarray}\nc_{j, t} &=& \\tau_{j, t} \\odot i_{j, t} \\odot u_{j, t}\n\\nonumber\\\\\n &&+ (\\bold{1} - \\tau_{j, t}) \\odot f_{j, t}^{S} \\odot c_{j-1, t}\n\\nonumber\\\\\n &&+ (\\bold{1} - \\tau_{j, t}) \\odot f_{j, t}^{T} \\odot c_{j, t-1}\n\\end{eqnarray}\nThis equation can be explained as 
follows:\n(1) if the input $x_{j,t}$ is not trusted (due to noise or occlusion),\nthen our network relies more on its history information, and tries to block the new input at this step;\n(2) on the contrary, if the input is reliable, then our learning algorithm updates the memory cell using the input data.\n\nThe proposed ST-LSTM unit equipped with trust gate is illustrated in \\figurename{ \\ref{fig:TrustGateSTLSTMFig}}.\nThe concept of the proposed trust gate technique is theoretically generic and can be used in other domains to handle noisy input information for recurrent network models.\n\n\\begin{figure}\n\n\n\t\\centerline{\\includegraphics[scale=0.479]{TrustGateSTLSTMFig_X.pdf}}\n\n\n\n\n\n\t\\caption{Illustration of the proposed ST-LSTM with trust gate.}\n\t\\label{fig:TrustGateSTLSTMFig}\n\\end{figure}\n\n\n\\subsection{Feature Fusion within ST-LSTM Unit}\n\\label{sec:approach:innerfusion}\n\n\\begin{figure}\n\t\\centerline{\\includegraphics[scale=0.469]{FusionSTLSTMFig.pdf}}\n\t\\caption{Illustration of the proposed structure for feature fusion inside the ST-LSTM unit.}\n\t\\label{fig:FusionSTLSTMFig}\n\\end{figure}\n\nAs mentioned above, at each spatio-temporal step, the positional information of the corresponding joint at the current frame is fed to our ST-LSTM network.\nHere we refer to the joint position-based feature as a geometric feature.\nBesides utilizing the joint position (3D coordinates),\nwe can also extract visual texture and motion features ({\\emph{e.g.}}~ HOG, HOF \\cite{dalal2006human,wang2011action}, or ConvNet-based features \\cite{simonyan2014very,cheron2015p})\nfrom the RGB frames around each body joint as complementary information.\nThis is intuitively effective for better human action representation, especially in human-object interaction scenarios.\n\n\nA naive way of combining the geometric and visual features for each joint is to concatenate them at the feature level\nand feed them to the corresponding ST-LSTM unit as 
network's input data.\nHowever, the dimension of the geometric feature is intrinsically very low,\nwhile the visual features often have relatively higher dimensions.\nDue to this inconsistency, simple concatenation of these two types of features at the input stage of the network causes degradation in the final performance of the entire model.\n\nThe work in \cite{nturgbd} feeds different body parts into the Part-aware LSTM \cite{nturgbd} separately,\nand then assembles them inside the LSTM unit.\nInspired by this work, we propose to fuse the two types of features inside the ST-LSTM unit,\nrather than simply concatenating them at the input level.\n\n\nWe use $x_{j,t}^{\mathcal{F}}$ (${\mathcal{F}} \in \{1,2\}$) to denote the geometric and visual features for the $j$-th joint at the $t$-th time step.\nAs illustrated in \figurename{ \ref{fig:FusionSTLSTMFig}}, at step $(j,t)$, the two features $(x_{j,t}^{1}$ and $x_{j,t}^{2})$ are fed to the ST-LSTM unit separately as the new input structure.\nInside the recurrent unit, we deploy two sets of gates, namely input gates $(i_{j,t}^{\mathcal{F}})$, forget gates with respect to time $(f_{j,t}^{T, \mathcal{F}})$ and space $(f_{j,t}^{S, \mathcal{F}})$, and trust gates $(\tau_{j, t}^{\mathcal{F}})$, to deal with the two heterogeneous sets of modality features.\nWe put the two cell representations $(c_{j,t}^{\mathcal{F}})$ together to build up the multimodal context information of the two sets of modality features.\nFinally, the output of each ST-LSTM unit is calculated based on the multimodal context representations,\nand controlled by the output gate $(o_{j,t})$, which is shared between the two sets of features.\n\nFor the features of each modality, it is efficient and intuitive to model their context information independently.\nHowever, we argue that the representation ability of each modality-based set of features can be strengthened by borrowing information from the other set of features.\nThus, the proposed 
structure does not completely separate the modeling of multimodal features.\n\nLet us take the geometric feature as an example.\nIts input gate, forget gates, and trust gate are all calculated from the new input $(x_{j,t}^{1})$ and hidden representations $(h_{j,t-1}$ and $h_{j-1,t})$,\nwhereas each hidden representation is an associate representation of two features' context information from previous steps.\nAssisted by visual features' context information,\nthe input gate, forget gates, and also trust gate for geometric feature can effectively learn how to update its current cell state $(c_{j,t}^{1})$.\nSpecifically, for the new geometric feature input $(x_{j,t}^{1})$,\nwe expect the network to produce a better prediction when it is not only based on the context of the geometric features, but also assisted by the context of visual features.\nTherefore, the trust gate $(\\tau_{j, t}^{1})$ will have stronger ability to assess the reliability of the new input data $(x_{j,t}^{1})$.\n\nThe proposed ST-LSTM with integrated multimodal feature fusion is formulated as:\n\n\\begin{eqnarray}\n\\left(\n \\begin{array}{ccc}\n i_{j, t}^\\mathcal{F} \\\\\n f_{j, t}^{S,\\mathcal{F}} \\\\\n f_{j, t}^{T,\\mathcal{F}} \\\\\n u_{j, t}^\\mathcal{F} \\\\\n \\end{array}\n\\right)\n&=&\n\\left(\n \\begin{array}{ccc}\n \\sigma \\\\\n \\sigma \\\\\n \\sigma \\\\\n \\tanh \\\\\n \\end{array}\n\\right)\n\\left(\n M^\\mathcal{F}\n \\left(\n \\begin{array}{ccc}\n x_{j, t}^\\mathcal{F} \\\\\n h_{j-1, t} \\\\\n h_{j, t-1} \\\\\n \\end{array}\n \\right)\n\\right)\n\\\\\np_{j, t}^\\mathcal{F} &=& \\tanh\n\\left(\n M_{p}^\\mathcal{F}\n \\left(\n \\begin{array}{ccc}\n h_{j-1, t} \\\\\n h_{j, t-1} \\\\\n \\end{array}\n \\right)\n\\right)\n\\\\\n{x'}_{j, t}^\\mathcal{F} &=& \\tanh\n\\left(\n M_{x}^\\mathcal{F}\n \\left(\n \\begin{array}{ccc}\n x_{j, t}^\\mathcal{F}\\\\\n \\end{array}\n \\right)\n\\right)\n\\\\\n\\tau_{j, t}^{\\mathcal{F}} &=& G ({x'}_{j, t}^{\\mathcal{F}} - p_{j, 
t}^{\\mathcal{F}})\n\\\\\nc_{j, t}^{\\mathcal{F}} &=& \\tau_{j, t}^{\\mathcal{F}} \\odot i_{j, t}^{\\mathcal{F}} \\odot u_{j, t}^{\\mathcal{F}}\n\\nonumber\\\\\n &&+ (\\bold{1} - \\tau_{j, t}^{\\mathcal{F}}) \\odot f_{j, t}^{S,\\mathcal{F}} \\odot c_{j-1, t}^{\\mathcal{F}}\n\\nonumber\\\\\n &&+ (\\bold{1} - \\tau_{j, t}^{\\mathcal{F}}) \\odot f_{j, t}^{T,\\mathcal{F}} \\odot c_{j, t-1}^{\\mathcal{F}}\n\\\\\no_{j, t} &=& \\sigma\n\\left(\nM_{o}\n \\left(\n \\begin{array}{ccc}\n x_{j, t}^{1} \\\\\n x_{j, t}^{2} \\\\\n h_{j-1, t} \\\\\n h_{j, t-1} \\\\\n \\end{array}\n \\right)\n\\right)\n\\\\\nh_{j, t} &=& o_{j, t} \\odot \\tanh\n \\left(\n \\begin{array}{ccc}\n c_{j, t}^{1} \\\\\n c_{j, t}^{2} \\\\\n \\end{array}\n \\right)\n\\end{eqnarray}\n\n\n\\subsection{Learning the Classifier}\n\\label{sec:approach:learning}\nAs the labels are given at video level, we feed them as the training outputs of our network at each spatio-temporal step.\nA softmax layer is used by the network to predict the action class $\\hat{y}$ among the given class set $Y$.\nThe prediction of the whole video can be obtained by averaging the prediction scores of all steps.\nThe objective function of our ST-LSTM network is as follows:\n\\begin{equation}\n\\mathcal{L} = \\sum_{j=1}^J \\sum_{t=1}^T l(\\hat{y}_{j,t}, y)\n\\end{equation}\nwhere $l(\\hat{y}_{j,t}, y)$ is the negative log-likelihood loss \\cite{graves2012supervised}\nthat measures the difference between the prediction result $\\hat{y}_{j,t}$ at step $(j,t)$ and the true label $y$.\n\nThe back-propagation through time (BPTT) algorithm \\cite{graves2012supervised} is often effective for minimizing the objective function for the RNN/LSTM models.\nAs our ST-LSTM model involves both spatial and temporal steps, we adopt a modified version of BPTT for training.\nThe back-propagation runs over spatial and temporal steps simultaneously by starting at the last joint at the last frame.\nTo clarify the error accumulation in this procedure, we use 
$e_{j,t}^T$ and $e_{j,t}^S$ to denote the error back-propagated from step $(j,t+1)$ to $(j,t)$ and the error back-propagated from step $(j+1,t)$ to $(j,t)$, respectively.\nThen the errors accumulated at step $(j,t)$ can be calculated as $e_{j,t}^T+e_{j,t}^S$.\nConsequently, before back-propagating the error at each step, we should guarantee that the errors of both its subsequent spatial step and its subsequent temporal step have already been computed.\n\nThe left-most units in our ST-LSTM network do not have preceding spatial units, as shown in \figurename{ \ref{fig:STLSTM}}.\nTo update the cell states of these units in the feed-forward stage,\na popular strategy is to input zero values into these nodes to substitute for the hidden representations from the preceding nodes.\nIn our implementation, we link the last unit at the previous time step to the first unit at the current time step.\nWe call this new connection the last-to-first link.\nIn the tree traversal, the first and last nodes refer to the same joint (the root node of the tree);\nhowever, the last node contains holistic information about the human skeleton in the corresponding frame.\nLinking the last node to the starting node at the next time step provides the starting node with the whole body structure configuration,\nrather than initializing it with less effective zero values.\nThus, the network can better learn the action patterns in the skeleton sequence.\n\n\n\section{Experiments}\n\label{sec:exp}\n\n\nThe proposed method is evaluated and empirically analyzed on seven benchmark datasets for which the coordinates of skeletal joints are provided.\nThese datasets are NTU RGB+D, UT-Kinect, SBU Interaction, SYSU-3D, ChaLearn Gesture, MSR Action3D, and Berkeley MHAD.\nWe conduct extensive experiments with different models to verify the effectiveness of each of the proposed technical contributions, as follows:\n\n(1) ``ST-LSTM (Joint Chain)''.\n In this model, the joints are visited in a simple chain order, as 
shown in \\figurename{ \\ref{fig:tree16joints}(a)};\n\n(2) ``ST-LSTM (Tree)''.\n In this model, the tree traversal scheme illustrated in \\figurename{ \\ref{fig:tree16joints}(c) is used to take advantage of the tree-based spatial structure of skeletal joints;\n\n(3) ``ST-LSTM (Tree) + Trust Gate''.\n This model uses the trust gate to handle the noisy input.\n\n\nThe input to every unit of of our network at each spatio-temporal step is the location of the corresponding skeletal joint (i.e., geometric features) at the current time step.\nWe also use two of the datasets (NTU RGB+D dataset and UT-Kinect dataset) as examples\nto evaluate the performance of our fusion model within the ST-LSTM unit by fusing the geometric and visual features.\nThese two datasets include human-object interactions (such as making a phone call and picking up something)\nand the visual information around the major joints can be complementary to the geometric features for action recognition.\n\n\n\n\n\n\\subsection{Evaluation Datasets}\n\\label{sec:exp:datasets}\n\n{\\bf NTU RGB+D dataset} \\cite{nturgbd} was captured with Kinect (v2).\nIt is currently the largest publicly available dataset for depth-based action recognition, which contains more than 56,000 video sequences and 4 million video frames.\nThe samples in this dataset were collected from 80 distinct viewpoints.\nA total of 60 action classes (including daily actions, medical conditions, and pair actions) were performed by 40 different persons aged between 10 and 35.\nThis dataset is very challenging due to the large intra-class and viewpoint variations.\nWith a large number of samples, this dataset is highly suitable for deep learning based activity analysis.\nThe parameters learned on this dataset can also be used to initialize the models for smaller datasets to improve and speed up the training process of the network. 
\nThe 3D coordinates of 25 body joints are provided in this dataset.\n\n{\\bf UT-Kinect dataset} \\cite{HOJ3D} was captured with a stationary Kinect sensor.\nIt contains 10 action classes.\nEach action was performed twice by every subject.\nThe 3D locations of 20 skeletal joints are provided.\nThe significant intra-class and viewpoint variations make this dataset very challenging.\n\n{\\bf SBU Interaction dataset} \\cite{yun2012two} was collected with Kinect.\nIt contains 8 classes of two-person interactions, and includes 282 skeleton sequences with 6822 frames.\nEach body skeleton consists of 15 joints.\nThe major challenges of this dataset are:\n(1) in most interactions, one subject is acting, while the other subject is reacting; and\n(2) the 3D measurement accuracies of the joint coordinates are low in many sequences.\n\n\n{\\bf SYSU-3D dataset} \\cite{jianfang_CVPR15} contains 480 sequences and was collected with Kinect.\nIn this dataset, 12 different activities were performed by 40 persons.\nThe 3D coordinates of 20 joints are provided in this dataset.\nThe SYSU-3D dataset is a very challenging benchmark because:\n(1) the motion patterns are highly similar among different activities, and\n(2) there are various viewpoints in this dataset.\n\n{\\bf ChaLearn Gesture dataset} \\cite{escalera2013multi} consists of 23 hours of videos captured with Kinect.\nA total of 20 Italian gestures were performed by 27 different subjects.\nThis dataset contains 955 long-duration videos and has predefined splits of samples as training, validation and testing sets.\nEach skeleton in this dataset has 20 joints.\n\n{\\bf MSR Action3D dataset} \\cite{li2010action} is widely used for depth-based action recognition.\nIt contains a total of 10 subjects and 20 actions.\nEach action was performed by the same subject two or three times.\nEach frame in this dataset contains 20 skeletal joints.\n\n{\\bf Berkeley MHAD dataset} \\cite{ofli2013berkeley} was collected by using a motion capture 
network of sensors.\nIt contains 659 sequences and about 82 minutes of recording time.\nEleven action classes were performed by five female and seven male subjects.\nThe 3D coordinates of 35 skeletal joints are provided in each frame.\n\n\n\n\subsection{Implementation Details}\n\label{sec:exp:impdetails}\n\n\nIn our experiments, each video sequence is divided into $T$ sub-sequences of the same length, and one frame is randomly selected from each sub-sequence.\nThis sampling strategy has the following advantages:\n(1) Randomly selecting a frame from each sub-sequence adds variation to the input data and improves the generalization strength of our trained network.\n(2) Assuming each sub-sequence contains $n$ frames,\nwe have $n$ choices to sample a frame from each sub-sequence.\nAccordingly, for the whole video, we can obtain a total of $n^T$ sampling combinations.\nThis indicates that the training data can be greatly augmented.\nWe use different frame sampling combinations for each video over different training epochs.\nThis strategy is useful for alleviating over-fitting,\nas most datasets have limited numbers of training samples.\nWe observe that this strategy achieves better performance than uniformly sampling frames.\nWe cross-validated the performance based on the leave-one-subject-out protocol on the large-scale NTU RGB+D dataset, and found $T=20$ to be the optimal value.\n\nWe use Torch7 \cite{collobert2011torch7} as the deep learning platform to perform our experiments.\nWe train the network with stochastic gradient descent,\nand set the learning rate, momentum, and decay rate to $2$$\times$$10^{-3}$, $0.9$, and $0.95$, respectively.\nWe set the unit size $d$ to 128, and the parameter $\lambda$ used in $G(\cdot)$ to $0.5$.\nTwo ST-LSTM layers are used in our stacked network.\nAlthough there are variations in terms of joint number, sequence length, and data acquisition equipment for different datasets,\nwe adopt the same 
parameter settings mentioned above for all datasets.\nOur method achieves promising results on all the benchmark datasets with these parameter settings untouched, which shows the robustness of our method.\n\n\nAn NVIDIA TitanX GPU is used to perform our experiments.\nWe evaluate the computational efficiency of our method on the NTU RGB+D dataset and set the batch size to $100$.\nOn average, within one second, $210$, $100$, and $70$ videos can be processed\nby using ``ST-LSTM (Joint Chain)'', ``ST-LSTM (Tree)'', and ``ST-LSTM (Tree) + Trust Gate'', respectively.\n\n\n\n\n\n\\subsection{Experiments on the NTU RGB+D Dataset}\n\\label{sec:exp:resNTU}\n\n\n\nThe NTU RGB+D dataset has two standard evaluation protocols \\cite{nturgbd}.\nThe first protocol is the cross-subject (X-Subject) evaluation protocol,\nin which half of the subjects are used for training and the remaining subjects are kept for testing.\nThe second is the cross-view (X-View) evaluation protocol,\nin which $2/3$ of the viewpoints are used for training,\nand $1/3$ unseen viewpoints are left out for testing.\nWe evaluate the performance of our method on both of these protocols.\nThe results are shown in \\tablename{ \\ref{table:resultNTU}}.\n\n\\begin{table}[!htp]\n\\caption{Experimental results on the NTU RGB+D Dataset}\n\\label{table:resultNTU}\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nMethod & Feature & X-Subject & X-View \\\\\n\\hline\nLie Group \\cite{vemulapalli2014liegroup} & Geometric & 50.1\\% & 52.8\\% \\\\\nCippitelli {\\emph{et~al.}}~ \\cite{cippitelli2016evaluation} & Geometric & 48.9\\% & 57.7\\% \\\\\nDynamic Skeletons \\cite{jianfang_CVPR15} & Geometric & 60.2\\% & 65.2\\% \\\\\nFTP \\cite{rahmani20163d} & Geometric & 61.1\\% & 72.6\\% \\\\ \nHierarchical RNN \\cite{du2015hierarchical} & Geometric & 59.1\\% & 64.0\\% \\\\\nDeep RNN \\cite{nturgbd} & Geometric & 56.3\\% & 64.1\\% \\\\\nPart-aware LSTM \\cite{nturgbd} & Geometric & \t62.9\\% &\t70.3\\% \\\\\n\\hline\nST-LSTM 
(Joint Chain) & Geometric &\t61.7\\%\t& 75.5\\% \\\\\nST-LSTM (Tree) & Geometric & \t65.2\\% &\t76.1\\% \\\\\nST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{69.2\\%}\t& \\textbf{77.7\\%} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nIn \\tablename{ \\ref{table:resultNTU}},\nthe deep RNN model concatenates the joint features at each frame and then feeds them to the network to model the temporal kinetics, and ignores the spatial dynamics.\nAs can be seen, both ``ST-LSTM (Joint Chain)'' and ``ST-LSTM (Tree)'' models outperform this method by a notable margin.\nIt can also be observed that our approach utilizing the trust gate brings significant performance improvement,\nbecause the data provided by Kinect is often noisy and multiple joints are frequently occluded in this dataset.\nNote that our proposed models (such as ``ST-LSTM (Tree) + Trust Gate'') reported in this table only use skeletal data as input.\n\nWe compare the class specific recognition accuracies of ``ST-LSTM (Tree)'' and ``ST-LSTM (Tree) + Trust Gate'', as shown in \\figurename{ \\ref{fig:ClassAccuracy_NTU}}.\nWe observe that ``ST-LSTM (Tree) + Trust Gate'' significantly outperforms ``ST-LSTM (Tree)'' for most of the action classes,\nwhich demonstrates our proposed trust gate can effectively improve the human action recognition accuracy by learning the degrees of reliability over the input data at each time step.\n\n\\begin{figure*}\n\\begin{minipage}[b]{1.0\\linewidth}\n \\centering\n \\centerline{\\includegraphics[scale=0.38]{ClassAccuracy_NTU.pdf}}\n\\end{minipage}\n\\caption{Recognition accuracy per class on the NTU RGB+D dataset}\n\\label{fig:ClassAccuracy_NTU}\n\\end{figure*}\n\nAs shown in \\figurename{ \\ref{fig:NTUNoisySamples}},\na notable portion of videos in the NTU RGB+D dataset were collected in side views.\nDue to the design of Kinect's body tracking mechanism,\nskeletal data is less accurate in side view compared to the front view.\nTo further investigate the effectiveness of the 
proposed trust gate,\nwe analyze the performance of the network by feeding only the side-view samples.\nThe accuracy of ``ST-LSTM (Tree)'' is 76.5\%,\nwhile ``ST-LSTM (Tree) + Trust Gate'' yields 81.6\%.\nThis shows that the trust gate can effectively deal with the noise in the input data.\n\n\begin{figure}\n\begin{minipage}[b]{1.0\linewidth}\n \centering\n \centerline{\includegraphics[scale=0.199]{NoisySamples.jpg}}\n\end{minipage}\n\caption{Examples of the noisy skeletons from the NTU RGB+D dataset.}\n\label{fig:NTUNoisySamples}\n\end{figure}\n\n\nTo verify the performance boost brought by stacking layers,\nwe limit the depth of the network by using only one ST-LSTM layer,\nand the accuracies drop to 65.5\% and 77.0\% based on the cross-subject and cross-view protocols, respectively.\nThis indicates that our two-layer stacked network has better representation power than the single-layer network.\n\n\n\n\nTo evaluate the performance of our feature fusion scheme,\nwe extract visual features from several regions based on the joint positions and use them in addition to the geometric features (3D coordinates of the joints).\nWe extract HOG and HOF \cite{dalal2006human,wang2011action} features from an $80\times80$ RGB patch centered at each joint location.\nFor each joint, this produces a 300D visual descriptor,\nand we apply PCA to reduce the dimension to 20.\nThe results are shown in \tablename{ \ref{table:resultNTUFusion}}.\nWe observe that our method using the visual features together with the joint positions improves the performance.\nBesides, we compare our newly proposed feature fusion strategy within the ST-LSTM unit with two other feature fusion methods:\n(1) early fusion, which simply concatenates the two types of features as the input of the ST-LSTM unit;\n(2) late fusion, which uses two ST-LSTMs to handle the two types of features separately,\nthen concatenates the outputs of the two ST-LSTMs at each step,\nand feeds the 
concatenated result to a softmax classifier.\nWe observe that our proposed feature fusion strategy is superior to other baselines.\n\n\\begin{table}[h]\n\t\t\\caption{Evaluation of different feature fusion strategies on the NTU RGB+D dataset.\n``Geometric + Visual (1)'' indicates the early fusion scheme.\n``Geometric + Visual (2)'' indicates the late fusion scheme.\n``Geometric $\\bigoplus$ Visual'' means our newly proposed feature fusion scheme within the ST-LSTM unit.}\n\t\t\\label{table:resultNTUFusion}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tFeature Fusion Method & X-Subject & X-View\n \\\\\n\t\t\t\\hline\n\t\t\n\t\t\tGeometric Only & 69.2\\%\t& 77.7\\% \\\\\n Geometric + Visual (1) & 70.8\\% & 78.6\\% \\\\\n Geometric + Visual (2) & 71.0\\% & 78.7\\% \\\\\n Geometric $\\bigoplus$ Visual &73.2\\% & 80.6\\% \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\\\\n\\end{table}\n\nWe also evaluate the sensitivity of the proposed network with respect to the variation of neuron unit size and $\\lambda$ values.\nThe results are shown in \\figurename{ \\ref{fig:NTUResultLambda}}.\nWhen trust gate is added,\nour network obtains better performance for all the $\\lambda$ values compared to the network without the trust gate.\n\n\\begin{figure}\n\\begin{minipage}[b]{1.0\\linewidth}\n \\centering\n \\centerline{\\includegraphics[scale=.57]{NTUResultLambda1.pdf}}\n\n \\centerline{\\includegraphics[scale=.57]{NTUResultLambda2.pdf}}\n \n\\end{minipage}\n\\caption{(a) Performance comparison of our approach using different values of neuron size ($d$) on the NTU RGB+D dataset (X-subject).\n(b) Performance comparison of our method using different $\\lambda$ values on the NTU RGB+D dataset (X-subject).\nThe blue line represents our results when different $\\lambda$ values are used for trust gate,\nwhile the red dashed line indicates the performance of our method when trust gate is not added.}\n\\label{fig:NTUResultLambda}\n\\end{figure}\n\n\nFinally, we 
investigate the recognition performance under early-stopping conditions,\nby feeding only the first fraction $p$ of each testing video to the trained network based on the cross-subject protocol ($p \in \{0.1, 0.2, ..., 1.0\}$).\nThe results are shown in \figurename{ \ref{fig:NTUResultEarlyStop}}.\nWe can observe that the results improve when a larger portion of the video is fed to our network.\n\n\begin{figure}\n\begin{minipage}[b]{1.0\linewidth}\n \centering\n \centerline{\includegraphics[scale=.57]{NTUResultEarlyStop.pdf}}\n\end{minipage}\n\caption{Experimental results of our method by early stopping the network evolution at different time steps.}\n\label{fig:NTUResultEarlyStop}\n\end{figure}\n\n\n\n\n\subsection{Experiments on the UT-Kinect Dataset}\n\label{sec:exp:resUTKinect}\nThere are two evaluation protocols for the UT-Kinect dataset in the literature.\nThe first is the leave-one-out cross-validation (LOOCV) protocol \cite{HOJ3D}.\nThe second protocol is suggested by \cite{zhu2013fusing}, for which half of the subjects are used for training, and the remaining subjects are used for testing.\nWe evaluate our approach using both protocols on this dataset.\n\nUsing the LOOCV protocol,\nour method achieves better performance than other skeleton-based methods,\nas shown in \tablename{ \ref{table:resultUTKinectprotocol1}}.\nUsing the second protocol (see \tablename{ \ref{table:resultUTKinectprotocol2}}),\nour method achieves a result (95.0\%) competitive with the Elastic Functional Coding method \cite{anirudh2015elastic} (94.9\%),\nwhich is an extension of the Lie Group model \cite{vemulapalli2014liegroup}.\n\n\begin{table}[!htp]\n\t\t\caption{Experimental results on the UT-Kinect dataset (LOOCV protocol \cite{HOJ3D})}\n\t\t\label{table:resultUTKinectprotocol1}\n\t\t\centering\n\t\t\begin{tabular}{|l|c|c|}\n\t\t\t\hline\n\t\t\tMethod & Feature & Acc. 
\\\\\n\t\t\t\\hline\n\t\t\n\t\t\tGrassmann Manifold \\cite{slama2015accurate} & Geometric & 88.5\\% \\\\\n Jetley {\\emph{et~al.}}~ \\cite{jetley20143d} & Geometric& 90.0\\% \\\\\n Histogram of 3D Joints \\cite{HOJ3D} & Geometric & 90.9\\% \\\\\n Space Time Pose \\cite{devanne2013space} & Geometric & 91.5\\% \\\\\n\t\t\tRiemannian Manifold \\cite{devanne20153d} & Geometric & 91.5\\% \\\\\n SCs (Informative Joints) \\cite{jiang2015informative} & Geometric & 91.9\\% \\\\\n Chrungoo {\\emph{et~al.}}~ \\cite{chrungoo2014activity} & Geometric & 92.0\\% \\\\\n Key-Pose-Motifs Mining\\cite{Wang_2016_CVPR_Mining} & Geometric & 93.5\\% \\\\\n \n\t\t\t\\hline\n\t\t\tST-LSTM (Joint Chain) & Geometric & 91.0\\% \\\\\n\t\t\tST-LSTM (Tree) & Geometric & 92.4\\% \\\\\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{97.0\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\end{table}\n\n\n\n\\begin{table}[!htp]\n\t\t\\caption{Results on the UT-Kinect dataset (half-vs-half protocol \\cite{zhu2013fusing})}\n\t\t\\label{table:resultUTKinectprotocol2}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tMethod & Feature & Acc. 
\\\\\n\t\t\t\\hline\n \n\t\t\tSkeleton Joint Features \\cite{zhu2013fusing} & Geometric & 87.9\\% \\\\\n Chrungoo {\\emph{et~al.}}~ \\cite{chrungoo2014activity} & Geometric & 89.5\\% \\\\\n\t\t\tLie Group \\cite{vemulapalli2014liegroup} (reported by \\cite{anirudh2015elastic}) & Geometric & 93.6\\% \\\\\n\t\t\tElastic functional coding \\cite{anirudh2015elastic} & Geometric & 94.9\\% \\\\\n\t\t\n\t\t\t\\hline\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{95.0\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\end{table}\n\n\n\n\n\n\n\n\nSome actions in the UT-Kinect dataset involve human-object interactions, thus appearance based features representing visual information of the objects can be complementary to the geometric features.\nThus we can evaluate our proposed feature fusion approach within the ST-LSTM unit on this dataset.\nThe results are shown in \\tablename{ \\ref{table:resultUTFusion}.\nUsing geometric features only, the accuracy is 97\\%.\nBy simply concatenating the geometric and visual features, the accuracy improves slightly.\nHowever, the accuracy of our approach can reach 98\\% when the proposed feature fusion method is adopted.\n\n\\begin{table}[h]\n\t\t\\caption{Evaluation of our approach for feature fusion on the UT-Kinect dataset (LOOCV protocol \\cite{HOJ3D}).\n``Geometric + Visual'' indicates we simply concatenate the two types of features as the input.\n``Geometric $\\bigoplus$ Visual'' means we use the newly proposed feature fusion scheme within the ST-LSTM unit.}\n\t\t\\label{table:resultUTFusion}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tFeature Fusion Method & Acc. 
\\\\\n\t\t\n\t\t\t\\hline\n\t\t\tGeometric Only & 97.0\\% \\\\\n Geometric + Visual & 97.5\\% \\\\\n Geometric $\\bigoplus$ Visual &98.0\\% \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\\\\n\\scriptsize\n\\end{table}\n\n\n\n\n\\subsection{Experiments on the SBU Interaction Dataset}\n\\label{sec:exp:resSBU}\nWe follow the standard evaluation protocol in \\cite{yun2012two} and perform 5-fold cross validation on the SBU Interaction dataset.\nAs two human skeletons are provided in each frame of this dataset,\nour traversal scheme visits the joints throughout the two skeletons over the spatial steps.\n\nWe report the results in terms of average classification accuracy in \\tablename{ \\ref{table:resultSBU}}.\nThe methods in \\cite{zhu2016co} and \\cite{du2015hierarchical} are both LSTM-based approaches, which are more relevant to our method.\n\n\\begin{table}[h]\n\t\t\\caption{Experimental results on the SBU Interaction dataset}\n\t\t\\label{table:resultSBU}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tMethod & Feature & Acc. 
\\\n\t\t\n \hline\n\t\t\tYun {\emph{et~al.}}~ \cite{yun2012two} & Geometric & 80.3\% \\\n\t\t\tJi {\emph{et~al.}}~ \cite{ji2014interactive} & Geometric & 86.9\% \\\n\t\t\tCHARM \cite{li2015category} & Geometric & 83.9\% \\\n\t\t\tHierarchical RNN \cite{du2015hierarchical} & Geometric & 80.4\% \\\n\t\t\tCo-occurrence LSTM \cite{zhu2016co} & Geometric & 90.4\% \\\n\t\t\tDeep LSTM \cite{zhu2016co} & Geometric & 86.0\% \\\n \n \n \hline\n\t\t\tST-LSTM (Joint Chain) & Geometric & 84.7\% \\\n\t\t\tST-LSTM (Tree) & Geometric & 88.6\% \\\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \textbf{93.3\%} \\\n\t\t\t\hline\n\t\t\end{tabular}\t\t\n\end{table}\n\nThe results show that the proposed ``ST-LSTM (Tree) + Trust Gate'' model outperforms all other skeleton-based methods.\n``ST-LSTM (Tree)'' achieves higher accuracy than ``ST-LSTM (Joint Chain)'',\nas the latter adds some false links between less-related joints.\n\nBoth Co-occurrence LSTM \cite{zhu2016co} and Hierarchical RNN \cite{du2015hierarchical} adopt the Savitzky-Golay filter \cite{savitzky1964smoothing} in the temporal domain\nto smooth the skeletal joint positions and reduce the influence of noise in the data collected by Kinect.\n\nThe proposed ``ST-LSTM (Tree)'' model without the trust gate mechanism outperforms Hierarchical RNN,\nand achieves a comparable result (88.6\%) to Co-occurrence LSTM.\nWhen the trust gate is used, the accuracy of our method jumps to 93.3\%.\n\n\n\subsection{Experiments on the SYSU-3D Dataset}\n\label{sec:exp:resSYSU}\n\nWe follow the standard evaluation protocol in \cite{jianfang_CVPR15} on the SYSU-3D dataset.\nThe samples from 20 subjects are used to train the model parameters,\nand the samples of the remaining 20 subjects are used for testing.\nWe perform 30-fold cross-validation and report the mean accuracy in \tablename{~\ref{table:resultSYSU}}.\n\n\begin{table}[h]\n\t\t\caption{Experimental results on 
the SYSU-3D dataset}\n\t\t\label{table:resultSYSU}\n\t\t\centering\n\t\t\begin{tabular}{|l|c|c|}\n\t\t\t\hline\n\t\t\tMethod & Feature & Acc. \\\n\t\t\t\hline\n\t\t\n LAFF (SKL) \cite{hu2016ECCV} & Geometric & 54.2\% \\\n Dynamic Skeletons \cite{jianfang_CVPR15} & Geometric & 75.5\% \\\n\t\t\t\hline\n\t\t\n\t\t\tST-LSTM (Joint Chain) & Geometric & 72.1\% \\\n\t\t\tST-LSTM (Tree) & Geometric & 73.4\% \\\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \textbf{76.5\%} \\\n\t\t\t\hline\n\t\t\end{tabular}\n\end{table}\n\nThe results in \tablename{~\ref{table:resultSYSU}} show that our proposed ``ST-LSTM (Tree) + Trust Gate'' method outperforms all the baseline methods on this dataset.\nWe also find that the tree traversal strategy helps to improve the classification accuracy of our model.\nAs the skeletal joints provided by Kinect are noisy in this dataset,\nthe trust gate, which aims at handling noisy data, brings a significant performance improvement (about 3\%).\n\n\n\n\nThere are large viewpoint variations in this dataset.\nTo make our model robust against viewpoint variations,\nwe adopt a skeleton normalization procedure similar to that suggested by \cite{nturgbd} on this dataset.\nIn this preprocessing step, we perform a rotation transformation on each skeleton,\nsuch that all the normalized skeletons face the same direction.\nSpecifically, after rotation, the 3D vector from ``right shoulder'' to ``left shoulder'' will be parallel to the X axis,\nand the vector from ``hip center'' to ``spine'' will be aligned to the Y axis\n(please see \cite{nturgbd} for more details about the normalization procedure).\n\nWe evaluate our ``ST-LSTM (Tree) + Trust Gate'' method using the original skeletons without rotation and the transformed skeletons, respectively,\nand report the results in \tablename{~\ref{table:resultSYSURotation}}.\nThe results show that it is beneficial to use the transformed skeletons as the input for 
action recognition.\n\n\\begin{table}[h]\n\\caption{Evaluation for skeleton rotation on the SYSU-3D dataset}\n\\label{table:resultSYSURotation}\n\\centering\n\\begin{tabular}{|l|c|}\n\\hline\nMethod & Acc. \\\\\n\\hline\nWith Skeleton Rotation & 76.5\\% \\\\\nWithout Skeleton Rotation & \t73.0\\% \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\end{table}\n\n\\subsection{Experiments on the ChaLearn Gesture Dataset}\n\\label{sec:exp:resChaLearn}\n\nWe follow the evaluation protocol adopted in \\cite{wang2015hierarchical,fernando2015modeling}\nand report the F1-score measures on the validation set of the ChaLearn Gesture dataset.\n\n\\begin{table}[h]\n\t\t\\caption{Experimental results on the ChaLearn Gesture dataset}\n\t\t\\label{table:resultChaLearn}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tMethod & Feature & F1-Score \\\\\n\t\t\t\\hline\n\t\t\n Portfolios \\cite{yao2014gesture} & Geometric & 56.0\\% \\\\\n Wu {\\emph{et~al.}}~ \\cite{wu2013fusing} & Geometric & 59.6\\% \\\\\n Pfister {\\emph{et~al.}}~ \\cite{pfister2014domain} & Geometric & 61.7\\% \\\\\n HiVideoDarwin \\cite{wang2015hierarchical} & Geometric & 74.6\\% \\\\\n VideoDarwin \\cite{fernando2015modeling} & Geometric & 75.2\\% \\\\\n \n Deep LSTM \\cite{nturgbd} & Geometric & 87.1\\% \\\\\n \n\t\t\t\\hline\n\t\t\n ST-LSTM (Joint Chain) & Geometric & 89.1\\% \\\\\n ST-LSTM (Tree) & Geometric & 89.9\\% \\\\\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{92.0\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\end{table}\n\nAs shown in \\tablename{~\\ref{table:resultChaLearn}},\nour method surpasses the state-of-the-art methods \\cite{yao2014gesture,wu2013fusing,pfister2014domain,wang2015hierarchical,fernando2015modeling,nturgbd},\nwhich demonstrates the effectiveness of our method in dealing with skeleton-based action recognition problem.\n\nCompared to other methods, our method focuses on modeling both temporal and spatial dependency patterns in skeleton sequences.\nMoreover, the 
proposed trust gate is also incorporated to our method to handle the noisy skeleton data captured by Kinect,\nwhich can further improve the results.\n\n\n\n\n\\subsection{Experiments on the MSR Action3D Dataset}\n\\label{sec:exp:resMSR3D}\n\nWe follow the experimental protocol in \\cite{du2015hierarchical} on the MSR Action3D dataset,\nand show the results in \\tablename{~\\ref{table:resultMSR3D}}.\n\nOn the MSR Action3D dataset, our proposed method, ``ST-LSTM (Tree) + Trust Gate'', achieves 94.8\\% of classification accuracy,\nwhich is superior to the Hierarchical RNN model \\cite{du2015hierarchical} and other baseline methods.\n\n\\begin{table}[h]\n\t\t\\caption{Experimental results on the MSR Action3D dataset}\n\t\t\\label{table:resultMSR3D}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tMethod & Feature & Acc. \\\\\n\t\t\t\\hline\n\t\t\n Histogram of 3D Joints \\cite{HOJ3D} & Geometric & 79.0\\% \\\\\n Joint Angles Similarities \\cite{hog2-ohnbar} & Geometric & 83.5\\% \\\\\n SCs (Informative Joints) \\cite{jiang2015informative} & Geometric & 88.3\\% \\\\\n\t\t\tOriented Displacements \\cite{gowayyed2013histogram} & Geometric & 91.3\\% \\\\\n\t\t\tLie Group \\cite{vemulapalli2014liegroup} & Geometric & 92.5\\% \\\\\n Space Time Pose \\cite{devanne2013space} & Geometric & 92.8\\% \\\\\n Lillo {\\emph{et~al.}}~ \\cite{lillo2016hierarchical} & Geometric & 93.0\\% \\\\\n\t\t\tHierarchical RNN \\cite{du2015hierarchical} & Geometric & 94.5\\% \\\\\n\t\t\t\\hline\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{94.8\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\end{table}\n\n\n\\subsection{Experiments on the Berkeley MHAD Dataset}\n\\label{sec:exp:resMHAD}\n\\begin{table}[h]\n\t\t\\caption{Experimental results on the Berkeley MHAD dataset}\n\t\t\\label{table:resultMHAD}\n\t\t\\centering\n\t\t\\begin{tabular}{|l|c|c|}\n\t\t\t\\hline\n\t\t\tMethod & Feature & Acc. 
\\\\\n\t\t\t\\hline\n\t\t\n\t\t\tOfli {\\emph{et~al.}}~ \\cite{Ofli2014jvci} & Geometric & 95.4\\% \\\\\n Vantigodi {\\emph{et~al.}}~ \\cite{vantigodi2013real} & Geometric & 96.1\\% \\\\\n\t\t\tVantigodi {\\emph{et~al.}}~ \\cite{vantigodi2014action} & Geometric & 97.6\\% \\\\\n\t\t\tKapsouras {\\emph{et~al.}}~ \\cite{kapsouras2014action} & Geometric & 98.2\\% \\\\\n\t\t\tHierarchical RNN \\cite{du2015hierarchical} & Geometric & 100\\% \\\\\n Co-occurrence LSTM \\cite{zhu2016co} & Geometric & 100\\% \\\\\n\t\t\t\\hline\n\t\t\tST-LSTM (Tree) + Trust Gate & Geometric & \\textbf{100\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\\end{table}\n\nWe adopt the experimental protocol in \\cite{du2015hierarchical} on the Berkeley MHAD dataset.\n384 video sequences corresponding to the first seven persons are used for training,\nand the 275 sequences of the remaining five persons are held out for testing.\nThe experimental results in \\tablename{ \\ref{table:resultMHAD}} show that our method achieves very high accuracy (100\\%) on this dataset.\nUnlike \\cite{du2015hierarchical} and \\cite{zhu2016co}, our method does not use any preliminary manual smoothing procedures.\n\n\n\n\n\\subsection{Visualization of Trust Gates}\n\\label{sec:visualization}\n\n\nIn this section, to better investigate the effectiveness of the proposed trust gate scheme, we study the behavior of the proposed framework against the presence of noise in skeletal data from the MSR Action3D dataset.\nWe manually rectify some noisy joints of the samples by referring to the corresponding depth images.\nWe then compare the activations of trust gates on the noisy and rectified inputs.\nAs illustrated in \\figurename{ \\ref{fig:TrustGateEffect}(a)},\nthe magnitude of trust gate's output ($l_2$ norm of the activations of the trust gate) is smaller when a noisy joint is fed, compared to the corresponding rectified joint.\nThis demonstrates how the network controls the impact of noisy input on its stored representation 
of the observed data.\n\nIn our next experiment, we manually add noise to one joint for all testing samples on the Berkeley MHAD dataset, in order to further analyze the behavior of our proposed trust gate.\nNote that the Berkeley MHAD dataset was collected with motion capture system, thus\nthe skeletal joint coordinates in this dataset are much more accurate than those captured with Kinect sensors.\n\nWe add noise to the right foot joint by moving the joint away from its original location.\nThe direction of the translation vector is randomly chosen and the norm is a random value around $30cm$, which is a significant noise in the scale of human body.\nWe measure the difference in the magnitudes of trust gates' activations between the noisy data and the original ones.\nFor all testing samples, we carry out the same operations and then calculate the average difference.\nThe results in \\figurename{ \\ref{fig:TrustGateEffect}(b)} show that the magnitude of trust gate is reduced when the noisy data is fed to the network.\nThis shows that our network tries to block the flow of noisy input and stop it from affecting the memory.\nWe also observe that the overall accuracy of our network does not drop after adding the above-mentioned noise to the input data.\n\n\n\\begin{figure}[htb]\n\\begin{minipage}[b]{0.47\\linewidth}\n \\centering\n \\centerline{\\includegraphics[scale=.53]{VisualizationTrustGate1.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{0.52\\linewidth}\n \\centering\n \\centerline{\\includegraphics[scale=.53]{VisualizationTrustGate2.pdf}}\n\\end{minipage}\n\\caption{Visualization of the trust gate's behavior when inputting noisy data.\n(a) $j_{3'}$ is a noisy joint position, and $j_3$ is the corresponding rectified joint location.\nIn the histogram, the blue bar indicates the magnitude of trust gate when inputting the noisy joint $j_{3'}$.\nThe red bar indicates the magnitude of the corresponding trust gate when $j_{3'}$ is rectified to $j_3$.\n(b) Visualization 
of the difference between the trust gate calculated when the noise is imposed at the step $(j_N, t_N)$ and that calculated when inputting the original data.}\n\\label{fig:TrustGateEffect}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\begin{table*}[htb]\n\\caption{Performance comparison of different spatial sequence models}\n\\label{table:resultDoubleChain}\n\\centering\n\\footnotesize\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\\\\n\\hline\nST-LSTM (Joint Chain) & 61.7\\% & 75.5\\% & 91.0\\% & 84.7\\% & 89.1\\% \\\\\nST-LSTM (Double Joint Chain) & 63.5\\% & 75.6\\% & 91.5\\% & 85.9\\% & 89.2\\% \\\\\nST-LSTM (Tree) & 65.2\\% & 76.1\\% & 92.4\\% & 88.6\\% & 89.9\\% \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\end{table*}\n\n\\begin{table*}[tb]\n\\caption{Performance comparison of Temporal Average, LSTM, and our proposed ST-LSTM}\n\\label{table:resultLSTMTG}\n\\centering\n\\footnotesize\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture\\\\\n\\hline\nTemporal Average & 47.6\\% & 52.6\\% & 81.9\\% & 71.5\\% & 77.9\\% \\\\\n\\hline\nLSTM & 62.0\\% & 70.7\\% & 90.5\\% & 86.0\\% & 87.1\\% \\\\\nLSTM + Trust Gate & 62.9\\% & 71.7\\% & 92.0\\% & 86.6\\% & 87.6\\% \\\\\n\\hline\nST-LSTM & 65.2\\% & 76.1\\% & 92.4\\% & 88.6\\% & 89.9\\% \\\\\nST-LSTM + Trust Gate & 69.2\\% & 77.7\\% & 97.0\\% & 93.3\\% & 92.0\\% \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\end{table*}\n\n\\begin{table*}[tb]\n\\caption{Evaluation of the last-to-first link in our proposed network}\n\\label{table:resultLTFLink}\n\\centering\n\\footnotesize\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\\\\n\\hline\nWithout last-to-first link & 68.5\\% & 
76.9\\% & 96.5\\% & 92.1\\% & 90.9\\% \\\\\nWith last-to-first link & 69.2\\% & 77.7\\% & 97.0\\% & 93.3\\% & 92.0\\% \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\end{table*}\n\n\n\\subsection{Evaluation of Different Spatial Joint Sequence Models}\n\\label{sec:discussion1}\n\nThe previous experiments showed how ``ST-LSTM (Tree)'' outperforms ``ST-LSTM (Joint Chain)'', because ``ST-LSTM (Tree)'' models the kinematic dependency structures of human skeletal sequences.\nIn this section, we further analyze the effectiveness of our ``ST-LSTM (Tree)'' model and compare it with a ``ST-LSTM (Double Joint Chain)'' model.\n\nThe ``ST-LSTM (Joint Chain)'' has fewer steps in the spatial dimension than the ``ST-LSTM (Tree)''.\nOne question that may arise here is whether the advantage of the ``ST-LSTM (Tree)'' model could be due only to the longer and redundant sequence of joints fed to the network, and not to the proposed semantic relations between the joints.\nTo answer this question, we evaluate the effect of using a double chain scheme to increase the spatial steps of the ``ST-LSTM (Joint Chain)'' model.\nSpecifically, we use the joint visiting order of 1-2-3-...-16-1-2-3-...-16,\nand we call this model ``ST-LSTM (Double Joint Chain)''.\nThe results in \\tablename{~\\ref{table:resultDoubleChain}} show that the performance of ``ST-LSTM (Double Joint Chain)'' is better than that of ``ST-LSTM (Joint Chain)'',\nyet inferior to ``ST-LSTM (Tree)''.\n\n\nThis experiment indicates that introducing more passes in the spatial dimension to the ST-LSTM is beneficial for performance.\nA possible explanation is that the units visited in the second round can obtain the global-level context representation from the previous pass,\nthus they can generate better representations of the action patterns by using the context information.\nHowever, the performance of ``ST-LSTM (Double Joint Chain)'' is still weaker than ``ST-LSTM (Tree)'',\nthough the numbers of their spatial steps 
are almost equal.\n\nThe proposed tree traversal scheme is superior because it connects the most semantically related joints\nand avoids false connections between the less-related joints (unlike the other two compared models).\n\n\\subsection{Evaluation of Temporal Average, LSTM and ST-LSTM}\n\\label{sec:discussion2}\n\nTo further investigate the effect of simultaneous modeling of dependencies in the spatial and temporal domains,\nin this experiment, we replace our ST-LSTM with the original LSTM, which only models the temporal dynamics among the frames without explicitly considering spatial dependencies.\nWe report the results of this experiment in \\tablename{ \\ref{table:resultLSTMTG}}.\nAs can be seen, our ``ST-LSTM + Trust Gate'' significantly outperforms ``LSTM + Trust Gate''.\nThis demonstrates that the proposed modeling of the dependencies in both temporal and spatial dimensions provides much richer representations than the original LSTM.\n\nThe second observation of this experiment is that if we add our trust gate to the original LSTM,\nthe performance of LSTM can also be improved,\nbut its performance gain is smaller than that on ST-LSTM.\nA possible explanation is that we have both spatial and temporal context information at each step of ST-LSTM to generate a good prediction of the input at the current step (see Eq. (\\ref{eq:p_j_t})),\nthus our trust gate can achieve a good estimation of the reliability of the input at each step by using the prediction (see Eq. 
(\\ref{eq:tau})).\nHowever, in the original LSTM, the available context at each step is from the previous temporal step,\ni.e., the prediction can only be based on the context in the temporal dimension,\nthus the effectiveness of the trust gate is limited when it is added to the original LSTM.\nThis further demonstrates the effectiveness of our ST-LSTM framework for spatio-temporal modeling of the skeleton sequences.\n\n\nIn addition, we investigate the effectiveness of the LSTM structure for handling the sequential data.\nWe evaluate a baseline method (called ``Temporal Average'') by averaging the features from all frames instead of using LSTM.\nSpecifically, the geometric features are averaged over all the frames of the input sequence (i.e., the temporal ordering information in the sequence is ignored),\nand then the resultant averaged feature is fed to a two-layer network, followed by a softmax classifier.\nThe performance of this scheme is much weaker than our proposed ST-LSTM with trust gate,\nand also weaker than the original LSTM, as shown in \\tablename{~\\ref{table:resultLSTMTG}}.\nThe results demonstrate the representation strengths of the LSTM networks for modeling the dependencies and dynamics in sequential data, when compared to traditional temporal aggregation methods of input sequences.\n\n\\subsection{Evaluation of the Last-to-first Link Scheme}\n\\label{sec:discussion3}\n\nIn this section, we evaluate the effectiveness of the last-to-first link in our model (see section \\ref{sec:approach:learning}).\nThe results in \\tablename{ \\ref{table:resultLTFLink}} show the advantages of using the last-to-first link in improving the final action recognition performance.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we have extended the RNN-based action recognition method to both spatial and temporal domains.\nSpecifically, we have proposed a novel ST-LSTM network which analyzes the 3D locations of skeletal joints at each frame and at 
each processing step.\nA skeleton tree traversal method based on the adjacency graph of body joints is also proposed to better represent the structure of the input sequences and\nto improve the performance of our network by connecting the most related joints together in the input sequence.\nIn addition, a new gating mechanism is introduced to improve the robustness of our network against the noise in input sequences.\nA multi-modal feature fusion method is also proposed for our ST-LSTM framework.\nThe experimental results have validated the contributions and demonstrated the effectiveness of our approach,\nwhich achieves better performance than the existing state-of-the-art methods on seven challenging benchmark datasets.\n\n\\section*{Acknowledgement}\nThis work was carried out at the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University.\nThe ROSE Lab is supported by the National Research Foundation, Singapore, under its IDM Strategic Research Programme.\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{IEEEtran}\n\n\n", "meta": {"timestamp": "2017-06-27T02:10:29", "yymm": "1706", "arxiv_id": "1706.08276", "language": "en", "url": "https://arxiv.org/abs/1706.08276"}} {"text": "\\section{Introduction}\nThe adoption of convolutional neural networks (CNNs) \\cite{lecun1989backpropagation} has brought huge success to many computer vision tasks such as classification and segmentation. One limitation of CNNs is their poor scalability, in terms of computational efficiency, with increasing input image size. With limited time and resources, it is necessary to be smart about selecting where, what, and how to look at the image. Facing a bird-specific fine-grained classification task, for example, it does not help much to pay attention to non-bird image parts such as trees and sky. Rather, one should focus on regions which play decisive roles in classification, such as the beak or wings. 
If a machine can learn how to pay attention to those regions, it can achieve better performance with lower energy usage. \n\nIn this context, the \\textbf{Recurrent Attention Model (RAM)} \\cite{mnih2014recurrent} introduces a visual attention method for the fine-grained classification task. By sequentially choosing where and what to look at, RAM achieves better performance with lower memory usage. Moreover, the attention mechanism addresses the criticism that deep learning models are black boxes, by enabling interpretation of the results. Still, there is room for improvement in RAM. In addition to where and what to look at, if one can give some clues on how to look, i.e., a task-specific hint, learning can be more intuitive and efficient. From this insight, we propose a novel architecture, the \\textbf{Clued Recurrent Attention Model (CRAM)}, which inserts a problem-solving-oriented clue into RAM. These clues, or constraints, give directions to the machine for faster convergence and better performance. \n\nFor evaluation, we perform experiments on two computer vision tasks: classification and inpainting. In the classification task, the clue is given as the binary saliency of the image, which indicates the rough location of the object. In the inpainting problem, the clue is given as the binary mask which indicates the location of the occluded region. Code is implemented in TensorFlow version 1.6.0 and uploaded at https://github.com/brekkanegg/cram.\n\nIn summary, the contributions of this work are as follows: \n\\begin{enumerate}\n \\item Proposed a novel model, the clued recurrent attention model (CRAM), which inserts a clue into RAM for more efficient problem solving.\n \\item Defined clues for the classification and inpainting tasks, respectively, that are easy to interpret and obtain.\n \\item Evaluated CRAM on classification and inpainting tasks, showing that it is a powerful extension of RAM. 
\n \\end{enumerate}\n \n\n\\section{Related Work}\n\n\\subsection{Recurrent Attention Model (RAM)}\n\nRAM \\cite{mnih2014recurrent} first proposed a recurrent neural network (RNN) \\cite{mikolov2010recurrent} based attention model inspired by the human visual system. When humans are confronted with a large image which is too big to be seen at a glance, they process the image part by part, depending on their interest. By selectively choosing what and where to look, RAM showed higher performance while reducing calculations and memory usage. However, since RAM attends to image regions by a sampling method, it has the fatal weakness of requiring REINFORCE, not back-propagation, for optimization. Following RAM, the Deep Recurrent Attention Model (DRAM) \\cite{ba2014multiple} showed an advanced architecture for multiple object recognition, and the Deep Recurrent Attentive Writer (DRAW) \\cite{gregor2015draw} introduced a sequential image generation method without using REINFORCE. \n\nThe spatial transformer network (STN) \\cite{jaderberg2015spatial} first proposed a parametric spatial attention module for the object classification task. This model includes a localization network that outputs the parameters for selecting the region to attend to in the input image. Recently, the Recurrent Attentional Convolutional-Deconvolutional Network (RACDNN) \\cite{kuen2016recurrent} gathered the strengths of both RAM and STN in the saliency detection task. By replacing RAM's locating module with an STN, RACDNN can sequentially select where to attend on the image while still using back-propagation for optimization. This paper mainly adopts the RACDNN network, with some technical twists to effectively insert the clue, which acts as a supervisor for problem solving.\n\n\n\\section{CRAM}\n\nThe architecture of CRAM is based on an encoder-decoder structure. The encoder is similar to RACDNN~\\cite{kuen2016recurrent}, with a modified spatial transformer network~\\cite{jaderberg2015spatial} and an inserted clue. 
While the encoder is identical regardless of the type of task, the decoder differs depending on whether the given task is classification or inpainting. Figure \\ref{fig:overall} shows the overall architecture of CRAM. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{overall_architecture.png}\n \\caption{Overall architecture of CRAM. Note that the image and clue differ depending on the task (bottom left and bottom right).}\n \\label{fig:overall}\n\\end{figure}\n\n\n\\subsection{\\bf{Encoder}}\nThe encoder is composed of 4 subnetworks: the context network, spatial transformer network, glimpse network, and core recurrent neural network. The overall architecture of the encoder is shown in Figure \\ref{fig:enc}. Following the flow of information, we go into the details of each network one by one.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{encoder.jpg}\n \\caption{Architecture of the CRAM encoder. Note that the image is for the inpainting task, where the clue is given as a binary mask that indicates the occluded region.}\n \\label{fig:enc}\n\\end{figure}\n\n\\textbf{Context Network: }The context network is the first part of the encoder; it receives the image and clue as inputs and outputs the initial state tuple of {$r_{0}^{2}$}. {$r_{0}^{2}$} is the first input of the second layer of the core recurrent neural network, as shown in Figure \\ref{fig:enc}. Using the downsampled image {$(i_{coarse})$} and downsampled clue {$(c_{coarse})$}, the context network provides a reasonable starting point for choosing the image region to concentrate on. The downsampled image and clue are processed with a CNN followed by an MLP, respectively. \n\n\\begin{align}\\label{eq:cn}\nc_{0} = MLP_{c}(CNN_{context}(i_{coarse}, c_{coarse})) \\\\\nh_{0} = MLP_{h}(CNN_{context}(i_{coarse}, c_{coarse})) \n\\end{align}\nwhere ({$c_{0}$}, {$h_{0}$}) is the first state tuple of {$r_{0}^{2}$}. 
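As a shape-level sketch of this context network, the following toy NumPy stand-in downsamples the image and clue, concatenates them, and maps them through stand-in "CNN"/"MLP" layers to the initial state pair; all layer sizes and random weights are made up for illustration and are not the paper's trained TensorFlow network:

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(x, factor=4):
    # Average-pool by `factor`, a stand-in for the paper's 4x downsampling.
    h, w = x.shape[:2]
    return x[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def mlp(x, w, b):
    return np.maximum(0.0, x @ w + b)  # one ReLU layer as a stand-in MLP

# Toy 32x32 RGB image and binary clue mask (sizes are illustrative).
image = rng.random((32, 32, 3))
clue = (rng.random((32, 32, 1)) > 0.5).astype(float)

i_coarse = downsample(image)  # (8, 8, 3)
c_coarse = downsample(clue)   # (8, 8, 1)

# "CNN_context" stand-in: flatten the concatenated coarse inputs to a feature vector.
feat = np.concatenate([i_coarse, c_coarse], axis=-1).reshape(-1)

d = 16  # hidden size of the core RNN's second layer (illustrative)
W_c, b_c = rng.standard_normal((feat.size, d)), np.zeros(d)
W_h, b_h = rng.standard_normal((feat.size, d)), np.zeros(d)

c0 = mlp(feat, W_c, b_c)  # initial cell state of r_0^(2)
h0 = mlp(feat, W_h, b_h)  # initial hidden state of r_0^(2)
```

The point is only the data flow: both state vectors are computed from the same coarse image-plus-clue features, matching Equation \ref{eq:cn}.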
\n\n\\textbf{Spatial Transformer Network: } The spatial transformer network (STN) selects the region to attend to, considering the given task and clue \\cite{jaderberg2015spatial}. Different from the existing STN, CRAM uses a modified STN which receives the image, the clue, and the output of the second layer of the core RNN as inputs, and outputs a glimpse patch. From now on, the glimpse patch denotes the attended image region, cropped and zoomed in. Here, the STN is composed of two parts. One is the localization part, which calculates the transformation matrix {$\\tau$} with a CNN and an MLP. The other is the transformer part, which zooms in on the image using the transformation matrix {$\\tau$} above and obtains the glimpse. The affine transformation matrix {$\\tau$} with isotropic scaling and translation is given in Equation \\ref{eq:tau}. \n\n\\begin{equation}\\label{eq:tau}\n\\tau = \\begin{bmatrix}\ns & 0 & t_{x} \\\\ \n0 & s & t_{y}\\\\ \n0 & 0 & 1\n\\end{bmatrix}\n\\end{equation}\nwhere {$s, t_{x}, t_{y}$} are the scaling, horizontal translation and vertical translation parameters, respectively.\n\nThe total process of the STN is shown in Figure \\ref{fig:stn}.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{stn.png}\n \\caption{Architecture of the STN. The STN consists of a localisation part, which calculates {$\\tau$}, and a transformer part, which obtains the glimpse.}\n \\label{fig:stn}\n\\end{figure}\n\nIn equations, the STN process is as follows:\n\n\\begin{equation}\\label{eq:sn}\nglimpse\\_patch_{n} = STN(image, clue, \\tau_{n})\n\\end{equation}\nwhere {$n$} in {$ glimpse\\_patch_{n}$} is the step of the core RNN, ranging from 1 to the total glimpse number. 
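To make the role of {$\tau$} concrete, here is a toy NumPy sketch of how the transformer part extracts a glimpse patch: a normalized sampling grid in $[-1,1]$ coordinates is scaled by $s$ and shifted by $(t_x, t_y)$, then used to index the image. Nearest-neighbour sampling is used for brevity; the actual STN uses differentiable bilinear sampling, and all sizes here are illustrative:

```python
import numpy as np

def glimpse(image, s, tx, ty, out_size=8):
    # Apply tau = [[s,0,tx],[0,s,ty],[0,0,1]] to a normalized sampling grid
    # and sample the image (nearest neighbour, for brevity).
    H, W = image.shape[:2]
    lin = np.linspace(-1.0, 1.0, out_size)
    ys, xs = np.meshgrid(lin, lin, indexing="ij")
    src_x = s * xs + tx          # source coordinates in [-1, 1] space
    src_y = s * ys + ty
    # Map to pixel indices and clamp to the image border.
    px = np.clip(((src_x + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    py = np.clip(((src_y + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    return image[py, px]

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
# s = 0.5 zooms into the central half of the image; tx = ty = 0 keeps it centred.
gp = glimpse(img, s=0.5, tx=0.0, ty=0.0)
```

With $s = 1$ and zero translation the glimpse reproduces the full image, while smaller $s$ crops and zooms in, which is exactly the "attend" operation the localization part controls.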
{$\\tau$} is obtained by the equation below.\n\n\\begin{equation}\\label{eq:en}\n\\tau_{n} = MLP_{loc}(CNN_{i}(image)\\oplus CNN_{c}(clue)\\oplus MLP_{r}(r_{n}^{(2)}))\n\\end{equation}\nwhere {$\\oplus$} is the concatenation operation.\n\n\\textbf{Glimpse Network: }The glimpse network is a non-linear function which receives the current glimpse patch {$glimpse\\_patch_{n}$} {$(gp_{n})$} and the attended region information {$\\tau$} as inputs, and outputs the current step's glimpse vector. The glimpse vector is later used as the input of the first core RNN layer. {$glimpse\\_vector_{n}$} {$(gv_{n})$} is obtained by a multiplicative interaction between the extracted features of {$glimpse\\_patch_{n}$} and {$\\tau$}. This method of interaction was first proposed by \\cite{larochelle2010learning}. Similar to the other mentioned networks, a CNN and an MLP are used for feature extraction. \n\n\\begin{equation}\\label{eq:gn}\n\\begin{split}\ngv_{n} = MLP_{what}(CNN_{what}(gp_{n})) \\odot MLP_{where}(\\tau_{n})\n\\end{split}\n\\end{equation}\n\nwhere {$\\odot$} is an element-wise vector multiplication operation. \n\n\\textbf{Core Recurrent Neural Network: } The recurrent neural network is the core structure of CRAM; it aggregates the information extracted from the stepwise glimpses and calculates the encoded vector z. Iterating for a set number of RNN steps (the total glimpse number), the core RNN receives {$glimpse\\_vector_{n}$} at its first layer. The output of the second layer, {$r_{n}^{(2)}$}, is in turn used by the spatial transformer network's localization part, as in Equation \\ref{eq:en}. \n\n\\begin{equation}\\label{eq:rn}\nr_{n}^{(1)} = R_{recur}^{1}(glimpse\\_vector_{n}, r_{n-1}^{(1)}) \\\\\n\\end{equation}\n\\begin{equation}\\label{eq:rn2}\nr_{n}^{(2)} = R_{recur}^{2}(r_{n}^{(1)}, r_{n-1}^{(2)})\n\\end{equation}\n\n\\subsection{\\bf{Decoder}}\n\n\\subsubsection{Classification}\nAs in the general image classification approach, the encoded z is passed through an MLP which outputs the probability of each class. 
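As a toy sketch of this classification decoder, the following hypothetical two-layer MLP (made-up layer sizes, random weights) maps the encoded vector z to a softmax distribution over classes; it illustrates the data flow only, not the paper's trained decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

z = rng.standard_normal(16)  # encoded vector from the encoder (size illustrative)
n_classes = 10

# Stand-in decoder MLP: one ReLU hidden layer, then a softmax output layer.
W1, b1 = rng.standard_normal((16, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, n_classes)), np.zeros(n_classes)

h = np.maximum(0.0, z @ W1 + b1)
probs = softmax(h @ W2 + b2)   # class probabilities, summing to 1
print(probs.argmax())          # predicted class index
```

Training would compare `probs` against the ground-truth label with the cross-entropy loss described in the Training section.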
The decoder for classification is shown in Figure \\ref{fig:deccls}.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{cls_decoder.png}\n \\caption{Architecture of the CRAM decoder for image classification.}\n \\label{fig:deccls}\n\\end{figure}\n\n\n\\subsubsection{Inpainting}\nUtilizing the architecture of DCGAN \\cite{radford2015unsupervised}, the contaminated image is completed starting from the encoded z from the encoder. To ensure the quality of the completed image, we adopt the generative adversarial network (GAN) \\cite{goodfellow2014generative} framework at both the local and global scales \\cite{iizuka2017globally}. Here the decoder works as the generator, and the local and global discriminators evaluate its plausibility at the local and global scale, respectively. The decoder for inpainting is shown in Figure \\ref{fig:dec}.\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{ip_decoder.jpg}\n \\caption{Architecture of the CRAM decoder and discriminators for image inpainting.}\n \\label{fig:dec}\n\\end{figure}\n\n\n\\section{Training}\nThe loss function of CRAM can be divided into two parts: the encoder-related loss ({$L_{enc}$}) and the decoder-related loss ({$L_{dec}$}). {$L_{enc}$} constrains the glimpse patches to be consistent with the clue. For the classification task, where the clue is the object saliency, it is favorable if the glimpse patches cover the salient part as much as possible. For the inpainting case, there should be a supervisor that urges the glimpse patches to contain the occluded region, since the regions neighboring the occlusion are the most relevant parts for completion. 
In order to satisfy the above condition for both the classification and inpainting cases, {$L_{enc}$} (or {$L_{clue}$}) is as follows:\n\n\\begin{equation}\\label{eq:lossg}\nL_{enc}=L_{clue}(clue, STN, \\tau) = \\sum_{n}{STN(clue, \\tau_{n})} \n\\end{equation}\n\nwhere {$STN$} is the trained spatial transformer network of Equation \\ref{eq:sn} and {$\\tau_{n}$} is obtained from Equation \\ref{eq:en} at each step of the core RNN. The decoder loss, which differs depending on the given task, will be dealt with separately shortly. Note that the clue is a binary image for both the classification and inpainting tasks. \n\nSince {$L_{dec}$} differs depending on whether the problem is classification or completion, the explanations of the losses are divided into two parts. \n\n\\subsection{Classification}\nThe decoder-related loss in the image classification task is the cross-entropy loss, as in the general classification approach. The total loss {$L_{tot-cls}$} for image classification then becomes:\n \n\\begin{align}\\label{eq:losscls}\nL_{tot-cls} &= L_{enc} + L_{dec} \\\\\n& = L_{clue}(clue, STN, \\tau) + L_{cls}(Y, Y^{*}) \n\\end{align}\n\nwhere the clue is the binary image which takes the value 1 for the salient part and 0 otherwise, and {$Y$} and {$Y^{*}$} are the predicted class label vector and the ground-truth class label vector, respectively.\n\n\n\\subsection{Inpainting}\nThe decoder-related loss for image inpainting consists of a reconstruction loss and a GAN loss. \n\nThe reconstruction loss helps the completion to be more stable, and the GAN loss enables better restoration quality. 
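As a toy NumPy illustration of the masked reconstruction idea used here (an L1 penalty restricted to the occluded region by the binary clue mask; the array sizes and random contents are made up, and the precise loss is defined next):

```python
import numpy as np

rng = np.random.default_rng(3)

H = W = 8                          # toy image size
original = rng.random((H, W))      # Y*: the image before contamination
generated = rng.random((H, W))     # G(z): the decoder/generator output
clue = np.zeros((H, W))
clue[3:5, 3:5] = 1.0               # binary mask: 1 on the occluded region, 0 elsewhere

# Only pixels inside the occluded region contribute to the loss.
l_recon = np.abs(clue * (generated - original)).sum()
```

Because the mask zeroes out every unoccluded pixel, the generator is penalized only where it actually has to invent content.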
For the reconstruction loss, an L1 loss over the contaminated region of the input is used:\n\n\\begin{equation}\\label{eq:reconloss}\nL_{recon}(z, clue, Y^{*}) = \\| clue \\odot (G(z) - Y^{*}) \\|_{1} \n\\end{equation}\nwhere z is the encoded vector from the encoder, the clue is the binary image which takes the value 1 for the occluded region and 0 otherwise, G is the generator (or decoder), and {$Y^{*}$} is the original image before contamination.\n\nSince there are two discriminators, the GAN loss is the sum of the local-scale and global-scale GAN losses.\n\\begin{equation}\\label{eq:ganlosses}\n\\begin{split}\nL_{gan} &= L_{global\\_gan} + L_{local\\_gan}\n\\end{split}\n\\end{equation}\n\nThe GAN losses for the local and global scales are defined as follows: \n\\begin{equation}\\label{eq:ganloss}\n\\begin{split}\nL_{local\\_gan} &= \\log(1-D_{local}(Y^{*} \\odot clue)) \\\\ &+ \\log D_{local}(G(image, clue) \\odot clue) \\\\\n\\end{split}\n\\end{equation}\n\n\\begin{equation}\\label{eq:ganloss2}\n\\begin{split}\nL_{global\\_gan} &= \\log(1-D_{global}(Y^{*} ))\\\\ &+ \\log D_{global}(G(image, clue))\n\\end{split}\n\\end{equation}\n\nCombining Equations \\ref{eq:lossg}, \\ref{eq:reconloss} and \\ref{eq:ganlosses}, the total loss for image inpainting {$L_{tot-ip}$} becomes:\n\n\\begin{align}\\label{eq:ganloss3}\nL_{tot-ip} &= L_{enc} + L_{dec} \\\\\n&= L_{clue} + \\alpha L_{recon} +\\beta L_{gan}\n\\end{align}\nwhere {$\\alpha$} and {$\\beta$} are weighting hyperparameters and {$L_{gan}$} is the sum of {$L_{local\\_gan}$} and {$L_{global\\_gan}$}. \n\n\n\\section{Implementation Details}\n\n\\subsection{Classification}\nIn order to obtain the clue, a saliency map, we use a convolutional-deconvolutional network (CNN-DecNN) \\cite{noh2015learning}, as shown in Figure \\ref{fig:cnndecnn}. The CNN-DecNN is pre-trained on the MSRA10k~\\cite{cheng2015global} dataset, which is by far the largest publicly available saliency detection dataset, containing 10,000 annotated saliency images. 
This CNN-DecNN is trained with Adam~\\cite{kingma2014adam} with default learning settings. In the training and inference periods, the rough saliency (or clue) is obtained from the pre-trained CNN-DecNN. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{cnndecnn.png}\n \\caption{CNN-DecNN to obtain the rough saliency of an image. This rough saliency is the clue for the classification task.}\n \\label{fig:cnndecnn}\n\\end{figure}\n\nAs mentioned earlier, the encoder consists of 4 subnetworks: the context network, spatial transformer network, glimpse network, and core RNN.\nThe image and clue are 4-times downsampled and used as inputs to the context network. Each passes through a 3-layer CNN (3 x 3 kernel size, 1 x 1 stride, same zero padding), each layer followed by a max-pooling layer (3 x 3 kernel size, 2 x 2 stride, same zero padding), and outputs a vector. These vectors are concatenated and pass through a 2-layer MLP, which outputs the initial state for the second layer of the core RNN. \nThe localization part of the spatial transformer network consists of a CNN and an MLP. To the image and clue inputs, a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) is applied; a 2-layer MLP is applied to the output of the second core RNN layer. The output vectors of the CNN and MLP are concatenated and pass through a further 2-layer MLP that outputs {$s, t_{x}, t_{y}$}, the parameters of {$\\tau$}.\nThe glimpse network receives the glimpse patch and the {$\\tau$} above as inputs. A 1-layer MLP is applied to {$\\tau$}, while the glimpse patch passes through a 3-layer CNN and a 1-layer MLP to match the vector length of the {$\\tau$} vector after its 1-layer MLP. The glimpse vector is obtained by an element-wise vector multiplication of the above output vectors. \nThe core RNN is composed of 2 layers with Long Short-Term Memory (LSTM) units \\cite{hochreiter1997long}, chosen for their ability to learn long-range dependencies and their stable learning dynamics. 
\nThe decoder is quite simple, made up of only a 3-layer MLP.\nThe filter numbers of the CNNs, the dimensions of the MLPs, the dimensions of the core RNNs, and the number of core RNN steps vary depending on the size of the image.\nAll CNN and MLP layers except the last include batch normalization~\\cite{ioffe2015batch} and ELU activation~\\cite{clevert2015fast}.\nWe used the Adam optimizer \\cite{kingma2014adam} with learning rate 1e-4. \n\n\\subsection{Inpainting}\nThe encoder settings are identical to the image classification case.\nThe decoder (or generator) consists of fractionally-strided CNN layers (3 x 3 kernel size, 1/2 stride) until the original image size is recovered. \nBoth the local and global discriminators are based on CNNs, which extract features from the image to judge the input's genuineness. The local discriminator is composed of a 4-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. The global discriminator consists of a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. A sigmoid function is applied to the last outputs of the local and global discriminators to ensure the output values are between 0 and 1. All CNN, fractionally-strided CNN, and MLP layers except the last include batch normalization and ELU activation. As in the classification settings, the filter numbers of the CNNs and fractionally-strided CNNs, the dimensions of the MLPs and core RNNs, and the number of core RNN steps vary depending on the size of the image. \n\n\n\\section{Experiment}\n\n\\subsection{Image Classification}\nWork in progress.\n\n\n\\subsection{Image Inpainting}\n\\subsubsection{Dataset}\nThe Street View House Numbers (SVHN) dataset~\\cite{netzer2011reading} is a real-world image dataset for object recognition obtained from house numbers in Google Street View images. The SVHN dataset contains 73257 training digits and 26032 testing digits of size 32 x 32 in RGB color. \n\n\\subsubsection{Result}\nFigure \\ref{fig:svhn} shows the result of inpainting on the SVHN dataset. 
6.25\\% of the image pixels at the center are occluded. Even though the result is not excellent, it is enough to show the potential and scalability of CRAM. With a better generative model, better performance is expected.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{svhn.png}\n \\caption{Experiment results on SVHN. From left to right: the ground truth, the input contaminated image, the image generated by the CRAM decoder, and finally the completed image, in which only the missing region was replaced by the generated image.}\n \\label{fig:svhn}\n\\end{figure}\n\n\n\\section{Conclusion}\nWork in progress.\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction} \n\nOver the past decade, our large-scale view of the Universe has\nundergone a revolution. Cosmologists have agreed on a standard model\nthat matches a wide range of astronomical data (e.g. Spergel et\nal. 2007). However, this $\\Lambda$CDM concordance model relies on\nthree ingredients whose origin and nature are unknown: dark matter,\ndark energy and fundamental fields driving a period of inflation,\nduring which density fluctuations are imprinted on the Universe. All\nthese elements of the model represent new aspects of fundamental\nphysics, which can best be studied via astronomy. The nature of the\ndark energy, which now comprises the bulk of the mass-energy budget of\nthe Universe, will determine the ultimate fate of the Universe and is\namong the deepest questions in physics.\n\n\nThe most powerful tool that can be brought to bear on these problems\nis weak gravitational lensing of distant galaxies; this forms the core\nof the DUNE mission\\footnote{for further information on DUNE:\n www.dune-mission.net}. 
Gravitational deflection of light by\nintervening dark matter concentrations causes the images of background\ngalaxies to acquire an additional ellipticity of order a percent,\nwhich is correlated over scales of tens of arcminutes. Measuring this\nsignature probes the expansion history in two complementary ways: (1)\ngeometrically, through the distance-redshift relation, and (2)\ndynamically, through the growth rate of density fluctuations in the\nUniverse.\n\nUtilisation of these cosmological probes relies on the measurement of\nimage shapes and redshifts for several billion galaxies. The\nmeasurement of galaxy shapes for weak lensing imposes tight\nrequirements on the image quality, which can only be met in the absence\nof atmospheric phase errors and in the thermally stable environment of\nspace. For this number of galaxies, distances must be estimated using\nphotometric redshifts, involving photometry measurements over a wide\nwavelength range in the visible and near-IR. The necessary visible\ngalaxy colour data can be obtained from the ground, using current or\nupcoming instruments, complementing the unique image quality of space\nfor the measurement of image distortions. However, at wavelengths\nbeyond 1$\mu$m, we require a wide NIR survey to depths that are only\nachievable from space.\n\n\nGiven the importance of the questions being addressed and to provide\nsystematic cross-checks, DUNE will also measure Baryon Acoustic\nOscillations, the Integrated Sachs-Wolfe effect, and galaxy Cluster\nCounts. Combining these independent cosmological probes, DUNE will\ntackle the following questions: What are the dynamics of dark energy?\nWhat are the physical characteristics of the dark matter? What are\nthe seeds of structure formation and how did structure grow? 
Is\nEinstein's theory of General Relativity the correct theory of gravity?\n\nDUNE will combine its unique space-borne observations with existing and\nplanned ground-based surveys, and hence increase the science return\nof the mission while limiting costs and risks. The panoramic visible\nand NIR surveys required by DUNE's primary science goals will afford\nunequalled sensitivity and survey area for the study of galaxy\nevolution and its relationship with the distribution of the dark\nmatter, the discovery of high-redshift objects, and the physical\ndrivers of star formation. Additional surveys at low galactic\nlatitudes will provide a unique census of the Galactic plane and of\nEarth-mass exoplanets at distances of 0.5-5 AU from their host star\nusing the microlensing technique. These DUNE surveys will provide a\nunique all-sky map in the visible and NIR and thus complement other\nspace missions such as Planck, WMAP, eROSITA, JWST, and WISE. The\nfollowing describes the science objectives, instrument concept and\nmission profile (see Table~\\ref{table:summary} for a baseline\nsummary). A description of an earlier version of the mission without\nNIR capability, developed during a CNES phase 0 study, can be found\nin Refregier et al. 2006 and Grange et al. 2006.\n\\begin{table}\n\\caption{DUNE Baseline summary} \n\\label{table:summary}\n \\begin{tabular}{|l|l|}\n\\hline\nScience objectives & Must: Cosmology and Dark Energy. Should: Galaxy formation\\\\\n & Could: Extra-solar planets\\\\\n\\hline\nSurveys & Must: 20,000 deg$^2$ extragalactic, Should: Full sky (20,000\ndeg$^2$ \\\\\n& Galactic), 100 deg$^2$ medium-deep. 
Could: 4 deg$^2$ planet hunting\\\\\n\\hline\nRequirements & 1 visible band (R+I+Z) for high-precision shape measurements,\\\\\n & 3 NIR bands (Y, J, H) for photometry\\\\\n\\hline\nPayload & 1.2m telescope, Visible \\& NIR cameras with 0.5 deg$^2$ FOV\neach\\\\\n\\hline\nService module & Mars/Venus Express, Gaia heritage \\\\\n\\hline\nSpacecraft & 2013 kg launch mass\\\\\n\\hline\nOrbit & Geosynchronous\\\\\n\\hline\nLaunch & Soyuz S-T Fregat\\\\\n\\hline\nOperations & 4-year mission\\\\\n\\hline\n \\end{tabular}\n\\end{table}\n\n\\section{\\label{section2}Science Objectives} \n\nThe DUNE mission will investigate a broad range of astrophysics and\nfundamental physics questions detailed below. Its aims are twofold:\nfirst, to study dark energy and measure its equation-of-state parameter\n$w$ (see definition below) and its evolution, with precisions of 2\\%\nand 10\\% respectively, using both the expansion history and structure\ngrowth; second, to explore the nature of dark matter by testing the Cold\nDark Matter (CDM) paradigm and by measuring precisely the sum of the\nneutrino masses. At the same time, it will test the validity of\nEinstein's theory of gravity. In addition, DUNE will investigate how\ngalaxies form, survey all Milky-Way-like galaxies in the 2$\\pi$\nextra-galactic sky out to $z \\sim 2$ and detect thousands of galaxies\nand AGN at $z>6$. While only a few objects at $z>6$ are known, DUNE will find hundreds of Virgo-cluster-mass\nobjects at $z>2$, and several thousand clusters of M=$1-2 \\times\n10^{13}$M$_{\\odot}$. The latter are the likely environments in which the peak\nof QSO activity at $z\\sim2$ takes place, and hold the empirical\nkey to understanding the heyday of QSO activity.\n\nUsing the Lyman-dropout technique in the near-IR, the DUNE-MD survey\nwill be able to detect the most luminous objects in the early Universe\n($z>6$): $\\sim 10^4$ star-forming galaxies at $z\\sim8$ and up to\n$10^3$ at $z\\sim10$, for SFRs $>30-100$M$_{\\odot}$/yr. 
It will also be able\nto detect significant numbers of high-$z$ quasars: up to $10^4$ at\n$z\\sim7$, and $10^3$ at $z\\sim9$. These will be central to understanding the\nreionisation history of the Universe.\n\nDUNE will also detect a very large number of strong lensing systems: \nabout $10^5$ galaxy-galaxy lenses, $10^3$ galaxy-quasar lenses and\n5000 strong lensing arcs in clusters (see Meneghetti et al. 2007). It\nis also estimated that several tens of galaxy-galaxy lenses will be\n\\emph{double} Einstein rings (Gavazzi et al. 2008), which are powerful\nprobes of the cosmological model as they simultaneously probe several redshifts.\n\n\nIn addition, during the course of the DUNE-MD survey (over 6 months),\nwe expect to detect $\\sim 3000$ Type Ia Supernovae with redshifts up\nto $z\\sim0.6$ and a comparable number of Core Collapse SNe (Types II\nand Ib/c) out to $z\\sim0.3$. This will lead to measurements of SN rates,\nthus providing information on their progenitors and on the star\nformation history.\n\n\n\n\n\n\\subsection{Studying the Milky Way with DUNE}\n\nDUNE is also primed for a breakthrough in Galactic astronomy. 
DASS-EX,\ncomplemented by the shallower survey of the Galactic plane (with\n$|b|<30\\deg$), will provide all-sky high-resolution (0.23'' in the wide red\nband, and 0.4'' in YJH) deep imaging of the stellar content of the\nGalaxy, allowing the deepest detailed structural studies of the thin\nand thick disk components, the bulge/bar, and the Galactic halo\n(including halo stars in nearby galaxies such as M31 and M33) in bands\nwhich are relatively insensitive to dust in the Milky Way.\n\nDUNE will be little affected by extinction and will supersede by\norders of magnitude all of the ongoing surveys in terms of angular\nresolution and sensitivity.\nDUNE will thus\nenable the most comprehensive stellar census of late-type dwarfs and\ngiants, brown dwarfs and He-rich white dwarfs, along with detailed\nstructural studies of tidal streams and merger fragments. DUNE's\nsensitivity will also open up a new discovery space for rare stellar\nand low-temperature objects via its H-band imaging. Currently, much\nof the study of Galactic structure is focused on the halo. Studying the\nGalactic disk components requires the combination of spatial\nresolution (crowding) and dust penetration (H-band) that DUNE can\ndeliver.\n\nBeyond our Milky Way, DUNE will also yield the most detailed and\nsensitive survey of structure and substructure in nearby galaxies,\nespecially of their outer boundaries, thus constraining their merger\nand accretion histories.\n\n\n\\subsection{Search for Exo-Planets}\nThe discovery of extrasolar planets is the most exciting development\nin astrophysics over the past decade, rivalled only by the discovery\nof the acceleration of the Universe. Space observations (e.g. COROT, KEPLER), supported by\nground-based high-precision radial velocity surveys, will probe\nlow-mass planets (down to $1 M_\\oplus$). DUNE is also\nperfectly suited to trace the distribution of matter on very small\nscales, namely those of the normally invisible extrasolar planets. 
Using the\nmicrolensing effect, DUNE can provide a statistical census of\nexoplanets in the Galaxy with masses over $0.1 M_\\oplus$, from orbits\nof 0.5 AU out to free-floating objects. This includes analogues to all the\nsolar system's planets except for Mercury, as well as most planets\npredicted by planet formation theory. Microlensing is the temporary\nmagnification of a Galactic bulge source star by the gravitational\npotential of an intervening lens star passing near the line of\nsight. A planet orbiting the lens star will alter this\nmagnification, producing a brief flash or a dip in the observed light\ncurve of the star (see Fig. \\ref{figc5}). \nBecause of atmospheric seeing (limiting the monitoring to large source\nstars) and poor duty cycle even when using networks, ground-based\nmicrolensing surveys are only able to detect planets of a few to 15 $M_\\oplus$\nin the vicinity of the Einstein ring radius (2-3 AU). The high\nangular resolution of DUNE, and the uninterrupted visibility and NIR\nsensitivity afforded by space observations, will provide detections of\nmicrolensing events using G and K bulge dwarf stars as sources, and\ntherefore can detect planets down to $0.1-1 M_\\oplus$ from orbits of\n0.5 AU. Moreover, there will be a very large number of transiting hot\nJupiters detected towards the Galactic bulge as 'free' ancillary\nscience. A space-based microlensing survey is thus the only way to\ngain a comprehensive census and understanding of the nature of\nplanetary systems and their host stars. We also underline that the\nplanet search scales linearly with the surface of the focal plane and\nthe duration of the experiment.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm, height=6cm, angle=0]{duneml.ps}\n\\caption{Exoplanet discovery parameter space (planet mass vs orbit size)\nshowing for reference the 8 planets from our solar system (labeled as letters),\nthose detected by Doppler wobble (T), transit (circle), and\nmicrolensing. 
We outline regions that can be probed by different\nmethods. Note the uniqueness of the parameter space probed by DUNE\ncompared to other techniques. \n}\n\\label{figc5}\n\\end{center}\n\\end{figure} \n\n\\section{DUNE Surveys: the Need for All-Sky Imaging from Space}\nThere are two key elements to a high precision weak lensing survey: a\nlarge survey area to provide large statistics, and the control of\nsystematic errors. Figure \\ref{fig:req} shows that to\nreach our dark energy target (2\\% error on $w_n$) a survey of 20,000\nsquare degrees with galaxies at $z\\sim1$ is required. This result is\nbased on detailed studies showing that, for a fixed observing time,\nthe accuracy of all the cosmological parameters is highest for a wide\nrather than deep survey (Amara \\& Refregier 2007a, 2007b). This required\nsurvey area drives the choice of a 1.2m telescope and a combined\nvisible/NIR FOV of 1 deg$^{\\rm 2}$ for the DUNE baseline.\n\n\\begin{figure}\n \\includegraphics[width=0.5\\textwidth,height=5cm,angle=0]{surveyarea.ps}\\includegraphics[width=0.5\\textwidth,angle=0]{des_dens.eps}\n\\caption{Left: Error on the dark energy equation of state\nparameter $w_n$ as a function of weak lensing survey area (in deg$^{\\rm\n2}$) for several shape measurement systematic levels (assuming 40\ngalaxies/amin$^2$ with a median redshift $z_m$=1). An area of 20,000 deg$^2$\nand a residual systematic shear variance of\n$\\sigma_{sys}^2<10^{-7}$ is required to achieve the DUNE objective\n(error on $w_n$ better than 2\\%). \nRight (from Abdalla et\nal. 2007): Photometric redshift performance for a DES-like ground survey\nwith and without the DUNE NIR bands (J,H). 
The deep NIR photometry,\nonly achievable in space, results in a dramatic reduction of the\nphotometric redshift errors and catastrophic failures, which is needed for all\nthe probes (weak lensing, BAO, CC, ISW).}\n\\label{fig:req}\n\\end{figure} \n\n\nGround-based facilities plan to increase area coverage, but they will\neventually be limited by systematics inherent in ground-based\nobservations (atmospheric seeing which smears the image, instabilities\nof ground-based PSFs, telescope flexure and wind-shake, and\ninhomogeneous photometric calibrations arising from seeing\nfluctuations). The most recent ground-based wide-field imagers\n(e.g. MegaCam on CFHT, and Subaru) have a stochastic variation of the\nPSF ellipticity of the order of a few percent, i.e. of the same order\nof magnitude as the sought-after weak lensing signal. Current\nmeasurements have a residual shear systematics variance of\n$\\sigma_{sys}^2 \\sim 10^{-5}$, as indicated by both the results of\nthe STEPII program and the scatter in measured values of\n$\\sigma_8$. This level of systematics is comparable to the statistical\nerrors for surveys that cover a few tens of square degrees\n(Fig. \\ref{fig:req}). As seen in the figure, to reach DUNE's dark\nenergy targets, the systematics must be at the level of\n$\\sigma_{sys}^2 \\sim 10^{-7}$, 100 times better than the current level\n(see Amara \\& Refregier 2007b for details). While ground-based surveys\nmay improve their systematics control, reaching this level will be an\nextreme challenge. One ultimate limit arises from the finite\ninformation contained in the stars used to calibrate the PSF, due to\nnoise and pixelisation. Simulations by Paulin-Henriksson et al. (2008)\nshow that, to reach our systematic targets, the PSF must remain\nconstant (within a tolerance of 0.1\\%) over 50 arcmin$^2$ (which\ncorresponds to $\\sim 50$ stars). While this is prohibitive from the\nground, we have demonstrated during a CNES phase 0 study (Refregier et\nal. 
2006) that, with careful design of the instrument, this can be\nachieved from space. In addition to shape measurements, wide-area\nimaging surveys use photometric information to measure the redshift\nof galaxies in the images. Accurate measurements of the photometric\nredshifts require the addition of NIR photometry (an example of this\nis shown in Fig. \\ref{fig:req}, right panel, and also Abdalla et\nal. 2007). Such depths in the NIR cannot be achieved from the ground\nover wide-area surveys and can only be reached from space.\n\n\\par\\bigskip\n\nTo achieve the scientific goals listed in section \\ref{section2}, DUNE will\nperform four surveys, detailed in the following and in Table \\ref{tableC5}. \n\n\\subsection{Wide Extragalactic Survey: DASS-EX}\nTo measure dark energy to the required precision, we need to make\nmeasurements over the entire extra-galactic sky to a depth which\nyields 40 gal/arcmin$^2$ useful for lensing, with a median redshift\n$z_m \\simeq 0.9$. This can be achieved with a survey (DASS-EX) that\nhas an AB-magnitude limit of 24.5 (10$\\sigma$ extended source) in a broad\nred visible filter (R+I+Z). Since DUNE focuses on\nobservations that cannot be obtained from the ground, the wide survey\nrelies on two unique factors that are enabled by space: image quality\nin the visible and NIR photometry. Central to shape measurements for\nweak lensing, the PSF of DUNE needs to be sampled with better than 2-2.5\npixels per FWHM (Paulin-Henriksson et al. 2008), to be constant over\n50 stars around each galaxy (within a tolerance of $\\sim 0.1\\%$ in\nshape parameters), and to have a wavelength dependence which can be\ncalibrated. Accurate measurement of the redshift of distant galaxies\n($z \\sim 1$) requires photometry in the NIR, where galaxies have a\ndistinctive feature (the 4000$\\AA$ break). Deep NIR photometry\nrequires space observations. 
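As a quick order-of-magnitude check of the survey requirements above (20,000 deg$^2$ at 40 lensing-quality galaxies per arcmin$^2$), the total number of usable galaxies can be estimated as follows:

```python
# Back-of-the-envelope check of the DASS-EX source counts quoted above:
# 20,000 deg^2 at 40 lensing-quality galaxies per arcmin^2.

area_deg2 = 20_000
arcmin2_per_deg2 = 60 * 60          # 3600 arcmin^2 in one deg^2
density = 40                        # galaxies per arcmin^2

n_galaxies = area_deg2 * arcmin2_per_deg2 * density
print(f"{n_galaxies:.2e}")          # ~2.9e9, i.e. several billion galaxies
```

This is consistent with the "several billion galaxies" quoted for the shape and redshift measurements in the introduction.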
The Y, J and H bands are the perfect\ncomplement to ground-based surveys (see Abdalla et al. 2007\nfor a discussion), as recommended by the ESO/ESA Working Group on\nFundamental Cosmology (Peacock et al. 2006).\n\n\\subsection{Legacy surveys: DASS-G, DUNE-MD, and DUNE-ML}\n\nWe propose to allocate six months to a medium-deep survey (DUNE-MD) with\nan area of 100 deg$^2$ to magnitudes of 26 in Y, J and H, located at\nthe N and S ecliptic poles. This survey can be used to calibrate DUNE\nduring the mission, by constructing it from a stack of $>30$\nsub-images to achieve the required depths. DUNE will also perform a\nwide Galactic survey (DASS-G) that will complement the 4$\\pi$ coverage\nof the sky, and a microlensing survey (DUNE-ML). Both surveys require\nshort exposures. Together with the DASS-EX, these surveys need good\nimage quality with a low level of stray light. A summary of all the\nsurveys is shown in Table \\ref{tableC5}.\n\n\\begin{table}\n\\caption{Requirements and geometry for the four DUNE surveys.}\n\\label{tableC5}\n \\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{\\textbf{Wide Extragalactic Survey DASS-EX (must)}}\\\\ \\hline\n\\multicolumn{2}{|c|}{Area}&\\multicolumn{2}{|c|}{20,000 sq degrees -- $|b|> 30 \\deg$\n}\\\\ \\hline\n\\multirow{2}{*}{Survey Strategy}& Contiguous patches & \\multicolumn{2}{|c|}{$> 20 \\deg \\times 20 \\deg$} \\\\ \\cline{2-4}\n & Overlap & \\multicolumn{2}{|c|}{$ 10 \\%$} \\\\ \\hline\n\\multicolumn{2}{|c|}{Shape Measurement Channel}& R+I+Z (550-920nm) & R+I+$Z_{AB}$ $<$24.5 (10$\\sigma$ ext) \\\\ \\hline\n\\multicolumn{2}{|c|}{ } & Y (920-1146nm) & $Y_{AB}<$24 (5$\\sigma$ point) \\\\ \\cline{3-4}\n\\multicolumn{2}{|c|}{Photometric Channel} & J (1146-1372nm) & $J_{AB}<$24 (5$\\sigma$ point) \\\\ \\cline{3-4}\n\\multicolumn{2}{|c|}{ } & H (1372-1600nm) & $H_{AB}<$24 (5$\\sigma$ point) \\\\ \\hline\n\\multirow{2}{*}{PSF} & Size \\& Sample & 0.23\" FWHM & $>$ 2.2 pixels per FWHM \\\\\\cline{2-4} \n & 
Stability & \\multicolumn{2}{|c|}{within tolerance of 50 stars} \\\\ \\hline\n\\multirow{2}{*}{Image Quality} & Dead pixels &\\multicolumn{2}{|c|}{$<$ 5 \\% of final image}\\\\ \\cline{2-4}\n & Linearity &\\multicolumn{2}{|c|}{Instrument calibratable for $1<$S/N$<1000$}\\\\ \\hline \n\\multicolumn{4}{|c|}{\\textbf{Medium Deep Survey DUNE-MD (should)}}\\\\ \\hline \n\\multicolumn{2}{|c|}{Area}&\\multicolumn{2}{|c|}{ $\\sim$100 sq degrees\n-- Ecliptic poles}\\\\ \\hline\nSurvey Strategy& Contiguous patches & \\multicolumn{2}{|c|}{Two patches each $7 \\deg \\times 7 \\deg$} \\\\ \\hline\n\\multicolumn{2}{|c|}{Photometric Channel} & \\multicolumn{2}{|c|}{ $Y_{AB}, \\; J_{AB}, \\; H_{AB} <$26 (5$\\sigma$ point) -- for stack}\\\\ \\hline\n\\multicolumn{2}{|c|}{PSF} & \\multicolumn{2}{c|}{Same conditions as the wide survey} \\\\ \\hline\n\\multicolumn{4}{|c|}{\\textbf{Wide Galactic Survey DASS-G (should)}}\\\\ \\hline \n\\multicolumn{2}{|c|}{Area}&\\multicolumn{2}{|c|}{ 20,000 sq degrees --\n $|b| < 30 \\deg$}\\\\ \\hline\n\\multicolumn{2}{|c|}{Shape Measurement Channel}&\\multicolumn{2}{|c|}{$R+I+Z_{AB}<23.8$ ($5\\sigma$ ext)}\\\\ \\hline\n\\multicolumn{2}{|c|}{Photometric Channel} & \\multicolumn{2}{|c|}{ $Y_{AB}, \\; J_{AB}, \\; H_{AB} <$22 (5$\\sigma$ point)}\\\\ \\hline\nPSF & Size & \\multicolumn{2}{|c|}{$< 0.3\"$ FWHM}\\\\ \\hline\n\\multicolumn{4}{|c|}{\\textbf{Microlensing Survey DUNE-ML (could)}}\\\\ \\hline \n\\multicolumn{2}{|c|}{Area}&\\multicolumn{2}{|c|}{ 4 sq degrees -- Galactic bulge}\\\\ \\hline\nSurvey Strategy & Time sampling & \\multicolumn{2}{|c|}{Every 20 min -- 1 month blocks -- total of 3 months}\\\\ \\hline\n\\multicolumn{2}{|c|}{Photometric Channel} & \\multicolumn{2}{|c|}{\n $Y_{AB}, \\; J_{AB}, \\; H_{AB} <$22 (5$\\sigma$ point) -- per visit}\\\\ \\hline\nPSF & Size & \\multicolumn{2}{|c|}{$< 0.4\"$ FWHM}\\\\ \\cline{2-4}\n \\hline\n\\end{tabular}\n\n\\end{table} \n\n\n\\section{Mission Profile and Payload instrument} \nThe mission design of 
DUNE is driven by the need for the stability of\nthe PSF and large sky coverage. PSF stability puts stringent\nrequirements on pointing and thermal stability during the observation\ntime. The 20,000 square degrees of DASS-EX demands high operational\nefficiency, which can be achieved using a drift scanning mode (or Time\nDelay Integration, TDI, mode) for the CCDs in the visible focal\nplane. TDI mode necessitates the use of a counter-scanning mirror to\nstabilize the image in the NIR focal plane channel.\n\nThe baseline for DUNE is a Geosynchronous Earth orbit (GEO), with a\nlow inclination and altitude close to a standard geostationary\norbit. Based on the CNES Phase 0 study, this solution was chosen to meet\nboth the high science telemetry needs and the spacecraft low\nperturbation requirements. This orbit also provides substantial\nlaunch flexibility, and simplifies the ground segment.\n\nTo meet the PSF size and sampling requirements, a\nbaseline figure for the line-of-sight stability is 0.5 pixel (smearing\nMTF $> 0.99$ at cut-off frequency), with the stability budget\nshared between the telescope thermal stability (0.35 pixel) and the\nattitude and orbit control system (AOCS) (0.35 pixel). This implies a\nline-of-sight stability better than 0.2 $\\mu$rad over 375 s (the\nintegration time across one CCD). This stringent requirement calls for\na minimisation of external perturbations, which mainly consist of solar radiation\npressure and gravity gradient torques. A gravitational torque of 20\n$\\mu$Nm is acceptable, and requires an orbit altitude of at least\n25,000 km. The attitude and orbit control design is based on proportional\nactuators. \n\nA stable thermal environment is required for the payload ($\\sim 10$ mK\nvariation over 375 s), hence the mission design requires a permanent\ncold face for the focal plane radiators and an orbit that minimizes\nheat load from the Earth. 
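The 0.5 pixel smearing budget quoted above is consistent with the two 0.35 pixel allocations if, as assumed in this sketch, the thermal and AOCS contributions combine in quadrature (root-sum-square):

```python
# Sketch of the line-of-sight smear budget, assuming the thermal and
# AOCS contributions add in quadrature (root-sum-square); this
# combination rule is our assumption, not stated in the text.
import math

thermal = 0.35   # pixel, telescope thermal stability allocation
aocs = 0.35      # pixel, attitude and orbit control system allocation

total = math.hypot(thermal, aocs)
print(f"{total:.3f} pixel")  # ~0.495 pixel, within the 0.5 pixel budget
assert total <= 0.5
```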
This\ncould be achieved by having the whole payload in a regulated-temperature\ncavity. \n\nA primary driver for the GEO orbit choice is the high data rate -- the\norbit must be close enough to Earth to facilitate the transmission of\nthe large amount of data produced by the payload every day (about\n1.5~Tbits) given existing ground network facilities, while minimizing\ncommunication downlink time during which science observations cannot\nbe made (with a fixed antenna).\n\nThe effects of the radiation environment at GEO, for both CCD bulk\ndamage induced by solar protons and false detections induced by\nelectrons with realistic shielding, are considered acceptable. However,\nDUNE-specific radiation tests on CCD sensors will be required as an\nearly development to confirm the robustness of the measurements to\nproton-induced damage. A Highly Elliptical Orbit (HEO) operating beyond the\nradiation belts is an alternative in case electron radiation or thermal\nconstraints prevent the use of GEO.\n\n\nThe payload for DUNE is a passively cooled 1.2 m diameter Korsch-like\nthree-mirror telescope with two focal planes, visible and NIR, covering\n1 square degree. Figure~\\ref{fig:4.1} provides an overview of the\npayload. The Payload module design uses Silicon Carbide (SiC)\ntechnology for the telescope optics and structure. This provides low\nmass, high stability, low sensitivity to radiation and the ability to\noperate the entire instrument at cold temperature, typically below 170\nK, which will be important for cooling the large focal planes. The two\nFPAs, together with their passive cooling structures, are isostatically\nmounted on the M1 baseplate. Also part of the payload are the de-scan\nmirror mechanism for the NIR channel and the additional payload data\nhandling unit (PDHU).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.75\\textwidth,angle=0]{telescope.eps}\n\\caption{Overview of all payload elements. 
}\n\\label{fig:4.1}\n\\end{center}\n\\end{figure}\n\n\\subsection{Telescope}\n\nThe telescope is a Korsch-like $f/20$ three-mirror telescope. After\nthe first two mirrors, the optical bundle is folded just after passing\nthe primary mirror (M1) to reach the off-axis tertiary mirror. A\ndichroic element located near the exit pupil of the system provides\nthe spectral separation of the visible and NIR channels. For the NIR,\nthe de-scan mechanism close to the dichroic filter allows for a\nlargely symmetric configuration of both spectral channels. The whole\ninstrument fits within a cylinder of 1.8 m diameter and\n2.65 m length. The payload mass is approximately 500~kg, with 20\\%\nmargin, and average/peak power estimates are 250/545~W.\n\nSimulations have shown that the overall wavefront error (WFE) can be\ncontained within 50 nm r.m.s., compatible with the required\nresolution. Distortion is limited to 3-4 $\\mu$m, introducing a\nfixed (hence accessible to calibration) displacement of 0.15 $\\mu$rad in\nobject space. The need to calibrate the PSF shape\nerror to better than 0.1\\% over 50 arcmin$^2$ leads to a thermal\nstability requirement of $\\sim 10$ mK over 375 s. Slow variations of the solar\nincidence angle on the sunshield will not significantly\nperturb the payload performance, even for angles as large as 30\ndegrees.\n\n\\subsection{Visible FPA}\n\nThe visible Focal Plane Array (VFP) consists of 36 large-format\nred-sensitive CCDs, arranged in a 9x4 array (Figure~\\ref{fig:4.2}),\ntogether with the associated mechanical support structure and\nelectronics processing chains. Four additional CCDs dedicated to\nthe AOCS measurements are located at the edge of\nthe array. All CCDs are 4096 x 4096 pixel red-enhanced e2v CCD203-82 devices\nwith square 12 $\\mu$m pixels. The physical size of the array is\n466 x 233 mm, which corresponds to $1.09\\deg \\times 0.52 \\deg$. 
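A simple consistency check of the visible-channel sampling, assuming the $f/20$ ratio and 1.2 m aperture quoted above (i.e. a 24 m focal length): 12 $\mu$m pixels then subtend about 0.1 arcsec, giving roughly 2.2 pixels across the 0.23'' PSF FWHM:

```python
# Consistency check of the visible-channel plate scale, assuming the
# f/20, 1.2 m telescope figures above (focal length = 20 * 1.2 m = 24 m).
RAD_TO_ARCSEC = 206_265

focal_length_m = 20 * 1.2          # f-number times aperture diameter
pixel_m = 12e-6                    # 12 micron CCD pixels

scale = pixel_m / focal_length_m * RAD_TO_ARCSEC
print(f"{scale:.3f} arcsec/pixel")                # ~0.103 arcsec/pixel

psf_fwhm = 0.23                    # arcsec, wide red band PSF
print(f"{psf_fwhm / scale:.2f} pixels per FWHM")  # ~2.2
```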
Each pixel is\n0.102 arcsec, so that the PSF is well sampled in each direction over\napproximately 2.2 pixels, including all contributions. The VFP\noperates in the red band from 550-920nm. This bandpass is produced by\nthe dichroic. The CCDs are 4-phase devices, so they can be clocked in\n$1/4$ pixel steps. The exposure duration on each CCD is 375s,\npermitting a slow readout rate and minimising readout noise. Combining\n4 rows of CCDs will then provide a total exposure time of 1500s.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth,angle=0]{VisFPA.eps}\n\\caption{Left: The VFP assembly with the 9x4 array of CCDs\nand the 4 AOCS sensors on the front (blue) and the warm electronics\nradiator at the back (yellow). Right: An expanded view of the VFP\nassembly, including the electronics modules and thermal hardware (but\nexcluding the CCD radiator). Inset: The e2v CCD203-82 4kx4k pixels\nshown here in a metal pack with flexi-leads for electrical\nconnections. One of the flexi-leads will be removed. }\n\\label{fig:4.2}\n\\end{center}\n\\end{figure} \n\nThe VFP will be used by the spacecraft in a closed-loop system to\nensure that the scan rate and TDI clocking are synchronised. The two\npairs of AOCS CCDs provide two speed measurements on relatively bright\nstars (V $\\sim 22-23$). The DUNE VFP is largely a self-calibrating\ninstrument. For the shape measurements, stars of the appropriate\nmagnitude will allow the PSF to be monitored for each CCD including\nthe effects of optical distortion and detector alignment.\nRadiation-induced charge transfer inefficiency will modify the PSF and\nwill also be self-calibrated in orbit.\n\n\n\\subsection{NIR FPA}\n\nThe NIR FPA consists of a 5 x 12 mosaic of 60 Hawaii 2RG detector\narrays from Teledyne, NIR bandpass filters for the wavelength bands Y,\nJ, and H, the mechanical support structure, and the detector readout\nand signal processing electronics (see Figure~\\ref{fig:4.3}). 
The FPA\nis operated at a maximum temperature of 140 K for a low dark current of\n0.02 $e^-$/s. Each array has 2048 x 2048 square pixels of 18 $\\mu$m\nsize, resulting in a 0.15 x 0.15 arcsec$^2$ field of view (FOV) per pixel.\nThe mosaic has a physical size of 482 x 212 mm, and covers a\nFOV of $1.04^\\circ \\times 0.44^\\circ$ or 0.46 square degrees. The\nHgCdTe Hawaii 2RG arrays are standard devices sensitive in the 0.8 to\n1.7 $\\mu$m wavelength range.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.75\\textwidth,angle=0]{NIRFPA.eps}\n\\caption{Layout of the NIR FPA (MPE/Kayser-Threde). The 5\nx 12 Hawaii 2RG Teledyne detector arrays (shown in the inset) are\ninstalled in a molybdenum structure.}\n\\label{fig:4.3}\n\\end{center}\n\\end{figure} \n\nAs the spacecraft is scanning the sky, the image motion on the NIR FPA\nis stabilised by a de-scanning mirror during the integration time of\n300 s or less per NIR detector. The total integration time of 1500 s\nfor the $0.4^\\circ$-high field is split among five rows and 3\nwavelength bands along the scan direction. The effective integration\ntimes are 600 s in J and H, and 300 s in Y. For each array, the\nreadout control, A/D conversion of the video output, and transfer of\nthe digital data via a serial link are handled by the SIDECAR ASIC\ndeveloped for JWST. To achieve the limiting magnitudes defined by the\nscience requirements within these integration times, a minimum of 13\nreads are required. Data are\nprocessed in the dedicated unit located in the service module.\n\n\\section{Basic spacecraft key factors}\nThe spacecraft platform architecture is fully based on well-proven and\nexisting technologies. The mechanical, propulsion, and solar array\nsystems are reused from Venus Express (ESA) and Mars Express. All the\nAOCS, $\\mu$-propulsion, power control and DMS systems are reused from\nGAIA. Finally, the science telemetry system is a direct reuse from the\nPLEIADES (CNES) spacecraft. 
All TRLs are therefore high and all\ntechnologies are either standard or being developed for GAIA (AOCS for\ninstance).\n\n\\subsection{Spacecraft architecture and configuration}\nThe spacecraft driving requirements are: (1) passive cooling of both\nvisible and NIR focal planes below 170 K and 140 K, respectively; (2)\nthe PSF stability requirement, which translates to line-of-sight (LOS)\nand payload thermal stability requirements; and (3) the high science\ndata rate. The spacecraft consists of a Payload Module (PLM) that\nincludes the instrument (telescope hardware, focal plane assemblies\nand on-board science data management) and a Service Module (SVM). The\nSVM design is based on the Mars Express parallelepiped structure that\nis 1.7 m $\\times$ 1.7 m $\\times$ 1.4 m, which accommodates all\nsubsystems (propulsion, AOCS, communication, power, sunshield, etc.) as\nwell as the PLM. \n\n\\subsection{Sunshield and Attitude Control}\nThe nominal scan strategy assumes a constant, normal ($0\\deg$)\nincidence of the sun on the sunshield, while allowing a sun incidence\nangle of up to $30\\deg$ to provide margin, flexibility for data\ntransmission manoeuvres and potential for further scan\noptimisation. The sunshield is a ribbed, MLI-covered central frame\nfixed to the platform. The satellite rotates in a drift\nscan-and-sweep-back approach, where the spacecraft is brought back to the\nnext scan position after each $20\\deg$ strip. The scan rate is $1.12\n\\deg$ per hour, such that every day, one complete strip is scanned and\ntransmitted to ground.\n\n\nDue to the observation strategy and the fixed high gain antenna (HGA),\nthe mission requires a high level of attitude manoeuvrability.\nDuring data collection, the spacecraft is\nrotated slowly about the sunshield axis. 
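The quoted scan rate can be cross-checked against the TDI integration time: one 4096-pixel CCD column at the 0.102'' visible plate scale, crossed in 375 s, fixes the drift rate:

```python
# Cross-check of the quoted scan rate: one 4096-pixel CCD at the visible
# plate scale of 0.102 arcsec/pixel, crossed in the 375 s TDI
# integration time, fixes the drift rate.

ccd_pixels = 4096
pixel_arcsec = 0.102
t_cross_s = 375

ccd_deg = ccd_pixels * pixel_arcsec / 3600        # CCD height on sky, deg
rate_deg_per_hour = ccd_deg / t_cross_s * 3600
print(f"{rate_deg_per_hour:.2f} deg/hour")        # ~1.11, matching ~1.12
```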
The slow scan control\nrequirements are equivalent to three-axis satellite control. The\nline-of-sight stability requirement is 0.2 $\mu$rad over 375 s (the\nintegration time for one CCD) and is driven by optical quality and PSF\nsmearing; it will be partially achieved using\na continuous PSF calibration based on the stars located in the\nneighborhood (50 arcmin$^2$) of each observed galaxy. Detailed\nanalyses show that DUNE's high pointing performance is comparable in\ndifficulty to that achieved on GAIA during science\nobservations. Similarly to GAIA, two pairs of dedicated CCDs in the\nvisible focal plane are used for measuring the spacecraft attitude\nspeed vector. Hybridisation of the star tracker and payload\nmeasurements is used to reduce the noise injected by the star tracker\nin the loop. For all other operational phases and for the transition\nfrom coarse manoeuvres to the science observation mode, the attitude\nis controlled using the Mars Express propulsion system. The attitude\nestimation is based on two star trackers (also used in science\nobserving mode), six single-axis gyros and two sun sensors for\nmonitoring DUNE pointing during manoeuvres with a typical accuracy\nbetter than 1 arcmin.\n\n\subsection{Functional architecture: propulsion and electrical systems}\nThe star signal collected in the instrument is spread on the focal\nplane assembly and transformed into a digital electrical signal which\nis transferred to the Payload Data Handling Unit (PDHU), based on\nThales Alenia Space heritage. Power management and regulation are\nperformed by the Power Conditioning \& Distribution Unit (PCDU),\nbased on the GAIA programme. Electrical power is generated by two solar\narrays (2.7 m$^2$ each), as used in the Mars Express and Venus Express\nESA spacecraft. The control of their orientation is based on the\norientation of the whole spacecraft\ntowards the Sun. 
The panels are populated with GaAs cells.\n\n\n\nThe RF architecture is divided into two parts: the TT\&C system\n(S-Band) plus a dedicated payload telemetry system (X-Band in the EES\nband, Earth Exploration Service). The allocated bandwidth for payload\ntelemetry is 375 MHz and high rate transmitters already exist for\nthis purpose. The X-band 155 Mbit/s TMTHD modulator can be reused\nfrom the Pleiades spacecraft. A single fixed HGA of 30 cm diameter can be\nused (re-used from Venus Express). The RF power required is 25 W, which\nalso enables the re-use of the solid state power amplifier (SSPA) from\nPleiades. The transmitted science data volume is estimated at 1.5\nTbits per day. The baseline approach consists of\nstoring the science data on board in the PDHU and downlinking the\ndata twice per day. This can be achieved naturally twice per orbit at\n06h and 18h local time, using the rotation degree of freedom about\nthe satellite-sun axis for orienting the antenna towards the ground\nstation. The total transmission duration is less than 3 hours. The\nspacecraft attitude variation during transmission is less than 30 deg\n(including AOCS margins). A 20 kg hydrazine propellant budget is\nrequired. Should the operational orbit change to HEO, a dual\nfrequency (S-Band + X-Band) 35 m ESOC antenna could meet the\nmission needs, with an increased HGA diameter (70 cm).\n\n\nThe required power on the GEO orbit is 1055 W. The\nsizing case is the science mode after eclipse with battery\ncharging. Electrical power is generated by the two solar arrays of 2.7\nm$^2$ each. With a $30\deg$ solar angle,\nthe solar array can generate up to 1150 W at the end of its life. The battery \nhas been sized in a preliminary\napproach for the eclipse case (64 Ah needed).\n\n\section{Science Operations and Data Processing}\nThe DUNE operational scenario follows the lines of a survey-type\nproject. 
The satellite will operate autonomously except for defined\nground contact periods during which housekeeping and science telemetry\nwill be downlinked, and the commands needed to control spacecraft and\npayload will be uploaded. The DUNE processing pipeline is inspired by\nthe Terapix pipeline used for the CFHT Legacy Survey. The total\namount of science image data expected from DUNE is $\\sim 370$\nTerapixels (TPx): 150TPx from the Wide, 120TPx for 3 months of the\nmicrolensing survey, 60TPx for the 3 months of the Galactic plane\nsurvey, and 40TPx for 6 months deep survey. Based on previous\nexperience, we estimate an equal amount of calibration data (flat\nfields, dark frames, etc.) will be taken over the course of the\nmission. This corresponds to 740TB, roughly 20 times the amount of\nscience data for CFHT during 2003-2007.\n\nThere are four main activities necessary for the data processing,\nhandling, and data organisation of the DUNE surveys:\n\\begin{enumerate}\n \\item software development: image and catalogue processing, \n quality control, image and catalogue handling tools,\n pipeline development, survey monitoring, data archiving and\n distribution, numerical simulations, image simulations;\n\\item processing operation: running the pipeline, quality control and\nquality assessment operation and versioning,\npipeline/software/database update and maintenance;\n\\item data archiving and data distribution: data and meta-data\nproducts and product description, public user interface, external data\n(non-DUNE) archiving and distribution, public outreach;\n\\item computing resources: data storage, cluster architecture,\nGRID technology.\n\\end{enumerate}\n\n\n\n\\section{Conclusion: DUNE's Impact on Cosmology and Astrophysics} \n\nESA's Planck mission will bring unprecedented precision to the\nmeasurement of the high redshift Universe. This will leave the dark\nenergy dominated low redshift Universe as the next frontier in high\nprecision cosmology. 
Constraints from the radiation perturbation in\nthe high redshift CMB, probed by Planck, combined with density\nperturbations at low redshifts, probed by DUNE, will form a complete\nset for testing all sectors of the cosmological model. In this\nrespect, a DUNE+Planck programme can be seen as the next significant\nstep in testing, and thus challenging, the standard model of\ncosmology. Table \ref{tableC2} illustrates just how precise the\nconstraints on theory are expected to be: DUNE will offer high\npotential for ground-breaking discoveries of new physics, from dark\nenergy to dark matter, initial conditions and the law of gravity. Our\nunderstanding of the Universe will be fundamentally altered in a\npost-DUNE era, with ESA's science programmes at the forefront of these\ndiscoveries.\nAs described above, the science goals of DUNE go far beyond the\nmeasurement of dark energy. It is a mission which:\n(i) measures both effects of dark energy (i.e. the expansion history\nof the Universe and the growth of structure) by using weak lensing as the\ncentral probe; (ii) places this high precision measurement of dark\nenergy within a broader framework of high precision cosmology by\nconstraining all sectors of the standard cosmology model (dark matter,\ninitial conditions and Einstein gravity); (iii) through a collection\nof unique legacy surveys is able to push the frontiers of the\nunderstanding of galaxy\nevolution and the physics of the local group; and finally (iv) is able\nto obtain information on some of the lowest-mass extrasolar planets\nknown to astronomy, which could include mirror Earths.\n\nDUNE has been selected jointly with SPACE (Cimatti et al. 
2008) in\nESA's Cosmic Vision programme for an assessment phase which led to\nthe Euclid merged concept.\n\n\begin{acknowledgements}\nWe thank CNES for support on an earlier version of the DUNE mission\nand EADS/Astrium, Alcatel/Alenia Space, as well as Kayser/Threde for\ntheir help in the preparation of the ESA proposal.\n\n\end{acknowledgements}\n\n\bibliographystyle{spbasic} \n\n", "meta": {"timestamp": "2008-07-24T11:01:20", "yymm": "0802", "arxiv_id": "0802.2522", "language": "en", "url": "https://arxiv.org/abs/0802.2522"}} {"text": "\section{Introduction}\nTwo dimensional (2D) materials like graphene, silicene and germanene are semimetals with zero gap \cite{w11,cta09}, and their charge carriers are massless fermions\cite{nltzzq12}. Graphene has been studied extensively because of its superior mechanical, optical and electronic properties \cite{ajyr19, kjna19, lkk19, lmhlz19, m18, qxz18, rilts19, sjhs19, thxwl20,ytycqw19, zxldxl, pky17,geh17,z16, mwh18}. Various dopings have been performed in graphene for new applications, such as sulfur doping for micro-supercapacitors\cite{csqmj19}, nitrogen-doped graphene quantum dots for photovoltaics\cite{hgrpgc19}, silicon nanoparticles embedded in n-doped few-layered graphene for lithium-ion batteries\cite{lyzsgc} and implanting germanium into graphene for single-atom catalysis applications\cite{tmbfbk18}. Theoretical and experimental investigations of graphene-like structures such as silicene and germanene have been carried out extensively \cite{vpqafa,loekve,dsbf12,wqdzkd17,cxhzlt17,ctghhg17}. Silicene and germanene have been grown on Au(111)\cite{cstfmg17}, Ag(111)\cite{jlscl16} and Ir(111)\cite{mwzdwl13}, which can encourage researchers to study them further. 
Due to its buckled structure, silicene has different physical properties compared to graphene, such as higher surface reactivity\cite{jlscl16} and a band gap tunable by an external electric field, which is highly favorable in nanoelectronic devices\cite{nltzzq12,dzf12}. However, the formation of imperfections during the synthesis of silicene is usually inevitable, which influences the magnetic and electronic properties of the material\cite{lwtwjl15}. There are some studies of doped atoms such as lithium, aluminum and phosphorus in silicene to achieve a wide variety of electronic and optical properties\cite{mmt17,dcmj15}. \nRecently, simulation and fabrication of 2D silicon-carbon compounds known as siligraphene (Si$_m$C$_n$) have received much attention due to their extraordinary electronic and optical properties. For example, SiC$_2$ siligraphene, which has been experimentally synthesized\cite{lzlxpl15}, is a promising anchoring material for lithium-sulfur batteries\cite{dlghll15}, a promising metal-free catalyst for the oxygen reduction reaction\cite{dlghll15}, and a novel donor material in excitonic solar cells\cite{zzw13}. Also, graphitic siligraphene g-SiC$_3$ in the presence of strain can be classified into different electrical phases, such as a semimetal or a semiconductor. g-SiC$_3$ shows semimetallic behavior under compressive strain up to 8\%, but it becomes a semiconductor with a direct band gap (1.62 eV) at 9\% compressive strain and a semiconductor with an indirect band gap (1.43 eV) at 10\% compressive strain \cite{dlghll15}. Moreover, g-SiC$_5$ has semimetallic properties and can be used as a gas sensor for air pollutants\cite{dwzhl17}. Furthermore, SiC$_7$ siligraphene has promising photovoltaic applications \cite{heakba19} and can be used as a high-capacity hydrogen storage material\cite{nhla18}. 
It shows superior structural, dynamical and thermal stability compared to other types of siligraphene and is a novel donor material with extraordinary sunlight absorption\cite{dzfhll16}. The structural and electronic properties of silicene-like SiX and XSi$_3$ (X = B, C, N, Al, P) honeycomb lattices have been investigated\cite{dw13}. Also, the planarity and non-planarity properties of g-SiC$_n$ and g-Si$_n$C (n = 3, 5, and 7) structures have been studied\cite{tllpz19}.\n\nThe excellent properties of siligraphene\cite{dzfhll16} motivated us to study CSi$_7$ and GeSi$_7$, in order to find a new approach to controlling the buckling and band gap of silicene and to obtain new electronic and optical properties. Here we call CSi$_7$ carbosilicene and GeSi$_7$ germasilicene. We choose carbon and germanium atoms for CSi$_7$ and GeSi$_7$, respectively, because these atoms, like silicon, have four valence electrons in their highest-energy orbitals. Using density functional theory, we show that both structures are stable but CSi$_7$ is more stable than GeSi$_7$. The carbon atom in CSi$_7$ decreases the buckling, while the germanium atom in GeSi$_7$ increases it. It is shown that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap\cite{plgkl20} but GeSi$_7$, similar to silicene, is a semimetal. Also, we investigate the effects of strain and show that for CSi$_7$ compressive strain can increase the band gap and tensile strain can decrease it. At sufficient tensile strain (>3.7\%), the band gap of CSi$_7$ becomes zero and thus the semiconducting properties of this material change to metallic ones. As a result, the band gap of CSi$_7$ can be tuned by strain and this material can be used in straintronic devices such as strain sensors and strain switches. For GeSi$_7$, strain does not have any significant effect. 
In contrast, GeSi$_7$ has a high dielectric constant and can be used as a 2D material with a high dielectric constant in advanced capacitors. Finally, we investigate the optical properties of these materials and find that the light absorption of both CSi$_7$ and GeSi$_7$ is significantly greater than that of silicene. Because of the high absorption of CSi$_7$ and GeSi$_7$, these materials can be considered good candidates for solar cell applications. It is worth mentioning that germasilicene, GeSi$_7$, is a new 2D material proposed and studied in this paper, while carbosilicene, CSi$_7$, has been proposed previously as a member of the siligraphene family but only its band structure has been studied\cite{tllpz19,plgkl19,plgkl20}.\nThe rest of the paper is organized as follows. In Sec. II, the method of calculations is introduced, and the results and discussion are given in Sec. III. Section IV contains a summary and conclusion.\n\n\section{Method of calculations}\n Density functional theory (DFT) calculations are performed using projector-augmented wave pseudopotentials \cite{b94} as implemented in the Quantum-ESPRESSO code\cite{gc09}. To describe the exchange-correlation functional, the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) is used\cite{pbe96}. After optimization, the optimum value of the cutoff energy is obtained as 80 Ry. Also, Brillouin-zone integrations are performed using the Monkhorst-Pack scheme\cite{mp76} and optimum reciprocal meshes of 12\u00d712\u00d71 are considered for the calculations. At first, the unit cells and atomic positions of both CSi$_7$ and GeSi$_7$ are optimized and then their electronic properties are determined by calculating the density of states and band structure. 
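The settings just described might translate into a Quantum ESPRESSO SCF input along the following lines. This is a hedged sketch, not the authors' actual input: the cell edges are the relaxed CSi$_7$ values reported in the next section, while the prefix, pseudopotential file names, convergence threshold and the omitted atomic positions are assumptions.

```python
# Sketch of a Quantum ESPRESSO pw.x SCF input reflecting the stated
# settings: PBE-GGA, 80 Ry wavefunction cutoff, 12x12x1 Monkhorst-Pack
# mesh. Pseudopotential names, prefix and conv_thr are hypothetical,
# and the ATOMIC_POSITIONS card is omitted for brevity.
qe_input = """&CONTROL
  calculation = 'scf'
  prefix      = 'CSi7'
  pseudo_dir  = './pseudo'
/
&SYSTEM
  ibrav   = 4        ! hexagonal cell
  a       = 7.49     ! Angstrom, relaxed |a| for CSi7
  c       = 12.86    ! vacuum spacing along z
  nat     = 8
  ntyp    = 2
  ecutwfc = 80.0     ! optimized cutoff from the text
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Si  28.086  Si.pbe.UPF
  C   12.011  C.pbe.UPF
K_POINTS automatic
  12 12 1 0 0 0
"""
```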
Moreover, their optical properties are determined by calculating the absorption and the imaginary and real parts of the dielectric constant.\n\n\section{Results and discussion}\n\subsection{Structural properties}\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.98\linewidth,clip=true]{Fig1.eps}\n\caption{(a) Top view of silicene and (b) Si$_8$ unit cells. (c) Side view of Si$_8$ unit cell. \n} \n\label{fig1}\n\end{figure}\n\nBy doubling the silicene unit cell [see Fig.~\ref{fig1}(a)] in the x and y directions, Si$_8$ has been constructed [see Fig.~\ref{fig1}(b)] in a hexagonal lattice (i.e., $ \alpha=\beta=90^{\circ},\gamma=120^{\circ}$). Physically, silicene and Si$_8$ have the same properties, because repeating either unit cell yields the same silicene monolayer. In this work, the Si$_8$ unit cell is considered because CSi$_7$ and GeSi$_7$ can be constructed by replacing a silicon atom with a carbon or a germanium atom. After relaxation, the bond length of Si$_8$ was $d=2.4 \;\AA$ [see Fig.~\ref{fig1}(a)], the lattice parameters were $|a|=|b|=7.56 \; \AA$ and $|c|=14.4 \;\AA$ [see Figs.~\ref{fig1}(b) and 1(c)] and the buckling parameter was $\Delta=0.44 \;\AA$ [see Fig.~\ref{fig1}(c)], in good agreement with previous works\cite{wwltxh,gzj12,zlyqzw16}. Here c is the distance chosen to make sure that there is no interaction between adjacent layers. \nTo construct the carbosilicene, CSi$_7$, unit cell, a silicon atom is replaced with a carbon atom as shown in Fig.~\ref{fig2}. Because of the structural symmetry of the CSi$_7$ monolayer (see Fig.~\ref{fig6}), the position of the impurity atom is not important, and our calculations indeed show the same ground state energy for all eight possible impurity positions. After relaxation, the optimum lattice parameters are obtained as $|a|= |b|=7.49\; \AA$ and $|c|= 12.86 \; \AA$ for the CSi$_7$ unit cell. Fig.~\ref{fig2} shows this structure before and after relaxation. 
For a more detailed explanation, we have labeled the atoms in this figure. It is observed that the Si-C bond length (i.e., $d_{2-4}=1.896 \; \AA$) is shorter than the Si-Si bond lengths (i.e., $d_{1-2}=2.317,\; d_{1-3}=2.217 \; \AA$) because of sp$^2$ hybridization. Also, unlike graphene, the hexagonal ring is not a regular hexagon due to the electronegativity difference between C and Si atoms\cite{dzfhll16}.\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.8\linewidth,clip=true]{Fig2.eps}\n\caption{Top view of CSi$_7$ unit cell (a) before and (b) after relaxation. Carbon atom is shown by yellow sphere and silicon atoms by blue spheres. \n} \n\label{fig2}\n\end{figure}\n\nFig.~\ref{fig3} shows the side view of the CSi$_7$ unit cell. After relaxation, the buckling parameter between atoms 1 and 3 ($\Delta_{1-3}$) is 0.1 $\AA$ whereas this parameter for atoms 2 and 4 ($\Delta_{2-4}$) is 0.39 $\AA$. So, CSi$_7$ has a structure with two different buckling parameters, and one can use carbon atoms to decrease the buckling parameter of silicene. Silicene has one buckling and two sublattices\cite{zyy15}, while carbosilicene has two bucklings and thus three sublattices, including one for carbon atoms and two others for silicon atoms.\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.98\linewidth,clip=true]{Fig3.eps}\n\caption{Side view of CSi$_7$ unit cell (a) before and (b) after relaxation.\n} \n\label{fig3}\n\end{figure}\n\nIf we replace a silicon atom with a germanium atom as shown in Fig.~\ref{fig4}, we obtain the germasilicene, GeSi$_7$, structure. As we can see in this figure, the optimized parameters are $|a|=|b|=7.8\; \AA$ and $|c|=11.98 \; \AA$, and the Si-Ge bond length and lattice constants are greater than those of Si-Si. 
Also, by comparing bond lengths and lattice parameters of GeSi$_7$ and CSi$_7$ structures, it is seen that the bond lengths and lattice parameters of GeSi$_7$ are significantly greater than those of CSi$_7$ which is due to the larger atomic number and thus atomic radius of germanium relative to the carbon\\cite{zyihm18}.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.8\\linewidth,clip=true]{Fig4.eps}\n\\caption{ Top view of GeSi$_7$ unit cell (a) before and (b) after relaxation. Here germanium atom is shown by purple color.\n} \n\\label{fig4}\n\\end{figure}\n\nThe buckling parameters of germasilicene structure are depicted in Fig. ~\\ref{fig5}. After relaxation, we find that the value of these parameters are $\\Delta_{2-4}=0.53\\; \\AA$ and $\\Delta_{1-3}=0.43 \\; \\AA$. Therefore, GeSi$_7$ like CSi$_7$ has a structure with two different buckling and the germanium impurity atom increases the buckling of silicene. Bond length values and other structural parameters after relaxation are shown in Table 1.\n\\begin{table*}[t]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline\n &$|a|=|b|$&\t$|c|$\t&$d_{1-2}$\t&$d_{2-4}$&\t$d_{1-3}$\t&$\\Delta_{2-4}$&\t$\\Delta_{1-3}$&\t$\\Delta_d$ \\\\\n \\hline\n Si$_8$\t& 7.65 & \t14.4 &\t2.4 &\t2.4\t& 2.4 &\t0.44 &\t0.44 &\t0 \\\\\n \\hline\n CSi$_7$ &\t7.49 &\t12.86 &\t2.317 &\t1.896 &\t2.217 &\t0.1\t& 0.39 &\t0.29\\\\\n \\hline\n GeSi$_7$ &\t7.8 &\t11.98 &\t2.287 &\t2.34 &\t2.297 &\t0.53 &\t0.43 &\t0.1\\\\\n \\hline\n \n \\end{tabular}\n \\caption{Optimum lattice parameters $|a|$, $|b|$ and $|c|$, bond lengths $d_{1-2}$, $d_{2-4}$ and \n$d_{1-3}$ and buckling parameters $\\Delta_{2-4}$, $\\Delta_{1-3}$ and $\\Delta_d$. 
All values are in Angstrom.\n}\n \label{tab1}\n\end{table*}\n\n\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.98\linewidth,clip=true]{Fig5.eps}\n\caption{ Side view of GeSi$_7$ unit cell (a) before and (b) after relaxation. \n} \n\label{fig5}\n\end{figure}\n\nWe now introduce a new parameter for buckling as \n\begin{equation}\n \Delta_d=|\Delta_{2-4}-\Delta_{1-3}|\n \label{eq1}\n\end{equation}\nwhich shows the difference between the two buckling parameters. The value of $\Delta_d$ for CSi$_7$ (i.e., 0.29 $\AA$) is greater than that for GeSi$_7$ (i.e., 0.1 $\AA$, see Table~\ref{tab1}), which means the carbon impurity atom has a greater impact than germanium on silicene buckling. This effect can be explained based on the electronegativity difference\cite{drsbf13}. The electronegativity on the Pauling scale is 2.55 \cite{ipl09,zhhkl20}, 1.9 \cite{gperdd20} and 2.01 \cite{mgyz19} for carbon, silicon, and germanium, respectively. Therefore, the electronegativity difference is 0.65 for CSi$_7$ and 0.11 for GeSi$_7$; the greater difference for CSi$_7$ leads to in-plane hybridized bonding and reduces the buckling in comparison to the other cases.\n\nFig.~\ref{fig6} shows the charge density of a monolayer of CSi$_7$ and GeSi$_7$. The charge density of a monolayer of Si is also shown in this figure for comparison [see Fig.~\ref{fig6}(a)]. The high charge density around the carbon and germanium impurity atoms [see Figs.~\ref{fig6}(b) and 6(c)] shows charge transfer from silicon atoms to the impurity atoms. 
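Quantitatively, Eq.~(\ref{eq1}) evaluated with the relaxed buckling values of Table~\ref{tab1} reproduces the $\Delta_d$ column (a minimal sketch; all values in Angstrom, taken directly from the table):

```python
# Delta_d of Eq. (1): absolute difference of the two buckling
# parameters, evaluated with the relaxed values from Table 1 (Angstrom).
def delta_d(d_24, d_13):
    """Buckling-difference parameter, Eq. (1)."""
    return abs(d_24 - d_13)

# (Delta_{2-4}, Delta_{1-3}) per structure, from Table 1.
buckling = {"Si8": (0.44, 0.44), "CSi7": (0.39, 0.10), "GeSi7": (0.53, 0.43)}
dd = {name: round(delta_d(*pair), 2) for name, pair in buckling.items()}
# dd == {"Si8": 0.0, "CSi7": 0.29, "GeSi7": 0.1}
```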
Also, the electron aggregation around the impurity atoms indicates ionic-covalent bonds in the CSi$_7$ and GeSi$_7$ structures because of the electronegativity difference.\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.98\linewidth,clip=true]{Fig6.eps}\n\caption{ Charge density of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$. \n} \n\label{fig6}\n\end{figure}\n\nNow, we calculate the cohesive and formation energies for these structures. The cohesive energy is -4.81 eV/atom and -4.32 eV/atom for CSi$_7$ and GeSi$_7$, respectively. The negative value of the cohesive energy for CSi$_7$ and GeSi$_7$ means that these structures will not decompose into their atoms. The more negative the cohesive energy, the more stable the structure, so CSi$_7$ is more stable than GeSi$_7$. Also, the calculated cohesive energy for silicene is -4.89 eV/atom, which is in good agreement with previous studies \cite{gperdd20,mgyz19} and shows that CSi$_7$ is a stable structure with a cohesive energy very close to that of silicene.\nOur calculations show that the formation energies for the CSi$_7$ and GeSi$_7$ structures are +0.16 eV/atom and -0.005 eV/atom, respectively. So, the formation of CSi$_7$ (GeSi$_7$) from its constituents is endothermic (exothermic) because of the positive (negative) value of the formation energy. On the other hand, the positive formation energy for CSi$_7$ represents a high stability of this structure, while the negative or nearly zero value for GeSi$_7$ is attributed mostly to the high reactivity related to silicene\cite{dw13}.\n\n\subsection{Electronic properties}\nTo investigate the electronic properties of CSi$_7$ and GeSi$_7$, we first compare the band structures of silicene, CSi$_7$ and GeSi$_7$ monolayers and show the results in Fig.~\ref{fig7}. As we can see in this figure, like graphene and silicene, GeSi$_7$ is a semimetal (or zero-gap semiconductor) with a Dirac cone at point $K$. This is because the $\pi$ and $\pi^*$ bands cross linearly at the Fermi energy $E_F$. 
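A linear crossing $E=\gamma k$ fixes the Fermi velocity through the slope of the band fit, $v_F=\gamma/\hbar$. A minimal sketch of the unit conversion follows; the slope value of 3.29 eV$\cdot\AA$ is back-computed here from the $v_F\approx 5\times10^5$ m/s scale quoted for silicene, so it is an assumption rather than a number reported by the authors.

```python
# Fermi velocity from the slope of a linear band fit E = gamma * k
# near the K point, v_F = gamma / hbar. The slope is assumed, chosen
# to reproduce the ~5e5 m/s scale quoted for silicene.
HBAR = 1.054571817e-34      # J*s
EV = 1.602176634e-19        # J per eV
ANGSTROM = 1e-10            # m

def fermi_velocity(gamma_eV_A):
    """Convert a band slope in eV*Angstrom to a Fermi velocity in m/s."""
    gamma_SI = gamma_eV_A * EV * ANGSTROM   # J*m
    return gamma_SI / HBAR

v_f = fermi_velocity(3.29)   # ~5.0e5 m/s
```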
These band structures indicate that the charge carriers in silicene and GeSi$_7$ behave like massless Dirac fermions\cite{zlyqzw16}. In contrast with GeSi$_7$, CSi$_7$ is a semiconductor with an indirect band gap. The value of its indirect band gap is 0.24 eV in the $K-\Gamma$ direction, which is significantly less than its direct band gap value (i.e., 0.5 eV in the $K-K$ direction).\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.7\linewidth,clip=true]{Fig7.eps}\n\caption{ Band structure of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$. \n} \n\label{fig7}\n\end{figure}\n\nFor a better comparison, an enlarged band structure of silicene, CSi$_7$ and GeSi$_7$ is shown in Fig.~\ref{fig8}. It is seen that, at point $K$, silicene and GeSi$_7$ have similar band structures with zero band gap, whereas CSi$_7$ has a band gap. In the Dirac cone of graphene and silicene, the $\pi$ and $\pi^*$ bands are made from the same atoms\cite{dw13}, but these bands in GeSi$_7$ are made from two different atoms. To determine the Fermi velocity, $v_F$, the graphs for silicene and GeSi$_7$ must be fitted linearly near the Fermi level using the equation $E_{k+K}=\gamma k$. Then the Fermi velocity is given by $v_F=\gamma/ \hbar$. Our calculations show that $v_F$ is $5\times10^5$ m/s for silicene (in good agreement with previous works\cite{dw13,wd13}) and $4.8\times10^5$ m/s for GeSi$_7$. A comparison between the Fermi velocities in silicene and GeSi$_7$ indicates that the Ge atoms in GeSi$_7$ do not have a significant effect on the Fermi velocity. The total density of states (DOS) is also shown in Fig.~\ref{fig8}. It is observed that the total DOS is in good agreement with the band structure.\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.9\linewidth,clip=true]{Fig8.eps}\n\caption{ Enlarged band structure and total DOS of silicene, CSi$_7$ and GeSi$_7$. 
\n} \n\\label{fig8}\n\\end{figure}\n\n\nWe now investigate the effect of strain on the band structure of CSi$_7$ and GeSi$_7$ and the results are shown in Fig. ~\\ref{fig9}. As we can see in Figs. ~\\ref{fig9}(a) and ~\\ref{fig9}(b), compressive strain has important effects on band structure of CSi$_7$ but it has no significant effect on GeSi$_7$ [compare these figures with Figs. ~\\ref{fig7}(b) and ~\\ref{fig7}(c)]. In the presence of compressive strain for CSi$_7$, both direct and indirect band gaps increase, respectively from 0.5 eV and 0.24 eV to 0.52 eV and 0.44 eV. But for GeSi$_7$, the zero-band gap remains unchanged and compressive strain cannot open any band gap. Fig. ~\\ref{fig9}(c) shows the direct and indirect band gap variations of CSi$_7$ versus the both compressive and tensile strains. It is observed that both direct and indirect band gaps increase with increasing the compressive strain, while they decrease with increasing the tensile strain. The variation of band gaps versus strain S is nearly linear and could be formulated by $E_g=-0.017S+0.447$ for direct band gap and $E_g=-0.059 S+0.227$ for indirect one. Under strain and without strain, the direct band gap has significantly larger values relative to indirect band gap, thus it has no important effect on electronic transport properties in CSi$_7$. In contrast with GeSi$_7$, the strain is an important factor for tuning of band gap in CSi$_7$. For example, when the tensile strain increases above the band gap of CSi$_7$ disappears and this 2D material becomes a metal [see Fig. ~\\ref{fig9}(c)]. This property of CSi$_7$ is important in straintronic devices such as strain switches and strain sensors.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=.98\\linewidth,clip=true]{Fig9.eps}\n\\caption{ Band structure of (a) CSi$_7$ and (b) GeSi$_7$ under compressive strain with value -3$\\%$. 
(c) Energy gap variation of CSi$_7$ versus both compressive and tensile strains.\n} \n\label{fig9}\n\end{figure}\n\n\subsection{Optical properties}\nThe complex dielectric function $\epsilon=\epsilon_r+i\epsilon_i$ can be calculated for both polarizations of light: (i) parallel (x direction) and (ii) perpendicular (z direction), where $\epsilon_r$ is the real part and $\epsilon_i$ is the imaginary part of the dielectric function. This function is an important parameter for the calculation of the optical properties of materials. For instance, the real and imaginary parts of the refractive index (i.e., $n=n_r+in_i$) can be written as\cite{w}\n\n\begin{equation}\n n_r=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}+\epsilon_r}{2}}\n \label{eq2}\n\end{equation}\nand\n\begin{equation}\n n_i=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}-\epsilon_r}{2}}\n \label{eq3}\n\end{equation}\nrespectively. The absorption coefficient $\alpha$ is given by\cite{w}\n\begin{equation}\n \alpha=\frac{2\omega n_i}{C}\n \label{eq4}\n\end{equation}\nwhere C is the speed of light in vacuum. The real parts of the dielectric function of CSi$_7$, GeSi$_7$ and silicene are depicted in Fig.~\ref{fig10} for the x and z directions. This figure shows that $\epsilon_r$ is anisotropic, because the graphs of $\epsilon_r$ are not similar for the two directions. The root of the real part (where $\epsilon_r=0$) represents the plasma energy (frequency), which for these materials is located at $4.3\; eV \;(1.04\;PHz)$ for the x-direction. It can be seen from Figs.~\ref{fig10}(a) and ~\ref{fig10}(b) that the values of the static dielectric constant (the value of the real part of the dielectric function at zero frequency or zero energy) in the x-direction are 12.3 for silicene and CSi$_7$ and 30 for GeSi$_7$, and in the z-direction are 2.4, 2 and 2.9 for silicene, CSi$_7$ and GeSi$_7$, respectively. Thus, for both directions GeSi$_7$ has the biggest static dielectric constant. 
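Equations (\ref{eq2})-(\ref{eq4}) can be implemented directly. The sketch below uses illustrative $(\epsilon_r, \epsilon_i)$ values rather than the computed spectra, and checks the transparent-medium limit, where $n_r=\sqrt{\epsilon_r}$ and the absorption vanishes:

```python
import math

# Refractive index and absorption coefficient from the real and
# imaginary parts of the dielectric function, Eqs. (2)-(4).
C_LIGHT = 2.99792458e8   # speed of light in vacuum, m/s

def refractive_index(eps_r, eps_i):
    mod = math.hypot(eps_r, eps_i)          # sqrt(eps_r^2 + eps_i^2)
    n_r = math.sqrt((mod + eps_r) / 2.0)    # Eq. (2)
    n_i = math.sqrt((mod - eps_r) / 2.0)    # Eq. (3)
    return n_r, n_i

def absorption(omega, eps_r, eps_i):
    _, n_i = refractive_index(eps_r, eps_i)
    return 2.0 * omega * n_i / C_LIGHT      # Eq. (4)

# Transparent-medium limit: eps_i = 0, eps_r = 4 gives n = 2 + 0i
# and zero absorption; (n_r + i n_i)^2 = eps_r + i eps_i in general.
n_r, n_i = refractive_index(4.0, 0.0)
```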
Also, the static dielectric constant of GeSi$_7$ is significantly greater than that of graphene (1.25 for the z-direction and 7.6 for the x-direction\cite{rdj14}). According to the energy density equation of capacitors (i.e., $u=\epsilon\epsilon_0 E^2/2$), by increasing the dielectric constant $\epsilon$, the energy density u increases. Here, E is the electric field inside the capacitor. So, materials with a high dielectric constant have attracted a lot of attention because of their potential applications in transistor gates, non-volatile ferroelectric memories and integral capacitors\cite{tic06}. Among the 2D materials, graphene has been used for electrochemical capacitors\cite{cls13} and supercapacitors\cite{vrsgr08}. Since GeSi$_7$ has a high dielectric constant, it can be used as a 2D material with a high-performance dielectric in advanced capacitors. \n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=.7\linewidth,clip=true]{Fig10.eps}\n\caption{ Comparison of real part of dielectric function for CSi$_7$ and GeSi$_7$ (a) in x direction and (b) in z direction. The graphs of silicene are also shown in this figure for comparison.\n} \n\label{fig10}\n\end{figure}\n\nFig.~\ref{fig11} shows the absorption coefficient $\alpha$ for CSi$_7$ and GeSi$_7$. The absorption coefficient for silicene is also shown in this figure for comparison and is in agreement with previous works\cite{hkb18,cjlmty16}. There are two peaks for CSi$_7$: one located at 1.18 eV (infrared region) and the other at 1.6 eV. The peak for silicene (at 1.83 eV) lies in the visible region (1.8-3.1 eV). So, the carbon atom enhances the absorption and shifts its edge from the visible region to the infrared region, because it breaks the symmetry of the silicene structure and opens a narrow energy band gap in the silicene band structure. For GeSi$_7$ there is an absorption peak in the visible region (at 2.16 eV). 
Also, the peak height of GeSi$_7$ is larger than that of silicene and CSi$_7$. The sunlight spectrum includes different wavelengths, and the absorption of each part has a special application. For example, ultraviolet-visible absorption spectrophotometry and its analysis are used in pharmaceutical analysis, clinical chemistry, environmental analysis and inorganic analysis\cite{rop88}. Also, the near infrared ($\lambda$= 800 to 1100 nm or E = 1.55 eV to 1.13 eV) and infrared ($\lambda$ > 1100 nm or E < 1.13 eV) regions are used for solar cells\cite{wrmss,sgzlgp20}, latent fingerprint development\cite{bsckm19}, brain stimulation and imaging\cite{cwcgy20}, photothermal therapy\cite{hhwl19}, photocatalysis\cite{qzzwz10} and photobiomodulation\cite{whwlh17}.\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.75\linewidth,clip=true]{Fig11.eps}\n\caption{ Absorption coefficient for silicene, CSi$_7$ and GeSi$_7$.\n} \n\label{fig11}\n\end{figure}\n\nOn the other hand, the sunlight radiation received by Earth comprises 5$\%$ ultraviolet, 45$\%$ infrared and 50$\%$ visible light \cite{hs11}. So, we investigate the area under the absorption curve of CSi$_7$ and GeSi$_7$ in the visible (from 1.8 to 3.1 eV), near infrared (from 1.13 to 1.55 eV) and infrared (<1.13 eV) regions. Fig.~\ref{fig12} shows this area for silicene, CSi$_7$ and GeSi$_7$ in the infrared, near infrared and visible spectrum regions. As we can see in this figure, the absorption of CSi$_7$ in all three spectrum regions and its total absorption are significantly greater than those of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near infrared region, but the total absorption of GeSi$_7$ is significantly greater than that of silicene. For comparison, we also calculate the absorption coefficient in the infrared region for siligraphene SiC$_7$, a material studied recently\cite{dzfhll16}. 
The absorption for siligraphene in the infrared region equals 2.7, which shows that CSi$_7$ (with absorption 8.78) and GeSi$_7$ (with absorption 6.31) have more than twice the absorption of siligraphene in the infrared region.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.9\\linewidth,clip=true]{Fig12.eps}\n\\caption{ Areas under the absorption curve for silicene, CSi$_7$ and GeSi$_7$ in the infrared, near infrared and visible spectral regions.\n} \n\\label{fig12}\n\\end{figure}\n\n\\subsection{Summary and conclusion}\nWe studied the structural, electronic and optical properties of the CSi$_7$ and GeSi$_7$ structures using density functional theory with the Quantum ESPRESSO code. We showed that the carbon atom in CSi$_7$ decreases the buckling, whereas the germanium atom in GeSi$_7$ increases it, which promises a new way to control the buckling in silicene-like structures. Both structures are stable, but CSi$_7$ is more stable than GeSi$_7$. Band structure and DOS plots show that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap, whereas GeSi$_7$, like silicene, is a semimetal. Strain does not have any significant effect on GeSi$_7$, but for CSi$_7$ compressive strain can increase the band gap and tensile strain can decrease it. At sufficient tensile strain ($> 3.7 \\%$), the band gap becomes zero or negative and the semiconducting CSi$_7$ becomes metallic. As a result, the band gap of CSi$_7$ can be tuned and controlled by strain, and this material can be used in straintronic devices such as strain sensors and strain switches. Furthermore, we investigated the optical properties of CSi$_7$ and GeSi$_7$ such as the static dielectric constant and light absorption. GeSi$_7$ has a high dielectric constant relative to CSi$_7$, silicene and graphene and can be used as a 2D-material with a high-performance dielectric in advanced capacitors. 
The light absorption of CSi$_7$ in the near infrared, infrared and visible regions, as well as its total absorption, is significantly greater than that of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near infrared region, but the total absorption of GeSi$_7$ is significantly greater than the total absorption of silicene. Because of their high absorption, CSi$_7$ and GeSi$_7$ can be considered promising candidates for solar cell applications.\n", "meta": {"timestamp": "2022-01-24T02:09:19", "yymm": "2201", "arxiv_id": "2201.08590", "language": "en", "url": "https://arxiv.org/abs/2201.08590"}} {"text": "\\section{Introduction}\n\nDe Sitter (dS) spacetime is among the most popular backgrounds in\ngravitational physics. There are several reasons for this. First of all, dS\nspacetime is the maximally symmetric solution of Einstein's equation with a\npositive cosmological constant. Due to its high symmetry, numerous physical\nproblems are exactly solvable on this background. A better understanding of\nphysical effects in this background could serve as a handle to deal with\nmore complicated geometries. De Sitter spacetime plays an important role in\nmost inflationary models, where an approximately dS spacetime is employed to\nsolve a number of problems in standard cosmology \\cite{Lind90}. More\nrecently, astronomical observations of high redshift supernovae, galaxy\nclusters and the cosmic microwave background \\cite{Ries07} indicate that at the\npresent epoch the universe is accelerating and can be well approximated by a\nworld with a positive cosmological constant. If the universe were to\naccelerate indefinitely, standard cosmology would lead to an asymptotic\ndS universe. In addition to the above, an interesting topic which has\nreceived increasing attention is related to string-theoretical models of dS\nspacetime and inflation. 
Recently, a number of constructions of metastable dS\nvacua within the framework of string theory have been discussed (see, for\ninstance, \\cite{Kach03,Silv07} and references therein).\n\nThere is no reason to believe that the version of dS spacetime which may\nemerge from string theory will necessarily be the most familiar version\nwith symmetry group $O(1,4)$, and there are many different topological spaces\nwhich can accept the dS metric locally. There are many reasons to expect\nthat in string theory the most natural topology for the universe is that of\na flat compact three-manifold \\cite{McIn04}. In particular, in Ref.\n\\cite{Lind04} it was argued that from an inflationary point of view universes\nwith compact spatial dimensions, under certain conditions, should be\nconsidered a rule rather than an exception. Models of a compact universe\nwith nontrivial topology may play an important role by providing proper\ninitial conditions for inflation (for the cosmological consequences of\nnontrivial topology and observational bounds on the size of compactified\ndimensions see, for example, \\cite{Lach95}). The quantum creation of a\nuniverse with toroidal spatial topology is discussed in \\cite{Zeld84} and\nin references \\cite{Gonc85} within the framework of various supergravity\ntheories. The compactification of spatial dimensions leads to a\nmodification of the spectrum of vacuum fluctuations and, as a result, to\nCasimir-type contributions to the vacuum expectation values of physical\nobservables (for the topological Casimir effect and its role in cosmology\nsee \\cite{Most97,Bord01,Eliz06} and references therein). The effect of the\ncompactification of a single spatial dimension in dS spacetime (topology\n$\\mathrm{R}^{D-1}\\times \\mathrm{S}^{1}$) on the properties of the quantum\nvacuum for a scalar field with general curvature coupling parameter and with\na periodicity condition along the compactified dimension is investigated in\nRef. 
\\cite{Saha07} (for quantum effects in braneworld models with dS spaces\nsee, for instance, \\cite{dSbrane}).\n\nIn view of the above mentioned importance of toroidally compactified dS\nspacetimes, in the present paper we consider a general class of\ncompactifications having the spatial topology $\\mathrm{R}^{p}\\times (\\mathrm{S}^{1})^{q}$, $p+q=D$. This geometry can be used to describe two types of\nmodels. The first one has $p=3$, $q\\geqslant 1$, and corresponds to a\nuniverse with Kaluza-Klein type extra dimensions. As will be shown in\nthe present work, the presence of extra dimensions generates an additional\ngravitational source in the cosmological equations which is of barotropic\ntype at late stages of the cosmological evolution. For the second model $D=3$,\nand the results given below describe how the properties of a universe with\ndS geometry are changed by one-loop quantum effects induced by the\ncompactness of the spatial dimensions. In quantum field theory on curved\nbackgrounds, among the important quantities describing the local properties\nof a quantum field and quantum back-reaction effects are the expectation\nvalues of the field squared and the energy-momentum tensor for a given\nquantum state. In particular, the vacuum expectation values of these\nquantities are of special interest. In order to evaluate these expectation\nvalues, we first construct the corresponding positive frequency Wightman\nfunction. Applying the Abel-Plana summation formula to the mode sum, we\npresent this function as the sum of the Wightman function for the topology\n$\\mathrm{R}^{p+1}\\times (\\mathrm{S}^{1})^{q-1}$ plus an additional term\ninduced by the compactness of the $(p+1)$th dimension. The latter is finite\nin the coincidence limit and can be directly used for the evaluation of the\ncorresponding parts in the expectation values of the field squared and the\nenergy-momentum tensor. 
In this way the renormalization of these quantities\nis reduced to the renormalization of the corresponding quantities in\nuncompactified dS spacetime. Note that for a scalar field on a dS\nbackground the renormalized vacuum expectation values of the field\nsquared and the energy-momentum tensor are investigated in Refs.\n\\cite{Cand75,Dowk76,Bunc78} by using various regularization schemes (see also\n\\cite{Birr82}). The corresponding effects upon phase transitions in an\nexpanding universe are discussed in \\cite{Vile82,Alle83}.\n\nThe paper is organized as follows. In the next section we consider the\npositive frequency Wightman function for dS spacetime with the topology\n$\\mathrm{R}^{p}\\times (\\mathrm{S}^{1})^{q}$. In sections \\ref{sec:vevPhi2} and \\ref{sec:vevEMT2} we use the formula for the Wightman function to\nevaluate the vacuum expectation values of the field squared and the\nenergy-momentum tensor. The asymptotic behavior of these quantities is\ninvestigated in the early and late stages of the cosmological evolution. The\ncase of a twisted scalar field with antiperiodic boundary conditions is\nconsidered in section \\ref{sec:Twisted}. The main results of the paper are\nsummarized in section \\ref{sec:Conc}.\n\n\\section{Wightman function in de Sitter spacetime with toroidally\ncompactified dimensions}\n\n\\label{sec:WF}\n\nWe consider a free massive scalar field with curvature coupling parameter\n$\\xi $ on the background of $(D+1)$-dimensional de Sitter spacetime ($\\mathrm{dS}_{D+1}$) generated by a positive cosmological constant $\\Lambda $. The field\nequation has the form%\n\\begin{equation}\n\\left( \\nabla _{l}\\nabla ^{l}+m^{2}+\\xi R\\right) \\varphi =0, \\label{fieldeq}\n\\end{equation}%\nwhere $R=2(D+1)\\Lambda /(D-1)$ is the Ricci scalar for $\\mathrm{dS}_{D+1}$\nand $\\xi $ is the curvature coupling parameter. 
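As a minimal sanity check of the curvature entering the field equation (an illustrative sketch, not part of the paper), the Ricci scalar formula above can be evaluated exactly:

```python
from fractions import Fraction

def ricci_scalar_ds(D, lam):
    """Ricci scalar R = 2*(D+1)*Lambda/(D-1) of (D+1)-dimensional dS spacetime."""
    return Fraction(2 * (D + 1), D - 1) * lam

# Sanity check: for D = 3 (ordinary four-dimensional de Sitter space)
# the well-known value R = 4*Lambda is recovered.
print(ricci_scalar_ds(3, Fraction(1)))  # 4
```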
The special cases $\\xi =0$\nand $\\xi =\\xi _{D}\\equiv (D-1)/4D$ correspond to minimally and conformally\ncoupled fields, respectively. The importance of these special cases stems\nfrom the fact that in the massless limit the corresponding fields mimic the\nbehavior of gravitons and photons. We write the line element for $\\mathrm{dS}_{D+1}$ in the planar (inflationary) coordinates most appropriate for\ncosmological applications:%\n\\begin{equation}\nds^{2}=dt^{2}-e^{2t/\\alpha }\\sum_{i=1}^{D}(dz^{i})^{2}, \\label{ds2deSit}\n\\end{equation}%\nwhere the parameter $\\alpha $ is related to the cosmological constant by the\nformula%\n\\begin{equation}\n\\alpha ^{2}=\\frac{D(D-1)}{2\\Lambda }. \\label{alfa}\n\\end{equation}%\nBelow, in addition to the synchronous time coordinate $t$, we will also use\nthe conformal time $\\tau $, in terms of which the line element takes the\nconformally flat form:%\n\\begin{equation}\nds^{2}=(\\alpha /\\tau )^{2}[d\\tau ^{2}-\\sum_{i=1}^{D}(dz^{i})^{2}],\\;\\tau\n=-\\alpha e^{-t/\\alpha },\\;-\\infty <\\tau <0. 
\\label{ds2Dd}\n\\end{equation}%\nWe assume that the spatial coordinates $z^{l}$, $l=p+1,\\ldots ,D$, are\ncompactified to $\\mathrm{S}^{1}$ of the length $L_{l}$: $0\\leqslant\nz^{l}\\leqslant L_{l}$, and for the other coordinates we have $-\\infty\n \\ 0 \\hspace*{15pt} \\quad \\hbox{for all} \\quad j \\in V \\vspace*{3pt} \\\\\n \\label{eq:uniform} \\rho_j \\ = \\ F^{-1} \\quad \\hbox{for all} \\quad j \\in V\n\\end{align}\n where~$F := \\card V$ refers to the total number of opinions.\n Key quantities to understand the long-term behavior of the system are the radius and the diameter of the opinion graph defined respectively\n as the minimum and maximum eccentricity of any vertex:\n $$ \\begin{array}{rclcl}\n \\mathbf{r} & := & \\min_{i \\in V} \\ \\max_{j \\in V} \\ d (i, j) & = & \\hbox{the {\\bf radius} of the graph~$\\Gamma$} \\vspace*{3pt} \\\\\n \\mathbf{d} & := & \\max_{i \\in V} \\ \\max_{j \\in V} \\ d (i, j) & = & \\hbox{the {\\bf diameter} of the graph~$\\Gamma$}. \\end{array} $$\n To state our first theorem, we also introduce the subset\n\\begin{equation}\n\\label{eq:center}\n C (\\Gamma, \\tau) \\ := \\ \\{i \\in V : d (i, j) \\leq \\tau \\ \\hbox{for all} \\ j \\in V \\}\n\\end{equation}\n that we shall call the~{\\bf $\\tau$-center} of the opinion graph.\n The next result states that, whenever the confidence threshold is at least equal to the radius of the opinion graph, the infinite one-dimensional\n system fluctuates and clusters while the probability that the finite system reaches ultimately a consensus, i.e., fixates in a configuration where\n all the individuals share the same opinion, is bounded from below by a positive constant that does not depend on the size of the spatial structure.\n Here, infinite one-dimensional means that the spatial structure is the graph with vertex set~$\\mathbb{Z}$ and where each vertex is connected to its\n two nearest neighbors.\n\\begin{theorem} --\n\\label{th:fluctuation}\n Assume~\\eqref{eq:product}. 
Then,\n\\begin{enumerate}\n \\item[a.] the process on~$\\mathbb{Z}$ fluctuates whenever\n \\begin{equation}\n \\label{eq:fluctuation}\n d (i, j) \\leq \\tau \\quad \\hbox{for all} \\quad (i, j) \\in V_1 \\times V_2 \\quad \\hbox{for some $V$-partition~$\\{V_1, V_2 \\}$}.\n \\end{equation}\n\\end{enumerate}\n Assume in addition that~$\\mathbf{r} \\leq \\tau$. Then,\n\\begin{enumerate}\n \\item[b.] the process on~$\\mathbb{Z}$ clusters and \\vspace*{3pt}\n \\item[c.] the probability of consensus on any finite connected graph satisfies\n $$ \\begin{array}{l} P \\,(\\eta_t \\equiv \\hbox{constant for some} \\ t > 0) \\ \\geq \\ \\rho_{\\cent} := \\sum_{j \\in C (\\Gamma, \\tau)} \\,\\rho_j \\ > \\ 0. \\end{array} $$\n\\end{enumerate}\n\\end{theorem}\n We will show that the~$\\tau$-center is nonempty if and only if the threshold is at least equal to the radius, so the\n probability of consensus in the last part is indeed positive.\n In fact, except when the threshold is at least equal to the diameter, in which case all three conclusions of the theorem turn out to be trivial,\n when the threshold is at least equal to the radius, both the~$\\tau$-center and its complement are nonempty, and therefore form a partition\n that satisfies~\\eqref{eq:fluctuation}.\n In particular, fluctuation also holds when the radius is not more than the threshold.\n We also point out that the last part of the theorem implies that the average domain length in the final absorbing state scales like the population\n size, namely~$\\card \\mathscr V$.\n This result applies in particular to the constrained voter model, where the opinion graph is a path with three vertices interpreted as leftists,\n centrists and rightists, thus contradicting the conjecture on domain length scaling in~\\cite{vazquez_krapivsky_redner_2003}.\n\n\\indent We now seek sufficient conditions for fixation of the infinite one-dimensional system, beginning with general opinion graphs.\n At least for the process starting
from the uniform product measure, these conditions can be expressed using\n $$ N (\\Gamma, s) \\ := \\ \\card \\{(i, j) \\in V \\times V : d (i, j) = s \\} \\quad \\hbox{for} \\quad s = 1, 2, \\ldots, \\mathbf{d}, $$\n which is the number of pairs of opinions at opinion distance~$s$ of each other.\n In the statement of the next theorem, the function~$\\ceil{\\,\\cdot \\,}$ refers to the ceiling function.\n\\begin{theorem} --\n\\label{th:fixation}\n For the opinion model on~$\\mathbb{Z}$, fixation occurs\n\\begin{enumerate}\n \\item[a.] when~\\eqref{eq:uniform} holds and\n \\begin{equation}\n \\label{eq:th-fixation}\n \\begin{array}{l} S (\\Gamma, \\tau) \\ := \\ \\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,N (\\Gamma, s)) \\ > \\ 0, \\end{array}\n \\end{equation}\n \\item[b.] for some initial distributions~\\eqref{eq:product} when~$\\mathbf{d} > 2 \\tau$.\n\\end{enumerate}\n\\end{theorem}\n Combining Theorems~\\ref{th:fluctuation}.a and~\\ref{th:fixation}.b shows that these two results are sharp when~$\\mathbf{d} = 2 \\mathbf{r}$,\n which holds for opinion graphs such as paths and stars:\n for such graphs, the one-dimensional system fluctuates starting from any initial distribution~\\eqref{eq:product}\n if and only if~$\\mathbf{r} \\leq \\tau$.\n\n\\indent Our last theorem, which is also the most challenging result of this paper, gives a significant improvement of the previous condition for fixation for {\\bf distance-regular} opinion graphs.\n This class of graphs is defined mathematically as follows: let\n $$ \\Gamma_s (i) \\ := \\ \\{j \\in V : d (i, j) = s \\} \\quad \\hbox{for} \\quad s = 0, 1, \\ldots, \\mathbf{d} $$\n be the {\\bf distance partition} of the vertex set~$V$ for some~$i \\in V$.\n Then, the opinion graph is said to be a distance-regular graph when the so-called {\\bf intersection numbers}\n\\begin{equation}\n\\label{eq:dist-reg-1}\n \\begin{array}{rrl}\n N (\\Gamma, (i_-, s_-), (i_+, s_+)) & := & \\card (\\Gamma_{s_-} (i_-) 
\\cap \\Gamma_{s_+} (i_+)) \\vspace*{3pt} \\\\\n & = & \\card \\{j \\in V : d (i_-, j) = s_- \\ \\hbox{and} \\ d (i_+, j) = s_+ \\} \\vspace*{3pt} \\\\\n & = & f (s_-, s_+, d (i_-, i_+)) \\end{array}\n\\end{equation}\n only depend on the distance~$d (i_-, i_+)$ but not on the particular choice of~$i_-$ and~$i_+$.\n This implies that, for distance-regular opinion graphs, the number of vertices\n $$ N (\\Gamma, (i, s)) \\ := \\ \\card (\\Gamma_s (i)) \\ = \\ f (s, s, 0) \\ =: \\ h (s) $$\n does not depend on vertex~$i$.\n To state our last theorem, we let\n $$ \\begin{array}{l} \\mathbf W (k) \\ := \\ - 1 + \\sum_{1 < n \\leq k} \\,\\sum_{n \\leq m \\leq \\ceil{\\mathbf{d} / \\tau}} \\,(q_n \\,q_{n + 1} \\cdots q_{m - 1}) / (p_n \\,p_{n + 1} \\cdots p_m) \\end{array} $$\n where by convention an empty sum is equal to zero and an empty product is equal to one, and where the coefficients~$p_n$ and~$q_n$ are defined in terms of the intersection numbers as\n $$ \\begin{array}{rcl}\n p_n & := & \\max \\,\\{\\sum_{s : \\ceil{s / \\tau} = n - 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\} \\vspace*{3pt} \\\\\n q_n & := & \\,\\min \\,\\{\\sum_{s : \\ceil{s / \\tau} = n + 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\}. \\end{array} $$\n Then, we have the following sufficient condition for fixation.\n\\begin{theorem} --\n\\label{th:dist-reg}\n Assume~\\eqref{eq:uniform} and~\\eqref{eq:dist-reg-1}.\n Then, the process on~$\\mathbb{Z}$ fixates when\n\\begin{equation}\n\\label{eq:th-dist-reg}\n \\begin{array}{l} S_{\\reg} (\\Gamma, \\tau) \\ := \\ \\sum_{k > 0} \\,(\\mathbf W (k) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,h (s)) \\ > \\ 0. 
\\end{array}\n\\end{equation}\n\\end{theorem}\n To understand the coefficients~$p_n$ and~$q_n$, we note that, letting~$i_-$ and~$j$ be two opinions at opinion distance~$s_-$ of each other, we have the following\n interpretation:\n $$ \\begin{array}{rcl}\n f (s_-, s_+, s) / h (s_+) & = & \\hbox{probability that an opinion~$i_+$ chosen uniformly} \\vspace*{0pt} \\\\\n & & \\hbox{at random among the opinions at distance~$s_+$ from} \\\\\n & & \\hbox{opinion~$j$ is at distance~$s$ from opinion~$i_-$}. \\end{array} $$\n\n\n\\noindent{\\bf Outline of the proofs} --\n The lower bound for the probability of consensus on finite connected graphs follows from the optional stopping theorem after proving that\n the process that keeps track of the number of supporters of opinions belonging to the~$\\tau$-center is a martingale.\n The analysis of the infinite system is more challenging.\n The first key to all our proofs is to use the formal machinery introduced in~\\cite{lanchier_2012, lanchier_scarlatos_2013, lanchier_schweinsberg_2012}\n that consists in keeping track of the disagreements along the edges of the spatial structure.\n This technique has also been used in~\\cite{lanchier_moisson_2014, lanchier_scarlatos_2014} to study related models.\n In the context of our general opinion model, we put a pile of~$s$ particles on edges that connect individuals who are at opinion distance~$s$\n of each other, i.e., we set\n $$ \\xi_t ((x, x + 1)) \\ := \\ d (\\eta_t (x), \\eta_t (x + 1)) \\quad \\hbox{for all} \\quad x \\in \\mathbb{Z}. 
$$\n The definition of the confidence threshold implies that piles with at most~$\\tau$ particles, which we call active, evolve\n according to symmetric random walks, while larger piles, which we call frozen, are static.\n In addition, the jump of an active pile onto another pile results in part of the particles being annihilated.\n The main idea to prove fluctuation is to show that, after identifying opinions that belong to the same member of the\n partition~\\eqref{eq:fluctuation}, the process reduces to the voter model, and to use the fact that the one-dimensional voter model\n fluctuates according to~\\cite{arratia_1983}.\n Fluctuation, together with the stronger assumption~$\\mathbf{r} \\leq \\tau$, implies that the frozen piles, and ultimately all the piles\n of particles, go extinct, which is equivalent to clustering of the opinion model.\n\n\\indent In contrast, fixation occurs when the frozen piles have a positive probability of never being reduced, which is more difficult to establish.\n To briefly explain our approach to proving fixation, we say that the pile at~$(x, x + 1)$ is of order~$k$ when
$$\n To begin with, we use a construction due to~\\cite{bramson_griffeath_1989} to obtain an implicit condition for fixation in terms\n of the initial number of piles of any given order in a large interval.\n Large deviation estimates for the number of such piles are then proved and used to turn this implicit condition into the explicit\n condition~\\eqref{eq:th-fixation}.\n To derive this condition, we use that at least~$k - 1$ active piles must jump onto a pile initially of order~$k > 1$ to turn this pile\n into an active pile.\n Condition~\\eqref{eq:th-fixation} is obtained assuming the worst case scenario when the number of particles that annihilate is maximal.\n To show the improved condition for fixation~\\eqref{eq:th-dist-reg} for distance-regular opinion graphs, we use the same approach but\n count more carefully the number of annihilating events.\n First, we use duality-like techniques to prove that, when the opinion graph is distance-regular, the system of piles becomes Markov.\n This is used to prove that the jump of an active pile onto a pile of order~$n > 1$ reduces/increases its order with respective\n probabilities at most~$p_n$ and at least~$q_n$.\n This implies that the number of active piles that must jump onto a pile initially of order~$k > 1$ to turn it into an active pile\n is stochastically larger than the first hitting time to state~1 of a certain discrete-time birth and death process.\n This hitting time is equal in distribution to\n $$ \\begin{array}{l} \\sum_{1 < n \\leq k} \\,\\sum_{n \\leq m \\leq \\ceil{\\mathbf{d} / \\tau}} \\,(q_n \\,q_{n + 1} \\cdots q_{m - 1}) / (p_n \\,p_{n + 1} \\cdots p_m) \\ = \\ 1 + \\mathbf W (k). \\end{array} $$\n The probabilities~$p_n$ and~$q_n$ are respectively the death parameter and the birth parameter of the discrete-time birth and death\n process while the integer~$\\ceil{\\mathbf{d} / \\tau}$ is the number of states of this process, which is also the maximum order\n of a pile. 
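Before turning to concrete opinion graphs, the quantities $\mathbf{r}$, $\mathbf{d}$ and the $\tau$-center defined in the introduction can be computed for any finite opinion graph by breadth-first search; a minimal sketch (illustrative code, not part of the paper's formalism):

```python
from collections import deque

def eccentricities(vertices, edges):
    """Return {v: max_j d(v, j)} via BFS on an undirected connected graph."""
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    ecc = {}
    for source in vertices:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        ecc[source] = max(dist.values())
    return ecc

def radius_diameter_center(vertices, edges, tau):
    """Radius r = min eccentricity, diameter d = max eccentricity,
    and the tau-center {i : d(i, j) <= tau for all j}."""
    ecc = eccentricities(vertices, edges)
    return min(ecc.values()), max(ecc.values()), {v for v, e in ecc.items() if e <= tau}

# Cycle with F = 6 vertices: r = d = F/2 = 3, and for tau = 3 the
# tau-center is the whole vertex set.
F = 6
r, d, center = radius_diameter_center(range(F), [(i, (i + 1) % F) for i in range(F)], tau=3)
print(r, d, len(center))  # 3 3 6
```

The same routine reproduces the radius and diameter formulas listed in the summary table, e.g. $\mathbf{r} = \lfloor F/2 \rfloor$ and $\mathbf{d} = F - 1$ for the path.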
\\vspace*{8pt}\n\n\n\\noindent{\\bf Application to concrete opinion graphs} --\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.98\\textwidth]{graphs.eps}\n\\caption{\\upshape{Opinion graphs considered in Corollaries~\\ref{cor:path}--\\ref{cor:hypercube}}}\n\\label{fig:graphs}\n\\end{figure}\n We now apply our general results to particular opinion graphs, namely the ones which are represented in Figure~\\ref{fig:graphs}.\n First, we look at paths and more generally stars with~$b$ branches of equal length.\n For paths, one can think of the individuals as being characterized by their position about one issue, ranging from strongly agree to strongly disagree.\n For stars, individuals are offered~$b$ alternatives:\n the center represents undecided individuals while vertices far from the center are more extremist in their position.\n These graphs are not distance-regular so we can only apply Theorem~\\ref{th:fixation} to study fixation of the infinite system.\n This theorem combined with Theorem~\\ref{th:fluctuation} gives the following two corollaries.\n\\begin{corollary}[path] --\n\\label{cor:path}\n When~$\\Gamma$ is the path with~$F$ vertices,\n\\begin{itemize}\n \\item the system fluctuates when~\\eqref{eq:product} holds and~$F \\leq 2 \\tau + 1$ whereas \\vspace*{3pt}\n \\item the system fixates when~\\eqref{eq:uniform} holds and~$3 F^2 - (20 \\tau + 3) \\,F + 10 \\,(3 \\tau + 1) \\,\\tau > 0$.\n\\end{itemize}\n\\end{corollary}\n\\begin{corollary}[star] --\n\\label{cor:star}\n When~$\\Gamma$ is the star with~$b$ branches of length~$r$,\n\\begin{itemize}\n \\item the system fluctuates when~\\eqref{eq:product} holds and~$r \\leq \\tau$ whereas \\vspace*{3pt}\n \\item the system fixates when~\\eqref{eq:uniform} holds, $2r > 3 \\tau$ and\n $$ 4 \\,(b - 1) \\,r^2 + 2 \\,((4 - 5b) \\,\\tau + b - 1) \\,r + (6b - 5) \\,\\tau^2 + (1 - 2b) \\,\\tau \\ > \\ 0. 
$$\n\\end{itemize}\n\\end{corollary}\n To illustrate Theorem~\\ref{th:dist-reg}, we now look at distance-regular graphs, starting with the five convex regular polyhedra also known as the Platonic solids.\n These graphs are natural mathematically though we do not have any specific interpretation from the point of view of social sciences except, as explained below,\n for the cube and more generally hypercubes.\n For these five graphs, Theorems~\\ref{th:fluctuation} and~\\ref{th:dist-reg} give sharp results with the exact value of the critical threshold except\n for the dodecahedron for which the behavior when~$\\tau = 3$ remains an open problem.\n\\begin{corollary}[Platonic solids] --\n\\label{cor:polyhedron}\n Assume~\\eqref{eq:uniform}. Then,\n\\begin{itemize}\n \\item the tetrahedral model fluctuates for all~$\\tau \\geq 1$, \\vspace*{2pt}\n \\item the cubic model fluctuates when~$\\tau \\geq 2$ and fixates when~$\\tau \\leq 1$, \\vspace*{2pt}\n \\item the octahedral model fluctuates for all~$\\tau \\geq 1$, \\vspace*{2pt}\n \\item the dodecahedral model fluctuates when~$\\tau \\geq 4$ and fixates when~$\\tau \\leq 2$, \\vspace*{2pt}\n \\item the icosahedral model fluctuates when~$\\tau \\geq 2$ and fixates when~$\\tau \\leq 1$.\n\\end{itemize}\n\\end{corollary}\n Next, we look at the case where the individuals are characterized by some preferences represented by the set of vertices of a cycle.\n For instance, as explained in~\\cite{boudourides_scarlatos_2005}, all strict orderings of three alternatives can be represented by the cycle\n with~$3! 
= 6$ vertices.\n\\begin{corollary}[cycle] --\n\\label{cor:cycle}\n When~$\\Gamma$ is the cycle with~$F$ vertices,\n\\begin{itemize}\n \\item the system fluctuates when~\\eqref{eq:product} holds and~$F \\leq 2 \\tau + 2$ whereas \\vspace*{3pt}\n \\item the system fixates when~\\eqref{eq:uniform} holds and~$F \\geq 4 \\tau + 2$.\n\\end{itemize}\n\\end{corollary}\n Finally, we look at hypercubes with~$F = 2^d$ vertices, which are generalizations of the three-dimensional cube.\n In this case, the individuals are characterized by their position --~in favor or against~-- about~$d$ different issues, and the opinion distance between two\n individuals is equal to the number of issues they disagree on.\n Theorem~\\ref{th:dist-reg} gives the following result.\n\\begin{corollary}[hypercube] --\n\\label{cor:hypercube}\n When~$\\Gamma$ is the hypercube with~$2^d$ vertices,\n\\begin{itemize}\n \\item the system fluctuates when~\\eqref{eq:product} holds and~$d \\leq \\tau + 1$ whereas \\vspace*{3pt}\n \\item the system fixates when~\\eqref{eq:uniform} holds and~$d / \\tau > 3$ or when~$d / \\tau > 2$ with~$\\tau$ large.\n\\end{itemize}\n\\end{corollary}\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{cccccc}\n\\hline \\noalign{\\vspace*{2pt}}\n opinion graph & radius & diameter & fluctuation & fix. ($\\tau = 1$) & fix. 
($\\tau$ large) \\\\ \\noalign{\\vspace*{1pt}} \\hline \\noalign{\\vspace*{6pt}}\n path & $\\mathbf{r} = \\integer{F/2}$ & $\\mathbf{d} = F - 1$ & $F \\leq 2 \\tau + 1$ & $F \\geq 6$ & $F / \\tau > (10 + \\sqrt{10}) / 3 \\approx 4.39$ \\\\ \\noalign{\\vspace*{3pt}}\n star~($b = 3$) & $\\mathbf{r} = r$ & $\\mathbf{d} = 2r$ & $r \\leq \\tau$ & $r \\geq 2$ & $r / \\tau > (11 + \\sqrt{17}) / 8 \\approx 1.89$ \\\\ \\noalign{\\vspace*{3pt}}\n star~($b = 5$) & $\\mathbf{r} = r$ & $\\mathbf{d} = 2r$ & $r \\leq \\tau$ & $r \\geq 2$ & $r / \\tau > (21 + \\sqrt{41}) / 16 \\approx 1.71$ \\\\ \\noalign{\\vspace*{3pt}}\n cycle & $\\mathbf{r} = \\integer{F/2}$ & $\\mathbf{d} = \\integer{F/2}$ & $F \\leq 2 \\tau + 2$ & $F \\geq 6$ & $F / \\tau > 4$ \\\\ \\noalign{\\vspace*{3pt}}\n hypercube & $\\mathbf{r} = d$ & $\\mathbf{d} = d$ & $d \\leq \\tau + 1$ & $d \\geq 3$ & $d / \\tau > 2$ \\\\ \\noalign{\\vspace*{8pt}} \\hline \\noalign{\\vspace*{3pt}}\n opinion graph & radius & diameter & fluctuation & \\multicolumn{2}{c}{fixation when} \\\\ \\noalign{\\vspace*{1pt}} \\hline \\noalign{\\vspace*{6pt}}\n tetrahedron & $\\mathbf{r} = 1$ & $\\mathbf{d} = 1$ & $\\tau \\geq 1$ & \\multicolumn{2}{c}{$\\tau = 0$} \\\\ \\noalign{\\vspace*{3pt}}\n cube & $\\mathbf{r} = 3$ & $\\mathbf{d} = 3$ & $\\tau \\geq 2$ & \\multicolumn{2}{c}{$\\tau \\leq 1$} \\\\ \\noalign{\\vspace*{3pt}}\n octahedron & $\\mathbf{r} = 2$ & $\\mathbf{d} = 2$ & $\\tau \\geq 1$ & \\multicolumn{2}{c}{$\\tau = 0$} \\\\ \\noalign{\\vspace*{3pt}}\n dodecahedron & $\\mathbf{r} = 5$ & $\\mathbf{d} = 5$ & $\\tau \\geq 4$ & \\multicolumn{2}{c}{$\\tau \\leq 2$} \\\\ \\noalign{\\vspace*{3pt}}\n icosahedron & $\\mathbf{r} = 3$ & $\\mathbf{d} = 3$ & $\\tau \\geq 2$ & \\multicolumn{2}{c}{$\\tau \\leq 1$} \\\\ \\noalign{\\vspace*{4pt}} \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\upshape{Summary of our results for the opinion graphs in Figure~\\ref{fig:graphs}}}\n\\label{tab:summary}\n\\end{table}\n Table~\\ref{tab:summary} 
summarizes our results for the graphs of Figure~\\ref{fig:graphs}.\n The second and third columns give the value of the radius and the diameter.\n The conditions in the fourth column are the conditions for fluctuation of the infinite system obtained from the corollaries.\n For opinion graphs with a variable number of vertices, the last two columns give sufficient conditions for fixation in the two extreme cases when\n the confidence threshold is one and when the confidence threshold is large.\n To explain the last column for paths and stars, note that the opinion model fixates whenever~$\\mathbf{d} / \\tau$\n is larger than the largest root of the polynomials\n $$ \\begin{array}{rl}\n 3 X^2 - 20 X + 30 & \\hbox{for the path} \\vspace*{3pt} \\\\\n 2 X^2 - 11 X + 13 & \\hbox{for the star with~$b = 3$ branches} \\vspace*{3pt} \\\\\n 4 X^2 - 21 X + 25 & \\hbox{for the star with~$b = 5$ branches} \\end{array} $$\n and the diameter of the opinion graph is sufficiently large.\n These polynomials are obtained from the conditions in Corollaries~\\ref{cor:path}--\\ref{cor:star} by only keeping the terms with degree two.\n\n\n\\section{Coupling with a system of annihilating particles}\n\\label{sec:coupling}\n\n\\indent To study the one-dimensional system, it is convenient to construct the process from a graphical representation and to introduce\n a coupling between the opinion model and a certain system of annihilating particles that keeps track of the discrepancies along the edges\n of the lattice rather than the opinion at each vertex.\n This system of particles can also be constructed from the same graphical representation.\n Since the opinion model on general finite graphs will be studied using other techniques, we only define the graphical representation\n for the process on~$\\mathbb{Z}$, which consists of the following collection of independent Poisson processes:\n\\begin{itemize}\n \\item For each~$x \\in \\mathbb{Z}$, we let~$(N_t (x, x \\pm 1) : t \\geq 0)$ be a 
rate one Poisson process. \\vspace*{3pt}\n \\item We denote by~$T_n (x, x \\pm 1) := \\inf \\,\\{t : N_t (x, x \\pm 1) = n \\}$ its~$n$th arrival time.\n\\end{itemize}\n This collection of independent Poisson processes is then turned into a percolation structure by drawing an arrow~$x \\to x \\pm 1$\n at time~$t := T_n (x, x \\pm 1)$.\n We say that this arrow is {\\bf active} when\n $$ d (\\eta_{t-} (x), \\eta_{t-} (x \\pm 1)) \\ \\leq \\ \\tau. $$\n The configuration at time~$t$ is then obtained by setting\n\\begin{equation}\n\\label{eq:rule}\n \\begin{array}{rcll}\n \\eta_t (x \\pm 1) & = & \\eta_{t-} (x) & \\hbox{when the arrow~$x \\to x \\pm 1$ is active} \\vspace*{3pt} \\\\\n & = & \\eta_{t-} (x \\pm 1) & \\hbox{when the arrow~$x \\to x \\pm 1$ is not active} \\end{array}\n\\end{equation}\n and leaving the opinion at all the other vertices unchanged.\n An argument due to Harris \\cite{harris_1972} implies that the opinion model starting from any configuration can indeed\n be constructed using this percolation structure and rule~\\eqref{eq:rule}.\n From the collection of active arrows, we construct active paths as in percolation theory.\n More precisely, we say that there is an {\\bf active path} from~$(z, s)$ to~$(x, t)$, and write~$(z, s) \\leadsto (x, t)$, whenever there exist\n $$ s_0 = s < s_1 < \\cdots < s_{n + 1} = t \\qquad \\hbox{and} \\qquad\n x_0 = z, \\,x_1, \\,\\ldots, \\,x_n = x $$\n such that the following two conditions hold:\n\\begin{itemize}\n \\item For~$j = 1, 2, \\ldots, n$, there is an active arrow~$x_{j - 1} \\to x_j$ at time~$s_j$. \\vspace*{3pt}\n \\item For~$j = 0, 1, \\ldots, n$, there is no active arrow that points at~$\\{x_j \\} \\times (s_j, s_{j + 1})$.\n\\end{itemize}\n These two conditions imply that\n $$ \\hbox{for all} \\ (x, t) \\in \\mathbb{Z} \\times \\mathbb{R}_+ \\ \\hbox{there is a unique} \\ z \\in \\mathbb{Z} \\ \\hbox{such that} \\ (z, 0) \\leadsto (x, t). 
$$\n Moreover, because of the definition of active arrows, the opinion at vertex~$x$ at time~$t$ originates from and is therefore equal to the\n initial opinion at vertex~$z$, so we call vertex~$z$ the {\\bf ancestor} of vertex~$x$ at time~$t$.\n\n\\indent As previously mentioned, to study the one-dimensional system, we look at the process that keeps track of the discrepancies along the edges\n rather than the actual opinion at each vertex, which we shall call the {\\bf system of piles}.\n To define this process, it is convenient to identify each edge with its midpoint and to define translations on the edge set as follows:\n $$ \\begin{array}{rclcl}\n e & := & \\{x, x + 1 \\} \\ \\equiv \\ x + 1/2 & \\hbox{for all} & x \\in \\mathbb{Z} \\vspace*{3pt} \\\\\n e + v & := & \\{x, x + 1 \\} + v \\ \\equiv \\ x + 1/2 + v & \\hbox{for all} & (x, v) \\in \\mathbb{Z} \\times \\mathbb{R}. \\end{array} $$\n The system of piles is then defined as\n $$ \\xi_t (e) \\ := \\ d (\\eta_t (e - 1/2), \\eta_t (e + 1/2)) \\quad \\hbox{for all} \\quad e \\in \\mathbb{Z} + 1/2, $$\n and it is convenient to think of edge~$e$ as being occupied by a pile of~$\\xi_t (e)$ particles.\n The dynamics of the opinion model induces the following evolution rules on this system of particles.\n Assuming that there is an arrow~$x - 1 \\to x$ at time~$t$ and that\n $$ \\begin{array}{rcl}\n \\xi_{t-} (x - 1/2) & := & d (\\eta_{t-} (x), \\eta_{t-} (x - 1)) \\ = \\ s_- \\vspace*{3pt} \\\\\n \\xi_{t-} (x + 1/2) & := & d (\\eta_{t-} (x), \\eta_{t-} (x + 1)) \\ = \\ s_+ \\end{array} $$\n we have the following alternative:\n\\begin{itemize}\n \\item In case~$s_- = 0$, meaning that there is no particle on the edge, the two interacting agents already agree just before the interaction, so nothing happens. \\vspace*{3pt}\n \\item In case~$s_- > \\tau$, meaning that there are more than~$\\tau$ particles on the edge, the two interacting agents disagree too much to trust each other, so nothing happens. 
\\vspace*{3pt}\n \\item In case~$0 < s_- \\leq \\tau$, meaning that the edge carries at least one but no more than~$\\tau$ particles, the agent at vertex~$x$ mimics her left neighbor, which gives\n $$ \\begin{array}{rcl}\n \\xi_t (x - 1/2) & := & d (\\eta_t (x), \\eta_t (x - 1)) \\ = \\ d (\\eta_{t-} (x - 1), \\eta_{t-} (x - 1)) \\ = \\ 0 \\vspace*{3pt} \\\\\n \\xi_t (x + 1/2) & := & d (\\eta_t (x), \\eta_t (x + 1)) \\ = \\ d (\\eta_{t-} (x - 1), \\eta_{t-} (x + 1)). \\end{array} $$\n In particular, there are no more particles at edge~$x - 1/2$.\n In addition, the size~$s$ of the pile of particles at edge~$x + 1/2$ at time~$t$, where the size of a pile refers to the number of particles\n in that pile, satisfies the two inequalities\n\\begin{equation}\n\\label{eq:size}\n \\begin{array}{rcl}\n s & \\leq & |d (\\eta_{t-} (x - 1), \\eta_{t-} (x)) + d (\\eta_{t-} (x), \\eta_{t-} (x + 1))| \\ = \\ |s_- + s_+| \\vspace*{3pt} \\\\\n s & \\geq & |d (\\eta_{t-} (x - 1), \\eta_{t-} (x)) - d (\\eta_{t-} (x), \\eta_{t-} (x + 1))| \\ = \\ |s_- - s_+|. \\end{array}\n\\end{equation}\n Note that the first inequality implies that the process involves deaths of particles but no births, which is a key property that will be used later.\n\\end{itemize}\n Similar evolution rules are obtained by exchanging the direction of the interaction, from which we deduce the following description\n for the dynamics of piles:\n\\begin{itemize}\n \\item Piles with more than~$\\tau$ particles cannot move: we call such piles {\\bf frozen piles} and the particles in such piles frozen particles. \\vspace*{3pt}\n \\item Piles with at most~$\\tau$ particles jump one unit to the left or to the right at rate one: we call such piles {\\bf active piles} and the particles in such piles active particles.\n Note that arrows in the graphical representation are active if and only if they cross an active pile. 
\\vspace*{3pt}\n \\item When a pile of size~$s_-$ jumps onto a pile of size~$s_+$, this results in a pile whose size~$s$ satisfies the two inequalities in~\\eqref{eq:size}\n so we say that~$s_- + s_+ - s$ particles are {\\bf annihilated}.\n\\end{itemize}\n\n\n\\section{Proof of Theorem~\\ref{th:fluctuation}}\n\\label{sec:fluctuation}\n\n\\indent Before proving the theorem, we start with some preliminary remarks.\n To begin with, we observe that, when the diameter~$\\mathbf{d} \\leq \\tau$, the~$\\tau$-center covers the whole opinion graph,\n so that the model reduces to a multitype voter model with~$F = \\card V$ opinions.\n In this case, all three parts of the theorem are trivial, with the probability of consensus in the last part being equal to one.\n To prove the theorem in the nontrivial case~$\\tau < \\mathbf{d}$, we introduce the set\n\\begin{equation}\n\\label{eq:boundary}\n B (\\Gamma, \\tau) \\ := \\ \\{i \\in V : d (i, j) > \\tau \\ \\hbox{for some} \\ j \\in V \\}\n\\end{equation}\n and call this set the~{\\bf $\\tau$-boundary} of the opinion graph.\n One key ingredient of our proof is the following lemma, which gives a sufficient condition for~\\eqref{eq:fluctuation} to hold.\n\\begin{lemma} --\n\\label{lem:partition}\n The sets~$V_1 = C (\\Gamma, \\tau)$ and~$V_2 = B (\\Gamma, \\tau)$ satisfy~\\eqref{eq:fluctuation} when~$\\mathbf{r} \\leq \\tau < \\mathbf{d}$.\n\\end{lemma}\n\\begin{proof}\n From~\\eqref{eq:center} and~\\eqref{eq:boundary}, we get~$B (\\Gamma, \\tau) = V \\setminus C (\\Gamma, \\tau)$, therefore\n $$ C (\\Gamma, \\tau) \\,\\cup \\,B (\\Gamma, \\tau) = V \\quad \\hbox{and} \\quad C (\\Gamma, \\tau) \\,\\cap \\,B (\\Gamma, \\tau) = \\varnothing. 
$$\n In addition, the~$\\tau$-center of the graph is nonempty because\n\\begin{equation}\n\\label{eq:center-radius}\n \\begin{array}{rcl}\n C (\\Gamma, \\tau) \\neq \\varnothing & \\hbox{if and only if} & \\hbox{there is~$i \\in V$ such that~$d (i, j) \\leq \\tau$ for all~$j \\in V$} \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\hbox{there is~$i \\in V$ such that~$\\max_{j \\in V} \\,d (i, j) \\leq \\tau$} \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\min_{i \\in V} \\,\\max_{j \\in V} \\,d (i, j) \\leq \\tau \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\mathbf{r} \\leq \\tau \\end{array}\n\\end{equation}\n while the~$\\tau$-boundary is nonempty because\n $$ \\begin{array}{rcl}\n B (\\Gamma, \\tau) \\neq \\varnothing & \\hbox{if and only if} & \\hbox{there is~$i \\in V$ such that~$d (i, j) > \\tau$ for some~$j \\in V$} \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\hbox{there are~$i, j \\in V$ such that~$d (i, j) > \\tau$} \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\max_{i \\in V} \\,\\max_{j \\in V} \\,d (i, j) > \\tau \\vspace*{3pt} \\\\\n & \\hbox{if and only if} & \\mathbf{d} > \\tau. \\end{array} $$\n This shows that~$\\{V_1, V_2 \\}$ is a partition of the set of opinions.\n Finally, since all the vertices in the~$\\tau$-center are within distance~$\\tau$ of all the other vertices, condition~\\eqref{eq:fluctuation} holds.\n\\end{proof} \\\\ \\\\\n This lemma will be used in the proof of part b where clustering will follow from fluctuation, and in the proof of part c to show that\n the probability of consensus on any finite connected graph is indeed positive.\n From now on, we call vertices in the~$\\tau$-center the centrist opinions and vertices in the~$\\tau$-boundary the extremist opinions. 
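To make the partition of Lemma~\\ref{lem:partition} concrete, the following Python sketch (illustrative only, not part of the argument) computes the eccentricities, radius, diameter, $\\tau$-center, and~$\\tau$-boundary of an opinion graph, and checks that the center and boundary partition the vertex set when~$\\mathbf{r} \\leq \\tau < \\mathbf{d}$; the choice of graph (a path with five vertices) and of the threshold~$\\tau$ are arbitrary.

```python
from collections import deque

def distances(adj):
    # All-pairs graph distances computed by breadth-first search.
    dist = {}
    for s in adj:
        d = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[s] = d
    return dist

# Hypothetical opinion graph: a path with five vertices 0-1-2-3-4.
V = range(5)
adj = {i: [j for j in V if abs(i - j) == 1] for i in V}
dist = distances(adj)

ecc = {i: max(dist[i][j] for j in V) for i in V}  # eccentricities
r, d = min(ecc.values()), max(ecc.values())       # radius and diameter

tau = 2                              # any threshold with r <= tau < d
C = {i for i in V if ecc[i] <= tau}  # tau-center: within tau of every vertex
B = {i for i in V if ecc[i] > tau}   # tau-boundary: beyond tau of some vertex

assert r <= tau < d
assert C and B and C | B == set(V) and not (C & B)
print(sorted(C), sorted(B))  # [2] [0, 1, 3, 4]
```

For the path, the only centrist opinion is the middle vertex, consistent with radius~$\\mathbf{r} = 2$ and diameter~$\\mathbf{d} = 4$.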
\\\\ \\\\\n\\begin{demo}{Theorem~\\ref{th:fluctuation}a (fluctuation)} --\n Under condition~\\eqref{eq:fluctuation}, agents who support an opinion in the set~$V_1$ are within the confidence threshold of\n agents who support an opinion in~$V_2$, therefore we deduce from the expression of the transition rates~\\eqref{eq:rates} that\n\\begin{equation}\n\\label{eq:fluctuation-1}\n \\begin{array}{rcl}\n c_{i \\to j} (x, \\eta_t) & = &\n \\lim_{h \\to 0} \\ (1/h) \\,P \\,(\\eta_{t + h} (x) = j \\,| \\,\\eta_t \\ \\hbox{and} \\ \\eta_t (x) = i) \\vspace*{4pt} \\\\ & = &\n \\card \\{y \\in N_x : \\eta_t (y) = j \\} \\end{array}\n\\end{equation}\n for every~$(i, j) \\in V_1 \\times V_2$ and every~$(i, j) \\in V_2 \\times V_1$. Let\n\\begin{equation}\n\\label{eq:fluctuation-3}\n \\zeta_t (x) \\ := \\ \\mathbf{1} \\{\\eta_t (x) \\in V_2 \\} \\quad \\hbox{for all} \\quad x \\in \\mathbb{Z}.\n\\end{equation}\n Since, according to~\\eqref{eq:fluctuation-1}, we have\n\\begin{itemize}\n \\item for all~$j \\in V_2$, the rates~$c_{i \\to j} (x, \\eta_t)$ are constant across all~$i \\in V_1$, \\vspace*{4pt}\n \\item for all~$i \\in V_1$, the rates~$c_{j \\to i} (x, \\eta_t)$ are constant across all~$j \\in V_2$,\n\\end{itemize}\n the process~$(\\zeta_t)$ is Markov with transition rates\n $$ \\begin{array}{rrl}\n c_{0 \\to 1} (x, \\zeta_t) & := &\n \\lim_{h \\to 0} \\ (1/h) \\,P \\,(\\zeta_{t + h} (x) = 1 \\,| \\,\\zeta_t \\ \\hbox{and} \\ \\zeta_t (x) = 0) \\vspace*{4pt} \\\\ & = &\n \\sum_{i \\in V_1} \\sum_{j \\in V_2} c_{i \\to j} (x, \\eta_t) \\,P \\,(\\eta_t (x) = i \\,| \\,\\zeta_t (x) = 0) \\vspace*{4pt} \\\\ & = &\n \\sum_{i \\in V_1} \\sum_{j \\in V_2} \\card \\{y \\in N_x : \\eta_t (y) = j \\} \\,P \\,(\\eta_t (x) = i \\,| \\,\\zeta_t (x) = 0) \\vspace*{4pt} \\\\ & = &\n \\sum_{j \\in V_2} \\card \\{y \\in N_x : \\eta_t (y) = j \\} \\ = \\ \\card \\{y \\in N_x : \\zeta_t (y) = 1 \\} \\end{array} $$\n and similarly for the reverse transition\n $$ \\begin{array}{l}\n c_{1 
\\to 0} (x, \\zeta_t) \\ = \\ \\card \\{y \\in N_x : \\eta_t (y) \\in V_1 \\} \\ = \\ \\card \\{y \\in N_x : \\zeta_t (y) = 0 \\}. \\end{array} $$\n This shows that~$(\\zeta_t)$ is the voter model.\n In addition, since~$V_1, V_2 \\neq \\varnothing$,\n $$ \\begin{array}{l} P \\,(\\zeta_0 (x) = 0) \\ = \\ P \\,(\\eta_0 (x) \\in V_1) \\ = \\ \\sum_{j \\in V_1} \\,\\rho_j \\ \\in \\ (0, 1) \\end{array} $$\n whenever condition~\\eqref{eq:product} holds.\n In particular, fluctuation follows from the fact that the one-dimensional voter model starting with a positive density of each type fluctuates.\n This last result is a consequence of site recurrence for annihilating random walks proved in \\cite{arratia_1983}.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Theorem~\\ref{th:fluctuation}b (clustering)} --\n Since~$\\mathbf{r} \\leq \\tau < \\mathbf{d}$,\n $$ V_1 \\ = \\ C (\\Gamma, \\tau) \\quad \\hbox{and} \\quad V_2 \\ = \\ B (\\Gamma, \\tau) $$\n form a partition of~$V$ according to Lemma~\\ref{lem:partition}.\n This implies in particular that not only does the opinion model fluctuate, but the coupled voter model~\\eqref{eq:fluctuation-3}\n for this specific partition fluctuates as well, which is the key to the proof.\n To begin with, we define the function\n $$ \\begin{array}{l} u (t) \\ := \\ E \\,\\xi_t (e) \\ = \\ \\sum_{0 \\leq j \\leq \\mathbf{d}} \\,j \\,P \\,(\\xi_t (e) = j) \\end{array} $$\n which, in view of translation invariance of the initial configuration and the evolution rules, does not depend on the choice of~$e$.\n Note that, since the system of particles coupled with the process involves deaths of particles but no births, the function~$u (t)$ is nonincreasing in time.\n Since it is also nonnegative, it has a limit:~$u (t) \\to l$ as~$t \\to \\infty$.\n Now, on the event that an edge~$e$ is occupied by a pile with at least one particle at a given time~$t$, we have the following alternative:\n\\begin{itemize}\n \\item[(1)] In case edge~$e := x + 1/2$ carries a frozen
pile, since the centrist agents are within the confidence threshold of all the other individuals, we must have\n $$ \\eta_t (x) \\in V_2 = B (\\Gamma, \\tau) \\quad \\hbox{and} \\quad \\eta_t (x + 1) \\in V_2 = B (\\Gamma, \\tau). $$\n Now, using that the voter model~\\eqref{eq:fluctuation-3} fluctuates,\n $$ T \\ := \\ \\inf \\,\\{s > t : \\eta_s (x) \\in V_1 = C (\\Gamma, \\tau) \\ \\hbox{or} \\ \\eta_s (x + 1) \\in V_1 = C (\\Gamma, \\tau) \\} < \\infty $$\n almost surely, while by definition of the~$\\tau$-center, we have\n $$ \\xi_T (e) \\ = \\ d (\\eta_T (x), \\eta_T (x + 1)) \\ \\leq \\ \\tau \\ < \\ \\xi_t (e). $$\n In particular, at least one of the frozen particles at~$e$ is annihilated eventually. \\vspace*{4pt}\n \\item[(2)] In case edge~$e := x + 1/2$ carries an active pile, since one-dimensional symmetric random walks are recurrent, this pile eventually intersects\n another pile.\n Let~$s_-$ and~$s_+$ be respectively the size of these two piles and let~$s$ be the size of the pile of particles resulting from their intersection.\n Then, we have the following alternative:\n \\begin{itemize}\n \\item[(a)] In case~$s < s_- + s_+$ and~$s > \\tau$, at least one particle is annihilated and there is either formation or increase of a frozen pile so we are\n back to case~(1): since the voter model coupled with the opinion model fluctuates, at least one of the frozen particles in this pile is annihilated eventually.\n \\item[(b)] In case~$s < s_- + s_+$ and~$s \\leq \\tau$, at least one particle is annihilated.\n \\item[(c)] In case~$s = s_- + s_+$ and~$s > \\tau$, there is either formation or increase of a frozen pile so we are back to case~(1):\n since the voter model coupled with the opinion model fluctuates, at least one of the frozen particles in this pile is annihilated eventually.\n \\item[(d)] In case~$s = s_- + s_+$ and~$s \\leq \\tau$, the resulting pile is again active so it keeps moving until, after a finite number of collisions, we are back to 
either~(a) or~(b) or~(c)\n and at least one particle is annihilated eventually.\n \\end{itemize}\n\\end{itemize}\n This shows that there is a sequence~$0 < t_1 < \\cdots < t_n < \\cdots < \\infty$ such that\n $$ u (t_n) \\ \\leq \\ (1/2) \\,u (t_{n - 1}) \\ \\leq \\ (1/4) \\,u (t_{n - 2}) \\ \\leq \\ \\cdots \\ \\leq \\ (1/2)^n \\,u (0) \\ \\leq \\ (1/2)^n \\,F $$\n from which it follows that the density of particles decreases to zero:\n $$ \\begin{array}{l} \\lim_{t \\to \\infty} \\,P \\,(\\xi_t (e) \\neq 0) \\ \\leq \\ \\lim_{t \\to \\infty} \\,u (t) \\ = \\ 0 \\quad \\hbox{for all} \\quad e \\in \\mathbb{Z} + 1/2. \\end{array} $$\n In conclusion, for all~$x, y \\in \\mathbb{Z}$ with~$x < y$, we have\n $$ \\begin{array}{rcl}\n \\lim_{t \\to \\infty} \\,P \\,(\\eta_t (x) \\neq \\eta_t (y)) & \\leq &\n \\lim_{t \\to \\infty} \\,P \\,(\\xi_t (z + 1/2) \\neq 0 \\ \\hbox{for some} \\ x \\leq z < y) \\vspace*{4pt} \\\\ & \\leq &\n \\lim_{t \\to \\infty} \\,\\sum_{x \\leq z < y} \\,P \\,(\\xi_t (z + 1/2) \\neq 0) \\vspace*{4pt} \\\\ & = &\n (y - x) \\lim_{t \\to \\infty} \\,P \\,(\\xi_t (e) \\neq 0) \\ = \\ 0, \\end{array} $$\n which proves clustering.\n\\end{demo} \\\\ \\\\\n The third part of the theorem, which gives a lower bound for the probability of consensus of the process on finite connected graphs,\n relies on very different techniques, namely techniques related to martingale theory following an idea from \\cite{lanchier_2010}, section 3.\n However, the partition of the opinion set into centrist opinions and extremist opinions is again a key to the proof. 
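Although not part of the proof, the zero-drift property at the heart of this martingale argument is easy to check numerically. The following Python sketch (with arbitrary choices of cycle length~$n$, number of opinions~$F$, and threshold~$\\tau$, and with the opinion graph taken to be a path so that~$d (i, j) = |i - j|$) verifies that, in any configuration, each opinion gains and loses supporters at equal rates: the ordered edges counted in the two rates are matched by the bijection~$(x, y) \\mapsto (y, x)$.

```python
import random

def drift(eta, edges, j, tau, d):
    # Net rate of change of card{x : eta[x] = j}: ordered edges along
    # which x adopts opinion j, minus those along which x abandons j.
    gain = sum(1 for x, y in edges
               if eta[x] != j and eta[y] == j and d(eta[x], j) <= tau)
    loss = sum(1 for x, y in edges
               if eta[x] == j and eta[y] != j and d(eta[y], j) <= tau)
    return gain - loss

# Hypothetical setup: n agents on a cycle, opinion graph a path with F
# vertices, metric d(i, j) = |i - j|, confidence threshold tau.
random.seed(0)
n, F, tau = 12, 5, 2
d = lambda i, j: abs(i - j)
edges = ([(x, (x + 1) % n) for x in range(n)]      # ordered edges:
         + [((x + 1) % n, x) for x in range(n)])   # both orientations

for _ in range(100):
    eta = [random.randrange(F) for _ in range(n)]  # arbitrary configuration
    assert all(drift(eta, edges, j, tau, d) == 0 for j in range(F))
print("zero drift for every opinion")
```

The identity holds configuration by configuration, which is consistent with the martingale property of each~$X_t (j)$ established below.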
\\ \\\n\\begin{demo}{Theorem~\\ref{th:fluctuation}c (consensus)} --\n We first prove that the process that keeps track of the number of supporters of any given opinion is a martingale.\n Then, applying the optional stopping theorem, we obtain a lower bound for the probability of extinction of the extremist agents, which is\n also a lower bound for the probability of consensus.\n For every~$j \\in V$, we set\n $$ X_t (j) \\ := \\ \\card \\{x \\in \\mathscr V : \\eta_t (x) = j \\} \\quad \\hbox{and} \\quad X_t \\ := \\ \\card \\{x \\in \\mathscr V : \\eta_t (x) \\in C (\\Gamma, \\tau) \\} $$\n and we observe that\n\\begin{equation}\n\\label{eq:consensus-1}\n \\begin{array}{l}\n X_t \\ = \\ \\sum_{j \\in C (\\Gamma, \\tau)} X_t (j). \\end{array}\n\\end{equation}\n Letting~$\\mathcal F_t$ denote the natural filtration of the process, we also have\n $$ \\begin{array}{l}\n \\lim_{h \\to 0} \\ (1/h) \\,E \\,(X_{t + h} (j) - X_t (j) \\,| \\,\\mathcal F_t) \\vspace*{4pt} \\\\ \\hspace*{25pt} = \\\n \\lim_{h \\to 0} \\ (1/h) \\,P \\,(X_{t + h} (j) - X_t (j) = 1 \\,| \\,\\mathcal F_t) \\vspace*{4pt} \\\\ \\hspace*{40pt} - \\\n \\lim_{h \\to 0} \\ (1/h) \\,P \\,(X_{t + h} (j) - X_t (j) = - 1 \\,| \\,\\mathcal F_t) \\vspace*{4pt} \\\\ \\hspace*{25pt} = \\\n \\card \\{(x, y) \\in \\mathscr E : \\eta_t (x) \\neq j \\ \\hbox{and} \\ \\eta_t (y) = j \\ \\hbox{and} \\ d (\\eta_t (x), j) \\leq \\tau \\} \\vspace*{4pt} \\\\ \\hspace*{40pt} - \\\n \\card \\{(x, y) \\in \\mathscr E : \\eta_t (x) = j \\ \\hbox{and} \\ \\eta_t (y) \\neq j \\ \\hbox{and} \\ d (\\eta_t (y), j) \\leq \\tau \\} \\ = \\ 0. 
\\end{array} $$\n This shows that the process~$X_t (j)$ is a martingale with respect to the natural filtration of the opinion model.\n This, together with equation~\\eqref{eq:consensus-1}, implies that~$X_t$ also is a martingale.\n Because of the finiteness of the graph, this martingale is bounded and gets trapped in an absorbing state after an almost surely\n finite stopping time:\n $$ T \\ := \\ \\inf \\,\\{t : \\eta_t = \\eta_s \\ \\hbox{for all} \\ s > t \\} \\ < \\ \\infty \\quad \\hbox{almost surely}. $$\n We claim that~$X_T$ can only take two values:\n\\begin{equation}\n\\label{eq:consensus-2}\n X_T \\in \\{0, N \\} \\quad \\hbox{where} \\quad \\hbox{$N := \\card (\\mathscr V)$ = the population size}.\n\\end{equation}\n Indeed, assuming by contradiction that~$X_T \\notin \\{0, N \\}$ implies the existence of an absorbing state with at least one centrist\n agent and at least one extremist agent.\n Since the graph is connected, this further implies the existence of an edge~$e = (x, y)$ such that\n $$ \\eta_T (x) \\in C (\\Gamma, \\tau) \\quad \\hbox{and} \\quad \\eta_T (y) \\in B (\\Gamma, \\tau) $$\n but then we have\n $$ \\eta_T (x) \\neq \\eta_T (y) \\quad \\hbox{and} \\quad d (\\eta_T (y), \\eta_T (x)) \\leq \\tau $$\n showing that~$\\eta_T$ is not an absorbing state, in contradiction with the definition of time~$T$.\n This proves that our claim~\\eqref{eq:consensus-2} is true.\n Now, applying the optional stopping theorem to the bounded martingale~$X_t$ and the almost surely finite stopping time~$T$,\n we obtain\n $$ \\begin{array}{l} E X_T \\ = \\ E X_0 \\ = \\ N \\times P \\,(\\eta_0 (x) \\in C (\\Gamma, \\tau)) \\ = \\ N \\times \\sum_{j \\in C (\\Gamma, \\tau)} \\,\\rho_j \\ = \\ N \\rho_{\\cent} \\end{array} $$\n which, together with~\\eqref{eq:consensus-2}, implies that\n\\begin{equation}\n\\label{eq:consensus-3}\n \\begin{array}{rcl}\n P \\,(X_T = N) & = & (1/N)(0 \\times P \\,(X_T = 0) + N \\times P \\,(X_T = N)) \\vspace*{4pt} \\\\\n & = & (1/N) 
\\ E X_T \\ = \\ \\rho_{\\cent}. \\end{array}\n\\end{equation}\n To conclude, we observe that, on the event that~$X_T = N$, all the opinions present in the system at the time to absorption\n are centrist opinions and since the only absorbing states with only centrist opinions are the configurations in which all the agents\n share the same opinion, we deduce that the system converges to a consensus.\n This, together with~\\eqref{eq:consensus-3}, implies that\n $$ P \\,(\\eta_t \\equiv \\hbox{constant for some} \\ t > 0) \\ \\geq \\ P \\,(X_T = N) \\ = \\ \\rho_{\\cent}. $$\n Finally, since the threshold is at least equal to the radius, it follows from~\\eqref{eq:center-radius} that\n the~$\\tau$-center is nonempty, so we have~$\\rho_{\\cent} > 0$.\n This completes the proof of Theorem~\\ref{th:fluctuation}.\n\\end{demo}\n\n\n\\section{Sufficient condition for fixation}\n\\label{sec:condition}\n\n\\indent This section and the next two ones are devoted to the proof of Theorem~\\ref{th:fixation} which studies the fixation regime\n of the infinite one-dimensional system.\n In this section, we give a general sufficient condition for fixation that can be expressed based on the initial number of active\n particles and frozen particles in a large random interval.\n The main ingredient of the proof is a construction due to Bramson and Griffeath~\\cite{bramson_griffeath_1989} based on\n duality-like techniques looking at active paths.\n The next section establishes large deviation estimates for the initial number of particles in order to simplify\n the condition for fixation using instead the expected number of active and frozen particles per edge.\n This is used in the subsequent section to prove Theorem~\\ref{th:fixation}.\n The next lemma gives a condition for fixation based on properties of the active paths, which is the\n analog of~\\cite[Lemma~2]{bramson_griffeath_1989}.\n\\begin{lemma} --\n\\label{lem:fixation-condition}\n For all~$z \\in \\mathbb{Z}$, let\n $$ T (z) 
\\ := \\ \\inf \\,\\{t : (z, 0) \\leadsto (0, t) \\}. $$\n Then, the opinion model on~$\\mathbb{Z}$ fixates whenever\n\\begin{equation}\n\\label{eq:fixation}\n \\begin{array}{l} \\lim_{N \\to \\infty} \\,P \\,(T (z) < \\infty \\ \\hbox{for some} \\ z < - N) \\ = \\ 0. \\end{array}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n This closely follows the proof of~\\cite[Lemma~4]{lanchier_scarlatos_2013}.\n\\end{proof} \\\\ \\\\\n To derive a more explicit condition for fixation, we let\n $$ H_N \\ := \\ \\{T (z) < \\infty \\ \\hbox{for some} \\ z < - N \\} $$\n be the event introduced in~\\eqref{eq:fixation}.\n Following the construction in~\\cite{bramson_griffeath_1989}, we also let~$\\tau_N$ be the first time an active path starting from the\n left of~$- N$ hits the origin, and observe that\n $$ \\tau_N \\ = \\ \\inf \\,\\{T (z) : z \\in (- \\infty, - N) \\}. $$\n In particular, the event~$H_N$ can be written as\n\\begin{equation}\n\\label{eq:key-event}\n \\begin{array}{l} H_N \\ = \\ \\bigcup_{z < - N} \\, \\{T (z) < \\infty \\} \\ = \\ \\{\\tau_N < \\infty \\}. \\end{array}\n\\end{equation}\n Given the event~$H_N$, we let~$z_- < - N$ be the leftmost source of an active path that reaches the origin at time~$\\tau_N$, and~$z_+ \\geq 0$ the rightmost\n source of an active path that reaches the origin before time~$\\tau_N$, i.e.,\n\\begin{equation}\n\\label{eq:paths}\n \\begin{array}{rcl}\n z_- & := & \\,\\min \\,\\{z \\in \\mathbb{Z} : (z, 0) \\leadsto (0, \\tau_N) \\} \\ < \\ - N \\vspace*{2pt} \\\\\n z_+ & := & \\max \\,\\{z \\in \\mathbb{Z} : (z, 0) \\leadsto (0, \\sigma_N) \\ \\hbox{for some} \\ \\sigma_N < \\tau_N \\} \\ \\geq \\ 0, \\end{array}\n\\end{equation}\n and define~$I_N = (z_-, z_+)$.\n Now, we observe that, on the event~$H_N$,\n\\begin{itemize}\n\\item All the frozen piles initially in~$I_N$ must have been destroyed, i.e., turned into active piles due to the occurrence of\n annihilating events, by time~$\\tau_N$. 
\\vspace*{4pt}\n\\item The active particles initially outside the interval~$I_N$ cannot jump inside the space-time region delimited\n by the two active paths implicitly defined in~\\eqref{eq:paths} because the existence of such particles would contradict\n the minimality of~$z_-$ or the maximality of~$z_+$.\n\\end{itemize}\n This, together with equation~\\eqref{eq:key-event}, implies that, given the event~$H_N$, all the frozen piles initially in the random\n interval~$I_N$ must have been destroyed by either active piles initially in this interval or active piles that result from\n the destruction of these frozen piles.\n To quantify this statement, we attach random variables, that we call {\\bf contributions}, to each edge.\n The definition depends on whether the edge initially carries an active pile or a frozen pile.\n To begin with, we give an arbitrary deterministic contribution, say~$-1$, to each pile initially active by setting\n\\begin{equation}\n\\label{eq:contribution-active}\n \\cont (e) \\ := \\ - 1 \\quad \\hbox{whenever} \\quad 0 < \\xi_0 (e) \\leq \\tau.\n\\end{equation}\n Now, we observe that, given~$H_N$, for each frozen pile initially in~$I_N$, a random number of active piles must have jumped onto this frozen\n pile to turn it into an active pile.\n Therefore, to define the contribution of a frozen pile, we let\n\\begin{equation}\n\\label{eq:breaks}\n T_e \\ := \\ \\inf \\,\\{t > 0 : \\xi_t (e) \\leq \\tau \\}\n\\end{equation}\n and define the contribution of a frozen pile initially at~$e$ as\n\\begin{equation}\n\\label{eq:contribution-frozen}\n \\cont (e) \\ := \\ - 1 + \\hbox{number of active piles that hit~$e$ until time~$T_e$}.\n\\end{equation}\n Note that~\\eqref{eq:contribution-frozen} reduces to~\\eqref{eq:contribution-active} when edge~$e$ carries initially an active pile since in this\n case the time until the edge becomes active is zero, therefore~\\eqref{eq:contribution-frozen} can be used as the general definition for the contribution\n 
of an edge with at least one particle.\n Edges with initially no particle have contribution zero.\n Since the occurrence of~$H_N$ implies that all the frozen piles initially in~$I_N$ must have been destroyed by either active piles initially\n in this interval or active piles that result from the destruction of these frozen piles, in which case~$T_e < \\infty$ for all the\n edges in the interval, and since particles in an active pile jump all at once rather than individually,\n\\begin{equation}\n\\label{eq:inclusion-1}\n \\begin{array}{rcl}\n H_N & \\subset & \\{\\sum_{e \\in I_N} \\,\\cont (e \\,| \\,T_e < \\infty) \\leq 0 \\} \\vspace*{4pt} \\\\\n & \\subset & \\{\\sum_{e \\in (l, r)} \\,\\cont (e \\,| \\,T_e < \\infty) \\leq 0 \\ \\hbox{for some~$l < - N$ and some~$r \\geq 0$} \\}. \\end{array}\n\\end{equation}\n Lemma~\\ref{lem:fixation-condition} and~\\eqref{eq:inclusion-1} are used in section~\\ref{sec:fixation} together with the large\n deviation estimates for the number of active and frozen piles showed in the following section to prove Theorem~\\ref{th:fixation}.\n\n\n\\section{Large deviation estimates}\n\\label{sec:deviation}\n\n\\indent In order to find later a good upper bound for the probability in~\\eqref{eq:fixation} and deduce a sufficient condition for\n fixation of the opinion model, the next step is to prove large deviation estimates for the initial number of piles with~$s$~particles\n in a large interval.\n More precisely, the main objective of this section is to prove that for all~$s$ and all~$\\epsilon > 0$ the probability that\n $$ \\card \\{e \\in (0, N) : \\xi_0 (e) = s \\} \\ \\notin \\ (1 - \\epsilon, 1 + \\epsilon) \\ E \\,(\\card \\{e \\in (0, N) : \\xi_0 (e) = s \\}) $$\n decays exponentially with~$N$.\n Note that, even though it is assumed that the process starts from a product measure and therefore the initial opinions at different\n vertices are chosen independently, the initial states at different edges are not independent in 
general.\n When starting from the uniform product measure, these states are independent if and only if, for every size~$s$,\n $$ \\card \\{j \\in V : d (i, j) = s \\} \\quad \\hbox{does not depend on~$i \\in V$}. $$\n This holds for cycles or hypercubes but not for graphs that are not vertex-transitive.\n When starting from more general product measures, the initial numbers of particles at different edges are not independent, even for\n very specific graphs.\n In particular, the number of piles of particles with a given size in a given interval does not simply reduce to a binomial random variable.\n\n\\indent The main ingredient to prove large deviation estimates for the initial number of piles with a given number of particles in\n a large spatial interval is to first show large deviation estimates for the number of so-called changeovers in a sequence of independent\n coin flips.\n Consider an infinite sequence of independent coin flips such that\n $$ P \\,(X_t = H) = p \\quad \\hbox{and} \\quad P \\,(X_t = T) = q = 1 - p \\quad \\hbox{for all} \\quad t \\in \\mathbb{N} $$\n where~$X_t$ is the outcome (heads or tails) at time~$t$.\n We say that a {\\bf changeover} occurs whenever two consecutive coin flips result in two different outcomes.\n The expected value of the number of changeovers~$Z_N$ before time~$N$ can be easily computed by observing that\n $$ \\begin{array}{l} Z_N \\ = \\ \\sum_{0 \\leq t < N} \\,Y_t \\quad \\hbox{where} \\quad Y_t \\ := \\ \\mathbf{1} \\{X_{t + 1} \\neq X_t \\} \\end{array} $$\n and by using the linearity of the expected value:\n $$ \\begin{array}{rcl}\n E Z_N & = & \\sum_{0 \\leq t < N} \\,E Y_t \\vspace*{4pt} \\\\\n & = & \\sum_{0 \\leq t < N} \\,P \\,(X_{t + 1} \\neq X_t) \\ = \\ N \\,P \\,(X_0 \\neq X_1) \\ = \\ 2 N p q. 
\\end{array} $$\n Then, we have the following result for the number of changeovers.\n\\begin{lemma} --\n\\label{lem:changeover}\n For all~$\\epsilon > 0$, there exists~$c_0 > 0$ such that\n $$ P \\,(Z_N - E Z_N \\notin (- \\epsilon N, \\epsilon N)) \\ \\leq \\ \\exp (- c_0 N) \\quad \\hbox{for all~$N$ large}. $$\n\\end{lemma}\n\\begin{proof}\n To begin with, we let~$\\tau_{2K}$ be the time to the~$2K$th changeover and notice that, since all the outcomes between two consecutive\n changeovers are identical, the sequence of coin flips up to this stopping time can be decomposed into~$2K$ strings with an alternation\n of strings with only heads and strings with only tails followed by one more coin flip.\n In addition, since the coin flips are independent, the length distribution of each string is\n $$ \\begin{array}{rcl}\n H_j & := & \\hbox{length of the~$j$th string of heads} \\ = \\ \\geometric (q) \\vspace*{2pt} \\\\\n T_j & := & \\hbox{length of the~$j$th string of tails} \\ = \\ \\geometric (p) \\end{array} $$\n and lengths are independent.\n In particular,~$\\tau_{2K}$ is equal in distribution to the sum of~$2K$ independent geometric random variables with parameters~$p$ and~$q$,\n therefore we have\n\\begin{equation}\n\\label{eq:changeover-1}\n P \\,(\\tau_{2K} = n) \\ = \\ P \\,(H_1 + T_1 + \\cdots + H_K + T_K = n) \\quad \\hbox{for all} \\quad n \\in \\mathbb{N}.\n\\end{equation}\n Now, observing that, for all~$K \\leq n$,\n $$ \\begin{array}{l}\n \\displaystyle P \\,(H_1 + H_2 + \\cdots + H_K = n) \\ = \\,{n - 1 \\choose K - 1} \\,q^K \\,(1 - q)^{n - K} \\vspace*{2pt} \\\\ \\hspace*{50pt}\n \\displaystyle = \\,\\frac{K}{n} \\ {n \\choose K} \\,q^K \\,(1 - q)^{n - K} \\ \\leq \\ P \\,(\\binomial (n, q) = K), \\end{array} $$\n large deviation estimates for the binomial distribution imply that\n\\begin{equation}\n\\label{eq:changeover-2}\n \\begin{array}{l}\n P \\,((1/K)(H_1 + H_2 + \\cdots + H_K) \\geq (1 + \\epsilon)(1/q)) \\vspace*{4pt} \\\\ \\hspace*{70pt} 
\\leq \\\n P \\,(\\binomial ((1 + \\epsilon)(1/q) K, q) \\leq K) \\ \\leq \\ \\exp (- c_1 K) \\vspace*{8pt} \\\\\n P \\,((1/K)(H_1 + H_2 + \\cdots + H_K) \\leq (1 - \\epsilon)(1/q)) \\vspace*{4pt} \\\\ \\hspace*{70pt} \\leq \\\n P \\,(\\binomial ((1 - \\epsilon)(1/q) K, q) \\geq K) \\ \\leq \\ \\exp (- c_1 K) \\end{array}\n\\end{equation}\n for a suitable constant~$c_1 > 0$ and all~$K$ large.\n Similarly, for all~$\\epsilon > 0$,\n\\begin{equation}\n\\label{eq:changeover-3}\n \\begin{array}{l}\n P \\,((1/K)(T_1 + T_2 + \\cdots + T_K) \\geq (1 + \\epsilon)(1/p)) \\ \\leq \\ \\exp (- c_2 K) \\vspace*{4pt} \\\\\n P \\,((1/K)(T_1 + T_2 + \\cdots + T_K) \\leq (1 - \\epsilon)(1/p)) \\ \\leq \\ \\exp (- c_2 K) \\end{array}\n\\end{equation}\n for a suitable~$c_2 > 0$ and all~$K$ large.\n Combining~\\eqref{eq:changeover-1}--\\eqref{eq:changeover-3}, we deduce that\n $$ \\begin{array}{l}\n P \\,((1/K) \\,\\tau_{2K} \\notin ((1 - \\epsilon)(1/p + 1/q), (1 + \\epsilon)(1/p + 1/q))) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n P \\,((1/K)(H_1 + T_1 + \\cdots + H_K + T_K) \\notin ((1 - \\epsilon)(1/p + 1/q), (1 + \\epsilon)(1/p + 1/q))) \\vspace*{4pt} \\\\ \\hspace*{20pt} \\leq \\\n P \\,((1/K)(H_1 + H_2 + \\cdots + H_K) \\notin ((1 - \\epsilon)(1/q), (1 + \\epsilon)(1/q))) \\vspace*{4pt} \\\\ \\hspace*{80pt} + \\\n P \\,((1/K)(T_1 + T_2 + \\cdots + T_K) \\notin ((1 - \\epsilon)(1/p), (1 + \\epsilon)(1/p))) \\vspace*{4pt} \\\\ \\hspace*{20pt} \\leq \\\n 2 \\exp (- c_1 K) + 2 \\exp (- c_2 K). 
\\end{array} $$\n Taking~$K := pq N$ and using that~$pq \\,(1/p + 1/q) = 1$, we deduce\n $$ \\begin{array}{l}\n P \\,((1/N) \\,\\tau_{2K} \\notin (1 - \\epsilon, 1 + \\epsilon)) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n P \\,((1/K) \\,\\tau_{2K} \\notin ((1 - \\epsilon) (1/p + 1/q), (1 + \\epsilon)(1/p + 1/q))) \\ \\leq \\ \\exp (- c_3 N) \\end{array} $$\n for a suitable~$c_3 > 0$ and all~$N$ large.\n In particular,\n $$ \\begin{array}{rcl}\n P \\,((1/N) \\,\\tau_{2K - \\epsilon N} \\geq 1) & \\leq & \\exp (- c_4 N) \\vspace*{4pt} \\\\\n P \\,((1/N) \\,\\tau_{2K + \\epsilon N} \\leq 1) & \\leq & \\exp (- c_5 N) \\end{array} $$\n for suitable constants~$c_4 > 0$ and~$c_5 > 0$ and all~$N$ sufficiently large.\n Using the previous two inequalities and the fact that the event that the number of changeovers before time~$N$ is equal to~$2K$\n is also the event that the time to the~$2K$th changeover is less than~$N$ but the time to the next changeover is more than~$N$,\n we conclude that\n $$ \\begin{array}{l}\n P \\,(Z_N - E Z_N \\notin (- \\epsilon N, \\epsilon N)) \\ = \\\n P \\,(Z_N \\notin (2 pq - \\epsilon, 2 pq + \\epsilon) N) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n P \\,((1/N) \\,Z_N \\notin (2 pq - \\epsilon, 2 pq + \\epsilon)) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n P \\,((1/N) \\,Z_N \\leq 2 pq - \\epsilon) + P \\,((1/N) \\,Z_N \\geq 2 pq + \\epsilon) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n P \\,((1/N) \\,\\tau_{2K - \\epsilon N} \\geq 1) + P \\,((1/N) \\,\\tau_{2K + \\epsilon N} \\leq 1) \\ \\leq \\\n \\exp (- c_4 N) + \\exp (- c_5 N) \\end{array} $$\n for all~$N$ large.\n This completes the proof.\n\\end{proof} \\\\ \\\\\n Now, we say that an edge is of type~$i \\to j$ if it connects an individual with initial opinion~$i$ on the left to an individual\n with initial opinion~$j$ on the right, and let\n $$ e_N (i \\to j) \\ := \\ \\card \\{x \\in (0, N) : \\eta_0 (x) = i \\ \\hbox{and} \\ \\eta_0 (x + 1) = j \\} $$\n denote the number of edges of 
type~$i \\to j$ in the interval~$(0, N)$.\n Using the large deviation estimates for the number of changeovers established in the previous lemma, we can deduce large\n deviation estimates for the number of edges of each type.\n\\begin{lemma} --\n\\label{lem:edge}\n For all~$\\epsilon > 0$, there exists~$c_6 > 0$ such that\n $$ P \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j \\notin (- 2 \\epsilon N, 2 \\epsilon N)) \\ \\leq \\ \\exp (- c_6 N) \\quad \\hbox{for all~$N$ large and~$i \\neq j$}. $$\n\\end{lemma}\n\\begin{proof}\n For any given~$i$, the number of edges~$i \\to j$ and~$j \\to i$ with~$j \\neq i$ has the same distribution as the number of changeovers in a\n sequence of independent coin flips of a coin that lands on heads with probability~$\\rho_i$.\n In particular, applying Lemma~\\ref{lem:changeover} with~$p = \\rho_i$ gives\n\\begin{equation}\n\\label{eq:edge-1}\n \\begin{array}{l} P \\,(\\sum_{j \\neq i} \\,e_N (i \\to j) - N \\rho_i \\,(1 - \\rho_i) \\notin (- \\epsilon N, \\epsilon N)) \\ \\leq \\ \\exp (- c_0 N) \\end{array}\n\\end{equation}\n for all~$N$ sufficiently large.\n In addition, since each~$i$ preceding a changeover is independently followed by any of the remaining~$F - 1$ opinions, for all~$i \\neq j$, we have\n\\begin{equation}\n\\label{eq:edge-2}\n \\begin{array}{l}\n P \\,(e_N (i \\to j) = n \\ | \\,\\sum_{k \\neq i} \\,e_N (i \\to k) = K) \\vspace*{4pt} \\\\ \\hspace*{50pt} = \\\n P \\,(\\binomial (K, \\rho_j \\,(1 - \\rho_i)^{-1}) = n). 
\\end{array}\n\\end{equation}\n Combining~\\eqref{eq:edge-1}--\\eqref{eq:edge-2} with large deviation estimates for the binomial distribution, conditioning on the number\n of edges of type~$i \\to k$ for some~$k \\neq i$, and using that\n $$ (N \\rho_i \\,(1 - \\rho_i) + \\epsilon N) \\,\\rho_j \\,(1 - \\rho_i)^{-1} \\ = \\ N \\rho_i \\,\\rho_j + \\epsilon N \\rho_j \\,(1 - \\rho_i)^{-1} $$\n we deduce the existence of~$c_7 > 0$ such that\n\\begin{equation}\n\\label{eq:edge-3}\n \\begin{array}{l}\n P \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j \\geq 2 \\epsilon N) \\vspace*{4pt} \\\\ \\hspace*{20pt} \\leq \\\n P \\,(\\sum_{k \\neq i} \\,e_N (i \\to k) - N \\rho_i \\,(1 - \\rho_i) \\geq \\epsilon N) \\vspace*{4pt} \\\\ \\hspace*{20pt} + \\\n P \\,(e_N (i \\to j) \\geq N \\rho_i \\,\\rho_j + 2 \\epsilon N \\ | \\ \\sum_{k \\neq i} \\,e_N (i \\to k) - N \\rho_i \\,(1 - \\rho_i) < \\epsilon N) \\vspace*{4pt} \\\\ \\hspace*{20pt} \\leq \\\n \\exp (- c_0 N) + P \\,(\\binomial (N \\rho_i \\,(1 - \\rho_i) + \\epsilon N, \\rho_j \\,(1 - \\rho_i)^{-1}) \\geq N \\rho_i \\,\\rho_j + 2 \\epsilon N) \\vspace*{4pt} \\\\ \\hspace*{20pt} \\leq \\\n \\exp (- c_0 N) + \\exp (- c_7 N) \\end{array}\n\\end{equation}\n for all~$N$ large.\n Similarly, there exists~$c_8 > 0$ such that\n\\begin{equation}\n\\label{eq:edge-4}\n P \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j \\leq - 2 \\epsilon N) \\ \\leq \\ \\exp (- c_0 N) + \\exp (- c_8 N)\n\\end{equation}\n for all~$N$ large.\n The lemma follows from~\\eqref{eq:edge-3}--\\eqref{eq:edge-4}.\n\\end{proof} \\\\ \\\\\n Note that the large deviation estimates for the initial number of piles of particles easily follows from the previous lemma.\n Finally, from the large deviation estimates for the number of edges of each type, we deduce the analog for a general class of\n functions~$W$ that will be used in the next section to prove the first sufficient condition for fixation.\n\\begin{lemma} --\n\\label{lem:weight}\n Let~$w : V \\times V \\to 
\\mathbb{R}$ be any function such that\n $$ w (i, i) = 0 \\quad \\hbox{for all} \\quad i \\in V $$\n and let~$W : \\mathbb{Z} + 1/2 \\to \\mathbb{R}$ be the function defined as\n $$ W_e := w (i, j) \\quad \\hbox{whenever} \\quad \\hbox{edge~$e$ is of type~$i \\to j$}. $$\n Then, for all~$\\epsilon > 0$, there exists~$c_9 > 0$ such that\n $$ \\begin{array}{l} P \\,(\\sum_{e \\in (0, N)} \\,(W_e - E W_e) \\notin (- \\epsilon N, \\epsilon N)) \\ \\leq \\ \\exp (- c_9 N) \\quad \\hbox{for all~$N$ large}. \\end{array} $$\n\\end{lemma}\n\\begin{proof}\n First, we observe that\n $$ \\begin{array}{l}\n \\sum_{e \\in (0, N)} \\,(W_e - E W_e) \\ = \\\n \\sum_{e \\in (0, N)} W_e - N E W_e \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n \\sum_{i \\neq j} \\,w (i, j) \\,e_N (i \\to j) - N \\,\\sum_{i \\neq j} \\,w (i, j) \\,P \\,(e \\ \\hbox{is of type} \\ i \\to j) \\vspace*{4pt} \\\\ \\hspace*{20pt} = \\\n \\sum_{i \\neq j} \\,w (i, j) \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j). \\end{array} $$\n Letting~$m := \\max_{i \\neq j} |w (i, j)| < \\infty$ and applying Lemma~\\ref{lem:edge}, we conclude that\n $$ \\begin{array}{l}\n P \\,(\\sum_{e \\in (0, N)} \\,(W_e - E W_e) \\notin (- \\epsilon N, \\epsilon N)) \\vspace*{4pt} \\\\ \\hspace*{25pt} = \\\n P \\,(\\sum_{i \\neq j} \\,w (i, j) \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j) \\notin (- \\epsilon N, \\epsilon N)) \\vspace*{4pt} \\\\ \\hspace*{25pt} \\leq \\\n P \\,(w (i, j) \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j) \\notin (- \\epsilon N/F^2, \\epsilon N/F^2) \\ \\hbox{for some} \\ i \\neq j) \\vspace*{4pt} \\\\ \\hspace*{25pt} \\leq \\\n P \\,(e_N (i \\to j) - N \\rho_i \\,\\rho_j \\notin (- \\epsilon N/mF^2, \\epsilon N/mF^2) \\ \\hbox{for some} \\ i \\neq j) \\vspace*{4pt} \\\\ \\hspace*{25pt} \\leq \\\n F^2 \\,\\exp (- c_{10} N) \\end{array} $$\n for a suitable constant~$c_{10} > 0$ and all~$N$ large.\n\\end{proof}\n\n\n\\section{Proof of Theorem~\\ref{th:fixation} (general opinion graphs)}\n\\label{sec:fixation}\n\n\\indent 
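Before entering the proof, we note that the concentration property at the heart of the previous section, namely that the number of changeovers among $N$ independent coin flips concentrates around $2pqN$ with $q = 1 - p$ (Lemma~\ref{lem:changeover}), is easy to check numerically. The following sketch is an illustration only, not part of the proof; the function names are ours.

```python
import random

def count_changeovers(p, n, rng):
    # Flip n independent coins with success probability p and count the
    # number of adjacent positions whose outcomes differ ("changeovers").
    flips = [rng.random() < p for _ in range(n)]
    return sum(flips[i] != flips[i + 1] for i in range(n - 1))

def empirical_changeover_rate(p, n=200_000, seed=0):
    # Estimate Z_N / N, which the lemma predicts to concentrate near 2pq.
    return count_changeovers(p, n, random.Random(seed)) / n
```

For instance, with $p = 0.3$ the predicted density of changeovers is $2pq = 0.42$, and the empirical rate falls in a small neighborhood of that value; the exponential bounds of the previous section quantify how fast the neighborhood shrinks as $N$ grows.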
The key ingredients to prove Theorem~\\ref{th:fixation} are Lemma~\\ref{lem:fixation-condition} and inclusions~\\eqref{eq:inclusion-1}.\n The large deviation estimates of the previous section are also important to make the sufficient condition for fixation more explicit and\n applicable to particular opinion graphs.\n First, we find a lower bound~$W_e$, which we shall call the {\\bf weight}, for the contribution of any given edge~$e$.\n This lower bound is deterministic given the initial number of particles at the edge and is obtained assuming the worst-case scenario\n where all the active piles annihilate with frozen piles rather than other active piles.\n More precisely, we have the following lemma.\n\\begin{lemma} --\n\\label{lem:deterministic}\n For all~$k > 0$,\n\\begin{equation}\n\\label{eq:weight}\n \\cont (e \\,| \\,T_e < \\infty) \\ \\geq \\ W_e \\ := \\ k - 2 \\quad \\hbox{when} \\quad (k - 1) \\,\\tau < \\xi_0 (e) \\leq k \\tau. \n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n The jump of an active pile of size~$s_- \\leq \\tau$ onto a frozen pile of size~$s_+ > \\tau$ decreases the size of this frozen\n pile by at most~$s_-$ particles.\n Since in addition active piles have at most~$\\tau$ particles, whenever the initial number of frozen particles at edge~$e$ satisfies\n $$ (k - 1) \\,\\tau < \\xi_0 (e) \\leq k \\tau \\quad \\hbox{for some} \\quad k \\geq 2, $$\n at least~$k - 1$ active piles must have jumped onto~$e$ by time~$T_e < \\infty$.\n Recalling~\\eqref{eq:contribution-frozen} gives the result when edge~$e$ carries a frozen pile, while the result is trivial when the\n edge carries an active pile since, in this case, both its contribution and its weight are equal to~$-1$.\n\\end{proof} \\\\ \\\\\n In view of Lemma~\\ref{lem:deterministic}, it is convenient to classify piles depending on the number of complete blocks of~$\\tau$ particles\n they contain: we say that the pile at~$e$ is of {\\bf order}~$k > 0$ when\n $$ (k - 1) \\,\\tau < \\xi_t (e) \\leq k
\\tau \\quad \\hbox{or equivalently} \\quad \\ceil{\\xi_t (e) / \\tau} = k $$\n so that active piles are exactly the piles of order one and the weight of a pile is simply its order minus two.\n Now, we note that Lemma~\\ref{lem:deterministic} and~\\eqref{eq:inclusion-1} imply that\n\\begin{equation}\n\\label{eq:inclusion-2}\n \\begin{array}{l}\n H_N \\ \\subset \\ \\{\\sum_{e \\in (l, r)} W_e \\leq 0 \\ \\hbox{for some~$l < - N$ and some~$r \\geq 0$} \\}. \\end{array}\n\\end{equation}\n Motivated by Lemma~\\ref{lem:fixation-condition}, the main objective in the study of fixation is to find an upper bound for the probability of the\n event on the right-hand side of~\\eqref{eq:inclusion-2}.\n This is the key to proving the following general fixation result, from which both parts of Theorem~\\ref{th:fixation} can be easily deduced.\n\\begin{lemma} --\n\\label{lem:expected-weight}\n Assume~\\eqref{eq:product}.\n Then, the system on~$\\mathbb{Z}$ fixates whenever\n $$ \\begin{array}{l} \\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\mathbf{1} \\{d (i, j) = s \\}) \\ > \\ 0. \\end{array} $$\n\\end{lemma}\n\\begin{proof}\n To begin with, we observe that\n $$ \\begin{array}{rcl}\n P \\,(\\xi_0 (e) = s) & = & \\sum_{i, j \\in V} \\,P \\,(\\eta_0 (x) = i \\ \\hbox{and} \\ \\eta_0 (x + 1) = j) \\ \\mathbf{1} \\{d (i, j) = s \\} \\vspace*{4pt} \\\\\n & = & \\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\ \\mathbf{1} \\{d (i, j) = s \\}.
\\end{array} $$\n Recalling~\\eqref{eq:weight}, it follows that\n $$ \\begin{array}{rcl}\n E W_e & = & \\sum_{k > 0} \\,(k - 2) \\,P \\,((k - 1) \\,\\tau < \\xi_0 (e) \\leq k \\tau) \\vspace*{4pt} \\\\\n & = & \\sum_{k > 0} \\,((k - 2) \\,\\sum_{(k - 1) \\,\\tau < s \\leq k \\tau} \\,P \\,(\\xi_0 (e) = s)) \\vspace*{4pt} \\\\\n & = & \\sum_{k > 0} \\,((k - 2) \\,\\sum_{(k - 1) \\,\\tau < s \\leq k \\tau} \\,\\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\ \\mathbf{1} \\{d (i, j) = s \\}) \\vspace*{4pt} \\\\\n & = & \\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\mathbf{1} \\{d (i, j) = s \\}) \\end{array} $$\n which is strictly positive under the assumption of the lemma.\n In particular, applying the large deviation estimate in Lemma~\\ref{lem:weight} with~$\\epsilon := E W_e > 0$, we deduce that\n $$ \\begin{array}{l}\n P \\,(\\sum_{e \\in (0, N)} W_e \\leq 0) \\ = \\\n P \\,(\\sum_{e \\in (0, N)} \\,(W_e - E W_e) \\leq - \\epsilon N) \\vspace*{4pt} \\\\ \\hspace*{40pt} \\leq \\\n P \\,(\\sum_{e \\in (0, N)} \\,(W_e - E W_e) \\notin (- \\epsilon N, \\epsilon N)) \\ \\leq \\ \\exp (- c_9 N) \\end{array} $$\n for all~$N$ large, which, in turn, implies with~\\eqref{eq:inclusion-2} that\n $$ \\begin{array}{rcl}\n P \\,(H_N) & \\leq &\n P \\,(\\sum_{e \\in (l, r)} W_e \\leq 0 \\ \\hbox{for some~$l < - N$ and~$r \\geq 0$}) \\vspace*{4pt} \\\\ & \\leq &\n \\sum_{l < - N} \\,\\sum_{r \\geq 0} \\,\\exp (- c_9 \\,(r - l)) \\ \\to \\ 0 \\end{array} $$\n as~$N \\to \\infty$.\n This together with Lemma~\\ref{lem:fixation-condition} implies fixation.\n\\end{proof} \\\\ \\\\\n Both parts of Theorem~\\ref{th:fixation} directly follow from the previous lemma. \\\\ \\\\\n\\begin{demo}{Theorem~\\ref{th:fixation}a} --\n Assume that~\\eqref{eq:uniform} holds and that\n $$ \\begin{array}{l} S (\\Gamma, \\tau) \\ = \\ \\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,N (\\Gamma, s)) \\ > \\ 0. 
\\end{array} $$\n Then, the expected weight becomes\n $$ \\begin{array}{rcl}\n E W_e & = & \\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\mathbf{1} \\{d (i, j) = s \\}) \\vspace*{4pt} \\\\\n & = & (1/F)^2 \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\sum_{i, j \\in V} \\,\\mathbf{1} \\{d (i, j) = s \\}) \\vspace*{4pt} \\\\\n & = & (1/F)^2 \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\card \\{(i, j) \\in V \\times V : d (i, j) = s \\}) \\vspace*{4pt} \\\\\n & = & (1/F)^2 \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,N (\\Gamma, s)) \\vspace*{4pt} \\\\\n & = & (1/F)^2 \\,S (\\Gamma, \\tau) \\ > \\ 0 \\end{array} $$\n which, according to Lemma~\\ref{lem:expected-weight}, implies fixation.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Theorem~\\ref{th:fixation}b} --\n Assume that~$\\mathbf{d} > 2 \\tau$. Then,\n $$ d (i_-, i_+) = \\mathbf{d} > 2 \\tau \\quad \\hbox{for some pair} \\quad (i_-, i_+) \\in V \\times V. $$\n Now, let~$X, Y \\geq 0$ such that~$2X + (F - 2)Y = 1$ and assume that\n $$ \\rho_{i_-} = \\rho_{i_+} = X \\quad \\hbox{and} \\quad \\rho_i = Y \\quad \\hbox{for all} \\quad i \\notin B := \\{i_-, i_+ \\}. 
$$\n To simplify the notation, we also introduce\n $$ \\begin{array}{l} Q (i, j) \\ := \\ \\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\mathbf{1} \\{d (i, j) = s \\}) \\end{array} $$\n for all~$(i, j) \\in V \\times V$.\n Then, the expected weight becomes\n $$ \\begin{array}{rcl}\n P (X, Y) & = & \\sum_{i, j \\in V} \\,\\rho_i \\,\\rho_j \\ Q (i, j) \\vspace*{4pt} \\\\\n & = & \\sum_{i, j \\in B} \\,\\rho_i \\,\\rho_j \\ Q (i, j) + \\sum_{i \\notin B} \\,2 \\,\\rho_i \\,\\rho_{i_-} \\,Q (i, i_-) \\vspace*{4pt} \\\\ && \\hspace*{25pt} + \\\n \\sum_{i \\notin B} \\,2 \\,\\rho_i \\,\\rho_{i_+} \\,Q (i, i_+) + \\sum_{i, j \\notin B} \\,\\rho_i \\,\\rho_j \\ Q (i, j) \\vspace*{4pt} \\\\\n & = & 2 \\,Q (i_-, i_+) \\,X^2 + 2 \\,(\\sum_{i \\notin B} \\,Q (i, i_-) + Q (i, i_+)) \\,XY + \\sum_{i, j \\notin B} \\,Q (i, j) \\,Y^2. \\end{array} $$\n This shows that~$P$ is continuous in both~$X$ and~$Y$ and that\n $$ \\begin{array}{rcl}\n P (1/2, 0) & = & (1/2) \\,Q (i_-, i_+) \\vspace*{4pt} \\\\\n & = & (1/2) \\,\\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,\\mathbf{1} \\{d (i_-, i_+) = s \\}) \\vspace*{4pt} \\\\\n & \\geq & (1/2) \\,(3 - 2) \\,\\sum_{s > 2 \\tau} \\,\\mathbf{1} \\{d (i_-, i_+) = s \\} \\ = \\ 1/2 \\ > \\ 0. \\end{array} $$\n Therefore, according to Lemma~\\ref{lem:expected-weight}, there is fixation of the one-dimensional process starting from any product\n measure whose densities are in some neighborhood of\n $$ \\rho_{i_-} = \\rho_{i_+} = 1/2 \\quad \\hbox{and} \\quad \\rho_i = 0 \\quad \\hbox{for all} \\quad i \\notin \\{i_-, i_+ \\}. 
$$\n This proves the second part of Theorem~\\ref{th:fixation}.\n\\end{demo}\n\n\n\\section{Proof of Theorem~\\ref{th:dist-reg} (distance-regular graphs)}\n\\label{sec:dist-reg}\n\n\\indent To explain the intuition behind the proof, recall that, when an active pile of size~$s_-$ jumps to the right onto a frozen\n pile of size~$s_+$ at edge~$e$, the size of the latter pile becomes\n $$ \\xi_t (e) \\ = \\ d (\\eta_t (e - 1/2), \\eta_t (e + 1/2)) \\ = \\ d (\\eta_{t-} (e - 3/2), \\eta_{t-} (e + 1/2)) $$\n and the triangle inequality implies that\n\\begin{equation}\n\\label{eq:triangle}\n s_+ - s_- \\ = \\ \\xi_{t-} (e) - \\xi_{t-} (e - 1) \\ \\leq \\ \\xi_t (e) \\ \\leq \\ \\xi_{t-} (e) + \\xi_{t-} (e - 1) \\ = \\ s_+ + s_-.\n\\end{equation}\n The exact distribution of the new size cannot be deduced in general from the size of the intersecting piles, indicating that\n the system of piles is not Markov.\n The key to the proof is that, at least when the underlying opinion graph is distance-regular, the system of piles becomes Markov.\n The first step is to show that, for all opinion graphs, the opinions on the left and on the right of a pile of size~$s$ are\n conditioned to be at distance~$s$ of each other but are otherwise independent, which follows from the fact that both opinions\n originate from two different ancestors at time zero, and the fact that the initial distribution is a product measure.\n If in addition the opinion graph is distance-regular then the number of possible opinions on the left and on the right of the pile,\n which is also the number of pairs of opinions at distance~$s$ of each other, does not depend on the actual opinion on the left of\n the pile.\n This implies that, at least in theory, the new size distribution of a pile right after a collision can be computed explicitly.\n This is then used to prove that a jump of an active pile onto a pile of order~$n > 1$ reduces its order with probability at most\n $$ \\begin{array}{l} p_n \\ = \\ \\max 
\\,\\{\\sum_{s : \\ceil{s / \\tau} = n - 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\} \\end{array} $$\n while it increases its order with probability at least\n $$ \\begin{array}{l} q_n \\ = \\ \\,\\min \\,\\{\\sum_{s : \\ceil{s / \\tau} = n + 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\}. \\end{array} $$\n In particular, the number of active piles that need to be sacrificed to turn a frozen pile into an active pile is stochastically\n larger than the hitting time to state~1 of a certain discrete-time birth and death process.\n To turn this into a proof, we let~$x = e - 1/2$ and\n $$ x - 1 \\to_t x \\ := \\ \\hbox{the event that there is an arrow~$x - 1 \\to x$ at time~$t$}. $$\n Then, we have the following lemma.\n\\begin{lemma} --\n\\label{lem:collision}\n Assume~\\eqref{eq:uniform} and~\\eqref{eq:dist-reg-1}.\n For all~$s \\geq 0$ and $s_-, s_+ > 0$ with~$s_- \\leq \\tau$,\n $$ \\begin{array}{l} P \\,(\\xi_t (e) = s \\,| \\,(\\xi_{t-} (e - 1), \\xi_{t-} (e)) = (s_-, s_+) \\ \\hbox{and} \\ x - 1 \\to_t x) \\ = \\ f (s_-, s_+, s) / h (s_+). 
\\end{array} $$\n\\end{lemma}\n\\begin{proof}\n The first step is similar to the proof of~\\cite[Lemma~3]{lanchier_scarlatos_2013}.\n Due to one-dimensional nearest neighbor interactions, active paths cannot cross each other.\n In particular, the opinion dynamics preserve the ordering of the ancestral lineages therefore\n\\begin{equation}\n\\label{eq:collision-1}\n a (x - 1, t-) \\ \\leq \\ a (x, t-) \\ \\leq \\ a (x + 1, t-)\n\\end{equation}\n where~$a (z, t-)$ refers to the ancestor of~$(z, t-)$, i.e., the unique source at time zero of an active path reaching space-time point~$(z, t-)$.\n Since in addition~$s_-, s_+ > 0$, given the conditioning in the statement of the lemma, the individuals at~$x$ and~$x \\pm 1$ must disagree at\n time~$t-$ and so must have different ancestors.\n This together with~\\eqref{eq:collision-1} implies that\n\\begin{equation}\n\\label{eq:collision-2}\n a (x - 1, t-) \\ < \\ a (x, t-) \\ < \\ a (x + 1, t-).\n\\end{equation}\n Now, we fix~$i_-, j \\in V$ such that~$d (i_-, j) = s_-$ and let\n $$ B_{t-} (i_-, j) \\ := \\ \\{\\eta_{t-} (x - 1) = i_- \\ \\hbox{and} \\ \\eta_{t-} (x) = j \\}. 
$$\n Then, given this event and the conditioning in the statement of the lemma, the probability that the pile of particles at~$e$ becomes of size~$s$\n is equal to\n\\begin{equation}\n\\label{eq:collision-3}\n \\begin{array}{l}\n P \\,(\\xi_t (e) = s \\,| \\,B_{t-} (i_-, j) \\ \\hbox{and} \\ \\xi_{t-} (e) = s_+ \\ \\hbox{and} \\ x - 1 \\to_t x) \\vspace*{4pt} \\\\ \\hspace*{25pt} = \\\n P \\,(d (i_-, \\eta_{t-} (x + 1)) = s \\,| \\,B_{t-} (i_-, j) \\ \\hbox{and} \\vspace*{4pt} \\\\ \\hspace*{100pt}\n d (j, \\eta_{t-} (x + 1)) = s_+ \\ \\hbox{and} \\ x - 1 \\to_t x) \\vspace*{4pt} \\\\ \\hspace*{25pt} = \\\n \\card \\{i_+ : d (i_-, i_+) = s \\ \\hbox{and} \\ d (i_+, j) = s_+ \\} / \\card \\{i_+ : d (i_+, j) = s_+ \\} \\end{array}\n\\end{equation}\n where the last equality follows from~\\eqref{eq:uniform} and~\\eqref{eq:collision-2} which, together, imply that the opinion at~$x + 1$ just before\n the jump is independent of the other opinions on its left and chosen uniformly at random from the set of opinions at distance~$s_+$ of opinion~$j$.\n Assuming in addition that the underlying opinion graph is distance-regular~\\eqref{eq:dist-reg-1}, we also have\n\\begin{equation}\n\\label{eq:collision-4}\n \\begin{array}{l}\n \\card \\{i_+ : d (i_-, i_+) = s \\ \\hbox{and} \\ d (i_+, j) = s_+ \\} \\vspace*{4pt} \\\\ \\hspace*{50pt} = \\\n N (\\Gamma, (i_-, s), (j, s_+)) \\ = \\ f (s_-, s_+, s) \\vspace*{8pt} \\\\\n \\card \\{i_+ : d (i_+, j) = s_+ \\} \\ = \\ N (\\Gamma, (j, s_+)) \\ = \\ h (s_+).
\\end{array}\n\\end{equation}\n In particular, the conditional probability in~\\eqref{eq:collision-3} does not depend on the particular choice of the pair of opinions~$i_-$ and~$j$, from\n which it follows that\n\\begin{equation}\n\\label{eq:collision-5}\n \\begin{array}{l}\n P \\,(\\xi_t (e) = s \\,| \\,\\xi_{t-} (e - 1) = s_- \\ \\hbox{and} \\ \\xi_{t-} (e) = s_+ \\ \\hbox{and} \\ x - 1 \\to_t x) \\vspace*{4pt} \\\\ \\hspace*{40pt} = \\\n P \\,(\\xi_t (e) = s \\,| \\,B_{t-} (i_-, j) \\ \\hbox{and} \\ \\xi_{t-} (e) = s_+ \\ \\hbox{and} \\ x - 1 \\to_t x). \\end{array}\n\\end{equation}\n The lemma is then a direct consequence of~\\eqref{eq:collision-3}--\\eqref{eq:collision-5}.\n\\end{proof} \\\\ \\\\\n As previously mentioned, it follows from Lemma~\\ref{lem:collision} that, provided the opinion model starts from a product measure in which\n the density of each opinion is constant across space and the opinion graph is distance-regular, the system of piles itself is a Markov process.\n Another important consequence is the following lemma, which gives bounds for the probabilities that the jump of an active pile onto a frozen\n pile results in a reduction or an increase of its order.\n\\begin{lemma} --\n\\label{lem:jump}\n Let~$x = e - 1/2$. Assume~\\eqref{eq:uniform} and~\\eqref{eq:dist-reg-1}.
Then,\n $$ \\begin{array}{l}\n P \\,(\\ceil{\\xi_t (e) / \\tau} < \\ceil{\\xi_{t-} (e) / \\tau} \\,| \\,(\\xi_{t-} (e - 1), \\xi_{t-} (e)) = (s_-, s_+) \\ \\hbox{and} \\ x - 1 \\to_t x) \\ \\leq \\ p_n \\vspace*{4pt} \\\\\n P \\,(\\ceil{\\xi_t (e) / \\tau} > \\ceil{\\xi_{t-} (e) / \\tau} \\,| \\,(\\xi_{t-} (e - 1), \\xi_{t-} (e)) = (s_-, s_+) \\ \\hbox{and} \\ x - 1 \\to_t x) \\ \\geq \\ q_n \\end{array} $$\n whenever~$0 < \\ceil{s_- / \\tau} = 1$ and~$\\ceil{s_+ / \\tau} = n > 1$.\n\\end{lemma}\n\\begin{proof}\n Let~$p (s_-, s_+, s)$ be the conditional probability\n $$ P \\,(\\xi_t (e) = s \\,| \\,(\\xi_{t-} (e - 1), \\xi_{t-} (e)) = (s_-, s_+) \\ \\hbox{and} \\ x - 1 \\to_t x) $$\n in the statement of Lemma~\\ref{lem:collision}.\n Then, the probability that the jump of an active pile onto the pile of order~$n$ at edge~$e$ results in a reduction of its order is smaller than\n\\begin{equation}\n\\label{eq:jump-1}\n \\begin{array}{l} \\max \\,\\{\\sum_{s : \\ceil{s / \\tau} = n - 1} \\,p (s_-, s_+, s) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\} \\end{array}\n\\end{equation}\n while the probability that the jump of an active pile onto the pile of order~$n$ at edge~$e$ results in an increase of its order is larger than\n\\begin{equation}\n\\label{eq:jump-2}\n \\begin{array}{l} \\min \\,\\{\\sum_{s : \\ceil{s / \\tau} = n + 1} \\,p (s_-, s_+, s) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = n \\}. 
\\end{array}\n\\end{equation}\n But according to Lemma~\\ref{lem:collision}, we have\n $$ p (s_-, s_+, s) \\ = \\ f (s_-, s_+, s) / h (s_+) $$\n therefore~\\eqref{eq:jump-1}--\\eqref{eq:jump-2} are equal to~$p_n$ and~$q_n$, respectively.\n\\end{proof} \\\\ \\\\\n We refer to Figure~\\ref{fig:coupling} for a schematic illustration of the previous lemma.\n In order to prove the theorem, we now use Lemmas~\\ref{lem:collision}--\\ref{lem:jump} to find a stochastic lower bound for the contribution of each edge.\n To express this lower bound, we let~$X_t$ be the discrete-time birth and death Markov chain with transition probabilities\n $$ p (n, n - 1) \\ = \\ p_n \\qquad p (n, n) \\ = \\ 1 - p_n - q_n \\qquad p (n, n + 1) \\ = \\ q_n $$\n for all~$1 < n < M := \\ceil{\\mathbf{d} / \\tau}$ and boundary conditions\n $$ p (1, 1) \\ = \\ 1 \\quad \\hbox{and} \\quad p (M, M - 1) \\ = \\ 1 - p (M, M) \\ = \\ p_M. $$\n This process will allow us to retrace the history of a frozen pile until time~$T_e$ when it becomes an active pile.\n To begin with, we use a first-step analysis to compute explicitly the expected value of the first hitting time to state~1 of the birth and death process.\n\\begin{lemma} --\n\\label{lem:hitting}\n Let~$T_n := \\inf \\,\\{t : X_t = n \\}$. Then,\n $$ E \\,(T_1 \\,| \\,X_0 = k) \\ = \\ 1 + \\mathbf W (k) \\quad \\hbox{for all} \\quad 0 < k \\leq M = \\ceil{\\mathbf{d} / \\tau}. 
$$\n\\end{lemma}\n\\begin{proof}\n Let~$\\sigma_n := E \\,(T_{n - 1} \\,| \\,X_0 = n)$.\n Then, for all~$1 < n < M$,\n $$ \\begin{array}{rcl}\n \\sigma_n & = & p (n, n - 1) + (1 + \\sigma_n) \\,p (n, n) + (1 + \\sigma_n + \\sigma_{n + 1}) \\,p (n, n + 1) \\vspace*{3pt} \\\\\n & = & p_n + (1 + \\sigma_n)(1 - p_n - q_n) + (1 + \\sigma_n + \\sigma_{n + 1}) \\,q_n \\vspace*{3pt} \\\\\n & = & p_n + (1 + \\sigma_n)(1 - p_n) + q_n \\,\\sigma_{n + 1} \\vspace*{3pt} \\\\\n & = & 1 + (1 - p_n) \\,\\sigma_n + q_n \\,\\sigma_{n + 1} \\end{array} $$\n from which it follows, using a simple induction, that\n\\begin{equation}\n\\label{eq:hitting-1}\n \\begin{array}{rcl}\n \\sigma_n & = & 1 / p_n + \\sigma_{n + 1} \\,q_n / p_n \\vspace*{4pt} \\\\\n & = & 1 / p_n + q_n / (p_n \\,p_{n + 1}) + \\sigma_{n + 2} \\,(q_n \\,q_{n + 1}) / (p_n \\,p_{n + 1}) \\vspace*{4pt} \\\\\n & = & \\sum_{n \\leq m < M} \\,(q_n \\cdots q_{m - 1}) / (p_n \\cdots p_m) + \\sigma_M \\,(q_n \\cdots q_{M - 1}) / (p_n \\cdots p_{M - 1}). \\end{array} \n\\end{equation}\n Since~$p (M, M - 1) = 1 - p (M, M) = p_M$, we also have\n\\begin{equation}\n\\label{eq:hitting-2}\n \\sigma_M \\ = \\ E \\,(T_{M - 1} \\,| \\,X_0 = M) \\ = \\ E \\,(\\geometric (p_M)) \\ = \\ 1 / p_M.\n\\end{equation}\n Combining~\\eqref{eq:hitting-1}--\\eqref{eq:hitting-2}, we deduce that\n $$ \\begin{array}{l} \\sigma_n \\ = \\ \\sum_{n \\leq m \\leq M} \\,(q_n \\,q_{n + 1} \\cdots q_{m - 1}) / (p_n \\,p_{n + 1} \\cdots p_m), \\end{array} $$\n which finally gives\n $$ \\begin{array}{rcl}\n E \\,(T_1 \\,| \\,X_0 = k) & = & \\sum_{1 < n \\leq k} \\,E \\,(T_{n - 1} \\,| \\,X_0 = n) \\ = \\ \\sum_{1 < n \\leq k} \\,\\sigma_n \\vspace*{4pt} \\\\\n & = & \\sum_{1 < n \\leq k} \\,\\sum_{n \\leq m \\leq M} \\,(q_n \\cdots q_{m - 1}) / (p_n \\cdots p_m) \\ = \\ 1 + \\mathbf W (k). 
\\end{array} $$\n This completes the proof.\n\\end{proof} \\\\ \\\\\n The next lemma gives a lower bound for the contribution~\\eqref{eq:contribution-frozen} of an edge~$e$ that keeps track of the number\n of active piles that jump onto~$e$ before the pile at~$e$ becomes active.\n The key is to show how the number of jumps relates to the birth and death process.\n Before stating our next result, we recall that~$T_e$ is the first time the pile of particles at edge~$e$ becomes active.\n\\begin{figure}[t]\n\\centering\n\\scalebox{0.45}{\\input{coupling.pstex_t}}\n\\caption{\\upshape{Schematic illustration of the coupling between the opinion model and the system of piles along with their evolution rules.\n In our example, the threshold~$\\tau = 2$, which makes piles with three or more particles frozen and piles with one or two particles active.}}\n\\label{fig:coupling}\n\\end{figure}\n\\begin{lemma} --\n\\label{lem:coupling}\n Assume~\\eqref{eq:uniform} and~\\eqref{eq:dist-reg-1}.\n Then, for~$1 < k \\leq \\ceil{\\mathbf{d} / \\tau}$,\n $$ \\begin{array}{l} E \\,(\\cont (e \\,| \\,T_e < \\infty)) \\ \\geq \\ \\mathbf W (k) \\quad \\hbox{when} \\quad \\ceil{\\xi_0 (e) / \\tau} = k. \\end{array} $$\n\\end{lemma}\n\\begin{proof}\n Since active piles have at most~$\\tau$ particles, the triangle inequality~\\eqref{eq:triangle} implies that the jump of an active pile onto\n a frozen pile can only increase or decrease its size by at most~$\\tau$ particles, and therefore can only increase or decrease its order by at most one.\n In particular,\n $$ \\begin{array}{l}\n P \\,(|\\ceil{\\xi_t (e) / \\tau} - \\ceil{\\xi_{t-} (e) / \\tau}| > 1 \\,| \\,x - 1 \\to_t x) \\ = \\ 0.
\\end{array} $$\n This, together with the bounds in Lemma~\\ref{lem:jump} and the fact that the outcomes of consecutive jumps of active piles onto a frozen pile are independent,\n as explained in the proof of Lemma~\\ref{lem:collision}, implies that the order of a frozen pile before it becomes active stochastically dominates the\n state of the birth and death process~$X_t$ before it reaches state~1.\n In particular,\n $$ \\begin{array}{l} E \\,(\\cont (e \\,| \\,T_e < \\infty)) \\ \\geq \\ - 1 + E \\,(T_1 \\,| \\,X_0 = k) \\quad \\hbox{when} \\quad \\ceil{\\xi_0 (e) / \\tau} = k. \\end{array} $$\n Using Lemma~\\ref{lem:hitting}, we conclude that\n $$ \\begin{array}{l} E \\,(\\cont (e \\,| \\,T_e < \\infty)) \\ \\geq \\ - 1 + (1 + \\mathbf W (k)) \\ = \\ \\mathbf W (k) \\end{array} $$\n whenever~$\\ceil{\\xi_0 (e) / \\tau} = k$.\n\\end{proof} \\\\ \\\\\n We now have all the necessary tools to prove the theorem.\n The key idea is the same as in the proof of Lemma~\\ref{lem:expected-weight} but relies on the previous lemma in place of Lemma~\\ref{lem:deterministic}. \\\\ \\\\\n\\begin{demo}{Theorem~\\ref{th:dist-reg}} --\n Assume~\\eqref{eq:uniform} and~\\eqref{eq:dist-reg-1} and\n $$ \\begin{array}{l} S_{\\reg} (\\Gamma, \\tau) \\ = \\ \\sum_{k > 0} \\,(\\mathbf W (k) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,h (s)) \\ > \\ 0. \\end{array} $$\n Since the opinion graph is distance-regular,\n $$ \\begin{array}{rcl}\n P \\,(\\xi_0 (e) = s) & = & \\sum_{i \\in V} \\,P \\,(\\xi_0 (e) = s \\,| \\,\\eta_0 (e - 1/2) = i) \\,P \\,(\\eta_0 (e - 1/2) = i) \\vspace*{4pt} \\\\\n & = & \\sum_{i \\in V} \\,F^{-1} \\,\\card \\{j \\in V : d (i, j) = s \\} \\ P \\,(\\eta_0 (e - 1/2) = i) \\vspace*{4pt} \\\\\n & = & \\sum_{i \\in V} \\,F^{-1} \\,h (s) \\,P \\,(\\eta_0 (e - 1/2) = i) \\ = \\ F^{-1} \\,h (s).
\\end{array} $$\n Using also Lemma~\\ref{lem:coupling}, we get\n $$ \\begin{array}{rcl}\n E \\,(\\cont (e \\,| \\,T_e < \\infty)) & \\geq & \\sum_{k > 0} \\,\\mathbf W (k) \\,P \\,(\\ceil{\\xi_0 (e) / \\tau} = k) \\vspace*{4pt} \\\\\n & = & \\sum_{k > 0} \\,\\mathbf W (k) \\,P \\,((k - 1) \\tau < \\xi_0 (e) \\leq k \\tau) \\vspace*{4pt} \\\\\n & = & \\sum_{k > 0} \\,\\mathbf W (k) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,F^{-1} \\,h (s) \\vspace*{4pt} \\\\\n & = & F^{-1} \\,S_{\\reg} (\\Gamma, \\tau) \\ > \\ 0. \\end{array} $$\n Now, let~$\\mathbf W_e$ be the collection of random variables\n $$ \\begin{array}{l} \\mathbf W_e \\ := \\ \\sum_{k > 0} \\,\\mathbf W (k) \\,\\mathbf{1} \\{\\ceil{\\xi_0 (e) / \\tau} = k \\} \\quad \\hbox{for all} \\quad e \\in \\mathbb{Z} + 1/2. \\end{array} $$\n Using Lemma~\\ref{lem:weight} and the fact that the number of collisions to turn a frozen pile into an active pile is independent for different\n frozen piles, we deduce that, for~$\\epsilon := E \\mathbf W_e > 0$, there exists~$c_{11} > 0$ such that\n $$ \\begin{array}{l}\n P \\,(\\sum_{e \\in (0, N)} \\cont (e \\,| \\,T_e < \\infty) \\leq 0) \\ \\leq \\\n P \\,(\\sum_{e \\in (0, N)} \\mathbf W_e \\leq 0) \\vspace*{4pt} \\\\ \\hspace*{40pt} \\leq \\\n P \\,(\\sum_{e \\in (0, N)} \\,(\\mathbf W_e - E \\mathbf W_e) \\notin (- \\epsilon N, \\epsilon N)) \\ \\leq \\ \\exp (- c_{11} N) \\end{array} $$\n for all~$N$ large.\n This, together with~\\eqref{eq:inclusion-1}, implies that\n $$ \\begin{array}{rcl}\n P \\,(H_N) & \\leq &\n P \\,(\\sum_{e \\in (l, r)} \\cont (e \\,| \\,T_e < \\infty) \\leq 0 \\ \\hbox{for some~$l < - N$ and~$r \\geq 0$}) \\vspace*{4pt} \\\\ & \\leq &\n \\sum_{l < - N} \\,\\sum_{r \\geq 0} \\,\\exp (- c_{11} \\,(r - l)) \\ \\to \\ 0 \\end{array} $$\n as~$N \\to \\infty$.\n In particular, it follows from Lemma~\\ref{lem:fixation-condition} that the process fixates.\n\\end{demo}\n\n\n\\section{Proof of Corollaries~\\ref{cor:path}--\\ref{cor:hypercube}}\n\\label{sec:graphs}\n\n\\indent This section is devoted to the proof of
Corollaries~\\ref{cor:path}--\\ref{cor:hypercube} that give sufficient conditions\n for fluctuation and fixation of the infinite system for the opinion graphs shown in Figure~\\ref{fig:graphs}.\n To begin with, we prove the fluctuation part of all the corollaries at once. \\\\ \\\\\n\\begin{demo}{Corollaries~\\ref{cor:path}--\\ref{cor:hypercube} (fluctuation)} --\n We start with the tetrahedron.\n In this case, the diameter equals one therefore, whenever the threshold is positive, the system reduces to a four-opinion voter model,\n which is known to fluctuate according to~\\cite{arratia_1983}.\n To deal with paths and stars, we recall that combining Theorem~\\ref{th:fluctuation}a and Lemma~\\ref{lem:partition} gives fluctuation\n when~$\\mathbf{r} \\leq \\tau$.\n Recalling also the expression of the radius from Table~\\ref{tab:summary} implies fluctuation when\n $$ \\begin{array}{rl}\n F \\leq 2 \\tau + 1 & \\hbox{for the path with~$F$ vertices} \\vspace*{3pt} \\\\\n r \\leq \\tau & \\hbox{for the star with~$b$ branches of length~$r$}. \\end{array} $$\n For the other graphs, it suffices to find a partition that satisfies~\\eqref{eq:fluctuation}.\n For the remaining four regular polyhedra and the hypercubes, we observe that there is a unique vertex at distance~$\\mathbf{d}$ of any\n given vertex.\n In particular, fixing an arbitrary vertex~$i_-$ and setting\n $$ V_1 \\ := \\ \\{i_-, i_+ \\} \\quad \\hbox{and} \\quad V_2 \\ := \\ V \\setminus V_1 \\quad \\hbox{where} \\quad d (i_-, i_+) = \\mathbf{d} $$\n defines a partition of the set of opinions such that\n $$ d (i, j) \\ \\leq \\ \\mathbf{d} - 1 \\quad \\hbox{for all} \\quad (i, j) \\in V_1 \\times V_2. 
$$\n Recalling the expression of the diameter from Table~\\ref{tab:summary} and using Theorem~\\ref{th:fluctuation}a give the fluctuation\n parts of Corollaries~\\ref{cor:polyhedron} and~\\ref{cor:hypercube}.\n Using the exact same approach implies fluctuation when the opinion graph is a cycle with an even number of vertices and~$F \\leq 2 \\tau + 2$.\n For cycles with an odd number of vertices, we again use Lemma~\\ref{lem:partition} to deduce fluctuation if\n $$ \\integer{F / 2} = \\mathbf{r} \\leq \\tau \\quad \\hbox{if and only if} \\quad F \\leq 2 \\tau + 1 \\quad \\hbox{if and only if} \\quad F \\leq 2 \\tau + 2, $$\n where the last equivalence is true because~$F$ is odd.\n\\end{demo} \\\\ \\\\\n We now prove the fixation part of the corollaries using Theorems~\\ref{th:fixation} and~\\ref{th:dist-reg}.\n The first two classes of graphs, paths and stars, are not distance-regular; therefore, to study the behavior of the\n model for these opinion graphs, we rely on the first part of Theorem~\\ref{th:fixation}. \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:path} (path)} --\n Assume that~$4 \\tau < \\mathbf{d} = F - 1 \\leq 5 \\tau$. 
Then,\n $$ \\begin{array}{rcl}\n S (\\Gamma, \\tau) & = & \\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,N (\\Gamma, s)) \\vspace*{4pt} \\\\\n & = & \\sum_{0 < k \\leq 4} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,2 \\,(F - s)) + 3 \\,\\sum_{4 \\tau < s \\leq d} \\,2 \\,(F - s) \\vspace*{4pt} \\\\\n & = & \\sum_{0 < k \\leq 4} \\,((k - 2)(2 F \\tau - (k \\tau)(k \\tau + 1) + ((k - 1) \\,\\tau)((k - 1) \\,\\tau + 1)) \\vspace*{4pt} \\\\ && \\hspace*{50pt} + \\\n 3 \\,(2F \\,(F - 4 \\tau - 1) - F \\,(F - 1) + 4 \\tau \\,(4 \\tau + 1)) \\vspace*{4pt} \\\\\n & = & 4 F \\tau + \\tau \\,(\\tau + 1) + 2 \\tau \\,(2 \\tau + 1) + 3 \\tau \\,(3 \\tau + 1) \\vspace*{4pt} \\\\ && \\hspace*{50pt} + \\\n 4 \\tau \\,(4 \\tau + 1) + 6 F \\,(F - 4 \\tau - 1) - 3 F \\,(F - 1) \\vspace*{4pt} \\\\\n & = & 3 F^2 - (20 \\tau + 3) \\,F + 10 \\,(3 \\tau + 1) \\,\\tau. \\end{array} $$\n Since the largest root~$F_+ (\\tau)$ of this polynomial satisfies\n $$ 4 \\tau \\leq F_+ (\\tau) - 1 = (1/6)(20 \\,\\tau + 3 + \\sqrt{40 \\,\\tau^2 + 9}) - 1 \\leq 5 \\tau \\quad \\hbox{for all} \\quad \\tau \\geq 1 $$\n and since for any fixed~$\\tau$ the function~$F \\mapsto S (\\Gamma, \\tau)$ is nondecreasing, we deduce that fixation occurs under the assumptions of the lemma\n according to Theorem~\\ref{th:fixation}.\n\\end{demo} \\\\ \\\\\n The case of the star with~$b$ branches of equal length~$r$ is more difficult mainly because there are two different expressions for the number of pairs of vertices at a\n given distance of each other depending on whether the distance is smaller or larger than the branches' length.\n In the next lemma, we compute the number of pairs of vertices at a given distance of each other, which we then use to find a condition for fixation when\n the opinion graph is a star.\n\\begin{lemma} --\n\\label{lem:star}\n For the star with~$b$ branches of length~$r$,\n $$ \\begin{array}{rclcl}\n N (\\Gamma, s) & = & b \\,(2r + (b - 3)(s - 1)) & \\hbox{for 
all} & s \\in (0, r] \\vspace*{3pt} \\\\\n & = & b \\,(b - 1)(2r - s + 1) & \\hbox{for all} & s \\in (r, 2r]. \\end{array} $$\n\\end{lemma}\n\\begin{proof}\n Let~$n_1 (s)$ and~$n_2 (s)$ be respectively the number of directed paths of length~$s$ embedded in a given branch of the star and the total\n number of directed paths of length~$s$ embedded in a given pair of branches of the star.\n Then, as in the proof of the corollary for paths,\n $$ n_1 (s) = 2 \\,(r + 1 - s) \\quad \\hbox{and} \\quad n_2 (s) = 2 \\,(2r + 1 - s) \\quad \\hbox{for all} \\quad s \\leq r. $$\n Since there are~$b$ branches and $(1/2)(b - 1) \\,b$ pairs of branches, and since self-avoiding paths embedded in the star cannot\n intersect more than two branches, we deduce that\n $$ \\begin{array}{rcl}\n N (\\Gamma, s) & = & b \\,n_1 (s) + ((1/2)(b - 1) \\,b)(n_2 (s) - 2 n_1 (s)) \\vspace*{4pt} \\\\\n & = & 2b \\,(r + 1 - s) + b \\,(b - 1)(s - 1) \\vspace*{4pt} \\\\\n & = & b \\,(2r + 2 \\,(1 - s) + (b - 1)(s - 1)) \\ = \\ b \\,(2r + (b - 3)(s - 1)) \\end{array} $$\n for all~$s \\leq r$.\n To deal with~$s > r$, we let~$o$ be the center of the star and observe that there is no vertex at distance~$s$ of vertices which are\n close to the center whereas there are~$b - 1$ vertices at distance~$s$ from vertices which are far from the center.\n More precisely,\n $$ \\begin{array}{rclcl}\n \\card \\{j \\in V : d (i, j) = s \\} & = & 0 & \\quad \\hbox{when} & d (i, o) < s - r \\vspace*{3pt} \\\\\n \\card \\{j \\in V : d (i, j) = s \\} & = & b - 1 & \\quad \\hbox{when} & d (i, o) \\geq s - r. 
\\end{array} $$\n The number of directed paths of length~$s$ is then given by\n $$ \\begin{array}{rcl}\n N (\\Gamma, s) & = & (b - 1) \\,\\card \\{i \\in V : d (i, o) \\geq s - r \\} \\vspace*{4pt} \\\\\n & = & b \\,(b - 1)(r - (s - r - 1)) \\ = \\ b \\,(b - 1)(2r - s + 1) \\end{array} $$\n for all~$s > r$.\n This completes the proof of the lemma.\n\\end{proof} \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:star} (star)} --\n Assume that~$3 \\tau < \\mathbf{d} = 2r \\leq 4 \\tau$. Then,\n $$ \\begin{array}{rcl}\n S (\\Gamma, \\tau) & = & \\sum_{k > 0} \\,((k - 2) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,N (\\Gamma, s)) \\vspace*{4pt} \\\\\n & = & - \\ \\sum_{0 < s \\leq \\tau} \\,N (\\Gamma, s) + \\sum_{2 \\tau < s \\leq 3 \\tau} \\,N (\\Gamma, s) + 2 \\,\\sum_{3 \\tau < s \\leq 2r} \\,N (\\Gamma, s). \\end{array} $$\n Since~$\\tau < r \\leq 2 \\tau$, it follows from Lemma~\\ref{lem:star} that\n $$ \\begin{array}{rcl}\n S (\\Gamma, \\tau) & = & - \\ \\sum_{0 < s \\leq \\tau} \\,b \\,(2r + (b - 3)(s - 1)) \\vspace*{4pt} \\\\ &&\n + \\ \\sum_{2 \\tau < s \\leq 3 \\tau} \\,b \\,(b - 1)(2r - s + 1) + 2 \\,\\sum_{3 \\tau < s \\leq 2r} \\,b \\,(b - 1)(2r - s + 1) \\vspace*{4pt} \\\\\n & = & - \\ b \\,(2r - b + 3) \\,\\tau - (b/2)(b - 3) \\,\\tau \\,(\\tau + 1) \\vspace*{4pt} \\\\ &&\n + \\ b \\,(b - 1)(2r + 1) \\,\\tau + (b/2)(b - 1)(2 \\tau \\,(2 \\tau + 1) - 3 \\tau \\,(3 \\tau + 1)) \\vspace*{4pt} \\\\ &&\n + \\ 2b \\,(b - 1)(2r + 1)(2r - 3 \\tau) + b \\,(b - 1)(3 \\tau \\,(3 \\tau + 1) - 2 r \\,(2r + 1)). \\end{array} $$\n Expanding and simplifying, we get\n $$ (1/b) \\,S (\\Gamma, \\tau) \\ = \\ 4 \\,(b - 1) \\,r^2 + 2 \\,((4 - 5b) \\,\\tau + b - 1) \\,r + (6b - 5) \\,\\tau^2 + (1 - 2b) \\,\\tau. 
$$\n As for paths, the result is a direct consequence of Theorem~\\ref{th:fixation}.\n\\end{demo} \\\\ \\\\\n The remaining graphs in Figure~\\ref{fig:graphs} are distance-regular, which makes Theorem~\\ref{th:dist-reg} applicable.\n Note that the conditions for fixation in the last three corollaries give minimal values for the confidence threshold that lie between one third and\n one half of the diameter.\n In particular, we apply the theorem in the special case when~$\\ceil{\\mathbf{d} / \\tau} = 3$.\n In this case, we have\n $$ \\mathbf W (1) \\ = \\ - 1 \\qquad \\mathbf W (2) \\ = \\ \\mathbf W (1) + (1 / p_2)(1 + q_2 / p_3) \\qquad \\mathbf W (3) \\ = \\ \\mathbf W (2) + 1 / p_3 $$\n so the left-hand side of~\\eqref{eq:th-dist-reg} becomes\n\\begin{equation}\n\\label{eq:common}\n \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, \\tau) & = & \\sum_{0 < k \\leq 3} \\,(\\mathbf W (k) \\,\\sum_{s : \\ceil{s / \\tau} = k} \\,h (s)) \\vspace*{4pt} \\\\ & = &\n - \\ (h (1) + h (2) + \\cdots + h (\\mathbf{d})) \\vspace*{4pt} \\\\ &&\n + \\ (1/p_2)(1 + q_2 / p_3)(h (\\tau + 1) + h (\\tau + 2) + \\cdots + h (\\mathbf{d})) \\vspace*{4pt} \\\\ &&\n + \\ (1/p_3)(h (2 \\tau + 1) + h (2 \\tau + 2) + \\cdots + h (\\mathbf{d})). \\end{array}\n\\end{equation}\n This expression is used repeatedly to prove the remaining corollaries. \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:polyhedron} (cube)} --\n When~$\\Gamma$ is the cube and~$\\tau = 1$, we have\n $$ p_2 \\ = \\ f (1, 2, 1) / h (2) \\ = \\ 2/3 \\quad \\hbox{and} \\quad q_2 \\ = \\ f (1, 2, 3) / h (2) \\ = \\ 1/3 $$\n which, together with~\\eqref{eq:common} and the fact that~$p_3 \\leq 1$, implies that\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, 1) & \\geq & - \\ (h (1) + h (2) + h (3)) + (1/p_2)(1 + q_2)(h (2) + h (3)) + h (3) \\vspace*{4pt} \\\\\n & = & - \\ (3 + 3 + 1) + (3/2)(1 + 1/3)(3 + 1) + 1 \\ = \\ 2 \\ > \\ 0. 
\\end{array} $$\n This proves fixation according to Theorem~\\ref{th:dist-reg}.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:polyhedron} (icosahedron)} --\n When~$\\Gamma$ is the icosahedron and~$\\tau = 1$,\n $$ p_2 \\ = \\ f (1, 2, 1) / h (2) \\ = \\ 2/5 \\qquad \\hbox{and} \\qquad q_2 \\ = \\ f (1, 2, 3) / h (2) \\ = \\ 1/5. $$\n Using in addition~\\eqref{eq:common} and the fact that~$p_3 \\leq 1$, we obtain\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, 1) & \\geq & - \\ (h (1) + h (2) + h (3)) + (1/p_2)(1 + q_2)(h (2) + h (3)) + h (3) \\vspace*{4pt} \\\\\n & = & - \\ (5 + 5 + 1) + (5/2)(1 + 1/5)(5 + 1) + 1 \\ = \\ 8 \\ > \\ 0 \\end{array} $$\n which, according to Theorem~\\ref{th:dist-reg}, implies fixation.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:polyhedron} (dodecahedron)} --\n Fixation of the opinion model when the threshold equals one directly follows from Theorem~\\ref{th:fixation} since in this case\n $$ \\begin{array}{rcl}\n F^{-1} \\,S (\\Gamma, 1) & = & (1/20)(- h (1) + h (3) + 2 \\,h (4) + 3 \\,h (5)) \\vspace*{3pt} \\\\\n & = & (1/20)(- 3 + 6 + 2 \\times 3 + 3 \\times 1) \\ = \\ 3/5 \\ > \\ 0. \\end{array} $$\n However, when the threshold~$\\tau = 2$,\n $$ \\begin{array}{rcl}\n F^{-1} \\,S (\\Gamma, 2) & = & (1/20)(- h (1) - h (2) + h (5)) \\vspace*{3pt} \\\\\n & = & (1/20)(- 3 - 6 + 1) \\ = \\ - 2/5 \\ < \\ 0 \\end{array} $$\n so we use Theorem~\\ref{th:dist-reg} instead: when~$\\tau = 2$, we have\n $$ \\begin{array}{rcl}\n p_2 & = & \\max \\,\\{\\sum_{s = 1, 2} f (s_-, s_+, s) / h (s_+) : s_- = 1, 2 \\ \\hbox{and} \\ s_+ = 3, 4 \\} \\vspace*{4pt} \\\\\n & = & \\max \\,\\{f (1, 3, 2) / h (3), (f (2, 3, 2) + f (2, 3, 1)) / h (3), f (2, 4, 2) / h (4) \\} \\vspace*{4pt} \\\\\n & = & \\max \\,\\{2/6, (2 + 1) / 6, 1/3 \\} \\ = \\ 1/2. 
\\end{array} $$\n In particular, using~\\eqref{eq:common} and the fact that~$p_3 \\leq 1$ and~$q_2 \\geq 0$, we get\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, 2) & \\geq & - \\ (h (1) + h (2) + h (3) + h (4) + h (5)) \\vspace*{4pt} \\\\ && \\hspace*{25pt} + \\\n (1/p_2)(h (3) + h (4) + h (5)) + h (5) \\vspace*{4pt} \\\\\n & = & - \\ (3 + 6 + 6 + 3 + 1) + 2 \\times (6 + 3 + 1) + 1 \\ = \\ 2 \\ > \\ 0, \\end{array} $$\n which again gives fixation.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:cycle} (cycle)} --\n Regardless of the parity of~$F$,\n\\begin{equation}\n\\label{eq:cycle-1}\n \\begin{array}{rclclcl}\n f (s_-, s_+, s) & = & 0 & \\hbox{when} & s_- \\leq s_+ \\leq \\mathbf{d} & \\hbox{and} & s > s_+ - s_- \\vspace*{2pt} \\\\\n f (s_-, s_+, s) & = & 1 & \\hbox{when} & s_- \\leq s_+ \\leq \\mathbf{d} & \\hbox{and} & s = s_+ - s_- \\end{array}\n\\end{equation}\n while the number of vertices at distance~$s_+$ of a given vertex is\n\\begin{equation}\n\\label{eq:cycle-2}\n h (s_+) = 2 \\ \\ \\hbox{for all} \\ \\ s_+ < F/2 \\quad \\hbox{and} \\quad h (s_+) = 1 \\ \\ \\hbox{when} \\ \\ s_+ = F/2 \\in \\mathbb{N}.\n\\end{equation}\n Assume that~$F = 4 \\tau + 2$.\n Then, $\\mathbf{d} = 2 \\tau + 1$ so it follows from~\\eqref{eq:cycle-1}--\\eqref{eq:cycle-2} that\n $$ \\begin{array}{rcl}\n p_2 & = & \\max \\,\\{\\sum_{s : \\ceil{s / \\tau} = 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = 2 \\} \\vspace*{3pt} \\\\\n & = & \\max \\,\\{f (s_-, s_+, s_+ - s_-) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = 2 \\} \\vspace*{3pt} \\\\\n & = & \\max \\,\\{f (s_-, s_+, s_+ - s_-) / h (s_+) : \\ceil{s_+ / \\tau} = 2 \\} \\ = \\ 1/2. 
\\end{array} $$\n Using in addition that~$p_3 \\leq 1$ and~$q_2 \\geq 0$ together with~\\eqref{eq:common}, we get\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, \\tau) & \\geq & - \\ (h (1) + h (2) + \\cdots + h (2 \\tau + 1)) \\vspace*{4pt} \\\\ &&\n + \\ (1/p_2)(h (\\tau + 1) + h (\\tau + 2) + \\cdots + h (2 \\tau + 1)) + h (2 \\tau + 1) \\vspace*{4pt} \\\\\n & = & - \\ (4 \\tau + 1) + 2 \\times (2 \\tau + 1) + 1 \\ = \\ 2 \\ > \\ 0. \\end{array} $$\n In particular, the corollary follows from Theorem~\\ref{th:dist-reg}.\n\\end{demo} \\\\ \\\\\n\\begin{demo}{Corollary~\\ref{cor:hypercube} (hypercube)} --\n The first part of the corollary has been explained heuristically in~\\cite{adamopoulos_scarlatos_2012}.\n To turn it into a proof, we first observe that opinions on the hypercube can be represented by vectors with coordinates equal to zero or one, while the distance\n between two opinions is the number of coordinates on which the two corresponding vectors disagree.\n In particular, the number of opinions at distance~$s$ from a given opinion, namely~$h (s)$, is equal to the number of subsets of size~$s$ of a set of size~$d$.\n Therefore, we have the symmetry property\n\\begin{equation}\n\\label{eq:hypercube-1}\n h (s) \\ = \\ {d \\choose s} \\ = \\ {d \\choose d - s} \\ = \\ h (d - s) \\quad \\hbox{for} \\quad s = 0, 1, \\ldots, d,\n\\end{equation}\n from which it follows that, for~$d = 3 \\tau + 1$,\n $$ \\begin{array}{rcl}\n 2^{-d} \\,S (\\Gamma, \\tau) & = & - \\ h (1) - \\cdots - h (\\tau) + h (2 \\tau + 1) + \\cdots + h (d - 1) + 2 \\,h (d) \\vspace*{3pt} \\\\\n & = & h (d - 1) - h (1) + h (d - 2) - h (2) + \\cdots + h (d - \\tau) - h (\\tau) + 2 \\,h (d) \\vspace*{3pt} \\\\\n & = & 2 \\,h (d) \\ = \\ 2 \\ > \\ 0. 
\\end{array} $$\n Since in addition the function~$d \\mapsto S (\\Gamma, \\tau)$ is nondecreasing, a direct application of Theorem~\\ref{th:fixation} gives the first part of the corollary.\n The second part is more difficult.\n Note that, to prove this part, it suffices to show that, for any fixed~$\\sigma > 0$, fixation occurs when\n\\begin{equation}\n\\label{eq:hypercube-2}\n d \\ = \\ (2 + 3 \\sigma) \\,\\tau \\quad \\hbox{and} \\quad \\tau \\ \\ \\hbox{is large}.\n\\end{equation}\n The main difficulty is to find a good upper bound for~$p_2$ which relies on properties of the hypergeometric random variable.\n Let~$u$ and~$v$ be two opinions at distance~$s_-$ of each other.\n By symmetry, we may assume without loss of generality that both vectors disagree on their first~$s_-$ coordinates.\n Then, changing each of the first~$s_-$ coordinates in either one vector or the other vector and changing each of the remaining coordinates in either both vectors\n simultaneously or none of the vectors result in the same vector.\n In particular, choosing a vector~$w$ such that\n $$ d (u, w) \\ = \\ s_+ \\quad \\hbox{and} \\quad d (v, w) \\ = \\ s $$\n is equivalent to choosing~$a$ of the first~$s_-$ coordinates and then choosing~$b$ of the remaining~$d - s_-$ coordinates with the following constraint:\n $$ a + b \\ = \\ s_+ \\quad \\hbox{and} \\quad (s_- - a) + b \\ = \\ s. $$\n In particular, letting~$K := \\ceil{(1/2)(s_- + s_+ - \\tau)}$, we have\n $$ \\sum_{s = 1}^{\\tau} \\ f (s_-, s_+, s) \\ = \\ \\sum_{a = K}^{s_-} {s_- \\choose a}{d - s_- \\choose s_+ - a} \\ = \\ h (s_+) \\,P \\,(Z \\geq K) $$\n where~$Z = \\hypergeometric (d, s_-, s_+)$.\n In order to find an upper bound for~$p_2$ and deduce fixation, we first prove the following lemma about the hypergeometric random variable.\n\\begin{lemma} --\n Assume~\\eqref{eq:hypercube-2}, that~$\\ceil{s_- / \\tau} = 1$ and~$\\ceil{s_+ / \\tau} = 2$. 
Then,\n $$ P \\,(Z \\geq K) \\ = \\ \\sum_{a = K}^{s_-} {s_- \\choose a}{d - s_- \\choose s_+ - a}{d \\choose s_+}^{-1} \\leq \\ 1/2. $$\n\\end{lemma}\n\\begin{proof}\n The proof is made challenging by the fact that there is no explicit expression for the cumulative distribution function of the hypergeometric random variable\n and the idea is to use a combination of symmetry arguments and large deviation estimates.\n Symmetry is used to prove the result when~$s_-$ is small while large deviation estimates are used for larger values.\n Note that the result is trivial when~$s_+ > s_- + \\tau$ since in this case the sum in the statement of the lemma is empty so equal to zero.\n To prove the result when the sum is nonempty, we distinguish two cases. \\vspace*{5pt} \\\\\n{\\bf Small active piles} -- Assume that~$s_- < \\sigma \\tau$. Then,\n\\begin{equation}\n\\label{eq:hypergeometric-1}\n \\begin{array}{rcl}\n s_+ & \\leq & s_- + \\tau < (1 + \\sigma) \\,\\tau \\ = \\ (1/2)(d - \\sigma \\tau) \\ < \\ (1/2)(d - s_-) \\vspace*{3pt} \\\\\n K & \\geq & (1/2)(s_- + s_+ - \\tau) \\ > \\ s_- / 2 \\ > \\ s_- - K \\end{array}\n\\end{equation}\n from which it follows that\n\\begin{equation}\n\\label{eq:hypergeometric-2}\n {s_- \\choose a}{d - s_- \\choose s_+ - a} \\ \\leq \\ {s_- \\choose a}{d - s_- \\choose s_+ - s_- + a} \\quad \\hbox{for all} \\quad K \\leq a \\leq s_-.\n\\end{equation}\n Using~\\eqref{eq:hypergeometric-2} and again the second part of~\\eqref{eq:hypergeometric-1}, we deduce that\n $$ \\begin{array}{rcl}\n h (s_+) \\,P \\,(Z \\geq K) & = & \\displaystyle \\sum_{a = K}^{s_-} {s_- \\choose a}{d - s_- \\choose s_+ - a}\n \\ \\leq \\ \\displaystyle \\sum_{a = K}^{s_-} {s_- \\choose a}{d - s_- \\choose s_+ - s_- + a} \\vspace*{4pt} \\\\\n & = & \\displaystyle \\sum_{a = 0}^{s_- - K} {s_- \\choose s_- - a}{d - s_- \\choose s_+ - a}\n \\ \\leq \\ \\displaystyle \\sum_{a = 0}^{K - 1} {s_- \\choose a}{d - s_- \\choose s_+ - a}. 
\\end{array} $$\n In particular, we have~$P \\,(Z \\geq K) \\leq P \\,(Z < K)$, which gives the result. \\vspace*{5pt} \\\\\n{\\bf Larger active piles} -- Assume that~$\\sigma \\tau \\leq s_- \\leq \\tau$.\n In this case, the result is a consequence of the following large deviation estimates for the hypergeometric random variable:\n\\begin{equation}\n\\label{eq:hypergeometric-3}\n P \\,\\bigg(Z \\geq \\bigg(\\frac{s_-}{d} + \\epsilon \\bigg) \\,s_+ \\bigg) \\ \\leq \\ \\bigg(\\bigg(\\frac{s_-}{s_- + \\epsilon d} \\bigg)^{s_- / d + \\epsilon} \\bigg(\\frac{d - s_-}{d - s_- - \\epsilon d} \\bigg)^{1 - s_- / d - \\epsilon} \\bigg)^{s_+}\n\\end{equation}\n for all~$0 < \\epsilon < 1 - s_- / d$, that can be found in~\\cite{hoeffding_1963}.\n Note that\n $$ \\begin{array}{rcl}\n d \\,(s_+ + s_- - \\tau) - 2 s_+ \\,s_- & = & (d - 2 s_-) \\,s_+ + d \\,(s_- - \\tau) \\vspace*{3pt} \\\\\n & \\geq & (d - 2 s_-)(\\tau + 1) + d \\,(s_- - \\tau) \\ \\geq \\ (d - 2 \\tau) \\,s_- \\vspace*{3pt} \\\\\n & = & 3 \\sigma \\tau s_- \\ = \\ (3 \\sigma \\tau / 2 s_+)(2 s_+ \\,s_-) \\ \\geq \\ (3 \\sigma / 4)(2 s_+ \\,s_-) \\end{array} $$\n for all~$\\tau < s_+ \\leq 2 \\tau$.\n It follows that\n $$ K \\ \\geq \\ \\frac{s_+ + s_- - \\tau}{2} \\ \\geq \\ \\bigg(1 + \\frac{3 \\sigma}{4} \\bigg) \\,\\frac{s_+ \\,s_-}{d} \\ = \\ \\bigg(\\frac{s_-}{d} + \\frac{3 \\sigma s_-}{4d} \\bigg) \\,s_+ \\ \\geq \\ \\bigg(\\frac{s_-}{d} + \\frac{\\sigma^2}{3} \\bigg) \\,s_+ $$\n which, together with~\\eqref{eq:hypergeometric-3} for~$\\epsilon = \\sigma^2 / 3$, gives\n $$ \\begin{array}{rcl}\n P \\,(Z \\geq K) & \\leq &\n \\displaystyle P \\,\\bigg(Z \\geq \\bigg(\\frac{s_-}{d} + \\epsilon \\bigg) \\,s_+ \\bigg) \\ \\leq \\ \\bigg(\\frac{s_-}{s_- + \\epsilon d} \\bigg)^{s_+ s_- / d} \\vspace*{8pt} \\\\ & \\leq &\n \\displaystyle \\bigg(\\frac{3 s_-}{3 s_- + \\sigma^2 d} \\bigg)^{s_+ s_- / d} \\leq \\ \\bigg(\\frac{3}{3 + 2 \\sigma^2} \\bigg)^{(\\sigma / 3) \\,s_+} \\leq \\ \\bigg(\\frac{3}{3 + 2 
\\sigma^2} \\bigg)^{(\\sigma / 3) \\,\\tau}. \\end{array} $$\n Since this tends to zero as~$\\tau \\to \\infty$, the proof is complete.\n\\end{proof} \\\\ \\\\\n It directly follows from the lemma that\n $$ \\begin{array}{l}\n p_2 \\ = \\ \\max \\,\\{\\sum_{s : \\ceil{s / \\tau} = 1} f (s_-, s_+, s) / h (s_+) : \\ceil{s_- / \\tau} = 1 \\ \\hbox{and} \\ \\ceil{s_+ / \\tau} = 2 \\} \\ \\leq \\ 1/2. \\end{array} $$\n This, together with~\\eqref{eq:common} and~$p_3 \\leq 1$ and~$q_2 \\geq 0$, implies that\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, \\tau) & \\geq & - \\ h (1) - \\cdots - h (d) + (1/p_2) \\,h (\\tau + 1) + \\cdots + (1/p_2) \\,h (d) \\vspace*{3pt} \\\\\n & \\geq & - \\ h (1) - \\cdots - h (d) + 2 \\,h (\\tau + 1) + \\cdots + 2 \\,h (d) \\vspace*{3pt} \\\\\n & = & - \\ h (1) - \\cdots - h (\\tau) + h (\\tau + 1) + \\cdots + h (d). \\end{array} $$\n Finally, using again~\\eqref{eq:hypercube-1} and the fact that~$d > 2 \\tau$, we deduce that\n $$ \\begin{array}{rcl}\n S_{\\reg} (\\Gamma, \\tau) & \\geq & - \\ h (1) - \\cdots - h (\\tau) + h (\\tau + 1) + \\cdots + h (d) \\vspace*{3pt} \\\\\n & \\geq & h (d - 1) - h (1) + h (d - 2) - h (2) + \\cdots + h (d - \\tau) - h (\\tau) + h (d) \\vspace*{3pt} \\\\\n & = & h (d) \\ = \\ 1 \\ > \\ 0. \\end{array} $$\n The corollary follows once more from Theorem~\\ref{th:dist-reg}.\n\\end{demo}\n\n\n\\section{Introduction}\n\\label{sec1}\n\\IEEEPARstart{W}{ith} the development of the internet of vehicles (IoV) and cloud computing, caching technology facilitates various real-time vehicular applications for vehicular users (VUs), such as automatic navigation, pattern recognition and multimedia entertainment \\cite{Liuchen2021}, \\cite{QWu2022}. In standard caching technology, the cloud caches various contents such as data, videos and web pages. 
In this scheme, vehicles transmit requests for the required contents to a macro base station (MBS) connected to a cloud server, and then fetch the contents from the MBS, which would cause a high content transmission delay from the MBS to vehicles due to the communication congestion caused by frequent content requests from vehicles \\cite{Dai2019}. The content transmission delay can be effectively reduced by the emergence of vehicular edge computing (VEC), which caches contents in the road side unit (RSU) deployed at the edge of vehicular networks (VNs) \\cite{Javed2021}. Thus, vehicles can fetch contents directly from the local RSU to reduce the content transmission delay. In the VEC, since the caching capacity of the local RSU is limited, if some vehicles cannot fetch their required contents, a neighboring RSU that has the required contents could forward them to the local RSU. In the worst case, vehicles need to fetch contents from the MBS because neither the local nor the neighboring RSU has cached the requested contents.\n\n\nIn the VEC, it is critical to design a caching scheme to cache the popular contents. Traditional caching schemes cache contents based on the previously requested contents \\cite{Narayanan2018}. However, owing to the high-mobility characteristics of vehicles in VEC, the previously requested contents may become outdated quickly, so traditional caching schemes may not satisfy all the VUs' requirements. Therefore, it is necessary to predict the most popular contents in the VEC and cache them in the suitable RSUs in advance. Machine learning (ML), as a new tool, can extract hidden features by training on user data to efficiently predict popular contents~\\cite{Yan2019}. However, user data usually contains private information and users are reluctant to share their data directly with others, which makes it difficult to collect and train on users' data. 
Federated learning (FL) can protect the privacy of users by sharing their local models instead of their data~\\cite{Chen2021}. In traditional FL, the global model is periodically updated by aggregating all vehicles' local models~\\cite{Wang2020}--\\cite{Cheng2021}. However, vehicles may frequently drive out of the coverage area of the VEC before they update their local models, and thus the local models cannot be uploaded in the same area, which would reduce the accuracy of the global model as well as the probability of obtaining the predicted popular contents. This motivates us to consider the mobility of vehicles and propose an asynchronous FL scheme to accurately predict popular contents in VEC.\n\n\nGenerally, the predicted popular contents should be cached in the vehicles' local RSU to guarantee a low content transmission delay. However, the caching capacity of each local RSU is limited and the popular contents may be diverse, so the size of the predicted popular contents usually exceeds the cache capacity of the local RSU. Hence, the VEC has to determine where the predicted popular contents are cached and updated. The content transmission delay is an important metric for vehicles to support real-time vehicular applications. The different popular contents cached in the local and neighboring RSUs would impact the way vehicles fetch contents, and thus affect the content transmission delay. In addition, the content transmission delay of each vehicle is impacted by its channel condition, which is affected by vehicle mobility. Therefore, it is necessary to consider the mobility of vehicles to design a cooperative caching scheme, in which the predicted popular contents can be cached among RSUs to optimize the content transmission delay. 
In contrast to conventional decision algorithms, deep reinforcement learning (DRL) is a favorable tool to construct the decision-making framework and optimize the cooperative caching of contents in complex vehicular environments \\cite{Zhu2021}. Therefore, we shall employ DRL to determine the optimal cooperative caching policy to reduce the content transmission delay of vehicles.\n\nIn this paper, we consider the vehicle mobility and propose a cooperative Caching scheme in VEC based on Asynchronous Federated and deep Reinforcement learning (CAFR). The main contributions of this paper are summarized as follows.\n\n\\begin{itemize}\n\\item[1)] By considering the mobility characteristics of vehicles, including their positions and velocities, we propose an asynchronous FL algorithm to improve the accuracy of the global model.\n\n\n\\item[2)] We propose an algorithm to predict the popular contents based on the global model, where each vehicle adopts an autoencoder (AE) to predict the contents it is interested in, while the local RSU collects the interested contents of all vehicles within its coverage area to determine the popular contents.\n\n\n\\item[3)] We design a DRL framework (dueling deep Q-network (DQN)) to model the cooperative caching problem, where the state, action and reward function are defined. The local RSU can then determine the optimal cooperative caching policy to minimize the content transmission delay based on the dueling DQN algorithm.\n\\end{itemize}\n\nThe rest of the paper is organized as follows. Section \\ref{sec2} reviews the related works on content caching in VNs. Section \\ref{sec3} briefly describes the system model. Section \\ref{sec5} proposes a mobility-aware cooperative caching scheme for the VEC based on the asynchronous federated and deep reinforcement learning method. 
We present simulation results in Section \\ref{sec6}, and conclude the paper in Section \\ref{sec7}.\n\n\\section{Related Work}\n\\label{sec2}\nIn this section, we first review the existing works related to content caching in vehicular networks (VNs), and then survey the current state of the art of cooperative content caching schemes in VEC.\n\n\nIn \\cite{YDai2020}, Dai \\textit{et al.} proposed a distributed content caching framework empowered by blockchain to achieve security and protect privacy, and considered the mobility of vehicles to design an intelligent content caching scheme based on the DRL framework.\nIn \\cite{Yu2021}, Yu \\textit{et al.} proposed a mobility-aware proactive edge caching scheme in VNs that allows multiple vehicles with private data to collaboratively train a global model for predicting content popularity, in order to meet the requirements of computationally intensive and latency-sensitive vehicular applications.\nIn \\cite{JZhao2021}, Zhao \\textit{et al.} optimized the edge caching and computation management for service caching, and adopted Lyapunov optimization to deal with the dynamic and unpredictable challenges in VNs.\nIn \\cite{SJiang2020}, Jiang \\textit{et al.} constructed a two-tier secure access control structure for providing content caching in VNs with the assistance of edge devices, and proposed a group signature-based scheme for anonymous authentication.\nIn \\cite{CTang2021}, Tang \\textit{et al.} proposed a new optimization method to reduce the average response time of caching in VNs, and then adopted Lyapunov optimization technology to constrain the long-term energy consumption to guarantee the stability of the response time.\nIn \\cite{YDai2022}, Dai \\textit{et al.} proposed a VN with a digital twin to cache contents for adaptive network management and policy arrangement, and designed an offloading scheme based on the DRL framework to minimize the total offloading delay.\nHowever, the above 
content caching schemes in VNs did not take into account cooperative caching in the VEC environment.\n\n\nThere are some works considering cooperative content caching schemes in VEC.\nIn \\cite{GQiao2020}, Qiao \\textit{et al.} proposed a cooperative edge caching scheme in VEC and constructed a double time-scale Markov decision process to minimize the content access cost, and employed the deep deterministic policy gradient (DDPG) method to solve the long-term mixed-integer linear programming problems.\nIn \\cite{JChen2020}, Chen \\textit{et al.} proposed a cooperative edge caching scheme in VEC which considered the location-based contents and the popular contents, while designing an optimal scheme for cooperative content placement based on an ant colony algorithm to minimize the total transmission delay and cost.\nIn \\cite{LYao2022}, Yao \\textit{et al.} designed a cooperative edge caching scheme with consistent hashing and mobility prediction in VEC to predict the path of each vehicle, and also proposed a cache replacement policy based on the content popularity to decide the priorities of collaborative contents.\nIn \\cite{RWang2021}, Wang \\textit{et al.} proposed a cooperative edge caching scheme in VEC based on long short-term memory (LSTM) networks, which caches the predicted contents in RSUs or other vehicles and thus reduces the content transmission delay.\nIn \\cite{DGupta2020}, Gupta \\textit{et al.} proposed a cooperative caching scheme that jointly considers cache location, content popularity and the predicted rating of contents to make caching decisions based on non-negative matrix factorization, where it employs legitimate user authorization to ensure the secure delivery of cached contents.\nIn \\cite{LYao2019}, Yao \\textit{et al.} proposed a cooperative caching scheme based on mobility prediction and drivers' social similarities in VEC, where the regularity of vehicles' movement behaviors is predicted based on the hidden Markov model to 
improve the caching performance.\nIn \\cite{RWu2022}, Wu \\textit{et al.} proposed a hybrid service provisioning framework and cooperative caching scheme in VEC to solve the profit allocation problem among the content providers (CPs), and proposed an optimization model to improve the caching performance in managing the caching resources.\nIn \\cite{LYao2017}, Yao \\textit{et al.} proposed a cooperative caching scheme based on mobility prediction, where the popular contents may be cached in the mobile vehicles within the coverage area of hot spot. They also designed a cache replacement scheme according to the content popularity to solve the limited caching capacity problem for each edge cache device.\nIn \\cite{KZhang2018}, Zhang \\textit{et al.} proposed a cooperative edge caching architecture that focuses on the mobility-aware caching, where the vehicles cache the contents with base stations collaboratively. They also introduced a vehicle-aided edge caching scheme to improve the capability of edge caching.\nIn \\cite{KLiu2016}, Liu \\textit{et al.} designed a cooperative caching scheme that allows vehicles to search the unrequested contents. This scheme facilitates the content sharing among vehicles and improves the service performance.\nIn \\cite{SWang2017}, Wang \\textit{et al.} proposed a VEC caching scheme to reduce the total transmission delay. This scheme extends the capability of the data center from the core network to the edge nodes by cooperatively caching popular contents in different CPs. It minimizes the VUs' average delay according to an iterative ascending price method.\nIn \\cite{MLiu2021}, Liu \\textit{et al.} proposed a real-time caching scheme in which edge devices cooperate to improve the caching resource utilization. 
In addition, they adopted the DRL framework to optimize the problem of searching requests and utility models to guarantee the search efficiency.
In \cite{BKo2019}, Ko \textit{et al.} proposed an adaptive scheduling scheme consisting of a centralized scheduling mechanism, an ad hoc scheduling mechanism and a cluster management mechanism to exploit ad hoc data sharing among different RSUs.
In \cite{JCui2020}, Cui \textit{et al.} proposed a privacy-preserving data downloading method in VEC, where the RSUs can find popular contents by analyzing encrypted requests of nearby vehicles to improve the downloading efficiency of the network.
In \cite{QLuo2020}, Luo \textit{et al.} designed a communication, computation and cooperative caching framework, where computing-enabled RSUs provide computation and bandwidth resources to the VUs to minimize the data processing cost in VEC.

As mentioned above, no existing work has simultaneously considered vehicle mobility and the privacy of VUs in designing cooperative caching schemes in VEC, which motivates us to propose a mobility-aware cooperative caching scheme in VEC based on asynchronous FL and DRL.




\begin{figure}
\center
\includegraphics[scale=0.7]{1-eps-converted-to.pdf}
\caption{VEC scenario}
\label{fig1}
\end{figure}



\section{System Model}
\label{sec3}

\subsection{System Scenario}
As shown in Fig. \ref{fig1}, we consider a three-tier VEC in an urban scenario that consists of a local RSU, a neighboring RSU, an MBS attached to a cloud and some vehicles moving in the coverage area of the local RSU. The top tier is the MBS deployed at the center of the VEC, while the middle tier consists of the RSUs deployed in the coverage area of the MBS. They are placed on one side of the road. The bottom tier consists of the vehicles driving within the coverage area of the RSUs.


Each vehicle stores a large amount of VUs' historical data, i.e., local data.
Each data item is a vector reflecting different information of a VU, including the VU's personal information such as identity (ID) number, gender, age and postcode, the contents that the VU may request, as well as the VU's ratings for the contents, where a larger rating for a content indicates that the VU is more interested in the content. In particular, the rating for a content may be $0$, which means that the content is not popular or has not been requested by the VU. Each vehicle randomly chooses a part of its local data to form a training set, while the rest is used as a testing set. The time duration of vehicles within the coverage area of the MBS is divided into rounds. At the beginning of each round, each vehicle randomly selects contents from its testing set as the requested contents and sends the request information to the local RSU to fetch the contents. We consider that the MBS has abundant storage capacity and caches all available contents, while the limited storage capacity of each RSU can only accommodate part of the contents. Therefore, the vehicle fetches each requested content from the local RSU, the neighboring RSU or the MBS under different conditions. Specifically,

\subsubsection{Local RSU}If a requested content is cached in the local RSU, the local RSU sends back the requested content to the vehicle. In this case the vehicle fetches the content from the local RSU.
\subsubsection{Neighboring RSU}If a requested content is not cached in the local RSU, the local RSU transfers the request to the neighboring RSU, and the neighboring RSU sends the content to the local RSU if it caches the requested content. Afterward, the local RSU sends back the content to the vehicle. In this case the vehicle fetches the content from the neighboring RSU.
\subsubsection{MBS}If a content is cached in neither the local RSU nor the neighboring RSU, the vehicle sends the request to the MBS, which directly sends back the requested content to the vehicle.
In this case, the VU fetches the content from the MBS.

\subsection{Mobility Model of Vehicles}
The model assumes that all vehicles drive in the same direction and arrive at the local RSU following a Poisson process with arrival rate $\lambda_{v}$. Once a vehicle enters the coverage of the local RSU, it sends request information to the local RSU. Each vehicle keeps the same mobility characteristics, including position and velocity, within a round, and may change its mobility characteristics at the beginning of each round. The velocities of different vehicles are independent and identically distributed. The velocity of each vehicle is generated by a truncated Gaussian distribution, which is flexible and consistent with the real dynamic vehicular environment. For round $r$, the number of vehicles driving in the coverage area of the local RSU is $N^{r}$. The set of $N^{r}$ vehicles is denoted as $\mathbb{V}^{r}=\left\{V_{1}^{r}, V_{2}^{r},\ldots, V_{i}^{r}, \ldots, V_{N^{r}}^{r}\right\}$, where $V_{i}^{r}$ is vehicle $i$ driving in the local RSU $(1 \leq i \leq N^{r})$. Let $\left\{U_{1}^{r}, U_{2}^{r}, \ldots, U_{i}^{r}, \ldots, U_{N^{r}}^{r}\right\}$ be the velocities of all vehicles driving in the local RSU, where $U_{i}^{r}$ is the velocity of $V_{i}^{r}$.
According to \\cite{AlNagar2019}, the probability density function of $U_{i}^{r}$ is expressed as\n\\begin{equation}\nf({U_{i}^r}) = \\left\\{ \\begin{aligned}\n\\frac{{{e^{ - \\frac{1}{{2{\\sigma ^2}}}{{({U_{i}^r} - \\mu )}^2}}}}}{{\\sqrt {2\\pi {\\sigma ^2}} (erf(\\frac{{{U_{\\max }} - \\mu }}{{\\sigma \\sqrt 2 }}) - erf(\\frac{{{U_{\\min }} - \\mu }}{{\\sigma \\sqrt 2 }}))}},\\\\\n{U_{min }} \\le {U_{i}^r} \\le {U_{max }},\\\\\n0 \\qquad \\qquad \\qquad \\qquad \\quad otherwise.\n\\end{aligned} \\right.\n\\label{eq1}\n\\end{equation}\nwhere $U_{\\max}$ and $U_{\\min}$ are the maximum and minimum velocity threshold of each vehicle, respectively, and $erf\\left(\\frac{U_{i}^{r}-\\mu}{\\sigma \\sqrt{2}}\\right)$ is the Gauss error function of $U_{i}^{r}$ under the mean $\\mu$ and variance $\\sigma^{2}$.\n\\subsection{Communication Model}\n\nThe communications between the local RSU and neighboring RSU adopt the wired link. Each vehicle keeps the same communication model during a round and changes its communication model for different rounds. When the round is $r$, the channel gain of $V_{i}^{r}$ is modeled as \\cite{3gpp}\n\n\\begin{equation}\n\\begin{aligned}\nh_{i}^{r}(dis(x,V_{i}^{r}))=\\alpha_{i}^{r}(dis(x,V_{i}^{r})) g_{i}^{r}(dis(x,V_{i}^{r})), \\\\\nx=S,M,\\\\\n\\label{eq2}\n\\end{aligned}\n\\end{equation}\nwhere $x=S$ means the local RSU and $x=M$ means the MBS, $dis(x,V_{i}^{r})$ is the distance between the local RSU$/$MBS and $V_{i}^{r}$, $\\alpha_{i}^{r}(dis(x,V_{i}^{r}))$ is the path loss between the local RSU$/$MBS and $V_{i}^{r}$, and $g_{i}^{r}(dis(x,V_{i}^{r}))$ is the shadowing channel fading between the local RSU$/$MBS and $V_{i}^{r}$, which follows a Log-normal distribution.\n\nEach RSU communicates with the vehicles in its coverage area through vehicle to RSU (V2R) link, while the MBS communicates with vehicles through vehicle to base station (V2B) link. 
Since the distances between the local RSU$/$MBS and $V_{i}^{r}$ are different in different rounds, the V2R$/$V2B links suffer from different channel impairments, and thus transmit at different rates in different rounds. The transmission rates under the V2R and V2B links are calculated as follows.

According to the Shannon theorem, the transmission rate between the local RSU and $V_{i}^{r}$ is calculated as \cite{Chenwu2020}
\begin{equation}
R_{R, i}^{r}=B\log _{2}\left(1+\frac{p_B h_{i}^{r}(dis(S,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq3}
\end{equation}where $B$ is the available bandwidth, $p_B$ is the transmit power level used by the local RSU and $\sigma_{c}^{2}$ is the noise power.

Similarly, the transmission rate between the MBS and $V_{i}^{r}$ is calculated as
\begin{equation}
R_{B, i}^{r}=B\log _{2}\left(1+\frac{p_{M} h_{i}^{r}(dis(M,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq4}
\end{equation}where $p_{M}$ is the transmit power level used by the MBS.


\begin{figure}
\center
\includegraphics[scale=0.75]{2-eps-converted-to.pdf}
\caption{Asynchronous FL}
\label{fig2}
\end{figure}

\section{Cooperative Caching Scheme}
\label{sec5}
In this section, we propose a cooperative caching scheme to optimize the content transmission delay in each round $r$. We first propose an asynchronous FL algorithm to protect VUs' information and obtain an accurate model. Then we propose an algorithm to predict the popular contents based on the obtained model. Finally, we present a DRL-based algorithm to determine the optimal cooperative caching according to the predicted popular contents. Next, we will introduce the asynchronous FL algorithm, the popular content prediction algorithm and the DRL-based algorithm, respectively.

\subsection{Asynchronous Federated Learning}



As shown in Fig.
\ref{fig2}, the asynchronous FL algorithm consists of five steps as follows.


\subsubsection{Select Vehicles}
\
\newline
\indent
The main goal of this step is to select the vehicles whose staying time in the local RSU is long enough to ensure that they can participate in the asynchronous FL and complete the training process.

Each vehicle first sends its mobility characteristics, including its velocity and position (i.e., the distance to the local RSU and the distance it has traversed within the coverage of the local RSU), to the local RSU; the local RSU then selects vehicles according to the staying time calculated based on each vehicle's mobility characteristics. The staying time of $V_{i}^{r}$ in the local RSU is calculated as

\begin{equation}
T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r},
\label{eq5}
\end{equation}
where $L_s$ is the coverage range of the local RSU and $P_{i}^{r}$ is the distance that $V_{i}^{r}$ has traversed within the coverage of the local RSU.

The staying time of $V_{i}^{r}$ should be larger than the sum of the average training time $T_{training}$ and inference time $T_{inference}$ to guarantee that $V_{i}^{r}$ can complete the training process. Therefore, if $T_{r,i}^{staying}>T_{training}+T_{inference}$, the local RSU selects $V_{i}^{r}$ to participate in asynchronous FL training. Otherwise, $V_{i}^{r}$ is ignored.

\subsubsection{Download Model}
\
\newline
\indent
In this step, the local RSU generates the global model $\omega^{r}$. For the first round, the local RSU initializes a global model based on the AE, which can extract the hidden features used for popular content prediction.
In each round, the local RSU updates the global model and then transmits the global model $\omega^{r}$ to all the selected vehicles.

\subsubsection{Local Training}
\
\newline
\indent
In this step, each vehicle in the local RSU sets the downloaded global model $\omega^{r}$ as the initial local model and updates the local model iteratively through training. Afterward, the updated local model is fed back to the local RSU.
For each iteration $k$, $V_{i}^{r}$ randomly samples some training data $n_{i,k}^{r}$ from the training set. Then, it uses $n_{i,k}^{r}$ to train the local model based on the AE, which consists of an encoder and a decoder. Let $W_{i,k}^{r,e}$ and $b_{i,k}^{r,e}$ be the weight matrix and bias vector of the encoder for iteration $k$, respectively, and $W_{i,k}^{r,d}$ and $b_{i,k}^{r,d}$ be the weight matrix and bias vector of the decoder for iteration $k$, respectively. Thus the local model of $V_{i}^{r}$ for iteration $k$ is expressed as $\omega_{i,k}^r=\{W_{i,k}^{r,e}, b_{i,k}^{r,e}, W_{i,k}^{r,d}, b_{i,k}^{r,d}\}$. For each training data $x$ in $n_{i,k}^{r}$, the encoder first maps the original training data $x$ to a hidden layer to obtain the hidden feature of $x$, i.e., $z(x)=f\left(W_{i,k}^{r,e}x+b_{i,k}^{r,e}\right)$. Then the decoder calculates the reconstructed input $\hat{x}$, i.e., $\hat{x}=g\left(W_{i,k}^{r,d}z(x)+b_{i,k}^{r,d}\right)$, where $f{(\cdot)}$ and $g{(\cdot)}$ are the nonlinear and logistic activation functions, respectively \cite{Ng2011}.
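The encoder--decoder mapping above can be sketched as follows; using the logistic sigmoid for both $f(\cdot)$ and $g(\cdot)$ is an assumption for illustration, since the text only states that they are nonlinear/logistic activation functions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def ae_forward(x, W_e, b_e, W_d, b_d):
    """One AE forward pass under the local model {W_e, b_e, W_d, b_d}."""
    z = sigmoid(W_e @ x + b_e)      # encoder: hidden feature z(x)
    x_hat = sigmoid(W_d @ z + b_d)  # decoder: reconstructed input x_hat
    return z, x_hat
```

The hidden dimension (rows of `W_e`) is smaller than the input dimension, so `z` is a compressed feature of the rating vector `x`, and `x_hat` is its reconstruction.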
Afterward, the loss function of data $x$ under the local model $\omega_{i,k}^r$ is calculated as
\begin{equation}
l\left(\omega_{i,k}^r;x\right)=(x-\hat{x})^{2},
\label{eq6}
\end{equation}where $\omega^{r}_{i,1}=\omega^{r}$.

After the loss functions of all the data in $n_{i,k}^{r}$ are calculated, the local loss function for iteration $k$ is calculated as
\begin{equation}
f(\omega_{i,k}^r)=\frac{1}{\left| n_{i,k}^r\right|}\sum_{x\in n_{i,k}^r} l\left(\omega_{i,k}^r;x\right),
\label{eq7}
\end{equation}
where $\left| n_{i,k}^r\right|$ is the number of data in $n_{i,k}^r$.

Then the regularized local loss function is calculated to reduce the deviation between the local model $\omega_{i,k}^r$ and the global model $\omega^{r}$ and thus improve the algorithm convergence, i.e.,
\begin{equation}
g\left(\omega_{i,k}^r\right)=f\left(\omega_{i,k}^r\right)+\frac{\rho}{2}\left\|\omega^{r}-\omega_{i,k}^r\right\|^{2},
\label{eq8}
\end{equation}
where $\rho$ is the regularization parameter.

Let $\nabla g(\omega_{i,k}^{r})$ be the gradient of $g\left(\omega_{i,k}^r\right)$, which is referred to as the local gradient. In the previous round, some vehicles may have failed to upload their updated local models in time due to delayed training, which adversely affects the convergence of the global model \cite{Chen2020,Xie2019,-S2021}. Such vehicles are called stragglers, and the local gradient of a straggler in the previous round is referred to as the delayed local gradient. To address this problem, the delayed local gradient is aggregated into the local gradient of the current round $r$. Thus, the aggregated local gradient is calculated as
\begin{equation}
\nabla \zeta_{i,k}^{r}=\nabla g(\omega_{i,k}^{r})+\beta \nabla g_{i}^{d},
\label{eq9}
\end{equation}
where $\beta$ is the decay coefficient and $\nabla g_{i}^{d}$ is the delayed local gradient.
Note that $\\nabla g_{i}^{d}=0$ if $V_{i}^{r}$ uploads successfully in the previous round.\n\nThen the local model for the next iteration is updated as\n\\begin{equation}\n\\omega^{r}_{i,k+1}=\\omega^{r}-\\eta_{l}^{r}\\nabla \\zeta_{i,k}^{r},\n\\label{eq10}\n\\end{equation}where $\\eta_{l}^{r}$ is the local learning rate in round $r$, which is calculated as\n\\begin{equation}\n\\eta_{l}^{r}=\\eta_{l} \\max \\{1, \\log (r)\\},\n\\label{eq11}\n\\end{equation} where $\\eta_{l}$ is the initial value of local learning rate.\n\nThen iteration $k$ is finished and $V_{i}^{r}$ randomly samples some training data again to start the next iteration. When the number of iterations reaches the threshold $e$, $V_{i}^{r}$ completes the local training and upload the updated local model $\\omega_{i}^{r}$ to the local RSU.\n\n\n\\subsubsection{Upload Model}\n\\\n\\newline\n\\indent\nEach vehicle uploads its updated local model to the local RSU after it completes local training.\n\n\\subsubsection{Asynchronous aggregation}\n\\\n\\newline\n\\indent\nIf the local model of $V_{i}^{r}$, i.e., $\\omega^{r}_{i}$, is the first model received by the local RSU, the upload is successful and the local RSU updates the global model. 
Otherwise, the local RSU drops $\omega^{r}_{i}$ and thus the upload is not successful.

When the upload is successful, the local RSU updates the global model $\omega^{r}$ by weighted averaging as follows:

\begin{algorithm}
	\caption{The Asynchronous Federated Learning Algorithm}
	\label{al1}
	Set global model $\omega^{r}$;\\
	\For{each round $r$ from $1$ to $R^{max}$}
 {
		\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$ \textbf{in parallel}}
		{
			$T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r}$;\\
			\If{ $T_{r,i}^{staying}>T_{training}+T_{inference}$}
			{
			$V^{r}_i$ is selected to participate in asynchronous FL training;
			}
		}
		\For{each selected vehicle $ V^{r}_{i}$}
		{
			$\omega^{r}_{i} \leftarrow \textbf{Vehicle Update}(\omega^r,i)$;\\
			Upload the local model $\omega^{r}_{i}$ to the local RSU;\\
		}
		Receive the updated model $\omega^{r}_{i}$;\\
		Calculate the weight of the asynchronous aggregation $\chi_{i}$ based on Eq. \eqref{eq14};\\
		Update the global model based on Eq. \eqref{eq12};\\
	\Return $\omega^{r+1}$
	}
	\textbf{Vehicle Update}($\omega^r,i$):\\
	\textbf{Input:} $\omega^r$ \\
	Calculate the local learning rate $\eta_{l}^{r}$ based on Eq. \eqref{eq11};\\
	\For{each local epoch $k$ from $1$ to $e$}
	{
		Randomly sample some data $n_{i,k}^r$ from the training set;\\
		\For{each data $x \in n_{i,k}^r$ }
		{
			Calculate the loss function of data $x$ based on Eq. \eqref{eq6};\\
		}
			Calculate the local loss function for iteration $k$ based on Eq. \eqref{eq7};\\
			Calculate the regularized local loss function $g\left(\omega_{i,k}^r\right)$ based on Eq. \eqref{eq8};\\
			Aggregate the local gradient $\nabla \zeta_{i,k}^{r}$ based on Eq. \eqref{eq9};\\
			Update the local model $\omega^{r}_{i,k}$ based on Eq.
\eqref{eq10};\\
	}
	Set $\omega^{r}_{i}=\omega^{r}_{i,e}$;\\
	\Return$\omega^{r}_{i}$

\end{algorithm}

\begin{equation}
\omega^{r}=\omega^{r-1}+\frac{d_{i}^r}{d^r} \chi_{i} \omega^{r}_{i},
\label{eq12}
\end{equation}where $d_{i}^r$ is the size of the local data in $V_i^r$, $d^r$ is the total local data size of the selected vehicles and $\chi_{i}$ is the weight of the asynchronous aggregation for $V_{i}^{r}$.
The weight of the asynchronous aggregation $\chi_{i}$ is calculated by considering the remaining distance of $V_{i}^{r}$ in the coverage area of the local RSU and the content transmission delay from the local RSU to $V_{i}^{r}$, in order to improve the accuracy of the global model and reduce the content transmission delay. Specifically, if the remaining distance $L_{s}-P_{i}^{r}$ of $V_{i}^{r}$ is large, the vehicle may have a long available time to participate in the training, and thus its local model should occupy a large weight in the aggregation to improve the accuracy of the global model. In addition, the content transmission delay from the local RSU to $V_{i}^{r}$ is important because $V_{i}^{r}$ finally downloads the content from the local RSU when the content is cached in either the local or the neighboring RSU. Thus, if the content transmission delay from the local RSU to $V_{i}^{r}$ is small, its local model should also occupy a large weight in the aggregation to reduce the content transmission delay. The weight of the asynchronous aggregation $\chi_{i}$ is calculated as

\begin{equation}
\chi_{i}=\mu_{1} {(L_{s}-P_{i}^{r})}+\mu_{2} \frac{s}{R_{R, i}^{r}},
\label{eq13}
\end{equation}where $\mu_{1}$ and $\mu_{2}$ are the coefficients of the position weight and the transmission weight, respectively (i.e., $\mu_{1}+\mu_{2}=1$), and $s$ is the size of each content. Thus, the content transmission delay from the local RSU to $V_{i}^{r}$ is determined by the transmission rate between the local RSU and $V_{i}^{r}$, i.e., $R_{R, i}^{r}$.
We can further calculate $\chi_{i}$ based on the normalized $L_{s}-P_{i}^{r}$ and $R_{R, i}^{r}$, i.e.,
\begin{equation}
\chi_{i}=\mu_{1} \frac{(L_{s}-P_{i}^{r})}{L_{s}}+\mu_{2} \frac{R_{R, i}^{r}}{\max _{k \in N^{r}}\left(R_{R, k}^{r}\right)}.
\label{eq14}
\end{equation}


Since the local RSU knows $dis(S,V_{i}^{r})$ and $P_{i}^{r}$ for each vehicle $i$ at the beginning of the asynchronous FL, the local RSU can calculate $R_{R, i}^{r}$ according to Eqs. \eqref{eq2} and \eqref{eq3}, and further calculate $\chi_{i}$ according to Eq. \eqref{eq14}.


Up to now, the asynchronous FL in round $r$ is finished and the updated global model $\omega^{r}$ is obtained. The process of the asynchronous FL algorithm is shown in Algorithm \ref{al1} for ease of understanding, where $R^{max}$ is the maximum number of rounds and $e$ is the maximum number of local epochs. Then, the local RSU sends the obtained model to each vehicle to predict popular contents.



\subsection{Popular Content Prediction}

\begin{figure*}
\center
\includegraphics[scale=0.6]{3-eps-converted-to.pdf}
\caption{Popular content prediction process}
\label{fig3}
\end{figure*}


In this subsection, we propose an algorithm to predict the popular contents. As shown in Fig. \ref{fig3}, the popular content prediction algorithm consists of the following four steps.


\subsubsection{Data Preprocessing}
\
\newline
\indent
The VU's rating for a content is $0$ when the VU is uninterested in the content or has not requested it. Thus, it is difficult to tell whether a content with rating $0$ is actually of interest to the VU, and marking all contents with rating $0$ as uninterested contents would bias the prediction.
Therefore, in the first step we adopt the obtained model to reconstruct the rating for each content, which is described as follows.

Each vehicle extracts a rating matrix from the data in the testing set, where the first dimension of the matrix is the VUs' IDs and the second dimension is the VUs' ratings for all contents. Denote the rating matrix of $V_{i}^r$ as $\boldsymbol{R}_{i}^r$. Then, the AE with the obtained model is adopted to reconstruct $\boldsymbol{R}_{i}^r$. The rating matrix $\boldsymbol{R}_{i}^r$ is used as the input data for the AE, which outputs the reconstructed rating matrix $\hat{\boldsymbol{R}}_{i}^r$. Since $\hat{\boldsymbol{R}}_{i}^r$ is reconstructed based on the obtained model, which reflects the hidden features of the data, $\hat{\boldsymbol{R}}_{i}^r$ can be used to approximate the rating matrix $\boldsymbol{R}_{i}^r$.
Then, similar to the rating matrix, each vehicle also extracts a personal information matrix from the data of the testing set, where the first dimension of the matrix is the VUs' IDs and the second dimension is the VUs' personal information.

\subsubsection{Cosine Similarity}
\
\newline
\indent
$V_{i}^r$ counts the number of nonzero ratings of each VU in $\boldsymbol{R}_{i}^r$ and marks the top $1/m$ fraction of VUs with the largest such numbers as active VUs. Then, each vehicle combines $\hat{\boldsymbol{R}}_{i}^r$ and the personal information matrix (the combined matrix is denoted as $\boldsymbol{H}_{i}^r$) to calculate the similarity between each active VU and the other VUs.
The similarity between an active VU $a$ and another VU $b$ is calculated according to the cosine similarity \cite{yuet2018}
\begin{equation}
\begin{aligned}
\operatorname{sim}_{a,b}^{r,i}=\cos \left(\boldsymbol{H}_{i}^r(a,:), \boldsymbol{H}_{i}^r(b,:)\right)\\
=\frac{\boldsymbol{H}_{i}^r(a,:) \cdot \boldsymbol{H}_{i}^r(b,:)^T}{\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2} \times\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}},
\label{eq15}
\end{aligned}
\end{equation}where $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$ are the vectors corresponding to VU $a$ and VU $b$ in the combined matrix, respectively, and $\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2}$ and $\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}$ are the 2-norms of $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$, respectively. Then for each active VU $a$, $V_{i}^r$ selects the VUs with the $K$ largest similarities as the $K$ neighboring VUs of VU $a$. The ratings of the $K$ neighboring VUs also reflect the preferences of VU $a$ to a certain extent.


\subsubsection{Interested Contents}
\
\newline
\indent
After determining the neighboring VUs of the active VUs, the vectors of the neighboring VUs for each active VU are abstracted from $\boldsymbol{R}_{i}^r$ to construct a matrix $\boldsymbol{H}_K$, where the first dimension of $\boldsymbol{H}_K$ is the IDs of the neighboring VUs of the active VUs, and the second dimension of $\boldsymbol{H}_K$ is the ratings of the contents from the neighboring VUs. In $\boldsymbol{H}_K$, a content with a nonzero rating from a VU is regarded as that VU's interested content. Then, for each content, the number of VUs interested in it is counted; this count is referred to as the content popularity.
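This step can be sketched compactly: compute the cosine similarity of Eq.~\eqref{eq15} over rows of the combined matrix, pick the $K$ most similar VUs as neighbors, then count nonzero ratings per content as popularity. The matrix shapes and the small guard against zero norms are illustrative assumptions.

```python
import numpy as np

def top_k_neighbors(H, a, K):
    """Indices of the K VUs most similar to active VU `a` (cosine, Eq. (15))."""
    va = H[a]
    norms = np.linalg.norm(H, axis=1) * np.linalg.norm(va)
    sims = (H @ va) / np.maximum(norms, 1e-12)  # cosine similarity per row
    sims[a] = -np.inf                           # exclude the active VU itself
    return np.argsort(sims)[::-1][:K]           # K largest similarities

def content_popularity(R, neighbors):
    """Per-content count of neighbors with a nonzero rating."""
    return np.count_nonzero(R[neighbors] != 0, axis=0)
```

The popularity vector returned by `content_popularity` is what each vehicle then uses to rank contents.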
$V_{i}^r$ selects the contents with the $F_c$ largest content popularity as the predicted interested contents.

\subsubsection{Popular Contents}
\
\newline
\indent
After the vehicles in the local RSU upload their predicted interested contents, the local RSU collects and compares the predicted interested contents uploaded by all vehicles and selects the contents with the $F_{c}$ largest content popularity as the popular contents. The proposed popular content prediction algorithm is illustrated in Algorithm \ref{al2}, where $\mathbb{C}^{r}$ is the set of the popular contents and $\mathbb{C}_{i}^r$ is the set of interested contents of $V^{r}_i$.

\begin{algorithm}
	\caption{The Popular Content Prediction Algorithm}
	\label{al2}
		\textbf{Input: $\omega^{r}$}\\
		\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$}
		{
			Construct the rating matrix $\boldsymbol{R}_{i}^r$ and the personal information matrix;\\
			$\hat{\boldsymbol{ R}}_{i}^r \leftarrow AE(\omega^{r},\boldsymbol{R}_{i}^r)$;\\
			Combine $\hat{\boldsymbol{ R}}_{i}^r$ and the personal information matrix as $\boldsymbol{H}_{i}^r$;\\
			$\mathbb{C}_{i}^r \leftarrow \textbf{Vehicle Predicts}(\boldsymbol{H}_{i}^r,i)$;\\
			Upload $\mathbb{C}_{i}^r$ to the local RSU;\\
		}
		\textbf{Compare} the received contents and select the $F_c$ most interested contents into $\mathbb{C}^{r}$;\\
	\Return $\mathbb{C}^{r}$\\
	\textbf{Vehicle Predicts}$(\boldsymbol{H}_{i}^r, i)$:\\
	\textbf{Input: $\boldsymbol{H}_{i}^r, i\in \{1,2,...,N^r\}$}\\
	Calculate the similarity between $V_{i}^r$ and other vehicles based on Eq.
\eqref{eq15};\\
	Select the first $K$ vehicles with the largest similarities as the neighboring vehicles of $V_{i}^r$;\\
	Construct the reconstructed rating matrices of the $K$ neighboring vehicles as $\boldsymbol{H}_K$;\\
	Select the $F_c$ most interested contents as $\mathbb{C}_{i}^r$;\\
	\Return $\mathbb{C}_{i}^r$

\end{algorithm}

The cache capacity $c$ of each RSU, i.e., the largest number of contents that each RSU can accommodate, is usually smaller than $F_{c}$.
Next, we will propose a cooperative caching scheme to determine where the predicted popular contents should be cached.


\subsection{Cooperative Caching Based on DRL}

We consider that the computation capability of each RSU is powerful enough that the cooperative caching can be determined within a short time. The main goal is to find an optimal cooperative caching policy based on DRL to minimize the content transmission delay. Next, we will formulate the DRL framework and then introduce the DRL algorithm.

\subsubsection{DRL Framework}
\
\newline
\indent
The DRL framework includes the state, action and reward. The training process is divided into slots. For the current slot $t$, the local RSU observes the current state $s(t)$ and decides the current action $a(t)$ based on $s(t)$ according to a policy $\pi$, which generates the action based on the state at each slot. Then the local RSU obtains the current reward $r(t)$ and observes the next state $s(t+1)$, which is transitioned from the current state $s(t)$. We will design $s(t)$, $a(t)$ and $r(t)$, respectively, for this DRL framework.

\paragraph{State}
\
\newline
\indent
We consider the contents cached by the local RSU as the current state $s(t)$.
In order to focus on the contents with high popularity, the contents of the state space $s(t)$ are sorted in descending order of the predicted content popularity of the $F_c$ popular contents; thus the current state can be expressed as $s(t)=\left(s_{1}, s_{2}, \ldots, s_{c}\right)$, where $s_{i}$ is the $i$th most popular content.


\paragraph{Action}
\
\newline
\indent
Action $a(t)$ represents whether the contents cached in the local RSU need to be relocated or not. Among the $F_c$ predicted popular contents, the contents that are not cached in the local RSU form a set $\mathbb{N}$. If $a(t)=1$, the local RSU randomly selects $n(n