diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcqxr" "b/data_all_eng_slimpj/shuffled/split2/finalzzcqxr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcqxr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn this paper we study the problem of the optimal dividend payment strategy\nwhich maximizes the expected discounted sum of dividends, in a\nmultidimensional setup of $n$ associated insurance companies. We assume that\nthe surplus process follows a multidimensional compound Poisson process. The\ngeneral manager of the companies has the possibility to exercise an\nirreversible switch into another regime at any time; we also take into account\nan expected discounted value at ruin, which is due at the first time of ruin\nof one of the companies, and may depend on the value of the surplus of all the\ncompanies both at and before this time of ruin. This ruin value is a\ngeneralization to the multidimensional setting of the Gerber-Shiu penalty\nfunctions introduced in Gerber and Shiu \\cite{GS 1998}.\n\nThe problem of optimal dividend payments in the one-dimensional case was\nproposed by de Finetti \\cite{De Finetti} and it was studied in different model\nsetups. In the compound Poisson risk model, this problem was studied by Gerber\n\\cite{Gerber} using a limit of an associated discrete problem, and by Azcue\nand Muler \\cite{AM 2005} using a dynamic programming approach; see also an\noverview on this problem in Schmidli \\cite{Schmidli book 2008} and in Azcue\nand Muler \\cite{AM Libro}. For the limit diffusion approximations, see for\nexample Asmussen and Taksar \\cite{Asmussen Taksar 1997} and for spectrally\nnegative L\\'{e}vy risk processes see, for instance, Avram, Palmowski and\nPistorious \\cite{APP 2007} and Loeffen \\cite{Loeffen2008}.\n\nIn the one dimensional case, the final value of the portfolio at ruin is\nnon-positive and it is called a penalty. Let us mention for instance Dickson\nand Waters \\cite{DicksonWaters2004}, where the shareholders take care of the\ndeficit at ruin; Gerber, Lin and Yang \\cite{GerberLinYang2006} where the\npenalty is a function depending on the deficit at ruin; Thonhauser and\nAlbrecher \\cite{TA} where they address the optimal dividend problem with\nconstant penalty. The optimal dividend problem in the spectrally negative\nL\\'{e}vy setting was solved by Loeffen and Renaud \\cite{LoeffenRenaud2010}\nwith an affine penalty function, and by Avram, Palmowski and Pistorius\n\\cite{APP 2015} with a general penalty function depending on the deficit at ruin.\n\nThe one dimensional dividend problem with the possibility of an irreversible\nswitch was addressed by Ly Vath, Pham and Villeneuve \\cite{LPV} in the\nBrownian motion setup and by Azcue and Muler \\cite{AM Switching} in the\ncompound Poisson setting.\n\nThe problem of dividend payment in the case of two insurances companies in the\ncompound Poisson risk model was studied by Czarna and Palmowski \\cite{CP} for\na particular dividend strategy of reflecting two-dimensional risk process from\nthe line, and by Albrecher, Azcue and Muler \\cite{AlAZMU} where they study the\noptimal dividend strategy for two collaborating companies.\n\nIn this paper, the multidimensional dividend problem is a mixed singular\ncontrol\/optimal problem. 
Its associated Hamilton-Jacobi-Bellman equation (HJB)\ninvolves a first-order integro-differential operator, an obstacle operator and\n$n$ derivative constraints; the integro-differential operator corresponds to\nthe discounted infinitesimal operator of the compound Poisson process, the\nobstacle is related to the value of the portfolio after switching and the\nderivative constraints are related to the dividend payments of the companies.\nWe prove that the optimal value function is a viscosity solution of the HJB\nequation, that it can be characterized as the smallest viscosity supersolution\nand also that a convergent limit of a family of admissible strategies that is\na viscosity solution of the associated HJB equation should be the optimal\nvalue function (verification result). These results are natural extensions of\nthe results of \\cite{AM Switching} to the multidimensional setting.\n\nThe way in which the optimal value function\\ solves the HJB equation in the\n$n$-dimensional state space suggests the optimal local control: in the closed\nset where the optimal value function coincides with the obstacle\n(\\textit{switch region}), an immediate switch should be done; in the interior\nof the set where the integro-differential operator is zero\n(\\textit{non-action} \\textit{region}), no dividends are paid; and in the\ninterior of the set in which one or more of the derivative constraints are\ntight (\\textit{dividend payment region}), the corresponding companies pay a\nlump sum of dividends. However, it is not clear what the optimal local control\nis in the \\textit{free boundaries} between the non-action region and the\ndividend payment region. In the one dimensional case the \"free boundaries\" are\nindeed \"free points\", and it can be seen that the optimal local control at\nthese points is just to pay all the incoming premium as dividends, so the\ncontrol surplus stays there until the arrival of the next claim. This is the\nreason why the optimal strategy has a band structure and this free points can\nbe obtained by one-dimensional optimization techniques, see \\cite{AM\nSwitching}. It is a hard task to obtain the free boundaries in the\nmultidimensional setting and there is no hope of finding a closed-form\nsolution for the optimal value function. The main contribution of this paper\nis to provide a numerical method to approximate (locally uniformly) the\noptimal value function by a sequence of sub-optimal value functions of\nadmissible strategies defined in an $n$-dimensional grid. These sub-optimal\nvalue functions solve a discrete version of the HJB equation, and the\ncorresponding sub-optimal strategies are constructed partitioning the grid in\nswitch, non-action and dividend payment regions; so we also obtain numerical\napproximations of the optimal switch, non-action and dividend payment regions\nand the free boundaries between them.\n\nFor a convergence analysis of a numerical scheme for multidimensional singular\ncontrol problems in the diffusion setting using Markov chain approximation\nmethods, let us mention Kushner and Martins \\cite{KM} and Budhiraja and Ross\n\\cite{BR}; see also the book of Kushner and Dupuis \\cite{KD} for an exhaustive\nsurvey. 
Regarding convergence of numerical schemes using the viscosity\nsolution approach, let us mention for instance Souganidis \\cite{S} and Barles\nand Souganidis \\cite{BS}, where they propose a numerical scheme for\nnon-singular control problems in the context of the diffusion setting; roughly\nspeaking, they prove that the solutions of the numerical scheme converge to a\nviscosity solution of the associated HJB equation and then, using a uniqueness\nargument, they obtain the convergence result. In the numerical method of the\npresent work, there is not uniqueness of viscosity solutions in the HJB\nequation; nevertheless, we construct numerically an increasing sequence of\nvalue functions of a family of admissible strategies whose limit is a\nviscosity solution of the associated HJB equation; then, using the\nverification result mentioned above, we deduce that this limit is the optimal\nvalue function.\n\nAs an application, we present the optimal time of merger (as change of regime\nat the switch time) for two insurance companies. We show examples where the\nnon-action region could be non-connected even for exponential claim size\ndistributions. For a criteria of merger being an advantage over keeping the\ntwo stand-alone companies under barrier strategies see Gerber and Shiu\n\\cite{GS Merger}.\n\nThe rest of the paper is organized as follows. In Section 2, we introduce the\nmodel and derive some basic properties of the optimal value function. In\nSection 3, we show that the optimal value function is a viscosity solution of\nthe corresponding (HJB) equation; we also characterize it as the smallest\nviscosity supersolution and give a verification result. In Section 4, we\nconstruct a family of admissible strategies at any point in a suitable grid.\nIn Section 5, we show that the discrete scheme convergences locally uniformly\nby taking a suitable sequence of embedded grids. In Section 6, we present\nexamples of the problem of optimal merger time. Finally, in Section 7, there\nis an Appendix with the proofs of the technical lemmas.\n\nWe use the following notation: $\\mathbf{R}_{+}^{n}=[0,\\infty)^{n}$, $\\leq$\nrefers to the element-wise order on $\\mathbf{R}^{n}$, $\\mathbf{1}%\n=(1,1,\\ldots,1)\\in$ $\\mathbf{N}^{n}$, $\\left( \\mathbf{e}_{i}\\right)\n_{i=1,...,n}\\ $is the standard basis of $\\mathbf{R}^{n}$, $\\left[\n\\mathbf{x},\\mathbf{y}\\right] =\\left\\{ \\mathbf{z}\\in\\mathbf{R}^{n}%\n:\\mathbf{x}\\leq\\mathbf{z}\\leq\\mathbf{y}\\right\\} $, $\\mathbf{x}\\vee\n\\mathbf{y}=(x_{1}\\vee y_{1},...,x_{n}\\vee y_{n})$, $\\mathbf{x}\\wedge\n\\mathbf{y}=(x_{1}\\wedge y_{1},...,x_{n}\\wedge y_{n})$.\n\n\\section{Model \\label{Seccion Modelistica}}\n\nLet us consider that the surplus process of $n$ companies, or branches of the\nsame company, follows an $n$-dimensional compound Poisson process with drift,\nthat means that the uncontrolled process $\\mathbf{X}_{t}\\in$ $\\mathbf{R}%\n_{+}^{n}$ can be written as\n\\begin{equation}\n\\mathbf{X}_{t}=\\mathbf{x}^{0}+\\mathbf{p}t-%\n{\\displaystyle\\sum\\nolimits_{k=1}^{N_{t}}}\n\\mathbf{U}_{k}.\\label{UncontrolledSurplusOriginal}%\n\\end{equation}\nHere $\\mathbf{x}^{0}\\in$ $\\mathbf{R}_{+}^{n}$ is the initial surplus,\n$\\mathbf{p}=(p_{1},...,p_{n})$ where $p_{i}>0$ is the premium rate of company\n$i$, $N_{t}$ is a Poisson process with intensity $\\lambda$ and the downward\njumps $\\mathbf{U}_{k}\\in\\mathbf{R}_{+}^{n}$ are i.i.d. vector random vectors\nwith joint multivariate distribution function $F$. 
We also assume that\n$\\mathbb{E(}\\left\\Vert \\mathbf{U}_{k}\\right\\Vert \\mathbb{)<\\infty}$ and\n$F(\\mathbf{0})=0$. We call $\\tau_{k}$ the time of arrival of the $k$-th jump\nof the process, so $N_{t}=\\max\\{k:\\tau_{k}\\leq t\\}$.\n\nWe can describe this model in a rigorous way by defining its filtered\nprobability space $(\\Omega,\\mathcal{F},\\left( \\mathcal{F}_{t}\\right)\n_{t\\geq0},\\mathbb{P})$, where\n\\[\n\\Omega=\\{(\\tau_{k},\\mathbf{U}_{k})_{k\\geq1}\\in\\left( \\lbrack0,\\infty\n)\\times\\mathbf{R}_{+}^{n}\\right) ^{\\mathbf{N}}:\\tau_{k}<\\tau_{k+1}\\}\n\\]\nand $\\mathcal{F}_{t}$ is the $\\sigma$-field generated by the set $\\{(\\tau\n_{k},\\mathbf{U}_{k}):\\tau_{k}\\leq t\\}$. The uncontrolled surplus process\n$\\mathbf{X}_{t}$ is an $\\mathcal{F}_{t}$-adapted c\\`{a}dl\\`{a}g (right\ncontinuous with left limits) stochastic process. Each company pays dividends\nto the same shareholders, let $\\mathbf{L}_{t}\\in$ $\\mathbf{R}_{+}^{n}$ be the\nvector of cumulative amount of dividends paid out up to time $t$ by each\ncompany; we say that the dividend payment strategy $\\mathbf{L}_{t}$ is\nadmissible if it is a non decreasing process, c\\`{a}dl\\`{a}g, adapted with\nrespect to the filtration $\\left( \\mathcal{F}_{t}\\right) _{t\\geq0}$ and\nsatisfies $\\mathbf{L}_{0}\\geq0$ and $\\mathbf{L}_{t}\\leq\\mathbf{X}_{t}$ for any\n$0\\leq t<\\tau^{\\mathbf{L}}$, where $\\tau^{\\mathbf{L}}$ is the time in which\nthe process exits the set $\\mathbf{R}_{+}^{n}$ due to a jump, that is\n\\begin{equation}\n\\tau^{\\mathbf{L}}:=\\inf\\{\\tau_{k}:\\mathbf{X}_{\\tau_{k}}-\\mathbf{L}_{\\tau\n_{k}^{-}}\\notin\\mathbf{R}_{+}^{n}\\}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{Definicion Tau L}%\n\\end{equation}\nWe define the \\textit{controlled} surplus process as\n\\begin{equation}\n\\mathbf{X}_{t}^{\\mathbf{L}}:=\\mathbf{X}_{t}-\\mathbf{L}_{t}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{XL}%\n\\end{equation}\nIt is not possible to pay any dividends once the controlled process\n$\\mathbf{X}_{t}^{\\mathbf{L}}$ exits $\\mathbf{R}_{+}^{n}$ so we extend\n$\\mathbf{L}_{t}=\\mathbf{L}_{\\tau^{\\mathbf{L}}{}^{-}}$ for $t\\geq\n\\tau^{\\mathbf{L}}$. Note that $\\mathbf{X}_{\\tau^{\\mathbf{L}}}^{\\mathbf{L}%\n}=\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}}^{\\mathbf{L}}-\\mathbf{U}_{k_{0}}$ if\n$\\tau^{\\mathbf{L}}=\\tau_{k_{0}}$. At time $\\tau^{\\mathbf{L}}$, the\nshareholders pay a penalty $\\upsilon(\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}%\n}^{\\mathbf{L}},\\mathbf{U}_{k_{0}})$ (or get a reward in the case that\n$\\upsilon(\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}}^{\\mathbf{L}},\\mathbf{U}_{k_{0}%\n})$ is negative) depending on the surplus prior to ruin $\\mathbf{X}%\n_{\\tau^{\\mathbf{L}}{}^{-}}^{\\mathbf{L}}$ and the size $\\mathbf{U}_{k_{0}}$ of\nthe last jump of the uncontrolled process. Denote\n\\begin{equation}\nB=\\{(\\mathbf{x},\\mathbf{\\alpha})\\in\\mathbf{R}_{+}^{n}\\times\\mathbf{R}_{+}%\n^{n}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ s.t. }\\mathbf{x}-\\mathbf{\\alpha}\\notin\\mathbf{R}_{+}^{n}%\n\\},\\label{Definicion B}%\n\\end{equation}\nthe function $\\upsilon:B\\rightarrow\\mathbf{R}$ generalizes the concept of\npenalty at ruin. 
It is natural to assume that the penalty function\n$\\upsilon(\\mathbf{x},\\mathbf{\\alpha})$ is non-increasing on $\\mathbf{x}$ and\nnon-decreasing on $\\mathbf{\\alpha}$; furthermore, we assume that\n$\\mathbb{E}\\left( \\left\\vert \\upsilon(\\mathbf{0},\\mathbf{U}_{1})\\right\\vert\n\\right) <\\infty$. The manager of the company also has the possibility at any\ntime $0\\leq t<\\tau^{\\mathbf{L}}$ to exercise an irreversible switch whose\nvalue is associated to a given function $f:\\mathbf{R}_{+}^{n}\\rightarrow\n\\mathbf{R} $. We assume that the function $f$ is either right continuous and\nnon decreasing or continuous.\n\nGiven an initial surplus $\\mathbf{x}\\geq0$, let us denote by $\\Pi_{\\mathbf{x}%\n}$ the set of all pairs $\\pi=(\\mathbf{L},\\overline{\\tau})$ where $\\mathbf{L}$\nis an admissible dividend payment strategy and $\\overline{\\tau}$ is a switch\ntime\\textit{. }We define%\n\\begin{equation}%\n\\begin{array}\n[c]{lll}%\nV_{\\pi}(\\mathbf{x}) & = & \\mathbb{E}_{\\mathbf{x}}\\left( \\int_{0^{-}%\n}^{\\overline{\\tau}\\wedge\\tau^{\\mathbf{L}}}e^{-cs}\\mathbf{a}\\cdot\nd\\mathbf{L}_{s}+I_{\\{\\overline{\\tau}<\\tau^{\\mathbf{L}}\\}}e^{-c\\overline{\\tau}%\n}f(\\mathbf{X}_{\\overline{\\tau}}^{\\mathbf{L}})\\right) \\\\\n& & -\\mathbb{E}_{\\mathbf{x}}\\left( I_{\\{\\overline{\\tau}\\geq\\tau^{\\mathbf{L}%\n}\\}}e^{-c\\tau^{\\mathbf{L}}}\\upsilon(\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}%\n}^{\\mathbf{L}}{},\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}}^{\\mathbf{L}}%\n{}-\\mathbf{X}_{\\tau^{\\mathbf{L}}}^{\\mathbf{L}})\\right)\n\\end{array}\n\\label{Definicion VLTau}%\n\\end{equation}\nfor any $\\pi\\in\\Pi_{\\mathbf{x}}$ and the optimal value function as%\n\\begin{equation}\nV(\\mathbf{x})=\\sup\\nolimits_{\\pi\\in\\Pi_{\\mathbf{x}}}V_{\\pi}(\\mathbf{x}%\n).\\label{Definicion V}%\n\\end{equation}\nThe value $c>0$ is a constant discount factor, and $a_{i}>0$ are the weights\nof the dividends paid by the $i$-th company. The integral in\n(\\ref{Definicion VLTau}) is defined as\n\\[\n\\int_{0^{-}}^{t}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}=\\mathbf{a}%\n\\cdot\\mathbf{L}_{0}+\\int_{0}^{t}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}.\n\\]\nNote that we are allowing to make a lump dividend payment $\\mathbf{L}%\n_{\\overline{\\tau}}-\\mathbf{L}_{\\overline{\\tau}^{-}}$ at the switch time\n$\\overline{\\tau}<\\tau^{\\mathbf{L}}$ and also at time zero.\n\n\\begin{remark}\n[on the multivariate compound Poisson process]The most important cases of\nmultivariate compound Poisson process we are considering in the examples\ncorrespond to $m$ independent sources of risk that are coinsured between the\n$n$ insurance companies with different proportions. More precisely, let us\nassume that there are $m$ independent (univariate) compound Poisson processes\ngiven by%\n\\begin{equation}\nC^{l}(t)=\\sum\\nolimits_{k=1}^{N_{t}^{l}}u_{k}^{l},\\label{CP_Univariadol}%\n\\end{equation}\nwhere $N_{t}^{l}$ is a Poisson process with intensity $\\lambda_{l}$ and\n$u_{k}^{l}$ with $k=1,2,...$ are i.i.d. random variables with distribution\n$F_{l}$. Assume that the total claim arrival process is given by\n\\[%\n{\\displaystyle\\sum\\nolimits_{j=1}^{N_{t}}}\nu_{j}:=\\sum\\nolimits_{l=1}^{m}C^{l}(t)\n\\]\nand that the $i$-th company pays a proportion $a_{il}$ of any claim of the $l\n$-th compound Poisson process $C_{t}^{l}$. We denote $A:=(a_{il})\\in\n\\mathbf{R}^{n\\times m}$ with $\\sum_{i=1}^{n}a_{il}=1$ and $a_{il}\\geq0$. 
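For instance, an illustrative choice (not one of the examples treated later in the paper) with $n=2$ companies and $m=3$ independent sources is
\[
A=\left(
\begin{array}
[c]{ccc}%
1 & 0 & 0.4\\
0 & 1 & 0.6
\end{array}
\right) ,
\]
so the first company bears all the claims of the first source, the second company bears all the claims of the second source, and the claims of the third source are shared in proportions $0.4$ and $0.6$; a claim of size $u$ coming from the third source then produces the downward jump $\mathbf{U}_{k}=(0.4u,0.6u)$ in (\ref{UncontrolledSurplusOriginal}).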
The\ncompound Poisson process $\\sum_{l=1}^{m}C^{l}(t)$ has intensity $\\lambda\n=\\sum_{l=1}^{m}\\lambda_{l}$. Furthermore,%\n\\begin{equation}%\n{\\displaystyle\\sum\\nolimits_{k=1}^{N_{t}}}\n\\mathbf{U}_{k}=A\\cdot\\left( C^{1}(t),...,C^{m}(t)\\right) ^{\\prime\n},\\label{CP_Multivariado_RiesgosIndependientes}%\n\\end{equation}\nwhere $N_{t}=\\sum_{l=1}^{m}N_{t}^{l}$ is a compound Poisson process with\nintensity $\\lambda=\\sum_{l=1}^{m}\\lambda_{l}$ and multivariate distribution\n\\[\nF(\\mathbf{x})=\\mathbb{P}(\\mathbf{U}\\leq\\mathbf{x})=\\sum\\nolimits_{l=1}%\n^{m}\\frac{\\lambda_{l}}{\\lambda}F_{l}(\\min\\nolimits_{1\\leq i\\leq n,~a_{il}%\n\\neq0}\\{\\frac{x_{i}}{a_{il}}\\}).\n\\]\n\n\nWithout loss of generality, we can assume that the columns $\\mathbf{a}%\n_{l}:=(a_{il})_{i=1,..,n}$ of the matrix $A$ are different, because if\n$\\mathbf{a}_{l_{1}}=\\mathbf{a}_{l_{2}}$, we can regard $C^{l_{1}}(t)+C^{l_{2}%\n}(t)$ as just one independent source of risk. For instance, in the special\ncase in which the $n$ uncontrolled one-dimensional surplus processes of the\ncompanies are independent compound Poisson processes with intensity\n$\\lambda_{i}$ and claim size distribution $F_{i}(x_{i})$ ($i=1,...,n$), $A$\nwould be the identity matrix and\n\\begin{equation}\nF(\\mathbf{x})=%\n{\\displaystyle\\sum\\nolimits_{i=1}^{n}}\n\\lambda_{i}F_{i}(x_{i})\/\\lambda\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{independent_Surpluses}%\n\\end{equation}\n\n\\end{remark}\n\n\\begin{remark}\n[on the penalty function $\\upsilon$]Consider the multivariate uncontrolled\nPoisson process (\\ref{UncontrolledSurplusOriginal}) described in the previous\nremark. Suppose that the penalty (or reward) function depends on two factors:\n(1) which of the $m$ independent compound Poisson processes\n(\\ref{CP_Univariadol}) make the controlled process exit $\\mathbf{R}_{+}^{n}$\nand (2) the deficit at this exit time. Let $\\mathbf{a}_{l}=(a_{il}%\n)_{i=1,..,n}$ be the $l$-th column of $A$, then we have that%\n\\begin{equation}\n\\upsilon(\\mathbf{x},\\mathbf{\\alpha})=\\sum\\nolimits_{l=1}^{m}\\upsilon\n_{l}(\\mathbf{x}-\\mathbf{\\alpha})I_{\\{\\mathbf{\\alpha}=\\beta_{l}\\mathbf{a}%\n_{l}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ with }\\beta_{l}>0\\}},\\label{Funcion NU}%\n\\end{equation}\nwhere $\\upsilon_{l}(\\mathbf{X}_{\\tau^{\\mathbf{L}}}^{\\mathbf{L}})$ is the\npenalty (or reward) when the process $\\mathbf{X}_{t}^{\\mathbf{L}}$ exits\n$\\mathbf{R}_{+}^{n}$ due to a jump of $C^{l}$.\n\nIf $n=m=1$, this definition of penalty function $\\upsilon$ includes: (1) the\npenalty function defined in Gerber and Shiu \\cite{GS 1998}, taking\n$\\upsilon(x,\\alpha)=w(x,\\left\\vert x-\\alpha\\right\\vert )\\geq0$; (2) the case\nin which the shareholders take care of the deficit at ruin, taking\n$\\upsilon(x,\\alpha)=\\alpha-x>0\\ $(Dickson and Waters \\cite{DicksonWaters2004}%\n); (3) the case in which the insurer earns continuously $\\Lambda$ as long as\nthe company is alive. This is equivalent to consider $\\upsilon(x,\\alpha)=$\n$\\Lambda\/c$ (Thonhauser and Albrecher \\cite{TA}).\n\nIn the multidimensional framework, the function $\\upsilon$ could be negative,\nand so considered as a reward. For example, in the case of two companies with\nindependent compound Poisson processes as in (\\ref{independent_Surpluses}), we\ncan consider the situation in which if one of the companies goes to ruin, the\nother survives and continues paying dividends with its own optimal policy. 
In this case, $A$ is the $2\times2$ identity matrix and
\begin{equation}
\upsilon(\mathbf{x},\mathbf{\alpha})=-(V_{2}(x_{2})I_{\{x_{1}-\alpha_{1}<0\}}+V_{1}(x_{1})I_{\{x_{2}-\alpha_{2}<0\}}),\label{Nu the dos companias independientes}
\end{equation}
where $V_{i}$ is the optimal dividend payment function of the $i$-th company. Note that $\upsilon(\mathbf{x},\mathbf{\alpha})$ is non-increasing on $\mathbf{x}$ and non-decreasing on $\mathbf{\alpha}$.
\end{remark}

\begin{remark}
[on the switch-value function $f$]The switch-value function $f(\mathbf{x})$ can be thought of as the price at which the shareholders can sell their shares when the controlled current surplus of the $n$ companies is $\mathbf{x}$. It can also be thought of as the present value of all the dividends paid in the future after a change of regime is decided by the manager (this change of regime could have a cost); for instance, the manager could decide to merge the $n$ companies, that is, the $n$ companies put together all their surpluses, pay all the claims and pay dividends until the merged surplus becomes negative. In the case of merger,
\begin{equation}
f(\mathbf{x})=V_{M}(x_{1}+x_{2}+...+x_{n}-c_{M})I_{\{x_{1}+x_{2}+...+x_{n}\geq c_{M}\}},\label{g de merger}
\end{equation}
where the one-dimensional function $V_{M}$ is the optimal dividend payment function of the merger of all the companies and $c_{M}\geq0$ is the merger cost. So, $f$ is right continuous and non-decreasing. The case $n=2$, $A$ the $2\times2$ identity matrix, $\upsilon$ as in (\ref{Nu the dos companias independientes}) and
\begin{equation}
f(x_{1},x_{2})=V_{M}(x_{1}+x_{2}-c_{M})I_{\{x_{1}+x_{2}\geq c_{M}\}}\label{Merger 2x2}
\end{equation}
corresponds to the problem of optimal time of merger proposed by Gerber and Shiu \cite{GS Merger}. The case where no switching is allowed is also included in this work: just consider $f$ small enough (see Remark \ref{V sin obstaculo}).
\end{remark}

In the next proposition we give sufficient conditions under which the function $V$ is well defined. We say that a function $h:\mathbf{R}_{+}^{n}\rightarrow\mathbf{R}$ satisfies the growth condition \textbf{GC} if
\begin{equation}
h(\mathbf{x})/h_{0}(\mathbf{x})\ \text{is upper bounded in }\mathbf{R}_{+}^{n}\text{,}\label{gc}
\end{equation}
where
\begin{equation}
h_{0}(\mathbf{x}):=e^{\frac{c}{2n}\sum_{i=1}^{n}\frac{x_{i}}{p_{i}}}.\label{ho}
\end{equation}

\begin{proposition}
\label{Crecimiento de V} If the functions $f$ and $S(\mathbf{x}):=\sup_{\left\{ \mathbf{\alpha}:\left( \mathbf{x},\mathbf{\alpha}\right) \in B\right\} }\left( -\upsilon(\mathbf{x},\mathbf{\alpha})\right) $ satisfy the growth condition \textbf{GC}, then $V$ is well defined, satisfies the growth condition \textbf{GC} and $V\geq-\mathbb{E}\left( \left\vert \upsilon(\mathbf{0},\mathbf{U}_{1})\right\vert \right) .$
\end{proposition}

\textit{Proof}. 
Take any initial surplus $\\mathbf{x}\\geq\\mathbf{0}$ and any\nadmissible strategy $\\pi=(\\mathbf{L},\\overline{\\tau})\\in\\Pi_{\\mathbf{x}}$,\nsince $\\mathbf{L}_{t}\\leq\\mathbf{X}_{t}\\leq\\mathbf{x}+\\mathbf{p}t$, we have\n(using integration by parts),%\n\n\\[%\n\\begin{array}\n[c]{lll}%\n\\mathbb{E}_{\\mathbf{x}}\\left( \\int\\nolimits_{0^{-}}^{\\tau^{\\mathbf{L}}%\n\\wedge\\overline{\\tau}}e^{-cs}dL_{i}(s)\\right) & = & \\mathbb{E}_{\\mathbf{x}%\n}\\left( \\int\\nolimits_{0}^{\\tau^{\\mathbf{L}}\\wedge\\overline{\\tau}}%\ne^{-cs}dL_{i}(s)\\right) +L_{i}(0)\\\\\n& \\leq & \\mathbb{E}_{\\mathbf{x}}\\left( \\int\\nolimits_{0}^{\\tau^{\\mathbf{L}%\n}\\wedge\\overline{\\tau}}e^{-cs}d(x_{i}+p_{i}s)\\right) +x_{i}\\leq x_{i}%\n+\\frac{p_{i}}{c}.\n\\end{array}\n\\]\nSo%\n\n\\[\n\\mathbb{E}_{\\mathbf{x}}\\left( \\int_{0^{-}}^{\\overline{\\tau}\\wedge\n\\tau^{\\mathbf{L}}}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}\\right)\n\\leq\\mathbf{a}\\cdot(\\mathbf{x}+\\frac{\\mathbf{p}}{c})\\leq d_{1}e^{\\frac{c}%\n{2n}\\sum_{i=1}^{n}\\frac{x_{i}}{p_{i}}}=d_{1}h_{0}(\\mathbf{x})\n\\]\nfor $d_{1}\\geq2n\\max\\left\\{ a_{1},...,a_{n}\\right\\} \\max\\left\\{\np_{1},...,p_{n}\\right\\} \/c$ since%\n\\[\ne^{\\frac{c}{2n}\\sum_{i=1}^{n}\\frac{x_{i}}{p_{i}}}\\geq1+\\frac{c}{2n}\\sum\n_{i=1}^{n}\\frac{x_{i}}{p_{i}}.\n\\]\nConsider the processes $z(s)=\\mathbf{X}_{s}^{\\mathbf{L}}{}$ defined in\n(\\ref{XL}) and let us call $\\tau=\\tau^{\\mathbf{L}}$. We get that\n$\\mathbf{z}(s)\\leq\\mathbf{x}+\\mathbf{p}s$ and $f$ satisfies (\\ref{gc}) in\n$\\mathbf{R}_{+}^{n}$, so\n\\[\n\\mathbb{E}_{\\mathbf{x}}\\left( e^{-c\\overline{\\tau}}f(\\mathbf{z}%\n_{\\overline{\\tau}})I_{\\{\\tau>\\overline{\\tau}\\}}\\right) \\leq d_{2}%\ne^{\\sum_{i=1}^{n}\\frac{cx_{i}}{2np_{i}}}=d_{2}h_{0}(\\mathbf{x})\n\\]\nfor $d_{2}$ large enough. Similarly,\n\\[\n\\mathbb{E}_{\\mathbf{x}}\\left( -e^{-c\\tau}\\upsilon(\\mathbf{z}_{\\tau^{-}%\n},\\mathbf{z}_{\\tau^{-}}-\\mathbf{z}_{\\tau})I_{\\{\\tau\\leq\\overline{\\tau}%\n\\}}\\right) \\leq\\mathbb{E}_{\\mathbf{x}}\\left( e^{-c\\tau}I_{\\{\\tau\n\\leq\\overline{\\tau}\\}}S(\\mathbf{z}_{\\tau^{-}})\\right) \\leq d_{3}e^{\\sum\n_{i=1}^{n}\\frac{cx_{i}}{2np_{i}}}=d_{3}h_{0}(\\mathbf{x})\n\\]\nfor $d_{3}$ large enough. Then $V_{\\pi}$ (and so $V$) satisfy the growth\ncondition (\\ref{gc}) in $\\mathbf{R}_{+}^{n}$. Finally, since $\\tau$ is the\nfirst time that the controlled process $\\mathbf{X}^{\\mathbf{L}}$ leaves\n$\\mathbf{R}_{+}^{n}$, calling $\\mathbf{U}_{k_{0}}$ the jump size at $\\tau$, we\nhave $\\mathbf{z}_{\\tau}=\\mathbf{z}_{\\tau^{-}}-\\mathbf{U}_{k_{0}}\\geq\n\\mathbf{0}-\\mathbf{U}_{k_{0}}$. Since $-\\upsilon(\\mathbf{x},\\mathbf{\\alpha})$\nis non-decreasing on $\\mathbf{x}$ and non-increasing on $\\mathbf{\\alpha}$, we\nobtain taking the strategy with no switching and no dividend payment, that\n\\[\nV(\\mathbf{x})\\geq\\mathbb{E}_{\\mathbf{x}}\\left( -e^{-c\\tau}\\upsilon\n(\\mathbf{z}_{\\tau^{-}},\\mathbf{U}_{k_{0}}\\right) \\geq\\mathbb{E}_{\\mathbf{x}%\n}\\left( -e^{-c\\tau}\\upsilon(\\mathbf{0},\\mathbf{U}_{k_{0}})\\right)\n\\geq-\\mathbb{E}\\left( \\left\\vert \\upsilon(\\mathbf{0},\\mathbf{U}%\n_{1})\\right\\vert \\right) \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{. 
}\\blacksquare\n\\]\n\n\n\\begin{remark}\nLet us extend the definition of $\\upsilon$ to the closure of $B$ as\n$\\upsilon(\\mathbf{x},\\mathbf{\\alpha})=\\inf_{\\mathbf{\\beta}\\geq\\mathbf{\\alpha\n},\\left( \\mathbf{x},\\mathbf{\\beta}\\right) \\in B}\\upsilon(\\mathbf{x}%\n,\\mathbf{\\beta})$. Since $-\\upsilon(\\mathbf{x},\\mathbf{\\alpha})$ is\nnon-decreasing on $\\mathbf{x}$ and non-increasing on $\\mathbf{\\alpha}$ then\n\\[\n\\sup\\nolimits_{\\mathbf{\\alpha}\\geq\\mathbf{0},\\mathbf{x}-\\mathbf{\\alpha}%\n\\notin\\mathbf{R}_{+}^{n}}(-\\upsilon(\\mathbf{x},\\mathbf{\\alpha}))\\leq\\max\n{}_{i=1,...n}(-\\upsilon(\\mathbf{x},x_{i}\\mathbf{e}_{i}))\n\\]\nand so the assumption on $\\upsilon$ of Proposition \\ref{Crecimiento de V}\nbecomes that $\\max{}_{i=1,...n}\\left( -\\upsilon(\\mathbf{x},x_{i}%\n\\mathbf{e}_{i})\\right) $ satisfies the growth condition \\textbf{GC}.\n\\end{remark}\n\n\\begin{remark}\n\\label{V sin obstaculo} By Proposition \\ref{Crecimiento de V}, taking any\nswitch-value function $f<-\\mathbb{E}\\left( \\left\\vert \\upsilon(\\mathbf{0}%\n,\\mathbf{U}_{1})\\right\\vert \\right) $, it is never optimal to switch. So, the\nproblem of maximizing the expected cumulative discounted dividend payments\nuntil $\\tau^{\\mathbf{L}}$ (without the possibility of switching) is a\nparticular case of the problem (\\ref{Definicion V}).\n\\end{remark}\n\n\\begin{remark}\n\\label{Monotonia de Obstaculos} Consider $f_{1}\\leq f_{2}$ and $\\upsilon\n_{1}\\geq\\upsilon_{2}$ . Let $V_{f_{1},\\upsilon_{1}}$ and $V_{f_{2}%\n,\\upsilon_{2}}$ be the corresponding optimal value functions, then it is\nstraightforward to see that $V_{f_{1},\\upsilon_{1}}\\leq V_{f_{2},\\upsilon_{2}%\n}$.\n\\end{remark}\n\n\\begin{remark}\nSince the optimal dividend payment function in the one-dimensional problem has\nlinear growth, see for instance Proposition 1.2 in \\cite{AM Libro}; the\nfunctions (\\ref{Nu the dos companias independientes})\\ and (\\ref{g de merger})\nsatisfy the conditions of Proposition \\ref{Crecimiento de V}\\textbf{.}\n\\end{remark}\n\nIn the next proposition, we show that $V$ is increasing and locally Lipschitz\n(so it is absolutely continuous).\n\n\\begin{proposition}\n\\label{V Lipschitz} $V$ is increasing, locally Lipschitz in $\\mathbf{R}%\n_{+}^{n}$ and satisfies for each $\\mathbf{x}\\in$ $\\mathbf{R}_{+}^{n}$, $h>0$\nand $1\\leq i\\leq n$,\n\\[\na_{i}h\\leq V(\\mathbf{x}+h\\mathbf{e}_{i})-V(\\mathbf{x})\\leq(e^{(c+\\lambda\n)h\/p_{i}}-1)V(\\mathbf{x}).\n\\]\n\n\\end{proposition}\n\n\\textit{Proof}. Given $h>0$ and $\\mathbf{x}\\in$ $\\mathbf{R}_{+}^{n}$, consider\nfor each $\\varepsilon>0$ an admissible strategy $\\pi_{\\mathbf{x}}=\\left(\n\\mathbf{L},\\overline{\\tau}\\right) \\in\\Pi_{\\mathbf{x}}$ such that $V_{\\pi\n}(\\mathbf{x})\\geq V(\\mathbf{x})-\\varepsilon$. \\ Let us define an strategy\n$\\widetilde{\\pi}\\in\\Pi_{\\mathbf{x}+h\\mathbf{e}_{i}}$ as follows: the $i$-th\ncompany pays immediately $h$ as dividends and then follows the strategy $\\pi$\n$\\in\\Pi_{\\mathbf{x}}$. 
For each $\\varepsilon>0$, we get\n\\[\nV(\\mathbf{x}+h\\mathbf{e}_{i})\\geq V_{\\widetilde{\\pi}}(\\mathbf{x}%\n+h\\mathbf{e}_{i})=V_{\\pi}(\\mathbf{x})+a_{i}h\\geq V(\\mathbf{x})-\\varepsilon\n+a_{i}h,\n\\]\nso we obtain the first inequality.\n\nNow consider for each $\\varepsilon>0$ and $1\\leq i\\leq n$, a strategy\n$\\pi=(\\mathbf{L},\\overline{\\tau})\\in\\Pi_{\\mathbf{x}+h\\mathbf{e}_{i}}$ such\nthat\n\\[\nV_{\\pi}(\\mathbf{x}+h\\mathbf{e}_{i})\\geq V(\\mathbf{x}+h\\mathbf{e}%\n_{i})-\\varepsilon.\n\\]\nTake now the following admissible strategy $\\widetilde{\\pi}=(\\widetilde\n{\\mathbf{L}},\\widetilde{\\tau})\\in\\Pi_{\\mathbf{x}}$ starting with surplus\n$\\mathbf{x}$: the $i$-th company pays no dividends and the other companies pay\nall the incoming premium as dividends as long as $\\mathbf{X}_{t}%\n^{\\widetilde{\\mathbf{L}}}<\\mathbf{x}+h\\mathbf{e}_{i}$; after the current\nsurplus reaches $\\mathbf{x}+h\\mathbf{e}_{i}$, follow strategy $\\pi$. Let us\ncall $\\tau^{\\widetilde{L}}$ the exit time of the process $\\mathbf{X}%\n_{t}^{\\widetilde{\\mathbf{L}}}$. If\n\\[\n\\tau:=\\min\\{t:\\mathbf{X}_{t}^{\\widetilde{\\mathbf{L}}}=\\mathbf{x}%\n+h\\mathbf{e}_{i}\\},\n\\]\nthen $\\widetilde{\\tau}=\\tau+\\overline{\\tau}$ and we get that $p_{i}\\tau\\geq\nh$. So,%\n\n\\[%\n\\begin{array}\n[c]{lll}%\nV(\\mathbf{x}) & \\geq & V_{\\widetilde{\\pi}}(\\mathbf{x})\\geq V_{\\pi}%\n(\\mathbf{x}+h\\mathbf{e}_{i})\\mathbb{E}\\left( e^{-c\\frac{h}{p_{i}}}%\nI_{\\{\\tau<\\tau^{\\widetilde{L}}\\}}\\right) \\geq\\left( V(\\mathbf{x}%\n+h\\mathbf{e}_{i})-\\varepsilon\\right) e^{-c\\frac{h}{p_{i}}}\\mathbb{P}%\n(\\tau<\\tau^{\\widetilde{L}})\\\\\n& \\geq & \\left( V(\\mathbf{x}+h\\mathbf{e}_{i})-\\varepsilon\\right)\ne^{-c\\frac{h}{p_{i}}}\\mathbb{P}(\\tau_{1}>\\frac{h}{p_{i}})=\\left(\nV(\\mathbf{x}+h\\mathbf{e}_{i})-\\varepsilon\\right) e^{-\\left( c+\\lambda\n\\right) \\frac{h}{p_{i}}}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{,}%\n\\end{array}\n\\]\nwhere $\\tau_{1}$ is the time of the first jump; so we get the second\ninequality. $\\blacksquare$\n\nIn order to distinguish the jumps of the controlled process due to the jumps\nof the uncontrolled process from the ones due to lump dividend payments, let\nus define an auxiliary process which includes the jump of the uncontrolled\nprocess occurring at time $t$ but excludes the lump dividend payment occurring\nat this time as\n\\begin{equation}\n\\mathbf{\\check{X}}_{t}^{\\mathbf{L}}=\\mathbf{X}_{t}-\\mathbf{L}_{t^{-}%\n}=\\mathbf{X}_{t^{-}}^{\\mathbf{L}}-\\left( \\mathbf{X}_{t^{-}}-\\mathbf{X}%\n_{t}\\right) \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{Xv}%\n\\end{equation}\nNote that $\\mathbf{\\check{X}}_{t}^{\\mathbf{L}}=\\mathbf{X}_{t^{-}}^{\\mathbf{L}%\n}-\\mathbf{U}_{k}$ if $t=\\tau_{k}$ and $\\mathbf{\\check{X}}_{t}^{\\mathbf{L}%\n}=\\mathbf{X}_{t^{-}}^{\\mathbf{L}}$ otherwise. Also, $\\mathbf{X}_{\\tau\n^{\\mathbf{L}}}^{\\mathbf{L}}=\\mathbf{\\check{X}}_{\\tau^{\\mathbf{L}}}%\n^{\\mathbf{L}}$ because no dividends are paid at the exit time $\\tau\n^{\\mathbf{L}}$.\n\n\\section{HJB equation\\label{Seccion Viscosidad}}\n\nIn this section we show that the optimal value function $V$ defined in\n(\\ref{Definicion V}) is a viscosity solution of the corresponding\nHamilton-Jacobi-Bellman (HJB) equation; moreover we characterize the optimal\nvalue function as the smallest viscosity supersolution with growth condition\n\\textbf{GC}. We also give a verification result for $V$. 
These results are a\ngeneralization to the multidimensional case of the ones given in Section 3 of\n\\cite{AM Switching} for the one dimensional case.\n\nThe HJB equation of problem (\\ref{Definicion V}) can be written as%\n\\begin{equation}\n\\max\\{\\mathbf{a}-{\\Greekmath 0272} V(\\mathbf{x}),\\mathcal{L}(V)(\\mathbf{x}),f(\\mathbf{x}%\n)-V(\\mathbf{x})\\}=0,\\label{HJB}%\n\\end{equation}\nwhere%\n\\begin{equation}\n\\mathcal{L}(V)(\\mathbf{x})=\\mathbf{p\\cdot}{\\Greekmath 0272} V(\\mathbf{x})-(c+\\lambda\n)V(\\mathbf{x})+\\mathcal{I}(V)(\\mathbf{x})-\\mathcal{R}(\\mathbf{x}%\n),\\label{L para Cramer-Lundberg}%\n\\end{equation}\n\n\\begin{equation}\n\\mathcal{I}(W)(\\mathbf{x}):=\\lambda\\int_{\\left( \\mathbf{x}-\\mathbf{\\alpha\n}\\right) \\in\\mathbf{R}_{+}^{n}}W(\\mathbf{x}-\\mathbf{\\alpha})dF(\\mathbf{\\alpha\n})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and }\\mathcal{R}(\\mathbf{x}):=\\lambda\\int_{\\left( \\mathbf{x}%\n-\\mathbf{\\alpha}\\right) \\notin\\mathbf{R}_{+}^{n}}\\upsilon(\\mathbf{x}%\n,\\mathbf{\\alpha})dF(\\mathbf{\\alpha})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{Def R e I}%\n\\end{equation}\n\n\nAs usual, the operator $\\mathcal{L}$ is the discounted infinitesimal generator\nof the uncontrolled surplus process $\\mathbf{X}_{t}$ defined in\n(\\ref{UncontrolledSurplusOriginal}); that is, for any continuously\ndifferentiable function $W:\\mathbf{R}_{+}^{n}\\rightarrow\\mathbf{R}$\\textbf{,}\nwe have\n\\[\n\\mathcal{L}(W)(\\mathbf{x})=\\lim_{t\\searrow0}\\frac{\\mathbb{E}_{\\mathbf{x}%\n}\\left( e^{-ct}W(\\mathbf{X}_{t})-W(\\mathbf{x})\\right) }{t}.\n\\]\nThus, if $W$ is a solution of $\\mathcal{L}(W)=0$ in an open set in\n$\\mathbf{R}_{+}^{n}$, then the process $e^{-ct}W(\\mathbf{X}_{t})$ is a\nmartingale in this set.\n\nThe HJB equation implies that $\\mathcal{L}(V)\\leq0$, the condition\n$\\mathcal{L}(V)=0$ in an open set in $\\mathbf{R}_{+}^{n}$ would suggest that\n(locally) the optimal dividend strategy consists on paying no dividends as\nlong as the current surplus is in this set. The HJB equation also implies that\n$V$ is always above $f$, so $f$ can be interpreted as \\textit{an obstacle }in\nequation (\\ref{HJB}). Moreover, the condition $V_{x_{i}}(\\mathbf{x})=a_{i}$ in\nan open set means that (locally) the optimal dividend strategy should be the\none in which the $i$-th company pays immediately a lump sum as dividends.\n\nWe prove in this section that, under the assumption%\n\\begin{equation}\n\\mathcal{R}:\\mathbf{R}_{+}^{n}\\rightarrow\\mathbf{R}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ is continuous,}%\n\\label{R connttinua}%\n\\end{equation}\nthe value function $V$ is a viscosity solution of the HJB equation\n(\\ref{HJB}). From now on, we assume that this assumption holds.\n\nCrandall and Lions \\cite{CL} introduced the concept of viscosity solutions for\nfirst-order Hamilton-Jacobi equations. It is the standard tool for studying\nHJB equations, see for instance Fleming and Soner \\cite{FS}. 
In the context of using viscosity solutions for the problem of dividend payment optimization in the one-dimensional case, see for instance \cite{AM Libro}.

\begin{definition}
\label{NuevaDefinicionSubySuper}A locally Lipschitz function $\underline{u}:\mathbf{R}_{+}^{n}\rightarrow\mathbf{R}$ is a viscosity subsolution of (\ref{HJB}) at $\mathbf{x}\in\mathbf{R}_{+}^{n}$ if any continuously differentiable function $\psi$ defined in $\mathbf{R}_{+}^{n}$ with $\psi(\mathbf{x})=\underline{u}(\mathbf{x})$ such that $\underline{u}-\psi$ reaches its maximum at $\mathbf{x}$ satisfies
\[
\max\{\mathbf{a}-\nabla\psi(\mathbf{x}),\mathcal{L}(\psi)(\mathbf{x}),f(\mathbf{x})-\psi(\mathbf{x})\}\geq0,
\]
and a locally Lipschitz function $\overline{u}:\mathbf{R}_{+}^{n}\rightarrow\mathbf{R}$ is a viscosity supersolution of (\ref{HJB}) at $\mathbf{x}\in\mathbf{R}_{+}^{n}$ if any continuously differentiable function $\varphi$ defined in $\mathbf{R}_{+}^{n}$ with $\varphi(\mathbf{x})=\overline{u}(\mathbf{x})$ such that $\overline{u}-\varphi$ reaches its minimum at $\mathbf{x}$ satisfies
\[
\max\{\mathbf{a}-\nabla\varphi(\mathbf{x}),\mathcal{L}(\varphi)(\mathbf{x}),f(\mathbf{x})-\varphi(\mathbf{x})\}\leq0.
\]
Finally, a locally Lipschitz function $u:\mathbf{R}_{+}^{n}\rightarrow\mathbf{R}$ is a viscosity solution of (\ref{HJB}) if it is both a viscosity subsolution and a viscosity supersolution at any $\mathbf{x}\in\mathbf{R}_{+}^{n}$.
\end{definition}

In order to prove that $V$ is a viscosity solution of the HJB equation we need to use the following two lemmas. The first one states the Dynamic Programming Principle (DPP); its proof follows from standard arguments, see for instance Lemma 1.2 of \cite{AM Libro}.
The proof of the second one is in the Appendix.\n\n\\begin{lemma}\n\\label{DPP} Given any $\\mathbf{x}\\in\\mathbf{R}_{+}^{n}$\\ and any finite\nstopping time $\\widetilde{\\tau}$,\\ we have that the function $V$ defined in\n(\\ref{Definicion V}) satisfies $V(\\mathbf{x})=\\sup_{\\pi=(L,\\overline{\\tau}%\n)\\in\\Pi_{\\mathbf{x}}}v_{\\pi,\\widetilde{\\tau}}(\\mathbf{x})$, where\n\\[%\n\\begin{array}\n[c]{lll}%\nv_{\\pi,\\widetilde{\\tau}}(\\mathbf{x}) & = & \\mathbb{E}_{\\mathbf{x}}\\left(\n\\int\\nolimits_{0^{-}}^{\\overline{\\tau}\\wedge\\tau^{\\mathbf{L}}\\wedge\n\\widetilde{\\tau}}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}+e^{-c(\\overline{\\tau\n}\\wedge\\tau^{\\mathbf{L}}\\wedge\\widetilde{\\tau})}(I_{\\{\\overline{\\tau}%\n\\wedge\\widetilde{\\tau}<\\tau^{\\mathbf{L}}\\}}V(\\mathbf{X}_{\\overline{\\tau}%\n\\wedge\\widetilde{\\tau}}^{\\mathbf{L}})\\right) \\\\\n& & -\\mathbb{E}_{\\mathbf{x}}\\left( I_{\\{\\tau^{\\mathbf{L}}\\leq\\overline{\\tau\n}\\wedge\\widetilde{\\tau}\\}}\\upsilon(\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}%\n}^{\\mathbf{L}}{},\\mathbf{X}_{\\tau^{\\mathbf{L}}{}^{-}}^{\\mathbf{L}}%\n-\\mathbf{X}_{\\tau^{\\mathbf{L}}}^{\\mathbf{L}})\\right) .\n\\end{array}\n\\]\n\n\\end{lemma}\n\n\\begin{lemma}\n\\label{Dynkins} Given any continuously differentiable function $g:\\mathbf{R}%\n_{+}^{n}\\rightarrow\\mathbf{R}$, any admissible strategy $\\pi=(\\mathbf{L}%\n,\\overline{\\tau})\\in\\Pi_{\\mathbf{x}}$ and any finite stopping time $\\tau\n\\leq\\tau^{\\mathbf{L}}$, consider\n\\[\n\\mathbf{L}_{t}=\\int\\nolimits_{0}^{t}d\\mathbf{L}_{s}^{c}+\\sum\\nolimits_{0\\leq\ns\\leq t}\\Delta\\mathbf{L}_{s},\n\\]\nwhere $\\Delta\\mathbf{L}_{s}=\\mathbf{L}_{s}-\\mathbf{L}_{s^{-}}$ and\n$\\mathbf{L}_{s}^{c}$ is a continuous and non-decreasing process. Then we have%\n\\[%\n\\begin{array}\n[c]{l}%\n(g(\\mathbf{X}_{\\tau}^{\\mathbf{L}})I_{\\{\\tau<\\tau^{\\mathbf{L}}\\}}%\n-\\upsilon(\\mathbf{X}_{\\tau^{-}}^{\\mathbf{L}},\\mathbf{X}_{\\tau^{-}}%\n^{\\mathbf{L}}-\\mathbf{X}_{\\tau}^{\\mathbf{L}})I_{\\{\\tau=\\tau^{\\mathbf{L}}%\n\\}})e^{-c\\tau}-g(\\mathbf{x})\\\\%\n\\begin{array}\n[c]{ll}%\n= & \\int\\nolimits_{0}^{\\tau}\\mathcal{L}(g)(\\mathbf{X}_{s^{-}}^{\\mathbf{L}%\n})e^{-cs}ds-\\int_{0^{-}}^{\\tau}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}%\n+\\int\\nolimits_{0}^{\\tau}e^{-cs}(\\mathbf{a}-{\\Greekmath 0272} g(\\mathbf{X}_{s^{-}%\n}^{\\mathbf{L}}))\\mathbf{\\cdot}d\\mathbf{L}_{s}^{c}\\\\\n& +\\sum\\limits_{\\mathbf{L}_{s}\\neq\\mathbf{L}_{s^{-}},s\\leq\\tau}e^{-cs}%\n\\int\\nolimits_{0}^{1}(\\mathbf{a}-{\\Greekmath 0272} g(\\mathbf{\\check{X}}_{s}^{\\mathbf{L}%\n}-\\gamma\\Delta\\mathbf{L}_{s})\\mathbf{\\cdot}\\Delta\\mathbf{L}_{s})d\\gamma\n+M(\\tau);\n\\end{array}\n\\end{array}\n\\]\nwhere $M(t)$ is a martingale with zero expectation.\n\\end{lemma}\n\n\\begin{proposition}\n\\label{Prop V is a viscosity solution}The optimal value function $V$ is a\nviscosity solution of the HJB equation (\\ref{HJB}) at any $\\mathbf{x}$ in the\ninterior of $\\mathbf{R}_{+}^{n}$.\n\\end{proposition}\n\n\\textit{Proof}. Let us show that $V$ is a viscosity supersolution at any\n$\\mathbf{x}$ in the interior of $\\mathbf{R}_{+}^{n}$. 
The inequality $V\\geq f$\nfollows from the definition (\\ref{Definicion V}) taking $\\overline{\\tau}=0$.\nGiven any initial surplus $\\mathbf{x}$ in the interior of $\\mathbf{R}_{+}^{n}$\nand any $\\mathbf{l}\\in\\mathbf{R}_{+}^{n}$, take $h>0$ small enough such that\n$h(\\mathbf{l}-\\mathbf{p})<\\mathbf{x}$\\textbf{.} Consider the dividend payment\nstrategy $\\mathbf{L}_{t}=\\mathbf{l}t$ for $t\\tau^{\\mathbf{L}}$. Using Lemma\n\\ref{DPP} with stopping time $\\widetilde{\\tau}=h\\wedge\\tau_{1}$, we get%\n\\[\nV(\\mathbf{x})\\geq\\mathbb{E}_{\\mathbf{x}}\\left( \\mathbf{a}\\cdot\\mathbf{l}%\n\\int\\nolimits_{0}^{h\\wedge\\tau_{1}}e^{-cs}ds+e^{-c\\left( h\\wedge\\tau\n_{1}\\right) }(I_{\\{h\\wedge\\tau_{1}<\\tau^{\\mathbf{L}}\\}}V(\\mathbf{X}%\n_{h\\wedge\\tau_{1}}^{\\mathbf{L}})-I_{\\{\\tau^{\\mathbf{L}}=\\tau_{1}\\leq\nh\\}}\\upsilon(\\mathbf{X}_{\\tau_{1}^{-}}^{\\mathbf{L}},\\mathbf{U}_{1}))\\right) .\n\\]\nLet $\\varphi$ be a test function for supersolution of (\\ref{HJB}) at\n$\\mathbf{x}$ as in Definition \\ref{NuevaDefinicionSubySuper}. We have,%\n\\[%\n\\begin{array}\n[c]{lll}%\n\\varphi(\\mathbf{x}) & = & V(\\mathbf{x})\\\\\n& \\geq & \\mathbb{E}_{\\mathbf{x}}\\left( \\mathbf{a}\\cdot\\mathbf{l}%\n\\int\\nolimits_{0}^{h\\wedge\\tau_{1}}e^{-cs}ds\\right) \\\\\n& & +\\mathbb{E}_{\\mathbf{x}}\\left( e^{-c\\left( h\\wedge\\tau_{1}\\right)\n}(I_{\\{h\\wedge\\tau_{1}<\\tau^{\\mathbf{L}}\\}}\\varphi(\\mathbf{X}_{h\\wedge\\tau\n_{1}}^{\\mathbf{L}})-I_{\\{\\tau^{\\mathbf{L}}=\\tau_{1}\\leq h\\}}\\upsilon\n(\\mathbf{X}_{\\tau_{1}^{-}}^{\\mathbf{L}},\\mathbf{U}_{1}))\\right) .\n\\end{array}\n\\]\nWe can write%\n\\[%\n\\begin{array}\n[c]{l}%\n\\mathbb{E}_{\\mathbf{x}}\\left( e^{-c\\left( h\\wedge\\tau_{1}\\right)\n}(I_{\\{h\\wedge\\tau_{1}<\\tau^{\\mathbf{L}}\\}}\\varphi(\\mathbf{X}_{h\\wedge\\tau\n_{1}}^{\\mathbf{L}})-I_{\\{\\tau^{\\mathbf{L}}=\\tau_{1}\\leq h\\}}\\upsilon\n(\\mathbf{X}_{\\tau_{1}^{-}}^{\\mathbf{L}},\\mathbf{U}_{1}))\\right) \\\\%\n\\begin{array}\n[c]{ll}%\n= & \\mathbb{E}_{\\mathbf{x}}\\left( I_{\\{h<\\tau_{1}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ }\\}}e^{-ch}%\n\\varphi(\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right) h\\right) \\\\\n& +\\mathbb{E}_{\\mathbf{x}}\\left( I_{\\{\\tau_{1}\\leq h\\}}I_{\\{\\mathbf{U}%\n_{1}\\leq\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right) \\tau_{1}\\}}%\ne^{-c\\tau_{1}}\\varphi(\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right)\n\\tau_{1}-\\mathbf{U}_{1})\\right) \\\\\n& -\\mathbb{E}_{\\mathbf{x}}\\left( I_{\\{\\tau_{1}\\leq h\\}}I_{\\{\\mathbf{U}%\n_{1}\\nleqslant\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right) \\tau_{1}%\n\\}}e^{-c\\tau_{1}}\\upsilon(\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right)\n\\tau_{1},\\mathbf{U}_{1})\\right) .\n\\end{array}\n\\end{array}\n\\]\nTherefore, using that $\\mathcal{R}$ is continuous,%\n\\begin{equation}%\n\\begin{array}\n[c]{lll}%\n0 & \\geq & (\\mathbf{a}\\cdot\\mathbf{l})\\lim_{h\\rightarrow0^{+}}\\frac{1}%\n{h}\\mathbb{E}_{\\mathbf{x}}\\left( \\int\\nolimits_{0}^{h\\wedge\\tau_{1}}%\ne^{-cs}ds\\right) +\\lim_{h\\rightarrow0^{+}}\\frac{1}{h}\\left( e^{-(\\lambda\n+c)h}\\varphi(\\mathbf{x}+\\left( \\mathbf{p}-\\mathbf{l}\\right) h)-\\varphi\n(\\mathbf{x})\\right) \\\\\n& & +\\lim_{h\\rightarrow0^{+}}\\frac{1}{h}\\mathbb{E}_{\\mathbf{x}}\\left(\nI_{\\{\\tau_{1}\\leq h\\}}I_{\\{\\mathbf{U}_{1}\\leq\\mathbf{x}+\\left( \\mathbf{p}%\n-\\mathbf{l}\\right) \\tau_{1}\\}}e^{-c\\tau_{1}}\\varphi(\\mathbf{x}+\\left(\n\\mathbf{p}-\\mathbf{l}\\right) 
\\tau_{1}-\\mathbf{U}_{1})\\right) \\\\\n& & -\\lim_{h\\rightarrow0^{+}}\\frac{1}{h}\\mathbb{E}_{\\mathbf{x}}\\left(\nI_{\\{\\tau_{1}\\leq h\\}}I_{\\{\\mathbf{U}_{1}\\leq\\mathbf{x}+\\left( \\mathbf{p}%\n-\\mathbf{l}\\right) \\tau_{1}\\}}e^{-c\\tau_{1}}\\upsilon(\\mathbf{x}+\\left(\n\\mathbf{p}-\\mathbf{l}\\right) \\tau_{1},\\mathbf{U}_{1})\\right) \\\\\n& = & \\mathbf{a}\\cdot\\mathbf{l}-\\left( c+\\lambda\\right) \\varphi\n(\\mathbf{x})+\\left( \\mathbf{p}-\\mathbf{l}\\right) \\mathbf{\\cdot}{\\Greekmath 0272}\n\\varphi(\\mathbf{x})+\\mathcal{I}(\\varphi)(\\mathbf{x})-\\mathcal{R}(\\mathbf{x}).\n\\end{array}\n\\nonumber\n\\end{equation}\nAnd so $\\mathcal{L}(\\varphi)(\\mathbf{x})+\\mathbf{l}$\\textbf{$\\cdot$}$\\left(\n\\mathbf{a}-{\\Greekmath 0272}\\varphi(\\mathbf{x})\\right) \\leq0$. Taking $\\mathbf{l}%\n=\\mathbf{0}$, we get $\\mathcal{L}(\\varphi)(\\mathbf{x})\\leq0$; taking\n$\\mathbf{l}=l\\mathbf{e}_{i}$ with $l\\rightarrow\\infty$ ($1\\leq i\\leq n$), we\nobtain $\\mathbf{a}-{\\Greekmath 0272}\\varphi(\\mathbf{x})\\leq\\mathbf{0}$. So $V$ is a\nviscosity supersolution at the point $\\mathbf{x}$.\n\nWe omit the proof that $V$ is a viscosity subsolution in the interior of\n$\\mathbf{R}_{+}^{n}$. This result follows from Lemma \\ref{Dynkins} and the\nproof is similar to the ones of Proposition 3.2 in \\cite{AM Switching} for the\nunidimensional case with switching and of Proposition 3.2 in \\cite{AlAZMU} for\nthe multidimensional case with no switching. $\\blacksquare$\n\n\\begin{remark}\n\\label{Muchas soluciones viscosas} In general, we cannot expect to have\nuniqueness of viscosity solutions of the HJB equation (\\ref{HJB}). Take for\ninstance the two dimensional case with independent companies, the switch\nfunction $f$ given in (\\ref{Merger 2x2}) and the function $\\upsilon$ defined\nin (\\ref{Nu the dos companias independientes}). Consider the function\n$W_{k}(\\mathbf{x}):=x_{1}+x_{2}+k$ for $\\mathbf{x\\in R}_{+}^{2},$ and take\n$k_{0}$ large enough such that,%\n\\[\nk_{0}>\\frac{p_{1}+p_{2}}{c}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{, }V_{2}(z)0$, we define the grid domain\n\\[\n\\mathcal{G}^{\\delta}:=\\left\\{ (m_{1}\\delta p_{1},...,m_{n}\\delta\np_{n}):\\mathbf{m}\\in\\mathbf{N}_{0}^{n}\\right\\} \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}%\n\\]\nThe idea of the numerical scheme is to find, at each point of the grid\n$\\mathcal{G}^{\\delta}$, the best local strategy among the ones suggested by\nthe operators of the HJB equation (\\ref{HJB}); these possible local strategies\nare: none of the companies pay dividends, one of the companies pays a lump sum\nas dividends, or the manager of the company opts to switch immediately. We\nmodify these local strategies in such a way that the controlled surplus lies\nin the grid after the arrival of a jump of the uncontrolled process. In order\nto do that, let us introduce the functions $g^{\\delta}:\\mathbf{N}_{0}%\n^{n}\\rightarrow\\mathbf{R}_{+}^{n}$ \\ which relates the indices with the\ncorresponding points of the grid and $\\rho^{\\delta}:\\mathbf{R}_{+}%\n^{n}\\rightarrow\\mathbf{N}_{0}^{n}$ which assigns to each point points\n$\\mathbf{x}$ in $\\mathbf{R}_{+}^{n}$ the index of the closest point of the\ngrid below $\\mathbf{x}$. 
More precisely,\n\\[\ng^{\\delta}(\\mathbf{m})=(p_{1}\\delta m_{1},...,p_{n}\\delta m_{n})~\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and\n}\\rho^{\\delta}(\\mathbf{x}):=\\max\\{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}:g^{\\delta\n}(\\mathbf{m})\\leq\\mathbf{x}\\}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{;}%\n\\]\nwe can also write\n\\[\n\\rho^{\\delta}(\\mathbf{x})=(\\left[ \\frac{x_{1}}{\\delta p_{1}}\\right]\n,...,\\left[ \\frac{x_{n}}{\\delta p_{n}}\\right] )\\in\\mathbf{N}_{0}^{n}%\n\\]\nwhere $\\left[ .\\right] $ means the integer part in each coordinate. Note\nthat $\\rho^{\\delta}\\ $is the left-inverse function of $g^{\\delta}$ and that%\n\\[\n\\left\\langle \\mathbf{x}\\right\\rangle ^{\\delta}:=g^{\\delta}(\\rho^{\\delta\n}(\\mathbf{x}))=\\max\\{\\mathbf{y}\\in\\mathcal{G}^{\\delta}:\\mathbf{y}%\n\\leq\\mathbf{x}\\}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}%\n\\]\n\n\nGiven any current surplus $g^{\\delta}(\\mathbf{m})\\in\\mathcal{G}^{\\delta}$, let\n$\\tau$ and $\\mathbf{U}$ be the arrival time and the size of the next jump of\nthe uncontrolled process. We first define the $n+2$ possible control actions\nat any point of the grid $\\mathcal{G}^{\\delta}$ as follows.\n\n\\begin{itemize}\n\\item Control action $\\mathbf{E}_{0}$: Pay no dividends up to the time\n$\\delta\\wedge\\tau$. In the case that $\\delta<\\tau$, the uncontrolled surplus\nat time $\\delta$ is $g^{\\delta}\\left( \\mathbf{m}+\\mathbf{1}\\right)\n\\in\\mathcal{G}^{\\delta}$; and if $\\delta\\geq\\tau$, the uncontrolled surplus at\ntime $\\tau$ is\n\\[\ng^{\\delta}(\\mathbf{m})+\\tau\\mathbf{p}-\\mathbf{U}.\n\\]\nIf this vector is in $\\mathbf{R}_{+}^{n}$, the companies pay immediately the\nminimum amount of dividends in such a way that the controlled surplus lies in\na point of the grid; this end surplus can be written as $g^{\\delta}\\left(\n\\mathbf{k}\\right) $, where\n\\[\n\\mathbf{k}=\\rho^{\\delta}(g^{\\delta}(\\mathbf{m})+\\tau\\mathbf{p}-\\mathbf{U}).\n\\]\nThe amount paid as dividends is equal to\n\\[\ng^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}\\right) +\\tau\\mathbf{p}-\\mathbf{U}.\n\\]\nIn the case that the surplus $g^{\\delta}(\\mathbf{m})+\\tau\\mathbf{p}%\n-\\mathbf{U}\\notin\\mathbf{R}_{+}^{n}$ at time $\\tau\\leq$ $\\delta$, the process stops.\n\n\\item Control actions $\\mathbf{E}_{i}$ with $i=1,...,n$: The $i$-th company\npays immediately $p_{i}\\delta$ as dividends, so the controlled surplus becomes\n$g^{\\delta}\\left( \\mathbf{m}-\\mathbf{e}_{i}\\right) \\in\\mathcal{G}^{\\delta}$.\nThe control action $\\mathbf{E}_{i}$\\textbf{\\ }can only be applied for current\nsurplus $g^{\\delta}(\\mathbf{m})\\in\\mathcal{G}^{\\delta} $ if $m_{i}>0$.\n\n\\item Control action $\\mathbf{E}_{s}$: The manager opts to switch immediately\nand the process stops.\n\\end{itemize}\n\nWe denote the space of controls as\n\\[\n\\mathcal{E}=\\{\\mathbf{E}_{s},\\left( \\mathbf{E}_{i}\\right) _{i=1,...,n}%\n,\\mathbf{E}_{0}\\}.\n\\]\n\n\nConsider $\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}\\subset\\Pi_{g^{\\delta\n}(\\mathbf{m})}$ as the set of all the admissible strategies with initial\nsurplus $g^{\\delta}(\\mathbf{m})\\in\\mathcal{G}^{\\delta}$ which can be obtained\nby a sequence of control actions in $\\mathcal{E}$ at each point of the grid.\nLet us describe the strategies $\\pi=(\\mathbf{L},\\overline{\\tau})\\in\n\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$; we take, for any $\\omega=(\\tau\n_{j},\\mathbf{U}_{j})_{j\\geq1}\\in\\Omega$, a sequence 
$\\mathbf{s}=(s_{k}%\n)_{k=1,...,\\tilde{k}}$ with $s_{k}\\in\\mathcal{E}\\ $and $1\\leq\\tilde{k}%\n\\leq\\infty$, the first control action $s_{1}$ is applied at the point\n$g^{\\delta}(\\mathbf{m})\\in\\mathcal{G}^{\\delta},$ the second control action\n$s_{2}$ is applied at the end surplus in $\\mathcal{G}^{\\delta}$ resulting from\nthe control action $s_{1},$ and so on. If the length of the sequence\n$\\mathbf{s}$ is $\\tilde{k}<\\infty$, then $s_{\\tilde{k}}$ should be either\n$\\mathbf{E}_{s}$ or $\\mathbf{E}_{0}$. In the last case, the end surplus\nresulting from the final control action $s_{\\tilde{k}}$ is outside\n$\\mathbf{R}_{+}^{n}$ due to the arrival of a jump.\n\nTake $\\mathbf{m}^{k}\\in\\mathbf{N}_{0}^{n}\\ $in such a way that $g^{\\delta\n}(\\mathbf{m}^{k})$ is the point of $\\mathcal{G}^{\\delta}$ in which the control\naction $s_{k}$ is applied; let $t_{k}$ be the time in which the control action\n$s_{k}$ is chosen; let $\\Delta_{k}$ be the time elapsed for the control action\n$s_{k}$ and let $\\mathbf{y}^{k}\\in\\mathcal{G}^{\\delta}\\cup\\left(\n\\mathbf{R}_{+}^{n}\\right) ^{c}$ be the end surplus resulting from the control\naction $s_{k}$.\n\n\\begin{remark}\nLet us describe in a precise way the values of $(\\mathbf{m}^{k},\\Delta\n_{k},t_{k},\\mathbf{y}^{k})_{k=1,...,\\tilde{k}}$.\n\n\\begin{itemize}\n\\item In the case that $s_{k}=\\mathbf{E}_{i}$, then $k<\\tilde{k},$ $\\Delta\n_{k}=0,$ $t_{k+1}=t_{k},$ $\\mathbf{m}^{k+1}=\\mathbf{m}^{k}-\\mathbf{e}_{i}$ and\n$\\mathbf{y}^{k}=g^{\\delta}(\\mathbf{m}^{k+1}).$\n\n\\item In the case that $s_{k}=\\mathbf{E}_{s}$, then\n\\[\nk=\\tilde{k},\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ }t_{k}=\\overline{\\tau},\\Delta_{k}=0\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and }%\n\\mathbf{y}^{k}=g^{\\delta}(\\mathbf{m}^{k}).\n\\]\n\n\n\\item In the case that $s_{k}=\\mathbf{E}_{0}$, take $j_{k}:=\\min\\{j:\\tau\n_{j}>t_{k}\\}$ (so $\\tau_{j_{k}}$ is the arrival time of the first jump after\n$t_{k}$); there are three possibilities:\n\n(a) If $\\tau_{j_{k}}>t_{k}+\\delta$ , then\n\\[\nk<\\tilde{k},\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ }\\Delta_{k}=\\delta,\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ }t_{k+1}=t_{k}+\\delta,\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{\n}\\mathbf{m}^{k+1}=\\mathbf{m}^{k}+\\mathbf{1}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and }\\mathbf{y}%\n^{k}=g^{\\delta}\\left( \\mathbf{m}^{k+1}\\right) .\n\\]\n\n\n(b) If $\\tau_{j_{k}}\\leq t_{k}+\\delta$ and $g^{\\delta}(\\mathbf{m})+\\left(\n\\tau_{j_{k}}-t_{k}\\right) \\mathbf{p}-\\mathbf{U}_{j}\\in\\mathbf{R}_{+}^{n}$,\nthen\n\\[\nk<\\tilde{k},~\\Delta_{k}=\\tau_{j_{k}}-t_{k},~t_{k+1}=\\tau_{j_{k}}%\n,~\\mathbf{m}^{k+1}=\\rho^{\\delta}(g^{\\delta}(\\mathbf{m})+\\left( \\tau_{j_{k}%\n}-t_{k}\\right) \\mathbf{p}-\\mathbf{U}_{j})~\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and~}\\mathbf{y}^{k}%\n=g^{\\delta}\\left( \\mathbf{m}^{k+1}\\right) .\n\\]\n\n\n(c) If $\\tau_{j_{k}}\\leq t_{k}+\\delta$ and $\\mathbf{y}^{k}=g^{\\delta\n}(\\mathbf{m})+\\left( \\tau_{j_{k}}-t_{k}\\right) \\mathbf{p}-\\mathbf{U}%\n_{j}\\notin\\mathbf{R}_{+}^{n}$, then\n\\[\nk=\\tilde{k},~\\Delta_{k}=\\tau_{j_{k}}-t_{k}\\ ~\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and}\\ ~t_{k}+\\Delta\n_{k}=\\tau_{j_{k}}=\\tau^{\\mathbf{L}}.\n\\]\n\n\\end{itemize}\n\\end{remark}\n\nDefining $\\Delta\\mathbf{L}_{k}$ as the amount of dividends paid by the control\naction $s_{k}$, we 
have%\n\\[\n\\Delta\\mathbf{L}_{k}=\\left\\{\n\\begin{array}\n[c]{ll}%\np_{i}\\delta\\mathbf{e}_{i} & \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{if }s_{k}=\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{$\\mathbf{E}_{i}$}\\\\\n\\mathbf{c}_{k}-\\left\\langle \\mathbf{c}_{k}\\right\\rangle ^{\\delta} & \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{if\n}s_{k}=\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{$\\mathbf{E}_{0},$ }\\tau_{j_{k}}\\in(t_{k},t_{k}+\\delta]\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and\n}\\mathbf{c}_{k}\\in\\mathbf{R}_{+}^{n}\\\\\n0 & \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{otherwise},\n\\end{array}\n\\right.\n\\]\nwhere $j_{k}\\ $is defined in the previous remark and\n\\[\n\\mathbf{c}_{k}=g^{\\delta}(\\mathbf{m}^{k})+(\\tau_{j_{k}}-t_{k})\\mathbf{p}%\n-\\mathbf{U}_{j}.\n\\]\nTherefore, if the strategy $\\pi=(\\mathbf{L},\\overline{\\tau})\\in\\Pi_{g^{\\delta\n}(\\mathbf{m})}^{\\delta}$ then the cumulative dividend payment strategy is\n\\[\n\\mathbf{L}_{t}=\\sum\\nolimits_{k\\leq\\tilde{k},t_{k}\\leq t}\\Delta\\mathbf{L}_{k},\n\\]\nand the switch time $\\overline{\\tau}$ is the time in which the control action\n$\\mathbf{E}_{s}$ is chosen. By construction, if $\\pi\\in\\Pi_{g^{\\delta\n}(\\mathbf{m})}^{\\delta}$ then $\\mathbf{X}_{t_{k}}^{\\mathbf{L}}\\in\n\\mathcal{G}^{\\delta}$ for all $k\\leq\\tilde{k}$ , also the set of times\n\\[\n\\{t_{k}:k\\leq\\tilde{k}\\}\\subseteq\\{\\tau_{i}+j\\delta:i,j\\in\\mathbf{N}_{0}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{\nand }j\\leq\\frac{\\tau_{i+1}-\\tau_{i}}{\\delta}\\};\n\\]\nhere $\\tau_{0}=0$. For the strategy $(\\mathbf{L},\\overline{\\tau})$ to be\nadmissible, we need to assume the following condition: If the arrival times\nand sizes of the claims of two elements in $\\Omega$ coincide up to time $t$,\nthen the corresponding sequences of control actions $\\mathbf{s}=(s_{k}%\n)_{k=1,...,\\tilde{k}}$ must coincide for all $k$ such that $t_{k}\\leq t$.\n\nThe following lemma states that the sequences $\\left( t_{k}\\right) _{k\\geq1}\n$ do not have an accumulation point, the proof is in the Appendix.\n\n\\begin{lemma}\n\\label{Lema de tiempo infinito} Given $\\pi\\in\\Pi_{g^{\\delta}(\\mathbf{m}%\n)}^{\\delta},$ $\\lim_{k\\rightarrow\\infty}t_{k}=\\infty$ a.s. within the subset\n$\\{\\tilde{k}=\\infty\\}\\subset\\Omega$.\n\\end{lemma}\n\nWe define the $\\mathcal{G}^{\\delta}$\\textit{-optimal function} $v^{\\delta}$ as\nthe supremum of the value functions of admissible strategies which are\ncombination of the control actions in $\\mathcal{E}$, that is\n\\begin{equation}\nv^{\\delta}(\\mathbf{m})=\\sup\\nolimits_{\\pi\\in\\Pi_{g^{\\delta}(\\mathbf{m}%\n)}^{\\delta}}V_{\\pi}(g^{\\delta}(\\mathbf{m}))\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}\\label{vdelta}%\n\\end{equation}\n\n\n\\subsection{Characterization of the $\\mathcal{G}^{\\delta}$\\textit{-}optimal\nfunction}\n\nIn this subsection, we show that the $\\mathcal{G}^{\\delta}$-optimal function\n$v^{\\delta}:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ is a solution of a\ndiscrete version of the HJB equation (\\ref{HJB}). We also see that $v^{\\delta\n}$ can be characterized as the smallest supersolution of this discrete HJB\nequation. Moreover, we prove that there exists an optimal admissible strategy\nfor the problem (\\ref{vdelta}). 
This strategy, called the $\\mathcal{G}%\n^{\\delta}$\\textit{-optimal strategy}, is stationary in the sense that the\ncontrol actions depend only on the grid point where the current surplus lies.\n\nWe now introduce some operators related to the control actions in\n$\\mathcal{E}$; these operators will be involved in the discrete version of the\nHJB equation. Given any family of admissible strategies $\\widetilde{\\pi}%\n=(\\pi_{g^{\\delta}(\\mathbf{m})})_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ with\n$\\pi_{g^{\\delta}(\\mathbf{m})}\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$, we\ndefine the value function $\\widetilde{w}:\\mathbf{N}_{0}^{n}\\rightarrow\n\\mathbf{R}$ of $\\widetilde{\\pi}$ as\n\\[\n\\widetilde{w}(\\mathbf{m}):=V_{\\pi_{g^{\\delta}(\\mathbf{m})}}(g^{\\delta\n}(\\mathbf{m})).\n\\]\n\n\nLet us consider the admissible strategies with initial surplus $g^{\\delta\n}(\\mathbf{m})\\in\\mathcal{G}^{\\delta}$ which consist of first applying one of\nthe control actions in $\\mathcal{E}$, and afterwards applying the\nstrategy in the family $\\widetilde{\\pi}$ corresponding to the end surplus (if\npossible); the value functions of these strategies are given by%\n\n\\begin{equation}%\n\\begin{array}\n[c]{lll}%\nT_{0}(\\widetilde{w})(\\mathbf{m})\\bigskip & := & \\widetilde{w}(\\mathbf{m}%\n+\\mathbf{1})e^{-(c+\\lambda)\\delta}+\\mathcal{I}^{\\delta}(\\widetilde\n{w})(\\mathbf{m})-\\int_{0}^{\\delta}e^{-(c+\\lambda)t}\\mathcal{R}(g^{\\delta\n}(\\mathbf{m})+t\\mathbf{p})dt\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ , }%\n\\end{array}\n\\label{Definicion TE}%\n\\end{equation}%\n\\begin{equation}%\n\\begin{array}\n[c]{ccc}%\nT_{i}(\\widetilde{w})(\\mathbf{m}):=\\widetilde{w}(\\mathbf{m}-\\mathbf{e}%\n_{i})+\\delta a_{i}p_{i} & \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and} & T_{s}(\\widetilde{w})(\\mathbf{m}%\n):=f(g^{\\delta}(\\mathbf{m}))\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{,}%\n\\end{array}\n\\label{Definicion Ti TS}%\n\\end{equation}\ndepending on which control action in $\\mathcal{E}$ is chosen. 
Here,%\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\mathcal{I}^{\\delta}(w)(\\mathbf{m})\\\\%\n\\begin{array}\n[c]{ll}%\n:= & \\int\\limits_{0}^{\\delta}(%\n{\\textstyle\\int\\limits_{\\mathbf{0}}^{g^{\\delta}(\\mathbf{m})+t\\mathbf{p}}}\n\\lambda e^{-(c+\\lambda)t}w(\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{$\\rho^{\\delta}$}(g^{\\delta}(\\mathbf{m}%\n)+t\\mathbf{p}-\\mathbf{\\alpha}))dF(\\mathbf{\\alpha}))dt\\\\\n& +\\int\\limits_{0}^{\\delta}(%\n{\\textstyle\\int\\limits_{\\mathbf{0}}^{g^{\\delta}(\\mathbf{m})+t\\mathbf{p}}}\n\\lambda e^{-(c+\\lambda)t}\\mathbf{a}\\cdot(g^{\\delta}(\\mathbf{m})+t\\mathbf{p}%\n-\\mathbf{\\alpha}-\\left\\langle g^{\\delta}(\\mathbf{m})+t\\mathbf{p}%\n-\\mathbf{\\alpha}\\right\\rangle ^{\\delta})dF(\\mathbf{\\alpha}))dt.\n\\end{array}\n\\end{array}\n\\label{Definicion Idelta(W)}%\n\\end{equation}\nWe can consider $T_{0}$, $T_{i}$ and $T_{s}$ as operators in the set of\nfunctions $\\left\\{ w:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}\\right\\} $; we\nalso define the operator $T$ as%\n\\begin{equation}\nT:=\\max\\{T_{0},\\left( T_{i}\\right) _{i=1,...,n},T_{s}\\}.\\label{Definicion T}%\n\\end{equation}\n\n\nThe following lemma is technical and the proof is in the Appendix.\n\n\\begin{lemma}\n\\label{Ts crecientes} The operators $T_{0}$, $T_{i},$ $T_{s}$ and $T$ are\nnon-decreasing and $T$ satisfies,\n\\[\n\\sup\\nolimits_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}\\left\\vert T(w_{1}%\n)(\\mathbf{m})-T(w_{2})(\\mathbf{m})\\right\\vert \\leq\\sup\\nolimits_{\\mathbf{m}%\n\\in\\mathbf{N}_{0}^{n}}\\left\\vert w_{1}(\\mathbf{m})-w_{2}(\\mathbf{m}%\n)\\right\\vert .\n\\]\nMoreover, $T_{0}(w)$, $T_{i}(w)$ and $T_{s}(w)$ can be written as a linear\ncombination of the values of $w(\\mathbf{m})$ with $\\mathbf{m\\in N}_{0}^{n}$\nplus a constant.\n\\end{lemma}\n\nWe define the \\textit{discrete HJB equation} as%\n\\begin{equation}\n\\left( T(w)-w\\right) (\\mathbf{m})=\\max\\{T_{0}(w)-w,\\left( T_{i}%\n(w)-w\\right) _{i=1,...,n},T_{s}(w)-w\\}(\\mathbf{m})=0\\label{Delta HJB}%\n\\end{equation}\nfor $\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$. Analogously to Definition\n\\ref{NuevaDefinicionSubySuper}, we say that a function $\\overline\n{w}:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ is a \\textit{supersolution} of\n(\\ref{Delta HJB}) if $T(\\overline{w})-\\overline{w}\\leq0$, and a function\n$\\underline{w}:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ is a\n\\textit{subsolution} of (\\ref{Delta HJB}) if $T(\\underline{w})-\\underline\n{w}\\geq0$.\n\nThe following results are the discrete versions of Propositions\n\\ref{Prop V is a viscosity solution}, Lemma\n\\ref{SupersolucionMayor-ValueFunction}, Theorems \\ref{characterization} and\n\\ref{verification result}. The discrete version of the growth condition\n(\\ref{gc}) is given by%\n\\begin{equation}\nw(\\mathbf{m})e^{\\frac{-c}{2n}%\n{\\textstyle\\sum_{i=1}^{n}}\n\\delta m_{i}}\\ \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{is upper bounded in }\\mathbf{N}_{0}^{n}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}%\n\\label{Growth Condition discreta}%\n\\end{equation}\n\n\n\\begin{proposition}\n\\label{Propiedades Vdelta}$~$The function $v^{\\delta}:\\mathbf{N}_{0}%\n^{n}\\rightarrow\\mathbf{R}$ is well defined and it is a solution of\n(\\ref{Delta HJB}). 
Moreover, given an initial surplus $g^{\\delta}\\left(\n\\mathbf{m}_{0}\\right) \\in\\mathcal{G}^{\\delta}$, there exists a $\\mathcal{G}%\n^{\\delta}$-optimal strategy $\\pi_{g^{\\delta}\\left( \\mathbf{m}_{0}\\right)\n}^{\\delta}\\in\\Pi_{g^{\\delta}\\left( \\mathbf{m}_{0}\\right) }^{\\delta}$ such\nthat\n\\[\nv^{\\delta}(\\mathbf{m}_{0})=V_{\\pi_{g^{\\delta}\\left( \\mathbf{m}_{0}\\right)\n}^{\\delta}}(g^{\\delta}\\left( \\mathbf{m}_{0}\\right) ).\n\\]\nThis $\\mathcal{G}^{\\delta}$-optimal strategy is\\textit{\\ stationary} in the\nfollowing sense: the control action $s_{k}$ in the sequence $\\mathbf{s}%\n=(s_{k})_{k=1,...,\\tilde{k}}$ depends only on the current surplus $g^{\\delta\n}\\left( \\mathbf{m}^{k}\\right) \\in\\mathcal{G}^{\\delta}$.\n\\end{proposition}\n\n\\textit{Proof}. By definitions (\\ref{Definicion V}) and (\\ref{vdelta}), we\nhave\n\\[\nf(g^{\\delta}(\\mathbf{m}))\\leq v^{\\delta}(\\mathbf{m})\\leq V(g^{\\delta\n}(\\mathbf{m})),\n\\]\nso $v^{\\delta}$ is well defined.\n\nLet us prove that $v^{\\delta}=T(v^{\\delta})$. Take a sequence $(p_{l}%\n)_{l\\geq1}$ of families of strategies $p_{l}=(\\pi_{g^{\\delta}(\\mathbf{m})}%\n^{l})_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ with $\\pi_{g^{\\delta}(\\mathbf{m}%\n)}^{l}\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ such that\n\\[\nv^{\\delta}(\\mathbf{m})-V_{\\pi_{g^{\\delta}(\\mathbf{m})}^{l}}(g^{\\delta\n}(\\mathbf{m}))\\leq\\frac{1}{l}%\n\\]\nfor all $\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$. Define $w_{l}:\\mathbf{N}_{0}%\n^{n}\\rightarrow\\mathbf{R}$ as $w_{l}(\\mathbf{m})=V_{\\pi_{g^{\\delta}%\n(\\mathbf{m})}^{l}}(g^{\\delta}(\\mathbf{m}))$; since each $T(w_{l})(\\mathbf{m})$ is the\nvalue function of a strategy in $\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$, and hence\nbounded by $v^{\\delta}(\\mathbf{m})$, we obtain from Lemma \\ref{Ts crecientes} that\n\\[\nT(v^{\\delta})(\\mathbf{m})=\\lim_{l\\rightarrow\\infty}T(w_{l})(\\mathbf{m})\\leq\nv^{\\delta}(\\mathbf{m})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}%\n\\]\nOn the other hand, since $\\pi_{g^{\\delta}(\\mathbf{m})}^{l}$ can be obtained by\na sequence of control actions $\\mathbf{s}=(s_{k})_{k=1,...,\\tilde{k}}$ and at\nany point $g^{\\delta}(\\mathbf{m})$ of the grid all the value functions of\nstrategies in $\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ are below $v^{\\delta\n}(\\mathbf{m})$, we have, by the definition of $T$ given in (\\ref{Definicion T}) and\nthe monotonicity of the operators (Lemma \\ref{Ts crecientes}),\nthat $w_{l}(\\mathbf{m})\\leq T(v^{\\delta})(\\mathbf{m})$. 
So taking the limit as\n$l\\rightarrow\\infty,$ we obtain that\n\\[\nv^{\\delta}(\\mathbf{m})\\leq T(v^{\\delta})(\\mathbf{m}).\n\\]\n\n\nFinally, since $v^{\\delta}=T(v^{\\delta})$, we can define for any\n$\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$ a control action $S(\\mathbf{m}%\n)\\in\\mathcal{E}$ in the following way (if more than one case applies, we take\nthe first one in the list):\n\n\\begin{itemize}\n\\item If $T_{s}(v^{\\delta})(\\mathbf{m})=v^{\\delta}(\\mathbf{m})$, take\n$S(\\mathbf{m})=\\mathbf{E}_{s}$.\n\n\\item If $T_{0}(v^{\\delta})(\\mathbf{m})=v^{\\delta}(\\mathbf{m})$, take\n$S(\\mathbf{m})=\\mathbf{E}_{0}$.\n\n\\item If $T_{i}(v^{\\delta})(\\mathbf{m})=v^{\\delta}(\\mathbf{m})$ for some\n$i=1,...,n$, take $S(\\mathbf{m})=\\mathbf{E}_{i}$.\n\\end{itemize}\n\nGiven an initial surplus $g^{\\delta}(\\mathbf{m}_{0})\\in\\mathcal{G}^{\\delta},$\nthe $\\mathcal{G}^{\\delta}$-optimal strategy\n$\\pi_{g^{\\delta}(\\mathbf{m}_{0})}^{\\delta}\\in\\Pi_{g^{\\delta}(\\mathbf{m}_{0}%\n)}^{\\delta}$ is defined inductively as follows: $s_{1}=S(\\mathbf{m}_{0})$;\nassuming that $s_{1},s_{2},...,s_{k-1}$ are defined and the process does not\nstop at step $k-1$, we define $s_{k}=S(\\mathbf{m}_{0}^{k})$ where $g^{\\delta\n}\\left( \\mathbf{m}_{0}^{k}\\right) \\in$ $\\mathcal{G}^{\\delta}$ is the end\nsurplus of $s_{k-1}$. $\\blacksquare$\n\nAnalogously to Remark \\ref{Muchas soluciones viscosas}, we cannot expect in\ngeneral to have uniqueness of solutions of the discrete HJB equation\n(\\ref{Delta HJB}). For instance, in the two-dimensional case of independent\ncompanies, with the switch function $f$ given in (\\ref{Merger 2x2}) and the\nfunction $\\upsilon$ defined in (\\ref{Nu the dos companias independientes}), we\nhave that\n\\[\nw(\\mathbf{m}):=%\n{\\textstyle\\sum_{i=1}^{n}}\np_{i}m_{i}\\delta+k\n\\]\nis a solution of (\\ref{Delta HJB}) for $k$ large enough. 
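\n\nTo make the stationary rule $S(\\mathbf{m})$ concrete for numerical purposes, the\nfollowing sketch (in Python; it is an illustration of ours and not part of the\nproofs, and the callables \\texttt{T0}, \\texttt{Ti} and \\texttt{Ts} are assumed to\nevaluate (\\ref{Definicion TE}) and (\\ref{Definicion Ti TS}) by numerical\nquadrature on a finite grid, including the boundary handling) classifies every\ngrid node according to which operator attains the maximum in (\\ref{Delta HJB});\nsuch a classification yields the partition into switch, non-action and dividend\npayment regions displayed in Figures 1 and 2 below.\n\\begin{verbatim}\nimport numpy as np\n\ndef stationary_policy(v, T0, Ti, Ts):\n    # v: numpy array with v[m] ~ the G^delta-optimal function on a finite grid\n    # T0(v, m), Ti(v, m, i), Ts(m): user-supplied evaluations of the operators\n    policy = {}\n    for m in np.ndindex(*v.shape):\n        # candidate actions: switch now, wait one time step delta,\n        # or let company i pay a lump sum p_i*delta of dividends\n        candidates = [('switch', Ts(m)), ('wait', T0(v, m))]\n        candidates += [('pay_' + str(i + 1), Ti(v, m, i))\n                       for i in range(v.ndim) if m[i] > 0]\n        # ties are resolved in the order switch, wait, pay_i, as in the text\n        policy[m] = max(candidates, key=lambda c: c[1])[0]\n    return policy\n\\end{verbatim}\nNodes labelled \\texttt{switch}, \\texttt{wait} and \\texttt{pay\\_i} form the\ndiscrete switch, non-action and dividend payment regions, respectively.\n\n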
The following lemma\nis the discrete version of Lemma \\ref{SupersolucionMayor-ValueFunction}, the\nproof is in the Appendix.\n\n\\begin{lemma}\n\\label{Menor Supersolucion Discreta} Given any $\\pi=(\\mathbf{L},\\overline\n{\\tau})\\in$ $\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ and any supersolution $w:$\n$\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ of (\\ref{Delta HJB}) with growth\ncondition (\\ref{Growth Condition discreta}), we have that $V_{\\pi}(g^{\\delta\n}(\\mathbf{m}))\\leq w(\\mathbf{m})$.\n\\end{lemma}\n\nFrom Lemma \\ref{Menor Supersolucion Discreta}, we obtain the following theorems.\n\n\\begin{theorem}\nThe $\\mathcal{G}^{\\delta}$-optimal value function $v^{\\delta}$ $:\\mathbf{N}%\n_{0}^{n}\\rightarrow\\mathbf{R}$ can be characterized as the smallest\nsupersolution of the discrete HJB equation (\\ref{Delta HJB}) with growth\ncondition (\\ref{Growth Condition discreta}).\n\\end{theorem}\n\n\\begin{theorem}\n\\label{TeoremaVerificacionDiscreto} If the function $w:$ $\\mathbf{N}_{0}%\n^{n}\\rightarrow\\mathbf{R}$ with growth condition\n(\\ref{Growth Condition discreta})$\\ $is a supersolution of (\\ref{Delta HJB}),\nand also satisfies that for any $\\mathbf{m}\\in\\mathbf{N}_{0}^{n},$\n$w(\\mathbf{m})$ is either $V_{\\pi}(g^{\\delta}(\\mathbf{m}))$ with $\\pi\\in\n\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ or $\\lim_{l\\rightarrow\\infty}V_{\\pi\n_{l}}(g^{\\delta}(\\mathbf{m}))$ with $\\pi_{l}\\in\\Pi_{g^{\\delta}(\\mathbf{m}%\n)}^{\\delta}$ for any $l\\geq1$, then $w=v^{\\delta}$.\n\\end{theorem}\n\n\\subsection{Construction of the $\\mathcal{G}^{\\delta}$\\textit{-}optimal\nstrategy and the $\\mathcal{G}^{\\delta}$\\textit{-}optimal function}\n\nIn this subsection we construct recursively the $\\mathcal{G}^{\\delta}$-optimal\nstrategy and the corresponding $\\mathcal{G}^{\\delta}$-optimal function.\n\nSince $T$ defined in (\\ref{Definicion T}) is not a contraction operator,\n$v^{\\delta}$ can not be obtained numerically as a fixed point; so we construct\nvalue functions $v_{l}^{\\delta}$ of strategies in $\\Pi_{g^{\\delta}%\n(\\mathbf{m})}^{\\delta}$ which can be calculated explicitly by\n(\\ref{Definicion TE}), (\\ref{Definicion Ti TS}) and (\\ref{Definicion T}) such\nthat $v_{l}^{\\delta}$ $\\nearrow$ $v^{\\delta}$ as $l\\rightarrow\\infty$.\n\nLet us define iteratively the families of strategies $\\widetilde{\\pi}_{l}%\n=(\\pi_{g^{\\delta}(\\mathbf{m})}^{l})_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ for\neach $l\\geq1$ in the following way:\n\n(1) We start with the family of strategies $\\widetilde{\\pi}_{1}=(\\pi\n_{g^{\\delta}(\\mathbf{m})}^{1})_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ where\n$\\pi_{g^{\\delta}(\\mathbf{m})}^{1}\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$\nconsists on switching immediately; the value of this strategy is\n\\[\nv_{1}^{\\delta}(\\mathbf{m}):=f(g^{\\delta}(\\mathbf{m})).\n\\]\n\n\n(2) Given the family of strategies $\\widetilde{\\pi}_{l}=(\\pi_{g^{\\delta\n}(\\mathbf{m})}^{l})_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ with $\\pi_{g^{\\delta\n}(\\mathbf{m})}^{l}\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$, we define the\nfamily $\\widetilde{\\pi}_{l+1}=(\\pi_{g^{\\delta}(\\mathbf{m})}^{l+1}%\n)_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}$ as follows: We choose for any\n$\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$, the best strategy $\\pi_{g^{\\delta\n}(\\mathbf{m})}^{l+1}\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ among the ones\nwhich initially follows one of control actions in $\\mathcal{E}$ and then\ncontinues with the corresponding strategy in the 
family $\\widetilde{\\pi}_{l}$.\nThe value of this new strategy is given by%\n\\begin{equation}\nv_{l+1}^{\\delta}(\\mathbf{m}):=T(v_{l}^{\\delta})(\\mathbf{m})=T^{l}%\n(v_{1}^{\\delta})(\\mathbf{m})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ for }\\mathbf{m}\\in\\mathbf{N}_{0}%\n^{n}.\\label{Definicionvk}%\n\\end{equation}\n\n\n\\begin{remark}\n$v_{l}^{\\delta}$ can be thought as the maximum of the value function of\nstrategies $\\pi\\in\\Pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}$ where the length\n$\\tilde{k}$ of the corresponding sequence $\\mathbf{s}\\ $is upper bounded by\n$l$ and $s_{l}=\\mathbf{E}_{s}\\mathbf{\\ }$in the case that $\\tilde{k}=l.$\n\\end{remark}\n\nIn the next proposition we use Theorem \\ref{TeoremaVerificacionDiscreto} to\nsee that the limit of $v_{l}^{\\delta}$ is indeed $v^{\\delta}$.\n\n\\begin{proposition}\nWe have that $v_{l+1}^{\\delta}\\geq v_{l}^{\\delta}$ for $l\\geq1$ and that\n$\\lim_{l\\rightarrow\\infty}v_{l}^{\\delta}=v^{\\delta}.$\n\\end{proposition}\n\n\\textit{Proof}. Take $\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$, it is straightforward\nto see by (\\ref{Definicionvk}) that $v_{2}^{\\delta}(\\mathbf{m})\\geq\nv_{1}^{\\delta}(\\mathbf{m})$; on the other hand, the operator $T$ is\nnon-decreasing, so we obtain that $v_{l+1}^{\\delta}\\geq v_{l}^{\\delta}$ for\n$l\\geq1$. Then, there exists $w_{0}:\\mathbf{N}_{0}^{n}$ $\\rightarrow\n\\mathbf{R}$ such that\n\\[\nw_{0}(\\mathbf{m}):=\\lim\\nolimits_{l\\rightarrow\\infty}v_{l}^{\\delta}%\n(\\mathbf{m})\\leq V(g^{\\delta}(\\mathbf{m})).\n\\]\nNote that all the functions $v_{l}^{\\delta}$ are subsolutions (\\ref{Delta HJB}%\n) and that $w_{0}$ is a solution of (\\ref{Delta HJB}) because $T(w_{0})=w_{0}%\n$. Since $w_{0}$ satisfies the growth condition\n(\\ref{Growth Condition discreta}), $w_{0}$ coincides with the value function\n$v^{\\delta}$ by Theorem \\ref{TeoremaVerificacionDiscreto}. $\\blacksquare$\n\n\\subsection{Definition of the value function $V^{\\delta}$}\n\nIn this subsection we define, using the $\\mathcal{G}^{\\delta}$-optimal\nfunctions and strategies, a family of admissible strategies for any point in\n$\\mathbf{R}_{+}^{n}$ and the corresponding value function $V^{\\delta}.$\n\n\\begin{description}\n\\item\n\\begin{definition}\n\\label{Definicion de Vdelta en la grilla} We use the $\\mathcal{G}^{\\delta}%\n$-optimal function $v^{\\delta}:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ to\ndefine a function $V^{\\delta}:\\mathcal{G}^{\\delta}\\rightarrow\\mathbf{R}$ as\n\\[\nV^{\\delta}(g^{\\delta}(\\mathbf{m})):=v^{\\delta}(\\mathbf{m})\n\\]\nfor $\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$. Note that $V^{\\delta}(g^{\\delta\n}(\\mathbf{m}))$ is the value of the $\\mathcal{G}^{\\delta}$-optimal admissible\nstrategy $\\pi_{g^{\\delta}(\\mathbf{m})}^{\\delta}\\in\\Pi_{g^{\\delta}(\\mathbf{m}%\n)}^{\\delta}$.\n\\end{definition}\n\\end{description}\n\nWe construct now a family of strategies $\\widetilde{\\pi}^{\\delta}=\\left(\n\\pi_{\\mathbf{x}}\\right) _{\\mathbf{x}\\in\\mathbf{R}_{+}^{n}}$, where\n$\\pi_{\\mathbf{x}}\\in\\Pi_{\\mathbf{x}}$, such that the corresponding value\nfunction $V^{\\delta}(\\mathbf{x})=V_{\\pi_{\\mathbf{x}}}(\\mathbf{x})$ extends to\n$\\mathbf{R}_{+}^{n}$ the function defined in Definition\n\\ref{Definicion de Vdelta en la grilla}. 
Take the strategy $\\pi_{\\mathbf{x}%\n}\\in\\Pi_{\\mathbf{x}}$ which pays immediately $\\mathbf{x}-\\left\\langle\n\\mathbf{x}\\right\\rangle ^{\\delta}$ as dividends and then follows the\n$\\mathcal{G}^{\\delta}$-optimal strategy $\\pi_{\\left\\langle \\mathbf{x}%\n\\right\\rangle ^{\\delta}}^{\\delta}\\in\\Pi_{\\left\\langle \\mathbf{x}\\right\\rangle\n^{\\delta}}^{\\delta}$. We obtain that $V^{\\delta}:\\mathbf{R}_{+}^{n}%\n\\rightarrow\\mathbf{R}$ is given by\n\\begin{equation}\nV^{\\delta}(\\mathbf{x}):=V^{\\delta}(\\left\\langle \\mathbf{x}\\right\\rangle\n^{\\delta})+\\mathbf{a}\\cdot(\\mathbf{x}-\\left\\langle \\mathbf{x}\\right\\rangle\n^{\\delta}).\\label{Definicion Vdelta}%\n\\end{equation}\n\n\n\\section{Convergence of the Discrete Scheme}\n\nIn this section we show the locally uniformly convergence of the discrete\nscheme defined in the previous section by taking a suitable sequence of\nembedded grids.\n\nIn the next technical lemma, we show that the functions $v^{\\delta}$ satisfy a\n$\\delta$-locally Lipschitz condition and a relation between $v^{2\\delta}$ and\n$v^{\\delta}$ which gives a monotonicity condition on the embedded grids; the\nproof is in the Appendix.\n\n\\begin{lemma}\n\\label{v_delta es lipschitz y monotona en el reticulado}The functions\n$v^{\\delta}$ defined in (\\ref{vdelta}) satisfy:\n\n(1) $v^{\\delta}(\\mathbf{m}+\\mathbf{e}_{i})-v^{\\delta}(\\mathbf{m})\\geq\na_{i}p_{i}\\delta$ and $v^{\\delta}(\\mathbf{m}+\\mathbf{1})-v^{\\delta}%\n(\\mathbf{m})\\leq v^{\\delta}(\\mathbf{m})(e^{(c+\\lambda)\\delta}-1);$\n\n(2) $\\Pi_{g^{2\\delta}(\\mathbf{m})}^{2\\delta}\\subset\\Pi_{2g^{\\delta}%\n(\\mathbf{m})}^{\\delta}\\subset\\Pi_{2g^{\\delta}(\\mathbf{m})}$ and so\n$v^{2\\delta}(\\mathbf{m})\\leq v^{\\delta}(2\\mathbf{m})$.\n\\end{lemma}\n\nLet us take $\\delta_{k}:=\\delta\/2^{k}$ for $k\\geq0$. In the remainder of the\nsection we will prove that $V^{\\delta_{k}}$ $\\nearrow V$ locally uniformly as\n$k$ goes to infinity. Consider the dense set in $\\mathbf{R}_{+}^{n}$,\n$\\mathcal{G}:=\\bigcup\\nolimits_{k\\geq0}\\mathcal{G}^{\\delta_{k}}$. Note that\n$\\mathcal{G}^{\\delta_{k}}\\subset\\mathcal{G}^{\\delta_{k+1}}$, so by Lemma\n\\ref{v_delta es lipschitz y monotona en el reticulado}-(2),\n\\[\nV^{\\delta_{k}}\\leq V^{\\delta_{k+1}}\\leq V;\n\\]\nthen we can define the function $\\overline{V}:$ $\\mathbf{R}_{+}^{n}%\n\\rightarrow\\mathbf{R}$ as%\n\n\\begin{equation}\n\\overline{V}(\\mathbf{x}):=\\lim\\nolimits_{k\\rightarrow\\infty}V^{\\delta_{k}%\n}(\\mathbf{x}).\\label{ubarra como limite}%\n\\end{equation}\n\n\n\\begin{remark}\n\\label{Limit value function u barra} We will prove that $\\overline{V}$ is the\noptimal value function. In order to do that, we will show that $\\overline{V}$\nis a viscosity supersolution of (\\ref{HJB}). It is straightforward to see that\n$\\overline{V}(\\mathbf{x})$ is a limit of value functions of admissible\nstrategies in $\\Pi_{\\mathbf{x}}$ for all $\\mathbf{x}\\in\\mathbf{R}_{+}^{n}$ so\nthe result will follow from Theorem \\ref{verification result}. 
Since there is\nno uniqueness of solutions of the HJB equation, it is essential to show that\nthis function is a limit of value functions of admissible strategies.\n\\end{remark}\n\nIn the next lemma, we find a bound on the variation of $V^{\\delta_{k}}$ and we\nshow that $\\overline{V}$ is locally Lipschitz in $\\mathbf{R}_{+}^{n}$ and so\nit is absolutely continuous; the proof is in the Appendix.\n\n\\begin{lemma}\n\\label{Lipschitz Inequality u barra} We have for each $\\mathbf{y}%\n\\geq\\mathbf{x}$ in $\\mathbf{R}_{+}^{n}$ that\n\\[\n\\left\\vert V^{\\delta_{k}}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{x})\\right\\vert\n\\leq\\left\\Vert \\left\\langle \\mathbf{y}\\right\\rangle ^{\\delta_{k}}-\\left\\langle\n\\mathbf{x}\\right\\rangle ^{\\delta_{k}}\\right\\Vert _{1}\\frac{2}{\\hat{p}%\n}V^{\\delta_{k}}(\\left\\langle \\mathbf{x}\\vee\\mathbf{y}\\right\\rangle\n^{\\delta_{k}})(\\frac{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}})+2\\delta\n_{k}\\mathbf{a}\\cdot\\mathbf{p}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{,}%\n\\]\nand also%\n\\[\n\\mathbf{a}\\cdot\\left( \\mathbf{y}-\\mathbf{x}\\right) \\leq\\overline\n{V}(\\mathbf{y})-\\overline{V}(\\mathbf{x})\\leq\\overline{V}(\\mathbf{y}%\n)\\frac{2(c+\\lambda)}{\\hat{p}}\\left\\Vert \\mathbf{y}-\\mathbf{x}\\right\\Vert\n_{1}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{,}%\n\\]\nwhere $\\hat{p}:=\\min_{i=1,..,n}p_{i}$.\n\\end{lemma}\n\nIn the next two propositions we address the convergence of $V^{\\delta_{k}}$ to\n$\\overline{V}$ and we prove that $\\overline{V}$ coincides with $V$.\n\n\\begin{proposition}\n\\label{Limite u barra uniforme}For any $\\delta>0$, $V^{\\delta_{k}}$\n$\\nearrow\\overline{V}\\ $ locally uniformly as $k$ goes to infinity.\n\\end{proposition}\n\n\\textit{Proof}. Consider a compact set $K$ in $\\mathbf{R}_{+}^{n}$,\n$\\mathbf{x}^{1}\\in K$ and $\\varepsilon>0$. Let us take\\ an upper bound\n$\\mathbf{z}\\in\\mathbf{R}_{+}^{n}$ of $K$. We show first that there exists\n$k_{0}$ large enough and $\\eta>0$ small enough such that if $\\left\\Vert\n\\mathbf{x}-\\mathbf{x}^{1}\\right\\Vert _{1}<\\eta$ and $k\\geq k_{0}$, then%\n\\begin{equation}\n\\overline{V}(\\mathbf{x})-V^{\\delta_{k}}(\\mathbf{x})<\\varepsilon\n.\\label{diferencia en Bolas}%\n\\end{equation}\nIndeed, by pointwise convergence at $\\mathbf{x}^{1}$, there exists $k_{1}$\nsuch that\n\\begin{equation}\n\\overline{V}(\\mathbf{x}^{1})-V^{\\delta_{k}}(\\mathbf{x}^{1})<\\varepsilon\n\/3~\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{for }k\\geq k_{1}.\\label{desig 1}%\n\\end{equation}\nBy Lemma \\ref{Lipschitz Inequality u barra}, there exists $\\eta_{1}$ such that\nif $\\left\\Vert \\mathbf{x}-\\mathbf{x}^{1}\\right\\Vert _{1}<\\eta_{1}$, then\n\\begin{equation}\n\\left\\vert \\overline{V}(\\mathbf{x})-\\overline{V}(\\mathbf{x}^{1})\\right\\vert\n<\\varepsilon\/3.\\label{desig 2}%\n\\end{equation}\nAlso, from Lemma \\ref{Lipschitz Inequality u barra}, there exist $\\eta_{2}$\nand $k_{2}$ such that if $\\left\\Vert \\mathbf{x}-\\mathbf{x}^{1}\\right\\Vert\n_{1}<\\eta_{2}$, then%\n\\begin{equation}\n\\left\\vert V^{\\delta_{k}}(\\mathbf{x})-V^{\\delta_{k}}(\\mathbf{x}^{1}%\n)\\right\\vert \\leq\\left\\Vert g^{\\delta_{k}}\\left( \\rho^{\\delta_{k}}%\n(\\mathbf{x})-\\rho^{\\delta_{k}}(\\mathbf{x}^{1})\\right) \\right\\Vert\n_{1}\\overline{V}(\\mathbf{z})2e^{(c+\\lambda)}\/\\hat{p}+2\\delta_{k}%\n\\mathbf{a}\\cdot\\mathbf{p}<\\varepsilon\/3\\label{desig 3}%\n\\end{equation}\nfor $k\\geq k_{2}$. 
Therefore, taking $\\eta:=\\eta_{1}\\wedge\\eta_{2}$, for\n$k\\geq k_{0}:=k_{1}\\vee k_{2}$, we obtain (\\ref{diferencia en Bolas}) from\n(\\ref{desig 1}), (\\ref{desig 2}) and (\\ref{desig 3}).\n\nFinally, we conclude the result taking a finite covering of the compact set\n$K$. $\\blacksquare$\n\n\\begin{proposition}\n\\label{vbarra es supersolucion} The function $\\overline{V}$ defined in\n(\\ref{ubarra como limite}) is the optimal value function $V$.\n\\end{proposition}\n\n\\textit{Proof}. By Remark \\ref{Limit value function u barra}, it is enough to\nprove that $\\overline{V}$ is a viscosity supersolution of (\\ref{HJB}) in the\ninterior of $\\mathbf{R}_{+}^{n}$. Take $\\mathbf{x}^{0}$ in the interior of\n$\\mathbf{R}_{+}^{n}$ and a differentiable test function $\\varphi\n:\\mathbf{R}_{+}^{n}\\rightarrow\\mathbf{R}$ for viscosity supersolution of\n(\\ref{HJB}) at $\\mathbf{x}^{0}$, that is\n\\begin{equation}\n\\overline{V}(\\mathbf{x})\\geq\\varphi(\\mathbf{x})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and }\\overline\n{V}(\\mathbf{x}^{0})=\\varphi(\\mathbf{x}^{0}).\\label{Comparacion}%\n\\end{equation}\nSince $\\mathcal{G}$ is a dense set in $\\mathbf{R}_{+}^{n}$, we obtain by the\ncontinuity assumptions on the function $f$ given in Section\n\\ref{Seccion Modelistica} and (\\ref{ubarra como limite}) that $f\\leq\n\\overline{V}$ in $\\mathbf{R}_{+}^{n}$, so $f(\\mathbf{x}^{0})-\\varphi\n(\\mathbf{x}^{0})\\leq0$. By Proposition \\ref{Lipschitz Inequality u barra},\n\\[\n\\overline{V}(\\mathbf{y})-\\overline{V}(\\mathbf{x})\\geq\\mathbf{a}\\cdot\\left(\n\\mathbf{y}-\\mathbf{x}\\right)\n\\]\nfor all $\\mathbf{y}\\geq\\mathbf{x}$, so it holds that $\\mathbf{a}-{\\Greekmath 0272}\n\\varphi(\\mathbf{x}^{0})\\leq\\mathbf{0}$. In order to prove that $\\mathcal{L}%\n(\\varphi)(\\mathbf{x}^{0})\\leq0,$ consider now for $\\eta>0$ small enough,\n\\[\n\\varphi_{\\eta}(\\mathbf{x})=\\varphi(\\mathbf{x})-\\eta\\left( \\mathbf{x}%\n-\\mathbf{x}^{0}\\right) \\mathbf{\\cdot}(\\mathbf{x}-\\mathbf{x}^{0}).\n\\]\nGiven $k\\geq0$, the set $\\mathcal{G}^{\\delta_{k}}\\cap\\lbrack\\mathbf{0}%\n,\\mathbf{x}^{0}+\\mathbf{1}]$ is finite, so we can define\n\\begin{equation}\na_{k}^{\\eta}:=\\min\\nolimits_{\\mathcal{G}^{\\delta_{k}}\\cap\\lbrack\n\\mathbf{0},\\mathbf{x}^{0}+\\mathbf{1}]}\\{V^{\\delta_{k}}(\\mathbf{x}%\n)-\\varphi_{\\eta}(\\mathbf{x})\\}.\\label{minimo V_delta-Fi}%\n\\end{equation}\nSince $V^{\\delta_{k}}\\leq\\overline{V}$, we have from (\\ref{Comparacion}), that\n$a_{k}^{\\eta}\\leq0$. Taking%\n\\[\n0\\leq b_{k}:=\\max\\nolimits_{\\mathcal{G}^{\\delta_{k}}\\cap\\lbrack\\mathbf{0}%\n,\\mathbf{x}^{0}+\\mathbf{1}]}\\left( \\overline{V}-V^{\\delta_{k}}\\right) ,\n\\]\nby Proposition \\ref{Limite u barra uniforme}, $b_{k}\\rightarrow0$ as\n$k\\rightarrow\\infty$. 
\\ For all $\\mathbf{x}\\in\\mathcal{G}^{\\delta_{k}}%\n\\cap\\lbrack\\mathbf{0},\\mathbf{x}^{0}+\\mathbf{1}]$ we get from\n(\\ref{Comparacion}),%\n\n\\[%\n\\begin{array}\n[c]{lll}%\nV^{\\delta_{k}}(\\mathbf{x})-\\varphi_{\\eta}(\\mathbf{x}) & = & V^{\\delta_{k}%\n}(\\mathbf{x})-\\overline{V}(\\mathbf{x})+\\overline{V}(\\mathbf{x})-\\varphi\n(\\mathbf{x})+\\eta\\left( \\mathbf{x}-\\mathbf{x}^{0}\\right) \\mathbf{\\cdot\n}(\\mathbf{x}-\\mathbf{x}^{0})\\\\\n& \\geq & -b_{k}+\\eta\\left( \\mathbf{x}-\\mathbf{x}^{0}\\right) \\mathbf{\\cdot\n}(\\mathbf{x}-\\mathbf{x}^{0}).\n\\end{array}\n\\]\nThen, the minimum argument in (\\ref{minimo V_delta-Fi}) is attained at\n$\\mathbf{x}^{k}\\in\\mathcal{G}^{\\delta_{k}}$ such that\n\\[\n\\left( \\mathbf{x}^{k}-\\mathbf{x}^{0}\\right) \\mathbf{\\cdot}(\\mathbf{x}%\n^{k}-\\mathbf{x}^{0})\\leq b_{k}\/\\eta.\n\\]\nThen, we have $\\mathbf{x}^{k}\\rightarrow\\mathbf{x}^{0}$ and $-a_{k}^{\\eta\n}\\rightarrow0$ as $k$ goes to infinity. So\n\\[\nV^{\\delta_{k}}(\\mathbf{x})\\geq\\varphi_{\\eta}(\\mathbf{x})-a_{k}^{\\eta}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{\nfor }\\mathbf{x}\\in\\mathcal{G}^{\\delta_{k}}\\cap\\lbrack0,\\mathbf{x}%\n^{0}+\\mathbf{1}]\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ and }V^{\\delta_{k}}(\\mathbf{x}^{k})=\\varphi_{\\eta\n}(\\mathbf{x}^{k})-a_{k}^{\\eta}.\n\\]\nSince\n\\[\nT_{0}(v^{\\delta_{k}})\\left( \\left[ \\tfrac{x_{1}^{k}}{\\delta_{k}p_{1}%\n}\\right] ,...,\\left[ \\tfrac{x_{n}^{k}}{\\delta_{k}p_{n}}\\right] \\right)\n-v^{\\delta_{k}}\\left( \\left[ \\tfrac{x_{1}^{k}}{\\delta_{k}p_{1}}\\right]\n,...,\\left[ \\tfrac{x_{n}^{k}}{\\delta_{k}p_{n}}\\right] \\right) \\leq0,\n\\]\nwe obtain\n\\[%\n\\begin{array}\n[c]{lll}%\n0 & \\geq & e^{-(c+\\lambda)\\delta_{k}}\\left( V^{\\delta_{k}}(\\mathbf{x}%\n^{k}+\\delta_{k}\\mathbf{p})\\right) \\\\\n& & +\\int_{0}^{\\delta_{k}}\\lambda e^{-(c+\\lambda)t}(%\n{\\textstyle\\int\\nolimits_{\\mathbf{0}\\leq\\mathbf{\\alpha}\\leq\\mathbf{x}%\n^{k}+t\\mathbf{p}}}\nV^{\\delta_{k}}(\\mathbf{x}^{k}+t\\mathbf{p}-\\mathbf{\\alpha})dF(\\mathbf{\\alpha\n}))dt\\\\\n& & -\\int_{0}^{\\delta_{k}}e^{-(c+\\lambda)t}\\mathcal{R}(\\mathbf{x}%\n^{k}+t\\mathbf{p})dt-V^{\\delta_{k}}(\\mathbf{x}^{k})\\\\\n& \\geq & e^{-(c+\\lambda)\\delta_{k}}\\left( \\varphi_{\\eta}(\\mathbf{x}%\n^{k}+\\delta_{k}\\mathbf{p})-\\varphi_{\\eta}(\\mathbf{x}^{k})\\right) \\\\\n& & -\\left( \\varphi_{\\eta}(\\mathbf{x}^{k})-a_{k}^{\\eta}\\right)\n(1-e^{-(c+\\lambda)\\delta_{k}})\\\\\n& & +\\int_{0}^{\\delta_{k}}\\lambda e^{-(c+\\lambda)t}(%\n{\\textstyle\\int\\nolimits_{\\mathbf{0}\\leq\\mathbf{\\alpha}\\leq\\mathbf{x}%\n^{k}+t\\mathbf{p}}}\n\\left( \\varphi_{\\eta}(\\rho^{\\delta_{k}}(\\mathbf{x}^{k}+t\\mathbf{p}%\n-\\mathbf{\\alpha})-a_{k}^{\\eta}\\right) dF(\\mathbf{\\alpha}))dt\\\\\n& & +\\int_{0}^{\\delta_{k}}\\lambda e^{-(c+\\lambda)t}(%\n{\\textstyle\\int\\nolimits_{\\mathbf{0}\\leq\\mathbf{\\alpha}\\leq\\mathbf{x}%\n^{k}+t\\mathbf{p}}}\n\\left( \\mathbf{a}\\cdot\\left( \\mathbf{x}^{k}+t\\mathbf{p}-\\mathbf{\\alpha\n}-\\left\\langle \\mathbf{x}^{k}+t\\mathbf{p}-\\mathbf{\\alpha}\\right\\rangle\n^{\\delta_{k}}\\right) \\right) dF(\\mathbf{\\alpha}))dt\\\\\n& & -\\int_{0}^{\\delta_{k}}e^{-(c+\\lambda)t}\\mathcal{R}(\\mathbf{x}%\n^{k}+t\\mathbf{p})dt\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{.}%\n\\end{array}\n\\]\nDividing by $\\delta_{k}$, taking $k\\ $to infinity and using the continuity of\n$\\mathcal{R}$, we get $\\mathcal{L}(\\varphi_{\\eta})(\\mathbf{x}^{0})\\leq0 $.\nFinally, since 
${\\Greekmath 0272}\\varphi_{\\eta}(\\mathbf{x}^{0})={\\Greekmath 0272}\\varphi\n(\\mathbf{x}^{0})$ and $\\varphi_{\\eta}\\nearrow\\varphi$ as $\\eta\\searrow0$, we\nobtain that $\\mathcal{L}(\\varphi)(\\mathbf{x}^{0})\\leq0$ and the result\nfollows. $\\blacksquare$\n\nFrom Propositions \\ref{Limite u barra uniforme} and\n\\ref{vbarra es supersolucion}, we conclude the main result of the paper.\n\n\\begin{theorem}\n\\label{Main Theorem} For any $\\delta>0$, the functions $V^{\\delta_{k}}$\n$\\nearrow\\overline{V}=V$ locally uniformly as $k$ goes to infinity.\n\\end{theorem}\n\n\\section{Optimal merger time}\n\nLet us assume that the uncontrolled bivariate surplus $\\mathbf{X}_{t}$ of two\ninsurance companies with the same shareholders follows the process\n(\\ref{UncontrolledSurplusOriginal}). Both branches pay dividends up to the\ntime of their respective ruin $\\tau^{L_{i}}$ with $i=1,2$, but the\nshareholders have the possibility of \\textit{merging} the two branches at any\ntime $\\overline{\\tau}$ prior to $\\tau^{\\mathbf{L}}=\\tau^{L_{1}}\\wedge\n\\tau^{L_{2}}$ (as defined in (\\ref{Definicion Tau L})); at this time the\nbranches put together all their surplus, pay the claims of both branches and\npay dividends until the merged surplus becomes negative, see e.g. Gerber and\nShiu \\cite{GS Merger}. The aim is to find both the dividend payment policy and\nthe merging time which maximize the expected sum of all the discounted\ndividends paid to the shareholders. This problem corresponds to\n(\\ref{Definicion V}) where $n=2$, $\\mathbf{a}=(1,1)$, $A$ is the $2\\times2$\nidentity matrix, the function $\\upsilon$ is defined as in\n(\\ref{Nu the dos companias independientes}) and the switch-value function\n$f$ is defined as in (\\ref{Merger 2x2}). In the numerical examples, we\nconsider\n\\[\nF(x_{1},x_{2})=\\mathbb{P}(\\alpha_{1}\\leq x_{1},\\alpha_{2}\\leq x_{2}%\n)=\\frac{\\lambda_{1}}{\\lambda}(1-e^{-d_{1}x_{1}})+\\frac{\\lambda_{2}}{\\lambda\n}(1-e^{-d_{2}x_{2}})\n\\]\nwith $d_{1}=3$ and $d_{2}=3.5$. Note that the above formula for $F$\ncorresponds to the case in which the surplus processes of the two branches are\nindependent, as we pointed out in (\\ref{independent_Surpluses}); so the\nfunction\n\\[\n\\mathcal{R}(x_{1},x_{2})=\\frac{\\lambda_{1}}{\\lambda}V_{2}(x_{2})e^{-d_{1}%\nx_{1}}+\\frac{\\lambda_{2}}{\\lambda}V_{1}(x_{1})e^{-d_{2}x_{2}}%\n\\]\nis continuous in $\\mathbf{R}_{+}^{2}$. The parameters of the merged company\n(a one-dimensional problem) are $\\lambda_{M}=\\lambda_{1}+\\lambda_{2}$,\n$p_{M}=p_{1}+p_{2}$ and $F_{M}(x)=F(x,x)$.\n\nIn the first example, we consider $\\lambda_{1}=2.4$, $\\lambda_{2}=2$,\n$\\lambda=\\lambda_{1}+\\lambda_{2}$, $p_{1}=1.08$, $p_{2}=0.674$, $c=0.11$,\n$\\delta=1\/60$ and $c_{M}=0$. In Figure 1, we show the $\\mathcal{G}^{\\delta}%\n$-optimal strategy: the merger region is in black, the non-action region in\nwhite, the dividend payment region for the first company in dark grey\nand the dividend payment region for the second company in light grey. Note\nthat the non-action region has two connected components; in the one on the\ntop, the optimal strategy is to withhold dividend payments in order to reach\nthe merger region, and in the white rectangle on the bottom the optimal\nstrategy corresponds to the non-action region of the stand-alone problem (in\nwhich the companies never merge). 
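\n\nBefore discussing the limit behaviour suggested by Figure 1, we note, as a\nreading aid, that the data of this first example can be encoded as follows (a\nminimal Python sketch of ours, not taken from the text; the helper names are\nhypothetical, and \\texttt{V1}, \\texttt{V2} stand for the precomputed\nstand-alone one-dimensional value functions entering $\\mathcal{R}$):\n\\begin{verbatim}\nimport numpy as np\n\nlam1, lam2 = 2.4, 2.0          # claim arrival intensities of the two branches\nlam = lam1 + lam2\np1, p2 = 1.08, 0.674           # premium rates\nc, delta, cM = 0.11, 1.0 / 60.0, 0.0\nd1, d2 = 3.0, 3.5              # exponential claim-size parameters\n\ndef F(x1, x2):\n    # bivariate claim-size cdf: each jump hits exactly one of the two branches\n    return ((lam1 / lam) * (1.0 - np.exp(-d1 * x1))\n            + (lam2 / lam) * (1.0 - np.exp(-d2 * x2)))\n\ndef sample_claim(rng):\n    # claim of branch 1 only (prob. lam1/lam) or of branch 2 only (prob. lam2/lam)\n    if rng.random() < lam1 / lam:\n        return np.array([rng.exponential(1.0 / d1), 0.0])\n    return np.array([0.0, rng.exponential(1.0 / d2)])\n\n# merged (one-dimensional) company: superposed claims and pooled premium\nlamM, pM, FM = lam1 + lam2, p1 + p2, (lambda x: F(x, x))\n\ndef R(x1, x2, V1, V2):\n    # expected value at ruin for independent branches, as in the formula above\n    return ((lam1 / lam) * V2(x2) * np.exp(-d1 * x1)\n            + (lam2 / lam) * V1(x1) * np.exp(-d2 * x2))\n\\end{verbatim}\nFor instance, \\texttt{sample\\_claim(np.random.default\\_rng(0))} draws a claim\nvector with distribution $F$, while $\\lambda_{M}$, $p_{M}$ and $F_{M}$ above are\nthe data of the one-dimensional problem solved after the merger.\n\n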
This figure suggests that, as $\\delta\n\\rightarrow0$, the optimal local control in the boundary between the\nnon-action rectangle and the dividend payment region for the second company\n(light grey) should be that the second company pay the incoming\npremium as dividends while the first company pays no dividends, so the\nbivariate control surplus stays on the top boundary $x_{2}=0.33$ of the\nrectangle and moves rightward at constant speed $p_{1}$ to the point\n$(1.42,0.33)$, which corresponds to the right-top corner of the rectangle\n(until the arrival of the next claim). Analogously, the optimal strategy in\nthe right boundary $x_{1}=1.42$ of the non-action rectangle should be that the\nfirst company pay the incoming premium as dividends while the second company\npay no dividends; in this case the bivariate control surplus stays on the\nright boundary of the rectangle and moves upward at constant speed $p_{2}$ to\nthe right-top corner (until the arrival of the next claim). At this corner,\nboth companies pay their incoming premium as dividends and the surplus process\nremains constant (until the arrival of the next claim). It is more difficult\nto guess the optimal local control (as $\\delta\\rightarrow0$) in the boundary\nbetween the upper connected component of the non-action region and the\ndividend payment region for the second company (light grey). Our\nconjecture, assuming some regularity on this boundary, is the following: In\nthe upper part of this boundary (up to the furthest point to the right), the\nsecond company should pay dividends with some rate in such a way that the\nbivariate control surplus stays on this part of the boundary (moving\ndownwards), and in the lower part of this boundary, the second company should\npay a lump sum in such a way that the bivariate surplus reaches the line\n$x_{2}=0.33$.\n\nIn the second example, we consider $\\lambda_{1}=2.44$, $\\lambda_{2}=2.22$,\n$\\lambda=\\lambda_{1}+\\lambda_{2}$, $p_{1}=1.100$, $p_{2}=0.825$, $c=0.1$,\n$\\delta=1\/50$ and $c_{M}=0.364$. In Figure 2, we show the $\\mathcal{G}%\n^{\\delta}$-optimal strategy; the regions are described with the same colors as\nbefore. This figure suggests that, as $\\delta\\rightarrow0$, the optimal local\ncontrol in the boundary between the non-action region (white) and the dividend\npayment region for the second company (light grey region) would be (assuming\nsome regularity on the boundary) that the second company pay dividends with\nsome rate in such a way that the bivariate control surplus stays on the\nboundary: this control surplus would move downward until the bivariate surplus\nreaches the point $(1.61,1.06)$ at which the light grey, the dark grey and the\nwhite regions meet. At this point, both companies should pay the incoming\npremiums as dividends and the bivariate surplus process remains constant until\nthe arrival of the next claim. 
Similarly, the optimal local control in the\nboundary between the non-action region (white) and the dividend payment region\nfor the first company (dark grey region), would be (assuming some regularity\non the boundary) that the first company pay dividends with some rate and the\ncontrol surplus would move leftward until the bivariate surplus reaches the\npoint $(1.61,1.06)$.%\n\n\\[%\n\\begin{array}\n[c]{ccc}%\n{\\parbox[b]{2.7588in}{\\begin{center}\n\\includegraphics[\nheight=2.8072in,\nwidth=2.7588in\n]%\n{Figure1.png}%\n\\\\\nFigure 1\n\\end{center}}}\n& &\n{\\parbox[b]{2.7501in}{\\begin{center}\n\\includegraphics[\nheight=2.7985in,\nwidth=2.7501in\n]%\n{Figure2.png}%\n\\\\\nFigure 2\n\\end{center}}}\n\\end{array}\n\\]\n\n\n\\section{Appendix}\n\nThis section contains the proofs of all the lemmas.\n\n\\textit{Proof of Lemma \\ref{Dynkins}}. Let us extend the function $g$ to\n$\\mathbf{R}^{n}$ as $g(\\mathbf{x})=0$ for $\\mathbf{x}\\notin\\mathbf{R}_{+}^{n}\n$ and the function $\\upsilon\\ $to $\\mathbf{R}^{n}\\times\\mathbf{R}_{+}^{n}$ as\n$\\upsilon(\\mathbf{x},\\mathbf{\\alpha})=0$ for $\\left( \\mathbf{R}^{n}%\n\\times\\mathbf{R}_{+}^{n}\\right) \\diagup B$, where $B$ is defined in\n(\\ref{Definicion B}). Using the expressions (\\ref{UncontrolledSurplusOriginal}%\n) and the change of variables formula for finite variation processes, and\ncalling $\\mathbf{z}_{s}=\\mathbf{X}_{s}^{\\mathbf{L}}{}$ and $\\mathbf{\\breve{z}%\n}_{s}=\\mathbf{\\check{X}}_{s}^{\\mathbf{L}}$, we can write%\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\ng(\\mathbf{z}_{\\tau})e^{-c\\tau}-g(\\mathbf{x})\\\\%\n\\begin{array}\n[c]{ll}%\n= & \\int\\nolimits_{0}^{\\tau}\\mathbf{p\\cdot}{\\Greekmath 0272} g(\\mathbf{z}_{s^{-}}%\n)e^{-cs}ds-c\\int\\nolimits_{0}^{\\tau}g(\\mathbf{z}_{s^{-}})e^{-cs}ds\\\\\n& -\\int\\nolimits_{0}^{\\tau}e^{-cs}\\left( {\\Greekmath 0272} g(\\mathbf{z}_{s^{-}%\n})\\mathbf{\\cdot}d\\mathbf{L}_{s}^{c}\\right) +\\sum\\limits_{\\mathbf{L}_{s}%\n\\neq\\mathbf{L}_{s^{-}},~s\\leq\\tau}\\left( g(\\mathbf{z}_{s})-g(\\mathbf{\\breve\n{z}}_{s})\\right) e^{-cs}\\\\\n& +\\sum\\limits_{\\mathbf{\\breve{z}}_{s}\\neq\\mathbf{z}_{s^{-}},~s\\leq\\tau\n}\\left( g(\\mathbf{\\breve{z}}_{s})-g(\\mathbf{z}_{s^{-}})\\right) e^{-cs}.\n\\end{array}\n\\end{array}\n\\label{Paso 1}%\n\\end{equation}\nNote that $\\mathbf{z}_{s}\\in\\mathbf{R}_{+}^{n}$ for $s\\leq\\tau$ except in the\ncase that $\\tau=\\tau^{\\mathbf{L}}$. 
Since $\\mathbf{z}_{s}=$ $\\mathbf{\\breve\n{z}}_{s}-\\Delta\\mathbf{L}_{s},$%\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n-\\int\\nolimits_{0}^{\\tau}e^{-cs}{\\Greekmath 0272} g(\\mathbf{z}_{s^{-}})\\mathbf{\\cdot\n}d\\mathbf{L}_{s}^{c}+\\sum\\limits_{\\mathbf{L}_{s}\\neq\\mathbf{L}_{s^{-}}%\n,s\\leq\\tau}\\left( g(\\mathbf{z}_{s})-g(\\mathbf{\\breve{z}}_{s})\\right)\ne^{-cs}\\\\%\n\\begin{array}\n[c]{cl}%\n= & -\\int\\nolimits_{0}^{\\tau}e^{-cs}{\\Greekmath 0272} g(\\mathbf{z}_{s^{-}})\\mathbf{\\cdot\n}d\\mathbf{L}_{s}^{c}-\\sum\\limits_{\\mathbf{L}_{s}\\neq\\mathbf{L}_{s^{-}}%\n,s\\leq\\tau}e^{-cs}\\left( \\int\\nolimits_{0}^{1}\\left( {\\Greekmath 0272} g\\left(\n\\mathbf{\\breve{z}}_{s}-\\gamma\\Delta\\mathbf{L}_{s}\\right) \\mathbf{\\cdot}%\n\\Delta\\mathbf{L}_{s}\\right) d\\gamma\\right) \\\\\n= & -\\int_{0^{-}}^{\\tau}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}+\\int\n\\nolimits_{0}^{\\tau}e^{-cs}\\left( \\mathbf{a}-{\\Greekmath 0272} g(\\mathbf{z}_{s^{-}%\n})\\right) \\mathbf{\\cdot}d\\mathbf{L}_{s}^{c}\\\\\n& +\\sum\\limits_{\\mathbf{L}_{s}\\neq\\mathbf{L}_{s^{-}},s\\leq\\tau}e^{-cs}%\n\\int\\nolimits_{0}^{1}\\left( \\mathbf{a}-{\\Greekmath 0272} g\\left( \\mathbf{\\breve{z}}%\n_{s}-\\gamma\\Delta\\mathbf{L}_{s}\\right) \\right) \\mathbf{\\cdot}\\Delta\n\\mathbf{L}_{s}d\\gamma.\n\\end{array}\n\\end{array}\n\\label{Paso 2}%\n\\end{equation}\nSince\n\\begin{equation}\nM_{1}(t)=\\sum\\limits_{\\mathbf{\\breve{z}}\\left( s^{-}\\right) \\neq\n\\mathbf{z}_{s^{-}},s\\leq t}\\left( g(\\mathbf{\\breve{z}}_{s})-g(\\mathbf{z}%\n_{s^{-}})\\right) e^{-cs}-\\lambda\\int\\limits_{0}^{t}e^{-cs}\\int\n\\limits_{\\mathbf{R}_{+}^{n}}\\left( g(\\mathbf{z}_{s^{-}}-\\mathbf{\\alpha\n})-g(\\mathbf{z}_{s^{-}})\\right) dF(\\mathbf{\\alpha})ds\\label{M1}%\n\\end{equation}\nand%\n\\begin{equation}\nM_{2}(t)=\\sum\\limits_{\\mathbf{\\breve{z}}\\left( s^{-}\\right) \\neq\n\\mathbf{z}_{s^{-}},s\\leq t}-\\upsilon(\\mathbf{\\breve{z}}_{s^{-}},\\mathbf{z}%\n(s^{-})-\\mathbf{\\breve{z}}_{s})e^{-cs}+\\lambda\\int\\limits_{0}^{t}e^{-cs}%\n\\int\\limits_{\\mathbf{R}_{+}^{n}}\\upsilon(\\mathbf{z}_{s^{-}},\\mathbf{\\alpha\n})dF(\\mathbf{\\alpha})ds\\label{M2}%\n\\end{equation}\nare martingales with zero expectation, we have from (\\ref{Paso 1}) and\n(\\ref{Paso 2})%\n\\[%\n\\begin{array}\n[c]{l}%\n(g(\\mathbf{z}_{\\tau})I_{\\{\\tau<\\tau^{\\mathbf{L}}\\}}-\\upsilon(\\mathbf{z}%\n_{\\tau^{-}},\\mathbf{z}_{\\tau^{-}}-\\mathbf{z}_{\\tau})I_{\\{\\tau=\\tau\n^{\\mathbf{L}}\\}})e^{-c\\tau}-g(\\mathbf{x})\\\\%\n\\begin{array}\n[c]{ll}%\n= & (g(\\mathbf{z}_{\\tau})-\\upsilon(\\mathbf{z}_{\\tau^{-}},\\mathbf{z}_{\\tau^{-}%\n}-\\mathbf{z}(\\tau)))e^{-c\\tau}-g(\\mathbf{x})\\\\\n= & \\int\\nolimits_{0}^{\\tau}\\mathcal{L}(g)(\\mathbf{z}_{s^{-}})e^{-cs}%\nds-\\int_{0^{-}}^{\\tau}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}\\\\\n& +\\int\\nolimits_{0}^{\\tau}e^{-cs}\\left( \\mathbf{a}-{\\Greekmath 0272} g(\\mathbf{z}%\n_{s^{-}})\\right) \\mathbf{\\cdot}d\\mathbf{L}_{s}^{c}\\\\\n& +\\sum\\limits_{\\mathbf{L}_{s}\\neq\\mathbf{L}_{s^{-}},s\\leq\\tau}e^{-cs}%\n\\int\\nolimits_{0}^{1}\\left( \\mathbf{a}-{\\Greekmath 0272} g\\left( \\mathbf{\\breve{z}}%\n_{s}-\\gamma\\Delta\\mathbf{L}_{s}\\right) \\mathbf{\\cdot}\\Delta\\mathbf{L}%\n_{s}\\right) d\\gamma+M(\\tau);\n\\end{array}\n\\end{array}\n\\]\nwhere $M(t)=M_{1}(t)+M_{2}(t)$. 
$\\blacksquare$\n\nIn order to prove Lemma \\ref{SupersolucionMayor-ValueFunction}, we will use a\ntechnical lemma in which we construct a sequence of smooth functions that\napproximate a (possible non-smooth) viscosity supersolution. This is done in\norder to apply Lemma \\ref{Dynkins} to an approximate smooth function instead\nof the viscosity supersolution; we have to do that because the amount of time\nthe controlled process spends at non-differentiable points of the viscosity\nsupersolution could have positive Lebesgue measure. We omit the proof of this\nlemma because it is similar to the one-dimensional version given in Lemma 4.1\nof \\cite{AM Libro}; the result is obtained by standard convolution arguments\nusing that the function $\\mathcal{R}$ is continuous.\n\n\\begin{lemma}\n\\label{A.1} Fix $\\mathbf{x}^{0}\\ $in the interior of $\\mathbf{R}_{+}^{n}$ and\nlet $\\overline{u}$\\ be a supersolution of (\\ref{HJB}) satisfying the growth\ncondition (\\ref{gc}).\\ We can find a sequence of functions $\\overline{u}%\n_{m}:\\mathbf{R}_{+}^{n}\\rightarrow\\mathbf{R}$\\ such that:\n\n(a) $\\overline{u}_{m}$\\ is continuously differentiable and $\\overline{u}%\n_{m}\\geq\\overline{u}\\geq f.$\n\n(b) $\\overline{u}_{m}\\ $satisfies the growth condition (\\ref{gc}).\n\n(c)$\\ \\mathbf{p\\cdot}{\\Greekmath 0272}\\overline{u}_{m}$\\ $\\leq\\left( c+\\lambda\\right)\n\\overline{u}_{m}+\\lambda\\left\\vert \\overline{u}(\\mathbf{0})\\right\\vert\n+\\lambda\\mathbb{E}\\left( \\left\\vert \\upsilon(\\mathbf{0},\\mathbf{U}%\n_{1})\\right\\vert \\right) $ in $\\mathbf{R}_{+}^{n}$ and $\\mathbf{a}%\n-{\\Greekmath 0272}\\overline{u}_{m}\\leq\\mathbf{0}$.\n\n(d) $\\overline{u}_{m}$\\ $\\searrow$ $\\overline{u}$\\ uniformly on compact sets\nin $\\mathbf{R}_{+}^{n}$ and ${\\Greekmath 0272}\\overline{u}_{m}$\\ converges to\n${\\Greekmath 0272}\\overline{u}$\\ a.e. in $\\mathbf{R}_{+}^{n}$.\n\n(e) There exists a sequence $c_{m}$ with $\\lim\\limits_{m\\rightarrow\\infty\n}c_{m}=0$ such that\n\\[\n\\sup\\nolimits_{\\mathbf{x}\\in\\lbrack\\mathbf{0},\\mathbf{x}^{0}]}\\mathcal{L}%\n(\\overline{u}_{m})\\left( \\mathbf{x}\\right) \\leq c_{m}.\n\\]\n\n\n\\textit{Proof of Lemma \\ref{SupersolucionMayor-ValueFunction}}. Consider the\nprocesses $\\mathbf{z}_{s}=\\mathbf{X}_{s}^{\\mathbf{L}}{}$ defined in\n(\\ref{XL}), let us call $\\tau=\\tau^{\\mathbf{L}}$ and take $\\widetilde{\\tau\n}=\\overline{\\tau}\\wedge\\tau$. Let us consider the functions $\\overline{u}_{m}$\ndefined in Lemma \\ref{A.1} in $\\mathbf{R}_{+}^{n}$ . 
Using Lemma \\ref{Dynkins}\nfor $\\widetilde{\\tau}\\wedge t$, we get from Lemma \\ref{A.1} (a) and (c) that%\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\overline{u}_{m}(\\mathbf{z}_{t})e^{-ct}I_{\\{t<\\widetilde{\\tau}\\}}%\n+e^{-c\\overline{\\tau}}f(\\mathbf{z}_{\\overline{\\tau}})I_{\\{t\\wedge\n\\widetilde{\\tau}=\\overline{\\tau},\\overline{\\tau}<\\tau\\}}-e^{-c\\overline{\\tau}%\n}\\upsilon\\left( \\mathbf{z}_{\\tau^{-}},\\mathbf{z}_{\\tau^{-}}-\\mathbf{z}_{\\tau\n}\\right) I_{\\{t\\wedge\\widetilde{\\tau}=\\tau\\}}-\\overline{u}_{m}(\\mathbf{x})\\\\%\n\\begin{array}\n[c]{ll}%\n\\leq & \\overline{u}_{m}(\\mathbf{z}_{t})e^{-ct}I_{\\{t<\\widetilde{\\tau}%\n\\}}+e^{-c\\overline{\\tau}}\\overline{u}_{m}(\\mathbf{z}_{\\overline{\\tau}%\n})I_{\\{t\\wedge\\widetilde{\\tau}=\\overline{\\tau},\\overline{\\tau}<\\tau\n\\}}-e^{-c\\overline{\\tau}}\\upsilon\\left( \\mathbf{z}_{\\tau^{-}},\\mathbf{z}%\n_{\\tau^{-}}-\\mathbf{z}_{\\tau}\\right) I_{\\{t\\wedge\\widetilde{\\tau}=\\tau\n\\}}-\\overline{u}_{m}(\\mathbf{x})\\\\\n\\leq & \\int\\nolimits_{0}^{t\\wedge\\widetilde{\\tau}}\\mathcal{L}(\\overline{u}%\n_{m})(\\mathbf{z}_{s^{-}})e^{-cs}ds-\\int_{0^{-}}^{t\\wedge\\widetilde{\\tau}%\n}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}+M(t\\wedge\\widetilde{\\tau}),\n\\end{array}\n\\end{array}\n\\label{ItoUnMenorSuper}%\n\\end{equation}\nwhere $M(t)\\ $is a zero-expectation martingale. Since $\\mathbf{L}_{s}$ is\nnon-decreasing we get, using the monotone convergence theorem, that%\n\\[%\n\\begin{array}\n[c]{l}%\n\\lim\\limits_{t\\rightarrow\\infty}\\mathbb{E}_{\\mathbf{x}}\\left( \\int_{0^{-}%\n}^{t\\wedge\\widetilde{\\tau}}e^{-cs}\\mathbf{a}\\cdot d\\mathbf{L}_{s}%\n+e^{-c\\overline{\\tau}}f(\\mathbf{z}_{\\overline{\\tau}})I_{\\{t\\wedge\n\\widetilde{\\tau}=\\overline{\\tau},\\overline{\\tau}<\\tau\\}}-e^{-c\\overline{\\tau}%\n}\\upsilon\\left( \\mathbf{z}_{\\tau^{-}},\\mathbf{z}_{\\tau^{-}}-\\mathbf{z}_{\\tau\n}\\right) I_{\\{t\\wedge\\widetilde{\\tau}=\\tau\\}}\\right) \\\\\n=V_{\\pi}(\\mathbf{x}).\n\\end{array}\n\\]\nFrom Lemma \\ref{A.1}-(c), we have%\n\\begin{equation}\n-\\left( c+\\lambda\\right) \\overline{u}_{m}(\\mathbf{x})+\\overline{u}%\n_{m}(0)\\lambda F(\\mathbf{x})-\\lambda\\mathbb{E}\\left( \\left\\vert\n\\upsilon(\\mathbf{0},\\mathbf{U}_{1})\\right\\vert \\right) \\leq\\mathcal{L}%\n(\\overline{u}_{m})(\\mathbf{x})\\leq\\lambda\\overline{u}_{m}(\\mathbf{x}%\n)+\\lambda\\left\\vert \\overline{u}(\\mathbf{0})\\right\\vert +\\lambda\n\\mathbb{E}\\left( \\left\\vert \\upsilon(\\mathbf{0},\\mathbf{U}_{1})\\right\\vert\n\\right) -\\mathcal{R}(\\mathbf{x}).\\label{PrimeraCotaL}%\n\\end{equation}\nBy Lemma \\ref{A.1}-(b), (c) and the inequality $\\mathbf{z}_{s}\\leq\n\\mathbf{x}+\\mathbf{p}s,$ there exists $d_{0}$ large enough such that\n\\begin{equation}\n\\overline{u}_{m}(\\mathbf{z}_{s})\\leq\\overline{u}_{m}(\\mathbf{x}+\\mathbf{p}%\ns)\\leq d_{0}e^{\\frac{c}{2n}\\sum_{i=1}^{n}\\frac{x_{i}+p_{i}s}{p_{i}}}%\n=d_{0}h_{0}(\\mathbf{x})e^{\\frac{c}{2}s}\\label{Acotacionu}%\n\\end{equation}\nand%\n\\begin{equation}\n-\\upsilon(\\mathbf{z}_{s^{-}},\\mathbf{\\alpha})\\leq S(\\mathbf{z}_{s^{-}})\\leq\nd_{0}h_{0}(\\mathbf{x})e^{\\frac{c}{2}s}\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ for }\\left( \\mathbf{z}_{s^{-}%\n}-\\mathbf{\\alpha}\\right) \\notin\\mathbf{R}_{+}^{n},\\label{Acotacion_Rho}%\n\\end{equation}\nwhere $h_{0}$ and $S$ are defined in (\\ref{ho}) and Proposition\n\\ref{Crecimiento de V} respectively. 
Therefore, from (\\ref{PrimeraCotaL}), we\nobtain that there exists $d_{1}$ large enough such that,\n\\begin{equation}\ne^{-cs}\\left\\vert \\mathcal{L}(\\overline{u}_{m})\\left( \\mathbf{z}_{s^{-}%\n}\\right) \\right\\vert \\leq d_{1}e^{-\\frac{c}{2}s}.\\label{Acotacion_L}%\n\\end{equation}\nAnd using the bounded convergence theorem,%\n\\begin{equation}\n\\lim\\limits_{t\\rightarrow\\infty}\\mathbb{E}_{\\mathbf{x}}\\left( \\int\n\\nolimits_{0}^{t\\wedge\\widetilde{\\tau}}\\mathcal{L}(\\overline{u}_{m}%\n)(\\mathbf{z}_{s^{-}})e^{-cs}ds\\right) =\\mathbb{E}_{\\mathbf{x}}\\left(\n\\int\\nolimits_{0}^{\\widetilde{\\tau}}\\mathcal{L}(\\overline{u}_{m}%\n)(\\mathbf{z}_{s^{-}})e^{-cs}ds\\right) .\\label{monotone2}%\n\\end{equation}\nFrom (\\ref{ItoUnMenorSuper}) and (\\ref{monotone2}), we get\n\\begin{equation}\n\\lim\\limits_{t\\rightarrow\\infty}\\mathbb{E}_{\\mathbf{x}}\\left( \\overline\n{u}_{m}(\\mathbf{z}_{t})e^{-ct}I_{\\{t<\\widetilde{\\tau}\\}}\\right) -\\overline\n{u}_{m}(\\mathbf{x})\\leq\\mathbb{E}_{\\mathbf{x}}\\left( \\int\\nolimits_{0}%\n^{\\widetilde{\\tau}}\\mathcal{L}(\\overline{u}_{m})(\\mathbf{z}_{s^{-}}%\n)e^{-cs}ds\\right) -V_{\\pi}(\\mathbf{x}).\\label{limite0}%\n\\end{equation}\nBy (\\ref{Acotacionu}),\n\\begin{equation}\n\\lim\\limits_{t\\rightarrow\\infty}\\mathbb{E}_{\\mathbf{x}}\\left( \\overline\n{u}_{m}(\\mathbf{z}_{t})e^{-ct}I_{\\{t<\\widetilde{\\tau}\\}}\\right)\n=0.\\label{limite1}%\n\\end{equation}\nLet us prove now that\n\\begin{equation}\n\\limsup\\limits_{m\\rightarrow\\infty}\\mathbb{E}_{\\mathbf{x}}\\left(\n\\int\\nolimits_{0}^{\\widetilde{\\tau}}\\mathcal{L}(\\overline{u}_{m}%\n)(\\mathbf{z}_{s^{-}})e^{-cs}ds\\right) \\leq0.\\label{limite2}%\n\\end{equation}\nGiven any $\\varepsilon>0$, from (\\ref{Acotacion_L}), we can find $T$ large\nenough such that\n\\begin{equation}\n\\mathbb{E}_{\\mathbf{x}}\\left( \\int\\nolimits_{T\\wedge\\widetilde{\\tau}%\n}^{\\widetilde{\\tau}}\\left\\vert \\mathcal{L}(\\overline{u}_{m})(\\mathbf{z}%\n_{s^{-}})\\right\\vert e^{-cs}ds\\right) \\leq\\frac{2d_{1}}{c}(e^{-\\frac{c}{2}%\nT})<\\frac{\\varepsilon}{2}.\\label{arreglado5}%\n\\end{equation}\nFor $s\\leq T$, we get $\\mathbf{z}_{s^{-}}\\in\\lbrack\\mathbf{0},$ $\\mathbf{x}%\n+\\mathbf{p}T$ $]$ , then from Lemma \\ref{A.1}-(e) we can find $m_{0}$ large\nenough such that for any $m\\geq m_{0}$%\n\\[\n\\int\\nolimits_{0}^{T}\\mathcal{L}(\\overline{u}_{m})(\\mathbf{z}_{s^{-}}%\n)e^{-cs}ds\\leq c_{m}\\int\\nolimits_{0}^{T}e^{-cs}ds\\leq\\frac{c_{m}}{c}\\leq\n\\frac{\\varepsilon}{2}%\n\\]\nand so we have (\\ref{limite2}). Thus, from (\\ref{limite0}) and using\n(\\ref{limite1}) and (\\ref{limite2}), we obtain\n\\end{lemma}%\n\n\\begin{equation}\n\\overline{u}(\\mathbf{x})=\\lim\\nolimits_{m\\rightarrow\\infty}\\overline{u}%\n_{m}(\\mathbf{x})\\geq V_{\\pi}(\\mathbf{x})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{. }\\blacksquare\n\\label{supersolucionMayorqueVL}%\n\\end{equation}\n\n\n\\textit{Proof of Lemma \\ref{Lema de tiempo infinito}. }Suppose that$\\ \\tilde\n{k}=\\infty$, calling\n\\[\nk_{l}:=\\mathbf{m\\cdot1}+(l-1)n+1,\n\\]\nthere are at least $i_{l}\\geq l$ control actions $\\mathbf{E}_{0}$ in $\\left(\ns_{1},s_{2},...,s_{k_{l}}\\right) $. Let us consider the non-decreasing\nsequence $(j_{l})_{l}$ defined as\n\\[\nj_{l}:=\\max\\{j:\\tau_{j}\\leq t_{k_{l}}\\},\n\\]\nwe have that $t_{k_{l}}\\geq\\tau_{j_{l}}+(i_{l}-j_{l})\\delta$. 
If\n$\\lim_{l\\rightarrow\\infty}i_{l}-j_{l}=\\infty$, then\n\\[\n\\lim\\nolimits_{l\\rightarrow\\infty}t_{k_{l}}\\geq\\lim\\nolimits_{l\\rightarrow\n\\infty}\\tau_{j_{l}}+(i_{l}-j_{l})\\delta\\geq\\lim\\nolimits_{l\\rightarrow\\infty\n}(i_{l}-j_{l})\\delta=\\infty;\n\\]\nif not, $\\lim_{l\\rightarrow\\infty}j_{l}=\\infty$ and so%\n\n\\[\n\\lim\\nolimits_{l\\rightarrow\\infty}t_{k_{l}}\\geq\\lim\\nolimits_{l\\rightarrow\n\\infty}\\tau_{j_{l}}+(i_{l}-j_{l})\\delta\\geq\\lim\\nolimits_{l\\rightarrow\\infty\n}\\tau_{j_{l}}%\n\\]\nand since $\\lim_{l\\rightarrow\\infty}\\tau_{j_{l}}=$ $\\lim_{i\\rightarrow\\infty\n}\\tau_{i}=$ $\\infty$ a.s., we have the result. $\\blacksquare$\n\n\\textit{Proof of Lemma \\ref{Ts crecientes}. }It is straightforward\nthat\\textit{\\ }$T_{0}$, $T_{i},$ $T_{s}$ and $T$ are non-decreasing and that\n\\[\n\\sup\\nolimits_{\\mathbf{m}\\in\\mathbf{N}_{0}^{n}}\\left\\vert T(w_{1}%\n)(\\mathbf{m})-T(w_{2})(\\mathbf{m})\\right\\vert \\leq\\sup\\nolimits_{\\mathbf{m}%\n\\in\\mathbf{N}_{0}^{n}}\\left\\vert w_{1}(\\mathbf{m})-w_{2}(\\mathbf{m}%\n)\\right\\vert .\n\\]\n\n\nAlso, given a function $w:\\mathbf{N}_{0}^{n}\\rightarrow\\mathbf{R}$ it is\nimmediate to see that $T_{i}(w)$ and $T_{s}(w)$ can be written as a linear\ncombination of the values of $w(\\mathbf{m})$ plus a constant. Let us prove now\nthat\n\\[\nT_{0}(w)(\\mathbf{m})=e^{-(c+\\lambda)\\delta}w(\\mathbf{m}+\\mathbf{1}%\n)+\\sum\\nolimits_{0\\leq\\mathbf{k}\\leq\\mathbf{m}}a_{1}(\\mathbf{k},\\mathbf{m}%\n)w(\\mathbf{k})+a_{2}(\\mathbf{m})\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{,}%\n\\]\n\n\n\\begin{lemma}\nwhere%\n\\[%\n\\begin{array}\n[c]{lll}%\na_{1}(\\mathbf{k},\\mathbf{m}) & = & I_{\\{\\mathbf{k}\\leq\\mathbf{m}-\\mathbf{1}%\n\\}}\\int\\limits_{0}^{\\delta}\\lambda e^{-(c+\\lambda)t}(F(g^{\\delta}\\left(\n\\mathbf{m}-\\mathbf{k}\\right) +t\\mathbf{p})-F(g^{\\delta}\\left( \\mathbf{m}%\n-\\mathbf{k}-\\mathbf{1}\\right) +t\\mathbf{p}))dt\\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{ \\ }\\\\\n& & +I_{\\{\\mathbf{k}\\leq\\mathbf{m},\\mathbf{k}\\nleqslant\\mathbf{m}%\n-\\mathbf{1}\\}}\\int\\limits_{0}^{\\delta}\\lambda e^{-(c+\\lambda)t}(F(g^{\\delta\n}\\left( \\mathbf{m}-\\mathbf{k}\\right) +t\\mathbf{p})-F(\\mathbf{0}\\vee\\left(\ng^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}\\right) +t\\mathbf{p}\\right) ))dt~\n\\end{array}\n\\]\n\n\nand%\n\\[%\n\\begin{array}\n[c]{lll}%\na_{2}(\\mathbf{m}) & = & \\sum\\limits_{0\\leq\\mathbf{k}<\\mathbf{m}-\\mathbf{1}%\n}\\int\\limits_{0}^{\\delta}(\\lambda e^{-(c+\\lambda)t}%\n{\\textstyle\\int\\limits_{g^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}-\\mathbf{1}%\n\\right) +t\\mathbf{p}}^{g^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}\\right)\n+t\\mathbf{p}}}\n\\mathbf{a}\\cdot(g^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}\\right) +t\\mathbf{p}%\n-\\mathbf{\\alpha})dF(\\mathbf{\\alpha}))dt\\\\\n& & +\\sum\\limits_{\\mathbf{k}\\leq\\mathbf{m},\\mathbf{k}\\nleqslant\n\\mathbf{m}-\\mathbf{1}}\\int\\limits_{0}^{\\delta}(\\lambda e^{-(c+\\lambda)t}%\n{\\textstyle\\int\\limits_{\\mathbf{0}\\vee\\left( g^{\\delta}\\left( \\mathbf{m}%\n-\\mathbf{k}\\right) +t\\mathbf{p}\\right) }^{g^{\\delta}\\left( \\mathbf{m}%\n-\\mathbf{k}\\right) +t\\mathbf{p}}}\n\\mathbf{a}\\cdot(g^{\\delta}\\left( \\mathbf{m}-\\mathbf{k}\\right) +t\\mathbf{p}%\n-\\mathbf{\\alpha})dF(\\mathbf{\\alpha}))dt\\\\\n& & -\\int\\limits_{0}^{\\delta}e^{-(c+\\lambda)t}\\mathcal{R}(g^{\\delta\n}(\\mathbf{m})+t\\mathbf{p})dt.\n\\end{array}\n\\]\n\n\\end{lemma}\n\n\\textit{\\ }Given 
$\\mathbf{m}\\in\\mathbf{N}_{0}^{n}$, $\\mathbf{\\alpha}%\n\\in\\mathbf{R}_{+}^{n}$ and $0$ $2\\delta;$\n\n\\item or by $\\mathbf{E}_{0}^{\\delta},\\mathbf{E}_{0}^{\\delta}\\mathbf{,}$ and a\npossible combination of $\\mathbf{E}_{i}^{\\delta\\prime}s$, if it arrives at\ntime $\\tau\\in$ $(\\delta$,$2\\delta]$, so the surplus goes to the nearest\nsmaller point in $\\mathcal{G}^{2\\delta}$;\n\n\\item or by $\\mathbf{E}_{0}^{\\delta}$\\textbf{, }and a possible combination of\n$\\mathbf{E}_{i}^{\\delta\\prime}s$, if it arrives at time $\\tau\\leq$ $\\delta$,\nso again the surplus goes to the nearest smaller point in $\\mathcal{G}%\n^{2\\delta}$.\n\nSo we have the result. $\\blacksquare$\n\\end{itemize}\n\n\\textit{Proof of Lemma \\ref{Lipschitz Inequality u barra}}. Let us first prove\nthat\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\left\\vert V^{\\delta_{k}}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{x})\\right\\vert \\\\\n\\leq\\frac{2}{\\hat{p}}V^{\\delta_{k}}(\\left\\langle \\mathbf{x}\\vee\\mathbf{y}%\n\\right\\rangle ^{\\delta_{k}})(\\frac{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}%\n})\\left\\Vert \\left\\langle \\mathbf{y}\\right\\rangle ^{\\delta_{k}}-\\left\\langle\n\\mathbf{x}\\right\\rangle ^{\\delta_{k}}\\right\\Vert _{1}+2\\delta_{k}%\n\\mathbf{a}\\cdot\\mathbf{p},\n\\end{array}\n\\label{Lipschitz V delta}%\n\\end{equation}\nfor any $\\mathbf{x}$ and $\\mathbf{y}$ in $\\mathbf{R}_{+}^{n}$. Let us assume\nfirst that $\\mathbf{y}>\\mathbf{x}$. We have from Lemma\n\\ref{v_delta es lipschitz y monotona en el reticulado},%\n\\[\nV^{\\delta_{k}}(g^{\\delta_{k}}\\left( \\mathbf{m}+\\mathbf{e}_{i}\\right)\n)-V^{\\delta_{k}}(g^{\\delta_{k}}(\\mathbf{m}))\\leq V^{\\delta_{k}}(g^{\\delta_{k}%\n}\\left( \\mathbf{m}+\\mathbf{1}\\right) )-V^{\\delta_{k}}(g^{\\delta_{k}%\n}(\\mathbf{m}))\\leq V^{\\delta_{k}}(g^{\\delta_{k}}(\\mathbf{m}))(e^{(c+\\lambda\n)\\delta_{k}}-1).\n\\]\nLet us call $\\mathbf{m}_{\\mathbf{y}}=\\rho^{\\delta_{k}}(\\mathbf{y})$ and\n$\\mathbf{m}_{\\mathbf{x}}=\\rho^{\\delta_{k}}(\\mathbf{x})$. 
Then,%\n\n\\[%\n\\begin{array}\n[c]{lll}%\nV^{\\delta_{k}}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{x}) & \\leq & V^{\\delta_{k}%\n}(g^{\\delta_{k}}(\\mathbf{m}_{\\mathbf{y}}))-V^{\\delta_{k}}(g^{\\delta_{k}%\n}\\left( \\mathbf{m}_{\\mathbf{x}}\\right) )+\\mathbf{a}\\cdot(\\mathbf{y}%\n-g^{\\delta_{k}}(\\mathbf{m}_{\\mathbf{y}}))\\\\\n& \\leq & (\\frac{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}})V^{\\delta_{k}%\n}(\\mathbf{y})\\sum_{i=1}^{n}\\frac{g_{i}^{\\delta_{k}}\\left( \\mathbf{m}%\n_{\\mathbf{y}}-\\mathbf{m}_{\\mathbf{x}}\\right) }{p_{i}}+\\delta_{k}%\n\\mathbf{a}\\cdot\\mathbf{p}\\\\\n& \\leq & \\left( \\frac{e^{(c+\\lambda)\\delta_{k}}-1}{\\hat{p}\\delta_{k}}\\right)\nV^{\\delta_{k}}(\\mathbf{y})\\left\\Vert g^{\\delta_{k}}\\left( \\mathbf{m}%\n_{\\mathbf{y}}-\\mathbf{m}_{\\mathbf{x}}\\right) \\right\\Vert _{1}+\\delta\n_{k}\\mathbf{a}\\cdot\\mathbf{p}.\n\\end{array}\n\\]\nLet us consider now $\\mathbf{x}$ and $\\mathbf{y}$ in $\\mathbf{R}_{+}^{n}$,\nconsider \\ $\\mathbf{m}_{0}=\\rho^{\\delta_{k}}(\\mathbf{x}\\wedge\\mathbf{y})$,\n\\[%\n\\begin{array}\n[c]{l}%\n\\left\\vert V^{\\delta_{k}}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{x})\\right\\vert \\\\%\n\\begin{array}\n[c]{ll}%\n\\leq & V^{\\delta_{k}}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{x}\\wedge\n\\mathbf{y})+V^{\\delta_{k}}(\\mathbf{x})-V^{\\delta_{k}}(\\mathbf{x}%\n\\wedge\\mathbf{y})\\\\\n\\leq & \\frac{1}{\\hat{p}}V^{\\delta_{k}}(\\mathbf{x}\\vee\\mathbf{y})(\\frac\n{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}})\\left( \\left\\Vert g^{\\delta_{k}%\n}\\left( \\mathbf{m}_{\\mathbf{y}}-\\mathbf{m}_{0}\\right) \\right\\Vert\n_{1}+\\left\\Vert g^{\\delta_{k}}\\left( \\mathbf{m}_{\\mathbf{x}}-\\mathbf{m}%\n_{0}\\right) \\right\\Vert _{1}\\right) +2\\delta_{k}\\mathbf{a}\\cdot\\mathbf{p}\\\\\n\\leq & \\frac{2}{\\hat{p}}V^{\\delta_{k}}(\\mathbf{x}\\vee\\mathbf{y})(\\frac\n{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}})\\left\\Vert g^{\\delta_{k}}\\left(\n\\mathbf{m}_{\\mathbf{y}}-\\mathbf{m}_{\\mathbf{x}}\\right) \\right\\Vert\n_{1}+2\\delta_{k}\\mathbf{a}\\cdot p.\n\\end{array}\n\\end{array}\n\\]\nTherefore we have (\\ref{Lipschitz V delta}).\n\nBy definitions (\\ref{Definicion Vdelta}) and (\\ref{ubarra como limite}), and\nsince $T_{i}\\left( v^{\\delta_{k}}\\right) \\leq v^{\\delta_{k}}$,\n\\[%\n\\begin{array}\n[c]{lll}%\n\\overline{V}(\\mathbf{y})-\\overline{V}(\\mathbf{x}) & \\geq & \\overline\n{V}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{y})+\\mathbf{a}\\cdot g^{\\delta_{k}%\n}\\left( \\rho^{\\delta_{k}}(\\mathbf{y})-\\rho^{\\delta_{k}}(\\mathbf{x})\\right) \\\\\n& & +\\mathbf{a}\\cdot(\\mathbf{y}-g^{\\delta_{k}}(\\rho^{\\delta_{k}}%\n(\\mathbf{y})-\\rho^{\\delta_{k}}(\\mathbf{x}))+\\mathbf{x})+V^{\\delta_{k}%\n}(\\mathbf{x})-\\overline{V}(\\mathbf{x});\n\\end{array}\n\\]\ntaking the limit as $k$ goes to infinity, we obtain the first inequality of\nthe Lipschitz inequality.\n\nWe can write, from (\\ref{Lipschitz V delta}),\n\\[%\n\\begin{array}\n[c]{lll}%\n\\overline{V}(\\mathbf{y})-\\overline{V}(\\mathbf{x}) & = & \\overline\n{V}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{y})+V^{\\delta_{k}}(\\mathbf{y}%\n)-V^{\\delta_{k}}(\\mathbf{x})+V^{\\delta_{k}}(\\mathbf{x})-\\overline\n{V}(\\mathbf{x})\\\\\n& \\leq & \\overline{V}(\\mathbf{y})-V^{\\delta_{k}}(\\mathbf{y})+\\frac{2}{\\hat{p}%\n}\\overline{V}(\\mathbf{y})(\\frac{e^{(c+\\lambda)\\delta_{k}}-1}{\\delta_{k}%\n})\\left\\Vert g^{\\delta_{k}}\\left( \\rho^{\\delta_{k}}(\\mathbf{y})-\\rho\n^{\\delta_{k}}(\\mathbf{x})\\right) \\right\\Vert _{1}\\\\\n& & 
+2\\delta_{k}\\mathbf{a}\\cdot\\mathbf{p}+V^{\\delta_{k}}(\\mathbf{x})-\\overline{V}(\\mathbf{x});\n\\end{array}\n\\]\ntaking the limit as $k$ goes to infinity, we obtain the second inequality of the Lipschitz inequality.$~\\blacksquare$\n
=\\endtrivlist\n\n\\@namedef{tabulax*}{\\@verbatim\\@stabulaxverbatim\nYou are using a type of \"tabular*\" construct that is only allowed in AmS-LaTeX.}\n\\expandafter\\let\\csname endtabulax*\\endcsname =\\endtrivlist\n\n\n\n \\def\\endequation{%\n \\ifmmode\\ifinner\n \\iftag@\n \\addtocounter{equation}{-1}\n $\\hfil\n \\displaywidth\\linewidth\\@taggnum\\egroup \\endtrivlist\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@false\n \\global\\@ignoretrue \n \\else\n $\\hfil\n \\displaywidth\\linewidth\\@eqnnum\\egroup \\endtrivlist\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@false\n \\global\\@ignoretrue \n \\fi\n \\else \n \\iftag@\n \\addtocounter{equation}{-1}\n \\eqno \\hbox{\\@taggnum}\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@false%\n $$\\global\\@ignoretrue\n \\else\n \\eqno \\hbox{\\@eqnnum\n $$\\global\\@ignoretrue\n \\fi\n \\fi\\fi\n } \n\n \\newif\\iftag@ \\@ifnextchar*{\\@tagstar}{\\@tag}@false\n \n \\def\\@ifnextchar*{\\@TCItagstar}{\\@TCItag}{\\@ifnextchar*{\\@TCItagstar}{\\@TCItag}}\n \\def\\@TCItag#1{%\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@true\n \\global\\def\\@taggnum{(#1)}}\n \\def\\@TCItagstar*#1{%\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@true\n \\global\\def\\@taggnum{#1}}\n\n \\@ifundefined{tag}{\n \\def\\@ifnextchar*{\\@tagstar}{\\@tag}{\\@ifnextchar*{\\@tagstar}{\\@tag}}\n \\def\\@tag#1{%\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@true\n \\global\\def\\@taggnum{(#1)}}\n \\def\\@tagstar*#1{%\n \\global\\@ifnextchar*{\\@tagstar}{\\@tag}@true\n \\global\\def\\@taggnum{#1}}\n }{}\n\n\n\n\\makeatother\n\\endinput\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\hspace{\\parindent}The Squamous cell carcinoma (SCC) is one of the most common oral cavity cancers among men. Survival rates for this cancer are low \\cite{Nag2018_Analysis,Boockwho}. Thus, early diagnosis is essential for survival and for determining the most appropriate treatment. Epithelial Dysplasia (ED) is a very frequent tissue alteration in lesions that precede oral cancer. Its cause may be associated with etiological factors, mainly with the consumption of tabacco and alcohol \\cite{introd1}.\n\nThe standard ED diagnosis is made by pathologists through analysis of histopathological slides in search of changes in the epithelium. These alterations may be architectural, such as basilar hyperplasia, droplet-shaped epithelial projections, increased number of mitoses or loss of cellular cohesion. They may also be cellular changes, such as enlarged nuclei, enlarged cells, increased nuclear-cytoplasmic ratio, or atypical mitotic figures \\cite{DE_conceito}.\n\nThe most widely employed criteria for grading oral ED are those defined in the World Health Organization (WHO) classification system, which considers the presence of certain architectural and cytological features. The more prominent and numerous these features are identified in the histopathological image, the more severe the grade of dysplasia~\\cite{Warnakulasuriya2008_Oral,Boockwho}. However, the assessment of dysplasia is subjective and strongly dependent on the personal experience of the pathologist~\\cite{embalo2021evaluation}, and the diagnosis variability is well documented~\\cite{Bouquot2006_Epithelial,Kujan2006_Evaluation,Warnakulasuriya2008_Oral,mahmood2021artificial}.\n\nThe area of computer-aided diagnosis has been growing over the past decades. Histopathological images are analyzed using quantitative measures of image characteristics. 
These measures are employed either by image processing algorithms or interpreted by machine learning systems to yield a suggested diagnosis. Such automated diagnostic tools can be an important aid to pathologists, enabling the creation of large-scale decision support systems to identify potentially malignant lesions~\\cite{pallua2020future,mahmood2021artificial}. Some recent works have proposed automated diagnostic solutions to assist in the evaluation of histopathological images for the detection of epithelial dysplasia~\\cite{artgo2,art1,art2,gupta2019tissue,Adel2018,Sami2009_ComputerAided,prabavathy2021analysis,baik2014automated,krishnan2009automated}.\n\nIn a previous work, texture characteristics of the epithelium have been used~\\cite{artgo2} to classify histopathological images into normal epithelia and oral sub-mucous fibrosis (OSF) with or without dysplasia. In this article, textural characteristics of the epithelium were extracted using higher order spectra (HOS), local binary pattern (LBP), and laws texture energy (LTE). Five different classifiers were employed: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN). The results of this work indicate that combination of texture and HOS features coupled with a fuzzy classifier resulted in $95.7\\%$ accuracy. However, it is noted in the article that images need considerable processing to be used in the proposed classification method. In addition, the training and testing (evaluation) of the classifiers was performed using stratified 3-fold cross-validation, thus separating the data set into only three subsets, which may favor the occurrence of strong correlation between the training set and the test set. This evaluation approach leaves the performance results obtained ($95.7 \\%$ accuracy) open to questioning.\n\nThe Block Intensity Code Comparison (BICC) was used in another study~\\cite{art1} to extract the characteristics of the epithelium. Each image was divided into blocks and the intensity of each block was calculated. Blocks of sizes $5\\times5$, $10\\times10$ and $15\\times15$ have been tested. The classifier used was a Radial Basis Function Neural Network (RBFNN) with a single hidden layer, and Gaussian activation function, classifying epithelia into normal or dysplastic. Different values were tested for the centers (means) of the Gaussians. These centers were located using k-Means, and $k=6$ led to the best result. The article fails to clearly explain how the final classification was made.\n \nA convolutional neural network (CNN) has also been used~\\cite{art2} to extract characteristics of histological images of the uterine cervix epithelium. The authors classify the epithelium in four classes: normal, mild dysplasia, moderate dysplasia and severe dysplasia. Each of the $66$ images available was subdivided into $10$ vertical segments. The images had to be correctly positioned so that each segment had the three layers of the epithelium. Each of these segments was subdivided into three parts of the same size (top, middle, bottom). Finally, $32\\times32$ pixel patches were extracted from each part of each segment using non-overlapping windows, which generated a total of $75,000$ samples of size $32\\times32$. Three CNNs were implemented, each of which classified the samples of each of the 3 parts into one of the 4 classes. 
These networks extract the characteristics of the samples and provide an initial classification. The authors did not report the number of samples used to train and test the networks. The extracted characteristics fed 5 different classifiers (SVM, LDA, MLP, logistic regression and random forest), which merged the characteristics of the 3 parts, obtained from the CNNs, and classified the entire epithelium image. The best performance ($77.25\\%$ accuracy) was obtained with logistic regression and random forest, using leave-one-out cross-validation performed only once to classify the 66 images.\n\nAlso using a CNN, Gupta et al.~\\cite{gupta2019tissue} classified epithelial images into 4 classes (normal, mild dysplasia, moderate dysplasia or severe dysplasia). The CNN received epithelial images as inputs. The classifier was trained using off-the-shelf packages. No details were given regarding the characteristics of the network or the training algorithm. There were 2688 images taken from 672 tissue images of 52 patients, with approximately the same number of images belonging to each of the four classes. The CNN training was performed from scratch using 70\\% of the available samples. The remaining 30\\% were used for testing. The CNN trained over 75 epochs presented an accuracy of $89.3\\%$ for the test set.\n\nClassification based on extracted values of the 16 WHO-defined features was proposed in another study~\\cite{Adel2018}. No detail was provided on how the extraction was performed. An SVM and a K-nearest-neighbor classifier were tested using the different feature sets. The reported results were based on a single classification run using 46 images, of which 32 were used for training and 14 for testing in the SVM implementation. The accuracy results varied from 71.4\\% to 78.6\\% in most implementations, and reached 92.8\\% only for the SVM classifier operating on the features extracted by the Oriented FAST and Rotated BRIEF (ORB) algorithm.\n\nClassification based on the similarity of neighboring rete ridges was proposed~\\cite{Sami2009_ComputerAided}. The epithelium was classified as normal, dysplastic or carcinoma in-situ (CIS). The method was based on comparing the drop-shaped similarity level between the best matching pair of neighboring rete ridges. A contour extraction method was proposed, and the roundness of the extracted twin contours was quantified. Clustering of the three classes was based on the roundness absolute values and differences. The method was illustrated on a set of 17 images, and no statistical evaluation was presented regarding the accuracy of the proposed classification.\n\nIn a recent publication~\\cite{prabavathy2021analysis}, two feature extraction techniques, namely the histogram of oriented gradients (HOG) and the local binary pattern (LBP), were applied to discrete-wavelet-transformed $512\\times512$ epithelial images. A three-layer back-propagation neural network (BPNN) classified the images into normal or dysplastic epithelium. The best results were obtained using the HOG features, yielding an accuracy of $85\\%$. The article does not detail the methodology used for the classification.\n\nA semi-automatic algorithm has also been proposed~\\cite{baik2014automated} to predict the progression of oral premalignant lesions (OPL) to invasive squamous cell carcinoma (SCC). 
The authors initially used two Random Forests, one to segment the nuclei of the histopathological images and the other to classify the nuclei found into normal and abnormal (cancerous). After the identification of the nuclei, a Nucleus Phenotype Score (NPS) was calculated based on the voting score that each nucleus received from the Random Forests classifier. Based on the average NPS of all image cores, an automated Tissue Nuclear Phenotype Score (aNPS) was assigned in order to identify OPLs with a high risk of progression. During the prediction of the progression of $71$ lesions in the test set, a $78\\%$ sensitivity and a $71\\%$ specificity were obtained.\n\nSegmentation and classification of sub-epithelial connective tissue cells into normal cells or cells with oral submucous fibrosis (OSF) was proposed~\\cite{krishnan2009automated}. Multilevel thresholding was used in the segmentation. An SVM-based classifier was used to classify the cells according to their geometric shape features (eccentricity and compactness). The classifier presented an accuracy of $88.69\\%$ for the test set.\n\nOne problem common to these previous proposals is that they rely almost entirely on the information provided by the collected data. Despite the large popularity of machine learning (ML) algorithms, it is known that they tend to be data inefficient and frequently generalize poorly to unseen cases. This limitation is especially relevant in medical applications, where the amount of available data is usually limited. Most ML algorithms that process raw data lead to classifications based on the correlation structure of the data presented to them during training, and the success of this approach depends on a huge amount of data. Moreover, the diagnosis quality is also frequently dependent on factors other than the data correlation, such as the amount and type of noise present in the data, the quality of data acquisition, and the amount of useful information embedded in the data. For instance, some of the solutions just discussed extract the characteristics of the epithelium by analyzing the entire image, which increases the possibility of using information that may hamper the classification process. This also tends to lead to a lack of consensus on the most relevant characteristics for classification. Another consideration usually made~\\cite{Kujan2006_Evaluation,embalo2021evaluation} is that a binary classification system tends to be more helpful to the clinician for making critical clinical decisions in cases of high-risk epithelial dysplasia than the WHO three-level classification.\n\nA more sensible approach is to complement the information provided by the raw data with the knowledge of experts in the application field. This tends to lead to more robust performances using simpler algorithms and less data. This work is a contribution in that direction. We combine the knowledge of the pathologist with the information provided by the data to generate a method for detecting oral dysplasia that performs well on a smaller amount of data and is more robust to changes in the statistical characteristics of the data. 
We use the expert's knowledge to define the information to be delivered to an ML algorithm that aims at classifying histopathological images into dysplastic and non-dysplastic epithelia.\n\n\n\n\n\n\n\\section{Materials and methods}\n\n\\hspace{\\parindent}In this section, we describe the database employed and the methods utilized.\n\n\\subsection{Database and sample preparation}\n\n\\hspace{\\parindent}The database consists of histologic images obtained from the Biobank Archives of the Oral Pathology Laboratory of UFSC (CONEP B-$051$; Process No.~$25000.237810$\/$2014$-$54$). The research project was approved by the UFSC Human Beings Research Ethics Committee (Platform Brasil under number CAAE $15025319.3.0000.0121$).\n\nThe images used were from histologic slides stained with hematoxylin and eosin. These slides were photographed with an original magnification of $200$x, using a digital camera coupled to an optical microscope. After being scanned, the images were duly anonymized to be used in this research.\n\nThe data set corresponded to $73$ cases, totaling $172$ images. Of these, $36$ cases ($88$ images) were from oral potentially malignant disorders with dysplastic epithelia and $37$ cases ($84$ images) were from fibrous hyperplasia with non-dysplastic epithelium. \n\nAs evidenced in the Introduction, there seems to be no consensus yet regarding a good set of characteristics to be used for classification of dysplastic epithelia. Hence, we decided to use the image cutouts themselves as the information to be delivered to the classifier.\n\nTo take advantage of the existing knowledge, we have chosen to extract cutouts from the image region containing the main visual characteristics used by the pathologist for such evaluations. The cutouts were extracted from regions located close to the border that separates the epithelial tissue from the connective tissue, as illustrated in Figure \\ref{img_regRecorte}. The reasoning for this choice was that the lower third of a dysplastic epithelium image will always present detectable changes, regardless of the dysplasia degree (mild, moderate or severe). Through this informed choice, the use of a small image region simplifies the processing without harming the classification potential.\n\n \\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figuras\/regioesDemarcadas_39_2}\n\\caption{Example of $7$ selected cutouts on the epithelium image.}\n\\label{img_regRecorte}\n\\end{figure}\n\nThe definitions of location and size of each cutout were influenced by a combination of factors:\n\\begin{itemize}\n\t\\item[a)] The use of small regions simplifies the data processing.\n\t\\item[b)] The cutout image should contain enough information for a classification by the pathologist.\n\t\\item[c)] A basic classifier should yield a reasonably low classification error using the cutouts. \n\\end{itemize}\n\nBased on these criteria, we have defined cutouts of arbitrary dimensions, at the frontier between the epithelial and connective tissues, with most of its area in the epithelial region, as shown in Figure~\\ref{img_regRecorte}. \n\n\nFor processing by the classifier, each cutout image was converted from RGB to grayscale, reducing by $2\/3$ the amount of data to be processed. Then, the cutout image values were normalized to increase robustness to intensity variations due to lighting effects in different acquisitions. Pixel intensities of each cutout were normalized to be in the range $[0,1]$. 
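The grayscale conversion and the $[0,1]$ normalization just described can be summarized by the following minimal MATLAB sketch. The variable names and the synthetic cutout are illustrative assumptions of this sketch, not the exact code used in this work.
\\begin{verbatim}
% Illustrative preprocessing of a single cutout (assumed 256x256 RGB, uint8).
% A synthetic cutout is generated only to keep the sketch runnable;
% a real cutout image would be used instead.
cutoutRGB  = uint8(randi([0 255], 256, 256, 3));

cutoutGray = double(rgb2gray(cutoutRGB));     % RGB -> grayscale (1/3 of the data)

% Normalize the pixel intensities to the range [0,1] to increase
% robustness to lighting/acquisition differences.
cutoutNorm = (cutoutGray - min(cutoutGray(:))) / ...
             (max(cutoutGray(:)) - min(cutoutGray(:)));
\\end{verbatim}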
Finally, each cutout used in the classifier training stage was rotated three times by $90^\\circ$, yielding four images for each cutout. Rotations by non-integer multiples of $90^\\circ$ were not considered, to avoid the need for further processing at the corners of the rotated cutouts. The use of four rotated positions was considered sufficient to provide the necessary robustness to image rotation.\n\n\\subsection{Methodologies}\n\n\\hspace{\\parindent}This section describes the methods employed in the different steps of the classification task.\n\nThis project was implemented in Matlab$^\\copyright$ (The MathWorks, Inc.). A personal computer with the Windows $10$ operating system, a $1.8$ GHz Intel Core $i5$ processor and $8$ GBytes of RAM was employed.\n\n\\subsubsection{Definition of the classifier input}\\label{sec:phase1}\n\n\\hspace{\\parindent}The classifiers were implemented using feed-forward multi-layer neural networks. To define the structures of these networks it was necessary to choose, besides the size of the input cutouts, the number of hidden layers, the number of neurons in each hidden layer, the cost function to be used, and other parameters depending on the previous choices. Different structures were considered and evaluated in the training stage. This stage was subdivided into two phases.\n\nIn the first phase we implemented very simple networks with only one hidden layer. The objective of this phase was a first evaluation of the amount of information required for classification purposes, which would help to define the size of the input cutout.\n\nThe networks were tested with $16,384$ and $65,536$ input neurons. Cutout dimensions were, respectively, $128\\times128$ pixels and $256\\times256$ pixels. In addition, the number of neurons in the hidden layer was varied in the set $\\{20, 50, 100\\}$. The cost function was the Mean Squared Error (MSE), given that the data had no outliers and the objective was an initial evaluation of the classification performance as a function of cutout size. The performance was evaluated based on the classification of each cutout individually, independently of the case to which it belonged. For the training, 1440 cutouts (720 with dysplasia and 720 without dysplasia) were randomly selected from the 1840 cutouts available. For testing, 160 randomly selected cutouts had been previously set aside (80 with and 80 without dysplasia). From each of these sets of 80 cutouts, 50 were randomly selected. A total of 100 realizations, comprising training and test, were carried out to evaluate the performance. The results of the 100 realizations were averaged to determine the confusion matrix, specificity, sensitivity and average accuracy rate of the classifications. Based on these results, it was verified that the structures with $65,536$ input neurons yielded better performance on average. Thus, all the structures implemented in the second phase had this same number of neurons in the input layer.\n\nThe second phase of the design is described next.\n\n\\subsubsection{Definition of the classifier structure}\\label{sec:2phase}\n\n\\hspace{\\parindent}The second design phase aimed at improving the performance of the classifier, given an input layer with $65,536$ neurons. We initially used networks with two hidden layers. The number of neurons per layer was varied within the set $\\{20, 50, 100, 150\\}$. The cost functions tested were the MSE and the cross-entropy. 
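As an illustration of the structures considered in this second phase, the following sketch shows how one such network (two hidden layers with 150 neurons each and the cross-entropy cost) could be instantiated with MATLAB's \\texttt{patternnet}; the variable names, the dummy data and the reduced number of epochs are assumptions of this sketch, not the configuration actually used in this work.
\\begin{verbatim}
% Illustrative definition of one phase-2 structure, assuming the
% Deep Learning Toolbox is available.
hiddenSizes = [150 150];                          % two hidden layers
net = patternnet(hiddenSizes, 'trainscg', 'crossentropy');
net.layers{1}.transferFcn = 'tansig';             % Tanh in the hidden layers
net.layers{2}.transferFcn = 'tansig';             % (softmax output by default)
net.trainParam.epochs = 5;                        % kept small for this sketch

% Dummy data keeps the sketch runnable; the real inputs are the
% vectorized 256x256 cutouts (65,536 inputs) with one-hot class labels.
X = rand(1024, 40);                               % reduced dimension here
T = full(ind2vec(randi(2, 1, 40), 2));
net = train(net, X, T);
\\end{verbatim}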
Another difference from the first phase was that the data unit considered was cases rather than individual cutouts. As we had data for 42 cases, at each realization the classifier was trained with the cutouts corresponding to 41 cases and tested with the cutouts of the remaining case, thus using leave-one-out cross-validation. As the number of cutouts available was not exactly the same for all cases, care was taken to maintain the same number of dysplastic and non-dysplastic cutouts during training. The classification decision (dysplastic or non-dysplastic) was made by majority vote of the cutout classifications for each case. The confusion matrices, sensitivities, specificities, accuracy, and a novel figure of merit $D$, to be defined later, were evaluated for each of the cases after testing.\n\nFor statistical evaluation purposes, the complete process described above was repeated 50 times with randomly selected network initializations. Then, the confusion matrices, sensitivities, specificities, accuracy, and the figure of merit $D$ were averaged over the 50 realizations for each of the 42 cases.\n\nAfter evaluating the performance of the structures with two hidden layers, those that yielded the best accuracy results had their number of hidden layers increased, to verify whether this increase would improve the average accuracy for the 42 cases under test. For each structure, the number of hidden layers was incremented until a drop in classification performance was observed. The training and test were performed exactly as done for the two-layer structures.\n\nIn both design phases, the classifiers were trained using the scaled conjugate gradient algorithm. The stop criterion was early stopping, in which the training was stopped when any of the following conditions occurred:\n\n\\begin{itemize}\n\t\\item[a)] The maximum number of $1000$ iterations was reached.\n\t\\item[b)] The error was equal to zero.\n\t\\item[c)] The gradient was less than or equal to $10^{-6}$.\n\t\\item[d)] The validation set error increased for $6$ consecutive epochs.\n\\end{itemize}\n\nFor each training performed, the cutouts in the training set were randomly subdivided into three subsets: $70\\%$ for training, $15\\%$ for validation and $15\\%$ for testing. However, the test subset was not used at this stage.\n\nThe structures of the trained networks used the SoftMax activation function for the output layer and the hyperbolic tangent function (Tanh) for the hidden layers.\n\n\\subsubsection{Decision rule}\n\n\\hspace{\\parindent}All neural networks were designed to classify individual cutouts, and not the epithelium image as a whole. First, we classify all the cutouts, assigning each one to class $\\omega_1$ (dysplastic) or class $\\omega_2$ (non-dysplastic). This is done using the following Bayes risk rule:\n\\begin{equation} \\label{eq:DecisionRule}\n\\text{Assign $\\bs{x}$ to $\\omega_1$ ($\\omega_2$) if: } \\lambda_{21} P(\\omega_1|\\bs{x}) >(<) \\lambda_{12} P(\\omega_2|\\bs{x}),\n\\end{equation}\nin which $\\bs{x}$ is the input vector (cutout image); $P(\\omega_i|\\bs{x})$ is the posterior probability that $\\bs{x}$ belongs to class $\\omega_i$, given the observed cutout image (as estimated by the neural network); and $\\lambda_{ij}$ is the loss associated with assigning a cutout to class $\\omega_i$ when it actually belongs to class $\\omega_j$.\n\nTwo different rules were considered to provide the diagnosis for each cutout. 
The first one employs $\\lambda_{12} = \\lambda_{21}=1$. The second one considers that the loss of having a false negative (erroneously classifying a cutout as non-dysplastic) is twice as large as the loss associated with a false positive. In this case, $\\lambda_{12} = 1$ and $\\lambda_{21}=2$. The decision rule is then to classify $\\bs{x}$ in class $\\omega_1$ if\n\\begin{equation} \\label{eq1}\n\\cfrac{P({\\omega}_{1}|\\bs{x})}{P({\\omega}_{2}|\\bs{x})} \\geq \\cfrac{\\lambda_{12}}{\\lambda_{21}}.\n\\end{equation}\nThe final diagnosis for the case is based on the majority of the classifications obtained for all the patient cutouts.\n\n\\subsubsection{Performance metrics}\n\n\\hspace{\\parindent}Different metrics were used to evaluate the performance of the classifiers. The main metric was the confusion matrix, which reports the number of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). Using these results we evaluated the sensitivity ($S_{\\rm e}$) and the specificity ($S_{\\rm p}$) of the classifier, given respectively by\n\\begin{equation} \\label{eq2}\nS_{\\rm e}= \\cfrac{TP}{TP+FN},\n\\end{equation}\n\\begin{equation} \\label{eq3}\nS_{\\rm p}= \\cfrac{TN}{TN+FP}.\n\\end{equation}\nAnother metric employed was the accuracy, evaluated as\n\\begin{equation} \\label{eq4}\nA_{\\rm cc}= \\cfrac{TP+TN}{N},\n\\end{equation}\nwhere $N$ stands for the total number of classified samples.\n\nFinally, we propose in this study a new figure of merit $D$ to evaluate the deviation from an ideal classifier performance. Figure~\\ref{img_figMeritD} shows a graphical interpretation of $D$. The horizontal axis shows the Positive Predictive Value (PPV $=TP/(TP+FP)$), and the vertical axis shows the Negative Predictive Value (NPV $=TN/(TN+FN)$) in the training set. Vector $\\boldsymbol{R}_{\\rm ef}$ represents the composite relative accuracy. For the ideal classifier (gold standard), $\\boldsymbol{R}_{\\rm ef}=\\boldsymbol{R}_{{\\rm ef}_{0}}$ has magnitude $\\sqrt{2}$ and $45^{\\circ}$ phase. Deviations from this ideal situation represent a loss in classification performance.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{figuras/grafico_new.png}\n\\caption{Two-dimensional descriptive space of the figure of merit $D$. In red: the best classification scenario. In blue: example of a classifier performance.}\n\\label{img_figMeritD}\n\\end{figure}\n\nTo this end, we define $D$ as follows:\n\\begin{equation} \\label{eq5}\nD= \\cfrac{|{D}_{1}|+|{D}_{2}|}{2},\n\\end{equation}\n\\begin{equation} \\label{eq6}\n{D}_{1}= \\cfrac{\\sqrt{{PPV}^2 + {NPV}^2}}{\\sqrt{2}},\n\\end{equation}\n\\begin{equation} \\label{eq7}\n{D}_{2}= \\cos{(45^{\\circ} - \\theta)},\n\\end{equation}\n\\begin{equation} \\label{eq8}\n\\theta = \\arctan \\frac{NPV}{PPV}.\n\\end{equation}\nIn \\eqref{eq5}--\\eqref{eq8}, ${D}_{1}$ corresponds to the magnitude of vector $\\boldsymbol{R}_{\\rm ef}$ relative to $\\sqrt{2}$, and ${D}_{2}$ corresponds to the cosine of the angular deviation from the ideal $45^{\\circ}$ reference. Hence, $0 \\le D \\le 1$, and the larger the value of $D$, the better the classifier performance. One particularly interesting aspect of the performance criterion $D$ is that it provides an objective way to rank the tested network structures for relative performance. 
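A worked numerical sketch of the cutout decision rule \\eqref{eq1}, the case-level majority vote and the figure of merit $D$ defined in \\eqref{eq5}--\\eqref{eq8} is given below; the posterior probabilities and the PPV and NPV values are made-up numbers used only for illustration.
\\begin{verbatim}
% Cutout-level decision and case-level majority vote (illustrative values).
lambda12 = 1;  lambda21 = 2;                 % conservative loss assignment
P1 = [0.62 0.35 0.48 0.71 0.29 0.55 0.41];   % assumed P(w1|x) for 7 cutouts
P2 = 1 - P1;                                 % P(w2|x)
isDysplastic     = (P1 ./ P2) >= (lambda12 / lambda21);
caseIsDysplastic = sum(isDysplastic) > numel(isDysplastic) / 2;

% Figure of merit D, following its defining equations in the text.
PPV = 0.85;  NPV = 0.90;                     % assumed values for illustration
D1    = sqrt(PPV^2 + NPV^2) / sqrt(2);       % relative magnitude of R_ef
theta = atand(NPV / PPV);                    % angle of R_ef (degrees)
D2    = cosd(45 - theta);                    % cosine of the angular deviation
D     = (abs(D1) + abs(D2)) / 2;             % 0 <= D <= 1
\\end{verbatim}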
Criterion $D$ is used in the next design phase to select the best trained networks.\n\n\\subsection{Definition of the best designs}\\label{sec:avaDesem}\n\n\\hspace{\\parindent}At this design stage we had already defined the input dimension (phase 1) and a set of network structures leading to the best performance results, as measured by the average accuracy obtained in design phase 2. The next stage was a statistical performance evaluation, in which we performed 100 training-and-test realizations, with random initialization, for each structure defined in phase 2, in search of the best training for each of them. In each realization, the networks were trained with all the cutouts of the 42 cases used in the first and second phases. The testing was performed using cutouts of 31 new cases that had not been used in any previous training or test. The difference between the performances obtained at each of the 100 realizations was due to the different initializations of the network coefficients. For each of the network structures, the coefficients resulting from the training realization that yielded the highest value of criterion $D$ (out of the 100 realizations) for the test set were stored. After these steps, the trained networks, with fixed structures and coefficients, were used for the classifications, and the results were compared with those of other classifiers.\n\n\\subsection{Performance comparisons}\n\n\\hspace{\\parindent}At this last stage of the project, the performances of the trained networks in the classification of the $31$ test cases were compared with the performances obtained for the same cases by a convolutional neural network adapted by transfer learning, and by trained evaluators (oral pathologists).\n\nThe pre-trained CNN used in our comparison was the Resnet-18 network~\\cite{MathworksResnet}. Some adaptation of the original code was necessary in order to replace the fully connected layer and the classification layer, so that the resulting network output had only two classes, as required by the application. In addition, all cutouts were converted to RGB, normalized, and resized from $256\\times256$ to $224\\times224$ pixels, to be compatible with the input dimensions of the pre-trained network.\n\nThe main CNN training options used were the following:\n\n\\begin{itemize}\n\t\\item[a)] Convolutional layer learning rate equal to $10^{-4}$. It was chosen very low so that the filter coefficients and the previously trained weights were not lost.\n\t\\item[b)] Mini-batch size equal to $10$.\n\t\\item[c)] Fully connected layer learning rate factor equal to 10. It was chosen high so that the learning for these layers was faster than for the convolutional layers.\n\t\\item[d)] Maximum number of epochs equal to 8.\n\\end{itemize}\n\nThe training and performance evaluation of the CNN were carried out using the same methodologies described in Sections \\ref{sec:2phase} and \\ref{sec:avaDesem}.\n\nFinally, to evaluate the usefulness of a classifier in this application, we compared the performances of the best designed neural classifiers with the classifications provided by three trained evaluators. To this end, we asked the trained evaluators to evaluate the original images of each of the $31$ cases used as the test set. 
The evaluation was made using the whole images (not the cutouts), as would be the case in a real diagnosis situation.\n\n\n\\section{Results}\n\n\\hspace{\\parindent}In this section we present the performances obtained from the networks designed in the second phase of the project. As mentioned in Section \\ref{sec:phase1}, it was determined that the structures with $65,536$ neurons in the input layer had the best performances on average. Hence, all the structures implemented in the second phase had this same input layer size.\n\n\n\\subsection{Definition of the classifier structure}\n\n\\hspace{\\parindent}Structures with 20, 50, 100, and 150 neurons per layer, and the cost functions mean-square error (MSE) and cross-entropy (CE) were designed. Table~\\ref{tab:segFase_semFuncRisc} shows the average classification performances ($50$ realizations) obtained for the $42$ cases available for $\\lambda_{12}=\\lambda_{21}=1$ in equation \\eqref{eq1}. Each realization was characterized by a random initialization of the neural network weights. The majority of structures presented average accuracy higher than $0.80$ with small standard deviations, what is a good indicator of design reliability. Structure 8 presented the highest average accuracy of 0.8205, and its best result reached a 0.8810 accuracy. The results demonstrate that all structures presented high averages for the proposed figure of merit D (equation \\eqref{eq5}). However, the best performance was that of structure 8.\n\n\n\\begin{table}[h]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:segFase_semFuncRisc}Results - second design phase for $\\lambda_{12}=\\lambda_{21}=1$.}\n \\centering\n\t\\begin{tabular}{lccc}\n\t\t\n\t\t\\hline\n\t\t\\textbf{Structures} &\n\t\t\\multicolumn{2}{c}{\\textbf{Accuracy}} &\n\t\t\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure of \\\\ merit \\footnotemark[5] \\\\ $\\mathbf{D}$($\\mathbf{\\%}$)\\end{tabular}}} \\\\\n\t\t\\begin{tabular}[c]{c@{}@{}}(NHL\\footnotemark[1]\/NNPHL\\footnotemark[2]\/\\\\ Cost function)\\end{tabular} &\n\t\t\\textbf{Average$\\pm$SD\\footnotemark[3]} &\n\t\t\\textbf{\\begin{tabular}[c]{@{}@{}c@{}}Max.\\\\ value \\footnotemark[4]\\end{tabular}} &\n\t\t\\\\ \\hline\n\t\t\\textbf{1}~(2\/20\/MSE) & 0.7881$\\pm$0.0334 & 0.8571 & 89.25 \\\\\n\t\t\\textbf{2}~(2\/20\/CE\\footnotemark[6]) & 0.8019$\\pm$0.0386 & 0.8810 & 89.93 \\\\\n\t\t\\textbf{3}~(2\/50\/MSE) & 0.8014$\\pm$0.0421 & 0.9048 & 89.94 \\\\\n\t\t\\textbf{4}~(2\/50\/CE\\footnotemark[6]) & 0.8143$\\pm$0.0340 & 0.8810 & 90.58 \\\\\n\t\t\\textbf{5}~(2\/100\/MSE) & 0.8105$\\pm$0.0391 & 0.8810 & 90.36 \\\\\n\t\t\\textbf{6}~(2\/100\/CE\\footnotemark[6]) & 0.8086$\\pm$0.0376 & 0.8810 & 90.26 \\\\\n\t\t\\textbf{7}~(2\/150\/MSE) & 0.8043$\\pm$0.0507 & 0.9048 & 90.05 \\\\\n\t\t\\textbf{8}~(2\/150\/CE\\footnotemark[6]) & 0.8205$\\pm$0.0382 & 0.8810 & 90.88 \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[1]{NHL - Number of Hidden Layers.}\n\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[2]{NNPHL - Number of Neurons Per Hidden Layer.}\n\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[3]{SD - Standard Deviation.}\n \n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[4]{Max. 
value - Maximum value.}\n\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[5]{Percentage of maximum possible value in \\eqref{eq5}.}\n\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[6]{CE - Cross-Entropy.} \\\\\n\n \t\n\n\nThe same eight structures in Table~\\ref{tab:segFase_semFuncRisc} were trained using $\\lambda_{12}=1$ and $\\lambda_{21}=2$ in~\\eqref{eq1} to test the design performance with a conservative risk function. It was observed that the average accuracy and mean values of D did not present a significant increase when compared to the values in Table~\\ref{tab:segFase_semFuncRisc}. Figures \\ref{img_sensi2fase} and \\ref{img_espec2fase} show, respectively, comparisons of the sensitivity and specificity values obtained using the two different risk functions. As expected, the sensitivity increased and the specificity reduced when $\\lambda_{21}=2$ was used. This is because, in this case, a greater importance was given to the occurrence of false negatives (erroneous classification of dysplastic epithelia). As a consequence, a greater number of cases were classified by the network as dysplastic, increasing the number of true positives and the average sensitivity. On the other hand, the number of false positives increased, decreasing the average specificity.\n \n \\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figuras\/sensitivity}\n\\caption{Average sensitivities of network structures with two hidden layers. Black: $\\lambda_{12}=\\lambda_{21}=1$. Gray: $\\lambda_{12}=1$, $\\lambda_{21}=2$.} \n\\label{img_sensi2fase}\n\\end{figure}\n\n\n \\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figuras\/specificity}\n\\caption{Average specificities of network structures with two hidden layers. Black: $\\lambda_{12}=\\lambda_{21}=1$. Gray: $\\lambda_{12}=1$, $\\lambda_{21}=2$.} \n\\label{img_espec2fase}\n\\end{figure}\n\n\nFor the next design step we selected the three structures in Table~\\ref{tab:segFase_semFuncRisc} which yielded the largest average accuracies, namely structures 4, 5, and 8. For these three structures we proceeded to increase the number of hidden layers while such increase resulted in clear possibility of performance improvement. \n\n\n\nTable \\ref{tab:struct4} shows that the performance of structure 4 degraded when a third hidden layer was added. Table \\ref{tab:struct5} shows the performance of structure $5$. It is noted that a performance increase was obtained with $4$ hidden layers (structure $11$). Table \\ref{tab:struct8} shows the performance of structure $8$. In this case, an improved performance was verified as the number of hidden layers increased to 3 (structure $13$). Comparing the results for structures $4$, $11$ and $13$, structure $13$ was the best one, with an average accuracy of 0.8271, and an average figure of merit D above $91\\%$. Given their comparable performances, we considered these three structures (4, 11, and 13) in the following design and comparison stages. 
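The layer-deepening procedure just described can be sketched as follows; the dummy data, the crude accuracy estimate and all variable names are assumptions of this sketch and do not reproduce the leave-one-out evaluation used for the reported results.
\\begin{verbatim}
% Illustrative deepening loop: add one hidden layer at a time and stop
% when the estimated accuracy drops (Deep Learning Toolbox assumed).
neuronsPerLayer = 150;
X = rand(1024, 40);                       % stand-in for vectorized cutouts
T = full(ind2vec(randi(2, 1, 40), 2));    % stand-in for one-hot labels

prevAcc = -Inf;
for nLayers = 2:5
    net = patternnet(repmat(neuronsPerLayer, 1, nLayers), ...
                     'trainscg', 'crossentropy');
    net.trainParam.epochs = 5;            % kept small for this sketch
    net = train(net, X, T);
    [~, pred]  = max(net(X));             % predicted class per cutout
    [~, truth] = max(T);
    acc = mean(pred == truth);            % crude accuracy estimate
    if acc < prevAcc, break; end          % stop at the first performance drop
    prevAcc = acc;
end
\\end{verbatim}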
\n\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:struct4}Performance of structure $4$ with 2 and 3 hidden layers ($\\lambda_{12}=\\lambda_{21}=1$).}\n\t\\centering\n\t\\resizebox{9cm}{!}{\n\t\t\\begin{tabular}{lccllc}\n\t\t\t\\hline\n\t\t\t\\textbf{Structures} &\n\t\t\t\\multicolumn{2}{c}{\\textbf{Accuracy}} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm e}}$\\footnotemark[1]} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm p}}$\\footnotemark[2]} &\n\t\t\t\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\\\\ Cost function)\\end{tabular} &\n\t\t\t\\textbf{Average$\\pm$SD} &\n\t\t\t\\textbf{\\begin{tabular}[c]{@{}c@{}}Max.\\\\ value \\end{tabular}} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\\\ \\hline\n\t\t\t\\textbf{4}~(2\/50\/CE) & 0.8143$\\pm$0.0340 & 0.8810 & 76.70 & 85.73 & 90.58 \\\\\n\t\t\t\\textbf{9}~(3\/50\/CE) & 0.8105$\\pm$0.0363 & 0.9048 & 76.10 & 85.55 & 90.38 \\\\ \\hline\n\t\t\\end{tabular}\n\t}\n\\end{table}\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[1]{$S_{\\rm e}$ - Sensitivity.}\n\n\\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\hspace{\\parindent} \\footnotemark[2]{$S_{\\rm p}$ - Specificity.} \\\\\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:struct5}Performance of structure $5$ with $2-5$ hidden layers ($\\lambda_{12}=\\lambda_{21}=1$).}\n\t\\centering\n\t\\resizebox{9cm}{!}{\n\t\t\\begin{tabular}{lccllc}\n\t\t\t\\hline\n\t\t\t\\textbf{Structures} &\n\t\t\t\\multicolumn{2}{c}{\\textbf{Accuracy}} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm e}}$} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm p}}$} &\n\t\t\t\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\\end{tabular} &\n\t\t\t\\textbf{Average$\\pm$SD} &\n\t\t\t\\textbf{\\begin{tabular}[c]{@{}c@{}}Max.\\\\ value \\end{tabular}} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\\\ \\hline\n\t\t\t\\textbf{5}~(2\/100\/MSE) & 0.8105$\\pm$0.0391 & 0.8810 & 75.60 & 86.00 & 90.36 \\\\\n\t\t\t\\textbf{10}~(3\/100\/MSE) & 0.8100$\\pm$0.0515 & 0.8810 & 77.20 & 84.45 & 90.39 \\\\\n\t\t\t\\textbf{11}~(4\/100\/MSE) & 0.8195$\\pm$0.0411 & 0.9048 & 78.80 & 84.82 & 90.88 \\\\\n\t\t\t\\textbf{12}~(5\/100\/MSE) & 0.8071$\\pm$0.0394 & 0.8810 & 77.20 & 83.91 & 90.25 \\\\ \\hline\n\t\t\\end{tabular}\n\t}\n\n\\end{table}\n\nThe performances of structures $4$, $5$ and $8$ obtained with the increase of the hidden layers for $\\lambda_{12}=1$ and $\\lambda_{21}=2$ were also verified. The relative performances were very similar to those shown in Figures \\ref{img_sensi2fase} and \\ref{img_espec2fase}, namely, the average sensitivities increased while the average specificities reduced. The average accuracies and figure of merit D had no significant changes when compared with the results obtained from the classifications using $\\lambda_{12}=\\lambda_{21}=1$ (Tables \\ref{tab:struct4}, \\ref{tab:struct5} and \\ref{tab:struct8}). 
Hence, the results are presented only for the latter case.\n\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:struct8}Performance of structure $8$ with $2-4$ hidden layers ($\\lambda_{12}=\\lambda_{21}=1$).}\n\t\\centering\n\t\\resizebox{9cm}{!}{\n\t\t\\begin{tabular}{lccllc}\n\t\t\t\\hline\n\t\t\t\\textbf{Structures} &\n\t\t\t\\multicolumn{2}{c}{\\textbf{Accuracy}} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm e}}$} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm p}}$} &\n\t\t\t\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\\end{tabular} &\n\t\t\t\\textbf{Average$\\pm$SD} &\n\t\t\t\\textbf{\\begin{tabular}[c]{@{}c@{}}Max.\\\\ value \\end{tabular}} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\\\ \\hline\n\t\t\t\\textbf{8}~(2\/150\/CE) & 0.8205$\\pm$0.0382 & 0.8810 & 77.10 & 86.55 & 90.88 \\\\\n\t\t\t\\textbf{13}~(3\/150\/CE) & 0.8271$\\pm$0.0343 & 0.8571 & 78.80 & 86.27 & 91.25 \\\\\n\t\t\t\\textbf{14}~(4\/150\/CE) & 0.8110$\\pm$0.0313 & 0.8571 & 75.40 & 86.27 & 90.38 \\\\ \\hline\n\t\t\\end{tabular}\n\t}\n\n\\end{table}\n\n\n\\subsection{Performance evaluation stage}\n\n\\hspace{\\parindent}The figure of merit D was computed for the test cases using \\eqref{eq5} after each training. The trained networks with each structure yielding the highest value of D had their coefficients stored to be used in future comparisons. Table \\ref{tab:bestsStruc} shows the performance of the best design for each of the three selected structures.\n\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:bestsStruc}Performance of the three best structures ($\\lambda_{12}=\\lambda_{21}=1$).}\n\t\\centering\n\t\\begin{tabular}{lcccc}\n\t\t\\hline\n\t\t\\textbf{Structures} & \\textbf{Accuracy} & \\textbf{$\\mathbf{S_{\\rm e}}$} & \\textbf{$\\mathbf{S_{\\rm p}}$} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\\end{tabular} & \\textbf{} & ($\\%$) & ($\\%$) & \\\\ \\hline\n\t\t\\textbf{4}~(2\/50\/CE) & 0.8065 & 81.25 & 80 & 90.31 \\\\\n\t\t\\textbf{11}~(4\/100\/MSE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\\n\t\t\\textbf{13}~(3\/150\/CE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\ \\hline\n\t\\end{tabular}\n\t\n\\end{table}\n\n\nTable \\ref{tab:CNN_train} shows the average performances of all four classifiers in the training stage. The pre-trained CNN yielded a slightly better average accuracy and the best performance when compared to the MLP networks. However, its accuracy had the largest standard deviation (about 10\\% higher than structure 11, and 30\\% higher than the best MLP structure 13). 
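To make the compared structures concrete, the sketch below assembles an MLP of the kind evaluated above (here structure 13: a $65{,}536$-element input, three hidden layers of $150$ neurons and a cross-entropy cost; structure 11 would correspond to four hidden layers of $100$ neurons with an MSE cost). It is a hedged sketch rather than the authors' exact setup: the activation functions, optimizer, output layer and training schedule are illustrative assumptions that are not specified in this section.

\\begin{verbatim}
import tensorflow as tf

def build_mlp(n_hidden=3, width=150, loss="binary_crossentropy"):
    # loss="binary_crossentropy" stands for the CE cost, loss="mse" for MSE.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(width, activation="sigmoid",
                                    input_shape=(65536,)))
    for _ in range(n_hidden - 1):
        model.add(tf.keras.layers.Dense(width, activation="sigmoid"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
    return model

# Over the random re-initializations, the realization reaching the highest
# figure of merit D on the evaluation cases is the one whose coefficients
# are stored, as described in the performance evaluation stage.
\\end{verbatim}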
\n\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:CNN_train}Performance comparison with pre-trained CNN - training.}\n\t\\centering\n\t\\resizebox{9cm}{!}{\n\t\t\\begin{tabular}{lccllc}\n\t\t\t\\hline\n\t\t\t\\textbf{Structures} &\n\t\t\t\\multicolumn{2}{c}{\\textbf{Accuracy}} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm e}}$} &\n\t\t\t\\multicolumn{1}{c}{$\\mathbf{S_{\\rm p}}$} &\n\t\t\t\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\\end{tabular} &\n\t\t\t\\textbf{Average$\\pm$SD} &\n\t\t\t\\textbf{\\begin{tabular}[c]{@{}c@{}}Max.\\\\ value \\end{tabular}} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\multicolumn{1}{c}{($\\%$)} &\n\t\t\t\\\\ \\hline\n\t\t\t\\textbf{4}~(2\/50\/CE) & 0.8143$\\pm$0.0340 & 0.8810 & 76.70 & 85.73 & 90.58 \\\\\n\t\t\t\\textbf{11}~(4\/100\/MSE) & 0.8195$\\pm$0.0411 & 0.9048 & 78.80 & 84.82 & 90.88 \\\\ \n\t\t\t\\textbf{13}~(3\/150\/CE) & 0.8271$\\pm$0.0343 & 0.8571 & 78.80 & 86.27 & 91.25 \\\\ \n\t\t\t\\textbf{Pre-trained}~\\textbf{CNN} & 0.8505$\\pm$0.0449 & 0.9524 & 83.30 & 86.64 & 92.47 \\\\ \\hline\n\t\t\\end{tabular}\n\t}\n\n\\end{table}\n\nTable \\ref{tab:CNN_avaDesem} shows the obtained classification results for the test set. The best performance was obtained from the CNN. \n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:CNN_avaDesem} Performance comparison with pre-trained CNN - test.}\n\t\\centering\n\t\\begin{tabular}{lcccc}\n\t\t\\hline\n\t\t\\textbf{Structures} & \\textbf{Accuracy} & \\textbf{$\\mathbf{S_{\\rm e}}$} & \\textbf{$\\mathbf{S_{\\rm p}}$} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\\end{tabular} & \\textbf{} & ($\\%$) & ($\\%$) & \\\\ \\hline\n\t\t\\textbf{4}~(2\/50\/CE) & 0.8065 & 81.25 & 80 & 90.31 \\\\\n\t\t\\textbf{11}~(4\/100\/MSE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\\n\t\t\\textbf{13}~(3\/150\/CE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\\n\t\t\\textbf{pre-trained}~\\textbf{CNN} & 0.9032 & 93.75 & 86.67 & 95.10 \\\\\\hline\n\t\\end{tabular}\n\t\n\\end{table}\n\n\nAs an additional evaluation, it is of interest to gauge the potential of the different structures as more samples are available for training. To this end, we have trained the classifiers using both the CNN and structure $13$ for an increasing number of training samples. Figure \\ref{img_aumentoCasos_train} shows the progress in average accuracy obtained by the two classifiers when training is performed with 20, 30, and 42 cases. These results indicate that the performance of classifiers using MLP networks tends to approach that of the CNN classifier as the amount of available training data increases. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figuras\/aumentoNumCasos}\n\\caption{Average accuracy of classifiers with the increase in the number of cases used in training. Black: structure $13$. Gray: pre-trained CNN.} \n\\label{img_aumentoCasos_train}\n\\end{figure}\n\n\n\\subsection{Comparison with trained evaluators}\n\n\\hspace{\\parindent}Table \\ref{tab:compAvalia} compares the performances of the neural classifiers and the three evaluators in the classification of the $31$ cases of the test set. 
This table shows that neural classifiers generated an accuracy equal to or even higher than the three trained evaluators. The three evaluators correctly classified all dysplastic cases (sensitivities were $100\\%$). However, they presented specificities equal or inferior to the neural classifiers.\n\n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\caption{\\label{tab:compAvalia} Comparison including trained evaluators - test set.}\n\t\\centering\n\t\\begin{tabular}{lcccc}\n\t\t\\hline\n\t\t\\textbf{Structures} & \\textbf{Accuracy} & \\textbf{$\\mathbf{S_{\\rm e}}$} & \\textbf{$\\mathbf{S_{\\rm p}}$} & \\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Figure \\\\ of merit \\\\ $D$ ($\\%$)\\end{tabular}}} \\\\\n\t\t\\begin{tabular}[c]{@{}c@{}}(NHL\/NNPHL\/\\\\ Cost function)\/\\\\ \\textbf{Evaluator}\\end{tabular} & \\textbf{} & ($\\%$) & ($\\%$) & \\\\ \\hline\n\t\t\\textbf{4}~(2\/50\/CE) & 0.8065 & 81.25 & 80 & 90.31 \\\\\n\t\t\\textbf{11}~(4\/100\/MSE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\\n\t\t\\textbf{13}~(3\/150\/CE) & 0.8710 & 81.25 & 93.33 & 93.63 \\\\\n\t\t\\textbf{pre-trained}~\\textbf{CNN} & 0.9032 & 93.75 & 86.67 & 95.10 \\\\\n\t\t\\textbf{Evaluator 1} & 0.9032 & 100 & 80 & 94.97 \\\\\n\t\t\\textbf{Evaluator 2} & 0.6451 & 100 & 26.67 & 79.86 \\\\\n\t\t\\textbf{Evaluator 3} & 0.8710 & 100 & 73.34 & 93.26 \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\n\n\n\\section{Discussion}\n\n\n\\hspace{\\parindent}The diagnosis of epithelial dysplasia is provided by oral pathologists based on visual analysis of histopathological images. However, this diagnosis procedure may be subjective and dependent on the professional experience of the pathologist. In this work, a diagnostic aid system was proposed to help to reduce this subjectivity and the variability in diagnoses. The objective was to develop an epithelial classifier with low computational cost, using the expertise of pathologists to avoid excessive complexity. A methodology was proposed for the design of a simple MLP neural network classifier. Two of the resulting classifiers (11 and 13 in Table \\ref{tab:bestsStruc}) yielded an average accuracy of $87\\%$ in the performance assessment stage. \n\nIt has been recognized in the technical literature that Convolutional Neural Networks (CNNs) tend to yield excellent performance in image classification, especially in the medical field \\cite{art2}. Hence, CNNs qualify as the best candidates for performance comparison with the oral lesion classifiers designed using the proposed approach. However, training a CNN from scratch requires a huge amount of data, in the order of millions of samples. Such amount of data is hardly available in the medical area, and such a solution (if ever feasible) would result in a time-consuming and very costly training process. A typical solution to circumvent this limitation is to apply transfer learning. In this method, a fine-tuning is done in a pre-trained network, where the weights of the initial layers are frozen to values obtained from training with general images, and only the last layers are trained with the application-specific data \\cite{CNNNove}. This was the possible solution in our case, due to the limited amount of data available (a typical situation for the application at hand).\n\nThe performances of the best classifier designs were compared with that of a pre-trained CNN, the state-of-the-art in image classifiers for medical applications. 
The results (Table \\ref{tab:CNN_avaDesem}) show an accuracy of more than 87\\%, only 3.7\\% smaller than that obtained from the CNN solution. The sensitivity of the CNN was 15\\% better (93.75\\% versus 81.25\\%), but at the cost of a $7\\%$ worse specificity (86.67\\% versus 93.33\\% of the proposed solution). The difference in the value of the newly proposed figure of merit was only 1.6\\% in favor fo the CNN solution (95.1\\% versus 93.63\\%). It should be noted that this good performance of the proposed solution, which is quite comparable to that of the pre-trained CNN, comes at a significant advantage in complexity. \n\n\nTable \\ref{tab:compComplexComp} compares the computational cost in Floating-point Operations (FLOPs) of the three classifiers designed using the proposed methodology with that of the pre-trained CNN. It is noted that the pre-trained CNN has an operation complexity at least $100$ times greater than the most complex network among networks $4$, $11$ and $13$. This increase in complexity is also accompanied by a significant increase in the amount of memory required for the CNN, when compared to the other networks. This much higher complexity does not justify the corresponding modest increase in performance, showing that the contribution of theoretical expertise to the design of the classifier can easily surpass the advantages of using very sophisticated neural network structures to classify raw data. \n\n\\begin{table}[h!]\n \\renewcommand{\\arraystretch}{1.3}\n\t\\centering\n\t\\caption{\\label{tab:compComplexComp}Comparison of computational cost.}\n\n\n\t\\begin{tabular}{lc}\n\t\t\\hline\n\t\t\\textbf{Structures} & \\textbf{\\begin{tabular}[c]{@{}c@{}} Computational \\\\ cost\\\\ (FLOPs)\\end{tabular}} \\\\ \\hline\n\t\t\\textbf{4}~(2\/50\/Cross-Entropy) & 6.58 M \\\\\n\t\t\\textbf{11}~(4\/100\/MSE) & 13.20 M \\\\\n\t\t\\textbf{13}~(3\/150\/Cross-Entropy) & 19.79 M \\\\\n\t\t\\textbf{pre-trained}~\\textbf{CNN} ResNet-18 & $\\cong$ 2 G \\\\ \\hline\n\t\\end{tabular}\n\n\\end{table}\n\nWe have also verified the performance of the classifiers when using different number of cases during their training. Our results (Figure \\ref{img_aumentoCasos_train}) showed that the performance of the proposed MLP classifiers converges to that of the pre-trained CNN as more cases are used for training. Hence, weighting implementation cost and classification performance, the generally accepted superiority of a CNN solution for any image classification application is clearly open to question.\n\nAnother important aspect of the specific application is to address the value of the proposed classifiers as a supporting tool for the pathologist in reaching the correct diagnosis. The value of such support should be evaluated considering the fact that a diagnosis of epithelial dysplasia is rarely made by a large number of experts. Also, even a group of highly trained professionals may reach diverse conclusions, especially in less obvious cases. Mathematically, these two aspects indicate a tendency of large variability in the diagnosis made by a small set of pathologists. 
Recognizing such a tendency, it is in the interest of the pathologist to have the support of a well designed classifier when analyzing oral lesions for detecting dysplasia.\n\nThis tendency of high variability among trained evaluators was observed in Table~\\ref{tab:compAvalia}, where the average accuracy of the evaluators was $0.8064$, but with a standard deviation of $0.1406$, while the average accuracy of the three MLP networks was $0.8495$, with a standard deviation of $0.0372$. Although these results are not statistically significant due to the small number of trained evaluators, they correspond to a practical situation, as the diagnosis will rarely be made by considering a large number of pathologists' opinions. These results corroborate the expected tendency of having a large variability in classifications by a few trained evaluators, which suggests that the help of a well trained algorithmic classifier should be welcome as a decision support.\n\nAnother interesting observation in Table~\\ref{tab:compAvalia} is that the three evaluators correctly classified all dysplastic cases (sensitivities were $100\\%$). However, the trained evaluators yielded specificities equal to or lower than those obtained with neural classifiers. This fact suggests that the trained evaluators tended to classify cases as dysplastic when there was doubt in the classification. This tendency to be \"on the safe side\" leads to an increase in the number of false positives, which corresponds to using a risk function with $\\lambda_{21}>1$ for reaching the decision.\n\n \n\n\\section{Conclusions}\\label{sec:conclusions}\n\n\\hspace{\\parindent}This study showed that the multilayer network structures, combined with the pathologists' knowledge to choose the cutout region delivered to the network, presented performances similar to those of the network that is considered the state of the art in image classification (pre-trained CNN), with considerably less operational complexity. In addition, the analysis indicated that increasing the number of cases used in the training of the MLP networks would bring these performances even closer.\n\nFinally, the average performance of three trained evaluators was compared with the average performance of the three MLP networks. We observed that they resulted in very close average accuracies, but the standard deviation of the neural structures was approximately $74\\%$ lower than the standard deviation of the trained evaluators. This high variability in the diagnoses of trained evaluators may be associated with their emotional state during the classification of cases. Thus, using a well trained classifier to aid in the diagnosis could be welcome to reduce this high variability.\n \n\\section{Acknowledgments} \n\n\\hspace{\\parindent}This study was supported in part by the Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior \u2013 Brasil (CAPES) \u2013 Finance Code 001, and by the National Council for Scientific and Technological Development (CNPq).\n \n \n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThe analysis of complex networks is a fast-growing topic of interest, with applications in fields as varied as neural networks, protein networks, computer networks or geographical networks. 
One of the most prominent application domain is social network analysis.\n\nThe study of social networks can be traced back to the beginning of the 19th century, since the initial work on sociometry \\protect\\cite{moreno1934}. This subject has gained new momentum in recent years, mainly due to the advent of the information age and internet, which has led to the extensive popularity of online social networks, producing large social datasets that can be studied by researchers. The goal of social network analysis is to analyze relationships among social entities and to understand the general properties and features of the whole network, typically by means of graph theory. Nodes in the graph represent social actors within the network (people, organization, groups, or any other entity) and edges characterize social interactions or relations between nodes (friendship, collaboration, influence, idea, etc.). \n\nOne of the most prominent features of social networks is their community structure, characterized by the existence of nodes collections called communities, where nodes within a collection tend to interact more with each other than with the rest of the network \\cite{radicchi2004}. Individuals within the same community often share similar properties, such as interests, social ties, location, occupation, etc. Therefore, the ability to detect such communities could be of utmost importance in a number of research areas, such as recommender systems \\cite{boratto2009}\\cite{deng2014}, email communication \\cite{moradi2012}, epidemiology \\cite{kitchovitch2011}, criminology \\cite{ferrara2014}, marketing and advertising \\cite{mckenzie1999, fenn2009}, etc.\n\nThere are many challenges facing community detection. One of the most important, in particular for social networks, is \\textit{overlap of communities}: in such networks, individuals often belong to several social groups. For instance, individuals often belong to familial and professional circles; scientists collaborate with several research groups, etc. The second challenge lies in the fact that real-world communities are time-evolving. The community structure changes as the social entities and their interactions evolve. These changes can be modeled as addition and removal of nodes and edges from the graph. For instance, in online social networks like Facebook, changes are introduced by users joining or withdrawing from the network, or by people adding each other as \"friend\". These changes may lead to a significant transformation of the network community structure. Palla et al.\\cite{palla2007} propose six types of events which may occur during the evolution of communities: birth, growth, shrink, merge, split, and death. The communities can grow or shrink, as members are added or removed from an existing community. As time goes by, new communities can be born, and old communities may disappear. Two communities can become closely related and merge into a single one, or, conversely, a single community can split into two or more distinct ones.\n\n\\subsection{Rationale for an online version of the Clique Percolation Method}\nA growing number of methods have been proposed to reveal overlapping and evolving community structures \\cite{wang2013communities,cazabet2014dynamic}. One of the most prominent of these methods was proposed by Palla et al.\\cite{palla2007}. The clique percolation method (CPM) \\cite{palla2005} is used to extract the community structure at each time step of an evolving network. 
Then, communities in consecutive time steps are matched. \n\nThe CPM method, thanks to its community definition, has interesting properties compared with other popular methods such as Louvain and infomap \\cite{louvain,rosvall2008maps}: \n\\begin{itemize}\n\\item It is deterministic, i.e., two runs on two networks with the same topology will yield the same results.\n\\item Communities are defined intrinsically, i.e., each community exists independently from the rest of the network, unlike methods using a global quality function such as the \\textit{modularity} \\cite{girvan2002}, that suffer from resolution limits \\cite{fortunato2007resolution} binding the size of communities to the size of the network.\n\\item Communities can overlap, i.e., a node can be part of several communities.\n\\end{itemize}\n\nThese properties represent an advantage when working with social networks and with dynamic networks. In particular, a well-known problem with the discovery of evolving communities is the so-called instability of methods \\cite{aynaud2010static}, which can be summarized as follows: because community detection methods are unstable, the difference observed in the partition between two consecutive periods of the network might be due either to significant changes in the network or to random perturbations introduced by the algorithm itself. This problem is due to (1) the usage of stochastic methods, as two runs on very similar (or even identical) networks can yield very different results if the algorithm reaches different local maxima, (2) non-intrinsically defined communities, as a modification of a community might be due to changes introduced in an unrelated part of the network. \n\nGiven these observations, CPM appears as a natural candidate to be used for dynamic community detection. The method adapting CPM to the dynamic case \\citep{palla2007}, however, suffers from at least two weaknesses for which we propose solutions in this article, one due to CPM itself and the other to its adaptation to the dynamic case:\n\\begin{itemize}\n\\item All cliques need to be discovered anew at each step, both in the new graph snapshot and in a joint graph between snapshots at $t$ and $t-1$, which is computationally expensive for networks with many steps of evolution.\n\\item Nodes must belong to a clique of size at least $k$ to be part of a community, and as a consequence, some nodes might not be assigned to any community. As most social networks have a scale-free degree distribution, a large number of nodes remain without a community.\n\\end{itemize}\n\nTo circumvent these issues, we propose a new two-step framework for detecting overlapping and evolving communities in social networks. First, built upon the classical algorithm CPM, we introduce an Online CPM algorithm (OCPM) to identify the core nodes of communities in real time. To do that, we propose to use a \\textit{stream graph} as the network model. At every change in the network, the community structure is updated at the local scale. This allows significant improvements in computational complexity compared with dynamic CPM \\cite{palla2007}. Second, to deal with the coverage problem of CPM, we propose a label propagation post-process (OLCPM); thus, nodes not embedded in any community will be assigned to one or more communities.\n\nThe rest of the paper is organized as follows: Section \\ref{relatedWork} discusses the related work on overlapping and evolving community detection algorithms. 
\nIn Section \\ref{dynamicNetworkModel}, we present the different types of dynamic networks and introduce a fully dynamic network model. \nSection \\ref{OLCPMsection} presents the OLCPM framework of dynamic community detection: OCPM algorithm and Label propagation based post process. \nExperimental results are described in section \\ref{Experiments}.\n\n\\section{Related work}\n\\label{relatedWork}\nIn this section, we first introduce the Clique Percolation Method (CPM) \\cite{palla2005} and its dynamic version \\cite{palla2007}, on which our proposal is built on. Then, we present a brief overview of some relevant research work on overlapping and dynamic community detection. \n\nPalla et al.\\cite{palla2007} were among the first to propose an approach for dealing with dynamic and overlapping community detection. Their approach has two main steps: i) static community identification and ii) community matching. In the first step, the CPM method \\cite{palla2005} is used to extract the community structure at each time step. In this method, a community is defined as the union of all \\textit{$k$-cliques} (complete subgraphs of size $k$) that can be reached from each other through a series of adjacent $k-cliques$ (sharing $k-1$ nodes). In the second step, communities are matched between consecutive snapshots. The following process is used: for each pair of consecutive snapshots, a joint graph is created, containing the union of nodes and links from both networks. CPM is then applied to the resulting graph. The communities in the joint graph provide a natural connection between communities in the consecutive snapshots. If a community in the joint graph contains a single community in each corresponding snapshot, then they are matched. If the joint graph contains more than one community from either snapshot, the communities are matched in descending order of their relative node overlap. Overlap is computed for every pair of communities from the two snapshots as the fraction of the number of common nodes to the sum of the number of nodes in both communities. \n\nThe work of Palla et al. \\cite{palla2007} falls into the category of community matching approaches, i.e., methods with a static community detection step and a matching step. Most of the earliest algorithms proposed for dynamic community detection were following a similar approach, with variations in the method used for detection in each snapshot (MOSES in \\cite{greene2010}), Louvain in \\cite{louvain}, etc.) and for community matching (Jaccard Coefficient in \\cite{greene2010}, Core nodes in \\cite{Wang2008}, etc).\n\nIn recent years, several authors have proposed methods based on a different approach, allowing to work on dynamic graphs provided as a stream. In this case, there are too many modifications of the network to run a complete algorithm at each step. Therefore, these methods update communities found at previous steps based on local rules. Below, we introduce examples of such methods. More details can be found in \\cite{cazabet2014dynamic}.\n\\begin{itemize}\n\n\\item Xie et al.\\cite{xie2013b} extended LabelRank \\cite{xie2013a} algorithm which is a stabilized and deterministic variant of Label propagation algorithm \\cite{xie2011} to deal with evolving communities in dynamic networks. 
The extended algorithm called LabelRankT is based on a conditional update rule by which only nodes involved in change between two consecutive snapshots are updated.\n\n\\item Nguyen et al.\\cite{nguyen2011} proposed AFOCS, an adaptive framework for detecting, updating and tracing the evolution of overlapping communities in dynamic mobile networks. During the initialisation step, AFOCS identifies all possible basic network communities which represent the densely connected part of the network, whose internal density is greater than a certain level, and merge those with the highest overlaps with each other. In a second step, AFOCS adaptively update the community structure, as the dynamic network evolves in time.\n\n\\item Cazabet and Amblard \\cite{cazabet2011} proposed an online algorithm called iLCD. In this work, the dynamic network is considered as a sequence of events (adding or removing edges). iLCD is using a multi-agent system: each community is an agent on the network, which can integrate or reject nodes. The agents are bounded by a certain number of operating rules, like updating existing communities, creating new communities or merging similar ones. Communities can be updated at each apparition or deletion of links.\n\n\\item Rossetti et al.\\cite{rossetti2016} defined TILES, which also proceeds in a streaming fashion, i.e., dynamics of the network is described as flows of interactions (also called perturbations) between users where nodes and edges can be created or removed over time. Each perturbation is considered as a fall of domino tile: every time a new interaction appears in the network, TILES updates the community locally and then propagates the changes to the node surroundings to adjust the neighbors' community memberships. \n\\end{itemize}\n\nA weakness of these algorithms is the absence of any guarantee that the communities found represent an optimal solution at the global level, because communities at each step are based on communities found in a previous step by applying a set of local rules. More precisely, these methods suffer from the risk of community drift, in which the solution can be dragged away from an originally relevant solution. Another consequence is that communities found by these algorithms at step $t$ depend on the particular sequence of previous graph modifications: the same graph produced by a different graph's history would yield a different partition.\n\nOn the contrary, due to the nature of the definition of communities in CPM, we are able in this article to provide an algorithm that handles a flow of changes with local modifications, while guaranteeing that the same state of the graph will always yield the same community structure.\n\n\\section{Dynamic Network Model}\n\\label{dynamicNetworkModel}\nVarious temporal models have been proposed to deal with dynamic networks. We distinguish three broad approaches: \n\n\\begin{itemize}\n\\item \\textbf{Aggregated graphs} model the dynamic network as a single static network by aggregating all contacts between each pair of nodes in a single edge. This representation does not allow longitudinal analysis, for instance tracking the evolution of communities.\n\n\\item \\textbf{Series of snapshots} model the evolving network through a series of snapshots, each of which is a static network representing contacts that exist at the corresponding time, or during the corresponding time window. The main issue of this approach is to determine the 'right' number of time windows, i.e., the temporal granularity. 
Tracking communities across network sequences can be difficult if important temporal information is lost between snapshots.\n\n\\item \\textbf{Temporal networks} conserve all known temporal information. There are two main models: contact sequences and interval graphs \\cite{holme2012}. In a contact sequence, an interaction is represented as a triple $(i, j, t)$ where $i$ and $j$ are the interacting entities and $t$ is the time when the relationship is activated. In an interval graph, an interaction is represented as a quadruplet $(i,j,t,\\delta t)$ which means that $i$ is involved in contact with $j$ from $t$ to $t+\\delta t$. In these models, only the temporal information about interactions is represented; there is no temporal information about nodes. \n\\end{itemize}\n\nIn the following, we introduce our own formalism for evolving graphs, which is better suited to deal with \\textit{stream graphs}, i.e., graphs whose modifications occur as a flow, not necessarily known \\textit{a priori}. This formalism has the same expressivity as interval graphs.\n\n\\subsection{Stream graph}\n\\label{streamGraph}\n\nNetworks are often represented by a graph $G=(V, E)$, where $V$ is the set of nodes and $E$ is the set of edges between nodes. We represent dynamic graphs as an ordered sequence of events, which can be node addition, node removal, edge addition or edge removal. We use the following notations:\n\n\\begin{itemize}\n\\item {\\em Inserting or removing a node} is represented as triples $(v,e,t)$, where $v$ is the node, $e$ is the event observed among $\\{+,-\\}$ (insert ($+$) or remove ($-$)), and $t$ is the time when the event occurs.\n\n\\item {\\em Inserting or removing an edge} is represented as quadruplets $(u,v,e,t)$, where $u$ and $v$ are endpoints of the edge, $e$ is the event observed among $\\{+,-\\}$ (insert ($+$) or remove ($-$)), and $t$ is the time when the event occurs.\n\n\\end{itemize}\n\nNote that this formalism, for edges, is identical in nature to an interval graph, but is more convenient for stream algorithms, as new operations can be added at the end of the ordered sequence of events without affecting previous ones.\n\n\n\\section{OLCPM Framework}\n\\label{OLCPMsection}\n\nOur framework comprises two main steps. First, we propose to adapt the classical algorithm CPM \\cite{palla2005} for static overlapping community detection to deal with evolving networks. We propose an online version of CPM called OCPM (Online CPM). This algorithm is based on analyzing the dynamic behavior of the network, which may arise from inserting or removing nodes or edges, i.e., every time a change is produced in the network, we update locally the community structure alongside the involved node or edge. \n\nAs stated earlier, CPM may not cover the whole network, i.e., some nodes have no community membership. To deal with this problem, we assume that the communities found by OCPM contain core nodes, and we propose a way to discover the community peripheral nodes. In the second step of our framework, we extend OCPM using a label propagation method and we propose OLCPM (Online Label propagation CPM). 
These proposals will be presented in detail in the next section.\n\\begin{figure} [h]\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addExternEdge1}\n \\caption{Example with k=3}\n \\end{subfigure}\n\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addExternEdge2}\n \\caption{Example with k=4}\n \\end{subfigure}\n\n\\caption{Examples of adding an edge with both endpoints outside any community. $(a)$ Example for $k=3$: when the edge$(1, 2)$ is added, a new community $\\{$1, 2, 3, 4$\\}$ is created from two adjacent $k$-cliques $\\{1, 2, 3\\}$ and $\\{1, 2, 4\\}$. $(b)$ Example for $k=4$: the insertion of edge$(1, 2)$ leads to the creation of two communities $\\{ 1, 2, 3, 4\\}$ and $\\{1, 2, 5, 6, 7\\}$ from respectively two groups of not-adjacent $k$-cliques $\\{\\{1, 2, 3, 4\\}\\}$ and $\\{\\{1, 2, 5, 6\\}$,$\\{1, 2, 6, 7\\}\\}$}.\n\\label{fig:AddExternalEdge}\n\\end{figure}\n\n\\begin{figure} [h]\n\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addMixEdge1}\n \\caption{Simple grow}\n \\end{subfigure}\n\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addMixEdge2\n \\caption{Grow and merge}\n \\end{subfigure}\n\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addMixEdge3}\n \\caption{New community}\n \\end{subfigure}\n \n\\caption{Example of adding an edge with an external endpoint and internal one(for $k=3$). (a) The community $\\{1, 2, 3, 4, 6 \\}$ grows with node $5$ when adding edge $(3, 5)$. (b) When the edge $(4,7)$ is added, the communities $\\{ 1,2,3,4\\}$ and $\\{4,5,6\\}$ grow with node $7$, and then merged. The resulting community takes the identity of the one that contains more nodes.(c) By adding edge $(3,6)$, a new community $\\{ 3,5,6,7\\}$ is created.}\n\n\\label{fig:OneEexternalEndpoint}\n\\end{figure}\n\n\n\\begin{figure} [h]\n\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addInternEdge1\n \\caption{Grow and Merge}\n \\end{subfigure}\n \n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{addInternEdge2}\n \\caption{Grow}\n \\end{subfigure}\n\n\\caption{Examples of adding an edge with two internal endpoints(k=3). (a) The communities $\\{1,2,3,4\\}$ and $\\{2,5,6,7\\}$ grow with the nodes of adjacent $k$-cliques $\\{ \\{1,3,5\\},\\{2,3,5\\}\\}$ formed when adding the edge $(3,5)$, and then merged. (b) The community $\\{1,2,3,4, 6\\}$ grows with the nodes of adjacent $k$-cliques $\\{ \\{1,7,8\\},\\{1,5,8\\},\\{1,2,8\\}\\}$ formed when adding the edge $(3,5)$.}\n\\label{fig:TwoInternalEndpoints}\n\\end{figure}\n\n\\begin{figure} [h]\n\\centering\n\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=.3\\linewidth]{removeInternNode1}\n \\caption{Shrink}\n \\end{subfigure}\n \n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=.4\\linewidth]{removeInternNode2} \n \\caption{Shrink and Split}\n \\end{subfigure}\n \n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{removeInternNode3}\n \\caption{Death}\n \\end{subfigure}\n \n\\caption{Example of removing internal node (k=3 for (a) and (b), k=4 for (c)). (a) When removing the node $4$, the members $\\{4,5,6\\}$ leaves out the community $\\{1,2,3,4,5,6\\}$.(b) When removing the node $4$, the community $\\{1,2,3,4,5,6,7,8\\}$ shrinks, i.e., it loses this node and all its edges, and then splits into two communities: $\\{5,6,7,8\\}$ and $\\{1,2,3\\}$. 
(c)By removing the node $6$, the community $\\{1,2,3,4\\}$ shrinks and the community $\\{3,5,6,7\\}$ dies}\n\\label{fig:DeleteInternalNode}\n\\end{figure}\n\n\\begin{figure} [h]\n\n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{removeInternEdge1\n \\caption{No change in the community structure}\n \\end{subfigure}\n \n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{removeInternEdge2\n \\caption{Community split}\n \\end{subfigure}\n \n\\caption{Examples of removing internal edge (k=4). (a) The community structure doesn't change when removing the edge $(4, 7)$. (b) When removing the edge $(4, 6)$, the community splits into two small communities, each of which contains a group of adjacent $k$-cliques in the original community.}\n\\label{fig:RemoveInternalEdge}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\\subsection{OCPM: Online Clique Percolation Method}\n\nThis section proposes the first step of our framework OLCPM, an online Clique Percolation Method (OCPM). This method takes two inputs: \n\\begin{itemize}\n\\item $SE$, chronologically ordered sequence of events which models networks modification, following the format: $(n, e, t)$ or $(i, j, e, t)$ as defined in section \\ref{streamGraph}\n\\item the parameter $K$, which determines the clique size; it is an integer value greater than or equal to 3 \n\\end{itemize}\n\nThe OCPM method maintains after each modification three elements: \n\\begin{itemize}\n\t\\item $G(V, E)$ the current state of the network\n\t\\item $AC$ the set of currently alive communities \n\t\\item $DC$ the set of dead communities\n\\end{itemize}\n\nIt is therefore possible to know the community structure status at every network modification step. \n\n\\subsubsection{Definition of the OCPM algorithm}\nNote: To facilitate the readability of the paper, we decided to put all formal algorithms in the \\textbf{Appendix}, and to only include the rationale of these algorithms in the body of the article. Please refer to the \\textbf{Appendix} for further details.\n\n\\bigbreak\n\nThe core of the OCPM algorithm can be defined by an algorithm that updates the current state of all variables according to a sequence of events $SE$, as detailed in Algorithm \\ref{algo:OCPM}. The task carried out by the algorithm depends on the type of event encountered:\n\n\\begin{itemize}\n\\item \\textbf{Add a new node}: adding an isolated node $n$ has no influence on the community partition. In this case, only $n$ is added to the graph $G$ and no other action is performed until the next event. \n\\item \\textbf{Add a new edge}: when a new edge $(i,j)$ appears, we add this edge to the graph $G$.\nAccording to the type of edge, we distinguish two cases:\n\n\t\\begin{itemize}\n\n\t\\item When inserting an external edge, i.e., both its endpoints are outside any community, we check if one or more new $k$-cliques (KCliques() function Algorithm \\ref{KCliques}) are created. If it is the case, we gather all adjacent $k$-cliques one to the other. Then, for each group of adjacent $k$-cliques, we create a single community. Figure \\ref{fig:AddExternalEdge} shows two examples of adding external edges and the changes it brings to the community structure. (See Algorithm \\ref{Algo:AddExternalEdge})\n\t\\item In all other cases, i.e., when a new edge appears with one or two internal extremities, we check all $k$-cliques created when adding this edge and not belonging to any community. 
Then, all adjacent $k$-cliques are grouped together and for each group, we check if there are other adjacent $k$-cliques included in any community to which belongs any node in this group. If they exist, the corresponding communities will grow with the nodes of this group and they can eventually be merged (Merge()function Algorithm \\ref{Merge}). Otherwise, a new community appears containing nodes of this group. Figures \\ref{fig:OneEexternalEndpoint} and \\ref{fig:TwoInternalEndpoints} depict some examples of adding edges with one or two internal endpoints and the changes to the community structure. (Algorithm \\ref{Add_internal_edge})\n\t\\end{itemize}\n\n\\item \\textbf{Delete node}: In this case, we remove the node from the graph G, and all its edges are removed as well. If the node is external, i.e., it doesn't belong to any community, the community structure is not affected and no action is performed until the next event. When the removed node belongs to one or more communities, we check for each community to which this node belongs whether it still contains at least a $k$-clique after the node is removed. This community dies if it loses all $k$-cliques(see figure (c) \\ref{fig:DeleteInternalNode}). Otherwise, the community shrinks, i.e., it loses this node and all its associated edges. Here, we distinguish two cases:\n\\begin{itemize}\n\\item The community may remain coherent and the community structure doesn't change(see figure (a) \\ref{fig:DeleteInternalNode} ).\n\\item The community may become disconnected and therefore, it will be break up into small communities (see figure (b) \\ref{fig:DeleteInternalNode}).\n\\end{itemize}\nThe split function (Algorithm \\ref{Split}) deals with these two cases. After the community shrinking, its structure is recalculated keeping the principle of CPM -checking all maximal cliques of size not less than $k$. The resulting community having the largest number of nodes keeps the identity of the original one, where the others have new identities.\n\nThe Algorithm \\ref{Remove_internal_node} describes this case.\n\n\\item \\textbf{Delete edge}: First, we remove the edge from the graph G. The removal of an edge with two endpoints belonging to the same community(ies)(called internal edge) follows the same mechanism as internal node removal: the communities to which belong the two extremities of this edge may split or die. For each of them, we check whether it still contains $k$-cliques. If so, we use the function Split (Algorithm \\ref{Split}) to check whether or not the community is divided into smaller parts. Otherwise, this community dies (see Algorithm \\ref{Remove_internal_edge}). Figure \\ref{fig:RemoveInternalEdge} shows two examples of removing internal Edge and the changes that it brings to the community structure.\n\nFor all other types of edges, the community structure doesn't change.\n\n\\end{itemize}\n\nHere, we detail some functions used in our algorithm:\n\n\\begin{itemize}\n\n\\item \\textbf{Kcliques}(): (Algorithm \\ref{KCliques}) This function takes a set of nodes SN as input parameter and returns all maximal cliques of size not less than $k$ containing this set. In order to optimize the performance of our algorithm, $k$-cliques are locally launched in the subgraph including the set $SN$ and all common neighbors among its members. \n\n\\item \\textbf{Merge}(): (Algorithm \\ref{Merge}) This function is used for merging adjacent communities. The resulting community takes the identity of the one with the highest number of nodes. 
\n\n\\item \\textbf{Split}(): (Algorithm \\ref{Split}) This function is used for splitting a community if possible. It takes as input a community and creates from it one or more communities. We proceed as follows: first, we identify all maximal cliques of size not less than $k$ in this community and we aggregate adjacent $k$-cliques with each other. Then, for each of the aggregated $k$-cliques, we create a new community. The community which has the largest number of nodes keeps the identity of the original one.\n\\end{itemize}\n\n\nTable \\ref{tab:Actions} summarizes the actions which can be carried out by OCPM according to graph events.\n\n\\begin{table} [!h] \n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\n\\multicolumn{2}{|l|}{\\textbf{Event}} & \\textbf{Actions} \\\\\n\\hline\n\\multicolumn{2}{|l|}{Add new node} & - \\\\\n\\hline\n\\multirow{2}{*}{Add new edge} & External & Birth \\\\ \\cline{2-3}\n & Other & Grow+[Merge], Birth \\\\ \n\\hline\n\\multirow{2}{*}{Delete Node} & External & - \\\\ \\cline{2-3}\n & Internal & Shrink+[Split], Death \\\\\n\\hline\n\\multirow{2}{*}{Delete Edge} & Internal & Split, Death \\\\ \\cline{2-3}\n & Other & - \\\\\n\\hline\n\\end{tabular}\n\\caption{Actions that can be performed according to graph events. Brackets denotes events that can only follow the preceding community event.}\\label{tab:Actions}\n\n\\end{table}\n\n\\subsubsection{Complexity of the algorithm}\n\nInstead of computing all $k$-cliques for the whole network at each event occurring in the network, OCPM updates the community structure on the local scale, and thus only the community structure alongside the node or the edge involved in the event is recomputed. For certain events, like adding or deleting an isolated node or deleting an external edge, the community structure doesn't change and hence, the computational time saving reaches its maximum. For instance, if we have $n$ $k$-cliques when such event is produced, the computational time savings will be $n$ times the average time for calculating $k$-cliques. For other events, the computational time saving is also significant. See section \\ref{empComplexity} for an empirical evaluation of the complexity.\n\n\\subsubsection{Community tracking process}\nOne of the difficulties when tracking the evolution of communities is to decide which community is a continuation of which. Our framework allows a trivial matching in the case of \\textit{continuation} (no merge or split) of communities. In the case of merge and split, deciding which community keeps the original identity is a well-known problem with no consensus in the literature \\cite{cazabet2014dynamic}. In OCPM, we took the simple yet reasonable decision to consider that the \\textit{largest} community involved in a merge or split have the same identifier as the merged\/split one. This strategy can be replaced without altering the algorithm logic. A more advanced process could be added to solve problems of \\textit{instability}, e.g. communities merging and quickly splitting back to their original state.\n\n\\subsection{OLCPM: Online Label propagation CPM}\n\nThis section describes the second step of our framework. A post-processing based on label propagation is set out on the output communities of OCPM to discover the peripheral nodes. This module is called OLCPM (Online Label propagation CPM). 
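Before formalizing the algorithm, the following sketch illustrates the principle of this post-process on a static snapshot: starting from the core-communities returned by OCPM, a breadth-first search attaches every uncovered node to the core-community (or communities) of its closest core members. It is a simplified, non-incremental illustration of the idea behind Algorithm \\ref{algo:OLCPM}; the function name is hypothetical and the graph is assumed to expose a networkx-like adjacency interface.

\\begin{verbatim}
from collections import deque

def peripheral_assignment(G, core_communities):
    # G: graph such that iterating G gives nodes and G[u] gives neighbors
    # core_communities: list of node sets produced by OCPM
    covered = set().union(*core_communities)
    dist_to = {v: {} for v in G}        # node -> {community id: distance}
    for cid, core in enumerate(core_communities):
        # multi-source BFS started from all core members of community cid
        dist = {v: 0 for v in core}
        queue = deque(core)
        while queue:
            u = queue.popleft()
            for w in G[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v, d in dist.items():
            if v not in covered:
                dist_to[v][cid] = d
    # a peripheral node joins every community at minimal geodesic distance,
    # so ties make it belong to several communities
    membership = {}
    for v, labels in dist_to.items():
        if labels:
            dmin = min(labels.values())
            membership[v] = {c for c, d in labels.items() if d == dmin}
    return membership
\\end{verbatim}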
\n\nThere is a twofold reason for using a post-process extending core-communities found by OCPM:\n\\begin{itemize}\n\t\\item In a network evolving at a fast pace, one can update core-communities efficiently after each event, and run the post-process only when the current state of communities needs to be known, thus saving computation time.\n\t\\item It is known that the periphery of communities is often not well defined and unstable. As seen earlier, and because OCPM is deterministic and searches for core-communities, it reduces this instability problem. By using the label propagation mechanism only as a post-process for analysis, communities at $t$ do not depend on the periphery of communities that might have been computed at $t-1$, but only on the stable part found by OCPM.\n\\end{itemize}\n\n\\subsubsection{OLCPM algorithm}\n\nFirst, each core-community (community found by OCPM) spreads to neighboring peripheral nodes (nodes not covered by OCPM) a label containing its identity and a weight representing the geodesic distance (the length of the shortest path) between this neighboring node and any other node in the core-community. Each peripheral node has a local memory allowing the storage of many labels. The label propagation process is based on breadth-first search (BFS). When all labels have been shared, nodes are associated with all communities with which they have the shortest geodesic distance. Note that nodes can, therefore, belong to several communities, if they are at the same minimal distance from several core-communities found by OCPM. This algorithm is defined formally in Algorithm \\ref{algo:OLCPM}.\n\n\nFigure \\ref{fig:OLCPM} presents an illustration of this process.\n\n\\begin{figure*}[h!]\n \\centering\n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{labelSpreadingStep}\n \\caption{Label spreading step}\n \\end{subfigure}\n \\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics{afterLabelAnalysis}\n \\caption{Community structure after label analysis (k=3)}\n \\end{subfigure}\n\\caption{Peripheral community updates by OLCPM. (a) Label spreading step. (b) Community structure after label analysis (for k=3). Green nodes are members of community $C1$; yellow nodes are members of community $C2$; uncolored nodes have no affiliation.}\n\\label{fig:OLCPM}\n\\end{figure*}\n\n\n\n\n\\section{Experiments}\n\\label{Experiments}\nIn this section, we begin by evaluating the effectiveness of the OCPM algorithm. To do so, we compare the time complexity of OCPM with the dynamic version of CPM \\cite{palla2007}. Second, we are interested in the quality of the communities that OLCPM is able to find, considering both synthetic and real-world networks.\n\n\n\\subsection{Measuring OCPM complexity gain for highly dynamic networks}\n\\label{empComplexity}\nIn this section, we compare the empirical complexity of the original dynamic version of CPM (hereafter, DyCPM)\\cite{palla2007} and our proposed version (OCPM). We generate synthetic dynamic networks, and compare how the running times of both algorithms vary with the properties of the network and of its dynamics. Note that we compare OCPM only with CPM because both algorithms try to solve the \\textit{same problem}, i.e., they have the same definition of communities. Other streaming algorithms introduced in section \\ref{relatedWork} have an \\textit{ad hoc} definition of communities introduced together with the method, and do not have the same properties, such as being deterministic and not being dependent on the network history. 
Their complexity is, in theory, similar to the one of OCPM (local updates at each modification).\n\n\\subsubsection{Generation of dynamic networks with community structure}\nWe propose a simple process to generate dynamic networks with realistic community structure. First, a static network is generated using the LFR benchmark \\cite{lancichinetti2009a}, the most used benchmark for community detection. Then, for this network, we generate a step by step evolution. In order to conserve the network properties (community structure, size, density), we define an \\textit{atomic modification} as the following process: \n\n\n\\begin{enumerate}\n\\item Choose randomly a planted community as provided by LFR\n\\item Select an existing edge in this community\n\\item Select a pair of nodes without edges in this community\n\\item Replace the selected existing edge by the selected not-existing one.\n\\end{enumerate}\n\nWe define a step of evolution as the combination of $a$ atomic modifications. In order to test the influence of the number of modifications between steps, we test different values of $a$.\n\nNote that we use synthetic networks instead of real networks at this step since: \n\\begin{itemize}\n\t\\item We are only interested in measuring time complexity of algorithms. Synthetic networks are mostly criticized for having unrealistic community structures, while here we are mainly interested in the size and rate of evolution of the networks.\n\t\\item It allows controlled experiments. With real evolving networks, changes in the structure\/size of the network could affect computation time at each step, and we could not control the number of modifications between snapshots, or vary the size of networks while keeping constant properties.\n\\end{itemize}\n\n\\subsubsection{Experimental process}\n\nThe LFR benchmark \\cite{lancichinetti2009a} is, as of today, the most widely used benchmark to evaluate community detection methods. It is known to generate realistic networks with heterogeneous degrees and community sizes.\n\nIt has the following parameters : $N$ is the network size, $k$ is the average degree of nodes, $kmax$ the maximum degree, $t1$ and $t2$ are power-law distribution coefficients for the degree of nodes and the size of community respectively, $\\mu$ is the mixing parameter which represents the ratio between the external degree of the node with respect to its community and the total degree of the node, $minc$ and $maxc$ are the minimum and maximum community size respectively, $On$ is the number of overlapping nodes , $Om$ is the number of community memberships of each overlapping node. \n\nIn order to obtain realistic networks, we first generate an original network with $n$ nodes using the LFR benchmark, with fix parameters $k=7$, $maxk=15$, and $\\mu=0.4$. Other parameters stay at their default values. In order to test the influence of the network size, we test different values of $n$.\n\n\n\\begin{figure} [h!]\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{courbe1.pdf}\n\\end{center}\n\\caption{Evolution of time complexity when varying the size of the network (number of nodes), and keeping other parameters constant (average node degree, community, size, etc.). DyCPM complexity increases exponentially with the size of the network, while OLCPM one stays constant or slightly decreases. 
Running times are normalized to the 50-node case, i.e., 10 on the vertical axis means 10 times slower than with 50 nodes.}\n\\label{fig:time1}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{courbe2.pdf}\n\\end{center}\n\\caption{Evolution of the running time when varying the number of atomic changes per step. DyCPM's running time is independent of $a$, while OLCPM's increases linearly with $a$.}\n\\label{fig:time2}\n\\end{figure}\n\nAs can be seen in figures \\ref{fig:time1} and \\ref{fig:time2}, the complexity of both algorithms depends on very different parameters. With OLCPM, the time needed to update communities after a modification step does not increase proportionally to the size of the network at any given time, but increases linearly with the number of atomic modifications. \n\nBy contrast, the complexity of DyCPM depends on the properties of the static network, but not on the number of atomic modifications between steps.\n\nAs expected, OLCPM is appropriate for dealing with stream graphs, in which modifications are known at a fine granularity, as the cost of each update is low. By contrast, DyCPM is appropriate for dealing with network snapshots, i.e., a dynamic network composed of a few observations collected at regular intervals.\n\n\n\\subsection{Measuring OLCPM communities quality}\n\nTo quantify the quality of the communities detected by the OLCPM framework, we used both synthetic and real-world networks with a ground truth community structure. We remind the reader that communities found by DyCPM and OCPM are identical; the difference lies only in the label propagation post-process of OLCPM.\n\nNormalized Mutual Information (NMI) is used as the measurement criterion. This measure is borrowed from information theory \\cite{danon2005} and widely adopted for evaluating community detection algorithms. It measures the similarity between a ground truth partition and the one delivered by an algorithm. As the original definition is only well defined for \\textit{partitions} (each node belongs to one and only one community), a variant of the NMI adapted for \\textit{covers} (nodes can belong to zero, one or more communities) has been introduced in \\cite{lancichinetti2009b}. This variant is the most used in the literature for comparing overlapping communities. We used the original implementation by the authors \\footnote{\\url{https:\/\/sites.google.com\/site\/andrealancichinetti\/software}}. The NMI value lies between 0 and 1, with a higher value meaning higher similarity.\n\n\n\\subsubsection{Static Synthetic networks}\n\nWe use the LFR benchmark \\cite{lancichinetti2009a} to generate realistic artificial networks. \n\nWe use two different network sizes, \\textit{small networks} (1000 nodes) and \\textit{large networks} (5000 nodes), and for a given size we use two ranges of community size: \\textit{small communities}, having between $10$ and $50$ nodes, and \\textit{large communities}, having between $20$ and $100$ nodes. We generate eight groups of LFR networks.\n\nIn the first four groups, $\\mu$ ranges from $0$ to $0.5$ (steps of $0.1$) while $On$ is set to $100$ for small networks and $500$ for large networks ($5000$ nodes). In the other groups, $\\mu$ is fixed to $0.1$ and $On$ ranges from $0$ to $500$ (steps of $100$) for small networks and from $0$ to $2000$ (steps of $500$) for large networks. All these networks share the common parameters $k = 10$, $maxk = 30$, $t1 = 2$, $t2 = 1$, and $Om = 2$.
The parameter settings are shown in table \\ref{tab:LFRParm}. \n\n\\begin{table} [!h] \n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|}\n\n \\hline\n \\textbf{Network group ID} & \\textbf{N} & \\textbf{minc} &\t\\textbf{maxc} & \\textbf{$\\mu$} & \\textbf{On}\\\\ \n \\hline\n N1 & 1000 & 10 & 50 & 0-0.5 & 100 \\\\ \n \\hline\n N2 & 1000 & 20 & 100 & 0-0.5 & 100 \\\\ \n \\hline\n N3 & 5000 & 10 & 50 & 0-0.5 & 500 \\\\ \n \\hline\n N4 & 5000 & 20 & 100 & 0-0.5 & 500 \\\\ \n \\hline\n N5 & 1000 & 10 & 50 & 0.1 & 0-500 \\\\ \n \\hline\n N6 & 1000 & 20 & 100 & 0.1 & 0-500 \\\\ \n \\hline\n N7 & 5000 & 10 & 50 & 0.1 & 0-2000 \\\\ \n \\hline\n N8 & 5000 & 20 & 100 & 0.1 & 0-2000 \\\\ \n \\hline\n\n\\end{tabular}\n\\caption{LFR parameter settings}\\label{tab:LFRParm}\n\n\\end{table}\n\n\nCPM and OLCPM are run with $k=4$. The NMI values of the communities detected by CPM and OLCPM are depicted in figure \\ref{fig:LFRRes}. Note that the communities found by CPM and OCPM are identical; the observed differences are therefore only due to the post-process.\n\n\n\\begin{figure*}[!h]\n\\centering \n\\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N1.pdf}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N3.pdf}\n \\end{subfigure}\n \n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N2.pdf}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N4.pdf}\n \\end{subfigure}\n \n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N5.pdf}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N7.pdf}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N6.pdf}\n \\end{subfigure}\n \\begin{subfigure}{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{N8.pdf}\n \\end{subfigure}\n\\caption{Performance of CPM and OLCPM for $k=4$ on the LFR benchmark networks. The plots show the NMI scores as a function of the mixing parameter $\\mu$ (upper half plots) and of the number of overlapping nodes $On$ (lower half plots) for different network sizes (small networks in the left-hand plots and large networks in the right-hand plots) and different community sizes (from $10$ to $50$ nodes for $(S)$ and from $20$ to $100$ nodes for $(B)$).}\n\\label{fig:LFRRes}\n\\end{figure*}\n\n\n\n\nIn most cases, OLCPM achieves the best results, except for two cases: (1) when the community structure becomes very fuzzy ($On \\geq 400$ for small networks or $On \\geq 1500$ for large networks) or (2) when the value of $\\mu$ is large (greater than $0.3$). In these cases, OLCPM performs similarly to or slightly worse than CPM. When the community structure becomes too fuzzy for CPM, the irrelevant core-communities it provides are probably further degraded by the post-process. \n\nWe can therefore consider that, in situations in which CPM finds meaningful communities in a network, the proposed post-process improves the solution. \n\n\\subsubsection{Dynamic Real-world networks}\n\nIn order to evaluate the community detection results of our OLCPM framework on real temporal networks, we leverage a high-resolution time-varying network describing contact patterns among high school students in Marseilles, France \\cite{fournet2014}. The dataset was collected by the SocioPatterns collaboration using wearable sensors able to capture proximity between the individuals wearing them.
The dataset was gathered during nine days (Monday to Tuesday) in November 2012. Data collection involved 180 students from five classes. Proximity relations are detected over 20-second intervals. Data collection involved students' classes corresponding to different specializations: 'MP' classes focus more on mathematics and physics, 'PC' classes on physics and chemistry, and 'PSI' classes on engineering studies. These classes represent the expected ground truth community structure.\n\nWe construct a dynamic network composed of 216 snapshots, each corresponding to 1 hour of data. Nodes correspond to students, and there is an edge between two nodes in a snapshot if the corresponding students have been observed in interaction at least once during the corresponding period. (Please refer to the original article \\cite{fournet2014} for details about the meaning of \\textit{interaction}. To sum up, two students are in interaction if they stand face-to-face at a distance between 1 and 1.5 meters.)\n\n\nWe compute the communities at each step using both DyCPM and OLCPM (Communities yielded by DyCPM and OCPM are identical). Then, for each snapshot, we compute the NMI according to \\cite{lancichinetti2009b}.\nResults are displayed in Figure \\ref{fig:NMI}. We show results for k=3 and k=4, which yield the best results.\n\nThe average NMI over all snapshots is provided in Table \\ref{tab:ANMI}.\n\\begin{table} [!h] \n \\begin{center}\n\\begin{tabular}{|c||c|c|c|c|}\n \\hline\n \\textbf{Algorithm} & DyCPM k=3 & DyCPM k=4 & OLCPM k=3 & OLCPM k=4\\\\ \n \\hline\n \\textbf{ Average NMI} & 0.024\t & 0.004 & 0.059 & 0.044 \\\\ \n \\hline\n\\end{tabular} \\\\ \n \n \\caption{Average NMI scores of OLCPM and DyCPM \\cite{palla2007} for $k=3$ and $k=4$ on SocioPatterns collaboration networks \\cite{fournet2014}.}\\label{tab:ANMI}\n \\end{center}\n\\end{table}\n\n\\begin{figure*}[!h]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{imageRN3.pdf}\n\\end{center}\n\\caption{NMI values of OLCPM and CPM \\cite{palla2005} for $k=3$ and $k=4$ in on SocioPatterns collaboration networks \\cite{fournet2014}. }\n\\label{fig:NMI}\n\\end{figure*}\n\nWe can observe that the average NMI of OLCPM is higher than the original DyCPM, and that values of NMI are also higher for most snapshots.\n\nThe longitudinal visualization of Figure \\ref{fig:NMI} illustrates the relevance of studying the evolution of a network with a fine granularity: only looking at this plot, we can see that the class structure is not always present in the data. For instance, we can observe that there is no community structure during evenings and weekends, or that the community structure is less observable during several days around lunchtime (Thursday, Friday, second Monday). One can then look in more details to the communities found and their evolution to interpret these observations.\nIn this example, we were able to run DyCPM because of the small size of the network, the restriction to one-hour interval, and the limitation to 9 days of data, but, as shown previously, it would not be possible to extend this analysis to a much larger number of steps due to the increase in complexity.\n\n\n\\section{Conclusion}\n\nIn this paper, we proposed OLCPM framework to discover overlapping and evolving communities in social networks. We proposed OCPM, an online version of CPM \\cite{palla2005}, working on a fully dynamic network model, i.e., described as flows of events, where nodes or edges can be added or removed over time. 
Instead of calculating all $k$-cliques for the whole network at each event occurring in the network, our method updates only the community structure alongside the node or the edge involved in the event. This local update of the community structure provides a significant improvement in computational time.\n\nTo cope with the covering problem of CPM, nodes belonging to OCPM communities are considered as core nodes and we proposed a post-process based on label propagation to discover peripheral nodes.\n\nThe experimental results of our framework in both artificial and real-world networks show good performance in both computing time and quality detection.\n\nOur method has some drawbacks, some of which are related to CPM itself, like the dependency of the parameter $k$ (clique size). We intend to propose a heuristic for finding appropriate values of $k$. \n\nCurrently, the post-process is run from scratch at each step, and although it is not as costly as a clique-finding problem, running it at each step for a large network can become very costly. For future research, it would be interesting to extend OLCPM by developing an online version of the post-process.\n \n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe method of detecting extrasolar planets by \n\\textit{direct imaging}, even in its current early stage, fills in an important gap in our knowledge of the \ndiversity of planetary systems around nearby stars. Direct imaging searches with the best \nconventional AO systems (e.g. Keck\/NIRC2, VLT\/NaCo, Subaru\/HiCIAO) are sensitive to very massive planets \n($M$ $\\gtrsim$ 5--10 $M_{J}$) at wide separation ($a$ $\\sim$ 10-30 $AU$ \nto 100 $AU$) and young ages ($t$ $\\lesssim$ 100 Myr), which are not detectable by the radial \nvelocity and transit methods \\citep[e.g.][]{Lafreniere2007b,Vigan2012,Rameau2013,Galicher2013}.\nPlanets with these masses and orbital separations pose a stiff challenge to planet formation theories \n \\citep[e.g.][]{Kratter2010,Rafikov2011}.\nYoung self-luminous directly-imageable planets provide a critical \nprobe of planet atmospheric evolution \\citep{Fortney2008,Currie2011a,Spiegel2012,Konopacky2013}.\n\nThe directly-imaged planet around the nearby star $\\beta$ Pictoris ($\\beta$ Pictoris b) is a particularly \nclear, crucial test for understanding the formation and atmospheric evolution of gas giant planets \n\\citep{Lagrange2009,Lagrange2010}. \nAt 12$^{+8}_{-4}$ Myr old \\citep{Zuckerman2001}, the $\\beta$ Pictoris system provides a way to probe\n planet atmospheric properties only $\\approx$ 5--10 Myr after \nthe disks from which planets form dissipate \\citep[$\\approx$ 3--10 Myr, e.g.][]{Pascucci2006,Currie2009}. 
\nSimilar to the case for the HR 8799 planets \\citep{Marois2011,Fabrycky2010,Currie2011a,SudolHaghighipour2012}, \n$\\beta$ Pic b's mass can be constrained without \ndepending on highly-uncertain planet cooling models: in this case, RV-derived \ndynamical mass upper limits when coupled with the range of plausible orbits ($a$ $\\sim$ 8--10 AU)\n imply masses less than $\\sim$ 10--15 $M_{J}$ \\citep{Lagrange2012a,Currie2011b,Chauvin2012,Bonnefoy2013}, \na mass range consistent with estimates derived from the planet's interaction with the secondary \ndisk \\citep{Lagrange2009,Dawson2011}.\n\nFurthermore, while other likely\/candidate planets such as Fomalhaut b and LkCa 15 b are probably made detectable by\ncircumplanetary emission in some poorly constrained geometery \\citep{Currie2012a,Kraus2012}, $\\beta$ Pic b's emission \nappears to be consistent with that from a self-luminous planet's atmosphere \\citep{Currie2011b,Bonnefoy2013}.\nOther objects of comparable mass appear to have formed more like low-mass binary companions.\nThus, combined with the planets HR 8799 bcde, $\\beta$ Pic b provides a crucial reference point with \nwhich to interpret the properties of many soon-to-be imaged planets with upcoming extreme AO systems like \n$GPI$, $SCExAO$, and $SPHERE$ \\citep{Macintosh2008,Martinache2009,Beuzit2008}.\n\nHowever, investigations into $\\beta$ Pic b's atmosphere are still in an early stage compared to\n those for the atmospheres of the HR 8799 planets and other very low-mass, young substellar \nobjects \\citep[e.g.][]{Currie2011a,Skemer2011,Konopacky2013,Bailey2013}. Of the current \npublished photometry, only $K_{s}$ (2.18 $\\mu m$) and $L^\\prime$ (3.78 $\\mu m$) have photometric \nerrors smaller than $\\sim$ 0.1 mag \\citep{Bonnefoy2011,Currie2011b}. Other high SNR detections such \nas at $M^\\prime$ were obtained without reliable flux calibration \n\\citep{Currie2011b} or with additional, large photometric uncertainties due to processing \\citep{Bonnefoy2013}. \n As a result, the best-fit models admit a wide range of temperatures, surface gravities, and cloud structures \n\\citep[e.g.][]{Currie2011b}. Thus, new higher signal-to-noise\/precision and flux-calibrated photometry at \n1--5 $\\mu m$ should provide a clearer picture of the clouds, chemistry, temperature, and gravity of \n$\\beta$ Pic b. Moreover, new near-to-mid IR data may identify distinguishing characteristics of $\\beta$ Pic b's atmosphere, \nmuch like clouds and non-equilibrium carbon chemistry for HR 8799 bcde \\citep{Currie2011a,Galicher2011,Skemer2012,Konopacky2013}.\n\nIn this study, we present new 1.5--5 $\\mu m$ observations for $\\beta$ Pic b obtained with $NaCo$ on the \n\\textit{Very Large Telescope} and $NICI$ on \\textit{Gemini-South}. We extract the first detection at the 3.09 $\\mu m$ water-ice filter; the first \nhigh signal-noise, well calibrated H, [4.05], and $M^\\prime$ detections; and higher \nsignal-to-noise detections at $K_{s}$ and $L^\\prime$ (2.18 and 3.8 $\\mu m$). 
To our new data, we \nadd rereduced $\\beta$ Pic data obtained in $J$ (1.25 $\\mu m$) and $H$ (1.65 $\\mu m$) bands and first presented in \n\\citet{Bonnefoy2013}, recovering $\\beta$ Pic b at a slightly higher signal-to-noise and deriving \nits photometry with smaller errors.\n\nWe compare the colors derived from broadband photometry to that for field substellar objects with a range of spectral types to \nassess whether $\\beta$ Pic b's colors appear anomalous\/redder than the field sequence like those for planets around HR 8799 \nand $\\kappa$ And; planet-mass companions like 2M 1207 B, GSC 06214 B, and 1RXJ 1609 B \\citep{Chauvin2004,IrelandKraus2011,Lafreniere2008a}; \nand other substellar objects \nlike Luhman 16B \\citep{Luhman2013}. We use atmosphere modeling to constrain the range of temperatures, \nsurface gravities, and cloud structures plausible for the planet. While previous studies have shown the importance \nof clouds and non-equilibrium carbon chemistry in fitting the spectra\/photometry of directly-imaged planets \n\\citep{Bowler2010,Currie2011a,Madhusudhan2011,Galicher2011,Skemer2012,Konopacky2013}, here the assumed sizes of dust \nparticles entrained in the clouds plays a critical role.\n \n\\section{Observations and Data Reduction}\n\\subsection{VLT\/NaCo Data and Basic Processing}\nWe observed $\\beta$ Pictoris under photometric conditions on 14 December to 17 December 2012 with the NAOS-CONICA instrument \\citep[NaCo;][]{Rousset2003} \non the \\textit{Very Large Telescope} UT4\/Yepun at Paranal Observatory (Program ID 090.C-0396). \nAll data were taken in pupil-tracking\/angular differential imaging \\citep{Marois2006} and data cube mode. \nTable \\ref{bpiclog} summarizes the basic properties of these observations. Our full complement of data during the run includes \nimaging at 1.04 $\\mu m$, 2.12 $\\mu m$, $K_{s}$\/2.18 $\\mu m$, 2.32 $\\mu m$, 3.74 $\\mu m$, $L^\\prime$\/3.78 $\\mu m$, \nBr-$\\alpha$\/4.05 $\\mu m$, and $M^\\prime$. Here, we focus only on the $L^\\prime$, [4.05], and $M^\\prime$ data, \ndeferring the rest to a later study. Each observation was centered on $\\beta$ Pictoris's transit for a total \nfield rotation of $\\sim$ 50--70 degrees and a total observing times ranging between $\\sim$ 30 minutes and 59 minutes.\n\nTo these new observations, we rereduce $J$-band and $H$-band data first presented in \\citet{Bonnefoy2013} and \ntaken on 16 December 2011 and 11 January 2012, respectively.\nThe saturated $J$ band science images are bracketed by two sequences of unsaturated images obtained in neutral density filter \nfor flux calibration. While there were additional frames taken but not analyzed in \\citeauthor{Bonnefoy2013}, we found these \nto be of significantly poorer quality and thus do not consider them here. In total, the $J$-band data we consider \ncovers 40 minutes of integration time and $\\sim$ 23$^{o}$ of field rotation. The $H$-band data cover $\\sim$ 92 minutes \nof integration time and $\\sim$ 36$^{o}$ of field rotation.\n \nBasic NaCo image processing steps were performed as in \\citet{Currie2010,Currie2011b}.\nThe thermal IR data at $L^\\prime$ and [4.05] ($M^\\prime$) were obtained in a dither pattern with offsets every 2 (1) images to remove \nthe sky background. \nAs all data were obtained in data cube mode, we increased our PSF quality by realigning each individual exposure in the \ncube to a common center position and clipping out frames with low encircled energy (i.e. 
those with a \ncore\/halo ratio $<$ max(core\/halo) - 3$\\times$$\\sigma$(core-to-halo ratio)).\n\n\\subsection{Gemini\/NICI Data and Basic Processing}\nWe obtained Gemini imaging for $\\beta$ Pic b using the Near-Infrared Coronagraphic Imager (NICI) on 23 December 2012 and \n26 December 2012 in the \nH$_{2}$O filter ($\\lambda_{o}$ = 3.09 $\\mu m$) and 9 January 2013 in the $H$ and $K_{s}$ filters (dual-channel \nimaging), both under photometric conditions (Program GS-2012B-Q-40). \nThese observations were also executed in \\textit{angular differential imaging} mode.\nFor the $H_{2}$$O$ data, we dithered each 38 s exposure for sky subtraction for a total of $\\sim$ 38 minutes of integration \ntime over a field rotation of $\\sim$ 30 degrees.\nFor the $H\/K_{s}$ data, we placed the star behind the $r$ = 0\\farcs{}22 partially-transmissive coronagraphic \nmask to suppress the stellar halo. Here, we took shorter exposures of $\\beta$ Pic ($t_{int}$ $\\sim$ 11.4 s) to better \nidentify and filter out frames with bad AO correction. Our observing sequence consists of $\\sim$ 22 minutes of usable data \ncentered on transit with a field rotation of $\\sim$ 41 degrees.\n\nBasic image processing follows steps described above for NaCo data. The PSF halo was saturated \nout to $r$ $\\sim$ 0\\farcs{}32--0\\farcs{}36 in $H$ during most of the observations and our sequence suffered periodic seeing bubbles that \nsaturated the halo out to angular separations overlapping with the $\\beta$ Pic b PSF. Thus, we focus on reducing only \nthose $H$-band frames with less severe halo saturation ($r_{sat}$ $<$ 0\\farcs{}36). The $K_{s}$ observations, obtained \nat a higher Strehl ratio, never suffered halo saturation. The first of the two $H_{2}0$ sets, suffered from severe \nperiodic seeing bubbles and thus generally poor AO performance. We identify and remove from \nanalysis frames whose halo flux exceeded the $F_{min}$+3$\\sigma$, where $F_{min}$ is the minimum flux within an \naperture covering $\\beta$ Pic b and $\\sigma$ is the dispersion in this flux: about 10-25\\% of the \nframes, depending on the data set in question. \n\n\\subsection{PSF Subtraction}\nTo remove the noisy stellar halo and reveal $\\beta$ Pic b, we process the data with \nour ``adaptive\" LOCI (A-LOCI) pipeline \\citep[][ T. Currie 2013 in prep.]{Currie2012a,Currie2012b}.\nThis approach adopts ``locally optimized \ncombination of images\" (LOCI) formalism \\citep{Lafreniere2007}, where we perform PSF subtraction in small annular regions (the ``subtraction zone\")\nat a time over each image. \nPreviously-described A-LOCI components we use here include ``subtraction zone centering\" \n\\citep{Currie2012b}; ``speckle filtering\" to identify and remove images with \nnoise structure poorly correlated with that from the science image we are wanting to subtract \\citep{Currie2012b}; \na moving pixel mask to increase point source throughput and normalize it as a function of azimuthal angle \\citep{Currie2012a}.\nWe do not consider a PSF reference library \\citep{Currie2012a} since $\\beta$ Pictoris is our only target.\n\nInto A-LOCI as recently utilized in \\citet{Currie2012a}, we incorporate a component different from but complementary \nto our ``speckle filtering\", using \\textit{singular value decomposition} (SVD) to limit the number of images used \nin a given annular region (i.e. for a given optimization zone) to construct and subtract a reference. 
\nBriefly, in the (A-)LOCI formalism a matrix inversion yields the set of coefficients $c^{k}$ applied to each image making up the \nreference ``image\": \\textbf{$c^{k}$} = \\textbf{A$^{-1}$}\\textbf{b}. Here, \\textbf{A} is the covariance matrix \nand \\textbf{b} is a column matrix defined from $i$ pixels in the ``optimization zones\" of the $j$-th reference image section \n\\textit{O$^{j}$} and the science image, \n\\textit{O$^{T}$}: \\textit{b$_{j}$} = $\\sum\\limits_{i}$ \\textit{O$^{j}_{i}$}\\textit{O$^{T}_{i}$} \\citep[see][]{Lafreniere2007}.\nIn previous versions of our code, we used a simple double-precision \nmatrix inversion to invert the covariance matrix and then solved for $c^{k}$ after multiplying by \\textbf{b}. \n\nIn this work, we instead use SVD to rewrite \\textbf{A} as \\textbf{U$\\Sigma$V$^{T}$} such that \n\\textbf{A$^{-1}$} = \\textbf{V$\\Sigma^{-1}$U$^{T}$}, where the \\textit{T} superscript \nstands for the transpose of the matrix. Prior to inversion, we truncate the number \nof singular values at a predefined cutoff, $svd_{lim}$. This eigenvalue truncation is very \nsimilar to and functions the same as the truncation of principal components, $N_{pca}$, in the \nKarhunen-Loeve image projection (KLIP) \\citep{Soummer2012} and has been \nsuccessfully incorporated before \\citep{Marois2010b}. \nWe found that both speckle filtering and SVD truncation within our \nformalism can yield significant contrast gains over LOCI and KLIP\/Principal Component Analysis (PCA), although \nin this study at the angular separation of $\\beta$ Pic b ($\\approx$ 0\\farcs{}45) the gains over LOCI are typically about a \nfactor of 1.5, albeit with substantially higher throughput\\footnote{Recently, \\citet{Amara2012} claimed a contrast gain of\n $\\sim$ 5$\\times$ over LOCI using PCA.\nHowever, optimal set-ups even \\textit{within} a given formalism like LOCI or PCA\/KLIP are very dataset-specific \n\\citep[cf.][]{Lafreniere2007,Currie2012a,Currie2012b}.\n With LOCI, we obtained roughly equivalent SNRs for $\\beta$ Pic b from data taken during \nthe same observing run but on a night with poorer observing conditions (29 December 2009) than their test data set \\citep{Currie2011b}. \nImplementing some A-LOCI filtering and pixel masking yields SNR $\\approx$ 30--35.}.\n\n\\subsection{Planet Detections}\nFigures \\ref{niciimages}, \\ref{jhnacoimages}, and \\ref{midirnacoimages} display reduced NaCo and NICI images \nof $\\beta$ Pic. We detect $\\beta$ Pic b in all datasets (summarized in Table \\ref{bpicphot}). To compute the signal-to-noise ratio (SNR) for $\\beta$ Pic b, \nwe determine the dispersion, $\\sigma$, in pixel values of our final image convolved with a Gaussian along a ring with a width of \n1 FWHM at the same angular separation as $\\beta$ Pic b but excluding the planet \\citep[e.g.][]{Thalmann2009}, and average the \nSNR\/pixel over the aperture area. For the Gemini-NICI $H$, $K_{s}$, and two [3.1] datasets, the SNRs are thus 6.4, 11, 4.6, and 10, respectively.\n For the $J$ and $H$-band NaCo data \npreviously presented in \\citet{Bonnefoy2013}, we achieve SNR $\\sim$ 9 and SNR $\\sim$ 30, respectively.
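\n\nAs a concrete illustration of the truncated-SVD coefficient solve described above, the short numpy sketch below shows how the cutoff enters the inversion. This is schematic code, not the actual A-LOCI pipeline; in particular, the array layout and the treatment of $svd_{lim}$ as a relative threshold on the singular values are our assumptions.\n\\begin{verbatim}
import numpy as np

def aloci_coefficients(O_ref, O_sci, svd_lim):
    # O_ref: (n_images, n_pixels) optimization-zone pixels of the
    #        reference frames; O_sci: (n_pixels,) same zone in the
    #        science frame; svd_lim: singular-value truncation cutoff.
    A = O_ref @ O_ref.T              # covariance matrix A_jk
    b = O_ref @ O_sci                # b_j = sum_i O^j_i O^T_i
    U, s, Vt = np.linalg.svd(A)      # A = U Sigma V^T
    keep = s > svd_lim * s.max()     # truncate small singular values
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return (Vt.T * s_inv) @ (U.T @ b)   # c = V Sigma^{-1} U^T b
\\end{verbatim}
Setting the cutoff to zero recovers the plain matrix inverse, while larger values discard the most poorly constrained directions of the covariance matrix before the coefficients are computed.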
Generally speaking, our 3.8--5 $\\mu m$ \nNaCo data are deeper than the near-IR NaCo and especially near-IR NICI data, where we detect $\\beta$ Pic b at SNR = 40 \nin $L^\\prime$ and 22 at $M^\\prime$, roughly a factor of two higher than previously reported \\citep{Currie2011b,Bonnefoy2013}, \ngains due to $\\beta$ Pic b now being at a wider projected separation ($L^\\prime$) or post-processing and slightly better observing conditions \n($M^\\prime$). The high SNR detections obtained with NaCo also leverage on recent engineering upgrades that substantially \nimproved the instrument's image quality and the stability of its PSF \\citep{Girard2012}.\n\nThe optimal A-LOCI algorithm parameters vary significantly from dataset to dataset. The rotation gap ($\\Delta$PA in units of the image \nfull-width half maximum) criterion used to produce most of the images is $\\delta$ $\\sim$ 0.6--0.65, although it is \nsignificantly larger for the $J$ and $H$ data sets ($\\delta$ = 0.75--0.95). Generally speaking, the optimization areas we use $N_{A}$ are \nsignificantly smaller ($N_{A}$ = 50-150) than those typically adopted \\citep[i.e. $N_{A}$=300;][]{Lafreniere2007}. We speculate that \nthe pixel masking component of A-LOCI drives the optimal $N_{A}$ settings toward these smaller values since the planet flux \n(ostensibly within the subtraction zone) no longer significantly biases the coefficient determinations to the point of reducing \nthe planet's SNR. Filtering parameters \n$r_{corr}$ and $svd_{lim}$ likewise vary wildly from $r_{corr}$ = 0 and $svd_{lim}$ = 2.5$\\times$10$^{-7}$ at $J$ to $r_{corr}$ = 0.9 for the NICI \n$H$-band data or $svd_{lim}$ = 2.5$\\times$10$^{-2}$ for the $M^\\prime$ NaCo data. \n\nWhile the many algorithm free parameters make finding an optimal \ncombination difficult and computationally expensive, our final image quality is nevertheless \\textit{extremely} sensitive to some \nvalues, in particular $svd_{lim}$ and $r_{corr}$. As a test, we explored other image processing methods -- ADI-based \nclassical PSF subtraction and LOCI. While A-LOCI always yields deeper contrasts, we easily detect $\\beta$ Pic b in the mid-IR NaCo data \nusing any method and only the poorer of the two [3.1] data sets requires A-LOCI to yield a better than 4-$\\sigma$ detection (i.e. where $\\sigma_{det}$ = \n1.0857\/SNR = 0.27 mags).\nWe will present a detailed analysis of image processing methods and algorithm parameters in an upcoming study (T. Currie, 2013 in prep.).\n\nAdopting the pixel scales listed in Table \\ref{bpiclog}, $\\beta$ Pic b is detected at an angular separation of $r$ $\\sim$ 0\\farcs{}46 in each data set.\nThe position angle of $\\beta$ Pic b is consistent with previously-listed values (PA $\\approx$ 210$^{o}$) and in between\n values for the main disk and the warp, intermediate between the results presented in \\citet{Currie2011b} and \\citet{Lagrange2012b}. \nWhile the NICI north position angle on the detector is precisely known and determined from facilty observations, \nwe have not yet used our astrometric standard observations to derive the NaCo position angle offset,\nwhich changes every time NaCo is removed from the telescope.\nTo dissuade others from using the poorly calibrated NaCo data and precisely calibrated data \\citep{Lagrange2012b} together, \nwe reserve a detailed determination of $\\beta$ Pic b's astrometry and a study of its orbit for a future study. 
\n We also detect the $\\beta$ Pic debris disk in each new broadband data set and at [4.05] (Figure \\ref{diskimage}). \nWe will analyze its properties at a later time as well.\n\n\\subsection{Planet Photometry}\nTo derive $\\beta$ Pic b photometry, we first measured its brightness within an aperture roughly equal to the image FWHM in each case, \nwhich was known since we either had AO-corrected standard star observations (NICI $H$, $K_{s}$, and [3.1]), unsaturated \nimages of the primary as seen through the coronagraphic mask (NICI $K_{s}$), unsaturated neutral density filter observations (NaCo $J$, $H$, \n$L^\\prime$, and $M^\\prime$), or unsaturated images of the primary (NaCo $L^\\prime$ and [4.05]). We assessed and corrected for \nplanet throughput losses due to processing by comparing the flux of synthetic point sources within this aperture \nimplanted into registered images at the same angular separation as\n$\\beta$ Pic b before and after processing. To derive $\\beta$ Pic b's throughput and uncertainty in the throughput ($\\sigma_{atten}$), we \nrepeat these measurements at 15 different position angles and adopt the clipped mean of the throughput as our throughput\nand standard deviation of this mean as its uncertainty. The planet throughput ranges from 0.38 for the $J$-band data to 0.82 for the [4.05] \ndata and 0.96 for the NICI $H$-band data, even with aggressive algorithm parameters (i.e. $\\delta$ $\\sim$ 0.6), due to the throughput gains yielded \nby our pixel masking and the SVD cutoff. \n\nFor photometric calibration, we followed several different approaches. For the NICI data, we used TYC 7594-1689-1 and HD 38921 as photometric standards. \nWe were only able to obtain photometric calibrations for the first of the two [3.1] datasets.\nFor all other data we used the primary star, $\\beta$ Pic, for flux calibration adopting the measurements listed in \\citet{Bonnefoy2013}. \nFor the $J$ and $H$ NaCo data, we used images of the primary as viewed through the neutral density filter. For the $M^\\prime$ \nNaCo data, we obtained neutral density filter observations \\textit{and} very short exposures. While the latter were close to \nsaturation and were probably in the non-linear regime, the implied photometry for $\\beta$ Pic was consistent to within errors. The primary was unsaturated \nin the [4.05]. Finally, for the $L^\\prime$ data, we took 8.372 ms unsaturated images of $\\beta$ Pic for flux calibration. In all cases, \nwe again adopt the clipped mean of individual measurements as our photometric calibration uncertainty, $\\sigma_{fluxcal}$. To compute the \nphotometric uncertainty for each data set, we considered the SNR of our detection, the uncertainty in the planet throughput, and the uncertainty \nin absolute flux calibration: $\\sigma$ = $\\sqrt{\\sigma_{det}^{2}+\\sigma_{atten}^{2}+\\sigma_{fluxcal}^{2}}$.\n\nTable \\ref{bpicphot} reports our photometry and Table \\ref{bpicphoterror} lists sample error budgets for two NICI photometric measurements and \ntwo NaCo measurements. The relative contributions from each source of photometric uncertainty to the total uncertainty are \nrepresentative of our combined data set. For the [3.09] data, residual speckle noise\/sky fluctuations greatly \nlimit the planet's SNR and thus $\\sigma_{det}$ is the primary source of photometric uncertainty. \nFor the $K_{s}$ data, the intrinsic SNR and the two other sources of photometric uncertainty contribute in \na more equal proportion. 
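\n\nFor clarity, the quadrature sum above can be evaluated as in the short sketch below; the numerical values are placeholders chosen only to illustrate the calculation, not the entries of Table \\ref{bpicphoterror}.\n\\begin{verbatim}
import math

def photometric_error(snr, sigma_atten, sigma_fluxcal):
    # combine the three error terms in quadrature (all in magnitudes)
    sigma_det = 1.0857 / snr     # detection term, as defined earlier
    return math.sqrt(sigma_det**2 + sigma_atten**2 + sigma_fluxcal**2)

# hypothetical example: an SNR = 20 detection with 0.05 mag throughput
# and 0.04 mag flux-calibration uncertainties
print(round(photometric_error(20.0, 0.05, 0.04), 3))   # 0.084 mag
\\end{verbatim}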
The $L^\\prime$ and $M^\\prime$ data error budgets are characteristic of most of \nour other data, where the photometric uncertainty is primarily due to the absolute photometric calibration \nand the throughput correction. With the exception of the [3.09] NICI data, the intrinsic SNR of the detection does not \ndominate the error budget. \nThe throughput uncertainty was small ($\\approx$ 5\\%) for the best-quality (mid-IR NaCo) data and was \nnever larger than 15\\% ($J$-band data) in any data set\\footnote{In principle, tuning the algorithm parameters to maximize the SNR of $\\beta$ Pic b could \nintroduce additional photometric uncertainties if the planet is in significant residual speckle contamination. In such a case, the \nalgorithm parameters maximizing the SNR could instead be the set that maximizes the residual speckle contamination within \nthe planet aperture while minimizing it elsewhere, especially as the pixel masking technique normalizes the point source throughput \nbut not the noise as a function of azimuthal angle. However, we do not find substantial differences in \nthe derived photometry if we adopt a default set of algorithm parameters. Furthermore, the parameters maximizing the SNR are \nnever the ones maximizing the planet throughput, and our tuning is not just finding the parameter set \nmaking pixels within the planet aperture 'noisiest'. Adopting slightly different parameters from the 'optimized' case yields nearly identical photometry. \nMoreover, residual speckle contamination in most data sets is extremely low, and for the mid-IR data the intrinsic SNR is limited \nby sky background fluctuations in addition to speckles.}. \n\nIn general, we find fair agreement with previously published photometry, where \nour measurements are usually consistent within photometric errors with those reported previously (e.g. $m_{H}$= 13.32 $\\pm$ 0.14 and \n13.25 $\\pm$ 0.18 vs. 13.5 $\\pm$ 0.2 in Bonnefoy et al. 2013).\nOur $L^\\prime$ photometry is more consistent with \\citeauthor{Currie2011b}'s measurement of $m_{L^\\prime}$=9.73 $\\pm$ 0.06\nthan with that listed in \\citet{Bonnefoy2013} ($m_{L^\\prime}$=9.5 $\\pm$ 0.2), though it is nearly identical to that derived \nfor some $\\beta$ Pic b data sets listed in \\citet{Lagrange2010}. Our [4.05] photometry implies that $\\beta$ Pic b \nis $\\sim$ 15-20\\% brighter there than previously assumed \\citep{Quanz2010} and may have a slightly red $L^\\prime$-[4.05] color.\nThe major difference from previous studies, though, is that our photometric errors are consistently much smaller. For \nexample, the uncertainty in the [4.05] photometry is reduced to 0.08 mag from 0.23 mag due both to higher SNR detections \nand lower uncertainty in our derived photometry (e.g. throughput corrections).\nOur NICI photometry is also substantially less uncertain than that in \\citet{Boccaletti2013} because $\\beta$ Pic b is not occulted by the focal plane mask.\nThese lower uncertainties should allow more robust comparisons between $\\beta$ Pic b and other substellar objects and, from \nmodeling, more precise limits on the best-fitting planet atmosphere properties.\n\n\\section{Empirical Comparisons to $\\beta$ Pic b}\nOur new data allow us to compare the spectral energy distribution of $\\beta$ Pic b to \nthose of the many field L\/T-type brown dwarfs as well as to those of directly-imaged low-surface gravity, low-mass \nbrown dwarf companions and directly-imaged planets.
Our goal here is to place \n$\\beta$ Pic b within the general L\/T type spectral sequence, identify departures from this sequence \nsuch as those seen for low surface gravity objects like HR 8799 bcde, and identify the substellar \nobject(s) with the best-matched SED. Some bona fide directly-imaged planets like \nHR 8799 bcde and at least some of the lowest-mass brown dwarfs like 2M 1207 B appear redder\/cloudier \nthan their field dwarf counterparts at comparable temperatures ($T_{eff}$ $\\approx$ 900-1100 $K$). \nHowever, it is unclear whether hotter imaged exoplanets appear different from their (already cloudy) field \nL dwarf counterparts, and $\\beta$ Pic b provides a test of any such differences. We will use \nour comparisons to the L\/T dwarf sequence and the SEDs of other substellar objects to inform our atmosphere model \ncomparisons later to derive planet physical parameters (e.g. $T_{eff}$ and log(g)).\n\n\\subsection{Infrared Colors of $\\beta$ Pic b}\nTo compare the near-to-mid IR properties of $\\beta$ Pic b with those\nfor other cool, substellar objects, we primarily use the sample of L\/T dwarfs compiled by\n\\citet{Leggett2010}, which include field dwarfs spectral classes between $\\sim$ M7 and T5, \ncorresponding to a range of temperatures between $\\sim$ 2500 $K$ and 700 $K$.\nTo explore how the $\\beta$ Pic b SED compares to those with other directly-imaged planets\/planet candidates \nand very low-mass brown dwarf companions within this temperature range, we include objects listed in \nTable \\ref{photcomptable}. These include the directly-imaged planets around \nHR 8799 \\citep{Marois2008,Marois2011,Currie2011a} and the directly-imaged planet candidate around \n$\\kappa$ And \\citep{Carson2013}.\nAdditionally, we include high mass ratio brown dwarf companions with masses less than the \ndeuterium-burning limit ($\\sim$ 13--14 $M_{J}$) and higher-mass companions whose\nyouth likely favors a lower surface gravity than for field brown dwarfs, a difference that affect the \nobjects' spectra \\citep[e.g.][]{Luhman2007}. Among these objects are \n1RXJ 1609B, AB Pic B, and Luhman 16 B \\citep{Lafreniere2008a,Chauvin2005,Luhman2013}.\nTable \\ref{photcomptable2} compiles photometry for all of these low surface gravity objects.\n\nFigure \\ref{colcol} compares the IR colors of $\\beta$ Pic b (dark blue diamonds) to those for field M dwarfs (small black \ndots), field L0--L5 dwarfs (grey dots), field L5.1-L9 dwarfs (asterisks), T dwarfs (small light-grey dots), \nand planets\/low-mass young brown dwarfs (light-blue squares). \nThe $J$-$H$\/$H$-$K_{s}$ colors for $\\beta$ Pic b appear slightly blue in $J$-$H$ and red in $H$-$K_{s}$ compared to \nfield L0--L5 dwarfs, though the difference here is not as large as was found in \\citet{Bonnefoy2013}.\nOther young substellar objects appear to have similar near-IR colors, in particular \n$\\kappa$ And b, GSC 06214 B, USco CTIO 108B, 2M 1207A, and Luhman 16 B, whose spectral types \nrange between M8 and T0.5. \n\nThe mid-IR colors of $\\beta$ Pic b (top-right and bottom panels) show a more complicated situation. \nIn $J$-$K_{s}$\/$K_{s}$-$L^\\prime$ and $H$-$K_{s}$\/$K_{s}$-$L^\\prime$, $\\beta$ Pic b lies along the field L\/T dwarf \nlocus with colors in between those for L0--L5 and L5.1--L9 dwarfs, overlapping in color with $\\kappa$ And b, 1RXJ 1609B, GSC 06214B, HR 8799 d, \nand 2M 1207 B. 
Compared to the few field L\/T dwarfs from the \\citeauthor{Leggett2010} sample with $M^\\prime$ photometry, \n$\\beta$ Pic b appears rather red, most similar in $K_{s}$-$M^\\prime$ color to GSC 06214 B. \n\nThe color-magnitude diagram positions of $\\beta$ Pic b (Figure \\ref{cmd}) better clarify how its near-to-mid \nSED compares to the field L\/T dwarf sequence and to very low-mass (and gravity?) young substellar objects.\nIn general, compared to the field L dwarf sequence, $\\beta$ Pic b appears progressively redder at \nmid-IR wavelengths. Similar to the case for GSC 06214 B \\citep{Bailey2013}, \n$\\beta$ Pic b appears overluminous compared to the entire \nL\/T dwarf sequence in the mid-IR. \n\n\\subsection{Comparisons to SEDs of Other Substellar Objects}\nTo further explore how the SED of $\\beta$ Pic b agrees with\/departs from the field L\/T dwarf sequence \nand other young substellar objects, we first compare its photometry to spectra from the SPeX library \n\\citep{Cushing2005,Rayner2009} of brown dwarfs with data overlapping with our narrowband \nmid-IR filters ([3.09] and [4.05]) spanning spectral classes between L1 and L5: \n2MASS J14392836+1929149 (L1), Kelu-1AB (L2), \n2MASS J15065441+1321060 (L3), 2MASS J15074769-1627386 (L5).\nTo compare the $\\beta$ Pic b photometry with cooler L dwarfs, we add combined IRTF\/SpeX and \nSubaru\/IRCS spectra from 1 to 4.1 $\\mu m$ for \n2MASS J08251968+2115521 (L7.5) and DENIS-P J025503.3-470049 (L8) \\citep{Cushing2008}.\nFinally, we add spectra for the low surface-gravity L4.5 dwarf, 2MASSJ22244381-0158521 \\citep{Cushing2008}.\nTo highlight differences between $\\beta$ Pic b and these L dwarfs, we scale the flux densities \nfor each of these standards to match $\\beta$ Pic b at $\\sim$ 2.15 $\\mu m$ ($K_{s}$ band).\n\nTo convert our photometry derived in magnitudes to flux density units, we use the zeropoint fluxes \nlisted in Table \\ref{fluxzero}. The $JHK_{s}$ and $L^\\prime$$M^\\prime$(4.78 $\\mu m$) zeropoints are from \\citet{Cohen2003} and \\citet{Tokunaga2005}, \nrespectively. We base the other zeropoints off of \\citet{Rieke2008}, although alternate sources \\citep[e.g.][]{Cohen1995} yield \nnearly identical values.\nBecause the overlap in wavelengths between $\\beta$ Pic and these objects is not uniform, we do not perform a rigorous \nfit between the two, finding the scaling factor that minimizes the $\\chi^{2}$ value defined from the planet flux density, \ncomparison object flux density, and photometric errors in both. Rather, we focus on a simple first-order comparison \nbetween $\\beta$ Pic b and the comparison objects to motivate detailed atmospheric modeling later in Section 4. \n\nFigure \\ref{spexcomp} (left panel) compares photometry for $\\beta$ Pic b to spectra for field L1--L5 dwarfs. \nWhile the L1 standard slightly overpredicts the flux density at $J$ band, the other three \nearly\/mid L standards match the $\\beta$ Pic b near-IR SED quite well, indicating a ``near-IR spectral type\" \nof $\\sim$ L2--L5. The L7.5 and L8 standards also produce reasonable matches, although they \ntend to underpredict the brightness at $J$ band (right panel). \n\nHowever, all standards have difficulty matching the $\\beta$ Pic b SED from 3--4 $\\mu m$. \nIn particular, the $\\beta$ Pic b flux density from $\\sim$ 3 to $\\sim$ 5 $\\mu m$ is nearly constant, whereas it \nrises through 4 $\\mu m$ and then steeply drops in all six standards depicted here. 
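\n\nSchematically, this comparison amounts to converting magnitudes to flux densities with the tabulated zeropoints and rescaling each template to the $K_{s}$-band flux density of $\\beta$ Pic b. The sketch below illustrates the bookkeeping only; the zeropoint and magnitude values shown are placeholders, not the values adopted in this work.\n\\begin{verbatim}
import numpy as np

def mag_to_fnu(mag, zeropoint_jy):
    # F_nu = F_0 * 10^(-0.4 m), with the zeropoint flux F_0 in Jy
    return zeropoint_jy * 10.0 ** (-0.4 * np.asarray(mag))

bands      = ['J', 'H', 'Ks', 'Lp']
zeropoints = np.array([1594.0, 1024.0, 666.7, 249.0])  # Jy (placeholders)
planet_mag = np.array([14.0, 13.3, 12.6, 11.2])        # placeholders
planet_fnu = mag_to_fnu(planet_mag, zeropoints)

# template flux densities sampled in the same bands (placeholders, Jy)
template_fnu = np.array([2.4e-3, 2.6e-3, 2.3e-3, 1.2e-3])
scale = planet_fnu[bands.index('Ks')] / template_fnu[bands.index('Ks')]
scaled_template = scale * template_fnu   # template matched at Ks
\\end{verbatim}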
Focusing only on the $\\beta$ Pic b photometry \nat 3.8--4.1 $\\mu m$, the ``mid-IR spectral type\" is hard to define; the low surface gravity\nL4.5 dwarf bears the greatest resemblance, although we fail to identify \ngood matches at all wavelengths with any of our spectral templates, the 3.1 $\\mu m$, $L^\\prime$, \nand [4.05] data points being the most problematic. While none of our standards have measurements fully overlapping \nwith the $M^\\prime$ filter, the flux densities at 5.1 $\\mu m$ indicate that they may have a very hard time simultaneously \nreproducing our measurements at all four filters between 3 and 5 $\\mu m$. Although non-equilibrium carbon chemistry can \nflatten the spectra of low surface gravity L\/T dwarfs \\citep{Skemer2012}, its effect \nis to weaken the methane absorption trough at $\\sim$ 3.3 $\\mu m$ and suppress emission at \n$\\sim$ 5 $\\mu m$. Thus, it is unclear whether this effect can explain the enhanced emission at \n$\\sim$ 3.1 $\\mu m$ (mostly outside of the $CH_{4}$ absorption feature to begin with) \\textit{and} 5 $\\mu m$.\n\nTo understand whether $\\beta$ Pic b's SED is unique even amongst other very low-mass \nsubstellar objects, we compare our photometry to that for companions listed in Table \\ref{photcomptable} that \nhave photometry from 1 $\\mu m$ through $\\sim$ 4--5 $\\mu m$: HR 8799 bcd, $\\kappa$ And b, 1RXJ 1609 B, GSC 06214B, \nHIP 78530 B, 2M 1207A\/B, HR 7329B, and AB Pic. Two objects -- 1RXJ 1609 B and GSC 06214B -- have 3.1 $\\mu m$ photometry, \nthat of 1RXJ 1609 B coming from \\citet{Bailey2013}; in addition, $\\kappa$ And b has [4.05] photometry from data obtained by T. C. (M$_{[4.05]}$ = 9.45 $\\pm$ 0.20) \n(Bonnefoy, Currie et al., 2013 in prep.).\n\nThe two far-right columns of Table \\ref{photcomptable2} list the reduced $\\chi^{2}$ and goodness-of-fit statistics \ncomparing $\\beta$ Pic b's $JHK_{s}L^\\prime$ ([3.1],[4.05]) photometry with that of each companion, \nwhile Figure \\ref{empcomp} displays these comparisons for $\\kappa$ And b, 1RXJ 1609B, and GSC 06214B, which are all \nthought to be low surface gravity companions with $T_{eff}$ $\\sim$ 1700 K, 1800 K, and 2200 K \\citep{Carson2013,Lafreniere2010,Bowler2011,Bailey2013}.\nOverall, $\\kappa$ And b provides the best match to $\\beta$ Pic b's photometry, requires negligible flux scaling, and \nis essentially identical within the 68\\% confidence limit (C.L.) \n($\\chi^{2}$ = 0.946, C.L. = 0.186), although the large photometric uncertainties in the near-IR limit the robustness of these conclusions.\nThe companion to 1RXJ 1609 likewise produces a very good match ($\\chi^{2}$ = 1.369, C.L. = 0.287), while the slightly more luminous (and massive) \nGSC 06214B appears to be much bluer, (relatively) underluminous in $L^\\prime$ and $M^\\prime$ (or, conversely, overluminous at \n$JHK_{s}$) by $\\sim$ 30\\%.
In comparison, the cooler ($T_{eff}$ $\\approx$ 900-1100 $K$) exoplanets HR 8799 bcd provide far poorer \nmatches ($\\chi^{2}$ $\\sim$ 6--52).\n\nStill, it is unclear whether any object matches $\\beta$ Pic b's photometry at all wavelengths: \nboth of the objects for which we have [3.1] data, GSC 06214B and 1RXJ 1609B, are still slightly underluminous here.\nMoreover, the best-matching companions -- $\\kappa$ And b and 1RXJ 1609B -- are still not identical, as the scaling factors between \n$\\beta$ Pic b's spectrum and these companions' spectra that minimize $\\chi^{2}$ are $\\sim$ 0.83 and 0.53, respectively.\nWhile companions with identical temperatures but radii 10\\% and 30\\% larger than $\\beta$ Pic b would achieve this scaling, \n$\\kappa$ And b and 1RXJ 1609B are respectively older and younger than $\\beta$ Pic b, whereas for a given initial entropy of \nformation planet radii are expected to decrease with time \\citep{Spiegel2012}.\n\nIn summary, young (low surface gravity?), low-mass objects may provide a better match to $\\beta$ Pic b's \nphotometry than do field dwarfs, especially those with temperatures well above 1000 $K$ but slightly below 2000 $K$ \n($\\kappa$ And b, 1RXJ 1609 B). However, we fail \nto find a match (within error bars) between the planet's photometry spanning the full range of wavelengths for \nwhich we have data, especially at $\\sim$ 3 $\\mu m$. As the planet spectra depend critically \non temperature, surface gravity, clouds and (as we shall see) dust particle sizes, our comparisons imply that $\\beta$ Pic b may differ from most\nyoung substellar objects in one of these respects. Next, we turn to detailed atmospheric modeling to \nidentify the set of atmospheric parameters that best fit the $\\beta$ Pic b data.\n\n\\section{Planet Atmosphere Modeling}\nTo further explore the physical properties of $\\beta$ Pic b, we compare its photometry to planet atmosphere models \nadopting a range of surface gravities, effective temperatures, and cloud prescriptions\/dust.\nFor a given surface gravity and effective temperature, a planet's emitted spectrum depends primarily on the atmosphere's composition,\nthe structure of its clouds, and the sizes of the dust particles of which the clouds are comprised \\citep{Burrows2006}.\nFor simplicity, we assume solar abundances except where noted and leave consideration of anomalous abundances for future work.\n\nBased on $\\beta$ Pic b's expected luminosity \n(log(L$_{p}\/L_{\\odot}$) $\\sim$ -3.7 to -4, Lagrange et al. 2010; Bonnefoy et al. 2013) and age, \nit is likely too hot ($T_{eff}$ $\\sim$ 1400-1800 K) for \nnon-equilibrium carbon chemistry to play a dominant role \\citep{HubenyBurrows2007,Galicher2011}.\nTherefore, our atmosphere models primarily differ in their treatment of clouds and the dust particles \n entrained in clouds. For each model, we explore a range of surface gravities and effective temperatures.\n\n\\subsection{Limiting Cases: The \\citet{Burrows2006} E60 and A60 Models and AMES-DUSTY Models}\n\\subsubsection{Model Descriptions}\nWe begin by applying an illustrative collection of previously-developed atmosphere models to $\\beta$ Pic b.\nThese models will produce limiting cases for the planet's cloud structure and typical dust grain size, which we refine in Section \n\\ref{sec-smalldust}. 
To probe the impact of cloud thickness, we first adopt a (large) \nmodal particle size of 60 $\\mu$m and consider three different cloud models: \nthe standard chemical equilibrium atmosphere thin-cloud models from \\citet{Burrows2006}, which successfully reproduces \nthe spectra of field L dwarfs, moderately-thick cloud models from \\citet{Madhusudhan2011}, and thick cloud models used in \\citet{Currie2011a}.\nTo investigate the impact of particle size, we then apply the AMES-DUSTY models. The DUSTY models lack any dust grain sedimentation, \nsuch that the dust grains are everywhere in the atmosphere, similar to the distribution of dust grains entrained in \nthick clouds. However, they adopt far smaller dust grains than do the thick cloud models from \\citet{Madhusudhan2011} and \\citet{Currie2011a}, \nwhere the grains are submicron in size and follow the interstellar grain size distribution \\citep{Allard2001}.\nAll models described here and elsewhere in the paper assume that the planet is in hydrostatic and radiative equilibrium.\nNone of them consider irradiation from the star, as this is likely unimportant at $\\beta$ Pic b's orbital separation.\n Table \\ref{bpicatmosfit} summarizes the range of atmospheric \nproperties we consider for each model.\n\n\\textbf{The \\citet{Burrows2006} E60 Thin Cloud, Large Dust Particle Models} -- As described in \\citet{Burrows2006} and later \nworks \\citep[e.g.][]{Currie2011a,Madhusudhan2011}, the Model E60 case assumes that the clouds are \nconfined to a thin layer, where the thickness of the flat part of the cloud encompasses the condensation points \nof different species with different temperature-pressure point intercepts. Above and below this flat portion, the \ncloud shape function decays as the -6 and -10 powers respectively, so that the clouds have scale heights of \n$\\sim$ 1\/7th and 1\/11th that of the gas. We adopt a modal particle size of 60 $\\mu m$ and a particle \nsize distribution drawn from terrestrial water clouds \\citep{Deirmendjian1964}. We consider surface gravities \nwith log(g) = 4 and 4.5 and temperatures with a range of $T_{eff}$ = 1400--1800 K in increments of 100 K.\n\n\\textbf{The \\citet{Madhusudhan2011} AE60 Moderately-Thick Cloud, Large Dust Particle Models} -- \nDescribed in \\citet{Madhusudhan2011}, the Model AE60 case assumes a shallower cloud shape function of \n$s_{u}$ = 1, such that the cloud scale height is half that of the gas as a whole. We again adopt a \nmodal particle size of 60 $\\mu m$ and the same particle size distribution. We consider surface gravities \n with log(g) = 4 and 4.5 and temperatures between $T_{eff}$ = 1000--1700 K in increments of 100 K.\n\n\\textbf{The \\citet{Burrows2006} A60 Thick Cloud, Large Dust Particle Models} --\nAs described in \\citet{Currie2011a}, the Model A60 case differs in that it \nassumes that the clouds extend with a scale height that tracks that of the gas as a whole. \nBelow the flat part of the cloud, the shape function decays as the -10 power as in the E60 and AE60 models, although \ndeviations from this do not affect the emergent spectrum. 
Here, we consider surface gravities with \nlog(g) = 4 and 4.5 and temperatures with a range of $T_{eff}$ = 1000-1700 K in increments of 100 K.\n\n\\textbf{AMES-DUSTY Thick-Cloud, Small Dust Particle Limit} -- The AMES-DUSTY atmosphere models \\citep{Allard2001} \nleverage the PHOENIX radiative transfer code \\citep{HauschildtBaron1999} and\nexplore the limiting case where dust grains do not sediment\/rain out in the atmosphere. \nUnlike the \\citet{Burrows2006} models and those considered in later works \\citep[e.g.][]{Spiegel2012}, the \nAMES-DUSTY models adopt an interstellar grain size distribution favoring far tinier dust grains with \nhigher opacities. The grains' higher opacities reduce the planet's radiation at shorter wavelengths. \nThus, these models have dramatically different near-IR planet spectra from the E\/A\/AE60 type models with larger \nmodal grain sizes even at the same temperatures and gravities \\citep[cf.][]{Burrows2006,Currie2011a}.\nHere we consider AMES-DUSTY models with log(g) = 3.5, 4, and 4.5 and $T_{eff}$ = 1000--2000 K ($\\Delta T_{eff}$=100 K). \n\n\\subsubsection{Fitting Method}\n\nTo transform the DUSTY spectra into predicted flux density measurements (at 10 $pc$), we convolve the spectra \nover the filter response functions and scale by a dilution factor of f = ($R_{planet}$\/10 pc)$^{2}$. \nWe consider a range of planet radii between 0.9 $R_{J}$ and 2 $R_{J}$.\nLikewise, we convolve the E60 and A60 \nmodel spectra over the filter response functions. The E60 models (as do all other \\citealt{Burrows2006} and \n\\citealt{Madhusudhan2011} models) adopt a mapping between planet radius and surface gravity\/temperature set \nby the \\citet{Burrows1997} planet evolution models.\nTo explore departures from these models, we allow the radius to vary by an additional scale factor \nof 0.7 to 1.7. For most of our grid, this translates into a radius range of 0.9 to 2 $R_{J}$.\n\nOur atmosphere model fitting follows methods in \\citet{Currie2011a,Currie2011b}, where\n we quantify the model fits with the $\\chi^{2}$\nstatistic,\n\\begin{equation}\n\\chi^{2} = \\sum\\limits_{i=0}^{n} (f_{data,i}-F_{model,i})^{2}\/\\sigma_{data,i}^{2}.\n\\end{equation}\nWe weight each datapoint equally. Because our photometric calibration fully considers \nuncertainties due to the signal-to-noise ratio, the processing-induced attenuation, and \nthe absolute photometric calibration, we do not set a 0.1 mag floor to $\\sigma$ for each data point \nas we have done previously.\n\nWe determine which models are \\textit{formally} consistent with the data \nby comparing the resulting $\\chi^{2}$ value to that \nidentifying the 68\\% confidence limit, and we identify those that can clearly be ruled out \nby computing the 95\\% confidence limit.\nNote that these limits are significantly more stringent than the ones \nwe adopted in \\citet{Currie2011a}.\nTreating the planet radius as a free parameter, we have five \ndegrees of freedom for seven data points, leading to $\\chi^{2}_{68\\%}$ = 5.87 \nand $\\chi^{2}_{95\\%}$ = 11.06.
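\n\nSchematically, the fitting procedure just described can be reproduced as follows. The sketch below is illustrative only: the photometry, uncertainties, and filter-averaged model fluxes are placeholder arrays standing in for Table \\ref{bpicphot} and the model grids, and the brute-force search over the radius scale factor is simply one convenient way to implement the fit.\n\\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def chi_square(f_data, f_model, sigma):
    # equally weighted chi^2 over the photometric data points
    return np.sum((f_data - f_model) ** 2 / sigma ** 2)

def fit_radius_scale(f_data, f_model, sigma,
                     scales=np.linspace(0.7, 1.7, 101)):
    # the model flux scales as the square of the radius scale factor,
    # through the dilution factor (R_planet / 10 pc)^2
    chis = [chi_square(f_data, s ** 2 * f_model, sigma) for s in scales]
    best = int(np.argmin(chis))
    return scales[best], chis[best]

# confidence thresholds for 5 degrees of freedom (7 points, radius free)
print(chi2.ppf([0.68, 0.95], df=5))   # approx. [5.9, 11.1]

# placeholder photometry (mJy), errors, and one filter-averaged model
f_data  = np.array([1.1, 1.6, 2.0, 2.3, 2.4, 2.2, 2.0])
sigma   = 0.1 * f_data
f_model = np.array([1.0, 1.5, 2.1, 2.2, 2.3, 2.3, 1.9])
print(fit_radius_scale(f_data, f_model, sigma))
\\end{verbatim}
Repeating the scale-factor fit for every ($T_{eff}$, log(g)) pair in a model grid and comparing the resulting minimum $\\chi^{2}$ values to the thresholds above mirrors the acceptance criteria described in this section.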
\n\n\\subsubsection{Results}\nTable \\ref{bpicatmosfitres} summarizes our fitting results using the E60, AE60, A60, and DUSTY \nmodels. Figure \\ref{sedfit1} displays some of these fitting results, where the left-hand \npanels show the $\\chi^{2}$ distributions with the 68\\% and 95\\% confidence limits indicated \nby horizontal dashed and dotted lines. The right-hand panels and middle-left panel \nshow the best-fitting models for each atmosphere prescription. \nA successful model must match three key properties of the observed SED:\n(1) at 3--5 $\\mu$m, the SED is relatively flat, (2) at 1--3 $\\mu$m, \nthe spectral slope is relatively shallow, and (3) the overall normalization of the 3--5 $\\mu$m flux\n relative to the 1--3 $\\mu$m flux must match the data.\n\nFor the E60, AE60, and A60 models, we find $\\chi^{2}$ minima at log(g) = 4--4.5 and $T_{eff}$ = 1400 $K$ \nin each case, with radius scaling factors (the constant by which we multiply the nominal \\citeauthor{Burrows1997} planet radii) \nbetween 1.185 and 1.680. For the \\citet{Burrows1997} evolutionary models, these scaling factors imply planet radii between $\\sim$ 1.8 and 2 $R_{J}$, at \nthe upper extrema of our grid in radius. \n\n Figure \\ref{sedfit1} illustrates the impact on the SED of changing cloud models, \ngiven a fixed grain size. The best-fit temperature does not vary dramatically \nbecause, roughly speaking, the relative fluxes at 1--3 $\\mu$m and 3--5 $\\mu$m are determined by the SED's blackbody envelope. \nHowever, cloud thickness dramatically affects the depths of absorption bands superimposed on that envelope. \nThe atmosphere models presented here do not feature temperature inversions. As such, high opacity molecular lines \nhave low flux densities because they originate at high altitudes where the temperature is low. \nWhen clouds are thin, optical depth unity is achieved at very different altitudes in and outside of absorption \nbands such as those at 3.3 $\\mu$m (methane) and 4.5 $\\mu$m (primarily CO), and the bands appear deep.\n\nFor a fixed \\textit{observed} effective temperature, thicker clouds translate into hotter temperature \nprofiles (i.e. at a given pressure in the atmosphere, the temperature is higher) \\citep[e.g.][]{Madhusudhan2011}, \nand the total Rosseland mean optical depth of the atmosphere at a given pressure is higher \\citep{Madhusudhan2011}.\nAs the clouds become thicker, the $\\tau$ = 1 surface also becomes more uniform, such that molecular features wash out and the \nspectrum overall appears flatter and more like a blackbody \\citep{Burrows2006}. Hence, the prominent molecular absorption bands seen in the best-fit \nE60 (thin cloud) model are substantially reduced in the A60 (thick cloud) model, with AE60 lying in between. \nThe planet's flat 3--5 $\\mu$m SED is best fit by A60.\n\nAlthough the $\\chi^{2}$ minima for all four of the models \nwe consider are sharply peaked, none yield fits falling within the 68\\% confidence interval.\n The fits from E60 and AE60 are particularly poor, ruled out at a \ngreater than 5-$\\sigma$ level, whereas the A60 model quantitatively does better but still \nis ruled out as an acceptably-fitting model (C.L. $\\sim$ 3.9-$\\sigma$).\nThe best-fit AMES-DUSTY model fits the SED even better than A60, with parameters of $T_{eff}$ = 1700 $K$ and\nlog(g) = 3.5 and a radius of $r$ = 1.35 $R_{J}$, similar \nparameters to those found in \\citet{Bonnefoy2013}. However, the best-fit DUSTY model \nstill falls outside the 68\\% confidence limit (C.L.
= 0.84).\nThese exercises suggest that the atmospheric parameters assumed in the models need to \nbe modified in order to better reproduce the $\\beta$ Pic b photometry.\nTo achieve this, we restrict ourselves to thick clouds and consider more carefully the impact of dust size.\n\n\n\\subsection{A4, Thick Cloud\/Small Dust Models}\\label{sec-smalldust}\n\\subsubsection{The Effect of Small Dust Particles}\nOur analyses in the previous section show the extreme mismatch between standard L dwarf atmosphere models \nassuming thin clouds and large dust particles and the data. While our $\\chi^{2}$ values for the Burrows \nthick cloud, large dust particle models are systematically much lower, they likewise are a poor match \nto the data. In contrast, fits from the AMES-DUSTY models only narrowly lie outside the 68\\% confidence interval. \n\nA closer inspection of the best-fitting models in each case (right-hand panels) illustrates how they fail. \nThe main difficulty with matching these models to $\\beta$ Pic b spectrum is the planet's flat SED from 2 $\\mu m$ \nto 5 $\\mu m$, where models tend to underpredict the flux density at 3.1 $\\mu m$ and\/or $M^\\prime$. The slope from \n$J$ to $K_{s}$ is also a challenge. Reducing dust sizes can further fill in absorption troughs by \nincreasing the opacities of the clouds. The AMES-DUSTY model, however, appears to overcorrect as its \nspectrum exhibits sharp peaks due to its submicron sized grains that degrade its fit to the data.\nTherefore, we consider grain sizes intermediate between those in A60 and AMES-DUSTY (e.g. $\\sim$ 1--30 $\\mu m$).\n\n\\textbf{A4 Thick Cloud, Small Dust Particle Models} -- \nAs the primary difference between these models is the typical\/modal particle size, we here introduce a \nnew set of atmosphere models with the same A-type, thick cloud assumption but with modal particle sizes \nslightly larger than those characteristic of dust in the AMES-DUSTY models but significantly smaller than \nprevious Burrows models. We nominally adopt 4 $\\mu m$ as our new modal particle size, comparable \nin wavelength to the peak flux density of $\\beta$ Pic b in $F_{\\nu}$ units. Thus, we denote these models \nas ``A4\", thick-cloud, small dust particle models.\n\nFigure \\ref{dustseq} illustrates the effect of dust on the planet spectrum for modal particle sizes of 3, 5, 30 and 50 $\\mu m$ and \na temperature and surface gravity consistent with that expected to reflect $\\beta$ Pic b based on planet \ncooling models ($T_{eff}$ = 1600 K, log(g)=3.8-4, $r$ $\\sim$ 1.5 $R_{J}$) \\citep{Burrows1997,Baraffe2003,Lagrange2010,Spiegel2012,Bonnefoy2013}.\nAs particle sizes decrease, the water absorption troughs at 1.8 $\\mu m$ and 2.5 $\\mu m$ diminish. Likewise filled in is the deep absorption trough \nat $\\sim$ 3.3 $\\mu m$ and 4.5 $\\mu m$ that is usually diagnostic of carbon chemistry \\citep[e.g.][]{HubenyBurrows2007,Galicher2011}. \nOverall, the spectrum flattens and becomes redder (shorter wavelength emission originates at higher altitudes), with weaker \nemission and a steeper slope at $J$ to $K_{s}$. 
This reddening explains the difference in best-fit \neffective temperature between the AMES-DUSTY model and the 60 $\\mu$m dust models.\n\n\n\\subsubsection{Model Fitting Procedure}\nWe follow the steps outlined in \\citet{Currie2011a}, where we perform two runs:\none fixing the planet radius to the \\citet{Burrows1997} hot-start predictions for a given $T_{eff}$ and log(g) and \nanother where we consider a range of planet radii (as in the previous section). For the fixed-radii modeling, the \n68\\% and 95\\% confidence limits now lie at $\\chi^{2}$ = 7.01 and 12.6, respectively, whereas they are at 5.87 and 11.06 \nfor the varying-radii fits as before. \nSimilar to the Burrows A\/E60 model runs, we consider a range of temperatures between 1400 K and 1900 K. To explore whether \nor not the fits are sensitive to surface gravity, we consider models with log(g) = 3.6, 3.8, 4, and log(g) = 4.25. For \nthe age of $\\beta$ Pic (formally, 8 to 20 Myr), this surface gravity range fully explores the masses (in the hot-start \nformalism) allowed given the radial-velocity dynamical mass limits \\citep{Lagrange2012a}.\n\nTo further explore the effect that carbon chemistry may have on our planet spectra, we take the best-fitting model \nfrom the above exercise, significantly enhance the methane abundances over solar and re-run a small \ngrid of temperatures based on that, to determine if departures from solar abundances may yield a wider range of \nacceptable atmosphere parameters. \n Because variations in molecular abundances affect the depths of molecular absorption bands, we \nexpect that such variations may improve our fit.\n\n\n\n\\subsubsection{Results}\nFigures \\ref{sedfit2} and \\ref{sedfit3} and Table \\ref{bpicatmosfitres} present our results for fitting \nthe $\\beta$ Pic b data with the A4, thick cloud\/small dust models. Quantitatively, these models \nbetter reproduce the $\\beta$ Pic b SED. Fixing the planet radius to values assumed in the \\citet{Burrows1997} \nplanet cooling model, we find one atmosphere model -- log(g) = 3.8, $T_{eff}$ = 1600 $K$ -- consistent \nwith the data to within the 68\\% confidence interval. A wide range of models are consistent with the data\nat the 95\\% confidence limit, covering $\\pm$ 0.2 dex in surface gravity and $\\pm$ 100 $K$ in temperature. \n\nWe can slightly improve upon these fits if we allow the planet radius to freely vary. In this case, \nthe best-fitting models yield a slightly higher surface gravity of log(g) = 4--4.25 but the same temperature of \n1600 $K$. But in contrast to the fixed-radius case above, a wide range of models are consistent with the \ndata at the 68\\% confidence limit. In particular, all surface gravities considered in our model grid \nare consistent with the data provided that the temperature is 1600 $K$ and the radius is rescaled accordingly: \nlog(g) = 3.6--4.25, $T_{eff}$ = 1600 $K$. \nAnother set of models with the full range of surface gravities \nand 250 $K$ spread in temperature (1500--1750 $K$) are marginally consistent with the data.\n\nThe methane-enhanced models are shown in Figure \\ref{sedfit4} for log(g)=4 and $T_{eff}$ = 1575--1650 $K$.\nThe 1575 $K$ and 1600 $K$ models (Figure \\ref{sedfit4}) likewise produce good fits to the data ($\\chi^{2}$ = 5.13--5.3), \nwhere the 1650 $K$ model barely misses the 68\\% cutoff. 
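The 68\\% and 95\\% cutoffs quoted here and in the fitting-method discussion follow directly from the $\\chi^{2}$ distribution for the stated degrees of freedom; a short, purely illustrative check (assuming seven photometric points, with six degrees of freedom for the fixed-radius fits and five when the radius is free):

\\begin{verbatim}
# Reproduce the quoted 68% and 95% chi^2 cutoffs (illustrative check only).
from scipy.stats import chi2

for dof in (6, 5):   # fixed-radius fits vs. free-radius fits
    lo, hi = chi2.ppf([0.68, 0.95], df=dof)
    print(dof, round(lo, 2), round(hi, 2))   # ~ (6, 7.0, 12.6) and (5, 5.9, 11.1)
\\end{verbatim}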
\nThus, while best-fitting solar abundance models appear narrowly peaked at $T_{eff}$ = 1600 $K$, the range\nin temperature enclosing the 68\\% confidence interval is larger when non-solar abundances are considered, \nat least extending from 1575 $K$ to almost 1650 $K$. \nChanges in molecular abundances, as expected, allow us to very slightly improve the SED fit.\nHowever, thick clouds and small dust grains are likely still needed to match the emission \nfrom $\\beta$ Pic b, since given molecules (e.g. $CH_{4}$) by themselves do not change fluxes comparably \nat 1--3 $\\mu$m and 3--5 $\\mu$m. \n\nIn summary, adopting the \\citet{Burrows1997} hot-start models to set our planet radii and the A4 thick cloud\/small \ndust atmosphere models, we derive log(g) = 3.8 and $T_{eff}$ = 1600 $K$ for $\\beta$ Pic b. Allowing the radius to \nvary and considering non-solar carbon abundances, we derive log(g) = 3.6--4.25 and $T_{eff}$ = 1575--1650 $K$, meaning \nthat the planet temperature is well constrained but the surface gravity is not. However, in Section 5\nwe narrow the range of surface gravities to log(g) = 3.8 $\\pm$ 0.2, as higher surface gravities imply planet masses \nruled out by dynamical estimates.\n\n\\subsubsection{Varying Grain Sizes and Fits Over Other Model Parameter Space} \nThe models considered in the previous subsections assume thick clouds, dust grains with \na modal size of 4 $\\mu m$, and (in most cases) solar abundances. Although we achieve statistically \nsignificant fits to the $\\beta$ Pic b photometry with these models, our exploration of \nmodel parameter space is still limited. While an exhaustive parameter space search\nis beyond the scope of this paper, here we argue that models lacking either \nthick clouds or small dust grains are unlikely to produce good fits. \nThus, small grains and thick clouds are likely important components of $\\beta$ Pic b's atmosphere \nrequired in order to fit the planet's spectrum.\n\nTo consider the robustness of our results concerning the modal grain size, we also ran some \nmodel fits for modal particle sizes of 3 $\\mu m$, 5 $\\mu m$, 10 $\\mu m$, and 30 $\\mu m$. \nThe models with 3 and 5 $\\mu m$ modal sizes yielded fits slightly worse than those with \nmodal sizes of 4 $\\mu m$. For example, models with modal sizes of $\\langle a \\rangle$ = 3 and 5 $\\mu m$, $T_{eff}$ = 1600 K, \nlog(g) = 3.8, and a freely-varying planet radius yield $\\chi^{2}$ = 6.31 and 6.28, respectively.\nThese values lie slightly outside the 68\\% confidence interval, although they are still smaller than \nthose from the best-fit DUSTY models. \nIn contrast, models with $\\langle a \\rangle$ = 10 $\\mu m$ and 30 $\\mu m$ fit the data significantly worse \n($\\chi^{2}$ = 10.0 and 19.6, respectively).\n\nSimilarly, our investigations show that small dust grains do not obviate the need to assume \nthick, A-type clouds in our atmosphere models. For example, adopting the AE-type cloud prescription,\nmodal particle sizes of 5 $\\mu m$, a temperature of $T_{eff}$ = 1600 K, and a surface gravity \nof log(g) = 3.8--4, our model fits are substantially worse than the A4-type models and even \nthe AMES-DUSTY models and are easily ruled out ($\\chi^{2}$ $\\sim$ 15--40).\nThe AE-type cloud prescription fails to reproduce the $\\beta$ Pic b spectrum because, by \nconfining clouds to a thinner layer, the $\\tau$ = 1 surface varies too much in and out of \nmolecular absorption features such as $CH_{4}$ and $CO$. 
In disagreement with the \n$\\beta$ Pic b SED, the AE model spectra thus have suppressed emission at $\\approx$ 3 $\\mu m$ \nand 5 $\\mu m$ and an overall shape looking less like a blackbody.\n\nIn contrast, non-solar abundances may slightly widen the range of parameter space (in radius, temperature, \ngravity, etc.) yielding good fits. The methane-rich model from the previous section adopting $\\langle a \\rangle$ = 5 $\\mu m$ \ninstead of 4 $\\mu m$, log(g) = 3.8, and $T_{eff}$ = 1600 K still yields a fit in agreement with the data \nto within the 68\\% confidence limit ($\\chi^{2}$ = 5.59).\nThus, within our atmosphere modeling approach, we need 1) grains several microns in size, \ncomparable to the typical sizes of grains in debris disks, \nand 2) thick clouds to yield fits consistent with the data to \nwithin the 68\\% confidence limit. These results are not strongly sensitive to chemical abundances, \nalthough varying the range of abundances may slightly widen the corresponding range of other parameter space \n(in temperature, gravity, etc.) yielding good-fitting models.\n\n\\section{Planet Radii, Luminosities, Masses, and Evolution}\nFrom the set of models that reproduce the $\\beta$ Pic b SED to the 68\\% confidence interval, we \nderive a range of planet radii, luminosities and inferred masses. The planet radii for each model \nrun are given in Table \\ref{bpicatmosfitres}. Interestingly, all of our 1-$\\sigma$ solutions fall on or \nabout $R$ $\\sim$ 1.65 $R_{J}$ with very little dispersion ($\\pm$ $\\sim$ 0.05 dex). If we consider the \nrange of radii for a given atmosphere model consistent with the data to within the 68\\% (or 95\\%) confidence \ninterval regardless of whether the given radius is the best-fit one, then the range in acceptable radii \nmarginally broadens: $r$ = 1.65 $\\pm$ 0.06 $R_{J}$. Note that these \nradii are larger than those inferred for HR 8799 bcde based either on their luminosities and hot-start \ncooling tracks \\citep{Marois2008,Marois2011} or on atmosphere modeling, \nwhere in \\citet{Currie2011a} and \\citet{Madhusudhan2011} \nour best-fit models typically had $R$ $\\sim$ 1.3 $R_{J}$.\nThe range in inferred planet luminosities is even narrower. The values inferred from our best-fit \nmodels center on log($L$\/$L_{\\odot}$) = -3.80 with negligible intrinsic dispersion ($\\pm$ 0.01 dex).\nThe uncertainty in $\\beta$ Pic's distance affects both our radius and luminosity \ndeterminations. Treating the distance uncertainty ($\\pm$ 1 $pc$) as a separate, additive source of error, \n$\\beta$ Pic b's range in radii is 1.65 $\\pm$ 0.06 $R_{J}$ and its luminosity \nis log($L$\/$L_{\\odot}$) = -3.80 $\\pm$ 0.02.\n\nFrom our best-fit surface gravities and inferred radii, we can derive the mass of the planets \ninferred from our modeling. Adopting the hot-start formalism without rescaling the radius, \nour modeling implies a best-fit planet mass of $\\sim$ 7 $M_{J}$; the range covering the 95\\% \nconfidence limit is 5--9 $M_{J}$. If we allow the radius to freely vary, we derive a \nrange of 4 $M_{J}$ to 18.7 $M_{J}$, where the spread in mass reflects primarily the \nspread in surface gravity from best-fitting models (log(g) = 3.6--4.25). However, RV data limits $\\beta$ Pic b's \nmass to be less than 15 $M_{J}$ if its semimajor axis is less than 10 $AU$, which appears \nto be the case \\citep{Lagrange2012a,Chauvin2012,Bonnefoy2013}. 
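As a back-of-the-envelope illustration of how these masses follow from the fitted parameters (not the authors' exact calculation), $M = gR^{2}\/G$ with log(g) = 3.8 (cgs) and $R \\approx$ 1.65 $R_{J}$ gives roughly 7 $M_{J}$:

\\begin{verbatim}
# Mass from surface gravity and radius, M = g R^2 / G (cgs units; illustrative).
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
R_JUP = 7.1492e9     # Jupiter radius [cm]
M_JUP = 1.898e30     # Jupiter mass [g]

def mass_mjup(logg_cgs, r_rjup):
    g = 10.0 ** logg_cgs                       # surface gravity [cm s^-2]
    return g * (r_rjup * R_JUP) ** 2 / G / M_JUP

print(round(mass_mjup(3.8, 1.65), 1))          # ~6.9, i.e. ~7 M_J as quoted above
\\end{verbatim}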
Thus, limiting the atmosphere models \nto those whose implied masses do not violate the RV upper limits (ones with log(g) = 3.6--4), \nour best-estimated (68\\% confidence limit) planet masses are $\\sim$ 7$^{+4}_{-3}$ $M_{J}$.\n\nPlanets cool and contract as a function of time, and we can compare our inferred luminosities and \nradii to planet cooling models. Figure \\ref{lumevo} compares the inferred planet luminosity \nto the hot-start planet evolution models from \\citet{Baraffe2003}. For context, we also show \nthe luminosities of other 5--100 Myr old companions with masses that (may) lie below 15 $M_{J}$: GSC 06214 B, \n1RXJ 1609 B, HR 8799 bcde, AB Pic B, and $\\kappa$ And b. From our revised luminosity estimate, \nthe \\citet{Baraffe2003} hot-start models imply a mass range of $\\sim$ 8--12 $M_{J}$ if \nthe planet's age is the same as the star's inferred age (12$^{+8}_{-4}$ Myr; \\citealt{Zuckerman2001}).\nIf we use the \\citet{Burrows1997} hot-start models, we obtain nearly identical results of\n9--13 $M_{J}$. These masses are slightly higher than most of the implied masses from our atmosphere modeling \nbut still broadly consistent with them and with the dynamical mass upper limits of 15 $M_{J}$ \nfrom \\citet{Lagrange2012a}. Note also that the luminosities and planet radii are completely inconsistent \nwith predictions from low-entropy, cold-start models for planet evolution.\n\nStill, the right-hand panel of Figure \\ref{lumevo} highlights one possible complication with our results, namely \nthat our best-estimated planet radii are near the upper end of the predicted range for 5--10 $M_{J}$ companions \nin the hot-start formalism. For the hot-start models presented in \\citet{Burrows1997} and \\citet{Baraffe2003}, \n5--10 $M_{J}$ companions are predicted to have radii of $\\sim$ 1.5--1.6 $R_{J}$. \nFor the hot-start models presented in \\citet{Spiegel2012}, the predicted range for 5--10 $M_{J}$ planets covers \n$\\approx$ 1.4--1.5 $R_{J}$\\footnote{This mismatch does not mean that the AMES-DUSTY models, whose fits \nto the data imply planet radii of $\\approx$ 1.3 $R_{J}$ and lie just outside the 68\\% confidence limit, \nare preferable. The best-fit AMES-DUSTY radii lie \\textit{below} the \nradii predicted for 5--10 $M_{J}$ objects at $\\beta$ Pic b's age and are only consistent for `warm-start' models \nthat imply lower luminosities and colder temperatures than otherwise inferred from the AMES-DUSTY fits.}. \n\nTo reduce the planet radius of $\\sim$ 1.65 $R_{J}$ by $\\sim$ 10\\% while yielding the same luminosity \nrequires raising the effective temperature from $\\approx$ 1600 $K$ to $\\sim$ 1700 $K$. This is a small change, and \natmospheric modeling of $\\beta$ Pic b and similar substellar objects is still in its early stages. Thus, it is \nquite plausible that future modeling efforts, leveraging additional observations of $\\beta$ Pic b and those of \nother planets with comparable ages and luminosities, will find quantitatively better fitting solutions that \nimply smaller planet radii and higher temperatures. We consider this to be the most likely \nexplanation.\n\nAlternatively, we can bring the atmosphere modeling-inferred radius into more comfortable agreement with hot-start \nevolutionary models if $\\beta$ Pic b is $\\approx$ 7 Myr old or less. 
For a system age of $\\approx$ 12 Myr, \nthis is consistent with it forming late in the evolution of the protoplanetary disk that initially surrounded the primary.\nEven adopting the lower limit on $\\beta$ Pic's age (8 Myr), $\\beta$ Pic b may still need to be younger than the star.\nWhile most signatures of protoplanetary disks around 1--2 $M_{\\odot}$ stars disappear within 3--5 Myr, \nsome $\\sim$ 10--20\\% of such stars retain their disks through 5 Myr \\citep{Currie2009,CurrieSiciliaAguilar2011,Fedele2010}. \nSeveral 1--2 $M_{\\odot}$ members of Sco-Cen and h and $\\chi$ Persei apparently have even retained their disks for \nmore than 10 Myr \\citep{Pecaut2012,Bitner2010,Currie2007c}, \ncomparable to or greater than the age of $\\beta$ Pic. Models for even rapid planet formation by core accretion predict that \nseveral Myr elapse before the cores are massive enough to undergo runaway gas accretion at $\\beta$ Pic-like separations \n\\citep{KenyonBromley2009,Bromley2011}. \n\nIn Figure \\ref{lumevo} the open circles depict a case where \n$\\beta$ Pic b formed after 5 Myr, effectively making the planet 5 Myr younger than the star, where \nthe implied masses and radii overlap better with our atmospheric modeling-inferred values.\nThe overlap is even better for some hot-start models such as COND, which predict larger planet radii at $\\approx$ \n5--10 Myr than depicted here. Note that a young $\\beta$ Pic b as depicted in \nFigure \\ref{lumevo} with an implied mass of $M$ $\\ge$ 5 $M_{J}$ is still consistent with a scenario\nwhere the planet produces the warped secondary disk \\citep[cf.][]{Dawson2011}.\n\n\n\n\n\\section{Discussion}\n\\subsection{Summary of Results}\nThis paper presents and analyzes new\/archival VLT\/NaCo and Gemini\/NICI 1--5 $\\mu m$ photometry for $\\beta$ Pictoris b. \n These data allow a detailed comparison between \n$\\beta$ Pic b's SED and those of field brown dwarfs and other low-mass substellar objects such as \ndirectly imaged planets\/candidates around HR 8799 and $\\kappa$ And. Using a range of planet atmosphere models, we \nthen constrain $\\beta$ Pic b's temperature, surface gravity and cloud properties. Our study yields the following \nprimary results.\n\n\\begin{itemize}\n\\item \\textbf{1.} -- The near-IR ($JHK_{s}$) colors of $\\beta$ Pic b appear fairly consistent with the field \nL\/T dwarf sequence. Compared to other young, low-mass substellar objects, $\\beta$ Pic b's near-IR colors \nbear the most resemblance to late M to early T dwarfs such as Luhman 16B and $\\kappa$ And b. From its near-IR colors \nand color-magnitude positions, $\\beta$ Pic b's near-IR properties most directly resemble those of an L2--L5 dwarf.\n\n\\item \\textbf{2.} -- $\\beta$ Pic b's mid-IR properties identify a significant departure from the field L\/T dwarf sequence. \nThe planet is slightly overluminous at $L^\\prime$ and significantly overluminous at $M^\\prime$, with deviations from the field L dwarf \nsequence matched only by GSC 06214B and $\\kappa$ And b. The mid-IR portion of $\\beta$ Pic b's SED appears more like \nthat of a late L dwarf or low surface gravity mid L dwarf. The broadband $JHK_{s}L^\\prime$ photometry for \n$\\beta$ Pic b also closely resembles that of $\\kappa$ And b. However, it is unclear whether any object matches $\\beta$ Pic \nb's SED at all wavelengths for which we have measurements. 
Its 3.1 $\\mu m$ brightness and 3.8--5 $\\mu m$ spectral \nshape are particularly difficult to match.\n\n\\item \\textbf{3.} -- Compared to limiting-case atmosphere models E60 (large dust confined to very thin clouds), \nAE60\/A60 (large dust confined to moderately-thick\/thick clouds) and DUSTY (copious \nsmall dust everywhere in the atmosphere), $\\beta$ Pic b appears to have evidence for thick clouds consistent \nwith a high $T_{eff}$ and low surface gravity. We fail to find any E60\/AE60\/A60 model providing statistically significant fits \nover a surface gravity range of log(g) = 4--4.5 and any $T_{eff}$. The DUSTY models come much closer to yielding \nstatistically significant fits but mismatch the planet \nflux at $J$, $K_{s}$, [3.1], and $M^\\prime$. From these fiducial comparisons, we infer that $\\beta$ Pic b's atmosphere \nshows evidence for clouds much thicker than those assumed in the E60 models but is slightly less dusty \nthan the DUSTY models imply.\n\n\\item \\textbf{4.} -- Using thick cloud models with particle sizes slightly larger than those found in the \ninterstellar medium ($\\langle a \\rangle$ = 4 $\\mu m$), we can match $\\beta$ Pic b's SED in both the near and mid IR. \nAssuming planet radii appropriate for the \\citet{Burrows1997} `hot-start' models, we derive \nlog(g) = 3.80 and $T_{eff}$ = 1600 $K$ for $\\beta$ Pic b. Allowing the \nradius to freely vary leaves the surface gravity essentially \nunconstrained, where models consistent with the data at the 68\\% confidence limit include \nlog(g) = 3.6--4.25 and $T_{eff}$ = 1600 $K$. Considering departures from solar abundances \nand eliminating models that imply masses ruled out by dynamical estimates, \nthe acceptably fitting range of atmosphere parameters covers log(g) = 3.6--4 and \n$T_{eff}$ = 1575--1650 $K$. \n\n\\item \\textbf{5.} -- Using our best-fit atmosphere models and eliminating models inconsistent \nwith $\\beta$ Pic b's dynamical mass upper limit, within the hot-start formalism \nwe derive a mass of 7 $M_{J}$ for a fixed radius and 7$^{+4}_{-3}$ $M_{J}$ for a scaled radius. \nOur best-fit planet radius is $\\sim$ 1.65 $\\pm$ 0.06 $R_{J}$ and our best-fit luminosity is log(L\/L$_{\\odot}$) = -3.80 $\\pm$ 0.02.\n\n\\item \\textbf{6.} -- While our derived luminosity and radius for $\\beta$ Pic b rule out cold start models, \nthe radius is near the upper end of predicted radii for hot start-formed planets at $\\beta$ Pic's age. \nAs the planet only needs to be $\\sim$ 100 $K$ hotter to easily eliminate this discrepancy, it \nlikely identifies a limitation of the atmosphere models. Alternatively, if $\\beta$ Pic b has a significantly \nyounger age than the star's age, consistent with it forming late in the protoplanetary disk stage,\n our derived radius is comfortably within the range predicted by hot start models.\n\n\\end{itemize}\n\\subsection{Comparisons to Other Recent $\\beta$ Pictoris b Studies}\n\\subsubsection{Currie et al. 2011b}\nIn our first-look analysis of the atmosphere of $\\beta$ Pictoris b \\citep{Currie2011b}, \nwe compared its $K_{s}$, $L^\\prime$, and [4.05] photometry to an array of atmosphere models, from\natmospheres completely lacking clouds to those with the Model A-type thick clouds that extend to the \nvisible surface of the atmosphere. In that paper, we found that the AE thick cloud models from \\citet{Madhusudhan2011} \nyielded the smallest $\\chi^{2}$ value. 
The fits degraded at about the same level for \nthe Model A thick cloud and Model E ``normal\" L dwarf atmosphere prescriptions, while the cloudless case fared the worst.\n\\citet{Currie2011b} concluded that while the AE thick cloud model quantitatively produced the best fit, the existing data \nwere too poor to say whether the clouds in $\\beta$ Pictoris b were any different in physical extent, in \nmean dust particle size, etc. from those for field L dwarfs with the same range of temperatures.\n\nOur present study greatly improves upon the analyses in \\citet{Currie2011b}. First, our photometry covers seven \npassbands, not three, at 1.25--4.8 $\\mu m$, not 2.18--4.05 $\\mu m$. This expanded coverage allows far firmer \nconstraints on $\\beta$ Pic b's atmospheric properties. In particular, our new photometry strongly favors \nthe Model A thick-cloud prescription over AE, largely due to the relatively low planet flux densities at 1.25--1.65 $\\mu m$ \nand the relatively high flux densities at 3.1 $\\mu m$ and $M^\\prime$\/4.8 $\\mu m$, trends that the Model A cases consistently \nreproduce better. While all models considered in \\citet{Currie2011b} assumed a modal particle size of 60 $\\mu m$ for dust \nentrained in clouds, our fits improve if we use smaller sizes. The combined effect of thicker clouds and smaller particle sizes \nfavors atmosphere models with a slightly higher surface gravity and temperature than the best-fit model in \\citet{Currie2011b}.\nOur new data more clearly demonstrate the failure of the E models that are successful in fitting most of the field L dwarf \nsequence and thus better distinguish $\\beta$ Pic b's atmosphere from that of a typical cloudy field L dwarf.\n\n\\subsubsection{Bonnefoy et al. (2013)}\n\\citet{Bonnefoy2013} presented new photometry for $\\beta$ Pictoris b in the $J$, $H$, and $M^\\prime$ filters from \ndata taken in 2011 and 2012. The $J$ and $H$ detections are firsts and greatly expand the wavelength coverage \nfor $\\beta$ Pic b's SED. Their $M^\\prime$ detection is the first \\textit{well-calibrated} detection, building upon and \nfollowing the detection presented in \\citet{Currie2011b}, which lacked contemporaneous flux-calibration data to provide \nprecise photometry.\nThey then combined these measurements with their previously published $K_{s}$ and $L^\\prime$ photometry and [4.05] from \\citet{Quanz2010}. \n\n\nIn general, our study clarifies and modifies, instead of contradicting, \nthe picture of $\\beta$ Pictoris b constructed in \\citet{Bonnefoy2013}. \nOn the same datasets, the SNR of our detections is slightly higher \nbut our photometry agrees with theirs, derived from their CADI, RADI, and LOCI reductions,\nwithin their adopted photometric uncertainties ($\\sim$ 0.2--0.3 mag). \nWe derive smaller photometric uncertainties, owing to a more uniform \nthroughput as a function of azimuthal angle, probably due to our pixel masking technique \nand SVD cutoff in A-LOCI \\citep[see also][]{Marois2010b}. \nWe concur that the planet's mid-IR colors are unusually red and highlight a potentially strong, \nnew disagreement with field L dwarfs at 3.1 $\\mu m$.\n\nWe agree with \\citeauthor{Bonnefoy2013}'s general result that the best-fitting atmosphere models \nare those intermediate between the AMES-DUSTY models (submicron-sized dust everywhere) and \nthe COND or BT-Settl models (no dust\/clouds or very thin clouds). 
Quantitatively, the $\\chi^{2}$ \nvalues we derive are much larger than the best-fitting models in \\citeauthor{Bonnefoy2013} because \nour photometric uncertainties are significantly smaller (e.g. 7 vs. 3 for AMES-DUSTY).\nOur analyses point to thick clouds and particle sizes small compared to the range typically \nused in the \\citet{Burrows2006} models but larger than the ISM-like grains in the AMES-DUSTY \nmodels. The temperatures, surface gravities, and luminosities they derive are generally consistent \nwith our best-fit values. \n\n While they derive a lower limit to the initial entropy of \n9.3 $kB$\/baryon, we do not provide a detailed similar analysis since the inferred entropy range\ndepends on the planet radius which, considering our studies together, is very model dependent. Similarly, \nit depends on the planet mass (for which there still is some range) and the planet's age (which is very poorly constrained).\nStill, we agree that cold start models are ruled out for $\\beta$ Pic b as they fail to reproduce the inferred \nluminosity and radii of the planet determined from both our studies.\n\n\\subsection{Future Work to Constrain $\\beta$ Pic b's Properties}\nDeriving $\\beta$ Pic b's mass and other properties is difficult \nsince they are based on highly uncertain parameters such as the planet's age and its entropy at formation.\nHowever, dynamical mass limits can be derived from continued radial-velocity measurements \\citep{Lagrange2012a}. \nAs these limits depend on $\\beta$ Pic b's orbital parameters, future planet astrometry may be particularly \nimportant in constraining $\\beta$ Pic b's mass. If $\\beta$ Pic b is responsible for the warp observed in \nthe secondary debris disk \\citep{Golimowski2006}, planet-disk interaction modeling can likewise yield \na dynamical mass estimate \\citep{Lagrange2009,Dawson2011} provided the planet's orbit is known.\n\nFinally, while our models nominally assume solar abundances, we showed that changing the methane abundance \nmight yield marginally better fits to the data. Near-infrared \nspectroscopic observations of $\\beta$ Pic b as can be done soon with $GPI$ and $SPHERE$ may clarify \nits atmospheric chemistry. Future observations with GMTNIRS on the \\textit{Giant Magellan Telescope} should \nbe capable of resolving molecular lines in $\\beta$ Pic b's atmosphere \\citep{Jaffe2006}, providing a more detailed look \nat its chemistry, perhaps even constraining its carbon to oxygen ratio and formation history \\citep[e.g][]{Oberg2011,Konopacky2013}.\n\\acknowledgements \nWe thank Christian Thalmann, France Allard, and the anonymous\nreferee for helpful comments and discussions and\nMichael Cushing for providing IRTF\/SPeX and Subaru\/IRCS spectra of field L dwarfs.\nWe are grateful to the \ntelescope staffs at ESO Paranal Observatory and Gemini-South Cerro Pachon Observatory for support \nfor our observations, all of which were obtained with ``delegated visitor mode\" or ``eavesdropping mode\".\nFinally, we thank Christian Marois for very detailed discussions on image processing techniques and \nextensive helpful suggestions that improved this manuscript. \nT. C. acknowledges support from a McLean Postdoctoral Fellowship.\nR. D. 
acknowledges NSF-GRFP grant DGE-1144152.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFacing a steady stream of mis-and disinformation promulgated by digital-age technologies \\cite{vosughi}, a new strand of research investigates the (partial) automation of claim checking using Natural Language Processing (NLP) methods \\cite[e.g.][]{vlachos_riedel, fever_dataset, content_managament_perspective}. The datasets underlying automated claim-checking systems generally have two components. First, there is a set of annotated claims, which can for example be true, false, or non-verifiable. Second, there is a knowledge base (KB) or fact base, comprising a corpus of dependable documents which can be queried and consulted to assess the veracity of the claims. From this KB, we retrieve the required evidence, which in turn leads to the veracity prediction for a claim. \n\n\n\\begin{table*}[ht!]\n \\centering\n \\begin{tabular}{p{2cm} p{6.4cm} p{6.4cm}} \\toprule \n Claim & Wikipedia Evidence & Evidence from Scientific Abstracts \\\\ \\hline\n Global warming is driving polar bears toward extinction. \\textbf{True label: \\textsc{Supported}} & Habitat destruction: {\\color{blue}Rising global temperatures}, caused by the greenhouse effect, {\\color{blue}contribute to habitat destruction}, endangering various species, such as the {\\color{blue}polar bear}. Polar bear: \"Bear hunting caught in global warming debate\". Global warming: Rising temperatures push bees to their physiological limits, and could cause the extinction of bee populations. Extinction risk from global warming: \"Recent Research Shows Human Activity Driving Earth Towards Global Extinction Event\". \\textbf{predicted label: \\textsc{Not Enough Info}} & \\textcolor{blue}{Polar bears will largely disappear} from the southern portions of their range by mid-century (Stirling and Derocher, 2012). While the {\\color{blue}polar bear is the most well-known species imperiled by global warming}, and the first to be listed under the ESA solely due to this factor, it was not the first species protected under the statute in which global warming played a significant role. This highly publicized milestone firmly cemented the {\\color{blue}polar bear as the iconic example of the devastating impacts of global warming} on the planet's biodiversity (Cummings and Siegel, 2009). \\textbf{predicted label: \\textsc{Supported}} \\\\ \n The main greenhouse gas is water vapour \\textbf{True label: \\textsc{Supported}} & Greenhouse gas: \"AGU Water Vapor in the Climate System\". Global warming: As {\\color{blue}water is a potent greenhouse gas, this further heats the climate}: the water vapour feedback. Global warming: The main reinforcing feedbacks are the water vapour feedback, the ice\u2013albedo feedback, and probably the net effect of clouds. \\textbf{predicted label: \\textsc{Not Enough Info}} & \\textcolor{blue}{Water vapour is the most abundant and powerful greenhouse gas} in Earth's atmosphere, and is emitted by human activities (Sheerwood et al., 2018). {\\color{blue}Water vapour is a key greenhouse gas} in the Earth climate system (Trent et al., 2018). 
\\textbf{predicted label: \\textsc{Supported}}\n \\\\ \n \\end{tabular}\n \\caption{Example Claims, with Verification using Evidence from Different Knowledge Bases}\n \\label{tab:climate-fever-qualitative-analysis}\n\n\\end{table*}\n\n\nIn the previous literature on textual claim verification, the claims and the KB have been seen as part of a single unified whole. For example, FEVER consists of synthetic claims derived from Wikipedia sentences and uses Wikipedia itself as the KB \\cite{fever_dataset}. Another dataset, SciFact, considers scientific claims and evidence from a selected sample of scientific paper abstracts \\cite{scifact}. The task in these datasets then consists of finding the right evidence from the respective KB and to predict the correct veracity of the claim.\n\n\n\n\nThis self-contained approach to automated claim-checking certainly simplifies the scope of the problem. But it imposes a significant constraint on the practical usefulness and transferability of the resulting systems. By design, these systems will be specialized in the specific domain and less effective for checking diverse real-world claims. Consistent with this idea, some recent work has shown that a claim-checking system that queries Wikipedia performs poorly at checking claims from a climate-change-focused task \\citep{diggelmann2021climatefever}, from scientific paper abstracts \\citep{scifact}, or even from general-knowledge claims that are phrased more journalistically or colloquially \\citep{thorne2021evidencebased,kim-etal-2021-robust}. \n\nThus, the most recent work has recognized a need for closer attention on the choice of knowledge base in automated claim checking \\cite[e.g.][]{content_managament_perspective,claim_review_climate_science}. This recognition arises in parallel with stunning NLP breakthroughs brought by self-supervised learning applied to massive, curated corpora, hinting at potential gains from a more data-expansive approach \\citep[e.g.][]{bert,gpt2,gpt3}. As has been recently explored in other fields \\cite{mlops}, it is likely that automated claim checking could benefit from a more \\textit{data-centric} approach, which seeks performance gains via data work, rather than the standard model-centric approach, which focuses on model architectures \\cite{mlops}.\n\nThis paper puts the data-centric perspective into practice. In the standard approach, the data is held constant, and the claim-checking system is developed to improve performance on existing benchmarks \\cite[see e.g.][]{fever_dataset, scifact, augenstein2019multifc, kotonya2020explainable}. We do the opposite: The claim-checking system is held constant, while we systematically permute the KB corpus and the dataset of checked claims. Thus we generalize the process of automated claim-checking by detaching the claims from the KB. \n\nTable \\ref{tab:climate-fever-qualitative-analysis} illustrates the core intuition, with two climate-science-related claims listed in Column 1. A claim-checking system using a KB of Wikipedia articles does not produce sufficient evidence to resolve the claim (Column 2). When using a KB of scientific abstracts, however, the same claim-checking system recovers stronger supporting evidence and can reproduce the ground-truth label prediction (Column 3). Put simply, a scientific claim requires a scientific KB for a claim-checking system to predict the ground-truth label. This paper explores the generality of this insight. \n\nOur experimental setup works as follows. 
We take a standard claim-checking system that performs evidence retrieval followed by natural language inference between the evidence and the claim \\citep{e-fever}. We then consider a battery of claim verification tasks and knowledge bases. For each pair, we apply the pipeline without any further training.\\footnote{A similar approach is taken on a smaller scale by \\cite{claim_review_climate_science}, who retrieved evidence for Climate FEVER from Wikipedia and a subset of PubMed.}\n\nWe perform claim-checking on six labeled claim tasks. Besides two tasks based on Wikipedia \\citep{fever,foolmetwice} and one on scientific paper abstracts \\citep{scifact}, we have a task on climate-change-related claims \\citep{diggelmann2021climatefever}, one on statements from the 2016 presidential debates \\citep{clef_2019}, and one on real-world information needs based on search-engine queries \\citep{thorne2021evidencebased}. Our paper is the first to take on all of these diverse claim verification tasks using a single system. In turn, we confront these diverse verification tasks with knowledge bases from diverse domains. We use KB's on general knowledge (the universe of Wikipedia article summaries), from the scientific domain (a scientific KB consisting of more than 70Mio scientific abstracts), and from the journalistic domain (the universe of recent New York Times articles). In addition, we experiment with combining these knowledge bases, further generalizing the effort by \\citet{fakta} combining Wikipedia with a corpus of online news articles. Finally, we compare them to \"the whole internet\" queried via a search engine \\citep[e.g.][]{popat_2016, karadzhov-etal-2017-fully, augenstein2019multifc, clef_2019}. \n\nIn the experiments, we observe large differences in performance for claims from the various datasets using different knowledge bases. In particular, we show that a claim-checking pipeline can obtain good performance in a new domain (e.g., from Wikipedia to science articles) without additional model training, as long as it has access to a KB from the new domain. In line with Table \\ref{tab:climate-fever-qualitative-analysis}, intuitively, we obtain the highest label accuracy for each claim task using the KB that most closely matches the domain of the claims. In general, lower performance is driven by failure to retrieve the required evidence and by assigning support\/refute predictions to non-verifiable claims, rather than making incorrect support\/refute determinations. \n\nMeanwhile, the union of multiple knowledge bases tends to perform similarly to the single KB from the closest domain. Sometimes, the combined KB performs worse. Thus, the issue of KB choice cannot be easily resolved by pooling all available knowledge bases. However, we ask whether the evidence pipeline metrics are predictive of system performance. We show that the bm25 similarity between a claim and the closest KB documents performs poorly as a metric. However, the confidence score produced by the evidence selection module (a fine-tuned RoBERTa-based classifier) performs well. Thus, this evidence quality metric could be used to forecast system performance in new domains without labels. We demonstrate the usefulness of data-driven KB selection by showing that the overall best system across all claim-verification tasks is obtained by selecting the KB with the highest evidence quality score by task. \n\nThese findings highlight the pivotal role of the knowledge base for automated claim checking. 
We provide additional evidentiary support for previous papers making this and related points \\cite{content_managament_perspective,claim_review_climate_science, goasdou_et_al}. We hope these results motivate the enhancement of existing knowledge bases and production of new ones. Further, it could be that additional performance gains are achievable through joint optimization of the KB and the claim-checking system. \n\nThe insights from this paper might be applicable in other settings besides text-based claim verification. For example, the benefits from focusing on unstructured knowledge bases might also pay out in the case of structured knowledge graphs \\citep{wilcke2017knowledge}, leading to improvements in non-textual claim verification systems \\citep{tchechmedjiev2019claimskg, tien_duc_cao, scrutinizer}. Meanwhile, open-domain question answering tasks have much in common with automated claim verification \\cite{zhu2021retrieving}. One can easily imagine similar performance gains from selecting KB's according to the question type before searching for answers. \n\n\n\\section{Methods}\\label{sec:data}\n\n\\subsection{Overview}\n\nThe claim-checking task can be summarized as follows. We have a set of claims $C=(X, Y)$, consisting of a plain-text claim $x$ and a three-class veracity label, $y\\in$\\{\\textsc{Supported}, \\textsc{Refuted}, \\textsc{Not Enough Info}\\}. In addition, we have access to a knowledge base (KB) $B$, comprising a list of plain-text evidence statements indexed by $b$. The objective of a claim-checking system is to take as input a claim $x$ and the KB $B$ and learn a prediction function $\\hat{y}(x,B)$. \n\nDue to computational constraints, the system cannot take as input all text in $B$, so we split the problem into two steps. First, a retrieval step identifies a set of candidate evidence statements $\\hat{b}(x,B)$ that serve as a sufficient statistic for the KB $B$ in the determination of claim $x$. Then the veracity prediction function is learned using the retrieved evidence, $\\hat{y}(x,\\hat{b}(x,B))$. \n\n\n \\begin{figure}[!tp]\n \\caption{Overview of the Claim-checking Task}\n \\label{fig:system_detail}\n \\centering\n \\includegraphics[scale=0.5]{figures\/system-detail.png}\n \n \\end{figure}\n\nWe show an illustration of our claim-checking approach in Figure \\ref{fig:system_detail}. As we can see, the task decomposes in a data part and a modelling part. Previous work has mainly focused on optimizing $\\hat{b}(\\cdot)$ and $\\hat{y}(\\cdot)$ for a given task and associated KB, i.e., allocated resources to optimize the models holding the data fixed. Our approach is different. We hold the system ($\\hat{b}(\\cdot)$, $\\hat{y}(\\cdot)$) fixed and vary the inputs ($C,B$). \n\nGoing forward, we can refer to a particular claim task as $C_i$, indexed by $i$, and a particular KB as $B_j$, indexed by $j$. Figure \\ref{fig:experimental_setup} provides an illustration, taking as an example $C_i$=SciFact. For a given scientific claim $x$, we retrieve relevant evidence $\\hat{b}(x,B_j))$ from the respective knowledge bases $B_j$. Given a claim and retrieved evidence, the model predicts a claim veracity $\\hat{y}(x,\\hat{b}(x,B_j))$, which can be compared to the true value $y$. 
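This decomposition can be summarized with a small, schematic Python snippet (the names are ours and purely illustrative, not from the released implementation): a fixed pair of functions $\\hat{b}(\\cdot)$ and $\\hat{y}(\\cdot)$ is evaluated on one (claim task, KB) pair at a time.

\\begin{verbatim}
# Schematic view of evaluating a fixed pipeline on one (claim task, KB) pair.
from typing import Callable, List, Tuple

Claim, Label = str, str          # labels: SUPPORTED / REFUTED / NOT ENOUGH INFO
KnowledgeBase = List[str]        # plain-text evidence documents

def label_accuracy(claims: List[Tuple[Claim, Label]],
                   kb: KnowledgeBase,
                   retrieve: Callable[[Claim, KnowledgeBase], List[str]],  # b_hat
                   predict: Callable[[Claim, List[str]], Label]            # y_hat
                   ) -> float:
    # Fraction of claims whose predicted veracity matches the annotated label.
    hits = sum(predict(x, retrieve(x, kb)) == y for x, y in claims)
    return hits / len(claims)
\\end{verbatim}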
This process is repeated and average label accuracy is reported for each pair ($C_i,B_j$).\n\n\\begin{figure}[!tp]\n \\caption{Overview of Experiment Setup}\n \\label{fig:experimental_setup}\n \\centering\n \\includegraphics[scale=0.11]{figures\/experimental_setup.png}\n\\end{figure}\n\n\nThe rest of this section provides additional detail on this procedure. Respectively, we discuss the claim verification pipeline (2.2), the claim verification tasks (2.3), and the sourced knowledge bases (2.4).\n\n\\subsection{Claim Verification Pipeline}\n\nOur claim verification pipeline ($\\hat{b}(\\cdot)$, $\\hat{y}(\\cdot)$) is held fixed across experiments. The starting point is \\citet{e-fever}, a system initially designed for claim-checking FEVER's synthetic claims using Wikipedia as the KB ($C_i$=FEVER, $B_j$=Wikipedia). Our only modification consists of adjusting the document retrieval step to work for arbitrary inputs ($C_i$,$B_j$). \n\nOur choice for using the \\citet{e-fever} system is twofold. First, it is the best performing FEVER system for which we found the code and models available.\\footnote{We used the code provided here: \\url{https:\/\/github.com\/dominiksinsaarland\/domlin_fever}.} Second, the approach is close to the original FEVER baseline system's (augmented with more powerful Transformer architectures), and thus a sensible standard choice when the focus is on the knowledge base. Nevertheless, our results are not contingent on using the \\citet{e-fever} system. In Appendix Table \\ref{tab:results_zero_shot_fever_kgat}, we can replicate our main results using another publicly available FEVER system \\cite{kgat}.\n\nThe first step is retrieval of relevant documents. For a given claim $x$, all documents $b$ in $B$ are ranked by bm25($x$,$b$), the bm25 similarity between $b$ and $x$ \\citep{anserini}.\\footnote{This document retrieval step is the major difference with \\citet{e-fever}, which was too specialized for Wikipedia -- in particular, using the MediaWiki API and following hyperlinks.} The top-five documents on this ranking, denoted as $\\hat{B}_5(x)$, are retrieved as containing potential evidence.\n\nThe second step is selection of evidence statements from the retrieved documents. After splitting the sentences using spaCy \\citep{spacy2}, each sentence $s \\in \\hat{B}_5(x)$ is assigned a score $\\hat{e}(s,x)\\in(0,1)$, interpretable as the probability that $s$ provides evidence about the claim $x$. The evidence scoring function $\\hat{e}(\\cdot)$, borrowed directly from \\citet{e-fever}, is a RoBERTa-based binary text classifier trained to identify evidence statements in an annotated Wikipedia-based dataset. Let $\\hat{s}_5(x,B)$ be the top-five sentences ranked by evidence score, across the sentences in all retrieved documents $\\hat{B}_5$. These sentences provide the evidence supplied to the veracity classifier. That is, $\\hat{b}(x,B)=\\hat{s}_5(x,B)$. \n\nThe third step of the pipeline is to make a veracity determination based on the claim $x$ and extracted evidence $\\hat{s}_5(x,B)$. This module consists of a ternary text classifier assigning predicted probabilities across the classes $\\hat{y}\\in$\\{\\textsc{Supported}, \\textsc{Refuted}, \\textsc{Not Enough Info}\\}. Again borrowed directly from \\citet{e-fever}, the veracity classifier is a fine-tuned RoBERTa model using FEVER's annotated dataset of claims and evidence from Wikipedia. 
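To make the three steps concrete, the sketch below (our simplified illustration) mirrors the pipeline for a small in-memory KB; in the actual system, document retrieval uses Anserini's bm25 indexes and the two scoring functions are the fine-tuned RoBERTa classifiers described above, which we represent here as generic callables.

\\begin{verbatim}
# Simplified sketch of the three-step pipeline (retrieval, evidence selection, veracity).
from rank_bm25 import BM25Okapi   # lightweight stand-in for the Anserini bm25 index
import spacy

nlp = spacy.load("en_core_web_sm")

def verify(claim, kb_docs, evidence_scorer, veracity_classifier, k_docs=5, k_sents=5):
    # 1) Document retrieval: rank all KB documents by bm25 similarity to the claim.
    bm25 = BM25Okapi([doc.split() for doc in kb_docs])
    scores = bm25.get_scores(claim.split())
    order = sorted(range(len(kb_docs)), key=lambda i: -scores[i])[:k_docs]
    top_docs = [kb_docs[i] for i in order]
    # 2) Evidence selection: score each sentence against the claim, keep the top ones.
    sentences = [s.text for doc in top_docs for s in nlp(doc).sents]
    evidence = sorted(sentences, key=lambda s: -evidence_scorer(claim, s))[:k_sents]
    # 3) Veracity prediction from the claim and the selected evidence sentences.
    return veracity_classifier(claim, evidence)
\\end{verbatim}

In practice, of course, the bm25 index is built once per knowledge base rather than once per claim; the per-claim construction above is only meant to keep the sketch self-contained.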
For training and inference, the claim statement and the evidence statements are concatenated as a single string.\n\n\n\n\\subsection{Claim Verification Tasks}\n\nFor the set of claims $C=(X,Y)$, we examine six different tasks. For each task, we evaluate all claims in the official development set, if present. We include more detailed descriptions in Appendix \\ref{appendix:claims}, for example on how the labels were configured to make them comparable across tasks. Summary statistics on these tasks -- e.g. number of examples, class distributions -- are reported in Appendix Table \\ref{tab:dataset_descriptions}. Appendix Table \\ref{tab:example_claims1} shows a sample of data points for each task. \n\n\\begin{enumerate} \n \\itemsep2pt \n \\item \\textbf{FEVER} is a large-scale task based on Wikipedia sentences, and then manually verified by a second set of annotators using Wikipedia who did not know the source of the claim \\citep{fever}. We use the first 2K claims from the development set.\\footnote{One of our examined KBs is the Google API. Due to rate limits of 100 queries a day, it was not possible to experiment on all 19'998 claims from the FEVER development set.}\n \\item \\textbf{SciFact} SciFact considers the task of scientific claim verification \\cite{scifact}. The claims are generated from sentences in citing articles by annotators with expertise in scientific NLP and life science, based on a corpus of 5,183 scientific abstracts that have been sampled from Semantic Scholar \\citep{semanticscholar}. We ran our experiments on the 300 claims from the development set. \n \\item \\textbf{Climate FEVER} contains new annotated claims related to climate change, with a mixture of journalistic and scientific claims \\citep{diggelmann2021climatefever}. Because of the lack of a development set in this task, we considered all 1,381 claims which were not disputed in our experiments. \n \\item \\textbf{Presidential Speeches} is a task covering two presidential debates and one vice presidential debate held in the 2016 U.S. elections \\cite{clef_2019}. It contains 70 annotated true or false claims on which we base our analysis. We excluded half-true sentences and sentences without any annotation. \n \\item \\textbf{Real-World Information Needs} is a task with annotated claims based on search engine queries, representing real-world information needs \\cite{thorne2021evidencebased}. We include the development set of this task. Similar to climate FEVER, we produce an aggregated label for each claim and exclude claims with conflicting annotations, resulting in 930 claims.\n \\item \\textbf{Fool Me Twice} is a task containing challenging entailment pairs gathered through a multi-player game based on Wikipedia \\cite{foolmetwice}. The aim of the game is to construct claims which are difficult to verify. We use the 1,169 claims from the development set.\n\\end{enumerate}\n\n\n\\subsection{Knowledge Bases} \n\nThe component of interest in our study is the knowledge base (KB) $B$. Each KB, described in this subsection, is a set of plain-text documents. Appendix Table \\ref{tab:stats_knowledge_bases} reports summary statistics on each KB, including the number of documents and the average document length. Appendix Table \\ref{tab:examples_knowledge_bases} shows a sample of sentences from each KB. 
In the following, we describe the four knowledge bases in more detail.\n\n\\paragraph{General-Knowledge Domain: Wikipedia.}\nThe first KB is a corpus of the introductory summaries for all English-language Wikipedia articles (N = 5.5 million) as of 2017. This corpus was used to construct the FEVER claim-checking task \\citep{fever_dataset}. It is a general-knowledge crowd-sourced encyclopedia emphasizing breadth rather than depth. \n\n\\paragraph{Scientific Domain: Scientific Abstracts.}\n\nThe scientific KB is built using a large corpus of scientific abstracts. These abstracts are compiled from three sources. The bulk of the abstracts (61Mio abstracts) come from Scopus, a large-scale database managed by Elsevier. Another 8Mio abstracts are added from CrossRef. The last 8Mio abstracts come from the Semantic Scholar Project \\cite{semanticscholar}. In total, the KB covers 77Mio abstracts from scientific articles in all scientific fields. \n\n\\paragraph{Journalistic Domain: N.Y. Times.}\n\nThe third KB is from the news-media domain, as investigated by\n\\cite{fakta, ferreira-vlachos-2016-emergent, fakenewschallenge}. For this purpose, we use all full-text articles crawled from the New York Times web site, published between January 2000 and March 2021 (N = 2 million). \n\n\\paragraph{\"The Whole Internet\": Google Search API.}\nWe have a fourth KB representing the statements available on the searchable internet. This is arguably the broadest possible domain. For this KB, our document retrieval system works differently. We use the Google Search API to retrieve documents from the web. Each query is the plain claim, for which we retrieve 10 hits from the API. We use these as the retrieved documents $\\hat{B}_5$.\\footnote{Arguably, giving the Google API 10 hits rather than 5 hits is not a fair comparison with the other databases. We did this because the snippets are very short. We also produced our main experiment results limiting to just the five most relevant snippets and the results were nearly identical.} After that, the pipeline proceeds normally with the evidence statements selected from the preview snippets.\n\n\\paragraph{All KB's.}\n\nBesides the four individual knowledge bases, we analyze a synthetic KB that includes the union of the four domains. For each claim, the top-five documents in each KB (so 20 documents in total) are included as potential evidence. Then the top-five sentences by the score $\\hat{e}$, across all 20 documents, are used as evidence.\n\n\\paragraph{No KB.}\n\nAs a minimal baseline, we produce a prediction using just the claim as input without considering any evidence. By construction, this setting almost always produces the label \\textsc{Not Enough Info}.\n\n\\paragraph{\"Best Evidence\" KB.}\n\nWe use an unsupervised measure to automatically choose the most appropriate KB for a given claim verification task. 
This measure will be discussed in more detail in Section \\ref{sec:evidence_quality}.\n\n\\section{Results}\n\n\n\\begin{table*}[ht!]\n\\caption{Label Accuracy Across Claim Verification Tasks Using Different Knowledge Bases}\n \\label{tab:results_zero_shot_fever}\n\\centering\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{l | c c c c c c | c}\n \\hline \n & \\multicolumn{6}{c}{\\textbf{Claim Verification Task}} \\\\ \n \\textbf{Knowledge Base} & FEVER & SciFact & Climate & Presidential & Real-World & Fool Me Twice & Avg.\\\\ \\hline\nWikipedia & \\textbf{74} & 39 & \\underline{43} & 1 & \\underline{38} & \\underline{24} & 37 \\\\\nScience Abstracts & 47 & \\textbf{60} & \\textbf{45} & 1 & 31 & 9 & 32 \\\\\nNYTimes & 55 & 39 & 43 & \\underline{10} & 36 & 16 & 33 \\\\\nGoogle API & \\underline{72} & \\underline{50} & 36 & \\textbf{26} & \\textbf{61} & \\textbf{40} & 47 \\\\\n\n\\hline\nNone & 38 & 38 & 34 & 0 & 14 & 0 & 21 \\\\ \nAll & 75 & 59 & 47 & 21 & 54 & 41 & 50 \\\\\nBest Evidence & 74 & 60 & 43 & 26 & 61 & 40 & \\textbf{51} \\\\\n\\hline\n\n\\end{tabular}\n\\flushleft \\small{\\textit{Notes}. Development set label accuracy of different claim-checking tasks (columns) using different KB domains (rows). Best results by task\/column in bold, second-best underlined. In the last column, we show average scores of a KB on all experiments. In the bottom half of the table, we show results of synthetic experiments.}\n\\end{table*}\n\n\\subsection{Main Results}\n\nThe results of our main analysis are reported in Table \\ref{tab:results_zero_shot_fever}. Each cell reports the experiment's label accuracy for the column-indicated task using the row-indicated knowledge base (KB). The last column provides the averaged results for a given KB on the different claim verification tasks. \n\nThe third-to-last last row corresponds to the minimal baseline inputting the claim but without any evidence (and therefore mechanically predicting \\textsc{Not Enough Info}). The penultimate row shows results using the union of all knowledge bases, to be discussed in Subsection \\ref{sec:all-KB}. The last row indicates results using an unsupervised measure to automatically detect the most appropriate KB, to be discussed in Subsection \\ref{sec:evidence_quality}. \n\nIn the rest of this subsection, we provide a detailed discussion by claim verification task (that is, by column). Throughout the discussion, we refer to Appendix Table \\ref{tab:conf-mat-all}, which displays full confusion matrices for the results of each experiment. Appendix Table \\ref{tab:bootstrapped_main_results} provides results with bootstrapped confidence intervals.\n\n\\paragraph{FEVER.}\n\nFirst, it is not surprising that the best label accuracy for the FEVER task (74\\%) was achieved using Wikipedia. The FEVER task consists of synthetic claims based on Wikipedia articles, and our claim-checking pipeline was originally designed to work on FEVER by consulting Wikipedia articles.\\footnote{Notably, the change to bm25 for document retrieval obtains comparable results to the original system in \\citet{e-fever}} Thus, it is also not that surprising that this experiment gets the highest accuracy across all single-KB experiments.\n\nThe second-best results are obtained using the Google Search API (72\\%). We manually investigated some examples and find that the API often returns the required Wikipedia page to verify a claim. Meanwhile, the NYT and Science KB's perform significantly worse. 
Consulting the confusion matrices (Appendix Table \\ref{tab:conf-mat-all}), we see that with both of these KB's, the pipeline almost always predicts \\textsc{Not Enough Info}. Intuitively, with these out-of-domain knowledge bases, the pipeline cannot recover enough useful evidence to check the claims. \n\n\\paragraph{SciFact.}\n\nThe second task, SciFact, consists of scientific claims. As the Science Abstracts KB comprises abstracts from scientific articles, it is not surprising that this KB delivers the best label accuracy. At 62\\% accuracy, the Science KB is significantly and proportionally better than the second-best Google API at 50\\%.\n\nStepping back for a moment, the strong level of performance (62\\%) is itself notable. That performance level is comparable to baseline systems which were trained on SciFact, notwithstanding that our pipeline has never before seen the SciFact claims task data. On this task, as we will see again in the other tasks, our system that has only seen FEVER claims can still deliver good performance. \n\nHowever, the new KB is a necessary ingredient in that performance. Using the pipeline's native KB, Wikipedia, delivers poor performance on SciFact. As with the NYT KB, performance with Wikipedia is barely better than the no-evidence baseline. As can be seen with the confusion matrices, these out-of-domain KB's predict almost exclusively the third class due to lack of evidence.\n\n\\paragraph{Climate-FEVER.}\nClimate-FEVER contains a mix of journalistic and scientific claims about climate change. Reflecting the relevance of the scientific claims, the best label accuracy (45\\%) is achieved using the Scientific KB. However, the difference is not nearly as stark as with SciFact: Wikipedia (42\\%) and NYTimes (40\\%) are not that much worse. The confusion matrices suggest that none of the KB's are well-suited to this task, because all of them generate way too many \\textsc{Not Enough Info}'s. Interestingly, this is the only task where the Google API returns the worst results. \n\n\\paragraph{Presidential Debates.}\nIn the presidential debates task, we obtain best results using the Google Search API -- yet still quite low at 26\\%. The NYT KB is worse at 10\\%,\\footnote{This result illustrates the over-all usefulness of the New York Times KB, which performs consistently better than a claim-only baseline and on par with Wikipedia for the experiments which are not based on Wikipedia claims. We expect such a journalistic KB would be even better if bigger and more carefully curated (for example removing quotes).} while the other databases are close to zero. This lower performance likely reflects that the style of speaking in presidential debates -- political language -- is far away from the encyclopedic, scientific, and journalistic domains. In addition, the lower average than the other categories reflects that in this task, there are no true examples of \\textsc{Not Enough Info}. In the confusion matrix, we find that barely any of the false claims could be identified across all experiments, while with some evidence from e.g. Google, we managed to verify some of the true statements.\n\n\n\n\\paragraph{Real-World Claims.}\n\nFor the real-world claims task, we achieve by far the best label accuracy again using the Google Search API -- a respectable 61\\%. This is proportionally quite a lot better than the other knowledge bases, including the second-place Wikipedia (38\\%). 
One interpretation of this result is that the writing style of the claims, plain English, is unlike the more specialized language in the three main KBs. Google Search works well given its access to many plain-English web sites. As the confusion matrices illustrate, the differences are not as extreme as with the specialized tasks, and each KB does better than the minimal baseline.\n\n\\paragraph{Fool Me Twice.}\n\nThe KB originally associated with this task is again Wikipedia, and we find the same trend observed in the Real-World Claims task. The best results come from the Google Search API, followed by the other three knowledge bases (with Wikipedia performing second-best). As seen in the confusion matrices, the results are again driven by a lack of retrieved evidence: the evidence retrieval system trained on FEVER cannot yet deal with the challenging problem formulations in Fool Me Twice. \\\\\n\n\n\\noindent Overall, we find mixed results in these experiments and conclude that there is no \\textit{``universally best''} knowledge base. We find that the broadest KB, the Google API, is often a reasonable default, but it is outperformed by more suitable KBs on half of the tasks examined. Hence, one should be careful when selecting the KB for an automated claim-checking system. \n\nMoreover, we have an indication of where the systems tend to fail. Appendix Table \\ref{tab:wiki-conf-mat} shows a single aggregated confusion matrix that sums across all the single-KB tasks. That matrix illustrates that the \\textsc{Supported} and \\textsc{Refuted} classes have high precision. If one looked only at the top-left $2 \\times 2$ submatrix, one would infer a highly performant system, as it is rare for the model to make an incorrect decision between \\textsc{Supported} and \\textsc{Refuted}. Moreover, it is also rare for the model to mistakenly assign a veracity value (\\textsc{Supported} or \\textsc{Refuted}) when the true value is \\textsc{Not Enough Info}. The bulk of the model's mistakes consist of incorrectly predicting \\textsc{Not Enough Info}. \n\nThis tendency is reassuring, as it suggests that the system is somewhat ``modest'' in its classifications. If the facts needed to verify a claim are missing from a KB, our system at least predicts that the claim is not verifiable. Conversely, high label accuracy corresponds to the facts actually being present in a KB.\n\n\\subsection{Combining Knowledge Bases}\n\\label{sec:all-KB}\n\nFor each claim-checking task, we run an additional experiment considering the top-five document hits from all the individual KBs (a minimal sketch of this pooling step is shown below). This modified system gives the evidence selection module access to additional documents. Since all of the useful evidence from the individual KBs is available, one might expect that combining databases would only improve performance. \n\nAs shown in Table \\ref{tab:results_zero_shot_fever} (penultimate row), however, this is not the case. The combined KB is similar in performance to the best single KB. With FEVER, Climate-FEVER, and Fool Me Twice, the combined KB is only slightly better than any single KB.\\footnote{For FEVER, for example, we found in manual inspection that adding the Google API corrected some of the mistakes made by bm25 on Wikipedia alone by retrieving the required Wikipedia page.}\nThis tiny increase in performance is perhaps most surprising for Climate-FEVER, where the claims come from both scientific and journalistic domains. In that context, especially, one might expect benefits from combining scientific and journalistic knowledge bases. However, that expectation is not borne out in the experiments.\n
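\nTo make this pooling step concrete, the following is a minimal sketch of the combined-KB retrieval, assuming a hypothetical \\texttt{retrieve\\_top\\_k} helper that wraps the per-KB bm25 ranker; the names are illustrative and not taken from the actual implementation.\n\n\\begin{verbatim}\n# Illustrative sketch of the combined-KB retrieval step.\n# retrieve_top_k(kb, claim, k) is a hypothetical helper wrapping the\n# per-KB bm25 ranker; it returns the k highest-scoring documents.\nKBS = ['wikipedia', 'science_abstracts', 'nytimes', 'google_api']\n\ndef retrieve_combined(claim, k=5):\n    # Pool the top-k bm25 hits from every knowledge base; the evidence\n    # selection module then scores sentences from all pooled documents,\n    # regardless of which KB they came from.\n    pooled = []\n    for kb in KBS:\n        pooled.extend(retrieve_top_k(kb, claim, k))\n    return pooled\n\\end{verbatim}\n\nNote that nothing in this pooled step prefers evidence from the claim's own domain: all candidate sentences compete on the same evidence scoring function, whichever KB they come from.\n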
\n\\begin{figure}[hbt!]\n \n \\caption{Evidence Retrieval Errors When Combining Knowledge Bases}\n \\label{fig:error_analysis}\n \n \\centering\n \\scriptsize\n \\begin{tabular}{p{6cm}} \n\\textbf{(a) Claim:} Autologous transplantation of mesenchymal stem cells causes a higher rate of opportunistic infections than induction therapy with anti-interleukin-2 receptor antibodies. \\\\ \\textit{[claim derived from \\citet{scifact_devset_error_analysis}]} \\\\\n\\textbf{Highest Scoring Evidence from Science KB}: Among patients undergoing renal transplant, the use of autologous MSCs compared with anti-IL-2 receptor antibody induction therapy resulted in lower incidence of acute rejection, decreased risk of opportunistic infection, and better estimated renal function at 1 year \\\\ \\textit{[evidence from \\citet{scifact_devset_error_analysis}, in Science Abstracts KB]} \\\\\n\\textbf{Highest Scoring Evidence from All Knowledge Bases}: [...] In high-risk kidney transplant recipients, induction therapy with rabbit but is associated with significant toxicity, opportunistic infections, and cancer. Using reduced doses of RATG combined with anti\u2013IL-2 antibodies may achieve the mAb against the CD25 subunit of the IL-2 receptor (IL-2R) on activated T cells, such as Tocilizumab, a humanized monoclonal antibody against the IL-6 receptor already used in Anti-TNF-na\u00efve patients had higher remission and response rates than safety profile without any risk of serious or opportunistic infections [147] [...] \\\\ \\textit{[evidence retrieved from \\citet{google_evidence_1} and \\citet{google_evidence_2} using Google API]} \n \\end{tabular}\n\\end{figure}\n\nMeanwhile, the combined KB sometimes produces more errors than the best single KB. Figure \\ref{fig:error_analysis} shows an example of such an error in the SciFact task. For this claim, with access only to the Science KB, the pipeline retrieves the correct evidence and correctly predicts \\textsc{Refuted}. But with access to all knowledge bases, the system also receives additional evidence from the Google API that ranks highly on the evidence scoring function yet turns out not to be useful to the veracity classifier. Hence, the All-KB system incorrectly predicts \\textsc{Not Enough Info}.\n\nIn some cases, then, adding more potential evidence documents can reduce performance by reducing the quality of the selected evidence. One possible implication is that the issue of KB choice cannot be solved simply by pooling all knowledge bases. Another possibility is that evidence retrieval and evidence selection systems need to be adapted to accommodate larger and more diverse knowledge bases. This is an important area for further investigation.\n\n\\subsection{Evidence Quality}\\label{sec:evidence_quality}\n\nThe results reported so far suggest that the quality of the retrieved and selected evidence is pivotal for the functioning of a claim-checking pipeline. Motivated by this observation, we investigate evidence quality directly, using our system's outputs as metrics.\n\nAn appealing interpretation of our main results is that when there is high topical overlap between a set of claims and a knowledge base (KB), the pipeline tends to retrieve the appropriate facts and obtain higher label accuracy. A direct measure of this overlap is provided by the bm25 scores of the outputs of the document retrieval step. Thus, to test this interpretation, we compare the average label accuracy for a given claims-KB pair with the average bm25 score of the highest-ranked retrieved document.\n
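\nThe comparison itself is simple to compute; the sketch below shows one way to do it, assuming two hypothetical lists, \\texttt{avg\\_max\\_bm25} and \\texttt{label\\_accuracy}, holding one entry per claims-KB pair (the values come from the experiments above and are not reproduced here).\n\n\\begin{verbatim}\n# Illustrative sketch: correlate retrieval similarity with accuracy.\n# avg_max_bm25[i] and label_accuracy[i] are assumed to hold, for the\n# i-th (task, KB) pair, the mean top-ranked bm25 score across claims\n# and the label accuracy of the corresponding experiment.\nfrom scipy.stats import pearsonr\n\ndef correlate(avg_max_bm25, label_accuracy):\n    r, p = pearsonr(avg_max_bm25, label_accuracy)\n    return r, p\n\\end{verbatim}\n\nThe same computation is repeated with the evidence selection module's confidence score (the per-claim maximum $\\hat{e}$) in place of the bm25 score, which underlies Panel B of the figure below.\n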
\n\\begin{figure}\n \\caption{Evidence Quality and Label Accuracy}\n \\label{fig:corr-evidence-la}\n \\centering\n \\small{A. Accuracy vs. Retrieved Document Similarity (max bm25)} \\\\\n \\includegraphics[scale=0.5]{figures\/scatter_bm25.png} \\\\\n \\small{B. Accuracy vs. Selected Evidence Score (max $\\hat{e}$)} \\\\\n \\includegraphics[scale=0.5]{figures\/scatterplot_evidence_veracity.png}\n\\end{figure}\n\nFigure \\ref{fig:corr-evidence-la}, Panel A, shows a scatterplot of this relationship. Each datapoint corresponds to a claims-KB pair, with the x-axis giving the average of the highest bm25 score across claims and the y-axis giving the label accuracy of the associated experiment. We find that the bm25 scores of retrieved documents are not correlated with the resulting label accuracy. The estimated Pearson correlation is $-0.05$ and not statistically significant ($p=0.85$). Appendix Table \\ref{tab:bm25_scores}, reporting the numbers by task-KB pair, tells the same story. For example, the Science KB tends to obtain a high top-bm25 score across all tasks, even though it is not the best KB for most tasks. Thus, the intuition that a standard text similarity metric like bm25 would identify a useful KB turns out to be wrong. \n\nGiven that bm25 similarity does not capture knowledge-base suitability, we assess a second potential pipeline metric -- the confidence score $\\hat{e}$ provided by the evidence selection module. Panel B of Figure \\ref{fig:corr-evidence-la} complements Panel A, plotting average label accuracy by claims-KB pair against the average of the highest assigned evidence score across claims. Here, there is a clear positive relationship: it is statistically significant, with an $r^2$ of $0.33$ and a Pearson correlation of 0.49 ($p=0.015$). The numbers in Appendix Table \\ref{tab:evidence_scores} illustrate how each task's same-domain KB, which tends to produce the highest accuracy, also obtains the highest evidence score. \n\nWhat is the practical usefulness of this relationship between evidence scores and label accuracy? To check, we ran an additional synthetic experiment in which, for each claim task, the KB with the highest average evidence confidence score is selected. The results are reported in the last row of Table \\ref{tab:results_zero_shot_fever}. We find that this data-driven KB-choice strategy matches the best individual KB's label accuracy in five out of six tasks. Further, the approach obtains the highest average accuracy across all tasks (rightmost column), slightly better than the union of all KBs.\n\nAs a complementary experiment, we performed KB selection at the claim level (rather than the task level). That is, for each claim, we fetch evidence from each KB and then use the KB with the best evidence quality score. The claim-level best-evidence approach did slightly better on half of the tasks but worse on the other half, and worse on average overall (Appendix Table \\ref{tab:bootstrapped_main_results}, bottom row). Looking deeper into the predictions, we found that the lower performance is driven in part by the model being \\textit{too} confident in the produced evidence. It tends to make veracity determinations even when the correct label is \\textsc{Not Enough Info} (Appendix Table \\ref{tab:conf-mat-all}, bottom row).\n
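\nBoth selection rules can be summarized in a few lines. The sketch below is illustrative only: \\texttt{evidence\\_score(claim, kb)} is a hypothetical helper returning the evidence selection module's highest confidence score $\\hat{e}$ for a claim given a KB, standing in for the actual pipeline calls.\n\n\\begin{verbatim}\n# Illustrative sketch of the two evidence-quality selection rules.\n# evidence_score(claim, kb) is a hypothetical helper returning the\n# highest evidence confidence score (e-hat) for a claim and a KB.\nKBS = ['wikipedia', 'science_abstracts', 'nytimes', 'google_api']\n\ndef mean_evidence_score(claims, kb):\n    return sum(evidence_score(c, kb) for c in claims) / len(claims)\n\ndef select_kb_task_level(claims):\n    # One KB for the whole task (the 'Best Evidence' setting).\n    return max(KBS, key=lambda kb: mean_evidence_score(claims, kb))\n\ndef select_kb_claim_level(claim):\n    # A possibly different KB for every single claim.\n    return max(KBS, key=lambda kb: evidence_score(claim, kb))\n\\end{verbatim}\n\nThe task-level rule corresponds to the last row of Table \\ref{tab:results_zero_shot_fever}; the claim-level rule is the complementary experiment just discussed.\n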
\nIn summary, high overlap in surface-form TF-IDF features does not indicate the suitability of a KB for a set of claims. However, the confidence score produced by the evidence selection module does provide a metric for the suitability of a KB for checking a set of claims. While this is what the evidence selection module is designed to do, it is nonetheless reassuring that it works in practice. It illustrates the additional, and pivotal, semantic information extracted by the RoBERTa-based model that is not captured by surface TF-IDF features. \n\nThese results could have significant practical value. The confidence metric can provide an immediate indication of whether a claim-checking pipeline, along with a particular KB, can be transferred to checking a new domain of unlabeled claims. Given the costs of producing and labeling true and false claims, this approach makes fact-checking systems more useful by providing an indication of their transferability. Further, in new domains where one has multiple KBs to choose from, the evidence quality metric provides guidance on which KB to apply. \n\n\\section{Conclusion}\n\nThis work has explored the choice of the knowledge base (KB) in automated claim checking. We used a single system to check claims from a number of claim-checking tasks in different domains, varying the KB while holding everything else constant and measuring label accuracy. We have shown that FEVER-style claim-verification systems are capable of checking out-of-domain claims, as long as the system has access to facts from the new domain. Overall, the choice of KB for such out-of-domain claims matters a great deal: a more suitable KB can improve the resulting label accuracy by a large margin. We also find that a larger KB is not always better -- e.g., combining all knowledge bases often does not result in better label accuracy than taking the most suitable one. \n\nOur approach and findings are in line with a resurgent data-centric paradigm in machine learning. This paradigm takes the view that the data used for a machine learning task is at least as important as the model. While in some ways this is an old idea, for example in the context of feature engineering \\citep[e.g.][]{zheng2018feature}, the importance of data quality has gained renewed attention in deep learning due to the impressive gains in large-scale language modeling made through careful curation of datasets \\citep[see e.g.][]{gpt2, gpt3, t5}. In particular, \\citet{thepile} achieve remarkable improvements in language modeling using more compact architectures but a more carefully curated pre-training corpus. Given the diminishing returns to more complex architectures, improving data quality still leaves much room for progress.\n\nThese insights have clear relevance for automated claim verification. While efforts to create more diverse lists of checkable claims are of course valuable, a more balanced approach would investigate all data dependencies, among them the choice of the knowledge bases used for claim checking. Our experiments show that with a given FEVER-based system, zero-shot label accuracy can increase by roughly 20 percentage points with a more appropriate KB (e.g., on SciFact). In light of these results, researchers in automated claim verification may decide to curate their knowledge bases as much as their pipeline architectures. 
\n\n\\bibliographystyle{unsrtnat}\n