\chapter*{Acknowledgments} \thispagestyle{empty} Finally I have finished my thesis. The process of preparing and writing it has lasted about four to five years, and several people have helped or encouraged me during that time. I would like to mention them here (hoping not to forget anybody!). First of all, and in a special way, I thank Joan Soto for his help during all this time. Obviously, without his help in supervising this thesis, this document would not have been possible. I also thank Antonio Pineda for his help and advice during this time (and thanks for letting my mother and my brother know that I was just getting off the plane coming from Italy ;-) ). Thanks also to Lluís Garrido for helping us on several occasions with the treatment of experimental data. I am also grateful to Ignazio Scimemi (the person with whom I have shared the most pages of calculations) and, in short, to all the people I have interacted with in the department during this time. I would also like to thank Nora and Antonio for their help and for the kind hospitality all the times I have been in Milano (and special thanks for the nice dinners!). Thanks also to Matthias Neubert for his help. This year I spent two months in Boston; I would like to thank especially Iain Stewart for the kind hospitality during the whole time I was there. I am also very grateful to the other people at the CTP who made my stay there so enjoyable. Thanks also to my roommates in Boston. This thesis has been carried out with the support of the Departament d'Universitats, Recerca i Societat de la Informació of the Generalitat de Catalunya and of the European Social Fund. I cannot fail to mention the colleagues and friends in the department with whom I have shared the day-to-day life of these years: first of all my current officemates, Míriam, Carlos and Román (alias Tomàs), and my former ones, Aleix, Toni and Ernest (thanks for always voting for Menjadors!).
And to all those I have met in the department (or near it): Quico, Luca, Àlex, Joan, Jaume, Jan, Enrique, Enric, Toni M., Lluís, Jorge, Jordi G., Majo, David, Jordi M., Chumi, Arnau, Mafalda, Otger, Dolors, David D., Julián, Dani, Diego, Pau, Laia, Sandro, Sofyan, Guillem, Valentí, Ester, Olga... and many others whom I am surely forgetting to mention! And finally, thanks to you for reading these acknowledgments; now you can start with the thesis! \chapter{Definitions} In this appendix we collect the definitions of some factors appearing throughout the thesis. \phantom{} \noindent$\gamma_E=0.577216\ldots$ is the Euler constant and $\zeta(s)$ is the Riemann zeta function, with $\zeta(3)=1.2021\ldots$. The Euler beta function is given by \begin{equation} B(\tau,\omega)=\frac{\Gamma(\tau)\Gamma(\omega)}{\Gamma(\tau+\omega)} \end{equation} \section{Color factors} The color factors for an $SU(N_c)$ group are given by \begin{equation} C_f=\frac{N_c^2-1}{2N_c}\qquad C_A=N_c\qquad T_F=\frac{1}{2} \end{equation} \section{QCD beta function} The strong coupling constant $\alpha_{\rm s}=g^2/(4\pi)$ runs according to \begin{equation} \mu\frac{d\alpha_{\rm s}}{d\mu}=-2\alpha_{\rm s}\left\{\beta_0\frac{\alpha_{\rm s}}{4\pi}+\beta_1\left(\frac{\alpha_{\rm s}}{4\pi}\right)^2+\cdots\right\} \end{equation} The first coefficients of the QCD beta function are given by \begin{equation} \beta_0=\frac{11}{3}C_A-\frac{4}{3}T_Fn_f\qquad\beta_1=\frac{34}{3}C_A^2-\frac{20}{3}C_AT_Fn_f-4C_fT_Fn_f \end{equation} where, here and throughout the thesis, $n_f$ is the number of light flavors. \chapter{Feynman rules}\label{appFR} \section{pNRQCD} The pNRQCD Lagrangian (\ref{pNRSO}) gives the position space rules for the vertices and propagators displayed in figure \ref{figposs}. Feynman rules in ultrasoft momentum space are also useful.
These are displayed in figure \ref{figmoms} (additionally an insertion of a correction to the potential $\delta V$ in a singlet or octet propagator will give rise to a $-i\delta V$ factor). \begin{figure} \centering \includegraphics{pNRposs} \caption[Propagators and vertices in pNRQCD. Position space]{Propagators and vertices in pNRQCD in position space. We have displayed the rules at leading order in $1/m$ and order $r$ in the multipole expansion. If one wants to perform a perturbative calculation these rules must be expanded in $g$.}\label{figposs} \end{figure} \begin{figure} \centering \includegraphics{pNRmoms} \caption[Propagators and vertices in pNRQCD. Momentum space]{Propagators and vertices in pNRQCD in ultrasoft momentum space. $P^{\mu}$ is the gluon incoming momentum. Dashed lines represent longitudinal gluons and springy lines transverse ones.}\label{figmoms} \end{figure} \section{SCET} The Feynman rules that arise from the Lagrangian (\ref{LSCET}) are represented in figure \ref{figfrSCET}. \begin{figure} \centering \includegraphics{SCET} \caption[$\mathcal{O}(\lambda^0)$ SCET Feynman rules]{Propagators and vertices arising from the SCET Lagrangian (\ref{LSCET}). We have just displayed the interactions with one and two collinear gluons, although interactions with an arbitrary number of them are allowed. Dashed lines are collinear quarks, springy lines are ultrasoft gluons and springy lines with a line inside are collinear gluons.}\label{figfrSCET} \end{figure} \chapter{NRQCD matrix elements in the strong coupling regime}\label{appME} First we list the four-fermion NRQCD operators that appear in the subsequent formulas. 
\begin{eqnarray} O_1({}^3S_1) &=& \psi^\dagger \mbox{\boldmath $\sigma$} \chi \cdot \chi^\dagger \mbox{\boldmath $\sigma$} \psi \\ O_8({}^1S_0) &=& \psi^\dagger T^a \chi \, \chi^\dagger T^a \psi \\ O_8({}^3S_1) &=& \psi^\dagger \mbox{\boldmath $\sigma$} T^a \chi \cdot \chi^\dagger \mbox{\boldmath $\sigma$} T^a \psi \\ {\cal P}_1({}^1S_0) &=& {1\over 2} \left[\psi^\dagger \chi \, \chi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D})^2 \psi \;+\; {\rm H.c.}\right] \\ {\cal P}_1({}^3S_1) &=& {1\over 2}\left[\psi^\dagger \mbox{\boldmath $\sigma$} \chi \cdot \chi^\dagger \mbox{\boldmath $\sigma$} (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D})^2 \psi \;+\; {\rm H.c.}\right] \\ O_8({}^1P_1) &=& \psi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D}) T^a\chi \cdot \chi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D}) T^a\psi \\ O_8({}^3P_{0}) &=& {1 \over 3} \; \psi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D} \cdot \mbox{\boldmath $\sigma$}) T^a\chi \, \chi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D} \cdot \mbox{\boldmath $\sigma$}) T^a\psi \\ O_8({}^3P_{1}) &=& {1 \over 2} \; \psi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D} \times \mbox{\boldmath $\sigma$}) T^a\chi \cdot \chi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{\bf D} \times \mbox{\boldmath $\sigma$}) T^a\psi \\ O_8({}^3P_{2}) &=& \psi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{{\bf D}}{}^{(i} \mbox{\boldmath $\sigma$}^{j)}) T^a\chi \, \chi^\dagger (-\mbox{$\frac{i}{2}$} \stackrel{\leftrightarrow}{{\bf D}}{}^{(i} \mbox{\boldmath $\sigma$}^{j)}) T^a\psi \\ O_{\rm EM}(^3S_1) &=& \psi^\dagger {\mbox{\boldmath $\sigma$}} \chi |{\rm vac}\rangle \langle {\rm vac}| \chi^\dagger {\mbox{\boldmath $\sigma$}} \psi \\ {\cal P}_{\rm EM}(^1S_0) &=& {1\over 2} \left[ \psi^\dagger \chi |{\rm vac}\rangle \langle {\rm vac}| \chi^\dagger \left( -{i\over 2} {\bf D}^2 \right) \psi + 
{\rm H.c.} \right] \\ {\cal P}_{\rm EM}(^3S_1) &=& {1\over 2} \left[ \psi^\dagger {\mbox{\boldmath $\sigma$}} \chi |{\rm vac}\rangle \langle {\rm vac}| \chi^\dagger {\mbox{\boldmath $\sigma$}} \left( -{i\over 2} {\bf D}^2 \right) \psi + {\rm H.c.} \right] \end{eqnarray} In the strong coupling regime ($\Lambda_{\mathrm{QCD}}\gg E$) the following factorized formulas can be derived for the NRQCD matrix elements. The analytic contributions in $1/m$ for some $S$-wave states (we only display expressions involving $S$-wave states, since they are the only ones really used in the thesis; see \cite{Brambilla:2002nu} for a complete list), up to corrections of ${\cal O}(p^3/m^3 \times (\Lambda_{\rm QCD}^2/m^2, E/m))$, are given by \begin{eqnarray} \label{O13S1} &&\hspace{-8mm} \langle V_Q(nS)|O_1(^3S_1)|V_Q(nS)\rangle^{1/m}= C_A {|R^V_{n0}({0})|^2 \over 2\pi} \left(1-{E_{n0}^{(0)} \over m}{2{\cal E}_3 \over 9} +{2{\cal E}^{(2,t)}_3 \over 3 m^2 }+{c_F^2{\cal B}_1 \over 3 m^2 }\right) \\ &&\hspace{-8mm} \langle V_Q(nS)|O_{\rm EM}(^3S_1)|V_Q(nS)\rangle^{1/m}= C_A {|R^V_{n0}({0})|^2 \over 2\pi} \left(1-{E_{n0}^{(0)} \over m}{2{\cal E}_3 \over 9} +{2{\cal E}^{(2,{\rm EM})}_3 \over 3 m^2}+{c_F^2{\cal B}_1 \over 3 m^2}\right) \\ \label{OEM1S0} &&\hspace{-8mm} \langle V_Q(nS)|{\cal P}_1(^3S_1)|V_Q(nS)\rangle^{1/m}= \langle P_Q(nS)|{\cal P}_1(^1S_0)|P_Q(nS)\rangle^{1/m}= \nonumber\\ &&\hspace{-8mm} \langle V_Q(nS)|{\cal P}_{\rm EM}(^3S_1)|V_Q(nS)\rangle^{1/m}= \langle P_Q(nS)|{\cal P}_{\rm EM}(^1S_0)|P_Q(nS)\rangle^{1/m} \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\quad =C_A {|R^{(0)}_{n0}({0})|^2 \over 2\pi} \left(m E_{n0}^{(0)} -{\cal E}_1 \right) \label{P13S1} \\ &&\hspace{-8mm} \langle V_Q(nS)|O_8(^3S_1)|V_Q(nS)\rangle^{1/m}= \langle P_Q(nS)|O_8(^1S_0)|P_Q(nS)\rangle^{1/m} \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\quad =C_A {|R^{(0)}_{n0}({0})|^2 \over 2\pi} \left(- {2 (C_A/2-C_f) {\cal E}^{(2)}_3 \over 3 m^2 }\right) \\ &&\hspace{-8mm} \langle V_Q(nS)|O_8(^1S_0)|V_Q(nS)\rangle^{1/m}= {\langle P_Q(nS)|O_8(^3S_1)|P_Q(nS)\rangle^{1/m} \over 3} \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\quad =C_A {|R^{(0)}_{n0}({0})|^2 \over 2\pi} \left(-{(C_A/2-C_f) c_F^2{\cal B}_1 \over 3 m^2 }\right) \\ &&\hspace{-8mm} {\langle V_Q(nS)|O_8(^3P_J)|V_Q(nS)\rangle^{1/m} \over 2J+1}= {\langle P_Q(nS)|O_8(^1P_1)|P_Q(nS)\rangle^{1/m} \over 9} \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\quad = \,C_A {|R^{(0)}_{n0}({0})|^2 \over 2\pi} \left(-{(C_A/2-C_f) {\cal E}_1 \over 9 }\right) \end{eqnarray} There are also non-analytic contributions in $1/m$. Up to corrections of order ${\cal O}(p^3/m^3 \times \Lambda_{\rm QCD}/m \times m\alpha_{\rm s}/\sqrt{m\,\Lambda_{\rm QCD}})$ they are given by (see \cite{Brambilla:2003mu} for the complete list of known corrections) \begin{eqnarray} \langle V_Q(nS)|O_1(^3S_1)|V_Q(nS)\rangle^{1/\sqrt{m}}&=& \langle V_Q(nS)|O_{\rm EM}(^3S_1)|V_Q(nS)\rangle^{1/\sqrt{m}} \nonumber\\ &=& C_A {|R^V_{n0}({0})|^2 \over 2\pi} \left(1+ {4 (2\,C_f+C_A)\over 3\Gamma(7/2)} \, \; {\alpha_{\rm s}\, {\cal E}^E_{5/2}\over m^{1/2}}\right) \label{O3S1nonan} \end{eqnarray} In all these expressions $R$ represents the radial part of the wave function, $E$ the binding energy, and the $\mathcal{E}$ and $\mathcal{B}$ are universal (bound state independent) non-perturbative parameters. \chapter{Background}\label{chapback} In this chapter we describe the two effective field theories that will be mainly used and studied in the thesis: \emph{potential Non-Relativistic QCD} and \emph{Soft-Collinear Effective Theory}. It does not attempt to be a comprehensive review, but just to provide the ingredients needed to follow the subsequent chapters. \section{potential Non-Relativistic QCD}\label{secpNRQCD} As already explained in the introduction of the thesis, heavy quarkonium systems are characterized by three intrinsic scales.
These are the heavy quark mass $m$ (referred to as the \emph{hard} scale, which sets the mass of the quarkonium state), the relative three-momentum of the heavy quark-antiquark pair $\vert\mathbf{p}\vert\sim mv$ (referred to as the \emph{soft} scale, which sets the size of the bound state; $v$ is the typical relative velocity between the quark and the antiquark) and the kinetic energy of the heavy quark and antiquark $E\sim mv^2$ (referred to as the \emph{ultrasoft} scale, which sets the binding energy of the quarkonium state). In addition, there is the generic hadronic scale of QCD, $\Lambda_{QCD}$. All these scales are summarized in table \ref{tabpNsc}. The interplay of $\Lambda_{QCD}$ with the other three scales determines the nature of the different heavy quarkonium systems. By definition of a heavy quark, the inequality $m\gg\Lambda_{QCD}$ always holds. Exploiting the inequality $m\gg\vert\mathbf{p}\vert,E$ one arrives at Non-Relativistic QCD (NRQCD), as described in the previous chapter (note that at this level, after the definition of a heavy quark, one does not yet need to specify the interplay of $\Lambda_{QCD}$ with the remaining scales in order to identify the relevant degrees of freedom). Going one step further, and using the full non-relativistic hierarchy of the heavy quarkonium systems $m\gg mv\gg mv^2$, one arrives at potential NRQCD (pNRQCD)\footnote{See \cite{Brambilla:2004jw} for a review of pNRQCD.}. Now it is necessary to fix the relative importance of $\Lambda_{QCD}$ with respect to the scales $\vert\mathbf{p}\vert$ and $E$ in order to determine the degrees of freedom of the resulting theory, the aim of which is to study physics at the scale of the binding energy $E$. Two relevant regimes have been identified so far: the so-called \emph{weak coupling regime}, where $mv^2\gtrsim\Lambda_{QCD}$, and the so-called \emph{strong coupling regime}, where $mv\gtrsim\Lambda_{QCD}\gg mv^2$.
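As a rough numerical illustration of this hierarchy, the sketch below tabulates the three scales for the two classic quarkonium families; the quark masses and the values $v^2\sim 0.3$ (charmonium) and $v^2\sim 0.1$ (bottomonium) are commonly quoted ballpark estimates, not results derived in this chapter.

```python
# Rough numerical illustration of the heavy-quarkonium scale hierarchy
# m >> mv >> mv^2.  The quark masses (GeV) and v^2 values below are
# commonly quoted ballpark estimates, i.e. assumptions for illustration.

def quarkonium_scales(m_gev, v2):
    """Return the (hard, soft, ultrasoft) scales m, mv, mv^2 in GeV."""
    v = v2 ** 0.5
    return m_gev, m_gev * v, m_gev * v2

for name, m, v2 in [("charmonium", 1.5, 0.3), ("bottomonium", 4.7, 0.1)]:
    hard, soft, ultrasoft = quarkonium_scales(m, v2)
    print(f"{name:12s} m = {hard:4.1f}  mv = {soft:4.2f}  mv^2 = {ultrasoft:4.2f} GeV")
```

In both cases the three scales are well separated, while their interplay with $\Lambda_{QCD}$ is what selects between the two regimes just introduced.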
\begin{table}[t] \begin{center} \begin{tabular}{|c|c|} \hline Scale & Value \\ \hline & \\ hard & $m$ \\ & \\ soft & $mv$ \\ & \\ ultrasoft & $mv^2$ \\ & \\ generic hadronic QCD scale & $\Lambda_{QCD}$\\ & \\ \hline \end{tabular} \caption{Relevant physical scales in heavy quarkonium}\label{tabpNsc} \end{center} \end{table} \subsection{Weak coupling regime} In this situation the degrees of freedom of pNRQCD are not very different from those of NRQCD. They are heavy quarks and antiquarks with a three-momentum cut-off $\nu_p$ ($\vert\mathbf{p}\vert\ll\nu_p\ll m$) and an energy cut-off $\nu_{us}$ ($\frac{\mathbf{p}^2}{m}\ll\nu_{us}\ll\vert\mathbf{p}\vert$), and gluons and light quarks with a four-momentum cut-off $\nu_{us}$. The most distinctive feature is that now non-local terms in $r$, that is, potentials, can appear (as discussed in the introductory section \ref{secinthq}). These degrees of freedom can be arranged in several ways in the effective theory. A first way is to express them with the same fields as in NRQCD. The pNRQCD Lagrangian then has the following form \begin{equation} L_{pNRQCD}=L_{NRQCD}^{us}+L_{pot} \end{equation} where $L_{NRQCD}^{us}$ is the NRQCD Lagrangian restricted to ultrasoft gluons and $L_{pot}$ is given by \begin{equation} L_{pot}=-\int d^3\mathbf{x}_1d^3\mathbf{x}_2\psi^{\dagger}\left(t,\mathbf{x}_1\right)\chi\left(t,\mathbf{x}_2\right)V\left(\mathbf{r},\mathbf{p}_1,\mathbf{p}_2,\mathbf{S}_1,\mathbf{S}_2\right)(us\textrm{ gluon fields})\chi^{\dagger}\left(t,\mathbf{x}_2\right)\psi\left(t,\mathbf{x}_1\right) \end{equation} $\psi$ is the field that annihilates a quark and $\chi$ the one that creates an antiquark; $\mathbf{p}_i=-i\mbox{\boldmath $\nabla$}_{x_i}$ and $\mathbf{S}_i=\mbox{\boldmath $\sigma$}_i/2$.
Another option to express the degrees of freedom is to represent the quark-antiquark pair by a wavefunction field $\Psi$ (that is, to project the theory onto the sector with one heavy quark and one heavy antiquark) \begin{equation} \Psi (t,{\bf x}_1, {\bf x}_2)_{\alpha\beta} \sim \psi_{\alpha} (t,{\bf x}_1) \chi_{\beta}^\dagger (t,{\bf x}_2) \sim {1 \over N_c}\delta_{\alpha\beta}\psi_{\sigma} (t,{\bf x}_1) \chi_{\sigma}^\dagger (t,{\bf x}_2) + {1 \over T_F} T^a_{\alpha\beta}T^a_{\rho\sigma}\psi_{\sigma} (t,{\bf x}_1) \chi_{\rho}^\dagger (t,{\bf x}_2) \end{equation} The Lagrangian now has the form ($m_1$ is the mass of the heavy quark and $m_2$ the mass of the heavy antiquark; later on we will mainly focus on the equal mass case $m_1=m_2\equiv m$) \[ L_{NRQCD}^{us}=\int d^3{\bf x}_1 \, d^3{\bf x}_2 \; {\rm Tr}\, \left\{\Psi^{\dagger} (t,{\bf x}_1 ,{\bf x}_2 ) \left( iD_0 +{{\bf D}_{{\bf x}_1 }^2\over 2\, m_1}+{{\bf D}_{{\bf x}_2 }^2\over 2\, m_2} + \cdots \right)\Psi (t,{\bf x}_1 ,{\bf x}_2 )\right\}- \] \begin{equation} -\int d^3 x \; {1\over 4} G_{\mu \nu}^{a}(x) \,G^{\mu \nu \, a}(x) + \int d^3 x \; \sum_{i=1}^{n_f} \bar q_i(x) \, i D\!\!\!\!\slash \,q_i(x) + \cdots \end{equation} \begin{equation} L_{pot}=\int d^3{\bf x}_1 \, d^3{\bf x}_2 \; {\rm Tr} \left\{ \Psi^{\dagger} (t, {\bf x}_1,{\bf x}_2)\, V( {\bf r}, {\bf p}_1, {\bf p}_2, {\bf S}_1,{\bf S}_2)(us\hbox{ gluon fields}) \, \Psi(t, {\bf x}_1, {\bf x}_2 ) \right\} \end{equation} where the dots represent higher order terms in the $1/m$ expansion and \begin{equation} iD_0 \Psi (t,{\bf x}_1 ,{\bf x}_2) = i\partial_0\Psi (t,{\bf x}_1 ,{\bf x}_2 ) -g A_0(t,{\bf x}_1)\, \Psi (t,{\bf x}_1 ,{\bf x}_2) + \Psi (t,{\bf x}_1 ,{\bf x}_2)\, g A_0(t,{\bf x}_2).
\end{equation} The gluon fields can be enforced to be ultrasoft by multipole expanding them in the relative coordinate $\mathbf{r}$ (we define the center of mass coordinates by $\mathbf{R}=(\mathbf{x}_1+\mathbf{x}_2)/2$ and $\mathbf{r}=\mathbf{x}_1-\mathbf{x}_2$); the problem is that the multipole expansion spoils the manifest gauge invariance of the Lagrangian. Gauge invariance can be restored by decomposing the wavefunction field into (singlet and octet) components which have homogeneous gauge transformations with respect to the center of mass coordinate \[ \Psi (t,{\bf x}_1 ,{\bf x}_2)= P\bigl[e^{ig\int_{{\bf x}_2}^{{\bf x}_1} {\bf A} \cdot d{\bf x}} \bigr]\;{\rm S}({{\bf r}}, {{\bf R}}, t) +P\bigl[e^{ig\int_{{\bf R}}^{{\bf x}_1} {\bf A} \cdot d{\bf x}} \bigr]\; {\rm O} ({\bf r} ,{\bf R} , t) \; P\bigl[e^{ig\int^{{\bf R}}_{{\bf x}_2} {\bf A} \cdot d{\bf x}}\bigr]= \] \begin{equation} =U_P\left(\mathbf{x}_1,\mathbf{R}\right)\left({\rm S}({{\bf r}}, {{\bf R}}, t)+{\rm O} ({\bf r} ,{\bf R} , t)\right)U_P\left(\mathbf{R},\mathbf{x}_2\right) \end{equation} with \begin{equation} U_P\left(\mathbf{x}_1,\mathbf{R}\right)=P\bigl[e^{ig\int_{{\bf R}}^{{\bf x}_1} {\bf A}(t,\mathbf{x}) \cdot d{\bf x}} \bigr] \end{equation} and the following color normalization for the singlet and octet fields \begin{equation} {\rm S} = { S 1\!\!{\rm l}_c / \sqrt{N_c}} \quad\quad\quad {\rm O} = O^a { {\rm T}^a / \sqrt{T_F}} \end{equation} Arranging things in this way, the Lagrangian density (at order $p^3/m^2$) reads \[ \mathcal{L}_{pNRQCD}=\int d^3{\bf r} \; {\rm Tr} \, \Biggl\{ {\rm S}^\dagger \left( i\partial_0 - h_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \right) {\rm S} + {\rm O}^\dagger \left( iD_0 - h_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \right) {\rm O} \Biggr\}+ \] \[ +V_A ( r) {\rm Tr} \left\{ {\rm O}^\dagger {\bf r} \cdot g{\bf E} \,{\rm S} + {\rm S}^\dagger {\bf r} \cdot g{\bf E} \,{\rm O} \right\} + {V_B (r) \over 2} {\rm Tr} \left\{ {\rm O}^\dagger {\bf r} \cdot g{\bf E} \, {\rm O} + {\rm O}^\dagger {\rm O} {\bf r} \cdot g{\bf E} \right\}- \] \begin{equation}\label{pNRSO} - {1\over 4} G_{\mu \nu}^{a} G^{\mu \nu \, a} + \sum_{i=1}^{n_f} \bar q_i \, i D\!\!\!\!\slash \, q_i \end{equation} where \begin{equation} h_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) = {{\bf p}^2 \over \, m_{\rm red}} +{{\bf P}_{\bf R}^2 \over 2\, m_{\rm tot}} + V_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \end{equation} \begin{equation} h_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) = {{\bf p}^2 \over \, m_{\rm red}} +{{\bf P}_{\bf R}^2 \over 2\, m_{\rm tot}} + V_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \end{equation} and \begin{equation} iD_0 {\rm O} \equiv i \partial_0 {\rm O} - g [A_0({\bf R},t),{\rm O}]\quad {\bf P}_{\bf R} = -i{\bf D}_{\bf R}\quad m_{\rm red} =\frac{m_1m_2}{m_{\rm tot}}\quad m_{\rm tot} = m_1 + m_2 \end{equation} $\mathbf{E}^i=G^{i0}$ and $\mathbf{B}^i=-\epsilon_{ijk}G^{jk}/2$ are the chromoelectric and chromomagnetic fields, respectively. Some of the Feynman rules arising from this Lagrangian are displayed in appendix \ref{appFR}. When written in terms of these singlet and octet fields, the power counting of the pNRQCD Lagrangian is easy to establish. Since the Lagrangian is bilinear in these fields, we just have to set the size of the terms multiplying those bilinears. The derivatives with respect to the relative coordinate and $1/r$ factors must be counted as the soft scale, while time derivatives, center of mass derivatives and fields for the light degrees of freedom must be counted as the ultrasoft scale. The $\alpha_s$ that come from the matching from NRQCD must be understood as $\alpha_s(1/r)$, and the ones associated with light degrees of freedom must be understood as $\alpha_s(E)$. It is not that one form of the Lagrangian is preferred over the others; rather, the different forms of writing the Lagrangian are convenient for different purposes.
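The distinction between $\alpha_{\rm s}(1/r)$ and $\alpha_{\rm s}(E)$ can be made concrete with a one-loop estimate. The sketch below uses $\beta_0$ as listed in the Definitions appendix; the reference value $\alpha_{\rm s}(M_Z)\approx 0.118$, a fixed $n_f=5$, and the neglect of flavor thresholds are simplifying assumptions for illustration only.

```python
import math

def beta0(nf, CA=3.0, TF=0.5):
    # beta_0 = 11/3 CA - 4/3 TF nf  (first QCD beta-function coefficient,
    # as listed in the Definitions appendix)
    return 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf

def alpha_s_one_loop(mu, mu0=91.19, alpha0=0.118, nf=5):
    """One-loop running of alpha_s from mu0 down/up to mu (both in GeV).

    Closed-form solution of mu d(alpha)/d(mu) = -beta0 alpha^2/(2 pi),
    i.e. the beta-function equation truncated at its first term.
    Thresholds are ignored (illustrative assumption)."""
    b0 = beta0(nf)
    return alpha0 / (1.0 + alpha0 * b0 / (2.0 * math.pi) * math.log(mu / mu0))

# alpha_s grows toward lower scales, so for 1/r > E one has
# alpha_s(1/r) < alpha_s(E)
print(alpha_s_one_loop(5.0), alpha_s_one_loop(1.5))
```

This is only a sketch of the running; the values at which the couplings are evaluated, $1/r$ and $E$, are what the power counting discussion above prescribes.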
In principle it is possible to go from one form of the Lagrangian to the others; as an easy example consider the leading order Lagrangian (in $\alpha_{\rm s}$ and in the multipole expansion) in the static limit ($m\to\infty$) written in terms of the wavefunction field \[ L_{pNRQCD}=\int d^3{\bf x}_1 \, d^3{\bf x}_2 \; {\rm Tr}\, \left\{\Psi^{\dagger} (t,{\bf x}_1 ,{\bf x}_2 ) \left( iD_0\right)\Psi (t,{\bf x}_1 ,{\bf x}_2 )\right\}+ \] \[ +\int d^3{\bf x}_1 d^3{\bf x}_2 \;\frac{\alpha_{\rm s}}{\vert\mathbf{x}_1-\mathbf{x}_2\vert}\mathrm{Tr}\left(T^a\Psi^{\dagger} (t,{\bf x}_1 ,{\bf x}_2 )T^a\Psi (t,{\bf x}_1 ,{\bf x}_2 )\right)- \] \begin{equation} -\int d^3 x \; {1\over 4} G_{\mu \nu}^{a}(x) \,G^{\mu \nu \, a}(x) + \int d^3 x \; \sum_{i=1}^{n_f} \bar q_i(x) \, i D\!\!\!\!\slash \,q_i(x) \end{equation} We will forget about the last line in the equation above, since it remains the same. Now we introduce the singlet and octet fields and take into account that, at leading order in the multipole expansion, the Wilson lines are equal to one, obtaining \begin{equation}\label{wftoso} \int d^3{\bf R} \, d^3{\bf r}\;\mathrm{Tr}\left\{\left(\mathrm{S}^{\dagger}+\mathrm{O}^{\dagger}\right)iD_0\left(\mathrm{S}+\mathrm{O}\right)\right\}+\int d^3{\bf R} \, d^3{\bf r}\;\frac{\alpha_{\rm s}}{r}\mathrm{Tr}\left\{T^a\left(\mathrm{S}^{\dagger}+\mathrm{O}^{\dagger}\right)T^a\left(\mathrm{S}+\mathrm{O}\right)\right\} \end{equation} Since $iD_0(\mathrm{S}+\mathrm{O})=i\partial_0(\mathrm{S}+\mathrm{O})-g\left[A_0,\mathrm{O}\right]$, and taking into account that the trace of a single color matrix is zero, we obtain from the first term in (\ref{wftoso}) \begin{equation} \mathrm{Tr}\left\{\mathrm{S}^{\dagger}i\partial_0\mathrm{S}+\mathrm{O}^{\dagger}iD_0\mathrm{O}\right\} \end{equation} and from the second term \begin{equation} \frac{\alpha_{\rm s}}{r}\mathrm{Tr}\left\{T^a\mathrm{S}^{\dagger}T^a\mathrm{S}+T^a\mathrm{O}^{\dagger}T^a\mathrm{O}\right\}=\frac{\alpha_{\rm s}}{r}\mathrm{Tr}\left\{C_f\mathrm{S}^{\dagger}\mathrm{S}-\frac{1}{2N_c}\mathrm{O}^{\dagger}\mathrm{O}\right\} \end{equation} which gives us the static pNRQCD Lagrangian at leading order, written in terms of singlet and octet fields \[ L_{pNRQCD}=\int d^3{\bf R} \, d^3{\bf r}\;\mathrm{Tr}\left\{\mathrm{S}^{\dagger}\left(i\partial_0+\frac{C_f\alpha_{\rm s}}{r}\right)\mathrm{S}+\mathrm{O}^{\dagger}\left(iD_0-\frac{1}{2N_c}\frac{\alpha_{\rm s}}{r}\right)\mathrm{O}\right\}- \] \begin{equation} -\int d^3{\bf R}{1\over 4} G_{\mu \nu}^{a}G^{\mu \nu \, a} + \int d^3 {\bf R} \sum_{i=1}^{n_f} \bar q_i \, i D\!\!\!\!\slash \,q_i \end{equation} While this procedure is relatively simple at leading order, in general it is more convenient to construct each form of the pNRQCD Lagrangian independently (by using the appropriate symmetry arguments and matching to NRQCD). Note that, as mentioned before, the usual quantum mechanical potentials appear as matching coefficients of the effective theory. Renormalization group improved expressions for the potentials can then be obtained \cite{Pineda:2000gz,Pineda:2001ra}. \subsection{Strong coupling regime} In this situation (where, remember, $\vert\mathbf{p}\vert\gtrsim\Lambda_{QCD}\gg E$) the physics at the scale of the binding energy (which is what we are interested in) lies below the scale $\Lambda_{QCD}$. This implies that QCD is strongly coupled, which in turn indicates that it is better to formulate the theory in terms of hadronic degrees of freedom. Hence we unavoidably have to rely on some general considerations and on indications from lattice data to identify the relevant degrees of freedom. We therefore assume that a singlet field describing the heavy quarkonium state, together with Goldstone boson fields, which are ultrasoft degrees of freedom, constitutes the relevant set of degrees of freedom of this theory.
For this assumption to hold, we have to consider that there is an energy gap of order $\Lambda_{QCD}$ between the ground state energy and the higher hybrid excitations (that is, states with excitations of the gluonic degrees of freedom), which seems to be supported by lattice data, and also that we are away from the energy threshold for the creation of a heavy-light meson pair (in order to avoid mixing effects with these states). If one forgets about the Goldstone boson fields (switching off the light fermions), as is usually done, we are left with just the singlet field and the theory takes the form of the potential models. In that case the pNRQCD Lagrangian is given by \begin{equation} L_{\rm pNRQCD} = \int d^3 {\bf R} \int d^3 {\bf r} \; S^\dagger \big( i\partial_0 - h_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) \big) S \end{equation} with \begin{equation} h_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) = {{\bf p}_1^2\over 2m_1} + {{\bf p}_2^2\over 2m_2} + V_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) \end{equation} The potential $V_s$ is now a non-perturbative quantity (the different parts of which can be organized according to their scaling in $m$). The matching procedure gives expressions for the different parts of the potential in terms of Wilson loop amplitudes (which in principle could be calculated on the lattice or with some vacuum model of QCD). When considering annihilation processes (in which case, obviously, $m_1=m_2=m$), these expressions translate into formulas for the NRQCD matrix elements. Hence, in the strong coupling regime, the NRQCD matrix elements can be expressed in terms of wave functions at the origin and a few universal (that is, bound state independent) parameters. A list of some of the pNRQCD expressions for the matrix elements can be found in appendix \ref{appME}.
In the process of integrating out the degrees of freedom, from the scale $m$ down to the ultrasoft scale, new momentum regions may appear (which were not present in the weak coupling regime, since now we are also integrating out $\Lambda_{QCD}$). It turns out that the intermediate three-momentum scale $\sqrt{m\Lambda_{QCD}}$ is also relevant: it gives contributions to loop diagrams in which gluons of energy $\Lambda_{QCD}$ are involved (note that $\sqrt{m\Lambda_{QCD}}$ is the three-momentum scale that corresponds to the energy scale $\Lambda_{QCD}$). Hence, effects coming from this intermediate scale also have to be taken into account in the matching in the strong coupling regime \cite{Brambilla:2003mu}. To establish the power counting of this Lagrangian we have to assign the soft scale to derivatives with respect to the relative coordinate and to $1/m$ factors, and the ultrasoft scale $E$ to time derivatives and to the static $V_s^{(0)}$. By definition of the strong coupling regime, $\alpha_{\rm s}$ evaluated at the scale $E$ must be taken as order one. If we want to stay in the most conservative situation we should assume $\Lambda_{QCD}\sim mv$, in which case $\alpha_{\rm s}(1/r)\sim 1$ as well. Expectation values of fields for the light degrees of freedom should be counted as $\Lambda_{QCD}$ to the power of their dimension. \section{Soft-Collinear Effective Theory}\label{secSCET} The aim of this theory is to describe processes in which very energetic (collinear) modes interact with soft degrees of freedom. Soft-Collinear Effective Theory (SCET) can thus be applied to a wide range of processes in which this kinematic situation is present. These include exclusive and semi-inclusive $B$ meson decays, deep inelastic scattering and Drell-Yan processes near the end-point, exclusive and semi-inclusive quarkonium decays, and many others.
Generally speaking, any process that contains highly energetic hadrons (that is, hadrons with energy much larger than their mass), together with a source for them, will contain particles (referred to as \emph{collinear}) which move close to a light cone direction $n^{\mu}$. Since these particles are constrained to have large energy $E$ and small invariant mass, the sizes of the different components (in light cone coordinates, $p^{\mu}=(\bar{n}p)n^{\mu}/2+p^{\mu}_{\perp}+(np)\bar{n}^{\mu}/2$) of their momentum $p$ are very different; typically $\bar{n}p\sim E$, $p_{\perp}\sim E\lambda$ and $np\sim E\lambda^2$, with $\lambda$ a small parameter. It is of this hierarchy, $\bar{n}p\gg p_{\perp}\gg np$, that the effective theory takes advantage. Due to the peculiar nature of the light cone interactions, the resulting theory turns out to be non-local in one of the light cone directions (as mentioned in the introduction of the thesis). Unfortunately there is no standard notation for the theory. Apart from differences in the naming of the distinct modes, there are basically two different formalisms (or notations): the one originally used in \cite{Bauer:2000yr,Bauer:2001ct}, which uses the label operators (sometimes referred to as the \emph{hybrid momentum-position space representation}), and the one first employed in \cite{Beneke:2002ph}, which uses light-front multipole expansions to ensure a well-defined power counting (sometimes referred to as the \emph{position space representation})\footnote{Note that the multipole expansions used in the position space representation are, to some extent, similar to the ones used in pNRQCD, while the hybrid representation is, not surprisingly, closer to the so-called vNRQCD formalism. In any case, the main difference between the vNRQCD and pNRQCD approaches is the way the soft and ultrasoft effects are disentangled.
While vNRQCD introduces separate fields for the soft and ultrasoft degrees of freedom at the NRQCD level, the pNRQCD approach integrates out the soft scale, producing a chain of effective theories QCD$\to$NRQCD$\to$pNRQCD, so that the final theory just contains the relevant degrees of freedom to study physics at the scale of the binding energy. In that sense any version of SCET is closer to vNRQCD than to pNRQCD, since separate (and overlapping) fields are introduced for the soft and collinear degrees of freedom (which probably is more adequate in this case).}. The two formalisms are supposed to be completely equivalent (although precise comparisons are often difficult). The modes one needs to include in the effective theory depend on whether one wants to study inclusive or exclusive processes. The resulting theories are usually called SCET$_{\rm{I}}$ and SCET$_{\rm{II}}$, respectively. When one is studying an inclusive process, collinear degrees of freedom with a typical offshellness of order $\sqrt{E\Lambda_{QCD}}$ are needed, while in an exclusive process the collinear degrees of freedom in the final state have a typical offshellness of order $\Lambda_{QCD}$; the simultaneous presence of two types of collinear modes must then be taken into account in the matching procedure from QCD to SCET$_{\rm{II}}$. We will briefly describe these two theories in turn in the following subsections. In this thesis we will mainly be using the SCET$_{\rm I}$ framework (consequently the peculiarities and subtleties of SCET$_{\rm II}$ will just be very briefly mentioned). \subsection{SCET$_{\rm I}$} This is the theory containing collinear $(p^{\mu}=(\bar{n}p,p_{\perp},np)\sim(1,\lambda,\lambda^2))$ and ultrasoft $(p^{\mu}\sim(\lambda^2,\lambda^2,\lambda^2))$ modes\footnote{Be aware that the terminology for the different modes varies a lot in the literature. 
One should check the terminology used in each case to avoid unnecessary confusion (this is also true for SCET$_{\rm II}$).} (for some applications collinear fields in more than one direction could be needed), where the final collinear states have virtualities of order $E\Lambda_{QCD}$. The theory was first written in the (sometimes called) label or hybrid formalism \cite{Bauer:2000yr,Bauer:2001ct}. Within that approach the large component of the momentum $p$ is extracted from the fields (it becomes a label for them) according to \begin{equation} \phi(x)=\sum_{\tilde{p}\neq 0}e^{-i\tilde{p}x}\phi_{n,p} \end{equation} where $p=\tilde{p}+k$ and $\tilde{p}$ contains the large components of the momentum. In that way $\bar{n}p$ and $p_{\perp}$ have become labels for the field. Derivatives acting on $\phi_{n,p}$ will just give contributions of order $\lambda^2$. Then the so called label operators $\mathcal{P}$ are introduced. Those operators, when acting on the effective theory fields, give the sum of large labels in the fields minus the sum of large labels in the conjugate fields. We have, therefore, \begin{equation} f\left(\bar{\mathcal{P}}\right)\left(\phi_{n,q_1}^{\dagger}\cdots\phi_{n,p_1}\cdots\right)=f\left(\bar{n}p_1+\cdots-\bar{n}q_1\cdots\right)\left(\phi_{n,q_1}^{\dagger}\cdots\phi_{n,p_1}\cdots\right) \end{equation} An analogous operator is defined for the transverse label $\mathcal{P}_{\perp}^{\mu}$. With that technology, building blocks to form invariant operators (under collinear and ultrasoft gauge transformations) can be constructed. A scaling in $\lambda$ is assigned to the fields in the effective theory, such that the action for the kinetic terms counts as $\lambda^0$. The scaling for the various fields is summarized in table \ref{scaling}. 
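As a small numerical illustration of these scalings (an illustration only; the $\mathcal{O}(1)$ coefficients below are arbitrary choices), one can check that a generic collinear momentum has virtuality $p^2=(\bar{n}p)(np)-p_{\perp}^2$ of order $E^2\lambda^2$, while an ultrasoft momentum has $p^2$ of order $E^2\lambda^4$:

```python
# Numerical illustration of the SCET momentum scalings (arbitrary O(1) coefficients).
# Light-cone virtuality: p^2 = (nbar.p)(n.p) - p_perp^2.

def virtuality(nbar_p, p_perp, n_p):
    return nbar_p * n_p - p_perp ** 2

def collinear_p2(E, lam):
    # (nbar.p, p_perp, n.p) ~ E * (1, lam, lam^2), with generic coefficients
    return virtuality(1.0 * E, 0.5 * E * lam, 0.7 * E * lam ** 2)

def ultrasoft_p2(E, lam):
    # all components ~ E * lam^2
    return virtuality(0.9 * E * lam ** 2, 0.6 * E * lam ** 2, 0.8 * E * lam ** 2)

E = 1.0
# Halving lambda reduces the collinear virtuality by 4 (p^2 ~ lam^2)
# and the ultrasoft one by 16 (p^2 ~ lam^4):
ratio_coll = collinear_p2(E, 0.1) / collinear_p2(E, 0.05)
ratio_us = ultrasoft_p2(E, 0.1) / ultrasoft_p2(E, 0.05)
print(ratio_coll, ratio_us)   # 4 and 16, up to rounding
```

Halving $\lambda$ thus reduces the collinear virtuality by a factor of four and the ultrasoft one by a factor of sixteen, which is the hierarchy ($E\Lambda_{QCD}$ versus $\Lambda_{QCD}^2$) that distinguishes the inclusive and exclusive situations discussed above.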
\begin{table}[t] \begin{center} \begin{tabular}{|c|c|} \hline Fields & Scaling \\ \hline & \\ collinear quark $\xi$ & $\lambda$ \\ & \\ ultrasoft quark $q$ & $\lambda^3$ \\ & \\ ultrasoft gluon $A_{us}^{\mu}$ & $\lambda^2$ \\ & \\ collinear gluon $(\bar{n}A_{n,q},A_{n,q}^{\perp},nA_{n,q})$ & $(1,\lambda,\lambda^2)$\\ & \\ \hline \end{tabular} \caption{$\lambda$ scaling of the fields in SCET$_{\mathrm{I}}$.}\label{scaling} \end{center} \end{table} The leading order Lagrangian for SCET is then derived. This leading order (in the power counting in $\lambda$) Lagrangian is given by \begin{equation}\label{LSCET} \mathcal{L}_c=\bar{\xi}_{n,p'}\left\{inD+gnA_{n,q}+\left(\mathcal{P}\!\!\!\!\slash_{\perp}+gA\!\!\!\slash_{n,q}^{\perp}\right)W\frac{1}{\bar{\mathcal{P}}}W^{\dagger}\left(\mathcal{P}\!\!\!\!\slash_{\perp}+gA\!\!\!\slash_{n,q'}^{\perp}\right)\right\}\frac{\bar{n}\!\!\!\slash}{2}\xi_{n,p} \end{equation} In that equation, $\xi$ are the fields for the collinear quarks, $A$ are the gluon fields, the covariant derivative $D$ contains the ultrasoft gluon fields and $W$ are collinear Wilson lines given by \begin{equation} W=\left[\sum_{\mathrm{perm.}}e^{-g\frac{1}{\bar{\mathcal{P}}}\bar{n}A_{n,q}}\right] \end{equation} where the label operator acts only inside the square brackets. We can see that couplings to an arbitrary number of $\bar{n}A_{n,q}$ gluons are present at leading order in $\lambda$. The Feynman rules arising from this Lagrangian are given in appendix \ref{appFR}. Subsequently, power suppressed (in $\lambda$) corrections to that Lagrangian were derived. This was first done in \cite{Beneke:2002ph,Beneke:2002ni}, where the position space formalism for SCET was introduced. In the position space formalism, the different modes present in the theory are also defined by the scaling properties of their momentum. But the strategy to construct the theory now consists of three steps. 
First one performs a field redefinition on the QCD fields, to introduce the fields with the desired scaling properties. Then the resulting Lagrangian is expanded, in order to achieve a homogeneous scaling in $\lambda$ of all the terms in it. This step involves multipole expanding the ultrasoft fields in one light cone direction, according to \begin{equation} \phi_{us}(x)=\phi_{us}(x_-)+\left[x_{\perp}\partial\phi_{us}\right](x_-)+\frac{1}{2}nx\left[\bar{n}\partial\phi_{us}\right](x_-)+\frac{1}{2}\left[x_{\mu\perp}x_{\nu\perp}\partial^{\mu}\partial^{\nu}\phi_{us}\right](x_-)+\mathcal{O}\left(\lambda^3\phi_{us}\right) \end{equation} where $x_-=1/2(\bar{n}x)n$. Finally, the last step consists of a further field redefinition which restores the explicit (collinear and ultrasoft) gauge invariance of the Lagrangian (which was lost by the multipole expansions). With that procedure the Lagrangian for SCET up to corrections of order $\lambda^2$ (with respect to the leading term (\ref{LSCET})) was obtained. Later on, these power suppressed terms were also derived in the label formalism \cite{Pirjol:2002km}. Note that the purely collinear part of the Lagrangian is equivalent to full QCD (in a particular reference frame). The notion of collinear particle acquires a useful meaning when, in a particular reference frame, we have a source that creates such particles. \subsection{SCET$_{\rm II}$} This is the theory that describes processes in which the collinear particles in the final state have virtualities of order $\Lambda_{QCD}^2$. The simultaneous presence of two kinds of collinear modes must be taken into account in this case. 
We will have hard-collinear modes, with a typical scaling $p^{\mu}\sim(1,\lambda,\lambda^2)$ and virtuality of order $E\Lambda_{QCD}$ (these correspond to the collinear modes of the previous section, in SCET$_{\rm{I}}$), and collinear modes, with a typical scaling $p^{\mu}\sim(1,\lambda^2,\lambda^4)$ and virtuality of order $\Lambda_{QCD}^2$; together with ultrasoft modes with scaling $p^{\mu}\sim(\lambda^2,\lambda^2,\lambda^2)$. In the final effective theory (SCET$_{\rm II}$) only modes with virtuality $\mathcal{O}(\Lambda_{QCD}^2)$ must be present. The contributions from the intermediate hard-collinear scale must then be integrated out in this case. This can be done with a two step process, where first the hard scale $E$ is integrated out and one ends up with SCET$_{\rm I}$. Then the hard-collinear modes are integrated out and one is left with an effective theory containing only modes with virtuality of order $\Lambda_{QCD}^2$. SCET$_{\rm II}$ is therefore much more complex than SCET$_{\rm I}$. In particular, one of the most controversial issues is how one should deal with the end-point singularities that may appear in convolutions in the soft-collinear factorization formulas. Those can be treated, or regulated, in several different ways. If one works in dimensional regularization in the limit of vanishing quark masses, a new mode, called the soft-collinear messenger \cite{Becher:2003qh}, must be introduced in the theory. It provides a systematic way to discuss factorization and end-point singularities. Alternative regulators avoid the introduction of such a mode. To what extent the messenger should be considered as fundamental in the definition of the effective theory is, however, still under debate. \chapter{Conclusions/Overview}\label{chapconcl} In this thesis we have employed Effective Field Theory techniques to study the heavy quark sector of the Standard Model. We have focused on three different subjects. 
First, we have studied the singlet static QCD potential, employing potential Non-Relativistic QCD. With the help of that effective theory we have been able to determine the sub-leading infrared dependence of that static potential. Among other possible applications, this calculation will enter the third order analysis of $t-\bar{t}$ production near threshold, an analysis which will be needed for a future $e^+-e^-$ linear collider. After that we have studied an anomalous dimension in Soft-Collinear Effective Theory. That effective theory has very important applications in the field of $B$ physics, a field which is of crucial importance for the indirect searches of new physics effects (through the study of $CP$ violation and the CKM matrix). And finally we have studied the semi-inclusive radiative decays of heavy quarkonium to light particles, employing a combined use of potential Non-Relativistic QCD and Soft-Collinear Effective Theory. Viewed in retrospect, that process can be seen as a nice example of how a process is well described theoretically once one includes all the relevant degrees of freedom (in the effective theory) and uses a well defined power counting. Once the radiative decay is understood, it can be used to determine properties of the decaying heavy quarkonia, as we have also shown in the thesis. \chapter[Two loop SCET heavy-to-light current a.d. : $n_f$ terms]{Two loop SCET heavy-to-light current anomalous dimension: $n_f$ terms}\label{chapda} In this chapter we will calculate the two loop $n_f$ terms of the anomalous dimension of the leading order (in $\lambda$) heavy-to-light current in Soft-Collinear Effective Theory (SCET). The work presented in this chapter, although mentioned in \cite{Neubert:2004dd,Neubert:2005nt}, appears here for the first time. The calculation of the complete two loop anomalous dimension will appear elsewhere \cite{dainprep}. 
\section{Introduction} The heavy-to-light hadronic currents $J_{\mathrm{had}}=\bar{q}\Gamma b$ ($b$ represents a heavy quark and $q$ a light quark) appearing in operators of the weak theory at a scale $\mu\sim m_b$ can be matched into SCET$_{\mathrm{I}}$ \cite{Bauer:2000yr}. The lowest order SCET hadronic current is not $J_{\mathrm{had}}^{SCET}=C(\mu)\bar{\xi}\Gamma h$, but rather \begin{equation} J_{\mathrm{had}}^{SCET}=c_0\left(\bar{n}p,\mu\right)\bar{\xi}_{n,p}\Gamma h+c_1\left(\bar{n}p,\bar{n}q_1,\mu\right)\bar{\xi}_{n,p}\left(g\bar{n}A_{n,q_1}\right)\Gamma h+\cdots \end{equation} That is, an arbitrary number of $\bar{n}A_{n,q}$ gluons can be added without suppression in the power counting. Here $\xi$ and $A$ are the fields for the collinear quarks and gluons in the effective theory, respectively; $h$ is the field for the heavy quark in HQET. Collinear gauge invariance relates all these operators and organizes the current into the (collinear gauge invariant) form \begin{equation} J_{\mathrm{had}}^{SCET}=C_i\left(\mu,\bar{n}P\right)\bar{\chi}_{n,P}\Gamma h \end{equation} where \begin{equation} \bar{\chi}=\bar{\xi}W \end{equation} and $W$ is a collinear Wilson line (see section \ref{secSCET}). We can then run the Wilson coefficients down in SCET. Note that it is enough to consider the simpler current $\bar{\xi}\Gamma h$, because collinear gauge invariance relates them all. This was done at one loop in \cite{Bauer:2000yr}. The result obtained there was\footnote{The coefficients for the different Dirac structures mix into themselves. 
There is no operator mixing at this order.} \begin{equation} Z=1+\frac{\alpha_sC_f}{4\pi}\left(\frac{1}{\epsilon^2}+\frac{2}{\epsilon}\log\left(\frac{\mu}{\bar{n}P}\right)+\frac{5}{2\epsilon}\right) \end{equation} \begin{equation} \gamma=-\frac{\alpha_s}{4\pi}C_f\left(5+4\log\left(\frac{\mu}{\bar{n}P}\right)\right) \end{equation} $Z$ is the current counterterm in the effective theory and $\gamma$ is the anomalous dimension ($P$ is the total outgoing jet momentum). Here we will calculate the two loop $n_f$ corrections to this result. \section{Calculation of the $n_f$ terms} The effective theory diagrams that are needed to compute the $n_f$ terms of the two loop anomalous dimension are depicted in figure \ref{figdiagnf}. We will perform the calculation in $d=4-2\epsilon$ dimensions. To distinguish infrared (IR) divergences from the ultraviolet (UV) ones, we will take the collinear quark off-shell, setting $p_{\perp}=0$, and give the heavy quark a residual momentum $\omega$. This will regulate the IR divergences of all the diagrams. We will work in Feynman gauge. The gluon self-energy is the same as in QCD (for both the collinear and the ultrasoft gluons); it is given in figure \ref{figgse}. The Feynman rules which are needed to compute the diagrams are given in figure \ref{figcurr} (for the current), figure \ref{figHQET} (vertex and propagator rules for HQET) and appendix \ref{appFR} (vertex and propagator rules for SCET). \begin{figure} \centering \includegraphics{currlo.epsi} \caption[Feynman rule for the $\mathcal{O}(\lambda^0)$ SCET heavy-to-light current]{\label{figcurr} Feynman rule for the $\mathcal{O}(\lambda^0)$ SCET heavy-to-light current. The double line is the heavy quark. The dashed line is the collinear quark. Springy lines with a line inside are collinear gluons.} \end{figure} \begin{figure} \centering \includegraphics{HQET.epsi} \caption[HQET Feynman rules]{\label{figHQET} HQET Feynman rules. The heavy quark is represented by a double line. 
The springy line represents the gluon.} \end{figure} For the ultrasoft diagrams the collinear quark propagator simplifies to ($s$ is an ultrasoft loop momentum, $p$ is the external collinear quark momentum) \begin{equation} \frac{\bar{n}(p+s)}{(p+s)^2+i\eta}=\frac{\bar{n}(p)}{\bar{n}(p)n(p+s)+i\eta}=\frac{1}{ns+\frac{\bar{n}pnp}{\bar{n}p}+i\eta}=\frac{1}{ns+\frac{p^2}{\bar{n}p}+i\eta}\equiv\frac{1}{ns+\alpha+i\eta} \end{equation} To further simplify the integrals we will choose $\omega$ to be $\omega=\alpha/2$. For the evaluation of the ultrasoft graphs we will just need the integrals \[ \int\frac{d^ds}{(2\pi)^d}\frac{1}{ns+\alpha+i\eta}\frac{1}{vs+\omega+i\eta}\left(\frac{1}{s^2+i\eta}\right)^{\beta}=\frac{2i}{(4\pi)^{2-\epsilon}}(-1)^{2-\beta}\frac{\Gamma(2-\beta-\epsilon)\Gamma(-2+2\beta+2\epsilon)}{\Gamma(\beta)}\cdot \] \begin{equation}\label{ints1} \cdot\int_0^1dy\left(2\omega y+\alpha(1-y)\right)^{2-2\beta-2\epsilon}y^{-2+\beta+\epsilon}=\frac{2i(-1)^{2-\beta}}{(4\pi)^{2-\epsilon}}\alpha^{2-2\beta-2\epsilon}\frac{\Gamma(2-\beta-\epsilon)\Gamma(-2+2\beta+2\epsilon)}{\Gamma(\beta)(-1+\beta+\epsilon)} \end{equation} \begin{equation} \int\frac{d^ds}{(2\pi)^d}\frac{1}{vs+\omega+i\eta}\left(\frac{1}{s^2+i\eta}\right)^{\beta}=\frac{2i(-1)^{2-\beta}}{(4\pi)^{2-\epsilon}}(2\omega)^{3-2\beta-2\epsilon}\frac{\Gamma(2-\beta-\epsilon)\Gamma(-3+2\beta+2\epsilon)}{\Gamma(\beta)} \end{equation} which can be calculated with Feynman and Georgi parameterizations; we have used that $2\omega=\alpha$ in the last step of (\ref{ints1}). Using these results we obtain \[ \mathrm{Fig.}\;\ref{figdiagnf}a=\frac{\alpha_s^2}{(4\pi)^2}\left(\frac{p^2}{\mu\bar{n}p}\right)^{-4\epsilon}C_f\left(C_A\left(\frac{-5}{12\epsilon^3}-\frac{31}{36\epsilon^2}+\frac{1}{\epsilon}\left(-\frac{2}{27}-\frac{5\pi^2}{8}\right)\right)-\right. 
\] \begin{equation} \left.-T_Fn_f\left(-\frac{1}{3\epsilon^3}-\frac{5}{9\epsilon^2}+\frac{1}{\epsilon}\left(\frac{8}{27}-\frac{\pi^2}{2}\right)\right)\right) \end{equation} \[ \mathrm{Fig.}\;\ref{figdiagnf}b=\frac{\alpha_s^2}{(4\pi)^2}\left(\frac{p^2}{\mu\bar{n}p}\right)^{-2\epsilon}C_f\left(C_A\left(\frac{5}{3\epsilon^3}+\frac{1}{\epsilon}\left(-\frac{5}{3}+\frac{25\pi^2}{36}\right)\right)+\right. \] \begin{equation} \left.+T_Fn_f\left(-\frac{4}{3\epsilon^3}+\frac{1}{\epsilon}\left(\frac{4}{3}-\frac{5\pi^2}{9}\right)\right)\right) \end{equation} where we have redefined $\mu^2\to\mu^2e^{\gamma_E}/(4\pi)$ (from now on, we will always use this redefinition). The evaluation of the collinear graphs requires the integral (which again can be calculated with Feynman and Georgi parameterizations) \[ \int\frac{d^ds}{(2\pi)^d}\frac{\bar{n}(p-s)}{\bar{n}s}\frac{1}{(s-p)^2+i\eta}\left(\frac{1}{s^2+i\eta}\right)^{\beta}=\frac{i(-p^2)^{-\epsilon}}{(4\pi)^{2-\epsilon}}\left(p^2\right)^{-\beta+1}\cdot \] \begin{equation} \cdot\frac{\Gamma(1-\epsilon)\Gamma(\beta-1+\epsilon)\Gamma(1-\beta-\epsilon)}{\Gamma(\beta)\Gamma(2-\beta-2\epsilon)}\left(\frac{1-\epsilon}{2-\beta-2\epsilon}\right) \end{equation} plus other integrals which do not involve the $\bar{n}$ vector and can thus be found in standard QCD books (see for instance \cite{Pascual:1984zb}). Using these results we obtain \[ \mathrm{Fig.}\;\ref{figdiagnf}c=\frac{\alpha_s^2}{(4\pi)^2}\left(\frac{p^2}{\mu^2}\right)^{-2\epsilon}C_f\left(C_A\left(\frac{5}{6\epsilon^3}+\frac{23}{9\epsilon^2}-\frac{1}{\epsilon}\left(-\frac{253}{27}+\frac{5\pi^2}{36}\right)\right)+\right. 
\] \begin{equation} \left.+T_Fn_f\left(-\frac{2}{3\epsilon^3}-\frac{16}{9\epsilon^2}+\frac{1}{\epsilon}\left(-\frac{176}{27}+\frac{\pi^2}{9}\right)\right)\right) \end{equation} \[ \mathrm{Fig.}\;\ref{figdiagnf}d=\frac{\alpha_s^2}{(4\pi)^2}\left(\frac{p^2}{\mu^2}\right)^{-\epsilon}C_f\left(C_A\left(-\frac{10}{3\epsilon^3}-\frac{5}{3\epsilon^2}-\frac{1}{\epsilon}\left(5-\frac{5\pi^2}{18}\right)\right)-\right. \] \begin{equation} \left.-T_Fn_f\left(-\frac{4}{3\epsilon^3}-\frac{2}{3\epsilon^2}+\frac{1}{\epsilon}\left(-2+\frac{\pi^2}{9}\right)\right)\right) \end{equation} \begin{figure} \centering \includegraphics{nfsoft} \caption[2 loop $n_f$ diagrams]{\label{figdiagnf} Effective theory diagrams contributing to the 2 loop $n_f$ terms of the $\mathcal{O}(\lambda^0)$ heavy-to-light current. The double line represents a heavy quark, the dashed line represents a collinear quark and the springy lines are gluons (collinear if they have a line inside, ultrasoft if not). The circled cross is the insertion of the current. The shaded blob represents the one loop insertion of the gluon self-energy and the cross the corresponding counterterm.} \end{figure} \begin{figure} \centering \includegraphics{gse.epsi} \caption[Gluon self-energy]{\label{figgse} Gluon self-energy graph. The gluons in this graph can be understood either as ultrasoft or collinear.} \end{figure} To compute the anomalous dimension we also need the two loop corrections to the collinear and heavy quark propagators. Since the correction to the collinear quark propagator just involves collinear particles (and not ultrasoft ones), it is the same as in usual QCD, while the correction for the heavy quark propagator is that of HQET. 
The corresponding counterterms are \cite{Egorian:1978zx,Broadhurst:1991fz} \[ Z_{\xi}=1+\frac{\alpha_sC_f}{4\pi}\frac{1}{\epsilon}+\left(\frac{\alpha_s}{4\pi}\right)^2C_f\left(C_A\left(\frac{-1}{\epsilon^2}+\frac{34}{8\epsilon}\right)+C_f\left(\frac{-1}{2\epsilon^2}-\frac{3}{4\epsilon}\right)-T_Fn_f\frac{1}{\epsilon}\right) \] \begin{equation} Z_{h}=1-\frac{\alpha_sC_f}{4\pi}\frac{2}{\epsilon}+\left(\frac{\alpha_s}{4\pi}\right)^2C_f\left(C_A\left(\frac{9}{2\epsilon^2}-\frac{19}{3\epsilon}\right)-C_f\frac{2}{\epsilon^2}-T_Fn_f\left(\frac{2}{\epsilon^2}-\frac{8}{3\epsilon}\right)\right) \end{equation} With this we obtain the two loop $n_f$ part of the counterterm in the effective theory \begin{equation} Z^{(2loop\;n_f)}=\left(\frac{\alpha_s}{4\pi}\right)^2\frac{4}{3}C_fT_Fn_f\left(\frac{3}{4\epsilon^3}+\frac{5}{6\epsilon^2}+\frac{1}{\epsilon^2}\log\left(\frac{\mu}{\bar{n}P}\right)-\frac{125}{72}\frac{1}{\epsilon}-\frac{\pi^2}{8\epsilon}-\frac{5}{3\epsilon}\log\left(\frac{\mu}{\bar{n}P}\right)\right) \end{equation} where $P$ is the total outgoing jet momentum. We can then obtain the two loop $n_f$ terms of the anomalous dimension by using the formula \begin{equation} \gamma=\frac{1}{Z}\mu\frac{d}{d\mu}Z=\frac{2}{Z}\left(\left(-\epsilon\alpha_s-\beta_0\frac{\alpha_s^2}{4\pi}\right)\frac{\partial Z}{\partial\alpha_s}+\frac{\mu}{2}\frac{\partial Z}{\partial\mu}\right) \end{equation} The result is \begin{equation} \gamma^{(2loop\;n_f)}=\left(\frac{\alpha_s}{4\pi}\right)^2\frac{4T_Fn_fC_f}{3}\left(\frac{125}{18}+\frac{\pi^2}{2}+\frac{20}{3}\log\left(\frac{\mu}{\bar{n}P}\right)\right) \end{equation} \section{Discussion} There is a lot of theoretical interest in obtaining the $\mathcal{O}\left(\alpha_s^2\right)$ corrections to the $\bar{B}\to X_s\gamma$ decay rate. To measure this decay rate it is experimentally necessary to put a cut on the energy of the observed photon. 
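The pole cancellation leading to the finite result for $\gamma^{(2loop\;n_f)}$ can be verified symbolically. The sketch below assumes the \texttt{sympy} library is available and keeps only the one loop counterterm plus the $n_f$ part of the two loop one (the omitted $C_f^2$ and $C_fC_A$ pieces of $Z$ cannot contribute to the $n_f$ terms at this order):

```python
# Symbolic check that the quoted counterterms reproduce the quoted n_f part of
# the anomalous dimension (sympy assumed available; a = alpha_s/(4 pi)).
import sympy as sp

a, eps, L = sp.symbols('a epsilon L')      # L = log(mu / nbar.P)
Cf, Ca, Tf, nf = sp.symbols('C_f C_A T_F n_f')

# One-loop counterterm and the n_f part of the two-loop one, as quoted above.
z1 = Cf*(1/eps**2 + 2*L/eps + sp.Rational(5, 2)/eps)
z2nf = sp.Rational(4, 3)*Cf*Tf*nf*(sp.Rational(3, 4)/eps**3
       + sp.Rational(5, 6)/eps**2 + L/eps**2
       - sp.Rational(125, 72)/eps - sp.pi**2/(8*eps) - sp.Rational(5, 3)*L/eps)
Z = 1 + a*z1 + a**2*z2nf

beta0 = sp.Rational(11, 3)*Ca - sp.Rational(4, 3)*Tf*nf
# gamma = (1/Z) mu dZ/dmu, with mu da/dmu = -2 eps a - 2 beta0 a^2 and mu dL/dmu = 1
muDZ = (-2*eps*a - 2*beta0*a**2)*sp.diff(Z, a) + sp.diff(Z, L)
gamma = sp.expand(sp.series(muDZ/Z, a, 0, 3).removeO())

# One loop: gamma_1 = -Cf (5 + 4 L), all poles cancelled.
assert sp.simplify(gamma.coeff(a, 1) + Cf*(5 + 4*L)) == 0

# n_f part of the two loop coefficient: the poles cancel and the finite part matches.
g2nf = sum(t for t in sp.Add.make_args(gamma.coeff(a, 2)) if t.has(nf))
target = sp.Rational(4, 3)*Cf*Tf*nf*(sp.Rational(125, 18)
         + sp.pi**2/2 + sp.Rational(20, 3)*L)
assert sp.simplify(g2nf - target) == 0
print("poles cancel; gamma^(2loop nf) reproduced")
```

The same check, run with the full two loop counterterm once it is available, would test the complete anomalous dimension in exactly the same way.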
The photon-energy cut introduces a new scale in the problem and, consequently, induces a possible new source of corrections that must be taken into account in the evaluation of the decay rate. The effects of this scale can be systematically treated in an effective field theory framework using SCET \cite{Neubert:2004dd}. A factorization formula for the decay rate with the cut in the photon energy, which disentangles the effects of all these scales, can then be derived in a systematic way. The anomalous dimension calculated in this chapter enters in this expression. This formula involves, among many other things, a jet function, which describes the physics of the hadronic final state, and a soft (shape) function, which governs the soft physics inside the $B$ meson. Expressions for the evolution equations of these jet and shape functions can be derived. The evolution equation for the jet function involves an anomalous dimension $\gamma^J$ which can be related to the anomalous dimension for the jet function appearing in deep-inelastic scattering (DIS) \cite{Neubert:2004dd,Becher:2006qw}. The two loop anomalous dimension entering the evolution of the shape function was calculated in \cite{Korchemsky:1992xv}\footnote{The original result in \cite{Korchemsky:1992xv} (from 1992) has recently been corrected in the revised version of the paper (from 2005). The revised result agrees with an independent calculation \cite{Gardi:2005yi}. The discovery of some of the mistakes in \cite{Korchemsky:1992xv} was triggered by the calculations described in this chapter.}. The current anomalous dimension that we are dealing with in this chapter can be related to the anomalous dimensions for these jet and shape functions \cite{Neubert:2004dd}. Recently the soft and jet functions have been evaluated at two loops \cite{Becher:2005pd,Becher:2006qw}. 
This calculation confirms the previous results for the two loop anomalous dimension of the soft function \cite{Korchemsky:1992xv,Gardi:2005yi} and provides the first direct calculation (that is, not using the relation with DIS) of the two loop anomalous dimension of the jet function. Therefore the last thing that remains to be done is the direct two loop calculation of the anomalous dimension of the leading order heavy-to-light current in SCET (in this chapter we have presented the $n_f$ part of this calculation). This is important since it will ensure that we correctly understand the renormalization properties of SCET (at two loops). Given the peculiar structure of SCET (quite different from other known effective field theories) some subtleties may arise here. \chapter{General introduction} \section{Preface} This thesis deals with the study of the structure and the interactions of the fundamental constituents of matter. We arrived at the end of the twentieth century describing the known fundamental properties of nature in terms of quantum field theories (for the electromagnetic, weak and strong nuclear forces) and general relativity (for the gravitational force). The Standard Model (SM) of the fundamental interactions of nature comprises quantum field theories for the electromagnetic (quantum electrodynamics, QED) and weak interactions (which are unified in the so-called electro-weak theory) and for the strong interactions (quantum chromodynamics, QCD). This SM is supplemented by the classical (non-quantum) theory of gravitation (general relativity). All the experiments that have been performed in accelerators (to study the basic constituents of nature), up to now, are consistent with this framework. This has led us to the beginning of the twenty-first century waiting for the next generation of experiments to unleash physics beyond these well established theories. 
Great expectations have been put on the Large Hadron Collider (LHC) currently being built at CERN and scheduled to become operative in 2007. It is hoped that the LHC will open the way to new physics, either by discovering the Higgs particle (the last yet-unseen particle in the SM) and triggering this way the discovery of new particles beyond the SM, or by showing that there is no Higgs particle at all, which would demand a whole new theoretical framework for the explanation of the fundamental interactions in nature\footnote{Let us not worry much about the scary possibility that the LHC finds the Higgs, closes the SM, and shows that there are no new physics effects at any scale we are capable of reaching in accelerators. Although this is possible, it is, of course, extremely undesirable.}. Possible extensions of the SM have been widely studied during the last years. The expectation is that those effects will show up in that new generation of experiments. Obviously accelerator Earth-based experiments are not the only source of information for new physics; looking at the sky and at the information that comes from it (highly energetic particles, cosmic backgrounds...) is also a widely studied and great option. But in this way of finding the next up-to-now-most-fundamental theory of nature we do not want to lose the ability to use it to make precise predictions for as many processes as possible, and we also want to be able to understand how the previous theory can be obtained from it in an unambiguous way. It is obviously the dream of all physicists to obtain a unified framework for explaining the four known interactions of nature. But not at the price of having a theory that can explain everything but is so complicated that it does not explain anything. In that sense, constructing a new theory is as important as developing appropriate tools for its use. 
As mentioned, this is true in a two-fold way: we should be able to understand how the previous theory can be derived from the new one, and we should also be able to precisely predict as many observables as possible (to see if the observations can really be accommodated in our theory or if new effects are needed). Let us end this preface to the thesis with a little joke. According to what has been said here, the title of the seminar that would mark the end of (theoretical) physics is not: \emph{M-theory: a unification of all the interactions in nature}, but rather: \emph{How to obtain the metabolism of a cow from M-theory}. \section{Effective Field Theories} In the thesis we will focus on the study of systems involving the strongly interacting sector of the Standard Model (SM). The piece of the SM which describes the strong interactions is known as Quantum ChromoDynamics (QCD). QCD is a non-abelian $SU(3)$ quantum field theory that describes the interactions between quarks and gluons. Its Lagrangian is extremely simple and is given by \begin{equation} \mathcal{L}_{QCD}=\sum_{i=1}^{N_f}\bar{q}_i\left(iD\!\!\!\!\slash-m_i\right)q_i-\frac{1}{4}G^{\mu\nu\, a}G_{\mu\nu}^a \end{equation} In that equation, $q_i$ are the quark fields, $igG_{\mu\nu}=[D_{\mu},D_{\nu}]$, with $D_{\mu}=\partial_{\mu}+igA_{\mu}$, $A_{\mu}$ are the gluon fields and $N_f$ is the total number of quark flavors. QCD enjoys the properties of confinement and asymptotic freedom. The strong coupling constant becomes large at small energies and tends to zero at large energies. At large energies quarks and gluons behave as free particles, whereas at low energies they are confined inside color singlet hadrons. QCD develops an intrinsic scale $\Lambda_{QCD}$ at low energies, which gives the main contribution to the mass of most hadrons. 
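As a concrete illustration of this running (a sketch only: one loop accuracy, and the values of $\Lambda_{QCD}$ and $n_f$ below are illustrative choices rather than a precision determination):

```python
# One-loop running coupling: alpha_s(mu) = 4 pi / (beta0 * log(mu^2 / Lambda^2)),
# with beta0 = 11 - 2 nf / 3. Lambda and nf are illustrative choices.
import math

def alpha_s(mu, Lambda=0.088, nf=5):
    """One-loop alpha_s at the scale mu (in GeV)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(mu ** 2 / Lambda ** 2))

# Asymptotic freedom: the coupling decreases as the scale grows,
# and grows towards order one as mu approaches Lambda_QCD.
for mu in (0.5, 1.0, 5.0, 91.2):
    print("alpha_s(%6.1f GeV) = %.3f" % (mu, alpha_s(mu)))
```

With these illustrative inputs one gets $\alpha_{\rm s}(M_Z)\approx 0.12$ and a coupling of order one for scales of a few hundred MeV, which is the qualitative behaviour described above.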
$\Lambda_{QCD}$ can be thought of in several, slightly different, ways, but it is basically the scale where the strong coupling constant becomes order one (and perturbative calculations in $\alpha_{\rm s}$ are no longer reliable). It can be thought of as a scale around the mass of the proton. The presence of this intrinsic scale $\Lambda_{QCD}$, and the related fact that the spectrum of the theory consists of color singlet hadronic states, means that direct QCD calculations may be very complicated (if not impossible) for many physical systems of interest. The techniques known as Effective Field Theories (EFT) will help us in this task. In general in quantum field theory, the study of any process which involves more than one relevant physical scale is complicated. The calculations (and integrals) that appear can become very cumbersome if more than one scale enters them. The idea is then to construct a new theory (the effective theory), derived from the fundamental one, in such a way that it just involves the relevant degrees of freedom for the particular energy regime we are interested in. The general idea underlying the EFT techniques is simply the following: to describe physics in a particular energy region we do not need to know the detailed dynamics of the other regions. Obviously this is a very well known and commonly believed fact. For instance, to describe a chemical reaction one does not need to know about the quantum electrodynamical interaction between the photons and the electrons; rather, a model of the atom with a nucleus and orbiting electrons is more adequate. And one does not need to use this atomic model to describe a macroscopic biological process. The implementation of this commonly known idea in the framework of quantum field theories is what is known under the generic name of \emph{Effective Field Theories}. 
As mentioned before, those techniques are especially useful for processes involving the strongly interacting sector of the SM, which is what this thesis focuses on. The process of constructing an EFT comprises the following general steps. First one has to identify the relevant degrees of freedom for the process one is interested in. Then one should make use of the symmetries that are present in the problem at hand, and finally any hierarchy of energy scales should be exploited. It is important to notice that the EFT is constructed in such a way that it gives physical results equivalent to those of the fundamental theory in its region of validity. We are not constructing a model for the process we want to study, but rigorously deriving the desired results from the fundamental theory, in a well controlled expansion. More concretely, in this thesis we will focus on the study of systems involving heavy quarks. As is well known, there are six flavors of quarks in QCD. Three of them have masses below the intrinsic scale of QCD $\Lambda_{QCD}$, and are called \emph{light}. The other three have masses larger than $\Lambda_{QCD}$ and are called \emph{heavy}. Therefore, rather than describing and classifying EFTs in general, we will describe heavy quark systems and the EFTs that can be constructed for them (as this is more adequate for our purposes here). \section{Heavy quark and quarkonium systems}\label{secinthq} Three of the six quarks present in QCD have masses larger than $\Lambda_{QCD}$ and are called heavy quarks. The three heavy quarks are the charm quark, the bottom quark and the top quark. The EFT will take advantage of this large mass of the quarks and construct an expansion around the heavy quark limit of infinite quark mass. The simplest systems that can be constructed involving heavy quarks are hadrons composed of one heavy quark and one light (anti-)quark. 
The suitable effective theory for describing this kind of system is known as \emph{Heavy Quark Effective Theory} (HQET), and it is nowadays (together with chiral perturbation theory, which describes low energy interactions among pions and kaons, and the Fermi theory of weak interactions, which describes weak disintegrations below the mass of the $W$) a widely used example of how EFTs work in a realistic case (see \cite{Neubert:1993mb} for a review of HQET). In brief, the relevant scales for this kind of system are the heavy quark mass $m$ and $\Lambda_{QCD}$. The effective theory is then constructed as an expansion in $\Lambda_{QCD}/m$. The momentum of a heavy quark is decomposed as \begin{equation} p=mv+k \end{equation} where $v$ is the velocity of the hadron (which is basically the velocity of the heavy quark) and $k$ is a residual momentum of order $\Lambda_{QCD}$. The dependence on the large scale $m$ is extracted from the fields, according to \begin{equation} Q(x)=e^{-im_Qv\cdot x}\tilde{Q}_v(x)=e^{-im_Qv\cdot x}\left[h_v(x)+H_v(x)\right] \end{equation} and a theory for the soft fluctuations around the heavy quark mass is constructed. The leading order Lagrangian of HQET is given by \begin{equation} \mathcal{L}_{HQET}=\bar{h}_viv\cdot Dh_v \end{equation} This leading order Lagrangian exhibits flavor and spin symmetries, which can be exploited for phenomenology. The systems on which this thesis mainly (although not exclusively) focuses are those known as heavy quarkonium. Heavy quarkonium is a bound state composed of a heavy quark and a heavy antiquark. We can therefore have \emph{charmonium} ($c\bar{c}$) and \emph{bottomonium} ($b\bar{b}$) systems. The heaviest of the quarks, the top, decays weakly before forming a bound state; nevertheless $t-\bar{t}$ production in the non-relativistic regime (that is, near threshold) can also be studied with the same techniques.
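Coming back for a moment to the HQET decomposition above: the component fields can be written with the standard velocity projectors (a textbook identity, recalled here for convenience),

```latex
\[
h_v(x)=e^{im_Qv\cdot x}\,\frac{1+\not\! v}{2}\,Q(x)\,,\qquad
H_v(x)=e^{im_Qv\cdot x}\,\frac{1-\not\! v}{2}\,Q(x)
\]
```

so that $\not\! v\,h_v=h_v$ and $\not\! v\,H_v=-H_v$. The small component $H_v$ is suppressed by powers of $1/m$, and integrating it out yields the leading order Lagrangian above, plus $1/m$ corrections.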
The relevant physical scales for heavy quarkonium systems are the heavy quark mass $m$, the typical three momentum of the bound state $mv$ (where $v$ is the typical relative velocity of the heavy quark-antiquark pair in the bound state) and the typical kinetic energy $mv^2$, together with the intrinsic hadronic scale of QCD, $\Lambda_{QCD}$. The presence of all these scales shows that heavy quarkonium systems probe all the energy regimes of QCD, from the hard perturbative region to the low energy non-perturbative one. Heavy quarkonium systems are therefore an excellent place to improve our understanding of QCD and to study the interplay of the perturbative and the non-perturbative effects in QCD \cite{Brambilla:2004wf}. To achieve this goal, EFTs for these systems will be constructed. Using the fact that the mass $m$ of the heavy quark is much larger than any other scale present in the problem (a procedure which is referred to as \emph{integrating out the scale $m$}) one arrives at an effective theory known as \emph{Non-Relativistic QCD} (NRQCD) \cite{Bodwin:1994jh}. In this theory, which describes the dynamics of heavy quark-antiquark pairs at energy scales much smaller than their masses, the heavy quark (and antiquark) is treated non-relativistically and described by (2-component) Pauli spinors. Gluons and light quarks with a four momentum of order $m$ are also integrated out and are no longer present in the effective theory. What we have achieved with the construction of this EFT is the systematic factorization of the effects at the hard scale $m$ from the effects coming from the rest of the scales. NRQCD provides us with a rigorous framework to study spectroscopy, decay, production and many other heavy quarkonium processes.
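To attach rough numbers to the scales mentioned at the beginning of this section (the estimates below are the ones commonly quoted in the quarkonium literature, not results derived here):

```latex
\[
m\;\gg\;mv\;\gg\;mv^2\,,\qquad
v^2\sim 0.3\ \ (c\bar{c})\,,\qquad
v^2\sim 0.1\ \ (b\bar{b})
\]
```

so that for bottomonium, for instance, the three scales are of order $5\,{\rm GeV}$, $1.5\,{\rm GeV}$ and $0.5\,{\rm GeV}$, with $\Lambda_{QCD}$ lying somewhere at or below the lowest of them.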
The leading order Lagrangian for this theory is given by \begin{equation} \mathcal{L}_{NRQCD}= \psi^{\dagger} \left( i D_0 + {1\over 2 m} {\bf D}^2 \right)\psi + \chi^{\dagger} \left( i D_0 - {1\over 2 m} {\bf D}^2 \right)\chi \end{equation} where $\psi$ is the field that annihilates a heavy quark and $\chi$ the field that creates a heavy antiquark. Sub-leading terms (in the $1/m$ expansion) can then be derived. One might be surprised, at first, that heavy quarkonium decay processes can be studied in NRQCD, since the annihilation of a $Q\bar{Q}$ pair produces gluons and light quarks with energies of order $m$, and those degrees of freedom are not present in NRQCD. Nevertheless, annihilation processes can be described within NRQCD (in fact the theory is constructed to reproduce that kind of physics): they are incorporated in NRQCD through local four fermion operators. The $Q\bar{Q}$ annihilation rate is represented in NRQCD by the imaginary parts of $Q\bar{Q}\to Q\bar{Q}$ scattering amplitudes. The coefficients of the four fermion operators in the NRQCD Lagrangian therefore have imaginary parts, which reproduce the $Q\bar{Q}$ annihilation rates. In that way we can describe inclusive heavy quarkonium decay widths to light particles. NRQCD has factorized the effects at the hard scale $m$ from the rest of the scales in the problem. But if we want to describe heavy quarkonium physics at the scale of the binding energy, we face the complication that the soft, $mv$, and ultrasoft, $mv^2$, scales are still entangled in NRQCD. It would be desirable to disentangle the effects of these two scales. To solve this problem one can proceed in more than one way. One possibility is to introduce separate fields for the soft and ultrasoft degrees of freedom at the NRQCD level. This leads to the formalism now known as \emph{velocity NRQCD} (vNRQCD) \cite{Luke:1999kz}.
Another possibility is to exploit further the non-relativistic hierarchy of scales in the system ($m\gg mv\gg mv^2$) and construct a new effective theory which contains just the relevant degrees of freedom to describe heavy quarkonium physics at the scale of the binding energy. That procedure leads to the formalism known as \emph{potential NRQCD} (pNRQCD) \cite{Brambilla:1999xf} and is the approach that we will take in this thesis. When going from NRQCD to pNRQCD one integrates out gluons and light quarks with energies of order of the soft scale $mv$, and heavy quarks with energy fluctuations at this soft scale. This procedure is sometimes referred to as \emph{integrating out the soft scale}, although the scale $mv$ is still active in the three momentum of the heavy quarks. The resulting effective theory, pNRQCD, is non-local in space (since the gluons are massless and the typical momentum transfer is at the soft scale). The usual potentials of quantum mechanics appear as Wilson coefficients of the effective theory. This effective theory will be described in some more detail in section \ref{secpNRQCD}. The correct treatment of some heavy quark and quarkonium processes requires additional degrees of freedom, apart from those of HQET or NRQCD. When we want to describe regions of phase space where the decay products have large energy, or exclusive decays of heavy particles, for example, collinear degrees of freedom need to be present in the theory. The interaction of collinear and soft degrees of freedom has been implemented in an EFT framework in what is now known as \emph{Soft-Collinear Effective Theory} (SCET) \cite{Bauer:2000yr,Beneke:2002ph}. This effective theory will also be described below, in section \ref{secSCET}.
Let us just mention here that, due to the peculiar nature of light cone interactions, this EFT is non-local in a light cone direction (collinear gluons cannot interact with soft fermions without taking them far off-shell). The study of heavy quark and quarkonium systems has thus led us to the construction of effective quantum field theories of increasing richness and complexity. The full power of the quantum field theory techniques (loop effects, matching procedures, resummation of large logarithms...) is exploited to obtain systematic improvements in our understanding of these systems. \section{Structure of the thesis} This thesis is structured in the following manner. The next chapter (chapter \ref{cat}) is a summary of the whole thesis written in Catalan (it does not contain any information which is not present in the other chapters, apart from the translation). Chapter \ref{chapback} contains an introduction to potential NRQCD and Soft-Collinear Effective Theory, the two effective theories that are mainly used throughout the thesis. The three following chapters (chapters \ref{chappotest}, \ref{chapda} and \ref{chapraddec}) comprise the original contributions of this thesis. Chapter \ref{chappotest} is devoted to the study of the (infrared dependence of the) QCD static potential, employing pNRQCD techniques. Chapter \ref{chapda} is devoted to the calculation of an anomalous dimension in SCET (the two loop $n_f$ terms are obtained), which is relevant for many processes currently under study. And chapter \ref{chapraddec} is devoted to the study of the semi-inclusive radiative decays of heavy quarkonium to light hadrons, employing a combination of pNRQCD and SCET. Chapter \ref{chapconcl} is devoted to the final conclusions. This chapter is followed by three appendices. The first appendix contains definitions of several factors appearing throughout the thesis. The second appendix contains Feynman rules for pNRQCD and SCET.
And, finally, the third appendix contains the factorization formulas for the NRQCD matrix elements in the strong coupling regime. \chapter*{ } \thispagestyle{empty} \begin{center} \vspace*{\stretch{1}} {\bf {\Huge Applications of effective field theories to the strong interactions of heavy quarks}}\\ \vspace*{\stretch{2}} Mem\`oria de la tesi presentada per en Xavier Garcia i Tormo per optar al grau de Doctor en Ci\`encies F\'\i siques \vspace*{\stretch{1}} {Director de la tesi: Dr. Joan Soto i Riera} \vspace*{\stretch{4}} Departament d'Estructura i Constituents de la Mat\`eria Programa de doctorat de {\it ``F\'\i sica avan\c cada''} Bienni 2001-2003 {\bf Universitat de Barcelona} \vspace*{\stretch{2}} \end{center} \pagebreak \chapter*{ } \thispagestyle{empty} \begin{flushright} \vspace*{\stretch{1}} No diguis blat\\fins que no sigui al sac i ben lligat\\ \vspace{.2cm} DITA POPULAR \vspace*{\stretch{4}} \end{flushright} \pagebreak \chapter{The singlet static QCD potential}\label{chappotest} In this chapter we will calculate the logarithmic fourth order perturbative correction to the static quark-antiquark potential for a color singlet state (that is, the sub-leading infrared dependence). This work appears here for the first time. It will later be reported in \cite{potestinprep}. \section{Introduction} The static potential between a quark and an antiquark is a key object for understanding the dynamics of QCD. The first thing almost every student learns about QCD is that a linearly growing potential at long distances is a signal of confinement. Apart from that, it is also a basic ingredient of the Schr\"odinger-like formulation of heavy quarkonium. What is more, precise lattice data for the short distance part of the potential is nowadays available, allowing for a comparison between lattice and perturbation theory. The static potential is therefore an ideal place to study the interplay of the perturbative and the non-perturbative aspects of QCD.
The quark-antiquark system can be in a color singlet or in a color octet configuration, which gives rise to the singlet and octet potentials, respectively. Both of them are relevant for the modern effective field theory calculations in the heavy quarkonium system. Here we will focus on the singlet potential. The perturbative expansion of the singlet static potential (in position space) reads \[ V_s^{(0)}(r)=-\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(1+\frac{\alpha_{\rm s}(1/r)}{4\pi}\left(a_1+2\gamma_E\beta_0\right)+\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^2\bigg( a_2+\right. \] \begin{equation}\label{pot} +\left.\left(\frac{\pi^2}{3}+4\gamma_E^2\right)\beta_0^2+\gamma_E\left(4a_1\beta_0+2\beta_1\right)\bigg)+\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^3\left(\tilde{a}_3+\frac{16\pi^2}{3}C_A^3\log r\mu\right)+\cdots\right) \end{equation} The one-loop coefficient $a_1$ is given by \cite{Fischler:1977yf,Billoire:1979ih} \begin{equation} a_1=\frac{31}{9}C_A-\frac{20}{9}T_Fn_f \end{equation} and the two loop coefficient $a_2$ by \cite{Peter:1996ig,Schroder:1998vy} \begin{eqnarray} a_2&=& \left[{4343\over162}+4\pi^2-{\pi^4\over4}+{22\over3}\zeta(3)\right]C_A^2 -\left[{1798\over81}+{56\over3}\zeta(3)\right]C_AT_Fn_f- \nonumber\\ &&{}-\left[{55\over3}-16\zeta(3)\right]C_fT_Fn_f +\left({20\over9}T_Fn_f\right)^2 \end{eqnarray} The non-logarithmic third order correction $\tilde{a}_3$ is still unknown. The form of the logarithmic term in (\ref{pot}) corresponds to using dimensional regularization for the ultrasoft loop (which is the natural scheme when calculating from pNRQCD).\footnote{Note that this is not the natural scheme when calculating from NRQCD. In that case one would also regulate the potentials in $d$ dimensions.} We will calculate here the logarithmic fourth order correction to the potential. Since this calculation follows the same lines as that of the third order logarithmic terms, we will briefly review it in the next section.
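As a numerical illustration (a small side computation, not part of the derivation; the SU(3) values $C_A=3$, $C_f=4/3$, $T_F=1/2$ are assumed), the coefficients above can be evaluated for the physically relevant numbers of light flavors:

```python
import math

# SU(3) group factors (assumed): C_A = N_c, C_f = (N_c^2 - 1)/(2 N_c), T_F = 1/2
CA, CF, TF = 3.0, 4.0 / 3.0, 0.5
ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def a1(nf):
    """One-loop coefficient of the static potential."""
    return 31.0 / 9.0 * CA - 20.0 / 9.0 * TF * nf

def a2(nf):
    """Two-loop coefficient of the static potential."""
    return ((4343.0 / 162.0 + 4.0 * math.pi**2 - math.pi**4 / 4.0
             + 22.0 / 3.0 * ZETA3) * CA**2
            - (1798.0 / 81.0 + 56.0 / 3.0 * ZETA3) * CA * TF * nf
            - (55.0 / 3.0 - 16.0 * ZETA3) * CF * TF * nf
            + (20.0 / 9.0 * TF * nf) ** 2)

print(a1(4), a2(4))  # roughly 5.89 and 211.1 for n_f = 4
```

For $n_f=4$ one finds $a_1\simeq 5.89$ and $a_2\simeq 211$, and for $n_f=5$ one finds $a_2\simeq 156$, showing how quickly the series coefficients grow.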
\section{Review of the third order logarithmic correction} The leading infrared (IR) logarithmic dependence of the singlet static potential was obtained in \cite{Brambilla:1999xf} by matching NRQCD to pNRQCD perturbatively. The matching is performed by comparing Green functions in NRQCD and pNRQCD (in coordinate space), order by order in $1/m$ and in the multipole expansion. To perform that matching, first of all one needs to identify interpolating fields in NRQCD with the same quantum numbers and transformation properties as the singlet and octet fields in pNRQCD. The chosen fields are \begin{equation} \chi^\dagger({\bf x}_2,t) \phi({\bf x}_2,{\bf x}_1;t) \psi({\bf x}_1,t) \rightarrow \sqrt{Z^{(0)}_s(r)} S({\bf r},{\bf R},t) + \sqrt{Z_{E,s}(r)} \, r \, {\bf r}\cdot g{\bf E}^a({\bf R},t) O^a({\bf r},{\bf R},t) + \dots \end{equation} for the singlet, and \begin{eqnarray} \chi^\dagger({\bf x}_2,t) \phi({\bf x}_2,{\bf R};t) T^a \phi({\bf R},{\bf x}_1;t) \psi({\bf x}_1,t) &\rightarrow& \sqrt{Z^{(0)}_o(r)} O^a({\bf r},{\bf R},t)+ \nonumber\\ && \hspace{-5mm} + \sqrt{Z_{E,o}(r)} \, r \, {\bf r}\cdot g{\bf E}^a({\bf R},t) S({\bf r},{\bf R},t) + \dots \end{eqnarray} for the octet, where \begin{equation} \phi({\bf y},{\bf x};t)\equiv P \, \exp \left\{ i \displaystyle \int_0^1 \!\! ds \, ({\bf y} - {\bf x}) \cdot g{\bf A}({\bf x} - s({\bf x} - {\bf y}),t) \right\} \end{equation} The $Z$'s in the above expressions are normalization factors. The different combinations of fields on the pNRQCD side are organized according to the multipole expansion; only the first term of this expansion is needed for our purposes here.
Then the matching is done using the Green function \begin{equation} G=\langle {\rm vac} \vert \chi^\dagger(x_2) \phi(x_2,x_1) \psi(x_1) \psi^\dagger(y_1)\phi(y_1,y_2) \chi(y_2) \vert {\rm vac} \rangle \end{equation} On the NRQCD side we obtain \begin{equation}\label{GNRQCD} G_{\mathrm{NRQCD}}=\delta^3({\bf x}_1 - {\bf y}_1) \delta^3({\bf x}_2 - {\bf y}_2) \langle W_\Box \rangle \end{equation} where $W_\Box$ represents the rectangular Wilson loop of figure \ref{figWL}. Explicitly it is given by \begin{equation} W_\Box \equiv P \,\exp\left\{{\displaystyle - i g \oint_{r\times T} \!\!dz^\mu A_{\mu}(z)}\right\}. \end{equation} The brackets around it in (\ref{GNRQCD}) represent an average over gauge fields and light quarks. \begin{figure} \centering \includegraphics{willoop.epsi} \caption[Rectangular Wilson loop]{Rectangular Wilson loop. The corners are $x_1 = (T/2,{\bf r}/2)$, $x_2 = (T/2,-{\bf r}/2)$, $y_1 = (-T/2,{\bf r}/2)$ and $y_2 = (-T/2,-{\bf r}/2)$}\label{figWL} \end{figure} We are interested only in the large $T$ limit of the Wilson loop (to single out the soft scale); therefore we define the following expansion for $T\to\infty$ \begin{equation} {i\over T}\ln \langle W_\Box \rangle = u_0(r) + i {u_1(r)\over T} + {\cal O}\left( {1\over T^2}\right) \end{equation} On the pNRQCD side we obtain, at order $r^2$ in the multipole expansion\footnote{The superscripts $(0)$ in all those expressions remind us that we are in the static limit $m\to\infty$. Since in this chapter we are always in the static limit, we will omit them after (\ref{GpNRQCD}), to simplify the notation.} \begin{eqnarray}\label{GpNRQCD} & &G_{\rm pNRQCD} = Z^{(0)}_s(r) \delta^3({\bf x}_1 - {\bf y}_1) \delta^3({\bf x}_2 - {\bf y}_2) e^{-iTV^{(0)}_s(r)}\cdot \\ & &\cdot \left(1 - {T_F\over N_c} V_A^2 (r) \int_{-T/2}^{T/2} \! dt \int_{-T/2}^{t} \!
dt^\prime \, e^{-i(t-t^\prime)(V^{(0)}_o-V^{(0)}_s)} \langle {\bf r}\cdot g{\bf E}^a(t) \phi^{\rm adj}_{ab}(t,t^\prime){\bf r} \cdot g{\bf E}^b(t^\prime)\rangle \right) \nonumber \end{eqnarray} where the Wilson line \begin{equation} \phi(t,t')=P\, \exp \left\{ - ig \displaystyle \int_{t'}^{t} \!\! d\tilde{t} \, A_0(\tilde{t}) \right\} \end{equation} which comes from the octet propagator, is evaluated in the adjoint representation. Then comparing $G_{\rm NRQCD}$ with $G_{\rm pNRQCD}$ one obtains the matching conditions for the potential and the normalization factors. That matching is schematically represented in figure \ref{figmatchr2}. \begin{figure} \centering \includegraphics{NRpNR.epsi} \caption[NRQCD$\to$pNRQCD matching of the potential at $\mathcal{O}(r^2)$]{Schematic representation of the NRQCD$\to$pNRQCD matching for the static potential and the normalization factors, at order $r^2$ in the multipole expansion. On the pNRQCD side (right) the single line represents the singlet field, the double line the octet field, the circled cross the $\mathcal{O}(r)$ chromoelectric vertex and the thick springy line represents the correlator of chromoelectric fields. Remember that these diagrams represent an expansion in $r$ (and in $1/m$) and not a perturbative expansion in $\alpha_s$.}\label{figmatchr2} \end{figure} Note that up to this point no perturbative expansion in $\alpha_{\rm s}$ has been used yet. We will now evaluate the pNRQCD diagram perturbatively in $\alpha_{\rm s}$. The dependence on $\alpha_{\rm s}$ enters through the $V_A$, $V_s$ and $V_o$ potentials and through the field strength correlator of chromoelectric fields. To regulate IR divergences we will keep the $\alpha_{\rm s}$ dependence in the exponential on the second line of (\ref{GpNRQCD}) unexpanded. $V_o-V_s$ will then act as our IR regulator. The tree level expression for $V_A$ is given by the NRQCD$\to$pNRQCD matching at order $r$ in the multipole expansion.
It simply states that $V_A=1$ at tree level. We then need the tree level expression for the correlator of chromoelectric fields. Note that, since afterwards we want to integrate over $t$ and $t'$, we need this gluonic correlator in $d$ dimensions (see the following subsections for more details). Inserting the $d$-dimensional ($d=4-2\epsilon$) tree level expressions we obtain (in the $T\to\infty$ limit) \[ G_{\rm pNRQCD}=Z_s(r) \delta^3({\bf x}_1 - {\bf y}_1) \delta^3({\bf x}_2 - {\bf y}_2) e^{-iTV_s(r)}\cdot \] \[ \cdot\left(1-iT\frac{N_c^2-1}{2N_c}\frac{\alpha_s(\mu)}{\pi}\frac{r^2}{3}\left(V_o-V_s\right)^3\left(\frac{1}{\epsilon}-\log\left(\frac{V_o-V_s}{\mu}\right)^2+const.\right)-\right. \] \begin{equation}\label{restl} \left.-\frac{N_c^2-1}{2N_c}\frac{\alpha_s(\mu)}{\pi}r^2\left(V_o-V_s\right)^2\left(\frac{1}{\epsilon}-\log\left(\frac{V_o-V_s}{\mu}\right)^2+const.\right)\right) \end{equation} where we can explicitly see that $V_o-V_s$ acts as our IR regulator in the logarithms. The last line in (\ref{restl}) only affects the normalization factor (and not the potential) and will be omitted in the following. The $V_o-V_s$ that appears outside the logarithms must now be expanded in $\alpha_{\rm s}$ (since it is no longer acting as an IR regulator). Therefore we have obtained the result \[ G_{\rm pNRQCD}=Z_s(r) \delta^3({\bf x}_1 - {\bf y}_1) \delta^3({\bf x}_2 - {\bf y}_2) e^{-iTV_s(r)}\cdot \] \begin{equation}\label{restl2} \cdot\left(1-iT\frac{N_c^2-1}{2N_c}\frac{\alpha_s(\mu)}{\pi}\frac{C_A^3}{24}\frac{1}{r}\alpha_s^3(1/r)\left(\frac{1}{\epsilon}-\log\left(\frac{V_o-V_s}{\mu}\right)^2+const.\right)+\mathcal{O}(T^0)\right) \end{equation} Note that the $\alpha_{\rm s}$ coming from the potentials are evaluated at the soft scale $1/r$, while the $\alpha_{\rm s}$ coming from the ultrasoft couplings is evaluated at the scale $\mu$\footnote{Obviously the distinction is only relevant at the next order, that is, in the next section.}.
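The step from (\ref{restl}) to (\ref{restl2}) uses only the tree level octet--singlet splitting of the potentials (their explicit expressions are quoted in the next section):

```latex
\[
V_o-V_s=\left[\left(\frac{C_A}{2}-C_f\right)-\left(-C_f\right)\right]\frac{\alpha_{\rm s}}{r}
=\frac{C_A}{2}\,\frac{\alpha_{\rm s}}{r}
\quad\Longrightarrow\quad
\frac{r^2}{3}\left(V_o-V_s\right)^3=\frac{C_A^3}{24}\,\frac{\alpha_{\rm s}^3}{r}
\]
```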
The ultraviolet divergences in that expression can be re-absorbed by a renormalization of the potential. Comparing now the perturbative evaluations of $G_{\rm NRQCD}$ and $G_{\rm pNRQCD}$ (obviously the same renormalization scheme used in the evaluation of the pNRQCD diagram must be used in the calculation of the Wilson loop in NRQCD) we obtain \begin{equation} V_s(r,\mu)=(u_0(r))_{\rm two-loops} -\frac{N_c^2-1}{2N_c}{C_A^3\over 12} {\alpha_{\rm s}\over r}{\alpha_{\rm s}^3\over \pi}\ln ({r\mu}) \end{equation} When $(u_0(r))_{\rm two-loops}$ is substituted by its known value we obtain the result (\ref{pot}), quoted in the introduction of this chapter. The calculation explained in this section has thus provided us with the leading IR logarithmic dependence of the static potential. In \cite{Brambilla:1999xf} the cancellation of the IR cut-off dependence between the NRQCD and pNRQCD expressions was checked explicitly, by calculating the relevant graphs for the Wilson loop. That was an explicit check that the effective theory (pNRQCD) correctly reproduces the IR. In the following section we will use the same procedure employed here to obtain the next-to-leading IR logarithmic dependence of the static potential. That is, the logarithmic $\alpha_{\rm s}^5$ contribution to the potential (which is part of the N$^4$LO, in $\alpha_{\rm s}$, correction to the potential). \section{Fourth order logarithmic correction} In the preceding section no perturbative expansion in $\alpha_{\rm s}$ was used until the paragraph following equation (\ref{GpNRQCD}). Therefore (\ref{GpNRQCD}) is still valid for us here and will be our starting point (note that contributions from higher order operators in the multipole expansion have a suppression of order $\alpha_{\rm s}^2$ with respect to the second term of (\ref{GpNRQCD}), and therefore are unimportant for us here). We need to calculate the $\alpha_{\rm s}$ correction to the evaluation of the diagram in the preceding section.
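Before proceeding, a quick numerical cross-check (a sketch; the SU(3) values are assumed) that the matching result just reviewed indeed coincides with the $\alpha_{\rm s}^4\log(r\mu)$ term displayed in (\ref{pot}):

```python
import math

NC = 3.0                         # number of colors (SU(3) assumed)
CA = NC                          # adjoint Casimir
CF = (NC**2 - 1.0) / (2.0 * NC)  # fundamental Casimir

# coefficient of alpha_s^4 log(r mu)/r as obtained from the pNRQCD matching:
# (N_c^2-1)/(2 N_c) * C_A^3/12 * 1/pi
from_matching = (NC**2 - 1.0) / (2.0 * NC) * CA**3 / 12.0 / math.pi

# the same coefficient as written in the expansion of the potential:
# C_f * (1/(4 pi))^3 * (16 pi^2 / 3) * C_A^3
from_potential = CF * (1.0 / (4.0 * math.pi))**3 * (16.0 * math.pi**2 / 3.0) * CA**3

print(from_matching, from_potential)  # the two numbers coincide
```

Both expressions reduce to $C_fC_A^3/(12\pi)$, so the two ways of writing the third order logarithm are the same.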
Remember that the dependence on $\alpha_{\rm s}$ enters through $V_A$, $V_s$, $V_o$ and the correlator of gluonic fields; therefore we need the $\mathcal{O}(\alpha_s)$ correction to all these quantities. These terms will be discussed in turn in the following subsections. Then in subsection \ref{subseccal4o} we will obtain the fourth order logarithmic correction to the potential. Again we will regulate IR divergences by keeping the exponential of $V_o-V_s$ unexpanded. \subsection{$\mathcal{O}(\alpha_s)$ correction of $V_A$, $V_s$ and $V_o$} The $\mathcal{O}(\alpha_s)$ corrections to $V_s$ and $V_o$ are well known. They are given by \begin{eqnarray} V_s & = & -\frac{C_f}{r}\alpha_s(1/r)\left(1+\left(a_1+2\gamma_E\beta_0\right)\frac{\alpha_s(1/r)}{4\pi}\right)\\ V_o & = & \left(\frac{C_A}{2}-C_f\right)\frac{1}{r}\alpha_s(1/r)\left(1+\left(a_1+2\gamma_E\beta_0\right)\frac{\alpha_s(1/r)}{4\pi}\right) \end{eqnarray} as already reported in the introduction of this chapter. The mixed potential $V_A$ can be obtained by matching NRQCD to pNRQCD at order $r$ in the multipole expansion (we have in mind performing this matching for $V_A$ in pure dimensional regularization). At leading order in $\alpha_s$ we have to calculate the diagrams shown in figure \ref{figVALO}. They give the tree level result $V_A=1$. The first corrections to this result are given by diagrams like that of figure \ref{figVANLO}. We can clearly see that the first corrections are $\mathcal{O}(\alpha_s^2)$ \begin{figure} \centering \includegraphics{VALO.epsi} \caption[Leading order matching of $V_A$]{NRQCD diagrams for the leading order matching of $V_A$.
The solid lines represent the quark and the antiquark, the dashed line represents an $A_0$ gluon.}\label{figVALO} \end{figure} \begin{figure} \centering \includegraphics{VANLO.epsi} \caption[Next-to-leading order matching of $V_A$]{Sample NRQCD diagram for the next-to-leading order matching of $V_A$.}\label{figVANLO} \end{figure} \begin{equation} V_A=1+\mathcal{O}(\alpha_s^2) \end{equation} and therefore unimportant for us here. \subsection{$\mathcal{O}(\alpha_s)$ correction of the field strength correlator} The $\mathcal{O}(\alpha_s)$ correction to the QCD field strength correlator was calculated in \cite{Eidemuller:1997bb}. Let us review that result here and explain how we need to use it. The two-point field strength correlator \begin{equation} \mathcal{D}_{\mu\nu\lambda\omega}(z)\equiv\left<\mathrm{vac}\right\vert T\left\{G_{\mu\nu}^a(y)\mathcal{P}e^{gf^{abc}z^{\tau}\int_0^1d\sigma A_{\tau}^c(x+\sigma z)}G_{\lambda\omega}^b(x)\right\}\left\vert\mathrm{vac}\right> \end{equation} can be parametrized in terms of two scalar functions $\mathcal{D}(z^2)$ and $\mathcal{D}_1(z^2)$ according to \[ \mathcal{D}_{\mu\nu\lambda\omega}(z)=\left(g_{\mu\lambda}g_{\nu\omega}- g_{\mu\omega}g_{\nu\lambda}\right)\left(\mathcal{D}(z^2)+\mathcal{D}_1(z^2)\right)+ \] \begin{equation} +\left(g_{\mu\lambda}z_\nu z_\omega-g_{\mu\omega} z_\nu z_\lambda-g_{\nu\lambda} z_\mu z_\omega+g_{\nu\omega}z_\mu z_\lambda\right)\frac{\partial\mathcal{D}_1(z^2)}{\partial z^2} \end{equation} where $z=y-x$. In (\ref{GpNRQCD}) $x$ and $y$ differ only in the time component, so $z=t-t'$ for us.
Furthermore, we are interested in the chromoelectric components, so we need the contraction \begin{equation} \mathcal{D}_{i0i0}(z)=-(d-1)\left(\mathcal{D}(z^2)+\mathcal{D}_1(z^2)+z^2\frac{\partial\mathcal{D}_1(z^2)}{\partial z^2}\right) \end{equation} The tree level contribution is given by the diagram shown in figure \ref{figfscLO}\footnote{Note a slight change in notation in the diagrams with respect to \cite{Eidemuller:1997bb}. We represent the gluonic string by a double (not dashed) line and we always display it (also when it reduces to $\delta^{ab}$).}, the result is \begin{equation} \mathcal{D}_1^{(0)}(z^2)=\mu^{2\epsilon}(N_c^2-1)\frac{\Gamma(2-\epsilon)}{\pi^{2-\epsilon}z^{4-2\epsilon}}\qquad \mathcal{D}^{(0)}(z^2)=0 \end{equation} \begin{figure} \centering \includegraphics{EELO.epsi} \caption[Field strength correlator at leading order]{Leading order contribution to the field strength correlator. The gluonic string is represented by a double line.}\label{figfscLO} \end{figure} The next-to-leading ($\mathcal{O}(\alpha_{\rm s})$) contribution is given by the diagrams in figure \ref{figfscNLO}. Here we need the expression in $d$ dimensions. The $d$-dimensional result for the $\alpha_{\rm s}$ correction is \cite{pcJamin}\footnote{We are indebted to Matthias Jamin for sharing the $d$-dimensional results for the field strength correlator with us.} \begin{figure} \centering \includegraphics{EENLO} \caption[Field strength correlator at next-to-leading order]{Next-to-leading order contributions to the field strength correlator. The gluonic string is represented by a double line. The shaded blob represents the insertion of the 1-loop gluon self-energy. 
Symmetric graphs are understood for (c) and (d).}\label{figfscNLO} \end{figure} \begin{eqnarray} \mathcal{D}^{(1)}(z^2) & = & N_c(N_c^2-1)\frac{\alpha_{\rm s}}{\pi}\frac{\mu^{4\epsilon}}{4\pi^{2-2\epsilon}}\Gamma^2(1-\epsilon)\left(\frac{1}{z^2}\right)^{2-2\epsilon}g(\epsilon)\\ \mathcal{D}_1^{(1)}(z^2) & = & N_c(N_c^2-1)\frac{\alpha_{\rm s}}{\pi}\frac{\mu^{4\epsilon}}{4\pi^{2-2\epsilon}}\Gamma^2(1-\epsilon)\left(\frac{1}{z^2}\right)^{2-2\epsilon}g_1(\epsilon) \end{eqnarray} with \begin{eqnarray} g(\epsilon) & = & \frac{2 \epsilon ^3+2 (1-\epsilon) B(2 \epsilon -1,2 \epsilon -2) \epsilon ^2-6 \epsilon ^2+8 \epsilon -3}{\epsilon \left(2 \epsilon ^2-5 \epsilon +3\right)}\\ g_1(\epsilon) & = & \frac{-6 \epsilon ^3+17 \epsilon ^2-18 \epsilon +6}{\epsilon ^2 \left(2 \epsilon ^2-5 \epsilon +3\right)}+\frac{2 (1-\epsilon+\epsilon^2) B(2 \epsilon -1,2 \epsilon -2)+\frac{2 (1-\epsilon) n_f}{N_c}}{\epsilon (2 \epsilon -3)} \end{eqnarray} Since the external points, $x$ and $y$, are fixed in this calculation, the divergences we will encounter in $\mathcal{D}_{i0i0}$ (coming from the expressions above) should be canceled by the vertex and (gluon and octet field) propagator counterterms. The counterterm for the vertex is zero, since as we have seen in the previous subsection the first correction to $V_A$ is of order $\alpha_{\rm s}^2$. The counterterm for the gluon propagator is the usual one in QCD. The counterterm for the octet propagator coincides with the counterterm for the quark propagator in Heavy Quark Effective Theory but with the quark in the adjoint representation. We can represent the counterterm contributions by the diagrams of figure \ref{figctEE}. We have checked that when we compute $\mathcal{D}_{i0i0}$ the divergence coming from the first diagram in figure \ref{figfscNLO} is canceled by the counterterm of the gluon propagator. That diagram \ref{figfscNLO}b does not give a divergent contribution, as it should. 
We have also checked that, when we add the remaining diagrams, the divergence we obtain is exactly canceled by the counterterm of the octet propagator. \begin{figure} \centering \includegraphics{EEct.epsi} \caption[$\mathcal{O}(\alpha_{\rm s})$ counterterm diagrams for the chromoelectric correlator]{$\mathcal{O}(\alpha_{\rm s})$ counterterm diagrams for the chromoelectric correlator. The gluonic string (which comes from the octet propagator) is represented by a double line.}\label{figctEE} \end{figure} The contributions of the counterterms are given by \begin{eqnarray} \mathcal{D}^{\mathrm{c.t.}}(z^2) & = & 0\\ \mathcal{D}^{\mathrm{c.t.}}_1(z^2) & = & N_c(N_c^2-1)\frac{\alpha_{\rm s}}{\pi}\frac{\mu^{2\epsilon}}{4\pi^{2-\epsilon}}\Gamma(2-\epsilon)\frac{1}{z^{4-2\epsilon}}\left[\frac{-2}{\epsilon}+\frac{1}{\epsilon}\left(-\frac{5}{3}+\frac{4}{3}T_F\frac{n_f}{N_c}\right)\right] \end{eqnarray} where the first $1/\epsilon$ in the square bracket corresponds to the octet propagator and the second one to the gluon propagator. The total $d$-dimensional result (including the contributions from the counterterms) for the $\alpha_{\rm s}$ correction to the chromoelectric correlator is then \begin{equation}\label{EENLO} \mathcal{D}_{i0i0}^{(1)}=-(d-1)\left(\mathcal{D}^{(1)}(z^2)+(-1+2\epsilon)\mathcal{D}_1^{(1)}(z^2)+\mathcal{D}^{\mathrm{c.t.}}(z^2)+(-1+\epsilon)\mathcal{D}_1^{\mathrm{c.t.}}(z^2)\right) \end{equation} which no longer has $1/\epsilon$ poles. \subsection{Calculation of the fourth order logarithmic correction}\label{subseccal4o} The results of the two preceding subsections provide us with all the necessary ingredients to compute the next-to-leading IR dependence of the static potential. We have to evaluate the second term in the parenthesis of (\ref{GpNRQCD}) at next-to-leading order. Let us define \begin{equation} \mathcal{G}^{(r^2)}\equiv - {T_F\over N_c} V_A^2 (r) \int_{-T/2}^{T/2} \! dt \int_{-T/2}^{t} \!
dt^\prime \, e^{-i(t-t^\prime)(V_o-V_s)} \langle {\bf r}\cdot g{\bf E}^a(t) \phi^{\rm adj}_{ab}(t,t^\prime){\bf r} \cdot g{\bf E}^b(t^\prime)\rangle \end{equation} First we will consider the contribution we obtain when we insert the $\alpha_{\rm s}$ correction (\ref{EENLO}) to the field strength correlator. We just have to perform the integrations over $t$ and $t'$. To do that we change the integration variables to $t+t'$ and $t-t'\equiv t_-$. The integral over the sum just gives us a factor $T-t_-$. The $T$ term will give a contribution to the potential and the $t_-$ term a contribution to the normalization factor (which is unimportant for us here and will be omitted in the following). The remaining integral over $t_-$ can be done by using \begin{equation} \int_0^{\infty}dx\,x^ne^{-ax}=\frac{\Gamma(n+1)}{a^{n+1}} \end{equation} The result we obtain is then (in the $T\to\infty$ limit) \[ \mathcal{G}^{(r^2)}_{<EE>\vert_{\mathcal{O}(\alpha_{\rm s})}}=-iT\left(\frac{\alpha_s(\mu)}{\pi}\right)^2\alpha_s^3(1/r)\frac{N_c^2-1}{2}\frac{C_A^3}{8}\frac{1}{r}\cdot \] \begin{equation}\label{coEE} \cdot\left(\frac{A}{\epsilon^2}+\frac{B}{\epsilon}+C_1\log^2\frac{V_o-V_s}{\mu}+C_2\log\frac{V_o-V_s}{\mu}+const.\right) \end{equation} with \begin{eqnarray} A & = & \frac{1}{24} \left(\frac{2 n_f}{3N_c}-\frac{11}{3}\right)\nonumber\\ B & = & \frac{1}{108} \left(-\frac{5 n_f}{N_c}+6 \pi ^2+47\right)\nonumber\\ C_1 & = & \frac{1}{6} \left(-\frac{2 n_f}{3N_c}+\frac{11}{3}\right)\nonumber\\ C_2 & = & \frac{1}{54} \left(\frac{20 n_f}{N_c}-12 \pi ^2-149\right) \end{eqnarray} We get another contribution to $\mathcal{G}^{(r^2)}$ when we use the leading order expression for the chromoelectric correlator (we then arrive at (\ref{restl})) and insert the next-to-leading order correction to $V_o-V_s$.
This contribution is given by \begin{equation}\label{coprop} \mathcal{G}^{(r^2)}_{V_o-V_s\vert_{\mathcal{O}(\alpha_{\rm s})}}=-iT\frac{\alpha_{\rm s}(\mu)}{\pi}\frac{\alpha_{\rm s}^4(1/r)}{4\pi}\frac{N_c^2-1}{2N_c}\frac{C_A^3}{8}\frac{1}{r}\left(a_1+2\gamma_E\beta_0\right)\left(\frac{1}{\epsilon}-\log\left(\frac{V_o-V_s}{\mu}\right)^2+const.\right) \end{equation} The ultraviolet divergences we encounter in expressions (\ref{coEE}) and (\ref{coprop}) can again be re-absorbed by a renormalization of the potential. Finally we get another contribution that comes from changing $\alpha_{\rm s}(\mu)$ to $\alpha_{\rm s}(1/r)$ in equation (\ref{restl2}), after renormalization (we want all the $\alpha_{\rm s}$ in the potential evaluated at the scale $1/r$). It is given by \begin{equation} \mathcal{G}^{(r^2)}_{\mu\to 1/r}=-iT\frac{\alpha_{\rm s}^5(1/r)C_A^3\beta_0}{48\pi^2}\frac{N_c^2-1}{2N_c}2\log(r\mu)\log\left(\frac{V_o-V_s}{\mu}\right) \end{equation} We see that the $\log^2((V_o-V_s)/\mu)$ and the $\log(r\mu)\log((V_o-V_s)/\mu)$ terms appear with the right coefficients to form, together with the double IR logarithms that would come from the NRQCD calculation of the Wilson loop, an IR cut-off independent quantity for the matching coefficient (as it should be). Moreover, the coefficient for the double logarithm we have obtained here (which, recall, came from the correction to the gluonic correlator) coincides with what one obtains expanding the renormalization group improved static potential of \cite{Pineda:2000gz}. These two facts are checks of our calculation. We have therefore obtained the $\alpha_{\rm s}^5\log r\mu$ (and $\alpha_{\rm s}^5\log^2r\mu$) terms of the singlet static potential\footnote{\emph{Note added}: It is understood that (when renormalizing the potential) we have used the scheme where $1/\varepsilon-\gamma_E+\log\pi$ is subtracted. This has been implemented by redefining $\mu^2\to\mu^2e^{\gamma_E}/\pi$ where applicable.
Therefore one of the $\alpha_{\rm s}$ in the third order correction in (\ref{pot}) is understood to be in this scheme, whereas the remaining $\alpha_{\rm s}$ are understood to be in the $\overline{MS}$ scheme. We have also chosen the scheme where only the $\log r\mu$ terms that compensate an infrared $\log((V_o-V_s)/\mu)$ are displayed in the potential.}. \[ V_s^{(0)}(r)=(\mathrm{Eq.}\ref{pot})- \] \begin{equation} -\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^4\frac{16\pi^2}{3}C_A^3\left(-\frac{11}{3}C_A+\frac{2}{3}n_f\right)\log^2r\mu- \end{equation} \begin{equation} -\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^4 16\pi^2 C_A^3\left(a_1+2\gamma_E\beta_0-\frac{1}{27}\left(20 n_f-C_A(12 \pi ^2+149)\right)\right)\log r\mu \end{equation} \section{Discussion} In view of the possible future construction of an $e^+e^-$ linear collider (the aim of which will be the study of the possible new particles the LHC will discover), much theoretical effort is being put into the calculation of $t\bar{t}$ production near threshold. The complete second order (N$^2$LO) corrections have already been computed. The second order renormalization group improved expressions (N$^2$LL) are under study (several contributions are already known) \cite{Hoang:2001mm,Pineda:2001et,Hoang:2002yy,Hoang:2003ns,Penin:2004ay}. Given the extremely good precision that such a new machine could achieve, the third order corrections are also needed. These third order corrections (N$^3$LO terms) are being computed at present by several different groups. This is a gigantic project that requires the use of state-of-the-art calculational and computational techniques \cite{Penin:2002zv,Beneke:2005hg,Penin:2005eu,Eiras:2005yt}. Once those third order corrections are completed, the corresponding third order renormalization group improved expressions (N$^3$LL) will also be needed (to achieve the desired theoretical precision in the calculation).
Let us just mention that the results presented in this chapter will be a piece of these N$^3$LL computations. \chapter{Radiative heavy quarkonium decays}\label{chapraddec} In this chapter we study the semi-inclusive radiative decay of heavy quarkonium to light hadrons from an effective field theory point of view. As we will see below, the correct treatment of the upper end-point region of the spectrum requires the combined use of Non-Relativistic QCD and Soft-Collinear Effective Theory. When these two effective theories are consistently combined, a very good description of the experimental data is achieved. The photon spectrum can then be used to uncover some properties of the decaying quarkonia. The contents of this chapter are based on work published in \cite{GarciaiTormo:2004jw,GarciaiTormo:2004kb,GarciaiTormo:2005ch,GarciaiTormo:2005bs}, although some comparisons with new data (not available when some of the preceding articles were published) are also presented. \section{Introduction} Although we will focus on the study of the semi-inclusive radiative decay, the exclusive radiative decays have also been addressed in an effective field theory approach. Exclusive decays will be commented on very briefly in subsection \ref{subsecexcl}. \subsection{Semi-inclusive radiative decays} Semi-inclusive radiative decays of heavy quarkonium systems (see \cite{Brambilla:2004wf} for a review) to light hadrons have been a subject of investigation since the early days of QCD \cite{Brodsky:1977du,Koller:1978qg}. In these references, the decay of the heavy quarkonium state to $gg\gamma$ (and to $ggg$) is treated in lowest order QCD, in analogy with the QED decays of orthopositronium to three photons. This lowest order QCD calculation predicted a basically linear rise of the photon spectrum with $z$ ($z$ being the fraction of the maximum energy the photon may have).
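This lowest order shape can be made concrete with a short numerical sketch. The function below is the Ore--Powell-type spectrum for a $^3S_1$ state decaying into a photon plus two massless vector particles; at lowest order the $\gamma gg$ photon spectrum has this functional form, with the color factors entering only through the overall normalization (dropped here). The expression is quoted from the positronium literature rather than from this chapter, so it should be taken as an illustration of the quoted behavior, not as part of the calculation.

```python
import math

def ore_powell(x):
    """Lowest-order photon spectrum shape (Ore-Powell form) for a 3S1 state
    decaying to a photon plus two massless vectors; x is the photon energy
    fraction z. The overall normalization is dropped."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return ((2 - x) / x
            + x * (1 - x) / (2 - x) ** 2
            + 2 * math.log(1 - x) * ((1 - x) / x ** 2
                                     - (1 - x) ** 2 / (2 - x) ** 3))

# The spectrum vanishes as x -> 0 and rises (roughly linearly) toward x = 1,
# which is the behavior described in the text.
xs = [0.1 * i for i in range(1, 10)]
vals = [ore_powell(x) for x in xs]
```

Evaluating it on a grid confirms the monotonic, approximately linear rise toward the end-point that the early data failed to reproduce.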
The angular distribution has also been studied in \cite{Koller:1978qg}; it should be mentioned that this angular distribution is still assumed to be correct, and it is used for the comparison of the experimental results with theory and the subsequent extraction of QCD parameters\footnote{The recent data in \cite{Besson:2005jv} has allowed, for the first time, a check of this prediction for the angular distribution. There it is found that the data agrees adequately with \cite{Koller:1978qg}.}. The upper end-point region of the photon spectrum (which was measured by several later experiments \cite{Csorna:1985iv,Bizzeti:1991ze,Albrecht:1987hz}) appeared to be poorly described by this linear rise; a much softer spectrum, with a peak at about $z\sim 0.6-0.7$, was observed instead. A subsequent resummation of the leading Sudakov ($\log (1-z)$) logarithms \cite{Photiadis:1985hn} and a calculation of the leading relativistic corrections \cite{Keung:1982jb} (see also \cite{Yusuf:1996av}), although they produced a softening of the spectrum in the upper end-point (namely $z\to 1$) region, were not able to reproduce the observed spectrum either. Instead, the data was well described by the model in ref. \cite{Field:1983cy}, where a parton-shower Monte Carlo technique was used to incorporate the effects of gluon radiation by the outgoing gluons in the decay. This led some authors to claim that a non-vanishing gluon mass was necessary in order to describe the data \cite{Consoli:1993ew}. With the advent of Non-Relativistic QCD (NRQCD) \cite{Bodwin:1994jh}, these decays could be analyzed in a framework where short distance effects, at the scale of the heavy quark mass $m$ or larger, could be separated in a systematic manner \cite{Maltoni:1998nh}. These short distance effects are calculated perturbatively in $\alpha_{\rm s} (m)$ and encoded in matching coefficients, whereas long distance effects are parameterized by matrix elements of local NRQCD operators.
Even within this framework, a finite gluon mass seemed to be necessary to describe the data \cite{Consoli:1997ts}. However, at about the same time it was pointed out that in the upper end-point region the NRQCD factorization approach breaks down and shape functions, namely matrix elements of non-local operators, rather than NRQCD matrix elements, must be introduced \cite{Rothstein:1997ac}. Early attempts at modeling color octet shape functions produced results in complete disagreement with the data \cite{Wolf:2000pm} (as shown in figure \ref{figWolf}), and hence later authors did not include them in their phenomenological analyses. \begin{figure} \centering \hspace{-2.5cm}\includegraphics[width=14cm]{wolf.eps} \caption[Early attempts at modeling the color octet shape functions]{Comparison of the spectrum obtained by the modeling of the octet shape functions with data. The dot-dashed line is the (leading order) color singlet contribution alone (color singlet model). The dashed line is the direct octet contribution (where the shape functions enter). The solid line is the total result. We can clearly see that the color singlet model alone is unable to reproduce the data and that the modeling of the octet shape function has produced a result in complete disagreement with the data. $\Lambda$ is a parameter in the model for the shape functions. Plot from \texttt{hep-ph/0010217} \cite{Wolf:2000pm}.}\label{figWolf} \end{figure} Notwithstanding, this upper end-point region has received considerable attention lately, as it was recognized that Soft-Collinear Effective Theory (SCET) \cite{Bauer:2000ew,Bauer:2000yr} may help in organizing the calculation and in performing resummations of large (Sudakov) logs \cite{Bauer:2001rh,Fleming:2002rv,Fleming:2002sr,Fleming:2004rk}.
In fact, the early resummation of Sudakov logarithms \cite{Photiadis:1985hn} has been recently corrected \cite{Fleming:2004rk} within this framework, and statements about the absence of Sudakov suppression in the color singlet channel \cite{Hautmann:2001yz} have been clarified \cite{Fleming:2002sr}. These SCET calculations will be explained in the following sections. For the $\Upsilon (1S)$ state, the bound state dynamics is amenable to a weak coupling analysis, at least as far as the soft scale ($mv$, $v\sim \alpha_{\rm s} (mv) \ll 1$, the typical velocity of the heavy quark in the quarkonium rest frame) is concerned \cite{Titard:1993nn,Titard:1994id,Titard:1994ry,Pineda:1997hz,Pineda:1998ja,Pineda:2001zq,Brambilla:2001fw,Brambilla:2001qk,Recksiegel:2002za,Kniehl:2002br,Penin:2002zv,Kniehl:2003ap,Kniehl:2002yv,Kniehl:1999mx}. These calculations can most conveniently be done in the framework of potential NRQCD (pNRQCD), a further effective theory where the contributions due to the soft and ultrasoft ($\sim mv^2$) scales are factorized \cite{Pineda:1997bj,Kniehl:1999ud,Brambilla:1999xf} (see section \ref{secpNRQCD}). The color octet shape functions can then be calculated combining pNRQCD and SCET. This calculation of the octet shape functions in the weak coupling regime will be the subject of subsection \ref{subseccalshpfct}. In parallel to all this, shortly after \cite{Bodwin:1994jh}, it was pointed out in \cite{Catani:1994iz} that a parametrically leading contribution had been ignored so far. This was the contribution where the photon is emitted from the decay products of the heavy quark (light quarks), and not directly from the heavy quark itself (remember that we are always dealing with prompt photons, that is, photons that do not come from hadronic decays). These types of contributions, called fragmentation contributions, completely dominate the spectrum in the lower end-point (namely $z\to 0$) region.
At first it was thought that only the gluon to photon fragmentation function appeared in the process; so the radiative decays seemed a good place to determine this (yet unknown) gluon to photon fragmentation function; but a subsequent investigation \cite{Maltoni:1998nh} showed that this was not the case. When considering also the color octet contributions, the quark to photon fragmentation function also appeared, and the two contributions cannot be disentangled. When all the known contributions to the photon spectrum are taken into account and are consistently combined, a very good description of the data is now achieved (with no further need for the introduction of a finite gluon mass). This will be explained in detail in section \ref{secmerg}. \subsection{Exclusive radiative decays}\label{subsecexcl} Exclusive radiative decays of heavy quarkonium have been analyzed in an effective field theory framework in \cite{Fleming:2004hc}. A combination of NRQCD and SCET is also needed in this case. However, since in this case we are dealing with an exclusive process, the existence (and effects) of two different collinear scales have to be taken into account. Moreover, the fact that the hadronic final states must be composed of collinear fields in a color singlet configuration means that only color singlet contributions, and not color octet ones, enter the NRQCD-SCET analysis at leading order (in contrast with the situation in the inclusive case, as we will see below). In this case the final result of this effective theory analysis agrees with the leading-twist order of previously known results \cite{Baier:1985wv,Ma:2001tt}. \phantom{} We will now move to the study of the semi-inclusive radiative decays, starting in the next section.
\section{Effective Field Theory approach to the upper end-point region} The NRQCD framework organizes the radiative decay in the following factorized form \begin{equation}\label{NRQCDfact} \frac{d\Gamma}{dz}=\sum_{i}C_i\left(M,z\right)\left<\mathrm{H}\right|\mathcal{O}_i\left|\mathrm{H}\right> \end{equation} where H represents a generic heavy quarkonium state and $M$ represents its mass. In that formula the $C_i$ are the hard matching coefficients, which incorporate short distance effects, and the $\left<\mathrm{H}\right|\mathcal{O}_i\left|\mathrm{H}\right>$ are the NRQCD matrix elements, which parameterize the long distance effects. However, as was already mentioned in the previous section, in the upper end-point region of the photon spectrum the standard NRQCD factorization is not applicable \cite{Rothstein:1997ac}. This is due to the fact that small scales induced by the kinematics enter the problem and have an interplay with the bound state dynamics. In order to study this region, one has to take into account collinear degrees of freedom in addition to those of NRQCD. This can be done using SCET, as described in \cite{Bauer:2001rh,Fleming:2002sr}. Using the SCET framework, the decay rate has been expressed in the factorized form \cite{Fleming:2002sr} \begin{equation}\label{SCETfact} \frac{d\Gamma}{dz}=\sum_{\omega}H(M,\omega,\mu)\int dk^+S(k^+,\mu)\mathrm{Im}J_{\omega}(k^++M(1-z),\mu) \end{equation} where $H$ encodes the short distance effects, $J$ is the so-called jet function, which incorporates effects at the collinear scale, and $S$ are the ultrasoft shape functions. Using the combined counting in SCET (counting in $\lambda\sim\sqrt{\Lambda_{QCD}/(2m)}$) plus NRQCD (counting in $v$), one can see that we have color singlet and color octet operators contributing at the same order.
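To illustrate the structure of (\ref{SCETfact}), a hard factor multiplying the convolution of an ultrasoft shape function with the imaginary part of a jet function, here is a toy numerical sketch. The Gaussian shape function and the Breit--Wigner-like spectral function below are placeholders chosen only to exhibit the smearing of the end-point; they are not the actual EFT functions, and all the numbers are illustrative.

```python
import math

M = 9.46       # GeV, Upsilon-like mass (illustrative input only)
LAMB = 0.4     # GeV, toy ultrasoft scale setting the width of the shape function

def shape_fn(kp):
    """Toy ultrasoft shape function S(k+): a placeholder half-Gaussian."""
    return math.exp(-(kp / LAMB) ** 2) / (LAMB * math.sqrt(math.pi))

def im_jet(p2):
    """Toy Im J as a function of p^2 = M (k+ + M(1-z)): a placeholder
    Breit-Wigner-like spectral function peaked at small p^2."""
    gamma = 1.0  # GeV^2, toy width
    return gamma / (p2 * p2 + gamma * gamma)

def rate(z, n=2000, kmax=4.0):
    # one channel of the factorized rate with the hard factor set to 1:
    # integral over k+ of S(k+) * Im J(M (k+ + M(1-z)))
    dk = kmax / n
    total = 0.0
    for i in range(n):
        kp = (i + 0.5) * dk
        total += shape_fn(kp) * im_jet(M * (kp + M * (1 - z))) * dk
    return total
```

The smearing by $S$ redistributes the rate over a region of width set by the ultrasoft scale around the partonic end-point, which is the qualitative effect the shape functions encode.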
More concretely \cite{Fleming:2004hc,Lee:2005gj}, at $\mathcal{O}(\lambda)$ in the SCET counting we have the $\phantom{}^1S_0$ and $\phantom{}^3P_J$ octet operators \begin{equation} \sum_iC_i^{(8,\phantom{}^1S_0)}\Gamma^i_{\alpha\mu}\chi^{\dagger}_{-\mathbf{p}}B_{\perp}^{\alpha}\psi_{\mathbf{p}} \end{equation} \begin{equation} \sum_i C^{(8,\phantom{}^3P_J)}_i \Gamma^i_{\alpha \mu \sigma \delta} \chi^{\dagger}_{-{\bf p}} B^\alpha_\perp \Lambda \cdot \frac{{\bf p}^\sigma}{2m} \Lambda \cdot {\mbox{\boldmath $\sigma$}}^\delta \psi_{\bf p} \end{equation} When considering also the $v$ counting (and taking into account the overlap with the $\phantom{}^3S_1$ quarkonium state), both of these operators become $\mathcal{O}(v^5\lambda)$. Their matching coefficients start at order $\sqrt{\alpha_{\rm s}(\mu_h)}$. At $\mathcal{O}(\lambda^2)$ in the SCET counting and with a matching coefficient starting at order $\alpha_{\rm s}(\mu_h)$, we have the color singlet operator \begin{equation} \sum_i \Gamma^i_{\alpha \beta \delta \mu} \chi^\dagger_{-{\bf p}} \Lambda\cdot{\mbox{\boldmath $\sigma$}}^\delta \psi_{\bf p} {\rm Tr} \big\{ B^\alpha_\perp \, C^{(1,{}^3S_1)}_i \, B^\beta_\perp \big\} \end{equation} When considering also the $v$ counting, this operator becomes $\mathcal{O}(v^3\lambda^2)$. The octet-to-singlet ratio (considering $\lambda\sim v$) then becomes $\frac{v}{\sqrt{\alpha_{\rm s}(\mu_h)}}$; hence the color octet contributions become as important as the color singlet ones if we count $\alpha_{\rm s}(\mu_h)\sim v^2\sim 1-z$. Before going on and explaining the calculations in the Effective Field Theory (EFT) approach, let us comment on the relation of these EFT calculations to the phenomenologically very successful model of ref. \cite{Field:1983cy}. Recall that, as mentioned in the introduction of this chapter, in that reference a parton-shower Monte Carlo technique was used to incorporate the effects of gluon radiation by the outgoing gluons in the decay.
When we are in the end-point region of the spectrum, the EFT analysis tells us that the decay is organized according to eq. (\ref{SCETfact}); then (as was already explained in ref. \cite{Fleming:2002sr}) the approach in \cite{Field:1983cy} is equivalent to considering that the collinear scale is non-perturbative and introducing a model with a gluon mass for the jet function $J$. When we are away from the upper end-point region, the EFT approach tells us that the decay is organized according to (\ref{NRQCDfact}). Then the effects of the gluon radiation modeled in \cite{Field:1983cy} should be incorporated in higher order NRQCD local matrix elements (this interpretation is consistent with the fact that the analysis in \cite{Field:1983cy} produced a not very large correction to the spectrum for all $z$, except in the upper end-point region, where the effect becomes $\mathcal{O}(1)$); in any case there is no justification, from an EFT point of view, for taking into account only the subset of corrections that are incorporated in \cite{Field:1983cy} and not other ones, which in principle could contribute with equal importance. \subsection{Resummation of the color singlet contributions}\label{subsecressing} The resummation of the Sudakov logarithms in the color singlet channel has been performed in refs. \cite{Fleming:2002rv,Fleming:2002sr} and \cite{Fleming:2004rk} (where the (small) effect of the operator mixing was taken into account). It is found that all the logarithms come from collinear physics, that is, only collinear gluons appear in the diagrams for the running of the singlet operator. The resummed rate is given by \[ \frac{1}{\Gamma_0}\frac{d\Gamma^{e}_{CS}}{dz} = \Theta(M-2mz) \frac{8z}9 \sum_{n \rm{\ odd}} \left\{\frac{1}{f_{5/2}^{(n)}} \left[ \gamma_+^{(n)} r(\mu_c)^{2 \lambda^{(n)}_+ / \beta_0} - \gamma_-^{(n)} r(\mu_c)^{2 \lambda^{(n)}_- / \beta_0} \right]^2+\right.
\] \begin{equation}\label{singres} \left.+ \frac{3 f_{3/2}^{(n)}}{8[f_{5/2}^{(n)}]^2}\frac{{\gamma^{(n)}_{gq}}^2}{\Delta^2} \left[ r(\mu_c)^{2 \lambda^{(n)}_+ / \beta_0} - r(\mu_c)^{2 \lambda^{(n)}_- / \beta_0} \right]^2\right\} \end{equation} where \begin{equation} f_{5/2}^{(n)} = \frac{n(n+1)(n+2)(n+3)}{9(n+3/2)}\quad;\quad f_{3/2}^{(n)} = \frac{(n+1)(n+2)}{n+3/2} \end{equation} \begin{equation} r(\mu) = \frac{\alpha_s(\mu)}{\alpha_s(2m)} \end{equation} \begin{equation} \gamma_\pm^{(n)} = \frac{\gamma_{gg}^{(n)} - \lambda^{(n)}_\mp}{\Delta}\quad;\quad \lambda^{(n)}_\pm = \frac{1}{2} \big[ \gamma^{(n)}_{gg} + \gamma^{(n)}_{q\bar{q}} \pm \Delta \big]\quad;\quad\Delta = \sqrt{ (\gamma^{(n)}_{gg} - \gamma^{(n)}_{q\bar{q}})^2 + 4 \gamma^{(n)}_{gq} \gamma^{(n)}_{qg} } \end{equation} \begin{eqnarray} \gamma^{(n)}_{q\bar{q}} &=& C_f \bigg[ \frac{1}{(n+1)(n+2)} -\frac{1}{2} - 2 \sum^{n+1}_{i=2} \frac{1}{i} \bigg]\,\nonumber \\ \gamma^{(n)}_{gq} &=& \frac{1}{3}C_f \frac{n^2 + 3n +4}{(n+1)(n+2)}\, \nonumber \\ \gamma^{(n)}_{qg} &=& 3 n_f \frac{n^2 + 3n +4}{n(n+1)(n+2)(n+3)}\, \nonumber \\ \gamma^{(n)}_{gg} &=& C_A \bigg[ \frac{2}{n(n+1)} + \frac{2}{(n+2)(n+3)}- \frac{1}{6} - 2 \sum^{n+1}_{i = 2} \frac{1}{i} \bigg] - \frac{1}{3}n_f \end{eqnarray} This result corrects a previous calculation \cite{Photiadis:1985hn} performed several years ago. \subsection{Resummation of the color octet contributions}\label{subsecresoc} The resummation of the Sudakov logarithms in the color octet channel was performed in ref. \cite{Bauer:2001rh}. In contrast with the color singlet channel, both ultrasoft and collinear gluons contribute to the running of the octet operators. 
The expression for the resummed Wilson coefficients is \begin{equation}\label{ocres} C(x-z)=-\frac{d}{dz} \left\{ \theta(x-z) \; \frac{\exp [ \ell g_1[\alpha_s \beta_0 \ell/(4\pi)] + g_2[\alpha_s \beta_0 \ell/(4\pi)]]}{\Gamma[1-g_1[\alpha_s \beta_0 \ell/(4\pi)] - \alpha_s \beta_0 \ell/(4\pi) g_1^\prime[\alpha_s \beta_0 \ell/(4\pi)]]}\right\} \end{equation} where \begin{equation} \ell\approx-\log(x-z) \end{equation} \begin{eqnarray} g_1(\chi) &=& -\frac{2 \Gamma^{\rm adj}_1}{\beta_0\chi}\left[(1-2\chi)\log(1-2\chi) -2(1-\chi)\log(1-\chi)\right] \nonumber \\ g_2(\chi) &=& -\frac{8 \Gamma^{\rm adj}_2}{\beta_0^2} \left[-\log(1-2\chi)+2\log(1-\chi)\right] \nonumber\\ && - \frac{2\Gamma^{\rm adj}_1\beta_1}{\beta_0^3} \left[\log(1-2\chi)-2\log(1-\chi) +\frac12\log^2(1-2\chi)-\log^2(1-\chi)\right] \nonumber\\ &&+\frac{4\gamma_1}{\beta_0} \log(1-\chi) + \frac{2B_1}{\beta_0} \log(1-2\chi) \nonumber\\ &&-\frac{4\Gamma^{\rm adj}_1}{\beta_0}\log n_0 \left[\log(1-2\chi)-\log(1-\chi)\right] \end{eqnarray} \begin{equation} \Gamma^{\rm adj}_1 = C_A \quad ; \Gamma^{\rm adj}_2 = C_A \left[ C_A \left( \frac{67}{36} - \frac{\pi^2}{12} \right) - \frac{5n_f}{18} \right] \quad; B_1 = -C_A\quad; \gamma_1 = -\frac{\beta_0}{4}\quad; n_0=e^{-\gamma_E} \end{equation} \subsection{Calculation of the octet shape functions in the weak coupling regime}\label{subseccalshpfct} In this subsection we will explain in detail the calculation of the octet shape functions. We will start by rewriting some of the expressions in the preceding sections in the pNRQCD language, which is the convenient language for the subsequent calculation of the shape functions. 
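As a quick numerical check of the resummed coefficient (\ref{ocres}), the exponent functions $g_1$ and $g_2$ can be coded directly from the expressions quoted above. The sketch below uses the standard one- and two-loop $\beta$-function coefficients with an assumed $n_f=4$ (an illustrative choice); both functions vanish as $\chi\to 0$, so the resummation switches off smoothly when the logarithm is small, and $\chi<1/2$ is required to stay away from the Landau singularity.

```python
import math

CA, CF, NF = 3.0, 4.0 / 3.0, 4.0            # assumption: n_f = 4 light flavors
BETA0 = 11.0 / 3.0 * CA - 2.0 / 3.0 * NF
BETA1 = 34.0 / 3.0 * CA ** 2 - (10.0 / 3.0 * CA + 2.0 * CF) * NF  # standard 2-loop value

GAMMA1 = CA                                  # Gamma_1^adj from the text
GAMMA2 = CA * (CA * (67.0 / 36.0 - math.pi ** 2 / 12.0) - 5.0 * NF / 18.0)
B1 = -CA
LGAMMA1 = -BETA0 / 4.0                       # gamma_1 in the text
N0 = math.exp(-0.5772156649015329)           # n_0 = e^{-gamma_E}

def g1(chi):
    # valid for 0 < chi < 1/2
    return -(2.0 * GAMMA1 / (BETA0 * chi)) * (
        (1 - 2 * chi) * math.log(1 - 2 * chi) - 2 * (1 - chi) * math.log(1 - chi))

def g2(chi):
    l1, l2 = math.log(1 - chi), math.log(1 - 2 * chi)
    return (-(8.0 * GAMMA2 / BETA0 ** 2) * (-l2 + 2 * l1)
            - (2.0 * GAMMA1 * BETA1 / BETA0 ** 3) * (l2 - 2 * l1 + 0.5 * l2 ** 2 - l1 ** 2)
            + (4.0 * LGAMMA1 / BETA0) * l1 + (2.0 * B1 / BETA0) * l2
            - (4.0 * GAMMA1 / BETA0) * math.log(N0) * (l2 - l1))
```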
We begin from the formula given in \cite{Fleming:2002sr} \begin{equation}\label{QCDexpr} {d \Gamma\over dz}=z{M\over 16\pi^2} {\rm Im} T(z)\quad \quad T(z)=-i\int d^4 x e^{-iq\cdot x}\left< V_Q(nS)\vert T\{ J_{\mu} (x) J_{\nu} (0)\} \vert V_Q(nS)\right> \eta^{\mu\nu}_{\perp} \end{equation} where $J_{\mu} (x)$ is the electromagnetic current for heavy quarks in QCD and we have restricted ourselves to $\phantom{}^3S_1$ states. The formula above holds for states fulfilling relativistic normalization. In the case that non-relativistic normalization is used, as we shall do below, the right hand side of either the first or the second formula in (\ref{QCDexpr}) must be multiplied by $2M$. In the end-point region the photon momentum (in light cone coordinates, $q_\pm=q^0\pm q^3$) in the rest frame of the heavy quarkonium is $q=\left(q_{+},q_{-}, q_{\perp}\right)=(zM,0,0)$ with $z \sim 1$ ($M\sqrt{1-z} \ll M$). This, together with the fact that the heavy quarkonium is a non-relativistic system, fixes the relevant kinematic situation. It is precisely in this situation that the standard NRQCD factorization (operator product expansion) breaks down \cite{Rothstein:1997ac}. The quark (antiquark) momentum in the $Q\bar Q$ rest frame can be written as $p=(p_0, {\bf p})$, $p_0=m+l_0, {\bf p}={\bf l}$; $l_0 , {\bf l} \ll m$. Momentum conservation implies that if a few gluons are produced in the short distance annihilation process, at least one of them has momentum $r=( r_{+},r_{-}, r_{\perp})$, $r_{-} \sim M/2$ ; $r_{+}, r_{\perp}\ll M$, which we will call collinear. At short distances, the emission of hard gluons is penalized by $\alpha_{\rm s} (m)$ and the emission of softer ones by powers of the soft scale over $M$. Hence, the leading contribution at short distances consists of the emission of a single collinear gluon.
This implies that the $Q\bar Q$ pair must be in a color octet configuration, which means that the full process will have an extra long distance suppression related to the emission of (ultra)soft gluons. The next-to-leading contribution at short distances already allows for a singlet $Q\bar Q$ configuration. Hence, the relative weight of color-singlet and color-octet configurations depends not only on $z$ but also on the bound state dynamics, and it is difficult to establish a priori. In order to do so, it is advisable to implement the constraints above by introducing suitable EFTs. In a first stage we need NRQCD \cite{Bodwin:1994jh}, which factors out the scale $m$ in the $Q\bar Q$ system, supplemented with collinear gluons, namely gluons for which the scale $m$ has been factored out from the components $r_{+}, r_{\perp} $ (but is still active in the component $r_{-}$). For the purposes of this section it is enough to take for the Lagrangian of the collinear gluons the full QCD Lagrangian and enforce $r_{+}, r_{\perp} \ll m$ when necessary. \subsubsection{Matching QCD to NRQCD+SCET}\label{subsubsecmQCD} At tree level, the electromagnetic current in (\ref{QCDexpr}) can be matched to the following currents in this EFT \cite{Fleming:2002sr} \begin{equation}\label{effcurr} J_{\mu} (x)= e^{-i2mx_0}\left( \Gamma_{\alpha\beta i\mu}^{(1,\phantom{}^3S_1)}J^{i\alpha\beta}_{(1,\phantom{}^3S_1)} (x) +\Gamma_{\alpha\mu}^{(8,\phantom{}^1S_0)}J^{\alpha}_{(8,\phantom{}^1S_0)} (x)+ \Gamma_{\alpha\mu ij}^{(8,\phantom{}^3P_J)}J^{\alpha ij}_{(8,\phantom{}^3P_J)} (x) + \dots \right) + h.c. 
\end{equation} \begin{eqnarray} & \Gamma_{\alpha\beta i\mu}^{(1,\phantom{}^3S_1)}={g_s^2 e e_Q\over 3 m^2}\eta^{\perp}_{\alpha\beta}\eta_{\mu i} &J^{i\alpha\beta}_{(1,\phantom{}^3S_1)} (x)= \chi^{\dagger}\mbox{\boldmath $\sigma$}^{i} \psi Tr\{ B^{\alpha}_{\perp} B^{\beta}_{\perp}\}(x) \cr & & \cr&\Gamma_{\alpha\mu}^{(8,\phantom{}^1S_0)}= {g_s e e_Q\over m} \epsilon_{\alpha\mu}^{\perp}&J^{\alpha}_{(8,\phantom{}^1S_0)} (x)= \chi^{\dagger}B^{\alpha}_{\perp} \psi (x)\cr & & \cr &\Gamma_{\alpha\mu ij}^{(8,\phantom{}^3P_J)}= {g_s e e_Q\over m^2}\left(\eta_{\alpha j}^{\perp} \eta_{\mu i}^{\perp} +\eta_{\alpha i}^{\perp} \eta_{\mu j}^{\perp}-\eta_{\alpha\mu}^{\perp} n^{j} n^{i}\right) & J^{\alpha i j}_{(8,\phantom{}^3P_J)} (x)= -i \chi^{\dagger}B^{\alpha}_{\perp} \mbox{\boldmath $\nabla$}^{i}\mbox{\boldmath $\sigma$}^{j}\psi (x) \end{eqnarray} where $n=(n_+,n_-,n_{\perp})=(1,0,0)$ and $\epsilon_{\alpha\mu}^{\perp}=\epsilon_{\alpha\mu\rho 0}n^\rho $. These effective currents can be identified with the leading order in $\alpha_{\rm s}$ of the currents introduced in \cite{Fleming:2002sr} (which has already appeared at the beginning of this section). We use both Latin ($1$ to $3$) and Greek ($0$ to $3$) indices, $B^{\alpha}_{\perp}$ is a single collinear gluon field here, and $ee_Q$ is the charge of the heavy quark. Note, however, that in order to arrive at (\ref{effcurr}) one need not specify the scaling of collinear fields as $M(\lambda^2, 1, \lambda)$ but only the cut-offs mentioned above, namely $r_{+}, r_{\perp}\ll M$. Even though the $P$-wave octet piece appears to be $1/m$ suppressed with respect to the $S$-wave octet piece, it will eventually give rise to contributions of the same order once the bound state effects are taken into account. 
This is due to the fact that the $^3S_1$ initial state needs a chromomagnetic transition to become an octet $^1S_0$, which is $\alpha_{\rm s}$ suppressed with respect to the chromoelectric transition required to become an octet $^3P_J$. $T(z)$ can then be written as \begin{eqnarray}\label{T} T(z)&=&H^{(1,\phantom{}^3S_1)}_{ii'\alpha\alpha'\beta\beta'}T_{(1,\phantom{}^3S_1)}^{ii'\alpha\alpha'\beta\beta'}+H^{(8,\phantom{}^1S_0)}_{\alpha\alpha^{\prime}}T_{(8,\phantom{}^1S_0)}^{\alpha\alpha^{\prime}}+ H^{(8,\phantom{}^3P_J)}_{\alpha ij\alpha^{\prime}i'j'}T_{(8,\phantom{}^3P_J)}^{\alpha ij\alpha^{\prime}i'j'} +\cdots \end{eqnarray} where \begin{eqnarray} H^{(1,\phantom{}^3S_1)}_{ii'\alpha\alpha'\beta\beta'}&=& \eta_{\perp}^{\mu\nu}\Gamma_{\alpha\beta i\mu}^{(1,\phantom{}^3S_1)}\Gamma_{\alpha'\beta'i'\nu}^{(1,\phantom{}^3S_1)}\nonumber\\ H^{(8,\phantom{}^1S_0)}_{\alpha\alpha^{\prime}}&=& \eta_{\perp}^{\mu\nu}\Gamma_{\alpha\mu}^{(8,\phantom{}^1S_0)}\Gamma_{\alpha'\nu}^{(8,\phantom{}^1S_0)}\nonumber\\ H^{(8,\phantom{}^3P_J)}_{\alpha ij\alpha^{\prime}i'j'}&=& \eta_{\perp}^{\mu\nu}\Gamma_{\alpha\mu ij}^{(8,\phantom{}^3P_J)}\Gamma_{\alpha'\nu i'j'}^{(8,\phantom{}^3P_J)} \end{eqnarray} and \begin{eqnarray} T_{(1,\phantom{}^3 S_1)}^{ii'\alpha\alpha'\beta\beta'}(z)&= &-i\int d^4 x e^{-iq\cdot x-2mx_0}\left<V_Q(nS)\vert T\{ {J^{i\alpha\beta}_{(\phantom{}1,^3 S_1)} (x)}^{\dagger} J^{i'\alpha^{\prime}\beta^{\prime}}_{(1,\phantom{}^3 S_1)} (0)\} \vert V_Q(nS)\right> & \cr T_{(8,\phantom{}^1 S_0)}^{\alpha\alpha^{\prime}}(z)&= &-i\int d^4 x e^{-iq\cdot x-2mx_0}\left<V_Q(nS)\vert T\{ {J^{\alpha}_{(8,^1 S_0)} (x)}^{\dagger} J^{\alpha^{\prime}}_{(8,\phantom{}^1 S_0)} (0)\} \vert V_Q(nS)\right> & \cr T_{(8,\phantom{}^3 P_J)}^{\alpha ij\alpha^{\prime}i'j'}(z)&= &-i\int d^4 x e^{-iq\cdot x-2mx_0}\left<V_Q(nS)\vert T\{ {J^{\alpha ij}_{(8,\phantom{}^3 P_J)} (x)}^{\dagger} J^{\alpha^{\prime}i'j'}_{(8,\phantom{}^3 P_J)} (0)\} \vert V_Q(nS)\right> \end{eqnarray} In (\ref{T}) we have not written 
a crossed term $(8,\phantom{}^1S_0$-$\phantom{}^3P_J)$ since it eventually vanishes at the order we will be calculating. \subsubsection{Matching NRQCD+SCET to pNRQCD+SCET} Thanks to the fact that in the end-point region ($M\gg M\sqrt{1-z}\gg M(1-z)$) the typical three momentum of the heavy quarks is given by \begin{equation}\label{trim} p\sim\sqrt{m\left(\frac{M}{2}(1-z)-E_1\right)} \end{equation} we can proceed one step further in the EFT hierarchy. NRQCD still contains quarks and gluons with energies $\sim m\alpha_{\rm s}$, which in the kinematical situation of the end-point (where the typical three momentum is always much greater than the typical energy) can be integrated out. This leads to potential NRQCD (pNRQCD). For the color singlet contributions we have \begin{displaymath} \left<V_Q(nS)\vert T\{ {J^{i\alpha\beta}_{(\phantom{}1,^3 S_1)} (x)}^{\dagger} J^{i'\alpha^{\prime}\beta^{\prime}}_{(1,\phantom{}^3 S_1)} (0)\} \vert V_Q(nS)\right> \longrightarrow \end{displaymath} \begin{equation}\label{hardcoll} \longrightarrow 2N_c {S_{V}^{i}}^{\dagger}({\bf x}, {\bf 0}, x_0)S_{V}^{i'}({\bf 0}, {\bf 0}, 0) \left< \textrm{{\small VAC}}\vert Tr\{ B^{\alpha}_{\perp} B^{\beta}_{\perp}\}(x) Tr\{ B^{\alpha^{\prime}}_{\perp} B^{\beta^{\prime}}_{\perp}\}(0)\vert \textrm{{\small VAC}}\right> \end{equation} The calculation of the vacuum correlator for collinear gluons above has been carried out in \cite{Fleming:2002sr}, and the final result, which is obtained by sandwiching (\ref{hardcoll}) between the quarkonium states, reduces to the one put forward in that reference. 
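As a rough orientation for the scale in (\ref{trim}), the sketch below evaluates the typical three momentum across the end-point region, using ballpark $\Upsilon(1S)$ inputs. The bottom-quark mass (and hence the binding energy) is an illustrative, scheme-dependent number, not an extracted value.

```python
import math

MB = 4.8      # GeV, ballpark bottom-quark mass (illustrative, scheme-dependent)
MASS = 9.46   # GeV, Upsilon(1S) mass
E1 = MASS - 2 * MB   # binding energy from M = 2m + E_1 (about -0.14 GeV here)

def p_typ(z):
    # typical three momentum, p ~ sqrt(m (M(1-z)/2 - E_1)), as in the text
    return math.sqrt(MB * (MASS * (1 - z) / 2.0 - E1))

momenta = {z: p_typ(z) for z in (0.7, 0.8, 0.9, 0.99)}
```

Even at $z=0.99$ the momentum stays close to $1\,$GeV, while the corresponding energy scale $p^2/m$ is several times smaller; this is the hierarchy (three momentum much greater than energy) invoked in the text to justify integrating out the soft scale.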
For the color octet currents, the leading contribution arises from a tree level matching of the currents (\ref{effcurr}), \begin{eqnarray}\label{effcurr2} J^{\alpha}_{(8,\phantom{}^1S_0)} (x) &\longrightarrow &\sqrt{2T_F} O_{P}^a ({\bf x}, {\bf 0}, x_0)B^{a \alpha}_{\perp}(x)\cr & & \cr J^{\alpha ij}_{(8,\phantom{}^3P_J)} (x)& \longrightarrow &\sqrt{2T_F}\left.\left( i\mbox{\boldmath $\nabla$}^i_{\bf y} O_{V}^{a j}({\bf x}, {\bf y}, x_0)\right)\right\vert_{{\bf y}={\bf 0}} B^{a \alpha}_{\perp}(x) \end{eqnarray} $S_{V}^{i}$, $O_{V}^{a i}$ and $O_{P}^a$ are the projections of the singlet and octet wave function fields introduced in \cite{Pineda:1997bj,Brambilla:1999xf} onto their vector and pseudoscalar components, namely $S=(S_{P}+S_{V}^{i}\sigma^i)/\sqrt{2}$ and $O^a =(O_{P}^{a}+O_{V}^{a i}\sigma^i)/\sqrt{2}$. $T_F=1/2$, and $N_c=3$ is the number of colors. \subsubsection{Calculation in pNRQCD+SCET}\label{subsubseccalc} We shall then calculate the contributions of the color octet currents in pNRQCD coupled to collinear gluons. They are depicted in figure \ref{figdos}. For the contribution of the $P$-wave current, it is enough to have the pNRQCD Lagrangian at leading (non-trivial) order in the multipole expansion given in \cite{Pineda:1997bj,Brambilla:1999xf}. For the contribution of the $S$-wave current, one needs a $1/m$ chromomagnetic term given in \cite{Brambilla:2002nu}. \begin{figure} \centering \includegraphics{dos} \caption[Color octet contributions]{\label{figdos}Color octet contributions. $\bullet$ represents the color octet S-wave current, $\blacktriangle$ represents the color octet P-wave current. The notation for the other vertices is that of ref. \cite{Brambilla:2002nu}, namely \ding{60}:= ${ig c_F \over \sqrt{N_c T_F}} { \left ( {\mbox{\boldmath $\sigma$}}_1 - {\mbox{\boldmath $\sigma$}}_2 \right ) \over 2m } \, {\rm Tr} \left [ T^b {\bf B} \right ]$ and \ding{182}:= ${ig\over \sqrt{N_c T_F}} {\bf x} \, {\rm Tr} \left [ T^b {\bf E} \right ]$.
The solid line represents the singlet field, the double line represents the octet field and the gluon with a line inside represents a collinear gluon.} \end{figure} Let us consider the contribution of the $S$-wave color octet current in some detail. We have, from the first diagram of fig. \ref{figdos}, \begin{displaymath} T_{(8,\phantom{}^1S_0)}^{\alpha\alpha^{\prime}}(z)= -i\eta^{\alpha\alpha^{\prime}}_{\perp}(4\pi){32\over 3}T_F^2 \left({ c_F\over 2m}\right)^2 \alpha_{\rm s} (\mu_u)C_f\int d^3 {\bf x} \int d^3 {\bf x}^{\prime} \psi^{\ast}_{n0}( {\bf x}^{\prime}) \psi_{n0}( {\bf x})\int\!\!\! {d^4 k\over (2\pi)^4} {{\bf k}^2\over k^2+i\epsilon}\times \end{displaymath} \begin{equation}\label{eqonas} \times\left(1\over -k_0+E_n-h_o+i\epsilon\right)_{{\bf x}^{\prime},{\bf 0}}{1\over (M(1-z)-k_{+})M-{\bf k}_{\perp}^2+i\epsilon} \left(1\over -k_0+E_n-h_o+i\epsilon\right)_{{\bf 0}, {\bf x}} \end{equation} where we have used the Coulomb gauge (both for ultrasoft and collinear gluons). $E_n < 0$ is the binding energy ($M=2m+E_n$) of the heavy quarkonium, $\psi_{n0}( {\bf x})$ its wave function, and $h_o$ the color-octet Hamiltonian at leading order, which contains the kinetic term and a repulsive Coulomb potential \cite{Pineda:1997bj,Brambilla:1999xf}. $c_F$ is the hard matching coefficient of the chromomagnetic interaction in NRQCD \cite{Bodwin:1994jh}, which will eventually be taken to $1$. We have also enforced that $k$ is ultrasoft by neglecting it in front of $M$ in the collinear gluon propagator. We shall evaluate (\ref{eqonas}) in light cone coordinates. If we carry out first the integration over $k_{-}$, only the pole $k_{-}={\bf k}_{\perp}^2/k_{+}$ contributes. Then the only remaining singularities in the integrand are in the collinear gluon propagator. Hence, the absorptive piece can only come from its pole $M^2(1-z)-M k_{+}= {\bf k}_{\perp}^2$. 
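For reference, with light-cone components normalized as $k_\pm=k^0\pm k^3$ (so that $k^2=k_+k_--{\bf k}_\perp^2$, the convention implied by the pole just quoted), the contributing pole of the gluon propagator sits at

```latex
\frac{1}{k^2+i\epsilon}\;=\;\frac{1}{k_+\,k_- -{\bf k}_\perp^2+i\epsilon}
\qquad\Longrightarrow\qquad
k_-\big\vert_{\rm pole}\;=\;\frac{{\bf k}_\perp^2}{k_+}-\frac{i\epsilon}{k_+}\,,
```

i.e. below the real $k_-$ axis for $k_+>0$, which is why closing the contour picks up only this pole.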
If $k_{+} {\ \lower-1.2pt\vbox{\hbox{\rlap{$<$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } M(1-z)$, then ${\bf k}_{\perp}^2 \sim M^2(1-z)$ which implies $k_{-}\sim M$. This contradicts the assumption that $k$ is ultrasoft. Hence, ${\bf k}_{\perp}^2$ must be expanded in the collinear gluon propagator. We then have \begin{displaymath} {\rm Im}\left(T_{(8,\phantom{}^1S_0)}^{\alpha\alpha^{\prime}}(z)\right)=-\eta^{\alpha\alpha^{\prime}}_{\perp}(4\pi){32\over 3}T_F^2 \left({ c_F\over 2m}\right)^2 \alpha_{\rm s}(\mu_u)C_f\times \end{displaymath} \begin{displaymath} \times\int d^3 {\bf x} \int d^3 {\bf x}^{\prime} \psi^{\ast}_{n0}( {\bf x}^{\prime}) \psi_{n0}( {\bf x}){1\over 8\pi M}\int_0^{\infty} dk_+ \delta \left(M(1-z) -k_+\right)\times \end{displaymath} \begin{equation}\label{imts} \times\int_0^{\infty} dx \left( \left\{ \delta ({\bf \hat x}) , {h_o-E_n \over h_o -E_n +{k_+ \over 2}+x} \right\} - {h_o-E_n \over h_o -E_n +{k_+ \over 2}+x}\delta ({\bf \hat x}){h_o-E_n \over h_o -E_n +{k_+ \over 2}+x} \right)_{{\bf x},{\bf x}^{\prime}} \end{equation} where we have introduced the change of variables $\vert {\bf k}_{\perp}\vert=\sqrt{2k_+x}$. 
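The delta function in (\ref{imts}) arises in the standard way: once ${\bf k}_\perp^2$ is expanded away, the absorptive part of the collinear gluon propagator reduces to (a sketch of the step)

```latex
{\rm Im}\,\frac{1}{\left(M(1-z)-k_+\right)M+i\epsilon}
\;=\;-\pi\,\delta\!\left(\left(M(1-z)-k_+\right)M\right)
\;=\;-\frac{\pi}{M}\,\delta\!\left(M(1-z)-k_+\right)\,.
```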
Restricting ourselves to the ground state ($n=1$) and using the techniques of reference \cite{Beneke:1999gq}, we obtain \begin{displaymath} {\rm Im}\left(T_{(8,\phantom{}^1S_0)}^{\alpha\alpha^{\prime}}(z)\right)=-\eta^{\alpha\alpha^{\prime}}_{\perp}{16\over 3}T_F^2 \left({ c_F\over 2m}\right)^2 \alpha_{\rm s}(\mu_u)C_f{1\over M}\int_0^{\infty} dk_+ \delta (M(1-z) -k_+)\times \end{displaymath} \begin{displaymath} \times\int_0^{\infty} dx \left( 2 \psi_{10}( {\bf 0})I_{S}({k_+\over 2} +x)- I_{S}^2({k_+\over 2} +x) \right) \end{displaymath} \begin{displaymath} I_{S}({k_+\over 2} +x):=\int d^3 {\bf x} \psi_{10}( {\bf x})\left( {h_o-E_1 \over h_o -E_1 +{k_+ \over 2}+x}\right)_{{\bf x},{\bf 0}}= \end{displaymath} \begin{equation} =m\sqrt{\gamma\over \pi}{\alpha_{\rm s} N_c \over 2}{1\over 1-z'}\left( 1-{2z'\over 1+z'} \;\phantom{}_2F_1\left(-\frac{\lambda}{z'},1,1-\frac{\lambda}{z'},\frac{1-z'}{1+z'}\right)\right) \end{equation} where \begin{equation} \gamma=\frac{mC_f\alpha_{\rm s}}{2}\,,\quad z'=\frac{\kappa}{\gamma}\,,\quad -\frac{\kappa^2}{m}=E_1-\frac{k_+}{2}-x\,,\quad\lambda=-\frac{1}{2N_cC_f}\,,\quad E_1=-{\gamma^2\over m} \end{equation} This result can be recast in the factorized form given in \cite{Fleming:2002sr} (equation \ref{SCETfact}). \begin{equation}\label{ImTS} {\rm Im}\left(T_{(8,\phantom{}^1S_0)}^{\alpha \alpha^{\prime}}(z)\right)=-\eta^{\alpha \alpha^{\prime}}_{\perp}\int dl_+ S_{S}(l_+) {\rm Im} J_M (l_+ - M(1-z)) \end{equation} \begin{equation} {\rm Im} J_M(l_+ - M(1-z))= T_F^2\left(N_c^2-1\right){2\pi\over M}\delta (M(1-z) -l_+) \end{equation} \begin{equation}\label{shpfctS} S_{S}(l_+)={4\alpha_{\rm s} (\mu_u)\over 3 \pi N_c} \left({ c_F\over 2m}\right)^2 \int_0^{\infty} dx \left( 2 \psi_{10}( {\bf 0})I_{S}({l_+\over 2} +x)- I_{S}^2({l_+\over 2} +x) \right) \end{equation} We have thus obtained the $S$-wave color octet shape function $S_{S}(l_+)$. Analogously, for the $P$-wave color octet shape functions, we obtain from the second diagram of fig.
\ref{figdos} \begin{equation}\label{ImTP} {\rm Im}\left(T_{(8,\phantom{}^3P_J)}^{\alpha i j \alpha^{\prime} i' j'}(z)\right)\!\!\!=\!\!-\eta_{\perp}^{\alpha \alpha^{\prime}}\delta^{j j'}\!\!\!\int\!\!\! dl_+\!\! \left( \delta_{\perp}^{i i'}S_{P1}(l_+)+\left(n^i n^{i'}\!\!-{1\over 2}\delta_{\perp}^{i i'}\right)S_{P2}(l_+)\right) {\rm Im} J_M(l_+ - M(1-z)) \end{equation} \begin{equation}\label{shpfctP1} S_{P1}(l_+):= {\alpha_{\rm s} (\mu_u)\over 6 \pi N_c} \int_0^{\infty}\!\!\!dx\left( 2\psi_{10}( {\bf 0})I_P(\frac{l_+}{2}+x)-I_P^2(\frac{l_+}{2}+x) \right) \end{equation} \begin{equation}\label{shpfctP2} S_{P2}(l_+):= {\alpha_{\rm s} (\mu_u)\over 6 \pi N_c} \int_0^{\infty}\!\!\!dx \frac{8l_+x}{\left(l_++2x\right)^2}\left( \psi^2_{10}( {\bf 0})-2\psi_{10}( {\bf 0})I_P(\frac{l_+}{2}+x)+I_P^2(\frac{l_+}{2}+x)\right) \end{equation} where \begin{displaymath} I_{P}({k_+\over 2} +x):=-\frac{1}{3}\int d^3 {\bf x} {\bf x}^i \psi_{10}( {\bf x})\left( {h_o-E_1 \over h_o -E_1 +{k_+ \over 2}+x} \mbox{\boldmath $\nabla$}^i \right)_{{\bf x},{\bf 0}}= \end{displaymath} \begin{displaymath} =\sqrt{\frac{\gamma^3}{\pi}} {8\over 3}\left( 2-\lambda \right)\!\!\frac{1}{4(1+z')^3}\Bigg( 2(1+z')(2+z')+(5+3z')(-1+\lambda)+2(-1+\lambda)^2+ \end{displaymath} \begin{equation} \left.+\frac{1}{(1-z')^2}\left(4z'(1+z')(z'^2-\lambda^2)\left(\!\!-1+\frac{\lambda(1-z')}{(1+z')(z'-\lambda)}+\phantom{}_2F_1\left(-\frac{\lambda}{z'},1,1-\frac{\lambda}{z'},\frac{1-z'}{1+z'}\right)\right)\right)\right) \end{equation} Note that two shape functions are necessary for the $P$-wave case. The shape functions (\ref{shpfctS}), (\ref{shpfctP1}) and (\ref{shpfctP2}) are ultraviolet (UV) divergent and require regularization and renormalization. In order to regulate them at this order it is enough to calculate the ultrasoft (US) loop (the integral over $k$ in (\ref{eqonas})) in $D$-dimensions, leaving the bound state dynamics in $3$ space dimensions ($D=4-2\varepsilon$). 
In fact, the expressions (\ref{shpfctS}) and (\ref{shpfctP1}) implicitly assume that dimensional regularization (DR) is used, otherwise linearly divergent terms proportional to $\psi^2_{10}({\bf 0})$ would appear (which make (\ref{shpfctS}) and (\ref{shpfctP1}) formally positive definite quantities). This procedure, to use DR for the US loop only, was the one initially employed in \cite{GarciaiTormo:2004jw}. There the following steps were performed: \begin{itemize} \item In order to isolate the $1/\varepsilon$ poles, $I_S$ and $I_P$ were expanded up to $\mathcal{O}(1/{z'}^2)$ (the expansion of these functions up to $\mathcal{O}(1/{z'}^4)$ can be found in equations (\ref{IStaylor}) and (\ref{IPtaylor})). \item The result was subtracted and added to the integrand of (\ref{shpfctS})-(\ref{shpfctP1}) (for (\ref{shpfctP2}) this is not necessary since the only divergent piece is independent of $I_P$). The subtracted part makes the shape functions finite. The added part contains linear and logarithmic UV divergencies. \item The remaining divergent integrals were dimensionally regularized by making the substitution $dx\rightarrow dx (x/ \mu)^{-\varepsilon}$. That produced the $1/\varepsilon$ poles displayed in formulas (16) of ref. \cite{GarciaiTormo:2004jw}, which were eventually subtracted (linear divergences are set to zero as usual in DR). \end{itemize} That last point was motivated by the fact that $x \sim {\bf k}_\perp^2$ (${\bf k}_\perp$ being the transverse momentum of the US gluon) but differs from a standard $MS$ scheme. As was already mentioned in \cite{GarciaiTormo:2004jw}, this regularization and renormalization scheme is not the standard one in pNRQCD calculations. Later, in \cite{GarciaiTormo:2005ch}, a regularization-renormalization procedure closer to the standard one in pNRQCD was used; which is the one we will use here. 
That latter procedure consists in regularizing both the US loop and the potential loops (entering the bound state dynamics) in DR; then US divergences are identified by taking the limit $D\rightarrow 4$ in the US loops while leaving the potential loops in $D$ dimensions \cite{Pineda:1997ie}; potential divergences are identified by taking $D\rightarrow 4$ in the potential loops once the US divergences have been subtracted. It turns out that all divergences in $S_{P2}$ are US and all divergences in $S_S$ are potential. $S_{P1}$ contains both US and potential divergences. The potential divergences related to the bound state dynamics can be isolated using the methods of ref. \cite{Czarnecki:1999mw}. Following this procedure we obtain the following expressions for the singular pieces \begin{displaymath} \left.S_{S}(k_+)\right\vert_{\varepsilon\rightarrow 0}\simeq {c_F^2\alpha_{\rm s} (\mu_u )\gamma^3 C_f^2 \alpha_{\rm s}^2 (\mu_p)\over 3\pi^2 N_c m}(1-\lambda)\left( -2+\lambda (2\ln2 + 1)\right)\cdot \end{displaymath} \begin{equation}\label{singularSS} \cdot\left(\frac{1}{\varepsilon}+\ln\left(\frac{\mu_{pc}^2}{m\left(\frac{k_+}{2}+\frac{\gamma^2}{m}\right)}\right)+\cdots\right) \end{equation} \[ \left.S_{P1}(k_+)\right\vert_{\varepsilon\rightarrow 0} \simeq {\alpha_{\rm s} (\mu_u )\gamma^3 m C_f^2 \alpha_{\rm s}^2 (\mu_p)\over 9\pi^2 N_c } \left(\!\!-\frac{31}{6}+\lambda (4\ln2+\frac{19}{6})-\lambda^2 (2\ln 2 + {1\over 6})\right)\cdot \] \begin{equation}\label{singularSP1} \cdot\left(\frac{1}{2\varepsilon}+\ln\left(\frac{\mu_{pc}^2}{m\left(\frac{k_+}{2}+\frac{\gamma^2}{m}\right)}\right) +\cdots\right)+{2\alpha_{\rm s} (\mu_u )\gamma^5 \over 9\pi^2 N_c m}\left(-{1\over\varepsilon}-\ln\left(\frac{\mu_c^2}{k_+^2}\right)+\cdots\right) \end{equation} \begin{equation}\label{singularP2} \left.
S_{P2}(k_+)\right\vert_{\varepsilon\rightarrow 0}\simeq {\alpha_{\rm s} (\mu_u ) k_+ \gamma^3\over 3\pi^2 N_c} \left(\frac{1}{\varepsilon} +\ln\left(\frac{\mu_c^2}{k_+^2}\right)+\cdots \right) \end{equation} For simplicity, we have set $D=4$ everywhere except in the momentum integrals. $\mu_p$, according to (\ref{trim}), is given by $\mu_p=\sqrt{m(M(1-z)/2-E_1)}$. $\mu_c$ and $\mu_{pc}$ are the subtraction points of the US and potential divergences respectively. For comparison, let us mention that when we set $\mu_c=M\sqrt{1-z}$ and $\mu_{pc}=\sqrt{m\mu_c}$, as we will do, we obtain exactly the same result as in the procedure used in ref. \cite{GarciaiTormo:2004jw} as far as the potential divergences are concerned\footnote{We assume that the correlation of scales advocated in \cite{Luke:1999kz} (see \cite{Pineda:2001et} for the implementation in our framework) must also be taken into account here.}; for the US divergences the result differs from that former scheme by a factor $\ln\left(\frac{\mu_c}{2k_+}\right)$. The renormalization of these expressions is not straightforward. We will assume that suitable operators exist which can absorb the $1/\varepsilon$ poles, so that an $MS$-like scheme makes sense to define the above expressions, and discuss in the following the origin of such operators. In order to understand the scale dependence of (\ref{singularSS})-(\ref{singularP2}) it is important to notice that it appears because the term ${\bf k}_{\perp}^2$ in the collinear gluon propagator is neglected in (\ref{eqonas}). It should then cancel against an IR divergence induced by keeping the term ${\bf k}_{\perp}^2$, which implies assuming a size $M^2(1-z)$ for it, and expanding the ultrasoft scales accordingly. We have checked that it does. However, this contribution cannot be computed reliably within pNRQCD (nor within NRQCD) because it implies that the $k_{-}$ component of the ultrasoft gluon is of order $M$, and hence it becomes collinear.
A reliable calculation involves (at least) two steps within the EFT strategy. The first one is the matching calculation of the singlet electromagnetic current at higher orders both in $\alpha_{\rm s}$ and in $({\bf k}_{\perp}/M)^2$ and $k_+/M$. The second is a one loop calculation with collinear gluons involving the higher order singlet currents. Figure \ref{figmatch} shows the relevant diagrams which contribute to the IR behavior we are eventually looking for. We need NNLO in $\alpha_{\rm s}$, but only LO in the $({\bf k}_{\perp}/M)^2$ and $k_+/M$ expansion. These diagrams are IR finite, but they induce, in the second step, the IR behavior which matches the UV of (\ref{singularSS})-(\ref{singularP2}). The second step amounts to calculating the loops with collinear gluons and expanding smaller scales in the integrand. We have displayed in fig. \ref{figdiv} the two diagrams which provide the aforementioned IR divergences. For the UV divergences that do not depend on the bound state dynamics, we need the matching at LO in $\alpha_{\rm s}$ (last diagram in Fig. \ref{figmatch}) but NLO in $k_+/M$ and $({\bf k}_{\perp}/M)^2$. \begin{figure} \centering \includegraphics{match} \caption{Relevant diagrams in the matching calculation QCD $\rightarrow$ pNRQCD+SCET.}\label{figmatch} \end{figure} \begin{figure} \centering \includegraphics{div} \caption{Diagrams which induce an IR scale dependence which cancels against the UV one of the octet shape functions. }\label{figdiv} \end{figure} The above means that the scale dependence of the leading order contributions of the color-octet currents is of the same order as the NNLO contributions in $\alpha_{\rm s}$ of the color-singlet current, a calculation which is not available. One might, alternatively, attempt to resum logs and use the NLO calculation \cite{Kramer:1999bf} as the boundary condition. This log resummation is non-trivial. 
One must take into account the correlation of scales inherent to the non-relativistic system \cite{Luke:1999kz}, which in the framework of pNRQCD has been implemented in \cite{Pineda:2001ra,Pineda:2002bv,Pineda:2001et}, and combine it with the resummation of Sudakov logs in the framework of SCET \cite{Bauer:2000ew,Bauer:2001rh,Fleming:2002rv,Fleming:2002sr} (see also \cite{Hautmann:2001yz}). Correlations within the various scales of SCET may start playing a role here as well \cite{Manohar:2000mx}. In any case, it should be clear that by only resumming Sudakov logs, as has been done so far \cite{Bauer:2001rh}, one does not resum all the logs arising in the color octet contributions of heavy quarkonium, at least in the weak coupling regime. Keeping this in mind, we can proceed and write the renormalized expressions for the shape functions. These renormalized expressions, in an $MS$ scheme, read \[ S_{S}^{MS}(k_+)={4\alpha_{\rm s} (\mu_u)\over 3 \pi N_c} \left({ c_F\over 2m}\right)^2 \Bigg\{2 \psi_{10}( {\bf 0})\!\!\left(m\sqrt{\frac{\gamma}{\pi}}\frac{\alpha_sN_c}{2}\right)\Bigg(\!\!\int_0^{\infty} \!\!\!\left(\widetilde{I}_{S}({k_+\over 2} +x)-\frac{1}{z'}-\left(-1+2\lambda\ln 2\right)\cdot\right.
\] \[ \left.\left.\cdot\frac{1}{z'^2}\right)dx-2\frac{\gamma}{\sqrt{m}}\sqrt{\frac{k_+}{2}+\frac{\gamma^2}{m}}\right)-\left(m\sqrt{\frac{\gamma}{\pi}}\frac{\alpha_sN_c}{2}\right)^2\left(\int_0^{\infty}\!\!\!\left(\widetilde{I}_{S}^2({k_+\over 2} +x)-\frac{1}{z'^2}\right)dx\right)\Bigg\}+ \] \begin{equation} +{c_F^2\alpha_{\rm s} (\mu_u )\gamma^3 C_f^2 \alpha_{\rm s}^2 (\mu_p)\over 3\pi^2 N_c m}(1-\lambda)\left( -2+\lambda (2\ln2 + 1)\right)\left(\ln\left(\frac{\mu_{pc}^2}{m\left(\frac{k_+}{2}+\frac{\gamma^2}{m}\right)}\right)\right) \end{equation} \[ S_{P1}^{MS}(k_+)={\alpha_{\rm s} (\mu_u)\over 6 \pi N_c} \Bigg\{2 \psi_{10}( {\bf 0})\left(\sqrt{\frac{\gamma^3}{\pi}}\frac{8}{3}(2-\lambda)\right)\!\!\Bigg(\int_0^{\infty} \!\!\!\left(\widetilde{I}_{P}({k_+\over 2} +x)\!-\!\frac{1}{2z'}\!-\!\left(-\frac{3}{4}+\lambda\ln 2-\frac{\lambda}{4}\right)\cdot\right. \] \[ \left.\left.\cdot\frac{1}{z'^2}\right)dx-\frac{\gamma}{\sqrt{m}}\sqrt{\frac{k_+}{2}+\frac{\gamma^2}{m}}\right)-\left(\sqrt{\frac{\gamma^3}{\pi}}\frac{8}{3}(2-\lambda)\right)^2\left(\int_0^{\infty}\!\!\!\left(\widetilde{I}_{P}^2({k_+\over 2} +x)-\frac{1}{4z'^2}\right)dx\right)\Bigg\} + \] \[ +{\alpha_{\rm s} (\mu_u )\gamma^3 m C_f^2 \alpha_{\rm s}^2 (\mu_p)\over 9\pi^2 N_c } \left(\!\!-\frac{31}{6}+\lambda (4\ln2+\frac{19}{6})-\lambda^2 (2\ln 2 + {1\over 6})\right)\ln\!\!\left(\frac{\mu_{pc}^2}{m\left(\frac{k_+}{2}+\frac{\gamma^2}{m}\right)}\right) + \] \begin{equation} + {2\alpha_{\rm s} (\mu_u )\gamma^5 \over 9\pi^2 N_c m}\left(-\ln\left(\frac{\mu_c^2}{k_+^2}\right)\right) \end{equation} \[ S_{P2}^{MS}(k_+)={\alpha_{\rm s} (\mu_u)\over 6 \pi N_c} \Bigg\{\psi^2_{10}( {\bf 0})k_+\left(-2+2\ln\left(\frac{\mu_c^2}{k_+^2}\right)\right)+ \] \begin{equation} +\int_0^{\infty}\!\!\!dx \frac{8k_+x}{\left(k_++2x\right)^2}\left(-2\psi_{10}( {\bf 0})I_P(\frac{k_+}{2}+x)+I_P^2(\frac{k_+}{2}+x)\right)\Bigg\} \end{equation} where \begin{eqnarray} \widetilde{I}_S(\frac{k_+}{2}+x) & := &
\left(m\sqrt{\frac{\gamma}{\pi}}\frac{\alpha_sN_c}{2}\right)^{-1}{I}_S(\frac{k_+}{2}+x)\nonumber\\ \widetilde{I}_P(\frac{k_+}{2}+x) & := & \left(\sqrt{\frac{\gamma^3}{\pi}}\frac{8}{3}(2-\lambda)\right)^{-1}{I}_P(\frac{k_+}{2}+x) \end{eqnarray} In ref. \cite{GarciaiTormo:2004kb} an additional subtraction related to linear divergences was made. This subtraction was necessary in order to merge smoothly with the results in the central region. We will also need this subtraction when discussing the merging at LO in the following sections. We use \begin{displaymath} \int_0^{\infty}\!\!\!\!\!\!dx\, \frac{1}{z'} \longrightarrow -2\frac{\gamma}{\sqrt{m}}\left[\sqrt{\frac{k_+}{2}+\frac{\gamma^2}{m}}-\sqrt{\frac{k_+}{2}}\right] \end{displaymath} which differs from the $MS$ scheme by the subtraction of the second term in the square brackets. In this other scheme (\emph{sub}) the expressions for the shape functions read \begin{eqnarray} S_{S}^{sub}(k_+)&=& S_{S}^{MS}(k_+)+{4\alpha_{\rm s} (\mu_u)\over 3 \pi N_c} \left({ c_F\over 2m}\right)^2 2 \psi_{10}( {\bf 0})\left(m\sqrt{\frac{\gamma}{\pi}}\frac{\alpha_sN_c}{2}\right)2\frac{\gamma}{\sqrt{m}}\sqrt{\frac{k_+}{2}}\\ S_{P1}^{sub}(k_+)&=& S_{P1}^{MS}(k_+)+{\alpha_{\rm s} (\mu_u)\over 6 \pi N_c} 2 \psi_{10}( {\bf 0})\left(\sqrt{\frac{\gamma^3}{\pi}}\frac{8}{3}(2-\lambda)\right)\frac{\gamma}{\sqrt{m}}\sqrt{\frac{k_+}{2}}\\ S_{P2}^{sub}(k_+)&=&S_{P2}^{MS}(k_+) \end{eqnarray} The validity of the formulas for the shape functions is limited by the perturbative treatment of the US gluons. The typical momentum of these gluons in light cone coordinates turns out to be: \begin{equation} (k_+, k_{\perp}, k_-)=\left(M(1-z),\sqrt{2M(1-z)\left(\frac{M(1-z)}{2}-E_1\right)},M(1-z)-2E_1\right) \end{equation} Note that the typical $k_{\perp}$ is not fixed by the bound state dynamics only but by a combination of the latter and the end-point kinematics.
Hence, the calculation is reliable provided that $k_{\perp} \gtrsim 1~{\rm GeV}$, which for the $\Upsilon(1S)$ system means $z<0.92$. \subsection{Comparison with experiment} We apply here the results of this section to the $\Upsilon (1S)$ system. There is good evidence that the $\Upsilon (1S)$ state can be understood as a weak coupling (positronium-like) bound state \cite{Titard:1993nn,Titard:1994id,Titard:1994ry,Pineda:1997hz,Pineda:1998ja,Pineda:2001zq,Brambilla:2001qk,Recksiegel:2002za,Kniehl:2002br,Penin:2002zv,Kniehl:2003ap}. Hence, ignoring $\mathcal{O}\left(\Lambda_{\rm QCD}\right)$ effects in the shape functions, as we have done, should be a reasonable approximation. We will denote the contribution in the upper end-point region by $\frac{d\Gamma^e}{dz}$. It is given by \begin{equation}\label{endp} \frac{d\Gamma^e}{dz}=\frac{d\Gamma^{e}_{CS}}{dz}+\frac{d\Gamma^{e}_{CO}}{dz} \end{equation} where $CS$ and $CO$ stand for the color singlet and color octet contributions respectively. The color singlet contribution is the expression with the Sudakov resummed coefficient (\ref{singres}). The color octet contribution is given by \begin{equation} \frac{d\Gamma_{CO}^{e}}{dz}=\alpha_s\left(\mu_u\right)\alpha_s\left(\mu_h\right)\left(\frac{16M\alpha}{81m^4}\right)\int_z^{\frac{M}{2m}}\!\!\! C(x-z) S_{S+P}(x)dx \end{equation} where $\mu_u$ is the US scale, which arises from the couplings of the US gluons (see below for the expression we use). $C(x-z)$ contains the Sudakov resummations explained in \ref{subsecresoc}\footnote{These matching coefficients, provided in reference \cite{Bauer:2001rh}, become imaginary for extremely small values of $1-z$, a region where our results do not hold anyway. We have simply cut off this region in the convolutions.}.
The (tree level) matching coefficients (up to a global factor) and the various shape functions are encoded in $S_{S+P}(x)$, \[ S_{S+P}(z):=z\left(-\left(\frac{4\alpha_s\left(\mu_u\right)}{3\pi N_c}\left(\frac{c_F}{2m}\right)^2\right)^{-1}\!\!\!\!\!S_S(M(1-z))-\right. \] \begin{equation}\label{sp} \left.-\left(\frac{\alpha_s\left(\mu_u\right)}{6\pi N_c}\right)^{-1}\left(3S_{P1}(M(1-z))+S_{P2}(M(1-z))\right)\right) \end{equation} The shape functions $S_S$, $S_{P1}$ and $S_{P2}$ may become $S_S^{MS}$, $S_{P1}^{MS}$ and $S_{P2}^{MS}$ or $S_S^{sub}$, $S_{P1}^{sub}$ and $S_{P2}^{sub}$ depending on the subtraction scheme employed. We will use the following values of the masses for the plots: $m=4.81$ GeV and $M=9.46$ GeV. The hard scale $\mu_h$ is set to $\mu_h=M$. The soft scale $\mu_s = m C_f\alpha_s$ is to be used for the $\alpha_s$ participating in the bound state dynamics; we have $\alpha_s(\mu_s)=0.28$. The ultrasoft scale $\mu_u$ is set to $\mu_u=\sqrt{2M(1-z)\left(\frac{M}{2}(1-z)-E_1\right)}$, as discussed in the previous subsection. We have used the \verb|Mathematica| package \verb|RunDec| \cite{Chetyrkin:2000yt} to obtain the (one loop) values of $\alpha_s$ at the different scales. In figure \ref{figepsubst} we plot the end-point contribution (\ref{endp}) with the shape functions renormalized in an $MS$ scheme (blue dashed line) and in the $sub$ scheme (red solid line), together with the experimental data \cite{Nemati:1996xy} (we have convoluted the theoretical curves with the experimental efficiency; the overall normalization of each curve is taken as a free parameter). We see that a very good description of the data is achieved, and that both schemes describe the shape of the experimental data in the end-point region equally well. This nice agreement with data is an encouraging result.
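For orientation, the scales just quoted are easy to reproduce numerically. The following is a minimal sketch (our own cross-check, not part of the original analysis; variable names are ours): it evaluates $\gamma$, $E_1$ and the photon fraction at which the typical $k_\perp$ of the ultrasoft gluon drops to $1$ GeV. With these inputs the crossing comes out near $z\approx 0.91$, consistent with the $z<0.92$ bound quoted above.

```python
import math

# Numerical inputs quoted in the text for the Upsilon(1S) analysis
m, M = 4.81, 9.46          # quark and Upsilon(1S) masses, GeV
alpha_s_soft = 0.28        # alpha_s at the soft scale mu_s = m C_f alpha_s
CF = 4.0 / 3.0             # Casimir of the fundamental representation, N_c = 3

gamma = m * CF * alpha_s_soft / 2.0   # Coulomb parameter, gamma = m C_f alpha_s / 2
E1 = -gamma**2 / m                    # LO binding energy, E_1 = -gamma^2 / m

def k_perp(z):
    """Typical transverse momentum of the ultrasoft gluon at photon fraction z (GeV)."""
    return math.sqrt(2.0 * M * (1.0 - z) * (M * (1.0 - z) / 2.0 - E1))

# Scan for the photon fraction at which k_perp falls to 1 GeV
z = 0.5
while k_perp(z) > 1.0:
    z += 1e-4

print(f"gamma = {gamma:.3f} GeV, E1 = {E1:.3f} GeV, z_max ~ {z:.3f}")
```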
But it still remains to be seen whether it is possible to combine these results, for the end-point region, with the ones for the central region (where the NRQCD formalism is expected to work). This will be the subject of section \ref{secmerg}. \begin{figure} \centering \includegraphics[width=14.5cm]{epsubst} \caption[End-point contribution of the spectrum]{End-point contribution of the spectrum, $d\Gamma^e/dz$, with the shape functions renormalized in an $MS$ scheme (blue dashed line) and in the $sub$ scheme (red solid line). The points are the CLEO data \cite{Nemati:1996xy}.}\label{figepsubst} \end{figure} \subsection{Calculation of $\Upsilon (1S)$ NRQCD color octet matrix elements} The calculation of the shape functions can easily be taken over to provide a calculation of $\left<\Upsilon(1S)\right\vert $ $\left. \mathcal{O}_8(\phantom{}^1 S_0)\vert\Upsilon (1S) \right> $ and $\left<\Upsilon (1S)\vert \mathcal{O}_8(\phantom{}^3 P_J)\vert\Upsilon (1S)\right>$, assuming that $m\alpha_{\rm s}^2\gg \Lambda_{\rm QCD}$ is a reasonable approximation for this system. Indeed, we only have to drop the delta function (which requires a further integration over $k_{+}$) and arrange for the suitable factors in (\ref{ImTS}) and (\ref{ImTP}). We obtain \begin{displaymath} \left< \Upsilon (1S) \vert \mathcal{O}_8 (^1 S_0) \vert \Upsilon (1S) \right> =-2T_F^2 (N_c^2-1) \int_0^{\infty}dk_+S_{S}(k_+) \end{displaymath} \begin{equation} \left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 P_J) \vert \Upsilon (1S) \right> =-{4(2J+1)T_F^2 (N_c^2-1)\over 3} \int_0^{\infty}dk_+S_{P1}(k_+) \end{equation} where we have used \begin{equation} \int_0^{\infty}dk_+S_{P2}(k_+)={2\over 3} \int_0^{\infty}dk_+S_{P1}(k_+) \end{equation} The expressions above contain UV divergences which may be regulated by calculating the ultrasoft loop in $D$ dimensions. These divergences can be traced back to the diagrams in fig. \ref{figz2} and fig. \ref{figz4}.
Indeed, if we expand $I_{S}$ and $I_{P}$ for $z^{\prime}$ large, we obtain \begin{displaymath} I_S\sim m\sqrt{\frac{\gamma}{\pi}}\frac{\alpha_{\rm s} N_c}{2}\left\{\frac{1}{{z^{\prime}}}+\frac{1}{{z^{\prime}}^2}(-1+2\lambda\ln2)+\frac{1}{{z^{\prime}}^3}\left(1-2\lambda+\frac{\lambda^2\pi^2}{6}\right)+\right. \end{displaymath} \begin{equation}\label{IStaylor} \left.+\frac{1}{{z^{\prime}}^4}\left(-1+\lambda(2\ln2+1)+\lambda^2(-4\ln2)+\frac{3}{2}\zeta(3)\lambda^3\right) +{\cal O}(\frac{1}{{z^{\prime}}^5})\right\} \end{equation} \begin{displaymath} I_P\sim \sqrt{\frac{\gamma^3}{\pi}} {8\over 3}(2-\lambda) \left\{\frac{1}{2{z^{\prime}}}+\left(-\frac{3}{4}+\lambda\left(-\frac{1}{4}+\ln2\right)\right)\frac{1}{{z^{\prime}}^2}+\right. \end{displaymath} \begin{displaymath} \left.+\left(1-\lambda+\frac{1}{12}(-6+\pi^2)\lambda^2\right)\frac{1}{{z^{\prime}}^3}+\frac{1}{4}\left(-5+\lambda+\lambda^2(2-8\ln2)+8\lambda\ln2+\right.\right. \end{displaymath} \begin{equation}\label{IPtaylor} \left.\left.+\lambda^3\left(-4\ln2+3\zeta(3)\right)\right)\frac{1}{{z^{\prime}}^4} +{\cal O}(\frac{1}{{z^{\prime}}^5})\right\} \end{equation} \begin{figure} \centering \includegraphics{app1} \caption[Diagrams which require a ${\cal P}_1 (^3 S_1)$ operator for renormalization]{\label{figz2}Diagrams which require a ${\cal P}_1 (^3 S_1)$ operator for renormalization. The solid circle stands for either the $\mathcal{O}_8 (^1 S_0)$ or $\mathcal{O}_8 (^3 P_J)$ operator, the crossed box for either the chromomagnetic (\ding{60} ) or chromoelectric (\ding{182} ) interaction in fig. \ref{figdos}, the empty box for the octet Coulomb potential, and the thin solid lines for free $Q\bar Q$ propagators.} \end{figure} \begin{figure} \centering \includegraphics{app2} \caption[Diagrams which require a $\mathcal{O}_1 (^3 S_1)$ operator for renormalization]{\label{figz4}Diagrams which require a $\mathcal{O}_1 (^3 S_1)$ operator for renormalization. Symbols are as in fig. \ref{figz2}. 
} \end{figure} It is easy to see that only powers of $1/z^{\prime}$ up to order four may give rise to divergences. Moreover, each power of $1/z^{\prime}$ corresponds to one Coulomb exchange. Taking into account the result of the following integral, \begin{equation} \int_0^\infty dk_+\int_0^\infty dx(2k_+x)^{-\varepsilon}\frac{1}{{z'}^{\alpha}}=2^{1-2\varepsilon}\left(\frac{\gamma^2}{m}\right)^{2-2\varepsilon}\frac{\Gamma^2(1-\varepsilon)}{\Gamma\left(\frac{\alpha}{2}\right)}\Gamma\left(\frac{\alpha}{2}+2\varepsilon-2\right) \end{equation} we see that only the $1/{z^{\prime}}^2$ and $1/{z^{\prime}}^4$ terms produce divergences. The former correspond to the diagrams in fig. \ref{figz2} and the latter to those in fig. \ref{figz4}, which can be renormalized by the operators ${\cal P}_1 (^3 S_1)$ and $\mathcal{O}_1(^3S_1)$ respectively. It is again important to notice that these divergences are a combined effect of the ultrasoft loop and quantum mechanics perturbation theory ({\it potential} loops \cite{Beneke:1997zp}), and hence it may not be clear at first sight whether they must be understood as ultrasoft (producing $\log \mu_u$ in the notation of refs. \cite{Pineda:2001ra,Pineda:2002bv,Pineda:2001et}) or potential (producing $\log \mu_p$ in the notation of refs. \cite{Pineda:2001ra,Pineda:2002bv,Pineda:2001et}). In any case, the logarithms they produce depend on the regularization and renormalization scheme used for both ultrasoft and potential loops. Remember that the scheme we use here is not the standard one in pNRQCD \cite{Pineda:1997ie,Pineda:1998kn,Pineda:2001ra,Pineda:2002bv,Kniehl:2002br,Penin:2002zv}. In the standard scheme the ultrasoft divergences (anomalous dimensions) are identified by dimensionally regulating both ultrasoft and potential loops and subsequently taking $D\rightarrow 4$ in the ultrasoft loop divergences only. If we did this in the present calculation we would obtain no ultrasoft divergence.
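The expansions above can be checked against the closed form of $I_S$ given earlier in terms of $\phantom{}_2F_1$. The sketch below is our own numerical cross-check (not part of the original analysis); it uses the elementary series $\phantom{}_2F_1(a,1;1+a;x)=\sum_{n\ge 0}\frac{a}{a+n}x^n$, valid for $|x|<1$, so no external library is needed, and confirms the first two terms of (\ref{IStaylor}) to the expected $1/{z'}^3$ accuracy.

```python
import math

# Color factors for N_c = 3, as in the text: C_f = 4/3 and lambda = -1/(2 N_c C_f) = -1/8
NC = 3.0
CF = (NC**2 - 1.0) / (2.0 * NC)
LAM = -1.0 / (2.0 * NC * CF)

def hyp2f1_b1(a, x, nmax=4000):
    """2F1(a, 1; 1 + a; x) = sum_n [a / (a + n)] x^n, valid for |x| < 1."""
    s, xn = 0.0, 1.0
    for n in range(nmax):
        s += a / (a + n) * xn
        xn *= x
    return s

def i_s_dimless(zp):
    """I_S divided by its prefactor m sqrt(gamma/pi) alpha_s N_c / 2, as a function of z'."""
    f = hyp2f1_b1(-LAM / zp, (1.0 - zp) / (1.0 + zp))
    return (1.0 - 2.0 * zp / (1.0 + zp) * f) / (1.0 - zp)

def i_s_asym(zp):
    """First two terms of the large-z' expansion, eq. (IStaylor)."""
    return 1.0 / zp + (-1.0 + 2.0 * LAM * math.log(2.0)) / zp**2

zp = 50.0
print(i_s_dimless(zp), i_s_asym(zp))  # the two values agree up to O(1/z'^3)
```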
Hence, in the standard scheme there would be contributions to the potential anomalous dimensions only. The singular pieces in our scheme are displayed below \begin{displaymath} \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^1 S_0) \vert \Upsilon (1S) \right>\right\vert_{\varepsilon\rightarrow 0} \simeq -{1\over \varepsilon}\left({2\gamma^2\over \mu m}\right)^{-2\varepsilon}{1\over 24} c_{F}^2 N_c \alpha_{\rm s} (\mu_u)\left(C_f\alpha_{\rm s} (\mu_s) \right)^4 {\gamma^3\over \pi^2} \Bigg( 2+ \end{displaymath} \begin{displaymath} \left. +\lambda \left[ -7-4\log 2\right] +\lambda^2\left[ 4+8\log 2 + 4\log^2 2 + {\pi^2\over 3}\right] + \lambda^3\left[ -4\log^2 2-{\pi^2\over 3}-{3\over 2}\zeta (3)\right]\right) \end{displaymath} \begin{displaymath} \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 P_J) \vert \Upsilon (1S) \right>\right\vert_{\varepsilon\rightarrow 0} \simeq -(2J+1){1\over \varepsilon}\left({2\gamma^2\over \mu m}\right)^{-2\varepsilon}{4\over 27} C_f\alpha_{\rm s} (\mu_u)(C_f\alpha_{\rm s} (\mu_s) )^2 {\gamma^5\over \pi^2} \times \end{displaymath} \begin{displaymath} \times (2-\lambda )\left( -4 +\lambda\left[ {47\over 12}+5\log 2\right]+\lambda^2\left[ {5\over 6}-{2\pi^2\over 9}-{8\over 3}\log 2-{8\over 3}\log^2 2 \right]+\right. \end{displaymath} \begin{equation} \left.+\lambda^3\left[ -{7\over 12}+{\pi^2\over 9}-{5\over 3}\log 2 + {4\over 3}\log^2 2 +{3\over 4}\zeta (3)\right] \right) \end{equation} With these expressions we obtain the following estimates for the values of the matrix elements \begin{eqnarray} \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^1 S_0) \vert \Upsilon (1S) \right>\right|_{\mu=M} & \sim & 0.004\,{\rm GeV}^3\label{estomeS}\\ \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 P_0) \vert \Upsilon (1S) \right>\right|_{\mu=M} & \sim & 0.08\,{\rm GeV}^5\label{estomeP} \end{eqnarray} Remember that the above numbers are obtained in an $MS$ scheme from dimensionally regularized US loops only.
The value we assign to the $S$-wave matrix element is compatible with the recent (quenched) lattice determination (hybrid algorithm) \cite{Bodwin:2005gg}. \section{Merging the various contributions to the spectrum}\label{secmerg} Now that the long-elusive end-point region of the spectrum is well described, it is time to put together all the contributions to the spectrum and see if a good description of the data is achieved. As was already explained, the contributions to the spectrum can be split into direct ($^{dir}$) and fragmentation ($^{frag}$) contributions \begin{equation} \frac{d\Gamma}{dz}=\frac{d\Gamma^{dir}}{dz}+\frac{d\Gamma^{frag}}{dz} \end{equation} The fragmentation contributions are those in which the photon is emitted from the decay products of the heavy quark (final state light quarks); these contributions were first taken into account in \cite{Catani:1994iz} and further studied in \cite{Maltoni:1998nh}. The direct contributions are those in which the photon is emitted directly from the heavy quark itself. Although this direct-fragmentation splitting is correct at the order at which we are working, it should be refined at higher orders. We discuss each of these contributions in turn in the two following subsections. \subsection[Direct contributions: merging the central and upper end-point regions]{Direct contributions} The approximations required to calculate the QCD formula (\ref{QCDexpr}) are different in the lower end-point region ($z\rightarrow 0$), in the central region ($z\sim 0.5$) and in the upper end-point region ($z\rightarrow 1$). In the lower end-point region the emitted low energy photon can only produce transitions within the non-relativistic bound state without destroying it. Hence the direct low energy photon emission takes place in two steps: (i) the photon is emitted (dominantly by electric and magnetic dipole transitions) and (ii) the remaining (off-shell) bound state is annihilated into light hadrons.
This lower end-point contribution goes to zero, for $z\to 0$, as $z^3$, while the leading order NRQCD result goes to zero as $z$ (see \cite{Manohar:2003xv,Voloshin:2003hh} for a recent analysis of this lower end-point region in QED). As was already mentioned, at some point the direct photon emission is overtaken by the fragmentation contributions \cite{Catani:1994iz,Maltoni:1998nh}. In practice this happens at about $z\sim 0.4$, namely well before the $z^3$ behavior of the low energy direct photon emission can be observed, and hence we shall neglect the latter in the following. For $z$ away from the lower and upper end-points ($0$ and $1$, respectively), no further scale is introduced beyond those inherent to the non-relativistic system. The integration of the scale $m$ in the time ordered product of currents in (\ref{QCDexpr}) leads to local NRQCD operators with matching coefficients which depend on $m$ and $z$. We will summarize here the known results for the central region (we denote the direct contributions in the central region by $\Gamma_c$). At leading order one obtains \begin{equation} \label{LOrate} \frac1{\Gamma_0} \frac{d\Gamma^c _{\rm LO}}{dz} = \frac{2-z}{z} + \frac{z(1-z)}{(2-z)^2} + 2\frac{1-z}{z^2}\ln(1-z) - 2\frac{(1-z)^2}{(2-z)^3} \ln(1-z), \end{equation} where\footnote{\emph{Note added:} The $\left(\frac{2m}{M}\right)^2$ factor was missing in formula (4) of \cite{GarciaiTormo:2005ch} (and in a previous version of the thesis). This was just a typo, not affecting any of the subsequent results. We thank A.~Vairo for help in identifying it.} \begin{equation} \Gamma_0 = \frac{32}{27}\alpha\alpha_s^2e_Q^2 \frac{\langle V_Q (nS)\vert {\cal O}_1(^3S_1)\vert V_Q (nS)\rangle}{m^2}\left(\frac{2m}{M}\right)^2. \label{gamma0} \end{equation} The $\alpha_s$ correction to this rate was calculated numerically in ref.~\cite{Kramer:1999bf}.
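The limiting behaviors quoted above are easy to check numerically. The following sketch (an illustration added here, not part of the original analysis) transcribes the LO shape (\ref{LOrate}) and verifies that it vanishes linearly for $z\to 0$ and tends to $1$ (i.e. to $z$) for $z\to 1$:

```python
import math

def dGamma_LO(z):
    # (1/Gamma_0) dGamma^c_LO/dz, transcribed from the LO rate formula
    L = math.log(1.0 - z)
    return ((2.0 - z)/z + z*(1.0 - z)/(2.0 - z)**2
            + 2.0*(1.0 - z)/z**2 * L
            - 2.0*(1.0 - z)**2/(2.0 - z)**3 * L)

print(dGamma_LO(0.01))   # ~ 0.009: vanishes like z for z -> 0
print(dGamma_LO(0.5))    # ~ 0.441 in the central region
print(dGamma_LO(0.999))  # ~ 0.99: tends to 1 for z -> 1
```

The apparent $1/z$ singularities of the individual terms cancel between the first and third terms, which is why the spectrum is regular at small $z$.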
The expression corresponding to (\ref{gamma0}) in pNRQCD is obtained at lowest order in any of the possible regimes by just making the substitution \begin{eqnarray} \label{singletWF} \langle V_Q (nS) \vert {\cal O}_1(^3S_1) \vert V_Q (nS) \rangle &=& 2 N_c |\psi_{n0}({\bf 0})|^2, \end{eqnarray} where $\psi_{n0}({\bf 0})$ is the wave function at the origin. The final result coincides with the one of the early QCD calculations \cite{Brodsky:1977du,Koller:1978qg}. We will take the Coulomb form $\vert\psi_{10}({\bf 0})\vert^2=\gamma^3/\pi$ for the LO analysis of $\Upsilon (1S)$. The NLO contribution in the original NRQCD counting \cite{Bodwin:1994jh} is $v^2$ suppressed with respect to (\ref{LOrate}). It reads \begin{equation} \label{RelCo} \frac{d\Gamma^c_{\rm NLO}}{dz}=C_{\mathbf{1}}'\left(\phantom{}^3S_1\right)\frac{\langle V_Q (nS)\vert {\cal P}_1(^3S_1)\vert V_Q (nS)\rangle}{m^4} \end{equation} In the original NRQCD counting or in the weak coupling regime of pNRQCD the new matrix element above can be written in terms of the original one \cite{Gremm:1997dq}\footnote{In the strong coupling regime of pNRQCD an additional contribution appears \cite{Brambilla:2002nu}.} \begin{equation} \frac{\langle V_Q (nS)\vert {\cal P}_1(^3S_1)\vert V_Q (nS)\rangle}{m^4}=\left(\frac{M-2m}{m}\right)\frac{\langle V_Q (nS)\vert {\cal O}_1(^3S_1)\vert V_Q (nS)\rangle}{m^2}\left(1+\mathcal{O}\left(v^2\right)\right) \end{equation} The matching coefficient can be extracted from an early calculation \cite{Keung:1982jb} (see also \cite{Yusuf:1996av}).
It reads \begin{equation} C_{\mathbf{1}}'\left(\phantom{}^3S_1\right)=-\frac{16}{27}\alpha\alpha_s^2e_Q^2\left(F_B(z)+\frac{1}{2} F_W(z)\right) \end{equation} where ($\xi=1-z$) \[ F_B(z)\!=\!\frac{2\!-\!16\xi+10\xi^2-48\xi^3 -10\xi^4+64\xi^5-2\xi^6 +(1-3\xi+14\xi^2-106\xi^3+17\xi^4 -51\xi^5)\ln \xi}{2\,(1-\xi)^3 (1+\xi)^4} \] \[ F_W(z)=\frac{-26+14\xi-210\xi^2+134\xi^3+274\xi^4-150\xi^5-38\xi^6+2\xi^7}{3(1-\xi)^3 (1+\xi)^5}- \] \begin{equation} -\frac{(27+50\xi+257\xi^2-292\xi^3+205\xi^4-78\xi^5-41\xi^6)\ln \xi}{3(1-\xi)^3 (1+\xi)^5} \end{equation} The contributions of color octet operators start at order $v^4$. Furthermore, away from the upper end-point region, the lowest order color octet contribution identically vanishes \cite{Maltoni:1998nh}. Hence there is no $1/\alpha_{\rm s}$ enhancement in the central region and we can safely neglect these contributions here. If we use the counting $\alpha_{\rm s} (\mu_h)\sim v^2$, $\alpha_{\rm s}\left(\mu_s\right)\sim v$ (remember that $\mu_h\sim m$ and $\mu_s\sim mv$ are the hard and the soft scales respectively) for the $\Upsilon (1S)$, the complete result up to NLO (including $v^2$ suppressed contributions) can be written as \begin{equation} \frac{d\Gamma^c}{dz}=\frac{d\Gamma^{c}_{LO}}{dz}+\frac{d\Gamma^{c}_{NLO}}{dz}+\frac{d\Gamma^{c}_{LO,\alpha_{\rm s}}}{dz} \label{central} \end{equation} The first term consists of the expression (\ref{LOrate}) with the Coulomb wave function at the origin (\ref{singletWF}) including corrections up to $\mathcal{O}\left[\left(\alpha_{\rm s}\left(\mu_s\right)\right)^2\right]$ \cite{Melnikov:1998ug,Penin:1998kx}, the second term is given in (\ref{RelCo}), and the third term consists of the radiative $\mathcal{O}\left(\alpha_{\rm s} (\mu_h)\right)$ corrections to (\ref{LOrate}), which have been calculated numerically in \cite{Kramer:1999bf}.
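As a practical aside (an added illustration, not part of the original discussion), the relativistic-correction functions $F_B$ and $F_W$ above can be transcribed directly and are finite throughout $0<z<1$:

```python
import math

def F_B(z):
    xi = 1.0 - z
    poly = 2 - 16*xi + 10*xi**2 - 48*xi**3 - 10*xi**4 + 64*xi**5 - 2*xi**6
    logc = 1 - 3*xi + 14*xi**2 - 106*xi**3 + 17*xi**4 - 51*xi**5
    return (poly + logc*math.log(xi)) / (2*(1 - xi)**3*(1 + xi)**4)

def F_W(z):
    xi = 1.0 - z
    den = 3*(1 - xi)**3*(1 + xi)**5
    poly = (-26 + 14*xi - 210*xi**2 + 134*xi**3 + 274*xi**4
            - 150*xi**5 - 38*xi**6 + 2*xi**7)
    logc = (27 + 50*xi + 257*xi**2 - 292*xi**3 + 205*xi**4
            - 78*xi**5 - 41*xi**6)
    return poly/den - logc*math.log(xi)/den

print(F_B(0.5), F_W(0.5))  # ~ -0.54 and ~ 6.72 at the central point
```

Such a transcription is convenient when evaluating the NLO term (\ref{RelCo}) pointwise across the central region.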
Let us mention at this point that the $\mathcal{O}\left[\left(\alpha_{\rm s}\left(\mu_s\right)\right)^2\right]$ corrections to the wave function at the origin turn out to be as large as the leading order term. This will be important for the final comparison with data at the end of the section. Note that the standard NRQCD counting we use does not coincide with the usual counting of pNRQCD in weak coupling calculations, where $\alpha_{\rm s} (\mu_h) \sim \alpha_{\rm s} (\mu_s) \sim \alpha_{\rm s} (mv^2)$. The latter is necessary in order to get factorization scale independent results beyond NNLO for the spectrum and beyond NLO for creation and annihilation currents. However, for the $\Upsilon (1S)$ system (and the remaining heavy quarkonium states) the ultrasoft scale $mv^2$ is rather low, which suggests that perturbation theory should better be avoided at this scale \cite{Pineda:2001zq}. This leads us to the standard NRQCD counting. The factorization scale dependences that this counting induces can in principle be avoided using renormalization group techniques \cite{Luke:1999kz,Pineda:2001ra,Pineda:2001et,Pineda:2002bv,Hoang:2002yy}. In practice, however, only partial NNLL results exist for the creation and annihilation currents \cite{Hoang:2003ns,Penin:2004ay} (see \cite{Pineda:2003be} for the complete NLL results), which would fix the scale dependence of the wave function at the origin at ${\cal O} (\alpha_{\rm s}^2 (mv))$. We will not use them and will just set the factorization scale to $m$. The upper end-point region of the spectrum has been discussed in great detail in the previous section. As we have seen there, different approximations, with respect to the ones for the central region, are needed here. It is by no means obvious how the results for the central and for the upper end-point regions must be combined in order to get a reliable description of the whole spectrum.
When the results of the central region are used in the upper end-point region, one misses certain Sudakov and Coulomb resummations, which are necessary because the softer scales $M\sqrt{1-z}$ and $M(1-z)$ become relevant. Conversely, when results for the upper end-point region are used in the central region, one misses non-trivial functions of $z$, which are approximated by their end-point ($z\sim 1$) behavior. We will explain, in the remainder of this subsection, how to merge these two contributions. \subsubsection{Merging the central and upper end-point regions} One way to proceed with the merging is the following. If we assume that the expressions for the end-point contain the ones of the central region up to a certain order in $(1-z)$, we could just subtract from the expressions in the central region their behavior when $z\rightarrow 1$ at the desired order and add the expressions in the end-point region. Indeed, when $z\rightarrow 1$ this procedure would improve on the central region expressions up to a given order in $(1-z)$, and when $z$ belongs to the central region, it would reduce to the central region expressions up to higher orders in $\alpha_{\rm s}$. This method was used in ref. \cite{Fleming:2002sr} and in ref. \cite{GarciaiTormo:2004kb}. In ref. \cite{Fleming:2002sr} only color singlet contributions were considered and the end-point expressions trivially contained the central region expressions in the limit $z\rightarrow 1$. In ref. \cite{GarciaiTormo:2004kb} color octet contributions were included, which contain terms proportional to $(1-z)$.
Hence, the following formula was used \begin{equation} \frac{1}{\Gamma_0}\frac{d\Gamma^{dir}}{dz}=\frac{1}{\Gamma_0}\frac{d\Gamma_{LO}^{c}}{dz}+\left(\frac{1}{\Gamma_0}\frac{d\Gamma_{CS}^{e}}{dz}-z\right)+\left(\frac{1}{\Gamma_0}\frac{d\Gamma_{CO}^{e}}{dz}-z\left(4+2\log \left(1-z\right) \right) (1-z)\right) \label{mergingLO} \end{equation} Even though a remarkable description of data was achieved with this formula (upon using a suitable subtraction scheme, the {\it sub} scheme described in the previous subsection), this method suffers from the following shortcoming. The hypothesis that the expressions for the end-point contain the ones for the central region up to a given order in $(1-z)$ is in general not fulfilled. As we will see below, typically, they only contain part of the expressions for the central region. This is due to the fact that some $\alpha_{\rm s} (\mu_h)$ in the central region may soften as $\alpha_{\rm s} (M(1-z))$, others as $\alpha_{\rm s} (M\sqrt{1-z})$, and others may stay at $\alpha_{\rm s} (\mu_h)$ when approaching the end-point region. In a LO approximation at the end-point region, only the terms with the $\alpha_{\rm s}$ at low scales would be kept and the rest neglected, producing the above-mentioned mismatch. We shall not pursue this procedure any further. Let us look for an alternative. Recall first that the expressions we have obtained for the upper end-point region are non-trivial functions of $M(1-z)$, $M\sqrt{1-z}$, $m\alpha_{\rm s} (mv)$ and $m\alpha_{\rm s}^2 (mv)$, which involve $\alpha_{\rm s}$ at all these scales. They take into account both Sudakov and Coulomb resummations. When $z$ approaches the central region, we can expand them in $\alpha_{\rm s} (M\sqrt{1-z})$, $\alpha_{\rm s} (M(1-z))$ and the ratio $m\alpha_{\rm s} (mv)/M\sqrt{1-z}$. They should reduce to the form of the expressions for the central region, since we are just undoing the Sudakov and (part of) the Coulomb resummations.
Indeed, we obtain \begin{eqnarray} \frac{d\Gamma^{e}_{CS}}{dz} &\longrightarrow \displaystyle{\left.\frac{d\Gamma^{e}_{CS}}{dz}\right\vert_c}= & \Gamma_0z\left(1+\frac{\alpha_{\rm s}}{6\pi}\left(C_A\left(2\pi^2-17\right)+2n_f\right)\log (1-z) + \mathcal{O}(\alpha_{\rm s}^2) \right)\label{expands}\\ \frac{d\Gamma^{e}_{CO}}{dz} &\longrightarrow \displaystyle{\left.\frac{d\Gamma^{e}_{CO}}{dz}\right\vert_c} = & -z\alpha_{\rm s}^2\left(\frac{16M\alpha}{81m^4}\right)2\left|\psi_{10}\left(\bf{0}\right)\right|^2 \bigg( m\alpha_{\rm s}\sqrt{1-z} A +\nonumber\\ & & \left.+ M(1-z)\left(-1+\log\left(\frac{\mu_c^2}{M^2(1-z)^2}\right)\right)+\right.\nonumber\\ & & \left.+ M\frac{\alpha_{\rm s}}{2\pi}\left(-2C_A\left(\frac{1}{2}(1-z)\log^2(1-z)\left[\log\left(\frac{\mu_c^2}{M^2(1-z)^2}\right)-1\right]+\right.\right.\right.\nonumber\\ & & \left.+\int_z^1\!\!\!dx\frac{\log(x-z)}{x-z}f(x,z)\right)-\nonumber\\ & & \left.\left.-\left(\frac{23}{6}C_A-\frac{n_f}{3}\right)\left((1-z)\log(1-z)\left[\log\left(\frac{\mu_c^2}{M^2(1-z)^2}\right)-1\right]+\right.\right.\right.\nonumber\\ & & \left.\left.+\int_z^1\!\!\!dx\frac{1}{x-z}f(x,z)\right)\right)-\nonumber\\ & & \left.- \frac{\gamma^2}{m}2\left(\log\left(\frac{\mu_c^2}{M^2(1-z)^2}\right)+1\right)+ \mathcal{O}\left( m\alpha_{\rm s}^2, \alpha_{\rm s} \frac{\gamma^2}{m}, \frac{\gamma^4}{m^3}\right) \right)\label{expando} \end{eqnarray} where \begin{equation} f(x,z)=\left(1-x\right)\log\left(\frac{\mu_c^2}{M^2(1-x)^2}\right)-(1-z)\log \left(\frac{\mu_c^2}{M^2(1-z)^2}\right)+x-z \end{equation} and $A=-N_c-136C_f(2-\lambda)/9$ (in an $MS$ scheme; it becomes $A=-64C_f(2-\lambda)/9$ in the $sub$ scheme). In the next paragraph we explain how to obtain these expressions for the shape functions in the central region.
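The propagator expansions used below all rest on the standard resolvent identity: for any splitting $h_o=h_s+\delta V$ of the Hamiltonian and any constant shift $\Delta$,
\begin{displaymath}
\frac{1}{h_o+\Delta}=\frac{1}{h_s+\Delta}-\frac{1}{h_s+\Delta}\,\delta V\,\frac{1}{h_o+\Delta} ,
\end{displaymath}
which can be iterated to generate the perturbative series in $\delta V$. (This sketch of the underlying identity is added here for clarity; it is exact, as one checks by multiplying both sides by $h_o+\Delta$ from the right.)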
First consider the $S$-wave octet shape function \begin{equation}\label{Swave} I_{S}({k_+\over 2} +x):=\int d^3 {\bf x} \psi_{10}( {\bf x})\left( 1-{\frac{k_+}{2}+x \over h_o -E_1 +{k_+ \over 2}+x}\right)_{{\bf x},{\bf 0}} \end{equation} where $h_o={\bf p}^2/m+V_o$ and $V_o=\alpha_{\rm s}/(2N_c\vert {\bf r}\vert )$. When $z$ approaches the central region, $k_+\sim M(1-z) \gg -E_1$ and the largest three-momentum scale is $M\sqrt{1-z}\gg\gamma$, the typical three-momentum in the bound state. Therefore we can treat the Coulomb potential in (\ref{Swave}) as a perturbation when it is dominated by this scale. It is convenient to proceed in two steps. First we write $h_o=h_s + (V_o-V_s)$, where $h_s={\bf p}^2/m+V_s$, $V_s=-\alpha_{\rm s} C_f/\vert {\bf r}\vert$, and expand in $V_o-V_s$. This allows us to set $h_s -E_1$ to zero in the left-most propagator and makes explicit the cancellation between the first term in the series and the first term in (\ref{Swave}). It also makes explicit that the leading term will be proportional to $\alpha_{\rm s} (M\sqrt{1-z})$. Second, we expand $V_s$ in $h_s={\bf p}^2/m+V_s$. In addition, since $M\sqrt{1-z} \gg\gamma$, the wave function can be expanded about the origin. Only the first term in each expansion is relevant in order to get (\ref{expando}). Consider next the $P$-wave shape functions \begin{equation}\label{Pwave} I_{P}({k_+\over 2} +x):=-\frac{1}{3}\int d^3 {\bf x} {\bf x}^i \psi_{10}( {\bf x})\left(\left(1- {\frac{k_+}{2}+x \over h_o -E_1 +{k_+ \over 2}+x} \right)\mbox{\boldmath $\nabla$}^i \right)_{{\bf x},{\bf 0}} \end{equation} In order to proceed analogously to the $S$-wave case, we first have to move the ${\bf x}^i$ away from the wave function \begin{displaymath} I_{P}({k_+\over 2} +x)=\psi_{10}({\bf 0})+\frac{\frac{k_+}{2}+x}{3}\int d^3 {\bf x}\psi_{10}( {\bf x})\left\{\frac{1}{h_o -E_1 +{k_+ \over 2}+x}{\bf x} \mbox{\boldmath $\nabla$}+\right. \end{displaymath} \begin{equation}\label{Pwave2} \left.
+{\frac{1}{h_o -E_1 +{k_+ \over 2}+x}}\left(-\frac{2\mbox{\boldmath $\nabla$}^i}{m}\right){\frac{1}{h_o -E_1 +{k_+ \over 2}+x}}\mbox{\boldmath $\nabla$}^i \right\} \end{equation} For the left-most propagators we can now proceed as before, namely expanding $V_o-V_s$. Note that the leading contribution in this expansion of the second term above exactly cancels against the first term. Of the remaining contributions of the second term only the next-to-leading one ($\mathcal{O}(\alpha_{\rm s})$) is relevant to obtain (\ref{expando}). Consider next the leading order contribution in this expansion of the last term. It reads \begin{displaymath} -\frac{2}{3m}\!\int d^3 {\bf x}\psi_{10}( {\bf x})\left\{\mbox{\boldmath $\nabla$}^i \frac{1}{h_o -E_1 +{k_+ \over 2}+x}\mbox{\boldmath $\nabla$}^i\right\}= -\frac{2}{3m}\int d^3 {\bf x}\psi_{10}( {\bf x})\left\{\!\left( \frac{1}{h_o -E_1 +{k_+ \over 2}+x}\mbox{\boldmath $\nabla$}^i-\right.\right. \end{displaymath} \begin{equation}\label{Pwave3} \left.\left.-\frac{1}{h_o -E_1 +{k_+ \over 2}+x}\mbox{\boldmath $\nabla$}^iV_o\frac{1}{h_o -E_1 +{k_+ \over 2}+x}\right)\mbox{\boldmath $\nabla$}^i \right\} \end{equation} Now we proceed as before with the left-most propagators, namely expanding $V_o-V_s$. The leading order contribution of the first term above produces the relativistic correction $\mathcal{O}(v^2)$ of (\ref{expando}). The next-to-leading contribution of this term and the leading order one of the second term are $\mathcal{O}(\alpha_{\rm s})$ and also relevant to (\ref{expando}). The next-to-leading order contribution of the last term in (\ref{Pwave2}) in the $V_o-V_s$ expansion of the left-most propagator is also $\mathcal{O}(\alpha_{\rm s} )$ and relevant to (\ref{expando}). Returning now to equations (\ref{expands})-(\ref{expando}), we see that the color singlet contribution reproduces the full LO expression for the central region in the limit $z\rightarrow 1$. 
The color octet shape functions $S_{P1}$ and $S_{P2}$ give contributions to the relativistic corrections (\ref{RelCo}), and $S_{P2}$ to terms proportional to $(1-z)$ in the limit $z\rightarrow 1$ of (\ref{LOrate}) as well. We have checked that, in the $z\rightarrow 1$ limit, both the $(1-z)\ln (1-z)$ of (\ref{LOrate}) and the $\ln (1-z)$ of the relativistic correction (\ref{RelCo}) are correctly reproduced if $\mu_c \sim M\sqrt{1-z}$, as they should be. All the color octet shape functions contribute to the $\mathcal{O}(\alpha_{\rm s} (\mu_h))$ correction in the first line of (\ref{expando}). There are additional $\mathcal{O}(\alpha_{\rm s} (\mu_h))$ contributions coming from the expansion of the (Sudakov) resummed matching coefficients of the color singlet contribution and of the $S_{P2}$ color octet shape function. The $\alpha_{\rm s}\log(1-z)$ in (\ref{expands}) reproduces the logarithm in ${d\Gamma^{c}}_{LO,\alpha_{\rm s}}/{dz}$. We propose the following formula \begin{equation} \frac{1}{\Gamma_0}\frac{d\Gamma^{dir}}{dz}=\frac{1}{\Gamma_0}\frac{d\Gamma^{c}}{dz}+\left(\frac{1}{\Gamma_0 }\frac{d\Gamma_{CS}^{e}}{dz}-\left.{\frac{1}{\Gamma_0 }\frac{d\Gamma_{CS}^{e}}{dz}}\right\vert_c\right)+\left(\frac{1}{\Gamma_0}\frac{d\Gamma_{CO}^{e}}{dz}-\left.{\frac{1}{\Gamma_0 }\frac{d\Gamma_{CO}^{e}}{dz}}\right\vert_c\right) \label{mergingNLO} \end{equation} This formula reduces to the NRQCD expression in the central region. When we approach the upper end-point region, the second term in each of the parentheses is expected to cancel the corresponding terms in the $z\to1$ limit of the expression for the central region, up to higher order terms (in the end-point region counting). Thus, we are left with the resummed expressions for the end-point (up to higher order terms). There are of course other possibilities for the merging.
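Before turning to those, the mechanics of (\ref{mergingNLO}) can be illustrated with a simple numerical toy model (the functions below are illustrative stand-ins, not the actual pNRQCD expressions): a fixed-order "central" result keeping one logarithm $L=\log(1-z)$, a "resummed" end-point result, and the latter's fixed-order expansion. The merged curve reproduces the central result in the central region and the resummed result near the end-point:

```python
import math

a = 0.1  # stand-in for an alpha_s-type coupling (toy value)

def h(z):          # toy non-trivial z-dependence of the central-region result
    return z*(2.0 - z)   # h(1) = 1, so h(z) -> z near z = 1 up to O(1-z)

def central(z):    # fixed-order result: one log kept
    return h(z)*(1.0 + a*math.log(1.0 - z))

def endpoint(z):   # "resummed" end-point result
    return z*math.exp(a*math.log(1.0 - z))

def endpoint_c(z): # its expansion in the central region
    return z*(1.0 + a*math.log(1.0 - z))

def merged(z):     # analogue of the merging formula: central + (endpoint - expansion)
    return central(z) + (endpoint(z) - endpoint_c(z))

print(abs(merged(0.5) - central(0.5)))    # small: higher order in a
print(abs(merged(0.99) - endpoint(0.99))) # small: suppressed by (1-z)
```

The two printed differences are the analogues of the "influence" of each region on the other mentioned below: parametrically subleading, but not identically zero.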
For instance, one may choose a $z_1$ below which one trusts the calculation for the central region and a $z_2$ above which one trusts the end-point region calculation, and use some sort of interpolation between $z_1$ and $z_2$ (see for instance \cite{Lin:2004eu}). This would have the advantage of keeping the right approximation below $z_1$ and beyond $z_2$ unpolluted, at the expense of introducing further theoretical ambiguities due to the choice of $z_1$ and $z_2$, and, more importantly, due to the choice of the interpolation between $z_1$ and $z_2$. We believe that our formula (\ref{mergingNLO}) is superior because it does not introduce the above mentioned theoretical ambiguities. The price to be paid is that the expressions from the central region have an influence on the end-point region and vice-versa. This influence can always be chosen to be parametrically subleading, but large numerical factors may make it noticeable in some cases, as we shall see below. \subsubsection{Merging at LO} If we wish to use only the LO expressions for the central region, we should take (\ref{expands}) and (\ref{expando}) at LO, namely \begin{equation} \left. { \frac{1}{\Gamma_0 }\frac{d\Gamma_{CS}^{e}}{dz}}\right\vert_c = z \quad ,\quad \left. {\frac{1}{\Gamma_0 }\frac{d\Gamma_{CO}^{e}}{dz}}\right\vert_c= z\left(2-4\log \left(\frac{\mu_c}{M(1-z)}\right) \right) (1-z) \end{equation} and substitute them in (\ref{mergingNLO}). Unexpectedly, the results obtained with this formula in the central region deviate considerably from those obtained with formula (\ref{LOrate}) (see fig. \ref{figcompLO}). This can be traced back to the fact that the $\alpha_{\rm s}\sqrt{1-z}$ corrections in (\ref{expando}) are enhanced by large numerical factors, which indicates that the merging should better be done including $\alpha_{\rm s} (\mu_h)$ corrections in the central region, as we discuss next.
Alternatively, we may change our subtraction scheme in order to (partially) get rid of these contributions. With the new subtraction scheme ($sub$), described in the preceding section, the situation improves, although it does not become fully satisfactory (see fig. \ref{figcompLO}). This is due to the fact that some $\alpha_{\rm s}\sqrt{1-z}$ terms remain, which do not seem to be associated with the freedom of choosing a particular subtraction scheme. In spite of this, the description of the data turns out to be extremely good. In figure \ref{figdibLO} we plot, using the $sub$ scheme, the merging at LO (solid red line) and also, for comparison, equation (\ref{mergingLO}) (blue dashed line). We have convoluted the theoretical curves with the experimental efficiency, and the overall normalization is taken as a free parameter. \begin{figure} \centering \includegraphics[width=12.5cm]{compLO} \caption[Merging at LO]{Merging at LO. The solid red line is the NRQCD expression (\ref{LOrate}). The dot-dashed curves are obtained using an $MS$ scheme: the pink (light) curve is the end-point contribution (\ref{endp}) and the black (dark) curve is the LO merging. The dashed curves are obtained using the $sub$ scheme: the green (light) curve is the end-point contribution (\ref{endp}) and the blue (dark) curve is the LO merging.}\label{figcompLO} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{dibLO} \caption[Direct contribution to the spectrum: LO merging]{Direct contribution to the spectrum. The solid red line corresponds to the LO merging and the blue dashed line corresponds to equation (\ref{mergingLO}). The points are the CLEO data \cite{Nemati:1996xy}.}\label{figdibLO} \end{figure} \subsubsection{Merging at NLO} If we wish to use the NLO expressions for the central region (\ref{central}), we should take all the terms displayed in (\ref{expands})--(\ref{expando}) and substitute them in (\ref{mergingNLO}).
Unlike in the LO case, for values of $z$ in the central region the curve obtained from (\ref{mergingNLO}) now approaches smoothly the expressions for the central region (\ref{central}), as it should. This is so whether or not we include the $\alpha_{\rm s}^2 (\mu_s)$ corrections to the wave function at the origin in $d\Gamma^c_{LO}/dz$, as we in principle should (see figs. \ref{figcompNLOFO} and \ref{figcompNLOFO2}). However, since the above corrections are very large, the behavior of the curve for $z\rightarrow 1$ strongly depends on whether we include them or not (see again figs. \ref{figcompNLOFO} and \ref{figcompNLOFO2}). We believe that the two possibilities are legitimate. If one interprets the large $\alpha_{\rm s}^2 (\mu_s)$ corrections as a sign that the asymptotic series starts exploding, one should better stay at LO (or include only the $\alpha_{\rm s} (\mu_s)$ corrections). However, if one believes that the large $\alpha_{\rm s}^2 (\mu_s)$ corrections are an accident and that the $\alpha_{\rm s}^3 (\mu_s)$ ones (see \cite{Beneke:2005hg,Penin:2005eu} for partial results) will again be small, one should use these $\alpha_{\rm s}^2 (\mu_s)$ corrections. We consider below the two cases. If we stay at LO (or include only the $\alpha_{\rm s} (\mu_s)$ corrections) for the wave function at the origin, the curve we obtain for $z\rightarrow 1$ differs considerably from the expressions for the end-point region (\ref{endp}) (see fig. \ref{figcompNLOFO}). This can be traced back to the $\alpha_{\rm s}\sqrt{1-z}$ term in (\ref{expando}) again. This term is parametrically suppressed in the end-point region, but, since it is multiplied by a large numerical factor, its contribution turns out to be overwhelming. This term might (largely) cancel out against higher order contributions in the end-point region, in particular against certain parts of the NLO expressions for the color singlet contributions, which are unknown at the moment.
If we use the wave function at the origin with the $\alpha_{\rm s}^2 (\mu_s)$ corrections included, the curves we obtain for $z\rightarrow 1$ become much closer to the expressions for the end-point region (\ref{endp}) (see fig \ref{figcompNLOFO2}). Hence, a good description of data is obtained with no need of additional subtractions\footnote{One might be worried about the big difference that the corrections to the wave function at the origin introduce in the result. In that sense let us mention that when we analyze the electromagnetic decay width of $\Upsilon(1S)$ ($\Gamma(\Upsilon(1S)\to e^+\; e^-)$; the formulas needed to compute the width can be found, for instance, in \cite{Vairo:2003gh}), with the same power counting we have employed here, the result we obtain is $5.24\cdot10^{-7}\,$GeV if we do not include the $\alpha_{\rm s}^2 (\mu_s)$ corrections and $1.17\cdot10^{-6}\,$GeV if we do include them. This is to be compared with the experimental result $1.32\cdot10^{-6}\,$GeV \cite{Eidelman:2004wy}.}, as shown in figure \ref{figdibNLO} (as usual, experimental efficiency has been taken into account and the overall normalization is a free parameter). This is good news, because this final curve incorporates all the terms that are supposed to be there according to the power counting. \begin{figure} \centering \includegraphics[width=12.5cm]{compNLOFO} \caption[Merging at NLO (wave function at the origin at LO)]{Merging at NLO (using an $MS$ scheme and the wave function at the origin at LO).
The solid red line is the NRQCD result (\ref{central}), the blue (light) dashed curve is the end-point contribution (\ref{endp}) and the black (dark) dashed curve is the NLO merging.}\label{figcompNLOFO} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{compNLOFO2} \caption[Merging at NLO (wave function at the origin with $\alpha_{\rm s}^2 (\mu_s)$ corrections)]{Merging at NLO (using an $MS$ scheme and the wave function at the origin with the $\alpha_{\rm s}^2 (\mu_s)$ corrections included). The solid red line is the NRQCD result (\ref{central}), the blue (light) dashed curve is the end-point contribution (\ref{endp}) and the black (dark) dashed curve is the NLO merging.}\label{figcompNLOFO2} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{dibNLO} \caption[Direct contribution to the spectrum: NLO merging]{Direct contribution to the spectrum using the NLO merging (in an $MS$ scheme and the wave function at the origin with the $\alpha_{\rm s}^2 (\mu_s)$ corrections included). The points are the CLEO data \cite{Nemati:1996xy}.}\label{figdibNLO} \end{figure} \subsection{Fragmentation contributions}\label{subsecfragcon} The fragmentation contributions can be written as \begin{equation} \frac{d\Gamma^{frag}}{dz}=\sum_{a = q,\bar q, g} \int_z^1\frac{dx}{x}C_a(x)D_{a\gamma}\left(\frac{z}{x},M\right), \end{equation} where $C_a$ represents the partonic kernels and $D_{a\gamma}$ represents the fragmentation functions. 
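The structure of this convolution can be checked numerically. In particular, a kernel concentrated at $x=1$, mimicking the $\delta(1-x)$ kernels that arise at lowest order in the color octet channels below, simply returns the fragmentation function evaluated near $z$. A sketch with toy inputs (all functions here are illustrative assumptions, not the actual kernels or the LEP parametrization):

```python
import math

def convolve(C, D, z, n=20000):
    # midpoint-rule evaluation of  int_z^1 dx/x  C(x) D(z/x)
    h = (1.0 - z)/n
    return sum(C(z + (i + 0.5)*h) * D(z/(z + (i + 0.5)*h)) / (z + (i + 0.5)*h)
               for i in range(n)) * h

def D_toy(y):
    # toy stand-in for a fragmentation function
    return (1.0 + (1.0 - y)**2)/y

# a narrow Gaussian mimicking a delta(1 - x) kernel, centered just below x = 1
x0, w = 0.99, 0.002
def C_delta(x):
    return math.exp(-0.5*((x - x0)/w)**2)/(w*math.sqrt(2.0*math.pi))

z = 0.5
print(convolve(C_delta, D_toy, z))  # these two agree: the delta-like kernel
print(D_toy(z/x0)/x0)               # just picks out D evaluated at z/x0
```

For a genuine $\delta(1-x)$ kernel the convolution collapses exactly to the fragmentation function at $z$, which is why the lowest-order octet fragmentation contributions directly trace the shape of $D_{a\gamma}(z,M)$.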
The partonic kernels can again be expanded in powers of $v$ \cite{Maltoni:1998nh} \begin{equation} C_a=\sum_{\mathcal{Q}}C_a[\mathcal{Q}] \end{equation} The leading order term in $v$ is the color singlet rate to produce three gluons \[ C_g\left[{\cal O}_1(^3S_1)\right]=\frac{40}{81}\alpha_s^3 \left(\frac{2-z}{z} + \frac{z(1-z)}{(2-z)^2} + 2\frac{1-z}{z^2}\ln(1-z) - 2\frac{(1-z)^2}{(2-z)^3} \ln(1-z)\right)\cdot \] \begin{equation}\label{fragsing} \cdot\frac{\langle V_Q (nS)\vert {\cal O}_1(^3S_1)\vert V_Q (nS)\rangle}{m^2} \end{equation} The color octet contributions start at order $v^4$ but have a $\frac{1}{\alpha_s}$ enhancement with respect to (\ref{fragsing}) \[ C_g\left[{\cal O}_8(^1S_0)\right]=\frac{5\pi\alpha_s^2}{3} \delta(1-z)\frac{\langle V_Q (nS)\vert{\cal O}_8(^1S_0)\vert V_Q (nS)\rangle}{m^2} \] \[ C_g\left[{\cal O}_8(^3P_J)\right]=\frac{35\pi\alpha_s^2}{3} \delta(1-z)\frac{\langle V_Q (nS)\vert{\cal O}_8(^3P_0)\vert V_Q (nS)\rangle}{m^4} \] \begin{equation} C_q\left[{\cal O}_8(^3S_1)\right]=\frac{\pi\alpha_s^2}{3} \delta(1-z)\frac{\langle V_Q (nS)\vert{\cal O}_8(^3S_1)\vert V_Q (nS)\rangle}{m^2}\label{octetf} \end{equation} Then the color singlet fragmentation contribution is of order $\alpha_s^3D_{g\to\gamma}$ and the color octet fragmentation contributions are of order $v^4\alpha_s^2D_{g\to\gamma}$ ($\phantom{}^1S_0$ and $\phantom{}^3P_J$ contributions) or $v^4\alpha_s^2D_{q\to\gamma}$ ($\phantom{}^3S_1$ contribution). We can use, as before, the counting $v^2\sim\alpha_s$ to compare the relative importance of the different contributions, together with the existing models for the fragmentation functions \cite{Aurenche:1992yc}. The latter tell us that $D_{q\to\gamma}$ is much larger than $D_{g\to\gamma}$. This causes the $\mathcal{O}(v^4\alpha_s^2D_{q\to\gamma})$ $\phantom{}^3S_1$ octet contribution to dominate over the singlet $\mathcal{O}(\alpha_s^3D_{g\to\gamma})$ and the octet $\mathcal{O}(v^4\alpha_s^2D_{g\to\gamma})$ contributions.
In fact, $\alpha_sD_{q\to\gamma}$ is still larger than $D_{g\to\gamma}$, so we will include in our plots the $\alpha_s$ corrections to the color octet contributions (\ref{octetf}) proportional to $D_{q\to\gamma}$, which have been calculated in \cite{Maltoni:1998nh}. In addition, the coefficients for the octet $\phantom{}^3P_J$ contributions have large numerical factors, causing these terms to be more important than the color singlet contributions. Let us finally notice that the $\alpha_s$ corrections to the singlet rate will produce terms of $\mathcal{O}(\alpha_s^4D_{q\to\gamma})$, which from the considerations above are expected to be as important as the octet $\phantom{}^3S_1$ contribution. These $\alpha_s$ corrections to the singlet rate are unknown, which results in a large theoretical uncertainty in the fragmentation contributions. For the quark fragmentation function we will use the LEP measurement \cite{Buskulic:1995au} \begin{equation} D_{q\gamma}(z,\mu) = \frac{e_q^2\alpha(\mu)}{2\pi} \left[P_{q\gamma}(z) \ln\left(\frac{\mu^2}{\mu_0^2(1-z)^2}\right) + C\right] \end{equation} where \begin{equation} C = -1-\ln\left(\frac{M_Z^2}{2\mu_0^2}\right)\quad ;\quad P_{q\gamma}(z) = \frac{1+ (1-z)^2}{z}\quad ;\quad\mu_0=0.14^{+0.43}_{-0.12} {\rm\ GeV} \end{equation} and for the gluon fragmentation function the model of \cite{Owens:1986mp}. These are the same choices as in \cite{Fleming:2002sr}. However, for the $\mathcal{O}_8 (^1 S_0)$ and $\mathcal{O}_8 (^3 P_0)$ matrix elements we will use our estimates (\ref{estomeS})-(\ref{estomeP}). Notice that we do not assume that a suitable combination of these matrix elements is small, as was done in \cite{Fleming:2002sr}. The $\mathcal{O}_8 (^3 S_1)$ matrix element can be extracted from the lattice determination of ref.~\cite{Bodwin:2005gg}.
Using the wave function at the origin with the $\alpha_{\rm s}^2 (\mu_s )$ corrections included, we obtain (we use the numbers of the hybrid algorithm) \begin{equation} \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 S_1) \vert \Upsilon (1S) \right>\right|_{\mu=M} \sim 0.00026\,GeV^3 \label{lattice} \end{equation} which differs from the estimate using NRQCD $v$ scaling by about two orders of magnitude: \begin{equation} \left.\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 S_1) \vert \Upsilon (1S) \right>\right|_{\mu=M}\sim v^4\left.\left< \Upsilon (1S) \vert \mathcal{O}_1 (^3 S_1) \vert \Upsilon (1S) \right>\right|_{\mu=M}\sim 0.02\,GeV^3 \label{vscaling} \end{equation} (we have taken $v^2\sim0.08$), which was used in ref. \cite{Fleming:2002sr}. The description of data turns out to be better with the estimate (\ref{vscaling}). However, this is not very significant, since, as mentioned before, unknown NLO contributions are expected to be sizable. In the $z\rightarrow 0$ region soft radiation becomes dominant and the fragmentation contributions completely dominate the spectrum, in contrast with the direct contributions \cite{Catani:1994iz}. Note that, since the fragmentation contributions have an associated bremsstrahlung spectrum, they cannot be safely integrated down to $z=0$; that is, $\int_0^1dz\frac{d\Gamma^{frag}}{dz}$ is not an infrared safe observable. In any case we are not interested in regularizing such a divergence because the resolution of the detector works as a physical cut-off. \subsection{The complete photon spectrum} We can now compare the theoretical expressions with data in the full range of $z$. First note that formula (\ref{mergingNLO}) requires $d\Gamma^{e}/dz$ for all values of $z$. The color octet shape functions, however, were calculated in the end-point region under the assumption that $M\sqrt{1-z}\sim \gamma$, and the scale of the $\alpha_{\rm s}$ was set accordingly.
When $z$ approaches the central region $M\sqrt{1-z}\gg \gamma$, and hence some $\alpha_{\rm s}$ will depend on the scale $M\sqrt{1-z}$ and others on $\gamma$ (we leave aside the global $\alpha_{\rm s} (\mu_u)$). In order to decide which scale to set for each $\alpha_{\rm s}$, let us take a closer look at formula (\ref{expando}). We see that all terms have a common factor $\gamma^3$. This indicates that one should extract $\gamma^3$ factors in the shape functions, the $\alpha_{\rm s}$ of which should stay at the scale $\mu_s$. This is achieved by extracting $\gamma^{3/2}$ in $I_S$ and $I_P$. If we set the remaining $\alpha_{\rm s}$ to the scale $\mu_p=\sqrt{m(M(1-z)/2-E_1)}$, we will reproduce (\ref{expando}) when approaching the central region, except for the relativistic correction, the $\alpha_{\rm s}$ of which will be at the scale $\mu_p$ instead of at the right scale $\mu_s$. We correct for this by making the following substitution \begin{equation} S_{P1}\longrightarrow S_{P1}+{\alpha_{\rm s} (\mu_u)\over 6\pi N_c}{\gamma^3\over\pi}\left(\log {k_+^2\over \mu_c^2}-1\right)\left( {4\gamma^2\over 3m}- {m C_f^2\alpha_{\rm s}^2 (\mu_p)\over 3} \right) \end{equation} Notice that the replacements above are irrelevant as far as the end-point region is concerned, but important for the shape functions to actually (numerically) approach the expressions (\ref{expando}) in the central region, as they should. The comparison with experiment is shown in figures \ref{figtotal} and \ref{figtotalnou}. These plots are obtained by using the merging formula (\ref{mergingNLO}) at NLO, with the $\alpha_{\rm s}^2 (\mu_s)$ corrections to the wave function at the origin included for the direct contributions, plus the fragmentation contributions of subsection \ref{subsecfragcon}, including the first $\alpha_{\rm s}$ corrections in $C_q$ and using the estimate (\ref{vscaling}) for the $\left< \Upsilon (1S) \vert\mathcal{O}_8 (^3 S_1) \vert \Upsilon (1S) \right>$ matrix element.
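As a numerical cross-check of the interpolating scale $\mu_p$ introduced above, the minimal sketch below uses illustrative inputs of our own (they are not fixed by the text): $m=4.8$ GeV, $M=9.46$ GeV and a Coulombic relation $E_1=-\gamma^2/m$ with $\gamma=1.3$ GeV. It shows that $\mu_p$ collapses to the soft scale $\gamma$ at the endpoint $z\to 1$, while in the central region it is governed by the harder scale $M(1-z)/2$:

```python
import math

# Illustrative inputs (ours, not from the text): b-quark mass, Upsilon(1S) mass,
# typical bound-state momentum gamma, and a Coulombic binding energy E_1 = -gamma^2/m.
m, M, gamma = 4.8, 9.46, 1.3          # GeV
E1 = -gamma ** 2 / m                  # GeV

def mu_p(z):
    """Intermediate scale mu_p = sqrt(m*(M*(1-z)/2 - E_1)) for the remaining alpha_s."""
    return math.sqrt(m * (M * (1.0 - z) / 2.0 - E1))

# At the endpoint the scale reduces to sqrt(m*|E_1|) = gamma (soft scale, up to rounding);
# in the central region it is driven by M*(1-z)/2 and is much harder.
print(mu_p(1.0), mu_p(0.5))
```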
The error band is obtained by replacing $\mu_{c}$ by $\sqrt{2^{\pm 1}}\mu_{c}$. Errors associated with the large $\alpha_{\rm s}^2 (\mu_s)$ corrections to the wave function at the origin, with possible large NLO color singlet contributions in the end-point region and with the fragmentation contributions are difficult to estimate and not displayed (see the corresponding sections in the text for discussions). The remaining error sources are negligible. In figure \ref{figtotal}, as usual, the experimental efficiency has been taken into account and the overall normalization is a free parameter. Figure \ref{figtotalnou} compares our results with the new (and very precise) data from CLEO \cite{Besson:2005jv}. This plot takes into account the experimental efficiency and also the resolution of the experiment (the overall normalization is a free parameter). We can see from figures \ref{figtotal} and \ref{figtotalnou} that, when we put together the available theoretical results, an excellent description of the data is achieved for the whole part of the spectrum where the experimental errors are reasonably small (recall that the error bars shown in the plots only take into account the statistical errors, and not the systematic ones \cite{Nemati:1996xy,Besson:2005jv}). Clearly then, our results indicate that the introduction of a finite gluon mass \cite{Field:2001iu} is unnecessary. One should keep in mind, however, that in order to have the theoretical errors under control, higher order calculations are necessary both in the direct (end-point) and in the fragmentation contributions. Let us mention that the inclusion of color octet contributions in the end-point region, together with the merging with the central region expression explained here, may be useful for production processes like inclusive $J/\psi$ production in $e^+e^-$ machines \cite{Fleming:2003gt,Lin:2004eu,Hagiwara:2004pf}. \begin{figure} \centering \includegraphics[width=14.5cm]{total} \caption[Photon spectrum]{Photon spectrum.
The points are the CLEO data \cite{Nemati:1996xy}. The solid lines are the NLO merging plus the fragmentation contributions: the red (light) line and the blue (dark) line are obtained by using (\ref{vscaling}) and (\ref{lattice}) for $\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 S_1) \vert \Upsilon (1S) \right>$, respectively. The grey shaded region is obtained by replacing $\mu_{c}$ by $\sqrt{2^{\pm 1}}\mu_{c}$. The green shaded region on the right shows the zone where the calculation of the shape functions is not reliable (see subsection \ref{subseccalshpfct}). The pink dashed line is the result in \cite{Fleming:2002sr}, where only color singlet contributions were included in the direct contributions.}\label{figtotal} \end{figure} \begin{figure} \centering \includegraphics[width=14.5cm]{compnouCLEO} \caption[Photon spectrum (most recent data)]{Photon spectrum. The points are the new CLEO data \cite{Besson:2005jv}. The red solid line is the NLO merging plus the fragmentation contributions, using (\ref{vscaling}) for $\left< \Upsilon (1S) \vert \mathcal{O}_8 (^3 S_1) \vert \Upsilon (1S) \right>$. The grey shaded region is obtained by replacing $\mu_{c}$ by $\sqrt{2^{\pm 1}}\mu_{c}$. The green shaded region on the right shows the zone where the calculation of the shape functions is not reliable (see subsection \ref{subseccalshpfct}).}\label{figtotalnou} \end{figure} \section{Identifying the nature of heavy quarkonium} As we have just seen in the previous section, the photon spectrum in the radiative decay of the $\Upsilon (1S)$ can be well explained theoretically. This fact, together with the recent appearance of measurements of the photon spectra for the $\Upsilon (2S)$ and $\Upsilon (3S)$ states \cite{Besson:2005jv}, motivates us to try to use these radiative decays to uncover the properties of the decaying heavy quarkonia. As has already been explained, the interplay of $\Lambda_{\rm QCD}$ with the scales $mv$ and $mv^2$ dictates the degrees of freedom of pNRQCD.
Two regimes have been identified: the weak coupling regime, $\Lambda_{\rm QCD} \lesssim mv^2$, and the strong coupling regime, $mv^2 \ll \Lambda_{\rm QCD} \lesssim mv$. Due to the fact that none of the scales involved in these hierarchies is directly accessible experimentally, given a heavy quarkonium state it is not obvious to which regime it must be assigned. Only the $\Upsilon (1S)$ appears to belong to the weak coupling regime, since weak coupling calculations in $\alpha_{\rm s} (mv)$ converge reasonably well. The fact that the spectrum of excitations is not Coulombic suggests that the higher excitations are not in the weak coupling regime, which can be understood from the fact that $\mathcal{O}(\Lambda_{\rm QCD})$ effects in this regime are proportional to a high power of the principal quantum number \cite{Voloshin:1978hc,Leutwyler:1980tn}. Nevertheless, there have been claims in the literature, based on renormalon approaches, that the $\Upsilon (2S)$ and even the $\Upsilon (3S)$ can also be understood within the weak coupling regime \cite{Brambilla:2001fw,Brambilla:2001qk,Recksiegel:2003fm}. We will see that the photon spectra in semi-inclusive radiative decays of heavy quarkonia to light hadrons provide important information which may eventually settle this question. We start by writing the radiative decay rate for a state with generic principal quantum number $n$. Again we split the decay rate into direct and fragmentation contributions \begin{equation} \frac{d\Gamma_n}{dz}=\frac{d\Gamma^{dir}_n}{dz}+\frac{d\Gamma^{frag}_n}{dz} \end{equation} where $z=2E_\gamma /M_n$ ($M_n$ is the mass of the heavy quarkonium state). We shall now restrict our discussion to $z$ in the central region, in which no further scale is introduced beyond those inherent to the non-relativistic system.
We write the spectrum in the following compact form \begin{equation}\label{dir} \frac{d\Gamma^{dir}_n}{dz}=\sum_{\mathcal{Q}}C[\mathcal{Q}](z)\frac{\langle \mathcal{Q}\rangle_n}{m^{\delta_{\mathcal{Q}}}} \end{equation} \begin{equation}\label{frag} \frac{d\Gamma_n^{frag}}{dz} = \sum_{a = q,\bar q, g} \int_z^1\frac{dx}{x}\sum_{\mathcal{Q}}C_a[\mathcal{Q}](x)\frac{\langle \mathcal{Q}\rangle_n}{m^{\delta_{\mathcal{Q}}}}D_{a\gamma}\left(\frac{z}{x},m\right)\ :=\sum_{\mathcal{Q}}f_\mathcal{Q}(z)\frac{\langle \mathcal{Q}\rangle_n}{m^{\delta_{\mathcal{Q}}}} \end{equation} where $\mathcal{Q}$ is a local NRQCD operator, $\delta_{\mathcal{Q}}$ is an integer which follows from the dimension of $\mathcal{Q}$ and $\langle \mathcal{Q}\rangle_n:=\langle V_Q (nS)\vert \mathcal{Q}\vert V_Q (nS)\rangle$. It is important for what follows that the $f_\mathcal{Q}(z)$ are universal and do not depend on the specific bound state $n$. Due to the behavior of the fragmentation functions above, the fragmentation contributions are expected to dominate the spectrum in the lower $z$ region and to be negligible in the upper $z$ one. In the central region, on which we will focus, they can always be treated as a perturbation, as we will show below. Let us first consider the weak coupling regime, for which the original NRQCD velocity counting holds \cite{Bodwin:1994jh} (this is the situation described in the previous sections of this chapter; we recall some of the arguments here for ease of reading). The direct contributions are given at leading order by the $\mathcal{O}_1\left(\phantom{}^3S_1\right)$ operator; the next-to-leading order (NLO) ($v^2$ suppressed) term is given by the $\mathcal{P}_1\left(\phantom{}^3S_1\right)$ operator. The contributions of color octet operators start at order $v^4$ and are not $\alpha_{\rm s}^{-1}(m)$ enhanced in the central region.
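The convolution structure of eq. (\ref{frag}) can be made concrete with a minimal numerical sketch. The kernel below is a hypothetical flat stand-in for a short-distance coefficient $C_a[\mathcal{Q}](x)$ (it is not a computed coefficient), while $D_{q\gamma}$ uses the LEP parametrization quoted in the previous section with a fixed $\alpha$ of our choosing; only the convolution mechanics are meant to be illustrative:

```python
import math

# Illustrative inputs (ours): b-quark charge squared, a fixed alpha, and the LEP
# parametrization of D_{q->gamma} quoted in the text (mu_0 = 0.14 GeV central value).
eq2, alpha = (1.0 / 3.0) ** 2, 1.0 / 137.0
mu0, MZ, m = 0.14, 91.19, 4.8          # GeV
C = -1.0 - math.log(MZ ** 2 / (2.0 * mu0 ** 2))

def D_qgamma(z, mu):
    """LEP parametrization of the quark-to-photon fragmentation function."""
    Pqg = (1.0 + (1.0 - z) ** 2) / z
    return eq2 * alpha / (2.0 * math.pi) * (
        Pqg * math.log(mu ** 2 / (mu0 ** 2 * (1.0 - z) ** 2)) + C)

def kernel(x):
    return 1.0                          # hypothetical flat stand-in for C_a[Q](x)

def frag(z, n=4000):
    """Midpoint-rule evaluation of int_z^1 dx/x kernel(x) D_qgamma(z/x, m)."""
    h = (1.0 - z) / n
    return h * sum(kernel(x) / x * D_qgamma(z / x, m)
                   for x in (z + (i + 0.5) * h for i in range(n)))

# The bremsstrahlung-like fragmentation function makes the result fall with z:
print(frag(0.3), frag(0.5))
```

The midpoint rule avoids sampling the integrable logarithmic singularity of $D_{q\gamma}$ at the upper endpoint of the convolution.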
The fragmentation contributions are more difficult to organize, since the importance of each term is not fixed by the velocity counting alone but also involves the size of the fragmentation functions. It will be enough for us to restrict ourselves to the LO operators both in the singlet and octet sectors. The LO color singlet operator is $\mathcal{O}_1\left(\phantom{}^3S_1\right)$ as well. The leading color octet contributions are $v^4$ suppressed but do have an $\alpha_{\rm s}^{-1}(m)\sim 1/v^2$ enhancement with respect to the singlet ones here. They involve $\mathcal{O}_8\left(\phantom{}^3S_1\right)$, $\mathcal{O}_8\left(\phantom{}^1S_0\right)$ and $\mathcal{O}_8\left(\phantom{}^3P_0\right)$. Then, in the central region, the NRQCD expression (at the order described above) reads \[ \frac{d\Gamma_n}{dz} =\left(C_1\left[\phantom{}^3S_1\right](z)+f_{\mathcal{O}_1\left(\phantom{}^3S_1\right)}(z)\right)\frac{\langle \mathcal{O}_1(^3S_1)\rangle_n}{m^2}+C_1'\left[\phantom{}^3S_1\right](z)\frac{\langle\mathcal{P}_1(^3S_1)\rangle_n}{m^4}+f_{\mathcal{O}_8\left(\phantom{}^3S_1\right)}(z)\cdot \] \begin{equation}\label{width} \cdot\frac{\langle\mathcal{O}_8(^3S_1)\rangle_n}{m^2}+f_{\mathcal{O}_8\left(\phantom{}^1S_0\right)}(z)\frac{\langle\mathcal{O}_8(^1S_0)\rangle_n}{m^2}+f_{\mathcal{O}_8\left(\phantom{}^3P_J\right)}(z)\frac{\langle\mathcal{O}_8(^3P_0)\rangle_n}{m^4} \end{equation} If we are in the strong coupling regime and use the so-called conservative counting, the color octet matrix elements are suppressed by $v^2$ rather than by $v^4$. Hence we should include the color octet operators in the direct contributions as well. In practice, this only amounts to the addition of the $C_8$'s to the $f_{\mathcal{O}_8}$'s.
Furthermore, $f_{\mathcal{O}_1\left(\phantom{}^3S_1\right)}(z)$, $f_{\mathcal{O}_8\left(\phantom{}^1S_0\right)}(z)$ and $f_{\mathcal{O}_8\left(\phantom{}^3P_J\right)}(z)$ are proportional to $D_{g\gamma}\left(x,m\right)$, which is small (in the central region) according to the widely accepted model \cite{Owens:1986mp}. $f_{\mathcal{O}_8\left(\phantom{}^3S_1\right)}(z)$ is proportional to $D_{q\gamma}\left(x,m\right)$, which has been measured at LEP \cite{Buskulic:1995au}. It turns out that numerically $f_{\mathcal{O}_8\left(\phantom{}^3S_1\right)}(z)\sim C_8[\phantom{}^3S_1](z)$ in the central region. Therefore, all the LO fragmentation contributions can be treated as a perturbation. Consequently, the ratio of decay widths of two states with different principal quantum numbers is given at NLO by \[ \frac{\displaystyle\frac{d\Gamma_n}{dz}}{\displaystyle\frac{d\Gamma_r}{dz}}=\frac{\langle\mathcal{O}_1(^3S_1)\rangle_n}{\langle\mathcal{O}_1(^3S_1)\rangle_r}\left(\!\!1\!+\!\frac{C_1'\left[\phantom{}^3S_1\right](z)}{C_1\left[\phantom{}^3S_1\right](z)}\frac{\mathcal{R}_{\mathcal{P}_1(\phantom{}^3S_1)}^{nr}}{m^2}+\frac{f_{\mathcal{O}_8\left(\phantom{}^3S_1\right)}(z)}{C_1\left[\phantom{}^3S_1\right](z)}\!\mathcal{R}_{\mathcal{O}_8(\phantom{}^3S_1)}^{nr}\!\!+\!\!\frac{f_{\mathcal{O}_8\left(\phantom{}^1S_0\right)}(z)}{C_1\left[\phantom{}^3S_1\right](z)}\!\mathcal{R}_{\mathcal{O}_8(\phantom{}^1S_0)}^{nr}+\right. 
\] \begin{equation}\label{nrqcd} \left.+\frac{f_{\mathcal{O}_8\left(\phantom{}^3P_J\right)}(z)}{C_1\left[\phantom{}^3S_1\right](z)}\frac{\mathcal{R}_{\mathcal{O}_8(\phantom{}^3P_0)}^{nr}}{m^2}\right) \end{equation} where \begin{equation} \mathcal{R}_{\mathcal{Q}}^{nr}=\left(\frac{\langle\mathcal{Q}\rangle_n}{\langle\mathcal{O}_1(^3S_1)\rangle_n}-\frac{\langle\mathcal{Q}\rangle_r}{\langle\mathcal{O}_1(^3S_1)\rangle_r}\right) \label{r} \end{equation} Note that the $\alpha_{\rm s} (m)$ corrections to the matching coefficients give rise to negligible next-to-next-to-leading order (NNLO) contributions in the ratios above. No further simplifications can be achieved at NLO without explicit assumptions on the counting. If the two states $n$ and $r$ are in the weak coupling regime, then $\mathcal{R}_{\mathcal{P}_1(\phantom{}^3S_1)}^{nr}=m(E_{n}-E_{r})$ \cite{Gremm:1997dq}. In addition, the ratio of matrix elements in front of the rhs of (\ref{nrqcd}) can be expressed in terms of the measured leptonic decay widths \begin{equation}\label{qucO1} \frac{\langle\mathcal{O}_1(^3S_1)\rangle_n}{\langle\mathcal{O}_1(^3S_1)\rangle_r}=\frac{\Gamma\left(V_Q(nS)\to e^+e^-\right)}{\Gamma\left(V_Q(rS)\to e^+e^-\right)}\left[\!1\!-\!\frac{\mathrm{Im}g_{ee}\left(\phantom{}^3S_1\right)}{\mathrm{Im}f_{ee}\left(\phantom{}^3S_1\right)}\frac{E_{n}-E_{r}}{m}\right] \end{equation} $\mathrm{Im}g_{ee}$ and $\mathrm{Im}f_{ee}$ are short distance matching coefficients which may be found in \cite{Bodwin:1994jh}. Eq.(\ref{qucO1}) and the expression for $\mathcal{R}_{\mathcal{P}_1(\phantom{}^3S_1)}^{nr}$ also hold if both $n$ and $r$ are in the strong coupling regime \cite{Brambilla:2002nu,Brambilla:2001xy,Brambilla:2003mu}, but neither of them does if one of the states is in the weak coupling regime and the other in the strong coupling regime. In the latter case the NRQCD expression depends on five unknown parameters, which depend on $n$ and $r$.
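For orientation, eq. (\ref{qucO1}) can be evaluated numerically. All inputs below are illustrative choices of ours: PDG-style leptonic widths and masses for $\Upsilon(1S)$ and $\Upsilon(2S)$, the mass difference as a proxy for $E_n-E_r$, and an assumed LO value $\mathrm{Im}\,g_{ee}/\mathrm{Im}\,f_{ee}=-4/3$; the exact coefficients should be taken from the matching calculation cited in the text:

```python
# Illustrative inputs (ours): leptonic widths in keV, masses in GeV, and an
# ASSUMED LO ratio Im g_ee / Im f_ee = -4/3 (to be replaced by the exact
# matching coefficients from the literature).
gamma_ee = {"1S": 1.340, "2S": 0.612}      # keV
mass = {"1S": 9.4603, "2S": 10.0233}       # GeV
m_b = 4.8                                  # GeV
g_over_f = -4.0 / 3.0

def O1_ratio(n, r):
    """<O_1(3S1)>_n / <O_1(3S1)>_r from the leptonic-width formula,
    with E_n - E_r approximated by the mass difference M_n - M_r."""
    dE = mass[n] - mass[r]
    return gamma_ee[n] / gamma_ee[r] * (1.0 - g_over_f * dE / m_b)

# With these inputs the 2S matrix element comes out roughly half the 1S one.
print(O1_ratio("2S", "1S"))
```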
If both $n$ and $r$ are in the strong coupling regime further simplifications occur. The matrix elements of the color octet NRQCD operators are proportional to the wave function at the origin times universal (bound state independent) non-perturbative parameters \cite{Brambilla:2002nu,Brambilla:2001xy,Brambilla:2003mu} (see appendix \ref{appME}). Since $\langle\mathcal{O}_1(^3S_1)\rangle_n$ is also proportional to the wave function at the origin, the latter cancels in the ratios involved in (\ref{r}). Hence, $\mathcal{R}_{\mathcal{Q}}^{nr}=0$ for the octet operators appearing in (\ref{nrqcd}). Then the pNRQCD expression for the ratio of decay widths reads \begin{equation}\label{scsc} \frac{\displaystyle\frac{d\Gamma_n}{dz}}{\displaystyle\frac{d\Gamma_r}{dz}} =\frac{\Gamma\left(V_Q(nS)\to e^+e^-\right)}{\Gamma\left(V_Q(rS)\to e^+e^-\right)}\left[\!1\!-\!\frac{\mathrm{Im}g_{ee}\left(\phantom{}^3S_1\right)}{\mathrm{Im}f_{ee}\left(\phantom{}^3S_1\right)}\frac{E_{n}-E_{r}}{m}\right]\left(1+\frac{C_1'\left[\phantom{}^3S_1\right](z)}{C_1\left[\phantom{}^3S_1\right](z)}\frac{1}{m}\left(E_{n}-E_{r}\right)\right) \end{equation} Therefore, in the strong coupling regime we can predict, using pNRQCD, the ratio of photon spectra at NLO (in the $v^2$, $(\Lambda_{\rm QCD} /m )^2$ \cite{Brambilla:2002nu,Brambilla:2001xy} and $\alpha_{\rm s} (\sqrt{m\Lambda_{\rm QCD}})\times \sqrt{ \Lambda_{\rm QCD} /m} $ \cite{Brambilla:2003mu} expansions). On the other hand, if one of the states, say $n$, is in the weak coupling regime, $\mathcal{R}_{\mathcal{Q}}^{nr}$ will have a non-trivial dependence on the principal quantum number $n$ and hence it is not expected to vanish. Therefore, expression (\ref{scsc}) provides invaluable help for identifying the nature of heavy quarkonium states.
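The cancellation that forces $\mathcal{R}_{\mathcal{Q}}^{nr}=0$ can be checked in a few lines. In the strong coupling regime each matrix element factorizes as a universal constant times the wave function at the origin squared, $\langle\mathcal{Q}\rangle_n = c_{\mathcal{Q}}\,|\psi_n(0)|^2$; the constants and wave-function values below are arbitrary placeholders, since only the factorized structure matters:

```python
from fractions import Fraction as F

# Arbitrary placeholder values: universal constants c_Q and per-state |psi_n(0)|^2.
c_O1, c_Q = F(3, 2), F(1, 7)
psi2 = {"n": F(9, 4), "r": F(5, 11)}       # |psi(0)|^2 for two different states

def me(op_const, state):
    """<Q>_state = c_Q * |psi_state(0)|^2 (strong-coupling factorization)."""
    return op_const * psi2[state]

# R = <Q>_n/<O1>_n - <Q>_r/<O1>_r : the wave functions cancel in each ratio,
# leaving c_Q/c_O1 - c_Q/c_O1 = 0 exactly.
R = me(c_Q, "n") / me(c_O1, "n") - me(c_Q, "r") / me(c_O1, "r")
print(R)   # -> 0
```

Exact rational arithmetic makes the cancellation identically zero rather than zero up to rounding.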
If the two states are in the strong coupling regime, the ratio must follow the formula (\ref{scsc}); on the other hand, if (at least) one of the states is in the weak coupling regime the ratio is expected to deviate from (\ref{scsc}), and should follow the general formula (\ref{nrqcd}). We illustrate the expected deviations in the plots (dashed curves) by assigning to the unknown $\cal R$s in (\ref{nrqcd}) the value $v^4$ ($v^2 \sim 0.1$), according to the original NRQCD velocity scaling. We will use the recent data from CLEO \cite{Besson:2005jv} (which includes a very precise measurement of the $\Upsilon (1S)$ photon spectrum, as well as measurements of the $\Upsilon (2S)$ and $\Upsilon (3S)$ photon spectra, see figures \ref{fignou1s}, \ref{fignou2s} and \ref{fignou3s}) to check our predictions. \begin{figure} \centering \includegraphics[width=12.5cm]{nou1s} \caption[CLEO data for $\Upsilon (1S)$ photon spectrum]{Background-subtracted CLEO data for the $\Upsilon (1S)$ photon spectrum \cite{Besson:2005jv}.} \label{fignou1s} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{nou2s} \caption[CLEO data for $\Upsilon (2S)$ photon spectrum]{Background-subtracted CLEO data for the $\Upsilon (2S)$ photon spectrum \cite{Besson:2005jv}.} \label{fignou2s} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{nou3s} \caption[CLEO data for $\Upsilon (3S)$ photon spectrum]{Background-subtracted CLEO data for the $\Upsilon (3S)$ photon spectrum \cite{Besson:2005jv}.} \label{fignou3s} \end{figure} In order to do the comparison we use the following procedure. First we efficiency-correct the data (using the efficiencies modeled by CLEO). Then we perform the ratios $1S/2S$, $1S/3S$ and $2S/3S$ (we add the errors of the different spectra in quadrature).
Now we want to discern which of these ratios follow eq.(\ref{scsc}) and which ones deviate from it; to do so, we fit eq.(\ref{scsc}) to each of the ratios leaving only the overall normalization as a free parameter (the experimental normalization is unknown). The fits are done in the central region, that is $z\in[0.4,0.7]$, where eq.(\ref{scsc}) holds. A good (bad) $\chi^2$ obtained from the fit will indicate that the ratio does (does not) follow the shape dictated by eq.(\ref{scsc}). In figures \ref{fig1s2s}, \ref{fig1s3s} and \ref{fig2s3s} we plot the ratios $1S/2S$, $1S/3S$ and $2S/3S$ (respectively) together with eq.(\ref{scsc}) and the estimate of (\ref{nrqcd}) mentioned above (overall normalizations fitted for all curves; the number of d.o.f. is then $45$). The figures show the spectra for $z\in[0.2,1]$ for easier visualization, but recall that we are focusing on the central $z$ region, denoted by the unshaded region in the plots. The theoretical errors due to higher orders in $\alpha_{\rm s} (m)$ and in the expansions below (\ref{scsc}) are negligible with respect to the experimental ones. For the $1S/2S$ ratio we obtain a $\chi^2/\mathrm{d.o.f.}\vert_{1S/2S}\sim 1.2$, which corresponds to an $18\%$ CL. The errors for the $\Upsilon (3S)$ photon spectrum are considerably larger than those of the other two states (see figures \ref{fignou1s}, \ref{fignou2s} and \ref{fignou3s}); this causes the ratios involving the $3S$ state to be less conclusive than the $1S/2S$ one. In any case we obtain $\chi^2/\mathrm{d.o.f.}\vert_{1S/3S}\sim 0.9$, which corresponds to a $68\%$ CL, and $\chi^2/\mathrm{d.o.f.}\vert_{2S/3S}\sim 0.75$, which corresponds to an $89\%$ CL. Hence, the data disfavors $\Upsilon (1S)$ in the strong coupling regime but is consistent with $\Upsilon (2S)$ and $\Upsilon (3S)$ in it.
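The one-parameter fit just described admits a closed-form solution: minimizing $\chi^2=\sum_i (d_i-N\,m_i)^2/\sigma_i^2$ over the normalization $N$ gives $N=\sum_i d_i m_i/\sigma_i^2 \,/\, \sum_i m_i^2/\sigma_i^2$. A minimal sketch with synthetic stand-in data (the real inputs are the efficiency-corrected CLEO ratios and the shape of eq. (\ref{scsc})):

```python
def fit_normalization(data, sigma, model):
    """Best-fit overall normalization N minimizing chi^2 = sum((d - N*m)/sigma)^2
    (closed form), together with the resulting chi^2."""
    num = sum(d * m / s ** 2 for d, m, s in zip(data, model, sigma))
    den = sum(m ** 2 / s ** 2 for m, s in zip(model, sigma))
    N = num / den
    chi2 = sum(((d - N * m) / s) ** 2 for d, m, s in zip(data, model, sigma))
    return N, chi2

# Synthetic stand-in: data exactly 2.5x a model shape -> recovers N ~ 2.5, chi2 ~ 0.
model = [1.0, 0.8, 0.6, 0.5, 0.4]
data = [2.5 * m for m in model]
sigma = [0.1] * len(data)
N, chi2 = fit_normalization(data, sigma, model)
print(N, chi2)
```

With the real data one would then convert the resulting $\chi^2$ (45 d.o.f. here) into the confidence levels quoted above.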
\begin{figure} \centering \includegraphics[width=12.5cm]{1s2s01} \caption[Ratio of the $\Upsilon (1S)$ and $\Upsilon (2S)$ photon spectra]{Ratio of the $\Upsilon (1S)$ and $\Upsilon (2S)$ photon spectra. The points are obtained from the CLEO data \cite{Besson:2005jv}. The solid line is eq.(\ref{scsc}) (overall normalization fitted), the dashed line is the estimate of (\ref{nrqcd}) (see text). Agreement between the solid curve and the points in the central (unshaded) region would indicate that the two states are in the strong coupling regime.} \label{fig1s2s} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{1s3s01} \caption[Ratio of the $\Upsilon (1S)$ and $\Upsilon (3S)$ photon spectra]{Same as fig. \ref{fig1s2s} for $\Upsilon (1S)$ and $\Upsilon (3S)$.} \label{fig1s3s} \end{figure} \begin{figure} \centering \includegraphics[width=12.5cm]{2s3s01} \caption[Ratio of the $\Upsilon (2S)$ and $\Upsilon (3S)$ photon spectra]{Same as fig. \ref{fig1s2s} for $\Upsilon (2S)$ and $\Upsilon (3S)$.} \label{fig2s3s} \end{figure} In summary, using pNRQCD we have worked out a model-independent formula which involves the photon spectra of two heavy quarkonium states and holds at NLO in the strong coupling regime. When this formula is applied to the Upsilon system, current data indicate that the $\Upsilon (2S)$ and the $\Upsilon (3S)$ are consistent with being states in the strong coupling regime\footnote{The $\Upsilon (2S)$ also seems difficult to accommodate in a weak coupling picture in the analysis of the radiative transition $\Upsilon (2S)\to\eta_b\gamma$ \cite{Brambilla:2005zw}.}, whereas the $\Upsilon (1S)$ in this regime is disfavored. A decrease of the current experimental errors for the $\Upsilon (2S)$ and, especially, for the $\Upsilon (3S)$ is necessary to confirm this indication.
This is important, since it would validate the use of the formulas in \cite{Brambilla:2002nu,Brambilla:2001xy,Brambilla:2003mu}, and others which may be derived in the future under the same assumptions, not only for the $\Upsilon (2S)$ and $\Upsilon (3S)$ but also for the $\chi_b (2P)$s, since their masses lie in between, as well as for their pseudoscalar partners. \chapter{Summary in Catalan}\label{cat} To ease the reading, and an eventual comparison with other references written in English, we include in table \ref{tabtrad} the translation used for some of the terms appearing in this thesis. \section{General introduction} This thesis deals with the study of the structure and interactions of the fundamental constituents of matter. We reached the end of the twentieth century describing the most fundamental known properties of matter in terms of quantum field theories (as far as the electromagnetic, strong and weak nuclear interactions are concerned) and of general relativity (as far as the gravitational interaction is concerned). The Standard Model (SM) of the fundamental interactions in nature comprises quantum field theories that describe the electromagnetic interactions (the so-called Quantum ElectroDynamics, QED) and the weak nuclear interactions (which are unified in the so-called electroweak theory), and that describe the strong interactions (the so-called Quantum ChromoDynamics, QCD). This SM is complemented by the classical (non-quantum) theory of gravitation, general relativity. All the experiments carried out at particle accelerators (in order to study the basic constituents of matter) are, to this day, consistent with this theoretical framework. This has led us to begin the twenty-first century expecting the next generation of experiments to uncover the physics that may lie beyond these theories, which are by now well established.
Great hopes are placed on the large hadron accelerator, the \emph{Large Hadron Collider} (LHC), currently being built at CERN. This machine is scheduled to start operating in 2007. The expectation is that the LHC will open the way for us to new physical phenomena not observed so far. This can be achieved in two ways. One possibility is that the LHC discovers the Higgs particle (the only particle of the SM that has not yet been observed) and that this triggers the discovery of new particles beyond the SM. The other possibility is that the LHC shows that there is no such Higgs particle; this would call for a theoretical framework completely new and different from the present one (to explain the fundamental interactions of nature)\footnote{We will try not to worry too much about the possibility that the LHC discovers the Higgs, closes the SM and shows that there are no new physics effects at any energy scale that we will be able to reach with particle accelerators. Although this is possible, it is obviously not at all desirable.}. The possible extensions of the SM have already been studied widely and in great detail. The expectation is that all these effects will become manifest in this new generation of experiments. Needless to say, experiments based on particle accelerators are not the only option we have in order to discover effects associated with new physics. Another great opportunity (which has also been widely studied) is to observe the sky and the information that reaches us from it (highly energetic particles, cosmic radiation backgrounds...). But along this road in search of the next most fundamental theory known so far, we do not want to lose the ability to use this theory to make precise predictions for a wide range of physical processes, and we also want to be able to understand (in an unambiguous way) how the preceding theory can be obtained from the new one.
Obviously, the dream of every physicist is to find a unified description of the four known fundamental interactions of matter; but not at the price of having a theory which can explain everything but which is so complicated that, in fact, it explains nothing. We will close these paragraphs, which serve as a preface to the thesis, with a little joke. According to what we have said here, the title of the talk that would mark the end of (theoretical) physics is not \emph{M-theory: a unification of all the interactions in nature}, but rather \emph{How to obtain the metabolism of a cow from M-theory}. \subsection{Effective Theories} In this thesis we will focus on the study of systems involving the strong interaction sector of the SM. The part of the SM that describes the strong interactions is, as mentioned before, Quantum ChromoDynamics. QCD is a quantum field theory based on the non-abelian group $SU(3)$ and describes the interactions between quarks and gluons. Its Lagrangian is extremely simple and is given by \begin{equation} \mathcal{L}_{QCD}=\sum_{i=1}^{N_f}\bar{q}_i\left(iD\!\!\!\!\slash-m_i\right)q_i-\frac{1}{4}G^{\mu\nu\, a}G_{\mu\nu}^a \end{equation} In this equation $q_i$ are the quark fields, $igG_{\mu\nu}=[D_{\mu},D_{\nu}]$, with $D_{\mu}=\partial_{\mu}+igA_{\mu}$, $A_{\mu}$ are the gluon fields and $N_f$ is the total number of quark flavors (types). QCD exhibits the properties of asymptotic freedom and confinement. The strong coupling constant becomes large at low energies and tends to zero at high energies. Thus, at high energies quarks and gluons behave as free particles, while at low energies they always appear confined inside hadrons (in color-singlet combinations). QCD develops an intrinsic scale, $\Lambda_{QCD}$, at low energies; this scale gives the main contribution to the mass of most hadrons.
$\Lambda_{QCD}$ can be interpreted in several ways, but it is basically the energy scale at which the strong coupling constant becomes of order 1 (and perturbation theory in $\alpha_{\rm s}$ is no longer reliable). One can think of it as a scale of the order of the proton mass. The presence of this intrinsic scale and the closely related fact that the spectrum of the theory consists of color-singlet hadronic states (and not of quarks and gluons) make direct calculations from QCD extremely complicated, if not impossible, for many physical systems of interest. The techniques known as effective theories (ETs) will help us in this task. As a general rule, the study in quantum field theory of any process that involves more than one relevant physical scale is complicated. The calculations (and the integrals) that we encounter can become very involved if more than one scale enters them. The idea is then to build a new theory (the effective theory), derived from the fundamental one, in such a way that it only involves the degrees of freedom relevant for the region we are interested in. The general idea underlying ET techniques is simply the following: in order to study the physics of a given energy region we do not need to know the dynamics of the other regions in detail. This is, of course, a well-known and widely accepted fact. For instance, everybody understands that in order to describe a chemical reaction one does not need to know the quantum electrodynamical interaction between photons and electrons; instead, a model of the atom consisting of a nucleus with electrons orbiting around it is more convenient. And in the same way, one does not need to use this atomic model in order to describe a macroscopic biological process. The implementation of this well-known idea in the framework of quantum field theory is what goes under the generic name of \emph{Effective Theories}.
As already mentioned, these techniques become especially useful in the study of processes involving the strong interactions. In order to build an ET one must follow (broadly speaking) the following general steps. First, it is necessary to identify the degrees of freedom that are relevant for the problem we are interested in. Then one must make use of the symmetries present in the problem and, finally, take advantage of any hierarchy of scales that may exist. It is important to stress that we are not building a model for the process we want to study. On the contrary, the ET is built in such a way that it is equivalent to the fundamental theory in the region where it is valid; we are obtaining the desired results from a well-controlled expansion of our fundamental theory. More concretely, in this thesis we will focus on the study of systems involving the so-called heavy quarks. As is well known, there are six quark flavors (types) in QCD. Three of them have masses below the scale $\Lambda_{\rm QCD}$ and are called \emph{light}, while the other three have masses above this scale $\Lambda_{\rm QCD}$ and are called \emph{heavy}. In what follows we describe systems with heavy quarks and the effective theories that can be built for them. \subsection{Heavy quark systems and quarkonium} What the ETs for systems with heavy quarks do is take advantage of this large scale, the mass, and build an expansion around the limit of infinitely massive quarks. The simplest systems one can have involving heavy quarks are those composed of a heavy quark and a light (anti-)quark. The appropriate ET to describe this kind of systems is called \emph{Heavy Quark Effective Theory} (HQET).
This theory is nowadays, together with chiral perturbation theory (which describes the low-energy interactions of pions and kaons) and the Fermi theory of the weak interactions (which describes weak decays at energies below the $W$-boson mass), a widely used example to show how ETs work in a realistic case. Very briefly, the relevant physical scales for this system are the mass $m$ of the heavy quark and $\Lambda_{\rm QCD}$. The ET is therefore built as an expansion in $\Lambda_{\rm QCD}/m$. The momentum of the heavy quark is decomposed according to \begin{equation} p=mv+k \end{equation} where $v$ is the velocity of the hadron (which is basically the velocity of the heavy quark) and $k$ is a residual momentum of order $\Lambda_{\rm QCD}$. The dependence on the scale $m$ is extracted from the fields of the ET according to \begin{equation} Q(x)=e^{-im_Qv\cdot x}\tilde{Q}_v(x)=e^{-im_Qv\cdot x}\left[h_v(x)+H_v(x)\right] \end{equation} and a theory for the soft fluctuations around the heavy quark mass is built. The HQET Lagrangian at leading order is given by \begin{equation} \mathcal{L}_{HQET}=\bar{h}_viv\cdot Dh_v \end{equation} This Lagrangian exhibits flavor and spin symmetries, which can be exploited to do phenomenology. The systems on which this thesis will focus (although not exclusively) are those known as heavy quarkonium. Heavy quarkonium is a bound state composed of a heavy quark and a heavy antiquark. We can thus have \emph{charmonium} ($c\bar{c}$) and \emph{bottomonium} ($b\bar{b}$) systems. The heaviest of all quarks, the top quark, decays through the weak interactions before it can form bound states; nevertheless, the production of $t$-$\bar{t}$ pairs near the production threshold (hence, in a non-relativistic regime) can be studied with the same techniques.
The physical scales relevant for heavy-quarkonium systems are the heavy-quark mass $m$, the typical three-momentum of the bound state $mv$ ($v$ is the typical relative velocity of the quark-antiquark pair in the bound state) and the typical kinetic energy $mv^2$, in addition to the intrinsic scale of QCD, $\Lambda_{\rm QCD}$, which is always present. The simultaneous presence of all these scales indicates that heavy-quarkonium systems probe all the energy ranges of QCD, from the perturbative high-energy regions to the non-perturbative low-energy ones. It is therefore a good system in which to study the interplay between perturbative and non-perturbative effects in QCD, and to improve our knowledge of QCD in general. To achieve this goal we will construct EFTs suited to the description of this system. Using the fact that the mass $m$ is much larger than any other energy scale in the problem, we arrive at an EFT known as \emph{Non-Relativistic QCD} (NRQCD). In this theory, which describes the dynamics of quark-antiquark pairs at energies much smaller than their masses, the heavy quarks are represented by non-relativistic two-component spinors. Moreover, gluons and light quarks with four-momenta at the scale $m$ are integrated out of the theory and no longer appear in it. What we have achieved with the construction of this theory is to factorize, in a systematic way, the effects coming from the scale $m$ from the remaining effects originating at the other scales of the problem. NRQCD provides a rigorous theoretical framework in which to study heavy-quarkonium decay, production and spectroscopy. 
The leading-order Lagrangian is given by \begin{equation} \mathcal{L}_{NRQCD}= \psi^{\dagger} \left( i D_0 + {1\over 2 m} {\bf D}^2 \right)\psi + \chi^{\dagger} \left( i D_0 - {1\over 2 m} {\bf D}^2 \right)\chi \end{equation} where $\psi$ is the field that annihilates the heavy quark and $\chi$ the field that creates the heavy antiquark. Subleading terms in the $1/m$ expansion can be derived. At first it may seem surprising that decay processes can be studied within NRQCD: the annihilation of the $Q\bar{Q}$ pair produces gluons and light quarks with energies of order $m$, and those degrees of freedom are no longer present in NRQCD. Nevertheless, decay processes can be studied within NRQCD; in fact the theory is constructed precisely so that it can account for them. The reason is that annihilation processes are incorporated in NRQCD through local four-fermion interactions. Decay rates are represented in NRQCD by the imaginary parts of $Q\bar{Q}\to Q\bar{Q}$ scattering amplitudes. The coefficients of the four-fermion operators therefore have imaginary parts that encode the decay rates. In this way we can study inclusive decays of heavy quarkonium into light particles. NRQCD factorizes the effects at the scale $m$ from the rest. However, if we want to study the physics of heavy quarkonium at the scale of the binding energy of the system, we face the problem that the soft scale, corresponding to the typical three-momentum $mv$, and the ultrasoft scale, corresponding to the typical kinetic energy $mv^2$, are still entangled in NRQCD. It would be desirable to separate the effects of these two scales. This problem can be addressed in more than one way. 
The strategy we will follow in this thesis is to exploit the non-relativistic hierarchy of scales of the system ($m\gg mv\gg mv^2$) more fully, and to construct a new effective theory that contains only the degrees of freedom relevant for describing heavy-quarkonium systems at the scale of the binding energy. The resulting theory is known as \emph{potential NRQCD} (pNRQCD). This theory is briefly described in the next section. The correct treatment of some heavy-quark and heavy-quarkonium systems requires additional degrees of freedom beyond those present in HQET or NRQCD. When we want to describe regions of phase space where the decay products carry a large energy, or when we want to describe exclusive decays, for example, collinear degrees of freedom must be included in the theory. The interaction of collinear with soft degrees of freedom has been implemented in the EFT framework in what is now known as Soft-Collinear Effective Theory (SCET). This theory is also briefly described in the next section. In short, the study of heavy-quark and quarkonium systems has led to the construction of effective field theories of increasing richness and complexity. The full power of quantum-field-theory techniques (loop effects, resummation of logarithms...) is exploited to improve our understanding of these systems. 
\begin{table}[t] \begin{center} \begin{tabular}{|c|c|} \hline English & Catalan \\ \hline & \\ Quantum ChromoDynamics (QCD) & CromoDinàmica Quàntica (CDQ) \\ & \\ Soft-Collinear Effective Theory (SCET) & Teoria Efectiva Col·lineal-Suau (TECS) \\ & \\ loop & baga \\ & \\ Standard Model (SM) & Model Estàndard (ME)\\ & \\ Quantum ElectroDynamics (QED) & Electrodinàmica Quàntica (EDQ)\\ & \\ quarkonium & quarkoni\\ & \\ Heavy Quark Effective Theory (HQET) & Teoria Efectiva per Quarks Pesats (TEQP)\\ & \\ Non-Relativistic QCD (NRQCD) & CDQ No Relativista (CDQNR)\\ & \\ potential NRQCD (pNRQCD) & CDQNR de potencial (CDQNRp) \\ & \\ matching coefficients & coeficients de coincidència \\ & \\ label operators & operadors etiqueta \\ & \\ jet & doll\\ & \\ \hline \end{tabular} \caption[English-Catalan translations]{English-Catalan translation of some terms used in the thesis.}\label{tabtrad} \end{center} \end{table} \section{Background} \subsection{pNRQCD} As mentioned above, the relevant scales for heavy-quarkonium systems are the mass $m$, the soft scale $mv$ and the ultrasoft scale $mv^2$, in addition to $\Lambda_{\rm QCD}$. When the non-relativistic hierarchy of the system is exploited in full, we arrive at pNRQCD. To identify the relevant degrees of freedom of the final theory, the relative importance of $\Lambda_{\rm QCD}$ with respect to the soft and ultrasoft scales must be specified. Two relevant regimes have been identified: the so-called \emph{weak-coupling regime}, $mv^2\gtrsim\Lambda_{\rm QCD}$, and the \emph{strong-coupling regime}, $mv\gtrsim\Lambda_{\rm QCD}\gg mv^2$. \subsubsection{Weak-coupling regime} In this regime the degrees of freedom of pNRQCD are similar to those of NRQCD, but with the upper cut-offs on energies and three-momenta lowered. 
The degrees of freedom of pNRQCD consist of heavy quarks and antiquarks with three-momentum bounded from above by $\nu_p$ ($\vert\mathbf{p}\vert\ll\nu_p\ll m$) and energy bounded by $\nu_{us}$ ($\frac{\mathbf{p}^2}{m}\ll\nu_{us}\ll\vert\mathbf{p}\vert$), and of gluons and light quarks with four-momentum bounded by $\nu_{us}$. The Lagrangian can be written as \[ \mathcal{L}_{pNRQCD}=\int d^3{\bf r} \; {\rm Tr} \, \Biggl\{ {\rm S}^\dagger \left( i\partial_0 - h_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \right) {\rm S} + {\rm O}^\dagger \left( iD_0 - h_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \right) {\rm O} \Biggr\}+ \] \[ +V_A ( r) {\rm Tr} \left\{ {\rm O}^\dagger {\bf r} \cdot g{\bf E} \,{\rm S} + {\rm S}^\dagger {\bf r} \cdot g{\bf E} \,{\rm O} \right\} + {V_B (r) \over 2} {\rm Tr} \left\{ {\rm O}^\dagger {\bf r} \cdot g{\bf E} \, {\rm O} + {\rm O}^\dagger {\rm O} {\bf r} \cdot g{\bf E} \right\}- \] \begin{equation} - {1\over 4} G_{\mu \nu}^{a} G^{\mu \nu \, a} + \sum_{i=1}^{n_f} \bar q_i \, i D\!\!\!\!\slash \, q_i \end{equation} with \begin{equation} h_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) = {{\bf p}^2 \over 2\, m_{\rm red}} + {{\bf P}_{\bf R}^2 \over 2\, m_{\rm tot}} + V_s({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \end{equation} \begin{equation} h_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) = {{\bf p}^2 \over 2\, m_{\rm red}} + {{\bf P}_{\bf R}^2 \over 2\, m_{\rm tot}} + V_o({\bf r}, {\bf p}, {\bf P}_{\bf R}, {\bf S}_1,{\bf S}_2) \end{equation} and \begin{equation} D_0 {\rm O} \equiv i \partial_0 {\rm O} - g [A_0({\bf R},t),{\rm O}]\quad {\bf P}_{\bf R} = -i{\bf D}_{\bf R}\quad m_{\rm red} =\frac{m_1m_2}{m_{\rm tot}}\quad m_{\rm tot} = m_1 + m_2 \end{equation} $S$ is the singlet field for the quarkonium and $O$ the octet field. $\mathbf{E}$ denotes the chromoelectric field. 
We can see that the usual quantum-mechanical potentials appear as matching coefficients of the effective theory. \subsubsection{Strong-coupling regime} In this situation the physics at the binding-energy scale lies below the scale $\Lambda_{\rm QCD}$. It is therefore better to discuss the theory in terms of hadronic degrees of freedom. Guided by some general considerations and by indications from lattice QCD, we can assume that the quarkonium is described by a singlet field. If we ignore the Goldstone bosons, these are all the degrees of freedom in this regime. The Lagrangian is now given by \begin{equation} L_{\rm pNRQCD} = \int d^3 {\bf R} \int d^3 {\bf r} \; S^\dagger \big( i\partial_0 - h_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) \big) S \end{equation} with \begin{equation} h_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) = {{\bf p}_1^2\over 2m_1} + {{\bf p}_2^2\over 2m_2} + V_s({\bf x}_1,{\bf x}_2, {\bf p}_1, {\bf p}_2, {\bf S}_1, {\bf S}_2) \end{equation} The potential $V_s$ is now a non-perturbative quantity. Matching the fundamental theory to the effective theory yields expressions for the potential (in terms of so-called Wilson loops). \subsection{SCET} The aim of this theory is to describe processes in which very energetic (collinear) degrees of freedom interact with soft degrees of freedom. The theory can thus be applied to a wide range of processes in which this kinematic situation arises. Any process containing very energetic hadrons, together with a source for them, will contain particles, called collinear, that move close to a light-cone direction $n^{\mu}$. 
Since these particles must have a large energy $E$ and at the same time a small invariant mass, the sizes of the components of their four-momentum (in light-cone coordinates, $p^{\mu}=(\bar{n}p)n^{\mu}/2+p^{\mu}_{\perp}+(np)\bar{n}^{\mu}/2$) are very different. Typically $\bar{n}p\sim E$, $p_{\perp}\sim E\lambda$ and $np\sim E\lambda^2$, with $\lambda$ a small parameter. It is this hierarchy of scales that the EFT exploits. The degrees of freedom that must be included in the effective theory depend on whether one wants to study inclusive or exclusive processes. The two resulting theories are known as SCET$_{\rm I}$ and SCET$_{\rm II}$, respectively. \subsubsection{SCET$_{\rm I}$} This is the theory containing collinear $(p^{\mu}=(\bar{n}p,p_{\perp},np)\sim(1,\lambda,\lambda^2))$ and ultrasoft $(p^{\mu}\sim(\lambda^2,\lambda^2,\lambda^2))$ degrees of freedom. The collinear degrees of freedom have invariant mass of order $E\Lambda_{QCD}$. Unfortunately, in both SCET$_{\rm I}$ and SCET$_{\rm II}$, there is no standard notation in the literature; two (supposedly equivalent) formalisms have been used. The leading-order Lagrangian is given by \begin{equation} \mathcal{L}_c=\bar{\xi}_{n,p'}\left\{inD+gnA_{n,q}+\left(\mathcal{P}\!\!\!\!\slash_{\perp}+gA\!\!\!\slash_{n,q}^{\perp}\right)W\frac{1}{\bar{\mathcal{P}}}W^{\dagger}\left(\mathcal{P}\!\!\!\!\slash_{\perp}+gA\!\!\!\slash_{n,q'}^{\perp}\right)\right\}\frac{\bar{n}\!\!\!\slash}{2}\xi_{n,p} \end{equation} $\xi_{n,p}$ is the collinear-quark field and $A_{n,p}$ the collinear-gluon field (the dependence on the large scales has been extracted from them in a way similar to HQET). The $\mathcal{P}$ are the so-called label operators, which give the large (extracted) components of the fields. The $W$ are Wilson lines. \subsubsection{SCET$_{\rm II}$} This is the theory describing processes in which the collinear degrees of freedom in the final state have invariant mass of order $\Lambda_{QCD}^2$. 
This theory is more involved than the previous one, since in going from QCD to SCET$_{\rm II}$ the presence of two types of collinear modes must be taken into account. This theory is essentially not used in this thesis and we will therefore say no more about it. \section{The QCD singlet static potential} The static potential between a quark and an antiquark is a key object for understanding the dynamics of QCD. Here we focus on the infrared dependence of the singlet static potential, and obtain its subleading infrared dependence using pNRQCD. The perturbative expansion of the singlet static potential is given by \[ V_s^{(0)}(r)=-\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(1+\frac{\alpha_{\rm s}(1/r)}{4\pi}\left(a_1+2\gamma_E\beta_0\right)+\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^2\bigg( a_2+\right. \] \begin{equation}\label{Vcat} +\left.\left(\frac{\pi^2}{3}+4\gamma_E^2\right)\beta_0^2+\gamma_E\left(4a_1\beta_0+2\beta_1\right)\bigg)+\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^3\left(\tilde{a}_3+\frac{16\pi^2}{3}C_A^3\log r\mu\right)+\cdots\right) \end{equation} where \begin{equation} a_1=\frac{31}{9}C_A-\frac{20}{9}T_Fn_f \end{equation} and \begin{eqnarray} a_2&=& \left[{4343\over162}+4\pi^2-{\pi^4\over4}+{22\over3}\zeta(3)\right]C_A^2 -\left[{1798\over81}+{56\over3}\zeta(3)\right]C_AT_Fn_f- \nonumber\\ &&{}-\left[{55\over3}-16\zeta(3)\right]C_fT_Fn_f +\left({20\over9}T_Fn_f\right)^2 \end{eqnarray} The logarithm appearing in the expression for the potential is the leading infrared dependence. Here we determine the subleading infrared dependence, that is, part of the fourth-order correction to the potential. To do so we study the matching of NRQCD onto pNRQCD. What is needed is the matching at order $r^2$ in the multipole expansion of pNRQCD, which requires evaluating the second diagram on the right-hand side of the equality in figure \ref{figmatchr2}. 
Computing the first correction in $\alpha_{\rm s}$ to this diagram (beyond the leading term) yields the subleading infrared dependence we are after (the leading term of the diagram reproduced the leading infrared dependence). The result for the subleading infrared terms of the potential is \[ V_s^{(0)}(r)=(\mathrm{Eq.}\ref{Vcat})- \] \begin{equation} -\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^4\frac{16\pi^2}{3}C_A^3\left(-\frac{11}{3}C_A+\frac{2}{3}n_f\right)\log^2r\mu- \end{equation} \begin{equation} -\frac{C_f\alpha_{\rm s}(1/r)}{r}\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^4\frac{16\pi^2}{3}C_A^3\left(a_1+2\gamma_E\beta_0-\frac{1}{27}\left(20 n_f-C_A(12 \pi ^2+149)\right)\right)\log r\mu \end{equation} \section{Anomalous dimension of the heavy-to-light current in SCET at two loops: $n_f$ terms} The hadronic heavy-to-light currents $J_{\mathrm{had}}=\bar{q}\Gamma b$ ($b$ denotes the heavy quark and $q$ the light quark), which appear in operators of the weak effective theory at an energy scale $\mu\sim m_b$, can be matched onto SCET$_{\mathrm{I}}$ currents. At lowest order in the expansion parameter $\lambda$ the SCET current is given by \begin{equation} J_{\mathrm{had}}^{SCET}=c_0\left(\bar{n}p,\mu\right)\bar{\xi}_{n,p}\Gamma h+c_1\left(\bar{n}p,\bar{n}q_1,\mu\right)\bar{\xi}_{n,p}\left(g\bar{n}A_{n,q_1}\right)\Gamma h+\cdots \end{equation} That is, an arbitrary number of $\bar{n}A_{n,q}$ gluons can be added without any suppression in the $\lambda$ power counting. The Wilson coefficients can be evolved, within the effective theory, down to a lower energy scale. Since all the currents are related by collinear gauge invariance, it suffices to study the (obviously simpler) current $\bar{\xi}\Gamma h$. 
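The evolution of the Wilson coefficients just mentioned takes the standard renormalization-group form; schematically (this displays only the generic structure, in our notation, with $\gamma$ the anomalous dimension discussed below and $\mu_h\sim m_b$ the matching scale):

```latex
\mu\frac{d}{d\mu}\,c_0(\bar{n}p,\mu)=\gamma(\mu)\,c_0(\bar{n}p,\mu)
\quad\Longrightarrow\quad
c_0(\bar{n}p,\mu)=c_0(\bar{n}p,\mu_h)\,
\exp\!\left[\int_{\mu_h}^{\mu}\frac{d\mu'}{\mu'}\,\gamma(\mu')\right]
```

Since $\gamma$ itself contains a $\log(\mu/\bar{n}P)$, the exponent resums Sudakov double logarithms.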
The one-loop evolution of the current is governed by the anomalous dimension \begin{equation} \gamma=-\frac{\alpha_s}{4\pi}C_f\left(5+4\log\left(\frac{\mu}{\bar{n}P}\right)\right) \end{equation} $P$ is the total outgoing momentum of the jet. Here we want the $n_f$ terms of the two-loop correction to this result. To compute them one must evaluate the diagrams of figure \ref{figdiagnf}. In addition, we need the two-loop corrections to the collinear-quark and heavy-quark propagators. The correction to the collinear-quark propagator coincides with the usual QCD one (since only collinear, and not ultrasoft, particles enter its computation), while the correction to the heavy-quark propagator is the usual HQET one. Taking into account the result of the diagrams and these propagator corrections, we obtain the desired result for the two-loop $n_f$ terms of the anomalous dimension \begin{equation} \gamma^{(2\,{\rm loop}\;n_f)}=\left(\frac{\alpha_s}{4\pi}\right)^2\frac{4T_Fn_fC_f}{3}\left(\frac{125}{18}+\frac{\pi^2}{2}+\frac{20}{3}\log\left(\frac{\mu}{\bar{n}P}\right)\right) \end{equation} \section{Radiative decays of heavy quarkonium} Semi-inclusive radiative decays of heavy quarkonium to light hadrons have been studied since the early days of QCD. The first works treated heavy quarkonium in analogy with orthopositronium decay in QED. Several subsequent experiments showed that the upper region $z\to 1$ of the photon spectrum ($z$ is the fraction of the photon energy with respect to the maximum possible) could not be well explained by those calculations. Later computations of relativistic corrections and of logarithm resummations, although going in the right direction, were not sufficient either to explain the experimental data. By contrast, the spectrum could be well reproduced by models incorporating a gluon mass. 
The advent of NRQCD made it possible to analyze these decays systematically, but even so a finite gluon mass seemed necessary. It was soon realized, however, that NRQCD factorization breaks down in this upper region. One had to introduce the so-called structure functions (in the color-octet channel), which resum contributions from several orders in the NRQCD expansion. Some early attempts to model these structure functions led to results in strong disagreement with the data. Later it was recognized that a correct treatment of this upper region of the spectrum required combining NRQCD with SCET (since collinear degrees of freedom are also important in this kinematic region). Logarithm resummations were then studied in this framework (correcting and extending the previous results). Here we use a combination of pNRQCD and SCET to compute these structure functions, under the assumption that the decaying quarkonium can be treated in the weak-coupling regime. When these results are consistently combined with previously known ones, a good description of the spectrum is obtained (without any need to introduce a gluon mass) over the whole range of $z$. To compute these structure functions, the first step is to write the currents in pNRQCD+SCET, which is where we will compute them. Once this is done, the corresponding diagrams can be computed and, by comparison with the factorization formulas for this process, the desired structure functions can be identified. The diagrams to be computed are shown in figure \ref{figdos}, and from them the structure functions are obtained. The result is ultraviolet divergent and must be renormalized. 
Once this has been done, comparing the resulting theoretical prediction for the spectrum with the experimental data in the upper region shows good agreement, as can be seen in figure \ref{figepsubst} (the two curves in the figure correspond to different renormalization schemes). So far we have thus explained the upper region of the spectrum. What remains is to see whether these results can be combined with the earlier calculations for the rest of the spectrum so as to obtain good agreement with the data over the whole range of $z$. Care must be taken when combining these results, since different theoretical approximations are needed (in order to be able to compute) in the different regions of the spectrum. The procedure employed consists in expanding (in $z$, for the central region) the expressions obtained for the upper region of the spectrum, and then combining the expressions according to the formula \begin{equation} \frac{1}{\Gamma_0}\frac{d\Gamma^{dir}}{dz}=\frac{1}{\Gamma_0}\frac{d\Gamma^{c}}{dz}+\left(\frac{1}{\Gamma_0 }\frac{d\Gamma_{SC}^{e}}{dz}-\left.{\frac{1}{\Gamma_0 }\frac{d\Gamma_{SC}^{e}}{dz}}\right\vert_c\right)+\left(\frac{1}{\Gamma_0}\frac{d\Gamma_{OC}^{e}}{dz}-\left.{\frac{1}{\Gamma_0 }\frac{d\Gamma_{OC}^{e}}{dz}}\right\vert_c\right) \end{equation} where $SC$ denotes the color-singlet contribution, $OC$ the color-octet contribution, and the superscripts $c$ and $e$ refer to the expressions for the central region and for the upper endpoint of the spectrum, respectively. Using this formula we obtain the expression valid in the central region in the central region, and the expression valid at the upper endpoint of the spectrum at the upper endpoint, up to terms that are of higher order in the effective-theory counting in the respective regions. 
And we have done so without introducing arbitrary cuts or bounds to delimit the different regions of the spectrum (which would have introduced essentially uncontrollable theoretical uncertainties into our results). When we compare the resulting curve\footnote{The so-called fragmentation contributions must also be added. At the order we are working at here they are completely independent of the direct contributions in the formula above.} (which now contains all the terms that, according to our counting, must be present) with the experimental data, we obtain very good agreement. The comparison can be seen in figures \ref{figtotal} and \ref{figtotalnou} (the continuous red (light) curve in those figures is the theoretical prediction for the spectrum). Once the spectrum is well described theoretically, it can be used to study properties of heavy quarkonium. In particular, these spectra can be used to determine in which coupling regime the various decaying quarkonia lie. Computing the ratio of the spectra of two states ($n$ and $r$) in the strong-coupling regime gives \begin{equation} \frac{\displaystyle\frac{d\Gamma_n}{dz}}{\displaystyle\frac{d\Gamma_r}{dz}} =\frac{\Gamma\left(V_Q(nS)\to e^+e^-\right)}{\Gamma\left(V_Q(rS)\to e^+e^-\right)}\left[\!1\!-\!\frac{\mathrm{Im}g_{ee}\left(\phantom{}^3S_1\right)}{\mathrm{Im}f_{ee}\left(\phantom{}^3S_1\right)}\frac{E_{n}-E_{r}}{m}\right]\left(1+\frac{C_1'\left[\phantom{}^3S_1\right](z)}{C_1\left[\phantom{}^3S_1\right](z)}\frac{1}{m}\left(E_{n}-E_{r}\right)\right) \end{equation} (all the quantities appearing in this equation are known), whereas if one of the two states is in the weak-coupling regime the resulting formula exhibits a different dependence on $z$. 
Therefore, if the formula above reproduces the ratio of spectra well, this indicates that both quarkonia are in the strong-coupling regime, while if it does not, at least one of the two is in the weak-coupling regime. Since data are available for the $\Upsilon(1S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$ states, we can carry out this procedure in practice. The comparison with the experimental results can be seen in figures \ref{fig1s2s}, \ref{fig1s3s} and \ref{fig2s3s} (we expect the $\Upsilon(1S)$ state to be in the weak-coupling regime, which is compatible with figure \ref{fig1s2s}). The errors are very large, but $\Upsilon(2S)$ and $\Upsilon(3S)$ appear compatible with being strong-coupling states (one must compare the continuous curve with the points; if they coincide, both states are in the strong-coupling regime). \section{Conclusions} In this thesis we have used effective-field-theory techniques to study the heavy-quark sector of the Standard Model. We have focused on three topics. First, we studied the QCD singlet static potential, using potential Non-Relativistic QCD. With the help of this effective theory we were able to determine the subleading infrared dependence of this static potential. Among other possible applications, this result is relevant for the study of $t\bar{t}$ production near threshold (at third order), a process that must be studied in great detail in view of the possible future construction of a large electron-positron linear collider. We then studied an anomalous dimension in SCET. This theory has very important applications in the field of $B$-meson physics, which in turn is of great importance for indirect searches for new-physics processes (through the study of $CP$ violation and of the CKM matrix). 
Finally, we studied semi-inclusive radiative decays of heavy quarkonium to light hadrons. A combination of pNRQCD and SCET was necessary to properly explain this process. Viewed from the current perspective, this process can be seen as a nice example of how, once all the relevant degrees of freedom of a problem are incorporated (and a well-defined power counting is used for them), the process is well described by the theory. Once this process is understood, it can be used to study some of the properties of the decaying heavy quarkonium, as we have also shown in this thesis. \selectlanguage{american}
\section*{Acknowledgements} This work was supported in part by NSF grants \#1850349, \#1564330, and \#1663704, a Google Faculty Research Award, and an Adobe Data Science Research Award. We thank Stats Perform SportVU \footnote{\url{https://www.statsperform.com/team-performance/basketball/optical-tracking/}} for the basketball tracking data. We gratefully acknowledge use of the following datasets: PRISM by the PRISM Climate Group, Oregon State University \footnote{\url{http://prism.oregonstate.edu}} and EN4 by the Met Office Hadley Centre \footnote{\url{https://www.metoffice.gov.uk/hadobs/en4/}}, and thank Caroline Ummenhofer of Woods Hole Oceanographic Institution for helping us obtain the data. \section{Conclusion and Future Work} We presented a novel algorithm for learning tensor models for spatial analysis. Our algorithm \texttt{MRTL}{} utilizes multiple resolutions to significantly decrease training time and incorporates a full-rank initialization strategy that promotes spatially coherent and interpretable latent factors. \texttt{MRTL}{} generalizes to both the classification and regression cases. We proved the theoretical convergence of our algorithm for stochastic gradient descent and compared the computational complexity of \texttt{MRTL}{} to a single, fixed-resolution model. The experimental results on two real-world datasets support its improvements in computational efficiency and interpretability. Future work includes 1) developing other stopping criteria in order to enhance the computational speedup, 2) applying our algorithm to higher-dimensional spatiotemporal data, and 3) studying the effect of varying batch sizes between resolutions as in \citep{wu2019multigrid}. \section{Experiments} \label{sec:experiments} We apply \texttt{MRTL}{} to two real-world datasets: basketball tracking and climate data. More details about the datasets and pre-processing steps are provided in Appendix \ref{supp:experiments}. 
\subsection{Datasets} \paragraph{Tensor classification: Basketball tracking} We use a large NBA player tracking dataset from \citep{Yue2014,zheng2016generating} consisting of the coordinates of all players at 25 frames per second, for a total of approximately 6 million frames. The goal is to predict whether a given ball handler will shoot within the next second, given his position on the court and the relative positions of the defenders around him. In applying our method, we hope to obtain common shooting locations on the court and how a defender's relative position suppresses shot probability. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{images/bball/basketball_data.png} \captionof{figure}{Left: Discretizing a continuous-valued position of a player (red) via a spatial grid. Right: sample frame with a ballhandler (red) and defenders (green). Only defenders close to the ballhandler are used. } \label{fig:basketball_data} \end{figure} The basketball data contains two spatial modes: the ball handler's position and the relative defender positions around the ball handler. We instantiate a tensor classification model in Eqn \eqref{eq:class-full} as follows: \begin{eqnarray*} \T{Y}_{i} = \sum_{d^1=1}^{D^1_r} \sum_{d^2=1}^{D^2_r} \sigma(\T{W}_{i,d^1,d^2}^{(r)} \T{X}^{(r)}_{i, d^1,d^2} + b_i) \text{ ,} \end{eqnarray*} where $i \in \{1,\dots,I\}$ is the ballhandler ID, $d^1$ indexes the ballhandler's position on the discretized court of dimension $\{D_r^1\}$, and $d^2$ indexes the relative defender positions around the ballhandler in a discretized grid of dimension $\{D_r^2\}$. We assume that only defenders close to the ballhandler affect shooting probability and set $D_r^2 < D_r^1$ to reduce dimensionality. As shown in Fig. \ref{fig:basketball_data}, we orient the defender positions so that the direction from the ballhandler to the basket points up. 
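The spatial discretization illustrated in Fig. \ref{fig:basketball_data} (left) amounts to nearest-neighbor binning of a continuous position onto a grid. A minimal sketch follows; the court dimensions (94 ft $\times$ 50 ft) are standard NBA values, but the function name and grid sizes are our own illustrative choices, not the resolutions used in the experiments:

```python
def discretize_position(xy, court_size=(94.0, 50.0), grid=(12, 6)):
    """Map a continuous (x, y) court position to a flat cell index on a
    grid[0] x grid[1] spatial grid via nearest-neighbor binning."""
    x, y = xy
    # Fractional position in [0, 1), clipped so boundary points stay on-court.
    fx = min(max(x / court_size[0], 0.0), 1.0 - 1e-9)
    fy = min(max(y / court_size[1], 0.0), 1.0 - 1e-9)
    ix = int(fx * grid[0])
    iy = int(fy * grid[1])
    return ix * grid[1] + iy  # flattened index in [0, grid[0] * grid[1])
```

Finegraining to the next resolution then amounts to re-binning with a larger `grid`; under nearest-neighbor interpolation, each coarse cell's weight is copied to the finer cells it contains.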
$\T{Y}_{i} \in \{0, 1\}$ is the binary output equal to $1$ if player $i$ shoots within the next second and $\sigma$ is the sigmoid function. We use nearest neighbor interpolation for finegraining and a weighted cross entropy loss (due to imbalanced classes): \begin{equation} \T{L}_n = -\beta \left[\T{Y}_n \cdot \log{\hat{\T{Y}}_n} + (1-\T{Y}_n)\cdot \log{(1 - \hat{\T{Y}}_n)}\right] \text{,} \end{equation} where $n$ denotes the sample index and $\beta$, the weight of the positive samples, is set equal to the ratio of negative to positive label counts. \paragraph{Tensor regression: Climate} Recent research \cite{li_2016_midwest, li_2016_sahel, zeng_2019_yangtze} shows that oceanic variables such as sea surface salinity (SSS) and sea surface temperature (SST) are significant predictors of the variability in rainfall in land-locked locations, such as the U.S. Midwest. We aim to predict the variability in average monthly precipitation in the U.S. Midwest using SSS and SST to identify meaningful latent factors underlying the large-scale processes linking the ocean and precipitation on land (Fig. \ref{fig:climate_data}). We use precipitation data from the PRISM group \cite{prism} and SSS/SST data from the EN4 reanalysis \cite{en4}. \begin{figure}[h] \centering \includegraphics[width=0.95\linewidth]{images/climate/climate_data.png} \captionof{figure}{Left: precipitation over continental U.S. Right: regions considered in particular.} \label{fig:climate_data} \end{figure} Let $\T{X}$ be the historical oceanic data with spatial features SSS and SST across $D_r$ locations, using the previous $6$ months of data. As SSS and SST share the spatial mode (the same spatial locations), we set $F_2=2$ to index these two features. We also treat the lag as a non-spatial feature, so that $F_1=6$. 
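The class-weighted loss used for the basketball task can be sketched as below. This is our own minimal version: following the common convention (e.g. a `pos_weight`-style loss), $\beta$ multiplies only the positive term, with $\beta$ computed from the label counts as described:

```python
import numpy as np

def weighted_bce(y_true, y_prob, eps=1e-7):
    """Binary cross entropy with positive-class weight
    beta = (#negatives / #positives), for imbalanced shot labels."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1.0 - eps)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    beta = n_neg / n_pos  # up-weight the rare positive (shot) class
    per_sample = -(beta * y_true * np.log(y_prob)
                   + (1.0 - y_true) * np.log(1.0 - y_prob))
    return per_sample.mean()
```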
We instantiate the tensor regression model in Eqn. \eqref{eq:class-full} as follows: \begin{eqnarray*} \T{Y} = \sum_{f_1=1}^{F_1} \sum_{f_2=1}^{F_2} \sum_{d=1}^{D_r} \T{W}_{f_1, f_2, d}^{(r)} \T{X}^{(r)}_{f_1, f_2, d} + b \end{eqnarray*} The features and outputs (SSS, SST, and precipitation) are subject to long-term trends and a seasonal cycle. We use difference detrending for each timestep due to the non-stationarity of the inputs, and remove seasonality by standardizing each month of the year. The features are normalized using min-max normalization. We also normalize and deseasonalize the outputs, so that the model predicts standardized anomalies. We use mean square error (MSE) as the loss function and bilinear interpolation for finegraining. \paragraph{Implementation Details} For both datasets, we discretize the spatial features and use a 60-20-20 train-validation-test split. We use Adam \citep{kingma2014adam} for optimization as it was empirically faster than SGD in our experiments. We use both $L_2$ and spatial regularization as described in Section \ref{sec:model}. We selected optimal hyperparameters for all models via random search. We use a stepwise learning rate decay with a step size of $1$ and decay factor $\gamma=0.95$. We perform ten trials for all experiments. All other details are provided in Appendix \ref{supp:experiments}. \subsection{Accuracy and Convergence} We compare \texttt{MRTL}{} against a fixed-resolution model on accuracy and computation time. We exclude the computation time for \texttt{CP\_ALS}{} as it was quick to compute for all experiments ($<5$ seconds for the basketball dataset). The results of all trials are listed in Table \ref{tab:results}. Additional results are provided in Appendix \ref{supp:experiments}. 
\begin{figure}[t] \centering \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\textwidth]{images/bball/multi_fixed_full_val_F1.png} \end{subfigure} \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\textwidth]{images/bball/multi_fixed_low_val_F1.png} \end{subfigure} \caption{Basketball: F1 scores of \texttt{MRTL}{} vs. the fixed-resolution model for the full rank (left) and low rank (right) models. The vertical lines indicate finegraining to the next resolution.} \label{fig:bball_multi_fixed} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[t]{0.49\columnwidth} \includegraphics[width=\textwidth]{images/bball/stop_cond_full_val_F1.png} \end{subfigure} \begin{subfigure}[t]{0.49\columnwidth} \includegraphics[width=\textwidth]{images/bball/stop_cond_low_val_F1.png} \end{subfigure} \caption{Basketball: F1 scores of different finegraining criteria for the full rank (left) and low rank (right) models.} \label{fig:bball_stop_cond} \end{figure} \begin{table*}[t] \footnotesize \centering \captionsetup{justification=centering} \caption{Runtime and prediction performance comparison of a fixed-resolution model vs \texttt{MRTL}{} for both datasets} \begin{tabular}{l|l|c|c|c|c|c|c} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Full Rank} & \multicolumn{3}{c}{Low Rank} \\ \cmidrule{3-8} & & Time [s] & Loss & F1 & Time [s] & Loss & F1 \\ \midrule \multirow{2}{*}{Basketball} & Fixed & 11462 \tiny{$\pm 565$} & 0.608 \tiny{$\pm 0.00941$} & 0.685 \tiny{$\pm 0.00544$} & 2205 \tiny{$\pm 841$} & 0.849 \tiny{$\pm 0.0230$} & 0.494 \tiny{$\pm 0.00417$} \\ & \texttt{MRTL}{} & 1230 \tiny{$\pm 74.1$} & 0.699 \tiny{$\pm 0.00237$} & 0.607 \tiny{$\pm 0.00182$} & 2009 \tiny{$\pm 715$} & 0.868 \tiny{$\pm 0.0399$} & 0.475 \tiny{$\pm 0.0121$}\\ \hline \multirow{2}{*}{Climate} & Fixed & 12.5\tiny{$\pm 0.0112$} & 0.0882 \tiny{$\pm 0.0844$} & \text{-} & 269 \tiny{$\pm 319$} & 0.0803 \tiny{$\pm 0.0861$} & \text{-} \\ & \texttt{MRTL}{} & 1.11 
\tiny{$\pm 0.180$} & 0.0825 \tiny{$\pm 0.0856$} & \text{-} & 67.1 \tiny{$\pm 31.8$} & 0.0409 \tiny{$\pm 0.00399$} & \text{-} \\ \bottomrule \end{tabular} \label{tab:results} \end{table*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.40\linewidth} \includegraphics[width=\textwidth]{images/climate/mrtl_latent_1.png} \end{subfigure} \begin{subfigure}[t]{0.40\linewidth} \includegraphics[width=\textwidth]{images/climate/mrtl_latent_2.png} \end{subfigure} \caption{Climate: Some latent factors of sea surface locations after training. The red areas in the northwest Atlantic region (east of North America and Gulf of Mexico) represent areas where moisture export contributes to precipitation in the U.S. Midwest.} \label{fig:climate_lfs} \end{figure*} Fig. \ref{fig:bball_multi_fixed} shows the F1 scores of \texttt{MRTL}{} vs. a fixed-resolution model for the basketball dataset (validation loss was used as the finegraining criterion for both models). For the full rank case, \texttt{MRTL}{} converges 9 times faster than the fixed-resolution case (the scaling of the axes obscures convergence; nevertheless, both algorithms have converged). The fixed-resolution model is able to reach a higher F1 score for the full rank case, as it uses a higher resolution than \texttt{MRTL}{} and is able to use more finegrained information, translating to a higher quality solution. This advantage does not transfer to the low rank model. For the low rank model, the training times are comparable and both reach a similar F1 score. There is a decrease in the F1 score going from full rank to low rank for both \texttt{MRTL}{} and the fixed-resolution model due to the approximation error of CP decomposition. Note that this is dependent on the choice of $K$, specific to each dataset. Furthermore, we see a smaller increase in performance for the low rank model vs. the full rank case, indicating that the information gain from finegraining does not scale linearly with the resolution. 
We see a similar trend for the climate data, where \texttt{MRTL}{} converges faster than the fixed-resolution model. Overall, \texttt{MRTL}{} is approximately 4 $\sim$ 5 times faster. \subsection{Finegraining Criteria} We compare the performance of different finegraining criteria in Fig. \ref{fig:bball_stop_cond}. Validation loss converges much faster than the other criteria for the full rank model, while the other finegraining criteria converge slightly faster for the low rank model. In the classification case, we observe that the full rank model spends many epochs training when we use gradient-based criteria, suggesting that they can be too strict for the full rank case. For the regression case, all criteria perform similarly for the full rank model, and validation loss converges faster for the low rank model. As the best finegraining criterion differs between datasets, one should try all of them for fastest convergence. \subsection{Interpretability} We now demonstrate that \texttt{MRTL}{} can learn semantic representations along spatial dimensions. For all latent factor figures, the factors have been normalized to $(-1,1)$ so that reds are positive and blues are negative. \begin{figure}[ht] \centering \begin{subfigure}[c]{0.22\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_B_heatmap1.png} \end{subfigure} \begin{subfigure}[c]{0.23\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_B_heatmap2.png} \end{subfigure} \begin{subfigure}[c]{0.22\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_B_heatmap3.png} \end{subfigure} \caption{ Basketball: Latent factor heatmaps of ballhandler position after training for $k=1,3,20$. 
They represent common shooting locations such as the right/left sides of the court, the paint, or near the three point line.} \label{fig:bball_lfs_B_some} \end{figure} \begin{figure}[ht] \centering \begin{subfigure}[c]{0.23\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_C_heatmap1.png} \end{subfigure} \begin{subfigure}[c]{0.22\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_C_heatmap2.png} \end{subfigure} \begin{subfigure}[c]{0.22\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_C_heatmap3.png} \end{subfigure} \caption{ Basketball: Latent factor heatmaps of relative defender positions after training for $k=1,3,20$. The green dot represents the ballhandler at $(6, 2)$. The latent factors show spatial patterns near the ballhandler, suggesting important positions to suppress shot probability.} \label{fig:bball_lfs_C_some} \end{figure} Figs. \ref{fig:bball_lfs_B_some}, \ref{fig:bball_lfs_C_some} visualize some latent factors for ballhandler position and relative defender positions, respectively (see Appendix for all latent factors). For the ballhandler position in Fig. \ref{fig:bball_lfs_B_some}, coherent spatial patterns (either red or blue regions, as they are sign inverses of each other) can correspond to common shooting locations. These latent factors can represent known locations such as the paint or near the three-point line on both sides of the court. For relative defender positions in Fig. \ref{fig:bball_lfs_C_some}, we see many concentrated spatial regions near the ballhandler, indicating that such close positions suppress shot probability (as expected). Some latent factors exhibit directionality as well, suggesting that guarding one side of the ballhandler may suppress shot probability more than the other side. Fig. \ref{fig:climate_lfs} depicts two latent factors of sea surface locations. 
We would expect latent factors to correspond to regions of the ocean which independently influence precipitation. The left latent factor highlights the Gulf of Mexico and northwest Atlantic ocean as influential for rainfall in the Midwest due to moisture export from these regions. This is consistent with findings from \cite{li2018role, li_2016_midwest}. \paragraph{Random initialization} \label{sub:random_init} We also perform experiments using a randomly initialized low-rank model (without the full-rank model) in order to verify the importance of full rank initialization. Fig. \ref{fig:bball_lfs_rand} compares random initialization vs. \texttt{MRTL}{} for the ballhandler position (left two plots) and the defender positions (right two plots). We observe that even with spatial regularization, randomly initialized latent factor models can produce noisy, uninterpretable factors and thus full-rank initialization is essential for interpretability. \begin{figure}[t] \centering \begin{subfigure}[t]{0.21\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_B_heatmap1.png} \end{subfigure} \begin{subfigure}[t]{0.21\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_B_heatmap2.png} \end{subfigure} \begin{subfigure}[t]{0.265\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_C_heatmap1.png} \end{subfigure} \begin{subfigure}[t]{0.265\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_C_heatmap2.png} \end{subfigure} \caption{ Latent factor comparisons ($k=3,10$) of randomly initialized low-rank model (1st and 3rd) and \texttt{MRTL}{} (2nd and 4th) for ballhandler position (left two plots) and the defender positions (right two plots). Random initialization leads to uninterpretable latent factors.} \label{fig:bball_lfs_rand} \end{figure} \section{Introduction} \label{sec:intro} Analyzing large-scale spatial data plays a critical role in sports, geology, and climate science. 
In spatial statistics, kriging or Gaussian processes are popular tools for spatial analysis \cite{cressie1992statistics}. Others have proposed various Bayesian methods such as Cox processes \cite{Miller2014,dieng2017variational} to model spatial data. However, while mathematically appealing, these methods often have difficulties scaling to high-resolution data. \begin{wrapfigure}{tr}{0.5\linewidth} \centering \begin{subfigure}[t]{0.24\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_B_heatmap1.png} \end{subfigure} \begin{subfigure}[t]{0.24\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_B_heatmap2.png} \end{subfigure} \vspace{-0.1in} \caption{ Latent factors: random (left) vs. good (right) initialization. Latent factors vary in interpretability depending on initialization.} \label{fig:intro_rand_lfs} \end{wrapfigure} We are interested in learning high-dimensional tensor latent factor models, which have been shown to be a scalable alternative for spatial analysis \cite{yu2018tensor,litvinenko2019tucker}. High resolution spatial data often contain higher-order correlations between features and locations, and tensors can naturally encode such multi-way correlations. For example, in competitive basketball play, we can predict how each player's decision to shoot is jointly influenced by their shooting style, their court position, and the positions of the defenders by simultaneously encoding these features as a tensor. Using such representations, learning tensor latent factors can directly extract higher-order correlations. A challenge with such models is their high computational cost. High-resolution spatial data is often discretized, leading to large high-dimensional tensors whose number of parameters, and hence training cost, grows exponentially with the tensor order. 
Low-rank tensor learning \cite{yu2018tensor, kossaifi2019tensorly} reduces the dimensionality by assuming low-rank structures in the data and uses tensor decomposition to discover latent semantics; for an overview of tensor learning, see review papers \cite{Kolda2009, sidiropoulos2017tensor}. However, many tensor learning methods have been shown to be sensitive to noise \cite{cheng2016scalable} and initialization \cite{anandkumar2014guaranteed}. Other numerical techniques, including random sketching \cite{wang2015fast,haupt2017near} and parallelization \cite{austin2016parallel,li2017model}, can speed up training, but they often fail to utilize the unique properties of spatial data such as spatial auto-correlation. Using latent factor models also gives rise to another issue: interpretability. It is well known that a latent factor model is generally not identifiable \cite{allman2009identifiability}, leading to uninterpretable factors that do not offer insights to domain experts. In general, the definition of interpretability is highly application dependent \cite{doshi2017towards}. For spatial analysis, one of the unique properties of spatial patterns is \textit{spatial auto-correlation}: close objects have similar values \cite{moran1950notes}, which we use as a criterion for interpretability. As latent factor models are sensitive to initialization, previous research \cite{Miller2014, Yue2014} has shown that randomly initialized latent factor models can lead to spatial patterns that violate spatial auto-correlation and hence are not interpretable (see Fig. \ref{fig:intro_rand_lfs}). In this paper, we propose a Multiresolution Tensor Learning algorithm, \texttt{MRTL}{}, to efficiently learn accurate and interpretable patterns in spatial data. \texttt{MRTL}{} is based on two key insights. First, to obtain good initialization, we train a full-rank tensor model approximately at a low resolution and use tensor decomposition to produce latent factors. 
Second, we exploit spatial auto-correlation to learn models at multiple resolutions: we train starting from a coarse resolution and iteratively finegrain to the next resolution. We provide theoretical analysis and prove the convergence properties and computational complexity of \texttt{MRTL}{}. We demonstrate on two real-world datasets that this approach is significantly faster than fixed-resolution methods. We develop several finegraining criteria to determine when to finegrain. We also consider different interpolation schemes and discuss how to finegrain in different applications. The code for our implementation is available \footnote{ \url{https://github.com/Rose-STL-Lab/mrtl}}. In summary, we: \iitem{ \itemsep0em \item propose a Multiresolution Tensor Learning (\texttt{MRTL}{}) optimization algorithm for large-scale spatial analysis. \item prove the rate of convergence for \texttt{MRTL}{}, which depends on the spectral norm of the interpolation operator. We also show the exponential computational speedup of \texttt{MRTL}{} compared with fixed-resolution training. \item develop different criteria to determine when to transition to a finer resolution and discuss different finegraining methods. \item evaluate on two real-world datasets and show that \texttt{MRTL}{} learns faster than fixed-resolution learning and can produce interpretable latent factors. } \section{Multiresolution Tensor Learning} \label{sec:mrtl} We now describe our algorithm \texttt{MRTL}{}, which addresses both the computation and interpretability issues. Two key concepts of \texttt{MRTL}{} are learning good initializations and utilizing multiple resolutions. \subsection{Initialization} In general, due to their nonconvex nature, tensor latent factor models are sensitive to initialization and can lead to uninterpretable latent factors \cite{Miller2014, Yue2014}. We use full-rank initialization in order to learn latent factors that correspond to known spatial patterns. 
We first train the full-rank tensor model of Eqn. \eqref{eq:class-full} approximately, at a low resolution. The weight tensor is then decomposed into latent factors and these values are used to initialize the low-rank model. The low-rank model in Eqn. \eqref{eq:class-low} is then trained to the final desired accuracy. As we use approximately optimal solutions of the full-rank model as initializations for the low-rank model, our algorithm produces interpretable latent factors in a variety of different scenarios and datasets. Full-rank initialization requires more computation than other simpler initialization methods. However, as the full-rank model is trained only for a small number of epochs, the increase in computation time is not substantial. We also train the full-rank model only at lower resolutions, for a further reduction in computation. Previous research \cite{Yue2014} showed that spatial regularization alone is not enough to learn spatially coherent factors, whereas full-rank initialization, though computationally costly, is able to fix this issue. We confirm the same holds true in our experiments (see Section \ref{sub:random_init}). Thus, full-rank initialization is critical for spatial interpretability. \subsection{Multiresolution} Learning a high-dimensional tensor model is generally computationally expensive and memory inefficient. We address this issue by utilizing multiple resolutions. We outline the procedure of \texttt{MRTL}{} in Alg. \ref{alg:mrtl}, where we omit the bias term in the description for clarity. \begin{algorithm}[t] \caption{Multiresolution Tensor Learning: \texttt{MRTL}{}} \label{alg:mrtl} \begin{small} \begin{algorithmic}[1] \STATE Input: initialization $\T{W}_{0}$, data $\T{X},\T{Y}$. 
\STATE Output: latent factors $\T{F}^{(R)}$ \STATE \textit{\# full rank tensor model} \FOR{each resolution $r \in \brckcur{1, \ldots, r_0}$} \STATE Initialize $t \gets 0$ \WHILE{stopping criterion not true} \STATE $t \gets t + 1$ \STATE Get a mini-batch $\T{B}$ from training set \STATE $\T{W}^{(r)}_{t+1} \gets$ \texttt{Opt}{}$\brck{\T{W}^{(r)}_{t} \mid \T{B}}$ \ENDWHILE \STATE $\T{W}^{(r+1)}$ = \texttt{Finegrain}{}$\brck{\T{W}^{(r)}}$ \ENDFOR \STATE \textit{\# tensor decomposition} \STATE $\T{F}^{(r_0)} \gets$ \texttt{CP\_ALS}{}$\brck{\T{W}^{(r_0)}}$ \STATE \textit{\# low rank tensor model} \FOR{each resolution $r \in \brckcur{r_0, \ldots, R}$} \STATE Initialize $t \gets 0$ \WHILE{stopping criterion not true} \STATE $t \gets t + 1$ \STATE Get a mini-batch $\T{B}$ from training set \STATE $\T{F}^{(r)}_{t+1} \gets$ \texttt{Opt}{}$\brck{\T{F}^{(r)}_{t} \mid \T{B}}$ \ENDWHILE \FOR{each spatial factor $n\in \{1,\cdots, N\}$} \STATE $\T{F}^{(r+1),n}$ = \texttt{Finegrain}{}$\brck{\T{F}^{(r),n}}$ \ENDFOR \ENDFOR \end{algorithmic} \end{small} \end{algorithm} We represent the resolution $r$ with superscripts and the iterate at step $t$ with subscripts, i.e. $\T{W}^{(r)}_{t}$ is $\T{W}$ at resolution $r$ at step $t$. $\T{W}_0$ is the initial weight tensor at the lowest resolution. $\T{F}^{(r)}=(A,B,C^{(r)})$ denotes all factor matrices at resolution $r$ and we use $n$ to index the factor $\T{F}^{(r), n}$. For efficiency, we train both the full rank and low rank models at multiple resolutions, starting from a coarse spatial resolution and progressively increasing the resolution. At each resolution $r$, we learn $\T{W}^{(r)}$ using the stochastic optimization algorithm of choice $\texttt{Opt}{}$ (we used Adam \citep{kingma2014adam} in our experiments). When the stopping criterion is met, we transform $\T{W}^{(r)}$ to $\T{W}^{(r+1)}$ in a process we call finegraining (\texttt{Finegrain}{}). 
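To make the outer loop of Alg.~\ref{alg:mrtl} concrete, the following is a minimal NumPy sketch for a linear model on a 1-D spatial grid. It is illustrative only: plain gradient descent stands in for \texttt{Opt}{}, a loss-plateau test stands in for the finegraining criterion, nearest-neighbor upsampling (divided by the scale factor) stands in for \texttt{Finegrain}{}, and all sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def finegrain(w):
    """Nearest-neighbor upsampling of a 1-D spatial weight vector (2x),
    divided by the scale factor so outputs stay approximately equal."""
    return np.repeat(w, 2) / 2.0

def train_until_plateau(w, X, Y, lr=0.1, tol=1e-6, max_steps=500):
    """Gradient descent on squared loss; stop when the loss stops
    improving (a stand-in for Opt plus a finegraining criterion)."""
    prev = np.inf
    for _ in range(max_steps):
        grad = 2.0 * X.T @ (X @ w - Y) / len(Y)
        w = w - lr * grad
        loss = float(np.mean((X @ w - Y) ** 2))
        if prev - loss < tol:   # criterion met: move to next resolution
            break
        prev = loss
    return w

def coarsen(X, factor):
    """Average-pool inputs to a coarser spatial grid."""
    n, d = X.shape
    return X.reshape(n, d // factor, factor).mean(axis=2)

# Synthetic data: smooth "true" spatial weights at the finest resolution,
# so coarse solutions are good initializations for finer ones.
D_fine = 8
w_true = np.sin(np.linspace(0.1, np.pi - 0.1, D_fine))
X_fine = rng.normal(size=(200, D_fine))
Y = X_fine @ w_true

# Multiresolution training: coarse -> fine, finegraining in between.
w = np.zeros(2)  # start at the lowest resolution, D = 2
for D in (2, 4, 8):
    w = train_until_plateau(w, coarsen(X_fine, D_fine // D), Y)
    if D < D_fine:
        w = finegrain(w)

print(np.round(w, 2))  # close to w_true
```

In this toy setting the coarse solution recovers block sums of the fine weights, so each finegrained iterate starts close to the optimum of the next resolution; this is what makes the early, cheap resolutions useful.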
Due to spatial auto-correlation, the trained parameters at a lower resolution will serve as a good initialization for higher resolutions. For both models, we only finegrain the factors that correspond to the resolution-dependent mode, which is the spatial mode in the context of spatial analysis. Finegraining can also be applied to other non-spatial modes for additional computational speedup, as long as a multiresolution structure exists (e.g. video or time series data). Once the full rank model has been trained up to resolution $r_0$ (which can be chosen to fit GPU memory or time constraints), we decompose $\T{W}^{(r_0)}$ using \texttt{CP\_ALS}{}, the standard alternating least squares (ALS) algorithm \cite{Kolda2009} for CP decomposition. Then the low-rank model is trained at resolutions $r_0,\dots,R$ to the final desired accuracy, finegraining to move to the next resolution. \paragraph{When to finegrain} There is a tradeoff between training times at different resolutions. While training for longer at lower resolutions significantly decreases computation, we do not want to overfit to the coarse, lower resolution data. On the other hand, training at higher resolutions can yield more accurate solutions using more detailed information. We investigate four different criteria to balance this tradeoff: 1) validation loss, 2) gradient norm, 3) gradient variance, and 4) gradient entropy. Increase in validation loss \cite{prechelt1998automatic, yao2007early} is a commonly used heuristic for early stopping. Another approach is to analyze the gradient distributions during training. For a convex function, stochastic gradient descent will converge into a noise ball near the optimal solution as the gradients approach zero. However, lower resolutions may be too coarse to learn more finegrained curvatures and the gradients will increasingly disagree near the optimal solution. We quantify the disagreement in the gradients with metrics such as norm, variance, and entropy. 
We use intuition from convergence analysis for gradient norm and variance \cite{bottou2018optimization}, and information theory for gradient entropy \cite{srinivas2012information}. Let $\T{W}_t$ and $\xi_t$ represent the weight tensor and the random variable for the sampling of minibatches at step $t$, respectively. Let $f(\T{W}_t; \xi_t) := f_t$ be the validation loss and $g(\T{W}_t; \xi_t) := g_t$ be the stochastic gradients at step $t$. The finegraining criteria are: \begin{itemize} \item Validation Loss: \small{$\mathbb{E}[f_{t+1}] - \mathbb{E}[f_t] > 0$} \item Gradient Norm: \small{$\mathbb{E}[\|g_{t+1}\|^2] - \mathbb{E}[\|g_t\|^2] > 0$} \item Gradient Variance: \small{$V(g_{t+1}) - V(g_t) > 0$} \item Gradient Entropy: \small{$S(\mathbb{E}[g_{t+1}]) - S(\mathbb{E}[g_{t}]) > 0$} \text{,} \end{itemize} where $S(p) = \sum_{i} -p_i \ln(p_i)$. One can also use thresholds, e.g. $|f_{t+1} - f_{t}| < \tau$, but as these are dependent on the dataset, we use $\tau=0$ in our experiments. One can also incorporate patience, i.e. allowing training to continue for a fixed number of epochs after the stopping condition is first met. \paragraph{How to finegrain} We discuss different interpolation schemes for different types of features. Categorical/multinomial variables, such as a player's position on the court, are one-hot encoded or multi-hot encoded onto a discretized grid. Note that as we use higher resolutions, the sum of the input values is still equal across resolutions, $\sum_{d} \T{X}^{(r)}_{:,:,d} = \sum_{d} \T{X}^{(r+1)}_{:,:,d}$. As the sum of the features remains the same across resolutions and our tensor models are multilinear, nearest neighbor interpolation should be used in order to produce the same outputs: \begin{align*} \sum_{d=1}^{D_{r}} \T{W}_{:,:,d}^{(r)} \T{X}^{(r)}_{:,:, d} = \sum_{d=1}^{D_{r+1}} \T{W}_{:,:,d}^{(r+1)} \T{X}^{(r+1)}_{:,:, d} \end{align*} as $\T{X}^{(r)}_{i,f, d} = 0$ for cells that do not contain the value. 
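As a quick sanity check of this property, the following NumPy snippet (hypothetical $4\times 4$ coarse and $8\times 8$ fine grids, weights drawn at random) verifies that nearest-neighbor finegraining of the weights leaves the output for a one-hot input unchanged; here \texttt{np.kron} with a block of ones implements the nearest-neighbor copy.

```python
import numpy as np

rng = np.random.default_rng(0)

D_c, D_f = 4, 8                       # coarse and fine grid sizes
W_c = rng.normal(size=(D_c, D_c))     # weights over coarse cells

# Nearest-neighbor finegraining: each coarse weight is copied
# into its 2x2 block of fine cells.
W_f = np.kron(W_c, np.ones((2, 2)))

# One-hot input: the player occupies coarse cell (1, 2); at the fine
# resolution the same position falls in one cell of that 2x2 block.
X_c = np.zeros((D_c, D_c)); X_c[1, 2] = 1.0
X_f = np.zeros((D_f, D_f)); X_f[2, 5] = 1.0

out_coarse = float((W_c * X_c).sum())
out_fine = float((W_f * X_f).sum())
print(out_coarse == out_fine)  # True: identical outputs, identical loss
```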
This scheme yields the same outputs and thus the same loss values across resolutions. Continuous variables that represent averages over locations, such as sea surface salinity, often have similar values at each finegrained cell at higher resolutions (as the values at coarse resolutions are subsampled or averaged from values at the higher resolution). Then $\sum_{d=1}^{D_{r+1}} \T{X}^{(r+1)}_{:,:,d} \approx 2^2 \sum_{d=1}^{D_r} \T{X}^{(r)}_{:,:,d}$, where the approximation comes from the type of downsampling used. If $\tilde{\T{W}}^{(r+1)}$ denotes the linearly interpolated weights, then \begin{align*} \sum_{d=1}^{D_{r+1}} \tilde{\T{W}}_{:,:,d}^{(r+1)} \T{X}^{(r+1)}_{:,:, d} \approx 2^2 \sum_{d=1}^{D_{r}} \T{W}_{:,:,d}^{(r)} \T{X}^{(r)}_{:,:, d} \text{ ,} \end{align*} so the interpolated weights are divided by the scale factor $\frac{D_{r+1}}{D_{r}} = 2^2$ to keep the outputs approximately equal. We use bilinear interpolation, though any other linear interpolation can be used. \section{Tensor Models for Spatial Data} \label{sec:model} We consider tensor learning in the supervised setting. We describe models for both the full-rank case and the low-rank case. An order-3 tensor is used for ease of illustration but our model covers higher order cases. \subsection{Full Rank Tensor Models} Given input data consisting of both non-spatial and spatial features, we can discretize the spatial features at $r = 1, \dots, R$ resolutions, with corresponding dimensions $D_1, \dots, D_R$. Tensor learning parameterizes the model with a weight tensor $\T{W}^{(r)} \in \mathbb{R}^{I \times F \times D_r}$ over all features, where $I$ is the number of outputs and $F$ is the number of non-spatial features. The input data is of the form $\T{X}^{(r)} \in \mathbb{R}^{I\times F \times D_r}$. Note that both the input features and the learning model are resolution dependent. $\T{Y}_i \in \mathbb{R}, i=1,\dots,I$ is the label for output $i$. 
At resolution $r$, the full rank tensor learning model can be written as \begin{eqnarray} \label{eq:class-full} \T{Y}_{i} = a\left(\sum_{f=1}^F \sum_{d=1}^{D_r} \T{W}_{i,f,d}^{(r)} \T{X}^{(r)}_{i,f, d} + b_i\right) \text{,} \end{eqnarray} where $a$ is the activation function and $b_i$ is the bias for output $i$. The weight tensor $\T{W}$ is contracted with $\T{X}$ along the non-spatial mode $f$ and the spatial mode $d$. In general, Eqn. \eqref{eq:class-full} can be extended to multiple spatial features and spatial modes, each of which can have its own set of resolution-dependent dimensions. We use a sigmoid activation function for the classification task and the identity activation function for regression. \subsection{Low Rank Tensor Model} Low rank tensor models assume a low-dimensional latent structure in $\T{W}$ which can characterize distinct patterns in the data and also alleviate model overfitting. To transform the learned tensor model to a low-rank one, we use CANDECOMP/PARAFAC (CP) decomposition \cite{hitchcock1927expression} on $\T{W}$, which assumes that $\T{W}$ can be represented as a sum of rank-1 tensors. Our method can easily be extended to other decompositions as well. Let $K$ be the CP rank of the tensor. In practice, $K$ cannot be found analytically and is often chosen to sufficiently approximate the dataset. The weight tensor $\T{W}^{(r)}$ is factorized into multiple factor matrices as \[\T{W}_{i,f,d}^{(r)} = \sum_{k=1}^K A_{i,k} B_{f,k} C^{(r)}_{d,k} \] The tensor latent factor model is \begin{eqnarray} \label{eq:class-low} \T{Y}_{i} = a \left( \sum_{f=1}^F \sum_{d=1}^{D_r} \sum_{k=1}^{K} A_{i,k} B_{f,k} C^{(r)}_{d,k}\T{X}^{(r)}_{i,f, d} + b_i\right) \text{,} \end{eqnarray} where the columns of $A, B, C^{(r)}$ are latent factors for each mode of $\T{W}$ and $C^{(r)}$ is resolution dependent. CP decomposition reduces dimensionality by assuming that $A, B, C^{(r)}$ are uncorrelated, i.e. the features are uncorrelated. 
This is a reasonable assumption depending on how the features are chosen, and it enhances spatial interpretability, as the learned spatial latent factors show common patterns regardless of the other features. \subsection{Spatial Regularization} Interpretability is in general hard to define or quantify \cite{doshi2017towards, ribeiro2016should, lipton2018mythos, molnar2019}. In the context of spatial analysis, we deem a latent factor interpretable if it produces a spatially coherent pattern exhibiting spatial auto-correlation. To this end, we utilize a spatial regularization kernel \cite{lotte2010regularizing, Miller2014, Yue2014} and extend it to the tensor case. Let $d = 1,\dots,D_r$ index all locations of the spatial dimension for resolution $r$. The spatial regularization term is: \begin{equation} \label{eq:spatial-reg} R_s = \sum_{d=1}^{D_r} \sum_{d'=1}^{D_r} K_{d, d'} \| \T{W}_{:, :, d} - \T{W}_{:, :, d'}\|_F^2 \text{ ,} \end{equation} where $\| \cdot \|_F$ denotes the Frobenius norm and $K_{d, d'}$ is the kernel that controls the degree of similarity between locations. We use a simple RBF kernel with bandwidth hyperparameter $\sigma$: \begin{equation} \label{eq:spatial-rbf} K_{d, d'} = \exp\left(-\|l_d - l_{d'}\|^2 / \sigma\right) \text{ ,} \end{equation} where $l_d$ denotes the location of index $d$. The distances are normalized across resolutions such that the maximum distance between two locations is $1$. The kernels can be precomputed for each resolution. If there are multiple spatial modes, we apply spatial regularization across all of them. We additionally use $L_2$ regularization to encourage smaller weights. The optimization objective function is \begin{equation} f(\T{W}) = L(\T{W};\T{X},\T{Y}) + \lambda_R R(\T{W}) \text{ ,} \end{equation} where $L$ is a task-dependent supervised learning loss, $R(\T{W})$ is the sum of the spatial and $L_2$ regularization terms, and $\lambda_R$ is the regularization coefficient. 
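For illustration, $R_s$ in Eqn.~\eqref{eq:spatial-reg} with the kernel of Eqn.~\eqref{eq:spatial-rbf} can be computed in vectorized form. The following NumPy sketch (function name and all sizes are ours, chosen for the example: $I=2$, $F=3$, $D_r=4$) expands the pairwise Frobenius distances via a Gram matrix:

```python
import numpy as np

def spatial_reg(W, locs, sigma=0.1):
    """R_s = sum_{d,d'} K_{d,d'} ||W[:, :, d] - W[:, :, d']||_F^2
    with an RBF kernel over normalized locations."""
    diff = locs[:, None, :] - locs[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / sigma)   # (D, D) kernel
    # Pairwise squared Frobenius distances between spatial slices,
    # via ||u - v||^2 = ||u||^2 + ||v||^2 - 2 u.v on flattened slices.
    flat = W.reshape(-1, W.shape[-1])                 # (I*F, D)
    sq = np.sum(flat ** 2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * flat.T @ flat
    return float(np.sum(K * D2))

# Toy example: 2 outputs, 3 features, a 4-point spatial grid on [0, 1].
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3, 4))
locs = np.linspace(0, 1, 4)[:, None]   # normalized so max distance is 1
print(spatial_reg(W, locs))
```

In practice the kernel matrix $K$ would be precomputed once per resolution, as noted above.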
\section{Related Work} \paragraph{Spatial Analysis} Discovering spatial patterns has significant implications in scientific fields such as human behavior modeling, neural science, and climate science. Early work in spatial statistics has contributed greatly to spatial analysis through measures of spatial auto-correlation such as Moran's I \cite{moran1950notes} and the Getis-Ord general G \cite{getis1992analysis}. Geographically weighted regression \cite{brunsdon1998geographically} accounts for spatial heterogeneity with a local version of spatial regression but fails to capture higher order correlations. Kriging or Gaussian processes are popular tools for spatial analysis but they often require carefully designed variograms (also known as kernels) \cite{cressie1992statistics}. Other Bayesian hierarchical models favor spatial point processes to model spatial data \cite{diggle2013spatial,Miller2014,dieng2017variational}. These frameworks are conceptually elegant but often computationally intractable. \paragraph{Tensor Learning} Latent factor models utilize correlations in the data to reduce the dimensionality of the problem, and have been used extensively in multi-task learning \cite{romera2013multilinear} and recommendation systems \cite{lee2001algorithms}. Tensor learning \cite{zhou2013tensor, bahadori2014fast, haupt2017near} uses tensor latent factor models to learn higher-order correlations in the data in a supervised fashion. In particular, tensor latent factor models aim to learn the higher-order correlations in spatial data by assuming low-dimensional representations among features and locations. However, high-order tensor models are non-convex by nature, suffer from the curse of dimensionality, and are notoriously hard to train \cite{Kolda2009,sidiropoulos2017tensor}. There are many efforts to scale up tensor computation, e.g., parallelization \citep{austin2016parallel} and sketching \citep{wang2015fast, haupt2017near, li2018sketching}. 
In this work, we propose an optimization algorithm to learn tensor models at multiple resolutions that is not only fast but can also generate interpretable factors. We focus on tensor latent factor models for their wide applicability to spatial analysis and interpretability. While deep neural networks models can be more accurate, they are computationally more expensive and are difficult to interpret. \paragraph{Multiresolution Methods} Multiresolution methods have been applied successfully in machine learning, both in latent factor modeling \cite{kondor2014multiresolution,ozdemir2017multiscale} and deep learning \cite{reed2017parallel,serban2017multiresolution}. For example, multiresolution matrix factorization \citep{kondor2014multiresolution, ding2017multiresolution} and its higher order extensions \citep{schifanella2014multiresolution, ozdemir2017multiscale, han2018multiresolution} apply multi-level orthogonal operators to uncover the multiscale structure in a single matrix. In contrast, our method aims to speed up learning by exploiting the relationship among multiple tensors of different resolutions. Our approach resembles the multigrid method in numerical analysis for solving partial differential equations \cite{trottenberg2000multigrid, hiptmair1998multigrid}, where the idea is to accelerate iterative algorithms by solving a coarse problem first and then gradually finegraining the solution. \section{Theoretical Analysis} \label{supp:theory} \subsection{Convergence Analysis} \label{supp:theory-conv} \begin{theorem}\cite{bottou2018optimization} If the step size $\eta_t \equiv \eta \leq \frac{1}{Lc_g}$, then a fixed resolution solution satisfies \[{\mathbb E}[\| \V{w}_{t+1} - \V{w}_{\star}\|^2_2] \leq \gamma^t[{\mathbb E}[\|\V{w}_{0} - \V{w}_{\star}\|^2_2] - \beta] + \beta\] where $\gamma = 1 - 2 \eta \mu$, $\beta=\frac{\eta \sigma_g^2}{2\mu}$, and $\V{w}_\star$ is the optimal solution. 
\end{theorem} \proof For a single step update, \begin{align} \|\V{w}_{t+1} - \V{w}_\star \|_2^2 & = \|\V{w}_t -\eta_t \V{g}(\V{w}_t;\xi_t) - \V{w}_\star \|_2^2 \nonumber \\ & = \|\V{w}_t - \V{w}_\star \|_2^2 + \|\eta_t \V{g}(\V{w}_t;\xi_t) \|_2^2 - 2\eta_t \V{g}(\V{w}_t;\xi_t)^\top(\V{w}_t-\V{w}_\star ) \end{align} By the law of total expectation, \begin{align} {\mathbb E}[\V{g}(\V{w}_t;\xi_t)^\top(\V{w}_t-\V{w}_\star)] &= {\mathbb E}[{\mathbb E}[\V{g}(\V{w}_t;\xi_t)^\top(\V{w}_t-\V{w}_\star) | \xi_{<t} ]] \nonumber \\ &= {\mathbb E}[(\V{w}_t-\V{w}_\star)^\top {\mathbb E}[\V{g}(\V{w}_t;\xi_t)| \xi_{<t}]] \nonumber \\ &= {\mathbb E}[(\V{w}_t-\V{w}_\star)^\top \triangledown f(\V{w}_t)] \end{align} From strong convexity, \begin{align} \langle \triangledown f(\V{w}_t) - \triangledown f(\V{w}_\star) , \V{w}_t- \V{w}_\star \rangle = \langle \triangledown f(\V{w}_t) , \V{w}_t- \V{w}_\star \rangle &\geq \mu \|\V{w}_t- \V{w}_\star\|_2^2 \text{,} \end{align} which implies ${\mathbb E}[(\V{w}_t-\V{w}_\star)^\top \triangledown f(\V{w}_t)] \geq \mu{\mathbb E}[\|\V{w}_t- \V{w}_\star\|_2^2]$, as $\triangledown f(\V{w}_\star)=0$.
Putting it all together yields \begin{align} {\mathbb E}[\|\V{w}_{t+1} - \V{w}_\star \|_2^2 ] \leq (1-2\eta_t \mu){\mathbb E}[\|\V{w}_t- \V{w}_\star\|_2^2] + (\eta_t \sigma_g)^2 \label{eqn:single_step} \end{align} As $\eta_t = \eta$, we complete the contraction by setting $\beta = \frac{(\eta \sigma_g)^2}{2\eta \mu}$: \begin{align} {\mathbb E}[\|\V{w}_{t+1} - \V{w}_\star \|_2^2 ] -\beta \leq (1-2\eta \mu) ({\mathbb E}[\|\V{w}_t- \V{w}_\star\|_2^2] -\beta) \end{align} Repeating the iterations, \begin{align} {\mathbb E}[\|\V{w}_{t+1} - \V{w}_\star \|_2^2 ] -\beta \leq (1-2\eta \mu)^t ({\mathbb E}[\|\V{w}_{0} - \V{w}_\star\|_2^2 ] -\beta ) \end{align} Rearranging the terms, we get \begin{equation} {\mathbb E}[\|\V{w}_{t+1} - \V{w}_\star \|_2^2] \leq (1-2\eta \mu)^t{\mathbb E}[\|\V{w}_{0} - \V{w}_\star \|_2^2] + (1 - (1-2\eta \mu)^t)\frac{(\eta \sigma_g)^2}{2\eta \mu} \end{equation} \qed \begin{theorem} If the step size $\eta_t \equiv \eta \leq \frac{1}{Lc_g}$, then the \texttt{MRTL}{} solution satisfies \begin{align*} &{\mathbb E}[\|\V{w}^{(r)}_{t} - \V{w}_{\star}\|^2_2] \leq \gamma^{t} (\|P\|^2_{op})^r\, {\mathbb E}[\|\V{w}^{(1)}_{0} - \V{w}^{(1)}_{\star}\|^2_2] \\ & \qquad - \gamma^{t} \|P\|^2_{op} \beta + \gamma^{t_2} (\|P\|^2_{op}\beta - \beta) + O(1) \end{align*} where $\gamma = 1 - 2 \eta \mu$, $\beta=\frac{\eta \sigma_g^2}{2\mu}$, and $\|P\|_{op}$ is the operator norm of the interpolation operator $P$. \end{theorem} Consider a two resolution case where $R=2$ and $\V{w}^{(2)}_\star= \V{w}_\star$. Let $t_r$ be the total number of iterations of resolution $r$. Based on Eqn.
\eqref{eqn:single_step}, for a fixed resolution algorithm, after $t_1+t_2$ number of iterations, \[{\mathbb E}[\|\V{w}_{t_1+t_2} - \V{w}_{\star}\|_2^2 ] -\beta \leq (1-2\eta \mu)^{t_1+t_2} ({\mathbb E}[\|\V{w}_{0} - \V{w}_\star \|_2^2] -\beta ) \] For multiresolution, where we train on resolution $r=1$ first, we have \begin{equation*} {\mathbb E}[\|\V{w}^{(1)}_{t_1} - \V{w}^{(1)}_\star \|_2^2] - \beta \leq (1-2\eta \mu)^{t_1} ({\mathbb E}[\|\V{w}^{(1)}_{0} - \V{w}^{(1)}_\star\|_2^2 ] - \beta) \end{equation*} At resolution $r=2$, we have \begin{equation} {\mathbb E}[\|\V{w}^{(2)}_{t_2} - \V{w}_\star \|_2^2 ] -\beta \leq (1-2\eta \mu)^{t_2}( {\mathbb E}[\|\V{w}_{0}^{(2)} - \V{w}_\star \|_2^2] - \beta ) \label{eqn:multi_coarse} \end{equation} Using interpolation, we have $\V{w}_{0}^{(2)}=P \V{w}_{t_1}^{(1)}$. Given the spatial autocorrelation assumption, we have \[\| \V{w}_\star^{(2)} - P \V{w}_\star^{(1)} \|_{2} \leq \epsilon\] By the definition of the operator norm and the triangle inequality, \[{\mathbb E}[\|\V{w}_{0}^{(2)} - \V{w}^{(2)}_\star \|_2^2] = {\mathbb E} [ \|P \V{w}_{t_1}^{(1)} - \V{w}^{(2)}_\star \|^2_2] \leq \|P\|_{op}^2 {\mathbb E}[ \| \V{w}_{t_1}^{(1)} - \V{w}^{(1)}_\star \|_2^2 ] + \epsilon^2 \] Combined with Eqn.
\eqref{eqn:multi_coarse}, we have \begin{align} {\mathbb E}[\|\V{w}^{(2)}_{t_2} - \V{w}_\star \|_2^2 ] - \beta \leq & (1-2\eta \mu)^{t_2} ( \|P\|_{op}^2 {\mathbb E}[ \| \V{w}_{t_1}^{(1)} - \V{w}^{(1)}_\star \|_2^2 ] + \epsilon^2 - \beta) \\ \leq & (1-2\eta \mu)^{t_1+t_2} \|P\|_{op}^2 ({\mathbb E}[\|\V{w}^{(1)}_{0} - \V{w}^{(1)}_\star\|_2^2 ]-\beta) + (1-2\eta \mu)^{t_2} ( \|P\|_{op}^2\beta + \epsilon^2 - \beta) \label{eqn:two_resolution} \end{align} If we initialize $\V{w}_0$ and $\V{w}^{(1)}_0$ such that $\|\V{w}^{(1)}_0-\V{w}^{(1)}_\star\|_2^2 = \|\V{w}_0-\V{w}_\star\|_2^2$, we obtain the \texttt{MRTL}{} solution \begin{align} {\mathbb E}[\|\V{w}^{'}_{t_1+t_2} - \V{w}_\star \|_2^2 ] - \alpha \leq (1-2\eta \mu)^{t_1+t_2} \|P\|_{op}^2 ({\mathbb E}[\|\V{w}^{'}_{0} - \V{w}_\star\|_2^2 ] -\alpha) \end{align} for some $\alpha$ that completes the contraction. Repeating the resolution iterates in Eqn. \eqref{eqn:two_resolution}, we reach our conclusion. \qed \subsection{Computational Complexity Analysis} \label{supp:comp-proof} In this section, we analyze the computational complexity for \texttt{MRTL}{} (Algorithm \ref{alg:mrtl}). Assuming that $\nabla f$ is Lipschitz continuous, we can view gradient-based optimization as a fixed-point iteration operator $F$ with a contraction constant of $\gamma \in (0,1)$ (note that \emph{stochastic} gradient descent converges to a noise ball instead of a fixed point). \begin{eqnarray*} \V{w} \leftarrow F(\V{w}), \hspace{10pt} F:=I-\eta \nabla f, \| F(\V{w} ) - F(\V{w}') \|\leq \gamma \| \V{w} - \V{w}'\|. \end{eqnarray*} Let $\V{w}^{(r)}_\star $ be the optimal estimator at resolution $r$. Suppose for each resolution $r$, we use the following finegrain criterion: \eq{ \| \V{w}_{t}^{(r)} - \V{w}_{t-1}^{(r)}\| \leq \fr{C_0 D_r}{\gamma (1-\gamma)}, \label{supp:eqn:finegrain} } where $t$ indexes the iterations at level $r$. The algorithm terminates when the estimation error reaches $\fr{C_0 R}{(1-\gamma)^2}$.
The following main theorem characterizes the speed-up gained by multiresolution learning \texttt{MRTL}{} w.r.t. the contraction factor $\gamma$ and the terminal estimation error $\epsilon$. \begin{theorem} Suppose the fixed point iteration operator (gradient descent) for the optimization algorithm has a contraction factor (Lipschitz constant) of $\gamma$. Then the multiresolution learning procedure is faster than that of the fixed resolution algorithm by a factor of $ \log\fr{1}{(1-\gamma) \epsilon}$, with $\epsilon$ as the terminal estimation error. \label{supp:thm:mmt} \end{theorem} We prove several useful Lemmas before proving the main Theorem \ref{supp:thm:mmt}. The following lemma analyzes the computational cost of the \emph{fixed-resolution} algorithm. \begin{lemma} Given a fixed point iteration operator with a contraction factor $\gamma$, the computational complexity of fixed-resolution training for a $p$-order tensor with rank $K$ is \eq{ \mathcal{C} = \mathcal{O}\brck{ \fr{1}{|\log \gamma|} \cdot \log \left(\fr{1 }{(1-\gamma)\epsilon}\right) \cdot \fr{Kp}{(1-\gamma)^2\epsilon}}. } \label{supp:lemma:fixed} \end{lemma} \proof At a high level, we can prove this by choosing a small enough resolution $r$ such that the approximation error is bounded with a fixed number of iterations. Let $\V{w}_\star^{(r)}$ be the optimal estimate at resolution $r$ and $\V{w}_t$ be the estimate at step $t$. Then \begin{equation} \| \V{w}_\star -\V{w}_t \| \leq \| \V{w}_\star - \V{w}^{(r)}_\star \| + \|\V{w}^{(r)}_\star - \V{w}_t \| \leq \epsilon. \end{equation} We pick a fixed resolution $r$ small enough such that \eq{\| \V{w}_\star-\V{w}_\star^{(r)}\| \leq \fr{\epsilon}{2},} then using the termination criterion $\| \V{w}_\star -\V{w}^{(r)}_\star \| \leq \fr{C_0 R}{(1-\gamma)^2}$ gives $D_r = \Omega ((1-\gamma)^2\epsilon)$ where $D_r$ is the discretization size at resolution $r$.
Initialize $\V{w}_0=0$ and apply $F$ to $\V{w}$ for $t$ times such that \eq{ \fr{\gamma^t}{2(1-\gamma) } \|F(\V{w}_0) \| \leq \fr{\epsilon}{2}. } As $\V{w}_0=0$ and $\|F(\V{w}_0) \| \leq 2C$, we obtain \eq{ t\leq \fr{1}{|\log \gamma|} \cdot \log \brck{\fr{2C}{(1-\gamma)\epsilon}}. } Note that for an order $p$ tensor with rank $K$, the computational complexity of every iteration in \texttt{MRTL}{} is $ \mathcal{O}(Kp/D_r)$ with $D_r$ as the discretization size. Hence, the computational complexity of the fixed resolution training is \eq{ \mathcal{C} &= \mathcal{O}\brck{ \fr{1}{|\log \gamma|} \cdot \log \brck{\fr{1}{ (1-\gamma)\epsilon}} \cdot \fr{Kp}{D_r} } \notag\\ &= \mathcal{O}\brck{ \fr{1}{|\log \gamma|} \cdot \log \brck{\fr{1 }{(1-\gamma)\epsilon}} \cdot \fr{Kp}{(1-\gamma)^2\epsilon} }. \notag \qed } Given a spatial discretization $r$, we can construct an operator $F_r$ that learns discretized tensor weights. The following lemma relates the estimation error with resolution: \begin{lemma}\citep{nash2000multigrid} For each resolution level $r = 1,\cdots,R$, there exist constants $C_1$ and $C_2$ such that the fixed point iteration with discretization size $D_r$ has an estimation error: \eq{ \|F(\V{w}) - F^{(r)}(\V{w})\|\leq (C_1 + \gamma C_2 \|\V{w}\|) D_r} \label{supp:lemma:disc} \end{lemma} \proof See \citep{nash2000multigrid} for details. We have obtained the discretization error for the fixed point operation at any resolution. Next we analyze the number of iterations $t_r$ needed at each resolution $r$ before finegraining.
\begin{lemma} For every resolution $r=1,\dots,R$, there exists a constant $C'$ such that the number of iterations $t_r$ before finegraining satisfies \eq{ t_r \leq C' / |\log \gamma| } \label{supp:lemma:iter_level} \end{lemma} \proof According to the fixed point iteration definition, we have for each resolution $r$: \begin{eqnarray} \|F_r(\V{w}_{t_r}) - \V{w}_{t_r}^{(r)} \| &\leq& \gamma^{t_r-1} \| F_r(\V{w}^{(r)}_0) - \V{w}^{(r)}_0\| \\ &\leq& \gamma^{t_r-1} \frac{C_0 D_r}{1-\gamma}\\ &\leq & C'\gamma^{t_r-1} \end{eqnarray} using the definition of the finegrain criterion. \qed By combining Lemma \ref{supp:lemma:iter_level} with the computational cost per iteration, we can compute the total computational cost for our \texttt{MRTL}{} algorithm, which is proportional to the total number of iterations for all resolutions: \eq{ \mathcal{C}_{\texttt{MRTL}{}} &= \mathcal{O}\brck{\fr{1}{|\log \gamma|}\brcksq {(D_r/Kp)^{-1} +(2 D_r/Kp)^{-1} + (4 D_r/Kp)^{-1} + \cdots} } \notag\\ &=\mathcal{O}\brck{\fr{1 }{|\log \gamma| }\brck{\fr{Kp}{D_r}}\brcksq{1 + \fr{1}{2} + \fr{1}{4} +\cdots} } \notag\\ &=\mathcal{O}\brck{\fr{1 }{|\log \gamma|} \brck{ \fr{Kp}{D_r} } \brcksq{ \fr{1-(\fr{1}{2}) ^{n}}{1-\fr{1}{2}} } } \notag\\ &= \mathcal{O}\brck{\fr{1 }{|\log \gamma|} \brck{\fr{Kp}{(1-\gamma)^2\epsilon}}}, } where the last step uses the termination criterion in (\ref{supp:eqn:finegrain}). Comparing with the complexity analysis for the fixed resolution algorithm in Lemma \ref{supp:lemma:fixed}, we complete the proof. \qed \section{Experiment Details} \label{supp:experiments} \paragraph{Basketball} We list implementation details for the basketball dataset. We focus only on half-court possessions, where all players have crossed into the half court as in \cite{Yue2014}. The ball must also be inside the court and be within a 4 foot radius of the ballhandler. We discard any passing/turnover events and do not consider frames with free throws.
For the ball handler location $\{D_r^1\}$, we discretize the half-court into resolutions $4 \times 5, 8 \times 10, 20 \times 25, 40 \times 50$. For the relative defender locations, at the full resolution, we choose a $12 \times 12$ grid around the ball handler where the ball handler is located at $(6, 2)$ (more space in front of the ball handler than behind him/her). We also consider a smaller grid around the ball handler for the defender locations, assuming that defenders that are far away from the ball handler do not influence shooting probability. We use $6 \times 6, 12 \times 12$ for defender positions. Let us denote the pair of resolutions as $(D_r^1, D_r^2)$. We train the full-rank model at resolutions $(4 \times 5, 6 \times 6), (8 \times 10, 6 \times 6), (8 \times 10, 12 \times 12)$ and the low-rank model at resolutions $(8 \times 10, 12 \times 12), (20 \times 25, 12 \times 12), (40 \times 50, 12 \times 12)$. There is a notable class imbalance in labels (88\% of data points have zero labels), so we use a weighted cross entropy loss with the inverse of class counts as weights. For the low-rank model, we use tensor rank $K=20$. The performance trend of \texttt{MRTL}{} is similar across a variety of tensor ranks. $K$ should be chosen appropriately to the desired level of approximation. \paragraph{Climate} We describe the data sources used for climate. The precipitation data comes from the PRISM group \cite{prism}, which provides monthly estimates at $1/24^\circ$ spatial resolution across the continental U.S. from 1895 to 2018. For oceanic data we use the EN4 reanalysis product \cite{en4}, which provides monthly estimates for ocean salinity and temperature at $1^\circ$ spatial resolution across the globe from 1900 to the present (see Fig. \ref{fig:climate_data}). We constrain our spatial analysis to the range [$-180^\circ$W, $0^\circ$W] and [$-20^\circ$S, $60^\circ$N], which encapsulates the area around North America and a large portion of South America.
The ocean data is non-stationary, with the variance of the data increasing over time. This is likely due to improvement in observational measurements of ocean temperature and salinity over time, which reduces the amount of interpolation needed to generate an estimate for a given month. After detrending and deseasonalizing, we split the train, validation, and test sets using random consecutive sequences so that their samples come from a similar distribution. We train the full-rank model at resolutions $4\times 9$ and $8 \times 18$ and the low-rank model at resolutions $8\times18$, $12 \times 27$, $24 \times 54$, $40 \times 90$, $60 \times 135$, and $80 \times 180$. For finegraining criteria, we use a patience factor of 4, i.e. training was terminated when a finegraining criterion was reached a total of 4 times. Both validation loss and gradient statistics were relatively noisy during training (possibly due to a small number of samples), leading to early termination without the patience factor. During finegraining, the weights were upsampled to the higher resolution using bilinear interpolation and then scaled by the ratio of the number of inputs for the higher resolution to the number of inputs for the lower resolution (as described in Section \ref{sec:mrtl}) to preserve the magnitude of the prediction. \paragraph{Details} We trained models on the basketball dataset using 4 RTX 2080 Ti GPUs, while the climate experiments were performed on a separate workstation with 1 RTX 2080 Ti GPU. The computation times of the fixed-resolution and \texttt{MRTL}{} model were compared on the same setup for all experiments.
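The finegraining step just described (bilinear upsampling followed by rescaling) can be sketched as follows. This is our own illustrative helper, not the project's code; it assumes a 2-D grid of spatial weights and applies the input-count scaling exactly as stated in the text:

```python
import numpy as np

def finegrain(w, new_shape):
    """Upsample a 2-D weight grid via bilinear interpolation, then rescale by
    the ratio of input counts (higher / lower resolution), as described in the
    text, to preserve the prediction magnitude. Hypothetical helper."""
    old_h, old_w = w.shape
    new_h, new_w = new_shape
    # bilinear interpolation is separable: linear interpolation per axis
    rows = np.array([np.interp(np.linspace(0, 1, new_w),
                               np.linspace(0, 1, old_w), r) for r in w])
    out = np.array([np.interp(np.linspace(0, 1, new_h),
                              np.linspace(0, 1, old_h), c) for c in rows.T]).T
    scale = (new_h * new_w) / (old_h * old_w)   # ratio of the numbers of inputs
    return out * scale
```

A coarse $2 \times 2$ grid upsampled to $4 \times 4$ keeps its corner structure while every weight is scaled by the input-count ratio of 4.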
\subsection{Hyperparameters} \begin{table}[h] \centering \footnotesize \begin{tabular}{l|l|l} Hyperparameter & Basketball & Climate \\ \hline Batch size & $32 - 1024$ & $8 - 128$ \\ Full-rank learning rate $\eta$ & $10^{-3} - 10^{-1}$ & $10^{-4} - 10^{-1}$ \\ Full-rank regularization $\lambda$ & $10^{-5} - 10^{0}$ & $10^{-4} - 10^{-1}$ \\ Low-rank learning rate $\eta$ & $10^{-5} - 10^{-1}$ & $10^{-4} - 10^{-1}$ \\ Low-rank regularization $\lambda$ & $10^{-5} - 10^{0}$ & $10^{-4} - 10^{-1}$ \\ Spatial regularization $\sigma$ & $0.03 - 0.2$ & $0.03 - 0.2$ \\ Learning rate decay $\gamma$ & $0.7 - 0.95$ & $0.7 - 0.95$ \end{tabular} \caption{Search range for \texttt{Opt}{} hyperparameters} \label{supp:tab_hyperp} \end{table} Table \ref{supp:tab_hyperp} shows the search ranges of all hyperparameters considered. We performed separate random searches over this search space for \texttt{MRTL}{}, the fixed-resolution model, and the randomly initialized low-rank model. We also separate the learning rate $\eta$ and regularization coefficient $\lambda$ between the full-rank and low-rank models. \subsection{Accuracy and Convergence} \begin{figure}[ht] \centering \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\textwidth]{images/bball/multi_fixed_full_val_loss.png} \end{subfigure} \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\textwidth]{images/bball/multi_fixed_low_val_loss.png} \end{subfigure} \caption{Basketball: Loss curves of \texttt{MRTL}{} vs. the fixed-resolution model for the full rank (left) and low rank model (right). The vertical lines indicate finegraining to the next resolution.} \label{fig:bball_multi_fixed_loss} \end{figure} Fig. \ref{fig:bball_multi_fixed_loss} shows the loss curves of \texttt{MRTL}{} vs. the fixed resolution model for the full rank and low rank case. They show a similar convergence trend, where the fixed-resolution model is much slower than \texttt{MRTL}{}.
\subsection{Finegraining Criteria} Table \ref{tab:stop_cond} lists the results for the different finegraining criteria. In the classification case, we see that validation loss reaches much faster convergence than other gradient-based criteria in the full-rank case, while the gradient-based criteria are faster for the low-rank model. All criteria can reach similar F1 scores. For the regression case, all stopping criteria converge to a similar loss in roughly the same amount of time for the full-rank model. For the low-rank model, validation loss appears to converge more quickly and to a lower loss value. \begin{table}[h] \footnotesize \centering \captionsetup{justification=centering} \caption{Runtime and prediction performance comparison of different finegraining criteria} \begin{tabular}{l|l|c|c|c|c|c|c} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Full-Rank} & \multicolumn{3}{c}{Low-Rank} \\ \cmidrule{3-8} & & Time [s] & Loss & F1 & Time [s] & Loss & F1 \\ \midrule \multirow{4}{*}{Basketball} & Validation loss & \textbf{1230 \tiny{$\pm 74.1$}} & 0.699 \tiny{$\pm 0.00237$} & 0.607 \tiny{$\pm 0.00182$} & 2009 \tiny{$\pm 715$} & 0.868 \tiny{$\pm 0.0399$} & 0.475 \tiny{$\pm 0.0121$}\\ & Gradient norm & 7029 \tiny{$\pm 759$} & 0.703 \tiny{$\pm 0.00216$} & \textbf{0.610 \tiny{$\pm 0.00149$}} & \textbf{912 \tiny{$\pm 281$}} & 0.883 \tiny{$\pm 0.00664$} & 0.476 \tiny{$\pm 0.00270$} \\ & Gradient variance & 7918 \tiny{$\pm 1949$} & 0.701 \tiny{$\pm 0.00333$} & 0.609 \tiny{$\pm 0.00315$} & 933 \tiny{$\pm 240$} & 0.883 \tiny{$\pm 0.00493$} & \textbf{0.476 \tiny{$\pm 0.00197$}} \\ & Gradient entropy & 8715 \tiny{$\pm 957$} & 0.697 \tiny{$\pm 0.00551$} & 0.597 \tiny{$\pm 0.00737$} & 939 \tiny{$\pm 259$} & 0.886 \tiny{$\pm 0.00248$} & 0.475 \tiny{$\pm 0.00182$} \\ \hline \multirow{4}{*}{Climate} & Validation loss & 1.04 \tiny{$\pm 0.115$} & \textbf{0.0448} \tiny{$\pm 0.0108$} & \text{-} & \textbf{37.4} \tiny{$\pm 28.7$} & \textbf{0.0284} 
\tiny{$\pm 0.00171$} & \text{-} \\ & Gradient norm & 1.11 \tiny{$\pm 0.0413$} & 0.0506 \tiny{$\pm 0.00853$} & \text{-} & 59.1 \tiny{$\pm 16.9$} & 0.0301 \tiny{$\pm 0.00131$} & \text{-} \\ & Gradient variance & 1.14 \tiny{$\pm 0.0596$} & 0.0458 \tiny{$\pm 0.00597$} & \text{-} & 62.9 \tiny{$\pm 14.4$} & 0.0305 \tiny{$\pm 0.00283$} & \text{-} \\ & Gradient entropy & \textbf{0.984 \tiny{$\pm 0.0848$}} & 0.0490 \tiny{$\pm 0.0144$} & \text{-} & 48.4 \tiny{$\pm 21.1$} & 0.0331 \tiny{$\pm 0.00949$} & \text{-} \\ \bottomrule \end{tabular} \label{tab:stop_cond} \end{table} \subsection{Random initialization} Fig. \ref{fig:sup_bball_lfs_B} shows all latent factors after training \texttt{MRTL}{} vs a randomly initialized low-rank model for ballhandler position. We can see clearly that full-rank initialization produces spatially coherent factors while random initialization can produce some uninterpretable factors (e.g. the latent factors for $k=3,4,5,7,19,20$ are not semantically meaningful). Fig. \ref{fig:sup_bball_lfs_C} shows latent factors for the defender position spatial mode, and we can draw similar conclusions about random initialization. \begin{figure}[!h] \centering \begin{subfigure}[c]{0.35\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_40x50_12x12_B_heatmap.png} \end{subfigure} \begin{subfigure}[c]{0.35\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_40x50_12x12_B_heatmap.png} \end{subfigure} \caption{ Basketball: Latent factors of ball handler position after training \texttt{MRTL}{} (left) and a low-rank model using random initialization (right). The factors have been normalized to (-1,1) so that reds are positive and blues are negative. 
The latent factors are numbered left to right, top to bottom.} \label{fig:sup_bball_lfs_B} \end{figure} \begin{figure}[!h] \centering \begin{subfigure}[c]{0.35\columnwidth} \includegraphics[width=\textwidth]{images/bball/multi_40x50_12x12_C_heatmap.png} \end{subfigure} \begin{subfigure}[c]{0.35\columnwidth} \includegraphics[width=\textwidth]{images/bball/rand_40x50_12x12_C_heatmap.png} \end{subfigure} \caption{ Basketball: Latent factors of relative defender positions after training \texttt{MRTL}{} (left) and a low-rank model using random initialization (right). The factors have been normalized to (-1,1) so that reds are positive and blues are negative. The green dot represents the ballhandler at (6, 2). The latent factors are numbered left to right, top to bottom.} \label{fig:sup_bball_lfs_C} \end{figure} \section{Theoretical Analysis.} \subsection{Convergence} We prove the convergence rate for \texttt{MRTL}{} with a single spatial mode and one-dimensional output, where the weight tensor reduces to a weight vector $\V{w}$. We defer all proofs to Appendix \ref{supp:theory}. For the loss function $f$ and a stochastic sampling variable $\xi$, the optimization problem is: \begin{equation} \V{w}_\star = \text{argmin}\ {\mathbb E}[f(\V{w};\xi)] \end{equation} We consider a fixed-resolution model that follows Alg. \ref{alg:mrtl} with $r=\{R\}$, i.e. only the final resolution is used. 
For a fixed-resolution miniSGD algorithm, under common assumptions in convergence analysis: \begin{itemize} \item $f$ is $\mu$-strongly convex and $L$-smooth \item (unbiased) gradient ${\mathbb E}[g(\V{w}_t; \V{\xi}_t)] =\triangledown f(\V{w}_t)$ given $\xi_{<t}$ \item (variance) for all $\V{w}$, ${\mathbb E}[\|g(\V{w}; \xi)\|^2_2] \leq \sigma_g^2 + c_g \|\triangledown f(\V{w})\|_2^2 $ \end{itemize} \begin{theorem}\cite{bottou2018optimization} If the step size $\eta_t \equiv \eta \leq \frac{1}{Lc_g}$, then a fixed resolution solution satisfies \begin{align*} {\mathbb E}[\|\V{w}_{t+1} - \V{w}_{\star}\|^2_2] \leq & \gamma^t({\mathbb E}[\|\V{w}_{0} - \V{w}_{\star}\|^2_2] - \beta) + \beta \text{,} \end{align*} where $\gamma = 1 - 2 \eta \mu$, $\beta=\frac{\eta \sigma_g^2}{2\mu}$, and $\V{w}_\star$ is the optimal solution. \end{theorem} which gives $O(1/t)+O(\eta)$ convergence. At resolution $r$, we define the number of total iterations as $t_r$, and the weights as $\V{w}^{(r)}$. We let $D_r$ denote the number of dimensions at $r$ and we assume a dyadic scaling between resolutions such that $D_{r+1}=2D_r$. We define finegraining using an interpolation operator $P$ such that $\V{w}^{(r+1)}_{0} = P \V{w}^{(r)}_{t_r}$ as in \cite{bramble2019multigrid}. For the simple case of a 1D spatial grid where $\V{w}_t^{(r)}$ has spatial dimension $D_r$, $P$ would be a Toeplitz matrix of dimension $2D_r \times D_r$. For example, for linear interpolation with $D_r=2$, \begin{align*} P \V{w}^{(r)} = \frac{1}{2}\begin{bmatrix} 1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2 \\ \end{bmatrix} \begin{bmatrix} \V{w}_1^{(r)} \\ \V{w}_2^{(r)} \end{bmatrix} = \begin{bmatrix} \V{w}_1^{(r)}/2 \\ \V{w}_1^{(r)} \\ \V{w}_1^{(r)}/2 + \V{w}_2^{(r)}/2 \\ \V{w}_2^{(r)} \end{bmatrix} \text{.} \end{align*} Any interpolation scheme can be expressed in this form.
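The $D_r = 2$ operator above generalizes to any $D_r$. The sketch below is our own helper (assuming zero padding at the left boundary, so the first output row is $\V{w}_1^{(r)}/2$ as in the example); it builds $P$, reproduces the example, and evaluates the operator norm $\|P\|_{op}$ that appears in the convergence bound:

```python
import numpy as np

def interp_matrix(D):
    """2D x D linear-interpolation (finegraining) operator P:
    (Pw)_{2i+1} = w_i and (Pw)_{2i} = (w_{i-1} + w_i)/2, with w_{-1} := 0,
    matching the D_r = 2 example in the text."""
    P = np.zeros((2 * D, D))
    for i in range(D):
        P[2 * i + 1, i] = 1.0      # fine point coinciding with a coarse point
        P[2 * i, i] = 0.5          # fine point halfway between coarse points
        if i > 0:
            P[2 * i, i - 1] = 0.5
    return P

P = interp_matrix(2)
op_norm = np.linalg.norm(P, 2)     # largest singular value = ||P||_op
```

Note that $\|P\|_{op} \geq 1$ for this operator, which is why the constant $(\|P\|^2_{op})^r$ enters the \texttt{MRTL}{} bound.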
The convergence of the multiresolution learning algorithm depends on the following property of spatial data: \begin{definition}[Spatial Smoothness] The difference between the optimal solutions of consecutive resolutions is upper bounded by $\epsilon$ \[\| \V{w}^{(r+1)}_\star - P \V{w}^{(r)}_\star \| \leq \epsilon \text{,}\] with $P$ being the interpolation operator. \end{definition} The following theorem proves the convergence rate of \texttt{MRTL}{}, with a constant that depends on the operator norm of the interpolation operator $P$. \begin{theorem} If the step size $\eta_t \equiv \eta \leq \frac{1}{Lc_g}$, then the solution of \texttt{MRTL}{} satisfies \begin{align*} &{\mathbb E}[\|\V{w}^{(r)}_{t} - \V{w}_{\star}\|^2_2] \leq \gamma^{t} \|P\|^{2r}_{op}~{\mathbb E}[\|\V{w}_{0} - \V{w}_\star\|^2_2] + O(\eta \|P\|_{op}) \text{,} \end{align*} where $\gamma = 1 - 2 \eta \mu$, $\beta=\frac{\eta \sigma_g^2}{2\mu}$, and $\|P\|_{op}$ is the operator norm of the interpolation operator $P$. \end{theorem} \subsection{Computational Complexity} To analyze computational complexity, we resort to fixed point convergence \citep{hale2008fixed} and the multigrid method \citep{stuben2001review}. Intuitively, as most of the training iterations are spent on coarser resolutions with fewer parameters, multiresolution learning is more efficient than fixed-resolution training. Assuming that $\nabla f$ is Lipschitz continuous, we can view gradient-based optimization as a fixed-point iteration operator $F$ with a contraction constant of $\gamma \in (0,1)$ (note that \emph{stochastic} gradient descent converges to a noise ball instead of a fixed point): \begin{eqnarray*} \V{w} \leftarrow F(\V{w}), \hspace{10pt} F:=I-\eta \nabla f, \\ \| F(\V{w} ) - F(\V{w}') \|\leq \gamma \| \V{w} - \V{w}'\| \end{eqnarray*} Let $\V{w}^{(r)}_\star$ be the optimal estimator at resolution $r$ and $\V{w}^{(r)}$ be a solution satisfying $\| \V{w}^{(r)}_\star -\V{w}^{(r)}\| \leq \epsilon/2$.
The algorithm terminates when the estimation error reaches $\fr{C_0 R}{(1-\gamma)^2}$. The following lemma describes the computational cost of the \emph{fixed-resolution} algorithm. \begin{lemma} Given a fixed point iteration operator $F$ with contraction constant $\gamma \in (0,1)$, the computational complexity of fixed-resolution training for a tensor model of order $p$ and rank $K$ is \eq{ \mathcal{C} = \mathcal{O}\brck{ \fr{1}{|\log \gamma|} \cdot \log \left(\fr{1 }{(1-\gamma)\epsilon}\right) \cdot\fr{Kp}{(1-\gamma)^2\epsilon}} \text{,} } \label{lemma:fixed} where $\epsilon$ is the terminal estimation error. \end{lemma} The next Theorem \ref{thm:mmt} characterizes the computational speed-up gained by \texttt{MRTL}{} compared to fixed-resolution learning, with respect to the contraction factor $\gamma$ and the terminal estimation error $\epsilon$. \begin{theorem} \label{thm:mmt} If the fixed point iteration operator (gradient descent) has a contraction factor of $\gamma$, multiresolution learning with the termination criterion of $\fr{C_0 r}{(1-\gamma)^2}$ at resolution $r$ is faster than fixed-resolution learning by a factor of $ \log\fr{1}{(1-\gamma) \epsilon}$, with the terminal estimation error $\epsilon$. \end{theorem} Note that the speed-up using multiresolution learning uses a global convergence criterion $\epsilon$ for each $r$.
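To make the counting argument concrete, the toy cost model below (our own illustrative numbers, using the basketball grid sizes from the experiments) compares the two totals: per-iteration cost is taken proportional to the number of grid cells, the number of iterations per level follows the constant bound of Lemma \ref{supp:lemma:iter_level}, and fixed-resolution training pays the extra logarithmic factor of Lemma \ref{lemma:fixed}:

```python
import math

gamma, C = 0.9, 100.0                 # assumed contraction factor and constant
iters_per_level = math.ceil(C / abs(math.log(gamma)))   # t_r <= C'/|log gamma|

# grid sizes (number of cells) from coarse to fine: 4x5, 8x10, 20x25, 40x50
cells = [4 * 5, 8 * 10, 20 * 25, 40 * 50]

# MRTL: a constant number of iterations at every level
mrtl_cost = iters_per_level * sum(cells)

# fixed resolution: all iterations at the finest grid, times the extra
# log(1/((1-gamma)*eps)) factor of iterations needed to reach error eps
eps = 1e-2
fixed_cost = iters_per_level * cells[-1] * math.log(1 / ((1 - gamma) * eps))

print(f"speed-up factor: {fixed_cost / mrtl_cost:.2f}")
```

Because the per-level costs form a (roughly) geometric series, the \texttt{MRTL}{} total is within a constant factor of the cost of the finest level alone, so the logarithmic factor dominates the comparison.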
\section{Introduction} WAR (wins above replacement) is a fundamental statistic for valuing baseball players, and has recently been proposed to determine arbitration salaries \citep{war_arb}. So, it is of utmost importance to use a WAR statistic that accurately captures a player's contribution to his team. However, current popular implementations of WAR for starting pitchers, implemented by FanGraphs \citep{war_FG} and Baseball Reference \citep{war_BR}, have flaws. In particular, by computing seasonal WAR as a function of a pitcher's performance averaged over each game of a season, these methods ignore a pitcher's game-by-game variance and the convexity of WAR. Hence in this paper we propose a new way to compute WAR for starting pitchers, \textit{Grid WAR}. We begin in Section~\ref{sec:theProblem} by detailing the problems with current implementations of WAR for starting pitchers. In particular, in averaging over pitcher performance over the course of a season, current implementations of WAR ignore that WAR should be a convex function in the number of runs allowed. We fix this issue in Section~\ref{sec:def_grid_war} by defining \textit{Grid WAR}, which computes a starting pitcher's WAR in an individual game and his seasonal WAR as the sum of the WAR of each of his games. Then in Section~\ref{sec:estimate_fgwa} we estimate certain functions and the park factors which allow us to compute \textit{Grid WAR}. Next, in Section~\ref{sec:Results} we compute the \textit{Grid WAR} of a set of starting pitchers in 2019. We then compare these pitchers' \textit{Grid WAR} to their FanGraphs WAR and their Baseball Reference WAR. We find that the convexity of \textit{Grid WAR} places importance on games with few runs allowed. Finally, in the Appendix Section \ref{sec:park_effects}, we give a detailed comparison between our park factors and existing park factors. In particular, we discuss flaws with current implementations of existing park factors (e.g.
from ESPN and FanGraphs), and then show that our park factors based on ridge regression are superior to these existing park factors. \section{Problems with Current Implementations of WAR}\label{sec:theProblem} \subsection{The Problem: Averaging over Pitcher Performance} The primary flaw of traditional methods for computing WAR for pitchers, as implemented by Baseball Reference and FanGraphs, is that WAR is calculated as a function of a pitcher's \textit{average} performance. Baseball Reference averages a pitcher's performance over the course of a season via $xRA$, or ``expected runs allowed'' \citep{war_BR}. $xRA$ is a function of a pitcher's average number of runs allowed per out. FanGraphs averages a pitcher's performance over the course of a season via $ifFIP$, or ``fielding independent pitching (with infield flies)'' \citep{war_FG}. $ifFIP$ is defined by $$ifFIP := \frac{13\cdot HR + 3\cdot(BB+HBP) - 2\cdot(K+IFFB)}{IP} + \mathrm{ifFIP\ constant},$$ which involves averaging some of a pitcher's statistics over his innings pitched. \subsection{Ignoring Variance} Using a pitcher's \textit{average} performance to calculate his WAR is a subpar way to measure his value on the mound because it ignores the variance in his game-by-game performance. To see why ignoring variance is a problem, consider Max Scherzer's six game stretch from June 12, 2014 through the 2014 All Star game, shown in Table \ref{Tab:Scherzer} \citep{Scherzer}. In Scherzer's six game stretch, he averages 15 runs over 41 innings, or $0.366$ runs per inning. So, on average, Scherzer allows $3.3$ runs per complete game. If we look at each of Scherzer's individual games separately, however, we see that he has four dominant performances, one decent game, and one ``blowup''. Intuitively, the four dominant performances alone are worth more than allowing 3.3 runs in each of six games. On this view, averaging Scherzer's performances significantly devalues his contributions during this six game stretch.
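To make this concrete, we can value Scherzer's six game stretch under a hypothetical convex, decreasing win-probability curve (purely illustrative; the curve below is made up, not the estimated curve used later in the paper):

```python
# Earned runs in Scherzer's six games (Table 1): four gems, one decent start,
# one blowup -- 15 runs over the stretch, i.e. 2.5 runs per game on average.
runs = [0, 10, 1, 2, 1, 1]

def w(R):
    # Hypothetical convex, decreasing win-probability curve (illustrative only)
    return 0.85 * 0.62 ** R

avg_runs = sum(runs) / len(runs)                         # 2.5
value_of_average = w(avg_runs)                           # "average, then value"
average_of_values = sum(w(R) for R in runs) / len(runs)  # "value each game"

# Jensen's inequality: valuing each game and averaging exceeds valuing the
# average (here roughly 0.461 vs 0.257).
assert average_of_values > value_of_average
```

Under any such convex curve, distributing the one blowup over all six games via averaging costs Scherzer credit that per-game valuation preserves.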
\begin{table}[H] \centering \begin{tabular}{rccccccc} \hline \text{game} & 1 & 2 & 3 & 4 & 5 & 6 & \text{total} \\ \hline \text{earned runs} & 0 & 10 & 1 & 2 & 1 & 1 & 15 \\ \text{innings pitched} & 9 & 4 & 6 & 7 & 8 & 7 & 41 \\ \hline \end{tabular} \caption{Max Scherzer's performance over six games prior to the 2014 All Star break.} \label{Tab:Scherzer} \end{table} Because \begin{quote} \centering ``\textit{you can only lose a game once,}'' \end{quote} it makes more sense to give Scherzer zero credit for his one bad game than to distribute his one poor performance over all his other games via averaging. Another way of thinking about this is \begin{quote} \centering ``\textit{not all runs have the same value.}'' \end{quote} For instance, the difference between allowing 10 runs instead of 9 in a game is much smaller than the difference between allowing 1 run instead of 0. On this view, the tenth run allowed in a game has much smaller impact than the first. Hence we should not compute WAR as a function of a pitcher's average game-performance. Instead, we should compute a pitcher's WAR in each individual game, and compute his season-long WAR as the summation of the WAR of his individual games. \subsection{Ignoring the Convexity of WAR} Additionally, using a pitcher's \textit{average} performance to calculate his WAR is a subpar way to measure his value on the mound because it ignores the convexity of WAR. Think of a starting pitcher's WAR in a complete game as a function $R \mapsto WAR(R)$ of the number of runs allowed in that game. We expect $WAR$ to be a decreasing function, because allowing more runs in a game should correspond to fewer wins above replacement. Additionally, we expect $WAR$ to be a \textit{convex} function, whose second derivative is positive. In other words, as $R$ increases, we expect the relative impact of allowing an extra run, given by $WAR(R+1) - WAR(R)$, to decrease. 
Concretely, allowing 2 runs instead of 1 should have a much steeper dropoff in $WAR$ than allowing 7 runs instead of 6. Again, we expect this because ``\textit{you can only lose a game once}'' and ``\textit{not all runs have the same value}''. If a pitcher allows 6 runs, he has essentially already lost his team the game, so allowing an extra run to make this total 7 shouldn't have a massive difference in a pitcher's $WAR$ during that game. Conversely, if a pitcher allows 1 run, the marginal impact of allowing an extra run to make this total 2 is much larger. Because we expect $WAR$ to be a convex function, Jensen's inequality tells us that averaging a pitcher's performance undervalues his contributions. Specifically, thinking of a pitcher's number of runs allowed in a complete game as a random variable $R$, Jensen's inequality says \begin{equation} WAR({\mathbb E}[R]) \leq {\mathbb E}[WAR(R)]. \label{eqn:wjensen1} \end{equation} Traditional methods for computing WAR are reminiscent of the left side of Equation \ref{eqn:wjensen1} -- average a pitcher's performance, and then compute his WAR. In this paper, we devise a WAR metric reminiscent of the right side of Equation \ref{eqn:wjensen1} -- compute the WAR of each of a pitcher's individual games, and then average. By Equation \ref{eqn:wjensen1}, traditional metrics for computing WAR undervalue the contributions of many starting pitchers. On the other hand, in accounting for the convexity of WAR, our method allows us to more accurately value a pitcher's contributions to winning baseball games. \section{Defining \textit{Grid WAR} for Starting Pitchers}\label{sec:def_grid_war} We wish to create a metric which computes a starting pitcher's WAR for an individual game. The idea is to compute a context-neutral and offense-invariant version of win-probability-added that is derived only from a pitcher's performance. 
In this Section, we begin by defining our metric \textit{Grid WAR} by ignoring the ballpark, and then in Section \ref{sec:park_adjustment} we discuss our ballpark adjustment. First, we define a starting pitcher's pre-park-adjusted \textit{Grid WAR} ($GWAR$) for a game in which he exits at the end of an inning. To do so, we create the function $f=f(I,R)$ which, assuming both teams have league-average offenses, computes the probability a team wins a game after giving up $R$ runs through $I$ innings. $f$ is a context-neutral version of win probability, as it depends only on the starter's performance. To compute a wins \textit{above replacement} metric, we need to compare this context-neutral win-contribution to that of a potential replacement-level pitcher. We use a constant $w_{rep}$ which denotes the probability a team wins a game with a replacement-level starting pitcher, assuming both teams have league-average offenses. We expect $w_{rep} < 0.5$ since replacement-level pitchers are worse than league-average pitchers. Then, we define a starter's pre-park-adjusted \textit{Grid WAR} during a game in which he gives up $R$ runs through $I$ complete innings as \begin{equation} f(I, R) - w_{rep}. \label{eqn:war_f} \end{equation} We call our metric \textit{Grid WAR} because the function $f=f(I,R)$ is defined on the 2D grid $\{1,...,9\} \times \{0,...,R_{max}=10\}$. We restrict $R \leq R_{max} = 10$ because there is not much data to estimate $f(I,R)$ for $R > 10$, and because $f(I,10)$ is essentially zero. In particular, for $R > R_{max}$, we set $f(I,R) = f(I,R_{max})$. Next, we define a starting pitcher's pre-park-adjusted \textit{Grid WAR} for a game in which he exits midway through an inning.
To do so, we create a function $g=g(R|S,O)$ which, assuming both teams have league-average offenses, computes the probability that, starting midway through an inning with $O \in \{0,1,2\}$ outs and base-state $$S \in \{000,100,010,001,110,101,011,111\},$$ a team scores exactly $R$ runs through the end of the inning. Then we define a starter's pre-park-adjusted \textit{Grid WAR} during a game in which he gives up $R$ runs and leaves midway through inning $I$ with $O$ outs and base-state $S$ as the expected \textit{Grid WAR} at the end of the inning, \begin{equation} \sum_{r \geq 0} g(r|S,O) f(I,r+R) - w_{rep}. \label{eqn:war_g} \end{equation} Finally, we define a starting pitcher's pre-park-adjusted \textit{Grid WAR} for an entire season as the sum of the \textit{Grid WAR} of his individual games. \subsection{Park Adjustment}\label{sec:park_adjustment} Now, we modify \textit{Grid WAR} to adjust for the ballpark in which the game was played. We need to adjust for ballpark because some ballparks are more batter-friendly than others (e.g. Coors Field), and we don't want to penalize a pitcher for allowing more runs than usual as a result of the ballpark. We define the \textit{park effect} $\alpha$ of a ballpark as the expected runs scored in one inning at that park above that of an average park, if an average offense faces an average defense. In Section~\ref{sec:park_effects} we derive the park effect $\alpha$ for each ballpark. Now, suppose a starting pitcher allows $R$ runs through $I$ complete innings at a ballpark with park effect $\alpha$. This is equivalent in expectation to allowing $R-I\alpha$ runs through $I$ innings at an average ballpark. To see this, note that $I\alpha$ is the expected runs scored through $I$ innings at this ballpark above that of an average ballpark. Therefore, a starting pitcher's park-adjusted \textit{Grid WAR} for a game in which he gives up $R$ runs through $I$ complete innings is \begin{equation} f(I, R - I\alpha) - w_{rep}. 
\label{eqn:war_f_park} \end{equation} The function $R \mapsto f(I,R)$ is defined only on the nonnegative integers $\{0,...,R_{max}\}$, however, and $R-I\alpha$ is generally not an integer. So, letting $h = I|\alpha|$, we approximate $f(I, R - I\alpha)$ via a first order Taylor approximation, \begin{equation} f(I, R - I\alpha) \approx \begin{cases} (1-h)\cdot f(I,R) + h \cdot f(I,R-1) \ \ \qquad\qquad\text{if } \alpha > 0, \ R > 0, \\ (1-h)\cdot f(I,R) + h \cdot f(I,R+1) \ \ \qquad\qquad\text{if } \alpha < 0, \ R < R_{max}, \\ (1+h)\cdot f(I,0) - h \cdot f(I,1) \ \qquad\qquad\qquad\text{if } \alpha > 0, \ R = 0, \\ (1+h)\cdot f(I,R_{max}) - h \cdot f(I,R_{max}-1) \qquad\text{if } \alpha < 0, \ R = R_{max}. \end{cases} \end{equation} Then, by the same logic as before, a starter's park-adjusted \textit{Grid WAR} during a game in which he gives up $R$ runs and leaves midway through inning $I$ with $O$ outs and base-state $S$ is \begin{equation} \sum_{r \geq 0} g(r|S,O) f(I,r+R-I\alpha) - w_{rep}. \label{eqn:war_g_park} \end{equation} Finally, we define a starting pitcher's \textit{Grid WAR} for an entire season as the sum of the park-adjusted \textit{Grid WAR} of his individual games. In order to compute \textit{Grid WAR} for each starting pitcher, we need only estimate the grid functions $f$ and $g$, the constant $w_{rep}$, and the park effects $\alpha$. \section{Estimating the Grid Functions $f$ and $g$, the Constant $w_{rep}$, and the Park Effects $\alpha$}\label{sec:estimate_fgwa} Now we discuss the estimation of the functions $f$ and $g$, the constant $w_{rep}$, and the park effects $\alpha$ which allow us to compute a starting pitcher's \textit{Grid WAR} for a baseball game.
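Once these quantities are estimated, a single game's \textit{Grid WAR} reduces to a few grid lookups. The following minimal sketch illustrates Equations \ref{eqn:war_f_park} and \ref{eqn:war_g_park}; the grids below are made-up toys standing in for the estimated $f$ and $g$ (here $g$ is a single hypothetical base-out distribution), not fitted values:

```python
import numpy as np

R_MAX = 10
W_REP = 0.41   # replacement-level win probability (Section 4.4)

# Toy stand-ins for the estimated grids (NOT the fitted values from the paper).
# f[I-1, R] approximates P(win | R runs allowed through I complete innings):
f = np.array([[max(0.05, 0.9 - 0.08 * R - 0.015 * (9 - I))
               for R in range(R_MAX + 1)] for I in range(1, 10)])
# g[r] approximates P(exactly r more runs score this inning), for one
# hypothetical base-out state (say S = 100, O = 1):
g = np.array([0.70, 0.18, 0.08, 0.03, 0.01])

def f_interp(I, x):
    """Evaluate f(I, x) at non-integer x by linear interpolation, with linear
    extrapolation at the boundaries, as in the park-adjustment approximation."""
    if x <= 0:
        h = -x
        return (1 + h) * f[I - 1, 0] - h * f[I - 1, 1]
    if x >= R_MAX:
        h = x - R_MAX
        return (1 + h) * f[I - 1, R_MAX] - h * f[I - 1, R_MAX - 1]
    lo = int(np.floor(x))
    h = x - lo
    return (1 - h) * f[I - 1, lo] + h * f[I - 1, lo + 1]

def gwar_complete_innings(I, R, alpha=0.0):
    """Park-adjusted Grid WAR when the starter exits after I complete innings."""
    return f_interp(I, R - I * alpha) - W_REP

def gwar_mid_inning(I, R, alpha=0.0):
    """Park-adjusted Grid WAR when the starter exits midway through inning I:
    the expected end-of-inning Grid WAR under g."""
    return sum(g[r] * f_interp(I, r + R - I * alpha) for r in range(len(g))) - W_REP

# Fewer runs through the same number of innings never yields less Grid WAR:
assert gwar_complete_innings(7, 1) > gwar_complete_innings(7, 4)
```

A season's \textit{Grid WAR} is then just the sum of `gwar_complete_innings` or `gwar_mid_inning` over a pitcher's starts.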
Specifically, in Section~\ref{sec:estimate_f} we estimate $f$, in Section~\ref{sec:convexityOfWAR} we discuss how our estimated $f$ reflects the convexity of WAR, in Section~\ref{sec:estimate_g} we estimate $g$, in Section~\ref{sec:estimate_wrep} we estimate $w_{rep}$, and in Section~\ref{sec:parkFxFinal} we estimate $\alpha$. \subsection{Our Data}\label{sec:our_data} We relied on play-by-play data from Retrosheet in our analysis. We scraped every plate appearance from 1990 to 2020 from the Retrosheet database. For each plate appearance, we record the pitcher, batter, home team, away team, league, park, inning, runs allowed, base state, and outs count. We provide a link to our final dataset in Section~\ref{sec:data_and_code}. In our study, we restrict our analysis to every plate appearance from 2010 to 2019 featuring a starting pitcher, using the 2019 season as our primary example. \subsection{Estimating $f$}\label{sec:estimate_f} First, we estimate the function $f=f(I,R)$ which, assuming both teams have league-average offenses, computes the probability a team wins a game after giving up $R$ runs through $I$ complete innings. We estimate $f$ using logistic regression. The response variable is a binary variable indicating whether a pitcher's team won a game after giving up $R$ runs through $I$ innings. We model $I$ and $R$ as fixed effects (i.e., we have separate coefficients for each value of $I$ and $R$). In order to make $f$ context neutral, we also adjust for home field, National vs. American league, and the year, each as a fixed effect. This process is essentially equivalent to binning, averaging, and smoothing over the variables $(I,R)$ after adjusting for confounders. Additionally, recall that if a home team leads after the top of the $9^{th}$ inning, then the bottom of the $9^{th}$ is not played. Therefore, to avoid selection bias, we exclude all $9^{th}$ inning instances in which a pitcher pitches at home. 
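In code, the binning-and-averaging procedure that this regression is essentially equivalent to can be sketched as follows. The data here are synthetic, drawn from a made-up generative model rather than the Retrosheet plate appearances, and the confounder adjustments and the home-team ninth-inning exclusion are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic starts (a made-up generative model, NOT the Retrosheet data):
# for each start, innings completed I, runs allowed R, and whether the team won.
n = 50_000
I = rng.integers(1, 10, size=n)              # innings completed, 1..9
R = np.minimum(rng.poisson(0.5 * I), 10)     # runs allowed, capped at R_max = 10
p_win = 1 / (1 + np.exp(-(2.2 - 0.55 * R)))  # true win probability for the labels
won = rng.random(n) < p_win

# Bin by (I, R) and average the win indicator: the empirical f(I, R)
f_hat = np.full((9, 11), np.nan)
for i in range(1, 10):
    for r in range(11):
        mask = (I == i) & (R == r)
        if mask.any():
            f_hat[i - 1, r] = won[mask].mean()

# Sanity check: within an inning, more runs allowed -> lower win probability
assert f_hat[5, 0] > f_hat[5, 4]
```

The logistic regression with fixed effects plays the role of the smoothing step, pooling information across sparsely populated $(I,R)$ cells that the raw binned estimate leaves noisy.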
In Figure \ref{fig:fR}, we plot the functions $R \mapsto f(I,R)$ for each inning $I$, for an away-team American League starting pitcher in 2019. For each inning $I$, $R \mapsto f(I,R)$ is decreasing. This makes sense: if you allow more runs through a fixed number of innings, you are less likely to win the game. Also, $R \mapsto f(I,R)$ is mostly convex. This makes sense: if you have already allowed a high number of runs, there is a lesser relative impact of allowing an additional run. Conversely, if you have allowed few runs thus far, there is a high relative impact of allowing an additional run. Furthermore, for each $R$, the function $I \mapsto f(I,R)$ is increasing. This makes sense: giving up $R$ runs through $I$ innings is worse than giving up $R$ runs through $I+i$ innings for $i > 0$, because giving up $R$ runs through $I+i$ innings implies you gave up fewer than $R$ runs through $I$ innings, on average. \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/plot_fIR_R_smoothed.png} \caption{The function $R \mapsto f(I,R)$ for each inning $I$, for an away-team American League starting pitcher in 2019. These functions $R \mapsto f(I,R)$ are mostly convex.} \label{fig:fR} \end{figure} \subsection{The Convexity of $GWAR$}\label{sec:convexityOfWAR} As shown in Figure \ref{fig:fR}, the function $R \mapsto f(I,R)$ is mostly convex for each inning $I$. Therefore, by Jensen's inequality, we expect that traditional WAR metrics undervalue players relative to $GWAR$. To see why, suppose a starting pitcher allows $R$ runs through $I = i$ innings, where $i$ is a fixed number and $R$ is a random variable. Then by Jensen's inequality, \begin{equation} f(i, {\mathbb E}[R]) \leq {\mathbb E}[f(i,R)].
\label{eqn:jensen1} \end{equation} Therefore, supposing a pitcher in each game $j \in \{1,...,n\}$ allows $R_j$ runs through $I=i$ complete innings, we approximately have \begin{equation} f\bigg(i, \frac{1}{n}\sum_{j=1}^{n} R_j\bigg) \leq \frac{1}{n}\sum_{j=1}^{n} f(i,R_j). \label{eqn:jensen2} \end{equation} In other words, for a pitcher who pitches exactly $I=i$ innings in each game, the $GWAR$ of his average number of runs is less than the average $GWAR$ of his individual games. On this view, traditional WAR metrics undervalue the win contributions of many players, especially those of high variance pitchers or pitchers with skewed runs-allowed distributions. In this paper, by computing season-long WAR as the summation of the WAR of his individual games, we allow the convexity of WAR to more accurately describe pitchers' performances. \subsection{Estimating $g$}\label{sec:estimate_g} Now, we estimate the function $g=g(R|S,O)$ which, assuming both teams have league-average offenses, computes the probability that, starting midway through an inning with $O \in \{0,1,2\}$ outs and base-state $$S \in \{000,100,010,001,110,101,011,111\},$$ a team scores exactly $R$ runs through the end of the inning. We estimate $g(R|S,O)$ using the empirical distribution, for $R \in \{0,...,13\}$. Specifically, we bin and average over the variables $(R,S,O)$, using data from every game from 2010 to 2019. Because $g$ isn't significantly different across innings, we use data from each of the first eight innings. In Figure \ref{fig:g0} we plot the distribution of $g(R|S,O=0)$, with $O=0$ outs, for each base-state $S$. With no men on base ($S=000$), 0 runs allowed for the rest of the inning is most likely. With bases loaded ($S=111$), 1 run allowed for the rest of the inning is most likely, and there is a fat tail expressing that 2 through 5 runs through the rest of the inning are also reasonable occurrences. 
With men on second and third, 2 runs allowed for the rest of the inning is most likely, but the tail is skinnier than that of bases loaded. \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/plot_gRSO_R0.png} \caption{The discrete probability distribution $R \mapsto g(R|S,O=0)$ for each base-state $S$.} \label{fig:g0} \end{figure} \subsection{Estimating $w_{rep}$}\label{sec:estimate_wrep} To compute a wins \textit{above replacement} metric, we need to compare a starting pitcher's context-neutral win contribution to that of a potential replacement-level pitcher. Thus we define a constant $w_{rep}$ which denotes the context-neutral probability a team wins a game with a replacement-level starting pitcher, assuming both teams have a league-average offense and league-average fielding. We expect $w_{rep} < 0.5$ since replacement-level pitchers are worse than league-average pitchers. Further, both Baseball Reference and FanGraphs assume that a team consisting entirely of replacement-level players should end the season with a 0.294 win percentage, or equivalently, 47.7 wins in 162 games \citep{unifying_rep_lvl}. Hence we expect $w_{rep} > 0.294$, as $w_{rep}$ represents a team of average players with a replacement-level starting pitcher. It is difficult to estimate $w_{rep}$ because it is difficult to compile a list of replacement-level pitchers. According to \citet{ReplacementLevel}, \textit{replacement-level} is the ``level of production you could get from a player that would cost you nothing but the league minimum salary to acquire.'' Since we are not members of an MLB front office, this level of production is difficult to estimate. Ultimately, the value of $w_{rep}$ doesn't matter too much because we rescale all pitchers' \textit{Grid WAR} to sum to a fixed amount, to compare our results to those of Baseball Reference and FanGraphs.
So, as a compromise to the constraints $w_{rep} > 0.294$ and $w_{rep} < 0.5$, noting that $w_{rep}$ should be closer to $0.5$ than $0.294$, we arbitrarily set $w_{rep} = 0.41$. \subsection{Estimating the Park Effects $\alpha$}\label{sec:parkFxFinal} Finally, we estimate the park effect $\alpha$ of each ballpark, which represents the expected runs scored in one inning at that park above that of an average park, if an average offense faces an average defense. To do so, we take all innings from 2017 to 2019 and fit a ridge regression with $\lambda = 0.25$, treating each park, team-offensive-season, and team-defensive-season as fixed effects. We plot these park effects in Figure \ref{fig:final_parkFx}. We use ridge regression, as opposed to ordinary least squares or existing park effects from ESPN, FanGraphs, or Baseball Reference, because, as discussed in the Appendix Section \ref{sec:park_effects}, it performs the best in two simulation studies and has the best out-of-sample predictive performance on observed data. \begin{figure}[H] \centering \includegraphics[width=14cm]{writeup_plots/plot_pf_ridge_1719.png} \caption{Our 2019 park effects, fit on all innings from 2017 to 2019. The park effect $\alpha$ of a ballpark represents the expected runs scored in one inning at that park above that of an average park, if an average offense faces an average defense. The park abbreviations are from Retrosheet.} \label{fig:final_parkFx} \end{figure} \section{Results}\label{sec:Results} Now that we have estimated the grid functions $f$ and $g$, the constant $w_{rep}$, and the park effects $\alpha$, we compute the \textit{Grid WAR} ($GWAR$) of each starting pitcher in 2019. We begin by showing the 2019 $GWAR$ rankings in Figure~\ref{fig:gwar2019rankings} and a game-by-game breakdown of Gerrit Cole's 2019 season in Figure~\ref{fig:cole2019}. 
Then, in Section~\ref{sec:gwar_vs_FanGraphs}, we compare \textit{Grid WAR} to FanGraphs WAR ($FWAR$), noting in particular pitchers who have similar $FWAR$ yet different $GWAR$. Similarly, in Section~\ref{sec:gwar_vs_Bwar} we compare \textit{Grid WAR} to Baseball Reference WAR. We find, as expected, that \textit{Grid WAR} values pitchers who pitch many games with few runs allowed. \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/plot_gwar_rankings_2019.png} \caption{Ranking starting pitchers in 2019 by \textit{Grid WAR}.} \label{fig:gwar2019rankings} \end{figure}{} \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/plot_cole_2019.png} \caption{A game-by-game breakdown of Gerrit Cole's 2019 season.} \label{fig:cole2019} \end{figure} \subsection{Comparing \textit{Grid WAR} to FanGraphs WAR}\label{sec:gwar_vs_FanGraphs} We can better understand \textit{Grid WAR} by comparing it to existing WAR formulations. We begin by comparing $GWAR$ to FanGraphs WAR ($FWAR$). We acquire the 2019 $FWAR$ of 58 starting pitchers from \citet{FanGraphs2019War}. To legitimize comparison between $GWAR$ and $FWAR$, we rescale $GWAR$ so that the sum of these pitchers' $GWAR$ equals the sum of their $FWAR$. By rescaling, we compare the \textit{relative} value of starting pitchers according to $GWAR$ to the \textit{relative} value of starting pitchers according to $FWAR$. In Figure \ref{fig:gwarVfwar19} we plot $GWAR$ vs. $FWAR$ for starting pitchers in 2019. \begin{figure}[H] \centering \includegraphics[width=14cm]{writeup_plots/plot_GWAR_vs_FWAR_2019.png} \caption{\textit{Grid WAR} vs. FanGraphs WAR in 2019. Pitchers above the line $y=x$ are undervalued according to $GWAR$ relative to $FWAR$, and pitchers below the line are overvalued.} \label{fig:gwarVfwar19} \end{figure} To highlight the difference between $GWAR$ and $FWAR$, we compare players who have similar $FWAR$ but different $GWAR$ values in 2019. 
To start, in Figure \ref{fig:pf1} we compare Jacob deGrom to Lance Lynn. They have similarly high $FWAR$ (Lynn 6.7, deGrom 6.9), but deGrom has much higher $GWAR$ (deGrom 6.9, Lynn 5.8). Also, in Figure \ref{fig:pf2} we compare Sonny Gray and Jose Berrios. They have similar $FWAR$ (Gray 4.5, Berrios 4.4), but Gray has much higher $GWAR$ (Gray 4.7, Berrios 2.5). Finally, in Figure \ref{fig:pf3} we compare Sandy Alcantara and Reynaldo Lopez. They have similarly moderate $FWAR$ (Alcantara 2.3, Lopez 2.4), but Alcantara has much higher $GWAR$ (Alcantara 3.4, Lopez 1.0). \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pf1_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Jacob deGrom (left), Lance Lynn (middle), and the difference between these two histograms (right). They have similar $FWAR$ (Lynn 6.7, deGrom 6.9), but deGrom has much higher $GWAR$ (deGrom 6.9, Lynn 5.8). This is because deGrom pitches 6 more games in which he allows exactly 0 runs, and Lynn pitches 8 more games in which he allows 2 runs or more.} \label{fig:pf1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pf2_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Sonny Gray (left), Jose Berrios (middle), and the difference between these two histograms (right). They have similar $FWAR$ (Gray 4.5, Berrios 4.4), but Gray has much higher $GWAR$ (Gray 4.7, Berrios 2.5). This is because Gray pitches 7 more games in which he allows 3 runs or fewer, and Berrios pitches 8 more games in which he allows 4 runs or more.} \label{fig:pf2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pf3_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Sandy Alcantara (left), Reynaldo Lopez (middle), and the difference between these two histograms (right). They have similar $FWAR$ (Alcantara 2.3, Lopez 2.4), but Alcantara has much higher $GWAR$ (Alcantara 3.4, Lopez 1.0). 
This is because Alcantara pitches 9 more games in which he allows 0, 2, or 4 runs, and Lopez pitches 9 more games in which he allows 1, 3, 5, or 7 runs.} \label{fig:pf3} \end{figure} In each of these comparisons, we see a similar trend explaining the differences in $GWAR$. Specifically, the pitcher with higher $GWAR$ allows fewer runs in more games, and allows more runs in fewer games. This is depicted graphically in the ``Difference'' histograms, which show the difference between the histogram on the left and the histogram on the right. The green bars denote positive differences (i.e., the pitcher on the left has more games with a given number of runs allowed than the pitcher on the right), and the red bars denote negative differences (i.e., the pitcher on the left has fewer games with a given number of runs allowed than the pitcher on the right). In each of these examples, the green bars are shifted towards the left (pitchers with higher $GWAR$ allow few runs in more games), and the red bars are shifted towards the right (pitchers with lower $GWAR$ allow more runs in more games). For instance, consider Figure \ref{fig:pf2}. Gray pitches 7 more games than Berrios in which he allows 3 runs or fewer, and Berrios pitches 8 more games than Gray in which he allows 4 runs or more. On this view, Gray should have a higher $WAR$ than Berrios. Similarly, consider Figure \ref{fig:pf3}. Alcantara pitches 9 more games than Lopez in which he allows 0, 2, or 4 runs, whereas Lopez pitches 9 more games than Alcantara in which he allows 1, 3, or 5 runs. So, in 9 games, Alcantara allows exactly 1 run fewer than Lopez! Hence he should have a higher $WAR$ than Lopez. Additionally, consider Figure \ref{fig:pf1}. DeGrom pitches 6 more games than Lynn in which he allows exactly 0 runs, and Lynn pitches 8 more games than DeGrom in which he allows 2 runs or more.
The convexity of $GWAR$ places massive importance on games with 0 runs allowed, so it makes sense that deGrom has a much higher $GWAR$ than Lynn. Furthermore, in Figure \ref{fig:pf_agg} we compare undervalued pitchers (according to $GWAR$ relative to $FWAR$) on aggregate to overvalued pitchers. In particular, in the histogram on the left, we aggregate the runs allowed in each game of the five most undervalued pitchers, who have the largest $GWAR - FWAR$ values: Julio Teheran, Dakota Hudson, Clayton Kershaw, Jeff Samardzija, and Mike Fiers. In the histogram in the middle, we aggregate the runs allowed in each game of the five most overvalued pitchers, who have the most negative $GWAR - FWAR$ values: Jose Berrios, Jose Quintana, Shane Bieber, Noah Syndergaard, and Joe Musgrove. Finally, in the histogram on the right, we show the difference between the histogram on the left and the histogram in the middle. As before, the green bars are shifted towards the left, and the red bars are shifted towards the right, which means that on aggregate, pitchers who are undervalued pitch more games with few runs allowed, and pitchers who are overvalued pitch more games with many runs allowed. This makes sense in light of the convexity of \textit{Grid WAR}, which places importance on games with few runs allowed. \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pf_agg_2019.png} \caption{Histogram of runs allowed in a game for the five most undervalued pitchers according to $GWAR$ relative to $FWAR$ (left), the five most overvalued pitchers (middle), and the difference between these two histograms (right). } \label{fig:pf_agg} \end{figure} \subsection{Comparing \textit{Grid WAR} to Baseball Reference WAR}\label{sec:gwar_vs_Bwar} Similarly, we compare $GWAR$ to Baseball Reference WAR ($BWAR$). As before, we rescale $BWAR$ (and $GWAR$) so that the sum of these pitchers' $BWAR$ (and $GWAR$) equals the sum of their $FWAR$.
By rescaling, we again compare the relative value of starting pitchers according to each WAR metric. In Figure \ref{fig:gwarVbwar19} we plot $GWAR$ vs. $BWAR$ for these starting pitchers in 2019. \begin{figure}[H] \centering \includegraphics[width=14cm]{writeup_plots/plot_GWAR_vs_BWAR_2019.png} \caption{\textit{Grid WAR} vs. Baseball Reference WAR in 2019. Pitchers above the line $y=x$ are undervalued according to $GWAR$ relative to $BWAR$, and pitchers below the line are overvalued.} \label{fig:gwarVbwar19} \end{figure} Similarly as before, we compare players who have similar $BWAR$ but different $GWAR$ values in 2019. To start, in Figure \ref{fig:pb1} we compare Zack Greinke to Lucas Giolito. They have similarly high $BWAR$ (Giolito 5.2, Greinke 5.1), but Greinke has much higher $GWAR$ (Greinke 4.9, Giolito 3.8). Also, in Figure \ref{fig:pb2} we compare Julio Teheran to Anibal Sanchez. They have similarly moderate $BWAR$ (Teheran 3.3, Sanchez 3.3), but Teheran has much higher $GWAR$ (Teheran 4.2, Sanchez 2.8). Finally, in Figure \ref{fig:pb3} we compare Wade Miley to Robbie Ray. They have similar $BWAR$ (Miley 2.0, Ray 2.0), but Miley has higher $GWAR$ (Miley 2.4, Ray 1.8). \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pb1_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Zack Greinke (left), Lucas Giolito (middle), and the difference between these two histograms (right). They have similar $BWAR$ (Giolito 5.2, Greinke 5.1), but Greinke has much higher $GWAR$ (Greinke 4.9, Giolito 3.8). This is because Greinke pitches 5 more games in which he allows 4 runs or fewer, and Giolito pitches 1 more game in which he allows 5 runs or more.} \label{fig:pb1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pb2_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Julio Teheran (left), Anibal Sanchez (middle), and the difference between these two histograms (right). 
They have similar $BWAR$ (Teheran 3.3, Sanchez 3.3), but Teheran has much higher $GWAR$ (Teheran 4.2, Sanchez 2.8). This is because Teheran pitches 6 more games in which he allows 1 run or fewer, and Sanchez pitches 4 more games in which he allows 2 runs or more.} \label{fig:pb2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pb3_2019.png} \caption{Histogram of runs allowed in a game in 2019 for Wade Miley (left), Robbie Ray (middle), and the difference between these two histograms (right). They have similar $BWAR$ (Miley 2.0, Ray 2.0), but Miley has higher $GWAR$ (Miley 2.4, Ray 1.8). This is because Miley pitches 5 more games in which he allows 3 runs or fewer, and Ray pitches 4 more games in which he allows 4 runs or more.} \label{fig:pb3} \end{figure} We see the same trend explaining the differences in $GWAR$ here as in the $GWAR$ comparison to FanGraphs in Section \ref{sec:gwar_vs_FanGraphs}. In particular, the pitcher with higher $GWAR$ allows fewer runs in more games, and allows more runs in fewer games. This is again depicted graphically in the ``Difference'' histograms, as the green bars are shifted towards the left and the red bars are shifted towards the right. Moreover, in Figure \ref{fig:pb_agg} we compare the five most undervalued pitchers (according to $GWAR$ relative to $BWAR$) on aggregate to the five most overvalued pitchers. As before, we find that on aggregate, pitchers who are undervalued pitch more games with few runs allowed, and pitchers who are overvalued pitch more games with many runs allowed. This makes sense in light of the convexity of \textit{Grid WAR}, which places importance on games with few runs allowed.
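The ``Difference'' histograms used throughout this section are straightforward to reproduce. A toy sketch with made-up game logs (the run totals below are hypothetical, not the 2019 data) shows why two pitchers with identical averages can separate under $GWAR$:

```python
import numpy as np

# Made-up game logs (not actual 2019 data): two pitchers with identical season
# totals -- 18 runs over 8 starts -- but very different game-by-game shapes.
pitcher_a = [0, 0, 1, 1, 2, 3, 3, 8]   # mostly gems, one blowup
pitcher_b = [2, 2, 2, 2, 2, 2, 3, 3]   # the same 18 runs, spread evenly

max_r = max(pitcher_a + pitcher_b)
hist_a = np.bincount(pitcher_a, minlength=max_r + 1)
hist_b = np.bincount(pitcher_b, minlength=max_r + 1)
diff = hist_a - hist_b   # positive entries = green bars, negative = red bars

assert sum(pitcher_a) == sum(pitcher_b)   # identical averages ...
assert diff[:2].sum() > 0                 # ... but A's green bars sit at 0-1 runs,
                                          # so a convex GWAR favors pitcher A
```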
\begin{figure}[H] \centering \includegraphics[width=15cm]{writeup_plots/pb_agg_2019.png} \caption{Histogram of runs allowed in a game for the five most undervalued pitchers according to $GWAR$ relative to $BWAR$ (left), the five most overvalued pitchers (middle), and the difference between these two histograms (right). } \label{fig:pb_agg} \end{figure} \section{Conclusion} Traditional methods of computing WAR are flawed because they compute WAR as a function of a pitcher's \textit{average} performance. Averaging over performance is a subpar way to value a pitcher because it ignores his game-by-game variance and the convexity of WAR. Stated concisely, ``\textit{not all runs have the same value}'' - for instance, allowing the tenth run of a game has a smaller marginal impact than allowing the first run of a game, because by the time a pitcher has allowed nine runs, the game is essentially already lost. In other words, ``\textit{you can only lose a game once.}'' So, in this paper, we devise \textit{Grid WAR}, a new way to compute a starting pitcher's WAR. We compute a pitcher's $GWAR$ in each of his individual games, and define his seasonal $GWAR$ as the sum of the $GWAR$ of his individual games. We compute $GWAR$ on a set of starting pitchers in 2019, and compare these values to their FanGraphs WAR and Baseball Reference WAR. Examining the trends of pitchers who are overvalued and undervalued by $GWAR$ relative to $FWAR$ and $BWAR$ in 2019, we see that $GWAR$ values games in which a pitcher allows few runs. This makes sense because the convexity of \textit{Grid WAR} places importance on games with few runs allowed. \subsection{Future Work} An MLB general manager, when signing a starting pitcher to his team, is generally interested in his expected \textit{future} performance. Therefore, it would be prudent to explore whether a pitcher's $GWAR$ in season $t$ is predictive of his $GWAR$ in season $t+1$.
In this vein, we also suggest exploring a version of \textit{Grid WAR} based on expected runs allowed ($xRA$), rather than runs allowed, to potentially make $GWAR$ more stable from year to year. Furthermore, in this paper we propose a new way to compute WAR for starting pitchers. Our method, however, does not translate to valuing relievers in an obvious way. In particular, relievers enter the game at different times, which makes it difficult to value their context-neutral win contribution. Also, there is no obvious analog of $w_{rep}$ for relievers. Nevertheless, for future work we suggest extending \textit{Grid WAR} to value relief pitchers. \subsection{Our Code \& Data}\label{sec:data_and_code} Our code is available on GitHub at \url{https://github.com/snoopryan123/grid_war}. Our data is available on Box at \url{https://upenn.app.box.com/v/retrosheet-pa-1990-2000}. \newpage \bibliographystyle{apalike}
\section{Introduction} Consumers increasingly rely on user-generated online reviews when making purchase decisions \cite{Cone:11,Ipsos:12}. Unfortunately, the ease of posting content to the Web, potentially anonymously, creates opportunities and incentives for unscrupulous businesses to post deceptive opinion spam: fraudulent or fictitious reviews that are deliberately written to sound authentic, in order to deceive the reader \cite{Ott:11}. For example, a hotel manager may post fake positive reviews to promote their own hotel, or fake negative reviews to demote a competitor's hotel. Accordingly, there appears to be widespread and growing concern among both businesses and the public \cite{Meyer:09,Miller:09,Streitfeld:12,Topping:10}. The following are two reviews; one is deceptive and the other is truthful. \begin{enumerate} \item {My husband and I arrived for a 3 night stay for our 10th wedding anniversary. We had booked an Executive Guest room, upon arrival we were informed that they would be upgrading us to a beautiful Junior Suite. This was just a wonderful unexpected plus to our beautifully planned weekend. The front desk manager was professional and made us feel warmly welcomed. The Chicago Affinia was just a gorgeous hotel, friendly staff, lovely food and great atmosphere. Not to mention the feather pillows and bedding that was just fantastic. Also we were allowed to bring our beloved Shi-Tzu and he experienced the Jet Set Pets stay. The grooming was perfect, the daycare service we felt completely comfortable with. This was a beautiful weekend, thank you Affinia Hotels! We would visit this hotel again! } \item {\em As others have said, once all the construction works are completed I suspect that the prices this hotel can (legitimately) charge will put it out of our price range. Which will be a pity because it is an excellent hotel, the location couldn't be better. The room was very spacious, with separate sitting and study areas and a nice bathroom.
Only 3 minor points: downstairs they were serving a free basic breakfast (coffee and pastries), but we only knew of it on our last morning - nobody had mentioned this; the cost of internet is a bit dear especially when lots of motels now offer it free; there was only powdered milk in the rooms, which wasn't that nice. But none of these really spoiled a really enjoyable stay. } \end{enumerate} As can be seen, the distinction is subtle, and it is hard to manually tell truthful reviews from deceptive ones (the first one is deceptive). Existing approaches for spam detection usually focus on developing supervised learning-based algorithms to help users identify deceptive opinion spam \cite{jindal2008opinion,jindal2010finding,li2011learning,lim2010detecting,Ott:11,wang2011review,wu2010distortion}. These supervised approaches suffer from one main disadvantage: they are highly dependent upon high-quality gold-standard labelled data. One option for producing gold-standard labels, for example, would be to rely on the judgements of human annotators \cite{jindal2010finding,mukherjee2012spotting}. Recent studies, however, show that unlike other kinds of spam, such as Web \cite{castillo2006reference,martinez2009web} and e-mail spam \cite{chirita2005mailrank}, deceptive opinion spam is neither easily ignored nor easily identified by human readers \cite{Ott:11}. This is especially the case when considering the overtrusting nature of most human judges, a phenomenon referred to in the psychological deception literature as a truth bias \cite{vrij2008detecting}. Due to the difficulty in manually labeling deceptive opinions, Ott et al. (2011) \cite{Ott:11} solicited deceptive reviews from workers on Amazon Mechanical Turk and built a dataset containing 400 deceptive and 400 truthful reviews, which they used to train and evaluate supervised SVM classifiers\footnote{Truthful opinions are selected from 5-star reviews from TripAdvisor. http://www.tripadvisor.com/}.
According to their findings, truthful hotel opinions are more specific about spatial configurations (e.g., small, bathroom, location), which can be easily explained by the fact that the Turkers have never been to those hotels. Our work starts from this dataset of 400 truthful and 400 deceptive reviews \cite{Ott:11,li2013topicspam}, a large-scale, publicly available dataset for deceptive opinion spam research. We then pre-process the dataset in three ways: (a) building n-gram language models to obtain probabilistic language models; (b) using POS tags to probe the linguistic reasons why a review is likely to be considered fake; and (c) using TF-IDF to reflect how important a word is in a given text. After pre-processing, the data is high-dimensional, which is computationally expensive and easily raises the overfitting problem. Thus we are motivated to reduce the dimensionality using Latent Semantic Indexing (LSI), one of the most popular indexing and retrieval methods, which uses singular value decomposition (SVD) to map documents into vectors in a latent concept space. LSI extracts useful information in latent concepts by learning a low-dimensional representation of the text, which helps reveal finer-grained differences between fake and real reviews. We also use supervised latent semantic analysis (sprinkling) to overcome this shortcoming of LSI, and results show that performance improves substantially. To classify the data, we use Support Vector Machines (SVM) and Naive Bayes, two popular and mature classifiers, on our dataset after pre-processing and dimension reduction. Although these classifiers are powerful, each may still misclassify some points, so we use a voting scheme, a weighted combination of multiple hypotheses, to reach a final decision. The remainder of this paper is organized as follows: Section 2 discusses related work.
Section 3 presents details of our fake review detection framework, including three approaches to preprocessing data, two ways of dimension reduction, and two classification methods. Section 4 presents the experimental results and a discussion of performance. Finally, Section 5 concludes our work. \section{Related Work} Jindal and Liu \shortcite{jindal2008opinion} first studied the deceptive opinion problem and trained models using features based on the review text, reviewer, and product to identify duplicate opinions, i.e., opinions that appear more than once in the corpus with similar contexts. Wu et al. (2010) propose an alternative strategy to detect deceptive opinion spam in the absence of a gold standard. Yoo and Gretzel \shortcite{yoo2009comparison} gathered 40 truthful and 42 deceptive hotel reviews and manually compared the linguistic differences between them. Ott et al. created a gold-standard collection by employing Turkers to write fake reviews, and follow-up research was based on their data \cite{ott2012estimating,Ott-EtAl:2013:NAACL-HLT,li2013identifying,feng2013detecting,litowards}. For example, Song et al. \shortcite{feng2012syntactic} looked into syntactic features from Context Free Grammar parse trees to improve the classifier performance. A step further, Feng and Hirst \shortcite{feng2013detecting} make use of the degree of {\it compatibility} between the personal experience and a collection of reference reviews about the same product rather than simple textual features. In addition to exploring text or linguistic features in deception, some existing work looks into customers' behavior to identify deception \cite{mukherjee2013spotting}. For example, Mukherjee et al. \shortcite{mukherjee2011detecting,mukherjee2012spotting} delved into group behavior to identify groups of reviewers who work collaboratively to write fake reviews.
\section{Data Processing} Before building the n-gram models, we tokenized the text of each review and applied the Porter stemming algorithm to eliminate the influence of different surface forms of a word. An n-gram model is a type of probabilistic language model for predicting the next item in a sequence in the form of an $(n-1)$-order Markov model. We built unigram and bigram models in our experiment. \paragraph{tf-idf} TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. The tf-idf value increases proportionally with the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others. For example, stop words (common words such as the, a, and that, which appear everywhere but carry little meaning) are of little significance, so tf-idf assigns small weights to these words. The tf-idf formula is: $$w_{i,j}=tf_{i,j}\cdot idf_i=tf_{i,j}\cdot \log\frac{N}{df_i}$$ where $tf_{i,j}$ is the frequency of term $i$ in document $j$, $N$ is the number of documents in the corpus, and $df_i$ is the number of documents containing term $i$. \paragraph{POS} Part-of-speech tagging, also called grammatical tagging or word-category disambiguation, is the process of marking up a word in a text as corresponding to a particular part of speech. Our project uses POS tags to probe the linguistic reasons why a review is likely to be considered fake. We use the Stanford Parser to obtain the relative POS frequencies as feature vectors to separate the data. \section{Model} The power of dimension reduction lies in the fact that it can ease later processing, improve computational performance, filter out useless noise and recover underlying causes. In our project, we use LSI for text classification. LSI is based upon the assumption that there is an underlying semantic structure in textual data, and that the relationship between terms and documents can be re-described in terms of this semantic structure.
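As a concrete illustration of the tf-idf weighting defined above, the sketch below computes $tf\cdot\log(N/df)$ over a toy three-document corpus; the use of raw term counts and the absence of smoothing are our assumptions, since those details are not specified here:

```python
import math

def tfidf(corpus):
    """Weight each term of each document by tf * log(N / df)."""
    n_docs = len(corpus)
    df = {}  # document frequency: how many documents contain each term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return [{term: doc.count(term) * math.log(n_docs / df[term])
             for term in set(doc)}
            for doc in corpus]

docs = [["great", "hotel", "the", "the"],
        ["awful", "hotel", "the"],
        ["the", "location", "was", "great"]]
weights = tfidf(docs)
# "the" occurs in all three documents, so idf = log(3/3) = 0 and its
# weight vanishes; a rare term like "awful" receives a larger weight.
```

A stop word such as \emph{the} is automatically down-weighted to zero, matching the discussion above; in the pipeline, such weights populate the term-document matrix that the SVD-based LSI described next factorizes.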
Textual documents are represented as vectors in a vector space. LSI is fundamentally based on a truncated SVD that breaks the original relationships in the data into linearly independent components, where the original term vectors are represented by the left singular vectors and the document vectors by the right singular vectors. That is, the term-document matrix $X$ is approximated as \begin{equation} X \approx U_{d\times l}S_{l\times l}V_{l\times d} \end{equation} The columns of $U_{d\times l}$ define the lower-dimensional coordinate system. $S_{l\times l}$ is a diagonal matrix with the $l$ largest singular values in non-increasing order along its diagonal. Each column of $V_{l\times d}$ gives the new coordinates of a document after dimensionality reduction. One major strength of LSI is that it overcomes two of the most problematic constraints in text retrieval: synonymy and polysemy. It is very tolerant of noise (e.g., misspelled words and typographical errors) and has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information. Another use of LSI is in data visualization: with $l=2$ or $3$, each document $x_i$ is approximated by an $l$-dimensional vector suitable for plotting in 2D or 3D space. LSI has limitations in classification because it is an unsupervised method that does not take class information into account, so we use sprinkling, in which LSI is performed on a term-document matrix augmented with sprinkled terms, namely the class labels. \subsection{Naive Bayes} Naive Bayes classifiers are among the most successful known algorithms for learning to classify text documents. Text classifiers often do not use any deep representation of language: a document is often represented as a bag of words. This is a very simple representation of a document: it records only which words appear and discards the word order. The NB classifier relies on this representation. Consider a document $d$ and a class $c$; in our project there are only two classes, fake and real.
We classify $d$ as the class with the highest posterior probability $P(c|d)$, which can be re-expressed using Bayes' theorem: \begin{equation} P(c|d)=\frac{P(d|c)P(c)}{P(d)} \end{equation} \begin{equation} \begin{aligned} c_{MAP}&=\operatorname*{argmax}_c P(c|d)\\ &=\operatorname*{argmax}_{c}\frac{P(d|c)P(c)}{P(d)}\\ &=\operatorname*{argmax}_{c}P(d|c)P(c) \end{aligned} \end{equation} \subsection{Support Vector Machine} Support Vector Machines (SVMs) \cite{joachims1999making} are supervised learning algorithms used to recognize patterns and classify data. Given a set of binary training data, an SVM training algorithm builds a model that calculates a hyperplane separating the data into one category or the other. The goal of an SVM is to find a hyperplane that clearly classifies the data while maximizing the distance of the support vectors to the hyperplane, which corresponds to optimizing the Lagrangian \begin{equation} \frac{1}{2}||w||^2-\sum_{i}\alpha_i [y_i(w\cdot x_i-b)-1] \end{equation} We choose SVMs as our classifier since, in the area of text classification, SVMs perform more robustly than other techniques. When implementing the SVM, we first pre-process each original document into a vector of features. Such text feature vectors often have high dimension; however, SVMs handle this quite well. SVMs use overfitting protection, so performance is not greatly affected when the number of features increases. Joachims shows that most text categorization problems are linearly separable, so we use linear SVMs, which are also faster to train, to solve our problem. In our project, we use SVMlight to train our linear SVM models on datasets processed with unigrams, bigrams, POS tags, and TF-IDF. We also train on the dataset after dimension reduction. Results show that SVMs perform well at classifying our data. \subsection{Voting} A voting scheme, also called a weighted combination of multiple hypotheses, is used to improve performance when several approaches to a problem are available. Although an individual classifier may be quite powerful, it can still misclassify some points.
On the other hand, we desire simple models that free users from troublesome algorithm design. The uncertainty of dimension selection and goodness evaluation makes classification difficult. This method is more robust to hard-to-classify text and achieves better performance compared to the separate models. \section{Experimental Results} To better understand the models learned by these automated approaches, we demonstrate what the SVD in the LSI analysis is capturing. We find the groups of words that have the most significant influence on the latent concepts; the second concept contributes most to separating the fake reviews. In Table~\ref{tab:topwords} we list the words with the highest scores loading on the positive pole (truthful) and the negative pole (deceptive) of the second concept. \begin{table}[!ht] \centering \begin{tabular}{ll}\hline deceptive&truthful\\\hline hotel&room\\ my&)\\ chicago&(\\ will&but\\ room&$\$$\\ very&bathroom\\ visit&location\\ husband&night\\ city&walk\\ experience&park\\\hline\hline \end{tabular} \caption{Top words loading on the deceptive and truthful poles of the second latent concept} \label{tab:topwords} \end{table} We use sprinkled LSI and plain LSI to reduce the term-document matrix to different dimensions from 50 to 700, and use an SVM to classify the data. The results show that as the dimension increases, the training accuracy of both methods also increases, because higher-dimensional features carry more information. Testing accuracy and precision also increase as the dimension grows up to about 500; as the dimension increases further, training accuracy keeps rising while testing accuracy goes down, implying an overfitting problem when the dimensionality of the training data is high. Results in bold correspond to the best accuracy. We can see that when we use sprinkling + SVM with the dimension around 500, the accuracy is 90\%, which is quite good. We combine the 5 different algorithms above to automatically obtain the final classification by voting over their test predictions.
If 3 or more methods vote for the same class, we assign the document to that class. The five algorithms we choose are as follows: sprinkling+SVM (dimension 500), sprinkling+SVM (dimension 300), unigram+SVM, TF-IDF+SVM, and unigram+NB. The voting algorithm achieves 0.95 accuracy. \begin{table} \centering \begin{tabular}{|c|c|}\hline SVM$_{unigram}$&0.90\\\hline NB$_{unigram}$&0.863\\\hline SVM$_{tf-idf}$&0.885\\\hline SVM$_{sprinkle300}$&0.890\\\hline SVM$_{sprinkle500}$&0.900\\\hline Voting&0.95\\\hline \end{tabular} \caption{Performance comparison of the voting scheme vs. the separate algorithms} \end{table} \section{Conclusion} In this work we use different methods to analyse and improve fake review classification results, building on the work of Myle Ott and Yejin Choi. It shows that automatic approaches to detecting fake reviews perform well beyond human judges. We first build n-gram models and a POS model, combined with SVM and Naive Bayes, to detect deceptive opinions. Similar to Ott and Choi's result, n-gram-based text categorization has the best performance, suggesting that keyword-based approaches might be necessary for detecting fake reviews. Besides, we used LSI for dimension reduction and some theoretical analysis. In contrast to Ott and Choi, our results show that 2nd-person pronouns, rather than 1st-person forms, tend to be used in deceptive reviews. Fake reviews also tend to lack concrete nouns and adjectives. Furthermore, we compare the experimental results of sprinkled LSI and plain LSI, and find that sprinkled LSI + SVM outperforms LSI + SVM when the dimension is not high. Finally, we proposed a voting scheme which combines the algorithms above to achieve better classification performance.\bibliographystyle{acl}
\section*{Acknowledgements} We thank a referee, K.~Asano, A.~Beloborodov, J.~C.~McKinney, T.~Nakamura and T.~Piran for useful comments. This work is supported by KAKENHI, 19047004, 22244019, 22244030 (KI), 21684014 (KI, YO), 22740131 (NK) and 20105005 (AM).
\subsection*{Acknowledgements} Most of this work was done during a stay at IMPA (Rio de Janeiro) in November 2010, in the framework of the France-Brazil cooperation. I thank the first for its hospitality and the second for its support.
\section*{Acknowledgements} It is a pleasure to thank Masaaki Kuroda for a fruitful collaboration and Lev Lipatov and Victor Kim for a splendid organization and a wonderful hospitality in Repino and St. Petersburg.
\section{Introduction} \begin{table}[t] \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1.1} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|l|} \hline \rowcolor{gray!15} \multicolumn{1}{|c|}{\textbf{Segments of a Podcast Transcript }}\\ \hdashline\Tstrut What's good? Everybody is of all trades here with the game \\ Illuminati. Hope you guys are having a great day. So far. If you guys \\ didn't get a chance to check out our last video. Be sure to check out \\ the link down in the description box below. We are back for another \\ Triple Threat Sports podcast. Now before we get into the Super Bowl \\ edition of Triple Threat Sports podcast. I got to introduce you guys \\ to my co-host first co-host. Say what up C ewan's what up next you \\ guys know him as UT X JG to dine, but he's also known as the ...\\[0.2em] \hdashline\Tstrut The LA Rams supporter and he's going to the Superbowl. Say what\\ up, GG don't it? Feel so good though. It feels so good. The lone\\ person the triple threat Sports podcast by team is going to the shelf \\ and people are mad and we gonna talk about it man. We got we \\ definitely going to talk about it for sure. So at the time of this \\ recording we've already gone through the Pro Bowl which was \\ yesterday. I'm sure some of you guys watched it and you know \\ whoopty whoopty Doo I've ...\\[0.2em] \hdashline\Tstrut Interest in the Pro Bowl after like I turned I 15 but AFC 126 to 7, but \\ we're going to talk about these NFL Conference Championship \\ games. You got to between the Rams and the Saints which whoo, \\ boy, there's a lot of controversy behind that one and then the Chiefs \\ and the Patriots before we get to the smoke and everything like that \\ because jg's been handing them out all this week. Let's go ahead \\ and start going into the NFL conference championships for the \\ Patriots and the Chiefs now, I'll go with you ... 
\\[0.5em] \hline \hline \rowcolor{gray!15} \multicolumn{1}{|c|}{\textbf{Creator Description}} \\ \hdashline\Tstrut The Guys are back for another Triple Threat Sports Podcast! This \\ time UTXJGTHEDON is giving out all the smoke as his Los Angeles \\ Rams is heading to the Super Bowl to face the New England Patriots. \\ --- Support this podcast: https://anchor.fm/triplethreatsportspodcast\\[0.5em] \hline \hline \rowcolor{gray!15} \multicolumn{1}{|c|}{\textbf{Our Summary}} \\ \hdashline\Tstrut In this episode of the Triple Threat Sports Podcast, JG and UTX \\ discuss the NFL Conference Championship games between the \\ Patriots and Chiefs and the Rams and Saints. They also discuss \\ the controversy between the Chiefs and Patriots and what they \\ had to say about it. The guys also give their predictions for the \\ Super Bowl and what teams they think are going to win the game.\\[0.5em] \hline \end{tabular} } \end{scriptsize} \vspace{-0.1in} \caption{ A snippet of the podcast transcript, its original creator description and an abstract produced by our summarizer. } \label{tab:example} \vspace{-0.2in} \end{table} Podcasting is a promising new medium for reaching a broad audience. A podcast series usually features one or more recurring hosts engaged in a discussion about a particular topic.\footnote{\url{https://en.wikipedia.org/wiki/Podcast}} New platforms developed by Spotify, Apple, Google and Pandora encompass a wide variety of topics, ranging from talk shows to true crime and investigative journalism. Our data are provided by Spotify~\cite{trec2020podcastnotebook}, containing 100,000 podcast episodes comprising raw audio files, their transcripts and metadata. The transcription is provided by Google Cloud Platform's Speech-to-Text API.\footnote{\url{https://cloud.google.com/speech-to-text}} We seek to generate a concise textual summary for any podcast episode, which a user might read when deciding whether to listen to the podcast. An ideal summary is required to accurately convey all the most important attributes of the episode, such as topical content, genre and participants. It is best for the summary to contain no redundant material that is not needed when deciding whether to listen. In Table~\ref{tab:example}, we show a snippet of the podcast transcript and its creator description. The snippet contains 3 segments, each corresponding to 30 seconds of audio.
An ideal summary is required to accurately convey all the most important attributes of the episode, such as topical content, genre and participants. It is best for the summary to contain no redundant material that is not needed when deciding whether to listen. In Table~\ref{tab:example}, we show a snippet of the podcast transcript and its creator description. The snippet contains 3 segments, each corresponds to 30 seconds of audio. \begin{figure*} \centering \includegraphics[width=5in]{architecture} \vspace{-0.05in} \caption{ An illustration of our system architecture. \textsf{\textbf{\scriptsize{UCF\_NLP1}}} truncates the transcript to a length of $L$=1,024 tokens, before feeding it to the BART model to produce a concise abstract. \textsf{\textbf{\scriptsize{UCF\_NLP2}}} produces an abstractive summary in a similar fashion. It enhances content selection by identifying important segments from the transcripts to serve as input to BART. The selected segments are limited to a length of $L$=1,024 tokens. } \label{fig:architecture} \end{figure*} The major challenge in performing podcast summarization includes (a) the unique characteristics of spoken text. Disfluencies and redundancies are abundant in spoken text; its information density is often low when compared to written text. The podcasts are of various genres: monologue, interview, conversation, debate, documentary, etc. and transcription is more challenging and noisier; (b) the excessive length of transcripts. It exceeds the limit imposed by many neural abstractive models. A podcast transcript contains on average 80 segments and 5,743 tokens. It serves as the input to a podcast summarization system to produce an abstract. The creator description is part of the metadata. It contains 81 tokens on average and is used as the reference summary. 
This work draws on our rich experience in summarizing meeting conversations~\cite{liu-etal-2009-unsupervised,liu-liu-2009-extractive,Liu:2013:IEEETrans,Koay:2020} and building neural abstractive systems~\cite{lebanoff-etal-2019-scoring,Lebanoff:2020:AACL,Song:2020:Copy}. We have chosen an abstractive system over its extractive counterpart for this task, as neural abstractive systems have seen significant progress~\cite{Raffel:2019,lewis-etal-2020-bart,qi2020prophetnet}. Not only can an abstract accurately convey the content of the podcast, but it is in a succinct form that is easy to read on a smartphone. Our system seeks to fine-tune a neural abstractive summarizer with an encoder-decoder architecture~\cite{lewis-etal-2020-bart} on podcast data. We especially emphasize content selection, where an extractive module is developed to select salient segments from the beginning and end of a transcript, which serve as the input to an abstractive summarizer. In this work, we systematically investigate three crucial questions concerning abstractive summarization of podcast transcripts. \begin{itemize}[topsep=5pt,itemsep=0pt] \item Is it sufficient to feed the leading sentences of a transcript to the summarizer to produce an abstract, or are there advantages to be gained from selecting salient segments with an extractive module to use as input to the summarizer? (\emph{Content Selection}) \item Should we remove training pairs with noisy, low-quality reference summaries in entirety, or can we improve their quality, and thus strike a balance between the amount and quality of training examples? (\emph{The Quality of Reference}) \item What summary length would be most appropriate for podcast episodes to serve our goal of assisting users in deciding whether to listen to the podcast? 
(\emph{Summary Postprocessing}) \end{itemize} \section{Our Method} \label{sec:approach} We aim to produce a concise textual summary from a podcast transcript that captures the most important information of the episode to help users decide whether to listen to that episode. Our method makes use of podcast transcripts only, not raw audio. It utilizes the BART model~\cite{lewis-etal-2020-bart}, which employs an encoder-decoder architecture, to condense a source text into an abstractive summary. The model is pretrained using a denoising objective. The source text is corrupted by replacing spans of text with mask symbols. The encoder encodes the corrupted text using a bidirectional model, and the decoder learns to reconstruct the original source text by attending to hidden states of the final layer of the encoder using a cross-attention mechanism. Our implementation is based on BART-\textsc{large}. The encoder and decoder each contain 12 layers of Transformer blocks. The hidden state and embedding size is 1,024. Byte-Pair Encoding (BPE; Sennrich et al., 2016\nocite{sennrich-etal-2016-neural}) is used to tokenize the source text. It has a vocabulary of 50,265 subword units. The BART model is fine-tuned on CNN (\textsf{\textbf{\footnotesize{bart-large-cnn}}}) then on podcast data; the latter contain 79,262/500 examples for training and validation. Our system is evaluated on a test set with 1,027 examples. Given a transcript, we compare two methods to generate the source text that serves as the input to BART, corresponding to two runs we submitted to the Podcast Challenge. Our system architecture is illustrated in Figure~\ref{fig:architecture} and details are described as follows. \begin{itemize}[topsep=5pt,itemsep=0pt] \item \textsf{\textbf{\footnotesize{UCF\_NLP1}}} \quad The transcript is truncated to a length of $L$=1,024 tokens.
The method takes the lead sentences of a transcript and feeds them to the BART model to produce a succinct abstractive summary that captures important information about the podcast episode. \item \textsf{\textbf{\footnotesize{UCF\_NLP2}}} \quad It produces an abstractive summary from a podcast transcript in a similar fashion. Crucially, the method enhances content selection by identifying summary-worthy segments from the transcript to serve as input to BART. The selected segments are limited to a length of $L$=1,024 tokens. \end{itemize} \subsection{Content Selection} \label{sec:content} We seek to empirically answer the question: ``\emph{Is it sufficient to feed the lead sentences of the transcript to an abstractive summarizer, or are there advantages to be gained from selecting salient segments with an extraction module to use as input to the summarizer?}'' We consider segments produced by the Google Speech-to-Text API as basic units of extraction, each corresponding to 30 seconds of audio. We opt for segment- rather than sentence-based extraction for two reasons. First, the information density of single utterances is often low. In contrast, the segments are lengthier and more detailed. They tend to have similar lengths and are less likely to be misclassified due to length variation. Second, compared to sentences, concatenating segments to form a source text that serves as the input to BART can help preserve the context of the utterances extracted from the transcript. We introduce a hybrid representation for the $i$-th candidate segment that combines deep contextualized representations and surface features. Particularly, each segment is encoded by RoBERTa~\cite{liu2019roberta}. It contains 24 layers of Transformer blocks, has a hidden size of 1024 and 16 attention heads. A special token \textsf{\scriptsize{[CLS]}} is added to the beginning of the segment and \textsf{\scriptsize{[SEP]}} is added to the end.
We use the output vector corresponding to the \textsf{\scriptsize{[CLS]}} token as the contextualized representation for the segment, denoted by $\bm{h}_i^{c} \in \mathbb{R}^D$. A segment containing salient words is deemed to be important. We measure word salience by its duration (in seconds) and TF-IDF score, which are orthogonal to contextualized representations and aim to capture topical salience. To characterize a segment using its constituent words, we compute 12 feature scores for a candidate segment, including (a) the sum and average of word TF-IDF scores; (b) the sum and average of word durations; (c) the average of word TF-IDF scores (and durations), limited to the 5/10/15/20 words per segment that yield the highest scores. Each feature score is discretized into a binary vector using a number of bins whose sizes are \{2, 3, 5, $\dots$, 31, 37\} (12 prime numbers). For example, a feature score is mapped to a 2-dimensional vector [0,1] (bin size=2) if its value is in the upper half of all values; otherwise it is [1,0]. By concatenating binary vectors of different bin sizes, and vectors corresponding to different feature scores, we obtain a 2,364-dimensional vector for each segment. The vector is passed through a feedforward layer to generate a surface feature vector of size $D$, denoted by $\bm{h}_i^{s} \in \mathbb{R}^D$. We take 33 segments from the beginning and 7 segments from the end of each transcript to be the candidate segments. This amounts to a total of 40 segments per episode. The selection is bounded by the GPU memory, but allows us to cover 81\% of the ground-truth summary segments.
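The binarization of surface features can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: in particular, the quantile-based bin edges and the function names are our own assumptions; only the 12 prime bin sizes, the 12 feature scores per segment, and the resulting 2,364 dimensions come from the description above.

```python
import numpy as np

PRIME_BINS = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # the 12 bin sizes

def one_hot_bin(score, corpus_scores, n_bins):
    """Map a feature score to a one-hot vector over quantile bins of its
    corpus-wide distribution; e.g. with n_bins=2, a score in the upper
    half of all values becomes [0, 1], otherwise [1, 0]."""
    edges = np.quantile(corpus_scores, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    vec = np.zeros(n_bins)
    vec[int(np.searchsorted(edges, score, side="right"))] = 1.0
    return vec

def surface_vector(feature_scores, corpus_feature_scores):
    """Concatenate one-hot bin vectors over all 12 feature scores and all
    12 bin sizes: 12 x (2 + 3 + ... + 37) = 12 x 197 = 2,364 dimensions."""
    parts = [
        one_hot_bin(score, corpus_feature_scores[:, f], n_bins)
        for f, score in enumerate(feature_scores)
        for n_bins in PRIME_BINS
    ]
    return np.concatenate(parts)

# toy corpus: 12 feature scores for each of 1,000 candidate segments
rng = np.random.default_rng(0)
corpus = rng.random((1000, 12))
v = surface_vector(corpus[0], corpus)
print(v.shape)  # (2364,)
```

The resulting 2,364-dimensional binary vector would then be projected by a feedforward layer down to size $D$ to give $\bm{h}_i^{s}$.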
Each segment is characterized by its contextualized representation $\bm{h}_i^{c}$, surface features $\bm{h}_i^{s}$, and a position embedding $\bm{h}_i^{p}$, which are summed element-wise to serve as input to a 2-layer Transformer encoder (hidden size $D$=1,024, 16 attention heads, no pretraining) that produces a vector for each candidate segment of the transcript. Each vector is fed to a feedforward and a softmax layer to predict whether the segment is salient. The ground-truth segment labels are derived by comparing segments with creator descriptions: we calculate the ROUGE-2 recall score of a segment against each sentence of the creator description, and a segment is labelled positive if any of these scores exceeds a threshold ($\tau$=0.2), and negative otherwise. The positive-to-negative ratio is 1:18 among candidate segments, and no downsampling was performed. Our preliminary results suggest that using a hybrid representation that combines surface features with contextualized representations for the segments can lead to an improvement in extraction performance (+0.53\% F-score). \begin{table}[t] \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1.15} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|l|} \hline \rowcolor{gray!15} \multicolumn{1}{|c|}{\textbf{Creator Description}} \\ \hdashline\Tstrut The Guys are back for another Triple Threat Sports Podcast! This \\ time UTXJGTHEDON is giving out all the smoke as his Los Angeles \\ Rams is heading to the Super Bowl to face the New England Patriots. \\ --- Support this podcast: https://anchor.fm/triplethreatsportspodcast\\[0.5em] \hline \hline \rowcolor{gray!15} \multicolumn{1}{|c|}{\textbf{Clean Reference Summary}} \\ \hdashline\Tstrut The Guys are back for another Triple Threat Sports Podcast! This \\ time UTXJGTHEDON is giving out all the smoke as his Los Angeles \\ Rams is heading to the Super Bowl to face the New England Patriots.
\\ \hline \end{tabular} } \end{scriptsize} \vspace{-0.05in} \caption{ Our data cleansing method focuses on improving the quality of reference summaries using a number of heuristics, rather than eliminating noisy reference summaries in entirety. It thus strikes a good balance between the quality and amount of training examples. } \label{tab:ref} \vspace{-0.1in} \end{table} \begin{table*} \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1.15} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|l|l|} \hline\Tstrut (\textbf{3}) \textbf{Excellent} & The summary accurately conveys all the most important attributes of the episode, which could include topical content, genre, \\ & and participants. In addition to giving an accurate representation of the content, it contains almost no redundant material \\ & which is not needed when deciding whether to listen. It is also coherent, comprehensible, and has no grammatical errors. \\ \hdashline (\textbf{2}) \textbf{Good} & The summary conveys most of the most important attributes and gives the reader a reasonable sense of what the \\ & episode contains with little redundant material which is not needed when deciding whether to listen. Occasional \\ & grammatical or coherence errors are acceptable.\\ \hdashline (\textbf{1}) \textbf{Fair} & The summary conveys some attributes of the content but gives the reader an imperfect or incomplete sense of what the \\ & episode contains. It may contain redundant material which is not needed when deciding whether to listen and may contain \\ & repetitions or broken sentences.\\ \hdashline (\textbf{0}) \textbf{Bad} & The summary does not convey any of the most important content items of the episode or gives the reader an incorrect or \\ & incomprehensible sense of what the episode contains. 
It may contain a large amount of redundant information that is not \\ & needed when deciding whether to listen to the episode.\\[0.3em] \hline \end{tabular} } \end{scriptsize} \vspace{-0.05in} \caption{ The qualitative judgments performed by NIST. The rating is a number from 0-3, with 0 being Bad and 3 being Excellent. } \label{tab:eval_criteria} \end{table*} \begin{table} \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.15} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|l|l|} \hline \textbf{Q1} & Does the summary include \textbf{names of the main people} (hosts, \\ & guests, characters) involved or mentioned in the podcast? \\[0.2em] \hdashline \textbf{Q2} & Does the summary give any \textbf{additional information} about \\ & the people mentioned (such as their job titles, biographies, \\ & personal background, etc)? \\[0.2em] \hdashline \textbf{Q3} & Does the summary include the \textbf{main topic(s)} of the podcast? \\[0.2em] \hdashline \textbf{Q4} & Does the summary tell you anything about \textbf{the format of} \\ & \textbf{the podcast}; e.g. whether it's an interview, whether it's a chat \\ & between friends, a monologue, etc? \\[0.2em] \hdashline \textbf{Q5} & Does the summary give you \textbf{more context on the title} \\ & of the podcast?\\[0.2em] \hdashline \textbf{Q6} & Does the summary contain \textbf{redundant information}? \\[0.2em] \hdashline \textbf{Q7} & Is the summary written in \textbf{good English}? \\[0.2em] \hdashline \textbf{Q8} & Are the \textbf{start and end of the summary} good sentence and \\ & paragraph start and end points? \\[0.2em] \hline \end{tabular} } \end{scriptsize} \vspace{-0.05in} \caption{ There are eight yes-or-no questions asked about the summary. The judgments are performed by NIST. An ideal summary should receive a ``yes'' (1) for all questions but Q6. 
} \label{tab:eval_questions} \end{table} \subsection{The Quality of Reference} \label{sec:quality} One of the significant challenges in abstractive summarization is the scarcity of labelled data. While it is common practice to remove training examples containing noisy, low-quality reference summaries, it is not obvious whether this is the best path to take for data curation, as a significant amount of examples may be eliminated from the training set. Thus, we raise the question: ``\emph{Should we remove training examples with noisy low-quality reference summaries in entirety, or can we improve their quality, and thus strike a balance between the amount and quality of training examples?}'' The training data provided by the Podcast Challenge contain over 100,000 episodes and short descriptions written by their respective creators. The organizers find that about a third are less useful descriptions, and have since filtered out descriptions that are (a) too long (greater than 750 characters) or short (less than 20 characters); (b) too similar to other descriptions (cut-and-paste or template); (c) too similar to its show description (no new info). This practice results in 66,245 training examples, corresponding to a 34\% reduction of training data, which has a visible effect on performance. Instead of eliminating noisy examples in entirety, we strive to enhance the quality of creator descriptions using heuristics. Our goal is to identify sentences that contain improper content and remove them from the descriptions. We compute a salience score for each sentence of the description by summing over word IDF scores. A low IDF score indicates the word frequently appears in other episodes, and thus is uninformative.\footnote{ We perform data normalization by replacing URLs, Email addresses, @usernames, \#hashtags, digits and tokens that are excessively long (greater than 25 characters) with placeholders before computing word IDF scores. 
Only words occurring 5 times or more in the corpus and with IDF scores greater than 1.5 are considered when computing sentence salience scores. } We remove sentences if their salience scores are lower than a threshold ($\sigma$=10). The remaining sentences of a creator description are concatenated to form a clean reference summary. Our method results in 79,912 training examples. It reduces the average length of the reference summary from 81 to 76 words. In Table~\ref{tab:ref}, we show an example containing reference summaries before and after data cleansing. \begin{table*} \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.15} \centering \begin{minipage}{\textwidth} \begin{scriptsize} \textsf{ \begin{tabular}{|l|r|rrrrrrrr|} \hline \rowcolor{gray!15} & \textcolor{red}{\textbf{Quality}} & Q1: People & Q2: People & Q3: Main & Q4: Podcast & Q5: Title & Q6: Summ & Q7: Good & Q8: Start/End\\ \rowcolor{gray!15} System & \textcolor{red}{\textbf{Rating}} & Names & Add Info & Topics & Format & Context & Redund & English & Points\\ \hline \hline DESC & 1.291 & 0.559 & 0.341 & 0.721 & 0.553 & 0.637 & 0.034 & 0.777 & 0.520\\ FILT & 1.307 & 0.581 & 0.380 & 0.704 & 0.531 & 0.603 & \textbf{0.028} & 0.782 & 0.609\\ \textcolor{red}{UCF\_NLP1} & 1.453 & 0.609 & 0.374 & \textbf{0.804} & \textbf{0.564} & \textbf{0.821} & 0.078 & 0.827 & \textbf{0.659}\\ \textcolor{red}{UCF\_NLP2} & \textcolor{red}{\textbf{1.559}} & \textbf{0.642} & \textbf{0.385} & 0.765 & \textbf{0.564} & 0.726 & 0.061 & \textbf{0.877} & 0.620\\ \hline \end{tabular} } \end{scriptsize} \caption{ The average results of human judgments for 179 testing summaries. The evaluation was performed by NIST assessors. An assessor quickly skimmed the episode, and made judgments for each summary of the episode. ``\textsf{\scriptsize{DESC}}'' represents the episode description. ``\textsf{\scriptsize{FILT}}'' is a ``filtered'' summary provided by Spotify. ``\textsf{\scriptsize{UCF\_NLP}}$\star$'' are our system outputs. 
Our best system ``\textsf{\scriptsize{UCF\_NLP2}}'' uses an additional extraction component to identify summary-worthy segments from the transcripts. It achieves a quality rating of 1.559. This is an absolute increase of 0.268 (+21\%) over the episode description. } \label{tab:human_eval} \end{minipage} \vskip0.8\baselineskip \begin{minipage}{\textwidth} \begin{scriptsize} \textsf{ \begin{tabular}{|l|r|rrrrrrrr|} \hline \rowcolor{gray!15} & Quality & Q1: People & Q2: People & Q3: Main & Q4: Podcast & Q5: Title & Q6: Summ & Q7: Good & Q8: Start/End\\ \rowcolor{gray!15} System & Rating & Names & Add Info & Topics & Format & Context & Redund & English & Points\\ \hline \hline DESC & 91.06 & 89.39 & 93.30 & 86.59 & \textbf{94.41} & 84.36 & 99.44 & 85.48 & 91.06\\ FILT & 89.39 & 92.18 & 92.74 & 84.92 & 92.74 & 82.12 & 99.44 & 86.03 & \textbf{96.65}\\ \textcolor{red}{UCF\_NLP1} & 93.30 & 97.21 & \textbf{97.21} & \textbf{93.86} & 93.86 & 91.06 & \textbf{100} & 93.30 & 93.30\\ \textcolor{red}{UCF\_NLP2} & \textcolor{red}{\textbf{96.65}} & \textbf{98.88} & 96.65 & 92.18 & 93.86 & \textbf{92.18} & \textbf{100} & \textbf{94.97} & 93.86\\ \hline \end{tabular} } \end{scriptsize} \caption{ Percentages of testing episodes (out of a total of 179) on which our system performs equal to or better than the majority baseline. The majority rating was obtained by choosing the most frequent rating across a total of 29 submitted summaries, for each episode and each question. Our systems ``\textsf{\scriptsize{UCF\_NLP}}$\star$'' achieve the most gains compared to episode descriptions ``\textsf{\scriptsize{DESC}}'' on questions Q1 (people names), Q3 (main topics), Q5 (title context) and Q7 (good English). 
} \label{tab:human_eval_more} \end{minipage} \end{table*} \begin{table} \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1.15} \centering \begin{scriptsize} \begin{minipage}{\textwidth} \textsf{ \begin{tabular}{|l|rrr|c|rrr|} \hline \rowcolor{gray!15} & \multicolumn{3}{c|}{\textcolor{blue}{\textbf{Majority Wins} (\%)}} & \textbf{Equal} & \multicolumn{3}{c|}{\textcolor{red}{\textbf{System Wins} (\%)}}\\ \rowcolor{gray!15} System & -3 & -2 & -1 & 0 & +1 & +2 & +3\\ \hline \hline DESC & 0.0 & 2.2 & 8.4 & 48.0 & 22.9 & 12.8 & 7.3\\ FILT & 0.0 & 2.2 & 8.4 & 44.7 & 25.1 & 10.1 & 9.5\\ \textcolor{red}{UCF\_NLP1} & 0.6 & 1.1 & 5.0 & 43.6 & 22.9 & 19.0 & 7.8\\ \textcolor{red}{UCF\_NLP2} & 0 & 1.1 & 2.2 & 43.6 & 26.3 & 16.2 & 10.6\\ \hline \end{tabular} } \end{minipage} \vskip3\baselineskip \begin{minipage}{\textwidth} \textsf{ \begin{tabular}{|l|rrr|c|rrr|} \hline \rowcolor{gray!15} & \multicolumn{3}{c|}{\textcolor{blue}{\textbf{DESC Wins} (\%)\phantom{\textbf{xx}}}} & \textbf{Equal} & \multicolumn{3}{c|}{\textcolor{red}{\textbf{System Wins} (\%)\phantom{\textbf{x}}}}\\ \rowcolor{gray!15} System & -3 & -2 & -1 & 0 & +1 & +2 & +3\\ \hline \hline FILT & 1.1 & 1.1 & 14.0 & 65.9 & 15.1 & 2.2 & 0.6\\ \textcolor{red}{UCF\_NLP1} & 0.6 & 6.7 & 17.9 & 39.7 & 23.5 & 9.5 & 2.2\\ \textcolor{red}{UCF\_NLP2} & 0.6 & 5.0 & 15.1 & 41.9 & 24.0 & 10.6 & 2.8\\ \hline \end{tabular} } \end{minipage} \end{scriptsize} \caption{ Percentages of testing episodes (out of a total of 179) on which a system performs equal to or better/worse than the majority baseline (``Majority'', top), and episode descriptions (``\textsf{\scriptsize{DESC}}'', bottom). ``$\pm$1/2/3'' represents the gap between the numerical ratings, measured in terms of summary quality. The majority rating was obtained by choosing the most frequent quality rating across a total of 29 submitted summaries, for each episode. 
Our system ``\textsf{\scriptsize{UCF\_NLP2}}'' frequently outperforms or performs comparably to the majority baseline (96.65\%) and episode descriptions (79.33\%). } \label{tab:majority_baseline} \end{table} \begin{table} \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{1.2} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|llrrr|} \hline \rowcolor{gray!15} \textcolor{red}{\textbf{Test-1027}} & System & P (\%) & R (\%) & F (\%)\\ \hline \hline \multirow{2}{*}{ROUGE-1} & UCF\_NLP1 & 37.14 & 30.48 & 29.62\\ & UCF\_NLP2 & 36.29 & 31.39 & \textbf{29.64}\\ \hline \hline \multirow{2}{*}{ROUGE-2} & UCF\_NLP1 & 15.82 & 12.43 & \textbf{12.42}\\ & UCF\_NLP2 & 14.89 & 12.52 & 11.96\\ \hline \hline \multirow{4}{*}{ROUGE-L} & UCF\_NLP1 & 26.63 & 22.23 & \textbf{21.40}\\ & UCF\_NLP1$^\ddag$ & 26.67 & 22.05 & 21.32\\ & UCF\_NLP2 & 25.53 & 22.59 & 20.99\\ & UCF\_NLP2$^\ddag$ & 25.53 & 22.42 & 20.91\\ \hline \end{tabular} } \end{scriptsize} \caption{ ROUGE scores on the full test set containing 1,027 episodes. The episode descriptions are used as gold-standard summaries; they are called ``model'' summaries. $\ddag$ represents ROUGE scores computed by NIST. The other scores are calculated by us. There is a minor discrepancy between these scores when given the same summaries ($\pm$0.18\% for ROUGE-L). \textsf{\scriptsize{UCF\_NLP1}} marginally outperforms \textsf{\scriptsize{UCF\_NLP2}} according to ROUGE-2 and ROUGE-L. 
} \label{tab:rouge_1027} \end{table} \begin{table} \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{1.2} \centering \begin{scriptsize} \textsf{ \begin{tabular}{|llrrr|} \hline \rowcolor{gray!15} \textcolor{red}{\textbf{Test-179}} & System & P (\%) & R (\%) & F (\%)\\ \hline \hline \multirow{2}{*}{ROUGE-1} & UCF\_NLP1 & 40.20 & 31.98 & \textbf{31.70}\\ & UCF\_NLP2 & 38.05 & 32.67 & 31.18\\ \hline \hline \multirow{2}{*}{ROUGE-2} & UCF\_NLP1 & 18.52 & 13.72 & \textbf{14.03}\\ & UCF\_NLP2 & 16.23 & 13.53 & 12.95\\ \hline \hline \multirow{2}{*}{ROUGE-L} & UCF\_NLP1 & 29.18 & 23.33 & \textbf{23.04}\\ & UCF\_NLP2 & 27.06 & 23.29 & 22.03\\ \hline \end{tabular} } \end{scriptsize} \caption{ ROUGE scores on the test set of 179 episodes. The scores are calculated by us. Episode descriptions are used as gold-standard summaries; they are called ``model'' summaries. We observe that \textsf{\scriptsize{UCF\_NLP1}} outperforms \textsf{\scriptsize{UCF\_NLP2}} in terms of R-1, R-2 and R-L F-scores. We note that ROUGE scores are not necessarily a measure of summary quality, as \textsf{\scriptsize{UCF\_NLP2}} was rated higher according to human judgments. } \label{tab:rouge_179} \end{table} \subsection{Summary Postprocessing} \label{sec:length} We next describe our efforts at postprocessing the summaries generated by BART. The \textsf{\textbf{\footnotesize{length\_penalty}}} of BART penalizes the log probability of a summary (a negative value) by its length, with an exponent $p$; setting $p$=2.0 promotes the generation of longer summaries. We set \textsf{\textbf{\footnotesize{no\_repeat\_ngram\_size}}} to be 3, which stipulates that a trigram cannot occur more than once in the summary. After a grid search in the range of [35,42], we set the \textsf{\textbf{\footnotesize{min\_length}}} of a summary to be 35 subwords for \textsf{{\footnotesize{UCF\_NLP1}}} and 39 for \textsf{{\footnotesize{UCF\_NLP2}}}.
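Taken together with the maximum length and beam size reported in this section, the decoding settings map onto Hugging Face \texttt{transformers}-style generation arguments. The sketch below is our own illustration, not the released system: the \texttt{summarize} helper and its interface are assumptions, and in practice the model and tokenizer would be the BART checkpoint fine-tuned on podcast data (e.g., \texttt{transformers.BartForConditionalGeneration}).

```python
# Decoding settings reported in this section (values shown for the UCF_NLP1 run).
GEN_KWARGS = dict(
    num_beams=4,             # beam size K=4
    length_penalty=2.0,      # exponent p=2.0 promotes longer summaries
    no_repeat_ngram_size=3,  # a trigram cannot occur more than once
    min_length=35,           # 35 subwords (39 for UCF_NLP2)
    max_length=250,          # upper bound on summary length, in subwords
)

def summarize(transcript, model, tokenizer):
    """Generate an abstractive summary with a seq2seq model exposing a
    Hugging Face-style generate()/decode() interface (assumed here)."""
    inputs = tokenizer(transcript, truncation=True, max_length=1024,
                       return_tensors="pt")
    summary_ids = model.generate(inputs["input_ids"], **GEN_KWARGS)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

Note that the 1,024-token truncation of the source text matches BART's maximum encoder input length.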
The \textsf{\textbf{\footnotesize{max\_length}}} of a summary is set to 250 subwords. We use a beam size of $K$=4 for summary decoding. Although BART optimizes well with these inductive biases, it remains necessary to apply a series of heuristics to the output summaries to alleviate overfitting, which leads to the generation of template language, and to improve the presentability of the summaries. Among others, the heuristics include (a) removing the content after ``---'' (e.g., ``\emph{--- This episode is sponsored by},'' ``\emph{--- Send in a voice message}''); (b) removing URLs; (c) removing brackets and the content inside them; (d) removing any trailing incomplete sentence if the summary is excessively long ($\ge$128 tokens); (e) removing duplicate sentences that occur three times or more across different episodes. We observe that a handful of sentences appeared in summaries of different episodes. They are unrelated to the transcripts, but are possibly generated due to overfitting (e.g., \emph{The Oops podcast examines the mistakes that change the trajectory of people's lives: the bad decisions, the aftermath, the path to redemption and all things in between.}) \section{Results and Conclusion} \label{sec:future} There were 29 submitted runs for the podcast summarization task of TREC 2020. Each run contains a set of summaries generated for the full test set with 1,027 episodes. Among these, 179 episodes were selected by NIST evaluators in a random fashion to perform qualitative judgments on the summary quality. An evaluator quickly skimmed the episode, and made judgments for each summary for that episode, in a random order. Intermixed in the submitted summaries were the creator description (``\textsf{\footnotesize{DESC}}'') for an episode, and a ``filtered'' summary from Spotify (``\textsf{\footnotesize{FILT}}''). Our system runs are denoted by ``\textsf{\footnotesize{UCF\_NLP1}}'' and ``\textsf{\footnotesize{UCF\_NLP2}}''.
The latter used an additional extraction module to identify salient segments from the transcripts. The evaluation criteria used by NIST evaluators are shown in Table~\ref{tab:eval_criteria}. The summary quality rating is a number from 0-3, corresponding to four levels: Bad/Fair/Good/Excellent. Additionally, there were eight yes-or-no questions asked about the summary (Table~\ref{tab:eval_questions}). They were designed to evaluate various aspects of the summaries. An ideal summary will receive a ``yes'' for all questions but Q6. We present our results in Tables~\ref{tab:human_eval}--\ref{tab:rouge_179}. Our system ``\textsf{\footnotesize{UCF\_NLP2}}'' has achieved a quality rating of 1.559. This is an absolute increase of 0.268 (+21\%) over episode descriptions. We raise awareness of possible inconsistencies between ROUGE and human judgments of summary quality. The inconsistencies could stem from the deficiencies of ROUGE in capturing semantic meanings, or from the fact that episode descriptions are not the most appropriate reference summaries. We find content selection to remain important for podcast summarization. A summary containing only partial information about an episode may hamper the reader's understanding. Further, we caution that it is challenging to generate abstractive summaries that are fully accurate to the content of the podcast. Not only are there transcription errors, but subtle changes of meaning can occur in system abstracts. Even humans may not spot some subtle errors without a careful comparison of system abstracts and transcripts. \section*{Acknowledgments} This research was supported in part by the National Science Foundation IIS-1909603. We are grateful to Amazon for partially sponsoring the research and computation in this study through the Amazon AWS Machine Learning Research Award.
\section{Appendix} \subsection{Proof of Lemma \ref{le costly round-trip}}\label{proof le costly round-trip} \noindent For all $\mathcal{I}, \mathcal{J} \subset \{1,\dots,d\}$, let $\mathrm{ad}[Q^{(\mathcal{I},\mathcal{J})}]$ be the adjugate matrix of $Q^{(\mathcal{I},\mathcal{J})}$. \\ For $1\le j \le d$, we denote, for ease of presentation, $\mathfrak{Q}^j:=\mathrm{ad}[Q^{(j,j)}] $, and we have \begin{align} \label{eq de adjunct} \mathfrak{Q}^j_{\ls{i}{j} ,\ls{\ell}{j} } = (-1)^{\ls{i}{j} + \ls{\ell}{j}} \det Q^{(\{j,\ell\},\{j,i\} )}, \end{align} for all $\ell, i \in \{1,\dots,d\} \backslash \{j\}$. For all $1 \le i \neq j \le d$, we define \begin{align*} C_{i,j} := \left((Q^{(j,j)})^{-1} \bar{c}^{(j)}\right)_{i-{\bf 1}_{\set{i>j}} } \text{ and } C_{j,j} = 0. \end{align*} Using $\mathfrak{Q}^j$, the adjugate matrix of $Q^{(j,j)}$, we then observe, for later use, \begin{align}\label{eq expression Cij} C_{i,j} = \frac1{\mu_j} \sum_{\ell\neq j} \mathfrak{Q}^j_{\ls{i}{j},\ls{\ell}{j}}\bar{c}_{\ell}\;, \text{ for } i \neq j\;. \end{align} {\noindent \bf Proof. } 1. We first show that the first identity of Lemma~\ref{le costly round-trip} holds true. From \eqref{eq de expected cost}, we observe that \begin{align*} \bar{C}_{j,j} &= \esp{\sum_{n=0}^{\tau_j - 1} \bar{c}_{X_n} | X_0 = j} \\ &= \esp{\sum_{n=0}^{\tau_j - 1} \sum_{\ell =1}^d \bar{c}_{\ell} {\bf 1}_{\set{X_n=\ell}} | X_0 = j}. \end{align*} Thus, \begin{align*} \bar{C}_{j,j} = \sum_{\ell=1}^d \bar{c}_{\ell} \gamma^j_\ell \; \text{ with } \; \gamma^j_\ell =\esp{\sum_{n=0}^{\tau_j-1} {\bf 1}_{\set{X_n=\ell}} | X_0 = j}. \end{align*} From \cite[Theorem 1.7.5]{norris1998markov}, we know that $\gamma^j_\ell = \frac{\mu_\ell}{\mu_j}$. \\ \\ \vspace{2mm} 2.
We prove \eqref{eq Cij+Cji linked to mu c} assuming the following for the moment: for all distinct $1 \le i,j,k \le d$, \begin{align}\label{eq kind of magic} \mu_i \mathfrak{Q}^j_{i^{(j)}, k^{(j)}} + \mu_j \mathfrak{Q}^i_{j^{(i)}, k^{(i)}} = \mathfrak{Q}^i_{j^{(i)}, j^{(i)}}\mu_k = \mathfrak{Q}^j_{i^{(j)}, i^{(j)}} \mu_k. \end{align} Let $1 \le i \neq j \le d$. We have, using \eqref{eq expression Cij}, \begin{align*} C_{i,j} + C_{j,i} &= \frac{1}{\mu_j} \left(\mathfrak{Q}^j \bar{c}^{(j)}\right)_{i^{(j)}} + \frac{1}{\mu_i} \left(\mathfrak{Q}^i \bar{c}^{(i)}\right)_{j^{(i)}} \\ &= \frac{1}{\mu_j} \sum_{k \neq j} \mathfrak{Q}^j_{i^{(j)},k^{(j)}} \bar{c}_k + \frac{1}{\mu_i} \sum_{k \neq i} \mathfrak{Q}^i_{j^{(i)},k^{(i)}} \bar{c}_k \\ &= \frac{1}{\mu_j} \mathfrak{Q}^j_{i^{(j)},i^{(j)}} \bar{c}_i + \frac{1}{\mu_i} \mathfrak{Q}^i_{j^{(i)},j^{(i)}} \bar{c}_j + \sum_{k \neq i,j} \frac{\mu_i \mathfrak{Q}^j_{i^{(j)},k^{(j)}} + \mu_j \mathfrak{Q}^i_{j^{(i)},k^{(i)}}}{\mu_i \mu_j} \bar{c}_k. \end{align*} Using the previous point and the fact that $\mathfrak{Q}^j_{i^{(j)},i^{(j)}} = \mathfrak{Q}^i_{j^{(i)},j^{(i)}}$, we get \begin{align} \nonumber C_{i,j} + C_{j,i} &= \mathfrak{Q}^j_{i^{(j)},i^{(j)}} \left( \frac{\mu_i \bar{c}_i + \mu_j \bar{c}_j}{\mu_i \mu_j} + \sum_{k \neq i,j} \frac{\mu_k \bar{c}_k}{\mu_i \mu_j} \right) = \frac{\mathfrak{Q}^j_{i^{(j)},i^{(j)}}}{\mu_i \mu_j} \mu \bar{c}, \end{align} which is the result we wanted to prove.\\ 3. We now prove \eqref{eq kind of magic}.\\ Let $i,j \in \set{1,\dots, d}$ and $i \neq j$. We observe first, using \eqref{eq de adjunct}, that \begin{align*} \mathfrak{Q}^j_{\ls{i}{j} ,\ls{i}{j} } = \det Q^{(\{j,i\},\{j,i\} )} = \mathfrak{Q}^i_{\ls{j}{i} ,\ls{j}{i} }. \end{align*} For $k \in \set{1,\dots,d}\setminus \set{i,j}$, we denote by $k_{ij} \in \set{1,\dots,d-2}$ (resp. $i_{jk}$, $j_{ik}$) the index such that: \begin{align} Q_{k,\cdot} = Q^{(\{j,i\},\{j,i\} )}_{k_{ij}, \cdot} \,(\text{resp.
} Q_{i,\cdot} = Q^{(\{j,k\},\{j,i\} )}_{i_{jk}, \cdot}\,, Q_{j,\cdot} = Q^{(\{k,i\},\{j,i\} )}_{j_{ik}, \cdot} ) \, , \label{eq de kij} \end{align} namely $$k_{ij} = k - {\bf 1}_{\set{k>i}} - {\bf 1}_{\set{k>j}}, \, i_{jk} = i - {\bf 1}_{\set{i>k}} - {\bf 1}_{\set{i>j}} \text{ and } j_{ik}= j - {\bf 1}_{\set{j>k}} - {\bf 1}_{\set{j>i}}. $$ Let $\sigma_k$ be the permutation of $ \set{1,\dots,d-2}$ given by \begin{align*} \left( \begin{array}{ccccccc} 2 & \dots &k_{ij} &1 & k_{ij}+1 & \dots & d-2 \\ 1 & \dots &k_{ij}-1 & k_{ij} &k_{ij}+1 &\dots &d-2 \end{array} \right) \end{align*} which is the composition of $k_{ij}-1$ transpositions. Applying $\sigma_k^{-1}$ to the rows of $Q^{(\{j,i\},\{j,i\} )}$, we obtain a matrix, denoted simply $Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(\cdot),\cdot}$, whose first row is $Q_{k,\cdot}^{(\{j,i\},\{j,i\} )}$, and we have \begin{align*} \det Q^{(\{j,i\},\{j,i\} )} = (-1)^{k_{ij}-1} \det Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(\cdot),\cdot}. \end{align*} Since $\mu Q = 0$, we have $Q_{k,\cdot} = - \sum_{\ell \neq k}\frac{\mu_\ell}{\mu_k}Q_{\ell,\cdot}$ and then, \begin{align*} \det Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(\cdot),\cdot} = - \sum_{\ell \neq k} \frac{\mu_\ell}{\mu_k} \left | \begin{array}{c} Q_{\ell,\cdot} \vspace{2pt} \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(2),\cdot} \\ \vdots \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(d-2),\cdot} \end{array} \right | = - \frac{\mu_i}{\mu_k} \left | \begin{array}{c} Q_{i,\cdot} \vspace{2pt} \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(2),\cdot} \\ \vdots \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(d-2),\cdot} \end{array} \right | - \frac{\mu_j}{\mu_k} \left | \begin{array}{c} Q_{j,\cdot} \vspace{2pt} \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(2),\cdot} \\ \vdots \\ Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(d-2),\cdot} \end{array} \right |. \end{align*} Let $\sigma_i$ (resp. $\sigma_j$) be constructed as $\sigma_k$ but with $i_{jk}$ (resp.
$j_{ik}$) instead of $k_{ij}$; then one observes \begin{align*} \det Q^{(\{j,i\},\{j,i\} )}_{\sigma_k(\cdot),\cdot} &= - \frac{\mu_i}{\mu_k} \det Q^{(\{j,k\},\{j,i\} )}_{\sigma_i(\cdot),\cdot} - \frac{\mu_j}{\mu_k} \det Q^{(\{i,k\},\{j,i\} )}_{\sigma_j(\cdot),\cdot} \\ &= - \frac{\mu_i}{\mu_k} (-1)^{i_{jk}-1} \det Q^{(\{j,k\},\{j,i\} )} - \frac{\mu_j}{\mu_k} (-1)^{j_{ik}-1} \det Q^{(\{i,k\},\{j,i\} )}. \end{align*} We compute that \begin{align*} (-1)^{i_{jk}-1+k_{ij}-1+i^{(j)}+k^{(j)}}=-1 \text{ and } (-1)^{j_{ik}-1+k_{ij}-1+j^{(i)}+k^{(i)}}=-1, \end{align*} leading to \begin{align*} \mu_k \mathfrak{Q}^j_{\ls{i}{j} ,\ls{i}{j} } = \mu_{i} \mathfrak{Q}^j_{\ls{i}{j} ,\ls{k}{j} } + \mu_{j} \mathfrak{Q}^i_{\ls{j}{i} ,\ls{k}{i} }. \end{align*} \hbox{ }\hfill$\Box$ \subsection{Enlargement of a filtration along a sequence of increasing stopping times} \label{mrt} We fix a strategy $\phi \in \Phi$ and we study the filtrations $\mathbb{F}^i, i \ge 0$, and $\mathbb{F}^\infty$, which are constructed in Subsection~\ref{problem}.\\ For each $n \ge 0$, we define a new filtration $\mathbb{G}^n = (\mathcal{G}^n_t)_{t \ge 0}$ by the relations $\mathcal{G}^0_t = \mathcal{F}^0_t$ and, for $n \ge 1$, $\mathcal{G}^n_t = \mathcal{F}^0_t \vee \sigma(X_i, i \le n) = \mathcal{G}^{n-1}_t \vee \sigma(X_n)$. \subsubsection{Representation Theorems} The goal of this section is to derive Integral Representation Theorems for the filtrations $\mathbb{F}^i, i \ge 0$, and $\mathbb{F}^\infty$. We first recall, see \cite{AJ17}: \begin{Theorem}[Lévy] Let $(\Omega,\mathcal{F},\mathbb{F},\P)$ be a filtered probability space, with $\mathbb{F}$ not necessarily right-continuous. Let $\xi \in \mathcal{F}$ and let $X$ be an $\mathbb{F}$-supermartingale. \begin{enumerate} \item We have $\espcond{\xi}{\mathcal{F}_t} \to \espcond{\xi}{\mathcal{F}_\infty}$ a.s. and in $L^1$, as $t \to \infty$. \item If $t_n$ decreases to $t$, we have $X_{t_n} \to X_{t^+}$ a.s. and in $L^1$ as $n \to \infty$.
\end{enumerate} In particular, if $X_t = \espcond{\xi}{\mathcal{F}_t}$, we get that $\espcond{\xi}{\mathcal{F}_{t_n}} \to \espcond{\xi}{\mathcal{F}_{t^+}}$ a.s. and in $L^1$ as $n \to \infty$, for $t_n$ decreasing to $t$. \end{Theorem} We now recall an important notion of coincidence of filtrations between two stopping times, introduced in \cite{AJ17}. This will be useful for our purposes in the sequel.\\ Let $S, T$ be two random times, which are stopping times for two filtrations $\H^1 = (\mathcal{H}^1_t)_{t \ge 0}$ and $\H^2 = (\mathcal{H}^2_t)_{t \ge 0}$. We set \begin{align*} \llbracket S, T \llbracket \, := \left\{ (\omega, s) \in \Omega \times \mathbb{R}_+ : S(\omega) \le s < T(\omega) \right\}. \end{align*} We say that $\H^1$ and $\H^2$ \emph{coincide} on $\llbracket S, T \llbracket$ if \begin{enumerate} \item for each $t \ge 0$ and each $\mathcal{H}^1_t$-measurable variable $\xi$, there exists an $\mathcal{H}^2_t$-measurable variable $\chi$ such that $\xi 1_{S \le t < T} = \chi 1_{S \le t < T}$, \item for each $t \ge 0$ and each $\mathcal{H}^2_t$-measurable variable $\chi$, there exists an $\mathcal{H}^1_t$-measurable variable $\xi$ such that $\chi 1_{S \le t < T} = \xi 1_{S \le t < T}$. \end{enumerate} We now study the right-continuity of the filtration $\mathbb{G}^n$ for some $n \ge 0$. Using its specific structure, it is easy to compute conditional expectations. Lévy's theorem then allows us to obtain the right-continuity. \begin{Lemma} Let $n \ge 0$. \begin{enumerate} \item If $\xi \in L^1(\mathcal{F}^0_\infty)$ and $\xi' \in L^1(\sigma(X_i, 1 \le i \le n))$, then for $t \ge 0$, we have $\espcond{\xi\xi'}{\mathcal{G}^n_t} = \espcond{\xi}{\mathcal{F}^0_t}\xi'$. \item $\mathbb{G}^n$ is right-continuous.
\end{enumerate} \end{Lemma} \begin{proof} \begin{enumerate} \item If $F \in \mathcal{F}^0_t$ and $F' \in \sigma(X_i, 1 \le i \le n)$, we have, by independence, \begin{align*} \nonumber \esp{\xi\xi' 1_{F\cap F'}} &= \esp{\xi 1_F}\esp{\xi' 1_{F'}} \\ \nonumber &= \esp{\espcond{\xi}{\mathcal{F}^0_t}1_F}{\esp{\xi' 1_{F'}}} \\ &= \esp{\xi'\espcond{\xi}{\mathcal{F}^0_t} 1_{F \cap F'}}. \end{align*} Since $\{F \cap F'| F \in \mathcal{F}^0_t, F' \in \sigma(X_i, 1 \le i \le n)\}$ is a $\pi$-system generating $\mathcal{G}^n_t$, the result follows by a monotone class argument. \item Let $t \ge 0$ and $t_m$ decreasing to $t$. We have, using Lévy's Theorem, the previous point and the right-continuity of $\mathbb{F}^0$, \begin{align*} \nonumber \espcond{\xi\xi'}{\mathcal{G}^n_{t^+}} &= \lim_m \espcond{\xi\xi'}{\mathcal{G}^n_{t_m}} = \lim_m \xi'\espcond{\xi}{\mathcal{F}^0_{t_m}} \\ \nonumber &= \xi'\espcond{\xi}{\mathcal{F}^0_t} = \espcond{\xi\xi'}{\mathcal{G}^n_t}. \end{align*} By a monotone class argument, we have $\espcond{\xi}{\mathcal{G}^n_{t^+}} = \espcond{\xi}{\mathcal{G}^n_t}$ for all bounded $\mathcal{G}^n_\infty$-measurable $\xi$, hence it follows the right-continuity of $\mathbb{G}^n$. \end{enumerate} \hbox{ }\hfill$\Box$ \end{proof} Using the previous Lemma, we show how to compute conditional expectations in $\mathbb{F}$ and $\mathbb{F}^n$ for all $n \ge 0$, and show that these filtrations are right-continuous. \begin{Proposition} \begin{enumerate} \item For all $m \ge n \ge 0$, $\mathbb{F}^{n}$, $\mathbb{F}^{m}$ and $\mathbb{F}^\infty$ coincide on $\llbracket 0, \tau_{n+1} \llbracket$.\\ For all $n \ge 0$, $\mathbb{F}^{n}$ and $\mathbb{G}^{n}$ coincide on $\llbracket \tau_n, +\infty \llbracket$. \item For all $n \ge 0$ and $t \ge 0$, we have, for $\xi \in L^1(\mathcal{F}^{n+1}_\infty)$: \begin{align} \label{dec-espcond} \espcond{\xi}{\mathcal{F}^{n+1}_t} = \espcond{\xi}{\mathcal{F}^n_t} 1_{t < \tau_{n+1}} + \espcond{\xi}{\mathcal{G}^{n+1}_t} 1_{\tau_{n+1} \le t}. 
\end{align} Let $t \ge 0$ be such that $\sum_{n=0}^{+\infty} \P(\tau_n \le t < \tau_{n+1}) = 1$. Then, for $\xi \in L^1(\mathcal{F}^\infty_\infty)$, \begin{align*} \espcond{\xi}{\mathcal{F}^\infty_t} = \sum_{n = 0}^{+\infty} \espcond{\xi}{\mathcal{F}^n_t} 1_{\tau_n \le t < \tau_{n+1}}. \end{align*} \item For all $n \ge 0$, $\mathbb{F}^n$ is right-continuous. \item The filtration $\mathbb{F}^\infty$ is right-continuous on $[0,T]$. \end{enumerate} \end{Proposition} \begin{proof} \begin{enumerate} \item Let $t \ge 0$ be fixed. If $\xi$ is $\mathcal{F}^n_t$-measurable, since $\mathcal{F}^n_t \subset \mathcal{F}^m_t \subset \mathcal{F}^\infty_t$ for $m \ge n$, taking $\chi = \xi$ gives a $\mathcal{F}^m_t$-measurable (resp. $\mathcal{F}^\infty_t$-measurable) random variable such that $\xi 1_{t < \tau_{n+1}} = \chi 1_{t < \tau_{n+1}}$.\\ Conversely, if $\chi$ is a $\mathcal{F}^m_t$-measurable random variable, then \begin{align*} \chi = f(\tilde \chi, X_1 1_{\tau_1 \le t}, \dots, X_m 1_{\tau_m \le t}), \end{align*} for a measurable $f$ and a $\mathcal{F}^0_t$-measurable variable $\tilde \chi$. Since $X_k 1_{\tau_k \le t} = 0$ on $\{t < \tau_{n+1}\}$ when $k \ge n+1$, one gets: \begin{align*} \nonumber \chi 1_{t < \tau_{n+1}} &= f(\tilde \chi, X_1 1_{\tau_1 \le t}, \dots, X_n 1_{\tau_n \le t}, 0, \dots, 0) 1_{t < \tau_{n+1}} \\ &=: \xi 1_{t < \tau_{n+1}}, \end{align*} where $\xi$ is $\mathcal{F}^n_t$-measurable.\\ Last, let $\chi$ be a $\mathcal{F}^\infty_t$-measurable variable. Then $\chi = f(\tilde \chi, X_{i_1} 1_{\tau_{i_1} \le t}, \dots, X_{i_N} 1_{\tau_{i_N} \le t})$ for some random $N \ge 0$ and $1 \le i_1 \le \dots \le i_N$, and the same argument applies. The proof of the second claim is straightforward, as one remarks that for $t \ge 0$ and $n \ge 1$, the equality $f(\xi, X_1, \dots, X_n) 1_{\tau_n \le t} = f(\xi, X_1 1_{\tau_1 \le t}, \dots, X_n 1_{\tau_n \le t}) 1_{\tau_n \le t}$ holds, since the random times $\tau_i, i \ge 0$ are nondecreasing.
\item Let $n \ge 0$ and $\xi \in L^1(\mathcal{F}^{n+1}_\infty)$.\\ Since $\mathbb{F}^n$ and $\mathbb{F}^{n+1}$ coincide on $\llbracket 0, \tau_{n+1} \llbracket$, we have $\espcond{\xi}{\mathcal{F}^{n+1}_t} 1_{t < \tau_{n+1}} = \tilde \xi 1_{t < \tau_{n+1}}$ for a $\mathcal{F}^n_t$-measurable variable $\tilde \xi$. In particular, the left hand side is also $\mathcal{F}^n_t$-measurable. Hence $\espcond{\xi}{\mathcal{F}^{n+1}_t} 1_{t < \tau_{n+1}} = \espcond{\espcond{\xi}{\mathcal{F}^{n+1}_t} 1_{t < \tau_{n+1}}}{\mathcal{F}^n_t} = \espcond{\xi}{\mathcal{F}^n_t} 1_{t < \tau_{n+1}}$.\\ Similarly, since $\mathbb{F}^{n+1}$ and $\mathbb{G}^{n+1}$ coincide on $\llbracket \tau_{n+1}, +\infty \llbracket$, we have $\espcond{\xi}{\mathcal{G}^{n+1}_t}1_{\tau_{n+1} \le t} = \hat \xi 1_{\tau_{n+1} \le t}$ for a $\mathcal{F}^{n+1}_t$-measurable variable $\hat \xi$. In particular, the left hand side is $\mathcal{F}^{n+1}_t$-measurable. Hence $\espcond{\xi}{\mathcal{G}^{n+1}_t}1_{\tau_{n+1} \le t} = \espcond{\espcond{\xi}{\mathcal{G}^{n+1}_t}1_{\tau_{n+1} \le t}}{\mathcal{F}^{n+1}_t} = \espcond{\xi}{\mathcal{F}^{n+1}_t} 1_{\tau_{n+1} \le t}$.\\ Let $t \ge 0$ be such that $\sum_n \P(\tau_n \le t < \tau_{n+1}) = 1$. We have, since $\mathbb{F}^\infty$ and $\mathbb{F}^n$ coincide on $\llbracket 0, \tau_{n+1} \llbracket$, using the same arguments as before, \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^\infty_t} &= \sum_n \espcond{\xi}{\mathcal{F}^\infty_t} 1_{\tau_n \le t < \tau_{n+1}} = \sum_n \espcond{\xi}{\mathcal{F}^n_t} 1_{\tau_n \le t < \tau_{n+1}}. \end{align*} \item We prove by induction that $\mathbb{F}^n$ is right-continuous. Since $\mathbb{F}^0$ is the augmented Brownian filtration, the result is true for $n = 0$.\\ Assume now that $\mathbb{F}^{n-1}, n \ge 1,$ is right-continuous. Let $t \ge 0, \xi \in L^1(\mathcal{F}^n_\infty)$ and $(t_m)_{m \ge 0}$ such that $t_m \ge t_{m+1}$ and $\lim_m t_m = t$.
We have, using the previous point and the right-continuity of $\mathbb{F}^{n-1}$ and $\mathbb{G}^n$: \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^n_{t^+}} &= \lim_m \espcond{\xi}{\mathcal{F}^n_{t_m}} \\ \nonumber &= \lim_m \espcond{\xi}{\mathcal{F}^n_{t_m}} 1_{t_m < \tau_n} + \espcond{\xi}{\mathcal{F}^n_{t_m}} 1_{\tau_n \le t_m} \\ \nonumber &= \lim_m \espcond{\xi}{\mathcal{F}^{n-1}_{t_m}} 1_{t_m < \tau_n} + \espcond{\xi}{\mathcal{G}^n_{t_m}} 1_{\tau_n \le t_m} \\ \nonumber &= \espcond{\xi}{\mathcal{F}^{n-1}_t} 1_{t < \tau_n} + \espcond{\xi}{\mathcal{G}^n_t} 1_{\tau_n \le t} \\ &= \espcond{\xi}{\mathcal{F}^n_t}. \end{align*} \item Let $t < T, \xi \in L^1(\mathcal{F}^\infty_\infty), (t_m)_{m \ge 0}$ such that $T > t_m > t_{m+1}$ and $\lim_m t_m = t$. We have, by Lévy's Theorem and the second point, \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^\infty_{t^+}} &= \lim_m \espcond{\xi}{\mathcal{F}^\infty_{t_m}} = \lim_m \sum_{n=0}^{+\infty} \espcond{\xi}{\mathcal{F}^\infty_{t_m}} 1_{\tau_n \le t_m < \tau_{n+1}} \\ &= \lim_m \sum_{n=0}^{+\infty} \espcond{\xi}{\mathcal{F}^n_{t_m}} 1_{\tau_n \le t_m < \tau_{n+1}}. \end{align*} Fix $\omega \in \Omega$. We have $t_m < T < \tau_{N(\omega)+1}(\omega)$, so that only the terms with $n \le N(\omega)$ contribute, hence \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^\infty_{t^+}}\!\!(\omega) &= \lim_m \sum_{n=0}^{+\infty} \espcond{\xi}{\mathcal{F}^n_{t_m}}\!\!(\omega) 1_{\tau_n(\omega) \le t_m < \tau_{n+1}(\omega)} \\ \nonumber &= \lim_m \sum_{n=0}^{N(\omega)+1} \espcond{\xi}{\mathcal{F}^n_{t_m}}\!\!(\omega) 1_{\tau_n(\omega) \le t_m < \tau_{n+1}(\omega)} \\ \nonumber &= \sum_{n=0}^{N(\omega)+1} \lim_m \espcond{\xi}{\mathcal{F}^n_{t_m}}\!\!(\omega) 1_{\tau_n(\omega) \le t_m < \tau_{n+1}(\omega)} \\ &= \sum_{n=0}^{+\infty} \lim_m \espcond{\xi}{\mathcal{F}^n_{t_m}}\!\!(\omega) 1_{\tau_n(\omega) \le t_m < \tau_{n+1}(\omega)}.
\end{align*} Finally, using the right-continuity of each $\mathbb{F}^n$, we get \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^\infty_{t^+}} &= \sum_{n=0}^{+\infty} \lim_m \espcond{\xi}{\mathcal{F}^n_{t_m}} 1_{\tau_n \le t_m < \tau_{n+1}} = \sum_{n=0}^{+\infty} \espcond{\xi}{\mathcal{F}^n_t} 1_{\tau_n \le t < \tau_{n+1}} = \espcond{\xi}{\mathcal{F}^\infty_t}, \end{align*} which proves that $\mathbb{F}^\infty$ is right-continuous on $[0,T]$. \end{enumerate} \hbox{ }\hfill$\Box$ \end{proof} \begin{Lemma} Let $n \ge 0$ and $\xi \in L^1(\mathcal{F}^n_\infty)$. Let $\sigma$ be a $\mathbb{F}^n$-stopping time. We have: \begin{align} \espcond{\xi}{\mathcal{F}^{n+1}_\sigma} = \espcond{\xi}{\mathcal{F}^n_\sigma}. \end{align} \end{Lemma} \begin{proof} Assume first that $\sigma = s$ is deterministic.\\ Let $\tilde\xi = \psi(\chi, X_{n+1} 1_{\tau_{n+1} \le s})$ be a $\mathcal{F}^{n+1}_s$-measurable bounded variable, where $\chi$ is $\mathcal{F}^n_s$-measurable. We need to show \begin{align*} \esp{\xi\tilde\xi} = \esp{\espcond{\xi}{\mathcal{F}^n_s}\tilde\xi}. \end{align*} We have, with $\hat\psi(y) := \int \psi(y,x) \P_{X_{n+1}}(\mathrm{d} x)$, \begin{align*} \nonumber \esp{\espcond{\xi}{\mathcal{F}^n_s}\psi(\chi, X_{n+1} 1_{\tau_{n+1} \le s})} &= \esp{\espcond{\xi}{\mathcal{F}^n_s}\psi(\chi, 0) 1_{s < \tau_{n+1}}} + \esp{\espcond{\xi}{\mathcal{F}^n_s}\psi(\chi, X_{n+1}) 1_{\tau_{n+1} \le s}} \\ &= \esp{\xi \psi(\chi, 0) 1_{s < \tau_{n+1}}} + \esp{\xi \hat\psi(\chi)1_{\tau_{n+1} \le s}}, \end{align*} and the same computation with $\xi$ instead of $\espcond{\xi}{\mathcal{F}^n_s}$ gives the same result.\\ Let now $\sigma$ be a general $\mathbb{F}^n$-stopping time, and let $\xi_s = \espcond{\xi}{\mathcal{F}^n_s} = \espcond{\xi}{\mathcal{F}^{n+1}_s}$. Since $\mathbb{F}^n$ (or $\mathbb{F}^{n+1}$) is right-continuous, there exists a right-continuous modification of $(\xi_s)_{s \ge 0}$.
Applying Doob's optional sampling theorem twice gives $\xi_\sigma = \espcond{\xi}{\mathcal{F}^n_\sigma}$ and $\xi_\sigma = \espcond{\xi}{\mathcal{F}^{n+1}_\sigma}$, hence we get the result. \hbox{ }\hfill$\Box$ \end{proof} We are now in a position to prove an Integral Representation Theorem in the filtrations $\mathbb{F}^n$, for all $n \ge 0$. \begin{Proposition} Let $n \ge 0$ and $\xi \in L^2(\mathcal{F}^n_T)$. Then there exists a $\mathbb{F}^n$-predictable process $\psi$ such that \begin{align*} \xi = \espcond{\xi}{\mathcal{F}^n_{T \wedge \tau_n}} + \int_{T \wedge \tau_n}^T \psi_s \mathrm{d} W_s. \end{align*} \end{Proposition} \begin{proof} We prove the statement by induction on $n \ge 0$, following ideas from \cite{A00}. The case $n = 0$ is the usual Martingale Representation Theorem in the augmented Brownian filtration $\mathbb{F}^0$.\\ Assume now that the statement is true for all $\xi \in L^2(\mathcal{F}^{n-1}_T)\, (n \ge 1)$. Let $\xi \in L^2(\mathcal{F}^n_T)$.\\ Since $\mathcal{F}^n_T = \mathcal{F}^{n-1}_T \vee \sigma(X_n 1_{\tau_n \le T})$, we get that $\xi = \lim_{m \to \infty} \xi_m$ in $L^2(\mathcal{F}^n_T)$, with $\xi_m = \sum_{i=1}^{l_m} \chi^i_m \zeta^i_m$ and $(\chi^i_m,\zeta^i_m) \in L^\infty(\mathcal{F}^{n-1}_T) \times L^\infty(\sigma(X_n 1_{\tau_n \le T}))$ for all $m \ge 0$ and $1 \le i \le l_m$.\\ By induction, there exist $\mathbb{F}^{n-1}$-predictable processes $\psi^{i,m}$ such that $\chi^i_m = \espcond{\chi^i_m}{\mathcal{F}^{n-1}_{T \wedge \tau_{n-1}}} + \int_{T \wedge \tau_{n-1}}^T \psi^{i,m}_s \mathrm{d} W_s$. Since $\tau_n$ is a $\mathbb{F}^{n-1}$-stopping time with $\tau_n \ge \tau_{n-1}$, we get: \begin{align*} \chi^i_m = \espcond{\chi^i_m}{\mathcal{F}^{n-1}_{T \wedge \tau_n}} + \int_{T \wedge \tau_n}^T \psi^{i,m}_s \mathrm{d} W_s.
\end{align*} Since $\zeta^i_m \in L^\infty(\sigma(X_n 1_{\tau_n \le T})) \subset L^2(\mathcal{F}^n_{T \wedge \tau_n})$, we get \begin{align*} \zeta^i_m \int_{T \wedge \tau_n}^T \psi^{i,m}_s \mathrm{d} W_s = \int_{T \wedge \tau_n}^T \zeta^i_m \psi^{i,m}_s \mathrm{d} W_s. \end{align*} In addition, since $\chi^i_m$ is $\mathcal{F}^{n-1}_T$-measurable and $\zeta^i_m \in L^2(\mathcal{F}^n_{T \wedge \tau_n})$, we get, by the previous lemma, \begin{align*} \zeta^i_m \espcond{\chi^i_m}{\mathcal{F}^{n-1}_{T \wedge \tau_n}} = \zeta^i_m \espcond{\chi^i_m}{\mathcal{F}^n_{T \wedge \tau_n}} = \espcond{\chi^i_m \zeta^i_m}{\mathcal{F}^n_{T \wedge \tau_n}}. \end{align*} Summing over $1 \le i \le l_m$ gives: \begin{align*} \nonumber \xi_m &= \sum_{i=1}^{l_m} \chi^i_m \zeta^i_m \\ \nonumber &= \sum_{i=1}^{l_m} \espcond{\chi^i_m \zeta^i_m}{\mathcal{F}^n_{T \wedge \tau_n}} + \sum_{i=1}^{l_m} \int_{T \wedge \tau_n}^T \zeta^i_m \psi^{i,m}_s \mathrm{d} W_s \\ &= \espcond{\xi_m}{\mathcal{F}^n_{T \wedge \tau_n}} + \int_{T \wedge \tau_n}^T \psi^m_s \mathrm{d} W_s, \end{align*} where $\psi^m_s := \sum_{i=1}^{l_m} \zeta^i_m \psi^{i,m}_s$.\\ Finally, since $\xi_m \to \xi$ in $L^2(\mathcal{F}^n_T)$, we get that $\espcond{\xi_m}{\mathcal{F}^n_{T \wedge \tau_n}} \to \espcond{\xi}{\mathcal{F}^n_{T \wedge \tau_n}}$ in $L^2(\mathcal{F}^n_T)$, hence $\int_{T \wedge \tau_n}^T \psi^m_s \mathrm{d} W_s$ converges in $L^2$ to a limit $\int_{T \wedge \tau_n}^T \psi_s \mathrm{d} W_s$ for a $\mathbb{F}^n$-predictable process $\psi$. \hbox{ }\hfill$\Box$ \end{proof} \begin{Theorem} Let $0 \le T \le +\infty$ and $\xi \in L^2(\mathcal{F}^n_T)$.
For all $0 \le k \le n$, there exist $\mathbb{F}^k$-predictable processes $\psi^k$ such that: \begin{align*} \nonumber \xi &= \esp{\xi} + \sum_{k = 0}^{n-1} \int_{T \wedge \tau_k}^{T \wedge \tau_{k+1}} \psi^k_s \mathrm{d} W_s + \int_{T \wedge \tau_n}^T \psi^n_s \mathrm{d} W_s\\ \nonumber &+ \sum_{k = 0}^{n-1} \left(\espcond{\xi}{\mathcal{F}^{k+1}_{T \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{T \wedge \tau_{k+1}}}\right)\\ &= \esp{\xi} + \int_0^T \Psi^n_s \mathrm{d} W_s + \sum_{k = 0}^{n-1} \left(\espcond{\xi}{\mathcal{F}^{k+1}_{T \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{T \wedge \tau_{k+1}}}\right), \end{align*} with $\Psi^n_t := \sum_{k = 0}^{n-1} \psi^k_t 1_{T \wedge \tau_k < t \le T \wedge \tau_{k+1}} + \psi^n_t 1_{T \wedge \tau_n < t \le T}$. \end{Theorem} \begin{proof} This is an immediate consequence of the previous Proposition, applied iteratively. \hbox{ }\hfill$\Box$ \end{proof} Last, we extend this theorem to obtain an Integral Representation Theorem in $\mathbb{F}^\infty$.\\ We now fix $\xi \in L^2(\mathcal{F}^\infty_T)$ and consider the filtration $\mathbb{A} = (\mathcal{A}_n)_{n \in \mathbb{N}}$ defined by $\mathcal{A}_n := \mathcal{F}^n_T$. We have $\mathcal{A}_\infty = \bigvee_n \mathcal{A}_n = \mathcal{F}^\infty_T$. By Lévy's Theorem, we get \begin{align} \label{lim1} \espcond{\xi}{\mathcal{F}^n_T} = \espcond{\xi}{\mathcal{A}_n} \to \espcond{\xi}{\mathcal{A}_\infty} = \xi,\mbox{ a.s.} \end{align} For all $n \ge 0$, since $\espcond{\xi}{\mathcal{F}^n_T} \in L^2(\mathcal{F}^n_T)$, we can write: \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^n_T} = &\esp{\xi} + \sum_{k=0}^{n-1} \int_{T \wedge \tau_k}^{T \wedge \tau_{k+1}} \psi^{n,k}_s \mathrm{d} W_s + \int_{T \wedge \tau_n}^T \psi^{n,n}_s \mathrm{d} W_s \\ &+ \sum_{k=0}^{n-1} \left( \espcond{\xi}{\mathcal{F}^{k+1}_{T \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{T \wedge \tau_{k+1}}} \right). \end{align*} \begin{Lemma} We have $\psi^{n,k} = \psi^{k,k}$ on $[T \wedge \tau_k, T \wedge \tau_{k+1})$, for all $n \ge k$.
\end{Lemma} \begin{proof} It follows easily by induction, comparing $\espcond{\xi}{\mathcal{F}^k_T}$ and $\espcond{\espcond{\xi}{\mathcal{F}^n_T}}{\mathcal{F}^k_T}$ and using Itô's isometry.\\ \hbox{ }\hfill$\Box$ \end{proof} For all $n \ge 0$, we define $\psi^n := \psi^{n,n}$. Thus we have, for all $n \ge 0$, \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^n_T} = &\esp{\xi} + \sum_{k=0}^{n-1} \int_{T \wedge \tau_k}^{T \wedge \tau_{k+1}} \psi^k_s \mathrm{d} W_s + \int_{T \wedge \tau_n}^T \psi^n_s \mathrm{d} W_s \\ &+ \sum_{k=0}^{n-1} \left( \espcond{\xi}{\mathcal{F}^{k+1}_{T \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{T \wedge \tau_{k+1}}} \right). \end{align*} We set, for $0 \le s \le T$, \begin{align*} \Psi_s &= \sum_{k = 0}^{+\infty} \psi^k_s 1_{T \wedge \tau_k \le s < T \wedge \tau_{k+1}},\\ \Psi^n_s &= \Psi_s 1_{s \le T \wedge \tau_{n+1}} + \psi^n_s 1_{T \wedge \tau_{n+1} < s}, \mbox{ and}\\ \Delta^k_s &:= \espcond{\xi}{\mathcal{F}^{k+1}_{s \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{s \wedge \tau_{k+1}}}, \end{align*} so that \begin{align*} \espcond{\xi}{\mathcal{F}^n_T} = \esp{\xi} + \int_0^T \Psi^n_s \mathrm{d} W_s + \sum_{k=0}^{n-1} \Delta^k_T. \end{align*} \begin{Theorem}[Integral Representation Theorem for $\mathbb{F}^\infty$] For $\xi \in L^2(\mathcal{F}^\infty_T)$, we have \begin{align*} \xi = \esp{\xi} + \int_0^T \Psi_s \mathrm{d} W_s + \sum_{k = 0}^{+\infty} \Delta^k_T. \end{align*} \end{Theorem} \begin{proof} By definition of $N = N^\phi_T$, we have $T < \tau_{n+1}$ on $\{n \ge N\}$, see Section \ref{game}. Thus, \begin{align*} \nonumber 1_{N \le n} \int_0^T \Psi^n_s \mathrm{d} W_s &= \left( \int_0^{T \wedge \tau_{n+1}} \Psi_s \mathrm{d} W_s + \int_{T \wedge \tau_{n+1}}^T \psi^n_s \mathrm{d} W_s\right) 1_{N \le n} = 1_{N \le n} \int_0^T \Psi_s \mathrm{d} W_s.
\end{align*} Moreover, if $k \ge n$, we have, since $T \wedge \tau_{k+1} = T$ on $\{N \le n\}$, \begin{align*} \nonumber \Delta^k_T 1_{N \le n} &= \left(\espcond{\xi}{\mathcal{F}^{k+1}_{T \wedge \tau_{k+1}}} - \espcond{\xi}{\mathcal{F}^k_{T \wedge\tau_{k+1}}}\right) 1_{N \le n} \\ &= \left(\espcond{\xi}{\mathcal{F}^{k+1}_T} - \espcond{\xi}{\mathcal{F}^k_T}\right) 1_{N \le n}. \end{align*} Applying \eqref{dec-espcond} to $\chi = \espcond{\xi}{\mathcal{F}^{k+1}_T}$, we get \begin{align*} \chi = \espcond{\chi}{\mathcal{F}^{k+1}_T} &= \espcond{\chi}{\mathcal{F}^k_T} 1_{T < \tau_{k+1}} + \espcond{\chi}{\mathcal{G}^{k+1}_T} 1_{\tau_{k+1} \le T}. \end{align*} Since $T < \tau_{n+1} \le \tau_{k+1}$ on $\{N \le n\}$, we finally obtain \begin{align*} \espcond{\xi}{\mathcal{F}^{k+1}_T} 1_{N \le n} = \chi 1_{N \le n} = \espcond{\chi}{\mathcal{F}^k_T} 1_{N \le n} = \espcond{\xi}{\mathcal{F}^k_T} 1_{N \le n}, \end{align*} which gives $\Delta^k_T 1_{N \le n} = 0$. Thus: \begin{align*} \nonumber \espcond{\xi}{\mathcal{F}^n_T}1_{N \le n} &= \left(\esp{\xi} + \int_0^T \Psi^n_s \mathrm{d} W_s + \sum_{k=0}^{n-1} \Delta^k_T\right) 1_{N \le n} \\ &= \left(\esp{\xi} + \int_0^T \Psi_s \mathrm{d} W_s + \sum_{k=0}^{+\infty} \Delta^k_T\right) 1_{N \le n}. \end{align*} Since $N = N^\phi_T$ is a.s. finite, $\phi$ being an admissible strategy (see Section \ref{game}), we have $1_{N \le n} \to 1$ a.s. as $n \to \infty$; sending $n$ to $+\infty$ and recalling \eqref{lim1}, we get \begin{align*} \xi = \esp{\xi} + \int_0^T \Psi_s \mathrm{d} W_s + \sum_{k=0}^{+\infty} \Delta^k_T. \end{align*} \hbox{ }\hfill$\Box$ \end{proof} \begin{Remark}We have: \begin{align} \left[\int_0^\cdot \Psi_s \mathrm{d} W_s, \sum_{k=0}^{+\infty} \Delta^k \right]_t &= 0, \\ \left[\int_0^\cdot \Psi_s \mathrm{d} W_s\right]_t &= \int_0^t \Psi^2_s \mathrm{d} s, \\ \left[\sum_{k=0}^{+\infty} \Delta^k\right]_t &= \sum_{\tau_{k+1} \le t} |\Delta^k_t|^2. \end{align} In particular, the martingales $\int_0^\cdot \Psi_s dW_s$ and $\sum_k \Delta^k$ are orthogonal.
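This orthogonality can be illustrated, in the weaker form of a vanishing product expectation, by an exact computation on a finite two-period toy model (a Python sketch; `eps1, eps2, delta` and the integrands are illustrative stand-ins for the Brownian increments and the jump part, not the objects defined above):

```python
from fractions import Fraction
from itertools import product

# Two-period toy model on a finite probability space: eps1, eps2 play the
# role of Brownian increments, delta that of an independent jump mark.
# I mimics the stochastic-integral part with a predictable integrand
# (psi1 may depend on eps1 but not on eps2); J mimics the jump part.
half = Fraction(1, 2)
psi0 = Fraction(3)

def psi1(e1):
    # predictable: a function of the past increment only
    return Fraction(1) + e1

E_IJ = Fraction(0)  # expectation of the product I * J
for e1, e2, d in product((-1, 1), repeat=3):
    I = psi0 * e1 + psi1(e1) * e2
    J = d
    E_IJ += half ** 3 * I * J

assert E_IJ == 0  # the two martingale parts have zero covariance
```

The computation is exact (rational arithmetic), and the cancellation comes precisely from the independence of the mark from the increments.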
\end{Remark} \subsubsection{Backward Stochastic Differential Equations} \label{sec BSDE} We now consider Backward Stochastic Differential Equations. Let $\mathbb{F}$ be one of the filtrations $\mathbb{F}^i, i \ge 0$ or $\mathbb{F}^\infty$. Let $\xi$ be a $\mathcal{F}_T$-measurable variable and $f : \Omega \times[0,T] \times \mathbb{R}^d \times \mathbb{R}^{d \times \kappa} \to \mathbb{R}^d$. We assume here that $\xi$ and $f$ are standard parameters \cite{EKPQ97}: \begin{itemize} \item $\xi \in L^2(\mathcal{F}_T)$, \item $f(\cdot, 0, 0) \in \H^2_d(\mathbb{F} )$, \item There exists $C > 0$ such that \begin{align*} |f(t,y_1,z_1) - f(t, y_2, z_2)| \le C\left(|y_1-y_2| + |z_1-z_2|\right). \end{align*} \end{itemize} Under these hypotheses, since $\mathbb{F}$ is right-continuous, one can prove (\cite{EKPQ97}, Theorem 5.1): \begin{Theorem} \label{ex-uni-bsde} There exists a unique solution $(Y,Z,M) \in \mathbb{S}^2_d(\mathbb{F}) \times \H^2_{d \times \kappa}(\mathbb{F}) \times \H^2_d(\mathbb{F})$ such that $M$ is a martingale with $M_0 = 0$, orthogonal to the Brownian motion, and satisfying \begin{align*} Y_t = \xi + \int_t^T f(s,Y_s,Z_s) \mathrm{d} s - \int_t^T Z_s \mathrm{d} W_s - \int_t^T \mathrm{d} M_s. \end{align*} \end{Theorem} When $d=1$, one can easily deal with linear BSDEs in $\mathbb{F}$, and the specific form of their solutions allows us to prove a Comparison Theorem. The proofs follow closely \cite{EKPQ97}, Theorem 2.2. \begin{Theorem} Let $(a,b)$ be a bounded $(\mathbb{R} \times \mathbb{R}^\kappa)$-valued predictable process and let $c \in \H^2(\mathbb{F})$. Let $\xi \in L^2(\mathcal{F}_T)$ and let $(Y,Z,M) \in \mathbb{S}^2(\mathbb{F}) \times \H^2_{1\times \kappa}(\mathbb{F}) \times \H^2(\mathbb{F})$ be the unique solution to \begin{align*} Y_t = \xi + \int_t^T \left( a_s Y_s + b_s Z_s + c_s \right) \mathrm{d} s - \int_t^T Z_s \mathrm{d} W_s - \int_t^T \mathrm{d} M_s.
\end{align*} Let $\Gamma \in \H^2(\mathbb{F})$ be the solution to \begin{align*} \Gamma_t = 1 + \int_0^t \Gamma_s a_s \mathrm{d} s + \int_0^t \Gamma_s b_s \mathrm{d} W_s. \end{align*} Then, for all $t \in [0,T]$, one has almost surely, \begin{align*} Y_t = \Gamma_t^{-1}\espcond{\Gamma_T \xi + \int_t^T \Gamma_s c_s \mathrm{d} s}{\mathcal{F}_t}. \end{align*} \end{Theorem} \begin{proof} We fix $t \in [0,T]$ and we apply Itô's formula to the process $Y_t \Gamma_t$: \begin{align*} \mathrm{d} \left(Y_t \Gamma_t\right) = Y_{t^-} \mathrm{d} \Gamma_t + \Gamma_{t^-} \mathrm{d} Y_t + \mathrm{d} \left[ Y, \Gamma \right]_t. \end{align*} Since $\Gamma$ is continuous, we get $\left[ Y, \Gamma \right]_t = \left< Y^c, \Gamma^c \right>_t + \sum_{s \le t} \left(\Delta Y_s\right)\left(\Delta\Gamma_s\right) = \left< Y^c, \Gamma \right>_t$, thus, \begin{align*} \mathrm{d} \left(Y_t\Gamma_t\right) = \Gamma_t\left(b_tY_t + Z_t\right) \mathrm{d} W_t + \Gamma_t \mathrm{d} M_t - \Gamma_t c_t \mathrm{d} t. \end{align*} We define a martingale by $N_t = \int_0^t \Gamma_s (b_s Y_s + Z_s) \mathrm{d} W_s + \int_0^t \Gamma_s \mathrm{d} M_s$, and the previous equality gives \begin{align*} Y_T \Gamma_T = Y_t \Gamma_t - \int_t^T \Gamma_s c_s \mathrm{d} s + N_T - N_t. \end{align*} Taking the conditional expectation with respect to $\mathcal{F}_t$ on both sides gives the result.\\ \hbox{ }\hfill$\Box$\end{proof} \begin{Theorem} \label{comparison} Let $(\xi,f)$ and $(\xi',f')$ be two standard parameters. Let $(Y,Z,M) \in \mathbb{S}^2(\mathbb{F}) \times \H^2_{1\times \kappa}(\mathbb{F}) \times \H^2(\mathbb{F})$ (resp. $(Y',Z',M')$) be the solution associated with $(\xi,f)$ (resp. $(\xi',f')$). Assume that \begin{itemize} \item $\xi \ge \xi'$ a.s., \item $f(t,Y'_t,Z'_t) \ge f'(t,Y'_t,Z'_t)$ $\mathrm{d} t \otimes \mathrm{d} \P$-a.e. \end{itemize} Then $Y_t \ge Y'_t$ almost surely for all $t \in [0,T]$.
\end{Theorem} \begin{proof}Since $f$ is Lipschitz, we consider the bounded processes $a$ and $b$ and the process $c \in \H^2(\mathbb{F})$ defined by: \begin{align} a_t &= \frac{f(t,Y_t,Z_t) - f(t,Y'_t,Z_t)}{(Y_t - Y'_t)}1_{Y_t \neq Y'_t}, \\ b_t &= \frac{f(t,Y'_t,Z_t) - f(t,Y'_t, Z'_t)}{|Z_t-Z'_t|^2}\left(Z_t-Z'_t\right) 1_{Z_t \neq Z'_t} , \\ c_t &= f(t,Y'_t,Z'_t) - f'(t,Y'_t,Z'_t). \end{align} Setting $\delta Y_t = Y_t - Y'_t, \delta Z_t = Z_t - Z'_t$ and $\delta M_t = M_t - M'_t$, we observe that $(\delta Y, \delta Z, \delta M)$ is the solution to the following linear BSDE: \begin{align} \delta Y_t = \delta Y_T + \int_t^T \left( a_s \delta Y_s + b_s \delta Z_s + c_s \right) \mathrm{d} s - \int_t^T \delta Z_s \mathrm{d} W_s - \int_t^T \mathrm{d} \delta M_s. \end{align} Using the previous Theorem, we get $\delta Y_t = \Gamma_t^{-1} \espcond{\delta Y_T \Gamma_T + \int_t^T \Gamma_s c_s \mathrm{d} s}{\mathcal{F}_t}$. By definition, $\Gamma$ is a strictly positive process, and $\delta Y_T = \xi - \xi'$ and $c$ are nonnegative by hypothesis, hence $\delta Y_t \ge 0$, i.e. $Y_t \ge Y'_t$.\\ \hbox{ }\hfill$\Box$\end{proof} \subsubsection{An example of switching problem with controlled randomisation} We assume here that $\mathcal{C} = [0,1]$ and we consider the example of a switching problem with controlled randomisation given by \eqref{transitions et couts exemple controle}. Since the cost functions are positive, $\mathcal{D}$ has a non-empty interior. \begin{Theorem} \label{th existence markov ex controled} There exists a function $H :\mathbb{R}^3\to \mathbb{R}^{3 \times 3}$ that satisfies Assumption \ref{assumptionMarkov}-v) and such that \begin{align*} H(y) v \in \mathcal{C}_o(y), \quad \forall y \in \mathcal{D},\, v \in \mathcal{C}(y). \end{align*} Consequently, if we assume that Assumption \ref{assumptionMarkov}(i)-(iv) is fulfilled, there exists a solution to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} with $\xi = g(X_T)$ and $f(\omega,s,y,z) = \psi(s,X^{t,x}_s(\omega),y,z)$.
Moreover this solution is unique if we also assume Assumption \ref{hyp sup section 2}-ii). \end{Theorem} \begin{proof} We first observe that uniqueness follows once again from Proposition \ref{prop unicite couts generaux}.\\ \noindent 1. We start by constructing $H$ on the boundary of $\mathcal{D}$. Recalling Lemma \ref{lem-general}, it is enough to construct it on its intersection with $\cD_{\!\circ}$, which is made up of $3$ vertices $$y^1=(1,0,0), \quad y^2=(0,1,0), \quad y^3=(0,-1,-1)$$ and three edges that are smooth curves. We denote by $\mathcal{E}_1$ (respectively $\mathcal{E}_2$ and $\mathcal{E}_3$) the curve between $y^1$ and $y^2$ (respectively between $y^2$ and $y^3$ and between $y^3$ and $y^1$). Let us construct $H(y^1)$ and $H(y^2)$: we must have $$H(y^1) \left(\begin{array}{cc} 1 & 1\\ 0 & -1\\ -1 & 0 \end{array} \right) = \left(\begin{array}{cc} 0 & 0\\ 0 & -b\\ -a & 0 \end{array} \right), \quad H(y^2)\left(\begin{array}{cc} -1 & 0\\ 1 & 1\\ 0 & -1 \end{array} \right) = \left(\begin{array}{cc} -c & 0\\ 0 & 0\\ 0 & -d \end{array} \right),$$ with $a,b,c,d>0$. Let us set $a=b=c=d=1$. Then we can take $$H(y^1)=\left(\begin{array}{ccc} 1 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{array} \right), \quad H(y^2)=\left(\begin{array}{ccc} 2 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 2 \end{array} \right).$$ We define now $H$ on $\mathcal{E}_1$. We denote by $(x_s)_{s \in [0,1]}$ a continuous parametrization of $\mathcal{E}_1$ such that $x_0=y^1$ and $x_1=y^2$. For all $s \in [0,1]$, we also denote by $R_s$ the matrix that sends the standard basis onto a local basis at the point $x_s$ with the standard orientation and such that the first two vectors lie in the plane $\{z=0\}$, with the first one orthogonal to $\mathcal{E}_1$ and the second one tangent to $\mathcal{E}_1$, while the third one is $e_3$. We have in particular $R_0=\text{Id}$.
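The two matrices $H(y^1)$ and $H(y^2)$ above can be checked mechanically. The following NumPy sketch (illustrative only; the variable names are ours) verifies the two displayed matrix identities in the case $a=b=c=d=1$, and that both candidates are symmetric positive definite:

```python
import numpy as np

# Matrices proposed in the construction (case a = b = c = d = 1).
H1 = np.array([[1., 1., 1.],
               [1., 2., 1.],
               [1., 1., 2.]])
H2 = np.array([[2., 1., 1.],
               [1., 1., 1.],
               [1., 1., 2.]])

# Required action on the edge directions at y^1 and y^2.
V1 = np.array([[ 1., 1.], [0., -1.], [-1., 0.]])
W1 = np.array([[ 0., 0.], [0., -1.], [-1., 0.]])
V2 = np.array([[-1., 0.], [1.,  1.], [ 0., -1.]])
W2 = np.array([[-1., 0.], [0.,  0.], [ 0., -1.]])

assert np.allclose(H1 @ V1, W1) and np.allclose(H2 @ V2, W2)
# Both candidates are symmetric with strictly positive eigenvalues.
for H in (H1, H2):
    assert np.allclose(H, H.T) and np.all(np.linalg.eigvalsh(H) > 0)
```

Both matrices have determinant $1$ and positive leading minors, so the eigenvalue check above succeeds.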
Then we just have to set $$H(x_s)=R_s [(1-s) H(y^1) + s\, R_1^{-1} H(y^2) R_1]R_s^{-1}.$$ We can check that, by construction, Assumption \ref{assumptionMarkov}-v) and \eqref{relation-cones} are fulfilled for points on $\mathcal{E}_1$. Moreover, we are able to construct $H$ by the same method at $y^3$, and then on $\mathcal{E}_2$ and $\mathcal{E}_3$, satisfying Assumption \ref{assumptionMarkov}-v) and \eqref{relation-cones}. \\ \noindent 2. By using Lemma \ref{lem-general} we can extend $H$ to the whole boundary of $\mathcal{D}$. Finally, we can extend $H$ by continuity to the whole space $\mathbb{R}^3$ by following Remark 2.1 in \cite{chassagneux2018obliquely}. \hbox{ }\hfill$\Box$ \end{proof} \section{Obliquely Reflected BSDEs associated to randomised switching problems} \label{existence} In this section, we study the Obliquely Reflected BSDE \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} associated to the \emph{switching problem with controlled randomisation}. We address the question of existence of solutions to such BSDEs. Indeed, as observed in the previous section, under appropriate assumptions, uniqueness follows directly from the control problem representation, see Corollary \ref{co uniqueness RBSDE} and Proposition \ref{prop unicite couts generaux}. We first give some general properties of the domain $\mathcal{D}$ and identify necessary and sufficient conditions linked to the non-emptiness of its interior. The non-empty interior property is key for our existence result and is not trivially obtained in the setting of signed costs. This is mainly the purpose of Section \ref{section prop du domaine de reflection}. Then, we prove existence results for the associated BSDE in the Markovian framework, in Section \ref{sub se markov}, and in the non-Markovian framework, in Section \ref{sub se non markov}, relying on the approach in \cite{chassagneux2018obliquely}.
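As a concrete companion to this program, membership in the domain $\mathcal{D}$ of \eqref{domain} is straightforward to test numerically in the uncontrolled case, where the constraints read componentwise $x \succcurlyeq Px - \bar c$. The sketch below (NumPy; the $3$-state uniform data are those of Example 2 below, and the tolerance is an implementation detail) also illustrates the translation invariance along $(1,\dots,1)$ established in Lemma \ref{lem-general} below:

```python
import numpy as np

# Illustrative membership test for D = {x : Qx + c >= 0} with a single
# control, Q = I - P, for a 3-state randomised switching chain with
# uniform transitions and unit costs.
P = np.full((3, 3), 0.5) - 0.5 * np.eye(3)   # zero diagonal, 1/2 off-diagonal
c = np.ones(3)
Q = np.eye(3) - P

def in_D(x, tol=1e-12):
    return bool(np.all(Q @ x + c >= -tol))

assert in_D(np.zeros(3))
assert not in_D(np.array([3.0, 0.0, 0.0]))   # too far from the diagonal line
# Translation invariance along (1,1,1): x in D  <=>  x + h*(1,1,1) in D,
# since Q annihilates the vector (1,1,1).
x = np.array([0.5, -0.5, 0.0])
assert in_D(x) and all(in_D(x + h * np.ones(3)) for h in (-7.0, 2.5, 100.0))
```

The invariance holds exactly here because the rows of $P$ sum to one, so $Q(1,\dots,1)^\top = 0$.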
Existence results in \cite{chassagneux2018obliquely} are obtained for general obliquely reflected BSDEs where the oblique reflection is specified through an operator $H$ that transforms, on the boundary of the domain, the normal cone into the oblique direction of reflection. Thus, the main difficulty is to construct this operator $H$ with some specific properties needed to apply the existence theorems of \cite{chassagneux2018obliquely}. This task is carried out successfully for the \emph{randomised switching problem} in the Markovian framework. We also consider an example of \emph{switching problem with controlled randomisation} in this framework. In the non-Markovian framework, which is more challenging as more properties are required on $H$, we prove the well-posedness of the BSDE for some examples of \emph{randomised switching problem}. \subsection{Properties of the domain of reflection } \label{section prop du domaine de reflection} In this section, we study the domain where the solution of the reflected BSDEs is constrained to take its values. The first result shows that the domain $\mathcal{D}$ defined in \eqref{domain} is invariant under translation along the vector $(1,\dots,1)$ and deduces some properties of its normal cone. Most of the time, we will thus be able to limit our study to \begin{align}\label{eq de D slice} \cD_{\!\circ} = \mathcal{D} \cap \set{ y \in \mathbb{R}^d \,|\, y_d = 0}\;. \end{align} \begin{Lemma} \label{lem-general} For all $x \in \mathcal{D}$, we have \begin{enumerate} \item \textcolor{black}{$x+h\sum_{i=1}^d e_i \in \mathcal{D}$, for all $h \in \mathbb{R}$,} \item there is a unique decomposition $x = y^x + z^x$ with $y^x \in \cD_{\!\circ}$ and $z^x \in \mathbb{R} \left(\sum_{i=1}^d e_i\right)$, \item $\mathcal{C}(x) \subset \{v \in \mathbb{R}^d : \sum_{i=1}^d v_i = 0\}$, \item $\mathcal{C}(x) = \mathcal{C}(y^x)$, where $y^x$ is given by the above decomposition.
\end{enumerate} \end{Lemma} \begin{proof} \textcolor{black}{1. If $i \in \{1,\dots,d\}$, we have \begin{align} \nonumber x_i-h &\ge \max_{u \in \mathcal{C}} \left(\sum_{j=1}^d P^u_{i,j} x_j - c_{i}^{u}\right) - h = \max_{u \in \mathcal{C}} \left( \sum_{j=1}^d P^u_{i,j} (x_j - h) - c_{i}^{u}\right) \end{align} and thus $x-h\sum_{i=1}^d e_i \in \mathcal{D}$; since $h \in \mathbb{R}$ is arbitrary, $x+h\sum_{i=1}^d e_i \in \mathcal{D}$ as well.\\ 2. We set $y^x = x - z^x$ with $z^x = x_d \sum_{i=1}^d e_i$. It is clear that $y^x_d = 0$, and $y^x \in \mathcal{D}$ thanks to the first point. The uniqueness is clear since we have necessarily $z^x = x_d \sum_{i=1}^d e_i$.\\ 3. Point 1. shows that $x \pm \sum_{i=1}^d e_i \in \mathcal{D}$. Let $v \in \mathcal{C}(x)$. }Then we have, by definition, \begin{align*} 0 &\ge v^\top (x \pm \sum_{i=1}^d e_i - x) = \pm v^\top \sum_{i=1}^d e_i = \pm \sum_{i=1}^d v_i, \end{align*} and thus, $\sum_{i=1}^d v_i = 0$.\\ 4. Let $x \in \mathcal{D}$. Since $x = y^x + x_d \sum_{i=1}^d e_i$, it is enough to show that for all $w \in \mathcal{D}$ and all $a \in \mathbb{R}, \mathcal{C}(w) \subset \mathcal{C}(w + a \sum_{i=1}^d e_i)$.\\ Let $v \in \mathcal{C}(w)$. We have, for all $z \in \mathcal{D}$, since $\sum_{i=1}^d v_i = 0$ and $v^\top(z-w) \le 0$, \begin{align*} v^\top(z-(w+a\sum_{i=1}^d e_i)) &= v^\top(z-w) - a v^\top \sum_{i=1}^d e_i = v^\top(z-w) \le 0, \end{align*} and thus $v \in \mathcal{C}(w+a\sum_{i=1}^d e_i)$. \hbox{ }\hfill$\Box$ \end{proof} \vspace{2mm} \noindent Before studying the domain of reflection, we introduce three examples of switching problems in dimension $3$. In Figure \ref{figure exemples domaines}, we draw the domain $\cD_{\!\circ}$ for these three different switching problems to illustrate the impact of the various controlled randomisations on the shape of the reflecting domain. \begin{description} \item[Example 1:] Classical switching problem with a constant cost $1$, i.e.
$\mathcal{C}=\{1,2\}$, \begin{align*} P^1 = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right) \;,\; P^2 = \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right) \;,\; \bar{c}^1 =\left( \begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array} \right) \; \text{ and } \; \bar{c}^2=\left( \begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array} \right). \end{align*} \item[Example 2:] Randomised switching problem with $\mathcal{C}=\{0\}$, \begin{align*} P^0 = \left( \begin{array}{ccc} 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \\ 1/2 & 1/2 & 0 \end{array} \right) \;\text{ and } \; \bar{c}^0 =\left( \begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array} \right). \end{align*} \item[Example 3:] Switching problem with controlled randomisation where $\mathcal{C}=[0,1]$, \begin{equation} \label{transitions et couts exemple controle} P^u= \left(\begin{array}{ccc} 0 & u & 1-u\\ 1-u & 0 & u\\ u & 1-u & 0\\ \end{array}\right) \;\text{ and } \; \bar{c}^u =\left( \begin{array}{c} 1-u(1-u) \\ 1-u(1-u) \\ 1-u(1-u) \\ \end{array} \right) \quad \forall u \in [0,1]. \end{equation} In this example, the transition matrices are convex combinations of the transition matrices of Example 1: $P^u = u P^1 + (1-u) P^2$. \end{description} \begin{figure}[htp] \centering \begin{center} \includegraphics[width=10cm]{./examples.pdf} \end{center} \caption{Domain $\cD_{\!\circ}$ for three examples of switching problems with or without controlled randomisation.} \label{figure exemples domaines} \end{figure} \textcolor{black}{ \begin{Remark} For the randomised switching problem, in any dimension, we can replace $(P_{i,j})_{1 \le j \le d}$ by $\left( \frac{P_{i,j}}{1-P_{i,i}}{\bf 1}_{i \neq j}\right)_{1 \le j \le d}$ and $\bar{c}_i$ by $\frac{\bar{c}_i}{1-P_{i,i}}$ as soon as $P_{i,i}<1$, without changing $\mathcal{D}$. The factor $({1-P_{i,i}})^{-1}$ in the cost has to be seen as the expectation of the geometric number of trials needed to exit state $i$.
Assuming that the diagonal terms vanish is thus equivalent to assuming that $P_{i,i}<1$ for all $1 \le i \le d$. \end{Remark} } \subsubsection{The uncontrolled case} In this part, we study the domain $\mathcal{D}$ for a fixed control, taken to be $0$ without loss of generality. The properties of the domain are closely linked in this case to the homogeneous Markov chain, denoted $X$, associated with the stochastic matrix $P$. For this part, we thus work with the following assumption. \begin{Assumption} \label{ass-uncontrolled-irred} The set of controls is reduced to $\mathcal{C} = \{0\}$. The Markov chain $X$ with stochastic matrix $P = (P_{i,j})_{1 \le i,j \le d} := (P^0_{i,j})_{1 \le i,j \le d}$ is irreducible. \end{Assumption} \noindent Our main goal is to find necessary and sufficient conditions characterizing the non-emptiness of the domain $\mathcal{D}$. To this end, we will introduce some quantities related to the Markov chain $X$ and the cost vector $\bar{c}:=\bar{c}^0$. \vspace{2mm} \noindent For $1 \le i,j \le d$, we consider the expected cost along an ``excursion'' from state $i$ to $j$: \begin{align} \label{eq de expected cost} \bar{C}_{i,j} := \esp{\sum_{n=0}^{\tau_j - 1} \bar{c}_{X_n} \Big| X_0 = i} \textcolor{black}{=\esp{\sum_{n=0}^{\tau_j-1}c_{X_n,X_{n+1}} \Big | X_0 = i}} \end{align} where \begin{align*} \tau_j := \inf \set{ t \ge 1\,|\, X_t = j}\;. \end{align*} We also define \begin{align} \label{eq de C} C_{j,j} := 0 \; \text{ and } \; C_{i,j} = \bar{C}_{i,j} \text{ for } 1 \le i \neq j \le d \;. \end{align} We observe that, introducing $\tilde{\tau}_j := \inf \set{ t \ge 0\,|\, X_t = j}$, the cost $C$ can be rewritten as: \begin{align*} {C}_{i,j} := \esp{\sum_{n=0}^{\tilde{\tau}_j } \bar{c}_{X_n}{\bf 1}_{\set{X_n \neq j}} \Big| X_0 = i}\;, \text{ for } 1 \le i,j \le d\,.
\end{align*} \textcolor{black}{Let us remark that $\esp{\tau_j}$ and $\esp{\tilde{\tau}_j}$ are finite, for all $1 \le j \le d$, since the Markov chain is irreducible, hence recurrent; thus $\bar{C}$ and $C$ are finite.} \\ \noindent Setting $Q = I_d - P$, the domain $\mathcal{D}$, defined in \eqref{domain}, rewrites: \begin{align} \label{eq de domain with Q} \mathcal{D} & = \{x \in \mathbb{R}^d : Qx + \bar{c} \succcurlyeq 0\}. \end{align} Since $P$ is irreducible, it is well known (see for example \cite{B15}, Section 2.5) that for all $1 \le i, j \le d$, the matrix $Q^{(i,j)}$ is invertible, and that we have \begin{align}\label{eq inv meas} \tilde{\mu}_i := \det Q^{(i,i)} = (-1)^{i+j} \det Q^{(i,j)}> 0 . \end{align} Moreover, $\tilde{\mu} Q = 0$ with $\tilde\mu = (\tilde\mu_i)_{i=1}^d$, i.e. ${\mu} := \frac{\tilde\mu}{\sum_{i=1}^d \tilde\mu_i}$ is the unique invariant probability measure for the Markov chain with transition matrix $P$. \noindent We now obtain some necessary conditions for the domain to be non-empty. Let us first establish the following lemma. \begin{Lemma} The mean costs $C$ are given, for $1 \le i \neq j \le d$, by \begin{align}\label{eq carac C} C_{i,j} = \left((Q^{(j,j)})^{-1} \bar{c}^{(j)}\right)_{i-{\bf 1}_{\set{i>j}} } \;. \end{align} \end{Lemma} {\noindent \bf Proof. } 1. We first show that for $1 \le i,j \le d$: \begin{align}\label{eq dpp bar C} \bar{C}_{i,j} = \bar{c}_i + \sum_{\ell \neq j} \bar{C}_{\ell, j}P_{i,\ell} \;. \end{align} From \eqref{eq de expected cost}, we have \begin{align*} \bar{C}_{i,j} &= \esp{\sum_{n=0}^{+\infty} \bar{c}_{X_n} {\bf 1}_{\set{n<\tau_j}}\Big|X_0 = i} = \bar{c}_i + \esp{\sum_{n=1}^{+\infty} \bar{c}_{X_n} {\bf 1}_{\set{n<\tau_j}}\Big|X_0 = i}. \end{align*} Then, since for all $n \ge 1$, $\set{X_1 = j} \cap \set{n<\tau_j} = \emptyset$, we get \begin{align*} \bar{C}_{i,j} &=\bar{c}_i + \esp{\sum_{n=1}^{+\infty} \sum_{\ell \neq j} \bar{c}_{X_n}{\bf 1}_{\set{X_1 = \ell}} {\bf 1}_{\set{n<\tau_j}}\Big|X_0 = i}.
\end{align*} We compute that, for $\ell \neq j$, \begin{align*} \esp{\sum_{n=1}^{+\infty} \bar{c}_{X_n}{\bf 1}_{\set{X_1 = \ell}} {\bf 1}_{\set{n<\tau_j}}\Big|X_0 = i} = \esp{\sum_{n=1}^{+\infty} \bar{c}_{X_n} {\bf 1}_{\set{n<\tau_j}}\Big|X_1 = \ell}P_{i,\ell}. \end{align*} The proof of \eqref{eq dpp bar C} is then concluded by observing that, from the Markov property, $$\esp{\sum_{n=1}^{+\infty} \bar{c}_{X_n} {\bf 1}_{\set{n<\tau_j}}\Big|X_1 = \ell} = \bar{C}_{\ell, j}\,.$$ 2. From \eqref{eq dpp bar C}, we deduce, recalling Definition \eqref{eq de C}, that, for $i \neq j$, \begin{align}\label{eq dpp C} {C_{i,j}} = \bar{c}_i + \sum_{\ell \neq j} {C}_{\ell,j}P_{i,\ell} \;. \end{align} This equality simply rewrites as $Q^{(j,j)} C_{\cdot,j} = \bar{c}^{(j)}$, which concludes the proof. \hbox{ }\hfill$\Box$ \vspace{2mm} \begin{Proposition}\label{pr necessary conditions} Assume $\mathcal{D}$ is non-empty. Then, \begin{enumerate} \item the mean cost with respect to the invariant measure is non-negative, namely: \begin{align}\label{eq cond mean cost non empty} {\mu} \bar{c} \ge 0. \end{align} \item The excursion costs satisfy \begin{align}\label{eq cond C non empty} \min_{1 \le i,j \le d} \left(C_{i,j} + C_{j,i} \right) \ge 0. \end{align} \item The set $\cD_{\!\circ}$ is compact in $\{y \in \mathbb{R}^d : y_d = 0\}$. \end{enumerate} Moreover, if $\mathcal{D}$ has non-empty interior, then \begin{align}\label{eq cond cost non empty interior} \mu \bar{c} > 0\;\text{ and } \;\min_{1 \le i \neq j \le d} \left(C_{i,j} + C_{j,i} \right) > 0\;. \end{align} \end{Proposition} {\noindent \bf Proof. } 1.a We first show the key relation: for all $x \in \mathcal{D}$, \begin{align} \label{eq encadrement} - C_{i,j} \le x_i - x_j \le C_{j,i} \;,\; \text{ for } 1 \le i,j \le d. \end{align} For $j \in \{1,\dots,d\}$ and $x \in \mathbb{R}^d$, we introduce $\pi^j(x) \in \mathbb{R}^{d-1}$, given by $$\pi^j(x)_k = x_{ \rs{k}{j} } - x_j\;,\;\; k \in \{1,\dots,d-1\}. $$ Let $x \in \mathcal{D}$ and $j \in \{1, \dots, d\}$.
For all $i \in \{1, \dots, d\}, i \neq j$, we have, by definition of $\mathcal{D}$ and since $\sum_{k=1}^d P_{i,k} = 1$, \begin{align*} x_i - x_j \ge \sum_{k=1}^d P_{i,k} \left(x_k - x_j\right) - \bar{c}_i. \end{align*} Thus $\pi^j(x)$ satisfies \begin{align*} Q^{(j,j)} \pi^j(x) \succcurlyeq - \bar{c}^{(j)}. \end{align*} Since $\left(Q^{(j,j)}\right)^{-1} = \sum_{k \ge 0} \left(P^{(j,j)}\right)^k \succcurlyeq 0$, we obtain, using \eqref{eq carac C}, \begin{align} \label{def C} \pi^j(x) \succcurlyeq - \left(Q^{(j,j)}\right)^{-1}\bar{c}^{(j)} = - C^{(j)}_{\cdot,j}, \end{align} which means $x_i - x_j \ge - C_{i,j}$ for all $i \neq j$.\\ Let $1 \le i \neq j \le d$. The preceding reasoning gives $x_i - x_j \ge - C_{i,j}$ and $x_j - x_i \ge - C_{j,i}$, thus \eqref{eq encadrement} is proved.\\ From \eqref{eq encadrement}, we straightforwardly obtain \eqref{eq cond C non empty} and the fact that $\cD_{\!\circ}$ is compact in $\{y \in \mathbb{R}^d : y_d = 0\}$. \\ 1.b Since $\mathcal{D}$ is non-empty, the following holds for some $x \in \mathbb{R}^d$, recalling \eqref{eq de domain with Q}, \begin{align*} Qx + \bar{c} \succcurlyeq 0\,. \end{align*} Multiplying the previous inequality by ${\mu}$, we obtain \eqref{eq cond mean cost non empty}, since ${\mu}Q = 0$. \\ 2. Assume now that $\mathcal{D}$ has non-empty interior and consider $x \in \overset{\circ}{\mathcal{D}}$. Then, for all $1 \le i \le d$, we have that $x - \epsilon e_i$ belongs to $\mathcal{D}$ for $\epsilon > 0$ small enough. Thus, we get \begin{align*} x_i - \epsilon \ge \sum_{\ell = 1}^d P_{i,\ell} x_{\ell} - \epsilon P_{i,i} - \bar{c}_i \end{align*} and then \begin{align*} Qx+\bar{c} \succcurlyeq \epsilon \min_{1 \le i \le d}(1-P_{i,i}) \sum_{\ell = 1}^d e_\ell. \end{align*} Since $P$ is irreducible, $\min_{1 \le i \le d}(1-P_{i,i}) > 0$, and multiplying both sides of the previous inequality by ${\mu}$ we obtain ${\mu} \bar{c}>0$.
\\ For any $j \neq i$, since $x - \epsilon e_i \in \mathcal{D}$, we deduce from \eqref{eq encadrement} that $-C_{i,j}+\epsilon \le x_i - x_j$. Using again \eqref{eq encadrement}, we get $-C_{i,j}+\epsilon \le C_{j,i}$. This proves the right-hand side of \eqref{eq cond cost non empty interior}. \hbox{ }\hfill$\Box$ \vspace{2mm} \noindent The next lemma, whose proof is postponed to Appendix \ref{proof le costly round-trip}, links the condition \eqref{eq cond mean cost non empty} to costly round-trips. \begin{Lemma}\label{le costly round-trip} The following identities hold: for $1 \le j \le d$, \begin{align} \label{eq barCjj linked to mu c} \bar{C}_{j,j} = \frac{\mu \bar{c}}{\mu_j} \;, \end{align} and, for $1 \le i \neq j \le d$, \begin{align}\label{eq Cij+Cji linked to mu c} C_{i,j} + C_{j,i} = \frac{\mu \bar{c}}{\mu_i} \left(\left[Q^{(j,j)}\right]^{-1}\right)_{i^{(j)},i^{(j)}} \;. \end{align} \end{Lemma} \vspace{2mm} \noindent We are now going to show that the previous necessary conditions are also sufficient. The main result of this section is thus the following. \begin{Theorem} \label{th set of cns} The following conditions are equivalent: \begin{enumerate}[i)] \item The domain $\mathcal{D}$ is non-empty (resp. has non-empty interior). \item There exists $1 \le i \neq j \le d$ such that $C_{i,j} + C_{j,i} \ge 0$ (resp. $C_{i,j} + C_{j,i} > 0$). \item The inequality $\mu \bar{c} \ge 0$ (resp. $\mu \bar{c} > 0$) is satisfied. \item For all $1 \le i \neq j \le d$, $C_{i,j} + C_{j,i} \ge 0$ (resp. $C_{i,j} + C_{j,i} > 0$). \end{enumerate} \end{Theorem} {\noindent \bf Proof. } 1. We first note that in Proposition \ref{pr necessary conditions} we have proved $i)\implies iv)$. We also remark that $iv) \implies ii)$ trivially, and that $ii) \implies iii)$ follows in a straightforward way from equality \eqref{eq Cij+Cji linked to mu c}, recalling that $\left(Q^{(j,j)}\right)^{-1} = \sum_{k \ge 0} \left(P^{(j,j)}\right)^k \succcurlyeq 0$, whose diagonal entries are bounded below by $1$. \\ 2. We now study $iii) \implies i)$.
\\ 2.a Assume that $\mu \bar{c} \ge 0$. For $1 \le j \le d$, we denote $z^j := -C_{\cdot,j}$. Then from \eqref{eq dpp C}, we straightforwardly observe that, for all $i \neq j$, \begin{align}\label{eq calcul zji} z^j_i = \sum_{\ell = 1}^d z^j_\ell P_{i,\ell} - \bar{c}_i \;. \end{align} We now take care of the case $i=j$ by computing, recalling $z^j_j=0$, \begin{align}\label{eq calcul zjj} z^j_j - \sum_{\ell = 1}^d z^j_\ell P_{j,\ell} + \bar{c}_j &= \sum_{\ell = 1}^d C_{\ell,j} P_{j,\ell} + \bar{c}_j =\bar{C}_{j,j} \end{align} where we used \eqref{eq dpp bar C} with $i=j$. Then, combining \eqref{eq barCjj linked to mu c} and the assumption $\mu \bar{c} \ge 0$ for this step, we obtain that $z^j \in \mathcal{D}$ and so $\mathcal{D}$ is non-empty. \\ 2.b We assume that $\mu \bar{c}>0$, which implies that $\bar{C}_{j,j} = \frac{\mu \bar{c}}{\mu_j} > 0$ for all $1 \leq j \leq d$, recalling \eqref{eq barCjj linked to mu c}. Fix any $j \in \set{1,\dots,d}$ and consider $z^j := -C_{\cdot,j}$ introduced in the previous step. We then set \begin{align}\label{eq de x} x := z^j + \frac{1}{2(d-1)}\sum_{k \neq j}(z^k-z^j)\;. \end{align} Next, we compute, for $i \neq j$, recalling \eqref{eq calcul zji} and \eqref{eq calcul zjj}, \begin{align*} \left(Qx + \bar{c} \right)_i &= \left(Qz^j + \bar{c} \right)_i + \frac{1}{2(d-1)}\sum_{k \neq j}(Qz^k-Qz^j)_i\\ &= 0+\frac{1}{2(d-1)} (Qz^i-Qz^j)_i = \frac{1}{2(d-1)}((Qz^i)_i+\bar{c}_i) = \frac{\bar{C}_{i,i}}{2(d-1)}>0. \end{align*} For $i=j$, we compute, recalling \eqref{eq calcul zji} and \eqref{eq calcul zjj}, \begin{align*} \left(Qx + \bar{c} \right)_j&= \left(Qz^j + \bar{c} \right)_j + \frac{1}{2(d-1)}\sum_{k \neq j}(Qz^k-Qz^j)_j\\ &= \bar{C}_{j,j} + \frac{1}{2(d-1)}\sum_{k \neq j} (-\bar{c}_j + \bar{c}_j -\bar{C}_{j,j}) = \frac{\bar{C}_{j,j}}{2}>0. \end{align*} Combining the two previous inequalities, we obtain that \begin{align*} Qx + \bar{c} \succcurlyeq \frac{\delta}{2} {\bf 1}\, \quad \text{with} \quad \delta = \frac{1}{d-1}\min\{\bar{C}_{i,i}\,|\,1 \leq i \leq d\}.
\end{align*} From this, we easily deduce that $x + B(0, \frac{\delta}{4 \sup_i ||Q_{i,\cdot}||_2} ) \subset \mathcal{D}$, which proves that $\mathcal{D}$ has a non-empty interior. \\ \hbox{ }\hfill$\Box$ \noindent We now give some extra conditions that are linked to the non-emptiness of the domain $\mathcal{D}$. \begin{Proposition} The following assertions are equivalent: \begin{enumerate}[i)] \item $\mathcal{D}$ is non-empty, \item For all $1 \le i,j,k \le d$, the following holds \begin{align}\label{eq nice cost} C_{j,k} \le C_{j,i} + C_{i,k}, \end{align} \item \textcolor{black}{For any round trip of length at most $d$, i.e. $1\le n \le d$ and $1 \le i_1 \neq...\neq i_n \le d$, we have \begin{align}\label{eq any roundtrip} {\sum_{k=1}^{n-1} C_{i_k,i_{k+1}} + C_{i_n, i_1} \ge 0 }. \end{align}} \end{enumerate} \end{Proposition} {\noindent \bf Proof. } 1. $i) \implies ii)$. From Theorem \ref{th set of cns}, we know that $-C_{\cdot,k} \in \mathcal{D}$ for all $k \in \set{1,\dots,d}$. Using then \eqref{eq encadrement}, we have \begin{align*} -C_{i,k} + C_{j,k} \le C_{j,i}\;, \text{ for all } 1 \le i, j \le d\;, \end{align*} which concludes the proof of this step. \\ 2. $ii) \implies iii)$ follows directly by iterating \eqref{eq nice cost}, since $C_{i,i}=0$ for all $1 \le i \le d$. Finally, $iii) \implies i)$ follows from Theorem \ref{th set of cns}, since \eqref{eq any roundtrip} applied to round trips of length $2$ gives condition ii) there. \hbox{ }\hfill$\Box$ \begin{Proposition} \label{pr convex hull} Let us assume that $\mathcal{D}$ has a non-empty interior. Define $\theta_{\cdot,j} = C_{\cdot,j} - C_{d,j} {\bf 1}$, for all $1 \le j \le d$. Then $(-\theta_{\cdot,j})_{1 \le j \le d}$ are affinely independent and $\cD_{\!\circ}$ is the convex hull of these points. \end{Proposition} {\noindent \bf Proof. } We know from Step 2.a in the proof of Theorem \ref{th set of cns} that $-C_{\cdot,j} \in \mathcal{D}$ for all $1 \le j \le d$. The invariance by translation along ${\bf 1}$ of the domain proves that the $-\theta_{\cdot,j}$ are in $\cD_{\!\circ}$.
More precisely, we obtain from \eqref{eq dpp C} that \begin{align}\label{eq property theta} \theta_{i,j} - \sum_{\ell=1}^d \theta_{\ell,j}P_{i,\ell} = \bar{c}_i \;,\quad \text{ for } 1 \le i \neq j \le d\;. \end{align} \\ 1. We now prove that $(\theta_{\cdot,j})_{1 \le j \le d}$ are affinely independent. We thus consider $\alpha \in \mathbb{R}^d$ such that \begin{align}\label{eq ass on alpha} \sum_{j=1}^d \alpha_j = 0 \quad \text{ and } \quad z := \sum_{j = 1}^d \alpha_j \theta_{\cdot,j} = 0\;, \end{align} and we aim to prove that $\alpha_j = 0$, for $j \in \set{1,\dots,d}$. To this end, we compute, for $i \in \set{1,\dots,d}$, \begin{align*} z_i := \sum_{j=1}^d \alpha_j \theta_{i,j} &= \sum_{j \neq i} \alpha_j \theta_{i,j} + \alpha_i \theta_{i,i} = \sum_{j\neq i} \alpha_j \bar{c}_i + \sum_{\ell = 1}^d \sum_{j \neq i}\alpha_j\theta_{\ell,j}P_{i,\ell} -\alpha_i C_{d,i} \\ &=\sum_{j\neq i} \alpha_j \bar{c}_i + \sum_{\ell = 1}^d z_\ell P_{i,\ell} - \alpha_i\sum_{\ell=1}^d \theta_{\ell,i} P_{i,\ell} -\alpha_i C_{d,i} \\ &= - \alpha_{i}(\bar{c}_i + \sum_{\ell=1}^d \theta_{\ell,i} P_{i,\ell} + C_{d,i}) = - \alpha_i(\bar{c}_i + \sum_{\ell=1}^d C_{\ell,i} P_{i,\ell})=-\alpha_i \bar{C}_{i,i}. \end{align*} We thus deduce that $\alpha_i=0$ since $\bar{C}_{i,i}=\frac{\mu \bar{c}}{\mu_i}>0$, which concludes the proof of this step. \\ 2. We now show that $\cD_{\!\circ}$ is the convex hull of the points $(-\theta_{\cdot,j})_{1 \le j \le d}$, which are affinely independent by the previous step. For $y \in \mathbb{R}^{d}$ with $y_d = 0$, there thus exists a unique $(\lambda_1,\dots,\lambda_{d-1}) \in \mathbb{R}^{d-1}$ such that $y = \sum_{j=1}^d -\lambda_j \theta_{\cdot,j}$, with $\lambda_d = 1 - \sum_{j=1}^{d-1} \lambda_j$. Assuming that $y \in \mathcal{D}$, we have that \begin{align*} v := Qy + \bar{c} = \sum_{j=1}^d -\lambda_j Q \theta_{\cdot,j} + \bar{c} =\sum_{j=1}^d \lambda_j [Q(-\theta_{.,j})+\bar{c}]\succcurlyeq 0\;.
\end{align*} Since $[Q(-\theta_{.,j})+\bar{c}]_i=0$ for all $i \neq j$, we get, for all $1 \le i \le d$, $$v_i = \lambda_i ([Q(-\theta_{.,i})]_i+\bar{c}_i) \ge 0.$$ Recalling that $[Q(-\theta_{.,i})]_i+\bar{c}_i = \bar{C}_{i,i} > 0$, we obtain $\lambda_i \ge 0$, which concludes the proof. \hbox{ }\hfill$\Box$ \subsubsection{The setting of controlled randomisation} In this part, we adapt Assumption \ref{ass-uncontrolled-irred} in the following natural way. \begin{Assumption} \label{ass-uncontrolled-irred- new} For all $u \in \mathcal{C}$, the Markov chain with stochastic matrix $P^u := (P^u_{i,j})_{1 \le i,j \le d}$ is irreducible. \end{Assumption} \noindent We then consider the matrix $\widehat{C}$ defined by, for all $(i,j) \in \set{1,\dots,d}^2$, \begin{align}\label{eq de min cost} \widehat{C}_{i,j} := \min_{u \in \mathcal{C}} {C}^u_{i,j} \, \, \end{align} recalling the definition of ${C}^u_{i,j}$ for a fixed control, see \eqref{eq de expected cost} and \eqref{eq de C}. Let us note that $ \widehat{C}_{i,j}$ is well defined in $\mathbb{R}$ under Assumption \ref{ass-uncontrolled-irred- new} since $\mathcal{C}$ is compact. \noindent The following result is similar to Proposition \ref{pr necessary conditions}, but in the context of switching with controlled randomisation.\\ \begin{Proposition}\label{pr necessary conditions general} Assume $\mathcal{D}$ is non-empty (resp. has non-empty interior). Then, \begin{align}\label{eq cond cost non empty interior general} \min_{u}\mu^u \bar{c}^u \ge 0\;(\text{resp.} > 0) \quad\text{ and } \quad\min_{1\le i\neq j \le d} \left(\widehat{C}_{i,j} + \widehat{C}_{j,i} \right) \ge 0\;(\text{resp.} > 0)\;. \end{align} Moreover, the set $\cD_{\!\circ}$ is compact in $\{y \in \mathbb{R}^d : y_d = 0\}$. \end{Proposition} {\noindent \bf Proof. } 1. Let $x \in \mathcal{D}$. From \eqref{eq encadrement}, applied to the Markov chain with transition matrix $P^u$, we have for each $u \in \mathcal{C}$, $ -C^{u}_{i,j} \le x_i - x_j \le C^{u}_{j,i} \;.
$ Minimizing over $u \in \mathcal{C}$, we then obtain \begin{align}\label{eq encadrement general} -\widehat{C}_{i,j} \le x_i - x_j \le \widehat{C}_{j,i} \;. \end{align} From this, we deduce that $\cD_{\!\circ}$ is compact in $\{y \in \mathbb{R}^d : y_d = 0\}$ and we get the right-hand side of \eqref{eq cond cost non empty interior general}. \\ We also have that, for all $u \in \mathcal{C}$, \begin{align*} Q^u x + \bar{c}^u \succcurlyeq 0\,, \end{align*} and multiplying by $\mu^u$ we obtain $\mu^u \bar{c}^u \ge 0$. This leads to $\min_{u}\mu^u \bar{c}^u \ge 0$. \\ \textcolor{black}{2. Then, the results concerning the non-empty interior framework can be obtained as in the proof of Proposition \ref{pr necessary conditions}.} \hbox{ }\hfill$\Box$ \paragraph{The case of controlled costs only.} Let us start by introducing the minimal controlled mean cost: \begin{align*} \hat{c}_i := \min_{u \in \mathcal{C}} \bar{c}^u_i\;,\quad \text{for}\; 1 \le i \le d\;. \end{align*} In this setting, we have that \begin{align*} \mathcal{D} :=& \{x \in \mathbb{R}^d : (Qx)_i + \bar{c}^{u}_i \ge 0\;, \text{for all } u \in \mathcal{C}, \; 1 \le i \le d\} \\ = &\{x \in \mathbb{R}^d : (Qx)_i + \hat{c}_i \ge 0\;, \text{for all } \; 1 \le i \le d\}. \end{align*} Using the results of the uncontrolled case (Theorem \ref{th set of cns}) with the new costs $\hat{c}$, we know that a necessary and sufficient condition for the non-emptiness of $\mathcal{D}$ is $\mu \hat{c} \ge 0$. Moreover, the matrix $C$ is defined here by \begin{align}\label{eq carac C control cost} C_{i,j} = \left((Q^{(j,j)})^{-1} \hat{c}^{(j)}\right)_{i-{\bf 1}_{\set{i>j}} },\quad 1 \le i \neq j \le d \;, \end{align} and $C_{i,i} =0$, for $1 \le i \le d$. Comparing the above expression with the definition of $\widehat{C}$ in \eqref{eq de min cost}, we observe that $C_{i,j} \le \widehat{C}_{i,j}$, $1 \le i,j \le d$.
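The matrix formula \eqref{eq carac C control cost} (and likewise \eqref{eq carac C}) is easy to check numerically. The following minimal sketch (Python with NumPy; the variable names and numerical setup are ours, for illustration only) computes the invariant measure $\mu$ and the excursion costs $C_{i,j}$ for the randomised switching problem of Example 2, where the two formulas coincide since the costs are uncontrolled, and verifies the conditions of Theorem \ref{th set of cns}:

```python
import numpy as np

# Example 2 of this section: randomised switching on d = 3 states,
# P[i, j] = 1/2 for i != j and unit mean costs c_bar = (1, 1, 1).
d = 3
P = np.full((d, d), 0.5)
np.fill_diagonal(P, 0.0)
c_bar = np.ones(d)
Q = np.eye(d) - P

# Invariant probability measure: mu @ Q = 0 together with sum(mu) = 1.
A = np.vstack([Q.T, np.ones(d)])
b = np.concatenate([np.zeros(d), [1.0]])
mu = np.linalg.lstsq(A, b, rcond=None)[0]

# Excursion costs: C[i, j] = ((Q^{(j,j)})^{-1} c_bar^{(j)})_{i - 1_{i > j}},
# i.e. delete row and column j from Q and solve a (d-1)x(d-1) linear system.
C = np.zeros((d, d))
for j in range(d):
    idx = [i for i in range(d) if i != j]
    C[idx, j] = np.linalg.solve(Q[np.ix_(idx, idx)], c_bar[idx])

print(mu)          # invariant measure, here (1/3, 1/3, 1/3)
print(C)           # excursion costs, here 2 off the diagonal
print(mu @ c_bar)  # mean cost under the invariant measure, here 1 > 0
```

One finds $\mu = (\frac13,\frac13,\frac13)$, $C_{i,j} = 2$ for $i \neq j$ (the expected cost of an excursion from $i$ to $j$) and $\mu\bar c = 1 > 0$; one can also check that $z^j = -C_{\cdot,j}$ satisfies $Qz^j + \bar c \succcurlyeq 0$, so the domain is non-empty, in agreement with Figure \ref{figure exemples domaines}.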
The following example confirms that \begin{align*} \min_{1\le i\neq j \le d} \left(\widehat{C}_{i,j} + \widehat{C}_{j,i} \right) \ge 0\,, \end{align*} recall Proposition \ref{pr necessary conditions general}, is not a sufficient condition for the non-emptiness of the domain in this context. \begin{Example} Set $\mathcal{C} = \set{0,1}$, \begin{align*} P = \left( \begin{array}{ccc} 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0.5 \\ 0.5 & 0.5 & 0 \end{array} \right) \;,\; \bar{c}^0 =\left( \begin{array}{c} -0.5 \\ 1.2 \\ 0.7 \\ \end{array} \right) \; \text{ and } \; \bar{c}^1=\left( \begin{array}{c} 1.5 \\ 0.2 \\ 0.2 \\ \end{array} \right) \end{align*} Observe that $\mu = (\frac13,\frac13,\frac13)$ and $\hat{c} = (-0.5,0.2,0.2)^\top$. Then, one computes that \begin{align*} \min_{1\le i\neq j \le d} \left(\widehat{C}_{i,j} + \widehat{C}_{j,i} \right) > 0 \quad \text{ but } \quad \mu \hat{c} = -\tfrac{1}{30} < 0\;. \end{align*} \end{Example} \subsection{The Markovian framework} \label{sub se markov} We now introduce a Markovian framework, and prove that a solution to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} exists for the randomised switching problem under Assumption \ref{ass-uncontrolled-irred} and a technical copositivity hypothesis, see Assumption \ref{ass-copositivity} below. \textcolor{black}{We also investigate an example of a switching problem with controlled randomisation, see \eqref{transitions et couts exemple controle}.}\\ To this end, we rely on the existence theorem obtained in \cite{chassagneux2018obliquely}, which we recall next. \vspace{2mm} \noindent For all $(t,x) \in [0,T] \times \mathbb{R}^q$, let $X^{t,x}$ be the solution to the following SDE: \begin{align} \label{eq SDE} \mathrm{d} X_s &= b(s,X_s) \mathrm{d} s + \sigma(s, X_s) \mathrm{d} W_s, \quad s \in [t,T], \\ X_t &= x.
\end{align} We are interested in the solutions $(Y^{t,x},Z^{t,x},K^{t,x}) \in \mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d \times \kappa}(\mathbb{F}^0) \times \mathbb{A}^2_d(\mathbb{F}^0)$ of \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}, where the terminal condition satisfies $\xi = g(X^{t,x}_T)$, and the driver satisfies $f(\omega,s,y,z) = \psi(s,X^{t,x}_s(\omega),y,z)$ for some deterministic measurable functions $g, \psi$. We next give the precise set of assumptions we need to obtain our results. \noindent For the sake of completeness, we recall here the existence result proved in \cite{chassagneux2018obliquely}, see also \cite{DAFH17}. \begin{Assumption} \label{assumptionMarkov} There exist $p \ge 0$ and $L \ge 0$ such that \begin{enumerate}[i)] \item \begin{align*} |\psi(t,x,y,z)| \le L(1+|x|^p+|y|+|z|). \end{align*} Moreover, $\psi(t,x,\cdot,\cdot)$ is continuous on $\mathbb{R}^d\times\mathbb{R}^{d \times \kappa}$ for all $(t,x) \in [0,T]\times\mathbb{R}^q$. \item $(b,\sigma) : [0,T] \times \mathbb{R}^q \to \mathbb{R}^q \times \mathbb{R}^{q \times \kappa}$ is a measurable function satisfying, for all $(t,x,y) \in [0,T] \times \mathbb{R}^q \times \mathbb{R}^q$, \begin{align*} |b(t,x)| + |\sigma(t,x)| &\le L(1+|x|),\\ |b(t,x)-b(t,y)|+|\sigma(t,x)-\sigma(t,y)|&\le L|x-y|. \end{align*} \item $g : \mathbb{R}^q \to \mathbb{R}^d$ is measurable and for all $x \in \mathbb{R}^q$, we have \begin{align*} |g(x)| \le L(1+|x|^p). \end{align*} \item Let $\mathcal{X} = \{\mu(t,x;s,dy),x \in \mathbb{R}^q \textrm{ and } 0\leq t \leq s \leq T\}$ be the family of laws of $X^{t,x}$ on $\mathbb{R}^q$, i.e., the measures such that $\forall A \in \mathcal{B}(\mathbb{R}^q)$, $\mu(t,x;s,A) = \mathbb{P}(X_s^{t,x} \in A)$.
For any $t \in [0,T)$, for $\mu(0,a;t,dy)$-almost every $x \in \mathbb{R}^q$, and any $\delta \in ]0,T-t]$, there exists a map $\phi_{t,x}: [t,T]\times \mathbb{R}^d \rightarrow \mathbb{R}_+$ such that: \begin{enumerate} \item $\forall k \geq 1$, $\phi_{t,x} \in L^2([t+\delta,T] \times [-k,k]^q; \mu(0,a;s,dy)ds)$, \item $\mu(t,x;s,dy)ds = \phi_{t,x}(s,y)\mu(0,a;s,dy)ds$ on $[t+\delta,T] \times \mathbb{R}^q$. \end{enumerate} \item $H : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ is a measurable function, and there exists $\eta > 0$ such that, for all $(y,y') \in \mathcal{D}\times\mathbb{R}^d$ and $v \in \mathfrak n(\mathfrak{P}(y))$, where $\mathfrak{P}$ is the projection on $\mathcal{D}$, we have \begin{align*} v^\top H(y) v &\ge \eta,\\ |H(y')| &\le L. \end{align*} Moreover, $H$ is continuous on $\mathcal{D}$. \end{enumerate} \end{Assumption} \begin{Remark} Assumption iv) holds as soon as $\sigma$ is uniformly elliptic, see \cite{HLP97}. \end{Remark} \noindent The existence result in the Markovian setting reads as follows. \begin{Theorem}[\cite{chassagneux2018obliquely}, Theorem 4.1] \label{thm-existence-cr} Under Assumption \ref{assumptionMarkov}, there exists a solution $(Y^{t,x},Z^{t,x},\Psi^{t,x}) \in \mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d \times \kappa}(\mathbb{F}^0) \times \H^2_d(\mathbb{F}^0)$ of the following system: \begin{align} \hspace{-1cm} \label{orbsde-gen-1} Y_s &= g(X^{t,x}_T) + \int_s^T \!\!\!\!\psi(u,X^{t,x}_u,Y_u,Z_u) \mathrm{d} u - \int_s^T \!\!\!Z_u \mathrm{d} W_u - \int_s^T \!\!\!\! H(Y_u) \Psi_u \mathrm{d} u, s \in [t,T], \\ \label{orbsde-gen-2} Y_s &\in \mathcal{D}, \mbox{ } \Psi_s \in \mathcal{C}(Y_s),\mbox{ } t \le s \le T,\\ \label{orbsde-gen-3} \int_t^T &1_{\{Y_s \not \in \partial \mathcal{D}\}} |\Psi_s| \mathrm{d} s = 0.
\end{align} \end{Theorem} The main point, in order to invoke Theorem \ref{thm-existence-cr}, is then to construct a function $H : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ which satisfies Assumption \ref{assumptionMarkov} v) and such that \begin{align} \label{relation-cones} H(y) v \in \mathcal{C}_o(y), \end{align} for all $y \in \mathcal{D}$ and $v \in \mathcal{C}(y)$, where $\mathcal{C}_o(y)$ is the cone of directions of reflection, given here by \begin{align*} \mathcal{C}_o(y) := -\sum_{i=1}^d \mathbb{R}_+ e_i \ind{y_i = \max_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} y_j - \bar{c}_{i}^u \right\}}. \end{align*} If Assumption \ref{assumptionMarkov} i), ii), iii), iv) are also satisfied, we obtain the existence of a solution to \eqref{orbsde-gen-1}-\eqref{orbsde-gen-2}-\eqref{orbsde-gen-3}. Setting $K^{t,x}_s := - \int_t^s H(Y^{t,x}_u) \Psi^{t,x}_u \mathrm{d} u$ for $t \le s \le T$ shows that $(Y^{t,x},Z^{t,x},K^{t,x})$ is a solution to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}. \subsubsection{Well-posedness result in the uncontrolled case} We assume here Assumption \ref{ass-uncontrolled-irred}. In addition, we need to introduce the following technical assumption in order to construct $H$ satisfying Assumption \ref{assumptionMarkov} v) and \eqref{relation-cones}. \textcolor{black}{ \begin{Assumption} \label{ass-copositivity} For all $1 \le i \le d$, the matrix $Q^{(i,i)}$ is strictly copositive, meaning that for all $0 \preccurlyeq x \in \mathbb{R}^{d-1}$, $x \neq 0$, we have \begin{align} x^\top Q^{(i,i)} x > 0. \end{align} \end{Assumption} } \noindent Our main result for this section is the following theorem. \begin{Theorem} \label{constr H} Suppose that Assumption \ref{assumptionMarkov} i), ii), iii), iv), Assumption \ref{ass-uncontrolled-irred} and Assumption \ref{ass-copositivity} are satisfied and that $\mathcal{D}$ has non-empty interior. \noindent Then, there exists $H : \mathbb{R}^d \to \mathbb{R}^{d \times d}$ satisfying Assumption \ref{assumptionMarkov} v).
Consequently, there exists a solution to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} with $\xi = g(X_T)$ and $f(\omega,s,y,z) = \psi(s,X^{t,x}_s(\omega),y,z)$. Moreover, this solution is unique if we also assume Assumption \ref{hyp sup section 2}-ii). \end{Theorem} {\noindent \bf Proof. } We first observe that uniqueness follows from Proposition \ref{prop unicite couts generaux}. We now focus on proving the existence of a solution, which amounts to exhibiting a suitable function $H$. The general idea is to start by constructing $H$ on the points $(y^i)_{1 \le i \le d}$, given by \begin{align} y^i &:= (C_{d,i} - C_{j,i})_{1\le j \le d}, \end{align} then, using Proposition \ref{pr convex hull}, we can extend it to the whole of $\cD_{\!\circ}$ by linear combination, and finally we extend $H$ to all of $\mathbb{R}^d$ by using the geometry of $\mathcal{D}$. \\ The proof is then divided into several steps. \vspace{1mm} \\ 1. We start by computing the outward normal cone $\mathcal{C}(y)$ for all $y \in \cD_{\!\circ}$. Let us fix $y \in \cD_{\!\circ}$. Thanks to Proposition \ref{pr convex hull}, there exists a unique $(\lambda_i)_{1 \leqslant i \leqslant d} \in [0,1]^d$ such that $$y = \sum_{i=1}^d \lambda_i y^i, \quad \sum_{i=1}^d \lambda_i =1.$$ Let us denote $\mathcal{E}_y = \{1 \leqslant i \leqslant d \,|\,\lambda_i>0\}$. We will show that \begin{align} \label{resultat a prouver} \mathcal{C}(y) = \sum_{j \notin \mathcal{E}_y} \mathbb{R}_+ n_j, \end{align} where $n_i := (-Q_{i,j})_{1 \le j \le d}$ for $1 \le i \le d$, and with the convention $\mathcal{C}(y)=\emptyset$ when $\mathcal{E}_y = \{1,...,d\}$. Let us remark that the result is obvious when $\mathcal{C}(y)=\emptyset$, since, in this case, $y$ is in the interior of $\mathcal{D}$. So we will assume in the following that $\mathcal{C}(y)\neq \emptyset$.\\ \noindent 1.a. First, let us show that for any $1 \leqslant i \leqslant d$, $(n_j)_{j \neq i}$ is a basis of $\{v \in \mathbb{R}^d \,|\, \sum_{k=1}^d v_k = 0\}$. Let $1 \le i \neq j \le d$.
It is clear that $n_j \in \{v \in \mathbb{R}^d : \sum_{k=1}^d v_k = 0\}$. Since it is a hyperplane of $\mathbb{R}^d$ and the family $(n_j)_{j \neq i}$ has $d-1$ elements, it is enough to show that the vectors are linearly independent. We observe that the matrix whose rows are the $n^{(i)}_j, j \neq i,$ is $- Q^{(i,i)}$. Since $P$ is irreducible, $Q^{(i,i)}$ is invertible. The vectors $n^{(i)}_j, j \neq i$ form a basis of $\mathbb{R}^{d-1}$, hence the vectors $(n_j)_{j \neq i}$ form a basis of $\{v \in \mathbb{R}^d | \sum_{k=1}^d v_k = 0\}$.\\ \noindent 1.b. We now fix $j \notin \mathcal{E}_y$ and show that $n_j \in \mathcal{C}(y)$. For any $z \in \mathcal{D}$, by definition of $\mathcal{D}$, we have \begin{align*} \bar{c}_j \ge \sum_{k=1}^d P_{j,k} z_k - z_j = n_j^\top z, \end{align*} and for all $i \in \mathcal{E}_y$, by definition of $y^{i}$, we have \begin{align*} \bar{c}_{j} = \sum_{k=1}^d P_{j,k} y^{i}_k - y^{i}_j = n_j^\top y^{i}. \end{align*} This gives $n_j^\top (z - y) = n_j^\top z - \sum_{i \in \mathcal{E}_y} \lambda_{i}n_j^\top y^{i} \le 0$, hence $n_j \in \mathcal{C}(y)$.\\ \noindent 1.c. We now set $i=\min \mathcal{E}_y$. Conversely, since $(n_j)_{j \neq i}$ is a basis of $\{v \in \mathbb{R}^d : \sum_{k=1}^d v_k = 0\} \supset \mathcal{C}(y)$, see Lemma \ref{lem-general}, for $v \in \mathcal{C}(y)$ there exists a unique $\alpha = (\alpha_j)_{j \neq i} \in \mathbb{R}^{d-1}$ such that $v = \sum_{j \neq i} \alpha_j n_j = (n_j)_{j \neq i} \alpha$. We will show here that $\alpha_{\ell}=0$ for all $\ell \in \mathcal{E}_y \setminus \{i\}$ and $\alpha_{\ell}\geq 0$ for all $\ell \notin \mathcal{E}_y$. \\ For all $z \in \mathcal{D}$, the previous calculations give \begin{align*} 0 &\ge \alpha^\top \left[(n_j)_{j \neq i}\right]^\top(z-y) = - \alpha^\top Q^{(i,\cdot)} (z-\sum_{\ell \in \mathcal{E}_y} \lambda_\ell y^\ell) = - \alpha^\top \left[Q^{(i,\cdot)}z - \sum_{\ell \in \mathcal{E}_y} \lambda_\ell Q^{(i,\cdot)}y^\ell \right].
\end{align*} Let us recall that for any $j \neq i$, by definition of $y^j$, one gets $Q^{(i,\cdot)}y^j + \bar{c}^{(i)} = \frac{\mu \bar{c}}{\mu_j} e_j$, and $Q^{(i,\cdot)}y^i + \bar{c}^{(i)} = 0$. Thus, the previous inequality becomes \begin{align}\nonumber 0 &\le \alpha^\top \left[Q^{(i,\cdot)}z - \sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \left(\frac{\mu \bar{c}}{\mu_\ell} e_\ell - \bar{c}^{(i)}\right) +\lambda_i \bar{c}^{(i)} \right]\\ \label{inegalite centrale} &= \alpha^\top \left[Q^{(i,\cdot)}z +\bar{c}^{(i)} - \sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right]. \end{align} By taking $z=y^j$ in \eqref{inegalite centrale}, with $j \in \mathcal{E}_y\setminus \{i\}$, we get \begin{align} \label{ineg alpha l} 0&\le \alpha^\top \left[\frac{\mu \bar{c}}{\mu_j} e_j - \sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right] \end{align} and so we can sum the previous inequality over $j \in \mathcal{E}_y\setminus \{i\}$ with the positive weights $\lambda_j$, to obtain \begin{align*} 0&\le \left(1-\sum_{j \in \mathcal{E}_y\setminus \{i\}} \lambda_j\right)\alpha^\top \left[\sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right]. \end{align*} Then $0\le \alpha^\top \left[\sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right]$ since $1-\sum_{j \in \mathcal{E}_y\setminus \{i\}} \lambda_j = \lambda_i>0$. Moreover, we also have $0\ge \alpha^\top \left[\sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right]$ by taking $z=y^i$ in \eqref{inegalite centrale}, which gives us that \begin{align} \label{egalite sum alpha l} \alpha^\top \left[\sum_{\ell \in \mathcal{E}_y\setminus \{i\} } \lambda_\ell \frac{\mu \bar{c}}{\mu_\ell} e_\ell \right]=0. \end{align} We recall that $\mu \bar{c}>0$ since $\mathcal{D}$ has non-empty interior (see Theorem \ref{th set of cns}).
Plugging \eqref{egalite sum alpha l} into \eqref{ineg alpha l} gives us that $\alpha_j \ge 0$ for all $j \in \mathcal{E}_y\setminus \{i\}$, which, combined with \eqref{egalite sum alpha l}, allows us to conclude that $\alpha_j = 0$ for all $j \in \mathcal{E}_y\setminus \{i\}$.\\ \noindent Now we apply \eqref{inegalite centrale} with $z=y^j$ for $j \notin \mathcal{E}_y$: hence $0 \leqslant \alpha_j \frac{\mu \bar{c}}{\mu_j}$ for all $j \notin \mathcal{E}_y$, which concludes the proof of \eqref{resultat a prouver}.\\ \noindent 2. Then, we construct $H(y)$. Let us start with $H(y^i)$ for any $1 \le i \le d$. Fix $1 \le i \le d$, and let $B^i \in \mathbb{R}^{(d-1) \times (d-1)}$ be the change-of-basis matrix from $(-n_j^{(i)})_{j \neq i}$ to the canonical basis of $\mathbb{R}^{d-1}$. We set $H(y^i) := I^i B^i P^i$, with $I^i : \mathbb{R}^{d-1} \to \mathbb{R}^d$ and $P^i : \mathbb{R}^d \to \mathbb{R}^{d-1}$ the linear maps defined by \begin{align} I^i(x_1,\dots,x_{d-1}) &= (x_1,\dots,x_{i-1},0,x_i,\dots,x_{d-1}), \\ P^i(x_1,\dots,x_d) &= (x_1,\dots,x_{i-1},x_{i+1},\dots,x_d). \end{align} Now we set $H(y) := \sum_{i \in\mathcal{E}_y} \lambda_i H(y^i)$. Let us take $v \in \mathcal{C}(y)$. Thanks to \eqref{resultat a prouver}, we know that $v = \sum_{j=1}^d \alpha_j n_j$ for some $(\alpha_j)_{1 \le j \le d} \in (\mathbb{R}^+)^d$ such that $\alpha_j=0$ when $j \in \mathcal{E}_y$. Since $n_k = - Q_k^\top$, for all $1\le k \le d$, we have $v = -Q^\top \alpha$. By construction, we get that $$H(y)v = - \sum_{j \notin \mathcal{E}_y} \alpha_j e_j= -\alpha \in \mathcal{C}_o(y).$$ It remains to check that Assumption \ref{assumptionMarkov}-v) is fulfilled. If $v \neq 0$, which is equivalent to $\alpha \neq 0$, we have, for $i \in \mathcal{E}_y$, \begin{align*} v^\top H(y)v= \alpha^\top Q \alpha= (\alpha^{(i)})^\top Q^{(i,i)} \alpha^{(i)} >0, \end{align*} due to Assumption \ref{ass-copositivity} and the fact that $\alpha_i=0$.\\ \noindent 3.
We have constructed $H$ on $\cD_{\!\circ}$ with the needed properties. Finally, we set $H(x) = H(x - x_d \sum_{i=1}^d e_i)$ for all $x \in \mathcal{D}$ and $H(x) = H(\mathfrak{P}(x))$ for $x \in \mathbb{R}^d$ and the proof is finished \hbox{ }\hfill$\Box$ \begin{Remark} i) Assumption \ref{ass-copositivity} is satisfied as soon as $P$ is symmetric and irreducible. Indeed, $Q^{(i,i)}$ is then nonsingular, symmetric and diagonally dominant, hence positive definite, for all $i \in \set{1,\dots,d}$. \\ ii) In dimension $3$, if $P$ is irreducible, then Assumption \ref{ass-copositivity} is automatically satisfied. Indeed, we have \begin{align} P = \left(\begin{array}{ccc} 0 & p & 1-p \\ q & 0 & 1-q \\ r & 1-r & 0 \end{array}\right) \end{align} for some $p,q,r \in [0,1]$ satisfying $0 \le p+q, 1+r-p, 2-(q+r) < 2$ by irreducibility. Thus, for $i=1$ for example, \begin{align} Q^{(1,1)} + \left(Q^{(1,1)}\right)^\top = \left(\begin{array}{cc} 2 & -(p+q) \\ -(p+q) & 2 \end{array}\right) \end{align} is nonsingular, symmetric and diagonally dominant, hence positive definite. Thus $x^\top Q^{(1,1)} x = \frac{1}{2} x^\top \left(Q^{(1,1)} + \left(Q^{(1,1)}\right)^\top\right) x > 0$ for all $x \neq 0$. \\ iii) However, in dimension greater than $3$, it is not always possible to construct a function $H$ satisfying Assumption \ref{assumptionMarkov}.
For example in dimension $4$, consider the following matrix: \begin{align} P = \left(\begin{array}{cccc} 0 & \frac{\sqrt{3}}{2} & 0 & 1 - \frac{\sqrt{3}}{2} \\ 1 - \frac{\sqrt{3}}{2} & 0 & \sqrt{3} - 1 & 1 - \frac{\sqrt{3}}{2} \\ 0 & 1 & 0 & 0 \\ \frac13 & \frac13 & \frac13 & 0 \end{array}\right), \end{align} together with positive costs $c$ to ensure that the domain has non-empty interior.\\ It is an irreducible stochastic matrix, and let us consider the extremal point $y^4$ such that \begin{align} y^4_4 &= 0, \\ y^4_1 &= \frac{\sqrt 3}{2} y^4_2 - c_1, \\ y^4_2 &= (1 - \frac{\sqrt 3}{2})y^4_1 + (\sqrt 3 - 1)y^4_3 - c_2, \\ y^4_3 &= y^4_2 - c_3. \end{align} We have $\mathcal{C}(y^4) = \mathbb{R}_+ (-1, \frac{\sqrt 3}{2}, 0, 1 - \frac{\sqrt 3}{2})^\top + \mathbb{R}_+ (1 - \frac{\sqrt 3}{2}, -1, \sqrt 3 - 1, 1 - \frac{\sqrt 3}{2})^\top + \mathbb{R}_+(0,1,-1,0)^\top =: \sum_{i=1}^3 \mathbb{R}_+ n_i$.\\ If $H(y^4)$ satisfies $H(y^4) n_1 = (-1,0,0,0), H(y^4) n_2 = (0,-1,0,0)$ and $H(y^4) n_3 = (0,0,-1,0)$, consider $v = \frac12 n_1 + n_2 + \frac{\sqrt 3}{2}n_3 \in \mathcal{C}(y^4)$. Then it is easy to compute that $v^\top H(y^4) v = 0$, hence it is not possible to construct $H(y^4)$ at this point satisfying Assumption \ref{assumptionMarkov}. \end{Remark} \section{Switching problem with controlled randomisation} \label{game} \definecolor{green}{rgb}{0.0, 0.65, 0.31} We introduce here a new kind of stochastic control problem that we name \emph{switching problem with controlled randomisation}. In contrast with the usual switching problems \cite{HJ07,HZ10,HT10}, the agent cannot directly choose the new state, but chooses a probability distribution under which the new state will be determined. In this section, we assume the existence of a solution to some auxiliary obliquely reflected BSDE to characterize the value process and an optimal strategy for the problem, see Assumption \ref{exist} below. \vspace{2mm} Let $(\Omega, \mathcal{G}, \P)$ be a probability space.
We fix a finite time horizon $T > 0$ and two integers $\kappa \ge 1$, $d \ge 2$. We assume that there exists a $\kappa$-dimensional Brownian motion $W$ and a sequence $(\mathfrak{U}_n)_{n \ge 1}$ of independent random variables, independent of $W$, uniformly distributed on $[0,1]$. We also assume that $\mathcal{G}$ is generated by the Brownian motion $W$ and the family $(\mathfrak{U}_n)_{n \ge 1}$. We define $\mathbb{F}^0 = (\mathcal{F}^0_t)_{t \ge 0}$ as the augmented Brownian filtration, which satisfies the usual conditions.\\ Let $\mathcal{C}$ be an ordered compact metric space and $F : \mathcal{C} \times \{1,\dots,d\} \times [0,1] \to \{1,\dots,d\}$ a measurable map. To each $u \in \mathcal{C}$ is associated a transition probability function on the state space $\{1, \dots, d\}$, given by $P^u_{i,j} := \P(F(u,i,\mathfrak{U}) = j)$ for $\mathfrak{U}$ uniformly distributed on $[0,1]$. {We assume that for all $(i,j) \in \{1,\dots,d\}^2$, the map $u \mapsto P^u_{i,j}$ is continuous.}\\ Let $\bar{c} : \{1,\dots,d\} \times \mathcal{C} \to \mathbb{R}_+, (i, u) \mapsto \bar{c}_{i}^{u}$ be a map such that $u \mapsto \bar{c}^u_{i}$ is continuous for all $i = 1, \dots, d$. We denote $\check{c} := \sup_{i \in \{1,\dots,d\}, u \in \mathcal{C}} \bar{c}_{i}^{u}$ and $\hat{c} := \inf_{i \in \{1,\dots,d\}, u \in \mathcal{C}} \bar{c}_{i}^{u}$.\\ Let $\xi = (\xi^1, \dots, \xi^d) \in L^2_d(\mathcal{F}^0_T)$ and $f : \Omega \times [0,T] \times \mathbb{R}^d \times \mathbb{R}^{d \times \kappa} \to \mathbb{R}^d$ be a map satisfying \begin{itemize} \item $f$ is $\mathcal{P}(\mathbb{F}^0) \otimes \mathcal{B}^{d} \otimes \mathcal{B}^{d\times \kappa}$-measurable and $f(\cdot,0,0) \in \H^2_d(\mathbb{F}^0)$. \item There exists $L \ge 0$ such that, for all $(t,y,y',z,z') \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^{d \times \kappa} \times \mathbb{R}^{d \times \kappa}$, \begin{align*} |f(t,y,z)-f(t,y',z')| \le L(|y-y'|+|z-z'|).
\end{align*} \end{itemize} These assumptions will be in force throughout our work. We shall also use, in this section only, the following additional assumptions. { \begin{Assumption} \label{hyp sup section 2} \begin{enumerate}[i)] \item Switching costs are assumed to be positive, i.e. $\hat{c}>0$. \item For all $(t,y,z) \in [0,T]\times\mathbb{R}^d\times\mathbb{R}^{d\times \kappa}$, it holds almost surely \begin{align} \label{hyp plus simple sur f} f(t,y,z) = (f^i(t,y^i,z^i))_{1\le i \le d}. \end{align} \end{enumerate} \end{Assumption} \begin{Remark} \label{re positive cost} i) \textcolor{black}{ It is usual to assume positive costs in the literature on switching problems. In particular, the cumulative cost process, see \eqref{definition A et N}, is nondecreasing. Introducing signed costs adds extra technical difficulties in the proof of the representation theorem (see e.g. \cite{Martyr-16} and references therein). We postpone the adaptation of our results to this more general framework to future work.} \\ ii) Assumption \eqref{hyp plus simple sur f} is also classical since it allows us to get a comparison result for BSDEs, which is key to obtaining the representation theorem. Note however that our results can be generalized to the case $f^i(t,y,z)=f^i(t,y,z^i)$ for $i \in \{1,...,d\}$ by using arguments similar to those in \cite{CEK11}. \end{Remark} } \subsection{Solving the control problem using obliquely reflected BSDEs} \label{problem} We define in this section the stochastic optimal control problem. We first introduce the strategies available to the agent and related processes. The definition of the strategy is more involved than in the usual switching problem setting since its adaptiveness property is understood with respect to a filtration built recursively.
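Before describing strategies formally, the randomisation mechanism introduced above can be illustrated by a minimal numerical sketch: a measurable map $F$ together with uniform draws induces the transition kernels $P^u_{i,j} := \P(F(u,i,\mathfrak{U}) = j)$. The particular map $F$ below (inverse transform sampling of prescribed rows) and the two matrices are hypothetical illustrations, not part of the model.

```python
import numpy as np

# Two hypothetical controls u in {0, 1}, each prescribed by a 3-state
# transition matrix P[u]. The map F(u, i, v) turns a uniform draw v in [0, 1)
# into the next state by inverting the CDF of row i of P[u], so that
# P(F(u, i, U) = j) = P[u][i, j] for U uniform on [0, 1].
P = {
    0: np.array([[0.0, 0.5, 0.5],
                 [0.3, 0.0, 0.7],
                 [1.0, 0.0, 0.0]]),
    1: np.array([[0.0, 1.0, 0.0],
                 [0.5, 0.0, 0.5],
                 [0.2, 0.8, 0.0]]),
}

def F(u, i, v):
    """Next state from state i under control u, driven by the uniform draw v."""
    return int(np.searchsorted(np.cumsum(P[u][i]), v, side="right"))

# Monte Carlo check that the induced kernel matches P[u].
rng = np.random.default_rng(0)
draws = rng.random(200_000)
for u in (0, 1):
    for i in range(3):
        nxt = np.searchsorted(np.cumsum(P[u][i]), draws, side="right")
        freq = np.bincount(nxt, minlength=3) / draws.size
        assert np.allclose(freq, P[u][i], atol=1e-2)
```

Any measurable $F$ with the right one-dimensional distributions would do; only the induced kernels $P^u$ enter the control problem.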
\vspace{2mm} A strategy is thus given by $\phi = \left(\zeta_0, (\tau_n)_{n \ge 0}, (\alpha_n)_{n \ge 1}\right)$ where $\zeta_0 \in \{1,\dots,d\}$, $(\tau_n)_{n \ge 0}$ is a nondecreasing sequence of random times and $(\alpha_n)_{n \ge 1}$ is a sequence of $\mathcal{C}$-valued random variables, which satisfy: \begin{itemize} \item $\tau_0 \in [0,T]$ and $\zeta_0 \in \{1, \dots, d\}$ are deterministic. \item For all $n \ge 0$, $\tau_{n+1}$ is an $\mathbb{F}^n$-stopping time and $\alpha_{n+1}$ is $\mathcal{F}^n_{\tau_{n+1}}$-measurable (recall that $\mathbb{F}^0$ is the augmented Brownian filtration). We then set $\mathbb{F}^{n+1} = (\mathcal{F}^{n+1}_t)_{t \ge 0}$ with $\mathcal{F}^{n+1}_t := \mathcal{F}^n_t \vee \sigma(\mathfrak{U}_{n+1} \ind{\tau_{n+1} \le t})$. \end{itemize} Lastly, we define $\mathbb{F}^\infty = (\mathcal{F}^\infty_t)_{t \ge 0}$ with $\mathcal{F}^\infty_t := \bigvee_{n \ge 0} \mathcal{F}^n_t, t \ge 0$. \\ For a strategy $\phi = \left(\zeta_0, (\tau_n)_{n \ge 0}, (\alpha_n)_{n \ge 1}\right)$, we set, for $n \ge 0$, \begin{align*} \zeta_{n+1} := F(\alpha_{n+1}, \zeta_n, \mathfrak{U}_{n+1}) \text{ and } a_t := \sum_{k = 0}^{+\infty} \zeta_k 1_{[\tau_k, \tau_{k+1})}(t), \; t \ge 0\;, \end{align*} which represent the state after each switch and the state process, respectively. We also introduce two processes, for $t \ge 0$, \begin{align} \label{definition A et N} A^\phi_t = \sum_{k = 0}^{+\infty} \bar{c}_{\zeta_k}^{\alpha_{k+1}} \ind{\tau_{k+1} \le t} \text{ and } N^\phi_t := \sum_{k \ge 0} \ind{\tau_{k+1} \le t}. \end{align} The random variable $A^\phi_t$ is the cumulative cost up to time $t$ and $N^\phi_t$ is the number of switches before time $t$.
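As a sanity check, the state process $a$, the cumulative cost $A^\phi$ and the counter $N^\phi$ just defined can be simulated along a given strategy. The transition map, the costs and the strategy data in the sketch below are hypothetical toy choices with $d = 3$ states, not part of the model.

```python
# Minimal simulation of the state process, the cumulative cost A^phi and the
# switch counter N^phi along a given strategy (hypothetical toy data, d = 3).

def F(u, i, v):
    # Control u = 0: move deterministically to the next state; control u = 1:
    # pick one of the two other states, decided by the uniform draw v.
    if u == 0:
        return (i + 1) % 3
    others = [j for j in range(3) if j != i]
    return others[0] if v < 0.5 else others[1]

cbar = {(i, u): 1.0 + 0.5 * u for i in range(3) for u in (0, 1)}  # costs cbar_i^u

def run(zeta0, taus, alphas, uniforms, t):
    """State after the last switch before t, cumulative cost A_t, counter N_t."""
    zeta, A, N = zeta0, 0.0, 0
    for tau, alpha, v in zip(taus, alphas, uniforms):
        if tau > t:
            break
        A += cbar[(zeta, alpha)]   # cost cbar_{zeta_k}^{alpha_{k+1}} paid at tau_{k+1}
        zeta = F(alpha, zeta, v)   # zeta_{k+1} = F(alpha_{k+1}, zeta_k, U_{k+1})
        N += 1
    return zeta, A, N

# Two switches occur before t = 1.0, both with the deterministic control u = 0,
# so the state moves 0 -> 1 -> 2 and each switch costs cbar^0 = 1.0:
state, cost, nswitch = run(zeta0=0, taus=[0.2, 0.6, 1.5],
                           alphas=[0, 0, 1], uniforms=[0.3, 0.8, 0.1], t=1.0)
```

The point of the sketch is only the bookkeeping: the control $\alpha_{k+1}$ fixes the cost paid from the current state, while the uniform draw decides the post-switch state.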
Notice that the processes $(a,A^\phi,N^\phi)$ are adapted to $\mathbb{F}^\infty$ \textcolor{black}{and that $A^\phi$ is a nondecreasing process.} \noindent We say that a strategy $\phi = (\zeta_0, (\tau_n)_{n \ge 0}, (\alpha_n)_{n \ge 1})$ is an \emph{admissible strategy} if the cumulative cost process satisfies \begin{align}\label{eq strategy admissible} A^\phi_T - A^\phi_{\tau_0} \in L^2(\mathcal{F}^\infty_T) \quad\text{ and }\quad \espcond{\left(A^\phi_{\tau_0}\right)^2}{\mathcal{F}^0_{\tau_0}} < +\infty \;a.s. \end{align} We denote by $\mathscr A$ the set of admissible strategies, and for $t \in [0,T]$ and $i \in \{1,\dots,d\}$, we denote by $\mathscr A^i_t$ the subset of admissible strategies satisfying $\zeta_0 = i$ and $\tau_0 = t$. \vspace{2mm} \begin{Remark} \begin{enumerate}[i)] \item \textcolor{black}{The definition of an \emph{admissible strategy} is slightly weaker than the usual one \cite{HT10}, which requires the stronger property $A^\phi_T \in L^2(\mathcal{F}^\infty_T)$. But, importantly, the above definition is enough to define the \emph{switched} BSDE associated with an admissible control, see below. Moreover, we observe in the next section that optimal strategies are admissible with respect to our definition, but not necessarily with respect to the usual one, due to possible simultaneous jumps at the initial time.} \item {For technical reasons involving possible simultaneous jumps, we cannot consider the filtration generated by $a$, which is contained in $\mathbb{F}^\infty$. } \end{enumerate} \end{Remark} \vspace{2mm} \noindent We are now in a position to introduce the reward associated with an admissible strategy.
If $\phi = (\zeta_0,(\tau_n)_{n \ge 0},(\alpha_n)_{n\ge 1}) \in \mathscr A$, the reward is defined as the value $\espcond{U^\phi_{\tau_0}-A^\phi_{\tau_0}}{\mathcal{F}^0_{\tau_0}}$, where $(U^\phi,V^\phi,M^\phi) \in \mathbb S^2(\mathbb{F}^\infty) \times \H^2_{\kappa}(\mathbb{F}^\infty) \times \H^2(\mathbb{F}^\infty)$ is the solution of the following ``switched'' BSDE \cite{HT10} on the filtered probability space $(\Omega,\mathcal{G},\mathbb{F}^\infty,\P)$: \begin{align} \label{switched BSDE} U_t = \xi^{a_T} + \int_t^T f^{a_s}(s, U_s, V_s) \mathrm{d} s - \int_t^T V_s \mathrm{d} W_s - \int_t^T \mathrm{d} M_s - \int_t^T \mathrm{d} A^\phi_s, \quad t \in [\tau_0,T]. \end{align} \begin{Remark}This switched BSDE can be rewritten as a classical BSDE in $\mathbb{F}^\infty$, and since $A^\phi_{\cdot} - A^\phi_t \in \mathbb S^2(\mathbb{F}^\infty)$ and the terminal condition and the driver are standard parameters, there exists a unique solution to \eqref{switched BSDE} for all $\phi \in \mathscr A$. We refer to Section \ref{sec BSDE} for more details. \end{Remark} \vspace{2mm} \noindent For $t \in [0,T]$ and $i \in \{1,\dots,d\}$, the agent thus aims to solve the following maximisation problem: \begin{align} \mathcal V^i_t = \esssup_{\phi \in \mathscr A^i_t} \espcond{U^\phi_t - A^\phi_t}{\mathcal{F}^0_t}. \end{align} \textcolor{black}{We first remark that this control problem corresponds to \eqref{eq de classical switching problem} as soon as $f$ does not depend on $y$ and $z$. Moreover, the term $ \espcond{A^\phi_t}{\mathcal{F}^0_t}$ is nonzero if and only if we have at least one instantaneous switch at initial time $t$.
} \noindent The main result of this section is the next theorem, which relates the value process $\mathcal{V}$ to the solution of an obliquely reflected BSDE, introduced in the following assumption: \begin{Assumption} \label{exist} \begin{enumerate}[i)] \item There exists a solution $(Y,Z,K) \in \mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d \times \kappa}(\mathbb{F}^0) \times \mathbb{A}^2_d(\mathbb{F}^0)$ to the following obliquely reflected BSDE: \begin{align} \label{orbsde} Y^i_t &= \xi^i + \int_t^T f^i(s,Y^i_s,Z^i_s) \mathrm{d} s - \int_t^T Z^i_s \mathrm{d} W_s + \int_t^T \mathrm{d} K^i_s, \quad t \in [0,T], \, i \in \mathcal{I},\\ \label{orbsde2} Y_t &\in \mathcal{D}, \quad t \in [0,T],\\ \label{orbsde3} \int_0^T &\left(Y^i_t - \sup_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} Y^j_t - \bar{c}_{i}^{u}\right\}\right) \mathrm{d} K^i_t = 0, \quad i \in \mathcal{I}, \end{align} where $\mathcal{I} := \{1,\dots,d\}$ and $\mathcal{D}$ is the following convex subset of $\mathbb{R}^d$: \begin{align} \mathcal{D} := \left\{ y \in \mathbb{R}^d : y_i \ge \sup_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} y_j - \bar{c}_{i}^{u}\right\}, i \in \mathcal{I} \right\}. \label{domain} \end{align} \item For all $u \in \mathcal{C}$ and $i \in \{1,\dots,d\}$, we have $P^u_{i,i} \neq 1$. \end{enumerate} \end{Assumption} \textcolor{black}{Let us observe that the positive costs assumption implies that $\mathcal{D}$ has a non-empty interior. Except for Section \ref{section unicite}, this is the main setting for this part; recall Remark \ref{re positive cost}. In Section \ref{existence}, the system \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} is studied in detail in a general cost setting. An important step is then to understand when $\mathcal{D}$ has non-empty interior.} \vspace{2mm} \begin{Theorem}\label{th representation result} Assume that Assumptions \ref{hyp sup section 2} and \ref{exist} are satisfied.
\begin{enumerate} \item For all $i \in \{1,\dots,d\}$, $t \in [0,T]$ and $\phi \in \mathscr A^i_t$, we have $Y^i_t \ge \espcond{U^\phi_t - A^\phi_t}{\mathcal{F}^0_t}$. \item We have $Y^i_t = \espcond{U^{\phi^\star}_t - A^{\phi^\star}_t}{\mathcal{F}^0_t}$, where $\phi^\star = (i,(\tau^\star_n)_{n \ge 0}, (\alpha^\star_n)_{n \ge 1})\in \mathscr A^i_t$ is defined in \eqref{def tau star}-\eqref{def alpha star}. \end{enumerate} \end{Theorem} \vspace{2mm} \noindent The proof is given at the end of \textcolor{black}{Section \ref{sous section preuve th representation}}. It will use several lemmata that we introduce below. We first remark that, as an immediate consequence, we obtain the uniqueness of the solution to the BSDE used to characterize the value process of the control problem. \begin{Corollary} \label{co uniqueness RBSDE} Under Assumptions \ref{hyp sup section 2} and \ref{exist}, there exists a unique solution $(Y,Z,K) \in \mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d\times \kappa}(\mathbb{F}^0) \times \mathbb{A}^2_d(\mathbb{F}^0)$ to the obliquely reflected BSDE \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}. \end{Corollary} \textcolor{black}{\begin{Remark} The \emph{classical switching problem} is an example of a \emph{switching problem with controlled randomisation}. Indeed, we just have to consider $\mathcal{C}=\{1,...,d-1\}$, $$P_{i,j}^u= \begin{cases} 1 & \text{ if } j-i=u \text{ mod } d,\\ 0 & \text{ otherwise,} \end{cases} \quad \forall u \in \mathcal{C}, \, 1 \leqslant i,j \leqslant d,$$ and $$c_{i,j}= \begin{cases} \bar{c}_i^{j-i} & \text{ if } j>i,\\ \bar{c}_i^{j-i+d} & \text{ if } j<i,\\ 0 & \text{ if } j=i, \end{cases} \quad \forall \, 1 \leqslant i,j \leqslant d.$$ We observe that, in this specific case, there is no extra randomness introduced at each switching time and so there is no need to consider an enlarged filtration. In this setting, Theorem \ref{th representation result} is already known and Assumption \ref{exist} is fulfilled, see e.g.
\cite{HZ10,HT10}. \end{Remark} } \subsection{Uniqueness of solutions to reflected BSDEs with general costs} \label{section unicite} In this section, we extend the uniqueness result of Corollary \ref{co uniqueness RBSDE}. Namely, we consider the case where $\inf_{1\le i \le d, u \in \mathcal{C}} \bar{c}_i^u = \hat{c}$ can be nonpositive, meaning that only Assumptions \ref{hyp sup section 2}-ii) and \ref{exist} hold here. Assuming that $\mathcal{D}$ has a non-empty interior, we are then able to show uniqueness for \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} in Proposition \ref{prop unicite couts generaux} below. \vspace{2mm} \noindent Fix $y^0$ in the interior of $\mathcal{D}$. It is clear that for all $1\le i \le d$, \begin{align*} y^0_i > \sup_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} y^0_j - \bar{c}_{i}^u \right\}. \end{align*} We set, for all $1\le i \le d$ and $u \in \mathcal{C}$, \begin{align*} \tilde c_{i}^u := y^0_i - \sum_{j=1}^d P^u_{i,j} y^0_j + \bar{c}_{i}^u > 0, \end{align*} so that $\hat{\tilde c} := \inf_{1\le i \le d, u \in \mathcal{C}} \tilde{c}_i^u > 0$ by compactness of $\mathcal{C}$. We also consider the following set \begin{align*} \tilde \mathcal{D} := \left\{ \tilde y \in \mathbb{R}^d : \tilde y_i \ge \sup_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} \tilde y_j - \tilde c_{i}^u \right\}, 1 \le i \le d \right\}. \end{align*} \begin{Lemma} Assume that $\mathcal{D}$ has a non-empty interior. Then, \begin{align*} \tilde \mathcal{D} = \left\{ y - y^0 : y \in \mathcal{D} \right\}. \end{align*} \end{Lemma} \begin{proof} If $y \in \mathcal{D}$, let $\tilde y := y - y^0$. For $1 \le i \le d$ and $u \in \mathcal{C}$, we have \begin{align*} \tilde y_i = y_i - y^0_i &\ge \sum_{j=1}^d P^u_{i,j} y_j - \bar{c}_{i}^u - y^0_i = \sum_{j=1}^d P^u_{i,j} (y_j - y^0_j) - (\bar{c}_{i}^u + y^0_i - \sum_{j=1}^d P^u_{i,j} y^0_j) \\ &= \sum_{j=1}^d P^u_{i,j} \tilde y_j - \tilde c_{i}^u, \end{align*} hence $\tilde y \in \tilde \mathcal{D}$.
\\ Conversely, let $\tilde y \in \tilde \mathcal{D}$ and let $y := \tilde y + y^0$. We can show by the same kind of calculation that $y \in \mathcal{D}$ \hbox{ }\hfill$\Box$ \end{proof} \begin{Proposition} \label{prop unicite couts generaux} Assume that $\mathcal{D}$ has a non-empty interior. Under Assumptions \ref{hyp sup section 2}-ii) and \ref{exist}-ii), there exists at most one solution to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} in $\mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d\times \kappa}(\mathbb{F}^0) \times \mathbb{A}^2_d(\mathbb{F}^0)$. \end{Proposition} \begin{proof} Let us assume that $(Y^1,Z^1,K^1)$ and $(Y^2,Z^2,K^2)$ are two solutions to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}. We set $\tilde Y^1 := Y^1 - y^0$ and $\tilde Y^2 := Y^2 - y^0$. Then one checks easily that $(\tilde Y^1, Z^1, K^1)$ and $(\tilde Y^2, Z^2, K^2)$ are solutions to \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3} with terminal condition $\tilde \xi = \xi - y^0$, driver $\tilde f$ given by \begin{align*} \tilde f^i(t,\tilde y_i,z_i) := f^i(t,\tilde y_i + y^0_i, z_i),\quad 1 \le i \le d,\, t \in [0,T],\, \tilde y \in \mathbb{R}^d,\, z \in \mathbb{R}^{d \times \kappa}, \end{align*} and domain $\tilde \mathcal{D}$. This domain is associated with a randomised switching problem with $\hat{\tilde c} > 0$, hence Corollary \ref{co uniqueness RBSDE} gives that $(\tilde Y^1, Z^1, K^1) = (\tilde Y^2, Z^2, K^2)$, which implies uniqueness \hbox{ }\hfill$\Box$ \end{proof} \textcolor{black}{\subsection{Proof of the representation result} \label{sous section preuve th representation}} \noindent We prove here our main result for this part, namely Theorem \ref{th representation result}. It is divided into several steps. \subsubsection{Preliminary estimates} We first introduce auxiliary processes associated with an admissible strategy and prove some key integrability properties. \vspace{2mm} Let $i \in \{1,\dots,d\}$ and $t \in [0,T]$.
We set, for $\phi \in \mathscr A^i_t$ and $t \le s \le T$, \begin{align} \mathcal{Y}^\phi_s &:= \sum_{k \ge 0} Y^{\zeta_k}_s 1_{[\tau_k, \tau_{k+1})}(s), \\ \mathcal{Z}^\phi_s &:= \sum_{k \ge 0} Z^{\zeta_k}_s 1_{[\tau_k, \tau_{k+1})}(s), \\ \mathcal{K}^\phi_s &:= \sum_{k \ge 0} K^{\zeta_k}_s 1_{[\tau_k, \tau_{k+1})}(s), \\ \mathcal{M}^\phi_s &:= \sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}} \right) \ind{t < \tau_{k+1} \le s}, \\ \label{def A ronde} \mathcal{A}^\phi_s &= \sum_{k \ge 0} \left( Y^{\zeta_k}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}} + \bar{c}_{\zeta_k}^{\alpha_{k+1}} \right) \ind{t < \tau_{k+1} \le s}. \end{align} \begin{Remark}For all $k \ge 0$, since $\alpha_{k+1} \in \mathcal{F}^k_{\tau_{k+1}}$, we have \begin{align} \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}} = \sum_{j=1}^d \probcond{\zeta_{k+1} = j}{\mathcal{F}^k_{\tau_{k+1}}} Y^j_{\tau_{k+1}} = \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k, j} Y^j_{\tau_{k+1}}. \label{calcul esp cond} \end{align} \end{Remark} \begin{Lemma} \label{lem estimes startegies gene} Assume that Assumption \ref{exist} is satisfied. For any admissible strategy $\phi \in \mathscr A^i_t$, $\mathcal{M}^\phi$ is a square integrable martingale with $\mathcal{M}^\phi_t = 0$. Moreover, $\mathcal{A}^\phi$ is increasing and satisfies $\mathcal{A}^\phi_T \in L^2(\mathcal{F}^\infty_T)$. In addition, \begin{align} \label{mart en t} \espcond{\left(\sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}}\right)\ind{\tau_{k+1} \le t}\right)^2}{\mathcal{F}^0_t} < +\infty \quad \text{a.s.} \end{align} \end{Lemma} {\noindent \bf Proof. } Let $\phi \in \mathscr A^i_t$.
Using \eqref{def A ronde} and \eqref{calcul esp cond}, we have, for all $s \in [t,T]$, \begin{align} \mathcal{A}^\phi_s = \sum_{k \ge 0} \left( Y^{\zeta_k}_{\tau_{k+1}} - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k,j} Y^j_{\tau_{k+1}} + \bar{c}_{\zeta_k}^{\alpha_{k+1}} \right)\ind{t < \tau_{k+1} \le s}, \end{align} which is increasing since each summand is nonnegative, as $Y \in \mathcal{D}$.\\ We have, for $t \le s \le T$, \begin{align} \label{debut} \mathcal{Y}^\phi_s - \mathcal{Y}^\phi_t &= \sum_{k \ge 0} \left( Y^{\zeta_k}_{\tau_{k+1} \wedge s} - Y^{\zeta_k}_{\tau_k \wedge s} \right) + \sum_{k \ge 0} \left( Y^{\zeta_{k+1}}_{\tau_{k+1}} - Y^{\zeta_k}_{\tau_{k+1}} \right) \ind{t < \tau_{k+1} \le s}. \end{align} Using \eqref{orbsde}, we get, for all $k \ge 0$, \begin{align*} &\hspace{-1.9cm}Y^{\zeta_k}_{\tau_{k+1} \wedge s} - Y^{\zeta_k}_{\tau_k \wedge s} \\ &\hspace{-1.9cm}= -\int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} f^{\zeta_k}(u,Y^{\zeta_k}_u,Z^{\zeta_k}_u) \mathrm{d} u + \int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} Z^{\zeta_k}_u \mathrm{d} W_u - \int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} \mathrm{d} K^{\zeta_k}_u, \end{align*} where we recall that $\zeta_k$ is $\mathcal{F}^k_{\tau_k}$-measurable. We also have, using \eqref{calcul esp cond}, for all $k \ge 0$, \begin{align*} &Y^{\zeta_{k+1}}_{\tau_{k+1}} - Y^{\zeta_k}_{\tau_{k+1}} \\ &= \left( Y^{\zeta_{k+1}}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}}\right) - \left( Y^{\zeta_k}_{\tau_{k+1}} - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k, j} Y^j_{\tau_{k+1}} + \bar{c}_{\zeta_k}^{\alpha_{k+1}} \right) + \bar{c}_{\zeta_k}^{\alpha_{k+1}}.
\end{align*} Plugging the two previous equalities into \eqref{debut}, we get: \begin{align} \nonumber \mathcal{Y}^\phi_s - \mathcal{Y}^\phi_t = &\sum_{k \ge 0} \left( -\int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} f^{\zeta_k}(u,Y^{\zeta_k}_u,Z^{\zeta_k}_u) \mathrm{d} u + \int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} Z^{\zeta_k}_u \mathrm{d} W_u - \int_{\tau_k \wedge s}^{\tau_{k+1} \wedge s} \mathrm{d} K^{\zeta_k}_u \right)\\ \nonumber &+ \sum_{k \ge 0} \left( Y^{\zeta_{k+1}}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}}\right) \ind{t < \tau_{k+1} \le s} + A^\phi_s - A^\phi_t\\ &- \sum_{k \ge 0} \left( Y^{\zeta_k}_{\tau_{k+1}} - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k, j} Y^j_{\tau_{k+1}} + \bar{c}_{\zeta_k}^{\alpha_{k+1}} \right) \ind{t < \tau_{k+1} \le s}. \end{align} By definition of $\mathcal{Y}^\phi, \mathcal{Z}^\phi, \mathcal{K}^\phi, \mathcal{M}^\phi, \mathcal{A}^\phi$, we obtain, for all $s \in [t,T]$, \begin{align} \nonumber \mathcal{Y}^\phi_s = &\xi^{a_T} + \int_s^T f^{a_u}(u,\mathcal{Y}^\phi_u,\mathcal{Z}^\phi_u) \mathrm{d} u - \int_s^T \mathcal{Z}^\phi_u \mathrm{d} W_u - \int_s^T \mathrm{d} \mathcal{M}^\phi_u- \int_s^T \mathrm{d} A^\phi_u \\ \label{eq apres t} &+ \left[ \left(\mathcal{A}^\phi_T + \mathcal{K}^\phi_T \right) - \left(\mathcal{A}^\phi_s + \mathcal{K}^\phi_s \right)\right]. \end{align} For any $n \ge 1$, we consider the admissible strategy $\phi_n = (\zeta_0, (\tau^n_k)_{k \ge 0}, (\alpha^n_k)_{k \ge 1})$ defined by $\zeta^n_0 = i = \zeta_0, \tau^n_k = \tau_k, \alpha^n_k = \alpha_k$ for $k \le n$, and $\tau^n_k = T+1$ for all $k > n$. 
We set $\mathcal{Y}^n := \mathcal{Y}^{\phi_n}$, $\mathcal{Z}^n := \mathcal{Z}^{\phi_n}$, and so on.\\ By \eqref{eq apres t} applied to the strategy $\phi_n$, we get, recalling that $\mathcal{A}^n_t = 0$, \begin{align} \nonumber \mathcal{A}^n_{\tau_n \wedge T} = &\mathcal{Y}^n_t - \mathcal{Y}^n_{\tau_n \wedge T} - \int_t^{\tau_n \wedge T} f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s) \mathrm{d} s + \int_t^{\tau_n \wedge T} \mathcal{Z}^n_s \mathrm{d} W_s\\ &+ \int_t^{\tau_n \wedge T} \mathrm{d} \mathcal{M}^n_s + \int_t^{\tau_n \wedge T} \mathrm{d} A^n_s - \int_t^{\tau_n \wedge T} \mathrm{d} \mathcal{K}^n_s. \end{align} We obtain, for a constant $\Lambda > 0$, \begin{align} \esp{|\mathcal{A}^n_{\tau_n \wedge T}|^2} \le& \Lambda\left(\esp{|\mathcal{Y}^n_t|^2 + |\mathcal{Y}^n_{\tau_n \wedge T}|^2 + \int_t^{\tau_n \wedge T} |f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)|^2 \mathrm{d} s \right. \right. \\ \nonumber &\left.\left. + \int_t^{\tau_n \wedge T}|\mathcal{Z}^n_s|^2\mathrm{d} s + \int_t^{\tau_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s + (A^\phi_T - A^\phi_t)^2 + (\mathcal{K}^n_T)^2}\right). \end{align} We have \begin{align*} \esp{|\mathcal{Y}^n_r|^2} &\le \sum_{j=1}^d \esp{|Y^j_r|^2} = \esp{|Y_r|^2} \le \esp{\sup_{t \le r \le T} |Y_r|^2} = \Vert Y \Vert_{\mathbb S^2_d(\mathbb{F}^0)}^2,\\ \esp{\int_t^{\tau_n \wedge T}|\mathcal{Z}^n_s|^2\mathrm{d} s} &\leqslant \Vert Z\Vert^2_{\H^2_{d\times \kappa}(\mathbb{F}^0)},\\ \esp{\int_t^{\tau_n \wedge T} |f^{a_s^n}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)|^2\mathrm{d} s} & \le 4L^2T\Vert Y \Vert_{\mathbb S^2_d(\mathbb{F}^0)}^2 + 4L^2\Vert Z\Vert^2_{\H^2_{d\times \kappa}(\mathbb{F}^0)} + 2 \Vert f(\cdot,0,0) \Vert_{\H^2_d(\mathbb{F}^0)}^2 \end{align*} and \begin{align*} &\esp{(\mathcal{K}^n_T)^2} \le \esp{|K_T|^2}.
\end{align*} Thus, by these estimates and the fact that $A^\phi_T - A^\phi_t \in L^2(\mathcal{F}^\infty_T)$ as $\phi$ is admissible, there exists a constant $\Lambda_1 > 0$ such that \begin{align} \label{est A} \esp{|\mathcal{A}^n_{\tau_n \wedge T}|^2} \le \Lambda_1 + \Lambda \esp{\int_t^{\tau_n\wedge T} \mathrm{d}[\mathcal{M}^n]_s}. \end{align} Using \eqref{eq apres t} applied to $\phi^n$ and Itô's formula between $t$ and $\tau_n \wedge T$, since $\mathcal{M}^n$ is a square integrable martingale orthogonal to $W$ and $A^n, \mathcal{A}^n, \mathcal{K}^n$ are nondecreasing and nonnegative, we get \begin{align} &\nonumber \esp{|\mathcal{Y}^n_t|^2 + \int_t^{\tau_n \wedge T} |\mathcal{Z}^n_s|^2 \mathrm{d} s + \int_t^{\tau_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s} \\ &\nonumber = \esp{|\mathcal{Y}^n_{\tau_n \wedge T}|^2 + 2\int_t^{\tau_n \wedge T} \mathcal{Y}^n_s f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)\mathrm{d} s - 2\int_t^{\tau_n \wedge T} \mathcal{Y}^n_s \mathrm{d} A^n_s \right.\\ &\nonumber \hspace{.7cm} \left. + 2\int_t^{\tau_n \wedge T} \mathcal{Y}^n_s \mathrm{d} \mathcal{A}^n_s + 2\int_t^{\tau_n \wedge T} \mathcal{Y}^n_s \mathrm{d} \mathcal{K}^n_s} \\ &\nonumber \le \esp{|\mathcal{Y}^n_{\tau_n \wedge T}|^2} + 2\esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)| \mathrm{d} s} + 2 \esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} A^n_s}\\ &\label{est M} \hspace{.2cm}+ 2 \esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} \mathcal{A}^n_s} + 2 \esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} \mathcal{K}^n_s}. 
\end{align} We have, using Young's inequality, for some $\epsilon > 0$, and \eqref{est A}, \begin{align*} &\esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)| \mathrm{d} s} \le \frac12 \esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s|^2 \mathrm{d} s} + \frac12 \esp{\int_t^{\tau_n \wedge T} |f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)|^2\mathrm{d} s} \\ &\hspace{5.25cm} \le T(\frac12 + 2L^2)\Vert Y \Vert_{\mathbb S^2_d(\mathbb{F}^0)}^2 + 2L^2\Vert Z\Vert^2_{\H^2_{d\times \kappa}(\mathbb{F}^0)} + \Vert f(\cdot,0,0) \Vert_{\H^2_d(\mathbb{F}^0)}^2,\\ &\esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} A^n_s} \le \frac{1}{2} \Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)} + \frac{1}{2}\esp{(A^\phi_T - A^\phi_t)^2}, \\ &\esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} \mathcal{K}^n_s} \le \frac{1}{2} \Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)} + \frac{1}{2}\esp{|K_T|^2}, \\ \end{align*} and \begin{align*} \esp{\int_t^{\tau_n \wedge T} |\mathcal{Y}^n_s| \mathrm{d} \mathcal{A}^n_s} &\le \frac{1}{2\epsilon} \Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)} + \frac{\epsilon}{2}\esp{(\mathcal{A}^n_{\tau_n \wedge T})^2} \\ &\le \frac{1}{2\epsilon} \Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)} + \frac{\epsilon}{2}\left(\Lambda_1 + \Lambda \esp{\int_t^{\tau_n\wedge T} \mathrm{d}[\mathcal{M}^n]_s}\right). \end{align*} Using these estimates together with \eqref{est M} gives, for a constant $C_\epsilon > 0$ independent of $n$, \begin{align} \left(1 - \epsilon \Lambda\right)\esp{\int_t^{\tau_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s} \le C_\epsilon \left(\Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)}+\Vert Z\Vert^2_{\H^2_{d\times \kappa}(\mathbb{F}^0)} + \Vert f(\cdot,0,0) \Vert_{\H^2_d(\mathbb{F}^0)}^2+\esp{|K_T|^2}\right), \end{align} and choosing $\epsilon = \frac{1}{2\Lambda}$ gives that $\esp{\int_t^{\tau_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s}$ is upper bounded independently of $n$.
We also get an upper bound independent of $n$ for $\esp{(\mathcal{A}^n_{\tau_n \wedge T})^2}$ by \eqref{est A}.\\ Since $\int_t^{\tau_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s$ (resp. $|\mathcal{A}^n_{\tau_n \wedge T}|^2$) is nondecreasing to $\int_t^T \mathrm{d} [\mathcal{M}^\phi]_s$ (resp. to $|\mathcal{A}^\phi_T|^2$), we obtain by monotone convergence the first part of Lemma \ref{lem estimes startegies gene}. It is also clear that $\mathcal{M}^\phi$ is a martingale satisfying $\mathcal{M}^\phi_t = 0$.\\ We now prove \eqref{mart en t}. Using that $\espcond{(N^\phi_t)^2}{\mathcal{F}^0_t}$ is almost surely finite as $\phi$ is admissible and $\hat{c}>0$, we compute \begin{align} &\nonumber \espcond{\left(\sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_{\tau_{k+1}} - \espcond{Y^{\zeta_{k+1}}_{\tau_{k+1}}}{\mathcal{F}^k_{\tau_{k+1}}}\right)\ind{\tau_{k+1} \le t}\right)^2}{\mathcal{F}^0_t}\\ &\nonumber \le \espcond{\left(\sum_{k \ge 0} \left|Y^{\zeta_{k+1}}_{\tau_{k+1}} - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k, j} Y^j_t\right|\ind{\tau_{k+1} \le t}\right)^2}{\mathcal{F}^0_t} \\ &\le 4 |Y_t|^2 \espcond{(N^\phi_t)^2}{\mathcal{F}^0_t} < +\infty\quad \text{a.s.} \end{align} \hbox{ }\hfill$\Box$ \subsubsection{An optimal strategy} \textcolor{black}{ We now introduce a strategy, which turns out to be optimal for the control problem. This strategy is the natural extension to our setting of the optimal one for the \emph{classical switching problem}, see e.g. \cite{HT10}.
The first key step is to prove that this strategy is admissible, which is more involved than in the classical case due to the randomisation.} \vspace{2mm} \noindent Let $\phi^\star = (\zeta^\star_0, (\tau^\star_n)_{n \ge 0}, (\alpha^\star_n)_{n \ge 1})$ be defined by $\tau^\star_0 = t$ and $\zeta^\star_0 = i$ and inductively by: \begin{align} \label{def tau star} \tau^\star_{k+1} &= \inf \left\{ \tau^\star_k \le s \le T : Y^{\zeta^\star_k}_s = {\max}_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{\zeta^\star_k,j} Y^j_s - \bar{c}_{\zeta^\star_k}^{u} \right\} \right\} \wedge (T + 1), \\ \label{def alpha star} \alpha^\star_{k+1} &= {\min}\left\{\alpha \in \argmax_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{\zeta^\star_k,j} Y^j_{\tau^\star_{k+1}} - \bar{c}_{\zeta^\star_k}^{u} \right\}\right\}, \end{align} recalling that $\mathcal{C}$ is ordered.\\ In the following lemma, we show that, since $\mathcal{D}$ has non-empty interior, the number of switches (hence the cost) required to leave any point on the boundary of $\mathcal{D}$, following the strategy $\phi^\star$, is square integrable. This result will be used to prove that the cost associated with $\phi^\star$ satisfies $\espcond{\left(A^{\phi^\star}_t\right)^2}{\mathcal{F}_t^0} < +\infty$ almost surely. \begin{Lemma} \label{lem sortie bord} Let Assumption \ref{exist}-ii) hold. For $y \in \mathcal{D}$, we define \begin{align} S(y) = \left\{1 \le i \le d : y_i = \max_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^u_{i,j} y_j - \bar{c}_{i}^{u} \right\} \right\}, \end{align} and $(u_i)_{i \in S(y)}$ the family of elements of $\mathcal{C}$ given by $$u_i = \min \argmax_{u \in \mathcal{C}} \left\{ \sum_{j=1}^d P^{u}_{i,j} y_j - \bar{c}_{i}^{u}\right\}.$$ Consider the homogeneous Markov Chain $X$ on $S(y) \cup \{0\}$ defined by, for $k \ge 0$ and $(i,j) \in S(y)^2$, \begin{align*} \P(X_{k+1} = j | X_k = i) &= P^{u_i}_{i,j}, \\ \P(X_{k+1} = 0 | X_k = i) &= 1 - \sum_{j \in S(y)} P^{u_i}_{i,j},\\ \P(X_{k+1}=0 | X_k=0) &= 1,\\ \P(X_{k+1}=i | X_k=0) &=0.
\end{align*} Then $0$ is accessible from every $i \in S(y)$, meaning that $X$ is an absorbing Markov Chain.\\ Moreover, let $N(y) = \inf \{n \ge 0 : X_n = 0 \}$. Then $N(y) \in L^2(\P^i)$ for all $i \in S(y)$, where $\P^i$ is the probability satisfying $\P^i(X_0 = i) = 1$. \end{Lemma} \begin{proof} Assume that there exists $i \in S(y)$ from which $0$ is not accessible. Then every communicating class accessible from $i$ is included in $S(y)$. In particular, there exists a recurrent class $S' \subset S(y)$. For all $i \in S'$, we have $P^{u_i}_{i,j} = 0$ if $j \not\in S'$ since $S'$ is recurrent. Moreover, since $S' \subset S(y)$, we obtain, for all $i \in S'$, by definition of $S(y)$, \begin{align} \label{egalites y} y_i = \sum_{j \in S'} P^{u_i}_{i,j} y_j - \bar{c}_{i}^{u_i}. \end{align} Since $S'$ is a recurrent class, the matrix $\tilde{P} = (P^{u_i}_{i,j})_{i,j \in S'}$ is stochastic and irreducible.\\ By definition of $\mathcal{D}$, we have $$\mathcal{D} \subset \mathbb{R}^{d - |S'|} \times \left\{z \in \mathbb{R}^{|S'|} | z_i \ge \sum_{j \in S'} P^{u_i}_{i,j} z_j - \bar{c}_{i}^{u_i}, i \in S' \right\} = \mathbb{R}^{d - |S'|} \times \mathcal{D}'.$$ With a slight abuse of notation, we do not renumber coordinates of vectors in $\mathcal{D}'$.\\ Let $i_0 \in S'$ and let us restrict ourselves to the domain $\mathcal{D}'$. According to Lemma \ref{lem-general}, $\mathcal{D}'$ is invariant by translation along the vector $(1,...,1)$ of $\mathbb{R}^{|S'|}$. Moreover, Assumption \ref{ass-uncontrolled-irred} is fulfilled since $\tilde{P}$ is irreducible and the controls $(u_i)_{i \in S(y)}$ are fixed. So, Proposition \ref{pr necessary conditions} gives us that $\mathcal{D}' \cap \{z \in \mathbb{R}^{|S'|} | z_{i_0}=0\}$ is a compact convex polytope. Recalling \eqref{egalites y}, we see that $(y_i - y_{i_0})_{i \in S'}$ is a point of $\mathcal{D}' \cap \{z \in \mathbb{R}^{|S'|} | z_{i_0}=0\}$ that saturates all the inequalities.
So, $(y_i - y_{i_0})_{i \in S'}$ is an extreme point of $\mathcal{D}' \cap \{z \in \mathbb{R}^{|S'|} | z_{i_0}=0\}$ and all extreme points are given by $$\mathcal{E}:= \left\{z \in \mathbb{R}^{|S'|} | z_i = \sum_{j \in S'} P^{u_i}_{i,j} z_j - \bar{c}_{i}^{u_i}, i \in S', z_{i_0}=0 \right\}.$$ Recalling that $\mathcal{D}'$ is compact, $\mathcal{E}$ is a nonempty bounded affine subspace of $\mathbb{R}^{|S'|}$, so it is a singleton. Since $\mathcal{D}' \cap \{z_{i_0}=0\}$ is a compact convex polytope, it is the convex hull of $\mathcal{E}$ and so it is also a singleton. Hence $\mathcal{D}'$ is a line in $\mathbb{R}^{|S'|}$. Moreover, $|S'| \ge 2$ as $P^u_{i,i} \neq 1$ for all $u \in \mathcal{C}$ and $i \in \{1,\dots,d\}$. Thus $\mathcal{D} \subset \mathbb{R}^{d - |S'|} \times \mathcal{D}'$ contradicts the fact that $\mathcal{D}$ has non-empty interior, and the first part of the lemma is proved.\\ Finally, we have $N(y) \in L^2(\P^i)$ for all $i \in S(y)$ thanks to Theorem 3.3.5 in \cite{KS81}. \hbox{ }\hfill$\Box$ \end{proof} \begin{Lemma} \label{lem-integ-adm} Assume that Assumption \ref{exist} is satisfied. The strategy $\phi^\star$ is admissible. \end{Lemma} {\noindent \bf Proof. } For $n \ge 1$, we consider the admissible strategy $\phi_n = (\zeta_0, (\tau^n_k)_{k \ge 0}, (\alpha^n_k)_{k \ge 1})$ defined by $\zeta^n_0 = i = \zeta_0^\star, \tau^n_k = \tau^\star_k, \alpha^n_k = \alpha^\star_k$ for $k \le n$, and $\tau^n_k = T+1$ for all $k > n$. We set $\mathcal{Y}^n_s := \mathcal{Y}^{\phi_n}_s, \mathcal{Z}^n_s := \mathcal{Z}^{\phi_n}_s$ and so on, for all $s \in [t,T]$.\\ By definition of $\tau^\star, \alpha^\star$, recall \eqref{def tau star}-\eqref{def alpha star}, it is clear that $\mathcal{A}^n_{s\wedge \tau_n^\star} = 0$ and that $\int_{\tau^\star_k \wedge s}^{\tau^\star_{k+1} \wedge s} \mathrm{d} K^{\zeta^\star_k}_u = 0$ for all $k < n$ and $s \in [t,T]$.
The identity \eqref{eq apres t} for the admissible strategy $\phi_n$ gives \begin{align*} \mathcal{Y}^n_t = \mathcal{Y}^n_{\tau^\star_n \wedge T} + \int_t^{\tau^\star_n \wedge T} f^{a^n_s}(s,\mathcal{Y}^n_s,\mathcal{Z}^n_s)\mathrm{d} s - \int_t^{\tau^\star_n \wedge T} \mathcal{Z}^n_s \mathrm{d} W_s - \int_t^{\tau^\star_n \wedge T} \mathrm{d} \mathcal{M}^n_s - \int_t^{\tau^\star_n \wedge T} \mathrm{d} A^n_s. \end{align*} Using arguments and estimates similar to those in the previous proof, we get \begin{align} \label{est A opti} \esp{|A^n_{\tau^\star_n \wedge T} - A^n_t|^2} \le \Lambda_1 + \Lambda\esp{\int_t^{\tau^\star_n \wedge T} \mathrm{d}[\mathcal{M}^n]_s}, \end{align} and, for $\epsilon > 0$, \begin{align} \left(1 - \epsilon\Lambda\right)\esp{\int_t^{\tau^\star_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s} \le C_\epsilon \left(\Vert Y \Vert^2_{\mathbb S^2_d(\mathbb{F}^0)}+\Vert Z\Vert^2_{\H^2_{d\times \kappa}(\mathbb{F}^0)} + \Vert f(\cdot,0,0) \Vert_{\H^2_d(\mathbb{F}^0)}^2\right). \end{align} Choosing $\epsilon = \frac{1}{2\Lambda}$ gives that $\esp{\int_t^{\tau^\star_n \wedge T} \mathrm{d} [\mathcal{M}^n]_s}$ and $\esp{|A^n_{\tau^\star_n \wedge T} - A^n_t|^2}$ are upper bounded uniformly in $n$, hence, by monotone convergence, we get that $A^{\phi^\star}_T - A^{\phi^\star}_t \in L^2(\mathcal{F}^\infty_T)$.\\ It remains to prove that $A^{\phi^\star}_t \in L^2(\mathcal{F}^0_t)$. We have $A^{\phi^\star}_t \le \check{c} N^{\phi^\star}_t$, and $\espcond{(N^{\phi^\star}_t)^2}{\mathcal{F}^0_t} < +\infty$ a.s. is immediate from Lemma \ref{lem sortie bord}, since $\espcond{(N^{\phi^\star}_t)^2}{\mathcal{F}^0_t} = \Psi(Y_t)$ with $\Psi(y) = \mathbb E^i\left[(N(y))^2\right], y \in \mathcal{D}$, where $\mathbb E^i$ is the expectation under the probability $\P^i$ defined in Lemma \ref{lem sortie bord}.
\hbox{ }\hfill$\Box$ \subsubsection{Proof of Theorem \ref{th representation result}} We now have all the key ingredients to conclude the proof of Theorem \ref{th representation result}. \noindent 1. Let $\phi \in \mathscr A^i_t$, and consider the identity \eqref{eq apres t}. Since $\mathcal{M}^\phi$ is a square integrable martingale, orthogonal to $W$, and since $\mathcal{A}^\phi_T + \mathcal{K}^\phi_T \in L^2(\mathcal{F}^\infty_T)$ and the process $\mathcal{A}^\phi + \mathcal{K}^\phi$ is nonnegative and nondecreasing, the comparison Theorem \ref{comparison} gives $\mathcal{Y}^\phi_t \ge U^\phi_t$, recall \eqref{switched BSDE}.\\ Now, we have \begin{align} \nonumber \mathcal{Y}^\phi_t &= Y^i_t + \sum_{k \ge 0} \left( Y^{\zeta_{k+1}}_t - Y^{\zeta_k}_t \right) \ind{\tau_{k+1} \le t} \\ \nonumber &= Y^i_t + \sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_t - \espcond{Y^{\zeta_{k+1}}_t}{\mathcal{F}^k_{\tau_{k+1}}} \right) \ind{\tau_{k+1} \le t}\\ &\label{eq en t} \hspace{1cm} - \sum_{k \ge 0} \left(Y^{\zeta_k}_t - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k,j} Y^j_{\tau_{k+1}} + \bar{c}^{\alpha_{k+1}}_{\zeta_k}\right)\ind{\tau_{k+1}\le t} + A^\phi_t. \end{align} Since $U^\phi_t \le \mathcal{Y}^\phi_t$ and $\sum_{k \ge 0} \left(Y^{\zeta_k}_t - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k,j} Y^j_{\tau_{k+1}} + \bar{c}^{\alpha_{k+1}}_{\zeta_k}\right)\ind{\tau_{k+1}\le t} \ge 0$, we get \begin{align} \nonumber U^\phi_t - A^\phi_t &\le Y^i_t + \sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_t - \espcond{Y^{\zeta_{k+1}}_t}{\mathcal{F}^k_{\tau_{k+1}}} \right) \ind{\tau_{k+1} \le t}\\ &\nonumber \hspace{1cm}- \sum_{k \ge 0} \left(Y^{\zeta_k}_t - \sum_{j=1}^d P^{\alpha_{k+1}}_{\zeta_k,j} Y^j_{\tau_{k+1}} + \bar{c}^{\alpha_{k+1}}_{\zeta_k}\right)\ind{\tau_{k+1}\le t} \\ &\le Y^i_t + \sum_{k \ge 0} \left(Y^{\zeta_{k+1}}_t - \espcond{Y^{\zeta_{k+1}}_t}{\mathcal{F}^k_{\tau_{k+1}}} \right) \ind{\tau_{k+1} \le t}.
\end{align} Using \eqref{mart en t}, we can take conditional expectation on both sides with respect to $\mathcal{F}^0_t$ to obtain the result.\\ 2. Lemma \ref{lem-integ-adm} shows that the strategy $\phi^\star$ is admissible. Using \eqref{eq apres t}, since $\mathcal{A}^{\phi^\star} = 0$ and $\int_{\tau^\star_k \wedge T}^{\tau^\star_{k+1} \wedge T} \mathrm{d} K^{\zeta^\star_k}_u = 0$ for all $k \ge 0$, we obtain \begin{align} \mathcal{Y}^{\phi^\star}_s = \xi^{a^\star_T} + \int_s^T f^{a^\star_u}(u,\mathcal{Y}^{\phi^\star}_u,\mathcal{Z}^{\phi^\star}_u)\mathrm{d} u - \int_s^T \mathcal{Z}^{\phi^\star}_u \mathrm{d} W_u - \int_s^T \mathrm{d} \mathcal{M}^{\phi^\star}_u - \int_s^T \mathrm{d} A^{\phi^\star}_u. \end{align} By the uniqueness result of Theorem \ref{ex-uni-bsde}, we get that $\mathcal{Y}^{\phi^\star}_t = U^{\phi^\star}_t$, recall \eqref{switched BSDE}.\\ We also have \begin{align*} \mathcal{Y}^{\phi^\star}_t &= Y^i_t + \sum_{k \ge 0} \left( Y^{\zeta^\star_{k+1}}_t - Y^{\zeta^\star_k}_t \right) \ind{\tau^\star_{k+1} \le t} \\ &= Y^i_t + \mathcal{M}^{\phi^\star}_t + A^{\phi^\star}_t, \end{align*} thus $U^{\phi^\star}_t - A^{\phi^\star}_t = \mathcal{Y}^{\phi^\star}_t - A^{\phi^\star}_t = Y^i_t + \mathcal{M}^{\phi^\star}_t$, and taking conditional expectation gives the result. \hbox{ }\hfill$\Box$ \section{Introduction} In this work, we introduce and study a new class of optimal switching problems in stochastic control theory. \noindent The interest in switching problems comes mainly from their connections to financial and economic problems, like the pricing of \emph{real options} \cite{carmona2010valuation}. In a celebrated article \cite{HJ07}, Hamadène and Jeanblanc study the fair valuation of a company producing electricity. In their work, the company management can choose between two modes of production for their power plant --operating or closed-- and the times of switching from one state to the other, in order to maximise its expected return.
Typically, the company will buy electricity on the market if the power station is not operating. The company receives a profit for delivering electricity in each regime. The main point here is that a fixed cost penalizes the profit upon switching. This \emph{switching problem} has been generalized to more than two modes of production \textcolor{black}{\cite{DHP09}}. Let us now discuss this \emph{switching problem} with $d \ge 2$ modes in more detail. The costs to switch from one state to another are given by a matrix $(c_{i,j})_{1 \le i,j \le d}$. The management optimises the expected company profits by choosing switching strategies which are sequences of stopping times $(\tau_n)_{n \ge 0}$ and modes $(\zeta_n)_{n \ge 0}$. The current state of the strategy is given by $a_t = \sum_{k = 0}^{+\infty} \zeta_k 1_{[\tau_k, \tau_{k+1})}(t), \; t \in [0,T]\;,$ where $T$ is a terminal time. To formalise the problem, we assume that we are working on a complete probability space $(\Omega,\mathcal{A},\P)$ supporting a Brownian Motion $W$. The stopping times are defined with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$ generated by this Brownian motion. Denoting by $f(t,i)$ the instantaneous profit received at time $t$ in mode $i$, the cumulated profit associated with a switching strategy is given by $\int_{0}^Tf(t,a_t) \mathrm{d} t - \sum_{k = 0}^{+\infty} c_{\zeta_k, \zeta_{k+1}} \ind{\tau_{k+1} \le T} $. The management then solves, at the initial time, the following control problem \begin{align}\label{eq de classical switching problem} \mathcal{V}_0 = \sup_{a \in \mathscr{A}} \esp{\int_{0}^Tf(t,a_t) \mathrm{d} t - \sum_{k = 0}^{+\infty} c_{\zeta_k, \zeta_{k+1}} \ind{\tau_{k+1} \le T}}\,, \end{align} where $\mathscr{A}$ is a set of admissible strategies that will be precisely described below (see Section \ref{problem}). We shall refer to problems of the form \eqref{eq de classical switching problem} under the name of \emph{classical switching problems}.
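For intuition only, the structure of \eqref{eq de classical switching problem} can be illustrated on a toy deterministic, discrete-time analogue solved by backward induction; the horizon, profits and switching costs below are illustrative values and are not taken from the model above.

```python
# Toy deterministic, discrete-time analogue of the classical switching
# problem: 2 modes, a horizon of T periods, per-period profits f[t][i]
# and switching costs c[i][j].  All numbers are illustrative.
T = 4
f = [[1.0, 0.0], [0.2, 0.9], [1.5, 0.3], [0.1, 1.2]]   # f[t][i]
c = [[0.0, 0.4], [0.4, 0.0]]                           # c[i][j], c[i][i] = 0

# Backward induction (dynamic programming): V[t][i] is the best total
# profit achievable from period t when the current mode is i; at each
# period the controller may switch from i to j (paying c[i][j]) before
# collecting the profit of the chosen mode.
V = [[0.0, 0.0] for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for i in range(2):
        V[t][i] = max(f[t][j] - c[i][j] + V[t + 1][j] for j in range(2))

value_mode_0 = V[0][0]   # toy analogue of V_0 started in mode 0; 3.5 here
```

With these illustrative numbers, starting in mode $0$ it is optimal to stay in mode $0$ at the first period, and the toy value is $3.5$.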
These problems have received a lot of interest and are now quite well understood \textcolor{black}{\cite{HJ07,DHP09,HT10,CEK11}}. In our work, we introduce a new kind of switching problem, to model more realistic situations, by taking into account uncertainties that are encountered in practice. Coming back to the simple but enlightening example of an electricity producer described in \cite{HJ07}, we introduce some extra randomness in the production process. Namely, when switching to the operating mode, it may happen with a --hopefully-- small probability that the station will have a malfunction. This can be represented by a new mode of ``production'' with a greater switching cost than the business-as-usual one. To capture this phenomenon in our mathematical model, we introduce a randomisation procedure: the management decides the time of switching but the mode is chosen randomly according to some extra noise source. We shall refer to this kind of problem as a \emph{randomised switching problem}. However, we do not limit our study to this framework. Indeed, we allow the agent some control on this randomisation. Namely, the agent can optimally choose a probability distribution $P^u$ on the mode space, given some parameter $u \in \mathcal{C}$ in the control space. The new mode $\zeta_{k+1}$ is then drawn, independently of everything up to now, according to this distribution $P^u$, and a specific switching cost $c^u_{\zeta_k,\zeta_{k+1}}$ is applied. The management strategy is thus now given by the sequence $(\tau_k,u_k)_{k \ge 0}$ of switching times and controls. The maximisation problem is still given by \eqref{eq de classical switching problem}. \textcolor{black}{Let us observe however that $\esp{c^{u_k}_{\zeta_k,\zeta_{k+1}}}=\esp{\sum_{1\leqslant j\leqslant d} P^{u_k}_{\zeta_k,j}c^{u_k}_{\zeta_k,j}}$, thanks to the tower property of conditional expectation.
In particular, we will only work with the mean switching costs $\bar{c}^{u_k}_i:=\sum_{1\leqslant j\leqslant d} P^{u_k}_{i,j}c^{u_k}_{i,j}$ in \eqref{eq de classical switching problem}.} We name this kind of control problem a \emph{switching problem with controlled randomisation}. Despite their apparent modeling power, such control problems have not been considered in the literature before, to the best of our knowledge. In particular, we will show that the \emph{classical} and \emph{randomised switching problems} are just special instances of this more generic problem. The \emph{switching problem with controlled randomisation} is introduced rigorously in Section \ref{problem} below. A key point in our work is to relate the control problem under consideration to a new class of obliquely reflected Backward Stochastic Differential Equations (BSDEs). In the first part, following the approach of \textcolor{black}{\cite{HJ07,DHP09,HT10}}, we completely solve the \emph{switching problem with controlled randomisation} by providing an optimal strategy. The optimal strategy is built by using the solution to a well-chosen obliquely reflected BSDE. Although this approach is not new, the link between the obliquely reflected BSDE and the switching problem is more subtle than in the classical case due to the state uncertainty. In particular, some care must be taken when defining the adaptedness property of the strategy and associated quantities. Indeed, a tailor-made filtration, studied in detail in Appendix A.2, is associated with each admissible strategy. The state and cumulative cost processes are adapted to this filtration, and the associated reward process is defined as the $Y$-component of the solution to some ``switched'' BSDE in this filtration. The classical estimates used to identify an optimal strategy have to be adapted to take into account the extra orthogonal martingale arising when solving this ``switched'' BSDE in a non-Brownian filtration.
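Coming back to the mean switching costs $\bar{c}^{u}_i$ introduced above: the identity behind the reduction, namely that the expected realised cost when the next mode is drawn from the row $P^u_{i,\cdot}$ equals $\sum_j P^u_{i,j}c^u_{i,j}$, can be checked by a quick Monte Carlo experiment (the transition row and the costs below are illustrative toy values):

```python
import random

# Check of the mean-cost reduction: if the next mode zeta is drawn from
# the row P^u_{i,.}, then E[c^u_{i,zeta}] = bar c^u_i = sum_j P^u_{i,j} c^u_{i,j}
# by the tower property.  The row and the costs below are illustrative.
P_row = [0.0, 0.5, 0.3, 0.2]      # P^u_{i,.}: law of the next mode
c_row = [0.0, 1.0, 2.0, 5.0]      # c^u_{i,j}: cost when landing in mode j

bar_c = sum(pj * cj for pj, cj in zip(P_row, c_row))   # mean cost, = 2.1

rng = random.Random(0)
n = 200_000
draws = rng.choices(range(4), weights=P_row, k=n)
empirical = sum(c_row[j] for j in draws) / n           # close to bar_c
```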
\noindent In the second part of our work, we study \textcolor{black}{the auxiliary obliquely reflected BSDE, which is written in the Brownian filtration and represents the optimal value in all the possible starting modes.} Reflected BSDEs were first considered by Gegout-Petit and Pardoux \cite{GPP96}, in a multidimensional setting of normal reflections. In one dimension, they have also been studied in \cite{EKPPQ97} in the so-called simply reflected case, and in \cite{CK96} in the doubly reflected case. The multidimensional RBSDE associated with the \emph{classical switching problem} is reflected in a specific convex domain and involves oblique directions of reflection. Due to the controlled randomisation, the domain in which the $Y$-component of the auxiliary RBSDE is constrained is different from the \emph{classical switching problem} domain, and its shape varies a lot from one model specification to another. The existence of a solution to the obliquely reflected BSDE thus has to be studied carefully. We do so by relying on the article \cite{chassagneux2018obliquely}, which studies, in a generic way, the obliquely reflected BSDE in a fixed convex domain in both the Markovian and non-Markovian settings. The main step for us here is to exhibit an oblique reflection operator with the right properties to apply the results of \cite{chassagneux2018obliquely}. We are able to obtain new existence results for this class of obliquely reflected BSDEs. Because we are primarily interested in solving the control problem, we derive the uniqueness of the obliquely reflected BSDE under the Hu and Tang specification for the driver \cite{HT10}, namely $f^{i}(t,y,z) := f^{i}(t,y^i,z^i)$ for $i \in \set{1,\dots,d}$. But our results can be easily generalized to the specification $f^{i}(t,y,z) := f^{i}(t,y,z^i)$ by using arguments similar to those in \cite{CEK10}. The rest of the paper is organised as follows. In Section \ref{game}, we introduce the \emph{switching problem with controlled randomisation}.
We prove that, if there exists a solution to the associated BSDE with oblique reflections, then its $Y$-component coincides with the value of the switching problem. A verification argument then allows us to deduce uniqueness of the solution of the obliquely reflected BSDE. In Section \ref{existence}, we show that a solution to the obliquely reflected BSDE does indeed exist under some conditions on the parameters of the switching problem and its randomisation. We also prove uniqueness of the solution under some structural condition on the driver $f$. Finally, we gather some technical results in the Appendix. \paragraph{Notations} If $n \ge 1$, we let $\mathcal{B}^n$ be the Borel sigma-algebra on $\mathbb{R}^n$.\\ For any filtered probability space $(\Omega,\mathcal{G},\mathbb{F},\P)$ and constants $T > 0$ and $p \ge 1$, we define the following spaces: \begin{itemize} \item $L^p_n(\mathcal{G})$ is the set of $\mathcal{G}$-measurable random variables $X$ valued in $\mathbb{R}^n$ satisfying $\esp{|X|^p} < +\infty$, \item $\mathcal{P}(\mathbb{F})$ is the predictable sigma-algebra on $\Omega \times [0,T]$, \item $\H^p_n(\mathbb{F})$ is the set of predictable processes $\phi$ valued in $\mathbb{R}^n$ such that \begin{align} \esp{\int_0^T |\phi_t|^p \mathrm{d} t} < +\infty, \end{align} \item $\mathbb S^p_n(\mathbb{F})$ is the set of \textcolor{black}{c\`adl\`ag} processes $\phi$ valued in $\mathbb{R}^n$ such that \begin{align} \esp{\sup_{0 \le t \le T} |\phi_t|^p} < +\infty, \end{align} \item $\mathbb{A}^p_n(\mathbb{F})$ is the set of continuous processes $\phi$ valued in $\mathbb{R}^n$ such that $\phi_T \in L^p_n(\mathcal{F}_T)$ and $\phi^i$ is nondecreasing for all $i = 1, \dots, n$.
\end{itemize} If $n = 1$, we omit the subscript $n$ in the previous notations.\\[.3cm] For $d \ge 1$, we denote by $(e_i)_{i=1}^d$ the canonical basis of $\mathbb{R}^d$ and $S_d(\mathbb{R})$ the set of symmetric matrices of size $d \times d$ with real coefficients.\\[.3cm] If $\mathcal{D}$ is a convex subset of $\mathbb{R}^d$ ($d \ge 1$) and $y \in \mathcal{D}$, we denote by $\mathcal{C}(y)$ the outward normal cone at $y$, defined by \begin{align} \mathcal{C}(y) := \{ v \in \mathbb{R}^d : v^\top (z - y) \le 0 \mbox{ for all } z \in \mathcal{D} \}. \end{align} We also set $\mathfrak n(y) := \mathcal{C}(y) \cap \{v \in \mathbb{R}^d : |v| = 1\}$.\\[.3cm] If $X$ is a matrix of size $n \times m$, $\mathcal{I} \subset \{1,\dots,n\}$ and $\mathcal{J} \subset \{1,\dots,m\}$, we set $X^{(\mathcal{I},\mathcal{J})}$ the matrix of size $(n - |\mathcal{I}|) \times (m - |\mathcal{J}|)$ obtained from $X$ by deleting rows with index $i \in \mathcal{I}$ and columns with index $j \in \mathcal{J}$. If $\mathcal{I} = \{i\}$ we set $X^{(i,\mathcal{J})} := X^{(\mathcal{I},\mathcal{J})}$, and similarly if $\mathcal{J} = \{j\}$.\\ If $v$ is a vector of size $n$ and $1 \le i \le n$, we set $v^{(i)}$ the vector of size $n-1$ obtained from $v$ by deleting coefficient $i$. \newcommand{\ls}[2]{\ensuremath{#1^{(#2)} }} \newcommand{\rs}[2]{\ensuremath{#1+ {\bf 1}_{\set{#1\ge #2}} }} \noindent For $(i,j) \in \set{1,\dots,d}^2$, we define $ \ls{i}{j}:= i - {\bf 1}_{\set{i>j}} \in \set{1,\dots,d-1}\;$, for $d\ge 2$. \\ We denote by $\succcurlyeq$ the component-by-component partial ordering relation on vectors and matrices. \subsection{The non-Markovian framework} \label{sub se non markov} We now switch to the non-Markovian case, which is more challenging. We prove the well-posedness of the RBSDE in the uncontrolled setting for two cases: problems in dimension $3$, and the example of a symmetric transition matrix $P$ in any dimension.
\\ We first recall Proposition 3.1 in \cite{chassagneux2018obliquely}, which gives an existence result for non-Markovian obliquely reflected BSDEs, together with the corresponding assumptions, see Assumption \ref{assumptionNonMarkov} below. Let us remark that the non-Markovian case is more challenging for our approach, as it requires more structural conditions on $H$, which must be symmetric and smooth in this case. \begin{Assumption} \label{assumptionNonMarkov} There exists $L>0$ such that \begin{enumerate}[i)] \item $\xi:=g((X_t)_{t \in [0,T]})$ with $g : C([0,T],\mathbb{R}^q) \rightarrow \bar{\mathcal{D}}$ a bounded uniformly continuous function and $X$ the solution of the SDE \eqref{eq SDE}, where $(b,\sigma) : [0,T] \times \mathbb{R}^q \to \mathbb{R}^q \times \mathbb{R}^{q \times \kappa}$ is a measurable function satisfying, for all $(t,x,y) \in [0,T] \times \mathbb{R}^q \times \mathbb{R}^q$, \begin{align*} |\sigma(t,x)| &\le L,\\ |b(t,x)-b(t,y)|+|\sigma(t,x)-\sigma(t,y)|&\le L|x-y|. \end{align*} \item $f: \Omega \times [0,T] \times \mathbb{R}^d \times \mathbb{R}^{d \times \kappa} \rightarrow \mathbb{R}^d$ is a $\mathcal{P} \otimes \mathcal{B}(\mathbb{R}^d \times \mathbb{R}^{d \times \kappa})$-measurable function such that, for all $(t,y,y',z,z') \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^{d \times \kappa}\times \mathbb{R}^{d \times \kappa}$, \begin{align*} |f(t,y,z) - f(t,y',z')| &\leqslant L \left(|y-y'| + |z-z'|\right)\;. \end{align*} Moreover we have \begin{align*} \esssup_{\omega \in \Omega, t \in [0,T]} \mathbb{E}\left[ \int_t^T |f(s,0,0)|^2 \mathrm{d} s \Big| \mathcal{F}_t\right] \leqslant L. \end{align*} \item $H: \mathbb{R}^d \rightarrow \mathbb{R}^{d\times d}$ is valued in the set of symmetric matrices $Q$ satisfying \begin{align}\label{eq bound matrix H} |Q| \le L\;,\quad L |\upsilon|^2 \ge \upsilon^\top Q \upsilon \ge \frac1L |\upsilon|^2\,,\;\forall \upsilon \in \mathbb{R}^d.
\end{align} $H$ is a $\mathcal{C}^{1}$ function and $H^{-1}$ is a $\mathcal{C}^{2}$ function satisfying \begin{align*} |\partial_y H| + |H^{-1}| + |\partial_y H^{-1}| + |\partial^2_{yy} H^{-1}| \le L. \end{align*} \end{enumerate} \end{Assumption} \noindent From this assumption, we obtain the following general existence result in the non-Markovian setting. \begin{Theorem}[\cite{chassagneux2018obliquely}, Proposition 3.1] \label{thm-existence-nonmarkov} We assume that $\mathcal{D}$ has non-empty interior. Under Assumption \ref{assumptionNonMarkov}, there exists a solution $(Y,Z,\Psi) \in \mathbb S^2_d(\mathbb{F}^0) \times \H^2_{d \times \kappa}(\mathbb{F}^0) \times \H^2_d(\mathbb{F}^0)$ of the following system \begin{align} \hspace{-1cm} \label{orbsde-gen-nm-1} Y_s &= \xi + \int_s^T f(u,Y_u,Z_u) \mathrm{d} u - \int_s^T Z_u \mathrm{d} W_u - \int_s^T H(Y_u) \Psi_u \mathrm{d} u, s \in [0,T], \\ \label{orbsde-gen-nm-2} Y_s &\in \mathcal{D}, \mbox{ } \Psi_s \in \mathcal{C}(Y_s),\mbox{ } 0 \le s \le T,\\ \label{orbsde-gen-nm-3} \int_0^T &1_{\{Y_s \not \in \partial \mathcal{D}\}} |\Psi_s| \mathrm{d} s = 0. \end{align} \end{Theorem} \begin{Remark} \begin{enumerate}[i)] \item \textcolor{black}{The assumption on the terminal condition is slightly less general than the one needed in \cite{chassagneux2018obliquely} (see Assumption \textbf{SB}(i) and Corollary 2.2 in \cite{chassagneux2018obliquely}). One could get a more general result by assuming that $\mathbb{E}[\xi|\mathcal{F}_.]$ is a BMO martingale such that its bracket has a sufficiently large exponential moment.} \item We do not use Theorem 3.1 in \cite{chassagneux2018obliquely} since the domain $\mathcal{D}$ is not smooth enough to apply it (see Assumption \textbf{SB}(iv) in \cite{chassagneux2018obliquely}). Consequently, we have to add the extra assumption that $\xi$ is bounded. \item The uniqueness result for this part is also obtained by invoking Corollary \ref{co uniqueness RBSDE}.
\end{enumerate} \end{Remark} \subsubsection{Existence of solutions in dimension $3$} We focus in this part on the uncontrolled case $\mathscr C = \{0\}$, in dimension $d=3$. Thus, there is a unique transition matrix given by \begin{align} P:=P^0 =\left(\begin{array}{ccc} 0&p&1-p\\ q&0&1-q\\ r&1-r&0\\ \end{array}\right), \end{align} for some $p,q,r \in [0,1]$. \begin{Theorem} \label{th existence non markov dim3} Let us assume that $0<p,q,r<1$ and that $\mathcal{D}$ has non-empty interior. Then there exists a function $H :\mathbb{R}^3\to \mathbb{R}^{3 \times 3}$ that satisfies Assumption \ref{assumptionNonMarkov}(iii) and such that \begin{align} \label{relation-cones-2} H(y) v \in \mathcal{C}_o(y), \quad \forall y \in \mathcal{D},\, v \in \mathcal{C}(y). \end{align} Consequently, if we assume that Assumption \ref{assumptionNonMarkov}(i)-(ii) is fulfilled, then there exists a solution to the Obliquely Reflected BSDE \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}. Moreover, this solution is unique if we also assume Assumption \ref{hyp sup section 2}-ii). \end{Theorem} \begin{proof} Once again we exhibit a convenient $H$. Thanks to Lemma \ref{lem-general}, it is enough to construct $H$ only on the plane $\{(x,y,z) \in \mathbb{R}^3 \,|\, z = 0\}$. We start with $\cD_{\!\circ}$, which is a triangle with three vertices $v^i = (x_i,y_i,z_i), i = 1,2,3$, given by: \begin{align} x_1 &= p y_1 + (1-p) z_1 - c_1, x_2 = p y_2 + (1-p) z_2 - c_1, y_3 = q x_3 + (1-q) z_3 - c_2, \\ y_1 &= q x_1 + (1-q) z_1 - c_2, z_2 = r x_2 + (1-r) y_2 - c_3, z_3 = r x_3 + (1-r) y_3 - c_3,\\ z_1 &=0, z_2=0, z_3=0. \end{align} We first observe that uniqueness follows once again from Proposition \ref{prop unicite couts generaux}. Let us now construct $H$ on each vertex. We first consider the point $v^1$. It is easy to compute its outward normal cone, which is given by \begin{align} \mathcal{C}(v^1) = \mathbb{R}^+ (-1, p, 1-p)^\top + \mathbb{R}^+ (q, -1, 1-q)^\top.
\end{align} The matrix $H(v^1)$ must satisfy \begin{align} \label{identity} H(v^1)\left(\begin{array}{cc} -1 & q \\ p & -1 \\ 1-p & 1-q \\ \end{array}\right) = \left(\begin{array}{cc} -a & 0 \\ 0 & -b \\ 0 & 0 \end{array}\right) \end{align} for some $a,b > 0$. Taking $a=\frac 1 q, b= \frac 1 p$, we consider, for any $\alpha > 0$, \begin{align} H(v^1) &= -\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array}\right) \left(\begin{array}{cc} -q & pq \\ pq & -p \end{array}\right)^{-1} \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right) + \left(\begin{array}{ccc} \alpha & \alpha & \alpha \\ \alpha & \alpha & \alpha \\ \alpha & \alpha & \alpha \end{array}\right) \\ &= \frac{1}{pq(1-pq)}\left(\begin{array}{ccc} \alpha + p & \alpha + pq & \alpha\\ \alpha + pq & \alpha + q & \alpha \\ \alpha & \alpha & \alpha \end{array}\right). \end{align} It is easy to check that this matrix $H(v^1)$ is symmetric and positive definite for any $\alpha > 0$, so we can set $\alpha=1$ in the following. Similarly, we construct $H$ on vertices $v^2, v^3$, \begin{align} H(v^2) &= \frac{1}{r(1-p)(1-r(1-p))}\left(\begin{array}{ccc} 1 + (1-p) & 1 & 1 + r(1-p)\\ 1 & 1 & 1 \\ 1 + r(1-p) & 1 & 1 + r \end{array}\right), \\ \nonumber H(v^3) &= \frac{1}{(1-q)(1-r)(1-(1-q)(1-r))}\left(\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 + (1-q) & 1 + (1-q)(1-r) \\ 1 & 1 + (1-q)(1-r) & 1 + (1-r) \end{array}\right). \end{align} We can extend $H$ to all of $\cD_{\!\circ}$ by convex combination, i.e. linear interpolation. In this way, $H$ stays valued in the set of positive definite symmetric matrices and is smooth enough. We could also define $H$ outside $\mathcal{D} \cap \{(x,y,z) \in \mathbb{R}^3| z = 0\}$ by linear interpolation, but we would lose the boundedness and the positivity of $H$.
Nevertheless, we can find a bounded, convex, $\mathcal{C}^2$ open neighborhood $\mathcal{V}$ of $\mathcal{D}$, small enough, such that $H$ (still defined by linear interpolation) stays valued in the set of positive definite symmetric matrices on $\overline{\mathcal{V}}$. Then we define $H(y)$ for $y \notin \overline{\mathcal{V}}$ by $H(\mathcal{P}(y))$, where $\mathcal{P}$ stands for the projection onto $\overline{\mathcal{V}}$. In this way, $H$ is a bounded function with values in the set of positive definite symmetric matrices, that satisfies \eqref{eq bound matrix H}, \eqref{relation-cones-2} and that is $\mathcal{C}^{0}(\mathbb{R}^2) \cap \mathcal{C}^{2} (\mathbb{R}^2 \setminus \partial \mathcal{V})$ smooth, with $\partial \mathcal{V}$ the boundary of $\mathcal{V}$. Finally, we just have to mollify $H$ in a neighborhood of $\partial \mathcal{V}$, small enough to stay outside $\mathcal{D} \cap \{z = 0\}$. \hbox{ }\hfill$\Box$ \end{proof} \begin{Remark} When $pqr(1-p)(1-q)(1-r)=0$, we can show that it is not possible to construct a function $H$ that satisfies Assumption \ref{assumptionNonMarkov}(iii) and \eqref{relation-cones-2}. \end{Remark} \subsubsection{Existence of solutions for a symmetric multidimensional example} \label{section exemple symetrique} We focus in this part on the uncontrolled case $\mathscr C = \{0\}$, in dimension $d \geqslant 3$, with a unique transition matrix $P$ given by \begin{equation*} P_{i,j} = \frac{1}{d-1} {\bf 1}_{i \neq j}. \end{equation*} \begin{Theorem} \label{th existence non markov example} Assume that $\mathcal{D}$ has non-empty interior. There exists a function $H :\mathbb{R}^d\to \mathbb{R}^{d \times d}$ that satisfies Assumption \ref{assumptionNonMarkov}(iii) and such that \begin{align*} H(y) v \in \mathcal{C}_o(y), \quad \forall y \in \mathcal{D},\, v \in \mathcal{C}(y).
\end{align*} Consequently, if we assume that Assumption \ref{assumptionNonMarkov}(i)-(ii) is fulfilled, then there exists a solution to the Obliquely Reflected BSDE \eqref{orbsde}-\eqref{orbsde2}-\eqref{orbsde3}. Moreover, this solution is unique if we also assume Assumption \ref{hyp sup section 2}-ii). \end{Theorem} \begin{proof} The proof follows exactly the same lines as the proof of Theorem \ref{th existence non markov dim3}. $\cD_{\!\circ}$ is a convex polytope with $d$ vertices $(y^i)_{1 \leqslant i \leqslant d}$ satisfying: for all $1 \leqslant i \leqslant d$, \begin{align*} y^i_{\ell} = \sum_{j \neq \ell }\frac{1}{d-1} y^i_{j} - \bar{c}_\ell, \quad \forall \ell \neq i, \quad \text{and} \quad y^i_d=0. \end{align*} Let us construct $H$ on vertex $y^d$. It is easy to compute its outward normal cone, which is positively generated by vectors $f^1,...,f^{d-1}$ where $$f^k_i = -{\bf 1}_{i=k} +\frac{1}{d-1} {\bf 1}_{i \neq k}.$$ For any $1 \leqslant k \leqslant d-1$, we impose $H(y^d)f^k = -\alpha_k e_k$ with $\alpha_k>0$. We can check that this holds with $\alpha_k=1$ for all $1 \leqslant k \leqslant d-1$, if we set, for any $a>0$, \begin{equation*} H(y^d)=\left(\begin{array}{cccc} a & & a-\frac{d-1}{d} & a-2\frac{d-1}{d}\\ & \ddots & & \vdots\\ a-\frac{d-1}{d} & & a & \vdots\\ a-2\frac{d-1}{d} & \ldots & \ldots & a-2\frac{d-1}{d} \end{array}\right) . \end{equation*} Since $\frac{d-1}{d}$ is an eigenvalue of $H(y^d)$ with multiplicity $d-2$, $\Det(H(y^d))=\left(a-2\frac{d-1}{d}\right) (d-1) \left(\frac{d-1}{d}\right)^{d-2}$ and $\Trace(H(y^d)) =da-2\frac{d-1}{d}$, $H(y^d)$ is a positive definite symmetric matrix as soon as $a>2\frac{d-1}{d}$. Thus we can set $a=2$. By simple permutations of rows and columns in $H(y^d)$, we can easily construct $H(y^k)$ for any $1 \leqslant k \leqslant d$. Then we just have to follow the proof of Theorem \ref{th existence non markov dim3} to extend $H$ from the vertices of $\cD_{\!\circ}$ to the whole space. \hbox{ }\hfill$\Box$ \end{proof}
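As a purely numerical sanity check (not needed for the proofs), one can verify for sample parameter values that the matrices displayed in the last two proofs are symmetric positive definite and map the generators of the relevant normal cones to the prescribed directions $-a\,e_1$, $-b\,e_2$ and $-e_k$; the choices $p=0.3$, $q=0.6$, $d=5$ and $a=2$ below are arbitrary.

```python
# Numerical sanity check of the reflection matrices built in the last two
# proofs; the parameter values p, q, d, a are arbitrary choices for testing.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def is_spd(M, tol=1e-12):
    # Symmetric positive definiteness via a plain Cholesky factorisation.
    n = len(M)
    if any(abs(M[i][j] - M[j][i]) > 1e-12 for i in range(n) for j in range(n)):
        return False
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= tol:
                    return False
                L[i][j] = s ** 0.5
            else:
                L[i][j] = s / L[j][j]
    return True

# Dimension 3: H(v^1) with alpha = 1, for sample values 0 < p, q < 1.
p, q = 0.3, 0.6
pref = 1.0 / (p * q * (1 - p * q))
H1 = [[pref * (1 + p), pref * (1 + p * q), pref],
      [pref * (1 + p * q), pref * (1 + q), pref],
      [pref, pref, pref]]
gen1, gen2 = [-1.0, p, 1 - p], [q, -1.0, 1 - q]   # normal cone generators
img1, img2 = mat_vec(H1, gen1), mat_vec(H1, gen2)  # expect (-1/q,0,0), (0,-1/p,0)

# Symmetric example: H(y^d) for d = 5 and a = 2 (0-based index d-1 <-> mode d).
d, a = 5, 2.0
H = [[a - 2 * (d - 1) / d if d - 1 in (i, j)
      else (a if i == j else a - (d - 1) / d)
      for j in range(d)] for i in range(d)]
F = [[-1.0 if i == k else 1.0 / (d - 1) for i in range(d)]
     for k in range(d - 1)]                        # f^1, ..., f^{d-1}
images = [mat_vec(H, fk) for fk in F]              # expect -e_k for each k
```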
{ "attr-fineweb-edu": 0.831055, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcSk5qsNCPfdFeMA5
\section*{Acknowledgements} The visit of A.S. to France, when this paper was completed, was supported by the Ecole Normale Sup\'{e}rieure (Paris). A.S. also acknowledges financial support by the Russian Foundation for Basic Research, grant 96-02-17591, as well as by the German Science Foundation (DFG) through grant 436 RUS 113/333/3. C.K. acknowledges financial support by the University of Tours during his visit to Tours, and D.P. acknowledges financial support by the DAAD during his visit to Freiburg.
{ "attr-fineweb-edu": 0.468994, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdUo4eIZjyphZuxc0
\section{Summary and Conclusions} \begin{center} {\bf Acknowledgements} \end{center} We are grateful to Ignacio Aracena, Hayk Hakobyan and Will Brooks for useful discussions. We also thank Sergey Gninenko and Alberto Lusiani for useful comments. J.C.H. thanks the IFIC for hospitality during his stay. This work was supported by FONDECYT (Chile) under projects 1100582, 1100287; Centro-Cient\'\i fico-Tecnol\'{o}gico de Valpara\'\i so PBCT ACT-028, by Research Ring ACT119, CONICYT (Chile) and CONICYT/CSIC 2009-136. M.H. acknowledges support from the Spanish MICINN grants FPA2008-00319/FPA, FPA2011-22975, MULTIDARK CSD2009-00064 and 2009CL0036 and by CV grant Prometeo/2009/091 and the EU~Network grant UNILHC PITN-GA-2009-237920.
{ "attr-fineweb-edu": 0.039612, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdM44uzlgqKp8yign
\section*{Conflict of interest}
On behalf of all authors, the corresponding author states that there is no conflict of interest.
\bibliographystyle{unsrt}

\section{Great Britain Transportation Networks Results: Air and Coach Travel}
\label{app:air_and_coach}

\subsection*{Coach Travel Network Analysis}
Figures~\ref{fig:GB_coach_mean_centrality_components_analysis}~and~\ref{fig:GB_coach_network_and_PDs} show results similar to those in Section~\ref{ssec:GB_results}, but for the coach travel network. Again, the diurnal trends appear in $H_1$, while $H_0$ shows a single connected component. This structural information is not captured by the traditional statistics shown in Fig.~\ref{fig:GB_coach_mean_centrality_components_analysis}.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.99\textwidth]{GB_coach_mean_centrality_components_analysis}
    \caption{Connectivity and centrality analysis on the temporal Great Britain coach network.}
    \label{fig:GB_coach_mean_centrality_components_analysis}
\end{figure}

\begin{figure}[H]
    \centering
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_coach} \\
        (a) Full coach travel network.
    \end{minipage}
    \hfill
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_coach_PD0} \\
        (b) Zero-dimensional zigzag persistence.
    \end{minipage}
    \hfill
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_coach_PD1} \\
        (c) One-dimensional zigzag persistence.
    \end{minipage}
    \caption{Zigzag persistence diagrams of the coach transportation network of Great Britain.}
    \label{fig:GB_coach_network_and_PDs}
\end{figure}

\subsection*{Air Travel Network Analysis}
Figures~\ref{fig:GB_air_mean_centrality_components_analysis}~and~\ref{fig:GB_air_network_and_PDs} show results similar to those in Section~\ref{ssec:GB_results}, but for the air travel network.
However, both $H_0$ and $H_1$ show the diurnal trends: the network disappears completely on a daily basis, which results in off-diagonal points in $H_0$ of Fig.~\ref{fig:GB_air_network_and_PDs}.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.99\textwidth]{GB_air_mean_centrality_components_analysis}
    \caption{Connectivity and centrality analysis on the temporal Great Britain air network.}
    \label{fig:GB_air_mean_centrality_components_analysis}
\end{figure}

\begin{figure}[H]
    \centering
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_air} \\
        (a) Full air travel network.
    \end{minipage}
    \hfill
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_air_PD0} \\
        (b) Zero-dimensional zigzag persistence.
    \end{minipage}
    \hfill
    \begin{minipage}{0.33\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/GB_air_PD1} \\
        (c) One-dimensional zigzag persistence.
    \end{minipage}
    \caption{Zigzag persistence diagrams of the air transportation network of Great Britain.}
    \label{fig:GB_air_network_and_PDs}
\end{figure}

\section{Background}
\label{sec:background}

\begin{figure}
    \centering
    \includegraphics[width=0.83\textwidth]{simple_zigzag_example}
    \caption{Example application of zigzag persistence to study the changing topology of a sequence of simplicial complexes. The left shows the sequence of simplicial complexes with intermediate unions and corresponding time stamps; the right shows the resulting zigzag persistence diagrams for dimensions $0$ and $1$ as $H_0$ and $H_1$, respectively.}
    \label{fig:simple_zigzag_example}
\end{figure}

\subsection{Persistent Homology}
Persistent homology, the flagship tool from the field of Topological Data Analysis (TDA), is used to measure the shape of a dataset at multiple dimensions.
For example, it can measure connected components (dimension zero), loops (dimension one), voids (dimension two), and higher-dimensional analogues. Persistent homology measures these shapes using a parameterized filtration to detect when the structures are born (appear) and die (disappear). We give the basic ideas in this section and direct the interested reader to more complete introductions to standard homology \cite{Hatcher,Munkres2} and persistent homology \cite{Dey2021,Oudot2015,Munch2017}. To measure the homology and apply the persistent homology algorithm, the dataset of interest needs to be represented as a simplicial complex $K_\alpha$ at multiple levels of a filtration parameter $\alpha$. A common approach for point cloud data $\chi$ is the Vietoris-Rips (VR) complex $R_\alpha(\chi)$, where $\alpha$ is the filtration parameter. In fact, all that is required is a discrete metric space: a collection of points $\chi$ (whether or not they are points in Euclidean space) with a notion of distance $d:\chi \times \chi \to \R$. In the VR construction, we have a vertex set $\chi$, and a simplex $\sigma \subseteq \chi$ is included in the abstract simplicial complex $R_\alpha(\chi)$ whenever $d(v,w) \leq \alpha$ for all $v,w \in \sigma$. Then by definition, $R_\alpha(\chi) \subseteq R_{\alpha'}(\chi)$ whenever $\alpha \leq \alpha'$, making this an example of a filtration, defined more generally as follows. A filtration $\{K_{\alpha_i}\}_i$ is a parameterized sequence of nested simplicial complexes; i.e.,
\begin{equation}
K_{\alpha_0} \subseteq K_{\alpha_1} \subseteq K_{\alpha_2} \subseteq \ldots \subseteq K_{\alpha_n}.
\label{eq:nested_complexes}
\end{equation}
Under this notation, there are $n+1$ simplicial complexes, and it is often the case that $K_{\alpha_0}$ is either the empty complex or the complex consisting of only the vertex set.
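The VR construction above can be sketched in a few lines; a minimal illustration in Python, where the point set, distance function, and radii below are hypothetical toy choices:

```python
from itertools import combinations

def vietoris_rips(points, dist, alpha, max_dim=2):
    """Return the Vietoris-Rips complex R_alpha as a list of simplices
    (tuples of points): a simplex is included when every pairwise
    distance among its vertices is at most alpha."""
    simplices = [(p,) for p in points]  # vertices are always included
    for k in range(2, max_dim + 2):     # edges, triangles, ...
        for sigma in combinations(points, k):
            if all(dist(v, w) <= alpha for v, w in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

# Toy metric space: four points on a line with Euclidean distance.
pts = [0.0, 1.0, 2.0, 5.0]
d = lambda v, w: abs(v - w)

# Nested complexes: R_1 has edges only between consecutive close points,
# while R_2 additionally fills in the triangle {0, 1, 2}.
r1 = vietoris_rips(pts, d, 1.0)
r2 = vietoris_rips(pts, d, 2.0)
```

Note that the nesting $R_1 \subseteq R_2$ holds by construction, which is exactly the filtration property of Eq.~\eqref{eq:nested_complexes}.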
We can then calculate the homology of dimension $p$ for each complex, $H_p(K_{\alpha_i})$, which is a vector space representing the $p$-dimensional structure of the space, such as loops, voids, etc. Surprisingly, there is further information to be used: the inclusions of the simplicial complexes induce linear maps on the vector spaces, resulting in a construction called a persistence module:
\begin{equation}
H_p(K_{\alpha_0}) \to H_p(K_{\alpha_1}) \to H_p(K_{\alpha_2}) \to \ldots \to H_p(K_{\alpha_n}).
\label{eq:filtration}
\end{equation}
The appearance and disappearance of classes in this object can be tracked, resulting in a representation of the information known as a persistence diagram. For each class which appears at $K_{b_i}$ and disappears at $K_{d_i}$, we draw a point in the plane at $(b_i,d_i)$. Taken together, the collection of points (also called persistence pairs), all of which lie above the diagonal $\Delta= \{ (x,y) \mid x = y\}$, is called a persistence diagram.

\subsection{Zigzag Persistence}

\begin{figure}
    \centering
    \begin{minipage}{0.21\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/case1_zigzag_issue} \\
        (a) Case 1
    \end{minipage}
    \hfill
    \begin{minipage}{0.21\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/case2_zigzag_issue} \\
        (b) Case 2
    \end{minipage}
    \hfill
    \begin{minipage}{0.21\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/case3_zigzag_issue} \\
        (c) Case 3
    \end{minipage}
    \hfill
    \begin{minipage}{0.21\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/case4_zigzag_issue} \\
        (d) Case 4
    \end{minipage}
    \caption{Several examples showing how the zigzag persistence diagram can be affected by the existence of small loops. The first and third, and the second and fourth, examples have the same filtration except for the addition of a single triangle in the latter case. In each case, the generators of homology are shown with the interval decomposition drawn below.
The resulting persistence diagrams show that the third and fourth examples each have a single bar representing the persisting loop. However, in the first two examples the intervals are quite different: the long-lived bar is split into two pieces in the second example.
}
\label{fig:HouseExample}
\end{figure}

A limitation of the standard setup of persistent homology is that it requires each simplicial complex to be contained in the next, as shown in Eq.~\eqref{eq:nested_complexes}, meaning that at each step we are only allowed to add new simplices to the previous complex. However, there are many cases of real-world applications where we have a parameterized sequence of simplicial complexes in which simplices can both enter and exit the complex. This issue can be alleviated through zigzag persistence~\cite{Carlsson2010, Carlsson2009a}, which allows for a zigzagging of the inclusion directions as
\begin{equation}
K_{\alpha_0} \leftrightarrow K_{\alpha_1} \leftrightarrow K_{\alpha_2} \leftrightarrow \ldots \leftrightarrow K_{\alpha_n},
\label{eq:bi_directional_zigzag_complexes}
\end{equation}
where $\leftrightarrow$ denotes one of the two inclusions $\hookrightarrow$ and $\hookleftarrow$. A common special case of this definition is where the left and right inclusions alternate. Constructions of this special case arise by taking a sequence of simplicial complexes and interleaving them with either unions or intersections of the adjacent complexes. Focusing on the case of the union, this results in the zigzag filtration
\begin{equation}
K_{\alpha_0} \hookrightarrow K_{\alpha_0,\alpha_1} \hookleftarrow K_{\alpha_1} \hookrightarrow K_{\alpha_1,\alpha_2} \hookleftarrow K_{\alpha_2} \hookrightarrow \ldots \hookleftarrow K_{\alpha_{n-1}} \hookrightarrow K_{\alpha_{n-1},\alpha_{n}} \hookleftarrow K_{\alpha_n},
\label{eq:zigzag_union_complexes}
\end{equation}
where we have modified the index set for clarity by setting $K_{\alpha_i,\alpha_{i+1}} = K_{\alpha_i} \cup K_{\alpha_{i+1}}$.
The same algebra that underlies standard persistence makes it possible to compute when homology features are born and die in the zigzag setting, although some of the intuition is lost. We can again track this with a persistence diagram consisting of persistence pairs $(b_i, d_i)$. In the case of a class appearing or disappearing at the union complex $K_{\alpha_{i},\alpha_{i+1}}$, we draw the index at the average $(\alpha_i+\alpha_{i+1})/2$. If a topological feature persists through the last simplicial complex, we set its death as the end time of the last window, i.e., index $n+0.5$.

To demonstrate how zigzag persistence tracks the changing topology in a sequence of simplicial complexes, we use the simple example shown in Fig.~\ref{fig:simple_zigzag_example}. The sequence of simplicial complexes is $[K_{0}, K_{1}, K_{2}]$, with unions given by $K_{0,1}$ and $K_{1,2}$. The persistence diagram then tracks where topological features of each dimension $H_p$ (dimensions 0 and 1 for this example) form and disappear. For example, for $H_0$ there are two components in $K_{0}$. At the next simplicial complex, $K_{0,1}$, the two $0$-dimensional features merge, signifying the death of one of them; this is tracked in the persistence diagram as the persistence pair $(0,0.5)$, since $0.5$ is the average of $0$ and $1$. The component that persists remains throughout all of the simplicial complexes. Therefore, we set its death as the last time stamp plus $0.5$ and record the persistence pair as $(0, 2.5)$. On the other hand, there is a single loop (a $1$-dimensional feature), which appears only at $K_{0,1}$. For technical reasons, this is drawn as a point with birth time $0.5$ (the average of $0$ and $1$) and death time $1$, since it dies entering $K_{1}$. It should be noted that zigzag persistence results are sometimes counter-intuitive. For example, consider Fig.~\ref{fig:HouseExample}.
The first two filtrations each appear to have a circular structure which lasts through the duration of the filtration; however, the zigzag persistence diagram sees the first case as a class living throughout, while the second breaks this class into two individual bars. In this case at least, the issue can be mitigated by including a triangle at the top of the house, filling in the short cycle and resulting in both filtrations having only a single point in the persistence diagram representing the loop in the house.

\subsection{Temporal Graphs}
\label{ssec:temporal_graphs}

A temporal graph is a graph structure that incorporates time information on when edges and/or nodes are present in the graph. In this work we only consider temporal information attributed to the edges, and assume the nodes enter as edge-induced subgraphs. Thus, our starting data is a graph where each edge has a collection of time intervals associated to it, indicating when that edge is active. For example, edge $ab$ between nodes $a$ and $b$ in graph $G$ would have the interval collection $I_{ab}$ associated to it. Fitting with the previous section, for a collection of times $t_0\leq t_1 \leq \cdots \leq t_n$ and a choice of window size $w$, we can construct a zigzag filtration of graphs
\begin{equation}
G_{t_0} \hookrightarrow G_{t_0,t_1} \hookleftarrow G_{t_1} \hookrightarrow G_{t_1,t_2} \hookleftarrow G_{t_2} \hookrightarrow \ldots \hookleftarrow G_{t_{n-1}} \hookrightarrow G_{t_{n-1},t_{n}} \hookleftarrow G_{t_n},
\label{eq:zigzag_union_graphs}
\end{equation}
where $G_{t}$ is the graph with all edges present within the window $[t-w/2,t+w/2]$, and $G_{t_i,t_{i+1}}$ is the union of the two adjacent graphs.
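The snapshot graphs $G_t$ and their unions can be built directly from the edge interval collections $I_{ab}$; a minimal sketch in Python, where the edge intervals, window size $w$, and times $t_i$ below are hypothetical:

```python
def snapshot(edge_intervals, t, w):
    """Graph G_t: the edges active at some point of the window
    [t - w/2, t + w/2], returned as a set of edges."""
    lo, hi = t - w / 2, t + w / 2
    return {e for e, intervals in edge_intervals.items()
            if any(a <= hi and b >= lo for a, b in intervals)}

# Hypothetical temporal graph: each edge carries its active intervals I_ab.
I = {
    ("a", "b"): [(0.0, 2.0)],
    ("b", "c"): [(1.5, 4.0)],
    ("c", "a"): [(3.0, 5.0)],
}

w = 1.0
times = [0.5, 1.5, 2.5, 3.5, 4.5]
graphs = [snapshot(I, t, w) for t in times]
# Union graphs G_{t_i, t_{i+1}} interleave the adjacent snapshots.
unions = [g | h for g, h in zip(graphs, graphs[1:])]
```

With these choices, the 3-cycle on $\{a,b,c\}$ is present only in the middle snapshot, so a one-dimensional feature would appear and disappear in the corresponding zigzag diagram.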
We can also form a similar filtration by replacing $G_t$ with its Vietoris-Rips (VR) complex for a fixed filtration value $r$,
\begin{equation}
R_r(G_{t_0} ) \hookrightarrow R_r(G_{t_0,t_1} ) \hookleftarrow R_r(G_{t_1} ) \hookrightarrow R_r(G_{t_1,t_2} ) \hookleftarrow R_r(G_{t_2} ) \hookrightarrow \ldots \hookleftarrow R_r(G_{t_{n-1}} ) \hookrightarrow R_r(G_{t_{n-1},t_{n}} ) \hookleftarrow R_r(G_{t_n}).
\label{eq:zigzag_union_Rips_graphs}
\end{equation}
Specifically, we construct a simplicial complex on the set of vertices, where the distance between nodes is defined using the unweighted shortest path distance in the original graph $G_t$. In this setting, if $r=0$, then each complex is only the set of vertices. If $r=1$, the clique complex of the original graph is returned. For higher values of $r$, we essentially fill in loops in increasing order of size.

\section{Conclusion}
\label{sec:conclusion}

In this work we studied how to effectively apply zigzag persistence to temporal graphs. Zigzag persistence provides a unique perspective when studying the evolving structure of a temporal graph by tracking not only the standard lower-dimensional features (e.g., connected components), but also higher-dimensional features (e.g., loops and voids) through a sequence of simplicial complexes. This allows for an understanding of the evolving topology of a temporal graph. We showed the benefits of using zigzag persistence over standard graph theory statistics on two examples: the Great Britain transportation network and temporal ordinal partition networks. Our results showed that the zero- and one-dimensional zigzag persistence provided insights into the structure of the temporal graph that were not easily gleaned from standard centrality and connectivity statistics.
We believe zigzag persistence could also be leveraged to study other temporal graphs, including flock behavior models (e.g., the Vicsek model) and the emergence of coordinated motion, power grid dynamics and the topological characteristics of cascade failures, and supplier-manufacturer networks through the effects of trade failures on production and consumption. Future work would involve an analysis of how to decide an optimal window size and overlap, a method to incorporate edge weight and directionality, and temporal information on both the nodes and edges. It would also be worth investigating higher-dimensional features (e.g., voids through $H_2$).

\section{Introduction}
\label{sec:intro}

A dynamical system is any system whose future state is dependent on the current state. Many real-world dynamical systems are simulated using approximate models, which occupy a wide range of applications from population models~\cite{May1987} to aeronautical dynamics~\cite{Rohith2020}. A common characteristic of a dynamical system is that its behavior can change with a system parameter, a phenomenon known as a bifurcation. For example, the airflow over an aircraft's wing can change from laminar to turbulent with a change in the angle of attack, resulting in stall~\cite{Semionov2017, Zhang2020}; the load on a power-grid system can push a line to fail, causing a cascade failure and black-out~\cite{Schaefer2018, Schaefer2019}; or a change in atmospheric chemistry can cause severe weather~\cite{Hochman2019}. Capturing the characteristic changes of a dynamical system through a measurement signal is critical in detecting, predicting, and possibly preventing some of these catastrophic failures. Outside of detecting imminent events, many other important characteristics of a system are studied through the lens of dynamics.
These include population models transitioning from stable values to chaotic oscillations based on environmental factors~\cite{May1987}, economic bubbles showing dynamics with bimodal distribution bifurcations~\cite{Dmitriev2017}, and chaotic fluctuations in power-grid dynamics through period-doubling~\cite{Ji1996}. A common avenue to study these systems is through time series or signals, which are widely utilized to analyze real-world dynamical system bifurcations. For example, a change in measured biophysical signals can indicate upcoming health problems~\cite{Rangayyan2015, Nayak2018, Guo2020}, or a change in the vibratory signals of machines or structures can be the harbinger of imminent failure~\cite{Sohn2001,Avci2021}. Time series typically originate from real-life system measurements, and they provide only finitely sampled information from which the underlying dynamics must be gleaned. Time series analysis methods have many useful foundational tools for bifurcation and dynamic state analysis, such as frequency spectrum analysis~\cite{Borkowski2015, Detroux2015} and autocorrelation~\cite{Quail2015}. While time series analysis tools can be leveraged for bifurcation detection and dynamic state analysis, many complex and high-dimensional dynamical systems and their corresponding measurements can more naturally be represented as complex networks. For example, there are dynamical systems models for social networks~\cite{Skyrms2000}, disease spread dynamics~\cite{Husein2019}, manufacturer-supplier networks~\cite{Xu2019b}, power grid networks~\cite{Schaefer2018}, and transportation networks~\cite{DavidBoyce2012}. These dynamical system models demonstrate how dynamical networks can represent highly complex real-world systems. Many important characteristics of a dynamical network can be extracted from the data.
These include the source and rate of disease spread as well as predictions of future infections~\cite{Enright2018}, weak branches in supply chains and possible failures~\cite{Nuss2016, Xu2019b}, changes in infrastructure to avoid cascade failures in power grids~\cite{Soltan2014, Schaefer2018}, optimal routing in transportation networks (finding a minimum-time route between two locations)~\cite{Bast2015}, fault analysis (detecting transportation disruptions) in transportation networks~\cite{Sugishita2020}, and flow pattern analysis (visualization)~\cite{Hackl2019}. If the studied system only has a single one-dimensional signal output, we can still represent the dynamical system as a temporal network. This is done using complex network representations of windowed sections of the time series to visualize how their graph structure changes. Examples of graph formation techniques from time series include $k$-nearest-neighbor networks~\cite{Khor2016}, epsilon-recurrence networks~\cite{Jacob2019}, coarse-grained state-space networks~\cite{Small2009a,Small2013a}, and ordinal partition networks~\cite{McCullough2015}. We use temporal data to construct the evolving networks in this work. As such, we will refer to them as temporal graphs~\cite{Holme2012}. While the temporal graphs are typically driven by a complex dynamical system, the underlying equations of motion are unknown. Temporal graph data is commonly represented using attributed information on the edges for the time intervals or instances in which the edges are active~\cite{Chen2016a,Huang2015}. Using this attributed information, we can represent the graph in several ways, including edge labeling, snapshots, and static graph representations~\cite{Wang2019}. In this work we will first represent the data in the standard attributed (labeled) temporal graph structure and then use the graph snapshots approach.
The graph snapshots represent the temporal graph as a sequence of static graphs $G_0, G_1, \ldots, G_n$. The standard network analysis tools for studying temporal networks often include measures such as centrality or flow measures~\cite{Borgatti2005}, temporal clustering for event detection~\cite{Crawford2018, You2021, Moriano2019}, and connectedness~\cite{Kempe2002}. However, these tools do not account for higher-dimensional structures (e.g., loops as one-dimensional structures). It may be important to account for evolving higher-dimensional structures in temporal networks to better understand the changing structure. For example, a highly connected network may have only one connected component and no clear clusters, yet the number of loops within the network may still reveal a structural change. To study the evolving higher-dimensional structures within a temporal network, we will leverage zigzag persistence~\cite{Carlsson2010} from the field of Topological Data Analysis (TDA)~\cite{Carlsson2009}. The mainstay of TDA is persistent homology, colloquially referred to as persistence, which encodes structure by analyzing the changing shape of a simplicial complex (a higher-dimensional generalization of a network) over a filtration (a nested sequence of subcomplexes). Perhaps the most common pipeline for applying persistence constructs a Vietoris-Rips (VR) complex at multiple distance filtration values over point cloud data embedded in $\R^n$. The VR complex is generated for incremented filtration values such that the result is a nested sequence of simplicial complexes. The homology of the point cloud data can then be measured for each simplicial complex. The homologies that persist over a broader range of filtration values are often considered significant. We provide a more detailed introduction in Section~\ref{sec:background}.
It is also possible to apply this framework to graph data using geodesic distance measures such as the shortest path, as done in~\cite{Myers2019}. Unfortunately, the standard persistent homology pipeline does not account for temporal information when edges can be both added and removed. To account for temporal changes, we use zigzag persistence~\cite{Carlsson2010}. Instead of measuring the shape of static point cloud data through a distance filtration, zigzag persistence measures how long a structure persists through a sequence of changing simplicial complexes. Zigzag persistence tracks the formation and disappearance of these homologies in a persistence diagram, a two-dimensional summary consisting of persistence pairs or points $(b,d)$, where $b$ is the birth or formation time of a homology class and $d$ is its death or disappearance. For example, in~\cite{Tymochko2020} the Hopf bifurcation is detected through zigzag persistence (i.e., a loop is detected through the one-dimensional zigzag persistence diagram). In this work, we will use zigzag persistence to visualize these changes, as it incorporates the two essential characteristics of temporal graphs we are looking to study: namely, the temporal and structural information stored within a temporal network. Zigzag persistence compactly represents both temporal and structural changes using a persistence diagram. The resulting persistence diagram is commonly analyzed through a qualitative analysis, standard one-dimensional statistical summaries, or machine learning via vectorization of the persistence diagram.

\subsection{Organization}

We start in Section~\ref{sec:background} with an introductory background on persistent homology and zigzag persistence. Next, in Section~\ref{sec:method}, we overview the general pipeline for applying zigzag persistence to temporal graph data. We couple this explanation with a demonstrative toy example.
In Section~\ref{sec:results} we introduce the two systems we will study. The first is a dataset collected over a week of the Great Britain transportation system~\cite{Gallotti2015}. The second is an intermittent Lorenz system simulation, where we generate a temporal network through complex networks of sliding windows. We then apply zigzag persistence to our two examples and show how the resulting persistence diagrams help visualize the underlying dynamics in comparison to standard temporal network analysis techniques.

\section{Method}
\label{sec:method}

\begin{figure}
    \centering
    \includegraphics[width=0.97\textwidth]{pipeline}
    \caption{Pipeline for applying zigzag persistence to temporal networks. Begin with an unweighted and undirected \textbf{temporal graph} where each edge is active at a point or interval of time. Create \textbf{graph snapshots} using a sliding window interval over the time domain. Create a sequence of \textbf{simplicial complexes} from the graphs and apply \textbf{zigzag persistence} to the union zigzag simplicial complexes.}
    \label{fig:pipeline}
\end{figure}

To apply zigzag persistence for studying temporal graphs, we use the pipeline shown in Fig.~\ref{fig:pipeline}. This process takes a temporal graph to a sequence of snapshot graphs, which can then be represented as a zigzag of simplicial complexes. This procedure then allows for the application of zigzag persistence. We begin with a dataset as a temporal graph where each edge has intervals or instances in time representing when the edge is active. The temporal graph datasets, and the details of how we represent them as such, are described in Section~\ref{ssec:temporal_graphs}. Graph snapshots $G_{I}$ are generated from the temporal information using a sliding-window technique. For a fixed collection of $t_i$ and a fixed width $w$, we construct the graphs $G_{(t_i-w,t_i+w)}$ and their unions $G_{(t_i-w,t_{i+1}+w)}$.
The sliding windows can be ensured to overlap by choosing window times such that $t_{i+1} - t_i < 2w$. When clear from context, we instead index the graphs by $i$, writing $G_i = G_{(t_i-w,t_i+w)}$ and $G_{i,i+1} = G_{(t_i-w,t_{i+1}+w)}$.

To apply zigzag persistence to study the changing topology of the temporal graph, we need to represent each graph snapshot $G_i$ (and each graph union $G_{i,i+1}$) as a simplicial complex. To do this we use the Vietoris-Rips (VR) complex with distance filtration value $r$, where the distance between nodes is defined using the unweighted shortest path distance. Working with a graph structure simplifies the appropriate choice of $r$. For most applications $r=1$ should be used, as this returns the clique complex of the original graph; if the graph is clique-free, the original graph is returned. However, when working with the temporal ordinal partition network or other time-series-based complex networks, it may be necessary to choose a higher value of $r$, as seen in the example of Fig.~\ref{fig:HouseExample}. With an appropriately chosen $r$, we use the resulting sequence of simplicial complexes to calculate zigzag persistence and study the changing structure of the temporal graph. Note that birth and death events are labeled by the midpoint of the interval associated to the graph: $G_i = G_{(t_i-w,t_i+w)}$ is labeled $t_i$, while $G_{i,i+1} = G_{(t_i-w,t_{i+1}+w)}$ is labeled $(t_i+t_{i+1})/2$.

\subsection{Example}
\label{ssec:example}

\begin{figure}
    \centering
    \begin{minipage}{0.64\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/example_graph_sequence_and_intervals}
    \end{minipage}
    \hfill
    \begin{minipage}{0.31\textwidth}
        \centering
        \includegraphics[width=\textwidth]{Figures/example_PD_dimension}
    \end{minipage}
    \begin{minipage}{0.64\textwidth}
        \centering
        (a) Edge intervals with sliding windows highlighted (alternating blue-red) with corresponding graphs and union graphs above.
    \end{minipage}
    \hfill
    \begin{minipage}{0.31\textwidth}
        \centering
        (b) Zigzag persistence diagram for both $H_0$ and $H_1$.
    \end{minipage}
    \caption{Example of zigzag persistence applied to a simple temporal graph with temporal information stored for each edge as intervals. The birth and death of a feature is encoded as the midpoint of the interval where the event happened.}
    \label{fig:example_zigzag_of_basic_temporal_graph}
\end{figure}

In the following simple example, shown in Fig.~\ref{fig:example_zigzag_of_basic_temporal_graph}, we describe the method in more detail and show how to interpret the resulting zigzag persistence diagram. In this example, we measure the changing structure of a simple 5-node cycle graph as edges are added and removed based on the temporal information. The bottom left of Fig.~\ref{fig:example_zigzag_of_basic_temporal_graph}a shows the times associated to each edge, and the resulting graphs are shown above. In this notation, the subscripts correspond to the interval of time used to build the graphs. We have $w= 0.5$, and the centers of the intervals are $t_0 = 0.5$, $t_1 = 1.5$, $\cdots$, $t_9 = 9.5$. This results in intervals of the form $(i,i+1)$ for the graphs and $(i,i+2)$ for the union graphs. We replace each graph with its VR complex using $r=1$; because this graph is clique-free, the resulting simplicial complex is the same as the original graph in each case. At the end of the sliding windows, we consider the graph empty and set the death of any remaining homology features as the end time of the last window (i.e., $t=10$ for this example). The resulting zigzag persistence diagram is shown in Fig.~\ref{fig:example_zigzag_of_basic_temporal_graph}b. This persistence diagram shows the zero-dimensional and one-dimensional features as $H_0$ and $H_1$, respectively.
There are two zero-dimensional features, at persistence pairs $(1, 3)$ and $(0.5, 10)$. The latter represents the connected component which appears in the first graph and lasts throughout the filtration. The other component is the piece consisting of vertices $3$ and $4$, which appears at the union graph $G_{(0,2)}$ and is thus associated to a birth at the midpoint of this interval, occurring at $1$. The one-dimensional feature (the cycle represented in $H_1$) is present twice in the persistence diagram. This is due to it first appearing in $G_{(3,5)}$ and then disappearing at $G_{(4,5)}$, with corresponding persistence pair $(4, 4.5)$. The cycle then reappears at $G_{(5,7)}$ and disappears at $G_{(8,9)}$, resulting in a persistence pair at $(6,8.5)$. This example demonstrates how zigzag persistence captures the changing structure of temporal graphs at multiple dimensions. It is also possible to capture higher-dimensional structures using higher-dimensional homology, although we do not investigate this direction in this work.

\section{Results}
\label{sec:results}

To demonstrate the functionality of zigzag persistence for analyzing temporal graphs, we will use two examples. The first is an analysis of transportation data from Great Britain in Section~\ref{ssec:GB_results}. The second is a simulated dataset from the Lorenz system that exhibits intermittency, a dynamical system phenomenon where the dynamic state transitions irregularly between periodic and chaotic behavior; results are given in Section~\ref{ssec:TOPN_results}. We study this signal using the temporal ordinal partition network framework, as described in Section~\ref{ssec:temp_OPN}. We compare our results for both examples to some standard network tools for analyzing temporal networks. Namely, we will compare two connectivity statistics and three centrality statistics. The two connectivity statistics analyze the Connected Components (CCs).
The first CC statistic is the number of connected components $N_{cc}$, which provides a simple shape summary of the graph snapshots by counting the number of disconnected subgraphs. The second is the average size (number of nodes) of the connected components $\bar{S}_{cc}$, which provides insight into how significant the components are for each graph snapshot. The second type of statistic is based on centrality measures. The three centrality measures we use are the average standardized degree centrality $\bar{C}_d$, betweenness centrality $\bar{C}_b$, and closeness centrality $\bar{C}_c$. The degree centrality measures the number of edges connected to a node, the betweenness centrality measures how often a node is used in all possible shortest paths, and the closeness centrality measures how close the node is to all other nodes through shortest paths. For details on the implementation of each centrality measure, we direct the reader to~\cite{Landherr2010}. \subsection{Great Britain Temporal Transportation Network} \label{ssec:GB_results} We use temporal networks created from the Great Britain (GB) temporal transportation dataset~\cite{Gallotti2015} for the air, rail, and coach transportation methods. This data provides the destinations (nodes) and connections (edges) for public transportation in GB. Additionally, the departure and arrival times are provided to allow for a temporal analysis. The temporal data was collected over one week. In this section, we focus on the rail data; similar calculations for air and coach are included in Appendix \ref{app:air_and_coach}. The rail graph constructed without the inclusion of temporal information is shown at left in Fig.~\ref{fig:GB_rail_network_and_PDs}, where the destinations are overlaid on a GB map outline.
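The two connectivity statistics described above reduce to a standard connected-components pass over each graph snapshot. A minimal plain-Python sketch on a toy snapshot (in practice a library such as NetworkX provides the same via \texttt{connected\_components}); note that isolated nodes are ignored here since the graph is given as an edge list:

```python
# Sketch of the connectivity statistics N_cc and average component size
# for one graph snapshot, given as an edge list (toy data, illustrative).
from collections import defaultdict

def connected_components(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

edges = [("A", "B"), ("B", "C"), ("D", "E")]
comps = connected_components(edges)
n_cc = len(comps)                         # number of components
s_cc = sum(len(c) for c in comps) / n_cc  # average component size
print(n_cc, s_cc)
```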
\begin{figure}% \centering % \begin{minipage}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{Figures/GB_rail} \end{minipage} \hfill \begin{minipage}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{Figures/GB_rail_PD0} \end{minipage} \hfill \begin{minipage}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{Figures/GB_rail_PD1} % \end{minipage} \begin{minipage}{0.31\textwidth} \centering (a) Full Rail Travel Network. \end{minipage} \hfill \begin{minipage}{0.31\textwidth} \centering (b) Zero-dimensional zigzag persistence. \end{minipage} \hfill \begin{minipage}{0.31\textwidth} \centering (c) One-dimensional zigzag persistence. \end{minipage} % \caption{Zigzag persistence diagrams of the rail transportation network of Great Britain.} \label{fig:GB_rail_network_and_PDs} \end{figure} We set the sliding windows to have width $w = 20$ minutes. We chose this window size based on the average rail wait time of 7 minutes and 7 seconds, with a standard deviation of 7 minutes and 24 seconds, from a collected sample~\cite{vanHagen2011}. Additionally, we used an overlap of 50\% between adjacent windows. To create simplicial complexes from the graph snapshots, we used a distance filtration of $r=1$ to return the graph-equivalent simplicial complex. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{GB_rail_mean_centrality_components_analysis} \caption{Connectivity and centrality analysis on the temporal Great Britain rail network.} \label{fig:GB_rail_mean_centrality_components_analysis} \end{figure} As a first approach to understanding the dynamics of this graph, we apply the standard centrality and connectivity statistics, as shown in Fig.~\ref{fig:GB_rail_mean_centrality_components_analysis}. The standard tools show us the general daily trends: all the connectivity and centrality measures increase during peak travel hours. However, further information is difficult to glean from these statistics.
On the other hand, the zigzag persistence in Fig.~\ref{fig:GB_rail_network_and_PDs} provides much more information. It shows the daily trends as well, but also conveys through $H_0$ that a main connected component persists for the first six days and a second component for the last day. This provides an understanding of the long-term connectivity of this component that was not present in the standard statistics. Further, $H_1$ captures travel loops that form during peak travel times and recur daily. \subsection{Temporal Ordinal Partition Network for Intermittency Detection} \label{ssec:TOPN_results} In this section we describe a method to apply zigzag persistence to study time series data using complex networks; namely, the ordinal partition network. Ordinal partition networks~\cite{McCullough2015} are a graph representation of time series data based on permutation transitions. As such, they encapsulate the state space structure of the underlying system. While we only use the ordinal partition network in this work, there are several other transitional complex networks built from time-series data to which a similar analysis could be applied. These include $k$-nearest-neighbors~\cite{Khor2016}, epsilon-recurrence~\cite{Jacob2019}, and coarse-grained state-space networks~\cite{Small2009a,Small2013a}. \subsubsection{Temporal Ordinal Partition Network} \label{ssec:temp_OPN} The ordinal partition network is formed by first generating a sequence of permutations from the time series $x = [x_0, x_1, x_2, \ldots, x_n]$ using a permutation dimension $m$ and delay $\tau$. These are the same permutations used in the information statistic permutation entropy~\cite{Bandt2002}. In this work we choose $m=6$ and select $\tau$ using the multi-scale permutation entropy method as suggested in~\cite{Myers2020c}.
We generate a sequence of permutations by assigning each vector embedding \begin{equation} v_i = [x_i, x_{i+\tau}, x_{i+2\tau}, \ldots, x_{i+(m-1)\tau}] = [v_i(0), v_i(1), \ldots, v_i(m-1)] \end{equation} to one of the $m!$ possible permutations. We assign the permutation $\pi_j = [\pi_j(0), \ldots, \pi_j(m-1)] \in \mathbb{Z}^{m}$ based on the ordinal pattern of $v_i$ such that $ v_i(\pi_j(0)) \leq v_i(\pi_j(1)) \leq \ldots \leq v_i(\pi_j(m-1)) $. Using the chronologically ordered sequence of permutations $\Pi$, we can form a graph $G(V,E)$ by setting the vertices $V$ as all permutations used and adding edges for transitions from $\pi_a$ to $\pi_b$ with $a, b \in \{1, \ldots, m!\}$ and $a \neq b$ (no self-loops). We do not add weight or directionality to the graph in this formation. However, we do include the index $i$ and the corresponding time at which each edge is activated as temporal data for the graph. \begin{figure}% \centering \includegraphics[width=0.82\textwidth]{simple_example_OPN_formation} \caption{Example formation of an ordinal partition network for a sinusoidal signal $x(t) = \sin(t)$ with permutations of dimension $m=3$. The resulting permutation sequence $\Pi$ shows how these permutations transition, which is captured by the ordinal partition network on the right. % } \label{fig:simple_example_OPN_formation} \end{figure} In Fig.~\ref{fig:simple_example_OPN_formation} we demonstrate the ordinal partition network formation procedure for a simple example signal $x(t) = \sin(t)$, where $t \in [0,15]$ is sampled at a rate of $f_s = 25$ Hz. Using the method of multi-scale permutation entropy, we selected $\tau = 52$ and set $m=3$ for demonstrative purposes. The permutation corresponding to each delay embedding vector $v_i$ is shown in the sequence $\Pi$ in the middle subfigure of Fig.~\ref{fig:simple_example_OPN_formation}.
This sequence captures the periodic nature of the signal, which is then summarized by the ordinal partition network on the right, with each permutation as a vertex and edges added for permutation transitions in $\Pi$. For more details and examples of the ordinal partition network, we direct the reader to~\cite{McCullough2015, Myers2019}. \subsubsection{Ordinal Partition Network Results} Using a sliding window technique, we can represent ordinal partition networks as temporal graphs. However, instead of each edge having a set of intervals associated with it as in the example in Section~\ref{sec:method}, each edge has time instances at which it is active. These instances are based on when a transition between unique permutations occurs. For example, the transition from $\pi_i$ to $\pi_{i+1}$ occurring at time $t_i$ would be active at that moment in time $t_i$. If a sliding window overlaps with an edge's activation instance, we add that edge to the window's graph. We will show how this procedure can be used to detect chaotic and periodic windows in a signal exhibiting intermittency (i.e., irregular transitions between periodic and chaotic dynamics). The signal is the $x$ solution of the simulated Lorenz system defined as \begin{equation} \frac{dx}{dt} = \sigma (y-x), \: \frac{dy}{dt} = x (\rho -z) - y, \: \frac{dz}{dt} = xy - \beta z \label{eq:lorenz} \end{equation} with system parameters $\sigma = 10.0$, $\beta = 8.0 / 3.0$, and $\rho = 166.15$ for a response with type 1 intermittency~\cite{Pomeau1980}. We simulated the system at a sampling rate of 100 Hz for 500 seconds, with only the last 70 seconds used. We set the sliding windows for generating graph snapshots to have a width of $w = 10\tau$ and $80\%$ overlap between adjacent windows. For each window, we generated ordinal partition networks using $\tau = 30$ and $m=6$, where $\tau$ was selected using the multi-scale permutation entropy method~\cite{Myers2020c}.
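The permutation assignment and time-stamped edge construction of Section~\ref{ssec:temp_OPN} can be sketched in a few lines of Python; the parameters below are deliberately small for illustration (the experiments above use $m=6$ and a data-driven $\tau$):

```python
# Sketch of temporal ordinal partition network construction: map each delay
# embedding vector to its ordinal pattern, then record permutation
# transitions as edges stamped with their activation index.
import math

def ordinal_pattern(v):
    """Permutation pi such that v[pi[0]] <= v[pi[1]] <= ... (an argsort)."""
    return tuple(sorted(range(len(v)), key=lambda i: v[i]))

def opn_edges(x, m, tau):
    perms = [ordinal_pattern(x[i:i + (m - 1) * tau + 1:tau])
             for i in range(len(x) - (m - 1) * tau)]
    edges = {}  # undirected edge -> list of activation indices
    for i in range(len(perms) - 1):
        a, b = perms[i], perms[i + 1]
        if a != b:                      # no self-loops
            e = tuple(sorted((a, b)))
            edges.setdefault(e, []).append(i)
    return perms, edges

# Periodic toy signal: sine with a 25-sample period, m = 3, tau = 4.
x = [math.sin(t / 25 * 2 * math.pi) for t in range(200)]
perms, edges = opn_edges(x, m=3, tau=4)
print(len(set(perms)), len(edges))
```

The stored activation indices are what the sliding windows consult when deciding whether an edge belongs to a given graph snapshot.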
The resulting signal $x(t)$ from simulating the Lorenz system in Eq.~\eqref{eq:lorenz} is shown in Fig.~\ref{fig:intermittency_mean_centrality_components_analysis}, with example ordinal partition networks generated at a chaotic window boxed in red and a periodic window in blue. These sample graph snapshots show that the structure of the ordinal partition network changes significantly depending on the dynamic state of the window's time-series segment. Further, we expect to see little change in the graph structure while the window slides along a periodic region of $x(t)$, compared to significant changes when it overlaps with a chaotic region. \begin{figure}% \centering \includegraphics[width=0.99\textwidth]{intermittency_mean_centrality_components_analysis} \caption{Connectivity and centrality analysis on the temporal ordinal partition network, with chaotic regions of $x(t)$ highlighted in red.} \label{fig:intermittency_mean_centrality_components_analysis} \end{figure} We show the standard connectivity and centrality measures of the graph snapshots in Fig.~\ref{fig:intermittency_mean_centrality_components_analysis}. The number of components $N_{cc}$ is constant due to the nature of the ordinal partition network, where the sequence of permutation transitions creates a chain of connected edges. As such, there is no structural information in the number of components. However, the size of the components does increase during the chaotic windows. This increase is due to a chaotic signal generally visiting more unique permutations, and thus nodes, than a periodic one. Of the centrality statistics, only the average closeness centrality shows an apparent increase during chaotic regions. This increase is most likely due to the chaotic regions producing a more highly connected graph, as demonstrated in the chaotic window and corresponding network of Fig.~\ref{fig:intermittency_example_PD1}.
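For reference, the intermittent response of Eq.~\eqref{eq:lorenz} can be reproduced with a simple fixed-step RK4 integrator. The sketch below uses the stated parameters at a 100 Hz step, but a shorter duration and an arbitrary initial condition, both illustrative choices rather than the exact setup used above:

```python
# Minimal RK4 integration of the Lorenz system with the intermittency
# parameters sigma = 10, beta = 8/3, rho = 166.15. Duration and the initial
# condition are illustrative; the paper simulates 500 s and keeps 70 s.
def lorenz(state, sigma=10.0, beta=8.0 / 3.0, rho=166.15):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4(f, state, h, steps):
    out = [state]
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        out.append(state)
    return out

traj = rk4(lorenz, (1.0, 1.0, 1.0), h=0.01, steps=2000)  # 100 Hz, 20 s
xs = [s[0] for s in traj]  # the x solution fed to the OPN pipeline
```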
While these statistics do provide some insight into the changing dynamics, they do not show how the higher-dimensional structure of the graph evolves through the sliding windows and graph snapshots. In comparison, the $H_1$ in Fig.~\ref{fig:intermittency_example_PD1} shows a loop structure that persists between the chaotic windows, representative of the periodic dynamics. Further, $H_1$ shows that the chaotic windows characteristically have many low-lifetime persistence pairs. This is in line with the results in~\cite{Myers2019}, which showed that ordinal partition networks from chaotic signals tend to have persistence diagrams with many features in $H_1$ in comparison to their periodic counterparts. Zigzag persistence thus provides insights for analyzing temporal graphs that are not accessible through the standard statistics. \begin{figure}% \centering \includegraphics[width=0.99\textwidth]{TOPN_intermittency_PD} \caption{One-dimensional zigzag persistence of the temporal ordinal partition network from the $x$ solution of the intermittent Lorenz system described in Eq.~\eqref{eq:lorenz}. Chaotic (red) and periodic (blue) regions are highlighted in the signal, with corresponding regions of the persistence diagram shown.} \label{fig:intermittency_example_PD1} \end{figure}
\section{Introduction} \label{sec:introduction} The {\it action spotting} task as proposed by Giancola et al.~\cite{giancola2018soccernet} aims to detect a single characteristic time instant for each action in a video. In the current paper, we tackle action spotting on SoccerNet-v2, which is currently the largest soccer dataset of its kind with respect to several important metrics~\cite{deliege2021soccernet}. One significant shortcoming of previous soccer action spotting approaches~\cite{tomei2021rms,cioppa2020context,cioppa2021camera,giancola2021temporally,zhou2021feature,vanderplaetse2020improved,vats2020event} is their imprecise temporal localization. While temporal localization errors may be acceptable when finding only the main events within soccer matches (which tend to have longer durations), there are a variety of applications where they become unacceptable. Examples include detecting scoring events in faster-paced sports, such as basketball and volleyball, as well as detecting frequent short events within soccer itself, such as {\it ball out of play} and {\it throw-in} (the most frequent actions in SoccerNet-v2), or {\it passes} and {\it challenges}, which are not currently in SoccerNet-v2, but are highly relevant for sports analytics. Our solution, illustrated in Fig.~\ref{fig:methods}, makes use of a dense set of detection anchors. We define an anchor for every pair formed by a time instant (usually taken every 0.5 or 1.0 seconds) and action class, thus adopting a multi-label formulation. For each anchor, we predict both a detection confidence and a fine-grained temporal displacement. This approach leads to a new state-of-the-art on SoccerNet-v2. Experiments show large improvements in temporal precision, with substantial benefits from the temporal displacements. Our approach is inspired by work in object detection.
In particular, Lin et al.~\cite{lin2017focal} demonstrated the advantages of using a dense set of detection anchors, with their single-stage RetinaNet detector surpassing the accuracy of slower contemporary two-stage counterparts. One important difference here is that the output space for action spotting is inherently lower-dimensional than that of object detection, given that each action can be completely defined by its time and class. This allows us to use a very dense set of action spotting anchors, at a relatively much lower computational cost. \begin{figure*} \begin{subfigure}{.7\linewidth} \includegraphics[width=12.0cm]{diagramv5b.pdf} \caption{Architecture overview} \label{fig:diagram} \end{subfigure}% \hfill \begin{subfigure}{0.3\linewidth} \centering \includegraphics[width=5.3cm]{targets_flipped_annotated_d.pdf} \caption{Targets around a ground-truth action} \label{fig:targets} \end{subfigure} \caption{\subref{fig:diagram} Architecture overview, where Convolution$(w,n)$ denotes a convolution with kernel size $w$ and $n$ filters. \subref{fig:targets}~Example of target confidences $c_{*,k}$ and temporal displacements $d_{*,k}$ for a single class $k$, around a ground-truth action of the same class, at one anchor per second. Note temporal displacement targets are undefined outside of the radius $r_d$ of the ground-truth action.} \label{fig:methods} \end{figure*} For the trunk of our models, we experiment with a one-dimensional version of a u-net~\cite{ronneberger2015u} as well as a Transformer encoder~\cite{vaswani2017attention}. Both architectures incorporate large temporal contexts important for action spotting, while also preserving the smaller-scale features required for precise localization. We show that, while both architectures can achieve good results, the u-net has a better trade-off of time and accuracy. The SoccerNet-v2 dataset is of moderate size, containing around 110K action spotting labels. 
At the same time, deep networks generally require large amounts of data or pretraining strategies to work well in practice. We show that Sharpness-Aware Minimization (SAM)~\cite{foret2021sharpnessaware} and mixup data augmentation~\cite{zhang2018mixup} are able to improve results significantly on the dataset, thus mitigating the lack of larger scale data. \section{Related Work} \label{sec:related} Since the release of the SoccerNet datasets~\cite{giancola2018soccernet,deliege2021soccernet}, several action spotting methods have been proposed~\cite{tomei2021rms,cioppa2020context,cioppa2021camera,giancola2021temporally,zhou2021feature,vanderplaetse2020improved,vats2020event}. Related to our approach, RMS-Net~\cite{tomei2021rms} and CALF~\cite{cioppa2020context} also use temporal regression, but with very different formulations. RMS-Net~\cite{tomei2021rms} predicts at most one action per video chunk, and makes use of a max-pooling operation over the whole temporal dimension. It is thus not well suited for predicting multiple nearby actions. This differs from the CALF model~\cite{cioppa2020context}, which produces a set of possible action predictions per video chunk, each of which may correspond to any time instant within the chunk, and belong to any class. The model is thus faced with a challenging problem to learn: simultaneously assigning all of its predictions to time instants and classes such that, in the aggregate, they cover all existing actions within the chunk. Our dense anchor approach sidesteps this challenge, by having each output anchor being preassigned to a time instant and class. This naturally allows for predicting multiple actions per video chunk while using large chunk sizes that provide ample context for prediction. Our regressed temporal displacements are then used to further finely localize each action in time. Zhou et al.~\cite{zhou2021feature} presented experiments using a Transformer Encoder (TE) on SoccerNet-v2. 
Unlike their work, our approach uses the encoder output at every token to generate predictions for our dense set of anchors. In addition, we experiment with a one-dimensional version of a u-net, and show that it has a better trade-off of time and accuracy relative to the TE. \section{Methods} \label{sec:methods} Following previous works~\cite{deliege2021soccernet,tomei2021rms,cioppa2020context,giancola2021temporally,zhou2021feature}, we adopt a two-phase approach to action spotting, consisting of feature extraction followed by action prediction. This significantly decreases the computational burden during training and also allows us to perform meaningful comparisons across methods. We note, however, that training end-to-end~\cite{tomei2021rms} and fine-tuning feature extraction backbones~\cite{zhou2021feature} have both been shown to further improve results and represent promising directions for future work. Our two-phase architecture is illustrated in Fig.~\ref{fig:diagram}. In the first phase, a video chunk is decoded into frames, from which a sequence of $T$ feature vectors of dimension $P$ is extracted, composing a $T \times P$ feature matrix. In the second phase, this matrix is used to produce the action predictions. This starts with a single two-layer MLP applied independently to each input feature vector, resulting in a lower-dimensional output, which then gets fed into the model's {\it trunk}. As described in Section~\ref{sec:trunk}, the trunk combines information across all temporal locations while maintaining the sequence size $T$. As described in Section~\ref{sec:anchors}, the trunk's output is used to create predictions for the dense set of $T \times K$ anchors, with $K$ the number of classes. When training, our loss is applied directly to all anchor predictions, while at test-time, post-processing is used to consolidate them into a set of action detections.
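The per-timestep MLP at the start of the second phase applies the same two-layer network independently to each of the $T$ feature vectors. A toy sketch in plain Python (the paper's layer widths are 256 and 64; the sizes and random weights below are illustrative):

```python
# Sketch of the per-timestep two-layer MLP: each of the T feature vectors
# is mapped through the same weights, so time steps do not interact here.
import random

def dense(x, w, b):
    """Fully connected layer; w is a list of output-neuron weight rows."""
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(w, b)]

def mlp_per_step(features, w1, b1, w2, b2):
    """Apply the same two-layer MLP (ReLU in between) at every time step."""
    relu = lambda v: [max(0.0, u) for u in v]
    return [dense(relu(dense(f, w1, b1)), w2, b2) for f in features]

random.seed(0)
T, P, H, O = 6, 16, 8, 4  # toy sizes; the paper uses H = 256, O = 64
rand_mat = lambda n_out, n_in: [[random.gauss(0, 0.1) for _ in range(n_in)]
                                for _ in range(n_out)]
feats = [[random.gauss(0, 1) for _ in range(P)] for _ in range(T)]
w1, b1 = rand_mat(H, P), [0.0] * H
w2, b2 = rand_mat(O, H), [0.0] * O
out = mlp_per_step(feats, w1, b1, w2, b2)
print(len(out), len(out[0]))  # prints: 6 4
```

Because the same weights act on every time step, permuting the input sequence simply permutes the outputs; temporal mixing happens only in the trunk.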
\subsection{Dense detection anchors} \label{sec:anchors} We use a dense set of detection anchors, inspired by dense single-stage object detectors~\cite{lin2017focal}. The output from our model's trunk is attached to two heads, predicting respectively confidences $\hat{\mathbf{C}} = (\hat{c}_{t,k})$ and temporal displacements $\hat{\mathbf{D}} = (\hat{d}_{t,k})$, where $t = 1, 2, \dots, T$ indexes the $T$ temporal locations of a given video chunk, and $k = 1, 2, \dots, K$ indexes the $K$ classes. $\hat{\mathbf{C}}$ and $\hat{\mathbf{D}}$ are computed from the trunk outputs via their respective convolution operations, each using a temporal window of size 3 and having $K$ output channels. We define a confidence loss $L_c$ and a temporal displacement loss $L_d$, training a separate model for each rather than optimizing them jointly (see Section~\ref{sec:training}). The losses are computed with respect to {\it targets} (desired outputs), which are derived from the $N$ ground-truth actions contained within the given video chunk, which we denote $G=\{(t_i,k_i)\}_{i=1}^N$. These targets, illustrated in Fig.~\ref{fig:targets}, are described below. The confidence loss $L_c$ for a video chunk is computed with respect to target confidences $\mathbf{C} = (c_{t,k})$, defined to be 1 within an $r_c$ seconds radius of a ground-truth action and 0 elsewhere, i.e. $c_{t,k} = I\left(\exists (s,k) \in G \colon \lvert s - t \rvert \leq r_c f\right)$, where $I$ is the indicator function and $f$ is the temporal feature rate (the number of feature vectors extracted per second). The confidence loss is defined as $L_c(\hat{\mathbf{C}}, \mathbf{C}) = \sum_{k=1}^K{\sum_{t = 1}^T{\text{CE}(\hat{c}_{t,k}, c_{t,k})}}$, where CE denotes the standard cross-entropy loss. We found that $r_c$ on the order of a few seconds gave the best results. 
This entails a loss in temporal precision, as the model learns to output high confidences within the whole radius of when an action actually happened, motivating the use of the temporal displacement outputs $\hat{\mathbf{D}}$. As we show in experiments, incorporating the displacements results in a large improvement to temporal precision. The temporal displacement loss $L_d$ is only applied within an $r_d$ seconds radius of ground-truth actions, given that predicted displacements will only be relevant when paired with high confidences. Thus, for each class $k$, we first define its temporal support set $S(k) = \{t =1,2,\dots,T \mid \exists(s,k) \in G \colon \lvert s - t \rvert \leq r_d f\}$. We then define the loss $L_d(\hat{\mathbf{D}}, \mathbf{D}) = \sum_{k=1}^K \sum_{t \in S(k)}{L_h(\hat{d}_{t,k}, d_{t,k})}$, where $L_h$ denotes the Huber regression loss and the targets $\mathbf{D} = (d_{t,k})$ are defined so that each $d_{t,k}$ is the signed difference between $t$ and the temporal index of its nearest ground-truth action of class $k$ in $G$. At test-time, to consolidate the predictions from $\hat{\mathbf{C}}$ and $\hat{\mathbf{D}}$, we apply two post-processing steps. The first displaces each confidence $\hat{c}_{t,k}$ by its corresponding displacement $\hat{d}_{t,k}$, keeping the maximum confidence when two or more are displaced into the same temporal location. The second step applies non-maximum suppression (NMS)~\cite{giancola2018soccernet,giancola2021temporally} to the displaced confidences. Since we adopt a multi-label formulation, we apply NMS separately for each class. To demonstrate the improvement from incorporating the temporal displacements, we later present an ablation where they are ignored, which is done simply by skipping the first post-processing step above. Note we do not apply any post-processing during training, instead defining the losses directly on the raw model predictions. 
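To make the target construction of Fig.~\ref{fig:targets} concrete, the following Python sketch builds dense confidence and displacement targets from a list of ground-truth actions (toy sizes; the sign convention here stores the offset from the anchor toward the action, so that $t + d_{t,k}$ recovers the action index):

```python
# Sketch of the dense anchor targets: c_{t,k} is 1 within r_c seconds of a
# ground-truth action of class k; d_{t,k} is the signed offset to the
# nearest such action, defined only within r_d seconds (None elsewhere).
def make_targets(actions, T, K, f, r_c=3.0, r_d=6.0):
    C = [[0.0] * K for _ in range(T)]
    D = [[None] * K for _ in range(T)]  # None where L_d is not applied
    for t in range(T):
        for k in range(K):
            same = [s for s, kk in actions if kk == k]
            if not same:
                continue
            nearest = min(same, key=lambda s: abs(s - t))
            if abs(nearest - t) <= r_c * f:
                C[t][k] = 1.0
            if abs(nearest - t) <= r_d * f:
                D[t][k] = nearest - t  # offset from anchor toward action
    return C, D

# One action of class 0 at temporal index 10; f = 1 feature per second.
C, D = make_targets([(10, 0)], T=24, K=2, f=1.0)
print([t for t in range(24) if C[t][0] == 1.0])
print(D[6][0], D[10][0], D[14][0])
```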
\subsection{Trunk architectures} \label{sec:trunk} We experiment with two different trunk architectures. The first is a 1-D version of a u-net~\cite{ronneberger2015u}. The u-net consists of a contracting path that captures global context, followed by a symmetric expanding path, whose features are combined with those from the contracting path to enable precise localization. We replace the u-net's standard 2-D convolution blocks with \mbox{1-D} versions of ResNet-V2 bottleneck blocks~\cite{he2016identity}, which gave improved results while stabilizing training. The second trunk architecture we experiment with is a Transformer encoder (TE)~\cite{vaswani2017attention}, whose attention mechanism allows each token in a sequence to attend to all other tokens, thus incorporating global context while still preserving important local features. Relative to convolutional networks such as the u-net, Transformers have less inductive bias, often requiring pretraining on large datasets, or strong data augmentation and regularization~\cite{chen2021vision,devlin-etal-2019-bert,dosovitskiy2021vit}. Here, we achieve good results with the TE by training with Sharpness-Aware Minimization (SAM)~\cite{foret2021sharpnessaware} and mixup~\cite{zhang2018mixup}, as described in Section~\ref{sec:training}. \subsection{Training} \label{sec:training} We train our models from scratch using the Adam optimizer with Sharpness-Aware Minimization (SAM)~\cite{foret2021sharpnessaware}, mixup data augmentation~\cite{zhang2018mixup}, and decoupled weight decay~\cite{loshchilov2017decoupled}. SAM seeks wide minima of the loss function, which has been shown to improve generalization~\cite{foret2021sharpnessaware}, in particular for small datasets and models with low inductive bias~\cite{chen2021vision}. We do not apply batch normalization when training the u-net, finding that its skip connections were sufficient to stabilize training. 
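Mixup, referenced above, blends pairs of training examples and their targets with a coefficient drawn from a Beta distribution~\cite{zhang2018mixup}; a minimal sketch for a pair of feature chunks and their confidence targets:

```python
# Minimal sketch of mixup for the confidence model: convexly combine a pair
# of inputs and their targets with lam ~ Beta(alpha, alpha).
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Toy 3-dimensional features with 2-class confidence targets.
x1, y1 = [1.0, 0.0, 2.0], [1.0, 0.0]
x2, y2 = [0.0, 4.0, 2.0], [0.0, 1.0]
x, y, lam = mixup_pair(x1, y1, x2, y2)
```

In practice the same blending is applied batch-wise to sampled video chunks and their dense confidence targets.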
We found it convenient to train temporal displacement regression separately from confidence prediction, resulting in a two-step approach. This provides similar results to joint training, while simplifying experimental design. We first train a model that produces only confidences, by optimizing the confidence loss $L_c$ and making use of mixup data augmentation~\cite{zhang2018mixup}. We then train a second model that produces only temporal displacements, by optimizing $L_d$, but {\it without} applying mixup. Due to the temporal displacement loss only being defined within small windows around ground-truth actions, we found it difficult to effectively apply mixup when using it. \section{Experiments} \label{sec:experiments} \subsection{Experimental Settings} \label{sec:settings} We present results on two sets of features. The first consists of ResNet-152 features extracted at $f = 2$ fps~\cite{deliege2021soccernet}. We experiment with the PCA version, here denoted ResNet+PCA. The second set comes from a series of models fine-tuned on SoccerNet-v2~\cite{zhou2021feature}, which we denote as Combination, whose features are extracted at $f = 1$ fps. Our two-layer MLP has layers with respectively 256 and 64 output channels, generating a $T \times 64$ matrix irrespective of the input feature set size. We experimentally chose a chunk size of 112s and radii for the confidence and displacement targets of $r_c = 3$s and $r_d = 6$s. We use an NMS suppression window of 20s, following previous works~\cite{giancola2021temporally,zhou2021feature}. Training and inference speeds were measured using wall times on a cloud instance with a V100 vGPU, 48 vCPUs at 2.30GHz, and 256GiB RAM. At each contracting step of the u-net, we halve the temporal dimension while doubling the channels. Expansion follows a symmetric design. 
We contract and expand 5 times when using ResNet+PCA ($T=224$), and 4 times when using Combination features ($T=112$), so in both cases the smallest temporal dimension becomes $224 / 2^5 = 112 / 2^4 = 7$. We experiment with two sizes for the TE: Small and Base. Small has 4 layers, embedding size 128, and 4 attention heads, while Base has 12 layers, embedding size 256, and 8 attention heads. Due to GPU memory limitations, we use a batch size of 64 for Base, while Small uses our default of 256. For each model, we choose a set of hyper-parameters on the validation set. To decrease costs, we optimize each hyper-parameter in turn, in the following order: the learning rate; SAM's $\rho$ (when applicable); the weight decay; and the mixup $\alpha$ (when applicable). We use a batch size of 256 and train for 1,000 epochs, where each epoch consists of 8,192 uniformly sampled video chunks. We apply a linear decay to the learning rate and weight decay, so that the final decayed values (at epoch 1,000) are 1/100th of the respective initial values. We train each model five times and report average results. We report results on SoccerNet's average-mAP metric, which uses tolerances $\delta=5, 10, \dots, 60$s, as well as the recent {\it tight} average-mAP metric, which uses $\delta=1, 2, 3, 4, 5$s~\cite{spottingchallenge}. The tolerance $\delta$ defines the time difference allowed between a detection and a ground-truth action such that the detection may still be considered a true positive. Thus, smaller tolerances enforce a higher temporal precision. Note that $\delta$ is unrelated to the radii $r_c$ and $r_d$, the latter being used only during training.
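The test-time post-processing of Section~\ref{sec:anchors} (displacing confidences, then per-class NMS) can be sketched for a single class as follows; the toy confidences, integer displacements, and the sign convention (shifted index $t + \hat{d}$) are illustrative, and the suppression window here is in anchor steps rather than the 20 s used in our experiments:

```python
# Sketch of test-time post-processing for one class: shift each confidence
# by its predicted displacement (keeping the max when several land on the
# same anchor), then greedy 1-D non-maximum suppression.
def displace(conf, disp):
    out = [0.0] * len(conf)
    for t, (c, d) in enumerate(zip(conf, disp)):
        s = min(max(int(round(t + d)), 0), len(conf) - 1)
        out[s] = max(out[s], c)
    return out

def nms_1d(conf, window):
    """Greedy NMS: repeatedly pick the max and zero out its neighborhood."""
    conf, picks = conf[:], []
    while max(conf) > 0.0:
        t = max(range(len(conf)), key=conf.__getitem__)
        picks.append((t, conf[t]))
        for u in range(max(0, t - window), min(len(conf), t + window + 1)):
            conf[u] = 0.0
    return sorted(picks)

conf = [0.1, 0.7, 0.9, 0.6, 0.1, 0.0, 0.2, 0.8, 0.3, 0.0]
disp = [2, 1, 0, -1, -2, 0, 1, 0, -1, 0]
dets = nms_1d(displace(conf, disp), window=2)
print(dets)
```

Note how the displacements funnel the spread-out confidences around each action onto a single anchor before NMS selects it, which is what recovers the temporal precision lost by the wide confidence radius $r_c$.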
We see a large improvement when applying SAM with the ResNet+PCA features, but a very small one when it is applied with the Combination features, which were already fine-tuned on the same dataset. Mixup gives small improvements across both feature types. DTEB+SAM+mixup achieves average-mAP similar to that of DU+SAM+mixup, but with a much longer training time and lower throughput. Recent techniques have reduced the computational demands of Transformers~\cite{tay2020long}, while pretraining is well-known to improve their results~\cite{chen2021vision,devlin-etal-2019-bert,dosovitskiy2021vit}, though we have not currently explored those directions. \begin{table} \scriptsize \begin{center} \begin{tabular}{c c c c c c} \Xhline{2\arrayrulewidth} Method & Features & \makecell{Avg.-\\mAP} & \makecell{Chunks/\\second} & \makecell{Epoch\\time} & Params \\ \Xhline{2\arrayrulewidth} DU & RN+PCA & 63.8 & 1,447 & 5.4s & 17.5M \\ DU+SAM & RN+PCA & 72.0 & 1,447 & 8.1s & 17.5M \\ DU+SAM+mixup & RN+PCA & 72.2 & 1,447 & 8.3s & 17.5M \\ \hline DTES+SAM+mixup & RN+PCA & 68.2 & 1,329 & 15.0s & 1.0M \\ \hline DTEB+SAM+mixup & RN+PCA & 72.4 & 342 & 112.0s & 9.7M \\ \hline DU & Combin. & 75.7 & 511 & 13.7s & 8.9M \\ DU+SAM & Combin. & 76.1 & 511 & 18.4s & 8.9M \\ DU+SAM+mixup & Combin. & 77.3 & 511 & 22.7s & 8.9M \\ \Xhline{2\arrayrulewidth} \end{tabular} \end{center} \caption{\label{tab:ablations}Ablations exploring different feature sets, model trunks, and the use of SAM and mixup. Chunks/second measures each model's inference throughput, excluding first-stage feature extraction and all post-processing.} \end{table} Results comparing methods across various tolerances $\delta$ are presented in Figure~\ref{fig:curves}. We include results from CALF~\cite{cioppa2020context} and NetVLAD++~\cite{giancola2021temporally}, whose implementations were made available by their authors. All results in the figure were generated using the ResNet+PCA features. 
While our method outperforms the previous approaches across all tolerances, the improvement is significantly larger at smaller ones. The figure also shows that the temporal displacements provide significant improvements at small matching tolerances, without affecting results at larger ones. This observation is confirmed in Table~\ref{tab:comparisons}, where our method without the temporal displacements $\hat{\mathbf{D}}$ has much lower tight average-mAP. \begin{figure}[htb] \centering \hspace*{-0.5cm} \includegraphics[width=8.0cm]{tolerance_graph_medium_td.pdf} \caption{mAP for different methods, as a function of the matching tolerance $\delta$. While the standard average-mAP metric is defined with tolerances of up to 60 seconds, the graph focuses on smaller tolerances, of up to 20 seconds.} \label{fig:curves} \end{figure} Comparisons to prior work are presented in Table~\ref{tab:comparisons}. On the ResNet+PCA features, DU outperforms CALF~\cite{cioppa2020context} and NetVLAD++~\cite{giancola2021temporally}. Surprisingly, DU+SAM+mixup on the same set of features outperforms other methods that use fine-tuned features, excluding that of Zhou et al.~\cite{zhou2021feature}. When we apply our model on Zhou et al.'s pre-computed features, we see substantial improvements. In general, our model's improvements are larger on the tight average-mAP metric. \begin{table} \footnotesize \begin{center} \begin{tabular}{c c c c c} \Xhline{2\arrayrulewidth} Method & Features & A-mAP & Tight a-mAP \\ \Xhline{2\arrayrulewidth} NetVLAD++~\cite{giancola2021temporally} & ResNet & 53.4 & 11.5$^*$ \\ AImageLab RMSNet~\cite{tomei2021rms} & ResNet-tuned & 63.5$^*$ & 28.8$^*$ \\ Vis. 
Analysis of Humans & CSN-tuned & 64.7$^\dag$ & 46.2$^\dag$ \\ \hline CALF~\cite{deliege2021soccernet,cioppa2020context} & ResNet+PCA & 40.7 & 12.2$^\ddagger$ \\ NetVLAD++~\cite{giancola2021temporally} & ResNet+PCA & 50.7 & 11.3$^\ddagger$\\ DU & ResNet+PCA & 63.8 & 41.7 \\ DU+SAM+mixup w/o $\hat{\mathbf{D}}$ & ResNet+PCA & 72.1 & 39.3\\ DU+SAM+mixup & ResNet+PCA & {\bf 72.2} & {\bf 50.7}\\ \hline Zhou et al.~\cite{zhou2021feature} & Combination & 73.8 & 47.1$^*$ \\ DU & Combination & 75.7 & 58.5 \\ DU+SAM+mixup w/o $\hat{\mathbf{D}}$ & Combination & {\bf 77.3} & 46.8\\ DU+SAM+mixup & Combination & {\bf 77.3} & {\bf 60.7} \\ \Xhline{2\arrayrulewidth} \end{tabular} \end{center} \caption{\label{tab:comparisons}Comparison with results from prior works. *~Results reported on challenge website~\cite{spottingchallenge}. \dag~Results reported on challenge website for the {\it challenge} split, whereas all other reported results are on the standard {\it test} split. $\ddagger$ Results computed using the implementation provided by the authors.} \end{table} \section{Conclusion} \label{sec:conclusion} This work presented a temporally precise action spotting model that uses a dense set of detection anchors. The model sets a new state-of-the-art on SoccerNet-v2 with marked improvements when evaluated at smaller tolerances. For the model's trunk, we experimented with a 1-D u-net as well as a TE, showing that the TE requires a much larger computational budget to match the accuracy of the u-net. Ablations demonstrated the importance of predicting fine-grained temporal displacements for temporal precision, as well as the benefits brought by training with SAM and mixup data augmentation. \vspace{4mm} \noindent {\bf Acknowledgements} We are grateful to Gaurav Srivastava for helpful discussions and for reviewing this manuscript. \clearpage \bibliographystyle{IEEEbib}
\section{Introduction} \label{intro} Mathematically optimized schedules play a huge practical role, as they often have a large economic and environmental impact, and one area where they provide significant results is sports league scheduling. Professional sports leagues exist as big businesses all over the world, and many of these popular leagues are of huge economic importance due to the vast amounts of revenue they generate. While economic importance is one of the reasons the Traveling Tournament Problem (TTP) has received much attention in recent years, another major reason is the extremely challenging scheduling problems such leagues generate. In fact, while the general complexity of the TTP is still an open question, some instances of it have been proved to be NP-complete \cite{Westphal, Rishi}. \newline The TTP was introduced by Easton et al. in 2001 \cite{Easton} and the problem, given the number of teams $n$ (even) and the pairwise distances between their home venues, is concerned with arriving at a schedule for a double round robin tournament that minimizes the sum of the distances traveled by all the participating teams. While arriving at an optimized schedule, $S$, for the double round robin tournament, the TTP places two additional constraints on the schedule, called the AtMost and NonRepeat constraints. The AtMost constraint mandates that each team must play no more than $k$ ($k$ is usually taken as 3) consecutive matches at home or away, and the NonRepeat constraint states that two teams should not play each other in consecutive rounds. \newline In this paper we consider an important variant of the TTP called the Mirrored Traveling Tournament Problem (mTTP). The mTTP was introduced by Ribeiro and Urrutia in \cite{mTTP}, and here, in place of the NonRepeat constraint, we have a Mirror constraint.
The Mirror constraint requires that the games played in round $r$ are exactly the same as those played in round $r + (n - 1)$, for $r = 1, 2, \cdots, n - 1$, with reversed venues. While there have been many attempts at arriving at optimized schedules for both the TTP and the mTTP \cite{Anag, mTTP, Lim, Easton}, here we suggest a parallel simulated annealing approach for solving the mTTP, and we show that this approach is superior especially with respect to the number of solution instances it can probe per unit time. Additionally, based on an OpenMP implementation, we also show that there is a significant speed-up of 1.5x - 2.2x in terms of the number of solutions explored per unit time. \section{Methodology} Simulated Annealing (SA) is a local search meta-heuristic used to address global optimization problems, especially when the search space is discrete. The name comes from the process of annealing in metallurgy, which involves the heating and controlled cooling of a metal to increase the size of its crystals and to reduce their defects. If the cooling schedule is sufficiently slow, the final configuration results in a solid with superior structural integrity, which in turn represents a state with minimum energy. Simulated annealing emulates the physical process described above: each point $s$ of the search space is analogous to a state of some physical system, and the function $E(s)$ that is to be minimized is analogous to the internal energy of the system in that state. In the following subsections, we explain the serial version of the SA algorithm for the mTTP and then we discuss its parallelization. \subsection{The SA algorithm for mTTP} \label{sa} The simulated annealing algorithm starts with an initial random schedule, $S$, and at each basic step it probabilistically decides between making a transition to a schedule $S'$ in its neighborhood, or staying at $S$.
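Before describing the moves, it is useful to fix the objects being manipulated. Below is a minimal sketch (our illustration, not the implementation evaluated in this paper) of a mirrored double round robin schedule built with the classical circle method, together with the mTTP travel objective; note that this construction does not enforce the AtMost constraint, and the venue assignment is a crude heuristic.

```python
def circle_rounds(n):
    """Single round robin for n (even) teams via the circle method:
    n - 1 rounds, each a list of (home, away) pairs."""
    teams = list(range(n))
    rounds = []
    for r in range(n - 1):
        pairs = []
        for k in range(n // 2):
            a, b = teams[k], teams[n - 1 - k]
            pairs.append((a, b) if r % 2 == 0 else (b, a))  # crude venue alternation
        rounds.append(pairs)
        teams = [teams[0], teams[-1]] + teams[1:-1]  # rotate all but the first team
    return rounds

def mirrored_schedule(n):
    """Double round robin satisfying the Mirror constraint: round r + (n-1)
    repeats round r with reversed venues."""
    first = circle_rounds(n)
    return first + [[(a, h) for (h, a) in rnd] for rnd in first]

def total_distance(schedule, D):
    """mTTP objective: total distance traveled by all teams. Each team starts
    and ends at its home venue; the away team plays at the home team's venue."""
    n = len(D)
    total = 0
    for t in range(n):
        loc = t  # every team starts at home
        for rnd in schedule:
            for h, a in rnd:
                if t in (h, a):
                    total += D[loc][h]
                    loc = h
        total += D[loc][t]  # and returns home at the end
    return total
```

For $n=6$ this yields $2(n-1)=10$ rounds in which every ordered pair of teams meets exactly once; the SA moves described next perturb such a schedule while `total_distance` is the quantity being minimized.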
The neighborhood of a schedule $S$ is defined as the set of all schedules that can be generated by applying any one of the five moves: swap-teams, column-swap, row-swap, swap-rounds, interchange-home-away. These five moves are the same as those suggested in \cite{Anag}. Once the neighbouring schedule $S'$ is determined, the probability of making the transition to the new configuration $S'$ is dependent on the variation, $\Delta$, in the objective function produced by the move. The system moves to $S'$ with probability 1 if $\Delta < 0$. If $\Delta > 0$, then the transition to the new state $S'$ happens with probability $\exp({-\Delta}/{T})$. The rationale behind this is that, as the temperature decreases over time, the probability $\exp({-\Delta}/{T})$ of accepting non-improving solutions also decreases. \subsection{The Parallel SA algorithm for mTTP} \label{psa} In order to overcome the restrictive nature of the serial SA algorithm presented in Section~\ref{sa} in terms of the number of solutions being explored, in this paper we explore the possibility of parallelism in the SA algorithm. Since the nature of the SA algorithm allows only for work-level parallelism, we exploit the work-level parallelism offered by shared-memory multi-core CPUs using OpenMP (omp) threads, and we present the parallel simulated annealing algorithm PSA(T) below, where T is the number of threads used. The main rationale behind choosing this model comes from the intuition that, as the number of threads increases, the number of solutions they explore collectively will be significantly larger and hence will help in obtaining the optimal solutions faster.
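The core of each thread's work — Metropolis acceptance with geometric cooling, as in the inner loop of the pseudocode — can be sketched in isolation as a single serial sweep. This is a minimal illustration of ours: the parameter values are placeholders, and the constraint checking (check_schedule in the pseudocode) is omitted.

```python
import math
import random

def anneal_sweep(schedule, distance, neighbor,
                 t_initial=400.0, t_final=1.0, alpha=0.99, seed=0):
    """One annealing sweep: accept a neighbor if delta < 0, otherwise
    with probability exp(-delta / T); cool geometrically with factor alpha."""
    rng = random.Random(seed)
    curr, curr_dist = schedule, distance(schedule)
    best, best_dist = curr, curr_dist
    temp = t_initial
    while temp > t_final:
        cand = neighbor(curr, rng)
        delta = distance(cand) - curr_dist
        if delta < 0 or math.exp(-delta / temp) > rng.random():
            curr, curr_dist = cand, curr_dist + delta
            if curr_dist < best_dist:
                best, best_dist = curr, curr_dist
        temp *= alpha  # geometric cooling
    return best, best_dist
```

Here `neighbor(S, rng)` would apply one of the five moves and `distance(S)` evaluates the mTTP objective; in PSA(T), T omp threads each run such a sweep independently and the least-distance schedule across the threads is kept.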
\newpage \begin{algorithm}[htbp] \label{alg2} \caption{: PSA(T)} \begin{algorithmic}[1] \STATE \textbf{do in parallel} for each thread $1, 2 \cdots T$ \STATE start with a random schedule $S$ \STATE curr$\_$dist $=$ best$\_$dist $=$ distance($S$) \STATE curr$\_$schedule $=$ best$\_$schedule $= S$ \STATE initialize n$\_$iterations, $T_{\text{initial}}$, $T_{\text{final}}$ and $\alpha$ \STATE set count$\_$itr $= 0$ \WHILE{(count$\_$itr $<$ n$\_$iterations)} \STATE temp$\_$curr $=$ $T_{\text{initial}}$ \STATE temp$\_$end = $T_{\text{final}}$ \STATE curr$\_$dist = best$\_$dist; \STATE curr$\_$schedule = best$\_$schedule \WHILE{(temp$\_$curr $>$ temp$\_$end)} \STATE S' = select$\_$random$\_$schedule() \STATE total$\_$dist $=$ distance(S') \STATE $\Delta =$ total$\_$dist $-$ curr$\_$dist \IF{($\Delta < 0$ \OR $\exp(-\Delta /$ temp$\_$curr) $>$ random()) } \STATE curr$\_$dist $=$ total$\_$dist \STATE curr$\_$schedule $=$ S' \IF{(total$\_$dist $<$ best$\_$dist)} \STATE acc $=$ check$\_$schedule() \IF{(acc is \TRUE)} \STATE best$\_$dist $=$ curr$\_$dist \STATE best$\_$schedule $=$ curr$\_$schedule \ENDIF \ENDIF \ENDIF \STATE temp$\_$curr $=$ temp$\_$curr $ * \alpha$; \ENDWHILE \STATE count$\_$itr++ \ENDWHILE \STATE \textbf{end do in parallel} \STATE synchronizeThreads() \STATE Pick least distance schedule from all the threads \end{algorithmic} \end{algorithm} \section{Computational Experiments and Results} \label{results} The proposed parallel simulated annealing algorithm was tested on a number of mTTP instances given in \cite{Web} and it was seen that this algorithm, in addition to finding optimized solutions for these instances (all of which were within 10\% of the known lower bounds), was superior especially in terms of the number of solutions that could be explored in a second. This is particularly significant since one of the main objectives of a simulated annealing approach is to explore as much of the solution space as possible. 
Figure 1 demonstrates the variation in the number of solutions explored using the serial SA, PSA(2) and PSA(4) for instances NL06, NL08, CIRC08, NL10 and CIRC10. Figure 2 provides the corresponding speed-up graph for these instances. It is evident from the figure that a significant speed-up of up to 2.2X was achieved. \begin{figure}[h] \begin{minipage}{16pc} \includegraphics[scale=0.35]{omp_ttp1.png} \caption{\label{fig4} Variations in the number of solutions explored.} \end{minipage}\hspace{2pc}%
\begin{minipage}{16pc} \includegraphics[scale=0.35]{omp_ttp2.png} \caption{\label{fig5} Threads Versus Speed Up of Annealing on mTTP.} \end{minipage} \end{figure} \section{Conclusion} Annealing belongs to the class of suboptimal algorithms which depend heavily on randomization. In order to improve the solution, we need to explore a larger number of solutions at each basic step. The proposed parallel SA achieves this objective by utilizing multi-core omp threads. Parallel SA will thus help in converging faster towards the optimal solution. As for future work, we plan to extend the proposed parallel version to incorporate synchronization and communication points between the threads for faster convergence towards the global optimum. We also plan to port the parallel SA to GPGPUs to achieve better performance using the streaming multicore processors of NVIDIA's Compute Unified Device Architecture (CUDA) technology.
\section{Introduction} In this work we evaluate the algorithm used by \gls{fifa} to rank the international men's teams. We also propose and study simple modifications to improve the prediction capability of the algorithm. We are motivated by the fact that the rating and ranking are important elements of sport competitions and the surrounding entertainment environments. The rating consists in assigning the team/player a number, often referred to as ``skills'' or ``strengths''; the ranking is obtained by sorting these numbers and is also referred to as ``power ranking''. The rating has an informative function, providing fans and profane observers with a quick insight into the relative strength of the teams. For example, the press is often interested in the ``best'' teams or in the national team reaching some record position in the ranking. More importantly, the ranking leads to consequential decisions such as a) the seeding, \ie defining which teams play against each other in the competitions (\eg used to establish the composition of the groups in the qualification rounds of the \gls{fifa} World Cup), b) the promotion/relegation (\eg determining which teams move between the \gls{epl} and the English Football League Championship, or teams which move between the Nations Leagues groups), or c) defining the participants in the prestigious (and lucrative) end-of-the-season competitions (such as the Champions League in European football, or the Stanley Cup series in the \gls{nhl}). Most of the currently used sport ratings are based on the counting of wins/losses (and draws, when applicable), but in some cases the sport-governing bodies moved beyond these simple methods and implemented more sophisticated rating algorithms, where the rating levels attributed to the teams are meant to represent the skills.
In particular, \gls{fifa} started a new ranking/rating algorithm in 2018, where the rating levels (skills) assigned to the teams are calculated from the game outcome, of course, but also from the skills of the teams before the game. The resulting rating algorithm has the virtue of being simple and defined in a (mostly) transparent manner. The main objective of this work is to analyze the \gls{fifa} ranking using the statistical modelling methodology. Considering that association football is, by any measure, the most popular sport in the world, this has a value in itself and follows the line of works which analyzed the past ranking strategies used by \gls{fifa}, \eg \citep{Lasek13}, \citep{Ley19}. However, the approach we propose can be applied to evaluate other algorithms as well, such as the one used by \gls{fivb}, \citep{fivb_rating}. In this work we will: \begin{itemize} \item Derive the \gls{fifa} algorithm from first principles. In particular, we will define the probabilistic model underlying the algorithm and identify the estimation method used to estimate the skills. \item Assess the relevance of the parameters used in the current algorithm. In particular, we will evaluate the role played by the change of the adaptation step according to the game's importance (as defined by \gls{fifa}). \item Optimize the parameters of the proposed model. As a result, we derive an algorithm which is equally as simple as \gls{fifa}'s one, but allows us to improve the prediction of the games' results. \item Propose modifications of the algorithm which take into account the goal differential, also known as the \gls{mov}. We consider legacy-compliant algorithms and a new version of the rating. \end{itemize} Our work is organized as follows. In \secref{Sec:FIFA.rating} we describe the \gls{fifa} algorithm in a framework which simplifies the manipulation of the models and the evaluation of the results.
This is also where we clarify the data origin and make a preliminary evaluation of the relevance of the game's importance parameters currently used to control the size of the adaptation step. The algorithm is then formally derived in \secref{Sec:derivation.and.batch}, where we also discuss the evaluation of the results and the batch estimation approach we use. The incorporation of the \gls{mov} into the rating is evaluated in \secref{Sec:MOV.general} using two different strategies. In \secref{Sec:on-line} we return to the on-line rating, evaluating and re-optimizing the proposed algorithms, pointing out the role of the scale, and commenting on the elements of the \gls{fifa} algorithm (the shootout/knockout rules) which seem to be introduced in an ad hoc manner and for which the models are not specified. We conclude the work in \secref{Sec:Conclusions}, summarizing our findings, and in \secref{Sec:Recommedations} we make an explicit list of recommendations which may be introduced to improve on the current version of the \gls{fifa} algorithm. \section{FIFA ranking algorithm}\label{Sec:FIFA.rating} We consider the scenario where there is a total of $M$ teams playing against each other in the games indexed with $t=1,\ldots,T$, where $T$ is the number of games in the observed period. \gls{fifa} ranks $M=210$ international teams and, between June 4, 2018 and \data{October 16, 2021, there were $T=2964$} \gls{fifa}-recognized games. Let us denote the skill of the team $m=1, \ldots, M$ before the game $t$ as $\theta_{t,m},~ t\in\mc{T}=\set{1,\ldots, T}$; these skills are gathered in a vector $\boldsymbol{\theta}_t=[\theta_{t,1},\ldots,\theta_{t,M}]\T$, where $(\cdot)\T$ denotes the transpose. The home and the away teams are denoted by $i_t$ and $j_t$, respectively.
The game results $y_t\in\mc{Y}$ are ordinal variables, where the elements of $\mc{Y}=\set{\mf{H},\mf{D}, \mf{A}}$ represent the win of the home team ($y_t=\mf{H}$), the draw ($y_t=\mf{D}$), and the win of the away team ($y_t=\mf{A}$). These ordinal variables are often transformed into the numerical \emph{scores} $\check{y}_t=\check{y}(y_t)$: $\check{y}(\mf{A})=0$, $\check{y}(\mf{D})=0.5$ and $\check{y}(\mf{H})=1$. The basic rules of \gls{fifa}'s rating for a team $m$ which plays in the $t$-th game are defined as follows \begin{align}\label{FIFA.basic.rules} \theta_{t+1,m} & \leftarrow \theta_{t,m} + I_{c_t} \delta_{t,m}\\ \label{FIFA.basic.delta} \delta_{t,m} & = \check{y}_{t,m} - F\big( \textstyle \frac{z_{t,m}}{s}\big) \\ \label{FIFA.logistic} F(z) & = \frac{1}{1+10^{-z}}\\ z_{t,m} &= \theta_{t,m} - \theta_{t,n} \end{align} where $s=600$ is the scale\footnote{The role of the scale is to ensure that the values of the skills $\theta_{t,m}$ are situated in a visually comfortable range; it can also be used when changing the rating algorithm, as is discussed in \secref{Sec:Scale.adjustment}.}, $n$ is the index of the team opposing the team $m$ in the $t$-th game, $\check{y}_{t,m}$ is the ``subjective'' score of the team $m$ (for the home team, $m=i_t$, $\check{y}_{t,m}=\check{y}_t$, and for the away team, $m=j_t$, $\check{y}_{t,m}=1-\check{y}_t$). The result produced by the logistic function, $F(z_{t,m}/s)$ in \eqref{FIFA.basic.delta}, is referred to as the \emph{expected score}. When the team $m$ does not play, its skills do not change, \ie $\theta_{t+1,m}\leftarrow\theta_{t,m}$. Since $|\delta_{t,m}|\le 1$, $I_{c_t}$ is the maximum allowed update step, where $c_t$ is the game category (or game ``importance'') and we can decompose $I_c$ into two components \begin{align}\label{define.I_c} I_c = K \xi_c, \end{align} where $K=5$ and $\xi_c$ is a category-dependent adjustment as defined in \tabref{Tab:importance_levels}.
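For concreteness, the basic rules \eqref{FIFA.basic.rules}--\eqref{FIFA.logistic} can be sketched as follows (our illustration of the published formulas, not \gls{fifa}'s code; the example rating values are arbitrary):

```python
def expected_score(theta_m, theta_n, s=600.0):
    """Expected score of team m against team n: F((theta_m - theta_n)/s)
    with the logistic F(z) = 1 / (1 + 10^(-z))."""
    return 1.0 / (1.0 + 10.0 ** (-(theta_m - theta_n) / s))

def fifa_update(theta_m, theta_n, score_m, I_c):
    """One basic-rule update of team m's skill: theta + I_c * delta, where
    delta = subjective score (1 win, 0.5 draw, 0 loss) - expected score."""
    return theta_m + I_c * (score_m - expected_score(theta_m, theta_n))

# equal skills give an expected score of 0.5, so a draw changes nothing
assert expected_score(1500.0, 1500.0) == 0.5
assert fifa_update(1500.0, 1500.0, 0.5, 25) == 1500.0
```

Applying the update to both teams with complementary scores makes the zero-sum property of the basic rules apparent: since $F(z)+F(-z)=1$, the points gained by one team are exactly the points lost by the other.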
\begin{table}[th] \centering \begin{tabular}{c|c|c|c|c} $c$ & $I_c$ & $\xi_c$ & Description & Number\\ \hline 0 & 5 & 1 & Friendlies outside International Match Calendar windows & 436\\ 1 & 10 & 2 & Friendlies during International Match Calendar windows & 583\\ 2 & 15 & 3 & Group phase of Nations League competitions & 347\\ 3 & 25 & 5 & Play-offs and finals of Nations League competitions & 84\\ 4 & 25 & 5 & Qualifications for Confederations/World Cup finals & 1189\\ 5 & 35 & 7 & Confederation finals up until the QF stage & 209\\ 6 & 40 & 8 & Confederation finals from the QF stage onwards & 52\\ 7 & 50 & 10 & World Cup finals up until QF stage & 56\\ 8 & 60 & 12 & World Cup finals from QF stage onwards & 8 \end{tabular} \caption{Categories $c$ of the games and the corresponding update steps $I_c=K \xi_c$, \citep{fifa_rating}, where $K=5$ and $\xi_c=I_c/I_0$. The number of games observed in each category between June 4, 2018 and \data{October 16, 2021 is also given (total number of games is $T=2964$)}.} \label{Tab:importance_levels} \end{table} The basic equation governing the change of the skills in \eqref{FIFA.basic.delta} is next supplemented with the following rules: \begin{itemize} \item \emph{Knockout rule}: in the knockout stage of any competition (which follows the group stage), instead of \eqref{FIFA.basic.delta} we use \begin{align}\label{knockout.rule} \delta_{t,m} & \leftarrow \max\set{0, \delta_{t,m}} \end{align} which guarantees that no points are lost by teams moving out of the group stage. \item \emph{Shootout rule}: If the team $m$ wins the game in the shootouts we use \begin{align}\label{shootout.rule} \check{y}_{t,m}&\leftarrow 0.75,\quad \check{y}_{t,n}\leftarrow 0.5, \end{align} where $n$ is the index of the team which lost. This rule, however, does not apply in two-legged qualification games if the shootout is required to break the tie.
\end{itemize} We will discuss the effect of the knockout rule later and here we only point out that by applying the shootout/knockout we always increase $\delta_{t,m}$. Thus, while the basic rules \eqref{FIFA.basic.rules}-\eqref{FIFA.basic.delta} guarantee that the teams ``exchange'' the points so the total number of points stays constant, \ie $\sum_{m=1}^M\theta_{t,m}=\sum_{m=1}^M\theta_{t+1,m}$ (this is a well-known property of the Elo rating algorithm, \citep{Elo78_Book}), the shootout/knockout rules increase the total number of points which causes an ``inflation'' of the rating. In fact, in the considered period, there were \data{124 games where the shootout rule, the knockout rule, or both were applied and this increased the total score by $1739$ points} (with the initial total being $254680$). The rating we described has been published by \gls{fifa} since August 2018, roughly on a per-month basis. The algorithm was initialized on June 4, 2018, with the initialization values $\boldsymbol{\theta}_0$ based on the previous rating system. To run the algorithm, we need to know the initialization $\boldsymbol{\theta}_0$, the presence of conditions which trigger the use of the knockout/shootout rules, and most importantly, the category/importance of each game, $c_t$. These elements are not officially published so we use here the unofficial data shown in \citet{football_rankings}, which has kept track of the \gls{fifa} rating since June 2018. Using it, we were able to reproduce the ratings $\boldsymbol{\theta}_t$ with a precision of fractions of rating points, which gives us confidence that the categories of the games are assigned according to the \gls{fifa} rules.\footnote{Information provided by \citet{football_rankings} is highly valuable because it is far from straightforward to verify which games are included in the rating and what their importance $I_c$ is.
In particular, the games in the same tournament can be included or excluded from the rating and in some cases the changes may be done retroactively, further complicating the understanding of the rating results. Specifically, we had to deal with two minor exceptions: \begin{itemize} \item We recognized the victory of Guyana (GUY) over Barbados (BRB) in the game played on Sept. 6, 2019 already on the date of the game, while in the \gls{fifa} rating, the draw was originally registered and the GUY's victory was recognized only later, when BRB was disqualified for having fielded an ineligible player. \item We removed the game Côte d'Ivoire (CIV) vs. Zambia (ZAM) played on June 19, 2019, where CIV, the winner, and ZAM exchanged 2.21 points. The removal of this game from the \gls{fifa}-recognized list seems to be a reason why \gls{fifa} changed the ratings of both teams between two official publications on Dec. 19, 2019 and on Feb. 20, 2020. Namely, the CIV's rating was changed from 1380 to 1378 and ZAM's from 1277 to 1279. This was done despite both teams not playing at all in this period of time. \end{itemize} } Before discussing the suitable models and algorithms we ask a very simple question: Are the parameters $I_c$ defining the ``importance'' of the game suitably set? If not, how should we define them to improve the results? The immediate corollary question is what ``improving'' the results may mean and, in general, how to evaluate the results produced by the algorithm. We note here that the concept of the game importance is not unique to the \gls{fifa} rating and appears also in the \gls{fivb} rating, \citep{fivb_rating} and the statistical literature, \eg \citep[Sec.~2.1.2]{Ley19}.
\subsection{Preliminary evaluation of the FIFA rating}\label{Sec:FIFA.vs.FIFA} A conventional approach in statistics is to base the performance evaluation on a metric, called a scoring function, relating the outcome, $y_t$, to its prediction obtained from the estimates at hand (here, $\boldsymbol{\theta}_t$), \citep{Gelman14}. At this point we want to use only the elements which are clearly defined in the \gls{fifa} ranking; since the only explicit predictive element defined in the \gls{fifa} algorithm is the expected score \eqref{FIFA.logistic}, $F(z_t/s)=\Ex[\check{y}_t|z_t]$, we will base the evaluation on a metric affected by the mean. Later we will abandon this simplistic approach. Using the squared prediction error \begin{align}\label{squared.error} \mf{m}(z_t, y_t) &= \big(\check{y}_t-F(z_t/s)\big)^2 \end{align} averaged over a large number of games, we obtain the \gls{mse} estimate \begin{align}\label{MSE.eq} \mf{MSE} = \frac{2}{T}\sum_{t=T/2+1}^{T}\mf{m}(z_t, y_t), \end{align} where we use the games in the second half of the observation period to attenuate the initialization effects. This truncation is somewhat arbitrary, of course, but does not affect the results significantly for large $T$.
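The metric \eqref{squared.error}--\eqref{MSE.eq} is straightforward to compute from the per-game skill differences $z_t$ and numerical scores $\check{y}_t$; a minimal sketch of ours:

```python
def mse_second_half(z, y, s=600.0):
    """MSE of the expected score F(z/s) over the second half of the games;
    z[t] is the pre-game skill difference, y[t] in {0, 0.5, 1}."""
    F = lambda u: 1.0 / (1.0 + 10.0 ** (-u))
    T = len(z)
    tail = range(T // 2, T)
    return sum((y[t] - F(z[t] / s)) ** 2 for t in tail) / len(tail)
```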
Using the \gls{mse}, we are now able to assess how the values of the importance parameters $I_c$ (or alternatively, $K$ and $\xi_c$) affect the expected value of the score. We find the coefficients $K$ and/or $\xi_c$ by minimizing the \gls{mse} \eqref{MSE.eq} and it turns out that a simple alternate optimization (one variable $K$ or $\xi_c$ is optimized at a time, till convergence) leads efficiently to satisfactory solutions.\footnote{This was done by a line search as we preferred avoiding more formal, \eg gradient-based, methods which are not well suited to deal with the complicated functional relationship resulting from the recursive rating algorithm. } The results are shown in \tabref{tab:solutions.FIFA} and we observe the following: \begin{itemize} \item The common update step $K$ increases ten-fold in the optimized setup and it seems that it is the most important contributor to the improvement of the \gls{mse} (which changes from $\mf{MSE}=0.1295$ in the original algorithm to $\mf{MSE}=0.1262$ in the algorithm with fixed-importance games but larger common adaptation step). \item For the games in the categories well represented in the data, \ie $c\in\set{0,1,2,4,5}$, the relative importance of the games $\xi_c$ does not seem to be critically different and certainly does not fall in line with the values used in the \gls{fifa} algorithm. Overall, the optimized $\xi_c$ yield a very small improvement in the \gls{mse} comparing to the use of fixed $\xi_c$. In fact, the Friendlies played in the International Match Calendar window are weighted down ($\xi_1=0.6$) compared to the Friendlies played outside the window, a trend contrary to what the \gls{fifa} algorithm does. \item Estimates of $\xi_c$ for the categories $c\in\set{3,6,7,8}$ should not be considered very reliable because the number of games in each of these categories is rather small (less than $3\%$ of the total).
Moreover, the games in the categories $c=7$ and $c=8$ were observed only in June 2018, during the 2018 World Cup; thus, their effect is most likely very weak in the games from the second half of the observed batch, see \eqref{MSE.eq}, which starts around October 2019. \end{itemize} \begin{table}[t] \centering \begin{tabular}{c||c|c|c|c|c|c|c|c|c|c} $\mf{MSE}_\tr{opt}$ & $K$ & $\xi_0$ & $\xi_1$ & $\xi_2$ & $\xi_3$ & $\xi_4$ & $\xi_5$ & $\xi_6$ & $\xi_7$ & $\xi_8$ \\ \hline $0.1295$ & \ccol $5$ &\ccol $1$ &\ccol $2$ &\ccol $3$ &\ccol $5$ &\ccol $5$ &\ccol $7$ &\ccol $8$ & \ccol$10$ & \ccol$12$ \\ $0.1262$ & $12$ &\ccol $1$ &\ccol $2$ &\ccol $3$ &\ccol $5$ &\ccol $5$ &\ccol $7$ &\ccol $8$ & \ccol$10$ & \ccol$12$ \\ $0.1262$ & $55$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ &\ccol $1$ \\ $0.1250$ & $50$ & \ccol $1$ & $0.6$ & $1.8$ & $0.8$ & $1.2$ & $1.1$ & $2.4$ & $0.1$ & $9.9$ \\ \end{tabular} \caption{Parameters $K$ and $\xi_c$, in \eqref{define.I_c}, are either fixed (shadowed cells), or obtained by minimizing the \gls{mse} \eqref{MSE.eq}. The first row corresponds to the original \gls{fifa} algorithm: $K$ and $\xi_c$ are taken from \tabref{Tab:importance_levels}.} \label{tab:solutions.FIFA} \end{table} Using a very simple criterion of the \gls{mse} derived from the definitions used by the \gls{fifa} algorithm, we obtain results which cast doubt on the optimality/utility of the games' importance parameters, $I_c$, proposed by \gls{fifa}. However, drawing conclusions at this point may be premature. For example, regarding $K$ (which, after optimization, should be much larger than $5$), it is possible that the relatively short observation period ($29$ months) is not sufficient for a small $K$ to guarantee sufficient convergence, but a small $K$ may pay off in the long run, when smaller values of $K$ improve the performance after convergence is reached. We cannot elucidate this issue with the data at hand.
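The alternate optimization behind \tabref{tab:solutions.FIFA} can be sketched generically as a coordinate-wise grid (line) search, where the objective $f$ stands for the \gls{mse} obtained by re-running the rating with the candidate parameters. This is our generic illustration; the grids and the number of sweeps are assumptions, not the exact search used.

```python
def alternate_minimize(f, x0, grids, sweeps=5):
    """Minimize f by optimizing one coordinate at a time over a finite
    grid of candidate values, cycling through the coordinates."""
    x = list(x0)
    for _ in range(sweeps):
        for i, grid in enumerate(grids):
            x[i] = min(grid, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
    return x, f(x)
```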
On the other hand, to address the concerns regarding the \emph{relative} importance weights $\xi_c$, the situation is rather different. Even after convergence, the weights associated with different game categories should meaningfully affect the results. To elucidate this point we will now take a more formal approach and go back to the ``drawing board'' to derive the rating algorithm from first principles. \section{Derivation of the algorithm and batch-rating}\label{Sec:derivation.and.batch} To understand and eventually modify the rating algorithm used by \gls{fifa} we propose to cast it in a well-defined probabilistic framework. To this end we define explicitly a model relating the game outcome $y_t$ to the skills of the home team ($\theta_{t,i_t}$) and the away team ($\theta_{t,j_t}$), where the most common assumption is that the probability that, at time $t$, a random variable $y$ takes the value $y_t$, depends on the skills' difference $z_t=\theta_{t,i_t}-\theta_{t,j_t}$, \ie \begin{align}\label{Pr.yt} \PR{y = y_t|\boldsymbol{\theta}_t } &= L(z_t/s; y_t)\\ z_t &= \boldsymbol{x}\T_t\boldsymbol{\theta}_t, \end{align} where $L(z_t/s; y_t)$ is the \emph{likelihood} of $\boldsymbol{\theta}_t$ (for a given outcome $y_t$) and we define a \emph{scheduling} vector $\boldsymbol{x}_t=[x_{t,1},\ldots, x_{t,M}]\T$ for the game $t$, as \begin{align} x_{t,m}=\IND{i_t=m} - \IND{j_t=m}, \end{align} with $\IND{a}=1$ when $a$ is true, and $\IND{a}=0$, otherwise. Thus, $x_{t,m}=1$ if the team $m$ is playing at home, $x_{t,m}=-1$ if the team $m$ is visiting, and $x_{t,m}=0$ for all teams $m$ which do not play. This notation allows us to a) deal in a compact manner with all the skills $\boldsymbol{\theta}_t$ for each $t$, and b) consider the \gls{hfa} or a lack thereof. As before, $s$ is the scale. We are interested in the on-line rating algorithms, in which the skills of the participating teams are changed immediately after the results of the game are known.
Nevertheless, we will start the analysis with batch processing, \ie assuming that the skills $\boldsymbol{\theta}_t$ do not vary in time, $\boldsymbol{\theta}_t=\boldsymbol{\theta}$. This is a reasonable approach if the time window $T$ is not too large, so that the skills of the teams may, indeed, be considered approximately constant. The on-line rating algorithms will then be derived as approximate solutions to the batch optimization problem. The purpose of such an approach is to a) tie the algorithm used by \gls{fifa} to the theoretical assumptions, which are not spelled out when the algorithm is presented, b) remove the dependence on the initialization and/or on the scale, and c) treat the past and present data in the same manner, \eg avoiding the partial elimination of data in the performance metrics, see \eqref{MSE.eq}. Assuming that the observations are independent when conditioned on the skills, the rating may be based on the \emph{weighted} \gls{ml} estimation principle
\begin{align}
\hat\boldsymbol{\theta}
\label{ML.optimization}
&=\mathop{\mr{argmin}}_{\boldsymbol{\theta}} \sum_{t\in \mc{T}} \xi_{c_t} \ell(z_t/s;y_t),
\end{align}
where
\begin{align}
\label{log.likelihood.def}
\ell(z_t/s;y_t) &=- \log L(z_t/s;y_t),
\end{align}
is a (negated)\footnote{The negation in \eqref{log.likelihood.def} allows us to use a minimization in \eqref{ML.optimization}, which is a very common formulation.} log-likelihood. The weighting with $\xi_{c_t}>0$ is used in the estimation literature to take care of the outcomes which are more or less reliable, \citep{Hu01}, \citep{Amiguet10_thesis}. In our problem, the reliability is associated with the game category, $c_t$, so $\xi_c$ denotes the weight of the category $c$. Since multiplication of $\xi_c$ by a common factor is irrelevant for minimization, we fix $\xi_0=1$.
We may solve \eqref{ML.optimization} using the steepest descent
\begin{align}\label{hat.btheta.gradient}
\hat\boldsymbol{\theta} \leftarrow \hat\boldsymbol{\theta} - \mu/s\sum_{t}\boldsymbol{x}_t \xi_{c_t} g(z_t/s;y_t),
\end{align}
where $\mu$ is the adaptation step and
\begin{align}\label{g.derivative}
g(z;y) &= \frac{\dd}{\dd z} \ell(z;y).
\end{align}
The on-line version of \eqref{hat.btheta.gradient} is obtained by replacing the batch optimization with the \gls{sg}, which updates the solution each time a new observation becomes available, \ie
\begin{align}\label{SG.algorithm}
\boldsymbol{\theta}_{t+1} \leftarrow \boldsymbol{\theta}_{t} - K\xi_{c_t}\boldsymbol{x}_t g(z_t/s;y_t),
\end{align}
where the update amplitude is controlled by the weight $\xi_{c_t}$ and the step $K$, which absorbs the scale $s$.
\subsection{Davidson model and Elo algorithm}\label{Sec:Davidson.Elo}
The rating now depends on the choice of the likelihood function $L(z;y)$ and we opt here for the Davidson model, \citep{Davidson70}, which is a particular case of the multinomial model used also in \citet{Egidi21}
\begin{align}
\label{P2.z}
L(z;\mf{H}) &= \frac{10^{0.5(z+\eta b)}}{10^{0.5(z+\eta b)}+\kappa+10^{-0.5(z+\eta b)}},\\
\label{P0.z}
L(z;\mf{A}) &= L(-z;\mf{H}),\\
\label{P1.z}
L(z;\mf{D}) &= \kappa\sqrt{L(z;\mf{H})L(z;\mf{A})},
\end{align}
where $\eta$ is a \gls{hfa} parameter modelling the apparent increase in the skills of the local team, the indicator $b=\IND{\tr{game is played in the home-team country}}$ allows us to distinguish between the games played on home or neutral venues,\footnote{Out of $T=2964$ games we considered, $768$ were played on neutral venues. To verify the venue we used \citep{roonba} and \citep{soccerway}.} and $\kappa$ controls for the presence of the draws. The choice of this model is motivated by the fact that it leads to a simple algorithmic update of the skills generalizing the Elo rating algorithm, \citep{Szczecinski20}.
It also becomes equivalent to the latter for particular values of $\kappa$, namely $\kappa=2$ and $\kappa=0$. These relationships make possible (or at least ease) the comparison with the rating algorithm currently used by \gls{fifa}, which is also based on the Elo algorithm. Using \eqref{P2.z}-\eqref{P1.z} in \eqref{g.derivative}, straightforward algebra yields
\begin{align}
\label{g.yz.1}
g(z;y) &=\frac{\dd}{\dd z}\ell(z;y) \\
\label{g.yz.3}
& = -\ln 10(\check{y} - F_{\kappa}(z)),
\end{align}
where $\check{y}$ is the ``score'' of the game which we already defined, and
\begin{align}\label{F.z}
F_\kappa(z) &= \frac{\frac{1}{2}\kappa+10^{0.5(z+\eta b)}}{10^{0.5(z+\eta b)}+\kappa+10^{-0.5(z+\eta b)}}
\end{align}
has the meaning of the conditional expected score, $F_\kappa(z)=\Ex[\check{y}|z]=\sum_{y\in\mc{Y}} \check{y} L(z; y)$. Therefore, the \gls{sg} algorithm \eqref{SG.algorithm} becomes
\begin{align}\label{Elo.algorithm}
\boldsymbol{\theta}_{t+1} \leftarrow \boldsymbol{\theta}_{t} + K\xi_{c_t}\boldsymbol{x}_t\big(\check{y}_t-F_\kappa(z_t/s)\big)
\end{align}
and it obviously has the form of the Elo and \gls{fifa} rating algorithms, see \eqref{FIFA.basic.rules}-\eqref{FIFA.logistic}, except that we use $F_\kappa(z)$ while the former use $F(z)$. Note that the step $K$ in \eqref{Elo.algorithm} absorbs the term $\ln 10$ from \eqref{g.yz.3}. It is easy to see that for $\eta=0$ and $\kappa=0$ (\ie when $L(z;\mf{D})=0$ and the draws are ignored) we have $F_0(z)=F(z)$, which is simply the logistic function used in the Elo algorithm. Furthermore, for $\eta=0$ and $\kappa=2$ we obtain $F_2(z)=F(z/2)$, and thus \eqref{Elo.algorithm} is again equivalent to the Elo rating algorithm but with a doubled scale.
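For illustration, the model \eqref{P2.z}-\eqref{P1.z}, the expected score \eqref{F.z}, and the update \eqref{Elo.algorithm} may be sketched in Python as follows (the function names are ours, and the scale $s$ is assumed to be already absorbed into $z$):

```python
import numpy as np

def davidson_probs(z, kappa, eta=0.0, b=0):
    """Davidson probabilities (home win, draw, away win) for the
    scale-normalized skills' difference z, cf. eqs. (P2.z)-(P1.z)."""
    h = 10.0 ** (0.5 * (z + eta * b))
    den = h + kappa + 1.0 / h
    p_home = h / den
    p_away = (1.0 / h) / den
    p_draw = kappa * np.sqrt(p_home * p_away)
    return p_home, p_draw, p_away

def expected_score(z, kappa, eta=0.0, b=0):
    """Conditional expected score F_kappa(z), cf. eq. (F.z)."""
    h = 10.0 ** (0.5 * (z + eta * b))
    return (0.5 * kappa + h) / (h + kappa + 1.0 / h)

def sg_update(theta, x, y_score, z, K, xi, kappa):
    """One Elo-style SG update of the skills, cf. eq. (Elo.algorithm)."""
    return theta + K * xi * x * (y_score - expected_score(z, kappa))
```

In particular, `expected_score(z, 0.0)` reproduces the logistic $F(z)$ and `expected_score(z, 2.0)` reproduces $F(z/2)$, as noted in the text.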
While we conclude that the \gls{fifa} rating algorithm may be seen as an instance of weighted \gls{ml} estimation, this is, of course, a ``reverse-engineered'' hypothesis because the \gls{fifa} document, \citep{fifa_rating}, does not mention any remotely similar concept.
\subsection{Regularized batch rating}\label{Sec:regularized.batch}
In order to go beyond the limitations of the \gls{sg} optimization and to avoid the problems related to the removal of a significant portion of the data (meant to eliminate the initialization effects during evaluation, see \eqref{MSE.eq}), we may focus on the original problem defined in \eqref{ML.optimization} for the entire set of data. That is, we now ignore the on-line rating aspect and rather focus on the evaluation of the model and the optimization criterion that underlie the algorithm. We start by noting that the problem \eqref{ML.optimization} is, in general, ill-posed: since the solution depends only on the differences between the skills, $z_t$, all solutions $\hat\boldsymbol{\theta}$ and $\hat\boldsymbol{\theta}+\theta_\tr{o}\boldsymbol{1}$ are equivalent because the differences $z_t$ are independent of the ``origin'' value $\theta_\tr{o}$. To remove this ambiguity we may \emph{regularize} the problem as
\begin{align}
\hat\boldsymbol{\theta}
\label{MAP.optimization}
&=\mathop{\mr{argmin}}_{\boldsymbol{\theta}} J(\boldsymbol{\theta})\\
J(\boldsymbol{\theta})&=\sum_{t\in\mc{T}}\xi_{c_t}\ell(z_t/s;y_t) +\frac{\alpha}{2s^2} \|\boldsymbol{\theta}\|^2,
\end{align}
where $\alpha$ is the regularization parameter and we have opted for the so-called ridge regularization \citep[Ch.~3.4.1]{Hastie_book}. Under the model \eqref{P2.z}-\eqref{P1.z}, the regularized batch-optimization problem \eqref{MAP.optimization} is useful to resolve another difficulty.
Namely, if there is a team $m$ having registered only wins, \ie when $\forall i_t=m, ~ y_t=\mf{H}$ and $\forall j_t=m, ~ y_t =\mf{A}$, then \eqref{ML.optimization} cannot be solved (or rather, $\hat\theta_m\rightarrow\infty$) because the objective in \eqref{ML.optimization} does not limit the value of $\theta_m$. Such a solution is not only unattainable numerically but, in fact, meaningless, and the regularization \eqref{MAP.optimization} settles this issue.\footnote{The same problem arises, of course, when a team registers a sequence of only losses. This is not a hypothetical issue: in the official \gls{fifa} games three teams registered streaks of only wins or only losses (without any other results): Tonga (three wins), Eritrea (two losses), and American Samoa (four losses). Thus, the attempt to solve the batch-optimization problem without regularization (\ie with $\alpha=0$) would yield $\hat\theta_{m}=\infty$, for $m$ being the index of Tonga.} The estimated skills $\hat\boldsymbol{\theta}$ now depend on the weights $\xi_c$, on the regularization parameter $\alpha$, and on the model parameters $\eta$ and $\kappa$. If unknown, all these parameters must be optimized. As for the optimization criterion, we recall that the \gls{fifa} algorithm only specified the expected score, so the quadratic error \eqref{squared.error} allowed us to evaluate the algorithm while staying within the boundaries of its definitions.
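The role of the ridge term in \eqref{MAP.optimization} may be illustrated by the following Python sketch (the helper names are ours, $\eta=0$ and $s=1$ are assumed): for a team with only wins, the unregularized objective keeps decreasing as its skill grows, while the ridge term eventually dominates and bounds the solution.

```python
import numpy as np

def davidson_nll(z, y, kappa):
    """Negated log-likelihood ell(z; y), y in {'H', 'D', 'A'} (eta = 0)."""
    h = 10.0 ** (0.5 * z)
    den = h + kappa + 1.0 / h
    p = {'H': h / den, 'D': kappa / den, 'A': (1.0 / h) / den}
    return -np.log(p[y])

def regularized_objective(theta, games, xi, kappa, alpha, s=1.0):
    """J(theta): weighted negated log-likelihood plus the ridge term,
    cf. eq. (MAP.optimization); `games` holds (x_t, y_t, c_t) triplets."""
    J = sum(xi[c] * davidson_nll(x @ theta / s, y, kappa) for (x, y, c) in games)
    return J + 0.5 * alpha / s ** 2 * float(theta @ theta)

# a single game won by team 0: with alpha > 0, pushing theta_0 - theta_1
# to extreme values increases, rather than decreases, the objective
games = [(np.array([1.0, -1.0]), 'H', 0)]
xi = {0: 1.0}
J_moderate = regularized_objective(np.array([1.0, -1.0]), games, xi, kappa=0.8, alpha=1.0)
J_extreme = regularized_objective(np.array([100.0, -100.0]), games, xi, kappa=0.8, alpha=1.0)
```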
Now, however, with the explicit skills-outcome model, we may go beyond this limitation and may use the prediction metrics known in the machine learning such as the (negated) log-score, \citep{Gelman14} \begin{align}\label{log.score.t} \mf{m}^{\tr{ls}}(z_t;y_t) & = \ell(z_t/s;y_t), \end{align} often preferred due to its compatibility with the log-likelihood used as the optimization criterion, or the accuracy score, \citep{Lasek20} \begin{align}\label{acc.score.t} \mf{m}^{\tr{acc}}(z_t;y_t) & = \IND{y_t = \mathop{\mr{argmax}}_{y} L(z_t/s;y)}, \end{align} which equals one if the event with the largest predicted probability was actually observed, otherwise it is zero. Furthermore, thanks to the batch-rating we are able to consider the entire data set in the performance evaluation by averaging the scoring function \eqref{log.score.t} or \eqref{acc.score.t} over all games \begin{align} \label{LS.avg} \mf{LS}&=\frac{1}{T}\sum_{t\in\mc{T}} \mf{m}^{\tr{ls}} \big( \boldsymbol{x}_t\T\hat\boldsymbol{\theta}_{\backslash{t}}, y_t\big),\\ \label{ACC.avg} \mf{ACC}&=\frac{1}{T}\sum_{t\in\mc{T}} \mf{m}^{\tr{acc}} \big( \boldsymbol{x}_t\T\hat\boldsymbol{\theta}_{\backslash{t}}, y_t\big), \end{align} where \begin{align} \label{find.theta} \hat\boldsymbol{\theta}_{\backslash{t}} &= \mathop{\mr{argmin}}_{\boldsymbol{\theta}} J_{\backslash{t}}(\boldsymbol{\theta}),\\ \label{J.theta.xi.alpha} J_{\backslash{t}}(\boldsymbol{\theta}) &=\sum_{\substack{l\in\mc{T}\\ l\neq t}} \xi_{c_l}\ell(\boldsymbol{x}_l\T\boldsymbol{\theta}/s; y_l) +\frac{\alpha}{2s^2}\|\boldsymbol{\theta}\|^2. \end{align} In plain words, for given parameters ($\alpha$, $\kappa$, $\eta$, $\xi_c$), we find the skills $\hat\boldsymbol{\theta}_{\backslash{t}}$ from all, but the $t$-th game [this is \eqref{find.theta}-\eqref{J.theta.xi.alpha}], and next use them to predict the results $y_t$; we repeat it for all $t\in\mc{T}$, summing the obtained scores. 
This is the well-known \gls{loo} cross-validation strategy \citep[Sec.~2.9]{Hastie_book}, \citep[Ch.~9.6.2]{Duda_book}: no data is discarded when calculating the metrics \eqref{LS.avg}-\eqref{ACC.avg}, and this comes at the price of having to find $\hat\boldsymbol{\theta}_{\backslash{t}}$ for all $t\in\mc{T}$. To diminish the computational load, we opt here for the \gls{alo} cross-validation \citep{Rad20} based on the local quadratic approximation of the optimization function defined for all the data. Details are given in \appref{App:ALO}. Although both the average log-score in \eqref{LS.avg} and the accuracy \eqref{ACC.avg} can now be optimized with respect to $\alpha$, $\kappa$, $\eta$, and/or $\xi_c$, we only optimize the log-score, whose optimal value is denoted by $\mf{LS}_\tr{opt}$; the resulting accuracy $\mf{ACC}$ will also be shown. It is, of course, possible to optimize the log-score with respect to any subset of parameters. Again, we used alternating minimization: $\mf{LS}$ was minimized with respect to one parameter at a time, $\alpha$, $\kappa$, $\eta$, or $\xi_c$, until no improvement was observed. This simple strategy led to the minimum $\mf{LS}_{\tr{opt}}$, which turned out to be independent of the various starting points we used.\footnote{Although we cannot prove the solution to be global, in all our observations the log-score functions seemed to be unimodal.} A quick comment may be useful regarding the interpretation of the performance metrics. The accuracy \eqref{ACC.avg} is easily understandable: it is the average fraction of the events which were predicted correctly (as those with the maximum likelihood $L(z_t/s;y)$). On the other hand, the metric \eqref{LS.avg} may be represented as $\exp(-\mf{LS})=[\prod_{t=1}^T L(z_t/s;y_t)]^{1/T}$, which is the geometric mean of the predicted probabilities assigned to the events which were actually observed.
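Both metrics, and the geometric-mean identity of the log-score, may be verified with a minimal Python sketch (the toy predictions are hypothetical, and the function name is ours):

```python
import numpy as np

def log_score_and_accuracy(pred_probs, outcomes):
    """Average log-score (LS.avg) and accuracy (ACC.avg) for a list of
    predicted distributions over {'H', 'D', 'A'} and observed outcomes."""
    ls = float(np.mean([-np.log(p[y]) for p, y in zip(pred_probs, outcomes)]))
    acc = float(np.mean([1.0 if max(p, key=p.get) == y else 0.0
                         for p, y in zip(pred_probs, outcomes)]))
    return ls, acc

# two hypothetical games: the first observed outcome is the most probable
# one (counts towards the accuracy), the second is not
pred = [{'H': 0.5, 'D': 0.3, 'A': 0.2}, {'H': 0.2, 'D': 0.3, 'A': 0.5}]
outcomes = ['H', 'D']
ls, acc = log_score_and_accuracy(pred, outcomes)
```

Here $\exp(-\mf{LS})$ equals $\sqrt{0.5\cdot 0.3}$, the geometric mean of the probabilities assigned to the observed events.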
While the accuracy metric penalizes the wrong guesses with zero (so $\mf{ACC}\in[0,1]$), the log-score penalizes them via the logarithmic function, which may be arbitrarily large (so $\mf{LS}\in(0,\infty)$). However, the fundamental difference between the two metrics is that we can use the accuracy without specifying the distribution for all possible outcomes but we cannot calculate the log-score in such a case.\footnote{ The common confusion is to interpret the function $F(z_t/s)$ in the Elo/\gls{fifa} algorithm as the probability of the home win, and the value $1-F(z_t/s)$, as the probability of an away win. This, of course, implies that the draw probability equals zero. With such an interpretation, we can still calculate the accuracy metric even if we never predict the draw. On the other hand we cannot calculate the log-score, because when the draw occurs, we have undefined metric $\mf{m}^{\tr{ls}}(z_t/s;\mf{D})\rightarrow \infty$.} \begin{table}[tb] \centering \begin{tabular}{c||c|c|c||c|c|c|c|c|c|c|c|c||c} $\mf{LS}_{\tr{opt}}$ & $\alpha$ & $\eta$ & $\kappa$ & $\xi_0$ & $\xi_1$ & $\xi_2$ & $\xi_3$ & $\xi_4$ & $\xi_5$ & $\xi_6$ & $\xi_7$ & $\xi_8$ & $\mf{ACC} [\%]$\\ \hline $0.960$ & $1.7$ &\ccol $0$ &\ccol $2.0$ &\ccol $1$ &\ccol $2.0$ &\ccol $3.0$ &\ccol $5.0$ &\ccol $5.0$ &\ccol $7.0$ &\ccol $8.0$ &\ccol $10.0$ &\ccol $12.0$ & $55$\\ $0.948$ & $0.2$ &\ccol $0$ &\ccol $2.0$ &\ccol $1$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ & $56$\\ $0.948$ & $0.3$ &\ccol $0$ &\ccol $2.0$ & \ccol $1$ & $0.9$ & $0.7$ & $0.8$ & $0.9$ & $1.2$ & $0.7$ & $0.8$ & $1.1$ & $55$\\ \hline $0.918$ & $0.3$ & $0.4$ &\ccol $2.0$ &\ccol $1$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ & $56$\\ $0.860$ & $0.4$ & $0.3$ & $0.8$ &\ccol $1$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ &\ccol $1.0$ & $61$\\ $0.860$ & $0.5$ & $0.3$ 
& $0.8$ & \ccol $1$ & $0.8$ & $0.7$ & $0.8$ & $1.0$ & $1.2$ & $0.8$ & $1.0$ & $1.0$ & $61$\\
\end{tabular}
\caption{Batch-rating parameters obtained via minimization of the log-score \eqref{LS.avg}. The parameters ($\alpha$, $\kappa$, $\eta$, $\xi_c$) are either fixed (shadowed cells), or obtained via optimization. The upper-part results correspond to the conventional \gls{fifa} algorithm: using $\kappa=2$ and $\eta=0$, the expected score is calculated using a logistic function.}
\label{tab:solutions.batch}
\end{table}
The results obtained are shown in \tabref{tab:solutions.batch} and indicate that
\begin{itemize}
\item The data does not provide evidence for using category-dependent weights $\xi_c$. There is actually a slight indication that the optimal weights of the Friendlies within the IMC (category $c=1$) and of the Group phase of Nations Leagues (category $c=2$) are slightly \emph{smaller} than the weight of the regular Friendlies. This stands in contrast to the \gls{fifa} algorithm, which doubles the weight $\xi_1$ of the Friendlies played in the IMC and triples the weight $\xi_2$. In fact, the results obtained using the \gls{fifa} weights $\xi_c$ are \emph{worse} than those obtained using constant weights $\xi_c=1$ (\ie essentially ignoring the possibility of weighting). Even with the argument of having a small number of games in some categories (such as a World Cup), it is very unlikely that observing more games will speak in favor of variable weights, and almost surely not in favor of the highly disproportionate weights used in the \gls{fifa} algorithm.
\item A notable improvement in the prediction capacity, as measured by the log-score, is obtained by considering the \gls{hfa}.
The value $\eta\in \{0.3,0.4\}$ emerges from the optimization fit and we note that $\eta=0.25$ was used in \citet{eloratings.net}.\footnote{Therein, the unnormalized value $\eta s=100$ is reported and since $s=400$, we obtain $\eta=0.25$.} \item A more important improvement is obtained by optimizing the parameter $\kappa$ which takes into account the draws and their frequency as discussed in \citet{Szczecinski20}. \end{itemize} It is interesting to compare the parameters found by optimization with the simplified formulas proposed in \citet[Sec.~3.2]{Szczecinski20} \begin{align} \label{eta.approx} \eta &= \log_{10}\frac{f_\mf{H}}{f_\mf{A}}\\ \label{kappa.approx} \kappa &= \frac{f_\mf{D}}{\sqrt{f_\mf{H} f_\mf{A}}} \approx\frac{2 f_\mf{D}}{1-f_\mf{D}}. \end{align} where $f_y, y \in\mc{Y}$ are empirical frequencies of outcomes. We can consider separately the games played on the neutral venues and calculate these frequencies as $f^\tr{neut.}_{\mf{A}}=0.37$, $f^\tr{neut.}_\mf{D}=0.24$, $f^\tr{neut.}_\mf{H} = 0.39$, and those played on home venues as $f^\tr{hfa}_{\mf{A}}=0.27$, $f^\tr{hfa}_\mf{D}=0.22$, $f^\tr{hfa}_\mf{H} = 0.51$, which yields \begin{align} \kappa^{\tr{hfa}} &= 0.61 &\eta^{\tr{hfa}} &= 0.28\\ \kappa^{\tr{neut.}} &= 0.63 &\eta^{\tr{neut.}} &= 0.02. \end{align} The parameter $\eta^{\tr{hfa}}$ predicted by \eqref{eta.approx} is practically equal to the one obtained by optimization. And while the parameters $\kappa^{\tr{hfa}}$ and $\kappa^{\tr{neut.}}$ are slightly different from the one predicted by \eqref{kappa.approx}, using them in the rating, we obtained $\mf{LS}_{\tr{opt}}=0.868$, which is still notably better than using the conventional \gls{fifa} rating. This is interesting because finding the parameters $\eta$ and $\kappa$ from the frequencies of the games not only avoids optimization but also provides a simple empirical justification. 
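These closed-form estimates are easy to verify numerically, as in the following Python sketch (the function name is ours; we use the rounded frequencies quoted in the text, which slightly shifts $\kappa^{\tr{hfa}}$ away from the reported $0.61$):

```python
import math

def davidson_params_from_freqs(f_home, f_draw, f_away):
    """Closed-form (eta, kappa) from the empirical outcome frequencies,
    cf. eqs. (eta.approx)-(kappa.approx)."""
    eta = math.log10(f_home / f_away)
    kappa = f_draw / math.sqrt(f_home * f_away)
    return eta, kappa

# frequencies rounded as in the text
eta_hfa, kappa_hfa = davidson_params_from_freqs(0.51, 0.22, 0.27)    # home venues
eta_neut, kappa_neut = davidson_params_from_freqs(0.39, 0.24, 0.37)  # neutral venues
```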
\section{Margin of victory}\label{Sec:MOV.general}
In the search for a possible improvement of the rating, we now consider the use of the \gls{mov} variable, defined as the difference between the goals scored by the teams and denoted by $d_{t}$. In that regard, the most recent works adopt two conceptually different approaches. The first one keeps the structure of a known rating algorithm (such as the \gls{fifa} algorithm) and modifies it by changing the adaptation step as a function of $d_t$. This was already done in \citet{eloratings.net}, \citet{Hvattum10}, \citet{Silver14}, \citet{Ley19}, and \citet{Kovalchik20}, and is conceptually similar to the weighting according to the game category which we considered in the previous section. The second approach changes the model relating the skills to the \gls{mov} variable $d_t$ and was studied in \citet{Maher82}, \citet{Ley19}, \citet{Lasek20}, \citet{Szczecinski20c}. We will focus on the simple proposition from \citet{Lasek20} building on the formulation of \citet{Karlis08}.
\subsection{MOV via weighting}\label{Sec:MOV}
For context, we show in \tabref{tab:goals_difference} the number of games and their percentage of the total, depending on the value of the \gls{mov} variable $d$. While, in principle, it is possible to use $d$ directly, it is customary to consider its absolute value, $|d|$. The Elo/\gls{fifa} algorithms \eqref{Elo.algorithm} can be easily modified as follows to take the \gls{mov} variable into account:
\begin{align}
K_{c,d} = K \xi_{c}\zeta_{d},
\end{align}
where, as before, $K$ is the common step, $\xi_{c}$ is the weight associated with the game category $c$, and $\zeta_d$ is a function of the \gls{mov} variable $d$.
\begin{table}[tb]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
$|d|=0$ & $|d|=1$ & $|d|=2$ & $|d|=3$ & $|d|=4$ & $|d|=5$ & $|d|\ge 6$ \\
\hline
$678$ & $1070$ & $543$ & $337$ & $162$ & $77$ & $97$\\
$22\%$ & $36\%$ & $18\%$ & $11\%$ & $6\%$ & $3\%$ & $2\%$
\end{tabular}
\caption{\data{Number of games till October 15, 2021} which finished with the goal difference $|d|$ (the fractions do not add up to 100\% due to rounding).}
\label{tab:goals_difference}
\end{table}
For example, \citep{eloratings.net} uses
\begin{align}
\label{K.d.elorating}
\zeta_d &=
\begin{cases}
1& |d|\le 1\\
1.5 & |d|=2\\
1.75 + 0.125(|d|-3) & |d|\ge 3
\end{cases}.
\end{align}
Similar propositions may be found in \citet{Hvattum10} (in the context of association football), in \citet{Kovalchik20} (to rate tennis players), or in \citet{Silver14} (for rating the teams in American football). To elucidate how useful such heuristics are, we note that the problem is very similar to the importance weighting we analyzed before; the difference resides in the fact that the weighting now depends on the product $\xi_c\zeta_d$. We may thus reuse our optimization strategy to find the optimal weights for the games with different values of $|d|$. To this end we discretize $|d|$ into $V+1$ \gls{mov}-categories, $v=0,\ldots, V$, using a very simple mapping: $v=|d|$ for $|d|<V$ and $v=V$ for $|d|\ge V$. For example, with $V=2$, $\zeta_{0}$ weights the draws ($|d|=0$), $\zeta_1$ weights the games with a one-goal difference ($|d|=1$), and $\zeta_2$ weights the games with more than one goal difference ($|d|\ge 2$). By breaking with the predefined functional relationship shown in \eqref{K.d.elorating}, we are more general than the latter, \eg we treat the cases $|d|=0$ and $|d|=1$ separately.
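The heuristic weighting \eqref{K.d.elorating} and the discretization of $|d|$ into \gls{mov}-categories may be sketched as follows (Python; the function names are ours):

```python
def zeta_eloratings(d):
    """MOV step multiplier of eloratings.net, cf. eq. (K.d.elorating)."""
    ad = abs(d)
    if ad <= 1:
        return 1.0
    if ad == 2:
        return 1.5
    return 1.75 + 0.125 * (ad - 3)

def mov_category(d, V):
    """Discretized MOV category: v = |d| for |d| < V, and v = V otherwise."""
    return min(abs(d), V)
```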
Treating these two cases separately makes sense: not only are they the most frequent events, corresponding, respectively, to $22\%$ and $36\%$ of the total, see \tabref{tab:goals_difference}, but they also correspond to the events of draw and win/loss, which the algorithm treats differently. On the other hand, we are also less general due to the merging of the events $|d_t|\ge V$, although this effect will decrease with $V$, simply because there will be very few observations, as may be understood from \tabref{tab:goals_difference}. For example, with $V=4$, the weighting $\zeta_4$ will be the same for the events with $|d| = 4$ and $|d|>4$, but the latter make up only $5\%$ of the total. We consider again the game categories defined in \tabref{Tab:importance_levels} and thus we now solve the problem
\begin{align}
\label{MAP.optimization.xi.zeta}
\hat\boldsymbol{\theta}
&=\mathop{\mr{argmin}}_{\boldsymbol{\theta}} \sum_{t\in\mc{T}}\xi_{c_t}\zeta_{v_t}\ell(z_t/s;y_t) +\frac{\alpha}{2s^2} \|\boldsymbol{\theta}\|^2,
\end{align}
where $v_t$ is the \gls{mov}-category index of the variable $d_t$. To remove the ambiguity of the solution, we set $\xi_0=1$ and $\zeta_0=1$. The parameters $\xi_c$, $\zeta_v$, $\eta$, $\kappa$, and $\alpha$ will again be chosen using the \gls{alo} approach described in \secref{Sec:regularized.batch}, that is, by optimizing the log-score criterion \eqref{LS.avg}. The results shown in \tabref{tab:solutions.batch.mov} allow us to conclude that:
\begin{itemize}
\item The optimization of the \gls{mov} weights $\zeta_v$ (while keeping $\xi_c=1$) yields $\mf{LS}_\tr{opt}=0.937$, whereas the optimization of $\xi_c$ (with $\zeta_v=1$) yields $\mf{LS}_\tr{opt}=0.948$ (see \tabref{tab:solutions.batch}). By comparing them, we see that weighting the \gls{mov}-categories is more beneficial than weighting the game categories; there is thus little improvement in considering the category-related weights, $\xi_c$.
\item The optimization indicates that the $\zeta_v$ defined by \eqref{K.d.elorating} is suboptimal. In particular, the optimal \gls{mov} weights $\zeta_v$ grow monotonically (as foreseen by the heuristics) only for $|d|\ge 1$, and the draws (\ie $|d_t|=0$) have a weight which is more important than the weights of the events $|d_t|=1$; thus, these two events should not be merged together, nor should we impose a particular functional form on the weights.
\item The best improvement in the prediction is obtained again by optimizing the parameters $\eta$ and $\kappa$ of the Davidson model together with the \gls{mov} weights $\zeta_v$.
\end{itemize}
\begin{table}[tb]
\centering
\begin{tabular}{c|c||c|c|c||c|c|c|c|c|c|c|c||c}
$\mf{LS}_{\tr{opt}}$ & $V$ & $\alpha$ & $\eta$ & $\kappa$ & $\boldsymbol{\xi}$ & $\zeta_0$ & $\zeta_1$ & $\zeta_2$ & $\zeta_3$ & $\zeta_4$ & $\zeta_5$ & $\zeta_6$ & $\mf{ACC} [\%]$\\
\hline
$0.949$ & $6$ & $0.9$ &\ccol $0$ &\ccol $2.0$ &\ccol $\boldsymbol{1}$ &\ccol $1$ &\ccol $1$ &\ccol $1.5$ &\ccol $1.75$ &\ccol $1.875$ &\ccol $2.0$ & $4.2$ & $56$\\
$0.937$ & $6$ & $0.2$ &\ccol $0$ &\ccol $2.0$ &\ccol $\boldsymbol{1}$ & \ccol $1$ & $0.3$ & $0.5$ & $0.7$ & $1.0$ & $1.3$ & $2.4$ & $55$\\
$0.935$ & $6$ & $0.2$ &\ccol $0$ &\ccol $2.0$ & $\hat{\boldsymbol{\xi}}$ & \ccol $1$ & $0.3$ & $0.5$ & $0.7$ & $1.0$ & $1.5$ & $2.3$ & $55$\\
\hline
$0.906$ & $6$ & $0.2$ & $0.4$ &\ccol $2.0$ &\ccol $\boldsymbol{1}$ & \ccol $1$ & $0.3$ & $0.5$ & $0.7$ & $1.0$ & $1.6$ & $3.0$ & $56$\\
$0.852$ & $6$ & $0.3$ & $0.3$ & $0.8$ &\ccol $\boldsymbol{1}$ & \ccol $1$ & $0.3$ & $0.5$ & $0.7$ & $1.0$ & $1.6$ & $3.0$ & $62$\\
$0.853$ & $4$ & $0.3$ & $0.3$ & $0.8$ &\ccol $\boldsymbol{1}$ & \ccol $1$ & $0.3$ & $0.5$ & $0.8$ & $1.4$ & $\times$ & $\times$ & $62$\\
$0.854$ & $2$ & $0.2$ & $0.3$ & $0.8$ &\ccol $\boldsymbol{1}$ & \ccol $1$ & $0.3$ & $0.8$ & $\times$ & $\times$ & $\times$ & $\times$ & $62$\\
$0.857$ & $1$ & $0.2$ & $0.3$ & $0.8$ &\ccol $\boldsymbol{1}$ & \ccol $1$ &
$0.6$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $61$
\end{tabular}
\caption{Batch-rating parameters obtained via minimization of the log-score \eqref{LS.avg} with weighting of the \gls{mov} variables. The parameters ($\alpha$, $\kappa$, $\eta$, $\boldsymbol{\xi}$, $\zeta_v$) are either fixed (shadowed cells), or obtained through optimization; to save space, in the sole case when the parameters $\xi_c$ are optimized, their optimal values are gathered in the vector $\hat\boldsymbol{\xi}=[1.0, 1.0, 0.9, 1.2, 1.1, 1.8, 1.9, 1.4, 5.6]$.}
\label{tab:solutions.batch.mov}
\end{table}
\subsection{MOV via modelling}\label{Sec:Skellam.model}
A different approach to deal with the \gls{mov} relies on integrating the latter into a formal model relating the skills $\boldsymbol{\theta}_t$ and the observed \gls{mov} variable $d_t$. A simple approach proposed in \citet{Karlis08} relies on direct modelling of the goal difference using Skellam's distribution
\begin{align}
\label{Skellam.proba}
\PR{d_{t}=d|\boldsymbol{\theta}_t} &=L(z_t;d_t)\\
&= \mr{e}^{-(\mu_{\tr{h},t}+\mu_{\tr{a},t})}\left(\frac{\mu_{\tr{h},t}}{\mu_{\tr{a},t}}\right)^{d/2} I_{|d|}(2\sqrt{\mu_{\tr{h},t}\mu_{\tr{a},t}}),
\end{align}
where $I_v(t)$ is the modified Bessel function of order $v$, while $\mu_{\tr{h},t}$ and $\mu_{\tr{a},t}$ are the means of the Poisson variables modelling the home and away goals. The latter are functions of the skills' difference $z_t$, \citep[Sec.~2.2]{Karlis08}
\begin{align}
\label{mu.h.a}
\mu_{\tr{h},t}& = \mr{e}^{c+ z_t + b\eta}, \quad \mu_{\tr{a},t}=\mr{e}^{c-z_t-b\eta},
\end{align}
where $c$ is a constant and, as before, $\eta$ is the \gls{hfa} coefficient.\footnote{For the home team we add -- and for the away team subtract -- $b\eta$ in the exponent. This is different from \citep{Karlis08}, \citep{Lasek20}, where only the home team benefits from the \gls{hfa} boost while the away team is not penalized, see \citep[Eq.~(2.2)-(2.3)]{Karlis08}.
Of course, we can rewrite \eqref{mu.h.a} as $\mu_{\tr{h},t} = \mr{e}^{c'+ z_t + b\eta'}$, $\mu_{\tr{a},t} = \mr{e}^{c'- z_t}$ with $c'=c-b\eta$ and $\eta'=2\eta$, but this makes sense only when the \gls{hfa} is always present, as then $b\eta=\eta$. While this condition holds in the context of the football leagues considered in \citet{Karlis08} and in \citet{Lasek20}, it is not the case in the international \gls{fifa} games, which can be played on neutral venues.} The model \eqref{Skellam.proba} is a particular case of a more general form shown in \citet{Karlis08}, which allows one to model the offensive and the defensive skills. Here, however, we are interested in rating and thus one skill per team should be used. As noted in \citet{Ley19}, \citet{Lasek20}, this offers sufficient prediction capability while avoiding the over-parametrization due to doubling the number of skills. Using \eqref{mu.h.a} in \eqref{Skellam.proba}, the following log-likelihood is obtained
\begin{align}
\label{log.likelihood.Skellam}
\ell(z;d) &=-\log L(z;d)\\
&= (\mu_{\tr{h}}+\mu_{\tr{a}}) - d (z+b\eta) - 2\mr{e}^c - \log \tilde{I}_{|d|}(2\mr{e}^c)
\end{align}
where, for numerical stability, it is convenient to use the exponentially scaled form of the Bessel function, $\tilde{I}_v(t)=I_v(t)\mr{e}^{-t}$, available in many computation packages. The derivative of \eqref{log.likelihood.Skellam} is given by
\begin{align}
\label{grad.Poisson}
g(z;d)&=\frac{\dd}{\dd z} \ell(z; d)=-(d - \ov{F}(z)),\\
\label{Ex.score.Poisson}
\ov{F}(z) &= \mu_{\tr{h}}-\mu_{\tr{a}}=\mr{e}^{c}(\mr{e}^{z+b\eta}-\mr{e}^{-z-b\eta}).
\end{align}
The batch rating then consists in solving the following problem:
\begin{align}
\label{MAP.optimization.Poisson}
\hat\boldsymbol{\theta}
&=\mathop{\mr{argmin}}_{\boldsymbol{\theta}} \sum_{t\in\mc{T}}\ell(z_t/s;d_t) +\frac{\alpha}{2s^2} \|\boldsymbol{\theta}\|^2
\end{align}
and the \gls{sg} implementation of the \gls{ml} principle produces the algorithm
\begin{align}\label{Poisson.SG}
\boldsymbol{\theta}_{t+1} \leftarrow \boldsymbol{\theta}_{t} + K\boldsymbol{x}_t\big(d_t-\ov{F}(z_t/s)\big),
\end{align}
which is again written in a form similar to the \gls{fifa} rating algorithm, where the goal difference $d_t$ plays the role of the ``score'', and $\ov{F}(z_t/s)=\Ex[d_t|z_t]$ is the expected score. The algorithm \eqref{Poisson.SG} can also be obtained by applying the Poisson model to the goals scored by each of the teams \citep{Lasek20}. To calculate the log-score, we have to merge the events $d<0$ (away win) and $d>0$ (home win). Since closed-form formulas do not exist, we approximate the sums by truncation
\begin{align}
\label{log.score.t.Skellam}
\mf{m}^{\tr{ls}}(z;\mf{A})&= -\log \sum_{d=-D}^{-1} L(z;d), \quad \mf{m}^{\tr{ls}}(z;\mf{D})= -\log L(z;0), \quad \mf{m}^{\tr{ls}}(z;\mf{H})= -\log \sum_{d=1}^{D} L(z;d),
\end{align}
where we used $D=50$, which guaranteed that $|1-\sum_{d=-D}^D L(z;d)|<10^{-4}$. The results shown in \tabref{tab:solutions.batch.Skellam} indicate that, with this very simple approach (with only two model parameters to optimize), we are able to improve over the \gls{mov}-weighting strategy; this should be attributed to the use of a formal skills-outcome model. The price to pay for the improvement lies in the change of the entire algorithm and in abandoning the legacy of the Elo algorithm. Moreover, possible implementation issues may arise since the expected score \eqref{Ex.score.Poisson} is theoretically unbounded.
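The Skellam model may be sketched numerically as follows (Python; we use a small library-free power-series Bessel function instead of the scaled routine mentioned in the text, and the function names are ours):

```python
import math

def bessel_i(v, t, terms=60):
    """Modified Bessel function I_v(t), integer v >= 0, via its power series."""
    return sum((t / 2.0) ** (2 * k + v) / (math.factorial(k) * math.factorial(k + v))
               for k in range(terms))

def skellam_nll(z, d, c=0.0, eta=0.0, b=0):
    """Negated log-likelihood ell(z; d); equivalent to eq.
    (log.likelihood.Skellam) with the Bessel term kept unscaled."""
    mu_h = math.exp(c + z + b * eta)
    mu_a = math.exp(c - z - b * eta)
    return (mu_h + mu_a) - d * (z + b * eta) \
        - math.log(bessel_i(abs(d), 2.0 * math.exp(c)))

def expected_goal_diff(z, c=0.0, eta=0.0, b=0):
    """Expected goal difference F_bar(z) = mu_h - mu_a, cf. eq. (Ex.score.Poisson)."""
    return math.exp(c) * (math.exp(z + b * eta) - math.exp(-z - b * eta))

# sanity check: the truncated probabilities sum to one and their mean
# equals the expected score F_bar(z)
probs = {d: math.exp(-skellam_nll(0.3, d)) for d in range(-30, 31)}
```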
Whether the improvement of the log-score from $\mf{LS}=0.857$ (in the \gls{mov} weighting, see \tabref{tab:solutions.batch.mov}) to $\mf{LS}=0.845$ in the Skellam \gls{mov} model is worth the change and the implementation risks is at least debatable.
\begin{table}[tb]
\centering
\begin{tabular}{c||c|c|c||c}
$\mf{LS}_{\tr{opt}}$ & $\alpha$ & $\eta$ & $c$ & $\mf{ACC} [\%]$ \\
\hline
$0.845$ & $0.21$ & $0.20$ & $0$ & $61$
\end{tabular}
\caption{Batch-rating parameters obtained via minimization of the log-score \eqref{LS.avg} using the Skellam model \eqref{log.likelihood.Skellam}.}
\label{tab:solutions.batch.Skellam}
\end{table}
\section{On-line rating}\label{Sec:on-line}
\begin{table}[tb]
\centering
\begin{tabular}{c|c|c|c}
original & no shootout & no knockout & no shootout\\
& rule & rule &/knockout rules\\
\hline
BEL (1832.3) & BEL (1831.0) & BRA (1775.9) & FRA (1768.4) \\
BRA (1820.4) & BRA (1817.5) & FRA (1770.1) & BRA (1767.7) \\
FRA (1779.2) & FRA (1778.2) & BEL (1759.2) & BEL (1757.4) \\
ITA (1750.5) & ENG (1740.2) & ITA (1730.5) & ITA (1711.1) \\
ENG (1750.2) & ITA (1733.0) & ENG (1711.9) & ENG (1701.3)
\end{tabular}
\caption{Ranking of the top teams: Belgium (BEL), Brazil (BRA), England (ENG), France (FRA), and Italy (ITA). The original \gls{fifa} algorithm and its modified rules are considered.}
\label{tab:teams.final.ranking}
\end{table}
Before starting a metrics-based comparison of the on-line algorithms, we address in \secref{Sec:knock.shoot.rules} the use of the knockout/shootout rules \eqref{knockout.rule}-\eqref{shootout.rule} and, in \secref{Sec:Scale.adjustment}, the practical issue of setting the scale.
\subsection{Effect of the knockout/shootout rules}\label{Sec:knock.shoot.rules} \tabref{tab:teams.final.ranking} compares the ranking (of the top-five teams) obtained using the \gls{fifa} algorithm (first column) to the rating resulting from the modified algorithm in which we a) eliminate the shootout rule (second column), b) eliminate the knockout rule (third column), and c) eliminate both rules (fourth column). The differences, most notably the removal of Belgium from the first place, are due to the different number of times the teams benefited from the knockout rules (although the shootout rule certainly has an effect on the final rating too). Indeed, by analyzing the results of the games, we observed that in the original ranking, Belgium (BEL) benefited four times from the knockout rule for a total of 85 points (which would be lost without the rule \eqref{knockout.rule}), Brazil (BRA) and England (ENG) benefited twice for a total of 53 and 80 points, respectively, while both France (FRA) and Italy (ITA) benefited only once, gaining 14 points each.\footnote{Of course, due to the temporal relationships, eliminating the knockout/shootout rules is not the same as evaluating the points (not lost in the original algorithm) and discarding them from the final results.} What is important is that the points-preserving knockout rule ignores the direct comparison between the teams. In fact, the games in which Belgium was not penalized (for losing in knockout stages) were played against France (twice) and against Italy (twice as well). Thus, despite direct evidence indicating that France and Italy were able to beat Belgium, the knockout rule preserved the points earned by Belgium in other games.
In fact, such a situation is not surprising: we expect the teams which compete for the top ranking spots to also be likely to make it to the final stages of the important competitions (in the case of Belgium's games: World Cup 2018, Euro 2020, and UEFA Nations League 2021) and then play against each other. While these games provide direct comparison results, the current knockout rule preserves the points of the losing team. Whether this is fair and desirable may be debatable, especially considering that the knockout/shootout rules are not rooted in any formal modelling principle, and were most likely introduced to compensate for the increased value of $I_c$ in the advanced stages of competitions. \subsection{Scale adjustment}\label{Sec:Scale.adjustment} The scale is obviously irrelevant in the batch optimization, and the on-line update can also be written in a scale-invariant manner by dividing \eqref{SG.algorithm} by $s$: \begin{align} \boldsymbol{\theta}'_{t+1} &\leftarrow \boldsymbol{\theta}'_{t} - K'\xi_{c_t}\boldsymbol{x}_tg (z'_t; y_t)\\ z'_t &= z_t/s\\ \boldsymbol{\theta}'_t &= \boldsymbol{\theta}_t/s\\ K' &= K/s; \end{align} in other words, for the same scale-invariant initialization $\boldsymbol{\theta}'_0$ and using the same step $K'$ we will obtain the same results $\boldsymbol{\theta}'_t$. However, in the \gls{fifa} ranking, a non-zero initialization $\boldsymbol{\theta}_0$ was determined in advance and thus $\boldsymbol{\theta}'_0$ is not scale-invariant. Thus, given the initialization at hand, the question is how to determine the scale. In general, this is, of course, a difficult question, but insight may be gained by assuming that the initialization corresponds to the ``optimal'' solution, \eg $\hat\boldsymbol{\theta}$ obtained in the batch optimization with a given scale $s_0$.
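The scale invariance above can be verified numerically. The sketch below uses a plain Elo-type update with the logistic expected score, a simplification of \eqref{SG.algorithm} that omits $\xi_{c_t}$ and replaces $g(\cdot)$ with the classical expected-score form; all numbers are illustrative:

```python
def run(theta0, K, s, games):
    # Elo-type on-line update at scale s: theta_i += K * (y - expected score)
    th = list(theta0)
    for i, j, y in games:
        z = th[i] - th[j]
        e = 1.0 / (1.0 + 10.0 ** (-z / s))   # logistic expected score
        th[i] += K * (y - e)
        th[j] -= K * (y - e)
    return th

games = [(0, 1, 1.0), (1, 2, 0.0), (0, 2, 1.0), (2, 1, 0.5)]
a = run([1500.0, 1500.0, 1500.0], K=40.0, s=400.0, games=games)
b = run([750.0, 750.0, 750.0], K=20.0, s=200.0, games=games)
# halving the initialization, the step K and the scale s halves every rating,
# i.e. the rescaled skills theta / s follow exactly the same trajectory
assert all(abs(x - 2.0 * y) < 1e-9 for x, y in zip(a, b))
```

Dividing the initialization, the step $K$, and the scale $s$ by the same factor rescales the whole trajectory by that factor, which is exactly the invariance stated above.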
It is easy to see that using $s>s_0$ will force the algorithm to significantly change $\boldsymbol{\theta}_t$ (attainable with large values of the adaptation step, $K$); the same will happen for $s<s_0$ because the optimal estimates $\boldsymbol{\theta}_t$ will have to be scaled down. Since scaling the skills up/down changes their empirical moments, we suggest choosing the scale $s$ in a moment-preserving manner. To this end we define the empirical standard deviation of the skills \begin{align} \sigma_t = \sqrt{\|\hat\boldsymbol{\theta}_t-\ov{\hat\boldsymbol{\theta}}_t\|^2/M} \end{align} where $\ov{\hat\boldsymbol{\theta}}_t=(\sum_{m=1}^M\hat{\theta}_{t,m})/M$ is the empirical mean, and postulate that, at the initialization and at the final step, we have $\sigma_0 \approx \sigma_T$. In fact, the initialization used by \gls{fifa} yields $\sigma_0 = 220$ and, after running the \gls{fifa} algorithm, we obtain $\sigma_T = 250$, relatively close to the initial value $\sigma_0$. Changing the scale $s$, we will obtain different values of $\sigma_T$, so the idea is to run the algorithms for different values of the scale $s$, \eg as multiples of $100$, and to choose the one which yields a standard deviation $\sigma_T \approx \sigma_0$. In practice this has to be done using historical data \emph{before} the new rating is deployed, but in our case we could do it in hindsight. In this manner we found $s=200$ to be suitable for the Elo-Davidson algorithm (we obtained \data{$\sigma_T=219$} for the unweighted version and $\sigma_T=221$ for the \gls{mov}-weighted approach), and $s=300$ to be well suited for Skellam's algorithm (where \data{$\sigma_T=225$} was obtained). This also indicates that the scale $600$ was too large for the \gls{fifa} rating. This can be noted by comparing, in \tabref{tab:solutions.online.rating}, the result of \gls{fifa} with $\xi_c=1$ to the results of the \gls{sg} (with $\eta=0$ and $\kappa=2$).
Both are essentially the same algorithms (although \gls{fifa} uses the shootout/knockout rules, which have negligible impact on the performance) and the only difference resides in the scale. Since the scale $s=200$ in the Elo-Davidson algorithm corresponds to the scale $s=400$ in the \gls{fifa} algorithm, the latter would perform better with the scale $s=400$. This effect, however, appears only due to the limited observation window we have at our disposal and will vanish after a sufficiently large number of games. \begin{table}[t] \centering \begin{tabular}{c||c||c|c|c||c} algorithm & $\mf{LS}_{\tr{opt}}$ & $K$ & $\eta$ & $\kappa$ & $\mf{ACC} ~[\%]$\\ \hline \gls{fifa}, $\xi_c$ from \tabref{Tab:importance_levels}& $0.951$ & \ccol $5$ & \ccol $0$ & \ccol $2$ & $50$\\ \gls{fifa}, $\xi_c=1$ & $0.933$ & $55$ & \ccol $0$ & \ccol $2$ & $52$\\ \hline \multirow{3}{*}{\gls{sg}} & $0.917$ & $35$ & \ccol $0$ & \ccol $2$ & $54$\\ & $0.892$ & $35$ & $0.4$ & \ccol $2$ & $58$\\ & $0.841$ & $35$ & $0.3$ & $0.9$ & $61$ \end{tabular} \vskip 0.2cm a) Performance of the algorithms: \gls{fifa} ($s=600$) and the Elo-Davidson model with \gls{sg} ($s=200$) \vskip 0.2cm \begin{tabular}{c|c||c|c|c|c|c|c|c||c} $\mf{LS}_{\tr{opt}}$ & $V$ & $K$ & $\eta$ & $\kappa$ & $\zeta_0$ & $\zeta_1$ & $\zeta_2$ & $\zeta_3$ & $\mf{ACC}~[\%]$\\ \hline $0.841$ & $1$ & $35$ & $0.3$ & $0.9$ & $1.0$ & $0.9$ & $\times$ & $\times$ & $61$\\ $0.838$ & $2$ & $35$ & $0.3$ & $0.9$ & $1.0$ & $0.6$ & $1.3$ & $\times$ & $62$\\ $0.837$ & $3$ & $40$ & $0.3$ & $0.9$ & $1.0$ & $0.5$ & $0.8$ & $1.8$ & $62$\\ \end{tabular} \vskip 0.2cm b) \gls{sg} with the \gls{mov} weighting, $s=200$ \vskip 0.2cm \begin{tabular}{c||c|c|c||c} $\mf{LS}_{\tr{opt}}$ & $K$ & $\eta$ & $c$ & $\mf{ACC} ~[\%]$ \\ \hline $0.827$ & $7.5$ & $0.2$ & $-0.1$ & $62$ \end{tabular} \vskip 0.2cm c) \gls{sg} implementing Skellam's model for the \gls{mov}, $s=300$ \vskip 0.2cm \caption{Parameters and performance of the on-line rating \gls{sg} algorithms obtained by
minimizing the log-score \eqref{LS.final} for a) the Davidson model, b) the \gls{mov}-weighting strategy from \secref{Sec:MOV}, and c) the \gls{mov}-modelling strategy from \secref{Sec:Skellam.model}.} \label{tab:solutions.online.rating} \end{table} \subsection{Evaluation of the algorithms}\label{Sec:online.algorithms.metrics} To evaluate the \gls{sg} algorithms for the models studied in the batch context, we will use the log-score and the accuracy metrics calculated on the second half of the games in the considered time period \begin{align}\label{LS.final} \mf{LS} &= \frac{2}{T}\sum_{t=T/2+1}^{T}\mf{m}^{\tr{ls}}(z_t, y_t)\\ \mf{ACC} &= \frac{2}{T}\sum_{t=T/2+1}^{T}\mf{m}^{\tr{acc}}(z_t, y_t). \end{align} We consider the \gls{sg} algorithm based on the Davidson model (\tabref{tab:solutions.online.rating}a), the Davidson model with the \gls{mov}-weighting (\tabref{tab:solutions.online.rating}b), and Skellam's model algorithm (\tabref{tab:solutions.online.rating}c). In all cases but the original \gls{fifa} algorithm, we ignore the category-weighting (\ie we use $\xi_c=1$) because, as we have already shown, its effect is negligible. This is clearly seen in the first part of \tabref{tab:solutions.online.rating}a, where using the \gls{fifa} weighting we obtain worse results than when the weighting is ignored. This is essentially the same result as the one we have shown in \tabref{tab:solutions.FIFA}, but we repeat it here to report the log-score metric, which we could not calculate without first introducing the Davidson model underlying the \gls{fifa} algorithm. The results indicate that: \begin{itemize} \item The most notable improvements are due, in similar measure, to two elements: the introduction of the \gls{hfa} coefficient $\eta$ and the explicit use of the Davidson model (and thus, the optimization of the coefficient $\kappa$).
\item Additional small but still perceivable gains are obtained by introducing the \gls{mov}-weighting, where, following the lesson learnt in \secref{Sec:MOV}, we weight the draws and the home/away wins independently. \item The \gls{mov}-modelling using Skellam's distribution again brings a small benefit. \end{itemize} We present in \tabref{tab:teams.final.ranking.new} the rating obtained for the top teams via the new rating algorithms. Of course, due to the smaller scale we used, the skills have smaller values and should not be compared directly to those from \tabref{tab:teams.final.ranking}, but the ranking itself is of interest: the top teams from the \gls{fifa} ranking are still present (FRA, BRA, BEL), but this time Argentina (ARG), which was in sixth place in the previous rankings, is now consistently at or above third place. We can also see that the differences between the rating values are much less pronounced. \begin{table}[tb] \centering \begin{tabular}{c|c|c} Davidson & MOV weights & Skellam's model\\ \hline FRA (1683.5) & FRA (1690.8) & BRA (1596.0) \\ BRA (1673.1) & BRA (1677.4) & ARG (1585.8) \\ ARG (1668.6) & ARG (1677.0) & BEL (1546.2) \\ BEL (1664.9) & BEL (1666.5) & POR (1541.2) \\ ITA (1657.7) & ITA (1665.6) & ESP (1540.7) \end{tabular} \caption{Ranking of the top teams using the proposed algorithms.} \label{tab:teams.final.ranking.new} \end{table} \section{Conclusions}\label{Sec:Conclusions} In this work we analyzed the \gls{fifa} ranking using the methodology conventionally employed in probabilistic modelling and inference. In the first step, we identified the model relating the outcomes (game results) to the parameters which have to be optimized (the skills of the teams). More precisely, we have shown that the \gls{fifa} algorithm can be formally derived as the \acrfull{sg} optimization of the weighted \acrfull{ml} criterion in the Davidson model \citep{Davidson70}.
This first step allows us to define metrics related to the predictive performance of the algorithms we study. This is particularly important in the case of the \gls{fifa} ranking algorithm because it does not model the outcomes of the game but only explicitly specifies the expected score, which is not sufficient to precisely evaluate the rating results. It also allows us to apply the batch approach to rating and skills' estimation. This conventional machine learning strategy frees us from the considerations related to the scale, the initialization, or the modelling of the skills' dynamics. Using the batch rating, we have shown that the game-category weighting is negligible at best, and counterproductive at worst; the latter is the case of the weighting used by the \gls{fifa} rating. This observation is interesting in its own right because, while on the one hand the concept of weighting is also used in the rating literature, \eg \citep{Ley19}, on the other, the literature does not show any evidence that it is in any way beneficial, and our findings consistently indicate the contrary. We next considered extensions of the algorithm by including the \gls{hfa} and optimizing the parameter responsible for the draws. These two elements seem to be particularly important from the point of view of the performance of the rating algorithm. While the \gls{hfa} is a well-known element, already considered by \gls{fifa} in \citet{fifa_rating_W}, the possibility of generalizing the Elo algorithm by using Davidson's model was only recently shown in \citet{Szczecinski20}. We also evaluated the possibility of using the \acrfull{mov} given by the goal differential, where we analyzed the weighting strategy and the modelling based on Skellam's distribution. These two methods further improve the results at the cost of higher complexity. Here, the formal optimization of the weighting parameters also yields interesting and somewhat counter-intuitive results.
Namely, we have shown that games won by a small margin should have smaller weights than tied games. This stands in stark contrast to the weighting strategies proposed before, \eg in \citet{Hvattum10}, \citet{Silver14}, and \citet{Kovalchik20}, which use weights that increase monotonically with the margin. Finally, we evaluated the heuristic shootout/knockout rules which are used in the \gls{fifa} rating. Since their impact on the overall performance is small and they may distort the relationship between the ratings of the strong teams, which often face each other in the final stages of the competitions, their usefulness is questionable. In particular, eliminating the knockout rule would strip Belgium of its first-place position in the current \gls{fifa} ranking due to the multiple losses Belgium suffered against the current top teams (\eg Italy, France). \subsection{Recommendations}\label{Sec:Recommedations} Given the analysis and the observations we made, if the \gls{fifa} rating were to be changed, the following steps are recommended: \begin{enumerate} \item Add the \acrfull{hfa} parameter to the model because playing at the home venue is a strong predictor of victory. Not only is this well-known fact already exploited in the Women's \gls{fifa} ranking, but such a modification is most likely the simplest and least debatable element. In our view, it is surprising that the current rating adopted in 2018 does not include the \gls{hfa}. \item Use an explicit model to relate the skills to the outcomes. Not only would it add expressiveness by providing an explicit predicted probability for each outcome, but it also improves the prediction results. Note that the rating algorithm introduced recently by \gls{fivb} adopts such an approach and specifies the probability of each of the game outcomes.
In the context of the \gls{fifa} ranking, the Davidson model we used in this work is an excellent candidate for that purpose, as it results in a natural generalization of the Elo algorithm, preserving the legacy of the current algorithm. \item Remove the weighting of the games according to their assumed importance because the data does not provide any evidence of its utility, or rather indicates that the weighting in its current form is counterproductive. If the concept of game importance is of an extra-statistical nature (such as entertainment), it is preferable to diminish its role, \eg by shrinking the gap between the largest and the smallest values of $\xi_c$ used. \item Remove the shootout and knockout rules, which are not rooted in any sound statistical principle. As far as the knockout rule is concerned, while the intent to protect the rating of the teams which manage to qualify for the knockout stage is clear, we may argue that the penalty due to losing a knockout game is aggravated by the increased weighting of these games. Therefore, removing the weight, as we postulate, would also eliminate the very reason to protect the teams' points with the knockout rule. Regarding the shootout rule, the small frequency of events where it can be applied and the marginal changes in the score imposed by the rule make its impact rather negligible. Its fairness is again debatable because there is little evidence relating the skills of the teams to the outcome of the shootout. \item If the rating were to consider the \gls{mov}, the simplest solution lies in weighting the update step using the goal differential. On the other hand, the modification based on changing the model to Skellam's distribution may cause numerical problems, and the relatively small performance gains hardly justify the added complexity.
Alternatively, the \gls{mov} may be incorporated using solutions similar to those already considered in the Women's \gls{fifa} ranking. Again, the latter should be studied, \eg using the methodology of this work, basing the results on a formal probabilistic model. \end{enumerate}
\section{Introduction}\label{sec:intro} Given partially observed pairwise comparison data from $n$ players, we are interested in ranking the players according to their skills by aggregating the comparison results. This high-dimensional statistical estimation problem has important applications in many areas such as recommendation systems \cite{baltrunas2010group, cao2018attentive}, sports and gaming \cite{beaudoin2018computationally, csato2013ranking, minka2018trueskill, herbrich2007trueskill, motegi2012network,sha2016chalkboarding}, web search \citep{dwork2001rank,cossock2006subset}, social choices \cite{mcfadden1973conditional, mcfadden2000mixed, saaty1990decision, louviere2000stated, manski1977structure}, psychology \cite{choo2004common, thurstone1927law, luce1977choice}, information retrieval \cite{liu2011learning,cao2007learning}, etc. In this paper, we focus on arguably one of the most widely used parametric models, the Bradley-Terry-Luce (BTL) model \citep{bradley1952rank,luce2012individual}. That is, we observe $L$ games played between $i$ and $j$, and the outcome is modeled by \begin{equation} y_{ijl}\stackrel{ind}{\sim}\text{Bernoulli}\left(\frac{w_i^*}{w_i^*+w_j^*}\right),\quad l=1,\cdots, L. \label{eq:BTL-w} \end{equation} We only observe outcomes from a small subset of pairs. This subset $E$ is modeled by edges generated by an Erd\H{o}s-R\'{e}nyi random graph \citep{erdHos1960evolution} with connection probability $p$ on the $n$ players. More details of the model will be given in Section \ref{sec:ml}. With the observations $\{y_{ijl}\}_{(i,j)\in E,l\in[L]}$, our goal is to optimally recover the ranks of the skill parameters $w_i^*$'s. The literature on ranking under the BTL model has been mainly focused on the so-called top-$k$ ranking problem. Let $r^*$ be the rank vector of the $n$ players. In other words, $r^*$ is a permutation such that $r_i^*=j$ if $w_i^*$ is the $j$th largest number among $\{w_i^*\}_{i\in[n]}$. 
The goal of top-$k$ ranking is to recover the set $\{i\in[n]: r_i^*\leq k\}$ from the pairwise comparison data. Theoretical properties of the top-$k$ ranking problem have been studied by \cite{chen2015spectral,jang2016top,chen2017competitive,jang2017optimal,chen2019spectral} and references therein. Recently, it was shown by \cite{chen2019spectral} that both the MLE and the spectral ranking algorithm proposed by \cite{negahban2017rank} can exactly recover the set of top-$k$ players with high probability under optimal sample complexity up to some constant factor. In terms of partial recovery, the minimax rate of the problem under a normalized Hamming distance was derived by \cite{chen2020partial}. In this paper, we study the problem of \textit{full ranking}, the estimation of the entire rank vector $r^*$. To the best of our knowledge, a theoretical analysis of full ranking under the BTL model has not been considered in the literature yet. We rigorously formulate the full ranking problem from a decision-theoretic perspective, and derive the minimax rate with respect to a loss function that measures the difference between two permutation vectors. To be specific, the main result of the paper shows that \begin{equation} \inf_{\widehat{r}\in\mathfrak{S}_n}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}\textsf{K}(\widehat{r},r^*)\asymp \begin{cases} \exp\left(-\Theta(Lp\beta)\right), & Lp\beta > 1, \\ n\wedge\sqrt{\frac{1}{Lp\beta}}, & Lp\beta \leq 1,\\ \end{cases}\label{eq:minimax-BTL-intro} \end{equation} where $\mathfrak{S}_n$ is the set of all rank vectors of size $n$, $\textsf{K}(\widehat{r},r^*)$ is the \textit{Kendall's tau distance} that counts the number of inversions between two ranks, and $\beta$ is the minimal gap between skill parameters of different players. The precise definitions of these quantities will be given in Section \ref{sec:ml}. The minimax rate (\ref{eq:minimax-BTL-intro}) exhibits a transition between an exponential rate and a polynomial rate.
This is a unique phenomenon in the estimation of a full rank vector. In contrast, under the same BTL model, the minimax rate of estimating the skill parameters is always polynomial \citep{negahban2017rank,chen2019spectral}, and the minimax rate of top-$k$ ranking is always exponential \citep{chen2020partial}. Whether (\ref{eq:minimax-BTL-intro}) is exponential or polynomial depends on the value of $Lp\beta$, which plays the role of the signal-to-noise ratio. When $Lp\beta > 1$, the exponential minimax rate is a consequence of the discreteness of a rank vector. On the other hand, when $Lp\beta \leq 1$, the discrete nature of ranking is blurred by the noise, and thus estimating the rank vector is effectively estimating a continuous parameter, which leads to a polynomial rate. Achieving the minimax rate (\ref{eq:minimax-BTL-intro}) is a nontrivial problem. To this end, we propose a \textit{divide-and-conquer} algorithm that first partitions the $n$ players into several leagues and then computes a local MLE using games in each league. Finally, a full rank vector is obtained by aggregating local ranking results from all leagues. The divide-and-conquer technique is the basis of efficient algorithms for all kinds of sorting problems \citep{sedgewick1978implementing,katajainen1997meticulous,knuth1997art}. Our adaptation of this classical technique to optimal full ranking is motivated by both information-theoretic and computational considerations. From an information-theoretic perspective, games between players whose skill parameters are significantly different from each other have little effect on the final ranking result. This phenomenon can be revealed by a simple local Fisher information calculation for each player.
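To make the information-theoretic argument concrete: writing the win probability of a single game as $\psi(\Delta)$, where $\psi$ is the sigmoid and $\Delta$ the gap between the players' log-skills, the Fisher information this Bernoulli observation carries about either skill is $\psi'(\Delta)=\psi(\Delta)(1-\psi(\Delta))$, which is maximal for equally matched players and decays exponentially in $|\Delta|$. A quick numerical check (illustrative sketch):

```python
import math

def psi(x):
    # sigmoid link of the BTL model
    return 1.0 / (1.0 + math.exp(-x))

def fisher_info(delta):
    # Fisher information about a player's skill carried by one game
    # against an opponent at skill gap delta (Bernoulli with mean psi(delta))
    p = psi(delta)
    return p * (1.0 - p)

# maximal for equally matched players, negligible for lopsided pairings
assert fisher_info(0.0) == 0.25
assert fisher_info(7.0) < 0.01 * fisher_info(0.0)
```

This is why games within a league of comparable players are far more informative than games across leagues.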
The league partition step groups players with similar skill parameters together, thus maximizing information in the subsequent local MLE step. From a computational perspective, the local MLE computed within each league involves an objective function whose Hessian matrix is well conditioned, a property that is crucial for efficient convex optimization. The description and the analysis of our algorithm are given in Section \ref{sec:dnc}. Before closing this introduction, let us remark that the more general problem of permutation estimation has also been considered in various other settings in the literature \citep{braverman2008noisy,braverman2009sorting,collier2013permutation,collier2016minimax,pananjady2016linear,gao2017phase,mao2018minimax,gao2019iterative,pananjady2020worst}. For instance, in the problem of noisy sorting \citep{braverman2009sorting,mao2018minimax}, one assumes a data generating process that satisfies $\mathbb{P}(y_{ijl}=1)>\frac{1}{2}+\gamma$ when $r_i^*<r_j^*$. In the feature matching problem \citep{collier2016minimax,collier2013permutation}, it is assumed that $X_i-Y_{r_i^*}\sim{ \mathcal{N} }(0,\sigma^2_i)$ for some permutation $r^*$, and the goal is to match the two data sequences $X$ and $Y$ by recovering the unknown permutation.
In Section \ref{sec:disc}, we discuss a few extensions and future projects that are related to the paper. Finally, Section \ref{sec:pf} collects technical proofs of the results of the paper. We close this section by introducing some notation that will be used in the paper. For an integer $d$, we use $[d]$ to denote the set $\{1,2,...,d\}$. Given two numbers $a,b\in\mathbb{R}$, we use $a\vee b=\max(a,b)$ and $a\wedge b=\min(a,b)$. For any $x\in\mathbb{R}$, $\lfloor x\rfloor$ stands for the largest integer that is no greater than $x$ and $\lceil x\rceil$ is the smallest integer that is no less than $x$. For two positive sequences $\{a_n\},\{b_n\}$, $a_n\lesssim b_n$ or $a_n=O(b_n)$ means $a_n\leq Cb_n$ for some constant $C>0$ independent of $n$, $a_n=\Omega(b_n)$ means $b_n=O(a_n)$, and we use $a_n\asymp b_n$ or $a_n=\Theta(b_n)$ when both $a_n\lesssim b_n$ and $b_n\lesssim a_n$ hold. We also write $a_n=o(b_n)$ when $\limsup_n\frac{a_n}{b_n}=0$. For a set $S$, we use $\indc{S}$ to denote its indicator function and $|S|$ to denote its cardinality. We use the notation $S =S_1 \uplus S_2$ to denote a partition of $S$ such that $S_1\cap S_2 = \varnothing$ and $S= S_1 \cup S_2$. For a vector $v\in\mathbb{R}^d$, its norms are defined by $\norm{v}_1=\sum_{i=1}^d|v_i|$, $\norm{v}^2=\sum_{i=1}^dv_i^2$ and $\norm{v}_{\infty}=\max_{1\leq i\leq d}|v_i|$. For a matrix $A\in\mathbb{R}^{n\times m}$, we use $\opnorm{A}$ for its operator norm, which is the largest singular value. The notation $\mathds{1}_{d}$ means a $d$-dimensional column vector of all ones. Given $p,q\in(0,1)$, the Kullback-Leibler divergence is defined by $D(p\|q)=p\log\frac{p}{q}+(1-p)\log\frac{1-p}{1-q}$. For a natural number $n$, $\mathfrak{S}_n$ is the set of permutations on $[n]$. The notation $\mathbb{P}$ and $\mathbb{E}$ are used for generic probability and expectation whose distribution is determined from the context. 
\section{A Decision-Theoretic Framework of Full Ranking}\label{sec:ml} \paragraph{The BTL Model.} Consider $n$ players, each associated with a positive latent skill parameter $w_i^*$ for $i\in[n]$. The games played among the $n$ players are modeled by an Erd\H{o}s-R\'{e}nyi random graph $A\sim\mathcal{G}(n,p)$. To be specific, we have $A_{ij}\stackrel{iid}{\sim}\text{Bernoulli}(p)$ for all $1\leq i<j\leq n$. For any pair $(i,j)$ such that $A_{ij}=1$, we observe the outcomes of $L$ games played between $i$ and $j$, modeled by the Bradley-Terry-Luce (BTL) model (\ref{eq:BTL-w}). Our goal is to estimate the ranks of the $n$ players. To formulate the problem of full ranking from a decision-theoretic perspective, we can reparametrize the BTL model (\ref{eq:BTL-w}) by a sorted vector $\theta^*$ and a rank vector $r^*$. A sorted vector $\theta^*$ satisfies $\theta_1^*\geq \theta_2^* \geq \cdots \geq \theta_n^*$, and a rank vector $r^*$ is an element of the permutation set $\mathfrak{S}_n$. We have \begin{equation} y_{ijl}\stackrel{ind}{\sim}\text{Bernoulli}(\psi(\theta^*_{r^*_i}-\theta^*_{r^*_j})),\quad l=1,\cdots, L, \label{eq:BTL-theta} \end{equation} where $\psi(\cdot)$ is the sigmoid function $\psi(t)=\frac{1}{1+e^{-t}}$. In the original representation (\ref{eq:BTL-w}), we have $w_i^*=\exp(\theta_{r_i^*}^*)$ for all $i\in[n]$. With (\ref{eq:BTL-theta}), the full ranking problem is to estimate the rank vector $r^*$ from the random comparison data. 
\paragraph{Loss Function for Full Ranking.} To measure the difference between an estimator $\widehat{r}\in\mathfrak{S}_n$ and the true $r^*\in\mathfrak{S}_n$, we introduce the \textit{Kendall's tau distance}, defined by \begin{equation} \textsf{K}(\widehat{r},r^*)=\frac{1}{n}\sum_{1\leq i<j\leq n}\indc{\text{sign}(\widehat{r}_i-\widehat{r}_j)\text{sign}(r_i^*-r_j^*)<0}, \label{eq:kendall} \end{equation} where $\text{sign}(x)$ represents the sign of $x$ and $n\textsf{K}(\widehat{r},r^*)$ counts the number of inversions between $\widehat{r}$ and $r^*$. Another distance is the normalized $\ell_1$ loss, defined as \begin{equation} \textsf{F}(\widehat{r},r^*)=\frac{1}{n}\sum_{i=1}^n\abs{\widehat{r}_i-r_i^*}, \label{eq:footrule} \end{equation} also known as \textit{Spearman's footrule}. The two loss functions can be related by the following inequality, \begin{equation} \frac{1}{2}\textsf{F}(\widehat{r},r^*) \leq \textsf{K}(\widehat{r},r^*) \leq \textsf{F}(\widehat{r},r^*). \label{eq:FandK} \end{equation} See \cite{diaconis1977spearman} for the derivation of (\ref{eq:FandK}). The inequality (\ref{eq:FandK}) establishes an equivalence between the estimation of the vector $r^*$ and that of the matrix of pairwise relations $\mathbb{I}\{r_i^*<r_j^*\}$, a key fact that we will exploit in constructing an optimal algorithm. A problem that is closely related to full ranking is called top-$k$ ranking. The goal of top-$k$ ranking is to identify the subset $\{i\in[n]:r_i^*\leq k\}$ from the random comparison data. In \cite{chen2020partial}, the minimax rate of top-$k$ ranking was studied under the loss function of normalized Hamming distance, \begin{equation} \textsf{H}_k(\widehat{r},r^*)=\frac{1}{2k}\left(\sum_{i=1}^n\indc{\widehat{r}_i>k, r^*_i\leq k}+\sum_{i=1}^n\indc{\widehat{r}_i\leq k, r^*_i> k}\right). \label{eq:h-loss} \end{equation} The comparison between (\ref{eq:kendall}) and (\ref{eq:h-loss}) reveals the key difference between the two problems.
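Both losses are straightforward to compute; the sketch below also verifies the sandwich inequality (\ref{eq:FandK}) exhaustively on a small example:

```python
import itertools

def kendall(a, b):
    # (1/n) times the number of inverted pairs between two rank vectors
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (a[i] - a[j]) * (b[i] - b[j]) < 0) / n

def footrule(a, b):
    # normalized l1 distance (Spearman's footrule)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

r_star = [1, 2, 3, 4]
assert kendall([1, 3, 2, 4], r_star) == 0.25   # one inversion, n = 4
# exhaustive check of F/2 <= K <= F on all permutations of size 4
for r in itertools.permutations(r_star):
    K, F = kendall(r, r_star), footrule(r, r_star)
    assert F / 2 <= K <= F
```

The lower bound is attained, for instance, by a single transposition of neighbors, where $n\textsf{K}=1$ and $n\textsf{F}=2$.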
While top-$k$ ranking only requires a correct classification of the two groups, the quality of the full ranking depends on the accuracy of each individual $|\widehat{r}_i-r_i^*|$. It is easy to see that $\textsf{K}(\widehat{r},r)=0$ implies $\textsf{H}_k(\widehat{r},r^*)=0$, but the opposite direction is not true. \paragraph{Regularity of Skill Parameters.} For the nuisance parameter $\theta^*$ of the model (\ref{eq:BTL-theta}), it is necessary that the skill parameters of neighboring players $\theta_i^*$ and $\theta_{i+1}^*$ are separated so that the identification of the ranks is possible. We introduce a parameter space that serves this purpose. For any $\beta>0$ and any $C_0\geq 1$, define $$\Theta_n(\beta,C_0)=\left\{\theta\in\mathbb{R}^n: \theta_1\geq \cdots \geq\theta_n, 1 \leq \frac{|\theta_i-\theta_j|}{\beta|i-j|}\leq C_0\text{ for any }i\neq j\right\}.$$ In other words, neighboring $\theta_i^*$ and $\theta_{i+1}^*$ are required to be separated by at least $\beta$. The magnitude of $\beta$ then characterizes the difficulty of full ranking. The number $C_0$ characterizes the regularity of the space of sorted vectors $\Theta_n(\beta,C_0)$. The special case $\Theta_n(\beta,1)$ only consists of fully regular $\theta$'s that can be written as $\theta_i=\alpha-\beta i$. Throughout the paper, we assume that $C_0\geq 1$ is an absolute constant, but allow $\beta$ to be a function of the sample size $n$, with the possibility that $\beta\rightarrow 0$. The assumption $\theta^*\in\Theta_n(\beta,C_0)$ requires the numbers $\theta_1^*,\cdots,\theta_n^*$ to be roughly evenly spaced. This assumption, which can certainly be relaxed, allows us to obtain relatively clean formulas for the minimax rate of full ranking. By restricting our focus to the space $\Theta_n(\beta,C_0)$, we will develop a clear but nontrivial understanding of the full ranking problem in this paper.
The extension of our results beyond $\theta^*\in\Theta_n(\beta,C_0)$ will be briefly discussed in Section \ref{sec:disc}. \section{Minimax Rates of Full Ranking}\label{sec:main} In this section, we present the minimax rate of full ranking under the BTL model. To better understand the results, we first derive the minimax rate of full ranking under a Gaussian pairwise comparison model in Section \ref{sec:Gaussian-result}. This allows us to highlight some of the unique and nontrivial features of the BTL model by comparing the minimax rates of the two different distributions. Readers who are already familiar with the BTL model can directly start with Section \ref{sec:intuition}. \subsection{Results for a Gaussian Model}\label{sec:Gaussian-result} Consider the same comparison scheme modeled by the Erd\H{o}s-R\'{e}nyi random graph $A\sim\mathcal{G}(n,p)$. For any pair $(i,j)$ such that $A_{ij}=1$, we independently observe \begin{equation} y_{ij}\sim{ \mathcal{N} }(\theta_{r_i^*}^*-\theta_{r_j^*}^*,\sigma^2). \label{eq:dgp-Gaussian} \end{equation} The joint distribution of $\{A_{ij}\}$ and $\{y_{ij}\}$, under the above generating process, is denoted by $\mathbb{P}_{(\theta^*,\sigma^2,r^*)}$. Estimation of the rank vector $r^*\in\mathfrak{S}_n$ under the Gaussian model (\ref{eq:dgp-Gaussian}) is much less complicated than the same problem under (\ref{eq:BTL-theta}), because of the separate parametrization of mean and variance. \begin{thm}\label{thm:Gaussian-minimax} Assume $\theta^*\in\Theta_n(\beta,C_0)$ for some constant $C_0\geq 1$ and $\frac{np}{\log n}\rightarrow\infty$. 
Then, for any constant $\delta$ that can be arbitrarily small, we have $$\inf_{\widehat{r}\in\mathfrak{S}_n}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,\sigma^2,r^*)}\textsf{K}(\widehat{r},r^*)\gtrsim \begin{cases} \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(-\frac{(1+\delta)np(\theta_i^*-\theta_{i+1}^*)^2}{4\sigma^2}\right), & \frac{np\beta^2}{\sigma^2} > 1, \\ n\wedge\sqrt{\frac{\sigma^2}{np\beta^2}}, & \frac{np\beta^2}{\sigma^2} \leq 1.\\ \end{cases}$$ Moreover, let $\widehat{r}$ be the rank obtained by sorting the MLE $\widehat{\theta}$, and then $$\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,\sigma^2,r^*)}\textsf{K}(\widehat{r},r^*)\lesssim \begin{cases} \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta)np(\theta_i^*-\theta_{i+1}^*)^2}{4\sigma^2}\right)+n^{-5}, & \frac{np\beta^2}{\sigma^2} > 1, \\ n\wedge\sqrt{\frac{\sigma^2}{np\beta^2}}, & \frac{np\beta^2}{\sigma^2} \leq 1.\\ \end{cases}$$ Both inequalities are up to constant factors only depending on $C_0$ and $\delta$. \end{thm} Theorem \ref{thm:Gaussian-minimax} characterizes the statistical fundamental limit of full ranking under the Gaussian comparison model. The result holds for each individual $\theta^*\in\Theta_n(\beta,C_0)$. It is interesting to note that the minimax rate exhibits a transition between an exponential rate and a polynomial rate. By scrutinizing the proof, the constant $\delta$ can be replaced by some sequence $\delta_n=o(1)$. Therefore, consider a special example $\theta^*\in\Theta_n(\beta,1)$, and the minimax rate (ignoring the $n^{-5}$ term) can be simplified as \begin{equation} \begin{cases} \exp\left(-\frac{(1+o(1))np\beta^2}{4\sigma^2}\right), & \frac{np\beta^2}{\sigma^2} > 1, \\ n\wedge\sqrt{\frac{\sigma^2}{np\beta^2}}, & \frac{np\beta^2}{\sigma^2} \leq 1.\\ \end{cases}\label{eq:G-minimax-simp} \end{equation} The behavior of (\ref{eq:G-minimax-simp}) is illustrated in Figure \ref{fig:phase}. 
\begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{phase.pdf} \caption{\textsl{Illustration of the minimax rate of full ranking.}} \label{fig:phase} \end{figure} The quantity $\frac{np\beta^2}{\sigma^2}$ plays the role of the signal-to-noise ratio (SNR) of the ranking problem. In the high SNR regime $\frac{np\beta^2}{\sigma^2} > 1$, the difficulty of the ranking problem is dominated by whether the data can distinguish each $r_i^*$ from its neighboring values. Therefore, ranking is essentially a \textit{hypothesis testing} problem, which leads to an exponential rate. In the low SNR regime $\frac{np\beta^2}{\sigma^2} \leq 1$, the discrete nature of ranking is absent because of the noise level. The recovery of $r^*$ is equivalent to the estimation of a continuous vector in $\mathbb{R}^n$, which is essentially a \textit{parameter estimation} problem. The polynomial rate $n\wedge\sqrt{\frac{\sigma^2}{np\beta^2}}$ is the usual minimax rate for estimating an $n$-dimensional parameter under the $\ell_1$ loss. It is also worth noting that the rate (\ref{eq:G-minimax-simp}) implies that the rank vector can be exactly recovered when $\frac{np\beta^2}{\sigma^2} > C\log n$ for any constant $C>4$. This is because in this regime, we have $\textsf{K}(\widehat{r},r^*)=o(n^{-1})$ with high probability by a direct application of Markov's inequality. According to the definition of $\textsf{K}(\widehat{r},r^*)$, we know that $\textsf{K}(\widehat{r},r^*)=o(n^{-1})$ implies $\textsf{K}(\widehat{r},r^*)=0$. The upper bound of Theorem \ref{thm:Gaussian-minimax} involves an extra $n^{-5}$ term in the high SNR regime. According to the proof, the number $5$ in the exponent can actually be replaced by an arbitrarily large constant. The $n^{-5}$ term does not contribute to the high-probability bound.
By a direct application of Markov's inequality, when $\frac{np\beta^2}{\sigma^2}\rightarrow\infty$, we have \begin{equation} \textsf{K}(\widehat{r},r^*) \lesssim \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta)np(\theta_i^*-\theta_{i+1}^*)^2}{4\sigma^2}\right), \label{eq:no-n-5} \end{equation} with probability $1-o(1)$. Notice that the high-probability bound (\ref{eq:no-n-5}) does not involve the $n^{-5}$ term. This is because when $\textsf{K}(\widehat{r},r^*)$ is nonzero, it must be at least $n^{-1}$ by the definition of the loss function. Therefore, $n^{-5}$ can always be absorbed into the other term of the upper bound. We also remark that the condition $\frac{np}{\log n}\rightarrow\infty$ guarantees that the random graph $A$ is connected with high probability. It is well known that when $p\leq c\frac{\log n}{n}$ for some sufficiently small constant $c>0$, the random graph has several disjoint components, which makes comparisons across different components impossible. An optimal estimator that achieves the minimax rate is the rank vector induced by the MLE, which is defined by \begin{equation} \widehat{\theta}\in\mathop{\rm argmin}_{\theta}\sum_{1\leq i<j\leq n}A_{ij}\Big(y_{ij}-(\theta_i-\theta_j)\Big)^2. \label{eq:Gaussian-ls} \end{equation} We note that the parameter $\theta^*$ in (\ref{eq:dgp-Gaussian}) is only identifiable up to a global shift. We may put an extra constraint $\mathds{1}_n^T\theta=0$ in the least-squares estimator above, so that $\widehat{\theta}$ is uniquely defined. However, this constraint is actually not essential, since even without it, the rank vector $\widehat{r}$ induced by $\widehat{\theta}$ is still uniquely defined. To study the property of $\widehat{\theta}$, we introduce a diagonal matrix $D\in\mathbb{R}^{n\times n}$ whose entries are given by $D_{ii}=\sum_{j\in[n]\backslash\{i\}}A_{ij}$. Then, $\mathcal{L}_A=D-A$ is the graph Laplacian of $A$.
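Concretely, any minimizer of (\ref{eq:Gaussian-ls}) solves the normal equations $\mathcal{L}_A\theta=b$ with $b_i=\sum_{j\neq i}A_{ij}y_{ij}$ (under the convention $y_{ji}=-y_{ij}$), so one can take $\widehat{\theta}=\mathcal{L}_A^{\dagger}b$. A minimal simulation sketch in Python; the parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, beta = 50, 0.5, 1.0, 0.3        # hypothetical simulation parameters
theta_star = -beta * np.arange(n)            # theta in Theta_n(beta, 1); true rank of i is i + 1

# Erdos-Renyi comparison graph and Gaussian pairwise observations
A = np.triu(rng.random((n, n)) < p, 1).astype(int)
A = A + A.T
iu = np.triu_indices(n, 1)
Y = np.zeros((n, n))
diff = theta_star[:, None] - theta_star[None, :]
Y[iu] = diff[iu] + sigma * rng.standard_normal(len(iu[0]))
Y = (Y - Y.T) * A                            # antisymmetric, zero on unobserved pairs

L = np.diag(A.sum(axis=1)) - A               # graph Laplacian L_A
b = Y.sum(axis=1)                            # b_i = sum_j A_ij y_ij
theta_hat = np.linalg.pinv(L) @ b            # least-squares solution, centered

r_hat = np.empty(n, dtype=int)
r_hat[np.argsort(-theta_hat)] = np.arange(1, n + 1)   # rank induced by the MLE
```

Sorting $\widehat{\theta}$ from high to low then gives the rank estimator $\widehat{r}$; for the parameters above each coordinate of $\widehat{\theta}$ fluctuates around $\theta_i^*$ at scale roughly $\sigma/\sqrt{np}$.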
A standard least-squares analysis of (\ref{eq:Gaussian-ls}) leads to the fact that up to some global shift, \begin{equation} \widehat{\theta}\sim { \mathcal{N} }\left(\theta^*,\sigma^2\mathcal{L}_A^{\dagger}\right), \label{eq:MLE-distribution} \end{equation} where $\mathcal{L}_A^{\dagger}$ is the generalized inverse of $\mathcal{L}_A$. The covariance matrix of (\ref{eq:MLE-distribution}) is optimal by achieving the intrinsic Cram{\'e}r-Rao lower bound of the problem \citep{boumal2013intrinsic}. Without loss of generality, we can assume $r_i^*=i$ for each $i\in[n]$. Then, by the definition of the loss function (\ref{eq:kendall}), we have $$\mathbb{E}\textsf{K}(\widehat{r},r^*) = \frac{1}{n}\sum_{1\leq i<j\leq n}\mathbb{P}\left(\widehat{r}_i>\widehat{r}_j\right) = \frac{1}{n}\sum_{1\leq i<j\leq n}\mathbb{P}\left(\widehat{\theta}_i>\widehat{\theta}_j\right),$$ and each $\mathbb{P}\left(\widehat{\theta}_i>\widehat{\theta}_j\right)$ can be accurately estimated by a Gaussian tail bound under the distribution (\ref{eq:MLE-distribution}), which then leads to the upper bound result of Theorem \ref{thm:Gaussian-minimax}. A detailed proof of Theorem \ref{thm:Gaussian-minimax}, including a lower bound analysis, is given in Section \ref{sec:pf-G}. \subsection{Some Intuitions for the BTL Model} \label{sec:intuition} Before stating the minimax rate for the BTL model, we discuss a few key differences that one can expect from the result. Without loss of generality, we assume $r_i^*=i$ for all $i\in[n]$ throughout the discussion to simplify the notation. Let us consider a problem of oracle estimation of the skill parameter of the first player $\theta_1^*$. To be specific, we would like to estimate $\theta_1^*$ by assuming that $\theta_2^*,\cdots,\theta_n^*$ are known. The Fisher information of this problem can be shown as \begin{equation} I^{\rm oracle}(\theta_1^*)=Lp\sum_{j=2}^n\psi'(\theta_1^*-\theta_j^*).
\label{eq:oracle-fisher} \end{equation} The formula (\ref{eq:oracle-fisher}) characterizes the individual contribution of each player to the overall information in estimating $\theta_1^*$. That is, the information from the games between $1$ and $j$ is quantified by $Lp\psi'(\theta_1^*-\theta_j^*)$. Since $\psi'(t)=\frac{e^{t}}{(1+e^{t})^2}\leq e^{-|t|}$, we have $$\psi'(\theta_1^*-\theta_j^*)\leq \exp\left(-|\theta_1^*-\theta_j^*|\right).$$ In other words, $\psi'(\theta_1^*-\theta_j^*)$ is an exponentially small function of the skill difference $|\theta_1^*-\theta_j^*|$. This means that for players whose skills are significantly different from $\theta_1^*$, their games with Player 1 offer little information for the inference of $\theta_1^*$. This phenomenon can be intuitively understood from the following simple example illustrated in Figure \ref{fig:example}. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{example.pdf} \caption{\textsl{A comparison graph of four players.}} \label{fig:example} \end{figure} Consider four players with skill parameters $(\theta_1^*,\theta_2^*,\theta_3^*,\theta_4^*)=(201,200,199,0)$, and we would like to compare the first two players. With the direct link between $1$ and $2$ missing, the only way to compare Players 1 and 2 is through their performances against Players 3 and 4. Since both $\theta_1^*-\theta_4^*=201$ and $\theta_2^*-\theta_4^*=200$ are very large numbers, it is very likely that Player 4 will lose all games against Players 1 and 2. On the other hand, we have $\theta_1^*-\theta_3^*=2$ and $\theta_2^*-\theta_3^*=1$, and thus Player 3 is likely to lose more games against Player 1 than against Player 2. Therefore, we can conclude that Player 1 is stronger than Player 2 based on their performances against Player 3, and the games against Player 4 offer no information for this purpose. This example clearly illustrates that closer opponents are more informative.
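Both points can be checked numerically: the win probabilities of the four-player example, and the fact that most of the oracle Fisher information (\ref{eq:oracle-fisher}) comes from close opponents. A short Python sketch, where the values of $n$, $\beta$ and the truncation window $M$ are hypothetical:

```python
import numpy as np

psi = lambda t: 1.0 / (1.0 + np.exp(-t))          # the logistic function
psi_prime = lambda t: psi(t) * (1.0 - psi(t))     # psi'(t) <= exp(-|t|)

# The four-player example: Player 4 is uninformative, Player 3 is not.
theta4 = np.array([201.0, 200.0, 199.0, 0.0])
print(psi(theta4[0] - theta4[3]), psi(theta4[1] - theta4[3]))  # 1.0 1.0
print(psi(theta4[0] - theta4[2]), psi(theta4[1] - theta4[2]))  # ~0.881 ~0.731

# Fraction of Player 1's oracle Fisher information from close opponents.
n, beta, M = 2000, 0.05, 5.0
theta = -beta * np.arange(n)                      # fully regular skills
contrib = psi_prime(theta[0] - theta[1:])         # per-opponent information
close = contrib[: int(M / beta)].sum()            # opponents within M / beta ranks
print(contrib.sum() / close - 1.0)                # O(e^{-M}): distant games add little
```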
Mathematically, for any $\theta^*\in\Theta_n(\beta,C_0)$ and any $M>0$, it can be easily shown that \begin{equation} I^{\rm oracle}(\theta_1^*)\leq (1+O(e^{-M}))Lp\sum_{j\leq M/\beta}\psi'(\theta_1^*-\theta_j^*). \label{eq:oracle-fisher-trunc} \end{equation} Therefore, (\ref{eq:oracle-fisher}) and (\ref{eq:oracle-fisher-trunc}) imply that \begin{equation} I^{\rm oracle}(\theta_1^*)=(1+O(e^{-M}))Lp\sum_{j\leq M/\beta}\psi'(\theta_1^*-\theta_j^*). \label{eq:oracle-fisher-asymp} \end{equation} There is no need to consider the games against players with $j>M/\beta$. Moreover, we also observe from (\ref{eq:oracle-fisher-asymp}) that the parameter $\beta$ plays two different roles in the BTL model: \begin{enumerate} \item The parameter $\beta$ is the minimal gap between different players, and it quantifies the signal strength of the BTL model. \item The number $1/\beta$ quantifies the number of close opponents of each player, and thus $p/\beta$ can be understood as the effective sample size of the BTL model. \end{enumerate} While the first role is also shared by the $\beta$ in the Gaussian comparison model (\ref{eq:dgp-Gaussian}), the second role dramatically distinguishes the BTL model from its Gaussian counterpart. The effective sample size of the Gaussian model is $np$, compared with $p/\beta$ of the BTL model. This critical difference is a consequence of the nonlinearity of the logistic function. Increasing $\beta$ magnifies the signal but reduces the effective sample size at the same time. The precise role of $\beta$ in full ranking under the BTL model will be clarified by the formula of the minimax rate. \subsection{Results for the BTL Model} To present the minimax rate of full ranking under the BTL model, we first introduce some new quantities. For any $i\in[n]$, define \begin{equation} V_i(\theta^*)=\frac{n}{\sum_{j\in[n]\backslash\{i\}}\psi^\prime(\theta_i^*-\theta_j^*)}. 
\label{eq:BTL-var} \end{equation} The quantity (\ref{eq:BTL-var}) is interpreted as the variance function of the $i$th best player. With a slight abuse of notation, the expectation associated with the BTL model is denoted as $\mathbb{E}_{(\theta^*,r^*)}$. \begin{thm}\label{thm:BTL-minimax} Assume $\theta^*\in\Theta_n(\beta,C_0)$ for some constant $C_0\geq 1$ and $\frac{p}{(\beta\vee n^{-1})\log n}\rightarrow\infty$. Then, for any constant $\delta$ that can be arbitrarily small, we have $$\inf_{\widehat{r}\in\mathfrak{S}_n}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\textsf{K}(\widehat{r},r^*)\gtrsim \begin{cases} \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(-\frac{(1+\delta)npL(\theta_i^*-\theta_{i+1}^*)^2}{4V_{i}(\theta^*)}\right), & \frac{Lp\beta^2}{\beta\vee n^{-1}} > 1, \\ n\wedge\sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}, & \frac{Lp\beta^2}{\beta\vee n^{-1}} \leq 1.\\ \end{cases}$$ Moreover, let $\widehat{r}$ be the rank computed by Algorithm \ref{alg:whole}, and then if additionally $\frac{L}{\log n}\rightarrow\infty$, we have $$\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\textsf{K}(\widehat{r},r^*)\lesssim \begin{cases} \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta)npL(\theta_i^*-\theta_{i+1}^*)^2}{4V_{i}(\theta^*)}\right)+n^{-5}, & \frac{Lp\beta^2}{\beta\vee n^{-1}} > 1, \\ n\wedge\sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}, & \frac{Lp\beta^2}{\beta\vee n^{-1}} \leq 1.\\ \end{cases}$$ Both inequalities are up to constant factors only depending on $C_0$ and $\delta$. \end{thm} Similar to Theorem \ref{thm:Gaussian-minimax}, the result of Theorem \ref{thm:BTL-minimax} holds for each individual $\theta^*\in\Theta_n(\beta,C_0)$, and the minimax rate also exhibits a transition between an exponential rate and a polynomial rate. To better understand the minimax rate formula, we use Lemma \ref{lem:sum-psi-prime} to quantify the order of the variance function $V_{i}(\theta^*)$. 
There exist constants $C_1,C_2>0$, such that $$C_1\left(\beta\vee \frac{1}{n}\right)\leq \frac{V_{i}(\theta^*)}{n}\leq C_2\left(\beta\vee \frac{1}{n}\right).$$ Therefore, when $\beta\gtrsim n^{-1}$, the minimax rate (ignoring the $n^{-5}$ term) can be simplified as \begin{equation} \begin{cases} \exp\left(-\Theta(Lp\beta)\right), & Lp\beta > 1, \\ n\wedge\sqrt{\frac{1}{Lp\beta}}, & Lp\beta \leq 1.\\ \end{cases} \label{eq:BTL-minimax-simp} \end{equation} The formula (\ref{eq:BTL-minimax-simp}) also exhibits a transition between a polynomial rate and an exponential rate. Its behavior can be illustrated by Figure \ref{fig:phase} with SNR being $\Theta(Lp\beta)$. Compared with the minimax rate (\ref{eq:G-minimax-simp}) for the Gaussian comparison model, the dependence of (\ref{eq:BTL-minimax-simp}) on $\beta$ is weaker. This is a consequence of the dual roles of $\beta$ discussed in Section \ref{sec:intuition}. In fact, by writing $$Lp\beta=L\beta^{-1}p \beta^2,$$ we can directly observe the effects of $\beta^{-1}p$ and $\beta^2$ as the effective sample size and the signal strength, respectively. On the other hand, the total number of players $n$ has very little effect on the minimax rate formula. The condition $\frac{p}{(\beta\vee n^{-1})\log n}\rightarrow\infty$ required by Theorem \ref{thm:BTL-minimax} can be equivalently written as $\frac{np}{\log n}\rightarrow\infty$ and $\frac{p}{\beta\log n}\rightarrow\infty$. Compared with the setting of Theorem \ref{thm:Gaussian-minimax}, an additional condition $\frac{p}{\beta\log n}\rightarrow\infty$ is assumed for the BTL model. This condition can be seen as a consequence of the Fisher information formula (\ref{eq:oracle-fisher-asymp}): statistical inference on the skill parameter of each player only depends on the player's close opponents.
In other words, for each $\theta_i^*$, the information is available in the games on the local graph \begin{equation} \mathcal{A}_i=\left\{A_{jk}: |r_j^*-r_i^*|\leq \frac{M}{\beta}, |r_k^*-r_i^*|\leq \frac{M}{\beta}\right\}.\label{eq:nei-i} \end{equation} All the other games carry little information for the statistical inference of $\theta_i^*$. Therefore, it is required that the local graph $\mathcal{A}_i$ is connected. The condition $\frac{p}{\beta\log n}\rightarrow\infty$ guarantees the connectivity of $\mathcal{A}_i$ for all $i\in[n]$. Note that the size of the local graph is $O(\beta^{-1})$, which again justifies that the effective sample size of the BTL model is $p/\beta$ instead of $pn$ in the Gaussian case. Since the local graph $\mathcal{A}_i$ is unknown, the additional $\frac{L}{\log n}\rightarrow\infty$ assumption is needed in the upper bound to estimate it or its surrogate. \section{A Divide-and-Conquer Algorithm}\label{sec:dnc} We introduce a fully adaptive and computationally efficient algorithm for ranking under the BTL model in this section. We first outline the main idea in Section \ref{sec:overview}. Details of the algorithm are presented in Section \ref{sec:BTL-algo}, and the statistical properties are analyzed in Section \ref{sec:analysis}. \subsection{An Overview}\label{sec:overview} In the Gaussian comparison model, we first compute the global MLE for the skill parameters via the least-squares optimization (\ref{eq:Gaussian-ls}), and then rank the players according to the estimators of the skills. This simple idea does not generalize to the BTL model, since the statistical information of each player concentrates on its close opponents, a phenomenon that is discussed in Section \ref{sec:intuition}. Therefore, instead of using the global MLE, we should maximize likelihood functions that are only defined by players whose abilities are close.
This modification not only addresses the information-theoretic issue of the BTL model that we just mentioned, but it also leads to Hessian matrices that are well conditioned, a property that is critical for efficient convex optimization. For Player $i$, the set of close opponents that are sufficient for optimal statistical inference is given by $\mathcal{A}_i$ defined in (\ref{eq:nei-i}). If the knowledge of $\mathcal{A}_i$ were available, we could compute the local MLE using games only against players in $\mathcal{A}_i$. This idea is roughly correct, but there are several nontrivial issues that we need to solve before making it actually work. The first issue lies in the identifiability of the BTL model that $\theta_i^*$ can only be estimated up to a translation, which makes the comparison between $\widehat{\theta}_i$ obtained from $\mathcal{A}_i$ and $\widehat{\theta}_j$ obtained from $\mathcal{A}_j$ meaningless. The second issue is that the set $\mathcal{A}_i$ is unknown, and we need a data-driven procedure to identify the close opponents of each player. We propose an algorithm that first partitions the $n$ players into several leagues and then uses local MLEs to compare the skills of players within the same league. The league partition is data-driven, and serves as a surrogate for the local graphs $\mathcal{A}_i$'s. Moreover, for two players $i$ and $j$ in the same league, the MLEs of their skill parameters are computed using the same set of opponents, and thus $\widehat{\theta}_i-\widehat{\theta}_j$ is a well-defined estimator of $\theta_i^*-\theta_j^*$. Another key idea we use in our proposed algorithm is that the estimation of $r^*$ is closely related to the estimation of the \emph{pairwise relation matrix} $R^*$ defined as \begin{align}\label{eqn:pair_relation} R^*_{ij} =\mathbb{I}\{r_i^* < r^*_j\}\quad \text{for all } 1\leq i\neq j\leq n.
\end{align} Any estimator of $R^*$ can be converted into an estimator of the rank vector $r^*$ according to Lemma \ref{lem:anderson-ineq}. As a result, we shall focus on constructing a good estimator for all the pairwise relations $\{\mathbb{I}\{r_i^* < r^*_j\}\}_{i<j}$. This divide-and-conquer algorithm, which will be described in Section \ref{sec:BTL-algo}, resembles typical strategies adopted in professional sports such as European football leagues. It is computationally efficient, and we will show that it achieves the minimax rate of full ranking. \subsection{Details of the Proposed Algorithm}\label{sec:BTL-algo} We first decompose the set $[L]$ into $\{1,\cdots,L_1\}$ and $\{L_1+1,\cdots,L\}$. Games in the first set are used as preliminary games for league partition, and games in the second set are used for computing the MLE. Under the condition $\frac{L}{\log n}\rightarrow\infty$, we can set the number $L_1$ as $L_1=\ceil{\sqrt{L\log n}}$.
Define $$\bar{y}_{ij}^{(1)}=\frac{1}{L_1}\sum_{l=1}^{L_1}y_{ijl}\quad\text{and}\quad\bar{y}_{ij}^{(2)}=\frac{1}{L-L_1}\sum_{l=L_1+1}^Ly_{ijl}$$ as the summary statistics in $\{1,\cdots,L_1\}$ and $\{L_1+1,\cdots,L\}$, respectively. The proposed algorithm consists of four steps, which we describe in detail below before presenting the whole procedure in Algorithm \ref{alg:whole}. \paragraph{Step 1: League Partition.} For each $i\in[n]$, we define \begin{equation} w_i^{(1)}=\sum_{j\in[n]}A_{ij}\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}, \label{eq:def-w1} \end{equation} where $M$ is some sufficiently large constant. The indicator $\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}$ describes the event that Player $i$ is completely dominated by Player $j$ in the preliminary games. The quantity $w_i^{(1)}$ then counts the number of players who have dominated Player $i$. If $w_i^{(1)}$ is sufficiently small, Player $i$ should belong to the top league since only few or no players could dominate Player $i$. Indeed, the first league is defined by \begin{equation} S_1=\left\{i\in[n]: w_i^{(1)}\leq h\right\}, \label{eq:def-S1} \end{equation} where $h$ is chosen as $h=\frac{pM}{\beta}$. A data-driven $h$ will be described in Section \ref{sec:h}. Similarly, $w_i^{(2)}$ and the second league $S_2$ can be defined by replacing $[n]$ with $[n]\backslash S_1$ in (\ref{eq:def-w1}) and (\ref{eq:def-S1}). Sequentially, we compute $w_i^{(k+1)}$ and $S_{k+1}$ based on players in $[n]\backslash\left(S_1\cup\cdots\cup S_k\right)$ for all $k\geq 1$. \iffalse Sequentially we define \begin{align} w_i^{(k+1)} & =\sum_{j\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right)}A_{ij}\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}, \label{eqn:w_def}\\ \text{and } S_{k+1} & = \left\{i\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right): w_i^{(k+1)}\leq h\right\}, \label{eqn:S_def} \end{align} for any $k\geq 1$.
\fi This procedure will terminate as soon as the number of players yet to be classified is small enough, at which point all of the remaining players will be grouped together into the last league. The entire procedure of league partition is described in Algorithm \ref{alg:partition}. \begin{algorithm} \DontPrintSemicolon \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{$\{A_{ij}\bar y_{ij}^{(1)}\}_{1\leq i<j\leq n}$ and $\{A_{ij}\}_{1\leq i<j\leq n}$; $M$ and $h$} \Output{A partition of $[n]$: $S_1,\cdots, S_K$ such that $[n]=\uplus_{k=1}^KS_k$} \nl For $i$ in $[n]$, compute $w_i^{(1)}\leftarrow\sum_{j\in[n]}A_{ij}\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}$.\; Set $S_1\leftarrow\left\{i\in[n]: w_i^{(1)}\leq h\right\}$ and $k=1$.\; \nl While $n-\left(|S_1|+\cdots+|S_k|\right)>|S_k|/2$,\; \qquad For each $i\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right)$, \; \qquad\qquad compute $w_i^{(k+1)}\leftarrow\sum_{j\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right)}A_{ij}\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}$. \; \qquad Set $S_{k+1}\leftarrow\left\{i\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right): w_i^{(k+1)}\leq h\right\}$ and $k\leftarrow k+1$. \; \nl Set $K\leftarrow k-1$ and $S_K\leftarrow S_K \cup\left([n]\backslash\left(S_1\cup\cdots\cup S_{K-1}\right)\right)$. \caption{A league partition algorithm} \label{alg:partition} \end{algorithm} \paragraph{Step 2: Local MLEs and Within-League Pairwise Relation Estimation.} Having obtained the league partition $S_1,\cdots,S_K$, we need to compare players in the same league in the next step. Given the ambiguity between neighboring leagues, we shall also compare players if the leagues they belong to are next to each other. Therefore, for each $k\in[K-1]$, we need to compute the MLE for $\{\theta_{r_i^*}^*\}_{i\in S_k\cup S_{k+1}}$. This leads to the comparison between any two players in $S_k\cup S_{k+1}$.
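Before turning to the likelihoods, the partition step can be summarized in code. The following is a simplified Python sketch of Algorithm \ref{alg:partition} (the small remainder is merged into the last extracted league; the exact stopping bookkeeping of Algorithm \ref{alg:partition} differs slightly, and the toy inputs, with $M=1$ and $h=5$, are our own):

```python
import numpy as np

def league_partition(A, ybar1, psi_neg_2M, h):
    """Simplified sketch of Algorithm 1: repeatedly extract the players
    dominated by at most h of the remaining players."""
    remaining = list(range(A.shape[0]))
    leagues = []
    while remaining:
        idx = np.array(remaining)
        dom = A[np.ix_(idx, idx)] * (ybar1[np.ix_(idx, idx)] <= psi_neg_2M)
        w = dom.sum(axis=1)                       # w_i as in (eq:def-w1)
        S = [remaining[t] for t in range(len(idx)) if w[t] <= h] or list(remaining)
        leagues.append(S)
        remaining = [i for i in remaining if i not in set(S)]
        if len(remaining) <= len(S) / 2:          # few players left: merge and stop
            leagues[-1].extend(remaining)
            remaining = []
    return leagues

# toy check (noiseless data, two well-separated groups of 15 players)
theta_toy = np.array([0.0] * 15 + [-10.0] * 15)
ybar_toy = 1.0 / (1.0 + np.exp(-(theta_toy[:, None] - theta_toy[None, :])))
A_toy = np.ones((30, 30), dtype=int)
np.fill_diagonal(A_toy, 0)
leagues = league_partition(A_toy, ybar_toy, 1.0 / (1.0 + np.exp(2.0)), h=5)
```

On the toy instance, the stronger group of players is peeled off first, and the weaker group forms the second league.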
Define \begin{equation} \mathcal{E}=\left\{(i,j):1\leq i<j\leq n, \psi(-M)\leq \bar{y}_{ij}^{(1)}\leq \psi(M)\right\}. \label{eq:close-edges} \end{equation} For each $k\in[K-1]$, the local negative log likelihood function is given by \begin{equation} \ell^{(k)}(\theta)=\sum_{\substack{(i,j)\in\mathcal{E}\\i,j\in S_{k-1}\cup S_k\cup S_{k+1}\cup S_{k+2}}}A_{ij}\left[\bar{y}_{ij}^{(2)}\log\frac{1}{\psi(\theta_i-\theta_j)}+(1-\bar{y}_{ij}^{(2)})\log\frac{1}{1-\psi(\theta_i-\theta_j)}\right].\label{eq:league-loss} \end{equation} When $k=1$ or $k=K-1$, we use the notation $S_0=S_{K+1}=\varnothing$. Note that the negative log likelihood function is only defined for edges in $\mathcal{E}$. In other words, only games between close opponents are considered. Moreover, some of the top players in $S_k$ may have close opponents in the previous league $S_{k-1}$, and some of the bottom players in $S_{k+1}$ may have close opponents in the next league $S_{k+2}$. The likelihood should include these games as well for optimal inference of the parameters $\{\theta_{r_i^*}^*\}_{i\in S_k\cup S_{k+1}}$. The MLE is defined by \begin{equation} \widehat{\theta}^{(k)}\in \mathop{\rm argmin}\ell^{(k)}(\theta), \label{eq:league-MLE} \end{equation} which is any vector that minimizes $\ell^{(k)}(\theta)$. Then, for any $i\in S_k$ and any $j\in S_k\cup S_{k+1}$, set $$R_{ij}=\mathbb{I}\{\widehat{\theta}_i^{(k)}>\widehat{\theta}_j^{(k)}\}.$$ Note that $\{\widehat{\theta}_i^{(k)}\}_{i\in S_k\cup S_{k+1}}$ is defined only up to a common translation, but even with such ambiguity, the comparison indicator $R_{ij}$ is uniquely defined. We also remark that the computation of the MLE (\ref{eq:league-MLE}) is a straightforward convex optimization. It can be shown that the Hessian matrix of the objective function is well conditioned (Lemma \ref{lem:bound-hessian}), and thus a standard gradient descent algorithm converges to the optimum with a linear rate \citep{chen2019spectral,chen2020partial}. 
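Indeed, each retained pair $(i,j)\in\mathcal{E}$ contributes $\psi(\theta_i-\theta_j)-\bar{y}_{ij}^{(2)}$ to the gradient coordinate $i$ (and its negative to coordinate $j$), so a bare-bones gradient descent suffices. A Python sketch, where the step size, iteration budget, and the four-player toy check are arbitrary illustrative choices:

```python
import numpy as np

def psi(t):
    return 1.0 / (1.0 + np.exp(-t))

def local_mle(pairs, ybar, n_local, step=0.5, iters=2000):
    """Plain gradient descent for the local negative log-likelihood
    (eq:league-loss); `pairs` lists the retained edges of E, relabeled to
    0..n_local-1, and ybar[i, j] is the averaged outcome."""
    theta = np.zeros(n_local)
    for _ in range(iters):
        grad = np.zeros(n_local)
        for i, j in pairs:
            g = psi(theta[i] - theta[j]) - ybar[i, j]   # per-pair gradient term
            grad[i] += g
            grad[j] -= g
        theta -= (step / len(pairs)) * grad
    return theta - theta.mean()        # resolve the translation ambiguity

# toy check: four fully-connected players with population-level outcomes
theta_true = np.array([1.5, 0.5, -0.5, -1.5])
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
theta_hat = local_mle(pairs, psi(theta_true[:, None] - theta_true[None, :]), 4)
```

The pairwise indicators $R_{ij}=\mathbb{I}\{\widehat{\theta}_i^{(k)}>\widehat{\theta}_j^{(k)}\}$ are invariant to the centering, so the translation ambiguity is harmless.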
\paragraph{Step 3: Cross-League Pairwise Relation Estimation.} Consider $i$ and $j$ that belong to $S_k$ and $S_l$ respectively with $|k-l|\geq 2$. This is a pair of players separated by at least one entire league. For all such pairs, we set $$R_{ij}=\mathbb{I}\{k<l\}.$$ Combined with the entries computed in Step 2, this fills all upper triangular entries of the matrix $R$. The remaining entries of $R$ can be filled according to the rule $R_{ij}+R_{ji}=1$. ~\\ \indent Step 2 and Step 3 together serve the purpose of estimating the pairwise relation matrix $R^*$ defined in (\ref{eqn:pair_relation}). As illustrated in Figure \ref{fig:relation}, the matrix $R^*$ can be decomposed into blocks $\{R^*_{S_k \times S_l}\}_{k<l}$ according to the league partition $\{S_k\}_{k\in[K]}$. The yellow blocks close to the diagonal are estimated by the procedure described in Step 2. In Figure \ref{fig:relation}, the data used in the two local MLEs ($k=1$ and $k=4$) are marked by different patterns for illustration. For example, when $k=4$, we obtain estimators for $R^*_{S_4 \times S_4}$ and $R^*_{S_4 \times S_5}$ based on the local MLE that involves observations from $\{(i,j)\in \mathcal{E}:i,j\in S_3\cup S_4 \cup S_5 \cup S_6\}$. The blue blocks are away from the diagonal and are estimated in Step 3. The remaining blocks in the lower triangular part are estimated according to $R_{ij}+R_{ji}=1$. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{relation.pdf} \caption{\textsl{Illustration of Step 2 and Step 3.}} \label{fig:relation} \end{figure} \paragraph{Step 4: Full Rank Estimation.} In the last step, we convert the pairwise relations estimator $R$ into a rank estimator. First, compute the score for the $i$th player by $$s_i=\sum_{j\in[n]\backslash\{i\}}R_{ij}.$$ Then, the rank estimator $\widehat{r}$ is obtained by sorting the scores $\{s_i\}_{i\in[n]}$ from high to low.
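In code, Step 4 is a count-and-sort (a Copeland-style score); a sketch, applied to the relation matrix of a toy rank vector:

```python
import numpy as np

def rank_from_relations(R):
    """Step 4: turn a pairwise relation matrix R (with R_ij + R_ji = 1 off
    the diagonal) into a full rank vector by sorting the scores s_i."""
    s = R.sum(axis=1) - np.diag(R)     # s_i = sum over j != i of R_ij
    r_hat = np.empty(len(s), dtype=int)
    r_hat[np.argsort(-s, kind="stable")] = np.arange(1, len(s) + 1)
    return r_hat

# toy check: R built from a known rank vector is inverted exactly
r_star = np.array([3, 1, 4, 2, 5])
R_toy = (r_star[:, None] < r_star[None, :]).astype(int)
r_hat = rank_from_relations(R_toy)
```

Ties in the scores are broken by player index here; any fixed tie-breaking rule yields a valid element of $\mathfrak{S}_n$.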
The whole procedure of full ranking is summarized as Algorithm \ref{alg:whole}. \begin{algorithm} \DontPrintSemicolon \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{$\{A_{ij}\bar y_{ij}^{(1)}\}_{1\leq i<j\leq n}$, $\{A_{ij}\bar y_{ij}^{(2)}\}_{1\leq i<j\leq n}$ and $\{A_{ij}\}_{1\leq i<j\leq n}$; $M$ and $h$} \Output{A rank vector $\widehat{r}\in\mathfrak{S}_n$} \nl Run Algorithm \ref{alg:partition} and obtain the partition $[n]=\uplus_{k=1}^K S_k$.\; Set $S_0=S_{K+1}=\varnothing$. \; \nl For $k\in[K-1]$, \; \qquad compute the local MLE $\widehat{\theta}^{(k)}$ according to (\ref{eq:league-MLE}). \; \qquad For $i\in S_k$ and $j\in S_k\cup S_{k+1}$, \; \qquad\qquad set $R_{ij}\leftarrow\mathbb{I}\{\widehat{\theta}_i^{(k)}>\widehat{\theta}_j^{(k)}\}$. \; \nl For $k\in[K-2]$ and $l\in[k+2:K]$, \; \qquad For $(i,j)\in S_k\times S_l$, \; \qquad\qquad set $R_{ij}\leftarrow 1$. \; For $i\in[n]$ and $j\in[i+1:n]$, \; \qquad set $R_{ji}\leftarrow 1-R_{ij}$. \; \nl For $i\in[n]$, \; \qquad compute $s_i\leftarrow \sum_{j\in[n]\backslash\{i\}}R_{ij}$. \; Sort $\{s_i\}_{i\in[n]}$ from high to low and obtain a full rank vector $\widehat{r}$. \caption{A divide-and-conquer full ranking algorithm} \label{alg:whole} \end{algorithm} \subsection{Statistical Properties of Each Step}\label{sec:analysis} The purpose of this section is to prove the upper bound result of Theorem \ref{thm:BTL-minimax} by analyzing the statistical properties of Algorithm \ref{alg:whole}. The four components of the algorithm will be analyzed separately. We will first analyze Step 4 in Section \ref{sec:analysis_step4}, then Step 1 in Section \ref{sec:analysis_step1}, followed by Step 3 in Section \ref{sec:analysis_step3} and finally Step 2 in Section \ref{sec:analysis_step2}. The results of these individual components will be combined to derive the minimax optimality of Algorithm \ref{alg:whole}, presented in Section \ref{sec:whole}. 
\subsubsection{From Pairwise Relations to Full Ranking (Step 4).} \label{sec:analysis_step4} We first establish a result that clarifies the role of Step 4 of Algorithm \ref{alg:whole}. Consider any matrix $R\in\{0,1\}^{n\times n}$ that satisfies $R_{ij}+R_{ji}=1$ for any $i\neq j$. Let $\widehat{r}$ be the rank vector obtained by sorting $\{\sum_{j\in[n]\backslash\{i\}}R_{ij}\}_{i\in[n]}$ from high to low. The error of $\widehat{r}$ is controlled by the following lemma. \begin{lemma}\label{lem:anderson-ineq} For any $r^*\in\mathfrak{S}_n$, define its pairwise relation matrix $R^*$ such that $R_{ij}^*=\mathbb{I}\{r_i^*<r_j^*\}$. Then, we have $$\textsf{K}(\widehat{r},r^*)\leq \frac{4}{n}\sum_{1\leq i\neq j\leq n}\mathbb{I}\{R_{ij}\neq R_{ij}^*\}.$$ \end{lemma} Lemma \ref{lem:anderson-ineq} is a deterministic inequality that bounds the error of the rank estimation by the estimation error of pairwise relations. It implies that to accurately rank $n$ players, it is sufficient to accurately estimate the pairwise relations between all pairs. \subsubsection{Statistical Properties of League Partition (Step 1).} \label{sec:analysis_step1} The partition output by Algorithm \ref{alg:partition} satisfies several nice properties that are stated by the following theorem. \begin{thm}\label{thm:alg-1} Assume $\theta^*\in\Theta_n(\beta,C_0)$ for some constant $C_0\geq 1$, $\frac{L}{\log n}\rightarrow\infty$ and $\frac{p}{(\beta\vee n^{-1})\log n}\rightarrow\infty$. Let $\{S_k\}_{k\in[K]}$ be the output of Algorithm \ref{alg:partition} with $L_1=\ceil{\sqrt{L\log n}}$, $1\leq M=O(1)$ and $h=\frac{pM}{\beta}$. Then, there exist some constants $C_1,C_2,C_3>0$ only depending on $C_0$ such that the following conclusions hold with probability at least $1-O(n^{-9})$: \begin{enumerate} \item \emph{Boundedness:} For any $k\in[K]$ and any $i,j\in S_{k-1}\cup S_k\cup S_{k+1}$, we have $|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq C_1M$. 
Recall the convention that $S_{0}=S_{K+1}=\varnothing$; \item \emph{Inclusiveness:} For any $k\in[K]$ and any $i\in S_k$, we have $\left\{j\in[n]: |r_i^*-r_j^*|\leq \frac{C_2M}{\beta}\right\}\subset S_{k-1}\cup S_k\cup S_{k+1}$; \item \emph{Separation:} For any $i\in S_k$ and $j\in S_{l}$ such that $l-k\geq 2$, we have $\theta^*_{r_i^*} > \theta^*_{r_j^*}$; \item \emph{Independence:} For any $k\in[K]$, we have $S_k=\check{S}_k$. Here, $\{\check{S}_k\}_{k\in[K]}$ is a partition that is measurable with respect to the $\sigma$-algebra generated by $\{(A_{ij},\bar{y}_{ij}^{(1)}): |\theta_{r_i^*}^*-\theta_{r_j^*}^*|>1.9M\}$; \item \emph{Continuity: } For any $k\in[K-1]$ and any $i\in S_{k-1}\cup S_{k} \cup S_{k+1}\cup S_{k+2}$, we have $\abs{\left\{j\in[n]: |\theta^*_{r_i^*}-\theta^*_{r_j^*}|\leq \frac{M}{2}\right\} \cap (S_{k-1}\cup S_{k} \cup S_{k+1}\cup S_{k+2})} \geq C_3 \br{\frac{ M}{\beta}\wedge n}$. \end{enumerate} \end{thm} We give some remarks on each conclusion of Theorem \ref{thm:alg-1}. The first conclusion asserts that the skill parameters of players from the neighboring leagues are close to each other. This property is complemented by the second conclusion that the close opponents of each player are either from the same league, the previous league, or the next league. In other words, for any $k\in[K]$ and any $i\in S_k$, the local graph $\{A_{jk}: j,k\in S_{k-1}\cup S_k\cup S_{k+1}\}$ can be viewed as a data-driven surrogate of $\mathcal{A}_i$ defined in (\ref{eq:nei-i}). 
Moreover, the second conclusion also implies that $|S_{k-1}\cup S_k\cup S_{k+1}|\gtrsim \frac{1}{\beta}\wedge n$, from which we can deduce the bound $K=O\left(n\beta\vee 1\right)$ that controls the number of iterations Algorithm \ref{alg:partition} needs before it is terminated.\footnote{We can in fact prove a stronger result that $\frac{1}{\beta}\wedge n\lesssim |S_k|\lesssim \frac{1}{\beta}\wedge n$ uniformly for all $k\in[K]$ with probability at least $1-O(n^{-9})$.} Conclusion 3 implies that the partition $\{S_k\}_{k\in[K]}$ is roughly consistent with the true rank in the sense that it correctly identifies the comparisons between players who do not belong to neighboring leagues. Conclusion 4 shows that almost all of the randomness of the partition is from that of $\{(A_{ij},\bar{y}_{ij}^{(1)}): \theta_{r_i^*}^*-\theta_{r_j^*}^*\leq-1.9M\}$. This fact leads to a crucial independence property in the later analysis of the local MLE. Conclusions 1, 2, 4, and 5 are crucial in the analysis of Step 2 in Section \ref{sec:analysis_step2}, while Conclusion 3 will be used in the analysis of Step 3 in Section \ref{sec:analysis_step3}. The proof of Theorem \ref{thm:alg-1} is a delicate mathematical induction argument that iteratively exploits the asymptotic independence between consecutive constructions of leagues. To be specific, the random variable $$w_i^{(k+1)}=\sum_{j\in[n]\backslash\left(S_1\cup\cdots\cup S_k\right)}A_{ij}\mathbb{I}\{\bar{y}_{ij}^{(1)}\leq\psi(-2M)\}$$ can be sandwiched between $\underline{w}_i^{(k+1)}$ and $\overline{w}_i^{(k+1)}$. We show that both $\underline{w}_i^{(k+1)}$ and $\overline{w}_i^{(k+1)}$, when conditioning on the previous leagues $S_1,\cdots,S_k$, approximately follow Binomial distributions.
Essentially, the $A_{ij}$'s that contribute to the summation of $w_i^{(k+1)}$ are disjoint from the $A_{ij}$'s that lead to the constructions of $S_1,\cdots,S_k$, which then implies an asymptotic independence property between $\big(\underline{w}_i^{(k+1)},\overline{w}_i^{(k+1)}\big)$ and $S_1,\cdots,S_k$. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{thm41.pdf} \caption{\textsl{Illustration of the independence property of Algorithm \ref{alg:partition}.}} \label{fig:thm41} \end{figure} This phenomenon is illustrated in Figure \ref{fig:thm41}. In the picture, we use the orange block to denote $S_1\cup\cdots\cup S_k$, the set that has already been partitioned. The next step of the algorithm is to construct the $(k+1)$th league from $[n]\backslash (S_1\cup\cdots\cup S_k)$, which is the blue block. From the positions of $w_i^{(k+1)}$'s, we observe that the construction of $S_{k+1}$ depends on $A_{ij}$'s that are in the yellow area. On the other hand, since the area on the left hand side of the dashed curve satisfies $\bar{y}_{ij}^{(1)}\leq\psi(-2M)$, the construction of the first $k$ leagues only depends on $A_{ij}$'s that are in the grey area. The independence property can be easily seen from the separation between the grey and the yellow areas. A rigorous proof of Theorem \ref{thm:alg-1}, which is based on this argument, will be given in Section \ref{sec:pf-alg-1}. \subsubsection{Statistical Properties of Cross-League Estimation (Step 3).} \label{sec:analysis_step3} The analysis of Step 3 is quite straightforward following the results from the league partition. Assume the Conclusion 3 of Theorem \ref{thm:alg-1} holds. Then for any $i\in S_k$ and $j\in S_{l}$ such that $l-k\geq 2$, we have $R^*_{ij}=1$. Since $R_{ij}=1$ for all such pairs, we have $\sum_{k\in[K-2]}\sum_{ l\in[k+2:K]} \indc{R_{ij} \neq R^*_{ij}, i\in S_k, j\in S_l}=0$. 
\subsubsection{Statistical Properties of Local MLEs (Step 2).} \label{sec:analysis_step2} \iffalse Having obtained Theorem \ref{thm:alg-1}, we are ready to analyze the statistical property of the local MLE (\ref{eq:league-MLE}). We can no longer show $ \indc{R_{ij} \neq R^*_{ij}}=0$ as in the above analysis of Step 3. Instead, we will establish a sharp upper bound for a conditional probability of $R_{ij} \neq R^*_{ij}$. We divide the analysis into three parts. \fi The main challenge of analyzing the local MLE is the dependence between the partition $\{S_k\}_{k\in[K]}$ and the likelihood (\ref{eq:league-loss}). We are going to use Conclusion 4 of Theorem \ref{thm:alg-1} to resolve this issue. Define $$\check{A}_{ij}=A_{ij}\mathbb{I}\{|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq M/2\}+A_{ij}\mathbb{I}\left\{(i,j)\in\mathcal{E}, M/2<|\theta_{r_i^*}^*-\theta_{r_j^*}^*|< 1.1 M\right\},$$ and $$\check{\ell}^{(k)}(\theta)=\sum_{i,j\in \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2}}\check{A}_{ij}\left[\bar{y}_{ij}^{(2)}\log\frac{1}{\psi(\theta_i-\theta_j)}+(1-\bar{y}_{ij}^{(2)})\log\frac{1}{1-\psi(\theta_i-\theta_j)}\right].$$ The minimizer of $\check{\ell}^{(k)}(\theta)$ is denoted by \begin{equation} \check{\theta}^{(k)}\in \mathop{\rm argmin}\check{\ell}^{(k)}(\theta).\label{eq:MLE-check} \end{equation} The introduction of $\check{\ell}^{(k)}(\theta)$ and $\check{\theta}^{(k)}$ is to disentangle the dependence of the MLE on the league partition. By Theorem \ref{thm:alg-1}, we know that $S_k=\check{S}_k$ for all $k\in[K]$. The concentration of $\{\bar{y}_{ij}^{(1)}\}$ implies that $\{|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq M/2\}\subset\{(i,j)\in\mathcal{E}\}\subset\{|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq 1.1M\}$ for all $1\leq i<j\leq n$.
Therefore, we have $$\mathbb{I}\left\{(i,j)\in\mathcal{E}\right\}=\mathbb{I}\{|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq M/2\}+\mathbb{I}\left\{(i,j)\in\mathcal{E}, M/2<|\theta_{r_i^*}^*-\theta_{r_j^*}^*|< 1.1 M\right\}.$$ We can thus conclude that $\ell^{(k)}(\theta)=\check{\ell}^{(k)}(\theta)$ for all $\theta$ with high probability. The result is formally stated below. \begin{lemma}\label{lem:MLE-check} Assume $\theta^*\in\Theta_n(\beta,C_0)$ for some constant $C_0\geq 1$, $\frac{L}{\log n}\rightarrow\infty$ and $\frac{p}{(\beta\vee n^{-1})\log n}\rightarrow\infty$. Let $\{S_k\}_{k\in[K]}$ be the output of Algorithm \ref{alg:partition} with $L_1=\ceil{\sqrt{L\log n}}$, $1\leq M=O(1)$ and $h=\frac{pM}{\beta}$. Then, with probability at least $1-O(n^{-8})$, we have $\ell^{(k)}(\theta)=\check{\ell}^{(k)}(\theta)$ for all $\theta$ and for all $k\in[K]$. As a consequence $\{\widehat{\theta}_i^{(k)}\}_{i\in S_k\cup S_{k+1}}$ and $\{\check{\theta}_i^{(k)}\}_{i\in S_k\cup S_{k+1}}$ are equivalent up to a common shift. \end{lemma} With Lemma \ref{lem:MLE-check}, it suffices to study (\ref{eq:MLE-check}) for the statistical property of the MLE. Note that $\{\check{A}_{ij}\}$ is measurable with respect to the $\sigma$-algebra generated by $\{(A_{ij},\bar{y}_{ij}^{(1)}): |\theta_{r_i^*}^*-\theta_{r_j^*}^*|<1.1M\}$. Theorem \ref{thm:alg-1} shows that $\{\check{S}_k\}$ is measurable with respect to the $\sigma$-algebra generated by $\{(A_{ij},\bar{y}_{ij}^{(1)}): |\theta_{r_i^*}^*-\theta_{r_j^*}^*|>1.9M\}$. We then reach a very important conclusion that $\{\check{A}_{ij}\}$, $\{\bar{y}_{ij}^{(2)}\}$ and $\{\check{S}_k\}$ are mutually independent, and therefore we can analyze $\check{\theta}^{(k)}$ by conditioning on the partition $\{\check{S}_k\}$. 
To be more specific, for any $i,j\in \check{S}_k\cup\check{S}_{k+1}$ such that $\theta_{r_i^*}^*>\theta_{r_j^*}^*$, since $R_{ij} = \indc{\check{\theta}_i^{(k)}>\check{\theta}_j^{(k)}}$, we will provide an upper bound for $\mathbb{P}\left(\check{\theta}_i^{(k)}<\check{\theta}_j^{(k)}\Big|\{\check{S}_k\}_{k\in[K]}\right)$. To this end, we state a result that characterizes the performance of the MLE under a BTL model with bounded skill parameters. Consider a random graph with independent edges $B_{ij}\sim\text{Bernoulli}(p_{ij})$ for $1\leq i<j\leq m$. For each $B_{ij}=1$, observe i.i.d. $y_{ijl}\sim\text{Bernoulli}(\psi(\eta_i^*-\eta_j^*))$ for $l=1,\cdots,L$. Let $\bar{y}_{ij}=\frac{1}{L}\sum_{l=1}^Ly_{ijl}$, and we define the MLE by \begin{equation} \widehat{\eta}\in\mathop{\rm argmin}\sum_{1\leq i<j\leq m}B_{ij}\left[\bar{y}_{ij}\log\frac{1}{\psi(\eta_i-\eta_j)}+(1-\bar{y}_{ij})\log\frac{1}{1-\psi(\eta_i-\eta_j)}\right]. \label{eq:MLE-small} \end{equation} \begin{lemma}\label{lem:prev-paper} Assume $\eta_1^*>\cdots>\eta_m^*$ and $\eta_1^*-\eta_m^*\leq\kappa$. Suppose there exists some constant $c\in(0,1)$ such that $p_{ij}=p$ for all $|i-j|\leq cm$ and $p_{ij}\leq p$ otherwise. As long as $\frac{mp}{\log(m+n)}\rightarrow\infty$ and $\kappa=O(1)$, for any $\delta>0$ that is sufficiently small, there exists a constant $C>0$ such that $$\mathbb{P}\left(\widehat{\eta}_i<\widehat{\eta}_j\right)\leq C\left[\exp\left(-\frac{(1-\delta)L(\eta_i^*-\eta_j^*)^2}{2(W_{i}(\eta^*)+W_j(\eta^*))}\right)+n^{-7}\right],$$ for all $1\leq i<j\leq m$, where $W_i(\eta^*)=\frac{1}{\sum_{j\in[m]\backslash\{i\}}p_{ij}\psi'(\eta_i^*-\eta_j^*)}$ for all $i\in[m]$. \end{lemma} The proof of Lemma \ref{lem:prev-paper}, which relies on a recently developed leave-one-out technique in the analysis of the BTL model \citep{chen2019spectral,chen2020partial}, will be given in Section \ref{sec:pf-lemmas}.
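The objective in (\ref{eq:MLE-small}) is convex in $\eta$ (up to a common shift) and can be minimized by standard first-order methods; in our simulations (Section \ref{sec:simulation}) the local MLEs are computed by the MM algorithm, but a plain gradient-descent sketch already illustrates the computation. In the sketch below, $\psi$ is the sigmoid function of the BTL model, and the step size, iteration count, and toy data are illustrative choices only:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def btl_mle(B, ybar, n_iter=2000, lr=0.5):
    # Gradient descent on the (symmetrized) negative log-likelihood in
    # (eq:MLE-small). B[i, j] = 1 if the pair (i, j) is compared; ybar[i, j]
    # is the observed fraction of wins of i over j, with
    # ybar[j, i] = 1 - ybar[i, j]. The gradient in eta_i is
    # sum_j B[i, j] * (psi(eta_i - eta_j) - ybar[i, j]).
    m = B.shape[0]
    eta = np.zeros(m)
    for _ in range(n_iter):
        diff = eta[:, None] - eta[None, :]
        grad = (B * (sigmoid(diff) - ybar)).sum(axis=1)
        eta -= lr * grad / m
        eta -= eta.mean()  # the likelihood is shift-invariant; center eta
    return eta

# Toy example: m = 5 players on a complete graph, L = 2000 games per pair.
rng = np.random.default_rng(0)
eta_star = np.array([1.0, 0.5, 0.0, -0.5, -1.0])
m, L = 5, 2000
B = 1.0 - np.eye(m)
p_win = sigmoid(eta_star[:, None] - eta_star[None, :])
upper = np.triu(rng.binomial(L, p_win) / L, k=1)
ybar = upper + np.tril(1.0 - upper.T, k=-1)
eta_hat = btl_mle(B, ybar)
print(np.argsort(-eta_hat))  # [0 1 2 3 4]
```

With this much data per pair, sorting the fitted $\widehat{\eta}$ recovers the true order of the toy skill vector.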
By conditioning on $\{\check{S}_k\}$, the statistical property of (\ref{eq:MLE-check}) is a direct consequence of Lemma \ref{lem:prev-paper}. Note that $\mathbb{P}\left(\check{\theta}_i^{(k)}<\check{\theta}_j^{(k)}\Big|\{\check{S}_k\}_{k\in[K]}\right)$ is a function of $\{\check{S}_k\}_{k\in[K]}$, and we will establish a uniform upper bound for this conditional probability for any partition $\{\check{S}_k\}_{k\in[K]}$ satisfying the following conditions: \begin{enumerate} \item[(i)] For any $k\in[K]$ and any $i,j\in \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}$, we have $|\theta_{r_i^*}^*-\theta_{r_j^*}^*|\leq C_1M$; \item[(ii)] For any $k\in[K]$ and any $i\in \check{S}_k$, we have $\left\{j\in[n]: |r_i^*-r_j^*|\leq \frac{C_2M}{\beta}\right\}\subset \check S_{k-1}\cup \check S_k\cup \check S_{k+1}$; \item[(iii)] For any $k\in[K-1]$ and any $i\in \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2}$, we have $\Big|\left\{j\in[n]: |\theta^*_{r_i^*}-\theta^*_{r_j^*}|\leq \frac{M}{2}\right\} \cap ( \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2})\Big| \geq C_3 \br{\frac{ M}{\beta}\wedge n}$. \end{enumerate} Note that we use the convention $\check{S}_0=\check{S}_{K+1}=\varnothing$ and $C_1,C_2,C_3$ are the same constants in Theorem \ref{thm:alg-1}. \iffalse In fact, by Conclusion 4 of Theorem \ref{thm:alg-1}, the three conditions are nearly the same with the Conclusion 1, 2, and 5 of Theorem \ref{thm:alg-1}, respectively. As a result, the upper bound (\ref{eq:conditional-comp}) we are about to establish actually holds uniformly for all $\{\check{S}_k\}_{k\in[K]}$ with high probability. \fi Consider any partition $\{\check{S}_k\}_{k\in[K]}$ satisfying the three conditions above. When applying Lemma \ref{lem:prev-paper}, by Conditions (i) and (ii), we have $\kappa=2C_1M$ and $m=|\check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2}|\asymp \frac{1}{\beta}\wedge n$. 
We also know that for any $i,j \in \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2}$ such that $|\theta^*_{r_i^*}-\theta^*_{r_j^*}|\leq \frac{M}{2}$, we have $\check{A}_{ij}=A_{ij}\sim\text{Bernoulli}(p)$. Then, Condition (iii) implies the existence of a band in $\{(r_i^*,r_j^*):i,j\in \check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2}\}$ with width at least $cm$ for some constant $c>0$, such that $\check{A}_{ij}\sim\text{Bernoulli}(p)$ for all pairs in the band. For any other $(i,j)$, we have $\check{A}_{ij}\sim\text{Bernoulli}(p_{ij})$ with $p_{ij}\leq p$. Having checked the conditions of Lemma \ref{lem:prev-paper}, we obtain the following result for the local MLE (\ref{eq:MLE-check}), \begin{equation} \mathbb{P}\left(\check{\theta}_i^{(k)}<\check{\theta}_j^{(k)}\Big|\{\check{S}_k\}_{k\in[K]}\right)\leq C\left[\exp\left(-\frac{(1-\delta)npL(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2(V_{r_i^*}(\theta^*)+V_{r_j^*}(\theta^*))}\right)+n^{-7}\right], \label{eq:conditional-comp} \end{equation} for any $i,j\in \check{S}_k\cup\check{S}_{k+1}$ such that $\theta_{r_i^*}^*>\theta_{r_j^*}^*$. Recall the definition of $V_i(\theta^*)$ in (\ref{eq:BTL-var}). The constant $\delta$ in (\ref{eq:conditional-comp}) can be made arbitrarily small with a sufficiently large $M$. To derive (\ref{eq:conditional-comp}) from Lemma \ref{lem:prev-paper}, we only need to show $$p\sum_{j\in[n]\backslash\{i\}}\psi'(\theta_{r_i^*}^*-\theta_{r_j^*}^*)\leq \left(1+O(e^{-C_2M})\right)\sum_{j\in(\check{S}_{k-1}\cup \check{S}_k\cup \check{S}_{k+1}\cup \check{S}_{k+2})\backslash\{i\}}p_{ij}\psi'(\theta_{r_i^*}^*-\theta_{r_j^*}^*),$$ for all $i\in \check{S}_k\cup\check{S}_{k+1}$. This is true by a similar argument that leads to (\ref{eq:oracle-fisher-asymp}), together with Condition (ii). Finally, by Theorem \ref{thm:alg-1}, Conditions (i)-(iii) hold for $\{\check{S}_k\}_{k\in[K]}$ with high probability, and thus (\ref{eq:conditional-comp}) is a high-probability bound.
A similar bound to (\ref{eq:conditional-comp}) also holds for (\ref{eq:league-MLE}) by the conclusion of Lemma \ref{lem:MLE-check}. \subsection{Analysis of Algorithm \ref{alg:whole}} \label{sec:whole} With the help of Lemma \ref{lem:anderson-ineq}, Theorem \ref{thm:alg-1}, Lemma \ref{lem:MLE-check} and Lemma \ref{lem:prev-paper}, we are ready to prove that Algorithm \ref{alg:whole} achieves the minimax rate of full ranking. \begin{proof}[Proof of Theorem \ref{thm:BTL-minimax} (upper bound)] Let $\mathcal{G}$ be the event that the conclusions of Theorem \ref{thm:alg-1} and Lemma \ref{lem:MLE-check} hold. We have $\mathbb{P}(\mathcal{G}^c)=O(n^{-8})$. In addition, we use the notation $\check{\mathcal{S}}$ for the event that $\{\check{S}_k\}_{k\in[K]}$ satisfies Conditions (i)-(iii) listed in Section \ref{sec:analysis_step2}. It is clear that $\mathcal{G}\subset \check{\mathcal{S}}$. By Lemma \ref{lem:anderson-ineq}, we have $$\mathbb{E}\textsf{K}(\widehat{r},r^*)\leq \frac{4}{n}\sum_{1\leq i\neq j\leq n}\mathbb{P}(R_{ij}\neq R_{ij}^*).$$ It suffices to give a bound for $\mathbb{P}(R_{ij}\neq R_{ij}^*)$ for every pair $i\neq j$. Note that we have $\mathbb{P}(R_{ij}\neq R_{ij}^*) \leq \mathbb{P}\left(R_{ij}\neq R_{ij}^*, \mathcal{G}\right) + \mathbb{P}(\mathcal{G}^c)$. Then, \begin{align*} \mathbb{P}(R_{ij}\neq R_{ij}^*,\mathcal{G}) & = \sum_{k=1}^K\sum_{l=1}^K\mathbb{P}(R_{ij}\neq R_{ij}^*,\mathcal{G}, i \in S_k, j \in S_l)\\ & = \sum_{(k,l)\in[K]^2: \abs{k-l}\leq 1} \mathbb{P}(R_{ij}\neq R_{ij}^*,\mathcal{G}, i \in S_k, j \in S_l)\\ &\quad + \sum_{(k,l)\in[K]^2: \abs{k-l}\geq 2} \mathbb{P}(R_{ij}\neq R_{ij}^*,\mathcal{G}, i \in S_k, j \in S_l). \end{align*} The second term above is zero. This is due to the analysis of Step 3 in Section \ref{sec:analysis_step3}, which shows $\sum_{(k,l)\in[K]^2: \abs{k-l}\geq 2} \mathbb{I}\{R_{ij}\neq R_{ij}^*, i \in S_k, j \in S_l\}=0$ under the event $\mathcal{G}$. Hence, we only need to study the first term.
Without loss of generality, consider $\theta_{r_i^*}^*>\theta_{r_j^*}^*$. Then, the event $\{R_{ij}\neq R_{ij}^*, \mathcal{G},i\in S_k,j\in S_{k}\}$ is equivalent to $\{\widehat{\theta}^{(k)}_i<\widehat{\theta}^{(k)}_j, \mathcal{G},i\in S_k,j\in S_{k}\}$, which is further equivalent to $\{\check{\theta}^{(k)}_i<\check{\theta}^{(k)}_j, \mathcal{G},i\in \check{S}_k,j\in \check{S}_{k}\}$ by the definition of $\mathcal{G}$. We thus have \begin{align*} \mathbb{P}(R_{ij}\neq R_{ij}^*,\mathcal{G}) & = \sum_{(k,l)\in[K]^2: \abs{k-l}\leq 1} \mathbb{P}(\check{\theta}^{(k)}_i<\check{\theta}^{(k)}_j, \mathcal{G},i\in \check{S}_k,j\in \check{S}_{l}) \\ & \leq \sum_{(k,l)\in[K]^2: \abs{k-l}\leq 1} \mathbb{P}(\check{\theta}^{(k)}_i<\check{\theta}^{(k)}_j, \check{\mathcal{S}},i\in \check{S}_k,j\in \check{S}_{l}) \\ & = \sum_{(k,l)\in[K]^2: \abs{k-l}\leq 1} \mathbb{P}\left(\check{\theta}^{(k)}_i<\check{\theta}^{(k)}_j\Big|\check{\mathcal{S}},i\in \check{S}_k,j\in \check{S}_{l}\right)\mathbb{P}\left(\check{\mathcal{S}},i\in \check{S}_k,j\in \check{S}_{l}\right)\\ & \leq C\left[\exp\left(-\frac{(1-\delta)npL(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2(V_{r_i^*}(\theta^*)+V_{r_j^*}(\theta^*))}\right)+n^{-7}\right] \sum_{(k,l)\in[K]^2: \abs{k-l}\leq 1} \mathbb{P}\left(\check{\mathcal{S}}, i\in \check{S}_k,j\in \check{S}_{l}\right) \\ &\leq C\left[\exp\left(-\frac{(1-\delta)npL(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2(V_{r_i^*}(\theta^*)+V_{r_j^*}(\theta^*))}\right)+n^{-7}\right], \end{align*} for some constant $C>0$ and some $\delta>0$ that is arbitrarily small. The second last inequality above is by Lemma \ref{lem:prev-paper}, or more specifically, (\ref{eq:conditional-comp}), as we show (\ref{eq:conditional-comp}) holds for any $\{\check{S}_k\}_{k\in[K]}$ satisfying Conditions (i)-(iii) listed in Section \ref{sec:analysis_step2}.
Since $\mathbb{P}(\mathcal{G}^c)=O(n^{-8})$, we obtain the bound \begin{equation} \mathbb{P}(R_{ij}\neq R_{ij}^*) \leq 2C\left[\exp\left(-\frac{(1-\delta)npL(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2(V_{r_i^*}(\theta^*)+V_{r_j^*}(\theta^*))}\right)+n^{-7}\right],\label{eq:trueforallpairs} \end{equation} for all $i\neq j$. Summing the bound (\ref{eq:trueforallpairs}) over all $i\neq j$, we have \begin{eqnarray} \nonumber \mathbb{E}\textsf{K}(\widehat{r},r^*) &\leq& \frac{8C}{n}\sum_{1\leq i\neq j\leq n}\exp\left(-\frac{(1-\delta)npL(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2(V_{r_i^*}(\theta^*)+V_{r_j^*}(\theta^*))}\right) + 8Cn^{-6} \\ \label{eq:sum-exp} &=& \frac{8C}{n}\sum_{1\leq i\neq j\leq n}\exp\left(-\frac{(1-\delta)npL(\theta_{i}^*-\theta_{j}^*)^2}{2(V_{i}(\theta^*)+V_{j}(\theta^*))}\right) + 8Cn^{-6}. \end{eqnarray} Now it is just a matter of simplifying the expression (\ref{eq:sum-exp}). We consider the following two cases: $\frac{Lp\beta^2}{\beta\vee n^{-1}}\leq 1$ and $\frac{Lp\beta^2}{\beta\vee n^{-1}}> 1$. First, we consider the case $\frac{Lp\beta^2}{\beta\vee n^{-1}}\leq 1$. By Lemma \ref{lem:sum-psi-prime} proved in Section \ref{sec:pf-BTL}, there exist constants $c_1,c_2>0$, such that \begin{equation} c_1\left(\beta\vee \frac{1}{n}\right)\leq \frac{V_{i}(\theta^*)}{n}\leq c_2\left(\beta\vee \frac{1}{n}\right),\label{eq:order-of-variance} \end{equation} for all $\theta^*\in\Theta_n(\beta,C_0)$ and all $i\in[n]$. 
Then, for each $i\in[n]$, \begin{eqnarray*} \sum_{j\in[n]\backslash\{i\}}\exp\left(-\frac{(1-\delta)npL(\theta_{i}^*-\theta_{j}^*)^2}{2(V_{i}(\theta^*)+V_{j}(\theta^*))}\right) &\leq& \sum_{j\in[n]\backslash\{i\}}\exp\left(-\frac{1}{3c_2}(i-j)^2\frac{Lp\beta^2}{\beta\vee n^{-1}}\right) \\ &\leq& \int_0^{\infty}\exp\left(-\frac{1}{3c_2}x^2\frac{Lp\beta^2}{\beta\vee n^{-1}}\right) dx \\ &=& \sqrt{\frac{3\pi c_2}{4}}\sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}, \end{eqnarray*} and we have $\mathbb{E}\textsf{K}(\widehat{r},r^*)\lesssim \sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}$. The definition of the loss function implies $\mathbb{E}\textsf{K}(\widehat{r},r^*)\leq n$, and thus we obtain the rate $n\wedge\sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}$ when $\frac{Lp\beta^2}{\beta\vee n^{-1}}\leq 1$. Next, we consider the case $\frac{Lp\beta^2}{\beta\vee n^{-1}}> 1$. For any $|i-j|\leq C_0\sqrt{c_2/c_1}$, we have $V_j(\theta^*)\leq (1+\delta')V_i(\theta^*)$ for some $\delta'=o(1)$. This is by the definition of the variance function and the fact that $\sup_x\left|\frac{\psi'(x+\Delta)}{\psi'(x)}-1\right|\lesssim |\Delta|$ for $\Delta=o(1)$. 
Therefore, we have \begin{eqnarray*} && \sum_{1\leq i\neq j\leq n: |i-j|\leq C_0\sqrt{c_2/c_1}}\exp\left(-\frac{(1-\delta)npL(\theta_{i}^*-\theta_{j}^*)^2}{2(V_{i}(\theta^*)+V_{j}(\theta^*))}\right) \\ &\lesssim& \sum_{i=1}^{n-1}\exp\left(-\frac{(1-2\delta)npL(\theta_i^*-\theta_{i+1}^*)^2}{4V_i(\theta^*)}\right). \end{eqnarray*} By (\ref{eq:order-of-variance}), we also have \begin{eqnarray*} && \sum_{1\leq i\neq j\leq n: |i-j|> C_0\sqrt{c_2/c_1}}\exp\left(-\frac{(1-2\delta)npL(\theta_{i}^*-\theta_{j}^*)^2}{2(V_{i}(\theta^*)+V_{j}(\theta^*))}\right) \\ &\lesssim& \sum_{1\leq i\neq j\leq n: |i-j|> C_0\sqrt{c_2/c_1}}\exp\left(-\frac{(1-2\delta)pL\beta^2(i-j)^2}{2c_2(\beta\vee n^{-1})}\right) \\ &\lesssim& n\exp\left(-\frac{(1-2\delta)pL\beta^2C_0^2}{2c_1(\beta\vee n^{-1})}\right) \\ &\lesssim& \sum_{i=1}^{n-1}\exp\left(-\frac{(1-2\delta)npL(\theta_i^*-\theta_{i+1}^*)^2}{4V_i(\theta^*)}\right). \end{eqnarray*} The desired bound for $\mathbb{E}\textsf{K}(\widehat{r},r^*)$ immediately follows by summing up the above bounds. \end{proof} \subsection{A Data-Driven $h$.} \label{sec:h} Our proposed algorithm relies on a tuning parameter $h=\frac{pM}{\beta}$ that is unknown in practice. This quantity can be replaced by a data-driven version, defined as \begin{equation} \widehat{h}=\frac{1}{n}\sum_{1\leq i<j\leq n}A_{ij}\mathbb{I}\{1.2 M\leq |\psi^{-1}(\bar{y}_{ij}^{(1)})| \leq 1.8M\}. \label{eq:global-h} \end{equation} A standard concentration result implies that $\widehat{h}\asymp \frac{pM}{\beta}$ with high probability. Moreover, by defining $$\check{h}=\frac{1}{n}\sum_{\substack{1\leq i<j\leq n\\1.1M<|\theta_{r_i^*}^*-\theta_{r_j^*}^*|<1.9M}}A_{ij}\mathbb{I}\{1.2 M\leq |\psi^{-1}(\bar{y}_{ij}^{(1)})| \leq 1.8M\},$$ it can be shown that $\widehat{h}=\check{h}$ with high probability.
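In code, the estimator (\ref{eq:global-h}) is a one-pass count over the observed pairs. The sketch below is our own illustrative implementation; it uses the fact that $\psi$ is the sigmoid function of the BTL model, so that $\psi^{-1}$ is the logit:

```python
import numpy as np

def h_hat(A, ybar1, M):
    # Data-driven h from (eq:global-h): the average number (per player) of
    # observed pairs whose empirical logit psi^{-1}(ybar) falls in
    # [1.2M, 1.8M] in absolute value.
    n = A.shape[0]
    with np.errstate(divide="ignore"):
        logit = np.log(ybar1) - np.log(1.0 - ybar1)  # psi^{-1}; +-inf at 0 or 1
    iu = np.triu_indices(n, k=1)
    in_band = (np.abs(logit[iu]) >= 1.2 * M) & (np.abs(logit[iu]) <= 1.8 * M)
    return ((A[iu] == 1) & in_band).sum() / n

# Toy check with M = 1: only the pair with logit 1.5 lands in [1.2, 1.8].
s = lambda t: 1.0 / (1.0 + np.exp(-t))
A = np.ones((3, 3)) - np.eye(3)
ybar1 = np.full((3, 3), 0.5)
ybar1[0, 1], ybar1[1, 0] = s(1.5), s(-1.5)
ybar1[1, 2], ybar1[2, 1] = s(2.0), s(-2.0)
print(h_hat(A, ybar1, 1.0))  # 1/3
```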
Since $\check{h}$ is measurable with respect to the $\sigma$-algebra generated by $\{(A_{ij},\bar{y}_{ij}^{(1)}): 1.1M<|\theta_{r_i^*}^*-\theta_{r_j^*}^*|<1.9M\}$, we still have the asymptotic independence property between the league partition and local MLE after $h$ being replaced by $\widehat{h}$ in Algorithm \ref{alg:partition}. Therefore, with a data-driven $\widehat{h}$ being used in the proposed algorithm, the upper bound conclusion of Theorem \ref{thm:BTL-minimax} still holds. \section{Numerical Results}\label{sec:simulation} In this section, we conduct numerical experiments to study the statistical and computational properties of Algorithm \ref{alg:whole}. \iffalse In this section, we numerically compare the performance of the vanilla MLE and our Algorithm \ref{alg:whole} described in Section \ref{sec:BTL-algo}. We will compare them in two aspects: statistical accuracy and time efficiency. Additionally, we will also look at the performance of Algorithm \ref{alg:partition}, i.e., how players are partitioned. \fi \paragraph{Simulation Setting.} In our experiment, we consider $\theta^*\in\mathbb{R}^n$ with $n=1000$. In particular, we set $\theta_i^*=-\beta i$ for all $i\in[n]$ with some $\beta\in[0.001, 0.05]$. The range of $\beta$ implies that the dynamic range $\theta_1^*-\theta_{1000}^*$ takes value in $[0.999, 49.95]$. We assume the true rank is the identity permutation, i.e., $r_i^*=i$ for all $ i\in[n]$. We also consider three different $(L, L_1)$ pairs: (50, 10), (75, 15), (100, 20) in Algorithm \ref{alg:whole}. \paragraph{Implementation.} In the implementation of Algorithm \ref{alg:whole}, we set $M = 5$. For the choice of $h$, though the recommended data-driven estimator (\ref{eq:global-h}) works for the theoretical purpose, it may not be a sensible choice for a data set with a moderate size. 
Note that with $M = 5$, we have $\psi(1.2M)=0.9975274$ and $\psi(1.8M)=0.9998766$, respectively, and thus the indicator $\mathbb{I}\{1.2 M\leq |\psi^{-1}(\bar{y}_{ij}^{(1)})| \leq 1.8M\}$ is usually zero in (\ref{eq:global-h}). To address this issue, we set $h$ by $$h=0.4\times\frac{1}{n}\sum_{1\leq i<j\leq n}A_{ij}\indc{\psi(-M)\leq\bar{y}_{ij}^{(2)}\leq\psi(M)}.$$ The computation of the local MLE (\ref{eq:league-MLE}) is implemented by the MM algorithm \citep{hunter2004mm}. All simulations are implemented in Python (along with the NumPy package, whose backend is written in C) using a 2019 MacBook Pro, 15-inch, 2.6GHz 6-core Intel Core i7. \paragraph{Accuracy of League Partition.} We first study Algorithm \ref{alg:partition}, which is Step 1 of Algorithm \ref{alg:whole}. The purpose of Algorithm \ref{alg:partition} is to divide all players into $K$ leagues. The average value of $K$ from 50 independent experiments is reported in Figure \ref{sim:league-cnt}. This number increases linearly with $\beta$, which agrees with our theoretical bound $K=O_{\mathbb{P}}(n\beta\vee 1)$. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth, trim=0 40 0 60, clip]{leagues_cnt.pdf} \caption{\textsl{The number of leagues obtained by Algorithm \ref{alg:partition}. The orange curve is mostly overlapped by the green curve.}} \label{sim:league-cnt} \end{figure} To quantify the accuracy of Algorithm \ref{alg:partition}, we define the following metric, $$E_{partition}=\begin{cases} \frac{1}{K - 2}\sum_{k=2}^{K-1}\indc{\max\left\{r_i^*:i\in\cup_{k^\prime<k}S_{k^\prime}\right\}>\min\left\{r_i^*:i\in\cup_{k^\prime>k}S_{k^\prime}\right\}}, & K \geq 3,\\ 0, & K < 3. \end{cases}$$ The quantity $E_{partition}$ is essentially designed to verify Conclusion 3 of Theorem \ref{thm:alg-1}, and we expect that $E_{partition}$ should be $0$ with high probability.
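The metric $E_{partition}$ is straightforward to compute from the league partition and the true ranks; a sketch (with 0-indexed leagues, so the paper's $k\in\{2,\dots,K-1\}$ becomes $k\in\{1,\dots,K-2\}$) is:

```python
def partition_error(leagues, r_star):
    # E_partition: for each middle league, check whether the worst true rank
    # among earlier leagues exceeds the best true rank among later leagues.
    # leagues is a list of lists of player indices; r_star[i] is the true
    # rank of player i.
    K = len(leagues)
    if K < 3:
        return 0.0
    errors = 0
    for k in range(1, K - 1):
        before = [r_star[i] for S in leagues[:k] for i in S]
        after = [r_star[i] for S in leagues[k + 1:] for i in S]
        errors += int(max(before) > min(after))
    return errors / (K - 2)

r_star = [1, 2, 3, 4, 5, 6]
print(partition_error([[0, 1], [2, 3], [4, 5]], r_star))  # 0.0: consistent with the true rank
print(partition_error([[0, 4], [1, 2], [3, 5]], r_star))  # 1.0: player 4 is misplaced
```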
Note that Conclusion 3 of Theorem \ref{thm:alg-1} guarantees the correctness of the cross-league pairwise relation estimation, which is Step 3 of Algorithm \ref{alg:whole}. For each combination of $(\beta, L, L_1)$, we generate independent data and repeat the experiments 50 times. It turns out that $E_{partition}$ is always 0, which agrees with the theoretical property of the league partition. \paragraph{Statistical Error.} Next, we study the ranking error of the proposed divide-and-conquer algorithm (Algorithm \ref{alg:whole}) under the Kendall's tau distance defined by (\ref{eq:kendall}). For comparison, we also implement the global MLE and the spectral method. The MLE outputs the rank of the entries of $\widehat{\theta}$ that minimizes the negative log-likelihood function \begin{equation} \sum_{1\leq i<j\leq n}A_{ij}\left[\bar{y}_{ij}\log\frac{1}{\psi(\theta_i-\theta_j)}+(1-\bar{y}_{ij})\log\frac{1}{1-\psi(\theta_i-\theta_j)}\right], \label{eq:global-log-lik} \end{equation} where $\bar{y}_{ij}=\frac{1}{L}\sum_{l=1}^Ly_{ijl}$. The spectral method, also known as Rank Centrality, is a ranking algorithm proposed by \cite{negahban2017rank}. Define a matrix $P\in\mathbb{R}^{n\times n}$ by $$ P_{ij}=\begin{cases} \frac{1}{d}A_{ij}\bar{y}_{ji}, & i\neq j, \\ 1 - \frac{1}{d}\sum_{l\in[n]\backslash\{i\}}A_{il}\bar{y}_{li}, & i=j, \end{cases} \label{eq:spec-P} $$ where $d$ is set to be twice the maximum degree of the random graph $A$. Note that $P$ is the transition matrix of a Markov chain. Let $\widehat{\pi}$ be the stationary distribution of this Markov chain, and the spectral method outputs the rank of the entries of the vector $\widehat{\pi}$. Both the MLE and the spectral method have been studied for parameter estimation \citep{negahban2017rank,chen2019spectral} and top-$k$ ranking \citep{chen2019spectral,chen2020partial} under the BTL model.
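For concreteness, the spectral method can be sketched in a few lines of NumPy. The implementation below is our own illustration; it uses a dense eigendecomposition to find the stationary distribution, whereas a power iteration would be preferable at scale, and the toy data are illustrative choices only:

```python
import numpy as np

def rank_centrality(A, ybar):
    # Build the transition matrix P from (eq:spec-P): off-diagonal entries
    # P[i, j] = A[i, j] * ybar[j, i] / d with d twice the maximum degree,
    # and diagonal entries chosen so that each row sums to one.
    n = A.shape[0]
    d = 2.0 * A.sum(axis=1).max()
    P = A * ybar.T / d
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    # Stationary distribution = leading left eigenvector of P (eigenvalue 1).
    w, V = np.linalg.eig(P.T)
    pi = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    pi /= pi.sum()
    return np.argsort(-pi)  # players from strongest to weakest

# Toy example: 5 players, complete graph, strong signal.
rng = np.random.default_rng(1)
theta = np.array([1.5, 0.75, 0.0, -0.75, -1.5])
n, L = 5, 5000
A = np.ones((n, n)) - np.eye(n)
p_win = 1.0 / (1.0 + np.exp(-(theta[:, None] - theta[None, :])))
upper = np.triu(rng.binomial(L, p_win) / L, k=1)
ybar = upper + np.tril(1.0 - upper.T, k=-1)
print(rank_centrality(A, ybar))  # [0 1 2 3 4]
```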
However, to the best of our knowledge, the statistical properties of the two methods for full ranking have not been studied in the literature. The recent work \cite{chen2019spectral} has established the estimation errors of the skill parameter for both the MLE and the spectral method. Their results involve a factor of $e^{O(n\beta)}$ in the estimation error under an $\ell_{\infty}$ loss, which suggests that the MLE and the spectral method may not perform well when the dynamic range $n\beta$ diverges. We implement the MLE, the spectral method, and the divide-and-conquer algorithm for various combinations of $\beta$ and $L$. The results of each setting are computed by averaging across $50$ independent experiments. \begin{figure}[h] \centering \includegraphics[width=1.1\textwidth]{error_combined_and_spec.pdf} \caption{\textsl{Statistical error under Kendall's tau. Left: $(L, L_1)=(50, 10)$; Middle: $(L, L_1)=(75, 15)$; Right: $(L, L_1) = (100, 20)$.}} \label{sim:error-plot} \end{figure} As shown in Figure \ref{sim:error-plot}, the spectral method is significantly worse than the MLE and the divide-and-conquer algorithm. The performance of the spectral method may be explained by the $e^{O(n\beta)}$ factor in the $\ell_{\infty}$ norm error bound obtained by \cite{chen2019spectral}, though the exact relation between the $\ell_{\infty}$ error and the full ranking error is not clear to us. On the other hand, the error curves of the MLE and the divide-and-conquer algorithm are very close. Since the divide-and-conquer algorithm has been proved to be minimax optimal, the simulation results suggest that the MLE may also enjoy such statistical optimality. The current analysis of the MLE \citep{chen2019spectral,chen2020partial} crucially depends on the spectral property of the Hessian matrix $H(\theta^*)$ of the objective (\ref{eq:global-log-lik}). 
It is known that the condition number of $H(\theta^*)$ on the subspace orthogonal to $\mathds{1}_{n}$ is of order $e^{O(n\beta)}$, which explains the $e^{O(n\beta)}$ factor in the $\ell_{\infty}$ estimation error of the MLE \citep{chen2019spectral}. However, our simulation study reveals that the error bound of \cite{chen2019spectral} can be potentially loose. The definition of the Kendall's tau distance suggests that a sharp analysis of the MLE requires a careful study of the random variable $\widehat{\theta}_{r_i^*}-\widehat{\theta}_{r_j^*}$. We conjecture that the variance of $\widehat{\theta}_{r_i^*}-\widehat{\theta}_{r_j^*}$ should be approximately proportional to $(e_{r_i^*}-e_{r_j^*})^TH(\theta^*)^{\dagger}(e_{r_i^*}-e_{r_j^*})$, where $e_j$ is the $j$th canonical vector with all entries being 0 except that the $j$th entry is 1. Since $H(\theta^*)$ can be viewed as the graph Laplacian of some random weighted graph, there may exist random matrix tools to study $(e_{r_i^*}-e_{r_j^*})^TH(\theta^*)^{\dagger}(e_{r_i^*}-e_{r_j^*})$ directly without using the naive condition number bound, and we leave this interesting direction as a future project. In comparison, our divide-and-conquer algorithm does not need to solve the global MLE. Since the objective function of each local MLE is well conditioned (Lemma \ref{lem:bound-hessian}), Algorithm \ref{alg:whole} is provably optimal in addition to its good performance in simulation. \begin{figure}[h] \centering \includegraphics[width=1.1\textwidth]{time_combined_and_spec.pdf} \caption{\textsl{Running time comparison. Left: $(L, L_1)=(50, 10)$; Middle: $(L, L_1)=(75, 15)$; Right: $(L, L_1) = (100, 20)$.}} \label{sim:time-plot} \end{figure} \paragraph{Computational Cost.} Finally, we compare the computational costs of the three methods. The average time needed to run the three algorithms is given in Figure \ref{sim:time-plot}. 
The spectral method, despite its unsatisfactory statistical error, is the fastest, partly because finding the stationary distribution is just a single line of code using a NumPy function whose backend is C. The running time of the MLE grows rapidly as $\beta$ increases. This can be explained by the growing condition number of the Hessian matrix $H(\theta^*)$. While the condition number may not affect the statistical error of the MLE, it does have a rather strong effect on its computational cost. On the other hand, the running time for the divide-and-conquer method (Algorithm \ref{alg:whole}) first increases with $\beta$, and then stabilizes. This is the effect of Algorithm \ref{alg:partition}, which divides a large difficult problem into many small sub-problems, and after that each small sub-problem can be conquered efficiently. In fact, we can further improve the computational efficiency by solving the sub-problems in parallel. The initial increase of the running time of Algorithm \ref{alg:whole} is due to the additional league partition step. Recall that the league partition step divides the players into $K=O_{\mathbb{P}}(n\beta\vee 1)$ subsets. When $\beta$ is small, we have a very small $K$. According to the formula (\ref{eq:league-loss}), the local MLE is as difficult as the global MLE whenever $K\leq 4$. In this regime, the divide-and-conquer method is more time consuming because of the additional league partition step. On the other hand, as $\beta$ grows, the computational advantage of the divide-and-conquer strategy becomes significant. This makes our proposed algorithm scalable to large data sets while preserving statistical optimality, making the divide-and-conquer algorithm the best overall method among the three. \section{Discussion}\label{sec:disc} In this paper, the problem of ranking $n$ players from partial comparison data under the BTL model has been investigated.
We have derived the minimax rate with respect to the Kendall's tau distance. A divide-and-conquer algorithm is proposed and is proved to achieve the minimax rate. In this section, we discuss a few directions along which the results of the paper can be extended. \iffalse As illustrated in Figure \ref{sim:error-plot}, Section \ref{sec:simulation}, the vanilla MLE also seems to achieve the optimal statistical accuracy, despite the ill-conditioned Hessian. Therefore, we conjecture that the vanilla MLE should also be rate optimal. In fact, the Hessian $H(\theta^*)$ comes into play when trying to upper bound $\mathbb{P}_{(\theta^*, r^*)}\left(\widehat{\theta}_{r_i^*}<\widehat{\theta}_{r_j^*}\right)$, where $r_i^*<r_j^*$, where $\widehat{\theta}_{r_i^*}$ is the estimated skill parameter of player $i$ using the vanilla MLE. The variance of $\widehat{\theta}_{r_i^*}-\widehat{\theta}_{r_j^*}$ is approximately $(e_{r_i^*}-e_{r_j^*})^TH(\theta^*)^{\dagger}(e_{r_i^*}-e_{r_j^*})$ from classical likelihood theory, where $e_{k}$ is a one-hot vector with all entries being 0 except that entry $k$ is 1. Naturally, the spectral property of $H(\theta^*)$ gives us direct control of this variance. However, it seems to be an overkill since we only need to care about the quadratic form in a specific set of directions, i.e., $\{e_i-e_j: i\neq j\}$. This, together with the fact that $H(\theta^*)$ is a Graph Laplacian and the special structure of $\theta^*$ (roughly equidistant), may lead to a better control of this variance without looking at the spectral property. \fi An important condition that we impose throughout the paper is the regularity of the skill parameters $\theta^*\in\Theta_n(\beta,C_0)$. It assumes that $|\theta_i^*-\theta_j^*|\asymp \beta|i-j|$, which roughly describes that players with different skills are evenly distributed in the population. 
Without this condition, we conjecture that the minimax rate under the Kendall's tau loss should be $$\inf_{\widehat{r}\in\mathfrak{S}_n}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\textsf{K}(\widehat{r},r^*)\asymp \frac{1}{n}\sum_{1\leq i<j\leq n}\exp\left(-\frac{(1+o(1))npL(\theta_{i}^*-\theta_{j}^*)^2}{2(V_{i}(\theta^*)+V_{j}(\theta^*))}\right).$$ In fact, this formula has already appeared in the upper bound analysis (\ref{eq:sum-exp}) and can be simplified to the result of Theorem \ref{thm:BTL-minimax} when $\theta^*\in\Theta_n(\beta,C_0)$. Extending the result of Theorem \ref{thm:BTL-minimax} beyond the condition $\theta^*\in\Theta_n(\beta,C_0)$ is possible with suitable modifications of the league partition step described in Algorithm \ref{alg:partition}. Without $|\theta_i^*-\theta_j^*|\asymp \beta|i-j|$, the partition formula $S_k=\{i\in[n]\backslash(S_1\cup\cdots \cup S_{k-1}): w_i^{(k)}\leq h\}$ should be replaced by $S_k=\{i\in[n]\backslash(S_1\cup\cdots \cup S_{k-1}): w_i^{(k)}\leq h_k\}$ for some sequence $\{h_k\}$ to account for the non-regularity of $\theta^*$. Intuitively, each league size $|S_k|$ should adaptively depend on the local density of the skill parameters in the neighborhood from which it is selected. Then, the major difficulty is to find a data-driven $\{\widehat{h}_k\}$ that estimates the local density. When $|\theta_i^*-\theta_j^*|\asymp \beta|i-j|$, we can just use the global estimator (\ref{eq:global-h}). Without this assumption, estimating $\{h_k\}$ is a much harder problem. In \cite{jadbabaie2020estimation}, it is assumed that the skill parameters $\theta_1^*,\cdots,\theta_n^*$ are drawn i.i.d.\ from some distribution $F$ instead of being fixed parameters, and the authors have studied the problem of estimating $F$, which is called the skill distribution, from the partial pairwise comparison data.
Under this formulation, the estimation of the parameters $\{h_k\}$ can be linked to the problem of local bandwidth selection in kernel density estimation \citep{jones1996brief}. We leave this direction of research as one of our future projects. A restriction of the BTL model is that it can only handle pairwise comparisons. One extension from pairwise to multiple comparisons is the popular Plackett-Luce model \cite{plackett1975analysis,luce2012individual}. Suppose there is a subset of $J$ players $S=\{i_1,i_2,\cdots,i_J\}$. Under the Plackett-Luce model, the probability that $j$ is selected among $S$ is given by the formula $\frac{\exp(\theta_j)}{\sum_{i\in S}\exp(\theta_i)}$. Statistical analysis of ranking under the Plackett-Luce model has rarely been explored. Both the minimax rate and the construction of optimal algorithms are important open problems. The ranking problem has also been studied under nonparametric comparison models. For example, a nonparametric stochastically transitive model was proposed by \cite{shah2016stochastically,shah2017simple} and the problems of estimating the mean matrix and top-$k$ ranking have been investigated. However, full ranking is still a problem that has not been well studied under nonparametric models. One of the few works that we are aware of is \cite{mao2018minimax}, which assumes $\mathbb{P}(y_{ijl}=1)>\frac{1}{2}+\gamma$ when $r_i^*<r_j^*$. An investigation of full ranking under more general nonparametric settings is another direction to be explored. \section{Proofs}\label{sec:pf} \subsection{Proof of Theorem \ref{thm:Gaussian-minimax}}\label{sec:pf-G} We prove Theorem \ref{thm:Gaussian-minimax} in this section. We first state and prove a few lemmas. \begin{lemma}\label{lem:A-accurate} Assume $p\geq\frac{c_0\log n}{n}$ for some sufficiently large $c_0>0$.
Then, we have \begin{equation} \opnorm{A-\mathbb{E}(A)}\leq C\sqrt{np},\label{eq:A-op} \end{equation} \begin{equation} \opnorm{D-\mathbb{E}(D)}\leq C\sqrt{np\log n}\label{eq:D-op} \end{equation} for some constant $C>0$ with probability at least $1-O(n^{-10})$. \end{lemma} \begin{proof} Bound (\ref{eq:A-op}) is a direct consequence of Theorem 5.2 in \cite{lei2015consistency}, and Bound (\ref{eq:D-op}) follows from standard concentration of sums of i.i.d. Bernoulli random variables. \end{proof} \begin{lemma}\label{lem:Laplace-eigen} Assume $p\geq\frac{c_0\log n}{n}$ for some sufficiently large $c_0>0$. Then, we have $$np-2C\sqrt{np\log n}\leq\lambda_{\min,\perp}(\mathcal{L}_A)=\min_{u\neq 0, \mathds{1}_n^Tu=0}\frac{u^T\mathcal{L}_A u}{\norm{u}^2},$$ $$np + 2C\sqrt{np\log n}\geq\lambda_{\max,\perp}(\mathcal{L}_A)= \max_{u\neq 0, \mathds{1}_n^Tu=0}\frac{u^T\mathcal{L}_A u}{\norm{u}^2}$$ for some constant $C>0$ with probability at least $1-O(n^{-10})$. \end{lemma} \begin{proof} Note the decomposition $$\mathcal{L}_A=\mathbb{E}\mathcal{L}_A+D-\mathbb{E} D-(A-\mathbb{E} A)$$ and $\lambda_{\min, \perp}(\mathbb{E}\mathcal{L}_A)=\lambda_{\max, \perp}(\mathbb{E}\mathcal{L}_A)=np$. By Lemma \ref{lem:A-accurate}, we have $$\opnorm{D-\mathbb{E} D-(A-\mathbb{E} A)}\leq 2C\sqrt{np\log n}$$ with probability at least $1-O(n^{-10})$ for some $C>0$. The lemma then follows immediately from Weyl's inequality. \end{proof} We introduce the notation $r^{*(i,j)}\in\mathfrak{S}_n$ for the element of $\mathfrak{S}_n$ satisfying \begin{align}\label{eqn:r_star_ij_def} r_k^{*(i,j)}= \begin{cases} r_k^*, \text{ if }k\neq i,j \\ r_j^*,\text{ if }k=i\\ r_i^*,\text{ if }k=j\\ \end{cases}. \end{align} That is, $r^{*(i,j)}$ is the permutation obtained by swapping the $i$th and $j$th positions of $r^*$ while keeping the other positions fixed. \begin{lemma}\label{lem:Gaussian-two-point} Assume $\frac{np}{\log n}\to\infty$.
There exists $\delta=o(1)$, such that for any $\theta^*\in\Theta_n(\beta, C_0)$, any $r^*\in\mathfrak{S}_n$, any $i,j\in[n], i\neq j$, we have \begin{align*} &\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\widehat{r}\neq r^*\right)+\mathbb{P}_{(\theta^*,\sigma^2, r^{*(i,j)})}\left(\widehat{r}\neq r^{*(i,j)}\right)}{2}\\ &\gtrsim \min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}}\exp\left(-\frac{(1+\delta)np(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{4\sigma^2}\right)\right\} \end{align*} \end{lemma} \begin{proof} Assume $r_i^*=a<r_j^*=b$ and thus $\theta_a^*\geq\theta_b^*$. Let $\mathcal{F}$ be the event about $A$ on which Lemma \ref{lem:A-accurate} holds. We have $\pbr{\mathcal{F}}>1/2$. To simplify notation, let $\mathbb{P}_A(\cdot)=\mathbb{P}_{(\theta^*,\sigma^2, r^*)}(\cdot|A)$ be the conditional probability. For any $A$, by the Neyman--Pearson lemma, the optimal procedure is given by the likelihood ratio test. Then \begin{align} \nonumber&\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\widehat{r}\neq r^*\right)+\mathbb{P}_{(\theta^*,\sigma^2, r^{*(i,j)})}\left(\widehat{r}\neq r^{*(i,j)}\right)}{2}\\ \nonumber&\geq \pbr{\mathcal{F}}\inf_{A\in\mathcal{F}}\mathbb{P}_{A}\left(\frac{d\mathbb{P}_{(\theta^*,\sigma^2, r^{*(i,j)})}}{d\mathbb{P}_{(\theta^*,\sigma^2, r^{*})}}\geq1\right)\\ \nonumber&\gtrsim\inf_{A\in\mathcal{F}}\mathbb{P}_{A}\left(-4A_{ij}(\theta_a^*-\theta_b^*+w_{ij})+\sum_{k\neq i,j}-A_{ik}(\theta_a^*-\theta_b^*+2w_{ik})+\sum_{k\neq i,j}A_{jk}(\theta_b^*-\theta_a^*+2w_{jk})\geq0\right)\\ \nonumber&=\inf_{A\in\mathcal{F}}\mathbb{P}_A\left(\mathcal{N}(0,\frac{\sigma^2}{D_{ii}+D_{jj}+2A_{ij}})\geq\frac{\abs{\theta_a^*-\theta_b^*}}{2}\right)\\ &\gtrsim\min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_a^*-\theta_b^*)^2}}\exp\left(-\frac{(1+\delta)np(\theta_a^*-\theta_b^*)^2}{4\sigma^2}\right)\right\}\label{eq:Gaussian_lower} \end{align} for some $\delta=o(1)$, where (\ref{eq:Gaussian_lower}) comes from
standard Gaussian tail bound and Lemma \ref{lem:A-accurate}. \end{proof} Now we are ready to state the proof of Theorem \ref{thm:Gaussian-minimax}. \begin{proof}[Proof of Theorem \ref{thm:Gaussian-minimax}] We prove the theorem for any $\theta^*\in\Theta_n(\beta, C_0)$. Note that conditional on $A$, the solution of the least squares problem (\ref{eq:Gaussian-ls}) can be written as $$\widehat{\theta}=c\mathds{1}_n+\theta_{r^*}^*+Z,$$ where $\theta_{r^*}^*=(\theta_{r_1^*}^*,...,\theta_{r_n^*}^*)^T$, $Z\sim { \mathcal{N} }(0,\sigma^2\mathcal{L}_A^{\dagger})$ and $c\mathds{1}_n$ is a global shift of the skill parameters. Let $x_{ij}=e_i-e_j$ where $\{e_1,...,e_n\}$ are the standard basis of $\mathbb{R}^n$. Let $\mathcal{F}$ be the event about $A$ when Lemma \ref{lem:Laplace-eigen} holds. Then \begin{align} \nonumber&\mathbb{E}_{(\theta^*,\sigma^2, r^*)}\left[\textsf{K}(\widehat{r}, r^*)\right]=\frac{1}{n}\sum_{1\leq i<j\leq n}\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\text{sign}(\widehat{r}_i-\widehat{r}_j)\text{sign}(r_i^*-r_j^*)<0\right)\\ \nonumber&=\frac{1}{n}\sum_{1\leq i<j\leq n}\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\text{sign}(\widehat{\theta}_i-\widehat{\theta}_j)\text{sign}(r_i^*-r_j^*)>0\right)\\ \nonumber&\leq\frac{1}{n}\sum_{1\leq i<j\leq n}\sup_{A\in\mathcal{F}}\mathbb{P}\left(\mathcal{N}(0,\sigma^2x_{ij}^T\mathcal{L}_A^{\dagger}x_{ij})>\abs{\theta_{r_i^*}^*-\theta_{r_j^*}^*}|A\right)+O(n^{-9})\\ \nonumber&\leq\frac{1}{n}\sum_{1\leq i<j\leq n}\sup_{A\in\mathcal{F}}\min\left\{1,\sqrt{\frac{\sigma^2x_{ij}^T\mathcal{L}_A^{\dagger}x_{ij}}{2\pi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}}\exp\left(-\frac{(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{2\sigma^2x_{ij}^T\mathcal{L}_A^{\dagger}x_{ij}}\right)\right\}+O(n^{-9})\\ &\leq\frac{1}{n}\sum_{1\leq i<j\leq n}\min\left\{1,\sqrt{\frac{\sigma^2(np-2C\sqrt{np\log n})^{-1}}{\pi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}}\exp\left(-\frac{(np-2C\sqrt{np\log 
n})(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{4\sigma^2}\right)\right\}\label{eq:Laplace-spec}\\ \nonumber&\quad+O(n^{-9})\\ \nonumber&\lesssim\frac{1}{n}\sum_{1\leq i<j\leq n}\min\left\{1,\sqrt{\frac{\sigma^2}{np(\theta_{i}^*-\theta_{j}^*)^2}}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{j}^*)^2}{4\sigma^2}\right)\right\}+n^{-9} \end{align} for some $\delta_1^\prime=o(1)$ independent of $\theta^*$, $\sigma^2$ and $r^*$, where (\ref{eq:Laplace-spec}) is due to Lemma \ref{lem:Laplace-eigen}. We first consider the high signal-to-noise ratio regime, where $\frac{np\beta^2}{\sigma^2}>1$. In this scenario, \begin{align*} &\sum_{1\leq i<j\leq n}\min\left\{1,\sqrt{\frac{\sigma^2}{np(\theta_{i}^*-\theta_{j}^*)^2}}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{j}^*)^2}{4\sigma^2}\right)\right\}\\ &\leq\sum_{i=1}^{n-1}\sum_{j=i+1}^n\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{j}^*)^2}{4\sigma^2}\right)\\ &\leq\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{i+1}^*)^2}{4\sigma^2}\right)\sum_{j=i+1}^n\exp\left(-\frac{(1-\delta_1^\prime)np[(\theta_{i}^*-\theta_{j}^*)^2-(\theta_{i}^*-\theta_{i+1}^*)^2]}{4\sigma^2}\right)\\ &\leq\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{i+1}^*)^2}{4\sigma^2}\right)\sum_{j=i+1}^n\exp\left(-\frac{(1-\delta_1^\prime)np(j-i-1)\beta^2}{4\sigma^2}\right)\\ &\lesssim\sum_{i=1}^{n-1}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{i+1}^*)^2}{4\sigma^2}\right) \end{align*} where the last inequality is due to summation of an exponentially decaying series. This gives the exponential rate in high signal-to-noise ratio regime. 
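For completeness, the geometric-series bound invoked in the last step can be spelled out. Writing $c=\frac{(1-\delta_1^\prime)np\beta^2}{4\sigma^2}$, which is bounded below by a positive constant in the high signal-to-noise ratio regime, we have

```latex
\sum_{j=i+1}^{n}\exp\left(-\frac{(1-\delta_1^\prime)np(j-i-1)\beta^2}{4\sigma^2}\right)
=\sum_{m=0}^{n-i-1}e^{-cm}\leq\sum_{m=0}^{\infty}e^{-cm}=\frac{1}{1-e^{-c}}=O(1).
```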
Now, when $\frac{np\beta^2}{\sigma^2}\leq 1$, \begin{align} \nonumber&\sum_{1\leq i<j\leq n}\min\left\{1,\sqrt{\frac{\sigma^2}{np(\theta_{i}^*-\theta_{j}^*)^2}}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_{i}^*-\theta_{j}^*)^2}{4\sigma^2}\right)\right\}\\ \nonumber&\leq\sum_{i=1}^{n-1}\sum_{k\geq1}\sum_{\substack{j>i\\(k-1)\sqrt{\frac{\sigma^2}{np\beta^2}}<j-i\leq k\sqrt{\frac{\sigma^2}{np\beta^2}}}}\min\left\{1,\sqrt{\frac{\sigma^2}{np(\theta_i^*-\theta_j^*)^2}}\exp\left(-\frac{(1-\delta_1^\prime)np(\theta_i^*-\theta_j^*)^2}{4\sigma^2}\right)\right\}\\ \nonumber&\lesssim\sqrt{\frac{\sigma^2}{np\beta^2}}\sum_{i=1}^{n-1}\left(\sum_{k\geq0}\exp\left(-\frac{(1-\delta_1^\prime)k^2}{4}\right)\right)\lesssim n\sqrt{\frac{\sigma^2}{np\beta^2}}\wedge n^2 \end{align} where the last inequality also comes from summing an exponentially decaying series, and $n^2$ is a trivial upper bound. This finishes the proof of the upper bound. Now we look at the lower bound. For any $r^*\in\mathfrak{S}_n$, we have $r^{*(i,j)}\in\mathfrak{S}_n$ defined as in (\ref{eqn:r_star_ij_def}).
Then for any $\theta^*\in\Theta_n(\beta, C_0)$, \begin{align} \nonumber&\inf_{\widehat{r}}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*, \sigma^2, r^*)}\left[\textsf{K}(\widehat{r}, r^*)\right]\\ \nonumber&\geq\inf_{\widehat{r}}\frac{1}{n}\sum_{1\leq i<j\leq n}\frac{1}{n!}\sum_{r^*\in\mathfrak{S}_n}\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\text{sign}(\widehat{r}_i-\widehat{r}_j)\text{sign}(r_i^*-r_j^*)<0\right)\\ \nonumber&=\inf_{\widehat{r}}\frac{1}{n}\sum_{1\leq i<j\leq n}\frac{1}{n!}\sum_{1\leq a<b\leq n}\sum_{r^*:\{r_i^*,r_j^*\}=\{a,b\}}\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\text{sign}(\widehat{r}_i-\widehat{r}_j)\text{sign}(r_i^*-r_j^*)<0\right)\\ \nonumber&\geq \frac{1}{n}\sum_{1\leq i<j\leq n}\frac{2}{n(n-1)}\sum_{1\leq a<b\leq n}\frac{1}{(n-2)!}\sum_{r^*:r_i^*=a, r_j^*=b}\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*,\sigma^2, r^*)}\left(\widehat{r}_i\neq a\right)+\mathbb{P}_{(\theta^*,\sigma^2, r^{*(i,j)})}\left(\widehat{r}_i\neq b\right)}{2}\\ \nonumber&\gtrsim\frac{1}{n}\sum_{1\leq i<j\leq n}\frac{2}{n(n-1)}\sum_{1\leq a<b\leq n}\\ &\quad\quad\quad\quad\quad\quad\frac{1}{(n-2)!}\sum_{r^*:r_i^*=a, r_j^*=b}\min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_a^*-\theta_b^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_a^*-\theta_b^*)^2}{4\sigma^2}\right)\right\}\label{eq:Gaussian-lower-main}\\ \nonumber&=\frac{1}{n}\sum_{1\leq a<b\leq n}\min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_a^*-\theta_b^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_a^*-\theta_b^*)^2}{4\sigma^2}\right)\right\} \end{align} for some $\delta^\prime=o(1)$, where (\ref{eq:Gaussian-lower-main}) comes from Lemma \ref{lem:Gaussian-two-point}. We still consider the high signal-to-noise ratio case first. 
\begin{align} \nonumber&\sum_{1\leq i<j\leq n}\min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_i^*-\theta_j^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_i^*-\theta_j^*)^2}{4\sigma^2}\right)\right\}\\ \nonumber&\geq\sum_{i=1}^{n-1}\sqrt{\frac{\sigma^2}{np(\theta_i^*-\theta_{i+1}^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_i^*-\theta_{i+1}^*)^2}{4\sigma^2}\right)\\ &\gtrsim\sum_{i=1}^{n-1}\exp\left(-\frac{(1+\delta)np(\theta_i^*-\theta_{i+1}^*)^2}{4\sigma^2}\right)\label{eq:Gaussian-absorb} \end{align} where $\delta$ in (\ref{eq:Gaussian-absorb}) can be chosen arbitrarily small when $np\beta^2/\sigma^2>1$, which concludes the exponential lower bound. For the polynomial lower bound when the signal-to-noise ratio is small, \begin{align*} &\sum_{1\leq i<j\leq n}\min\left\{1, \sqrt{\frac{\sigma^2}{np(\theta_i^*-\theta_j^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_i^*-\theta_j^*)^2}{4\sigma^2}\right)\right\}\\ &\gtrsim\sum_{i=1}^n\sum_{\substack{j\neq i\\\abs{j-i}\leq \sqrt{\frac{\sigma^2}{np\beta^2}}}}\min\left\{1,\sqrt{\frac{\sigma^2}{np(\theta_i^*-\theta_j^*)^2}}\exp\left(-\frac{(1+\delta^\prime)np(\theta_i^*-\theta_j^*)^2}{4\sigma^2}\right)\right\}\\ &\gtrsim\sum_{i=1}^nn\wedge \left(\sqrt{\frac{\sigma^2}{np\beta^2}}\right) \end{align*} which concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:BTL-minimax}}\label{sec:pf-BTL} This section proves Theorem \ref{thm:BTL-minimax}. Since the upper bound part of the proof has already been given in Section \ref{sec:analysis}, we only need to establish the lower bound. We begin by establishing a few lemmas.
\begin{lemma}[Central limit theorem, Theorem 2.20 of \cite{ross2007second}]\label{lem:CLT-stein} If $Z\sim { \mathcal{N} }(0,1)$ and $W=\sum_{i=1}^nX_i$ where $X_i$ are independent mean $0$ and $\Var(W)=1$, then $$\sup_t\left|\mathbb{P}(W\leq t)-\mathbb{P}(Z\leq t)\right| \leq 2\sqrt{3\sum_{i=1}^n\left(\mathbb{E}X_i^4\right)^{3/4}}.$$ \end{lemma} \begin{lemma}\label{lem:A-bern-2} Assume $p\geq c_0\frac{\log n}{n}$ for some sufficiently large constant $c_0>0$. Consider any fixed $\{w_{ijk}\}$, $i,j\in[n], k\in\mathbb{K}$, where $\mathbb{K}$ is a discrete set with cardinality at most $n^{c_1}$ for some constant $c_1>0$. Assume $\max_{i,j\in[n], k\in\mathbb{K}}\abs{w_{ijk}}\leq c_2$ and $$p\min_{i\in[n],k\in\mathbb{K}}\sum_{j\in[n]\backslash\{i\}}w_{ijk}^2\geq c_3\log n$$ for some constants $c_2, c_3>0$. Then there exist constants $C_1, C_2>0$, such that for any $i\in[n]$, $$\max_{k\in\mathbb{K}}\sum_{j\in[n]\backslash\{i\}}(A_{ij}-p)w_{ijk}\leq C_1\sqrt{p\log n\max_{k\in\mathbb{K}}\sum_{j\in[n]}w_{ijk}^2}$$ with probability at least $1-C_2n^{-10}$. \end{lemma} \begin{proof} For any constant $C_1^{\prime}>0$, by Bernstein's inequality, we have \begin{align*} &\mathbb{P}\left(\max_{k\in\mathbb{K}}\sum_{j\in[n]\backslash\{i\}}(A_{ij}-p)w_{ijk}> C_1^{\prime}\sqrt{p\log n\max_{k\in\mathbb{K}}\sum_{j\in[n]}w_{ijk}^2}\right)\\ &\leq\abs{\mathbb{K}}\max_{k\in\mathbb{K}}\exp\left(-\frac{C_1^{\prime2}p\log n\max_{k\in\mathbb{K}}\sum_{j\in[n]}w_{ijk}^2}{2p\sum_{j\in[n]\backslash\{i\}}w_{ijk}^2+\frac{2}{3}\max_{i,j\in[n], k\in\mathbb{K}}\abs{w_{ijk}}C_1^{\prime}\sqrt{p\log n\max_{k\in\mathbb{K}}\sum_{j\in[n]}w_{ijk}^2}}\right)\\ &\leq n^{c_1}\exp\left(-\frac{C_1^{\prime2}}{C_2^{\prime}}\log n\right) \end{align*} for some constant $C_2^{\prime}>0$. Thus we can set $C_1^{\prime}$ large enough to make the lemma hold. \end{proof} \begin{lemma}\label{lem:sum-psi-prime} Assume $1\leq C_0=O(1)$ and $0<\beta=o(1)$.
For any constant $\alpha>0$, there exist constants $C_1, C_2>0$ such that for any $\theta\in\Theta_n(\beta, C_0)$, $$C_1\frac{1}{\beta\vee1/n}\leq\inf_{\theta_0\in[\theta_n, \theta_1]}\sum_{i=1}^n\psi^{\prime}(\theta_0-\theta_i)^{\alpha}\leq\sup_{\theta_0\in[\theta_n, \theta_1]}\sum_{i=1}^n\psi^{\prime}(\theta_0-\theta_i)^{\alpha}\leq C_2\frac{1}{\beta\vee1/n}$$ for $n$ large enough. \end{lemma} \begin{proof} Define \begin{equation} R_\theta(x, t_1, t_2)=\{i:t_1\leq\abs{\theta_i-x}< t_2\}.\label{eq:neighbor-set} \end{equation} It is easy to see that there exist constants $C_1^{\prime}, C_2^{\prime}>0$ such that for any $\theta\in\Theta_n(\beta, C_0)$, \begin{equation} \frac{C_1^{\prime}}{\beta\vee1/n}\leq\inf_{x\in[\theta_n, \theta_1]}|R_{\theta}(x, 0, 1)|\label{eq:R-lower} \end{equation} and \begin{equation} \sup_{t\in\mathbb{N}}\sup_{x\in[\theta_n, \theta_1]}|R_{\theta}(x, t, t+1)|\leq\frac{C_2^{\prime}}{\beta\vee1/n}.\label{eq:R-upper} \end{equation} Thus \begin{align*} &\inf_{\theta_0\in[\theta_n, \theta_1]}\sum_{i=1}^n\psi^{\prime}(\theta_0-\theta_i)^{\alpha}\geq\inf_{\theta_0\in[\theta_n, \theta_1]}\sum_{i\in R_\theta(\theta_0, 0,1)}\psi^{\prime}(\theta_0-\theta_i)^{\alpha}\\ &=\inf_{\theta_0\in[\theta_n, \theta_1]}\sum_{i\in R_\theta(\theta_0, 0,1)}\left[\frac{e^{\theta_0-\theta_i}}{\left(1+e^{\theta_0-\theta_i}\right)^2}\right]^{\alpha}\geq\inf_{\theta_0\in[\theta_n, \theta_1]}\sum_{i\in R_\theta(\theta_0, 0,1)}\frac{1}{4^{\alpha}}e^{-\alpha\abs{\theta_0-\theta_i}}\\ &\geq\inf_{\theta_0\in[\theta_n, \theta_1]}\abs{R_\theta(\theta_0, 0,1)}\frac{1}{4^{\alpha}}e^{-\alpha}\geq\frac{C_3^{\prime}}{\beta\vee1/n} \end{align*} for some constant $C_3^\prime>0$.
On the other hand, \begin{align*} &\sup_{\theta_0\in[\theta_n, \theta_1]}\sum_{i=1}^n\psi^{\prime}(\theta_0-\theta_i)^{\alpha}=\sup_{\theta_0\in[\theta_n, \theta_1]}\sum_{t\geq0}\sum_{i\in R_{\theta}(\theta_0, t, t+1)}\psi^{\prime}(\theta_0-\theta_i)^{\alpha}\\ &\leq\sup_{\theta_0\in[\theta_n, \theta_1]}\sum_{t\geq0}\sum_{i\in R_{\theta}(\theta_0, t, t+1)}e^{-\alpha\abs{\theta_0-\theta_i}}\leq\sup_{\theta_0\in[\theta_n, \theta_1]}\sum_{t\geq0}\abs{R_\theta(\theta_0, t,t+1)}e^{-\alpha t}\\ &\leq\frac{C_4^{\prime}}{\beta\vee1/n} \end{align*} for some constant $C_4^{\prime}>0$, which concludes the proof. \end{proof} \begin{lemma}\label{lem:sup-A-u} Assume $p\geq c_0(\beta\vee\frac{1}{n})\log n$ for some sufficiently large constant $c_0>0$ and $1\leq C_0=O(1)$. For any constant $\alpha>0$, there exist constants $C_1, C_2, C_3>0$ such that for any $r\in\mathfrak{S}_n, i\neq j\in[n]$, and $\theta\in\Theta_n(\beta, C_0)$, \begin{equation} \inf_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}\geq C_1\frac{p}{\beta\vee1/n}\label{eq:inf-A-u} \end{equation} and \begin{equation} \sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}\leq C_2\frac{p}{\beta\vee1/n}\label{eq:sup-A-u} \end{equation} with probability at least $1-O(n^{-10})$ for $n$ large enough. \end{lemma} \begin{proof} We remark that $p\geq c_0(\beta\vee\frac{1}{n})\log n$ necessarily implies $0<\beta=o(1)$. We only give the proof of (\ref{eq:sup-A-u}). The inf part (\ref{eq:inf-A-u}) can be proved similarly. 
For (\ref{eq:sup-A-u}), \begin{align} \nonumber&\sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}\\ &\leq\frac{C_1^{\prime}p}{\beta\vee1/n}+\sup_{u\in[0,1]}\sum_{k\neq i,j}(A_{ik}-p)\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}\label{eq:use-sum-psi-prime} \end{align} for some constant $C_1^{\prime}>0$, where (\ref{eq:use-sum-psi-prime}) uses Lemma \ref{lem:sum-psi-prime}. To bound the second term in (\ref{eq:use-sum-psi-prime}), we use a standard discretization technique. Let $u_a=\frac{a}{n}, a\in[n]$. Then for any $u\in[0,1]$, let $a(u)=\arg\min_{a\in[n]}\abs{u-u_a}$. We have $\abs{u-u_{a(u)}}\leq1/n$. Observe that for any $u\in[0,1]$, \begin{align} \nonumber&\abs{\sum_{k\neq i,j}(A_{ik}-p)\left(\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}-\psi^{\prime}(u_{a(u)}\theta_{r_i}+(1-u_{a(u)})\theta_{r_j}-\theta_{r_k})^{\alpha}\right)}\\ &\leq\alpha\sup_{\xi\in[u\wedge u_{a(u)},u\vee u_{a(u)}]}\sum_{k\neq i,j}\psi^{\prime}(\xi\theta_{r_i}+(1-\xi)\theta_{r_j}-\theta_{r_k})^{\alpha}\abs{u-u_{a(u)}}\abs{\theta_{r_i}-\theta_{r_j}}\label{eq:psi-pp-bound}\\ &\leq\frac{C_2^\prime n\beta}{n}\frac{1}{\beta\vee1/n}\leq C_2^{\prime}\frac{p}{\beta\vee1/n}\label{eq:psi-diff-bound} \end{align} for some constant $C_2^{\prime}>0$, where (\ref{eq:psi-pp-bound}) is due to the mean value theorem and $\abs{\psi^{\prime\prime}(x)}\leq\psi^{\prime}(x)$ while (\ref{eq:psi-diff-bound}) comes from Lemma \ref{lem:sum-psi-prime}.
Therefore, \begin{align} \nonumber&\sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^{\prime}(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})^{\alpha}\\ &\leq\frac{C_3^{\prime}p}{\beta\vee1/n}+\max_{a\in[n]}\sum_{k\neq i,j}(A_{ik}-p)\psi^{\prime}(u_a\theta_{r_i}+(1-u_a)\theta_{r_j}-\theta_{r_k})^{\alpha}\label{eq:log-dominate}\\ &\leq\frac{C_3^{\prime}p}{\beta\vee1/n}+C_4^\prime\sqrt{p\log n\max_{a\in[n]}\sum_{k\neq i,j}\psi^{\prime}(u_a\theta_{r_i}+(1-u_a)\theta_{r_j}-\theta_{r_k})^{2\alpha}}\label{eq:simple-concentration}\\ &\leq\frac{C_5^\prime p}{\beta\vee1/n}\label{eq:bound-sum-psi-pp} \end{align} for some constants $C_3^\prime, C_4^\prime, C_5^\prime>0$ with probability at least $1-O(n^{-10})$, where (\ref{eq:log-dominate}) is due to (\ref{eq:psi-diff-bound}) and $\frac{p}{\beta\vee1/n}\gtrsim\log n\gg1$. (\ref{eq:simple-concentration}) comes from Lemma \ref{lem:sum-psi-prime}, $\abs{\psi^{\prime}(x)}\leq1/4$ and Lemma \ref{lem:A-bern-2}. (\ref{eq:bound-sum-psi-pp}) is a consequence of Lemma \ref{lem:sum-psi-prime} and $\log n\lesssim\frac{p}{\beta\vee1/n}$, which concludes the proof. \end{proof} To proceed with our proof for the lower bound, we define \begin{align} G_{i,j,k,\theta,r}(u)=\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})^u(1+e^{\theta_{r_j}-\theta_{r_k}})^{1-u}}{1+e^{u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k}}}.\label{eqn:G_def} \end{align} This term is a key ingredient in the exponent of the rate. We first derive some properties of this term. \begin{lemma}\label{lem:G-prop} Assume $1\leq C_0=O(1)$ and $0<\beta=o(1)$. 
For any constant $C>0$, there exist constants $C_1, C_2, C_3>0$ such that for any $\theta\in\Theta_n(\beta, C_0)$, any $r\in\mathfrak{S}_n$ and any $i\neq j\in[n]$ such that $\abs{\theta_{r_i}-\theta_{r_j}}\leq C$, the following hold for $n$ large enough, \begin{equation} \sup_{u\in[0,1]}\sup_{k\neq i,j}G_{i,j,k,\theta,r}(u)\leq C_1,\label{eq:G-max} \end{equation} \begin{equation} \sup_{u\in[0,1]}\sum_{k\neq i,j}G_{i,j,k,\theta,r}(u)+G_{i,j,k,\theta, r}(1-u)\leq \sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2},\label{eq:G-sum} \end{equation} \begin{equation} \sup_{u\in[0,1]}\sum_{k\neq i,j}G_{i,j,k,\theta,r}(u)^2+G_{i,j,k,\theta, r}(1-u)^2\leq C_2\frac{(\theta_{r_i}-\theta_{r_j})^4}{\beta\vee1/n},\label{eq:G-square-sum} \end{equation} \begin{equation} C_3\frac{\abs{\theta_{r_i}-\theta_{r_j}}^2}{\beta\vee1/n}\leq\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\leq C_2\frac{\abs{\theta_{r_i}-\theta_{r_j}}^2}{\beta\vee1/n}.\label{eq:G-sum-range} \end{equation} \end{lemma} \begin{proof} We first look at (\ref{eq:G-max}). Note that \begin{align*} &G_{i,j,k,\theta,r}(u)=\log\frac{\psi(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})}{\psi(\theta_{r_i}-\theta_{r_k})^u\psi(\theta_{r_j}-\theta_{r_k})^{1-u}}\\ &\leq\log\frac{\psi(u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k})}{\psi(\theta_{r_i}-\theta_{r_k})\wedge\psi(\theta_{r_j}-\theta_{r_k})}\\ &=\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})}{e^{(1-u)(\theta_{r_i}-\theta_{r_j})}+e^{\theta_{r_i}-\theta_{r_k}}}\vee\log\frac{(1+e^{\theta_{r_j}-\theta_{r_k}})}{e^{-u(\theta_{r_i}-\theta_{r_j})}+e^{\theta_{r_j}-\theta_{r_k}}}\leq C \end{align*} where the last inequality comes from $\abs{\theta_{r_i}-\theta_{r_j}}\leq C$. Now we look at (\ref{eq:G-sum}). 
\begin{align*} &\sup_{u\in[0,1]}\sum_{k\neq i,j}G_{i,j,k,\theta,r}(u)+G_{i,j,k,\theta, r}(1-u)\\ &=\sup_{u\in[0,1]}\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{1+e^{u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k}}+e^{(1-u)\theta_{r_i}+u\theta_{r_j}-\theta_{r_k}}+e^{\theta_{r_i}+\theta_{r_j}-2\theta_{r_k}}}\\ &\leq\sup_{u\in[0,1]}\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{1+2e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}+e^{\theta_{r_i}+\theta_{r_j}-2\theta_{r_k}}}=\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}. \end{align*} To see (\ref{eq:G-square-sum}), we first note that \begin{align*} &G_{i,j,k,\theta,r}(u)\\ &=u\log(1+e^{\theta_{r_i}-\theta_{r_k}})+(1-u)\log(1+e^{\theta_{r_j}-\theta_{r_k}})-\log(1+e^{u\theta_{r_i}+(1-u)\theta_{r_j}-\theta_{r_k}})\\ &\geq0 \end{align*} by Jensen's inequality. Therefore, \begin{align} \nonumber&\sup_{u\in[0,1]}\sum_{k\neq i,j}G_{i,j,k,\theta,r}(u)^2+G_{i,j,k,\theta, r}(1-u)^2\leq\sup_{u\in[0,1]}\sum_{k\neq i,j}(G_{i,j,k,\theta,r}(u)+G_{i,j,k,\theta, r}(1-u))^2\\ &\leq\sum_{k\neq i,j}\left[\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\right]^2\label{eq:use-G-max} \end{align} where (\ref{eq:use-G-max}) can be derived similarly as in the proof of (\ref{eq:G-sum}). To upper bound (\ref{eq:use-G-max}), recall the definition of $R_{\theta}(\cdot,\cdot,\cdot)$ in (\ref{eq:neighbor-set}). 
We have that for any $k$ such that $r_k\in R_{\theta}(\frac{\theta_{r_i}+\theta_{r_j}}{2}, t, t+1)$, \begin{align} \nonumber&\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}=\log\left(\frac{\cosh(\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k})+\cosh\frac{\theta_{r_i}-\theta_{r_j}}{2}}{\cosh(\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k})+1}\right)\\ &\leq\frac{\cosh\frac{\theta_{r_i}-\theta_{r_j}}{2}-1}{\cosh(\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k})+1}\leq\frac{C_1^{\prime}(\theta_{r_i}-\theta_{r_j})^2}{e^{t}}\label{eq:bound-cosh} \end{align} for some constant $C_1^\prime>0$, where the first inequality uses $\log(1+x)\leq x$. The second inequality in (\ref{eq:bound-cosh}) can be seen from $\cosh x\leq1+C_2^{\prime}x^2$ for some constant $C_2^{\prime}>0$ when $\abs{x}\leq C/2$, together with the fact that $t\leq |\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}|\leq t+1$. Therefore, using (\ref{eq:R-upper}) and (\ref{eq:bound-cosh}), \begin{align*} &\sum_{k\neq i,j}\left[\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\right]^2\\ &\leq\sum_{t\geq0}\sum_{k:r_k\in R_{\theta}(\frac{\theta_{r_i}+\theta_{r_j}}{2}, t, t+1)}\frac{C_1^{\prime2}(\theta_{r_i}-\theta_{r_j})^4}{e^{2t}}\leq\frac{C_3^{\prime}(\theta_{r_i}-\theta_{r_j})^4}{\beta\vee1/n} \end{align*} for some constant $C_3^\prime>0$. The upper bound of (\ref{eq:G-sum-range}) can be proved similarly. Finally, we turn to the lower bound of (\ref{eq:G-sum-range}). Note that we also have $\cosh x\geq1+C_4^{\prime}x^2$ for some constant $C_4^{\prime}>0$ when $\abs{x}\leq C/2$.
Therefore, when $r_k\in R_{\theta}(\frac{\theta_{r_i}+\theta_{r_j}}{2}, 0, 1)$, \begin{align} \nonumber&\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}=\log\left(1+\frac{\cosh\frac{\theta_{r_i}-\theta_{r_j}}{2}-1}{\cosh(\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k})+1}\right)\\ &\geq\frac{\cosh\frac{\theta_{r_i}-\theta_{r_j}}{2} - 1}{\cosh\frac{\theta_{r_i}-\theta_{r_j}}{2}+\cosh(\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k})}\geq C_5^{\prime}\abs{\theta_{r_i}-\theta_{r_j}}^2\label{eq:log-lower} \end{align} for some constant $C_5^{\prime}>0$, where the first inequality is due to the fact that $\log(1+x) \geq x/(1+x)$ for any $x>-1$. Thus, \begin{align} \nonumber&\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\geq\sum_{\substack{k:r_k\in R_{\theta}(\frac{\theta_{r_i}+\theta_{r_j}}{2}, 0, 1)\\ k\neq i,j}}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\\ &\geq\left(\abs{R_{\theta}(\frac{\theta_{r_i}+\theta_{r_j}}{2}, 0, 1)}-2\right)C_5^{\prime}\abs{\theta_{r_i}-\theta_{r_j}}^2\geq\frac{C_6^{\prime}\abs{\theta_{r_i}-\theta_{r_j}}^2}{\beta\vee1/n}\label{eq:sum-log-lower} \end{align} for some constant $C_6^{\prime}>0$, where (\ref{eq:sum-log-lower}) is a result of (\ref{eq:R-lower}) and (\ref{eq:log-lower}). \end{proof} \begin{lemma}\label{lem:sup-A-G} Assume $\frac{p}{\log n(\beta\vee1/n)}\to\infty$ and $1\leq C_0=O(1)$.
For any constant $C_1>0$, there exist $\delta=o(1)$ and a constant $C_2>0$ such that for any $\theta\in\Theta_n(\beta, C_0)$, any $r\in\mathfrak{S}_n$ and any $i\neq j\in[n]$ such that $\abs{\theta_{r_i}-\theta_{r_j}}\leq C_1$, the following holds with probability at least $1-O(n^{-10})$ for $n$ large enough, $$\sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}G_{i,j,k,\theta,r}(u)+A_{jk}G_{i,j,k,\theta,r}(1-u)\leq(1+\delta)p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}.$$ \end{lemma} \begin{proof} First we have \begin{equation} \psi(a-c)\wedge\psi(b-c)\leq\frac{1}{a-b}\log\frac{1+e^{a-c}}{1+e^{b-c}}\leq\psi(a-c)\vee\psi(b-c),\label{eq:G-interesting-prop} \end{equation} for any $a,b,c\in\mathbb{R}$. To see why (\ref{eq:G-interesting-prop}) holds, let us study the function $f(\delta) = \log(1+\exp(x+\delta)) - \log(1+\exp(x)) - \delta \exp(x)/(1+\exp(x))$ for any $x$. Note that $f'(\delta) = \exp(x+\delta)/(1+\exp(x+\delta)) - \exp(x)/(1+\exp(x))$ is positive when $\delta >0$ and negative when $\delta<0$. Since $f(0)=0$, we have $f(\delta) \geq 0$. As a result, we have $\exp(x)/(1+\exp(x)) \leq \delta^{-1} \log((1+\exp(x+\delta))/(1+\exp(x)))$ when $\delta >0$, and the direction of the inequality is reversed when $\delta <0$. WLOG, we assume $a-b>0$. Then the first inequality of (\ref{eq:G-interesting-prop}) is proved by taking $x = b-c$ and $\delta = a-b$, and the second one is proved by taking $x = a-c$ and $\delta =-(a-b)$. Recall the definition of $G_{i,j,k,\theta,r}$ in (\ref{eqn:G_def}).
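As a side remark, (\ref{eq:G-interesting-prop}) can equivalently be seen from the integral representation $$\frac{1}{a-b}\log\frac{1+e^{a-c}}{1+e^{b-c}}=\frac{1}{a-b}\int_{b}^{a}\psi(t-c)\,dt,$$ which follows from $\frac{d}{dt}\log(1+e^{t-c})=\psi(t-c)$: the left-hand side is an average of the increasing function $\psi(\cdot-c)$ over the interval between $a$ and $b$, and hence lies between $\psi(a-c)\wedge\psi(b-c)$ and $\psi(a-c)\vee\psi(b-c)$.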
Then \begin{align} \nonumber&\abs{G_{i,j,k,\theta,r}^\prime(u)}=\abs{\theta_{r_i}-\theta_{r_j}}\abs{\frac{1}{\theta_{r_i}-\theta_{r_j}}\log\frac{1+e^{\theta_{r_i}-\theta_{r_k}}}{1+e^{\theta_{r_j}-\theta_{r_k}}}-\frac{e^{u(\theta_{r_i}-\theta_{r_j})+\theta_{r_j}-\theta_{r_k}}}{1+e^{u(\theta_{r_i}-\theta_{r_j})+\theta_{r_j}-\theta_{r_k}}}}\\ &\leq\abs{\theta_{r_i}-\theta_{r_j}}\abs{\psi(\theta_{r_i}-\theta_{r_k})-\psi(\theta_{r_j}-\theta_{r_k})}\label{eq:G-prime-bound-1}\\ &\leq\abs{\theta_{r_i}-\theta_{r_j}}^2.\label{eq:G-prime-bound-2} \end{align} Here (\ref{eq:G-prime-bound-1}) is due to the observation that both terms are in the interval $[\psi(\theta_{r_i}-\theta_{r_k})\wedge\psi(\theta_{r_j}-\theta_{r_k}),\psi(\theta_{r_i}-\theta_{r_k})\vee\psi(\theta_{r_j}-\theta_{r_k})]$ for any $u\in[0,1]$, where the first term is due to (\ref{eq:G-interesting-prop}) and the second term is due to the monotonicity of $\exp(x)/(1+\exp(x))$. Hence the difference between these two terms is bounded by $\abs{\psi(\theta_{r_i}-\theta_{r_k})-\psi(\theta_{r_j}-\theta_{r_k})}$ in absolute value. (\ref{eq:G-prime-bound-2}) is due to $\psi^\prime(x)\leq1/4$. Following a standard discretization argument, let $u_a=\frac{a}{n}, a=1,...,n$. Then for any $u\in[0,1]$, let $a(u)=\arg\min_{a\in[n]}\abs{u-u_a}$. We have $\abs{u-u_{a(u)}}\leq1/n$. Thus, \begin{align} \nonumber&\abs{\sum_{k\neq i,j}(A_{ik}-p)(G_{i,j,k,\theta,r}(u)-G_{i,j,k,\theta,r}(u_{a(u)}))+(A_{jk}-p)(G_{i,j,k,\theta,r}(1-u)-G_{i,j,k,\theta,r}(1-u_{a(u)}))}\\ &\leq2\abs{\theta_{r_i}-\theta_{r_j}}^2(n-2)\abs{u-u_{a(u)}}\leq2\abs{\theta_{r_i}-\theta_{r_j}}^2\label{eq:G-discrete}.
\end{align} Then \begin{align} \nonumber&\sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}G_{i,j,k,\theta,r}(u)+A_{jk}G_{i,j,k,\theta,r}(1-u)\\ &\leq p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\label{eq:A-G-sum-bound-1}\\ \nonumber&\quad\quad+\sup_{u\in[0,1]}\sum_{k\neq i,j}(A_{ik}-p)G_{i,j,k,\theta,r}(u)+(A_{jk}-p)G_{i,j,k,\theta,r}(1-u)\\ \nonumber&\leq p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\\ &\quad\quad+2\abs{\theta_{r_i}-\theta_{r_j}}^2+\max_{a\in[n]}\sum_{k\neq i,j}(A_{ik}-p)G_{i,j,k,\theta,r}(u_{a})+(A_{jk}-p)G_{i,j,k,\theta,r}(1-u_{a})\label{eq:A-G-sum-bound-2}\\ \nonumber&\leq p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}+2\abs{\theta_{r_i}-\theta_{r_j}}^2\\ &\quad\quad+C_1^{\prime}\sqrt{p\log n\max_{a\in[n]}\sum_{k\neq i,j}G_{i,j,k,\theta,r}(u_a)^2+G_{i,j,k,\theta,r}(1-u_a)^2}\label{eq:A-G-sum-bound-3}\\ &\leq p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}+2\abs{\theta_{r_i}-\theta_{r_j}}^2+C_2^\prime\abs{\theta_{r_i}-\theta_{r_j}}^2\sqrt{\frac{p\log n}{\beta\vee1/n}}\label{eq:A-G-sum-bound-4}\\ &=(1+\delta)p\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\label{eq:A-G-sum-bound-5} \end{align} with probability at least $1-O(n^{-10})$ for some constants $C_1^\prime, C_2^\prime>0$ and $\delta=o(1)$. (\ref{eq:A-G-sum-bound-1}) is due to Lemma \ref{lem:G-prop}. (\ref{eq:A-G-sum-bound-2}) comes from (\ref{eq:G-discrete}). 
(\ref{eq:A-G-sum-bound-3}) and (\ref{eq:A-G-sum-bound-4}) are a consequence of Lemma \ref{lem:A-bern-2} and Lemma \ref{lem:G-prop}. (\ref{eq:A-G-sum-bound-5}) follows from Lemma \ref{lem:G-prop} and $$p\frac{1}{\abs{\theta_{r_i}-\theta_{r_j}}^2}\sum_{k\neq i,j}\log\frac{(1+e^{\theta_{r_i}-\theta_{r_k}})(1+e^{\theta_{r_j}-\theta_{r_k}})}{\left(1+e^{\frac{\theta_{r_i}+\theta_{r_j}}{2}-\theta_{r_k}}\right)^2}\gtrsim \frac{p}{\beta\vee1/n}\gg\sqrt{\frac{p\log n}{\beta\vee1/n}}\gg1,$$ which concludes the proof. \end{proof} \begin{lemma}\label{lem:BTL-two-point} Assume $\frac{p}{\log n(\beta\vee1/n)}\to\infty$ and $1\leq C_0=O(1)$. For any constant $C>0$, there exist constants $C_1,C_2>0$ and $\delta=o(1)$ such that for any $\theta^*\in\Theta_n(\beta, C_0)$, any $r^*\in\mathfrak{S}_n$ and $i\neq j\in[n]$ such that $\abs{\theta_{r_i^*}^*-\theta_{r_j^*}^*}\leq C$, we have \begin{align*} &\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*, r^*)}\left(\widehat{r}\neq r^*\right)+\mathbb{P}_{(\theta^*, r^{*(i,j)})}\left(\widehat{r}\neq r^{*(i,j)}\right)}{2}\\ &\geq C_1\exp\left(-\sqrt{\frac{C_2Lp(\theta_{r_i^*}^*-\theta_{r_j^*}^*)^2}{\beta\vee1/n}}-(1+\delta)2Lp\sum_{k\neq i,j}G_{i,j,k,\theta^*, r^*}(1/2)\right) \end{align*} for $n$ large enough. Here $r^{*(i,j)}$ is defined as in (\ref{eqn:r_star_ij_def}). \end{lemma} \begin{proof} By the Neyman--Pearson lemma, the optimal procedure is the likelihood ratio test: \begin{align*} &\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*, r^*)}\left(\widehat{r}\neq r^*\right)+\mathbb{P}_{(\theta^*, r^{*(i,j)})}\left(\widehat{r}\neq r^{*(i,j)}\right)}{2}\\ &=\frac{\mathbb{P}_{(\theta^*, r^*)}\left(\ell_n(\theta^*, r^*)\geq\ell_n(\theta^*, r^{*(i,j)})\right)+\mathbb{P}_{(\theta^*, r^{*(i,j)})}\left(\ell_n(\theta^*, r^*)\leq\ell_n(\theta^*, r^{*(i,j)})\right)}{2}. \end{align*} We only need to lower bound $\mathbb{P}_{(\theta^*, r^*)}\left(\ell_n(\theta^*, r^*)\geq\ell_n(\theta^*, r^{*(i,j)})\right)$ and the other term can be bounded similarly.
WLOG, assume $i<j$ and $r_i^*=a<r_j^*=b$. Let $$Z_{kl}=y_{ikl}\log\frac{\psi(\theta_b^*-\theta_{r_k^*}^*)}{\psi(\theta_a^*-\theta_{r_k^*}^*)}+(1-y_{ikl})\log\frac{1-\psi(\theta_b^*-\theta_{r_k^*}^*)}{1-\psi(\theta_a^*-\theta_{r_k^*}^*)}, k\neq i,j,$$ $$\bar{Z}_{kl}=y_{jkl}\log\frac{\psi(\theta_a^*-\theta_{r_k^*}^*)}{\psi(\theta_b^*-\theta_{r_k^*}^*)}+(1-y_{jkl})\log\frac{1-\psi(\theta_a^*-\theta_{r_k^*}^*)}{1-\psi(\theta_b^*-\theta_{r_k^*}^*)}, k\neq i,j$$ and $$Z_{0l}=y_{ijl}\log\frac{\psi(\theta_b^*-\theta_{a}^*)}{\psi(\theta_a^*-\theta_{b}^*)}+(1-y_{ijl})\log\frac{1-\psi(\theta_b^*-\theta_{a}^*)}{1-\psi(\theta_a^*-\theta_{b}^*)}.$$ To simplify notation, we write $\mathbb{P}_A(\cdot)$ for $\mathbb{P}_{(\theta^*, r^*)}(\cdot|A)$ and $\mathbb{E}_{A}[\cdot]$ for $\mathbb{E}_{(\theta^*, r^*)}[\cdot|A]$. Then \begin{align*} &\mathbb{P}_{A}\left(\ell_n(\theta^*, r^*)\geq\ell_n(\theta^*, r^{*(i,j)})\right)=\mathbb{P}_{A}\left(\sum_{l=1}^L\left(A_{ij}Z_{0l}+\sum_{k\neq i,j}A_{ik}Z_{kl}+A_{jk}\bar{Z}_{kl}\right)\geq0\right). \end{align*} Let $\mu_{i^\prime j^\prime}=\psi(\theta_{r_{i^\prime}^*}^*-\theta_{r_{j^\prime}^*}^*)$ for any $i^\prime\neq j^\prime$.
Define \begin{align*} &\nu_{r^*}(u)=\log\mathbb{E}_{A}\left\{\exp\left[u\left(A_{ij}Z_{01}+\sum_{k\neq i,j}A_{ik}Z_{k1}+A_{jk}\bar{Z}_{k1}\right)\right]\right\}\\ &=A_{ij}\nu_{0,r^*}(u)+\sum_{k\neq i,j}A_{ik}\nu_{k,r^*}(u)+\sum_{k\neq i,j}A_{jk}\bar{\nu}_{k,r^*}(u) \end{align*} where $$\nu_{0,r^*}(u)=\log\left[\mu_{ij}^u(1-\mu_{ij})^{1-u}+\mu_{ij}^{1-u}(1-\mu_{ij})^u\right]=-\log\frac{1+e^{\theta_a^*-\theta_b^*}}{e^{u(\theta_a^*-\theta_b^*)}+e^{(1-u)(\theta_a^*-\theta_b^*)}},$$ $$\nu_{k,r^*}(u)=\log\left[\mu_{jk}^u\mu_{ik}^{1-u}+(1-\mu_{jk})^u(1-\mu_{ik})^{1-u}\right]=-G_{i,j,k,\theta^*,r^*}(1-u),$$ $$\bar{\nu}_{k,r^*}(u)=\log\left[\mu_{ik}^u\mu_{jk}^{1-u}+(1-\mu_{ik})^u(1-\mu_{jk})^{1-u}\right]=-G_{i,j,k,\theta^*,r^*}(u).$$ Here $\nu_{r^*}(u)$ is the conditional cumulant generating function of $A_{ij}Z_{01}+\sum_{k\neq i,j}A_{ik}Z_{k1}+A_{jk}\bar{Z}_{k1}$, and $\nu_{0,r^*}(u), \nu_{k,r^*}(u),\bar{\nu}_{k,r^*}(u)$ are the cumulant generating functions of $Z_{01}, Z_{k1}, \bar{Z}_{k1}$ respectively. Define $$u_{r^*}^*=\arg\min_{u\geq0}\nu_{r^*}(u).$$ Since cumulant generating functions are convex and $\nu_{r^*}(0) = \nu_{r^*}(1)=0$, it is easy to see that $u_{r^*}^*\in(0,1)$ and depends on $A$.
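For completeness, the endpoint values can be verified directly: $$\nu_{0,r^*}(1)=\log\left[\mu_{ij}+(1-\mu_{ij})\right]=0,\quad \nu_{k,r^*}(1)=\log\left[\mu_{jk}+(1-\mu_{jk})\right]=0,\quad \bar{\nu}_{k,r^*}(1)=0,$$ so that $\nu_{r^*}(0)=\nu_{r^*}(1)=0$. Moreover, $\nu_{r^*}^\prime(0)$ is the conditional mean of $A_{ij}Z_{01}+\sum_{k\neq i,j}A_{ik}Z_{k1}+A_{jk}\bar{Z}_{k1}$, which is minus a nonnegative weighted sum of Kullback--Leibler divergences; whenever this mean is strictly negative, convexity forces the minimizer $u_{r^*}^*$ into $(0,1)$.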
Following the change-of-measure argument in the proof of Lemma 8.4 and Lemma 9.3 of \cite{chen2020partial}, we have \begin{align} \nonumber&\mathbb{P}_{A}\left(\sum_{l=1}^L\left(A_{ij}Z_{0l}+\sum_{k\neq i,j}A_{ik}Z_{kl}+A_{jk}\bar{Z}_{kl}\right)\geq0\right)\\ &\geq\exp\left(-u_{r^*}^*T+L\nu_{r^*}(u_{r^*}^*)\right)\mathbb{Q}_A\left(0\leq\sum_{l=1}^L\left(A_{ij}Z_{0l}+\sum_{k\neq i,j}A_{ik}Z_{kl}+A_{jk}\bar{Z}_{kl}\right)\leq T\right)\label{eq:change-measure} \end{align} for any $T$, which will be specified later. Here $\mathbb{Q}_A$ is a measure under which $Z_{0l}, Z_{kl}, \bar{Z}_{kl}, l\in[L], k\neq i,j$ are all independent given $A$ and follow $$\mathbb{Q}_A(Z_{0l}=s)=e^{u_{r^*}^*s-\nu_{0,r^*}(u_{r^*}^*)}\mathbb{P}_A\left(Z_{0l}=s\right),$$ $$\mathbb{Q}_A(Z_{kl}=s)=e^{u_{r^*}^*s-\nu_{k,r^*}(u_{r^*}^*)}\mathbb{P}_A\left(Z_{kl}=s\right),k\neq i,j,k\in[n],$$ $$\mathbb{Q}_A(\bar{Z}_{kl}=s)=e^{u_{r^*}^*s-\bar{\nu}_{k,r^*}(u_{r^*}^*)}\mathbb{P}_A\left(\bar{Z}_{kl}=s\right), k\neq i,j, k\in[n].$$ Furthermore, by the definition of $u_{r^*}^*$, the expectation of $A_{ij}Z_{0l}+\sum_{k\neq i,j}A_{ik}Z_{kl}+A_{jk}\bar{Z}_{kl}$ under $\mathbb{Q}_A$ is 0.
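The last claim is the standard exponential tilting identity: since $\nu_{r^*}$ is the conditional cumulant generating function, differentiating it at the tilting parameter returns the mean under the tilted measure, so $$\mathbb{E}_{\mathbb{Q}_A}\left[A_{ij}Z_{01}+\sum_{k\neq i,j}A_{ik}Z_{k1}+A_{jk}\bar{Z}_{k1}\right]=\nu_{r^*}^\prime(u_{r^*}^*)=0,$$ where the last equality holds because $u_{r^*}^*$ is an interior minimizer of the differentiable convex function $\nu_{r^*}$.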
We can compute the second and fourth moments under $\mathbb{Q}_A$, denoted by $\Var_{\mathbb{Q}_A}(\cdot)$ and $\kappa_{\mathbb{Q}_A}(\cdot)$ respectively: \begin{align} \nonumber&\Var_{\mathbb{Q}_A}(Z_{0l})=\nu_{0,r^*}^{\prime\prime}(u_{r^*}^*)=4\mu_{ij}(1-\mu_{ij})\frac{(\theta_a^*-\theta_b^*)^2e^{2u_{r^*}^*(\theta_a^*-\theta_b^*)}}{((1-\mu_{ij})e^{2u_{r^*}^*(\theta_a^*-\theta_b^*)}+\mu_{ij})^2}\\ &=4(\theta_a^*-\theta_b^*)^2\psi^{\prime}\left((1-2u_{r^*}^*)(\theta_a^*-\theta_b^*)\right),\label{eq:Q-2nd-1} \end{align} \begin{align} \nonumber&\Var_{\mathbb{Q}_A}(Z_{kl})=\nu_{k,r^*}^{\prime\prime}(u_{r^*}^*)=\mu_{ik}(1-\mu_{ik})\frac{(\theta_a^*-\theta_b^*)^2e^{u_{r^*}^*(\theta_a^*-\theta_b^*)}}{((1-\mu_{ik})e^{u_{r^*}^*(\theta_a^*-\theta_b^*)}+\mu_{ik})^2}\\ &=(\theta_a^*-\theta_b^*)^2\psi^{\prime}\left((1-u_{r^*}^*)\theta_a^*+u_{r^*}^*\theta_b^*-\theta_{r_k^*}^*\right), k\neq i,j, k\in[n],\label{eq:Q-2nd-2} \end{align} \begin{align} \nonumber&\Var_{\mathbb{Q}_A}(\bar{Z}_{kl})=\bar{\nu}_{k,r^*}^{\prime\prime}(u_{r^*}^*)=\mu_{jk}(1-\mu_{jk})\frac{(\theta_a^*-\theta_b^*)^2e^{-u_{r^*}^*(\theta_a^*-\theta_b^*)}}{((1-\mu_{jk})e^{-u_{r^*}^*(\theta_a^*-\theta_b^*)}+\mu_{jk})^2}\\ &=(\theta_a^*-\theta_b^*)^2\psi^{\prime}\left(u_{r^*}^*\theta_a^*+(1-u_{r^*}^*)\theta_b^*-\theta_{r_k^*}^*\right),k\neq i,j, k\in[n]\label{eq:Q-2nd-3} \end{align} and \begin{align} \nonumber&\kappa_{\mathbb{Q}_A}(Z_{0l})=\mathbb{Q}_A\left((Z_{0l}-\mathbb{Q}_A(Z_{0l}))^4\right)=\nu_{0,r^*}^{\prime\prime\prime\prime}(u_{r^*}^*)+3\nu_{0,r^*}^{\prime\prime}(u_{r^*}^*)^2\\ \nonumber&\leq16\mu_{ij}(1-\mu_{ij})\frac{(\theta_a^*-\theta_b^*)^4e^{2u_{r^*}^*(\theta_a^*-\theta_b^*)}}{[(1-\mu_{ij})e^{2u_{r^*}^*(\theta_a^*-\theta_b^*)}+\mu_{ij}]^2}+3\nu_{0,r^*}^{\prime\prime}(u_{r^*}^*)^2\\ &=4(\theta_a^*-\theta_b^*)^2\nu_{0,r^*}^{\prime\prime}(u_{r^*}^*)+3\nu_{0,r^*}^{\prime\prime}(u_{r^*}^*)^2\leq7(\theta_a^*-\theta_b^*)^4\psi^{\prime}\left((1-2u_{r^*}^*)(\theta_a^*-\theta_b^*)\right),\label{eq:Q-4th-1} \end{align}
\begin{align} \nonumber&\kappa_{\mathbb{Q}_A}(Z_{kl})=\mathbb{Q}_A\left((Z_{kl}-\mathbb{Q}_A(Z_{kl}))^4\right)\leq(\theta_a^*-\theta_b^*)^2\nu_{k,r^*}^{\prime\prime}(u_{r^*}^*)+3\nu_{k,r^*}^{\prime\prime}(u_{r^*}^*)^2\\ &\leq4(\theta_a^*-\theta_b^*)^4\psi^{\prime}\left((1-u_{r^*}^*)\theta_a^*+u_{r^*}^*\theta_b^*-\theta_{r_k^*}^*\right),k\neq i,j, k\in[n]\label{eq:Q-4th-2} \end{align} \begin{align} \nonumber&\kappa_{\mathbb{Q}_A}(\bar{Z}_{kl})=\mathbb{Q}_A\left((\bar{Z}_{kl}-\mathbb{Q}_A(\bar{Z}_{kl}))^4\right)\leq(\theta_a^*-\theta_b^*)^2\bar{\nu}_{k,r^*}^{\prime\prime}(u_{r^*}^*)+3\bar{\nu}_{k,r^*}^{\prime\prime}(u_{r^*}^*)^2\\ &\leq4(\theta_a^*-\theta_b^*)^4\psi^{\prime}\left(u_{r^*}^*\theta_a^*+(1-u_{r^*}^*)\theta_b^*-\theta_{r_k^*}^*\right),k\neq i,j, k\in[n].\label{eq:Q-4th-3} \end{align} Let $\mathcal{F}_1$ be the event on which the following holds: $$ \inf_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^\prime((1-u)\theta_a^*+u\theta_b^*-\theta_{r_k^*}^*)+A_{jk}\psi^\prime(u\theta_a^*+(1-u)\theta_b^*-\theta_{r_k^*}^*)\geq C_1^\prime\frac{p}{\beta\vee1/n},$$ $$ \sup_{u\in[0,1]}\sum_{k\neq i,j}A_{ik}\psi^\prime((1-u)\theta_a^*+u\theta_b^*-\theta_{r_k^*}^*)^{3/4}+A_{jk}\psi^\prime(u\theta_a^*+(1-u)\theta_b^*-\theta_{r_k^*}^*)^{3/4}\leq C_2^\prime\frac{p}{\beta\vee1/n} $$ for some constants $C_1^\prime, C_2^\prime>0$. By Lemma \ref{lem:sup-A-u}, we can choose $C_1^\prime, C_2^\prime$ such that $\mathcal{F}_1$ holds with probability at least $1-O(n^{-10})$. We then choose $T$ as \begin{align*} &T=\sqrt{L\left(A_{ij}\Var_{\mathbb{Q}_A}(Z_{01})+\sum_{k\neq i,j}A_{ik}\Var_{\mathbb{Q}_A}(Z_{k1})+A_{jk}\Var_{\mathbb{Q}_A}(\bar{Z}_{k1})\right)}\\ &\leq\sqrt{C_3^{\prime}L\frac{p(\theta_a^*-\theta_b^*)^2}{\beta\vee1/n}} \end{align*} on $\mathcal{F}_1$ for some constant $C_3^\prime>0$, using (\ref{eq:Q-2nd-1})-(\ref{eq:Q-2nd-3}). With this choice of $T$, the $\mathbb{Q}_A$-probability in (\ref{eq:change-measure}) can be lower bounded by some constant $C_4^\prime>0$ on $\mathcal{F}_1$.
This can be seen by bounding the fourth-moment approximation error using Lemma \ref{lem:CLT-stein}: \begin{align} \nonumber&\sqrt{L\frac{A_{ij}\kappa_{\mathbb{Q}_A}(Z_{01})^{3/4}+\sum_{k\neq i,j}A_{ik}\kappa_{\mathbb{Q}_A}(Z_{kl})^{3/4}+A_{jk}\kappa_{\mathbb{Q}_A}(\bar{Z}_{kl})^{3/4}}{\left(LA_{ij}\Var_{\mathbb{Q}_A}(Z_{01})+L\sum_{k\neq i,j}A_{ik}\Var_{\mathbb{Q}_A}(Z_{k1})+A_{jk}\Var_{\mathbb{Q}_A}(\bar{Z}_{k1})\right)^{3/2}}}\\ &\leq\sqrt{C_5^\prime L\frac{\sum_{k\neq i,j}A_{ik}\psi^{\prime}\left((1-u_{r^*}^*)\theta_a^*+u_{r^*}^*\theta_b^*-\theta_{r_k^*}^*\right)^{3/4}+A_{jk}\psi^{\prime}\left(u_{r^*}^*\theta_a^*+(1-u_{r^*}^*)\theta_b^*-\theta_{r_k^*}^*\right)^{3/4}}{\left(L\sum_{k\neq i,j}A_{ik}\psi^\prime((1-u_{r^*}^*)\theta_a^*+u_{r^*}^*\theta_b^*-\theta_{r_k^*}^*)+A_{jk}\psi^\prime(u_{r^*}^*\theta_a^*+(1-u_{r^*}^*)\theta_b^*-\theta_{r_k^*}^*)\right)^{3/2}}}\label{eq:Q-clt-bound-1}\\ &\leq C_6^{\prime}\left(L\frac{p}{\beta\vee1/n}\right)^{-1/4}\label{eq:Q-clt-bound-2} \end{align} on $\mathcal{F}_1$ for some constants $C_5^\prime, C_6^\prime>0$, and this bound tends to $0$. (\ref{eq:Q-clt-bound-1}) is due to (\ref{eq:Q-2nd-1})-(\ref{eq:Q-2nd-3}) and (\ref{eq:Q-4th-1})-(\ref{eq:Q-4th-3}). (\ref{eq:Q-clt-bound-2}) is a consequence of Lemma \ref{lem:sup-A-u}. Now we turn to $L\nu_{r^*}(u_{r^*}^*)$. Let $\mathcal{F}_2$ be the event on which the following holds: \begin{align*} &\sup_{u\in[0,1]}\sum_{k\neq i,j}(A_{ik}G_{i,j,k,\theta^*,r^*}(1-u)+A_{jk}G_{i,j,k,\theta^*,r^*}(u))\\ &\leq(1+\delta_1^\prime)2p\sum_{k\neq i,j}G_{i,j,k,\theta^*, r^*}(1/2). \end{align*} By Lemma \ref{lem:sup-A-G}, there exists $\delta_1^\prime=o(1)$ independent of $i,j,\theta^*,r^*$ such that $\mathcal{F}_2$ holds with probability at least $1-O(n^{-10})$.
Then, on this event, \begin{align} \nonumber&\nu_{r^*}(u_{r^*}^*)\geq-\sup_{u\in[0,1]}\left(-A_{ij}\nu_{0,r^*}(u)-\sum_{k\neq i,j}(A_{ik}\nu_{k,r^*}(u)+A_{jk}\bar{\nu}_{k,r^*}(u))\right)\\ \nonumber&\geq-A_{ij}\sup_{u\in[0,1]}\log\frac{1+e^{\theta_a^*-\theta_b^*}}{e^{u(\theta_a^*-\theta_b^*)}+e^{(1-u)(\theta_a^*-\theta_b^*)}}-\sup_{u\in[0,1]}\sum_{k\neq i,j}(A_{ik}G_{i,j,k,\theta^*,r^*}(1-u)+A_{jk}G_{i,j,k,\theta^*,r^*}(u))\\ &\geq-C_7^\prime\abs{\theta_a^*-\theta_b^*}^2-(1+\delta_1^\prime)2p\sum_{k\neq i,j}G_{i,j,k,\theta^*, r^*}(1/2)\label{eq:ij-bound}\\ &\geq-(1+\delta_2^\prime)2p\sum_{k\neq i,j}G_{i,j,k,\theta^*, r^*}(1/2)\label{eq:ij-absorb} \end{align} for some $\delta_2^\prime=o(1)$. (\ref{eq:ij-bound}) comes from $$\log\frac{1+e^{\theta_a^*-\theta_b^*}}{e^{u(\theta_a^*-\theta_b^*)}+e^{(1-u)(\theta_a^*-\theta_b^*)}}\leq\log\cosh\frac{\theta_a^*-\theta_b^*}{2}\leq\cosh\frac{\theta_a^*-\theta_b^*}{2} - 1\leq C_7^\prime\abs{\theta_a^*-\theta_b^*}^2$$ for some constant $C_7^\prime>0$ when $\abs{\theta_a^*-\theta_b^*}\leq C$. (\ref{eq:ij-absorb}) is because of Lemma \ref{lem:G-prop} and $\frac{p}{\beta\vee1/n}\gg1$. Note that $\delta_2^\prime$ can also be chosen independently of $i,j,\theta^*,r^*$. Thus, we can further lower bound (\ref{eq:change-measure}) on $\mathcal{F}_1\cap\mathcal{F}_2$: \begin{align} \nonumber&\mathbb{P}_{A}\left(\sum_{l=1}^L\left(A_{ij}Z_{0l}+\sum_{k\neq i,j}A_{ik}Z_{kl}+A_{jk}\bar{Z}_{kl}\right)\geq0\right)\\ \nonumber&\geq C_4^\prime\exp\left(-\sqrt{C_3^\prime Lp\frac{(\theta_a^*-\theta_b^*)^2}{\beta\vee1/n}}+L\nu_{r^*}(u_{r^*}^*)\right)\\ \nonumber&\geq C_4^\prime\exp\left(-\sqrt{C_3^\prime Lp\frac{(\theta_a^*-\theta_b^*)^2}{\beta\vee1/n}}-L\sup_{u\in[0,1]}\left(-A_{ij}\nu_{0,r^*}(u)-\sum_{k\neq i,j}(A_{ik}\nu_{k,r^*}(u)+A_{jk}\bar{\nu}_{k,r^*}(u))\right)\right)\\ \nonumber&\geq C_4^\prime\exp\left(-\sqrt{C_3^\prime Lp\frac{(\theta_a^*-\theta_b^*)^2}{\beta\vee1/n}}-(1+\delta_2^\prime)2Lp\sum_{k\neq i,j}G_{i,j,k,\theta^*, r^*}(1/2)\right).
\end{align} This finishes the proof. \end{proof} Now we are ready to prove the lower bound part of Theorem \ref{thm:BTL-minimax}. \begin{proof}[Proof of Theorem \ref{thm:BTL-minimax} (lower bound)] We remark that $p\geq c_0(\beta\vee\frac{1}{n})\log n$ necessarily implies $0<\beta=o(1)$. It also implies $n\wedge \beta^{-1}\gg 1$ and $\frac{\beta\vee1/n}{Lp}=o(1)$, which will be useful in the proof. Recall the definition of $r^{*(i,j)}$ in (\ref{eqn:r_star_ij_def}) for any $r^{*}\in\mathfrak{S}_n$ and $i,j\in[n]$ such that $i\neq j$. For any $\theta^*\in\Theta_n(\beta, C_0)$, we have \begin{align} \nonumber&\inf_{\widehat{r}}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\left[\textsf{K}(\widehat{r},r)\right]\\ \nonumber&\geq\inf_{\widehat{r}}\frac{1}{n!}\sum_{r^*\in\mathfrak{S}_n}\frac{1}{n}\sum_{1\leq i< j\leq n}\mathbb{P}_{(\theta^*,r^*)}\left(\widehat{r}_i<\widehat{r}_j,r_i^*>r_j^*\right)+\mathbb{P}_{(\theta^*,r^*)}\left(\widehat{r}_i>\widehat{r}_j,r_i^*<r_j^*\right)\\ \nonumber&=\inf_{\widehat{r}}\frac{1}{n}\sum_{1\leq i<j\leq n}\frac{1}{n!}\sum_{r^*\in\mathfrak{S}_n}\mathbb{P}_{(\theta^*,r^*)}\left(\widehat{r}_i<\widehat{r}_j,r_i^*>r_j^*\right)+\mathbb{P}_{(\theta^*,r^*)}\left(\widehat{r}_i>\widehat{r}_j,r_i^*<r_j^*\right)\\ \nonumber&\geq\frac{1}{n}\sum_{1\leq a<b\leq n}\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\\ \nonumber&\quad\quad\quad\quad\quad\quad\quad\frac{1}{(n-2)!}\sum_{r^*:r_i^*=a,r_j^*=b}\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*,r^*)}(\widehat{r}\neq r^*)+\mathbb{P}_{(\theta^*,r^{*(i,j)})}(\widehat{r}\neq r^{*(i,j)})}{2}\\ \nonumber&\geq\frac{1}{2n}\sum_{a=1}^n\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\sum_{b\in[n]\backslash\{a\}:\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1})}{Lp\beta^2}}}\\ \nonumber&\quad\quad\quad\quad\quad\quad\quad\frac{1}{(n-2)!}\sum_{r^*:r_i^*=a,r_j^*=b}\inf_{\widehat{r}}\frac{\mathbb{P}_{(\theta^*,r^*)}(\widehat{r}\neq r^*)+\mathbb{P}_{(\theta^*,r^{*(i,j)})}(\widehat{r}\neq r^{*(i,j)})}{2}, \end{align}
where $C_1^\prime>0$ is a constant. Note that for any $a,b\in[n]$ such that $\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1} )}{Lp\beta^2}}$, we have $\abs{\theta^*_a - \theta^*_b}\leq C_0 \br{\beta \vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1} )}{Lp}}}=o(1)$. Then by Lemma \ref{lem:BTL-two-point}, we have \begin{align} \nonumber&\inf_{\widehat{r}}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\left[\textsf{K}(\widehat{r},r)\right]\geq\frac{1}{2n}\sum_{a=1}^n\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\sum_{b\in[n]\backslash\{a\}:\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1} )}{Lp\beta^2}}}\\ &\quad\quad C_3^\prime\exp\left(-\sqrt{\frac{C_2^\prime Lp(\theta_a^*-\theta_b^*)^2}{\beta\vee n^{-1} }}-(1+\delta_1^\prime)Lp\sum_{k\neq a, b}\log\frac{(1+e^{\theta_a^*-\theta_k^*})(1+e^{\theta_{b}^*-\theta_k^*})}{\left(1+e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}\right)^2}\right),\label{eq:use-BTL-two-point} \end{align} for some constants $C_2^\prime,C_3^\prime>0$ and some $\delta_1^\prime=o(1)$. We are going to simplify the second term in the exponent.
We have \begin{align} \sum_{k\neq a, b}\log\frac{(1+e^{\theta_a^*-\theta_k^*})(1+e^{\theta_{b}^*-\theta_k^*})}{\left(1+e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}\right)^2}&\leq\sum_{k\neq a, b}\frac{e^{\theta_a^*-\theta_k^*}+e^{\theta_b^*-\theta_k^*}-2e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}}{\left(1+e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}\right)^2}\label{eq:log-ineq}\\ \nonumber&=2\sum_{k\neq a, b}\frac{(\cosh\frac{\theta_a^*-\theta_{b}^*}{2}-1)e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}}{\left(1+e^{\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*}\right)^2}\\ &=(1+\delta_2^\prime)\frac{(\theta_a^*-\theta_{b}^*)^2}{4}\sum_{k\neq a, b}\psi^{\prime}\left(\frac{\theta_a^*+\theta_{b}^*}{2}-\theta_k^*\right)\label{eq:cosh-expand}\\ &=(1+\delta_3^\prime)\frac{(\theta_a^*-\theta_{b}^*)^2}{4}\sum_{k\neq a}\psi^{\prime}(\theta_a^*-\theta_k^*)\label{eq:a-a+1-close} \end{align} for some $\delta_2^\prime=o(1), \delta_3^\prime=o(1)$. Here (\ref{eq:log-ineq}) uses $\log(1+x)\leq x$. In (\ref{eq:cosh-expand}) we use $\theta_a^*-\theta_{b}^*=o(1)$ and $\cosh x-1=(1+O(x))\frac{x^2}{2}$ when $x=o(1)$. From Lemma \ref{lem:sum-psi-prime} we know $\sum_{k\neq a}\psi^{\prime}(\theta_a^*-\theta_k^*)\asymp n\wedge \beta^{-1}\gg1$. Then using this and the fact $\sup_x\abs{\frac{\psi^\prime(x+ t )}{\psi^\prime(x)}-1}=O( t )$ when $ t =o(1)$, we obtain (\ref{eq:a-a+1-close}). 
Using $\sum_{k\neq a}\psi^{\prime}(\theta_a^*-\theta_k^*)\asymp n\wedge \beta^{-1}$ again and the fact $\abs{\theta^*_a - \theta^*_b} \geq \beta$, there exists a constant $C_4^\prime>0$ such that $$\frac{\sqrt{\frac{C_2^\prime Lp(\theta_a^*-\theta_{b}^*)^2}{\beta\vee n^{-1} }}}{\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4}\sum_{k\neq a}\psi^{\prime}(\theta_a^*-\theta_k^*)}\leq \frac{C_4^\prime}{\sqrt{\frac{Lp\beta^2}{\beta\vee n^{-1} }}}.$$ Therefore, for an arbitrarily small constant $\delta>0$, there exists a constant $C_5^\prime>0$ such that (\ref{eq:use-BTL-two-point}) can be lower bounded by \begin{align} \nonumber&C_3^{\prime}\exp\left(-\sqrt{\frac{C_2^\prime Lp(\theta_a^*-\theta_{b}^*)^2}{\beta\vee n^{-1} }}-(1+\delta_3^\prime)\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4}\sum_{k\neq a}\psi^{\prime}(\theta_a^*-\theta_k^*)\right)\\ \nonumber&\geq C_3^\prime\exp\left(-\left(1+\delta_3^\prime+\frac{C_4^\prime}{\sqrt{\frac{Lp\beta^2}{\beta\vee n^{-1} }}}\right)\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4V_a(\theta^*)}\right)\\ &\geq C_5^\prime\exp\left(-(1+\delta)\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4V_a(\theta^*)}\right).\label{eqn:noname7} \end{align} So far, we obtain \begin{align} \nonumber&\inf_{\widehat{r}}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\left[\textsf{K}(\widehat{r},r)\right]\\ &\geq\frac{1}{2n}\sum_{a=1}^n\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\sum_{b\in[n]\backslash\{a\}:\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1} )}{Lp\beta^2}}} C_5^\prime\exp\left(-(1+\delta)\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4V_a(\theta^*)}\right)\label{eqn:noname8}\\ \nonumber &\geq\frac{1}{2n}\sum_{a=1}^n\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\sum_{b=a+1} C_5^\prime\exp\left(-(1+\delta)\frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4V_a(\theta^*)}\right) \\ \nonumber &\geq \frac{C_5^\prime}{2n}\sum_{a=1}^n \exp\left(-(1+\delta)\frac{Lp(\theta_a^*-\theta_{a+1}^*)^2}{4V_a(\theta^*)}\right). \end{align} Hence, we obtain the exponential rate.
In the following we are going to derive the polynomial rate for the regime $\frac{Lp\beta^2}{\beta\vee n^{-1}}\leq 1$. Note that for any $a,b\in[n]$ such that $\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1})}{Lp\beta^2}}$, we have \begin{align*} \frac{Lp(\theta_a^*-\theta_{b}^*)^2}{4V_a(\theta^*)} \lesssim \frac{Lp\beta^2}{n \wedge \beta^{-1}} \br{1\vee \frac{\beta \vee n^{-1}}{Lp\beta^2}} \lesssim 1. \end{align*} Then from (\ref{eqn:noname8}), there exist constants $C_6^\prime,C_7^\prime>0$ such that \begin{align*} \inf_{\widehat{r}}\sup_{r^*\in\mathfrak{S}_n}\mathbb{E}_{(\theta^*,r^*)}\left[\textsf{K}(\widehat{r},r)\right]&\geq\frac{1}{2n}\sum_{a=1}^n\frac{2}{n(n-1)}\sum_{1\leq i<j\leq n}\sum_{b\in[n]\backslash\{a\}:\abs{a-b}\leq 1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1})}{Lp\beta^2}}} C_6^\prime\\ &\geq \frac{C_6^\prime}{2} \br{1\vee \sqrt{\frac{C_1^\prime(\beta\vee n^{-1})}{Lp\beta^2}}} \\ & \geq C_7^\prime \left(n\wedge\sqrt{\frac{\beta\vee n^{-1}}{Lp\beta^2}}\right), \end{align*} where the last inequality is due to $\frac{Lp\beta^2}{\beta\vee n^{-1}}\leq 1$ and the fact that the loss is at most $n$. \end{proof} \subsection{Proof of Theorem \ref{thm:alg-1}}\label{sec:pf-alg-1} The following two lemmas are needed for the proof of Theorem \ref{thm:alg-1}. Recall that $\bar{y}_{ij}^{(1)}=\frac{1}{L_1}\sum_{l=1}^{L_1}y_{ijl}, i\neq j\in[n]$. \begin{lemma}\label{lem:ave-concentration} There exists a constant $C_1>0$ such that for any $\theta^*\in\Theta_n(\beta, C_0)$ and $r^*\in\mathfrak{S}_n$, $$\max_{i\in[n], j\in[n], i\neq j}\abs{\bar{y}_{ij}^{(1)}-\psi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)}\leq C_1\sqrt{\frac{\log n}{L_1}}$$ holds with probability at least $1-O(n^{-10})$. \end{lemma} \begin{proof} This follows directly from Hoeffding's inequality and a standard union bound argument.
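For concreteness (with constants not optimized): for each fixed pair $i\neq j$, since $y_{ij1},\dots,y_{ijL_1}$ are i.i.d., take values in $[0,1]$ and have mean $\psi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)$, Hoeffding's inequality gives $$\mathbb{P}\left(\abs{\bar{y}_{ij}^{(1)}-\psi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)}\geq C_1\sqrt{\frac{\log n}{L_1}}\right)\leq 2\exp\left(-2C_1^2\log n\right)=2n^{-2C_1^2},$$ and a union bound over the at most $n^2$ ordered pairs bounds the overall failure probability by $2n^{2-2C_1^2}$, which is $O(n^{-10})$ once $C_1\geq\sqrt{6}$.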
\end{proof} \begin{lemma}\label{cor:ave-con-cor} For $L_1$ such that $\frac{L_1}{\log n}\to\infty$ and a constant $M\geq1$, there exist $0<\delta_0=o(1)$ and $0<\delta_1=o(1)$ such that for any $\theta^*\in\Theta_n(\beta, C_0)$, any $r^*\in\mathfrak{S}_n$, $$\max_{i\in[n], j\in[n], i\neq j}\abs{\bar{y}_{ij}^{(1)}-\psi(\theta_{r_i^*}^*-\theta_{r_j^*}^*)}\leq \delta_0$$ and $$\bigcap_{i=1}^n\left\{\underline{\mathcal{E}_{1,i}}\subset\mathcal{E}_{1,i}\subset\overline{\mathcal{E}_{1,i}}\right\}$$ hold with probability at least $1-O(n^{-10})$, where \begin{eqnarray*} \mathcal{E}_{1,i} &=& \left\{j\in[n]: \bar{y}_{ij}^{(1)}\leq\psi(-2M)\right\}, \\ \underline{\mathcal{E}_{1,i}} &=& \left\{j\in[n]: \theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M+\delta_1\right\}, \\ \overline{\mathcal{E}_{1,i}} &=& \left\{j\in[n]: \theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M-\delta_1\right\}. \end{eqnarray*} \end{lemma} \begin{proof} This is a direct consequence of Lemma \ref{lem:ave-concentration} and $M=O(1)$. \end{proof} Now we are ready to prove Theorem \ref{thm:alg-1}. \begin{proof}[Proof of Theorem \ref{thm:alg-1}] Let $\mathcal{F}^{(0)}$ be the event on which Lemma \ref{cor:ave-con-cor} holds. We will work on this high-probability event throughout the proof. Also, we will assume the regime $n\beta\to\infty$. The case $\beta\lesssim1/n$ is trivial since we only have one league $S_1=[n]$ if $M$ is chosen to be a large enough constant. To start the exposition, we define a series of quantities iteratively for all $k\in[K-1]$, with the base case $\underline{S_0}=\overline{S_0}=S_0^\prime=\widetilde{S}_0=\emptyset, \underline{u^{(0)}}=\overline{u^{(0)}}=0$.
Let $$\underline{t_i^{(k)}}=\abs{\left\{j\in[n]\backslash\widetilde{S}_{k-1}:j\in\underline{\mathcal{E}_{1,i}}\right\}},$$ $$ \overline{t_i^{(k)}}=\abs{\left\{j\in[n]\backslash\widetilde{S}_{k-1}:j\in\overline{\mathcal{E}_{1,i}}\right\}},$$ \begin{align} & \underline{S_k}=\left\{i\in[n]\backslash\widetilde{S}_{k-1}:\left(1+\frac{0.11}{C_0^2}\right)p\overline{t_i^{(k)}}\leq h\right\},\label{eqn:def_underline_S_k} \\ & \overline{S_k}=\left\{i\in[n]\backslash\widetilde{S}_{k-1}:\left(1-\frac{0.11}{C_0^2}\right)p\underline{t_i^{(k)}}\leq h\right\}, \label{eqn:def_overline_S_k} \end{align} $$\overline{u^{(k)}}=\max\left\{r_i^*:i\in[n]\backslash\widetilde{S}_{k-1}, \overline{t_i^{(k)}}\leq\frac{M}{\left(1-\frac{0.12}{C_0^2}\right)\beta}\right\},$$ $$\underline{u^{(k)}}=\max\left\{r_i^*:i\in[n]\backslash\widetilde{S}_{k-1}, \overline{t_i^{(k)}}\leq\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}\right\},$$ $$w_i^{(k)\prime}=\sum_{j\in\underline{S_k}\cap\overline{\mathcal{E}_{1,i}}}A_{ij}\indc{j\in\mathcal{E}_{1,i}},$$ $$S_k^\prime = \left\{i\in[n]\backslash\widetilde{S}_{k-1}:w_i^{(k)\prime}\leq h\right\}, $$ $$\widetilde{S}_{k}=\uplus_{m=1}^{k}S_m^\prime.$$ We make several remarks about these definitions. The above definitions essentially construct another partition $S_1^\prime, S_2^\prime, ...$ using $w_i^{(k)\prime}$, compared to Algorithm \ref{alg:partition}, which uses $w_i^{(k)}$. The relationship between $S_k$ and $S_k^\prime$ will be made clear during the exposition. In fact, they will be equal with high probability. We should keep in mind that the partition using $w_i^{(k)\prime}$ is not a bona fide one, since its definition uses $\overline{\mathcal{E}_{1,i}}$ and $\underline{S_k}$, which involve the knowledge of $\theta^*$. However, it can still be used in the theoretical analysis. Our strategy is to show that certain properties hold for the partitions $S_k^\prime$; then $S_k=S_k^\prime$ with high probability, and $S_k$ thus inherits those properties.
We start with some simple but crucial facts which will act as building blocks in the proof. \begin{itemize} \item $\overline{t_i^{(k)}}$ and $\underline{t_i^{(k)}}$ have the following monotonicity property: for any $i,j\in[n]\backslash\widetilde{S}_{k-1}$ such that $r_i^*\leq r_j^*$, \begin{equation} \overline{t_i^{(k)}}\leq\overline{t_j^{(k)}}, \underline{t_i^{(k)}}\leq\underline{t_j^{(k)}}.\label{eq:t-mono-k} \end{equation} This is direct from the definition. \item For any $i\in[n]\backslash\widetilde{S}_{k-1}$, \begin{equation} 0\leq \overline{t_i^{(k)}}-\underline{t_i^{(k)}}\leq\frac{2\delta_1}{\beta}+1,\label{eq:t-upper-lower-diff-k} \end{equation} which comes from $\overline{t_i^{(k)}}-\underline{t_i^{(k)}}=\abs{\left\{j\in[n]\backslash\widetilde{S}_{k-1}:2M-\delta_1\leq\theta_{r_j^*}^*-\theta_{r_i^*}^*<2M+\delta_1\right\}}.$ \item We have \begin{equation} \left\{i\in[n]\backslash\widetilde{S}_{k-1}:r_i^*\leq\underline{u^{(k)}}\right\}=\underline{S_k}\subset\overline{S_k}\subset\left\{i\in[n]\backslash\widetilde{S}_{k-1}:r_i^*\leq\overline{u^{(k)}}\right\}.\label{eq:S-u-relation-k} \end{equation} Here $\underline{S_k}\subset\overline{S_k}$ is due to monotonicity (\ref{eq:t-mono-k}) and $\underline{t_i^{(k)}}\leq\overline{t_i^{(k)}}$ by definition. Recall $h=pM/\beta$. Using (\ref{eq:t-upper-lower-diff-k}), for any $i\in \overline{S_k}$, we have $\overline{t_i^{(k)}} \leq \underline{t_i^{(k)}} + \frac{2\delta_1}{\beta}+1 \leq \frac{h}{\left(1-\frac{0.11}{C_0^2}\right)p}+ \frac{2\delta_1}{\beta}+1 \leq \frac{M}{\left(1-\frac{0.12}{C_0^2}\right)\beta}$. Hence, we have $\overline{S_k}\subset\left\{i\in[n]\backslash\widetilde{S}_{k-1}:r_i^*\leq\overline{u^{(k)}}\right\}$. \item $\underline{t_i^{(k)}}$, $\overline{t_i^{(k)}}$, $\underline{S_k}$, $\overline{S_k}, \underline{u^{(k)}}, \overline{u^{(k)}}$ are measurable with respect to the $\sigma$-algebra generated by $\widetilde{S}_{k-1}$. This is direct from the definition.
\end{itemize} Now, we will prove the following statements by induction on $k$: \begin{itemize} \item With probability at least $1-O(kn^{-10})$, \begin{equation} \underline{S_{k^\prime}}\subset S_{k^\prime}\subset\overline{S_{k^\prime}} \label{eq:u-contain-S-k} \end{equation} for all $0\leq k^\prime\leq k$. \item With probability at least $1-O(kn^{-10})$, \begin{equation} \abs{\underline{S_{k^\prime}}}\geq\left(\frac{1.7}{C_0}+\frac{1}{1+\frac{0.11}{C_0^2}}\right)\frac{M}{\beta}\label{eq:S-k-card-lower} \end{equation} for all $1\leq k^\prime\leq k$ and $\abs{S_0}=0$. \item With probability at least $1-O(kn^{-10})$, \begin{equation} \abs{\overline{S_{k^\prime}}\backslash\underline{S_{k^\prime}}}\leq\overline{u^{(k^\prime)}}-\underline{u^{(k^\prime)}}\leq\frac{0.29M}{C_0\beta}\label{eq:S-upper-lower-diff-k} \end{equation} for all $0\leq k^\prime\leq k$. \item With probability at least $1-O(kn^{-10})$, \begin{equation} \abs{\overline{S_{k^\prime}}}\leq\left(2+\frac{0.29}{C_0}+\frac{1}{1-\frac{0.12}{C_0^2}}\right)\frac{M}{\beta}\label{eq:S-k-card-upper} \end{equation} for all $0\leq k^\prime\leq k$. \item With probability at least $1-O(kn^{-10})$, \begin{equation} S_{k^\prime}=S_{k^\prime}^\prime\label{eq:S-S-prime-equal-k} \end{equation} for all $0\leq k^\prime\leq k$. \end{itemize} Now, suppose (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) hold up to $k-1$; this is trivially true for $k=1$ by the base case. In the following, we are going to establish (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) for $k$ one by one. ~\\ \emph{(Establishment of (\ref{eq:u-contain-S-k})).} Recall that we assume $\mathcal{F}^{(0)}$ holds. On the intersection of all high probability events before $k$, we have $\widetilde{S}_{k-1}=S_1\cup \ldots\cup S_{k-1}$.
We sandwich $w_i^{(k)}$ by $$\underline{w_i^{(k)}}=\sum_{j\in[n]\backslash\widetilde{S}_{k-1}}A_{ij}\indc{j\in\underline{\mathcal{E}_{1,i}}}\leq w_i^{(k)}\leq\sum_{j\in[n]\backslash\widetilde{S}_{k-1}}A_{ij}\indc{j\in\overline{\mathcal{E}_{1,i}}}=\overline{w_i^{(k)}}.$$ Recall the definition of $S_k$ in Algorithm \ref{alg:partition}. Then we have $ S_k = \left\{i\in[n]\backslash\widetilde{S}_{k-1}: w_i^{(k)}\leq h\right\}$. Hence, $ \left\{i\in[n]\backslash\widetilde{S}_{k-1}: \overline{w_i^{(k)}}\leq h\right\} \subset S_k\subset \left\{i\in[n]\backslash\widetilde{S}_{k-1}: \underline{w_i^{(k)}}\leq h\right\}$. To prove (\ref{eq:u-contain-S-k}), by the definitions in (\ref{eqn:def_underline_S_k}) and (\ref{eqn:def_overline_S_k}), we only need to show \begin{align*} &\left\{i\in[n]\backslash\widetilde{S}_{k-1}: \underline{w_i^{(k)}}\leq h\right\} \subset \left\{i\in[n]\backslash\widetilde{S}_{k-1}:\left(1-\frac{0.11}{C_0^2}\right)p\underline{t_i^{(k)}}\leq h\right\},\\ &\left\{i\in[n]\backslash\widetilde{S}_{k-1}:\left(1+\frac{0.11}{C_0^2}\right)p\overline{t_i^{(k)}}\leq h\right\} \subset \left\{i\in[n]\backslash\widetilde{S}_{k-1}: \overline{w_i^{(k)}}\leq h\right\}, \end{align*} a sufficient condition of which is the following event: \begin{align*} &\mathcal{F}^{(k)}=\left\{\forall i\in[n]\backslash\widetilde{S}_{k-1} \text{ such that }p\underline{t_i^{(k)}}\leq \frac{h}{2} : \underline{w_i^{(k)}}\leq h\right\}\\ &\quad\quad\bigcap\left\{\forall i\in[n]\backslash\widetilde{S}_{k-1} \text{ such that }p\underline{t_i^{(k)}}> \frac{h}{2} : \left(1-\frac{0.11}{C_0^2}\right)p\underline{t_i^{(k)}} \leq \underline{w_i^{(k)}}\right\} \\ &\quad\quad\bigcap\left\{\forall i\in[n]\backslash\widetilde{S}_{k-1} \text{ such that }p\overline{t_i^{(k)}}\leq \frac{h}{2} : \overline{w_i^{(k)}}\leq h\right\}\\ &\quad\quad\bigcap\left\{\forall i\in[n]\backslash\widetilde{S}_{k-1} \text{ such that }p\overline{t_i^{(k)}}> \frac{h}{2} : 
\overline{w_i^{(k)}}\leq\left(1+\frac{0.11}{C_0^2}\right)p\overline{t_i^{(k)}}\right\}. \end{align*} Hence to prove (\ref{eq:u-contain-S-k}), we only need to analyze $\pbr{\mathcal{F}^{(k)}}$. Note that for any $j\in[n]\backslash\widetilde{S}_{k-1}$ we have $r^*_j>\underline{u^{(k-1)}}$ according to the definition of $\underline{S_{k-1}}$ in (\ref{eq:S-u-relation-k}). Thus $$\underline{w_i^{(k)}}=\sum_{\substack{j\in[n]\backslash\widetilde{S}_{k-1}\\r^*_j>\underline{u^{(k-1)}}}}A_{ij}\indc{\theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M+\delta_1},$$ $$\overline{w_i^{(k)}}=\sum_{\substack{j\in[n]\backslash\widetilde{S}_{k-1}\\r^*_j>\underline{u^{(k-1)}}}}A_{ij}\indc{\theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M-\delta_1}.$$ On the other hand, recall that $w_i^{(k-1)\prime}=\sum_{j\in \underline{S_{k-1}}\cap\overline{\mathcal{E}_{1,i}}}A_{ij}\indc{j\in\mathcal{E}_{1,i}}$ which only involves $A_{ij}$ such that $r^*_j\leq \underline{u^{(k-1)}}$ due to (\ref{eq:S-u-relation-k}). By (\ref{eq:S-u-relation-k}) and the induction hypothesis of (\ref{eq:u-contain-S-k}) we further know $\underline{u^{(1)}}\leq ...\leq \underline{u^{(k-1)}}$. As a result, $w_i^{(1)\prime},...,w_i^{(k-1)\prime}$ are independent of $\underline{w_i^{(k)}}, \overline{w_i^{(k)}}$. Since $\widetilde{S}_{k-1}$ is determined by $w_i^{(1)\prime},...,w_i^{(k-1)\prime}$, it is also independent of $\underline{w_i^{(k)}}, \overline{w_i^{(k)}}$. Therefore, conditional on $\widetilde{S}_{k-1}$, we have $$\underline{w_i^{(k)}}|\widetilde{S}_{k-1}\sim\text{Binomial}(\underline{t_i^{(k)}}, p),$$ $$\overline{w_i^{(k)}}|\widetilde{S}_{k-1}\sim\text{Binomial}(\overline{t_i^{(k)}}, p).$$ Recall that $C_0\geq 1$ is a constant and $h=pM/\beta \gg \log n$ since $p/(\beta \log n)\rightarrow\infty$ by assumption.
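For concreteness, the form of Bernstein's inequality invoked below can be recorded as follows (a sketch; $c>0$ denotes a constant depending only on $C_0$): for $W\sim\text{Binomial}(t,p)$ and any $x>0$,

```latex
\pbr{\abs{W - tp} \geq x} \leq 2\exp\left(-\frac{x^2/2}{tp + x/3}\right).
```

Taking $x=\frac{0.11}{C_0^2}tp$ when $tp>h/2$, and $x=h/2$ when $tp\leq h/2$, the right-hand side is at most $2\exp(-ch)$ in both cases; since $h\gg\log n$, a union bound over the at most $n$ indices $i$ and the four defining events of $\mathcal{F}^{(k)}$ costs only a polynomial factor in $n$.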
By Bernstein inequality for the Binomial distributions together with a union bound argument, we have $\pbr{\mathcal{F}^{(k)}|\widetilde{S}_{k-1}} \geq 1-O(n^{-10}).$ Since this holds for all $\widetilde{S}_{k-1}$, we have \begin{align*} \pbr{\mathcal{F}^{(k)}} \geq 1-O(n^{-10}). \end{align*} Therefore, we have proved (\ref{eq:u-contain-S-k}). ~\\ \emph{(Establishment of (\ref{eq:S-k-card-lower})).} We first present a simple fact from the induction hypothesis: \begin{align} \left\{i\in[n], r_i^*\leq\underline{u^{(k-1)}}\right\}\subset\widetilde{S}_{k-1}\subset\left\{i\in[n], r_i^*\leq\overline{u^{(k-1)}}\right\}.\label{eqn:tilde_S_sandwitch} \end{align} The first containment is because (\ref{eq:S-u-relation-k}) and (\ref{eq:u-contain-S-k}) hold up to $k-1$. To prove the second containment, we only need to show $\overline{u^{(1)}} \leq \ldots \leq \overline{u^{(k-1)}}$. Notice that from (\ref{eq:S-k-card-lower}) and (\ref{eq:S-upper-lower-diff-k}) for $k-1$, we have $\abs{\underline{S_{k-1}}} \geq \overline{u^{(k-2)}}-\underline{u^{(k-2)}}$. On the other hand, from (\ref{eq:u-contain-S-k}) for $k-1$, we have $\abs{\underline{S_{k-1}}} \leq \abs{ \cbr{i\in[n]: r_i^*> \underline{u^{(k-2)}}, r^*_i \leq \overline{u^{(k-1)}}}} \leq \overline{u^{(k-1)}} - \underline{u^{(k-2)}}$. Hence, we have $\overline{u^{(k-1)}} \geq \overline{u^{(k-2)}}$ and similarly we can show $\overline{u^{(l+1)}} \geq \overline{u^{(l)}}$ for any $l\leq k-2$, which proves $\overline{u^{(1)}} \leq \ldots \leq \overline{u^{(k-1)}}$. Using (\ref{eqn:tilde_S_sandwitch}), we have \begin{align*} &\abs{\underline{S_k}}=\abs{\left\{i\in[n]\backslash\widetilde{S}_{k-1}: \overline{t_i^{(k)}}\leq\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}\right\}}\\ &\geq\abs{\left\{i\in[n]: r_i^*>\overline{u^{(k-1)}}, \abs{\left\{j\in[n]: r_j^*>\underline{u^{(k-1)}},\theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M-\delta_1\right\}}\leq\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}\right\}}.
\end{align*} For any $i\in[n]$, since $\theta^*\in \Theta_n(\beta,C_0)$, we have \begin{align*} \abs{\left\{j\in[n]: r_j^*>\underline{u^{(k-1)}},\theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M-\delta_1\right\}}&\leq r_i^* -\left\lfloor \frac{2M-\delta_1}{C_0\beta}\right\rfloor - \underline{u^{(k-1)}}. \end{align*} Hence, \begin{align*} \abs{\underline{S_k}} &\geq\abs{\left\{i\in[n]: r_i^*>\overline{u^{(k-1)}}, r_i^*\leq\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}+ \left\lfloor \frac{2M-\delta_1}{C_0\beta}\right\rfloor + \underline{u^{(k-1)}} \right\} } \\ & \geq \frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}+ \left\lfloor \frac{2M-\delta_1}{C_0\beta}\right\rfloor + \underline{u^{(k-1)}} - \overline{u^{(k-1)}}\\ &\geq\left(\frac{1.7}{C_0}+\frac{1}{1+\frac{0.11}{C_0^2}}\right)\frac{M}{\beta}. \end{align*} ~\\ \emph{(Establishment of (\ref{eq:S-upper-lower-diff-k})).} From (\ref{eq:u-contain-S-k}), we have $\abs{\overline{S_k}\backslash\underline{S_k}}\leq\overline{u^{(k)}}-\underline{u^{(k)}}$. Hence, we only need to show $\overline{u^{(k)}}-\underline{u^{(k)}}\leq\frac{0.29M}{C_0\beta}$. We are going to prove \begin{align} \theta^*_{ \overline{u^{(k-1)}}} \geq \theta^*_{ \underline{u^{(k)}}} + 2M -\delta_1.\label{eqn:noname1} \end{align} First, by (\ref{eq:S-upper-lower-diff-k}) for $k-1$, (\ref{eq:S-k-card-lower}), and (\ref{eq:S-u-relation-k}), we have $\abs{\left\{i\in [n]: \underline{u^{(k-1)}}\leq r_i^*\leq\underline{u^{(k)}}\right\}}\geq |\underline{S_k}|$ which leads to $\underline{u^{(k)}} \geq \overline{u^{(k-1)}}$. Let $b\in[n]$ be the index such that $r^*_b = \underline{u^{(k)}} + 1 $. Then it means $b \in [n]\backslash\widetilde{S}_{k-1}$ and $\overline{t^{(k)}_{b}}> \frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}$. 
By the definition of $\overline{t^{(k)}_i}$, for any $i\in[n]\backslash\widetilde{S}_{k-1}$, we have \begin{align*} \abs{\left\{j\in[n]: r^*_j \geq \underline{u^{(k-1)}}, \theta^*_{r^*_j} > \theta^*_{r^*_i} + 2M -\delta_1\right\}} & \geq \abs{\left\{j\in[n]\backslash\widetilde{S}_{k-1}:j\in\overline{\mathcal{E}_{1,i}}\right\}} = \overline{t^{(k)}_i}, \end{align*} which implies $\theta^*_{\underline{u^{(k-1)}} + \overline{t^{(k)}_i}} >\theta^*_{r^*_i} + 2M -\delta_1 $. Since consecutive coordinates of $\theta^*$ are separated by at least $\beta$, this means \begin{align*} \theta^*_{\underline{u^{(k-1)}} } >\theta^*_{r^*_i} + 2M -\delta_1 +\overline{t^{(k)}_i}\beta. \end{align*} Applying this to the index $b$ defined above, we have \begin{align} \theta^*_{ \underline{u^{(k-1)}}} \geq \theta^*_{\underline{u^{(k)}} + 1} + 2M -\delta_1 +\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)}. \label{eqn:noname2} \end{align} Then using (\ref{eq:S-upper-lower-diff-k}) for $k-1$, we have \begin{align} \theta^*_{ \overline{u^{(k-1)}}} \geq \theta^*_{\underline{u^{(k)}} + 1} + 2M -\delta_1 +\frac{M}{\left(1+\frac{0.11}{C_0^2}\right)} - 0.29M \geq \theta^*_{\underline{u^{(k)}} } + 2M -\delta_1,\label{eqn:noname3} \end{align} which proves (\ref{eqn:noname1}).
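The numerical step in the second inequality of (\ref{eqn:noname3}) is a routine check of the constants, which we record for completeness (using $C_0\geq1$, $\delta_1=o(1)$, and $\beta=o(M)$, the latter holding since $p\leq1$, $M=O(1)$, and $h=pM/\beta\gg\log n$):

```latex
\frac{1}{1+\frac{0.11}{C_0^2}} - 0.29
\;\geq\; \frac{1}{1.11} - 0.29
\;>\; 0.61,
\qquad\text{hence}\qquad
\frac{M}{1+\frac{0.11}{C_0^2}} - 0.29M
\;\geq\; 0.61M
\;\geq\; \theta^*_{\underline{u^{(k)}}} - \theta^*_{\underline{u^{(k)}}+1},
```

where the last inequality holds because $\theta^*_{\underline{u^{(k)}}} - \theta^*_{\underline{u^{(k)}}+1}\leq C_0\beta=o(M)$ for $\theta^*\in\Theta_n(\beta,C_0)$.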
Then for any $i,j\in[n]\backslash\widetilde{S}_{k-1}$ such that $\underline{u^{(k)}}\leq r_i^*< r_j^*$, we have \begin{align*} \overline{t_j^{(k)}} - \overline{t_i^{(k)}} & = \abs{\left\{l\in[n]\backslash\widetilde{S}_{k-1}:\theta^*_{r^*_l} \geq \theta^*_{r^*_j} + 2M -\delta_1\right\}} - \abs{\left\{l\in[n]\backslash\widetilde{S}_{k-1}:\theta^*_{r^*_l} \geq \theta^*_{r^*_i} + 2M -\delta_1\right\}} \\ & = \abs{\left\{l\in[n]\backslash\widetilde{S}_{k-1}: \theta^*_{r^*_i} + 2M -\delta_1 >\theta^*_{r^*_l} \geq \theta^*_{r^*_j} + 2M -\delta_1\right\}} \\ & \geq \abs{\left\{l\in[n]: r_l^*> \overline{u^{(k-1)}},\, \theta^*_{r^*_i} + 2M -\delta_1 >\theta^*_{r^*_l} \geq \theta^*_{r^*_j} + 2M -\delta_1\right\}}\\ &\geq \abs{\left\{l\in[n]: \theta^*_{r^*_i} + 2M -\delta_1 >\theta^*_{r^*_l} \geq \theta^*_{r^*_j} + 2M -\delta_1\right\}}\\ & \geq \frac{\theta^*_{r^*_i} -\theta^*_{r^*_j} }{C_0 \beta}\\ &\geq \frac{r^*_j - r^*_i}{C_0}, \end{align*} where in the first inequality we use (\ref{eqn:tilde_S_sandwitch}) and in the second inequality we use (\ref{eqn:noname1}). The last two inequalities are due to $\theta^*\in \Theta_n(\beta,C_0)$. As a result, \begin{align}\label{eqn:noname5} \overline{u^{(k)}}-\underline{u^{(k)}} \leq \frac{\frac{M}{\left(1-\frac{0.12}{C_0^2}\right)\beta} - \frac{M}{\left(1+\frac{0.11}{C_0^2}\right)\beta}}{C_0} \leq \frac{0.29M}{C_0\beta}. \end{align} ~\\ \emph{(Establishment of (\ref{eq:S-k-card-upper})).} We first have $$\abs{\overline{S_k}}\leq\overline{u^{(k)}}-\underline{u^{(k-1)}}\leq\overline{u^{(k)}}-\left(\overline{u^{(k-1)}}-\frac{0.29M}{C_0\beta}\right)$$ due to the induction hypothesis on (\ref{eq:S-upper-lower-diff-k}) for $k-1$ and $\left\{i\in[n]: r_i^*\leq\underline{u^{(k-1)}}\right\}\subset\widetilde{S}_{k-1}$.
By the definition of $\overline{u^{(k)}}$, similar to the proof of (\ref{eqn:noname2}), we can show \begin{align*} \theta^*_{\overline{u^{(k-1)}}} \leq \theta_{\overline{u^{(k)}}}^* + 2M -\delta_1 + \frac{M}{\left(1-\frac{0.12}{C_0^2}\right)} \end{align*} which implies $$\overline{u^{(k)}}-\overline{u^{(k-1)}}\leq\frac{2M}{\beta}+\frac{M}{\left(1-\frac{0.12}{C_0^2}\right)\beta}.$$ Therefore, $$\abs{\overline{S_k}}\leq\left(2+\frac{0.29}{C_0}+\frac{1}{1-\frac{0.12}{C_0^2}}\right)\frac{M}{\beta}.$$ ~\\ \emph{(Establishment of (\ref{eq:S-S-prime-equal-k})).} Define $$\mathcal{F}^{(k)\prime}=\left\{\min_{i\in[n]:r_i^*>\overline{u^{(k)}}}\sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\underline{\mathcal{E}_{1,i}}}>h\right\}.$$ We are going to show the event $\mathcal{F}^{(k)\prime}$ is a sufficient condition for (\ref{eq:S-S-prime-equal-k}). By definition, since $\underline{S_k} \subset [n]\backslash \widetilde{S}_{k-1}$, we have $w_i^{(k)\prime}\leq w_i^{(k)}$ which implies $S_k\subset S_k^\prime$. We only need to show $S_k^\prime\subset S_k$. Note that for any $i$ such that $r_i^*>\overline{u^{(k)}}$, we have \begin{align*} w_i^{(k)\prime} = \sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\mathcal{E}_{1,i}} \geq \sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\underline{\mathcal{E}_{1,i}}}>h, \end{align*} which means $i\notin S_k^\prime$ as $\mathcal{F}^{(k)\prime}$ is assumed to be true. Hence to show $S_k^\prime\subset S_k$, we only need to show $S_k^\prime \cap \{i\in[n]: r_i^*\leq \overline{u^{(k)}}\}\subset S_k$. Note that due to (\ref{eq:S-upper-lower-diff-k}), for any $i,j\in[n]$ such that $r_i^*\leq \overline{u^{(k)}}$ and $r_j^*> \underline{u^{(k)}}$, we have $\theta_{r_i^*}^*-\theta_{r_j^*}^*\geq \theta_{\overline{u^{(k)}}}^*-\theta_{\underline{u^{(k)}}}^*\geq -0.29M$.
Then for any $i$ such that $r_i^*\leq \overline{u^{(k)}}$, we have \begin{align*} w_i^{(k)\prime} - w_i^{(k)} &= \sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\mathcal{E}_{1,i}} - \sum_{j\in[n]\backslash \widetilde{S}_{k-1}}A_{ij}\indc{j\in\mathcal{E}_{1,i}} \\ &\geq - \sum_{j\in[n]:r_j^*> \underline{u^{(k)}}}\indc{j\in\mathcal{E}_{1,i}}\\ & \geq - \sum_{j\in[n]:r_j^*> \underline{u^{(k)}}}\indc{j\in\overline{\mathcal{E}_{1,i}}}\\ & = - \sum_{j\in[n]:r_j^*> \underline{u^{(k)}}} \indc{\theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M-\delta_1} \\ & = 0, \end{align*} where the first inequality is due to (\ref{eq:S-u-relation-k}). Hence we have $S_k^\prime \cap \{i\in[n]: r_i^*\leq \overline{u^{(k)}}\}\subset S_k$ which leads to $S_k = S_k^\prime$. As a result, to establish (\ref{eq:S-S-prime-equal-k}), we only need to analyze $\pbr{\mathcal{F}^{(k)\prime}}$. The analysis of $\pbr{\mathcal{F}^{(k)\prime}}$ is similar to that of $\pbr{\mathcal{F}^{(k)}}$ in the establishment of (\ref{eq:u-contain-S-k}). By a similar independence argument, we have \begin{align*} \br{\sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\underline{\mathcal{E}_{1,i}}}}\Bigg| \widetilde{S}_{k-1} \sim\text{Binomial}\left(\abs{\underline{S_k}\cap\underline{\mathcal{E}_{1,i}}},p\right) \end{align*} for any $i\in[n]$ such that $r_i^*>\overline{u^{(k)}}$. From (\ref{eqn:noname3}), we have \begin{align} \underline{u^{(k)}} - \overline{u^{(k-1)}} \geq \frac{2M - \delta_1}{C_0 \beta}.\label{eqn:noname4} \end{align} Together with (\ref{eq:S-u-relation-k}) and (\ref{eqn:tilde_S_sandwitch}), we have \begin{align*} \abs{\underline{S_k}\cap\underline{\mathcal{E}_{1,i}}} &\geq \abs{\cbr{j\in[n]: \overline{u^{(k-1)}} \leq r_j^* \leq \underline{u^{(k)}} , \theta_{r_j^*}^*\geq\theta_{r_i^*}^*+2M+\delta_1}} \geq \underline{u^{(k)}} - \overline{u^{(k-1)}} \geq \frac{2M - \delta_1}{C_0 \beta}. \end{align*} Recall that $h=pM/\beta$ and $p/(\beta \log n)\rightarrow\infty$.
By Bernstein inequality, we have \begin{align*} \pbr{\mathcal{F}^{(k)\prime} | \widetilde{S}_{k-1}} = \pbr{\min_{i\in[n]:r^*_i \geq \overline{u^{(k)}}}\br{\sum_{j\in\underline{S_k}}A_{ij}\indc{j\in\underline{\mathcal{E}_{1,i}}}} > h\Bigg| \widetilde{S}_{k-1}} \geq 1-O(n^{-10}). \end{align*} Since this holds for all $\widetilde{S}_{k-1}$, we have $\pbr{\mathcal{F}^{(k)\prime}} \geq 1-O(n^{-10})$. ~\\ \emph{(Establishment of (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) for $K$).} We have (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) hold for each $k\in[K-1]$ with probability at least $1-O(n^{-9})$. For the last partition, $S_K=[n]\backslash\widetilde{S}_{K-1}$. Let $S_{K,1}$ be the set obtained by Algorithm \ref{alg:partition} before the terminating condition $n - \left(\abs{S_1}+\ldots+\abs{S_{K,1}}\right)\leq\abs{S_{K,1}}/2$ is met. $\underline{S_{K,1}}, \overline{S_{K,1}}$ can be similarly defined and (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) should also be satisfied by $S_{K,1}$. Therefore, $$\abs{S_K}\leq\frac{3\abs{S_{K, 1}}}{2}\leq\frac{3}{2}\abs{\overline{S_{K, 1}}}\leq\frac{3}{2}\left(2+\frac{0.29}{C_0}+\frac{1}{1-\frac{0.12}{C_0^2}}\right)\frac{M}{\beta},$$ $$\abs{S_K}\geq\abs{S_{K, 1}}\geq\abs{\underline{S_{K,1}}}\geq\left(\frac{1.7}{C_0}+\frac{1}{1+\frac{0.11}{C_0^2}}\right)\frac{M}{\beta}$$ and $$\left\{i\in[n]: r_i^*>\overline{u^{(K-1)}}\right\}\subset S_K\subset\left\{i\in[n]: r_i^*>\underline{u^{(K-1)}}\right\}.$$ ~\\ \indent So far, we have established (\ref{eq:u-contain-S-k}) - (\ref{eq:S-S-prime-equal-k}) for any $k\in[K]$. Now we are ready to use them to prove the conclusions in Theorem \ref{thm:alg-1}. \begin{enumerate} \item Conclusion 1 is a consequence of (\ref{eq:u-contain-S-k}) and (\ref{eq:S-k-card-upper}). \item For Conclusion 2, by (\ref{eqn:noname4}) we have $\overline{u^{(k-2)}} < \underline{u^{(k-1)}} < \overline{u^{(k-1)}} < \underline{u^{(k)}} < \overline{u^{(k)}} < \underline{u^{(k+1)}}$.
Together with (\ref{eq:u-contain-S-k}) and (\ref{eqn:tilde_S_sandwitch}), we have $$\left\{i\in[n]: \overline{u^{(k-2)}}<r_i^*\leq\underline{u^{(k+1)}}\right\}\subset S_{k-1}\cup S_k\cup S_{k + 1}\subset\left\{i\in[n]: \underline{u^{(k-2)}}<r_i^*\leq\overline{u^{(k+1)}}\right\}.$$ Therefore, using (\ref{eqn:noname4}), for any $i$ such that $\underline{u^{(k-1)}}<r_i^*\leq\overline{u^{(k)}}$, \begin{align*} &\left\{j\in[n]: \abs{r_i^*-r_j^*}\leq\frac{1.51M}{C_0\beta}\right\}\\ &\subset\left\{j\in[n]: \underline{u^{(k-1)}}-\frac{1.51M}{C_0\beta}\leq r_j^*\leq\overline{u^{(k)}}+\frac{1.51M}{C_0\beta}\right\}\\ &\subset\left\{j\in[n]: \overline{u^{(k-2)}}<r_j^*\leq\underline{u^{(k+1)}}\right\}\subset S_{k-1}\cup S_k\cup S_{k + 1}. \end{align*} For $k=1$ or $K$, only one side needs to be considered and the property still holds due to the gap between $\underline{u^{(2)}}$ and $\overline{u^{(1)}}$ as well as the gap between $\underline{u^{(K-1)}}$ and $\overline{u^{(K-2)}}$. \item For Conclusion 3, by (\ref{eq:u-contain-S-k}) and (\ref{eqn:tilde_S_sandwitch}), we have \begin{align} \left\{i\in[n]: \overline{u^{(k-1)}}<r_i^*\leq\underline{u^{(k)}}\right\}\subset S_k\subset\left\{i\in[n]: \underline{u^{(k-1)}}<r_i^*\leq\overline{u^{(k)}}\right\}.\label{eqn:noname6} \end{align} Using (\ref{eqn:noname4}), we have $$\max\left\{r_i^*: i\in S_k\right\}\leq\overline{u^{(k)}}<\underline{u^{(k+1)}}<\min\left\{r_i^*: i\in S_{k+2}\right\}.$$ The same argument establishes $\max\left\{r_i^*: i\in S_k\right\}<\min\left\{r_i^*: i\in S_{l}\right\}$ for any $l>k+2$. \item For Conclusion 4, for any $k$ and any $i$, the definition of $w_i^{(k)\prime}$ only involves $j$ such that $j\in\overline{\mathcal{E}_{1,i}}$. This implies that the definition of $S_k^\prime$ only involves information of $(A_{ij}, \bar{y}_{ij}^{(1)})$ such that $\theta_{r_j^*}^*-\theta_{r_i^*}^*\geq2M-\delta_1$. Thus $S_k^\prime$ can be used as the $\check{S}_k$ in Theorem \ref{thm:alg-1}.
\item For Conclusion 5, note that for any $k\in[K]$ and $i\in S_k$, we have \begin{align*} \abs{\left\{j\in[n]: |\theta^*_{r_i^*}-\theta^*_{r_j^*}|\leq \frac{M}{2}\right\} \cap S_k} &\geq \abs{\left\{j\in[n]: |r_i^*-r_j^*|\leq \frac{M}{2C_0\beta}\right\} \cap S_k} \\ & \geq \abs{\left\{j\in[n]: |r_i^*-r_j^*|\leq \frac{M}{2C_0\beta} , \overline{u^{(k-1)}}<r_j^*\leq\underline{u^{(k)}}\right\}}, \end{align*} where the last inequality is by (\ref{eqn:noname6}). Again by (\ref{eqn:noname6}), we have $\underline{u^{(k-1)}}<r_i^*\leq\overline{u^{(k)}}$. From (\ref{eqn:noname4}) we know $\underline{u^{(k)}} - \overline{u^{(k-1)}} > M/(2C_0\beta)$. Then we have \begin{align*} \abs{\left\{j\in[n]: |\theta^*_{r_i^*}-\theta^*_{r_j^*}|\leq \frac{M}{2}\right\} \cap S_k} &\geq \frac{M}{2C_0\beta} - \max\cbr{\overline{u^{(k-1)}} - \underline{u^{(k-1)}}, \overline{u^{(k)}} - \underline{u^{(k)}}}\\ &\geq \frac{0.21M}{C_0\beta} \end{align*} where the last inequality is by (\ref{eq:S-upper-lower-diff-k}). \end{enumerate} The proof is complete. \end{proof} \subsection{Proofs of Lemma \ref{lem:anderson-ineq}, Lemma \ref{lem:MLE-check} and Lemma \ref{lem:prev-paper}}\label{sec:pf-lemmas} We first prove Lemma \ref{lem:anderson-ineq} below. \begin{proof}[Proof of Lemma \ref{lem:anderson-ineq}] Recall that $\widehat{r}$ is obtained by sorting $\left\{\sum_{j\in[n]\backslash\{i\}}R_{ij}\right\}_{i\in[n]}$.
Define $$\widehat{s}_i=\sum_{j\in[n]\backslash\{i\}}R_{ij},$$ $$\widehat{R}_{ij}=\indc{\widehat{s}_i>\widehat{s}_j}=\indc{\widehat{r}_i<\widehat{r}_j}$$ and $$s_i^*=\sum_{j\in[n]\backslash\{i\}}R_{ij}^*.$$ Observing that $$r_i^*=n-s_i^*,$$ we have $$\widehat{R}_{ij}=\indc{\widehat{s}_i>\widehat{s}_j}=\indc{\widehat{s}_i-s_i^*+s_i^*>\widehat{s}_j-s_j^*+s_j^*}=\indc{\widehat{s}_i-s_i^*-(\widehat{s}_j-s_j^*)>r_i^*-r_j^*}.$$ Thus \begin{align*} &\textsf{K}(\widehat{r}, r^*)=\frac{1}{n}\sum_{1\leq i<j\leq n}\indc{\widehat{R}_{ij}\neq R_{ij}^*}\\ &=\frac{1}{n}\sum_{1\leq i<j\leq n}\abs{\indc{\widehat{s}_i-s_i^*-(\widehat{s}_j-s_j^*)>r_i^*-r_j^*}-\indc{0>r_i^*-r_j^*}}\\ &\leq\frac{1}{n}\sum_{1\leq i<j\leq n}\indc{\abs{\widehat{s}_i-s_i^*-(\widehat{s}_j-s_j^*)}\geq\abs{r_i^*-r_j^*}}\\ &\leq\frac{1}{n}\sum_{1\leq i<j\leq n}\indc{\abs{r_i^*-r_j^*}\leq\abs{\widehat{s}_i-s_i^*}+\abs{\widehat{s}_j-s_j^*}}\\ &=\frac{1}{n}\sum_{k=1}^{n-1}\sum_{\substack{1\leq i<j\leq n\\\abs{r_i^*-r_j^*}=k}}\indc{k\leq\abs{\widehat{s}_i-s_i^*}+\abs{\widehat{s}_j-s_j^*}}\\ &\leq\frac{1}{n}\sum_{k=1}^{n-1}\sum_{\substack{1\leq i<j\leq n\\\abs{r_i^*-r_j^*}=k}}\indc{\frac{k}{2}\leq\abs{\widehat{s}_i-s_i^*}}+\indc{\frac{k}{2}\leq\abs{\widehat{s}_j-s_j^*}}\\ &\leq\frac{2}{n}\sum_{i=1}^{n}\sum_{k=1}^{n-1}\indc{\frac{k}{2}\leq\abs{\widehat{s}_i-s_i^*}}\leq\frac{4}{n}\sum_{i=1}^{n}\sum_{k=1}^n\indc{k\leq\abs{\widehat{s}_i-s_i^*}}\\ &=\frac{4}{n}\sum_{i=1}^n\abs{\widehat{s}_i-s_i^*}\leq\frac{4}{n}\sum_{i=1}^n\sum_{j\in[n]\backslash\{i\}}\abs{R_{ij}-R_{ij}^*}=\frac{4}{n}\sum_{1\leq i\neq j\leq n}\indc{R_{ij}\neq R_{ij}^*} \end{align*} which completes the proof. \end{proof} Next, we prove Lemma \ref{lem:MLE-check}. \begin{proof}[Proof of Lemma \ref{lem:MLE-check}] Recall $\mathcal{E}=\left\{(i,j): 1\leq i<j\leq n, \psi(-M)\leq\overline{y}_{ij}^{(1)}\leq\psi(M)\right\}$.
Then on the event where Lemma \ref{cor:ave-con-cor} holds, $\mathcal{E}$ can be written as $$\mathcal{E}=\left\{(i,j):1\leq i<j\leq n, \abs{\theta_{r_i^*}^*-\theta_{r_j^*}^*}\leq M/2\right\}\uplus\left(\mathcal{E}\cap\left\{(i,j):M/2<\abs{\theta_{r_i^*}^*-\theta_{r_j^*}^*}< 1.1M\right\}\right)$$ which implies $\check{A}_{ij}=A_{ij}\indc{(i,j)\in\mathcal{E}}$. Moreover, on the event where Theorem \ref{thm:alg-1} holds, $\check{S}_k=S_k, k\in[K]$ by Conclusion 4. This proves $\ell^{(k)}(\theta)=\check{\ell}^{(k)}(\theta)$ with probability at least $1- O(n^{-8})$. $\check{\theta}^{(k)}$ and $\widehat{\theta}^{(k)}$ are equivalent up to a common shift since the Hessian in the local MLE is well-conditioned with probability at least $1- O(n^{-8})$ due to Lemma \ref{lem:bound-hessian}. \end{proof} Finally, we need to prove Lemma \ref{lem:prev-paper}, which requires us to first establish a few extra lemmas. \begin{lemma}\label{lem:band_laplacian_eigenvalue} For any integer constant $C\geq1$, define a matrix $M\in\cbr{0,1}^{n\times n}$ such that $M_{ij}=\indc{\abs{i-j}\leq n/C}$. Let $\mathcal{L}_M$ be its Laplacian matrix such that \begin{align*} \sbr{\mathcal{L}_M}_{ij} =\begin{cases} -M_{ij},\quad \text{if }i\neq j\\ \sum_{l}M_{il},\quad \text{if }i= j. \end{cases} \end{align*} Denote $\lambda_{\min,\perp}(\mathcal{L}_M)$ to be the second smallest eigenvalue of $\mathcal{L}_M$, i.e., $\lambda_{\min,\perp}(\mathcal{L}_M)=\min_{u\neq 0, \mathds{1}_n^Tu=0}\frac{u^T\mathcal{L}_M u}{\norm{u}^2}$. Then there exists another constant $C'>0$ that only depends on $C$ such that \begin{align*} \frac{1}{n}\lambda_{\min,\perp}(\mathcal{L}_M) =\inf_{\substack{x\in\mathbb{R}^n\\\mathds{1}_n^Tx=0\\\norm{x}=1}}\frac{\sum_{\abs{i-j}\leq\frac{n}{C}}(x_i-x_j)^2}{n} \geq C'. \end{align*} \end{lemma} \begin{proof} We partition $[n]$ into $4C$ consecutive blocks such that each block contains either $\lceil n/4C\rceil$ or $\lfloor n/4C\rfloor$ consecutive indices.
Let these blocks be a sequence of disjoint sets $B_1,...,B_{4C}$ such that $\max_{i\in B_k} i <\min_{j\in B_l} j$ if $k<l$. The idea is to lower bound the summation over the diagonal region by a sequence of square regions. Thus, for any $x\in\mathbb{R}^n, \mathds{1}_n^Tx=0, \norm{x}=1$, we have \begin{align*} \frac{1}{n}x^T\mathcal{L}_M x& = \frac{\sum_{\abs{i-j}\leq\frac{n}{C}}(x_i-x_j)^2}{n} \\ &\geq\frac{1}{n}\left[\sum_{k,l\in[4C]:\abs{k-l}\leq 1}\sum_{i\in B_{k}, j\in B_{l}}(x_i-x_j)^2\right]\\ & = \sum_{k,l\in[4C]:\abs{k-l}\leq 1} \br{\frac{\abs{B_l}}{n} \sum_{i\in B_k} x_i^2 + \frac{\abs{B_k}}{n} \sum_{i\in B_l} x_i^2 - 2\br{\sum_{i\in B_k} \frac{x_i}{\sqrt{n}}}\br{\sum_{i\in B_l} \frac{x_i}{\sqrt{n}}} }\\ & = \sum_{k,l\in[4C]:\abs{k-l}\leq 1} \br{p_l z_k+ p_k z_l -2y_ky_l}, \end{align*} where we denote $$y_k=\sum_{i\in B_k}\frac{x_i}{\sqrt{n}}, z_k=\sum_{i\in B_k}x_i^2, p_k=\frac{\abs{B_k}}{n}.$$ For any $k\in[4C]$, we define \begin{align}\label{eqn:w_def} w_{2k-1} = \frac{y_k +\sqrt{p_k z_k - y_k^2}}{2p_k},\text{ and }w_{2k} = \frac{y_k -\sqrt{p_k z_k - y_k^2}}{2p_k}. \end{align} Note that for any $k,l\in[4C]$, we have \begin{align*} &p_l p_k \br{\br{w_{2k-1} - w_{2l-1}}^2 + \br{w_{2k-1} - w_{2l}}^2 + \br{w_{2k} - w_{2l-1}}^2 + \br{w_{2k} - w_{2l}}^2} \\ &= p_l p_k \br{2\br{w_{2k-1}^2 + w_{2k}^2 + w_{2l-1}^2 + w_{2l}^2} - 2\br{w_{2k-1} + w_{2k}}\br{w_{2l-1} + w_{2l}} } \\ & = p_l p_k \br{\frac{z_k}{p_k} + \frac{z_l}{p_l} -2 \frac{y_ky_l}{p_kp_l}}\\ &= p_l z_k + p_k z_l -2y_ky_l. \end{align*} Then we have \begin{align*} &\sum_{k,l\in[4C]:\abs{k-l}\leq 1} \br{p_l z_k+ p_k z_l -2y_ky_l}\\ &= \sum_{k,l\in[4C]:\abs{k-l}\leq 1} p_l p_k \br{\br{w_{2k-1} - w_{2l-1}}^2 + \br{w_{2k-1} - w_{2l}}^2 + \br{w_{2k} - w_{2l-1}}^2 + \br{w_{2k} - w_{2l}}^2}.
\end{align*} Note that $w$ is a function of $y,z,p$ which by definition satisfy: $\sum_{k=1}^{4C} y_k=0$, $\sum_{k=1}^{4C} z_k=1$, $\min_{k\in[4C]} p_k \geq 1/(5C)$, $\sum_{k=1}^{4C} p_k=1$, and $y_k^2\leq p_k z_k$ for all $k\in[4C]$. Define a parameter space $T$: \begin{align*} T = \cbr{(y,z,p):\sum_{k=1}^{4C} y_k=0,\sum_{k=1}^{4C} z_k=1,\min_{k\in[4C]} p_k \geq 1/(5C),\sum_{k=1}^{4C} p_k=1, \text{ and }y_k^2\leq p_k z_k,\forall k\in[4C]}. \end{align*} Then we have \begin{align} \nonumber&\frac{1}{n}\lambda_{\min,\perp}(\mathcal{L}_M) \\ &\geq \inf_{(y,z,p)\in T} \sum_{k,l\in[4C]:\abs{k-l}\leq 1} p_l p_k \br{\br{w_{2k-1} - w_{2l-1}}^2 + \br{w_{2k-1} - w_{2l}}^2 + \br{w_{2k} - w_{2l-1}}^2 + \br{w_{2k} - w_{2l}}^2},\label{eqn:w_lower} \end{align} where $w$ is defined in (\ref{eqn:w_def}). Since $T$ only depends on $C$, the quantity (\ref{eqn:w_lower}) also only depends on $C$. Then, (\ref{eqn:w_lower}) is equal to some constant $C'\geq 0$ only depending on $C$. We are going to show $C'>0$. Otherwise, let the infimum of (\ref{eqn:w_lower}) be achieved at some $w$ with $(y,z,p)\in T$. Then, we must have $w_{2k-1} = w_{2l-1} = w_{2k} = w_{2l}$ for any $k,l\in[4C]$ such that $\abs{k-l}\leq 1$. This has two immediate implications. First, for any $k\in[4C]$, since $w_{2k-1}=w_{2k}$, we have $y_k^2 =p_k z_k$ and $w_{2k-1}=w_{2k} = y_k/(2p_k)$. Second, since $w_{2k} = w_{2(k+1)}$ for any $k\in[4C-1]$, there exists some $c$ such that $y_k/p_k = c$ for all $k\in[4C]$. Together with $y_k^2 =p_k z_k$, we obtain $c^2 p_k = z_k$ for all $k\in[4C]$. Since $\sum_{k=1}^{4C} z_k=1$ and $\sum_{k=1}^{4C} p_k=1$, we conclude $c=\pm 1$. Then using $y_k/p_k = c$, we have $\sum_{k=1}^{4C} y_k = c \sum_{k=1}^{4C} p_k = c \neq 0$, which is a contradiction with $\sum_{k=1}^{4C} y_k=0$. As a result, we obtain $\frac{1}{n}\lambda_{\min,\perp}(\mathcal{L}_M) \geq C' >0$.
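As a numerical sanity check (not part of the proof), the conclusion of Lemma \ref{lem:band_laplacian_eigenvalue} can be verified directly for moderate $n$: the ratio $\lambda_{\min,\perp}(\mathcal{L}_M)/n$ stays bounded away from zero as $n$ grows. A minimal sketch, using the standard self-loop-free Laplacian convention (which only shifts the quadratic form by $\norm{x}^2$ and is immaterial for positivity):

```python
import numpy as np

def second_smallest_laplacian_eig(n, C):
    """Second smallest eigenvalue of the Laplacian of the band graph M_ij = 1{|i-j| <= n/C}."""
    idx = np.arange(n)
    M = (np.abs(idx[:, None] - idx[None, :]) <= n / C).astype(float)
    np.fill_diagonal(M, 0.0)          # drop self-loops; they do not affect (x_i - x_j)^2 terms
    L = np.diag(M.sum(axis=1)) - M    # graph Laplacian D - M
    return np.sort(np.linalg.eigvalsh(L))[1]

# The ratio lambda_{min,perp}(L_M)/n should be bounded below by a constant
# depending only on C as n grows (here C = 3).
ratios = [second_smallest_laplacian_eig(n, C=3) / n for n in (30, 60, 120)]
```

The observed ratios are essentially constant in $n$ and bounded away from zero, consistent with the existence of a constant $C'>0$ depending only on $C$.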
\end{proof} \begin{lemma}\label{lem:bound-hessian} Under the assumptions in Lemma \ref{lem:prev-paper}, $$\lambda_{\min,\perp}(H(\eta^*))=\min_{u\neq 0: \mathds{1}_{m}^Tu=0}\frac{u^TH(\eta^*)u}{\|u\|^2} \gtrsim mp$$ with probability at least $1-O(n^{-10})$, where $H(\eta^*)$ is the Hessian matrix of the objective (\ref{eq:MLE-small}), defined by $$H_{ij}(\eta^*)=\begin{cases} \sum_{l\in[m]\backslash\{i\}}B_{il}\psi'(\eta_i^*-\eta_l^*), & i=j, \\ -B_{ij}\psi'(\eta_i^*-\eta_j^*), & i\neq j. \end{cases}$$ \end{lemma} \begin{proof} We can decompose $H(\eta^*)$ into the stochastic part $H(\eta^*)-\mathbb{E}(H(\eta^*))$ and the deterministic part $\mathbb{E}(H(\eta^*))$, and bound them separately. We first look at the stochastic part. Note that $$H(\eta^*)-\mathbb{E}(H(\eta^*))=D - \mathbb{E}(D)-\sum_{i<j}(B_{ij}-p_{ij})\psi^\prime(\eta_i^*-\eta_j^*)(E_{ij}+E_{ji})$$ where $D=\mathrm{diag}\{D_1,...,D_m\}=\mathrm{diag}\{\sum_{j\neq 1}B_{1j}\psi^\prime(\eta_1^*-\eta_j^*),...,\sum_{j\neq m}B_{mj}\psi^\prime(\eta_m^*-\eta_j^*)\}$ and $E_{ij}$ is the $m\times m$ matrix that has $1$ in entry $(i,j)$ and $0$ otherwise. We also have $\opnorm{(B_{ij}-p_{ij})\psi^\prime(\eta_i^*-\eta_j^*)(E_{ij}+E_{ji})}\leq 1$ and $\opnorm{\sum_{i<j}\mathbb{E}\sbr{(B_{ij}-p_{ij})^2}\psi^\prime(\eta_i^*-\eta_j^*)^2(E_{ij}+E_{ji})^2}\leq mp$. By matrix Bernstein inequality in \cite{tropp2015introduction}, we have $$\mathbb{P}\left(\opnorm{\sum_{i<j}(B_{ij}-p_{ij})\psi^\prime(\eta_i^*-\eta_j^*)(E_{ij}+E_{ji})}>t\right)\leq2m\exp\left(-\frac{t^2/2}{mp+\frac{t}{3}}\right).$$ Taking $t=C_1^\prime\sqrt{mp\log n}$ for some large enough constant $C_1^\prime>0$, we have $$\opnorm{\sum_{i<j}(B_{ij}-p_{ij})\psi^\prime(\eta_i^*-\eta_j^*)(E_{ij}+E_{ji})}\leq C_1^\prime\sqrt{mp\log n}$$ with probability at least $1-O(n^{-10})$. Standard concentration using Bernstein inequality also yields $$\opnorm{D-\mathbb{E}(D)}\leq C_2^\prime\sqrt{mp\log n}$$ for some constant $C_2^\prime>0$ with probability at least $1-O(n^{-10})$.
Thus the stochastic part satisfies \begin{equation} \opnorm{H(\eta^*)-\mathbb{E}(H(\eta^*))}\leq(C_1^\prime+C_2^\prime)\sqrt{mp\log n}=o(mp)\label{eq:stoc-small} \end{equation} with probability at least $1-O(n^{-10})$. For the deterministic part, we first choose a constant integer $C^\prime>0$ such that for any $\abs{i-j}\leq\frac{m}{C^\prime}$, $p_{ij}=p$. Thus for any unit vector $x\in\mathbb{R}^m$ such that $\mathds{1}_m^Tx=0$, \begin{align} \nonumber&\frac{x^T\mathbb{E}(H(\eta^*))x}{m}=\frac{\sum_{i<j}p_{ij}\psi^\prime(\eta_i^*-\eta_j^*)(x_i-x_j)^2}{m}\\ \nonumber&\geq\frac{\sum_{i<j, \abs{i-j}\leq\frac{m}{C^\prime}}p\psi^\prime(\eta_i^*-\eta_j^*)(x_i-x_j)^2}{m}\\ &\gtrsim p\frac{\sum_{i<j, \abs{i-j}\leq\frac{m}{C^\prime}}(x_i-x_j)^2}{m}\label{eq:use-bounded}\\ &\gtrsim p\label{eq:use-band-mat} \end{align} where (\ref{eq:use-bounded}) uses the boundedness of $\eta_1^*-\eta_m^*$; (\ref{eq:use-band-mat}) is a consequence of Lemma \ref{lem:band_laplacian_eigenvalue} and $C^\prime$ is a constant independent of $m$ and $n$. Combining (\ref{eq:stoc-small}) and (\ref{eq:use-band-mat}) concludes the proof. \end{proof} The proof of Lemma \ref{lem:prev-paper} is given below. \begin{proof}[Proof of Lemma \ref{lem:prev-paper}] Since $\frac{L(\eta_i^*-\eta_j^*)^2}{2(W_{i}(\eta^*)+W_j(\eta^*))}\asymp mpL(\eta_i^*-\eta_j^*)^2$, we only need to consider the situation where $mpL(\eta_i^*-\eta_j^*)^2$ is greater than a sufficiently large constant, since otherwise we can use the trivial bound $\mathbb{P}\left(\widehat{\eta}_i<\widehat{\eta}_j\right)\leq 1$. 
Define $$\widetilde{\eta}_j=\eta_j^*-\frac{\sum_{l\in[m]\backslash\{j\}}B_{jl}(\bar{y}_{jl}-\psi(\eta_j^*-\eta_l^*))}{\sum_{l\in[m]\backslash\{j\}}B_{jl}\psi'(\eta_j^*-\eta_l^*)}.$$ Following the same argument used in the proof of Theorem 3.2 of \cite{chen2020partial}, we have \begin{equation} |\widehat{\eta}_i-\widetilde{\eta}_i|\vee|\widehat{\eta}_j-\widetilde{\eta}_j|\leq \delta\Delta,\label{eq:bias-very-small} \end{equation} with probability at least $1-O(n^{-7})-\exp(-\Delta^{3/2}Lmp)-\exp\left(-\Delta^2mpL\frac{mp}{\log(n+m)}\right)$, where $\Delta=\min\left(\eta_i^*-\eta_j^*,\left(\frac{\log(n+m)}{mp}\right)^{1/4}\right)$ and $\delta>0$ is some sufficiently small constant. In fact, the bound (\ref{eq:bias-very-small}) has only been established in \cite{chen2020partial} with a random graph that satisfies $p_{ij}=p$ for all $1\leq i<j\leq m$. To establish (\ref{eq:bias-very-small}) under the more general setting of interest, we first have \begin{equation} \lambda_{\min,\perp}(H(\eta^*))=\min_{u\neq 0: \mathds{1}_{m}^Tu=0}\frac{u^TH(\eta^*)u}{\|u\|^2} \gtrsim mp, \label{eq:Hessian-more-general-graph} \end{equation} with high probability, where $H(\eta^*)$ is the Hessian matrix of the objective (\ref{eq:MLE-small}). This is established in Lemma \ref{lem:bound-hessian}. Note that (\ref{eq:Hessian-more-general-graph}) is the only difference between the proofs of the current setting and the setting in \cite{chen2020partial}. With (\ref{eq:bias-very-small}), we have \begin{eqnarray*} \mathbb{P}\left(\widehat{\eta}_i<\widehat{\eta}_j\right) &\leq& \mathbb{P}\left(\widetilde{\eta}_j-\eta_j^*-(\widetilde{\eta}_i-\eta_i^*)>(1-\delta)\Delta\right) \\ && + O(n^{-7})+\exp(-\Delta^{3/2}Lmp)+\exp\left(-\Delta^2mpL\frac{mp}{\log(n+m)}\right). 
\end{eqnarray*} Define $$\mathcal{B}=\left\{B: \left|\frac{\sum_{l\in[m]\backslash\{j\}}p_{jl}\psi'(\eta_j^*-\eta_l^*)}{\sum_{l\in[m]\backslash\{j\}}B_{jl}\psi'(\eta_j^*-\eta_l^*)}-1\right|\leq\delta,\left|\frac{\sum_{l\in[m]\backslash\{i\}}p_{il}\psi'(\eta_i^*-\eta_l^*)}{\sum_{l\in[m]\backslash\{i\}}B_{il}\psi'(\eta_i^*-\eta_l^*)}-1\right|\leq\delta'\right\}.$$ By Bernstein's inequality, we have $\mathbb{P}(B\in \mathcal{B}^c)\leq O(n^{-7})$ for some $\delta'=o(1)$. We then have \begin{eqnarray*} && \mathbb{P}\left(\widetilde{\eta}_j-\eta_j^*-(\widetilde{\eta}_i-\eta_i^*)>(1-\delta)\Delta\right) \\ &\leq& \sup_{B\in\mathcal{B}}\mathbb{P}\left(-\frac{\sum_{l\in[m]\backslash\{j\}}B_{jl}(\bar{y}_{jl}-\psi(\eta_j^*-\eta_l^*))}{\sum_{l\in[m]\backslash\{j\}}B_{jl}\psi'(\eta_j^*-\eta_l^*)}\right. \\ && \left.+\frac{\sum_{l\in[m]\backslash\{i\}}B_{il}(\bar{y}_{il}-\psi(\eta_i^*-\eta_l^*))}{\sum_{l\in[m]\backslash\{i\}}B_{il}\psi'(\eta_i^*-\eta_l^*)}>(1-\delta)\Delta\Big|B\right) + O(n^{-7}) \\ &\leq& \exp\left(-\frac{(1-2\delta)L(\eta_i^*-\eta_j^*)^2}{2(W_{i}(\eta^*)+W_j(\eta^*))}\right) + O(n^{-7}). \end{eqnarray*} Since $$\exp(-\Delta^{3/2}Lmp)+\exp\left(-\Delta^2mpL\frac{mp}{\log(n+m)}\right)\lesssim \exp\left(-\frac{(1-2\delta)L(\eta_i^*-\eta_j^*)^2}{2(W_{i}(\eta^*)+W_j(\eta^*))}\right) + O(n^{-7}),$$ we obtain the desired conclusion. \end{proof} \bibliographystyle{dcu}
\section{Introduction} In recent years, the exponential improvement of the technologies supporting the collection, storage, and analysis of data has had a revolutionary effect on football analytics, as on many other fields. The easy accessibility of data provides great potential for proposing key performance metrics that measure several aspects of play, such as pass evaluation, quantifying controlled space, and evaluating shots and goal-scoring opportunities through possession values. One of the most prominent of these metrics is the \emph{expected goal} (xG), nowadays ubiquitous in football talk shows on TV and in end-of-match statistics. It was proposed by Green \cite{green} to quantify the probability of a shot being a goal. The motivation behind such a metric is to represent the low-scoring nature of football, in contrast to other sports. It is a common story in football that a team dominates the game, creating many scoring opportunities, but cannot score, while the opponent wins the match by converting one of the few chances it creates. In this case, the xG is a useful indicator of the score. From the statistical point of view, it can be defined as the mean of a large number of independent observations of a random variable, namely the shots. Besides being a good representative of the score, it is also a good predictor of future team performance \cite{cardoso}. Many studies have trained machine learning models on predictors such as shot type, distance to goal, and angle to goal to predict the value of xG \cite{rathke, tippana, pardo, herbinet, wheatcroft, bransen, sarkar_and_kamath, Eggels}. The data used to train xG models are usually highly imbalanced. This causes an important problem seen in some of these papers: the poor prediction performance of the models on the minority class \cite{Anzer_and_Bauer}. 
In this paper, we aim to propose an xG model that is accurate on both the majority and minority classes. One of the practical applications of xG models, and the main focus of this paper, is performance evaluation \cite{kharrat, spearman}. Brechot and Flepp \cite{brechot} proposed using xG models for performance evaluation instead of match outcomes, which can easily be influenced by randomness in short-term results. They introduced a chart built upon the concept of xG by plotting the teams' ranking in the league table against their ranking based on xG. Moreover, they proposed some useful metrics calculated from xG, such as offensive and defensive ratios. Fairchild et al. \cite{fairchild} focused on ways of evaluating an xG model that go beyond accuracy, taking a player- and team-evaluation perspective on offensive and defensive efficiency by comparing the xG metric with the actual goals. They created an xG model for Major League Soccer in the USA and Canada. However, these papers consider only the output of the xG model. Thanks to XAI tools, it is possible to explain a black-box machine learning model's behavior at the local and global levels. In this way, we can gather more information from the model: not only its predictions but also its behavior. One of the most commonly used tools at the local level is the ceteris-paribus (CP) profile, which shows how the model prediction would change with the value of a feature for a single observation \cite{ema}. That it explains only one observation is a limitation, but by aggregating these profiles, it is possible to explain more than one observation at the same time. In fact, the partial dependence profile (PDP) is used to explain the relationship between a feature and the response \cite{friedman}. However, the PDP is the estimate of the mean of the CP profiles over all observations in a dataset, not just some of the observations. 
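The distinction between a single CP profile and its aggregation can be made concrete with a short sketch (an illustrative Python re-implementation; the analyses in this paper rely on the R package DALEX, and the function names below are our own):

```python
def cp_profile(predict, x_row, feature_idx, grid):
    """Ceteris-paribus (CP) profile: prediction for a single observation
    as one feature is varied over a grid, all other features held fixed."""
    profile = []
    for z in grid:
        row = list(x_row)
        row[feature_idx] = z
        profile.append(predict(row))
    return profile

def aggregated_profile(predict, X, feature_idx, grid):
    """Point-wise mean of the CP profiles of the rows in X.
    X = the whole dataset gives the PDP; X = a subset gives an AP."""
    profiles = [cp_profile(predict, x, feature_idx, grid) for x in X]
    k = len(profiles)
    return [sum(p[i] for p in profiles) / k for i in range(len(grid))]
```

Passing all observations yields the PDP; passing only a selected subset (e.g., the shots of one team) yields an aggregated profile.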
In football, offensive performance can be measured through the shots taken by a team or player. In this paper, we introduce a practical application of an XAI tool based on the aggregation of CP profiles, which are otherwise used for local-level explanation of model behavior. With this approach, we can evaluate a player's or team's performance and answer what-if questions about it. The main contributions of this paper are: (1) proposing the most accurate xG model, in terms of both majority and minority classes, trained on data consisting of 315,430 shots from seven seasons between 2014-15 and 2020-21 of the top-five European football leagues, and (2) introducing a novel team/player performance evaluation approach, a practical application of XAI tools in football based on the aggregation of CP profiles. We believe that the approaches given in this paper can be generalized to other branches of sport. The remainder of this paper is structured as follows: Sec. II introduces the mathematical background of xG models and the xG model training. The performance of the trained xG models is investigated in Sec. III. Lastly, in Sec. IV, we introduce how the aggregated profiles are created and used to evaluate the performance of a player or team. Moreover, we demonstrate a practical application of the aggregated profiles based on the xG model at the player and team levels. \section{Expected goal models} Consider a $d$-dimensional feature space $\mathbf{X} \subseteq \mathbb{R}^d$ and a label space $Y \in \{0, 1\}$ for the response variable. The dataset is denoted by $D = \{ (\mathbf{x}_i, y_i) \}^n_{i=1}$, where each sample $(\mathbf{x}_i, y_i)$, consisting of an instance $\mathbf{x}_i \in \mathbf{X}$ and a label $y_i \in Y$, is independently sampled from the joint distribution with density $p(\mathbf{x}, y)$. 
The goal of a binary classifier is to train an optimal mapping function \begin{equation} f: \mathbf{X} \rightarrow Y \end{equation} \noindent by minimizing the loss function $L(f) = P[Y \neq f(\mathbf{X})]$. The xG model is a special case of the supervised classification task, with a binary outcome indicating whether a shot is a goal or not. Here, $Y$ is the target variable showing whether the shot is a goal, and $\textbf{X}$ contains the features used to predict the value of $Y$. The most commonly used features are distance to goal, angle to goal, shot type, last action, etc. The calculation of the xG value from the xG model can easily be algorithmized: \begin{enumerate} \item the individual scoring probabilities of the shots are calculated, \item these probabilities are summed for a player or a team to derive the cumulative xG value \cite{brechot}. \end{enumerate} \noindent For example, a team that had three shots in a match with probabilities of 0.50, 0.20, and 0.05 has generated chances worth 0.75 xG. The value can be calculated not only for a match but also for other time periods, such as one or more seasons. This opens up many practical applications.\\ \subsection{Description of the Data} An issue that needs to be discussed before training an xG model is the characteristics of the data used to train it. From the football enthusiasts' point of view, it is expected that the style of play changes over time and varies between leagues. The question of how to determine whether this is the case is answered by Robberechts and Davis \cite{Rob_and_Davis}. 
They conducted an extensive experimental study to investigate frequently asked data-related questions that may affect the performance of an xG model, such as ``How much data is needed to train an accurate xG model?'', ``Are xG models league-specific?'', and ``Does data go out of date?''. Their results show that five seasons of data are needed to train a complex xG model, that the data do not go out of date, and that using league-specific xG models does not increase accuracy significantly. We determined our model development strategy in this paper considering these findings. We focus on event data for 315,430 shots (containing 33,656 goals, $\sim 10.66\%$ of total shots) from 12,655 matches in the 7 seasons between 2014-15 and 2020-21 of the top-five European football leagues: Serie A, Bundesliga, La Liga, English Premier League, and Ligue 1. The dataset was collected from Understat\footnote{\href{https://understat.com}{https://understat.com}} by using the R package worldfootballR \cite{worldfootballR}; we excluded 1,012 shots resulting in own goals because their pattern is unrelated to the concept of the model. The package provides useful functions to gather and handle shot data by match, season, and league from various data sources such as FBref\footnote{\href{https://fbref.com/en/}{https://fbref.com/en/}}, Transfermarkt\footnote{\href{https://www.transfermarkt.com}{https://www.transfermarkt.com}}, and Fotmob\footnote{\href{https://www.fotmob.com}{https://www.fotmob.com}}. Detailed information about the features used in the model, such as type and description, is given in Table \ref{tab:features}. 
\begin{table}[h] \caption{Details of the variables used to train our xG model} \begin{center} \begin{tabular}{p{3cm} p{2cm} p{9cm}}\toprule \textbf{Features} & \textbf{Type} & \textbf{Description} \\\toprule \texttt{status} & categorical & whether the shot resulted in a goal (0: no goal, 1: goal)\\ \texttt{minute} & continuous & minute of shot between 1 and 90 + possible extra time \\ \texttt{home and away} & categorical & status of the shooting team (home or away)\\ \texttt{situation} & categorical & situation at the time of the event (Direct freekick, From corner, Open play, Penalty, Set play) \\ \texttt{shot type} & categorical & type based on the limb used by the player to shoot (Head, Left foot, Right foot, Other part of the body) \\ \texttt{last action} & categorical & last action before the shot (Pass, Cross, Rebound, Head Pass, and 35 more levels) \\ \texttt{distance to goal} & continuous & distance from where the shot was taken to the goal line ($[0.295, 84.892]$, in meters)\\ \texttt{angle to goal} & continuous & angle of the shot to the goal line ($[0.10^{\circ}, 90^{\circ}]$) \\\bottomrule \end{tabular} \label{tab:features} \end{center} \end{table}\vspace{0.3cm} The summary statistics of the shots and goals, such as the number ($\#$) of matches, shots, goals, the mean ($\mu$) of shots and goals per match, and the conversion percent ($\%$) of a shot to goal, per league over seven seasons are given in Table \ref{tab:summary}. 
\begin{table}[h] \centering \caption{The summary statistics of the shots and goals, such as the number ($\#$) of matches, shots, goals, the mean ($\mu$) of shots and goals per match, and the conversion percent ($\%$) of a shot to goal per league over seven seasons} \label{tab:summary} \begin{tabular}{lcccccc}\toprule League &$\#$Match &$\#$Shot &$\mu_{Shot}$& $\#$Goal& $\mu_{Goal}$& $\%$ \\\toprule Bundesliga & 2,141 & 55,129 & 25.7 &6,161 & 2.88 & 11.2 \\ EPL & 2,650 & 66,605 & 25.1 &6,951 & 2.62 & 10.4 \\ La Liga & 2,648 & 62,028 & 23.4 &6,854 & 2.59 & 11.0 \\ Ligue 1 & 2,557 & 61,053 & 23.9 &6,438 & 2.52 & 10.5 \\ Serie A & 2,659 & 70,615 & 26.6 &7,252 & 2.73 & 10.3\\\midrule Mean & 2,531 & 63,086 & 24.9 &6,371 & 2.67 & 10.7\\\midrule Total & 12,655 & 315,430& - &33,656 & - & - \\\bottomrule \end{tabular} \end{table}\vspace{0.3cm} According to the summary statistics, the overall conversion rate of a shot into a goal is 0.107; the Bundesliga has the highest rate (0.112), while Serie A has the lowest (0.103). An interesting statistic related to Serie A is that its conversion rate is the lowest even though it is the league with the highest number of shots per match. \begin{figure}[h] \centering \caption{The distribution of \texttt{angle to goal} and \texttt{distance to goal} of shots regarding goal status in the last seven seasons of top-five European football leagues} \vspace{3mm} \label{fig:shot_distribution} \includegraphics[scale = 0.32]{shot_dist.png} \end{figure} As expected, the probability of a goal decreases as the angle to goal becomes narrower and as the distance to goal increases. Fig \ref{fig:shot_distribution} shows that: (1) the distribution of the distance to goal of shots seems similar for each league, (2) most shots are taken from within 0-25 meters, and (3) the optimal angle to goal is about $30^{\circ}$. 
The distribution of angle to goal and distance to goal of shots regarding goal status in the last seven seasons of the leagues seems similar. It is observed that the distributions of the two most important features affecting the probability of goal between leagues are similar.\\ \subsection{Pre-processing of the Data} The pre-processing steps are necessary before modeling such as the transformation of some features. The location of shots is given in the coordinates system in the dataset as $L_i \in [0, 1]$ and $W_i \in [0, 1]$ as in Fig \ref{fig:football_pitch}. We must calculate the distance and angle to goal of the shots which are the two most important features in xG models based on the coordinate values because the coordinates are not meaningful in the interpretation of the model. Before calculating these features, we standardized a football pitch is $L = 105$m $\times$ $W = 68$m in size, however some pitches may have different dimensions in reality. The size of the pitch is that the length should be between 90 and 120 meters and the width should be between 45 and 90 meters are limited by the rules of The International Football Association Board\footnote{\href{https://digitalhub.fifa.com/m/5371a6dcc42fbb44/original/d6g1medsi8jrrd3e4imp-pdf.pdf}{https://digitalhub.fifa.com/m/5371a6dcc42fbb44/original/d6g1medsi8jrrd3e4imp-pdf.pdf}}. \begin{figure}[h] \centering \caption{The standard dimension of a football pitch} \label{fig:football_pitch} \includegraphics[trim={0 18cm 0 13cm}, clip, scale = 0.2]{football_pitch.png} \end{figure} \noindent The following transformation are used to calculate the distance ($X^{DTG}$) and angle to goal ($X^{ATG}$) features: \begin{equation} X^{DTG}_i = \sqrt{[105 - (L_i \times 105)] ^ 2 + [32.5 - (W_i \times 68) ^ 2]} \end{equation} \noindent where $L$ and $W$ are the coordinates of a shot. 
\begin{equation} X^{ATG}_i = \left|\arctan\left(\frac{a_i}{b_i}\right) \times \frac{180}{\pi}\right| \end{equation} \noindent where $a_i = 7.32 \times [105 - (L_i \times 105)]$, $b_i = [105 - (L_i \times 105)]^2 + [32.5 - (W_i \times 68)] ^ 2 - (7.32 / 2) ^ 2$, and 7.32 is the width of the goal in meters. We used the $X^{DTG}$ and $X^{ATG}$ features in model training instead of the original coordinates $L_i$ and $W_i$ given in the raw data.\\ \subsection{Model Training} We use the forester \cite{forester} AutoML tool to train various tree-based classification models from the XGBoost \cite{xgboost}, randomForest \cite{breiman}, LightGBM \cite{lightgbm}, and CatBoost \cite{catboost} libraries. These models do not require pre-processing steps such as missing-data imputation, encoding, or transformation, and they show quite good performance in the presence of outliers in the training data. We use a train-test split (80-20) to train and validate the models. Another advantage of forester is that it provides an easy connection to the DALEX \cite{dalex} model explanation and exploration ecosystem.\\ \subsection{Data balancing} \label{sec:balancing} Imbalance is the problem that arises in a classification task when one of the classes of the target variable is rare in the sample. In this case, the model learns more from the majority class, which leads to poor classification performance on the minority class. The problem of imbalance between the classes of the target feature is studied in a separate field of machine learning called imbalanced learning. In imbalanced learning, there are three main strategies to train the models \cite{guo}: (1) balancing the dataset by using over- or under-sampling methods, (2) using cost-sensitive learners, and (3) using ensemble learning models. The target feature we use in the model is imbalanced (90$\%$-10$\%$); to handle this problem, we use a balancing strategy, namely the random over-sampling method provided by the ROSE package in R \cite{rose}. 
ROSE implements a smoothed bootstrap-based technique proposed by Menardi and Torelli \cite{menardi}. \\ \section{Exploration and explanation of the proposed xG model} \vspace{3mm} In this section, the performance of the trained xG models is investigated in terms of several metrics under different sampling strategies. Then, the best-performing xG model is compared with the alternatives in the literature. In the last part, the aggregated profiles used to analyze the behavior of the proposed xG model are introduced for evaluating performance at the player and team levels, and their practical applications on the xG model are demonstrated.\\ \subsection{Model Performance} For model-level exploration, the first step usually concerns model performance. Different measures may be used, such as precision, recall, F1, accuracy, and AUC. However, these measures do not adequately reflect the performance of a classification model in the case of imbalanced data. The Matthews correlation coefficient (MCC), Brier score, log-loss, and balanced accuracy are used to measure the classification ability on both classes in this task. Unfortunately, only a few papers in the literature on xG models have used measures appropriate for this problem. Considering this situation, we report the results in terms of both groups of measures. During this study, we restricted the analysis to a comparison of model performance between train and test data. 
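For reference, the two less familiar of these imbalance-aware measures can be computed directly from the entries of a confusion matrix; a minimal stdlib sketch (the function names are our own):

```python
import math

def balanced_accuracy(tp, fp, tn, fn):
    """Mean of the recall on the positive class (TPR) and on the
    negative class (TNR); insensitive to class imbalance."""
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient in [-1, 1]; near 0 for a
    classifier no better than chance, even on imbalanced data."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A model that labels everything as the majority class attains high plain accuracy on a 90-10 split but a balanced accuracy of only 0.5 and an MCC of 0, which is why these measures are reported alongside the standard ones.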
The performance measures of the random forest, catboost, lightgbm, and xgboost models trained on the train, over-sampled train, and under-sampled train set are calculated and given in Table \ref{tab:performance}.\\ \begin{table}[h] \centering \small \caption{Performance of trained xG models} \label{tab:performance} \begin{tabular}{p{1.6cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1.2cm}}\toprule Model & Sampling & Recall & Precision & F1 & Accuracy & AUC & MCC & Brier Score & Log-loss & Balanced Accuracy \\\toprule \multirow{3}{*}{random forest} & over & \textbf{0.955} & \textbf{0.921} & \textbf{0.938} & \textbf{0.937} & \textbf{0.985} & \textbf{0.874} & \textbf{0.073} & 0.275 & \textbf{0.937} \\ & under & 0.860 & 0.882 & 0.871 & 0.872 & 0.954 & 0.745 & 0.106 & 0.357 & 0.872 \\ & original & 0.297 & 0.891 & 0.446 & 0.921 & 0.975 & 0.488 & 0.051 & \textbf{0.175} & 0.646 \\\midrule \multirow{3}{*}{catboost} & over & 0.739 & 0.756 & 0.747 & 0.751 & 0.835 & 0.501 & 0.167 & 0.501 & 0.750\\ & under & 0.722 & 0.753 & 0.737 & 0.741 & 0.823 & 0.483 & 0.171 & 0.512 & 0.513 \\ & original & 0.187 & 0.718 & 0.296 & 0.905 & 0.820 & 0.334 & 0.075 & 0.264 & 0.589 \\\midrule \multirow{3}{*}{xgboost} & over & 0.710 & 0.751 & 0.730 & 0.737 & 0.816 & 0.476 & 0.175 & 0.523 & 0.737\\ & under & 0.717 & 0.751 & 0.734 & 0.738 & 0.818 & 0.477 & 0.174 & 0.521 & 0.739 \\ & original & 0.175 & 0.709 & 0.281 & 0.904 & 0.816 & 0.321 & 0.076 & 0.266 & 0.583 \\\midrule \multirow{3}{*}{lightgbm} & over & 0.713 & 0.748 & 0.730 & 0.736 & 0.814 & 0.473 & 0.176 & 0.526 & 0.736\\ & under & 0.714 & 0.748 & 0.730 & 0.736 & 0.814 & 0.471 & 0.176 & 0.525 & 0.736 \\ & original & 0.179 & 0.689 & 0.284 & 0.904 & 0.813 & 0.318 & 0.077 & 0.267 & 0.585 \\\bottomrule \multicolumn{11}{l}{*The best value is given in bold for each metric.} \end{tabular} \end{table}\vspace{0.3cm} \subsection{Comparison of Model Performance} We compared the performance of our proposed models with the models in the 
literature \cite{Eggels}, \cite{pardo}, \cite{tippana}, \cite{Anzer_and_Bauer}, \cite{haaren}, \cite{umami}, \cite{fernandez} in terms of precision, recall, accuracy, F1, AUC, log-loss, Brier score, and mean absolute error (MAE) in Table \ref{tab:comparison}. We decided to use these measures because the authors of these papers reported the performance of the models. The reason for the empty cells in the table is that these values for the relevant models have not been reported in these papers. \begin{table}[h] \small \caption{Performance Comparison of the proposed xG model with the models in the literature} \begin{center} \begin{tabular}{p{3.5cm}p{2.7cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.9cm}p{0.6cm}}\toprule Paper & Model & Precision & Recall &F1 & AUC & Log-loss & Brier score & MAE \\\toprule \multirow{4}{*}{Eggels et al. (2016)} & random forest & 0.785 & 0.822 & 0.800 & 0.814 & - & - & - \\ & decision tree & 0.698 & 0.678 & 0.676 & 0.677 & - & - & - \\ & logistic regression & 0.715 & 0.650 & 0.673 & 0.697 & - & - & - \\ & ada-boost & 0.624 & 0.773 & 0.688 & 0.670 & - & - & -\\\midrule \multirow{3}{*}{Pardo (2020)} & logistic regression & - & - & - & - & 0.261 & - & - \\ & xgboost & - & - & - & - & 0.257 & - & - \\ & neural network & - & - & - & - & 0.260 & - & - \\\midrule Tippana (2020) & Poisson regression & - & - & - & - & - & - & 6.5 \\\midrule \multirow{4}{*}{Anzer and Bauer (2021)} & gbm & 0.646& 0.181 & - & 0.822 & - & - & - \\ & logistic regression & 0.611 & 0.108 & - & 0.807 & - & - & -\\ & ada-boost & 0.548 & 0.201 & - & 0.816 & - & - & - \\ & random forest & 0.611 & 0.163 & - & 0.794 & - & - & - \\\midrule Haaren (2021) & boosting machine & - & - & - & 0.793 & - & 0.082 & - \\\midrule Umami et al. (2021) & logistic regression & - & \textbf{0.967} & - & - & - & - & - \\\midrule Fernandez et al. 
(2021) & xgboost & - & - & - & - & \textbf{0.254} & - & -\\\midrule \multirow{4}{*}{Our models} & random forest & \textbf{0.921} & 0.955 & \textbf{0.938} & \textbf{0.985} & 0.275 & \textbf{0.073} & \textbf{2.0} \\ & catboost & 0.756 & 0.739 & 0.747 & 0.835 & 0.501 & 0.167 & 3.0\\ & xgboost & 0.751 & 0.710 & 0.730 & 0.816 & 0.523 & 0.175 & 3.1\\ & lightgbm & 0.748 & 0.713 & 0.730 & 0.814 & 0.526 & 0.176 & 3.1 \\\bottomrule \multicolumn{9}{l}{*The best value is given in bold for each metric.} \end{tabular} \label{tab:comparison} \end{center} \end{table}\vspace{0.3cm} Eggels et al. \cite{Eggels}, Pardo \cite{pardo}, and Anzer and Bauer \cite{Anzer_and_Bauer} trained several xG models and reported their performances; the random forest, xgboost, and gbm models, respectively, outperform the others in these papers. Our proposed random forest model outperforms all of them in terms of precision, F1, AUC, Brier score, and MAE. The model proposed by Fernandez et al. \cite{fernandez} has a lower log-loss value than our model; however, no information about its performance in terms of recall and precision is reported, so it is difficult to be certain of its overall performance. The same holds for the model proposed by Umami et al. \cite{umami}. \\ \subsection{Model Behavior: The Aggregated Profiles} XAI tools are classified into two main groups: local-level and global-level explanations. Local-level explanations are used to explain the behavior of a black-box model for a single observation, while global-level explanations cover an entire dataset. However, our need is to explain the model for a group of observations, i.e., for a player or team. That is why we introduce the aggregated profiles (AP), which can be used for a group of observations. The idea behind the AP is the aggregation of CP profiles, which show how a model's prediction would change with the value of a feature. 
In other words, the CP profile is a function that describes the dependence of the conditional expected value of the response on the value $z$ of the $j^{th}$ feature \cite{ema}. The AP is then simply the average of the CP profiles under consideration. The value of the AP for model $f(.)$ and feature $X_j$ at $z$ is defined as follows: \begin{equation} g_{AP}^j (z) = E_\textbf{X}^{-j} [f(\textbf{X}^{j|z})] \end{equation} \noindent where $g_{AP}$ is the expected value of the model predictions over the marginal distribution of $\textbf{X}^{j|z}$ when $X_j$ is fixed at $z$. This expectation can be estimated by the mean of the CP profiles for $X_j$: \begin{equation} \hat{g}^j_{AP} (z) = \frac{1}{k} \sum_{i = 1}^{k} f(\textbf{x}^{ij|z}) \end{equation} \noindent where $k$ is the number of profiles that are aggregated. The difference between the AP and the PDP is the set of aggregated profiles: the PDP aggregates the profiles of all observations in the dataset, while the AP aggregates only a selected subset of profiles. The innovative part of this paper is to propose a performance evaluation by aggregating the CP profiles of the shots (i.e., observations) taken by a team or player during the period of interest. This evaluation amounts to post-game analysis if the period is one or more games; it can also be used after the first half of a match to decide the second-half strategy. In this way, what-if questions can be answered at the team or player level rather than for individual shots.\\ \subsection{Practical Applications of the AP in Football} In this part, we demonstrate practical applications of the aggregated profiles, constructed on the proposed xG model, at the team and player levels in football. Firstly, consider the match Schalke 04 vs. Bayern Munich, played in the Bundesliga on Jan 24, 2021. 
The end-of-match statistics such as number of goals ($\#$Goal), expected goal (xG), number of shots ($\#$Shot), mean angle to goal ($\mu_{ATG}$), and mean distance to goal ($\mu_{DTG}$) for each team over the match are given in Table \ref{tab:schalke}. \begin{table}[h] \centering \caption{The End-of-match statistics of Schalke 04 vs. Bayern Munich in the match is played on Jan 24, 2021} \label{tab:schalke} \begin{tabular}{lccccc}\toprule Team & $\#$Goal & xG & $\#$Shot & $\mu_{ATG}$ & $\mu_{DTG}$ \\\toprule Schalke 04 & 0 & 2.58 & 13 & 25.18 & 18.03 \\ Bayern Munich & 4 & 9.92 & 31 & 27.56 & 16.99 \\\bottomrule \end{tabular} \end{table}\vspace{0.3cm} According to the match statistics, Bayern Munich took 31 shots while Schalke 04 took 13 shots in this match, 4 of them being goals and Bayern won the game 0-4. The xG values of the teams show the created goal chances over the shots. It is expected that the final score of the match is 3-10 in terms of expected goal. However, Schalke 04 could not find any goal while Bayern Munich found four goals. The offensive efficiency (actual goals - expected goal) of both teams are not good, because they only found four goals considering 12.5 xG. The observed mean angle to goal is about $25^{\circ}$ for Schalke 04 and $27.56^{\circ}$ for Bayern Munich, and the observed mean distance to goal is 18 meters for Schalke 04 and 17 meters for Bayern Munich. The AP of the Schalke 04 and Bayern Munich for angle and distance to goal are given in Fig \ref{fig:app}. In the figure of AP, X-axis represents the value of the interested feature, and y-axis represents the average prediction of the xG for per-shot. 
\begin{figure}[h] \centering \caption{The aggregated xG profiles of Schalke 04 and Bayern Munich for angle and distance to goal in the match played on Jan 24, 2021} \label{fig:app} \vspace{3mm} \centering \includegraphics[trim = 0cm 0cm 0cm 4cm, clip = true, scale = 0.1, width=0.55\textwidth]{atg.png} \centering \includegraphics[trim = 0cm 0cm 0cm 6cm, clip = true, scale = 0.1, width=0.55\textwidth]{dtg.png} \end{figure} \noindent As expected, shots taken at a steeper angle and closer to the goal have higher xG values. The AP for distance to goal shows that the average xG is roughly constant beyond 30 meters. Comparing the average distances to goal, Schalke 04 shot from farther out than Bayern, which may be one reason they did not score. They could have increased the average xG per shot by around 40 percent by reducing the average distance to goal from 18 to 15 meters. Moreover, if Schalke had taken their shots at an average angle of $35^{\circ}$ instead of $25^{\circ}$, the average xG per shot would have increased by $20\%$. Such potential changes can be considered to improve the team's performance in the next match(es). Thus, the AP makes it possible to evaluate team performance after a match and to determine how a team could increase its xG through feasible in-game adjustments. Secondly, consider the performances of Burak Yilmaz (striker of Lille OSC, Ligue 1), Lionel Messi (midfielder of FC Barcelona, La Liga), and Robert Lewandowski (striker of Bayern Munich, Bundesliga) in the 2020-21 season. According to the end-of-season statistics given in Table~\ref{tab:lewa}, they took 66, 195, and 132 shots, scoring 16, 30, and 40 goals, respectively.
Robert Lewandowski converts shots into goals at the highest rate ($30\%$), followed by Burak Yilmaz ($25\%$) and Lionel Messi ($15\%$). This conversion rate is not only a matter of skill; it may also be related to how the players are defended and to the average angle and distance from which they shoot. On average, Robert Lewandowski shot from a shorter distance and a steeper angle, while the other two players shot from relatively longer distances and narrower angles. Roughly speaking, the players' conversion rates, and likewise their created xG values, are correlated with distance and angle to goal. \begin{table}[h] \centering \caption{The end-of-season statistics of Burak Yilmaz (BY), Lionel Messi (LM), and Robert Lewandowski (RL) in the 2020-21 season} \label{tab:lewa} \begin{tabular}{lcccccc}\toprule Player &$\#$Game& $\#$Goal & xG & $\#$Shots & $\mu_{ATG}$ & $\mu_{DTG}$ \\\toprule BY &24 & 16 & 24.48 & 66 & 21.72 & 19.67 \\ LM &35 & 30 & 69.83 & 195 & 21.33 & 19.39 \\ RL &28 & 40 & 62.52 & 132 & 33.97 & 13.07 \\ \bottomrule \end{tabular} \end{table}\vspace{0.3cm} \noindent The AP of Burak Yilmaz, Lionel Messi, and Robert Lewandowski for angle and distance to goal in the 2020-21 season are given in Fig \ref{fig:lewa}. \begin{figure}[h] \centering \caption{The aggregated profiles of Burak Yilmaz, Lionel Messi, and Robert Lewandowski for angle and distance to goal in the 2020-21 season} \label{fig:lewa} \vspace{3mm} \centering \includegraphics[trim = 0cm 0cm 0cm 4cm, clip = true, scale = 0.1, width=0.55\textwidth]{player_atg.png} \centering \includegraphics[trim = 0cm 0cm 0cm 6cm, clip = true, scale = 0.1, width=0.55\textwidth]{player_dtg.png} \end{figure} In terms of distance to goal, the average xG profiles are similar for all three players; only Robert Lewandowski's average is slightly higher beyond 10 meters.
This means that if the players took their shots from 15 meters instead of the observed mean distances in Table \ref{tab:lewa}, their average expected goal per shot could increase by about $20\%$. When the AP is examined for angle to goal, the players' average xG values are about the same up to $25^{\circ}$, after which Robert Lewandowski's AP starts to increase. If the players took their shots at $50^{\circ}$, Robert Lewandowski's average xG, the highest among the three, would be about 0.5. At the player level as well as at the team level, the AP provides practical information for performance evaluation and planning, and it is also very useful for comparing players who play in similar positions.\\ \section{Conclusion} Papers to date evaluate a team's or player's performance by considering only the output of the xG model. Comparing actual and expected goals, they derive statistics based on this difference to evaluate the defensive and offensive performance of a team or a player. In contrast, we use the xG model's behavior to examine the relationship between the features and the response, i.e., the xG value. In this way, we can suggest how a team's performance could be improved by adapting strategies based on the features that affect the xG value, such as distance to goal, angle to goal, and others. To do this, we first proposed an accurate xG model: a random forest trained on data consisting of seven seasons from the top-five European leagues. This model predicts both possible outcomes, goal and no goal, better than the alternatives in the literature. To obtain this model, we balanced the data using random over-sampling to address the class imbalance problem that is often ignored in similar papers. As a result, the proposed model learned well from both classes.
Interestingly, using PDP curves we detected changes in the behavior of the model trained on the over-sampled data. We include a detailed discussion of this problem in the Discussion section. In the practical application part, we evaluated performance at the team and player levels using the proposed xG model. At the team level, the AP of the model shows the relationship between the features of interest and the average of the xG values predicted over the observations. The AP is a practical tool for seeing how changes in the value of a feature of interest affect the xG value, and also for comparing players who play in similar positions. For example, it can be seen from Fig \ref{fig:lewa} that the average xG of Robert Lewandowski's shots may be higher than that of the other players if he shoots from a steeper angle. As seen, detailed information about performance evaluation can be obtained through the practical use of the AP, one of the XAI tools, on xG models. \\ \section{Discussion} Domain-specific applications of XAI tools enable key insights to be extracted from a black-box model. In this context, this paper focuses on the practical application of the AP for explaining more than one observation, rather than a single observation or the entire dataset. Such tools may be referred to as \textit{semi-global explainers} for easier understanding in the XAI domain. The examples discussed show that the AP can be used to extract actionable insights for the performance analysis of a team or a player from xG models, which have been used frequently in football in recent years. In addition, comparisons of similar players and teams can be made in terms of the features of interest. Since the introduced methods can be generalized to other sports that use xG models, such as ice hockey, their impact will not be limited to football.
Another issue we want to discuss in this paper is the effect of balancing methods on the model's behavior. Balancing methods are a very commonly used solution for imbalanced datasets in binary classification tasks, improving the prediction performance of ML models on both classes. However, how balancing the class observations affects model behavior has not yet been discussed. The only related paper in the literature is Patil et al. \cite{patil}, who discussed whether the model remains reliable after balancing. They verified the reliability of the models and the over-sampling method with the help of feature importance, one of the XAI techniques. Their results demonstrate that higher accuracy is obtained on the over-sampled dataset while the over-sampling does not alter the feature correlations of the original dataset. However, this is not sufficient to detect a change in model behavior, because it only examined the change in the order of feature importances. The point we want to raise concerns model behavior in terms of PDP values, because PDPs provide more detailed information than feature importance for detecting changes in model behavior. Thus, we compared the behavior of the models trained on the original, over-sampled, and under-sampled versions of our data using PDP curves for selected features in Fig \ref{fig:imbalance}.
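As a concrete illustration of the random over-sampling step discussed above, the sketch below balances a toy binary dataset by re-sampling the minority (goal) class with replacement; the data and function name are illustrative assumptions, not the paper's R implementation:

```python
import numpy as np

def random_over_sample(X, y, rng):
    """Balance a binary dataset by re-sampling the minority class
    (e.g., 'goal') with replacement until both classes have equal size."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    minority_idx = np.flatnonzero(y == minority)
    extra = rng.choice(minority_idx, size=deficit, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.array([1] * 10 + [0] * 90)   # 10% "goal" class
X_bal, y_bal = random_over_sample(X, y, rng)
```

Because over-sampling only duplicates minority rows, any behavioral change observed afterwards in the PDP curves is attributable to the altered class proportions seen by the learner.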
\begin{figure}[h] \centering \caption{The behavior comparison of the random forest models trained on original, over-sampled, and under-sampled data in terms of PDP curves} \label{fig:imbalance} \vspace{3mm} \centering \includegraphics[trim = 0cm 0cm 0cm 4cm, clip = true, scale = 0.20, width = 0.55\textwidth]{angle.png} \centering \includegraphics[trim = 0cm 0cm 0cm 6cm, clip = true, scale = 0.20, width = 0.55\textwidth]{distance.png} \centering \includegraphics[trim = 0cm 0cm 0cm 6cm, clip = true, scale = 0.20, width = 0.55\textwidth]{minute.png} \end{figure} The PDP curves for distance to goal are not the same across the models. For the model trained on the original data, the PDP curve increases up to 20 meters, then increases slowly after some fluctuations between 20 and 30 meters. However, for the models trained on the over- and under-sampled data, the fluctuations appear over different ranges of the feature. This is a sign that model behavior changed after the balancing methods were applied. We would like to draw attention to the need for careful consideration of this situation, and to start a discussion on this subject in the literature based on our findings. \section*{Acknowledgement} The work on this paper is financially supported by the NCN Sonata Bis-9 grant 2019/34/E/ST6/00052.\\ \section*{Supplemental Materials} The R codes needed to reproduce the results can be found in the following GitHub repository: \href{https://github.com/mcavs/Explainable_xG_model_paper}{https://github.com/mcavs/Explainable\_xG\_model\_paper}.\\ \bibliographystyle{unsrtnat}
\section{Introduction} The classic buy-or-rent ski-rental problem is the following: given a buy cost $b$ for the ski gear, one needs to decide on each day whether to rent the skis for a cost of $1$ unit or to end the sequence of decisions by buying them for a cost of $b$ units. The challenging part of the problem is that one does not have a priori knowledge of how long the ski season will last. The ski-rental problem has found several practical applications, including TCP acknowledgement, time scheduling, etc. In the basic version of the problem, a strategy achieving a competitive ratio of $2$ is to rent for $b$ days (if the ski season lasts that long) and buy thereafter. A recent line of work considers the same problem when certain expert advice (potentially machine learning predictions) is available about the length of the ski season. In this scenario, it was shown recently that one can do better when the predictions are within reasonable deviations from the truth. While current work in this area (\cite{PSK-18}, \cite{GP-19}) proposes algorithms that require a truthful environment parameter (the buy cost), our main contribution addresses the case where the environment parameter is itself predicted by a set of experts. In this work we introduce a variant of the problem which we term the \emph{sequential ski rental problem}. Here, the ski season proceeds in a sequence of rounds and in each round, the player has to come up with a strategy to buy/rent a \emph{perishable} item every day of that season. The item, if bought, lasts only for that season and becomes unavailable in the following seasons. In addition to not knowing the length of the season, the player also has to make an \emph{early} rent/buy decision \emph{before} the buy cost is revealed. The problem thus has two levels of uncertainty, in the season length and the buy cost, respectively.
At first glance, such a problem might seem hopeless to solve, as there is no input parameter for the algorithm to work with (i.e., neither the buying price nor the length of the ski season). To tackle this, we consider two types of advice as being available to the player: one on the length of the ski season and the other on the buy cost. Both of these could come from machine learning prediction models or from certain noisy inputs. In this novel setting, we develop online learning algorithms which have good performance with respect to the worst-case bounds. Our algorithm relies on a combination of the stochastic Hedge algorithm and the algorithm for the ski-rental problem with expert advice. Our main contributions are as follows \begin{itemize} \item We introduce the Sequential ski rental problem with two sets of expert advice \item We develop novel algorithms with performance guarantees \item We demonstrate the efficacy of the algorithm in experiments. \end{itemize} Our work can be applied to various real-world settings, an example of which is the following. Consider a customer on an online retail site, looking to purchase different products in a sequential fashion. For each item she desires, she needs to decide whether to buy or rent the product. She decides her strategy based on a set of experts who are given a buy cost as input and output a rent-buy strategy. The online retail site has a fixed purchase price for each product. However, the customer will need to pay an additional overhead cost which is unknown a priori. A main source of this overhead cost could be delivery charges (or more abstract costs involving time delays in delivery, etc.). This exact overhead cost is not clear to the customer at the time of deciding her strategy, and she only has access to an estimate of this cost.
The problem then is how well the customer performs when she is given the estimated cost at the time of deciding her strategy, compared to what she would have done had she been given the exact overall cost. Clearly, since the customer can purchase different items, the buy cost varies. \section{Related Work} Our work is closely related to \cite{GP-19} and \cite{PSK-18}. We utilize the concepts of robustness and consistency introduced in \cite{LV-18}, who utilized predictions for the online caching problem. \cite{GP-19} consider the setting in which multiple experts provide a prediction for the ski-rental problem. They obtain tight bounds for their setting, thereby showing the optimal robustness and consistency ratios in the multi-expert setting. However, their algorithm depends on the fact that the error in the buy cost is zero, in contrast to our setting where the buy cost given to the experts may not be true, as they are forced to make an early decision. For the ski-rental problem, \cite{PSK-18} utilize predictions from a single expert to improve on the lower bound of $\frac{e}{e-1}$, shown to be optimal by \cite{KMMO-94}. \cite{PSK-18} design robust and consistent deterministic and randomized algorithms which perform optimally for a zero-error prediction, while at the same time matching the optimal lower bound in the case the prediction has infinite error. As was the case in \cite{GP-19}, their algorithm depends on the buy cost being error-free: even for a small perturbation in the buy cost, the expected algorithmic cost suffered is different. As part of our online learning subroutine, we propose an algorithm which is robust to small perturbations in the environment parameter, the environment parameter being the buy cost for the ski-rental problem.
Recent work on online algorithms with machine-learned predictions extends to numerous applications such as online scheduling with machine-learned advice (\cite{STBS-20}, \cite{rohatgi2020near}, \cite{JPS-20}) or the reserve price revenue optimization problem, which utilizes predictions of the bids by bounding the gap between the expected bid and the revenue in terms of the average loss of the predictor (\cite{MS-17}). Our work tackles the problem of uncertainty by utilizing the online learning model. Specifically, we have two sets of experts where the loss values of one set come from a stochastic distribution. Well-studied models to tackle this uncertainty include robust optimization (\cite{KY-13}), which gives good guarantees for potential realizations of the inputs, and stochastic optimization (\cite{BS-12}, \cite{MGZ-12}, \cite{MNS-12}), where the input comes from a known distribution. Our work utilizes the multiplicative weights update (\cite{LW-89}), also known as the Constant Hedge algorithm, as our learning algorithm for one set of experts. We utilize the Decreasing Hedge algorithm (\cite{ABG-02}) to update the weights of another set of experts whose losses come from a stochastic distribution. Considering this easier setting for learning, (\cite{HR-15}, \cite{NYG-07}, \cite{PGT-14}, \cite{AGA-14}, \cite{WPT-16}, \cite{STPW-14}, \cite{JS-19}) tackle this by designing algorithms that rely on data-dependent tuning of the learning rate or better strategies, and give theoretical results on the regret bounds in these settings.
Something similar has been studied in the area of differential privacy (\cite{DNP-10}), where certain privacy guarantees are shown if one of the loss vectors in the loss sequence changes. Our model essentially boils down to the scenario where the initial sequence of loss vectors is different, but over time they converge to the true loss sequence. \section{Preliminaries} We consider the online learning model for the ski-rental problem. \textbf{Ski-Rental Problem}: In the ski-rental problem, an example of the large class of rent-or-buy problems, a skier wants to ski and needs to decide whether to buy the skis for a cost of $b$ (non-negative integer) units or rent the skis for a cost of 1 unit per day. The length of the ski season $x$ (non-negative integer) is \emph{not known}. Trivially, if the ski season lasted more than $b$ days and the skier knew this beforehand, she would buy on day 1; otherwise she would rent for all days. The minimum cost that can be suffered is $OPT=\min\{b,x\}$. For this problem, the best a deterministic algorithm can do is obtain a competitive ratio of $2$, while \cite{KMMO-94} designed a randomized algorithm which obtains a competitive ratio of $\frac{e}{e-1}$, which is optimal. \textbf{Robustness and Consistency}: We utilize the notions of robustness and consistency, which are defined when online algorithms utilize machine-learned predictions. In online algorithms, the ratio of the algorithmic cost ($ALG$) to the optimal offline cost ($OPT$) is defined as the competitive ratio. While utilizing predictions, such a ratio becomes a function of the accuracy $\eta$ of the predictor. Note that the algorithm has no knowledge about the quality of the predictor. An algorithm is $\alpha$-robust if $\frac{ALG(\eta)}{OPT} \leq \alpha$ for all $\eta$ and is $\beta$-consistent if $\frac{ALG(0)}{OPT} \leq \beta$.
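The deterministic factor-2 guarantee mentioned above can be checked numerically. The sketch below (an illustrative Python snippet, not part of the paper's algorithm) computes $ALG/OPT$ for the strategy that rents for the first $b$ days and then buys, often called the break-even strategy:

```python
def break_even_cost(b, x):
    """Cost of the classic strategy: rent for the first b days,
    buy at the start of day b + 1 if the season is still going."""
    if x <= b:
        return x            # rented every day
    return b + b            # b days of rent, then the buy cost

def opt_cost(b, x):
    """Offline optimum: min(b, x)."""
    return min(b, x)

# Worst-case competitive ratio over a grid of instances
ratios = [break_even_cost(b, x) / opt_cost(b, x)
          for b in range(2, 30) for x in range(1, 100)]
worst = max(ratios)
```

The worst case is attained whenever the season ends just after the purchase ($x = b + 1$), where the algorithm pays $2b$ against an optimum of $b$.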
The goal is to use the predictions in such a way that if the predictions are true, the algorithm performs close to the offline optimal, and even if the prediction is very bad, it performs close to the original online setting without any predictions. \textbf{Hedge Algorithm for the expert advice problem}: In the classic learning-from-expert-advice setting, also known as decision-theoretic online learning (\cite{FS-97}), the learner maintains a set of weights over the experts and updates these weights according to the losses suffered by the experts. These losses could potentially be chosen in an adversarial manner. Specifically, the learner has weights $\bm{\alpha^{t}} = (\alpha^{t}_{i})_{1 \leq i \leq M}$ over $M$ experts at time $t$. The environment chooses a bounded, potentially adversarial, loss vector $\bm{l^{t}}$ over these experts at time $t$. The loss suffered by the learner is $(\bm{\alpha^{t}})^{T}\bm{l^{t}}$. The goal of the learner is to compete against the expert with the minimum cumulative loss, i.e., to minimize the \emph{regret}, which is defined as $$R_{T} = \sum_{t=1}^{T}(\bm{\alpha^{t}})^{T}\bm{l^{t}} - \min_{i \in [M]}\sum_{t=1}^{T}l^{t}_{i}$$ where $T$ denotes the number of instances for which this game is played between the learner and the environment. We say that an algorithm is a no-regret learning algorithm if the regret is sub-linear in $T$. The multiplicative weights algorithm (\cite{LW-89}) updates the weights \emph{optimally} as $$\alpha^{t}_{i} = \frac{e^{-\eta_{t}\sum_{t'=1}^{t'=t-1}l^{t'}_{i}}}{\sum_{i=1}^{M}e^{-\eta_{t}\sum_{t'=1}^{t'=t-1}l^{t'}_{i}}}$$ where $\eta_{t}$ is the learning rate. The learning rate can be constant or depend on $t$. The Decreasing Hedge algorithm (\cite{ABG-02}) has a learning rate $\eta_{t} \propto 1/\sqrt{t}$ while the Constant Hedge algorithm (\cite{LW-89}), given a $T \geq 1$, has a learning rate $\eta_{t} \propto 1/\sqrt{T}$.
The standard regret bound for the Hedge algorithm (e.g., \cite{CZ-10}) is sub-linear in $T$. We look at the learning-from-expert-advice problem through a different lens. At each stage $t$, there exists an environment parameter $\mathbf{e}^{t}$ based on which the experts give their recommendation. The recommendation can be the output of some machine-learned model which each of the $M$ experts has, denoted by $(h_{i}(\mathbf{e}^{t}))_{1 \leq i \leq M}$. A more general view of the problem we tackle in this paper is what happens when the experts have access to some estimate $\hat{\mathbf{e}}^{t} \neq \mathbf{e}^{t}$ such that $\mathbf{E}[\hat{\mathbf{e}}^{t}] = \mathbf{e}^{t}$. The constraint we place on this setting is that the variance of the estimator $\hat{\mathbf{e}}^{t}$ becomes small as $t$ becomes large. \textbf{Stochastic Setting}: When the losses are realizations of some unknown random process, we call this the stochastic setting. Our work considers the standard i.i.d. case where the loss vectors $\bm{l^{1}},\bm{l^{2}},\dots,\bm{l^{t}}$ are i.i.d. In general there need not be independence across experts. We define the sub-optimality gap as $\Delta = \min_{i \neq i^{*}} \mathbf{E}[l^{t}_{i}-l^{t}_{i^{*}}]$ where $i^{*} = argmin_{i}\mathbf{E}[l^{t}_{i}]$. A natural question is whether the Hedge algorithm obtains a better regret guarantee in the nicer stochastic setting. \cite{JS-19} show that the Decreasing Hedge algorithm obtains a better regret bound in terms of the sub-optimality gap $\Delta$, while also showing that for the Constant Hedge algorithm the $\sqrt{T\log M}$ regret bound is the best possible. A part of our setting is inspired by \cite{JS-19}, as we have a set of experts whose losses come from a stochastic distribution.
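The Hedge updates described above can be sketched as follows; the i.i.d. Bernoulli losses and the exact constants in the learning-rate schedules are illustrative assumptions:

```python
import numpy as np

def hedge_weights(cum_losses, eta):
    """Multiplicative-weights distribution over experts given
    cumulative losses: alpha_i proportional to exp(-eta * L_i)."""
    w = np.exp(-eta * (cum_losses - cum_losses.min()))  # shift for stability
    return w / w.sum()

def run_hedge(losses, schedule="decreasing"):
    """losses: (T, M) array of per-round expert losses in [0, 1].
    Returns the final weight vector and the learner's total loss."""
    T, M = losses.shape
    cum = np.zeros(M)
    total = 0.0
    for t in range(1, T + 1):
        eta = (np.sqrt(np.log(M) / t) if schedule == "decreasing"
               else np.sqrt(np.log(M) / T))
        w = hedge_weights(cum, eta)
        total += w @ losses[t - 1]
        cum += losses[t - 1]
    return w, total

rng = np.random.default_rng(1)
# Stochastic setting: i.i.d. Bernoulli losses, expert 0 is best
means = np.array([0.2, 0.5, 0.5, 0.5])
L = (rng.random((500, 4)) < means).astype(float)
w, total = run_hedge(L, schedule="decreasing")
```

In this stochastic instance the weights concentrate rapidly on the best expert, and the realized regret stays far below the worst-case $O(\sqrt{T\log M})$ level.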
\section{Problem Setting} In the ski-rental setting with predictions, the rental cost is 1 unit per day, $b$ is the buy cost, $x$ is the true number of ski days, which is unknown to the algorithm, and $y$ is the predicted number of ski days. We use $\eta = |y-x|$ to denote the prediction error. No assumptions are made on how the length of the ski season is predicted. The optimal strategy in hindsight gives an optimal cost of $OPT = \min \{b,x\}$. \subsection{Sequential Ski Rental Setup} Our learning model has two sets of experts, one set predicting the environment parameters, i.e., the buy cost of the skis. Let there be $m$ such experts. We will call them buy-experts from now on. The other set of experts give advice to the learner on what strategy to follow, based on their prediction of the number of ski days. They utilize the prediction of the buy-experts to decide their strategy. Let these experts be $n$ in number. We will call them ski-experts from now on. We run multiple ski-rental instances over the time horizon $T$. For each $t \in [T]$, we denote the ground-truth buy cost as $b^{t}$ and the ground-truth ski-days as $x^{t}$. The ski-experts make a prediction on $x^{t}$, denoted by $\bm{y}^{t} = (y^{t}_{j})_{1\leq j \leq n}$, which is a vector of non-negative integers, and suggest a strategy to the learner based on the predicted value of the buy-experts, which we denote by $b^{t}_{s}$. The way a ski-expert utilizes its prediction $y^{t}_{j}$ to suggest a strategy is based on a randomized algorithm which, at a high level, suggests buying late if $y^{t}_{j} < b^{t}_{s}$ or buying early if $y^{t}_{j} \geq b^{t}_{s}$. \subsection{Buy Expert Predictions} We assume that the buy costs over rounds are integers in the range $[2,B]$ where $B$ is finite.
In our model we consider that each buy cost prediction comes from a stochastic distribution with mean equal to the ground-truth buy cost and a certain variance which corresponds to the quality of that buy-expert. Let the best buy-expert be $i^{*}$ and her variance be $\gamma_{min}$. We define the sub-optimality gap for buy-experts $i \neq i^{*}$ in terms of the variance as $\Delta_{i} = \gamma_{i} - \gamma_{min}$, where $\gamma_{i}$ corresponds to the variance of the $i^{th}$ buy-expert. Let the vector of these predictions be $\bm{a^{t}} = (a^{t}_{i})_{1 \leq i \leq m}$. In our setting, $$\bm{a}^{t} = b^{t}\vv{1} + \bm{\epsilon^{t}_{b}}$$ where $\bm{\epsilon_{b}^{t}}$ has zero mean and its covariance matrix is diagonal due to the independence across experts. We assume that $\bm{\epsilon^{t}_{b}} \in [-1,1]^{m} $. The algorithm maintains a weight vector $\bm{\alpha}^{t}$ over the buy-experts corresponding to its confidence in each particular expert. We use the Decreasing Hedge algorithm to update these weights, where the loss function is the squared error loss $$\alpha^{t}_{i} = \frac{e^{-\eta_{t}\sum_{t'=1}^{t'=t-1}(a^{t'}_{i}-b^{t'})^{2}}}{\sum_{i=1}^{m}e^{-\eta_{t}\sum_{t'=1}^{t'=t-1}(a^{t'}_{i}-b^{t'})^{2}}}$$ The ski-experts are given the buy cost prediction $b^{t}_{s}$ on which they base their strategy $$b^{t}_{s} = (\bm{a^{t}})^{T}\bm{\alpha^{t}}$$ \subsection{Ski Expert Predictions} The ski-experts suggest a strategy to the learner and suffer some loss for the same. They make a prediction on the number of ski days and then suggest a strategy to the learner utilizing the predicted buy cost $b^{t}_{s}$. While we assume that the buy-experts' prediction distribution is unbiased, we make no assumption on how the ski-expert predictions are obtained. The learner maintains a set of weights $\bm{\beta^{t}}$ over these experts which are updated using the Constant Hedge algorithm.
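A minimal simulation of the buy-expert aggregation described above — Decreasing Hedge weights on squared errors producing the estimate $b^{t}_{s}$ — might look as follows; the expert noise levels, horizon, and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
m, T = 3, 400
sigmas = np.array([0.1, 0.5, 0.9])   # expert noise levels (quality gamma_i)
b_true = rng.integers(2, 20, size=T) # ground-truth buy costs b^t
cum_sq = np.zeros(m)                 # cumulative squared errors
for t in range(1, T + 1):
    eta = np.sqrt(np.log(m) / t)                     # Decreasing Hedge rate
    w = np.exp(-eta * (cum_sq - cum_sq.min()))
    alpha = w / w.sum()                              # weights alpha^t
    # noisy predictions a^t, with noise clipped to [-1, 1] as assumed
    a = b_true[t - 1] + np.clip(sigmas * rng.standard_normal(m), -1, 1)
    b_s = alpha @ a                                  # estimate given to ski-experts
    cum_sq += (a - b_true[t - 1]) ** 2               # squared-error losses
```

Since every prediction lies within $\pm 1$ of $b^{t}$, the convex combination $b^{t}_{s}$ does too, and the weights concentrate on the lowest-variance expert as $t$ grows.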
The loss suffered by the $j^{th}$ ski-expert at time $t$ is denoted by $l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})$. We denote the loss vector suffered by these experts as $\bm{l}^{t}(b^{t},x^{t},b^{t}_{s},\bm{y}^{t})$. \subsection{Regret} Our learning setup is as follows. At each $t \in [T]$ the learner chooses a strategy recommended by a ski-expert by sampling from the distribution $\bm{\beta}^{t}$ over these experts and suffers an expected loss. The ski-experts are in turn basing their strategy on a prediction of the buy cost from the buy-experts. At a time $t$, the loss suffered by the $j^{th}$ ski-expert depends on the ground-truth values and the predictions it receives and is hence denoted by $l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})$. The goal of the learner is to compete against the best expert in the case the experts are given the true environment parameters, that being the buy cost in our case. This leads to the expected regret definition which we wish to minimize $$R_{T} = \sum_{t=1}^{T}(\bm{\beta^{t}})^{T}\bm{l}^{t}(b^{t},x^{t},b^{t}_{s},\bm{y^{t}}) - \min_{j}\sum_{t=1}^{T}l^{t}_{j}(b^{t},x^{t},b^{t},y^{t}_{j})$$ \begin{table}[] \centering \begin{tabular}{c|c} \hline Symbol & Description \\ \hline $x^{t}$ & True number of ski-days at the $t^{th}$ time instant.
\\ \hline $y^{t}_{j}$ & $j^{th}$ expert ski-day prediction at time $t$ \\ \hline $b^{t}$ & True buy cost at time $t$ \\ \hline $a^{t}_{i}$ & $i^{th}$ expert buy cost prediction at time $t$\\ \hline $b^{t}_{s}$ & Weighted sum estimate of the buy cost \end{tabular} \caption{Notation} \label{tab:notation} \end{table} \section{Algorithm} We introduce the subroutine used by the ski-experts to compute a strategy, which we call the \emph{CostRobust Randomized Algorithm}, and propose the \emph{Sequential Ski Rental} algorithm which the learner utilizes when she has access to two sets of experts. \subsection{CostRobust Randomized Algorithm} Below we present an algorithm which obtains robust and consistent results for solving the ski-rental problem. The crucial difference between our algorithm and \cite{PSK-18} is that our algorithm is robust to small variations in the buy cost, i.e., if we obtain a noisy sample of the buy cost, our algorithm suffers the same cost as it would if the true value were given. Our algorithm obtains similar consistency and robustness guarantees in terms of the competitive ratio. \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \SetKwFunction{Floss}{SkiRentStrategy} \SetKwProg{Fn}{Function}{:}{} \Fn{\Floss{$b,y$}}{ \eIf{$y \geq nint(b)$}{ $k \gets \floor*{\lambda b}$ \; $q_{i} \gets (1-\frac{\lambda}{k})^{k-i}.\frac{\lambda}{k(1-(1-\lambda/k)^{k})} \forall 1 \leq i \leq k$ \; $d \sim \bm{q}$ // Sample the buy day based on the distribution above \; return $d$ \; }{ $l \gets \ceil*{ b/\lambda}$ \; $r_{i} \gets (1-\frac{1}{\lambda l})^{l-i}.\frac{1}{l \lambda(1-(1-\frac{1}{\lambda l})^{l})} \forall 1 \leq i \leq l$ \; $d \sim \bm{r}$ // Sample the buy day based on the distribution above \; return $d$ \; } } \caption{CostRobust Randomized Algorithm} \end{algorithm} Let $\lambda \in (1/b,1]$ be a hyper-parameter.
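A direct Python sketch of the pseudocode above; note that Python's built-in \texttt{round} matches $nint(.)$, rounding half-integers to even:

```python
import math
import random

def cost_robust_strategy(b, y, lam, rng=random):
    """Sample a buy day according to the CostRobust randomized
    algorithm: buy early when the predicted season is long
    (y >= nint(b)), late when it is short (y < nint(b))."""
    if y >= round(b):                       # nint(b) for the given cost
        k = math.floor(lam * b)
        base = 1 - lam / k
        q = [base ** (k - i) * lam / (k * (1 - base ** k))
             for i in range(1, k + 1)]      # sums to 1 (geometric series)
        return rng.choices(range(1, k + 1), weights=q)[0]
    l = math.ceil(b / lam)
    base = 1 - 1 / (lam * l)
    r = [base ** (l - i) / (l * lam * (1 - base ** l))
         for i in range(1, l + 1)]          # sums to 1 (geometric series)
    return rng.choices(range(1, l + 1), weights=r)[0]

# Example: long prediction -> buy within the first floor(lam*b) days
day = cost_robust_strategy(b=10, y=20, lam=0.5, rng=random.Random(0))
```

Both weight vectors are normalized geometric distributions, so the sampled day always lies in $\{1,\dots,k\}$ or $\{1,\dots,l\}$, respectively.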
For a chosen $\lambda$ the algorithm samples a buy day from two different probability distributions depending on the prediction and the input buy cost. The algorithm outputs a buy day strategy for the prediction on the number of ski days $y$ (a non-negative integer) and the input it gets on the cost $b$. Note that $nint(.)$ is the nearest integer function where half-integers are always rounded to even numbers. We say that the algorithm is $\epsilon$-robust in terms of the buy cost if $\epsilon$ is the maximum possible value such that for an input non-negative integer $b$, if the buy cost prediction $b_{s}$ lies in the range $(b-\epsilon,b+\epsilon)$, the incurred cost is equal to the cost in the case the true value $b$ were given. Note that $nint(b) = b$ for the true buy cost. \begin{theorem} The CostRobust randomized algorithm is $\epsilon$-robust in terms of the buy cost where $\epsilon$ is $$\epsilon = \min \left (\frac{1}{\lambda} \min (\{\lambda b\},1-\{\lambda b\}), \lambda \min \left (\left \{\frac{b}{\lambda} \right \},1- \left \{\frac{b}{\lambda} \right \} \right ) \right )$$ where $\{x\}$ denotes the fractional part of $x$. \end{theorem} \begin{proof} Consider the case when $y \geq b$. In this case $k = \floor*{\lambda b}$. A predicted buy cost $b_{s}$ is given where $b-\epsilon < b_{s} < b + \epsilon$, and hence the condition we get on $\epsilon$ so that $\floor*{\lambda b} = \floor*{\lambda b_{s}}$ is $$\epsilon = \frac{1}{\lambda} \min (\{\lambda b\},1-\{\lambda b\})$$ Similarly, in the case $y < b$, performing a similar analysis where we require the $l$ values to be equal gives us the condition $$\epsilon = \lambda \min \left (\left \{\frac{b}{\lambda} \right \},1- \left \{\frac{b}{\lambda} \right \} \right ) $$ Hence the result follows. \end{proof} We now show consistency and robustness guarantees, in terms of the competitive ratio, of our proposed algorithm in the case it receives the true prediction of the buy cost.
Our analysis is similar to \cite{PSK-18}, who calculate the expected loss of the algorithm based on the relative values of $y,b,x$ and the distribution defined. \begin{theorem} The CostRobust randomized ski-rental algorithm yields a competitive ratio of at most $\min \{\frac{1+1/\floor*{\lambda b}}{1-e^{-\lambda}},(\frac{\lambda}{1-e^{-\lambda}})(1+\frac{\eta}{OPT})\}$ where $\lambda$ is a hyper-parameter chosen from the set $\lambda \in (1/b,1]$. The CostRobust Randomized algorithm is $\frac{1+1/\floor*{\lambda b}}{1-e^{-\lambda}}$-robust and $(\frac{\lambda}{1-e^{-\lambda}})$-consistent. \end{theorem} \begin{proof} We consider different cases depending on the values of $y,b,x,k$. Note that as $b$ is a non-negative integer, $nint(b) = b$. \begin{itemize} \item $y \geq b$, $x \geq k$ Based on the algorithm $k=\floor*{\lambda b}$. \begin{equation*} \begin{split} \mathbf{E}[ALG] & = \sum_{i=1}^{k}(b+i-1)q_{i} \leq b - \frac{k}{\lambda} + \frac{k}{1-(1-\frac{\lambda}{k})^{k}} \\ & \leq b - \frac{k}{\lambda} + \frac{k}{1-e^{-\lambda}} \leq b + b \left (\frac{\lambda}{1-e^{-\lambda}}-1 \right) \\ & \leq \left (\frac{\lambda}{1-e^{-\lambda}} \right)(OPT+\eta) \end{split} \end{equation*} where the second to last inequality from the fact $b > \frac{k}{\lambda}$ and the last inequality from the fact $y>b$. \item $y \geq b$, $x < k$. For this ordering of the variables $OPT=x$. The algorithm suffers a loss of $ALG = b+i-1$ if it buys at the beginning of day $i \leq x$. 
Thus , \begin{equation*} \begin{split} \mathbf{E}[ALG] &= \sum_{i=1}^{x}(b+i-1)q_{i} + \sum_{i=x+1}^{k}x q_{i} \\ & = \frac{x}{1-(1-\frac{\lambda}{k})^{k}} \left (1+\left (1-\left (1-\frac{\lambda}{k} \right)^{x} \right) \left (1-\frac{\lambda}{k} \right )^{k-x}\frac{(b-\frac{k}{\lambda})}{x} \right ) \\ & \leq \left (\frac{1}{1-e^{-\lambda}} \left (\frac{\lambda b}{\floor*{\lambda b}}\right ) \right)OPT \\ & \leq \left (\frac{1+1/\floor*{\lambda b}}{1-e^{-\lambda}} \right)OPT \end{split} \end{equation*} where the second to last inequality follows from the fact that $(1-\frac{\lambda}{k})^{x} > 1 - \frac{\lambda x }{k}$. To show consistency, we have the inequality \begin{equation*} \begin{split} \mathbf{E}[ALG] & \leq b - \frac{k}{\lambda} + \frac{x}{1-e^{-\lambda}} \leq \frac{\{\lambda b\}}{\lambda} + \frac{x}{1-e^{-\lambda}} \\ & \leq \frac{\{\lambda b\}+x}{1-e^{-\lambda}} \leq \frac{\lambda b}{1-e^{-\lambda}} \\ & \leq \left (\frac{\lambda}{1-e^{-\lambda}} \right )(OPT+\eta) \end{split} \end{equation*} where the third inequality comes from the fact that $\lambda \geq 1-e^{-\lambda}$ for all $\lambda \in [0,1]$ and the fourth inequality comes from the fact that $x < k$. \item $y < b$, $x < l$. Based on the algorithm, $l= \ceil*{\frac{b}{\lambda}}$. The algorithm suffers a loss of $ALG = b+i-1$ if it buys at the beginning of day $i \leq x$. Thus , \begin{equation*} \begin{split} \mathbf{E}[ALG] &= \sum_{i=1}^{x}(b+i-1)r_{i} + \sum_{i=x+1}^{l}x r_{i} \leq (b-\lambda l) + \frac{x}{1-(1-\frac{1}{\lambda l})^{l}} \\ & \leq \frac{x}{1-e^{-\frac{1}{\lambda}}} \leq \left (\frac{1}{1-e^{-1/\lambda}} \right )(OPT+\eta) \\ & \leq \left (\frac{\lambda}{1-e^{-\lambda}} \right )(OPT+\eta) \end{split} \end{equation*} \item $y < b$, $x \geq l$ . Here $OPT=b$. 
The expected cost incurred is \begin{equation*} \begin{split} \mathbf{E}[ALG] & = \sum_{i=1}^{l}(b+i-1)r_{i} \leq b - \lambda l + \frac{l}{1-(1-\frac{1}{\lambda l})^{l}} \\ & \leq b + l \left (\frac{1}{1-e^{-1/\lambda}} - \lambda \right) \leq b + l \left (\frac{\lambda e^{-\lambda}}{1-e^{-\lambda}} \right) \\ & \leq \left (\frac{1+\lambda e^{-\lambda}/b}{1-e^{-\lambda}} \right)OPT < \frac{(1+1/\floor*{\lambda b})}{1-e^{-\lambda}}OPT \end{split} \end{equation*} which shows some sense of robustness. To show consistency, we can write the equations as \begin{equation*} \begin{split} \mathbf{E}[ALG] & \leq \frac{l}{1-e^{-1/\lambda}} = \frac{1}{1-e^{-1/\lambda}}(b+l-b) \\ & \leq \frac{1}{1-e^{-1/\lambda}}(OPT+\eta) \leq \left (\frac{\lambda}{1-e^{-\lambda}} \right)(OPT+\eta) \end{split} \end{equation*} \end{itemize} Hence the result follows. \end{proof} This result provides a trade-off between the consistency and robustness ratios. Setting $\lambda=1$ gives us a guarantee that even if the prediction has a very large error($\eta \to \infty$), our competitive ratio is bounded by $\frac{e}{e-1}(1+1/b)$ which is close to the best case theoretical bound without predictions. However if we are very confident in the prediction, i.e confident that $\eta=0$, then we can set $\lambda$ to be very small and get a guarantee of performing close to the offline optimal. \subsection{Sequential Ski Rental Algorithm} We utilize the CostRobust algorithm as a subroutine for each ski-expert. Each of the $n$ ski experts are running the algorithm for each instance of the ski-rental problem to determine a strategy using the predicted buy cost and its own prediction on the number of ski days. The loss is calculated with respect to the best hindsight strategy and hence is always positive. We normalize the competitive ratio by an additive factor so that if an expert predicts correctly, she obtains 0 loss. 
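This normalized loss can be sketched in a few lines of Python (the function name is ours; $d$ denotes the buy day recommended by the expert's strategy):

```python
def ski_expert_loss(b_true, x_true, d):
    """Normalized competitive-ratio loss for a ski expert whose strategy is
    to buy on day d: (ALG - OPT) / OPT, zero when the strategy matches the
    best hindsight strategy."""
    opt = b_true if x_true >= b_true else x_true   # offline optimum
    alg = (b_true + d - 1) if x_true >= d else x_true
    return (alg - opt) / opt

# buying on day 1 of a long season is optimal, so the loss is zero
assert ski_expert_loss(b_true=10, x_true=40, d=1) == 0.0
# renting through a short season (d past the last ski day) is also optimal
assert ski_expert_loss(b_true=10, x_true=3, d=5) == 0.0
```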
\begin{algorithm} \DontPrintSemicolon \SetAlgoLined \SetKwFunction{Floss}{loss} \SetKwProg{Fn}{Function}{:}{} \Fn{\Floss{$b^{t},x^{t},b^{t}_{s},y^{t}_{j}$}}{ $d \gets $ SkiRentStrategy($b^{t}_{s},y^{t}_{j}$) \eIf{$x^{t} \geq b^{t}$}{ $OPT = b^{t}$ \;}{$OPT = x^{t}$ \;} Based on the strategy suggested by the expert in Algorithm 1, $d$ is the buy-day. \eIf{$x^{t} \geq d$}{ $ALG = b^{t} + d - 1$ \;}{$ALG = x^{t}$ \;} $l^{t}_{j} = \frac{ALG-OPT}{OPT}$ // Loss suffered by the $j^{th}$ ski expert at the $t^{th}$ iteration \; } \caption{Loss calculation for each expert} \end{algorithm} At each instance of the \emph{Sequential Ski Rental Algorithm}, the input to the \emph{CostRobust} subroutine is an estimate $b^{t}_{s} \neq b^{t}$. The statement below shows that even if the ski-days prediction of a ski-expert has very large error, it suffers a finite loss when it uses an estimate of the buy cost. \begin{theorem} The loss suffered by each ski expert is bounded for every round $t \in [T]$ when the buy predictions are coming from bounded random variables. 
\end{theorem} \begin{algorithm} \SetAlgoLined \textbf{Input:} \newline $\lambda$ : Hyperparameter \; \textbf{Initialization:} $\bm{w_{\beta}^{1}} \gets (1,1,\dots,1)$ : Weights corresponding to the ski experts \; $\bm{w_{\alpha}^{1}} \gets (1,1,\dots,1)$ : Weights corresponding to the buy experts \; \For{$t\gets1$ \KwTo $T$}{ input $\bm{a^{t}}$ : Buy expert predictions \; $\bm{\alpha^{t}} = \frac{\bm{w_{\alpha}^{t}}}{\sum_{i}(w_{\alpha}^{t})_{i}}$ : Probability distribution over the buy experts \; $b^{t}_{s} \gets \bm{a^{t}} \cdot \bm{\alpha^{t}}$ : The weighted buy cost prediction to be provided to the ski experts \; input $\bm{y^{t}}$ : Ski-expert predictions \; $\bm{\beta^{t}} \gets \frac{\bm{w_{\beta}^{t}}}{\sum_{j}(w_{\beta}^{t})_{j}}$ : Probability distribution over the ski experts \; $\bm{l^{t}} \gets loss(b^{t},x^{t},b^{t}_{s},\bm{y^{t}})$ : Loss vector whose $j^{th}$ element corresponds to the loss of the $j^{th}$ ski expert \; $\bm{w_{\beta}^{t+1}} \gets \bm{w_{\beta}^{t}} e^{-\epsilon_{s} \bm{l^{t}}}$ : Update the weights according to the Constant Hedge algorithm \; $\bm{w_{\alpha}^{t+1}} \gets \bm{w_{\alpha}^{t}} e^{-\epsilon_{b} (\bm{a^{t}}-b^{t})^{2}}$ : Update the weights according to the Decreasing Hedge algorithm } \caption{Sequential Ski Rental Algorithm} \end{algorithm} We will denote this bound as $B$. We now describe the setting of the online learning algorithm. The buy predictions come from unbiased experts. The assumption we make is that the buy predictions come from a bounded stochastic distribution such that the loss suffered by the buy experts is i.i.d.\ over rounds. The prediction given to the ski experts is a weighted sum of the buy expert predictions. The ski experts utilize this to recommend a strategy and suffer some loss for the same. The buy expert weights are updated using the Decreasing Hedge algorithm while the weights of the ski experts are updated using the Constant Hedge algorithm.
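The two weight updates in the algorithm above are standard multiplicative-weights steps; a toy sketch follows, assuming the usual learning rates $\epsilon_{s} = \sqrt{\log n / T}$ for the Constant Hedge and $\epsilon_{b} = \sqrt{\log m / t}$ for the Decreasing Hedge (the numeric expert losses below are illustrative, not from the paper):

```python
import math

def hedge_step(weights, losses, eta):
    """Multiplicative-weights update: w_i <- w_i * exp(-eta * loss_i)."""
    return [w * math.exp(-eta * l) for w, l in zip(weights, losses)]

def normalize(weights):
    s = sum(weights)
    return [w / s for w in weights]

# two buy experts and three ski experts with toy, fixed losses
w_alpha, w_beta, T = [1.0, 1.0], [1.0, 1.0, 1.0], 100
for t in range(1, T + 1):
    alpha = normalize(w_alpha)
    a = [10.0, 12.0]                                # buy-expert predictions
    b_s = sum(ai * pi for ai, pi in zip(a, alpha))  # weighted buy estimate
    b_true, ski_losses = 10.0, [0.0, 0.5, 1.0]      # toy values
    # Constant Hedge for the ski experts ...
    w_beta = hedge_step(w_beta, ski_losses, eta=math.sqrt(math.log(3) / T))
    # ... and Decreasing Hedge for the buy experts, on squared error
    w_alpha = hedge_step(w_alpha, [(ai - b_true) ** 2 for ai in a],
                         eta=math.sqrt(math.log(2) / t))
alpha = normalize(w_alpha)
assert alpha[0] > 0.99   # the accurate buy expert dominates the mixture
```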
\section{Regret Analysis} In this section, we show our main result - a regret guarantee for the proposed Sequential Ski Rental algorithm. \begin{theorem} \label{Claim4} Let the variance of the best buy expert satisfy $$\gamma_{min} = \frac{\delta \epsilon^{2} }{T c}$$ for some $c \in (1,\infty)$ and the time horizon \begin{multline*} T > \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right),1+\ceil*{\frac{4}{\Delta^{2}}}\} \end{multline*} Then, with probability at least $1-\delta$, the cumulative regret of the Sequential Ski Rental algorithm is bounded as \begin{multline*} R_{T} \leq (1+B^{2})\sqrt{T\log n} + B \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right),1+\ceil*{\frac{4}{\Delta^{2}}}\} \end{multline*} where $n$ are the number of ski-experts, where $B$ is the bound on the loss suffered by the ski-experts, $\Delta$ is the minimum sub-optimality gap of the buy experts in terms of their variance, $m \geq 2$ are the number of buy experts and $\epsilon$ is the minimum robustness in terms of the buy cost of the CostRobust Randomized algorithm across rounds. 
\end{theorem} Recalling the regret definition we use, we have $$R_{T} = \sum_{t=1}^{T}(\bm{\beta^{t}})^{T}\bm{l}^{t}(b^{t},x^{t},b^{t}_{s},\bm{y^{t}}) - \min_{j}\sum_{t=1}^{T}l^{t}_{j}(b^{t},x^{t},b^{t},y^{t}_{j})$$ This can be split w.r.t the optimal ski expert $j^{*}$, given the true value as $R_{T} = R_{T}^{x} + R_{T}^{b}$ where each component is defined as $$R_{T}^{x} = \sum_{t=1}^{T}(\bm{\beta^{t}})^{T}\bm{l}^{t}(b^{t},x^{t},b^{t}_{s},\bm{y^{t}}) - \sum_{t=1}^{T}l^{t}_{j^{*}}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})$$ and $$R_{T}^{b} = \sum_{t=1}^{T}l^{t}_{j^{*}}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) - \sum_{t=1}^{T}l^{t}_{j^{*}}(b^{t},x^{t},b^{t},y^{t}_{j})$$ \begin{theorem} \label{Claim5} The first term in the regret split $R_{T}^{x}$ is bounded by $$R_{T}^{x} \leq (1+B^{2})\sqrt{T\log n}$$ where $B$ is the bound on the loss suffered by the ski-experts and $n$ denotes the number of ski-experts. \end{theorem} \begin{comment} \begin{proof} Let the number of ski experts be $n$, the probability distribution over these experts as $\bm{\beta}^{t}$, and the weights over these experts as $\bm{w}^{t}$, where $\bm{\beta}^{t} = \frac{\bm{w}^{t}}{\sum_{i=1}^{n}w^{t}_{i}}$. The loss suffered by the learner at time $t$ is $l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})$ where $b^{t}_{s}$ is the buy cost sampled and $y^{t}_{j}$ is the prediction for the number of ski days by the sampled ski-expert $j$. Consider the potential function $\phi^{t} = \sum_{i=1}^{n}w^{t}_{i} \forall t \in \{0,1,\dots,T\}$. Note that $\phi^{0} = n$. 
Thus, \begin{equation*} \begin{split} \phi^{t+1} & = \sum_{j=1}^{n}w^{t}_{j}e^{-\epsilon l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})} = \phi^{t}\sum_{j=1}^{n}\beta^{t}_{j}e^{-\epsilon l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})} \\ & \leq \phi^{t}\sum_{j=1}^{n}\beta^{t}_{j}(1-\epsilon l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) + \epsilon^{2}(l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}))^{2}) \\ & \leq \phi^{t}(1-\epsilon \sum_{j=1}^{n}\beta^{t}_{j}l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) + \epsilon^{2}\sum_{j=1}^{n}\beta^{t}_{j}(l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}))^{2}) \\ & \leq \phi^{t}e^{-\epsilon \bm{\beta^{t}}.\bm{l^{t}} + \epsilon^{2}\bm{\beta^{t}}.(\bm{l^{t}})^{2}} \end{split} \end{equation*} where the first inequality comes from the fact that $e^{-x} \leq 1-x+x^{2} \forall x \geq 0$ and the last inequality comes from the fact $e^{x} \geq 1+x$. Thus we get $\phi^{T} \leq ne^{-\epsilon \sum_{t=1}^{T}\bm{\beta^{t}}.\bm{l^{t}} + \epsilon^{2}\sum_{t=1}^{T}\bm{\beta^{t}}.(\bm{l^{t}})^{2}}$ Using the inequality $e^{-\epsilon \sum_{t=1}^{T}l^{t}_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j})} \leq \phi^{T}$, for any ski-expert $j$, we get \begin{equation*} \begin{split} \sum_{t=1}^{T}\sum_{j=1}^{n}\beta^{t}_{j}l_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) - \sum_{t=1}^{T}l_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) & \leq \epsilon\sum_{t=1}^{T}\bm{\beta^{t}}(\bm{l^{t}})^{2} + \frac{\log n}{\epsilon} \\ & \leq \epsilon B^{2}T + \frac{\log n}{\epsilon} \end{split} \end{equation*} For $\epsilon = \sqrt{\frac{\log n}{T}}$, we obtain the bound for the first term as \begin{equation*} \begin{split} \sum_{t=1}^{T}\sum_{j=1}^{n}\beta^{t}_{j}l_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) - \sum_{t=1}^{T}l_{j}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) \leq (1+B^{2})\sqrt{T\log n} \end{split} \end{equation*} \end{proof} \end{comment} This regret bound $R_{T}^{x}$ follows from the standard regret bound for the Constant Hedge algorithm with $n$ experts when losses for each of these experts lie in the range 
$[0,B]$. To bound $R_{T}^{b}$, note that the loss function is $\epsilon$-robust to the buy cost: if the predicted buy cost lies in the range $(b-\epsilon,b+\epsilon)$, then $l^{t}_{j^{*}}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) = l^{t}_{j^{*}}(b^{t},x^{t},b^{t},y^{t}_{j})$. Let us analyze the predicted buy cost $b^{t}_{s}$. Note that $\mathbf{E}[b^{t}_{s}|\bm{\alpha^{t}}] = b^{t}$. This is because each of the buy experts predicts correctly in expectation and $\sum_{i=1}^{m}\alpha^{t}_{i} = 1$. This leads to $\mathbf{E}[b^{t}_{s}] = b^{t}$. Let $\gamma_{i}$ denote the variance of the $i^{th}$ buy expert. We now compute the variance of $b^{t}_{s}$: \begin{equation*} \begin{split} \mathbf{E}[(b^{t}_{s}-b^{t})^{2}|\bm{\alpha^{t}}] & = \mathbf{E}[(\sum_{i=1}^{m}\alpha^{t}_{i}(a^{t}_{i}-b^{t}))^{2} | \bm{\alpha^{t}}] = \sum_{i=1}^{m}(\alpha^{t}_{i})^{2}\gamma_{i} \end{split} \end{equation*} where the first equality comes from the fact that $\sum_{i}\alpha^{t}_{i} = 1$, and the second equality from the fact that the experts predicting at time $t$ are independent and predict with mean $b^{t}$. Thus $\mathbf{E}[(b^{t}_{s}-b^{t})^{2}] = \sum_{i=1}^{m}\gamma_{i}\mathbf{E}[(\alpha^{t}_{i})^{2}]$. Using Chebyshev's inequality, the probability that $b^{t}_{s}$ lies in the $\epsilon$ range about $b^{t}$ is given as $$Pr[|b^{t}_{s}-b^{t}| < \epsilon] > 1 - \frac{var(b^{t}_{s})}{\epsilon^{2}}$$ For this event to hold with probability at least $1-\delta$, we require $\sum_{i=1}^{m}\gamma_{i}\mathbf{E}[(\alpha^{t}_{i})^{2}] \leq \delta \epsilon^{2}$. Under the assumption that one buy expert has variance $\gamma_{min} < \delta \epsilon^{2}$, this is trivially satisfied as $t \to \infty$. Hence we now require a minimum $t^{*} < T$ such that the above event is satisfied for all $t \in [t^{*},T]$.
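The identity $\mathrm{Var}(b^{t}_{s} \mid \bm{\alpha^{t}}) = \sum_{i}(\alpha^{t}_{i})^{2}\gamma_{i}$ is easy to confirm by simulation; the sketch below assumes Gaussian expert noise (our modelling choice, matching the later experiments) and illustrative weight and variance values:

```python
import random

def simulate_variance(alphas, gammas, b=50.0, n=200_000, seed=0):
    """Empirical variance of b_s = sum_i alpha_i * a_i when the experts are
    independent and unbiased with a_i ~ Normal(b, gamma_i)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        b_s = sum(a * rng.gauss(b, g ** 0.5) for a, g in zip(alphas, gammas))
        total += b_s
        total_sq += b_s * b_s
    mean = total / n
    return total_sq / n - mean * mean

alphas, gammas = [0.7, 0.3], [1.0, 4.0]
theory = sum(a * a * g for a, g in zip(alphas, gammas))  # 0.49 + 0.36 = 0.85
emp = simulate_variance(alphas, gammas)
assert abs(emp - theory) < 0.05  # empirical variance matches the identity
```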
\begin{theorem} \label{Claim6} The number of rounds $t^{*}$ after which with probability at least $1-\delta$ $$\sum_{t=t^{*}}^{T}l_{j^{*}}(b^{t},x^{t},b^{t}_{s},y^{t}_{j}) = \sum_{t=t^{*}}^{T}l_{j^{*}}(b^{t},x^{t},b^{t},y^{t}_{j}) ~ \text{is} $$ \begin{multline*} t^{*} = \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right),1+\ceil*{\frac{4}{\Delta^{2}}}\} \end{multline*} under the assumption that the variance of one buy expert satisfies $\gamma_{min} = \frac{\delta \epsilon^{2}}{Tc}$ for some $c \in (1,\infty)$, where $m \geq 2$ are the number of buy-experts, $\epsilon$ is the minimum robustness of the CostRobust algorithm in terms of the buy cost across rounds, the sub-optimality gap is $\Delta$ and the time horizon $T > \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right),1+\ceil*{\frac{4}{\Delta^{2}}}\} $. \end{theorem} \begin{proof} Consider a time $t_{0} = \ceil*{\frac{4}{\Delta^{2}}}$. If the rate of convergence to the desired variance is upper bounded by $t_{0}$, then we have a convergence rate which does not depend on $T$. If the time taken for convergence is greater than $t_{0}$, then we consider the analysis below. The variance of $b^{t}_{s}$ is $$var(b^{t}_{s}) = \sum_{i=1}^{m}\gamma_{i} \mathbf{E}[(\alpha^{t}_{i})^{2}]$$ The update at time $t$ for each buy expert is made based on the squared error, specifically $(a^{t}_{i}-b^{t})^{2}$. We define the loss suffered by the $i^{th}$ buy expert at time $t$ as $g^{t}_{i} = (a^{t}_{i}-b^{t})^{2}$. 
For every buy expert $i \neq i^{*}$, we define the variable $Z^{t}_{i} := -g^{t}_{i} + g^{t}_{i^{*}} + \Delta_{i}$, which belong to $[-1+\Delta_{i},1+\Delta_{i}]$. We define $G^{t}_{i}$ as the cumulative loss for buy expert $i$ upto time $t$ i.e $G^{t}_{i} = \sum_{s=1}^{t}(a^{s}_{i}-b^{s})^{2}$. Applying Hoeffding's inequality, we get \begin{equation*} \begin{split} Pr(G^{t-1}_{i}-G^{t-1}_{i^{*}} < \Delta_{i}\frac{t-1}{2}) & = Pr(\sum_{s=1}^{t-1}Z^{s}_{i} > \Delta_{i}\frac{t-1}{2}) \\ & \leq e^{-(t-1)\frac{\Delta^{2}_{i}}{8}} \end{split} \end{equation*} When $G^{t-1}_{i}-G^{t-1}_{i^{*}} > \Delta_{i}\frac{t-1}{2}$, then \begin{equation*} \begin{split} \alpha^{t}_{i} & = \frac{e^{-\eta_{b}(G^{t-1}_{i}-G^{t-1}_{i^{*}})}}{1+\sum_{j \neq i^{*}}e^{-\eta_{b}(G^{t-1}_{j}-G^{t-1}_{i^{*}})}} \\ & \leq e^{-\Delta_{i}\sqrt{(t-1)(\log m) / 2}} \end{split} \end{equation*} since $t \geq t_{0} + 1 \geq 2$ .Thus $(\alpha^{t}_{i})^{2} \leq e^{-\Delta_{i}\sqrt{2(t-1)(\log m)}}$. Hence \begin{equation*} \begin{split} \mathbf{E}[(\alpha^{t}_{i})^{2}] & \leq Pr(G^{t-1}_{i}-G^{t-1}_{i^{*}} < \Delta_{i}\frac{t-1}{2}) + e^{-\Delta_{i}\sqrt{2(t-1)(\log m)}} \\ & \leq e^{-(t-1)\frac{\Delta^{2}_{i}}{8}} + e^{-\Delta_{i}\sqrt{2(t-1)(\log m)}} \end{split} \end{equation*} Let us consider each component separately. Considering the first term, since $\Delta_{i} \geq \Delta$ we obtain \begin{equation*} \Delta_{i}e^{-(t-1)\frac{\Delta^{2}_{i}}{8}} \leq \Delta e^{-(t-1)\frac{\Delta^{2}}{8}} \end{equation*} when $ \frac{\Delta\sqrt{t-1}}{2} \geq 1 $ i.e $t \geq 1 + \frac{4}{\Delta^{2}}$ which is satisfied as $t \geq t_{0} + 1 \geq 1 + \frac{4}{\Delta^{2}}$. Now considering the second component \begin{equation*} \Delta_{i}e^{-\Delta_{i}\sqrt{2(t-1)(\log m)}} \leq \Delta e^{-\Delta\sqrt{2(t-1)(\log m)}} \end{equation*} is satisfied if $\Delta\sqrt{2(t-1)(\log m)} \geq 1$ i.e $t \geq 1 + \frac{1}{2 \Delta^{2}\log m}$ which is again ensured by $t \geq t_{0} + 1$. 
Also, \begin{equation*} e^{-(t-1)\Delta_{i}^{2}/8} \leq e^{-(t-1)\Delta^{2}/8} \end{equation*} and \begin{equation*} e^{-\Delta_{i}\sqrt{2(t-1)(\log m)}} \leq e^{-\Delta \sqrt{2(t-1)(\log m)}} \end{equation*} where both these inequalities come from the fact $\Delta_{i} \geq \Delta$.Now, rewriting the variance in terms of the sub-optimality parameters \begin{equation*} \begin{split} var(b^{t}_{s}) & = \sum_{i=1}^{m}\gamma_{i}\mathbf{E}[(\alpha^{t}_{i})^{2}] \\ & \leq \gamma_{min} + \gamma_{min} \sum_{i \neq i^{*}}\mathbf{E}[(\alpha^{t}_{i})^{2}] + \sum_{i \neq i^{*}}\Delta_{i}\mathbf{E}[(\alpha^{t}_{i})^{2}] \end{split} \end{equation*} Hence for every $t \geq t_{0} + 1$, we get \begin{equation*} \begin{split} var(b^{t}_{s}) & \leq \gamma_{min} + m(\gamma_{min} + \Delta)( e^{-\Delta\sqrt{2(t-1)(\log m)}} + e^{-(t-1)\frac{\Delta^{2}}{8}}) \end{split} \end{equation*} Thus the convergence rate boils down to finding a minimum $t > t_{0}$ such that \begin{equation*} \begin{split} e^{-\Delta\sqrt{2(t-1)(\log m)}} + e^{-(t-1)\frac{\Delta^{2}}{8}} \leq \frac{\frac{\delta \epsilon^{2}}{T} - \gamma_{min}}{m(\gamma_{min} + \Delta)} \end{split} \end{equation*} Note that we assume $\gamma_{min} = \frac{\delta \epsilon^{2}}{Tc}$ for some $c \in (1,\infty)$. An upper bound on the convergence would be when each of the components is less than $\frac{\delta \epsilon^{2} (c-1)}{2Tm (1 + \rho)\Delta c}$ for some $c \in (1,\infty)$ and $\rho = \frac{\delta \epsilon^{2}}{Tc\Delta}$. 
Hence after \begin{multline*} t^{*} = \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2T\Delta mc(1+\rho)}{\delta \epsilon^{2}(c-1)}\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2T\Delta mc(1+\rho)}{\delta \epsilon^{2}(c-1)}\right)\} \end{multline*} and thus \begin{multline*} t^{*} = \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right)\} \end{multline*} rounds we have that $e^{-\Delta\sqrt{2(t-1)(\log m)}} + e^{-(t-1)\frac{\Delta^{2}}{8}} \leq \frac{\frac{\delta \epsilon^{2}}{T} - \gamma_{min}}{m(\gamma_{min} + \Delta)}$ for all $t \in [t^{*},T]$. If the above $t^{*} > t_{0}$, then $var(b^{t}_{s}) < \frac{\delta \epsilon^{2}}{T}$ for all $t \in [t^{*},T]$. If the above $t^{*} \leq t_{0}$, then $var(b^{t}_{s}) < \frac{\delta \epsilon^{2}}{T}$ for all $t \in [t_{0} + 1,T]$ . Let $t'=\max\{t^{*},t_{0}+1\}$. We would want that for all rounds in $[t',T]$, $b^{t}_{s}$ lies in an $\epsilon$ range around $b^{t}$. Note that we have $$Pr[|b^{t}_{s}-b^{t}| < \epsilon] > 1 - \frac{\delta}{T}$$ for all $t \in [t^{'},T]$. We would like to bound the probability $Pr[\bigcap_{t=t'}^{T}\{|b^{t}_{s}-b^{t}| < \epsilon\}]$. Using Frechet inequalities, we have \begin{equation*} \begin{split} Pr[\bigcap_{t=t'}^{T}\{|b^{t}_{s}-b^{t}| < \epsilon\}] & \geq \sum_{t=t'}^{T}Pr[|b^{t}_{s}-b^{t}| < \epsilon] - (T-t') \\ & \geq (1-\frac{\delta}{T})(T-t'+1) - (T-t') \\ & \geq 1-\frac{\delta}{T} -\frac{\delta}{T}(T-t') \\ & \geq 1 - \delta \end{split} \end{equation*} The result follows. 
\end{proof} \begin{corollary} \label{Corollary1} With probability at least $1-\delta$, the second term in the regret split $R_{T}^{b}$ is bounded as \begin{multline*} R_{T}^{b} \leq B \max \{1+\frac{8}{\Delta^{2}}\log \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right), \\ 1+\frac{1}{2\Delta^{2}\log m}\log^{2} \left(\frac{2m}{c-1}\left(1+\frac{Tc\Delta}{\delta \epsilon^{2}}\right)\right),1+\ceil*{\frac{4}{\Delta^{2}}}\} \end{multline*} where $B$ is the bound on the loss suffered by the ski experts and $\gamma_{min} = \frac{\delta \epsilon^{2}}{Tc}$ for some $c \in (1,\infty)$. \end{corollary} Hence the regret bound follows using the above proved results. \section{Experiments} We perform empirical studies to show that our CostRobust algorithm performs similarly to the algorithm proposed by \cite{PSK-18}. We also perform simulations of the \emph{Sequential Ski Rental Algorithm} to verify our theoretical regret guarantees. \begin{figure*}[ht] \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Plots/lam_variation/lam_regret_ski1_50.png} \label{fig:lamvar1} \caption{$\eta_{min}=1$,$\eta_{max}=100$} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Plots/lam_variation/lam_regret_ski100_150.png} \label{fig:lamvar2} \caption{$\eta_{min}=100$,$\eta_{max}=150$} \end{subfigure} \label{fig:lamvar} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Plots/ski_exp_variation/skiexp_regret_40avg.png} \label{fig:svar1} \caption{$\eta_{min}=1$,$\eta_{max}=50$} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Plots/buy_exp_variation/buy_exp_regret_1_50.png} \label{fig:svar2} \caption{$\eta_{min}=1$, $\eta_{max}=50$} \end{subfigure} \caption{((a) and (b)) Regret Variation as a function of the hyper-parameter $\lambda$; ((c) and (d)) Regret Variation as a function of the number of ski experts $n$ and 
the number of buy experts $m$} \label{fig:skivar} \end{figure*} \subsection{CostRobust Randomized Algorithm} To show a comparison, we set the cost of buying to $b=100$ and sample the actual number of ski days $x$ uniformly as an integer from $[1,4b]$. To obtain the predicted number of ski-days, we model it as $y = x + \epsilon$, where the noise $\epsilon$ is drawn from a normal distribution with mean $0$ and standard deviation $\sigma$. We compare our algorithms for two different values of the trade-off parameter $\lambda$. For each $\sigma$, we plot the average competitive ratio obtained by each algorithm over 10000 independent trials. Figure \ref{fig:compratio} shows that both algorithms perform similarly in terms of the competitive ratio. Setting $\lambda=1$, both algorithms ignore the prediction and guarantee robustness close to the theoretical lower bound of $\frac{e}{e-1}$. Setting $\lambda = \ln \frac{3}{2}$ guarantees an upper bound close to $3$ for the CostRobust algorithm. We observe that for such a $\lambda$, the algorithm performs much better than the classical guarantees. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/competitive_ratio/costrobust_performance.png} \caption{Comparison of the CostRobust randomized algorithm with the randomized algorithm proposed by \cite{PSK-18}} \label{fig:compratio} \end{figure} \subsection{Regret Experiments} While we require certain assumptions to get a theoretical bound, our empirical study shows that we obtain a vanishing regret under much weaker conditions. We obtain regret plots for different settings of $\lambda, n$ and $m$. For all $t \in [1,T]$, $b^{t}$ and $x^{t}$ are uniformly sampled integers from $[200,700]$. We consider such a range as some experts could have really large errors in predictions which would lead to a negative prediction in the case the bound on the support was lower. We consider $m$ buy-experts and $n$ ski-experts.
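One round of this simulated environment can be sketched as follows (the truncated-normal buy-expert noise follows the setup described next; the rejection sampler and the variance value are our own illustrative choices):

```python
import random

def truncated_gauss(rng, mu, sigma, lo, hi):
    """Rejection-sample a normal variate truncated to [lo, hi]."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(0)
b_t = rng.randint(200, 700)                 # true buy cost for round t
gamma_i = 9.0                               # buy-expert variance (toy value)
a_ti = b_t + truncated_gauss(rng, 0.0, gamma_i ** 0.5, -50.0, 50.0)
assert abs(a_ti - b_t) <= 50.0              # noise is confined to [-50, 50]
```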
The learning rate is set according to the Decreasing Hedge algorithm and Constant Hedge algorithm the sets of experts. In our empirical study, the prediction of each buy expert is $a^{t}_{i} = b^{t} + \epsilon_{b}$, where $\epsilon_{b}$ is drawn from a truncated normal distribution in the range $[-50,50]$ with mean $0$ and variance $\gamma_{i}$. For the $m$ buy experts, their variance takes values at uniform intervals from the range $[\gamma_{min},\gamma_{max}]$. Our empirical study uses $\gamma_{min}=1$ and $\gamma_{max}=20$. Note that even though our theoretical bound holds when the noise comes from the range $[-1,1]$ our empirical study shows us that we can achieve vanishing regret for a much weaker constraint. As a modelling choice, we use predictions on the length of the ski season from a normal distribution. The prediction of each ski expert is $y^{t}_{j} = x^{t} + \epsilon_{x}$, where $\epsilon_{x}$ is drawn from a normal distribution with mean $0$ and variance $\eta_{j}$. For the $n$ ski experts, their variance takes values at uniform intervals from the range $[\eta_{min},\eta_{max}]$.The regret plots are obtained over 100 trials. \textbf{Variation in $\lambda$} - We expect that if there is a "good" ski expert(a ski expert with less error), then using a lower value of $\lambda$ will give us less algorithmic cost due to the consistency result derived above. However if we do not know the quality of the experts(worst case all of them are bad), the algorithmic cost and hence regret is bounded. The variation is shown in part (a) and (b) Figure \ref{fig:skivar}. \textbf{Ski Experts Variation} - What the variation in the number of experts shows us is that if we have a few experts at our disposal, making an early decision might be as good as making a decision when the experts have access to the true parameters if not better. An intuition for the learner performing better in the presence of noise is the following situation. Consider the case where $b^{t} < x^{t}$. 
The optimal strategy is to buy early. If the ski expert predictions are less than $b^{t}$ they would predict sub-optimally when given the true buy cost. If they receive a buy cost sample such that it is less than all of their predictions, then the learner performs better with the noisy sample. However the probability of this decreases as the number of ski-experts increase as all of their predictions need to satisfy this condition. The variation is shown in part (c) of Figure \ref{fig:skivar}. \textbf{Buy Experts Variation} - The number of buy experts does not affect the cumulative regret as long as the best buy expert comes from a similar error range. This is because the way the learner updates the weights of these experts is based on how far it is from the true buy cost at that time instant. The variation is shown in part (d) of Figure \ref{fig:skivar}. \section{Conclusion} In this work, we introduced the sequential ski rental problem, a novel variant of the classical ski buy or rent problem. We developed algorithms and proved regret bounds for the same. Currently we assume that the buy costs are stochastic with different variances. Future work includes considering more general buy cost advice. \begin{acks} Arun Rajkumar thanks Robert Bosch Center for Data Science and Artificial Intelligence, Indian Institute of Technology Madras for financial support. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \vspace{-0.5cm} \label{sec:intro} \noindent \blfootnote{$^{}$ T Alhanai and MM Ghassemi contributed equally to this work and should be considered shared senior authors.} \noindent Sporting events are a popular source of entertainment, with immense interest from the general public. Sports analysts, coaching staff, franchises, and fans alike all seek to forecast winners and losers in upcoming sports match-ups based on previous records. The interest in predicting sporting outcomes is particularly pronounced for professional team sport leagues including the Major League Baseball (MLB), the National Football League (NFL), the National Hockey League (NHL), and the National Basketball Association (NBA); postseason plays in these leagues, namely the playoffs, are of greater interest than games in the regular season because teams compete directly for prestigious championship titles. The development of statistical models to robustly predict the outcome of playoff games from year-to-year is a challenging machine learning task because of the plethora of individual, team and external factors that altogether confound the propensity of a given team to win a given game in a given year. In this work, we develop \textbf{MambaNet}: a large hybrid neural network for predicting the outcome of a basketball match during the playoffs. There are five main differences between our work and previous studies: (1) we use a combination of both player and team statistics; (2) we account for the evolution in player and team statistics over time using a signal processing approach; (3) we utilize Feature Imitating Networks (FINs) \cite{sari} to embed feature representations into the network; (4) we predict the outcome of playoff results, as opposed to season games; and (5) we test the generalizability of our model across two distinct national basketball leagues.
To assess the value of our proposed approach, we performed three experiments that compare MambaNet to previously proposed machine learning algorithms using NBA and Iranian Super League data. \begin{figure*}[t] \includegraphics[width=\textwidth]{MambaNet.png} \caption{\label{fig:mambanet}An overview of MambaNet's architecture. First, the home (column 1, yellow boxes) and away (column 1, purple boxes) teams' stats and the two teams' players' stats are fed to the network. Next, four FINs are utilized to represent the input stats' signal features, which contain trainable (column 1, dark circles) and non-trainable (column 2, light circles) layers. These representations are further processed with convolutional and dense layers. Raw time-domain signal features are also extracted from input stats using LSTM networks. Finally, the aforementioned features are incorporated to make the final prediction.} \centering \end{figure*} \bgroup \def\arraystretch{0.7} \begin{table*}[t] \centering \footnotesize \captionsetup{font=footnotesize} \begin{tabular}{cl|cl|cl} \hline \# & Statistics & \# & Statistics & \# & Statistics \\ \hline 1 & Minutes Played & 2 & Field Goal & 3 & Field Goal Attempts \\ 4 & Field Goal Percentage & 5 & 3-Point Field Goal & 6 & 3-Point Field Goal Attempts \\ 7 & 3-Point Field Goal Percentage & 8 & Free Throw & 9 & Free Throw Attempts \\ 10 & Free Throw Percentage & 11 & Offensive Rebound & 12 & Defensive Rebound \\ 13 & Total Rebound & 14 & Assists & 15 & Steals \\ 16 & Blocks & 17 & Turnover & 18 & Personal Fouls \\ 19 & Points & 20 & True Shooting Percentage & 21 & Effective Field Goal Percentage \\ 22 & 3-Point Attempt Rate & 23 & Free Throw Attempt Rate & 24 & Offensive Rebound Percentage \\ 25 & Defensive Rebound Percentage & 26 & Total Rebound Percentage & 27 & Assist Percentage \\ 28 & Steal Percentage & 29 & Block Percentage & 30 & Turnover Percentage \\ 31 & Usage Percentage & 32 & Offensive Rating & 33 & Defensive Rating \\ 34 & Winning Percentage
(Team-only) & 35 & Elo Rating (Team-only) & 36 & Plus/Minus (Player-only) \end{tabular} \caption{\label{tab:stats}A description of the game statistics used in this work. Except for the last three features, the rest (1 to 33) are shared statistics used to represent both teams and players. (\#: feature number)} \end{table*} \egroup \section{Related Work} \label{sec:pagestyle} The NBA is the most popular contemporary basketball league \cite{Ulas2021ExaminationON,kawashiri2020societal}. Several previous studies have examined the impact of different game statistics on a team's propensity to win or lose a game \cite{ijerph17165722,seconndassist}. More specifically, previous studies have identified teams' defensive rebounds, field goal percentage, and assists as crucial contributing factors to succeeding in a basketball game \cite{reb}; for machine learning workflows, these game attributes may be used as valuable input features to predict the outcome of a given basketball game \cite{mlbs,BUNKER201927}. Probabilistic models to predict the outcome of basketball games have been proposed by several previous studies. Jain and Kaur \cite{kaur} developed a Support Vector Machine (SVM) and a Hybrid Fuzzy-SVM model (HFSVM) and reported 86.21\% and 88.26\% accuracy, respectively, in predicting the outcome of basketball games. More recently, Houde \cite{houde} experimented with SVM, Gaussian Naive Bayes (GNB), Random Forest (RF), K-Nearest Neighbors (KNN), Logistic Regression (LR), and XGBoost (XGB) classifiers over fifteen game statistics across the last ten games of both home and away teams. They also experimented over a more extended period of NBA season data, spanning 2018 to 2021, and reported 65.1\% accuracy in winner/loser classification. In contrast to Kaur and Houde, who addressed game outcome prediction as a binary classification task, Chen et al. \cite{chen} identified the winner/loser by predicting the exact final game scores.
They used a data mining approach, experimenting with 13 NBA game statistics from the 2018-2019 season. After feature selection, this number shrank to 6 critical basketball statistics for predicting the outcome. In terms of classifiers, the authors experimented with KNN, XGB, Stochastic Gradient Boosting (SGB), Multivariate Adaptive Regression Splines (MARS), and Extreme Learning Machine (ELM) to train and classify the winner of NBA matchups. The authors also studied the effect of different game-lag values (from 1 to 6) on the success of their classifiers and found that a value of 4 performed best on their feature set. Fewer studies have used neural networks to predict the outcome of basketball games; this is mostly due to the challenge of over-fitting in the presence of (relatively) small basketball training datasets. Thabtah et al. \cite{thab} trained Artificial Neural Networks (ANN) on a wide span of data, extracting 20 team stats per NBA matchup played from 1980 to 2017. Their model obtained 83\% accuracy in predicting NBA game outcomes; they also demonstrated the significance of three-point percentage, free throws made, and total rebounds as features that enhanced their model's accuracy rate. \section{METHODS} \label{sec:pagestyle} \noindent \textbf{Baseline approach}: A majority of the existing studies use a similar methodological approach: for each team (home and away), a set of $s$ game statistics (the features) is extracted over the $n$ previous games (the game-lag value \cite{chen}), forming an $n \times s$ matrix. Then, the mean of each stat is calculated across the $n$ games, resulting in a $1 \times s$ feature vector for each team. The two feature vectors are then concatenated, yielding a $1 \times 2s$ vector for each unique matchup between a given pair of teams. Finally, this results in a $trainSize \times 2s$ matrix which is used to train a classification model (each experiment reports its train/test set sizes in more detail).
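A minimal sketch of this feature construction (our own variable names; not code from any of the cited studies):

```python
import numpy as np

def baseline_features(home_stats, away_stats):
    """Build one baseline sample from the last n games of each team.

    home_stats, away_stats: (n, s) arrays, one row of s statistics per
    game over the last n games (n = game-lag value).  Returns a (2s,)
    vector: the home team's per-stat means concatenated with the away
    team's per-stat means -- one row of the trainSize x 2s matrix.
    """
    return np.concatenate([home_stats.mean(axis=0),
                           away_stats.mean(axis=0)])

# Toy matchup with game-lag n = 10 and s = 15 statistics per game.
rng = np.random.default_rng(0)
sample = baseline_features(rng.normal(size=(10, 15)),
                           rng.normal(size=(10, 15)))
# sample is one row of the trainSize x 2s matrix; here 2s = 30.
```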
The label of each sample indicates whether the home team won ($y=1$) or lost the game ($y=0$). \noindent \textbf{FIN Training}: Our method follows the same steps as the baseline approaches, but with one critical difference: instead of calculating the mean of the features across the $n$ last games using the mean equation, we feed the entire $n \times s$ matrix to a pretrained mean FIN and stack hidden layers on top of it (hereafter, this FIN-based deep feedforward architecture is referred to as FINDFF) to perform binary classification. In addition to the mean feature, we also imitate standard deviation, variance, and skewness. All FINs are trained using the same neural architecture: a sequence of dense layers with 64, 32, 16, 8, and 4 units is stacked, respectively, before connecting to a single-unit sigmoid layer. The activation function is ReLU for the first two hidden layers and linear for the rest. Each model is trained in a regression setting by using 100,000 randomly generated signals as the training set and the handcrafted feature values for each signal as the training labels. Then, we freeze the first three layers, fine-tune the fourth layer, and remove the remaining two layers before integrating the FIN into the larger MambaNet architecture. \noindent \textbf{MambaNet}: In Figure \ref{fig:mambanet}, we provide an illustration of MambaNet -- our proposed approach. The complete set of player and team statistics used in this study can be found in Table \ref{tab:stats}. The input to the network is a $10 \times 35$ stats matrix, which is passed to both the pretrained FINs and LSTM layers to extract a \textit{team}'s sequential statistical features. For each team, we also extract a stats matrix ($n=10$, $s=34$) for each of its roster's top ten \textit{players} and pass them to the same FINs and LSTM layers.
\multicolumn{1}{c|}{0.62} & \multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{0.58} & 0.60 \\ \cline{3-8} & & KNN & \multicolumn{1}{c|}{0.49} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.63} & \multicolumn{1}{c|}{0.60} & 0.64 \\ \cline{3-8} & & SVM & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.64} & \multicolumn{1}{c|}{0.51} & 0.61 \\ \cline{3-8} & & LR & \multicolumn{1}{c|}{0.61} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.61} & 0.66 \\ \cline{3-8} & & XGB & \multicolumn{1}{c|}{0.63} & \multicolumn{1}{c|}{0.67} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.50} & 0.59 \\ \hline \textbf{TW} & \textbf{15} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.68}} & \multicolumn{1}{c|}{\textbf{0.77}} & \multicolumn{1}{c|}{\textbf{0.69}} & \multicolumn{1}{c|}{\textbf{0.76}} & \textbf{0.70} \\ \hline \hline {\cite{e18120450}} & 14 & NBAME & \multicolumn{1}{c|}{0.51} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.57} & 0.59 \\ \hline \textbf{TW} & \textbf{14} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.62}} & \multicolumn{1}{c|}{\textbf{0.59}} & \multicolumn{1}{c|}{\textbf{0.64}} & \multicolumn{1}{c|}{\textbf{0.60}} & \textbf{0.62} \\ \hline \hline \multirow{5}{*}{\cite{chen}} & \multirow{5}{*}{6} & ELM & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.53} & 0.64 \\ \cline{3-8} & & KNN & \multicolumn{1}{c|}{0.58} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.51} & \multicolumn{1}{c|}{0.56} & 0.55 \\ \cline{3-8} & & XGB & \multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{0.58} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.53} & 0.55 \\ \cline{3-8} & & MARS & \multicolumn{1}{c|}{0.63} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.59} & 0.57 \\ \hline \textbf{TW} & \textbf{6} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.69}} & 
\multicolumn{1}{c|}{\textbf{0.65}} & \multicolumn{1}{c|}{\textbf{0.57}} & \multicolumn{1}{c|}{\textbf{0.63}} & \textbf{0.66} \\ \hline \hline {\cite{thab}} & 20 & ANN & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.58} & 0.53 \\ \hline \textbf{TW} & \textbf{20} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.59}} & \multicolumn{1}{c|}{\textbf{0.68}} & \multicolumn{1}{c|}{\textbf{0.67}} & \multicolumn{1}{c|}{\textbf{0.61}} & \textbf{0.65} \\ \hline \end{tabular} \caption{\label{tab:pw}A performance comparison between FINDFF and other previously-developed machine learning models on five years of NBA Playoffs, from the 2017-2018 season (17-18) to the 21-22 season. (Ref: Reference, FC: Feature Count, Alg: Algorithm, TW: This Work)} \end{table} \egroup \newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1\relax}} \subsection{Experiment \RNum{1}} This experiment aims to determine whether using FINs in conjunction with deep neural networks can enhance playoff outcome prediction. We followed the same machine learning pipeline as previous studies to compare FINDFF. However, we applied a pretrained mean FIN to the $n \times s$ matrix instead of taking the mean directly, providing an almost identical setting when comparing FINDFF with other classic machine learning algorithms. Since the FIN is the only differing component in this setting, its effects can be easily studied. \noindent \textbf{Dataset:} All data were gathered from NBA games played from 2017-2018 to 2021-2022 over five seasons. We used each year's season games as training data (1,230, 1,120, 1,060, 1,086, and 1,236 games from 2017-2018 to 2021-2022) and playoff games as testing data (82, 82, 83, 85, and 87 games from 2017-2018 to 2021-2022), leaving us with five different NBA datasets. 
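As an aside on evaluation, the AUC used throughout can be computed directly from predicted scores with the rank-based (Mann--Whitney) identity; a minimal numpy sketch that ignores tied scores:

```python
import numpy as np

def auc(y_true, y_score):
    """AUC of scores y_score against binary outcomes y_true (1 = home win).

    Uses the rank-sum identity: AUC equals the probability that a randomly
    chosen positive sample is scored above a randomly chosen negative one.
    """
    y_true = np.asarray(y_true)
    ranks = np.empty(len(y_score))
    ranks[np.argsort(y_score)] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])   # 0.75
```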
\noindent \textbf{Results:} In Table \ref{tab:pw}, we compare FINDFF models with five other methods from the literature using different features (game statistics), game-lag values, and classification algorithms. The FINDFF network successfully outperformed all other methods with a 0.05 to 0.15 AUC margin in every year of NBA data, demonstrating the advantage of the feature imitation technique in game outcome prediction. \bgroup \def0.7{1} \begin{table}[t] \centering \footnotesize \captionsetup{font=footnotesize} \begin{tabular}{|c|c|c|ccccc|} \hline \multirow{2}{*}{Ref} & \multirow{2}{*}{FC} & \multirow{2}{*}{Alg} & \multicolumn{5}{c|}{AUC} \\ \cline{4-8} & & & \multicolumn{1}{c|}{17-18} & \multicolumn{1}{c|}{18-19} & \multicolumn{1}{c|}{19-20} & \multicolumn{1}{c|}{20-21} & 21-22 \\ \hline \hline {[}4{]} & 33 & SVM & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.57} & 0.55 \\ \hline \textbf{TW} & \textbf{33} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.62}} & \multicolumn{1}{c|}{\textbf{0.57}} & \multicolumn{1}{c|}{\textbf{0.67}} & \multicolumn{1}{c|}{\textbf{0.72}} & \textbf{0.59} \\ \hline \hline \multirow{6}{*}{{[}5{]}} & \multirow{6}{*}{15} & GNB & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{0.52} & 0.52 \\ \cline{3-8} & & RF & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.72} & \textbf{0.67} \\ \cline{3-8} & & KNN & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.52} & 0.55 \\ \cline{3-8} & & SVM & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.72} & \multicolumn{1}{c|}{0.67} & \multicolumn{1}{c|}{0.52} & 0.52 \\ \cline{3-8} & & LR & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.52} & 0.55 \\ \cline{3-8} & & XGB & \multicolumn{1}{c|}{0.55} & 
\multicolumn{1}{c|}{\textbf{0.72}} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.72} & 0.64 \\ \hline \textbf{TW} & \textbf{15} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.59}} & \multicolumn{1}{c|}{0.70} & \multicolumn{1}{c|}{\textbf{0.67}} & \multicolumn{1}{c|}{\textbf{0.72}} & 0.64 \\ \hline \hline {[}8{]} & 14 & NBAME & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.52} & 0.55 \\ \hline \textbf{TW} & \textbf{14} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.61}} & \multicolumn{1}{c|}{\textbf{0.66}} & \multicolumn{1}{c|}{\textbf{0.66}} & \multicolumn{1}{c|}{\textbf{0.59}} & \textbf{0.64} \\ \hline \hline \multirow{4}{*}{{[}6{]}} & \multirow{4}{*}{6} & ELM & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.52} & 0.55 \\ \cline{3-8} & & KNN & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.67} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.59} & 0.67 \\ \cline{3-8} & & XGB & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.67} & \multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{0.52} & 0.52 \\ \cline{3-8} & & MARS & \multicolumn{1}{c|}{\textbf{0.74}} & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{\textbf{0.74}} & \multicolumn{1}{c|}{0.62} & 0.68 \\ \hline \textbf{TW} & \textbf{6} & \textbf{FINDFF} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{\textbf{0.71}} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{\textbf{0.62}} & \textbf{0.71} \\ \hline \hline {[}7{]} & 20 & ANN & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{0.52} & \multicolumn{1}{c|}{0.55} & \multicolumn{1}{c|}{0.55} & 0.52 \\ \hline \textbf{TW} & \textbf{20} & \textbf{FINDFF} & \multicolumn{1}{c|}{\textbf{0.62}} & \multicolumn{1}{c|}{\textbf{0.67}} & \multicolumn{1}{c|}{\textbf{0.62}} & \multicolumn{1}{c|}{\textbf{0.67}} & \textbf{0.71} \\ \hline \end{tabular} \caption{\label{tab:iribf}A performance comparison between FINDFF and other previously-developed 
machine learning models, trained on five years of NBA data (from 17-18 to 21-22), on the 2020-2021 Iranian Super League Playoffs (Ref: Reference, FC: Feature Count, Alg: Algorithm, TW: This Work)} \end{table} \egroup \subsection{Experiment \RNum{2}} The purpose of this experiment is to examine the generalizability of the methodologies from the first experiment. As mentioned in Experiment 1, each model is trained and tested on five different years of NBA data. In this experiment, we still train these models on the five NBA datasets but test them on the Iranian Super League playoffs. This allows us to compare how well each method generalizes when predicting test cases from a significantly different data source. \noindent \textbf{Dataset:} For training purposes, we used the same NBA datasets discussed in Experiment 1. For testing, we used the 2020-2021 Iranian Basketball Super League playoffs. \noindent \textbf{Results:} As shown in Table \ref{tab:iribf}, FINDFF models outperformed almost all other methodologies in predicting the outcome of the Iranian Basketball Super League playoffs, by a margin of 0.02 to 0.12 AUC. \subsection{Experiment \RNum{3}} The first two experiments showed that FINs provided higher and more generalizable performance in playoff outcome prediction compared to the baselines. After developing MambaNet by building on top of the FINDFF architecture, we aim to demonstrate how integrating its other components affects our hybrid model's performance. \noindent \textbf{Dataset:} We used the same NBA datasets introduced in Experiment 1. \noindent \textbf{Results:} In Table \ref{tab:mamba}, we present the results of our incremental experiment. The first row reports the simplest version of MambaNet, using 35 team features that are passed to a FINDFF network imitating the mean (\textit{m}) as a feature.
Compared with the baseline, we use a more extensive set of basketball game statistics to form the feature vector of a team, since this helps satisfy the data-intensive requirements of neural networks. At this stage, the AUC varies between 0.70 and 0.72. Next, we trained three more FINs to imitate standard deviation (\textit{std}), variance (\textit{v}), and skewness (\textit{s}) using the same neural network architecture as the mean FIN. The second row of the table shows how adding the new signal feature representations improves the AUC by up to 0.10 and 0.03 on the 2018-2019 and 2020-2021 NBA datasets. Furthermore, we integrated players' statistics alongside team statistics, leading to a 0.02 increase in AUC across four NBA datasets in the third row. Lastly, as shown in the fourth row, we used RNN layers to create a time-series representation of the game and individual statistics, resulting in 0.03 and 0.02 improvements in 2019-2020 and 2021-2022, respectively. \bgroup \def\arraystretch{0.7} \begin{table}[t] \centering \footnotesize \captionsetup{font=footnotesize} \begin{tabular}{|c|c|c|c|c|ccccc} \hline \multirow{2}{*}{R} & \multirow{2}{*}{FS} & \multirow{2}{*}{FC} & \multirow{2}{*}{IF} & \multirow{2}{*}{Layers} & \multicolumn{5}{c|}{AUC} \\ \cline{6-10} & & & & & \multicolumn{5}{c|}{17-18 \thickspace\thickspace 18-19 \thickspace\thickspace 19-20 \thickspace\thickspace 20-21 \thickspace\thickspace\thickspace 21-22} \\ \hline \hline 1 & \textit{T} & 35 & \textit{m} & Dense & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{0.70} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{0.72} & \multicolumn{1}{c|}{0.70} \\ \hline 2 & \textit{T} & 35 & \textit{\begin{tabular}[c]{@{}c@{}}m\\ std\\ v\\ s\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Dense\\ Conv\end{tabular} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{0.80} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{0.75} & \multicolumn{1}{c|}{0.70} \\ \hline 3 & \textit{\begin{tabular}[c]{@{}c@{}}T\\ \\ P\end{tabular}} &
\begin{tabular}[c]{@{}c@{}}35\\ \\ 34\end{tabular} & \textit{\begin{tabular}[c]{@{}c@{}}m\\ std\\ v\\ s\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Dense\\ Conv\end{tabular} & \multicolumn{1}{c|}{0.73} & \multicolumn{1}{c|}{0.82} & \multicolumn{1}{c|}{0.73} & \multicolumn{1}{c|}{0.75} & \multicolumn{1}{c|}{0.72} \\ \hline 4 & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}T\\ \\ P\end{tabular}}} & \textbf{\begin{tabular}[c]{@{}c@{}}35\\ \\ 34\end{tabular}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}m\\ std\\ v\\ s\end{tabular}}} & \textbf{\begin{tabular}[c]{@{}c@{}}Dense\\ Conv\\ RNN\end{tabular}} & \multicolumn{1}{c|}{\textbf{0.73}} & \multicolumn{1}{c|}{\textbf{0.82}} & \multicolumn{1}{c|}{\textbf{0.76}} & \multicolumn{1}{c|}{\textbf{0.75}} & \multicolumn{1}{c|}{\textbf{0.74}} \\ \hline \end{tabular} \caption{\label{tab:mamba} Comparing the performance of different MambaNet versions in five years of NBA Playoffs from the 2017-2018 season (17-18) to the 21-22 season. (R: Row, FS: Feature Source, FC: Feature Count, IF: Imitating Feature)} \end{table} \egroup \section{Conclusion} \label{sec:majhead} In this work, we tackled playoff basketball game outcome prediction from a signal processing standpoint. We introduced MambaNet, which incorporated historical player and team statistics and represented them through signal feature imitation using FINs. To compare our method with the baseline, we used NBA and Iranian Super League data which enabled us to demonstrate the performance and generalizability of our method. Future studies will potentially use fusion techniques or other suitable data modeling techniques, such as graphs, to develop more advanced neural networks that integrate team and player representations more efficiently to predict playoff outcomes more accurately. \bibliographystyle{IEEEbib}
\section{Introduction} \input{section-introduction} \section{Model}\label{sec.model} \input{section-model} \section{Data Set} \input{section-data-set} \newpage \section{Comparing the Three Models} \input{section-comparing-three-models} \section{Functional Ratings}\label{sec.funRating} \input{section-functional-ratings} \section{Conclusion} \input{section-conclusion} \newpage \bibliographystyle{plain} \subsection{Errors} The data set contains three types of errors. The first error is for only one game, Jackson State at Alabama A\&M on January 5, 2019. This game did not have a public scoring summary on any website. Therefore, this scoring summary has only four data points, obtained from the box score: the beginning of the game, the halftime score, the end of regulation score, and the end of overtime score. The second error comes from the final entry in the scoring summary not matching the correct final game score. In these cases, the scoring summary was missing a scoring play, but we were unable to find the exact time in the game where the scoring play was missed. To ensure the final score was correct, we added the missing score as the last data point in our scoring summary. \newpage The final error is when multiple scoring plays occur at the same time. For example, Table~\ref{tab.scoringsummary} shows the first ten scoring plays for North Alabama at Samford on November 6, 2018. The play-by-play indicates that North Alabama makes two free throws at 40~s into the game; play continues with multiple missed shots, rebounds, and other scoring plays while the time remains at 40~s. Finally, the time is updated when North Alabama makes a layup at 253~s into the game. The teams' scoring functions show North Alabama having 7 points from 40~s to 252~s and Samford having 12 points over that same time period.
The result in the model is that Samford is credited with a 5-point advantage over the entire time interval, when they are actually only winning by 5 points on a shorter time interval. Since we are unable to account for when the actual scoring plays happened, we did not modify these situations. \begin{table} \parbox{.45\textwidth}{ \begin{center} \begin{tabular}{c|c|c} Time (s) & North Alabama & Samford \\\hline 30 & 2 & 0\\ 40 & 3 & 0\\ 40 & 4 & 0\\ 40 & 4 & 2\\ 40 & 4 & 5\\ 40 & 4 & 7\\ 40 & 7 & 7\\ 40 & 7 & 10\\ 40 & 7 & 12\\ 253 & 9 & 12 \end{tabular} \end{center} \caption{First ten scoring plays for North Alabama at Samford on November 6, 2018.} \label{tab.scoringsummary} } \hspace{.5in} \parbox{.45\textwidth}{ \begin{center} \begin{tabular}{c|c|c} Time (s) & LSU & Florida \\\hline 2159 & 63 & 60\\ 2175 & 63 & 63\\ 2175 & 63 & 64\\ 2175 & 63 & 65\\ 2175 & 63 & 66 \end{tabular} \end{center} \caption{A few scoring plays for LSU vs. Florida on March 15, 2019.} \label{tab.scoringsummary2} } \end{table} \paragraph{Note.} Not all scoring plays that happen at the same time occur in error. This will be the situation for free throws, since the clock is not running during the free throw attempt. Large score changes at a given time are also possible if a basket is made and a foul is called on the play. For example, Table~\ref{tab.scoringsummary2} shows a few scoring plays for LSU vs. Florida on March 15, 2019. At 2175~s into the game, Florida made a three-point shot and a foul on LSU occurred during the play. There was also a technical foul called on LSU, resulting in a total of four free throw attempts for Florida. Florida made three of the four free throws, resulting in a six-point play. \subsection{Functional to Scalar Ratings}\label{sec.func2scalar} How can we create a ranking system based on our functional data?
A logical choice is to find the (weighted) average functional rating for each team: \begin{equation}\label{eq.scalar} \frac{\int_{0}^{T} w(t)\beta_i(t)dt}{\int_{0}^{T}w(t)dt} \end{equation} Using this scalar value, we can put the teams in rank order. If $w(t)=1$ for all $t$, the scalar rating is simply the average rating over the entire game. Of course, there are many different weights we could use, depending on what properties one values. For example, we could make the argument that the closer two teams are to the end of the game, the more impact the score differential should have on their rating (i.e., we want to favor teams that are better at the end of games). A simple choice would be to use an increasing linear function, but we can expand this idea to any non-decreasing weight function. Table~\ref{Rankings} shows the ranking results for a few selected teams. The table includes the scalar rating using $w(t)=1$ for all $t$ in~\eqref{eq.scalar}, the corresponding rank, and also the rank if only the end-of-game rating is used to rank the teams. Comparing the two rankings, the top ten teams are the same, but in a different order. More movement occurs for middling teams. Central Florida and Illinois both see a 20-position decrease when only looking at the end of the game. This corresponds to their functional ratings leveling off towards the end of the game, as shown in Figure~\ref{fig.5teams}. On the other hand, Stanford's ranking increases by 40 positions when only considering the end of the game. Again, this corresponds to their functional rating improving greatly towards the end of the game.
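On a sampled time grid, equation~\eqref{eq.scalar} is simply a ratio of numerical integrals. A minimal numpy sketch with a toy rating curve (illustrative values only, not data from Table~\ref{Rankings}):

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal-rule integral of samples y over grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def scalar_rating(t, beta, w):
    """Weighted average of a functional rating: int(w*beta) / int(w)."""
    return trapezoid(w * beta, t) / trapezoid(w, t)

t = np.linspace(0, 2400, 241)      # a 40-minute game, sampled in seconds
beta = 5 + 3 * t / 2400            # toy team whose rating rises late in games

flat = scalar_rating(t, beta, np.ones_like(t))  # w(t) = 1: plain average
late = scalar_rating(t, beta, t / 2400)         # increasing linear weight
# late > flat: the increasing weight rewards this team's strong finish.
```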
\begin{table} \parbox{.5\textwidth}{ \begin{center} \begin{tabular}{r|c|c|c} Team & \shortstack{Scalar \\ Rating} & Rank & \shortstack{End of \\ Game \\ Ranking} \\\hline Gonzaga & 14.98 & 1 & 1\\ Duke & 13.31 & 2 & 2\\ Virginia & 12.99 & 3 & 3\\ North Carolina & 12.77 & 4 & 5\\ Michigan & 12.45 & 5 & 7\\ Michigan State & 11.87 & 6 & 4\\ Tennessee & 11.71 & 7 & 9\\ Purdue & 11.63 & 8 & 10\\ Texas Tech & 11.30 & 9 & 6\\ Kentucky & 10.64 & 10 & 8\\ Central Florida & 8.16 & 22 & 40\\ Illinois & 5.58 & 52 & 72\\ Stanford & 1.78 & 132 & 92\\ Northern Colorado & -0.24 & 167 & 191\\ Chicago State & -12.89 & 353 & 352 \end{tabular} \caption{Selected teams' scalar ratings computed using~\eqref{eq.scalar} with $w(t)=1$, the corresponding rank, and the rank based on the end-of-game rating.} \label{Rankings} \end{center} } \hspace{.4in} \parbox{.4\textwidth}{ \begin{center} \begin{tabular}{r|c|c} Team & \shortstack{Scalar \\ SOS} & Rank \\\hline Kansas & 6.10 & 1 \\ Michigan State & 6.00 & 2 \\ Purdue & 5.91 & 3 \\ Oklahoma & 5.87 & 4 \\ Duke & 5.80 & 5 \\ Penn State & 5.72 & 6 \\ North Carolina & 5.63 & 7 \\ Minnesota & 5.60 & 8 \\ Florida & 5.58 & 9 \\ Nebraska & 5.48 & 10 \\ Illinois & 5.34 & 13 \\ Central Florida & 2.75 & 62 \\ Stanford & 2.18 & 75 \\ Northern Colorado & -2.71 & 316 \\ Morgan State & -5.01 & 353 \end{tabular} \caption{Scalar strength of schedule computed using~\eqref{eq.scalar} with $w(t)=1$ for some selected teams.} \label{tbl.sos} \end{center} } \end{table} \newpage \subsection{Interpreting Functional Ratings} A nice interpretation of the functional ratings is obtained by examining the normal equations, \[X^\top X \bm{\beta}(t) = X^\top \bm{d}(t).\] Solving equation $i$ for team $i$'s functional rating shows that it can be split into two parts: \begin{equation} \beta_i(t) = \bar{d}_i(t) + \text{sos}_i(t), \end{equation} where $\bar{d}_i(t)$ is the team's average point differential and $\text{sos}_i(t)$ is the team's strength of schedule.
Team $i$'s {\it average point differential} is defined to be \begin{equation} \bar{d}_i(t) = \frac{1}{m_i}\sum_{k \in G_i}x_{ki}d_k(t), \end{equation} where $G_i$ is the set of games that team $i$ played and $m_i$ is the number of games team $i$ played. Recall that $x_{ki} = 1$ if team $i$ is the home team and $x_{ki} = -1$ if team $i$ is the away team. The result is that $x_{ki}d_k(t)$ always gives the point differential for game $k$ from team $i$'s perspective (a positive point differential means team $i$ is winning). Team $i$'s {\it strength of schedule} is defined to be \begin{equation} \text{sos}_i(t) = \frac{1}{m_i}\sum_{j \in T_i}\beta_j(t) - \frac{h_i}{m_i}\alpha(t) + \frac{a_i}{m_i}\alpha(t), \end{equation} where $T_i$ is the set of teams that played team $i$ (an opponent played multiple times is listed once for each game), $h_i$ is the number of home games, and $a_i$ is the number of away games. The first term of $\text{sos}_i(t)$ is the average rating of team $i$'s opponents, the second term decreases the strength of schedule for the fraction of games played at home (indicating an easier schedule), and the last term increases the strength of schedule for the fraction of games played on the road (indicating a harder schedule). The result is that if a team plays more home games than away games, its strength of schedule will be less than its average opponent rating (and vice versa). Table~\ref{tbl.sos} shows the scalar strength of schedule for some selected teams. The scalar strength of schedule is computed using~\eqref{eq.scalar} with $w(t)=1$ and $\text{sos}_i(t)$ rather than $\beta_i(t)$. As expected, the top 10 strengths of schedule are all from major conferences (Big 12, Big 10, and ACC). Figure~\ref{fig.illinois} shows the strength of schedule, average point differential, and functional rating for Illinois. Illinois has a unique functional rating since it is roughly constant after halftime.
Breaking down the rating into strength of schedule and average point differential shows why this happens. On average, Illinois is leading games for the first half and then trailing for the second half. However, the decrease in point differential during the second half is countered by the increasing strength of schedule, resulting in a roughly constant functional rating. The inflection point in the functional rating occurs at the same time as the inflection point in the average point differential because the strength of schedule is increasing fairly consistently. This trend holds for most strength-of-schedule curves, so inflection points in the functional rating correspond to inflection points in the average point differential. Figure~\ref{fig.unc} shows the same graph for Northern Colorado. Northern Colorado is an example of a winning team (positive point differential and 19-11 overall record) that plays a weak schedule (negative SOS). As a result, their rating is roughly average (167 out of 353). This example clearly demonstrates that strength of schedule is a factor in determining a team's rating. \begin{figure} \parbox{.45\textwidth}{ \begin{center} \includegraphics[width = 3in]{IllinoisRating.pdf} \end{center} \caption{Illinois functional rating, strength of schedule, and average point differential.} \label{fig.illinois} } \hspace{.4in} \parbox{.45\textwidth}{ \begin{center} \includegraphics[width = 3in]{ColoradoRating.pdf} \end{center} \caption{Northern Colorado functional rating, strength of schedule, and average point differential.} \label{fig.unc} } \end{figure} \begin{figure} \begin{center} \includegraphics[width = 3in]{DukevsVirginia.pdf} \end{center} \caption{Predicted game between Duke and Virginia; the curve is filled with a team's color when that team is predicted to be winning.
} \label{DukevsVirginia} \end{figure} \newpage \subsection{Predictions} By simply subtracting two teams' functional ratings at every time, we get a prediction for a game between them at a neutral site. Figure~\ref{DukevsVirginia} shows an example of a prediction for Virginia versus Duke (two highly rated teams). We notice that the predictions are probably not the most accurate representation of an actual game, since an actual game would typically have larger runs for each team. Our model predictions represent the average point differential if many games were played between the two teams. Further study is needed to determine if the functional ratings can be used to simulate an actual game. \subsection{Basic Model} Suppose a set of $n$ teams play a total of $m$ games and we are interested in determining a ranking for the teams based not only on the final outcome of each game but also on the point differential throughout the game. Let $T$ be the length of a game, $s_i(t)$ be team $i$'s score at time $t \in [0,T]$, and $\beta_i(t)$ be team $i$'s functional rating. If team $i$ plays team $j$, the model seeks to equate the difference of their functional ratings to the point differential ($d(t) = s_i(t) - s_j(t)$) in the game. The model equation for a game is \begin{equation}\label{eq.model1} \beta_i(t) - \beta_j(t) = d(t). \end{equation} In the model, $\beta_1(t),\ldots,\beta_n(t)$ are unknown parameters that we will estimate using the method of least squares. Let $\bm{\beta}(t) = (\beta_1(t),\ldots,\beta_n(t))^\top$, $\bm{d}(t) = (d_1(t),\ldots,d_m(t))^\top$, and $X$ be an $m\times n$ design matrix. We use the notation that $x^{}_{ki}$ is the element in the $k$-th row and $i$-th column of $X$ and define $X$ such that \[x^{}_{ki} = \begin{cases} 1 , & \text{if $i$ is the home team for game $k$,} \\ -1 ,& \text{if $i$ is the away team for game $k$,} \\ 0 , & \text{otherwise.}\\ \end{cases}\] It does not matter which team is designated the home team if a game is played on a neutral site.
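As a minimal illustration of this construction (our own sketch; the function and variable names below are not from the paper), the design matrix can be built from a list of (home, away) team-index pairs:

```python
import numpy as np

def design_matrix(games, n_teams):
    """Build the m x n design matrix: +1 in the home team's column,
    -1 in the away team's column, 0 elsewhere."""
    X = np.zeros((len(games), n_teams))
    for k, (home, away) in enumerate(games):
        X[k, home] = 1.0
        X[k, away] = -1.0
    return X

# Three teams, two games: team 0 hosts team 1, then team 1 hosts team 2.
X = design_matrix([(0, 1), (1, 2)], n_teams=3)
# X == [[ 1, -1,  0],
#       [ 0,  1, -1]]
```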
We will address home-court advantage in Section~\ref{sec.mod}. The linear system of equations can be expressed compactly as \[ X \bm{\beta}(t) = \bm{d}(t).\] The model is a natural extension of traditional least squares rating systems~\cite{harville.JASA.1980,harville.AS.1994,massey1997,stefani.IEEE.1977,stefani.IEEE.1980} that use a linear model to relate team ratings to the outcome of the games. In those models, the dependent variable is a scalar quantity, such as the margin of victory or a score that depends on the margin of victory. In our model, the dependent variable is a function of time, so we obtain a rating for each team as a function of time. We will discuss how to use the functional rating to determine a ranking in Section~\ref{sec.func2scalar}, but the functional rating will also be of interest in its own right. We will demonstrate that most of the modifications and analysis for the scalar models can be replicated for the functional models. We will also show how to use the functional ratings to gain additional insight into the teams. \subsection{Modifications}\label{sec.mod} There are two different modifications to account for home-court advantage. Our first modification assumes the home-court advantage is the same for all teams. The model equation for a game is \begin{equation}\label{eq.model2} d(t) = \begin{cases} \beta_i(t) - \beta_j(t) + \alpha(t), & \text{if team $i$ is the home team} \\ \beta_i(t) - \beta_j(t), & \text{if the game is played on a neutral site.} \\ \end{cases} \end{equation} The difference between two teams' functional ratings is the expected point differential at a neutral site, and $\alpha(t)$ is the additional points for the home team when the game is played on their court. The design matrix for this model is obtained by adding a column to the design matrix of the basic model. Let $X$ be the $m\times (n+1)$ design matrix.
The first $n$ columns are the same as in the basic model and the $(n+1)$-th column is defined by \[x^{}_{k(n+1)} = \begin{cases} 0 , & \text{if game $k$ is played on a neutral site,} \\ 1 ,& \text{if game $k$ is played on team $i$'s home-court.} \\ \end{cases}\] If we define $\bm{\beta}(t) = (\beta_1(t),\ldots,\beta_n(t),\alpha(t))^\top$, the matrix equation has the same form as the basic model \[ X \bm{\beta}(t) = \bm{d}(t).\] The second modification has a specific home-court advantage for each team, \begin{equation}\label{eq.model3} \beta_i(t) - \beta_j(t) + \alpha^{}_i(t)= d(t). \end{equation} The design matrix for this model is again obtained by adding columns to the design matrix of the basic model. Let $X$ be the $m\times (2n)$ design matrix. The first $n$ columns are the same as in the basic model and the $(n+i)$-th column is defined by \[x^{}_{k(n+i)} = \begin{cases} 1 ,& \text{if game $k$ is played on team $i$'s home-court,} \\ 0 , & \text{otherwise.} \\ \end{cases}\] The case $x_{k(n+i)} = 0$ includes both neutral site games and games in which team $i$ is not playing. If we define $\bm{\beta}(t) = (\beta_1(t),\ldots,\beta_n(t),\alpha_1(t),\ldots,\alpha_n(t))^\top$, then the matrix equation again has the same form as the basic model \[ X \bm{\beta}(t) = \bm{d}(t).\] These modifications have been used in the past for ranking systems~\cite{harville.AS.1994,stefani.IEEE.1980}. Again, the main difference in our model is that the dependent variables and parameters are functions rather than scalars. \subsection{Overtime Games} All of the models assume each game has the same length, which poses a problem for overtime games. We decided to remove overtime data; therefore, overtime games end in a tie. Another option could have been to extend all games to the longest game in the data set; however, this may give undesirable weight to the final score since games that ended in regulation would record the final score for all possible overtime periods.
Further study is needed to determine how to incorporate overtime data in the model. \subsection{Constraints} The design matrix in the basic model is not full rank because the null space has dimension at least 1, which is readily seen because $X\bm{v} = 0$ when $\bm{v} = (1,1,1,...,1)^\top$. The design matrices for the other models are also not full rank by a similar argument. Therefore, there is no unique least squares solution, and we are free to add a constraint. We use the constraint that the average team rating is the zero function, that is \begin{equation} \sum_{i = 1}^{n}\beta_i(t) = 0, \; \forall \; t. \end{equation} In general, we can choose a constraint that would set any team's rating to a constant of our choice. This sets the teams' ratings relative to the chosen constraint, and is just a shift of the ratings, with no impact on the rankings. A logical choice could be to set the worst team to be the zero function so that all other teams' ratings are positive. Another option is to set the constraint to be the average score of all of the games at any time, which shifts our ratings so that they can be interpreted as the average score for a team at any time. The dimension of the null space of $X$ is exactly 1 if enough games have been played so that all of the teams are connected. If not enough games are played, then a rating comparing all teams is not possible and one can only compare teams within a connected subset. We will always assume enough games are played so that a rating comparing all teams is valid. \subsection{Least Squares Solution} Define the {\it residual} vector-valued function to be $\bm{r}(t) = X\bm{\beta}(t) - \bm{d}(t)$. The least squares solution seeks to minimize the squared $L^2$-norm of the residual, \[ || \bm{r} ||^2 = \int_{0}^{T} \sum_{i=1}^{m} r_i^2(t) dt = \int_{0}^{T} \bm{r}(t)^\top \bm{r}(t) dt = \int_{0}^{T} (X\bm{\beta}(t) - \bm{d}(t))^\top (X\bm{\beta}(t) - \bm{d}(t)) dt.
\] Ramsay and Silverman~\cite[Chapter~13]{ramsay.springer.2005} discuss various solution methods. We use pointwise minimization on the raw data, which are discretized at every second of the game. The solution for this approach is obtained by finding the traditional least squares solution at each second. By using the raw data, there is no loss of information prior to minimizing. In Section~\ref{sec.funRating}, we will smooth the functional ratings for easier interpretation of the results.
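A minimal sketch of this pointwise procedure, assuming the per-second point differentials of the $m$ games are stacked in an $m \times T$ array (our own illustration, not the paper's code; for the basic model the all-ones vector spans the null space, so shifting the least squares solution to have zero mean enforces the constraint without changing the residual):

```python
import numpy as np

def functional_ratings(X, D):
    """Solve the least squares problem independently at each sampled second
    (the columns of D), then shift the ratings so they sum to zero at every
    time, matching the identifiability constraint of the basic model."""
    B, *_ = np.linalg.lstsq(X, D, rcond=None)  # n x T matrix of ratings
    return B - B.mean(axis=0, keepdims=True)   # enforce sum_i beta_i(t) = 0

# Two teams, one game (team 0 designated home), sampled at two seconds.
X = np.array([[1.0, -1.0]])
D = np.array([[4.0, 6.0]])    # team 0 leads by 4, then by 6
B = functional_ratings(X, D)  # -> [[ 2.,  3.], [-2., -3.]]
```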
\section{Introduction} \label{sec:intro} Action detection and classification are among the main challenges in computer vision~\cite{PeChapSpringer:2021}. Over the last few years, the number and complexity of datasets dedicated to action classification have drastically increased~\cite{PeThesis2020}. Sports video analysis is one branch of computer vision, and applications in this area range from intelligent multimedia devices with user-tailored digests to the analysis of athletes' performance~\cite{LenhartL:2018, TT:TTNet:2020, Sport:EvaluationMotion:2017}. A large amount of work is devoted to the analysis of sports gestures using motion capture systems. However, body-worn sensors and markers could disturb the natural behavior of sports players. This issue motivates the development of methods for game analysis using non-invasive equipment such as video recordings from cameras. The Sports Video Classification project was initiated by the Sports Faculty (STAPS) and the computer science laboratory LaBRI of the University of Bordeaux, and the MIA laboratory of La Rochelle University\footnote{This work was supported by the New Aquitania Region through CRISP project - ComputeR vIsion for Sports Performance and the MIRES federation.}. This project aims to develop artificial intelligence and multimedia indexing methods for the recognition of table tennis activities. The ultimate goal is to evaluate the performance of athletes, with a particular focus on students, in order to develop optimal training strategies. To that aim, the video corpus named \texttt{TTStroke-21} was recorded with volunteer players.
Datasets such as UCF-101~\cite{Dataset:UCF101:2012}, HMDB~\cite{Dataset:HMDB:2011, Dataset:JHMDB:2013}, AVA~\cite{Dataset:AVA:2018} and Kinetics~\cite{Dataset:Kinetics:2017,Dataset:Kinetics600:2018,Dataset:Kinetics700:2019,Dataset:Kinetics700:2020,Dataset:AVA_Kinetics:2020} are being used for action recognition, with, year after year, an increasing number of considered videos and classes. Few datasets focus on fine-grained classification in sports, such as FineGym~\cite{Dataset:Gym:2020} and TTStroke21~\cite{PeMTAP:2020}. To tackle the increasing complexity of the datasets, there are, on the one hand, methods that get the most out of the temporal information: for example, in~\cite{LiuH:2019}, spatio-temporal dependencies are learned from the video using only RGB data. On the other hand, there are methods that combine other modalities extracted from videos, such as the optical flow~\cite{NN:I3DCarreira:2017, NN:Laptev:2018, PeICIP:2019}. The inter-similarity of actions - strokes - in \texttt{TTStroke-21} makes the classification task challenging, and both cited aspects should be used to improve performance. The following sections present this year's Sport task and its specific terms of use. Complementary information on the task may be found on the dedicated page of the MediaEval website\footnote{\url{https://multimediaeval.github.io/editions/2021/tasks/sportsvideo/}}. \section{Task description} \label{sec:task} This task uses the \texttt{TTStroke-21} database~\cite{PeMTAP:2020}. This dataset consists of recordings of table tennis players performing in natural conditions. This task offers researchers an opportunity to test their fine-grained classification methods for detecting and classifying strokes in table tennis videos. Compared to the Sports Video 2020 edition, this year we extend the task with detection and enrich the dataset with new and more diverse stroke samples. The task now offers two subtasks.
Each subtask has its own split of the dataset, leading to different train, validation, and test sets. \par Participants can choose to participate in only one or both subtasks and submit up to five runs for each. The participants must provide one XML file per video file present in the test set for each run. The content of the XML file varies according to the subtask. Runs may be submitted as an archive (zip file), with each run in a different directory for each subtask. Participants should also submit a working notes paper, which describes their method and indicates if any external data, such as other datasets or pretrained networks, was used to compute their runs. The task is considered fully automatic: once the videos are provided to the system, results should be produced without human intervention. Participants are encouraged to release their code publicly with their submission. This year, a baseline for both subtasks was shared publicly~\cite{mediaeval/Martin21/baseline}. \subsection{Subtask 1 - Stroke Detection} Participants must build a system that detects whether a stroke has been performed, whatever its class, and extracts its temporal boundaries. The aim is to distinguish moments of interest in a game (players performing strokes) from irrelevant moments (time between strokes, picking up the ball, having a break…). This subtask can be a preliminary step for later recognizing the stroke that has been performed. \par Participants have to segment the regions where a stroke is performed in the provided videos. The provided XML files contain the stroke temporal boundaries (frame indices of the videos) for the train and validation sets. We invite the participants to fill in an XML file for each test video, in which each stroke should be temporally segmented frame-wise following the same structure. \par For this subtask, the videos are not shared across the train, validation, and test sets; however, the same player may appear in the different sets.
The Intersection over Union (IoU) and Average Precision (AP) metrics will be used for evaluation. Both are usually used for image segmentation but are adapted for this task: \begin{itemize} \item \textbf{Global IoU:} the frame-wise overlap between the ground truth and the predicted strokes across all the videos. \item \textbf{Instance AP:} each stroke represents an instance to be detected. A detection is considered a true positive when the IoU between the prediction and the ground truth is above an IoU threshold. $10$ thresholds from $0.5$ to $0.95$ with a step of $0.05$ are considered, similarly to the COCO challenge~\cite{CocoChallenge2014}. This metric will be used for the final ranking of participants. \end{itemize} \subsection{Subtask 2 - Stroke Classification} This subtask is similar to the main task of the previous edition~\cite{mediaeval/Martin20/task}. This year the dataset is extended, and a validation set is provided. \par Participants are required to build a classification system that automatically labels video segments according to the performed stroke. There are 20 possible stroke classes. The temporal boundaries of each stroke are supplied in the XML files accompanying each video in each set. The XML files dedicated to the train and validation sets contain the stroke class as a label, while in the test set, the label is set to ``\texttt{Unknown}''. Hence, for each XML file in the test set, the participants are invited to replace the default label ``\texttt{Unknown}'' with the stroke class that the participant's system has assigned according to the given taxonomy. \par For this subtask, the videos are shared across the sets following a random distribution of all the strokes with proportions of 60\%, 20\% and 20\% for the train, validation and test sets, respectively. All submissions will be evaluated in terms of global accuracy for ranking, with per-class accuracy also reported.
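As an illustration, the per-stroke IoU underlying the Instance AP metric above can be computed from frame intervals as follows (our own sketch; the function names are not part of the task definition):

```python
def temporal_iou(pred, gt):
    """Frame-wise IoU between a predicted and a ground-truth stroke,
    each given as a (start_frame, end_frame) interval."""
    inter = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# COCO-style sweep of thresholds from 0.5 to 0.95 with a step of 0.05;
# a predicted stroke counts as a true positive at threshold tau if its
# IoU with a ground-truth stroke is at least tau.
thresholds = [0.5 + 0.05 * i for i in range(10)]
iou = temporal_iou((100, 220), (130, 250))  # two overlapping strokes
```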
\par Last year, the best global accuracy (31.4\%) was obtained by~\cite{DBLP:conf/mediaeval/Nguyen-TruongCN20} using a Channel-Separated CNN. \cite{DBLP:conf/mediaeval/MartinBMPM20} was second (26.6\%) using a 3D attention mechanism, and \cite{DBLP:conf/mediaeval/SatoA20} third (16.7\%) using pose information and a cascade labelling method. Improvement has been observed compared to the previous edition~\cite{PeMETask:2019}, where the best accuracy was 22.9\%~\cite{PeMEWork:2019}. This improvement seems to be correlated with various factors such as: i) multi-modal methods, ii) deeper and more complex CNNs capturing spatial and temporal features simultaneously, and iii) class decisions following a cascade method. \section{Dataset description} \label{sec:dataset} The dataset has been recorded at the STAPS using lightweight equipment. It consists of player-centered videos recorded in natural conditions without markers or sensors, see Fig~\ref{fig:dataset}. Professional table tennis teachers designed a dedicated taxonomy. The dataset comprises 20 table tennis stroke classes: eight services, six offensive strokes, and six defensive strokes. The strokes may be divided into two super-classes: \texttt{Forehand} and \texttt{Backhand}. \par All videos are recorded in MPEG-4 format. We blurred the faces of the players in each original video frame using the OpenCV deep learning face detector, based on the Single Shot Detector (SSD) framework with a ResNet base network. A tracking method has been implemented to decrease the false positive rate. The detected faces are blurred, and the video is re-encoded in MPEG-4. \par Compared with the Sports Video 2020 edition, this year the dataset is enriched with new and more diverse video samples. A total of 100 minutes of table tennis games across 28 videos recorded at 120 frames per second is considered. It represents more than $718$~$000$ frames in HD ($1920 \times 1080$).
An additional validation set is also provided for better comparison across participants. This set may be used for training when submitting the results on the test set. Twenty-two videos are used for the Stroke Classification subtask, representing $1017$ strokes randomly distributed in the different sets following the previously given proportions. The same videos are used in the train and validation sets of the Segmentation subtask, and six additional videos, without annotations, are dedicated to its test set. \vspace{-5pt} \begin{figure} \includegraphics[width=.32\linewidth]{images/00000979.png} \includegraphics[width=.32\linewidth]{images/00001028.png} \includegraphics[width=.32\linewidth]{images/00001047.png}\\ \vspace{-5pt} \caption{Key frames of the same stroke from \texttt{TTStroke-21}} \label{fig:dataset} \vspace{-15pt} \end{figure} \section{Specific terms of use} \label{sec:conditions} Although faces are automatically blurred to preserve anonymity, some faces are misdetected, and thus some players remain identifiable. In order to respect the personal data of the players, this dataset is subject to a usage agreement, referred to as {\it Special Conditions}. The complete acceptance of these {\it Special Conditions} is a mandatory prerequisite for the provision of the Images as part of the MediaEval 2021 evaluation campaign. A complete reading of these conditions is necessary; they require the user, for example, to obscure the faces (blurring, black banner) in the videos before use in any publication and to destroy the data by October 1st, 2022. \vspace{-5pt} \section{Discussions} \label{sec:discussion} This year the Sports Video task of MediaEval proposes two subtasks: i)~Detection and ii)~Classification of strokes from videos. Even if the players' faces are blurred, the provided videos still fall under particular usage conditions that the participants need to accept.
Participants are encouraged to share their difficulties and their results, even if they do not seem sufficiently good. All the investigations, even when not successful, may inspire future methods. \begin{acks} Many thanks to the players, coaches, and annotators who contributed to \texttt{TTStroke-21}. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Appendix} \subsection{More Generated Dialogues for Qualitative Comparison} \label{sec:dialsample} In Table \ref{table:dialsamp} and Table \ref{table:dialsamp2}, we list and compare more generated dialogues from different models given a variable number of goals to focus on. The domains, the slots to request (such as phone or postcode), and whether or not the user should make a reservation are specified in the goals. We use bold style to emphasize whether the assigned goals are achieved or missed, such as a request for a correct slot or for a wrong domain. Underlined bold style denotes role confusions. \begin{table*}[b] \fontsize{7.0}{8.4}\selectfont \centering \begin{tabular}{p{.15\textwidth} |p{.25\textwidth} | p{.255\textwidth} | p{.25\textwidth} } \centering {\bf \fontsize{8}{9.5}\selectfont Goals} & \centering {\bf \fontsize{8}{9.5}\selectfont LM + Goal} & \centering {\bf \fontsize{8}{9.5}\selectfont HRED} & \multicolumn{1}{p{.25\textwidth}}{\centering {\bf \fontsize{8}{9.5}\selectfont {G-DuHA}}} \\ \hline Domain: {\bf hotel}\newline User: [name]\newline Reqt: []\newline {\bf Book: True}\newline & U: i am looking for a hotel called the [hotel\_name] .\newline S: [hotel\_name] is a [value\_pricerange] -ly priced guesthouse in the [value\_area] . would you like me to book a room for you ?\newline U: yes , please . i need [value\_count] for [value\_count] people and [value\_count] nights starting from [value\_day] .\newline S: {\bf i have booked you at the [hotel\_name] .} your reference number is [hotel\_reference] . is there anything else i can help you with ?\newline U: no , that s all . thank you .\newline S: you are welcome . have a great day !\newline & U: i am looking for a hotel called the [hotel\_name] .\newline S: [hotel\_name] is a guesthouse in the [value\_area] . it is [value\_pricerange] -ly priced and has [value\_count] stars . would you like to book a room ?\newline U: yes , please .
i need a room for [value\_count] people for [value\_count] nights starting on [value\_day] .\newline S: i am sorry , but i was unable to book that for you . would you like to try a different day or a shorter stay ?\newline U: how about [value\_count] nights ?\newline S: {\bf i was able to book you at the [hotel\_name] .} your reference number is [hotel\_reference] . is there anything else i can help you with ?\newline U: {\bf i am also looking for a train} that leaves after [value\_time] on [value\_day] .\newline ... (Cut Off) $\rightarrow$ {\bf Off the Rails}\newline & U: i am looking for a hotel called the [hotel\_name] .\newline S: [hotel\_name] is a guesthouse in the [value\_area] area . would you like to book a room ?\newline U: yes , i would like to book it for [value\_count] people and [value\_count] nights starting from [value\_day] .\newline S: {\bf booking was successful .} reference number is [hotel\_reference] .\newline U: that s all i need . thank you .\newline S: you are welcome . have a great day !\newline \\ \hline Domain: attraction\newline User: [name]\newline Reqt: [{\bf phone}, type]\newline Book: False\newline\newline Domain: train\newline User: [arriveBy, day, departure, destination]\newline Reqt: [{\bf price}]\newline {\bf Book: False}\newline & U: i am looking for information about [attraction\_name]\newline S: [attraction\_name] is a nightclub in the [value\_area] of town . their {\bf phone number is [attraction\_phone]} .\newline \underline{\bf U: i help you with anything else ?}\newline S: yes , i am also looking for a train that [value\_place] to [value\_place] .\newline U: what day and time would you like to leave ?\newline S: i would like to leave on [value\_day] and arrive by [value\_time] .\newline U: i have [value\_count] trains that that day . [value\_time] . would you like me to book it for you ?\newline S: yes , please . {\bf i need the price} and train id .\newline U: the train is [value\_price] pounds . 
the price is [value\_price] pounds .\newline S: you like me to {\bf book that for you ?}\newline {\bf U: no} , that s all i need . thank you .\newline S: thank you for using our service . have a great day .\newline & U: i am looking for information about [attraction\_name]\newline S: [attraction\_name] is located in the [value\_area] of town at [attraction\_address] .\newline U: what is the entrance fee ?\newline S: i am sorry , i do not have that information .\newline U: i am also looking for a train that leaves after [value\_time] on [value\_day] .\newline S: i have [value\_count] trains that meet your criteria . would you like to narrow it down by departure site , destination , or arrival time ?\newline U: i would like to leave on [value\_day] and arrive by [value\_time] .\newline S: i have train [train\_id] that leaves at [value\_time] and arrives at [value\_time] . would you like me to book that for you ?\newline U: {\bf yes , please book it} for [value\_count] people .\newline S: booking was successful , the total fee is [value\_price] gbp payable at the station . reference number is [train\_reference] .\newline U: thank you . that is all i need .\newline S: you are welcome . have a great day ! \newline & U: i am looking for information about [attraction\_name]\newline S: [attraction\_name] is a museum in the [value\_area] area . would you like the phone number ?\newline U: yes , i would like the postcode and phone number .\newline S: their {\bf phone number is [attraction\_phone]} and the postcode is [attraction\_postcode] .\newline U: i also need a train to [value\_place] .\newline S: where will you be departing from ?\newline U: i will be leaving from [value\_place] on [value\_day] .\newline S: i have [value\_count] trains that meet your criteria . what time would you like to arrive by ?\newline U: i need to arrive by [value\_time] .\newline S: i have train [train\_id] that leaves at [value\_time] and arrives at [value\_time] . 
would you like me to {\bf book that for you ?}\newline {\bf U: no }, that s all i needed . thank you .\newline S: you are welcome . have a great day !\newline \\ \hline \end{tabular} \caption{Dialogue qualitative comparison. Reqt: Requests. U: User, S: System. {\bf Goal hit or miss}. \underline{\bf Role confusion}. } \label{table:dialsamp} \end{table*} \begin{table*}[p!] \fontsize{7.0}{8.4}\selectfont \centering \begin{tabular}{p{.15\textwidth} |p{.25\textwidth} | p{.255\textwidth} | p{.25\textwidth} } \centering {\bf \fontsize{8}{9.5}\selectfont Goals} & \centering {\bf \fontsize{8}{9.5}\selectfont LM + Goal} & \centering {\bf \fontsize{8}{9.5}\selectfont HRED} & \multicolumn{1}{p{.25\textwidth}}{\centering {\bf \fontsize{8}{9.5}\selectfont {G-DuHA}}} \\ \hline Domain: attraction\newline User: [name]\newline Reqt: [address, entrance fee]\newline Book: False\newline\newline Domain: {\bf hotel}\newline User: [name]\newline Reqt: [postcode, stars]\newline {\bf Book: False}\newline\newline Domain: {\bf taxi}\newline User: [leaveAt]\newline Reqt: [car type, phone]\newline Book: False\newline & U: i am looking for [attraction\_name] in [value\_place]\newline S: it is located in the [value\_area] area . it is located on [attraction\_address] .\newline \underline{\bf U: is free .}\newline \underline{\bf S: can i get the phone number ?}\newline U: the , the phone number is [attraction\_phone] .\newline S: i you . {\bf i am also looking for a hotel called [hotel\_name] .}\newline U: [hotel\_name] is a [value\_pricerange] hotel in the [value\_area] area . it is a [value\_count] star rating . would you like me to {\bf book a room for you ?\newline S: no ,} i just need the information .\newline U: you are welcome . 
is there anything else i can help you with today ?\newline S: {\bf i also need a taxi} to commute between the [value\_count] place -s .\newline U: what time would you like to leave ?\newline S: i would like to leave the hotel by [value\_time] .\newline U: i have booked a taxi for you . the contact number is [taxi\_phone] . is there anything else i can help you with ?\newline S: no , that s all i need . thank you for your help !\newline U: you are welcome . have a great day !\newline & U: i am looking for [attraction\_name] in [value\_place]\newline S: i have found [attraction\_name] , it s located in the [value\_area] . it is free to enter . would you like more information ?\newline U: yes , can i get the phone number and postcode ?\newline S: sure , the phone number is [attraction\_phone] and the postcode is [attraction\_postcode] .\newline {\bf U: i am also looking for a train} that leaves on [value\_day] and goes to [value\_place] .\newline S: there are [value\_count] trains that match your request . what time would you like to leave ?\newline U: i would like to leave after [value\_time] .\newline S: i have train [train\_id] that leaves at [value\_time] and arrives at [value\_time] . would you like me to book that for you ?\newline U: yes , please book it for [value\_count] people .\newline S: booking was successful , the total fee is [value\_price] gbp payable at the station . reference number is [train\_reference] .\newline U: thank you so much for your help .\newline S: you are welcome . have a great day !\newline U: thank you , you too .\newline S: thank you for using our service today ! \newline $\rightarrow$ {\bf Off the Rails}\newline & U: i am looking for [attraction\_name] in [value\_place]\newline S: it is located at [attraction\_address] . is there anything else you would like to know ?\newline U: what is the postcode ?\newline S: the postcode is [hotel\_postcode] . 
is there anything else i can help you with ?\newline {\bf U: i am also looking for a hotel called [hotel\_name] .}\newline S: [hotel\_name] is a [value\_pricerange] -ly priced guesthouse in the [value\_area] area . would you like me to {\bf book you a room ?\newline U: no , }i just need the address please .\newline S: the postcode is [hotel\_postcode] . is there anything else i can help you with ?\newline {\bf U: i also need a taxi} to get between the [value\_count] place -s .\newline S: i can help you with that . what time would you like to leave or arrive by ?\newline U: i want to leave the hotel by [value\_time] .\newline S: i have booked you a [taxi\_type] , the contact number is [taxi\_phone] .\newline U: thank you so much for your help .\newline S: you are welcome . is there anything else i can help you with today ?\newline U: no , that s all i need . thank you for your help !\newline S: you are welcome . have a great day . \newline \\ \hline Domain: hotel\newline User: [name]\newline Reqt: [address]\newline {\bf Book: False}\newline\newline Domain: {\bf attraction}\newline User: [area]\newline Reqt: [address, phone, type]\newline Book: False\newline & U: hello ! i am looking for information about [hotel\_name] . it s a hotel in [value\_place] .\newline S: the is a guesthouse in the [value\_area] part of town . it is [value\_pricerange] -ly priced and has [value\_count] stars . would you like me to book a room ?\newline {\bf U: no , i do not need a reservation .} i you .\newline S: you are welcome . is there anything else i can help you with today ?\newline U: yes , i am {\bf looking for a place called [attraction\_name] .}\newline S: [attraction\_name] is a museum in the [value\_area] area . it is located at [attraction\_address] . their phone number is [attraction\_phone] .\newline \underline{\bf U: i help you with anything else ?}\newline S: no , that s all i need . thank you .\newline U: you are welcome . have a great day !\newline & U: hello ! 
i am looking for information about [hotel\_name] . it s a hotel in [value\_place] .\newline S: [hotel\_name] is a guesthouse in the [value\_area] . it is [value\_pricerange] -ly priced and has [value\_count] stars . would you like to book a room ?\newline U: yes , please . i need a room for [value\_count] people for [value\_count] nights starting on [value\_day] .\newline S: i am sorry , but i was unable to book that for you . would you like to try a different day or shorter stay ?\newline U: how about [value\_count] nights ?\newline {\bf S: i was able to book you} at the [hotel\_name] . your reference number is [hotel\_reference] . is there anything else i can help you with ?\newline {\bf U: i am also looking for a train} that leaves after [value\_time] on [value\_day] .\newline ... (Cut Off) $\rightarrow$ {\bf Off the Rails}\newline & U: hello ! i am looking for information about [hotel\_name] . it s a hotel in [value\_place] .\newline S: [hotel\_name] is a guesthouse in the [value\_area] area . it is a [value\_count] star hotel in the [value\_area] area . would you like me to book a room for you ?\newline {\bf U: no , i just need the address} and phone number .\newline S: sure , the phone number is [attraction\_phone] .\newline {\bf U: i am also looking for a place to go in town} .\newline S: there are [value\_count] attractions in the [value\_area] area . do you have a particular type of attraction in mind ?\newline U: i am not sure . i just need the address and postcode for [value\_count] of them .\newline S: [attraction\_name] is located at [attraction\_address] and the postcode is [attraction\_postcode] .\newline U: no , that s all i need . thank you for your help !\newline S: you are welcome . 
have a great day !\newline \\ \hline Domain: train\newline User: [day, departure, destination, leaveAt]\newline Reqt: [{\bf duration, price}]\newline {\bf Book: False}\newline & U: i am looking for a train from [value\_place] to [value\_place] .\newline S: there are [value\_count] trains that meet your criteria . what day and time would you like to travel ?\newline U: i would like to leave on [value\_day] after [value\_time] .\newline S: i have [value\_count] trains that leave [value\_place] at [value\_time] . would you like me to {\bf book it for you ?\newline U: yes ,} please . i need {\bf the price and travel time please .}\newline S: the price is [value\_price] pounds . the price is [value\_price] pounds .\newline \underline{\bf U: you like me to book that for you ?}\newline S: no , that s all i need . thank you .\newline U: you are welcome . have a great day !\newline & U: i am looking for a train from [value\_place] to [value\_place] .\newline S: there are [value\_count] trains that meet your criteria . what day and time would you like to travel ?\newline U: i would like to leave on [value\_day] and arrive by [value\_time] .\newline S: i have train [train\_id] that leaves at [value\_time] and arrives in [value\_place] at [value\_time] . would you like me to {\bf book that for you ?\newline U: yes ,} please . i need [value\_count] tickets .\newline S: i have booked [value\_count] tickets for you . your reference number is [train\_reference] .\newline U: i am also looking for a place to stay . {\bf i would like a [value\_count] star hotel .}\newline ... (Cut Off) $\rightarrow$ {\bf Off the Rails}\newline & U: i am looking for a train from [value\_place] to [value\_place] .\newline S: what day would you like to travel ?\newline U: i would like to leave on [value\_day] .\newline S: there are [value\_count] trains that meet your criteria . 
do you have a time you would like to leave or arrive by ?\newline U: i would like to leave after [value\_time] .\newline S: [train\_id] leaves at [value\_time] and arrives at [value\_time] . would you like me to {\bf book it for you ?\newline U: no ,} i just need {\bf the price} and the train id .\newline S: the price is [value\_price] pounds .\newline U: thank you for your help . that is all i need today .\newline S: you are welcome . have a great day ! \newline \\ \hline \end{tabular} \caption{Dialogue qualitative comparison (continued). Reqt: Requests. U: User, S: System. {\bf Goal hit or miss}. \underline{\bf Role confusion}. } \label{table:dialsamp2} \end{table*} \subsection{More Generated Responses for Qualitative Comparison} \label{sec:respsample} Responses are generated by a model given the goals and all the previous utterances. In Table \ref{table:respsample}, we present more generated responses for qualitative comparison. We observe that HRED is unable to correctly switch between goal contexts, as illustrated by the second example. \begin{table*}[b] \fontsize{8}{9.5}\selectfont \centering \begin{tabular}{p{.14\textwidth} |p{.48\textwidth} | p{.3\textwidth} } \centering {\bf Goals} & \centering {\bf Contexts (Reference)} & \multicolumn{1}{p{.3\textwidth}}{\centering {\bf Response}} \\ \hline Domain: attraction\newline User: [type]\newline Reqt: [entrance fee, address, phone]\newline Book: False\newline\newline Domain: restaurant\newline User: [pricerange, food]\newline Reqt: [phone, postcode]\newline Book: False & U: can you find me a theatre ?\newline S: would you like the [attraction\_name] ?\newline U: yes and i need to know the entrance fee , address , and phone number\newline S: the fee is not listed but the address is [attraction\_address] and the phone number is [attraction\_phone] .
can i help you with anything else ?\newline U: i am also looking for a [value\_food] restaurant in the [value\_pricerange] range .\newline S: i have [value\_count] excellent options . [restaurant\_name] is in the [value\_area] of town , would you like me to reserve you a table ?\newline U: i am not sure how many people will be coming , so i will do that later. but can you provide me with their postcode and phone number ? & {\bf LM+G}: the phone number is [restaurant\_phone] and the postcode is [restaurant\_postcode] .\newline{\bf HRED}: the phone number is [restaurant\_phone] and the postcode is [restaurant\_postcode] .\newline {\bf {G-DuHA}}: sure , the phone number is [restaurant\_phone] and the postcode is [restaurant\_postcode] . is there anything else i can help you with ? \\ \hline Domain: train\newline User: [day, departure, destination, leaveAt]\newline Reqt: []\newline Book: True\newline\newline Domain: attraction\newline User: [area, type]\newline Reqt: [phone, postcode]\newline Book: False\newline & U: i am looking for a train that leaves on [value\_day] going to [value\_place] .\newline S: there are [value\_count] trains that meet what you are asking . where would you like to depart from ?\newline U: [value\_place] , it should also leave after [value\_time] .\newline S: [train\_id] departs [value\_place] at [value\_time] and arrives in [value\_place] at [value\_time] . will that work ?\newline U: yes it will . can you book that for just me , and provide the reference number ?\newline S: sure thing ! how many tickets do you need ?\newline U: i need [value\_count] ticket and can i please have the reference number ?\newline S: booking was successful , the total fee is [value\_price] gbp payable at the station . reference number is [train\_reference] . can i provide any further help today ? & {\bf LM+G}: i am also looking for a place to go in the [value\_area] . \newline{\bf HRED}: i am also looking for a place to stay . 
i would like a hotel in the [value\_area] . \newline {\bf {G-DuHA}}: i am also looking for a place to go in town . \\ \hline Domain: hotel\newline User: [area, internet, parking, stars]\newline Reqt: [postcode, pricerange, type]\newline Book: False\newline\newline Domain: restaurant\newline User: [area, food, pricerange]\newline Reqt: []\newline Book: True & U: hi i am looking for a restaurant in the [value\_area] of town serving [value\_food] food .\newline S: that s great ! there are several [value\_food] restaurant -s near the city [value\_area] . are you looking for a budget @UNK option or are you comfortable with a more [value\_pricerange] place ?\newline U: a [value\_pricerange] option please .\newline S: we have [restaurant\_name] , the ghandi and [restaurant\_name] , do you want me to book you for [value\_count] of those ?\newline U: yes please for [value\_count] people at [value\_time] on [value\_day].\newline S: you are all set . they will hold the table for [value\_count] minutes and your reference number is [hotel\_reference]\newline U: thank you , appreciate your service .\newline S: no problem . is there anything else i can help you with ? & {\bf LM+G}: i am also looking for a place to stay . i would like a [value\_count] star hotel in the [value\_area] . \newline{\bf HRED}: no , that s all . thank you . \newline {\bf {G-DuHA}}: i am also looking for a place to stay . \\ \hline Domain: taxi\newline User: [destination, arriveBy, departure]\newline Reqt: [car type, phone]\newline Book: False & U: i need to book a tax departing from gandhi .\newline S: what is your destination and when would you like to depart ? & {\bf LM+G}: i need to arrive by [value\_time] . \newline{\bf HRED}: i am going to [value\_place] from [value\_place] . \newline {\bf {G-DuHA}}: i want to go to [value\_place] and get to [restaurant\_name] by [value\_time] . 
\\ \hline Domain: attraction\newline User: [area]\newline Reqt: [address]\newline Book: False \newline \newline Domain: hotel\newline User: [pricerange, internet, stars]\newline Reqt: []\newline Book: True & U: hello , i am looking for place -s to go in the [value\_area] .\newline S: there are many attractions in the [value\_area] like museums , architecture , boating , and @UNK . what are you interested in ?\newline U: how about a boating attraction ?\newline S: there are [value\_count] in the [value\_area] of town . [hotel\_name] , and [hotel\_name] . would either of those interest you ?\newline U: could you give me the address for [attraction\_name] , please ? i also need a place to stay , preferably somewhere [value\_pricerange] .\newline S: sure , [attraction\_name] is @UNK at [attraction\_address] . the [hotel\_name] is in the [value\_pricerange] price range with [value\_count] stars . would you like to book a room ?\newline U: does it offer free wifi internet ? & {\bf LM+G}: yes , it does . would you like me to book a room for you ? \newline{\bf HRED}: yes , it does have free parking . \newline {\bf {G-DuHA}}: yes , it does . \\ \hline \end{tabular} \caption{Qualitative comparison of generated responses. Reqt: Requests. U: User, S: System. } \label{table:respsample} \end{table*} \section{Introduction} Modeling a probability distribution over word sequences is a core topic in natural language processing, with language modeling being a flagship problem mostly tackled via recurrent neural networks (RNNs) \cite{mikolov2012context, melis2017state, Merity2018RegularizingAO}. Recently, dialogue modeling has drawn much attention, with applications to response generation \cite{Serban2016MultiresolutionRN, Li2016DeepRL, asghar2018affective} or data augmentation \cite{Yoo2019DataAF}. It is inherently different from language modeling, as a conversation is conducted in a turn-by-turn manner.
\cite{Serban2016BuildingED} imposes a hierarchical structure on the encoder-decoder to model these utterance-level and dialogue-level structures, followed by \cite{Serban2016AHL, chen2018hierarchical, le2018variational}. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{figures/motivation.pdf} \caption{On-goal dialogues follow the given goals such as no booking of train tickets and a hotel reservation. Off-goal dialogues have context switches to other domains irrelevant to the goals.} \label{fig:motivation} \end{figure} However, when modeling dialogues in which two interlocutors center around one or more goals, these systems generate utterances with the greatest likelihood but without any mechanism for sticking to the goals. This makes them go off the rails and fail to model the context-switching of goals. Most of the generated conversations become off-goal dialogues, with utterances that are irrelevant or contradictory to the goals, rather than on-goal dialogues. The differences are illustrated in Figure \ref{fig:motivation}. Moreover, the two interlocutors in a goal-oriented dialogue often play distinct roles, as one has requests or goals to achieve and the other provides necessary support. Modeled by a single hierarchical RNN, this interlocutor-level disparity is neglected, and constant context switching of roles could reduce the capacity for tracking conversational flow and long-term temporal structure. To resolve these issues when modeling goal-oriented dialogues, we propose the Goal-Embedded Dual Hierarchical Attentional Encoder-Decoder (G-DuHA), which tackles the problems via three key features. First, the goal embedding module summarizes one or more goals of the current dialogue as goal contexts for the model to focus on across a conversation. Second, the dual hierarchical encoder-decoders naturally capture interlocutor-level disparity and represent the interactions of the two interlocutors.
Finally, attention mechanisms are introduced at the word and dialogue levels to learn temporal dependencies more easily. Our main contribution is the goal-embedded dual hierarchical attentional encoder-decoder (G-DuHA), the first model able to focus on goals and capture interlocutor-level disparity while modeling goal-oriented dialogues. With experiments on dialogue generation, response generation, and human evaluations, we demonstrate that our model can generate higher-quality, more diverse, and goal-focused dialogues. In addition, we leverage goal-oriented dialogue generation as data augmentation for task-oriented dialogue systems and achieve better performance. \section{Related Work} Dialogues are sequences of utterances, which are themselves sequences of words. For modeling or generating dialogues, hierarchical architectures are usually used to capture their conversational nature. Traditionally, language models have also been used for modeling and generating word sequences. Generated goal-oriented dialogues can in turn serve as data augmentation for task-oriented dialogue systems. We review related work in these fields. \vspace{1.5mm} \noindent {\bf Dialogue Modeling.} To model the conversational context and turn-by-turn structure of dialogues, \cite{Serban2016BuildingED} devised the hierarchical recurrent encoder-decoder (HRED). Reinforcement and adversarial learning were then adopted to improve naturalness and diversity \cite{Li2016DeepRL, Li2017AdversarialLF}. Integrating HRED with latent variable models such as the variational autoencoder (VAE) \cite{Kingma2014AutoEncodingVB} extends another line of advancements \cite{Serban2016AHL, Zhao2017LearningDD, Park2018AHL, Le2018VariationalME}. However, these systems are not designed for task-oriented dialogue modeling, as goal information is not considered. Moreover, conversations between the two interlocutors are captured with a single encoder-decoder in these systems.
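As a schematic illustration of the hierarchical encoder-decoder idea, an utterance-level encoder whose outputs feed a dialogue-level context recurrence can be sketched as follows. This is a toy sketch, not HRED's or our implementation: mean pooling stands in for the utterance RNN, and the vocabulary and dimensions are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size (arbitrary)

# Toy word embeddings for a tiny hypothetical vocabulary.
vocab = {w: rng.normal(size=d)
         for w in "i need a cheap hotel ok it is booked thanks".split()}

def encode_utterance(words):
    # Utterance-level encoder: mean pooling stands in for an RNN encoder.
    return np.mean([vocab[w] for w in words], axis=0)

# Dialogue-level context recurrence: h_t = tanh(W [h_{t-1}; u_t]).
W = rng.normal(scale=0.1, size=(d, 2 * d))

def context_step(h_prev, utt_vec):
    return np.tanh(W @ np.concatenate([h_prev, utt_vec]))

dialogue = [["i", "need", "a", "cheap", "hotel"],
            ["ok", "it", "is", "booked"],
            ["thanks"]]
h = np.zeros(d)
for turn in dialogue:  # one context update per turn
    h = context_step(h, encode_utterance(turn))

print(h.shape)  # (8,)
```

The context state `h` is updated once per turn, so it tracks the whole conversation rather than a single utterance; in HRED this state conditions the decoder of the next response.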
\vspace{1.5mm} \noindent {\bf Language Modeling.} A probability distribution over a word sequence $w_{1:T}=(w_1, w_2, ..., w_T)$ can be factorized as $p(w_1) \prod_{t=2}^{T} p(w_t|w_{1:t-1})$. To approximate the conditional probability $p(w_t|w_{1:t-1})$, count statistics and smoothed N-gram models were used in the past \cite{goodman2001bit, katz1987estimation, kneser1995improved}. Recently, RNN-based models have achieved better performance \cite{mikolov2010recurrent, Jzefowicz2016ExploringTL, Grave2017ImprovingNL, Melis2018OnTS}. As the conversational structure is not explicitly modeled, such models often suffer from role-switching issues. \vspace{1.5mm} \noindent {\bf Task-Oriented Dialogue Systems.} Conventional task-oriented dialogue systems entail a sophisticated pipeline \cite{raux2005let, Young2013POMDPBasedSS} with components including spoken language understanding \cite{chen2016end, Mesnil2015UsingRN, gupta2019simple}, dialogue state tracking \cite{henderson2014word, Mrksic2017NeuralBT}, and dialogue policy learning \cite{su2016line, gavsic2014gaussian}. Building a task-oriented dialogue agent via end-to-end approaches has been explored recently \cite{Li2017EndtoEndTN, Wen2017ANE}. Although several conversational datasets have been published recently \cite{Gopalakrishnan2019, Henderson2019}, the scarcity of annotated conversational data remains a key problem when developing a dialogue system. This motivates us to model task-oriented dialogues with goal information in order to achieve controlled dialogue generation for data augmentation. \section{Model Architecture} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/model.pdf} \caption{{G-DuHA} architecture. The goal embedding module embeds goals as priors for the context RNNs. Dual hierarchical RNNs naturally model the two interlocutors. An attention over previous contexts captures long-term dependencies. For encoders, word attentions are used to summarize the local importance of words.
(Enc: Encoder, Dec: Decoder, Attn: Attention, $U_i$: User's utterance, $S_i$: Agent's utterance)} \label{fig:model} \end{figure*} Given a set of goals and the seed user utterance, we want to generate a goal-centric, or on-goal, dialogue that follows the domain contexts and corresponding requests specified in the goals. In this section, we start with the mathematical formulation, then introduce our proposed model, and describe our model's training objective and inference. At training time, $K$ dialogues $\{D_1, ..., D_K\}$ are given, where each $D_i$ is associated with $N_i$ goals ${\bf g_i} = \{g_{i1}, g_{i2}, ..., g_{iN_i}\}$. A dialogue $D_i$ consists of $M$ turns of utterances between a user ${\bf u}$ and a system agent ${\bf s}$ $({\bf w_{u1}}, {\bf w_{s1}}, {\bf w_{u2}}, {\bf w_{s2}}, ...)$, where ${\bf w_{u1}}$ is a word sequence $w_{u1, 1}, w_{u1, 2}, ..., w_{u1, N_{u1}}$ denoting the user's first utterance. Task-oriented dialogue modeling aims to approximate the conditional probability of the user's or agent's next utterance given previous turns and goals. It can be further decomposed over generated words, e.g. \begin{equation} \begin{split} P({\bf w_{um}}|{\bf w_{u1}}, {\bf w_{s1}}, ..., {\bf w_{s(m-1)}}, {\bf g_i}) = \\ \quad \prod_{n=1}^{N_{um}} P(w_{um, n}|w_{um, <n}, {\bf w_{u1}}, ...) \end{split} \end{equation} To model goal-oriented dialogues between two interlocutors, we propose the Goal-embedded Dual Hierarchical Attentional Encoder-Decoder (G-DuHA) as illustrated in Fig. \ref{fig:model}. Our model comprises a goal embedding module, dual hierarchical RNNs, and attention mechanisms, detailed below. \subsection{Goal Embedding Module} We represent each goal in $\{g_{i1}, g_{i2}, ...\}$ using a simple and straightforward binary or multi-one-hot encoding followed by a feed-forward network (FFN). A goal could have a specific domain such as hotel or a request such as price or area; Table \ref{table:dialsample} shows a few examples.
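A minimal sketch of this multi-one-hot goal encoding, with per-goal embeddings summed element-wise, follows. The slot vocabulary and the single random linear layer standing in for the learned FFN are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical slot vocabulary; the real one comes from the dataset ontology.
SLOTS = ["domain=hotel", "domain=taxi", "req=price", "req=area", "book"]
IDX = {s: i for i, s in enumerate(SLOTS)}

def encode_goal(goal):
    # Multi-one-hot encoding: set a 1 for every slot the goal mentions.
    v = np.zeros(len(SLOTS))
    for slot in goal:
        v[IDX[slot]] = 1.0
    return v

# Stand-in for the learned FFN: one random linear layer plus tanh.
W = rng.normal(scale=0.1, size=(4, len(SLOTS)))

def goal_embedding(goals):
    # Per-goal embeddings are summed element-wise into one final embedding.
    return sum(np.tanh(W @ encode_goal(g)) for g in goals)

g = goal_embedding([["domain=hotel", "req=price", "book"],
                    ["domain=taxi", "req=area"]])
print(g.shape)  # (4,)
```

In the actual model the resulting vector matches the context RNN's hidden size and initializes its state, so every turn is generated with the goals in scope.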
Our goal embedding module, a FFN, then converts the binary encoding of each goal into a goal embedding, where the FFN is learned during training. If multiple goals are present, all goal embeddings are added up element-wise to form the final goal embedding: \vspace{-1mm} \begin{equation} \overrightarrow{g_i} = \sum_{j=1}^{|g_i|} FFN( Encode(g_{ij}) ) \end{equation} The output of the goal embedding module has the same number of dimensions as the context RNN's hidden state and is used as the initial state for all layers of all context RNNs to inform the model about the set of goals to focus on. \subsection{Dual Hierarchical Architecture} The hierarchical encoder-decoder structure \cite{Serban2016BuildingED} is designed for utterance-level and context-level modeling. With a single encoder, context RNN, and decoder, the same module is used to process input utterances, track contexts, and generate responses for both interlocutors. In task-oriented dialogues, however, the roles are distinct, as the user aims to request information or make reservations to achieve goals in mind and the system agent provides the necessary help. To model this interlocutor-level disparity, we extend it into a dual architecture involving two hierarchical RNNs, each serving as one role in a dialogue. \subsection{Attention Mechanisms} At the utterance level, as the importance of words can be context dependent, our model uses the first hidden layer's states of the context RNN as the query for attention mechanisms \cite{Bahdanau2015NeuralMT, Xu2015ShowAA} to build an utterance representation. A feed-forward network is used to compute attention scores, whose input is the concatenation of the query and an encoder output. At the dialogue level, for faster training, our model has a skip connection that adds up the context RNN's raw output with its input as the final output $c_t$. To model long-term dependencies, an attention module is applied to summarize all previous contexts into a global context vector.
Specifically, a feed-forward network takes the current context RNN's output $c_t$ as the query and all previous context outputs from both context RNNs as keys and values to compute attention scores. The global context vector is then concatenated with $c_t$ to form the final context vector for the decoder to consume. \subsection{Objective} For predicting the end of a dialogue, we apply a feed-forward network over the final context vector for a binary prediction. Thus our training objective can be written as \begin{equation} L = \sum_{i = 1}^{K} \Big[ -\log p_{\boldsymbol{\theta}}(D_{i}) + \sum_{t}^{M_i} - \log p_{\boldsymbol{\theta}}^{end}(e_{it}) \Big], \end{equation} where our model $p_{\boldsymbol{\theta}}$ has parameters $\boldsymbol{\theta}$, $M_i$ is the number of turns, and $e_{it}$ is $0$ if the dialogue $D_i$ continues and $1$ if it terminates at turn $t$. \subsection{Generation} At dialogue generation time, a set of goals $\{g_{i1}, g_{i2}, ...\}$ and a user utterance ${\bf w_{u1}}$ as a seed are given. Our model then generates conversations simulating interactions between a user and an agent that seek to complete all given goals. The generation process terminates when the end-of-dialogue prediction outputs a positive or the maximum number of turns is reached. \section{Experiments} We evaluate our approach on dialogue generation and response generation as well as by human evaluation. Ablation studies and an extrinsic evaluation that leverages dialogue generation as a data augmentation method are reported in the subsequent section. \subsection{Dataset} Experiments are conducted on a task-oriented human-human written conversation dataset called MultiWOZ \cite{Budzianowski2018MultiWOZA}, the largest publicly available dataset in the field. Dialogues in the dataset span diverse topics, one or more goals, and multiple domains such as \texttt{restaurant}, \texttt{hotel}, \texttt{train}, etc.
It consists of $8423$ training dialogues and $1000$ validation and $1000$ test dialogues, with on average $15$ turns per dialogue and $14$ tokens per turn. \subsection{Baselines} We compare our approach against four baselines: (i) LM+G: As a long-established method for language generation, we adopt an RNN language model (LM) with a 3-layer 200-hidden-unit GRU \cite{Cho2014LearningPR} incorporating our goal embedding module as a baseline, which has goal information but no explicit architecture for dialogues. (ii) LM+G-XL: To show the possible impact of model size, a larger LM with a 3-layer 450-hidden-unit GRU is adopted as another baseline. (iii) Hierarchical recurrent encoder-decoder (HRED) \cite{Serban2016BuildingED}: As the prominent model for dialogues, we use HRED as a baseline that has a dialogue-specific architecture but no goal information. The encoder, decoder, and context RNN are 2-layer 200-hidden-unit GRUs. (iv) HRED-XL: We also use a larger HRED with 350 hidden units for all GRUs as a baseline to show the impact of model size. \subsection{Implementation Details} In all experiments, we adopt the delexicalized form of dialogues as shown in Table \ref{table:dialsample}, with a vocabulary size, including slots and special tokens, of $4258$. The maximum number of turns and sequence length are capped at $22$ and $36$, respectively. {G-DuHA} uses 2-layer, 200-hidden-unit GRUs for all encoders, decoders, and context RNNs. All feed-forward networks have 2 layers with non-linearity. The FFNs of the encoder attention and end-of-dialogue prediction have $50$ hidden units. The FFNs of the context attention and goal embedding have $100$ and $200$ hidden units, respectively. We simply use greedy decoding for utterance generation. All models initialize embeddings with pre-trained fastText vectors trained on wiki-news \shortcite{mikolov2018advances} and are trained by the Adam optimizer \shortcite{Kingma2015AdamAM} with early stopping to prevent overfitting.
To mitigate the discrepancy between training and inference, we pick predicted or ground-truth utterance as the current input uniformly at random when training. \begin{table*}[t!] \centering \smaller[1] \begin{tabular}{l|c|c c c c | c c c | c c c | c c } \hline {\bf Model} & {\bf Size} & {\bf BLEU} & {\bf B1} & {\bf B2} & {\bf B3} & {\bf D-1} & {\bf D-2} & {\bf D-U} & {\bf P} & {\bf R} & {\bf F1} & {\bf L-D} & {\bf L-U}\\ \hline \hline LM+G & 4.2 M & 6.34 & 23.24 & 12.89 & 8.76 & 0.16 & 0.88 & 23.75 & 89.52 & 82.39 & {\bf 84.89} & 15.1 & 13.0\\ LM+G-XL&8.2 M& 6.22 & 23.10 & 12.75 & 8.63 & 0.16 & 0.93 & {\bf 26.38} & {\bf 90.03} & 81.62 & 84.71 & 14.7 & 14.4\\ HRED & 5.1 M & 5.40 & 21.91 & 11.58 & 7.66 & 0.09 & 0.38 & 3.94 & 69.69 & 66.22 & 65.35 & 17.3 & 14.1\\ HRED-XL & 8.8 M & 5.08 & 20.45 & 10.90 & 7.22 & 0.11 & 0.50 & 5.51 & 68.25 & 65.93 & 65.09 & 15.9 & 16.9\\ {G-DuHA} & 7.8 M & {\bf 7.48} & {\bf 24.38} & {\bf 14.44} & {\bf 10.12} & {\bf 0.17} & {\bf 0.99} & {\bf 25.98} & 86.21 & {\bf 83.18} & 83.76 & 14.3 & 15.0\\ \hline \end{tabular} \caption{{\bf Dialogue} generation performance. All results are averaged over 5 random seeds and reported in percentage (\%). {\bf Size}: Total number of model parameters, {\bf P}: Precision, {\bf R}: Recall, {\bf L-D}: Average dialogue length, {\bf L-U}: Average utterance length. } \label{table:diagen} \end{table*} \begin{table*}[t!] 
\fontsize{7.0}{8.4}\selectfont \centering \begin{tabular}{p{.148\textwidth}|p{.26\textwidth}| p{.258\textwidth} | p{.245\textwidth} } \centering {\bf \fontsize{8}{9.5}\selectfont Goals} & \centering {\bf \fontsize{8}{9.5}\selectfont LM + Goal} & \centering {\bf \fontsize{8}{9.5}\selectfont HRED} & \multicolumn{1}{p{.24\textwidth}}{\centering {\bf \fontsize{8}{9.5}\selectfont G-DuHA}} \\ \hline Domain: attraction\newline User: [name]\newline Reqt: [{\bf entrance fee}]\newline Book: False\newline\newline Domain: restaurant\newline User: [area, food]\newline Reqt: []\newline {\bf Book: True}\newline\newline Domain: {\bf taxi}\newline User: [arriveBy]\newline Reqt: [car type, {\bf phone}]\newline Book: False\newline & U: what is the address for [attraction\_name] ?\newline S: sure ! [attraction\_name] is located at [attraction\_address] and {\bf is free .}\newline U: i . i am also {\bf looking for a place to eat .} the [value\_area] .\newline S: there are [value\_count] restaurant -s in the [value\_area] . what type of food are you looking for ?\newline ...\newline U: yes , {\bf please book a table} for [value\_count] people at [value\_time] on [value\_day] .\newline ...\newline \underline{\bf U: i help you with anything else ?}\newline S: i {\bf also need a taxi} to get me the restaurant to the restaurant .\newline U: i time would you like to leave ?\newline S: i would like to arrive by the restaurant by the booked time .\newline U: booking completed ! booked car type [taxi\_type] {\bf contact number [taxi\_phone]}\newline S: thank you that s all i needed .\newline U: you are welcome . have a great day ! & U: what is the address for [attraction\_name] ?\newline S: the address is [hospital\_address] , [value\_place] . 
is there anything else i can help you with ?\newline U: yes , i would like to book a table for [value\_count] at [value\_time] on [value\_day] .\newline S: i {\bf have booked your table .} your reference number is [restaurant\_reference] .\newline U: thank you . i also need {\bf a train} to [value\_place] that leaves after [value\_time] .\newline S: there are [value\_count] trains that match your request . what day and time would you like to travel ?\newline U: i would like to leave on [value\_day] and arrive by [value\_time] .\newline S: there are [value\_count] trains that meet your criteria . would you like to narrow it down by departure time or arrival time ?\newline U: i would like to arrive by [value\_time] .\newline ... & U: what is the address for [attraction\_name] ?\newline S: the address is [attraction\_address] and the postcode is [attraction\_postcode] .\newline U: {\bf what is the entrance fee ?}\newline S: the {\bf entrance fee is free }. is there anything else i can help you with ?\newline U: yes , i am also {\bf looking for a restaurant} that serves [value\_food] food in the [value\_area] .\newline ...\newline S: i have [value\_count] restaurant -s that meet your criteria . would you like me to {\bf book you a table ?}\newline {\bf U: yes , please .} i need a table for [value\_count] people at [value\_time] on [value\_day] .\newline ...\newline U: i {\bf also need a taxi} to commute between the [value\_count] place -s .\newline S: i have booked you a taxi to pick you up at the [attraction\_name] . it will be a [taxi\_type] and the {\bf contact number is [taxi\_phone]} .\newline U: thank you , that is all i need .\newline S: you are welcome . have a great day ! \\ \hline \end{tabular} \caption{Dialogue qualitative comparison. Reqt: Requests. U: User, S: Agent. {\bf Goal hit or miss}. \underline{\bf Role confusion}. Extensive qualitative comparisons of dialogues are presented in the appendix. 
} \label{table:dialsample} \end{table*} \subsection{Evaluation Metrics} We employ a number of automatic metrics as well as human evaluations to benchmark competing models on quality, diversity, and goal focus: {\bf Quality.} BLEU \cite{papineni2002bleu}, as BLEU-4 by default, is a word-overlap measure against references and is commonly used in dialogue generation work to evaluate quality \shortcite{Sordoni2015ANN, Li2016DeepRL, Li2016ADO, Zhao2017LearningDD, Xu2018BetterCB}. Lower-order n-gram scores B1, B2, and B3 are also reported. {\bf Diversity.} D-1, D-2, D-U: The distinctiveness denotes the number of unique unigrams, bigrams, and utterances normalized by each total count \cite{Li2016ADO, Xu2018BetterCB}. These metrics are commonly used to evaluate dialogue diversity. {\bf Goal Focus.} A set of slots, such as address, is extracted from reference dialogues as multi-label targets. Generated slots in a model's output dialogues are the predictions. We use the multi-label precision, recall, and F1-score as surrogates to measure goal focus and achievement. {\bf Human Evaluation.} The side-by-side human preference study evaluates dialogues on \emph{goal focus}, \emph{grammar}, \emph{natural flow}, and \emph{non-redundancy}. \section{Results and Discussion} \subsection{Dialogue Generation Results} For dialogue generation \cite{Li2016DeepRL}, a model is given one or more goals and one user utterance as the seed inputs to generate entire dialogues in an auto-regressive manner. Table \ref{table:diagen} summarizes the evaluation results. For quality measures, {G-DuHA} significantly outperforms the other baselines, implying that it is able to carry out higher-quality dialogues. Moreover, goal-embedded LMs perform better than HREDs, showing the benefits of our goal embedding module. No significant performance difference is observed with respect to model size variants. \begin{table*}[t!]
\centering \smaller[1] \begin{tabular}{l|c c c c | c c c | c c c | c} \hline {\bf Model} & {\bf BLEU} & {\bf B1} & {\bf B2} & {\bf B3} & {\bf D-1} & {\bf D-2} & {\bf D-U} & {\bf P} & {\bf R} & {\bf F1} & {\bf L-R}\\ \hline \hline LM+G & 14.88 & 35.86 & 24.59 & 18.81 & 0.27 & 1.44 & 40.84 & 79.71 & 68.57 & 71.73 & 14.3 \\ LM+G-XL & 14.51 & 35.28 & 24.07 & 18.36 & {\bf 0.28} & {\bf 1.47} & {\bf 42.56} & {\bf 79.79} & 67.31 & 71.00 & 14.3 \\ HRED & 14.34 & 36.27 & 24.31 & 18.33 & 0.21 & 0.94 & 20.21 & 75.46 & 67.08 & 68.78 & 17.1 \\ HRED-XL & 14.33 & 36.36 & 24.37 & 18.35 & 0.23 & 1.12 & 26.63 & 72.69 & 68.24 & 68.20 & 17.3 \\ {G-DuHA} & {\bf 15.85} & {\bf 37.99} & {\bf 26.14} & {\bf 20.01} & 0.25 & 1.27 & {\bf 39.59} & {\bf 78.34} & {\bf 71.55} & {\bf 72.69} & 16.7\\ \hline \end{tabular} \caption{{\bf Agent's response} generation performance. All results are averaged over 5 random seeds and reported in percentage (\%). {\bf P}: Precision, {\bf R}: Recall, {\bf L-R}: Average response length. } \label{table:resgensys} \end{table*} \begin{table*}[t!] \centering \smaller[1] \begin{tabular}{l|c c c c | c c c | c c c | c} \hline {\bf Model} & {\bf BLEU} & {\bf B1} & {\bf B2} & {\bf B3} & {\bf D-1} & {\bf D-2} & {\bf D-U} & {\bf P} & {\bf R} & {\bf F1} & {\bf L-R}\\ \hline \hline LM+G & 11.73 & 31.79 & 21.26 & 15.56 & 0.35 & 1.82 & 33.57 & 89.44 & 75.78 & 80.23 & 10.6 \\ LM+G-XL & 11.60 & 31.49 & 21.00 & 15.38 & {\bf 0.36} & {\bf 1.87} & 34.29 & 89.57 & 75.55 & 80.03 & 10.7 \\ HRED & 10.88 & 31.69 & 20.46 & 14.65 & 0.24 & 0.98 & 16.00 & 80.00 & 79.11 & 77.58 & 13.1 \\ HRED-XL & 10.81 & 31.84 & 20.48 & 14.60 & 0.26 & 1.15 & 19.87 & 80.11 & 78.82 & 77.42 & 13.2 \\ {G-DuHA} & {\bf 13.25} & {\bf 35.20} & {\bf 23.89} & {\bf 17.56} & 0.30 & 1.49 & {\bf 35.57} & {\bf 91.12} & {\bf 79.66} & {\bf 83.51} & 12.8 \\ \hline \end{tabular} \caption{{\bf User's response} generation performance. All results are averaged over 5 random seeds and reported in percentage (\%). 
{\bf P}: Precision, {\bf R}: Recall, {\bf L-R}: Average response length. } \label{table:resgenusr} \end{table*} \begin{table*}[t!] \fontsize{8}{9.5}\selectfont \centering \begin{tabular}{p{.16\textwidth} |p{.44\textwidth} | p{.32\textwidth} } \centering {\bf Goals} & \centering {\bf Contexts (Reference)} & \multicolumn{1}{p{.32\textwidth}}{\centering {\bf Response}} \\ \hline Domain: train\newline User: [destination, departure, arriveBy, day]\newline Reqt:[duration,price]\newline Book: False\newline\newline Domain: restaurant\newline User: [area, food, pricerange]\newline Reqt: []\newline Book: True & U: hey i am looking for a train from [value\_place] to [value\_place] . mind helping out ?\newline S: many trains are leaving on [value\_day] . mind choosing the time ?\newline U: i want to leave on [value\_day] and arrive by [value\_time] .\newline S: i have train [train\_id] that leaves [value\_place] at [value\_time] and arrives in [value\_place] at [value\_time] . would you like to make reservations ?\newline U: could you give me the travel time and price of that train please ?\newline S: the cost is [value\_price] pounds . the duration of the trip is [value\_count] minutes , leaving [value\_place] at [value\_time] and arriving at [value\_time] in [value\_place] . & {\bf LM+G}: thanks . i am also looking for a restaurant in the [value\_area] .\newline{\bf HRED}: i am also looking for a place to stay . i would like a [value\_count] star hotel .\newline {\bf {G-DuHA}}: thank you . i am also looking for a restaurant in the [value\_area] of town . \\ \hline Domain: train\newline User: [destination, departure, arriveBy, day]\newline Reqt: []\newline Book: False & U: i need a train that leaves on [value\_day] with an arrival of [value\_time] .\newline S: where will you be travelling to and from ? & {\bf LM+G}: i will be departing from [value\_place] . \newline{\bf HRED}: i will be leaving from [value\_place] . 
\newline {\bf {G-DuHA}}: i am departing from [value\_place] and need to arrive by [value\_time] . \\ \hline \end{tabular} \caption{Qualitative comparison of generated responses. Reqt: Requests. U: User, S: Agent. } \label{table:respsample} \end{table*} For diversity evaluations, {G-DuHA} is on par with goal-embedded LMs and both outperform HRED significantly. Of $1000$ generated dialogues, HRED delivers highly repetitive outputs with only $4$ to $6\%$ distinct utterances, whereas $25\%$ of {G-DuHA}'s utterances are unique. For recovering slots in reference dialogues, precision denotes a degree of goal deviation, recall entails the achievement of goals, and F1 measures the overall focus. The goal-embedded LM is the best on precision and F1, with {G-DuHA} having comparable performance. However, even though the LM can better mention the slots in dialogue generation, utterances are often associated with a wrong role. That is, role confusions are common, e.g., the user making reservations for the agent as in Table \ref{table:dialsample}. The reason could be that the LM handles the task similarly to paragraph generation, without an explicit design for the conversational hierarchy. Overall, {G-DuHA} is able to generate high-quality dialogues with sufficient diversity while still adhering to goals, compared to the baselines. {\bf Qualitative Comparison.} Table \ref{table:dialsample} compares generated dialogues from different models given one to three goals to focus on. It's clear that models with the goal embedding module are able to adhere to given goals such as ``book" or ``no book", requesting ``price" or ``entrance fee", while HRED fails to do so. They can also correctly cover all required domain contexts without any diversion, e.g., switching from attraction inquiry to restaurant booking and then to taxi-calling. For HRED, without goals, generated dialogues often detour to a non-relevant domain context, such as shifting to train booking while only hotel inquiry is required.
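For reference, the distinct-n diversity metrics (D-1, D-2, D-U) reported throughout follow the definitions given in the evaluation setup: unique unigrams, bigrams, and utterances normalized by the respective total counts. A minimal sketch (whitespace tokenization is our simplifying assumption here; the exact tokenizer is not specified):

```python
# Sketch of the distinct-n diversity metrics (D-1, D-2, D-U): the number of
# unique unigrams / bigrams / utterances divided by the respective total
# counts. Tokenization is plain whitespace splitting, which is an assumption.

def distinct_metrics(utterances):
    unigrams, bigrams = [], []
    for utt in utterances:
        toks = utt.split()
        unigrams.extend(toks)
        bigrams.extend(zip(toks, toks[1:]))
    d1 = len(set(unigrams)) / max(len(unigrams), 1)
    d2 = len(set(bigrams)) / max(len(bigrams), 1)
    du = len(set(utterances)) / max(len(utterances), 1)
    return d1, d2, du

# A toy "dialogue" with one repeated utterance to show the effect:
dialogue = [
    "i need a train on monday",
    "where are you travelling to",
    "i need a train on monday",
]
d1, d2, du = distinct_metrics(dialogue)
```

Applied to a pool of generated utterances, low D-U values of this kind reflect the highly repetitive outputs discussed above.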
For the goal-embedded LM, a serious issue revealed is role confusion, as the LM often wrongly shifts between the user and agent, as shown in Table \ref{table:dialsample}. The issue results from one wrong \texttt{EndofUtterance} prediction but affects the rest of the dialogue and degrades the overall quality. More generated dialogues are reported in the appendix. \subsection{Response Generation Results} \vspace{-0.1mm} For response generation \cite{Sordoni2015ANN, Serban2016AHL, Park2018AHL}, a set of goals as well as the previous context, i.e. all previous reference utterances, are given to a model to generate the next utterance as a response. Tables \ref{table:resgensys} and \ref{table:resgenusr} summarize the results. {G-DuHA} outperforms the others on quality and goal focus measures and rivals LM+G on diversity on both agent and user responses. For goal focus, LM+G performs well on precision but falls short on recall. This could be because it generates much shorter user and agent responses on average. Interestingly, as previous contexts are given, LM+G performs only slightly better than HRED. This implies that hierarchical structures capturing longer dependencies can make up for the lack of goal information in response generation. However, as illustrated in Table \ref{table:respsample}, HRED can still fail to predict the switch of domain contexts, e.g. from \texttt{train} to \texttt{restaurant}, which explains the performance gaps. Another intriguing observation is that when incorporating the goal embedding module, response diversity and goal focus can be boosted significantly. Comparing the performance between agent and user response generation, we observe that models can achieve higher quality and diversity but lower goal focus when modeling the agent's responses. These might result from the relatively consistent utterance patterns but diverse slot types used by an agent. More generated responses across different models are presented in the appendix. \begin{table}[t!]
\fontsize{10.5}{12.5}\selectfont \centering \begin{tabular}{l|c c c} \hline & {\bf Wins} & {\bf Losses} & {\bf Ties} \\ \hline Goal Focus & {\bf 82.33\%} & 6.00\% & 11.67\% \\[0.5mm] Grammar & 6.00\% & 5.00\% & 89.00\% \\[0.5mm] Natural Flow & {\bf 26.00\%} & 15.00\% & 59.00\% \\[0.5mm] Non-redundancy & {\bf 35.34\%} & 6.33\% & 58.33\% \\[0.5mm] \hline \end{tabular} \caption{Human evaluations, {G-DuHA} vs HRED. $100$ pairs of generated dialogues along with goals are given to three domain experts for side-by-side comparisons. } \label{table:humaneval} \end{table} \subsection{Human Evaluation Results} For human evaluation, we conduct side-by-side comparisons between {G-DuHA} and HRED, a widely used baseline in the literature, on the dialogue generation task. We consider the following four criteria: \emph{goal focus}, \emph{grammar}, \emph{natural flow}, and \emph{non-redundancy}. \emph{Goal focus} evaluates whether the dialogue is closely related to the preset goals; \emph{grammar} evaluates whether the utterances are well-formed and understandable; \emph{natural flow} evaluates whether the flow of the dialogue is logical and fluent; and \emph{non-redundancy} evaluates whether the dialogue is free of unnecessary repetition of mentioned information. $100$ pairs of generated dialogues from {G-DuHA} and HRED along with their goals are randomly placed against each other. For each goal and pair of dialogues, three domain experts were instructed to set their preferences with respect to each of the four criteria, marked as \emph{win / lose / tie} between the dialogues. Table \ref{table:humaneval} presents the results. {G-DuHA} shows substantial advantages on goal focus, with $82.33\%$ wins over HRED, confirming the benefits of our goal embedding module. {G-DuHA} also outperforms HRED significantly on natural flow and non-redundancy. These might result from {G-DuHA}'s ability to generate much more diverse utterances while concentrating on the current goals.
An especially interesting observation is that in cases where multiple goals are given, {G-DuHA} not only stays focused on each individual goal but also generates intuitive transitions between goals, so that the flow of a dialogue is natural and coherent. An example is shown in Table \ref{table:dialsample}, where the {G-DuHA}-generated dialogue switches towards the \texttt{taxi} goal while maintaining reference to the previously mentioned \texttt{attraction} and \texttt{restaurant} goals: \textit{``\ldots i also need a taxi to commute between the 2 places \ldots''}. We also observe that both {G-DuHA} and HRED perform well on grammaticality. The generated samples across all RNN-based models are almost free of grammatical errors as well. \begin{table*}[t!] \centering \begin{tabular}{l|c c | c c c | c c c} \hline {\bf Model} & {\bf BLEU} & {\bf B1} & {\bf D-1} & {\bf D-2} & {\bf D-U} & {\bf P} & {\bf R} & {\bf F1}\\ \hline \hline {G-DuHA} & 7.48 & 24.38 & 0.17 & 0.99 & 25.98 & 86.21 & 83.18 & 83.76\\ \quad w/o goal & 5.19 & 20.04 & 0.13 & 0.68 & 13.83 & 69.18 & 68.21 & 66.86\\ \quad w/o dual & 7.34 & 24.99 & 0.15 & 0.79 & 19.22 & 85.24 & 82.62 & 82.96\\ \quad w/o context attention & 7.34 & 24.34 & 0.17 & 0.99 & 24.98 & 86.70 & 83.40 & 84.10\\ \hline \end{tabular} \caption{Ablation studies on {\bf dialogue} generation over goal embedding module, dual architecture, and dialogue-level attention. Results are averaged over 5 random seeds and reported in percentage (\%). {\bf P}: Precision, {\bf R}: Recall. } \label{table:dialabl} \end{table*} \begin{table*}[t!]
\centering \begin{tabular}{l|c c | c c c | c c c} \hline {\bf Model} & {\bf BLEU} & {\bf B1} & {\bf D-1} & {\bf D-2} & {\bf D-U} & {\bf P} & {\bf R} & {\bf F1}\\ \hline \hline {G-DuHA} & 14.84 & 36.84 & 0.18 & 1.10 & 34.23 & 89.87 & 82.00 & 84.80\\ \quad w/o goal & 13.29 & 35.23 & 0.16 & 0.97 & 28.04 & 84.33 & 80.15 & 81.10\\ \quad w/o dual & 14.60 & 36.66 & 0.17 & 0.96 & 27.53 & 89.43 & 80.81 & 83.91\\ \quad w/o context attention & 14.73 & 36.88 & 0.18 & 1.14 & 34.55 & 90.28 & 81.54 & 84.80\\ \hline \end{tabular} \caption{Ablation studies on {\bf response} generation over goal embedding module, dual architecture, and dialogue-level attention. Results are averaged over 5 random seeds and reported in percentage (\%). {\bf P}: Precision, {\bf R}: Recall. } \label{table:respabl} \end{table*} \subsection{Ablation Studies} The ablation studies are reported in Table \ref{table:dialabl} for dialogue generation and in Table \ref{table:respabl} for response generation to investigate the contribution of each module. Here we evaluate user and agent response generation together. \vspace{0.5mm} \noindent {\bf Goal Embedding Module.} First, we examine the impact of the goal embedding module. When unplugging the goal embedding module, we observe significant and consistent drops on quality, diversity, and goal focus measures for both the dialogue and response generation tasks. For the dialogue generation task, the drops are substantial, which resonates with our intuition, as the model only has the first user utterance as input context to follow. With no guidance on what to achieve or what conversational flow to follow, dialogues generated by HRED often share a similar flow and show low diversity. These results demonstrate that our goal embedding module is critical in generating higher-quality and goal-centric dialogues with much more diversity. \vspace{0.5mm} \noindent {\bf Dual Hierarchical Architecture.} We also evaluate the impact of the dual hierarchical architecture.
Comparisons on both the dialogue and response generation tasks show a consistent trend. We observe that applying the dual architecture for interlocutor-level modeling leads to a solid increase in utterance diversity as well as moderate improvements on quality and goal focus. The results echo our motivation, as two interlocutors in a goal-oriented dialogue scenario exhibit distinct conversational patterns, and this interlocutor-level disparity should be modeled by separate hierarchical encoder-decoders. For the dialogue-level attention module, there is no significant effect on diversity and goal focus on either task, but it marginally improves the overall utterance quality, as BLEU scores increase slightly. \begin{table}[t!] \centering \begin{tabular}{c|c c} \hline & Joint Goal & Turn Request \\ \hline GLAD & 88.55\% & 97.11\% \\ GLAD + LM+G & 88.07\% & 96.02\% \\ GLAD + HRED & 89.03\% & 97.11\% \\ GLAD + {G-DuHA} & {\bf 89.04\%} & {\bf 97.59\%}$^{\ast}$ \\ \hline \end{tabular} \caption{Test accuracy of GLAD \cite{Zhong2018GlobalLocallySD} on the WoZ restaurant reservation dataset with different data augmentation models. ($^{\ast}$significant against others.) } \label{table:augment} \end{table} \section{Data Augmentation via Dialogue Generation} As an exemplary extrinsic evaluation, we leverage goal-oriented dialogue generation as data augmentation for task-oriented dialogue systems. Dialogue state tracking (DST) is used as our evaluation task, as it is a critical component in task-oriented dialogue systems \cite{Young2013POMDPBasedSS} and has been studied extensively \cite{henderson2014word, Mrksic2017NeuralBT, Zhong2018GlobalLocallySD}. In DST, given the current utterance and dialogue history, a dialogue state tracker determines the state of the dialogue, which comprises a set of {\it requests} and {\it joint goals}. For each user turn, the user informs the system of a set of turn goals to fulfill, {\it e.g.
inform(area=south)}, or turn requests asking for more information, {\it e.g. request(phone)}. The joint goal is the collection of all turn goals up to the current turn. We use the state-of-the-art Global-Locally Self-Attentive Dialogue State Tracker (GLAD) \cite{Zhong2018GlobalLocallySD} as our benchmark model and the WoZ restaurant reservation dataset \cite{Wen2017ANE, Zhong2018GlobalLocallySD} as our benchmark dataset, which is commonly used for the DST task. The dataset consists of $600$ train, $200$ validation and $400$ test dialogues. We use the first utterances from $300$ train dialogues and sample restaurant-domain goals to generate dialogues, whose states are annotated by a rule-based method. Table \ref{table:augment} summarizes the augmentation results. Augmentation with {G-DuHA} achieves an improvement over the vanilla dataset and outperforms HRED on turn requests while being comparable on joint goal. For the goal-embedded LM, which struggles with role confusion, the augmentation actually hurts overall performance. \section{Conclusion} We introduced the goal-embedded dual hierarchical attentional encoder-decoder ({G-DuHA}) for goal-oriented dialogue generation. {G-DuHA} is able to generate higher-quality and goal-focused dialogues as well as responses with decent diversity and non-redundancy. Empirical results show that the goal embedding module plays a vital role in the performance improvement and that the dual architecture can significantly enhance diversity. We demonstrated one application of goal-oriented dialogue generation through a data augmentation experiment, though the proposed model is applicable to other conversational AI tasks, which remains to be investigated in future work. As shown in the experiments, a language model coupled with goal embedding suffers from role-switching errors or role confusion. It would also be interesting to dive deeper with visualizations \cite{kessler2017scattertext} and to quantify the impact on quality, diversity, and goal focus metrics.
\section*{Acknowledgments} The authors would like to acknowledge the entire AWS Lex Science team for thoughtful discussions, honest feedback, and full support. We are also very grateful to the reviewers for insightful comments and helpful suggestions.
\section{Introduction} Nowadays, recommender systems (RS) accompany users in their everyday life by proactively providing ``suggestions for items a user may wish to utilize'' \cite{ricci2015recommender}. Depending on the domain of application, they face specific challenges. In the tourism domain the recommended items are complex (i.e., they typically combine accommodation, transportation, activities, food, etc.), mostly intangible, and highly related to emotional experiences \cite{werthner1999information, Werthner:2004:ET:1035134.1035141}. Thus, people often have difficulties explicitly expressing their preferences, needs, and interests, especially in the initial phase of travel decision making \cite{zins2007exploring}. Recommender systems need to be transparent and explainable to the user in their selection process, otherwise they only increase the complexity of choosing a product. In this study, we build on previous research to provide and evaluate an implicit picture-based preference elicitation method for a tourism-specific recommender system. We employ computer vision models to determine the touristic profile of users in a way that is explainable, allowing users to compare the results with their own preferences and to adjust them. Our main contribution is a user study with 81 participants, who uploaded three to seven pictures, adjusted the automatically determined touristic profile, and filled out a questionnaire about the usefulness of our system. Previous studies have shown that it is reasonable to follow the idiom ``a picture is worth a thousand words''. Neidhardt et al. \cite{Neidhardt:2014:EUU:2645710.2645767, neidhardt2015picture} used a simple picture selection process to determine the user's personality and travel preferences in an implicit gamified way. In their approach, a user just has to select three to seven pictures out of a predefined fixed set of 63 pictures. Ferwerda et al.
\cite{ferwerda2015predicting, ferwerda2018predicting} used visual features (e.g., brightness, saturation, etc.) and content features (i.e., identified concepts) of Instagram pictures to predict the personality of users. Figueredo et al. \cite{figueredo2018photos} classified users based on pictures of their social media streams into basic tourist classes. However, previous research mainly focused on profiling the user; ideally, picture-based approaches should be applicable to both users and items in order to characterize them in a matchable way. Addressing this issue, we introduced a more generic concept, where we utilize any kind of picture collection \cite{sertkan19pictures, sertkan2020pictures}. Thus, depending on the source of the collection (e.g., social media stream, pictures provided by a destination management organization, etc.) either a user or an item (in this case a tourism destination) is characterized. In this paper we deploy our conceptual work \cite{sertkan19pictures, sertkan2020pictures} with a user study to evaluate the user profile generation. Additionally, we also consider the order of the pictures in a collection. The main contributions of this paper are as follows: \begin{itemize} \item We conduct and evaluate a user study to determine the touristic profile of users based on pictures; \item We provide evidence that the touristic profile of users can be determined by a picture collection they provide; \item We analyze the difference between perceived profile and predicted profile; \item We compare the performance of our approach with and without considering the picture order. \end{itemize} The remainder of the paper is organized as follows: In Section~\ref{sec:Background} we give an overview of the related work and focus on touristic preference models and picture-based approaches. In Section~\ref{sec:Methods} we introduce our extension, illustrate the experimental setup and detail the evaluation.
In Section~\ref{sec:Results} we present the results and in Section~\ref{sec:Discussion} we discuss findings and implications. Finally, in Section~\ref{sec:Conclusion} we conclude our work and provide an outlook on future work. \section{Background} \label{sec:Background} In this section we present related studies on user preference representation in the tourism domain and previous picture-based preference and/or personality elicitation approaches. \subsection{Touristic Preference Models} \label{sec:seven-factors} Due to the complex nature of tourism products (e.g., a bundle of accommodation, transportation, activities, etc.), their strong ties to emotional experiences and high consumption costs, recommending the right product to a tourist is a non-trivial task. Content about the tourism products and knowledge about the domain are crucial for RSs to bundle and recommend appropriate items \cite{neidhardt2015picture}. Also, Burke and Ramezani \cite{Burke2011} suggest that content-based and/or knowledge-based paradigms are most appropriate for tourism recommenders. For both paradigms, capturing the preferences and needs of users is critical. Especially in the case of the content-based recommendation paradigm, defining an appropriate domain model and a distance measure to find matching products is essential. Often users and items are characterized through a multidimensional vector space model, where each dimension covers a different touristic aspect. In some cases, the vector space model is derived from the data and/or the structure of the data. For instance, the vector space model in \cite{dietz2019designing} has the following dimensions: \textit{Arts \& Entertainment}, \textit{Food}, \textit{Nightlife}, \textit{Outdoors \& Recreation}, \textit{Venues}, \textit{Cost Index}, \textit{Temperature}, and \textit{Precipitation}. Those dimensions are given or derived from the different data sources used (e.g., Foursquare, weather data, etc.).
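Whatever the concrete dimensions, such a vector space model lets users and items be matched via a distance measure. A minimal sketch, using the dimensions above as labels; the profile values and the Euclidean distance measure are illustrative assumptions, not those of the cited system:

```python
# Sketch of content-based matching in a multidimensional vector space model:
# a user profile and candidate destinations live in the same space, and
# candidates are ranked by distance to the user. Dimension names follow the
# example model cited above; values and the Euclidean metric are assumptions.
import math

DIMS = ["arts", "food", "nightlife", "outdoors",
        "venues", "cost", "temperature", "precipitation"]

def distance(user, item):
    return math.sqrt(sum((user[d] - item[d]) ** 2 for d in DIMS))

def recommend(user, items):
    # rank candidate destinations by ascending distance to the user profile
    return sorted(items, key=lambda name_vec: distance(user, name_vec[1]))

user = dict(zip(DIMS, [0.9, 0.8, 0.2, 0.4, 0.5, 0.3, 0.7, 0.2]))
items = [
    ("city_a", dict(zip(DIMS, [0.9, 0.7, 0.3, 0.3, 0.6, 0.4, 0.6, 0.2]))),
    ("city_b", dict(zip(DIMS, [0.1, 0.2, 0.9, 0.8, 0.2, 0.9, 0.3, 0.7]))),
]
best = recommend(user, items)[0][0]  # → "city_a"
```

The key property is that users and items are described in the same space, so matching reduces to a nearest-neighbor lookup.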
In other cases, the vector space model is derived from the literature, like the Seven-Factor Model \cite{Neidhardt:2014:EUU:2645710.2645767, neidhardt2015picture}, which we use in this work. The Seven-Factor Model combines the ``Big Five'' personality traits~\cite{goldberg1990alternative} (representing the long-term preferences) and 17 tourist roles of Gibson and Yiannakis \cite{GIBSON2002358} (representing the short-term preferences). Seven basic factors were obtained by reducing the initial 22 (i.e., 5 + 17) dimensions via factor analysis. Thus, the factors of the Seven-Factor model are considered independent of each other. Furthermore, users are depicted as a mixture of the Seven-Factors (since people can have different tastes) rather than classified into only one factor. For a better understanding, the Seven-Factors can be briefly summarized as follows \cite{Neidhardt:2014:EUU:2645710.2645767, neidhardt2015picture}: \setlist[description]{font=\normalfont\itshape\space} \begin{description} \item[Sun \& Chill-Out (F1) -] a neurotic sun lover, who likes warm weather and sunbathing and does not like cold, rainy or crowded places; \item[Knowledge \& Travel (F2) -] an open-minded, educational and well-organized mass tourist, who likes travelling in groups and gaining knowledge, rather than being lazy; \item[Independence \& History (F3) -] an independent mass tourist, who is searching for the meaning of life, is interested in history and tradition, and likes to travel independently rather than on organized tours and travels; \item[Culture \& Indulgence (F4) -] an extroverted, culture and history loving high-class tourist, who is also a connoisseur of good food and wine; \item[Social \& Sports (F5) -] an open-minded sportive traveller, who loves to socialize with locals and does not like areas of intense tourism; \item[Action \& Fun (F6) -] a jet-setting thrill seeker, who loves action, party, and exclusiveness and avoids quiet and peaceful places; \item[Nature \&
Recreation (F7) -] a nature and silence lover, who wants to escape from everyday life and avoids crowded places and large cities. \end{description} \subsection{Picture-Based Approaches} \label{sec:picture-related} Historical user data and knowledge about the user's preferences and needs are essential for RSs in order to provide good recommendations. Often, this information is not available, for example, if the user is logged out or generally for any new user. This issue is also known as the ``cold start'' problem. In this case one can elicit the preferences and needs of users explicitly (e.g., by asking related questions, etc.) or implicitly (e.g., by observing the behaviour, etc.)~\cite{sertkan_what_2019}. However, people often have difficulties in explicitly expressing their travel preferences, which is due to the complexity of the tourism products. Thus, implicit preference elicitation techniques are promising in such cases. Previous research demonstrated that it is reasonable to use pictures as a medium for communication between a user and a recommendation system. In this way the user is addressed on an implicit, emotional level and thus an explicit preference statement is not needed \cite{neidhardt2015picture}. Ferwerda et al. \cite{ferwerda2015predicting, ferwerda2018predicting} showed that the personality of people can be determined through their Instagram pictures. In \cite{ferwerda2015predicting} they only used low-level features, such as brightness and saturation, to predict the well-known ``Big Five'' personality traits, i.e., \textit{openness}, \textit{conscientiousness}, \textit{extraversion}, \textit{agreeableness}, \textit{neuroticism}. They also utilized high-level features, i.e., concepts they identified through the Google Vision API\footnote{\url{https://cloud.google.com/vision}} \cite{ferwerda2018predicting}.
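Low-level features of this kind can be computed directly from pixel statistics. A sketch, approximating brightness as the mean HSV value (V) and saturation as the mean HSV saturation (S) over all pixels; the exact feature definitions used in the cited work may differ:

```python
# Sketch of simple low-level visual features (brightness, saturation) of the
# kind used by Ferwerda et al. Brightness is approximated as the mean HSV
# value (V) and saturation as the mean HSV saturation (S); this is an
# illustrative assumption, not the cited work's exact definition.
import colorsys

def brightness_saturation(pixels):
    """pixels: iterable of (r, g, b) tuples with channels in 0..255."""
    s_sum = v_sum = n = 0
    for r, g, b in pixels:
        _, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        s_sum += s
        v_sum += v
        n += 1
    return v_sum / n, s_sum / n  # (mean brightness, mean saturation)

# e.g. a tiny 2-pixel "image": pure white (s=0, v=1) and pure red (s=1, v=1)
bright, sat = brightness_saturation([(255, 255, 255), (255, 0, 0)])
```

In practice such features would be extracted per picture and fed into a downstream personality prediction model.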
Furthermore, personality traits tend to be stable over time and thus can facilitate the prediction of the long-term behaviour of people \cite{matthews_deary_whiteman_2003,WOSZCZYNSKI2002369}. Since the application domain of our work is tourism, personality traits alone are not sufficient to make recommendations. Therefore, we use the Seven-Factor Model, which covers both personality and touristic traits. Our work can be seen as a continuation of the picture-based approach of Neidhardt et al. \cite{Neidhardt:2014:EUU:2645710.2645767, neidhardt2015picture}. In their approach they use a simple gamified picture selection process to depict users within the Seven-Factor Model. In this picture selection process, people have to select three to seven pictures out of a given set of 63 pre-defined pictures, and based on their selection the Seven-Factors are calculated. The set of 63 pictures and the loading of each picture with the Seven-Factors were identified through workshops and experts. The fundamental difference to our approach is that we do not limit the user interaction to a fixed set of pictures. Figueredo et al. \cite{figueredo2018photos} use convolutional neural networks (CNNs) to characterize people based on pictures. They utilize pictures from people's social media streams (i.e., Facebook, Instagram, and Google Plus) to classify them into five basic classes, i.e., \textit{Historical/Cultural}, \textit{Adventure}, \textit{Urban}, \textit{Shopping}, and \textit{Landscape}. Similar to the approach of Ferwerda et al. \cite{ferwerda2018predicting}, they identify concepts (i.e., scenes) in the pictures, and based on the frequency of these concepts they use a fuzzy classifier to assign scores to the classes. They use CNNs in order to identify the concepts and in turn to determine scores for their classes, whereas in our approach we train CNNs to directly output the Seven-Factor scores. By using a single end-to-end model, our approach prevents the information loss of this intermediate step.
Furthermore, we utilize a touristic preference model derived from the literature, whereas the classes in \cite{figueredo2018photos} seem to be arbitrarily defined. Finally, in our approach we let the user decide which pictures should be used for their travel profile generation. Previous studies mainly concentrate on the one-way profiling of pictures to user models. In contrast, our goal is to employ a generic profiler, which we already conceptually introduced in \cite{sertkan19pictures, sertkan2020pictures} and which can universally characterize users and recommendation items based on corresponding picture collections in a comparable way. \section{Methods} \label{sec:Methods} In this section we describe the ``generic profiler'' \cite{sertkan19pictures, sertkan2020pictures} plus the extension for also accounting for the order of pictures in a collection. Furthermore, we present the experimental setup of the user study, and finally we define our evaluation procedure and metrics. \subsection{A Generic Profiler} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{pics/generic_profiler.pdf} \caption{Eliciting the Seven-Factors from picture collections -- a generic approach \cite{sertkan19pictures,sertkan2020pictures}.} \end{figure} Given a collection of pictures as input, the generic profiler determines the collection's Seven-Factor representation, i.e., the touristic profile. This is realized in two main steps, namely \textit{Classification} and \textit{Aggregation}. The purpose of the Classification step is to determine the Seven-Factor representation of an input picture. Each factor in the Seven-Factor Model is treated independently and therefore seven CNNs are trained as binary classifiers. The output of each classifier (i.e., the class probability) is used as the score of the corresponding factor. All seven scores are then combined into one seven-dimensional vector, i.e., the Seven-Factor representation of the input picture.
For each binary classifier a pretrained ResNet50 model \cite{He_2016_CVPR} is adapted and fine-tuned \cite{sertkan19pictures,sertkan2020pictures}. Given a collection of pictures as input, the Classification step returns the Seven-Factor representation $f^p$ for each picture in the input collection. Therefore, the main role of the Aggregation step is to aggregate the individual Seven-Factor representations $f^p_i$ of a collection $X$ with $N$ pictures into one representation, which characterizes the whole collection \cite{sertkan19pictures,sertkan2020pictures}. In \cite{sertkan19pictures,sertkan2020pictures} the aggregation is proposed as a simple mean. Therefore, the ``generic profile'' $gp(X)$ of a collection $X$ is defined as follows: \begin{equation} \label{eq:AVG} gp(X)=\frac{1}{N}\sum_{i=1}^{N}f^p_i \end{equation} In addition to the simple mean, in this work we also consider the order of the pictures within a collection for the aggregation (more details in Section~\ref{sec:WeightedAverage}) and then compare both aggregation strategies. \subsection{Accounting for the Picture Order} \label{sec:WeightedAverage} Insights of a study conducted in \cite{neidhardt2015picture} show that most people tend to select three to seven pictures out of a given set of pictures. Furthermore, the order of the pictures might carry valuable information, since in the same study people often re-ranked their initial selection. In order to consider the order (i.e., rank) of the pictures in the user's selection, they experimented with different strategies. The best strategy not only considered the rank of the pictures in the user's selection, but also the number of pictures in the user's selection.
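Both the simple mean of Equation~\ref{eq:AVG} and the rank-based weighting adopted below operate on the per-picture scores produced by the Classification step. A minimal illustrative sketch (not the system's actual implementation; per-picture scores are assumed to be lists of seven floats in $[0, 1]$):

```python
# Sketch of the two aggregation strategies: the simple mean over N pictures,
# and rank-based weighted averaging with weight 7*(n+1-r)/sum(1..n) for the
# picture ranked r in a collection of n pictures (the weights sum to 7).

def aggregate_mean(scores):
    """scores: list of per-picture Seven-Factor score vectors (7 floats)."""
    n = len(scores)
    return [sum(pic[f] for pic in scores) / n for f in range(7)]

def aggregate_rank_weighted(scores):
    """scores must be ordered by rank, most relevant picture first."""
    n = len(scores)
    assert 3 <= n <= 7, "collections are limited to 3..7 pictures"
    denom = n * (n + 1) / 2                       # sum of 1..n
    weights = [7 * (n + 1 - r) / denom for r in range(1, n + 1)]
    return [sum(w * pic[f] for w, pic in zip(weights, scores)) / sum(weights)
            for f in range(7)]
```

With three pictures this reproduces the weights $\frac{21}{6}$, $\frac{14}{6}$, and $\frac{7}{6}$, so earlier-ranked pictures dominate the resulting profile.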
We adapt those insights and also follow the best performing strategy by 1) limiting the size of the input collection to a minimum of three and a maximum of seven pictures; and 2) aggregating the individual Seven-Factor scores $f^p_i$ (i.e., the output of the \textit{Classification} step) of an input collection $X$ with $n=3, ..., 7$ pictures through weighted averaging, where pictures are indexed by their rank $i$. Thus, the profile of $X$, i.e., $gp(X)$, is defined as follows: \begin{equation} \label{eq:WA} gp(X)=\frac{\sum_{i=1}^{n}\omega_if^p_i}{\sum_{i=1}^{n}\omega_i} \end{equation} \begin{equation} \label{eq:W} \omega_i=7\frac{n + 1 - i}{\sum_{k=1}^{n}k} \end{equation} where $\omega_i$ is the weight of the picture ranked $i$-th and depends on the collection size $n$. For instance, in a collection of three pictures the weight of the first-ranked picture equals $\frac{21}{6}$, that of the second-ranked picture $\frac{14}{6}$, and finally that of the third-ranked picture $\frac{7}{6}$. The sum of all weights always equals seven. \subsection{Experimental Design} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{pics/study_procedure.pdf} \caption{Study procedure, consisting of the following steps: 1) Information and Consent; 2) Picture Selection; 3) Picture Ranking; 4) Profile Presentation; 5) Questions.} \label{fig:StudyProcedure} \end{figure} Figure~\ref{fig:StudyProcedure} illustrates the main steps of the study procedure. Participants start at the landing page, i.e., Step 1, where we present detailed information about the study and a consent form for participation. In Step 2, we ask the participants to imagine their next hypothetical vacation and, based on this, to select three to seven pictures, i.e., either their own pictures or pictures downloaded from the web. We clearly state that the pictures used in this study are not saved or displayed at any time. In Step 3, we ask the participants to rank the selected pictures according to their relevance.
Based on the selected pictures, we present the resulting touristic profile, i.e., the Seven-Factor representation, to the participant in Step 4. The shown profile is, with equal chance, either based on the simple average aggregation strategy, hereinafter referred to as \textit{AVG} (see Equation~\ref{eq:AVG}), or on the rank-weighted averaging aggregation strategy, hereinafter referred to as \textit{RWA} (see Equations~\ref{eq:WA} and \ref{eq:W}). We also provide a brief explanation of the Seven-Factors plus a link to a more detailed description of the Seven-Factor Model. Finally, in Step 5 we ask the participants the following questions (*~marks mandatory questions): \begin{description} \item[Q01 -]* It was easy to find 3 to 7 pictures. \item[Q02 -]* I mainly used pictures downloaded from the internet (e.g., Google, Flickr, etc.). \item[Q03 -]* I mainly used my own pictures. \item[Q04 -]* I understood the explanations of the Seven-Factors. \item[Q05 -]* The resulting profile matches my preferences. \item[Q06 -] Which factor in the resulting profile does not match well? (multiple answers allowed) \item[Q07 -] How would you adjust the resulting profile? (multiple adjustments allowed) \item[Q08 -]* What is your age? \item[Q09 -]* What is your gender? \item[Q10 -]* What is your highest degree of education? \item[Q11 -]* How often do you travel for pleasure (leisure/tourism)? \item[Q12 -] Comments/Suggestions. \end{description} For questions \textit{Q01}-\textit{Q05} we provide a five-point Likert scale ranging from \textit{strongly disagree} to \textit{strongly agree}. For question \textit{Q06} we provide seven checkboxes, one for each factor of the Seven-Factor Model. In the case of \textit{Q07}, we provide seven sliders (again one for each factor of the Seven-Factor Model), whose values are pre-set to the Seven-Factor scores of the predicted touristic profile.
Questions \textit{Q08}-\textit{Q11} can be answered via radio buttons, where we always provide the option ``prefer not to say''. Question \textit{Q12} is an open question, which can be answered via a text field. The questions relate to three main topics: 1) Picture selection (\textit{Q01}-\textit{Q03}), where we focus on the picture source and the difficulty of finding pictures; 2) The touristic profile (\textit{Q04}-\textit{Q07}), where we concentrate on the overall performance and on capturing the difference between perceived and predicted characteristics; and finally, 3) Demographics (\textit{Q08}-\textit{Q11}). Besides the explicitly asked questions, we also track the following: \begin{itemize} \item Time spent on the picture selection and ranking process; \item Time spent understanding the profile and answering the questions; \item Number of picture re-rankings. \end{itemize} \subsection{Evaluation} The purpose of questions \textit{Q01}-\textit{Q04} and \textit{Q08}-\textit{Q11} is to gain more insight into the participants and their behaviour, and to find support for generalizability statements. Tracking interactions and time might additionally give insights into participant behaviour and hint at difficulties participants face. Altogether, those insights can be used to further improve the introduced concept and its presentation (i.e., the user interface). Questions \textit{Q05}-\textit{Q07} are used to assess the overall performance as well as the difference in performance between the aggregation strategies (i.e., \textit{AVG} and \textit{RWA}). We use the mean absolute error (\textit{MAE}) to assess the difference in each factor of the Seven-Factor Model between the predicted touristic profile and the user's perception (i.e., the user's adjustment to the presented profile). Besides considering the predictive performance in each factor, we also treat the user's touristic profile as a whole by considering its distance to the user's perception.
Therefore, we use Kendall's Tau distance ($DIST_{\tau}$), Spearman's Footrule ($DIST_{SPEAR}$), and the Euclidean distance ($DIST_{EUCL}$). \begin{figure}[h] \centering \includegraphics[width=.7\linewidth]{pics/kendalls_tau.pdf} \caption{Kendall's Tau distance - Total number of inversions in $\sigma$. Note, in this example Kendall's Tau distance is 2. } \label{fig:KendallsTau} \end{figure} Comparing the predicted ranking (i.e., the rank of the factors based on the scores in the predicted profile) with the perceived ranking ($\sigma$) (i.e., the rank of the factors based on the scores in the adjusted profile), Kendall's Tau distance can be interpreted as the total number of inversions in $\sigma$ (see Figure~\ref{fig:KendallsTau}) \cite{dwork2001rank}. Here, a pair of elements $F_i$ and $F_j$ is considered inverted if $R_{F_i} > R_{F_j}$ and $R_{\sigma(F_i)}<R_{\sigma(F_j)}$, where $R_{F_i}$ denotes the rank of an element in the predicted ranking and $R_{\sigma(F_i)}$ its rank in $\sigma$. Thus, Kendall's Tau distance is defined as follows: \begin{equation} \label{eq:Tau} DIST_{\tau}=\sum_{R_{F_i}<R_{F_j}}1_{R_{\sigma(F_i)}>R_{\sigma(F_j)}} \end{equation} \begin{figure}[h] \centering \includegraphics[width=.7\linewidth]{pics/spearman_footrule.pdf} \caption{Spearman's Footrule distance - Total displacement of elements in $\sigma$. Note, in this example Spearman's Footrule distance is 4. } \label{fig:SpearmansFootrule} \end{figure} On the other hand, the Spearman's Footrule distance (see Figure~\ref{fig:SpearmansFootrule}) can be interpreted as the total displacement of all elements \cite{dwork2001rank}. Here, a displacement is the distance an element $F_i$ has to be moved to match $\sigma(F_i)$, which can be written as $|R_{F_i} - R_{\sigma(F_i)}|$.
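As a concrete illustration of the inversion count in Equation~\ref{eq:Tau}, a plain pure-Python sketch (our own naming; rankings are given as ordered lists of factor labels):

```python
def kendall_tau_distance(rank_a, rank_b):
    # Equation (4): count the pairs of factors that are ordered one way
    # in rank_a and the opposite way in rank_b (O(n^2) inversion count).
    position_b = {item: pos for pos, item in enumerate(rank_b)}
    inversions = 0
    for i in range(len(rank_a)):
        for j in range(i + 1, len(rank_a)):
            # rank_a[i] precedes rank_a[j]; an inversion occurs if the
            # order of this pair is reversed in rank_b.
            if position_b[rank_a[j]] < position_b[rank_a[i]]:
                inversions += 1
    return inversions
```

For seven factors the distance ranges from 0 (identical rankings) to $\binom{7}{2}=21$ (completely reversed rankings).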
Since Spearman's Footrule is defined as the total displacement of all elements, it can be written as follows: \begin{equation} \label{eq:Spear} DIST_{SPEAR}=\sum_{i}|R_{F_i} - R_{\sigma(F_i)}| \end{equation} By comparing the rankings, we account for changes in the relevance of each factor of the Seven-Factor Model; for instance, the factor \textit{Sun \& Chill-Out} might be more relevant (i.e., ranked higher) in the user's perception than in the predicted touristic profile. Besides that, we also consider the distance between the presented and the perceived touristic profile based on the actual difference in Seven-Factor scores, using the Euclidean distance, which is defined as follows: \begin{equation} \label{eq:EUCL} DIST_{EUCL}=\sqrt{\sum_{i=1}^{7}(predicted\_F_i - perceived\_F_i)^2} \end{equation} Finally, in order to identify significant distributional differences between the predicted and the perceived Seven-Factor scores, we use the paired Student's t-test or the Wilcoxon signed-rank test, depending on the outcome of the Shapiro-Wilk normality test. Furthermore, to compare differences between the two aggregation strategies (i.e., \textit{AVG} and \textit{RWA}), we use either the Mann-Whitney U test or Fisher's exact test, depending on the nature of the considered variable. \section{Results} \label{sec:Results} In this section, we present and analyse the outcomes of the conducted user study. We first provide insights into the participants, then analyse the picture selection and ranking process, and finally evaluate the performance of our model, comparing the predictions to the users' perception and contrasting the outcomes of both aggregation strategies.
\subsection{Participants} \label{sec:participants} The participants were recruited in January 2020 by i)~sharing the user study at the ENTER2020 international eTourism conference and through international mailing lists of tourism experts, and ii)~sharing it on social media and among friends and colleagues, with no substantial differences in the results between the two groups. In total, 81 participants finished the user study; 62\% of the participants identified themselves as men and 38\% as women. Their self-reported age distribution is as follows: 60\% 25-34 years; 20\% 35-44 years; 7\% 45-55 years; 7\% above 55 years; 3\% 18-24 years; and 3\% below 18 years. The vast majority of the participants (i.e., 87\%) reported that the highest educational degree they hold is either a bachelor's, master's, or PhD degree; furthermore, 9\% answered with a high school degree, 2\% with less than a high school degree, and 2\% with ``other''. The majority of participants (i.e., 62\%) reported that they travel between one and three times a year (i.e., they chose the option 1-2 times a year or 2-3 times a year) for pleasure (tourism/leisure), 15\% answered with 3-4 times a year, 4\% with 4-5 times a year, 12\% with more than 5 times a year, and finally 7\% with less than once a year. About 42\% of the participants used a mobile device. Finally, 90\% of the participants reported that they understood the description of the Seven-Factors (i.e., agreement or strong agreement with \textit{Q04}). As already mentioned, the touristic profile (i.e., Seven-Factor representation) shown to the user is either based on the \textit{AVG} aggregation strategy or on the \textit{RWA} aggregation strategy, randomly assigned with equal chance. Of the 81 participants in total, 51\% (N=41) were assigned to \textit{AVG} and 49\% (N=40) to \textit{RWA}. Note, in the following sections we discuss the outcomes with respect to all participants (N=81) and broken down by both \textit{AVG} (N=41) and \textit{RWA} (N=40).
\subsection{Picture Selection \& Ranking} \label{sec:picture-selection-ranking} As already mentioned, we gave the participants the option to select between three and seven pictures. The majority of the participants (i.e., 52\%) selected only three pictures, 16\% selected four pictures, 11\% six pictures, another 11\% seven pictures, and finally 10\% five pictures. We also asked the participants to re-consider the initial ranking of the selected pictures, but only 21\% actually performed a re-ranking. Those who did re-rank changed the initial ranking between one and four times. Half of the participants finished the selection and ranking task within 2.7 minutes, 75\% of the participants had completed it after 5.8 minutes, and 90\% after 10.7 minutes. The majority of the participants (72\%) agreed or strongly agreed with \textit{Q01} (i.e., ``It was easy to find 3 to 7 pictures''), which is in line with the reported timings. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{pics/pictures_seven_factor_distribution.pdf} \caption{Distribution of Seven-Factor scores of the uploaded pictures.} \label{fig:UserPictureScoreDist} \end{figure*} The distributions of the Seven-Factor scores of the uploaded pictures (see Figure~\ref{fig:UserPictureScoreDist}) have an overall similar shape: the pictures either carry strong signals for the considered factor (i.e., a high score) or no signals (i.e., a low score). The exception is factor \textit{Independence \& History (F3)}, whose scores are more evenly distributed. Furthermore, there were only very few signals for the factor \textit{Action \& Fun (F6)} in the user-provided pictures. Both observations might indicate that some factors are harder to capture, leaving room for further improvement.
In addition to analysing the distribution of the Seven-Factor scores of all uploaded pictures, where we treated the Seven-Factors individually and all pictures at once, we also analysed the diversity of the provided pictures per user selection. In other words, we investigated whether the uploaded images are homogeneous (e.g., only images of nature) or more diverse (e.g., images of nature, sports, and beach). Here, the diversity of a user's picture selection is defined as the average of the pairwise distances of the pictures in the respective selection. We used $DIST_{\tau}$, $DIST_{SPEAR}$, and $DIST_{EUCL}$ as distance measures. The results are listed in Table~\ref{tab:UsersPictureDiversity}. For instance, for selections of size three, adjacent factors (based on their ranking) have to be swapped on average 8 times to transform one picture's factor ranking into that of another picture in the same selection. Similarly, the factors have to be displaced by a total of 13 positions on average to match another picture's ranking in the same selection. Also, the point-wise difference, i.e., the diversity based on $DIST_{EUCL}$, is relatively high. Thus, for selections of size three, the participants selected relatively diverse pictures, which also holds for all other selection sizes. However, the diversity within selections of six or seven pictures is relatively lower than within selections of other sizes, which is unexpected (since there are more pictures to compare). \begin{table}[h] \caption{ Diversity in users' picture selection of different sizes.
Note, ``\#Pics'' is the selection size; ``Kendall's'' is the mean of the average pairwise $DIST_{\tau}$ of the users' picture selections; ``Spearman's'' is the mean of the average pairwise $DIST_{SPEAR}$; and ``Euclidean'' is the mean of the average pairwise $DIST_{EUCL}$.} \label{tab:UsersPictureDiversity} \begin{tabular}{lrrr} \toprule \#Pics & Kendall's & Spearman's & Euclidean \\ \midrule 3 & 8.41 & 13.08 & 1.11 \\ 4 & 8.07 & 12.94 & 1.06 \\ 5 & 10.82 & 16.34 & 1.35 \\ 6 & 7.73 & 12.77 & \textbf{1.00} \\ 7 & \textbf{7.30} & \textbf{11.47} & 1.03 \\ \bottomrule \end{tabular} \end{table} \subsection{Overall Performance} \label{sec:performance} In order to assess the overall performance and thus user satisfaction, we asked the participants whether or not the presented touristic profile matched their preferences (\textit{Q05}) and which of the factors in the shown touristic profile did not match well (\textit{Q06}). Our approach received quite positive feedback: 65\% of the participants were overall satisfied with the resulting touristic profile (i.e., agreement or strong agreement with \textit{Q05}). This distribution is also reflected in both strategies. Note, the touristic profile presented to the user is either based on \textit{AVG} aggregation (N=41) or on \textit{RWA} aggregation (N=40) (the strategies were randomly assigned). Table~\ref{tab:OverallSatisfaction} lists the summary statistics for the level of agreement with \textit{Q05} for both strategies and overall. No significant difference could be shown between the two strategies with respect to \textit{Q05}. \begin{table}[h] \caption{Summary statistics of level of agreement to Q05 - Overall and broken down by aggregation strategy.
Note, 0 is strongly disagree and 4 is strongly agree.} \label{tab:OverallSatisfaction} \begin{tabular}{lccccc} \toprule {} & mean & sd & min & median & max \\ \midrule Overall (N=81) & 2.69 & 0.81 & 0 & 3 & 4 \\ AVG (N=41) & 2.68 & 0.81 & 0 & 3 & 4 \\ RWA (N=40) & 2.70 & 0.81 & 1 & 3 & 4 \\ \bottomrule \end{tabular} \end{table} The participants disagreed (i.e., checked the option ``did not match well'') the most with factor \textit{Sun \& Chill-Out} (37\% of the participants) and the least with factor \textit{Nature \& Indulgence} (10\% of the participants). For all other factors of the Seven-Factor Model, 17-22\% of the participants disagreed. This also holds if the participants' responses to \textit{Q06} are viewed separately with respect to the two aggregation strategies, i.e., \textit{AVG} and \textit{RWA} (see Figure~\ref{fig:FactorDisagreement}). No significant difference could be shown between the two strategies with respect to \textit{Q06}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{pics/percentage_disagreed_factors.pdf} \caption{Disagreement with factors of the presented touristic profile (\textit{Q06}) broken down by aggregation strategy.} \label{fig:FactorDisagreement} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{pics/predicted_vs_perceived_dist_of_Fi.pdf} \caption{Predicted vs. Perceived Seven-Factor scores - Overall and broken down by aggregation strategy. Note, significant differences are highlighted (significance levels: * p<0.05 and ** p<0.01).} \label{fig:PredictedVsPerceivedDist} \end{figure*} \subsection{Predicted vs. Perceived Touristic Profile} \label{sec:user-profile} Besides the binary feedback on whether or not a factor of the predicted touristic profile fits well (i.e., \textit{Q06}), we also provided the possibility to adjust the respective profile via seven sliders, one for each factor of the Seven-Factor Model (i.e., \textit{Q07}).
We consider the resulting profile after \textit{Q07} as the user's perceived touristic profile. About 90\% of the participants adjusted one or more factors of the predicted touristic profile, adjusting three factors on average. Similar observations are made if the initial sample is split with respect to the two aggregation strategies and analysed separately. No statistically significant difference between the two strategies was observed with respect to the number of adjustments made. We conducted statistical significance tests in order to capture whether the Seven-Factor scores of the predicted touristic profiles differ significantly from those of the perceived touristic profiles. In particular, based on the outcome of the Shapiro-Wilk normality test, we used either the Wilcoxon signed-rank test or the paired Student's t-test. Figure~\ref{fig:PredictedVsPerceivedDist} summarizes the outcome of this comparison. Overall (N=81), the distributions of scores of the predicted and the perceived touristic profiles were significantly different in the factors: \textit{Sun \& Chill-Out (F1)} with p<0.05, \textit{Culture \& Indulgence (F4)} with p<0.05, \textit{Social \& Sports (F5)} with p<0.01, and \textit{Nature \& Recreation (F7)} with p<0.01. On average, the participants corrected those factors upwards by 0.05-0.06. Focusing only on the responses to the \textit{AVG} aggregation strategy (N=41), the distributions of the Seven-Factor scores of the predicted touristic profiles compared to those of the perceived touristic profiles were significantly different in the factors: \textit{Culture \& Indulgence (F4)} with p<0.05, \textit{Social \& Sports (F5)} with p<0.01, and \textit{Nature \& Recreation (F7)} with p<0.05. On average, the participants corrected those factors upwards by 0.05-0.09.
On the other hand, considering only the responses of participants who were assigned the \textit{RWA} aggregation strategy (N=40), the following factors showed significant differences in the Seven-Factor score distributions when the predicted and the perceived profiles were compared: \textit{Sun \& Chill-Out (F1)} with p<0.05 and \textit{Nature \& Recreation (F7)} with p<0.05. On average, the participants corrected those factors upwards by 0.07-0.10. Besides identifying differences in the Seven-Factor score distributions, we also calculated the mean absolute error (MAE), i.e., the mean of the absolute differences between predicted and perceived Seven-Factor scores, in order to capture the predictive performance of our models. Table~\ref{tab:MAE} lists the resulting overall MAEs for each factor of the Seven-Factor Model and for both strategies. Overall, our approach showed promising performance, with MAEs between 0.09 and 0.16 on a scale from 0 to 1. The largest deviation from the perceived Seven-Factor scores was observed for factor \textit{Sun \& Chill-Out (F1)}, with an MAE of 0.16; for all other factors our approach showed similar performance. Similar MAEs were observed when the predicted and perceived Seven-Factor scores were considered for both strategies separately. Moreover, a comparison of the MAEs between the two strategies showed that \textit{AVG} results in slightly better predictive performance in the factors \textit{Sun \& Chill-Out (F1)}, \textit{Knowledge \& Travel (F2)}, \textit{Independence \& History (F3)}, and \textit{Nature \& Recreation (F7)}. On the other hand, \textit{RWA} performed slightly better in the factors \textit{Culture \& Indulgence (F4)}, \textit{Social \& Sports (F5)}, and \textit{Action \& Fun (F6)}. However, the differences in performance were overall not significant.
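For reference, the two accuracy measures used in this comparison, the per-factor MAE reported in Table~\ref{tab:MAE} and the per-participant $DIST_{EUCL}$ of Equation~\ref{eq:EUCL}, can be sketched as follows (a minimal sketch with our own naming, not the authors' implementation):

```python
import math

def mae_per_factor(predicted, perceived, factor):
    # predicted/perceived: lists of 7-dimensional profiles, one pair per
    # participant; returns the MAE of a single factor across participants.
    diffs = [abs(p[factor] - q[factor]) for p, q in zip(predicted, perceived)]
    return sum(diffs) / len(diffs)

def dist_eucl(predicted_profile, perceived_profile):
    # Equation (6): Euclidean distance between one participant's predicted
    # and perceived Seven-Factor profiles.
    return math.sqrt(sum((p - q) ** 2
                         for p, q in zip(predicted_profile, perceived_profile)))
```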
\begin{table}[h] \caption{Mean Absolute Error (MAE)} \label{tab:MAE} \begin{tabular}{lrrrrrrr} \toprule {} & F1 & F2 & F3 & F4 & F5 & F6 & F7\\ \midrule MAE-Overall & 0.16 & 0.10 & 0.09 & 0.10 & 0.10 & 0.10 & 0.09 \\ MAE-\textit{AVG} & \textbf{0.16} & \textbf{0.08} & \textbf{0.07} & 0.11 & 0.12 & 0.12 & \textbf{0.07} \\ MAE-\textit{RWA} & 0.17 & 0.13 & 0.10 & \textbf{0.09} & \textbf{0.08} & \textbf{0.09} & 0.10 \\ \bottomrule \end{tabular} \end{table} Analysing mean absolute differences or distributional differences between the Seven-Factor scores of predicted and perceived touristic profiles does not consider the user representation as such, but rather the individual factors of the Seven-Factor Model. Therefore, we also took the distance between the predicted and perceived user representations (i.e., touristic profiles) into account. We analysed how far apart the two representations are in Euclidean space using $DIST_{EUCL}$. Furthermore, we used $DIST_{\tau}$ and $DIST_{SPEAR}$ to capture whether or not changes in the Seven-Factor scores lead to changes in factor relevance (i.e., ranking). For instance, after the user's adjustment of the predicted profile, \textit{Sun \& Chill-Out (F1)} might score higher and thus become more relevant (i.e., ranked higher) than \textit{Nature \& Recreation (F7)}. We determined all three distances between the predicted and the perceived touristic profile for all participants and then averaged them (i.e., $\overline{DIST}_\tau$, $\overline{DIST}_{SPEAR}$, and $\overline{DIST}_{EUCL}$) in order to draw conclusions about our approach with respect to the distances. The results are listed in Table~\ref{tab:DistPerceptionVsPredicted}. \textit{RWA} performed on average relatively better than \textit{AVG} with respect to the ranking of the factors (i.e., lower $\overline{DIST}_\tau$ and $\overline{DIST}_{SPEAR}$). On the other hand, \textit{AVG} performed relatively better with respect to prediction accuracy (i.e., lower $\overline{DIST}_{EUCL}$).
However, the Mann-Whitney U test showed that the differences are not significant. \begin{table}[h] \caption{ Differences in predicted and perceived touristic profiles with respect to changes in the ranking (i.e., relevance) of the factors and the point-wise difference. Note, $\overline{DIST}_\tau$ is the average $DIST_{\tau}$ between predicted and perceived touristic profiles; similarly, $\overline{DIST}_{SPEAR}$ is the average $DIST_{SPEAR}$, and $\overline{DIST}_{EUCL}$ the average $DIST_{EUCL}$.} \label{tab:DistPerceptionVsPredicted} \begin{tabular}{lrrr} \toprule {} & Overall & \textit{AVG} & \textit{RWA} \\ \midrule $\overline{DIST}_\tau$ & 3.58 & 3.90 & \textbf{3.25} \\ $\overline{DIST}_{SPEAR}$ & 6.68 & 7.20 & \textbf{6.15} \\ $\overline{DIST}_{EUCL}$ & 0.44 & \textbf{0.41} & 0.47 \\ \bottomrule \end{tabular} \end{table} \section{Discussion} \label{sec:Discussion} Based on the responses to the demographic questions (i.e., \textit{Q08}-\textit{Q11}), the majority of participants were male, between 25 and 44 years old, and held at least a bachelor's degree. Thus, generalizing the outcomes and implications might only be possible in a limited way. Future work will consider a more systematic approach to survey distribution and sampling, and possibly further distribution channels to increase and diversify the pool of participants. We are aware that asking the users about their next hypothetical trip, where they have nothing to win or lose, might influence their behaviour and thus introduce some bias. Future work will consider user incentives. Ultimately, we aim at pre-trip and post-trip studies. Furthermore, we plan to enrich the questionnaire (e.g., by following \cite{pu2011user} and \cite{ekstrand2014user}) in order to obtain even more valuable information. Following the approach in \cite{neidhardt2015picture}, we gave participants the option to select three to seven pictures. Approximately half of them selected only three pictures.
This might have happened out of convenience, or the participants may already have had a very focused idea of their next tourism destination and thus uploaded only the few most important pictures. Difficulty in finding pictures is unlikely to be the reason, since the majority of the participants reported that it was easy to find pictures, and the timing also indicates that they were relatively quick in selecting and ranking pictures. Furthermore, only a few participants took the chance to re-order their initial picture selection by relevance. Deciding which of the selected pictures are more important than the others might have been a difficult task; especially in the case of only three pictures, it might have felt unnecessary. Another possible explanation for why users select only a few pictures and do not reconsider their ordering is a lack of involvement, which can be addressed with user incentives in the future. Overall, our approach received positive feedback. Most of the participants were satisfied with how the predicted touristic profile captured their preferences. The participants mostly disagreed with the predicted score for factor \textit{Sun \& Chill-Out} in comparison to all other factors. Furthermore, no significant differences in performance could be shown between the two aggregation strategies, i.e., \textit{AVG} and \textit{RWA}. In the future, we will further improve our models, e.g., by systematically training the CNNs with more pictures. Moreover, we plan to adopt other aggregation strategies, for instance, variations of ordered weighted averaging. In addition to the binary feedback on whether or not a factor fits well, we also provided the option to directly adjust the predicted factors via slider inputs. The vast majority of participants used this opportunity and adjusted at least one of the factors. The participants adjusted the factor \textit{Sun \& Chill-Out} more often than the other factors of the Seven-Factor Model.
This is in line with the outcomes of the binary feedback (i.e., people disagreed the most with factor \textit{Sun \& Chill-Out}). Similar observations were made when considering the mean absolute error (MAE) between predicted and perceived touristic profiles, where the biggest difference was observed for factor \textit{Sun \& Chill-Out}. Thus, the participants not only reported that they disagreed with \textit{Sun \& Chill-Out} more often than with the other factors, but they also adjusted this factor the most in comparison to the others. Overall, our approach has a tendency to underrate the factor scores, i.e., the participants usually corrected the predicted factors upwards. However, based on the resulting MAEs in each factor, our approach showed promising performance. Finally, we analysed the differences in the predicted relevance (i.e., ranking) and the perceived relevance of the factors of the Seven-Factor representation (i.e., the touristic profile). To this end, we captured to what extent the ranking of the factors (based on the predicted Seven-Factor scores) changed after the user's adjustment. Our results indicate that the predicted ranking is relatively close to the perceived ranking. However, we did not consider the relative position of the rankings: discrepancies in top-ranked (i.e., highly relevant) factors might have a higher impact than those in low-ranked factors. \section{Conclusions} \label{sec:Conclusion} In this paper we addressed the difficulty travelers have in explicitly expressing their preferences and needs. We followed the idiom ``a picture is worth a thousand words'' and used pictures as a tool for communication and as a way to implicitly elicit travelers' preferences in order to overcome communication barriers.
We designed and deployed an online user study in order to evaluate a previously introduced concept \cite{sertkan19pictures, sertkan2020pictures}, in which the preferences and needs of travelers are determined based on a selection of pictures they provide. We extended the concept by also considering the order of the pictures in the user's selection. We asked the participants, with their next hypothetical trip in mind, to upload three to seven pictures and rank them by relevance. Based on the participants' picture selection, we determined their touristic profile as a Seven-Factor representation. Randomly and with equal chance, we additionally took the order of the pictures in the participants' selection into account. Finally, we let the participants adjust the presented profile and asked further questions. Our user study showed promising results, as the majority of participants (65\%) were overall satisfied with their predicted touristic profile and only a few (18\%) disagreed with the outcome. Considering the order of the pictures in the participants' selection did not significantly improve the performance of our models. The participants mostly disagreed with the factor \textit{Sun \& Chill-Out} (37\%) in comparison with the other factors (10-22\%). Finally, we also showed that the predicted touristic profile is close to the perceived one, with per-factor MAEs between 0.09 and 0.16 on a scale from 0 to 1. In future work, we will improve our approach by i)~enlarging the training set; ii)~considering other aggregation strategies; and iii)~combining the latent feature vectors of the images with low-level features (e.g., colourfulness, brightness, etc.). Furthermore, we will focus on generalizability by distributing the study even more systematically. Finally, we aim to compare our work to other user modelling approaches. \balance \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 1.791992, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcDzxK7Dgs_aSTsS3
\section{Dataset building} The data included in FTR-18 was harvested during the 2018 Summer Transfer Window (STW18). Transfer windows are periods during the year in which football clubs can make international transfers. Every league has the right to choose two annual transfer windows, with the opening and closing dates being defined by the football associations of the league. The collected material includes both transfer news and rumours and the corresponding reactions to these rumours on Twitter. As we develop a multilingual dataset containing news and comments in English, Spanish and Portuguese involving UEFA clubs, the harvesting is performed during a common period including the transfer windows of England, Spain and Portugal. Accordingly, data is being collected from June 24 until August 31, 2018. FTR-18 is built in three stages. First, we browsed the media and social networks for an initial assessment of transfer rumours involving top UEFA clubs. Next, we performed both a semi-automated news articles harvesting and a Twitter crawling, selecting news and tweets related to the rumours, respectively. Finally, as transfer moves are confirmed (or not), we annotate rumour veracity. \subsection{Football clubs selection} The first step in the creation of the FTR-18 is the selection of the clubs to follow during the Summer Transfer Window of 2018. We picked English, Spanish and Portuguese clubs that played the group stage in 2017-2018 UEFA Champions League. This decision was made based on the languages we are more familiar with. With this restriction, we decided to monitor the following clubs: \begin{description} \item \textbf{England}: Chelsea, Liverpool, Manchester City, Manchester United, Tottenham Hotspur \item \textbf{Spain}: Atl\'etico Madrid, Barcelona, Real Madrid, Sevilla \item \textbf{Portugal}: Benfica, Porto, Sporting CP \end{description} \subsection{Transfer rumours selection} The next stage of the FTR-18 generation is the collection of transfer news. 
We selected transfer rumours involving the monitored clubs. The rumours included narratives of permanence or transfer moves concerning football players, that is, whether players are leaving or being hired by the monitored clubs. For consistency reasons, we restricted the rumour selection to players affiliated to clubs from countries having English, Spanish or Portuguese as their majority language. The rumours were manually tracked and extracted from many sports news sources. For each rumour, we annotated the \textit{target} player, the target's club by the opening of the 2018 Summer Transfer Window (denoted as \textit{source}), and the rumoured \textit{destination} club (See Figure \ref{fig:rumour_attr}). We considered rumours either about the transference of a target to another club (source $\neq$ destination) or about the permanence of the player in the current club (source $=$ destination). \subsection{News articles harvesting} News harvesting involved searching and collecting news articles, published by different news organisations, related to the identified rumours in the languages of the dataset. It was performed on a daily basis during the STW18. Each selected news article is associated with a single transfer move, and it might report a rumour or a verified information. We restricted our harvesting to news containing headlines with clear transfer reports about a single target, even if they report secondary moves in the body text (e.g. another player's alleged move, or interest manifested by other clubs). News with headlines pointing potential transfer of a player to multiple clubs were discarded. For the news collection, we extracted content and meta-data from each article. Content information includes \textit{headline}, \textit{subhead}, and \textit{body text}, while for the meta-data we registered the \textit{article source}, \textit{language}, \textit{url} and the \textit{publication date} (See Figure \ref{fig:rumour_attr}). 
The news harvesting task was semi-automated, as we could not build scrapers suited for all news sources due to their poor CSS structure. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{images/rumour_attr.png} \caption{Transfer rumour and news article attributes.} \label{fig:rumour_attr} \end{figure} \subsection{Rumour reactions crawler} Once a rumour is identified, we build a crawler using Twitter's streaming API\footnote{https://developer.twitter.com/en/docs/tweets/filter-realtime/overview.html} to collect posts related to the rumour. In compliance with the established language restrictions, we only harvest tweets written in the languages of the FTR-18 dataset. We record the \textit{message content}, the \textit{tweet creation date} and the data associated with the user who posted the message (\textit{id}, \textit{screen name}, \textit{description}, \textit{location}, \textit{friends} and \textit{followers count}, \textit{verification account status}). If the post is a retweet, we also register the \textit{retweet status}, the data associated with the user who posted the original message and the \textit{retweet} and \textit{favourite counts}. Besides collecting tweets related to rumours, we also track Twitter account meta-data and follower statistics for the players, the monitored clubs and the clubs involved in the transfer rumours. \subsection{Rumour veracity annotation} As the rumour unfolds, we annotate the rumour veracity by adding to the transfer rumour meta-data the evidence that supports or refutes the rumour and the publication date of this evidence. Rumour veracity is to be inferred by the end of the 2018 Summer Transfer Window. Thus, for the case when source $\neq$ destination, the rumour veracity is labelled as \textit{True} if the transfer is confirmed, or as \textit{False} if the target remains in the source club or is transferred to a different destination during STW18.
Similarly, for rumours in which source $=$ destination, the rumour veracity is inferred as \textit{False} if the player is transferred, or \textit{True} otherwise. Despite the challenges of manually determining the credibility of a rumour \cite{zubiaga2018detection}, UEFA registration after STW18 will provide ground truth data for the annotation of the rumours' veracity. \section{Discussion} FTR-18 is suited for most of the steps involved in a rumour classification process, as presented by \citet{zubiaga2018detection}. Table \ref{tab:datsets} displays a comparison of FTR-18 against existing datasets, showing the data sources used, availability, multilingual characteristics and possible usage. At this first phase of the dataset development, we perform annotations on rumour veracity. This task can be finished by the official closing date of STW18 (August 31, 2018), when all football transfers must be concluded. Any subsequent transfers are frozen until the next Winter Transfer Window (January 2019). Once each rumour is labelled for its truthfulness, rumour veracity assessment could be performed on the FTR-18 dataset. In a second annotation phase, we will manually label our news articles subset, identifying which articles report rumours and which report verified information. This annotation will make FTR-18 suited for a rumour detection task. Stance detection is another possible application of the FTR-18 dataset. The collected content allows the classification of how news headlines orient towards a given transfer rumour. Besides news stances, we can evaluate the attitudes manifested by the audience with respect to a transfer move using reactions to Twitter posts. Both news article labelling and stance annotation depend on human judgement.
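The veracity rule described in the annotation subsection can be sketched as a small function (a hypothetical helper; `actual_club` stands for the club the target is registered with once the window closes):

```python
def rumour_veracity(source, destination, actual_club):
    """Label a rumour at the close of STW18 following the stated rules."""
    if source != destination:
        # Transfer rumour: True only if the player moved to the rumoured
        # destination; staying put or moving elsewhere falsifies it.
        return actual_club == destination
    # Permanence rumour: True only if the player stayed at the club.
    return actual_club == source
```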
The collected data also encourages analysis of rumour tracking. This research topic is scarcely explored and consists of identifying content associated with a rumour that is currently monitored \cite{zubiaga2018detection}. We intend to label the harvested news and Twitter posts a posteriori as related or unrelated to the rumour, following Qazvinian et al.'s approach \cite{qazvinian2011rumor}. As a result, FTR-18 could be used as a rumour dataset suited for binary classification of posts in relation to a rumour. Finally, the FTR-18 dataset will also allow the analysis of how football transfer news is reported by the sports media. The news articles collection will serve as input for identifying the linguistic structures employed by journalists in transfer coverage. We expect to conduct investigative work on the propagation patterns present in football transfer news, such as the recurrent conditional transfer moves and the echo effect caused by repetitions of unverified stories published by third-party news organisations. Further improvements to the FTR-18 dataset include the addition of new football leagues (e.g. CONMEBOL), the expansion of the monitored club set and the collection of data from future transfer windows.
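As a possible starting point for such related/unrelated labelling (our own naive baseline, not the procedure used to build FTR-18), a post could be matched against the rumour's entities:

```python
def is_related(post, target, clubs):
    """Naive relatedness filter: tentatively mark a post as related if it
    mentions the target player together with at least one involved club.
    A real annotation pass would also need aliases, hashtags and manual
    review."""
    text = post.lower()
    return target.lower() in text and any(club.lower() in text for club in clubs)
```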
\section{Initial Analysis} \begin{table*}[hbt] \caption{Datasets comparison (RD = rumour detection, RT = rumour tracking, SC = stance classification, VC = veracity classification, ER = evidence retrieval).} \label{tab:datsets} \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{Data source} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Publicly\\ Available\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Multi-\\ lingual\end{tabular}} & \multicolumn{5}{c}{Usage} \\ & & & & RD & RT & SC & VC & ER \\ \midrule Qazvinian's \cite{qazvinian2011rumor} & Social media & \xmark & \xmark & & \cmark & \cmark & & \\ Emergent \cite{silverman2015lies,ferreira2016emergent} & \begin{tabular}[c]{@{}c@{}}News \& Rumour sites\\ \& Social media\end{tabular} & \cmark & \xmark & & & \cmark & \cmark & \\ PHEME-RNR \cite{zubiaga2016learning} & Social media & \cmark & \xmark & \cmark & & & \\ PHEME-RSD \cite{zubiaga2016analysing} & Social media & \cmark & \cmark & \cmark & & \cmark & \cmark & \\ FEVER \cite{thorne2018fever} & Wikipedia & \cmark & \xmark & & & & \cmark & \cmark \\ LIAR \cite{wang2017liar} & Rumour sites & \cmark & \xmark & & & \cmark & \cmark & \\ FTR-18 & News sites \& Social media & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \\ \bottomrule \end{tabular} \end{table*} Currently, the FTR-18 dataset comprises 3,045 news articles and more than 2,064K tweets (See Table \ref{tab:dataset_stats}). The news articles subset considers transfer news covered by online sports media written in English (1,517), Spanish (747) or Portuguese (781) by 96 different news organisations. This collection includes 304 transfer moves associated with 175 target football players. The Twitter subset contains original messages and retweets written in English (1,130K), Spanish (677K) or Portuguese (257K). The Twitter posts are related to 112 claimed transfer moves involving 84 different football players. 
The FTR-18 dataset meta-data and corresponding news source scrapers are available at \url{https://github.com/dcaled/FTR-18}. \begin{table}[h] \caption{Current status of the FTR-18 dataset.} \label{tab:dataset_stats} \begin{tabular}{lcc} \toprule & News articles & Tweets \\ \midrule \textbf{English} & 1,517 & 1,130K \\ \textbf{Spanish} & 747 & 677K \\ \textbf{Portuguese} & 781 & 257K \\ \textbf{Total} & 3,045 & 2,064K \\ \bottomrule \end{tabular} \end{table} In a preliminary analysis of the collected news articles, we noticed many cross-references between different news sources. This pattern has already been identified by \citet{silverman2015lies} and \citet{maia2016jornalismo}. We observed a constant appearance of attribution formulations like ``\textit{source S reported}'' or ``\textit{according to source S}'', sometimes followed by a link to the referenced news source. Other structures used to conceal the origin of the rumour are ``\textit{according to reports/sources}'', ``\textit{are said}'', ``\textit{player P linked to club C}'' and ``\textit{reports suggest}''. Expressions like these appear in most of our collected news articles. Another common strategy is reporting unverified information by phrasing the news headline as a question \cite{silverman2015lies} (``\textit{Ronaldo out, Neymar in at Real Madrid?}\footnote{https://www.theguardian.com/football/2018/jul/03/football-transfer-rumours-cristiano-ronaldo-real-madrid-juventus-neymar}''). Though less frequent, this last strategy is adopted mainly by Spanish media. Another interesting observation concerning transfer news is the cascading move chain associated with a rumour.
For example, news sources condition the acquisition of a new player on the sale of another player by the same club (``\textit{Cristiano Ronaldo open to Juventus move as Real Madrid consider selling to fund move for Kylian Mbappe or Neymar}\footnote{https://www.independent.co.uk/sport/football/transfers/cristiano-ronaldo-transfer-news-latest-juventus-real-madrid-neymar-kylian-mbappe-unveil-move-a8434171.html}''). This pattern drives the reader to expect a continuation of the rumourous story and its developments, thus keeping the audience's attention for follow-up stories. \section{Introduction} In recent years, the way news is spread has changed. Social networks, blogs/micro-blogs and other untrusted online news sources have gained popularity, allowing any user to produce unverified content. This modern kind of media enables real-time proliferation of news stories and, as a consequence, increases the diffusion of rumours, hoaxes and misinformation to a global audience \cite{tacchini2017some}. As news content is continuously published online, the speed at which it is disseminated hinders human fact-checking activity. A piece of information whose ``veracity status is yet to be verified at the time of posting'' is called a rumour \cite{zubiaga2018detection}. News articles, scientific research and conspiracy-theory stories can float in the veracity spectrum. They are characterised by distinct stylistic dimensions, and these features enable their spread but are orthogonal to their truthfulness \cite{vosoughi2018spread}. The football transfer market offers fertile ground for rumour dissemination because the spread of rumours and misinformation may be motivated by personal profit or public harm. Releasing false or misleading news about alleged moves of players between clubs emerges as a strategy to increase a player's market value and transfer fees. Careless transfer announcements are attributed to the high sums directly related to some transfers \cite{maia2016jornalismo}.
In addition, news organisations also benefit from announcements of alleged transfer moves, attracting public attention by selling content and advertisements. Maia showed that most of the football transfer news published by the three biggest Portuguese sports diaries during the Summer of 2015 were not confirmed \cite{maia2016jornalismo}. Table \ref{tab:mbappe_rumour} provides an example of a news article about a transfer to Real Madrid that was later denied by the club in an official announcement. Both the transfer news and the announcement were released on July 4, 2018. According to the Union of European Football Associations' (UEFA) annual report\footnote{https://goo.gl/FoAv2g}, the European transfer market reached a record revenue of \texteuro5.6 billion during the Summer of 2017, a 6\% increase in spending during this period relative to the highest value recorded in the last ten years. Economic side effects of rumours on football club shares are also reported, like the rumour of Cristiano Ronaldo joining Juventus FC, which made the club's shares jump almost 10\% on a single day\footnote{https://goo.gl/3wKhj3} before any official announcement. \begin{table} \caption{Example of a transfer rumour.} \label{tab:mbappe_rumour} \begin{threeparttable} \begin{tabular}{p{0.95\columnwidth}} \toprule \textbf{Topic}: Real Madrid agreed a deal for PSG's Kylian Mbapp\'e.
\\ \midrule \textbf{Agreeing news source}: www.msn.com\tnote{a} \\ \textbf{Headline}: Real Madrid agree stunning \texteuro272m deal to sign Kylian Mbappe \\ \textbf{Extract}: \textit{Real Madrid have reportedly agreed an eye-catching \texteuro272 million deal with Paris Saint-Germain for Kylian Mbappe.} \\ \textit{According to reports coming out of France from Baptiste Ripart, the French champions are set to lose Mbappe, with Madrid agreeing to pay the fee over four instalments.} \\ \midrule \textbf{Evidence}: Real Madrid official announcement\tnote{b}\\ \textbf{Extract}: \textit{Given the information published in the last few hours regarding an alleged agreement between Real Madrid C.F. and PSG for the player Kylian Mbapp\'e, Real Madrid would like to state that it is completely false.} \\ \textit{Real Madrid has not made any offer to PSG or the player and condemns the spreading of this type of information that has not been proven by the parties concerned.}\\ \bottomrule \end{tabular} \begin{tablenotes}\footnotesize \item [a] https://goo.gl/HhRafF \item [b] https://goo.gl/7oCqBy \end{tablenotes} \end{threeparttable} \end{table} Motivated by the extensive attention given by sports media to transfer news and the high amounts involved, we propose the creation of Football Transfer Rumours 2018 (FTR-18), a transfer rumour dataset for researching the linguistic patterns in their text and their propagation mechanisms. FTR-18 is designed as a multilingual collection of articles published by relevant news organisations in English, Spanish or Portuguese. Besides news articles, our proposed dataset also comprises Twitter posts associated with the transfer rumours. The present work describes the creation process of the FTR-18 dataset and discusses the intended usage of the collection. \section{Related Work} Collections of rumours of different natures have been assembled before.
However, these datasets mainly focused on political and social issues \cite{qazvinian2011rumor,silverman2015lies,wang2017liar}. Some of these datasets were built using rumours collected from social media platforms \cite{qazvinian2011rumor,zubiaga2016analysing,zubiaga2016learning}, such as Twitter, or with data extracted from fact-checking websites \cite{silverman2015lies,wang2017liar}, like PolitiFact\footnote{http://www.politifact.com/} and Snopes\footnote{https://www.snopes.com/}, while others were crafted using manually created claims based on Wikipedia articles \cite{thorne2018fever}. The first large-scale dataset on rumour tracking and classification was proposed by \citet{qazvinian2011rumor}. This dataset comprises manually annotated Twitter posts (tweets) from five political and social controversial topics, with tweets marked as \textit{related} or \textit{unrelated} to the rumour. It also provides annotations about users' beliefs towards rumours, i.e., users who endorse versus users who refute or question a given rumour. Twitter posts were also used in the construction of two datasets under the PHEME project \cite{derczynski2014pheme}: PHEME dataset of rumours and non-rumours (PHEME-RNR) and PHEME rumour scheme dataset (PHEME-RSD). These two collections are composed of tweets from rumourous conversations associated with newsworthy events mainly about crisis situations. PHEME-RNR contains stories manually annotated as \textit{rumour} or \textit{non-rumour}, hence being suited for the rumour detection task \cite{zubiaga2016learning}. On the other hand, PHEME-RSD was annotated for stance and veracity, following crowd-sourcing guidelines \cite{zubiaga2015crowdsourcing}, and tracked three dimensions of interaction expressed by users: support, certainty and evidentiality \cite{zubiaga2016analysing}. PHEME-RSD contains conversations threads in English and in German, however this data is extremely imbalanced as less than 6\% of the tweets are written in German. 
Silverman developed a dataset for the analysis of how online media handles rumours and unverified information \cite{silverman2015lies}. The resulting Emergent database is a collection of online rumours comprising topics about war conflicts, politics and business/technology. Emergent data was generated with the help of automated tools to identify and capture unconfirmed reports early in their life-cycle. Then, rumours were classified according to their headline and body text stances and monitored with respect to social network shares (Twitter, Facebook and Google Plus) and version changes. In a later work, Ferreira and Vlachos leveraged Emergent into a dataset focusing on veracity estimation and stance classification \cite{ferreira2016emergent}. This modified dataset consists of claims and associated news article headlines. Claims were labelled with respect to their veracity, while the news article headlines were categorised according to their stances towards the claim: \textit{supporting}, \textit{denying} or \textit{observing} (if the article does not make assessments about the veracity of the claim). LIAR is another dataset that could be employed in fact-checking and stance classification tasks \cite{wang2017liar}. LIAR includes manually labelled short statements from PolitiFact. All the statements were obtained from a political context, either extracted from debates, campaign speeches, social media posts, news releases or interviews. Each statement was evaluated for its truthfulness and received a veracity label accompanied by the corresponding evidence. Thorne et al. developed FEVER (Fact Extraction and VERification), a large-scale manually annotated dataset focusing on verification of claims against textual sources \cite{thorne2018fever}. FEVER is composed of claims generated from modifications in sentences extracted from introductory sections of Wikipedia pages.
The claims were manually classified as \textit{supported}, \textit{refuted} or \textit{not enough info} (if no information in Wikipedia can support or refute the claim). Additional Wikipedia justification evidences concerning supporting and refuting sentences were also recorded. The PHEME-RSD and FEVER datasets were employed in stance detection and veracity prediction shared tasks. The PHEME-RSD was employed in the Semantic Evaluation (SemEval) 2017 competition, Task 8 RumourEval\footnote{http://alt.qcri.org/semeval2017/task8/} \cite{derczynski2017semeval}. In the first sub-task (Sub-task A: Stance Classification), participants were asked to analyse how social media users reacted to rumourous stories, while for the second sub-task (Sub-task B: Veracity prediction), participants were asked to predict the veracity of a given rumour. In the FEVER shared task\footnote{http://fever.ai/task.html}, participants should build a system to extract textual evidences either supporting or refuting the claim. Despite the variety of existing datasets for rumour analysis, none of them addresses football rumours. Besides, rumours in multilingual environments like the European Football market are yet to be explored by the academic research community. For our purpose, however, it is extremely important to track how news about transfer rumours are covered by sports media in different languages, as UEFA transfers generally occur between distinct European countries. Hence, our proposal differs from the previous work as it introduces a new dataset comprising football transfer rumours in three different languages: English, Spanish and Portuguese.
\section{Introduction} \begin{figure}[tb] \centering \includegraphics[height=2.3cm]{example_fig.pdf} \caption{A micro-video interaction sequence with two interests: travel and pets.} \label{fig:ninterests} \end{figure} In recent years, micro-video apps such as TikTok, Kwai, MX TakaTak, etc. have become increasingly popular. Generally, a micro-video app displays a single video in full-screen mode at a time and automatically plays it repeatedly. Usually, only after viewing the cover of a micro-video or watching its content for a few seconds can the user determine whether he or she is interested in it. With the explosive growth of the number of micro-videos, if the micro-videos exposed to a user do not fall within the scope of his/her interests, the user might leave the app. Therefore, efficient micro-video recommendation has become a crucial task. Existing micro-video recommendation models \cite{wei_mmgcn_2019,chen_temporal_2018,liu_user-video_2019,jiang_what_2020} rely on multi-modal information processing, which is too expensive to apply to large-scale micro-video collections. Furthermore, they learn a single interest embedding for a user from his/her interaction sequence. However, most users have multiple interests while watching micro-videos. As shown in Figure \ref{fig:ninterests}, a user may be interested in both travel and pets, and the user's future interactions might involve any one of these interests. Therefore, a more reasonable approach is to learn multiple disentangled interest embeddings for users, each of which represents one aspect of user interests, and then generate recommendations for the users based on the learned multiple disentangled interest embeddings. On the other hand, contrastive learning has attracted a great deal of attention recently.
It augments data to discover implicit supervision signals in the input data and maximizes the agreement between differently augmented views of the same data in a certain latent space. It has achieved success in computer vision \cite{chen_simple_2020}, natural language processing \cite{fang_cert_2020,wu_clear_2020,giorgi_declutr_2021} and other domains \cite{oord_representation_2019}. More recently, contrastive learning has also been introduced to recommendation, such as sequential recommendation and recommendation based on graph neural networks, where it enables debiasing \cite{zhou_contrastive_2021} and denoising \cite{qin_world_2021}, mitigates representation degeneration \cite{qiu_contrastive_2021} and the cold start problem \cite{wei_contrastive_2021}, and improves recommendation accuracy \cite{liu_contrastive_2021,xie_contrastive_2021,wu_self-supervised_2021,yu_graph_2021}. We note that there exists noise in the positive interactions in the micro-video scenario, since micro-videos are played automatically and sometimes users cannot judge whether they like a micro-video until it finishes playing. However, neither existing micro-video recommendation models nor multi-interest recommendation models \cite{ma_disentangled_2020,cen_controllable_2020,liu_octopus_2020,li_multi-interest_2019} utilize contrastive learning to reduce the impact of noise in the positive interactions. In this paper, we propose a new micro-video recommendation model named CMI. Based on implicit micro-video category information, this model learns multiple disentangled interests for a user from his/her historical interaction sequence, recalls a group of micro-videos with each interest embedding, and then forms the final recommendation result. In particular, contrastive learning is incorporated into CMI to denoise positive interactions, improving the robustness of multi-interest disentanglement.
Our contributions are summarized as follows. \begin{enumerate} \item We propose CMI, a micro-video recommendation model, to explore the feasibility of combining contrastive learning with multi-interest recommendation. \item We establish a multi-interest encoder based on implicit categories of items, and propose a contrastive multi-interest loss to minimize the difference between interests extracted from two augmented views of the same interaction sequence. \item We conduct experiments on two micro-video datasets, and the experimental results show the effectiveness and rationality of the model. \end{enumerate} \section{Methodology} \begin{figure}[tb] \centering \includegraphics[height=4.9cm]{model_fig.pdf} \caption{The architecture of CMI} \label{fig:model_fig} \end{figure} We denote the user and item sets as $\mathcal{U}$ and $\mathcal{V}$, respectively. Further, we denote an interaction between a user and an item as a triad; that is, the fact that user $u_{i}$ interacts with micro-video $v_{j}$ at timestamp $t$ is represented by $\left(i, j, t\right)$. Given a specific user $u_{i} \in \mathcal{U}$, we first generate a historical interaction sequence over a period of time, denoted as $s_{i}=\left[v_{i 1}, v_{i 2}, \ldots, v_{i\left|s_{i}\right|}\right]$, in which the videos are sorted in ascending order by the timestamp at which user $u_{i}$ interacted with them, and then learn multiple interest embeddings for the user, denoted as $\left[\mathbf{u}_{i}^{1}, \mathbf{u}_{i}^{2}, \ldots, \mathbf{u}_{i}^{m}\right]$. Then, for each interest embedding, we calculate its cosine similarity to each candidate micro-video and recall the $K$ micro-videos with the highest similarities, so that a total of $m K$ micro-videos are recalled. Finally, from the recalled micro-videos, we select the top-$K$ ones ranked by cosine similarity as the final recommendations.
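The recall-and-merge procedure above can be sketched as follows (a simplified pure-Python illustration over toy vectors; in CMI the vectors are learned embeddings):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recommend(interests, candidates, k):
    """Recall k videos per interest embedding, then keep the overall
    top-k recalled videos ranked by their best cosine similarity."""
    recalled = {}  # video id -> best similarity over the m interests
    for u in interests:
        scores = {v: cosine(u, vec) for v, vec in candidates.items()}
        for v in sorted(scores, key=scores.get, reverse=True)[:k]:
            recalled[v] = max(recalled.get(v, float("-inf")), scores[v])
    return sorted(recalled, key=recalled.get, reverse=True)[:k]

# Two interests, three candidate videos: each interest recalls its own
# nearest videos, and the final list merges them.
top = recommend([[1, 0], [0, 1]],
                {"a": [1, 0], "b": [0.9, 0.1], "c": [0, 1]}, k=2)
```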
\subsection{Multi-interest and General Interest Encoders} We argue that the categories of items are the basis of user interests. User preferences on a certain category of items form an interest of the user. Thus, we assume there are $m$ global categories and set learnable implicit embeddings $\left[\mathbf{g}_{1}, \mathbf{g}_{2}, \ldots, \mathbf{g}_{m}\right]$ for these $m$ categories. For items in historical interaction sequence $s_{i}$ of user $u_{i}$, we obtain the embedding of each item sequentially through the embedding layer, and form $\mathbf{S}_{i}=$ $\left[\mathbf{v}_{i 1}, \mathbf{v}_{i 2}, \ldots, \mathbf{v}_{i\left|s_{i}\right|}\right]$. We use the cosine similarity between an item embedding and a category embedding as the score that measures how the item belongs to the category. More specifically, the score of item $v_{i k} \in s_{i}$ matching category $l$ is calculated by Equation \ref{eq:wikj}. \begin{equation} \label{eq:wikj} w_{i k}^{l}=\frac{\mathbf{g}_{l}^{T} \mathbf{v}_{i k}}{\left\|\mathbf{g}_{l}\right\|_{2}\left\|\mathbf{v}_{i k}\right\|_{2}} \end{equation} Next, the probability of item $v_{i k} \in s_{i}$ assigned to category $l$ is calculated by Equation \ref{eq:pikl}, where $\epsilon$ is a hyper-parameter smaller than 1 to avoid over-smoothing of probabilities. \begin{equation} \label{eq:pikl} p_{i k}^{l}=\frac{\exp \left(w_{i k}^{l} / \epsilon\right)}{\sum_{l=1}^{m} \exp \left(w_{i k}^{l} / \epsilon\right)} \end{equation} Then, the user interest $\mathbf{u}_{i}^{l}$ corresponding to the item category ${l}$ is calculated by Equation \ref{eq:uil}. \begin{equation} \label{eq:uil} \mathbf{u}_{i}^{l}=\Sigma_{k=1}^{\left|s_{i}\right|} p_{i k}^{l} \mathbf{v}_{i k} \end{equation} While performing the category assignment, we might encounter two degeneration cases. One is that each item has the same or similar probability of belonging to different categories. 
This degeneration arises when the learned category embeddings are nearly identical to each other. The other is that one item category dominates the entire item embedding space, which means all items belong to that category. In order to avoid these degeneration cases, we constrain both category embeddings and item embeddings to lie on a unit hypersphere, that is, $\lVert\mathbf{g}_{i}\rVert_{2}=\lVert \mathbf{v}_{*} \rVert_{2}=1$, and constrain each pair of category embeddings to be orthogonal, thus constructing an orthogonality loss as shown in Equation \ref{eq:lg}. \begin{equation} \label{eq:lg} \mathcal{L}_{orth}=\sum_{i=1}^{m} \sum_{j=1, j \neq i}^{m} (\mathbf{g}_{i}^{T} \mathbf{g}_{j})^2 \end{equation} In addition to encoding a user's multiple interests, we use a GRU \cite{hidasi_recurrent_2018} to model the evolution of the user's general interest, obtaining $\mathbf{u}_{i}^{g}=GRU\left(\left[\mathbf{v}_{i1}, \mathbf{v}_{i2}, \ldots, \mathbf{v}_{i\left|s_{i}\right|}\right]\right)$. \subsection{Contrastive Regularization} We hold the view that the user interests implied in a subset of the interactions are the same as those implied in all the interactions. Therefore, we employ random sampling for data augmentation. Specifically, given the historical interaction sequence $s_i=[v_{i1},…,v_{i|s_i|}]$ of user $u_i$, we sample $\min \left(\mu\left|s_{i}\right|, f\right)$ micro-videos from $s_i$ and form a new sequence $s_{i}^{\prime}$, preserving their order in $s_{i}$, where $\mu$ is the sampling ratio and $f$ is the maximum sequence length, with a default value of 100. By randomly sampling $s_{i}$ twice, we get two sequences $s_{i}^{\prime}$ and $s_{i}^{\prime \prime}$.
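This sampling step can be sketched as follows (a minimal version; we assume $\mu|s_i|$ is truncated to an integer):

```python
import random

def augment(seq, mu=0.5, f=100):
    """Randomly keep min(mu*|seq|, f) interactions, preserving their order."""
    k = min(int(mu * len(seq)), f)
    kept = sorted(random.sample(range(len(seq)), k))
    return [seq[i] for i in kept]

s = list(range(20))
view1, view2 = augment(s), augment(s)  # two augmented views of one sequence
```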
Then we feed these two augmented sequences to two multi-interest encoders to learn two groups of user interests, i.e., $\mathbf{U}_{i}^{\prime}=\left[\mathbf{u}_{i}^{1 \prime}, \mathbf{u}_{i}^{2 \prime}, \ldots, \mathbf{u}_{i}^{m \prime}\right]$ and $\mathbf{U}_{i}^{\prime \prime}=\left[\mathbf{u}_{i}^{1 \prime \prime}, \mathbf{u}_{i}^{2 \prime \prime}, \ldots, \mathbf{u}_{i}^{m \prime \prime}\right]$, as shown in Equation \ref{eq:uiuii}, where both $\mathbf{u}_{i}^{k \prime}$ and $\mathbf{u}_{i}^{k \prime \prime}$ are interests corresponding to the $k$-th micro-video category. \begin{equation} \label{eq:uiuii} \begin{aligned} \mathbf{U}_{i}^{\prime} &=\operatorname{Multi-Interest-Encoder}\left(s_{i}^{\prime}\right) \\ \mathbf{U}_{i}^{\prime \prime} &=\operatorname{Multi-Interest-Encoder}\left(s_{i}^{\prime \prime}\right) \end{aligned} \end{equation} Then, we construct a contrastive multi-interest loss as follows. For any interest embedding $\mathbf{u}_{i}^{k \prime} \in \mathbf{U}_{i}^{\prime}$ of user $u_{i}$, we construct a positive pair $(\mathbf{u}_{i}^{k \prime},\mathbf{u}_{i}^{k \prime \prime})$, construct $2m-2$ negative pairs using $\mathbf{u}_{i}^{k \prime}$ and the other $2m-2$ interest embeddings of user $u_{i}$, i.e., $\mathbf{u}_{i}^{h \prime} \in \mathbf{U}_{i}^{\prime}$ and $\mathbf{u}_{i}^{h \prime \prime} \in \mathbf{U}_{i}^{\prime \prime}$, where $h \in[1, m], h \neq k$. Since $m$ is usually not too large, the number of above negative pairs is limited. Therefore, given $\mathbf{u}_{i}^{k \prime}$, we take the interest embeddings of every other user in the same batch to build extra negative pairs. To sum up, let the training batch be $\mathcal{B}$ and the batch size be $|\mathcal{B}|$, for each positive pair, there are $2m(|\mathcal{B}|-1)+2m-2=2(m|\mathcal{B}|-1)$ negative pairs, which forms the negative set $\mathcal{S}^-$. 
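Under these pair definitions, the temperature-scaled similarity and the symmetric contrastive objective for one positive pair can be sketched as follows (pure Python; the value of the temperature $\tau$ here is an assumption for illustration):

```python
import math

def sim(a, b, tau=0.1):
    """Temperature-scaled cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb * tau)

def contrastive_pair_loss(u1, u2, negatives, tau=0.1):
    """Symmetric InfoNCE-style loss for the positive pair (u1, u2), pushed
    away from every embedding in `negatives` (the user's other interests
    plus the interests of the other users in the batch)."""
    pos = math.exp(sim(u1, u2, tau))
    denom1 = pos + sum(math.exp(sim(u1, n, tau)) for n in negatives)
    denom2 = pos + sum(math.exp(sim(u2, n, tau)) for n in negatives)
    return -math.log(pos / denom1) - math.log(pos / denom2)
```

A well-aligned pair yields a smaller loss than a misaligned one, which is exactly the pressure that pulls the two views of the same interest together.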
Further, the contrastive multi-interest loss is defined in Equation \ref{eq:lcl}, where $\operatorname{sim}(\mathbf{a}, \mathbf{b})=\mathbf{a}^{T} \mathbf{b}/\left(\lVert\mathbf{a}\rVert_2\lVert\mathbf{b}\rVert_2\tau\right)$ and $\tau$ is a temperature parameter \cite{liu_contrastive_2021}. \begin{equation} \label{eq:lcl} \begin{aligned} \mathcal{L}_{c l}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime\prime}\right)= & -\log \frac{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}}{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}+\sum_{\mathbf{s}^- \in \mathcal{S}^-}e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{s}^{-}\right)}}\\ & -\log \frac{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}}{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}+\sum_{\mathbf{s}^- \in \mathcal{S}^-}e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime\prime}, \mathbf{s}^{-}\right)}} \end{aligned} \end{equation} Through data augmentation and the contrastive multi-interest loss, user interest learning is no longer sensitive to a specific positive interaction, thereby reducing the impact of noisy positive interactions and realizing positive interaction denoising. \subsection{Loss Function} The interaction score between user $u_{i}$ and candidate item $v_{t}$ is predicted as $c_{i t}=\max _{0<k \leq m}\left(\left\{\mathbf{u}_{i}^{k T} \mathbf{v}_{t} / \epsilon\right\}\right)+\mathbf{u}_{i}^{g T} \mathbf{v}_{t}$, in which $k \in[1, m]$. In the training process, for each positive sample $v_{p}^{i}$ of user $u_{i}$, we need to randomly sample $n$ micro-videos that have never been interacted with from the full micro-videos as negative samples. However, in order to avoid high sampling cost, given a positive sample, we only sample one negative sample, that is, $n$ is 1. 
Besides, we take the positive sampling items and negative sampling items of other users in the same batch as the negative samples, thus forming a negative sample set $\mathcal{N}$. We then adopt the following cross-entropy loss as the main part of the loss. \begin{equation} \label{eq:lmain} \mathcal{L}_{\text {main}}\left(u_{i}, v_{p}^{i}\right)=-\ln \frac{\exp \left(c_{i p}\right)}{\sum_{v_{*} \in \left\{\mathcal{N} \cup v_{p}^{i}\right\}} \exp \left(c_{i *}\right)} \end{equation} Finally, our loss function is shown in Equation \ref{eq:ltotal}, where $\lambda_{*}$ is the regularization coefficient. \begin{equation} \label{eq:ltotal} \mathcal{L}=\mathcal{L}_{\text {main }}+\lambda_{c l} \mathcal{L}_{c l}+\lambda_{ {orth }} \mathcal{L}_{orth} \end{equation} \section{Experiments} \subsection{Experiment Setup} \subsubsection{Datasets} We conduct experiments on two micro-video datasets. \begin{enumerate} \item \textbf{WeChat}. This is a public dataset released by WeChat Big Data Challenge 2021\footnote{https://algo.weixin.qq.com/problem-description}. This dataset contains user interactions on WeChat Channels, including explicit satisfaction interactions such as likes and favorites and implicit engagement interactions such as playing. \item \textbf{TakaTak}. This dataset is collected from TakaTak, a micro-video app for Indian users. The dataset contains interaction records of 50,000 anonymous users in four weeks. \end{enumerate} The statistics of the two datasets are shown in Table \ref{tab:dataset_stat}. For a dataset spanning $h$ day, we construct the training set with the interactions in the first $h-2$ days, the validation set with the interactions in the $h-1$-th day, and the test set with the interactions of the $h$-th day. \begin{table}[t] \begin{center} \caption{The statistics of the two datasets. 
} \label{tab:dataset_stat} \begin{tabular}{ccccc} \toprule Dataset & \#Users & \#Micro-videos & \#Interactions & Density \\ \midrule WeChat & 20000 & 77557 & 2666296 & 0.17\% \\ TakaTak & 50000 & 157691 & 33863980 & 0.45\% \\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Metrics} Here, Recall@K and HitRate@K are used as metrics to evaluate the quality of the recommendations. \begin{table*}[tb] \caption{Recommendation accuracy on two datasets. \#I. denotes the number of interests. The number in a bold type is the best performance in each column. The underlined number is the second best in each column.} \label{tab:perform-cmp} \setlength\tabcolsep{4.2pt} \begin{center} \begin{tabular}{l|c|c c c | c c c|c|c c c | c c c} \toprule & \multicolumn{7}{|c|}{\textbf{\small{WeChat}}}& \multicolumn{7}{|c}{\textbf{\small{TakaTak}}}\\ \cmidrule(lr){2-15} & & \multicolumn{3}{|c|}{\textbf{\small{Recall}}}& \multicolumn{3}{|c|}{\textbf{\small{HitRate}}}& & \multicolumn{3}{|c|}{\textbf{\small{Recall}}}& \multicolumn{3}{|c}{\textbf{\small{HitRate}}}\\ \cmidrule(lr){2-15} & \textbf{\#I.} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{\#I.} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{@10} & \textbf{@20} & \textbf{@50}\\ \midrule Octopus&1 &0.0057 &0.0125 &0.0400 &0.0442 &0.0917 &0.2332 &1 &0.0076 &0.0160 &0.0447 &0.1457 &0.2533 &0.4393\\ MIND&1 &0.0296 &0.0521 &0.1025 &0.1774 &0.2791 &0.4514 &1 &0.0222 &0.0389 &\underline{0.0773} &0.2139 &0.3263 &0.4977\\ ComiRec-DR&1 &0.0292 &0.0525 &0.1049 &0.1790 &0.2893 &0.4621 &1 &0.0226 &0.0392 &0.0769 &0.2345 &0.3427 &0.5144\\ ComiRec-SA&1 &0.0297 &0.0538 &0.1079 &0.1806 &0.2938 &0.4684 &1 &\underline{0.0239} &\underline{0.0409} &0.0752 &\underline{0.2567} &0.3665 &0.5207\\ DSSRec&1 & 
\underline{0.0327}&\underline{0.0578}&\underline{0.1161}&\underline{0.1971}&\underline{0.3064}&\underline{0.4854}&8&\textbf{0.0244}&0.0408&0.0749&0.2558&\underline{0.3704}&\underline{0.5259}\\ \midrule CMI&8 &\textbf{0.0424} &\textbf{0.0717} &\textbf{0.1342} &\textbf{0.2436} &\textbf{0.3612} &\textbf{0.5292} &8 &0.0210 &\textbf{0.0415} &\textbf{0.0877} &\textbf{0.2912} &\textbf{0.4172} &\textbf{0.5744}\\ \midrule Improv.&/ &29.66\%&24.05\%&15.59\%&23.59\%&17.89\%&9.02\% & /&/&1.72\%&17.09\%&13.84\%&12.63\%&9.22\%\\ \bottomrule \end{tabular} \end{center} \end{table*} \subsubsection{Competitors} We choose the following multi-interest recommendation models as competitors. \begin{enumerate} \item \textbf{Ocotopus} \cite{liu_octopus_2020}: It constructs an elastic archive network to extract diverse interests of users. \item \textbf{MIND} \cite{li_multi-interest_2019}: It adjusts the dynamic routing algorithm in the capsule network to extract multiple interests of users. \item \textbf{ComiRec-DR} \cite{cen_controllable_2020}: It adopts the original dynamic routing algorithm of the capsule network to learn multiple user interests. \item \textbf{ComiRec-SA} \cite{cen_controllable_2020}: It uses a multi-head attention mechanism to capture the multiple interests of users. \item \textbf{DSSRec} \cite{ma_disentangled_2020}: It disentangles multiple user intentions through self-supervised learning. \end{enumerate} To be fair, we do not compare with models that rely on multi-modal information. \subsubsection{Implementation Details} We implement our model with PyTorch, and initialize the parameters with the uniform distribution $U(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$, where $d$ is the dimension of embeddings. We optimize the model through Adam. Hyper-parameters $\epsilon$, $\tau$ and $\lambda_{cl}$ are searched in [1, 0.1, 0.01, 0.001, 0.0001, 0.00001], and finally we set $\epsilon=0.1$, $\tau=0.1$, and $\lambda_{cl}=0.01$. 
$\lambda_{orth}$ is searched in $[15,10,5,1,0.5]$ and finally we set it to 10. The sampling rate $\mu$ is set to 0.5. For the sake of fairness, in all the experiments, we set the embedding dimension to 64 and the batch size to 1024. We stop training when Recall@50 on the validation set has not been improved in 5 consecutive epochs on the WeChat dataset and 10 consecutive epochs on the Takatak dataset. Besides, for MIND, ComiRec-DR, ComiRec-SA, and DSSRec, we use the open-source code released on Github \footnote{https://github.com/THUDM/ComiRec}$^,$\footnote{https://github.com/abinashsinha330/DSSRec}. \subsection{Performance Comparison} The experimental results of performance comparison are shown in Table \ref{tab:perform-cmp}, from which we have the following observations. (1) Multi-interest competitors except for CMI, with few exceptions, reach the best results while the number of interests is 1, indicating that these models cannot effectively capture multiple interests of a user in micro-videos. (2) The two dynamic routing-based models MIND and ComiRec-DR are not as good as ComiRec-SA and DSSRec. This is probably because both MIND and ComiRec-DR do not fully utilize the sequential relationship between historical interactions, but ComiRec-SA and DSSRec do. In addition, DSSRec adopts a novel seq2seq training strategy that leverages additional supervision signals, thus obtaining better performance. (3) Octopus performs worst. That is probably because it aggressively routes every item into one interest exclusively at the beginning of training, which makes it easy to trap the parameters in a local optimum. (4) On two datasets, CMI far outperforms the competitors on most metrics, which demonstrates that CMI generates recommendations with both high accuracy and excellent coverage. The reason is that we avoid model degeneration by setting the category embeddings orthogonal and extract multiple heterogeneous user interests. 
In addition, we achieve positive interaction denoising via contrastive learning, which improves the robustness of interest embeddings. \subsection{Ablation Study} While setting the number of interests to 8, we observe the performance of two model variants: CMI-CL and CMI-G, where the former is the CMI removing the contrastive multi-interest loss and the latter is the CMI without the general interest encoder. From the Table \ref{tab:ablation}, we find the model variants suffer severe declines in performance. This confirms the feasibility and effectiveness of contrastive learning for multi-interest recommendation, and shows that whether the candidate item matches the general user preferences is also important. \begin{table}[htbp] \caption{Ablation study on WeChat. The values in parentheses are the percentages of decline relative to the original model.} \label{tab:ablation} \begin{center} \setlength\tabcolsep{4pt} \begin{tabular}{l|l|c c c c} \toprule & & CMI-CL &CMI-G &CMI\\ \midrule &@10 &0.039(-8.02\%) &0.0342(-19.34\%) &\textbf{0.0424}\\ Recall&@20 &0.0665(-7.25\%) &0.0589(-17.85\%) &\textbf{0.0717}\\ &@50 &0.1285(-4.25\%) &0.1165(-13.19\%) &\textbf{0.1342}\\ \midrule &@10 &0.2286(-6.16\%) &0.2061(-15.39\%) &\textbf{0.2436}\\ HitRate&@20 &0.3443(-4.68\%) &0.3181(-11.93\%) &\textbf{0.3612}\\ &@50 &0.5188(-1.93\%) &0.4935(-6.71\%) &\textbf{0.5290}\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[htbp] \caption{The effect of the number of interests on WeChat. } \label{tab:eff-ninterest} \begin{center} \setlength\tabcolsep{4pt} \begin{tabular}{l|c|c c c c c} \toprule & \#I. 
& 1& 2 & 4 & 8 & 16 \\ \midrule &@10 &0.0303 & 0.0404 &0.0409 &\textbf{0.0428} &0.0412\\ Recall&@20 &0.0530&0.0699 &0.0694 &\textbf{0.0718} &0.0700\\ &@50 &0.1039&0.1343 &0.1333 &\textbf{0.1364} &0.1314\\ \midrule &@10 &0.1969&0.2383 &0.2384 &\textbf{0.2458} &0.2390\\ HitRate&@20 &0.3012&0.3547 &0.3516 &\textbf{0.3587} &0.3557\\ &@50 &0.4646&0.5330 &0.5271 &\textbf{0.5322} &0.5238\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Effect of the Number of Interests} We set the number of interests to [1, 2, 4, 8, 16] successively and conduct experiments. The experimental results are shown in Table \ref{tab:eff-ninterest}. It can be seen that CMI reaches the best performance when the number of interests is 8 rather than 1. This confirms the necessity and effectiveness of extracting multiple interests in the micro-video recommendation scenarios. \section{Conclusion} This paper proposes a micro-video recommendation model CMI. The CMI model devises a multi-interest encoder and constructs a contrastive multi-interest loss to achieve positive interaction denoising and recommendation performance improvement. The performance of CMI on two micro-video datasets far exceeds other existing multi-interest models. The results of ablation study demonstrate that fusing contrastive learning into multi-interest extracting in micro-video recommendation is feasible and effective. \begin{acks} This work was supported by the National Natural Science Foundation of China under Grant No. 62072450 and the 2021 joint project with MX Media. \end{acks} \bibliographystyle{ACM-Reference-Format} \section{Introduction} \begin{figure}[tb] \centering \includegraphics[height=2.3cm]{example_fig.pdf} \caption{An micro-video interaction sequence with two interests: travel and pets.} \label{fig:ninterests} \end{figure} In recent years, micro-video apps such as TikTok, Kwai, MX TakaTak, etc. have become increasingly popular. 
Generally, a micro-video app displays a single video in full-screen mode and plays it automatically on repeat. Usually, only after viewing the cover of a micro-video or watching its content for a few seconds can the user determine whether he or she is interested in it. With the explosive growth of the number of micro-videos, if the micro-videos exposed to a user do not fall within the scope of his/her interests, the user might leave the app. Therefore, efficient micro-video recommendation has become a crucial task. Existing micro-video recommendation models \cite{wei_mmgcn_2019,chen_temporal_2018,liu_user-video_2019,jiang_what_2020} rely on multi-modal information processing, which is too expensive for large-scale micro-video collections. Furthermore, they learn a single interest embedding for a user from his/her interaction sequence. However, most users have multiple interests while watching micro-videos. As shown in Figure \ref{fig:ninterests}, a user may be interested in both travel and pets, and his/her future interactions might involve any one of these interests. Therefore, a more reasonable approach is to learn multiple disentangled interest embeddings for each user, each representing one aspect of the user's interests, and then generate recommendations based on these embeddings. On the other hand, contrastive learning has attracted a great deal of attention recently. It augments data to discover implicit supervision signals in the input and maximizes the agreement between differently augmented views of the same data in a latent space. It has achieved success in computer vision \cite{chen_simple_2020}, natural language processing \cite{fang_cert_2020,wu_clear_2020,giorgi_declutr_2021} and other domains \cite{oord_representation_2019}.
More recently, contrastive learning has also been introduced into recommendation, e.g., sequential recommendation and graph-neural-network-based recommendation, where it enables debiasing \cite{zhou_contrastive_2021} and denoising \cite{qin_world_2021}, alleviates representation degeneration \cite{qiu_contrastive_2021} and the cold-start problem \cite{wei_contrastive_2021}, and improves recommendation accuracy \cite{liu_contrastive_2021,xie_contrastive_2021,wu_self-supervised_2021,yu_graph_2021}. We note that positive interactions are noisy in the micro-video scenario, since micro-videos are played automatically and users sometimes cannot judge whether they like a micro-video until it finishes playing. However, neither existing micro-video recommendation models nor multi-interest recommendation models \cite{ma_disentangled_2020,cen_controllable_2020,liu_octopus_2020,li_multi-interest_2019} utilize contrastive learning to reduce the impact of this noise. In this paper, we propose a new micro-video recommendation model named CMI. Based on implicit micro-video category information, CMI learns multiple disentangled interests for a user from his/her historical interaction sequence, recalls a group of micro-videos with each interest embedding, and then forms the final recommendation result. In particular, contrastive learning is incorporated into CMI to denoise positive interactions and improve the robustness of multi-interest disentanglement. Our contribution is summarized as follows. \begin{enumerate} \item We propose CMI, a micro-video recommendation model, to explore the feasibility of combining contrastive learning with multi-interest recommendation.
\item We establish a multi-interest encoder based on implicit categories of items, and propose a contrastive multi-interest loss to minimize the difference between interests extracted from two augmented views of the same interaction sequence. \item We conduct experiments on two micro-video datasets, and the experimental results show the effectiveness and rationality of the model. \end{enumerate} \section{Methodology} \begin{figure}[tb] \centering \includegraphics[height=4.9cm]{model_fig.pdf} \caption{The architecture of CMI} \label{fig:model_fig} \end{figure} We denote the user and item sets as $\mathcal{U}$ and $\mathcal{V}$, respectively. Further, we denote an interaction between a user and an item as a triad; that is, the fact that user $u_{i}$ interacts with micro-video $v_{j}$ at timestamp $t$ is represented by $\left(i, j, t\right)$. Given a specific user $u_{i} \in \mathcal{U}$, we first generate a historical interaction sequence over a period of time, denoted as $s_{i}=\left[v_{i 1}, v_{i 2}, \ldots, v_{i\left|s_{i}\right|}\right]$, in which the videos are sorted in ascending order of the timestamp at which user $u_{i}$ interacted with them, and second learn multiple interest embeddings for the user, denoted as $\left[\mathbf{u}_{i}^{1}, \mathbf{u}_{i}^{2}, \ldots, \mathbf{u}_{i}^{m}\right]$. Then, for each interest embedding, we calculate the cosine similarity to each candidate micro-video and recall the $K$ micro-videos with the highest similarities, so that a total of $m K$ micro-videos are recalled. Finally, from the recalled micro-videos, we select the top-$K$ ones ranked by cosine similarity as the final recommendations. \subsection{Multi-interest and General Interest Encoders} We argue that the categories of items are the basis of user interests: user preferences on a certain category of items form an interest of the user.
Thus, we assume there are $m$ global categories and set learnable implicit embeddings $\left[\mathbf{g}_{1}, \mathbf{g}_{2}, \ldots, \mathbf{g}_{m}\right]$ for these $m$ categories. For the items in the historical interaction sequence $s_{i}$ of user $u_{i}$, we obtain the embedding of each item through the embedding layer, forming $\mathbf{S}_{i}=\left[\mathbf{v}_{i 1}, \mathbf{v}_{i 2}, \ldots, \mathbf{v}_{i\left|s_{i}\right|}\right]$. We use the cosine similarity between an item embedding and a category embedding as the score that measures how well the item matches the category. More specifically, the score of item $v_{i k} \in s_{i}$ matching category $l$ is calculated by Equation \ref{eq:wikj}. \begin{equation} \label{eq:wikj} w_{i k}^{l}=\frac{\mathbf{g}_{l}^{T} \mathbf{v}_{i k}}{\left\|\mathbf{g}_{l}\right\|_{2}\left\|\mathbf{v}_{i k}\right\|_{2}} \end{equation} Next, the probability of item $v_{i k} \in s_{i}$ being assigned to category $l$ is calculated by Equation \ref{eq:pikl}, where $\epsilon$ is a hyper-parameter smaller than 1 to avoid over-smoothing of the probabilities. \begin{equation} \label{eq:pikl} p_{i k}^{l}=\frac{\exp \left(w_{i k}^{l} / \epsilon\right)}{\sum_{l^{\prime}=1}^{m} \exp \left(w_{i k}^{l^{\prime}} / \epsilon\right)} \end{equation} Then, the user interest $\mathbf{u}_{i}^{l}$ corresponding to item category ${l}$ is calculated by Equation \ref{eq:uil}. \begin{equation} \label{eq:uil} \mathbf{u}_{i}^{l}=\sum_{k=1}^{\left|s_{i}\right|} p_{i k}^{l} \mathbf{v}_{i k} \end{equation} While performing the category assignment, we might encounter two degeneration cases. One is that each item has the same or similar probability of belonging to different categories; this happens when the learned category embeddings are almost identical to each other. The other is that one item category dominates the entire item embedding space, which means all items belong to that category.
To avoid these degeneration cases, we constrain both category embeddings and item embeddings to lie on a unit hypersphere, that is, $\lVert\mathbf{g}_{i}\rVert_{2}=\lVert \mathbf{v}_{*} \rVert_{2}=1$, and constrain every two category embeddings to be orthogonal, thus constructing the orthogonality loss shown in Equation \ref{eq:lg}. \begin{equation} \label{eq:lg} \mathcal{L}_{orth}=\sum_{i=1}^{m} \sum_{j=1, j \neq i}^{m} (\mathbf{g}_{i}^{T} \mathbf{g}_{j})^2 \end{equation} In addition to encoding a user's multiple interests, we use a GRU \cite{hidasi_recurrent_2018} to model the evolution of the user's general interest, obtaining $\mathbf{u}_{i}^{g}=\operatorname{GRU}\left(\left[\mathbf{v}_{i1}, \mathbf{v}_{i2}, \ldots, \mathbf{v}_{i\left|s_{i}\right|}\right]\right)$. \subsection{Contrastive Regularization} We hold the view that the user interests implied in a part of the interactions are the same as those implied in all the interactions. Therefore, we employ random sampling for data augmentation. Specifically, given the historical interaction sequence $s_i=[v_{i1},\ldots,v_{i|s_i|}]$ of user $u_i$, we sample $\min \left(\mu\left|s_{i}\right|, f\right)$ micro-videos from $s_i$ and form a new sequence $s_{i}^{\prime}$ according to their order in $s_{i}$, where $\mu$ is the sampling ratio and $f$ is the maximum sequence length, with a default value of 100. By randomly sampling $s_{i}$ twice, we get two sequences $s_{i}^{\prime}$ and $s_{i}^{\prime \prime}$.
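To make the encoder concrete, the following NumPy sketch implements Equations \ref{eq:wikj}--\ref{eq:lg} (a simplified illustration; the function and variable names are ours, not the paper's):

```python
import numpy as np

def multi_interest_encoder(item_embs, cat_embs, eps=0.1):
    """Softly assign each interacted item to the m implicit categories,
    then pool the items into m interest embeddings.

    item_embs: (|s_i|, d) embeddings of the user's interaction sequence.
    cat_embs:  (m, d) learnable implicit category embeddings.
    """
    # Cosine similarity between every item and every category, shape (|s_i|, m).
    item_n = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)
    cat_n = cat_embs / np.linalg.norm(cat_embs, axis=1, keepdims=True)
    w = item_n @ cat_n.T
    # Temperature softmax over categories; eps < 1 avoids over-smoothing.
    logits = w / eps
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Interest l is the probability-weighted sum of item embeddings, shape (m, d).
    return p.T @ item_embs

def orthogonality_loss(cat_embs):
    """Sum of squared inner products between every two distinct categories."""
    gram = cat_embs @ cat_embs.T
    return float((gram ** 2).sum() - (np.diag(gram) ** 2).sum())
```

On mutually orthogonal unit-norm category embeddings, `orthogonality_loss` is zero, which is exactly the configuration that Equation \ref{eq:lg} pushes towards.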
Then we feed these two augmented sequences to two multi-interest encoders to learn two groups of user interests, i.e., $\mathbf{U}_{i}^{\prime}=\left[\mathbf{u}_{i}^{1 \prime}, \mathbf{u}_{i}^{2 \prime}, \ldots, \mathbf{u}_{i}^{m \prime}\right]$ and $\mathbf{U}_{i}^{\prime \prime}=\left[\mathbf{u}_{i}^{1 \prime \prime}, \mathbf{u}_{i}^{2 \prime \prime}, \ldots, \mathbf{u}_{i}^{m \prime \prime}\right]$, as shown in Equation \ref{eq:uiuii}, where both $\mathbf{u}_{i}^{k \prime}$ and $\mathbf{u}_{i}^{k \prime \prime}$ are interests corresponding to the $k$-th micro-video category. \begin{equation} \label{eq:uiuii} \begin{aligned} \mathbf{U}_{i}^{\prime} &=\operatorname{Multi-Interest-Encoder}\left(s_{i}^{\prime}\right) \\ \mathbf{U}_{i}^{\prime \prime} &=\operatorname{Multi-Interest-Encoder}\left(s_{i}^{\prime \prime}\right) \end{aligned} \end{equation} Then, we construct a contrastive multi-interest loss as follows. For any interest embedding $\mathbf{u}_{i}^{k \prime} \in \mathbf{U}_{i}^{\prime}$ of user $u_{i}$, we construct a positive pair $(\mathbf{u}_{i}^{k \prime},\mathbf{u}_{i}^{k \prime \prime})$ and $2m-2$ negative pairs using $\mathbf{u}_{i}^{k \prime}$ and the other $2m-2$ interest embeddings of user $u_{i}$, i.e., $\mathbf{u}_{i}^{h \prime} \in \mathbf{U}_{i}^{\prime}$ and $\mathbf{u}_{i}^{h \prime \prime} \in \mathbf{U}_{i}^{\prime \prime}$ with $h \in[1, m], h \neq k$. Since $m$ is usually not large, the number of such negative pairs is limited. Therefore, given $\mathbf{u}_{i}^{k \prime}$, we also take the interest embeddings of every other user in the same batch to build extra negative pairs. To sum up, let the training batch be $\mathcal{B}$ with batch size $|\mathcal{B}|$; then for each positive pair there are $2m(|\mathcal{B}|-1)+2m-2=2(m|\mathcal{B}|-1)$ negative pairs, whose counterpart embeddings form the negative set $\mathcal{S}^-$.
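The pair construction and the count $2(m|\mathcal{B}|-1)$ can be checked with a small sketch (our own illustrative code, not the paper's implementation):

```python
import numpy as np

def build_pairs(U1, U2, i, k):
    """Collect the positive counterpart and the negative set S^- for the
    anchor interest u_i^{k'}.

    U1, U2: (|B|, m, d) interest tensors from the two augmented views.
    """
    batch, m, _ = U1.shape
    positive = U2[i, k]
    negatives = []
    for h in range(m):            # the other 2m - 2 interests of user i
        if h != k:
            negatives.append(U1[i, h])
            negatives.append(U2[i, h])
    for j in range(batch):        # all 2m interests of every other user
        if j != i:
            negatives.extend(U1[j])
            negatives.extend(U2[j])
    return positive, negatives

U1 = np.random.randn(4, 8, 64)    # |B| = 4 users, m = 8 interests each
U2 = np.random.randn(4, 8, 64)
_, neg = build_pairs(U1, U2, i=0, k=3)
assert len(neg) == 2 * (8 * 4 - 1)   # 2(m|B| - 1) negatives per positive pair
```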
Further, the contrastive multi-interest loss is defined in Equation \ref{eq:lcl}, where $\operatorname{sim}(\mathbf{a}, \mathbf{b})=\mathbf{a}^{T} \mathbf{b}/\left(\lVert\mathbf{a}\rVert_2\lVert\mathbf{b}\rVert_2\tau\right)$ and $\tau$ is a temperature parameter \cite{liu_contrastive_2021}. \begin{equation} \label{eq:lcl} \begin{aligned} \mathcal{L}_{c l}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime\prime}\right)= & -\log \frac{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}}{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}+\sum_{\mathbf{s}^- \in \mathcal{S}^-}e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{s}^{-}\right)}}\\ & -\log \frac{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}}{e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime}, \mathbf{u}_{i}^{k \prime \prime}\right)}+\sum_{\mathbf{s}^- \in \mathcal{S}^-}e^{\operatorname{sim}\left(\mathbf{u}_{i}^{k \prime\prime}, \mathbf{s}^{-}\right)}} \end{aligned} \end{equation} Through data augmentation and the contrastive multi-interest loss, user interest learning is no longer sensitive to any specific positive interaction, thereby reducing the impact of noisy positive interactions and realizing positive interaction denoising. \subsection{Loss Function} The interaction score between user $u_{i}$ and candidate item $v_{t}$ is predicted as $c_{i t}=\max _{0<k \leq m}\left(\left\{\mathbf{u}_{i}^{k T} \mathbf{v}_{t} / \epsilon\right\}\right)+\mathbf{u}_{i}^{g T} \mathbf{v}_{t}$, i.e., the best-matching interest (scaled by $1/\epsilon$) plus the general-interest match. In the training process, for each positive sample $v_{p}^{i}$ of user $u_{i}$, we randomly sample $n$ micro-videos that the user has never interacted with from the full micro-video set as negative samples. To avoid a high sampling cost, we sample only one negative per positive sample, i.e., $n=1$.
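For illustration, the interaction score defined above can be sketched as follows (a simplified version; the names are ours):

```python
import numpy as np

def interaction_score(interests, general, item_emb, eps=0.1):
    """Score c_it: the best-matching interest (scaled by 1/eps) plus the
    general-interest match."""
    return (interests @ item_emb).max() / eps + general @ item_emb

# Toy check: the item aligns perfectly with the second interest.
interests = np.array([[1.0, 0.0], [0.0, 1.0]])
general = np.array([1.0, 1.0])
item = np.array([0.0, 1.0])
score = interaction_score(interests, general, item)   # 1/0.1 + 1 = 11
```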
Besides, we take the sampled positive and negative items of the other users in the same batch as additional negative samples, thus forming the negative sample set $\mathcal{N}$. We then adopt the following cross-entropy loss as the main part of the loss. \begin{equation} \label{eq:lmain} \mathcal{L}_{\text{main}}\left(u_{i}, v_{p}^{i}\right)=-\ln \frac{\exp \left(c_{i p}\right)}{\sum_{v_{*} \in \left\{\mathcal{N} \cup v_{p}^{i}\right\}} \exp \left(c_{i *}\right)} \end{equation} Finally, our loss function is shown in Equation \ref{eq:ltotal}, where each $\lambda_{*}$ is a regularization coefficient. \begin{equation} \label{eq:ltotal} \mathcal{L}=\mathcal{L}_{\text{main}}+\lambda_{c l} \mathcal{L}_{c l}+\lambda_{orth} \mathcal{L}_{orth} \end{equation} \section{Experiments} \subsection{Experiment Setup} \subsubsection{Datasets} We conduct experiments on two micro-video datasets. \begin{enumerate} \item \textbf{WeChat}. This is a public dataset released by the WeChat Big Data Challenge 2021\footnote{https://algo.weixin.qq.com/problem-description}. It contains user interactions on WeChat Channels, including explicit satisfaction interactions such as likes and favorites, and implicit engagement interactions such as plays. \item \textbf{TakaTak}. This dataset is collected from TakaTak, a micro-video app for Indian users. It contains the interaction records of 50,000 anonymous users over four weeks. \end{enumerate} The statistics of the two datasets are shown in Table \ref{tab:dataset_stat}. For a dataset spanning $h$ days, we construct the training set from the interactions in the first $h-2$ days, the validation set from the interactions on the $(h-1)$-th day, and the test set from the interactions on the $h$-th day. \begin{table}[t] \begin{center} \caption{The statistics of the two datasets.
} \label{tab:dataset_stat} \begin{tabular}{ccccc} \toprule Dataset & \#Users & \#Micro-videos & \#Interactions & Density \\ \midrule WeChat & 20000 & 77557 & 2666296 & 0.17\% \\ TakaTak & 50000 & 157691 & 33863980 & 0.45\% \\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Metrics} Here, Recall@K and HitRate@K are used as metrics to evaluate the quality of the recommendations. \begin{table*}[tb] \caption{Recommendation accuracy on two datasets. \#I. denotes the number of interests. The number in a bold type is the best performance in each column. The underlined number is the second best in each column.} \label{tab:perform-cmp} \setlength\tabcolsep{4.2pt} \begin{center} \begin{tabular}{l|c|c c c | c c c|c|c c c | c c c} \toprule & \multicolumn{7}{|c|}{\textbf{\small{WeChat}}}& \multicolumn{7}{|c}{\textbf{\small{TakaTak}}}\\ \cmidrule(lr){2-15} & & \multicolumn{3}{|c|}{\textbf{\small{Recall}}}& \multicolumn{3}{|c|}{\textbf{\small{HitRate}}}& & \multicolumn{3}{|c|}{\textbf{\small{Recall}}}& \multicolumn{3}{|c}{\textbf{\small{HitRate}}}\\ \cmidrule(lr){2-15} & \textbf{\#I.} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{\#I.} & \textbf{@10} & \textbf{@20} & \textbf{@50} & \textbf{@10} & \textbf{@20} & \textbf{@50}\\ \midrule Octopus&1 &0.0057 &0.0125 &0.0400 &0.0442 &0.0917 &0.2332 &1 &0.0076 &0.0160 &0.0447 &0.1457 &0.2533 &0.4393\\ MIND&1 &0.0296 &0.0521 &0.1025 &0.1774 &0.2791 &0.4514 &1 &0.0222 &0.0389 &\underline{0.0773} &0.2139 &0.3263 &0.4977\\ ComiRec-DR&1 &0.0292 &0.0525 &0.1049 &0.1790 &0.2893 &0.4621 &1 &0.0226 &0.0392 &0.0769 &0.2345 &0.3427 &0.5144\\ ComiRec-SA&1 &0.0297 &0.0538 &0.1079 &0.1806 &0.2938 &0.4684 &1 &\underline{0.0239} &\underline{0.0409} &0.0752 &\underline{0.2567} &0.3665 &0.5207\\ DSSRec&1 & 
\underline{0.0327}&\underline{0.0578}&\underline{0.1161}&\underline{0.1971}&\underline{0.3064}&\underline{0.4854}&8&\textbf{0.0244}&0.0408&0.0749&0.2558&\underline{0.3704}&\underline{0.5259}\\ \midrule CMI&8 &\textbf{0.0424} &\textbf{0.0717} &\textbf{0.1342} &\textbf{0.2436} &\textbf{0.3612} &\textbf{0.5292} &8 &0.0210 &\textbf{0.0415} &\textbf{0.0877} &\textbf{0.2912} &\textbf{0.4172} &\textbf{0.5744}\\ \midrule Improv.&/ &29.66\%&24.05\%&15.59\%&23.59\%&17.89\%&9.02\% & /&/&1.72\%&17.09\%&13.84\%&12.63\%&9.22\%\\ \bottomrule \end{tabular} \end{center} \end{table*} \subsubsection{Competitors} We choose the following multi-interest recommendation models as competitors. \begin{enumerate} \item \textbf{Octopus} \cite{liu_octopus_2020}: It constructs an elastic archive network to extract diverse interests of users. \item \textbf{MIND} \cite{li_multi-interest_2019}: It adjusts the dynamic routing algorithm of the capsule network to extract multiple interests of users. \item \textbf{ComiRec-DR} \cite{cen_controllable_2020}: It adopts the original dynamic routing algorithm of the capsule network to learn multiple user interests. \item \textbf{ComiRec-SA} \cite{cen_controllable_2020}: It uses a multi-head attention mechanism to capture the multiple interests of users. \item \textbf{DSSRec} \cite{ma_disentangled_2020}: It disentangles multiple user intentions through self-supervised learning. \end{enumerate} To be fair, we do not compare with models that rely on multi-modal information. \subsubsection{Implementation Details} We implement our model with PyTorch, and initialize the parameters with the uniform distribution $U(-\frac{1}{\sqrt{d}}, \frac{1}{\sqrt{d}})$, where $d$ is the dimension of the embeddings. We optimize the model with Adam. The hyper-parameters $\epsilon$, $\tau$ and $\lambda_{cl}$ are searched in $\{1, 0.1, 0.01, 0.001, 0.0001, 0.00001\}$, and we finally set $\epsilon=0.1$, $\tau=0.1$, and $\lambda_{cl}=0.01$.
$\lambda_{orth}$ is searched in $\{15,10,5,1,0.5\}$ and finally set to 10. The sampling ratio $\mu$ is set to 0.5. For the sake of fairness, in all the experiments we set the embedding dimension to 64 and the batch size to 1024. We stop training when Recall@50 on the validation set has not improved for 5 consecutive epochs on the WeChat dataset and for 10 consecutive epochs on the TakaTak dataset. Besides, for MIND, ComiRec-DR, ComiRec-SA, and DSSRec, we use the open-source code released on GitHub\footnote{https://github.com/THUDM/ComiRec}$^,$\footnote{https://github.com/abinashsinha330/DSSRec}. \subsection{Performance Comparison} The experimental results of the performance comparison are shown in Table \ref{tab:perform-cmp}, from which we make the following observations. (1) With few exceptions, the competitors reach their best results when the number of interests is 1, indicating that they cannot effectively capture the multiple interests of a user in micro-videos. (2) The two dynamic-routing-based models, MIND and ComiRec-DR, are not as good as ComiRec-SA and DSSRec. This is probably because MIND and ComiRec-DR do not fully utilize the sequential relationship between historical interactions, whereas ComiRec-SA and DSSRec do. In addition, DSSRec adopts a novel seq2seq training strategy that leverages additional supervision signals, thus obtaining better performance. (3) Octopus performs worst, probably because it aggressively routes every item into exactly one interest from the beginning of training, which easily traps the parameters in a local optimum. (4) On both datasets, CMI far outperforms the competitors on most metrics, which demonstrates that CMI generates recommendations with both high accuracy and excellent coverage. The reason is that we avoid model degeneration by constraining the category embeddings to be orthogonal, and thus extract multiple heterogeneous user interests.
In addition, we achieve positive interaction denoising via contrastive learning, which improves the robustness of the interest embeddings. \subsection{Ablation Study} Setting the number of interests to 8, we evaluate two model variants: CMI-CL and CMI-G, where the former is CMI without the contrastive multi-interest loss and the latter is CMI without the general interest encoder. From Table \ref{tab:ablation}, we find that both variants suffer severe declines in performance. This confirms the feasibility and effectiveness of contrastive learning for multi-interest recommendation, and shows that whether the candidate item matches the general user preferences is also important. \begin{table}[htbp] \caption{Ablation study on WeChat. The values in parentheses are the percentages of decline relative to the original model.} \label{tab:ablation} \begin{center} \setlength\tabcolsep{4pt} \begin{tabular}{l|l|c c c} \toprule & & CMI-CL &CMI-G &CMI\\ \midrule &@10 &0.0390(-8.02\%) &0.0342(-19.34\%) &\textbf{0.0424}\\ Recall&@20 &0.0665(-7.25\%) &0.0589(-17.85\%) &\textbf{0.0717}\\ &@50 &0.1285(-4.25\%) &0.1165(-13.19\%) &\textbf{0.1342}\\ \midrule &@10 &0.2286(-6.16\%) &0.2061(-15.39\%) &\textbf{0.2436}\\ HitRate&@20 &0.3443(-4.68\%) &0.3181(-11.93\%) &\textbf{0.3612}\\ &@50 &0.5188(-1.93\%) &0.4935(-6.71\%) &\textbf{0.5290}\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[htbp] \caption{The effect of the number of interests on WeChat.} \label{tab:eff-ninterest} \begin{center} \setlength\tabcolsep{4pt} \begin{tabular}{l|c|c c c c c} \toprule & \#I.
& 1& 2 & 4 & 8 & 16 \\ \midrule &@10 &0.0303 & 0.0404 &0.0409 &\textbf{0.0428} &0.0412\\ Recall&@20 &0.0530&0.0699 &0.0694 &\textbf{0.0718} &0.0700\\ &@50 &0.1039&0.1343 &0.1333 &\textbf{0.1364} &0.1314\\ \midrule &@10 &0.1969&0.2383 &0.2384 &\textbf{0.2458} &0.2390\\ HitRate&@20 &0.3012&0.3547 &0.3516 &\textbf{0.3587} &0.3557\\ &@50 &0.4646&0.5330 &0.5271 &\textbf{0.5322} &0.5238\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Effect of the Number of Interests} We successively set the number of interests to 1, 2, 4, 8, and 16, and conduct experiments. The experimental results are shown in Table \ref{tab:eff-ninterest}. It can be seen that CMI achieves its best performance when the number of interests is 8 rather than 1. This confirms the necessity and effectiveness of extracting multiple interests in micro-video recommendation scenarios. \section{Conclusion} This paper proposes CMI, a micro-video recommendation model. CMI devises a multi-interest encoder and constructs a contrastive multi-interest loss to achieve positive interaction denoising and improve recommendation performance. The performance of CMI on two micro-video datasets far exceeds that of existing multi-interest models. The results of the ablation study demonstrate that fusing contrastive learning into multi-interest extraction for micro-video recommendation is feasible and effective. \begin{acks} This work was supported by the National Natural Science Foundation of China under Grant No. 62072450 and the 2021 joint project with MX Media. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \setlength\intextsep{10pt} \begin{wrapfigure}{r}{2.5cm} \begin{tikzpicture} \begin{scope \small \node at (0,0) [circle,fill=black, scale = 0.4] (s) {}; \node at (0,-0.25) [] {$o$}; \node at (0,3) [circle,fill=black, scale = 0.4] (t) {}; \node at (0,3.25) [] {$d$}; \node at (0,1.5) [circle,fill=black, scale = 0.4] (m) {}; \draw (s) edge[->,thick, bend right=60] (t); \node at (-0.2,0.75) [] {$b$}; \node at (-0.55,2.25) [] {$\ell$}; \node at (0.2,2.25) [] {$s$}; \node at (1,1.5) [] {$m$}; \draw (s) edge[->,thick] (m); \draw (m) edge[->,thick, bend left=50] (t); \draw (m) edge[->,thick] (t); \end{scope} \end{tikzpicture} \end{wrapfigure} Consider the following situation, where two players want to travel from origin $o$ to destination $d$ in the (extension-parallel) graph on the right. They can take the metro $m$, which takes 6 minutes, or they can take the bike $b$ and then walk either the long (but scenic) route $\ell$, which takes $2$ minutes, or the short route $s$, which takes $1$ minute. There is only one bike: if only one of them takes the bike it takes $3$ minutes; otherwise, someone has to sit on the backseat and it takes them $5$ minutes. Both players want to minimize their own travel time. Suppose they announce their decisions sequentially. There are two possible orders: either the \rowcolor{red} player 1 moves first or the \columncolor{blue} player 2 moves first. We consider the \emph{sequential-move version} of the game where player $1$ moves first. 
There are three possible \textit{subgames} that player 2 may end up in, for which the corresponding game trees are as follows: \begin{center} \hspace*{-.3cm} \scalebox{.95}{ \small \tikzstyle{level 1}=[level distance=1.5cm, sibling distance=1.2cm] \tikzstyle{bag} = [circle, minimum width=5pt,fill, inner sep=0pt] \tikzstyle{end} = [circle, minimum width=5pt,fill, inner sep=0pt] \begin{tikzpicture}[] \node[bag, label=above:{\rowcolor{$b \ell$}}] {} child { node[end, black,label=below: {$(7,7)$}] {} edge from parent[black] node[left] {\columncolor{$b \ell$}} } child { node[end, black, label=below: {$(5,\textbf{6})$}] {} edge from parent[line width=2pt] node[right] {\columncolor{$m$}} } child { node[end, label=below: {$(7,6)$}] {} edge from parent[black] node[right] {\columncolor{$bs$}} }; \end{tikzpicture} \quad \begin{tikzpicture}[] \node[bag, label=above:{\rowcolor{$m$}}] {} child { node[end, black,label=below: {$(6,5)$}] {} edge from parent[black] node[left] {\columncolor{$b \ell$}} } child { node[end, black, label=below: {$(6,6)$}] {} edge from parent[] node[right] {\columncolor{$m$}} } child { node[end, label=below: {$(6,\textbf{4})$}] {} edge from parent[line width=2pt] node[right] {\columncolor{$bs$}} }; \end{tikzpicture} \quad \begin{tikzpicture}[] \node[bag, label=above:{\rowcolor{$bs$}}] {} child { node[end, black,label=below: {$(6,7)$}] {} edge from parent[black] node[left] {\columncolor{$b\ell$}} } child { node[end, black, label=below: {$(4,6)$}] {} edge from parent[black] node[right] {\columncolor{$m$}} } child { node[end, label=below: {$(6,\textbf{6})$}] {} edge from parent[line width=2pt] node[right] {\columncolor{$bs$}} }; \end{tikzpicture} } \end{center} A \textit{strategy} for player 2 is a function $ S_2:\{\rowcolor{b\ell},\rowcolor{bs},\rowcolor{m}\}\to\{\columncolor{b\ell},\columncolor{bs},\columncolor{m}\} $ that tells us which action player 2 plays given the action of player 1. 
Player 2 will always choose an action that minimizes his travel time and may break ties arbitrarily when indifferent. The boldface arcs give a possible subgame-perfect strategy for player 2. If player $2$ fixes this strategy, then player 1 is strictly better off taking the bike and walking the long route (for a travel time of $5$ compared to a travel time of $6$ for the other cases). That is, the only subgame-perfect response for player 1 is $\rowcolor{b\ell}$. This shows that $(\rowcolor{b\ell},\columncolor{m})$ is a \emph{subgame-perfect outcome}. However, this outcome is rather peculiar: Why would player 1 walk the long route if his goal is to arrive as quickly as possible? In fact, the outcome $(\rowcolor{b\ell},\columncolor{m})$ is not \emph{stable}, i.e., it does not correspond to a Nash equilibrium. Subgame-perfect outcomes were introduced as a natural model for farsightedness \cite{leme,selten}, or ``full anticipation'', and have been studied for various types of congestion games \cite{anomalies,bartjasper,jong,leme}. Another well-studied notion in this context is that of \emph{greedy best-response} outcomes \cite{fot06,optimalNEmakespan,economicspaper,optimalNE}: players enter the game one after another and give a best response to the actions already played, thus playing with ``no anticipation''. Fotakis et al. \cite{fot06} proved that greedy best-response leads to stable outcomes on all \emph{series-parallel graphs} (which contain extension-parallel graphs like the one above as a special case). In fact, in the above example both $(\rowcolor{bs},\columncolor{m})$ and $(\rowcolor{bs},\columncolor{bs})$ are greedy best-response outcomes and they are stable. The example thus illustrates that full lookahead may have a negative effect on the stability of the outcomes. After a moment's thought, we realize that in the subgame-perfect outcome the indifference of player 2 is exploited (by breaking ties accordingly) to force player 1 to play a suboptimal action.
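These stability claims are easy to verify mechanically. The following sketch is our illustration (not part of the paper); the resource names and delays follow the description above. It enumerates all outcomes of the bike/metro game and checks which of them are Nash equilibria.

```python
from itertools import product

# Delays of the introductory game: delays[r][x] is the delay of resource r
# when x players use it (b = bike, l = long walk, s = short walk, m = metro).
delays = {"b": {1: 3, 2: 5}, "l": {1: 2, 2: 2},
          "s": {1: 1, 2: 1}, "m": {1: 6, 2: 6}}
actions = {"bl": {"b", "l"}, "bs": {"b", "s"}, "m": {"m"}}

def cost(profile, i):
    """Travel time of player i (profile = tuple of action names)."""
    load = {}
    for a in profile:
        for r in actions[a]:
            load[r] = load.get(r, 0) + 1
    return sum(delays[r][load[r]] for r in actions[profile[i]])

def is_nash(profile):
    """No player can strictly decrease his cost by a unilateral deviation."""
    return all(cost(profile, i) <= cost(profile[:i] + (b,) + profile[i+1:], i)
               for i in range(len(profile)) for b in actions)

nash = [p for p in product(actions, repeat=2) if is_nash(p)]
```

The check confirms the claims above: $(b\ell,m)$ is not stable, while $(bs,m)$ and $(bs,bs)$ (and the permutation $(m,bs)$) are.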
Immediate questions that arise are: Does full lookahead guarantee stable outcomes if we adjust the travel times such that the players are no longer indifferent (i.e., if we make the game \emph{generic})? What is the lookahead that is required to guarantee stable outcomes? What about the inefficiency of these outcomes? In this paper, we address such questions. A well-studied playing technique for chess introduced by Shannon \cite{shan} is to expand the game tree up to a fixed level, use an evaluation function to assign values to the leaves and then perform backward induction to decide which move to make. Based on this idea, we introduce \emph{$k$-lookahead outcomes} as outcomes that arise when every player uses such a strategy with $k$ levels of backward induction. The motivation for our studies is based on the observation that such limited lookahead strategies are played in many scenarios. In fact, there is also experimental evidence (see, e.g., \cite{lamotivation}) that humans perform limited backward induction rather than behaving subgame-perfectly or myopically (i.e., $1<k<n$). In general, our notion of $k$-lookahead outcomes can be applied to any game that admits a natural evaluation function for partial outcomes (details will be provided in the full version of this paper). In this paper, we demonstrate the applicability of our novel notion by focussing on congestion games. These games admit a natural evaluation function by assigning a partial outcome its current cost, i.e., the cost it would have if the game would end at that point. \paragraph{Our model.} We introduce \emph{$k$-lookahead outcomes} as a novel solution concept where players enter the game sequentially (according to some arbitrary order) and anticipate the effect of $k-1$ subsequent decisions. 
In a \emph{$k$-lookahead outcome} $A = (A_1, \dots, A_n)$, the $i$th player with $i \ge 1$ computes a subgame-perfect outcome in the subgame induced by $(A_1, \dots, A_{i-1})$ with $\min\{k,n-i+1\}$ players (according to the order) and chooses his corresponding action. Our model interpolates between outcomes of greedy best-response ($k=1$) and subgame-perfect outcomes ($k=n$, the number of players). Our main goal is to understand the effect that different degrees of anticipation have on the stability and inefficiency of the resulting outcomes in congestion games. We combine limited backward induction with the approach of Paes Leme et al.~\cite{leme}, who proposed to study the inefficiency of subgame-perfect outcomes. Subgame-perfect outcomes have several drawbacks as a model for anticipating players, which are overcome (at least to some extent) by considering $k$-lookahead outcomes instead: \begin{enumerate} \item \textit{Computational complexity}. Computing a subgame-perfect outcome in a congestion game is PSPACE-complete \cite{leme} and it is NP-hard already for 2-player symmetric network congestion games with linear delay functions \cite{bartjasper}. This adds to the general discussion that if a subgame-perfect outcome cannot be computed efficiently, then its credibility as a means of predicting actual outcomes is questionable. On the other hand, computing a $k$-lookahead outcome for constant $k$ can be done efficiently by backward induction. \item \textit{Limited information}. Due to lack of information, players might be forced to perform limited backward induction. Note that in order to expand the full game tree, a player needs to know his successors, the actions that these successors can choose, and their respective preferences. In practice, however, this information is often available only for the first few successors. \item \textit{Clairvoyant tie-breaking}. Players may be unable to play subgame-perfectly, unless some clairvoyant tie-breaking rule is implemented.
To see this, consider the example introduced above. Note that in the subgame induced by $\rowcolor{b\ell}$ (as well as $\rowcolor{bs}$) player $2$ is indifferent between playing $\columncolor{m}$ and $\columncolor{bs}$. Thus, player 1 has no way to play subgame-perfectly with certainty: in order to do so he will need to correctly guess how player 2 is going to break ties. Such clairvoyant tie-breaking is not required for reaching a $k$-lookahead outcome. \end{enumerate} \paragraph{Our results.} We study the efficiency and stability of $k$-lookahead outcomes. We call an outcome stable if it is a Nash equilibrium. In order to assess the inefficiency of $k$-lookahead outcomes, we introduce the \emph{$k$-Lookahead Price of Anarchy ($k$-$\textsf{LPoA}$)} which generalizes both the standard Price of Anarchy \cite{kouts} and the Sequential Price of Anarchy \cite{leme} (see below for formal definitions). Quantifying the $k$-Lookahead Price of Anarchy is a challenging task in general. In fact, even for the Sequential Price of Anarchy (i.e., for $k = n$) no general techniques are known in the literature. In this paper, we mainly focus on characterizing when $k$-lookahead outcomes correspond to stable outcomes. As a result, our findings enable us to characterize when the $k$-Lookahead Price of Anarchy coincides with the Price of Anarchy. We show that this correspondence holds for congestion games that are structurally simple (i.e., symmetric congestion games on extension-parallel graphs), called \emph{simple} below. Further, a common trend in our findings is that the stability of $k$-lookahead outcomes crucially depends on whether players do not or do have to resolve ties (generic vs.~non-generic games). Our main findings in this paper are as follows: \begin{enumerate} \item We show that for generic simple congestion games the set of $k$-lookahead outcomes coincides with the set of Nash equilibria for all levels of lookahead $k$. 
As a consequence, we obtain that the $k$-\textsf{LPoA}\ coincides with the Price of Anarchy (independently of $k$), showing that increased anticipation does not reduce the (worst-case) inefficiency. On the other hand, we show that only full anticipation guarantees the first player the smallest cost, so that anticipation might be beneficial after all. We also show that the above equivalence does not extend beyond the class of simple congestion games. (These results are presented in Section 3.) \item For non-generic simple congestion games, subgame-perfect outcomes may be unstable (as the introductory example shows), but we prove that they have optimal egalitarian social cost. For the more general class of series-parallel graphs, we prove that the congestion vectors of 1-lookahead outcomes coincide with those of global optima of Rosenthal's potential function. In particular, this implies that the $1$-$\textsf{LPoA}$ is bounded by the Price of Stability. (See Section 3.) \item We also study cost-sharing games and consensus games (see below for definitions). For consensus games, subgame-perfect outcomes may be unstable. In contrast, if players break their ties consistently, then all $k$-lookahead outcomes are optimal. Similarly, for non-generic cost-sharing games we show that even in the symmetric singleton case subgame-perfect outcomes may be unstable. For both symmetric and singleton games this can be resolved by removing the ties. We also observe a threshold effect with respect to the anticipation level. For generic symmetric cost-sharing games, we show that $k$-lookahead outcomes are stable but guaranteed to be optimal only for $k=n$. For affine delay functions, the $k$-$\textsf{LPoA}$ is non-increasing (i.e., the efficiency improves with the anticipation level). For generic singleton cost-sharing games, $k$-lookahead outcomes are only guaranteed to be stable for $k=n$. (These results can be found in Section 4.)
\end{enumerate} \paragraph{Related work.} The idea of limited backward induction dates back to the 1950s \cite{shan} and several researchers in artificial intelligence (see, e.g., \cite{lookaheadAI}) investigated it in a game-theoretic setting. Mirrokni et al. \cite{lookaheadVetta} introduce \textit{$k$-lookahead equilibria} that incorporate various levels of anticipation as well. Their motivation for introducing these equilibria is very similar to ours, namely to provide an accurate model for actual game play. However, their $1$-lookahead equilibria correspond to Nash equilibria rather than greedy best-response outcomes and none of the equilibria correspond to subgame-perfect outcomes. Moreover, lookahead equilibria are not guaranteed to exist. For example, Bilo et al. \cite{lookaheadBilo} show that symmetric singleton congestion games do not always admit 2-lookahead equilibria (for the ``average-case model''). Subgame-perfect outcomes are special cases of $n$-lookahead outcomes. Paes Leme et al.~\cite{leme} generalize the Price of Anarchy notion to subgame-perfect outcomes and show that the \emph{Sequential Price of Anarchy} can be much lower than the Price of Anarchy if the game is generic. On the other hand, this does not necessarily hold if the game is non-generic (see, e.g., \cite{anomalies,bartjasper,crowdgames}). \section{Lookahead outcomes} \label{sec:klo} We formally define congestion games and introduce some standard notation. We then introduce our notion of $k$-lookahead outcomes and the inefficiency measures studied in this paper. Finally, we comment on the impact of ties and different player orders in these games. 
\paragraph{Congestion games.} A \textit{congestion game} is a tuple $G = (\N, R,(\mathcal{A}_i)_{i\in \N},(d_r)_{r\in R})$ where $\N=[n]$ is a finite set of players, $R$ a finite set of \textit{resources}, $\mathcal{A}_i\subseteq 2^R$ the \emph{action set} of player $i$, and $d_r:\mathbb{N} \to \R_{\geq 0}$ a \textit{delay function} for every resource $r\in R$.\footnote{We use $[n]$ to denote the set $\{1,\dots,n\}$, where $n$ is a natural number.} Unless stated otherwise, we assume that $d_r$ is non-decreasing. We define $\mathcal{O} = \prod_{i \in \N} \mathcal{A}_i$ as the set of \emph{action profiles} or \emph{outcomes} of the game. Given an outcome $A\in \mathcal{O}$, the \textit{congestion vector} $x(A)=(x(A)_r)_{r\in R}$ specifies the number of players picking each resource, i.e., $x(A)_r = |\{i\in \N: r\in A_i\}|$. The cost function $c_i$ of player $i\in \N$ is given by $c_i(A)=\sum_{r\in A_i} d_r(x(A)_r)$. We call a congestion game \textit{symmetric} if $\mathcal{A}_i=\mathcal{A}_j = \mathcal{A}$ for all $i, j\in \N$. We say that $A_i$ is a \textit{best response} to $A_{-i}$ if $c_i(A_i, A_{-i}) \le c_i(B_i, A_{-i})$ for all $B_i\in \mathcal{A}_i$.\footnote{We use the standard notation $A_{-i}=(A_1,\dots,A_{i-1},A_{i+1},\dots,A_n)$ and $(B_i,A_{-i})=(A_1,\dots,A_{i-1},B_i,A_{i+1},\dots,A_n)$.} An outcome $A$ of a congestion game $G$ is a \emph{(pure) Nash equilibrium (NE)} if for all $i\in \N$, $A_i$ is a best response to $A_{-i}$. We use $\text{$\mathbb{NE}$}(G)$ to denote the set of all Nash equilibria of $G$. An \textit{order} on the players is a bijection $\sigma:\N\to[n]$. We denote the sequential-move version of a game $G$ with respect to order $\sigma$ by $G^\sigma$. The outcome on the equilibrium path of a subgame-perfect equilibrium in $G^\sigma$ is an action profile of $G$ and we refer to it as the \textit{subgame-perfect outcome (SPO)}. 
We use $\mathbb{SPO}(G)$ to refer to the set of all subgame-perfect outcomes (with respect to any order of the players) of a game $G$. \paragraph{$k$-lookahead outcomes.} Let $G=(\N,R,(\mathcal{A}_i)_{i\in \N},(d_r)_{r\in R})$ be a congestion game and $\sigma$ an order on the players. For $k\in[n]$, define $G^k(\sigma) =(\N',R,(\mathcal{A}_i)_{i\in \N'},(d_r)_{r\in R})$ as the congestion game with $\N'=\sigma^{-1}(\{1,\dots,k\})$ which we obtain from $G$ if only the first $k$ players (according to $\sigma$) play. Let $G^k := G^k(\Id_{[n]})$, where $\Id_{[n]}$ is the identity order (i.e., $\Id_{[n]}(i) = i$). For notational convenience, for $k>n$ we set $G^k(\sigma):=G^n(\sigma)=G$. Further, if the order $\sigma$ is defined on a larger domain than the player set of $G$, we define $G^k(\sigma):=G^k(\tau)$ for $\tau:\N\to[n]$ the unique bijection satisfying $\tau(i)<\tau(j)$ iff $\sigma(i)<\sigma(j)$ for all $i,j\in \N$. \begin{definition} Let $G$ be an $n$-player congestion game and let $k\in [n]$. An action profile $A$ is a \textbf{$k$-lookahead outcome} of $G$ if there exists an order $\sigma$ on the players so that for each $i\in \N$ we have that $A_{i}$ equals the action $B_i$ played by player $i$ in some subgame-perfect outcome $B$ of $(G')^{k}(\sigma)$ that corresponds to the order $\sigma$, where $G'$ is the subgame of $G$ induced by $(A_j)_{\sigma(j)<\sigma(i)}$.\footnote{Note that we would need to write $(G')^{\min\{k,n-\sigma(i)+1\}}$ instead of $(G')^{k}$ without our assumption that $G^\ell=G^n$ for $\ell>n$.} \end{definition} We say a $k$-lookahead outcome corresponds to the order $\tau$ if $\tau$ can be used as the order $\sigma$ in the definition above. We also define a $k$-lookahead outcome for $k>n$ as an $n$-lookahead outcome. We use $k\text{-}\mathbb{LO}(G)$ to denote the set of all $k$-lookahead outcomes of a game $G$.
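To make the definition concrete, it can be implemented directly by backward induction. The sketch below is our illustration (not from the paper): it encodes the introductory bike/metro game and computes $k$-lookahead outcomes, with ties broken deterministically by a fixed action order (so the adversarial tie-breaking behind the unstable outcome $(b\ell,m)$ of the introduction does not occur here).

```python
# The introductory game: b = bike, l = long walk, s = short walk, m = metro.
delays = {"b": {1: 3, 2: 5}, "l": {1: 2, 2: 2},
          "s": {1: 1, 2: 1}, "m": {1: 6, 2: 6}}
actions = {"bl": {"b", "l"}, "bs": {"b", "s"}, "m": {"m"}}

def cost(profile, i):
    """Cost of player i under a profile of action names."""
    load = {}
    for a in profile:
        for r in actions[a]:
            load[r] = load.get(r, 0) + 1
    return sum(delays[r][load[r]] for r in actions[profile[i]])

def spo(prefix, n):
    """A subgame-perfect outcome of the sequential n-player game, given the
    actions already fixed in `prefix`; `min` breaks ties by action order."""
    i = len(prefix)
    if i == n:
        return prefix
    return min((spo(prefix + (a,), n) for a in actions),
               key=lambda outcome: cost(outcome, i))

def k_lookahead(n, k):
    """Player i plays his action from a subgame-perfect outcome of the game
    truncated to the next min(k, n - i) players (himself included)."""
    prefix = ()
    for i in range(n):
        prefix += (spo(prefix, min(i + k, n))[i],)
    return prefix
```

With this tie-breaking, both `k_lookahead(2, 1)` and `k_lookahead(2, 2)` return $(bs,bs)$, one of the stable outcomes of the introductory example.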
Assuming $\sigma=\Id_{[n]}$ for ease of notation, $A$ is a $k$-lookahead outcome (corresponding to the identity) if and only if $A_1$ is the action played by the first player in a subgame-perfect outcome (corresponding to the identity) of $G^k$ and $A_{-1}$ is a $k$-lookahead outcome (corresponding to the identity) in the game induced by $(A_1)$. \begin{example}\label{ex:1} Let $G$ be a congestion game with $R=\{r,s,t\}$, $\mathcal{A}_1=\{r,s\}$, $\mathcal{A}_2=\{s,t\}$ and $\mathcal{A}_3=\{t\}$. Let the delay functions be given by $d_r(x)=2x, ~d_s(x)=1.5x, ~d_t(x)=2x.$ Suppose the players enter the game in the order $1,2,3$ and each anticipates his own decision and that of the next player ($k=2$). Player 1 then computes the unique subgame-perfect outcome of $G^2$, depicted on the left in Figure~\ref{fig:1}. Hence he chooses the resource $s$. This brings player 2 into the subgame whose game tree is depicted on the right in Figure~\ref{fig:1}. His unique subgame-perfect choice is $s$. This shows that the only 2-lookahead outcome corresponding to the identity of this game is $(s,s,t)$.
\end{example} \begin{figure}[t] \begin{center} \scalebox{0.95}{\small \tikzstyle{level 1}=[level distance=1cm, sibling distance=2.5cm] \tikzstyle{level 2}=[level distance=1cm, sibling distance=1.25cm] \tikzstyle{bag} = [circle, minimum width=5pt,fill, inner sep=0pt] \tikzstyle{end} = [circle, minimum width=5pt,fill, inner sep=0pt] \hspace*{-.3cm} \begin{tikzpicture}[] \node[bag] {} child { node[bag] {} child { node[end, black,label=below: {$(2,1.5)$}] {} edge from parent[very thick] node[left] {$\columncolor{s}$} } child { node[end, black, label=below: {$(2,2)$}] {} edge from parent[thin] node[right] {$\columncolor{t}$} } edge from parent[thin] node[left] {$\rowcolor{r}$} } child { node[bag] {} child { node[end, label=below: {$(3,3)$}] {} edge from parent[thin] node[left] {$\columncolor{s}$} } child { node[end, label=below: {$(1.5,2)$}] {} edge from parent[very thick] node[right] {$\columncolor{t}$} } edge from parent[very thick] node[right] {$\rowcolor{s}$} }; \end{tikzpicture} \qquad \begin{tikzpicture}[] \node[bag] {} child { node[bag] {} child { node[end, black,label=below: {$(3,2)$}] {} edge from parent[very thick] node[left] {${t}$} } edge from parent[very thick] node[left] {$\columncolor{s}$} } child { node[bag] {} child { node[end, label=below: {$(4,4)$}] {} edge from parent[very thick] node[right] {${t}$} } edge from parent[thin] node[right] {$\columncolor{t}$} }; \end{tikzpicture} } \end{center} \vspace*{-.2cm} \caption{Illustration of game trees for Example~\ref{ex:1}.}\label{fig:1} \end{figure} \paragraph{Inefficiency of lookahead outcomes.} We introduce the \emph{$k$-Lookahead Price of Anarchy ($k$-\textsf{LPoA})} to study the efficiency of $k$-lookahead outcomes, which generalizes both the Price of Anarchy \cite{kouts} and the Sequential Price of Anarchy \cite{leme}. 
We consider two social cost functions in this paper: Given an outcome $A$, the \emph{utilitarian social cost} is defined as $SC(A)=\sum_{i\in \N}c_i(A)$; the \emph{egalitarian social cost} is given by $W_A=\max_{i\in \N}c_i(A)$. Given a game $G$, let $A^*$ refer to an outcome of minimum social cost. The \textit{$k$-Lookahead Price of Anarchy ($k$-\textsf{LPoA})} of a congestion game $G$ is \begin{equation}\label{eq:LPoA} k\text{-}\textsf{LPoA}(G) = \max_{A\in k\text{-}\mathbb{LO}(G)} \frac{\sum_{i\in \N}c_i(A)}{\sum_{i\in \N}c_i(A^*)}. \end{equation} Here the $k$-\textsf{LPoA}\ is defined for the utilitarian social cost; it is defined analogously for the egalitarian social cost. Recall that the \emph{Price of Anarchy (\textsf{PoA})} and the \emph{Sequential Price of Anarchy (\textsf{SPoA})} refer to the same ratio as in \eqref{eq:LPoA} but replacing ``$k\text{-}\mathbb{LO}(G)$'' by ``$\text{$\mathbb{NE}$}(G)$'' and ``$\mathbb{SPO}(G)$'', respectively. The \emph{Price of Stability (\textsf{PoS})} refers to the same ratio, but minimizing over the set of all Nash equilibria. \paragraph{Curse of ties.} When studying sequential-move versions of games, results can be quite different depending on whether or not players have to resolve ties. We next introduce two notions to avoid/resolve ties. We first introduce the notion of a \emph{generic} congestion game. Intuitively, this means that every player has a unique preference among all available actions. \begin{definition} \label{def:genericgame} A congestion game $G$ is \textbf{generic} if for all $N\subseteq \N$, $A,B\in\prod_{i\in N} \mathcal{A}_i$ and $j\in N$, $A_j\neq B_j$ implies $c_j(A) \neq c_j(B)$. \end{definition} Note that if $G$ is generic, then $G^k(\sigma)$ is also generic for every order $\sigma$ of the players and every $k\in \mathbb{N}$, and every induced subgame is generic as well.
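Definition~\ref{def:genericgame} quantifies over all player subsets and profile pairs, so for small finite games it can be checked by brute force. The sketch below is our illustration, instantiated with the singleton game of Example~\ref{ex:1} (which is generic).

```python
from itertools import combinations, product

# Example 1: singleton actions over resources r, s, t with delays 2x, 1.5x, 2x.
delay = {"r": lambda x: 2 * x, "s": lambda x: 1.5 * x, "t": lambda x: 2 * x}
action_sets = [["r", "s"], ["s", "t"], ["t"]]

def cost(profile, pos):
    """Cost of the player at position `pos` when the participating players
    play `profile` (each action is a single resource here)."""
    load = {}
    for a in profile:
        load[a] = load.get(a, 0) + 1
    return delay[profile[pos]](load[profile[pos]])

def is_generic(n):
    """Definition of generic games: for every subset N of players, every
    pair of profiles A, B over N and every player position, differing
    actions must imply differing costs for that player."""
    for size in range(1, n + 1):
        for N in combinations(range(n), size):
            profiles = list(product(*(action_sets[i] for i in N)))
            for A, B in product(profiles, repeat=2):
                for pos in range(size):
                    if A[pos] != B[pos] and cost(A, pos) == cost(B, pos):
                        return False
    return True
```

Here `is_generic(3)` returns `True`; changing $d_s$ to $2x$ makes player 1 indifferent between $r$ and $s$, and the check fails.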
Hence if $A$ is the unique subgame-perfect outcome (say, with respect to the identity) of $G$, then the subgame $G'$ induced by $(A_1)$ is again generic, so that $(A_2,\dots,A_n)$ must be the \emph{only} subgame-perfect outcome (with respect to the identity) of $G'$. By induction, it then follows that for generic games there is a unique $n$-lookahead outcome, which is equal to the unique subgame-perfect outcome. For non-generic games, each subgame-perfect outcome is an $n$-lookahead outcome, but the reverse may be false. To see this, consider a congestion game with $R=\{r,s,t\}$, $\mathcal{A}_1=\{r,s\}$ and $\mathcal{A}_2=\{s,t\}$. Let $d_r(x_r)=2x_r$, $d_s(x_s)=2x_s$ and $d_t(x_t)=4$. Then $(s,t)$ is a subgame-perfect outcome, corresponding to the SPE depicted to the left below, so player 1 may choose $s$ in an $n$-lookahead outcome. \bigskip \begin{center} \scalebox{0.95}{\small \tikzstyle{level 1}=[level distance=1cm, sibling distance=3cm] \tikzstyle{level 2}=[level distance=1cm, sibling distance=1.5cm] \tikzstyle{bag} = [circle, minimum width=5pt,fill, inner sep=0pt] \tikzstyle{end} = [circle, minimum width=5pt,fill, inner sep=0pt] \begin{tikzpicture}[] \node[bag] {} child { node[bag] {} child { node[end, black,label=below: {$(2,2)$}] {} edge from parent[very thick] node[left] {$\columncolor{s}$} } child { node[end, black, label=below: {$(2,4)$}] {} edge from parent[thin] node[right] {$\columncolor{t}$} } edge from parent[thin] node[left] {$\rowcolor{r}$} } child { node[bag] {} child { node[end, label=below: {$(4,4)$}] {} edge from parent[thin] node[left] {$\columncolor{s}$} } child { node[end, label=below: {$(2,4)$}] {} edge from parent[very thick] node[right] {$\columncolor{t}$} } edge from parent[very thick] node[right] {$\rowcolor{s}$} }; \end{tikzpicture}\qquad \tikzstyle{level 1}=[level distance=1cm, sibling distance=1.5cm] \begin{tikzpicture}[] \node[bag] {} child { node[end, black,label=below: {$(4)$}] {} edge from parent[very thick] node[left]
{$\columncolor{s}$} } child { node[end, black, label=below: {$(4)$}] {} edge from parent[thin] node[right] {$\columncolor{t}$} }; \end{tikzpicture} } \end{center} However, in the subgame induced by $(s)$, whose game tree is depicted to the right, both $s$ and $t$ are subgame-perfect (player 2 is indifferent), so player 2 may respond $s$, giving the $n$-lookahead outcome $(s,s)$. This is not a subgame-perfect outcome. Rather than assuming that no ties exist, we can also restrict the definition of a subgame-perfect outcome: A \textit{tie-breaking rule} for player $i$ is a strict total order $\succ_i$ on the action set $\mathcal{A}_i$. If player $i$ adopts the tie-breaking rule $\succ_i$, then $A_i$ is a best response to $A_{-i}$ only if $c_i(A_i,A_{-i})<c_i(B,A_{-i}) \text{ for all } B\in \mathcal{A}_i \text{ such that } B\succ_i A_i.$ In particular, $\succ_i$ ensures that player $i$ has a unique best response. As a result, there is a unique subgame-perfect outcome for each tie-breaking rule and order of the players. For symmetric congestion games, we can consider the special case of a \textit{common tie-breaking rule} $\succ$ on $\mathcal{A}$ that all players adopt. \paragraph{Effect of the player order.} Whether all subgame-perfect outcomes are stable may depend on the order of the players. For example, for consensus games all subgame-perfect outcomes corresponding to a \emph{tree-respecting order} are stable (see Example \ref{ex:treeorder}). Whenever we make a claim such as ``all subgame-perfect outcomes are stable'', this should be read as ``subgame-perfect outcomes are stable \emph{for all orders}''. \begin{theorem} If $G$ is a symmetric congestion game and $A$ a $k$-lookahead outcome with respect to $\Id_{[n]}$, then $(A_{\sigma(1)},\dots, A_{\sigma(n)})$ is a $k$-lookahead outcome with respect to $\sigma$ for every order $\sigma$. \end{theorem} \begin{proof} Let $A$ be a $k$-lookahead outcome with respect to $\Id_{[n]}$.
Then $A_1$ is the first action of some SPO $B$ with respect to the order $\Id_{[n]}$ in $G^{k}(\Id_{[n]})$. Let $S$ be a subgame-perfect equilibrium inducing the SPO $B$. In the sequential-move version $(G^k(\sigma))^{\sigma}$, the root node is a decision node for player $i=\sigma^{-1}(1)$ and $S_1$ is a valid strategy for player $i$ (since the game is symmetric). Similarly, $S_{2},\dots,S_k$ are valid strategies for $\sigma^{-1}(2),\dots,\sigma^{-1}(k)$. In fact, the game tree of $(G^k)^{\Id}$ is the same as the game tree of $(G^k(\sigma))^{\sigma}$ up to a relabelling of the players. This means that in both games we verify the same inequalities when determining whether $S$ is a subgame-perfect equilibrium. Hence $S$ is a subgame-perfect equilibrium in $(G^k(\sigma))^{\sigma}$ as well. In the corresponding subgame-perfect outcome of $G^k(\sigma)$, player $i=\sigma^{-1}(1)$ plays $B_1=A_1=A_{\sigma(i)}$. A similar argument shows that in the subgame $G'$ induced by $(A_1)$, the action $A_2$ is the action of $\sigma^{-1}(2)$ in a subgame-perfect outcome of $(G')^k(\sigma)$. \end{proof} This theorem allows us to assume that the order is the identity when proving stability for symmetric games. Moreover, any permutation of a $k$-lookahead outcome is again a $k$-lookahead outcome, which is a useful fact that we exploit in the proofs below. \medskip \noindent Due to lack of space, some of the proofs are omitted from the main text below and will be provided in the full version of the paper. \section{Symmetric network congestion games} \label{sec:sncg} A \textit{single-commodity network} is a directed multigraph $\Gamma=(V,E)$ with two special vertices $o,d\in V$ such that each arc $a\in E$ is on at least one directed $o,d$-path. In a \textit{symmetric network congestion game (SNCG)}, the common set of actions $\mathcal{A}$ is given by the set of all directed $o,d$-paths in a single-commodity network $\Gamma=(V,R)$.
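Since the networks considered below are acyclic, the common action set (the set of directed $o,d$-paths) can be enumerated by a simple recursion. The sketch below is our illustration for the extension-parallel network of the introduction, with arcs labelled by its resources.

```python
# Arcs as (tail, head, resource) triples; b = bike (o -> v),
# l / s = long / short walk (v -> d), m = metro (o -> d).
arcs = [("o", "v", "b"), ("v", "d", "l"), ("v", "d", "s"), ("o", "d", "m")]

def od_paths(node="o", dest="d"):
    """All directed o,d-paths as tuples of resource names.
    The recursion terminates because the network is acyclic."""
    if node == dest:
        return [()]
    return [(r,) + rest
            for (tail, head, r) in arcs if tail == node
            for rest in od_paths(head, dest)]

paths = od_paths()  # the common action set of the symmetric game
```

This yields the three actions $b\ell$, $bs$ and $m$ from the introduction; on extension-parallel networks such path sets satisfy the nested-intersections property stated next.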
A \textit{series-parallel graph (SP-graph)} either consists of (i) a single arc, or (ii) two series-parallel graphs composed in parallel or in series. An \textit{extension-parallel graph (EP-graph)} either consists of (i) a single arc, (ii) two extension-parallel graphs in parallel, or (iii) a single arc in series with an extension-parallel graph. In our proofs we exploit the following equivalences: \begin{lemma}[Nested intersections property] The common action set $\mathcal{A}$ of each SNCG on an EP-graph satisfies the following three equivalent properties: \begin{enumerate} \item For all distinct $A,B,C \in \mathcal{A}$ either $A\cap B\subseteq A\cap C$ or $A \cap B \supseteq A \cap C$. \item For all distinct $A,B,C \in \mathcal{A}$, $A\cap B \not \subset C$ implies $A \cap C = B\cap C = A\cap B\cap C$. \item There is no \emph{bad configuration} (see \cite{strongeq}), i.e., for all distinct $A,B,C \in \mathcal{A}$, $A \cap \left( C \setminus B\right) =\emptyset$ or $A \cap \left( B \setminus C \right) =\emptyset$. \end{enumerate} \end{lemma} The non-trivial claim, that an SNCG has no bad configuration if and only if its network is extension-parallel, is shown by Milchtaich \cite{milnetw}. Fotakis et al. \cite{fot06} show that each 1-lookahead outcome is a Nash equilibrium for SNCGs on SP-graphs. We prove that the converse also holds for EP-graphs. \begin{theorem} \label{thm:NEisBRP} For every SNCG on an EP-graph, the set of 1-lookahead outcomes coincides with the set of Nash equilibria. \end{theorem} \begin{proof} It remains to show that each Nash equilibrium of $G$ is a permutation of a 1-lookahead outcome corresponding to the identity. Consider an NE $A$ with corresponding congestion vector $x$. Let $S_1$ be the set of actions costing the least for the first player, that is, those minimising $\sum_{r\in P}d_r(1)$. This corresponds to the set of NE of the 1-player game. Assume towards a contradiction that no player plays an action from $S_1$. Let $P\in S_1$.
Since $A$ is an NE, $P$ must have become more expensive as players joined the game, so some $A_j$ overlaps with it. Pick $j$ with $|P\cap A_j|$ maximal. Then by the nested intersections property, no $A_k$ intersects with $P\setminus A_j$. Hence $x_r=0$ for any $r\in P\setminus A_j$ (these resources are chosen by no one). We find $$ \sum_{r\in P\setminus A_j }d_r(x_r+1)=\sum_{r\in P\setminus A_j }d_r(1)<\sum_{r\in A_j \setminus P }d_r(1) \leq \sum_{r\in A_j \setminus P }d_r(x_r) $$ using that $A_j\not\in S_1$ and $P\in S_1$. This contradicts the fact that $A$ is an NE. Let $j$ be a player picking an action from $S_1$, so that $(A_j)$ forms a 1-lookahead outcome. Define $\sigma(j)=1$. Suppose we have defined $\sigma$ so that $A'= (A_{\sigma^{-1}(1)},\dots,A_{\sigma^{-1}(i)})$ forms a 1-lookahead outcome for some $1\leq i<n$. The profile $$ A''=A\setminus A'= (A_{j})_{j\in \sigma^{-1}\{i+1,\dots,n\}} $$ forms an NE in the game $G'$ induced by $A'$. Repeat the argument above for $A''$ and $G'$ to define $\sigma^{-1}(i+1)$. \end{proof} \subsection{Stability and inefficiency of generic games} As shown in the introduction, SPOs are not guaranteed to be stable for SNCGs on EP-graphs. However, stability is guaranteed if the game is generic. \begin{theorem} \label{thm:spoep} For every generic SNCG on an EP-graph, the set of subgame-perfect outcomes coincides with the set of Nash equilibria. \end{theorem} \begin{proof} Because the game is generic, there is a unique 1-lookahead outcome (up to permutation) and hence a unique NE: the game $G^1$ is generic and thus there is a unique cheapest first path $B_1$. The game $G^2$ is generic and hence so is the subgame of $G^2$ induced by $(B_1)$. Thus there is a unique cheapest second path, and so on. We prove the statement by induction on the number of players $n$. The claim is true for $n=1$. Suppose the claim holds for all generic SNCGs on EP-graphs with fewer than $n$ players and let $G$ be an SNCG on an EP-graph with $n$ players.
Let $A$ be an SPO and $S$ the corresponding SPE, corresponding to some order $\sigma$ which we may assume to be the identity by relabelling the players. Given an action $P$ of player 1, $S$ prescribes an SPE $S'$ in the game induced by $(P)$. By our induction hypothesis (the subgame induced by $(P)$ has $n-1$ players), the SPO corresponding to $S'$ is a permutation of a 1-lookahead outcome in this subgame. In particular, if $B$ is the unique 1-lookahead outcome of the game $G$ and player 1 plays $B_i$, then the resulting outcome according to $S$ is a permutation of $B$. Suppose towards a contradiction that $A_1\neq B_i$ for all $i$. By induction $A_{-1}$ is an NE in the subgame induced by $(A_1)$ (by the same argument as before). Since $A$ is then not a permutation of $B$ and hence not a Nash equilibrium, while all players except 1 are playing a best response, there is an action $P$ such that $c_1(P,A_{-1})<c_1(A)$. If player 1 switches to $P$, then the other players are still playing a best response by \cite[Lemma 1]{fotakis}. So $(P,A_{-1})$ is an NE, hence a permutation of $B$. This means that $P=B_i$ for some $i$ (and $A_{-1}$ some permutation of $B_{-i}$), which yields a contradiction: $c_1(B_i,B_{-i})=c_1(P,A_{-1})<c_1(A_1,A_{-1})\leq c_1(B_i,B_{-i})$, where the last inequality follows because $A_1$ is subgame-perfect for player 1. \end{proof} \begin{theorem} \label{thm:corNE} For every SNCG on an EP-graph, each Nash equilibrium is a subgame-perfect outcome. \end{theorem} \begin{proof}[Sketch] Call game $G'$ (with delay functions $d'_r$) \textit{close} to $G$ if for all paths $P\neq Q$ and every congestion vector $(x_r)$ $$ \sum_{r\in P} d_r(x_r) < \sum_{r\in Q} d_r(x_r) \implies \sum_{r\in P} d'_r(x_r) \leq \sum_{r\in Q} d'_r(x_r). $$ Suppose $G$ is given together with a Nash equilibrium $A$. If $G'$ is a generic game close to $G$ which also has $A$ as a Nash equilibrium, then by Theorem \ref{thm:spoep} this is the unique subgame-perfect outcome of $G'$.
The close condition above (introduced by Milchtaich \cite{crowdgames}) ensures that $A$ is then also a subgame-perfect outcome in $G$. It remains to find a close generic game, which can be done by adjusting the delay functions of $G$; each path contains a resource that it shares with no other path \cite{milnetw}, so we can increase the cost of individual paths without affecting the costs of the other paths. \end{proof} The following theorem is the main result of this section. \begin{theorem} \label{thm:kLOforSNCG} Let $G$ be a generic SNCG on an EP-graph. Then for every $k$ the set of $k$-lookahead outcomes coincides with the set of Nash equilibria. As a consequence, $k$-$\textsf{LPoA}(G)=\textsf{PoA}(G)$. \end{theorem} \begin{proof} Since there is a unique Nash equilibrium and $k$-lookahead outcome up to permutation, it suffices to show that each $k$-lookahead outcome is a Nash equilibrium. By relabelling the players, we can assume that the order of the players is the identity. Let $B$ denote the unique 1-lookahead outcome of $G$ corresponding to the identity. Then $G^k$ has unique 1-lookahead outcome $(B_1,\dots,B_k)$. Let $A$ be a $k$-lookahead outcome. Since each SPO of $G^k$ is a permutation of a 1-lookahead outcome, we know that $A_1\in \{B_1,\dots,B_k\}$. This implies that the subgame of $G^{k+1}$ induced by $(A_1)$ has some permutation of $(B_1,\dots,B_{k+1})\setminus (A_1)$ as unique 1-lookahead outcome (where we perform the set operations viewing the tuples as multisets). This means that $A_2\in (B_1,\dots,B_{k+1})\setminus (A_1)$. Continuing this way, we see that $A_i \in (B_1,\dots,B_{k+i-1})\setminus (A_1,\dots,A_{i-1})$ for $i=1,\dots, n-k$ and for $i>n-k$ we find $A_i \in B\setminus (A_1,\dots,A_{i-1})$. Hence $A$ is a permutation of $B$. \end{proof} The result of Theorem~\ref{thm:kLOforSNCG} does not extend to series-parallel graphs.
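Separations like this are easy to check by brute force on small instances: a 1-lookahead outcome is computed greedily, and an $n$-lookahead (subgame-perfect) outcome by backward induction over all play sequences. The Python sketch below is illustrative only; it hard-codes the four-path series-parallel network and the delay functions from the proof of the proposition that follows (the game is generic, so all best responses are unique):

```python
from collections import Counter

# Delays and paths of the four-arc series-parallel network used in the
# proposition below: d_a(k) is encoded as delays[a][k - 1].
delays = {"r": [1, 3, 100], "s": [2, 4, 200],
          "t": [1.1, 4.1, 100.1], "u": [2.2, 3.2, 100.2]}
paths = [("r", "t"), ("s", "t"), ("r", "u"), ("s", "u")]
n = 3

def cost(path, x):
    """Cost of `path` under the congestion vector x (a Counter)."""
    return sum(delays[a][x[a] - 1] for a in path)

def greedy():
    """A 1-lookahead outcome: every player picks a cheapest path given
    the choices made so far (the game is generic, so no ties occur)."""
    x, out = Counter(), []
    for _ in range(n):
        p = min(paths, key=lambda q: cost(q, x + Counter(q)))
        out.append(p)
        x += Counter(p)
    return tuple(out)

def spo(prefix=()):
    """n-lookahead (subgame-perfect) play by backward induction: the
    final profile reached when the players after `prefix` act
    subgame-perfectly."""
    if len(prefix) == n:
        return prefix
    best, best_cost = None, None
    for p in paths:
        outcome = spo(prefix + (p,))
        x = Counter(a for q in outcome for a in q)
        c = cost(p, x)  # current player's cost in the induced end profile
        if best is None or c < best_cost:
            best, best_cost = outcome, c
    return best

print(greedy())  # (('r', 't'), ('s', 'u'), ('r', 'u'))
print(spo())     # (('s', 't'), ('r', 'u'), ('r', 'u'))
```

The two computed outcomes, $(rt,su,ru)$ and $(st,ru,ru)$, differ even as multisets, so no permutation of one is the other.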
\begin{proposition} \label{prop:brpnotspo} For any SP-graph $\Gamma$ that is not EP, there is a generic SNCG $G$ on $\Gamma$ such that the sets of 1-lookahead and $n$-lookahead outcomes are disjoint. \end{proposition} \begin{proof} Consider the single-commodity network with four arcs $r,s,t,u$ and $o,d$-paths $rt$, $st$, $ru$ and $su$. Each series-parallel graph that is not extension-parallel has this network as a minor (see \cite{milnetw}). Thus it suffices to give a counterexample on this network. Consider the generic three-player game with delay functions \[ \begin{array}{l} (d_r(1),d_r(2),d_r(3)) = (1,3,100)\\ (d_s(1),d_s(2),d_s(3)) = (2,4,200) \\ (d_t(1),d_t(2),d_t(3)) = (1.1,4.1,100.1)\\ (d_u(1),d_u(2),d_u(3)) = (2.2,3.2,100.2). \end{array} \] The unique 1-lookahead outcome (up to permutation) is $(rt,su,ru)$ and the unique $n$-lookahead outcome (up to permutation) is $(st,ru,ru)$. \end{proof} We next show that anticipation may still be beneficial for the first player. \begin{theorem} Let $G$ be a generic SNCG on an EP-graph. Let $B$ be a subgame-perfect outcome with respect to the identity. Then $c_1(B)\leq\dots\leq c_n(B)$. In particular, $c_1(B)\leq c_1(A)$ for any $k$-lookahead outcome $A$ with respect to the identity. \end{theorem} \begin{proof} Let $A=(A_1,\dots,A_n)$ be a permutation of the unique NE of $G$ for which $c_1(A)\leq \dots \leq c_n(A)$. Let $B$ be the unique subgame-perfect outcome with respect to the identity; then this is some permutation of $A$ by Theorem~\ref{thm:kLOforSNCG}. If player 1 plays $A_1$, then his successors will play $A_2,\dots,A_n$ (not necessarily in that order), so that $c_1(B)\leq c_1(A)$. Since $B$ is some permutation of $A$ and since $A_1$ (which may equal $A_i$ for some $i$) is the unique element from $\mathcal{A}$ with the lowest cost in the profile $A$, we find $A_1=B_1$ and $c_1(A)=c_1(B)$.
Similarly, player $j$ can ensure himself the cost $c_j(A)$ in the subgame induced by $(A_1,\dots,A_{j-1})$ and therefore $c_j(B)=c_j(A)$. This proves $c_1(B)=c_1(A)\leq\dots\leq c_n(A)=c_n(B)$. Finally, note that if $C$ is a $k$-lookahead outcome, then $C$ is some permutation of $A$ and therefore $c_1(C)=c_j(A)=c_j(B)\geq c_1(B)$ for some $j\in \N$. \end{proof} The following example shows that the cost of the first player does not decrease monotonically with his lookahead. In fact, a generalization of this example shows that only full lookahead guarantees the smallest cost for player 1. \begin{example} Consider the generic symmetric singleton congestion game with $R=\{r,s\}$, $d_r(x)=x$ and $d_s(x)=x+0.5$. The subgame-perfect outcome with respect to the identity is $(r,r,\dots,r,s,\dots,s)$ if $n$ is even and $(s,s,\dots,s,r,\dots,r)$ if $n$ is odd. Thus, if $A$ is an $(n-2)$-lookahead outcome and $B$ an $(n-1)$-lookahead outcome, then $c_1(A)=\min_{i\in \N}c_i(A)<\max_{i\in \N}c_i(A)=c_1(B)$. \end{example} In contrast to the above, the first player is not guaranteed to achieve minimum cost with full lookahead if the game is non-generic. \begin{example} Suppose a symmetric singleton congestion game with delay functions $d_r(x)=x=d_s(x)$ is played by an odd number of players. The successors of player 1 can decide which resource becomes the most expensive one and enforce that this is the one that player 1 picks, so that player 1 is always worse off. \end{example} \subsection{Inefficiency of non-generic games} Let $G$ be an SNCG and let $A$ be an outcome. Let $(x_r)_{r\in R}$ be the corresponding congestion vector. We define the \emph{opportunity cost} of $A$ in $G$ as the minimal cost that a new player entering the game would have to pay, i.e., $O_A(G)=\min_{P\in \mathcal{A}}\sum_{r\in P}d_r(x_r+1)$; this definition is implicit in \cite{optimalNEmakespan,optimalNE}.
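In code, the opportunity cost is a single minimisation over the action set. A small Python sketch (the encoding of delays as lists indexed by congestion is an illustrative assumption):

```python
def opportunity_cost(actions, delays, x):
    """O_A(G): the minimal cost a new player entering under the
    congestion vector x would pay; delays[r][k-1] encodes d_r(k)."""
    return min(sum(delays[r][x.get(r, 0)] for r in P) for P in actions)

# Two parallel links with one player currently on r: a newcomer would
# pay d_r(2) = 3 on r, or d_s(1) = 2 on s, so the opportunity cost is 2.
delays = {"r": [1, 3], "s": [2, 4]}
print(opportunity_cost([("r",), ("s",)], delays, {"r": 1}))  # 2
```

Note that `delays[r][x.get(r, 0)]` evaluates $d_r(x_r+1)$, since index $x_r$ corresponds to congestion $x_r+1$.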
The \textit{worst cost} of $A$ in $G$ is the egalitarian social cost, i.e., $W_A(G)= \max_{i\in\N}\sum_{r\in A_i}d_r(x_r)$. \begin{lemma} \label{lem:oppcosts} Let $G$ be an SNCG on an SP-graph $\Gamma$ with $m$ players and let $G^n$ be the same game with $n\leq m$ players. For any action profile $A$ in $G^n$ and any 1-lookahead outcome $B$ of $G$, we have $O_A(G^n)\leq O_B(G)$. Further, if $n<m$, then $O_A(G^n)\leq W_B(G)$. \end{lemma} The lemma above can be proved by induction on the graph structure. \begin{corollary} \label{cor:brpsamepotential} For SNCGs on SP-graphs, all 1-lookahead outcomes have the same value for the potential function. \end{corollary} \begin{proof} Let $A=(A_1,\dots,A_n)$ and $B=(B_1,\dots,B_n)$ be 1-lookahead outcomes (with respect to the identity). If $n=1$, then since both are chosen greedily we must have $\Phi(A)=\Phi(B)$. For $n>1$, the profiles $A'= (A_1,\dots,A_{n-1})$ and $B'=(B_1,\dots,B_{n-1})$ are also 1-lookahead outcomes, so that by an inductive argument and using that 1-lookahead outcomes have the same opportunity costs (Lemma \ref{lem:oppcosts}), we find $$ \Phi(A) = c_n(A) + \Phi(A') = O_{A'}(G^{n-1}) +\Phi(A')= O_{B'}(G^{n-1})+\Phi(B')= \Phi(B). $$ \end{proof} Applying induction on the graph structure again, we are able to derive that each 1-lookahead outcome is a global optimum of Rosenthal's potential function. \begin{proposition} \label{prop:optima} Let $G$ be an SNCG on a series-parallel graph. Let $\mathcal{M}$ denote the set of global minima of the Rosenthal potential function $\Phi(A)=\sum_{r\in R}\sum_{i=1}^{x(A)_r}d_r(i)$. \begin{enumerate} \item For any $A\in \mathcal{M}$, there is a 1-lookahead outcome $B$ with $x(B)=x(A)$. \item All 1-lookahead outcomes of $G$ are global minima of the potential function and $$ \{x(B)\mid B \text{ 1-lookahead outcome}\}=\{x(A)\mid A \in \mathcal{M}\}.
$$ \end{enumerate} \end{proposition} Fotakis \cite[Lemma 3]{fotakis} proves the following proposition: \begin{proposition}[Lemma 3 in \cite{fotakis}] \label{thm:fotakis} Let $G$ be an SNCG on an SP-graph with delay functions in class $\mathcal{D}$. For any global minimum $A$ of the Rosenthal potential function and any other profile $B$ we have $\sum_{i\in \N}c_i(A) \leq \rho(\mathcal{D}) \sum_{i\in \N}c_i(B)$, where $$ \rho(\mathcal{D})=\left(1-\sup_{d\in \mathcal{D}\setminus \{0\}}\sup_{x,y\geq 0} \frac{y(d(x)-d(y))}{xd(x)}\right)^{-1}. $$ \end{proposition} For example, if $\mathcal{D}$ consists of all affine functions then $\rho(\mathcal{D}) = \frac{4}{3}$. Fotakis \cite{fotakis} also proves that for SNCGs on SP-graphs, the Price of Stability is $\rho(\mathcal{D})$. Combining all these results, we obtain the following corollary. \begin{corollary} \label{cor:brps} For every SNCG on an SP-graph with delay functions in class $\mathcal{D}$, we have $1$-$\textsf{LPoA} \leq \rho(\mathcal{D})=\textsf{PoS}$. \end{corollary} Note that 1-lookahead outcomes are only guaranteed to be optimal Nash equilibria for games with the worst Price of Stability; moreover, finding an optimal Nash equilibrium of an SNCG on an SP-graph is an NP-hard problem \cite{optimalNE}. For series-parallel graphs, 1-lookahead outcomes have maximal worst cost among the Nash equilibria \cite{optimalNEmakespan}; we use $W(G)$ to refer to the common worst cost of 1-lookahead outcomes. \begin{lemma} \label{lem:sncgworstcost} Let $G$ be an SNCG on an SP-graph with $n$ players and common action set $\mathcal{A}$. Let $P_1,\dots,P_m\in \mathcal{A}$ for $m< n$ and let $G'$ be the subgame of $G$ induced by $(P_1,\dots,P_m)$. Then $W(G')\leq W(G)$. \end{lemma} \begin{proof} The subgame $G'$ is again an SNCG (with $n-m$ players). By Lemma 1 in \cite{optimalNEmakespan}, the last player pays the worst cost in any 1-lookahead outcome $B$ of $G'$.
Moreover, the last player pays at most $W(G)$: by Lemma \ref{lem:oppcosts}, the profile $A'=(P_1,\dots,P_m, B_1,\dots,B_{n-m-1})$ satisfies $O_{A'}(G^{n-1})\leq W(G)$ where $G^{n-1}$ denotes the game $G$ with $n-1$ players. Hence the greedy choice will cost at most $W(G)$. \end{proof} In particular, when at most $n-1$ players have chosen a path, there is still a path available of cost at most $W(G)$. On EP-graphs, Nash equilibria (and thus also 1-lookahead outcomes) have optimal egalitarian social cost \cite{epstein}. \begin{theorem} \label{thm:SPOgoodcost} For every SNCG $G$ on an EP-graph, each SPO $A$ has optimal egalitarian social cost, i.e., $W_A=W(G)$. \end{theorem} \begin{proof} We prove the claim by induction on the number of players in the game. If $G$ has $n=1$ player then the claim is true. Assume the claim is true for all subgame-perfect outcomes $A$ in games $G$ with at most $k$ players for some $k\geq 1$. Suppose towards a contradiction that there is a game $G$ with $n=k+1$ players for which there exists an SPO $A$ with $W_A(G)>W(G)$. Renumber the players so that $A$ corresponds to the identity. Let $j$ be the last player paying a cost worse than $W(G)$ in the profile $A$. If $j\neq 1$, then the subgame $G'$ induced by $(A_1,\dots,A_{j-1})$ has at most $k$ players so that by induction the SPO $A'=(A_j,\dots,A_n)$ satisfies $W(G')=W_{A'}(G')>W(G)$ (the strict inequality follows by the choice of $j$). This contradicts Lemma \ref{lem:sncgworstcost}. Hence we assume $j=1$. Let $P$ minimize $\sum_{r\in P}d_r(1)$, i.e., let $P$ be the cheapest path at the start of the game. Consider the subgame $G'$ induced by $(P)$. Take a subgame-perfect outcome $B=(B_i)_{i>1}$ for this subgame. Let $x$ be the congestion vector corresponding to $(P,B_{2},\dots,B_n)$. \begin{enumerate} \item Since player $1$'s subgame-perfect move costs more than $W(G)$, path $P$ will eventually cost $c(P,x) :=\sum_{r\in P}d_r(x_r)> W(G)$.
\item Since $B$ is an SPO in $G'$ and $G'$ has strictly fewer players than $G$, we get $W_{B}(G')=W(G')$ by induction. Hence $c(B_i,x)\leq W_B(G')=W(G')\leq W(G)$ for all $i>1$, where the last inequality follows from Lemma \ref{lem:sncgworstcost}. \end{enumerate} By 1. and 2., no successor of player $1$ can pick $P$. By 1., there must be a player picking a path having overlap with $P$. Let $i>1$ be an index for which $|P\cap B_i|$ is largest. By the nested intersections property, no player $k>1$ picks a path overlapping with $P\setminus B_i$. It follows that \begin{align*} c(B_i,x) & \geq \sum_{r\in B_i\cap P}d_r(x_r)+ \sum_{r\in B_i\setminus P} d_r(1) \\ & \geq \sum_{r\in B_i\cap P}d_r(x_r)+ \sum_{r\in P \setminus B_i} d_r(1)=c(P,x), \end{align*} where for the second inequality we use that $P$ was the cheapest path at the beginning of the game. By using 1. and 2., we conclude that $c(P,x)> W(G)\geq c(B_i,x) \geq c(P,x)$, which is a contradiction. \end{proof} \section{Extensions} \label{sec:extensions} We present our results for cost-sharing games and consensus games. \subsection{Cost-sharing games} A \textit{cost-sharing game} is a congestion game in which the delay functions are non-increasing. We first argue that subgame-perfect outcomes are not guaranteed to be stable, even for symmetric singleton cost-sharing games. \begin{example} Consider a cost-sharing game with two resources $r,s$ with identical delay functions $d_r(x) = d_s(x) = d(x)$ such that $d(x) = 2$ for $x = 1$ and $d(x) = 1$ for $x > 1$. If three players play this game, the subgame-perfect outcomes $(r,s,s)$ and $(s,r,r)$ (which can be created by defining $S_3(x,y)=y$ and $S_2(x)\neq x$) are unstable. \end{example} On the other hand, the above instability can be resolved for either symmetric or singleton cost-sharing games if no ties exist. \begin{theorem} \label{thm:gscsg} Let $G$ be a generic, symmetric cost-sharing game.
Each $k$-lookahead outcome is a Nash equilibrium of the form $(P_k,\dots,P_k)$ for $P_k = \argmin_{A\in \mathcal{A}}\sum_{r\in A} d_r(k).$ In particular, each subgame-perfect outcome is optimal. \end{theorem} The proof of Theorem~\ref{thm:gscsg} relies on the following lemma. \begin{lemma} \label{prop:uniqueSPO} Let $A$ be an outcome so that for any $B$ with $B_i\neq A_i$ for some $i\in \N$, we have $c_i(B)>c_i(A)$. Then $A$ is the unique subgame-perfect outcome. \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:gscsg}] We first show that each subgame-perfect outcome is of the form $(P_n,\dots,P_n)$, which is the optimal profile. Because the game is generic, no other action $Q\neq P_n$ can cost the same as $P_n$ does in the profile $(P_n,\dots,P_n)$. Hence for any outcome in which some player $i$ plays a different action than $P_n$, this player is \emph{strictly} worse off than in $(P_n,\dots,P_n)$. As a consequence, $(P_n,\dots,P_n)$ is the only subgame-perfect outcome by Lemma \ref{prop:uniqueSPO}. Let $k\leq n$ be given. The game $G^k$ has $(P_k,\dots,P_k)$ as its unique optimal outcome. Since $G^k$ is again a generic, symmetric cost-sharing game, each subgame-perfect outcome is of the form $(P_k,\dots,P_k)$. Thus, any $k$-lookahead outcome has the action $P_k$ as its first action. Let $G'$ be the subgame induced by $(P_k)$; then $G'^k$ is again a generic, symmetric cost-sharing game. Since the delay functions are non-increasing, the profile in which all players choose $P_k$ is again the optimal profile. Hence the second player also picks $P_k$. The claim now follows by continuing inductively. \end{proof} We turn to inefficiency of $k$-lookahead outcomes. In general, the $k$-$\textsf{LPoA}$ can increase with $k$ for generic, symmetric singleton cost-sharing games: \begin{example} Define delay functions $d_r(x)=\frac{n}x$ and $d_s(1)=n+1$ with $d_s(x)=2$ otherwise. Each $1$-lookahead outcome is optimal, whereas no $k$-lookahead outcome is optimal for $1<k<n$.
This example can be extended to make any subset of $[n-1]$ equal to the set $\{k\in[n-1]\mid k\text{-}\textsf{LPoA}(G)=1\}$. \end{example} However, we can gain monotonicity by restricting the delay functions. \begin{corollary} For every generic, symmetric cost-sharing game $G$ with delay functions of the form $d_r(x)=\frac{a_r}x+b_r$, $k$-$\textsf{LPoA}(G)$ is non-increasing in $k$. \end{corollary} \begin{proof} Theorem~\ref{thm:gscsg} shows that each $k$-lookahead outcome is of the form $(P_k,\dots,P_k)$ for $P_k=\argmin_{A\in \mathcal{A}}\sum_{r\in A} d_r(k)$. Let $n$ be the number of players. Denote $a_{k} =\sum_{r\in P_k}a_r$ and $b_{k}=\sum_{r\in P_k}b_r$, so that $\sum_{r\in P_k}d_r(\ell)=\frac{a_k}\ell+b_k$. If $k$-$\textsf{LPoA}(G)>(k-1)$-$\textsf{LPoA}(G)$, then in particular $P_k\neq P_{k-1}$. This implies $\frac{a_{k}}{k-1}+b_{k}>\frac{a_{k-1}}{k-1}+b_{k-1}$ and $\frac{a_{k}}k+b_{k}<\frac{a_{k-1}}{k}+b_{k-1}$. Subtracting the first from the second and dividing by $\frac1{k-1}-\frac1k>0$ shows $a_k>a_{k-1}$. This implies $a_k(\frac1k-\frac1n)>a_{k-1}(\frac1k -\frac1n)$. Subtracting this from $\frac{a_{k}}k+b_{k}<\frac{a_{k-1}}{k}+b_{k-1}$ gives $\sum_{r\in P_k}d_r(n)<\sum_{r\in P_{k-1}}d_r(n)$. \end{proof} \begin{theorem} \label{thm:triv} If $G$ is a generic singleton cost-sharing game, then each SPO is an NE. However, $k$-lookahead outcomes are not guaranteed to be Nash equilibria for any $k<n$. \end{theorem} \begin{proof} We mimic the proof for fair cost-sharing games \cite{leme}. Assume without loss of generality that all resources in $R$ can be chosen by some player. For $r\in R$, let $N_r=\{i\in \N:r\in \mathcal{A}_i\}$ be the set of players that can choose resource $r$. Because the game is generic, $$ r^*=\arg\min_{r\in R} d_r(|N_r|) $$ satisfies $d_{r^*}(|N_{r^*}|)<d_s(|N_s|)$ for all $s\in R\setminus \{r^*\}$. Hence if all successors and predecessors of $i\in N_{r^*}$ choose $r^*$, then this is the unique best response for player $i$ as well.
This implies that all players $i\in N_{r^*}$ will choose $r^*$ independent of the order the players arrive in. Removing these players from the game and repeating this at most $n$ times, we find that the resource a player picks does not depend on the order in which the players arrive. Since the last player in a subgame-perfect outcome is always giving a best response and each player can be seen as the last player, it follows that this unique SPO is a Nash equilibrium. An example of a 1-lookahead outcome which is not an NE can be created by putting $d_r(x)=\frac1x,~d_s(x)=\frac4{3x}, ~\mathcal{A}_1=\{r,s\}, ~\mathcal{A}_2=\{s\}.$ This example can be extended to give unstable $k$-lookahead outcomes for any $k<n$ by putting players in between that do not interfere with the two players (e.g., by creating a third resource $t$ and setting $\mathcal{A}_i=\{t\}$ for $i\in \{2,\dots,n-1\}$). \end{proof} \subsection{Consensus games} In a consensus game, each player is a vertex in a weighted graph $\Gamma = (V,E,w)$ and can choose between actions $L$ and $R$. The cost of player $i$ in outcome $A$ is given by the sum of the weights $w_{ij}$ of all incident edges $ij\in E$ for which $A_i\neq A_j$. The following example shows that subgame-perfect outcomes may be unstable in general. \begin{example} \label{ex:treeorder} Consider a graph with vertex set $V=\{1,2,3\}$ and edge set $E=\{\{2,3\},\{3,1\}\}$. Choose weights $w_{23}> w_{13}>0$. For the order $1,2,3$ on the players, there is only one possibility to let player 3 act subgame-perfectly (since $w_{23}>w_{13}$): $S_3(x,y)=y$. Player 2 is hence indifferent between his two actions. So we can define $S_2(x)\neq x$ (``always choose an action different from player 1''). Player 1 is now indifferent as well and we can set $S_1=L$. This defines a subgame-perfect equilibrium for which the corresponding SPO is unstable. Note that the graph in this example forms a tree.
Call an order $\sigma$ \emph{tree respecting} if each player except the first succeeds at least one of his neighbours. Mimicking the proof of Proposition \ref{prop:consensusgames}, one can show that each subgame-perfect outcome is optimal (i.e., of the form $(L,\dots,L)$ or $(R,\dots,R)$) for such orderings. \end{example} \begin{proposition} \label{prop:consensusgames} Let $G$ be a consensus game. If all players adopt a common tie-breaking rule, then all $k$-lookahead outcomes are optimal. \end{proposition} \begin{proof} Assume without loss of generality that $R$ is the preferred action of all players and the order on the players is $1,\dots,n$. (Otherwise, we can relabel the players and actions $R$ and $L$.) We first show that each SPO $A$ is of the form $(R,\dots,R)$. Let $S$ be the subgame-perfect equilibrium corresponding to $A$. We show inductively that $S_i(R,\dots,R)=R$ for all $i\in \{n,\dots,1\}$, so that $A=(R,\dots,R)$ follows. The claim for player $n$ follows since $c_n(R,\dots,R)$ is the optimal possible cost for player $n$ and player $n$ plays $R$ when indifferent. Assuming $S_i(R,\dots,R)=R$ holds for players $n,\dots,k$ for some $k\in \{2,\dots,n\}$, player $k-1$ will get cost $c_{k-1}(R,\dots,R)$ if he plays $R$, which again implies that he must choose $R$. Let $A$ be a $k$-lookahead outcome for some $k\in\{1,\dots,n\}$. The only subgame-perfect outcome corresponding to the order $1,\dots,k$ and common tie-breaking towards $R$ is given by $(R,\dots,R)$, so that $A_1=R$. After players 1 up to $\ell$ have fixed action $R$, the profile $(R,\dots,R)$ becomes weakly more attractive relative to any other profile, so that $(R,\dots,R)=R^{\min\{k,n-\ell\}}$ will be the only subgame-perfect outcome in the subgame induced by $(A_1,\dots,A_{\ell})=(R,\dots,R)$ for any $\ell\in \{1,\dots,n-1\}$. Hence $A=(R,\dots,R)$. (We find $A=(L,\dots,L)$ if ties are broken in favor of $L$.)
\end{proof} \section{Future work} While the focus in this paper is on congestion games, our notion of $k$-lookahead outcomes naturally extends to arbitrary normal-form games (details will be given in the full version of the paper). It will be interesting to study $k$-lookahead outcomes for other classes of games. Another natural direction is to consider heterogeneous (e.g., player-dependent) anticipation levels. In particular, it would be worthwhile to further explore the relation between ties and anticipation within this framework. Another question for future research is to investigate more closely the relationship between greedy best-response and subgame-perfect outcomes. Note that in Section 3 we were able to use results for $k = 1$ and $k = n$ to infer results for other values of $k$. This might be true for other games as well; it would be useful to identify general properties of games for which such an approach goes through. \paragraph{Acknowledgements.} We thank Pieter Kleer for helpful discussions. This work was done while the first author was a research intern at CWI and a student at the University of Amsterdam (UvA). \newpage
{ "attr-fineweb-edu": 1.859375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbezxK4sA-5Y3q30e
\section*{NOTE} The work in this paper on bias in the mixed model estimators has been superseded by ``A Diagnostic for Bias in Linear Mixed Model Estimators Induced by Dependence Between the Random Effects and the Corresponding Model Matrix'' \url{https://arxiv.org/abs/2003.08087}. In particular, the diagnostics in Equations (7) and (14) were formed with an \textit{ad hoc} approach, whereas the newly referenced paper takes a thorough theoretical approach. This paper will remain on arXiv per site requirements. \section{Introduction} \label{sec:intro} There has long been observed a ``home-field advantage'' across a variety of sports \citep{harville77,harville94,harville03,pollard05,marshall,wang,goumas}. This advantage may be defined as ``the net effect of several factors that generally benefit the home team and disadvantage the visiting team'' \citep{harville94}. The scope and influence of these factors is debated by sports psychologists. In some sports, such as baseball, there is variation in the structure of the playing field -- such as fence distance and height -- from stadium to stadium, creating the potential for a team to acclimate to its own field. By contrast, \citet{scorecasting} argue that referee bias due to social influence is a primary factor in home-field advantage. Recently, sports journalists and researchers have taken interest in a perceived decline in the home-field advantage (HFA) in college football and basketball. \cite{omaha} notes the decrease in college football HFA by looking at win proportions for the home team within conferences. \cite{si2} makes a similar note of declining proportion of wins for home teams in intraconference NCAA Division I men's basketball games. However, these analyses assume an underlying binomial model for the number of games in a season won by home teams \citep{pollard05} and that game outcomes are independent with an equal probability of the home team winning. 
Simply tracking the proportion of games won by the home team does not control for the variability in team abilities, nor consider the potential biasing impact of nonrandom scheduling on the estimate, nor provide accurate standard errors for the proportions. HFA can be approached from two different perspectives: \begin{enumerate} \item{\textbf{scoring HFA:} the difference in the expected scores of the home and away teams within a game, after the strengths of each team have been accounted for.} \item{\textbf{win propensity HFA:} the increased win propensity of a team when playing an opponent at home, beyond the predicted propensity of that team defeating the same opponent at a neutral site.} \end{enumerate} This investigation studies whether or not there is a linear trend in the scoring HFA of 6 sports over the past 18 seasons: Men's and Women's NCAA Division I basketball, the NCAA Football Bowl Subdivision (FBS), the National Football League (NFL), the National Basketball Association (NBA), and the Women's National Basketball Association (WNBA). The teams of each of these sports are divided into conferences, which are clusters of teams that play each other with higher probability than would be expected if the schedules were randomized across the sport. We are interested in knowing if trends in scoring HFA differ across the conferences. In the first phase of this investigation, we present a model for estimating HFA (independently) within each conference and in each season. \citet{pollard05} acknowledge that schedules may be unbalanced, but do not explore the potential impact on HFA estimates. 
However, \citet[Section 5]{harville77} notes that the estimate for HFA resulting from the mean of the differences between home and away scores is ```biased' upwards because, for financial reasons, the better teams are more generally able to arrange schedules with more games at home than away.'' However, we will see that a simplified version of the mixed model estimate for HFA proposed by \citet{harville77} is itself subject to upward bias in this situation, by examining a fixed and a random effect model for scoring HFA. The second phase of the analysis fits a potential linear trend in the yearly conference estimates of HFA that were produced in Phase I. In the college sports, which typically have at least 10 conferences, this phase implements a weighted random coefficient model using the Phase I intraconference HFA estimates as the response and the reciprocal of the squared standard errors from Phase I as weights. The model allows for correlated conference-specific random intercepts and slopes. Due to the presence of only two conferences in each professional sport, the conference-specific intercepts and slopes are instead treated as fixed effects for these sports. Section~\ref{sec:methods} presents a fixed team effect model and a random team effect model for estimating the HFA within years -- or within conferences within years -- (Phase I) and compares the assumptions made by each model regarding the structure of the stochastic schedules. In addition, models for fitting conference-specific linear trends in HFA over time (Phase II) are defined. Section~\ref{sec:results} applies the Phase I and Phase II models to 18 seasons of 6 different sports. This section also refits the Phase I model to simulated data sets that are generated by resampling the residuals and team effects in order to explore bias in the HFA estimates. Section~\ref{sec:discussion} closes the paper with a discussion of the results. 
\section{Methods}\label{sec:methods} \subsection{Phase I: Estimating HFA} This section presents both a fixed effects model and a mixed effects model for the home team margins of victory, $d_i=y_{H_i}-y_{A_i}$, where $y_{H_i}$ and $y_{A_i}$ are the home and away team scores, respectively, for game $i=1,...,n$. With $N$ teams in a data set, the pattern of game opponents and locations (the schedule) is recorded in an $n\times N$ matrix $\bds{Z}$ as follows: if team $T_H$ hosted team $T_A$ in game $i$, then the $i$-th row of $\bds{Z}$ consists of all zeros except for a $1$ in the column corresponding to $T_H$ and a $-1$ in the column corresponding to $T_A$. \subsubsection{Fixed Effects Model for Scoring Advantage} The first plausible model for scoring HFA fits a fixed effect $\lambda$ for the HFA and vector of fixed team effects $\bds{\beta}=\left(\beta_1,\ldots,\beta_N\right)$ where the difference between team effects provides an estimate of the score difference in a game between the two teams on a neutral field. For each game $i$, this model assumes \begin{equation} d_i\sim N\left(\lambda + \beta_{H_i} - \beta_{A_i}, \sigma^2 \right) \end{equation} where $H_i$ and $A_i$ are the indices for the home and away teams, respectively, in game $i$. The residuals are assumed to be independent, leading to an overall model \begin{equation}\label{eq:femodel} \bds{d}=\lambda*\bds{1} + \bds{Z}\bds{\beta} + \bds{\epsilon} \end{equation} where $\bds{1}$ is a vector of 1's, $\bds{\epsilon}\sim N(\bds{0},\sigma^2\bds{I})$ where $\bds{I}$ is the identity matrix, and $\bds{d}=(d_1,\ldots,d_n)$. This model appears previously as Model 2 of \cite{harville94} and Model 1 of \cite{harville03}. The fixed effect model matrix $\bds{Z}$ is not full rank and the individual team effects $\beta_i$ are not estimable. However, $\lambda$ is estimable, as are the pairwise differences between team effects, provided there is sufficient mixing of the teams \citep[Section 5.2.1]{stroup}. 
For example, in a conference of three teams (A, B, C), $\lambda$ will be estimable if any two teams have a home-and-home series, or if, for example, team B plays at team A, team A plays at team C, and team C plays at team B. Provided this is the case and letting $M^+$ represent the Moore-Penrose inverse of a matrix $M$, solutions to model (\ref{eq:femodel}) are given by \begin{equation}\label{eq:febeta} \left[ \begin{array}{c} \widehat{\lambda}\\ \bds{\widehat{\beta}} \end{array} \right] =\left[ \begin{array}{c|c} \left(\bds{1^{\prime}}\bds{1}\right)&\left(\bds{1^{\prime}}\boldsymbol{Z}\right)\\ \hline \left(\boldsymbol{Z}^{\prime}\bds{1}\right)&\left(\boldsymbol{Z}^{\prime}\boldsymbol{Z}\right) \end{array} \right]^{+} \left[ \begin{array}{c} \bds{1^{\prime}}\\ \hline \boldsymbol{Z}^{\prime} \end{array} \right] \bds{d} \end{equation} An additional concern regarding $\boldsymbol{Z}$ is that it is potentially a stochastic matrix that depends on the team effects, in the sense that better teams may choose to play weaker opponents or more games at home than away. However, since the elements of $\beta$ are treated as fixed effects, the only variability assumed in their estimates is the variability associated with $\bds{\epsilon}$. Taking the conditional expectation of (\ref{eq:febeta}) given $\left[\bds{1}|\bds{Z}\right]$, it appears that two possible sufficient conditions for the model (\ref{eq:femodel}) to produce unbiased estimates for $\lambda$ are either that the model matrix $\left[\bds{1}|\bds{Z}\right]$ is independent of $\bds{\epsilon}$ (which seems reasonable from a sports scheduling perspective) or that $\bds{1^{\prime}}\boldsymbol{Z}=\bds{0}$. In the latter case, each team plays a balanced schedule with the same number of home and away games, and the estimated HFA is simply the mean observed difference, $\widehat{\lambda}=\overline{\bds{d}}$.
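For concreteness, the construction of $\bds{Z}$ and the minimum-norm least squares solution in (\ref{eq:febeta}) can be sketched in a few lines of Python with \texttt{numpy}. This is an illustrative sketch rather than the paper's actual implementation (which uses R), and the toy schedule and scores are hypothetical:

```python
import numpy as np

def schedule_matrix(games, n_teams):
    """Build the n x N schedule matrix Z: +1 marks the home team and
    -1 the away team for each game (home, away)."""
    Z = np.zeros((len(games), n_teams))
    for i, (home, away) in enumerate(games):
        Z[i, home] = 1.0
        Z[i, away] = -1.0
    return Z

def fixed_effect_hfa(d, Z):
    """Minimum-norm least squares solution of d = lambda*1 + Z*beta + eps,
    equivalent to the Moore-Penrose solution in the text."""
    X = np.column_stack([np.ones(len(d)), Z])
    sol = np.linalg.pinv(X) @ d
    return sol[0], sol[1:]

# Hypothetical balanced round robin: A hosts B, B hosts C, C hosts A.
games = [(0, 1), (1, 2), (2, 0)]
Z = schedule_matrix(games, 3)
d = np.array([5.0, 2.0, -1.0])   # home margins of victory
lam, beta = fixed_effect_hfa(d, Z)
print(round(lam, 3))             # 2.0, the mean of d, since 1'Z = 0
```

Because the toy schedule is balanced ($\bds{1^{\prime}}\boldsymbol{Z}=\bds{0}$), the estimate reduces to $\overline{\bds{d}}$, as noted above; the pairwise differences in the $\beta$ solution remain estimable even though the individual team effects are not.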
\subsubsection{Mixed Model for Scoring Advantage} Alternatively, the teams may be modeled with random effects $\bds{\eta}=\left(\eta_1,\ldots,\eta_N\right)\sim N\left(\bds{0},\sigma^2_g\bds{I}\right)$ that are assumed to be independent of the error terms. The remaining details are the same as model (\ref{eq:femodel}), with the difference in home and away scores, $d_i$, modeled conditional on the random team effects as \begin{equation} d_i|\boldsymbol{\eta}\sim N\left(\lambda + \eta_{H_i} - \eta_{A_i}, \sigma^2 \right) \end{equation} producing an overall model \begin{equation}\label{eq:mixed} \bds{d}|\boldsymbol{\eta}\sim N(\lambda*\bds{1} + \bds{Z}\bds{\eta},\sigma^2\bds{I}) \end{equation} \citet[Equation 2.1]{harville77} considers a generalization of this model for ranking sports teams. An advantage of placing a distributional assumption on the team effects is that it provides a form of regularization for the model and avoids the estimability concern for the team effects. However, the variance imparted to $\boldsymbol{\eta}$ results in a new restriction regarding the dependence of $\boldsymbol{Z}$ on $\boldsymbol{\eta}$. As with the fixed effect model (\ref{eq:febeta}), the mixed model estimate for $\lambda$ is also equal to $\overline{\bds{d}}$ when $\bds{1^{\prime}}\boldsymbol{Z}=\bds{0}$ \citep[Equation 5]{henderson75}, leading to identical estimates in the presence of a balanced schedule, even if the team assignments were made not-at-random.
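For known variance components, the estimate of $\lambda$ and the eBLUPs of $\boldsymbol{\eta}$ under model (\ref{eq:mixed}) solve Henderson's mixed model equations. The following is a minimal \texttt{numpy} sketch with hypothetical variance components, not the REML fit used in the paper:

```python
import numpy as np

def mixed_model_hfa(d, Z, sigma2, sigma2_g):
    """Solve Henderson's mixed model equations for known variance
    components; team effects are shrunk by k = sigma^2 / sigma^2_g."""
    n, N = Z.shape
    k = sigma2 / sigma2_g
    ones = np.ones((n, 1))
    C = np.block([[ones.T @ ones, ones.T @ Z],
                  [Z.T @ ones, Z.T @ Z + k * np.eye(N)]])
    rhs = np.concatenate([ones.T @ d, Z.T @ d])
    sol = np.linalg.solve(C, rhs)
    return sol[0], sol[1:]       # (lambda estimate, eBLUPs)

# Hypothetical balanced schedule: each team hosts once and travels once.
Z = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0], [-1.0, 0.0, 1.0]])
d = np.array([5.0, 2.0, -1.0])
lam, eta = mixed_model_hfa(d, Z, sigma2=100.0, sigma2_g=25.0)
print(round(lam, 3))             # 2.0, the mean of d, since 1'Z = 0
```

With a balanced schedule the off-diagonal blocks involving $\bds{1^{\prime}}\boldsymbol{Z}$ vanish, so the mixed model estimate coincides with $\overline{\bds{d}}$, matching the Henderson result cited above.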
In the case of an unbalanced but randomly assigned schedule where $\bds{1^{\prime}}\bds{Z}$ {and} $\boldsymbol{\eta}$ {are independent}, the mixed model assumptions suggest that we should see similar home field estimates (on average) between the two models: the mixed model assumes $\boldsymbol{\eta}\sim N\left(\bds{0},\sigma^2_g\bds{I}\right)$ and thus its product with the off-diagonal quantity of interest from (\ref{eq:febeta}) has mean $\bds{0}$ and is distributed as \begin{equation}\label{eq:dist} \bds{1^{\prime}}\bds{Z}\boldsymbol{\eta}\sim N\left(\bds{0},\sigma^2_g\bds{1^{\prime}}\bds{Z}\bds{Z^{\prime}\bds{1}}\right) \end{equation} However, if the construction or distribution of $\boldsymbol{Z}$ depends on $\boldsymbol{\eta}$, then the relationship in (\ref{eq:dist}) no longer holds since the factorization $\text{E}\left[\bds{Z}\boldsymbol{\eta}\right]=\bds{Z}\text{E}\left[\boldsymbol{\eta}\right]$ requires independence. Under a null hypothesis that $\boldsymbol{\eta}\sim N\left(\bds{0},\sigma^2_g\bds{I}\right)$ and that $\bds{1^{\prime}}\bds{Z}$ and $\boldsymbol{\eta}$ are independent, \begin{equation}\label{eq:oneztest} \frac{1}{\sigma_g^2}\boldsymbol{\eta}^{\prime}\boldsymbol{Z}^{\prime}\bds{1}\left(\bds{1^{\prime}}\bds{Z}\bds{Z^{\prime}\bds{1}}\right)^{-1}\bds{1^{\prime}}\bds{Z}\boldsymbol{\eta}\sim\chi^2_{1} \end{equation} An abnormally large value could indicate dependence between $\bds{1^{\prime}}\bds{Z}$ and $\boldsymbol{\eta}$ that leads to a systematic difference between the estimates of $\lambda$ produced by models (\ref{eq:femodel}) and (\ref{eq:mixed}). We calculate this value by substituting $\sigma^2_g$ with its REML estimate and $\boldsymbol{\eta}$ with its eBLUP. While these substitutions could alter the null distribution from the expected chi-square, simulations have shown it to be an accurate approximation to observed distributions of this value. NOTE: The previous paragraphs represent an initial \textit{ad hoc} approach to this problem. 
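As a sketch, the statistic in (\ref{eq:oneztest}) is a scalar quadratic form that is simple to evaluate once estimates of $\sigma^2_g$ and $\boldsymbol{\eta}$ are in hand. The schedule, team effects, and variance below are hypothetical stand-ins for the fitted quantities:

```python
import numpy as np
from scipy.stats import chi2

def schedule_balance_test(Z, eta, sigma2_g):
    """Chi-square(1) statistic of (eq:oneztest): large values flag possible
    dependence between the imbalance 1'Z and the team effects."""
    u = Z.sum(axis=0)            # 1'Z: each team's home-minus-away count
    stat = float(u @ eta) ** 2 / (sigma2_g * float(u @ u))
    return stat, chi2.sf(stat, df=1)

# Hypothetical unbalanced schedule: the strongest team hosts every game
# it plays, mimicking nonrandom college scheduling.
Z = np.array([[1.0, -1.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [1.0, -1.0, 0.0]])
eta = np.array([4.0, 0.0, -4.0])    # stand-ins for the eBLUPs
stat, pval = schedule_balance_test(Z, eta, sigma2_g=4.0)
print(round(stat, 2), pval < 0.05)  # 7.14 True
```

Here the home-game surplus $\bds{1^{\prime}}\bds{Z}$ lines up with the strongest team, so the statistic lands well above the 5\% critical value of a $\chi^2_1$ distribution.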
Please see \url{https://arxiv.org/abs/2003.08087} for a more developed analysis. \subsection{Phase II: Fitting a Trend in HFA}\label{sec:phase2} Once the estimated intraconference HFAs and associated standard errors have been obtained from either model (\ref{eq:femodel}) or model (\ref{eq:mixed}), we will check for the presence of a population-wide linear trend in HFA within each sport. In the application in Section~\ref{sec:results}, separate Phase I models are fit for the different conferences within sports in each year. There are many such conferences for the college sports, but only two for each of the professional sports. \subsubsection{College Sports}\label{sec:collegeII} For the college sports, we fit a model with a linear trend in time and correlated random slopes and intercepts for conferences. Letting $\lambda_{ij}$ represent the estimated HFA for conference $j$ in year $i$, we fit \begin{equation}\label{eq:rancoef} \lambda_{ij} = \left(\alpha_0 + b_{0j}\right) + \left(\alpha_1 + b_{1j}\right)t_i+\epsilon_{ij} \end{equation} where $t_i$ is the year (indexed with 2017 as time 0), $\alpha_0$ and $\alpha_1$ are fixed effects for the population intercept and slope, respectively, and the errors are independent and distributed $\epsilon_{ij}\sim N \left(0,{\sigma^2_{\lambda}}{w_{ij}^2}\right)$ where $w_{ij}$ is the standard error of the HFA for conference $j$ in year $i$ from Phase I: this weighting incorporates information about the accuracy of the point estimates from Phase I. The conference-level random intercepts and slopes are assumed to be independent across conferences, and independent of $\epsilon$, distributed as $\left(b_{0j},b_{1j}\right)\sim N\left(\bds{0},\boldsymbol{G}\right)$ where $\boldsymbol{G}$ is a covariance matrix with lower triangle $\left(\sigma^2_1,\sigma_{12},\sigma^2_2\right)^{\prime}$. 
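If the conference-level random effects are set aside, the weighting in (\ref{eq:rancoef}) reduces to weighted least squares for the population trend. The sketch below, with made-up Phase I estimates, illustrates only this simplified weighting; the full random coefficient model is fit with mixed model software in the paper:

```python
import numpy as np

def wls_trend(lam, t, se):
    """Weighted least squares for lambda = alpha0 + alpha1*t + eps with
    Var(eps) proportional to se^2, i.e. weights 1/se^2."""
    X = np.column_stack([np.ones_like(t), t])
    w = 1.0 / se ** 2
    XtWX = X.T @ (w[:, None] * X)
    return np.linalg.solve(XtWX, X.T @ (w * lam))

# Made-up Phase I output: yearly HFA estimates and standard errors.
t = np.array([-3.0, -2.0, -1.0, 0.0])    # years, with 2017 indexed as 0
lam = np.array([3.3, 3.2, 3.1, 3.0])     # HFA point estimates
se = np.array([0.2, 0.2, 0.2, 0.2])      # Phase I standard errors
a0, a1 = wls_trend(lam, t, se)
print(round(a0, 3), round(a1, 3))        # 3.0 -0.1 (data lie on a line)
```

Downweighting imprecise Phase I estimates in this way is what lets noisy conference-seasons contribute less to the fitted trend.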
Testing for the significance of the contributions of the conference-level random slopes and intercepts is not amenable to the standard likelihood ratio test since the null hypothesis of a zero variance component lies on the boundary of the parameter space \citep{scheipl}: a simulation is used to assess the magnitude of the observed restricted loglikelihood statistic under the null hypothesis $\boldsymbol{G}=\bds{0}$. However, random effects are retained in the model regardless of their perceived significance. This could lead to reduced power in testing the fixed effects, but guards against bias due to omitted effects since the plots of intraconference HFA in the application of Section~\ref{sec:results} suggest heterogeneity in slopes. The resulting fixed effect estimate $\widehat{\alpha}_1$ represents the sport-wide linear trend in HFA, while $\widehat{\alpha}_0$ estimates the population HFA in 2017. \subsubsection{Professional Sports}\label{sec:proII} Each of the considered professional sports is split into two conferences, which will be denoted A and B. In the NBA and WNBA, these are the Western and Eastern conferences, respectively. In the NFL, they are the NFC and the AFC, respectively. With only two conferences, the HFA estimates from each conference are not able to support the estimation of $\boldsymbol{G}$ in (\ref{eq:rancoef}). Instead, the conference intercepts and slopes are fit as fixed effects in model (\ref{eq:fullfixed}), where the errors are weighted and distributed as described in Section~\ref{sec:collegeII}. The full model is compared via a likelihood ratio test (using maximum likelihood estimates) to model (\ref{eq:fixed1}), which assumes identical slopes and intercepts for both conferences, and to model (\ref{eq:fixed2}), which drops the linear trend in time from the full model.
\begin{align} \lambda_{ij} =& \beta_{0A} + \beta_{1A}t_i +\left(\beta_{0B}+\beta_{1B}t_i\right)I\left(j=B\right) +\epsilon_{ij}\label{eq:fullfixed}\\ \lambda_{ij} =& \beta_{0A} + \beta_{1A}t_i +\epsilon_{ij}\label{eq:fixed1}\\ \lambda_{ij} =& \beta_{0A} +\beta_{0B}I\left(j=B\right) +\epsilon_{ij}\label{eq:fixed2} \end{align} Unlike in the random coefficient model, the interpretation of $\beta_{0A}$ and $\beta_{1A}$ changes with the presence of $\beta_{0B}$ and $\beta_{1B}$. \subsection{Estimation} The Phase I restricted maximum likelihood (REML) estimates for the linear mixed model (\ref{eq:mixed}) are obtained with an EM algorithm \citep{laird82,pxem} via the R package \texttt{mvglmmRank} \citep{mvglmmRank}, which also uses the resulting model matrices to construct the fixed effect estimates (\ref{eq:febeta}). The Phase II models of Sections~\ref{sec:collegeII} and \ref{sec:proII} are fit in SAS PROC MIXED. Source code for replicating the Phase I and Phase II analyses is included in the supplementary material (\url{https://github.com/HFAbias18/supplement}). The source code can easily be adapted to perform similar analyses on results from other sports using the database provided by \citet{masseyr}. \section{Application to College and Professional Basketball and Football}\label{sec:results} \subsection{Data} Intraconference scores from the 2000--2017 seasons for each of the six sports of men's and women's NCAA Division I basketball, NCAA Football Bowl Subdivision (FBS) football, the NFL, the WNBA, and the NBA are furnished by the website of \citet{masseyr}, with some missing WNBA results obtained from WNBA.com. In all cases, neutral site games are excluded, and college results are limited to intradivision games between Division I teams for basketball and FBS teams for football. Our focus is to examine whether the current generation of athletes is playing the same game as the previous generation, and an 18-year window seems appropriate for this task.
The choice of 2000 as a starting year is further explained in the appendix. \begin{table} \caption{Median p-values for the test in (\ref{eq:oneztest}) across the 2000--2017 seasons. One test was performed for each full season of each sport, with results summarized in the first row. Likewise, one test was performed for each conference (including only intraconference games) in each year of each sport, with results summarized in the second row. CFB represents college football, while CBB-M and CBB-W represent men's and women's college basketball, respectively. } \label{table:7pvalue} \begin{tabular}{lrrrrrr} \toprule & NFL & NBA & WNBA & CFB & CBB-M & CBB-W \\ \midrule \multicolumn{6}{l}{\textit{One Model for Each Season}}\\ Expression (\ref{eq:oneztest}): p-value & 0.40 & 0.25 & 0.52 & 7e-06 &5e-33 & 2e-19\\ \midrule \multicolumn{6}{l}{\textit{One Model for Each Conference in Each Season}}\\ Expression (\ref{eq:oneztest}): p-value & 0.63 & 0.47 & 0.80 & 0.87 & 0.84 & 0.35 \\ \bottomrule \end{tabular} \end{table} \subsection{Phase I: Estimating the HFA} \subsubsection{Application to Full Season Data} In an initial application to full seasons of NCAA men's basketball results (ignoring conference divisions and thus using every game played during each season), Figure~\ref{plot:marg1} shows that the mean $\overline{\bds{d}}$ is uniformly larger than the estimated HFA from the mixed model (\ref{eq:mixed}), which in turn is uniformly larger than the estimates from the fixed effects model (\ref{eq:femodel}): this suggests that at least one of the models is producing biased estimates. However, HFA estimates from these three approaches are essentially identical to each other for the same NBA seasons. The same pattern appears in the other college and professional sports, with three distinct estimates in each of the college sports and three nearly identical estimates in each season of each of the professional sports.
These differences between the fixed and random effects estimates in the college sports coincide with unexpectedly large values of the test statistic (\ref{eq:oneztest}), as shown in the first row of Table~\ref{table:7pvalue}. A bootstrap simulation helps illustrate this relationship. \begin{figure} \caption{Fixed effect model estimates (dotted) of the scoring HFA are plotted with corresponding mixed model (dashed) estimates and the mean of $\bds{d}$ (solid) across seasons of NCAA Men's Basketball (left) and the NBA (right). The lines in the NBA plot overlap. } \label{plot:marg1} \centering \includegraphics[scale=0.5]{NCAA_basketball_men_scores_out_marginal.pdf} \includegraphics[scale=0.5]{NBA_scores_out_marginal.pdf} \end{figure} \subsubsection{Simulation for Full Seasons}\label{sec:fullsim} To assess the behavior of the HFA estimates produced by the fixed and mixed effect models given the potentially nonrandom schedules, $\boldsymbol{Z}$, we run two simulations in each full season of each sport (ignoring conferences) using the fitted team effects $\widehat{\boldsymbol{\eta}}$ and error terms $\widehat{\bds{e}}$ from the mixed model (\ref{eq:mixed}). The first simulation generates observations with an HFA of 3 points by resampling the residuals with replacement (bootstrapping) as defined in (\ref{eq:ysim1}). In the second case, presented by (\ref{eq:ysim2}), the team effects are also resampled (without replacement). The first simulation represents a scenario in which the season is played repeatedly according to the same schedule, while the second represents a scenario in which the teams are shuffled before assigning them to the same schedule structure, $\boldsymbol{Z}$.
Letting $\bds{s_1}(\bds{x})$ represent a function that samples the elements of $\bds{x}$ with replacement (or without replacement for $\bds{s_0}$), \begin{align} \bds{y}_{sim1}&=3*\bds{1}+\bds{Z}\bds{\widehat{\eta}}+\bds{s_1}\left(\bds{\widehat{e}}\right)\label{eq:ysim1}\\ \bds{y}_{sim2}&=3*\bds{1}+\bds{Z}\bds{s_0}\left(\bds{\widehat{\eta}}\right)+\bds{s_1}\left(\bds{\widehat{e}}\right)\label{eq:ysim2} \end{align} 2000 replicates of $\bds{y}_{sim1}$ and $\bds{y}_{sim2}$ are generated for each season of each sport. These simulated score vectors are fit with both the fixed and mixed effect models. For each year $i$, the mean, $m_i$, of the 2000 estimates of $\lambda$ and the coverage probability of the 95\% confidence intervals are recorded. The means of these values over all 18 seasons are then reported in Table~\ref{tab:ysim1}, along with the p-value from the two-sided $t$-test of the null hypothesis that the $m_i$ were drawn from a population with mean 3. Both of the models reliably recover the simulated HFA of 3 points in all sports when fitting the simulation with randomized team assignments, $\bds{y}_{sim2}$. Likewise, both models produce unbiased estimates of the HFA in the professional sports, which have more balanced schedules than the college sports, when the schedule remains fixed in its current configuration in $\bds{y}_{sim1}$. However, only the fixed effect model provides unbiased estimates for $\bds{y}_{sim1}$ in the college sports: the mixed effect model shows upward bias in its estimate of the college HFAs along with poor coverage probabilities and extremely small p-values. The mixed model (\ref{eq:mixed}) assumes random assignments of the teams to games and to home location. This is not the case, however, as stronger college teams are often able to schedule more home than away games.
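The resampling in (\ref{eq:ysim1}) and (\ref{eq:ysim2}) amounts to a residual bootstrap with an optional permutation of the fitted team effects. A toy Python sketch of the generator (the study itself ran 2000 replicates per season through the full model fits):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_margins(Z, eta_hat, resid_hat, shuffle_teams=False, hfa=3.0):
    """y_sim1 (shuffle_teams=False) reuses the fitted team effects with the
    original schedule; y_sim2 permutes the team effects first. Residuals
    are resampled with replacement in both cases."""
    eta = rng.permutation(eta_hat) if shuffle_teams else eta_hat
    eps = rng.choice(resid_hat, size=len(resid_hat), replace=True)
    return hfa + Z @ eta + eps

# Toy check with zero residuals: the output is exactly hfa + Z @ eta_hat.
Z = np.array([[1.0, -1.0], [-1.0, 1.0]])
eta_hat = np.array([2.0, -2.0])
resid_hat = np.array([0.0, 0.0])
y = simulate_margins(Z, eta_hat, resid_hat)
print(y)   # [ 7. -1.]
```

Shuffling the team effects while holding $\boldsymbol{Z}$ fixed is what breaks any dependence between the schedule and team strength, which is why both models recover the simulated HFA under $\bds{y}_{sim2}$.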
The use of random effects to condition on team abilities ameliorates, but does not eliminate, the bias due to these nonrandom assignments, as illustrated in Figure~\ref{plot:marg1}. This behavior in the college sports coincides with abnormal values of the test statistic (\ref{eq:oneztest}), as reflected in the first row of Table~\ref{table:7pvalue}. \begin{table} \caption{Results for fitting 2000 simulations of $y_{sim1}$ and $y_{sim2}$ in each of the eighteen seasons for each sport, with $\lambda=3.0$. Coverage probabilities are for 95\% confidence intervals. The p-value refers to the two-sided $t$-test of the null hypothesis that the means of the 2000 simulations in each year are drawn from a population with a mean of 3. CFB represents college football, while CBB-M and CBB-W represent men's and women's college basketball, respectively.} \label{tab:ysim1} \begin{tabular}{lrrrrrr} \toprule & NFL & NBA & WNBA & CFB & CBB-M & CBB-W \\ \midrule \multicolumn{6}{l}{\textit{Fixed team effects for} $y_{sim1}$}\\ $\text{mean}(\widehat{\lambda})$ & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 \\ Coverage Prob. & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 \\ p-value & 0.40 & 0.77 & 0.63 & 0.66 & 5e-03 & 0.87 \\ \midrule \multicolumn{6}{l}{\textit{Random team effects for} $y_{sim1}$}\\ $\text{mean}(\widehat{\lambda})$ & 3.02 & 3.00 & 3.00 & 3.37 & 3.26 & 3.13 \\ Coverage Prob. & 0.95 & 0.95 & 0.95 & 0.89 & 0.62 & 0.87 \\ p-value& 0.02 & 0.37 & 0.12 & 4e-15 & 1e-19 & 4e-18 \\ \midrule \multicolumn{6}{l}{\textit{Fixed team effects for} $y_{sim2}$}\\ $\text{mean}(\widehat{\lambda})$ & 3.00 & 3.00 & 3.01 & 2.99 & 3.00 & 3.00 \\ Coverage Prob. & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 \\ p-value & 0.65 & 0.56 & 0.03 & 0.02 & 0.21 & 0.47 \\ \midrule \multicolumn{6}{l}{\textit{Random team effects for} $y_{sim2}$}\\ $\text{mean}(\widehat{\lambda})$ & 3.00 & 3.00 & 3.01 & 2.99 & 3.00 & 3.00 \\ Coverage Prob. 
& 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 \\ p-value& 0.66 & 0.57 & 0.03 & 0.02 & 0.29 & 0.41 \\ \bottomrule \multicolumn{6}{l}{}\\ \end{tabular} \end{table} \begin{table} \caption{Results for fitting 500 simulations of $y_{sim1}$ in each conference in each of the eighteen seasons, for each sport, with $\lambda=3.0$. Coverage probabilities are for 95\% confidence intervals. The p-value refers to the two-sided $t$-test of the null hypothesis that the population of simulated conference means (averages over the 500 simulations in each conference in each season) has mean equal to 3. CFB represents college football, while CBB-M and CBB-W represent men's and women's college basketball, respectively.} \label{tab:conf7} \begin{tabular}{lrrrrrr} \toprule & NFL & NBA & WNBA & CFB & CBB-M & CBB-W \\ \midrule \multicolumn{6}{l}{\textit{Fixed team effects for} $y_{sim1}$}\\ $\text{mean}(\widehat{\lambda})$ & 2.99 & 3.00 & 3.00 & 3.00 & 3.00 & 3.00 \\ Coverage Prob. & 0.95 & 0.95 & 0.95 & 0.94 & 0.95 & 0.95 \\ p-value & 0.38 & 0.76 & 0.58 & 0.51 & 0.99 & 0.35 \\ \midrule \multicolumn{6}{l}{\textit{Random team effects for} $y_{sim1}$}\\ $\text{mean}(\widehat{\lambda})$ & 3.02 & 3.00 & 3.01 & 2.99 & 3.01 & 3.03 \\ Coverage Prob. & 0.95 & 0.95 & 0.95 & 0.94 & 0.94 & 0.95 \\ p-value& 0.09 & 0.39 & 0.12 & 0.48 &6e-08 & 8e-20 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Restriction to Intraconference Games} Games played within conferences of these sports tend to be more balanced (in terms of opponents' strength and the number of home and away games) than games played between conferences. Instead of fitting the HFA using full season results, we could also fit each set of intraconference games separately within each season: interconference game results are discarded. As shown in the second row of Table~\ref{table:7pvalue}, the intraconference games do not show the same evidence of lack of fit with respect to (\ref{eq:oneztest}) as the full season results.
The simulation (\ref{eq:ysim1}) was run with 500 iterations for each of the intraconference schedules in each season of each sport (Table~\ref{tab:conf7}). While the restriction to intraconference games largely eliminated the bias in the mixed model estimate of $\lambda$, there is still some remaining evidence of upward bias in these estimates. For example, in women's college basketball the mixed model for the full season simulations produced a mean estimate of $3.13$ with a coverage probability of 0.87. This improved to an estimate of $3.03$ and a coverage probability of 0.95 when restricting to intraconference games; however, the p-value of 8e-20 from the two-sided $t$-test indicates that upward bias is still a systematic problem for the random team effect model. As a result, the Phase II models presented in Section~\ref{sec:phase2} will make use of the HFA estimates from the fixed team effect model (\ref{eq:femodel}). An additional benefit of restricting to intraconference games is the potential for a comparison of HFA trends within conferences across seasons. Of course, this limits the inference space to intraconference games, requiring either additional assumptions or additional modeling structure to generalize conclusions to interconference games. \subsection{Phase II: Trends in Home-Field Advantage} \label{ssec:hfe} After fitting separate HFA estimates for each conference in each season of each sport, the resulting point estimates and standard errors are used to fit the Phase II models of Sections~\ref{sec:collegeII} and \ref{sec:proII}. The results for the college sports appear in Table~\ref{tab:tabone} and those for the professional sports appear in Table~\ref{tab:pro}. The conference-specific trends in scoring HFA are plotted in Figure~\ref{plot:HFE_b}. \begin{figure} \caption{Each point represents the estimated scoring HFA for a conference within each season and sport. Graphs have common limits for the y-axis. 
In the college sports, the lines correspond to different conferences. In the professional sports, the triangles (circles) and solid (dashed) lines represent the NFC (AFC) conference in the NFL and the Western (Eastern) Conference in the NBA and WNBA.} \label{plot:HFE_b} \centering \includegraphics[scale=0.5]{S_NCAAF_C_ANALYSIS_score_rancoef.pdf} \includegraphics[scale=0.5]{S_NFL_C_ANALYSIS_score_rancoef.pdf} \includegraphics[scale=0.5]{S_MBBALL_C_ANALYSIS_score_rancoef.pdf} \includegraphics[scale=0.5]{S_NBA_C_ANALYSIS_score_rancoef.pdf} \includegraphics[scale=0.5]{S_WBBALL_C_ANALYSIS_score_rancoef.pdf} \includegraphics[scale=0.5]{S_WNBA_C_ANALYSIS_score_rancoef.pdf} \end{figure} \begin{table} \caption{Phase II results of fitting the random coefficient model (\ref{eq:rancoef}) to the Phase I estimated HFAs and associated standard errors for college sports. CFB represents college football, while CBB-M and CBB-W represent men's and women's college basketball, respectively.} \label{tab:tabone} \begin{tabular}{lrrr} \toprule &CFB&CBB-M&CBB-W\\ \midrule \multicolumn{3}{l}{\textit{Estimate}}\\ $\hat{\alpha}_0$ & 2.352 & 2.969 & 2.817 \\ $\hat{\alpha}_1$ & -0.102 & -0.074 & -0.049 \\ $\hat{\sigma}^2_{\lambda}$&1.020&1.100&1.048\\ $\hat{\sigma}^2_{1}$&0.946&0.503&0.350 \\ $\hat{\sigma}_{12}$&0.106&0.001&-0.003\\ $\hat{\sigma}^2_{2}$&0.013&0.001&1e-20\\ mean$(w^2_{2017\cdot})$&3.970&1.129&1.272 \\ \midrule \multicolumn{3}{l}{\textit{95\% Confidence Intervals}}\\ $\alpha_0$ lower&1.402&2.661&2.534 \\ $\alpha_0$ upper&3.302&3.276&3.099 \\ $\alpha_1$ lower&-0.205&-0.097&-0.068 \\ $\alpha_1$ upper&0.002&-0.051&-0.029\\ \midrule \multicolumn{3}{l}{\textit{p-values for null hypothesis}}\\ $\alpha_0=0$ & 3e-4 &5e-19 & 1e-18 \\ $\alpha_1=0$ & 0.053 & 3e-7 &9e-7\\ $\boldsymbol{G}=\bds{0}$ & 0.169&$<$1e-5&$<$1e-5\\ \bottomrule \end{tabular} \end{table} \begin{table} \caption{Phase II results of fitting the linear model (\ref{eq:fullfixed}) to the Phase I estimated HFAs and associated 
standard errors for professional sports. p-values at bottom are generated by likelihood ratio tests of model (\ref{eq:fullfixed}) against reduced models (\ref{eq:fixed1}) and (\ref{eq:fixed2}), respectively.} \label{tab:pro} \begin{tabular}{lrrr} \toprule & NFL & NBA & WNBA \\ \midrule \multicolumn{3}{l}{\textit{Estimate}}\\ $\hat{\beta}_{0A}$ & 2.753 & 3.333 & 3.705 \\ $\hat{\beta}_{0B}$ & -0.276 & -0.775 & -1.702 \\ $\hat{\beta}_{1A}$ & 0.035 & -0.021 & -0.013 \\ $\hat{\beta}_{1B}$ & -0.036 & -0.023 & -0.075 \\ $\hat{\sigma}^2_{\lambda}$ & 0.865 & 1.096 & 0.715 \\ mean$(w^2_{2017\cdot})$ & 1.583 & 0.399 & 2.875 \\ \midrule \multicolumn{3}{l}{\textit{95\% Confidence Intervals}}\\ $\beta_{0A}$ lower & 1.673 & 2.776 & 2.641 \\ $\beta_{0A}$ upper & 3.832 & 3.890 & 4.770 \\ $\beta_{0B}$ lower & -1.796 & -1.566 & -3.277 \\ $\beta_{0B}$ upper & 1.243 & 0.016 & -0.128 \\ $\beta_{1A}$ lower & -0.074 & -0.077 & -0.113 \\ $\beta_{1A}$ upper & 0.144 & 0.035 & 0.087 \\ $\beta_{1B}$ lower & -0.188 & -0.101 & -0.223 \\ $\beta_{1B}$ upper & 0.116 & 0.056 & 0.073 \\ \midrule \multicolumn{3}{l}{\textit{p-value for null hypothesis}}\\ $\beta_{0B}=\beta_{1B}=0$ & 0.874 & 0.011 & 0.013 \\ $\beta_{1A}=\beta_{1B}=0$ & 0.784 & 0.179 & 0.221 \\ \bottomrule \end{tabular} \end{table} Most estimates of $\sigma^2_{\lambda}$ in Tables~\ref{tab:tabone} and \ref{tab:pro} are near 1 due to the use of the reciprocals of the squared standard errors of $\hat{\lambda}$ as weights in the linear models. Estimates of error variance away from 1 indicate either over- or under-dispersion in the Phase II model relative to the estimated Phase I variation. \subsubsection{College Results} The estimates $\hat{\alpha}_1$ and p-values for the test $\alpha_1=0$ in Table~\ref{tab:tabone} indicate the strength of the trend of the population of conference-level HFAs over time.
While college football produces the steepest downward slope of the three college sports, the evidence is not strong enough to rule out that this slope is merely an artifact of conference and error variability over the observed period (p=0.053). Individually, the SEC and PAC10/12 conferences show an increase in estimated HFA over this time period, while the steepest decreases occur in the MAC, Sun Belt, and WAC conferences (see the supplementary data). By contrast, there is a strong indication of downward linear trends in the men's and women's college basketball HFA. With estimates of $-0.074$ (p~=~3e-07) and $-0.049$ (p~=~9e-07), respectively, these translate into a decrease of 1.332 points in the HFA of men's college basketball teams over the 18-year period considered, and a decrease of 0.882 points for the women's HFA. The magnitudes of these decreases correspond to 45\% and 31\% of the estimated 2017 HFA ($\hat{\alpha}_0$) in each sport, making them appear practically as well as statistically significant. We can conclude that there has been a significant and noticeable decrease in the HFA of intraconference games in men's and women's college basketball from 2000 to 2017. As such, it seems that this would be a worthwhile topic for further study by sports journalists. There is evidence of conference-level heterogeneity in the HFA trends. Variability in conference-level intercepts in all three sports is evident from Figure~\ref{plot:HFE_b}, which also reveals variability in conference-level slopes in college football. Even though the p-value does not provide evidence against the hypothesis that $\boldsymbol{G}=\bds{0}$ for the college football results, an examination of the HFA-by-time plots for each of the conferences reveals too much variability to be overlooked. This variability is also reflected in the magnitudes of the $\hat{\sigma}^2_1$ and $\hat{\sigma}^2_2$ estimates in Table~\ref{tab:tabone}.
While Figure~\ref{plot:HFE_b} does not include conference labels, the source file (available in the supplementary material) reveals similarity in the conference-level HFA estimates from men's and women's college basketball: in both cases, the Big 12 consistently had among the largest scoring HFAs while the Ivy League had among the smallest (Figure~\ref{plot:ivy}). Positive correlation between the conference-level HFA for men's and women's sports suggests the possible existence of important conference-level factors that affect HFA in a similar way for both men's and women's sports within the same conference. For sports journalists, investigating and understanding why there is a greater HFA observed in intraconference Big 12 basketball games than in intraconference Ivy League games could lead to a better understanding of factors that contribute to HFA in general. \begin{figure} \caption{Scoring HFA for intraconference games for the Ivy League and Big 12 conferences of men's and women's college basketball. } \label{plot:ivy} \centering \includegraphics[scale=0.5]{menbig12ivy.pdf} \includegraphics[scale=0.5]{womenbig12ivy.pdf} \end{figure} \subsubsection{Professional Results} We test for a linear trend in intraconference HFA in the professional sports with a likelihood ratio test of the full model (\ref{eq:fullfixed}) against the reduced model (\ref{eq:fixed2}) with $\beta_{1A}=\beta_{1B}=0$. The resulting p-values from this test appear in the last row of Table~\ref{tab:pro}: there is no indication of a sustained trend in any of the three professional sports. This is consistent with the plots of the HFAs in the second column of Figure~\ref{plot:HFE_b}. The NBA and WNBA plots suggest a difference between the trends of the Eastern and Western conference HFAs. 
In the WNBA, the fitted 2017 Eastern conference HFA is 1.7 points ($1.702/3.705=46\%$) lower than the Western conference HFA, while in the NBA the fitted 2017 Eastern conference HFA is 0.8 points ($0.775/3.333=23\%$) lower than the Western conference HFA. This is formally tested with a likelihood ratio test comparing the full model (\ref{eq:fullfixed}) against the reduced model (\ref{eq:fixed1}) where $\beta_{0B}=\beta_{1B}=0$: the resulting p-values appear in the penultimate row of Table~\ref{tab:pro}. With p-values near 0.01, the tests offer slight (keeping in mind the large number of p-values being reported in this paper) but inconclusive support for the difference in conference slopes and intercepts. Given the size of the estimated differences between conference HFAs and the relative ease with which these HFAs can be fit, this appears to be a worthwhile trend to monitor over the next few seasons and then to investigate for root causes if the trend continues. One possible explanation is that the teams are geographically farther apart in the Western conferences of the WNBA and the NBA. We explored this by adding the distance between teams (or quantiles thereof) as a covariate to model (\ref{eq:femodel}) and fitting full season NBA results, but did not find any significant association. \section{Application to Value-Added Models}\label{sec:future} Some value-added models (VAMs) for education evaluation \citep{mc03} also utilize a stochastic random effects model matrix, $\boldsymbol{Z}$, to track students' classroom assignments. Previous work has used a variety of methods to explore the possibility that nonrandom assignment of students to classrooms could bias model estimates \citep{ballou}. As with the HFA example of Section~\ref{sec:results}, it seems that the test of Equation~\ref{eq:oneztest} could be useful for flagging data and model combinations with potentially biased fixed effect estimates.
The VAM applications use more general mixed models for student test scores \citep{mariano10} with $\bds{Y}\sim N\left(\bds{X}\bds{\beta}+\bds{Z}\boldsymbol{\eta},\boldsymbol{R}\right)$ with $\boldsymbol{\eta}\sim N\left(\bds{0},\bds{G}\right)$, where classroom-level random effects are modeled with $\boldsymbol{\eta}$, student-level correlation is modeled in $\boldsymbol{R}$, and $\bds{\beta}$ includes student- or classroom-level fixed effects. Provided that the inverse exists, \begin{equation}\label{fulltest} \boldsymbol{\eta}^{\prime}\boldsymbol{Z}^{\prime}\boldsymbol{R}^{-1}\boldsymbol{X}\left(\bds{X^{\prime}}\boldsymbol{R}^{-1}\bds{Z}\bds{G}\bds{Z^{\prime}\boldsymbol{R}^{-1}\bds{X}}\right)^{-1}\bds{X^{\prime}}\boldsymbol{R}^{-1}\bds{Z}\boldsymbol{\eta}\sim\chi^2_{\text{rank}\left(\bds{X^{\prime}}\boldsymbol{R}^{-1}\bds{Z}\right)} \end{equation} We have applied this test (\ref{fulltest}) to the complete persistence value-added model \citep{ballou,mariano10} for three years of data from a cohort of students from a large urban school district \citep{karlcpm}. The test does not reveal anything unexpected when only the three yearly means are included in $\boldsymbol{X}$ as fixed effects (p-value 0.98), but when Race/Ethnicity is added as a fixed effect, the p-value drops to 0.0005. It is possible that this resonance between $\bds{{X^{\prime}}}\boldsymbol{R}^{-1}$ and $\boldsymbol{Z}\boldsymbol{\eta}$ signals that estimates of the fixed effects in these models will be biased in the same way as the mixed model HFA estimates for college basketball results (Figure~\ref{plot:marg1}). 
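Evaluating the quadratic form in (\ref{fulltest}) is mechanical once the fitted quantities are available. The sketch below uses generic placeholder inputs rather than the school-district data:

```python
import numpy as np
from scipy.stats import chi2

def general_alignment_test(X, Z, eta, R, G):
    """Quadratic form (fulltest): tests alignment between the fixed effect
    design X and Z @ eta, with eta ~ N(0, G) and errors ~ N(0, R)."""
    Rinv = np.linalg.inv(R)
    A = X.T @ Rinv @ Z                 # X' R^{-1} Z
    v = A @ eta
    M = A @ G @ A.T
    stat = float(v @ np.linalg.solve(M, v))
    df = np.linalg.matrix_rank(A)
    return stat, chi2.sf(stat, df=df)

# Generic placeholder inputs (not the school-district data).
X = np.ones((3, 1))
Z = np.eye(3)
R = np.eye(3)
G = np.eye(3)
eta = np.array([1.0, 2.0, 3.0])
stat, pval = general_alignment_test(X, Z, eta, R, G)
print(round(stat, 2))   # 12.0
```

With a single column in $\bds{X}$, $\bds{Z}=\bds{R}=\bds{G}=\bds{I}$, this reduces to the one-degree-of-freedom statistic of (\ref{eq:oneztest}).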
Indeed, 200 simulations of the three year data set using the fitted values $\bds{\widehat{G}},\boldsymbol{\eta},\bds{\widehat{\beta}}$ and error vectors sampled from $N(\bds{0},\bds{\widehat{R}})$ reveal that the estimates of the race/ethnicity fixed effects are biased upward for white (0.3\% bias, p=1e-33) and Asian (0.2\% bias, p=1e-05) students, and downward for Hispanic (-0.4\% bias, p=1e-44) students. No bias is detected for the yearly means. The magnitudes of the biases in this application are so small that they may not be practically relevant, but they do demonstrate the ability of the statistic (\ref{fulltest}) to flag data sets with biased fixed effect estimates due to the structure of $\boldsymbol{Z}$, without having to run lengthy simulations. \section{Concluding Remarks} \label{sec:discussion} Division I college men's and women's basketball intraconference games have shown strong downward linear trends in scoring HFA over the last two decades. While college football shows an even steeper decrease in intraconference HFA over the same period, there is also much more variability in the Phase I within-season estimates in college football than in college basketball. There is also greater conference-level variability in the Phase II estimates of slopes and intercepts in the HFA linear trends for football. Due to the magnitude of the variability, it is difficult to say whether the observed downward trend in college football intraconference HFA represents a systematic shift in population behavior or is merely an artifact of a sample from a noisy process with constant mean. This paper focused on the scoring HFA instead of the win propensity HFA in order to explore the biasing effects of the nonrandom schedules without also considering the potential bias due to the requisite integral approximations in the case of binary game results \citep{breslow95}. 
The source code provided in the supplementary material (\url{https://github.com/HFAbias18/supplement}) can be modified to fit the win propensity HFA in a generalized linear mixed model with a probit link and a fully exponential Laplace approximation \citep{football, broatch2,broatch3} by changing the \texttt{method} argument of the \texttt{mvglmmRank} function call. No changes would be required for the Phase II models, since they would simply continue to make use of the point estimates and standard errors that are produced by \texttt{mvglmmRank}. In fact, we did fit these models but chose not to include them, for the sake of brevity, as we did not see any noticeable differences in the Phase II results from those presented for the scoring HFA (outside of slightly larger p-values for the Phase II population trends, possibly due to the information loss in the discretization of the game results). This analysis has raised two interesting questions about men's and women's college basketball for further investigation by sports researchers. First, what factors contribute to the difference in intraconference HFAs, such as those shown between the Big 12 and the Ivy League (Figure~\ref{plot:ivy})? Second, what factors are driving the practically and statistically significant downward trends in the HFAs? \cite{si2} asked then-head basketball coach at Louisville, Rick Pitino, about a potential decline in HFA. Pitino responded that he believes television has improved refereeing due to increased visibility of bias and errors, noting that the advances in video technology over the past seven years have increased the use of video review for referee evaluation, and that this may make referees less susceptible to home-court crowd pressure. Certainly there will be other theories about contributing factors. 
\section*{Appendix: $p$-hacking Disclosure} The American Statistical Association (ASA) statement on p-values \citep{pvalue} and the associated commentaries call for a reform in the approach and presentation of statistical analyses. Among its recommendations is that journal authors document additional tests and analyses that were performed but not included in the text of the article. \begin{itemize} \item{Before committing to the analysis of linear trends in HFA, we examined a plot of the estimates to see that a line provided a plausible explanation for the trend. Had there been evidence of curvature or autocorrelation, the analysis would have likely taken a different approach. We also originally allowed for correlation in the Phase II professional results of model (\ref{eq:fullfixed}) by adding Toeplitz bands to the error covariance matrix, but chose not to include these effects in the model presented in the paper after finding no significant correlation. Note that the random coefficient model (\ref{eq:rancoef}) for the college results naturally incorporates correlation at the conference level.} \item{The flexibility to choose a starting and ending year for the study comes with a potential danger of data dredging by selecting these time points based on the results of the analysis. The decision to exclude any game results beyond the end of the 2017 season for each sport was made in advance and based on the manuscript timeline, while the decision to begin with the 2000 season was determined by data set availability and a desire to compare all of the sports over the same seasons. Early drafts did consider additional prior seasons for some sports: no major changes to the conclusions would have resulted from inclusion of these seasons, although there would have been a need to address curvature in the football and men's basketball HFA prior to 2001.} \item{Besides the statistical tests reported here, many others were performed during research for this paper. 
For example, an early draft only considered full season results from the Phase I mixed model, and also explored trends in the estimated home and away mean scores and conditional home and away error variances from a joint model for home and away scores \citep[Section 2.1]{broatch3}. These were abandoned for the sake of providing a tighter narrative; however, these fitted values remain in the full season Phase I results of the supplementary data and may be further explored by the reader. Even though the original intention of the research was to examine the strength of linear trends in HFA, the number of such other p-values considered during manuscript preparation must be taken into account when weighing the strength of the p-values reported in the paper.} \end{itemize} \bigskip \bibliographystyle{agsm}
\section{Introduction} In fluid team sports such as soccer and basketball, players interact with each other while maintaining a role in a certain formation~\cite{Rein2016}. Since team formations highly affect the players' movement patterns, analyzing them is one of the most intuitive ways to understand tactics from the domain participants' point of view. However, even with the recent acquisition of a vast amount of sports data, there remain several technical challenges for formation and role estimation as follows: \begin{itemize} \item Coaches initially assign a unique role to each player, but they can change their instruction throughout the match. \item Players temporarily switch roles with their teammates. \item Abnormal situations such as set-pieces sometimes occur, in which all the players ignore the team formation. \end{itemize} Several studies~\cite{Bialkowski2014-Large, Narizuka2017, Narizuka2019, Shaw2019} have tried to estimate the team formation during a match using spatiotemporal tracking data, but they did not overcome all the above challenges. Bialkowski et al.~\cite{Bialkowski2014-Large} successfully captured the temporary role switches between players, but they assumed that the team formation is consistent throughout half of a match. Narizuka et al.~\cite{Narizuka2017, Narizuka2019} found formations for each instant, resulting in too frequent formation changes that are not intended by coaches. Shaw et al.~\cite{Shaw2019} estimated a team formation per two-minute window, but their bottom-up approach has a trade-off between the reliability of detected change-points and the robustness against abnormal situations. That is, since the endpoints of windows are the only candidate change-points, the discrepancy between predicted and actual change-points increases as the window size increases. On the other hand, a smaller window size becomes more sensitive to situations such as set-pieces. 
To fill this gap, we propose a change-point detection framework named \emph{SoccerCPD} that distinguishes tactically intended formation and role changes from temporary switches in soccer matches. First, we assign roles to players per frame by adopting the role representation proposed by Bialkowski et al.~\cite{Bialkowski2014-Large}. Next, we perform two-step change-point detections: (1) formation change-point detection (FormCPD) in each half of a match and (2) role change-point detection (RoleCPD) in each segment resulting from FormCPD. In FormCPD, we formulate the players' spatial configuration---which we call the \emph{role topology} in this paper---at each frame as a role-adjacency matrix calculated from the Delaunay triangulation \citep{Narizuka2017}. Then we deploy a nonparametric change-point detection algorithm named discrete g-segmentation \citep{Song2020} to find change-points in this sequence of matrices. The detected change-points split a match into several \emph{formation periods} during which the team formation is assumed to be consistent. We represent the team's formation in each segment as the mean role-adjacency matrix and translate it into the language of domain participants (such as ``3-4-3'' or ``4-4-2'') by clustering all the obtained matrices. In RoleCPD, we formulate the players' transposition by a sequence of role permutations in each formation period. As in the previous step, applying discrete g-segmentation to the permutation sequence with pairwise Hamming distances gives the role change-points. They split each formation period into several \emph{role periods} where the player-role assignment remains constant. Lastly, we give domain-specific ``position'' labels (such as ``left wing-back'' or ``center forward'') to the roles to enhance the interpretability. 
With these results, we make the following four contributions: \begin{itemize} \item applying a state-of-the-art change-point detection method to the sports tracking data, which discovers the underlying tactics and their changes in an unsupervised manner. \item effectively representing the spatial configuration and role switches of players in a time interval as a mean role-adjacency matrix and a sequence of permutations, respectively. \item suggesting practical applications such as switching pattern mining and automatic set-piece detection, which can be easily interpreted and utilized by domain participants. \item providing a Python implementation of our framework with sample GPS data\footnote{https://github.com/pientist/soccercpd}. \end{itemize} \section{Related Work} \subsection{Change-Point Detection} Change-point detection (CPD) is the problem of finding points when the governing parameters of the process change. CPD methods are classified into parametric and nonparametric ones depending on whether they specifically estimate the parameters or directly find change-points \cite{Truong2020}. Parametric CPD models assume a certain form of distribution for given observations and find change-points of the parameters. They include maximum likelihood estimations~\cite{Ko2015, Pein2017, Lavielle2000}, piecewise linear models~\cite{Bai1994, Bai1997, Qu2007}, Mahalanobis metric-based models~\cite{Lajugie2014}, and so on. However, they have limitations in that they are sensitive to the violation of the assumptions of distributions \cite{Aminikhanghahi2017}. Nonparametric CPD methods directly detect breakpoints without estimating distribution parameters. In general, they are based on nonparametric statistics such as rank statistics~\cite{Lung-Yut-Fong2015}, graph-based scan statistics~\cite{Chen2015, Chu2019, Song2020}, and kernel estimation~\cite{Shawe-Taylor2004, Harchaoui2009, Gretton2012}. 
Free from the above assumptions of distributions, they have shown greater success for various datasets~\cite{Aminikhanghahi2017}. Particularly, graph-based methods~\cite{Chen2015, Chu2019, Song2020} are considered more flexible in real applications since they only use similarity information and thus are applicable to high-dimensional data or even non-Euclidean data. \subsection{Formation Estimation in Team Sports} Several studies have analyzed spatiotemporal tracking data to estimate team formations and find the roles of players in team sports. Bialkowski et al.~\cite{Bialkowski2014-Large} proposed the ``role representation'' that dynamically assigns a unique role to each player frame-by-frame. Narizuka et al.~\cite{Narizuka2017} expressed a temporary formation as an undirected graph from Delaunay triangulation~\cite{Delaunay1934} and clustered the graphs obtained from a match to characterize the formation. Later, Narizuka et al.~\cite{Narizuka2019} combined their method with the role representation to cluster formations over multiple matches. Also, Shaw et al.~\cite{Shaw2019} used relative positions between players to estimate team formation per two-minute window. Moreover, there have been other approaches relying on the annotated or estimated formation for deeper analyses. Bialkowski et al.~\cite{Bialkowski2014-Win} and Tamura et al.~\cite{Tamura2015} studied the correlation between team formation and other context information such as the existence of home advantage or the result of the previous match. Machado et al.~\cite{Machado2017} and Wu et al.~\cite{Wu2019} suggested visual interfaces showing the dynamic changes of the team formation. Especially, many studies have utilized the result of role representation above for various purposes such as highlight detection~\cite{Wei2013}, similar play retrieval~\cite{Sha2016}, team style identification~\cite{Bialkowski2014-Identifying}, and player style identification~\cite{Kim2021}. 
\section{Problem Definition} \label{problem_def} Soccer coaches change the team formation up to several times during a match, or they command role switches between players without formation change. Thus, our method is divided into two steps to track their decisions: the first finds formation change-points and the second finds role change-points. In this section, we formalize each step as a separate CPD problem. \begin{figure*} \includegraphics[width=\textwidth]{figures/timeline_formation.png} \caption{An example of formation and role change timeline as a result of SoccerCPD.} \Description{The upper figures indicate the team formation per formation period. The points of each color indicate the frame-by-frame coordinates of a certain role normalized by the team's mean location. The white circles are the mean locations of individual roles and are connected by edges whose widths are the corresponding elements in the mean role-adjacency matrices. The timeline in the lower part indicates the roles assigned to players per role period.} \label{fig:timeline} \end{figure*} \subsection{Formation Change-Point Detection} \label{formcpd_def} One can consider team formation as a probability distribution over several role topologies. Given the sequence of observations in the form of players' $(x,y)$ locations at 10 Hz, the goal of FormCPD is to find change-points of this distribution. Thus, we first need to find effective features representing role topology from raw observations and then perform CPD using the sequence of the features. Formally speaking, given a sequence of features $\{ A(t) \}_{t \in T}$ indicating the role topology, we aim to find a partition $T_1 < \cdots < T_m$ of $T$ such that $t \in T_i$ implies $A(t) \sim \mathcal{F}_i$ for some distinct distributions $\mathcal{F}_1, \ldots, \mathcal{F}_m$. Here we call each interval $T_i$ a \emph{formation period} during which the team maintains a certain formation represented as the distribution $\mathcal{F}_i$. 
\subsection{Role Change-Point Detection} \label{rolecpd_def} Bialkowski et al. \cite{Bialkowski2014-Large} proposed the role representation that gives the frame-by-frame role assignment to the outfield players, but they did not distinguish long-term tactical changes from these temporary role swaps. Given that a player generally possesses a constant role instructed by the coach for a certain period, the goal of RoleCPD is to find change-points where long-term tactical changes occur. That is, we further partition a formation period $T_i$ into several time intervals $T_{i,1} < \cdots < T_{i,n_i}$ named \emph{role periods} where each player possesses a constant role during a single role period. Then, we can deem the frame-by-frame role interchanges (the result of the role representation) inside role periods as temporary swaps. Formally speaking, for the set $P$ of players in a match time $T$, the role representation finds a set $\mathcal{X} = \{ X_1, \ldots , X_N \}$ of roles and \emph{player-to-temporary-role} (P-TR) maps $\{ \beta_t : P \rightarrow \mathcal{X} \}_{t \in T}$ such that \begin{equation} \beta_t(p) \neq \beta_t(q) \quad \forall p \neq q \in P,\ t \in T \end{equation} Here we aim to express the given P-TR maps $\{ \beta_t \}_{t \in T}$ as the composition of \emph{player-to-instructed-role} (P-IR) maps $\{ \alpha_t : P \rightarrow \mathcal{X} \}_{t \in T}$ and \emph{temporary role permutations} (RolePerm) $\{ \sigma_t : \mathcal{X} \rightarrow \mathcal{X} \}_{t \in T}$ with $\sigma_t \in \mathrm{S}(\mathcal{X})$ (symmetric group on $\mathcal{X}$), i.e., $\beta_t = \sigma_t \circ \alpha_t$. In other words, given P-TR maps $\{ \beta_t \}_{t \in T}$ in $T_i$, we find a partition $T_{i,1} < \cdots < T_{i,n_i}$ of $T_i$ and P-IR maps $\{ \alpha_t \}_{t \in T}$ satisfying the followings: \begin{itemize} \item (\emph{Period-wise consistency}) The instructed role of every player is constant during each role period $T_{i,j}$. 
i.e., \begin{equation} \alpha_t(p) = X_p^{(i,j)} \quad \forall t \in T_{i,j} \end{equation} for some $X_p^{(i,j)} \in \mathcal{X}$. \item (\emph{Uniqueness}) No two players are instructed to take the same role in a role period. i.e., \begin{equation} X_p^{(i,j)} \neq X_q^{(i,j)} \quad \forall p \neq q \in P \end{equation} \item (\emph{Existence of a role change}) A change of role period implies a change of instruction. i.e., for each $j \in \{ 1, \ldots , n_i-1 \}$, there exists $p \in P$ such that \begin{equation} X_p^{(i,j)} \neq X_p^{(i,j+1)} \end{equation} \end{itemize} Note that the last condition is valid only for the change \emph{inside} a single formation period. Namely, adjoining role periods from distinct formation periods can have the same player-role assignment. \section{Formation Change-Point Detection} \label{formcpd} First, we express the role topology at each frame as a role-adjacency matrix calculated from the Delaunay triangulation (Section~\ref{formcpd_seq}). Then, we detect formation change-points by applying discrete g-segmentation to the sequence of role-adjacency matrices (Section~\ref{formcpd_gseg}). Finally, we represent the formation in each segment as the mean role-adjacency matrix and cluster the matrices from the entire dataset to give domain-specific labels such as ``3-4-3'' or ``4-4-2'' to individual formation periods (Section~\ref{formcpd_cluster}). Please refer to the block diagram in Appendix~\ref{pipeline} for a better understanding of the whole pipeline. \subsection{Calculating the Sequence of Role-Adjacency Matrices} \label{formcpd_seq} First, we assign roles to the players per frame by the role representation proposed by Bialkowski et al.~\cite{Bialkowski2014-Large}. It groups the player locations recorded throughout a session into role clusters, under the constraint that no two players take the same role at the same time. 
This is achieved by the EM algorithm below: \begin{itemize} \item Initialization: Assign a unique role label from 1 to $N$ to each outfield player and estimate the density of each role. \item E-step: Apply the Hungarian algorithm~\cite{Kuhn1955} per frame to reassign roles to players using the cost matrix based on the log probability of a role generating a player location. \item M-step: Update the density parameters based on the role labels reassigned in the E-step. \end{itemize} Next, we perform Delaunay triangulation \citep{Delaunay1934} to get the adjacency matrices $\{A(t)\}_{t \in T} \subset \mathbb{R}^{N \times N}$ between the role labels. That is, the components $a_{kl}(t)$ are defined as \begin{equation*} a_{kl}(t) = \begin{cases} 1 & \text{if the \emph{roles} $X_k$ and $X_l$ are adjacent at $t$},\\ 0 & \text{otherwise} \end{cases} \end{equation*} for the set $\mathcal{X} = \{ X_1, \ldots , X_N \}$ of roles. Fig.~\ref{fig:temp_mat} shows an example of a Delaunay graph on role locations. The idea of applying Delaunay triangulation to player tracking data was first proposed by Narizuka et al. \cite{Narizuka2017}. However, they used uniform numbers as indices of the adjacency matrices and applied the role representation only for aligning uniform numbers from multiple matches to a single set of indices \citep{Narizuka2019}. While this method can discover position-exchange patterns well, the resulting player-adjacency matrices are highly affected by temporary switches or irregular situations such as set-pieces, which are irrelevant to the original team formation. Hence, we use role labels instead of player labels as the indices of adjacency matrices, which are a more robust representation of the team's spatial configuration. 
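As a sketch of this construction (the integer role indexing is an assumption; the full implementation is in the linked repository), the per-frame role-adjacency matrix can be built from SciPy's Delaunay triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay


def role_adjacency(points):
    """Binary role-adjacency matrix A(t) from the Delaunay triangulation of
    the role locations at one frame; row/column k corresponds to role X_k."""
    points = np.asarray(points)
    n = len(points)
    A = np.zeros((n, n), dtype=int)
    for simplex in Delaunay(points).simplices:  # each simplex is a triangle
        for k in simplex:
            for l in simplex:
                if k != l:
                    A[k, l] = 1                 # roles k and l share an edge
    return A
```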
\begin{figure}[htb] \centering \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/temp_mat.png} \caption{} \label{fig:temp_mat} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/mean_mat.png} \caption{} \label{fig:mean_mat} \end{subfigure} \caption{(a) A temporary Delaunay graph with player trajectories at a certain frame and (b) the weighted graph drawn from the mean role locations and the mean role-adjacency matrix for a formation period. The number inside each circle indicates a role label from 1 to 10, while the number below each circle indicates the player's uniform number.} \label{fig:adj_mats} \end{figure} \subsection{Applying Discrete g-Segmentation to the Sequence of Matrices} \label{formcpd_gseg} In order to find formation change-points, we separately apply CPD to the sequence of role-adjacency matrices obtained from each half of a match. Since our sequences are high-dimensional ($N \times N$ matrices for FormCPD with $N = 10$ in general) or even non-Euclidean (permutations for RoleCPD in Section~\ref{rolecpd}), we focus on graph-based CPD~\cite{Chen2015, Chu2019, Song2020}. They draw a similarity graph such as the minimum spanning tree on the observations and calculate a scan statistic $R(t)$ for each time $t$ by checking how many edges connect observations before and after $t$. Since a large difference between the observations before and after $t$ leads to large $R(t)$, one can regard $\tau = \arg\max_t R(t)$ as a possible change-point if $R(\tau)$ exceeds a certain threshold after normalization. As they only use similarity information between observations, they are applicable to our sequences with proper distance functions. Another issue is that our sequences have repeated observations (binary matrices for FormCPD here and permutations for RoleCPD in Section~\ref{rolecpd}), which makes the optimal similarity graph not uniquely defined. 
Among the series of graph-based CPD methods, only discrete g-segmentation~\cite{Song2020} can cover this case by using the average statistic $R_{(a)}(t)$ or the union statistic $R_{(u)}(t)$ introduced by Zhang and Chen~\cite{Zhang2017} as alternatives for $R(t)$. That is, they obtain a unique scan statistic either by ``averaging'' the scan statistics over all optimal graphs or by calculating the scan statistic on the ``union'' of all optimal graphs. Therefore, we employ discrete g-segmentation to find change-points in our discrete high-dimensional/non-Euclidean sequences. (Please refer to Appendix~\ref{baseline_comparison}, which empirically justifies the choice of CPD method.) To elaborate, we use the $L_{1,1}$ matrix norm \begin{equation} d_M(A(t), A(t')) = \| A(t) - A(t')\|_{1,1} = \sum_{k=1}^N \sum_{l=1}^N |A_{kl}(t) - A_{kl}(t')| \label{eq:manhattan} \end{equation} (which we call \emph{Manhattan distance} hereafter) as the distance measure between role-adjacency matrices. Also, we compute the \emph{switch rate} at $t$ as the fraction of players whose temporary roles differ from their most frequent roles in the session. Then, we exclude frames with high switch rates ($> 0.7$) to ignore abnormal situations such as set-pieces. Consequently, discrete g-segmentation returns an estimated change-point among the sequence of the remaining \emph{valid} adjacency matrices using the pairwise Manhattan distances. After finding a change-point during the given period, the algorithm decides its significance based on three conditions. To be specific, we recognize the estimated change-point $\tau$ as significant only if (1) the $p$-value of the scan statistic is less than 0.01, (2) both of the segments last for at least 5 minutes, and (3) the mean role-adjacency matrices calculated from the respective segments are far enough (i.e., have a large Manhattan distance) from each other. 
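The distance and the switch-rate filter can be sketched as follows (the $T \times P$ array layout and the encoding of roles as integers $0,\ldots,N-1$ are our assumptions):

```python
import numpy as np


def manhattan(A, A2):
    """L_{1,1} (Manhattan) distance between two role-adjacency matrices."""
    return np.abs(np.asarray(A) - np.asarray(A2)).sum()


def switch_rates(role_seq):
    """Per-frame fraction of players whose temporary role differs from their
    most frequent role in the session. role_seq is a T x P integer array
    whose column p holds player p's temporary role at each frame."""
    role_seq = np.asarray(role_seq)
    T, P = role_seq.shape
    # most frequent role per player (column-wise mode)
    modes = np.array([np.bincount(role_seq[:, p]).argmax() for p in range(P)])
    return (role_seq != modes).sum(axis=1) / P


def valid_frames(role_seq, max_rate=0.7):
    """Indices of frames kept for change-point detection."""
    return np.where(switch_rates(role_seq) <= max_rate)[0]
```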
The threshold duration for (2) comes from the fact that abnormal periods such as injury breaks or VAR checks generally last for 2--3 minutes, and therefore a 5-minute threshold is robust against them. Also, the threshold distance for (3) is empirically set to 7.0. Since there can be more than one formation change-point during a session, we construct a recursive framework to find multiple change-points. First, if there is a significant change-point in the given period, we apply the CPD algorithm again to the sequences before and after the change-point, respectively. Each branch-CPD terminates when there is no significant change-point in the segment of interest. As a result, the given session $T$ is partitioned into several formation periods $T_1 < \cdots < T_m$. \subsection{Formation Clustering Based on Mean Role-Adjacency Matrices} \label{formcpd_cluster} We now represent the formation in a formation period $T_i$ as a weighted graph $F(T_i) = (V(T_i), A(T_i))$ with the \emph{mean role locations} \begin{equation} V(T_i) = \frac{1}{|T_i^*|} \sum_{t \in T_i^*} V(t) \end{equation} as vertices and the \emph{mean role-adjacency matrix} \begin{equation} A(T_i) = \frac{1}{|T_i^*|} \sum_{t \in T_i^*} A(t) \end{equation} as the edge matrix, where $T_i^*$ denotes the set of valid time stamps in $T_i$ with small switch rates and $V(t) = (v_1(t), \ldots, v_N(t))^T \in \mathbb{R}^{N \times 2}$ is the matrix of 2D locations of the $N$ roles normalized to have zero mean at each frame $t$ (i.e., $\sum_{k=1}^N v_k(t) = (0,0)$). See Fig.~\ref{fig:mean_mat} for an example. Unlike Section~\ref{formcpd_gseg}, we first need to align the roles when calculating the distance between a pair of formation graphs since each role label from 1 to $N$ has nothing to do with the same label in another graph (especially one from another match). 
Thus, for a pair of formation graphs, we find the ``optimal'' permutation of the role labels of one graph relative to the other and calculate the Manhattan distance (Eq.~\ref{eq:manhattan}) between the adjacency matrices. To be specific, let $F = (V, A)$ and $F' = (V', A')$ be graphs from a pair of formation periods. Then, we rearrange the rows and columns of $A$ based on the Hungarian algorithm~\citep{Kuhn1955} using the pairwise Euclidean distances between $V$ and $V'$. That is, we find the optimal permutation matrix $Q$ minimizing the assignment cost \begin{equation*} c(V, V'; Q) = \sum_{k=1}^N \|(QV)_k - v'_k\|_2 \end{equation*} and use $QAQ^T$ and $A'$ when calculating the distance between the formations $F$ and $F'$, i.e., \begin{equation} d(F,F') = d_M(QAQ^T, A') \end{equation} Along with this distance metric, we cluster the formations obtained from the entire dataset as in Bialkowski et al.~\cite{Bialkowski2014-Large} to decide which formations are the same or different. For this work, we use the GPS data collected from the two seasons (2019 and 2020) of K League 1 and 2, the first two divisions of the South Korean professional soccer league. The data from 809 sessions (i.e., match halves) with at least one moment when ten outfield players were simultaneously measured are split into 864 formation periods and 2,152 role periods. The two-step agglomerative clustering task applied to the pairwise formation distances divides the 864 formations into six main formation groups and an ``others'' group consisting of several clusters with fewer than 15 observations. (Appendix~\ref{formcpd_cluster_details} describes the detailed process.) Lastly, we give each of the six groups a label commonly used by the domain participants, as in Fig.~\ref{fig:clusters}. 
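A sketch of the alignment-and-distance computation, together with a simple agglomerative clustering over a precomputed distance matrix (average linkage and the single distance threshold here are assumptions; the paper's actual two-step procedure is detailed in its appendix):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


def formation_distance(V, A, V2, A2):
    """Align the roles of (V, A) to those of (V2, A2) via the Hungarian
    algorithm on pairwise Euclidean distances between mean role locations,
    then return the Manhattan distance between the aligned adjacency
    matrices, i.e. d(F, F') = d_M(Q A Q^T, A')."""
    V, A, V2, A2 = (np.asarray(m, dtype=float) for m in (V, A, V2, A2))
    cost = np.linalg.norm(V[:, None, :] - V2[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # role rows[i] of F -> cols[i] of F'
    perm = np.empty_like(cols)
    perm[cols] = rows                         # perm[k]: role of F matched to role k of F'
    return np.abs(A[np.ix_(perm, perm)] - A2).sum()


def cluster_formations(D, threshold):
    """Agglomerative clustering of formation periods from the precomputed
    pairwise formation-distance matrix D; returns a cluster label per period."""
    Z = linkage(squareform(D, checks=False), method="average")
    return fcluster(Z, t=threshold, criterion="distance")
```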
Our clustering task differs from that in Bialkowski et al.~\cite{Bialkowski2014-Large} in its use of the distance between mean role-adjacency matrices instead of the earth mover's distance (EMD) between role locations. While formations in the previous work are from halves of matches with almost the same duration, ours are from formation periods with shorter and varying durations, which causes distortion of the formation graphs. Using mean role-adjacency matrices is more robust to this distortion since they are affected by the adjacency relationships rather than absolute locations. \section{Role Change-Point Detection} \label{rolecpd} The process of RoleCPD is analogous to that of FormCPD, except that it leverages role permutations (calculated in Section~\ref{rolecpd_seq}) instead of role-adjacency matrices. Again, discrete g-segmentation recursively finds role change-points in each formation period (Section~\ref{rolecpd_gseg}). Lastly, we give domain-specific position labels to individual players per role period (Section~\ref{role_labeling}). 
\begin{table} \caption{Assignment rules of position labels to roles.} \label{tab:positions} \begin{tabular}{ccccccc} \toprule Role & 3-4-3 & 3-5-2 & 4-4-2 & 4-2-3-1 & 4-3-3 & 4-1-3-2 \\ \midrule 1 & LWB & LWB & LB & LB & LB & LB \\ 2 & LCB & LCB & LCB & LCB & LCB & LCB \\ 3 & CB & CB & RCB & RCB & RCB & RCB \\ 4 & RCB & RCB & RB & RB & RB & RB \\ 5 & RWB & RWB & LCM & LDM & CDM & CDM \\ 6 & RCM & CDM & RCM & RDM & LCM & CAM\\ 7 & LCM & LCM & LM & CAM & RCM & LM \\ 8 & LM & RCM & LCF & LM & LM & LCF \\ 9 & CF & LCF & RCF & CF & CF & RCF \\ 10 & RM & RCF & RM & RM & RM & RM \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htb] \centering \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/formation_343.png} \caption{3-4-3 (22.3\%)} \label{fig:formation_343} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/formation_352.png} \caption{3-5-2 (12.2\%)} \label{fig:formation_352} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/formation_442.png} \caption{4-4-2 (16.3\%)} \label{fig:formation_442} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/formation_4231.png} \caption{4-2-3-1 (20.9\%)} \label{fig:formation_4231} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/formation_433.png} \caption{4-3-3 (17.4\%)} \label{fig:formation_433} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/formation_4132.png} \caption{4-1-3-2 (5.8\%)} \label{fig:formation_4132} \end{subfigure} \caption{Mean role locations of each formation group with the proportion (\%) in terms of playing time.} \label{fig:clusters} \end{figure} \subsection{Calculating the Sequence of Role Permutations} \label{rolecpd_seq} During the initialization step of role representation in Section~\ref{formcpd_seq}, each 
player $p$ is assigned to an \emph{initial role} $X_p$ according to a canonical order such as uniform number. Then, one can express the temporary role assignment to the players $p_1, \ldots, p_N$ at time $t$ as a permutation $\pi_t = (\pi_t(X_{p_1}) \ \cdots \ \pi_t(X_{p_N})) \in \mathrm{S}(\mathcal{X})$ of the initial role assignment $(X_{p_1} \ \cdots \ X_{p_N})$, i.e., \begin{equation} \beta_t(p) = \pi_t(X_p). \label{eq:beta} \end{equation} \subsection{Applying Discrete g-Segmentation to the Sequence of Permutations} \label{rolecpd_gseg} Similar to Section~\ref{formcpd_gseg}, we apply the discrete g-segmentation to the sequence of role permutations $\{ \pi_t \}_{t \in T_i}$ obtained from each formation period $T_i$. Here the algorithm finds an estimated change-point among the sequence of valid permutations (i.e., with switch rates $\le 0.7$) using the Hamming distance \begin{equation} d_H(\pi_t, \pi_{t'}) = |\{ X : \pi_t(X) \neq \pi_{t'}(X), \ X \in \mathcal{X} \}| \label{eq:hamming} \end{equation} as the distance measure between role permutations. Testing the significance of the change-point also emulates that in FormCPD except for the last condition. Since the goal of RoleCPD is to find the point when the dominant role assignment changes, we recognize the change-point $\tau$ as significant only if the most frequent permutations differ between before and after $\tau$. Finally, the recursive CPD applied to the sequence from each formation period $T_i$ results in a partition $T_{i,1} < \cdots < T_{i,n_i}$ of $T_i$. We set the instructed role per player in each role period $T_{i,j}$ be the most frequent permutation, and express every temporary roles resulting from the role representation as a permutation of the instructed roles. 
Formally speaking, we define the P-IR maps $\{ \alpha_t: P \rightarrow \mathcal{X} \}_{t \in T_{i,j}}$ as \begin{equation} \alpha_t(p) = \pi_{(i,j)}(X_p) \quad \text{(constant along $t \in T_{i,j}$)} \label{eq:alpha} \end{equation} where $\pi_{(i,j)} \in S(\mathcal{X})$ is the most frequent permutation among $\{ \pi_t \}_{t \in T_{i,j}}$. The RolePerm $\sigma_t(X_p) = \beta_t \circ \alpha_t^{-1}$ at $t \in T_{i,j}$ can then be obtained as \begin{equation} \sigma_t(X_p) = \beta_t(\alpha_t^{-1}(X_p)) = \pi_t(\pi_{(i,j)}^{-1}(X_p)) \label{eq:sigma} \end{equation} from Eq.~\ref{eq:beta} and Eq.~\ref{eq:alpha}. The resulting P-IR maps satisfy the three conditions presented in Section~\ref{rolecpd_def}. They satisfy the period-wise consistency and uniqueness by Eq.~\ref{eq:alpha} (i.e., $\alpha_t(p)$ is constant along $t \in T_{i,j}$ and distinct across $p \in P$ since it is a permutation of distinct initial roles). Also, the aforementioned significance test guarantees the existence of a role change between adjacent role periods. \subsection{Assigning Domain-Specific Position Labels to Instructed Roles of Players} \label{role_labeling} Since roles are independently initialized based on the players' uniform numbers per session in Section~\ref{rolecpd_seq}, each role label from 1 to 10 in a formation graph has nothing to do with the same label in another graph. Thus, we first align role labels between formation graphs so that the locations of the same label from different graphs are close to one another. Then, we give one of the 18 position labels such as ``left wing-back (LWB)'' or ``central defensive midfielder (CDM)'' to each of the aligned roles 1--10 per formation group based on domain knowledge. 
The role alignment is achieved by clustering the roles in each formation group based on the EM algorithm similar to the role representation~\cite{Bialkowski2014-Large} as follows: \begin{itemize} \item Initialization: Partition the role locations in each formation group into ten clusters by agglomerative clustering. \item E-step: Apply the Hungarian algorithm~\citep{Kuhn1955} between the ten role locations and the cluster centroids (mean locations) per formation graph to reassign cluster labels to roles. \item M-step: Update the centroid per cluster. \end{itemize} As a result, roles are relabeled as the annotations in Fig.~\ref{fig:clusters} so that the same label in the same formation graph corresponds to a certain ``position'' in the pitch. As the last step, we give ten position labels to the instructed roles 1--10 per formation group by the rules in Table~\ref{tab:positions}. This results in each player having one position label among the total of 18 in Fig.~\ref{fig:pos_scatter} per role period. For the ``others'' formation group, we assign position labels separately for each formation graph based on the Hungarian algorithm~\citep{Kuhn1955} between the ten role locations and the mean locations of the 18 position labels obtained from the above. Fig.~\ref{fig:pos_annot} shows the resultant position-labeled graphs as examples. 
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/pos_scatter.png} \caption{Role locations in the ordinary formation groups colored by position label.} \label{fig:pos_scatter} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}[t]{0.23\textwidth} \includegraphics[width=\textwidth]{figures/pos_annot_normal.png} \caption{Formation ``4-3-3''} \label{fig:pos_annot_normal} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/pos_annot_others.png} \caption{Formation ``others''} \label{fig:pos_annot_others} \end{subfigure} \caption{Examples of position-labeled formation graph.} \label{fig:pos_annot} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.36\textwidth} \includegraphics[width=\textwidth]{figures/form_conf.png} \caption{Formation accuracy in minutes} \label{fig:form_conf_mat} \end{subfigure} \begin{subfigure}[t]{0.6\textwidth} \centering \includegraphics[width=\textwidth]{figures/role_conf.png} \caption{Position accuracy in percentage} \label{fig:role_conf_mat} \end{subfigure} \caption{Confusion matrices for formation and position predictions. Note that we calculate the matrix in minutes and in percentage for Fig~\ref{fig:form_conf_mat} and Fig~\ref{fig:role_conf_mat}, respectively, since the former has high class imbalance while the latter does not.} \label{fig:conf_mat} \end{figure*} \section{Experiments} In this section, we first describe the evaluation process of SoccerCPD by comparing our results with annotations by domain experts (Section \ref{model_eval}). Next, we introduce switching pattern discovery (Section \ref{switch_discovery}) and set-piece detection (Section \ref{setpiece}), which are useful and interpretable applications of our model. \subsection{Model Evaluation} \label{model_eval} We evaluated the performance of SoccerCPD by measuring the prediction accuracies of (1) team formation and (2) player position. 
Namely, we calculated the ratios of correctly detected one-minute segments to the total number of segments (total minutes played). More specifically, we used the ground truth labels annotated by domain experts who worked as coaches or video analysts in elite soccer teams. We divided the whole playing time into one-minute segments and compared the predicted and annotated formation/position labels per segment. (For instance, there are 90 formation annotations and 900 position annotations for a single team's 90-minute match without a red card or missing measurement.) As the minute-wise annotation of formation and role is very costly, we sampled 28 matches, one for each of 28 season-team pairs (i.e., 15 teams in season 2019 and 13 in 2020), and let the experts perform the annotation only for those matches. Fig.~\ref{fig:conf_mat} demonstrates the detailed results in confusion matrices. For the formation prediction, our result accords with the annotation for 1,941 minutes (72.4\%) among 2,680 minutes of the total playing time. By analyzing the failure cases, we have found that our method can make a misprediction in the following patterns: \begin{itemize} \item It sometimes confuses the formations 4-4-2 and 4-2-3-1 because of their similarity. (See the (4,3)-element of the confusion matrix in Fig.~\ref{fig:form_conf_mat}.) Particularly, CAM and CF sometimes form a horizontal line when the team is defending, making the prediction more difficult. \item It cannot distinguish 3-4-1-2 (included in the formation ``others'' in our experiment) from 3-4-3, which leads to the (6,1)-element in Fig.~\ref{fig:form_conf_mat}. 3-4-1-2 deploys LCF-CAM-RCF in their forward line, but their arrangement is too similar to the LM-CF-RM line to be separated from 3-4-3. 
\end{itemize} For the position prediction, since labels are compared player-by-player in each segment, the accuracy is calculated for the 26,652 minutes of player-segment pairs in total (i.e., about ten times the total playing time). Overall, the predicted labels accord with the annotated ones for 23,051 minutes (86.5\%). Most positions achieve recalls higher than 80\%, except for LDM, RDM, CAM, and LCF whose low recalls are caused by the aforementioned mispredictions of formations. This result is far greater than the formation prediction, implying that our method finds most of the positions correctly even when it fails to predict a formation. \subsection{Switching Pattern Discovery} \label{switch_discovery} The RolePerms obtained from Eq.~\ref{eq:sigma} indicate the temporary role switches between players. If every player maintains the originally instructed role at time $t$, then the RolePerm $\sigma_t$ becomes the identity permutation. In other words, a non-identity RolePerm implies that there is a switching play at that time. Thus, we can identify teams' playing patterns by analyzing the non-identity RolePerms. 
\begin{figure*}[!htb] \centering \begin{tabular}[t]{cc} \begin{subfigure}{0.45\textwidth} \centering \smallskip \includegraphics[width=\linewidth]{figures/switch_hist.png} \caption{Histograms of top-5 frequent cycles in second.} \label{fig:switch_hist} \end{subfigure} & \begin{tabular}{cc} \smallskip \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/switch_false9_4231.pdf} \caption{False-nine play in FP-1} \label{fig:switch_false9_4231} \end{subfigure} & \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/switch_false9_433.pdf} \caption{False-nine play in FP-2} \label{fig:switch_false9_433} \end{subfigure} \\ \smallskip \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/switch_overlap.pdf} \caption{Fullback overlap in FP-3} \label{fig:switch_overlap} \end{subfigure} & \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{figures/switch_cutin.pdf} \caption{Cutting inside in FP-4} \label{fig:switch_cutin} \end{subfigure} \end{tabular} \end{tabular} \caption{(a) Frequent cycles per formation period, and (b)--(e) some tactically notable cycles from the match in Fig.~\ref{fig:timeline}.} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figures/timeline_setpieces.pdf} \caption{A time-series of switch rates and set-piece occurrence in the first half of the match in Fig.~\ref{fig:timeline}. Squares with the labels ``CK'' and ``FK'' mean a corner kick and a close free kick (that the ball was kicked into the box), respectively. The color of a square indicates the attacking team during the situation, where the white corresponds to the measured team of interest.} \label{fig:setpieces} \end{figure*} A point to note is that independent switches can occur in the different parts of the pitch at the same time. 
Thus, we decompose each RolePerm into one or multiple cyclic permutations (which we call \emph{cyclic RolePerms} or simply \emph{cycles}). For example, the RolePerm \begin{equation*} \sigma = \left( \begin{array}{ccccccc} \text{LB} & \cdots & \text{LM} & \cdots & \text{RM} & \cdots & \text{RCF} \\ \text{LM} & \cdots & \text{LB} & \cdots & \text{RCF} & \cdots & \text{RM} \end{array} \right) \end{equation*} with switch rate 0.4 is the concurrence of the two 2-cycles (LB LM) and (RM RCF). Here, LB and RCF raise a non-identity RolePerm $\sigma$ together, but they did not switch roles with each other. Considering this kind of cases, we regard concurrent cycles as distinct role switches to solely focus on the ``interchanging'' patterns of roles. As a concrete case study, we look into the cycles during the match introduced in Fig.~\ref{fig:timeline} and derive some domain insights. Fig.~\ref{fig:switch_hist} shows the top-5 frequent cycles with position labels per formation period. For example, the leftmost bar in the first histogram means that RDM and LDM switched with each other for 120 seconds in formation period 1. Notable observations are as follows, where we abbreviate formation/role periods as FPs/RPs. \begin{itemize} \item The players dynamically switched their roles in FP-1 and FP-4 (top-5 cycles lasted for about 400 and 750 seconds, respectively), while there were few interchanges in FP-2 and FP-3 (only with 280 and 100 seconds, respectively). \item The center forward performed the ``false-nine play'' by dropping deep into the midfield positions, generating (CF CAM) in FP-1, (CF RCM CDM LCM) in FP-2, (CF RCM LCM) and (CF LCM) in FP-3. \item There were fewer switches in FP-3 by and large, but the overlapping of the right back stood out with the cycles (RB RM RCM CDM) and (RB RM RCM CDM RCB). \item In FP-4, the two forwards in 4-4-2 frequently switched with each other. 
In particular, the cycle (RCF LM) indicates that the left midfielder kept ``cutting inside'' the box. \end{itemize} \subsection{Set-Piece Detection} \label{setpiece} The ``set-piece'' in soccer refers to situations such as corner kicks and free kicks that resume the match from a stoppage by kicking the ball into the scoring zone. Since set-pieces are a great opportunity to score a goal \citep{Power2018}, teams carefully review set-piece situations from previous matches and set aside special tactics for set plays. As a byproduct of SoccerCPD, we can also help soccer teams easily extract and manage the set play data by automatically detecting set-pieces based on the statistic \emph{switch rate}. The term switch rate has been defined in Section~\ref{formcpd_gseg}, but we slightly alter its definition to the Hamming distance (Eq.~\ref{eq:hamming}) between the RolePerm and the identity permutation divided by the number of roles. (Note that we cannot use this definition in Section~\ref{formcpd_gseg}, as it is before obtaining the RolePerms.) Since the players are totally mixed up during set-pieces, switch rates in this situation soar close to 1.0. Thus, we can construct a simple, totally unsupervised, but fairly accurate set-piece detection model that picks situations with high switch rates (such as $\ge 0.9$). Figure~\ref{fig:setpieces} shows the strong co-occurrence pattern of set-pieces and high switch rates. \subsection{Discussion on Limitations} While SoccerCPD offers benefits such as scalability and usability by only relying on player locations, it leaves some limitations as below. For instance, a team can take separate formations for attacking and defending situations, but our framework does not take these contexts into account. Also, ours tends to confuse the formations with similar spatial configurations, since it does not consider on-the-ball actions. The mispredictions of 3-4-1-2 to 3-4-3 and 4-2-3-1 to 4-4-2 in Section~\ref{model_eval} are cases in point. 
Another limitation is that our framework only predicts popular formations well. In other words, since our framework gives formation labels only for the clusters with enough sizes, it just classifies irregular formations into ``others'' group. This brings about time delay to reflect the tactics of some adventurous managers, such as asymmetrical formations by Pep Guardiola or Jos\'e Mourinho. \section{Conclusion} This study proposes a change-point detection (CPD) framework named SoccerCPD that distinguishes tactically intended formation and role changes from temporary changes in soccer matches. First, we express the temporary role topology and transposition as sequences of binary matrices and permutations, respectively. Considering that these representations are high-dimensional or non-Euclidean data with frequent repetitions, we then apply a graph-based CPD algorithm named discrete g-segmentation to find formation and role assignment change-points. Since the concept of formation and role is the most fundamental and intuitive way to express soccer tactics, tracking and summarizing the team's formation/role changes has its own worth for domain participants. In addition, one can also use additional information such as temporary permutations to discover the switching patterns or to detect set-pieces as introduced in our study. Hence, we expect our method to be widespread in the sports industry. \begin{acks} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2020R1F1A1073451 and No. 2020R1A4A3079947). Also, the authors thank Taein Lee, Minwoo Seo, and Donghwa Shin at Fitogether Inc. for supporting the model evaluation task by annotating formation and position labels as domain experts. \end{acks} \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 1.584961, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdADxK1ThhCdy8KKH
\section{Introduction and Description of Results} The big hockey stick and puck theorem, stated in [Hilton1987] is: \begin{thm}[Hilton1987] (The Big Hockey Stick and Puck Theorem) \[\binom{n}{0}+\binom{n+2}{1}+\binom{n+4}{2}+\binom{n+6}{3}=\binom{n+7}{3}-\binom{n+6}{1}\] \end{thm} We have found the general form of above theorem in Pascal triangle as below. \begin{thm}\label{pascalhockey}(The Hockey Stick Theorem in Pascal Triangle) \begin{equation}\label{pascalformula} \sum _{i=0}^{k}\binom{n+2i}{i} =\sum _{j=0}^{\left\lfloor \frac{k}{2} \right\rfloor }\left(-1\right)^{j}\binom{n+2k-j+1}{k-2j} \end{equation} \end{thm} An example of this theorem is illustrated in Figure \ref{fig:chobechugan}. \setlength{\unitlength}{0.75cm} \begin{figure} \centering \begin{picture}(16,8) \linethickness{0.075mm} \put(8,8){1} \put(6,6){1} \put(8,6){2} \put(10,6){1} \put(4,4){1} \put(6,4){4} \put(8,4){6} \put(10,4){4} \put(12,4){1} \put(2,2){1} \put(4,2){6} \put(6,2){15} \put(8,2){20} \put(10,2){15} \put(12,2){6} \put(14,2){1} \put(0,0){1} \put(2,0){8} \put(4,0){28} \put(6,0){56} \thicklines \put(10.3,0.3){\circle{1}} \put(8,0){70} \put(10,0){56} \put(12,0){28} \put(14,0){8} \put(16,0){1} \put(1,1){1} \put(3,1){7} \put(5,1){21} \put(7,1){35} \put(9,1){35} \put(11,1){21} \put(13,1){7} \put(13.1,1.3){\circle{1}} \put(15,1){1} \put(3,3){1} \put(5,3){5} \put(7,3){10} \put(9,3){10} \put(11,3){5} \put(13,3){1} \put(5,5){1} \put(7,5){3} \put(9,5){3} \put(11,5){1} \put(7,7){1} \put(9,7){1} \thicklines \put(9.3,4.3){\oval(1,7)} \end{picture} \caption{Example of Hocky-Stick:1+3+10+35=56-7}\label{fig:chobechugan} \end{figure} Now we wish to state the hockey stick theorem in trinomial triangle. First using [3,4], we explain what is the trinomial triangle. The trinomial triangle is a number triangle of trinomial coefficients. 
It can be obtained by starting with a row containing a single "1" and the next row containing three 1s and then letting subsequent row elements be computed by summing the elements above to the left, directly above, and above to the right. We show the trinomial triangle in Figure \ref{fig:trinomial}. The trinomial coefficients are placed as Figure \ref{fig:coeff}. Following the notation of Andrews (1990) in [Andrews1990], the trinomial coefficient $\binom{n}{k}_2$ with $n\geq 0$ and $-n\leq k\leq n$, is given by the coefficient of $x^{n+k}$ in the expansion of $(1+x+x^2)^n$. Therefore, \[\binom{n}{k}_2=\binom{n}{-k}_2\] Equivalently, the trinomial coefficients are defined by \[(1+x+x^{-1})^n=\sum_{k=-n}^{k=n}\binom{n}{k}_2x^k\] We have proven the following theorem in this triangle: \begin{thm}\label{trinomtheorem} (The Hockey Stick Theorem in The Trinomial Triangle) \begin{equation}\label{trinomformula} \sum _{i=0}^{k}\binom{n+i}{n}_2 =\sum _{s=0}^{\lfloor\frac{k}{2} \rfloor}\left(-1\right)^{s}\binom{n+k+1}{n+2s+1}_2. \end{equation} \end{thm} For example see Figures \ref{fig:coeff} and \ref{fig:trinomial}. 
\setlength{\unitlength}{0.75cm} \begin{figure} \centering \begin{picture}(12,6) \put(0,0){1} \put(1,0){6} \put(2,0){21} \put(3,0){50} \put(4,0){90} \thicklines \put(8.3,0.3){\circle{1}} \put(5,0){126} \put(6,0){141} \put(7,0){126} \put(8,0){90} \put(9,0){50} \put(10,0){21} \put(10.2,0.3){\circle{1}} \put(11,0){6} \put(12,0){1} \put(12.2,0.3){\circle{1}} \put(1,1){1} \put(2,1){5} \put(3,1){15} \put(4,1){30} \put(5,1){45} \put(6,1){51} \put(7,1){45} \put(8,1){30} \put(9,1){15} \put(10,1){5} \put(11,1){1} \put(2,2){1} \put(3,2){4} \put(4,2){10} \put(5,2){16} \put(6,2){19} \put(7,2){16} \put(8,2){10} \put(9,2){4} \put(10,2){1} \put(3,3){1} \put(4,3){3} \put(5,3){6} \put(6,3){7} \put(7,3){6} \thicklines \put(7.3,3.3){\oval(1,5)} \put(8,3){3} \put(9,3){1} \put(4,4){1} \put(5,4){2} \put(6,4){3} \put(7,4){2} \put(8,4){1} \put(5,5){1} \put(6,5){1} \put(7,5){1} \put(6,6){1} \end{picture} \caption{Hockey Stick in Trinomial Triangle: $1+2+6+16+45=90-21+1$}\label{fig:trinomial} \end{figure} \setlength{\unitlength}{1cm} \begin{figure} \centering \begin{picture}(12,6) \put(0,0){$\binom{6}{-6}$} \put(1,0){$\binom{6}{-5}$} \put(2,0){$\binom{6}{-4}$} \put(3,0){$\binom{6}{-3}$} \put(4,0){$\binom{6}{-2}$} \thicklines \put(8.3,0){\circle{1}} \put(5,0){$\binom{6}{-1}$} \put(6,0){$\binom{6}{0}$} \put(7,0){$\binom{6}{1}$} \put(8,0){$\binom{6}{2}$} \put(9,0){$\binom{6}{3}$} \put(10,0){$\binom{6}{4}$} \put(10.2,0){\circle{1}} \put(11,0){$\binom{6}{5}$} \put(12,0){$\binom{6}{6}$} \put(12.2,0){\circle{1}} \put(1,1){$\binom{5}{-5}$} \put(2,1){$\binom{5}{-4}$} \put(3,1){$\binom{5}{-3}$} \put(4,1){$\binom{5}{-2}$} \put(5,1){$\binom{5}{-1}$} \put(6,1){$\binom{5}{0}$} \put(7,1){$\binom{5}{1}$} \put(8,1){$\binom{5}{2}$} \put(9,1){$\binom{5}{3}$} \put(10,1){$\binom{5}{4}$} \put(11,1){$\binom{5}{5}$} \put(2,2){$\binom{4}{-4}$} \put(3,2){$\binom{4}{-3}$} \put(4,2){$\binom{4}{-2}$} \put(5,2){$\binom{4}{-1}$} \put(6,2){$\binom{4}{0}$} \put(7,2){$\binom{4}{1}$} \put(8,2){$\binom{4}{2}$} 
\put(9,2){$\binom{4}{3}$} \put(10,2){$\binom{4}{4}$} \put(3,3){$\binom{3}{-3}$} \put(4,3){$\binom{3}{-2}$} \put(5,3){$\binom{3}{-1}$} \put(6,3){$\binom{3}{0}$} \put(7,3){$\binom{3}{1}$} \thicklines \put(7.3,3){\oval(1,5)} \put(8,3){$\binom{3}{2}$} \put(9,3){$\binom{3}{3}$} \put(4,4){$\binom{2}{-2}$} \put(5,4){$\binom{2}{-1}$} \put(6,4){$\binom{2}{0}$} \put(7,4){$\binom{2}{1}$} \put(8,4){$\binom{2}{2}$} \put(5,5){$\binom{1}{-1}$} \put(6,5){$\binom{1}{0}$} \put(7,5){$\binom{1}{1}$} \put(6,6){$\binom{0}{0}$} \end{picture} \caption{Trinomial Coefficients}\label{fig:coeff} \end{figure} \section{Proof of Results} In the proof of both theorems, we use induction. \begin{proof} (Theorem \ref{pascalhockey}) We prove this theorem using induction on $k$. By the fact $\binom{n}{n}= \binom{n+1}{n+1}=1$, statement is obvious for the base of induction i.e. $k=1$. Now we assume that the statement for $k$ is true, then the relation \eqref{pascalformula} would be correct. We wish to verify that it is correct for the value $k+1$ too. 
We have \begin{align*} \sum_{i=0}^{k+1}\binom{n+2i}{i} &=\binom{n+2k+2}{k+1} +\sum_{i=0}^{k}\binom{n+2i}{i}=\binom{n+2k+2}{k+1} +\sum _{j=0}^{\left\lfloor \frac{k}{2} \right\rfloor }\left(-1\right)^{j}\binom{n+2k-j+1}{k-2j}\\&=\left[\binom{n+2k+2}{k+1} +\binom{n+2k+1}{k} +\binom{n+2k+1}{k-1}\right]-\\&\quad -\left[\binom{n+2k+1}{k-1} +\binom{n+2k}{k-2} +\binom{n+2k}{k-3}\right]+\\&\quad \quad\vdots \\ &\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor -1}\left[\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +3}{k-2\left\lfloor \frac{k}{2} \right\rfloor +3}+\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +2}{k-2\left\lfloor \frac{k}{2} \right\rfloor +2} +\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +2}{k-2\left\lfloor \frac{k}{2} \right\rfloor +1} \right]+\\&\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor }\left[\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +2}{k-2\left\lfloor \frac{k}{2} \right\rfloor +1} +\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +1}{k-2\left\lfloor \frac{k}{2} \right\rfloor }+\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +1}{k-2\left\lfloor \frac{k}{2} \right\rfloor -1 }\right]+\\&\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor +1}\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +1}{k-2\left\lfloor \frac{k}{2} \right\rfloor -1 }=\\ \intertext{using properties of Pascal triangle, we get} &=\left[\binom{n+2k+2}{k+1} +\binom{n+2k+2}{k}\right]-\\&\quad -\left[\binom{n+2k+1}{k-1} +\binom{n+2k+1}{k-2}\right]+\\&\quad \quad\vdots \\ &\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor -1}\left[\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +3}{k-2\left\lfloor \frac{k}{2} \right\rfloor +3}+\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +3}{k-2\left\lfloor \frac{k}{2} \right\rfloor +2} \right]+\\&\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor }\left[\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +2}{k-2\left\lfloor \frac{k}{2} \right\rfloor +1} +\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor 
+2}{k-2\left\lfloor \frac{k}{2} \right\rfloor }\right]+\\ &\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor +1}\binom{n+2(k+1)-\left\lfloor \frac{k+1}{2} \right\rfloor +1}{k+1-2\left\lfloor \frac{k+1}{2} \right\rfloor}\mathbf{1}_{\left\lbrace k+1:even\right\rbrace}=\\&=\binom{n+2k+3}{k+1}-\binom{n+2k+2}{k-1}+\cdots +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor }\binom{n+2k-\left\lfloor \frac{k}{2} \right\rfloor +3}{k-2\left\lfloor \frac{k}{2} \right\rfloor +1}+\\&\quad +(-1)^{\left\lfloor \frac{k+1}{2} \right\rfloor }\binom{n+2(k+1)-\left\lfloor \frac{k+1}{2} \right\rfloor +1}{k+1-2\left\lfloor \frac{k+1}{2} \right\rfloor}\mathbf{1}_{\left\lbrace k+1:even\right\rbrace}=\\&=\sum _{j=0}^{\left\lfloor \frac{k+1}{2} \right\rfloor }\left(-1\right)^{j}\binom{n+2(k+1)-j+1}{k+1-2j} \end{align*} The statement for $k+1$ is also true, and the proof is completed. \end{proof} \begin{proof}(Theorem \ref{trinomtheorem}) To prove this theorem similarly we use induction on $k$. Considering the equation $\binom{n}{n} _2=\binom{n+1}{n+1}_2 $ , the result being immediate if $k=0$. Assuming that the statement for $k$ is true, then the relation \eqref{trinomformula} would be correct. Now we intend to illustrate it is correct for the value $k+1$ too. 
We have \begin{align*} \sum_{i=0}^{k+1}\binom{n+i}{n}_2 &=\binom{n+k+1}{n}_2 +\sum_{i=0}^{k}\binom{n+i}{n}_2=\binom{n+k+1}{n}_2 +\sum_{s=0}^{\left\lfloor \frac{k}{2} \right\rfloor }(-1)^{s} \binom{n+k+1}{n+2s+1} _2=\\&=\left[\binom{n+k+1}{n}_2 +\binom{n+k+1}{n+1}_2 +\binom{n+k+1}{n+2} _2\right]-\\&\quad -\left[\binom{n+k+1}{n+2}_2 +\binom{n+k+1}{n+3}_2 +\binom{n+k+1}{n+4} _2\right]+\\&\quad \quad\vdots \\ &\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor -1}\left[\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor -2} _2+\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor -1} _2+\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor} _2\right]+\\&\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor }\left[\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor} _2+\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor +1} _2+\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor +2} _2\right]+\\&\quad +(-1)^{\left\lfloor \frac{k}{2} \right\rfloor +1}\binom{n+k+1}{n+2\left\lfloor \frac{k}{2} \right\rfloor +2}_2=\\ \intertext{using properties of the trinomial coefficients, we get} &=\sum_{s=0}^{\left\lfloor \frac{k}{2} \right\rfloor }(-1)^{s} \binom{n+k+2}{n+2s+1} _2+(-1)^{\left\lfloor \frac{k}{2} \right\rfloor +1}\binom{n+k+2}{n+2\left\lfloor \frac{k}{2} \right\rfloor +3}_2\\&=\sum_{s=0}^{\left\lfloor \frac{k+1}{2} \right\rfloor }(-1)^{s} \binom{n+k+2}{n+2s+1} _2 \end{align*} The statement for $k+1$ is also true, and the proof is completed. \end{proof} The hockey stick theorem in the trinomial triangles has been proved. This theorem can be translated in Pascal pyramid as follows : \[\sum _{i=0}^{k}\; \sum _{2r+s=2n+i} \binom{n+i}{r, s, r-n} =\sum _{j=0}^{\left\lfloor \frac{k}{2} \right\rfloor }\left(\left(-1\right)^{j}\sum _{2r+s=2n+k+2j+2} \binom {n+k+1} {r, s, r-n-2j-1} \right) \] Other similar theorems might be obtained for Pascal's four dimensional and even $n$-dimensional pyramid.
{ "attr-fineweb-edu": 1.896484, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdbM4dbgg4JMhnhy_
\section{Introduction}\label{sec:intro}} Home-court advantage is often discussed in sports circles as a contributing factor to the outcome of games. It is well-known that the home team typically benefits from some competitive edge from playing at their home-court, resulting in a better chance of winning. Thus, the NBA playing the 2020 playoffs in a bubble due to the COVID-19 pandemic brought a great deal of concern for fans, teams, journalists, and others. For example, \citet{Aschburner} discusses about anticipated effects, sharing concerns from former players, coaches and other experts about the potential effects of removing home-court advantage. Aschburner notes that the NBA did make attempts to recreate the effects by putting the ``home'' team logo on the court and allowing the ``home'' team to play crowd noise and music, but most people doubted these small attempts would recreate a true playoff atmosphere. During the 2020 NBA playoffs, home teams only won about 48.2\% of the games. This is lower than normal, which Aschburner claims usually floats around 60\%. This shift in the home team winning percentage surely indicates the opportunity for thorough investigation. So, what happened? Did the home teams fail to perform up to normal standards without the help of home-court advantage? Were away teams able to rise to the occasion and perform better not having to deal with the headache of going on the road? We seek to answer the questions using scoring totals and shooting percentages as indicators of team performance. This will deepen understanding of how home-court advantage affects home and away teams in the NBA. Our study is quite different from earlier NBA home-court advantage studies. By using the neutral site games of 2020 we will get to compare home and away performance to a control. Typically, studies just compare home vs away performance. 
These studies do not separate the effects of home-court advantage into the specific effect on the home team and the specific effect on the away team. They show that home teams outperform away teams, but not if this is a result of home teams overperforming or away teams underperforming because of home-court advantage. Some of these studies are reviewed in greater detail in Section\textasciitilde\ref{sec:litrev}. We will compare home team performance in 2020 at a neutral site with no fans vs.~2017-19 playoffs with fans. Likewise, away team performance in 2020 at a neutral site with no fans vs.~2017-19 playoffs with fans. By comparing home teams in 2020 to home teams in 2017-19 and away teams in 2020 to away teams in 2017-19, we add a new perspective to the field of research. This will allow for a more accurate understanding of the effects of home-court advantage on home and away teams in the NBA. We will not only see that home-court advantage helps home teams outperform away teams, but also separate the effects of home-court advantage on home teams and away teams performance individually. Nine hypotheses were tested to understand the differences in 2020 vs.~earlier years. First, whether or not the difference between home win percentage in 2020 and 2017-19 is zero. This difference is found to be statistically significant from zero. Then we assess for differences in home scoring in 2020 vs 2017-2019. Similarly, we can do the same test, but for differences in away scoring in 2020 vs 2017-2019. Also, differences in team shooting (for-two pointers, three-pointers, and free throws) from 2020 vs 2017-2019 for both home and away teams. The results from these tests bring a new perspective to understanding of how home-court advantage impacts games by altering the performance of the home and away teams. \hypertarget{sec:litrev}{% \section{Literature Review}\label{sec:litrev}} There is a voluminous literature on the effects of home-court advantage. 
Many NBA home-court advantage studies analyze the effects by studying shooting percentages. \citet{Kotecki} reported significant home-court advantage using performance-based statistics, specifically field goal percentage, free throw percentage, and points, by comparing home performance vs.~away performance in games. All of these, he showed, significantly indicate that home-court helps teams play better. \citet{Cao} studied the effects of pressure on performance in the NBA. Using free throws as their measure of interest, they tested whether home fans could distract and put pressure on opposing players at the free throw line. However, they found insignificant evidence that home status has a substantial impact on missing from the free throw line. \citet{Harris} used two-point shots, three-point shots and free throws as measures of interest to study home-court advantage. Two-point shots were found to be the strongest predictor of home-court advantage. They suggested that home teams should try to shoot more two-point shots and force their opponent to take more two-point shot attempts. This strategy will maximize the benefits of home-court advantage and give them the best chance to win. Some studies focus less on shooting and more on scoring differences and other metrics. For example, \citet{Greer} focused on the influence of spectator booing on home-court advantage in basketball. The three measures of performance used in this study were scoring, violations, and turnovers. This study was conducted using the men's basketball programs at two large universities. The study finds that social support, like booing, is an important contributor to home-court advantage. Greer explains that whether the influence is greater on visiting team performance or on referee calls is less clear; however, the data does seem to lean slightly in favor of affecting visiting team performance. \citet{Harville} studied the effect of home-court advantage using the 1991-1992 college basketball season.
Unlike the NBA, it is not uncommon to have a few games played at neutral sites during the college basketball season. This allowed them to construct two samples, one of home teams and one of neutral teams. They formulate their study as a regression predicting the expected difference in score for home teams. They set up their study to find whether home teams won games by more points when they had home-court advantage vs.~when playing on a neutral court. This study concluded with evidence supporting home-court advantage. There are also surveys on the factors contributing to home-court advantage. \citet{Carron1992} gave four main game location factors for home and away teams, namely: the crowd factor, which is the impact of fans cheering; learning factors, the advantage home teams gain from playing at a familiar venue; travel factors, the idea that away teams may face fatigue and jet lag from traveling; and rule factors, which say that home teams may benefit from some advantages in rules and officiating. They acknowledge that these factors would all be removed if games were played at a neutral site, even if one team was designated as the ``home team''. This study was revisited a decade later by \citet{Carron2005}. The 2005 review goes over the new findings from studies about the significance of these four game location factors. They found that, since 1992, results on these four factors have been mixed. However, there is some evidence supporting that crowd and travel factors impact games in the NBA, and less evidence suggesting that learning and rule factors do. One interesting finding cited by \citet{Carron2005} is that the absence of crowds results in an overall increase in performance.
\hypertarget{sec:data}{%
\section{Data}\label{sec:data}} Data were collected from the official NBA website.
The main variables of interest are whether or not the home team won, scoring totals for home and away teams, and shooting percentages for home and away teams on two-pointers, three-pointers, and free throws. These variables were very popular and frequently used in the related literature discussed earlier. The data was collected on a game-by-game basis, which gave us two observations for each variable per game played, one observation for each team (home and away). There were 83 games played in the 2020 playoffs, giving 83 observations for each variable in 2020 for both the home and away teams (166 observations total). Likewise, there were 243 games played over 2017-2019, giving 243 observations of each variable for both home and away teams over 2017-2019 (486 observations total). While many other measures could be used for measuring the outcome of the game and team performance, scoring seemed to be the most important. The winner of a game is determined by who scores more points. There can only be one winner and one loser, making the outcome a binary variable, with one indicating a win and zero indicating a loss. Home-court advantage is the basic idea that the home team is more likely to win, so laying a foundation of typical home-court advantage is crucial. Before focusing on the 2017 to 2020 playoffs, we can take a quick look at home team win percentages since 2010. Notice in Figure \ref{fig:Fig1} that in the 10 years before 2020, the home team winning percentage ranged from around 0.56 to 0.7 and never dipped below 0.5. The 2020 bubble broke this historic pattern by dipping below 0.5, foreshadowing the confirmation of the expectation that the effect of home-court advantage was removed in the 2020 playoffs.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig1}
\caption{Winning percentage of NBA home teams in the playoffs since 2010; the green line denotes .500.}
\label{fig:Fig1}
\end{figure}
Moving on to the main focus of the study, we compare 2020 to 2017-2019. Figure \ref{fig:Fig2} shows the histograms of the home (blue) and away (red) scoring for 2020 vs.~2017-2019. All histograms are fairly bell shaped, which is important for statistical tests designed for normally distributed data. There appears to be little difference between 2020 and 2017-2019 for home scoring, but for away scoring a noticeable shift to the right in 2020 is observed compared with 2017-2019.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig2}
\caption{Histograms of home (blue) and away (red) scoring for 2020 (bottom) and 2017-2019 (top).}
\label{fig:Fig2}
\end{figure}
Our second target of inference is shooting percentage for home and away teams. Figure \ref{fig:Fig3} shows home shooting for two-pointers, three-pointers and free throws for 2020 (top) vs.~2017-19 (bottom). The histograms appear to be fairly similarly distributed between 2020 and 2017-19. Likewise, Figure \ref{fig:Fig4} shows the same percentages for away teams. It appears that the two-point shooting percentage for away teams has a small shift to the right in 2020 relative to 2017-2019.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig3}
\caption{Histograms of home shooting percentages for two-pointers, three-pointers and free throws for 2020 (top) vs.~2017-19 (bottom).}
\label{fig:Fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig4}
\caption{Histograms of away shooting percentages for two-pointers, three-pointers and free throws for 2020 (top) vs.~2017-19 (bottom).}
\label{fig:Fig4}
\end{figure}
\hypertarget{sec:methods}{%
\section{Methods}\label{sec:methods}} The 2020 bubble provides a new and exciting opportunity to study home-court advantage in the NBA. Unlike college basketball, aside from a few exhibition/preseason games, the NBA always has a home and away team. So, for the first time in NBA history, the bubble allows NBA home and away performance to be compared against a neutral control. The NBA bubble, as a neutral court, removed all four possible game location factors impacting home-court advantage hypothesized by \citet{Carron1992}. The NBA bubble featured 8 seeding games and then a standard playoff format. The focus of this study is the play during the playoff games, since it followed the standard playoff format and can easily be compared to other playoffs. For this study, the 2020 playoffs were compared against the three previous playoffs collectively. To control for the changing play style of the NBA, we limit the study to 2020 vs.~2017-2019, accounting for the faster pace of play and the more common use of the three-point shot in modern basketball. If we used data from, say, 10 years ago or earlier, observed differences might not come from the effects of the NBA bubble, but rather from the effects of drastic changes in the style of play between the seasons. However, basketball evolves slowly enough that we can reasonably assume 2017-2019 are at least very close in pace and playing style to 2020.
Comparisons between 2020 and 2017-19 home and away teams were made on home team winning percentage, total team scoring, and two-point, three-point and free throw shooting. Comparing the differences in these metrics for home and away teams in 2020 vs.~previous years will provide valuable insights into the understanding of home-court advantage. We can see how going on the road may negatively impact away performance and how playing at home may positively impact home performance. If there are differences in scoring for home or away teams, the differences can be used to show how home-court advantage affects the overall performance of home and away teams, while testing for differences in shooting will provide added context for how home-court advantage specifically affects performance. Shooting percentages are not the only possible metrics affected by home-court advantage, but they are the most obvious and certainly important ones. We formulate the following nine specific research questions to test the effects of the COVID bubble on the 2020 NBA playoffs:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item Is the home team winning percentage in 2020 different than it was in 2017-2019?
\item Is the average home team scoring different in 2020 than it was over 2017-2019?
\item Is the average away team scoring different in 2020 than it was over 2017-2019?
\item Are home teams making two-pointers at the same rate in 2020 as in 2017-2019?
\item Are home teams making three-pointers at the same rate in 2020 as in 2017-2019?
\item Are home teams making free throws at the same rate in 2020 as in 2017-2019?
\item Are away teams making two-pointers at the same rate in 2020 as in 2017-2019?
\item Are away teams making three-pointers at the same rate in 2020 as in 2017-2019?
\item Are away teams making free throws at the same rate in 2020 as in 2017-2019?
\end{enumerate}
All nine questions can be approached by a standard two-sample comparison with the \(z\)-test.
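As a rough illustration of the first comparison, the two-proportion \(z\)-test and Fisher's exact test can be sketched in a few lines. The tests in this paper were run in R, so the Python sketch below is only an approximate equivalent; the win counts are reconstructed from the reported rates (48.2\% of 83 games, 61.3\% of 243 games) and are assumptions for illustration, not the raw data.

```python
from math import sqrt
from scipy.stats import norm, fisher_exact

# Approximate home-win counts reconstructed from the reported rates
# (assumptions for illustration): 0.482 * 83 ~ 40 wins in 2020,
# 0.613 * 243 ~ 149 wins in 2017-19.
wins_20, n_20 = 40, 83
wins_17, n_17 = 149, 243

# Two-sample z-test for proportions with a pooled standard error
p1, p2 = wins_20 / n_20, wins_17 / n_17
pool = (wins_20 + wins_17) / (n_20 + n_17)
se = sqrt(pool * (1 - pool) * (1 / n_20 + 1 / n_17))
z = (p1 - p2) / se
p_z = 2 * norm.sf(abs(z))          # two-sided p-value

# Distribution-free confirmation: Fisher's exact test on the 2x2 table
table = [[wins_20, n_20 - wins_20], [wins_17, n_17 - wins_17]]
_, p_fisher = fisher_exact(table)
```

The scoring and shooting questions follow the same pattern, with `scipy.stats.ranksums` (Wilcoxon's rank-sum test) applied to the two samples of game-level values instead of the proportion test.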
The \(z\)-test statistic follows a standard normal distribution, which is a good approximation based on the central limit theorem given the sample size in this application. We also conducted nonparametric tests that are distribution free to confirm the results from the \(z\)-test. For question~1, we used Fisher's exact test on a contingency table which summarizes the wins and losses of the home team in the 83 games in 2020 and the 243 games in 2017-2019. For all other eight questions, the data are the scores or shooting percentages from the 83 games in 2020 and the 243 games in 2017-2019, and we used Wilcoxon's rank-sum test. All three tests, namely the \(z\)-test, Fisher's exact test, and Wilcoxon's rank-sum test, were performed using R \citep{R}.
\hypertarget{sec:results}{%
\section{Results}\label{sec:results}}
\begin{table}
\caption{The results from the 9 tests.}
\label{tab:table}
\centering
\begin{tabular}[t]{lcccc}
\toprule
& 2020 & 2017-19 & \multicolumn{2}{c}{P-value}\\
\cmidrule(lr){4-5}
& & & Z-test & Non-parametric\\
\midrule
Home Win & 0.482 & 0.613 & 0.0497 & 0.0400\\
Home Scoring & 1.101 & 1.081 & 0.2321 & 0.2985\\
Away Scoring & 1.091 & 1.040 & 0.0008 & 0.0004\\
Home 2P & 0.523 & 0.515 & 0.4335 & 0.5719\\
Home 3P & 0.363 & 0.357 & 0.5733 & 0.8852\\
Home FT & 0.793 & 0.774 & 0.0692 & 0.0496\\
Away 2P & 0.536 & 0.504 & 0.0003 & 0.0003\\
Away 3P & 0.357 & 0.346 & 0.3256 & 0.3081\\
Away FT & 0.783 & 0.777 & 0.6601 & 0.8370\\
\bottomrule
\end{tabular}
\end{table}
Starting from the top, Table \ref{tab:table} summarizes the p-values of the nine hypothesis tests for both the \(z\)-tests and the Wilcoxon tests. The p-values are all fairly similar for both tests, giving strong confidence in the conclusions drawn. Also reported are the point estimates of the two samples in each comparison. First, we see a statistically significant change in home win percentage in 2020 from 2017-19, with a p-value of 0.0497 for the \(z\)-test and 0.0400 for Fisher's exact test.
The 95\% confidence interval (CI) of \((-0.255, -0.008)\) confirms our belief that home-court advantage was lost in the 2020 NBA playoffs. However, after accounting for multiple tests using the Bonferroni correction, the p-values for both tests are no longer significant. So, we may only cautiously say there is evidence that home-court advantage was not a factor in 2020. Home team performance did not seem to be negatively impacted by losing home-court advantage as expected. Home scoring, two-point and three-point shooting all show no significant difference, on average, between 2020 and 2017-19 based on the p-values from both tests. However, the Wilcoxon test and \(z\)-test have conflicting results for free throws. The \(z\)-test p-value of 0.0692 indicates no significant difference, while the Wilcoxon test p-value of 0.0496 indicates a difference at the 5\% significance level. Since the Wilcoxon p-value is so close to the significance level and neither p-value is significant after a Bonferroni correction for multiple tests, this difference is likely not very meaningful. There appears to be no strong evidence suggesting home teams played at a lower level in 2020 than they did in previous years when they had home-court advantage. Away teams saw more of an impact than home teams. For starters, there is a significant increase in mean points per game, indicated by a p-value of 0.0008 for the \(z\)-test and 0.0004 for the Wilcoxon test. It is important to note that both p-values also remain significant after a Bonferroni correction, giving a strong indication of significance. The average difference in points was estimated to be about 5 points, with 95\% CI \((2.083, 7.988)\). Likewise, the away team two-point shooting efficiency increased significantly based on a p-value of 0.0003 for both the \(z\)-test and the Wilcoxon test. Again, both p-values remain significant after the Bonferroni correction. The average difference was estimated to be about 0.03, with 95\% CI \((0.015, 0.050)\).
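The Bonferroni adjustment used throughout this section is easy to reproduce: each raw p-value is multiplied by the number of simultaneous tests (nine) and capped at one before being compared with the 0.05 level. Using the \(z\)-test p-values reported in Table \ref{tab:table}:

```python
# Raw z-test p-values for the nine tests, as reported in the table
p_raw = {
    "home_win": 0.0497, "home_scoring": 0.2321, "away_scoring": 0.0008,
    "home_2p": 0.4335, "home_3p": 0.5733, "home_ft": 0.0692,
    "away_2p": 0.0003, "away_3p": 0.3256, "away_ft": 0.6601,
}

m = len(p_raw)                                   # 9 simultaneous tests
p_bonf = {k: min(1.0, v * m) for k, v in p_raw.items()}
significant = sorted(k for k, v in p_bonf.items() if v < 0.05)
print(significant)   # ['away_2p', 'away_scoring']
```

Only the away-team scoring and two-point shooting differences survive the correction; in particular, the home-win p-value of 0.0497 becomes about 0.45 after adjustment.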
However, unlike two-point shooting, away teams did not see a statistically significant difference in three-point and free throw shooting. Overall, away teams show evidence of a change in performance in the bubble. The away teams seemed to perform better than they would under normal conditions as a visiting team.
\hypertarget{sec:disc}{%
\section{Discussion}\label{sec:disc}} Generally, it seemed that away teams fared better in the 2020 NBA playoff bubble than in previous years on the road. Starting from the dip in home winning percentage to 0.482, it is clear that something was different. Although the difference was not significant after a Bonferroni correction, it is still informative to consider and understand that home teams seemed to struggle to win compared to normal conditions. Compared to \citet{Kotecki}, who finds home teams consistently have a significantly better record than away teams, boasting about a 60.5\% win percentage in his sample, the 48.2\% home winning percentage of 2020 home teams is quite a shift. In this study, home teams did not benefit from the usual advantages provided by being the home team. Away team average scoring did increase by a statistically significant amount. This goes hand in hand with our intuition and conclusion about the home winning percentage decreasing. If away teams are scoring significantly more and home teams are not, then we expect to see away teams winning a larger share of games. This may give more reason to believe the conclusion that there was a significant decrease in home winning percentage in 2020, despite failing to be significant after the Bonferroni correction. That only away team scoring was significantly impacted by playing on a neutral court indicates that home-court advantage stems mainly from adverse effects on the visiting team. An interesting finding is that all shooting and scoring numbers for both home and away teams showed at least small increases.
Although these increases were not all significant, they are exactly what is reported in \citet{Carron2005}, where they explain how evidence suggests that teams perform better in the absence of fans. This is important because it coincides with our conclusion that home-court advantage mostly plays into games by negatively impacting away teams. If fans cause overall performance to drop, then home-court advantage must come from a bigger drop in away performance than in home performance. This is why away teams were able to close the gap with home teams once home-court advantage was removed. Separating the effects of home-court advantage into home effects and away effects allowed for some interesting new insights. Previously, we knew that on average home teams outperformed away teams. It was less clear whether this was from positive effects on the home team or negative effects on the road team, or perhaps a bit of both. The biggest takeaway from this study is that the main source of home-court advantage is the negative effect that playing on the road has on away teams. In 2020 there was no evidence of regression in home team performance, based on the performance measures used, despite home teams being stripped of home-court advantage. Yet, home teams lost about 12\% more games in the 2020 playoffs than the typical average, because of the improvement of away teams. No longer having to face the struggle of traveling, pressure from opposing fans, or playing on an unfamiliar court, away teams saw an improvement in their play and an increase in winning. The improvement of away teams confirms a proposition from \citet{Greer} that the positive social impact of crowds benefiting home teams may be a result of inhibiting away teams. At least some of that improvement from away teams came from significantly higher two-point efficiency.
This corresponds with the conclusion from \citet{Harris}, who found home teams are best suited to capitalize on advantages from two-point shots. Normally, by shooting more two-pointers themselves and forcing away teams to shoot more two-pointers, the home team benefits from increasing the effects of home-court advantage. However, with away teams significantly improving their two-point shooting in the bubble, this strategy was no longer viable and home-court advantage disappeared. Future studies may want to use the 2020 NBA bubble and compare against previous years using other performance measures, for example, turnovers, steals, assists, rebounds, and many more game statistics. There are plenty of other possibilities besides just shooting efficiency to pick through when looking for more possible sources of added points for away teams. This will further help explain what is lost in the performance of away teams when they travel to opposing arenas. This study is only the beginning of the possibilities for studies using the 2020 NBA bubble as a case study for home-court advantage. Although the study is limited by a one-time sample, it seems unlikely that these conditions will ever be repeated, so it may not be possible to have a follow-up study using the same measures with a different sample. Otherwise, that type of study could help strengthen the conclusions in this paper.

\bibliographystyle{chicago}
\section*{Introduction} \import{sections/}{intro} \section*{Results} \import{sections/}{results} \section*{Methods} \import{sections/}{methods} \subsection*{Dataset} \label{subsection:dataset} The dataset used in this work is a log of the locations of the cars belonging to \textit{Enjoy}, a popular Italian car-sharing service. Although the service is active in six major Italian cities, we focus our analysis on the city of Milan. The data have been collected via the Enjoy APIs, scraping the information through a client account. We obtained information about the area of service (i.e., the limits within which people can start and end their rides), the location of vehicles, and the points of interest (POIs) related to the service (such as fuel stations and reserved parkings) from the API endpoints. The endpoints used have base URL \textit{https://enjoy.eni.com/ajax/} and are the following: \href{https://enjoy.eni.com/ajax/retrieve_area}{retrieve\_areas} for the area, \href{https://enjoy.eni.com/ajax/retrieve_vehicles}{retrieve\_vehicles} for cars, and \href{https://enjoy.eni.com/ajax/retrieve_pois}{retrieve\_pois} for points of interest. We collected the data recording all the movements of Enjoy cars within the city of Milan in 2017. Through the scraping procedure, we collected a series of events divided into two categories: parking events and travel events of each Enjoy car we could observe. Each event comes with a set of information such as the car plate, the time at which the event occurred (with a precision of seconds), and the latitude and longitude of the parking spot in the case of parking events. Starting and arrival points in the case of travel events are also recorded. In the rest of this article, we will focus mainly on parking events, but our methods and analyses could also be applied to study the volume of travel events.
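The collection pipeline described above can be sketched as a polling loop that repeatedly queries the \textit{retrieve\_vehicles} endpoint and diffs consecutive snapshots of car positions into parking/travel events. The JSON field names used below (\texttt{plate}, \texttt{lat}, \texttt{lon}) are assumptions for illustration; the actual payload of the API may differ.

```python
import json
from urllib.request import urlopen

BASE = "https://enjoy.eni.com/ajax/"

def snapshot():
    """One snapshot of car positions from the retrieve_vehicles endpoint.

    The JSON field names (plate, lat, lon) are assumptions for
    illustration; the real payload may use different keys.
    """
    with urlopen(BASE + "retrieve_vehicles") as resp:
        cars = json.load(resp)
    return {c["plate"]: (c["lat"], c["lon"]) for c in cars}

def detect_events(prev, curr, ts):
    """Diff two consecutive snapshots into events.

    A plate appearing in `curr` is a new parking event; a plate that
    disappeared started a trip. (A plate present in both snapshots but
    at a new position, i.e. a trip completed between polls, is omitted
    here for brevity.)
    """
    events = [("park", p, pos, ts) for p, pos in curr.items() if p not in prev]
    events += [("travel_start", p, pos, ts)
               for p, pos in prev.items() if p not in curr]
    return events
```

Calling `snapshot()` periodically (e.g., once a minute) and feeding consecutive snapshots to `detect_events` accumulates the kind of parking/travel log described above.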
The final aim of the work is to predict the number of vehicles parked at a given time in each and every district of the city. To this end, we divided the area of service into the different municipalities (we will call them \textit{zones}). Different tessellation strategies (e.g., using hexagons with a $900\,$m side) have been considered, but the municipality division provides more statistically stable training data, and thus more precise models. For each type of event (parked car) that we collected, we defined the \textit{activity} of a zone as the number of events (i.e., cars parked) occurring there in a certain fixed amount of time $\delta t$. Indicating each zone with the index $i$, we obtained for each and every one of them a discrete time series $x_i(t)$ representing the \textit{activity} of the zone $i$ in the time frame $[t, t+\delta t]$. We will indicate with $t=T$ the last time frame observed in the data. In the following, we will use $\delta t=1800\,$s, corresponding to $30$ minutes. This time bin width has been chosen so as to have a stable average activity $\langle x_i(t)\rangle$ and a stable variance $\sigma(x_i(t))$ over all the zones. Indeed, narrower or larger time bins feature unstable mean and/or variance in time, whereas the 30-minute binning is significantly stable throughout the whole observation period. This characteristic helps the models generalize and predict better, as the distributions of the modeled quantities do not change too much in time. Some of the models we would like to implement require real variables distributed over $\mathbb{R}$. However, the activity $x_i(t)$ we have defined so far belongs to $\mathbb{N}$ by definition. For this reason, we defined another kind of variable by dividing the activity by its standard deviation:
\begin{equation}
z = \frac{x}{\sigma},
\label{eq:zscore}
\end{equation}
where $x$ is the original activity data, and $\sigma$ is the standard deviation of the activity in time.
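A minimal sketch of this preprocessing, on a toy events table (the synthetic data and column names are our own, not the actual schema): parking events are counted per zone in 30-minute bins, then each zone is normalized by the standard deviation of the matching time bin of the typical day.

```python
import numpy as np
import pandas as pd

# Toy parking events over two weeks (synthetic stand-in for the
# scraped Enjoy events; column names are assumptions)
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "zone": rng.integers(0, 3, size=5000),
    "ts": pd.Timestamp("2017-01-01")
          + pd.to_timedelta(rng.integers(0, 14 * 24 * 3600, size=5000),
                            unit="s"),
})

# Activity x_i(t): number of parking events per zone per 30-minute bin
activity = (events
            .groupby(["zone", pd.Grouper(key="ts", freq="30min")])
            .size()
            .unstack("zone", fill_value=0))

# Normalize by the std of the matching time-of-day bin ("typical day"):
# slot 0..47 indexes the 48 half-hour bins of a day
slot = activity.index.hour * 2 + activity.index.minute // 30
std = activity.groupby(slot).transform("std")
z = activity / std
```

The resulting `z` is the normalized multivariate time series that the models below are trained on.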
In this case, for the population we take into account the \textit{typical day}: the standard deviation is taken with respect to the same time bin of every day. So, Eq.~(\ref{eq:zscore}) becomes:
\begin{equation}
z_i(t)= \frac{x_i(t)}{\sigma_{i}(t_{\delta t})},
\end{equation}
where $t_{\delta t}$ identifies the time bin of the typical day corresponding to $t$, and $\sigma_{i}(t_{\delta t})$ is the standard deviation over all the activities of the zone $i$ observed at that same time bin of the day. To keep the notation simple, we will indicate $z_i(t)$ with $x_i(t)$, keeping in mind that $x$ now refers to a \textit{normalized activity}. From now on, we will work on the normalized $x_i(t)$: these indicate how much more ``active'' a given area is (i.e., how many more cars are parked) with respect to the average volume of that time bin $t$ (the typical-day activity), weighting this fluctuation by the standard deviation observed for that zone and time of day. In this way, we can compare the signal of areas with high fluctuations and high activities with that of less frequented areas around the city. The final step when processing data for inference and Machine Learning is to divide the data into a \textit{train set} and a \textit{test set}. The models will then be trained using the first dataset, and their precision will be tested on the second one, so as to check their ability to generalize to unseen data. Different kinds of splittings have been tried, like random splitting or taking the first part of the time series as training and the last one as test; similar results are obtained. As a final remark, in the following we will indicate with $t$ the time bin of activity, i.e., $x_i(t)$ will indicate the activity of the zone $i$ in the time range $[t \delta t, (t+1)\delta t]$. \subsection*{Pseudo-Log-Likelihood Maximization} \label{sec:2_pll} The Boltzmann probability is defined over the whole time series of all the zones, i.e.
\begin{equation}
P \Big( x(t), x(t-1),...,x(t-\Delta) \Big) \propto \exp \Big( -\mathcal{H}(x(t), x(t-1),...,x(t-\Delta)) \Big),
\end{equation}
where $\mathcal{H}$ is the effective Hamiltonian of the model. From this, it is straightforward to define the conditional probability of one time step $x(t)$ given all the previous ones:
\begin{equation}
\label{eq:conditioned}
P(x(t) \mid x(t-1),...,x(t-\Delta)) = \frac{P \Big( x(t), x(t-1),...,x(t-\Delta) \Big)}{P \Big( x(t-1),...,x(t-\Delta) \Big)}.
\end{equation}
Using equation (\ref{eq:pseudo}), we can define the Pseudo-Log-Likelihood as:
\begin{equation}
\label{eq:pll}
\mathcal{L}_{pseudo} = \frac{1}{T-\Delta} \sum_{t=\Delta}^T \log P(x(t) | x(t-1), \dots, x(t-\Delta)).
\end{equation}
Here, using Eq. (\ref{eq:conditioned}) and substituting the functional form of the two total probabilities, we obtain
\begin{equation}
\label{eq:pll_cond}
P(x(t) | x(t-1), \dots, x(t-\Delta)) = \prod_i \frac{1}{Z_i(t)} \exp( -a_i x_i^2(t) + x_i(t) v_i(t)) ,
\end{equation}
with
\begin{equation}
\begin{aligned}
&Z_i(t) = \frac{1}{\sqrt{a_i}} \exp \left( \frac{v_i(t)^2}{4a_i} \right ) \int_{-\infty}^{v_i(t)/2 \sqrt{a_i}} dz e^{-z^2} = \frac{1}{\sqrt{a_i}} \exp \left( \frac{v_i(t)^2}{4a_i} \right ) I\left(\frac{v_i(t)}{2 \sqrt{a_i}} \right) , \\
&v_i(t) = \sum_{n=1}^{d} \sum_\delta (J_n ^\delta x^n(t-\delta))_i + h_i.
\end{aligned}
\end{equation}
Substituting in eq. (\ref{eq:pll}), we get:
\begin{equation}
\mathcal{L}_{pseudo} = \frac{1}{T-\Delta} \sum_{t=\Delta}^T \sum_i -a_i x_i^2(t) + x_i(t) v_i(t) - \log Z_i(t) ,
\end{equation}
and we can calculate the gradients of $\mathcal{L}_{pseudo}$ w.r.t.
the parameters:
\begin{equation}
\begin{aligned}
& \frac{\partial \mathcal{L}_{pseudo}}{\partial a_{i}} = \frac{1}{T-\Delta} \sum_t -x_i^2(t) - \frac{\partial \log Z_i(t)}{\partial a_i},\\
& \frac{\partial \mathcal{L}_{pseudo}}{\partial J^\delta_{ij}} = \frac{1}{T-\Delta} \sum_t x_i(t) x_j(t-\delta) - \langle x_i(t) \rangle x_j(t-\delta),\\
& \frac{\partial \mathcal{L}_{pseudo}}{\partial h_{i}} = \frac{1}{T-\Delta} \sum_t x_i(t) - \langle x_i(t) \rangle ,
\end{aligned}
\end{equation}
where
\begin{equation}
\langle x_i(t) \rangle = \frac{v_i(t)}{2a_i} + \frac{1}{2 \sqrt{a_i} I(\frac{v_i}{2 \sqrt{a_i}} )} \exp \left ( -\frac{v_i(t)^2} {4a_i}\right)
\end{equation}
and
\begin{equation}
\frac{\partial \log Z_i(t)}{\partial a_i} = -\frac{1}{2a_i} + \frac{-v_i^2(t)}{4a_i^2} + \frac{1}{I(\frac{v_i}{2 \sqrt{a_i}} )} \exp \left ( -\frac{v_i(t)^2} {4a_i}\right) \left ( \frac{-v_i(t)}{4 a_i^{3/2}} \right).
\end{equation}
The fact that the gradients and the cost function can be computed exactly makes the inference of the parameters relatively easy with respect to other cases where they need to be approximated. Once the parameters of the model have been inferred, it is possible to use the model to predict the temporal evolution of the normalized activities of the system. Given the state of the system up to time $t-1$, i.e. $(x(t-1), \dots x(t-\Delta))$ (past time steps further than $\Delta$ from $t$ are not relevant), we can use equation (\ref{eq:pll_cond}) to predict the next step $x(t)$. Since the probability in (\ref{eq:pll_cond}) is a truncated normal distribution (the activities are non-negative) whose mean is completely determined by $(x(t-1), \dots x(t-\Delta))$, the best prediction of $x_i(t)$ is the average of the distribution, $\langle x_i(t) \rangle = \frac{1}{2a_i} \left ( v_i(t) + \frac{1}{Z_i(t)} \right )$. In other words, we are using the generative model to perform a discrimination task.
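The one-step predictor follows directly from the expression for $\langle x_i(t) \rangle$ above. A sketch for the linear case $n=1$ (our simplification), writing the incomplete Gaussian integral as $I(u)=\int_{-\infty}^{u} e^{-z^2}\,dz = \tfrac{\sqrt{\pi}}{2}\,(1+\operatorname{erf}(u))$:

```python
import numpy as np
from scipy.special import erf

def gauss_I(u):
    # I(u) = \int_{-inf}^{u} exp(-z^2) dz = (sqrt(pi)/2) * (1 + erf(u))
    return 0.5 * np.sqrt(np.pi) * (1.0 + erf(u))

def predict_next(past, a, J, h):
    """Conditional mean <x_i(t)> of the truncated Gaussian, linear case.

    past : array (Delta, N), past[d] = x(t - d - 1)
    J    : list of Delta coupling matrices (N, N), J[d] = J^{d+1}
    a, h : per-zone parameter arrays of size N
    """
    v = h + sum(J[d] @ past[d] for d in range(len(J)))
    u = v / (2.0 * np.sqrt(a))
    # mean = v / (2a) + exp(-v^2 / 4a) / (2 sqrt(a) I(v / 2 sqrt(a)))
    return v / (2.0 * a) + np.exp(-u ** 2) / (2.0 * np.sqrt(a) * gauss_I(u))
```

Feeding the last $\Delta$ observed normalized activities to `predict_next` gives the point forecast that can be compared against machine-learning baselines.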
This makes it possible to compare this model with standard machine learning ones, by comparing their precision in the prediction of the time series. To avoid over-fitting, we used L1-regularization~\cite{buhlmann2011statistics,bishop2006pattern}. An in-depth description of the technique can be found in the references. In practice, the cost function that has to be optimized is:
\begin{equation}
C(\theta) = \log P(\theta | X) = \log P( X | \theta ) - \lambda \sum_i |\theta_i| + \mathrm{const} .
\end{equation}
The first term of this sum is the log-likelihood, while the second term is the regularization term. Taking the gradient, we obtain:
\begin{equation}
\frac{\partial C(\theta)}{\partial \theta_i} = \frac{\partial \log P( X | \theta )}{\partial \theta_i} - \lambda \, sign(\theta_i),
\end{equation}
where $sign$ is the sign function. The training curves show no sign of overfitting, as the log-likelihood asymptotically stabilizes for the validation and train sets (see Fig.~\ref{training} in Appendix). \subsection*{Data} The data we use are a normalized multivariate time series representing the parking activity of the different municipalities of the city. The procedure to obtain the final form of the time series is described in the \textbf{Methods} section. In Fig.~\ref{fig:19} we show an example of activity for two zones of our dataset for the Metropolitan City of Milan.
\begin{figure}[H]
\centering
\includegraphics[width=17cm]{img/ts.png}
\caption{Time series activity data for the city center (a) and for one of the suburbs (b). \label{fig:19}}
\end{figure}
\subsection*{Maximum Entropy Principle} \label{subsection:maxent} The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge is the one with the largest entropy, in the context of precisely stated prior data. According to this principle, the distribution with maximal information entropy is the best choice. The principle was first shown by E. T.
Jaynes in two papers in the late fifties, where he emphasized a natural correspondence between statistical mechanics and information theory~\cite{jaynes1957information,jaynes1957information2}. In particular, Jaynes offered a new and very general rationale for why the Gibbsian method of statistical mechanics works. He showed that statistical mechanics, and in particular Ensemble Theory, can be seen just as a particular application of information theory; hence there is a strict correspondence between the entropy of statistical mechanics and the Shannon information entropy. Maximum Entropy models have unveiled interesting results throughout the years for a large variety of systems, like flocking birds~\cite{bialek2012statistical}, proteins~\cite{ekeberg2013improved}, the brain~\cite{tang2008maximum} and social systems~\cite{gresele2017maximum}. We will implement this approach to define the model of our real-world system in the following sections. A more general introduction to the maximum entropy formalism is out of scope here, and we refer to the existing literature for details~\cite{mehta2019high,giffin2009maximum,ibanez2019unsupervised,goodfellow2016deep}. The probability distribution with Maximum Entropy $P_{ME}$ results from the extremal condition of the so-called \textit{Generalized Entropy}:
\begin{equation}
\label{eq:gen_ent}
\mathcal{S} \big[ P \big] = S \big[ P \big] + \sum_{k=1}^K \theta_k (\langle O_k \rangle_{P(\underline{X} )} -\langle O_k \rangle_{obs}),
\end{equation}
where
\begin{equation}
S \big[ P \big]= - \sum_{\underline{X}} P(\underline{X}) \log(P(\underline{X}))
\end{equation}
is the Shannon Entropy of the probability distribution $P(\underline{X})$. The maximum of the Generalized Entropy is the maximum of the Entropy of the model when it is subject to the constraints.
Computing the functional derivative of (\ref{eq:gen_ent}) with respect to $P(\underline{X})$ and equating it to zero results in: \begin{equation} \label{eq:pme} P_{me}(\underline{X}) = \frac{1}{Z(\underline{\theta})} \exp \Big[- \sum_{k=1}^K \theta_k O_{k}(\underline{X})\Big], \end{equation} where \begin{equation} Z(\underline{\theta})=\int\limits_\Omega d\underline{X} \exp \Big[- \sum_{k=1}^K \theta_k O_{k}(\underline{X})\Big] \end{equation} is the normalization (in a parallel with statistical physics, it can be called the \textit{Partition Function}). $Z(\underline{\theta})$ is written as a sum if $\Omega$ is discrete. Hence, the maximum entropy probability distribution is a Boltzmann distribution in the canonical ensemble at unit temperature, with effective Hamiltonian $\mathcal{H}(\underline{X}) = \sum_{k=1}^K \theta_k O_{k}(\underline{X})$. It must be noticed that the minimization of the generalized entropy is equivalent to the maximization of the experimental average of the log-likelihood: \begin{equation} \mathcal{S} \big[ P \big] = \log Z(\underline{\theta}) + \sum_{k=1}^K \theta_k \langle O_k \rangle_{e} = -\langle \log P_{me} \rangle_{e} = -\frac{1}{M} \sum_{m=1}^M \log P_{me}(\underline{X}^{(m)}). \end{equation} In other words, the $\theta_k$ are chosen by imposing the experimental constraints on the Entropy or, equivalently, by maximizing the global, experimental likelihood according to a model with the constraints cited above. Focusing on this last sentence, we can say that the optimal parameters $\theta$ (called \textit{effective couplings}) can be obtained through Maximum Likelihood, but only once one has assumed (by the principle of Maximum Entropy) that the most probable distribution has the form of $P_{ME}$.
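To make the Boltzmann form of $P_{me}$ concrete, here is a minimal numerical sketch over a discrete configuration space. The observables and couplings are toy values of our own invention, not the fitted model of this paper: evaluating Eq.~(\ref{eq:pme}) reduces to a softmax of the negative effective energies.

```python
import numpy as np

def maxent_probs(observables, theta):
    """Boltzmann form P(X) = exp(-sum_k theta_k O_k(X)) / Z over a
    discrete configuration space. `observables` is an (n_configs, K)
    array holding O_k evaluated at each configuration."""
    energies = observables @ theta   # H(X) = sum_k theta_k O_k(X)
    logw = -energies
    logw -= logw.max()               # shift for numerical stability
    w = np.exp(logw)
    return w / w.sum()               # division by Z normalizes

# Toy example: 3 configurations, K = 2 observables, made-up couplings.
O = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
theta = np.array([0.5, -0.5])
p = maxent_probs(O, theta)
```

The configuration with the lowest effective energy receives the largest probability, as expected for a canonical-ensemble distribution.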
Given the generative model probability distribution of configurations $P(\underline{X} \mid \underline{\theta})$ and its corresponding log partition function $\log Z( \underline{\theta} )$, the estimator of $\theta$ can be found by maximizing the log-likelihood: \begin{equation} \label{eq:loglike} \mathcal{L}( \underline{\theta} ) = \langle \log(P(\underline{X} \mid \underline{\theta})) \rangle_{data} = - \langle \mathcal{H}(\underline{X} ; \underline{\theta}) \rangle_{data} - \log Z( \underline{\theta} ). \end{equation} Having chosen the log-likelihood as our objective function, we still need to specify a procedure to maximize it with respect to the parameters. One common choice that is widely employed when training energy-based models is Gradient Descent~\cite{goodfellow2016deep} or its variants, in which the parameters are updated along the gradient direction. Once the appropriate objective $\mathcal{L}$ has been chosen, the algorithm calculates its gradient with respect to the model parameters and ascends it (equivalently, it performs gradient descent on the cost $-\mathcal{L}$). The update equation is: \begin{equation} \theta_{ij} \leftarrow \theta_{ij} +\eta_{ij} \frac{\partial \mathcal{L}}{\partial \theta_{ij}}. \end{equation} Typically, the difficulty in these kinds of problems is to evaluate $\log Z( \underline{\theta} )$ and its derivatives. The reason is that the partition function is rarely an exact integral, and it can be calculated exactly only in a few cases. However, it is still possible to find ways to approximate it and compute approximated gradients. \subsection*{Pseudo-Log-Likelihood (PLL) Maximization} \label{subsection:pll} Pseudo-likelihood is an alternative to the likelihood function and leads to the exact inference of model parameters in the limit of an infinite number of samples \cite{arnold1991pseudolikelihood,nguyen2017inverse}. Let us consider the log-likelihood function $\mathcal{L}( \underline{\theta} ) = \langle \log P(\underline{X} \mid \underline{\theta}) \rangle_{data}$.
In some cases we are not able to compute the partition function $Z( \underline{\theta} )$, but it is possible to derive exactly the conditional probability of one component of $\underline{X}$ with respect to the others, i.e. $P(X_j | \underline{X_{-j}}, \underline{\theta})$, where $\underline{X_{-j}}$ indicates the vector $\underline{X}$ without the $j$-th component. In this case, we can write an approximate likelihood called the Pseudo-log-likelihood, which takes the form: \begin{equation} \label{eq:pseudo} \mathcal{L}( \underline{\theta} )_{\textit{pseudo}} = \sum_j \langle \log P(X_j | \underline{X_{-j}}, \underline{\theta}) \rangle_{data}. \end{equation} The model we will introduce in this work does not have an explicit form for the partition function, but Eq.~(\ref{eq:pseudo}) and its derivatives can be calculated exactly. Thus, the Pseudo-log-likelihood is a convenient cost function for our problem. \subsection*{Definition of MaxEnt model} \label{subsection:definition} We have seen that the Maximum Entropy inference scheme requires the definition of some observables that are supposed to be relevant to the system under study. Since one aims to predict the evolution of the different $x_i(t)$, the simplest choice is to consider their correlations. As a preliminary analysis, we study the correlation between the activity $x_i(t)$ of the most central zone within the city (highlighted in red) and all the other zones with a \textit{shift} in time (i.e. we correlate $x_i(t)$ with $x_j(t-\delta)$ for all $j\neq i$ and some $\delta>0$). The measure of correlation between two vectors $u$ and $v$ is the cosine similarity $\frac{u \cdot v}{{\|u\|}_2 {\|v\|}_2}$, where ${\|\cdot\|}_2$ is the Euclidean norm, defined in $\mathbb{R}^n$ as $\left\| \boldsymbol{x} \right\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}$. We see that the areas with large correlations vary with $\delta$: they cluster around the central area for small values, and they become peripheral when $\delta \sim 31$.
When $\delta \sim 48$ they start to cluster around the central area again. The opposite behavior appears when repeating the same procedure for a peripheral zone. We perform a historical analysis of time-shifted correlations, observing contraction and dilation with respect to the different zones with a one-day periodicity. Due to the correlation dependency on the time shift, we have to include it in the observables used to describe the system. Hence, we chose as observables all the \textit{shifted correlations} between all pairs of zones, defined as \begin{equation} \label{eq:corr} \langle x_i(t) x_j(t-\delta) \rangle_{data} = \frac{1}{T-\Delta} \sum_{t=\Delta}^T x_i(t) x_j(t-\delta), \end{equation} for $i,j=1,\ldots,N$ (with $N$ the number of zones we took into consideration) and $\delta=0,\ldots,\Delta$. Another common choice is to also fix the average value of all the variables of the system. We took $\langle x_i \rangle_{data} = \frac{1}{T-\Delta} \sum_{t=\Delta}^T x_i(t)$ and $\langle x^2_i \rangle_{data} = \frac{1}{T-\Delta} \sum_{t=\Delta}^T x^2_i(t)$. From these, we obtain the equation for the probability: \begin{equation} P(x(t), \dots, x(t-\Delta)) = \frac{1}{Z} \exp \left[ -\sum_{t=\Delta}^T \sum_i a_i x_i^2(t) + \sum_{t=\Delta}^T \sum_i h_i x_i(t) + \sum_{t=\Delta}^T \sum_{i,j} \sum_{\delta=1}^\Delta J_{ij}^\delta x_i(t)x_j(t-\delta) \right ], \end{equation} where $a_i$ and $h_i$ are the parameters conjugate to the second moment and the mean of the $i$-th component, and $J_{ij}^\delta$ are the time-shifted interactions. Writing $v_i(t) = h_i + \sum_j \sum_\delta J_{ij}^\delta x_j(t-\delta)$, one obtains: \begin{equation} P(x(t), \dots, x(t-\Delta)) = \frac{1}{Z} \prod_{t=\Delta}^T \exp \left[ -\sum_i a_i x_i^2(t) + \sum_i v_i(t) x_i(t) \right ]. \end{equation} \subsection*{Time series Forecasting} We trained the model for $100000$ steps using several hyperparameters ($\Delta = \{24,36,48,72\}$ and $\lambda=\{0.001,0.004,0.005,0.006,0.01\}$, see tab.\ref{tab5}).
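Under the factorized form above, the distribution of $x_i(t)$ given the previous $\Delta$ observations is Gaussian with mean $v_i(t)/(2a_i)$ (assuming $a_i>0$), which yields a natural one-step-ahead forecast. A minimal sketch with made-up parameters (not the fitted couplings of this paper):

```python
import numpy as np

def one_step_forecast(history, h, J, a):
    """One-step-ahead prediction under the factorized MaxEnt model:
    x_i(t) | past is Gaussian with mean v_i(t) / (2 a_i), where
    v_i(t) = h_i + sum_{j,delta} J[delta-1, i, j] * x_j(t - delta).
    `history` holds the last Delta observations, most recent first:
    history[delta-1] = x(t - delta)."""
    Delta = J.shape[0]
    v = h + sum(J[d] @ history[d] for d in range(Delta))
    return v / (2.0 * a)

# Toy instance with invented parameters (N = 2 zones, Delta = 2 lags).
h = np.array([0.1, 0.2])
a = np.array([1.0, 2.0])          # must be positive for a proper Gaussian
J = np.zeros((2, 2, 2))
J[0] = np.array([[0.5, 0.0],      # coupling at lag 1
                 [0.0, 0.5]])
history = np.array([[0.4, 0.6],   # x(t-1)
                    [0.3, 0.1]])  # x(t-2)
x_hat = one_step_forecast(history, h, J, a)
```

Iterating this prediction, feeding each forecast back into the history, gives multi-step forecasts of the kind evaluated below.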
To quantify the predictive capabilities of our models, we used the \textit{Mean Absolute Error (MAE)}, the \textit{Mean Squared Error (MSE)} and the \textit{Coefficient of Determination}, also known as $R^2$ \cite{devore2011probability}. \begin{table}[H] \caption{$R^2$ values w.r.t. $\Delta$ and $\lambda$ values, resulting from $100000$ steps of training. \label{tab5}} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{tabularx}{\textwidth}{CCC} \toprule \textbf{$\Delta$} & \textbf{$\lambda$} & \textbf{$R^2$}\\ \midrule 24 & 0.001 & 0.855316 \\ 24 & 0.004 & 0.854723 \\ 24 & 0.005 & 0.860069 \\ 24 & 0.006 & 0.853689 \\ 24 & 0.01 & 0.832542 \\ 36 & 0.001 & 0.854458 \\ 36 & 0.004 & 0.85442 \\ 36 & 0.005 & 0.853946 \\ 36 & 0.006 & 0.853447 \\ 36 & 0.01 & 0.832672 \\ 48 & 0.001 & 0.860072 \\ 48 & 0.004 & 0.861741 \\ 48 & 0.005 & 0.869466 \\ 48 & 0.006 & 0.861626 \\ 48 & 0.01 & 0.833099 \\ 72 & 0.001 & 0.85655 \\ 72 & 0.004 & 0.859633 \\ 72 & 0.005 & 0.867582 \\ 72 & 0.006 & 0.859632 \\ 72 & 0.01 & 0.858497 \\ \bottomrule \end{tabularx} \end{table} We typically find good predictive capabilities on the test set (as shown in fig.~\ref{r2}; the exact values for each predictor are reported in tab.~\ref{tab5}). As a baseline model, we used the average value at each time step during the day in the training set. For $\Delta = 48$ (exactly the day-periodicity) and $\lambda=0.005$, we find our model's best performance, so we will use these values. To assess the predictive power of our approach against more complex models, we compared it with standard statistical inference and Machine Learning methods. In particular: \begin{enumerate} \item a \textbf{SARIMA} model \cite{tseng2002fuzzy}: we performed a grid search over the parameters using \textit{AIC} \cite{burnham2004multimodel} as the metric; the best predictive model, in the standard notation of the literature, is $\{p=2,d=0,q=1,P=2,D=1,Q=2,m=48\}$.
Tab.~\ref{sarima_table} reports the grid search. \item \textbf{NN} and \textbf{LSTM}: each of them has 3 hidden layers with $N_{nodes}=\{64,128,256,512\}$ nodes. Between the layers, we applied dropout and standard non-linear activation functions (ReLU). Performing a grid search over the other Machine Learning hyperparameters, the layers with $128$ and $256$ nodes perform better. \end{enumerate} Fig.~\ref{model_eval} shows that our model outperforms SARIMA and performs as well as the Machine Learning ones. Since the LSTM always performs slightly better than the FeedForward network, we will use the former as a reference. Fig.~\ref{ts_pred} shows this in more detail: SARIMA fails to reproduce variations and is mostly focused on seasonality. Our model works as well as NNs but with two orders of magnitude fewer parameters ($109$ versus $39040$ for the $128$-node FeedForward configuration). In fig. \ref{fig:cities} we show data and results for the cities of Rome, Florence, and Turin. \subsection*{Extreme events prediction} \label{subsection:extreme} As already pointed out, prediction is a possible application of our procedure, and we compared the results with state-of-the-art forecasting techniques. Another possible application is the detection of outliers in the data. We see from Fig.~\ref{fig:19} that our car sharing data exhibit quite regular patterns across different days. These patterns can be perturbed by a wide variety of external events, influencing the demand for car-sharing vehicles as well as the traffic patterns in the city. If we compute the log-likelihood from equation~(\ref{eq:pll}) restricted to one of the perturbed days, we would expect it to be particularly low, implying that the model has lower predictive power than usual. Hence, we can use the log-likelihood $\mathcal{L}$ from equation~(\ref{eq:pll}) as a score in a binary classification task.
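As a sketch of this classification scheme (synthetic per-time-step log-likelihoods and labels, not our actual data), one can score each day by its average log-likelihood and measure how well the scores separate perturbed days with the AuROC:

```python
import numpy as np

def day_scores(loglik_t, day_slices):
    """Average per-time-step log-likelihood of each day; low scores
    flag days the model explains poorly (candidate outliers)."""
    return np.array([loglik_t[s].mean() for s in day_slices])

def auroc(scores, labels):
    """AuROC via the Mann-Whitney rank formula, counting how often
    an event day (label 1) scores below a standard day (label 0)."""
    s_event = scores[labels == 1]
    s_std = scores[labels == 0]
    wins = sum((e < n) + 0.5 * (e == n) for e in s_event for n in s_std)
    return wins / (len(s_event) * len(s_std))

# Toy example: four days of 48 half-hour steps; days 2 and 4 perturbed.
loglik_t = np.concatenate([np.full(48, -1.0), np.full(48, -3.0),
                           np.full(48, -1.2), np.full(48, -2.8)])
days = [slice(48 * i, 48 * (i + 1)) for i in range(4)]
labels = np.array([0, 1, 0, 1])
scores = day_scores(loglik_t, days)
```

In this toy example both perturbed days score below both standard days, so the AuROC is 1; on real data the separation is of course only partial.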
After training the model with a specific parameter choice, we consider a set of days in the test dataset, $d_1 \dots d_D$. Each one is a set of consecutive time steps: $d_i = [t^{d_i}_1, t^{d_i}_2]$ with $t^{d_i}_1 < t^{d_i}_2$. The log-likelihood of a single day is just \begin{equation} \label{eq:pll_singleday} \mathcal{L}_{pseudo}(d_i) = \frac{1}{t^{d_i}_2 - t^{d_i}_1} \sum_{t= t^{d_i}_1}^{t^{d_i}_2} \log P(x(t) | x(t-1), \dots, x(t-\Delta)), \end{equation} where the conditioning may reach back into the preceding day. We can now assign a label $L(d_i)=0$ if $d_i$ is a standard day and $L(d_i)=1$ if it is a day where a certain external event occurred. We considered two types of external events: \begin{enumerate} \item Weather conditions: on that day there was fog, a storm, or rain. \item Strikes: on that day there was a strike in the city capable of affecting car traffic. We considered Taxi strikes, Public Transportation strikes, and general strikes involving all working categories. \end{enumerate} Among the days in the test dataset, $50\%$ were labeled as 1. In fig.~\ref{roc2} we show the ROC curve for the classification of such days, using the log-likelihood trained with the parameters $\{\Delta=48,\lambda=0.006\}$. The Area under the ROC curve (AuROC) is 0.81, indicating that $\mathcal{L}$ is a good indicator of whether an external event occurred on a specific day. \begin{figure}[H] \centering \includegraphics[width=13.5cm]{img/model_comparison.jpg} \caption{ Model comparison using different metrics. We see similar results between the MaxEnt model and the Deep Learning ones. \label{model_eval}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=13.5cm]{img/prediction_comparison.jpg} \caption{Model comparison: taking one zone's time series, we see that SARIMA models are good predictors of periodicity, but the other models perform better when the change between one period and another is high.
\label{ts_pred}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=13.5cm]{img/MaxEnt_R2.jpg} \caption{$R^2$ of the MaxEnt prediction over the test set. For a perfect predictor ($R^2=1$), the points would align on the bisector line. \label{r2}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=13.5cm]{img/outliers_lam-0.006.png} \caption{ROC curve for the detection of outliers (bad weather conditions and strikes) for $\lambda=0.006$ and $\Delta=48$.\label{roc2}} \end{figure} \section*{Discussion} In this work, we addressed the problem of building statistical and machine learning models to predict mobility patterns within the Milan Metropolitan Area. We focused on the prediction of how many car-sharing vehicles are parked within a specific area at different times. To do this, we analyzed a longitudinal dataset of parking events coming from the APIs of the Italian car-sharing company ``Enjoy''. The processed data consisted of a collection of time series representing the number of parked vehicles in a given area at a given time. To predict the evolution of these time series, we used a Maximum Entropy (ME) approach, which requires the identification of relevant observables from the data and uses these observables to define probability density functions depending on some parameters. Maximum Entropy modeling has proven to be very effective at studying and determining the activity patterns inside the city (Fig.~\ref{ts_pred}). We compared our model with other models specifically built for time series forecasting. In particular, we used a SARIMA model, a Feed-Forward Neural Network, and a Long Short-Term Memory Neural Network. Maximum Entropy models outperformed SARIMA models w.r.t. all the metrics used in the evaluation. Our model is as predictive as LSTM Neural Networks, but it uses two orders of magnitude fewer parameters, offers interpretability of the parameters, and allows different studies to be performed with the same model.
Finally, we used the statistical model also to identify extreme events affecting urban traffic, such as bad weather conditions or strikes. In conclusion, this work is a first attempt at modeling the inherently complex interactions and causalities that shape the fluxes of people and vehicles within urban spaces. We showed that the interplay of the activity of different areas at different times is far from trivial. Indeed, we found the models derived within the ME framework to accurately reproduce the empirical time series using a mixture of correlations between different areas at different times. Moreover, our finding that linear correlations are enough to predict the mobility patterns (compared with neural networks) is an interesting result that requires additional investigation. In \textbf{Additional Information} we show the prediction ability on three other cities: Rome, Florence, and Turin. Given our results, several research directions can be taken to improve and extend the results: \begin{itemize} \item A more extensive study on the effects of seasonality could help in building better models. Specific season-dependent models could be built by taking into account larger datasets. \item Evaluate how the prediction ability of the Maximum Entropy models depends upon the structure of the city or the resolution of the time series. \item Combining mobility patterns with other city indicators, such as socio-political disparities and economic indicators, could lead to a better model of human distribution. \item The inclusion of non-linear interactions in the ME models could be hard if made by defining standard observables. Instead, \textit{hidden variables} approaches could be taken into account, e.g. Restricted Boltzmann Machines \cite{sutskever2009recurrent}. \item The ME models could be adapted to perform other anomaly detection tasks, i.e.
identifying parts of the time series which are not standard fluctuations of the system \cite{fiore2013network,sun2004mobility}. \end{itemize}
\section{Introduction} \label{sec:intro} Humans are naturally interested in competition. In the buildup to any contest of public interest, spectators have long made a study of predicting the outcome ahead-of-time, through a combination of instinct, prior observations, domain expertise, and, more recently, sophisticated analysis of data from past contests. Media outlets frequently capitalize on this interest by forecasting the winners of upcoming sporting events, entertainment awards, and elections, sometimes months in advance. Until relatively recently, predictions published or broadcast to general-interest audiences were largely the product of seasoned domain experts who either relied entirely on personal insight or, if they incorporated data, rarely approached the task with statistical rigor. This sort of prognostication, particularly in politics, is so popular that, coterminous with the rise of 24-hour cable news stations, the word ``pundit'' took on a new coinage as a professional media forecaster \citep{Chertoff2012ironic}. Despite recent interest by data experts in predicting competition outcomes, pundits dominated the popular market for some time. As the information age has made relevant data across many domains more available without subscription to expensive proprietary databases, data-analytic approaches have been established for generating popular predictions for elections \citep{Rothschild2012obama}, \citep{Linzer2013dynamic}, \citep{Linzer2016votamatic}, \citep{silver2012nov}, \citep{Silver2016who}, sporting outcomes \citep{KerrDineen2017win}, \citep{tango2007book}, \citep{Nguyen2015making} and entertainment awards \citep{King2019approximating}. Given the relatively recent influx of data analytic approaches, a tension can emerge between data-driven models and opinion-driven subjective punditry. For examples in politics, see \cite{Byers2012nate} and \cite{Cohn2017review}. For examples in sport, see \cite{KerrDineen2017win} and \cite{Lengel2018has}.
The purpose of this work is to propose a modeling approach that probabilistically predicts the outcome of upcoming competitions by using both historical data and contemporary expert knowledge. \begin{figure}[ht!] \centering \includegraphics[width=100mm,trim=0 0 0 0]{outline_fig_v01.pdf} \caption[]{Overview of the proposed approach.} \label{fig:method_overview} \end{figure} Most reasonable people would agree that there is predictive value in historical competition data, but there are also usually important contemporary forces that are not trivial to represent in a historical data analysis. To illustrate this dichotomy, consider the upcoming case study in Section \ref{sec:casestudy} where we develop predictions for the winner of the 2019 Academy Award (aka ``Oscar'') for Best Picture using only data available before the 2019 award show. A savvy analyst might gather data from the Directors Guild of America (DGA) awards, which occur before the Academy Awards are announced. This analyst would note the strong association between win-status of Best Director from the DGA, and the eventual win-status of Academy Award for Best Picture. Between 1950 and 2018, the odds ratio for these two variables is $66.6$ ($95\%$ bootstrap confidence interval: $27.6-168.9$). Owing to the strong statistical relationship, and since the DGA announces its winners before the Academy Awards, the DGA award for Best Director is a potentially useful candidate predictor for the Academy Award for Best Picture. Historical predictors such as these are easy to incorporate into statistical models but do not address unique contemporary forces in a given round of the competition. For example, the members of the Academy who elected the winners in 2019 potentially felt resentment towards Netflix for disrupting the traditional Hollywood business model by funding, promoting, and controlling the distribution of their own content \citep{Brody2019my, Lawson2019oscar}. 
Since the 2019 Best Picture nominee \textit{Roma} was distributed and promoted by Netflix, a wise prognosticator would have weighed this contemporary knowledge heavily in their 2019 prediction. Our previous historical analysis suggested \textit{Roma} was the most likely candidate to win in 2019 \citep{Wilson2019we}, but the film \textit{Green Book} ultimately won. Thus, to account for both historical trends and also contemporary expert knowledge going forward, our proposed method incorporates subjective effects alongside the historical analysis as shown in Figure \ref{fig:method_overview}. The method we propose in Section \ref{sec:method} is flexible and applies to competitions (i) for which historical data is available, (ii) that have an upcoming round with known contestants (e.g. the next football game or season, the next award show, the next election), and (iii) for which the number of winners is fixed; we focus on the single-winner case. Since the identity of contestants in upcoming rounds of the competition is usually known in advance, we develop a conditional logistic regression approach for our predictions. Importantly, we develop the notion of ``prospective strata,'' which enables out-of-sample prediction for conditional logistic regression. Traditional conditional logistic regression based on matching does not permit out-of-sample prediction. See Section \ref{sec:method} for further detail on our strategy to extend conditional logistic regression for out-of-sample prediction. While many existing methods focus on binary competitions with only two contestants, we allow for multiple-contestant competitions including award shows, division championships, and election primaries. We take a Bayesian approach in this work, which enables natural incorporation of subjective effects into the analysis.
Since this work is a data journalism-inspired effort, we describe strategies to implement the modeling approach in an interactive web-based format that should be appealing to a broad readership. While standard Markov chain Monte Carlo techniques are readily available to fit our model, we also develop a \textit{maximum a posteriori} (MAP) approach that is computationally fast enough to be satisfying for use in an online application targeted at a broad readership with a primary focus on point estimation. We illustrate the method by re-analyzing data from a recent Time.com piece \citep{Wilson2019we} which attempted to predict the 2019 Best Picture Academy Award winner using standard logistic regression. The new approach showcases an ability to model all future contestants on a probability scale with a sum-to-one constraint and also to include elicited subjective effects in the analysis. In addition to the methodological approach proposed here, we also compare and contrast the publication process between academic peer-reviewed journals and journalism venues. Statisticians would appear to be well-poised to make meaningful contributions to the nascent area of analytic-powered journalism, yet such collaborations are not especially common. From our team's experience, we briefly describe some challenges that face such collaborations and some suggestions to overcome them in Section \ref{sec:disc}. The remainder of this article is organized as follows. Section \ref{sec:method} describes the proposed method. Section \ref{sec:casestudy} revisits the 2019 Academy Awards and illustrates our approach to predicting the winner of Best Picture using data from 1950-2018. Section \ref{sec:disc} includes discussion of broader implications of the proposed method, future plans and directions, and commentary pointed towards facilitating greater cooperation between statisticians and data journalists.
\section{Method} \label{sec:method} Let $\bm{Y}$ be an $N \times 1$ vector of binary outcomes such that elements of $\bm{Y}$ are equal to one for winners and zero otherwise. Let $\bm{X}$ be an $N \times p$ model matrix that contains variables useful for predicting $\bm{Y}$. Logistic regression is probably the most widely used approach to model $\bm{Y}$ as a function of $\bm{X}$ when observed data are available. However, we are interested in modeling historical competitions and predicting prospective outcomes in cases where each round has only one winner. The usual logistic regression approach ignores this ``single winner'' constraint, and thus predicted probabilities of winning are formed in terms of the entire historical data and do not sum to one within any given round of the competition. The authors used this standard approach in a previous analysis of nominees for the 2019 Academy Award for Best Picture \citep{Wilson2019we}. For the purpose of estimating win probabilities, a more satisfying approach is to constrain predicted probabilities to sum to one within each round. For example, if we wish to predict the winner of the Academy Award for Best Picture in a future year, we would like the finalists' predicted probabilities to sum to one so that each probability represents a chance of winning the upcoming contest relative to the specific participants in that round of the contest. We impose the sum-to-one constraint by developing a conditional logistic regression approach that enables inference on historical effects and exploits the known structure of upcoming competitions to enable out-of-sample prediction. Let $k=1,\ldots,K$ index the strata, and let $\bm{Y}^{(k)}$ and $\bm{X}^{(k)}$ represent the subset of the outcome vector and model matrix corresponding to the $kth$ strata, i.e. \[ \bm{Y} = \begin{pmatrix} \bm{Y}^{(1)} \\ \vdots \\ \bm{Y}^{(K)} \end{pmatrix} \text{ and } \bm{X} = \begin{bmatrix} \bm{X}^{(1)} \\ \vdots \\ \bm{X}^{(K)} \end{bmatrix}.
\] In the Academy Awards example, each year is a strata $k$, $\bm{X^{(k)}}$ might include summaries of critical reception, commercial success, and other film-specific accolades. $\bm{Y^{(k)}}$ is then a length $n_k$ vector that contains a single one corresponding to the winner and zeroes elsewhere, i.e. $\sum_{i=1}^{n_k}y_i^{(k)}=1$ where $y_i^{(k)}$ is the outcome for the $ith$ competitor $i=1,\ldots,n_k$ in strata $k$. The conditional likelihood function arises from conditioning on a fixed number of events (one in this work) per strata. The conditional likelihood for the $kth$ strata is \begin{align} \label{stratalike} P\Big(\bm{y}^{(k)}|\bm{\beta}\Big) =P\Big(Y^{(k)}_1=y^{(k)}_1,\dots,Y^{(k)}_{n_k}=y^{(k)}_{n_k}| \sum_{i=1}^{n_k} y_i^{(k)}=1, \bm{\beta}\Big) \\ = \frac{\text{exp}\Big[\sum_{j=1}^p\big(\sum_{i=1}^{n_k}y_i^{(k)}x_{ij}^{(k)}\big)\beta_j\Big]}{\sum_{S(1)}\text{exp}\Big[\sum_{j=1}^p\big(\sum_{i=1}^{n_k}y_i^{*(k)}x_{ij}^{(k)}\big)\beta_j\Big]}, \nonumber \end{align} where $x_{ij}^{(k)}$ is the $jth$ predictor variable for the $ith$ competitor, $\beta_j$ is the coefficient for the $jth$ predictor, and $j=1,\ldots,p$. The $\sum_{S(1)}$ and $y_i^*$ notations \citep[adapted from][]{Agresti2013categorical} in the denominator of (\ref{stratalike}) represent the possible outcomes in the strata such that there is one event and $n_k-1$ non-events in strata $k$. The rules of conditional probability underlie the intuition for summing over all single winner configurations in the denominator of (\ref{stratalike}). Given the regression effects $\bm{\beta}$, we assume independence between strata. Thus the conditional likelihood function is the product of the strata-level conditional likelihood functions \begin{equation} P\Big(Y_1=y_1,\dots,Y_N=y_N|\sum_{i=1}^{n_k} y_i^{(k)}=1\text{ for }k=1,\ldots,K,\bm{\beta}\Big)=\prod_{k=1}^K P\big(\bm{y}^{(k)}|\bm{\beta}\big). 
\label{histlike} \end{equation} The canonical use of conditional logistic regression forms strata by matching subjects post-data collection based on similar characteristics and a fixed number of events occurring in each strata. This is typically done to address confounding and improve parameter estimator properties. For a further review of conditional logistic regression, see \cite{Agresti2013categorical}. In the usual case where subjects are placed into strata via matching post-data collection, prospective out-of-sample prediction is unavailable because there is no natural concept of a strata for individuals who were not matched within the study. For example, out-of-sample patients do not belong to strata for which the number of events is fixed and known, which makes it impossible to directly adapt Equation (\ref{stratalike}) when forming a likelihood. This appears to be the reason for the widespread perception that conditional logistic regression is not available for out-of-sample prediction. Fortunately, for the competitions we describe in this manuscript, the identities of competitors for upcoming competitions are known before the competition takes place. Therefore, data on useful predictor variables can be gathered and the sum-to-one constraint can be imposed on the likelihood for upcoming data. We thus develop the notion of a prospective strata and incorporate its likelihood into (\ref{histlike}) to enable out-of-sample probability prediction for the winner of the upcoming contest. Since the main goal of this work is to probabilistically predict future outcomes, we next formally develop the idea of a prospective strata $C$. Let $C$ represent the $(K+1)th$ strata and $n_C$ represent the number of competitors in this strata. For example, in Section \ref{sec:casestudy} the $Cth$ strata is comprised of the films which were nominated for the 2019 Academy Award for Best Picture. 
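Note that within a single-winner stratum, the conditional likelihood in Eq.~(\ref{stratalike}) reduces to a softmax of the competitors' linear predictors, since the $\sum_{S(1)}$ sum runs over the $n_k$ configurations that place the single win on each competitor in turn. A minimal numerical sketch with illustrative predictors and coefficients (not fitted values from our analysis):

```python
import numpy as np

def stratum_win_probs(X, beta):
    """Win probabilities within one single-winner stratum: the
    conditional likelihood reduces to a softmax over each
    competitor's linear predictor x_i' beta."""
    eta = X @ beta
    eta -= eta.max()        # shift for numerical stability
    w = np.exp(eta)
    return w / w.sum()      # probabilities sum to one within the stratum

# Hypothetical stratum: 4 nominees, 2 predictors (e.g. a DGA-win
# indicator and a critical-reception summary).
X = np.array([[1.0, 0.8],
              [0.0, 0.9],
              [0.0, 0.4],
              [0.0, 0.6]])
beta = np.array([2.0, 1.5])
p = stratum_win_probs(X, beta)
```

The sum-to-one constraint holds by construction, and the nominee with the largest linear predictor receives the highest win probability.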
We know that only one film will win, and $\bm{X}^{(C)}$ were observed between the announcement of nominees in late January and the award show in February of 2019. In cases such as these, the $Cth$ strata likelihood can be formed as: \begin{align} \label{newstratalike} P\big(\bm{y}^{(C)}|\bm{\beta}\big) =P\Big(Y^{(C)}_1=y^{(C)}_1,\dots,Y^{(C)}_{n_C}=y^{(C)}_{n_C}| \sum_{i=1}^{n_C} y_i^{(C)}=1, \bm{\beta}\Big) \\ = \frac{\text{exp}\Big[\sum_{j=1}^p\big(\sum_{i=1}^{n_C}y_i^{(C)}x_{ij}^{(C)}\big)\beta_j\Big]}{\sum_{S(1)}\text{exp}\Big[\sum_{j=1}^p\big(\sum_{i=1}^{n_C}y_i^{*(C)}x_{ij}^{(C)}\big)\beta_j\Big]}. \nonumber \end{align} Under our conditional independence assumption for strata, a full likelihood can be obtained as the product of (\ref{histlike}) and (\ref{newstratalike}). From there, inference on $\bm{\beta}$ and probabilistic prediction for the $Cth$ strata can be performed from a classical or Bayesian perspective. The final aspect of the proposed method involves optional specification of subjective effects for incorporation into the analysis. When upcoming competitions capture the public's interest, enthusiasts, pundits, and prognosticators may wish to incorporate their opinions about the individual competitors in the prospective $Cth$ strata into the analysis formally. Let $Q=n_C-1$, and let $\bm{\phi}=\langle\phi_1,\ldots,\phi_Q\rangle^T$ represent subjective effects specific to the $n_C$ competitors in the prospective $Cth$ strata. Our model is parameterized using $Q$ competitor-specific effects which represent the change in log odds of winning associated with moving from a baseline competitor to the $qth$ competitor, $q=1,\ldots,Q$. This parameterization is similar to the baseline category parameterization for multinomial logistic regression, see e.g. \cite{Agresti2013categorical}. While it is possible to incorporate subjective effects into any strata the user wishes, we focus mainly on subjective effects for the prospective $Cth$ strata, i.e. 
the upcoming competition. Figure \ref{fig:effect_schematic} shows a schematic of how effects are conceptualized in the Academy Award case study. Eliciting subjective effects on a relative log-odds scale is not intuitive for most people. Probability scales are convenient for subjective elicitation \citep{Ohagan2006uncertain}, so we recommend asking users to provide subjective win probabilities for each prospective competitor such that these probabilities sum to one. Let $p_c$ represent the $cth$ prior subjective probability such that $0 \leq p_c \leq 1$ and $\sum_{c=1}^{n_C} p_c=1$ for $c=1,\ldots,n_C$. To obtain $\bm{\phi}$ from these user-specified win probabilities, take the $n_C$th competitor, with probability $p_{n_C}$, as the baseline, $$\phi_q = \log\Big(\frac{p_q}{p_{n_C}}\Big) \text{ for } q=1,\ldots,Q, $$ and $\phi_q$ measures the shift in log-odds of winning between the baseline competitor and the $qth$ competitor, $q=1,\ldots,Q$. \begin{figure}[ht!] \centering \includegraphics[width=135mm,trim=0 0 0 0]{effect_schematic_v001.pdf} \caption[]{Schematic for specification of historical and subjective effects.} \label{fig:effect_schematic} \end{figure} We incorporate subjective effects into the analysis via a mixture model approach. Let \begin{equation} \label{mixture} P\big(\bm{y}^{(C)}|\bm{\beta},\bm{\phi},\omega\big) = \omega P\big(\bm{y}^{(C)}|\bm{\beta}\big) + (1-\omega)P\big(\bm{y}^{(C)}|\bm{\phi}\big), \end{equation} where $\omega$ is a mixing weight that governs how heavily the historical model effects $\bm{\beta}$ are weighed relative to the subjective effects $\bm{\phi}$, and \begin{equation} P\big(\bm{y}^{(C)}|\bm{\phi}\big)= \frac{\text{exp}\Big[\sum_{q=1}^Q\big(\sum_{i=1}^{n_C}y_i^{(C)}z_{iq}^{(C)}\big)\phi_q\Big]}{\sum_{S(1)}\text{exp}\Big[\sum_{q=1}^Q\big(\sum_{i=1}^{n_C}y_i^{*(C)}z_{iq}^{(C)}\big)\phi_q\Big]} \end{equation} is essentially a $Cth$ strata conditional likelihood function based on subjective effects $\bm{\phi}$. 
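The mapping between elicited win probabilities and the baseline-category effects $\phi_q$ is simple enough to sketch directly. The probabilities below are hypothetical elicited values; the round trip back to probabilities confirms the parameterization.

```python
import numpy as np

def phi_from_probs(p):
    """Map elicited win probabilities p_1..p_{n_C} (summing to one)
    to baseline-category log-odds effects phi_q = log(p_q / p_{n_C}),
    q = 1..Q with Q = n_C - 1."""
    p = np.asarray(p, dtype=float)
    return np.log(p[:-1] / p[-1])

def probs_from_phi(phi):
    """Inverse map: the stratum win probabilities implied by phi,
    with the n_C-th competitor as baseline (effect 0)."""
    eta = np.append(phi, 0.0)
    w = np.exp(eta - eta.max())
    return w / w.sum()

# Hypothetical elicitation over four competitors.
p_elicited = np.array([0.5, 0.25, 0.15, 0.10])
phi = phi_from_probs(p_elicited)
p_back = probs_from_phi(phi)
```

Plugging $\bm{\phi}$ back into the subjective stratum likelihood recovers the elicited probabilities exactly, so users can think entirely on the probability scale.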
Thus, for the prospective $Cth$ strata, we model win probability as a function of both the historical prediction mixture component and the subjective component. We envision users of this approach will vary in the extent to which they rely on the historical model versus their own subjective opinion of the competitors. Finally, let the complete data vector $\bm{y}^{(full)}=\langle \bm{y},\bm{y}^{(C)} \rangle^T$, and \begin{equation} \label{fullmod} P\big(\bm{y}^{\text{(full)}}|\bm{\beta},\bm{\phi},\omega\big) = P\big(\bm{y}^{(C)}|\bm{\beta},\bm{\phi},\omega\big) \times \prod_{k=1}^K P\big(\bm{y}^{(k)}|\bm{\beta}\big). \end{equation} The parameters in full likelihood (\ref{fullmod}) are $\bm{\beta}$, $\bm{\phi}$, and $\omega$. Our analysis proceeds in a Bayesian fashion, where $\bm{\beta}\sim N(\bm{0},\sigma^2 I)$ is given a vague proper prior and $\omega$ and $\bm{\phi}$ are subjectively elicited from the audience interested in the outcome of the competition. We briefly discuss the challenge in eliciting suitable variability for $\omega$ and $\bm{\phi}$ here and include a more thorough discussion in Section \ref{sec:disc}. Conceptually, there is no technical difficulty imposing prior distributions on $\omega$ and $\bm{\phi}$, where the Beta and Normal families of distributions seem like a reasonable starting point, respectively. Assessing variability in subjective opinion for the model proposed is an unresolved issue. If prior precision on the $\bm{\phi}$ effects is low, the subjective mixture component will influence the posterior towards uniform probabilities for the competitors. Since we anticipate deploying this model via a web interface among readers who are unfamiliar with the nuances of Bayesian analysis and prior specification, we wish to restrict the required inputs to $\omega$ and $p_c$ for $c=1,\ldots,n_C$. 
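At the level of win probabilities for a one-winner stratum, the mixture acts component-wise: weight $\omega$ on the historical softmax probabilities and $1-\omega$ on the subjective probabilities. A minimal sketch with invented inputs:

```python
import numpy as np

def mixture_win_probs(X, beta, p_subj, omega):
    """Stratum win probabilities under the mixture likelihood:
    omega weighs the historical component P(y|beta), and 1 - omega
    the subjective component P(y|phi); with a single winner each
    component reduces to a vector of win probabilities."""
    eta = X @ beta
    w = np.exp(eta - eta.max())
    p_hist = w / w.sum()
    return omega * p_hist + (1.0 - omega) * np.asarray(p_subj)

# Illustrative inputs -- not estimates or elicited values from the paper.
X_C = np.array([[1., 1., 1.], [1., 0., 0.], [0., 0., 0.]])
beta = np.array([1.0, 0.5, 1.5])        # hypothetical historical effects
p_subj = np.array([0.2, 0.3, 0.5])      # hypothetical elicited probabilities
p_mix = mixture_win_probs(X_C, beta, p_subj, omega=0.6)
```

Setting $\omega=1$ returns the purely historical probabilities and $\omega=0$ returns the elicited ones, matching the two extremes discussed in the text.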
We anticipate that users who do not follow the subject matter or are not confident in their personal expertise would push the $\omega$ value towards one to reduce the impact of their choice of $\bm{\phi}$, while those who trust their instincts over historical trends, or believe the current contest is radically different from all prior ones, would place the $\omega$ value at or near zero and carefully adjust the $\bm{\phi}$ terms. If the user's goal is to favor uniform outcomes among films, we anticipate they would prefer to specify a near-uniform specification of their prior win-probabilities $p_c$ for $c=1,\ldots,n_C$ rather than specifying low precision in the $\bm{\phi}$ terms. Since we suggest deploying our method to a broad readership, it is possible to learn about the distribution of parameters $\omega$ and $\bm{\phi}$ across many subjects empirically. Studying variability in this way may inform choices about prior specification in future studies. We suggest: \begin{itemize} \item[1.] At the participant level, treat user-specified values of $\omega$ and $\bm{\phi}$ as fixed, known constants for the upcoming 2020 Academy Awards. \item[2.] Using an Institutional Review Board-approved protocol, gather empirical data on selected values of $\omega$ and $\bm{\phi}$ for all consenting users \citep[TIME coauthor Wilson has implemented this strategy previously,][]{Wilson2019how}. \item[3.] Report the empirical distributions of $\omega$ and $\bm{\phi}$ so they can be used to inform plausible values of variability in future studies which use this approach. \end{itemize} In Section \ref{sec:casestudy} we use a Metropolis sampler to obtain point estimates and credible intervals for win probabilities for various specifications of $\omega$ and $\bm{\phi}$. 
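A random-walk Metropolis sampler for $\bm{\beta}$ under the $N(\bm{0},\sigma^2 I)$ prior and one-winner conditional likelihoods can be sketched in a few lines. Everything below (the strata data, step size, and prior variance) is illustrative, not the implementation used for the case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, strata, sigma2=10.0):
    """Unnormalized log posterior: N(0, sigma2 I) log prior on beta
    plus the one-winner conditional log-likelihood of each stratum."""
    lp = -0.5 * float(beta @ beta) / sigma2
    for X, winner in strata:
        eta = X @ beta
        m = eta.max()
        lp += eta[winner] - (m + np.log(np.exp(eta - m).sum()))
    return lp

def metropolis(strata, p_dim, n_iter=2000, step=0.4):
    """Random-walk Metropolis on beta; returns the full chain."""
    beta = np.zeros(p_dim)
    lp = log_posterior(beta, strata)
    chain = np.empty((n_iter, p_dim))
    for t in range(n_iter):
        prop = beta + step * rng.standard_normal(p_dim)
        lp_prop = log_posterior(prop, strata)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            beta, lp = prop, lp_prop
        chain[t] = beta
    return chain

# Two hypothetical historical strata: (design matrix, winner index).
strata = [(np.array([[1., 1.], [1., 0.], [0., 0.]]), 0),
          (np.array([[0., 1.], [1., 1.], [0., 0.]]), 1)]
chain = metropolis(strata, p_dim=2)
```

Posterior draws of $\bm{\beta}$ can then be pushed through the stratum softmax to obtain credible intervals for win probabilities, with burn-in and convergence checks handled as usual.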
Since we envision our method will be used primarily via an interactive web application embedded in general interest stories, we compare the Metropolis sampler with a plug-in strategy based on maximum \textit{a posteriori} estimates, where the latter strategy reduces computation times sufficiently for the method to produce real-time point probabilities suitable for interactive computation. Readers can then have the opportunity to explore competition prediction in an interactive Bayesian environment in real-time. \section{Case study: 2019 Academy Award for Best Picture} \label{sec:casestudy} The Academy of Motion Picture Arts and Sciences is a semi-secretive body of film professionals that annually issues awards for meritorious films. The Academy Awards, more popularly known as the Oscars, are given out in many categories, from technical categories like ``sound editing'' to highly coveted honors for the Best Actor and Actress and Best Director, culminating in Best Picture, for which up to ten films can be nominated (expanded from five in recent years). As mentioned in Section \ref{sec:intro}, this case study focuses on predicting the 2019 Academy Award for Best Picture using historical data from 1950-2018 and considering various choices of $\omega$ and $\bm{\phi}$. This case study is a re-analysis of data which were previously used to make probabilistic predictions using standard logistic regression with no subjective effects \citep{Wilson2019we}. The eight films under consideration for Best Picture in 2019 were \textit{A Star is Born}, \textit{Black Panther}, \textit{BlacKKKlansman}, \textit{Bohemian Rhapsody}, \textit{Green Book}, \textit{Roma}, \textit{The Favourite}, and \textit{Vice}. For this analysis, we rely on the same historical model and predictors we used previously \citep{Wilson2019we} so that we may compare our subjective Bayesian conditional logistic regression approach to the standard unconditional logistic regression analysis used previously. 
For this exercise, we gathered historical data on all Oscar nominees for Best Picture since 1950. Nominees are announced in late January each year and awards are announced towards the end of February. The Directors Guild of America (DGA), which is unaffiliated with the Academy, announces its nominees in early January and declares winners in late January. Thus, the Academy Award nominations for Best Picture candidates in other categories, like whether a potential winner also generated a Best Actress nomination, as well as the DGA nominations and winners, can be used as candidate predictors of winning Best Picture in the same year. The nominees and winners for all relevant awards can be easily harvested from the official Web sites for the awards and fact-checked against sites like the Internet Movie Database for any possible discrepancy. While some of the lesser known technical awards have changed names and precise definitions since 1950, the data set is remarkably consistent across 69 years. We considered 47 possible input variables for each strata, including a list of other Academy Awards for which the film was also nominated in the same year and a small list of awards given by other organizations, like the Directors Guild, that have consistently announced awards before the Academy's. To collect the data, we gathered a candidate list of potential variables by examining the pages for every past Best Picture winner on the Internet Movie Database (imdb.com), which includes a comprehensive list of nominations and wins for everything from the Academy Awards to the Golden Globes, the British Academy of Film and Television Arts (BAFTA), the Directors Guild of America, all the way down to the Dallas-Fort Worth Film Critics Association. 
Only those societies that nominated (and sometimes awarded) films before the Academy Awards ceremony in a consistent manner back at least to 1950 were considered, which reduced the variables to nominations for other Academy Awards in the same year; the Golden Globes; the BAFTAs; and the Directors Guild of America, all of which began in the 1940s. In some cases, like the Golden Globes, the winners are announced prior to the Oscars Ceremony, so each award provided two variables: whether a film was nominated and whether it won. The data on each film's nomination and win status, when relevant, was gathered from each award society's official website and spot-checked against both IMDB and the Open Movie Database API (http://www.omdbapi.com). There was no evidence of disagreement in the historical data. Only awards that have been consistently granted for the same qualifications since 1950 were considered, which eliminated some recognitions of technical achievement that were not yet invented in 1950. The original analysis was simplistic, using two-stage model selection based on BIC \citep{Schwarz1978estimating} and including only candidate predictors that had data dating back to 1950. Compared with academic statistics journals, the editorial timeline in journalism is much faster, which imposes constraints on the scope of analyses. See Section \ref{sec:disc} for more discussion on this issue. We ultimately selected three binary variables, which we denote $x_1,x_2,x_3$: \begin{itemize} \item[$x_1$:] Whether the film was also nominated for the Oscar for Best Director \item[$x_2$:] Whether it was also nominated for the Oscar for Best Editing \item[$x_3$:] Whether it won top honors from the Directors Guild of America, which is announced before the Oscars \end{itemize} The simplicity of our original model selection approach was motivated in part by the editorial deadline. 
We needed to publish our findings with enough lead time that our article would be of interest to the readership of Time.com. Effective model selection in the context of competition prediction is discussed further in Section \ref{sec:disc}. \begin{figure}[ht!] \centering \includegraphics[width=150mm,trim=0 0 0 0]{Fig1.pdf} \caption[]{Line plots for the case study analysis.} \label{fig:results} \end{figure} Figure \ref{fig:results} shows the posterior predicted probabilities for three specific choices of $\bm{\phi}$ for various $\omega$. The ``GB prior'' (top left) assigns \textit{Green Book} (the eventual winner) 80 \% of the prior probability and splits the remainder among the seven other candidates. The ``U prior'' (top right) assigns each of the eight films $12.5\%$ of the prior probability of winning. The ``NR prior'' (bottom left) reflects a disposition of someone who thinks \textit{Roma} is unlikely to win (perhaps due to Netflix's role producing the film), where \textit{Roma} has a one percent prior probability of winning and each other film has a uniform share of the remainder. This analysis was conducted using the MAP strategy described in Section \ref{sec:method}. Table \ref{Table1} in the Appendix includes probabilities shown via characters in Figure \ref{fig:results}. Table \ref{Table2} in the Appendix includes point estimates and credible intervals for the same priors based on an MCMC sampler. The user's choice of $\omega$ and $\bm{\phi}$ governs the extent to which the historical model component is weighed against subjective opinion in the production of win probabilities. The left side of each horizontal axis corresponds to the prior probabilities in each of the ``GB,'' ``U,'' and ``NR'' priors. Moving from left to right corresponds to an increase in $\omega$ and hence more weight on the historical model. At $\omega=1$, the prior probabilities on films are completely outweighed by the historical model. 
In addition to showing how various prior choices affect win probabilities, line plots are useful for ``\textit{post-mortem}-style'' analyses, e.g. a user who favored their own opinions at around $\omega=0.7$ would have needed 80 \% prior probability favoring \textit{Green Book} in order to conclude that the eventual winner was a more likely candidate than \textit{Roma}, which captures the plurality of posterior probability based on this particular historical model. The astute reader will notice that in the ``U'' and ``NR'' settings, \textit{Green Book} and \textit{Bohemian Rhapsody} are tied. This is because they share equal prior weight and happen to also share the same values for $x_1$, $x_2$, and $x_3$. \section{Discussion} \label{sec:disc} \subsection{Practical observations from journalist-statistician collaboration} Nobody cares if you predict the Oscars after the awards have been announced. An interesting aspect of this project that generalizes to many journalism settings is the comparatively short timeline available for analysis relative to other statistical consulting or academic settings. From our experience, a typical start-to-finish timeline is about one-to-two weeks, spanning the conceptualization of the project, acquisition and organization of the data, data analysis and diagnostics, articulation of conclusions, and writing sufficient to appear in the appropriate venue. Deadlines are hard, and falling behind leads to missed opportunities. In the original \cite{Wilson2019we} analysis, there was a certain viable division of labor in the undertaking. Because data journalists invest a great deal of time learning to quickly and responsibly collect, format and fact-check datasets, we were able to start with a reasonably clean dataset of relevant awards. If all partners work in a common computing environment, it is possible to work off the same set of scripts aided by copious comments. 
Yet, collaboration between academic statisticians and reporters who specialize in data-driven stories is not as common as it might be, given the wide disparity in deadlines and expectations. We aspire to offer some insight into how this gap can be bridged. Academic statisticians undergo a peer review process that is expected to take months for each article. These peer-reviewed articles are published at the discretion of the academic journal editor, who relies on feedback from associate editors and multiple reviewers. By contrast, journalism editors typically assign an article ahead-of-time and are expecting the journalist to submit text in anywhere from a few days to under an hour, depending on the breadth and depth of the story. At the same time, the journalists who work primarily with quantitative sources--often called ``data journalists'' or ``computational journalists''--have neither the expertise nor the same burden of producing new research as academics. Still, given that, as established, a general-interest audience has a keen interest in predictions, any sort of partnership between our two fields benefits all parties by lending exposure to high-quality, innovative models and greatly enhancing the sophistication of the reporting. That said, there are aspects of the original analysis for Time.com that were simplified in order to accommodate our rigid timeline. For example, the original analysis was based on standard logistic regression, which is unappealing here as the predicted probabilities are in the context of the entire historical sample and do not sum to one within any given year. The timeline prohibited the extent of cross-validation in the model selection phase that we would have preferred, which is something that lingers as perhaps the biggest regret. Finally, no subjective effects were incorporated. The methodological developments in Section 2 were not possible on the original timeline. 
This last point--the absence of any ability for readers to enter subjective effects--is an area of exciting future development. Long experience has demonstrated that interactivity, through the common Web tools like sliders and dropdowns, is a highly effective way to engage readers and attract a wide audience--not to mention giving readers the opportunity to explore the functionality of the underlying model. \subsection{Future directions} In this work we have proposed a statistical modeling approach that enables the researcher to obtain probability predictions for the winner of upcoming competitions based on historical data and, optionally, subjective inputs. The method is valuable since it can be applied to any competition in which (i) historical data is available, (ii) the number of winners and identities of participants in an upcoming round of the competition are known in advance, and (iii) subjective input can be elicited from experts to supplement the historical effects. The model can ``sit on top of'' other (not necessarily probabilistic) predictions by incorporating these into historical data where appropriate, or using them in the subjective specification aspect of the approach. Thus, the method would appear to be suitable for predicting the outcomes of award shows and sporting events. The entertainment-based nature of these endeavors makes this a useful exercise as a public-facing opportunity for the readership in these areas to learn about subjective Bayesian approaches to data analysis. Further, an online interactive interface system, when paired with an Institutional Review Board-approved protocol, will allow for the collection of human data which can be used to explore subjective elicitation in the readership audience for sports and movies. 
The vast majority of work on subjective elicitation focuses on studying \textit{expert} opinion \citep{Ohagan2006uncertain}; thus the information system the authors hope to produce for Time.com for the upcoming 2020 Oscars will perhaps be the first look at large-scale elicitation effects within our targeted readership. Obtaining the distribution of these effects is an important next step towards refining the uncertainty with which the subjective probabilities and the mixing weight $\omega$ described in Section \ref{sec:method} are expressed. While much of the discussion in this paper has been focused on predicting upcoming competitions, the subjective machinery can be used \textit{post-mortem} to gauge e.g. the required degree of prior belief that would be necessary to ``move an incorrect historical prediction to the correct eventual winner.'' Beyond the entertainment domain of sport and film, the method could potentially be useful for the study of elections, particularly given the still unresolved debate over why so few models or experts correctly predicted the 2016 U.S. presidential contest. Since the method is Bayesian and we anticipate deploying our approach in web-enabled interfaces for a broad readership, this exercise provides an opportunity to educate the layperson about Bayesian methods. The value in this is greater than advocating for a specific inferential paradigm, as the importance of incorporating human judgment in data-driven approaches becomes more salient by the day. This method can be viewed as a sort of ``training ground'' for non-technical experts to grapple with these issues. While the method relies on an extension of conditional logistic regression and hence grows out of the biostatistics tradition, outputs from black-box machine learning (ML) algorithms can be incorporated into the subjective specification aspect of our approach, or the historical predictors where ML algorithms are applied to the entire corpus of historical data. 
Our method can thus utilize historical data, subjective expertise, and modern prediction output to generate probability predictions for the outcome of upcoming competitions in a variety of human endeavors. The currently proposed method does have drawbacks and opportunities for extension and improvement. First, we have illustrated our method pre-supposing a known set of predictors in the $X$ matrix. This serves our narrative well since we are chiefly highlighting a clever use of conditional logistic regression in settings where subjective opinion is likely to be both valuable and difficult to fully incorporate into $X$. However, choosing among candidate predictors and model formulations is fundamentally an exercise in model selection. Thus, model selection methodology could be further developed in this context, perhaps using recent results for automatic model selection-consistent mixture $g$ priors \citep[see e.g.][for an overview]{Li2018mixtures}. Competitions frequently vary from round-to-round. Our method assumes that historical effects are constant in time. For example, the sophistication and importance of the passing game and its effects on the composition and approach of opposing defenses in American football have surely changed over the decades. Academy Award judges today are not necessarily operating identically to those from decades past. A final future direction would be to consider modeling dynamic changes to the underlying judging process across strata. \bibliographystyle{chicago}
\section{Introduction} The COVID-19 pandemic has left its marks in the world of sports, forcing in February-March 2020 the full-stop of nearly all sports-related activities around the globe. This implied that the national and international sports leagues were abruptly stopped, which had dramatic impacts especially in the most popular sport, football (or soccer). Indeed, the absence of regular income through ticket selling, television money and merchandising around live matches entailed that a large majority of professional clubs were no longer able to pay their players and other employees \citep{sky2020, kicker2020}. Professional football having become such a big business, this also had an impact on other people whose main income was related to football matches \citep{Arsenal}. And, last but not least, the fans were starving for their favourite sport to resume. However, the very intense and uncertain times made it a tough choice for decision-makers to take the risk of letting competitions go on, especially as it would be without stadium attendance for the first weeks or possibly months. Consequently, some leagues never resumed. The Dutch Eredivisie was the first to declare that it would not be continued, on April 25 2020, followed by the French Ligue 1 on April 28 and the Belgian Jupiler Pro League on May 15. The German Bundesliga was the first professional league to get restarted, followed by the English Premier League, Spanish Primera Division and Italian Serie A. The leagues that decided not to resume playing were facing a tough question: how to evaluate the current season? Should it simply be declared void, should the current ranking be taken as final ranking, or should only a part of the season be evaluated? Either decision would have an enormous financial impact given the huge amount of money at stake. Which team would be declared champion? 
Which teams would be allowed to represent their country at the European level and hence earn a lot of money through these international games, especially in the Champions League? Which teams would be relegated and, as a potential consequence, be forced to dismiss several employees? Moreover, the broadcasting revenue is allocated on the basis of final standings. Different countries reacted differently: the Eredivisie had neither a champion nor relegated teams, but the teams qualified for the European competitions were decided based on the ranking of March 8. The Ligue 1 and the Jupiler Pro League, on the other hand, also declared a champion and relegated teams on the basis of the ranking from the moment their seasons got stopped. However, the Jupiler Pro League had to go back on this decision and the relegation was nullified. The Ligue 1 based their final standing on the ratio of points earned per game, since not all teams had played an equal number of games when the season got stopped. Obviously, several teams were not pleased by such decisions, considering them to be unfair \citep{Holroyd2020} because, inevitably, this favoured those teams that still had to play the strongest opponents in the remaining matches over those that were looking forward to a rather light end-of-season. This naturally raises the question of how to find a more balanced, scientifically sound way to evaluate the final standing of abruptly ended seasons. It also represents a bigger challenge than evaluating in a fair way an abruptly stopped single game, where for instance cricket has adopted a mathematical formula known as the Duckworth-Lewis-Stern method \citep{DLSmethod}. The literature addressing the challenging question of how a stopped season should be evaluated in the most objective way is fuelled by proposals made after the outbreak of COVID-19. 
The first and third authors of this paper used the current-strength based rankings by \citet{LeWiVa19}, a statistical model based on the bivariate Poisson distribution and weighted maximum likelihood, to provide all possible final standings of the Belgian Jupiler Pro League together with their associated probabilities. Their results were summarized in a mainstream journal \citep{coronaBelgie}. \citet{Guyon2020} proposed an Elo-based approach applied to the English Premier League. \citet{LaSp20} suggested an eigenvector-based approach, while \citet{csato2020coronavirus} discussed general criteria that a fair ranking should fulfil in such a situation and proposed the generalized row sum method. Recently, \cite{go20} used a statistical model to determine a ranking based on the expected total number of points per team. In this paper, we investigate the extent to which a relatively simple stochastic model can serve the purpose of producing fair final standings for prematurely stopped round-robin type football competitions. Our original approach goes as follows. We construct a stochastic soccer model that is fitted to the played matches and then used to simulate the remainder of the competition a large number of times, thus yielding for every team the probabilities of reaching each possible final rank. This output is much richer in terms of information than giving only the most likely or the expected ranking. This also explains the terminology for our model, namely Probabilistic Final Standing Calculator, which we abbreviate as PFSC. 
For each model, the probabilistic final standing of a not yet ended season is obtained by simulating the remaining matches 100,000 times, which gives us for every team the probabilities of reaching each possible place in the final standing. It is not appropriate to compare the predictions of these models on the 2019-2020 competitions that were resumed after the break, since those matches were played under different circumstances, including the absence of fans. It has been shown \citep{Fischer2020} that these changed conditions could influence team performances, by lowering the effect of the home advantage. Therefore we rather compare the three models on the basis of the three preceding seasons of the five most important European football leagues (England, Spain, Germany, Italy and France), which we stopped artificially after every match day. Our evaluation of the models' performance is done in two ways: by means of the Rank Probability Score (RPS) \citep{epstein1969scoring} and the Tournament Rank Probability Score (TRPS) \citep{ekstrom2019evaluating}; see Section~\ref{sec:eval} for their definitions. From this comparison, we can see at which point in time the PFSC is able to catch up with the two high-performing but more complicated prediction models. The reader may now wonder why we do not use any of these more elaborate models as the PFSC; the reason is that we wish to propose a handy tool that sports professionals can indeed use without the need for long computation times or large data collections. In the same vein, we will also make the PFSC freely available in the software $\mathtt{R}$ \citep{R2020}. The remainder of the paper is organized as follows. In Section~\ref{sec:methods} we describe our PFSC along with the two alternative models, as well as the two model performance evaluation metrics. Section~\ref{sec:results} then presents the results of this broad comparison. 
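For reference, the match-level RPS compares the cumulative forecast distribution over the ordered outcomes (home win, draw, away win) with the cumulative observed outcome; the TRPS extends the same idea to full tournament standings. A minimal sketch of the match-level score, with hypothetical forecasts:

```python
import numpy as np

def rank_probability_score(probs, outcome):
    """Rank probability score for one match: the mean of the squared
    differences between the cumulative forecast distribution over the
    ordered outcomes and the cumulative observed outcome (a point
    mass at index `outcome`). Lower is better; a perfect forecast
    scores 0."""
    probs = np.asarray(probs, dtype=float)
    obs = np.zeros_like(probs)
    obs[outcome] = 1.0
    cum_diff = np.cumsum(probs - obs)[:-1]
    return np.sum(cum_diff**2) / (len(probs) - 1)

# Hypothetical forecasts for a match won by the home team (index 0).
rps_good = rank_probability_score([0.7, 0.2, 0.1], 0)  # confident, correct
rps_bad  = rank_probability_score([0.1, 0.2, 0.7], 0)  # confident, wrong
```

Because it works on the cumulative distribution, the RPS penalizes putting probability on an away win more heavily than on a draw when the home side wins, which is why it is preferred over the Brier score for ordered match outcomes.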
In Section~\ref{sec:France}, we illustrate the advantages of our PFSC by analyzing the French Ligue 1 season 2019-2020 and discussing how fairer decisions could be obtained on the basis of our PFSC. We conclude the paper with final comments in Section~\ref{sec:final}. \section{Methods} \label{sec:methods} In this section, we start by explaining the PFSC (Section~\ref{sec:bPois}) and the two benchmark models (Sections~\ref{sec:ranks} and \ref{sec:pm_ratings}), followed by a description of the two evaluation measures for comparison (Section~\ref{sec:eval}). In what follows, we suppose to have a total of $n$ teams competing in a round-robin type tournament of $M$ matches. \subsection{The PFSC: a bivariate Poisson-based model} \label{sec:bPois} For modelling football matches, the PFSC will make use of the bivariate Poisson distribution. Building on the original idea of \cite{Ma82} to model football match outcomes via Poisson distributions, the bivariate Poisson distribution has been popularized by \cite{KaNt03}. Let $Y_{ijm}$ stand for the number of goals scored by team $i$ against team $j$ ($i,j\in \{1,...,n\}$) in match $m$ (where $m \in \{1,...,M\}$) and let $\lambda_{ijm}\geq0$ resp. $\lambda_{jim}\geq0$ be the expected number of goals for team $i$ resp. $j$ in this match. The joint probability function of the home and away score is then given by the bivariate Poisson probability mass function \begin{align*} &{\rm P}(Y_{ijm}=x, Y_{jim}=y) = \frac{\lambda_{ijm}^x \lambda_{jim}^y}{x!y!} \exp(-(\lambda_{ijm}+\lambda_{jim}+\lambda_{C})) \sum_{k=0}^{\min(x,y)} \binom{x}{k} \binom{y}{k}k!\left(\frac{\lambda_{C}}{\lambda_{ijm}\lambda_{jim}}\right)^k, \label{bivpoissonDens} \end{align*} where $\lambda_{C}\geq0$ is a covariance parameter representing the interaction between both teams. This parameter is kept constant over all matches, as suggested in \cite{LeWiVa19}, who mentioned that models where this parameter depends on the teams at play perform worse. 
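The probability mass function above can be evaluated directly. The following sketch (with arbitrary parameter values) also verifies numerically that it sums to one and that $\lambda_C=0$ recovers the product of two independent Poisson distributions.

```python
import math

def bivpois_pmf(x, y, lam1, lam2, lamC):
    """Bivariate Poisson pmf with marginal means lam1 + lamC and
    lam2 + lamC and covariance lamC, following the standard
    parameterization used in the text."""
    base = math.exp(-(lam1 + lam2 + lamC)) \
        * lam1**x / math.factorial(x) \
        * lam2**y / math.factorial(y)
    s = sum(math.comb(x, k) * math.comb(y, k) * math.factorial(k)
            * (lamC / (lam1 * lam2))**k
            for k in range(min(x, y) + 1))
    return base * s

# Arbitrary illustrative parameters; a 40x40 grid captures essentially
# all of the mass for means of this magnitude.
total = sum(bivpois_pmf(x, y, 1.4, 1.1, 0.1)
            for x in range(40) for y in range(40))
```

Note that for this pmf the expected number of goals for each team is $\lambda_{ijm}+\lambda_C$, so the shared component adds to both marginal means as well as supplying the covariance.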
Note that $\lambda_{C}=0$ yields the Independent Poisson model. The expected goals $\lambda_{ijm}$ are expressed in terms of the strengths of team~$i$ and team~$j$, which we denote $r_i$ and $r_j$, respectively, in the following way: $\log(\lambda_{ijm})=\beta_0 + ({r}_{i}-{r}_{j})+h\cdot \rm{I}(\mbox{team $i$ playing at home})$, where $h$ is a real-valued parameter representing the home effect and is only added if team~$i$ plays at home, and $\beta_0$ is a real-valued intercept indicating the expected number of goals $e^{\beta_0}$ if both teams are equally strong and play on a neutral ground. The strengths $r_1,\ldots,r_n$ can take both positive and negative real values and are subject to the identification constraint $\sum_{i=1}^nr_i=0$. Over a period of $M$ matches (which are assumed to be independent), this leads to the likelihood function \begin{equation}\label{lik} L = \prod_{m=1}^{M}{\rm P}(Y_{ijm}=y_{ijm}, Y_{jim}=y_{jim}), \end{equation} where $y_{ijm}$ and $y_{jim}$ stand for the actual number of goals scored by teams $i$ and $j$ in match $m$. The unknown values of the strength parameters $r_1,\ldots,r_n$ are then computed numerically as maximum likelihood estimates, that is, in such a way that they best fit a set of observed match results. \cite{LeWiVa19} established that the bivariate Poisson model and its Independent counterpart are the best-performing maximum likelihood based models for predicting football matches. We refer the interested reader to Section 2 of \cite{LeWiVa19} for more variants of the bivariate Poisson model, as well as for Bradley-Terry type models where the outcome (win/draw/loss) is modelled directly instead of as a function of the goals scored. Using the bivariate Poisson model in the final standing prediction works as follows. The parameters $\lambda_C$, $\beta_0$, $h$ and the strength parameters $r_1,...,r_n$ are estimated using the matches played so far in the current season. 
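This estimation step can be sketched as follows; for brevity the sketch uses the independent-Poisson special case ($\lambda_C=0$) and imposes the constraint $\sum_{i=1}^n r_i=0$ by expressing the last strength through the others (the names \texttt{fit\_strengths} and \texttt{nll}, and the use of \texttt{scipy}, are our choices, not part of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def fit_strengths(matches, n_teams):
    """ML estimation of (beta0, h, r_1..r_n) from a list of matches
    (home_id, away_id, home_goals, away_goals), independent-Poisson case.
    The last strength is minus the sum of the others, so that sum(r) = 0."""
    def nll(theta):
        beta0, h, r_free = theta[0], theta[1], theta[2:]
        r = np.append(r_free, -r_free.sum())  # identification constraint
        total = 0.0
        for i, j, gi, gj in matches:
            log_li = beta0 + r[i] - r[j] + h   # home team i
            log_lj = beta0 + r[j] - r[i]       # away team j
            total -= gi * log_li - np.exp(log_li) + gj * log_lj - np.exp(log_lj)
        return total
    res = minimize(nll, np.zeros(n_teams + 1), method="BFGS")
    r = np.append(res.x[2:], -res.x[2:].sum())
    return res.x[0], res.x[1], r
```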
Next, these parameters are used to simulate 100,000 times the remaining matches, by sampling the number of goals for each team in each match from the corresponding bivariate Poisson distribution. For each simulated end of season, a final standing is created based on the played and simulated matches, taking into account the specific rules of the leagues. The probabilistic final standing is then calculated by averaging the results over all 100,000 simulations, giving each team a probability to reach every possible rank. Note that \cite{go20} also used the bivariate Poisson distribution as their statistical model, but they only calculate expected ranks and not the complete probabilistic picture as we do. This model is relatively simple compared to the benchmark models that we describe below, but it has some nice properties that make it perfectly suited for determining the final standing of a prematurely stopped competition. First, the PFSC only takes into account match results, so data requirements are benign. Second, the PFSC only takes into account matches of the current season, so there is no bias to teams that performed well in the previous season(s). Third, each played game has the same weight in the estimation of the team strengths. These three properties make this method a fair way to evaluate an unfinished football season. On top of this, the code for the model can easily be executed in a short time. \subsection{Current-strength based team ratings}\label{sec:ranks} The first benchmark model is an extension of the previous model. The idea of \cite{LeWiVa19} was to use a weighted maximum likelihood, where the weight is a time depreciation factor $w_{time,m}>0$ for match $m$, resulting in \begin{equation*}\label{lik2} L = \prod_{m=1}^{M}\left({\rm P}(Y_{ijm}=y_{ijm}, Y_{jim}=y_{jim})\right)^{w_{time,m}}. 
\end{equation*} The exponentially decreasing time decay function is defined as follows: a match played $x_m$ days back gets a weight of \begin{equation*}\label{smoother} w_{time,m}(x_m) = \left(\frac{1}{2}\right)^{\frac{x_m}{\mbox{\small Half period}}}. \end{equation*} In other words, a match played \emph{Half period} days ago only contributes half as much as a match played today and a match played $3\times$\emph{Half period} days ago contributes 12.5\% of a match played today. This weighting scheme gives more importance to recent matches and leads to a so-called current-strength ranking based on the estimated strength parameters of the teams. Another difference is that this model uses two years of past matches to estimate the team strengths. The half period is set to 390 days, as this was found to be the optimal half period by \cite{LeWiVa19} when evaluated on 10 seasons of the Premier League. The predicted probabilities for each rank in the final standing are obtained in the same way as in the PFSC. \subsection{Plus-minus ratings}\label{sec:pm_ratings} Plus-minus ratings, the second benchmark model, are based on the idea of distributing credit for the performance of a team onto the players of the team. We consider the variant of plus-minus proposed by \citet{PaHv19}. Each football match is partitioned into segments of time, with new segments starting whenever a player is sent off or a substitution is made. For each segment, the set of players appearing on the pitch does not change, and a goal difference is observed from the perspective of the home team, equal to the number of goals scored by the home team during the segment minus the number of goals scored by the away team. The main principle of the plus-minus ratings considered is to find ratings such that the sum of the player ratings of the home team minus the sum of the player ratings of the away team is as close as possible to the observed goal difference. 
Let $S$ be the set of segments, $P_{h(s)}$ respectively $P_{a(s)}$ the set of players on the pitch for the home respectively away team during segment $s \in S$. Denote by $g(s)$ the goal difference in the segment as seen from the perspective of the home team. If a real-valued parameter $\beta_j$ is used to denote the rating of player $j$, the identification of ratings can be expressed as minimizing \begin{align} \sum_{s \in S} \left( \sum_{j \in P_{h(s)}} \beta_j - \sum_{j \in P_{a(s)}} \beta_j - g(s) \right)^2, \nonumber \end{align} the squared difference between observed goal differences and goal differences implied by the ratings of players involved. To derive more reasonable player ratings, \citet{PaHv19} considered a range of additional factors, which we also consider here: 1) Segments have different durations, so the ratings in each segment are scaled to correspond with the length of the segment. 2) The home team has a natural advantage, which is added using a separate parameter. 3) Some segments have missing players, either due to players being sent off by the referee or due to injuries happening after all allowed substitutions have been made. These situations are represented using additional variables corresponding to the missing players, while remaining player ratings are scaled so that their sum corresponds to an average rating for a full team. 4) The player ratings are not assumed to be constant over the whole data set, but rather to follow a curve that is a function of the age of players. This curve is modelled as a piece-wise linear function which is estimated together with the ratings by introducing corresponding age adjustment variables. 5) Each segment is further weighted by factors that depend on the recency of the segment and the game state. A complete mathematical formulation of the plus-minus rating system was provided by \citet{PaHv19}. To move from plus-minus player ratings to match predictions, an ordered logit regression model is used. 
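Stripped of the refinements 1)--5), the least-squares identification above fits in a few lines (synthetic segment data; \texttt{plus\_minus} is our illustrative name, not code from \citet{PaHv19}):

```python
import numpy as np

def plus_minus(segments, n_players):
    """Basic least-squares plus-minus: fit player ratings to per-segment
    goal differences.  Each segment is (home_ids, away_ids, goal_diff)."""
    X = np.zeros((len(segments), n_players))
    g = np.zeros(len(segments))
    for s, (home, away, diff) in enumerate(segments):
        X[s, list(home)] = 1.0    # home players enter with +1
        X[s, list(away)] = -1.0   # away players enter with -1
        g[s] = diff
    beta, *_ = np.linalg.lstsq(X, g, rcond=None)  # minimum-norm solution
    return beta
```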
This model derives probabilities for a home win, a draw, and an away win based on a single value associated with each match: the sum of the ratings of the players in the starting line-up for the home team, minus the sum of the ratings of the players in the starting line-up for the away team, plus the home field advantage of the corresponding competition. As with the previous benchmark, the remaining matches of a league are simulated. However, some slight differences can be observed. Following \citet{SaHv19}, the starting line-ups of the teams are also simulated, based on the players available in the squads. Each player has a 10\% chance of being injured or otherwise unavailable for a given match. Subject to these random unavailable players, the best possible starting line-up is found, consisting of exactly one goalkeeper, and at least three defenders, three midfielders, and one forward. Based on this, probabilities of a home win, draw and away win are derived using the ordered logit regression model. Since this does not provide a goal difference, but just a result, the simulation further assumes that losing teams score no goals and drawing teams score one goal, whereas the number of goals for winning teams is selected at random from 1 to 3. \subsection{Metrics to evaluate and compare the three models} \label{sec:eval} We have provided three proposals for predicting the final standings of abruptly stopped football seasons. These are evaluated by predicting, for several completed seasons from different top leagues, the remaining matches after artificially stopping each season after every match. The evaluation of their predictive abilities is done at two levels: single match outcomes and final season standings. For the former, we use the Rank Probability Score as metric, for the latter the Tournament Rank Probability Score.
The Rank Probability Score (RPS) is a proper scoring rule that preserves the ordering of the ranks and places a smaller penalty on predictions that are closer to the observed data than on predictions that are further away from the observed data \citep{epstein1969scoring,gneiting2007strictly,CoFe12a}. The RPS is defined for a single match as $$ RPS =\frac{1}{R-1}\sum_{r=1}^R\left(\sum_{j=1}^r(o_j-x_j)\right)^2 $$ where $R$ is the number of possible outcomes, $o_j$ the empirical probability of outcome $j$ (which is either 1 or 0), and $x_j$ the forecasted probability of outcome $j$. The smaller the RPS, the better the prediction. The RPS is similar to the Brier score, but measures the accuracy of a prediction differently when there are more than two ordered categories, by using the cumulative predictions in order to be sensitive to the distance. Let us give some further intuition about this metric. Let 1 stand for home win, 2 for draw and 3 for away win, so obviously $R=3$. The formula of the RPS can be simplified to $$ \frac{1}{2}\left((o_1-x_1)^2+(o_1+o_2-x_1-x_2)^2+(1-1)^2\right)=\frac{1}{2}\left((o_1-x_1)^2+(o_3-x_3)^2\right), $$ which shows, for instance, that a home win predicted as a draw is less severely penalized than a predicted away win would be in that case. \citet{ekstrom2019evaluating} extended the RPS to final tournament or league standings, and consequently termed it TRPS for Tournament RPS. The idea is very similar to the RPS, as the TRPS compares the cumulative prediction $X_{rt}$ that team $t$ will reach at least rank $r$ (with lower values of $r$ signifying a better ranking) to the corresponding empirical cumulative probability $O_{rt}$. The latter also attains only two values: the column of $O_{rt}$ corresponding to team $t$ is 0 for all ranks better than the rank team $t$ actually obtained, and 1 from that rank onward.
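The simplified three-outcome formula can be checked numerically with the following small sketch (outcome order: home win, draw, away win; names are ours):

```python
def rps(probs, observed):
    """Rank Probability Score for one match.  probs and observed are
    ordered as (home win, draw, away win); observed is one-hot."""
    cum, total = 0.0, 0.0
    for o, x in zip(observed, probs):
        cum += o - x          # running cumulative difference
        total += cum**2
    return total / (len(probs) - 1)
```

With a home win observed, a sure-draw prediction scores $0.5$, whereas a sure-away-win prediction scores $1.0$, reflecting the penalty ordering just discussed.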
Consequently, the TRPS is defined as $$ TRPS=\frac{1}{T}\sum_{t=1}^T\frac{1}{R-1}\sum_{r=1}^{R-1}(O_{rt}-X_{rt})^2, $$ where $T$ is the number of teams and $R$ is the total number of possible ranks in a tournament or league. A perfect prediction will yield a TRPS of 0, while the TRPS increases as the prediction worsens. The TRPS is a proper scoring rule; it is very flexible and can handle partial rankings. It retains for league predictions the desirable properties of the RPS, and as such assigns lower values to predictions that are almost right than to predictions that are clearly wrong. \section{Results}\label{sec:results} The PFSC, the current-strength team ratings and the plus-minus player ratings are evaluated in terms of correctly predicting match outcomes and the final league table. The evaluation is conducted on the top leagues of England, France, Germany, Italy, and Spain, for the 2016-2017, 2017-2018 and 2018-2019 seasons. Each season and league is halted after each match day, and the outcomes of the remaining matches as well as the final league tables are predicted. Figure~\ref{fig:RPSValues} shows the mean RPS of the remaining matches, given the current match day. The figure illustrates that the performances of the current-strength team ratings and the plus-minus player ratings are similar throughout. The PFSC is a much simpler model, and only uses data from the current season. Therefore, its performance is relatively poor at the beginning of the season. However, in most cases the PFSC converges towards the performance of the benchmark methods after around 10 match days, and in all but one of fifteen league-seasons it has caught up after 25 match days. The exception is the 2017-2018 season of the Italian Serie A, where the PFSC leads to worse predictions than the benchmarks throughout. When nearing the end of the season, the RPS is calculated over only a few matches. Therefore, the mean RPS behaves erratically and sometimes increases sharply.
This is because a single upset in one of the final rounds can have a large effect on the calculated RPS values. However, as the plots in Figure~\ref{fig:RPSValues} show, all three methods follow each other closely, indicating that the results that are difficult to predict are equally hard for all methods. In Figure~\ref{fig:TRSPValues} the mean TRPS is shown for each league and season. As the final table becomes easier to predict the more matches have been played, the TRPS converges to zero. Therefore, the difference in performance among the three methods also converges to zero. However, the PFSC has a similar prediction quality as the benchmark methods already somewhere between 10 and 25 match days into the season. Even for the Italian Serie A in 2017-2018, the TRPS is similar for all methods after 30 match days, although the RPS indicated that the predictions of the PFSC are worse for the remaining matches in that particular season. As for the RPS, the performances of the current-strength team ratings and the plus-minus player ratings are very similar. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{RPS_values_new.pdf} \caption{Mean RPS values calculated after each match day for five leagues and three seasons.}\label{fig:RPSValues} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\linewidth]{TRPS_values_new.pdf} \caption{Mean TRPS values calculated after each match day for five leagues and three seasons.}\label{fig:TRSPValues} \end{figure} \section{Discussion}\label{sec:France} We have shown in the previous section that our simple PFSC is comparable in terms of predictive performance at match and final standing levels to the two benchmark models that are more computationally demanding, more time-consuming and require more input data. This establishes the PFSC as a very good candidate for obtaining fair final standings. 
We will now illustrate how our PFSC can be used in practice by decision-makers to reach fairer decisions on how an abruptly stopped season should be evaluated. To this end, we will show how the French Ligue 1 could have been settled after it was abruptly stopped in the 2019-2020 season. The match results were downloaded from the website \url{https://www.football-data.co.uk/}. The code for the PFSC applied on the present case study is available in the online supplementary material. The official ranking of the Ligue 1 is given in Table \ref{table:official}. \begin{table}[ht] \caption{The official ranking of the French Ligue 1 in the 2019-2020 season. The order of the teams is determined by the number of points earned per match.} \label{table:official} \centering \resizebox{\textwidth}{!}{\begin{tabular}{rlccccccccc} \hline & Team & Points & Win & Draw & Loss & Goals & \begin{tabular}{c}Goals\\ against\end{tabular} & \begin{tabular}{c}Goal\\ difference\end{tabular} & Matches & \begin{tabular}{c}Points\\per match\end{tabular} \\ \hline 1 & PSG & 68 & 22 & 2 & 3 & 75 & 24 & 51 & 27 & 2.52 \\ 2 & Marseille & 56 & 16 & 8 & 4 & 41 & 29 & 12 & 28 & 2.00 \\ 3 & Rennes & 50 & 15 & 5 & 8 & 38 & 24 & 14 & 28 & 1.79 \\ 4 & Lille & 49 & 15 & 4 & 9 & 35 & 27 & 8 & 28 & 1.75 \\ 5 & Nice & 41 & 11 & 8 & 9 & 41 & 38 & 3 & 28 & 1.46 \\ 6 & Reims & 41 & 10 & 11 & 7 & 26 & 21 & 5 & 28 & 1.46 \\ 7 & Lyon & 40 & 11 & 7 & 10 & 42 & 27 & 15 & 28 & 1.43 \\ 8 & Montpellier & 40 & 11 & 7 & 10 & 35 & 34 & 1 & 28 & 1.43 \\ 9 & Monaco & 40 & 11 & 7 & 10 & 44 & 44 & 0 & 28 & 1.43 \\ 10 & Strasbourg & 38 & 11 & 5 & 11 & 32 & 32 & 0 & 27 & 1.41 \\ 11 & Angers & 39 & 11 & 6 & 11 & 28 & 33 & -5 & 28 & 1.39 \\ 12 & Bordeaux & 37 & 9 & 10 & 9 & 40 & 34 & 6 & 28 & 1.32 \\ 13 & Nantes & 37 & 11 & 4 & 13 & 28 & 31 & -3 & 28 & 1.32 \\ 14 & Brest & 34 & 8 & 10 & 10 & 34 & 37 & -3 & 28 & 1.21 \\ 15 & Metz & 34 & 8 & 10 & 10 & 27 & 35 & -8 & 28 & 1.21 \\ 16 & Dijon & 30 & 7 & 9 & 12 & 27 & 37 & -10 & 28 & 1.07 
\\ 17 & \mbox{St.} Etienne & 30 & 8 & 6 & 14 & 29 & 45 & -16 & 28 & 1.07 \\ 18 & N\^imes & 27 & 7 & 6 & 15 & 29 & 44 & -15 & 28 & 0.96 \\ 19 & Amiens & 23 & 4 & 11 & 13 & 31 & 50 & -19 & 28 & 0.82 \\ 20 & Toulouse & 13 & 3 & 4 & 21 & 22 & 58 & -36 & 28 & 0.46 \\ \hline \end{tabular}} \end{table} \subsection{Evaluating the French Ligue 1 2019-2020 season} At the time of stopping the competition, each team had played at least 27 matches. From the findings of the previous section, we know that after this number of match days, our PFSC is a competitive model for predicting the remaining matches and the final ranking. Based on the played matches, the teams' strengths were estimated, resulting in the strengths reported in Table \ref{table:ratings}. We can see that Paris Saint-Germain (PSG) is considered by far the strongest team in the league. Surprisingly, Olympique Lyon comes out as the second strongest team, while only standing in 7th place in the official ranking at that time. This could indicate that Lyon had some bad luck during the season. Looking at their match results, we can see that almost all of their defeats were by a margin of just one goal. Only PSG at home managed to get a margin of two goals against Lyon. At the bottom of the table, we find that Toulouse was the weakest team in the league, followed by Amiens, St. Etienne and N\^imes. This is in agreement with the official ranking, up to a slightly different ordering of the teams.
\begin{table} \caption{The estimated ratings $r_i, i=1,\ldots,20$, of the teams in the French Ligue 1, based on the played matches in the 2019-2020 season, obtained via the bivariate Poisson model in our PFSC.} \label{table:ratings} \centering \begin{tabular}{rlr} \hline & Teams & Estimated strengths \\ \hline 1 & PSG & 0.85 \\ 2 & Lyon & 0.28 \\ 3 & Rennes & 0.24 \\ 4 & Marseille & 0.19 \\ 5 & Lille & 0.15 \\ 6 & Bordeaux & 0.15 \\ 7 & Reims & 0.08 \\ 8 & Nice & 0.03 \\ 9 & Montpellier & 0.02 \\ 10 & Monaco & 0.01 \\ 11 & Strasbourg & $-$0.01 \\ 12 & Nantes & $-$0.02 \\ 13 & Brest & $-$0.09 \\ 14 & Angers & $-$0.10 \\ 15 & Metz & $-$0.16 \\ 16 & Dijon & $-$0.17 \\ 17 & N\^imes & $-$0.27 \\ 18 & \mbox{St.} Etienne & $-$0.29 \\ 19 & Amiens & $-$0.32 \\ 20 & Toulouse & $-$0.59 \\ \hline \end{tabular} \end{table} Using these strengths, we have simulated the remainder of the season 100,000 times and, by taking the mean over these simulations, we have calculated the probabilities for each team to reach each possible position, which are summarized in Table \ref{table:PFSC_France}. We can see that PSG would win the league with approximately $100\%$ probability, thanks to their big lead and high team strength. Marseille had a $77\%$ chance of keeping the second position, with some chance of finishing third ($18\%$) or even fourth ($5\%$). Furthermore, we see that Lyon, thanks to their high estimated strength, had the highest probability of being ranked fifth ($29.5\%$). Their frustration with respect to the decision as it was taken officially by the Ligue 1 is thus understandable \citep{Holroyd2020}. At the bottom of the standing, we see that Toulouse was doomed to be relegated, with only a $0.1\%$ chance of not ending at the 19th or 20th place in the league. Amiens still had about a $32\%$ chance of staying in the top division.
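The Monte-Carlo step behind Table~\ref{table:PFSC_France} can be sketched as follows (for brevity this sketch uses independent-Poisson sampling and ranks by total points only, whereas the actual computation applies the league's full tie-breaking rules; function and variable names are ours):

```python
import numpy as np

def standing_probs(points, remaining, beta0, h, r, n_sims=10_000, seed=0):
    """points: current points per team; remaining: (home, away) fixtures.
    Returns a (teams x ranks) matrix of estimated rank probabilities."""
    rng = np.random.default_rng(seed)
    n = len(points)
    counts = np.zeros((n, n))
    for _ in range(n_sims):
        pts = np.array(points, dtype=float)
        for i, j in remaining:
            gi = rng.poisson(np.exp(beta0 + r[i] - r[j] + h))  # home goals
            gj = rng.poisson(np.exp(beta0 + r[j] - r[i]))      # away goals
            if gi > gj:
                pts[i] += 3
            elif gi < gj:
                pts[j] += 3
            else:
                pts[i] += 1
                pts[j] += 1
        for rank, team in enumerate(np.argsort(-pts, kind="stable")):
            counts[team, rank] += 1
    return counts / n_sims
```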
\begin{table}[ht] \caption{The Probabilistic final standing (in percentages) of the Ligue 1 in the 2019-2020 season, according to our PFSC method. Probabilities are rounded to the nearest 1 percent.} \label{table:PFSC_France} \centering \resizebox{\textwidth}{!}{\begin{tabular}{rllllllllllllllllllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline PSG & 100 & & & & & & & & & & & & & & & & & & & \\ Marseille & & 77 & 18 & 5 & & & & & & & & & & & & & & & & \\ Rennes & & 12 & 41 & 36 & 8 & 2 & 1 & & & & & & & & & & & & & \\ Lille & & 11 & 37 & 39 & 9 & 3 & 1 & & & & & & & & & & & & & \\ Lyon & & & 3 & 10 & 30 & 18 & 13 & 9 & 6 & 4 & 3 & 2 & 1 & & & & & & & \\ Reims & & & & 3 & 12 & 15 & 15 & 13 & 12 & 10 & 8 & 6 & 4 & 2 & & & & & & \\ Montpellier & & & & 3 & 11 & 13 & 13 & 12 & 12 & 10 & 9 & 7 & 5 & 3 & 1 & & & & & \\ Bordeaux & & & & 2 & 7 & 11 & 12 & 12 & 12 & 11 & 10 & 9 & 7 & 4 & 1 & & & & & \\ Nice & & & & 1 & 7 & 10 & 12 & 12 & 12 & 12 & 11 & 9 & 7 & 4 & 1 & & & & & \\ Strasbourg & & & & 1 & 6 & 9 & 11 & 11 & 12 & 12 & 11 & 10 & 9 & 5 & 2 & 1 & & & & \\ Monaco & & & & 1 & 4 & 7 & 9 & 10 & 12 & 12 & 13 & 12 & 10 & 6 & 3 & 1 & & & & \\ Nantes & & & & 1 & 4 & 7 & 8 & 10 & 11 & 12 & 12 & 12 & 11 & 7 & 3 & 1 & & & & \\ Angers & & & & & 2 & 4 & 5 & 7 & 9 & 11 & 13 & 15 & 15 & 11 & 5 & 2 & 1 & & & \\ Metz & & & & & & & & 1 & 1 & 2 & 4 & 7 & 11 & 21 & 22 & 16 & 10 & 4 & & \\ Brest & & & & & & & & 1 & 1 & 2 & 4 & 6 & 10 & 19 & 23 & 18 & 10 & 5 & 1 & \\ Dijon & & & & & & & & & & & 1 & 2 & 4 & 10 & 16 & 22 & 23 & 15 & 5 & \\ \mbox{St.} Etienne & & & & & & & & & & & 1 & 1 & 3 & 7 & 14 & 22 & 25 & 20 & 7 & \\ N\^imes & & & & & & & & & & & & & 1 & 3 & 6 & 12 & 22 & 36 & 18 & \\ Amiens & & & & & & & & & & & & & & & 1 & 3 & 8 & 20 & 66 & 2 \\ Toulouse & & & & & & & & & & & & & & & & & & & 2 & 98 \\ \hline \end{tabular}} \end{table} Now, how could this table be used by decision-makers to handle the discontinued season? 
One has to decide which team will become the champion, which teams will play in the Champions League and Europa League, and which teams will be relegated to the second division. Regarding the first question, some leagues nowadays have introduced a rule stating that if enough matches are played, the current leader of the season is declared champion. However, this does not take into account the gap between the first and the second in the standing. We would recommend changing the rule, in the sense that a team can only be declared champion if it has more than $C\%$ chance to win the league according to the PFSC ($C$ could \mbox{e.g.} be $80$, but this decision of course has to be made by the leagues). For our example, there is little doubt. PSG was expected to be the winner of the league with an estimated chance of $100\%$, so they should be considered as the champions of the Ligue 1. A similar strategy can be adopted regarding which teams should be relegated to the second division. For the participation in the Champions League and Europa League, the leagues need a determined final standing instead of a probabilistic final standing. We will next show how we can get a determined final standing using our PFSC, and how we can use the PFSC to help determine financial compensations. \subsection{Determined final standing and financial compensations} First we make a determined final standing by calculating the expected rank, using the probabilities. This results in the standing shown in Table \ref{table:determined}. In the example of the French league, we see that PSG gets the direct ticket for the Champions League, while Marseille and Rennes get the tickets for the CL qualification rounds. Lille and Lyon would have received the tickets for the group stage of the Europa League and Reims the ticket for the qualifications. This shows that Nice was one of the teams that got an advantage from the decision of the French league to halt the season.
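The expected ranks used here follow directly from the probabilistic standing; a minimal sketch (ours, not the paper's $\mathtt{R}$ code):

```python
import numpy as np

def expected_ranks(rank_probs):
    """rank_probs: (teams x ranks) matrix whose rows sum to 1; column k
    holds the probability of finishing in rank k+1.  Returns the expected
    rank per team and the determined standing (best expected rank first)."""
    ranks = np.arange(1, rank_probs.shape[1] + 1)
    exp_rank = rank_probs @ ranks
    return exp_rank, np.argsort(exp_rank, kind="stable")
```

Applied to Marseille's row of Table~\ref{table:PFSC_France} (probabilities $0.77$, $0.18$ and $0.05$ for ranks 2--4), this reproduces the expected rank of $2.28$ reported in Table~\ref{table:determined}.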
However, transforming our probabilistic standing into a determined standing causes a number of teams to be (dis)advantaged. For example, in Table \ref{table:determined} we can see that the expected rank of Rennes is 3.55, which is the third-best (i.e., third-lowest) expected rank. Assigning Rennes to the third rank is therefore an advantageous outcome. Lille, on the other hand, has an expected rank of 3.62, which is only the fourth-best expected rank. Lille is therefore at a disadvantage when being assigned to rank 4. This issue could be solved by using a compensation fund. Assume that the expected profit (in particular prize money for the league placement and starting and prize money from the Champions League and Europa League) of a team ending in rank $i$ is equal to $P_i$. The expected profit for \mbox{e.g.} Marseille would be $0.77\cdot P_2+0.18\cdot P_3+0.05\cdot P_4$. In the determined ranking, they end as second, so they will receive $P_2$. In fact, they receive too much, since they had no chance of ranking higher than second, but they had a reasonable chance of finishing third or even fourth. To compensate for this, they should hand over $P_2-(0.77\cdot P_2+0.18\cdot P_3+0.05\cdot P_4)=0.18\cdot(P_2-P_3)+0.05\cdot(P_2-P_4)$ to the compensation fund. This fund will then be used to compensate teams that are disadvantaged by the establishment of a determined ranking. There will still be the difficulty of estimating the expected profit from reaching a certain rank (\mbox{e.g.}, a team reaching the Europa League will have further merchandising advantages besides the profit mentioned above as compared to the team classified just outside of these ranks), but we believe that this tool could be very useful for decision-makers in determining which teams have received an advantage or disadvantage from an early stop of the league, and how to compensate for this. \begin{table}[ht] \caption{Determined final standing, using the PFSC probabilities.
This standing could be used to decide which teams will play in the Champions League and Europa League.} \label{table:determined} \centering \begin{tabular}{rlr} \hline Rank & Team & Expected rank \\ \hline 1 & PSG & 1.00 \\ 2 & Marseille & 2.28 \\ 3 & Rennes & 3.55 \\ 4 & Lille & 3.62 \\ 5 & Lyon & 6.52 \\ 6 & Reims & 8.17 \\ 7 & Montpellier & 8.51 \\ 8 & Bordeaux & 9.11 \\ 9 & Nice & 9.20 \\ 10 & Strasbourg & 9.58 \\ 11 & Monaco & 10.06 \\ 12 & Nantes & 10.17 \\ 13 & Angers & 11.08 \\ 14 & Metz & 14.38 \\ 15 & Brest & 14.53 \\ 16 & Dijon & 16.00 \\ 17 & \mbox{St.} Etienne & 16.40 \\ 18 & N\^imes & 17.34 \\ 19 & Amiens & 18.52 \\ 20 & Toulouse & 19.98 \\ \hline \end{tabular} \end{table} \section{Conclusion}\label{sec:final} In this paper we proposed a novel tool, the Probabilistic Final Standing Calculator, to determine the most likely outcome of an abruptly stopped football season. Unlike other recent proposals, we provide probabilities for every team to reach each possible rank, which is more informative than only providing the single most likely or expected final ranking. We have shown the strength of our PFSC by comparing it to two benchmark models that are based on much more information, and yet our PFSC exhibits similar performance except when a season is stopped extremely early, which is more a theoretical than a practical concern (a season stopped after less than a third of the games played would certainly be declared void). Our evaluation has been done at both the match level (via the RPS) and the final-standing level (via the TRPS). Using the concrete example of the 2019-2020 season of the French Ligue 1, we have shown how our PFSC can be used, including for a fair division of the money shares. We hope that our PFSC will help decision-makers in the future to reach fairer decisions that will not lead to the same level of dissatisfaction and controversies that one could observe in various countries in the 2019-2020 season.
The idea of the PFSC can also be applied, up to minor modifications, to several other types of sports tournaments. We conclude this paper with a historical remark. The problem treated here, namely finding a fair way to determine final league standings if one cannot continue playing, goes back to the very roots of probability theory. The French writer Antoine Gombaud (1607-1684), famously known as Chevalier de M\'er\'e, was a gambler interested by the ``problem of the points'': if two players play a pre-determined series of games, say 13, and they get interrupted at a score of, say, 5-2, and cannot resume the games, how should the stake be divided among them? The Chevalier de M\'er\'e posed this problem around 1654 to the mathematical community, and the two famous mathematicians Blaise Pascal (1623-1662) and Pierre de Fermat (1607-1665) accepted the challenge. They addressed the problem in an exchange of letters that established the basis of our modern probability theory \citep{Dev10}. \section{Declarations} Funding: The authors have no funding to report\\ Conflicts of interest/Competing interests: The authors have no conflict of interest to declare.\\ Availability of data and material: Data needed for the application\\ Code availability: The code for the application is available in a supplementary file.
\section{Introduction} Our ability to point out and describe an athlete's performance relies on a solid sports understanding of his/her actions and all of the elements around him/her. Understanding any sport requires acknowledging its metrics, i.e., the elements that modify the scoreboard. In ultra-distance races, the runner's assessment depends on the cumulative race time (CRT). As in cycling events, athletes pass through several specific locations where their CRTs are registered. Ultimately, the first runner to reach the finish line wins the race. More precisely, recording the split time at different locations provides an insight into the runner's performance on a given partial track. There has been significant progress in sports event detection in the last few years. Several applications have benefited from this progress, including summarization and highlight extraction \cite{He20,Gao20}. For instance, tracking a ball in individual sports (e.g., tennis \cite{Qazi15}) or in team sports (e.g., basketball \cite{Wei15}) can be connected to a later semantic step such as a scoreboard update. In contrast, non-ball-based sports (e.g., running or cycling) have received less attention. Two fundamental challenges when analyzing videos are 1) the localization of the relevant moments in a video and 2) the classification/regression of these moments into specific categories/data. In our problem, the former is dataset-dependent, while the latter amounts to regressing a value from the input footage. These problems are even more complicated in ultra-distance races, where highly dynamic scenes and the race span represent challenges during the acquisition process. This work aims to overcome these issues by proposing a novel pipeline that depends neither on the acquisition instant nor on the scenario. First, the footage input is preprocessed to focus on the runner of interest (see Figure \ref{fig:intro_img}).
Then, an I3D Convolutional Neural Network (I3D) is modified to extract the most meaningful features at the end of its convolutional base. These features then feed a newly designed regressor for the time regression task. Our model is inspired by recent work in action recognition using I3D \cite{Joefrie19,Wang19,Dai21,Freire22}. Our second contribution is a complete ablation study on how single and double input streams affect regression. We have evaluated our model on a dataset collected to evaluate Re-ID methods in real-world scenarios. The collection contains $214$ ultra-distance runners captured at different recording points along the track. The achieved results are remarkable (a mean absolute error (MAE) as low as 18.5 minutes), and they have also provided interesting insights. The first is the importance of context when considering the two-stream 3D input, which outperforms every single-stream 3D input. Another insight is related to the importance of contextual information for the pre-trained I3D and the limitations observed during transfer learning. In this sense, more informative contexts are up to $50$\% better than a restrictive context (i.e., bounding boxes). Finally, we have divided the dataset observations into their timing quartiles. In this regard, all the models have shown a moderate MAE degradation, up to $39$\% for the best model. The paper is organized into five sections. The next section discusses some related work. Section \ref{sec:pipeline} describes the proposed pipeline. Section \ref{sec:results} reports the experimental setup and the experimental results. Finally, conclusions are drawn in Section \ref{sec:con}.
According to their semantic significance, sports video content can be divided into four layers: raw video, object, event, and semantic layers \cite{Shih18}. The pyramid base comprises the input videos, from which objects can be identified in a superior layer. In this sense, key objects represented in the video clips can be identified by extracting objects (e.g., players \cite{Tianxiao20}) and tracking objects (e.g., the ball \cite{Shaobo19} and players \cite{JungSoo20}). The event layer represents the action of the key object. Several actions combined with scene information can generate an event label, which represents the related action or interaction among multiple objects. Works on action recognition \cite{Freire22}, trajectory prediction \cite{Teranishi20} and highlight detection \cite{Gao20} fall within the scope of this layer. The top layer is the semantic layer, representing the semantic summarization of the footage \cite{Cioppa18}. Since our purpose is regressing a CRT from athlete footage, we aim to obtain an estimated outcome from the athlete's actions, a value that defines his/her performance. In this regard, several sports collections have been gathered from international competition events to boost research on automatic sports quality assessment in the recent past. Some of the latest datasets are the MTL-AQA dataset (diving)~\cite{Parmar19}, the UNLV AQA-7 dataset (diving, gymnastic vaulting, skiing, snowboarding, and trampoline)~\cite{Parmar19wacv} and the Fis-V dataset (skating)~\cite{Xu20}. All these datasets are collected in an indoor and non-occlusive environment or, in the case of the UNLV AQA-7 dataset (snowboarding/skiing), in a visually uncluttered environment: black sky (night) and white ground (snow). Moreover, the collections mentioned above show professional athletes. Our work considers an ultra-distance race collection with a wide set of variations in terms of lighting conditions, backgrounds, occlusions, accessories, and athletes' proficiency.
In ultra-distance races, each runner is provided with a small RFID tag worn on the runner's shoe, as a wristband, or embedded in the runner's bib. RFID readers are placed throughout the course and are used to track the runner's progress \cite{Chokchai18}. Despite these measures, some runners cheat by taking shortcuts \cite{Rosie80}. Consequently, the primary research carried out around this sport is re-identification. The community has developed different proposals, such as mathematical models \cite{Goodrich21}, invasive devices similar to RFID \cite{Lingjia19}, and non-invasive models based on the runner's appearance \cite{Klontz13} or his/her bib number \cite{Ben-Ami12-bmvc, Carrascosa20}. \begin{figure*}[t] \begin{minipage}[t]{1\linewidth} \centering \includegraphics[scale=0.7]{pipeline_crt.pdf} \end{minipage} \caption{\textbf{The proposed pipeline for the CRT regression problem.} The devised process comprises two main parts: the footage pre-processing block and the regression block. In the former, the tracker assists by helping to neutralize the runner's background activity (SB or BB). The latter implies the generation of two streams of data (RGB and Flow) that feed two I3D ConvNets whose outputs can be either added or concatenated ($\Omega$). The resulting tensor acts as an input to the regressor, completing the regression. \label{fig:pipeline}} \end{figure*} In the last two decades, gait analysis has been explored in different ways. The human pose representation plays a crucial role in evaluating a performed action. Lei et al. have identified three primary pose representations for action quality assessment~\cite{Lei19}. First, the challenge lies in finding robust features from pose sequences and establishing a method to measure the similarity of pose features~\cite{Wnuk10}. Second, the skeleton-based representations encode the relationship between joints and body parts~\cite{Freire20}.
However, the estimated skeleton data can often be noisy in realistic scenes due to occlusions or changing lighting conditions~\cite{Carissimi18}, especially in an ultra-distance race in wild conditions. More recently, deep learning methods have been applied to analyze gait. Some authors, like Parmar, suggest that in this representation approach, convolutional neural networks (CNN) can be combined with recurrent neural networks (RNN) due to the temporal dimension provided by the video input~\cite{Parmar17}. Another network typically used for quality assessment in sports is the 3D convolution network (C3D). This deep neural network can learn spatio-temporal features that define action quality~\cite{Tran14}. The Residual Network (ResNet) has played a key role in the action recognition task in the last few years. Thus, Hara et al. proposed 3D ResNets, improving the recognition performance by increasing the depth of the ResNet layers \cite{Hara18}. Both C3D and 3D ResNets operate on a single 3D stream input. Precisely, our work can be framed among the deep learning methods for assessing the athlete's action quality suggested by Lei et al. We make use of the I3D ConvNet, which has been used for human action recognition (HAR) in the past~\cite{Freire22,Carreira17}. Unlike C3D and 3D ResNets, the I3D ConvNet operates on two 3D stream inputs. In summary, the work presented in this paper regresses the race participant's CRT considering an I3D network. This regression provides meaningful semantic information connected to the runner's performance. Since the considered dataset provides a scenario in the wild, we evaluate our pipeline after pre-processing the raw video sequence used as input to remove noisy context elements. \section{Proposed architecture} \label{sec:pipeline} As can be seen in Figure \ref{fig:pipeline}, this work proposes a sequential pipeline divided into two major blocks.
The I3D ConvNet requires footage input for inference, and it processes two streams: RGB and optical flow. Consequently, any object (athlete, race staff, cars) not of interest must be removed from the scene. Precisely, the first block pre-processes the raw input data to focus on the runner of interest. The output of this block can comprise the runner bounding box (BB) or the runner with a still background (SB), see Figure \ref{fig:extract}. Figure \ref{fig:pipeline} also shows how the first block feeds the I3D convolutional base. Then, each pre-processed footage is divided into the two streams mentioned earlier to feed a pre-trained I3D ConvNet. Finally, the extracted features feed a regressor to infer the CRT. \subsection{Footage pre-processing} \label{sec:footpress} A few years ago, Deep SORT \cite{Wojke17} showed significant advances in tracking performance. Then, SiamRPN+ \cite{zhang2019deeper} achieved remarkable robustness by taking advantage of a deeper backbone like ResNet. Additionally, Freire et al. \cite{Freire22} have combined both Deep SORT and SiamRPN+ to increase the tracking precision when illumination changes or partial occlusions happen, for HAR in some sports collections. Recently, ByteTrack has outperformed Deep SORT as a multi-object tracker by achieving consistent improvements \cite{zhang2021bytetrack}. This tracker uses the YOLOX detector \cite{ge2021yolox} as the backbone, and a COCO pre-trained model \cite{Lin14} as the initialized weights. It has exhibited remarkable occlusion robustness and outperformed the previously described trackers. Therefore, we have used ByteTrack to track the runner of interest. As mentioned before, a context adjustment pre-processing step can generate the scenarios considered in our experiments, defined as BB and SB.
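As a concrete illustration of the two scenarios, the per-frame masking can be sketched as follows. This is a minimal sketch with toy grey-value frames, not the authors' implementation; the bounding box is assumed to come from the tracker, and all names and values are illustrative:

```python
# Illustrative sketch (not the authors' code) of the BB / SB context
# adjustment: outside the tracked runner's bounding box, each frame is
# replaced either by a neutral value (BB) or by the footage's average
# frame (SB). Frames are toy 2D grids of grey values; the bounding box
# (top, left, bottom, right) is assumed to come from the tracker.

def average_frame(frames):
    """Pixel-wise mean over all frames of the clip."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def adjust_context(frame, bbox, background):
    """Keep pixels inside bbox, take the rest from `background`."""
    top, left, bottom, right = bbox
    return [[frame[y][x] if top <= y < bottom and left <= x < right
             else background[y][x]
             for x in range(len(frame[0]))]
            for y in range(len(frame))]

# Toy 3-frame clip, 4x4 pixels, runner in the 2x2 centre box.
clip = [[[v] * 4 for _ in range(4)] for v in (10, 20, 30)]
bbox = (1, 1, 3, 3)
avg = average_frame(clip)              # constant 20.0 for this toy clip
neutral = [[0] * 4 for _ in range(4)]

sb = [adjust_context(f, bbox, avg) for f in clip]      # still background
bb = [adjust_context(f, bbox, neutral) for f in clip]  # bounding box only
```

Gluing the kept bounding-box pixels onto either background realizes the union in Eq. (1) below on a per-frame basis.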
Given a runner $i$ bounding box area $BB_i(t, RP)$ at a given time $t \in [0,T]$ and at a recording point $RP \in [0,P]$, the new pre-processed footage $F'_i[RP]$ can be formally denoted as follows: \begin{equation} \label{eq:contextremoval} F'_i[RP]=BB_{i}(t, RP) \cup \tau(RP) \end{equation} where $\tau(RP)$ stands for the footage average frame or a neutral frame, generating the SB and BB scenarios, respectively (see Figure \ref{fig:extract}). \begin{figure}[bt] \begin{minipage}{1\linewidth} \centering \includegraphics[scale=0.47]{contextfree.pdf} \end{minipage} \caption{\textbf{Context adjustment.} Since the I3D ConvNet processes every action in a scene, we have cleaned the context to focus on the runner of interest.} \label{fig:extract} \end{figure} \subsection{Runners Features Extraction} \label{sec:i3d} A few years ago, Carreira and Zisserman proposed the Inflated 3D Convolutional Neural Network (I3D ConvNet) based on a two-stream network \cite{Carreira17} (see Figure \ref{fig:model}). This deep neural network applies a two-stream structure for RGB and optical flow to the Inception-v1 \cite{Szegedy15} module along with 3D CNNs. In the Inception module, the input is processed by several parallel 3D convolutional branches whose outputs are then merged back into a single tensor. Moreover, a residual connection reinjects previous representations into the downstream data flow by adding a past output tensor to a later output tensor. The residual connection aims to prevent information loss along the data-processing flow. Nowadays, the I3D ConvNet is one of the most common feature extraction methods for video processing \cite{Freire22}. The approach presented in this work exploits the model pre-trained on the Kinetics dataset as a backbone \cite{Carreira17}. In our case, we have used the Kinetics dataset \cite{Kay17}, which includes $400$ action categories.
Consequently, the I3D ConvNet acts as a feature extractor that encodes the network input into a $400$-dimensional feature vector, see Figure \ref{fig:model}. However, since we are not looking for HAR but for finer-grained action recognition, we also considered the features obtained from a previous layer to provide more insights into the athlete's movement. In our work, we have removed the last inception block, directly performing an average pooling and a max pooling to reduce dimensionality to the 1024 most meaningful features. In the I3D ConvNet, the optical flow and RGB streams are processed separately through the architecture shown in Figure \ref{fig:model}. Then, the 400 output logits of the two streams are added before applying a softmax layer for HAR. In the experiments conducted in Section \ref{sec:results} we have considered these 400 logits, but also the concatenation of the streams, devising an 800-dimensional feature representation. The same happens with the feature representation vectors from the previous inception block. In this case, the resulting embeddings contain 1024 and 2048 elements for the sum and the concatenation, respectively. \begin{figure}[bt] \begin{minipage}{1\linewidth} \centering \includegraphics[scale=0.5]{model_2.pdf} \end{minipage} \caption{\textbf{The Inflated Inception-V1 architecture.} Three main blocks can be appreciated in the diagram: convolutional layers, max/average-pool layers, and inception blocks \cite{Carreira17}. The considered outputs for our experiments can be appreciated at the bottom of the diagram. The end-to-end architecture provides 400 embeddings, whereas we also process the information from an upper layer (1024 embeddings). Both streams, RGB and optical flow, are processed through this architecture.} \label{fig:model} \end{figure} A trainable regressor was built on top of this network.
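The two fusion strategies just described, addition (which keeps the embedding size) and concatenation (which doubles it), reduce to the following sketch; the placeholder vectors merely stand in for the actual I3D stream outputs:

```python
# Minimal sketch of the two stream-fusion strategies: element-wise
# addition keeps the embedding size, concatenation doubles it. The
# placeholder 400-d vectors stand in for the real I3D stream outputs.

def fuse_add(rgb, flow):
    """Element-wise sum of the two stream embeddings."""
    assert len(rgb) == len(flow)
    return [a + b for a, b in zip(rgb, flow)]

def fuse_concat(rgb, flow):
    """Concatenation of the two stream embeddings."""
    return list(rgb) + list(flow)

rgb_logits = [0.1] * 400   # placeholder for the RGB stream output
flow_logits = [0.2] * 400  # placeholder for the optical-flow stream

added = fuse_add(rgb_logits, flow_logits)            # 400 features
concatenated = fuse_concat(rgb_logits, flow_logits)  # 800 features
```

The same two operations apply unchanged to the 1024-dimensional penultimate-layer embeddings, yielding the 1024- and 2048-dimensional variants.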
In this sense, five different regression methods were tested during the conducted experiments: Linear Regression, Random Forest, Gradient Boosting, SVM, and a Multi-Layer Perceptron. \section{Experiments and results} \label{sec:results} This section is divided into two subsections related to the setup and results of the designed experiments. The first subsection describes the dataset adopted for the proposed problem and technical details, such as the method used to tackle the problem. The achieved results are summarized in the second subsection. \subsection{Experimental setup} We design our evaluation procedure to address the following questions. Is the presented approach good enough to infer a CRT? What is the role of input context in facilitating an accurate regression process? How well does the model infer different timing percentiles? Finally, we validate our design choices with ablation studies. \begin{figure}[bt] \begin{minipage}{1\linewidth} \centering \includegraphics[scale=0.5]{samples_crt_.pdf} \end{minipage} \caption{\textbf{CRT distribution.} The considered dataset is balanced in terms of CRT distribution. Half of the runners' samples perform relatively well, whereas the rest perform average or worse.} \label{fig:sampledist} \end{figure} \textbf{Dataset}. We have partially used the dataset published by Penate et al.~\cite{Penate20-prl}. The mentioned dataset was collected during Transgrancanaria (TGC) 2020, an ultra-distance running competition. This collection comprised up to six running distances to complete the challenge, but the annotated data covers just participants in the TGC Classic, who must cover 128 kilometers in less than 30 hours. The original dataset contains annotations for almost 600 participants at six different recording points. However, just 214 of them were captured after km 84. Thus, just the last three recording points are considered in the experiments below.
Given the different performances among participants, the gap between leaders and last runners increases along the track, while the number of participants is progressively reduced. For each participant, seven-second clips at 25 fps captured at each recording point are fed to the footage pre-processing block described in Section \ref{sec:pipeline}. In this sense, we have used the same fps suggested by Carreira and Zisserman \cite{Carreira17}. Finally, the results presented in this section refer to the average MAE over 20 repetitions of a 10-fold cross-validation. On average, 410 samples are selected for training, and the remaining 46 samples are used for the test (Figure \ref{fig:sampledist} shows the CRT distribution). \textbf{Method}. A runner $i$ observation $o_i[RP] \in \mathbb{O}$ at a recording point $RP \in [0,P]$ consists of a pre-processed footage $F'_i[RP]$ and a CRT $\phi_i[RP]$. Additionally, we have normalized the runners' CRTs as shown in Equation \ref{eq:norm}. \begin{equation} \phi'_i[RP] = \frac{\phi_i[RP] - \min_i(\phi_i[0])}{\max_i(\phi_i[P])} \label{eq:norm} \end{equation} Therefore, $\phi'_i[RP] \in [0,1]$. Our task is to find an end-to-end regression method that minimizes the following objective: \begin{equation} \min\;L(\phi'_i[RP], \psi_i[RP]) = \frac{1}{N}\sum_{j=1}^{N}(\phi'_i[RP]_j-\psi_i[RP]_j)^2 \label{eq:minim} \end{equation} where $\psi_i[RP]$ stands for the runner $i$ predicted value at a recording point $RP$, observing just up to seven seconds of the runner's movements. \subsection{Results} As stated in Section \ref{sec:pipeline}, we have considered two different scene contexts (BB and SB). Additionally, we have considered different I3D ConvNet models by adding (400 embeddings) or concatenating (800 embeddings) the last inception block output, and by adding (1024 embeddings) or concatenating (2048 embeddings) the penultimate inception block output.
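Before turning to the measured errors, the CRT normalization of Eq. (2) and the mean-squared objective of Eq. (3) can be sketched as follows; the toy CRT values (in minutes) and all names are illustrative:

```python
# Sketch of Eq. (2) (CRT normalisation) and Eq. (3) (mean-squared
# objective). crt[i][rp] holds runner i's cumulative race time, in
# minutes, at recording point rp; the values below are made up.

def normalise(crt):
    """Eq. (2): shift by the fastest CRT at the first recording point
    and scale by the largest CRT at the last one."""
    first = min(row[0] for row in crt)
    last = max(row[-1] for row in crt)
    return [[(t - first) / last for t in row] for row in crt]

def mse(target, predicted):
    """Eq. (3): mean squared error over the N test observations."""
    return sum((t - p) ** 2 for t, p in zip(target, predicted)) / len(target)

crt = [[480, 700, 900],    # a fast runner (8 h at the first point)
       [600, 900, 1200]]   # a slower one (20 h at the last point)
norm = normalise(crt)      # all values now lie in [0, 1]
```

Note that the fastest runner at the first recording point maps to exactly 0, and every other normalized CRT stays within the unit interval, as claimed after Eq. (2).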
Furthermore, we have reported results using the following regression methods: Linear Regression (LR), Random Forest (RF), Gradient Boosting (GB), SVM, and a Multi-Layer Perceptron (MLP). \begin{table}[!htbp] \renewcommand{\arraystretch}{1.3} \centering \caption{\textbf{Mean absolute error (MAE) achieved by each model.} The first column shows the model configuration, whereas the rest of the columns show each regression method's MAE. Lower is better.} \label{tab:mae_exp} \begin{tabular}{|l|c|c|c|c|c|} \hline \#Embedd.-Stream-Cont. & LR & RF & GB & SVM & MLP\\ \hline 400-RGB-BB & $0.059$ & $0.061$ & $0.058$ & $0.051$ & $0.043$\\\hline 400-Flow-BB & $0.069$ & $0.078$ & $0.076$ & $0.055$ & $0.052$\\\hline 400-RGB+Flow-BB & $0.060$ & $0.067$ & $0.059$ & $0.052$ & $0.040$\\\hline 800-RGB$\cup$Flow-BB & $0.057$ & $0.061$ & $0.058$ & $0.046$ & $0.034$\\\hline \hline 400-RGB-SB & $0.036$ & $0.034$ & $0.030$ & $0.025$ & $0.028$\\\hline 400-Flow-SB & $0.065$ & $0.069$ & $0.068$ & $0.055$ & $0.050$\\\hline 400-RGB+Flow-SB & $0.041$ & $0.040$ & $0.039$ & $0.032$ & $0.027$\\\hline 800-RGB$\cup$Flow-SB & $0.036$ & $0.036$ & $0.031$ & $0.030$ & $0.019$\\\hline \hline 1024-RGB-BB & $0.072$ & $0.059$ & $0.059$ & $0.056$ & $0.036$\\\hline 1024-Flow-BB & $0.077$ & $0.069$ & $0.067$ & $0.061$ & $0.042$\\\hline 1024-RGB+Flow-BB & $0.067$ & $0.057$ & $0.055$ & $0.055$ & $0.038$\\\hline 2048-RGB$\cup$Flow-BB & $0.069$ & $0.060$ & $0.057$ & $0.054$ & $0.033$\\\hline \hline 1024-RGB-SB & $0.039$ & $0.034$ & $0.031$ & $0.028$ & $0.017$\\\hline 1024-Flow-SB & $0.073$ & $0.067$ & $0.062$ & $0.058$ & $0.033$\\\hline 1024-RGB+Flow-SB & $0.041$ & $0.037$ & $0.033$ & $0.032$ & $0.018$\\\hline 2048-RGB$\cup$Flow-SB & $0.040$ & $0.037$ & $0.032$ & $0.031$ & $\bm{0.015}$\\ \hline \end{tabular} \end{table} Table \ref{tab:mae_exp} shows the results achieved by each model configuration. The table is divided into four blocks, with four entries each.
The first block is related to the last-layer embeddings when the BB context is considered, the second block to the last-layer embeddings when the SB context is considered, the third to the penultimate-layer embeddings when the BB context is considered, and the last block refers to the penultimate-layer embeddings when the SB context is considered. To validate the importance of the input stream, we have analyzed the two 3D stream inputs together (RGB+Flow), but also each 3D stream input independently (RGB or Flow). As can be appreciated, individual streams often perform worse than the two-stream model when the best regression method is considered (MLP). Carreira and Zisserman stated that both streams complement each other. In other words, a 3D ConvNet learns motion features from RGB input directly in a feedforward computation. In contrast, optical flow algorithms are somehow recurrent, performing iterative optimization for the flow fields. They conclude that having a two-stream configuration is better, with one I3D ConvNet trained on RGB inputs and another on flow inputs that carry optimized flow information. Table \ref{tab:mae_exp} also highlights the relative importance of the context. As can be appreciated, the SB context consistently outperforms the BB context. Recently, Freire et al. reported a similar performance-loss effect using different contexts with the I3D ConvNet \cite{Freire22}. They reported a 2\% to 6\% accuracy loss in a HAR problem. In our case, the loss is significant, but we cannot compare it to their work due to different metrics and task-related issues. For HAR, background dynamics provide more insights, but they are not useful when regressing an athlete's split time, since we focus on a single runner's activity. In their case, they attributed this accuracy drop to a data loss problem during the context adjustment. However, we attribute it to the characteristics of the pre-trained I3D ConvNet. As mentioned above, this network has been trained on the Kinetics dataset, which includes $400$ action categories.
It has, in some sense, learned contextual features during the training process. We further evaluate different I3D ConvNet models based on the number of embeddings considered by the regressor. The classical I3D ConvNet computes the addition (resulting in 400 and 1024 embeddings) for HAR tasks. However, the reported results suggest that concatenation (800 and 2048 embeddings) provides further insight for CRT regression. Moreover, the improvement is up to 20\% in the best model reported in Table \ref{tab:mae_exp}. The result indicates that providing more features to the regression method enhances performance, no matter the source layer. \begin{table}[!htbp] \renewcommand{\arraystretch}{1.3} \centering \caption{\textbf{Comparison of different architectures on the dataset used in the present work.} The first column shows the considered pre-trained architectures, whereas the second column shows the MAE. Lower is better.} \label{tab:comp_exp} \begin{tabular}{|l|c|} \hline Architecture & MAE\\ \hline C3D \cite{Tran14} & $0.038$\\\hline 3D ResNets-D30 \cite{Hara18} & $0.036$\\\hline 3D ResNets-D50 \cite{Hara18} & $0.033$\\\hline 3D ResNets-D101 \cite{Hara18} & $0.032$\\\hline 3D ResNets-D200 \cite{Hara18} & $0.031$\\\hline I3D-800SB (Ours) & $0.019$\\\hline I3D-2048SB (Ours) & $\bm{0.015}$\\ \hline \end{tabular} \end{table} Finally, the MLP outperforms any other considered regression method. In this sense, we have performed a grid search to find the most suitable architecture. It turns out that a two-layer configuration with batch normalization reported the best result. The rest of the methods are significantly far from the MAE reported by the MLP. However, the SVM reports the runner-up results. Consequently, a simple regressor is good enough for inference from the best embeddings. In terms of minutes, a 0.015 MAE is roughly 18 and a half minutes.
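This back-of-the-envelope conversion can be checked from Eq. (2): assuming the denominator is the 20-hour maximum CRT reported for the dataset (an assumption of this sketch, not the authors' exact computation), a normalized MAE maps back to minutes as follows:

```python
# Undo the Eq. (2) scaling to express a normalised MAE in minutes.
# The 20-hour maximum CRT is taken from the dataset description; this
# is a rough sanity check, not the authors' exact computation.
MAX_CRT_MINUTES = 20 * 60  # last runner's CRT, in minutes (assumed)

def to_minutes(mae_norm, max_crt=MAX_CRT_MINUTES):
    return mae_norm * max_crt

best_mae_minutes = to_minutes(0.015)   # about 18 minutes
```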
Considering that the fastest runner was recorded after 8 hours of CRT, and the last one after 20 hours of CRT, the achieved MAE can be considered a positive outcome. To better compare the proposed pipeline with state-of-the-art proposals, we have included in Table \ref{tab:comp_exp} the best results mentioned above. This table summarizes the performance reported in recent literature on the mentioned dataset. The table includes three major architectures: C3D, 3D ResNets, and the I3D ConvNet. Since 3D ResNets can be configured with different depths, we show several configurations (30, 50, 101, and 200 ResNet layers). As can be appreciated in Table \ref{tab:comp_exp}, the performance difference among 3D ResNets depths is not significant beyond 50 layers. Overall, the I3D ConvNet outperforms the other prior architectures considered for this task. Figure \ref{fig:quantile} analyzes the evolution of the MAE for different configurations over the CRT distribution quartiles. As can be appreciated, the I3D ConvNet configurations exhibit a similar MAE increase along the different quartiles. The results show that the first quartile achieved the best MAE during the experiments. This may occur because of a smaller timing variance within this quartile: top-trained runners are quite consistent, while non-professional runners tend to perform less homogeneously. However, as the race progresses, runners with different paces start to mix across the distant recording points; some runners may be at one recording point while others are still at a previous one. This increased the MAE by 32\% and 51\% for 2048-SB and 800-SB, respectively. Then, the MAE barely differs in the other quartiles until reaching 38\% and 62\% MAE degradation in Q4. Similarly, the performance of 3D ResNets (R3D) and C3D degrades regularly along the different quartiles. Precisely, the results reported in Table \ref{tab:mae_exp} are from Q4.
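A sketch of such a per-quartile breakdown follows; the targets, predictions, and the rank-based quartile assignment are my own simplification, not the authors' exact procedure:

```python
# Hypothetical sketch of a per-quartile MAE analysis: observations are
# split by the quartile of their (normalised) CRT and the MAE is
# computed separately on each group. All data below is made up.

def quartile(value, sorted_values):
    """Index (0..3) of the quartile `value` falls into, by rank."""
    rank = sum(1 for v in sorted_values if v <= value)
    return min(3, (rank - 1) * 4 // len(sorted_values))

def mae_by_quartile(targets, predictions):
    ordered = sorted(targets)
    groups = {q: [] for q in range(4)}
    for t, p in zip(targets, predictions):
        groups[quartile(t, ordered)].append(abs(t - p))
    return {q: sum(e) / len(e) for q, e in groups.items() if e}

targets = [0.1, 0.2, 0.4, 0.5, 0.7, 0.8, 0.9, 1.0]
preds = [0.1, 0.25, 0.35, 0.55, 0.65, 0.85, 0.8, 1.1]
per_q = mae_by_quartile(targets, preds)   # MAE per CRT quartile
```

In this toy example the error grows from the first to the fourth quartile, mirroring the degradation pattern discussed above.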
\begin{figure}[bt] \begin{minipage}{1\linewidth} \centering \includegraphics[scale=0.5]{percentile_crt_.pdf} \end{minipage} \caption{\textbf{Absolute MAE increase.} This graph shows two state-of-the-art architectures and the two best models from Table \ref{tab:mae_exp}, i.e., concatenated embeddings from the last and penultimate layers with the SB context. The degradation increases along the timing percentiles in all models.} \label{fig:quantile} \end{figure} \section{Conclusion} \label{sec:con} This paper presented a novel approach to determine the CRT at a given recording point. The presented pipeline exploits ByteTrack to track the relevant subjects in the scene. This tracker allows us either to remove the runner's context (BB) or to neutralize it by leaving the runner as the only moving object in the scene (SB). Then, an I3D ConvNet is used as a feature extractor to feed several tested regression methods. In this regard, we have analyzed the results of the I3D ConvNet meticulously by splitting and combining the two 3D stream inputs (RGB and Flow). The Multi-Layer Perceptron (MLP) reported the best results in every tested configuration. Contrary to classical HAR techniques, a higher number of I3D ConvNet embeddings (data from the penultimate inception block) provides a better result. Moreover, the outputs of the two I3D ConvNet streams provide an even better result when they are concatenated instead of added. In addition, the reported experiments have demonstrated that context plays a key role during the regression process. The results show that accuracy regularly drops when the BB video clips are considered as input to the regressors. Among the most relevant applications, runner performance monitoring stands out. It can benefit from our proposal and, in general, from any further achievements in the field, relieving the race staff from the need for tiring continuous attention to health issues.
\section*{Acknowledgment} This work is partially funded by the ULPGC under project ULPGC2018-08, the Spanish Ministry of Economy and Competitiveness (MINECO) under project RTI2018-093337-B-I00, the Spanish Ministry of Science and Innovation under project PID2019-107228RB-I00, and by the Gobierno de Canarias and FEDER funds under project ProID2020010024. \bibliographystyle{IEEEtran}
\section{Introduction} This paper deals with two combinatorial aspects related to the so-called Sierpi\'{n}ski gasket. This graph belongs to the class of fractal objects of noninteger Hausdorff dimension. It is realized by a repeated construction of an elementary shape on progressively smaller length scales. The Sierpi\'{n}ski gasket appears in different contexts: analysis on fractals \cite{analysis}, \cite{woess}, \cite{tep}, some physical models such as dimer and Ising models \cite{taiwan}, \cite{ddn}, \cite{noidimeri}, \cite{wagner1} and combinatorics \cite{alberi}, \cite{tutte}. Moreover, the Sierpi\'{n}ski gasket is the limit space of the Hanoi Towers group on three pegs \cite{hanoi}, establishing a connection with the theory of self-similar groups \cite{volo}. One can construct an infinite sequence of finite graphs inspired by the Sierpi\'{n}ski gasket. In this paper we inductively construct such a sequence, getting a natural limit graph which is an infinite marked graph. The vertices of this graph are labelled by infinite words over a finite alphabet of three symbols. This coding has a geometrical meaning: by reading the $n$-th letter of such an infinite word we can construct the next (finite) graph of the finite sequence, and mark it at a precise vertex (see \cite{carpet} for comparison). The results contained in the paper are the following: first, we give a classification, up to isomorphism, of such infinite graphs depending on the corresponding infinite words; then we study the horofunctions of a particular case that we call \textit{standard}. The problem of isomorphism is studied in the context of non-marked graphs, i.e., we construct such infinite marked graphs and then forget the marked vertex and compare them. The study of horofunctions is a classical topic in the setting of $C^{\ast}$-algebras and Cayley graphs of groups, giving rise to the description of the Cayley compactification and the boundary of a group \cite{devin}, \cite{rief}.
It is inspired by the seminal work of Gromov on a suitable definition of the boundary of a metric space \cite{gromov}. Given a proper metric space $X$, one associates with every point of $X$ a continuous real-valued function, in the space endowed with the topology of uniform convergence on compact sets. The topological boundary of this space of functions modulo the constant functions is called the horofunction boundary of $X$ \cite{walsh}. It is immediate that in the case of an infinite graph $G$ with the usual metric $d$ one gets the discrete topology, so every function on it is continuous. This implies that $(G, d)$ is automatically complete and locally compact. Moreover, in our case $G$ is countable, and $d$ is proper since every vertex has finite degree. The fractal structure of the graphs studied in this paper allows us to give a complete description of the horofunctions. \section{Sierpi\'{n}ski type triangles} In this section we define infinitely many marked graphs that can be thought of as approximations of the famous Sierpi\'{n}ski gasket. Each of these graphs is inductively constructed from an infinite word in a finite alphabet. The vertices of such graphs are labeled by infinite words in this alphabet. We will show that there are infinitely many isomorphism classes of such graphs, regarded as non-marked graphs. Let us start by fixing the finite alphabet $X=\{u,l,r\}$, and a triangle with the vertices labeled by $u$ (up), $l$ (left), $r$ (right), with the obvious geometric meaning. Take an infinite word $w=w_1w_2\cdots$, with $w_i\in X$. We denote by $\underline{w}_n$ the prefix $w_1\cdots w_n$ of length $n$ of $w$. \begin{defi} The infinite Sierpi\'{n}ski type graph $\Gamma_{w}$ is the marked graph inductively constructed as follows: \begin{description} \item[Step 1] Mark the vertex corresponding to $w_1$ in the simple triangle. Denote this marked graph $\Gamma_{w}^1$.
\item[Step 2] Take three copies of $\Gamma_{w}^n$ and glue them together in such a way that each one shares exactly one (extremal) vertex with each other copy. These copies occupy the upper, left, or right position in the new graph. The resulting graph has, by construction, three marked points; we keep the one in the copy corresponding to the letter $w_{n+1}$. Call this graph $\Gamma_{w}^{n+1}$ and identify its marked vertex with $\underline{w}_{n+1}$. \item[Step 3] Denote by $\Gamma_{w}$ the limit of the marked graphs $\Gamma_{w}^n$. \end{description} \end{defi} The limit in the previous definition means that in $\Gamma_w$ we have an increasing sequence of subgraphs marked at $\underline{w}_n$ isomorphic to the graphs $\Gamma_{w}^n$. By definition, for each $w$, the graph $\Gamma_w$ is the limit of the finite graphs $\Gamma_w^n$, $n\geq 1$. Each one of these finite graphs has three external points, that can be thought of as the \textit{boundary} of the graph $\Gamma_w^n$. These represent the points where we (possibly) glue the three copies of $\Gamma_w^n$ to get $\Gamma_w^{n+1}$. We denote them by $U_n^w=u\cdots u$ (the uppermost one), $L_n^w=l\cdots l$ (the leftmost one), $R_n^w=r\cdots r$ (the rightmost one). More precisely, passing from $\Gamma_w^n$ to $\Gamma_w^{n+1}$, we identify the vertices $L_n^w$ and $R_n^w$ of the upper copy isomorphic to $\Gamma_w^n$ with the vertices $U_n^w$ of the left and right copies isomorphic to $\Gamma_w^n$, respectively, and we identify $L_n^w$ of the right copy isomorphic to $\Gamma_w^n$ with $R_n^w$ of the left one. We label by the same letters the corresponding vertices in $\Gamma_w$.
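The inductive construction above can be made concrete with a small script. The planar coordinatization below is my own choice, not part of the paper: the level-$(n+1)$ graph is the union of the level-$n$ graph with two copies shifted by the current side length, so shared corner vertices coincide automatically, and the prefix $w_1\cdots w_n$ picks out the marked vertex:

```python
# Level-n approximation of the Sierpinski type graph Gamma_w^n.
# A vertex is an integer point (x, y); the level-1 triangle has
# corners u = (0, 0), r = (1, 0), l = (0, 1). Passing to level n+1
# unions the graph with two copies shifted by the current side
# length, so shared corner vertices are glued by construction.

CORNER = {'u': (0, 0), 'r': (1, 0), 'l': (0, 1)}

def shifted(edges, dx, dy):
    """Copy of an edge set translated by (dx, dy)."""
    return {frozenset((x + dx, y + dy) for (x, y) in e) for e in edges}

def gamma(w):
    """Edge set and marked vertex of Gamma_w^n for a finite word w."""
    edges = {frozenset(p) for p in
             [((0, 0), (1, 0)), ((1, 0), (0, 1)), ((0, 1), (0, 0))]}
    mark = CORNER[w[0]]                    # Step 1: mark w_1's corner
    for n, letter in enumerate(w[1:], start=1):
        s = 2 ** (n - 1)                   # side length of level n
        dx, dy = CORNER[letter]
        mark = (mark[0] + s * dx, mark[1] + s * dy)  # Step 2: follow w
        edges |= shifted(edges, s, 0) | shifted(edges, 0, s)
    return edges, mark

edges, mark = gamma('ull')                 # level-3 approximation
vertices = {v for e in edges for v in e}   # 15 vertices, 27 edges
```

In this coordinatization the corners $U_n^w$, $R_n^w$, $L_n^w$ sit at $(0,0)$, $(2^{n-1},0)$, $(0,2^{n-1})$, and the edge count triples at each level, matching the three-copy gluing of Step 2.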
\begin{os}\rm If we consider the word $w:=l^{\infty}$, then we get \vspace{1cm} \end{os}\unitlength=0,2mm \begin{center} \begin{picture}(500,220) \letvertex a=(30,60)\letvertex b=(0,10)\letvertex c=(60,10) \letvertex d=(160,110)\letvertex e=(130,60)\letvertex f=(100,10)\letvertex g=(160,10) \letvertex h=(220,10)\letvertex i=(190,60) \put(-10,-10){$l$} \put(80,-10){$ll$} \put(240,-10){$w=l^{\infty}$} \drawvertex(a){$\bullet$}\drawvertex(b){$\ast$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\ast$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$} \drawundirectededge(b,a){} \drawundirectededge(c,b){} \drawundirectededge(a,c){} \drawundirectededge(e,d){} \drawundirectededge(f,e){} \drawundirectededge(g,f){} \drawundirectededge(h,g){} \drawundirectededge(i,h){} \drawundirectededge(d,i){} \drawundirectededge(i,e){} \drawundirectededge(e,g){} \drawundirectededge(g,i){} \put(0,80){$\Gamma_l$} \put(295,200){$\Gamma_w$} \put(95,80){$\Gamma_{ll}$} \letvertex A=(380,210)\letvertex B=(350,160)\letvertex C=(320,110) \letvertex D=(290,60)\letvertex E=(260,10)\letvertex F=(320,10)\letvertex G=(380,10) \letvertex H=(440,10)\letvertex I=(500,10) \letvertex L=(470,60)\letvertex M=(440,110)\letvertex N=(410,160) \letvertex O=(380,110)\letvertex P=(350,60)\letvertex Q=(410,60) \letvertex R=(450,210) \letvertex S=(560,10) \letvertex T=(410,250)\letvertex U=(530,60) \put(310,-10){$R^w_1$}\put(255,50){$U^w_1$} \put(370,-10){$R^w_2$}\put(290,110){$U^w_2$} \put(345,203){$U^w_3$}\put(490,-10){$R^w_3$} \drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\ast$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$} 
\drawundirectededge(A,R){}\drawundirectededge(A,T){}\drawundirectededge(I,S){}\drawundirectededge(I,U){} \drawundirectededge(E,D){} \drawundirectededge(D,C){} \drawundirectededge(C,B){} \drawundirectededge(B,A){} \drawundirectededge(A,N){} \drawundirectededge(N,M){} \drawundirectededge(M,L){} \drawundirectededge(L,I){} \drawundirectededge(I,H){} \drawundirectededge(H,G){} \drawundirectededge(G,F){} \drawundirectededge(F,E){} \drawundirectededge(N,B){} \drawundirectededge(O,C){} \drawundirectededge(M,O){} \drawundirectededge(P,D){} \drawundirectededge(L,Q){} \drawundirectededge(B,O){} \drawundirectededge(O,N){} \drawundirectededge(C,P){} \drawundirectededge(P,G){} \drawundirectededge(D,F){} \drawundirectededge(Q,M){} \drawundirectededge(G,Q){}\drawundirectededge(H,L){}\drawundirectededge(F,P){} \drawundirectededge(Q,H){} \put(510,150){$\bullet$}\put(540,180){$\bullet$} \end{picture} \end{center} We want to study the isomorphism problem for such graphs. We consider these graphs as non-marked and denote by $``\simeq"$ the corresponding equivalence relation. More explicitly, $G\simeq G'$ if there exists an isomorphism $\phi:G\rightarrow G'$. Notice that, if there exists such an isomorphism $\phi$ (of two non-marked graphs), it can be seen as an isomorphism of the marked graphs $(G,v)$ and $(G',v')$ rooted at the points $v\in G$ and $v'=\phi(v)\in G'$, for any $v\in G$. We will use this observation in the sequel. Moreover, it is clear that the limit $\Gamma_{w}$ is obtained as an exhaustion of triangles. What $w$ detects is the position of the smaller triangles inside the bigger ones. In what follows $d(\cdot,\cdot)$ will denote the discrete distance in the graphs regarded as metric spaces. \begin{os}\rm We stress the fact that, even if all (marked) graphs $\{\Gamma_w\}$ are limits of the same finite graphs $\{\Gamma^n_w\}$, they are a priori non-isomorphic as non-marked graphs.
An easy example showing this consists in considering the graphs $\Gamma_{l^{\infty}}$ and $\Gamma_{v}$, where $v=v_1v_2\cdots$ and $v_i$ is not eventually equal to $u$, $l$ or $r$ (see the figure below). In this case $\Gamma_{v}$ does not contain any vertex of degree $2$, contrary to $\Gamma_{l^{\infty}}$. In particular there is no isomorphism between the two graphs. \vspace{1cm} \begin{center} \begin{picture}(500,220) \letvertex a=(30,60)\letvertex b=(0,10)\letvertex c=(60,10) \letvertex d=(160,110)\letvertex e=(130,60)\letvertex f=(100,10)\letvertex g=(160,10) \letvertex h=(220,10)\letvertex i=(190,60) \put(25,65){$u$} \put(100,55){$ul$} \put(250,172){$w=(ul)^{\infty}$} \drawvertex(a){$\ast$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\ast$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$} \drawundirectededge(b,a){} \drawundirectededge(c,b){} \drawundirectededge(a,c){} \drawundirectededge(e,d){} \drawundirectededge(f,e){} \drawundirectededge(g,f){} \drawundirectededge(h,g){} \drawundirectededge(i,h){} \drawundirectededge(d,i){} \drawundirectededge(i,e){} \drawundirectededge(e,g){} \drawundirectededge(g,i){} \put(0,80){$\Gamma_u$} \put(295,200){$\Gamma_w$} \put(82,90){$\Gamma_{ul}$} \letvertex A=(380,230)\letvertex B=(350,180)\letvertex C=(320,130) \letvertex D=(290,80)\letvertex E=(260,30)\letvertex F=(320,30)\letvertex G=(380,30) \letvertex H=(440,30)\letvertex I=(500,30) \letvertex L=(470,80)\letvertex M=(440,130)\letvertex N=(410,180) \letvertex O=(380,130)\letvertex P=(350,80)\letvertex Q=(410,80) \letvertex R=(450,230) \letvertex S=(560,30) \letvertex T=(410,270)\letvertex U=(530,80) \letvertex AA=(290,-20)\letvertex BB=(230,-20)\letvertex CC=(470,-20)\letvertex DD=(530,-20) \drawvertex(A){$\bullet$}\drawvertex(B){$\ast$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$}
\drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$} \drawundirectededge(A,R){}\drawundirectededge(A,T){}\drawundirectededge(I,S){}\drawundirectededge(I,U){} \drawundirectededge(E,D){} \drawundirectededge(D,C){} \drawundirectededge(C,B){} \drawundirectededge(B,A){} \drawundirectededge(A,N){} \drawundirectededge(N,M){} \drawundirectededge(M,L){} \drawundirectededge(L,I){} \drawundirectededge(I,H){} \drawundirectededge(H,G){} \drawundirectededge(G,F){} \drawundirectededge(F,E){} \drawundirectededge(N,B){} \drawundirectededge(O,C){} \drawundirectededge(M,O){} \drawundirectededge(P,D){} \drawundirectededge(L,Q){} \drawundirectededge(B,O){} \drawundirectededge(O,N){} \drawundirectededge(C,P){} \drawundirectededge(P,G){} \drawundirectededge(D,F){} \drawundirectededge(Q,M){} \drawundirectededge(G,Q){}\drawundirectededge(H,L){}\drawundirectededge(F,P){} \drawundirectededge(Q,H){} \drawundirectededge(AA,E){}\drawundirectededge(BB,E){} \put(375,0){$\bullet$}\put(375,-15){$\bullet$} \put(510,150){$\bullet$}\put(540,180){$\bullet$} \end{picture} \end{center} \end{os} \begin{os}\label{stesse}\rm Notice that different words may correspond to the same vertex of $\Gamma_w^n$. More precisely, for any $k$ and $n>k+1$ the following pairs of vertices are identified: $u^klv=l^kuv$, $u^krv=r^kuv$ and $r^klv=l^krv$, for any word $v$ in the alphabet $X$. From now on we consider such elements as identified and choose just one representation for them. \end{os} Recall that two infinite words $w$ and $v$ are cofinal if there exists $n$ in $\mathbb{N}$ such that $v_i= w_i$ for all $i>n$. This is clearly an equivalence relation, called \textit{cofinality}, and we denote it by $``\sim"$.\\ Notice that an infinite word $x$ corresponds to a vertex of $\Gamma_v$ if and only if $x\sim v$ (see Remark \ref{cofi}).
In what follows, given an infinite word $x\in \Gamma_v$, we use the notation $x\in \Gamma_v^n$ meaning that $x$ is an infinite word corresponding to a vertex belonging to the $n$-th subgraph $\Gamma_v^n$ of $ \Gamma_v$, obtained after the first $n$ steps in the construction of $\Gamma_v$. More precisely, $x\in \Gamma_v^n$ if $x\in \Gamma_v$ and it corresponds to a vertex of the subgraph $\Gamma_v^n\hookrightarrow \Gamma_v$ in the natural embedding of the finite graph $\Gamma_v^n$ into the infinite graph $\Gamma_v$. If we want to emphasize the vertices of the finite graph $\Gamma_v^n$ we prefer using the notation $\underline{x}_n$. \begin{lem}\label{lemma0} If $v\sim w$ then $\Gamma_v\simeq \Gamma_w$. \end{lem} \begin{proof} If $v\sim w$ then there is $n$ such that $v_{n+k}=w_{n+k}$ for all $k\geq 1$. The graphs $\Gamma_v^n$ and $\Gamma_w^n$ are isomorphic as non-marked graphs by construction. Let $\phi_n:\Gamma_v^n\rightarrow \Gamma_w^n$ be the identity isomorphism of non-marked graphs. Then $\phi_n$ extends to an isomorphism $\phi :\Gamma_v\rightarrow\Gamma_w$ since $v_{n+k}=w_{n+k}$ for each $k\geq 1$. \end{proof} \begin{lem}\label{lemma1} $\Gamma_v\simeq \Gamma_w$ if and only if there exist two vertices $x\in\Gamma_v$ and $y\in \Gamma_w$ such that, for every $n$, the sets $$ \{d(x,U_n^x),d(x,L_n^x), d(x,R_n^x)\} $$ and $$ \{d(y,U_n^y),d(y,L_n^y), d(y,R_n^y)\} $$ coincide. \end{lem} \begin{proof} First notice that, since $x\in \Gamma_v$ and $y\in \Gamma_w$, we have $x\sim v$ and $y\sim w$, so that for $T=U,L,R$ one has $T_n^x=T_n^v$ and $T_n^y=T_n^w$ for every $n$ sufficiently large. Suppose there exists an isomorphism $\phi:\Gamma_v\rightarrow \Gamma_w$ such that $\phi(x)=y$, and assume there is $n$ such that $$ \{d(x,U_n^x),d(x,L_n^x), d(x,R_n^x)\}\neq \{d(y,U_n^y),d(y,L_n^y), d(y,R_n^y)\}.
$$ Denote by $M_n:=\min\{m_x^n,m_y^n\}$, where $$ m_x^n:=\max\{d(x,U_n^x),d(x,L_n^x), d(x,R_n^x)\} $$ and $$ m_y^n:=\max\{d(y,U_n^y),d(y,L_n^y), d(y,R_n^y)\}.$$ We claim that the balls $B_x(M_n)$ and $B_y(M_n)$ are not isomorphic as graphs. If $m_x^n\neq m_y^n$ then only one of $B_x(M_n)$ and $B_y(M_n)$ contains a copy isomorphic to $\Gamma_x^n$ (regarded as a non-marked graph). If $m_x^n= m_y^n$ then the other two distances from $x$ and $y$ to the boundary vertices do not coincide, and so the parts of the graphs $B_x(M_n)$ and $B_y(M_n)$ exceeding the copies of $\Gamma^n_x$ and $\Gamma^n_y$, respectively, are not isomorphic. Vice versa, suppose that the sets $\{d(x,U_n^x),d(x,L_n^x), d(x,R_n^x)\}$ and\\ $\{d(y,U_n^y),d(y,L_n^y), d(y,R_n^y)\}$ coincide for each $n$. Let $\phi:\Gamma_v\rightarrow \Gamma_w$ be the map such that $\phi(x)=y$. The balls $B_x(M_n)$ and $B_y(M_n)$ are isomorphic for each $n$, and so the map $\phi$ is an isomorphism of (non-marked) graphs, since $\lim B_x(M_n)=\Gamma_v$ and $\lim B_y(M_n)=\Gamma_w$ regarded as non-marked graphs. \end{proof} \begin{lem}\label{lemmino} Let $\underline{x}_n, \underline{y}_n \in \Gamma_w^n$ be two vertices such that\\ $\{d(\underline{x}_n,U_n^w),d(\underline{x}_n,L_n^w), d(\underline{x}_n,R_n^w)\}=$ $ \{d(\underline{y}_n,U_n^w),d(\underline{y}_n,L_n^w), d(\underline{y}_n,R_n^w)\}$.\\ Then the same holds for every $k\leq n$. \end{lem} \begin{proof} Observe that, in general, when we pass from $\Gamma_s^n$ to $\Gamma_s^{n+1}$, exactly one of the elements in $\{d(x,U_n^s),d(x,L_n^s), d(x,R_n^s)\}$ is preserved. More precisely, if $s_n=t \in \{u,l,r\}$ then $d(x,T_n^s)=d(x,T_{n+1}^s)$, for $T\in \{U,L,R\}$. In the other cases the distance increases by $2^{n-1}$. This implies that if there is a $k$ at which the sets of distances do not coincide, then they cannot coincide for $k+1$.
\end{proof} The following result describes points with the same distances from the boundary points as elements in the same orbit under the action of the symmetric group. We must take into account the exceptions of Remark (\ref{stesse}). \begin{lem}\label{lemma1.1} Let $\underline{x}_n,\underline{y}_n \in \Gamma_w^n$ be two vertices. Then \\ $\{d(\underline{x}_n,U_n^w),d(\underline{x}_n,L_n^w), d(\underline{x}_n,R_n^w)\}=$ $ \{d(\underline{y}_n,U_n^w),d(\underline{y}_n,L_n^w), d(\underline{y}_n,R_n^w)\}$ if and only if there exists $\sigma\in Sym(\{u,l,r\})$ such that $\sigma(\underline{x}_n)=\underline{y}_n$, where $$ \sigma(\underline{x}_n)=\sigma(x_1)\cdots\sigma(x_{n-1})\sigma(x_n)=\sigma(\underline{x}_{n-1})\sigma(x_n). $$ \end{lem} \begin{proof} Suppose the sets of distances coincide and proceed by induction on $n$. For $n=1$ the assertion is trivially verified. First suppose, without loss of generality, that $x_n=y_n=u$. It follows from Lemma (\ref{lemmino}) that there exists a permutation $\sigma\in Sym(\{u,l,r\})$ such that $\sigma(\underline{x}_{n-1})=\underline{y}_{n-1}$. We want to prove that this permutation is the identity or the transposition $(l,r)$. The distances of $\underline{x}_{n-1}$ and $\underline{y}_{n-1}$ from $U_{n-1}^w$ must be the same. This implies that $\{i: \ x_i=u\}=\{i: \ y_i=u\}$. For the indices which are not equal to $u$ we observe that $d(\underline{x}_{n-1},R_{n-1}^w)$ is equal either to $d(\underline{y}_{n-1},R_{n-1}^w)$ or to $d(\underline{y}_{n-1},L_{n-1}^w)$. The first case gives the identity, the second case the transposition $(l,r)$. Suppose now that $x_n\neq y_n$; this implies that $\underline{x}_n$ and $\underline{y}_n$ belong to different copies isomorphic to $\Gamma_w^{n-1}$ of the graph $\Gamma_w^n$. Suppose, for example, that $x_n=u$ and $y_n=r$.
If $d(\underline{x}_{n},L_{n}^w)=d(\underline{y}_{n},L_{n}^w)$, as before we can show that $\sigma$ is equal to $(r,u)$, since we have $d(\underline{x}_{n-1},L_{n-1}^w)=d(\underline{y}_{n-1},L_{n-1}^w)$. If the distances of $\underline{x}_{n}$ and $\underline{y}_{n}$ from one (hence all) of the boundary vertices do not coincide, we have $d(\underline{x}_{n},U_{n}^w)=d(\underline{y}_{n},R_{n}^w)$, $d(\underline{x}_{n},L_{n}^w)=d(\underline{y}_{n},U_{n}^w)$ and $d(\underline{x}_{n},R_{n}^w)=d(\underline{y}_{n},L_{n}^w)$. The same property holds at level $n-1$, so that there exists a permutation $\sigma$ such that $\sigma(\underline{x}_{n-1})=\underline{y}_{n-1}$. This permutation cannot be the transposition $(r,u)$, because this would imply that $d(\underline{x}_{n-1},L_{n-1}^w)=d(\underline{y}_{n-1}, L_{n-1}^w)$. And so it is the permutation $(u,r,l)$. On the other hand, we prove that permutations preserve distances from the boundary in the case that $\sigma$ is a transposition, say $\sigma=(l,r)$. One can easily verify that the same argument can be applied to any permutation of $Sym(\{u,l,r\})$. If $n=1$ it is clear that $x_1$ and $\sigma(x_1)$ satisfy the claim. It follows that $$ \sigma(\underline{x}_n)=\sigma(x_1)\cdots\sigma(x_{n-1})\sigma(x_n)=\sigma(\underline{x}_{n-1})\sigma(x_n), $$ and the sets of distances from $\underline{x}_{n-1}$ and $\sigma(\underline{x}_{n-1})$ to the boundary points coincide by induction. These two points belong to the graph $\Gamma_w^{n-1}$, and one is obtained from the other via the transformation $(l,r)$ corresponding to the reflection with respect to the vertical axis. If $x_n=u$ then both $\underline{x}_{n}$ and $\sigma(\underline{x}_{n})$ are vertices of the upper part of $\Gamma_w^n$ obtained from each other by the same reflection. If $x_n=l$ (resp. $x_n=r$) the vertices $\underline{x}_{n}$ and $\sigma(\underline{x}_{n})$ live, respectively, in the left and right (resp.
right and left) part of $\Gamma_w^n$, and so they are obtained from each other by the reflection with respect to the vertical axis, and in particular preserve distances to the boundary vertices. \end{proof} The group $Sym(\{u,l,r\})$ consists of six elements and its action factorizes into orbits consisting of three (e.g. the boundary points) or six elements. \begin{teo}\label{teo1} There are infinitely many isomorphism classes of the graphs $\Gamma_w$. More precisely, $\Gamma_v\simeq \Gamma_w$ if and only if there exists $\sigma \in Sym(\{u,l,r\})$ such that $w\sim \sigma(v)$. \end{teo} \begin{proof} Suppose that there exists $\sigma \in Sym(\{u,l,r\})$ such that $\sigma(v)\sim w$; hence there is $N\in \mathbb{N}$ such that for all $n\geq N$ one has $\sigma(v_n)=w_n$. Consider the graphs $\Gamma_v$ and $\Gamma_{\sigma(v)}$ and the sequences of increasing subgraphs $\{\Gamma_v^n\}$ and $\{\Gamma_{\sigma(v)}^n\}$. From Lemma (\ref{lemma1.1}) we get $$ \{d(v,U_n^v),d(v,L_n^v), d(v,R_n^v)\}\!=\!\{d(\sigma(v),U_n^{\sigma(v)}),d(\sigma(v),L_n^{\sigma(v)}),\! d(\sigma(v),R_n^{\sigma(v)})\} $$ for every $n$. Hence Lemma (\ref{lemma1}) implies that $\Gamma_v$ and $\Gamma_{\sigma(v)}$ are isomorphic. Lemma (\ref{lemma0}) gives that $\Gamma_v \simeq \Gamma_w$. Vice versa, suppose there exists an isomorphism $\phi:\Gamma_v\rightarrow \Gamma_w$. Without loss of generality we can assume that $\phi(v)=w$. If we restrict $\phi$ to the finite graphs $\Gamma_v^n$ and $\Gamma_w^n$ we get, by Lemma (\ref{lemma1.1}), an isomorphism $\phi_n$ which corresponds to a permutation $\sigma \in Sym(\{u,l,r\})$. This automorphism is determined by the images of the boundary vertices $U_n^v,L_n^v,R_n^v$. Only one of them will coincide with the corresponding boundary vertex of the graph $\Gamma_v^{n+1}$, and the same holds for $U_n^w,L_n^w,R_n^w$.
More precisely, if $T\in \{U,L,R\}$ is such that $T_n^v=T_{n+1}^v$ then $\phi(T)_{n}^w=\phi(T_n^v)=\phi(T_{n+1}^v)=\phi(T)_{n+1}^w$, for $T=U,L$ or $R$. So the permutation yielding the isomorphism between the graphs $\Gamma_v^{n+1}$ and $\Gamma_w^{n+1}$ is given by the same permutation $\sigma$. This implies that $\phi(v)=\sigma(v)=w$. The cofinality follows. \end{proof} \begin{os}\label{cofi}\rm We can refine the last statement by considering that each infinite word $v$ cofinal with $w$ can be seen as a vertex of the graph $\Gamma_w$. In fact, if it belongs to the same graph as $w$, there exists $n$ such that $\underline{v}_n$ and $\underline{w}_n$ are vertices in $\Gamma_w^n$. This implies that $v_{n+k}=w_{n+k}$ for each $k\geq 1$. So Theorem (\ref{teo1}) implies that each isomorphism class contains exactly $6$ graphs, except the class of constant words, which contains only $3$ graphs (because the orbit of such a word under $Sym(\{u,l,r\})$ contains only three elements). \end{os} \section{Horofunctions} In this section we explicitly compute the horofunctions for the graphs constructed in the previous section. We need some definitions (for more details see \cite{devin} or \cite{walsh}). Let $G=(V,E)$ be a graph, and $\{x_n\}_{n\in \mathbb{N}}$ be a sequence of vertices such that $d(o, x_n)\rightarrow\infty$. For every $n$ we define the function $$ f_n(y):=d(x_n,o)-d(x_n,y), $$ whose limit for $n\to \infty$, considered in the space of (continuous) functions on $G$ with the topology of uniform convergence on finite sets, gives the horofunction associated with the sequence $\{x_n\}_{n\in \mathbb{N}}$. One considers the space of horofunctions up to the equivalence relation which identifies functions whose difference is uniformly bounded. Points which are limits of geodesic rays in the graph are called \emph{Busemann points} (see \cite{Cormann}). The notion of Busemann point was introduced by Rieffel in \cite{rief}.
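As a quick illustration of the definition above (an example of ours, not taken from the references), consider the simplest infinite graph, the bi-infinite path $\mathbb{Z}$ with base point $o=0$ and the sequence $x_n=n$:

```latex
% Illustrative computation on G = Z, with o = 0 and x_n = n.
% For every fixed y and all n >= |y| one has d(x_n,o) = n and
% d(x_n,y) = n - y, so
\[
  f_n(y) \;=\; d(x_n,o) - d(x_n,y) \;=\; n - (n-y) \;=\; y .
\]
% Hence f_n converges, uniformly on finite sets, to the horofunction
% f(y) = y; since {x_n} lies on a geodesic ray, f is a Busemann point.
% The opposite ray x_n = -n gives, in the same way, f(y) = -y.
```

The two rays of $\mathbb{Z}$ thus give two non-equivalent horofunctions, in analogy with the functions $f_U$ and $f_R$ computed below for $\Gamma_{l^{\infty}}$.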
We want to study the horofunctions on the Sierpi\'{n}ski-type graph corresponding to $w=l^{\infty}$ up to the equivalence stated above. \subsection{The standard case} In this section we compute the horofunctions of the infinite graph $\Gamma_w$ where $w=l^{\infty}$. We choose $o=w$. Let $\{x_n\}$ be a sequence of vertices in $\Gamma_w$ such that $d(o,x_n)\rightarrow\infty$. By construction, for each $n$ there exists $k=k(n)$ such that $x_n\in \Gamma_w^k\setminus \Gamma_w^{k-1}$. The following result gives a necessary condition for the existence of the limit of the functions $f_n$. In what follows we omit the superscript $w$. \begin{lem}\label{lemma2} Suppose that there exist infinitely many indices $i$ such that $d(x_i,U_{k(i)-1}) < d(x_i,R_{k(i)-1})$ and infinitely many indices $j$ such that $$ d(x_j,U_{k(j)-1})>d(x_j,R_{k(j)-1}). $$ Then $\lim f_n$ does not exist. \end{lem} \begin{proof} Take $y$ such that $d(o,y)=1$, for example $y=U_1$. For the indices $i$ such that $d(x_i,U_{k(i)-1})<d(x_i,R_{k(i)-1})$ we have $d(x_i,o)=d(x_i,U_{k(i)-1})+d(o,U_{k(i)-1})$ and $d(x_i,U_1)=d(x_i,U_{k(i)-1})+d(U_1,U_{k(i)-1})$, and so $f_i(U_1)=1$. Analogously one can prove that $f_j(U_1)=0$. Hence the limit of $f_n(U_1)$ does not exist. \end{proof} The previous Lemma implies that the sequences $\{x_n\}$ that we have to consider to have a limit are those such that $d(x_i,U_{k(i)-1})<d(x_i,R_{k(i)-1})$, $d(x_i,U_{k(i)-1})>d(x_i,R_{k(i)-1})$ or $d(x_i,U_{k(i)-1})=d(x_i,R_{k(i)-1})$, provided $i$ is sufficiently large. Actually we will show that, up to equivalence, there are only three limit functions. Let us introduce the following sequence of vertices $\{c_n\}$, where $c_n:=r^nu$. Geometrically the $c_n$'s are points \textit{symmetric} with respect to $o$; more precisely, there are two paths from $c_n$ to $o$ which realize the distance. \begin{teo}\label{teo2} There are infinitely many horofunctions in the graph $\Gamma_{l^{\infty}}$.
More precisely: \begin{enumerate} \item the function $f_U$ corresponding to the Busemann point $\lim U_n$; \item the function $f_R$ corresponding to the Busemann point $\lim R_n$; \item infinitely many functions equivalent to the function $f_c$ obtained as $\lim f_n$ associated with the sequence $\{c_n\}$. \end{enumerate} \begin{proof} It is clear that $f_U$ and $f_R$ are non-equivalent horofunctions. The function $f_c$ is not equivalent to $f_U$ and $f_R$. In fact $f_c(U_n)=f_c(R_n)=2^{n-1}$, as one can easily check. Moreover, all the sequences of points $\{x_n\}$ satisfying the conditions of Lemma (\ref{lemma2}) at bounded distance from $\{c_n\}$ (i.e., there exists $M>0$ such that for every $n$ there exists $k$ with $d(x_n,c_k)<M$) give rise to horofunctions not equal but equivalent to $f_c$. It remains to prove that if $\{x_n\}$ is a sequence whose limit $f$ exists and which is not at bounded distance from $\{c_n\}$, then either $f=f_U$ or $f=f_R$. From Lemma (\ref{lemma2}) we can suppose that, for $n$ sufficiently large, $x_n$ is a vertex in the upper part of $\Gamma_w^k$, for some $k=k(n)$ (the case in which $x_n$ eventually belongs to the right part of $\Gamma_w^k$ is symmetric and left to the reader). Set $\gamma_n:=\min_k d(x_n, c_k)$; by our assumption, $\gamma_n\rightarrow\infty$. Fix a vertex $y$ and let $h$ be the minimum index such that $y\in \Gamma_w^h$. We want to prove that $f(y)=f_U(y)$. We use the notation $U_{k(n)}$ and $R_{k(n)}$ as before. We have that $$ d(x_n,y)= \begin{cases} d(x_n,U_{k(n)-1})+d(U_{k(n)-1},y), & \text{or} \\ d(x_n, c_{k(n)})+d(c_{k(n)},R_{k(n)-1})+d(R_{k(n)-1},y). & \end{cases} $$ In order to prove that, for $n$ large enough, the distance is given by the first of the two expressions above, we observe that $$d(x_n,U_{k(n)-1})\leq d(c_{k(n)},R_{k(n)-1})=2^{k(n)-1},\ d(U_{k(n)-1},y)\leq 2^{k(n)-1} $$ and $d(R_{k(n)-1},y)\geq 2^{k(n)-1}-2^h$.
This gives $$ 2^{k(n)-1}<\gamma_n+2^{k(n)-1}-2^h \Leftrightarrow \ 2^h<\gamma_n, $$ which is verified for $n$ large enough. So in the limit \begin{eqnarray*} f(y)&=& \lim_n(d(x_n, o)-d(x_n, y)) \\ &=& \lim_n(d(x_n, U_{k(n)-1})+ d( U_{k(n)-1},o)-d(x_n,U_{k(n)-1})-d( U_{k(n)-1},y))\\ &=& f_U(y). \end{eqnarray*} \end{proof} \begin{os}\rm The Busemann function $f_U=\lim f_n$ associated with the point $U=\lim U_n$ can be easily described in the following way. Project each vertex $y$ of the graph onto the geodesic ray connecting $o$ to $U$ and denote by $y_U$ the image of the projection. Then $f_U(y)=d(o,y_U)$. This follows from the fact that for each $y$ the value $f_U(y)$ can be computed as a difference of distances in a finite graph $\Gamma^n_w$. The same can be said for $f_R$. \end{os} \begin{center} \begin{picture}(500,280) \letvertex A=(180,210)\letvertex B=(150,160)\letvertex C=(120,110) \letvertex D=(90,60)\letvertex E=(60,10)\letvertex F=(120,10)\letvertex G=(180,10) \letvertex H=(240,10)\letvertex I=(300,10) \letvertex L=(270,60)\letvertex M=(240,110)\letvertex N=(210,160) \letvertex O=(180,110)\letvertex P=(150,60)\letvertex Q=(210,60) \letvertex R=(250,210) \letvertex S=(360,10) \letvertex T=(210,250)\letvertex U=(330,60) \put(210,270){$f_U$} \put(370,10){$f_R$} \drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\ast$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$} \drawedge(A,T){}\drawundirectededge(A,R){}\drawedge(I,S){}\drawundirectededge(I,U){} \drawundirectededge(E,D){} \drawundirectededge(D,C){} \drawundirectededge(C,B){} \drawundirectededge(B,A){} \drawundirectededge(A,N){} \drawundirectededge(N,M){} \drawundirectededge(M,L){} \drawundirectededge(L,I){} \drawundirectededge(I,H){}
\drawundirectededge(H,G){} \drawundirectededge(G,F){} \drawundirectededge(F,E){} \drawundirectededge(N,B){} \drawundirectededge(O,C){} \drawundirectededge(M,O){} \drawundirectededge(P,D){} \drawundirectededge(L,Q){} \drawundirectededge(B,O){} \drawundirectededge(O,N){} \drawundirectededge(C,P){} \drawundirectededge(P,G){} \drawundirectededge(D,F){} \drawundirectededge(Q,M){} \drawundirectededge(G,Q){}\drawundirectededge(H,L){}\drawundirectededge(F,P){} \drawundirectededge(Q,H){} \letvertex Y=(310,150)\letvertex Z=(340,180)\drawedge(Y,Z){}\put(350,200){$f_c$}\drawvertex(Y){$\bullet$} \put(300,135){$c_n$} \put(60,-20){\textbf{Fig.} Horofunctions in $\Gamma_{l^{\infty}}$ (up to equivalence)} \end{picture} \end{center} \vspace{1cm} \section*{Acknowledgments} Some of the results contained in this paper were obtained during my stay at Technion University of Haifa. Among others, I want to thank Uri Bader, Uri Onn, Amos Nevo, Michael Brandenbursky and Vladimir Finkelshtein for useful discussions. Moreover, I am grateful to the anonymous referees for recommending various improvements in exposition.\\
{ "attr-fineweb-edu": 1.80957, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUddM4ubng0WcZyfe_
\section{Introduction} Physical match performance has been one of the most studied topics in Sports Science since EPTS devices became a trend in soccer. Despite the vast number of publications, most research has focused on assessing player performance based on isolated metrics such as distance covered, accelerations, or high-intensity (HI) runs (Bradley et al. 2013; Altmann et al. 2021; Ingebrigtsen et al. 2015). In addition, the tactical context tends to be greatly simplified and is often ignored. The idea that integrating tactical and qualitative information can enable a much more in-depth analysis of physical demands is not lost on sports scientists and soccer practitioners. However, the lack of spatiotemporal data that allows analyzing individual effort within the collective context has been an enormous barrier to developing this integration between the physical and the tactical. Far on the horizon remains the old question: is it about running more or running better? While event and tracking data have accelerated soccer analytics development in recent years, we still lack a comprehensive framework for understanding how physical abilities impact global performance. There are still many unanswered questions: How do player roles affect the type of runs players do? Can we find similar players based on their movements between lines? Can we assess whether a high-intensity disruptive run provides goal value? What types of runs and movements can we expect from our next opponent, and what would their impact be? Can we select a starting eleven that maximizes value creation in specific spaces? Having a comprehensive approach for addressing these and many other related questions would expand the capabilities of soccer analytics to help the primary decision-makers in football clubs.
Some examples are: \begin{enumerate} \item The head of recruitment would find players with great physical capacity who also contribute to value creation, both on-ball and off-ball. \item The head coach and technical directors would assess players' versatility based on their movements and tactical fit, either to design tactics or to track players' evolution over time. \item The medical department would better understand fatigue by associating decreases in physical deployment with decreases in contribution to value creation. \item Physical coaches could better customize training sessions and design tailored routines for players' readaptation and return-to-play by classifying players' movements and identifying those with higher intensity. \end{enumerate} We present the first framework that gives a deep insight into the link between physical and technical-tactical aspects of soccer. Furthermore, it allows associating physical performance with value generation thanks to a top-down approach: \begin{enumerate} \item We start with a high-level view by identifying differences in physical performance between attack and defense phases, both from a collective and an individual perspective. \item Secondly, we contextualize physical indicators employing spatiotemporal features derived from the interaction of the 22 players and the ball. Specifically, we integrate tactical concepts such as dynamic team formations, player roles, attack types, and defense types. \item Finally, we employ a state-of-the-art expected possession-value model to associate runs with value creation and its impact on the overall likelihood of winning more matches. \end{enumerate} This work is structured into two main parts. First, we present the technical details of how we developed the different layers of our top-down approach. Here, we employ an unprecedented data set of tracking data from broadcast, which allows us to assess and compare various teams and players from the main European leagues.
Then, in the second part of the work, we present a broad set of practical applications showing new approaches for understanding players and team performance comprehensively. \section{Estimating physical metrics from tracking data} In this work, we count on the availability of TV broadcast tracking data, covering nearly 70\% of the 2020/2021 season's matches from the Big-5 European competitions: the English, Spanish, German, Italian, and French domestic leagues. This novel source of spatiotemporal data provides the location of the 22 players and the ball ten times per second. The availability of this comprehensive data set of tracking data is unprecedented. While EPTS-derived physical metrics might be considered more precise, since physical signals are captured directly from devices attached to players' bodies, this information is usually limited to the 11 players belonging to the team that owns the devices. To obtain a speed signal, we calculate the player's velocity in two consecutive frames and compute its magnitude at the frame level. Then, we apply a rolling-average smoothing to remove the fine-grained variation between frames. In addition, we detect and treat outliers in players' speed. Figure 1 illustrates the frame-by-frame evolution of Messi's speed before and during an on-ball action (link to the video: \href{https://bit.ly/3qYOgC6}{https://bit.ly/3qYOgC6}). We detect intervals from the speed signal where a player maintains the speed in the same range, similar to those proposed by Pons et al. (2019). A speed interval is defined as a valley when both the preceding and following periods are of a higher speed. The full speed signal is then divided into sections from each valley until the following one. Each section represents a player's run, and its speed is the maximum speed reached at the peak between these two valleys. For example, in Figure 1, Messi's movement is divided into three different efforts, with peaks of 6, 21, and 21 km/h, respectively.
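The speed-signal processing and valley-to-valley segmentation described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: only the 10 Hz sampling rate, the rolling-average smoothing, the valley rule, and the 21 km/h threshold come from the text; the function names and the window size are our own choices.

```python
import numpy as np

FPS = 10        # broadcast tracking provides 10 frames per second
HI_KMH = 21.0   # high-intensity threshold used in the text

def speed_kmh(xy, fps=FPS, window=5):
    """Frame-by-frame speed (km/h) from an (n, 2) array of positions in
    metres, smoothed with a rolling average (window size is our choice)."""
    v = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps * 3.6
    return np.convolve(v, np.ones(window) / window, mode="same")

def split_runs(speed):
    """Split the speed signal into runs delimited by valleys (frames whose
    speed is a local minimum). Returns (start, end, peak_speed) tuples."""
    valleys = [0] + [i for i in range(1, len(speed) - 1)
                     if speed[i] <= speed[i - 1] and speed[i] < speed[i + 1]]
    valleys.append(len(speed) - 1)
    return [(a, b, float(speed[a:b + 1].max()))
            for a, b in zip(valleys[:-1], valleys[1:])]

def hi_runs(runs, threshold=HI_KMH):
    """Keep only the runs whose peak speed exceeds the HI threshold."""
    return [r for r in runs if r[2] > threshold]
```

On the signal of Figure 1, this kind of segmentation recovers three efforts with peaks near 6, 21, and 21 km/h; summing run lengths and counting the HI subset then yields the distance-covered and high-intensity metrics defined below.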
\begin{figure}[htbp] \centering \includegraphics[width=15cm]{pictures/picture1.png} \caption{Above, the evolution of Messi's speed in a game situation. The first highlighted moment represents a movement to approach (1) the ball. Then, the action is formed by (2) the pass reception, (3) a ball drive with a dribble, and (4) a cross. Below, the video frames of the four highlighted instants.} \end{figure} We compute three different metrics to define the physical performance. The most obvious is the distance covered by a player, calculated by adding up each run's distance. Then, we define high-intensity (HI) runs as those exceeding 21 km/h (Castellano et al. 2011; Ade et al. 2016). Given this definition, we are interested in computing the number of runs and the distance covered at a high intensity. There are three key moments in a player's run, used to add the contextual information described in the following section. \begin{enumerate} \item The instant when the valley ends. It is used to set the origin location of the run, as the player starts to accelerate and increase in speed. \item The instant when the peak of the run starts. It is the moment used to determine whether the run is performed within an attack or a defense, and its derived categories (attack type, defense type, etcetera). It is the moment when the player starts to reach the maximum speed of the run. \item The instant when the peak of the run ends. It is used to set the destination location of the run, as the player starts to decelerate. \end{enumerate} \section{A framework for physical performance contextualization} Soccer is a complex sport and, just as we cannot analyze the parts of a complex system separately, we need to bring together the technical and tactical aspects of the sport with the physical performance. There have been other attempts to integrate these three dimensions of the game in the Sports Science literature, with the approach by Bradley and Ade (2018) being the most comprehensive so far.
They associate each player's effort with a tactical concept manually tagged from the video. However, their integrated approach does not consider the team's style, the opponent's block, or the value added by those efforts, which are essential to contextualizing the physical metrics better. We have worked on a scalable and automatic way to overcome these issues, which detects contextual tactical concepts by integrating the broadcast tracking data from SkillCorner with StatsBomb's event data. For this paper, we are working with a novel dataset that allows us to have a wide variety of matches from the 2020/2021 season of the Big-5 European competitions (English Premier League, Spanish La Liga, German Bundesliga, French Ligue 1, and Italian Serie A). \begin{figure}[htbp] \centering \includegraphics[width=15cm]{pictures/picture2.png} \caption{The different layers of contextualization of the proposed top-down framework.} \end{figure} The top-down framework we propose has different levels of analysis, starting with the physical performance metrics estimated from tracking data with the process described in Chapter 2. Figure 2 shows the layers of contextualization that will be described in the following sections. \textbf{\subsection{Dynamic lines and team's block}} While the absolute location of high-intensity efforts provides a good piece of information on players' movement patterns, contextualizing these efforts according to the opponent's defending block adds a whole new level of tactical insight. Therefore, we integrate the concept of dynamic formation lines introduced by Fernández et al. (2021) to calculate these relative locations. Representing the defensive structure to compute the relative locations of the events is difficult due to its variability and complexity. However, we assume that players always form three lines, as shown in Figure 3. The dynamic lines are then modeled from tracking data as the centroids of a clustering performed on the defenders' X coordinates.
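The three-line assumption can be sketched as a one-dimensional clustering of the defenders' X coordinates, together with a convex hull of their (X, Y) positions for the block. The following is a minimal, self-contained illustration; the text does not specify the clustering algorithm, so a plain k-means with k=3 (and its initialization) is our assumption, and the hull routines are standard geometry rather than the authors' code.

```python
import numpy as np

def dynamic_lines(def_x, iters=20):
    """Three defensive lines as the centroids of a 1-D k-means (k=3)
    on the outfield defenders' X coordinates (goalkeeper excluded)."""
    c = np.percentile(def_x, [17, 50, 83])  # spread the initial centroids
    for _ in range(iters):
        labels = np.argmin(np.abs(def_x[:, None] - c[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                c[k] = def_x[labels == k].mean()
    return np.sort(c)

def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Counter-clockwise convex hull (monotone chain) of (x, y) tuples."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_block(p, hull):
    """True if point p lies inside (or on) the defenders' block."""
    return all(cross(a, b, p) >= 0 for a, b in zip(hull, hull[1:] + hull[:1]))
```

Combining the line X coordinates with `inside_block` is enough to label a run's origin and destination as inside, wing, or back in the spirit of Figure 4.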
\begin{figure}[htbp] \centering \includegraphics[width=6.5cm]{pictures/picture3.png} \caption{Diagram of the dynamic defensive lines (red dashed lines) and the convex hull representing the defensive team's block (blue shaded area).} \end{figure} In addition to the dynamic lines, we compute the convex hull of the defenders' X and Y coordinates to model the team's block. It helps us distinguish those actions inside the block from those on the flanks. Note that goalkeepers are always ignored when modeling these two tactical concepts. To give tactical meaning to each high-intensity run, we have defined three different zones: \begin{enumerate} \item Inside: Within the opponent's block and between defensive lines. \item Wing: Both left and right flanks outside the block but within defensive lines. \item Back: Everything that is behind the last defensive line. \end{enumerate} By combining these relative zones with the runs' origin and destination relative locations, we have defined the types of movements plotted in Figure 4. Note that we have filtered out some of the combinations that are not used in the applications of this paper. These categories of HI runs will help us define player profiles, as shown in the second application in Chapter 4. For example, movements inside-to-back or wing-to-back are common traits of deep wingers and strikers. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{pictures/picture4.png} \caption{Diagram of the types of movements relative to the opponent's block. On the left, those starting from the flank of the block; on the right, the ones that begin inside the opponent's block.} \end{figure} \textbf{\subsection{Possessions, attack types, and defense types}} Players' responsibilities vary considerably according to who has the ball. The different demands between attacking and defending phases also affect the physical efforts the player has to perform, as studied by Lorenzo-Martínez et al. (2021).
Therefore, identifying which team is in possession when a particular run is performed helps add a layer of basic but essential information to contextualize it. Possessions are detected following a rule-based approach. Three situations can cause a possession change: (1) the ball goes out, (2) the referee stops the game, or (3) in-game possession changes, which are fuzzier, since we must account for the times a team loses the ball and regains it instantly. On the other hand, we apply rule-based algorithms based on player locations, teams' defensive lines, and events at each frame of the possession to determine the attack and defense types. We developed and validated these algorithms in coordination with coaches from FC Barcelona to be as close as possible to the way they interpret these tactical concepts. We identify four possible attack types: \begin{enumerate} \item Organized attack: both teams are well-structured with no drastic block movements. \item Direct play: the segment of the possession after a long vertical pass from the back of the attacking team. \item Counter-attack: the segment of the possession when both teams travel from one half of the pitch to the other after a ball recovery. \item Set-piece: the seconds after a set-piece in the opponent's half of the pitch. \end{enumerate} The defense types are categorized based on the height of the opponent's block: \begin{enumerate} \item High pressure: The block is almost entirely located in the attacking team's half of the pitch, with at least N players in the last third of the pitch. \item Medium block: The block is distributed in both halves. \item Low block: The block is entirely in the defending team's half of the pitch. \end{enumerate} By automatically detecting these concepts, we can easily determine the moment in which a player's run occurs relative to the possession of the ball.
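The defense-type rules can be sketched as below. The 80\% share standing in for "almost entirely" and the threshold of two players in the last third are hypothetical choices, since the paper leaves N unspecified.

```python
def defense_type(defender_x, pitch_length=105.0, frac_high=0.8, n_last_third=2):
    """Classify the defending block's height from its outfield players' X coords.

    X is measured from the attacking team's goal line (0) towards the
    opponent's goal (pitch_length). `frac_high` and `n_last_third` are assumed
    thresholds standing in for "almost entirely" and the paper's unspecified N.
    """
    half, third = pitch_length / 2.0, pitch_length / 3.0
    in_attacking_half = sum(1 for x in defender_x if x < half)
    in_last_third = sum(1 for x in defender_x if x < third)
    if (in_attacking_half / len(defender_x) >= frac_high
            and in_last_third >= n_last_third):
        return "high pressure"
    if in_attacking_half == 0:
        return "low block"
    return "medium block"

# A hypothetical defending block pressing high up the pitch.
block_type = defense_type([20, 28, 33, 40, 44, 46, 48, 50, 51, 52])
```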
This way, we can distinguish which runs were performed within an attack, a defense, or out of play (e.g., a central defender coming back from a corner to be ready for the opponent's goal kick). Finally, we set the attack and defense types regarding the team in possession and the defending team. \textbf{\subsection{Dynamic team formations and player roles}} The position or role played within a team substantially impacts the kind of efforts a soccer player will perform during a match (Lorenzo-Martínez et al. 2021). Figure 5 confirms these variations with the distribution of the distance covered at a high intensity per player role when teams were in and out of possession. Note that the differences are especially noticeable when the team is attacking. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{pictures/picture5.png} \caption{Distribution of meters covered at high intensity in attack and defense by each player role, normalized per 30 minutes of effective playing time in possession (left) and out of possession (right).} \end{figure} To identify player roles within a match, we first need to know their team's formation. In addition, this information also impacts players' physical performance, as Bradley et al. (2011) studied. We followed an approach similar to the one presented by Shaw and Glickman (2019), which can be summarized in the following steps: (1) we compute players' mean locations relative to their teammates to assemble a formation; then (2) we apply an assignment optimization algorithm between a set of templates that represent the most commonly used systems and the previous players' mean relative locations. Finally, (3) we assign the formation's positions to each involved player. In addition, some roles are equivalent among different tactical systems (e.g., a left full-back in a 4-4-2, 4-3-3, and 4-2-3-1), so we choose to simplify them.
First, we discard the wing-specific part of the role (e.g., merging left/right-wingers) and then combine similar roles (e.g., full-backs and wing-backs). The resulting player roles are central defenders, full backs, defensive midfielders, midfielders, wingers, and strikers. Note that while these roles correspond to the standard set of positions in soccer, they are calculated for every second in the match, allowing for match-related variations rather than relying on global and hand-labeled positions. \textbf{\subsection{Linking with Expected Possession Value}} In recent years, the estimation of the value of players' actions has become a trend in soccer analytics research. It is a new dimension that allows turning descriptive analysis into prescriptive analysis by providing qualitative insights to practitioners. In the same way that we estimate the value added by a player's passes, we would like to know which players are running better, adding more value to their team's possessions with their off-ball runs. There are different approaches in the literature to estimate the long-term probability that a possession ends in a goal. We can categorize them into (1) action-value and (2) possession-value models, depending on how they define the state of the possession. Models such as the ones described by Decroos et al. (2020) or the OBV by StatsBomb (2021) are examples of the former, and they use spatiotemporal features to predict the value added by on-ball actions. On the other hand, possession-value models like the ones presented by Rudd (2011), Singh (2019), or Fernández et al. (2019) provide a representation of the game state and estimate the expected value at the end of the possession by integrating over all the possible paths the possession can take from that state. In this work, we use Fernández et al.
(2021) EPV framework, which is built on top of tracking data and considers the impact of both observed and potential actions, providing a rich source of information to assess players' off-ball contribution. Employing the Expected Possession Value framework, we want to understand the impact of players' runs on value creation. In particular, we want to associate the EPV gain between the beginning and end of each HI run to measure the increase in the team's goal probability after the player's effort. The EPV of the attacking team is recorded at two different moments: (I) when the player starts to increase the speed (the starting valley of the effort ends) and (II) two seconds after the peak of the effort ends. After each high-intensity run ends, we compute the value added to the team's possession as the difference between these two values. We then perform an ordinary least-squares linear regression to obtain the influence of each player's HI runs on their team's EPV increase, as shown in equation (1), \begin{eqnarray} EPV_{added} \sim \beta_0 + \beta_1 \cdot angle + \beta_2 \cdot distance + \sum_p \beta_p \cdot E_p \end{eqnarray} where $angle$ and $distance$ are computed to the opponent's goal from the initial location of the run, and $E_p$ is an indicator for player $p$ making the high-intensity run. Note that we consider the player's role per match, so that the same player with two different roles will have different coefficients per role. With this modeling approach, we are essentially estimating a player's average contribution to the possession value when he performs a high-intensity run while controlling for the angle and distance effects. \textbf{\subsection{Aggregation and normalization}} Finally, to create player and team profiles, we need to aggregate per-match metrics over a defined period of time, which is the 2020/2021 season in our case.
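Equation (1) can be estimated with ordinary least squares; the sketch below uses synthetic runs and one-hot player dummies (dropping the global intercept so the dummies absorb each player's average contribution), so all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HI runs: angle and distance to the opponent's goal at the run's
# origin, plus the index of the (hypothetical) player performing the run.
n_runs, n_players = 400, 5
angle = rng.uniform(0, np.pi / 2, n_runs)
distance = rng.uniform(5, 60, n_runs)
player = rng.integers(0, n_players, n_runs)

# Ground-truth per-player contributions used to generate the target.
true_beta_p = np.array([0.010, 0.004, 0.008, 0.002, 0.015])
epv_added = (0.001 * angle - 0.0002 * distance + true_beta_p[player]
             + rng.normal(0, 0.002, n_runs))

# Design matrix: angle, distance, and one-hot player dummies.
E = np.eye(n_players)[player]
X = np.column_stack([angle, distance, E])
beta, *_ = np.linalg.lstsq(X, epv_added, rcond=None)

# Estimated influence of each player's HI runs on the team's EPV increase.
player_influence = beta[2:]
```

The fitted dummy coefficients play the role of the $\beta_p$ weights used later to rank players by their HI-run influence.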
The most common way of normalization is per match (i.e., 90 minutes), but the truth is that some games have a higher ball-in-play (also known as effective) time than others, with differences of up to 20 minutes in total. Therefore, to avoid biasing the results and favoring teams whose matches have a higher effective time, we choose the following normalization: \begin{enumerate} \item Metrics that do not refer to a specific possession phase are normalized per 60 effective minutes, very close to the mean ball-in-play time per match. \item Metrics referring to events and actions that happen either in attack or in defense are normalized per 30 minutes of effective time in and out of possession, respectively. \end{enumerate} Note that for the following applications, we filter out those players who did not play a minimum of 450 minutes, so that we remove potential outliers due to the lack of minutes. \section{Applications} Supported by this framework, we now focus on answering a series of commonly asked questions that will serve as practical applications to analyze player and team behaviors. \textbf{\subsection{The higher the percentage of possession, the less we run?}} Pep Guardiola once said: "Without the ball you have to run, but with the ball you have to stay in position and let the ball run." While coaches often refer to concepts related to running, not all the metrics are interpreted the same way. Some might refer to the total distance traveled, while others might emphasize the more demanding efforts. So, to give a more practical sense to this analysis, we split the initial question into two different questions: \begin{enumerate} \item When comparing in and out of possession phases, do teams travel more distance in total?
\item Do teams travel longer distances at a high intensity in attack phases than in defense phases? And does this vary according to the possession percentage? \end{enumerate} To answer the first question, we need to control for the effect of the possession percentage. Note that if we sum the total distance covered by a team with higher ball possession, they will look like they run longer in attack because they spend more time in this game phase. Therefore, we decide to normalize these metrics per 30 minutes of effective playing time in and out of possession, respectively. Figure 6 shows the distribution of the distance covered by teams in attack and defense phases, normalized as explained before, and we observe that teams tend to run more when out of possession, confirming Guardiola's idea. In addition, we also show that the distribution of the distance covered at a high intensity follows the same trend. From now on, we limit our analysis to evaluating performance in high-intensity running, since it allows us to differentiate the players' behaviors more clearly, as they are less confounded by insignificant movements. At the same time, the FC Barcelona coaches who contributed to this study consider that high-intensity runs represent the moments of maximum effort and are the most significant for understanding a player's physical capacity. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{pictures/picture6.png} \caption{Distribution of the distance covered in attack and defense, both in total (on the left) and at a high intensity (on the right). All the metrics are normalized per 30 minutes of effective playing time.} \end{figure} Regarding the second question, Lorenzo-Martínez et al. (2021) showed that teams with very high possession percentages run less, both in total and at a high intensity. In our case, we want to evaluate these differences further and isolate them between the attack and defense phases, understanding how team style affects them.
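The per-phase normalization used above can be sketched as follows; the match numbers are hypothetical.

```python
def per_30_effective(value, effective_seconds_in_phase):
    """Normalize a phase-specific metric per 30 minutes of effective time
    spent in that phase (in or out of possession)."""
    return value * (30 * 60) / effective_seconds_in_phase

# Hypothetical team-match: 4200 m at HI in attack over 26 effective minutes
# in possession, 5100 m at HI in defense over 29 effective minutes out of it.
hi_attack = per_30_effective(4200, 26 * 60)
hi_defense = per_30_effective(5100, 29 * 60)
```

Scaling each phase by its own effective time is what makes attack and defense distances comparable across teams with very different possession shares.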
Figure 7 shows the teams' normalized distances covered at a high intensity in and out of possession from various European competitions in the 2020/2021 season. The red line separates the teams that run more in defense (above the line) from those that run more when attacking. If we focus on the size of the circles, which represents the mean percentage of possession, we can see that teams with a higher ball possession percentage tend to travel more distance at high intensity per minute in defense than in attack. But is there further correlation between physical and tactical variables? We have carried out a principal component analysis (PCA) to assess the relationships between tactical and physical indicators in teams' styles. Figure 8 presents a biplot comparing the teams from the English, German, and Spanish leagues in the 2020/2021 season, according to the two principal components, which explain 87.4\% of the total variance. The circle's color represents the Expected Goals (xG) differential of each team in the mentioned season (accounting exclusively for the games available in the dataset). Note that the translation between team acronyms and full names can be found in Appendix 1. \begin{figure}[htbp] \centering \includegraphics[width=9.5cm]{pictures/picture7.png} \caption{The distance at HI covered in attack (X-axis) and defense (Y-axis) normalized per 30 minutes of effective playing time in and out of possession, respectively. The circle's color represents the differential of Expected Goals, whereas the size represents the percentage of ball possession.
The red line represents the values whose distance at high intensity is equal in attack and defense.} \end{figure} By analyzing the relationships between variables in the plot, we extract the following conclusions about the teams' offensive and defensive styles: \begin{enumerate} \item As previously mentioned, the possession percentage is significantly negatively correlated (-0.78) with covering more distance at HI in attack than in defense. \item Teams with more possession have a high tendency towards an associative playing style (i.e., a lower percentage of direct play), avoiding actions with a higher risk of losing the ball. \item Similarly, teams with more possession tend to have a defensive style based on high press (correlation of 0.81), trying to recover the ball as soon as possible. \item A higher percentage of direct play, which means more counterattacks, transitions, and long balls, is somewhat correlated (0.55) with more distance covered at HI in attack. \item Finally, a higher tendency to press high is significantly correlated (0.78) with running more at HI in defense, as teams with this characteristic will make shorter but more intense efforts. \end{enumerate} \begin{figure}[htbp] \centering \includegraphics[width=10cm]{pictures/picture8.png} \caption{Biplot of teams in the first two Principal Components of a PCA with a mix of physical metrics (HI distance covered) and tactical metrics (team's style). All metrics are normalized per 30 minutes of effective playing time in or out of possession, depending on whether they refer to the attack or defense phases. The circle's color represents the team's differential of Expected Goals.} \end{figure} The biplot also lets us grasp the similarity between teams regarding the mix of physical and tactical variables that we have selected. For example, Leeds United shows a striking tendency to run long distances at a high intensity in attack and defense, and only VfL Wolfsburg and VfB Stuttgart are similar to them.
In addition, the English team tends to press high. Secondly, teams with higher ball possession percentages such as FC Barcelona, Manchester City, or Bayern Munich are associated with high pressing and little direct play, mostly playing an associative style. These teams are related to a higher ratio of distance covered at HI in defense. Finally, teams like Cadiz, Newcastle, or Werder Bremen are associated with low possession percentages, based on direct play, and they tend to defend by retreating into their own half of the pitch. These teams tend to make more high-intensity efforts in attack. Overall, we can see that the xG differential, which we use as a proxy for the team's strength, has no apparent correlation with running more; it depends more on the team's style. \textbf{\subsection{Do all players run the same way?}} When we want to define the characteristics of a player, the first two dimensions we focus on are the technical and tactical ones. We can approach the former by quantifying events such as passes, dribbles, drives, or shots and complement it with qualitative information such as the expected value added. This type of analysis focuses on understanding the purpose of the player within the team's way of playing and other aspects such as the positions they tend to play or their behaviors in the different phases of play. However, a player's profile is not complete without the physical dimension. Do some players mostly run at a high intensity when interacting with the ball? Does a player tend to make disruptive runs to open up spaces for other teammates? Such questions remained unanswered to this date. In addition, quantifying the average distance covered by a player or the number of efforts at a high intensity is not enough. For example, we will find players who distribute the 10 kilometers they run per match differently, and their behaviors are significantly different with respect to the ball.
Figure 9 demonstrates that players in different roles perform on-ball actions at very different speeds. For example, Central Defenders tend to perform more on-ball actions than the rest of the roles at very low speeds (< 6 km/h, walking) and fewer actions sprinting. Still, there is a wide spread among Central Defenders when jogging, as some can double the percentage of others. On the other hand, more attack-driven roles such as Wingers and Strikers tend to perform more actions at very high speeds (> 21 km/h, sprinting), but there is also wide variability in those roles and speed ranges. \begin{figure}[htbp] \centering \includegraphics[width=15cm]{pictures/picture9.png} \caption{Distributions of the percentage of on-ball actions each role tends to do at the different speed categories.} \end{figure} However, the proportion of high-intensity efforts spent on on-ball actions is only a fraction of the total HI runs that a player makes in attack phases. Figure 10 shows the differences among some of the English, Spanish, and German wingers in 2020/2021. Players such as Messi, Grealish, or Mbappé are wingers who save their energy for those times they have a great chance of getting the ball. In addition, as the circle's size indicates, players such as Saint-Maximin and Adama Traoré stand out for carrying the ball at high speed considerably more often than other players. On the other hand, Cheryshev or Jon Morcillo are players who participate less with the ball but cover more distance at HI. The following figure also shows that, on average, from 60\% to 80\% of a winger's HI efforts in the attack phase do not involve direct on-ball participation in the possession, highlighting the importance of assessing off-ball performance in soccer. Unfortunately, this type of off-ball behavior went unnoticed in data-driven analyses of players until now.
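The speed categories behind Figure 9 can be sketched as below. The walking (< 6 km/h) and sprinting (> 21 km/h) cut-offs come from the text, while the jogging/running split at 14 km/h is an assumed intermediate threshold.

```python
from collections import Counter

def speed_category(speed_kmh):
    """Map a speed in km/h to the paper's categories; the 14 km/h split
    between jogging and running is our assumption."""
    if speed_kmh < 6:
        return "walking"
    if speed_kmh < 14:
        return "jogging"
    if speed_kmh <= 21:
        return "running"
    return "sprinting"

def category_shares(speeds):
    """Share of actions performed in each speed category."""
    counts = Counter(speed_category(s) for s in speeds)
    total = len(speeds)
    return {c: counts.get(c, 0) / total
            for c in ("walking", "jogging", "running", "sprinting")}

# Speeds (km/h) at a hypothetical player's on-ball actions.
shares = category_shares([3, 5, 4, 9, 12, 16, 19, 23, 25, 7])
```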
With the framework we present in this paper, we can quantify the number of times a player makes each of the defined types of off-ball movements described in Chapter 4. More importantly, we can say how unique this number is compared to the other players with the same role in the Big-5 leagues. \begin{figure}[htbp] \centering \includegraphics[width=10cm]{pictures/picture10.png} \caption{Percentage of a winger's high-intensity efforts in the attack phase that are performed to participate directly with the ball. The circle's size represents the same percentage but restricted to efforts driving the ball for at least 3 seconds. The Y-axis shows the number of HI efforts per 30 minutes of effective playing time in possession.} \end{figure} Figure 11 shows the off-ball efforts at a high intensity of Serge Gnabry in Bayern Munich's matches in the 2020/2021 season. Note that the number along each arrow represents the frequency with which the player made a specific type of HI run per 30 minutes of effective time in possession of the ball. In addition, the color is relative to how rare that frequency is compared to the rest of the wingers (with more than 450 minutes played in that role) of the Big-5 European leagues. The German right-footed player is an all-purpose winger who can play on both the left and right flanks. Thanks to this kind of off-ball behavior analysis, we can now reveal that he makes completely different movements when he plays on his natural wing, unlike when he plays as an inverted winger on the left flank. The plots on the top of Figure 11 show that when he plays as an inverted winger, he tends to make high-intensity runs deep behind the back of the defenders (3.0 times per 30 mins of effective time in possession). In contrast, he does not go inside the opponent's block that often.
On the other hand, in those matches where he played most of the time as a traditional winger on his natural flank (plots below), he shows the opposite behavior, with fewer disruptive runs to the back and more runs to the half-spaces inside the block (3.8 times) or staying on the wing (5.3 times). Note from the plots on the right that whenever he starts HI efforts from inside the opponent's block, the behavior is pretty similar regardless of his position within Bayern Munich's tactical system. This kind of information is precious for a scouting department that seeks to identify the type of winger a player is without watching hours of footage to grasp these insights. And this same pattern is adaptable to the rest of the player roles, creating player profiles that allow us to understand a player's most common movements at a high intensity. This new dimension of analysis complements tactical-technical studies of frequency and value added with on-ball actions. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{pictures/picture11.png} \caption{Serge Gnabry's frequency of HI runs per match as an inverted winger (above) and a traditional winger (below). The arrow's origin and destination represent the initial and end locations of the effort relative to the opponent's block. The number is normalized per 30 effective minutes in possession of the ball. The arrow's color is relative to the player's percentile compared to the wingers in the Big-5 European leagues.} \end{figure} \textbf{\subsection{Picking up the starting lineup}} Following the same path and going a step further, we group the normalized frequencies of off-ball run types across all of a team's players. This way, we can answer questions like these: Is the next opponent a team that gains depth from inside areas, or will their wingers provide most of the depth? Is the opponent symmetric, or do they present different behaviors on the left and right flanks?
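Grouping the normalized run-type frequencies across a lineup can be sketched as follows; the player names and numbers are hypothetical, not the actual figures of the application below.

```python
from collections import defaultdict

def lineup_profile(player_profiles):
    """Sum per-player run-type frequencies (already normalized per 30
    effective minutes in possession) into a single lineup profile."""
    team = defaultdict(float)
    for profile in player_profiles.values():
        for run_type, freq in profile.items():
            team[run_type] += freq
    return dict(team)

# Hypothetical per-player frequencies for two candidate lineups.
base = {"Forward A": {"inside-to-back": 3.2, "wing-to-back": 2.3},
        "Forward B": {"inside-to-back": 1.1, "inside-to-wing": 1.8}}
alternative = {"Forward C": {"inside-to-back": 1.4, "wing-to-back": 0.9},
               "Forward B": {"inside-to-back": 1.1, "inside-to-wing": 1.8}}

depth_base = lineup_profile(base)["inside-to-back"]
depth_alternative = lineup_profile(alternative)["inside-to-back"]
```

Comparing the two aggregated profiles shows directly how a single substitution changes the depth a lineup provides.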
We will focus on the starting lineup of the 2020/2021 Champions League winner as an example. We have aggregated the frequency of each type of HI run for all the players that formed the 3-4-2-1 used by Chelsea's coach Thomas Tuchel in that match. Note that the numbers are the full-season averages per player and not the actual numbers of the final. But would Chelsea have been less deep if Giroud had played as a striker instead of Werner? Do they keep the same style regardless of the players in the lineup? To answer these questions, we compare the final match's lineup with an alternative lineup where we substituted the three forwards and the wing-backs. Observing the differences in the plots of Figure 12, we can state that Chelsea would maintain its style. However, there are two noticeable differences in the HI run distribution. \begin{figure}[htbp] \centering \includegraphics[width=11.5cm]{pictures/picture12.png} \caption{Aggregation of the average frequency of the HI runs of all players in two different lineups, normalized per 30 minutes of effective playing time in possession of the ball.} \end{figure} Removing Werner from the lineup would decrease the number of disruptive runs per 30 effective minutes of possession both from the inside (from 17.6 to 14.4 times) and from the left wing (from 9.8 to 7.5). In fact, in Havertz's goal in the UCL final, two inside-to-back runs made the goal chance possible: the former by Werner to drag the central defender and the latter by Havertz to face the goalkeeper and score (link to the video of the goal: \href{https://bit.ly/3FPrRuW}{https://bit.ly/3FPrRuW}). Secondly, the team's off-ball runs would be more intense from the right flank than from the left, perhaps because Hudson-Odoi tends to look for the defenders' backs. \textbf{\subsection{How fast does our striker have to be to be valuable?}} Section 4.2 focused on categorizing the off-ball runs to create offensive profiles of players.
But do players get in touch with the ball at the same speed? And, what is more important, do they add equal value at low and high intensities? Those questions must be answered too to distinguish different types of players within the same role. Before focusing on the value added, we must first understand whether players depend more on high intensity to receive the ball. We will use a different concept from the standard approach when referring to on-ball actions. Instead of only considering the start and end of passes, crosses, shots, etcetera, we consider the time window between a player's reception of the ball and the last action he performs within that ball control. Following this, the full extent of Messi's ball possession shown in Figure 1 would be considered a single action, providing a more nuanced definition involving ball carries. Along these lines, Figure 13 shows the percentage of times strikers in the Big-5 European competitions were moving at the different speed categories 2 seconds before receiving the ball. In the background, we plot the 90\% confidence interval. In addition, we highlight five well-known strikers from the top 5 European leagues. If we focus on Messi, we can observe that he is outside the confidence interval in three out of the four speed categories. We see that he is not the classic striker, since he tends to receive the ball while walking much more frequently than the rest. On the other hand, it is much less common for him to depend on high intensity to participate in ball possession. An opposite example is Timo Werner, who tends to receive the ball fewer times at very low intensity than most strikers, but he belongs to the top 5\% of players at high intensities. Finally, we observe that Dybala leans towards receiving after jogging or running compared to the rest of the strikers.
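The role-level 90\% interval used in Figure 13 can be sketched with percentiles; the player values below are synthetic.

```python
import numpy as np

def role_interval(values, level=0.90):
    """Percentile-based interval covering `level` of the players in a role."""
    lo_q = (1 - level) / 2 * 100
    hi_q = (1 + level) / 2 * 100
    lo, hi = np.percentile(values, [lo_q, hi_q])
    return lo, hi

def outside_interval(player_value, role_values, level=0.90):
    """Flag a player whose value falls outside the role's interval."""
    lo, hi = role_interval(role_values, level)
    return player_value < lo or player_value > hi

# Hypothetical share of receptions made while walking for 40 strikers.
rng = np.random.default_rng(1)
role_values = rng.normal(0.35, 0.05, 40)
```

A player flagged outside the band in a given speed category is the kind of outlier highlighted for Messi or Werner in the text.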
\begin{figure}[htbp] \centering \includegraphics[width=10cm]{pictures/picture13.png} \caption{Percentage of times that a striker directly participates with the ball at each speed category. The speed is recorded 2 seconds before the ball reception. The gray shaded area represents the 90\% confidence interval among the players with that same role.} \end{figure} Such information can greatly benefit a physical coach, who could use these player trends to prepare individualized drills that replicate the specific demands that a player will face on game day. But can they reinforce those types of actions in which the player is significantly decisive? That is when the EPV comes in handy. Figure 14 shows the total EPV added by six strikers from the main European leagues in the different speed categories. Note that EPV added measures the difference in EPV between the end and the beginning of each action. Generally, we observe that strikers generate significant increases in the possession value at higher speeds. We have highlighted two players who could be considered relatively similar: Messi and Dybala. Even though Messi usually generates more value than Dybala with his actions, they both tend to achieve comparable accumulated value added for the depicted speeds. On the contrary, we detect players like Mbappé or Adama Traoré who add more value with their actions at high intensities. These examples, and many more extracted from the presented framework, are beneficial for both coaches and physical trainers when designing training sessions. Having detailed information on the movements and speeds with which a player tends to interact with the ball is key, but measuring how much a player influences the team's performance is a game-changer. \begin{figure}[htbp] \centering \includegraphics[width=10cm]{pictures/picture14.png} \caption{EPV added by five strikers with on-ball actions at the different speed categories.
The circles' size represents the number of actions the player participates in at each speed category. Both variables are normalized per 30 minutes of effective time in possession. The gray lines in the background represent other strikers from the main European leagues.} \end{figure} \textbf{\subsection{What is the impact of high-intensity runs in increasing goal value?}} The previous section introduced a variable to measure the quality of on-ball actions, segregating the value by the different speed categories. However, we have not yet discussed adding a qualitative component to all the HI runs a player performs in possession, both on- and off-ball. To capture the influence of players' HI efforts on the team's long-term probability of scoring a goal (while at the same time lowering the probability of conceding one), we use the players' coefficients from the linear regression presented in the equation defined in Section 3.4. Both plots in Figure 15 show this new metric's values for Real Madrid and FC Barcelona's players (on the top) and Liverpool and Manchester City's (on the bottom). The metric is then compared to each player's distance covered at HI. Note that some players appear more than once in the plot because we consider player roles separately if the player accumulated at least 450 minutes per role. We observe how players like Jordi Alba, Naby Keïta, or Frenkie De Jong (both as a Midfielder and as a Central Defender) tend to cover less distance at HI, but they have a strong influence on their team's EPV when they do so. On the other hand, we have previously seen in Section 4.4 that Messi does not tend to receive at a high intensity very often; however, he has an excellent ability to take advantage of this kind of situation.
\begin{figure}[htbp] \centering \includegraphics[width=11.5cm]{pictures/picture15.png} \caption{Influence of players on their team's EPV after a HI run, extracted from the weights of a linear regression model. Each color is associated with a different player role, and the circle's size represents the on-ball EPV added normalized per 30 minutes of effective playing time in possession.} \end{figure} Furthermore, Sergio Busquets and Thiago Alcantara show a below-average influence with high-intensity runs compared to all the positions, but a very high on-ball EPV added (as shown by their circle's size). This behavior is a clear representation of positional midfielders. On the other hand, players such as Diogo Jota or Raheem Sterling represent a type of attacking midfielder that contributes highly to the team's EPV through their high-intensity runs. \textbf{\subsection{Do teams maintain high-intensity efforts throughout the entire match?}} Until now, we have focused on the contextualization of the physical indicators through tactical variables. However, there is a fundamental contextual variable in every team sport: time. In this study, we will not delve into the time factor in depth; however, we want to make a first examination of how physical effort varies throughout the game. \begin{figure}[H] \centering \includegraphics[width=12cm]{pictures/picture16.png} \caption{Above, the distribution of the distance covered by teams in defense per match minute. Below, the variation with respect to the mean of the distance in defense at a high intensity (in orange), compared to the variation of the global distance (in blue). Both curves are smoothed by applying a rolling average of 5 minutes.} \end{figure} Figure 16 shows the evolution of the distance covered in defense in each minute of a match. Note that the metrics are normalized per 60 seconds of effective playing time out of possession.
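The smoothing in Figure 16 can be sketched as a 5-minute rolling average of the per-minute variation with respect to the match mean; the per-minute distances below are synthetic.

```python
import numpy as np

def variation_vs_mean(per_minute_distance, window=5):
    """Percent variation of per-minute distance vs. the match mean,
    smoothed with a rolling average of `window` minutes."""
    d = np.asarray(per_minute_distance, dtype=float)
    variation = (d - d.mean()) / d.mean() * 100.0
    kernel = np.ones(window) / window
    return np.convolve(variation, kernel, mode="valid")

# Hypothetical per-minute HI distance in defense for a 90-minute match:
# stable for 65 minutes, then dropping off towards the end.
minutes = np.concatenate([np.full(65, 55.0), np.linspace(55, 35, 25)])
smoothed = variation_vs_mean(minutes)
```

The smoothed curve starts above the match mean and finishes well below it, reproducing the late-match decline discussed next.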
The plot above presents the distribution of the distance covered at each minute of the game, while in the plot below we compute the variation with respect to the mean of the distance (in blue) and of the distance at a high intensity (in orange) in each minute. We can see a considerable decrease in both the global and the high-intensity distance in defense from minute 65 onwards, but the latter's reduction is much steeper. The reasons for this can be multifactorial, such as the goal differential or fatigue, which, despite being a complex concept, certainly conditions performance. Further analysis along this path would help determine the optimal moment to make a substitution or change tactics by putting players on the pitch who exploit the areas that require more defensive effort at the end of the game. \begin{doublespace} \section{Discussion} \end{doublespace} The presented top-down framework proposes a novel approach for (1) estimating physical indicators from tracking data, (2) contextualizing each player's run to better understand the purpose and circumstances in which it is done, (3) adding a new dimension to the creation of player and team profiles, which often ignored physical indicators, and finally (4) assessing the value added by off-ball high-intensity runs by linking them with a possession-value model. In addition to the study's novelty, the framework allows answering practical questions from very different profiles in a soccer club: \begin{enumerate} \item First, starting from the ones closest to the grassroots, players could benefit from data-driven feedback and explanations that help them understand the impact of their efforts on their team's performance, boosting player development. \item Secondly, the most direct beneficiaries are coaches and analysts, who can benefit from new tactical insights that add a new dimension to better understand the next opponent and its players by merging physical behavior with tactical information.
\item Then, a scouting department would profit from an improved version of player knowledge, identifying the off-ball profile, which is crucial to understanding the player as a whole. \item Finally, physical coaches and readaptation physiotherapists could also benefit. The former could design individualized drills based on the demands of each player, whereas the latter could understand the type of high-intensity efforts a player tends to make in-match to tailor the exercises in the return-to-play phase fittingly. \end{enumerate} In this work, we argued that the simple aggregation of physical indicators such as distance traveled, accelerations, or high-intensity runs provides an incomplete and many times insufficient picture of players' actual physical skills regarding their contribution to the game. In the end, soccer is about scoring and preventing goals, but there has been little effort so far in the literature to integrate tactical context and value creation with players' physical efforts. From an individual perspective, we showed how combining fundamental tactical concepts with players' runs provides a richer context for building a player's movement profile and identifying what types of movements occur more often and which create more value. This type of analysis moves research one step closer to the so-desired individualization of training drills and to the understanding of the physical demands of a match so these can be translated into training, a long-standing goal in sports science. From a team-level tactical perspective, the proposed contextualization allows coaches to understand which players would maximize a given strategy, such as exploiting spaces in the sidelines or opening the field to overload the midfield.
This paper opens a new research path, with potential derived studies that could address subjects such as analyzing the optimal moment for the coach to make substitutions, or estimating players' fatigue from a tactical perspective to identify those zones in the opponent's defense that can be exploited once the match is advanced. In addition, a comprehensive understanding of fatigue's effect on players' decision-making and technical performance would directly improve player development. \begin{doublespace} \end{doublespace}
\section{INTRODUCTION} Ice hockey is a sport in which players score goals in a six-on-six game on an ice rink. Each team tries to score by sending the puck, the black disk used in ice hockey, into the opponent's goal. Currently, the defensive formation against the opposing team's attack is determined empirically, and it is unclear whether it is indeed the optimal formation or not. Therefore, we seek to automate the creation of the defensive formation, utilizing the fact that the goal of the ice hockey defensive formation is to cover a large area while placing some defenders around the attacking players. Such behavior is similar to the objective of the coverage control problem, in which multiple robots form a network and move simultaneously while cooperating. Thus, we attempt to automate the simulation of ice hockey formations based on coverage control. Our predecessors approached this problem by proposing the above idea and succeeded in imitating defensive strategies for empirically designed specific scenes \cite{DAN2020}. They also simulated pass cut and goal block movements in these scenes by utilizing a line-based control barrier function (CBF), a tool used to achieve a control objective subject to safety guarantees \cite{CBFA2014}, \cite{CBFA2017}. However, their approach is specific to those scenes. Moreover, their line-based CBF is not fully able to imitate pass cut and goal block movements. In this paper, we go beyond the imitation of specific scenes. We propose a system to generate defensive formations for ice hockey in general scenes. We overcome the drawbacks of the previous line-based CBF logic with a novel CBF model and further improve the logic based on \cite{MDIC2021} and \cite{OOTA2020}. We also confirm that the novel system enables the generation of strategies in general scenes by inputting data from real scenes.
\section{PRELIMINARY} In the following, we introduce the fundamentals of coverage control and control barrier functions for our simulation. These explanations are the same as those in \cite{MDIC2021}. \subsection{Coverage Control} Coverage control is a multi-agent control method that drives agents toward the optimal placement with respect to a predetermined density function. Applications of coverage control have been reported in many fields, such as [4]. \label{cover} \subsubsection{System model} We consider the movement of $n$ agents, or defensive players, $V \equiv \{1,\cdots,n\}$ on the two-dimensional convex polygonal field $Q \subset {\mathbb R}^2$. We model each defensive player by the equation below \begin{equation} \label{dynamics} \dot{x}_i (t) = u_i (t),\, x_i(0) = x_{0i} \end{equation} where ${x}_i = [x_{xi},x_{yi}]^T$ is the position of the $i^{th}$ defensive player and ${u}_i = [u_{xi},u_{yi}]^T$ is the input of the $i^{th}$ player. Each point of the field also has a weight, defined by the weight function $\phi(q)$. Next, we divide the field $Q$ among the defensive players $x_i$ into Voronoi cells $C_i$ according to Eq. \ref{voronoi}. \begin{equation} \label{voronoi} C_i(x) = \{q \in Q : ||q-x_i||^2 \leq ||q-x_j||^2, \forall{j} \in V \} \end{equation} This implies that any point $q \in C_{i}(x)$ belongs to the cell of the closest defensive player. A visualization of the Voronoi cells can be seen in Fig.\ref{voronoi_fig}.
\begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth,clip]{images/voronoi.png} \caption{Voronoi region by \cite{YHYF2018}} \label{voronoi_fig} \end{figure} \subsubsection{Evaluation function and control input} The evaluation function for coverage control is defined as \begin{equation} \label{eval_func} J(x,t) = \sum_{i=1}^{n}{\int_{C_i(x)} ||q-x_i||^2\phi(q) \,dq} \end{equation} where $\phi(q)$ is the weight function at position $q$. This evaluation function quantifies how ``well'' the agents cover the area given the predetermined weight function. With Eq. \ref{eval_func}, we calculate the equilibrium state of the agents by setting its derivative to zero, as shown below. \begin{equation} \label{eval_func_diff} \frac{\partial{J}}{\partial{x}}\Bigr|_{x=x^{*}} = 0 \end{equation} By solving the above equation, we can derive the equilibrium state, which is \begin{equation} \label{eval_func_diff_ans} x_{i}^{*} = \frac{\int_{C_{i}(x)}q\phi(q)dq}{\int_{C_{i}(x)}\phi(q)dq} \end{equation} By considering the Voronoi cell $C_i(x)$ as a rigid body, we use Eqs. \ref{mass_voronoi} and \ref{cent_voronoi} to describe its properties. \begin{eqnarray} \label{mass_voronoi} \rm{mass}(C_i(x(t))) &=& \int_{C_i(x)} \phi(q) dq \\ \label{cent_voronoi} \rm{cent}(C_i(x(t))) &=& \frac{1}{\mbox{mass}(C_i(x(t)))} \int_{C_i(x)} q\phi(q) dq \end{eqnarray} We can consider Eq. \ref{mass_voronoi} as the cell's mass and Eq. \ref{cent_voronoi} as the cell's center of mass.
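On a discretized field, the mass and centroid of Eqs. \ref{mass_voronoi}-\ref{cent_voronoi} can be approximated by Riemann sums. The sketch below is our own illustration (grid size and agent positions are arbitrary) of how the centroids used by the controller can be computed:

```python
import numpy as np

def cell_centroids(agents, phi, xs, ys):
    """Approximate mass(C_i) and cent(C_i) by a Riemann sum on a grid."""
    XX, YY = np.meshgrid(xs, ys)
    pts = np.stack([XX.ravel(), YY.ravel()], axis=1)   # grid points q
    w = phi(pts)                                       # weight phi(q)
    # Voronoi assignment: each q belongs to the nearest agent's cell
    d2 = ((pts[:, None, :] - agents[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    cents = np.empty_like(agents, dtype=float)
    for i in range(len(agents)):
        m = owner == i
        mass = w[m].sum()                              # mass(C_i)
        cents[i] = (pts[m] * w[m, None]).sum(axis=0) / mass  # cent(C_i)
    return cents

agents = np.array([[1.0, 1.0], [3.0, 1.0]])
xs, ys = np.linspace(0, 4, 81), np.linspace(0, 2, 41)
cents = cell_centroids(agents, lambda q: np.ones(len(q)), xs, ys)
```

With a uniform weight function and this symmetric placement, the computed centroids stay close to the agents themselves; the coverage input is then $u_i = -k(x_i - \rm{cent}(C_i))$.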
Consequently, this interpretation shows that we can rewrite the equilibrium state of the agents as the center of mass of each Voronoi cell, which is \begin{equation} \label{eval_func_diff_ans_cent} x_{i}^{*} = \rm{cent}(C_i(x(t))) \nonumber \end{equation} As the objective of coverage control is to optimize the coverage of the agents over the area, we can design a controller that drives the agents toward the equilibrium state as in Eq.\ref{control_input}. \begin{equation} \label{control_input} u_i(t) = -k(x_i(t)-\rm{cent}(C_i(x(t)))) \end{equation} In Eq. \ref{control_input}, $k$ is a constant gain that the designer can set. Thus, with the above controller, we can optimize the coverage of the agents over the specified area. For example, starting from Fig.\ref{voronoi_fig} and given that the weight function is constant over the whole area, the eventual result of this setup is as shown in Fig.\ref{voronoi_eventual}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/voronoi_eventual.png} \caption{Eventual result of coverage control by \cite{YHYF2018}} \label{voronoi_eventual} \end{figure} In summary, this section explains how the controller input is interpreted and calculated: the input drives each agent toward its own Voronoi cell's center of mass. For further details and proofs regarding coverage control, please see \cite{YHYF2018}. \subsection{Control Barrier Function (CBF)} As stated in \cite{MDIC2021}, a control barrier function is needed, in addition to coverage control, for the system to simulate goal block and pass cut movements. In this section, we introduce the fundamentals of the CBF used in our simulation. Consider a generalization of the system dynamics previously stated in Eq.\ref{dynamics} \begin{equation} \label{new_dynamics} \dot{x} = f(x) \end{equation} given $x \in D \subseteq \mathbb{R}^m $ as the system state and $f(x)$ as a Lipschitz continuous function.
Next, suppose we have a set $C$ that satisfies the following equations \begin{eqnarray} \label{on_in_bound} C &=& \{ x \in \mathbb{R}^m :h(x) \geq 0 \}, \\ \label{on_bound} \partial{C} &=& \{ x \in \mathbb{R}^m :h(x) = 0 \}, \\ \label{in_bound} Int(C) &=& \{ x \in \mathbb{R}^m :h(x) > 0 \} \end{eqnarray} given that $h : \mathbb{R}^m \rightarrow \mathbb{R}$ is a continuously differentiable function. We recall the following theorems due to \cite{MDIC2021}. \newline\newline \textbf{Theorem 1:} \emph{For the system in Eq. \ref{new_dynamics} and the set $C$ in Eqs. \ref{on_in_bound}-\ref{in_bound}, if there exists a class $\mathcal{K}$ function $\alpha$ which satisfies the equation below, then $h$ is a control barrier function over $C \subseteq D \subseteq \mathbb{R}^m $} \begin{equation} \label{cbf} \sup_{u \in U}[L_{f}{h(x)}+\alpha(h(x))] \geq 0, \forall{x} \in D \end{equation} As stated in \cite{MDIC2021}, we also let $C$ = $D$ in this paper. Next, we recall the following theorem due to \cite{WANG2017}. \newline \newline \textbf{Theorem 2:} \emph{The set $C$ from Eqs. \ref{on_in_bound}-\ref{in_bound} is forward invariant if, for every initial state $x_0 \in C$, the state $x(t,x_0)$ of the system in Eq. \ref{new_dynamics} remains in $C$ for all time $t$.} Next, we define the set $K_{cbf}$ as below, \begin{equation} \label{kcbf} K_{cbf}(x) = \{u \in U : L_{f}h(x) + \alpha(h(x)) \geq 0 \} \end{equation} Thus, considering the continuously differentiable function $h$ and the set $C$ defined by Eqs. \ref{on_in_bound}-\ref{in_bound}, if $h$ is a control barrier function on $C$, then by \textbf{Theorem 2}, any Lipschitz continuous input $u(x) \in K_{cbf}(x)$ mapping $C$ to $U$ renders the set $C$ forward invariant. In conclusion, using $u$ from Eq. \ref{kcbf} guarantees that the state does not leave the set $C$.
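To make the forward-invariance guarantee concrete, the following toy example (ours, not from the cited works) filters a nominal input through the condition defining $K_{cbf}$ in Eq. \ref{kcbf} for a one-dimensional single integrator $\dot{x}=u$ with $h(x)=1-x^2$, i.e., the safe set $C=[-1,1]$:

```python
def h(x):
    # barrier function: safe set C = {x : h(x) >= 0} = [-1, 1]
    return 1.0 - x * x

def safe_input(x, u_nom, alpha=1.0):
    # enforce the K_cbf condition  dh/dx * u + alpha * h(x) >= 0,
    # with dh/dx = -2x, by minimally correcting the nominal input
    a, b = -2.0 * x, -alpha * h(x)       # need a * u >= b
    return u_nom if a * u_nom >= b else b / a

x, dt, u_nom = 0.5, 0.01, 1.0            # nominal input pushes out of C
hs = []
for _ in range(500):
    x += dt * safe_input(x, u_nom)
    hs.append(h(x))
```

The nominal input keeps pushing the state toward the unsafe region, but the filtered input only lets it approach the boundary $x=1$ asymptotically, so $h$ stays non-negative.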
As we apply the control barrier function on top of coverage control, which produces $u_{nom}$, we can solve Eqs.\ref{unom}-\ref{condition} to attain the minimum change of $u$ that satisfies the control barrier function constraint. \begin{eqnarray} \label{unom} u = \mathop{\rm argmin}\limits_u||u-u_{nom}||^2 \\ \label{condition} s.t.\hspace{0.3cm} L_{f}h(x)+\alpha(h(x)) \geq 0 \end{eqnarray} We can rewrite the above constraint as \begin{equation} \label{constraint_new} \dot{h}(x)+\alpha(h(x)) \geq 0 \end{equation} \section{PROBLEM SETUP} In this chapter, we describe the general framework of the problem. We consider a defensive phase against an attack by the opposing team on the hockey field shown in Fig. \ref{field}, which is 61 m long and 30 m wide. In this paper, we only consider the defensive formation on the left-hand side of the field for simplicity in our simulation. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,clip]{images/field.png} \caption{Field} \label{field} \end{figure} By inputting the positions of the opposing team's players, extracted from real scenes, and designing appropriate weight functions and constraints in the control barrier function, we generate the defensive formation of five players, excluding the goalkeeper. We denote $i=1,2,\dots, 5$ as the index of each defensive player and $X_i$ as the position of player $i$, whose dynamics follow Eq. \ref{dynamics}. Moreover, the maximum value of $||u_i||$ is constrained to 3.0 m/s, taking into account the movement speed of an actual ice hockey player. Under the above setup, the velocity input $u_i$ of each player $i$ is given by Eq. \ref{control_input}. Once the density function $\phi$ and the constant $k$ are determined, the motion of the defensive players can be simulated.
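A minimal sketch of one simulation step under this setup, combining the coverage input of Eq. \ref{control_input} with the 3.0 m/s speed limit (the centroid is held fixed here purely for illustration):

```python
import numpy as np

V_MAX = 3.0          # m/s, speed limit from the problem setup

def saturate(u, umax=V_MAX):
    n = np.linalg.norm(u)
    return u if n <= umax else u * (umax / n)

def step(x, cent, k=1.0, dt=0.1):
    # u_i = -k (x_i - cent(C_i)), then clipped to the allowed speed
    u = saturate(-k * (x - cent))
    return x + dt * u, np.linalg.norm(u)

x = np.array([25.0, 5.0])
target = np.array([10.0, 15.0])   # a fixed centroid, for illustration only
speeds = []
for _ in range(200):
    x, s = step(x, target)
    speeds.append(s)
```

The player moves at the speed cap while far from the centroid and converges exponentially once the proportional input drops below the cap.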
In this paper, we consider the problem of designing a density function $\phi$ and control barrier function constraints that generate defensive formations in general scenes, and we call this the ice hockey defensive formation generation problem. \section{CONTROL LOGIC FOR DEFENSIVE MOTION GENERATION} In the following, we describe the specifications for the control logic qualitatively. The detailed implementation of these specifications will be discussed in the next chapter. \subsection{Weight function design} We first recall the motivation for using coverage control for defensive formations. Coverage control makes each agent, a defensive player, cover areas on the field, prioritizing significant regions. We can assign the significance of an area by designing the weight function: an area with a higher weight value has higher importance. In our case, the defensive players should block the offensive players and try to protect the area in front of the goal. To block the offensive players, defensive players should stay between the offensive players and the goal while also moving according to the offensive players' movement. Accordingly, we design the weight function to be high where the defensive players should perform blocking. We also consider the goal as another offensive player to make the defensive players move to protect the goal. Regarding the significant regions, the area closer to the goal should be more significant than the one further away. Moreover, the area in front of the goal is especially important. Specifically, defensive players should protect the yellow area shown in Fig.\ref{inhouse_info}. According to \cite{OOTA2020}, this area contributes to 75\% of goals scored in a real game.
\begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth,clip]{images/inhouse_info2.png} \caption{House area from \cite{OOTA2020}} \label{inhouse_info} \end{figure} Next, the defensive players should pay more attention to the puck holder, i.e., the player in possession of the puck. When the puck holder moves closer to the goal, we can also partially ignore the vertically opposite side of the field, omitting the goal's weight on that side but still paying attention to offensive players in the area. Lastly, the middle area on the right side of the blue line should empirically also have lower importance. This area is visualized in Fig.\ref{blue_field}. \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth,clip]{images/field_blue.png} \caption{Area to lower the gain} \label{blue_field} \end{figure} \subsection{CBF design} The control barrier function is used in this problem to simulate goal block and pass cut movements according to \cite{MDIC2021}. In our simulation, we choose to focus only on performing a pass cut, which is the action of a defensive player moving in between two offensive players. In the previous CBF model in \cite{MDIC2021}, a line-based CBF is proposed for realizing pass cuts. However, this model does not guarantee that such an action is simulated. \subsubsection{Drawbacks of line-based CBF} In the previous model from \cite{MDIC2021}, the orthogonal distance to a line running through two offensive players' positions is used as the control barrier function model for simulating a pass cut. In this case, the CBF is non-negative when the defensive player is no further than $\delta$ from the line, as shown in Eq.\ref{old_pass_cut_cbf}. \begin{equation} \label{old_pass_cut_cbf} h_{i}^{cut} = \delta^2 - \dfrac{(aX_{x}-X_{y}+b)^2}{a^2+1} \end{equation} where $[X_{x} , X_{y}]^T$ is the position of the defensive player. The other variables in Eq.
\ref{old_pass_cut_cbf} are defined as \begin{equation} \label{a} a = \dfrac{X_{y1}-X_{y2}}{X_{x1}-X_{x2}} \end{equation} \begin{equation} \label{b} b = \dfrac{X_{x1}X_{y2}-X_{x2}X_{y1}}{X_{x1}-X_{x2}} \end{equation} where $[X_{xk} , X_{yk}]^T$ is the position of the $k^{th}$ offensive player for $k \in \{1,2\}$. There are two problems with the previous model. First, Eqs. \ref{a} and \ref{b} are incalculable when $X_{x1} = X_{x2}$. Second, this control barrier function does not guarantee that the defensive player moves between the two offensive players, as shown in Fig.\ref{line_cbf_ncor}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/line_cbf_cor.png} \caption{Proper result using line-based CBF} \label{line_cbf_cor} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/line_cbf_ncor.png} \caption{Improper result using line-based CBF} \label{line_cbf_ncor} \end{figure} Fig. \ref{line_cbf_ncor} demonstrates the incorrect movement: the blue star dot, representing the defensive player, does not move in between the two red dots, representing static offensive players. This happens because the line, defined by the two offensive players, stretches beyond the positions of the two. \subsubsection{Ellipsoidal CBF} We propose to overcome this problem by introducing an ellipsoidal model as the control barrier function. The motivation for using the ellipsoidal model is to force the defensive player to move inside the ellipse defined by the positions of the two offensive players, as shown in Fig.\ref{ellipsoid}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/ellipsoid_exp.png} \caption{Example of ellipsoidal control barrier function} \label{ellipsoid} \end{figure} \section{IMPLEMENTATION OF CONTROL LOGIC} In this chapter, we describe the implementation of the control logic according to the specifications in the previous chapter.
At the end of the chapter, we also describe the comprehensive algorithm used in the simulation. \subsection{Weight function implementation} We assign the weights described in the previous chapter to each offensive player and to the goal by applying a two-dimensional multivariate Gaussian distribution as a weight at each position. When we decide the position at which to apply the distribution for each player, we also consider the player's velocity and the nature of the defensive player, which tends to stay between offensive players and the goal, as stated in the previous chapter. We denote the position of the weight that represents offensive player $i$ as $P_{wi}$. The weight position can be described as shown in the equation below. \begin{equation} \label{gaussian} P_{wi} = P_i + \dfrac{P_{goal}-P_i}{|P_{goal}-P_i|} + \dot{P}_i \end{equation} where $P_i$ is the $i^{th}$ offensive player's position and $P_{goal}$ is the position of the goal, which is $[6,15]^T$. After determining the position at which to apply the Gaussian distribution, we use the following covariance matrix as a specification for applying the distribution to each position defined above. \begin{equation} \label{convarience} \Sigma_{Wi} = \left( \begin{array}{ccc} 15 & 0 \\ 0 & 15 \end{array} \right) \\ \end{equation} In conclusion, we have $n+1$ weight distributions on the field, namely $W_1,W_2,..,W_n$ and $W_{goal}$. Once the distribution is applied as a weight at each position stated above, the result can be seen in Fig.\ref{weight_pos}, where the red dots are offensive players. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,clip]{images/weight3.png} \caption{Weight assigned for each offensive player and at the goal position} \label{weight_pos} \end{figure} Next, we apply a distance gain to each of the weights representing an offensive player, prioritizing the area closer to the goal with a higher weight value.
To achieve this specification, we design the distance gain to take a higher value at positions closer to the goal. The updated weights are \begin{equation} \label{distance_gain} \scalemath{0.8}{ W^*_i(x,y) = W_i(x,y) * \dfrac{(25 - |P_{goal} - [x,y]^T|)}{25}, \forall{i} \in \{1,2,...,n\},\forall{[x,y]^T} \in Q } \end{equation} where $W^*_i(x,y)$ is the updated weight from the $i^{th}$ weight distribution at the position $(x,y)$ on the field $Q$. Fig.\ref{distance field} demonstrates the result of the updated weights. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,clip]{images/weight4.png} \caption{Weight from Fig.9 when applying distance gain} \label{distance field} \end{figure} Furthermore, another area of the field that defensive players should protect is the house area, as stated in the previous chapter. Therefore, we imitate the house area and apply a constant gain to raise the weights in this area. We design the house area as the inner area bounded by Eqs. \ref{l1}-\ref{circle_house}. \begin{eqnarray} \label{l1} y_1 &= -x_1 + 20 \\ \label{l2} y_2 &= x_2 + 10 \end{eqnarray} \begin{equation} \label{circle_house} (x_2-5)^2+(y_2-15)^2=225 \end{equation} In this paper, we denote this bounded area as $Y$. We apply a constant gain $p$ to the weight inside this area. Furthermore, we also remove the goal weight outside the house zone. Using the above specification, we define the resulting weight as \begin{eqnarray} \label{distance_yellow} W^{**}_i(x,y) &=& W^*_i(x,y) * p, \forall{(x,y)} \in Y \\ &&\forall{i} \in \{1,...,n\} \nonumber \\ W^{**}_{goal}(x,y) &=& \begin{cases} W^*_{goal}(x,y) * p * 1.5 &\text{$\forall{(x,y)} \in Y$}\\ 0 &\text{otherwise} \end{cases} \nonumber \end{eqnarray} where $Y$ is the inner area described by Eqs. \ref{l1}-\ref{circle_house} and $p$ is the gain constant. By setting $p=2$, the resulting weight can be seen in Fig. \ref{inhouse_weight}.
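For reference, the construction up to this point can be sketched on a grid as below. This is our own simplified illustration: the velocity term of Eq. \ref{gaussian} is dropped (static snapshot), the goal weight is omitted, negative distance gains are clipped to zero, and the house membership test is one reading of Eqs. \ref{l1}-\ref{circle_house}:

```python
import numpy as np

GOAL = np.array([6.0, 15.0])

def gaussian(pts, mu, var=15.0):
    # isotropic Gaussian weight with covariance 15 * I
    d2 = ((pts - mu) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var)

def in_house(x, y):
    # inner area bounded by the two lines and the circle (one reading)
    return (y >= -x + 20) & (y <= x + 10) & ((x - 5) ** 2 + (y - 15) ** 2 <= 225)

def weight_field(attackers, xs, ys, p=2.0):
    XX, YY = np.meshgrid(xs, ys)
    pts = np.stack([XX, YY], axis=-1)
    W = np.zeros(XX.shape)
    for P_i in attackers:
        # weight position: shifted one unit toward the goal (velocity term dropped)
        mu = P_i + (GOAL - P_i) / np.linalg.norm(GOAL - P_i)
        Wi = gaussian(pts, mu)
        # distance gain: higher weight closer to the goal
        Wi = Wi * (25.0 - np.linalg.norm(pts - GOAL, axis=-1)) / 25.0
        W += Wi
    W = np.where(in_house(XX, YY), p * W, W)   # constant gain inside the house
    return np.clip(W, 0.0, None)               # clip negative distance gains

xs = ys = np.linspace(0.0, 30.0, 61)
W = weight_field([np.array([10.0, 15.0])], xs, ys)
```

The house gain can be checked by comparing the fields computed with $p=2$ and $p=1$ at a point inside the house area.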
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,clip]{images/house_field.png} \caption{Weight from Fig.10 when applying gain on house area ($p=2$)} \label{inhouse_weight} \end{figure} To prioritize the puck holder when the puck holder moves close to the goal area, we selectively apply a constant gain to lower the significance of other offensive players on the field, as shown in Alg. \ref{alg:the_alg1}. We denote the puck holder's index as $pu$ $\in \{1,2,...,n\}$ and the x and y position of the puck holder as $P_{pu\_x}$ and $P_{pu\_y}$ respectively. \begin{algorithm} \caption{Weight adjustment algorithm} \label{alg:the_alg1} \hspace*{\algorithmicindent} \textbf{Result} $W^{***}_i(x,y) \hspace*{0.5cm} \forall{i} \in \{1,2,...,n,goal\}$, a set of weights. \begin{algorithmic}[1] \small \If{$P_{pu\_x} \leq 10$ and $P_{pu\_y} \leq 15$} \For{$i \gets 1$ to $n$} \If{$i \neq pu$} \State $W^{***}_i(x,y) = 0.1 * W^{**}_i(x,y)$ $\forall{y} <$ 15 \State $W^{***}_i(x,y) = 0.05 * W^{**}_i(x,y)$ $\forall{y} \geq$ 15 \EndIf \EndFor \State $W^{***}_{goal}(x,y) = W^{**}_{goal}(x,y)$ \hspace*{0.1cm} $\forall{y} <$ 15 \State $W^{***}_{goal}(x,y) = 0$ \hspace*{0.1cm} $\forall{y} \geq$ 15 or $x \geq$ 10 \ElsIf{$P_{pu\_x} \leq 10$} \For{$i \gets 1$ to $n$} \If{$i \neq pu$} \State $W^{***}_i(x,y) = 0.1 * W^{**}_i(x,y)$ $\forall{y} \geq$ 15 \State $W^{***}_i(x,y) = 0.05 * W^{**}_i(x,y)$ $\forall{y} <$ 15 \EndIf \EndFor \State $W^{***}_{goal}(x,y) = W^{**}_{goal}(x,y)$ \hspace*{0.1cm}$\forall{y} >$ 15 \State $W^{***}_{goal}(x,y) = 0$ \hspace*{0.1cm} $\forall{y} \leq$ 15 or $\forall{x} \geq$ 10 \EndIf \end{algorithmic} \end{algorithm} Given that the orange dot represents the puck holder, the result of the above weights can be visualized in Fig.\ref{p_gain_weight}.
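The weight adjustment of Alg. \ref{alg:the_alg1} can be sketched on discretized weight grids as follows. This is a simplified reading of the algorithm: the $x \geq 10$ clause of the goal weight is omitted for brevity, and the data layout (a dictionary of arrays keyed by player index) is our own choice:

```python
import numpy as np

def adjust_for_puck_holder(W, pu, P_pu, ys):
    # W: dict of weight grids (shape len(ys) x nx), keys 1..n plus 'goal'
    out = {i: Wi.copy() for i, Wi in W.items()}
    if P_pu[0] > 10:                       # puck holder far from the goal line:
        return out                         # neither branch of Alg. 1 fires
    lower = ys[:, None] < 15               # row mask, broadcast over the grid
    # the side containing the puck holder keeps gain 0.1, the other side 0.05
    near = lower if P_pu[1] <= 15 else ~lower
    for i in out:
        if i == 'goal' or i == pu:
            continue
        out[i] = np.where(near, 0.1 * out[i], 0.05 * out[i])
    out['goal'] = np.where(near, out['goal'], 0.0)   # x >= 10 clause omitted
    return out

ys = np.linspace(0.0, 30.0, 31)
W = {1: np.ones((31, 31)), 2: np.ones((31, 31)), 'goal': np.ones((31, 31))}
out = adjust_for_puck_holder(W, pu=1, P_pu=np.array([5.0, 10.0]), ys=ys)
```

With the puck holder at $(5,10)$, the other players' weights drop to 0.1 of their value on the near side and 0.05 on the far side, while the goal weight is zeroed on the far side.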
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth,clip]{images/house_corner.png} \caption{Weight from Fig.11 when applying constant gain.} \label{p_gain_weight} \end{figure} Empirically, the green area in Fig. \ref{field} is also less important than other areas. As a result, we apply a constant gain to lower this specific area. We define the updated weight by applying the following equation \begin{eqnarray} \label{blue_gain} \scalemath{0.8}{ W^{****}_i(x,y) = \begin{cases} W^{***}_i(x,y) * 0.1 &\text{$\forall{x} \in$ [14.945,21.960]}\\ &\text{and $\forall{y} \in$ [10,20]}\\ W^{***}_i(x,y) &\text{otherwise} \end{cases} } \end{eqnarray} \begin{equation*} \scalemath{0.7}{ W^{****}_{pu}(x,y) = W^{***}_{pu}(x,y) } \end{equation*} \begin{equation*} \scalemath{0.7}{ W^{****}_{goal}(x,y) = W^{***}_{goal}(x,y) } \end{equation*} for $i \in \{1,2,...,n\} \setminus \{pu\}$. In other words, this specification only applies to offensive players other than the puck holder. Lastly, as we focus only on the left-hand side of the field, we omit all the weight on the right side of the field. \begin{equation} \label{final_weight} \scalemath{0.8}{ W^{final}_i(x,y) = \begin{cases} 0 &\text{$\forall{x}$ $>$ 30}\\ W^{****}_i(x,y) &\text{otherwise} \end{cases} \forall{i} \in \{1,2,..,n,goal\} } \end{equation} These are the finalized weights used in the simulation. \subsection{Ellipsoidal CBF implementation} As stated in the previous chapter, we design the new CBF as the equation below.
\begin{equation} \label{ellipsoidal model} h(X,X_1,X_2) = 1-(X-X_o)^{T}P(X-X_o) \end{equation} where \begin{equation} \label{P model} P = \scalemath{0.7}{ \left[ \begin{array}{cc} \left( \dfrac{\rm{cos}^2\theta}{(l/2)^2} + \dfrac{\rm{sin}^2\theta}{(d/2)^2} \right) & \dfrac{\rm{sin}2\theta}{2} \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \right) \\ \dfrac{\rm{sin}2\theta}{2} \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \right) & \left( \dfrac{\rm{sin}^2\theta}{(l/2)^2} + \dfrac{\rm{cos}^2\theta}{(d/2)^2} \right) \end{array} \right] } \end{equation} given that $X = [x,y]^T \in Q$ is the position of the defensive player, $X_o = [x_o,y_o]^T \in Q$ is the center of the ellipse formed by taking the positions of the two offensive players $X_1, X_2$ as vertices with semi-minor axis height $d$, and $X_k$ = $[x_k,y_k]^T$ is the position of the $k^{th}$ offensive player for $k \in \{1,2\}$. Furthermore, the parameters in Eq. \ref{P model} are defined as follows. \begin{equation} \label{x0 equation} X_o = \dfrac{X_1 + X_2}{2}, l = ||X_2 - X_1|| \end{equation} where $l$ is the length of the major axis, and $ \theta $ is defined as below \begin{eqnarray} \label{theta equation} \theta &=& \rm{arctan}\left( \dfrac{y_2 - y_1}{x_2-x_1} \right), \\ \label{cos equation} \rm{cos}(\theta) &=& \dfrac{x_2-x_1}{l}, \\ \label{sin equation} \rm{sin}(\theta) &=& \dfrac{y_2-y_1}{l} \end{eqnarray} With the above ellipsoidal model, $h$ is non-negative only when the defensive player is inside the ellipse, between the two offensive players. Having defined the new ellipsoidal model, we rewrite the resulting constraint as Eq. \ref{H big view} by applying it to Eq. \ref{constraint_new}.
\begin{equation} \begin{split} \label{H big view} \dot{h}(X,X1,X2) &= \dfrac{\partial h}{\partial X}\dot{X}+\dfrac{\partial h}{\partial X_1}\dot{X_1}+\dfrac{\partial h}{\partial X_2}\dot{X_2} \\ &= \dfrac{\partial h}{\partial X}u+\dfrac{\partial h}{\partial X_1}\dot{X_1}+\dfrac{\partial h}{\partial X_2}\dot{X_2}\\ & \geq -\alpha(h(X,X_1,X_2)) \end{split} \end{equation} The derivative of h can be expanded into the following equation. \begin{equation} \begin{split} \label{H expand} \dot{h}(X,X1,X2) &= a(\dot{x_1}+\dot{x_2}-2\dot{x})\\ &+ b(\dot{x_1}-\dot{x_2})\\ &+ c(\dot{y_1}+\dot{y_2}-2\dot{y})\\ &+ d(\dot{y_2}-\dot{y_1}) \end{split} \end{equation} where a,b,c and d are \begin{equation} \begin{split} \label{a_h} a &= \Bigg[ \left(\dfrac{\rm{cos}^2(\theta)}{(l/2)^2} + \dfrac{\rm{sin}^2(\theta)}{(d/2)^2} \right)(x-x_0) \\ &+ \rm{cos}\theta \rm{sin} \theta \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \right)(y-y_0) \Bigg] \end{split} \end{equation} \begin{equation} \begin{split} \label{b_h} b &= \Bigg[ - \bigg( \left( \dfrac{1}{(d/2)^2} - \dfrac{1}{(l/2)^2}\right) \rm{sin}(2\theta) \left( \dfrac{y_2 - y_1 }{l^2} \right) \\ &+ \dfrac{\rm{cos}^2(\theta) (x_1 - x_2)}{2(l/2)^4} \bigg)(x-x_0)^2 \\ &- 2\bigg(\rm{sin} (2\theta) \left( \dfrac{4(x_2-x_1)}{l^4} \right) \\ &+ \bigg( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \bigg) \rm{cos}(2\theta) \left( \dfrac{y_2 - y_1}{l^2} \right) \bigg)\\ &(x-x_0)(y-y_0) \\ &- \bigg( \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2}\right) \rm{sin}(2\theta) \left( \dfrac{y_2 - y_1 }{l^2} \right) \\ &- \dfrac{\rm{sin}^2(\theta) (x_1 - x_2)}{2(l/2)^4} \bigg)(y-y_0)^2 \Bigg] \end{split} \end{equation} \begin{equation} \begin{split} \label{c_h} c &= \Bigg[\rm{cos}\theta \rm{sin} \theta \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \right)(x-x_0) \\ &+ \left( \dfrac{\rm{sin}^2(\theta)}{(l/2)^2} + \dfrac{\rm{cos}^2(\theta)}{(d/2)^2} \right) (y-y_0) \Bigg] \end{split} \end{equation} \begin{equation} \begin{split} \label{d_h} d &= \Bigg[ 
-\bigg(\left( \dfrac{1}{(d/2)^2} - \dfrac{1}{(l/2)^2}\right) \rm{sin}(2\theta) \left( \dfrac{x_2 - x_1 }{l^2} \right) \\ & - \dfrac{\rm{cos}^2(\theta) (y_2 - y_1)}{2(l/2)^4} \bigg)(x-x_0)^2 \\ &- 2\bigg( \rm{sin} (2 \theta) \left( \dfrac{4(y_1-y_2)}{l^4} \right) \\ & + \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2} \right) \rm{cos} (2\theta) \left( \dfrac{x_2 - x_1}{l^2} \right) \bigg) \\ & (x-x_0)(y-y_0) \\ &- \bigg( \left( \dfrac{1}{(l/2)^2} - \dfrac{1}{(d/2)^2}\right) \rm{sin}(2\theta) \left( \dfrac{x_2 - x_1 }{l^2} \right) \\ &- \dfrac{\rm{sin}^2(\theta) (y_2 - y_1)}{2(l/2)^4} \bigg)(y-y_0)^2 \Bigg] \end{split} \end{equation} Lastly, in our simulation, $d$ is set to 0.01, and the $\alpha$ function in the constraint equation is set to the identity, i.e., $\alpha(h) = h$. When performing a pass cut, $X_2$ is always the puck holder. Thus, the ellipsoidal CBF only needs two inputs: the position of the defensive player $X$ and the position of the offensive player to be blocked, $X_1$. \subsection{Online selection of CBFs} In the following, we explain the algorithm used in the simulation to selectively choose the pairs of a defensive player and an offensive player to which the CBF is applied to perform a pass cut. A pass cut is an action that blocks the puck holder from passing the puck to other players. Alg.\ref{alg:selective_alg} is designed based on this motivation. Specifically, we aim to apply the CBF when there is a possibility that offensive players inside the house area can receive the puck and subsequently score a goal. Therefore, we design an algorithm that only selects defensive players inside the house area to perform a pass cut and prioritizes blocking offensive players closer to the goal over those further away. \begin{algorithm}[ht] \caption{Online selection CBF algorithm} \label{alg:selective_alg} \hspace*{\algorithmicindent} \textbf{Result} $P_{passcut}$, a set of pairs of defensive players' indexes and offensive players' indexes to perform pass cuts.
\begin{algorithmic}[1] \State{Identify offensive players except the puck holder in the house area as $O_{house}$ and defensive players in the house area as $D_{house}$.} \State{Let $P_{passcut}$ = $\varnothing$} \While{$O_{house} \neq \varnothing $ and $D_{house} \neq \varnothing $ } \State{Select $i = \mathop{\rm argmin}\limits_{j \in O_{house}} ||x_j-x_{goal}||$ and identify $d_{i} = \mathop{\rm argmax}\limits_{d \in D_{house}} h(P_d,P_i,P_{puck})$} \State{Add $(d_{i},i)$ to $P_{passcut}$} \State{Remove $d_i$ from $D_{house}$ and $i$ from $O_{house}$} \EndWhile \end{algorithmic} \end{algorithm}
From Alg.\ref{alg:selective_alg}, we compute the pairs of offensive players that need to be blocked and the corresponding defensive players that perform the pass cut. In the next section, we apply the ellipsoidal CBF to each pair in the set $P_{passcut}$, preventing offensive players in the house area from receiving the puck from the puck holder.
\subsection{Comprehensive algorithm}
\begin{algorithm}[ht] \caption{Comprehensive algorithm} \label{alg:comprehensive_alg} \hspace*{\algorithmicindent} \textbf{Result} $u$, a set of inputs for defensive players $\{1,2,\dots,5\}$ \begin{algorithmic}[1] \State{Initialize the positions of defensive players $i \in D = \{1,2,\dots,5\}$ as $X_{di} \in Q$} \State{Let the position of offensive player $j \in O = \{1,2,\dots,n\}$ be $X_{oj} \in Q$} \State{Let the weight function be $\phi(x,y)$, with the design and implementation stated in Section 5.1, using $X_{oj}$} \State{Compute the Voronoi cell $C_i$ using $X_{di}$ and $\phi(x,y)$ in Eq. \ref{voronoi}} \State{Compute $\rm{cent}(C_i)$ using $C_i$ and $\phi(x,y)$ in Eq. \ref{cent_voronoi} $\forall{i} \in D$ } \State{Compute $u^{nom}_i$ using $\rm{cent}(C_i)$ and $\phi(x,y)$ in Eq. 
\ref{control_input} with $k$ = 1 $\forall{i} \in D$ } \State{Let $P_{passcut}$ be the output of Alg. \ref{alg:selective_alg}.} \For{$i \gets 1$ to $5$} \If{$i$ is the first index of a pair $p \in P_{passcut}$} \State{Let $k$ be the second index of the pair $p$} \State Compute $u_i$ by applying the ellipsoidal CBF with $x_i$ and the corresponding $x_k$, using quadratic programming to solve Eq. \ref{condition} subject to Eq. \ref{constraint_new}. \Else \State $u_i$ = $u^{nom}_i$ \EndIf \If{$||u_i||$ $>$ 3} \State $u_i$ = $\dfrac{u_i}{||u_i||}*3$ \EndIf \EndFor \end{algorithmic} \end{algorithm}
In the following, we describe the comprehensive algorithm in Alg.\ref{alg:comprehensive_alg}, which is used to calculate $u_i$ for each defensive player by applying the theorem, specifications, and algorithms stated above. After computing the input $u_i$ for each defensive player with Alg.\ref{alg:comprehensive_alg}, we apply the control inputs to the system and update the positions of the defensive players. Then, we repeat this process with the new offensive players' positions in the following frame until the simulation ends.
\section{SIMULATION STUDY}
In this section, we explain the method used to obtain the data from the real scenes in this paper. Then, we describe the evaluation method that we will use to evaluate the simulation results. Lastly, we show the results of the simulation using the real scenes' data.
\subsection{Data extraction from real scene}
For the data, we use ice hockey game scenes from the 2018 Winter Olympics. We selected two defensive scenes in which we can track the offensive players almost all the time, to obtain the most accurate input for the simulation. The scenes were chosen among several scenes proposed by Takahito Suzuki, a retired head coach of the Japanese national team with whom we cooperate in this research. 
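The pairing step of Alg.\ref{alg:selective_alg} and the per-defender control step of Alg.\ref{alg:comprehensive_alg} can be sketched as follows. This is a minimal sketch under stated assumptions: single-integrator defender dynamics, $\alpha(h)=h$ scaled by a gain, the barrier $h$ supplied as a callable with the puck holder's position folded in, and the closed-form solution of the single-constraint QP; the function names are ours and not taken from the actual simulation code.

```python
import numpy as np

def select_passcut_pairs(off_house, def_house, positions, x_goal, h):
    """Greedy pairing of Alg. 2: pick the attacker closest to the goal,
    match it with the defender maximising the barrier value h, and
    repeat until either set is exhausted."""
    O, D = set(off_house), set(def_house)
    pairs = []
    while O and D:
        i = min(O, key=lambda j: np.linalg.norm(positions[j] - x_goal))
        d = max(D, key=lambda k: h(positions[k], positions[i]))
        pairs.append((d, i))
        O.remove(i)
        D.remove(d)
    return pairs

def cbf_filter(u_nom, grad_h, h_val, alpha_gain=1.0, u_max=3.0):
    """Closed-form solution of the QP used in Alg. 3 for one affine CBF
    constraint grad_h . u >= -alpha_gain * h_val, followed by the norm
    saturation ||u|| <= 3 applied to every defender."""
    slack = grad_h @ u_nom + alpha_gain * h_val
    u = np.array(u_nom, dtype=float)
    if slack < 0:  # nominal input violates the constraint -> project
        u -= slack * grad_h / (grad_h @ grad_h)
    norm = np.linalg.norm(u)
    if norm > u_max:  # input saturation step of Alg. 3
        u *= u_max / norm
    return u
```

When the nominal input already satisfies the CBF constraint, the QP leaves it unchanged; otherwise it projects $u^{nom}_i$ onto the constraint boundary before the saturation is applied.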
The scenes used in this simulation are the following: \begin{itemize} \item Canada and the Czech Republic's game \item Canada and Finland's game \end{itemize} To extract the data from the scenes, we first tried the techniques of \cite{NEIL2020}. However, we discovered two problems with these techniques. First, we were not able to accurately track players' poses using the pose estimation tool, due to frequent collisions between players. Second, the court parsing process could not track the template images needed to extract the camera movement, because the number of template images in these specific scenes is not high enough. As a result, we decide to extract the data manually. We extract the data frame by frame, visually observing the position and movement of the players in the scenes. An example of the data extracted from Canada and Finland's game scene can be seen in Fig. \ref{extract_data}. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth,clip]{images/extraction.png} \caption{Example of extracted data from real scenes} \label{extract_data} \end{figure} In Fig. \ref{extract_data}, each coloured line represents one offensive player's motion throughout the scene. \subsection{Evaluation Method} After surveying many papers and journals related to the evaluation of ice hockey defensive strategies, we found that there is no evaluation metric that can assess how good a generated formation is. The current evaluation metrics are based on statistical data, such as \cite{STAT2017} and \cite{STAT2018}. Thus, we decide to use an empirical method, collaborating with Takahito Suzuki to evaluate the generated formations. \subsection{Results} This section shows the results of simulating the defensive formation using the real data previously mentioned in this section. 
We use blue dots to represent defensive players, an orange dot to represent the puck holder, and red dots to represent the other offensive players. This section also shows the simulation results using different $p$ values, defined in Eq.\ref{distance_yellow}. The screenshots of the simulation from Canada and the Czech Republic's game using $p$ = 2 can be seen in Fig. \ref{Canada and Czesh1}-\ref{Canada and Czesh3}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p1_1.png} \caption{The screenshots of the simulation from Canada and the Czech Republic's game 1} \label{Canada and Czesh1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p1_2.png} \caption{The screenshots of the simulation from Canada and the Czech Republic's game 2} \label{Canada and Czesh2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p1_3.png} \caption{The screenshots of the simulation from Canada and the Czech Republic's game 3} \label{Canada and Czesh3} \end{figure} \newpage The screenshots of the simulation from Canada and Finland's game using $p$ = 2 can be seen in Fig. \ref{Canada and Finland1} - \ref{Canada and Finland3}. 
\begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p2_1.png} \caption{The screenshots of the simulation from Canada and Finland's game 1} \label{Canada and Finland1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p2_2.png} \caption{The screenshots of the simulation from Canada and Finland's game 2} \label{Canada and Finland2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p2_3.png} \caption{The screenshots of the simulation from Canada and Finland's game 3} \label{Canada and Finland3} \end{figure} Both simulations demonstrate the defensive movement of defensive players defending the area in front of the goal while also moving along with the offensive players. These results are also evaluated by the professional, as mentioned in Section 6.2. He assesses that the generated movements of the defensive players are close to those of the actual players. Thus, we conclude that we successfully generate the defensive formations for these scenes. Furthermore, Fig.\ref{p val 1}-\ref{p val 3} show the results of the simulation using Canada and Finland's game with different $p$ values (0.5, 3, and 10 from top to bottom, respectively). 
\begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p3_01.png} \caption{The screenshots of the simulation from Canada and Finland's game using different $p$ values (0.5, 3, and 10 from top to bottom) 1} \label{p val 1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p3_11.png} \caption{The screenshots of the simulation from Canada and Finland's game using different $p$ values (0.5, 3, and 10 from top to bottom) 2} \label{p val 2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth,clip]{images/p3_21.png} \caption{The screenshots of the simulation from Canada and Finland's game using different $p$ values (0.5, 3, and 10 from top to bottom) 3} \label{p val 3} \end{figure} Fig. \ref{p val 1}-\ref{p val 3} show that different $p$ values give different results. With a lower $p$ value, the importance of the house area is relatively low, and defensive players instead move close to each offensive player. A low $p$ value thus leads to movements that are close to a peer-to-peer defensive strategy in the real game. On the other hand, when $p$ grows larger, the weight on the house area becomes higher, increasing the area's significance. Consequently, defensive players exhibit behavior that resembles a zone defense strategy in the real game, where defensive players move to cover the important area rather than the players. We can also observe intermediate strategies when we set $p = 3$. \section{CONCLUSION} In conclusion, we further developed the control logic of \cite{OOTA2020} and \cite{MDIC2021} by changing the weight function's design in coverage control and improving the CBF model to an ellipsoidal CBF. Furthermore, we confirm that we can generate the defensive formation of ice hockey players using the new logic for two general scenes without adjusting the weight function specifically for each scene. 
We also discover that we can produce different strategies by changing the weight ratio parameter $p$, which affects the ratio between the players' weights and the area's weight in the house area. By adjusting $p$, we can generate several strategies, such as a peer-to-peer strategy when $p$ is small or a zone defense strategy when $p$ is large. We can also observe intermediate strategies by tuning the parameter $p$ to lie between the two strategies. \section{DISCUSSION} In this section, we discuss the aspects of this work that can be further improved. \begin{enumerate} \item We need to test the simulation with more scenes. In this paper, we have only tested the system with two general scenes. It is necessary to investigate the new logic further with several more scenes to confirm our assumptions. \item We need to automate the extraction of data from the videos. In this paper, we extract the input manually from the video. This method can be inaccurate, as a human estimates the data. Developing a technique to automate this process is necessary to obtain accurate input for testing the logic. \item We need to take the puck position into consideration in future simulations. In the current simulation, we do not take the puck position into account. Although we designate the offensive player who holds the puck as the puck holder, there are times when the puck is not with any player, such as when the puck holder passes the puck to another player. Thus, the generated movement might not accurately reflect the actual game movement. \item We need to create an evaluation metric to standardize our evaluation. In this work, we asked a professional to evaluate our experimental results. Given his ample experience in the field, his evaluation is credible. However, there remains a need for a systematic evaluation metric for assessing defensive movements. \end{enumerate}
\section{Introduction} Professional sports teams have started to gain a competitive advantage in recent decades by using advanced data analysis. However, soccer has been a late bloomer in integrating analytics, mainly due to the difficulty of making sense of the game's complex spatiotemporal relationships. To address the nonstop flow of questions that coaching staff deal with daily, we require a flexible framework that can capture the complex spatial and contextual factors that rule the game while providing practical interpretations of real game situations. This paper addresses the problem of estimating the expected value of soccer possessions (EPV) and proposes a decomposed learning approach that allows us to obtain fine-grained visual interpretations from neural network-based components.\\ The EPV is essentially an estimate of which team will score the next goal, given all the spatiotemporal information available at any given time. Let $G \in \{-1,1\}$, where the values represent one or the other team scoring next, respectively; the EPV corresponds to the expected value of $G$. The frame-by-frame estimation of EPV constitutes a one-dimensional time series that provides an intuitive description of how the possession value changes in time, as presented in Figure \ref{fig:epv}. While this value alone can provide precise information about the impact of observed actions, it does not provide sufficient practical insight into either the factors that make it fluctuate or which other advantageous actions could be taken to boost EPV further. To reach this granularity level, we formulate EPV as a composition of the expectations of three different on-ball actions: passes, ball drives, and shots. Each of these components is estimated separately, producing an ensemble of models whose outputs can be merged to produce a single EPV estimate. 
Additionally, by inspecting each model, we can obtain detailed insight on the impact that each of the components has on the final EPV estimation.\\ \begin{figure*} \includegraphics[width=1\textwidth]{images/epv_flow_res} \caption{Evolution of the expected possession value (EPV) of FC Barcelona during a match against Real Betis in La Liga season 2019/2020.} \label{fig:epv} \end{figure*} We propose two different approaches to learn each of the separated models, depending on whether we need to consider each possible location on the field or just single locations. We propose several deep neural architectures capable of producing full prediction surfaces from low-level features for the first case. We show that it is possible to learn these surfaces from very challenging learning set-ups where only a single-location ground-truth correspondence is available for estimating the whole surface. For the second case, we use shallow neural networks on top of a broad set of novel spatial and contextual features. From a practical standpoint, we are splitting out a complex model into more easily understandable parts so the practitioner can both understand the factors that produce the final estimate and evaluate the effect that other possible actions may have had. This type of modeling allows for easier integration of complex prediction models into the decision-making process of groups of individuals with a non-scientific background. Also, each of the components can be used individually, multiplying the number of potential applications. The main contributions of this work are the following: \begin{itemize} \item We propose a framework for estimating the instantaneous expected outcome of any soccer possession, which allows us to provide professional soccer coaches with rich numerical and visual performance metrics. 
\item We show that by decomposing the target EPV expression into a series of sub-components and estimating these separately, we can obtain accurate and calibrated estimates and provide a framework with greater interpretability than single-model approaches \citep{cervone2016multiresolution,bransen2018measuring}. \item We develop a series of deep learning architectures to estimate the expected possession value surface of potential passes, pass success probability, pass selection probability surfaces, and show these three networks provide both accurate and calibrated surface estimates. \item We present a handful of novel practical applications in soccer that are directly derived from this framework. \end{itemize} \section{Background} The evaluation of individual actions has been recently gaining attention in soccer analytics research. Given the relatively low frequency of soccer goals compared to match duration and the frequency of other events such as passes and turnovers, it becomes challenging to evaluate individual actions within a match. Several different approaches have been attempted to learn a valuation function for both on-ball and off-ball events related to goal-scoring.\\ Handcrafted features based on the opinion of a committee of soccer experts have been used to quantify the likelihood of scoring in a continuous-time range during a match \citep{link2016real}. Another approach uses observed events' locations to estimate the value of individual actions during the development of possessions \citep{decroos2018actions}. Here, the game state is represented as a finite set of consecutive observed discrete actions and, a Bernoulli distributed outcome variable is estimated through standard supervised machine learning algorithms. 
In a similar approach, possession sequences are clustered based on dynamic time warping distance, and an XGBoost \citep{chen2016xgboost} model is trained to predict the expected goal value of the sequence, assuming it ends with a shot attempt \citep{bransen2018measuring}. \cite{gyarmati2016qpass}, calculate the value of a pass as the difference of field value between different locations when a ball transition between these occurs. \cite{rudd2011framework} uses Markov chains to estimate the expected possession value based on individual on-ball actions and a discrete transition matrix of 39 states, including zonal location, defensive state, set pieces, and two absorbing states (goal or end of possession). A similar approach named expected threat uses Markov chains and a coarsened representation of field locations to derive the expected goal value of transitioning between discrete locations \citep{xtkarun}. The estimation of a shot's expectation within the next 10 seconds of a given pass event has also been used to estimate a pass's reward, based on spatial and contextual information \citep{power2017not}. Beyond the quantification of on-ball actions, off-ball position quality has also been quantified, based on the goal expectation. In \cite{spearmanbeyond}, a physics-based statistical model is designed to quantify the quality of players' off-ball positioning based on the positional characteristics at the time of the action that precedes a goal-scoring opportunity. All of these previous attempts on quantifying action value in soccer assume a series of constraints that reduce the scope and reach of the solution. Some of the limitations of these past work include simplified representations of event data (consisting of merely the location and time of on-ball actions), using strongly handcrafted rule-based systems, or focusing exclusively on one specific type of action. 
However, a comprehensive EPV framework that considers both the full spatial extent of the soccer field and the space-time dynamics of the 22 players and the ball has not yet been proposed and fully validated. In this work, we provide such a framework and go one step further estimating the added value of observed actions by providing an approach for estimating the expected value of the possession at any time instance.\\ Action evaluation has also been approached in other sports such as basketball and ice-hockey by using spatiotemporal data. The expected possession value of basketball possessions was estimated through a multiresolution process combining macro-transitions (transitions between states following coarsened representation of the game state) and micro-transitions (likelihood of player-level actions), capturing the variations between actions, players, and court space \citep{cervone2016multiresolution}. Also, deep reinforcement learning has been used for estimating an action-value function from event data of professional ice-hockey games \citep{liu2018deep}. Here, a long short-term memory deep network is trained to capture complex time-dependent contextual features from a set of low-level input information extracted from consecutive on-puck events. \section{Structured modeling} In this study, we aim to provide a model for estimating soccer possessions' expected outcomes at any given time. While the single EPV estimate has practical value itself, we propose a structured modeling approach where the EPV is decomposed into a series of subcomponents. Each of these components can be estimated separately, providing the model with greater adaptability to component-specific problems and facilitating the final estimate's interpretation. \subsection{EPV as a Markov decision process}\label{sec:epv_markov} This problem can be framed as a Markov decision process (MDP). 
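As a toy illustration of this state-value estimation under a fixed average policy, consider a coarse absorbing Markov chain in the spirit of the discrete models discussed in the background section. The states, transition probabilities, and variable names below are invented purely for illustration and are unrelated to the continuous state space used in this work:

```python
import numpy as np

# Hypothetical transient states: 0 = own half, 1 = attacking half.
# Absorbing outcomes: column 2 = goal scored (G = +1),
# column 3 = goal conceded next (G = -1). Probabilities are invented,
# standing in for an average policy estimated from historical play.
P = np.array([[0.60, 0.30, 0.01, 0.09],
              [0.20, 0.55, 0.10, 0.15]])
Q = P[:, :2]               # transient -> transient block
R = P[:, 2:]               # transient -> absorbing block
g = np.array([1.0, -1.0])  # rewards of the absorbing states

# EPV(s) = E[G | s] is the fixed point v = Q v + R g,
# i.e., the solution of the linear system (I - Q) v = R g.
v = np.linalg.solve(np.eye(2) - Q, R @ g)
```

Here the attacking-half state receives the higher (less negative) value, and both values lie in $[-1,1]$, as required for an expectation of $G$.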
Let a player with possession of the ball be an agent that can take any action of a discrete set $A$ from any state of the set of all possible states $S$; we aim to learn the state-value function $EPV(s)$, defined as the expected return from state $s$, based on a policy $\pi(s,a)$, which defines the probability of taking action $a$ at state $s$. In contrast with typical MDP applications, our aim is not to find the optimal policy $\pi$, but to estimate the expected possession value (EPV) from an average policy learned from historical data.\\ Let $\Gamma$ be the set of all possible soccer possessions, and $r \in \Gamma$ represents the full path of a specific possession. Let $\Psi$ be a high dimensional space, including all the spatiotemporal information and a series of annotated events, $T_t(r) \in \Psi$ is a snapshot of the spatiotemporal data after $t$ seconds from the start of the possession. And let $G(r)$ be the outcome of a possession $w$, where $G(r) \in \{-1,1\}$, with $1$ being a goal is scored and $-1$ being a goal is conceded.\\ \begin{definition}\label{def:epv_general} The expected possession value of a soccer possession at time $t$ is $EPV_t = \E[G| T_t]$ \end{definition} This initial definition shares similarities with previous approaches in other sports, such as basketball \citep{cervone2016multiresolution} and American football \citep{yurko2019going}, from which part of the notation used in this section is inspired. Following Definition \ref{def:epv_general}, we can observe that EPV is an integration over all the future paths a possession can take at time $t$, given the available spatiotemporal information at that time, $T_t$. We employ player tracking data consisting of the location of the 22 players and the ball, usually provided at a frequency ranging from 10Hz to 25Hz, and captured using computer-vision algorithms on top of videos of professional soccer matches. 
We will assume that tracking data is accompanied by and synchronized with event data, consisting of annotated events observed during the match, indicating the location, time, and other possible tags. Since $\Psi$ is an infinite set of possible tracking data snapshots, this modeling approach defines a continuous state space.\\ \subsection{A decomposed model}\label{sec:decomposed} In order to obtain the desired structured modeling of EPV described in Section \ref{sec:epv_markov}, we will further decompose Definition \ref{def:epv_general} following the law of total expectation and considering the set of possible actions that can be taken at any given time. We assume that the space of possible actions $A=\{\rho, \delta,\varsigma\}$ is a discrete set where $\rho$, $\delta$, and $\varsigma$ represent pass, ball drive, and shot attempt actions, respectively. We can rewrite Definition \ref{def:epv_general} as in Equation \ref{eq:action_epv}. \begin{equation}\label{eq:action_epv} \begin{split} EPV_t =\sum_{a \in A}\E[G| A=a, T_t]\overbrace{\PP(A=a | T_t)}^{\parbox{7em}{\footnotesize\centering Action selection probability}} \end{split} \end{equation} Additionally, to consider that passes can go anywhere on the field, we define $D_t$ to be the selected pass destination location at time $t$ and $\PP(D_t| T_t)$ to be a transition probability model for passes. Let $L$ be the set of all possible locations on the soccer field, so that $D_t \in L$. On the other hand, we assume that ball drives ($\delta$) and shots ($\varsigma$) have a single possible destination location (the expected player location in one second and the goal line center, respectively). Following this, we can rewrite Definition \ref{def:epv_general} as presented in Equation \ref{eq:structured_epv}. 
\begin{equation}\label{eq:structured_epv} \begin{split} EPV_t =(\sum_{l \in L} \overbrace{\E[G| A=\rho, D_t = l, T_t]}^{\parbox{8em}{\footnotesize\centering Joint expected value surface of passes}} \overbrace{\PP(D_t=l |A=\rho, T_t)}^{\parbox{8em}{\footnotesize\centering Pass selection probability}}) \PP(A=\rho | T_t) \\ + \overbrace{\E[G| A=\delta, T_t]}^{\parbox{6em}{\footnotesize\centering Expected value of ball drives}} \PP(A=\delta | T_t) \\ + \overbrace{\E[G| A=\varsigma, T_t]}^{\parbox{6em}{\footnotesize\centering Expected value from shots}} \PP(A=\varsigma | T_t) \end{split} \end{equation}
The expected value of passing actions, $\E[G| D, A=\rho]$, can be further extended to include the two scenarios of producing a successful or a missed pass (turnover). We model the outcome of a pass as $O_{\rho}$, which takes a value of $1$ when a pass is successful or $0$ in case of a turnover. We can then rewrite this expression as in Equation \ref{eq:pass_expectation}. \begin{equation}\label{eq:pass_expectation} \begin{split} \E[G| A=\rho, D_t, T_t] = \overbrace{\E[G| A=\rho, O_{\rho}=1, D_t, T_t]}^{\parbox{11em}{\footnotesize\centering Expected value of successful/missed passes}} \overbrace{\PP(O_{\rho}=1 | A=\rho, D_t, T_t)}^{\parbox{9em}{\footnotesize\centering Probability of successful/missed passes}}\\ + \E[G| A=\rho, O_{\rho}=0, D_t, T_t]\PP(O_{\rho}=0 | A=\rho, D_t, T_t) \end{split} \end{equation}
Equation \ref{eq:ball_drive_expectation} presents an analogous definition for ball drives, where $O_{\delta}$ is a random variable taking values $1$ or $0$, representing a successful ball drive or a loss of possession following that ball drive, which we will refer to as a missed ball drive. 
\begin{equation}\label{eq:ball_drive_expectation} \begin{split} \E[G| A=\delta, T_t] = \overbrace{\E[G| A=\delta, O_{\delta}=1, T_t]}^{\parbox{8em}{\footnotesize\centering Expected value of successful/missed ball drives}} \overbrace{\PP(O_{\delta}=1 | A=\delta, T_t)}^{\parbox{8em}{\footnotesize\centering Probability of successful/missed ball drives}}\\ + \E[G| A=\delta, O_{\delta}=0, T_t]\PP(O_{\delta}=0 | A=\delta, T_t) \end{split} \end{equation} Finally, the expression $\E[G|A=\varsigma]$ is equivalent to an expected goals model, a popular metric in soccer analytics \citep{lucey2014quality,eggels2016expected} which models the expectation of scoring a goal based on shot attempts. In Figure \ref{fig:all_layers_epv} we present how the outputs of the different components presented in this section are combined to produce a single EPV estimation, while also providing numerical and visual information of how each part of the model impacts the final value. \begin{figure*} \includegraphics[width=1\textwidth]{images/all_layers_epv} \caption{Diagram representing the estimation of the expected possession value (EPV) for a given game situation through the composition of independently trained models. The final EPV estimation of $0.0239$ is produced by combining the expected value of three possible actions the player in possession of the ball can take (pass, ball drive, and shot) weighted by the likelihood of those actions being selected. 
Both pass expectation and probability are modeled to consider every possible location of the field as a destination; thus the diagram presents the predicted surfaces for both successful and unsuccessful potential passes, as well as the surface of destination location likelihood.} \label{fig:all_layers_epv} \end{figure*} \section{Spatiotemporal feature extraction}\label{sec:feature_extraction} Each component of the decomposed EPV formulation presents a challenging task and requires a sufficiently comprehensive representation of the game state to produce accurate estimates. We build these state representations from a wide set of low-level and fine-grained features extracted from tracking data (see Section \ref{sec:epv_markov} for the definition of tracking data). While low-level features are straightforwardly obtained from this data (e.g., players' location and speed), fine-grained features are built through either statistical models or handcrafted algorithms developed in collaboration with a group of soccer match analysts from FC Barcelona. Figure \ref{fig:features_diagram} presents a visual representation of a game situation where we can observe the available players' and ball locations, as well as a subset of features derived from that tracking data snapshot. Conceptually, we split the features into two main groups: spatial features and contextual features. The two feature types are described in Sections \ref{sec:spatial_features} and \ref{sec:contextual_features}, respectively. The full set of features and their usage within the different models presented in this work are detailed in Appendix \ref{app:feature_set}. \begin{figure*}[h!] \includegraphics[width=1\textwidth]{images/features_diagram} \caption{Visual representation of a tracking data snapshot of spatial and contextual features in a soccer match situation. Yellow and blue shaded dots represent players of the attacking and defending team, respectively, while the green dot represents the ball location. 
The red and blue surface represents the pitch control of the attacking team along the field. The grey rectangle covering the yellow dots represents the opponent's formation block. The green vertical lines represent the defending team's vertical dynamic pressure lines, while the polygons with solid yellow lines represent the players clustered in each pressure line. The black dotted rectangles represent the relative locations between dynamic pressure lines. Dotted yellow lines and associated text describe the main extracted features.} \label{fig:features_diagram} \end{figure*} \subsection{Spatial Features}\label{sec:spatial_features} We consider spatial features to be those directly derived from the spatial location of the players and the ball in a given time range. These can be obtained for any game situation regardless of the context, and comprise mainly physical and spatial information. Table \ref{table:spatial_features} details the set of concepts from which the specific features listed in Appendix \ref{app:feature_set} are derived. The main spatial features obtained from tracking data are related to the location of players from both teams, the velocity vector of each player, the ball's location, and the location of the opponent's goal at any time instance. From the players' spatial locations, we produce a series of features related to the control of space and players' density along the field. The statistical models used for pitch control and pitch influence evaluation are detailed in Appendix \ref{app:pitch_control}. \begin{table}[h!] \begin{tabular}{lp{8cm}} \hline\noalign{\smallskip} Concept type & Description \\ \noalign{\smallskip}\hline\noalign{\smallskip} (x,y) location & Location of a player, the ball, or attempted action, normalized in the [0,1] range according to the pitch's dimensions.\\ Pitch control & Probability of controlling the ball in a specific location. \\ Pitch influence & Degree of influence of a set of players in a specific location. \\ Distance between locations & Distance in meters between two locations.\\ Angle between locations & Angle in degrees between two locations.\\ Player's velocity & Player's velocity vector in the last second. 
\\ Distance between locations & Distance in meters between two locations.\\ Angle between locations & Angle in degrees between two locations.\\ Player's velocity & Player's velocity vector in the last second.\\ \noalign{\smallskip}\hline \end{tabular} \caption{Description of a set of spatial concepts derived from tracking data.} \label{table:spatial_features} \end{table} \subsection{Contextual Features}\label{sec:contextual_features} To provide a more comprehensive state representation, we include a series of features derived from soccer-specific knowledge, which provide contextual information to the model. Table \ref{table:contextual_features} presents the main concepts from which multiple contextual features are derived. \begin{table}[h!] \begin{tabular}{lp{8cm}} \hline\noalign{\smallskip} Concept type & Description \\ \noalign{\smallskip}\hline\noalign{\smallskip} Possession & Possessions' start and end times are identified to segment each match into episodes or sequences of actions. \\ Dynamic pressure lines & Relative positioning of players according to the team's or the opponent's current formation. These formations change dynamically and are calculated in time ranges of a few seconds.\\ Outplayed players & Number of players that are surpassed after an action is attempted. \\ Interceptability & Features related to the likelihood of intercepting the ball. \\ Baseline event-based models & Models built on top of event-data, which are used as a baseline to enrich the learning of tracking data-based models.\\ \noalign{\smallskip}\hline \end{tabular} \caption{Description of a set of contextual concepts derived from tracking data.} \label{table:contextual_features} \end{table} The concept of dynamic pressure lines refers to players being aligned with their teammates within different alignment groups.
For example, a typical conceptualization of pressure lines in soccer would be the groups formed by the defenders, the midfielders, and the attackers, which tend to be aligned to keep a consistent formation. The details on the calculation of dynamic pressure lines are presented in Appendix \ref{app:dynamic_pressure_lines}. By identifying the pressure lines, we can obtain every player's opponent-relative location, which provides high-level information about players' expected behavior. For example, when a player controls the ball and is behind the opponent's first pressure line, we would expect a different pressure behavior and turnover risk than when the ball is close to the third pressure line and the goal. Also, the football experts that accompanied this study considered that passes breaking pressure lines significantly increase the goal expectation at the end of the possession. \\ From the concept of outplayed players, we can derive features such as the number of opponent players to overcome after a given pass is attempted or the number of teammates in front of or behind the ball, among many similar derivatives. In combination with the opponent's formation block location, we can obtain information about whether the pass is headed towards the inside or outside of the formation block and how many players are to be surpassed. Intuitively, a pass that outplays several players and is headed towards the inside of the opponent's block is more likely to produce an increase in EPV than a backward pass directed outside the opponent's block that adds two more opponent players in front of the ball. On the other hand, the interceptability concept is expected to play an essential role in capturing opponents' spatial influence near a shooting option, allowing us to produce a more detailed expected goals model.
Mainly, we derive features related to the number of players pressing the shooter closely and the number of players in the triangle formed between the shooter and the posts.\\ The described spatial and contextual features represent the main building blocks for deriving the set of features used for each implemented model. In Section \ref{sec:inference}, we describe in great detail the characteristics of these models. \section{Separated component inference}\label{sec:inference} In this section we describe in detail the approaches followed for estimating each of the components described in Equations \ref{eq:structured_epv}, \ref{eq:pass_expectation} and \ref{eq:ball_drive_expectation}. In general, we use function approximation methods to learn models for these components from spatiotemporal data. Specifically, we want to approximate some function $f^{*}$ that maps a set of features $x$ to an outcome $y$, such that $y=f^{*}(x)$. To do this, we learn the mapping $y=f(x;\theta)$, estimating the values of a set of parameters $\theta$ that result in an approximation of $f^{*}$.\\ Customized convolutional neural network architectures are used for estimating probability surfaces for the components involving passes, such as pass success probability, the expected possession value of passes, and the field-wide pass selection surface. Standard shallow neural networks are used to estimate the ball drive probability, the expected possession value from ball drives and shots, and the action selection probability components. This section describes the selection of features $x$, observed values $y$, and model parameters $\theta$ for each component. \subsection{Estimating pass impact at every location on the field} \label{sec:pass_models_inference} One of the most significant challenges when modeling passes in soccer is that, in practice, passes can go anywhere on the field.
Previous attempts at quantifying pass success probability and the expected value of passes in both soccer and basketball assume that the passing options available to a given player are limited to the number of teammates on the field, centered at their locations at the time of the pass \citep{power2017not,cervone2016multiresolution,hubavcek2018deep}. However, in order to accurately estimate the impact of passes in soccer (a key element for estimating the future pathways of a possession), we need to be able to make sense of the spatial and contextual information that influences the selection, accuracy, and potential risk and reward of passing to any other location on the field. We propose using fully convolutional neural network architectures designed to exploit spatiotemporal information at different scales. We extend and adapt this approach to the three related passing action models we require to learn: pass success probability, pass selection probability, and pass expected value. While these three problems require different design considerations, we structure the proposed architectures in three main conceptual blocks: a \emph{feature extraction block}, a \emph{surface prediction block}, and a \emph{loss computation block}. The proposed models for these three problems also share the following common design principles: a layered structure of input data, the use of fully convolutional neural networks for extracting local and global features, and the learning of a surface mapping from single-pixel correspondence. We first detail the common aspects of these architectures and then present the specific approach for each of the mentioned problems. \paragraph{Layers of low-level and field-wide input data} To successfully estimate a full prediction surface, we need to make sense of the information at every single pixel.
Let the set of locations $L$, presented in Section \ref{sec:epv_markov}, be a discrete matrix of locations on a soccer field of width $w$ and height $h$. We can then construct a layered representation of the game state $Y(T_t)$, consisting of a set of slices of location-wise data of size $w\times h$. By doing this, we define a series of layers derived from the data snapshot $T_t$ that represent both spatial and contextual low-level information for each problem. This layered structure provides a flexible approach to include all kinds of information available or extractable from the spatiotemporal data, which is considered relevant for the specific problem being addressed. \paragraph{Feature extractor block} The feature extractor block is fundamentally composed of fully convolutional neural networks for all three cases, based on the SoccerMap architecture \citep{fernandez2020soccermap}. Using fully convolutional neural networks, we leverage the combination of layers at different resolutions, allowing us to capture relevant information at both local and global levels, producing location-wise predictions that are spatially aware. Following this approach, we can produce a full prediction surface directly instead of a single prediction on the event's destination. The parameters to be learned will vary according to the definition of the input surfaces and the target outcome. However, the neural network architecture itself remains the same across all the modeled problems. This allows us to quickly adapt the architecture to specific problems while keeping the learning principles intact. A detailed description of the SoccerMap architecture is presented in Appendix \ref{app:soccernet}. \paragraph{Learning from single-pixel correspondence} Usually, approaches that use fully convolutional neural networks have ground-truth data for the full output surface.
In more challenging cases, only a single classification label is available, and a weakly supervised learning approach is carried out to learn this mapping \citep{pathak2015constrained}. However, for soccer events, ground-truth information is available only at a single pixel: for example, the destination location of a successful pass. This makes our problem highly challenging, given that there is only one single-location correspondence between input data and ground-truth, while we aim to estimate a full probability surface. Despite this extreme set-up, we show that we can successfully learn full probability surfaces for all the pass-related models. We do so by selecting a single pixel from the predicted output matrix, during training, according to the known destination location of observed passes, and back-propagating the loss at a single-pixel level.\\ In the following sections, we describe the design characteristics of the feature extraction, surface prediction, and loss computation blocks for the three pass-related problems: pass success probability, pass selection probability, and expected value from passes. By joining these models' outputs, we will obtain a single action-value estimation (EPV) for passing actions, expressed by $\E[G | A=\rho, T_t]$. The detailed list of features used for each model is described in Appendix \ref{app:feature_set}. \subsubsection{Pass success probability}\label{sec:pass_probability} From any given game situation where a player controls the ball, we desire to estimate the success probability of a pass attempted towards any of the potential destination locations, expressed by $\PP(O_{\rho}=1 | A=\rho, D_t, T_t)$. Figure \ref{fig:pass_probability_diagram} presents the designed architecture for this problem.
The input data at time $t$ is composed of 13 layers of spatiotemporal information obtained from the tracking data snapshot $T_t$, consisting mainly of information regarding the locations, velocities, distances, and angles between both teams' players and the goal. The feature extraction block is composed strictly of the SoccerMap architecture, where representative features are learned. This block's output consists of a $104\times68\times1$ matrix of pass probability predictions, one for each possible destination location in the coarsened field representation. In the surface prediction block a sigmoid activation function $\sigma$ is applied to each prediction to produce a matrix of pass probability estimations in the [0,1] continuous range, where $\sigma(x) = \frac{e^x}{e^x+1}$. Finally, at the loss computation block, we select the probability output at the known destination location of observed passes and compute the negative log loss, defined in Equation \ref{eq:negative_logloss}, between the predicted ($\hat{y}$) and observed pass outcome ($y$).\\ \begin{equation}\label{eq:negative_logloss} \mathcal{L}(\hat{y},y) = - (y \cdot \log(\hat{y}) + (1-y) \cdot \log(1-\hat{y})) \end{equation} Note that we are learning all the network parameters $\theta$ needed to produce a full surface prediction by back-propagating the loss between the predicted value and the observed outcome of pass success at a single location. We show in Section \ref{sec:results} that this learning setup is sufficient to obtain remarkable results. \begin{figure*} \includegraphics[width=1\textwidth]{images/pass_prob_2} \caption{Representation of the neural network architecture for the pass probability surface estimation, for a coarsened representation of size 104$\times$68. Thirteen layers of spatial features are fed to a SoccerMap feature extraction block, which outputs a 104$\times$68$\times$1 prediction surface.
A sigmoid activation function is applied to each output, producing a pass probability surface. The output at the destination location of an observed pass is extracted, and the log loss between this output and the observed outcome of the pass is back-propagated to learn the network parameters.} \label{fig:pass_probability_diagram} \end{figure*} \subsubsection{Expected possession value from passes}\label{sec:pass_epv} Once we have a pass success probability model, we are halfway to obtaining an estimation for $\E[G|A=\rho, D_t, T_t]$, as expressed in Equation \ref{eq:pass_expectation}. The remaining two components, $\E[G|A=\rho, O_{\rho}=1, D_t, T_t]$ and $\E[G|A=\rho, O_{\rho}=0, D_t, T_t]$, correspond to the expected value of successful and unsuccessful passes, respectively. We learn a model for each expression separately; however, we use an equivalent architecture for both cases. The main difference is that, to obtain full surface predictions for both cases, one model must be learned exclusively with successful passes and the other exclusively with missed passes.\\ The input data matrix consists of 16 different layers with location, velocity, distance, and angular information equivalent to that selected for the pass success probability model. Additionally, we append a series of layers corresponding to contextual features related to the outplayed players' concept and dynamic pressure lines. Finally, we add a layer with the pass probability surface, considering that this can provide valuable information to estimate the expected value of passes. This surface is calculated using a pre-trained model of the architecture presented in Section \ref{sec:pass_probability}.\\ The input data is fed to a SoccerMap feature extraction block to obtain a single prediction surface. In this case, we must observe that the expected value of $G$ should reside within the $[-1,1]$ range, as described in Section \ref{sec:epv_markov}.
To do so, in the surface prediction block, we apply a sigmoid activation function to the SoccerMap predicted surface, obtaining an output within $[0,1]$. We then apply a linear transformation, so the final prediction surface consists of values in the $[-1,1]$ range. Notably, our modeling approach does not assume that a successful pass must necessarily produce a positive reward or that missed passes must produce a negative reward.\\ The loss computation block computes the mean squared error between the predicted values and the reward assigned to each pass, defined in Equation \ref{eq:mse}. The model design is independent of the reward choice for passes. In this work, we choose a long-term reward associated with the observed outcome of the possession, detailed in Section \ref{sec:estimands}. \begin{equation}\label{eq:mse} \text{MSE}(\hat{y},y) = \frac{1}{N} \sum_{i=1}^{N}(y_i-\hat{y}_i)^2 \end{equation} \subsubsection{Pass selection probability} So far, we have models for estimating the probability and expected value surfaces for both successful and missed passes. In order to produce a single-valued estimation of the expected value of the possession given that a pass is selected, we model the pass selection probability $\PP(A=\rho, D_t | T_t)$ as defined in Equation \ref{eq:structured_epv}. The values of a pass selection probability surface must necessarily add up to 1, and will serve as a weighting matrix for obtaining the single estimate.\\ Both the input and feature extraction blocks of this architecture are equivalent to those designed for the pass success probability model (see Section \ref{sec:pass_probability}). However, we use the softmax activation function presented in Equation \ref{eq:softmax} for the surface prediction block, instead of a sigmoid activation function. We then extract the predicted value at a given pass destination location and compute the log loss between that predicted value and 1, since only observed passes are used.
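As an illustrative sketch (not the authors' implementation), the single-pixel loss extraction shared by these pass-related models can be expressed in NumPy as follows; the surface shape, variable names, and toy values are assumptions made for the example:

```python
import numpy as np

def single_pixel_log_loss(pred_surface, dest_cell, outcome):
    """Log loss at the single ground-truth pixel of a predicted surface.

    pred_surface : (104, 68) array of per-location probabilities.
    dest_cell    : (x, y) grid indices of the observed pass destination.
    outcome      : 1 for a successful pass, 0 for a missed one
                   (always 1 for the pass selection model).
    """
    eps = 1e-12  # guard against log(0)
    p = np.clip(pred_surface[dest_cell], eps, 1.0 - eps)
    return -(outcome * np.log(p) + (1 - outcome) * np.log(1.0 - p))

# Toy example: a uniform surface and a successful pass to cell (52, 34).
surface = np.full((104, 68), 0.5)
loss = single_pixel_log_loss(surface, (52, 34), outcome=1)
```

During training, only the gradient flowing through this one selected location updates the network parameters, which is what allows a full surface to be learned from single-location ground truth.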
With the different models presented in Section \ref{sec:pass_models_inference}, we can now provide a single estimate of the expected value given that a pass action is selected, $\E[G|A=\rho, T_t]$. \begin{equation}\label{eq:softmax} \textup{softmax}(v)_i = \frac{e^{v_i}}{\sum_{j=1}^{K}e^{v_j}}, \quad \text{for } i=1,\ldots, K \end{equation} \subsection{Estimating ball drive probability}\label{sec:drive_prob} We will now focus on the components needed for estimating the expected value of ball drive actions. In this work's scope, a ball drive refers to a one-second action where a player keeps the ball in their possession. Moreover, when a player attempts a ball drive, we assume the player will maintain their velocity, so the event's destination location is the player's expected location in the next second. While keeping the ball, the player might retain possession or lose the ball (either because of bad control, an opponent's interception, or by driving the ball out of the field, among others). The probability of keeping control of the ball under these conditions is modeled by the expression $\PP(O_{\delta}=1 | A=\delta, T_t)$.\\ We use a standard shallow neural network architecture to learn a model for this probability, consisting of two fully-connected layers, each one followed by a layer of ReLU activation functions, with a single-neuron output preceded by a sigmoid activation function. We provide a state representation for observed ball drive actions that is composed of a set of spatial and contextual features, detailed in Appendix \ref{app:feature_set}. Among the spatial features, the level of pressure a player in possession of the ball receives from an opponent player is considered to be a critical piece of information to estimate whether the possession is maintained or lost. We model pressure through two additional features: the opponent team's density at the player's location and the overall team pitch control at that same location.
Another factor that is considered to influence the ball drive probability is the player's contextual-relative location at the moment of the action. We include two features to provide this contextual information: the closest opponent's vertical pressure line and the closest possession team's vertical pressure line to the player. These two variables are expected to serve as a proxy for the opponent's pressing behavior and the player's relative risk of losing the ball. By adding features related to spatial pressure, we can get a better insight into how pressed that player is within that context and thus have better information to estimate the probability of keeping the ball. We train this model by optimizing the loss between the estimated probability and observed ball drive actions that are labeled as successful or missed, depending on whether the ball carrier's team keeps possession of the ball after the ball drive is attempted. \subsection{Estimating ball drive expectation}\label{sec:balldrive_expectation} Finally, once we have an estimate of the ball drive probability, we still need to obtain an estimate of the expected value of ball drives, in order to model the expression $\E[G|A=\delta,T_t]$, presented in Equation \ref{eq:ball_drive_expectation}. While using a different architecture for feature extraction, we will model both $\E[G|A=\delta,O_\delta=1,T_t]$ and $\E[G|A=\delta,O_\delta=0,T_t]$, following an approach analogous to that used in Section \ref{sec:pass_epv}.\\ Conceptually, by keeping the ball, players might choose to continue a progressive run or dribble to gain a better spatial advantage. However, they might also wait until a teammate moves and opens up a passing line of lower risk or higher quality. By learning a model for the expression $\E[G| A=\delta, T_t]$ we aim to capture the impact of these possible situations on the expected possession value, all encapsulated within the ball drive event.
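A minimal forward-pass sketch of this family of shallow models (two fully-connected ReLU layers, with a sigmoid output that the value models rescale to the $[-1,1]$ range) is shown below; the layer sizes and random weights are placeholders, not the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ShallowValueNet:
    """Two ReLU hidden layers; sigmoid output linearly mapped to [-1, 1]."""

    def __init__(self, n_features, hidden=32):
        self.W1 = rng.normal(0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, hidden))
        self.b2 = np.zeros(hidden)
        self.W3 = rng.normal(0, 0.1, (hidden, 1))
        self.b3 = np.zeros(1)

    def forward(self, x):
        h1 = relu(x @ self.W1 + self.b1)
        h2 = relu(h1 @ self.W2 + self.b2)
        s = sigmoid(h2 @ self.W3 + self.b3)  # value in (0, 1)
        return 2.0 * s - 1.0                 # rescaled to (-1, 1)

# A batch of 4 hypothetical ball-drive feature vectors with 10 features each.
net = ShallowValueNet(n_features=10)
values = net.forward(rng.normal(size=(4, 10)))
```

The same body, with the final rescaling dropped, would output a probability in $[0,1]$ as used for the ball drive success model.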
We use the same input data set and feature extractor architecture used in Section \ref{sec:drive_prob}, with the addition of the ball drive probability estimation for each example. Similarly to the surface prediction block of the expected value of passes (see Section \ref{sec:pass_epv}), we apply a sigmoid activation function to obtain a prediction in the $[0,1]$ range, and then apply a linear transformation to produce a prediction value in the $[-1,1]$ range. The loss computation block computes the mean squared loss between the observed reward value assigned to the action and the model output. \subsection{Expected goals model}\label{sec:expected_goals_model} Once we have a model for the expected values of passes and ball drives, we only need to model the expected value of shots to obtain a full state-value estimation for the action set $A$. We want to model the expectation of scoring a goal at time $t$ given that a shot is attempted, defined as $\E[G|A=\varsigma, T_t]$. This expression is typically referred to as \emph{expected goals} (xG) and is arguably one of the most popular metrics in soccer analytics \citep{eggels2016expected}. While existing approaches make use exclusively of features derived from the observed shot location, here we include both spatial and contextual information related to the locations of the 22 players and the ball to account for the nuances of shooting situations.\\ Intuitively, we can identify several spatial factors that influence the likelihood of scoring from shots, such as the level of defensive pressure imposed on the ball carrier, the interceptability of the shot by the nearby opponents, or the goalkeeper's location. Specifically, we add the number of opponents that are closer than 3 meters to the ball-carrier to quantify the level of immediate pressure on the player.
Additionally, we account for the interceptability of the shot (blockage count) by calculating the number of opponent players in the triangle formed by the ball-carrier location and the two posts. We include three additional features derived from the location of the goalkeeper. The goalkeeper's location can be considered an important factor influencing the scoring probability, particularly since the goalkeeper has the considerable advantage of being the only player that can stop the ball with their hands. In addition to this spatial information, we add a contextual feature consisting of a boolean flag indicating whether the shot is taken with the foot or the head, the latter being considered more difficult. Finally, we add a prior estimation of expected goals as an input feature to this spatial and contextual information, produced through the baseline expected goals model described in Appendix \ref{app:xg}. The full set of features is detailed in Appendix \ref{app:feature_set}. \\ Having this feature set, we use a standard neural network architecture with the same characteristics as the one used for estimating the ball drive probability, explained in Section \ref{sec:drive_prob}, and we optimize the mean squared error between the predicted outcome and the observed reward for shot actions. The long-term reward chosen for this work is detailed in Section \ref{sec:estimands}. \subsection{Action selection probability} Finally, to obtain a single-valued estimation of EPV we weigh the expected value of each possible action with the respective probability of taking that action in a given state, as expressed in Equation \ref{eq:structured_epv}. Specifically, we estimate the action selection probability $\PP(A | T_t)$, where $A$ is the discrete set of actions described in Section \ref{sec:epv_markov}. We construct a feature set composed of both spatial and contextual features.
Spatial features such as the ball location and the distance and angle to the goal provide information about the ball carrier's relative location at a given time instance. Additionally, we add spatial information related to the possession team's pitch control and the degree of spatial influence of the opponent team near the ball. On the other hand, the locations of both teams' dynamic pressure lines relative to the ball location provide the contextual information of the state representation. We also include the baseline estimation of expected goals at that given time, which is expected to influence the action selection decision, especially regarding shot selection. The full set of features is described in Appendix \ref{app:feature_set}. We use a shallow neural network architecture, analogous to those described in Section \ref{sec:drive_prob} and Section \ref{sec:balldrive_expectation}. The final layer of the feature extractor part of the network has size 3, to which a softmax activation function is applied to obtain the probabilities of each action. We model the observed outcome as a one-hot encoded vector of size $3$, indicating the action type observed in the data, and optimize the categorical cross-entropy between this vector and the predicted probabilities, which is equivalent to the log loss. \section{Experimental setup} \subsection{Datasets}\label{sec:datasets} We build different datasets for each of the presented models based on optical tracking data and event-data from 633 English Premier League matches from the 2013/2014 and 2014/2015 seasons, provided by \emph{STATS LLC}. This tracking data source consists of the locations of every player and the ball at a 10\emph{Hz} sampling rate, obtained through semi-automated player and ball tracking performed on match videos.
On the other hand, event-data consists of human-labeled on-ball actions observed during the match, including the time and location of both the origin and destination of the action, the player who takes the action, and the outcome of the event. Following our model design, we will focus exclusively on the pass, ball drive, and shot events. Table \ref{table:events} presents the total count for each of these events according to the dataset split presented below in Section \ref{sec:model_setting}. The definition of success varies from one event to another: a pass is successful if a player of the same team receives it, a ball drive is successful if the team does not lose the possession after the action occurs, and a shot is labeled as successful if a goal is scored from that shot. Given this data, we can extract the tracking data snapshot, defined in Section \ref{sec:epv_markov}, for every instance where any of these events is observed. From there, we can build the input feature sets defined for each of the presented models. For the detailed list of features used, see Appendix \ref{app:feature_set}. \begin{table}[h!] \resizebox{\textwidth}{!}{\begin{tabular}{llllll} \hline\noalign{\smallskip} Data Type & \# Total & \# Training & \# Validation & \# Test & \% Success \\ \noalign{\smallskip}\hline\noalign{\smallskip} Match & 633 & 379 & 127 & 127 & - \\ Pass & 480,670 & 288,619 & 96,500 & 95,551 & 79.64 \\ Ball drive & 413,123 & 284,759 & 82,271 & 82,093 & 90.60 \\ Shot & 13,735 & 8,240 & 2,800 & 2,695 & 8.54 \\ \noalign{\smallskip}\hline \end{tabular}} \caption{Total count of events included within the tracking data of 633 English Premier League matches from the 2013/2014 and 2014/2015 seasons.} \label{table:events} \end{table} \subsection{Defining the estimands}\label{sec:estimands} Each of the components of the EPV structured model has different estimands or outcomes.
For both the pass success and ball drive success probability models, we define a binomially distributed outcome, according to the definition of success provided in Section \ref{sec:datasets}. These outcomes correspond to the short-term observed success of the actions. For the pass selection probability, we define the outcome as a binomially distributed random variable as well, where a value of 1 is given for every observed pass at its corresponding destination location. We define the action selection model's estimand as a multinomially distributed random variable that can take one of three possible values, according to whether the selected action corresponds to a pass, a ball drive, or a shot.\\ For the EPV estimations of passes, ball drives, and shot actions, respectively, we define the estimand as a long-term reward, corresponding to the outcome of the possession where that event occurs. To do this, we first need to define when a possession ends. There is a low frequency of goals in matches (2.8 goals on average in our dataset) compared to the number of observed actions (1,433 on average). Given this, the definition of the time extent of a possession is expected to influence the balance between individual actions' short-term value and the long-term expected outcome after that action is taken. The standard approach for setting a possession's ending time is when the ball changes control from one team to another. However, here we define a possession's end as the time when the next goal occurs. By doing this, we allow the ball to either go out of the field or change control between teams an undefined number of times until the next goal is observed. Once a goal is observed, all the actions between the goal and the previous one are assigned an outcome of $1$ if the action is taken by the scoring team or $-1$ otherwise.
Following this, each action gets assigned as an outcome a long-term reward (i.e., the next goal observed).\\ However, this approach is expected to introduce noise, especially for actions that are far apart in time from an observed goal. Let $\epsilon$ be a constant threshold, in seconds, on the time between an action and the next goal. We can choose a value for $\epsilon$ that represents a long-term reward-vanishing threshold, so that all the actions observed more than $\epsilon$ seconds before the observed goal receive a reward of $0$. For this work, we choose $\epsilon=15s$, which corresponds to the average duration of standard soccer possessions in the available matches. Note that this is equivalent to assuming that the current state of a possession has an impact of at most $\epsilon$ seconds into the future. \subsection{Model setting}\label{sec:model_setting} We randomly sample the available matches and split them into training (379), validation (127), and test (127) sets. From each of these matches, we obtain the observed on-ball actions and the tracking data snapshots to construct the set of input features corresponding to each model, detailed in Appendix \ref{app:feature_set}. The events are randomly shuffled in the training dataset to avoid bias from the correlation between events that occur close in time. We use the validation set for model selection and leave the test set as a hold-out dataset for testing purposes. We train the models using the adaptive moment estimation algorithm \citep{kingma2014adam}, and set the $\beta_1$ and $\beta_2$ parameters to $0.9$ and $0.999$, respectively. For all the models we perform a grid search on the learning rate ($\{1\mathrm{e}{-3}, 1\mathrm{e}{-4}, 1\mathrm{e}{-5}, 1\mathrm{e}{-6}\}$) and batch size ($\{16,32\}$) parameters. We use early stopping with a delta of $1\mathrm{e}{-3}$ for the pass success probability, ball drive success probability, and action selection probability models, and $1\mathrm{e}{-5}$ for the rest of the models.
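To make the long-term reward definition of Section \ref{sec:estimands} concrete, the next-goal labeling with the $\epsilon$ reward-vanishing threshold can be sketched as follows; this is a simplified sketch over hypothetical action records, not the production pipeline:

```python
EPSILON = 15.0  # reward-vanishing threshold, in seconds

def label_actions(actions, goals):
    """Assign each action the next-goal reward with an epsilon cutoff.

    actions : list of (time_s, team) tuples, ordered by time.
    goals   : list of (time_s, scoring_team) tuples, ordered by time.
    Returns a list of rewards in {-1, 0, 1}, one per action.
    """
    rewards = []
    for t, team in actions:
        next_goal = next((g for g in goals if g[0] >= t), None)
        if next_goal is None or next_goal[0] - t > EPSILON:
            rewards.append(0)   # too far from any goal: reward vanishes
        elif next_goal[1] == team:
            rewards.append(1)   # the acting team scores the next goal
        else:
            rewards.append(-1)  # the opponent scores the next goal
    return rewards

# An early "away" action far from any goal, then two actions shortly
# before a "home" goal at t=100s.
example = label_actions(
    [(10.0, "away"), (90.0, "home"), (98.0, "away")],
    [(100.0, "home")],
)  # -> [0, 1, -1]
```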
\subsection{Model calibration} We include an after-training calibration procedure within the processing pipeline for the pass success probability and pass selection probability models, which presented slight calibration imbalances on the validation set. We use the temperature scaling calibration method for both models, a useful approach for calibrating neural networks \citep{guo2017calibration}. Temperature scaling consists of dividing the vector of logits passed to a softmax function by a constant \emph{temperature} value $T_p$. This scaling modifies the shape of the probability vector produced by the softmax function. However, it preserves each element's ranking, impacting only the distribution of probabilities and leaving the classification prediction unmodified. We fit these post-training calibration procedures exclusively on the validation set. \subsection{Evaluation Metrics}\label{sec:metrics} For the pass success probability, ball drive success probability, pass selection probability, and action selection models, we use the cross-entropy loss. Let $M$ be the number of classes, $N$ the number of examples, $\hat{y}_{ij}$ the estimated outcome, and $y_{ij}$ the expected outcome; we define the cross-entropy loss function as in Equation \ref{eq:cross_entropy}. For the first three models, where the outcome is binary, we set $M=2$. We can directly observe that for this set-up, the cross-entropy is equivalent to the negative log-loss defined in Equation \ref{eq:negative_logloss}. For the action selection model, we set $M=3$. For the rest of the models, corresponding to EPV estimations, the outcome takes continuous values in the $[-1,1]$ range. For these cases, we use the mean squared error (MSE) as a loss function, defined in Equation \ref{eq:mse}, after first normalizing both the estimated and observed outcomes into the $[0,1]$ range. 
\begin{equation}\label{eq:cross_entropy} \text{CE}(\hat{y},y) = - \sum_j^M\sum_i^N(y_{ij} \cdot \log(\hat{y}_{ij})) \end{equation} We are interested in obtaining calibrated predictions for all of the models, as well as for the joint EPV estimation. Having the models calibrated allows us to perform a fine-grained interpretation of the variations of EPV within subsets of actions, as shown in Section \ref{sec:applications}. We validate the models' calibration using a variation of the expected calibration error (ECE) presented in \cite{guo2017calibration}. To obtain this metric, we distribute the predicted outcomes into $K$ bins and compute the difference between the average prediction in each bin and the average expected outcome for the examples in each bin. Equation \ref{eq:ece} presents the ECE metric, where $K$ is the number of bins, and $B_k$ corresponds to the set of examples in the $k$-th bin. Essentially, we are calculating the average difference between predicted and expected outcomes, weighted by the number of examples in each bin. In these experiments, we use quantile binning to obtain $K$ equally-sized bins in ascending order. \begin{equation}\label{eq:ece} \text{ECE} = \sum_{k=1}^{K} \frac{\abs{B_k}}{N} \abs{ \bigg(\frac{1}{\abs{B_k}} \sum_{i \in B_k}y_i\bigg) - \bigg(\frac{1}{\abs{B_k}} \sum_{i \in B_k} \hat{y}_i \bigg) } \end{equation} \subsection{Results}\label{sec:results} Table \ref{tab:results} presents the results obtained in the test set for each of the proposed models. The loss value corresponds to either the cross-entropy or the mean squared error loss, as detailed in Section \ref{sec:metrics}. 
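For reference, the quantile-binned ECE of Equation \ref{eq:ece} can be computed with a short NumPy routine; this is a sketch, and the exact binning implementation used in the paper may differ:

```python
import numpy as np

def expected_calibration_error(y_true, y_pred, n_bins=10):
    """Quantile-binned ECE: per-bin absolute gap between the mean
    observed outcome and the mean prediction, weighted by bin size."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    order = np.argsort(y_pred)            # predictions in ascending order
    bins = np.array_split(order, n_bins)  # K roughly equally-sized bins
    n = len(y_pred)
    ece = 0.0
    for b in bins:
        if b.size:
            ece += (b.size / n) * abs(y_true[b].mean() - y_pred[b].mean())
    return ece
```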
The table includes the optimal values for the batch size and learning rate parameters, the number of parameters of each model, and the number of examples per second that each model can predict.\\ We can observe that the loss value reported for the final joint model is equivalent to the losses obtained for the EPV estimations of each of the three action types, showing stability in the model composition. The shot EPV loss is higher than the ball drive EPV and pass EPV losses, arguably due to the considerably lower number of observed events available in comparison with the rest, as described in Section \ref{sec:datasets}. While the number of examples per second is directly dependent on the models' complexity, we can observe that we can predict 899 examples per second in the worst case. This value is 89 times higher than the sampling rate of the available tracking data (10Hz), showing that this approach can be applied for the real-time estimation of EPV and its components.\\ Regarding the models' calibration, we can observe that the ECE metrics present consistently low values across all the models. Figure \ref{fig:calibration_all} presents a fine-grained representation of the probability calibration of each of the models. The x-axis represents the mean predicted value for a set of $K=10$ bins, while the y-axis represents the mean observed outcome among the examples within each corresponding bin. The circle size represents the percentage of examples in the bin relative to the total number of examples. In these plots, we can observe that the different models provide calibrated probability estimations along their full range of predictions, which is a critical factor for allowing a fine-grained inspection of the impact that specific actions have on the expected possession value estimation. Additionally, we can observe the different ranges of prediction values that each model produces. 
For example, ball drive success probabilities are distributed more often above $0.5$, while pass success probabilities cover a wide range between $0$ and $1$, showing that it is harder for a player to lose the ball while keeping possession than it is to lose the ball by attempting a pass towards another location on the field. The action selection probability distribution is heavily influenced by each action type's frequency, showing a higher frequency and broader distribution on ball drive and pass actions compared with shots. The joint EPV model's calibration plot shows that the proposed approach of estimating the different components separately and then merging them back into a single EPV estimation provides calibrated estimations. We applied post-training calibration exclusively to the pass success probability and the pass selection probability models, obtaining temperature values of $0.82$ and $0.5$, respectively.\\ With this, we obtain a framework of analysis that provides accurate estimations of the long-term reward expectation of the possession, while also allowing for a fine-grained evaluation of the different components comprised in the model.\\ \begin{table} \resizebox{\textwidth}{!}{\begin{tabular}{lllllll} \hline\noalign{\smallskip} Model & Loss & ECE & \begin{tabular}[c]{@{}l@{}}Batch\\Size\end{tabular} & \begin{tabular}[c]{@{}l@{}}Learning\\Rate\end{tabular} & \# Params. & \begin{tabular}[c]{@{}l@{}}Ex.\\per s\end{tabular} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Pass probability & 0.190 & 0.0047 & 32 & $1\mathrm{e}{-4}$ & 401,259 & 942 \\ Ball drive probability & 0.2803 & 0.0051 & 32 & $1\mathrm{e}{-3}$ & 128 & 67,230 \\ Pass successful EPV & 0.0075 & 0.0011 & 16 & $1\mathrm{e}{-6}$ & 403,659 & 899 \\ Pass missed EPV & 0.0085 & 0.0015 & 16 & $1\mathrm{e}{-6}$ & 403,659 & 899 \\ Pass selection probability & 5.7134 & - & 32 & $1\mathrm{e}{-5}$ & 401,259 & 984 \\ Pass EPV * Pass selection & 0.0067 & 0.0011 & - & - & - & - \\ Ball drive successful EPV & 0.0128 & 0.0022 & 16 & $1\mathrm{e}{-4}$ & 153 & 57,441 \\ Ball drive missed EPV & 0.0072 & 0.0025 & 16 & $1\mathrm{e}{-4}$ & 153 & 57,441 \\ Shot EPV & 0.2421 & 0.0095 & 16 & $1\mathrm{e}{-3}$ & 231 & 72,455 \\ Action selection probability & 0.6454 & - & 32 & $1\mathrm{e}{-3}$ & 171 & 23,709 \\ EPV & 0.0078 & 0.0023 & - & - & - & -\\ \hline\noalign{\smallskip} \end{tabular}} \caption{The average loss and calibration value for each of the components of the EPV model, as well as for the joint EPV estimation, on the corresponding test datasets. Additionally, the table presents the optimal value of the hyper-parameters, the total number of parameters, and the number of predicted examples per second, for each of the models.} \label{tab:results} \end{table} \begin{figure*} \includegraphics[width=1\textwidth]{images/calibration_all_res} \caption{Probability calibration plots for the action selection (top-left), pass and ball drive probability (top-right), pass (successful and missed) EPV (mid-left), ball drive (successful and missed) EPV (mid-right), pass and ball drive EPV joint estimation (bottom-left), and the joint EPV estimation (bottom-right). Values on the x-axis represent the mean value by bin, among 10 equally-sized bins. The y-axis represents the mean observed outcome by bin. 
The circle size represents the percentage of examples in each bin relative to the total examples for each model.} \label{fig:calibration_all} \end{figure*} \section{Practical Applications}\label{sec:applications} In this section, we present a series of novel practical applications derived from the proposed EPV framework. We show how the different components of our EPV representation can be used to obtain direct insight into specific game situations at any frame during a match. We present the value distribution of different soccer actions, based on the contextual features developed in this work, and analyze the risk and reward these actions entail. Additionally, we leverage the pass EPV surfaces and the contextual variables developed in this work to analyze different off-ball pressing scenarios for breaking Liverpool's organized buildup. Finally, we inspect the on-ball and off-ball value added between every Manchester City player (season 14-15) and the legendary attacking midfielder David Silva, to derive an optimal lineup that would maximize Silva's contribution to the team. \subsection{A real-time control room} In most team sports, coaches make heavy use of video to analyze player performance, show players their correctly or incorrectly performed actions, and even point out other possible decisions the player may have taken in a given game situation. The presented structured modeling approach of the EPV provides the advantage of obtaining numerical estimations for a set of game-related components, allowing us to understand the impact that each of them has on the development of each possession. Based on this, we can build a control-room tool like the one shown in Figure \ref{fig:app_control_room} to help coaches analyze game situations and communicate effectively with players. \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/control_room} \caption{A visual control room tool based on the EPV components. 
On the left, a 2D representation of the game state at a given frame during the match, with an overlay of the pass EPV added surface and selection menus to change between 2D and video perspective, and to modify the surface overlay. In the bottom-left corner, a set of video sequence control widgets. In the center, the instantaneous selection probability of each on-ball action and the expected value of each action, as well as the overall EPV value. On the right, the evolution of the EPV value during the possession and the expected EPV value of the optimal passing option at every frame.} \label{fig:app_control_room} \end{figure*} The control room tool presented in Figure \ref{fig:app_control_room} shows the frame-by-frame development of each of the EPV components. Coaches can observe the match's evolution in real-time and use a series of widgets to inspect specific game situations. For instance, in this situation, coaches can see that passing the ball has a better overall expected value than keeping the ball or shooting. Additionally, they can visualize which passing locations have a higher expected value. The EPV evolution plot on the right shows that while the overall EPV is $0.032$, the best possible passing option is expected to increase this value up to $0.112$. The pass EPV added surface overlay shows that an increase of value can be expected by passing to the teammates inside the box or passing to the teammate outside the box. With this information and their knowledge of their team, coaches can decide whether to instruct the player to take immediate advantage of these kinds of passing opportunities or to wait until better opportunities develop. Additionally, the player can gain a more visual understanding of the potential value of passing to specific locations in this situation instead of taking a shot. 
If the player tends to shoot in these kinds of situations, the coach could show that keeping the ball or passing to an open teammate has a better goal expectancy than shooting from that location.\\ This visual approach could provide a smoother way to introduce advanced statistics into a coaching staff's analysis process. Instead of evaluating actions beforehand or only delivering hard-to-digest numerical data, we provide a mechanism to enhance coaches' interpretation and players' understanding of game situations without interfering with the analysis process. \subsection{Not all value is created (or lost) equal} There is a wide range of playing strategies that can be observed in modern professional soccer. No single best strategy is found in successful teams, from Guardiola's creative and highly attacking FC Barcelona to Mourinho's defensive and counter-attacking Inter Milan. We could argue that a critical element for selecting a playing strategy lies in managing the risk and reward balance of actions, or more specifically, in which actions a team will prefer in each game situation. While professional coaches intuitively understand which actions are riskier and more valuable, there is no quantification of the actual distribution of the value of the most common actions in soccer.\\ From all the passes and ball drive actions described in Section \ref{sec:datasets}, and the spatial and contextual features described in Section \ref{sec:feature_extraction}, we derived a series of context-specific actions to compare their value distributions. We identify passes and ball drives that break the first, second, or third line, based on the concept of dynamic pressure lines. We define an action (pass or ball drive) to be under pressure if the player's pitch control value at the beginning of the action is below $0.4$, and without pressure otherwise. A long pass is defined as a pass action that covers a distance above 30 meters. 
We define a pass back as a pass whose destination location is closer to the team's goal than the ball's origin location. The available data includes manually labeled tags indicating when a pass is a cross and when a pass is missed. We identify lost balls as missed passes and ball drives ending in a recovery by the opponent. For all of these action types, we calculate the added value of each observed action (EPV added) as the difference between the EPV at the end and at the start of the action. We perform a kernel density estimation on the EPV added of each action type to obtain a probability density function. In Figure \ref{fig:app_value_creation} we compare the densities across all the action types. The density function value is normalized into the $[0,1]$ range by dividing by the maximum density value, in order to ease the visual comparison between the distributions.\\ \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/pass_value_added} \caption{Comparison of the probability density function of ten different actions in soccer. The density function values are normalized into the $[0,1]$ range. The normalization is obtained by dividing each density value by the maximum observed density value.} \label{fig:app_value_creation} \end{figure*} From Figure \ref{fig:app_value_creation}, we can gain a deeper understanding of the value distribution of different types of actions. For passes that break lines, we can observe that the higher the line, the broader the distribution, and the higher the extreme values. While passes breaking the first line are centered around $0$ with most values ranging in $[-0.01,0.015]$, the distribution of passes breaking the third line is centered around $0.005$, and most passes fall in the interval $[-0.025,0.05]$. Ball drives that break lines present a distribution similar to that of passes breaking the first line. 
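The curves in Figure \ref{fig:app_value_creation} follow the procedure just described: per-action EPV added, a kernel density estimate, and max-normalization. A plain-NumPy sketch with a fixed Gaussian bandwidth is shown below; the paper does not specify its kernel or bandwidth choices, so those are assumptions here:

```python
import numpy as np

def normalized_epv_added_density(epv_start, epv_end, bandwidth=0.005, num=200):
    """EPV added per action and a max-normalized Gaussian kernel density
    estimate of its distribution, as plotted for each action type."""
    added = np.asarray(epv_end, dtype=float) - np.asarray(epv_start, dtype=float)
    grid = np.linspace(added.min() - 0.02, added.max() + 0.02, num)
    # Sum of Gaussian kernels centered at each observed EPV-added value.
    z = (grid[:, None] - added[None, :]) / bandwidth
    density = np.exp(-0.5 * z ** 2).sum(axis=1)
    density /= len(added) * bandwidth * np.sqrt(2.0 * np.pi)
    return grid, density / density.max()  # normalized into [0, 1]
```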
Regarding the level of spatial pressure on actions, we can see that actions without pressure present an approximately zero-centered distribution, with most values falling in a $[-0.01,0.01]$ range. On the other hand, actions under pressure present a broader distribution and a higher density on negative values. This shows both a greater tendency to lose the ball under pressure, hence losing value, and a higher tendency to increase the value if the pressure is overcome with successful actions. Whether crosses are a successful way of reaching the goal has been a long-standing debate in soccer strategy. We can observe that crosses constitute the type of action with the highest tendency to lose significant amounts of value; however, they also provide a higher probability of large value increases in case of success, compared to other actions. Long passes present a similar pattern: they can add a high amount of value in case of success but have a higher tendency to produce large EPV losses. For years, soccer enthusiasts have argued about whether passing backward provides value or not. We can observe that, while the EPV added distribution of passing back is the narrowest, nearly half of the probability mass lies on the positive side of the x-axis, showing the potential value to be obtained from this type of action. Finally, losing the ball usually produces a loss of value. However, in situations such as being close to the opponent's box with pressure on the ball carrier, losing the ball with a pass to the box might still increase the expected value of the possession, given the increased chance of a rebound. \subsection{Pressing Liverpool} A prevalent and challenging decision that coaches face in modern professional soccer is how to defend against an organized buildup by the opponent. We consider an organized buildup as a game situation where a team has the ball behind the first pressure line. 
When deciding how to press, a coach needs to decide, first, in which zones they want to prevent the opponent from receiving passes and, second, how to arrange their players in order to minimize the chances of the opponent moving forward. This section uses the EPV passing components and dynamic pressure lines to analyze how to press Brendan Rodgers' Liverpool (season 14/15).\\ We identify the formation being used at every moment by counting the number of players in each pressure line. We assume there are only three pressure lines, so all formations are presented as the number of defenders followed by the number of midfielders and forwards. For every formation faced by Liverpool during buildups, we calculate both the mean off-ball and on-ball advantage in every location on the field. The on-ball advantage is calculated as the sum of the EPV added of passes with positive EPV added. The off-ball advantage is calculated as the sum of positive potential EPV added; we say that a player has an off-ball advantage if he is located in a position where, in case of receiving a pass, the EPV would increase. Figure \ref{fig:app_press_liverpool} presents two heatmaps for each of the top 5 formations used against Liverpool during buildups, showing the distributions where Liverpool obtained on-ball and off-ball advantages, respectively. The heatmaps are presented as the difference with the mean heatmap over all of Liverpool's buildups during the season. \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/liverpool_pressure} \caption{In the first row, one distribution for every formation Liverpool's opponents used during Liverpool's organized buildups, showing the difference between the distribution of off-ball advantages and the mean distribution. The second row is analogous to the first one, presenting the on-ball EPV added distributions. 
The green circle represents the ball location.} \label{fig:app_press_liverpool} \end{figure*} We will assume that the coach wants to prevent Liverpool from playing inside the team's block during buildups. We can see that when facing a 3-4-3 formation, Liverpool can create higher off-ball advantages before the second pressure line and successfully manages to break the first line of pressure through the inside. Against the 4-4-2, Liverpool has more difficulties in breaking the first line but still manages to do so successfully, while also generating spaces between the defenders and midfielders, facilitating long balls to the sides. If the coach's team does not have a good aerial game, this would be a harmful way of pressing. We can see that the 4-3-3 is an ideal pressing formation for preventing Liverpool from playing inside the pressing block. This pressing style pushes the team to create spaces on the outside, before the first pressure line and after the second pressure line. In the second row, we can observe that Liverpool struggles to add value through the inside and is pushed towards the sides when passing. The 4-2-4 is the formation that most limits playing inside the block; however, it also allows more space on the sides of the midfielders. We can see that Liverpool can take advantage of this, creating spaces and making valuable passes towards those locations. If the coach has fast wing-backs that could press receptions of long balls to the sides, this could be an adequate formation; otherwise, the 4-3-3 is still preferable. Finally, the 5-3-2 provides significant advantages to Liverpool, which can create spaces both through the inside above the first pressure line and behind the defenders' backs, while also playing towards those locations effectively.\\ This kind of information can be highly useful for a coach deciding on tactical approaches to specific game situations. 
If we add the knowledge that the coach has of his players' qualities, he can fine-tune the design of the pressing he wants his team to develop. \subsection{Growing around David Silva} Most teams in the best professional soccer leagues have at least one player who is the key playmaker. Often, coaches want to ensure that the team's strategy is aligned with maximizing the performance of these key players. In this section, we leverage tracking data and the passing components of the EPV model to analyze the relationship between the well-known attacking midfielder David Silva and his teammates when playing at Manchester City in season 14/15. We calculated the playing minutes each player shared with Silva and aggregated both the on-ball EPV added and the expected off-ball EPV added of passes between each player pair for each match in the season. We analyze two different situations: when Silva has the ball, and when any other player has the ball and Silva is on the field. We also calculate the selection percentage, defined as the percentage of time Silva chooses to pass to a given player when available (and vice versa). Figure \ref{fig:app_silva} presents the sending and receiving maps involving David Silva and each of the two players with the most minutes per position in the team. Every player is placed according to his most commonly used position in the league. Players represented by a circle with a solid contour have a higher sum of off-ball and on-ball EPV in each situation than the teammate assigned to the same position, who is presented with a dashed circle. The size of the circle represents the selection percentage of the player in each situation. We represent off-ball EPV added by the arrows' color, and the on-ball EPV added of attempted passes by the arrows' size. 
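The aggregation behind Figure \ref{fig:app_silva} can be sketched as a per-pair sum of EPV added, normalized by shared playing time and scaled to 90 minutes. The column names and input layout below are illustrative assumptions, not the paper's actual data schema:

```python
import pandas as pd

def pairwise_epv_per90(passes, minutes_together):
    """Sum on-ball EPV added per passer-receiver pair, normalized by
    shared playing time and scaled to 90 minutes.

    passes: DataFrame with columns ['passer', 'receiver', 'epv_added'].
    minutes_together: dict mapping (passer, receiver) -> shared minutes.
    """
    agg = (passes.groupby(["passer", "receiver"], as_index=False)["epv_added"]
                 .sum())
    agg["minutes"] = [minutes_together[(p, r)]
                      for p, r in zip(agg["passer"], agg["receiver"])]
    agg["epv_added_per90"] = agg["epv_added"] / agg["minutes"] * 90
    return agg
```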
\begin{figure*}[h] \includegraphics[width=1\textwidth]{images/silva_receive_and_send} \caption{Two passing maps representing the relationship between David Silva and each of the two players with the most minutes per position in the Manchester City team during season 14/15. The figure on the left represents passes attempted by Silva, while the figure on the right represents passes received by Silva. The color of the arrow represents the average expected off-ball EPV added of the passes. The size of the circle represents the selection percentage of the destination player of the pass. Circles present a solid contour when that player is considered better for Silva than the teammate in the same position. The size of the arrow represents the mean on-ball EPV added of attempted passes. Players are placed according to their most frequently used position on the field. All metrics are normalized by minutes played together and multiplied by 90 minutes.} \label{fig:app_silva} \end{figure*} We can see that both the wingers and forwards generate space for Silva and receive high added value from his passes. However, the most frequently selected player is the central midfielder Yaya Tour\'e, who also looks for Silva often and is the midfielder providing the highest value to him. Regarding the other central midfield position, Fernandinho has a better relationship with Silva, in terms of received and added value, than Fernando. Silva shows a high tendency to play with the wingers; however, while Milner and Jovetic can create space and receive value from Silva, Navas and Nasri find Silva more often, with higher added value. Based on this, the coach can decide whether he prefers to line up wingers that can benefit from Silva's passes or wingers that increase Silva's participation in the game. A similar situation is presented with the right- and left-backs. Additionally, we can observe that Silva tends to be a highly preferred passing option for most players. 
This information allows the coach to gain a deeper understanding of the effective off-ball and on-ball value relationship that can be expected from every pair of players and can be useful for designing playing strategies before a match. \section{Discussion} This paper presents a comprehensive approach for estimating the instantaneous expected value of ball possessions in soccer. One of the main contributions of this work is showing that by deconstructing a single expectation into a series of lower-level statistical components and then estimating each of these components separately, we can gain greater insight into how these different elements impact the final joint estimation. Also, instead of depending on a single-model approach, we can make a more specialized selection of the models, learning approach, and input information that is better suited for learning the specific problem represented by each sub-component of the EPV decomposition. The deep learning architectures presented for the different passing components produce full probability surfaces, providing rich visual information for coaches that can be used to perform fine-grained analyses of players' and teams' performance. We show that we can obtain calibrated estimations for all the decomposed model components, including the single-value estimation of the expected possession value of soccer possessions. We develop a broad set of novel spatial and contextual features for the different models presented, allowing rich state representations. Finally, we present a series of practical applications showing how this framework could be used as a support tool for coaches, allowing them to solve new upcoming questions and accelerating the problem-solving necessities that arise daily in professional soccer.\\ We consider that this work provides a relevant contribution to improving practitioners' interpretation of the complex dynamics of professional soccer. 
With this approach, soccer coaches gain more convenient access to detailed statistical estimations that are unusual in their practice, together with a visual approach for analyzing game situations and communicating tactics to players. Additionally, a large set of novel research directions can be derived on top of this framework, including on-ball and off-ball player performance analysis, team performance and tactical analysis for pre-match and post-match evaluation, player profile identification for scouting, young player evolution analysis, match highlight detection, and enriched visual interpretation of game situations, among many others. \section{Introduction} Professional sports teams have started to gain a competitive advantage in recent decades by using advanced data analysis. However, soccer has been a late bloomer in integrating analytics, mainly due to the difficulty of making sense of the game's complex spatiotemporal relationships. To address the nonstop flow of questions that coaching staffs deal with daily, we require a flexible framework that can capture the complex spatial and contextual factors that rule the game while providing practical interpretations of real game situations. This paper addresses the problem of estimating the expected value of soccer possessions (EPV) and proposes a decomposed learning approach that allows us to obtain fine-grained visual interpretations from neural network-based components.\\ The EPV is essentially an estimate of which team will score the next goal, given all the spatiotemporal information available at any given time. Let $G \in \{-1,1\}$, where the two values represent one or the other team scoring next, respectively; the EPV corresponds to the expected value of $G$. The frame-by-frame estimation of EPV constitutes a one-dimensional time series that provides an intuitive description of how the possession value changes in time, as presented in Figure \ref{fig:epv}. 
While this value alone can provide precise information about the impact of observed actions, it does not provide sufficient practical insight into either the factors that make it fluctuate or which other advantageous actions could be taken to boost EPV further. To reach this level of granularity, we formulate EPV as a composition of the expectations of three different on-ball actions: passes, ball drives, and shots. Each of these components is estimated separately, producing an ensemble of models whose outputs can be merged to produce a single EPV estimate. Additionally, by inspecting each model, we can obtain detailed insight into the impact that each of the components has on the final EPV estimation.\\ \begin{figure*} \includegraphics[width=1\textwidth]{images/epv_flow_res} \caption{Evolution of the expected possession value (EPV) of FC Barcelona during a match against Real Betis in La Liga season 2019/2020.} \label{fig:epv} \end{figure*} We propose two different approaches to learn each of the separated models, depending on whether we need to consider every possible location on the field or just single locations. For the first case, we propose several deep neural architectures capable of producing full prediction surfaces from low-level features. We show that it is possible to learn these surfaces from very challenging learning set-ups where only a single-location ground-truth correspondence is available for estimating the whole surface. For the second case, we use shallow neural networks on top of a broad set of novel spatial and contextual features. From a practical standpoint, we are splitting a complex model into more easily understandable parts so the practitioner can both understand the factors that produce the final estimate and evaluate the effect that other possible actions may have had. 
This type of modeling allows for easier integration of complex prediction models into the decision-making process of groups of individuals with a non-scientific background. Also, each of the components can be used individually, multiplying the number of potential applications. The main contributions of this work are the following: \begin{itemize} \item We propose a framework for estimating the instantaneous expected outcome of any soccer possession, which allows us to provide professional soccer coaches with rich numerical and visual performance metrics. \item We show that by decomposing the target EPV expression into a series of sub-components and estimating these separately, we can obtain accurate and calibrated estimates and provide a framework with greater interpretability than single-model approaches \citep{cervone2016multiresolution,bransen2018measuring}. \item We develop a series of deep learning architectures to estimate the expected possession value surface of potential passes, pass success probability, pass selection probability surfaces, and show these three networks provide both accurate and calibrated surface estimates. \item We present a handful of novel practical applications in soccer that are directly derived from this framework. \end{itemize} \section{Background} The evaluation of individual actions has been recently gaining attention in soccer analytics research. Given the relatively low frequency of soccer goals compared to match duration and the frequency of other events such as passes and turnovers, it becomes challenging to evaluate individual actions within a match. Several different approaches have been attempted to learn a valuation function for both on-ball and off-ball events related to goal-scoring.\\ Handcrafted features based on the opinion of a committee of soccer experts have been used to quantify the likelihood of scoring in a continuous-time range during a match \citep{link2016real}. 
Another approach uses the locations of observed events to estimate the value of individual actions during the development of possessions \citep{decroos2018actions}. Here, the game state is represented as a finite set of consecutive observed discrete actions, and a Bernoulli-distributed outcome variable is estimated through standard supervised machine learning algorithms. In a similar approach, possession sequences are clustered based on dynamic time warping distance, and an XGBoost \citep{chen2016xgboost} model is trained to predict the expected goal value of the sequence, assuming it ends with a shot attempt \citep{bransen2018measuring}. \cite{gyarmati2016qpass} calculate the value of a pass as the difference in field value between two locations when the ball transitions between them. \cite{rudd2011framework} uses Markov chains to estimate the expected possession value based on individual on-ball actions and a discrete transition matrix of 39 states, including zonal location, defensive state, set pieces, and two absorbing states (goal or end of possession). A similar approach, named expected threat, uses Markov chains and a coarsened representation of field locations to derive the expected goal value of transitioning between discrete locations \citep{xtkarun}. A pass's reward has also been estimated as the expectation of a shot occurring within the next 10 seconds of the pass, based on spatial and contextual information \citep{power2017not}. Beyond the quantification of on-ball actions, off-ball positioning quality has also been quantified in terms of goal expectation. In \cite{spearmanbeyond}, a physics-based statistical model is designed to quantify the quality of players' off-ball positioning based on the positional characteristics at the time of the action that precedes a goal-scoring opportunity.
All of these previous attempts at quantifying action value in soccer assume a series of constraints that reduce the scope and reach of the solution. The limitations of this past work include simplified representations of event data (consisting of merely the location and time of on-ball actions), strongly handcrafted rule-based systems, or an exclusive focus on one specific type of action. However, a comprehensive EPV framework that considers both the full spatial extent of the soccer field and the space-time dynamics of the 22 players and the ball has not yet been proposed and fully validated. In this work, we provide such a framework and go one step further, estimating not only the added value of observed actions but the expected value of the possession at any time instance.\\ Action evaluation has also been approached in other sports, such as basketball and ice hockey, by using spatiotemporal data. The expected possession value of basketball possessions was estimated through a multiresolution process combining macro-transitions (transitions between states following a coarsened representation of the game state) and micro-transitions (likelihood of player-level actions), capturing the variations between actions, players, and court space \citep{cervone2016multiresolution}. Also, deep reinforcement learning has been used for estimating an action-value function from event data of professional ice-hockey games \citep{liu2018deep}. Here, a long short-term memory deep network is trained to capture complex time-dependent contextual features from a set of low-level input information extracted from consecutive on-puck events. \section{Structured modeling} In this study, we aim to provide a model for estimating the expected outcome of a soccer possession at any given time. While the single EPV estimate has practical value itself, we propose a structured modeling approach where the EPV is decomposed into a series of subcomponents.
Each of these components can be estimated separately, providing the model with greater adaptability to component-specific problems and facilitating the interpretation of the final estimate. \subsection{EPV as a Markov decision process}\label{sec:epv_markov} This problem can be framed as a Markov decision process (MDP). Let a player with possession of the ball be an agent that can take any action of a discrete set $A$ from any state of the set of all possible states $S$; we aim to learn the state-value function $EPV(s)$, defined as the expected return from state $s$, based on a policy $\pi(s,a)$, which defines the probability of taking action $a$ at state $s$. In contrast with typical MDP applications, our aim is not to find the optimal policy $\pi$, but to estimate the expected possession value (EPV) from an average policy learned from historical data.\\ Let $\Gamma$ be the set of all possible soccer possessions, and let $r \in \Gamma$ represent the full path of a specific possession. Let $\Psi$ be a high-dimensional space including all the spatiotemporal information and a series of annotated events, so that $T_t(r) \in \Psi$ is a snapshot of the spatiotemporal data $t$ seconds after the start of the possession. Finally, let $G(r)$ be the outcome of possession $r$, where $G(r) \in \{-1,1\}$, with $1$ indicating that a goal is scored and $-1$ that a goal is conceded.\\ \begin{definition}\label{def:epv_general} The expected possession value of a soccer possession at time $t$ is $EPV_t = \E[G| T_t]$ \end{definition} This initial definition shares similarities with previous approaches in other sports, such as basketball \citep{cervone2016multiresolution} and American football \citep{yurko2019going}, from which part of the notation used in this section is inspired. Following Definition \ref{def:epv_general}, we can observe that EPV is an integration over all the future paths a possession can take at time $t$, given the available spatiotemporal information at that time, $T_t$.
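In its simplest empirical reading, Definition \ref{def:epv_general} can be approximated by averaging the outcomes of historical possessions that passed through states comparable to $T_t$. The following toy sketch uses made-up outcomes and omits the state-matching step entirely; it is meant only to make the expectation concrete:

```python
import numpy as np

# Made-up outcomes G(r) in {-1, 1} for historical possessions whose state at some
# point resembled the current snapshot T_t (the matching step is omitted here).
outcomes = np.array([1, -1, -1, 1, -1, -1, -1, 1])

# Empirical estimate of E[G | T_t]: the average outcome over those possessions.
epv_estimate = float(outcomes.mean())
```

The models developed in the following sections replace this naive averaging with function approximation over a continuous state space.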
We employ player tracking data consisting of the locations of the 22 players and the ball, usually provided at a frequency ranging from 10Hz to 25Hz, and captured using computer-vision algorithms on top of videos of professional soccer matches. We assume that tracking data is accompanied and synchronized with event data, consisting of annotated events observed during the match, each indicating its location, time, and possibly other tags. Since $\Psi$, the set of possible tracking data snapshots, is infinite, this modeling approach defines a continuous state space.\\ \subsection{A decomposed model}\label{sec:decomposed} In order to obtain the desired structured modeling of EPV described in Section \ref{sec:epv_markov}, we will further decompose Definition \ref{def:epv_general} following the law of total expectation and considering the set of possible actions that can be taken at any given time. We assume that the space of possible actions $A=\{\rho, \delta,\varsigma\}$ is a discrete set where $\rho$, $\delta$, and $\varsigma$ represent pass, ball drive, and shot attempt actions, respectively. We can rewrite Definition \ref{def:epv_general} as in Equation \ref{eq:action_epv}. \begin{equation}\label{eq:action_epv} \begin{split} EPV_t =\sum_{a \in A}\E[G| A=a, T_t]\overbrace{\PP(A=a | T_t)}^{\parbox{7em}{\footnotesize\centering Action selection probability}} \end{split} \end{equation} Additionally, to consider that passes can go anywhere on the field, we define $D_t$ to be the selected pass destination location at time $t$ and $\PP(D_t| T_t)$ to be a transition probability model for passes. Let $L$ be the set of all possible locations on a soccer field; then $D_t \in L$. On the other hand, we assume that ball drives ($\delta$) and shots ($\varsigma$) have a single possible destination location (the expected player location in one second and the goal line center, respectively).
Following this, we can rewrite Definition \ref{def:epv_general} as presented in Equation \ref{eq:structured_epv}. \begin{equation}\label{eq:structured_epv} \begin{split} EPV_t =(\sum_{l \in L} \overbrace{\E[G| A=\rho, D_t = l, T_t]}^{\parbox{8em}{\footnotesize\centering Joint expected value surface of passes}} \overbrace{\PP(D_t=l |A=\rho, T_t)}^{\parbox{8em}{\footnotesize\centering Pass selection probability}}) \PP(A=\rho | T_t) \\ + \overbrace{\E[G| A=\delta, T_t]}^{\parbox{6em}{\footnotesize\centering Expected value of ball drives}} \PP(A=\delta | T_t) \\ + \overbrace{\E[G| A=\varsigma, T_t]}^{\parbox{6em}{\footnotesize\centering Expected value from shots}} \PP(A=\varsigma | T_t) \end{split} \end{equation} The expected value of passing actions, $\E[G| A=\rho, D_t, T_t]$, can be further extended to include the two scenarios of producing a successful or a missed pass (turnover). We model the outcome of a pass as $O_{\rho}$, which takes a value of $1$ when a pass is successful or $0$ in case of a turnover. We can then rewrite this expression as in Equation \ref{eq:pass_expectation}. \begin{equation}\label{eq:pass_expectation} \begin{split} \E[G| A=\rho, D_t, T_t] = \overbrace{\E[G| A=\rho, O_{\rho}=1, D_t, T_t]}^{\parbox{11em}{\footnotesize\centering Expected value of successful/missed passes}} \overbrace{\PP(O_{\rho}=1 | A=\rho, D_t, T_t)}^{\parbox{9em}{\footnotesize\centering Probability of successful/missed passes}}\\ + \E[G| A=\rho, O_{\rho}=0, D_t, T_t]\PP(O_{\rho}=0 | A=\rho, D_t, T_t) \end{split} \end{equation} Equation \ref{eq:ball_drive_expectation} presents an analogous definition for ball drives, having $O_{\delta}$ be a random variable taking value $1$ for a successful ball drive and $0$ for a loss of possession following that ball drive, which we will refer to as a missed ball drive.
\begin{equation}\label{eq:ball_drive_expectation} \begin{split} \E[G| A=\delta, T_t] = \overbrace{\E[G| A=\delta, O_{\delta}=1, T_t]}^{\parbox{8em}{\footnotesize\centering Expected value of successful/missed ball drives}} \overbrace{\PP(O_{\delta}=1 | A=\delta, T_t)}^{\parbox{8em}{\footnotesize\centering Probability of successful/missed ball drives}}\\ + \E[G| A=\delta, O_{\delta}=0, T_t]\PP(O_{\delta}=0 | A=\delta, T_t) \end{split} \end{equation} Finally, the expression $\E[G|A=\varsigma, T_t]$ is equivalent to an expected goals model, a popular metric in soccer analytics \citep{lucey2014quality,eggels2016expected} that models the expectation of scoring a goal from a shot attempt. In Figure \ref{fig:all_layers_epv}, we show how the outputs of the different components presented in this section are combined to produce a single EPV estimation, while also providing numerical and visual information on how each part of the model impacts the final value. \begin{figure*} \includegraphics[width=1\textwidth]{images/all_layers_epv} \caption{Diagram representing the estimation of the expected possession value (EPV) for a given game situation through the composition of independently trained models. The final EPV estimation of $0.0239$ is produced by combining the expected value of three possible actions the player in possession of the ball can take (pass, ball drive, and shot) weighted by the likelihood of those actions being selected.
Both pass expectation and probability are modeled to consider every possible location on the field as a destination; thus, the diagram presents the predicted surfaces for both successful and unsuccessful potential passes, as well as the surface of destination location likelihood.} \label{fig:all_layers_epv} \end{figure*} \section{Spatiotemporal feature extraction}\label{sec:feature_extraction} Each component of the decomposed EPV formulation presents a challenging task and requires a sufficiently comprehensive representation of the game state to produce accurate estimates. We build these state representations from a wide set of low-level and fine-grained features extracted from tracking data (see Section \ref{sec:epv_markov} for the definition of tracking data). While low-level features are straightforwardly obtained from this data (e.g., players' locations and speeds), fine-grained features are built through either statistical models or handcrafted algorithms developed in collaboration with a group of soccer match analysts from FC Barcelona. Figure \ref{fig:features_diagram} presents a visual representation of a game situation where we can observe the available players' and ball locations and a subset of features derived from that tracking data snapshot. Conceptually, we split the features into two main groups: spatial features and contextual features, described in Sections \ref{sec:spatial_features} and \ref{sec:contextual_features}, respectively. The full set of features and their usage within the different models presented in this work are detailed in Appendix \ref{app:feature_set}. \begin{figure*}[h!] \includegraphics[width=1\textwidth]{images/features_diagram} \caption{Visual representation of a tracking data snapshot of spatial and contextual features in a soccer match situation. Yellow and blue shaded dots represent players of the attacking and defending team, respectively, while the green dot represents the ball location.
The red and blue surface represents the pitch control of the attacking team along the field. The grey rectangle covering the yellow dots represents the opponent's formation block. The green vertical lines represent the defending team's vertical dynamic pressure lines, while the polygons with solid yellow lines represent the players clustered in each pressure line. The black dotted rectangles represent the relative locations between dynamic pressure lines. Dotted yellow lines and associated text describe the main extracted features.} \label{fig:features_diagram} \end{figure*} \subsection{Spatial Features}\label{sec:spatial_features} We consider spatial features to be those directly derived from the spatial locations of the players and the ball in a given time range. These can be obtained for any game situation regardless of the context and comprise mainly physical and spatial information. Table \ref{table:spatial_features} details the set of concepts from which the specific list of features presented in Appendix \ref{app:feature_set} is derived. The main spatial features obtained from tracking data are related to the locations of players from both teams, the velocity vector of each player, the ball's location, and the location of the opponent's goal at any time instance. From the players' spatial locations, we produce a series of features related to the control of space and players' density along the field. The statistical models used for pitch control and pitch influence evaluation are detailed in Appendix \ref{app:pitch_control}. \begin{table}[h!] \begin{tabular}{lp{8cm}} \hline\noalign{\smallskip} Concept type & Description \\ \noalign{\smallskip}\hline\noalign{\smallskip} (x,y) location & Location of a player, the ball, or an attempted action, normalized in the [0,1] range according to the pitch's dimensions.\\ Pitch control & Probability of controlling the ball in a specific location. \\ Pitch influence & Degree of influence of a set of players in a specific location.
\\ Distance between locations & Distance in meters between two locations.\\ Angle between locations & Angle in degrees between two locations.\\ Player's velocity & Player's velocity vector in the last second.\\ \noalign{\smallskip}\hline \end{tabular} \caption{Description of a set of spatial concepts derived from tracking data.} \label{table:spatial_features} \end{table} \subsection{Contextual Features}\label{sec:contextual_features} To provide a more comprehensive state representation, we include a series of features derived from soccer-specific knowledge, which provide contextual information to the model. Table \ref{table:contextual_features} presents the main concepts from which multiple contextual features are derived. \begin{table}[h!] \begin{tabular}{lp{8cm}} \hline\noalign{\smallskip} Concept type & Description \\ \noalign{\smallskip}\hline\noalign{\smallskip} Possession & Possession start and end times are identified to segment each match into episodes or sequences of actions. \\ Dynamic pressure lines & Relative positioning of players according to the team's current formation or the opponent's. These formations change dynamically and are calculated in time ranges of a few seconds.\\ Outplayed players & Number of players that are surpassed after an action is attempted. \\ Interceptability & Features related to the likelihood of intercepting the ball. \\ Baseline event-based models & Models built on top of event data, which are used as a baseline to enrich the learning of tracking data-based models.\\ \noalign{\smallskip}\hline \end{tabular} \caption{Description of a set of contextual concepts derived from tracking data.} \label{table:contextual_features} \end{table} The concept of dynamic pressure lines refers to players being aligned with their teammates within different alignment groups.
For example, a typical conceptualization of pressure lines in soccer would be the groups formed by the defenders, the midfielders, and the attackers, which tend to stay aligned to keep a consistent formation. The details on the calculation of dynamic pressure lines are presented in Appendix \ref{app:dynamic_pressure_lines}. By identifying the pressure lines, we can obtain every player's opponent-relative location, which provides high-level information about players' expected behavior. For example, when a player controls the ball behind the opponent's first pressure line, we would expect a different pressing behavior and turnover risk than when the ball is close to the third pressure line and the goal. Also, the football experts who accompanied this study considered that passes breaking pressure lines significantly increase the goal expectation at the end of the possession. \\ From the concept of outplayed players, we can derive features such as the number of opponent players overcome after a given pass is attempted or the number of teammates in front of or behind the ball, among many similar derivatives. In combination with the opponent's formation block location, we can obtain information about whether a pass is headed towards the inside or outside of the formation block and how many players are to be surpassed. Intuitively, a pass that outplays several players and is headed towards the inside of the opponent's block is more likely to produce an increase in EPV than a backward pass directed outside the opponent's block that adds two more opponent players in front of the ball. On the other hand, the interceptability concept is expected to play an essential role in capturing opponents' spatial influence near a shooting option, allowing us to produce a more detailed expected goals model.
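Returning to the outplayed players concept, such features reduce to simple geometric counts over player locations. The following sketch (a hypothetical helper, not the exact implementation developed with the match analysts) counts opponents overtaken by a pass, assuming the attack moves toward increasing $x$:

```python
import numpy as np

def outplayed_players(origin_x, dest_x, opponent_xs):
    """Count opponents overtaken by a pass, assuming the attack moves toward
    increasing x. An opponent is outplayed if the ball starts behind him and
    ends ahead of him."""
    opponent_xs = np.asarray(opponent_xs, dtype=float)
    return int(np.sum((opponent_xs > origin_x) & (opponent_xs < dest_x)))

# A forward pass from x=30 to x=70 overtakes the opponents standing at x=45 and x=60.
n_outplayed = outplayed_players(30.0, 70.0, [45.0, 60.0, 80.0, 20.0])
```

A backward pass yields a count of zero, and the same counting logic extends to teammates in front of or behind the ball.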
Mainly, we derive features related to the number of players closely pressing the shooter and the number of players in the triangle formed between the shooter and the posts.\\ The described spatial and contextual features represent the main building blocks for deriving the set of features used for each implemented model. In Section \ref{sec:inference}, we describe in great detail the characteristics of these models. \section{Separated component inference}\label{sec:inference} In this section, we describe in detail the approaches followed for estimating each of the components described in Equations \ref{eq:structured_epv}, \ref{eq:pass_expectation}, and \ref{eq:ball_drive_expectation}. In general, we use function approximation methods to learn models for these components from spatiotemporal data. Specifically, we want to approximate some function $f^{*}$ that maps a set of features $x$ to an outcome $y$, such that $y=f^{*}(x)$. To do this, we fit a mapping $y=f(x;\theta)$, learning the values of a set of parameters $\theta$ that result in an approximation of $f^{*}$.\\ Customized convolutional neural network architectures are used for estimating probability surfaces for the components involving passes, such as pass success probability, the expected possession value of passes, and the field-wide pass selection surface. Standard shallow neural networks are used to estimate ball drive probability, the expected possession value from ball drives and shots, and the action selection probability components. This section describes the selection of features $x$, observed values $y$, and model parameters $\theta$ for each component. \subsection{Estimating pass impact at every location on the field} \label{sec:pass_models_inference} One of the most significant challenges when modeling passes in soccer is that, in practice, passes can go anywhere on the field.
Previous attempts at quantifying pass success probability and the expected value of passes in both soccer and basketball assume that the passing options available to a given player are limited to the teammates on the field, centered at their locations at the time of the pass \citep{power2017not,cervone2016multiresolution,hubavcek2018deep}. However, in order to accurately estimate the impact of passes in soccer (a key element for estimating the future pathways of a possession), we need to be able to make sense of the spatial and contextual information that influences the selection, accuracy, and potential risk and reward of passing to any other location on the field. We propose using fully convolutional neural network architectures designed to exploit spatiotemporal information at different scales, and we adapt this approach to the three pass-related models we require to learn: pass success probability, pass selection probability, and pass expected value. While these three problems necessitate different design considerations, we structure the proposed architectures in three main conceptual blocks: a \emph{feature extraction block}, a \emph{surface prediction block}, and a \emph{loss computation block}. The proposed models for these three problems also share the following common principles in their design: a layered structure of input data, the use of fully convolutional neural networks for extracting local and global features, and the learning of a surface mapping from single-pixel correspondence. We first detail the common aspects of these architectures and then present the specific approach for each of the mentioned problems. \paragraph{Layers of low-level and field-wide input data} To successfully estimate a full prediction surface, we need to make sense of the information at every single pixel.
Let the set of locations $L$, presented in Section \ref{sec:epv_markov}, be a discrete matrix of locations on a soccer field of width $w$ and height $h$. We can then construct a layered representation of the game state $Y(T_t)$, consisting of a set of slices of location-wise data of size $w\times h$. By doing this, we define a series of layers derived from the data snapshot $T_t$ that represent both spatial and contextual low-level information for each problem. This layered structure provides a flexible approach to include any kind of information available or extractable from the spatiotemporal data that is considered relevant for the specific problem being addressed. \paragraph{Feature extractor block} The feature extractor block is fundamentally composed of fully convolutional neural networks for all three cases, based on the SoccerMap architecture \citep{fernandez2020soccermap}. Using fully convolutional neural networks, we leverage the combination of layers at different resolutions, allowing us to capture relevant information at both local and global levels and to produce location-wise predictions that are spatially aware. Following this approach, we can directly produce a full prediction surface instead of a single prediction at the event's destination. The parameters to be learned will vary according to the definition of the input surfaces and the target outcome. However, the neural network architecture itself remains the same across all the modeled problems. This allows us to quickly adapt the architecture to specific problems while keeping the learning principles intact. A detailed description of the SoccerMap architecture is presented in Appendix \ref{app:soccernet}. \paragraph{Learning from single-pixel correspondence} Usually, approaches that use fully convolutional neural networks have ground-truth data for the full output surface.
In more challenging cases, only a single classification label is available, and a weakly supervised learning approach is carried out to learn this mapping \citep{pathak2015constrained}. In soccer events, however, ground-truth information is available for only a single pixel: for example, the destination location of a successful pass. This makes our problem highly challenging, given that there is only a single-location correspondence between the input data and the ground truth, while we aim to estimate a full probability surface. Despite this extreme setup, we show that we can successfully learn full probability surfaces for all the pass-related models. We do so by selecting, during training, a single pixel from the predicted output matrix according to the known destination location of the observed pass, and back-propagating the loss at a single-pixel level.\\ In the following sections, we describe the design characteristics of the feature extraction, surface prediction, and loss computation blocks for the three pass-related problems: pass success probability, pass selection probability, and expected value from passes. By joining these models' outputs, we will obtain a single action-value estimation (EPV) for passing actions, expressed by $\E[G | A=\rho, T_t]$. The detailed list of features used for each model is described in Appendix \ref{app:feature_set}. \subsubsection{Pass success probability}\label{sec:pass_probability} From any given game situation where a player controls the ball, we want to estimate the success probability of a pass attempted towards any of the potential destination locations, expressed by $\PP(O_{\rho}=1 | A=\rho, D_t, T_t)$. Figure \ref{fig:pass_probability_diagram} presents the designed architecture for this problem.
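The single-pixel learning step described above can be sketched as follows. The sketch is framework-agnostic and only shows the forward computation; in practice, an automatic-differentiation library back-propagates this loss through the network parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def single_pixel_log_loss(logit_surface, dest_xy, outcome):
    """Score the prediction at the observed pass destination only.

    logit_surface: (H, W) raw network outputs; dest_xy: (x, y) destination cell;
    outcome: 1 for a completed pass, 0 for a turnover.
    """
    x, y = dest_xy
    p = sigmoid(logit_surface[y, x])  # probability at the single ground-truth pixel
    return -(outcome * np.log(p) + (1 - outcome) * np.log(1.0 - p))

logits = np.zeros((68, 104))          # dummy surface: probability 0.5 everywhere
loss = single_pixel_log_loss(logits, dest_xy=(50, 30), outcome=1)
```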
The input data at time $t$ is composed of 13 layers of spatiotemporal information obtained from the tracking data snapshot $T_t$, consisting mainly of information regarding the locations and velocities of both teams' players and the distances and angles between players and the goal. The feature extraction block is composed strictly of the SoccerMap architecture, where representative features are learned. This block's output consists of a $104\times68\times1$ matrix of pass probability predictions, one for each possible destination location in the coarsened field representation. In the surface prediction block, a sigmoid activation function $\sigma$ is applied to each predicted value to produce a matrix of pass probability estimations in the $[0,1]$ continuous range, where $\sigma(x) = \frac{e^x}{e^x+1}$. Finally, at the loss computation block, we select the probability output at the known destination location of the observed pass and compute the negative log loss, defined in Equation \ref{eq:negative_logloss}, between the predicted ($\hat{y}$) and observed pass outcome ($y$).\\ \begin{equation}\label{eq:negative_logloss} \mathcal{L}(\hat{y},y) = - (y \cdot \log(\hat{y}) + (1-y) \cdot \log(1-\hat{y})) \end{equation} Note that we are learning all the network parameters $\theta$ needed to produce a full surface prediction by back-propagating the loss between the predicted value at a single location and the observed outcome of the pass. We show in Section \ref{sec:results} that this learning setup is sufficient to obtain remarkable results. \begin{figure*} \includegraphics[width=1\textwidth]{images/pass_prob_2} \caption{Representation of the neural network architecture for the pass probability surface estimation, for a coarsened representation of size 104$\times$68. Thirteen layers of spatial features are fed to a SoccerMap feature extraction block, which outputs a 104$\times$68$\times$1 prediction surface.
A sigmoid activation function is applied to each output, producing a pass probability surface. The output at the destination location of an observed pass is extracted, and the log loss between this output and the observed outcome of the pass is back-propagated to learn the network parameters.} \label{fig:pass_probability_diagram} \end{figure*} \subsubsection{Expected possession value from passes}\label{sec:pass_epv} Once we have a pass success probability model, we are halfway to obtaining an estimation of $\E[G|A=\rho, D_t, T_t]$, as expressed in Equation \ref{eq:pass_expectation}. The remaining two components, $\E[G|A=\rho, O_{\rho}=1, D_t, T_t]$ and $\E[G|A=\rho, O_{\rho}=0, D_t, T_t]$, correspond to the expected value of successful and unsuccessful passes, respectively. We learn a model for each expression separately; however, we use an equivalent architecture for both cases. The main difference is that one model must be learned exclusively with successful passes and the other with missed passes in order to obtain full surface predictions for both cases.\\ The input data matrix consists of 16 different layers with location, velocity, distance, and angular information equivalent to that selected for the pass success probability model. Additionally, we append a series of layers corresponding to contextual features related to the outplayed players and dynamic pressure lines concepts. Finally, we add a layer with the pass probability surface, considering that it can provide valuable information for estimating the expected value of passes. This surface is calculated by using a pre-trained version of a model for the architecture presented in Section \ref{sec:pass_probability}.\\ The input data is fed to a SoccerMap feature extraction block to obtain a single prediction surface. In this case, we must observe that the expected value of $G$ should reside within the $[-1,1]$ range, as described in Section \ref{sec:epv_markov}.
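To keep predictions in the $[-1,1]$ range, a sigmoid output can be rescaled with a linear transformation; a minimal sketch of one such mapping:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def value_activation(z):
    """Map raw network outputs to the [-1, 1] range: a sigmoid followed by a
    linear rescaling (one simple choice of such a transformation)."""
    return 2.0 * sigmoid(z) - 1.0

# Large negative, zero, and large positive logits map near -1, to 0, and near 1.
v = value_activation(np.array([-10.0, 0.0, 10.0]))
```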
To do so, in the surface prediction block, we apply a sigmoid activation function to the SoccerMap predicted surface, obtaining an output within $[0,1]$. We then apply a linear transformation so that the final prediction surface consists of values in the $[-1,1]$ range. Notably, our modeling approach does not assume that a successful pass must necessarily produce a positive reward or that missed passes must produce a negative reward.\\ The loss computation block computes the mean squared error, defined in Equation \ref{eq:mse}, between the predicted values and the reward assigned to each pass. The model design is independent of the reward choice for passes. In this work, we choose a long-term reward associated with the observed outcome of the possession, detailed in Section \ref{sec:estimands}. \begin{equation}\label{eq:mse} \text{MSE}(\hat{y},y) = \frac{1}{N} \sum_i^N(y_i-\hat{y}_i)^2 \end{equation} \subsubsection{Pass selection probability} Until now, we have models for estimating both the probability and expected value surfaces for successful and missed passes. In order to produce a single-valued estimation of the expected value of the possession given that a pass is selected, we model the pass selection probability $\PP(D_t=l | A=\rho, T_t)$ as defined in Equation \ref{eq:structured_epv}. The values of a pass selection probability surface must necessarily add up to 1 and will serve as a weighting matrix for obtaining the single estimate.\\ Both the input and feature extraction blocks of this architecture are equivalent to those designed for the pass success probability model (see Section \ref{sec:pass_probability}). However, for the surface prediction block we use the softmax activation function presented in Equation \ref{eq:softmax} instead of a sigmoid activation function. We then extract the predicted value at the destination location of the observed pass and compute the log loss between that predicted value and 1, since only observed passes are used.
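A sketch of a numerically stable surface softmax and its use as a weighting matrix over a value surface (both inputs below are synthetic placeholders, not model outputs):

```python
import numpy as np

def softmax_surface(logits):
    """Normalize an (H, W) logit surface so that all cells sum to 1 (stable softmax)."""
    z = logits - logits.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
selection = softmax_surface(rng.normal(size=(68, 104)))  # pass selection probabilities
pass_value = rng.uniform(-1, 1, size=(68, 104))          # placeholder value surface

# Single estimate of the expected value given that a pass is selected:
# the selection surface acts as a weighting matrix over the value surface.
ev_pass = float((selection * pass_value).sum())
```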
With the different models presented in Section \ref{sec:pass_models_inference}, we can now provide a single estimate of the expected value given that a pass action is selected, $\E[G|A=\rho, T_t]$. \begin{equation}\label{eq:softmax} \textup{softmax}(v)_i = \frac{e^{v_i}}{\sum_{j=1}^{K}e^{v_j}}, \quad i=1,\ldots, K \end{equation} \subsection{Estimating ball drive probability}\label{sec:drive_prob} We will now focus on the components needed for estimating the expected value of ball drive actions. Within this work's scope, a ball drive refers to a one-second action where a player keeps the ball in their possession. Moreover, when a player attempts a ball drive, we assume the player will maintain their velocity, so the event's destination location is the player's expected location one second later. While keeping the ball, the player might sustain the possession or lose the ball (because of bad control, an opponent's interception, or driving the ball out of the field, among others). The probability of keeping control of the ball under these conditions is modeled by the expression $\PP(O_{\delta}=1 | A=\delta, T_t)$.\\ We use a standard shallow neural network architecture to learn a model for this probability, consisting of two fully connected layers, each followed by a layer of ReLU activation functions, and a single-neuron output followed by a sigmoid activation function. We provide a state representation for observed ball drive actions composed of a set of spatial and contextual features, detailed in Appendix \ref{app:feature_set}. Among the spatial features, the level of pressure the player in possession of the ball receives from opponent players is considered a critical piece of information for estimating whether the possession is maintained or lost. We model pressure through two additional features: the opponent team's density at the player's location and the overall team pitch control at that same location.
Another factor that is considered to influence the ball drive probability is the player's context-relative location at the moment of the action. We include two features to provide this contextual information: the closest opponent's vertical pressure line and the closest possession team's vertical pressure line to the player. These two variables are expected to serve as a proxy for the opponent's pressing behavior and the player's relative risk of losing the ball. By adding features related to the spatial pressure, we can get a better insight into how pressed that player is within that context and then have better information to decide the probability of keeping the ball. We train this model by optimizing the loss between the estimated probability and observed ball drive actions that are labeled as successful or missed, depending on whether the ball carrier's team retains possession of the ball after the ball drive is attempted. \subsection{Estimating ball drive expectation}\label{sec:balldrive_expectation} Finally, once we have an estimate of the ball drive probability, we still need to obtain an estimate of the expected value of ball drives, in order to model the expression $\E[G|A=\delta,T_t]$, presented in Equation \ref{eq:ball_drive_expectation}. While using a different architecture for feature extraction, we will model both $\E[G|A=\delta,O_\delta=1,T_t]$ and $\E[G|A=\delta,O_\delta=0,T_t]$, following an approach analogous to that used in Section \ref{sec:pass_epv}.\\ Conceptually, by keeping the ball, players might choose to continue a progressive run or dribble to gain a better spatial advantage. However, they might also wait until a teammate moves and opens up a passing line of lower risk or higher quality. By learning a model for the expression $\E[G| A=\delta, T_t]$ we aim to capture the impact on the expected possession value of these possible situations, all encapsulated within the ball drive event.
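A minimal sketch of the shallow network described above for the ball drive success probability — two fully-connected layers with ReLU activations and a single sigmoid output — using hypothetical hand-set weights rather than trained parameters:

```python
import math

def relu(vec):
    return [max(0.0, x) for x in vec]

def dense(vec, weights, bias):
    """Fully-connected layer; `weights` is [out][in], `bias` is [out]."""
    return [sum(w * x for w, x in zip(w_row, vec)) + b
            for w_row, b in zip(weights, bias)]

def drive_keep_probability(features, params):
    """Sketch of the shallow architecture described above: two
    fully-connected layers, each followed by ReLU, then a single-neuron
    output passed through a sigmoid to give P(keep possession | drive).
    `params` holds hypothetical (untrained) weights for illustration."""
    h1 = relu(dense(features, params["W1"], params["b1"]))
    h2 = relu(dense(h1, params["W2"], params["b2"]))
    logit = dense(h2, params["W3"], params["b3"])[0]
    return 1.0 / (1.0 + math.exp(-logit))
```

With all-zero weights the output is exactly 0.5, and any trained parameters keep the output strictly inside $(0,1)$, as required for a probability.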
We use the same input data set and feature extractor architecture used in Section \ref{sec:drive_prob}, with the addition of the ball drive probability estimation for each example. Similarly to the surface prediction block of the expected value of passes (see Section \ref{sec:pass_epv}), we apply a sigmoid activation function to obtain a prediction in the $[0,1]$ range, and then apply a linear transformation to produce a prediction value in the $[-1,1]$ range. The loss computation block computes the mean squared loss between the observed reward value assigned to the action and the model output. \subsection{Expected goals model}\label{sec:expected_goals_model} Once we have a model for the expected values of passes and ball drives, we only need to model the expected value of shots to obtain a full state-value estimation for the action set $A$. We want to model the expectation of scoring a goal at time $t$ given that a shot is attempted, defined as $\E[G|A=\varsigma, T_t]$. This expression is typically referred to as \emph{expected goals} (xG) and is arguably one of the most popular metrics in soccer analytics \citep{eggels2016expected}. While existing approaches make use exclusively of features derived from the observed shot location, here we include both spatial and contextual information related to all 22 players' and the ball's locations to account for the nuances of shooting situations.\\ Intuitively, we can identify several spatial factors that influence the likelihood of scoring from shots, such as the level of defensive pressure imposed on the ball carrier, the interceptability of the shot by the nearby opponents, or the goalkeeper's location. Specifically, we add the number of opponents that are closer than 3 meters to the ball-carrier to quantify the level of immediate pressure on the player.
Additionally, we account for the interceptability of the shot (blockage count) by calculating the number of opponent players in the triangle formed by the ball-carrier location and the two posts. We include three additional features derived from the location of the goalkeeper. The goalkeeper's location can be considered an important factor influencing the scoring probability, particularly since he has the considerable advantage of being the only player that can stop the ball with his hands. In addition to this spatial information, we add a contextual feature consisting of a boolean flag indicating whether the shot is taken with the foot or the head, the latter being considered more difficult. Additionally, we add a prior estimation of expected goals as an input feature to this spatial and contextual information, produced through the baseline expected goals model described in Appendix \ref{app:xg}. The full set of features is detailed in Appendix \ref{app:feature_set}. \\ Having this feature set, we use a standard neural network architecture with the same characteristics as the one used for estimating the ball drive probability, explained in Section \ref{sec:drive_prob}, and we optimize the mean squared error between the predicted outcome and the observed reward for shot actions. The long-term reward chosen for this work is detailed in Section \ref{sec:estimands}. \subsection{Action selection probability} Finally, to obtain a single-valued estimation of EPV we weight the expected value of each possible action with the respective probability of taking that action in a given state, as expressed in Equation \ref{eq:structured_epv}. Specifically, we estimate the action selection probability $\PP(A | T_t)$, where $A$ is the discrete set of actions described in Section \ref{sec:epv_markov}. We construct a feature set composed of both spatial and contextual features.
Spatial features such as the ball location and the distance and angle to the goal provide information about the ball carrier's relative location in a given time instance. Additionally, we add spatial information related to the possession team's pitch control and the degree of spatial influence of the opponent team near the ball. On the other hand, the location of both teams' dynamic lines relative to the ball location provides the contextual information for the state representation. We also include the baseline estimation of expected goals at that given time, which is expected to influence the action selection decision, especially regarding shot selection. The full set of features is described in Appendix \ref{app:feature_set}. We use a shallow neural network architecture, analogous to those described in Section \ref{sec:drive_prob} and Section \ref{sec:balldrive_expectation}. The final layer of the feature extractor part of the network has size 3, to which a softmax activation function is applied to obtain the probabilities of each action. We model the observed outcome as a one-hot encoded vector of size $3$, indicating the action type observed in the data, and optimize the categorical cross-entropy between this vector and the predicted probabilities, which is equivalent to the log loss. \section{Experimental setup} \subsection{Datasets}\label{sec:datasets} We build different datasets for each of the presented models based on optical tracking data and event-data from 633 English Premier League matches from the 2013/2014 and 2014/2015 seasons, provided by \emph{STATS LLC}. This tracking data source consists of the locations of every player and the ball at a 10\emph{Hz} sampling rate, obtained through semi-automated player and ball tracking performed on match videos.
On the other hand, event-data consists of human-labeled on-ball actions observed during the match, including the time and location of both the origin and destination of the action, the player who performs the action, and the outcome of the event. Following our model design, we will focus exclusively on the pass, ball drive, and shot events. Table \ref{table:events} presents the total count for each of these events according to the dataset split presented below in Section \ref{sec:model_setting}. The definition of success varies from one event to another: a pass is successful if a player of the same team receives it, a ball drive is successful if the team does not lose the possession after the action occurs, and a shot is labeled as successful if a goal is scored from it. Given this data, we can extract the tracking data snapshot, defined in Section \ref{sec:epv_markov}, for every instance where any of these events are observed. From there, we can build the input feature sets defined for each of the presented models. For the detailed list of features used, see Appendix \ref{app:feature_set}. \begin{table}[h!] \resizebox{\textwidth}{!}{\begin{tabular}{llllll} \hline\noalign{\smallskip} Data Type & \# Total & \# Training & \# Validation & \# Test & \% Success \\ \noalign{\smallskip}\hline\noalign{\smallskip} Match & 633 & 379 & 127 & 127 & - \\ Pass & 480,670 & 288,619 & 96,500 & 95,551 & 79.64 \\ Ball drive & 413,123 & 284,759 & 82,271 & 82,093 & 90.60 \\ Shot & 13,735 & 8,240 & 2,800 & 2,695 & 8.54 \\ \noalign{\smallskip}\hline \end{tabular}} \caption{Total count of events included within the tracking data of 633 English Premier League matches from the 2013/2014 and 2014/2015 seasons.} \label{table:events} \end{table} \subsection{Defining the estimands}\label{sec:estimands} Each of the components of the EPV structured model has a different estimand or outcome.
For both the pass success and ball drive success probability models, we define a binomially distributed outcome, according to the definition of success provided in Section \ref{sec:datasets}. These outcomes correspond to the short-term observed success of the actions. For the pass selection probability, we define the outcome as a binomially distributed random variable as well, where a value of 1 is given for every observed pass in its corresponding destination location. We define the action selection model's estimand as a multinomially distributed random variable that can take one of three possible values, according to whether the selected action corresponds to a pass, a ball drive, or a shot.\\ For the EPV estimations of passes, ball drives, and shot actions, respectively, we define the estimand as a long-term reward, corresponding to the outcome of the possession where that event occurs. To do this, we first need to define when a possession ends. There is a low frequency of goals in matches (2.8 goals on average in our dataset) compared to the number of observed actions (1,433 on average). Given this, the definition of the time extent of a possession is expected to influence the balance between individual actions' short-term value and the long-term expected outcome after that action is taken. The standard approach for setting a possession's ending time is the moment when the ball changes control from one team to another. However, here we define a possession's end as the time when the next goal occurs. By doing this, we allow the ball to either go out of the field or change control between teams an undefined number of times until the next goal is observed. Once a goal is observed, all the actions between the goal and the previous one are assigned an outcome of $1$ if the action is taken by the scoring team or $-1$ otherwise.
Following this, each action gets assigned as an outcome a long-term reward (i.e., the next goal observed).\\ However, this approach is expected to introduce noise, especially for actions that are largely apart in time from an observed goal. Let $\epsilon$ be a constant threshold, in seconds, on the time between an action and the next goal. We can choose a value for $\epsilon$ that represents a long-term reward-vanishing threshold, so that all the actions observed more than $\epsilon$ seconds before the observed goal receive a reward of $0$. For this work, we choose $\epsilon=15s$, which corresponds to the average duration of standard soccer possessions in the available matches. Note this is equivalent to assuming that the current state of a possession only has an impact on the following $\epsilon$ seconds. \subsection{Model setting}\label{sec:model_setting} We randomly sample the available matches and split them into training (379), validation (127), and test (127) sets. From each of these matches, we obtain the observed on-ball actions and the tracking data snapshots to construct the set of input features corresponding to each model, detailed in Appendix \ref{app:feature_set}. The events are randomly shuffled in the training dataset to avoid bias from the correlation between events that occur close in time. We use the validation set for model selection and leave the test set as a hold-out dataset for testing purposes. We train the models using the adaptive moment estimation algorithm \citep{kingma2014adam}, and set the $\beta_1$ and $\beta_2$ parameters to $0.9$ and $0.999$, respectively. For all the models we perform a grid search on the learning rate ($\{1\mathrm{e}{-3}, 1\mathrm{e}{-4}, 1\mathrm{e}{-5}, 1\mathrm{e}{-6}\}$) and batch size ($\{16,32\}$) parameters. We use early stopping with a delta of $1\mathrm{e}{-3}$ for the pass success probability, ball drive success probability, and action selection probability models, and $1\mathrm{e}{-5}$ for the rest of the models.
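As a concrete sketch of the long-term reward definition from Section \ref{sec:estimands} (illustrative, with hypothetical inputs): each action before the next goal receives $+1$ or $-1$ depending on whether the acting team scores, and $0$ if it occurred more than $\epsilon=15$ seconds before the goal.

```python
def possession_rewards(action_times, action_teams, goal_time, scoring_team,
                       epsilon=15.0):
    """Assign the long-term reward described above: every action before
    the next goal gets +1 if taken by the scoring team and -1 otherwise,
    except that actions more than `epsilon` seconds before the goal
    receive 0 (the reward-vanishing threshold, 15 s in the paper).
    Times are in seconds; team identifiers are arbitrary labels."""
    rewards = []
    for t, team in zip(action_times, action_teams):
        if goal_time - t > epsilon:
            rewards.append(0.0)  # too far from the goal: reward vanishes
        else:
            rewards.append(1.0 if team == scoring_team else -1.0)
    return rewards
```

For instance, with a goal at $t=60$ s scored by team A, an action by A at $t=10$ s gets 0 (45 s before the goal), an action by A at $t=50$ s gets $+1$, and an action by B at $t=55$ s gets $-1$.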
\subsection{Model calibration} We include an after-training calibration procedure within the processing pipeline for the pass success probability and pass selection probability models, which presented slight calibration imbalances on the validation set. We use the temperature scaling calibration method for both models, a useful approach for calibrating neural networks \citep{guo2017calibration}. Temperature scaling consists of dividing the vector of logits passed to a softmax function by a constant \emph{temperature} value $T_p$. This rescaling modifies the scale of the probability vector produced by the softmax function. However, it preserves each element's ranking, impacting only the distribution of probabilities and leaving the classification prediction unmodified. We fit these post-training calibration procedures exclusively on the validation set. \subsection{Evaluation Metrics}\label{sec:metrics} For the pass success probability, ball drive success probability, pass selection probability, and action selection models, we use the cross-entropy loss. Let $M$ be the number of classes, $N$ the number of examples, $y_{ij}$ the observed outcome, and $\hat{y}_{ij}$ the estimated outcome; we define the cross-entropy loss function as in Equation \ref{eq:cross_entropy}. For the first three models, where the outcome is binary, we set $M=2$. We can directly observe that for this setup, the cross-entropy is equivalent to the negative log-loss defined in Equation \ref{eq:negative_logloss}. For the action selection model, we set $M=3$. For the rest of the models, corresponding to EPV estimations, we can observe the outcome takes continuous values in the $[-1,1]$ range. For these cases, we use the mean squared error (MSE) as a loss function, defined in Equation \ref{eq:mse}, by first normalizing both the estimated and observed outcomes into the $[0,1]$ range.
\begin{equation}\label{eq:cross_entropy} \text{CE}(\hat{y},y) = - \frac{1}{N} \sum_{j=1}^{M}\sum_{i=1}^{N} y_{ij} \cdot \log(\hat{y}_{ij}) \end{equation} We are interested in obtaining calibrated predictions for all of the models, as well as for the joint EPV estimation. Having the models calibrated allows us to perform a fine-grained interpretation of the variations of EPV within subsets of actions, as shown in Section \ref{sec:applications}. We validate the models' calibration using a variation of the expected calibration error (ECE) presented in \cite{guo2017calibration}. For obtaining this metric, we distribute the predicted outcomes into $K$ bins and compute the difference between the average prediction in each bin and the average expected outcome for the examples in each bin. Equation \ref{eq:ece} presents the ECE metric, where $K$ is the number of bins, and $B_k$ corresponds to the set of examples in the $k$-th bin. Essentially, we are calculating the average difference between predicted and expected outcomes, weighted by the number of examples in each bin. In these experiments, we use quantile binning to obtain $K$ equally-sized bins in ascending order. \begin{equation}\label{eq:ece} \text{ECE} = \sum_{k=1}^{K} \frac{\abs{B_k}}{N} \abs{ \bigg(\frac{1}{|B_k|} \sum_{i \in B_k}y_i\bigg) - \bigg(\frac{1}{|B_k|} \sum_{i \in B_k} \hat{y}_i \bigg) } \end{equation} \subsection{Results}\label{sec:results} Table \ref{tab:results} presents the results obtained in the test set for each of the proposed models. The loss value corresponds to either the cross-entropy or the mean squared loss, as detailed in Section \ref{sec:metrics}.
The table includes the optimal values for the batch size and learning rate parameters, the number of parameters of each model, and the number of examples per second that each model can predict.\\ We can observe that the loss value reported for the final joint model is equivalent to the losses obtained for the EPV estimations of each of the three action types, showing stability in the model composition. The shot EPV loss is higher than the ball drive EPV and pass EPV losses, arguably due to the considerably lower amount of observed events available in comparison with the rest, as described in Section \ref{sec:datasets}. While the number of examples per second is directly dependent on the models' complexity, we can observe that we can predict 899 examples per second in the worst case. This value is 89 times higher than the sampling rate of the available tracking data (10Hz), showing that this approach can be applied to the real-time estimation of EPV and its components.\\ Regarding the models' calibration, we can observe that the ECE metrics present consistently low values across all the models. Figure \ref{fig:calibration_all} presents a fine-grained representation of the probability calibration of each of the models. The x-axis represents the mean predicted value for a set of $K=10$ bins, while the y-axis represents the mean observed outcome among the examples within each corresponding bin. The circle size represents the percentage of examples in the bin relative to the total number of examples. In these plots, we can observe that the different models provide calibrated probability estimations along their full range of predictions, which is a critical factor for allowing a fine-grained inspection of the impact that specific actions have on the expected possession value estimation. Additionally, we can observe the different ranges of prediction values that each model produces.
For example, ball drive success probabilities are distributed more often above $0.5$, while pass success probabilities cover a wide range between $0$ and $1$, showing that it is harder for a player to lose the ball while keeping possession than by attempting a pass towards another location on the field. The action selection probability distribution is heavily influenced by each action type's frequency, showing a higher frequency and broader distribution on ball drive and pass actions compared with shots. The joint EPV model's calibration plot shows that the proposed approach of estimating the different components separately and then merging them back into a single EPV estimation provides calibrated estimations. We applied post-training calibration exclusively to the pass success probability and the pass selection probability models, obtaining a temperature value of $0.82$ and $0.5$, respectively.\\ With this, we obtain an analysis framework that provides accurate estimations of the long-term reward expectation of the possession, while also allowing for a fine-grained evaluation of the different components comprised in the model.\\ \begin{table} \resizebox{\textwidth}{!}{\begin{tabular}{lllllll} \hline\noalign{\smallskip} Model & Loss & ECE & \begin{tabular}[c]{@{}l@{}}Batch\\Size\end{tabular} & \begin{tabular}[c]{@{}l@{}}Learning\\Rate\end{tabular} & \# Params. & Ex.
(s) \\ \noalign{\smallskip}\hline\noalign{\smallskip} Pass probability & 0.190 & 0.0047 & 32 & $1\mathrm{e}{-4}$ & 401,259 & 942 \\ Ball drive probability & 0.2803 & 0.0051 & 32 & $1\mathrm{e}{-3}$ & 128 & 67,230 \\ Pass successful EPV & 0.0075 & 0.0011 & 16 & $1\mathrm{e}{-6}$ & 403,659 & 899 \\ Pass missed EPV & 0.0085 & 0.0015 & 16 & $1\mathrm{e}{-6}$ & 403,659 & 899 \\ Pass selection probability & 5.7134 & - & 32 & $1\mathrm{e}{-5}$ & 401,259 & 984 \\ Pass EPV * Pass selection & 0.0067 & 0.0011 & - & - & - & - \\ Ball drive successful EPV & 0.0128 & 0.0022 & 16 & $1\mathrm{e}{-4}$ & 153 & 57,441 \\ Ball drive missed EPV & 0.0072 & 0.0025 & 16 & $1\mathrm{e}{-4}$ & 153 & 57,441 \\ Shot EPV & 0.2421 & 0.0095 & 16 & $1\mathrm{e}{-3}$ & 231 & 72,455 \\ Action selection probability & 0.6454 & - & 32 & $1\mathrm{e}{-3}$ & 171 & 23,709 \\ EPV & 0.0078 & 0.0023 & - & - & - & -\\ \hline\noalign{\smallskip} \end{tabular}} \caption{The average loss and calibration value for each of the components of the EPV model, as well as for the joint EPV estimation, on the corresponding test datasets. Additionally, the table presents the optimal values of the hyper-parameters, the total number of parameters, and the number of predicted examples per second, for each of the models.} \label{tab:results} \end{table} \begin{figure*} \includegraphics[width=1\textwidth]{images/calibration_all_res} \caption{Probability calibration plots for the action selection (top-left), pass and ball drive probability (top-right), pass (successful and missed) EPV (mid-left), ball drive (successful and missed) EPV (mid-right), pass and ball drive EPV joint estimation (bottom-left), and the joint EPV estimation (bottom-right). Values on the x-axis represent the mean value per bin, among 10 equally-sized bins. The y-axis represents the mean observed outcome per bin.
The circle size represents the percentage of examples in each bin relative to the total examples for each model.} \label{fig:calibration_all} \end{figure*} \section{Practical Applications}\label{sec:applications} In this section, we present a series of novel practical applications derived from the proposed EPV framework. We show how the different components of our EPV representation can be used to obtain direct insight into specific game situations at any frame during a match. We present the value distribution of different soccer actions and the contextual features developed in this work, and analyze the risk and reward carried by these actions. Additionally, we leverage the pass EPV surfaces and the contextual variables developed in this work to analyze different off-ball pressing scenarios for breaking Liverpool's organized buildup. Finally, we inspect the on-ball and off-ball value added between every Manchester City player (season 14-15) and the legendary attacking midfielder David Silva, to derive an optimal team that would maximize Silva's contribution. \subsection{A real-time control room} In most team sports, coaches make heavy use of video to analyze player performance, show players their correctly or incorrectly performed actions, and even point out other possible decisions the player may have taken in a given game situation. The presented structured modeling approach of the EPV provides the advantage of obtaining numerical estimations for a set of game-related components, allowing us to understand the impact that each of them has on the development of each possession. Based on this, we can build a control room-like tool such as the one shown in Figure \ref{fig:app_control_room}, to help coaches analyze game situations and communicate effectively with players. \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/control_room} \caption{A visual control room tool based on the EPV components.
On the left, a 2D representation of the game state at a given frame during the match, with an overlay of the pass EPV added surface and selection menus to change between 2D and video perspective, and to modify the surface overlay. In the bottom-left corner, a set of video sequence control widgets. In the center, the instantaneous selection probability of each on-ball action and the expected value of each action, as well as the overall EPV value. On the right, the evolution of the EPV value during the possession and the expected EPV value of the optimal passing option at every frame.} \label{fig:app_control_room} \end{figure*} The control room tool presented in Figure \ref{fig:app_control_room} shows the frame-by-frame development of each of the EPV components. Coaches can observe the match's evolution in real-time and use a series of widgets to inspect specific game situations. For instance, in this situation, coaches can see that passing the ball has a better overall expected value than keeping the ball or shooting. Additionally, they can visualize in which passing locations there is a higher expected value. The EPV evolution plot on the right shows that while the overall EPV is $0.032$, the best possible passing option is expected to increase this value up to $0.112$. The pass EPV added surface overlay shows that an increase of value can be expected by passing to the teammates inside the box or passing to the teammate outside the box. With this information and their knowledge of their team, coaches can decide whether to instruct the player to take immediate advantage of these kinds of passing opportunities or wait until better opportunities develop. Additionally, the player can gain a more visual understanding of the potential value of passing to specific locations in this situation instead of taking a shot.
If the player tends to shoot in these kinds of situations, the coach could show that keeping the ball or passing to an open teammate has a better goal expectancy than shooting from that location.\\ This visual approach could provide a smoother way to introduce advanced statistics into a coaching staff's analysis process. Instead of evaluating actions beforehand or only delivering hard-to-digest numerical data, we provide a mechanism to enhance coaches' interpretation and players' understanding of game situations without interfering with the analysis process. \subsection{Not all value is created (or lost) equal} There is a wide range of playing strategies that can be observed in modern professional soccer. No single best strategy is found among successful teams, from Guardiola's creative and highly attacking FC Barcelona to Mourinho's defensive and counter-attacking Inter Milan. We could argue that a critical element for selecting a playing strategy lies in managing the risk and reward balance of actions, or more specifically, which actions a team will prefer in each game situation. While professional coaches intuitively understand which actions are riskier and more valuable, there is no quantification of the actual distribution of the value of the most common actions in soccer.\\ From all the passes and ball drive actions described in Section \ref{sec:datasets}, and the spatial and contextual features described in Section \ref{sec:feature_extraction}, we derived a series of context-specific actions to compare their value distribution. Using the concept of dynamic pressure lines, we identify passes and ball drives that break the first, second, or third line. We define an action (pass or ball drive) to be under pressure if the player's pitch control value at the beginning of the action is below $0.4$, and without pressure otherwise. A long pass is defined as a pass action that covers a distance above 30 meters.
We define a pass back as a pass whose destination location is closer to the team's goal than the ball's origin location. The available data includes manually labeled tags indicating whether a pass is a cross and whether the pass is missed. We identify lost balls as missed passes and ball drives ending in a recovery by the opponent. For all of these action types, we calculate the added value of each observed action (EPV added) as the difference between the EPV at the end and the start of the action. We perform a kernel density estimation on the EPV added of each action type to obtain a probability density function. In Figure \ref{fig:app_value_creation} we compare the density across all the action types. The density function value is normalized into the $[0,1]$ range by dividing by the maximum density value in order to ease the visual comparison between the distributions.\\ \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/pass_value_added} \caption{Comparison of the probability density function of ten different actions in soccer. The density function values are normalized into the $[0,1]$ range. The normalization is obtained by dividing each density value by the maximum observed density value.} \label{fig:app_value_creation} \end{figure*} From Figure \ref{fig:app_value_creation}, we can gain a deeper understanding of the value distribution of different types of actions. For passes that break lines, we can observe that the higher the line, the broader the distribution, and the higher the extreme values. While passes breaking the first line are centered around $0$ with most values ranging in $[-0.01,0.015]$, the distribution of passes breaking the third line is centered around $0.005$, and most passes fall in the interval $[-0.025,0.05]$. Ball drives that break lines present a distribution similar to that of passes breaking the first line.
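The EPV added computation and the max-normalized density described above can be sketched as follows (a plain Gaussian kernel density with an assumed bandwidth, not the exact estimator used for the figure):

```python
import math

def epv_added(epv_start, epv_end):
    """EPV added by an action: value at the end minus value at the start."""
    return epv_end - epv_start

def normalized_density(samples, grid, bandwidth=0.005):
    """Gaussian kernel density evaluated over `grid`, rescaled so the
    maximum equals 1, mirroring the normalisation used for the figure.
    `bandwidth` is an assumed smoothing parameter."""
    density = []
    for x in grid:
        kernel_sum = sum(math.exp(-((x - s) ** 2) / (2.0 * bandwidth ** 2))
                         for s in samples)
        density.append(kernel_sum)
    peak = max(density)
    return [d / peak for d in density]
```

Dividing by the peak guarantees a maximum of exactly 1 while leaving the relative shape of each distribution unchanged, which is what makes the curves for different action types visually comparable.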
Regarding the level of spatial pressure on actions, we can see that actions without pressure present an approximately zero-centered distribution, with most values falling in a $[-0.01,0.01]$ range. On the other hand, actions under pressure present a broader distribution and a higher density on negative values. This shows both that there is more tendency to lose the ball under pressure, hence losing value, and a higher tendency to increase the value if the pressure is overcome with successful actions. Whether or not crosses are a successful way of reaching the goal has been a long-standing debate in soccer strategy. We can observe that crosses constitute the type of action with a higher tendency to lose significant amounts of value; however, they do provide a higher probability of large value increases in case of success, compared to other actions. Long passes share a similar situation, where they can add a high amount of value in case of success but have a higher tendency to produce high EPV losses. For years, soccer enthusiasts have argued about whether passing backward provides value or not. We can observe that, while the EPV added distribution of passing back is the narrowest, nearly half of the probability mass lies on the positive side of the x-axis, showing the potential value to be obtained from this type of action. Finally, losing the ball often produces a loss of value. However, in situations such as being close to the opponent's box and with pressure on the ball carrier, losing the ball with a pass to the box might provide an increment in the expected value of the possession, given the increased chance of a rebound. \subsection{Pressing Liverpool} A prevalent and challenging decision that coaches face in modern professional soccer is how to defend against an organized buildup by the opponent. We consider an organized buildup as a game situation where a team has the ball behind the first pressure line.
When deciding how to press, a coach needs to decide first in which zones they want to avoid the opponent receiving passes, and second, how to cluster their players in order to minimize the chances of the opponent moving forward. In this section, we use the EPV passing components and dynamic pressure lines to analyze how to press Brendan Rodgers' Liverpool (season 14/15).\\ We identify the formation being used at every time instance by counting the number of players in each pressure line. We assume there are only three pressure lines, so all formations are presented as the number of defenders followed by the number of midfielders and forwards. For every formation faced by Liverpool during buildups, we calculate both the mean off-ball and on-ball advantage at every location on the field. The on-ball advantage is calculated as the sum of the EPV added of passes with positive EPV added. On the other hand, the off-ball advantage is calculated as the sum of positive potential EPV added. We then say that a player has an off-ball advantage if he is located in a position where, in case of receiving a pass, the EPV would increase. Figure \ref{fig:app_press_liverpool} presents two heatmaps for each of the top 5 formations used against Liverpool during buildups, showing the distribution where Liverpool obtained on-ball and off-ball advantages, respectively. The heatmaps are presented as the difference with the mean heatmap over all of Liverpool's buildups during the season. \begin{figure*}[h] \includegraphics[width=1\textwidth]{images/liverpool_pressure} \caption{In the first row, one distribution for every formation Liverpool's opponents used during Liverpool's organized buildups, showing the difference between the distribution of off-ball advantages and the mean distribution. The second row is analogous to the first one, presenting the on-ball EPV added distributions.
The green circle represents the ball location.} \label{fig:app_press_liverpool} \end{figure*} We will assume that the coach wants to avoid Liverpool playing inside its team block during buildups. We can see that when facing a 3-4-3 formation, Liverpool can create higher off-ball advantages before the second pressure line and successfully manages to break the first line of pressure through the inside. Against the 4-4-2, Liverpool has more difficulty breaking the first line but still manages to do it successfully while also generating spaces between the defenders and midfielders, facilitating long balls to the sides. If the coach's team does not have a good aerial game, this would be a harmful way of pressing. We can see that the 4-3-3 is an ideal pressing formation for preventing Liverpool from playing inside the pressing block. This pressing style pushes the team to create spaces on the outside, before the first pressure line and after the second pressure line. In the second row, we can observe that Liverpool struggles to add value through the inside and is pushed towards the sides when passing. The 4-2-4 is the formation that best prevents playing inside the block; however, it also allows more space on the sides of the midfielders. We can see that Liverpool takes advantage of this, creating spaces and making valuable passes towards those locations. If the coach has fast wing-backs that could press receptions on long balls to the sides, this could be an adequate formation; otherwise, the 4-3-3 is still preferable. Finally, the 5-3-2 provides significant advantages to Liverpool, which can create spaces both on the inside above the first pressure line and behind the defenders' backs, while also playing towards those locations effectively.\\ This kind of information can be highly useful to a coach when deciding tactical approaches for specific game situations. 
If we add the knowledge that the coach has of his players' qualities, he can make a fine-tuned design of the pressing he wants his team to develop. \subsection{Growing around David Silva} Most teams in the best professional soccer leagues have at least one player who is the key playmaker. Often, coaches want to ensure that the team's strategy is aligned with maximizing the performance of these key players. In this section, we leverage tracking data and the passing components of the EPV model to analyze the relationship between the well-known attacking midfielder David Silva and his teammates when playing at Manchester City in season 14/15. We calculated the playing minutes each player shared with Silva and aggregated both the on-ball EPV added and expected off-ball EPV added of passes between each player pair for each match in the season. We analyze two different situations: when Silva has the ball, and when any other player has the ball and Silva is on the field. We also calculate the selection percentage, defined as the percentage of time Silva chooses to pass to that player when available (and vice versa). Figure \ref{fig:app_silva} presents the sending and receiving maps involving David Silva and each of the two players with the most minutes by position in the team. Every player is placed according to the most commonly used position in the league. Players represented by a circle with a solid contour have a higher sum of off-ball and on-ball EPV in each situation than the teammate assigned to the same position, presented with a dashed circle. The size of the circle represents the selection percentage of the player in each situation. We represent off-ball EPV added by the arrows' color, and on-ball EPV added of attempted passes by the arrows' size. 
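The per-pair statistics visualized in these passing maps reduce to a few simple aggregations over pass events. The following Python sketch is illustrative only: the event fields (sender, epv\_added, off\_ball\_epv) are hypothetical names, and the selection percentage is approximated here by each sender's share of attempted passes rather than the availability-aware definition used in the text.

```python
from collections import defaultdict

def pair_metrics(passes, minutes_together):
    """Aggregate per-pair passing metrics, normalized to 90 minutes.

    `passes`: list of dicts with keys sender, receiver, epv_added
    (on-ball value of the attempted pass) and off_ball_epv (expected
    off-ball EPV added for the receiver). `minutes_together` maps a
    (sender, receiver) pair to the minutes the two players shared.
    """
    totals = defaultdict(lambda: {"on_ball": 0.0, "off_ball": 0.0, "n": 0})
    sent = defaultdict(int)  # passes attempted by each sender
    for p in passes:
        key = (p["sender"], p["receiver"])
        totals[key]["on_ball"] += p["epv_added"]
        totals[key]["off_ball"] += p["off_ball_epv"]
        totals[key]["n"] += 1
        sent[p["sender"]] += 1

    out = {}
    for key, t in totals.items():
        mins = minutes_together[key]
        out[key] = {
            # normalize by shared minutes and scale to a 90-minute match
            "on_ball_per90": t["on_ball"] / mins * 90,
            "off_ball_per90": t["off_ball"] / mins * 90,
            # fraction of the sender's passes that chose this receiver
            "selection_pct": t["n"] / sent[key[0]],
        }
    return out
```

Arrow color, arrow size, and circle size in the figure would then map to the off-ball, on-ball, and selection fields, respectively.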
\begin{figure*}[h] \includegraphics[width=1\textwidth]{images/silva_receive_and_send} \caption{Two passing maps representing the relationship between David Silva and each of the two players with the most minutes by position in the Manchester City team during season 14/15. The figure on the left represents passes attempted by Silva, while the figure on the right represents passes received by Silva. The color of the arrow represents the average expected off-ball EPV added of the passes. The size of the circle represents the selection percentage of the destination player of the pass. Circles present a solid contour when that player is considered better for Silva than the teammate in the same position. The size of the arrow represents the mean on-ball EPV added of attempted passes. Players are placed according to their most frequently used position on the field. All metrics are normalized by minutes played together and multiplied by 90 minutes.} \label{fig:app_silva} \end{figure*} We can see that both the wingers and forwards generate space for Silva and receive high added value from his passes. However, the most frequently selected player is the central midfielder Yaya Tour\'e, who also looks for Silva often and is the midfielder providing the highest value to him. Regarding the other central midfield position, Fernandinho has a better relationship with Silva in terms of received and added value than Fernando. Silva shows a high tendency to play with the wingers; however, while Milner and Jovetic can create space and receive value from Silva, Navas and Nasri find Silva more often, with higher added value. Based on this, the coach can decide whether he prefers to line up wingers that can benefit from Silva's passes or wingers that increase Silva's participation in the game. A similar situation arises with the right and left-backs. Additionally, we can observe that Silva tends to be a highly preferred passing option for most players. 
This information allows the coach to gain a deeper understanding of the effective off-ball and on-ball value relationship expected from every pair of players, and can be useful for designing playing strategies before a match. \section{Discussion} This paper presents a comprehensive approach for estimating the instantaneous expected value of ball possessions in soccer. One of the main contributions of this work is showing that by deconstructing a single expectation into a series of lower-level statistical components and then estimating each of these components separately, we can gain greater interpretive insight into how these different elements impact the final joint estimation. Also, instead of depending on a single-model approach, we can make a more specialized selection of the models, learning approaches, and input information best suited to the specific problem represented by each sub-component of the EPV decomposition. The deep learning architectures presented for the different passing components produce full probability surfaces, providing rich visual information for coaches that can be used to perform fine-grained analysis of players' and teams' performance. We show that we can obtain calibrated estimations for all the decomposed model components, including the single-value estimation of the expected possession value of soccer possessions. We develop a broad set of novel spatial and contextual features for the different models presented, allowing rich state representations. Finally, we present a series of practical applications showing how this framework can be used as a support tool for coaches, allowing them to address new questions as they arise and accelerating the daily problem-solving needs of professional soccer.\\ We consider that this work provides a relevant contribution to improving practitioners' interpretation of the complex dynamics of professional soccer. 
With this approach, soccer coaches gain more convenient access to detailed statistical estimations that are unusual in their practice, along with a visual means of analyzing game situations and communicating tactics to players. Additionally, a large body of novel research can be built on top of this framework, including on-ball and off-ball player performance analysis, team performance and tactical analysis for pre-match and post-match evaluation, player profile identification for scouting, analysis of young players' evolution, match highlight detection, and enriched visual interpretation of game situations, among many others.
\section{Introduction} Athletic endeavors are a function of strength, skill, and strategy. For the high-jump, the choice of strategy has been of particular historic importance. Innovations in techniques or strategies have repeatedly redefined world records over the past 150 years, culminating in the now well-established Fosbury flop (Brill bend) technique. In this paper, we demonstrate how to discover diverse strategies, as realized through physics-based controllers which are trained using reinforcement learning. We show that natural high-jump strategies can be learned without recourse to motion capture data, with the exception of a single generic run-up motion capture clip. We further demonstrate diverse solutions to a box-jumping task. Several challenges stand in the way of being able to discover iconic athletic strategies such as those used for the high jump. The motions involve high-dimensional states and actions. The task is defined by a sparse reward, i.e., successfully making it over the bar or not. It is not obvious how to ensure that the resulting motions are natural in addition to being physically plausible. Lastly, the optimization landscape is multimodal in nature. We take several steps to address these challenges. First, we identify the take-off state as a strong determinant of the resulting jump strategy, which is characterized by low-dimensional features such as the net angular velocity and approach angle in preparation for take-off. To efficiently explore the take-off states, we employ Bayesian diversity optimization. Given a desired take-off state, we first train a run-up controller that imitates a single generic run-up motion capture clip while also targeting the desired take-off state. The subsequent jump control policy is trained with the help of a curriculum, without any recourse to motion capture data. We make use of a pose variational autoencoder to define an action space that helps yield more natural poses and motions. 
We further enrich unique strategy variations by a second optimization stage which reuses the best discovered take-off states and encourages novel control policies. \newpage In summary, our contributions include: \begin{itemize} \item A system which can discover common athletic high jump strategies, and execute them using learned controllers and physics-based simulation. The discovered strategies include the Fosbury flop (Brill bend), Western Roll, and a number of other styles. We further evaluate the system on box jumps and on a number of high-jump variations and ablations. \item The use of Bayesian diversity search for sample-efficient exploration of take-off states, which are strong determinants of resulting strategies. \item Pose variational autoencoders used in support of learning natural athletic motions. \end{itemize} \section{SYSTEM OVERVIEW} \input{figures/overview} We now give an overview of our learning framework as illustrated in Figure~\ref{fig:overview}. Our framework splits athletic jumps into two phases: a run-up phase and a jump phase. The {\em take-off state} marks the transition between these two phases, and consists of a time instant midway through the last support phase before becoming airborne. The take-off state is key to our exploration strategy, as it is a strong determinant of the resulting jump strategy. We characterize the take-off state by a feature vector that captures key aspects of the state, such as the net angular velocity and body orientation. This defines a low-dimensional take-off feature space that we can sample in order to explore and evaluate a variety of motion strategies. While random sampling of take-off state features is straightforward, it is computationally impractical as evaluating one sample involves an expensive DRL learning process that takes hours even on modern machines. Therefore, we introduce a sample-efficient Bayesian Diversity Search (BDS) algorithm as a key part of our Stage~1 optimization process. 
Given a specific sampled take-off state, we then need to produce an optimized run-up controller and a jump controller that result in the best possible corresponding jumps. This process has several steps. We first train a {\em run-up} controller, using deep reinforcement learning, that imitates a single generic run-up motion capture clip while also targeting the desired take-off state. For simplicity, the run-up controller and its training are not shown in Figure~\ref{fig:overview}. These are discussed in Section~\ref{sec:Experiments-Runup}. The main challenge lies with the synthesis of the actual jump controller which governs the remainder of the motion, and for which we wish to discover strategies without any recourse to known solutions. The jump controller begins from the take-off state and needs to control the body during take-off, over the bar, and to prepare for landing. This poses a challenging learning problem because of the demanding nature of the task, the sparse fail/success rewards, and the difficulty of also achieving natural human-like movement. We apply two key insights to make this task learnable using deep reinforcement learning. First, we employ an action space defined by a subspace of natural human poses as modeled with a Pose Variational Autoencoder (P-VAE). Given an action parameterized as a target body pose, individual joint torques are then realized using PD-controllers. We additionally allow for regularized {\em offset} PD-targets that are added to the P-VAE targets to enable strong takeoff forces. Second, we employ a curriculum that progressively increases the task difficulty, i.e., the height of the bar, based on current performance. A diverse set of strategies can already emerge after the Stage 1 BDS optimization. To achieve further strategy variations, we reuse the take-off states of the existing discovered strategies for another stage of optimization. 
The diversity is explicitly incentivized during this Stage 2 optimization via a novelty reward, which is focused specifically on features of the body pose at the peak height of the jump. As shown in Figure~\ref{fig:overview}, Stage~2 makes use of the same overall DRL learning procedure as in Stage~1, albeit with a slightly different reward structure. \section{Results} \label{sec:results} We demonstrate multiple strategies discovered through our framework for high jumping and obstacle jumping in Section~\ref{sec:Experiments-Diverse-Strategies}. We validate the effectiveness of BDS and P-VAE in Section~\ref{sec:Experiments-Comparison-and-Ablation-Study}. Comparison with motion capture examples, and interesting variations of learned policies are given in Section~\ref{sec:Results-variations}. All results are best seen in the supplementary videos in order to judge the quality of the synthesized motions. \subsection{Diverse Strategies} \label{sec:Experiments-Diverse-Strategies} \subsubsection{High Jumps}\label{sec:results:high-jump} In our experiments, six different high jump strategies are discovered during the Stage 1 initial state exploration within the first ten BDS samples: Fosbury Flop, Western Roll (facing up), Straddle, Front Kick, Side Jump, Side Dive. The first three are high jump techniques standard in the sports literature. The last three strategies are not commonly used in sporting events, but still physically valid so we name them according to their visual characteristics. The other four samples generated either repetitions or failures. Strategy repetitions are generally not avoidable due to model errors and large flat regions in the motion space. Since the evaluation of one BDS sample takes about six hours, the Stage 1 exploration takes about 60 hours in total. The discovered distinct strategies at $z_\text{freeze}=100cm$ are further optimized separately to reach their maximum difficulty level, which takes another 20 hours. 
Representative take-off state feature values of the discovered strategies can be found in Appendix~\ref{app:takeoffStates}. {We also show the DRL learning and curriculum scheduling curves for two strategies in Appendix~\ref{app:curves}.} In Stage 2, we perform novel policy search for five DRL iterations from each good initial state of Stage 1. Training is warm started with the associated Stage 1 policy for efficient learning. The total time required for Stage 2 is roughly 60 hours. More strategies are discovered in Stage 2, but most are repetitions and only two of them are novel strategies not discovered in Stage 1: Western Roll (facing sideways) and Scissor Kick. Western Roll (sideways) shares the same initial state with Western Roll (up). Scissor Kick shares the same initial state with Front Kick. The strategies discovered in each stage are summarized in Figure~\ref{fig:strategies}. We visualize all eight distinct strategies in Figure~\ref{fig:teaser} and Figure~\ref{fig:highJumps}. We also visualize their peak poses in Figure~\ref{fig:High-jump-peak-poses}. While the final learned control policies are stochastic in nature, the majority of the results shown in our supplementary video are the deterministic version of those policies, i.e., using the mean of the learned policy action distributions. In the video we further show multiple simulations from the final stochastic policies, to help give insight into the true final endpoint of the optimization. As one might expect for a difficult task such as a maximal-height high jump, these stochastic control policies will also fail for many of the runs, similar to a professional athlete. \subsubsection{Obstacle Jumps}\label{sec:results:obstacle-jump} Figure~\ref{fig:obstacleJumps1} shows the six different obstacle jump strategies discovered in Stage 1 within the first 17 BDS samples: Front Kick, Side Kick, Twist Jump (clockwise), Twist Jump (counterclockwise), Straddle and Dive Turn. 
More strategies are discovered in Stage 2, but only two of them are novel as shown in Figure~\ref{fig:obstacleJumps2}: Western Roll and Twist Turn. Western Roll shares the initial state with Twist Jump (clockwise). Twist Turn shares the initial state with Dive Turn. The two stages together take about 180 hours. We encourage readers to watch the supplementary video for better visual perception of the learned strategies. Although our obstacle jump task is not an Olympic event, it is analogous to a long jump in that it seeks to cover a maximum horizontal distance. Setting the obstacle height to zero yields a standard long jump task. The framework discovers several strategies, including one similar to the standard long jump adopted in competitive events, with the strong caveat that the distance achieved is limited by the speed of the run-up. Please refer to the supplementary video for the long jump results. \subsection{Validation and Ablation Study} \label{sec:Experiments-Comparison-and-Ablation-Study} \subsubsection{BDS vs. Random Search} \label{sec:comparison-bds-random-search} We validate the sample efficiency of BDS compared with a random search baseline. Within the first ten samples of initial state exploration in the high jump task, BDS discovered six distinct strategies as discussed in Section~\ref{sec:results:high-jump}. Given the same computational budget, random search only discovered three distinct strategies: Straddle, Side Jump, and one strategy similar to Scissor Kick. Most samples result in repetitions of these three strategies, due to the presence of large flat regions in the strategy space where different initial states lead to the same strategy. In contrast, BDS largely avoids sampling the flat regions thanks to the acquisition function for diversity optimization and guided exploration of the surrogate model. 
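To illustrate the diversity-driven sampling idea behind this comparison (though not the paper's actual Bayesian Diversity Search, whose surrogate model and acquisition function are more involved), the following NumPy sketch scores candidate take-off states by their distance to previously evaluated samples and additionally penalizes proximity to the most-repeated strategy; the 0.5 weighting constant is an arbitrary assumption.

```python
import numpy as np

def next_sample(evaluated_x, evaluated_labels, candidates):
    """Pick the next take-off state to evaluate.

    A minimal stand-in for diversity-driven search: score each candidate
    by (a) its distance to all evaluated points (exploration) and (b) its
    distance to the most-repeated strategy's samples, so that large flat
    regions yielding the same skill are sampled less often.
    """
    evaluated_x = np.asarray(evaluated_x, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    # pairwise distances: candidates x evaluated points
    d = np.linalg.norm(candidates[:, None, :] - evaluated_x[None, :, :], axis=-1)
    exploration = d.min(axis=1)  # far from everything tried so far
    # penalize closeness to the strategy observed most often
    labels = np.asarray(evaluated_labels)
    most_common = max(set(evaluated_labels), key=evaluated_labels.count)
    d_common = d[:, labels == most_common].min(axis=1)
    score = exploration + 0.5 * d_common
    return int(np.argmax(score))
```

In this toy scoring rule, random search corresponds to ignoring the score entirely, which is why it tends to resample the flat regions that BDS avoids.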
\subsubsection{Motion Quality with/without P-VAE} We justify the usage of P-VAE for improving motion naturalness with results shown in Figure~\ref{fig:ablation}. Without P-VAE, the character can still learn physically valid skills to complete the tasks successfully, but the resulting motions usually exhibit unnatural behavior. In the absence of a natural action space constrained by the P-VAE, the character can freely explore any arbitrary pose during the course of the motion to complete the task, which is unlikely to be within the natural pose manifold all the time. \subsection{Comparison and Variations} \label{sec:Results-variations} \subsubsection{Synthesized High Jumps vs. Motion Capture} \label{sec:synthesized-mocap} We capture motion capture examples from a university athlete in a commercial motion capture studio for three well-known high jump strategies: Scissor Kick, Straddle, and Fosbury Flop. We retarget the mocap examples onto our virtual athlete, which is shorter than the real athlete as shown in Table~\ref{tb:modelParams}. We visualize keyframes sampled from our simulated jumps and the retargeted mocap jumps in Figure~\ref{fig:compare-mocap}. Note that the bar heights are set to the maximum heights achievable by our trained policies, while the bar heights for the mocap examples are just the bar heights used at the actual capture session. We did not set the mocap bar heights at the athlete's personal record height, as we wanted to ensure his safety and comfort while jumping in a tight mocap suit with a lot of markers on. \subsubsection{High Jump Variations} \label{sec:synthesized-variations} In addition to discovering multiple motion strategies, our framework can easily support physically valid motion variations. We show four high jump variations generated from our framework in Figure~\ref{fig:variations}. 
We generate the first three variations by taking the initial state of the Fosbury Flop strategy discovered in Stage 1, and retrain the jumping policy with additional constraints starting from a random initial policy. Figure~\ref{fig:variation-weakerLeg} shows a jump with a weaker take-off leg, where the torque limits are reduced to $60\%$ of its original values. Figure~\ref{fig:variation-inflexibleSpine} shows a character jumping with a spine that does not permit backward arching. Figure~\ref{fig:variation-fixedKnee} shows a character jumping with a fixed knee joint. All these variations clear lower maximum heights, and are visually different from the original Fosbury Flop in Figure~\ref{fig:highjump-flop}. For the jump in Figure~\ref{fig:variation-landonFoot}, we take the initial state of the Front Kick, and train with an additional constraint that requires landing on feet. In Figure~\ref{fig:High-jump-mars} we also show a high jump trained on Mars, where the gravity $g=3.711m/s^2$ is lower, from the initial state of the Fosbury flop discovered on Earth. \section{Conclusion and Discussion} {We have presented a framework for discovering and learning multiple natural and distinct strategies for highly challenging athletic jumping motions. A key insight is to explore the take-off state, which is a strong determinant of the jump strategy that follows once airborne. In a second phase, we additionally use explicit rewards for novel motions. Another crucial aspect is to constrain the action space inside the natural human pose manifold. With the proposed two-stage framework and the pose variational autoencoder, natural and physically-nuanced jumping strategies emerge automatically without any reference to human demonstrations. Within the proposed framework, the take-off state exploration is specific to jumping tasks, while the diversity search algorithms in both stages and the P-VAE are task independent. 
We leave further adaptation of the proposed framework to additional motor skills as future work. We believe this work demonstrates a significant advance by being able to learn a highly-technical skill such as high-jumping.} {We note that the current world record for men's high jump as of 2021 is $245cm$, set in 1993 by an athlete $193cm$ in height. Our high jump record is $200cm$ for a virtual athlete $170cm$ tall. The performance and realism of our simulated jumps are bounded by many simplifications in our modeling and simulation.} We simplify the athlete's feet and specialized high jump shoes as rigid rectangular boxes, which reduces the maximum heights the virtual athlete can clear. We model the high jump crossbar as a wall at training time and as a rigid bar at run time, while real bars are made from more elastic materials such as fiberglass. We use a rigid box as the landing surface, while real-world landing cushions protect the athlete from breaking his neck and back, and also help him roll and get up in a fluid fashion. The run-up phase of both jump tasks imitates reference motions: a single curved run for all the high jumps and a single straight run for all the obstacle jumps. The quality of the two reference runs affects the quality of not only the run-up controllers, but also the learned jump controllers. This is because the jump controllers couple with the run-up controllers through the take-off states, for which we only explore a low-dimensional feature space. The remaining dimensions of the take-off states stay the same as in the original reference run. As a result, the run-up controllers for our obstacle jumps remain at medium speed, and the swing leg sometimes has to kick backward in order for the body to dive forward. If we were to use a faster sprint with more forward lean as the reference run, the discovered jumps could potentially be more natural and more capable of clearing wider obstacles. 
Similarly, we did not find the Hurdle strategy for high jumping, likely due to the reference run being curved rather than straight. In both reference runs, there is a low in-place jump after the last running step. We found this jumping intention successfully embedded into the take-off states, which helped both jump controllers to jump up naturally. Using reference runs that anticipate the intended skills is definitely recommended, although retraining the run-up controller and the jump controller together in a post-processing stage may be helpful as well. {We were able to discover most well-known high-jump strategies, and some lesser-known variations. There remains a rich space of further parameters to consider for optimization, with our current choices being a good fit for our available computational budget. It would be exciting to discover a better strategy than the Fosbury flop, but a better strategy may not exist. We note that Stage 1 can discover most of the strategies shown in Figure~\ref{fig:strategies}. Stage 2 is only used to search for additional unique strategies and not to fine-tune the strategies already learned in Stage 1. We also experimented with simply running Stage 1 longer, with three times more samples for the BDS. However, this did not discover any of the new strategies found by Stage 2. In summary, Stage 2 is not absolutely necessary for our framework to work, but it complements Stage 1 in terms of discovering additional visually distinctive strategies. We also note that our Stage 2 search for novel policies is similar in spirit to the algorithm proposed in \cite{zhang2019novel-policies}. An advantage of our approach is its simplicity and the demonstration of its scalability to the discovery of visually distinct strategies for athletic skills.} There are many exciting directions for further investigations. First, we have only focused on strategy discovery for the take-off and airborne parts of jumping tasks. 
For landing, we only required that the character not land head first. We did not model get-ups at all. How to seamlessly incorporate landing and get-ups into our framework is a worthy problem for future studies. Second, there is still room to further improve the quality of our synthesized motions. The P-VAE only constrains naturalness at the pose level, while ideally we need a mechanism to guarantee naturalness at the motion level. This is especially helpful for under-constrained motor tasks such as crawling, where the feasible regions of the tasks are large and system dynamics cannot help prune a large portion of the state space as they do for the jumping tasks. Lastly, our strategy discovery is computationally expensive. We are only able to explore initial states in a four-dimensional space, limited by our computational resources. If more dimensions could be explored, more strategies might be discovered. Parallel implementation is trivial for Stage 2, since searches for novel policies for different initial states are independent. For Stage 1, batched BDS where multiple points are queried together, similar to the idea of \cite{azimi2010batch}, may be worth exploring. The key challenge of such an approach is how to find a set of good candidates to query simultaneously. \section{LEARNING NATURAL STRATEGIES} \label{sec:discovery} Given a character model, an environment, and a task objective, we aim to learn feasible natural-looking motion strategies using deep reinforcement learning. We first describe our DRL formulation in Section~\ref{sec:methods-DRL-formulation}. To improve the learned motion quality, we propose a Pose Variational Autoencoder (P-VAE) to constrain the policy actions in Section~\ref{sec:methods-PVAE}. \subsection{DRL Formulation} \label{sec:methods-DRL-formulation} Our strategy learning task is formulated as a standard reinforcement learning problem, where the character interacts with the environment to learn a control policy which maximizes a long-term reward. 
The control policy $\pi_\theta(a|s)$ parameterized by $\theta$ models the conditional distribution over action $a \in \mathcal{A}$ given the character state $s \in \mathcal{S}$. At each time step $t$, the character interacts with the environment with action $a_t$ sampled from $\pi(a|s)$ based on the current state $s_t$. The environment then responds with a new state $s_{t+1}$ according to the transition dynamics $p(s_{t+1}|s_t,a_t)$, along with a reward signal $r_t$. The goal of reinforcement learning is to learn the optimal policy parameters $\theta^*$ which maximizes the expected return defined as \begin{equation} J(\theta) = \mathbb{E}_{\tau\sim p_\theta(\tau)}\left[ \sum_{t=0}^T{\gamma^t r_t} \right], \end{equation} where $T$ is the episode length, $\gamma \leq 1$ is a discount factor, and $p_\theta(\tau)$ is the probability of observing trajectory $\tau = \{s_0, a_0, s_1, ..., a_{T-1}, s_T\}$ given the current policy $\pi_\theta(a|s)$. \paragraph{States} The state $s$ describes the character configuration. We use a similar set of pose and velocity features as those proposed in DeepMimic \cite{Peng:2018:DeepMimic}, including relative positions of each link with respect to the root, their rotations parameterized in quaternions, along with their linear and angular velocities. Different from DeepMimic, our features are computed directly in the global frame without direction-invariant transformations for the studied jump tasks. The justification is that input features should distinguish states with different relative transformations between the character and the environment obstacle such as the crossbar. In principle, we could also use direction-invariant features as in DeepMimic, and include the relative transformation to the obstacle into the feature set. However, as proved in \cite{Ma19}, there are no direction-invariant features that are always singularity free. 
Direction-invariant features change wildly whenever the character's facing direction approaches the chosen motion direction, which is usually the global up-direction or the $Y$-axis. For high jump techniques such as the Fosbury flop, singularities are frequently encountered as the athlete clears the bar facing upward. Therefore, we opt to use global features for simplicity and robustness. Another difference from DeepMimic is that time-dependent phase variables are not included in our feature set. Actions are chosen purely based on the dynamic state of the character. \paragraph{Initial States} \label{sec:methods-initial-states} The initial state $s_0$ is the state in which an agent begins each episode in DRL training. We explore a chosen low-dimensional feature space ($3\sim4D$) of the take-off states for learning diverse jumping strategies. As shown by previous work \cite{ma2021spacetime}, the take-off moment is a critical point of jumping motions, where the volume of the feasible region of the dynamic skill is the smallest. In other words, bad initial states fail fast, which in a way helps our exploration framework find good ones more quickly. Alternatively, we could place the agent in a fixed initial pose to start with, such as a static pose before the run-up. This is problematic for several reasons. First, different jumping strategies need different lengths for the run-up. The planar position and facing direction of the root still form a three-dimensional space to be explored. Second, the run-up strategies and the jumping strategies do not correlate in a one-to-one fashion. Visually, the run-up strategies do not look as diverse as the jumping strategies. Lastly, starting the jumps from a static pose lengthens the learning horizon, and makes our learning framework based on DRL training even more costly. 
Therefore we choose to focus on just the jumping phase in this work, and leave the run-up control learning to DeepMimic, which is one of the state-of-the-art imitation-based DRL learning methods. More details are given in Section~\ref{sec:Experiments-Runup}. \paragraph{Actions} The action $a$ is a target pose described by internal joint rotations. We parameterize 1D revolute joint rotations by scalar angles, and 3D spherical joint rotations by exponential maps \cite{Grassia:1998:ExpMap}. Given a target pose and the current character state, joint torques are computed through Stable Proportional Derivative (SPD) controllers \cite{Tan:2011:SPD}. Our control frequency $f_{\text{control}}$ ranges from 10 $Hz$ to 30 $Hz$ depending on both the task and the curriculum. For challenging tasks like high jumps, a low control frequency helps quickly improve initial policies through stochastic evaluations at early training stages. A low-frequency policy enables faster learning by reducing the needed control steps, or in other words, the dimensionality and complexity of the action sequence $(a_0,...,a_T)$. This is in spirit similar to the 10 $Hz$ control fragments used in SAMCON-type controllers \cite{Liu16}. Successful low-frequency policies can then be gradually transferred to high-frequency ones according to a curriculum to achieve finer controls and thus smoother motions. We discuss the choice of control frequency in more detail in Section~\ref{sec:Experiments-Curriculum}. \paragraph{Reward} We use a reward function consisting of the product of two terms for all our strategy discovery tasks as follows: \begin{equation} \label{eq:stage1-reward} r = r_{\text{task}} \cdot r_{\text{naturalness}} \end{equation} where $r_{\text{task}}$ is the task objective and $r_{\text{naturalness}}$ is a naturalness reward term computed from the P-VAE to be described in Section \ref{sec:methods-PVAE}. 
For diverse strategy discovery, a simple $r_{\text{task}}$ that precisely captures the task objective is preferred. For example in high jumping, the agent receives a sparse reward signal at the end of the jump after it successfully clears the bar. In principle, we could transform the sparse reward into a dense reward to reduce the learning difficulty, such as to reward CoM positions higher than a parabolic trajectory estimated from the bar height. However in practice, such a dense guidance reward can mislead the training to a bad local optimum, where the character learns to jump high in place rather than clearing the bar in a coordinated fashion. Moreover, the CoM height and the bar height may not correlate in a simple way. For example, the CoM passes underneath the crossbar in Fosbury flops. As a result, a shaped dense reward function could harm the diversity of the learned strategies. We will discuss reward function settings for each task in more detail in Section~\ref{sec:Experiments-Reward}. \paragraph{Policy Representation} We use a fully-connected neural network parameterized by weights $\theta$ to represent the control policy $\pi_\theta(a|s)$. Similar to the settings in \cite{Peng:2018:DeepMimic}, the network has two hidden layers with $1024$ and $512$ units respectively. ReLU activations are applied for all hidden units. Our policy maps a given state $s$ to a Gaussian distribution over actions $a\sim\mathcal{N}(\mu(s), \Sigma)$. The mean $\mu(s)$ is determined by the network output. The covariance matrix $\Sigma=\sigma I$ is diagonal, where $I$ is the identity matrix and $\sigma$ is a scalar variable measuring the action noise. We apply an annealing strategy to linearly decrease $\sigma$ from $0.5$ to $0.1$ in the first $1.0\times 10^7$ simulation steps, to encourage more exploration in early training and more exploitation in late training. \paragraph{Training} We train our policies with the Proximal Policy Optimization (PPO) method \cite{Schulman:2017:PPO}. 
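The Gaussian policy and the linear noise-annealing schedule described above can be sketched as follows (function names such as \texttt{anneal\_sigma} and the feature sizes are our own illustrative choices, not part of the released implementation):

```python
import numpy as np

def anneal_sigma(step, sigma_start=0.5, sigma_end=0.1, anneal_steps=1.0e7):
    """Linearly decrease the action-noise scale sigma over the first
    `anneal_steps` simulation steps, then hold it constant."""
    frac = min(step / anneal_steps, 1.0)
    return sigma_start + frac * (sigma_end - sigma_start)

def sample_action(mu, step, rng):
    """Sample an action from the Gaussian policy N(mu(s), Sigma) with a
    diagonal covariance scaled by the annealed noise level."""
    sigma = anneal_sigma(step)
    return mu + sigma * rng.standard_normal(mu.shape)

rng = np.random.default_rng(0)
mu = np.zeros(41)  # network output for one state (size is illustrative)
a = sample_action(mu, step=0, rng=rng)
```

Early in training the larger $\sigma$ encourages exploration of different bar-clearing behaviors, while the smaller late-stage $\sigma$ lets the policy exploit and refine a discovered strategy.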
PPO involves training both a policy network and a value function network. The value network architecture is similar to that of the policy network, except that the output layer has a single linear unit. We train the value network with TD($\lambda$) multi-step returns. We estimate the advantage of the PPO policy gradient by the Generalized Advantage Estimator GAE($\lambda$) \cite{Generalized-Advantage-Estimation}. \subsection{Pose Variational Autoencoder}\label{sec:methods-PVAE} The intrinsic dimension of natural human poses is usually much lower than the number of degrees of freedom of the character model. We propose a generative model to produce natural PD target poses at each control step. More specifically, we train a Pose Variational Autoencoder (P-VAE) from captured natural human poses, and then sample its latent space to produce desired PD target poses for control. Here a pose only encodes internal joint rotations without the global root transformations. We use publicly available human motion capture databases to train our P-VAE. Note that none of these databases contains high jumps or obstacle jumps specifically, but they already provide enough poses for us to learn the natural human pose manifold successfully. \paragraph{P-VAE Architecture and Training} Our P-VAE adopts the standard Beta Variational Autoencoder ($\beta$-VAE) architecture \cite{beta-VAE}. The encoder maps an input feature $x$ to a low-dimensional latent space, parameterized by a Gaussian distribution with a mean $\mu_x$ and a diagonal covariance $\Sigma_x$. The decoder maps a latent vector sampled from the Gaussian distribution back to the original feature space as $x'$. 
The training objective is to minimize the following loss function: \begin{equation} \mathcal{L} = \mathcal{L}_\text{MSE}(x,x') + \beta \cdot \text{KL}(\mathcal{N}(\mu_x, \Sigma_x), \mathcal{N}(0, I)), \end{equation} where the first term is the MSE (Mean Squared Error) reconstruction loss, and the second term shapes the latent variable distribution to a standard Gaussian by measuring their Kullback-Leibler divergence. We set $\beta = 1.0\times10^{-5}$ in our experiments, so that the two terms in the loss function are within the same order of magnitude numerically. We train the P-VAE on a dataset consisting of roughly $20,000$ poses obtained from the CMU and SFU motion capture databases. We include a large variety of motion skills, including walking, running, jumping, breakdancing, cartwheels, flips, kicks, martial arts, etc. The input features consist of all link and joint positions relative to the root in the local root frames, and all joint rotations with respect to their parents. We parameterize joint rotations by a $6$D representation for better continuity, as described in \cite{zhou2019continuity, Ling20}. We model both the encoder and the decoder as fully connected neural networks with two hidden layers, each having 256 units with $\tanh$ activation. We perform PCA (Principal Component Analysis) on the training data and choose $d_{\text{latent}} = 13$ to cover $85\%$ of the training data variance, where $d_{\text{latent}}$ is the dimension of the latent variable. We use the Adam optimizer to update network weights \cite{kingma2014adam}, with the learning rate set to $1.0\times10^{-4}$. Using a mini-batch size of $128$, the training takes $80$ epochs within $2$ minutes on an NVIDIA GeForce GTX 1080 GPU and an Intel i7-8700k CPU. We use this single pre-trained P-VAE for all our strategy discovery tasks to be described. \paragraph{Composite PD Targets} PD controllers provide actuation based on positional errors. 
So in order to reach the desired pose, the actual target pose needs to be offset by a certain amount. Such offsets are usually small to just counteract gravity for free limbs. However, for joints that interact with the environment, such as the lower body joints for weight support and ground takeoff, large offsets are needed to generate powerful ground reaction forces to propel the body forward or into the air. Such complementary offsets combined with the default P-VAE targets help realize natural poses during physics-based simulations. Our action space $\mathcal{A}$ is therefore $d_{\text{latent}} + d_{\text{offset}}$ dimensional, where $d_{\text{latent}}$ is the dimension of the P-VAE latent space, and $d_{\text{offset}}$ is the dimension of the DoFs that we wish to apply offsets for. We simply apply offsets to all internal joints in this work. Given $a=(a_{\text{latent}},a_{\text{offset}}) \in \mathcal{A}$ sampled from the policy $\pi_\theta(a|s)$, where $a_{\text{latent}}$ and $a_{\text{offset}}$ correspond to the latent and offset parts of $a$ respectively, the final PD target is computed by $D_{\text{pose}}(a_{\text{latent}}) + a_{\text{offset}}$. Here $D_{\text{pose}}(\cdot)$ is a function that decodes the latent vector $a_{\text{latent}}$ to full-body joint rotations. We minimize the usage of rotation offsets by a penalty term as follows: \begin{equation} \label{eq:r-pvae} r_{\text{naturalness}} = 1 - \mathop{\mathrm{Clip}}\nolimits\left(\left(\frac{||a_{\text{offset}}||_1}{c_{\text{offset}}}\right)^2, 0, 1\right) , \end{equation} where $c_{\text{offset}}$ is the maximum offset allowed. For tasks with only a sparse reward signal at the end, $||a_{\text{offset}}||_1$ in Equation~\ref{eq:r-pvae} is replaced by the average offset norm $\frac{1}{T}\sum_{t=0}^T{||a^{(t)}_{\text{offset}}||_1}$ across the entire episode. 
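The composite PD target and the offset penalty of Equation~\ref{eq:r-pvae} can be sketched as follows (a minimal sketch: \texttt{decode\_pose} stands in for the P-VAE decoder $D_{\text{pose}}$, and the clip bound \texttt{c\_offset} is task dependent):

```python
import numpy as np

def composite_pd_target(decode_pose, a_latent, a_offset):
    """Final PD target: D_pose(a_latent) + a_offset."""
    return decode_pose(a_latent) + a_offset

def r_naturalness(a_offset, c_offset):
    """Offset penalty: r = 1 - Clip((||a_offset||_1 / c_offset)^2, 0, 1)."""
    ratio = np.abs(a_offset).sum() / c_offset
    return 1.0 - float(np.clip(ratio ** 2, 0.0, 1.0))

# Zero offset stays on the P-VAE pose manifold and earns full reward;
# offsets with L1-norm at or beyond c_offset earn none.
full = r_naturalness(np.zeros(4), c_offset=1.0)   # 1.0
none = r_naturalness(np.ones(3), c_offset=1.0)    # 0.0
```

The quadratic shaping keeps the penalty gentle for small offsets, which are needed for powerful take-offs, while strongly discouraging large departures from the natural pose manifold.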
We use the $L1$-norm rather than the commonly adopted $L2$-norm to encourage sparse solutions with fewer non-zero components \cite{tibshirani1996regression, chen2001atomic}, as our goal is to only apply offsets to essential joints to complete the task while staying close to the natural pose manifold prescribed by the P-VAE. \section{LEARNING DIVERSE STRATEGIES} \label{sec:diverse} Given a virtual environment and a task objective, we would like to discover as many strategies as possible to complete the task at hand. Without human insights and demonstrations, this is a challenging task. To this end, we propose a two-stage framework to enable stochastic DRL to discover solution modes such as the Fosbury flop. The first stage focuses on strategy discovery by exploring the space of initial states. For example in high jump, the Fosbury flop technique and the straddle technique require completely different initial states at take-off, in terms of the approaching angle with respect to the bar, the take-off velocities, and the choice of inner or outer leg as the take-off leg. A fixed initial state may lead to the success of one particular strategy, but can miss other drastically different ones. We systematically explore the initial state space through a novel sample-efficient Bayesian Diversity Search (BDS) algorithm to be described in Section~\ref{sec:methods-bayesian-diversity-search}. The output of Stage 1 is a set of diverse motion strategies and their corresponding initial states. Taking such a successful initial state as input, we then apply another pass of DRL training to further explore more motion variations permitted by the same initial state. The intuition is to explore different local optima while maximizing the novelty of the current policy, compared to previously found ones. We describe our detailed settings for the Stage 2 novel policy seeking algorithm in Section~\ref{sec:methods-phase2}. 
\subsection{Stage 1: Initial States Exploration with Bayesian Diversity Search}\label{sec:methods-bayesian-diversity-search} In Stage 1, we perform diverse strategy discovery by exploring initial state variations, such as pose and velocity variations, at the take-off moment. We first extract a feature vector $f$ from a motion trajectory to characterize and differentiate between different strategies. A straightforward way is to compute the Euclidean distance between time-aligned motion trajectories, but we hand-pick a low-dimensional visually-salient feature set as detailed in Section~\ref{sec:Experiments-Strategy-Features}. We also define a low-dimensional exploration space $\mathcal{X}$ for initial states, as exploring the full state space is computationally prohibitive. Our goal is to search for a set of representatives $X_n = \{x_1, x_2, ..., x_n | x_i \in \mathcal{X}\}$, such that the corresponding feature set $F_n= \{f_1, f_2, ..., f_n | f_i \in \mathcal{F}\}$ has a large diversity. Note that as DRL training and physics-based simulation are involved in producing the motion trajectories from an initial state, the mapping $f_i=g(x_i)$ is a stochastic and expensive black-box function. We therefore design a sample-efficient Bayesian Optimization (BO) algorithm to optimize for motion diversity in a guided fashion. Our BDS (Bayesian Diversity Search) algorithm iteratively selects the next sample to evaluate from $\mathcal{X}$, given the current set of observations $X_t = \{x_1, x_2, ..., x_t\}$ and $F_t = \{f_1, f_2, ..., f_t\}$. More specifically, the next point $x_{t+1}$ is selected based on an acquisition function $a(x_{t+1})$ to maximize the diversity in $F_{t+1}=F_t \cup \{f_{t+1}\}$. We choose to maximize the minimum distance between $f_{t+1}$ and all $f_i \in F_t$: \begin{equation}\label{eq:diversity-objective} a(x_{t+1}) = \min_{f_i \in F_t}{||f_{t+1} - f_i||} . 
\end{equation} Since evaluating $f_{t+1}$ through $g(\cdot)$ is expensive, we employ a surrogate model to quickly estimate $f_{t+1}$, so that the most promising sample to evaluate next can be efficiently found through Equation~\ref{eq:diversity-objective}. We maintain the surrogate statistical model of $g(\cdot)$ using a Gaussian Process (GP) \cite{rasmussen2003gaussian-process}, similar to standard BO methods. A GP contains a prior mean $m(x)$ encoding the prior belief of the function value, and a kernel function $k(x,x')$ measuring the correlation between $g(x)$ and $g(x')$. More details of our specific $m(x)$ and $k(x,x')$ are given in Section~\ref{sec:Experiments-GP-Priors-Kernels}. Hereafter we assume a one-dimensional feature space $\mathcal{F}$. Generalization to a multi-dimensional feature space is straightforward as multi-output Gaussian Process implementations are readily available, such as \cite{GPflow2020multioutput-gp}. Given $m(\cdot)$, $k(\cdot, \cdot)$, and current observations $\{X_t, F_t\}$, posterior estimation of $g(x)$ for an arbitrary $x$ is given by a Gaussian distribution with mean $\mu_{t}$ and variance $\sigma^2_{t}$ computed in closed forms: \begin{equation}\label{eq:gp-mu-sigma} \begin{gathered} \mu_{t}(x) = k(X_t, x)^T (K + \eta^{2} I)^{-1} \left(F_t - m(x)\right) + m(x), \\ \sigma_{t}^{2}(x) = k(x, x) + \eta^{2} - k(X_t, x)^T(K + \eta^{2}I)^{-1} k(X_t,x) , \end{gathered} \end{equation} where $X_t \in \mathbb{R}^{t \times \text{dim}(\mathcal{X})}$, $F_t \in \mathbb{R}^{t}$, $K \in \mathbb{R}^{t \times t}, K_{i,j} = k(x_{i}, x_{j})$, and $k(X_t, x) = [k(x,x_{1}),k(x,x_{2}),...k(x,x_{t})]^T$. $I$ is the identity matrix, and $\eta$ is the standard deviation of the observation noise. 
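The closed-form posterior in Equation~\ref{eq:gp-mu-sigma} can be sketched as follows, assuming for brevity a zero prior mean $m(x)=0$ and an RBF kernel (the specific prior and kernel we actually use are described in Section~\ref{sec:Experiments-GP-Priors-Kernels}):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)) for row vectors in A, B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_t, F_t, x, eta=0.1, length_scale=1.0):
    """Posterior mean and variance of g(x) given observations (X_t, F_t),
    following the closed forms above with m(x) = 0."""
    K = rbf_kernel(X_t, X_t, length_scale) + eta ** 2 * np.eye(len(X_t))
    k_star = rbf_kernel(X_t, x[None, :], length_scale)[:, 0]
    mu = k_star @ np.linalg.solve(K, F_t)
    var = (rbf_kernel(x[None, :], x[None, :], length_scale)[0, 0]
           + eta ** 2 - k_star @ np.linalg.solve(K, k_star))
    return mu, var

X_t = np.array([[0.0], [1.0]])   # two evaluated initial states (1D example)
F_t = np.array([0.0, 1.0])       # their observed strategy features
mu, var = gp_posterior(X_t, F_t, np.array([0.5]), eta=1e-3)
```

Near an observed initial state the posterior variance collapses, while far from all observations it reverts to the prior variance; this contrast is exactly what the exploration stage of BDS exploits.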
Equation~\ref{eq:diversity-objective} can then be approximated by \begin{equation}\label{eq:diversity-approximation} \hat{a}(x_{t+1}) = \mathbb{E}_{\hat{f}_{t+1} \sim \mathcal{N}(\mu_t(x_{t+1}), \sigma^2_t(x_{t+1}))}\left[\min_{f_i \in F_t}{||\hat{f}_{t+1} - f_i||}\right] . \end{equation} Equation~\ref{eq:diversity-approximation} can be computed analytically for one-dimensional features, but becomes increasingly complicated as the feature dimension grows, or when the feature space is non-Euclidean as in our case with rotational features. Therefore, we compute Equation~\ref{eq:diversity-approximation} numerically with Monte-Carlo integration for simplicity. The surrogate model is just an approximation to the true function, and has large uncertainty where observations are lacking. Rather than only maximizing the function value when picking the next sample, BO methods usually also take into consideration the estimated uncertainty to avoid being overly greedy. For example, GP-UCB (Gaussian Process Upper Confidence Bound), one of the most popular BO algorithms, adds a variance term into its acquisition function. Similarly, we could adopt a composite acquisition function as follows: \begin{equation}\label{eq:diversity-ucb} a'(x_{t+1}) = \hat{a}(x_{t+1}) + \beta\sigma_t(x_{t+1}), \end{equation} where $\sigma_t(x_{t+1})$ is the heuristic term favoring candidates with large uncertainty, and $\beta$ is a hyperparameter trading off exploration and exploitation (diversity optimization in our case). A theoretically well-justified choice of $\beta$ exists for GP-UCB, which guarantees optimization convergence with high probability \cite{GP-bandit}. However, no such guarantees hold in our context, as we are not optimizing $f$ but rather the diversity of $f$; tuning the hyperparameter $\beta$ is thus nontrivial, especially when the strategy evaluation function $g(\cdot)$ is extremely costly. 
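A Monte-Carlo estimate of the acquisition $\hat{a}(x_{t+1})$ in Equation~\ref{eq:diversity-approximation} can be sketched as follows (the posterior mean and standard deviation would come from the GP surrogate; the sample count and feature values are illustrative):

```python
import numpy as np

def bds_acquisition_mc(mu, sigma, F_t, n_samples=2048, rng=None):
    """Estimate E[min_i ||f_hat - f_i||] for f_hat ~ N(mu, sigma^2 I)
    by Monte-Carlo integration over the GP posterior."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu = np.asarray(mu, dtype=float)
    F_t = np.asarray(F_t, dtype=float)           # (n, d) observed features
    samples = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    dists = np.linalg.norm(samples[:, None, :] - F_t[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

# Candidates whose predicted feature lies far from all observed features
# score higher and would be evaluated next.
far = bds_acquisition_mc([3.0, 0.0], 0.01, [[0.0, 0.0], [1.0, 0.0]])
near = bds_acquisition_mc([1.2, 0.0], 0.01, [[0.0, 0.0], [1.0, 0.0]])
```

The same sampling scheme extends unchanged to non-Euclidean feature distances (e.g., rotational features): only the distance computation inside the expectation needs to be replaced.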
To mitigate this problem, we decouple the two terms and alternate between exploration and exploitation following a similar idea proposed in \cite{alternative-explore-exploit}. During exploration, our acquisition function becomes: \begin{equation}\label{eq:explore-acquisition} a_\text{exp}(x_{t+1}) = \sigma_t(x_{t+1}). \end{equation} The sample with the largest posterior standard deviation is chosen as $x_{t+1}$ to be evaluated next: \begin{equation}\label{eq:explore-formula-2} x_{t+1} = \mathop{\arg\max}\limits_{x}\sigma_t(x). \end{equation} Under the condition that $g(\cdot)$ is a sample from the GP function distribution $\mathcal{GP}(m(\cdot), k(\cdot,\cdot))$, Equation~\ref{eq:explore-formula-2} can be shown to maximize the Information Gain $I$ on function $g(\cdot)$: \begin{equation}\label{eq:explore-formula-1} x_{t+1} = \mathop{\arg\max}\limits_{x}I\left(X_t \cup \{x\}, F_t \cup \{g(x)\} ; g\right), \end{equation} where $I(A;B)=H(A)-H(A|B)$, and $H(\cdot)=\mathbb{E}\left[-\log p(\cdot)\right]$ is the Shannon entropy \cite{information-theory}. We summarize our BDS algorithm in Algorithm~\ref{algorithm::BDS}. The alternation of exploration and diversity optimization involves two extra hyperparameters $N_\text{exp}$ and $N_\text{opt}$, corresponding to the number of samples allocated for exploration and diversity optimization in each round. Compared to $\beta$ in Equation~\ref{eq:diversity-ucb}, $N_\text{exp}$ and $N_\text{opt}$ are much more intuitive to tune. We also found that empirically the algorithm performance is insensitive to the specific values of $N_\text{exp}$ and $N_\text{opt}$. The exploitation stage directly maximizes the diversity of motion strategies. We optimize $\hat{a}(\cdot)$ with the sampling-based DIRECT (DIviding RECTangles) method \cite{Jones2001-DIRECT}, since derivative information may not be accurate in the presence of function noise due to the Monte-Carlo integration. 
This optimization does not have to be perfectly accurate, since the surrogate model is an approximation in the first place. The exploration stage facilitates the discovery of diverse strategies by avoiding repeated queries on well-sampled regions. We optimize $a_\text{exp}(\cdot)$ using a simple gradient-based method L-BFGS \cite{LBFGS}. \begin{algorithm}[t] \caption{Bayesian Diversity Search}\label{algorithm::BDS} \KwIn{Strategy evaluation function $g(\cdot)$, exploration count $N_\text{exp}$ and diversity optimization count $N_\text{opt}$, total sample count $n$.} \KwOut{Initial states $X_n = \{x_1, x_2, ..., x_n\}$ for diverse strategies.} $t=0$; $X_0 \leftarrow \varnothing$; $F_0 \leftarrow \varnothing$\; Initialize GP surrogate model with random samples\; \While{$t < n$} { \eIf{$t \% (N_\text{exp} + N_\text{opt}) < N_\text{exp}$} { $x_{t+1} \leftarrow \mathop{\arg\max}a_\text{exp}(\cdot)$ by L-BFGS; \tcp*[f]{Equation~\ref{eq:explore-acquisition}} } { $x_{t+1} \leftarrow \mathop{\arg\max}\hat{a}(\cdot)$ by DIRECT; \tcp*[f]{Equation~\ref{eq:diversity-approximation}} } $f_{t+1} \leftarrow g(x_{t+1})$\; $X_{t+1} \leftarrow X_t \cup \{x_{t+1}\}$; $F_{t+1} \leftarrow F_t \cup \{f_{t+1}\}$\; Update GP surrogate model with $X_{t+1}$, $F_{t+1}$; \hspace{8pt}\tcp{Equation~\ref{eq:gp-mu-sigma}} $t \leftarrow t+1$; } \Return $X_n$ \end{algorithm} \subsection{Stage 2: Novel Policy Seeking} \label{sec:methods-phase2} In Stage 2 of our diverse strategy discovery framework, we explore potential strategy variations given a fixed initial state discovered in Stage 1. Formally, given an initial state $x$ and a set of discovered policies $\Pi = \{\pi_1, \pi_2, ..., \pi_n\}$, we aim to learn a new policy $\pi_{n+1}$ which is different from all existing $\pi_i \in \Pi$. This can be achieved with an additional policy novelty reward to be jointly optimized with the task reward during DRL training. 
We measure the novelty of policy $\pi_i$ with respect to $\pi_j$ by their corresponding motion feature distance $||f_i - f_j||$. The novelty reward function is then given by \begin{equation} r_\text{novelty}(f) = \text{Clip}\left(\frac{\min_{\pi_i \in \Pi}{||f_i - f||}}{d_\text{threshold}}, 0.01, 1\right) , \end{equation} which rewards simulation rollouts exhibiting strategies different from those in the existing policy set. $d_\text{threshold}$ is a hyperparameter measuring the desired policy novelty to be learned next. Note that the feature representation $f$ here in Stage 2 can be the same as or different from the one used in Stage 1 for initial state exploration. Our novel policy search is in principle similar to the idea of \cite{zhang2019novel-policies,Sun2020novel-policies}. However, there are two key differences. First, in machine learning, policy novelty metrics have been designed and validated only on low-dimensional control tasks. For example in \cite{zhang2019novel-policies}, the policy novelty is measured by the reconstruction error between states from the current rollout and previous rollouts encapsulated as a deep autoencoder. In our case of high-dimensional 3D character control tasks, however, novel state sequences do not necessarily correspond to novel motion strategies. We therefore opt to design discriminative strategy features whose distances are incorporated into the DRL training reward. Second, we multiply the novelty reward with the task reward as the training reward, and adopt PPO, a standard gradient-based method, to train the policy. Additional optimization techniques, such as the Task-Novelty Bisector method proposed in \cite{zhang2019novel-policies} that modifies the policy gradients to encourage novelty learning, are not required for learning novel strategies. Our novel policy seeking procedure always discovers novel policies since the character is forced to perform a different strategy. 
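The novelty reward above can be sketched as follows (a minimal sketch; the feature vectors are illustrative, and \texttt{d\_threshold} is the desired-novelty hyperparameter):

```python
import numpy as np

def r_novelty(f, F_prev, d_threshold):
    """Reward rollouts whose strategy feature f is far from the features
    of previously discovered policies:
    clip(min_i ||f_i - f|| / d_threshold, 0.01, 1)."""
    f = np.asarray(f, dtype=float)
    dists = [np.linalg.norm(np.asarray(f_i, dtype=float) - f) for f_i in F_prev]
    return float(np.clip(min(dists) / d_threshold, 0.01, 1.0))

# Reproducing an existing strategy earns only the floor reward of 0.01,
# while a sufficiently different strategy earns the full reward of 1.
repeat = r_novelty([0.0, 0.0], [[0.0, 0.0], [2.0, 0.0]], d_threshold=1.0)
novel = r_novelty([5.0, 0.0], [[0.0, 0.0], [2.0, 0.0]], d_threshold=1.0)
```

The lower clip bound of $0.01$ keeps the multiplicative training reward nonzero, so rollouts that merely reproduce an existing strategy still provide a (weak) learning signal.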
However, the novel policies may exhibit unnatural and awkward movements when the given initial state does not permit multiple natural strategies. \section{RELATED WORK} We build on prior work from several areas, including character animation, diversity optimization, human pose modeling, and high-jump analysis from biomechanics and kinesiology. \subsection{Character Animation} Synthesizing natural human motion is a long-standing challenge in computer animation. We first briefly review kinematic methods, and then provide a more detailed review of physics-based methods. To the best of our knowledge, there are no previous attempts to synthesize athletic high jumps or obstacle jumps using either kinematic or physics-based approaches. Both tasks require precise coordination and exhibit multiple strategies. \paragraph{Kinematic Methods} Data-driven kinematic methods have demonstrated their effectiveness for synthesizing high-quality human motions based on captured examples. Such kinematic models have evolved from graph structures~\cite{Kovar:2002:MotionGraphs,Safonova-Interpolated-Motion-Graphs}, to Gaussian Processes~\cite{levine2012continuous, ye2010synthesis-responsive}, and recently to deep neural networks \cite{Holden:2017:PFNN, Zhang:2018:MANN, starke2019neural-state-machine, starke2020local-motion-phases, lee2018interactive-RNN, Ling20}. Non-parametric models that store all example frames have limited capability of generalizing to new motions due to their inherent nature of data interpolation \cite{Clavet16}. Compact parametric models learn an underlying low-dimensional motion manifold. Therefore they tend to generalize better as new motions not in the training dataset can be synthesized by sampling in the learned latent space~\cite{Holden16}. Completely novel motions and strategies, however, are still beyond their reach. Most fundamentally, kinematic models do not take into account physical realism, which is important for athletic motions. 
We thus cannot directly apply kinematic methods to our problem of discovering unseen strategies for highly dynamic motions. However, we do adopt a variational autoencoder (VAE) similar to the one in~\cite{Ling20} as a means to improve the naturalness of our learned motion strategies. \paragraph{Physics-based Methods} Physics-based control and simulation methods generate motions with physical realism and environmental interactions. The key challenge is the design or learning of robust controllers. Conventional manually designed controllers have achieved significant success for locomotion, e.g.,~\cite{Yin07,wang2009optimizing,Coros10,wang2012optimizing,geijtenbeek2013flexible,lee2014locomotion,felis2016synthesis,jain2009optimization}. The seminal work from Hodgins \textit{et al.} demonstrated impressive controllers for athletic skills such as a handspring vault, a standing broad jump, a vertical leap, somersaults to different directions, and platform dives~\cite{Hodgins95,Wooten98}. Such handcrafted controllers are mostly designed with finite state machines (FSM) and heuristic feedback rules, which require deep human insight and domain knowledge, and tedious manual trial and error. Zhao and van de Panne~\shortcite{Zhao05} thus proposed an interface to ease such a design process, and demonstrated controllers for diving, skiing and snowboarding. Controls can also be designed using objectives and constraints adapted to each motion phase, e.g.,~\cite{jain2009optimization,deLasa2010}, or developed using a methodology that mimics human coaching~\cite{ha2014-human-coach}. In general, manually designed controllers remain hard to generalize to different strategies or tasks. With the wide availability of motion capture data, many research endeavors have been focused on tracking-based controllers, which are capable of reproducing high-quality motions by imitating motion examples. 
Controllers for a wide range of skills have been demonstrated through trajectory optimization \cite{sok2007simulating,da2008interactive,muico2009contact,lee2010data-driven-biped,ye2010optimal,lee2014locomotion}, sampling-based algorithms \cite{Liu:2010:Samcon,Liu:2015:Samcon2,Liu16}, and deep reinforcement learning~\cite{Peng2017deepLoco,Peng:2018:DeepMimic,Peng:2018:SFV,ma2021spacetime,Liu18,Lee19}. Tracking controllers have also been combined with kinematic motion generators to support interactive control of simulated characters~\cite{Bergamin:2019:DReCon,Park:2019:LPS,won2020scalable}. Even though tracking-based methods have demonstrated their effectiveness in achieving task-related goals~\cite{Peng:2018:DeepMimic}, the imitation objective inherently restricts them from generalizing to novel motion strategies fundamentally different from the reference. Most recently, style exploration has also been demonstrated within a physics-based DRL framework using spacetime bounds~\cite{ma2021spacetime}. However, these remain style variations rather than strategy variations. Moreover, high jumping motion capture examples are difficult to find. We obtained captures of three high jump strategies, against which we compare our synthetic results. Our goal is to discover as many strategies as possible, so example-free methods are most suitable in our case. Various tracking-free methods have been proposed via trajectory optimization or deep reinforcement learning. Heess \textit{et al.} \shortcite{Heess:2017:Emergence} demonstrate a rich set of locomotion behaviors emerging from just complex environment interactions. However, the resulting motions show limited realism in the absence of effective motion quality regularization. 
Better motion quality is achievable with sophisticated reward functions and domain knowledge, such as sagittal symmetry, which do not directly generalize beyond locomotion \cite{Yu:2018:LSLL,coumans2016pybullet,Xie2020allsteps,mordatch2013-cio-locomotion,mordatch2015interactive}. Synthesizing diverse physics-based skills without example motions generally requires optimization with detailed cost functions that are engineered specifically for each skill~\cite{AlBorno13}, and often only works for simplified physical models~\cite{Mordatch12}. \subsection{Diversity Optimization} Diversity Optimization is a problem of great interest in artificial intelligence \cite{hebrard2005finding-diverse,ursem2002diversity,srivastava2007diverse-plan,coman2011generating-diverse,pugh2016quality-diversity,lehman2011novelty-search}. It is formulated as searching for a set of configurations such that the corresponding outcomes have a large diversity while satisfying a given objective. Diversity optimization has also been utilized in computer graphics applications~\cite{merrell2011interactive,Agrawal:2013:Diverse}. For example, a variety of 2D and simple 3D skills have been achieved through jointly optimizing task objectives and a diversity metric within a trajectory optimization framework~\cite{Agrawal:2013:Diverse}. Such methods are computationally prohibitive for our case as learning the athletic tasks involves expensive DRL training through non-differentiable simulations, e.g., a single strategy takes six hours to learn even on a high-end desktop. We propose a diversity optimization algorithm based on the successful Bayesian Optimization (BO) philosophy for sample-efficient black-box function optimization. In Bayesian Optimization, objective functions are optimized purely through function evaluations as no derivative information is available. 
A Bayesian statistical \textit{surrogate model}, usually a Gaussian Process (GP) \cite{rasmussen2003gaussian-process}, is maintained to estimate the value of the objective function along with the uncertainty of the estimation. An \textit{acquisition function} is then repeatedly maximized for fast decisions on where to sample next for the actual expensive function evaluation. The next sample needs to be promising in terms of maximizing the objective function predicted by the surrogate model, and also informative in terms of reducing the uncertainty in less explored regions of the surrogate model~\cite{jones1998-bo-expected-improvement, frazier2009-bo-knowledge-gradient, GP-bandit}. BO has been widely adopted in machine learning for parameter and hyperparameter optimizations \cite{snoek2015scalable,klein2017fast,kandasamy2018neural,kandasamy2020tuning,korovina2020chembo,snoek2012practical}. Recently, BO has also seen applications in computer graphics~\cite{koyama2017sequential,koyama2020sequential}, such as parameter tuning for fluid animation systems \cite{brochu2007preference}. We propose a novel acquisition function to encourage discovery of diverse motion strategies. We also decouple the exploration from the maximization for more robust and efficient strategy discovery. We name this algorithm Bayesian Diversity Search (BDS). The BDS algorithm searches for diverse strategies by exploring a low-dimensional initial state space defined at the take-off moment. Initial state exploration has been applied to find appropriate initial conditions for desired landing controllers \cite{ha2012falling}. In the context of DRL training, initial states are usually treated as hyperparameters rather than being explored. 
Recently, a variety of DRL-based methods have been proposed in machine learning to discover diverse control policies, e.g., \cite{Eysenbach19,zhang2019novel-policies,Sun2020novel-policies,achiam2018variational,sharma2019dynamics,haarnoja2018soft,conti2018improving,houthooft2016vime,hester2017intrinsically,schmidhuber1991curious}. These methods mainly encourage exploration of unseen states or actions by jointly optimizing the task and novelty objectives~\cite{zhang2019novel-policies}, or optimizing intrinsic rewards such as heuristically defined curiosity terms~\cite{Eysenbach19,sharma2019dynamics}. We adopt a similar idea for novelty seeking in Stage~2 of our framework after BDS, but with a novelty metric and reward structure more suitable for our goal. Coupled with the Stage~1 BDS, we are able to learn a rich set of strategies for challenging tasks such as athletic high jumping. \subsection{Natural Pose Space} In biomechanics and neuroscience, it is well known that muscle synergies, or muscle co-activations, serve as motor primitives for the central nervous system to simplify movement control of the underlying complex neuromusculoskeletal systems~\cite{Overduin15,Zhao19}. In character animation, human-like character models are much simplified, but are still parameterized by 30+ DoFs. Yet the natural human pose manifold learned from motion capture databases is of much lower dimension \cite{Holden16}. The movements of joints are highly correlated, as they are typically strategically coordinated and co-activated. Such correlations have been modelled through traditional dimensionality reduction techniques such as PCA \cite{chai2005performance-lowd}, or more recently, Variational AutoEncoders (VAE) \cite{habibie2017,Ling20}. We rely on a VAE learned from mocap databases to produce natural target poses for our DRL-based policy network. 
Searching for behaviors in low-dimensional spaces has been employed in physics-based character animation to both accelerate the nonlinear optimization and improve the motion quality \cite{Safonova:2004:OptPCA}. Throwing motions based on muscle synergies extracted from human experiments have been synthesized on a musculoskeletal model \cite{cruz2017synergy}. Recent DRL methods either directly imitate mocap examples \cite{Peng:2018:DeepMimic, won2020scalable}, which makes strategy discovery hard, if at all possible; or adopt a \textit{de novo} approach with no example at all \cite{Heess2015learning}, which often results in extremely unnatural motions for human-like characters. Close in spirit to our work is \cite{ranganath2019lowd-joint-coactivation}, where a low-dimensional PCA space learned from a single mocap trajectory is used as the action space of DeepMimic for tracking-based control. We aim to discover new strategies without tracking, and we use a large set of generic motions to deduce a task-and-strategy-independent natural pose space. We also add action offsets to the P-VAE output poses so that large joint activation can be achieved for powerful take-offs. {Reduced or latent parameter spaces based on statistical analysis of poses have been used for grasping control \cite{Ciocarlie10,Andrews13,Osa18}. A Trajectory Generator (TG) can provide a compact parameterization that can enable learning of reactive policies for complex behaviors~\cite{Iscen18}. Motion primitives can also be learned from mocap and then be composed to learn new behaviors~\cite{Peng19}.} \subsection{History and Science of High Jump} The high jump is one of the most technically complex, strategically nuanced, and physiologically demanding sports among all track and field events \cite{Donnelly14}. Over the past 100 years, high jump has evolved dramatically in the Olympics. 
Here we summarize the well-known variations \cite{SevenStyles,Donnelly14}, and we refer readers to our supplemental video for more visual illustrations. \begin{itemize} \item {The Hurdle}: the jumper runs straight-on to the bar, raises one leg up to the bar, and quickly raises the other one over the bar once the first has cleared. The body clears the bar upright. \item {Scissor Kick}: the jumper approaches the bar diagonally, throws first the inside leg and then the other over the bar in a scissoring motion. The body clears the bar upright. \item {Eastern Cutoff}: the jumper takes off like the scissor kick, but extends his back and flattens out over the bar. \item {Western Roll}: the jumper also approaches the bar diagonally, but the inner leg is used for the take-off, while the outer leg is thrust up to lead the body sideways over the bar. \item {The Straddle}: similar to the Western Roll, but the jumper clears the bar face-down. \item {Fosbury Flop}: the jumper approaches the bar on a curved path and leans away from the bar at the take-off point to convert horizontal velocity into vertical velocity and angular momentum. In flight, the jumper progressively arches their shoulders, back, and legs in a rolling motion, and lands on their neck and back. The jumper's Center of Mass (CoM) can pass under the bar while the body arches and slides above the bar. It has been the favored high jump technique in Olympic competitions since Dick Fosbury used it in the 1968 Summer Olympics. It was concurrently developed by Debbie Brill. \end{itemize} In biomechanics, kinesiology, and physical education, high jumps have been analyzed to a limited extent. We adopt the force limits reported in \cite{Okuyama03} in our simulations. Dapena simulated a higher jump by making small changes to a recorded jump \cite{Dapena02}. Mathematical models of the Center of Mass (CoM) movement have been developed to offer recommendations to increase the effectiveness of high jumps~\cite{Adashevskiy13}. 
\section{Task Setup and Implementation} \label{sec:Experiments} We demonstrate diverse strategy discovery for two challenging motor tasks: high jumping and obstacle jumping. We also tackle several variations of these tasks. We describe task-specific settings in Section~\ref{sec:Experiments-Task-Setup}, and implementation details in Section~\ref{sec:Experiments-Implementation}. \subsection{Task Setup} \label{sec:Experiments-Task-Setup} The high jump task follows Olympic rules, where the simulated athlete takes off with one leg, clears the crossbar, and lands on a crash mat. We model the landing area as a rigid block for simplicity. The crossbar is modeled as a rigid wall vertically extending from the ground to the target height to prevent the character from cheating during early training, i.e., passing through beneath the bar. A rollout is considered successful and terminated when the character lands on the rigid box with all body parts at least $20$ $cm$ away from the wall. A rollout is considered a failure and terminated immediately if any body part touches the wall, or any body part other than the take-off foot touches the ground, or if the jump does not succeed within two seconds after the take-off. The obstacle jump shares most of the settings of the high jump. The character takes off with one leg, clears a box-shaped obstacle of $50$ $cm$ in height with variable widths, then lands on a crash mat. The character is required to complete the task within two seconds as well, and not allowed to touch the obstacle with any body part. \subsubsection{Run-up Learning} \label{sec:Experiments-Runup} A full high jump or obstacle jump consists of three phases: run-up, take-off and landing. Our framework described so far can discover good initial states at take-off that lead to diverse jumping strategies. What is lacking are the matching run-up control policies that can prepare the character to reach these good take-off states at the end of the run. 
We train the run-up controllers with DeepMimic \cite{Peng:2018:DeepMimic}, where the DRL learning reward consists of a task reward and an imitation reward. The task reward encourages the end state of the run-up to match the desired take-off state of the jump. The imitation reward guides the simulation to match the style of the reference run. We use a curved sprint as the reference run-up for high jump, and a straight sprint for the obstacle jump run-up. For high jump, the explored initial state space is four-dimensional: the desired approach angle $\alpha$, the $X$ and $Z$ components of the root angular velocity $\omega$, and the magnitude of the $Z$ component of the root linear velocity $v_z$ in a facing-direction invariant frame. We fix the desired root $Y$ angular velocity to $3 \text{rad}/\text{s}$, which is taken from the reference curved sprint. In summary, the task reward $r_\text{G}$ for the run-up control of a high jump is defined as \begin{equation} r_\text{G} = \text{exp}\left(-\frac{1}{3}\cdot||\omega - \bar{\omega}||_1 - 0.7\cdot(v_z - \bar{v}_z)^2\right), \end{equation} where $\bar{\omega}$ and $\bar{v}_z$ are the corresponding targets for $\omega$ and $v_z$. $\alpha$ does not appear in the reward function as we simply rotate the high jump suite in the environment to realize different approach angles. For the obstacle jump, we explore a three-dimensional take-off state space consisting of the root angular velocities along all axes. Therefore the run-up control task reward $r_\text{G}$ is given by \begin{equation} r_\text{G} = \text{exp}(-\frac{1}{3}\cdot||\omega - \bar{\omega}||_1). \end{equation} \input{tables/curriculum} \subsubsection{Reward Function} \label{sec:Experiments-Reward} We use the same reward function structure for both high jumps and obstacle jumps, where the character gets a sparse reward only when it successfully completes the task. The full reward function is defined as in Equation~\ref{eq:stage1-reward} for Stage 1. 
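The two run-up task rewards $r_\text{G}$ defined above can be evaluated directly; the inputs below are placeholder values, not targets taken from the reference runs:

```python
import numpy as np

def highjump_runup_reward(omega, omega_bar, v_z, v_z_bar):
    # r_G = exp(-1/3 * ||omega - omega_bar||_1 - 0.7 * (v_z - v_z_bar)^2)
    l1 = np.abs(np.asarray(omega) - np.asarray(omega_bar)).sum()
    return float(np.exp(-l1 / 3.0 - 0.7 * (v_z - v_z_bar) ** 2))

def obstacle_runup_reward(omega, omega_bar):
    # r_G = exp(-1/3 * ||omega - omega_bar||_1)
    l1 = np.abs(np.asarray(omega) - np.asarray(omega_bar)).sum()
    return float(np.exp(-l1 / 3.0))

# A perfectly matched take-off state gives the maximum reward of 1;
# the reward decays smoothly as the end state drifts from the target.
r = highjump_runup_reward([0.2, 3.0, 0.1], [0.2, 3.0, 0.1], 1.5, 1.5)
```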
For Stage 2, the novelty bonus $r_\text{novelty}$, as discussed in Section~\ref{sec:methods-phase2}, is added: \begin{equation} r = r_\text{task}\cdot r_\text{naturalness}\cdot r_\text{novelty}. \label{eq:stage2-reward} \end{equation} $r_\text{naturalness}$ is the motion naturalness term discussed in Section~\ref{sec:methods-PVAE}. For both stages, the task reward consists of three terms: \begin{equation} r_\text{task} = r_{\text{complete}} \cdot r_{\omega} \cdot r_\text{safety}. \end{equation} $r_\text{complete}$ is a binary reward precisely corresponding to task completion. $r_{\omega} = \text{exp}(-0.02 ||\omega||)$ penalizes excessive root rotations, where $||\omega||$ is the average magnitude of the root angular velocities across the episode. $r_\text{safety}$ is a term to penalize unsafe head-first landings. We set it to $0.7$ for unsafe landings and $1.0$ otherwise. $r_\text{safety}$ can also be further engineered to generate more landing styles, such as landing on the feet, as shown in Figure~\ref{fig:variations}. \subsubsection{Curriculum and Scheduling} \label{sec:Experiments-Curriculum} The high jump is a challenging motor skill that requires years of training even for professional athletes. We therefore adopt curriculum-based learning to gradually increase the task difficulty $z$, defined as the crossbar height in high jumps or the obstacle width in obstacle jumps. Detailed curriculum settings are given in Table~\ref{tb:curriculum}, where $z_\text{min}$ and $z_\text{max}$ specify the range of $z$, and $\Delta z$ is the increment when moving to a higher difficulty level. We adaptively schedule the curriculum to increase the task difficulty according to the DRL training performance. At each training iteration, the average sample reward is added to a reward accumulator. We increase $z$ by $\Delta z$ whenever the accumulated reward exceeds a threshold $R_T$, and then reset the reward accumulator. 
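The adaptive curriculum scheduling just described can be sketched as follows; the default values are illustrative placeholders, while the actual per-task settings are listed in Table~\ref{tb:curriculum}:

```python
class AdaptiveCurriculum:
    """Raise task difficulty z whenever accumulated reward passes R_T.

    The defaults below are illustrative placeholders; the paper's
    settings (z_min, z_max, dz, R_T) are given in its curriculum table.
    """

    def __init__(self, z_min=0.5, z_max=2.0, dz=0.05, reward_threshold=100.0):
        self.z = z_min
        self.z_max = z_max
        self.dz = dz
        self.reward_threshold = reward_threshold
        self.accumulated = 0.0

    def update(self, avg_sample_reward):
        # Called once per training iteration with the average sample
        # reward; advances the difficulty when the accumulator passes
        # the threshold, then resets the accumulator.
        self.accumulated += avg_sample_reward
        if self.accumulated >= self.reward_threshold and self.z < self.z_max:
            self.z = min(self.z + self.dz, self.z_max)
            self.accumulated = 0.0  # reset the reward accumulator
        return self.z
```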
Detailed settings for $\Delta z$ and $R_T$ are listed in Table~\ref{tb:curriculum}. The curriculum could also be scheduled following the simpler scheme adopted in \cite{Xie2020allsteps}, where task difficulty is increased when the average sample reward in each iteration exceeds a threshold. We found that for athletic motions, such an average sample reward threshold is hard to define uniformly for different strategies in different training stages. Throughout training, the control frequency $f_\text{control}$ and the P-VAE offset penalty coefficient $c_\text{offset}$ in Equation \ref{eq:r-pvae} are also scheduled according to the task difficulty, in order to encourage exploration and accelerate training in early stages. We set $f_\text{control} = 10 + 20 \cdot \text{Clip}(\rho, 0, 1)$ and $c_\text{offset} = 48 - 33 \cdot \text{Clip}(\rho, 0, 1)$, where $\rho = 2z-1$ for high jumps and $\rho = z$ for obstacle jumps. We find that in practice the training performance does not depend sensitively on these hyperparameters. \input{tables/model_parameter} \subsubsection{Strategy Features} \label{sec:Experiments-Strategy-Features} We choose low-dimensional and visually discriminative features $f$ of learned strategies for effective diversity measurement of discovered strategies. In the sports literature, high jump techniques are usually characterized by the body orientation when the athlete clears the bar at his peak position. The rest of the body limbs are then coordinated in the optimal way to clear the bar as high as possible. Therefore, we use the root orientation when the character's CoM lies in the vertical crossbar plane as $f$. This three-dimensional root orientation serves well as a Stage 2 feature for high jumps. For Stage 1, this feature can be further reduced to one dimension, as we will show in Section \ref{sec:Experiments-Diverse-Strategies}. 
More specifically, we only measure the angle between the character's root direction and the global up vector, which corresponds to whether the character clears the bar facing upward or downward. Such a feature avoids the need to handle a non-Euclidean GP output space in Stage 1. We use the same set of features for obstacle jumps, except that root orientations are measured when the character's CoM lies in the center vertical plane of the obstacle. Note that it is not necessary to train to completion, i.e., the maximum task difficulty, to evaluate the feature diversity, since the overall jumping strategy usually remains unchanged after a given level of difficulty, which we denote by $z_\text{freeze}$. Based on empirical observations, we terminate the training after reaching $z_\text{freeze}=100$ $cm$ for both high jump and obstacle jump tasks for strategy discovery. \subsubsection{GP Priors and Kernels} \label{sec:Experiments-GP-Priors-Kernels} We set GP prior $m(\cdot)$ and kernel $k(\cdot,\cdot)$ for BDS based on common practices in the Bayesian optimization literature. Without any knowledge of the strategy feature distribution, we set the prior mean $m(\cdot)$ to be the mean of the value range of a feature. Among the many common choices for kernel functions, we adopt the Mat\'ern$^5/_2$ kernel \cite{matern1960spatial,klein2017fast}, defined as: \begin{equation} k_{^5/_2}(x, x') = \theta(1 + \sqrt{5}d_{\lambda}(x,x') + \frac{5}{3}d^{2}_{\lambda}(x, x'))e^{-\sqrt{5}d_{\lambda}(x, x')} \end{equation} where $\theta$ and $\lambda$ are learnable parameters of the GP. $d_{\lambda}(x,x') = \sqrt{(x-x')^{T}\mathop{\mathrm{diag}}\nolimits(\lambda)(x-x')}$ is the Mahalanobis distance. \subsection{Implementation} \label{sec:Experiments-Implementation} We implemented our system in PyTorch \cite{PyTorch} and PyBullet \cite{coumans2016pybullet}. The simulated athlete has 28 internal DoFs and 34 DoFs in total. We run the simulation at 600~$Hz$. 
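The Mat\'ern$^5/_2$ kernel above translates directly into code; here $\theta$ and $\lambda$ are fixed constants rather than learnable GP parameters:

```python
import numpy as np

def matern52(x, x_prime, theta=1.0, lam=None):
    # k(x, x') = theta * (1 + sqrt(5) d + 5/3 d^2) * exp(-sqrt(5) d),
    # with d the Mahalanobis distance under diag(lam).
    x, x_prime = np.asarray(x, float), np.asarray(x_prime, float)
    if lam is None:
        lam = np.ones_like(x)
    diff = x - x_prime
    d = np.sqrt(np.sum(lam * diff * diff))
    s5d = np.sqrt(5.0) * d
    return theta * (1.0 + s5d + (5.0 / 3.0) * d * d) * np.exp(-s5d)
```

The kernel equals $\theta$ at $x = x'$ and decays monotonically with the scaled distance, which is what allows the GP to interpolate smoothly between evaluated initial states.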
{Torque limits for the hips, knees and ankles are taken from biomechanics estimates for a human athlete performing a Fosbury flop \cite{Okuyama03}. Torque limits for other joints are kept the same as in \cite{Peng:2018:DeepMimic}. Joint angle limits are implemented by penalty forces.} We captured three standard high jumps from a university athlete, whose body measurements are given in Table~\ref{tb:modelParams}. For comparison, we also list these measurements for our virtual athlete. For DRL training, we set $\lambda=0.95$ for both TD($\lambda$) and GAE($\lambda$). We set the discount factor $\gamma=1.0$ since our tasks have short horizons and sparse rewards. The PPO clip threshold is set to $0.02$. The learning rate is $2.5\times 10^{-5}$ for the policy network and $1.0\times 10^{-2}$ for the value network. In each training iteration, we sample 4096 state-action tuples in parallel and perform five policy updates with a mini-batch size of 256. For Stage 1 diverse strategy discovery, we implement BDS using GPFlow \cite{GPflow2020multioutput-gp} with both $N_\text{exp}$ and $N_\text{opt}$ set to three. $d_\text{threshold}$ in Stage 2 novel policy seeking is set to $\pi/2$. Our experiments are performed on a Dell Precision 7920 Tower workstation, with dual Intel Xeon Gold 6248R CPUs (3.0 GHz, 48 cores) and an Nvidia Quadro RTX 6000 GPU. Simulations are run on the CPUs. One strategy evaluation for a single initial state, i.e., Line 9 in Algorithm~\ref{algorithm::BDS}, typically takes about six hours. Network updates are performed on the GPU. \section{Representative Take-off State Features} \label{app:takeoffStates} We list representative take-off state features discovered through BDS in Table~\ref{tb:take-off-states} for high jumps and Table~\ref{tb:take-off-states-box} for obstacle jumps. The approach angle $\alpha$ for high jumps is defined as the wall orientation in a facing-direction invariant frame. 
The orientation of the wall is given by the line $x\sin\alpha - z\cos\alpha = 0$. \section{Learning Curves} \label{app:curves} {We plot Stage 1 DRL learning and curriculum scheduling curves for two high jump strategies in Figure~\ref{fig:learning-curves}. An initial solution for the starting bar height $0.5m$ can be learned relatively quickly. After a certain bar height has been reached (around $1.4m$), the return starts to drop because larger action offsets are needed to jump higher, which decreases the $r_\text{naturalness}$ in Equation~\ref{eq:r-pvae} and therefore the overall return in Equation~\ref{eq:stage1-reward}. Subjectively speaking, the learned motions remain just as natural for high crossbars, since the lower return is due to the penalty on action offsets.}
\section{Dataset} \label{section:data} Our dataset consists of play-by-play match event data for the 2017/2018 season of the English Premier League, Spanish LaLiga, German 1. Bundesliga, Italian Serie A, French Ligue Un, Dutch Eredivisie and Belgian Pro League provided by SciSports' partner Wyscout. Our dataset represents each match as a sequence of roughly 1500 on-the-ball events such as shots, passes, crosses, tackles, and interceptions. For each event, our dataset contains a reference to the team, a reference to the player, the type of the event (e.g., a pass), the subtype of the event (e.g., a cross), 59 boolean indicators providing additional context (e.g., accurate or inaccurate and left foot or right foot), the start location on the pitch and, when relevant, also the end location on the pitch. \section{Introduction} \label{sec:introduction} Over the last few decades, professional football has turned into a multi-billion-euro industry. During the 2017/2018 season, the total revenue of all twenty English Premier League clubs combined exceeded a record high of 5.3 billion euro, with 1.8 billion euro spent on the acquisition of players in the 2017 summer transfer window alone~\cite{deloitte2018annual,cies2018seas}. Hence, the player recruitment process has become of vital importance to professional football clubs. That is, football clubs that fail to strengthen their squads with the right players put their futures at stake. Compared to twenty years ago, scouts at football clubs have far more tools at their disposal to find appropriate players for their clubs. 
Scouts nowadays can watch extensive video footage for a large number of players via platforms like Wyscout\footnote{\url{https://www.wyscout.com}} and obtain access to advanced statistics, performance metrics and player rankings via platforms like SciSports Insight.\footnote{\url{https://insight.scisports.com}} Although these tools provide many insights into the abilities and general level of a player, they largely fail to answer the question whether the player would fit a certain team's playing style. For example, star players like Cristiano Ronaldo and Lionel Messi operate at a comparable level but clearly act and behave in different ways on the pitch. To help answer this question, we compose a set of 21 candidate player roles in consultation with the SciSports Datascouting department and introduce an approach to automatically identify the most applicable roles for each player from play-by-play event data collected during matches. To optimally leverage the available domain knowledge, we adopt a supervised learning approach and pose this task as a probabilistic classification task. For each player, we first extract a set of candidate features from the match event data and then perform a separate classification task for each of the candidate player roles. \section{Approach} \label{section:approach} In this section, we propose a method to automatically derive the most applicable player roles for each player from play-by-play match event data. We adopt a supervised learning approach to optimally leverage the available domain knowledge within the SciSports Datascouting department. More specifically, we address this task by performing a probabilistic classification task for each of the candidate roles introduced in Section~\ref{section:player-roles}. In the remainder of this section, we explain the feature engineering and probabilistic classification stages in more detail. 
\subsection{Feature engineering} \label{sub:ftrc} The goal of the feature engineering stage is to obtain a set of features that both characterizes the candidate player roles and distinguishes between them. The most important challenges are to distinguish between the qualities and quantities of a player's actions and to account for the strength of a player's teammates and opponents as well as the tactics and strategies employed by a player's team. We perform the feature engineering stage in two steps. In the first step, we compute a set of basic statistics and more advanced performance metrics. We base these statistics and metrics on the specific tasks players in the different roles perform. We normalize these basic statistics and metrics in two different ways to improve their comparability across players. In the second step, we standardize the obtained statistics and metrics to facilitate the learning process and construct features by combining them based on domain knowledge. \subsubsection{Computing statistics and metrics.} We compute a set of 242 basic statistics and metrics for each player across a set of matches (e.g., one full season of matches). This set includes statistics like the number of saves from close range, the number of duels on the own half, the number of passes given into the opposite box, the number of long balls received in the final third, the number of offensive ground duels on the flank and the number of high-quality attempts. For example, a ball-winning midfielder is tasked with regaining possession, aggressively pressing opponents and laying off simple passes to more creative teammates. Hence, we compute statistics like the number of interceptions in the possession zone (i.e., the middle of the pitch), the number of defensive duels won in the possession zone, and the number of low-risk passes from the possession zone. We normalize these statistics and metrics in two different ways to obtain 484 features. 
First, we normalize the values by the total number of actions performed by the player. This normalization helps to improve the comparability across players on strong and weak teams, where strong teams tend to perform more on-the-ball actions. Second, we normalize the values by the total number of actions performed by the player's team. This normalization helps to identify key players that are often in possession of the ball (e.g., playmakers). In addition, we compute 31 metrics including centrality measures in the passing network, defensive presence in the team and a player's wide contribution. \subsubsection{Constructing features.} We construct a set of 181 key features by combining the statistics and metrics. In order to facilitate the learning process, we standardize our set of 515 features. More specifically, we project the feature values to the $[\mu-2\sigma, \mu+2\sigma]$ interval, where $\mu = 0$ and $\sigma = 1$. We project any extreme values that fall outside the interval to the interval bounds. We use the resulting set of standardized features to construct the key feature set. For example, we combine the statistics representing the number of attempts from the area immediately in front of goal and the expected-goals metrics for these attempts into a key feature reflecting the frequency of high-quality attempts from short range. \subsection{Probabilistic classification} \label{sub:modc} We obtain labels for a portion of our dataset from the SciSports Datascouting department. In particular, we request the Datascouting department to assign the primary role to each of the players in our dataset they feel confident enough about to assess. Although football players often fulfill several different roles during the course of a season or even within a single match, we only collect each player's primary role to simplify our learning setting. We perform a separate binary classification task for each of the 21 roles. 
For each role, we construct a separate dataset in two steps. First, we obtain all examples of players that were assigned that role and label them as positive examples. Second, we obtain all examples of players that were assigned one of the ``disconnected'' roles in Figure~\ref{fig:player_roles} as well as 25\% of the examples for players that were assigned one of the ``connected'' roles and label them as negative examples. We notice that keeping 25\% of the examples for players having a connected role improves the accuracy of our classification models. If we keep all connected examples, the learner focuses on the specific tasks that distinguish between the role of interest and the connected roles. If we remove all connected examples, the learner focuses on distinguishing between positions rather than specific roles. \begin{figure}[!htp] \vspace{-10pt} \begin{center} \includegraphics[width=\textwidth]{fig/normal_features.png} \caption{The five most relevant features for ball-winning midfielders. The blue dots represent players labeled as ball-winning midfielders, while the black dots represent players labeled as any of the other roles. The blue lines represent the optimal region.} \label{fig:norm_feat} \end{center} \vspace{-20pt} \end{figure} \begin{figure}[!htp] \vspace{-10pt} \begin{center} \includegraphics[width=\textwidth]{fig/distance_features.png} \caption{The five most relevant features for ball-winning midfielders after the transformation. The blue dots represent players labeled as ball-winning midfielders, while the black dots represent players labeled as any of the other roles. The optimal value is 0.} \label{fig:dist_feat} \end{center} \vspace{-20pt} \end{figure} We perform a final transformation of the key feature values before they are fed into the classification algorithm. For each key feature and each role, we first define the optimal feature value range and then compute the distance between the actual feature value and the optimal range. 
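A minimal sketch of this final transformation, assuming the optimal range is taken between the $\beta$ and $1-\beta$ percentiles of the positively labeled examples (the toy data below is illustrative, not real player data):

```python
import numpy as np

def range_distance(values, positives, beta=0.25):
    """Distance of each feature value to a role's optimal range.

    The optimal range spans the beta and (1 - beta) percentiles of
    the positively labeled players; values inside the range map to 0,
    values outside map to their distance to the nearest bound, so the
    clipped [-2, 2] feature range becomes [0, 4] with 0 optimal.
    """
    values = np.clip(np.asarray(values, float), -2.0, 2.0)
    lo = np.percentile(positives, 100.0 * beta)
    hi = np.percentile(positives, 100.0 * (1.0 - beta))
    below = np.maximum(lo - values, 0.0)
    above = np.maximum(values - hi, 0.0)
    return below + above

# Toy example: positives cluster around 1, so a value of -2 is far
# from the optimal range while a value of 1 lies inside it.
pos = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
d = range_distance(np.array([-2.0, 1.0, 2.0]), pos)
```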
We determine the optimal range by computing percentile bounds on the labeled training data. We set a parameter $\beta$, which translates into a lower bound of $\beta$ and an upper bound of $1-\beta$, as can be seen in Figure~\ref{fig:norm_feat}. For example, $\beta = 0.25$ translates into the $[0.25,0.75]$ range. We set the feature values that fall within the optimal range to 0 and all other feature values to their distance to the closest bound of the optimal range. Hence, we project the original $[-2,2]$ range to a $[0,4]$ range, where $0$ is the optimal value, as can be seen in Figure~\ref{fig:dist_feat}. \section{Conclusion} This paper proposed 21 player roles to characterize the playing styles of football players and presented an approach to automatically derive the most applicable roles for each player from play-by-play event data. Our supervised learning approach proceeds in two steps. First, we compute a large set of statistics and metrics that both characterize the different roles and help distinguish between the roles from match data. Second, we perform a binary classification task for each role leveraging labeled examples obtained from the SciSports Datascouting department. Our approach goes beyond existing techniques by not only deriving each player's standard position (e.g., an attacking midfielder in a 4-2-3-1 formation) but also his specific role within that position (e.g., an advanced playmaker). Our experimental evaluation demonstrates our approach for deriving five roles for central midfielders from data collected during the 2017/2018 season. In the future, we plan to further improve and extend our approach. We will investigate different learning algorithms to tackle the classification task (e.g.,~\texttt{XGBoost}) as well as different learning settings. More specifically, we aim to obtain a richer set of labels from the SciSports Datascouting department. 
In addition to each player's primary role, we also wish to collect possible alternative player roles, which would turn our task into a multi-label classification setting. \section{Player roles} \label{section:player-roles} \begin{figure}[htp] \begin{center} \includegraphics[width=\textwidth]{fig/player_roles.png} \caption{A graphical overview of the 21 proposed player roles for six different positions. Roles that have tasks and duties in common are connected by arrows. Roles that apply to the same position are in the same color. Roles for goalkeepers are in green, roles for defenders are in light blue, roles for backs are in red, roles for central midfielders are in dark blue, roles for wingers are in pink and roles for forwards are in purple.} \label{fig:player_roles} \end{center} \end{figure} We compile a list of 21 player roles based on an extensive sports-media search (e.g.,~\cite{arse2016dm,espn2018dm,thompson2018four}) and the Football Manager 2018 football game, which is often renowned for its realism and used by football clubs in their player recruitment process (e.g.,~\cite{goal2012how,sullivan2016beautiful}). In consultation with the football experts from the SciSports Datascouting department, we refine the obtained list and also introduce two additional roles, which are the mobile striker and the ball-playing goalkeeper. The mobile striker is dynamic, quick and aims to exploit the space behind the defensive line. The ball-playing goalkeeper has excellent ball control and passing abilities and often functions as an outlet for defenders under pressure. For each of these roles, we define a set of key characteristics that can be derived from play-by-play match event data. Most of the defined player roles describe players who excel in technical ability, game intelligence, strength, pace or endurance. 
Figure~\ref{fig:player_roles} presents an overview of the 21 player roles, where related roles are connected by arrows and roles for the same position are in the same color. For example, a full back and wing back have similar defensive duties, which are marking the opposite wingers, recovering the ball, and preventing crosses and through balls. In contrast, a wing back and winger have similar offensive duties, which are providing width to the team, attempting to dribble past the opposite backs, and creating chances by crossing the ball into the opposite box. Due to space constraints, we focus this paper on the five roles for central midfielders. We now present each of these five roles in further detail. \begin{description} \item[Ball-Winning Midfielder (BWM):] mainly tasked with gaining possession. When the opponent has possession, this player is actively and aggressively defending by closing down opponents and cutting off pass supply lines. This player is a master in disturbing the build-up of the opposing team, occasionally making strategic fouls such that his team can reorganize. When in possession or after gaining possession, this player plays a simple passing game. A player in this role heavily relies on his endurance and game intelligence. Notable examples for this role are Kant\'{e} (Chelsea), Casemiro (Real Madrid) and Lo Celso (Paris Saint-Germain). \item[Holding Midfielder (HM):] tasked with protecting the defensive line. When the opponent has possession, this player defends passively, keeps the defensive line compact, reduces space in front of the defence and shadows the opposite attacking midfielders. In possession, this player dictates the pace of the game. This role requires mostly game intelligence and strength. Notable examples for this role are Busquets (Barcelona), Weigl (Borussia Dortmund) and Mati\'{c} (Manchester United). 
\item[Deep-Lying Playmaker (DLP):] tasked with dictating the pace of the game, creating chances and exploiting the space in front of his team's defense. This player has excellent vision and timing, is technically gifted and has accurate passing skills to potentially cover longer distances as well. Hence, this player is focused on build-up rather than defense. He heavily relies on his technical ability and game intelligence. Notable examples for this role are Jorginho (transferred from Napoli to Chelsea), F\`{a}bregas (Chelsea) and Xhaka (Arsenal). \item[Box-To-Box (BTB):] a more dynamic midfielder, whose main focus is on excellent positioning, both defensively and offensively. When not in possession, he concentrates on breaking up play and guarding the defensive line. When in possession, this player dribbles forward, passing the ball to players higher up the pitch, and often arrives late in the opposite box to create a chance. This player heavily relies on endurance. Notable examples for this role are Wijnaldum (Liverpool), Matuidi (Juventus) and Vidal (transferred from Bayern M\"{u}nchen to Barcelona). \item[Advanced Playmaker (AP):] the prime creator of the team, occupying space between the opposite midfield and defensive line. This player is technically skilled, has a good passing range, can hold up the ball and has excellent vision and timing. He relies on his technical ability and game intelligence to put other players in good scoring positions with pinpoint passes and perfectly timed through balls. Notable examples for this role are De Bruyne (Manchester City), Luis Alberto (Lazio Roma) and Coutinho (Barcelona). \end{description} \section{Related work} \label{section:related-work} The task of automatically deriving roles of players during football matches has largely remained unexplored to date. 
To the best of our knowledge, our proposed approach is the first attempt to identify the specific roles of players during matches fully automatically, in a data-driven fashion, using a supervised learning approach. In contrast, most of the work in this area focuses on automatically deriving positions and formations from data in an unsupervised setting. Bialkowski et al.~\cite{bialkowski2014win} present an approach for automatically detecting formations from spatio-temporal tracking data collected during football matches. Pappalardo et al.~\cite{pappalardo2018playerank} present a similar approach for deriving player positions from play-by-play event data collected during football matches. Our approach goes beyond these approaches by not only deriving each player's standard position in a formation (e.g., a left winger in a 4-3-3 formation) but also his specific role within that position (e.g., a holding midfielder or a ball-winning midfielder). Furthermore, researchers have compared football players in terms of their playing styles. Mazurek~\cite{mazurek2018football} investigates which player is most similar to Lionel Messi in terms of 24 advanced match statistics. Pe{\~n}a and Navarro~\cite{pena2015can} analyze passing motifs to find an appropriate replacement for Xavi, that is, a player who exhibits a similar playing style. In addition, several articles discussing playing styles and player roles have recently appeared in the mainstream football media (e.g., \cite{arse2016dm,espn2018dm,thompson2018four}). \vspace{40pt} \section{Experimental evaluation} This section presents an experimental evaluation of our approach. We present the methodology, formulate the research questions and discuss the results. \subsection{Methodology} We construct one example for each player in our dataset. After dropping all players who played fewer than 900 minutes, we obtain a set of 1910 examples.
The SciSports Datascouting department manually labeled 356 of these examples (i.e., 18.6\% of the examples) as one of the 21 roles introduced in Section~\ref{section:player-roles}. We prefer algorithms learning interpretable models to facilitate the explanation of the inner workings of our approach to football coaches and scouts. More specifically, we train a Stochastic Gradient Descent classifier using logistic loss as the objective function for each role. Hence, the coefficients of the model reflect the importances of the features. For each role, we perform ten-fold cross-validation and use the Synthetic Minority Over-Sampling Technique (SMOTE) to correct the balance between the positive and negative examples~\cite{chaw2002smote}. After performing the transformation explained in Section~\ref{sub:modc}, we obtain a set of over 300 examples. \subsection{Discussion of results} In this section, we investigate what the optimal values for the regularization parameter $\alpha$ and boundary parameter $\beta$ are and who the most-suited players are to fulfill each of the central midfield roles. \subsubsection{What are the optimal values for $\alpha$ and $\beta$?} \label{sub:optim} We need to optimize the regularization parameter $\alpha$ and the boundary parameter $\beta$ to obtain the most-relevant features for each of the roles. We try a reasonable range of values for both parameters and optimize the weighted logistic loss due to the class imbalance. \begin{table} \centering \caption{The weighted logistic loss across the roles and the folds for different values of the regularization parameter $\alpha$ and the boundary parameter $\beta$. We obtain the optimal weighted logistic loss for $\alpha = 0.050$ and $\beta = 0.250$. 
The best result is in bold.} \label{table:optimization} \begin{tabular}{rrr} \toprule \bm{$\alpha$} & \bm{$\beta$} & \textbf{Weighted logistic loss} \tabularnewline \midrule 1.000 & 0.250 & 0.2469 \tabularnewline 0.500 & 0.250 & 0.1800 \tabularnewline 0.100 & 0.250 & 0.0977 \tabularnewline \textbf{0.050} & \textbf{0.250} & \textbf{0.0831} \tabularnewline 0.010 & 0.250 & 0.0835 \tabularnewline 0.005 & 0.250 & 0.1034 \tabularnewline 0.001 & 0.250 & 0.2199 \tabularnewline \midrule 0.050 & 0.100 & 0.0865 \tabularnewline 0.050 & 0.200 & 0.0833 \tabularnewline \textbf{0.050} & \textbf{0.250} & \textbf{0.0831} \tabularnewline 0.050 & 0.300 & 0.0840 \tabularnewline 0.050 & 0.350 & 0.0863 \tabularnewline 0.050 & 0.400 & 0.0890 \tabularnewline \bottomrule \end{tabular} \end{table} Table~\ref{table:optimization} shows the weighted logistic loss across the ten folds and 21 roles for several different values for the regularization parameter $\alpha$ and the boundary parameter $\beta$. The top half of the table shows that 0.050 is the optimal value for the regularization parameter $\alpha$, while the bottom half of the table shows that 0.250 is the optimal value for the boundary parameter $\beta$. \subsubsection{What are the most suitable players for the central midfielder roles?} \begin{table} \centering \caption{The probabilities for the five central midfielder roles for 20 players in our dataset. For each role, we show the top-ranked labeled player as well as the top-three-ranked unlabeled players. The highest probability for each player is in bold.} \label{table:results} \begin{tabular}{lccccccc} \toprule \textbf{Player} & \textbf{Labeled} & \textbf{BWM} & \textbf{HM} & \textbf{DLP} & \textbf{BTB} & \textbf{AP}\tabularnewline \midrule Trigueros & Yes & \textbf{0.98} & 0.85 & 0.87 & 0.89 & 0.29\tabularnewline Sergi Darder & No & \textbf{0.98} & 0.90 & 0.82 & 0.94 & 0.11\tabularnewline Guilherme & No & \textbf{0.97} & 0.94 & 0.32 & 0.93 & 0.03\tabularnewline E. 
Skhiri & No & \textbf{0.97} & 0.93 & 0.38 & 0.96 & 0.04\tabularnewline \midrule N. Matic & Yes & 0.91 & \textbf{0.98} & 0.82 & 0.42 & 0.01\tabularnewline J. Guilavogui & No & 0.96 & \textbf{0.97} & 0.71 & 0.76 & 0.05\tabularnewline Rodrigo & No & 0.90 & \textbf{0.96} & 0.23 & 0.48 & 0.00\tabularnewline G. Pizarro & No & 0.90 & \textbf{0.96} & 0.24 & 0.41 & 0.00\tabularnewline \midrule Jo\~{a}o Moutinho & Yes & 0.91 & 0.88 & \textbf{0.98} & 0.41 & 0.14\tabularnewline J. Henderson & No & 0.68 & 0.79 & \textbf{0.97} & 0.13 & 0.07\tabularnewline S. Kums & No & 0.87 & 0.90 & \textbf{0.96} & 0.37 & 0.03\tabularnewline D. Demme & No & 0.73 & 0.76 & \textbf{0.96} & 0.22 & 0.04\tabularnewline \midrule J. Martin & Yes & 0.84 & 0.48 & 0.40 & \textbf{0.98} & 0.21\tabularnewline Zurutuza & No & 0.90 & 0.75 & 0.39 & \textbf{0.98} & 0.23\tabularnewline T. Rinc\'{o}n & No & 0.93 & 0.75 & 0.29 & \textbf{0.98} & 0.11\tabularnewline T. Bakayoko & No & 0.90 & 0.57 & 0.29 & \textbf{0.98} & 0.20\tabularnewline \midrule C. Eriksen & Yes & 0.08 & 0.01 & 0.43 & 0.18 & \textbf{0.98}\tabularnewline J. Pastore & No & 0.12 & 0.02 & 0.56 & 0.31 & \textbf{0.97}\tabularnewline H. Vanaken & No & 0.31 & 0.03 & 0.58 & 0.35 & \textbf{0.95}\tabularnewline V. Birsa & No & 0.18 & 0.01 & 0.21 & 0.57 & \textbf{0.94}\tabularnewline \bottomrule \end{tabular} \end{table} Table~\ref{table:results} shows the predicted probabilities of fulfilling any of the five central midfielder roles for 20 players in our dataset. For each role, the table shows the top-ranked labeled player as well as the top-three-ranked unlabeled players. The table shows that most players fulfill different types of roles during the course of a season. In general, players who rate high on one role also score high on at least one other role.
For example, top-ranked box-to-box midfielders like Jonas Martin (Strasbourg), David Zurutuza (Real Sociedad), Tom\'{a}s Rinc\'{o}n (Torino) and Ti\'{e}mou\'{e} Bakayoko (Chelsea) also score high for the ball-winning midfielder role. The advanced playmaker (AP) role seems to be an exception though. The players who rate high on this role do not rate high on any of the other roles. Furthermore, the table also shows that our approach is capable of handling players playing in different leagues. With deep-lying playmaker Sven Kums of Anderlecht and advanced playmaker Hans Vanaken of Club Brugge, two players from the smaller Belgian Pro League appear among the top-ranked players for their respective roles. The table lists six players from the Spanish LaLiga, four players from the English Premier League, three players from the French Ligue Un, three players from the Italian Serie A, two players from the German 1. Bundesliga, two players from the Belgian Pro League, and no players from the Dutch Eredivisie. \section{Early experiments} \label{sec:considerations} \change{Our first approach to extracting player roles from event data was to regard this as an unsupervised learning task. One advantage of this approach was that the data would ideally point out the number of available roles, which could then be labeled later on. To add domain knowledge to the model, we asked a large group of experts to define 'is-similar' and 'is-different' constraints for a large set of players. From these constraint-based clustering experiments we learned that it is almost impossible to detect clear cluster boundaries. One of the main issues is the rather continuous mapping of players, e.g., players 1 and 2 are similar and players 2 and 3 are similar, but players 1 and 3 are different. Therefore, we now approach this as a supervised learning task, where expert knowledge is used to determine the labels for players.}
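The inconsistency pattern just described (players 1 and 2 similar, players 2 and 3 similar, yet players 1 and 3 different) can be detected mechanically. As a minimal sketch, assuming hypothetical player IDs and expert constraints rather than the actual expert data, a union-find pass over the 'is-similar' pairs flags every 'is-different' pair it contradicts:

```python
# Detect conflicts between pairwise 'is-similar' (must-link) and
# 'is-different' (cannot-link) constraints using union-find.
# Player IDs and constraints below are illustrative only.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, x, y):
    parent[find(parent, x)] = find(parent, y)

def conflicting_constraints(players, similar, different):
    """Return cannot-link pairs whose players end up in the same
    must-link component (i.e. constraints that cannot all hold)."""
    parent = {p: p for p in players}
    for a, b in similar:
        union(parent, a, b)
    return [(a, b) for a, b in different
            if find(parent, a) == find(parent, b)]

players = [1, 2, 3]
similar = [(1, 2), (2, 3)]    # experts: 1 is similar to 2, 2 to 3
different = [(1, 3)]          # but 1 and 3 are judged 'different'
print(conflicting_constraints(players, similar, different))  # [(1, 3)]
```

Any returned pair shows that the pairwise judgments cannot be realised by a hard partition into clusters, consistent with the difficulty of finding clear cluster boundaries reported above.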
\section{Introduction} \label{intro} Within this paper we look to determine the ability of those players who play in the English Premier League. The Premier League is an annual soccer league established in 1992 and is the most watched soccer league in the world \citep{yueh_2014, curley_2016}. It is made up of 20 teams, who, over the course of a season, play every other team twice (both home and away), giving a total of 380 fixtures each year. It is the top division of English soccer, and every year the bottom 3 teams are relegated to be replaced by 3 teams from the next division down (the Championship). In recent times the Premier League has also become known as the richest league in the world \citep{deloitte_2016}, through both foreign investment and a lucrative deal for television rights \citep{cave_2016, rumsby_2016, bbc_2015}. Whilst there is growing financial competition from China, the Premier League arguably still attracts some of the best players in the world. Staying in the Premier League (by avoiding relegation) is worth a large amount of money; therefore, teams are looking for any advantage when assessing a player's ability to ensure they sign the best players. With (enormously) large sums of money spent to buy/transfer these players, it is natural to ask ``How good are they at a specific skill, for example, passing a ball, scoring a goal or making a tackle?'' Here, we present a method to assess this ability, whilst quantifying the uncertainty around any given player. The statistical modelling of sports has become a topic of increasing interest in recent times, as more data is collected on the sports we love, coupled with a heightened interest in the outcome of these sports, that is, the continuous rise of online betting. Soccer is providing an area of rich research, with the ability to capture the goals scored in a match being of particular interest.
\cite{reep_1971} used a negative binomial distribution to model the aggregate goal counts, before \cite{maher_1982} used independent Poisson distributions to capture the goals scored by competing teams on a game-by-game basis. \cite{dixon_1997} also used the Poisson distribution to model scores; however, they departed from the assumption of independence. The model is extended in \cite{dixon_1998}. The model of \cite{dixon_1997} is also built upon in \cite{karlis_2000, karlis_2003}, who inflate the probability of a draw. \cite{baio_2010} consider this model in the Bayesian paradigm, implementing a Bayesian hierarchical model for goals scored by each team in a match. Other works to investigate the modelling of soccer scores include \citep{lee_1997, joseph_2006, karlis_2009}. A player performance rating system (the EA Sports Player Performance Index) is developed by \cite{mchale_2012}. The rating system is developed in conjunction with the English Premier League, the English Football League, Football DataCo and the Press Association, and aims to represent a player's worth in a single number. There is some debate within the soccer community on the weightings derived in the paper, and as \cite{mchale_2012} point out, the players who play for the best teams lead the index. Questions have also been raised as to whether reducing the rating to a single number (whilst easy to understand) masks a player's ability in a certain skill, whether good or bad. Finally, as mentioned by the authors, the rating system does not handle well those players who sustain injuries (and therefore have little playing time). \cite{mchale_2014} attempt to identify the goal scoring ability of players. Spatial methods to capture a team's style/behaviour are explored in \cite{lucey_2013}, \cite{bialkowski_2014} and \cite{bojinov_2016}.
Here, our interest lies in defining that player ability, addressing some of the issues raised by \cite{mchale_2012}, before attempting to capture the goals scored in a game, taking into account these abilities. To infer player abilities we appeal to variational inference (VI) methods, an alternative strategy to Markov chain Monte Carlo (MCMC) sampling, which can be advantageous to use when datasets are large and/or models have high complexity. Popularised in the machine learning literature \citep{jordan_1999, wainwright_2008}, VI transforms the problem of approximate posterior inference into an optimisation problem, meaning it is easier to scale to large data and tends to be faster than MCMC. Some application areas and indicative references where VI has been used include sports \citep{kitani_2011, ruiz_2015, franks_2015}, computational biology \citep{jojic_2004, stegle_2010, carbonetto_2012, raj_2014}, computer vision \citep{bishop_2000, likas_2004, blei_2006, cummins_2008, sudderth_2009, du_2009} and language processing \citep{reyes_2004, wang_2013, yogatama_2014}. For a discussion on VI techniques as a whole, see \cite{blei_2017} and the references therein. The remainder of this article is organised as follows. The data is presented in Section~\ref{data}. In Section~\ref{bayes} we outline our model to define player abilities before discussing a variational inference approach; we finish the section by offering our extension to the Bayesian hierarchical model of~\cite{baio_2010}. Applications are considered in Section~\ref{app} and a discussion is provided in Section~\ref{disc}. \section{The data} \label{data} The data available to us is a collection of touch-by-touch data, which records every touch in a given fixture, noting the time, team, player, type of event and outcome. A section of the data is shown in table~\ref{tab-data}. 
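As a purely illustrative sketch of how such touch-by-touch rows might be held in memory and tallied into per-player counts (the IDs mirror the style of table~\ref{tab-data}, but all rows below are invented):

```python
# Hypothetical in-memory version of touch-by-touch rows, aggregated
# to per-(fixture, player) event-type counts. All values are invented.
from collections import Counter

events = [
    {"fixture": 1483412, "team": 663, "player": 17,    "type": "Pass"},
    {"fixture": 1483412, "team": 663, "player": 17,    "type": "Pass"},
    {"fixture": 1483412, "team": 690, "player": 38772, "type": "Tackle"},
    {"fixture": 1483412, "team": 663, "player": 17,    "type": "Goal"},
]

# Count occurrences of each event type per fixture and player.
counts = Counter((e["fixture"], e["player"], e["type"]) for e in events)
print(counts[(1483412, 17, "Pass")])  # 2
```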
The data covers the 2013/2014 and 2014/2015 English Premier League seasons, and consists of roughly 1.2 million events in total, which equates to approximately 1600 for each fixture in the dataset. There are 39 event types in the dataset, which we list in table~\ref{eventtype}. The nature of most of these event types is self-explanatory, that is, Goal indicates that a player scored a goal at that event time. Throughout this paper we will mainly concern ourselves with event types which are self-evident, and will define the more subtle event types as and when needed. \begin{table} \centering \begin{tabular}{ccccccc} \hline minute & second & period & team id & player id & type & outcome \\ \hline 0 & 1 & FirstHalf & 663 & 91242 & Pass & Successful \\ 0 & 2 & FirstHalf & 663 & 23736 & Pass & Successful \\ 0 & 3 & FirstHalf & 663 & 17 & Pass & Successful \\ 0 & 4 & FirstHalf & 663 & 14230 & Pass & Successful \\ 0 & 5 & FirstHalf & 663 & 7398 & Pass & Successful \\ 0 & 6 & FirstHalf & 663 & 31451 & Pass & Successful \\ 0 & 9 & FirstHalf & 663 & 7398 & Pass & Successful \\ 0 & 10 & FirstHalf & 690 & 38772 & Tackle & Successful \\ 0 & 10 & FirstHalf & 663 & 80767 & Dispossessed & Successful \\ 0 & 12 & FirstHalf & 690 & 8505 & Pass & Successful \\ \hline \end{tabular} \caption{A section of the touch-by-touch data.} \label{tab-data} \end{table} \begin{table} \centering \begin{tabular}{cccc} \hline Stop & Control & Disruption & Miscellanea \\ \hline Card & Aerial & BlockedPass & CornerAwarded\\ End & BallRecovery & Challenge & CrossNotClaimed\\ FormationChange & BallTouch & Claim & KeeperSweeper\\ FormationSet & ChanceMissed & Clearance & ShieldBallOpp\\ OffsideGiven & Dispossessed & Interception & \\ PenaltyFaced & Error & KeeperPickup & \\ Start & Foul & OffsideProvoked & \\ SubstitutionOff & Goal & Punch & \\ SubstitutionOn & GoodSkill & Save & \\ & MissedShots & Smother & \\ & OffsidePass & Tackle & \\ & Pass & & \\ & SavedShot & & \\ & ShotOnPost & & \\ & TakeOn & & \\ 
\hline \end{tabular} \caption{Event types contained within the data.} \label{eventtype} \end{table} We can split the event types into 4 categories. \begin{enumerate} \item \textbf{Stop:} An event corresponding to a stoppage in play, such as a substitution or offside decision. \item \textbf{Control:} An event where a team is perceived to be in control of the ball; these are mainly seen as attacking events. \item \textbf{Disruption:} An event where a team is perceived to be disrupting the current play within a game; these can generally be seen as defensive events. \item \textbf{Miscellanea:} These events could be classified in any of the other three categories. \end{enumerate} In this paper we are interested in those events which correspond to a player during active game-play, hence we remove Stop events from the data. Instead, we focus on those event types categorised as either Control or Disruption, that is, when a team is attempting to score a goal and when a team is attempting to stop the opposition from scoring a goal, respectively. It should be noted that OffsideGiven is the inverse of OffsideProvoked and as such we remove one of these events from the data. Henceforth, it is assumed that the event type OffsideGiven is removed from the data, rewarding the defensive side for provoking an offside through OffsideProvoked. The frequency of each event type (after removing Pass) during the Liverpool vs Stoke match, which occurred on the 17th August 2013, is shown in figure~\ref{typefig}. The match is typical of any fixture within the dataset. \begin{figure} \centering \includegraphics[scale=0.8]{type_nopass.pdf} \caption{Frequency of each event type observed in the Liverpool vs Stoke 2013/2014 English Premier League match, 17th August 2013.
The event type Pass is removed for clarity; it occurs with a ratio of approximately 10:1 over BallRecovery.} \label{typefig} \end{figure} In determining a player's ability for a given event type we make the assumption that the more times a player is involved, the better they are at that event type; for example, a player who makes more passes than another player is assumed to be the better passer. On this basis, we can transform the data displayed in table~\ref{tab-data} to represent the number of each event type each player is involved in, at a fixture-by-fixture level. This count data is illustrated in table~\ref{tab-count}. It is to this data that the methods of Section~\ref{bayes} will be applied. \begin{table} \centering \begin{tabular}{ccc|cccc} \hline & & & \multicolumn{4}{c}{Count for each event type} \\ fixture id & player id & team id & Goal & Pass & Tackle & \ldots \\ \hline 1483412 & 17 & 663 & 0 & 97 & 3 & \\ 1483412 & 3817 & 663 & 0 & 37 & 3 & \\ 1483412 & 4574 & 663 & 0 & 73 & 3 & \\ \vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \\[1.1ex] 1483831 & 10136 & 676 & 1 & 36 & 4 & \\ 1483831 & 12267 & 676 & 0 & 45 & 0 & \\ 1483831 & 12378 & 676 & 0 & 52 & 2 & \\ \vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \\ \hline \end{tabular} \caption{A section of the count data derived from the data of table~\ref{tab-data}.} \label{tab-count} \end{table} \section{Bayesian inference} \label{bayes} Consider the case where we have $K$ matches, numbered $k=1,\ldots,K$. We denote the set of teams in fixture $k$ as $T_k$, with $T_k^H$ and $T_k^A$ representing the home and away teams respectively. Explicitly, $T_k = \{T_k^H, T_k^A\}$. We take $P$ to be the set of all players who feature in the dataset, and $P_k^j\subseteq P$ to be the subset of players who play for team~$j$ in fixture $k$. We may want to consider how players' abilities over different event types interact; for this we group event types to create meaningful interactions.
For simplicity, we describe the model for a single pair of event types which are deemed to interact, for example, Pass and Interception; we denote these event types $e_1$ and $e_2$, such that $E=\{e_1, e_2\}$. Taking $X_{i,k}^{e}$ as the number of occurrences (counts) of event type $e$ by player $i$ (who plays for team $j$) in match $k$, we have \begin{equation} \label{posmod} X_{i,k}^{e}\sim Pois\left(\eta_{i,k}^{e}\tau_{i,k} \right), \end{equation} where \begin{equation} \label{eta} \eta_{i,k}^{e} = \exp\left\{\Delta_i^{e} + \tau_{i,k}\left(\lambda_1^e\sum_{i'\in P_k^j}\Delta_{i'}^{e} - \lambda_2^e\sum_{i'\in P_k^{T_k\setminus j}}\Delta_{i'}^{E\setminus e}\right) + \left(\delta_{T_k^H,j}\right)\gamma^{\,e} \right\}, \end{equation} $\delta_{r,s}$ is the Kronecker delta and $\tau_{i,k}$ is the fraction of time player $i$ (playing for team $j$) spent on the pitch in match $k$, with $\tau_{i,k}\in[0, 1]$. Explicitly, if a player plays 60 minutes of a 90-minute match then $\tau_{i,k}=2/3$. The home effect is represented by $\gamma^{\,e}$, and reflects the (supposed) advantage the home team has over the away team in event type $e$. The $\Delta_i^{e}$ represent the (latent) ability of each player for a specific event type, where we let $\Delta$ be the vector of all players' abilities. The impact of a player's own team on the number of occurrences is captured through $\lambda_1^e$, with $\lambda_2^e$ describing the opposition's ability to stop the player in that event type. For identifiability purposes, we impose the constraint that the $\lambda$s must be positive. Figure~\ref{pic-model} illustrates the model for one fixture, allowing for some abuse of notation, where we assume each team consists of 11 players only (that is, we ignore substitutions) and suppress the time dependence ($\tau$).
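To make \eqref{posmod} and \eqref{eta} concrete, the following sketch simulates counts for a single toy fixture with five players per side; all abilities and parameter values are invented for illustration and are not estimates from the data:

```python
# Toy simulation of the count model: X ~ Pois(eta * tau), with eta
# built as in the rate equation. All abilities/parameters are made up.
import numpy as np

rng = np.random.default_rng(1)

delta_home = np.array([0.5, 0.2, 0.0, -0.1, 0.3])   # Delta_i^{e1}, own team
delta_away = np.array([0.1, 0.0, -0.2, 0.4, 0.2])   # Delta_{i'}^{e2}, opposition
lam1, lam2, gamma = 0.05, 0.03, 0.1                 # lambda_1^e, lambda_2^e, home effect
tau = np.array([1.0, 1.0, 1.0, 2/3, 1.0])           # fraction of the match played

# eta_i = exp{Delta_i + tau_i*(lam1*sum(own Deltas) - lam2*sum(opp Deltas)) + gamma},
# with the Kronecker delta equal to 1 since these are the home players.
eta = np.exp(delta_home
             + tau * (lam1 * delta_home.sum() - lam2 * delta_away.sum())
             + gamma)

counts = rng.poisson(eta * tau)                     # X_{i,k}^{e}
print(eta.round(3), counts)
```

Note that the exponential in \eqref{eta} guarantees a strictly positive Poisson rate whatever the (real-valued) abilities.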
From \eqref{posmod} and \eqref{eta}, the log-likelihood is given by \begin{equation} \ell = \sum_{e\in E}\sum_{k=1}^K\sum_{j\in T_k}\sum_{i\in P_k^j} X_{i,k}^{e} \log{\left(\eta_{i,k}^{e}\tau_{i,k} \right)} - \eta_{i,k}^{e}\tau_{i,k} - \log{\left(X_{i,k}^{e}\,! \right)}. \label{llike} \end{equation} \begin{figure} \centering \begin{tikzpicture}[scale=5,>=latex] \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_1^{e_1}$}] (xo0h) at (0,1) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_2^{e_1}$}] (xo1h) at (0.3,1) {}; \node at (0.5, 1) {{$\ldots$}}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_{11}^{e_1}$}] (xo2h) at (0.7,1) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_{12}^{e_2}$}] (xo0a) at (1.4,1) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_{13}^{e_2}$}] (xo1a) at (1.7,1) {}; \node at (1.9, 1) {{$\ldots$}}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{\large $\Delta_{22}^{e_2}$}] (xo2a) at (2.1,1) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label={[yshift=-0.9cm]\large $\log\left(\eta_1^{e_1}\right)$}] (xu0) at (0,0) {}; \node[draw,circle] (hsum) at (0.3,0.5) {\large $\displaystyle\lambda_1\sum_{i=1}^{11}\Delta_i^{e_1}$}; \node[draw,circle] (asum) at (1.7,0.5) {\large $\displaystyle-\lambda_2\sum_{i=12}^{22}\Delta_i^{e_2}$}; \node[draw,circle] (gam) at (0.7,0.3) {\large $\gamma^{\;e_1}$}; \draw (xo0h) edge[out=240,in=120,->] (xu0) ; \draw (xo0h) edge[->] (hsum) ; \draw (xo1h) edge[->] (hsum) ; \draw (xo2h) edge[->] (hsum) ; \draw (xo0a) edge[->] (asum) ; \draw (xo1a) edge[->] (asum) ; \draw (xo2a) edge[->] (asum) ; \draw (hsum) edge[->] (xu0) ; \draw (gam) edge[->] (xu0) ; \draw (asum) edge[out=220,in=-20,->] (xu0) ; \end{tikzpicture} \caption{Pictorial representation of the model for one fixture.
For ease we assume that only 11 players play for each team in the fixture (that is, we ignore substitutions) and suppress the time dependence ($\tau$).} \label{pic-model} \end{figure} Interest lies in estimating this model using a Bayesian approach. We put independent Gaussian priors over all abilities, whilst treating the remaining unknown parameters as hyperparameters --- to be fitted by the marginal likelihood function. Given the size of the data and the number of parameters needing to be estimated to fit equation~\ref{llike}, we appeal to variational inference techniques, which are the subject of the next section. \subsection{Variational inference} \label{var-inf} For a general introduction to variational inference (VI) methods we direct the reader to \cite{blei_2017} (and the references therein). Variational inference is paired with automatic differentiation in \cite{kucukelbir_2016}, leading to a technique known as automatic differentiation variational inference (ADVI). ADVI provides an automated solution to VI and is built upon recent approaches in \emph{black-box} VI; see \citep{ranganath_2014, mnih_2014, kingma_2014}. \cite{duvenaud_2015} present some \verb+Python+ code (5 lines) for implementing black-box VI. A good overview of VI can be found in Chapter 19 of \cite{goodfellow_2016}. Given the set-up of the model above though, it is sufficient within this paper to require only \emph{standard} variational inference methods. We briefly outline these below, and refer the reader to \cite{blei_2017} for a more complete derivation. In contrast to some other techniques for Bayesian inference, such as MCMC, in VI we specify a variational family of densities over the latent variables ($\nu$). We then aim to find the best candidate approximation, $q(\nu)$, to minimise the KL divergence to the posterior \[ q^*(\nu) = \underset{q}{\textrm{argmin}}\,\textrm{KL}\left\{q(\nu)\vert\vert\pi(\nu\vert x) \right\}, \] where $x$ denotes the data.
Unfortunately, due to the analytic intractability of the posterior distribution, the KL divergence is not available in closed (analytic) form. However, it is possible to maximise the evidence lower bound (ELBO). The ELBO is the expectation of the joint density under the approximation minus the entropy of the variational density and is given by \begin{equation}\label{elbo} \textrm{ELBO}(\nu) = \textrm{E}_{\nu}\left[\log\left\{\pi\left(\nu,x\right)\right\}\right] - \textrm{E}_{\nu}\left[\log\left\{q\left(\nu\right)\right\}\right]. \end{equation} The ELBO is the equivalent of the negative KL divergence up to the constant $\log\{\pi(x)\}$, and from \cite{jordan_1999} and \cite{bishop_2006} we know that, by maximising the ELBO we minimise the KL divergence. In performing VI, assumptions must be made about the variational family to which $q(\nu)$ belongs. Here we consider the \emph{mean-field variational family}, in which the latent variables are assumed to be mutually independent. Moreover, each latent variable $\nu_r$ is governed by its own variational parameters ($\phi_r$), which determine $\nu_r$'s variational factor, the density $q(\nu_r\vert\phi_r)$. Specifically, for $R$ latent variables \begin{equation}\label{meanfield} q\left(\nu\vert\phi \right) = \prod_{r=1}^Rq_r\left(\nu_r\vert\phi_r\right). \end{equation} We note that the complexity of the variational family determines the complexity of the optimisation, and hence impacts the computational cost of any VI approach. In general, it is possible to impose any graphical structure on $q(\nu_r\vert\phi_r)$; a fully general graphical approach leads to structured variational inference, see \cite{saul_1996}. Furthermore, the data ($x$) does not feature in equation~\ref{meanfield}, meaning the variational family is not a model of the observed data; it is in fact the ELBO which connects the variational density, $q(\nu\vert\phi)$, to the data and the model. 
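Before specialising to the player-ability model, the mechanics of maximising the ELBO can be illustrated on a conjugate toy problem, a Gaussian likelihood with known variance and a Gaussian prior on its mean, where the exact posterior is available for comparison. With a Gaussian variational density the ELBO of \eqref{elbo} is available in closed form, and simple gradient ascent on $(\mu, \log\sigma)$ recovers the exact posterior. This sketch is standalone and is not part of the model fitted in this paper:

```python
# Closed-form ELBO for a conjugate toy model (Gaussian mean, known
# variance) and gradient ascent on the variational parameters.
import numpy as np

rng = np.random.default_rng(0)
s, m0, s0 = 1.0, 0.0, 2.0                 # likelihood sd, prior mean/sd
x = rng.normal(1.5, s, size=50)           # simulated observations
n = x.size

def elbo(mu, log_sig):
    """E_q[log p(x|nu)] + E_q[log p(nu)] - E_q[log q(nu)] in closed form."""
    sig2 = np.exp(2 * log_sig)
    e_loglik = -0.5 * n * np.log(2 * np.pi * s**2) \
               - (np.sum((x - mu)**2) + n * sig2) / (2 * s**2)
    e_logprior = -0.5 * np.log(2 * np.pi * s0**2) \
                 - ((mu - m0)**2 + sig2) / (2 * s0**2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * sig2)
    return e_loglik + e_logprior + entropy

mu, log_sig, lr = 0.0, 0.0, 0.01
for _ in range(2000):                     # gradient ascent, analytic gradients
    sig = np.exp(log_sig)
    g_mu = np.sum(x - mu) / s**2 - (mu - m0) / s0**2
    g_ls = (-n * sig / s**2 - sig / s0**2 + 1 / sig) * sig  # chain rule for log-sd
    mu, log_sig = mu + lr * g_mu, log_sig + lr * g_ls

post_prec = n / s**2 + 1 / s0**2          # exact conjugate posterior
post_mean = (x.sum() / s**2 + m0 / s0**2) / post_prec
print(mu, np.exp(log_sig), post_mean, post_prec**-0.5)
```

Replacing the hand-derived gradients with automatic differentiation leaves the scheme unchanged, which is how the model of this paper is fitted via \verb+autograd+.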
For the model outlined at the beginning of this section, let $\nu=\Delta$, and set \begin{equation}\label{qDelta_single} q\left(\Delta_i^{e}\vert\phi_i^{e} \right)\sim N\left(\mu_{\Delta_i^{e}}, \sigma_{\Delta_i^{e}}^2 \right). \end{equation} Our aim is to find suitable candidate values for the variational parameters \[ \phi_i^{e}=\left(\mu_{\Delta_i^{e}},\sigma_{\Delta_i^{e}}\right)^T,\qquad\forall i ,\forall e. \] Explicitly $q\left(\Delta\vert\phi\right)$ follows~\eqref{meanfield}. Whence \begin{equation}\label{qDelta} q\left(\Delta\vert\phi \right) = \prod_{e\in E}\prod_{j\in T}\prod_{i\in P^j} q\left(\Delta_i^{e}\vert\phi_i^{e} \right), \end{equation} where $T$ is the set of all teams and $P^j$ are the players who play for team $j$. Finally we take $\psi=(\lambda_1^{e_1}, \lambda_2^{e_1}, \gamma^{\,e_1},\lambda_1^{e_2}, \lambda_2^{e_2}, \gamma^{\,e_2})^T$ to be fixed parameters, and assume each $\Delta_i^{e}$ follows a $N(m,s^2)$ prior distribution, fully specifying the model given by equations~\eqref{posmod}--\eqref{llike}. Thus, the ELBO \eqref{elbo} is given by \begin{align} \textrm{ELBO}\left(\Delta\right) &= \sum_{e\in E}\sum_{k=1}^K\sum_{j\in T_k}\sum_{i\in P_k^j} \textrm{E}_{\Delta_i^{e}}\left[\log\left\{\pi\left(\Delta_i^{e},\phi_i^{e},\psi,x\right)\right\}\right] - \textrm{E}_{\Delta_i^{e}}\left[\log\left\{q\left(\Delta_i^{e}\vert\phi_i^{e}\right)\right\}\right] \nonumber \\ & = \sum_{e\in E}\sum_{k=1}^K\sum_{j\in T_k}\sum_{i\in P_k^j} \textrm{E}_{\Delta_i^{e}}\left[\log\left\{\pi\left(\Delta_i^{e}\right)\right\}\right] + \textrm{E}_{\Delta_i^{e}}\left[\log\left\{\pi\left(x\vert\Delta_i^{e},\phi_i^{e},\psi\right)\right\}\right] \nonumber\\ &\qquad\qquad\qquad\qquad\quad- \textrm{E}_{\Delta_i^{e}}\left[\log\left\{q\left(\Delta_i^{e}\vert\phi_i^{e}\right)\right\}\right]. \label{model_elbo} \end{align} The above is available in closed-form (see Appendix~\ref{app_elbo}), avoiding the need for black-box VI. 
We do however incorporate the techniques of automatic differentiation for computational ease, and use the \verb+Python+ package \verb+autograd+ \citep{maclaurin_2015} to fit the model. \subsection{Hierarchical model} \label{hier} Building on the methods of Section~\ref{var-inf}, we wish to discover whether the inferred $\Delta$s have any impact on our ability to predict the goals scored in a soccer match. As a baseline model we consider the work of \cite{baio_2010}, who present the model of \cite{karlis_2003} in a Bayesian framework. The model has close ties with \citep{dixon_1997, lee_1997, karlis_2000} which have all previously been used to predict soccer scores. We first briefly outline the model of \cite{baio_2010}, before offering our extension to include the imputed $\Delta$s. The model is a Poisson-log normal model, see for example \cite{aitchison_1989}, \cite{chib_2001} or \cite{tunaru_2002} (amongst others). For a particular fixture $k$, we let $y^k=(y^k_h,y^k_a)^T$ be the total number of goals scored, where $y^k_h$ is the number of goals scored by the home team, and $y^k_a$, the number by the away team. Inherently, we let $h$ denote the home team and $a$ the away team for the given fixture $k$. The goals of each team are modelled by independent Poisson distributions, such that \begin{equation}\label{hier_pois} y_t^k\vert\theta_t\overset{indep}{\sim } Pois\left(\theta_t \right), \qquad t\in\{h,a\}, \end{equation} where \begin{align} \log{\left(\theta_h\right)} &= \textrm{home} + \textrm{att}_h +\textrm{def}_a, \label{theta_h}\\ \log{\left(\theta_a\right)} &= \textrm{att}_a + \textrm{def}_h. \label{theta_a} \end{align} Each team has their own team-specific attack and defence ability, $\textrm{att}$ and $\textrm{def}$ respectively, which form the scoring intensities $(\theta_t)$ of the home and away teams. 
A home effect $(\textrm{home})$, assumed constant across all teams and across the time-span of the data, is also included in the rate of the home team's goals. For identifiability, we follow \cite{baio_2010} and \cite{karlis_2003}, and impose sum-to-zero constraints on the attack and defence parameters \[ \sum_{t\in T} \textrm{att}_t = 0 \qquad \textrm{and} \qquad\sum_{t\in T} \textrm{def}_t = 0, \] where $T$ is the set of all teams to feature in the dataset. Furthermore, the attack and defence parameters for each team are modelled as draws from a common distribution \[ \textrm{att}_t\sim N\left(\mu_{\textrm{att}}, \sigma_{\textrm{att}}^2\right) \qquad \textrm{and} \qquad \textrm{def}_t\sim N\left(\mu_{\textrm{def}}, \sigma_{\textrm{def}}^2\right). \] We follow the prior set-up of \cite{baio_2010} and assume that $\textrm{home}$ follows a $N(0,100^2)$ distribution \emph{a priori}, with the hyperparameters having the priors \begin{alignat}{3} &\mu_{\textrm{att}}\sim N\left(0,100^2\right), &&\qquad \mu_{\textrm{def}}\sim N\left(0,100^2\right), \nonumber\\ &\sigma_{\textrm{att}}\sim Inv\textrm{-}Gamma(0.1,0.1), &&\qquad \sigma_{\textrm{def}}\sim Inv\textrm{-}Gamma(0.1,0.1).\nonumber \end{alignat} A graphical representation of the model is given in figure~\ref{pic-hierarchical}.
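As a hedged illustration of equations \eqref{hier_pois}--\eqref{theta_a}, the snippet below simulates a single scoreline. The home effect and the team attack/defence values are made up for the example; they are not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up parameter values, for illustration only
home = 0.36
att = {"Liverpool": 0.4, "Chelsea": 0.2}
deff = {"Liverpool": -0.1, "Chelsea": -0.3}  # more negative = better defence

# eqs (theta_h)/(theta_a): Liverpool at home to Chelsea
theta_h = np.exp(home + att["Liverpool"] + deff["Chelsea"])
theta_a = np.exp(att["Chelsea"] + deff["Liverpool"])

# eq. (hier_pois): goals are conditionally independent Poisson counts
y_h, y_a = rng.poisson(theta_h), rng.poisson(theta_a)
print(theta_h, theta_a, y_h, y_a)
```

Note how the log link keeps both intensities positive while letting attack, defence and home effects combine additively.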
\begin{figure} \centering \begin{tikzpicture}[scale=2,>=latex] \node[draw,circle] (mu_a) at (0,3) {\large $\mu_{\textrm{att}}$}; \node[draw,circle] (sigma_a) at (1,3) {\large $\sigma_{\textrm{att}}$}; \node[draw,circle] (mu_d) at (3,3) {\large $\mu_{\textrm{def}}$}; \node[draw,circle] (sigma_d) at (4,3) {\large $\sigma_{\textrm{def}}$}; \node[draw,circle] (home) at (-0.15,2) {\large $\textrm{home}$}; \node[draw,circle] (att_h) at (0.5,2) {\large $\textrm{att}_{h}$}; \node[draw,circle] (def_a) at (1.1,2) {\large $\textrm{def}_{a}$}; \node[draw,circle] (att_a) at (3.2,2) {\large $\textrm{att}_{a}$}; \node[draw,circle] (def_h) at (3.8,2) {\large $\textrm{def}_{h}$}; \node[draw,circle,dashed, black!70] (delta_h) at (-0.8,1.6) {\large $f(\Delta)_h$}; \node[draw,circle,dashed, black!70] (delta_a) at (4.38,1.6) {\large $f(\Delta)_a$}; \node[draw,circle] (theta_h) at (0.5,1.25) {\large $\theta_{h}$}; \node[draw,circle] (theta_a) at (3.5,1.25) {\large $\theta_{a}$}; \node[draw,circle] (y_h) at (0.5,0.5) {\large $y_{h}$}; \node[draw,circle] (y_a) at (3.5,0.5) {\large $y_{a}$}; \draw (mu_a) edge[->] (att_h) ; \draw (mu_a) edge[->] (att_a) ; \draw (sigma_a) edge[->] (att_h) ; \draw (sigma_a) edge[->] (att_a) ; \draw (mu_d) edge[->] (def_h) ; \draw (mu_d) edge[->] (def_a) ; \draw (sigma_d) edge[->] (def_h) ; \draw (sigma_d) edge[->] (def_a) ; \draw (home) edge[->] (theta_h) ; \draw (att_h) edge[->] (theta_h) ; \draw (def_a) edge[->] (theta_h) ; \draw (att_a) edge[->] (theta_a) ; \draw (def_h) edge[->] (theta_a) ; \draw (delta_h) edge[dashed, ->, black!70] (theta_h) ; \draw (delta_a) edge[dashed, ->, black!70] (theta_a) ; \draw (theta_h) edge[->] (y_h) ; \draw (theta_a) edge[->] (y_a) ; \end{tikzpicture} \caption{Pictorial representation of the Bayesian hierarchical model.
Removing both $f(\Delta)_h$ and $f(\Delta)_a$ gives the baseline model of \cite{baio_2010}.}\label{pic-hierarchical} \end{figure} As an extension to the model of \cite{baio_2010} we propose to include the latent $\Delta$s of Section~\ref{var-inf} in the scoring intensities of both the home and away teams. Explicitly, \eqref{theta_h} and \eqref{theta_a} become \begin{align} \log{\left(\theta_h\right)} &= \textrm{home} + \textrm{att}_h +\textrm{def}_a + f\left(\Delta\right)_h, \label{theta_h_ext}\\ \log{\left(\theta_a\right)} &= \textrm{att}_a + \textrm{def}_h + f\left(\Delta\right)_a, \label{theta_a_ext} \end{align} where $f(\Delta)$ is to be determined. For a single pair of event types (as outlined at the start of this section), a sensible choice for $f(\Delta)$ could be \begin{align} f\left(\Delta\right)_h &= \sum_{i\in I_k^{T_k^H}}\mu_{\Delta_i^{e}} - \sum_{i\in I_k^{T_k^A}}\mu_{\Delta_{i}^{E\setminus e}} \label{fdeltah} \intertext{and} f\left(\Delta\right)_a &= \sum_{i\in I_k^{T_k^A}}\mu_{\Delta_{i}^{e}} - \sum_{i\in I_k^{T_k^H}}\mu_{\Delta_{i}^{E\setminus e}}, \label{fdeltaa} \end{align} with $I_k^j$ being the initial eleven players who start fixture $k$ for team $j$ and $\mu_{\Delta}$ being the mean of the marginal posterior variational densities. This extension is also illustrated in figure~\ref{pic-hierarchical}. We fit both the baseline model of~\cite{baio_2010} and our extension using \verb+PyStan+~\citep{stan_2016}. We note that it may be desirable to fit both the model of Section~\ref{var-inf} and the Bayesian hierarchical model concurrently. However, we find this to be infeasible in practice: the hierarchical model can be fitted using MCMC, whilst the model of Section~\ref{var-inf} cannot, owing to its large number of parameters.
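A hedged sketch of how \eqref{theta_h_ext}--\eqref{fdeltaa} combine for a single pair of event types; the starting elevens' posterior means and the team parameters below are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# fabricated posterior means mu_Delta for the two starting elevens,
# for an attacking event type e = Goal and its counterpart GoalStop
mu = {
    "Goal":     {"home": rng.normal(-2.0, 0.1, 11),
                 "away": rng.normal(-2.0, 0.1, 11)},
    "GoalStop": {"home": rng.normal(-2.0, 0.1, 11),
                 "away": rng.normal(-2.0, 0.1, 11)},
}

def f_delta(side, opponent):
    # eqs (fdeltah)/(fdeltaa): a side's attacking ability minus the
    # opponent's ability in the counterpart (stopping) event type
    return mu["Goal"][side].sum() - mu["GoalStop"][opponent].sum()

# eqs (theta_h_ext)/(theta_a_ext), with made-up team parameters
home, att_h, def_a, att_a, def_h = 0.3, 0.2, -0.1, 0.1, -0.2
theta_h = np.exp(home + att_h + def_a + f_delta("home", "away"))
theta_a = np.exp(att_a + def_h + f_delta("away", "home"))
print(theta_h, theta_a)
```

Because the player abilities enter on the log scale, a stronger starting eleven inflates the team's own rate while a stronger opposing eleven deflates it.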
\section{Applications} \label{app} Having outlined our approach to determining a player's ability in a given event type, and offered an extension to the model of \cite{baio_2010} to capture the goals scored in a specific fixture, we wish to test the proposed methods in real-world scenarios. We therefore consider two applications. In the first we use data from the 2013/2014 English Premier League to learn player abilities across the season as a whole for a number of event types, including the ability to score a goal. The second example concerns the number of goals observed in a given fixture; specifically, we predict whether a certain number of goals will be scored (or not) in each fixture. \subsection{Determining a player's ability} \label{ability} In this section we consider the touch-by-touch data described in Section~\ref{data}, restricted to the 2013/2014 English Premier League season. We look to create an ordering of player abilities, from which we hope to extract meaning based on what we know of the season. We also have data on the amount of time each player spent on the pitch in each match, and this information is factored in accordingly through $\tau_{i,k}$. The season consisted of 380 matches for the 20-team league, with 544 different players used during matches. The teams are listed in table~\ref{tableteams}, with the final league table shown in figure~\ref{finaltable}. From figure~\ref{finaltable} we note that Manchester City and Liverpool were the teams who scored the most goals, with Chelsea conceding the least. These teams did well over the season and we expect players from these teams to have high abilities. The teams that did worst (and were relegated) were Norwich City, Fulham and Cardiff; we do not expect players from these teams to feature highly in any ordering created. A final note is that, in this season, Manchester United underperformed (given past seasons) under new manager David Moyes.
Whence, $k=1,\ldots,380$, $j\in T_k$ where $T_k$ consists of a subset of $\{1,\ldots,20\}$ and $i\in P_k^j$ where $P_k^j$ is a subset of $P=\{1,\ldots,544\}$. \begin{table} \centering \begin{tabular}{llll} \hline \multicolumn{4}{c}{2013/2014 English Premier League teams} \\ \hline Arsenal & Everton & Manchester United & Sunderland\\ Aston Villa & Fulham & Newcastle United & Swansea City\\ Cardiff City & Hull City & Norwich City & Tottenham Hotspur \\ Chelsea & Liverpool & Southampton & West Bromwich Albion\\ Crystal Palace & Manchester City & Stoke City & West Ham United \\ \hline \end{tabular} \caption{The teams which constituted the 2013/2014 English Premier League.} \label{tableteams} \end{table} \begin{figure}[t!] \centering \includegraphics[scale=0.35]{2013-2014-table.png} \caption{Final league table for the 2013/2014 English Premier League. \emph{Pl} matches played, \emph{W} matches won, \emph{D} matches drawn, \emph{L} matches lost, \emph{F} goals scored, \emph{A} goals conceded, \emph{GD} goal difference ($\textrm{scored}-\textrm{conceded}$), \emph{Pts} final points total.} \label{finaltable} \end{figure} For a pair of interacting event types we fit the model defined by \eqref{posmod}--\eqref{llike}, by maximising \eqref{model_elbo} where $q(\cdot)$ follows \eqref{qDelta_single}. This model set-up has 2182 parameters governing any two interacting event types. We take the (reasonably uninformative) prior \begin{equation}\label{ab-prior} \pi\left(\Delta_i^{e}\right)\sim N\left(-2,2^2\right), \end{equation} where $-2$ represents the ability of an average player. We found little difference in results for alternative priors. We begin by considering occurrences of Goal and GoalStop.
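Before fitting, the prior \eqref{ab-prior} can be sanity-checked by simulation. The sketch below is a simplified stand-in for the full construction of $\eta_{i,k}^{e}$: it assumes a log-link Poisson rate $\exp(\Delta_i)\tau$ with $\tau$ fixed at one full match, and ignores team effects entirely, so it illustrates only the marginal behaviour of the prior.

```python
import numpy as np

rng = np.random.default_rng(42)

# draw abilities from the prior (ab-prior): Delta ~ N(-2, 2^2)
delta = rng.normal(-2.0, 2.0, size=100_000)

# assumed simplification: per-match rate exp(Delta), i.e. tau = 1 match
goals = rng.poisson(np.exp(delta))

# fraction of simulated player-matches with 0 or 1 goal
p01 = (goals <= 1).mean()
print(p01)
```

Under this simplification, the large majority of simulated player-matches register zero or one goal, consistent with the prior predictive behaviour the model intends.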
GoalStop is an event type of our own creation (in conjunction with expert soccer analysts), made up of many other event types (BallRecovery, Challenge, Claim, Error, Interception, KeeperPickup, Punch, Save, Smother, Tackle), with BallRecovery being the event type where a player collects the ball after it has gone loose. GoalStop aims to represent all the things a team can do to stop the other team from scoring a goal. A Monte Carlo simulation of the prior for $\eta_{i,k}^{\textrm{Goal}}$ using 100K draws of $\Delta_i^{\textrm{Goal}}$ from \eqref{ab-prior} is shown in figure~\ref{priorgoal}, where most players are seen to score 0 or 1 goal (as expected). \begin{figure} \centering \includegraphics[scale=0.7]{prior_goal.pdf} \caption{Monte Carlo simulation of the prior for $\eta_{i,k}^{\textrm{Goal}}$ using 100K draws of $\Delta_i^{\textrm{Goal}}$ from \eqref{ab-prior}.} \label{priorgoal} \end{figure} \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goal-elbo.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goal-param3.pdf} \end{minipage} \\ \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goal-randommu.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goal-randomsigma.pdf} \end{minipage} \caption{Trace plots for the ELBO and a selection of the model parameters, recorded every 100 iterations.} \label{paramfig} \end{figure} We ran the model for 7000 iterations to achieve convergence. Trace plots of the ELBO and a selection of model parameters are shown in figure~\ref{paramfig}. It is clear that convergence has been achieved (measured via the ELBO). For completeness, the values of the fixed parameters ($\psi$) under the model are given in table~\ref{tableparam}, where all respective parameters appear to be on the same scale.
We observe small differences in the parameters dictating the amount of impact both a player's own team and the opposing team have on occurrences of an event type. There are more noticeable differences in the home effects of each event type, with the home effect for Goal being much larger than that of GoalStop. This is in line with other research around the goals scored in a match, where a clear home effect is acknowledged; see \cite{dixon_1997}, \cite{karlis_2003} and \cite{baio_2010} (amongst others) for further discussion of this home effect. The home effect for GoalStop is closer to zero, suggesting the number of attempts a team makes to stop a goal is similar whether they are playing at home or away. \begin{table} \centering \begin{tabular}{l|ccc} \hline & \multicolumn{3}{c}{Fixed parameter}\\ Event type ($e$) & $\lambda_1^e$ & $\lambda_2^e$ & $\gamma^e$ \\[0.3ex] \hline \\[-1.9ex] Goal & $2.907\times 10^{-8}$ & 0.041 & 0.165 \\[0.3ex] GoalStop & $1.621\times 10^{-7}$ & 0.009 & 0.003 \\ \hline \end{tabular} \caption{Values of the fixed parameters ($\psi$) for interacting event types Goal and GoalStop.} \label{tableparam} \end{table} Figure~\ref{fitfig} shows the $\eta_{i,k}^{e}$ \eqref{eta} we obtain when the model parameters are combined for 2 randomly selected matches, where we set $\Delta_i^{e}$ to be $\mu_{\Delta_i^{e}}$. We plot these against the observed counts and include the 95\% prediction intervals for each $\eta_{i,k}^{e}$ to add further clarity. The solid line on each plot separates the players from the two opposing teams. A large number of the model $\eta$s are close to the observed counts (especially for GoalStop), and nearly all observed values fall within the 95\% prediction intervals, showing a reasonable model fit.
The number of goal-stops across teams is not particularly variable; however, there is a suggestion of player variability (although this is somewhat clouded by the fact that substitutes are not specifically marked, as we would expect them to register lower counts by virtue of less playing time). \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Goal\vspace{0.01cm} \includegraphics[scale=0.55]{goal-1483482.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad GoalStop\vspace{0.01cm} \includegraphics[scale=0.55]{goalstop-1483482.pdf} \end{minipage} \\ \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goal-1483620.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goalstop-1483620.pdf} \end{minipage} \caption{Within-sample predictive distributions for the number of goals/goal-stops in the 2013/2014 English Premier League for 2 randomly selected matches. \emph{Cross} model combinations of $\eta_{i,k}^{e}$, \emph{circle} observed counts, \emph{dashed bars} 95\% prediction interval for each~$\eta_{i,k}^{e}$. The solid line separates the players from the two teams.} \label{fitfig} \end{figure} We sample the marginal posterior variational densities, $q(\Delta_i^{e})$, 10K times (constructing the corresponding $\eta_{i,k}^{e}$), and simulate from the relevant Poisson distributions (with mean $\eta_{i,k}^{e}\tau_{i,k}$). This gives a Monte Carlo simulation of each player's number of goals and goal-stops for each fixture in the 2013/2014 English Premier League. Summing over the players who played in a given match gives an in-sample prediction of the total number of goals/goal-stops for each team in every fixture. We present box-plots of these totals in figure~\ref{boxfig}, where, for reference, we also include box-plots for each team's total number of goals/goal-stops in each fixture constructed from the touch-by-touch data.
The model is clearly capturing the patterns between differing teams (and the patterns observable within the data), especially for goal-stop. The teams who scored the most goals over the season, Manchester City and Liverpool, have higher Goal box-plots than other teams, encompassing a larger range of goals scored. On the other hand, teams who scored few goals over the season, such as Norwich City, have the lowest box-plots, which cover a small range of goals scored in a match. Slightly surprisingly, there appears to be no connection between the occurrences of GoalStop and the goals a team concedes, with both Chelsea and Norwich City having similar box-plots, despite conceding a vastly different number of goals, 27 and 62 respectively. There is some suggestion that such observations may be used to determine a team's style of play, for example, whether they are a \emph{passing} team or follow the \emph{long ball} philosophy; however, we leave such questions for future investigation given the set-up we derive here. We can conclude, nevertheless, that the model is capturing the trends observed in the touch-by-touch data well. \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \qquad Model\vspace{0.01cm} \includegraphics[scale=0.55]{goal-10000-montecarlo-box-model.pdf} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.48\linewidth} \centering \qquad Touch-by-touch data\vspace{0.01cm} \includegraphics[scale=0.55]{goal-box-real.pdf} \end{minipage} \\ \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goalstop-10000-montecarlo-box-model.pdf} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{goalstop-box-real.pdf} \end{minipage} \caption{Box-plots of the total number of goals/goal-stops in a game for each team in the 2013/2014 English Premier League observed under the model and from the data.
\emph{Top row} Goal, \emph{bottom row} GoalStop.} \label{boxfig} \end{figure} Marginal posterior variational densities of Goal, $q(\Delta_i^{\textrm{Goal}})$, for two players are presented in figure~\ref{playerfig}, where $q(\Delta_i^{\textrm{Goal}})$ takes the form of \eqref{qDelta_single} and the prior is \eqref{ab-prior}. The two players shown are Daniel Sturridge and Harrison Reed. Sturridge played 29 times over the season, totalling 2414 minutes of match time, scoring 21 goals, whereas Reed played 4 times, totalling 23 minutes, scoring zero goals. These attributes are clearly captured by the posteriors: the greater number of observations for Sturridge leads to a posterior with a much smaller variance. The high number of goals scored by Sturridge leads to him having a higher value of $\mu_{\Delta_i^{\textrm{Goal}}}$ (with reasonable certainty), whilst the lack of both goals and playing time leads to a posterior for Reed which resembles the prior. \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\quad Sturridge\vspace{0.01cm} \includegraphics[scale=0.55]{goal287.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\quad Reed\vspace{0.01cm} \includegraphics[scale=0.55]{goal523.pdf} \end{minipage} \\ \caption{Marginal posterior variational densities of Goal for 2 players in the 2013/2014 English Premier League. \emph{Dashed} prior, \emph{solid} posterior.} \label{playerfig} \end{figure} The model is clearly capturing differences between players' abilities, as evidenced by the posteriors of figure~\ref{playerfig}. Thus, the natural question to ask is whether these differences are sensible and, if we were to order the players by their inferred ability, whether this ordering would agree with (a debatable) reality. Hence, we construct the marginal posterior variational densities for all players and rank them according to the 2.5\% quantile of these densities.
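Since each marginal is Gaussian, the 2.5\% quantile is simply $\mu_{\Delta}+\sigma_{\Delta}\Phi^{-1}(0.025)\approx\mu_{\Delta}-1.96\sigma_{\Delta}$. A small sketch reproducing the ordering at the head of table~\ref{tablegoal}, with the $(\mu,\sigma)$ pairs copied from that table:

```python
from scipy.stats import norm

# (mean, standard deviation) of q(Delta_i^Goal), from table "Goal - top 10"
players = {
    "Suarez":    (0.869, 0.184),
    "Sturridge": (0.617, 0.225),
    "Aguero":    (0.636, 0.250),
    "Y. Toure":  (0.395, 0.224),
}

def q025(mu, sigma):
    # 2.5% quantile of a N(mu, sigma^2) marginal: mu - 1.96 * sigma
    return norm.ppf(0.025, loc=mu, scale=sigma)

ranking = sorted(players, key=lambda p: q025(*players[p]), reverse=True)
print(ranking)  # ['Suarez', 'Sturridge', 'Aguero', 'Y. Toure']
```

Note how the quantile criterion can promote a player with a lower mean (Sturridge above Aguero here would flip under the mean alone), since it penalises uncertainty.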
A top 10 for Goal is presented in table~\ref{tablegoal}, with a ranking for GoalStop given in table~\ref{tablegoalstop}. We present top 10 lists for other event types in Appendix~\ref{app_top10}. The ranking shown in table~\ref{tablegoal} appears sensible, comprising the players who were the main goal scorers (strikers) for the best teams, and the players who scored nearly all of a lesser team's goals over the season. The ranking is very close to that obtained by ranking players on the total number of goals scored over the season (although there is some debate in the soccer community as to whether this is a sensible way of ranking, with some suggesting a ranking based on a per-90-minute statistic; however, this can be distorted by those with very little playing time; see Chapter 3 of \cite{anderson_2013} or \cite{AGR_2016} for further discussion). The questionable deviations from this ranking are Aguero (ranked third) and van Persie (ranked seventh). Both these players have less playing time over the season compared to their competitors, and thus the model highlights them as better goal scorers, given the time available to them, than a ranking on total goals scored would suggest. Expert soccer analysts agreed with this view when we showed them these rankings. Suarez has an inferred ability much greater than any other player, which is evidenced by the 31 goals he scored (10 more than any other player). At points the difference between successive ranks is small, suggesting some players are harder to distinguish between. Finally, we note that the standard deviations for all players in the top 10 are roughly the same, meaning we have similar confidence in the ability of any of these players.
\begin{table} \centering \begin{tabular}{clccccccc} \hline \multicolumn{9}{c}{Goal - top 10}\\ \hline \multirow{2}{*}{Rank} & \multirow{2}{*}{Player} & 2.5\% & \multirow{2}{*}{Mean} & Standard & \multirow{2}{*}{Observed} & Observed & Rank & Time\\ & & quantile & & deviation & & rank & difference & played \\ \hline 1 & Suarez & 0.508 & 0.869 & 0.184 & 31 & 1 & 0 & 3185\\ 2 & Sturridge & 0.176 & 0.617 & 0.225 & 21 & 2 & 0 & 2414\\ 3 & Aguero & 0.147 & 0.636 & 0.250 & 17 & 4 & +1 & 1616\\ 4 & Y. Toure & -0.043 & 0.395 & 0.224 & 20 & 3 & -1 & 3113\\ 5 & Rooney & -0.056 & 0.421 & 0.243 & 17 & 5 & 0 & 2625\\ 6 & Dzeko & -0.065 & 0.424 & 0.249 & 16 & 8 & +2 & 2128\\ 7 & van Persie & -0.136 & 0.430 & 0.289 & 12 & 15 & +8 & 1690 \\ 8 & Remy & -0.230 & 0.302 & 0.271 & 14 & 11 & +3 & 2274\\ 9 & Bony & -0.257 & 0.238 & 0.252 & 16 & 7 & -2 & 2644\\ 10 & Rodriguez & -0.354 & 0.161 & 0.263 & 15 & 10 & 0 & 2758\\ \hline \end{tabular} \caption{Top 10 goal scorers in the 2013/2014 English Premier League based on the 2.5\% quantile of the marginal posterior variational density for each player, $q(\Delta_i^{\textrm{Goal}})$.} \label{tablegoal} \end{table} The ranking of GoalStop (table~\ref{tablegoalstop}) appears, at first glance, to be less sensible than that of table~\ref{tablegoal}. It features 3 players with comparatively large standard deviations: Kallstrom (rank 2), Lewis (rank 6) and Palacios (rank 7). Whilst these players did well with the little playing time afforded to them, it is somewhat presumptuous to postulate that they would maintain a similar level of ability given more game time, and their rankings would likely slip as a result. Unfortunately, this is a by-product of inducing a ranking from the 2.5\% quantile --- ideally we would provide several tables for each event type, filtering players by the amount of uncertainty surrounding them, although such an approach would be unwieldy given the large number of players in the dataset.
Moreover, fully factorised mean-field approximations are known to underestimate the uncertainty of the posterior \citep{bishop_2006}. Although comparative uncertainty between players is easier to gauge, it is less clear how to quantify how much bias is added to the variance of each latent variable individually. In future work, this could be mitigated by adopting a variational approximation that accounts for some correlations between the latent variables. However, the rest of the list appears sensible and is made up mainly of defensive midfielders (whose main role is to disrupt the opposition's play); only Mannone and Ruddy are goalkeepers (discounting the 3 players with large standard deviations). This suggests that, to stop a goal, it is more prudent to invest in a better defensive midfielder than a goalkeeper, presuming one cannot simply buy the best player in each position. Here, the differences between successive ranks are much smaller than in table~\ref{tablegoal}, implying it is harder to distinguish between players' abilities to perform goal-stops than their abilities to score goals.
\begin{table} \centering \begin{tabular}{clccccccc} \hline \multicolumn{9}{c}{GoalStop - top 10}\\ \hline \multirow{2}{*}{Rank} & \multirow{2}{*}{Player} & 2.5\% & \multirow{2}{*}{Mean} & Standard & \multirow{2}{*}{Observed} & Observed & Rank & Time\\ & & quantile & & deviation & & rank & difference & played \\ \hline 1 & Mulumbu & 2.575 & 2.653 & 0.040 & 631 & 1 & 0 & 3319\\ 2 & Kallstrom & 2.553 & 2.900 & 0.177 & 33 & 405 & +403 & 144 \\ 3 & Mannone & 2.528 & 2.615 & 0.044 & 508 & 12 & +9 & 2767 \\ 4 & Yacob & 2.510 & 2.614 & 0.053 & 359 & 43 & +39 & 1979 \\ 5 & Tiote & 2.474 & 2.560 & 0.044 & 517 & 8 & +3 & 2988 \\ 6 & Lewis & 2.446 & 2.863 & 0.213 & 23 & 436 & +430 & 98 \\ 7 & Palacios & 2.441 & 2.638 & 0.101 & 100 & 286 & +279 & 585 \\ 8 & Jedinak & 2.420 & 2.500 & 0.041 & 603 & 2 & -6 & 3651 \\ 9 & Ruddy & 2.411 & 2.491 & 0.041 & 600 & 3 & -6 & 3679\\ 10 & Arteta & 2.409 & 2.503 & 0.048 & 431 & 21 & +11 & 2615 \\ \hline \end{tabular} \caption{Top 10 goal-stoppers in the 2013/2014 English Premier League based on the 2.5\% quantile of the marginal posterior variational density for each player, $q(\Delta_i^{\textrm{GoalStop}})$.} \label{tablegoalstop} \end{table} Overall, though, the model provides a good fit to the data and shows reasonable power to determine a player's ability in a specific event type, with the marginal posterior variational densities providing a good visual comparison between different players' abilities (and the confidence surrounding those abilities). In the next section we look to utilise these player abilities in the prediction of goals in a soccer match. \subsection{Prediction} \label{prediction} A key betting market, stemming from the rise of online betting, is the over/under market \citep{betfair_2017, bethq_2017, Sportingindex_2017}, where people bet on whether a certain number of goals will (over) or won't (under) be scored in a match.
Here we attempt to predict whether more than 2.5 goals (a threshold common in online betting) will be scored in a given fixture. To predict the goals scored in a fixture which takes place in the future, we first fit the model on all the past data available to us. We use a whole season of data to train the model, before predicting the following season in incremental blocks. Here, we use the entirety of the 2013/2014 English Premier League season (380 fixtures) to train the model, before attempting to predict the goals scored in each match of the 2014/2015 English Premier League season. We introduce the fixtures (on which we predict) in blocks of size 80, with a final block of 60 fixtures to total 380 (the number of fixtures over a season). In each case we use all of the available past data to fit the model; that is, in predicting the second block of 80 fixtures in the 2014/2015 season, we use all of the 2013/2014 season and the first block of the 2014/2015 season to fit the model. Figure~\ref{pic-window} shows a graphical representation of this approach. \begin{figure} \centering \begin{tikzpicture}[scale=5,>=latex] \draw[black!20,-] (0,0) -- (0,0.5); \draw[black!20,-] (1,0) -- (1,0.5); \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{13/14}] (xo0) at (0,0.5) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=above:{14/15}] (xo1) at (1,0.5) {}; \draw[black] (0,0.35) rectangle (1,0.45); \draw[black!70,densely dotted] (1,0.35) rectangle (1.25,0.45); \draw[black] (0,0.2) rectangle (1.25,0.3); \draw[black!70,densely dotted] (1.25,0.2) rectangle (1.5,0.3); \draw[black] (0,0.05) rectangle (1.5,0.15); \draw[black!70,densely dotted] (1.5,0.05) rectangle (1.75,0.15); \node at (1.795, -0.01) {\vdots\;etc}; \end{tikzpicture} \caption{Illustration of the approach to prediction. The model is fit on all past data, before predictions are made for a future block of fixtures.
\emph{Solid} fit, \emph{dashed} predict.} \label{pic-window} \end{figure} For the extension to the model of \cite{baio_2010} (which we consider to be the baseline model), we include the latent player abilities for the event types Goal, Shots and ChainEvents, with their counterparts being GoalStop, ShotStop and AntiPass respectively. Goal and GoalStop are as defined in Section~\ref{ability}, whilst Shots and ShotStop are direct analogues of Goal and GoalStop, capturing the ability to shoot and the ability to stop a shot. ChainEvents represents how prevalent a player is in the lead-up to a good attacking chance, with AntiPass being a player's ability to stop the other team from passing the ball. We refer the reader to Appendix~\ref{app_top10} for the more technical definitions of these event types. Explicitly, \eqref{fdeltah} and \eqref{fdeltaa} are given by \begin{align} f\left(\Delta\right)_h &= \sum_{i\in I_k^{T_k^H}}\left(\Delta_{i}^{\textrm{Goal}} + \Delta_{i}^{\textrm{Shots}} + \Delta_{i}^{\textrm{ChainEvents}}\right) \nonumber\\ &\qquad\qquad- \sum_{i\in I_k^{T_k^A}}\left(\Delta_{i}^{\textrm{GoalStop}} + \Delta_{i}^{\textrm{ShotStop}} + \Delta_{i}^{\textrm{AntiPass}}\right) \label{fdeltah-pred} \intertext{and} f\left(\Delta\right)_a &= \sum_{i\in I_k^{T_k^A}}\left(\Delta_{i}^{\textrm{Goal}} + \Delta_{i}^{\textrm{Shots}} + \Delta_{i}^{\textrm{ChainEvents}}\right) \nonumber\\ &\qquad\qquad- \sum_{i\in I_k^{T_k^H}}\left(\Delta_{i}^{\textrm{GoalStop}} + \Delta_{i}^{\textrm{ShotStop}} + \Delta_{i}^{\textrm{AntiPass}}\right), \label{fdeltaa-pred} \end{align} where $I_k^j$ is the initial eleven players who start fixture $k$ for team $j$. We also considered including a player's ability to pass, but found this led to no increase in predictive power (and in some instances diminished it).
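Given posterior draws of the scoring intensities $(\theta_h,\theta_a)$, the over/under probability follows directly from the fact that a sum of independent Poisson counts is Poisson with the summed rate. The sketch below uses made-up log-normal intensity draws purely for illustration; in practice these would come from the fitted hierarchical model.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)

# made-up posterior draws of the home and away scoring intensities
theta_h = rng.lognormal(mean=0.3, sigma=0.2, size=10_000)
theta_a = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)

# y_h + y_a | theta ~ Pois(theta_h + theta_a), so
# P(over 2.5) = E[ P(y_h + y_a >= 3 | theta) ] over the posterior draws
p_over = 1.0 - poisson.cdf(2, theta_h + theta_a).mean()
print(round(p_over, 3))
```

Averaging the Poisson tail probability over the draws propagates posterior uncertainty in the intensities into the over/under prediction, rather than plugging in a single point estimate.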
We found little difference when setting $\Delta_i^{e}$ to be either $\mu_{\Delta_i^{e}}$ or the 2.5\% quantile of $q(\Delta_i^{e})$ in \eqref{fdeltah-pred} and \eqref{fdeltaa-pred}, and so here report results for the mean ($\mu_{\Delta_i^{e}}$). Both models were fitted using \verb+PyStan+ and were run long enough to yield a sample of approximately 10K independent posterior draws, after an initial burn-in period. The teams which feature in the data are those in table~\ref{tableteams}, with the addition of Burnley, Leicester City and Queens Park Rangers, who replaced the relegated teams of Cardiff City, Fulham and Norwich City for the 2014/2015 season. The set-up outlined at the beginning of this section allows us to view the evolution of a team's attack/defence parameter or a player's latent ability through time after different fitting blocks. We denote block 0 to be all the fixtures in the 2013/2014 English Premier League season, block 1 to be block 0 plus the first 80 fixtures of the 2014/2015 season, block 2 to be block 1 plus the next 80 fixtures, block 3 to be block 2 plus the next 80 fixtures, and block 4 to be block 3 plus the next 80 fixtures. The attack and defence parameters through time for both the baseline model and the model including the latent player abilities for selected teams are shown in figure~\ref{attdeffig}, where we plot negative defence so that positive values indicate increased ability. Recall that these parameters for all teams must sum to zero. We see similar, but not identical, patterns under both models. The model including latent player abilities reduces the variance of the attack and defence parameters compared to the baseline model, suggesting the inclusion of the $\Delta$s accounts for some of a team's attacking and defensive ability. Manchester City and Chelsea follow similar patterns under both models, with Chelsea clearly having the best defence parameter.
Including the $\Delta$s impacts Liverpool's attacking ability, where the removal of Suarez (Liverpool's best attacking player, who transferred to Barcelona between the 2013/2014 and 2014/2015 seasons) clearly reduces Liverpool's attacking threat at a more drastic rate than under the baseline model. This is in line with reality: Liverpool scored only 52 goals over the 2014/2015 season, compared to 101 goals the previous year. Cardiff City, who were relegated after the 2013/2014 season but feature in all blocks despite not being used for prediction (as fixtures involving them can inform the attack and defence abilities of other teams), have relatively constant parameters under both models, accounting for the reduction in variance. Notable for Burnley is the peak/trough observed after block~3; at this point Burnley were facing the prospect of relegation and needed to start winning games, so they tried (and succeeded) to score more goals, but became more likely to concede goals in the process. \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\, Baseline\vspace{0.01cm} \includegraphics[scale=0.55]{baseline-attack.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\, Including latent player abilities\vspace{0.01cm} \includegraphics[scale=0.55]{GSC-attack.pdf} \end{minipage} \\ \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{baseline-defence.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[scale=0.55]{GSC-defence.pdf} \end{minipage} \caption{Attack and defence parameters through time under the baseline model and the model including the latent player abilities for selected teams. \emph{Top row} attack, \emph{bottom row} negative defence.
\hbox{\emph{Black-solid} Liverpool}, \emph{black-dashed} Chelsea, \emph{black-dotted} Manchester City, \emph{grey-solid} Cardiff City, \hbox{\emph{grey-dashed} Burnley}.} \label{attdeffig} \end{figure} The mean of $q(\Delta_i^{\textrm{Goal}})$ through time for a selection of players is illustrated in figure~\ref{goaltimefig}; we let this value represent a player's ability to score a goal. If a player does not feature in a block we represent their ability by the mean of the prior distribution (-2). We see that the model is quick to identify a given player's ability. To elucidate, Costa is immediately (after block 1) identified as one of the top goal scorers, despite not featuring in the 2013/2014 season. The same can be said for Sanchez, who takes longer to establish his ability after a less impressive start to the season. Aguero was one of the best goal scorers across all the data and has a constant ability near the top. Kane had little playing time until block 2 where the model starts to increase his ability to score a goal. Defoe spent most of the 2013/2014 season on the bench before transferring to Toronto~FC; he returned to the English Premier League with Sunderland in January 2015, where he scored a number of goals, saving Sunderland from relegation. The model rightly acknowledges this and raises his ability as a goal scorer (a trait he is well known for). Given a player scores a small number of goals, relative to other event types, we include G. Johnson to show the effect of scoring a goal. Johnson scored 1 goal in the 2013/2014 and 2014/2015 seasons (during block 2), and his ability rises by a large jump because of this; such jumps are not evident for players who score a reasonable number of goals (5+). \begin{figure} \centering \includegraphics[scale=0.7]{delta-time-goal_mean.pdf} \caption{The mean of $q(\Delta_i^{\textrm{Goal}})$ through time for a selection of players. \hbox{\emph{Black-solid} Costa}, \hbox{\emph{black-dashed} A.
Sanchez}, \emph{black-dotted} Aguero, \emph{grey-solid} Defoe, \emph{grey-dashed} G. Johnson, \hbox{\emph{grey-dotted} Kane}.} \label{goaltimefig} \end{figure} The mean of $q(\Delta_i^{\textrm{Control}})$ and the mean of $q(\Delta_i^{\textrm{Disruption}})$ are plotted against each other through time for a selection of players in figure~\ref{controltimefig}. Control and Disruption comprise the event types listed in table~\ref{eventtype}. It is evident that for the majority of players, their Control and Disruption abilities do not vary much through time (from block to block). This is perhaps unsurprising, given we do not expect a player's ability to change dramatically from fixture to fixture. Those that vary the most are the players with fewer minutes played in the earlier blocks, but much more playing time as time progresses, for example, Kane (see figure~\ref{controltimefig}). The figure does however show a clear distinction between players, with defenders tending to occupy the top half of the graph, and strikers the bottom half. An interesting extension to this work would be to see whether a clustering analysis of these latent player abilities would reveal player positions, that is, central defender or wing-back for example. \begin{figure} \centering \includegraphics[scale=0.7]{delta-time-control-disruption_mean.pdf} \caption{The mean of $q(\Delta_i^{\textrm{Control}})$ versus the mean of $q(\Delta_i^{\textrm{Disruption}})$ through time for a selection of players. \hbox{\emph{Triangle} block 0}, \emph{square} block 4. \emph{Black-solid} Aguero, \emph{black-dashed} A. Carroll, \emph{black-dotted} Koscielny, \emph{grey-solid} G.
Johnson, \emph{grey-dashed} Silva, \emph{grey-dotted} Kane.} \label{controltimefig} \end{figure} To form our predictions of whether over or under 2.5 goals are scored in a given fixture, we take each of our posterior draws (fitted using the previous block) and construct the $\theta_t$ of~\eqref{hier_pois} via \eqref{theta_h} and \eqref{theta_a} (baseline model), or \eqref{theta_h_ext} and \eqref{theta_a_ext} (including latent player abilities) for the fixtures in the following block (our prediction block). Our prediction blocks are formed of fixtures between teams we have already seen in the previous (fitting) blocks, hence prediction block 1 consists of 57~fixtures, prediction blocks 2-4 are made up of 80 fixtures, with 60 fixtures in prediction block~5. We use a predicted starting line-up from expert soccer analysts to determine $I_k^j$, the players who enter \eqref{fdeltah-pred} and \eqref{fdeltaa-pred}; these are usually quite accurate (86\% accuracy over the season) and vary little from the players who start a particular game. We then combine the $\theta_t$ for the home and away teams to give an overall scoring rate for each fixture, $\theta = \theta_h + \theta_a$, from which we calculate the probability of there being over 2.5 goals in the match. We average these probabilities across the posterior sample. ROC curves based on these averaged probabilities, for each prediction block, are presented in figure~\ref{rocfig}. For clarity we also present the area under the curve (AUC) values in table~\ref{tableroc}. It is evident from both the figure and the table, that including the latent player abilities in the model leads to a better predictive performance. We observe this increase across all blocks, although the difference between the models in block 5 is severely reduced compared to other blocks. 
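The over-2.5 computation above reduces to a Poisson tail probability averaged over posterior draws, and the AUC can be obtained from the rank (Mann-Whitney) statistic. The sketch below uses made-up draws and labels, not output from the fitted model.

```python
import math

def prob_over_2p5(theta_h_draws, theta_a_draws):
    """Posterior-averaged P(total goals > 2.5) for one fixture.
    With total rate theta = theta_h + theta_a, the Poisson tail is
    P(N > 2) = 1 - exp(-theta) * (1 + theta + theta**2 / 2)."""
    probs = [1.0 - math.exp(-(th + ta)) * (1.0 + (th + ta) + (th + ta) ** 2 / 2.0)
             for th, ta in zip(theta_h_draws, theta_a_draws)]
    return sum(probs) / len(probs)

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

p = prob_over_2p5([1.6, 1.4], [1.4, 1.6])  # toy draws; total rate 3.0 in each draw
```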
The reasons for this reduction are twofold. The first is that, given a near full season of data (2014/2015) on which we are predicting, the baseline model can capture a team's attack and defence parameters more accurately than it can towards the start of the season. Secondly, the last block of a season tends to be more volatile, as some teams try out younger players (who are not observed in the data previously), and others have increased motivation to score more goals to try and win games, for example, to avoid relegation. Hence, we observe similar behaviour under both models, as we observe fewer players in a starting line-up, moving the model including player abilities towards the baseline model. However, overall, we can conclude that the inclusion of the latent player abilities in the model results in a better predictive performance throughout the 2014/2015 season. \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Block 1\vspace{0.01cm} \includegraphics[scale=0.55]{both-roc-window1.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Block 2\vspace{0.01cm} \includegraphics[scale=0.55]{both-roc-window2.pdf} \end{minipage} \vspace{0.2cm}\\ \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Block 3\vspace{0.01cm} \includegraphics[scale=0.55]{both-roc-window3.pdf} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Block 4\vspace{0.01cm} \includegraphics[scale=0.55]{both-roc-window4.pdf} \end{minipage} \vspace{0.2cm}\\ \begin{minipage}[b]{0.48\linewidth} \centering \qquad\qquad Block 5\vspace{0.01cm} \includegraphics[scale=0.55]{both-roc-window5.pdf} \end{minipage} \caption{ROC curves based on averaged probabilities for each prediction block.
\emph{Black} model including the latent player abilities, \emph{grey} baseline model, the \emph{dashed line} is the line $y=x$.} \label{rocfig} \end{figure} \begin{table} \centering \begin{tabular}{l|ccccc} \hline \multicolumn{6}{c}{Area under the curve values}\\ \hline & \multicolumn{5}{c}{Block}\\ Model & 1 & 2 & 3 & 4 & 5 \\ \hline Baseline & 0.47 & 0.60 & 0.53 & 0.55 & 0.61 \\ Including latent player abilities & 0.54 & 0.65 & 0.58 & 0.68 & 0.62 \\ \hline \end{tabular} \caption{Area under the ROC curves based on averaged probabilities for each prediction block under both models.} \label{tableroc} \end{table} \section{Discussion} \label{disc} We have provided a framework to establish player abilities in a Bayesian inference setting. Our approach is computationally efficient and centres on variational inference methods. By adopting a Poisson model for occurrences of event types we are able to infer a player's ability for a multitude of event types. These inferences are reasonably accurate and have close ties to reality, as seen in Section~\ref{ability}. Furthermore, our approach allows the visualisation of differences between players, for a specific ability, through the marginal posterior variational densities. We also extended the Bayesian hierarchical model of \cite{baio_2010} to include these latent player abilities. Through this model we captured a team's propensity to score goals, including a team's attacking ability, defensive ability and accounting for a home effect. We used output from this model to predict whether over 2.5 goals would be scored in a fixture or not, observing an improvement in performance over the baseline model. A benefit of the prediction approach (and the block structure we implemented) is that it allowed us to see how our inference about a player's ability evolved through time, explicitly highlighting what impact fringe players can have when they start getting regular playing time, for example, Kane in Section~\ref{prediction}.
We plan three major ways of extending the current work. First, we intend to extend the variational approximation to allow for dependency among the latent abilities in the posterior. Allowing for correlations in $q(\cdot)$ will let the model infer higher posterior variances, resulting in a more robust ranking of players, and possibly improved predictive power for tasks such as providing probabilities on the number of goals in a future match. From a modelling perspective, an extension is to let abilities change over time using a random walk across seasons and within seasons, which will be particularly useful when a substantial number of years of historical touch-by-touch data eventually becomes available. Finally, as the model gets applied to more competitions simultaneously, it will be important to propose ways of scaling up the procedure. A topic worth investigating is how to best iteratively subsample the data for stochastic optimisation of the variational objective function.
\section*{Results} \subsection{Data sets and measures (Observe).} Our dataset consists of soccer logs \cite{gudmundsson2017spatio,stein2017how,cintia2015harsh,pappalardo2017quantifying} describing 760 games of the Italian Serie A in seasons 2015/2016 and 2016/2017. Every game is described by a sequence of events on the field with a total of one million events. Each event -- passes, shots, tackles, dribbles, clearances, goalkeeping actions, fouls, intercepts, aerial duels (see Supplementary Note 1) -- consists of a timestamp, the players involved in the event, the position and the outcome (successful or failed). From these events we derive a large number of features by counting the number of events of a specific type, aggregating events, or combining them through specific functions (Supplementary Note 2) which capture the technical performance of a player $u$ during a game $g$, defined as an $n$-dimensional feature vector: $$\vec{p}(u, g) = [x_1(u, g), x_2(u, g), \dots, x_n(u, g)],$$ where $x_i(u, g)$ is a feature describing a specific aspect of $u$'s technical performance in game $g$. We extract 150 technical performance features, such as the total number of shots by a player in a game (volume of play), the number of successful passes to a teammate (quality of play), or a player's dangerousness during a game (see Supplementary Note 2). Since the technical features have different ranges of values, we standardize them by computing their z-scores. In total, we analyze $\approx$20,000 technical performance vectors, each describing the performance of a player in a specific game. Our second data source consists of all the ratings given by three Italian sports newspapers -- Gazzetta dello Sport ($G$), Corriere dello Sport ($C$) and Tuttosport ($T$) -- to each player after each game of the season.
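The z-score standardisation step described above can be written in a few lines; the performance matrix below is random, standing in for the real $\approx$20,000 $\times$ 150 matrix of technical features.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(loc=5.0, scale=2.0, size=(1000, 150))  # stand-in for ~20,000 x 150

# z-score each feature column: subtract its mean, divide by its std
Z = (P - P.mean(axis=0)) / P.std(axis=0)
```

After this step every feature has mean 0 and unit standard deviation, so features with different natural scales (e.g. passes versus goals) become comparable.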
A rating $v(u, g, x)$ is assigned to a player $u$ by an anonymous judge (typically a sports reporter) who watched game $g$ for newspaper $x \in \{G, C, T\}$ (see Figure \ref{fig:game_visualization}). These ratings range from 0 (unforgettably bad) to 10 (unforgettably amazing) in units of 0.5 and indicate a judge's perception of a player's performance during a game. It is worth noting that both variables and the criteria behind these decision records are unknown, though they hopefully rely on some quantitative aspects of technical performance. Though the three newspapers rate the players independently, we find that their rating distributions are statistically indistinguishable (Figure \ref{fig:ratings}a). They are peaked at 6, indicating that most performances are rated as sufficient (see Figure \ref{fig:ratings}b). The soccer rating system is borrowed from the Italian school rating system, where 6 indicates sufficiency reached by a student in a given discipline. Only $\approx 3\%$ of ratings are lower than 5 and only $\approx 2\%$ of ratings are higher than 7. These outliers refer to rare events of historical importance or deplorable actions by players, such as a 10 assigned to Gonzalo Higua\'in when he became the best seasonal top scorer ever in the history of Serie A, or a 3 assigned to a goalkeeper who intentionally nudged an opponent in the face. In line with the high school rating system, we categorize the ratings in bad ratings (< $6$), sufficient ratings ($\in \{6, 6.5\}$), and good ratings $(> 6.5)$, finding that bad ratings are around three times more frequent than good ratings. Ideally, the same performance should result in the same rating for all three judges. In reality, ratings are a personal interpretation of performance, prompting us to assess the degree of agreement between two judges' ratings. 
We observe a strong Pearson correlation ($r=0.76$) between all newspaper pairs (Figure \ref{fig:ratings}c), though in rare cases two ratings may differ by up to 6 rating units. In Figure \ref{fig:ratings}c we highlight areas of disagreement, i.e., situations where one judge perceives a performance as good while the other perceives it as bad, representing around 20\% of the cases. Also, the Root Mean Squared Error (RMSE) of a rating set with respect to another is $RMSE = 0.50$, meaning that typically the error between two paired ratings is within one unit (i.e., $0.5$) and that there is no systematic discrepancy between the newspapers' ratings (Supplementary Note 3). In summary, though situations of disagreement exist, overall we observe a good agreement on paired ratings between the newspapers, finding that the ratings (i) have identical distributions; (ii) are strongly correlated to each other; and (iii) typically differ by one rating unit ($0.5$). The key question we wish to address is the consistency of ratings with respect to technical performance, i.e., to what extent similar technical performances are associated with similar ratings \cite{hogson2008examination, mantonakis2009order}. For every pair of technical performances $\vec{p}_i$ and $\vec{p}_j$ ($i \neq j$) we compute their Minkowski distance $d_T(\vec{p}_i, \vec{p}_j)$ (see Supplementary Note 4) and the absolute difference of the corresponding ratings $d_R(\vec{p}_i, \vec{p}_j) = |v_i - v_j|$. We then investigate the relation between $d_T(\vec{p}_i, \vec{p}_j)$ and $d_R(\vec{p}_i, \vec{p}_j)$, finding that the more similar two performances are, the closer are the associated ratings (Figure \ref{fig:ratings}d). This is good news, confirming the existence of a relationship between technical performance and soccer ratings, i.e., judges associate close ratings to performances that are close in the feature space.
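The pairwise consistency check above can be sketched directly. The order of the Minkowski distance is a detail of Supplementary Note 4, so $p=2$ (Euclidean) is an assumption here, and the performances and ratings below are toy values.

```python
import itertools
import numpy as np

def minkowski(a, b, p=2):
    """Minkowski distance of order p; p = 2 (Euclidean) is assumed here."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

perfs = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])  # toy performance vectors
ratings = np.array([5.0, 6.0, 7.5])                     # toy ratings

# every pair (i, j), i != j: technical distance d_T and rating distance d_R
pairs = [(minkowski(perfs[i], perfs[j]), float(abs(ratings[i] - ratings[j])))
         for i, j in itertools.combinations(range(len(perfs)), 2)]
```

Binning the `d_T` values (e.g. by decile, as in Figure 2d) and averaging `d_R` within each bin reproduces the style of analysis described in the text.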
\subsection{Simulating human perception of performance (Predict).} The criteria behind the rating process of the judges are unknown, since their decisions are a personal interpretation of a player's performance during a game. Here we use machine learning to generate an ``artificial judge'', i.e., a classifier which simulates the human rating process. The artificial judge serves two purposes: \emph{(i)} helps us understand to what extent human decisions are related to measurable aspects of a player's technical performance; \emph{(ii)} helps us build an interpretable model to unveil the rating criteria, i.e., the reasoning behind the judges' rating decisions. Formally, we assume the existence of a function $\mathcal{F}$ representing a rating process which assigns a set of ratings $V$ based on a set of technical performances $P$, i.e., $V = \mathcal{F}(P)$. We use machine learning to infer a function $f$ (i.e., the artificial judge) which approximates the unknown function $\mathcal{F}$, representing the human judge's rating process. Given a set $P = \{\vec{p}_1, \dots, \vec{p}_n\}$ of technical performances and the corresponding list of a judge $R$'s ratings $\vec{v}_R = \{v_1, \dots, v_n\}$, with $R \in \{G, C, T\}$, we compute function $f$ by training a machine learning classifier $M_P$ which learns the relations between technical performance and a human judge's ratings (see Supplementary Note 5). We evaluate $M_P$'s ability of simulating $\mathcal{F}$ as its ability to produce a rating set which agrees with a human judge as much as two human judges agree with each other. Figure \ref{fig:ratings}e shows the statistical agreement between the ratings $\vec{v}_{M_P}$ assigned by $M_P$ and the ratings $\vec{v}_G$ assigned by newspaper $G$. We find that the root mean squared error indicating the quality of $M_P$'s prediction is $RMSE_P = 0.60$, while the Pearson correlation between $M_P$'s predictions and real ratings is $r_P = 0.55$ (Figure \ref{fig:ratings}e). 
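The three agreement statistics used in this comparison (root mean squared error, Pearson correlation, and the two-sample Kolmogorov-Smirnov distance between rating distributions) can be implemented directly; the rating arrays below are toy values, not the newspapers' data.

```python
import numpy as np

def rmse(a, b):
    """Root mean squared error between two paired rating sets."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def pearson(a, b):
    """Pearson correlation between two paired rating sets."""
    return float(np.corrcoef(a, b)[0, 1])

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    ecdf_a = np.searchsorted(a, grid, side="right") / len(a)
    ecdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(ecdf_a - ecdf_b)))

v_judge = np.array([6.0, 6.5, 5.5, 7.0, 6.0])  # toy human ratings
v_model = np.array([6.5, 6.0, 5.5, 7.5, 6.0])  # toy artificial-judge ratings
```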
In contrast, the statistical agreement between two human judges is characterized by $RMSE_P = 0.50$ and $r_P = 0.76$ (see Figure \ref{fig:ratings}c-d). Moreover, we find that the distance between the distribution of $\vec{v}_{M_P}$ and the distribution of $\vec{v}_R$, as expressed by the Kolmogorov-Smirnov statistics \cite{justel1997multivariate}, is $KS_P = 0.15$, while for two human judges $KS_P = 0.02$. On one hand, these results indicate that a human judge partly relies on technical features, since the function $f$ inferred by the training task is able to assign ratings with some degree of agreement with human judges. Indeed, a null model where ratings are assigned randomly according to the distribution of real ratings results in null correlations (see Supplementary Note 6). On the other hand, the disagreement indicates that the technical features alone cannot fully explain the rating process. This can be due to the fact that soccer logs do not capture some important aspects of technical performance, or by the fact that the judges consider aspects other than a player's technical performance, i.e., personal bias or external contextual information. To investigate the second hypothesis we retrieve contextual information about players, teams and games, such as age, nationality, the club of the player, the expected game outcome as estimated by bookmakers, the actual game outcome and whether a game is played home or away (see Supplementary Note 7). We then generate for every player an extended performance vector which include both technical and contextual features and train a machine learning classifier $M_{(P + C)}$ on the set of extended performances. Figure \ref{fig:ratings}f shows that, by adding contextual information, the statistical agreement between the artificial judge and the human judge increases significantly. 
In particular, the artificial judge $M_{(P + C)}$ is much more in agreement with human judge $G$ than $M_P$ is, with the correlation increasing from $r_{P}=0.56$ to $r_{(P + C)}=0.68$ and the root mean squared error decreasing from $RMSE_P = 0.60$ to $RMSE_{(P + C)} = 0.54$ (Figure \ref{fig:ratings}f). Furthermore, the distribution of $\vec{v}_{M_{(P + C)}}$ is more similar to $\vec{v}_G$ than $\vec{v}_{M_P}$ is, since $KS_{P} = 0.15$ while $KS_{(P + C)} = 0.09$ (Figure \ref{fig:ratings}f). We observe that the highest errors of $M_{(P + C)}$ are associated with the outliers (ratings $> 7$ or $< 5$) and can be due to either the scarcity of examples in the dataset or the absence of other contextual knowledge. For example, 10 is assigned only two times in the dataset. One case is when Gonzalo Higua\'in, by scoring three goals in a game, became the best seasonal top scorer ever in the Italian Serie A. This is a missing variable which is not related to the player's performance in that game, making this exceptional rating impossible to predict. Artificial judge $M_{(P + C)}$ classifies that performance simply as a good one, assigning the player a high rating (8). Our results clearly show that $\mathcal{F}$ cannot be accurately inferred from technical performance only, as human observers rely also on contextual factors which go beyond the technical aspects of a player's performance. For example, a victory by a player's team or an unexpected game outcome compared to the expectation recorded by bookmakers significantly changes a player's rating, regardless of his actual technical performance. \subsection{Factors influencing human perception of performance (Explain).} Figure \ref{fig:importances} summarizes the importance of each technical and contextual feature to a human judge's rating process (Supplementary Note 8).
As in soccer a player is assigned to one of four roles -- goalkeeper, defender, midfielder, forward -- each implying specific tasks he is expected to achieve, the importance of the features to a judge varies from role to role. For this reason, we compute the importance of technical and contextual variables for each role separately (Supplementary Note 8). We observe that most of a human judge's attention is devoted to a small number of features, and the vast majority of technical features are poorly considered or discarded during the evaluation process. We further investigate this aspect by selecting the $k$ most important features for every $k=1, \dots, 150$ and evaluating the accuracy of $M_{(P + C)}$ on that subset of selected features. We find that the predictive power of $M_{(P + C)}$ stabilizes around 20 features, confirming that the majority of technical features have negligible influence on the human rating process (Supplementary Note 9). We find that human perception of a feature's importance changes with the role of the player, i.e., the same feature has different importance for different roles. For example, while the most attractive feature for goalkeepers and forwards is a technical feature (saves and goals, respectively), midfielders and defenders attract a human judge's attention mainly by collective features like the team's goal difference. This is presumably because goalkeepers and forwards are directly associated with events which naturally attract the attention of observers such as goals, either scored or suffered. In contrast, the evaluation of defenders and midfielders is more difficult, as they are mainly involved in events like tackles, intercepts and clearances, which attract less attention from human observers.
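The per-role importances are computed from the weights of a repertoire of fitted models (Supplementary Note 8). As a hedged stand-in for that procedure, the sketch below scores features by permutation importance: the rise in mean squared error when a feature's column is shuffled, here for a simple linear model on synthetic data.

```python
import numpy as np

# Synthetic data where feature 0 dominates the target; this is an
# illustrative stand-in, not the paper's importance computation.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit a linear model
base_mse = np.mean((X @ w - y) ** 2)

importance = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
    importance[j] = np.mean((Xp @ w - y) ** 2) - base_mse
```

Repeating this per role, and normalising the scores to $[0, 1]$, yields importance profiles in the spirit of the radar charts of Figure 3.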
Indeed, as Figure \ref{fig:importances} shows, for defenders and midfielders the contextual features have the highest importance, in contrast with goalkeepers and forwards which are mainly characterized by technical features (Figure \ref{fig:importances}). \subsection{Noticeability heuristic.} Here we investigate how human judges use the available features to construct a performance's evaluation. Given a rating $r$, we define its average performance as $\vec{p}(r) = [\overline{x}^{(r)}_1, \overline{x}^{(r)}_2, \dots, \overline{x}^{(r)}_n]$, where $\overline{x}^{(r)}_i {=} \frac{1}{N} \sum_{u, g} x_i(u, g)$ is the average value of technical feature $x_i$ computed across all performance vectors associated with $r$. Figure \ref{fig:heatmap}a visualizes $\vec{p}(r)$ discriminating between positive features, which have a positive correlation with soccer ratings, and negative features which have a negative correlation with them. We observe that, for $r \in \{5.5, 6.0, 6.5\}$, feature values in $\vec{p}(r)$ are all close to the average, while for the other ratings features are either significantly higher (red) or lower (blue) than the average (Figure \ref{fig:heatmap}a). In other words, while performances associated with extreme ratings have some features with extreme values hence capturing a judge's attention, performances associated with ratings in the vicinity of 6 are characterized by average feature values, hence representing performances less worthy of attention. We quantify this aspect by defining a rating's \emph{feature variability} $\sigma(r)$ as the standard deviation of $\vec{p}(r)$. We find that extreme ratings are characterized by the highest values of $\sigma(r)$ (see Supplementary Note 10), denoting the presence in $\vec{p}(r)$ of some feature values far from the average. 
This suggests that a judge's evaluation relies on a \emph{noticeability heuristic}, i.e., they assign a rating to a performance based on the presence of feature values that can be easily brought to mind. Driven by the above results, we label a feature value as \emph{noticeable} if it is significantly higher or lower than the average (see Supplementary Note 11). We then aggregate the original feature space into a low-dimensional performance vector $S = [P_{T}^+, P_{T}^-, N_{T}^+, N_{T}^-, P_{C}^+, P_{C}^-, N_{C}^+, N_{C}^-]$, where $P$ indicates the number of positive features having noticeable values, $N$ indicates the number of negative features having noticeable values, symbols $+$ and $-$ indicate that the features considered are positive or negative respectively, $T$ and $C$ refer to technical and contextual features respectively. For each performance, we construct $S$ and train a machine learning model $M_S$ to predict soccer ratings from it. Figure \ref{fig:heatmap}b shows how $r_{M_{(P + C)}}$ (blue curve) and $r_{M_S}$ (red curve) changes as we reduce the number of features used during the learning phase by imposing a minimum feature importance $w_{min}$, which is a proxy of a feature's relevance to a judge's evaluation. We find that while $r_{M_{(P + C)}}$ decreases as $w_{min}$ increases, $r_{M_S}$ has an initial increasing trend and reaches $r_{M_S} = 0.56$ when $w_{min} = 0.6$, equalizing artificial judge $M_P$. Notably, when $w_{min} = 0.6$ only 8 features out of 150 are selected to construct $S$, corresponding to the features in the two widest circles in the radar charts of Figure \ref{fig:importances}. The presence of a peak at $w_{min} = 0.6$ in Figure \ref{fig:heatmap}b suggests how a human judge constructs the evaluation: first she selects the most relevant features and then verifies the presence of noticeable values in them. 
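A minimal version of this counting step can be written as follows. The threshold for "noticeable" ($|z| > 1.5$) is an assumption standing in for the criterion of Supplementary Note 11, and for brevity the split between technical and contextual features is omitted (the full vector $S$ keeps both).

```python
import numpy as np

def noticeability_summary(z, positive_mask, tau=1.5):
    """Count noticeable feature values in one performance.
    z: z-scored feature values; positive_mask: True where the feature
    correlates positively with ratings; tau: assumed threshold."""
    pos, neg = positive_mask, ~positive_mask
    return {
        "P_plus":  int(np.sum(pos & (z > tau))),    # positive features, noticeably high
        "P_minus": int(np.sum(pos & (z < -tau))),   # positive features, noticeably low
        "N_plus":  int(np.sum(neg & (z > tau))),    # negative features, noticeably high
        "N_minus": int(np.sum(neg & (z < -tau))),   # negative features, noticeably low
    }

z = np.array([2.0, -0.3, -1.8, 0.1, 1.7])            # toy z-scored performance
positive_mask = np.array([True, True, True, False, False])
S = noticeability_summary(z, positive_mask)
```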
Figure \ref{fig:heatmap}c visualizes the average performance $\vec{p}_*(r) = [\overline{P}_{T}^+, \overline{P}_{T}^-, \overline{N}_{T}^+, \overline{N}_{T}^-, \overline{P}_{C}^+, \overline{P}_{C}^-, \overline{N}_{C}^+, \overline{N}_{C}^-]$ consisting of the average values computed across all the low-dimensional performance vectors associated with a rating $r$. We find that average performances associated with extreme ratings are characterized by the presence of noticeable feature values (Figure \ref{fig:heatmap}c). In particular, for forwards and midfielders the attention of the judges is dominated by positive technical and contextual features $P_{T}^+$ and $P_{C}^+$, which include the most prominent features like goals and the game's goal difference (Supplementary Figure 6). Defenders are instead dominated by the number of negative contextual features $N_{C}^-$, which include for example the number of goals suffered (see Supplementary Figure 6). Overall, the description of performance based on the noticeability heuristic has the same predictive power as a description using 150 technical features. These results clearly show the simplicity of the human rating process, indicating that (i) human judges select only a few, mainly contextual, features and (ii) they base the evaluation on the presence of noticeable values in the selected features. \section*{Conclusion} In this work we investigated the cognitive process underlying performance evaluation. We find that human judges select a small set of indicators which capture their attention. Contextual information is particularly important to the human evaluation process, indicating that similar technical performances may have different evaluations according to a game's results or outcome expectation. 
Moreover, we discover that human judges rely on a noticeability heuristic, i.e., they select a limited number of features which attract their attention and then construct the evaluation by counting the number of noticeable values in the selected features. The analytical process proposed in our study is a first step towards a framework to ``evaluate the evaluator'', a fundamental step to understand how the human evaluation process can be improved with the help of data science and artificial intelligence. Although in many environments evaluations as objective as possible are desirable, it is essential that human evaluators inject their values and experience into their decisions, possibly taking into account aspects that may not be systematically mirrored in the available data. To this end, the observe-predict-explain process exemplified in this paper can be used to empower human evaluators to gain understanding of the underlying logic of their decisions, to assess themselves also in comparison with peers, and to ultimately improve the quality of their evaluations, balancing objective performance measurements with human values and expertise. \begin{figure}\centering \includegraphics[scale=0.75]{player_events.pdf} \caption{\scriptsize \textbf{Technical performance and soccer player ratings.} The image illustrates a portion of the events produced by a player during a game. The game is watched by three sports newspapers' reporters (Gazzetta dello Sport, Corriere dello Sport, Tuttosport) who assign a rating according to their personal interpretation of a player's performance.} \label{fig:game_visualization} \end{figure} \begin{figure} \centering \includegraphics[scale=0.35]{ratings_analysis_v2_2.pdf} \caption{\tiny \textbf{The patterns of soccer player ratings.} \textbf{(a)} Violin plots comparing the distribution of ratings for the three newspapers. We see that they are almost identical, with an average Kolmogorov-Smirnov statistic $\overline{KS} = 0.02$.
\textbf{(b)} Distribution of ratings for newspaper $G$. The ratings are peaked around 6 which corresponds to sufficiency in the Italian high school rating system. Most of the ratings are between 4 and 8, while ratings below 4 and above 8 are extremely rare. \textbf{(c)} Correlation between newspapers $G$ and $C$. The same strong correlation ($r=0.76$) is observed for every pair of newspapers. \textbf{(d)} Decile of Minkowski distance $d_T$ between performances versus the average absolute difference of ratings $d_R$ for the three newspapers ($G$, $C$ and $T$) and three null models where the ratings are randomly shuffled across the performances ($G_{null}$, $C_{null}$, $T_{null}$). \textbf{(e)} Comparison between the distributions of $M_P$ and $G$, and between the distribution of $M_{(P + C)}$ and $G$ \textbf{(f)}. We see that $M_{(P+C)}$ is closer to $G$ than $M_P$ is.} \label{fig:ratings} \end{figure} \begin{figure}\centering \includegraphics[scale=0.45]{radar_charts_v2.pdf} \caption{\tiny \textbf{Importance of technical and contextual features to the human rating process.} The radar charts indicate the importance of every feature, normalized in the range $[0, 1]$, to the human rating process for Goalkeepers (a), Defenders (c), Midfielders (d) and Forwards (b). Features with an importance $\ge 0.4$ are highlighted and contextual features are grouped in the left upper corner of every radar chart. We compute a feature's importance by a combination of the weights produced by a repertoire of machine learning models (see Supplementary Note 9).
The plots indicate that, for example, for forwards three features matter the most: the number of \emph{goals} scored (performance), the game goal difference (contextual) and the expected game outcome as estimated by bookmakers before the match (contextual).} \label{fig:importances} \end{figure} \begin{figure}\centering \includegraphics[scale=0.34]{heatmap_end_model_new.pdf} \caption{\tiny (a) Typical performances for every rating, grouped by role on the field. The color of each cell, in a gradient from blue to red, represents the difference from the mean of the corresponding feature. Red cells indicate values above the average, blue cells indicate values below the average, and white cells indicate values at the average of the feature's distribution. We observe that good ratings ($>6.5$) are characterized by values of positive features above the mean and values of negative features below the mean. The opposite applies for bad ratings ($<5.5$). (b) Correlations $r_{M_{(P+C)}}$ (blue curve) and $r_{M_S}$ (red curve) as fewer features are selected during the learning phase by imposing a minimum feature importance. The black curve indicates the number of features selected for a given minimum feature importance. Model $M_S$ has a Pearson correlation $r=0.55$ when aggregating over the 8 most important features, achieving the same predictive power as model $M_P$, which uses 150 technical features. (c) Relative importance of features in the noticeability heuristic model, for forwards (red), midfielders (blue), defenders (yellow) and goalkeepers (green). In the noticeability model, the evaluation of forwards and midfielders is dominated by positive technical and contextual features; for goalkeepers and defenders it is instead dominated by the number of negative contextual features.} \label{fig:heatmap} \end{figure} \newpage \bibliographystyle{naturemag}
\section{Introduction} The past decade has seen a rapid development of the live broadcast market, especially the e-sports market in China, with platforms such as 'Douyu', 'Kuaishou' and 'Penguin E-sports'. One of the core functions on live platforms is Highlight Flashback, which aims at showing the most exciting fighting clips from the lengthy live broadcast. However, the current highlight flashbacks on these platforms are manually edited and uploaded. This content generation process consumes considerable human resources. Hence, automatic highlight generation is an urgent demand for these live platforms. \begin{figure}[t] \centering \includegraphics[width=.95\linewidth]{introduction.pdf} \caption{Examples of highlights in the live videos of the game 'Honor of Kings'. 1) The raw long live video; 2) Highlight segments corresponding to the original raw video; 3) Audio clips of the video.} \vspace{-0.1in} \label{fig:introduction} \end{figure} To tackle the issue above, previous works explored highlight detection at the frame or clip level \cite{sun2014ranking,gygli2016video2gif,yao2016highlight,fu2017video,ringer2018deep,xiong2019less}. \cite{fu2017video} addressed highlight detection as a classification task: highlight parts were regarded as the target class and the rest as background. This method requires accurate annotations for each frame or clip and is typically used in a supervised setting. \cite{ringer2018deep} regarded a highlight as a novel event for every frame in a video. A convolutional autoencoder was constructed for visual analysis of the game scene, face, and audio. On the other hand, \cite{gygli2016video2gif,yao2016highlight,xiong2019less} made use of the internal relationship between highlight clips and non-highlight clips, where highlight clips receive higher scores than the non-highlight ones. Based on this observation, a ranking net was employed to model the relationship in both supervised and unsupervised settings.
In this paper, we focus on highlight generation for live videos of the game 'Honor of Kings'. We define intense fight scenes, which excite audiences, as highlights. An example is shown in Figure \ref{fig:introduction}. However, laborious annotations would be needed to train a network specially designed for these game scenes. Thus, we adopt a novel training strategy that uses existing videos downloaded from 'Penguin E-sports' without any additional annotations. \begin{figure*} \centering \includegraphics[width=.8\linewidth]{architecture.pdf} \caption{Overview of our framework. The three streams are trained separately using a ranking loss. At inference time, we first obtain three scores for the target video clip and then fuse them to obtain the final score.} \vspace{-0.1in} \label{fig:network} \end{figure*} Considering that the unlabeled videos are hard to classify as highlight or non-highlight, we adopt a ranking net as our basic model. Since a significant number of highlight videos are edited and uploaded to the platform by journalists and fans, we take clips from edited videos as the positive samples and clips from long videos as the negative samples for our network. Since the long videos also contain highlight clips, which may introduce noise into training, we use the Huber loss \cite{gygli2016video2gif} to further reduce the impact of noisy data. Besides, as shown in Figure \ref{fig:network}, a multi-stream network is constructed to make full use of the temporal, spatial and audio information of the videos. We use simple convolutional layers to produce final highlights from the temporal aspect, while the auxiliary audio and spatial components are fused in to further fine-tune the results. Our contributions can be summarized as follows: \begin{itemize} \item A novel training strategy for network training is proposed, which uses existing downloaded videos without any additional annotations.
\item A multi-stream network containing temporal, spatial and audio components is constructed, which makes full use of the information in the game videos. \item Experimental results on the game videos demonstrate the effectiveness of our method. \end{itemize} \section{Dataset Collection} Existing highlight datasets \cite{sun2014ranking,song2015tvsum} contain a variety of videos of natural scenes, which differ greatly from the scenes in game videos. For this reason, we collect new videos for the target scenes. We collect long raw live videos and highlight videos from the Penguin E-sports platform. For network training, we first randomly select 10 players and query their game videos; 450 edited highlight videos and 10 long raw videos are then downloaded. The highlight videos are 21 seconds long on average, while the lengths of the raw videos range from 6 to 8 hours. Due to the extreme length of the raw live videos, we randomly extract 20 video clips, with an average length of 13 minutes each, from the raw videos, so that the positive and negative samples are balanced. Note that each video clip contains both highlight and non-highlight intervals, where highlights account for about 20\% of the whole video. For testing, we download another four long videos from different players and take one 15-minute video clip from each. Notably, the evaluated video clips contain different master heroes, so the scenarios vary dramatically and the task is challenging. To evaluate the effectiveness of our approach, we annotate the evaluated video clips at the second level. The annotated videos contain 55 highlight periods in total, averaging 7.83 seconds each. \section{Methodology} In this section, we introduce our multi-stream architecture, which is illustrated in Figure \ref{fig:network}.
We combine three components for highlight generation: the Temporal Network extracts temporal information; the Spatial Network captures spatial context for each frame; and the Audio Network filters out unrelated scenes by exploiting sound effects, which reveal the player's immersion. All three networks use the ranking framework, which constrains the scores produced from positive and negative samples. \vspace{0.05in}\noindent \textbf{Temporal Network.} In this component, we exploit temporal dynamics using 3D features. We extract the features from the output of the final pooling layer of a ResNet-34 model \cite{he2016deep} with 3D convolution layers \cite{hara2018can}, pre-trained on the Kinetics dataset \cite{carreira2017quo}. The inputs to the network are video clips of 16 frames each. After the corresponding features are obtained, three fully-connected layers with the channels \{512, 128, 1\} are added to enforce the ranking constraint. \vspace{0.05in}\noindent \textbf{Spatial Network.} Spatial and context information plays an important role in object classification and detection tasks, and it is also critical for highlight detection because it provides distinctive appearances for different scenes and objects. Therefore, we set up a stream to distinguish highlight from non-highlight video frames. Different from the Temporal Network, we train the Spatial Network at the frame level. A fixed-length feature is first extracted for each frame and then fed into the spatial ranking net. Here, we use AlexNet \cite{krizhevsky2012imagenet}, pre-trained on 1.2 million images of the ImageNet challenge dataset \cite{deng2009imagenet}, to generate a 9216-dimensional feature from the last pooling layer. Seven fully-connected layers with the channels \{9216, 4096, 1024, 512, 256, 128, 64, 1\} are then appended.
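All three streams share the pairwise ranking constraint mentioned above, formalized in the Training \& Inference paragraph below. As a rough, non-authoritative sketch in plain Python (the names `score_pos` and `score_neg` for the scalar stream outputs $f(x^+)$ and $f(x^-)$ are ours), the margin ranking loss and the Huber-style variant with the $\delta = 1.5$ used later to damp noisy negatives can be written as:

```python
# Illustrative sketch of the pairwise ranking losses used to train each
# stream; score_pos / score_neg stand for the scalar outputs f(x+), f(x-).

def margin_ranking_loss(score_pos, score_neg, p=1):
    # L_p = max(0, 1 - f(x+) + f(x-))^p
    return max(0.0, 1.0 - score_pos + score_neg) ** p

def huber_ranking_loss(score_pos, score_neg, delta=1.5):
    # Quadratic for small margin violations (mu <= delta), linear beyond,
    # so clips from raw videos that are actually highlights (mislabeled
    # negatives with large mu) are penalized less than under a pure L2 loss.
    mu = 1.0 - score_pos + score_neg
    if mu <= delta:
        return 0.5 * margin_ranking_loss(score_pos, score_neg, p=2)
    return delta * margin_ranking_loss(score_pos, score_neg, p=1) - 0.5 * delta ** 2
```

For example, a correctly ordered pair ($f(x^+)=1.2$, $f(x^-)=0.1$) incurs zero loss, while a strongly violated pair ($f(x^+)=0$, $f(x^-)=1$, i.e. $\mu = 2 > \delta$) is penalized linearly ($1.875$) rather than quadratically ($2.0$).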
\begin{figure*}[t] \begin{center} \includegraphics[width=.98\linewidth]{results.pdf} \end{center} \caption{Quantitative highlight generation results. Each row shows frames from a test video ordered by inference score $f(x)$ from low to high. The first two rows show several non-game frames, which receive lower scores than the game frames. The last two rows compare ranking scores among game frames: frames from intense fight scenes obtain higher scores than those from non-fight scenes.} \label{fig:results} \end{figure*} \vspace{0.05in}\noindent \textbf{Audio Network.} Different scenes in a game live stream have different audio characteristics. For example, the highlight clips may be immersed in audible screams and sounds of struggle, while the non-highlight parts are much quieter. Each one-second audio segment is first fed into a pre-trained Speaker Encoder \cite{jia2018transfer} to generate a 256-dimensional feature. Two fully-connected layers with the channels \{64, 1\} are then appended. \vspace{0.05in}\noindent \textbf{Training \& Inference process.} We train the three streams separately, and samples from highlights and non-highlights satisfy the constraint: \begin{equation} f(x^+) > f(x^-), \quad \forall(x^+, x^-) \in D \end{equation} where $x^+, x^-$ are the features of input samples (frames or video clips) from highlight and non-highlight videos, respectively, $f(\cdot)$ denotes the ranking network, and $D$ is the dataset. Therefore, to optimize the networks, we employ the ranking loss between input positive and negative samples, which is defined as follows: \begin{equation} L_{p}(x^+, x^-) = \max(0, 1-f(x^+) + f(x^-))^p, \label{equation:loss} \end{equation} This loss assumes that the negative samples are truly non-highlights. However, in our case, the input clips from live videos can also be highlights.
Thus, we apply the Huber loss proposed in \cite{gygli2016video2gif} to decrease the negative effect of outliers: \begin{equation} L_{Huber}(x^+, x^-) = \left\{ \begin{array}{rcl} \frac{1}{2}L_2(x^+, x^-), & & \mu \leq \delta\\ \delta L_1(x^+, x^-) - \frac{1}{2} \delta^2, & & \text{otherwise} \end{array} \right. \end{equation} where $\mu = 1 - f(x^+) + f(x^-)$, so that outliers incur smaller losses than they would under the squared loss. $L_1$ and $L_2$ are the loss functions with $p=1$ and $p=2$ in equation (\ref{equation:loss}), respectively. Here we set $\delta = 1.5$ to identify outliers. At inference time, after the scores are obtained from the three streams, we need to fuse them to form the final score. Here, a simple weighted summation is used with the weights \{0.7, 0.15, 0.15\} for the temporal, spatial, and audio scores respectively, reflecting the importance of 3D information. \vspace{0.05in}\noindent \textbf{Implementation details.} The framework is implemented in PyTorch\footnote{\url{http://pytorch.org/}}. All three networks use Stochastic Gradient Descent (SGD) with a weight decay of 0.00005 and a momentum of 0.9. The learning rate is 0.005 for the Temporal and Spatial networks and 0.1 for the Audio network. A ReLU non-linearity \cite{nair2010rectified} and Dropout \cite{srivastava2014dropout} are applied after each fully-connected layer of the three streams. \section{Experiments} We evaluate our approach on four raw videos with an average length of 15 minutes. In particular, we divide each video into 5-second clips and measure the average scores across clips. Since no other approach is trained and tested on our dataset, we only report an ablation study and the overall results of our approach. \vspace{0.05in}\noindent \textbf{Metrics.} As described in \cite{gygli2016video2gif}, the commonly used metric mean Average Precision (mAP) is sensitive to video length; that is, longer videos lead to lower scores.
Considering that our test videos are about 3 to 5 times longer than those in other highlight datasets \cite{song2015tvsum,sun2014ranking}, we use the normalized version of MSD (nMSD) proposed in \cite{gygli2016video2gif} to evaluate our method. The nMSD refers to the relative length of the selected highlight clips at a recall rate $\alpha$; it is defined as: \begin{equation} nMSD = \frac{|G^*| - \alpha |G^{gt}|}{|V| - \alpha |G^{gt}|}, \end{equation} where $|\cdot|$ denotes the length of the corresponding video, $V$ denotes the raw test video, and $G^{gt}$ and $G^*$ are the ground truth and the predicted highlights at the recall rate $\alpha$. Note that the lower the nMSD, the better the performance, with the best performance achieved at $nMSD = 0$. \vspace{0.05in}\noindent \textbf{Results.} The results can be seen in Table \ref{tab:results}. As shown in the table, performance improves as more information is fused into the network. It is notable that the Audio Network alone has inferior performance, since the audio extracted from the videos is riddled with noise, such as the anchor's voice and background music played by the player. We conclude that our approach performs well even though no annotated video is available. More quantitative results can be seen in Figure \ref{fig:results}. \begin{table}[t] \begin{center} \begin{tabular}{ccc|c|c} \hline temporal & spatial & audio & nMSD $\downarrow$ & mAP $\uparrow$ \\ \hline $\surd$ & & & 15.13\% & 21.51\% \\ & $\surd$ & & 14.41\% & 21.24\% \\ & & $\surd$ & 43.28\% & 13.38\% \\ $\surd$ & $\surd$ & & 13.58\% & 21.74\% \\ $\surd$ & $\surd$ & $\surd$ & \textbf{13.36}\% & \textbf{22.27}\% \\ \hline \end{tabular} \end{center} \caption{Highlight score fusion strategies.
The best performance is achieved when all three streams are fused.} \label{tab:results} \end{table} \section{Conclusion} In this paper, we propose a multi-stream architecture to automatically generate highlights for live videos of the game 'Honor of Kings', saving considerable human effort. In particular, since edited highlight clips should receive higher scores than clips from long live videos, we make use of the existing highlight videos on the 'Penguin E-sports' platform to optimize the network. In addition, we exploit information from the spatial, temporal and audio aspects, which further improves the performance of highlight generation. In the future, we will explore more effective techniques to make the best of the inherent characteristics of game videos, especially audio information. For example, the video and audio streams can learn from each other via a teacher-student mechanism. Moreover, good performance on other categories of game videos could be achieved by applying transfer learning, for example between MOBA and RPG games. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Consider a set of $n$ football matches which each end in either a win, draw, or loss. How many bets are necessary for an individual to guarantee that they predict at least $n-1$ of the outcomes of the games correctly? What about having at least $n-k$ outcomes correct? The above problem is the classical Football Pool Problem, which has been extremely well studied for small specific values of $n$, as well as the generalization allowing for ``more'' possible outcomes \cite{blokhuis1984more,fernandes1983football,haas2007lower,haas2009lower,hamalainen1991upper, kalbfleisch1969combinatorial,koschnick1993new, rodemich1970coverings,singleton1964maximum}. In particular, consider the hypercube $H_{n,k}$, the $k$-dimensional hypercube with side length $n$. Then place $H_{n,k}$ on the lattice $\{0,\ldots,n-1\}^k$ and define the distance between two points in $H_{n,k}$ to be the Hamming distance between their coordinate representations. In other words, the Hamming distance is the number of places in which the coordinate representations of the points differ. An $R$-\emph{covering} is a set of points $S$ such that every point in the hypercube is within distance $R$ of a point in $S$ \cite{van1991bounds}. In the literature, the minimum possible size of an $R$-covering has been the primary subject of interest, especially when $R=1$. Similarly, a set $T$ is an $R'$-\emph{packing} if no two points are within distance $R'$ of each other. In this case the primary object of study is the largest possible $R'$-packing \cite{van1991bounds}. For the remainder of this paper, we will focus on generalizations of the well-studied cases where $R=1$ and $R'=1,2$. Given the extensive research in the case where points can cover in all directions parallel to the axes, we instead consider the generalization where each point can cover in only a subset of these directions. In particular, we define an $\ell$-\emph{rook} to be a rook which can cover in $\ell$ dimensions.
More precisely, an $\ell$-rook is a point in $\mathbb Z^k$ along with a selection of $\ell$ out of the $k$ coordinates, and this point covers exactly the points which differ from it in one of the $\ell$ chosen coordinates. For example, a $2$-rook in two-dimensional space is a regular planar rook. Given this close relation with the chessboard piece, we use the terms ``attack'' and ``cover'' interchangeably. With this notion, we can now define the primary objects of study for this paper. \begin{defn} Let $n$, $k$, $\ell$ be positive integers with $k\ge \ell$. Define $a_{n,k,\ell}$ to be the minimum number of $\ell$-rooks that can cover $H_{n,k}$. Similarly, define $b_{n,k,\ell}$ to be the maximum number of $\ell$-rooks in $H_{n,k}$ with no rook attacking another. Finally, define $c_{n,k,\ell}$ to be the maximum number of $\ell$-rooks that can be placed in $H_{n,k}$ so that no two rooks attack the same point. Note that in all of these cases we do not allow multiple rooks at a single point. \end{defn} Below are three figures which demonstrate the optimal constructions for $a_{n, k, \ell}$, $b_{n, k, \ell}$, and $c_{n, k, \ell}$ in the case $(n, k, \ell)=(3, 3, 2)$.
\begin{figure}[H] \centering \scalebox{0.4}{\begin{tikzpicture} \tikzset{>=latex}
\def \dx{4}; \def \dy{4}; \def \dz{4}; \def \nbx{3}; \def \nby{3}; \def \nbz{3};
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nby} { \foreach \z in {1,...,\nbz} { \node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {}; } } }
\foreach \x in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \y in {2,...,\nby}{ \draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- ( \x*\dx , \y*\dy, \z*\dz); } } }
\foreach \y in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \x in {2,...,\nbx}{ \draw[-, color = black, line width = 1pt](\x * \dx - \dx,\y*\dy,\z*\dz) -- ( \x * \dx,\y*\dy,\z*\dz); } } }
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nbz}{ \foreach \z in {2,...,\nby}{ \draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- ( \x*\dx,\y*\dy,\z*\dz); } } }
\node at (\dx, \dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 3*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 2*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,\dx)--(1.3*\dx, \dx, \dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,\dx)--(\dx, \dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(1.3*\dx, 2*\dx, \dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(1.3*\dx, 3*\dx, \dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2.3*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2.3*\dx, 2*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2.3*\dx, 3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(1.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, \dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(1.7*\dx, 2*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(1.7*\dx, 3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,3*\dx)--(3*\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,3*\dx)--(2.7*\dx, \dx, 3*\dx);
\end{tikzpicture}
\hspace{20mm}
\begin{tikzpicture} \tikzset{>=latex}
\def \dx{4}; \def \dy{4}; \def \dz{4}; \def \nbx{3}; \def \nby{3}; \def \nbz{3};
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nby} { \foreach \z in {1,...,\nbz} { \node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {}; } } }
\foreach \x in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \y in {2,...,\nby}{ \draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- ( \x*\dx , \y*\dy, \z*\dz); } } }
\foreach \y in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \x in {2,...,\nbx}{ \draw[-, color = black, line width = 1pt](\x * \dx - \dx,\y*\dy,\z*\dz) -- ( \x * \dx,\y*\dy,\z*\dz); } } }
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nbz}{ \foreach \z in {2,...,\nby}{ \draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- ( \x*\dx,\y*\dy,\z*\dz); } } }
\node at (3*\dx, 2*\dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 3*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 2*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 3*\dx, 3*\dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,3*\dx)--(2*\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,3*\dx)--(2*\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,\dx)--(2.7*\dx, \dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,\dx)--(3*\dx, 1.3*\dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,2*\dx)--(2.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,2*\dx)--(3*\dx, 1.3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(1.3*\dx, 2*\dx,\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2.3*\dx, 2*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(1.7*\dx, 2*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,3*\dx)--(3*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,3*\dx)--(2.7*\dx, 2*\dx,3*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(1.3*\dx, 3*\dx,\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2.3*\dx, 3*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(1.7*\dx, 3*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,3*\dx)--(3*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,3*\dx)--(2.7*\dx, 3*\dx,3*\dx);
\end{tikzpicture}
\hspace{20mm}
\begin{tikzpicture} \tikzset{>=latex}
\def \dx{4}; \def \dy{4}; \def \dz{4}; \def \nbx{3}; \def \nby{3}; \def \nbz{3};
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nby} { \foreach \z in {1,...,\nbz} { \node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {}; } } }
\foreach \x in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \y in {2,...,\nby}{ \draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- ( \x*\dx , \y*\dy, \z*\dz); } } }
\foreach \y in {1,...,\nbx} { \foreach \z in {1,...,\nbz}{ \foreach \x in {2,...,\nbx}{ \draw[-, color = black, line width = 1pt](\x * \dx - \dx,\y*\dy,\z*\dz) -- ( \x * \dx,\y*\dy,\z*\dz); } } }
\foreach \x in {1,...,\nbx} { \foreach \y in {1,...,\nbz}{ \foreach \z in {2,...,\nby}{ \draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- ( \x*\dx,\y*\dy,\z*\dz); } } }
\node at (\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(1.3*\dx, \dx, 3*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, 1.3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(1.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2.3*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,\dx)--(3*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,\dx)--(2.7*\dx, 2*\dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,\dx)--(3*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,\dx)--(2.7*\dx, 3*\dx, \dx);
\end{tikzpicture}}
\caption{ $a_{3, 3, 2}=7$, $b_{3, 3, 2}=10$, $c_{3, 3, 2}=4$}
\end{figure}
Previous research studies the case when $k=\ell$. In particular, $a_{n,k,k}$ corresponds to a $1$-covering while $b_{n,k,k}$ and $c_{n,k,k}$ correspond to a $1$-packing and $2$-packing, respectively. Furthermore, $c_{q, k, k}$ corresponds to generalized $q$-ary Hamming-distance-$3$ subsets of $H_{q, k}$, which are useful for error-correcting codes. The most classical bound in the case of coverings is the sphere-packing bound. We give the analog in this case; our proof is nearly identical to the classical one. This determines $a_{n,k,\ell}$ to within a constant factor depending on $\ell$. \begin{thm}\label{SpherePacking} We have \[ \frac{n^k}{\ell(n-1)+1}\le a_{n,k,\ell}\le n^{k-1}.
\] \end{thm} \begin{proof} Suppose for the sake of contradiction that there exists a covering $S$ of $H_{n,k}$ with $\ell$-rooks and $|S|<\frac{n^k}{\ell(n-1)+1}$. Since each rook covers at most $\ell(n-1)+1$ points, it follows that $S$ covers at most $ |S|(\ell(n-1)+1)<n^k$ points, which is a contradiction. To prove the upper bound, let $S$ be the set of all points with first coordinate $0$. Allow each point in $S$ to attack in the direction of the first coordinate, and arbitrarily choose the other $\ell-1$ directions in which it may attack; these rooks collectively cover the cube. Note that we do not utilize the last $\ell-1$ dimensions for the upper bound. \end{proof} Note that the above theorem holds for $\ell=1$ and implies that $a_{n,k,1}=n^{k-1}$. Given the triviality of this case, we consider $\ell\ge 2$ in the remainder of the paper. The analogous lower bounds for $b_{n,k,k}$ and $c_{n,k,k}$ come from the classical Singleton bound \cite{singleton1964maximum}. The proof presented in the classical case can be adapted to this situation as well; however, we rely on a more geometric argument. \begin{thm}\label{Singleton} For all positive integers $n$, $k$, and $\ell$ with $k\ge \ell$, we have \[ b_{n,k,\ell}\le \frac{k n^{k-1}}{\ell}. \] Furthermore, if $k\ge\ell\ge 2$ then \[ c_{n,k,\ell}\le \frac{\binom{k}{2} n^{k-2}}{\binom{\ell}{2}}. \] \end{thm} \begin{proof} For $b_{n,k,\ell}$, consider all lines parallel to the edges of $H_{n,k}$ containing $n$ points in $H_{n,k}$. Note that there are $kn^{k-1}$ such lines, obtained by choosing a direction and letting the remaining coordinates vary over all possibilities within the cube. Furthermore, no two $\ell$-rooks can cover the same line. Since each $\ell$-rook covers $\ell$ lines, it follows that $b_{n,k,\ell}\le \frac{k n^{k-1}}{\ell}.$ Similarly, for $c_{n,k,\ell}$ consider all planes passing through $H_{n,k}$, parallel to one of the faces.
Note that there are $\binom{k}{2}n^{k-2}$ such planes and each $\ell$-rook covers $\binom{\ell}{2}$ planes. If two rooks cover the same plane, then they attack a common point, and it follows that $ c_{n,k,\ell}\le \frac{\binom{k}{2} n^{k-2}}{\binom{\ell}{2}} $ for $\ell\ge 2$. (If $\ell=1$, the $\ell$-rook does not determine a plane, so the proof does not follow.) \end{proof} Note that $c_{n,k,1}\le n^{k-1}$, as each $1$-rook covers $n$ points and the points these rooks cover are distinct. This can be achieved by putting $1$-rooks on all points with first coordinate $0$ and having all rooks point in the direction of the first coordinate. Given this difference in behavior between $\ell\ge 2$ and $\ell=1$ for $c_{n,k,\ell}$, we assume that $\ell\ge 2$ for the remainder of the paper in this case as well. In the remainder of the paper, we focus on the asymptotic growth rates of $a_{n,k,\ell}$, $b_{n,k,\ell}$, and $c_{n,k,\ell}$ when $k$ and $\ell$ are fixed and $n$ increases. \begin{notation} Let $a_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}},$ $b_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}},$ and $c_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}.$ \end{notation} The remainder of the paper is organized as follows. Section 2 establishes the existence of such limits for all $k$ and $\ell$ (with $\ell\ge 2$ for $c_{k,\ell}$). Section 3 focuses on covering bounds and demonstrates that for $\ell\neq 1$ the lower sphere-packing bound in Theorem \ref{SpherePacking} is never asymptotically tight. Furthermore, Section 3 proves that $a_{k,\ell}\to\frac{1}{\ell}$ as $k\to\infty$. Section 4 focuses on the packing bounds and demonstrates that $b_{k,\ell}$ and $c_{k,\ell}$ achieve the bounds in Theorem \ref{Singleton} in several possible cases. Finally, Section 5 presents a series of open problems regarding $a_{k,\ell}$, $b_{k,\ell}$, and $c_{k,\ell}$.
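As a concrete sanity check on these definitions, the covering in the left panel of Figure 1 can be verified mechanically. The short Python sketch below uses $0$-indexed coordinates; the rook positions and chosen axes are our reading of the figure, and the helper name `covers` is ours.

```python
from itertools import product

def covers(rook, point):
    # A rook (position, chosen axes) covers a point iff the point is the
    # rook's own square or differs from it in exactly one chosen axis.
    pos, axes = rook
    diff = [i for i in range(len(pos)) if pos[i] != point[i]]
    return diff == [] or (len(diff) == 1 and diff[0] in axes)

# The seven 2-rooks of the left panel of Figure 1 (0-indexed); axes
# 0, 1, 2 denote the x, y, z coordinate directions.
rooks = [((0, 0, 0), (0, 2)), ((0, 1, 0), (0, 2)), ((0, 2, 0), (0, 2)),
         ((1, 0, 1), (0, 2)), ((1, 1, 1), (0, 2)), ((1, 2, 1), (0, 2)),
         ((2, 0, 2), (0, 1))]

covered = all(any(covers(r, p) for r in rooks)
              for p in product(range(3), repeat=3))
```

The check confirms that these seven rooks cover all $27$ points of $H_{3,3}$. Note that the sphere-packing bound of Theorem \ref{SpherePacking} only forces $\lceil 27/5 \rceil = 6$ rooks, so the equality $a_{3,3,2}=7$ requires a finer lower-bound argument.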
\section{Existence Results} The general idea for our proofs in this section is to demonstrate that $a_{nm,k,\ell}\le m^{k-1} a_{n,k,\ell}$ for all integers $m$ and then show that adjacent terms are sufficiently close. (The first inequality is reversed for $b_{n,k,\ell}$, and a similar result holds for $c_{n,k,\ell}$.) For $a_{n,k,\ell}$ and $b_{n,k,\ell}$, the first inequality is demonstrated using a construction of Blokhuis and Lam \cite{blokhuis1984more}, whereas for $c_{n,k,\ell}$ we rely on a different construction. \begin{thm}\label{aexists} For positive integers $k\ge \ell$, the limits \[a_{k,\ell}=\lim_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}},\] \[b_{k,\ell}=\lim_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}\] exist. \end{thm} \begin{proof} We first consider $a_{k,\ell}$. For $\ell=1$, $a_{n,k,1}=n^{k-1}$ and the result is trivial. Therefore it suffices to assume that $\ell\ge 2$. Using Theorem \ref{SpherePacking}, it follows that \[\frac{1}{\ell}\le\liminf_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le\limsup_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le 1.\] Now let $L=\liminf_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}$. Then for every $\epsilon>0$, there exists an integer $m$ such that $\frac{a_{m,k,\ell}}{m^{k-1}}\le L+\frac{\epsilon}{2}$. Now consider the points $(x_1,\ldots,x_k)$ in $\{0,1,\ldots,n-1\}^k$ such that \[x_1+\cdots+x_k\equiv 0\mod n.\] (This is the construction presented in \cite{blokhuis1984more}.) Note that if a $k$-rook is placed at every point in this construction, then all points are covered and every point of an outer face of the hypercube has an axis ``protruding'' out of it. Therefore we can blow up every point of $H_{m,k}$ to a copy of $H_{n,k}$ to create an $H_{mn,k}$, mark the copies of $H_{n,k}$ in $H_{mn,k}$ that correspond to rooks in an optimal construction for $a_{m,k,\ell}$, and place $\ell$-rooks within these copies of $H_{n,k}$ at the points of the construction above. 
Then, for each of these rooks, choose the $\ell$ axes corresponding to the orientation of the associated $\ell$-rook in the original construction for $a_{m,k,\ell}$. This gives a covering of $H_{mn,k}$, so it follows that $a_{nm,k,\ell}\le n^{k-1} a_{m,k,\ell}$. Now consider $a_{n+1,k,\ell}$ and $a_{n,k,\ell}$. If we let $H_{n,k}=\{0,\ldots,n-1\}^{k}$ and $H_{n+1,k}=\{0,\ldots,n\}^{k}$, then we place the construction for $a_{n,k,\ell}$ in $\{1,\ldots,n\}^{k}$. In order to cover the rest of the cube, place $\ell$-rooks at every point with at least two coordinates equal to $0$, with directions chosen arbitrarily for now. For the remaining $k(n-1)^{k-1}$ points with exactly one coordinate equal to $0$, we break into cases based on points of the form $(a_1,\ldots,a_{i-1},0,a_{i+1},\ldots,a_k)$. In order to cover these points, we use the rooks at the points with $a_{i}=a_{i+1}=0$ and direct one of their $\ell$ possible axes along the $(i+1)^{st}$ coordinate, where indices are taken $\bmod\ k$. These rooks together cover $H_{n+1,k}$, and we have added at most $kn^{k-2}+\sum_{i=2}^{k}\binom{k}{i}n^{k-i}\le \sum_{i=1}^{k}\binom{k}{i}n^{k-2}\le 2^{k}n^{k-2}$ additional rooks. Therefore it follows that $a_{n+1,k,\ell}\le 2^{k}n^{k-2}+a_{n,k,\ell}$ and thus \[\frac{a_{n+1,k,\ell}}{(n+1)^{k-1}}\le \frac{2^{k}n^{k-2}+a_{n,k,\ell}}{(n+1)^{k-1}}\]\[\le \frac{2^{k}n^{k-2}+a_{n,k,\ell}}{n^{k-1}}=\frac{2^k}{n}+\frac{a_{n,k,\ell}}{n^{k-1}}.\] Taking $n$ sufficiently large, it follows that \[\sum_{i=mn}^{mn+m-1}\frac{2^k}{i}<\frac{\epsilon}{2}.\] Thus for $i\ge mn$ it follows that $\frac{a_{i,k,\ell}}{i^{k-1}}\le L+\epsilon$. Therefore \[\limsup_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le L+\epsilon,\] and since $\epsilon$ was an arbitrary constant greater than $0$, the result follows. For $b_{k,\ell}$, an identical procedure demonstrates that $\frac{b_{mn,k,\ell}}{(mn)^{k-1}}\ge \frac{b_{n,k,\ell}}{n^{k-1}}$ for all positive integers $m$ and $n$. 
Furthermore, the sequence $\frac{b_{n,k,\ell}}{n^{k-1}}$ is bounded due to Theorem \ref{Singleton}, and note that \[\frac{b_{n+1,k,\ell}}{(n+1)^{k-1}}\ge \frac{b_{n,k,\ell}}{(n+1)^{k-1}}= \frac{n^{k-1}}{(n+1)^{k-1}}\bigg(\frac{b_{n,k,\ell}}{n^{k-1}}\bigg).\] Thus, taking $L=\limsup_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}$ and choosing $\epsilon>0$ arbitrarily, there exists an $m$ such that $\frac{b_{m,k,\ell}}{m^{k-1}}>L-\frac{\epsilon}{2}$. Now choose $n$ large enough that $(\frac{mn}{mn+m-1})^{k-1}>\frac{L-\epsilon}{L-\frac{\epsilon}{2}}$. Then for all $i\ge mn$, $\frac{b_{i,k,\ell}}{i^{k-1}}>L-\epsilon$. Therefore, \[\liminf_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}\ge L-\epsilon.\] Since $\epsilon$ was arbitrary, the result follows. \end{proof} For the existence of $c_{k,\ell}$, we follow a similar strategy, except we rely on a different construction for the initial inequality that allows only for prime ``blowup'' factors. This construction is closely related to and motivated by the construction of general $q$-ary codes. \begin{thm}\label{cExists} For positive integers $k\ge \ell\ge 2$, the limit \[c_{k,\ell}=\lim_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}\] exists. \end{thm} \begin{proof} Suppose $p$ is prime and $p\ge k$. Consider the set $S$ of points $(x_1,\ldots,x_k)$ in $H_{p,k}$ that satisfy $x_{k-1}\equiv x_1+\cdots+x_{k-2} \mod p$ and $x_k\equiv x_1+2x_2+3x_3+\cdots+(k-2)x_{k-2}\mod p$. We will show that any two distinct points of $S$ differ in at least $3$ coordinates. Suppose for the sake of contradiction that there are two distinct points $A=(a_1,\ldots,a_k)$ and $B=(b_1,\ldots,b_k)$ in $S$ that differ in at most $2$ coordinates. If $a_t=b_t$ for all $t$ with $1\leq t\leq k-2$, then $A=B$. If $a_t=b_t$ for $1\le t\le k-2$ except for one $i\in \{1,\ldots, k-2\}$ where $a_i\neq b_i$, then $a_{k-1}\neq b_{k-1}$ and $a_{k}\neq b_{k}$, so $A$ and $B$ differ in at least $3$ coordinates. Finally, we consider the case where $a_t=b_t$ for $1\le t\le k-2$ except for $i,j\in \{1,\ldots, k-2\}$ where $a_i\neq b_i$ and $a_j\neq b_j$. 
If both of the last two coordinates match, then $a_i+a_j\equiv b_i+b_j\mod p$ and $ia_i+ja_j\equiv ib_i+jb_j\mod p$. Subtracting $i$ times the first equation from the second yields $(j-i)a_j\equiv (j-i)b_j\mod p$, or $a_j\equiv b_j\mod p$, which is impossible. Thus each pair of distinct points in $S$ differs in at least $3$ coordinates. Furthermore, note that the set $S$ has exactly $p^{k-2}$ points. Now given a construction for $c_{n,k,\ell}$ in $H_{n,k}$, we can blow up each point to a copy of $H_{p,k}$ (for $p>k$ and $p$ prime). Then place the construction given above into each $H_{p,k}$ corresponding to a marked point in the original construction. Orienting the set of points in each $H_{p,k}$ to match the original orientation of the corresponding point in $H_{n,k}$, it follows that $\frac{c_{np,k,\ell}}{(np)^{k-2}}\ge \frac{c_{n,k,\ell}}{n^{k-2}}$ for all primes $p$ greater than $k$. Furthermore, note that \[\frac{c_{n+1,k,\ell}}{(n+1)^{k-2}}\ge \frac{c_{n,k,\ell}}{(n+1)^{k-2}}= \frac{n^{k-2}}{(n+1)^{k-2}}\bigg(\frac{c_{n,k,\ell}}{n^{k-2}}\bigg).\] Now $\frac{c_{n,k,\ell}}{n^{k-2}}$ is bounded above due to Theorem \ref{Singleton} and bounded below as it is positive. Let $L=\limsup_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}$, so for every $\epsilon>0$ there is an $m$ such that $\frac{c_{m,k,\ell}}{m^{k-2}}>L-\frac{\epsilon}{2}$. Now order the primes $2=p_1<p_2<\cdots$. Since $\lim_{i\to\infty}\frac{p_{i+1}}{p_{i}}=1$, it follows that there exists $j$ (which we may take large enough that $p_j>k$) such that for $i\ge j$, $\frac{p_{i+1}}{p_i}<(\frac{L-\frac{\epsilon}{2}}{L-\epsilon})^{\frac{1}{k-2}}$. Every integer $t>p_jm$ satisfies $t\in [p_im,p_{i+1}m-1]$ for some $i\ge j$, and $\frac{c_{t,k,\ell}}{t^{k-2}}\ge(\frac{p_im}{t})^{k-2}\frac{c_{p_im,k,\ell}}{(p_im)^{k-2}}>L-\epsilon$. Therefore $\liminf_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}\ge L-\epsilon$, and since $\epsilon$ was arbitrary the result follows. \end{proof} \section{Bounds for Covering} Given the initial bounds from Theorem \ref{SpherePacking}, it follows that $\frac{1}{\ell}\le a_{k,\ell}\le 1$. 
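As a concrete illustration of how far the sphere-packing lower bound can be from the truth, in the smallest case $k=\ell=2$ an exhaustive search over small boards (our own, purely illustrative) confirms that covering $H_{n,2}$ with $2$-rooks requires $n$ rooks, so that $a_{2,2}=1$ rather than $\frac{1}{2}$:

```python
from itertools import combinations, product

def min_cover(n):
    """Smallest number of 2-rooks covering the n x n board H_{n,2};
    a 2-rook at (r, c) covers all of row r and column c."""
    cells = list(product(range(n), repeat=2))
    for m in range(1, n + 1):
        for S in combinations(cells, m):
            rows = {r for r, c in S}
            cols = {c for r, c in S}
            if all(r in rows or c in cols for r, c in cells):
                return m
    return n

# a_{n,2,2} = n on small boards, so a_{2,2} = 1 > 1/2.
assert [min_cover(n) for n in (2, 3, 4)] == [2, 3, 4]
```

Indeed, a cell is covered exactly when its row or column contains a rook, so a cover exists only once the rooks occupy every row or every column, forcing at least $n$ rooks.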
However, in general we demonstrate that $a_{k,\ell}\neq \frac{1}{\ell}$, except in the trivial case $a_{k,1}=1$. To do this it is necessary to ``amortize'' a result of Rodemich \cite{rodemich1970coverings}, which is equivalent to $a_{n,k,k}\ge \frac{n^{k-1}}{k-1}$. The original proof given by Rodemich can be replicated in this situation, and we reproduce the proof below for the reader's convenience. \begin{thm}\label{KeyIdea} Suppose that $N\le n^{k-1}$. Then for sufficiently large $n$, $N$ $k$-rooks on an $H_{n, k}$ cover at most $kNn-\frac{(k-1)N^2}{n^{k-2}}$ points. \end{thm} \begin{proof} The bound is clear when $k=1$. For $k=2$, note that $N$ $2$-rooks cover at most $n^2-(n-N)^2=2Nn-N^2$ points, as at least $n-N$ rows and at least $n-N$ columns are uncovered. Therefore it suffices to consider $k\ge 3$. Furthermore, when $N\in [\frac{n^{k-1}}{k-1}, n^{k-1}]$, we have $kNn-\frac{(k-1)N^2}{n^{k-2}}\ge n^{k}$, so the bound holds in these cases. Hence, it suffices to consider $N\le \frac{n^{k-1}}{k-1}$. Now consider any set $S$ of $k$-rooks with $|S|=N$. For any point $P\in H_{n,k}$ define $c_{j}(P)$ to be the number of times that the point $P$ is attacked in the $j^{th}$ direction. Furthermore define $q(P)$ to be the number of directions in which $P$ is attacked, and define \[m(P)=\sum_{1\le j\le k}c_j(P)=q(P)+\sum_{c_j(P)>0}(c_j(P)-1).\] Furthermore define $e_{i,j}(P)$ to be $1$ if $P$ is attacked in both the $i^{th}$ and $j^{th}$ directions and $0$ otherwise. Then note that \[\sum_{1\le i<j\le k}e_{i,j}(P)=\frac{q(P)(q(P)-1)}{2}\le \frac{k(q(P)-1)}{2}\] for points $P$ that are attacked, and therefore \[q(P)\ge 1+\frac{2}{k}\sum_{1\le i<j\le k}{e_{i,j}(P)}.\] Finally define $n_j(P)=c_j(P)-1$ if $c_j(P)$ is positive and $0$ otherwise. Therefore \begin{align*}m(P)&=q(P)+\sum_{1\le j\le k}n_j(P) \\ &\ge 1+\sum_{1\le j\le k}n_j(P)+\frac{2}{k}\sum_{1\le i<j\le k}{e_{i,j}(P)} \end{align*} for points $P$ that are attacked. Suppose that $S$ attacks precisely the set of points $T\subseteq H_{n,k}$. 
Summing over $P\in T$ yields \begin{align*}kNn &\ge |T|+\sum_{1\le j\le k}\sum_{P\in T}n_j(P)+\frac{2}{k}\sum_{1\le i<j\le k}\sum_{P\in T}e_{i,j}(P) \\ &=|T|+\sum_{1\le j\le k}n_j+\frac{2}{k}\sum_{1\le i<j\le k}e_{i,j}, \end{align*} where we have defined \[n_j=\sum_{P\in T}n_j(P)\] and \[e_{i,j}=\sum_{P\in T}e_{i,j}(P).\] Now we arbitrarily order the $n^{k-2}$ planes in the $(i,j)$ direction. For the $r^{th}$ plane, suppose there are $a_r$ lines in the $i^{th}$ direction containing a point of $S$, $b_r$ lines in the $j^{th}$ direction containing a point of $S$, and $d_r$ total points of $S$ in this plane. Furthermore, for convenience define $\alpha_r=d_r-a_r$ and $\beta_r=d_r-b_r$. Then it follows that \begin{align*}e_{i,j}&=\sum_{1\le r\le n^{k-2}}{a_rb_r}\\ &=\sum_{1\le r\le n^{k-2}}(d_r-\alpha_r)(d_r-\beta_r) \\ &=\sum_{1\le r\le n^{k-2}}\bigg(d_r-\frac{\alpha_r+\beta_r}{2}\bigg)^2-\bigg(\frac{\alpha_r-\beta_r}{2}\bigg)^2.\end{align*} Using the trivial inequality $|\alpha_r-\beta_r|=|a_r-b_r|\le n$, it follows that \begin{align*}e_{i,j} &\ge\frac{1}{n^{k-2}}\bigg(\sum_{1\le r\le n^{k-2}}d_r-\frac{\alpha_r+\beta_r}{2}\bigg)^2-\frac{n}{2}\sum_{1\le r\le n^{k-2}}\frac{\alpha_r+\beta_r}{2} \\ &=\frac{1}{n^{k-2}}\bigg(N-\frac{n_i+n_j}{2n}\bigg)^2-\frac{n_i+n_j}{4}.\end{align*} Here we have used the fact that \[n\sum_{1\le r\le n^{k-2}}\alpha_r+\beta_r=n_i+n_j,\] which follows from counting the number of points covered multiple times in the $i^{th}$ and $j^{th}$ directions. 
Summing over all $i,j$, it follows that \begin{align*}\sum_{1\le i<j\le k}e_{i,j}&\ge \frac{k(k-1)N^2}{2n^{k-2}}-\frac{(k-1)N}{n^{k-1}}\sum_{1\le j\le k}n_j-\frac{k-1}{4}\sum_{1\le j\le k}n_j+\frac{1}{4n^k}\sum_{1\le i<j\le k}(n_i+n_j)^2.\end{align*} Applying this inequality, it follows that \begin{align*}kNn &\ge |T|+\sum_{1\le j\le k}n_j+\frac{2}{k}\sum_{1\le i<j\le k}e_{i,j} \\ &\ge |T|+\bigg(1-\frac{2(k-1)N}{kn^{k-1}}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}+\frac{1}{2kn^k}\sum_{1\le i<j\le k}(n_i+n_j)^2 \\ &\ge |T|+\bigg(1-\frac{2(k-1)N}{kn^{k-1}}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}.\end{align*} Using $N\le \frac{n^{k-1}}{k-1}$, it then follows that \begin{align*}kNn &\ge |T|+\bigg(1-\frac{2}{k}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}} \\ &\ge |T|+\bigg(\frac{k-3}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}} \\ &\ge |T|+\frac{(k-1)N^2}{n^{k-2}},\end{align*} and therefore it follows that \[|T|\le kNn-\frac{(k-1)N^2}{n^{k-2}},\] as desired. \end{proof} Note that the previous bound cannot be improved in general, as $a_{k+1,k+1}=\frac{1}{k}$ when $k$ is a prime power, due to the existence of perfect codes \cite{blokhuis1984more}. Using this amortized version of Rodemich's result, we now prove a better lower bound for $a_{k,\ell}$. \begin{thm} For every pair of positive integers $(\ell, k)$ with $\ell\le k$, we have \[a_{k, \ell}\ge \frac{2}{\ell(1+\sqrt{1-\frac{4(\ell-1)}{\ell^2\binom{k}{\ell}}})}.\] \end{thm} \begin{proof} Suppose we have a configuration of $N$ $\ell$-rooks that covers $H_{n, k}$. Since the $\ell=k$ case of the theorem is established by Rodemich's result \cite{rodemich1970coverings}, it suffices to consider $k>\ell$. In this case $\binom{k}{\ell}>1$, so the right-hand side above is less than $\frac{1}{\ell-1}$. Therefore, it suffices to consider the case $N\le \frac{n^{k-1}}{\ell-1}$. 
We first prove the following lemma: \begin{lemma} Suppose that $a_1, \ldots, a_{n^{k-\ell}}$ are nonnegative reals that satisfy $\displaystyle\sum_{i=1}^{n^{k-\ell}}a_i=A\le \frac{n^{k-1}}{\ell-1}$. Then \[\sum_{i, a_i\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_i-\frac{\ell-1}{n^{\ell-2}}a_i^2)+\sum_{i, a_i>\frac{n^{\ell-1}}{\ell-1}}n^\ell\le \ell nA-\frac{\ell-1}{n^{k-2}}A^2.\] \end{lemma} \begin{proof} Consider the piecewise function $f(x)$ defined by \[f(x)=\begin{cases}\ell n x-\frac{\ell-1}{n^{\ell-2}}x^2 & x\le \frac{n^{\ell-1}}{\ell-1}; \\ n^\ell & x\ge \frac{n^{\ell-1}}{\ell-1}. \end{cases}\] Then $f(x)$ is continuous and concave on the region $[0, A]$, and the left-hand side above equals $\sum_{i}f(a_i)$. It follows that for $A=\sum_{i=1}^{n^{k-\ell}}a_i$ fixed, the left-hand side achieves its maximum when the $a_i$ are all equal to $\frac{A}{n^{k-\ell}}$. Since $\frac{A}{n^{k-\ell}}\le \frac{n^{\ell-1}}{\ell-1}$, it follows that the left-hand side is at most \[n^{k-\ell}f(\frac{A}{n^{k-\ell}})=\ell n A-\frac{\ell-1}{n^{k-2}}A^2,\] as required. \end{proof} Now we proceed with the proof of the theorem. We consider the $\binom{k}{\ell}$ possible choices of direction for the $\ell$-rooks separately. Label these directions $1, \ldots, \binom{k}{\ell}$ arbitrarily. Each choice of direction corresponds to a choice of $\ell$ out of $k$ coordinates, so there are $n^{k-\ell}$ distinct $\ell$-dimensional subcubes for each direction, and these collectively form a partition of the full $H_{n, k}$. Order these $\ell$-dimensional subcubes arbitrarily and let $a_{i, j}$ denote the number of $\ell$-rooks in the $j^{th}$ subcube of the $i^{th}$ direction that attack in that direction. Furthermore let $A_i=\sum_{j=1}^{n^{k-\ell}}a_{i, j}$. Since every $\ell$-rook attacks in exactly one of the $\binom{k}{\ell}$ directions, $\sum_{i=1}^{\binom{k}{\ell}}A_i=N$. Also, since $N\le \frac{n^{k-1}}{\ell-1}$, we have $A_i\le \frac{n^{k-1}}{\ell-1}$ for each $i$. 
Now invoking Theorem \ref{KeyIdea} within each subcube (with $\ell$ playing the role of $k$), the total number of points covered is bounded above by \[\sum_{i=1}^{\binom{k}{\ell}}(\sum_{j, a_{i, j}\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_{i, j}-\frac{\ell-1}{n^{\ell-2}}a_{i, j}^2)+\sum_{j, a_{i, j}>\frac{n^{\ell-1}}{\ell-1}}n^\ell).\] It follows that \begin{align*} n^k&\le \sum_{i=1}^{\binom{k}{\ell}}(\sum_{j, a_{i, j}\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_{i, j}-\frac{\ell-1}{n^{\ell-2}}a_{i, j}^2)+\sum_{j, a_{i, j}>\frac{n^{\ell-1}}{\ell-1}}n^\ell) \\ &\le \sum_{i=1}^{\binom{k}{\ell}}(\ell nA_i-\frac{\ell-1}{n^{k-2}}A_i^2) \\ &= \ell nN-\frac{\ell-1}{n^{k-2}}\sum_{i=1}^{\binom{k}{\ell}}A_i^2 \\ &\le \ell nN-\frac{\ell-1}{\binom{k}{\ell}n^{k-2}}N^2, \end{align*} where we have used the lemma above and then the Cauchy--Schwarz inequality. Rearranging this gives \[(\ell-1)(\frac{N}{n^{k-1}})^2-\binom{k}{\ell}(\frac{\ell N}{n^{k-1}}-1)\le 0.\] It follows that for all $n$, \[a_{n, k, \ell} \ge \frac{2n^{k-1}}{\ell(1+\sqrt{1-\frac{4(\ell-1)}{\ell^2\binom{k}{\ell}}})},\] and the result follows. \end{proof} \begin{corollary} For $k\ge \ell\ge 2$, $a_{k,\ell}\neq \frac{1}{\ell}$. Therefore, in the limit, $a_{n,k,\ell}$ never achieves the lower sphere-packing bound. \end{corollary} However, despite the fact that $a_{k,\ell}\neq \frac{1}{\ell}$ for $\ell\ge 2$, we can show that as $k$ gets large, $a_{k,\ell}$ in fact approaches $\frac{1}{\ell}$. In particular, the portion of forced ``overlap'' of the attacking rooks goes to $0$. For convenience, define $f(k)$ to be the largest prime power less than or equal to $k$. \begin{thm}\label{intLower} For every pair of positive integers $(k, \ell)$, with $k\ge 2$ and $f(k)\ge \ell$, \[a_{k, \ell}\le \frac{1}{f(k)-1}\left\lceil\frac{f(k)}{\ell}\right\rceil.\] \end{thm} \begin{proof} Take an integer $n_1>\frac{f(k)}{\ell}$. We first construct a size-$n_1$, dimension-$k$ block from $\left\lceil\frac{f(k)}{\ell}\right\rceil n_1^{k-1}$ $\ell$-rooks. 
In particular, consider the points that satisfy $x_1+x_2+\cdots+x_k\equiv i\mod n_1$ for some $0\le i\le \lceil\frac{f(k)}{\ell}\rceil-1$, and let the points whose coordinate sum is congruent to $i$ attack in the $(i\ell+1)^{st}$ through $((i+1)\ell)^{th}$ directions, where the direction indices are taken $\bmod\ k$. Note that this block has an attacking line along every possible axis. Since perfect $q$-ary covering codes exist for prime powers $q$ (see \cite{van1991bounds} for example) and $f(k)\le k$, we have $a_{k, k}\le a_{f(k), f(k)}=\frac{1}{f(k)-1}$. Now we note that, using the size-$n_1$ blocks in place of the points of a construction for $a_{n_2, k, k}$, an $H_{n_1n_2, k}$ can be covered with at most \[(\left\lceil\frac{f(k)}{\ell}\right\rceil n_1^{k-1})(a_{n_2, k, k})\] $\ell$-rooks, and the result follows from $\lim_{n\to\infty}\frac{a_{n, k, k}}{n^{k-1}}=a_{k, k}$. \end{proof} \begin{corollary} For every positive integer $\ell$, $\lim_{k\to\infty}a_{k, \ell}=\frac{1}{\ell}$. \end{corollary} We end the section on covering with the specific case of $a_{3,2}$, which demonstrates that the bounds in the previous two theorems are not tight in general. \begin{thm} It holds that \[a_{3, 2}\le \frac{1}{\sqrt{2}}.\] \end{thm} Note that this bound is stronger than the bound given by Theorem \ref{intLower}. \begin{proof} Let $(a, b)$ be any pair of positive integers that satisfies $2<\frac{a}{b}<\sqrt{2}+1$, so that $\frac{4ab}{2a-2b}\ge a+b$. Consider a construction on $H_{2a+2b, 3}$. For $0\le i\le 2a-1$ and $0\le j\le 2b-1$, we place a $2$-rook at $(i, j, \lfloor\frac{2bi+j}{2a-2b}\rfloor)$ that covers along the second and third coordinates, and place a $2$-rook at $(2a+2b-j-1, 2a+2b-i-1, 2a+2b-1-\lfloor\frac{2bi+j}{2a-2b}\rfloor)$ which attacks along the first and third coordinates. We note that the points of these two groups are distinct, since their first coordinates never coincide. 
Now we claim that, for each $h$, the uncovered squares in the plane $z=h$ are contained in the union of $2b$ columns and $2b$ rows. Indeed, in each plane of this form, either $2a-2b$ rooks of the first type are covering in the second coordinate, or $2a-2b$ rooks of the second type are covering in the first coordinate. Since the corresponding rooks lie on distinct lines, the remaining points of the plane can be covered via at most $2b$ additional $2$-rooks, so this construction yields a covering of $H_{2a+2b, 3}$ with at most $8ab+2b(2a+2b)=4b^2+12ab$ $2$-rooks. This yields the upper bound $a_{3, 2}\le \frac{4b^2+12ab}{4(a+b)^2}=\frac{1+3t}{(1+t)^2}$, where $t=\frac{a}{b}$, as the proof of Theorem \ref{aexists} implies $\frac{a_{n,3,2}}{n^2}\ge a_{3,2}$. Taking $t=\frac{a}{b}$ to be an arbitrarily precise rational approximation of $\sqrt{2}+1$ from below, we obtain \[a_{3, 2}\le \frac{4+3\sqrt{2}}{(2+\sqrt{2})^2}=\frac{1}{\sqrt{2}}\] as required. \end{proof} \begin{thm} It holds that \[\frac{9-3\sqrt{5}}{4}\le a_{3,2}.\] \end{thm} \begin{proof} In order to prove this lower bound, we first introduce some algebraic lemmas: \begin{lemma} Suppose that $c_i, x_i$ are nonnegative integers with $c_i\in [0, n], x_i\in [0, n^2]$ for $1\le i\le n$ and set $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$. Suppose that $(n-c_i-x_i)(n-c_i)\le X$ for each $i$. Then: \[C+X\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n,\] where $t=\frac{X}{n^2}$. \end{lemma} \begin{proof} We first claim that it suffices to prove the statement when $(n-c_i)(n-c_i-x_i)=X$ for each $i$. To prove this, take $x_i, c_i$ such that $C+X$ is minimized. For fixed $x_i$, it suffices to consider the case where the $c_i$ are as small as possible. The allowable range for each $c_i$ is the intersection of the intervals $[0, n]$ and $\bigg[\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}, \frac{2n-x_i+\sqrt{x_i^2+4X}}{2}\bigg]$. 
Hence it suffices to set $c_i$ equal to $\max\bigg\{0, \frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\bigg\}$. Now suppose that some $c_i$ is equal to $0$ with $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}<0$. We claim that $c_j>0$ for some $j$. Indeed, suppose otherwise. Then $n(n-x_i)\le X$ for each $i$, and summing over $i$ yields $n^3\le 2nX$, or $X\ge\frac{n^2}{2}$; but when $X\ge \frac{n^2}{2}$ one checks directly that $C+X\ge X$ already exceeds the claimed bound, so we may assume this is false. So $c_j>0$ for some $j$. But then we can simultaneously decrease $x_i$ and increase $x_j$ at the same rate until $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}=0$. At this point, $c_i=0$ still satisfies the required condition, but $c_j$ can be replaced with a $c_j'<c_j$ since $x_j$ increased. So, it suffices to prove the statement in the case that $c_i=\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\ge 0$ for each $i$. Then \begin{align*} C+X&=\sum_{i=1}^n\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}+\sum_{i=1}^nx_i \\ &=\sum_{i=1}^n (n+\frac{x_i-\sqrt{x_i^2+4X}}{2}). \end{align*} We focus on minimizing this last expression. In addition to $x_i\ge 0$, the condition $c_i\ge 0$ implies that $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\ge 0$ for each $i$, so that $x_i\le n-\frac{X}{n}$ for each $i$. Because the function $f(x)=\sqrt{x^2+c}$ is convex for $x\ge 0$ when $c\ge 0$, it follows that the expression above is minimized when all but one of the $x_i$ are equal to one of the endpoints of the interval $[0, n-\frac{X}{n}]$. Let $A=\lfloor\frac{X}{n-\frac{X}{n}}\rfloor$, so that we may take $A$ values of $i$ with $x_i=n-\frac{X}{n}$, at most one further $x_i$ in $(0, n-\frac{X}{n})$, and the rest equal to $0$. Then: \begin{align*} \sum_{i=1}^n (n+\frac{x_i}{2}-\frac{\sqrt{x_i^2+4X}}{2})&\ge n^2+\frac{X}{2}-\frac{1}{2}(A+1)\sqrt{(n-\frac{X}{n})^2+4X}-(n-A)\sqrt{X} \\ &\ge n^2+\frac{X}{2}-\frac{1}{2}(\frac{X}{n-\frac{X}{n}})(n+\frac{X}{n})-(n-\frac{X}{n-\frac{X}{n}})\sqrt{X}-2n \\ &= n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n, \end{align*} as required. \end{proof} \begin{lemma} Suppose that $c_i, x_i$ are nonnegative integers with $c_i\in [0, n], x_i\in [0, n^2]$ for $1\le i\le n$. 
Let $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$, and suppose that $C\ge \frac{X}{2}$ and $(n-c_i-x_i)(n-c_i)\le X$ for each $i$. Further suppose that $\sum_{i=1}^n (X-(n-c_i-x_i)(n-c_i))\ge \frac{X^2}{2n-\frac{X}{n}}$. Then $C+X\ge (\frac{9-3\sqrt{5}}{4}) n^2-2n$. \end{lemma} \begin{proof} We take two cases based on the size of $X$. Let $\alpha=\frac{9-3\sqrt{5}}{4}$. Case 1: $X\ge \frac{2\alpha}{3}n^2$. Then $C+X\ge \frac{3X}{2}\ge \alpha n^2$ as required. Case 2: $X\le \frac{2\alpha}{3}n^2$. Then consider the indices $i$ for which $c_i>\max\{0, \frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\}$. For each such index $i$, decrease $c_i$ until it equals this maximum. Then, repeat the procedure described in the proof of Lemma 15 so that each $c_i$ is equal to $\max\{0,\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\}$. Let $C'$ denote the new sum of the $c_i$. Then according to Lemma 15: \[C'+X\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n,\] where $t=\frac{X}{n^2}$ as before. Now we claim $C\ge C'+\frac{n^2t^2}{2(2-t)}$. Indeed, note that: \[\frac{\partial}{\partial c_i}((n-c_i)(n-c_i-x_i))=2c_i+x_i-2n\ge -2n.\] It follows that, in the process of decreasing $\sum_{i=1}^n (X-(n-c_i)(n-x_i-c_i))$ by $\frac{X^2}{2n-\frac{X}{n}}$, the sum of the $c_i$ decreases by at least $\frac{X^2}{2n(2n-\frac{X}{n})}=\frac{n^2t^2}{2(2-t)}$. Hence: \begin{align*} C+X&=(C'+X)+(C-C') \\ &\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t}+\frac{t^2}{2(2-t)})-2n \\ &\ge \alpha n^2-2n, \end{align*} where we here used $t\le \frac{2\alpha}{3}$, and that: \[f(x)=\frac{x}{2}+1-\frac{x(1+x)}{2(1-x)}-(1-\frac{x}{1-x})\sqrt{x}+\frac{x^2}{2(2-x)}\] is decreasing on $(0, 0.4)$ with $f(\frac{2\alpha}{3})=\alpha$. \end{proof} \begin{lemma} Suppose that $0<X\le n^2$ squares are marked in an $n\times n$ grid, and in each marked square is written either the number of marked squares in the same row, or the number of marked squares in the same column. 
Then the sum of the written numbers is at least $\frac{X^2}{2n-\frac{X}{n}}$. \end{lemma} \begin{proof} We claim that the sum of the reciprocals of the written numbers is at most $2n-\frac{X}{n}$. Indeed, for a marked square $m_i$, let $c_i, n_i$ denote the number of marked squares in the chosen and unchosen directions of $m_i$, respectively. Then: \begin{align*} \sum_{i=1}^X\frac{1}{c_i}&= \sum_{i=1}^X(\frac{1}{c_i}+\frac{1}{n_i})-\sum_{i=1}^X\frac{1}{n_i} \\ &\le 2n-\sum_{i=1}^X\frac{1}{n_i} \\ &\le 2n-\frac{X}{n}, \end{align*} where the second line follows since the marked squares of a row with $r>0$ marks contribute exactly $r\cdot\frac{1}{r}=1$ to the relevant sum, and similarly for columns. Then the result follows from Cauchy--Schwarz, since \[(\sum_{i=1}^X\frac{1}{c_i})(\sum_{i=1}^Xc_i)\ge X^2.\] \end{proof} Now we proceed to the proof of the lower bound. Suppose that there is a configuration of $2$-rooks which covers an $H_{n, 3}$ situated at $1\le x, y, z\le n$ in coordinate space. We may orient this configuration so that at least $\frac{1}{3}$ of the rooks cover in the $x$ and $y$ directions. We call these rooks \textit{cross rooks,} and all other rooks \textit{axis rooks}. For each $i$, $1\le i\le n$, we denote by $x_i$ the number of axis rooks which lie in the plane $z=i$, and by $c_i$ the number of cross rooks which lie in this plane. Let $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$, so that $C\ge \frac{X}{2}$. Note that we may assume $c_i\le n$, since $n$ cross rooks can already cover a plane. We claim that, in the $i$th plane, at most $n^2-(n-c_i)(n-c_i-x_i)$ points are covered by rooks in that plane. Indeed, suppose that $h_i$ of the $x_i$ axis rooks choose to cover a row in $z=i$, and $v_i$ choose to cover a column, so that $v_i+h_i=x_i$. Then at most $c_i+h_i$ rows are covered by the rooks in plane $i$, and at most $c_i+v_i$ columns are covered. Hence in total, at most \[n^2-(n-c_i-h_i)(n-c_i-v_i)=n^2-(n-c_i)(n-c_i-x_i)-h_iv_i\le n^2-(n-c_i)(n-c_i-x_i)\] points in plane $i$ are covered by rooks in that plane, as required. 
It follows that the remaining points in plane $i$ must be covered by axis rooks from other planes, so that in particular $X\ge (n-c_i)(n-c_i-x_i)$ for every $i$. Furthermore, by the preceding lemma it follows that: \[\sum_{i=1}^n (X-(n-c_i-x_i)(n-c_i))\ge \frac{X^2}{2n-\frac{X}{n}}.\] Therefore, the $x_i, c_i$ satisfy the conditions given in Lemma 16, so it follows that the total number of rooks used satisfies \[C+X\ge \alpha n^2-2n,\] where $\alpha=\frac{9-3\sqrt{5}}{4}$. Hence $a_{3, 2}\ge \alpha$ as required. \end{proof} \section{Bounds for Packing} In this section we prove that $b_{k,\ell}=\frac{k}{\ell}$ and $c_{k,\ell}=\frac{\binom{k}{2}}{\binom{\ell}{2}}$ for certain special values of $k$ and $\ell$. We begin by demonstrating that the first equality holds when $\ell$ divides $k$. \begin{thm}\label{firstStep} For positive integers $k, t$, we have $b_{kt, t}=k$. \end{thm} \begin{proof} By Theorem \ref{Singleton}, it follows that $b_{kt,t}\le k$. Therefore, it suffices to demonstrate that $b_{kt,t}\ge k$. We prove this by demonstrating that $b_{n,kt,t}\ge kn^{k(t-1)}(n-1)^{k-1}$ through an explicit construction. Consider points of the form $(x_1,\ldots,x_t,x_{t+1},\ldots,x_{2t},\ldots, x_{kt})$ with $0\le x_i\le n-1$ for $1\le i\le kt$. Define the block $L_j$ as the set of points that satisfy \[\sum_{i=0}^{t-1}x_{jt-i}\equiv 0\mod n\] and, for every $m\in \{1,\ldots, k\}$ with $m\neq j$, \[\sum_{i=0}^{t-1}x_{mt-i}\not\equiv 0\mod n.\] For each point in the block $L_j$, place a $t$-rook that attacks in the directions of the $((j-1)t+1)^{st}$ through ${jt}^{th}$ coordinates. Note that $|L_j|=n^{t-1}(n^{t-1}(n-1))^{k-1}=n^{k(t-1)}(n-1)^{k-1}$, and thus, taking the union of these rooks for $1\le j\le k$, it follows that \[\left|\bigcup_{j=1}^{k}L_j\right|=kn^{k(t-1)}(n-1)^{k-1}.\] Now we demonstrate that no rook attacks another in the above construction. Suppose for the sake of contradiction that $R_1$ attacks $R_2$ with $R_1\in L_i$ and $R_2\in L_j$. 
If $i\neq j$, then note that $R_1$ and $R_2$ differ in at least one coordinate among $x_{(i-1)t+1},\ldots,x_{it}$ and in at least one coordinate among $x_{(j-1)t+1},\ldots,x_{jt}$. Since attacking rooks differ in exactly one coordinate, such rooks $R_1$ and $R_2$ do not exist. Otherwise $R_1$ and $R_2$ both lie in $L_i$. If these points differ in the coordinates $x_{(i-1)t+1},\ldots,x_{it}$, then they differ in at least two positions, and therefore they cannot attack each other. Otherwise $R_1$ and $R_2$ differ only outside of the coordinates $x_{(i-1)t+1},\ldots,x_{it}$, and since $R_1$ and $R_2$ attack only in these coordinates, $R_1$ does not attack $R_2$ and the result follows. \end{proof} We can now establish a crude lower bound for $b_{k,\ell}$. \begin{corollary} For positive integers $k, \ell$, $b_{k, \ell}\ge \lfloor\frac{k}{\ell}\rfloor$. \end{corollary} \begin{proof} Note that $nb_{n,k,\ell}\le b_{n,k+1,\ell}$, as we can stack $n$ constructions of $b_{n,k,\ell}$ in the $(k+1)^{st}$ dimension. Therefore it follows that $b_{k,\ell}\le b_{k+1,\ell}$ and that $b_{k,\ell}\ge b_{\ell\lfloor\frac{k}{\ell}\rfloor,\ell}=\lfloor\frac{k}{\ell}\rfloor$, where we have used Theorem \ref{firstStep} in the final step. \end{proof} The last bound we establish for $b_{k,\ell}$ is that in fact $b_{k,2}=\frac{k}{2}$ for all integers $k\ge 2$. \begin{thm} For integers $k\ge 2$, we have $b_{k,2}=\frac{k}{2}$. \end{thm} \begin{proof} We will provide an inductive construction based on the constructions in Section 2. In particular, we show the following. Claim: For every integer $k\ge 2$, there exists a nonnegative constant $c_k$ such that $b_{n, k, 2}\ge \frac{k}{2}n^{k-1}-c_kn^{k-2}$ whenever $(k-1)!^2$ divides $n$. For the base case $k=2$, we may take $c_2=0$ and place $2$-rooks along the main diagonal of $H_{n, 2}$, which is a size-$n$ square. Suppose the claim holds for all $j\le k-1$ and that $(k-1)!^2$ divides $n$. Then there exists some set $S$ of at least $\frac{k-1}{2}n^{k-2}-c_{k-1}n^{k-3}$ pairwise non-attacking $2$-rooks in $H_{n, k-1}$. 
We now describe a way to pack \[(k-1)^{k-1}\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\] labeled $1$-rooks into an $H_{n, k-1}$ so that no two $1$-rooks of the same label attack each other. For this, we first group some of the $n^{k-1}$ points in the hypercube into \[\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\] buckets of size $(k-1)^{k-1}$. We do this by sending the point $(x_1, \ldots, x_{k-1})$ to a bucket labeled $(\lfloor\frac{x_1}{k-1}\rfloor, \ldots, \lfloor \frac{x_{k-1}}{k-1}\rfloor)$ if and only if the $\lfloor\frac{x_i}{k-1}\rfloor$ are distinct. Notice that the points in each bucket form an $H_{k-1, k-1}$. Within each bucket, we partition the points into $k-1$ parts of the form $\sum_{i=1}^{k-1}x_i\equiv j\mod k-1$, each of which has $(k-1)^{k-2}$ points. We then label each point in the $j$th part of such a partition with the label $\lfloor \frac{x_j}{k-1}\rfloor$. When this is done, there are \[\frac{(k-1)^k(\frac{n}{k-1})!}{n(\frac{n}{k-1}-k+1)!}\] points of label $i$ for each $i\in \{1, \ldots, \frac{n}{k-1}\}$. All points of label $i$ have $\lfloor \frac{x_j}{k-1}\rfloor=i$ for exactly one index $j$. Therefore, assigning all points of label $i$ to attack in the direction corresponding to this index $j$ yields a packing of $(k-1)^{k-1}\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}$ labeled $1$-rooks into an $H_{n, k-1}$ such that no two $1$-rooks of the same label attack each other, as required. Now we combine this labeled packing $P$ with the set $S$. For $1\le x_k\le \frac{n}{k-1}$, we let the last coordinate act as the label for $P$ and have all of these rooks attack in the direction of the last coordinate in addition to their normal direction. For $\frac{n}{k-1}+1\le x_k\le n$, we fill each layer corresponding to a fixed $x_k$ according to $S$. 
The number of points in the construction at this point is at least \[\frac{(k-1)^k(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}+(\frac{k-2}{k-1}n)(\frac{k-1}{2}n^{k-1}-c_kn^{k-2})\ge \frac{k}{2}n^{k-1}-(\frac{k-2}{k-1}c_k+\frac{k(k-1)^2}{2})n^{k-2}.\] Here we have used the estimate that \[\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\ge (\frac{n}{k-1})^{k-1}-\frac{k(k-1)}{2}(\frac{n}{k-1})^{k-2}.\] At this point, the only pairs of $2$-rooks that attack other $2$-rooks are the rooks from $P$ that attack the rooks from $S$. But there are at most $\frac{k-1}{2}n^{k-2}$ points in $S$ and each point in $S$ leads to at most $1$ offending point in $P$. Therefore we may simply remove these points to obtain a configuration of at least $\frac{k}{2}n^{k-1}-(\frac{k-2}{k-1}c_k+\frac{k(k-1)^2}{2}+\frac{k-1}{2})n^{k-2}$ $2$-rooks, none of which attack each other. It follows that $b_{k, 2}=\lim_{t\to\infty}b_{(k-1)!^2t, k, 2}=\frac{k}{2}$, as desired. \end{proof} Transitioning, we now determine $c_{k,2}$ and $c_{k,k}$. The second constant is known implicitly in the literature, but the proof is included in the following theorem. \begin{thm} For all positive integers $k$, $c_{k,k}=1$ and for $k\ge 2$, $c_{k,2}=\binom{k}{2}$. \end{thm} \begin{proof} We begin by proving that $c_{k,k}=1$. By Theorem \ref{Singleton}, $c_{k,k}\le 1$. The construction that $c_{k,k}\ge1$ is exactly the one given in Theorem \ref{cExists} as this demonstrates $c_{p,k,k}=p^{k-2}$ for primes $p>k$. The result follows. For the second part of the proof, note that $c_{k,2}\le \binom{k}{2}$ by Theorem \ref{Singleton}. Therefore, it suffices to demonstrate that $c_{k,2}\ge \binom{k}{2}$. To demonstrate this we prove that $c_{n,k,2}\ge \binom{k}{2}(n-2\binom{k}{2})^{k-2}$ for $n>2\binom{k}{2}$. In particular, for $i<j$, let $A_{i,j}$ be the set of points with $i^{th}$ and $j^{th}$ coordinates being $2i-2+(j-1)(j-2)$ and $2i-1+(j-1)(j-2)$ respectively, and other coordinates varying in the range $[k(k-1)+1,n-1]$. 
Note that $2i-2+(j-1)(j-2)$ and $2i-1+(j-1)(j-2)$ lie in the range $[0,k(k-1)]$ and take each value in this range exactly once. Now for each point in $A_{i,j}$, orient the corresponding $2$-rook to attack in the direction of the $i^{th}$ and $j^{th}$ axes. No two points within a set attack each other as they differ outside the $i^{th}$ and $j^{th}$ coordinates and any pair of rooks from differing sets differ in at least $3$ coordinates and therefore cannot attack each other. Therefore, the result follows by taking all $A_{i,j}$ where the $2$-rooks in $A_{i,j}$ are directed to attack along the $i^{th}$ and ${j}^{th}$ dimensions. \end{proof} \section{Conclusion and Open Questions} There are several natural questions and conjectures regarding the values of $a_{k,\ell}$, $b_{k,\ell}$, and $c_{k,\ell}$. The most surprising open question is the following. \begin{conj} For integers $k\ge 2$, $a_{k,k}=\frac{1}{k-1}$. \end{conj} Note that the above conjecture is known when $k$ is one more than a power of a prime \cite{blokhuis1984more}, and the construction is essentially that of perfect codes. This construction implies that the first open case of this conjecture is $a_{7,7}$. The most natural case when $k\neq \ell$ and that is not covered by our result is $a_{3,2}$. \begin{question} What is the exact value of $a_{3,2}$? \end{question} We end on a pair of conjectures based on the results of the previous section, and in contrast to $a_{k,\ell}$ we conjecture exact values for $b_{k,\ell}$ and $c_{k,\ell}$ that are upper bounds from Theorem \ref{Singleton}. \begin{conj} For positive integers $k\ge \ell$, $b_{k,\ell}=\frac{k}{\ell}$. \end{conj} \begin{conj} For positive integers $k\ge \ell\ge 2$, $c_{k,\ell}=\frac{\binom{k}{2}}{\binom{\ell}{2}}$. \end{conj} \section{Acknowledgements} This research was conducted at the University of Minnesota Duluth REU and was supported by NSF grant 1659047. 
We would like to thank Joe Gallian, Colin Defant, and Ben Gunby for reading over the manuscript and providing valuable comments. We would especially like to thank Ben Gunby for finding a critical error in an earlier version of the paper. \bibliographystyle{plain}
\section{Introduction} In the game of basketball, the purpose of an offensive set is to generate a high-quality shot opportunity. Thus, a successful play ends with some player from the offensive team being given the opportunity to take a reasonably high-percentage shot. At this final moment of the play, the player with the ball must make a decision: should that player take the shot, or should s/he retain possession of the ball and wait for the team to arrive at a higher-percentage opportunity later on in the possession? The answer to this question depends crucially on three factors: 1) the (perceived) probability that the shot will go in, 2) the distribution of shot quality that the offense is likely to generate in the future, and 3) the number of shot opportunities that the offense will have before it is forced to surrender the ball to the opposing team (say, because of an expired shot clock). In this paper I examine the simplest model that accounts for all three of these factors. Despite the game's lengthy history, the issue of shot selection in basketball has only recently begun to be considered as a theoretical problem \cite{Oliver2004bpr}. Indeed, it is natural to describe the problem of shot selection as belonging to the class of ``optimal stopping problems", which are often the domain of finance and, more broadly, decision theory and game theory \cite{Moerbeke1976osa}. A very recent work \cite{Goldman2011aad} has examined this problem using the perspective of ``dynamic" and ``allocative" efficiency criteria. The former criterion requires that every shot be taken only when its quality exceeds the expected point value of the remainder of the possession. The second criterion stipulates that, at optimum, all players on a team should have equal offensive efficiency. 
This allocative efficiency criterion is a source of some debate, as a recent paper \cite{Skinner2010poa} has suggested that the players' declining efficiency with increased usage implies an optimal shooting strategy that can violate the allocative efficiency criterion. Further complications arise when considering ``underdog" situations, in which a team that is unlikely to win needs to maximize its chance of an unlikely upset rather than simply maximizing its average number of points scored per possession \cite{Skinner2011ssu}. Nonetheless, Ref.\ \onlinecite{Goldman2011aad} demonstrates that players in the National Basketball Association (NBA) are excellent at shooting in a way that satisfies dynamic efficiency. That is, players' shooting \emph{rates} seem to be consistent with their shooting \emph{accuracy} when viewed from the requirement of maximizing dynamic efficiency. Still, there is no general theoretical formula for answering the question ``when should a shot be taken and when should it be passed up?". Inspired by these recent discussions, in this paper I construct a simple model of the ``shoot or pass up the shot" decision and solve for the optimal probability of shooting at each shot opportunity. This model assumes that for each shot opportunity generated by the offense the shot quality $p$ is a random variable, independent of all other shot opportunities, and is therefore described by some probability distribution. For simplicity, following Ref.\ \onlinecite{Goldman2011aad}, all calculations in this paper assume that the probability distribution for $p$ is a flat distribution: that is, at each shot opportunity $p$ is chosen randomly between some minimum shot quality $f_1$ and some maximum $f_2$. 
The best numerical definition for $p$ is the expected number of points that will be scored by the shot \footnote{ The possibility of offensive rebounds -- whereby the team retains possession of the ball after a missed shot -- is not considered explicitly in this paper, but one can think that this possibility is lumped into the expected value of a given shot.}; in other words, $p$ is the expected field goal percentage for a given shot multiplied by its potential point value (usually, 2 or 3). If all shots are taken to be worth 1 point, for example, then $0 \leq f_1 \leq f_2 \leq 1$ . The primary concern of this paper is calculating the optimal minimal value $f$ of the shot quality such that if players shoot if and only if the quality $p$ of the current shot satisfies $p > f$, then their team's expected score per possession will be maximized. I should first note that this ``lower cutoff" for shot quality $f$ must depend on the number of plays $n$ that are remaining in the possession. For example, imagine that a team is running their offense without a shot clock, so that they can reset their offense as many times as they want (imagine further, for the time being, that there is no chance of the team turning the ball over). In this case the team can afford to be extremely selective about which shots they take. That is, their expected score per possession is optimized if they hold on to the ball until an opportunity presents itself for a shot that is essentially certain to go in. On the other hand, if a team has time for only one or two shot opportunities in a possession, then there is a decent chance that the team will be forced into taking a relatively low-percentage shot. So, intuitively, $f(n)$ must increase monotonically with $n$. In the limit $n = 0$ (when the current opportunity is the last chance for the team to shoot), we must have $f(0) = f_1$: the team should be willing to take even the lowest quality shot. 
Conversely, in the limit $n \rightarrow \infty$ (and, again, in the absence of turnovers), $f(n \rightarrow \infty) = f_2$: the team can afford to wait for the ``perfect" shot. As I will show below, the solution for $f(n)$ at all intermediate values of $n$ constitutes a non-trivial sequence that can only be defined recursively. I call this solution, $f(n)$, ``the shooter's sequence"; it is the main result of the present paper. In the following section, I present the solution for $f(n)$ in the absence of turnovers. Sec.\ \ref{sec:noclock} is concerned with calculating the optimal shot quality cutoff $f$ in ``pickup ball" situations, where there is no shot clock and therefore no natural definition of $n$, but there is a finite rate of turnovers. Sec.\ \ref{sec:turn} combines the results of Secs.\ \ref{sec:f} and \ref{sec:noclock} to describe the case where there is a finite shot clock length as well as a finite turnover rate. In Sec.\ \ref{sec:hazard} the sequence $f(n)$ is used to calculate the expected shooting rate as a function of time for an optimal-shooting team in a real game, where shot opportunities arise randomly over the course of the possession. Finally, Sec.\ \ref{sec:data} compares these predicted optimal rates to real data taken from NBA games. The comparison suggests that NBA players tend to wait too long before shooting, and that this undershooting can be explained in part as an undervaluation by the players of the probability of committing a turnover. \section{The shooter's sequence} \label{sec:f} In this section I calculate the optimal lower cutoff for shot quality, $f(n)$, for a situation where there is enough time remaining for exactly $n$ additional shot opportunities after the current one. I also calculate the expected number of points per possession, $F(n)$, that results from following the optimal strategy defined by $f(n)$. 
The effect of a finite probability of turning the ball over is considered in Secs.\ \ref{sec:noclock} and \ref{sec:turn}. To begin, we can first consider the case where the team is facing its last possible shot opportunity ($n = 0$). In this situation, the team should be willing to take the shot regardless of how poor it is, which implies $f(0) = f_1$. The expected number of points that results from this shot is the average of $f_1$ and $f_2$ (the mean of the shot quality distribution): \begin{equation} F(0) = \frac{f_1 + f_2}{2} \end{equation} Now suppose that the team has enough time to reset their offense one time if they choose to pass up the shot; this is $n = 1$. If the team decides to pass up the shot whenever its quality $p$ is below some value $y$, then their expected number of points in the possession is \begin{equation} F_y(1) = \frac{f_2 - y}{f_2 - f_1} \cdot \frac{y + f_2}{2} + \left( 1 - \frac{f_2 - y}{f_2 - f_1} \right) F(0). \label{eq:optF1} \end{equation} In Eq.\ (\ref{eq:optF1}), the expression $(f_2 - y)/(f_2 - f_1)$ corresponds to the probability that the team will take the shot, so that the first term on the right hand side corresponds to the expected points per possession from shooting and the second term corresponds to the expected points per possession from passing up the shot. The optimal value of $p$, which by definition is equal to $f(1)$, can be found by taking the derivative of $F_y(1)$ and equating it to zero: \begin{equation} \left. \frac{d F_y(1)}{d y} \right|_{y = f(1)} = 0. \label{eq:optf1} \end{equation} Combining Eqs.\ (\ref{eq:optF1}) and (\ref{eq:optf1}) gives $f(1) = F(0) = (f_1 + f_2)/2$. In other words, the team should shoot the ball whenever the shot opportunity has a higher quality $p$ than the average of what they would get if they held the ball and waited for the next position. This is an intuitive and straightforward result. 
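As a quick numerical sanity check of this result, one can evaluate Eq.\ (\ref{eq:optF1}) on a grid of candidate cutoffs $y$ and confirm that the maximum sits at the mean of the shot quality distribution. A sketch (Python), using the illustrative values $f_1 = 0$ and $f_2 = 1$:

```python
# Maximize F_y(1) of Eq. (eq:optF1) by brute force over the cutoff y.
f1, f2 = 0.0, 1.0            # illustrative bounds of the flat shot-quality distribution
F0 = (f1 + f2) / 2           # F(0): expected points from the last-chance shot

def F1(y):
    """Expected points per possession when shots of quality below y are passed up (n = 1)."""
    p_shoot = (f2 - y) / (f2 - f1)
    return p_shoot * (y + f2) / 2 + (1 - p_shoot) * F0

best = max((i / 10000 for i in range(10001)), key=F1)
print(best, F1(best))        # 0.5 0.625 -- the cutoff (f1 + f2)/2 and the value F(1) = 5/8
```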
It can be extended to create a more general version of Eqs.\ (\ref{eq:optf1}) and (\ref{eq:optF1}). Namely, \begin{equation} F_y(n) = \frac{f_2 - y}{f_2 - f_1} \cdot \frac{y + f_2}{2} + \left( 1 - \frac{f_2 - y}{f_2 - f_1} \right) F(n-1). \label{eq:optF} \end{equation} and \begin{eqnarray} \left. \frac{d F_y(n)}{d y} \right|_{y = f(n)} & = & 0. \label{eq:optf} \end{eqnarray} Together, these two equations imply \begin{equation} f(n) = F(n-1). \label{eq:Fequalsf} \end{equation} This is the general statement that a team should shoot the ball only when the quality of the current opportunity is greater than the expected value of retaining the ball and getting $n$ more shot opportunities. The conclusion of Eq.\ (\ref{eq:Fequalsf}) allows one to rewrite Eq.\ (\ref{eq:optF}) as a recursive sequence for $f(n)$: \begin{equation} f(n+1) = \frac{[f(n)]^2 -2 f_1 f(n) + f_2^2}{2(f_2 - f_1)}. \label{eq:ss} \end{equation} Along with the initial value $f(0) = f_1$, Eq.\ (\ref{eq:ss}) completely defines ``the shooter's sequence". Surprisingly, considering the simplicity of the problem statement, this sequence $f(n)$ has no exact analytical solution. Its first few terms and its asymptotic limit are as follows: \begin{eqnarray} f(0) &=& f_1 \nonumber \\ f(1) &=& (f_1 + f_2)/2 \nonumber \\ f(2) &=& (3 f_1 + 5 f_2)/8 \nonumber \\ f(3) &=& (39 f_1 + 89 f_2)/128 \nonumber \\ f(4) &=& (8463 f_1 + 24305 f_2)/32768 \nonumber \\ &...& \nonumber \\ f(n \rightarrow \infty) &=& f_2 \nonumber \end{eqnarray} Note that in the limit where the team has infinite time, their shooting becomes maximally selective (only shots with ``perfect" quality $f_2$ should be taken) and maximally efficient (every possession scores $f_2$ points). Since Eq.\ (\ref{eq:ss}) constitutes a recursive, quadratic map, it has no general solution \cite{Weissteinqm}. 
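Although the sequence has no closed form, the terms listed above are easy to reproduce by iterating Eq.\ (\ref{eq:ss}) directly. A minimal sketch (Python), using exact rational arithmetic and the illustrative endpoints $f_1 = 0$, $f_2 = 1$:

```python
from fractions import Fraction

# Iterate the shooter's sequence: f(n+1) = [f(n)^2 - 2 f1 f(n) + f2^2] / [2 (f2 - f1)]
f1, f2 = Fraction(0), Fraction(1)
f = [f1]                     # f(0) = f1
for _ in range(4):
    f.append((f[-1]**2 - 2 * f1 * f[-1] + f2**2) / (2 * (f2 - f1)))

print([str(x) for x in f])   # ['0', '1/2', '5/8', '89/128', '24305/32768']
```

These values agree with the listed terms evaluated at $f_1 = 0$, $f_2 = 1$.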
Nonetheless, the expression for $f(n)$ can be simplified somewhat by writing it in the form \begin{equation} f(n) = \alpha(n) f_1 + \beta(n) f_2, \end{equation} where $\alpha(n)$ and $\beta(n)$ are separate recursive sequences defined by \begin{equation} \alpha(n+1) = \alpha(n) - \alpha(n)^2/2, \hspace{5mm} \alpha(0) = 1 \label{eq:alpha} \end{equation} and \begin{equation} \beta(n) = \frac{1 + \beta(n-1)^2}{2}, \hspace{5mm} \beta(0) = 0, \label{eq:beta} \end{equation} respectively. While $\alpha(n)$ and $\beta(n)$ have no analytical solution, in the limit of large $n$ they have the asymptotic behavior $\alpha(n) \simeq 2/n + \mathcal{O}(1/n^2)$ and $\beta(n) \simeq 1 - 2/n + \mathcal{O}(1/n^2)$. \section{Optimal shooting without a shot clock} \label{sec:noclock} In this section I consider ``pickup ball"-type situations, where there is no natural time limit to a possession. In this case, the number of shot opportunities that the team can generate is limited only by their propensity to turn the ball over -- if the team attempts to continually reset the offense in search of a perfect shot they will eventually turn the ball over without taking any shots at all. Thus, in these situations there is no natural definition of $n$, which implies that the solution for the optimal shot quality cutoff $f$ is a single number rather than a sequence. Its value depends on the upper and lower values of the distribution, $f_1$ and $f_2$, and on the probability $p_t$ that the team will turn the ball over between two subsequent shot opportunities. To calculate $f$, one can consider that the team's average number of points per possession, $F$, will be the same at the beginning of every offensive set, regardless of whether they have just chosen to pass up a shot. The team's optimal strategy is to take a shot whenever that shot's quality exceeds $F$; \emph{i.e.}, $f = F$ as in Eq.\ (\ref{eq:Fequalsf}). 
This leads to the expression \begin{equation} f = p_t \times 0 + (1-p_t) \left[ \frac{f_2 - f}{f_2 - f_1} \cdot \frac{f + f_2}{2} + \left( 1 - \frac{f_2 - f}{f_2 - f_1} \right) f \right]. \label{eq:optFnoclock} \end{equation} In this equation, the term proportional to $p_t$ represents the expected points scored when the team turns the ball over (zero) and the term proportional to $1 - p_t$ represents the expected points scored when the team does not turn the ball over. As in Eq.\ (\ref{eq:optF}), the two terms inside the bracket represent the points scored when the shot is taken and when the shot is passed up. Eq.\ (\ref{eq:optFnoclock}) is a quadratic equation in $f$, and can therefore be solved directly to give the optimal lower cutoff for shot quality in situations with no shot clock. This process gives \begin{equation} f = \frac{f_2 - f_1 p_t - \sqrt{p_t (f_2 - f_1)\left[2 f_2 - p_t(f_1 + f_2)\right]} }{1-p_t}. \label{eq:fnoclock} \end{equation} For $0 \leq p_t < 1$ and $0 \leq f_1 \leq f_2$, $f$ is real and positive. In the limit $p_t \rightarrow 0$, Eq.\ (\ref{eq:fnoclock}) gives $f \rightarrow f_2$ (perfect efficiency), as expected. \section{The shooter's sequence in the presence of turnovers} \label{sec:turn} In this section I reconsider the problem of Sec.\ \ref{sec:f} including the effect of a finite turnover probability $p_t$. This constitutes a straightforward generalization of Eqs.\ (\ref{eq:optF}) and (\ref{eq:optFnoclock}). Namely, \begin{eqnarray} F(n) & = & (1-p_t) \times \left[ \frac{f_2 - f(n-1)}{f_2 - f_1} \cdot \frac{f(n-1) + f_2}{2} \right. \nonumber \\ & & + \left. \left( 1 - \frac{f_2 - f(n-1)}{f_2 - f_1} \right) F(n-1) \right]. \label{eq:optFturn} \end{eqnarray} Simplifying this expression and using $f(n) = F(n-1)$ gives the recurrence relation \begin{equation} f(n) = (1-p_t) \frac{f(n-1)^2 - 2 f_1 f(n-1) + f_2^2}{2(f_2 - f_1)}. 
\label{eq:fturn} \end{equation} Together with the condition $f(0) = f_1$, Eq.\ (\ref{eq:fturn}) completely defines the sequence $f(n)$. Unfortunately, the sequence $f(n)$ is unmanageable algebraically at all but very small $n$. It can easily be evaluated numerically, however, if the values of $f_1$, $f_2$, and $p_t$ are known. The first few terms of $f(n)$ and its limiting expression are as follows: \begin{eqnarray} f(0) &=& f_1 \nonumber \\ f(1) &=& (1-p_t)(f_1 + f_2)/2 \nonumber \\ f(2) &=& \frac{1-p_t}{8(f_2 - f_1)} \left\{ [5 - (2-p_t)p_t]f_2^2 \right. \nonumber \\ & & \left. - 2 f_1 f_2 (1-p_t)(1+p_t) - f_1^2 (1-p_t)(3+p_t) \right\} \nonumber \\ &...& \nonumber \\ f(n \rightarrow \infty) &=& \frac{f_2 - f_1 p_t - \sqrt{p_t (f_2 - f_1)\left[2 f_2 - p_t(f_1 + f_2)\right]} }{1-p_t} \nonumber \end{eqnarray} Notice that $f(n)$ approaches the result of Eq.\ (\ref{eq:fnoclock}) in the limit where many shot opportunities remain (\emph{i.e.} the very long shot clock limit). Overall, the sequence $f(n)$ has two salient features: 1) it increases monotonically with $n$ and ultimately approaches the ``no shot clock" limit of Sec.\ \ref{sec:noclock}, and 2) it generally calls for the team to accept lower-quality shots than they would in the absence of turnovers, since the team must now factor in the possibility that future attempts will produce turnovers rather than random-quality shot opportunities. \section{Shooting rates of optimal shooters} \label{sec:hazard} Secs.\ \ref{sec:f} -- \ref{sec:turn} give the optimal shot quality cutoff as a function of the number of shots remaining. In this sense, the results presented above are useful for a team trying to answer the question ``when should I take a shot?". However, these results do not directly provide a way of answering the question ``is my team shooting optimally?". In other words, it is not immediately obvious how the shooter's sequence should manifest itself in shooting patterns during an actual game. 
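Before turning to in-game shooting patterns, one can check numerically that iterating Eq.\ (\ref{eq:fturn}) indeed approaches the no-shot-clock limit of Eq.\ (\ref{eq:fnoclock}). A sketch (Python), with the illustrative values $f_1 = 0$, $f_2 = 1$, and $p_t = 0.1$:

```python
import math

# Iterate the turnover-adjusted recursion of Eq. (eq:fturn) and compare its limit
# with the closed-form no-shot-clock cutoff of Eq. (eq:fnoclock).
f1, f2, pt = 0.0, 1.0, 0.1    # illustrative distribution bounds and turnover probability

f = f1                        # f(0) = f1
for _ in range(200):
    f = (1 - pt) * (f**2 - 2 * f1 * f + f2**2) / (2 * (f2 - f1))

f_inf = (f2 - f1 * pt - math.sqrt(pt * (f2 - f1) * (2 * f2 - pt * (f1 + f2)))) / (1 - pt)
print(f, f_inf)               # the iterates approach the fixed point (~0.627 here)
```

Convergence is geometric, so a few dozen iterations already suffice in practice.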
When analyzing the shooting of a team based on collected (play-by-play) data, it is often instructive to look at the team's ``shooting rate" $R(t)$. The shooting rate (also sometimes called the ``hazard rate") is defined so that $R(t) dt$ is the probability that a team with the ball at time $t$ will shoot the ball during the interval of time $(t-dt, t)$. Here, $t$ is defined as the time remaining on the shot clock, so that $t$ decreases as the possession goes on. In this section I calculate the optimum shooting rate $R(t)$ implied by the results of Secs.\ \ref{sec:f} -- \ref{sec:turn}. This calculation provides a means whereby one can evaluate how much a team's shooting pattern differs from the optimal one. In order to calculate optimal shooting rate as a function of time, one should assume something about how frequently shot opportunities arise. In this section I make the simplest natural assumption, namely that shot opportunities arise randomly with some uniform rate $1/\tau$. For example, $\tau = 4$ seconds would imply that on average a team gets six shot opportunities during a 24-second shot clock. I also assume that there is some uniform turnover rate $1/\tau_t$. Under this set of assumptions, one can immediately write down the probability $P(t, n; \tau)$ that at a given instant $t$ the team will have enough time for exactly $n$ additional shot opportunities. Specifically, $P(t,n;\tau)$ is given by the Poisson distribution: \begin{equation} P(n, t; \tau) = \left( \frac{t}{\tau} \right)^n \frac{e^{-t/\tau}}{n!}. \label{eq:Poisson} \end{equation} The probability $p_t$ of a turnover between successive shot opportunities is given by \begin{equation} p_t = \int_0^\infty \left(1 - e^{-t'/\tau_t} \right) e^{-t'/\tau} \frac{dt'}{\tau} = \frac{\tau}{\tau_t + \tau}. 
\label{eq:pt} \end{equation} The integrand in Eq.\ (\ref{eq:pt}) contains the probability that there is at least one turnover during a time interval $t'$ multiplied by the probability that there are no shot attempts during the time $t'$ multiplied by the probability that a shot attempt arises during $(t', t'+dt')$, and this is integrated over all possible durations $t'$ between subsequent shot attempts. In general, for a team deciding at a given time $t$ whether to shoot, the rate of shooting should depend on the prescribed optimal rate for when there are exactly $n$ opportunities left, multiplied by the probability $P(n, t; \tau)$ that there are in fact $n$ opportunities left, and summed over all possible $n$. More specifically, consider that a team's optimal probability of taking a shot when there are exactly $n$ opportunities remaining is given by $[f_2 - f(n)]/(f_2 - f_1)$, where $f(n)$ is the shooter's sequence defined by Eq.\ (\ref{eq:fturn}). The probability that the team should shoot during the interval $(t-dt, t)$ is therefore given by \begin{equation} R(t) dt = \frac{dt}{\tau} \sum_{n = 0}^{\infty} P(n, t; \tau) \frac{f_2 - f(n)}{f_2 - f_1}. \end{equation} Inserting Eq.\ (\ref{eq:Poisson}) gives \begin{equation} R(t) = \sum_{n = 0}^{\infty} \frac{t^n e^{-t/\tau}}{\tau^{n+1} n!} \cdot \frac{f_2 - f(n)}{f_2 - f_1}. \label{eq:R} \end{equation} Since the sequence $f(n)$ has no analytical solution, there is no general closed-form expression for $R(t)$. The corresponding expected average efficiency (points/possession) of a team following the optimal strategy is derived in the Appendix. As an example, consider a team that encounters shot opportunities with rate $1/\tau = 1/(4 \textrm{ seconds})$ and turns the ball over with rate $1/\tau_t = 1/(50 \textrm{ seconds})$. Using the sequence defined in Eq.\ (\ref{eq:fturn}), one can evaluate numerically the shooting rate implied by Eq.\ (\ref{eq:R}). 
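A sketch of such an evaluation (Python), with the Poisson sum truncated at large $n$ and the weights accumulated recursively for numerical stability:

```python
import math

# Numerical evaluation of the optimal shooting rate R(t), Eq. (eq:R), for the
# example parameters in the text: f1 = 0, f2 = 1, tau = 4 s, tau_t = 50 s.
f1, f2 = 0.0, 1.0
tau, tau_t = 4.0, 50.0
pt = tau / (tau_t + tau)        # turnover probability between opportunities, Eq. (eq:pt)

# Shooter's sequence with turnovers, iterated from f(0) = f1 [Eq. (eq:fturn)]
N = 200
f = [f1]
for _ in range(N - 1):
    f.append((1 - pt) * (f[-1]**2 - 2 * f1 * f[-1] + f2**2) / (2 * (f2 - f1)))

def R(t):
    """Poisson-weighted sum over the number of remaining opportunities, Eq. (eq:R).
    Weights are built up recursively to avoid overflowing t**n and n! at large n."""
    x = t / tau
    weight = math.exp(-x)       # P(n = 0, t)
    total = 0.0
    for n in range(N):
        total += weight * (f2 - f[n]) / (f2 - f1)
        weight *= x / (n + 1)   # P(n+1, t) = P(n, t) * (t/tau) / (n+1)
    return total / tau

for t in (24.0, 18.0, 12.0, 6.0, 0.0):
    print(t, R(t) * tau)        # R(t)*tau rises toward 1 as the clock runs out (t -> 0)
```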
This result is plotted as the black, solid line in Fig.\ \ref{fig:hazardrates}a, using $f_2 = 1$ and $f_1 = 0$. In Fig.\ \ref{fig:hazardrates}a the optimal shooting rate is plotted as the dimensionless combination $R(t)\tau$, which can be thought of as the probability that a given shot should be taken if the opportunity arises at time $t$ (as opposed to $R(t)$, which is conditional on an opportunity presenting itself). For reference, I also plot the case where there are no turnovers, $\tau_t \rightarrow \infty$. One can note that the finite turnover rate causes the optimal shooting rate to increase appreciably early in the shot clock. In other words, when there is a nonzero chance of turning the ball over the team cannot afford to be as selective with their shots. The optimal shooting rate can also be expressed in terms of the optimal lower cutoff for shot quality, $f$, as a function of time. Since $R(t)\tau$ is the probability that a shot at $t$ should be taken, $f$ can be expressed simply as $f(t) = f_2 - R(t)\tau(f_2 - f_1)$. This optimal lower cutoff is plotted in Fig.\ \ref{fig:hazardrates}b. A team that follows the optimal shooting strategy shown in Fig.\ \ref{fig:hazardrates} can be expected to score $0.64$ points per possession during games with a $24$-second shot clock [see Eq.\ (\ref{eq:Ft})], a significant enhancement from the value $0.5$ that might be naively expected by taking the average of the shot quality distribution. \begin{figure}[htb!] \centering \includegraphics[width=0.45 \textwidth]{shooting_rates} \caption{a) Optimal shooting rate for a hypothetical team with $f_2 = 1$, $f_1 = 0$, $\tau = 4$ seconds, and $\tau_t = 50$ seconds, as given by Eq.\ (\ref{eq:R}). The shooting rate $R(t)$ is plotted in the dimensionless form $R(t)\tau$, which can be thought of as the probability that a given shot that has arisen should be taken. The dashed line shows the hypothetical shooting rate for the team in the absence of turnovers. 
b) Optimal lower cutoff for shot quality, $f$, as a function of time for the same hypothetical team, both with and without a finite turnover rate.} \label{fig:hazardrates} \end{figure} In the limit of large time $t$ (or when there is no shot clock at all), as considered in Sec.\ \ref{sec:noclock}, the shooting rate $R(t)$ becomes independent of time and Eq.\ (\ref{eq:R}) has the following simple form: \begin{eqnarray} R & = & \frac{1}{\tau} \frac{f_2 - f}{f_2 - f_1}, \hspace{5mm} (\textrm{no shot clock}) \nonumber \\ & = & \frac{1}{\tau_t} \left[ \sqrt{1 + \frac{2 f_2}{f_2 - f_1} \frac{\tau_t}{\tau} } - 1 \right]. \label{eq:Rnoclock} \end{eqnarray} Notice that when turnovers are very rare, $\tau_t \rightarrow \infty$, the shooting rate goes to zero, since the team can afford to be extremely selective about their shots. Eq.\ (\ref{eq:Rnoclock}) also implies an intriguingly weak dependence of the shooting rate on the average time $\tau$ between shot opportunities. Imagine, for example, two teams, A and B, that both turn the ball over every $50$ seconds of possession and both have shot distributions characterized by $f_2 = 1$, $f_1 = 0$. Suppose, however, that team A has much faster ball movement, so that team A arrives at a shot opportunity every 4 seconds while team B arrives at a shot opportunity only every 8 seconds. One might expect, then, that in the absence of a shot clock team A should have a shooting rate that is twice as large as that of team B. Eq.\ (\ref{eq:Rnoclock}), however, suggests that this is not the case. Rather, team B should shoot on average every $19$ seconds and the twice-faster team A should shoot every $12$ seconds. The net result of this optimal strategy, by Eqs.\ (\ref{eq:fnoclock}) and (\ref{eq:pt}), is that team A scores $0.67$ points per possession while team B scores $0.57$ points per possession. 
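These numbers follow directly from Eqs.\ (\ref{eq:pt}), (\ref{eq:fnoclock}), and (\ref{eq:Rnoclock}); a short numerical check (Python):

```python
import math

# Check of the no-shot-clock example: both teams have f1 = 0, f2 = 1 and a turnover
# every tau_t = 50 s of possession, but different times tau between opportunities.
def no_clock(tau, tau_t, f1=0.0, f2=1.0):
    """Optimal cutoff f [Eq. (eq:fnoclock)] and shooting rate R [Eq. (eq:Rnoclock)]."""
    pt = tau / (tau_t + tau)                                     # Eq. (eq:pt)
    f = (f2 - f1 * pt - math.sqrt(pt * (f2 - f1) * (2 * f2 - pt * (f1 + f2)))) / (1 - pt)
    R = (f2 - f) / ((f2 - f1) * tau)                             # first form of Eq. (eq:Rnoclock)
    return f, R

fA, RA = no_clock(tau=4.0, tau_t=50.0)   # team A: an opportunity every 4 seconds
fB, RB = no_clock(tau=8.0, tau_t=50.0)   # team B: an opportunity every 8 seconds

print(round(fA, 2), round(1 / RA, 1))    # 0.67 points/possession, a shot every ~12.2 s
print(round(fB, 2), round(1 / RB, 1))    # 0.57 points/possession, a shot every ~18.7 s
```

Note that the expected points per possession equals the cutoff $f$ itself, since $f = F$ at optimum.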
In other words, team A's twice-faster playing style buys them not a twice-higher shooting rate, but rather an improved ability to be selective about which shots they take, and therefore an improved offensive efficiency. \section{Comparison to NBA data} \label{sec:data} Given the results of the previous section, one can examine the in-game shooting statistics of basketball players and evaluate the extent to which the players' shooting patterns correspond to the ideal optimum strategy. In this section I examine data from NBA games and compare the measured shooting rates and shooting percentages of the league as a whole to the theoretical optimum rates developed in Secs.\ \ref{sec:f}--\ref{sec:hazard}. The analysis of this section is based on play-by-play data from 4,720 NBA games during the 2006-2007 -- 2009-2010 seasons (available at http://www.basketballgeek.com). Shots taken and points scored are sorted for all possessions by how much time remains on the shot clock at the time of the shot. Following Ref.\ \onlinecite{Goldman2011aad}, possessions that occur within the last 24 seconds of a given quarter or within the last six minutes of a game are eliminated from the data set \footnote{ In such end-of-quarter or end-game situations, players are often not trying to optimize their offensive efficiency in a risk-neutral way. An analysis of these ``underdog" situations is presented in Ref.\ \cite{Skinner2011ssu}. }, along with any possessions for which the shot clock time cannot be accurately inferred. The resulting average shooting rate and shot quality (points scored per shot) are plotted as the symbols in Fig.\ \ref{fig:data_compare}a and b, respectively, as a function of time. Open symbols correspond to shots taken during the first seven seconds of the shot clock, which generally correspond to ``fast break" plays during which the offense is not well-described by the theoretical model developed in this paper. 
In order to compare this data with the theoretical optimum behavior prescribed by the theories of Secs.\ \ref{sec:f}--\ref{sec:hazard}, one should determine the values $f_1$, $f_2$, $\tau$, and $\tau_t$ that best describe the average NBA offense. This last parameter, the average time between turnovers, can be extracted directly from the data: $\tau_t = 100.2$ seconds. The other parameters can be determined only implicitly, by fitting the observed shooting rates and percentages to the theoretical model. For the curves shown in Fig.\ \ref{fig:data_compare}, the following approach is employed. First, the average shot quality for NBA teams is determined from the data as a function of time (Fig.\ \ref{fig:data_compare}b). Then, the theoretical average shot quality $[f_2 + f(t)]/2$ of an optimal-shooting team is fit to this data in order to determine the best-fit values of $f_1$, $f_2$, and $\tau$, assuming optimal behavior. This procedure gives $f_1 = 0.5$, $f_2 = 1.1$, and $\tau = 2.8$ seconds. The corresponding fit line is shown as the solid curve in Fig.\ \ref{fig:data_compare}b. The shooting rate $R(t)$ implied by these parameter values is then calculated and compared to the shooting rate measured from NBA games (Fig.\ \ref{fig:data_compare}a). In this way one can check whether the measured shooting \emph{rates} of NBA players are consistent with their shooting \emph{percentages}, within the assumptions of the theoretical model. \begin{figure}[htb!] \centering \includegraphics[width=0.45 \textwidth]{data_compare} \caption{A comparison between the theoretical optimum shooting strategy and data from NBA games. a) The shooting rate as a function of shot clock time $t$. The solid black line corresponds to the parameters $f_1 = 0.5$, $f_2 = 1.1$, $\tau = 2.8$ s, which are determined by a best fit to the shot quality data, using the NBA average turnover rate $\tau_t = 100.2$ seconds. 
The dashed blue line corresponds to the same parameters except with the turnover rate $1/\tau_t$ set to zero. b) The average shot quality (points per shot) as a function of $t$. The solid line corresponds to the best fit curve to the filled symbols, from which the parameters for the solid black line in a) are determined.} \label{fig:data_compare} \end{figure} The result, as shown in Fig.\ \ref{fig:data_compare}a, is that NBA players seem noticeably more reluctant to shoot the ball during the early stages of the shot clock than is prescribed by the theoretical model. With $15$ seconds remaining on the shot clock, for example, the average NBA team has a probability of only about $4\%$ of shooting the ball during the next second, whereas the optimal strategy suggests that this probability should be as high as $12\%$. This observation is in qualitative agreement with the findings of Ref.\ \onlinecite{Goldman2011aad}, which concludes that under-shooting is far more common in the NBA than over-shooting. As a consequence, NBA players are much more likely to delay shooting until the last few seconds of the shot clock, where they are likely to be rushed and their shooting percentages are noticeably lower. The price of this suboptimal behavior is reflected in the average efficiency $F$. For NBA teams, the expected number of points per possession is $0.86$, or $0.83$ if one considers only possessions lasting past the first seven seconds of the shot clock. In contrast, the optimal shooting strategy shown by the solid lines in Fig.\ \ref{fig:data_compare} produces $0.91$ points/possession for a $24$-second shot clock and $0.88$ points/possession for a 17-second clock (see the Appendix), even though it corresponds to the same distribution of shot quality. This improvement of $0.05$ points/possession translates to roughly $4.5$ points per game.
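As a rough numerical illustration of this estimate, the following sketch converts the efficiency gain into points and wins per season. The possession count ($\sim 90$ per game) and the Pythagorean exponent ($\sim 14$) are typical values assumed here for illustration; they are not taken from the data set of this paper.

```python
# A rough numerical check of the efficiency argument above. The number of
# possessions per game (~90) and the Pythagorean exponent (~14) are assumed
# typical NBA values, not figures taken from this paper's data set.
possessions_per_game = 90
efficiency_gain = 0.05          # points per possession, optimal minus observed

points_gain = efficiency_gain * possessions_per_game
print(points_gain)              # 4.5 points per game

def pythagorean_wins(scored, allowed, games=82, exponent=14):
    """Expected wins from a Pythagorean model of the winning percentage."""
    return games * scored**exponent / (scored**exponent + allowed**exponent)

baseline = pythagorean_wins(100.0, 100.0)             # an average team: 41 wins
improved = pythagorean_wins(100.0 + points_gain, 100.0)
print(improved - baseline)      # about 12, i.e. more than 10 additional wins
```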
According to the established ``Pythagorean" model of a team's winning percentage in the NBA \cite{JustinKubatko2007spa}, such an improvement can be expected to produce more than 10 additional wins for a team during an 82-game season. One natural way to interpret the discrepancy between the observed and the theoretically optimal shooting behavior of NBA players is as a sign of overconfident behavior. That is, NBA players may be unwilling to settle for only moderately high-quality shot opportunities early in the shot clock, believing that even better opportunities will arise later. Part of the discrepancy can also be explained in terms of undervaluation of turnover rates. If the players believe, for example, that they have essentially no chance of turning the ball over during the current possession, then they will be more likely to hold the ball and wait for a later opportunity. This effect is illustrated by the dashed blue line in Fig.\ \ref{fig:data_compare}a, which shows the optimal shooting rate for the hypothetical case $\tau_t = \infty$ (the absence of turnovers). This line is in significantly better agreement with the observed shooting rates at large $t$, which suggests that when NBA players make their shooting decisions early in the shot clock they do not account for the probability of future turnovers. Of course, it is possible that much of the disagreement between the observed and theoretically optimum shooting rates can be attributed to an inaccuracy in the theory's assumption (in Sec.\ \ref{sec:hazard}) that shot opportunities arise randomly in time. It is likely that NBA players often run their offense so as to produce more shot opportunities as the clock winds down. This behavior would produce shooting rates that are weighted more heavily toward later times. It is also likely that at very small time $t$ the theory's assumption of a uniform distribution of shot quality becomes invalid. 
Indeed, in these ``buzzer-beating" situations the players' shots are often forced, and their quality is likely not chosen from the same random distribution as for shots much earlier in the shot clock. In this sense, the theoretical result of Eq.\ (\ref{eq:R}) cannot be considered a very exact description of the shooting rates of NBA teams. In order to improve the applicability of the model for real-game situations, one should account for the possibility of time dependence in the shot quality distribution ($f_1$ and $f_2$) and the rate of shot opportunities ($1/\tau$). Such considerations are beyond the scope of the present work. Nonetheless, the apparent sub-optimal behaviors illustrated in Fig.\ \ref{fig:data_compare} are instructive, and Eq.\ (\ref{eq:R}) may be helpful in determining how optimal strategy should adapt to changing features of the offense -- \textit{e.g.} an altered pace of play ($\tau$) or an improving/declining team shooting ability ($f_1$ and $f_2$) or a changing turnover rate ($\tau_t$). If nothing else, the theories developed in this paper help to further the study of shot selection and optimal behavior in basketball, and may pave the way for a more complex theoretical model in the future. In this way the problem of shot selection in basketball should be added to the interesting and growing literature on optimal stopping problems. More broadly, the question of optimal behavior in sports provides an interesting, novel, and highly-applicable playground for mathematics and statistical mechanics. \acknowledgments I am grateful to M.\ R.\ Goldman and S.\ Redner for helpful discussions.
\documentclass[twocolumn,showpacs,preprintnumbers,amsmath,amssymb]{revtex4} \usepackage{graphicx} \usepackage{dcolumn} \usepackage{bm} \begin{document} \title{Towards the perfect prediction of soccer matches} \date{\today} \author{Andreas Heuer} \affiliation{\frenchspacing Westf\"alische Wilhelms Universit\"at M\"unster, Institut f\"ur physikalische Chemie, Corrensstr.\ 30, 48149 M\"unster, Germany} \affiliation{\frenchspacing Center of Nonlinear Science CeNoS, Westf\"alische Wilhelms Universit\"at M\"unster, Germany} \author{Oliver Rubner} \affiliation{\frenchspacing Westf\"alische Wilhelms Universit\"at M\"unster, Institut f\"ur physikalische Chemie, Corrensstr.\ 30, 48149 M\"unster, Germany} \affiliation{\frenchspacing Center of Nonlinear Science CeNoS, Westf\"alische Wilhelms Universit\"at M\"unster, Germany} \begin{abstract} We present a systematic approach to the prediction of soccer matches. First, we show that information about the chances for goals is by far more informative than the actual results. Second, we present a multivariate regression approach and show how the prediction quality increases with increasing information content. This prediction quality can be expressed explicitly in terms of just two parameters. Third, by disentangling the systematic and random components of soccer matches we can identify the optimum level of predictability. These concepts are exemplified for the German Bundesliga. \end{abstract} \keywords{prediction} \maketitle \section{Introduction} An important field in the statistical analysis of sports is the prediction of soccer matches. In the literature different approaches can be found. In one type of model \cite{Lee97,Dixon97,Dixon98,Rue00} appropriate parameters are introduced to characterize the properties of individual teams, such as the offensive strength. Of course, the characterization of team strengths is not restricted to soccer; see, e.g., \cite{Sire}.
The specific values of these parameters can be obtained via Monte-Carlo techniques. These models can then be used for prediction purposes and allow one to calculate probabilities for individual match results. A key element of these approaches is the Poissonian nature of scoring goals \cite{Maher82,janke1,janke2}. Beyond these goals-based approaches, results-based models are also used. Here the final result (home win, draw, away win) is predicted by comparing the difference of the team strength parameters with some fixed values \cite{Koning00}. The quality of both approaches has been compared and no significant differences have been found \cite{Goddard05}. Going beyond these approaches, additional covariates can be included. For example, home and away strengths are considered individually, or the geographical distance is taken into account \cite{Goddard05}. Recently, ELO-based ratings have also been used for the purpose of forecasting soccer matches \cite{Hvattum}. Recent studies suggest that statistical models are superior to lay and expert predictions but have less predictive power than the bookmaker odds \cite{Andersson,Song,Forrest,Hvattum}. This observation strongly suggests that either the information used by the bookmakers is more powerful or, alternatively, their inference process, based on the same information, is more efficient. Probably both aspects play a role. When predicting soccer matches several key aspects have to be taken into account: (i) choice of appropriate observables which contain optimum information about the individual team strengths, (ii) definition and subsequent estimation of the team strength, (iii) estimation of the outcome of a soccer match based on the two team strengths, and (iv) additional consideration of the stochastic (Poissonian) contributions to a soccer match. The final two aspects have been analyzed in detail in Ref.\cite{Heuer10}. In the present work we concentrate on the first two aspects.
Therefore we restrict ourselves to predicting the outcome of the second half of the season, i.e. summing over the final 17 matches (in the German Bundesliga). For this task the stochastic aspects are somewhat easier to handle than for the prediction of a single match, so that we can concentrate on (i) and (ii). However, all concepts can also be directly applied to the prediction of single soccer matches. Furthermore, our analysis can naturally be transferred to all other soccer leagues. As a key result we identify the level of optimum predictability and determine how close our actual inference approaches this optimum level. It will turn out that the chances for goals are highly informative. They have been provided by a professional sports journal (www.kicker.de) since the season 1995/96. In total we take into account all seasons until 2010/11. Since the definition of the chances for goals has slightly changed during the first years of the reporting period, we have normalized the chances for goals such that their total number is identical in every season. \section{Key elements of the prediction process} \subsection{Systematic and stochastic effects in soccer matches} Our general goal is the prediction of the future results of soccer matches. More specifically, we concentrate on the prediction of the outcome of the second half of the league tournament (German Bundesliga). This second half involves $N_2 = 17$ matches. We want to predict the final goal difference $\Delta G_2$ of each team after these $N_2$ matches. A similar analysis could also be performed for points. We mention in passing that the information content of the goal difference about the team strength is somewhat superior to that of points \cite{Heuer09}. In previous work we have defined the team strength $S_2$ of a team as the expected average goal difference when playing against all other 17 teams.
Strictly speaking, $S_2$ could only be determined exactly if the team played very often against the other 17 teams under identical conditions. Let $\Delta G_2(N_2)$ denote the goal difference of some team after $N_2$ matches in the second half, normalized per match. Then $\Delta G_2(N_2)$ can be expressed as the sum of its strength $S_2$ and a random variable $\xi$, which denotes the non-predictable contributions in the considered matches. In what follows we assume that the variance of $\xi$ is not correlated with the strength index $S_2$. Taking into account that the random contributions during different matches are uncorrelated, one immediately obtains \begin{equation} \label{VG1} Var(\Delta G_2(N_2)) = Var(S_2) + V_2/N_2 \end{equation} where $V_2$ describes the variance of the random contribution during a single match and $Var(S_2)$ reflects the variance of the distribution of team strengths in the league \cite{Heuer09}. The $1/N_2$-scaling simply expresses that the statistical effects average out when taking into account a larger number of matches. This scaling only breaks down for $N_2$ close to unity because then the goal difference also depends on the strength of the opponent. In practice it turns out that for $N_2 > 4$ the variation among the $N_2$ opponents has sufficiently averaged out. This dependence on the number of considered matches has been explicitly analyzed in Refs.\cite{Heuer09,heuer_buch}. For the present set of data we obtain $Var(S_2) = 0.21$ and $ V_2 = 2.95$. Actually, $V_{2}$ is very close to the total number of goals per match (2.85). This observation is compatible with the assumption of a Poissonian process. \subsection{Prediction within one season} In an initial step we use information from the first half of the season to predict the second half. The independent variable in the first half is denoted as $Y$, the dependent variable in the second half as $Z$. As the simplest approach we formulate the linear regression problem $Z = bY$.
In what follows all variables fulfill the condition that their first moment, averaged over all teams, is strictly zero. Generalization is, of course, straightforward. The regression problem requires the minimization of $ \langle (Z-\hat{Z})^2\rangle$ with respect to $b$, where $\hat{Z}=bY$ is the explicit prediction of $Z$. Inserting the resulting value of $b_{opt}$ yields for this optimum quadratic variation \begin{equation} \label{chi2} \chi^2(Y) = Var(Z) \left [1 - [corr(Y,Z)]^2 \right ] \end{equation} where $Var(Z)$ denotes the variance of the distribution of $Z$ and \begin{equation} corr(Y,Z) = \frac{\langle YZ \rangle }{ \sqrt{Var(Y) Var(Z)}} \end{equation} the Pearson correlation coefficient between the variables $Y$ and $Z$. This relation has a simple intuitive interpretation: the higher the correlation between the variables $Y$ and $Z$, the better the predictability of $Z$ in terms of $Y$. To be somewhat more general, we consider the case that exactly $N_1 (\le 17)$ matches in the first half of the season have been taken into account to define the independent variable $Y$. Whenever we want to express the dependence on $N_1$ we use the notation $Y(N_1)$. Without this explicit dependence we always refer to $N_1 = 17$. To reduce the statistical errors we always average over different random selections of $N_1$ matches from the first half of the season. \subsection{Choice of observables} A natural choice for the variable $Y$ is the goal difference $\Delta G_1$ during the first half. We always assume that the results have been corrected for the average home advantage in that season. The quality of the prediction is captured by $corr(Y,Z)$; see Eq.\ref{chi2}. From the empirical data we obtain $corr(Y=\Delta G_1,Z=\Delta G_2)=0.56$. Are there other observables $Y$ which allow one to increase $ corr(Y,\Delta G_2) $ significantly beyond the value of 0.56? The scoring of goals is the final step in a series of match events.
One may thus expect that there exist other match characteristics which are more informative about the team strength. A possible candidate is the number of chances for goals. We denote the chances for goals as $C_\pm$ and the goals as $G_\pm$. The sign indicates whether it refers to the considered team (+) or the opponent of that team (-). In a next step one can define the goal efficiencies $p_{\pm}$ via the relation \begin{equation} G_\pm = C_\pm \cdot p_{\pm}. \end{equation} Here, $p_{+}$ denotes the probability that the team is able to convert a chance for a goal into a real goal and $1 - p_{-}$ that the team manages to not concede a goal after a chance for a goal of the opponent. Averaging over all teams and seasons one obtains $\langle p_{\pm} \rangle = 0.24$. In analogy to $\Delta G$ we will mainly consider the difference $\Delta C = C_+ - C_-$ for prediction purposes. If the goal efficiencies strongly vary from team to team in an a priori unknown way the chances for goals contain only very little information about the actual number of goals. If, however, the goal efficiencies are identical for all teams the chances for goals are more informative than the goals themselves. In Appendix I this general statement is rationalized for a simple model. \begin{figure}[tb] \centering\includegraphics[width=0.9\columnwidth]{fig1} \caption{The efficiency factors $p_\pm$ as a function of the differences of the chances for goals $\Delta C$ } \label{fig1} \end{figure} In Fig.\ref{fig1} the actual goal efficiencies $p_+$ after a season are shown together with the respective values of $\Delta C$. Naturally, $\Delta C$ is strongly positively correlated with the team strength. Two effects are prominent. (1) There is a slight correlation between $\Delta C$ and $p_+$. On average better teams have a slightly better efficiency to score goals. Analogous correlations exist between $p_-$ and $\Delta C$. (2) The goal efficiencies are widely distributed between approx. 15\% and 35\%. 
This observation would indicate that the information content of the chances for goals about the resulting team strength, defined in terms of scoring goals, is quite limited. Surprisingly, this is not true. For the correlation coefficient $corr(Y=\Delta C_1,Z=\Delta G_2)$ one obtains a value of 0.65 which is much larger than $corr(Y=\Delta G_1,Z=\Delta G_2)=0.56$. To understand this high correlation of the chances for goals with the team strength, we discuss the origin of the strong fluctuations of $p_{\pm}$ between the different teams. In general they are a superposition of two effects: (i) true differences between teams and (ii) statistical fluctuations, reflecting the random effects in the 34 soccer matches of the season. Both effects can be disentangled if one analyses the $N$-dependence of the variance of $p_{\pm}$. Whereas the statistical effects should average out for large $N$, the systematic effects remain for all $N$. In analogy to Eq.\ref{VG1} this can be written as \begin{equation} Var(p_{\pm}(N)) = Var(p_\pm) + const_\pm/N \end{equation} \begin{figure}[tb] \centering\includegraphics[width=0.9\columnwidth]{fig2} \caption{ The variance of the distribution of goal efficiencies as a function of the number of match days. } \label{fig2} \end{figure} $Var(p_\pm)$ can be interpreted as the true variance of the distribution of $p_\pm$, free of any random effects. This $N$-dependence of $Var(p_+(N))$ is explicitly shown in Fig.\ref{fig2}. Obviously, one obtains very small values for $Var(p_+)$ and $Var(p_-)$ ($0.00017 \pm 0.00010$ and $0.00018 \pm 0.00010$, respectively). Thus, by far the largest contribution to $Var(p_{\pm}(N=34))$, i.e. to the scatter visible in Fig.\ref{fig1}, is due to random effects. Stated differently, beyond the minor correlation between $p_{\pm}$ and $\Delta C$, shown in Fig.\ref{fig1}, the efficiency to score a goal out of a chance for a goal is basically the same for all teams!
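This conclusion can be illustrated by a minimal Monte-Carlo sketch: if every team converts its chances with the identical probability $p=0.24$, the apparent spread of efficiencies after $N$ matches decays as $1/N$ towards a zero intercept, just as in Fig.\ref{fig2}. The assumed six chances per team and match is only a rough figure, inferred from 2.85 goals per match and $\langle p_\pm \rangle = 0.24$; it is not taken directly from the data set:

```python
import numpy as np

rng = np.random.default_rng(1)
teams, seasons = 18, 500
chances_per_match = 6   # rough: 2.85 goals per match / 0.24 ~ 12 chances, 6 per team
p_true = 0.24           # identical conversion probability for every team

results = {}
for N in (5, 10, 20, 34):
    C = chances_per_match * N                           # chances accumulated after N matches
    G = rng.binomial(C, p_true, size=(seasons, teams))  # goals actually scored
    p_hat = G / C                                       # apparent efficiency p_+(N)
    results[N] = p_hat.var()
    # purely statistical scatter: Var(p_hat) = p(1-p)/C, i.e. a 1/N decay to zero
    print(N, round(results[N], 5), round(p_true * (1 - p_true) / C, 5))
```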
To better understand the statistical properties of the chances for goals we again disentangle the systematic and random parts by writing \begin{equation} \label{VC1} Var(\Delta C_1(N_1)) = Var(S_1) + \frac{V_1}{N_1}. \end{equation} One obtains $Var(S_1) = 2.66$ and $V_1 = 14.2$. Based on this relation it is possible to discuss the individual contributions to the Pearson correlation coefficient $corr(\Delta C_1(N_1),\Delta G_2)$. Using the independence of the random effects in the first and the second half of the season one obtains \begin{eqnarray} \label{corryz} & & corr(Y=\Delta C_1(N_1),Z=\Delta G_2) \nonumber \\ & = & \frac{corr({S}_1,S_2)}{\sqrt{1+{V}_1/(N_1 Var({S}_1))}\sqrt{1+V_2/(17 Var(S_2))}}. \end{eqnarray} This expression clearly shows that there are three reasons why the prediction has intrinsic uncertainties, i.e. the correlation coefficient is smaller than unity. First, the team strength may change in the course of the season, i.e. $corr(S_1,S_2) < 1$. Since all parameters on the right side are explicitly known (see above) we can evaluate Eq.\ref{corryz}, e.g., for $N_1=17$. We obtain $corr(S_1,S_2) = 1.00$. Thus, the variation of the team strength during a single season is basically absent; see also Ref.\cite{Heuer10}. Second, the estimation of the team strength in the first half of the season is hampered by random effects, as expressed by $V_1/Var(S_1) > 0$. Of course, the larger the information content, i.e. the larger $N_1$, the better the prediction. For the chances for goals this ratio is given by $5.3$. If we had based $Y$ on $\Delta G$ rather than $\Delta C$ we would have obtained a value of 11.1. This comparison explicitly reveals why the chances for goals are more informative. Knowledge of the chances for goals of 10 matches is as informative as the goal differences of approx. 21 matches. Third, the prediction of $\Delta G_2$ always has intrinsic uncertainties due to the unavoidable random effects in the second half of the season, i.e. 
$V_2/Var(S_2) > 0$. Eq.\ref{corryz} allows one to define the limit of optimum prediction. In this case $Y$ would be explicitly given by $S_1$, i.e. $V_1 = 0$. This yields $corr(Y,Z=\Delta G_2) = 0.73$. This shows that using the chances for goals ($corr(Y=\Delta C_1,Z=\Delta G_2) = 0.65$) rather than the goals ($corr(Y=\Delta G_1,Z=\Delta G_2) = 0.56$) indeed constitutes a major step towards this optimum limit. \subsection{Going beyond the present season} Naturally, the prediction quality can be further improved by incorporating information from the previous season about the team strength. This additional variable is denoted as $X$. Here we consider the chances for goals of the previous season, which we denote $ X=\Delta C_0$. One obtains $ corr(\Delta C_0, \Delta G_2) = 0.56$. In principle one can again analyse the systematic and random contributions of $\Delta C_0(N_0)$. The corresponding $N_0$-dependent variance reads (see Eq.\ref{VC1}) \begin{equation} Var(\Delta C_0(N_0)) = Var(S_0) + \frac{V_0}{N_0} \end{equation} with $Var(S_0) = 2.32$ and $V_0 = 14.1$. For reasons of comparison all relevant statistical parameters are summarized in Tab.\ref{tab1}. \begin{table} \centering \begin{tabular}[t]{|c|c|c|}\hline & $Var(S_i)$ & $V_i$ \\ \hline $i=0: \Delta C_0$ & 2.32 & 14.1\\ \hline $i=1: \Delta C_1$ & 2.66 & 14.2 \\\hline $i=2: \Delta G_2$ & $0.21$ & 2.95\\ \hline \end{tabular} \caption{ The different systematic and random contributions of the observables relevant for this work.} \label{tab1} \end{table} Of course, both values are close to $Var(S_1)$ and $V_1$. The small differences express the fact that the statistical properties of the first and the second half of the season are slightly different \cite{heuer_buch}. Using the same reasoning as in the context of Eq.\ref{corryz} one finally obtains $corr({S}_0,S_2)=0.88$ and $corr({S}_0,S_1)=0.86$. Both values are identical within statistical errors.
This is compatible with the observation that the team strength does not vary within a season. The fact that both values are significantly smaller than unity shows, however, that there is a small but significant variation of the team strength between two seasons. For future purposes we use the average value of $corr({S}_0,S_{1,2})=0.87$ to characterize the correlation of the team strength between two seasons. \section{Quality of the regression procedure} \subsection{General information content} \begin{figure}[tb] \centering\includegraphics[width=0.9\columnwidth]{fig3} \caption{Schematic representation of the general prediction setup. } \label{fig3} \end{figure} For small $N_1$, i.e. at the beginning of the season, the information content about the strength of a team is quite limited. Therefore it is essential to also incorporate team information which is already available at the beginning of the tournament, i.e. which reflects the strength of the team in the past season. Thus, before the first match the prediction is fully based on $X$; with an increasing number of matches the variable $Y$ contains more and more information about the present team strength and thus gains a stronger statistical weight in the inference process. This setup is sketched in Fig.\ref{fig3}. As discussed above we choose for $X$ the chances for goals of the previous season. The general relations, however, also hold beyond this specific choice. Interestingly, the quality of the multivariate prediction can be expressed in analogy to Eq.\ref{chi2} and reads \begin{equation} \label{chi2new} \chi^2(X,Y) = \chi^2(Y) \left [1 - [corr(X-Y,Z-Y)]^2 \right ] \end{equation} where the partial correlation coefficient \begin{equation} corr(X-Y,Z-Y) = \frac{corr(X,Z) - corr(X,Y)corr(Y,Z)}{\sqrt{1 - corr(X,Y)^2}\sqrt{1-corr(Y,Z)^2}} \end{equation} has been used. $\chi^2(Y)$ has already been defined in Eq.\ref{chi2}.
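Eq.\ref{chi2} and Eq.\ref{chi2new} are exact identities for sample moments and can be checked on synthetic data. The following sketch uses an arbitrary covariance structure, chosen only to mimic a persistent team strength; it is an illustration, not an analysis of the actual Bundesliga data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# latent "team strength" plus noise, mimicking X (last season), Y (first half)
# and Z (second half); the noise scales are arbitrary illustration values
S = rng.normal(size=n)
X = 0.9 * S + rng.normal(scale=0.8, size=n)
Y = S + rng.normal(scale=0.7, size=n)
Z = S + rng.normal(scale=1.0, size=n)
X, Y, Z = (v - v.mean() for v in (X, Y, Z))   # first moments strictly zero

def corr(a, b):
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# single-variable regression Z = bY: residual variance equals Var(Z)[1 - corr^2]
b = (Y @ Z) / (Y @ Y)
chi2_Y = np.mean((Z - b * Y) ** 2)
assert np.isclose(chi2_Y, np.mean(Z**2) * (1 - corr(Y, Z) ** 2))

# two-variable regression Z = aX + bY: residual variance vs. the partial-corr form
A = np.column_stack([X, Y])
coef, *_ = np.linalg.lstsq(A, Z, rcond=None)
chi2_XY = np.mean((Z - A @ coef) ** 2)
pc = (corr(X, Z) - corr(X, Y) * corr(Y, Z)) / np.sqrt(
    (1 - corr(X, Y) ** 2) * (1 - corr(Y, Z) ** 2))
assert np.isclose(chi2_XY, chi2_Y * (1 - pc ** 2))
print(round(chi2_Y, 3), round(chi2_XY, 3))
```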
The second factor on the right-hand side of Eq.\ref{chi2new} explicitly contains the additional information of the variable $X$ as compared to $Y$. One can easily show that, in agreement with expectation, Eq.\ref{chi2new} is completely symmetric in $X$ and $Y$. Since Eq.\ref{chi2new} is non-standard it is explicitly derived in Appendix II via some general arguments. \subsection{Estimation of the team strength} So far, we have identified $Z$ with the goal difference in the second half of the season, which is composed of $S_2$ and the non-predictable random effects as expressed by $Var(\Delta G_2) = Var(S_2) + V_2/17$. Now we define \begin{equation} \label{chipract} \tilde{\chi}^2(X,Y) = {\chi}^2(X,Y) - V_2/17. \end{equation} \begin{figure}[tb] \centering\includegraphics[width=0.9\columnwidth]{fig4} \caption{The prediction quality of the team strength, determined via $\sqrt{\tilde{\chi}^2(X,Y)}$, is shown as a function of the number of match days $N_1$. Different choices of variables are shown. The solid lines are based on the explicit formulas for the prediction quality.}\label{fig4} \end{figure} This can be interpreted as the variance of the statistical error in the prediction of the individual team strengths. In case of a perfect estimation of the team strengths one would have $\tilde{\chi}^2(X,Y)=0$. Mathematically this result can be derived by choosing $Z=S_2$ rather than $Z=\Delta G_2$ in Eq.\ref{chi2new}. After some straightforward algebraic manipulations of Eq.\ref{chi2new} one directly obtains $\tilde{\chi}^2(X,Y)$. \section{Results} \subsection{Numerical results} For each value of $N_1$ we have performed a multivariate regression analysis, yielding $\chi^2(X,Y)$, and finally subtracted $V_2/17$. As before we have chosen several subsets of $N_1$ matches from the first half of the season to decrease the statistical error. Now we proceed in two steps. First, we neglect the contribution of $X$, i.e. the information from the previous season.
The results are shown in Fig.\ref{fig4}. One can see that (trivially) for $N_1=0$ the standard deviation in the estimation of the team strength is identical to the standard deviation of the $S_2$-distribution. The longer the season, the more information is available to distinguish between stronger and weaker teams. Using the information of the complete first half of the season ($N_1=17$) the statistical uncertainty decreases to 0.22. Here one can explicitly see the advantage of using the chances for goals rather than the goals themselves. Repeating the same analysis with the number of goals one would have an uncertainty of 0.30 after $N_1=17$ matches, which is significantly higher than the value of 0.22 reported above. Second, when additionally incorporating the information from $X$, the statistical uncertainty is already quite small at the beginning of the season (0.3). Of course, during the course of the season it becomes even smaller. Even after 17 matches the additional gain of using $X$ is significant (0.22 vs. 0.19). \subsection{Analytical results} $\tilde{\chi}^2(X,Y)$ can also be calculated analytically by incorporating the statistical properties of the variables $X,Y$, and $Z$. For future purposes we abbreviate $d=V_1/Var({S}_1)$. First, we have (using $corr(S_1,S_2)=1$) \begin{equation} corr(Y=\Delta C_1(N_1),S_2) = \frac{1}{\sqrt{1+d/N_1}}. \end{equation} Furthermore, we express $corr(X,S_2)$ as \begin{equation} corr(X=\Delta C_0,S_2) = \frac{corr(S_0,S_{1,2})}{\sqrt{1+V_0/(17 Var(S_0))}} \equiv c. \end{equation} In analogy one obtains \begin{equation} corr(X=\Delta C_0,Y=\Delta C_1(N_1)) = \frac{c}{\sqrt{1+d/N_1}}. \end{equation} In summary, all information is contained in the two constants $c$ and $d$. A straightforward calculation yields $corr(X-Y,Z-Y) = c \sqrt{1-1/(1+d/N)}/\sqrt{1-c^2/(1+d/N)}$.
Finally, one ends up with \begin{equation} \label{chitheo} \tilde{\chi}^2(X,Y) = Var(S_2)\frac{\left ( 1-\frac{1}{1+d/N}\right )\left ( 1-c^2 \right )}{\left ( 1-\frac{c^2}{1+d/N}\right )}. \end{equation} Now we can compare the actual uncertainty, as already shown in Fig.\ref{fig4}, with the theoretical expectation, as expressed by the analytical result Eq.\ref{chitheo}. The results are included in Fig.\ref{fig4}. To reproduce the case without the variable $X$ one can simply choose $c=0$. One can see a very close agreement with the actual data. Is this good agreement to be expected? Actually, our analysis contains just two approximations. First, we have chosen $corr(S_0,S_1) = corr(S_0,S_2)$, which indeed holds very well (see above). Second, we have assumed that the team strength does not vary during the first half of the season. As shown in Ref.\cite{heuer_buch} the team strength fluctuates with a small amplitude of approx. $A=0.17$ and with a decorrelation time of approx. 7 matches. Since we average over different choices of $N_1$ matches and, furthermore, restrict ourselves to the prediction of the total second half, these temporal fluctuations are to a large extent averaged out. \section{Discussion} The main goal was (i) to analyse the information content of different observables and (ii) to better understand the limits of the prediction of soccer matches. The prediction quality could be captured by the two parameters $c$ and $d$. One can easily see that the theoretical expression for the prediction quality, Eq.\ref{chitheo}, approaches the limit of perfect prediction in two cases: (i) for $c=1$ or $d=0$ the information from the previous or the present season, respectively, perfectly reflects the present team strength; (ii) for $ N_1 \rightarrow \infty $ all random effects have averaged out so that only the systematic effects remain. This result can be easily generalized.
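As a numerical cross-check (not part of the original analysis), Eq.\ref{chitheo} can be evaluated directly with the parameters of Tab.\ref{tab1} and $corr(S_0,S_{1,2})=0.87$; it reproduces the uncertainties quoted above:

```python
import math

# league parameters estimated in the text (Tab. 1 and Sec. II)
var_S0, V0 = 2.32, 14.1
var_S1, V1 = 2.66, 14.2
var_S2, V2 = 0.21, 2.95
c = 0.87 / math.sqrt(1 + V0 / (17 * var_S0))   # information from the previous season
d = V1 / var_S1                                 # noise-to-signal ratio of Delta C_1

def chi2_tilde(N, c, d):
    """Closed-form residual variance of the estimated team strength."""
    w = 1 / (1 + d / N) if N > 0 else 0.0
    return var_S2 * (1 - w) * (1 - c * c) / (1 - c * c * w)

print(round(math.sqrt(chi2_tilde(0, c, d)), 2))   # 0.3  (start of season, X only)
print(round(math.sqrt(chi2_tilde(17, c, d)), 2))  # 0.2  (X and Y; cf. 0.19 in Fig. 4)
print(round(math.sqrt(chi2_tilde(17, 0, d)), 2))  # 0.22 (Y only, i.e. c = 0)
# consistency check: corr(Delta C_1, Delta G_2) for N_1 = 17
print(round(1 / (math.sqrt(1 + d / 17) * math.sqrt(1 + V2 / (17 * var_S2))), 2))  # 0.65
```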
For example, one can show for the German Bundesliga that the market value, determined before the season, is highly informative about the expected outcome. Taking an appropriately chosen linear combination of different observables one may slightly increase the value of $c$ while keeping the general structure of Eq.\ref{chitheo} unchanged. The same analysis could also have been performed for points rather than goal differences. Both observables are linearly correlated via the simple relation $ P_2 = 0.61 S_2 + 23$. In analogy to $S_2$, the value of $P_2$ denotes the expected number of points which a team gains during a half-season when playing against average teams of the league. Thus, an average team ($S_2=0$) on average gains 23 points per half-season. \begin{figure}[tb] \centering\includegraphics[width=0.9\columnwidth]{fig5} \caption{The uncertainty of the prediction of the goal difference of the second half when using the complete information of the first half ($N_1 = 17$). Different choices of variables are shown. Furthermore, the limit of perfect predictability is indicated. } \label{fig5} \end{figure} One interesting question arises: is the residual statistical error of $S_2$ for $N_1=17$ small or large? This question may be discussed in two different scenarios. First, one may want to predict the outcome of the second half of the league. In the present context the uncertainty is given by $17\sqrt{\chi^2(X,Y)} = 17\sqrt{\tilde{\chi}^2(X,Y)+V_2/17}$. These values are plotted for different prediction scenarios in Fig.\ref{fig5}. One can see how the additional information decreases the uncertainty of the prediction. Most importantly, the {\it no man's land} below an uncertainty of $\sqrt{17 V_2}=7.1$ cannot be reached by any type of prediction. The art of approaching the optimum prediction thus amounts to decreasing the present value of 7.8 towards 7.1. Second, one may be interested in the prediction of a single match.
This case is somewhat different. Since the team fluctuations are very difficult to predict, the fluctuation amplitude $A=0.17$ \cite{heuer_buch} serves as a scale for estimating the quality of prediction. If the uncertainty is much smaller than $A$ any further improvement would not help. In the present case the statistical error is close to $A$ so that a further reduction of $\tilde{\chi}^2(X,Y)$ would still be relevant for prediction purposes. Note that the chances for goals are not a completely objective observable because finally also the subjective judgement of a sports journalist may influence the estimates. In this sense the high information content of chances for goals indicates that the subjective component is quite small and the general definition is very reasonable. Of course, in the future one may look for strictly objective match observables taken by companies such as Opta and Impire to further improve the information content. We gratefully acknowledge helpful discussions with D. Riedl, B. Strauss, and J. Smiatek. \section{Appendix I} Here we consider a simple example of a fictive coin-tossing tournament where the head appears with probability $p$, which in this simple example is given by $1/2$. A team is allowed to toss the coin $M$ times per round. In the first round this results in tossing the head $g_{1}$ times. Thus, in the first round one has observed the number of tosses $M$ as well as the number of heads $g_{1}$. In relation to soccer, $M$ would correspond to the number of chances for goals and $g_1$ to the number of goals in that match. In order to keep the argument simple we assume that $M$ is a constant, whereas in a real soccer match $M$ can naturally vary. How can one predict the expected number of goals $g_2$ in the next round? Here we consider two different approaches. (1) The prediction is based on the achievement of the first round, i.e. on the value of $g_1$. Then the best prediction is $g_2 = g_1$. 
The variance of the statistical error of the prediction can be simply written as $\sum_{g_1,g_2}p(g_1)p(g_2)(g_1-g_2)^2$ where $p(g)$ is the binomial distribution. A straightforward calculation yields for this variance a value of $2Mp(1-p)$. (2) The prediction is based on the knowledge of tossing attempts. The optimum prediction is, of course, $pM$. The variance of the statistical error is given by the binomial distribution, i.e. by $Mp(1-p)$. Stated differently, knowing the number of attempts to reach a specific goal (here tossing a head) is more informative than the actual number of successful outcomes as long as the probability $p$ is well known. \section{Appendix II} Here we show a simple derivation of the chosen form of $\chi^2(X,Y)$. Let $d_{YZ}$ denote the solution of the regression problem $Z = dY$. Accordingly, $d_{YX}$ is the solution of the regression problem $X = dY$. In the next step one defines the new variables $\tilde{Z} = Z - d_{YZ}Y$ and $\tilde{X} = X - d_{YX}Y$. For these new variables the correlation with $Y$ is explicitly taken out. A straightforward calculation shows that the Pearson correlation coefficient $corr(\tilde{X},\tilde{Z})$ is exactly given by the partial correlation coefficient $corr(X-Y,Z-Y)$. Now we consider the regression problem of interest $Z = aX + bY$. In a first step it is formally rewritten as \begin{equation} Z - d_{YZ} Y= a (X - d_{YX}Y) + (b - d_{YZ} + a d_{YX})Y. \end{equation} Using the above notation and introducing the new regression parameter $\tilde{b}$ we abbreviate this relation via \begin{equation} \tilde{Z} = a \tilde{X} + \tilde{b} Y. \end{equation} By construction the observable $Y$ is uncorrelated to $\tilde{X}$ and $\tilde{Z}$. Therefore the independent variable $Y$ does not play any role for the prediction of $\tilde{Z}$ so that effectively one just has a single-variable regression problem. 
Therefore one can immediately write \begin{equation} \chi^2(X,Y) = Var(\tilde{Z}) \left [ 1 - [corr(\tilde{X},\tilde{Z})]^2\right]. \end{equation} The first factor is identical to $\chi^2(Y)$ whereas the Pearson correlation coefficient in the second factor is identical to $corr(X-Y,Z-Y)$. This concludes the derivation of $\chi^2(X,Y)$.
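The two variances derived in Appendix I, $2Mp(1-p)$ for the prediction $g_2=g_1$ and $Mp(1-p)$ for the prediction $pM$, can be verified with a short Monte Carlo; a sketch (the sample size and seed are arbitrary choices of ours):

```python
import random

def coin_variances(M=20, p=0.5, trials=100_000, seed=1):
    """Empirical error variances of the two predictors from Appendix I."""
    rng = random.Random(seed)
    toss_round = lambda: sum(rng.random() < p for _ in range(M))
    sq_err_g1 = sq_err_pm = 0.0
    for _ in range(trials):
        g1, g2 = toss_round(), toss_round()
        sq_err_g1 += (g2 - g1) ** 2      # approach (1): predict g2 = g1
        sq_err_pm += (g2 - p * M) ** 2   # approach (2): predict g2 = p*M
    return sq_err_g1 / trials, sq_err_pm / trials

v1, v2 = coin_variances()
print(v1, v2)  # close to 2*M*p*(1-p) = 10 and M*p*(1-p) = 5
```

As expected, knowing the number of attempts halves the prediction variance relative to using the observed first-round outcome.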
\section{Introduction} We study the design of tournaments, which is scheduling the pairwise comparisons between participating competitors (players) \cite{sziklai2021efficacy}. There are varying formulations for such designs. While, in the literatures of sports, economics and management sciences, the formulation includes the distribution of some effort (intensity) for the sake of achieving some goal \cite{lazear1981rank,rosen1986prizes,taylor1995digging,prendergast1999provision,szymanski2003economic,orrison2004multiperson,brown2014selecting,bimpikis2019designing}; we consider the effort to be constant across the competition \cite{sziklai2021efficacy}. This formulation assumes that the competitors perform at their peak in all matches, which manifests itself in a lot of situations such as high stakes competitions, e.g., elections \cite{klumpp2006primaries}, musical competitions \cite{ginsburgh2003expert}, contests for crowdsourcing \cite{hou2021optimal}, sports tournaments \cite{palacios2009field}, innovation contests \cite{harris1987racing,yucesan2013efficient,ales2017optimal}. There exist two traditional tournament formats: knockout tournaments and group tournaments. \begin{itemize} \item In the knockout tournaments, the most fundamental one is the single elimination tournament. Here, after a round of match-ups between the contestants, the losers are eliminated while the winners continue to compete. This tournament style has been extensively studied, especially in the fields of economics and statistics \cite{hartigan1968inference,israel1981stronger,hwang1982new,horen1985comparing,knuth1987random,chen1988stronger,edwards1998non,schwenk2000correct,glickman2008bayesian,vu2011fair,groh2012optimal,prince2013designing,krakel2014optimal,hennessy2016bayesian,karpov2016new,adler2017random,dagaev2018competitive,karpov2018generalized,arlegi2020fair,kulhanek2020surprises,arlegi2022can}. \item In the group tournaments, the most fundamental one is the round-robin tournament. 
Here, all players compete against all other players and they are ranked based on their cumulative match results according to some rules \cite{harary1966theory,rubinstein1980ranking}. \end{itemize} There also exist multi-stage tournaments which are combinations of knockout and group tournaments. The tournament design problem is in a way a sequential learning problem \cite{gokcesu2017online,gokcesu2018sequential}, where sequential matches determine the rankings. When choosing a tournament format, various factors come into play. The most prominent ones are: \begin{enumerate} \item fairness \cite{cea2020analytics,goossens2012soccer,guyon2015rethinking,guyon2018fairer,guyon2020risk,kendall2010scheduling,laliena2019fair,van2020handling,wright2014or} \item promoting competitiveness \cite{chater2021fixing} \item incentive compatibility \cite{csato2020incentive,csato2021tournament,csato2022quantifying,dagaev2018winning,pauly2014can,preston2003cheating,vong2017strategic} \item maximizing attendance \cite{krumer2020testing} \item minimizing rest mismatches \cite{atan2018minimization} \item revenue allocation \cite{bergantinos2020sharing,petroczy2021revenue}. \end{enumerate} The tournament design problem has a very complex structure with a multitude of factors to consider. Nonetheless, most of these factors have a common denominator, which is the fair ranking of the players. Strong contending players should continue to play and weak inconsequential players should not. For example, in multi-stage tournaments, the players should be allocated (set partition \cite{gokcesu2021efficient,gokcesu2021quadratic,gokcesu2022linearithmic}) to groups in a fair manner. Moreover, a tournament's efficiency, i.e., how well it can rank the participating players according to their strength, is very important (if not the most important). Note that the player strengths are hidden and the tournament needs to create this ranking from the pair-wise match results as accurately as possible. 
Therefore, one can formulate tournaments as online learning problems \cite{gokcesu2017online,gokcesu2018sequential}, since sequential matches decide the relevant rankings. The match results are noisy in the sense that the stronger player may not always be the winner (the underdog can win). However, the winning probability of the stronger player monotonically increases \cite{gokcesu2021optimally,gokcesu2021anytime} with the strength difference between the matched-up players. Hence, the question becomes how to design the relevant loss functions \cite{gokcesu2021generalized,gokcesu2022nonconvex}. Such a loss function will probably have a very convoluted structure and be potentially non-convex (hard to solve) \cite{gokcesu2021regret,gokcesu2022low}. Since it has a very convoluted structure, nonparametric analysis \cite{gokcesu2022natural,gokcesu2021nonparametric} is desirable. Because of these complexities, tournament design is a relatively hard problem. If we approach this problem from a probabilistic perspective, it is possible to derive the probabilities of each player finishing in each rank for a small set of players based on some match result heuristics \cite{david1959tournaments,glenn1960comparison,searls1963probability}. However, this becomes increasingly difficult as the number of players and the number of matches increase. If the tournament also has a convoluted structure, this method becomes intractable \cite{sziklai2021efficacy}. For this reason, the standard approach is using Monte Carlo simulations. The work in \cite{appleton1995may} determines the likelihood of the best player winning the tournament in a variety of competitions. The work in \cite{mcgarry1997efficacy} shows the efficiencies in ranking for traditional tournaments for eight players in a variety of initial conditions. 
The work in \cite{mendoncca2000comparing} studies varying ranking techniques for round-robin tournaments in their efficiency to create rankings equivalent to the strength of the players. The work in \cite{marchand2002comparison} calculates the likelihood of a player in the top seed winning in standard and random knockout tournaments and determines that, unexpectedly, their outcomes are not much different from each other. The authors in \cite{ryvkin2008predictive} proclaim that the efficiency of determining the strongest player in the knockout and round-robin tournaments is non-monotonic with the number of players for heavy-tailed strength distributions among the players. In \cite{ryvkin2010selection}, the author studies alternative measures of selection efficiency. Sometimes, we may not need the whole ranking; what is most important is who becomes the champion. Here, the tournament efficiency is determined by how likely the strongest player wins the tournament, which is especially true for sports tournaments. However, even such a simpler efficiency metric requires extensive assumptions to simulate and analyze. The work in \cite{scarf2009numerical} gives a numerical study of sports tournament designs, including a comprehensive list of formats in practice, a simulation framework and their evaluation metrics. The work in \cite{scarf2011numerical} continues this numerical analysis and studies the influence of the seeding policy on the tournament outcomes for the World Cup Finals. The work in \cite{goossens2012comparing} compares different league formats considered by the Belgian Football Association with respect to the match importance. The work in \cite{annis2006comparison} compares potential playoff systems for NCAA IA football, where the systems differ in their number of playoff teams, their selection and seeding. 
The work in \cite{lasek2018efficacy} studies the efficacy of league formats in their ability to rank teams in European association football competitions. The work in \cite{csato2021simulation} compares different tournament designs for the World Men's Handball Championships. We may also have some past performances as an initial estimate for the rankings and the tournament can adopt this as a seeding rule. A better seeding of the players in the tournament may improve its efficiency; however, it is not easy to do so. After all, if we have a really good estimate for the power rankings, we may not even need a tournament in the first place. Interestingly, real data suggests that the performance in tournaments can be very different from any existing past performance, which makes seeding a somewhat marginal component in improving the efficiency of the tournaments \cite{sziklai2021efficacy}. This claim is supported by the study in the UEFA Europa League and the UEFA Champions League \cite{engist2021effect}, which shows that the seeding by itself is insufficient in its contribution to the success of better teams. Nevertheless, in all analyses, more matches result in better estimations of the power rankings \cite{lasek2018efficacy,csato2021simulation}, which is statistically intuitive. However, it becomes a concern especially if the number of participating players is too large. Furthermore, more general, flexible tournament designs are needed since the number of participants may substantially differ across competitions. The works in \cite{appleton1995may,mcgarry1997efficacy} have studied tournament designs with $8$ or $16$ players. The works in \cite{sziklai2021efficacy,scarf2009numerical} study tournaments with $32$ competitors. Although they study various tournament structures, the limitations on the number of participants are problematic. 
All in all, we tackle the tournament design problem from the perspective of how to best design a tournament given a limited number of matches. In this perspective, the knockout tournaments and round-robin tournaments are the two ends of the spectrum. It seems redundant to match up the same players against each other; hence, it is better to design different, more informative, match-ups as much as possible \cite{sziklai2021efficacy}. However, we also cannot rank the players without at least $\log_2 N$ matches since the match results are binary and the rank can be represented as a binary sequence of length at least $\log_2 N$. To this end, we aim to naturally combine the knockout and group settings to create a flexible near-linear elimination tournament. \section{Linear Elimination Tournament Design} In a seeded tournament let $n\in\{1,2,\ldots,N\}$, where $N$ is the number of participants, be the indices of different players (or teams). \begin{assumption} The player identified by $n$ is ranked $r_n=n$ at the beginning. The players are seeded such that the highest ranked player is $n=1$ and the lowest ranked player is $n=N$ based on some metric (such as past performance). \end{assumption} In each round of the tournament, we have the following successive stages: \begin{enumerate} \item Determination of match-ups \begin{itemize} \item At the start of every round, match-ups between players are decided. \begin{assumption} $N$ is even to ensure every player participates in a match, i.e., to minimize rest mismatches. \end{assumption} \begin{assumption} We disregard any unfair bias such as home court advantage. \end{assumption} \item After the matches, we have the results of which player won and lost. \begin{assumption} We do not consider ties in the matches, i.e., there should always be a winner and a loser. \end{assumption} \end{itemize} \item Re-ranking of the players \begin{itemize} \item Using these results, the players are re-ranked accordingly. 
\end{itemize} \item Elimination of players \begin{itemize} \item After the new rankings, some players are eliminated from the tournament. \begin{assumption} An even number of players is eliminated to ensure that every player can be matched-up in the successive rounds. \end{assumption} \end{itemize} \end{enumerate} \begin{remark} In both knockout and round-robin tournaments a similar procedure is followed in general. After the last round the last standing player or the first ranked player is crowned the champion. \end{remark} If all players that lose each round are eliminated, the tournament can end at best in $\log_2 N$ rounds. The question is how the tournament should be designed given the number of contestants $N$ and the number of matches $M$. While designing such a tournament, we have to take note of a few important factors as described in the introduction. \begin{itemize} \item We need to ensure that both the re-ranking and the elimination stages are fair. \item For incentive compatibility and to promote competitiveness, we need to reward the winners and penalize the losers. \item To maximize attendance, we need to create high-stakes matches as much as possible. Moreover, the tournament should gradually match up stronger players with each other. \item For tournament efficiency, weaker players need to be eliminated and stronger players should continue the tournament. \end{itemize} \subsection{Determination of Match-ups} For the determination of the match-ups, we have the following notion. \begin{notion} Stronger players should have higher chances of winning the tournament. \end{notion} This notion is traditionally followed in all kinds of tournament or competition designs, where the goal is to maximize the chances of the strongest player winning the tournament. 
To this end, in our design, the tournament follows a snake matching system in the sense that the first ranked plays the last ranked and the second ranked plays the second to last ranked etc., i.e., if players $i$ and $j$ match up, we have \begin{align} r_i+r_j=N+1. \end{align} Since the rankings and the total number of remaining players change over time, we represent them as time dependent, i.e., after any $t^{th}$ match-ups, we have \begin{align} N_t,& &&\forall t,\\ r_{t,n},& &&\forall n,t, \end{align} and if $i$ and $j$ match-up at $t^{th}$ round, we have \begin{align} r_{t-1,i}+r_{t-1,j}=N_{t-1}+1. \end{align} Note that, before any match-up, i.e., $t=0$, we have $N_0=N$ and $r_{0,n}=n$. The tournament ends when $N_{\tau}=2$ for some $\tau$ and the winner is crowned the champion. \subsection{Fair Re-ranking} Note that, at each round $t$, the given rankings are our best estimates so far. To determine the re-ranking rules, we first make the following notion. \begin{notion}\label{no:nonincdec} The ranking of a loser and a winner after a match-up should be nonincreasing and nondecreasing respectively. \end{notion} This notion is observed in all kinds of tournament designs, where the winners are rewarded and the losers are penalized. Moreover, our only observations in each round are the results of the determined matches. To this end, we make the following notion. \begin{notion}\label{no:order} The ranks of losers and winners after a match-up round should be ordered as before the match-up round individually. \end{notion} Moreover, the change in rankings, i.e., rank increase for the winners and the rank decrease for the losers should not be biased, hence, fair. To ensure this, we make the following notion. \begin{notion}\label{no:together} If two consecutively ranked players both won (or lost), their increase (or decrease) in the rankings should be equal. 
\end{notion} \subsection{Formulating Re-ranking as an Optimization Problem} We can think of the re-ranking as an optimization problem, where we want to re-rank the players as much as possible whilst conforming to the notions. To construct the optimization problem, we need to define a gain or loss function. Let us have a binary vector of results $\boldsymbol{b}=\{b_n\}_{n=1}^N$, where $b_n=1$ if the $n^{th}$ player won or $b_n=0$ if that player lost. Observe that we have $b_n\in\{0,1\}, \forall n$ and $b_n+b_{N+1-n}=1$ because of the match-ups. \begin{remark} Note that if the players were perfectly ranked and the matches resulted according to the expectations, we would have $b_n=1, n\in\{1,2,\ldots,N/2\}$ and $b_n=0, n\in\{N/2+1,N/2+2,\ldots,N\}$. \end{remark} Let $\boldsymbol{r}=\{r_n\}_{n=1}^N$ be the new ranks of $n$-ranked players. Since we want to re-rank them as much as possible with the new information (the match results), we can define the objective as \begin{align} \max_{\boldsymbol{r}}\sum_{n=1}^N\abs{r_n-n},\label{eq:obj} \end{align} given $\boldsymbol{b}$. However, this optimization problem by itself purely exploits the results of the matches, as can be observed in the following. \begin{proposition} The global maximizer of the objective function in \eqref{eq:obj} (that conforms to the notions given $\boldsymbol{b}$) is given by the new ranks $r_n$, which are the highest ranked winner to the lowest ranked winner, then the highest ranked loser to the lowest ranked loser. \begin{proof} First of all, this solution conforms to the notions. Moreover, the new ranks for the winners and the losers are upper and lower bounds respectively from \autoref{no:order}. Hence, this ranking is a global maximizer from \autoref{no:nonincdec}, which concludes the proof. \end{proof} \end{proposition} However, such a ranking mostly disregards the initial ranking. Since the initial rankings give us some form of estimate for the true power rankings, we should not disregard them. 
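The maximizer described in the proposition is simple to state in code: list the winners in their prior order, then the losers in theirs (a minimal sketch; the function name is ours):

```python
def naive_rerank(order, results):
    """Global maximizer of the unconstrained objective:
    all winners (in prior order) ahead of all losers."""
    winners = [p for p, res in zip(order, results) if res == 'W']
    losers = [p for p, res in zip(order, results) if res == 'L']
    return winners + losers

# four players under the snake match-ups 1v4 and 2v3
print(naive_rerank([1, 2, 3, 4], ['W', 'L', 'W', 'L']))  # [1, 3, 2, 4]
```

As noted in the text, this fully discards the prior ordering between winners and losers, which motivates the constrained formulation that follows.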
We observe that the same global maximizer solution is a global minimizer for the following optimization problem. We can consider the number of changes in the vector $\boldsymbol{b}$ as a loss (a measure of how well we predicted the rankings), i.e., \begin{align} \min_{\boldsymbol{r}}\sum_{n=1}^{N+1}\abs{\tilde{b}_n-\tilde{b}_{n-1}},\label{eq:objB} \end{align} where $\tilde{b}_{r_n}=b_n$ for a given $\boldsymbol{b}$; $\tilde{b}_{0}=1$, $\tilde{b}_{N+1}=0$ are dummy results. The global maximizer of \eqref{eq:obj} is a global minimizer of \eqref{eq:objB}, where the minimum value is $1$. We observe that to increase the objective in \eqref{eq:obj}, while abiding by the notions, we need to simultaneously decrease the objective in \eqref{eq:objB}. To guarantee exploration whilst utilizing the match results (exploitation), we can put a constraint on the decrease of \eqref{eq:objB}. To this end, to limit the change in the rankings between our old and new estimates, we regularize the re-ranking objective function such that we want to maximize one of them whilst constrained by the other. Although limiting the number of changes in the new ranks and minimizing the path change of $\boldsymbol{b}$ can also be considered, putting a constraint on the path change of $\boldsymbol{b}$ and maximizing the new rank changes is more natural since it is the initial objective. Moreover, we observe that while abiding by the notions, the value of the objective in \eqref{eq:objB} can only be odd. Thus, one such constraint is limiting the path change of $\tilde{\boldsymbol{b}}$ to two less than the original path change of ${\boldsymbol{b}}$. Hence, unless the path change of $\boldsymbol{b}$ is $1$, the optimization problem becomes \begin{align} \max_{\boldsymbol{r}}\sum_{n=1}^N\abs{r_n-n}:&&\sum_{n=1}^{N+1}\abs{\tilde{b}_n-\tilde{b}_{n-1}}=\sum_{n=1}^{N+1}\abs{b_n-b_{n-1}}-2. 
\end{align} where $\tilde{b}_{r_n}=b_n$ and $\tilde{b}_{0}=1$, $\tilde{b}_{N+1}=0$, ${b}_{0}=1$, ${b}_{N+1}=0$ are dummy results. \begin{theorem} After a match-up, switching the places of consecutive losers with winners is a solution to the optimization problem. \begin{proof} To satisfy \autoref{no:together}, we observe that consecutively ranked teams with the same match results cannot be separated. Hence, we can only move them together in the rankings. To satisfy \autoref{no:nonincdec}, winners either stay in the same ranks or move up (the converse for the losers). Thus, our only move is swapping the places of consecutive winners with the immediately higher-ranked losers. Just one swap between any such groups will decrease the path change by $2$. However, this may not be the optimal solution. We observe that simultaneous swapping of all such groups also decreases the path change by $2$. Thus, we swap all such groups with each other to maximize the total number of rank changes, which concludes the proof. \end{proof} \end{theorem} This re-ranking is intuitive in the sense that each player rises or falls in the rankings by utilizing exclusively the limited information about the match results. \begin{remark} After a match-up between players $i$ and $j$, the winner climbs up the rankings as much as the loser climbs down. Hence, each match-up satisfies a zero-sum game in the sense that they are equally rewarded or penalized in their cumulative rank changes. \end{remark} \begin{example} For example, let the players be ranked $$\boldsymbol{r}^{old}=\{1,2,3,4,5,6,7,8,9,10,11,12,13,14\}$$ at some round $t$ for $N=14$. Let these players have the match-up results $$W,L,L,W,W,W,L,W,L,L,L,W,W,L$$ where the binary values $1$ and $0$ are replaced by $W$ (win) and $L$ (loss) for ease of understanding. Note that this sequence is odd symmetric around its middle because of the match-up rule. 
Our re-ranking results in the following new rankings $$\boldsymbol{r}^{new}=\{1,4,5,6,2,3,8,7,12,13,9,10,11,14\}.$$ \end{example} \begin{remark} For higher exploitation, the re-ranking can be considered in a modular manner and applied multiple times, where we decrease the value of the objective in \eqref{eq:objB} by $2$ each time. \end{remark} \subsection{Elimination of Players} For the player eliminations, to avoid resting periods for players, we have the following notion. \begin{notion} Every player should match-up every round. \end{notion} To this end, to preserve valid match-ups for all players, the number of eliminated players should be even, i.e., \begin{align} N_{t-1}-N_t=0\pmod{2}. \end{align} Moreover, we make the following notion to keep the match-ups and the tournament meaningful. \begin{notion} Every player still in the running should have a chance to win it all. \end{notion} For this reason, we first postulate that the eliminated players can only be from the set of players who lost in their last round. For the elimination part, we eliminate the lowest ranking set of players that lost their last match. Given $N$ players and $M$ competition rounds, we determine the eliminated number of players as follows. Since our design, in a sense, combines knockout and round-robin tournaments, we distribute the eliminations approximately linearly. In knockout tournaments, the players are eliminated in a multiplicative fashion, while in the round-robin no players are eliminated. Therefore, the linear elimination is somewhere in between. Since we only eliminate the losers, we need to satisfy \begin{align} N_{t}\leq 2N_{t+1}, &&\forall t. \end{align} Thus, we linearly distribute the eliminated players across different rounds as much as possible whilst satisfying the constraints. 
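Before formalizing the elimination counts, note that the swap rule from the theorem admits a compact implementation; the sketch below (function names are ours) also generates the snake match-ups and reproduces $\boldsymbol{r}^{new}$ on the example above:

```python
def snake_pairs(order):
    """Match the k-th ranked player with the (N+1-k)-th ranked one."""
    n = len(order)
    return [(order[k], order[n - 1 - k]) for k in range(n // 2)]

def rerank(order, results):
    """Swap each maximal block of consecutive losers with the
    winner block ranked immediately below it."""
    n, out, i = len(order), [], 0
    while i < n:
        if results[i] == 'W':
            out.append(order[i])
            i += 1
            continue
        j = i
        while j < n and results[j] == 'L':  # maximal loser block
            j += 1
        k = j
        while k < n and results[k] == 'W':  # following winner block
            k += 1
        out.extend(order[j:k])  # winners move up together
        out.extend(order[i:j])  # losers move down together
        i = k
    return out

print(rerank(list(range(1, 15)), list("WLLWWWLWLLLWWL")))
# [1, 4, 5, 6, 2, 3, 8, 7, 12, 13, 9, 10, 11, 14]
```

Each swap moves a winner block up exactly as far as the adjacent loser block moves down, matching the zero-sum remark above.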
Hence, we formulate the following objective \begin{align} \min_{\boldsymbol{N}}\sum_{t}\abs{(N_{t}-N_{t+1})-(N_{t-1}-N_{t})}: && N_{t}\leq 2N_{t+1}, \forall t, \label{eq:N} \end{align} \begin{lemma}\label{thm:noninc} The resulting sequence $\boldsymbol{N^*}$ has nonincreasing change, i.e., \begin{align} N_{t}-N_{t+1}\leq N_{t-1}-N_{t}, &&\forall t. \end{align} \begin{proof} Let us assume it is not nonincreasing. In such a scenario, if we move the excess elimination that does not satisfy the nonincreasing behavior to the first round for all such places, we will have a smaller path change in the number of eliminated players. Hence, in the optimal solution, we need to have a nonincreasing elimination sequence. \end{proof} \end{lemma} To solve the optimization problem in \eqref{eq:N}, we propose the following construction for $\boldsymbol{N}$. \begin{enumerate} \item Input $N$, $M$. \item Let $N^*_{M-1}=2$, $N^*_0=N$, $m=M-1$. \item If $m=1$, STOP; else continue.\label{step:it} \item $\delta=2\lfloor{(N^*_0-N^*_{m})/(2m)}\rfloor$ \item If $\delta\geq N^*_m$, $N^*_{m-1}=2N^*_m$,\\ else $N^*_{m-1}=N^*_m+\delta$. \item $m\rightarrow m-1$, return to Step \ref{step:it}. \end{enumerate} For our construction of the remaining number of players $\boldsymbol{N^*}$, we have the following result. \begin{theorem} The resulting $\boldsymbol{N^*}=\{N^*_m\}_{m=0}^{M-1}$ is a minimizer for the objective function \eqref{eq:N}. \begin{proof} From \autoref{thm:noninc}, we observe that the total amount of path change in the number of eliminated players is equal to the difference between the first number of eliminations and the last. Since at least $2$ players need to be eliminated and we have $N_{M-1}=2$, the last number of eliminated players is $2$. Thus, the problem is equivalent to minimizing the first number of eliminations. We can only achieve this by equally distributing the number of eliminations across rounds as much as possible, which concludes the proof. 
\end{proof} \end{theorem} \begin{example} The elimination numbers are found as follows. For example, given $N=134$ and $M=15$, we have \begin{align} &N_0=134,N_1=122,N_2=110,N_3=98,N_4=86,\\ &N_5=76,N_6=66,N_7=56,N_8=46,N_{9}=36,\\ &N_{10}=26,N_{11}=16,N_{12}=8,N_{13}=4,N_{14}=2 \end{align} \end{example} \section{Discussions} \begin{remark} Since at least two players are eliminated at each round, the same match never happens in consecutive rounds. \end{remark} Our design is built upon a seeded tournament setting. If we lack such a seed ranking, there are a number of things that can be done. \begin{itemize} \item One such example is a preliminary tournament with $\log_2(N)$ rounds to determine an initial ranking. \item Or we can consider a random initial seed, which will not be any worse than other tournament designs. \item Or we can modify the elimination and re-ranking rules accordingly for initial random seeds. In such scenarios, instead of decreasing the initial path change of $\boldsymbol{b}$ by $2$, we can consider a nonincreasing decrease where, in the beginning stages, we can fully rank them. In such a scenario, eliminating fewer players at the beginning may be more meaningful. \end{itemize} \begin{remark} Moreover, the eliminated players can continue to play against each other in a separate league similar to our initial design to determine the full ranking instead of just the champion. \end{remark} \begin{remark} Furthermore, our tournament can be implemented in a connected manner by creating a losers bracket. We can design a promotion and relegation structure. Our near-linear player-elimination design will then concern the net change in the players, i.e., the relegated players minus the promoted players. \end{remark} \begin{remark} Since the eliminations are only from the set of players that lost that round, most of the players face an elimination risk, which increases the stakes. 
\end{remark} \begin{remark} Our design is flexible in the sense that it can be adapted to any number of players $N$ as long as it is even and any number of matches $M$ as long as $\log_2 N\leq M\leq N/2$. If $N$ is odd, we can eliminate an odd number of players at the first round. If $M<\log_2 N$, the winner can be decided from the top-ranked remaining players. If $M>N/2$, the extra matches can be used for initial seeding. \end{remark} \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 1.72168, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeJvxK6wB9dbuUr6z
\section{Introduction} \label{sec:intro} For the past decade in sports analytics, the holy grail has been to find "the one best metric" which can best capture the performance of players and teams through the lens of winning. For example, expected metrics such as wins-above replacement (WAR) in baseball~\cite{whatisWAR}, expected point value (EPV) and efficiency metrics in basketball~\cite{Cervone2014POINTWISEP, oliver_BasketballOnPaper} and expected goal value in soccer~\cite{lucey2014quality} are used as the gold-standard in team and player analysis. In American Football, Defense-Adjusted Value Over Average (DVOA)~\cite{dvoa}, is perhaps the most respected and utilized advanced metric in the NFL which utilizes both an expected value and efficiency metric, and analyzes the value of a play compared to expected for every play and also normalizes for team, drive and game context. Win probability has also been widely used as the go-to metric for analyzing plays and performance across sports~\cite{inpredictableWP, iWinRNFL, statsWP}. Although the works mentioned above are all impressive and have improved analysis and decision making across sports, we believe the overall hypothesis of having a single metric to explain all performance is limiting. By its very nature, sport is extremely complex and lends itself to many queries for different contexts and temporal resolutions. Instead of having a single predictor (or metric), we believe we need many predictors to enable these analyses. For example, if a team has won the title, we want predictors that highlight their dominant attributes across the season (i.e., offensively and/or defensively strong, and if so which locations and times). Or if a team has won or lost a single match, we want predictors which highlight which plays or play was decisive or of note. 
While we believe that a multitude of predictors are required to enable multi-resolution analyses, we believe there should be a single source or brain that generates these predictors. For example, if we are predicting the expected point value of a drive in American Football and that jumps based on a big play, it should be correlated/co-occurring with a jump in the win probability; otherwise the two would be contradictory and cause the user not to trust the model. Additionally, we believe the predictors should go beyond "expected metrics" which compare solely to the league average, and should produce distributions to enable deeper analysis (i.e., instead of comparing to the league average, why can't we compare against the top 20\% or 10\% of teams?). To effectively do this we need four ingredients: i) a fine-grained representation of performance at the play-level which can be aggregated up to different resolutions, ii) fine-grain spatial data to enable the representation, iii) a multi-task learning approach where we train the predictors simultaneously, and iv) a method which can generate distributions instead of just point values. In this paper, we show how we can do this. We focus on the sport of Rugby League, which has an abundance of spatial information available across many seasons. Specifically, we focus our analysis on the 2018 NRL Season, which was the closest regular season in the 110-year history of Rugby League in Australia (and, we believe, in the history of all sport). Our single-source prediction engine, which we call Rugby-Bot, showcases how we can uncover how the league was won. In Figure~\ref{fig:1_rugbybot} we showcase Rugby-Bot, and in the next section we describe it.
\section{What is Rugby League and What is Rugby-Bot?} Rugby League is a continuous and dynamic game played between 2 teams of 13 players (7 forwards and 6 backs) across 2x40 minute halves, with the goal of "scoring a try" and obtaining points by carrying the ball across the opponent's goal-line. Rugby League has numerous similarities to American Football: it is played on a similar size pitch (100m x 70m) with a "set of 6 tackles" per possession (in place of "4 downs") in which a team has the opportunity to score a try. The scoring convention of Rugby League is: 4 points for a try and 2 points for the subsequent successful conversion; 2 points for a successful penalty kick; and 1 point for a successful field-goal. The most popular Rugby League competition in the world is the "National Rugby League (NRL)", which consists of 16 teams across Australia and New Zealand. The NRL is the most viewed and attended Rugby League competition in the world. The 2018 NRL season was particularly noteworthy as it was the most competitive regular season in the history of the league, and we believe in the history of sport. After 25 rounds of the regular season competition (where each team plays 24 games), the top 8 teams were separated by one win. Specifically, the top four teams (Roosters, Storm, Rabbitohs and Sharks) all finished with 16 wins. The next four teams (Panthers, Broncos, Warriors and Tigers) finished with 15 wins. The minor premiership, which goes to the team that finishes first during the regular season, was decided by point differential, and the Sydney Roosters won it for the fourth time in the past six years. To achieve this feat, the Roosters had to beat the Parramatta Eels by a margin of 27 points on the last day of the regular season, which they did with a 44-10 victory. The result meant they pushed the Melbourne Storm out of top spot, with a point differential of just +8.
Given the closeness of the competition, it would be useful to have many measurement tools to dissect how the league was won, which is what Rugby-Bot enables. An example of the Rugby-Bot tool is depicted in Figure~\ref{fig:1_rugbybot}. In the example we show a complete "set" (i.e., Rugby League's equivalent of a "drive" in American Football) from Round 25 of the 2018 NRL Season, with the Roosters in possession against the Eels. In the "Set Summary" (Second Row - left), Rugby-Bot tracks the likelihood the Roosters will score a try (exTrySet) during that set at the start of each tackle. We see that they initially field the ball in terrible field position with only a 3.5\% likelihood of scoring. However, a great return boosts their exTrySet to 7.4\% at the start of their first tackle. A huge run during the second tackle raises their exTrySet further to 11.5\%, and good central field position on the fourth tackle sees it grow to 15.2\%. Unfortunately for the Roosters, after a bad fifth tackle they elect to kick in an attempt to recover the ball, but instead turn the ball over. Rugby-Bot also allows us to analyze that decision making, which we will explore later. We can dig further into the spatial context of the big run by exploring the expected meters gained as a function of field location in the "Contextual Spatial Insights" panel (Second Row - right). This panel shows the expected meters as a function of not only the field position, but the full context: tackle number, team, scoreline, and opponent. In the plot we see the advantage gained by starting a tackle near the middle of the pitch. An important concept in Rugby League is that of "Momentum" during a set of six tackles. Rugby-Bot quantifies momentum by comparing the current exTrySet to the average for that context (field position, time, scoreline, team identities).
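The momentum measure just described reduces to a difference between the current exTrySet and its contextual baseline. A minimal sketch follows; the contextual averages are hypothetical, chosen only for illustration, while the exTrySet values are the ones quoted for tackles 1, 2 and 4 of the example set:

```python
import numpy as np

def set_momentum(ex_try_set, context_avg):
    # Momentum at each tackle: how far the current expected-try likelihood
    # sits above (or below) the average for that context.
    return np.asarray(ex_try_set) - np.asarray(context_avg)

# exTrySet values quoted in the text for tackles 1, 2 and 4 of the example set
ex_try = [0.074, 0.115, 0.152]
avg = [0.060, 0.090, 0.120]  # hypothetical contextual averages
momentum = set_momentum(ex_try, avg)  # positive values indicate momentum
```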
In the example shown in Figure~\ref{fig:1_rugbybot} (Third Row), we see a boost in momentum both from the runback after the initial possession as well as from the big run on tackle 2. Rugby-Bot enables us to identify big runs not just in terms of the meters gained, but in how exceptional each run was given its context. Next to the video player we see a "Big Play Highlight" indicator showing the distribution of meters gained expected under the conditions of tackle 2, with where the actual run fell in this distribution indicated by the red arrow. Finally, in the bottom row, we see the Scoreline Prediction tracker. In this plot, we not only track the current score differential (red dashed line), but also predict the end-of-game score differential. Note that because of our approach, which we will explore in the next section, our predictor is not just a final value, but a distribution, displayed here as the 90-10\% confidence interval (black lines) around the mean (green line). We will dive into analyzing this plot more in Section~\ref{Analysis}. \begin{figure} \centering \includegraphics[width=0.8\linewidth,trim={0 0 0 0}, clip]{./figures/fig1.pdf} \caption{An example of our Rugby-Bot tool used to analyze a Round 25 match from the 2018 NRL Season, which consists of the following components: (Top) Shows the video and game information, as well as an indicator of a "highlight" using our anomaly detector. (Second Row) Shows the spatial information of each tackle in addition to the Expected TrySet Value for that set at each tackle. Additionally, "Contextual Spatial Insights" can be gleaned by looking at the expected meters gained for each location on the field for a given context. (Third Row) The "Set Momentum" is monitored by tracking the current Expected Try Value at each tackle relative to the average value. (Bottom Row) The final scoreline predictor tracks the current scoreline (red) and final prediction distribution (mean in green, 90-10\% confidence interval in black).
} \label{fig:1_rugbybot} \end{figure} \section{Multi-Task Learning using Fine-Grain Spatial Data} Rugby-Bot is designed to analyze the state of the game across multiple time resolutions: play, set, and game. That is, in any given situation, we would like to know how likely various outcomes are. In particular, we predict: \begin{itemize} \item Play Selection [$y_{p}$]: likelihood a team will perform an offensive kick, perform a defensive kick, or run a normal offensive play on the next tackle \item Expected Meters (Tackle) [$y_{m}$]: predicted meters gained/lost on the next tackle \item Expected Try (Tackle) [$y_{tt}$]: likelihood of scoring a try on the next tackle \item Expected Try (Set) [$y_{ts}$]: likelihood of scoring a try at some point during that set \item Win Probability [$y_{w}$]: likelihood of winning the game \item Scoreline Prediction [$y_{s}$]: predicted final score of each team \end{itemize} Simply constructing a model (or specialist) for each task leaves much to be desired: it fails to share information and structure that is common across tasks and is an (unnecessary) hassle to maintain in a production environment. Instead we strategically frame this problem as a single model whose output can be thought of as a "state-vector" of the match. \subsection{National Rugby League (NRL) Dataset} Rugby-Bot is built on data from the 2015-2018 National Rugby League seasons, which includes over 750 games and more than 250,000 distinct tackles (i.e., plays). For our analysis, the continuous event data was transformed into a segmented dataset with each data point representing a distinct tackle in the match. At each tackle, we have information about the position of the ball, the subsequent play-by-play event sequence, the players and teams involved, the team in possession of the ball, as well as game-context (e.g. tackle number, score, time remaining). Data is split 80-train/20-test for modeling.
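To make the segmented data layout concrete, here is a minimal sketch of a tackle-level record and the 80/20 split; the field names and values are illustrative, not the actual NRL feed schema:

```python
import random

# One segmented data point per tackle (illustrative fields only)
def make_record(i):
    return {
        "x": 34.2, "y": 61.5,            # ball position on the pitch (m)
        "tackle_number": i % 6 + 1,      # 1-6 within the set
        "team": "Roosters", "opponent": "Eels",
        "season": 2018,
        "score_diff": 8, "seconds_remaining": 2012,
        "meters_gained": 9.5,            # outcome labels attached per tackle
    }

records = [make_record(i) for i in range(1000)]  # stand-in for ~250k tackles
random.seed(0)
random.shuffle(records)
cut = int(0.8 * len(records))
train, test = records[:cut], records[cut:]       # 80-train/20-test split
```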
\subsection{Modeling Approach} Our approach draws on three key ideas used throughout the machine learning and deep learning communities. First, it draws on "wide and deep" models commonly used for recommendation tasks~\cite{cheng2016wide}. Second, we cast this as a multi-task learning problem, predicting all outcomes from the same model, thereby sharing common features throughout. Finally, we use a mixture density network, which enables us to produce a distribution of outcomes, thereby capturing the uncertainty in the network. The advantage of this is three-fold. First, the Bayesian nature of MDNs affords us the ability to treat all losses in terms of a probabilistic likelihood and therefore enables us to do regression and classification tasks simultaneously. Next, it is intrinsically multi-task, which forces the model to learn globally relevant features about the game. Finally, by modeling the outputs as a multi-dimensional distribution, it captures the relationships between the outputs as well as providing uncertainties about the predictions. The model architecture is shown in Figure~\ref{fig:2_network}, and we will explore how each component contributes to the prediction task and the advantages that this architecture offers. \begin{figure} \centering \includegraphics[height=0.3\linewidth,trim={0 0 0 0}, clip]{./figures/fig2.pdf} \caption{Our base network architecture is a Mixture Density Network (MDN). Categorical features as well as field position ($x$, $y$) are passed through embedding layers (green) to create a dense representation. These are then concatenated with the continuous features (blue) and passed through two fully connected layers (black). The mixture density layer consists of 5 mixtures, each with a prediction for $\mu$ and $\sigma$.
Expected meters ($y_{m}$), expected try tackle ($y_{tt}$), expected try set ($y_{ts}$), win probability ($y_{w}$), and final scoreline ($y_{s}$) are all predicted simultaneously.} \label{fig:2_network} \end{figure} \subsubsection{Capturing Spatial and Contextual Information through Wide and Deep Models} Our goal is to leverage both spatial and contextual information to make predictions about the outcome at the play, possession, and game-level. Some of the challenges in modeling this data include: \begin{itemize} \item Raw spatial data is low-dimensional and densely represented \item Spatial features are inherently non-linear \item Contextual features (e.g. teams, tackle) are discrete and may be sparse \item The relationship between the spatial and contextual features is non-linear \end{itemize} To address these points, we draw inspiration from the wide and deep models of~\cite{cheng2016wide}. Following the approaches there, we pass each categorical feature through its own deep portion of the network to construct higher-level, dense features from the originally sparse inputs, creating an embedding vector. These embeddings are then concatenated with the remaining dense features (such as time and position) before passing through 2 final dense layers to capture their non-linear relationships. A unique aspect of our approach is how we represent the interactions of certain categorical (i.e. contextual) features such as season and team/opponent identity. As is common practice, we represent each as a one-hot vector. As opposed to creating an embedding for season, team, and opponent independently, however, we instead concatenate the one-hot vector of season with the one-hot vector of team (and similarly for opponent). This enables us to more directly share identity information across seasons, capturing the notion that teams maintain a level of similarity across years while also picking up on league-wide trends over different seasons.
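The season-team interaction just described can be sketched as follows; the dimensions and random weights are illustrative stand-ins (the real model learns the weights jointly with the rest of the network):

```python
import numpy as np

N_SEASONS, N_TEAMS, EMB_DIM = 4, 16, 8  # 2015-2018, 16 NRL teams

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

rng = np.random.default_rng(0)
W = rng.normal(size=(N_SEASONS + N_TEAMS, EMB_DIM))  # shared embedding weights

def season_team_embedding(season_idx, team_idx):
    # Concatenate the season and team one-hots *before* embedding, so the
    # same team shares structure across seasons while league-wide trends
    # per season are still captured.
    x = np.concatenate([one_hot(season_idx, N_SEASONS),
                        one_hot(team_idx, N_TEAMS)])
    return x @ W

e = season_team_embedding(3, 5)  # e.g. the 2018 season and team index 5
# With one-hot inputs this reduces to the sum of two embedding rows:
assert np.allclose(e, W[3] + W[N_SEASONS + 5])
```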
We used a similar approach to capture the tackle number (1-6) and back-to-back sets (represented as a binary flag). The other key innovation in our approach is the manner in which high-level features are extracted from the field position; this addresses the first point above. The field position is a dense, two-dimensional $(x, y)$ input. In a traditional wide and deep model, this value would simply be concatenated at the embedding layer. However, the shallowness of the wide portion of these models prevents extraction of higher-level spatial features. To address this, we treat field position as both a sparse and a dense feature. For the sparse portion, we pass the two-dimensional position through several layers just like the categorical features. Note that we initially boost from the original two dimensions up to 50 at the first hidden layer. This gives the network sufficient dimensionality to "mix" the inputs and create the desired higher-level features. This is a key insight. Traditionally, the low dimensionality of the spatial input has been addressed by fully discretizing the field position, that is, by treating the field as a grid and representing the position as a 1-hot vector indicating the occupancy in that bin~\cite{DBLP:conf/icdm/YueLCBM14, miller2014factorized}. This has several limitations. First, the discrete representation is inherently lossy as positions are effectively rounded during the binning process. Second, the resulting dimensionality may be massive (e.g. a 70m x 100m field broken into $\frac{1}{3}$m $\times$ $\frac{1}{3}$m bins results in a 63,000 dimensional input). While we would like to increase our dimensionality above 2, expanding to the thousands or tens-of-thousands is unnecessarily high and results in an extremely sparse representation, requiring more neurons and more data.
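The dimensionality quoted above, and the lossiness of binning, are easy to verify (a quick check, not part of the model):

```python
# A 70m x 100m pitch discretized into 1/3m x 1/3m bins
bin_size = 1 / 3
nx, ny = round(70 / bin_size), round(100 / bin_size)
dims = nx * ny  # 210 * 300 = 63,000-dimensional one-hot input

# Binning is lossy: distinct positions collapse to the same bin index
bx1 = int(34.20 / bin_size)
bx2 = int(34.30 / bin_size)  # ~10cm away, yet the same bin
```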
Our approach avoids these pitfalls by working directly on the raw spatial input, moderately increasing the dimensionality in the first hidden layer, and then enabling the network to extract important spatial relationships directly from the data. Finally, because of the importance of the field position in all of these prediction tasks, we also include the position again as a dense feature. This enables us to capture both low-level (i.e. raw), as well as high-level (i.e. embedded) spatial information. \subsubsection{Mixture Density Network for simplified Multi-Task Learning} Multi-task learning has become commonplace within the machine learning community because of its ability to exploit common structures and characteristics across tasks~\cite{caruana1997multitask}. A difficulty with multi-task learning often arises when one would like to perform both regression (which would use a loss such as RMSE) and classification (which would use a loss such as cross entropy) simultaneously. For example, in predicting both win probability and final scoreline, how much should a 1\% likelihood in winning be weighted relative to a 1pt change in the score? Even when all tasks are of the same type (classification or regression), how each loss is scaled and/or weighted can be somewhat arbitrary: how should a 1m error in the predicted meters gained be weighted against a 1pt error in the final scoreline? Mixture density networks (MDN) are a class of models which combine neural networks and mixture density models~\cite{bishop_MDN}. As such, they are naturally Bayesian and allow us to model the conditional probability distribution $p(\mathbf{t}|\mathbf{x})$, where $\mathbf{t}$ are the parameters which describe the distributions generating our game-state vector $[y_{p}, y_{m}, y_{tt}, y_{ts}, y_{w}, y_{s}]$ and $\mathbf{x}$ is our input feature vector. Therefore, we have a single loss for our prediction: the negative log-likelihood that a game-state is observed given the current input.
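For concreteness, the single negative log-likelihood loss over a mixture can be sketched as below; the diagonal-covariance form and the shapes are illustrative simplifications of the actual network head:

```python
import numpy as np

def logsumexp(a):
    # Numerically stable log(sum(exp(a)))
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def mdn_nll(y, pi, mu, sigma):
    """Negative log-likelihood of an observed game-state vector y under a
    mixture of K diagonal Gaussians.
    Shapes (illustrative): y (D,), pi (K,), mu (K, D), sigma (K, D)."""
    log_comp = -0.5 * np.sum(((y - mu) / sigma) ** 2
                             + np.log(2 * np.pi * sigma ** 2), axis=1)
    return -logsumexp(np.log(pi) + log_comp)

# Sanity check: one standard-normal component, observation at the mean
nll = mdn_nll(np.array([0.0]), np.array([1.0]),
              np.array([[0.0]]), np.array([[1.0]]))
```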
This dramatically simplifies our loss function. Additionally, the loss over the full distribution constrains the model to learn the relationships between the outputs, for example, that it would be unlikely to observe a high value for meters gained but a very low value for an expected try. The "mixture" part of the MDN allows us to capture the multi-modality of the underlying distribution. This produces various "modes" of the game, such as lopsided contests or very close scenarios. MDNs also give us insight into the uncertainty of our models "for free". This enables us to generate a distribution of outcomes - as opposed to just a mean - for any given prediction. Cross entropy is used as the loss for training. Our test loss is 0.051. Note that this is a single loss across the entire state-vector: even if several components in a prediction are very good, the loss may be quite high if the state is inconsistent (e.g. a positive final scoreline prediction but a low win probability are inconsistent and therefore would have a high loss). \subsubsection{Modeling decision-making on the "last tackle"} Similar to the decision-making opportunities on 4th-down in American Football (punt, kick, go-for-it), in Rugby League teams must decide whether to kick defensively (equivalent to a punt), kick offensively (in an attempt to regain the ball for back-to-back sets), or "go-for-it" on the last tackle. For this task we use a simple logistic regressor. We elect to model this decision separately because it only pertains to the last tackle and is therefore "silent" on most tackles, unlike the other tasks, which are active every single tackle. The test loss for this model is 0.64. \section{2018 NRL Season Analysis: DVOA for Rugby League} Football Outsiders' DVOA (Defense-adjusted Value Over Average) has become arguably the most popular advanced statistic in the NFL. It aims to correctly value each play and its impact on scoring and winning.
DVOA factors in context such as opponent, time, and score to compare performance to a baseline for a situation~\cite{dvoa}. It can be broken down and applied in several ways, but the most popular application is to measure team strength. Rugby-Bot can be used to create a DVOA for Rugby League. Like American Football, Rugby League has segmented plays where the fight for field position is key and not all yards/meters are equal even though they appear so in the box score. The Expected Try (set) prediction generates the predicted chance of scoring during the set for every play. The prediction, like football's DVOA, considers the score, time, opponent, and location. Taking the difference between this expected value and the actual value, we can see which teams outperform their expectation. To create a more interpretable 'DVOA', the values are scaled to have a mean of 0. A positive offensive DVOA and a negative defensive DVOA correspond to strong offenses and defenses, respectively. We can use our Rugby League DVOA to analyze the 2018 NRL Season, as seen in Table~\ref{tab:1_leagueSummary}. 2018 was an extraordinarily close season in which the top 4 teams all had the same number of points and wins. The top two teams had similar meters for and against; however, the context under which those meters were gained and conceded strongly favored the Roosters, as indicated by their DVOA differential. The champion Roosters had the top DVOA per play mostly due to their strong defense, surpassing the next strongest defensive team (the Storm) by more than 1.5 defensive DVOA pts per play. \begin{table} \caption{The points, wins, losses, meters for and against, meter differential (for-against), offensive and defensive DVOA, and differential DVOA (offensive-defensive) are shown for each team during the 2018 NRL season. A large positive number for Offensive DVOA and a large negative number for Defensive DVOA indicate a strong offense and defense, respectively.} \label{tab:1_leagueSummary} \centering \resizebox{\textwidth}{!}{\begin{tabular}{l l l l l l l l l l l} \csvautotabular{./figures/tab1.csv} \end{tabular}} \end{table} Next we examine the Roosters' form over the course of the season. Figures~\ref{fig:3_dvoa}A and \ref{fig:3_dvoa}B break down the offensive and defensive strength over the season. In Figure~\ref{fig:3_dvoa}A we see that initially the Roosters got off to a slow start, largely on par with the league average (they started the season with a 4-4 record). However, by mid-season, they began to find their form. The offensive DVOA allows us to track how the Roosters went from average to great in the 2nd half of the season. Defensively, the Roosters were dominant all season long, as seen in Figure~\ref{fig:3_dvoa}B. They maintained their strong form even as the league worsened in Defensive DVOA. With our individual play valuation we can dive deeper into the Roosters' success and reveal where their defense outperformed their opponents. Figure~\ref{fig:3_dvoa}C shows that the Roosters' defense was average during plays occurring in the first $\frac{3}{4}$ of the field. (Note that both the Roosters and the league average have a negative DVOA in these situations as they rarely result in scoring.) However, in critical moments in the final 25m of the pitch, they were incredibly strong, with their Defensive DVOA maintained at 0.12 while the league average shot to over 0.15 in key situations. \begin{figure} \centering \includegraphics[width=\linewidth,trim={0 2.2cm 0 0}, clip]{./figures/fig3.pdf} \caption{(A) The cumulative Offensive DVOA for the Roosters (red) and the League-average (green); the Roosters got off to a slow start offensively then began to pull away from the field.
(B) The cumulative Defensive DVOA for the Roosters (red) and League-average (green); the Roosters dominated defensively (indicated by a negative value for defense) throughout the season. (C) Comparison of the Roosters to the League when their opponents were on offense, in normal field position versus near the goal (within the 75m line).} \label{fig:3_dvoa} \end{figure} \section{Rugby League Game and Play Analysis} \label{Analysis} \subsection{Distributed Scoreline Prediction: Intelligent Game Analysis} We return now to the critical Round 25 matchup between the Roosters and the Eels which was shown earlier. Recall that to clinch the title, the Roosters needed to win this match by 27 points. In Figure~\ref{fig:1_rugbybot}, we viewed the final scoreline prediction only a few minutes into the match, when the Roosters led by only 8 pts. Is the title within reach? To answer this question we need to be able to predict not only the final score differential, but the uncertainty around this measurement. Rugby-Bot enables us to do this. Similar to the approaches used in~\cite{dvoa}, our MDN allows us to predict a range of likely final scorelines at each point in the match. Now we plot the full scoreline predictor over the course of the match in Figure~\ref{fig:4_scoreline}. The actual score is shown in red and the predicted mean score in green, with the 90-10\% confidence interval shown in black around it. Note that although we are showing the mean here, the mean need not be the most likely value; as our model is a mixture of gaussians it is inherently multimodal, and the most likely outcome could occur at a mode away from the mean. At any point in the game we can view the full shape of the predicted distribution, but here the mean and 90-10 bounds are presented for visual simplicity. Also note that because it is a mixture model, the distribution is complex and not necessarily symmetric: the values of the mean, 90-bound, and 10-bound trend together, but are not identical.
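The mean and the 90-10\% band plotted in Figure~\ref{fig:4_scoreline} can be read off the mixture by sampling; the mixture parameters below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(42)

def scoreline_interval(pi, mu, sigma, n=20_000):
    # Draw final score differentials from the Gaussian mixture and
    # report the mean and the 10th/90th percentiles.
    comp = rng.choice(len(pi), size=n, p=pi)
    draws = rng.normal(mu[comp], sigma[comp])
    return draws.mean(), np.percentile(draws, [10, 90])

mean, (lo10, hi90) = scoreline_interval(
    pi=np.array([0.7, 0.3]),       # hypothetical mixture weights
    mu=np.array([12.0, 30.0]),     # component means (pts differential)
    sigma=np.array([6.0, 5.0]))
# Once lo10 > 0, a comeback by the trailing team lies outside the
# 90-10 band -- the sense in which a game can be called effectively over.
```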
There are several interesting observations we can draw from this analysis. First, with 10 minutes remaining in the first half, the game appears to be largely over: the lower 10\% bound has moved above a score differential of 0, making a comeback unlikely. It dips back down ever so slightly after the start of the second half and stays there for the next few minutes, until by H2 33:32 the 90-10 interval never again includes 0 for the remainder of the game. At this point we can say the game is effectively over. This is interesting in recreating the story of the game, but is also useful for analyses such as quantifying "garbage time" or understanding the risk impact of substitutions to rest key players late in a game. Although the game is "over" shortly into the second half, the fate of the season is still uncertain. The Roosters must win by 27, and at H2 33:32 they are up only by 14. Furthermore, even the 90-10 confidence interval does not include a 27 point victory for the Roosters. This does not occur until H2 18:13, when the Roosters go up by 22. Now the title is within reach. Note that as the game progresses, the width of the confidence interval shrinks. Early in the game the outcome is very uncertain, but as the game progresses, the range of possible outcomes shrinks until it is identically known once the game is over. Even in the remaining few minutes, there is still some uncertainty as to the final score, as there is always a possibility of a last-minute score or turnover. However, the model has learned that the most the score will fluctuate by is the value of a single try and conversion. \begin{figure} \centering \includegraphics[width=\linewidth,trim={0 0 0 0}, clip]{./figures/fig4.pdf} \caption{The predictor of the final scoreline for the Round 25 matchup between the Roosters and Eels.
The actual score is shown in red, the mean of the predicted distribution is shown in green, and the 90-10\% confidence interval is shown in black.} \label{fig:4_scoreline} \end{figure} \subsection{Valuing and Comparing Individual Plays Across Different Contexts} Rugby-Bot allows us to determine the value of each play in a game. To demonstrate this, we can compare two plays from the Round 25 Eels v Roosters game. It is obvious that a big run will increase a team's chance of scoring, but not all meters are created equal. Figure~\ref{fig:5_playcompare} shows two different sets in the match; on the left is the same set as in Figure~\ref{fig:1_rugbybot}, and on the right is another set occurring near the end of the first half. We can evaluate tackle 3 from the example shown on the left, and tackle 4 from the example shown on the right in Figure~\ref{fig:5_playcompare}. Both runs were over 22 meters and above the 95th percentile of predicted expected meters. In the example shown on the left, the run is made closer to the goal line and earlier in the set. This causes the exTrySet value to go up 3.8\%, while the run in the example shown on the right is late in the set and far from the goal line, so it only increases the exTrySet by 1.2\%. This quantifies the conclusion an observer would reach: the meters gained in the example on the right were not as valuable, as the chances of scoring were still minuscule. Taking these deltas between plays allows us to pinpoint the plays in the match that are impactful beyond just meters or scoring. To further highlight how good the play on the left of Figure~\ref{fig:5_playcompare} was, we can use a distribution of predictions to easily identify big plays and analyze individual play performance. Figure~\ref{fig:6_lasttackle} visualizes the 22 meter run through a different lens. The blue dot corresponds to the starting position of the play and the red dot is where the play ended up.
We can see that, based on the context ($(x,y)$ starting position, tackle number, and game context), this play was in the top 5\% of plays (or in the 95th percentile). Effectively, we can value each play and determine whether it was within the "top-plays" in any situation. \begin{figure} \centering \includegraphics[width=\linewidth,trim={0 0 0 0}, clip]{./figures/fig5.pdf} \caption{The likelihood of scoring a try at each tackle (exTrySet) is shown for two different sets. Plays have been normalized so the offensive team is always going left to right.} \label{fig:5_playcompare} \end{figure} \subsection{Analyzing Decision Making on the Last Tackle} The first five tackles in a set in Rugby League are largely uniform in their objective: score a try or gain the most meters possible towards the tryline. On the last tackle, however, teams may elect to kick defensively (similar to a punt in American Football), kick offensively (in an attempt to score or regain the ball for back-to-back sets), or run a standard play in an attempt to score, as they have to turn possession of the ball over to the opposition at the end of the tackle. Rugby-Bot enables us to predict what a given team is likely to do, analyze the outcome of that decision, and assess the impact of that decision on the game relative to another choice having been made. \begin{table} \caption{Within the opponent's 20 meter line, teams consistently make conservative decisions, electing to kick when running produces more than twice the expected points.} \centering \begin{tabular}{lll} \toprule Play Decision & Frequency & Expected Points \\ \midrule Run & 32\% & 1.15 \\ Kick & 68\% & 0.53 \\ \bottomrule \end{tabular} \label{tab:2_decision} \end{table} Given the last tackle is often the most exciting and variable play, it enables us to investigate some interesting behavioral questions.
One such question is "do teams make the best decision on the last tackle, or are they naturally conservative in their play calling?" Answering this requires us to analyze both the position and the context surrounding the sixth tackle. Using our predictions of play call (run, defensive kick, offensive kick) and our play values, we can value each decision on each part of the field. In Table~\ref{tab:2_decision}, we see that in the "Red Zone" attempting to run the ball for a Try is more valuable than kicking. Despite this fact, teams tend to be conservative and attempt to run it on only 32\% of plays. Table~\ref{tab:3_lastTacklePerformance} displays how the most aggressive team in the 2018 season, the Canberra Raiders, also has the most points over expected. Two of the best teams, the Roosters and Storm, do not run the ball often on the final tackle. This could be because, as more talented teams, they can afford to be risk averse. \begin{table} \label{tab:3_lastTacklePerformance} \caption{2018 Season performance on last tackle.
Each team has roughly the same Expected Points on their final tackle, but the actual points obtained varies dramatically.} \centering \resizebox{\textwidth}{!}{\begin{tabular}{lcccccc} \toprule Team & Expected Points &Actual Points & Over Expected & Run$(\%)$ & Run$(\%)$ Rank & Over Expected Rank \\ \midrule Canberra Raiders & 0.56 & 1.04 & 0.47 & 41.67\% & 1 & 1 \\ Brisbane Broncos & 0.54 & 0.97 & 0.43 & 37.18\% & 6 & 2 \\ Penrith Panthers & 0.56 & 0.78 & 0.22 & 38.39\% & 5 & 3 \\ Newcastle Knights & 0.56 & 0.73 & 0.18 & 39.80\% & 2 & 4 \\ Cronulla-Sutherland Sharks & 0.52 & 0.68 & 0.16 & 30.58\% & 9 & 5 \\ Manly-Warringah Sea Eagles & 0.55 & 0.66 & 0.11 & 25.89\% & 10 & 6 \\ North Queensland Cowboys & 0.56 & 0.65 & 0.09 & 19.58\% & 15 & 7 \\ South Sydney Rabbitohs & 0.55 & 0.6 & 0.04 & 24.47\% & 13 & 8 \\ Sydney Roosters & 0.58 & 0.57 & -0.01 & 16.16\% & 16 & 9 \\ Gold Coast Titans & 0.54 & 0.5 & -0.04 & 38.46\% & 4 & 10 \\ Canterbury Bulldogs & 0.55 & 0.51 & -0.04 & 33.79\% & 8 & 11 \\ St. George Illawarra Dragons & 0.55 & 0.51 & -0.05 & 39.39\% & 3 & 12 \\ Melbourne Storm & 0.53 & 0.48 & -0.06 & 25.24\% & 11 & 13 \\ New Zealand Warriors & 0.53 & 0.46 & -0.07 & 20.56\% & 14 & 14 \\ Wests Tigers & 0.56 & 0.42 & -0.13 & 37.17\% & 7 & 15 \\ Parramatta Eels & 0.55 & 0.38 & -0.17 & 25.00\% & 12 & 16 \\ \bottomrule \end{tabular}} \end{table} \begin{figure} \centering \includegraphics[width=0.5\linewidth,trim={0 1.0cm 0 0}, clip]{./figures/fig6.pdf} \caption{The blue dot corresponds to the starting position of the play and the red dot is where the play ended up. Using our distribution analysis, we can see that this play was in the top 5\% of plays (or in the 95th percentile).} \label{fig:6_lasttackle} \end{figure} Returning to our example set from Round 25 (Figure~\ref{fig:1_rugbybot}), we saw the Roosters in outstanding field position before the final tackle, with an exTrySet value well above average. 
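The run-versus-kick gap in Table~\ref{tab:2_decision} translates into a simple expected-points calculation; the frequencies and values come from that table, while the "value-maximizing call" framing is a deliberate simplification (it ignores, e.g., the field position conceded by a failed run):

```python
# League-wide red-zone last-tackle numbers from Table 2
ev = {"run": 1.15, "kick": 0.53}
freq = {"run": 0.32, "kick": 0.68}

best = max(ev, key=ev.get)                   # value-maximizing call: "run"
realized = sum(freq[a] * ev[a] for a in ev)  # league's realized expected points
forgone = ev[best] - realized                # points left on the table per play
```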
However, instead of running the ball, which they had done with great success during that set, they elected to kick to the far touch-line, consistent with their style of running far less than any other team in the league on the final tackle. The ball was mishandled, lost out of bounds, and possession was lost to the Eels. The Eels mounted an extended drive to the shadow of the Roosters' goal, only to fail to convert. The Roosters' defense again came to their rescue. Overall, the analysis shown in Table~\ref{fig:3_dvoa} is effectively a measure of ``Red Zone efficiency'', which teams can utilize to see how effective they are in the most important area of the field. \section{Summary} In this paper, we presented a multi-task learning method that generates multiple predictions from a single prediction source. To enable this approach, we utilized a fine-grained representation of spatial data within a wide-and-deep learning approach. Additionally, our approach can predict distributions rather than single point values. We highlighted the utility of our approach on the sport of Rugby League and call our prediction engine ``Rugby-Bot''. Utilizing our suite of prediction tools, we showed how Rugby-Bot could be used to discover how the Sydney Roosters won the 2018 NRL minor premiership (the closest in the history of the sport). Additionally, we showed how it can be used to value each individual play, accurately predict the outcome of a match, and investigate play behavior in Rugby League, specifically behavior on the last tackle. \bibliographystyle{unsrt}
\section{Introduction} There has been rapid growth in sports analytics in recent years, thanks to fast developments in game tracking technologies \citep{albert2017handbook}. New statistics and machine learning methods are much needed to address a range of questions from game outcome prediction to player performance evaluation \citep{cervone2014pointwise,franks2015characterizing, cervone2016multiresolution}, and to in-game strategy planning \citep{fernandez2018wide,sandholtz2019measuring}. Our focus in this paper is to study the National Basketball Association (NBA) players' {\it shooting patterns}, in particular, how they change over shooting locations and game time (e.g., first quarter versus clutch time). In professional basketball research, the shooting pattern (or field goal attempt) is a fundamental metric since it often provides valuable insights to players, coaches and managers, e.g., players and coaches will be able to obtain a better understanding of their current shooting choices and hence develop further offensive/defensive plans accordingly, while managers can make better data-informed decisions on player recruitment. In the literature, it is common to employ spatial and spatial-temporal models \citep{reich2006spatial,miller2014factorized, jiao2019bayesian} to study the spatial and temporal dependence in the field goal attempt data. The key novelty of our paper is to introduce a new Bayesian tensor multidimensional clustering method that studies the heterogeneity of shooting patterns among players. Our starting point is to recognize that the field goal attempt data in basketball games enjoys a natural {\it tensor structure}.
For example, we can divide the basketball half court into regions under the {\it polar coordinate} system and then summarize the number of field goal attempts over each region during each of the four quarters of the game as entries in a three-way tensor, where each tensor direction corresponds to the shooting distance, angle, and game time (quarter). One advantage of considering a tensor representation is that the spatial-temporal dependence structure is automatically considered as part of the tensor structure. Compared to most existing works that rely on the Cartesian coordinate system \citep{miller2014factorized,jiao2019bayesian,yin2020bayesian,hu2020bayesiangroup,yin2020analysis1}, our approach is arguably more natural since shooting angle and distance are two important factors that affect the shot selection of professional players \citep{reich2006spatial}. Studying the change of shooting patterns over time is also meaningful, e.g., Stephen Curry made comparably fewer attempts in the fourth quarter during the 2017--2018 regular season simply because the Golden State Warriors had often established a significant lead by the end of the third quarter. Tensor models have received a great deal of attention in the machine learning and statistics literature \citep{kolda2009tensor,sun2014tensors,bi2021tensors}. For tensor clustering problems, most existing work \citep{sun2019dynamic,chi2020provable,mai2021doubly} either works with a single tensor or assumes that the clustering structure is the same across different tensor directions. It is also common that they only consider tensors with entries that take continuous or binary values. In terms of model estimation, clustering is often based on solving a regularized optimization problem, which requires pre-specifying the number of clusters or choosing the cluster number based on certain {\it ad hoc} criteria.
There are some recent papers on Bayesian tensor models, e.g., \citet{guhaniyogi2017bayesian,spencer2019bayesian,guhaniyogi2014bayesian}. However, all of them are in the regression context and hence cannot be directly applied to solve our problem. Our proposed approach differs from the aforementioned methods in several ways. First, we consider a flexible multidimensional tensor clustering problem, which allows different clustering structures over tensor directions. We believe this is a meaningful relaxation in many applications, e.g., basketball players' shooting choices may differ significantly in terms of shooting distance, angle and game time depending on a player's position, shooting preference and role in the team. Second, we focus on count-valued tensors (rather than continuous-valued tensors) for the obvious reason that the number of shot attempts is the main outcome of interest in our application. Third, our model is fully probabilistic, which allows easier interpretation than optimization-based methods. In particular, we consider a Bayesian nonparametric model under the mixture of finite mixtures framework \citep[MFM;][]{miller2018mixture}, which allows simultaneous estimation and inference for the number of clusters and the associated clustering configuration for each direction (e.g., distance, angle, and quarter). We show that posterior inference can be conducted efficiently for our method and demonstrate its excellent empirical performance via simulation studies. We also establish posterior convergence properties for our method. Finally, our proposed method reveals several interesting data-driven insights from field goal attempt data which are useful for professional basketball players, coaches, and managers. \textbf{Main contributions and broad impact:} (1) We are among the first (to the best of our knowledge) to introduce {\it tensor models} to sports analytics.
It is our hope that this paper can contribute to promoting more use of tensor methods in different sport applications. (2) We develop a novel multidimensional tensor clustering approach that allows different clustering structures over tensor directions and handles count-valued data. The proposed method is fully Bayesian, which renders convenient inference on the number of clusters and the clustering structure. (3) We provide a large-sample theoretical justification for our method by showing posterior consistency for the cluster number and contraction rate for the mixing distributions. These results are new for Bayesian tensor models. \section{Method}\label{sec:method} \subsection{Probabilistic multi-dimensional tensor clustering } We treat the shooting chart data as three-way tensors and discuss a multi-dimensional clustering approach in this section. Note that each direction of the tensor represents the shooting angle, distance to the basket, and one of the four quarters in the game. Our proposed method can be conveniently extended to study general multi-way tensor data as well. Let~$\bm{Y}$ be a~$p_1 \times p_2 \times p_3$ tensor with each element~$Y_{ijk}$ only taking count values for $i=1,\ldots,p_1; j=1,\ldots,p_2; k=1,\ldots,p_3$. It is natural to consider a Poisson distribution with a mean parameter represented as a rank-one tensor, that is, \begin{align}\label{eq1} \bm{Y} \sim \text{Poisson}( \bm{\gamma}_{1} \circ \bm{\gamma}_{2} \circ \bm{\gamma}_{3}), \end{align} where $\circ$ denotes the outer product between two vectors, and $\bm{\gamma}_{1} \in \mathbb{R}_{+}^{p_1},\bm{\gamma}_{2} \in \mathbb{R}_{+}^{p_2},\bm{\gamma}_{3} \in \mathbb{R}_{+}^{p_3}$. 
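As a quick numerical illustration of model \eqref{eq1} (our own sketch, not part of the paper's implementation), one can draw a count tensor whose Poisson mean is the rank-one outer product of three positive vectors; the dimensions and rate ranges below are arbitrary choices matching the $11 \times 12 \times 4$ setting used later in the data analysis:

```python
import math
import random

def poisson_draw(rate, rng):
    # Knuth's algorithm; adequate for the moderate rates used here
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_rank_one_tensor(p1=11, p2=12, p3=4, seed=0):
    rng = random.Random(seed)
    # positive main-effect vectors gamma_1, gamma_2, gamma_3
    g1 = [rng.uniform(0.5, 2.0) for _ in range(p1)]
    g2 = [rng.uniform(0.5, 2.0) for _ in range(p2)]
    g3 = [rng.uniform(0.5, 2.0) for _ in range(p3)]
    # Y_ijk ~ Poisson(g1_i * g2_j * g3_k): rank-one mean tensor
    Y = [[[poisson_draw(g1[i] * g2[j] * g3[k], rng)
           for k in range(p3)] for j in range(p2)] for i in range(p1)]
    return Y

Y = simulate_rank_one_tensor()
```

On the log scale the mean is additive, $\log \mathrm{E}(Y_{ijk}) = \log\gamma_{1,i} + \log\gamma_{2,j} + \log\gamma_{3,k}$, which is the main-effects ANOVA interpretation discussed next.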
Model \eqref{eq1} can also be viewed as a Poisson regression model where the mean parameter corresponds to an analysis of variance (ANOVA) model with main effects only, that is, $\log \text{E}(Y_{ijk}) = \log \gamma_{1,i} + \log \gamma_{2,j} + \log \gamma_{3,k}$ for $1\leq i \leq p_1, 1\leq j \leq p_2, 1 \leq k \leq p_3$. By ignoring the interaction effects, the number of parameters is effectively reduced from $p_1 p_2 p_3$ to $(p_1 + p_2+ p_3)$. Hence it renders parsimonious parameter estimation and easy interpretation in our NBA application study, i.e., the main effects $\log \bm{\gamma}_1, \log \bm{\gamma}_2, \log \bm{\gamma}_3$ correspond to the additive effect of shooting distance, angle, and game time (quarter). In order to learn the multidimensional heterogeneity pattern, we consider three independent mixture of finite mixtures \citep[MFM;][]{miller2018mixture} priors on $\bm{\gamma}_1, \bm{\gamma}_2, \bm{\gamma}_3$ such that the clustering pattern in those three directions can be learned separately. Here we present a brief introduction to MFM without getting into more details. Given $n$ observations, we consider $z_1,\ldots,z_n$ as their clustering labels, e.g., $z_1 = z_2 = z_4$ would mean that observations $1,2,4$ belong to the same cluster. Then the MFM prior can be expressed as \begin{eqnarray}\label{eq:MFM} K \sim p(\cdot), \quad (\pi_1, \ldots, \pi_K) \mid K \sim \mbox{Dir} (\gamma, \ldots, \gamma), \quad z_i \mid K, \pi \sim \sum_{h=1}^K \pi_h \delta_h,\quad i=1, \ldots, n, \end{eqnarray} where $K$ is the number of clusters, $(\pi_1,\ldots,\pi_K)$ are associated cluster weights and $\sum_{h=1}^K \pi_h \delta_h$ is the mixture distribution with $\delta_h$ being a point-mass at~$h$. 
Under the Bayesian framework, all three quantities are random and hence are assigned prior distributions, i.e., we use $p(\cdot)$, a proper probability mass function on~$\mathbb{N}_+$, as a prior on $K$, and a Dirichlet distribution on the mixture weights. Compared to the Chinese restaurant process (CRP), the probability of introducing a new table (cluster) under MFM is slowed down by a factor of~$V_n(t+1)/ V_n(t)$, which allows for a model-based pruning of tiny extraneous clusters. Here the coefficient~$V_n(t)$ is defined as \begin{align*} \begin{split} V_n(t) &= \sum_{k=1}^{+\infty}\dfrac{k_{(t)}}{(\gamma k)^{(n)}} p(k), \end{split} \end{align*} where $k_{(t)}=k(k-1)\ldots(k-t+1)$, $(\gamma k)^{(n)} = {\gamma k} (\gamma k+1)\ldots(\gamma k+n-1)$, and $\gamma$ is the hyperparameter in the Dirichlet prior for the weights. The conditional distributions of $z_i, i=2, \ldots, n$ under~\eqref{eq:MFM} can be defined through a P\'{o}lya urn scheme similar to the CRP: \begin{eqnarray}\label{eq:mcrp} P(z_{i} = c \mid z_{1}, \ldots, z_{i-1}) \propto \begin{cases} \abs{c} + \gamma , & \text{at an existing table labeled}\, c\\ V_n(t+1)/ V_n(t)\gamma, & \text{if}~ \, c\, \text{~is a new table} \end{cases}, \end{eqnarray} with~$t$ being the number of existing clusters.
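To make the urn scheme in \eqref{eq:mcrp} concrete, the following sketch (ours, purely illustrative) sequentially samples one partition, computing $V_n(t)$ by truncating the infinite sum over $k$; it assumes $\gamma = 1$ and takes $p(\cdot)$ to be a Poisson$(1)$ distribution truncated to $\mathbb{N}_+$, as adopted later in the paper:

```python
import math
import random

def log_Vn(n, t, gamma=1.0, k_max=200):
    # V_n(t) = sum_k k_(t) / (gamma k)^(n) * p(k), truncated at k_max;
    # p(k) is Poisson(1) restricted to k >= 1; terms with k < t vanish
    terms = []
    for k in range(t, k_max + 1):
        log_falling = math.lgamma(k + 1) - math.lgamma(k - t + 1)          # k_(t)
        log_rising = math.lgamma(gamma * k + n) - math.lgamma(gamma * k)   # (gamma k)^(n)
        log_pk = -1.0 - math.lgamma(k + 1) - math.log(1.0 - math.exp(-1.0))
        terms.append(log_falling - log_rising + log_pk)
    m = max(terms)
    return m + math.log(sum(math.exp(x - m) for x in terms))

def sample_partition(n, gamma=1.0, seed=0):
    rng = random.Random(seed)
    z, counts = [0], [1]                      # first observation opens table 0
    for _ in range(1, n):
        t = len(counts)
        # new-table weight is damped by the ratio V_n(t+1) / V_n(t)
        new_w = gamma * math.exp(log_Vn(n, t + 1, gamma) - log_Vn(n, t, gamma))
        weights = [c + gamma for c in counts] + [new_w]
        u = rng.random() * sum(weights)
        c, acc = 0, weights[0]
        while u > acc:
            c += 1
            acc += weights[c]
        if c == t:
            counts.append(1)
        else:
            counts[c] += 1
        z.append(c)
    return z

labels = sample_partition(25)
```

The ratio $V_n(t+1)/V_n(t)$ is exactly the factor that damps the creation of new clusters relative to the CRP.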
Now back to the shooting chart data: as we propose to use three independent MFM priors for clustering shooting distance, angle, and game time, our final model can be presented in the following hierarchical structure, \begin{equation}\label{eq:hierarchical_model} \begin{split} & K_{\ell} \stackrel{\text{i.i.d.}}{\sim} p_{K}, ~~ \ell=1,2,3,\\ & \bm{\pi}_{\ell} = (\pi_{\ell, 1},\ldots,\pi_{\ell,K_{\ell}}) \mid K_{\ell} \sim \text{Dir}(\nu,\ldots,\nu),~~ \nu > 0,~ \ell=1,2,3, \\ & \log{\bm{\gamma}_{\ell,1}},\ldots,\log{\bm{\gamma}_{\ell,K_{\ell}}}\stackrel {\text{i.i.d.}}{\sim }\text{MVN}_{p_{\ell}}(\bm{0},\Sigma_{\ell}),~~ \ell=1,2,3,\\ &\Sigma_{\ell}=\sigma^2_{\ell}(I_{\ell}-\rho_{\ell}\bm{W}_{\ell}),~~ \ell=1,2,3,\\ &\sigma^2_{\ell}\sim \text{Gamma}(a,b),~~ \ell=1,2,3,\\ &\rho_{\ell}\sim\text{Unif}(c_{1\ell},c_{2\ell}),~~ \ell=1,2,3,\\ &P(z_{i\ell} =j\mid \bm{\pi}_{\ell},K_\ell)=\pi_{\ell,j},~~ \ell=1,2,3, ~~j=1,\ldots,K_\ell,\\ &\bm{Y}_i \sim \text{Poisson}( \bm{\gamma}_{1,z_{i1}} \circ \bm{\gamma}_{2,z_{i2}} \circ \bm{\gamma}_{3,z_{i3}}),~i=1,\ldots,n, \end{split} \end{equation} where the main effects for distance, angle, and quarter are modeled by multivariate normal distributions whose covariance matrices involve adjacency matrices, denoted by $\bm{W}_{\ell}$'s, that are used for incorporating the potential spatial and temporal correlation information (e.g., two shooting locations are next to each other). To ensure that the covariance matrices $\Sigma_{\ell}$ are positive definite, we take $c_{1\ell}$ and $c_{2\ell}$ to be the reciprocals of the minimum and maximum eigenvalues of $\bm{W}_{\ell}$, respectively. For the prior $p_{K}$ on the number of clusters, we consider a truncated Poisson$(1)$ following the recommendations in \citet{miller2018mixture} and \citet{geng2019probabilistic}. Our multidimensional clustering model in \eqref{eq:hierarchical_model} sits between two extremes.
One is the usual tensor clustering model that assumes the same cluster structure across different directions, which is certainly more restrictive than ours. The other is to marginally cluster over each of the tensor directions and solve multiple clustering problems independently, which does not fully utilize the tensor structural information. Our method combines the attractive features of both sides by allowing cluster structures to differ over directions while borrowing information to improve estimation efficiency. Our model in \eqref{eq:hierarchical_model} can be viewed as a Bayesian mixture of rank-one tensor models. Compared to the frequentist work on tensor clustering \citep{sun2019dynamic,chi2020provable}, where a Tucker decomposition is usually utilized and the choice of the rank relies heavily on pre-specification or certain model selection criteria, our approach is capable of automatically determining the rank while quantifying the uncertainty in rank selection. Moreover, our method is fully probabilistic; hence each mixture component is easy to interpret in practice. \subsection{Theoretical Properties}\label{theory} Next we study the theoretical properties of the posterior distribution obtained from model \eqref{eq:hierarchical_model}. For convenience, we define three mixing measures $G_{\ell} = \sum_{i=1}^{K_{\ell}} \pi_{\ell,i} \delta(\gamma_{\ell,i})$ for $\ell=1,2,3$, where $\delta(\cdot)$ is the point mass measure. In other words, $G_1,G_2,G_3$ represent the clustering structures and associated parameters along each of the three directions in the tensor.
In order to establish the posterior contraction results, we consider a refined parameter space $\bm{\Theta^*}$ defined as $\cup_{k_1,k_2,k_3=1}^{\infty} \bm{\Theta_k^*}$ for $\bm{k} = (k_1,k_2,k_3)$, where $\bm{\Theta_k^*}$ is a compact parameter space for all model parameters, including mixture weights and main effects, given a fixed cluster number for each direction, i.e., $K_1=k_1, K_2 = k_2,$ and $K_3 = k_3$. More precisely, we define $\bm{\Theta_k^*}$ as \begin{align*} \Big\{&\pi_{\ell,i} \in (\epsilon, 1-\epsilon) ~~\text{for every}~i=1,\ldots,k_{\ell}, \ell=1,2,3, ~\sum_{j=1}^{k_{\ell}} \pi_{\ell,j} = 1 ~~\text{for every}~\ell=1,2,3, \\ & \gamma_{\ell,i} \in (-M,M) ~~\text{for every}~i=1,\ldots,k_{\ell}, \ell=1,2,3 \Big\}, \end{align*} where $\epsilon$ and $M$ are some pre-specified positive constants. For any two mixing measures $G_1 = \sum_{i=1}^k p_i \delta(\gamma_i)$ and $G_2 = \sum_{j=1}^{k'} p_j' \delta(\gamma_j')$, we define their Wasserstein distance as $W(G_1,G_2) = \inf_{q \in \mathcal{Q}} \sum_{i,j} q_{ij} |\gamma_i - \gamma_j' | $, where $\mathcal{Q}$ denotes the collection of joint discrete distributions on the space $\{1,\ldots,k\} \times \{1,\ldots,k'\}$ and $q_{ij}$ is the probability associated with the $(i,j)$-th element; it satisfies the constraints $\sum_{i=1}^k q_{ij} = p_j'$ and $\sum_{j=1}^{k'} q_{ij} = p_i$, for every $i=1,\ldots,k$ and $j=1,\ldots,k'$. For $\ell = 1,2,3$, let $K_{\ell}^0$ and $G_{\ell}^0$ be the true number of clusters and the true mixing measure along direction $\ell$, and let $P_0$ be the associated joint probability measure. Then the following theorem establishes the posterior consistency and contraction rate for the cluster number and mixing measure. The proof, which builds on the results for Bayesian mixture models in \citet{guha2021posterior}, is given in the Supplementary Material.
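For scalar atoms, the Wasserstein distance above has the closed form $W(G_1,G_2)=\int |F_1(x)-F_2(x)|\,dx$ in terms of the CDFs of the two mixing measures; here is an illustrative numerical check (our sketch only; the atoms in our model are vectors, for which a linear-programming transport solver would be used instead):

```python
def wasserstein_1d(atoms1, probs1, atoms2, probs2):
    # W(G1, G2) for discrete measures on the real line,
    # computed as the area between the two step CDFs
    def cdf(x, atoms, probs):
        return sum(p for a, p in zip(atoms, probs) if a <= x)
    pts = sorted(set(atoms1) | set(atoms2))
    total = 0.0
    for left, right in zip(pts[:-1], pts[1:]):
        total += abs(cdf(left, atoms1, probs1) - cdf(left, atoms2, probs2)) * (right - left)
    return total

# moving 0.5 mass from 0 to 1 and 0.5 mass from 2 to 1 costs 1 in total
w = wasserstein_1d([0.0, 2.0], [0.5, 0.5], [1.0], [1.0])
```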
\begin{theorem}\label{thm1} Let $\Pi_n(\cdot \mid \bm{Y}_1,\ldots,\bm{Y}_n)$ be the posterior distribution obtained from \eqref{eq:hierarchical_model} given i.i.d. observations $\bm{Y}_1,\ldots,\bm{Y}_n$. Suppose that the true parameters belong to $\bm{\Theta^*}$. Then for each $\ell =1,2,3$, we have \begin{align*} \Pi_n\left\{K_{\ell} = K_{\ell}^0 \mid \bm{Y}_1,\ldots,\bm{Y}_n\right\} \rightarrow 1, ~\text{and}~~ \Pi_n \left\{W(G_{\ell},G_{\ell}^0)\lesssim (\log n/n)^{1/2} \mid \bm{Y}_1,\ldots,\bm{Y}_n\right\} \rightarrow 1, \end{align*} almost surely under $P_0$ as $n \rightarrow \infty$. \end{theorem} Theorem \ref{thm1} shows that as the sample size $n \rightarrow \infty$, our proposed Bayesian model is capable of correctly identifying the unknown number of clusters along each of the tensor directions with posterior probability tending to one. Moreover, the latent clustering structure (e.g., cluster membership) can also be consistently recovered. The contraction rate for $G_{\ell}$ is nearly parametric up to an additional logarithmic factor, which is common in the Bayesian asymptotics literature \citep{guha2021posterior}. The assumption of a compact parameter space $\bm{\Theta^*}$ is needed to rule out extreme scenarios, for example, when some mixture probabilities are extremely close to $0$, in which case it becomes very challenging to distinguish between our model and a sub-model without these small mixture components. In practice, this assumption is often satisfied since we can always restrict the model parameters to take values within a pre-specified range, e.g., assuming each cluster probability to be at least $\epsilon$ for some small value such as $0.0001\%$. Our results can also be extended to general multi-way tensors as long as independent MFM priors are used for each direction. \subsection{Bayesian Inference}\label{sec:inference} We discuss the posterior sampling scheme for our model.
For the MFM prior, we use the stick-breaking \citep{sethuraman1994constructive} approximation to reconstruct $$K_{\ell} \sim p_{K}, \quad \bm{\pi}_{\ell} = (\pi_{1\ell},\ldots,\pi_{K_{\ell} \ell}) \mid K_{\ell} \sim \text{Dir}(\nu,\ldots,\nu),~~ \ell=1, 2, 3,$$ as follows for each $\ell=1,2,3$, \begin{itemize} \item \textbf{Step 1.} Generate $\eta_1,\eta_2,\cdots \overset{\text{iid}}{\sim} \text{Exp}(\psi_\ell)$, \item \textbf{Step 2.} Let $K_\ell=\min\{j:\sum_{k=1}^j \eta_k \geq 1\}$, \item \textbf{Step 3.} Set $\pi_{h\ell}=\eta_h$, for $h=1,\cdots,K_\ell-1$, \item \textbf{Step 4.} Set $\pi_{K_\ell \ell}=1-\sum_{h=1}^{K_\ell-1}\pi_{h\ell}$, \end{itemize} where this construction corresponds to $(K_\ell-1) \sim \mbox{Poisson}(\psi_\ell)$ and we set $\nu=1$. Based on the stick-breaking reparameterization, we obtain a hierarchical model similar to the Dirichlet process mixture model in \citet{ishwaran2001gibbs} when we choose a sufficiently large dimension $T$ for $\bm{\pi}_{\ell}$ and set the last $T-K_\ell$ elements to zero. Due to the lack of an analytical form for the posterior distribution of the~$\gamma$'s, we employ an MCMC sampling algorithm to sample from the posterior distribution, and then obtain the posterior estimates of the unknown parameters. Computation is facilitated by the \textbf{nimble} \citep{de2017programming} package in \textsf{R} \citep{Rlanguage2013}. To determine the final clustering configuration based on post-burn-in iterations, we use Dahl's method \citep{dahl2006model}. The main idea is to obtain a clustering configuration that best represents the posterior samples based on comparing the ``pairwise similarity'' between different cluster structures. The procedure can be described as follows. First, at MCMC iteration~$t$, based on the $n$-dimensional vector $(z_1^{(t)},\ldots,z_n^{(t)})$ of latent cluster labels, a membership matrix~$M^{(t)}$ consisting of~0's and~1's can be obtained, where $M^{(t)}(i,j) = M^{(t)}(j,i) = 1(z_i^{(t)} = z_j^{(t)})$.
Next, the membership matrices are averaged over all post-burn-in iterations to get a pairwise similarity matrix, $\bar{M} = \sum_{t=1}^T M^{(t)} / T$, where~$T$ denotes the total number of iterations. Finally, the iteration that has the smallest element-wise Euclidean distance from $\bar{M}$ is taken as the inferred clustering configuration, i.e., with~$t^*$ being \begin{equation*} t^* = \argmin_t \sum_{i=1}^n \sum_{j=1}^n \left( M^{(t)}(i,j) - \bar{M}(i,j) \right)^2, \end{equation*} the final inferred configuration is obtained as $(z_1^{t^*},\ldots, z_n^{t^*})$. \section{Simulation}\label{sec:simu} To evaluate the performance of the proposed model, simulation studies are performed on generated data sets with a total of 150 players. We consider two simulation settings. For the first setting, we consider a three-angle and three-distance partition of the court, i.e., the court is divided into $9$ parts based on combinations of distance and angle. Two clusters, each of size 75, are used for angle, distance, and quarter. The patterns for the angle and distance groups are visualized in Figure~\ref{fig:simu_pars_1} of the Supplementary Material. For quarter group~1, we choose $\bm{\gamma}_3 = (-1, -1, -1, -1)^\top$; and for quarter group~2, $\bm{\gamma}_3 = (-0.5, -2, -0.5, -2)^\top$. For the second simulation setting, we consider a finer partition of the court, including 11 angles, 12 distances, and again two quarter patterns in the same way as Design~1. The true number of clusters is 3 (each cluster of size 50) for the angle, 3 (each cluster of size 50) for the distance, and 2 (each of size 75) for the quarter. The angle and distance patterns are visualized in Figure~\ref{fig:simu_pars_2} of the Supplementary Material. Under both settings, for each piece of the partitioned court, the corresponding number of shots is generated using the associated $\gamma_1, \gamma_2$ and~$\gamma_3$ based on the last line of Equation~\ref{eq:hierarchical_model}.
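Dahl's least-squares selection rule described in Section~\ref{sec:inference} amounts to a search over the sampled partitions; a compact pure-Python sketch (illustrative only, separate from the \textbf{nimble}-based implementation):

```python
def dahl_configuration(z_samples):
    """Pick the sampled partition whose membership matrix is closest
    (in element-wise squared error) to the posterior average matrix."""
    T, n = len(z_samples), len(z_samples[0])
    # average pairwise co-clustering probability M_bar(i, j)
    m_bar = [[sum(z[i] == z[j] for z in z_samples) / T for j in range(n)]
             for i in range(n)]
    best, best_loss = None, float("inf")
    for z in z_samples:
        loss = sum((float(z[i] == z[j]) - m_bar[i][j]) ** 2
                   for i in range(n) for j in range(n))
        if loss < best_loss:
            best, best_loss = z, loss
    return best

# toy example: the configuration [0, 0, 1] appears most often and is selected
picked = dahl_configuration([[0, 0, 1], [0, 0, 1], [0, 1, 1]])
```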
The proposed multidimensional clustering approach is then applied to fit the generated data, and this procedure is repeated~100 times for each setting. All the computations were performed on a computing server (256GB RAM, with 8 AMD Opteron 6276 processors, operating at 2.3 GHz, with 8 processing cores in each) and the running time was within twelve hours. \begin{figure}[tbp] \centering \includegraphics[width=0.8\textwidth]{plot/simu_all} \caption{Simulation results: Rand index boxplots for angle, distance, and quarter over 100 Monte-Carlo replicates.} \label{fig:randindex} \end{figure} To evaluate the clustering performance on each of the tensor directions, we use the Rand index \citep[RI;][]{rand1971objective}, which is a commonly used metric that measures the concordance between two clustering schemes. The RI takes values between~$0$ and~$1$, with larger values indicating higher agreement. To evaluate whether the true number of clusters is correctly inferred, we also examine the total number of clusters inferred in each replicate over each of the three directions. We consider two competing methods, $K$-means (function \texttt{kmeans()} in R) and density-based spatial clustering \citep[DBSCAN; implemented in \textbf{fpc},][]{fpc2020}. To make a fair comparison, we use the number of clusters obtained by our method for $K$-means. For DBSCAN, as the method depends on a pre-specified ``reachability distance'', we use four candidate values, 25, 50, 75, and 100; and we denote the methods as DBSCAN-25,$\ldots,$ DBSCAN-100 for the rest of this paper. Both methods are applied to each of the three directions in an independent manner. In other words, taking the angle direction as an example, we sum out the distance and quarter directions for the~150 simulated players and obtain~150 11-dimensional count vectors for clustering. We summarize the Rand indexes from 100 replicates as boxplots in Figure~\ref{fig:randindex} and also report the average RI in Table~\ref{tab:average_ri}.
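For reference, the Rand index used in our evaluation simply counts the pairs of items on which two partitions agree; a direct implementation (our sketch):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    # fraction of item pairs on which the two clusterings agree
    # (both put the pair together, or both split it); assumes >= 2 items
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)

# identical partitions up to relabeling give RI = 1
ri = rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```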
From the results, we find a clear advantage of our method over $K$-means. Compared to DBSCAN, our advantage is not obvious under the simple setting (Design 1), but becomes clear under Design 2. We also note that the performance of DBSCAN is quite sensitive to the choice of the reachability distance, e.g., DBSCAN-25 has the worst performance for all three directions under Design 2, but not under Design 1. Our method, on the other hand, manages to achieve a reasonably high average Rand index for different tensor directions under both simulation designs. These results highlight the benefit of incorporating the tensor structure and borrowing information across directions in our method. \begin{figure}[tbp] \centering \includegraphics[width=.7\textwidth]{plot/K_hist_tensor} \caption{Simulation results: Histograms of cluster numbers over 100 Monte-Carlo replicates for angle, distance, and quarter under two designs. Correct estimate of $K$ is marked in orange color. } \label{fig:K_hist} \end{figure} \begin{table}[tbp] \centering \caption{Simulation results: Average Rand Index over 100 Monte-Carlo replicates under two simulation designs for the proposed method and two competing methods.} \label{tab:average_ri} \begin{tabular}{llccc} \toprule Design & Method & Angle & Distance & Quarter \\ \midrule Design~1 & Proposed & 0.999 & 0.987 & 0.938 \\ & $K$-means & 0.720 & 0.712 & 0.985 \\ & DBSCAN-25 & 0.996 & 0.998 & 0.982 \\ & DBSCAN-50 & 1.000 & 1.000 & 1.000 \\ & DBSCAN-75 & 1.000 & 1.000 & 1.000 \\ & DBSCAN-100 & 1.000 & 1.000 & 1.000 \\ \midrule Design~2 & Proposed & 0.826 & 0.851 & 0.903 \\ & $K$-means & 0.773 & 0.786 & 0.851 \\ & DBSCAN-25 & 0.499 & 0.667 & 0.554 \\ & DBSCAN-50 & 0.687 & 0.706 & 0.759 \\ & DBSCAN-75 & 0.720 & 0.720 & 0.776 \\ & DBSCAN-100 & 0.720 & 0.720 & 0.776 \\ \bottomrule \end{tabular} \end{table} We also present the histogram of the estimated number of clusters from 100 replicates in Figure~\ref{fig:K_hist}.
Under Design~1, where there are only two clusters in each direction, the proposed method manages to correctly estimate the cluster number $98\%, 94\%$, and $87\%$ of the time for angle, distance, and quarter, respectively. In Design~2, with a finer partition of the court, it becomes harder to infer the number of clusters, and the percentage of correct estimation drops to $56\%, 60\%$, and $61\%$. \section{NBA Data Analysis}\label{sec:real_data} We consider the shot attempts made by players during the 2017--2018 NBA regular season excluding overtime periods. Rookie-year players who started their NBA career in 2017 are excluded. We also exclude players who made very few shots in that season, e.g., due to long-term injury. Shots that were made at negative angles (under the polar coordinate system) are also excluded. In the end, the dataset that we study consists of 122,001 shot attempts made by~191 players, with Aron Baynes bottoming the list with~317, and Russell Westbrook topping the list with~1356 shot attempts. We consider the polar coordinate representation of shot attempts in a similar way to \cite{reich2006spatial}. We treat the basket as the origin and partition the angle (from~0 to~$\pi$) into 11 equal sections. In terms of the shooting distance, we partition it into 12 sections, with the first~11 designed so that the areas of all sectors and annular sectors are the same. The remaining~9 areas, which make up the twelfth section, correspond to the rest of the offensive half court. The partition scheme is illustrated in Figure~\ref{fig:partition_scheme} of the Supplementary Material. Compared to the partition scheme in Figure~2 of \cite{reich2006spatial}, where the annular sectors only covered the regions near the three-point line, we extend the annular sectors because of the current trend of making three-point shots among NBA players, e.g., Stephen Curry and Damian Lillard.
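The polar binning just described can be sketched as follows (our illustration: shot coordinates are assumed centered at the basket with angles in $[0,\pi]$, the equal-area radii follow $r_k \propto \sqrt{k}$, and the cutoff $R$ is an arbitrary placeholder rather than actual court dimensions):

```python
import math

def build_shot_tensor(shots, n_angle=11, n_dist=12, n_quarter=4, R=30.0):
    """Bin (x, y, quarter) shot records into an angle x distance x quarter
    count tensor. The first n_dist - 1 radial breakpoints give equal-area
    annular sectors (r_k proportional to sqrt(k)); the last distance bin
    collects everything beyond R."""
    Y = [[[0] * n_quarter for _ in range(n_dist)] for _ in range(n_angle)]
    radii = [R * math.sqrt(k / (n_dist - 1)) for k in range(1, n_dist)]
    for x, y, q in shots:
        theta = math.atan2(y, x)              # assumed to lie in [0, pi]
        r = math.hypot(x, y)
        i = min(int(theta / math.pi * n_angle), n_angle - 1)
        j = next((k for k, rk in enumerate(radii) if r <= rk), n_dist - 1)
        Y[i][j][q] += 1
    return Y

# two toy shots: one at 45 degrees in the first quarter,
# one directly in front of the basket in the fourth quarter
Y = build_shot_tensor([(1.0, 1.0, 0), (0.0, 5.0, 3)])
```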
For each player, we further split the number of shot attempts in each court partition by the four game quarters, and end up with an~$11\times 12 \times 4$-dimensional tensor. In Figure~\ref{fig:tensor_viz}, we choose three players, Bradley Beal, LeBron James, and Russell Westbrook, and present their shot charts for demonstration. Some interesting patterns can be observed from the plots, e.g., LeBron James makes more shots facing the basket, and Russell Westbrook makes fewer shot attempts in the fourth quarter on average. \begin{figure}[tbp] \centering \includegraphics[width=.9\textwidth]{plot/players_example} \caption{Visualization of shot count tensors for Bradley Beal, LeBron James, and Russell Westbrook.} \label{fig:tensor_viz} \end{figure} We apply the proposed multidimensional heterogeneity learning approach to the collected tensor data from 191 players. The same neighborhood matrices $W_1$, $W_2$, and $W_3$ from the simulation studies are used. We consider an MCMC chain of length 10,000 and a thinning interval of 2, resulting in a total of 5,000 posterior samples. We then discard the first 2,000 as burn-in and use the remaining 3,000 samples to obtain the final clustering configuration using Dahl's method as described in Section \ref{sec:inference}. We obtain two clusters of sizes 71 and 120 over the angle direction as shown in Figure~\ref{fig:angle_cluster}. While it can be seen that players in both clusters make more shots when facing the basket, those in cluster~1 also make a fair amount of shots at the two wings, as well as the corners. Players in cluster~2, however, mostly shoot in the region facing the basket and its immediate neighbors. Compared to those in cluster~1, they make fewer corner shots, as the estimated~$\bm{\gamma}_1$ is almost~0 in the two regions on each side. Representative players for the two clusters are, respectively, James Harden and John Wall.
Their shot charts are given in Figure \ref{fig:angle_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = 0.55\textwidth]{plot/angle_clusters} \caption{Visualization of $\bm{\gamma}_1$ estimates for two shooting angle clusters.} \label{fig:angle_cluster} \end{figure} Shooting patterns in terms of distance to the basket have been clustered into three groups as visualized in Figure~\ref{fig:dist_cluster}. Players in cluster~1 have two hot regions: near the basket, and beyond the three-point line. Point guards and shooting guards (small forwards) make up the majority of this cluster (75 players), with representative players such as Kyrie Irving and Stephen Curry. Compared with cluster~1, the 90 players in cluster~2 tend to shoot less beyond the three-point line, but make more perimeter shots. A representative player for this cluster is Russell Westbrook. Finally, in cluster~3, most of the 26 players only shoot in regions that are closest to the basket, such as DeAndre Jordan and Clint Capela. Most of their shots are slam dunks and alley-oops. Some other players (e.g., Fred VanVleet and Jonas Valanciunas) in cluster~3, although also making perimeter shots and three-pointers, rely heavily on lay-ups. We pick one representative player from each cluster and present their shooting charts in Figure \ref{fig:dist_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = .7\textwidth]{plot/dist_clusters} \caption{Visualization of $\bm{\gamma}_2$ estimates for the three shooting distance clusters.} \label{fig:dist_cluster} \end{figure} Turning to game time, the two quarter clusters are visualized in Figure~\ref{fig:quarter_cluster}. In cluster~1, players make more shots in quarters~1 and~3, and fewer shots in quarters~2 and~4. Most players in this cluster are leading players in their teams, and they often take breaks during the second quarter.
In the fourth quarter, leading players may also take breaks if their teams lead or fall behind by wide margins. Stephen Curry, Kevin Durant and Paul George are in this cluster. In cluster~2, the distribution of shots across four quarters is more even than that in cluster~1, and on average the estimated $\bm{\gamma}_3$ is relatively smaller. The cluster sizes are 91 and 100, which indicate these two patterns are similarly prevalent among the players that we studied. We pick Anthony Davis and Chris Paul as two representative players from two clusters and present their shooting charts over four quarters in Figure \ref{fig:quarter_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = .8\textwidth]{plot/quarter_clusters} \caption{Visualization of $\bm{\gamma}_3$ estimates for two game quarter clusters.} \label{fig:quarter_cluster} \end{figure} \section{Discussion}\label{sec:discussion} We propose a new multidimensional tensor clustering method in this paper and demonstrate its utility by studying how shooting patterns distribute over court locations and game time among different players. Our method is applicable to many other sports such as football and baseball, where it is natural to formulate and model the multi-way array data. The proposed method also applies to other applications such as imaging analysis and recommender systems. Several future work directions remain open. First, our method is based on Poisson distribution for outcomes in the tensor; and it is of interest to generalize this assumption by considering other types of distributions such as zero-inflated Poisson and continuous distributions. Incorporating sparsity in tensor models is another interesting direction that will allows us to deal with high-dimensional tensors. 
\section{Introduction} There has been rapid growth in sports analytics in recent years thanks to the fast development of game tracking technologies \citep{albert2017handbook}. New statistics and machine learning methods are in great demand to address a range of questions, from game outcome prediction to player performance evaluation \citep{cervone2014pointwise,franks2015characterizing, cervone2016multiresolution}, and to in-game strategy planning \citep{fernandez2018wide,sandholtz2019measuring}. Our focus in this paper is to study National Basketball Association (NBA) players' {\it shooting patterns}, in particular, how they change over shooting locations and game time (e.g., first quarter versus clutch time). In professional basketball research, the shooting pattern (or field goal attempt) is a fundamental metric since it often provides valuable insights to players, coaches, and managers: players and coaches can obtain a better understanding of their current shot selection and develop further offensive/defensive plans accordingly, while managers can make better data-informed decisions on player recruitment. In the literature, it is common to employ spatial and spatial-temporal models \citep{reich2006spatial,miller2014factorized, jiao2019bayesian} to study the spatial and temporal dependence in field goal attempt data. The key novelty of our paper is to introduce a new Bayesian multidimensional tensor clustering method that studies the heterogeneity of shooting patterns among players. Our starting point is to recognize that field goal attempt data in basketball games enjoy a natural {\it tensor structure}.
For example, we can divide the basketball half court into regions under the {\it polar coordinate} system and then summarize the number of field goal attempts over each region during each of the four quarters of the game as entries of a three-way tensor, where the three tensor directions correspond to shooting distance, angle, and game time (quarter). One advantage of the tensor representation is that the spatial-temporal dependence structure is automatically encoded as part of the tensor structure. Compared to most existing works that rely on the Cartesian coordinate system \citep{miller2014factorized,jiao2019bayesian,yin2020bayesian,hu2020bayesiangroup,yin2020analysis1}, our approach is arguably more natural since shooting angle and distance are two important factors that affect the shot selection of professional players \citep{reich2006spatial}. Studying the change of shooting patterns over time is also meaningful, e.g., Stephen Curry made comparatively few attempts in the fourth quarter during the 2017--2018 regular season simply because the Golden State Warriors had often established a significant lead by the end of the third quarter. Tensor models have received a great deal of attention in the machine learning and statistics literature \citep{kolda2009tensor,sun2014tensors,bi2021tensors}. For tensor clustering problems, most existing work \citep{sun2019dynamic,chi2020provable,mai2021doubly} either works with a single tensor or assumes that the clustering structure is the same across different tensor directions. It is also common that they only consider tensors whose entries take continuous or binary values. In terms of model estimation, clustering is often based on solving a regularized optimization problem, which requires pre-specifying the number of clusters or choosing the cluster number based on certain {\it ad hoc} criteria.
There are some recent papers on Bayesian tensor models, e.g., \citet{guhaniyogi2017bayesian,spencer2019bayesian,guhaniyogi2014bayesian}. However, all of them are in the regression context and hence cannot be directly applied to our problem. Our proposed approach differs from the aforementioned methods in several ways. First, we consider a flexible multidimensional tensor clustering problem, which allows different clustering structures over tensor directions. We believe this is a meaningful relaxation in many applications, e.g., basketball players' shot choices may differ significantly in terms of shooting distance, angle, and game time depending on players' positions, shooting preferences, and roles in the team. Second, we focus on count-valued tensors (rather than continuous-valued tensors) for the obvious reason that the number of shot attempts is the main outcome of interest in our application. Third, our model is fully probabilistic, which allows easier interpretation than optimization-based methods. In particular, we consider a Bayesian nonparametric model under the mixture of finite mixtures framework \citep[MFM;][]{miller2018mixture}, which allows simultaneous estimation and inference for the number of clusters and the associated clustering configuration in each direction (e.g., distance, angle, and quarter). We show that posterior inference can be conducted efficiently for our method and demonstrate its excellent empirical performance via simulation studies. We also establish posterior convergence properties for our method. Finally, our proposed method reveals several interesting data-driven insights into field goal attempt data that are useful for professional basketball players, coaches, and managers. \textbf{Main contributions and broad impact:} (1) We are among the first (to the best of our knowledge) to introduce {\it tensor models} to sports analytics.
It is our hope that this paper can contribute to promoting wider use of tensor methods in different sports applications. (2) We develop a novel multidimensional tensor clustering approach that allows different clustering structures over tensor directions and handles count-valued data. The proposed method is fully Bayesian, which renders convenient inference on the number of clusters and the clustering structure. (3) We provide a large-sample theoretical justification for our method by showing posterior consistency for the cluster number and a contraction rate for the mixing distributions. These results are new for Bayesian tensor models. \section{Method}\label{sec:method} \subsection{Probabilistic multidimensional tensor clustering} We treat the shooting chart data as three-way tensors and discuss a multidimensional clustering approach in this section. The three directions of the tensor represent the shooting angle, the distance to the basket, and the game quarter. Our proposed method can be conveniently extended to study general multi-way tensor data as well. Let~$\bm{Y}$ be a~$p_1 \times p_2 \times p_3$ tensor whose elements~$Y_{ijk}$ take count values for $i=1,\ldots,p_1; j=1,\ldots,p_2; k=1,\ldots,p_3$. It is natural to consider a Poisson distribution with a mean parameter represented as a rank-one tensor, that is, \begin{align}\label{eq1} \bm{Y} \sim \text{Poisson}( \bm{\gamma}_{1} \circ \bm{\gamma}_{2} \circ \bm{\gamma}_{3}), \end{align} where $\circ$ denotes the vector outer product, and $\bm{\gamma}_{1} \in \mathbb{R}_{+}^{p_1},\bm{\gamma}_{2} \in \mathbb{R}_{+}^{p_2},\bm{\gamma}_{3} \in \mathbb{R}_{+}^{p_3}$.
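To make the data-generating mechanism concrete, model \eqref{eq1} can be simulated directly from its outer-product mean. The sketch below is illustrative Python (not part of the paper's implementation, which uses \textsf{R}), with arbitrarily chosen main-effect values:

```python
import math
import random

def rpois(lam, rng):
    # Knuth's method for a Poisson(lam) draw; adequate for the small means here
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def rank_one_poisson_tensor(g1, g2, g3, seed=1):
    # Y_ijk ~ Poisson(g1[i] * g2[j] * g3[k]): the mean tensor is the rank-one
    # outer product of the three main-effect vectors
    rng = random.Random(seed)
    return [[[rpois(g1[i] * g2[j] * g3[k], rng) for k in range(len(g3))]
             for j in range(len(g2))]
            for i in range(len(g1))]

# hypothetical main effects for a 2 x 3 x 2 count tensor
Y = rank_one_poisson_tensor([2.0, 0.5], [1.0, 3.0, 0.2], [1.5, 1.5])
```

Each entry is an independent Poisson count whose log-mean decomposes into three additive main effects.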
Model \eqref{eq1} can also be viewed as a Poisson regression model whose mean parameter corresponds to an analysis of variance (ANOVA) model with main effects only, that is, $\log \text{E}(Y_{ijk}) = \log \gamma_{1,i} + \log \gamma_{2,j} + \log \gamma_{3,k}$ for $1\leq i \leq p_1, 1\leq j \leq p_2, 1 \leq k \leq p_3$. By ignoring the interaction effects, the number of parameters is effectively reduced from $p_1 p_2 p_3$ to $(p_1 + p_2+ p_3)$. Hence it renders parsimonious parameter estimation and easy interpretation in our NBA application, i.e., the main effects $\log \bm{\gamma}_1, \log \bm{\gamma}_2, \log \bm{\gamma}_3$ correspond to the additive effects of shooting angle, distance, and game time (quarter). In order to learn the multidimensional heterogeneity pattern, we place three independent mixture of finite mixtures \citep[MFM;][]{miller2018mixture} priors on $\bm{\gamma}_1, \bm{\gamma}_2, \bm{\gamma}_3$ so that the clustering pattern in each of the three directions can be learned separately. Here we present a brief introduction to MFM without going into full detail. Given $n$ observations, we consider $z_1,\ldots,z_n$ as their clustering labels, e.g., $z_1 = z_2 = z_4$ would mean that observations $1,2,4$ belong to the same cluster. The MFM prior can then be expressed as \begin{eqnarray}\label{eq:MFM} K \sim p(\cdot), \quad (\pi_1, \ldots, \pi_K) \mid K \sim \mbox{Dir} (\gamma, \ldots, \gamma), \quad z_i \mid K, \pi \sim \sum_{h=1}^K \pi_h \delta_h,\quad i=1, \ldots, n, \end{eqnarray} where $K$ is the number of clusters, $(\pi_1,\ldots,\pi_K)$ are the associated cluster weights, and $\sum_{h=1}^K \pi_h \delta_h$ is the mixture distribution with $\delta_h$ being a point mass at~$h$.
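Sampling from the MFM prior \eqref{eq:MFM} is straightforward; the sketch below is illustrative Python, where the truncated Poisson$(1)$ prior on $K$ and the Dirichlet parameter $\gamma=1$ are choices made for the example (matching the defaults adopted later in the paper):

```python
import math
import random

def rpois(lam, rng):
    # Knuth's method for a Poisson(lam) draw
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def sample_mfm_prior(n, gamma=1.0, rate=1.0, seed=0):
    """Draw (K, pi, z): K ~ Poisson(rate) truncated to {1, 2, ...},
    pi | K ~ Dir(gamma, ..., gamma), z_i | K, pi ~ Categorical(pi)."""
    rng = random.Random(seed)
    K = 0
    while K < 1:                          # truncate the Poisson prior to K >= 1
        K = rpois(rate, rng)
    g = [rng.gammavariate(gamma, 1.0) for _ in range(K)]
    pi = [x / sum(g) for x in g]          # Dirichlet draw via normalized Gammas
    z = rng.choices(range(K), weights=pi, k=n)
    return K, pi, z

K, pi, z = sample_mfm_prior(n=10, seed=3)
```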
Under the Bayesian framework, all three quantities are random and hence are assigned prior distributions, i.e., we use $p(\cdot)$, a proper probability mass function on~$\mathbb{N}_+$, as a prior on $K$, and a Dirichlet distribution on the mixture weights. Compared to the Chinese restaurant process (CRP), the probability of introducing a new table (cluster) under the MFM is damped by a factor of~$V_n(t+1)/ V_n(t)$, which allows for a model-based pruning of tiny extraneous clusters. Here the coefficient~$V_n(t)$ is defined as \begin{align*} \begin{split} V_n(t) &= \sum_{k=1}^{+\infty}\dfrac{k_{(t)}}{(\gamma k)^{(n)}} p(k), \end{split} \end{align*} where $k_{(t)}=k(k-1)\ldots(k-t+1)$, $(\gamma k)^{(n)} = {\gamma k} (\gamma k+1)\ldots(\gamma k+n-1)$, and $\gamma$ is the hyperparameter in the Dirichlet prior for the weights. The conditional distributions of $z_i, i=2, \ldots, n$ under~\eqref{eq:MFM} can be defined through a P\'{o}lya urn scheme similar to the CRP: \begin{eqnarray}\label{eq:mcrp} P(z_{i} = c \mid z_{1}, \ldots, z_{i-1}) \propto \begin{cases} \abs{c} + \gamma, & \text{at an existing table labeled}~c,\\ V_n(t+1)/ V_n(t)\,\gamma, & \text{if}~c~\text{is a new table}, \end{cases} \end{eqnarray} with~$t$ being the number of existing clusters.
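Although $V_n(t)$ has no closed form, it is cheap to evaluate numerically on the log scale by truncating the infinite sum. The sketch below is illustrative Python; following the paper's later choice, it takes $p(\cdot)$ to be a Poisson$(1)$ distribution truncated to $\{1,2,\ldots\}$ and sets $\gamma = 1$:

```python
import math

def log_rising(a, n):
    # log of the rising factorial a^(n) = a (a+1) ... (a+n-1)
    return math.lgamma(a + n) - math.lgamma(a)

def V(n, t, gamma=1.0, kmax=300):
    """Truncated evaluation of V_n(t) = sum_k k_(t) / (gamma k)^(n) p(k),
    with p(k) a Poisson(1) prior truncated to k >= 1."""
    log_trunc = math.log(1.0 - math.exp(-1.0))   # P(Poisson(1) >= 1)
    total = 0.0
    for k in range(max(t, 1), kmax + 1):         # k_(t) = 0 whenever k < t
        log_falling = math.lgamma(k + 1) - math.lgamma(k - t + 1)
        log_pk = -1.0 - math.lgamma(k + 1) - log_trunc
        total += math.exp(log_falling - log_rising(gamma * k, n) + log_pk)
    return total

# weight multiplier for opening a new cluster, given n = 50 obs in t = 3 clusters
new_cluster_factor = V(50, 4) / V(50, 3)
```

The ratio $V_n(t+1)/V_n(t)$ is exactly the factor that damps the creation of new clusters relative to the CRP.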
Returning to the shooting chart data: since we propose to use three independent MFM priors for clustering shooting angle, distance, and game time, our final model can be presented in the following hierarchical structure, \begin{equation}\label{eq:hierarchical_model} \begin{split} & K_{\ell} \stackrel{\text{i.i.d.}}{\sim} p_{K}, ~~ \ell=1,2,3,\\ & \bm{\pi}_{\ell} = (\pi_{\ell, 1},\ldots,\pi_{\ell,K_{\ell}}) \mid K_{\ell} \sim \text{Dir}(\nu,\ldots,\nu),~~ \nu > 0,~ \ell=1,2,3, \\ & \log{\bm{\gamma}_{\ell,1}},\ldots,\log{\bm{\gamma}_{\ell,K_{\ell}}}\stackrel{\text{i.i.d.}}{\sim}\text{MVN}_{p_{\ell}}(\bm{0},\Sigma_{\ell}),~~ \ell=1,2,3,\\ &\Sigma_{\ell}=\sigma^2_{\ell}(I_{\ell}-\rho_{\ell}\bm{W}_{\ell}),~~ \ell=1,2,3,\\ &\sigma^2_{\ell}\sim \text{Gamma}(a,b),~~ \ell=1,2,3,\\ &\rho_{\ell}\sim\text{Unif}(c_{1\ell},c_{2\ell}),~~ \ell=1,2,3,\\ &P(z_{i\ell} =j\mid \bm{\pi}_{\ell},K_\ell)=\pi_{\ell,j},~~ \ell=1,2,3, ~~j=1,\ldots,K_\ell,\\ &\bm{Y}_i \sim \text{Poisson}( \bm{\gamma}_{1,z_{i1}} \circ \bm{\gamma}_{2,z_{i2}} \circ \bm{\gamma}_{3,z_{i3}}),~i=1,\ldots,n, \end{split} \end{equation} where the main effects for angle, distance, and quarter are modeled by multivariate normal distributions whose covariance matrices involve adjacency matrices, denoted by $\bm{W}_{\ell}$'s, that incorporate potential spatial and temporal correlation information (e.g., two shooting locations being next to each other). To ensure that the covariance matrices $\Sigma_{\ell}$ are positive definite, we set $c_{1\ell}$ and $c_{2\ell}$ to the reciprocals of the minimum and maximum eigenvalues of $\bm{W}_{\ell}$, respectively. For the prior $p_{K}$ on the number of clusters, we consider a truncated Poisson$(1)$ following the recommendations in \citet{miller2018mixture} and \citet{geng2019probabilistic}. Our multidimensional clustering model in \eqref{eq:hierarchical_model} sits between two extremes.
One is the usual tensor clustering model that assumes the same cluster structure across different directions, which is more restrictive than ours. The other is to cluster marginally over each of the tensor directions and solve multiple clustering problems independently, which does not fully utilize the tensor structural information. Our method combines the attractive features of both sides by allowing cluster structures to differ across directions while borrowing information to improve estimation efficiency. Our model in \eqref{eq:hierarchical_model} can be viewed as a Bayesian mixture of rank-one tensor models. Compared to the frequentist work on tensor clustering \citep{sun2019dynamic,chi2020provable}, where a Tucker decomposition is usually utilized and the choice of the rank relies heavily on pre-specification or certain model selection criteria, our approach is capable of automatically determining the rank while quantifying the uncertainty in rank selection. Moreover, our method is fully probabilistic; hence each mixture component is easy to interpret in practice. \subsection{Theoretical Properties}\label{theory} Next we study the theoretical properties of the posterior distribution obtained from model \eqref{eq:hierarchical_model}. For convenience, we define three mixing measures $G_{\ell} = \sum_{i=1}^{K_{\ell}} \pi_{\ell,i} \delta(\gamma_{\ell,i})$ for $\ell=1,2,3$, where $\delta(\cdot)$ is the point-mass measure. In other words, $G_1,G_2,G_3$ represent the clustering structures and associated parameters along each of the three directions of the tensor.
In order to establish the posterior contraction results, we consider a refined parameter space $\bm{\Theta^*}$ defined as $\cup_{k_1,k_2,k_3=1}^{\infty} \bm{\Theta_k^*}$ for $\bm{k} = (k_1,k_2,k_3)$, where $\bm{\Theta_k^*}$ is a compact parameter space for all model parameters, including mixture weights and main effects, given a fixed cluster number for each direction, i.e., $K_1=k_1, K_2 = k_2,$ and $K_3 = k_3$. More precisely, we define $\bm{\Theta_k^*}$ as \begin{align*} \Big\{&\pi_{\ell,i} \in (\epsilon, 1-\epsilon) ~~\text{for every}~i=1,\ldots,k_{\ell}, \ell=1,2,3, ~~\sum_{j=1}^{k_{\ell}} \pi_{\ell,j} = 1 ~~\text{for every}~\ell=1,2,3, \\ & \gamma_{\ell,i} \in (-M,M) ~~\text{for every}~i=1,\ldots,k_{\ell}, \ell=1,2,3 \Big\}, \end{align*} where $\epsilon$ and $M$ are pre-specified positive constants. For any two mixing measures $G_1 = \sum_{i=1}^k p_i \delta(\gamma_i)$ and $G_2 = \sum_{j=1}^{k'} p_j' \delta(\gamma_j')$, we define their Wasserstein distance as $W(G_1,G_2) = \inf_{q \in \mathcal{Q}} \sum_{i,j} q_{ij} |\gamma_i - \gamma_j' | $, where $\mathcal{Q}$ denotes the collection of joint discrete distributions on the space $\{1,\ldots,k\} \times \{1,\ldots,k'\}$ and $q_{ij}$ is the probability associated with the $(i,j)$-th element, subject to the constraints $\sum_{i=1}^k q_{ij} = p_j'$ and $\sum_{j=1}^{k'} q_{ij} = p_i$ for every $i=1,\ldots,k$ and $j=1,\ldots,k'$. For $\ell = 1,2,3$, let $K_{\ell}^0$ and $G_{\ell}^0$ be the true number of clusters and the true mixing measure along direction $\ell$, and let $P_0$ be the associated joint probability measure. The following theorem establishes posterior consistency and a contraction rate for the cluster number and mixing measure; the proof, given in the Supplementary Material, is based on the results for Bayesian mixture models in \citet{guha2021posterior}.
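For intuition, when the atoms are scalars the Wasserstein distance between two mixing measures can be computed by greedily transporting mass between the sorted atoms, which attains the infimum for the cost $|\gamma_i - \gamma_j'|$ in one dimension; the general (vector-atom) case requires solving a small linear program instead. The following is an illustrative Python sketch, not part of the paper's implementation:

```python
def wasserstein_1d(atoms1, probs1, atoms2, probs2):
    """W(G1, G2) for scalar atoms: greedily match mass between sorted atoms."""
    a = sorted(zip(atoms1, probs1))
    b = sorted(zip(atoms2, probs2))
    i = j = 0
    pi, pj = a[0][1], b[0][1]
    cost = 0.0
    while i < len(a) and j < len(b):
        m = min(pi, pj)                      # mass transported in this step
        cost += m * abs(a[i][0] - b[j][0])
        pi -= m
        pj -= m
        if pi <= 1e-12:
            i += 1
            pi = a[i][1] if i < len(a) else 0.0
        if pj <= 1e-12:
            j += 1
            pj = b[j][1] if j < len(b) else 0.0
    return cost

# G1 = 0.5 delta(0) + 0.5 delta(1) vs G2 = delta(0.5): all mass travels 0.5
w = wasserstein_1d([0.0, 1.0], [0.5, 0.5], [0.5], [1.0])
```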
\begin{theorem}\label{thm1} Let $\Pi_n(\cdot \mid \bm{Y}_1,\ldots,\bm{Y}_n)$ be the posterior distribution obtained from \eqref{eq:hierarchical_model} given i.i.d. observations $\bm{Y}_1,\ldots,\bm{Y}_n$. Suppose that the true parameters belong to $\bm{\Theta^*}$. Then for each $\ell =1,2,3$, we have \begin{align*} \Pi_n\left\{K_{\ell} = K_{\ell}^0 \mid \bm{Y}_1,\ldots,\bm{Y}_n\right\} \rightarrow 1, ~\text{and}~~ \Pi_n \left\{W(G_{\ell},G_{\ell}^0)\lesssim (\log n/n)^{1/2} \mid \bm{Y}_1,\ldots,\bm{Y}_n\right\} \rightarrow 1, \end{align*} almost surely under $P_0$ as $n \rightarrow \infty$. \end{theorem} Theorem \ref{thm1} shows that as the sample size $n \rightarrow \infty$, our proposed Bayesian model is capable of correctly identifying the unknown number of clusters along each of the tensor directions with posterior probability tending to one. Moreover, the latent clustering structure (e.g., cluster membership) can also be consistently recovered. The contraction rate for $G_{\ell}$ is nearly parametric, up to a logarithmic factor, which is common in the Bayesian asymptotic literature \citep{guha2021posterior}. The assumption of a compact parameter space $\bm{\Theta^*}$ is needed to rule out extreme scenarios, for example, when some mixture probabilities are extremely close to $0$, in which case it becomes very challenging to distinguish between our model and a sub-model without these small mixture components. In practice, this assumption is often satisfied since we can always restrict the model parameters to take values within a pre-specified range, e.g., requiring each cluster probability to be at least some small $\epsilon$ such as $0.0001\%$. Our results can also be extended to general multi-way tensors as long as independent MFM priors are used for each direction. \subsection{Bayesian Inference}\label{sec:inference} We discuss the posterior sampling scheme for our model.
For the MFM prior, we use the stick-breaking \citep{sethuraman1994constructive} approximation to reconstruct $$K_{\ell} \sim p_{K}, \quad \bm{\pi}_{\ell} = (\pi_{1\ell},\ldots,\pi_{K_{\ell} \ell}) \mid K_{\ell} \sim \text{Dir}(\nu,\ldots,\nu),~~ \ell=1, 2, 3,$$ as follows for each $\ell=1,2,3$: \begin{itemize} \item \textbf{Step 1.} Generate $\eta_1,\eta_2,\cdots \overset{\text{iid}}{\sim} \text{Exp}(\psi_\ell)$, \item \textbf{Step 2.} Let $K_\ell=\min\{j:\sum_{k=1}^j \eta_k \geq 1\}$, \item \textbf{Step 3.} Set $\pi_{h\ell}=\eta_h$, for $h=1,\cdots,K_\ell-1$, \item \textbf{Step 4.} Set $\pi_{K_\ell \ell}=1-\sum_{h=1}^{K_\ell-1}\pi_{h\ell}$, \end{itemize} where we choose $(K_\ell-1) \sim \mbox{Poisson}(\psi_\ell)$ and $\nu=1$. Based on this stick-breaking reparameterization, we obtain a hierarchical model similar to the Dirichlet process mixture model in \citet{ishwaran2001gibbs} when we choose a sufficiently large dimension $T$ for $\bm{\pi}_{\ell}$ and set the last $T-K_\ell$ elements to zero. Due to the lack of an analytical form for the posterior distribution of the~$\gamma$'s, we employ an MCMC sampling algorithm to sample from the posterior distribution and then obtain the posterior estimates of the unknown parameters. Computation is facilitated by the \textbf{nimble} \citep{de2017programming} package in \textsf{R} \citep{Rlanguage2013}. To determine the final clustering configuration based on post-burn-in iterations, we use Dahl's method \citep{dahl2006model}. The main idea is to obtain a clustering configuration that best represents the posterior samples by comparing the ``pairwise similarity'' between different cluster structures. The procedure can be described as follows. First, at MCMC iteration~$t$, based on the $n$-dimensional vector $(z_1^{(t)},\ldots,z_n^{(t)})$ of latent cluster labels, a membership matrix~$M^{(t)}$ consisting of~0's and~1's can be obtained, where $M^{(t)}(i,j) = M^{(t)}(j,i) = 1(z_i^{(t)} = z_j^{(t)})$.
Next, the membership matrices are averaged over all post-burn-in iterations to obtain a matrix of pairwise similarity, $\bar{M} = \sum_{t=1}^T M^{(t)} / T$, where~$T$ denotes the total number of post-burn-in iterations. Finally, the iteration whose membership matrix has the smallest element-wise Euclidean distance from $\bar{M}$ is taken as the inferred clustering configuration, i.e., with~$t^*$ being \begin{equation*} t^* = \argmin_t \sum_{i=1}^n \sum_{j=1}^n \left( M^{(t)}(i,j) - \bar{M}(i,j) \right)^2, \end{equation*} the final inferred configuration is obtained as $(z_1^{(t^*)},\ldots, z_n^{(t^*)})$. \section{Simulation}\label{sec:simu} To evaluate the performance of the proposed model, simulation studies are performed on generated data sets with a total of 150 players. We consider two simulation settings. For the first setting, we consider a three-angle and three-distance partition of the court, i.e., the court is divided into $9$ parts based on combinations of distance and angle. Two clusters of size 75 each are set for angle, distance, and quarter. The patterns for the angle and distance groups are visualized in Figure~\ref{fig:simu_pars_1} of the Supplementary Material. For quarter group~1, we choose $\bm{\gamma}_3 = (-1, -1, -1, -1)^\top$; and for quarter group~2, $\bm{\gamma}_3 = (-0.5, -2, -0.5, -2)^\top$. For the second simulation setting, we consider a finer partition of the court, with 11 angles, 12 distances, and again two quarter patterns specified in the same way as in design~1. The true number of clusters is 3 (each cluster of size 50) for the angle, 3 (each cluster of size 50) for the distance, and 2 (each of size 75) for the quarter. The angle and distance patterns are visualized in Figure~\ref{fig:simu_pars_2} of the Supplementary Material. Under both settings, for each piece of the partitioned court, the corresponding number of shots is generated using the associated $\gamma_1, \gamma_2$, and~$\gamma_3$ according to the last line of \eqref{eq:hierarchical_model}.
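Dahl's method from Section~\ref{sec:inference}, used to summarize the posterior samples throughout this section and the real-data analysis, amounts to only a few lines of code. Below is an illustrative Python sketch (the paper's actual computations use \textsf{R} with \textbf{nimble}):

```python
def dahl_configuration(draws):
    """draws: list of T posterior label vectors z^(t). Returns the draw whose
    membership matrix is closest (elementwise squared distance) to M-bar."""
    T, n = len(draws), len(draws[0])
    # average membership matrix: Mbar[i][j] = fraction of draws with z_i == z_j
    Mbar = [[sum(z[i] == z[j] for z in draws) / T for j in range(n)]
            for i in range(n)]

    def loss(z):
        return sum(((z[i] == z[j]) - Mbar[i][j]) ** 2
                   for i in range(n) for j in range(n))

    return min(draws, key=loss)

# two draws agree on {1,2} vs {3,4}; the third disagrees on player 2
draws = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
best = dahl_configuration(draws)   # the majority configuration wins
```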
The proposed multidimensional clustering approach is then applied to fit the generated data; this procedure is repeated 100 times for each setting. All the computations were performed on a computing server (256GB RAM, with 8 AMD Opteron 6276 processors, operating at 2.3 GHz, with 8 processing cores in each), and the running time was within twelve hours. \begin{figure}[tbp] \centering \includegraphics[width=0.8\textwidth]{plot/simu_all} \caption{Simulation results: Rand index boxplots for angle, distance, and quarter over 100 Monte-Carlo replicates.} \label{fig:randindex} \end{figure} To evaluate the clustering performance in each of the tensor directions, we use the Rand index \citep[RI;][]{rand1971objective}, a commonly used metric that measures the concordance between two clustering schemes. The RI takes values between~$0$ and~$1$, with larger values indicating higher agreement. To evaluate whether the true number of clusters is correctly inferred, we also examine the total number of clusters inferred in each replicate for each of the three directions. We consider two competing methods, $K$-means (function \texttt{kmeans()} in R) and density-based spatial clustering \citep[DBSCAN; implemented in \textbf{fpc},][]{fpc2020}. To make a fair comparison, we use the number of clusters obtained by our method for $K$-means. For DBSCAN, as the method depends on a pre-specified ``reachability distance'', we use four candidate values, 25, 50, 75, and 100, and denote the resulting methods as DBSCAN-25,$\ldots,$ DBSCAN-100 for the rest of this paper. Both methods are applied to each of the three directions independently. For example, to cluster the~150 simulated players along the angle direction, we sum out the distance and quarter directions and obtain~150 11-dimensional count vectors for clustering; the other two directions are handled analogously. We summarize the Rand indices from 100 replicates as boxplots in Figure~\ref{fig:randindex} and also report the average RI in Table~\ref{tab:average_ri}.
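The Rand index simply counts, over all $\binom{n}{2}$ pairs of players, how often the two clusterings agree on whether a pair is grouped together; an illustrative Python sketch:

```python
from itertools import combinations

def rand_index(z1, z2):
    """Fraction of item pairs on which the two clusterings agree
    (both place the pair in one cluster, or both separate it)."""
    pairs = list(combinations(range(len(z1)), 2))
    agree = sum((z1[i] == z1[j]) == (z2[i] == z2[j]) for i, j in pairs)
    return agree / len(pairs)

rand_index([1, 1, 2, 2], [1, 1, 1, 2])   # agrees on 3 of 6 pairs -> 0.5
```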
From the results, we find a clear advantage of our method over $K$-means. Compared to DBSCAN, our method's advantage is not obvious under the simpler setting, Design~1, but becomes significant under Design~2. We also note that the performance of DBSCAN is quite sensitive to the choice of the reachability distance, e.g., DBSCAN-25 has the worst performance in all three directions under Design~2, but not under Design~1. Our method, on the other hand, manages to achieve a reasonably high average Rand index across tensor directions under both simulation designs. These results highlight the benefit of incorporating the tensor structure and borrowing information across directions. \begin{figure}[tbp] \centering \includegraphics[width=.7\textwidth]{plot/K_hist_tensor} \caption{Simulation results: Histograms of cluster numbers over 100 Monte-Carlo replicates for angle, distance, and quarter under two designs. Correct estimates of $K$ are marked in orange. } \label{fig:K_hist} \end{figure} \begin{table}[tbp] \centering \caption{Simulation results: Average Rand index over 100 Monte-Carlo replicates under two simulation designs for the proposed method and two competing methods.} \label{tab:average_ri} \begin{tabular}{llccc} \toprule Design & Method & Angle & Distance & Quarter \\ \midrule Design~1 & Proposed & 0.999 & 0.987 & 0.938 \\ & $K$-means & 0.720 & 0.712 & 0.985 \\ & DBSCAN-25 & 0.996 & 0.998 & 0.982 \\ & DBSCAN-50 & 1.000 & 1.000 & 1.000 \\ & DBSCAN-75 & 1.000 & 1.000 & 1.000 \\ & DBSCAN-100 & 1.000 & 1.000 & 1.000 \\ \midrule Design~2 & Proposed & 0.826 & 0.851 & 0.903 \\ & $K$-means & 0.773 & 0.786 & 0.851 \\ & DBSCAN-25 & 0.499 & 0.667 & 0.554 \\ & DBSCAN-50 & 0.687 & 0.706 & 0.759 \\ & DBSCAN-75 & 0.720 & 0.720 & 0.776 \\ & DBSCAN-100 & 0.720 & 0.720 & 0.776 \\ \bottomrule \end{tabular} \end{table} We also present the histograms of the estimated number of clusters from 100 replicates in Figure~\ref{fig:K_hist}.
Under Design~1, where there are only two clusters in each direction, the proposed method correctly estimates the cluster number $98\%$, $94\%$, and $87\%$ of the time for angle, distance, and quarter, respectively. Under Design~2, with a finer partition of the court, it becomes harder to infer the number of clusters, and the percentage of correct estimation drops to $56\%$, $60\%$, and $61\%$. \section{NBA Data Analysis}\label{sec:real_data} We consider the shot attempts made by players during the 2017--2018 NBA regular season, excluding overtime periods. Rookie players who started their NBA careers in 2017 are excluded. We also exclude players who made very few shots in that season, e.g., due to long-term injury. Shots that were made at negative degrees (under the polar coordinate system) are also excluded. In the end, the dataset that we study consists of 122,001 shot attempts made by~191 players, with Aron Baynes bottoming the list with~317 and Russell Westbrook topping the list with~1356 shot attempts. We consider the polar coordinate representation of shot attempts in a similar way to \cite{reich2006spatial}. We treat the basket as the origin and partition the angle (from~0 to~$\pi$) into 11 equal sections. In terms of the shooting distance, we partition it into 12 sections, with the first~11 designed so that the areas of all sectors and annular sectors are the same; the remaining~9 areas cover the rest of the offensive half court. The partition scheme is illustrated in Figure~\ref{fig:partition_scheme} of the Supplementary Material. Compared to the partition scheme in Figure~2 of \cite{reich2006spatial}, where the annular sectors only covered the regions near the three-point line, we extend the annular sectors because of the current trend of making three-point shots among NBA players, e.g., Stephen Curry and Damian Lillard.
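The mapping from a shot location to a cell of such a polar partition can be sketched as follows. This is illustrative Python in which the maximum radius (taken here as the 23.75-ft arc distance) and the equal-area construction of the first eleven distance bands are simplifying assumptions for the example, not the paper's exact scheme:

```python
import math

def polar_bin(x, y, n_angle=11, n_dist=11, r_max=23.75):
    """Map a shot at (x, y) feet (basket at the origin, y >= 0) to an
    (angle bin, distance bin) pair. The first n_dist bands are equal-area,
    so their radii grow like sqrt(k / n_dist); shots at or beyond r_max
    fall into one extra band (index n_dist), giving n_dist + 1 = 12 bands."""
    theta = math.atan2(y, x)                      # lies in [0, pi] when y >= 0
    a = min(int(theta / (math.pi / n_angle)), n_angle - 1)
    r = math.hypot(x, y)
    if r >= r_max:
        return a, n_dist                          # the outermost region
    d = int(n_dist * (r / r_max) ** 2)            # equal-area band index
    return a, d

a, d = polar_bin(0.0, 5.0)    # straight-on shot, 5 ft from the basket
```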
For each player, we further divide the number of shot attempts by the four game quarters for each court partition, ending up with an~$11\times 12 \times 4$-dimensional tensor. In Figure~\ref{fig:tensor_viz}, we choose three players, Bradley Beal, LeBron James, and Russell Westbrook, and present their shot charts for demonstration. Some interesting patterns can be observed from the plots, e.g., LeBron James makes more shots facing the basket, and Russell Westbrook makes fewer shot attempts in the fourth quarter on average. \begin{figure}[tbp] \centering \includegraphics[width=.9\textwidth]{plot/players_example} \caption{Visualization of shot count tensors for Bradley Beal, LeBron James, and Russell Westbrook.} \label{fig:tensor_viz} \end{figure} We apply the proposed multidimensional heterogeneity learning approach to the collected tensor data from 191 players. The same neighborhood matrices $W_1$, $W_2$, and $W_3$ from the simulation studies are used. We run an MCMC chain of length 10,000 with a thinning interval of 2, resulting in a total of 5,000 posterior samples. We then discard the first 2,000 as burn-in and use the remaining 3,000 samples to obtain the final clustering configuration using Dahl's method as described in Section \ref{sec:inference}. We obtain two clusters, of sizes 71 and 120, over the angle direction, as shown in Figure~\ref{fig:angle_cluster}. While players in both clusters make more shots when facing the basket, those in cluster~1 also make a fair number of shots at the two wings, as well as the corners. Players in cluster~2, however, mostly shoot in the region facing the basket and its immediate neighbors. Compared to those in cluster~1, they make fewer corner shots, as the estimated~$\bm{\gamma}_1$ is almost~0 in the two regions on each side. Representative players for the two clusters are, respectively, James Harden and John Wall.
Their shot charts are given in Figure \ref{fig:angle_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = 0.55\textwidth]{plot/angle_clusters} \caption{Visualization of $\bm{\gamma}_1$ estimates for two shooting angle clusters.} \label{fig:angle_cluster} \end{figure} Shooting patterns in terms of distance to the basket are clustered into three groups, as visualized in Figure~\ref{fig:dist_cluster}. Players in cluster~1 have two hot regions: near the basket and beyond the three-point line. Point guards and shooting guards (small forwards) make up the majority of this cluster (75 players), with representative players such as Kyrie Irving and Stephen Curry. Compared with cluster~1, the 90 players in cluster~2 tend to shoot less often beyond the three-point line but make more perimeter shots. A representative player for this cluster is Russell Westbrook. Finally, in cluster~3, most of the 26 players only shoot in the regions closest to the basket, such as DeAndre Jordan and Clint Capela. Most of their shots are slam dunks and alley-oops. Some other players in cluster~3 (e.g., Fred VanVleet and Jonas Valanciunas), although also making perimeter shots and three-pointers, rely heavily on lay-ups. We pick one representative player from each cluster and present their shooting charts in Figure \ref{fig:dist_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = .7\textwidth]{plot/dist_clusters} \caption{Visualization of $\bm{\gamma}_2$ estimates for the three shooting distance clusters.} \label{fig:dist_cluster} \end{figure} Finally, the two clusters for quarters are visualized in Figure~\ref{fig:quarter_cluster}. In cluster~1, players make more shots in quarters~1 and~3 and fewer shots in quarters~2 and~4. Most players in this cluster are leading players on their teams, and they often take breaks during the second quarter.
In the fourth quarter, leading players may also take breaks if their teams lead or fall behind by wide margins. Stephen Curry, Kevin Durant and Paul George are in this cluster. In cluster~2, the distribution of shots across the four quarters is more even than that in cluster~1, and on average the estimated $\bm{\gamma}_3$ is relatively smaller. The cluster sizes are 91 and 100, which indicates that these two patterns are similarly prevalent among the players that we studied. We pick Anthony Davis and Chris Paul as two representative players from the two clusters and present their shooting charts over the four quarters in Figure \ref{fig:quarter_cluster_players} of the Supplementary Material. \begin{figure}[tbp] \centering \includegraphics[width = .8\textwidth]{plot/quarter_clusters} \caption{Visualization of $\bm{\gamma}_3$ estimates for two game quarter clusters.} \label{fig:quarter_cluster} \end{figure} \section{Discussion}\label{sec:discussion} We propose a new multidimensional tensor clustering method in this paper and demonstrate its utility by studying how shooting patterns are distributed over court locations and game time among different players. Our method is applicable to many other sports such as football and baseball, where it is natural to formulate and model the multi-way array data. The proposed method also applies to other applications such as imaging analysis and recommender systems. Several future work directions remain open. First, our method is based on the Poisson distribution for outcomes in the tensor, and it is of interest to generalize this assumption by considering other types of distributions such as zero-inflated Poisson and continuous distributions. Incorporating sparsity in tensor models is another interesting direction that will allow us to deal with high-dimensional tensors.
From an applications point of view, it is of interest to analyze and compare the shooting patterns between different periods of games/seasons, e.g., regular season versus playoff games, and pre-pandemic versus 2020 NBA bubble seasons.
\section{Introduction} The use of complex systems theory as an alternative paradigm for analyzing elite sport dynamics is currently arousing intense academic interest \cite{almeira2017structure,schaigorodsky2014memory,perotti2013innovation, erkol2020best}. Fostered by new advances in data acquisition \cite{pappalardo2019public,pettersen2014soccer} and artificial intelligence techniques \cite{fister2015computational,neiman2011reinforcement,mukherjee2019prior}, the use of state--of--the--art statistical tools to evaluate teams' performances is, nowadays, shaping a new profile of data--driven professional coaches worldwide. One can find in the literature plentiful research works in several fields of physics devoted to studying phenomena related to sports science \cite{laksari2018mechanistic,schade2012influence,le2015theory,yu2016textured,trenchard2014collective}. The research community in statistical physics has focused on the study of sports mainly within the framework of stochastic processes, for instance, by studying the time evolution of scores \cite{clauset2015safe,ribeiro2012anomalous,kiley2016game}. Alternatively, other studies propose innovative models, based on ordinary differential equations \cite{ruth2020dodge}, stochastic agent-based game simulations \cite{chacoma2020modeling}, and network science theory \cite{yamamoto2021preferential}, aiming to describe the complex dynamical behavior of teams' players. Formally, sports teams can be thought of as complex {\it sociotechnical systems} \cite{davids2013complex}, where a wide range of organizational factors might interact to influence athletes' performances \cite{hulme2019sports,petersen2020renormalizing,mclean2017s,soltanzadeh2016systems}. Particularly, in collective games like football, cooperative interplay dynamics seem to be a key feature to be analyzed \cite{gudmundsson2017review, gonccalves2016effects}.
In principle, collective behaviors in soccer are important since they are connected to team tactics and strategies. Usually, features of these collective behaviors are described by using simple group-level metrics~\cite{clemente2013collective, frencken2011oscillations, moura2013spectral, yue2008mathematicalp1, yue2008mathematicalp2,narizuka2019clustering}. Furthermore, temporal sequences of ball and player movements in football, showing traits of complex behaviors, have been reported and studied using stochastic models and statistical analysis~\cite{mendes2007statistics, kijima2014emergence, narizuka2017chasing,chacoma2020modeling}. Recent works have focused on describing cooperative on--ball interactions in football within the framework of network science \cite{martinez2020spatial,herrera2020pitch,buldu2019defining,gonccalves2017exploring,narizuka2021networks}. In \cite{garrido2020consistency}, for instance, D. Garrido et al. studied the so--called {\it Pitch Passing Networks} in the games of the Spanish League in the $2018/2019$ season. In this outstanding work, the authors use network metrics and topological aspects to define teams' consistency and identifiability, two highly relevant global indicators to analyze team performance during competition. From an alternative perspective, our research group has focused on studying the dynamical interactions at the microscopic level, i.e., by modeling player--player interactions. In a previous work \cite{chacoma2020modeling}, we proposed an agent--based model that correctly reproduces key features of global statistical indicators of nearly $2000$ real--life football matches. In the same line, the present paper aims to describe the complexity of this game by uncovering the underlying mechanisms ruling the collective dynamics. Our goal is to take a new step towards a full description of the spatiotemporal dynamics in a football match.
With this purpose, we posit a simple model based on linear interactions to describe teams' cooperative evolution. To do so, we analyzed a public database containing body--sensor traces from three professional football matches of the Norwegian team {\it Tromsø IL} (see Section \ref{se:metodos}). We will show that our model succeeds in capturing part of the cooperative dynamics among the players and that higher--order contributions (non--linear interactions) can be carefully modeled as fluctuations. Moreover, we will show that our framework provides a useful tool to analyze and evaluate tactical aspects of the teams. This paper is divided into three sections: in the section Material and Methods, we describe the database, the statistical regularities found in the data, and formally propose our theoretical framework. In the section Results and Discussion, we present our main results and relevant findings. Our conclusions and future perspectives are briefly summarized in the last section. \section{Material and methods} \label{se:metodos} \subsection{The database} In 2014, S.A. Pettersen et al. published a database of body--sensor traces from three professional soccer games recorded in November 2013 at the Alfheim Stadium in Troms{\o}, Norway \cite{pettersen2014soccer}. This database is divided into five datasets, containing the halves of the game Troms{\o}~IL vs. Str{\o}msgodset~IF ($DS1$, $DS2$), the halves of the game Troms{\o}~IL vs. Anzhi Makhachkala ($DS3$, $DS4$), and $40$ minutes of the game Troms{\o}~IL vs. Tottenham Hotspurs ($DS5$). This contribution also offers video records, but they were not used in our research. We highlight that only Troms{\o}~IL gave consent to be tracked with the body--sensors; thus, the traces available in the datasets are only from this team. The goalkeeper's position, likewise, was not tracked.
The player positions were measured at $20~Hz$ using the highly accurate {\it ZXY Sport Tracking system} by {\it ChyronHego} (Trondheim, Norway). However, to perform our analysis, we pre--processed the data so as to have the players' positions in the field in one-second windows. In this way, we lose resolution, but it becomes simpler to analyze players' simultaneous movements or coordination maneuvers. \subsection{Statistical regularities} In this section, we describe relevant aspects of the statistical observations that we used to propose our stochastic model. In this case, we focus on the analysis of $DS1$, but similar results can be obtained for the other datasets. Let us discuss Fig.~\ref{fi:stat}. In panel $(a)$ we show the team deployed in the field at two different times: in an offensive position at $t_1= 7~(min)$, and in a defensive position at $t_2=8~(min)$. The ellipses, drawn in the figure with dashed lines, give the standard deviation intervals around the mean (see supplementary material S1 for further details) and can be thought of as a measure of the team's dispersion in the field. The ellipses' area can be thought of as a characteristic area of the team deployed in the field. Note that the system seems to suffer an expansion at $t_1$ and a contraction at $t_2$. To characterize this process we study the temporal evolution of $s:= 100\times\frac{Ellipse~area}{Field~area}$ ($\%$), which we show in panel $(b)$. We measured $\avg{s(t)} = 5.1$, $\sigma_s = 2.2$. The low dispersion in the time series and the symmetry around the mean indicate that the system moves around the field with a well--defined characteristic area, exhibiting small variations throughout the dynamics of the game. Let us now focus on analyzing the level of global ordering in the team. To do so, we analyze the evolution of the parameter $\phi (t) = \big | \frac{1}{N}\sum^N_{n=1} \frac{\vec{v}_n(t)}{|\vec{v}_n(t)|} \big|$ (see eq.~(1) in \cite{cavagna2010scale}).
Here, $v_n$ is the velocity of player $n$, and $N$ is the total number of players. In the case $\phi \approx 1$, this parameter indicates that the players move as a {\it flock}, following the same direction. On the contrary, when $\phi \approx 0$, they move in different directions. Panel $(c)$ shows the temporal evolution of $\phi$. We measured $\avg{\phi(t)}= 0.7$, $\sigma_{\phi} = 0.2$. This level of global ordering during the game shows that there are time intervals when the players tend to move as a highly coordinated {\it flock} \cite{welch2021collective}. Therefore, it seems there are interactions among the teammates that cause the emergence of global ordering. We now turn our attention to the analysis of the temporal structure of the series $s(t)$ and $\phi(t)$. Let us define $t_R$ as the time of return to the mean value. In panel $(d)$ we show the distributions $P_{s}(t_R)$ and $P_{\phi}(t_R)$. In both cases, we can see heavy--tailed distributions. By performing a non--linear fit, using the expression $P(t_R) = C t_R^{-\gamma}$, for the case of the relative area $s$, within the range $(0, 0.4)~min$, we measured $\gamma_s = 0.96 \pm 0.03$. In the case of the order parameter $\phi$, in the whole range, we obtained $\gamma_{\phi} = 2.16 \pm 0.04$. It is well known that for a random walk process in one dimension the first--return probability decays with exponent $\gamma= 3/2$ \cite{redner2001guide}. In our case, the measured non--trivial exponents seem to indicate the presence of a complex multiscale temporal structure in the dynamics. Notice that these two parameters, related to the team structure and order, somehow encapsulate the memory and complex dynamics of the team during the match. We now focus on describing the dynamics of the center of mass (CM). In Fig.~\ref{fi:stat} panel $(e)$, we show the relations $x_{cm}$ vs. $v^x_{cm}$, and $y_{cm}$ vs. $v^y_{cm}$.
We can see that the position variables are bounded to the field area, and the velocities in both axes seem to be bounded within the small range $(-5,5)$ ($m/s$). In panel $(f)$, on the other hand, we show the distribution of accelerations. We measured $\avg{a^x_{cm}}=\avg{a^y_{cm}}=0$, $\sigma_{a^x_{cm}}=0.4$, and $\sigma_{a^y_{cm}}=0.2$ ($m/s^2$). Since the CM of the system is barely accelerated, we can approximate the center of mass as an inertial system. Then, to simplify our analysis, we can study the dynamical motion of the players from the center of mass frame of reference. In this frame, we aim to define action zones for the players. To do so, we analyze the positions in the plane that the players have explored during the match. The ellipses in panel $(g)$ give the standard--deviation intervals around the mean, and can be thought of as characteristic action zones for the players. Note that, from this perspective, it naturally emerges that Troms{\o}~IL used the classical tactical system {\it 4-4-2} in this half. Summarizing, we observed that $(i)$ the spatial dispersion of the players follows a well--defined characteristic area, $(ii)$ inside the team, the players' movements exhibit correlation and global ordering, $(iii)$ the system shows a complex multiscale temporal structure, and $(iv)$ the players' motion can be studied from the center of mass frame of reference, simplifying the analysis. In the next section, we use these insights to propose a simple stochastic model of cooperative interaction to analyze the spatiotemporal evolution of the team during the match. \subsection{The model} The interplay among teammates can be thought of as individuals cooperating to run a tactical system. In this frame, we aim to define a model to describe the spatiotemporal evolution of the team. Since our goal is to define a simple theoretical framework such that we can easily interpret the results, we propose a model based on player--to--player interactions.
We proceed as follows: (i) we define the equations of evolution for the players in the team, (ii) we use the empirical data to fit the equations' parameters, and (iii) we model the error in the fitting as fluctuations in the dynamics. In the following, we present the results in this regard. \subsubsection{The equations of evolution} In the CM frame of reference, we define $\vec{r}_n(t)= \big(x_n(t), y_n(t)\big)^T$ and $\vec{v}_n(t)= \big(v_n^x(t), v_n^y(t)\big)^T$ as the position and velocity of player $n$ at time $t$. We propose that the dynamical variables change driven by interactions that can be thought of as spring--like forces. Every player in our model is bound to $(i)$ a place in the field related to their natural position in the team, $\vec{a}_n$, and $(ii)$ the other players. The equation of motion for a team player $n$ can be written as follows, \begin{equation} \centering M_n \ddot{\vec{r}}_n = -\gamma_n\vec{v}_n + k_{an}(\vec{a}_n-\vec{r}_n) + {\sum_m}^\prime k_{nm}(\vec{r}_m-\vec{r}_n), \label{eq:eqmov} \end{equation} where the first term is a damping force, the second one is an ``anchor'' to the player's position, and the sum is the contribution of the interaction forces with the other players. We propose different interaction constants in the horizontal and vertical axes; therefore, the parameters $\gamma_n$, $k_{an}$, and $k_{nm}$ are $2$--D diagonal matrices, namely $\gamma_n= \big(\begin{smallmatrix} \gamma_n^x & 0\\ 0 & \gamma_n^y \end{smallmatrix}\big)$, $k_{an}= \big(\begin{smallmatrix} k_{an}^x & 0\\ 0 & k_{an}^y \end{smallmatrix}\big)$, $k_{nm}= \big(\begin{smallmatrix} k_{nm}^x & 0\\ 0 & k_{nm}^y \end{smallmatrix}\big)$. Moreover, since players have comparable weights, for simplicity we consider $M_n=1$ for all the players. Notice that, if equilibria exist, $\vec{r}_n(t\rightarrow \infty)= \vec{r}_n^*$ and $\vec{v}_n(t\rightarrow \infty)=0$.
Then, \begin{equation} -\big(k_{an}+ {\sum_m}^\prime k_{nm}\big) {\vec{r^*}_n} + {\sum_m}^\prime k_{nm}\vec{r^*}_m + k_{an}\vec{a}_n=0, \label{eq:equilibrium} \end{equation} must hold. Eqs.~(\ref{eq:eqmov}) can also be written as a first--order equation system as follows, \begin{equation} \begin{split} \dot{\vec{r}}_n &= \vec{v}_n\\ \dot{\vec{v}}_n &= -\big(k_{an}+{\sum_m}^\prime k_{nm}\big) \vec{r}_n + {\sum_m}^\prime k_{nm}\vec{r}_m -\gamma_n \vec{v}_n + k_{an}\vec{a}_n. \end{split} \label{eq:system1} \end{equation} Furthermore, by defining $\vec{x}=\big(x_1,...,x_{n}, v^x_1,...,v^x_{n}\big)$ and $\vec{y}=\big(y_1,...,y_{n}, v^y_1,...,v^y_{n}\big)$, we can write, \begin{equation} \begin{split} \dot{\vec{x}} = J^x \big(\vec{x}-\vec{x^*}\big)\\ \dot{\vec{y}} = J^y \big(\vec{y}-\vec{y^*}\big), \end{split} \label{eq:jacobian} \end{equation} where $J^x, ~ J^y \in R^{2n \times 2n}$ are the Jacobian matrices of system (\ref{eq:system1}). With Eqs.~(\ref{eq:jacobian}), we can analyze the system evolution separately along the horizontal and the vertical axes. Moreover, in Section \ref{se:results} we will show that the Jacobian matrices can be used to describe the team's collective behavior. \subsubsection{Fitting the model's parameters} \label{se:fitting} In this section we show how to fit the model's parameters $\gamma_n$, $k_{an}$, $k_{nm}$, and $\vec{a}_n$ by using the datasets and Eqs.~(\ref{eq:system1}) and (\ref{eq:equilibrium}). To proceed, we have considered the following steps: \begin{enumerate} \item For every player in the team, each dataset provides the position in the field $\vec{r}_n(t)$.
The velocity is calculated as $\vec{v}_n(t):=\frac{\vec{r}_n(t+\Delta t)- \vec{r}_n(t)}{\Delta t}$ ($\Delta t=1~s$). \item The discrete version of system (\ref{eq:system1}) gives the tool to estimate the states of the players at time $t+\Delta t$ by using as inputs the real states at time $t$ and the model's parameters, \begin{equation*} \begin{split} \vec{r}_n(t+\Delta t)^{\prime} &= \vec{r}_n(t)+\vec{v}_n(t)\Delta t\\ % \vec{v}_n(t+\Delta t)^{\prime} &= \vec{v}_n(t)+ \\ &+ \big(-\gamma_n \vec{v}_n(t)- \big(k_{an}+{\sum_m}^\prime k_{nm}\big) \vec{r}_n(t)+ \\ &+{\sum_m}^\prime k_{nm}\vec{r}_m(t)+ k_{an}\vec{a}_n\big) \Delta t. % \end{split} \label{eq:discretesystem} \end{equation*} Here, $\vec{r}_n(t+\Delta t)^{\prime}$ and $\vec{v}_n(t+\Delta t)^{\prime}$ are the model's estimations. \item Note that, by the definition of the velocity given in step 1, $\vec{r}_n(t+\Delta t) = \vec{r}_n(t+\Delta t)^{\prime}$. Then, at every step, the parameters are only used to predict the new velocities. \end{enumerate} Moreover, to simplify our framework, \begin{enumerate} \setcounter{enumi}{3} \item Since the parameters $\vec{a}_n$ are linked to the equilibrium values by Eq.~(\ref{eq:equilibrium}), without loss of generality, we take $\vec{c}_n=\vec{r^*}_n$ (where $\vec{c_n}$ is the center of the action zone of each player, empirically obtained from the datasets; see Fig.~\ref{fi:stat}). By doing this, the values $\vec{a}_n$ can be calculated from the values of $\vec{c}_n$ and the other parameters. \item To simplify our analysis, we normalized the players' positions in the dataset such that the standard deviation (scale) of the players' velocities is the unit (i.e., $\sigma_{\vec{v}}=1$). This is useful for later assessing the fitting performance. \end{enumerate} In this frame, we define the error $\vec{\xi}_n(t) := \vec{v}_n(t+\Delta t)-\vec{v}_n(t+\Delta t)^\prime$, and fit $\gamma_n$, $k_{an}$, $k_{nm}$ by minimizing the sum $\sum_t \sum_n \big|\vec{\xi}_n(t)\big|$.
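As a concrete illustration, the per--player fit can be cast as a regression problem: for one axis, the discrete acceleration of player $n$ is linear in his own velocity and in all positions, so the coefficients directly encode $\gamma_n$, $k_{an}$, and $k_{nm}$. The Python/NumPy sketch below assumes the positions of one axis are stored as a $(T, N)$ array, and uses ordinary least squares as a stand--in for the absolute--error sum minimized in the text:

```python
import numpy as np

def fit_player_params(x, n, dt=1.0):
    """Fit gamma_n, k_an, and the couplings k_nm (one axis) for player n.

    x : (T, N) array of positions in the CM frame; velocities and
    accelerations are forward differences, as in the discrete scheme.
    Returns (gamma, k_a, k_nm array, constant term = k_an * a_n)."""
    T, N = x.shape
    v = np.diff(x, axis=0) / dt                     # (T-1, N) velocities
    dv = np.diff(v, axis=0) / dt                    # (T-2, N) accelerations
    others = [m for m in range(N) if m != n]
    # Design matrix: [v_n, x_n, x_m (m != n), 1]; the coefficient of x_n
    # is -(k_an + sum_m k_nm), and the coefficient of x_m is k_nm.
    A = np.column_stack([v[:-1, n], x[:-2, n]]
                        + [x[:-2, m] for m in others]
                        + [np.ones(T - 2)])
    coef, *_ = np.linalg.lstsq(A, dv[:, n], rcond=None)
    gamma = -coef[0]
    k_nm = np.zeros(N)
    k_nm[others] = coef[2:N + 1]
    k_a = -coef[1] - k_nm.sum()
    return gamma, k_a, k_nm, coef[-1]
```

Looping the function over the two axes and all players yields a full parameter set; in practice one would swap the least-squares solver for an absolute-error minimizer to match the objective above.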
With this method, we obtain a unique set of parameters that govern the equations. For a more detailed description of the minimization procedure, cf. supplementary material S2. Notice that, to avoid possible large fluctuations linked to drastic tactical changes, we fitted the set of parameters to each dataset (a half match). This criterion, at the same time, lets us compare the strength of the interactions among different match halves. The results of the optimization process for $DS1$ are given in Table~\ref{tabla1}. There we show the values of the fitted parameters in both coordinates, $x$ and $y$. We can see $\gamma_n \approx 10^{-1}$, $k_{an} \approx 10^{-2}$ and $k_{nm} \lesssim 10^{-2}$. Particularly interesting are the parameters $k_{nm}$, since they indicate the strength of the interactions among players. In this case, we can see a wide variety of values, from strong interactions, as in the case of players $1-2$, to negligible interactions, as in the case of players $3-8$. For results on the other datasets, cf. supplementary material S3. \subsubsection{Modeling $\vec{\xi}_n(t)$ as fluctuations in the velocities} By using the optimal set of parameters calculated with the method proposed in the previous section, we can calculate, for all the players at every time step, the difference between the real velocity and the model's prediction. This defines $N$ time series $\vec{\xi}_n(t) = \big \{\vec{\xi}_n(t_0), \vec{\xi}_n(t_1), ...,\vec{\xi}_n(t_T)\big\}$, which can be thought of as stochastic fluctuations in the players' velocities. Note that, in the context of a football match, these fluctuations can be related to stochastic forces acting upon the players. With this idea in mind, we propose to introduce in system (\ref{eq:system1}) a noisy component linked to these fluctuations. With this aim, we focus on analyzing and describing the behavior of $\vec{\xi}_n(t)$. Let us turn our attention to Fig.~\ref{fi:ruido}.
Here, our goal is to characterize the fluctuations linked to $DS1$. In panel $(a)$, we show the distributions of values related to $\xi^x_n$ ($n=1,...,10$). We can see, in each case, the curves approach a Gaussian shape. The dashed line indicates a non--linear fit performed on the distribution given by the entire set of values ($\xi^x$); in this case, we measured $\avg{\xi^x} = 0.001 \pm 0.002$ and $\sigma_{\xi^x} =0.60 \pm 0.02$ $(m/s)$ ($R^2=0.97$). Note that the fluctuation scale is smaller than the velocity scale, $\sigma_{\xi^x}<\sigma_v$ (with $\sigma_v=1$; cf. Section \ref{se:fitting}, item $5$). Panel $(b)$, on the other hand, shows the autocorrelation functions $A_{\xi_n^x}(\tau) = \frac{\avg{Z Z^\prime} - \avg{Z} \avg{Z^\prime}} {\sigma_{Z} \sigma_{Z^{\prime}}}$, with $Z=\xi^x_n(t)$ and $Z^\prime =\xi^x_n(t+\tau)$. In each case, we can see an abrupt decay at the beginning. To guide the eye, the black dashed line in the plot shows the autocorrelation function for a white noise process. The inset in the panel shows the values of the Hurst exponent calculated by performing a Detrended Fluctuation Analysis (DFA) on $\xi^x_n(t)$. We obtain values around $0.5 \pm 0.06$, which is consistent with a set of memoryless processes. Panel $(c)$, on the other hand, shows the Pearson matrix $R_{nm}^x = \frac{C_{nm}^x}{\sqrt{C_{nn}^xC_{mm}^x}}$, where $C_{nm}^x$ is the covariance matrix of the series $\xi^x_n(t)$. We can see that $R_{nm}^x< 0.25~\forall~n,m$, which indicates a low level of linear correlation among the fluctuations associated with the different players. A similar description with analogous results can be made for $\xi^y_n$, by analyzing panels $(d)$, $(e)$ and $(f)$.
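The DFA estimates quoted for the inset of panel $(b)$ can be reproduced with a standard first--order DFA. The following minimal Python sketch (assuming the residual series is a plain one--dimensional array) returns the scaling exponent of the detrended fluctuation function, which is close to $0.5$ for a memoryless process:

```python
import numpy as np

def dfa_exponent(series, scales=None):
    """Order-1 Detrended Fluctuation Analysis of a 1-D series.

    Returns the slope of log F(s) vs. log s; ~0.5 for white noise."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    n = len(y)
    if scales is None:
        scales = np.unique(np.geomspace(8, n // 4, 12).astype(int))
    F = []
    for s in scales:
        nseg = n // s
        segs = y[:nseg * s].reshape(nseg, s)
        t = np.arange(s)
        # subtract a linear trend in each window, keep the squared residual
        sq = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segs]
        F.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope
```

Applying it to each residual series $\xi^x_n(t)$ and $\xi^y_n(t)$ gives one exponent per player, as in the inset.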
Based on the observations made above, we propose to model the fluctuations on both axes as non-correlated Gaussian noise, such that $\vec{\xi}_n(t)= \vec{\sigma}_n \xi_n$, with $\avg{\xi_n(t)}=0$, $\avg{\xi_n(t)\xi_n(t^\prime)}= \delta (t-t^\prime)$, $\avg{\xi_n(t)\xi_m(t)}= 0$, and $\vec{\sigma_n}= (\sigma^x_n, \sigma^y_n)$ the empirically measured scales of the processes. \section{Results and discussion} \label{se:results} \subsection{Simulations on the collective dynamics} As we previously stated, we first used the datasets to fit the model's parameters, and then characterized the errors as fluctuations in the velocities. With these inputs, in the frame of our model, we can simulate the players' collective dynamics and compare the results with the empirical data to assess the model's performance. In order to do this, we modify system (\ref{eq:system1}) as follows, \begin{equation} \begin{split} d\vec{r}_n &= \vec{v}_n dt\\ d\vec{v}_n &= \big[ -\big(k_{an}+{\sum_m}^\prime k_{nm}\big) \vec{r}_n + {\sum_m}^\prime k_{nm}\vec{r}_m - \\ &-\gamma_n \vec{v}_n + k_{an}\vec{a}_n \big] dt + \vec{dW_n}, \end{split} \label{eq:system2} \end{equation} where $d\vec{W}_n = \vec{\sigma}_n \xi_n dt$, with $\vec{\sigma}_n$ and $\xi_n dt$ as defined in the previous section. Note that (\ref{eq:system2}) is a system of stochastic differential equations (SDEs). To solve it, we use the Euler--Maruyama algorithm for It\^o equations. In this section, we show the results linked to dataset $DS1$; similar results for the other datasets can be found in the supplementary material S4. Let us focus on Fig.~\ref{fi:simulations}. Panels $(a)$ and $(b)$ show two heatmaps with the probability of finding a team player at position $(x,y)$. The left panel shows the results for the empirical data, whereas the right panel shows those for simulations. For better visualization, in both cases, the probabilities were normalized to the maximum value, defining the parameter $\rho \in [0,1]$.
As we can observe, the simulations reproduce the empirical observations reasonably well. To quantify the result, we calculated the Jensen--Shannon distance ($DJS$) between the distributions. We measured $DJS= 0.05$, which indicates a good similarity between the distributions. In panel $(c)$, on the other hand, we compare the players' action zones. The empirical observations are the shaded ellipses, whereas the simulations are the curves. We can see that, on the whole, the model gives a good approximation, particularly for those areas with smaller dispersion. In panel $(d)$, we analyze the kinetic energy of the system, $E_k := \sum_n \frac{1}{2} |\vec{v}_n|^2$. Our goal here is to globally describe the temporal structure of the system. In the inset, we show the temporal evolution of this quantity. Regarding the mean values of the energy, we measured $\avg{E_k}_{DS1}=10.3$ for the data, and $\avg{E_k}_{MO}=11.5$ for the simulations. We can see that the empirical energy reaches high peaks that are not observed in the outcomes of the model, compensating for its slightly lower mean. This effect indicates the presence of higher--order contributions, probably linked to non--linear interactions, which our model based on linear interactions does not take into account. The main plot in the panel shows the distribution of the times to return to the mean value for the kinetic energy, $P(t_R)$, for both the empirical data and the model. We can see that in both cases $P(t_R) \propto t_R^{-\gamma_{E_k}}$, with $\gamma_{E_k} = 2.4 \pm 0.1$. Note that the value of the exponent agrees with the value measured for the return times of the variable $\phi$ (see section Materials and Methods, Statistical regularities). This seems to indicate that the temporal structure captured by the evolution of the energy may be related to the emergence of order in the system. Let us turn our attention to the distributions' tails.
For the case of the empirical data, we can see the presence of extreme events that are not observed in the simulations. This effect, as well as the peaks in $E_k$, could be linked to higher--order contributions of the interactions. To summarize, in this section we showed that, despite its simplicity, our first--order stochastic model succeeds in uncovering several aspects of the complexity of the spatiotemporal structure underlying the dynamics. In the next two sections, we show how to take advantage of the fitted interactions to describe the players' individual and collective behavior. \subsection{Describing the team behavior by analyzing the model's parameters} \subsubsection{Hierarchical clustering classification on the team lineup} The interaction parameters $\vec{k}_{nm}$ can be useful to analyze the interplay among team players, and to describe the environment they are constrained to. In Fig.~\ref{fi:conexiones} panel $(a)$, we show a visualization of the magnitude of the players' interactions. In this network of players, the links' transparency (alpha value) represents the connection strength between players $n$ and $m$. These values are calculated as $\kappa_{nm} = \sqrt{({k^x}_{nm})^2 + ({k^y}_{nm})^2}$. Note that the connection values can be used as a proxy for the distance between the players. Let us define the distance $\delta_{nm}:= e^{-\frac{\kappa_{nm}}{\sigma_\kappa}}$ between players $n$ and $m$, where $\sigma_\kappa$ is the standard deviation of the set of values $\kappa_{nm}$. Note that the exponential function in $\delta_{nm}$ maps weak connections to large distances and strong connections to short distances. Then, based on this metric, we calculate the matrix of distances. With this matrix, by using a hierarchical clustering classification technique, we detect small communities of players within the team. In the following, we discuss the results in this regard.
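The pipeline just described (coupling strengths $\to$ distances $\to$ hierarchical clustering) can be sketched in a few lines of Python with SciPy. Here we assume $\sigma_\kappa$ is taken over the off--diagonal $\kappa_{nm}$ values, and we symmetrize the fitted couplings before building the distance matrix:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def player_communities(kx, ky, n_clusters=2):
    """Cluster players from the fitted coupling matrices (one per axis).

    kx, ky : (N, N) arrays with k^x_nm and k^y_nm (diagonal ignored)."""
    kappa = np.sqrt(kx ** 2 + ky ** 2)            # connection strengths
    kappa = 0.5 * (kappa + kappa.T)               # symmetrize the fit
    off = kappa[~np.eye(len(kappa), dtype=bool)]  # off-diagonal values
    delta = np.exp(-kappa / off.std())            # strong link -> short distance
    np.fill_diagonal(delta, 0.0)
    Z = linkage(squareform(delta, checks=False), method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Cutting the same linkage at different heights (or plotting it with `scipy.cluster.hierarchy.dendrogram`) reproduces the nested groups discussed below.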
In Fig.~\ref{fi:conexiones} panel $(b)$, we show the re--ordered matrix of distances with two equal dendrograms, one placed at the top and the other at the left, showing the hierarchical relationships between players. To perform this calculation we used the {\it Ward} method \cite{ward1963hierarchical}. With this classification, we can easily observe the presence of two main clusters of players. Those colored in green, players $5-10-8-9$, can be related to the offensive part of the team; the others to the defensive part. Within the latter group, the cluster colored in red (players $1-2-6-7$) is related to back and middle defenders on the left side of the field, whereas players $3-4$ are related to back defenders on the right. Within the former group, we can differentiate between a central--right group of attackers, $8,9,10$, and an individual group given by player $5$. As we said above, this technique allows us to study groups of players with strong interactions during the match. For instance, let us focus on analyzing the group of players $6-7$. These players cover the central zone; they are likely in charge of covering the gaps when other defenders go to attack, for instance, during the advances of the wing--backs $1$ and $4$ along the sides. We should expect them to behave similarly, which agrees with the result of our classification. The case of player $5$ is particularly interesting. Our results indicate that this player is less constrained than the other attackers. Probably, our classification framework is detecting that $5$ is a free player in the team, a classic {\it playmaker} in the midfield, in charge of generating goal--scoring opportunities. The information provided by the hierarchical clustering classification allows us to characterize the players' behavior within a team and, therefore, provides useful insight into the collective organization.
In the light of this technique, it is possible to link the strengths and weaknesses of the team to the levels of coordination among the players. For instance, if a lack of coordination is observed among the rival players on the right side of their formation (as we can see in the case of Troms{\o}~IL in $DS1$), it would be interesting to foster attacks in this sector. We could also perform a comparative study by analyzing several games to correlate results with the levels of coordination among the players. With this approach, we can detect whether there are patterns related to a winning or a losing formation, providing valuable information for coaches to use. The same idea can be applied to training sessions, to promote routines oriented towards strengthening the players' connections within particular groups in the lineup, aiming to improve the team's performance in competitive scenarios. To summarize, the hierarchical clustering analysis evidences the presence of highly coordinated behavior among subgroups of players, which can be directly related to their roles in the team. In this framework, coaches may find a useful tool to support the complex decision--making processes involved in the analysis of the tactical aspects of the team, to assess players' performances, to propose changes, etc. \subsubsection{Collective modes} The eigenvalues and eigenvectors of the Jacobian matrices $J^x$ and $J^y$ (see Eqs.~(\ref{eq:jacobian})) can be handy to describe some aspects of the team players' dynamics on the field. Note that, if the system exhibits complex eigenvalues, the corresponding eigenvectors bear information on the collective modes of the system and, consequently, on the collective behavior of the team. In Fig.~\ref{fi:autoval} panel $(a)$, we plot the system's eigenvalues $\lambda \in C$ as $Re(\lambda)$ vs. $Im(\lambda)$. We can see that in most cases $Im(\lambda)=0$. However, around $Re(\lambda) \approx -0.2$, we can see the presence of characteristic frequencies in both coordinates.
Let us focus on describing the case of $\lambda_1$, the eigenvalue with the smallest real part and $Im(\lambda_1)\ne 0$. This case is particularly important because the energy that enters the system as noise is transferred mainly to the vibration mode given by the eigenvector associated to $\lambda_1$, $v_{\lambda_1}$ (first mode) \cite{strogatz2018nonlinear,nayfeh2008applied}. In the frame of our model, therefore, $v_{\lambda_1}$ bears information on the collective behavior of the players. We calculated $\lambda^x_1=(-0.14 \pm i~0.11)~(1/s)$ and $\lambda^y_1= (-0.19 \pm i~0.04)~(1/s)$ (notice, complex conjugates are not shown in the plot). To describe these collective modes, let us focus on Fig.~\ref{fi:autoval}. Here panel $(b)$ is linked to the horizontal coordinate and panel $(c)$ to the vertical one. In the plots, each circle represents a player in his natural position on the field. The circles' radii are proportional to the absolute value of the components of $v^x_{\lambda_1}$ (panel $(b)$, blue) and $v^y_{\lambda_1}$ (panel $(c)$, yellow). Therefore, the size of the circles in the visualization indicates the effect of the vibration mode on the players, or, in other words, how much each player is involved in this particular collective behavior. For instance, we can see that player $1$ is not affected by the mode in the horizontal coordinate but is highly affected by the mode in the vertical coordinate. In another example, for the case of player $5$, we can see that the collective modes in both coordinates only slightly affect the motion of this player. This is because, as we have previously stated, $5$ seems to be a free player in the field, therefore his maneuvers are not constrained by other players. Conversely, player $9$ is the most affected in both coordinates, which seems to indicate that the collective behavior of the team directly affects the free movement of this particular player. A similar analysis can be performed for every team player on the field. 
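The role of complex eigenvalues can be illustrated with a minimal computation. The $2\times 2$ Jacobian below uses illustrative spring-like coefficients, not the fitted entries of $J^x$ or $J^y$; it shows how a negative real part with a nonzero imaginary part corresponds to a decaying oscillatory mode, as around $Re(\lambda)\approx -0.2$ in panel $(a)$.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] from the
    characteristic polynomial lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Illustrative Jacobian of a damped spring-like coupling (not fitted values):
# negative real parts mean perturbations decay; nonzero imaginary parts mean
# the decay is oscillatory, i.e. a vibration (collective) mode.
l1, l2 = eig2(-0.2, 0.1, -0.1, -0.2)
print(l1, l2)
```

For these illustrative coefficients the eigenvalues come out as $-0.2 \pm i\,0.1$, the same qualitative structure as the reported $\lambda^x_1$ and $\lambda^y_1$.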
Let us now focus on using this information to describe the behavior of the defenders and their roles in the team. Fig.~\ref{fi:autoval} shows that defenders 1 and 2, at the left-back in the formation, are slightly involved in the collective mode related to the horizontal axis [panel (B)] and highly involved in the mode related to the vertical axis [panel (C)]. This indicates that these players exhibit a natural trend to coordinate with the team in the vertical direction and behave more freely when they perform movements in the horizontal direction. Conversely, players 3 and 4, at the right-back in the formation, are highly involved in the collective mode related to the horizontal axis and slightly involved in the mode related to the vertical axis. This indicates that these defenders exhibit a natural trend to follow the movements of the team along the horizontal axis (towards the goal). These observations reveal a mixed behavior in the defense, where defenders 3 and 4 are more likely to participate in attacking actions, whereas defenders 1 and 2 are more likely devoted to covering gaps. Naturally, an expert coach may easily uncover these kinds of observations while attending a game. However, our technique could be useful for the systematic analysis of hundreds or thousands of games. The reader may note that the eigenvalues and eigenvectors of the system provide a handy analytical tool for coaches to assess several aspects of team dynamics. As in the case of the hierarchical clustering analysis, in this frame it is possible to link the strengths and weaknesses of a team to the collective modes uncovered by this technique, which could be useful to identify patterns associated with a winning or a losing formation and to act accordingly in decision--making processes. \subsubsection{Using network metrics to analyze the game Troms{\o}~IL vs. Anzhi} Datasets $DS3$ and $DS4$ are related to the first and the second half of the game Troms{\o}~IL vs. Anzhi. 
In this game, Anzhi scored a goal in the last minute of the second half to obtain a victory over the local team. In this context, the idea is to fit $DS3$ and $DS4$ to the model, obtain the networks of players defined by parameters $\vec{k}_{nm}$, and perform a comparative study of the two cases, analyzing our results by using traditional network science metrics. Let us focus on describing Fig.~\ref{fi:comparacion}. In panel $(a)$, we show the largest eigenvalue, $\lambda_1$, of the network adjacency matrix in both datasets. This parameter gives information on the network strength \cite{aguirre2013successful}. Higher values of $\lambda_1$ indicate that important players in the graph are connected among themselves. We can see that $\lambda_1$ in $DS3$ is $\approx 19\,\%$ higher than in $DS4$, which indicates that the network strength decreases in the second half of the game. In panel $(b)$ we show the algebraic connectivity, $\tilde{\lambda}_2$. This value corresponds to the second--smallest eigenvalue of the Laplacian matrix of the players networks (the smallest Laplacian eigenvalue is always zero) \cite{newman2018networks} and bears information on the structural and dynamical properties of the networks. Small values of $\tilde{\lambda}_2$ indicate the presence of nearly independent groups inside the network and are also linked to higher diffusion times; thus, a small value indicates a lack of players' connectivity. We can see that $\tilde{\lambda}_2$ decreases $\approx 31\,\%$ in $DS4$, which seems to indicate that in the second half of the game the team players lost cohesion. Box plots presented in panel $(c)$ show the weighted clustering coefficient~\cite{ahnert2007ensemble} of the team players. This parameter measures the local robustness of the network. We can see that in $DS3$ the clustering is slightly higher than in $DS4$, and the dispersion of the values is lower. This indicates that the network of players is more robust and homogeneous in the first half of the game. 
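These spectral quantities can be computed without specialized libraries. As a sketch, a plain power iteration recovers the largest adjacency eigenvalue $\lambda_1$ together with its eigenvector, whose entries give the eigenvector centrality used later in this section. The toy network weights below are illustrative, not the fitted $\vec{k}_{nm}$ values.

```python
# Power iteration: largest eigenvalue of a (weighted) adjacency matrix and
# the associated eigenvector, whose entries give eigenvector centrality.
# The toy weights below are illustrative, not the fitted k_nm values.

def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = norm  # for a symmetric non-negative A this converges to lambda_1
    return lam, v

# Toy 3-player weighted network: player 1 (index 1) is the most connected.
A = [[0, 2, 1],
     [2, 0, 2],
     [1, 2, 0]]
lam1, centrality = power_iteration(A)
print(round(lam1, 3), [round(x, 3) for x in centrality])
```

For this toy matrix $\lambda_1=(1+\sqrt{33})/2\approx 3.372$, and the middle player, having the strongest ties to the other two, receives the highest centrality, mirroring the interpretation used for panels $(a)$ and $(d)$.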
Lastly, in panel $(d)$ we show box plots with the eigenvector centrality \cite{newman2008mathematics} of the players in both networks. The centrality indicates the influence of a player in the team. A higher value for a particular player is related to strong connections with the other important players in the team. The mean value of the centrality is in both cases $\approx 0.3$ and the standard deviation $\approx 0.05$. The maximum, likewise, is very similar, $\approx 0.4$, and the median, shown in the box plot, is a little lower in $DS3$. We can also observe that the network linked to $DS3$ seems to exhibit more homogeneous values among the team players than the network linked to $DS4$. In the light of the results discussed above, we can see that in the second half of the game the team decreased in connectivity and cohesion, and became more heterogeneous. Considering that Troms{\o}~IL received a goal at the end of the second half and lost the game, the fall of these particular metrics seems to be related to a decrease in the team's performance. Notice that previous works devoted to the analysis of passing networks have also reported a relationship between the magnitudes of these particular metrics and team performances~\cite{buldu2019defining}. In this regard, the previous analysis suggests that there is consistency between the results reached through our methods and results previously reported in the literature. \section{Summary and conclusions} In this work, we studied the spatiotemporal dynamics of a professional football team. Based on empirical observations, we proposed to model the players' cooperative interactions to describe the global behavior of the group. In this section, we summarize our main results. Firstly, we surveyed a database containing body--sensor traces from one team in three professional soccer games. 
We observed statistical regularities in the dynamics of the games that reveal the presence of a strong correlation in the players' movements. With this insight, we proposed a model for the team's dynamics consisting of a fully connected system where the players interact with each other following linear--spring--like forces. In this frame, we performed a minimization process to obtain the parameters that fit the model to the datasets. Furthermore, we showed that it is possible to treat the higher--order contributions as stochastic forces in the players' velocities, which we modeled as Gaussian fluctuations. Secondly, once the model was defined, we carried out numerical simulations and evaluated the model performance by comparing the outcomes with the empirical data. We showed that the model generates spatiotemporal dynamics that give a good approximation to the real observations. Particularly, we analyzed $(i)$ the probability of finding a player at a position $(x,y)$, $(ii)$ the action zones of the players, and $(iii)$ the temporal structure of the system, by studying the time to return to the mean value in the temporal series of the kinetic energy of the system. Despite its simplicity, in all the cases the model exhibited a good performance. Thirdly, we described the system at the local level by using the parameters we obtained from the minimization process. For this, we proposed to use two analytical tools: a hierarchical cluster classification and an eigenvalue--eigenvector based analysis. We found that it is possible to describe the team behavior at several organization levels and to uncover non--trivial collective interactions. Lastly, we used network science metrics to carry out a comparative analysis of the two halves of the game Troms{\o}~IL vs. Anzhi. We observed that a decrease in connectivity and cohesion, and an increase in the heterogeneity of the network of players, seem to be related to a decrease in the team performance. 
We consider this contribution to be a new step towards a better understanding of the game of football as a complex system. The proposed stochastic model, based on linear interactions, is simple and can be easily understood in the frame of standard dynamical variables. Moreover, our framework provides a handy analytical tool to analyze and evaluate tactical aspects of teams, something helpful to support the decision-making processes that coaches face in their working activities. It is important to highlight that our framework is not limited to the analysis of football games. A similar approach can be used to study other sport disciplines, mainly when the evaluation of players' interactions is key to understanding the game results. In \cite{cervone2016multiresolution}, for instance, the authors use player-tracking data in basketball games to estimate the expected number of points obtained by the end of a possession. In this case, a complementary analysis within our framework could also unveil collective behavior patterns linked to players' coordination interactions that can be correlated with the upshot at the end of the possession intervals. However, we point out that the full dynamics of a football match (and of other sports) cannot be addressed by only analyzing cooperative aspects within a particular team. To describe the full dynamics, we should also model the competitive interactions among the players of both teams. To do so, we need to measure the interplay among rivals, which in our framework implies having body--sensor traces for both teams, something that football clubs reject because of competing interests. Moreover, it could be useful to have a record of the ball position to improve our analysis. In this context, the use of alternative measurement techniques based on artificial intelligence and visual recognition becomes relevant \cite{sanford2020group,ganesh2019novel,khan2018soccer}. 
To summarize, our model provides a simple approach to describe the collective dynamics of a football team, untangling interactions among players and stochastic inputs. The structure of interactions that emerges from this approach can be considered a new metric for this sport. In this sense, our analysis complements recent contributions in the framework of network science~\cite{martinez2020spatial,herrera2020pitch,buldu2019defining,gonccalves2017exploring,garrido2020consistency,narizuka2021networks}. Note that there is a major difference between our approach and current network-science-oriented methods: the latter analyze interactions based on players' passes, whereas our approach analyzes interactions based on players' movements. In this regard, the problem with studying a football team from passing/pitch networks alone is that this approach is entirely based on on-ball actions, which completely neglects how players behave when they are far from the center of the plays (off-ball actions). Our approach, instead, integrates the information of the entire team to calculate every single link between players; therefore, in our model we also consider off-ball actions, which is key to correctly evaluating teams' performance~\cite{casal2017possession}. In addition, many of the metrics of current use in this sport can be tested through our model~\cite{clemente2013collective,frencken2011oscillations,moura2013spectral,yue2008mathematicalp1,yue2008mathematicalp2,narizuka2019clustering}. On the other hand, to perform an analysis based on passing/pitch networks, it is required to have a record of the ball position and to be able to characterize events (passes) during the match. Our model, instead, employs just the data of players' positions, something that nowadays can be easily measured, both in training sessions and competitive scenarios, with the simplest GPS--trackers available in the market. 
Finally, we consider that it is possible to improve our model by incorporating non--linear interactions in the equations of motion. An analysis of the most commonly observed players' maneuvers may help to find new statistical patterns to be used as insight to propose a non--linear approach. In this regard, we leave the door open to future research projects in the field. \section*{Acknowledgement} This work was partially supported by grants from CONICET (PIP 112 20150 10028), FonCyT (PICT-2017-0973), SeCyT–UNC (Argentina) and MinCyT Córdoba (PID PGC 2018).
\section{Abstract} \setlength{\parindent}{10ex}The prevailing belief propagated by NBA league observers is that the workload of the NBA season dramatically influences a player's performance$^{1,2,3,4,5}$. We offer an analysis of cross-game player fatigue that calls into question the empirical validity of these claims. The analysis is split into three observational studies on prior NBA seasons. First, to offer an analysis generalized to the whole league, we conduct an examination of relative workloads with in-game player tracking data as a proxy for exertion. Second, to introduce a more granular perspective, we conduct a case study of the effectiveness of load management for Kawhi Leonard. Third, to extend the analysis to a broader set of fatigue sources, we examine the impact of schedule features structurally imposed on teams. All three analyses indicate the impact of cumulative player fatigue on game outcomes is minimal. As a result, we assert that, in the context of the measures already taken by teams to maintain the health of their players,$^6$ the impact of fatigue in the NBA is overstated. \section{Status Quo NBA} Throughout the past decade, addressing player fatigue at a schedule level has been, and continues to be, a prevalent concern for both NBA teams and the league office. \subsection{Density Reduction} One way the league office has reduced the burden placed on athletes is by removing occurrences where teams play four games in five days. In the 2012-13 season, there were 76 instances of four games in five days; in 2016-17, such instances were reduced to 23; and by the 2017-18 season, four games in five days became extinct from the schedule. This was achieved by strategically starting the season a week earlier: increasing the season length from 170 to 177 days allowed for a less condensed schedule$^7$. 
Such efforts have also aimed to reduce the number of three games in four days as well as back-to-back games; however, these events are still present throughout the schedule. As of the 2020-21 NBA season, the league office also introduced a new feature to the schedule: two-game series, where visiting teams play the home team two times in a row at the same arena. These series are played over the course of either two or three days. Not only did they reduce traveling and impose restrictions to reduce COVID-19 exposure, they also decreased the number of one-game road trips in the schedule. However, despite the league's efforts to design less densely packed schedules, many coaches such as Gregg Popovich and Nick Nurse have still adopted load management strategies where they rest their key players during the regular season. This is problematic for the NBA league office, especially when these star-less games are nationally televised. Not only are these games less entertaining for fans, they consequently result in substantially lower television ratings$^8$. To address this, in recent years the league office has instituted a resting policy that prohibits teams from resting healthy players in high profile, nationally televised games. In addition, barring unusual circumstances, teams should not rest multiple healthy players in the same game, nor should they rest healthy players on the road. The fines for violating these resting policies sit at a minimum of \$100,000$^9$. \subsection{Schedule Fairness} Many NBA teams located on the west coast and southern borders travel substantially more miles throughout a given season compared to northeastern teams. Consequently, these teams also cross many more time zones to get to and from games. \\ \begin{center} \includegraphics[width=6cm]{table10.png}\newline Fig. 1 \end{center} The map above illustrates the cumulative flight distance traveled for each NBA team in the 2014-2015 season. 
The sizes of the logos are scaled using percentiles, showing that the larger the logo, the more that team had to travel. The Portland Trail Blazers had to fly over 62,000 miles that season, while the Cleveland Cavaliers only traveled 35,000 miles. This raises an important question, to both NBA teams and the league office, as to whether or not the teams who travel more face a structural disadvantage due to scheduling. Aggregated travel could inflict cumulative fatigue, and having to cross time zones could also very well disrupt players' circadian rhythms and inhibit their performance and ability to recover. \section{Related Work} \subsection{Defining fatigue in basketball} To maintain consistency with prior work examining fatigue in basketball$^{10}$, we define fatigue as "an acute impairment of performance that includes both an increase in the perceived effort necessary to exert a desired force and the eventual inability to produce this force"$^{11}$. This definition is inclusive of fatigue from any source, not discriminating against muscular, cognitive, or any other qualifier attached to the source of the fatigue. On average, during a game of basketball, players travel 5–6 km at physiological intensities above lactate threshold and 85\% of maximal heart rate$^{12}$. This leaves no question that basketball competitions levy a degree of strain on players that, given insufficient recovery, could induce fatigue under this definition. \subsection{Fatigue and athlete performance} There is an extensive body of work examining the physiological demands placed on athletes in basketball at all levels. A literature review of twenty-five publications found evidence that players incur fatigue as the game progresses, albeit to differing degrees depending on position$^{12}$. "Higher-level players", the category NBA players would fall into, sustain greater workloads than lower-level players at the same positions$^{12}$. 
This suggests the NBA is a prime candidate for examining fatigue accumulation over a season. Nonetheless, the literature also creates a basis for questioning prevailing beliefs on fatigue's influence on game outcomes, because several research groups have failed to find a relationship between the muscular properties of athletes and player performance in basketball. An analysis of eight semi-professional basketball players found "trivial-small associations between all external and internal workload variables and player efficiency"$^{13}$. Furthermore, an article from Barca Innovation Hub examining training workload found no connection between a higher external load and better performance in-game$^{14}$. As opposed to in-game fatigue, analyses of structural fatigue sources suggest fatigue does influence performance. A research group using t-tests and analysis of variance (ANOVA) found a significant difference of means when traveling West to East, asserting that this influences a player's circadian rhythm$^{15}$. It is well-established that air travel influences athletes in general, and one research group asserts these impacts should materialize in the NBA$^{16}$. One of the biggest performance concerns with athlete fatigue is the potential for increased injuries with higher workloads. Work examining sports in general has shown that combining an understanding of the training dose and external influences helps players prevent injury and improve performance$^{17}$. However, there are inconsistent results across the literature about the impact of fatigue on injury occurrence in basketball. A paper examining thirty-three NBA players in an observational retrospective cohort study found an inverse relationship between training workload and injuries$^{18}$. Yet, a work using over 70,000 NBA games and 1,600 injury events showed the likelihood of injury increased by 2.87\% for every ninety-six minutes played and decreased by 15.96\% for every day of rest$^{19}$. 
\subsection{Gaps in the literature} There are three main gaps in the literature targeted by the three sections of this paper. First, we are unaware of any research that measures relative player workloads and performance at a league-wide level. This allows examination of exertion and cumulative fatigue related to in-game performance. These ideas are addressed in the section of the paper titled In-Game Workload Induced Fatigue. Second, we are unaware of any case studies on the effectiveness of strategies that rest players to avoid fatigue accumulation, referred to colloquially as load management. This is explored in the section titled Case Study of Kawhi Leonard. Lastly, while there has been work on structural fatigue in basketball$^{15, 16}$, we propose a more robust procedure for isolating the impact of structural fatigue that is less vulnerable to latent variables and offers interpretations of the relevance of observed impacts to game outcomes. This work is detailed in the section titled Structural Fatigue. \section{Data} To access data about player and team performance, as well as potential sources of fatigue, we used a set of existing R packages maintained by the basketball analytics community. These packages include nbastatr$^{20}$, airball$^{21}$, and NBAr$^{22}$. They provided access to box score statistics about player performance and some more advanced metrics, such as game net rating and Player Impact Estimate, at a game and season level. To measure in-game fatigue, we used a player's total in-game distance traveled, generated via computer vision by Second Spectrum and accessed courtesy of nbastatr$^{20}$. \section{In-game workload induced fatigue} This section examines player in-game workload and performance. The analysis of in-game workload induced fatigue consists of three conceptual divisions. First, an examination of how player exertion correlates with performance. 
Second, relating an accumulation of player workload over a window of time to player exertion in an observation game. Third, examining how an accumulation of player workload over a window of time influences player performance. These are located in sections 5.3, 5.4 and 5.5 respectively. \subsection{Theoretical foundations} A player's relative in-game distance is used as the proxy for player workload. The formal definition of work from kinematics includes an object's displacement and the force acting on the object, where W is work, F is force, and s is displacement: $$W = Fs$$ For an object of constant mass, the force required to produce a given motion is unchanged from game to game. Assuming a player's mass is constant over a season, changes in distance for a game therefore capture the relative change in that player's work for the game. In other words, this measure of work is agnostic to the duration and shape of the player's motion, and leveraging this allows measurement of player exertion with only tracking data of player displacement. It is important to acknowledge the literature establishes that in basketball different types of motions are associated with slightly different degrees of exertion due to the biomechanics associated with them$^{23, 24}$. Furthermore, players pushing against each other during games, moving in the z axis via jumps (of particular relevance to basketball), and the tracking data being generated by a computer vision model all introduce additional noise. However, there is no publicly available tracking data with the granularity to account for these sources of noise, nor is there a guarantee they even cause enough noise to justify the complexity of designing a system of equivalence. Therefore, studying a player's total displacement in the xy plane, relative to their own baseline, offers the most effective picture of relative changes in work available. 
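The proxy described above reduces to a simple computation: a player's distance in each game is expressed as a deviation from their own season mean. The sketch below uses illustrative per-game distances, not Second Spectrum measurements.

```python
from statistics import mean

# Relative per-game workload: a player's distance in each game expressed as
# a deviation from their own season mean, treating mass (and hence force)
# as constant so that distance tracks relative work W = F s.
# The distances below (in km) are illustrative, not measured values.

game_distance_km = [4.1, 4.5, 3.8, 5.0, 4.6]
season_mean = mean(game_distance_km)
workload_dev = [d - season_mean for d in game_distance_km]
print([round(x, 2) for x in workload_dev])
```

By construction the deviations sum to zero over the season, so positive values flag games of above-baseline work and negative values below-baseline work.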
\subsection{Exertion and performance methodology} An important assumption baked into the hypothesis that a fatigued player's performance suffers is that exerting more effort in a game positively affects performance. This relationship is non-trivial, because the scoring dynamics of basketball are highly tied to athlete skill level, offering some isolation from exertion not present in sports like track. The relationship between exertion and performance is modeled using simple linear regression, as we found no indication that a more flexible method was necessary. In the models, a player's deviation from their season average in a set of performance measures is the response, and the player's deviation from their season average in distance traveled divided by minutes played is the predictor: $$PMD =\beta_0 +\beta_1(DSD)$$ where PMD is the performance measure deviation from the season mean and DSD is the distance per second of playtime deviation from the season mean. As discussed in the Theoretical foundations, a player's deviations from their season average treat player mass as constant and correlate the relative change in work with a relative change in performance. The distance measured is the total player movement for the game (regardless of whether they have the ball). If the performance statistic measures offensive performance, such as field goal percentage, then only offensive distance divided by offensive minutes is used. This ideally tailors the player's exertion in that subarea of the game to their performance in that subarea as well. It is important to note this procedure was not used when measuring total workload for fatigue, discussed in the next section. The division of distance by playtime is necessary because many cumulative statistics like rebounds and steals naturally accumulate as a player is in the game for more time, which is not necessarily indicative of better performance. 
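The regression above can be fit in closed form by ordinary least squares. The sketch below uses hypothetical paired deviations, not values from the dataset.

```python
from statistics import mean

def ols(x, y):
    """Ordinary least squares fit y = b0 + b1*x (closed form)."""
    mx, my = mean(x), mean(y)
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Hypothetical per-game deviations: DSD = distance-per-second deviation,
# PMD = points-scored deviation, both taken from the player's season mean.
dsd = [-0.4, -0.2, 0.0, 0.2, 0.4]
pmd = [-2.0, -1.5, 0.5, 1.0, 2.0]
b0, b1 = ols(dsd, pmd)
print(round(b0, 3), round(b1, 3))
```

A positive $\beta_1$ on real data would indicate that games of above-baseline exertion coincide with above-baseline performance, which is the relationship tested in the results below.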
Each observation in this analysis is a player in a game, and two subsets of this data were used to improve the quality of the findings. The first subset was composed of players who were in the game for four or more minutes, and the second of players in the game for twelve or more minutes (NBA games are 48 minutes long). Excluding players in the game for fewer than four or twelve minutes reduces the variability in player performance measures. For example, a player who is substituted into the game for a couple of minutes and misses their one shot, but in the next game makes their single shot, would register a 100 percent change in shooting percentage. These small time periods offer a very noisy measurement of that player's performance. Filtering by minutes, as opposed to a different arbitrary cutoff, introduces less selection bias towards players with certain behavioral patterns. For example, a cutoff on the number of shot attempts would introduce a bias towards players who take lots of shots. Two subsets are reported because a stricter limit on the number of minutes in the game lowers variability in performance measures, at the trade-off of only measuring players who are in the game for a larger percentage of the time. The results section highlights the importance of this procedure to the empirical findings. \subsection{Results exertion and performance} There are two major takeaways from the analysis offered in this section, with full results in Fig. 2 and Fig. 3. The first takeaway is that deviations in player exertion appear to be loosely positively correlated with performance in the game. Points scored is arguably the most important performance metric because the points a player scores are most closely tied to the game's outcome. Points scored shows a strong correlation with player exertion for both subsets, with p-values of .01 and 1e-12 for the four and twelve minute subsets respectively. This offers evidence of a positive relationship between exertion and scoring. 
Furthermore, the correlation was positive and statistically significant for field goal percentage, offensive rebounds, and steals across both subsets.\newline \vspace{4cm} \begin{center} \includegraphics[width=6cm]{table1.png}\newline Fig. 2 \end{center} \vspace{.5cm} \begin{center} \includegraphics[width=6cm]{table2.png} \newline Fig. 3 \end{center} Despite these relationships, the results present a reasonable degree of ambiguity about how exertion relates to more granular measures of performance. There was no correlation in at least one subset for turnovers, three point shooting, offensive rebounds, and free throws. Furthermore, there was a significant inverse relationship with the number of offensive rebounds (more offensive rebounds is considered good in basketball), leaving a literal interpretation that greater exertion hurts performance in this capacity. Furthermore, the response is highly sensitive to the subset of the data used. P-values varied by several orders of magnitude across the subsets. Despite achieving significant p-values on some measures, the dramatic change in results across subsets has two potential causes. First, it could suggest there is a different underlying relationship across these different areas of the data; or second, the large number of observations is leading models to overassert the prominence of weak relationships, an established problem with large sample sizes and the p-value$^{25}$. Either interpretation raises questions about how any conclusions regarding exertion and performance will generalize. The second major takeaway is that exertion accounts for a low percentage of variation in performance across all measures and time constraints. Players in the game for longer than 12 minutes and field goal percentage had the largest adjusted r-squared value (.0013) of all measured factors. However, even this accounted for less than 1 percent of the variation in the response. This has two potential interpretations. 
First, player exertion accounts for a very small, arguably negligible, amount of the variation in player performance. Alternatively, measuring player exertion in this fashion introduces too much noise to capture the magnitude of its relationship with performance. Without additional experimentation it is challenging to argue for either hypothesis beyond speculation; however, there is work in the literature that has shown there is some ambiguity attached to the notion that simply exerting more work means an athlete will perform better in basketball$^{14,18}$. \subsection{Post distance overload regression} If players experience a performance detriment associated with fatigue, then subsets of the season where a player's workload increases should be followed by a regression. Extracting this relationship required examination of windows of the NBA schedule at the player level. An observation included a player in the game for more than 15 minutes for the 2015 season (n = 19,751). The playtime cutoff of 15 minutes is intended to exclude players who are in the game for a short period of time, as it is unlikely they experience measurable cross-game fatigue. To examine a potential association between prior workload and observed game performance, a simple linear regression was used. The player's distance over the previous x days, expressed as a difference from their season average, was treated as the predictor. The distance in the game immediately following the window of time, expressed as a difference from the season average, was treated as the response. Fig. 4 contains a scatter plot of these results for window sizes of 3, 5, 7, and 10 days with the regression line. \begin{center} \includegraphics[width=6cm]{table4.png} \newline Fig. 4 \end{center} This analysis offers evidence contrary to what the fatigued player hypothesis would suggest. Players that covered more distance in prior games covered more distance in the observation game as well, across all windows. 
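The windowed predictor can be computed with a simple pass over a player's game log. The sketch below uses hypothetical day indices and distances, not actual schedule data, and returns, for each observation game, the summed prior-window distance deviation alongside the observation game's own deviation.

```python
from statistics import mean

# For each observation game, sum the player's distance deviations over the
# previous `window` days; this is the predictor in the windowed regressions.
# Day indices and distances below are illustrative, not real schedule data.

def prior_window_load(games, window):
    """games: list of (day, distance_km); returns a list of
    (day, prior_window_load, observation_deviation) tuples for
    games that have at least one prior game inside the window."""
    season_mean = mean(d for _, d in games)
    out = []
    for day, dist in games:
        prior = [d - season_mean for t, d in games
                 if day - window <= t < day]
        if prior:
            out.append((day, sum(prior), dist - season_mean))
    return out

games = [(1, 4.0), (3, 5.0), (5, 4.5), (6, 5.5), (9, 4.0), (11, 5.0)]
for row in prior_window_load(games, 5):
    print(row)
```

Regressing the third column on the second, as in the OLS sketch earlier in the paper's methodology, reproduces the window analysis behind Fig. 4.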
These relationships had p-values $\le$ 2e-16, and adjusted R-squared values showed that about 5 percent of the variance in the response is explained by the window. One hypothesis for such a strong correlation is that this measurement strategy ties too closely to a player's usage rate, which maintains local temporal variations as their role on the team evolves over the season. This can be adjusted for by using a playtime-adjusted response, which is agnostic to a player's usage rate. The trade-off of this analysis is that playtime-adjusted distance is no longer a proxy for player work. Scatter plots of the relationships for window sizes 5 and 7 are in Fig. 5, and show how the positive association disappears, supporting this hypothesis. \newline \begin{center} \includegraphics[width=6cm]{table5.png} \newline Fig. 5 \end{center} \vspace{.25cm} In fact, the $\beta_1$ coefficient for the linear regressions became negative, with p-values of .003 and 1.1e-6 for windows of five and seven days respectively. This indicates a small but significant negative association. A concern with such a small negative relationship is that it could simply be regression to the mean. Furthermore, making the playtime adjustment violates the assumptions set in the empirical motivations, as it no longer measures cumulative fatigue. This makes interpreting this result in the context of cumulative fatigue impossible. \subsection{Post distance overload performance} Lastly, searching for any highly compelling evidence of players experiencing a workload regression, we ignored all intermediate measures and directly compared prior workload to observations of game outcomes. Figs. 6 and 7 show the results of treating performance measures as the response, with windows remaining the predictor. In line with the prior results, the relationship between these performance measures and fatigue is ambiguous. Field goal percentage achieves a significant p-value in the 3 day window but no others, as shown in Fig. 6.
Furthermore, for points there is a positive association with prior distance, meaning a literal interpretation would suggest players who are more fatigued score more points. Similar to the first windowed analyses that found a positive association between prior workload and observation workload, the most likely hypothesis for this relationship is that usage rate increases as players perform better. That is, if a player is scoring more points in prior games, they are given more playtime and in turn travel more distance. \begin{center} \includegraphics[width=6cm]{table6.png} \newline Fig. 6 \end{center} \vspace{.25cm} \begin{center} \includegraphics[width=6cm]{table7.png} \newline Fig. 7 \end{center} \subsection{In-game workload big picture} Across all of the approaches to modeling fatigue's influence on performance there is a consistent narrative. Regardless of how the data is subset or normalized, or what is treated as the response, there appears to be an inconsistent and largely ambiguous relationship between fatigue and performance. \section{Case Study of Kawhi Leonard} Load management is a strategy in which NBA coaches deliberately rest players, often star players, from playing a game, with the goals of reducing their risk of injury and preserving them for the playoffs. While these are ultimately long-run goals, the case study below explores how strategies of load management can pay off in the short run. In particular, it delves into whether or not Kawhi Leonard, the NBA player most notorious for practicing load management, plays better in the games immediately following periods of extra rest. \vspace{.25cm} \begin{center} \includegraphics[width=6cm]{table8.png} \newline Fig. 8 \end{center} \vspace{.25cm} Each point in Fig. 8 represents a game in Leonard's career, with the blue points showing instances where he was load managed before the game and the gray points showing no extra days of rest.
The response variable is Player Impact Estimate; PIE is the NBA's metric for player performance, factoring in both defense and offense. The dashed line across the middle represents an exceptional PIE of 0.15, which was the 95th percentile in the most recent NBA season. By a difference of means test, Kawhi was no more likely to perform better after being load managed. This is visually apparent in Fig. 8. The majority of the blue points are above the dashed line, suggesting that Leonard plays exceptionally well after resting for an extra day. However, these blue points are not convincingly above the gray points, as many of the gray points are also located above this dashed line. Even in games following no extra rest, Leonard performs at an elite level for his team. Thus there exists no substantial evidence that Leonard performs better in the games following load management than in the games where he did not rest extra. This suggests that load management is not particularly effective in the short run, at least for Kawhi Leonard. \section{Structural Fatigue} Structural fatigue encapsulates any factors outside of a team's control; in other words, those induced by the structure of the schedule. For example, teams have no influence over how far they have to travel to play a game or how many games they play in a week. We aggregated travel data from the 2010 through 2019 NBA seasons, courtesy of airball$^{21}$. This data consisted of between-game features of potential interest such as time zone changes, tipoff time, flight distance, days of rest, and schedule density. This section examines how these schedule-induced factors influence performance at a team level. \subsection{Model Construction} The first step in developing the model was to find a strong measure for capturing game outcome.
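The text does not specify which difference of means test was used; a Welch-style t statistic, which allows unequal variances between the two groups of games, is one common choice and is assumed in the sketch below:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(managed, regular):
    """Welch's t statistic comparing mean PIE in load-managed games
    against games with no extra rest (unequal variances allowed)."""
    na, nb = len(managed), len(regular)
    se = sqrt(variance(managed) / na + variance(regular) / nb)
    return (mean(managed) - mean(regular)) / se
```

The statistic would then be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain the p-value.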
Compared to alternatives such as score differential or a binary measure like win/loss, game net rating offered the most detailed and accurate measurement of the result of a game. Game net rating is the difference between a team's offensive and defensive rating for a given game, and both of these ratings are calculated per 100 possessions. Standardizing the response variable to be per 100 possessions allowed for stronger comparisons across NBA seasons, as playstyles and points per game evolve over time. Measuring the effects that schedule-induced factors have on game net ratings required normalizing for the strength of the teams. We explored several candidates for the strongest proxy of team strength. The simplest model uses the difference of the teams' end-of-season net ratings. Attempting to improve on this, we also tested models that temporally weight games based on recency, nonparametric models including random forest, and dimension-reduction techniques over a large set of team information using elastic net. All of these models achieved improvements within 2 standard errors of the simple net rating proxy in 10-fold cross-validation. As this paper is not about modeling team strength, an active area of sports research, we selected end-of-season net rating in a linear regression model as a proxy for team strength. We acknowledge that improving the proxy for team strength is an excellent direction in which to expand this work. After determining a proxy to account for differences in team strength, the next step was developing a model in two stages. The first stage regressed game net rating on the net rating difference strength proxy. Then, by calculating the residuals of this first regression and regressing them on all of the travel metrics, the second stage effectively teased out what portion of the variance in game net rating the travel metrics could explain that the strength proxy could not. The results can be seen in the following figures.
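The two-stage procedure described above can be sketched as follows (a schematic with one schedule factor for brevity; the function name and data layout are ours, not the authors'):

```python
import numpy as np

def two_stage(net_rating_diff, schedule_factor, game_net_rating):
    """Stage 1: regress game net rating on the team-strength proxy.
    Stage 2: regress the stage-1 residuals on a schedule factor, isolating
    variance that the strength proxy could not explain."""
    X1 = np.column_stack([np.ones_like(net_rating_diff), net_rating_diff])
    beta1, *_ = np.linalg.lstsq(X1, game_net_rating, rcond=None)
    residuals = game_net_rating - X1 @ beta1
    X2 = np.column_stack([np.ones_like(schedule_factor), schedule_factor])
    beta2, *_ = np.linalg.lstsq(X2, residuals, rcond=None)
    return beta1, beta2
```

In the full model the second stage stacks all travel metrics (rest difference, time zone shifts, schedule-density flags, flight distances) into the stage-2 design matrix at once.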
\subsection{First model results} $$\mathit{game\ net\ rating} = \beta_0 + \beta_1 \, (\mathit{net\ rating\ difference})$$ The proxy for team strength difference, net rating difference, yielded an adjusted R-squared of 0.2262. The data was structured such that the response was relative to the visiting team, meaning the $\beta_0$ term captures that, on average, visiting teams are disadvantaged by about $-2.81$ in game net rating. The $\beta_1$ coefficient indicates that for each point of net rating a given team had over their opponent, their game net rating for that game is expected to increase by 0.968. \subsection{Second model results} $$\mathit{residuals} = \beta_0 + \beta_i \, (\mathit{schedule\ factor}_i)$$ \begin{center} \includegraphics[width=6cm]{table9.png} \newline Fig. 10 \end{center} The regression results convey that, out of all the travel metrics, days of rest difference, traveling three hours back, and the observed game being the third game in four days were the only variables with statistically significant effects on game net rating. The coefficient on rest difference indicates that the visiting team's game net rating is expected to increase by 0.35 for each day of rest they have more than their opponent. Traveling three hours back also had a detrimental effect; if a team had to travel from the east coast all the way west, their game net rating is expected to decrease by about 1.740. Likewise, if the game being played is the team's third game in four days, the model estimates their game net rating to decrease by 1.290. However, it is important to acknowledge that these results had adjusted R-squared values indicating they explain less than .1 percent of the variance in the response, meaning they offer little information about why a game turned out the way it did. It was perhaps more insightful, however, to observe that many of the collected travel metrics had insignificant effects on game net rating.
Contrary to conventional wisdom, the observed game being the second leg of a back-to-back did not have a significant negative effect. Neither did the majority of our time zone shift metrics, save for three hours back. In addition, all of the distance metrics (team flight distance, flight distance difference, and windowed versions of these metrics) were also insignificant. This addresses the aforementioned concern that having to travel and cross more time zones inherently puts teams at a disadvantage: the results suggest this is not the case. Moreover, this provides useful insight to the NBA league office because it suggests that scheduling teams to fly more miles does not necessarily put them at a structural disadvantage, but that scheduling them to play 3 games in 4 days appears to be more consequential to performance. Such insight can help the league office design schedules that are more fair. \section{Relevance to Stakeholders} Fatigue analysis provides valuable insight to NBA teams and the league office, from the standpoints of strategy and schedule fairness as well as viewer and profit maximization. \subsection{NBA teams} NBA teams can use fatigue analysis to understand how to best allocate their resources and identify important areas of focus. When teams, trainers, and athletes understand how fatigue is impacting each game, they can develop strategies to minimize it. A team can use this knowledge to reallocate time and effort, such as changing team travel plans, changing how and when the team gets to the stadium, or adjusting players' rest-to-practice/warm-up ratio in order to decrease fatigue's impact on game net rating. Our work in particular suggests that teams' existing strategies are highly effective at mitigating the impacts of fatigue. Across all analyses, the impact of player fatigue on game outcomes was ambiguous, with the exception of playing 3 games in 4 days and the rest difference between teams at a schedule level.
These rare scheduling events may represent the last holdout of improvements achievable by NBA teams. Therefore, from a player-performance perspective there is little incentive for increased investment from teams in player recovery when existing procedures appear to be eliminating any regressions in performance. The case study of Kawhi Leonard shows that the impact of load management was somewhat ambiguous, at least in the short run. An NBA team would want to perform a similar analysis with other key players, because there exists an opportunity cost every time a player does not suit up for a game. The player will be more rested and have less potential for injury, but there is a higher chance of losing that game, and other players will have to exert themselves more. It would therefore be important for NBA teams to carry out some form of cost-benefit analysis to explore whether the payoffs of load management outweigh the deficits. \subsection{NBA league office} The structural fatigue analysis identified certain features of the schedule that detrimentally impacted team performance, albeit with a moderately small overall impact on game outcome. From a league office perspective, the goal is to strategically minimize occurrences of these events; when they are unavoidable, the league will want to make sure that they are distributed fairly so that certain teams are not put at structural disadvantages. Additionally, structural fatigue analysis is important when considering viewership and profit maximization. Using strategies like starting the season a week earlier as a blueprint, the league office can continue to take measures that adjust the schedule so that players have adequate time to rest and recover between games. Schedule analysis drives the insight needed to inform what kind of changes should be made.
This kills two birds with one stone, as it also serves as a means of discouraging teams from load managing their players. Organizing the schedule in such a way that coaches like Popovich or Nurse do not even have the need to rest their stars could effectively ensure that marquee players are suited up on the court. Doing so would very likely increase the entertainment value of NBA games, which in turn boosts broadcast viewership and revenue. \section{Future Work} \subsection{Exertion and performance} In the section on exertion and athlete performance, we were unable to establish a strong association between player exertion and performance. There is almost certainly a more pronounced relationship between player performance and exertion. In particular, there is significant room to improve this analysis with more granular tracking data that better characterizes in-game events. For example, off-the-ball movement requires less energy than on-ball movement, and accounting for this more accurately reflects the game. Achieving better mappings between the muscular properties of athletes and game outcomes will create a more nuanced picture of how exertion influences performance. At a team level, this picture could be improved by incorporating practice workload to build the most accurate measure of cumulative workload. \subsection{Long run load management} The case study suggests that Kawhi Leonard's player impact estimate does not necessarily improve in games immediately following additional rest compared to those with no load management. This calls into question the effectiveness of load management as a strategy in the short run; however, it would ultimately be more important to figure out how load management pays off in the long run. \section{References} \begin{enumerate} \item Holmes, B. "Schedule alert! Every game tired teams should lose this month." ESPN Enterprises, Inc, 1 Nov.
2017, https://www.espn.com/nba/story/\_/id/\newline 21236405/nba-schedule-alert-20-games-tired-teams-lose-november. \item Holmes, B. "Which games will your team lose because of the NBA schedule?." ESPN Enterprises, Inc, 30 Oct. 2018, https://www.espn.com/nba/story/\_/id/\newline 25117649/nba-schedule-alert-games-your-team-lose-2018-19. \item Magness, S. "Fatigue and the NBA Finals: How Players Raise Their Game When It Matters Most." Medium, 3 June 2017, https://medium.com/@stevemagness/how-player-fatigue-will-impact-the-nba-finals-74d8003d84de. \item "How Much Does Fatigue Matter in NBA Betting?." OddsShark, https://www.oddsshark.com/\newline nba/how-much-does-fatigue-matter-nba-betting. \item McMahan, I. "How sleep and jet-lag influences success in the travel-crazy NBA." Guardian News \& Media Limited, 26 Oct. 2018, https://www.theguardian.com/sport/2018/oct/\newline 26/sleep-nba-effects-basketball. \item "Professional teams are waking up to new methods of managing athlete fatigue." Fatigue Science, 10 Oct. 2013, https://www.fatiguescience.com/blog/\newline professional-teams-are-waking-up-to-new-methods-of-managing-athlete-fatigue. \item Schuhmann, J. "Schedule Breakdown: How rest, road trips and other factors will impact 2021-22." 2021 NBA Media Ventures, LLC, 22 Aug. 2021, https://www.nba.com/news/how-rest-road-trips-and-other-factors-played-out-in-2021-22-schedule. \item Mullin, B. "Injury-Plagued NBA Draws Fewer TV Viewers." 2021 Dow Jones \& Company, Inc., 27 Dec. 2019, https://www.wsj.com/articles/injury-plagued-nba-draws-fewer-tv-viewers-11577487612. \item Pickman, B. "Report: NBA Could Issue Teams Fines of \$100K For Resting Players Throughout Season." 2021 ABG-SI LLC. Sports Illustrated, 7 Dec. 2020, https://www.si.com/nba/2020/12/07/nba-resting-policy-load-management. \item Edwards T., Spiteri T., Piggott B., Bonhotal J., Haff G.G., and Joyce C. "Monitoring and managing fatigue in Basketball." Sports. 6: 19, 2018. 
\item Seamon, B.A., and Harris-Love, M.O. "Clinical Assessment of Fatigability in Multiple Sclerosis: A Shift from Perception to Performance." Front. Neurol. 7: 194, 2016. \item Stojanović, E., Stojiljković, N., Scanlan, A.T. et al. "The Activity Demands and Physiological Responses Encountered During Basketball Match-Play: A Systematic Review." Sports Med 48: 111–135, 2018. \item Fox, J., Stanton, R., O'Grady, C., Teramoto, M., Sargent, C., and Scanlan, A. "Are acute player workloads associated with in-game performance in basketball?." Biology of Sport, 2020. \item García, A.C. "Training Load and Performance in Basketball: The More The Better?." FC Barcelona Innovation Hub, 22 Oct. 2020, https://barcainnovationhub.com/training-load-and-performance-in-basketball-the-more-the-better/. \item Holmes, B. "How fatigue shaped the season, and what it means for the playoffs." ESPN Enterprises, Inc, 10 Apr. 2018, https://www.espn.com/nba/story/\_/id/\newline 23094298/how-fatigue-shaped-nba-season-means-playoffs. \item Huyghe T., Scanlan A., Dalbo V., Calleja-González J. "The negative influence of air travel on health and performance in the National Basketball Association: A narrative review." Sports. 6: 89, 2018. \item Drew, M.K., Cook, J., and Finch, C.F. "Sports-related workload and injury risk: simply knowing the risks will not prevent injuries: Narrative review." British Journal of Sports Medicine 50: 1306-1308, 2016. \item Caparrós T., Casals M., Solana Á., and Pena J. "Low External Workloads Are Related to Higher Injury Risk in Professional Male Basketball Games." J. Sports Sci. Med. 17: 289–297, 2018. \item Lewis, M. "It's a hard-knock life: game load, fatigue, and injury risk in the National Basketball Association." J. Athletic Train. 53, 503–509, 2018. \item Bresler, A. (2021). Nbastatr. R package version 0.1.1504. http://asbcllc.com/nbastatR/. \item Fernandez, J. (2020). airball: Schedule \& Travel Related Metrics in Basketball. R package version 0.4.0.
https://github.com/josedv82/airball. \item Chodowski, P. (2021). NBAr. R package version 4.0.4. https://github.com/\newline PatrickChodowski/NBAr. \item Abdelkrim, B.N., Castagna, C., El Fazaa, S., and El Ati, J. "The Effect of Players' Standard and Tactical Strategy on Game Demands in Men's Basketball." Journal of Strength and Conditioning Research 24 - Issue 10: 2652-2662, 2010. \item Abdelkrim, B.N., El Fazaa, S., and El Ati J. "Time–motion analysis and physiological data of elite under-19-year-old basketball players during competition." British Journal of Sports Medicine 41:69-75, 2007. \item Lin, M., Lucas, H.C., and Shmueli, G. "Too Big to Fail: Large Samples and the p-Value Problem." Information Systems Research 24: 906-917, 2013. doi:10.1287/isre.2013.0480. \end{enumerate} \end{flushleft} \end{multicols*} \end{document}
\section{World Rankings} One of the most fascinating results can be found in Table~\ref{mworldwind}, the wind--corrected performances. Donovan Bailey's 9.84s WR adjusts to a 9.88s equivalent in calm air. Meanwhile, Frank Fredricks' 9.86s clocking (Lausanne, 03 Jul 1996) was run with a wind reading of $-0.4$ m/s, which after correction surpasses Bailey's WR performance by 0.04s! It is certainly conceivable that, given the proper conditions, Fredricks could have claimed the elusive title of ``World's Fastest Man'' with this race. In fact, if Fredricks had given this same performance in Atlanta ({\it i.e.} with a wind speed of +0.7 m/s), he would have crossed the line in roughly 9.81s! It should be noted that, due to the drag effects mentioned earlier, races run into a head wind will have faster corresponding times than races run with a tail wind of equivalent strength. Figure~\ref{windplot} shows that the ``correction curve'' is not linear, but rather a curve bending toward the right. Hence, a head wind will fall on the ``steeper'' portion of the curve, while a tail wind will be on the shallower side. The 9.84s WR would rank 6th all--time if we included banned performances (Table~\ref{banned}). After correcting for the wind conditions, (Table~\ref{mworldwb}), this time climbs to 5th, but is surpassed by several different performances. Thompson's 9.69s run has a wind--corrected equivalent of 9.93s, and has sunk to 16th. Meanwhile, Davidson Ezinwa's 9.91s race (into a 2.3 m/s headwind) has a wind--corrected 9.76s equivalent. Note that this performance is marked with a ``d'', indicating a doubtful wind reading\footnote{It is generally known that athletes who race at altitude perform better than they do closer to sea level, and it has been suggested that this effect may be more physiological in nature than physical, since the corresponding change in air density does not yield the expected change in race time.}.
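The exact correction formula (\ref{wind}) is defined elsewhere in the paper, so the expression below is an assumption on our part; it is the approximation widely used in the sprint literature, and it reproduces the still-air equivalents quoted above (9.84s with a $+$0.7 m/s tail wind $\rightarrow$ 9.88s; 9.86s into a $-$0.4 m/s head wind $\rightarrow$ 9.84s):

```python
def still_air_time(t_w, wind, distance=100.0):
    """Approximate still-air equivalent of a sprint time t_w (seconds) run
    with wind speed `wind` (m/s, tail wind positive). The `distance`
    parameter mirrors the footnote below: the 100 in the numerator is
    replaced by 50 or 60 when correcting split times."""
    return t_w * (1.03 - 0.03 * (1.0 - wind * t_w / distance) ** 2)
```

The nonlinearity of this expression is what bends the correction curve of Figure~\ref{windplot}: head winds sit on the steeper part, tail winds on the shallower part.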
Florence Griffith--Joyner's 100m WR performance of 10.49s at the 1988 US Olympic Trials is questionable. It has been noted that, at the time of this race (second heat of the second round), the wind gauge for the 100m straight read a speed of +0.0 m/s, whereas a nearby gauge (for the jumps runway) read speeds in excess of +4.0 m/s. Furthermore, the wind reading for the third heat was +5.0 m/s. This mysterious sudden calm was never really addressed; it is unreasonable to assume that the wind would die down completely for the duration of one race. So, assuming that the wind speed was actually between +4.0 m/s and +5.0 m/s during Flo--Jo's ``WR'' race, she would have actually clocked a time somewhere in the range of 10.70s -- 10.74s for a still wind, which would be much more consistent with her other performances (her time of 10.61s in the final was legal, however, with a wind speed of +1.2 m/s). \section{Canadian Rankings} This analysis also shows some neat results of local interest. For example, Bailey's 9.84s WR from Atlanta rounds to a 9.88s still--air equivalent. Furthermore, if the correction formula (\ref{wind}) is applied to Bailey's Atlanta splits\footnote{The formula has to be modified for the distance, though; the '100' in the numerator changes to 50 and 60 to correspond to the race distance. Although, it doesn't make much of a difference to leave it as 100.}, these times can be compared with indoor performances (where there is no wind speed) over the same distance. In this case, one finds 50m and 60m times of 5.63s and 6.53s, respectively. The former (5.63s) is only 0.07s slower than his 50m 5.56s indoor WR (Reno, NV, 09 Feb 1996), a difference which could perhaps be accounted for by reaction time; {\it i.e.} if Bailey had a reaction time of around 0.11--0.12s for his 50m WR, then these results would be consistent. The latter (6.53s) is 0.02s off his 1997 indoor PB of 6.51s (Maebashi, Japan, 08 Feb 1997).
This would tend to suggest that Bailey's Olympic 100m WR was consistent with his other PB performances. The 1996 100m rankings can be similarly restructured. Table~\ref{allcan} shows the top 46 performances, accounting for wind assistance, and Tables~\ref{can10},\ref{canwind10} show the top 10 legal and wind--corrected rankings. The Canadian rankings do not suffer as much of a restructuring as do the World rankings. \section{Conclusions} Who, then, did run the fastest 100m race ever? Based on this model, and discounting substance--assisted performances, doubtful wind readings, and hand--times, Fredricks comes out the winner, and Bailey has to settle for 2nd. Only 3 of the top 20 performances are now sub--9.90, whereas before 8 out of 20 were under this mark. The third fastest wind--corrected time is Christie's; it would have been interesting had he not false--started out of the final in Atlanta. Only about 7 of the top 20 wind--corrected athletes will be competing this year (who knows if Christie will be back with a vengeance?). We'll most likely see the most sub--9.90s races to date. Most importantly, though, Fredricks has the real potential to better Bailey's existing record. Of course, Bailey will also have the same potential. It seems quite likely that the 100m WR will drop from its 1996 mark. Will we see a sub--9.80s race? If so, who will be the one to run it? Based on their best races last year, Bailey could run a 9.79s with a $+$1.7m/s tailwind, while Fredricks would need a mere $+$1.0m/s! With more training under their belt, who knows what to expect. Watch for a showdown between these two at the 1997 WC!
\section{Introduction} \label{sec:intro} Team sports analytics has numerous applications, ranging from broadcast content enrichment to game statistical analysis for coaches~\cite{Chen16, Thomas17, Zheng16}. Assigning team labels to detected players is of particular interest when investigating the relationship between team positioning and sport action success/failure statistics~\cite{Bialkowski14, Hobbs18, Liu14}, but also for some specific tasks such as offside detection in soccer~\cite{Dorazio09} or ball ownership prediction in basketball~\cite{Wei15}. Many previous works have investigated computer vision methods to detect and track team sport players~\cite{Cioppa18,Dorazio09,Lu18,Lu13,Manafifard17,Parisot17,Tong11}. They can detect individual players, but generally resort to impractical manual intervention or to unreliable heuristics to adapt their processing pipeline to recognize the players' team. Specifically, they generally need human intervention to adjust the team discriminant features (\eg \mbox{RGB} histogram in~\cite{Lu13}, or \mbox{CNN} features in~\cite{Lu18}) to the game at hand~\cite{Bialkowski14, Liu14, Lu18, Lu13}. A few methods have attempted to derive game-specific team features in an automatic manner~\cite{Dorazio09, Tong11}. They consider the unsupervised clustering of color histograms~\cite{Dorazio09} or bags of color features~\cite{Tong11} computed on the spatial support of the players that are detected in the game at hand. Those methods depend on how well color discriminates the two teams, but are also quite sensitive to occlusions and to the quality of player detection and segmentation~\cite{Manafifard17}. This probably explains why those previous works have been demonstrated in outdoor and highly contrasted scenes, as encountered in soccer for example. We show in Section~\ref{sec:eval} that those methods fail to address real-life indoor cases.
As observed in~\cite{Lu18}, indoor sports analytics have to deal with lower color contrast between players and background, and more dynamic scenes, with more frequent occlusions. \cite{Parisot16, Parisot17} also point out the low illumination, the strong reflections induced by dynamic advertising boards, the severe shadows, the large player density and the lack of color discrimination in indoor scenes. In our work, we do not arbitrarily select a handcrafted feature to discriminate the teams. We do not consider a framework that requires game-specific adjustment either. Instead we adopt a generic learning-based strategy that aims at predicting a feature vector in each pixel, in such a way that, independently of the game at hand, similar vectors are predicted in pixels lying in players from the same team, while distinct vectors are assigned to pairs of pixels that correspond to distinct teams. In other words, we train a neural network to separate, in an embedding space, the pixels of different teams and to group those belonging to the same team. A simple and efficient clustering algorithm can then be used to dissociate different teams in an image. Hence, we do not rely on explicit recognition of specific teams, but rather learn how to map player pixels to a feature space that promotes team clustering, whatever the team appearance. Although teams change at each game, there is thus no need for fine tuning or specific manual annotation for new games. The approach has been inspired by the associative embedding strategy recently introduced to discriminate instances in object detection problems~\cite{Newell17b, Newell17a}.
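A training objective of this kind can be sketched as a pull/push loss in the spirit of associative embedding~\cite{Newell17a}; the margin-based NumPy version below is only illustrative (the exact loss used by our network is not specified here, and the margin form is a simplification of the published variants):

```python
import numpy as np

def team_grouping_loss(embeddings, team_ids, margin=1.0):
    """Pull each pixel embedding toward its team's mean embedding, and push
    the team means apart by at least `margin` (a simplified, margin-based
    pull/push loss in the style of associative embedding)."""
    teams = np.unique(team_ids)
    centers, pull = [], 0.0
    for t in teams:
        e = embeddings[team_ids == t]
        c = e.mean(axis=0)                                 # team reference embedding
        centers.append(c)
        pull += float(((e - c) ** 2).sum(axis=1).mean())   # within-team spread
    push = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d = float(np.linalg.norm(centers[i] - centers[j]))
            push += max(0.0, margin - d) ** 2              # penalize close teams
    return pull / len(teams) + push
```

Minimizing such a loss drives same-team pixels toward a common point of the embedding space while separating the teams, which is exactly what makes the subsequent clustering step simple.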
However, differently from~\cite{Newell17b, Newell17a}, it is demonstrated using a lightweight \mbox{ICNet} convolutional neural network (opening broader deployment perspectives than the heavy stacked hourglass architecture promoted in~\cite{Newell17b, Newell17a}) and, to our knowledge, is the first work assigning similar embeddings to unconnected pixels, thereby extending the field of application of pixel-wise associative embedding. To validate our method, we have trained our network on a representative set of images captured in a variety of games and arenas. Since only a few player keypoints (head, pelvis, and feet) have been annotated in addition to the player team index, the player segmentation component of our network has been trained with approximate ground-truth masks, corresponding to ellipses connecting the keypoints. Our \mbox{CNN} model is validated on games (teams) and arenas that have not been seen during training. It achieves above $90\%$ team recognition accuracy, despite the challenging scenes (indoor, dynamic background, low contrast) and the inaccurate segmentation ground-truth considered during training. Interestingly, the lightweight backbone makes the solution realistic for real-time deployment. Our paper is organized as follows. Section~\ref{sec:sota} reviews the related works associated to CNN-based sport analysis, segmentation, and associative embedding. Section~\ref{sec:model} then introduces our proposed method, using a \mbox{ICNet} variant to both segment the players and compute pixel-wise team discriminant embeddings. The experiments presented in Section~\ref{sec:eval} demonstrate the relevance of our approach, while conclusions and some perspectives are provided in Section~\ref{sec:conclu}.
This section reviews the specific type of \mbox{CNN}, named the Fully Convolutional Network (\mbox{FCN}), that is used for image segmentation. It then introduces the recent associative embedding methods considered to turn object class segmentation into object instance segmentation. \subsection{Fully Convolutional Network (\mbox{FCN})} Fully Convolutional Networks are characterized by the fact that they output spatial feature maps, strictly computed by the recursive application of convolutional layers, generally completed with \mbox{ReLU} activation and batch-normalization or dropout regularization layers. In recent works dealing with sport video analysis, \mbox{FCNs} have been considered for specific segmentation tasks, including player jersey number extraction~\cite{Gerke17}, and soccer field line and player segmentation~\cite{Cioppa18}. In~\cite{Lu18}, a two-step architecture, inspired by~\cite{Yang16} and~\cite{Yu16}, is even proposed to extract player bounding-boxes with team labels. The network however needs to be trained on a game-per-game basis, which is impractical for large scale deployment. None of these works is thus able to differentiate player teams without requiring a dedicated training for each game, as proposed in Section~\ref{sec:model}, where a real-time amenable \mbox{FCN} provides the player segmentation mask, as well as a pixel-wise team-discriminant feature vector. There are two main categories of real-time \mbox{FCNs}: encoder-decoder networks and multi-scale networks. Encoder-decoder architectures adopt the encoder structure of classification networks, but replace their dense classification layers by fully convolutional layers that upsample and convolve the coded features up to pixel-wise resolution. \\ \mbox{SegNet} (Segmentation Network)~\cite{Badrinarayanan17} was the first segmentation architecture to reach near real-time inference.
It is a symmetrical encoder-decoder network, with skip connection of pooling indices from encoder layers to decoder layers. \mbox{ENet} (Efficient Neural Network)~\cite{Paszke16} follows \mbox{SegNet}, but comes with various improvements, the most prominent of which is the use of a smaller decoder than the encoder. Quite recently, several authors proposed to adopt multi-scale architectures to better balance accuracy and inference complexity. Considering multiple scales makes it possible to exploit both a large receptive field and a fine image resolution, with a reduced number of network layers. Among those networks, \mbox{ICNet} (Image Cascade Network)~\cite{Zhao18} is based on \mbox{PSPNet} (Pyramid Scene Parsing Network)~\cite{Zhao17}, a state-of-the-art network for non-real-time segmentation. \mbox{ICNet} encodes the features at three scales. The coarsest branch is a \mbox{PSPNet}, while finer ones are lighter networks, allowing segmentation to be inferred in real time. The two-column network~\cite{Wu17}, \mbox{BiSeNet} (Bilateral Segmentation Network)~\cite{Yu18}, \mbox{GUN} (Guided Upsampling Network)~\cite{Mazzini18} and \mbox{ContextNet}~\cite{Poudel18} are composed of two branches. \subsection{Associative embedding} An embedding vector is a descriptor that characterizes a signal locally in a way that can support a task of interest. Embeddings are thus not defined a priori. Instead, they are defined in an indirect manner, to support the task of interest. In computer vision, \mbox{FCNs} have recently been considered to compute pixel-wise embeddings in a variety of contexts related to pixel clustering or pixel association tasks. In this context, \mbox{FCN} training is not supervised to output a specified value. Rather, \mbox{FCN} training supervises the relations between the embedded vectors, and checks that they are consistent with the task of interest.
In~\cite{Vondrick18}, the embedding vector is used to compute the similarity between two pixel neighborhoods from two distinct images, typically to support a tracking task. Interestingly, a proxy task that consists of predicting the (known) color of a target frame based on the color in a reference frame is used to supervise the training of the \mbox{FCN} computing the embeddings. Good embeddings indeed result in relevant pixel associations, and in accurate color predictions. This reveals that an \mbox{FCN} can be trained in an indirect manner to support various higher-level tasks based on richer pixel-wise embeddings. Of special interest with respect to our team discrimination problem, \emph{associative} embeddings have been introduced in~\cite{Newell17b, Newell17a} and used in~\cite{Law18, Newell17b, Newell17a} to associate pixels sharing a common semantic property, namely the fact that they belong to the same object instance. The authors of~\cite{Newell17a} introduced associative embedding in the context of multi-person pose estimation from joint detection and grouping, and extended it to instance segmentation. More recently, \cite{Law18} proposed \mbox{CornerNet}, a new state-of-the-art one-shot bounding box object detector, by using associative embedding to group top-left and bottom-right box corners. In all these publications, the network is trained to give close embeddings to pixels from the same instance and distant embeddings to pixels corresponding to different instances. All these works are based on the same heavy stacked hourglass architecture. However, \cite{Newell17b} suggests that the approach is not strictly restricted to this architecture, as long as two important properties are fulfilled: first, the network should have access to both global and local information; second, pixel-wise prediction at fine resolution is recommended, in order to avoid that a single vector covers concurrent instances.
This makes \mbox{ICNet} a prime candidate to segment players and compute team-specific embeddings in real time, since it computes features at three scales instead of the two scales used by other lightweight multi-branch \mbox{FCN} architectures. \section{Team segmentation using pixel-wise associative embedding} \label{sec:model} Player team discrimination is not a conventional segmentation problem since the visual specificities of each class are not known in advance. This section explains how associative embedding can be combined with player segmentation to address this problem. \begin{figure*} \centering \includegraphics[width=0.9\linewidth,page=1]{head4.pdf} \caption{Overview of our architecture. \mbox{ICNet}~\cite{Zhao18} is used as the backbone for the following assets: pixel-wise segmentation, combination of three scales to encode global and local features, and speed (\cite{Zhao18}~reaches 30~FPS at $1024 \times 2048$ resolution). Its last convolution is modified to output a segmentation mask along with vector embeddings in each pixel. We keep the multi-scale supervision for the segmentation and add $L_{push}$ and $L_{pull}$ to obtain similar embeddings in pixels of the same team and distant embeddings for pixels of different teams. After network inference, a simple clustering algorithm can effectively split the pixels of the segmentation mask $\hat{m}$ into teams according to their embeddings $\mathbf{e}$.} \label{fig:network} \end{figure*} \subsection{Team discrimination \& player segmentation} We propose to adopt the associative embedding paradigm to support the team discrimination task. In short, we design a fully convolutional network so that, in addition to a player segmentation mask, it outputs for each pixel a \mbox{$D$-dimensional} feature vector that is similar for pixels that correspond to players of the same team, while being distinct for pixels associated with distinct teams.
As explained in the previous section, embedding learning is not based on explicit supervision. Instead, embeddings are envisioned as a latent pixel-wise representation, trained to support a pixel-wise association task, typically to group~\cite{Law18} or match~\cite{Vondrick18} pixels together. In the context of object detection, associative embedding has been applied with success in~\cite{Law18, Newell17a} to group pixels corresponding to the same object instance. In these works, multiple hourglass-shaped networks are stacked recursively in order to progressively refine the \mbox{1-D} embedding value that aims to differentiate object instances in a given class. Our work differs from~\cite{Newell17a, Newell17b} and~\cite{Law18} in two main aspects. First, and because we target real-time deployment, the stacked hourglass architecture is replaced by an \mbox{ICNet}~\cite{Zhao18} backbone, as illustrated in Figure~\ref{fig:network}. As stated in \cite{Zhao18}, \mbox{ICNet} reaches 30~FPS for images of $1024 \times 2048$ pixels on one Titan~X GPU card. We use \mbox{ICNet} because its multi-scale encoders, along with a spatial pyramid pooling, give access to a reasonably large receptive field (important to share embedding information spatially) while preserving the opportunity to exploit the high-resolution image signal locally (important for a fine characterization of the content). Second, our work deals with the problem of associating pixels of players that are scattered across the whole image. This is in contrast with the association of neighboring/connected pixels generally considered in traditional association tasks~\cite{Law18, Newell17a}. \subsection{Network architecture} The \mbox{ICNet} network architecture has mostly been left unchanged. Only the final convolution layer has been adapted to provide $D+1$ channels. Those comprise $1$ channel for semantic segmentation, with a sigmoid activation, along with $D$ channels for embeddings with linear activation.
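For illustration, the $D+1$-channel output head described above can be emulated with a few lines of NumPy (a minimal sketch with made-up shapes, not the authors' PyTorch code): the first channel goes through a sigmoid to produce the player mask, while the remaining $D$ channels are kept linear as embeddings.

```python
import numpy as np

def output_head(features, weights, bias):
    """Emulate the final 1x1 convolution producing D+1 channels.

    features: (C, H, W) feature map from the backbone.
    weights:  (D+1, C) kernel of the 1x1 convolution.
    bias:     (D+1,) bias vector.
    Returns (mask, embeddings): mask is (H, W) after a sigmoid,
    embeddings is (D, H, W) with linear activation.
    """
    out = np.einsum('oc,chw->ohw', weights, features) + bias[:, None, None]
    mask = 1.0 / (1.0 + np.exp(-out[0]))  # sigmoid on the segmentation channel
    embeddings = out[1:]                  # D linear embedding channels
    return mask, embeddings

# Toy usage: C=4 input channels, D=5 embedding dimensions, an 8x8 map.
rng = np.random.default_rng(0)
mask, emb = output_head(rng.normal(size=(4, 8, 8)),
                        rng.normal(size=(6, 4)),
                        np.zeros(6))
```

Any 1x1 convolution applied per pixel reduces to this channel-wise matrix product, which is why the sketch needs no convolution library.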
Figure~\ref{fig:network} presents the player segmentation channel in blue while the $D$ channels for embeddings are represented in orange. A number of loss functions are combined to train the network. Along with the multi-scale semantic segmentation loss from~\cite{Zhao18}, composed of $L_{124}$, $L_{24}$ and $L_4$, we add an embedding loss inspired by~\cite{Newell17a, Newell17b, Law18}. It comprises two components, $L_{pull}$ and $L_{push}$, which respectively pull teammates' embeddings together and push opponents' embeddings away from each other. $L_{pull}$ and $L_{push}$ only apply to the finest resolution. We have defined all loss components based on mean squared distances. \\ The $L_{124}$, $L_{24}$ and $L_4$ losses are defined as: \begin{equation} L_{s \in \{124,24,4\}} = \frac{1}{HW}\sum_{(i,j)}^{H\times W} (\hat{m}^s_{ij} - m^s_{ij})^2 \end{equation} with $H$ and $W$ being the layer height and width, while $\hat{m}^s$ and $m^s$ respectively denote the predicted and ground-truth player masks at scale $s$. Similarly, $L_{pull}$ is formulated as: \begin{equation} L_{pull} = \frac{1}{HW}\sum_{n}^{\{1,2\}} \sum_{(i,j)}^{M_n} (t_{ij}-T_n)^2, \end{equation} where $T_n$ is the mean of the embeddings $t_{ij}$ predicted across the pixels $M_n$ of team $n$, \ie $T_n = \frac{1}{|M_n|}\sum_{(i,j)\in M_n} t_{ij}$. \\ In~\cite{Newell17a}, the push loss is expressed as a mean over pairs of pixels of a cost function that is chosen to be high (low) when pixels that are not supposed to receive the same embedding have a similar (different) embedding. Recently, \cite{Newell17b} and~\cite{Law18} employed a ``margin-based penalty'', and wrote that this is the most reliable formulation they tested. Hence, we also adopt a margin-based penalty loss.
Formally, $L_{push}$ is defined similarly to $L_{pull}$, except that rather than penalizing embeddings that are far away from their centroid, it penalizes embeddings that are too close to the centroid of the other team: \begin{equation} L_{push} = \frac{1}{HW}\sum_{n}^{\{1,2\}}\sum_{(i,j)}^{M_n} \max(0;1-(t_{ij}-T_{3-n})^2) \end{equation} \\ Our global objective function finally becomes: \begin{equation} \begin{split} L = &\lambda_{124} L_{124} + \lambda_{24} L_{24} + \lambda_{4} L_{4}\\ + &\lambda_{pull} L_{pull} + \lambda_{push} L_{push} \end{split} \label{eq:loss} \end{equation} where the $\lambda$ loss factors have to be tuned (chosen values are given in Section~\ref{ssec:implem}). At inference, upsampling of the last layer is inserted before activation (respectively bilinear and nearest-neighbor interpolations for segmentation and embedding channels). Then, a clustering algorithm is required to group pixels into teams. Fortunately, as observed in~\cite{Newell17a}, the network does a great job at separating the embeddings of distinct teams, so that a simple and greedy method such as the one detailed in Algorithm~\ref{alg:clustering} is able to handle the clustering properly. As appears from the pseudocode, our naive clustering algorithm relies on the assumption that a player pixel embedding surrounded by similar embeddings is representative of its team embedding. Given a team embedding vector, player pixels are likely to be assigned to that team if their embedding lies in a sphere of radius $1$ around the team embedding. We incorporate a refinement step in which we compute the centroid of the selected pixels. Then, to resolve ambiguities, player pixels are associated with the closest of the centroids.
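As a minimal NumPy sketch (our own illustration, assuming $D$-dimensional embeddings and squared Euclidean distances; not the authors' released code), the pull and push terms of the objective can be computed as follows:

```python
import numpy as np

def pull_push_losses(emb, team_masks):
    """Sketch of L_pull and L_push for an (H, W, D) embedding map.

    team_masks: list of two (H, W) boolean masks, one per team.
    Both losses are averaged over all H*W pixels, as in the text.
    """
    H, W, _ = emb.shape
    centroids = [emb[m].mean(axis=0) for m in team_masks]  # T_1, T_2
    pull = push = 0.0
    for n, m in enumerate(team_masks):
        d_own = ((emb[m] - centroids[n]) ** 2).sum(axis=1)
        d_other = ((emb[m] - centroids[1 - n]) ** 2).sum(axis=1)
        pull += d_own.sum() / (H * W)                           # pull teammates together
        push += np.maximum(0.0, 1.0 - d_other).sum() / (H * W)  # margin-based penalty
    return pull, push

# Toy example: two well-separated teams give zero pull and zero push.
emb = np.zeros((2, 2, 5))
emb[0, :, 0] = 10.0   # team 1 pixels share one embedding
emb[1, :, 1] = -10.0  # team 2 pixels share another, far-away embedding
m1 = np.array([[True, True], [False, False]])
m2 = ~m1
pull, push = pull_push_losses(emb, [m1, m2])
```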
\begin{algorithm} \DontPrintSemicolon \newcommand{\mathbf{e}}{\mathbf{e}} \newcommand{\mathbf{c}_n}{\mathbf{c}_n} \SetKwInOut{AData}{Input} \SetKwInOut{AResult}{Result} \SetKwInOut{AInit}{Init} \AData{$\hat{m}_{ij}$ the predicted segmentation mask \\ $\mathbf{e}_{ij}$ the predicted embedding in pixel (i,j)} \AResult{$O \in [\![ 0; 2 ]\!]^{H\times W}$ teams occupancy map ($0$ is associated to the background)} \BlankLine $\mathcal{N}(i,j)$ neighborhood of pixel $(i,j)$ \; $V_{ij} \gets \frac{1}{\lvert\mathcal{N}(i,j)\rvert} \sum_{(k,l)\in\mathcal{N}(i,j)} \lVert\mathbf{e}_{ij} - \mathbf{e}_{kl}\rVert_2^2$ \; $n \gets 0$ the team counter \; $\mathcal{R} \gets \mathcal{S} \gets \{(i,j) \mid \hat{m}_{ij} > 0.5\}$ the pixels to cluster \; $O \gets D_{1} \gets D_{2} \gets \mathbf{0}^{H\times W}$ \; \While{$\mathcal{R} \neq \{\}$ and $n<2$}{ $n \gets n+1$ \; $(i, j) \gets \argmin_{(i,j) \in \mathcal{R}} V_{ij}$ \; $\mathbf{c}_n \gets \mathbf{e}_{ij}$ the centroid for team n \; $\mathcal{M}_n \gets \{(i,j) \in \mathcal{R} \mid \lVert \mathbf{e}_{ij}-\mathbf{c}_n\rVert_2^2 < 1\}$ \; \BlankLine $\mathbf{c}_n \gets \frac{1}{|\mathcal{M}_n|}\sum_{(i,j)\in \mathcal{M}_n} \mathbf{e}_{ij}$ \; $\mathcal{M}_n \gets \{(i,j) \in \mathcal{R} \mid \lVert \mathbf{e}_{ij}-\mathbf{c}_n\rVert_2^2 < 1\}$ \; \BlankLine $\mathcal{R} \gets \mathcal{R} \setminus \mathcal{M}_n$ \; \For{$(i,j) \in \mathcal{S}$}{ $ D_{n}(i,j) \gets \lVert \mathbf{e}_{ij}-\mathbf{c}_n\rVert_2^2 $ \; } } \For{$(i,j) \in \mathcal{S}$}{ \uIf{$n = 1$}{ $ O(i,j) \gets 1 $ \; } \ElseIf{$n = 2$}{ $ O(i,j) \gets \argmin( \{ D_1(i,j); D_2(i,j) \} ) +1 $ \; } } \caption{Simple clustering algorithm of pixels in the space of their associated embeddings. Up to two centroids are searched and refined, from the observation that for a team, embeddings of neighbour pixels can serve as initial prototype when they are similar. Embeddings similarity in the neighborhood of pixel $(i,j)$, $\mathcal{N}(i,j)$, is called $V_{ij}$. 
After that, points are clustered according to their embedding vector's distance to the centroids mapped in the $D_{n}$ arrays.} \label{alg:clustering} \end{algorithm} \subsection{Implementation details and hyperparameters} \label{ssec:implem} Our network is trained to extract players only, and to estimate associative embeddings for team discrimination. Referees and other non-player persons are part of the background class. Our work is based on the \mbox{PyTorch} \mbox{ICNet} implementation~\cite{mshahsemseg}. Parameters have been empirically tuned. For the training, we employ the Adam optimizer~\cite{Kingma15}. The loss factors defined in Equation \ref{eq:loss} are: $\lambda_{124} = 1$ and $\lambda_{24} = \lambda_{4} = 0.4$ as in the original \mbox{ICNet}~\cite{Zhao18}, and $\lambda_{pull} = \lambda_{push} = 4$, which is very different from~\cite{Newell17a, Newell17b, Zhao18} because our pull and push loss definitions are averaged over pixels rather than over instances. The best learning rate we found is $lr = 10^{-3}$, combined with the ``poly'' learning rate decay taken from~\cite{Zhao17, Zhao18, Yu18} and their own sources. Compared to them, we apply the decay by epochs instead of iterations, but we keep the same power of 0.9. Hence, the learning rate at the $k^{\text{th}}$ epoch is $lr \cdot (1 - \frac{k}{max})^{power}$, with $max=200$ denoting the total number of epochs, and $lr$ being the base learning rate defined above. All but the last layer of \mbox{ICNet} are initialized with pretrained Cityscapes~\cite{Cordts16} weights from~\cite{Zhao18}, but a full training is done as the point of view adopted for sport field coverage is too different from the frontal point of view considered by cars in Cityscapes. Minibatch size is 16 and batch-normalization is applied.
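The per-epoch ``poly'' decay described above is straightforward to sketch (illustration only; the constants follow the values reported in the text):

```python
def poly_lr(base_lr, epoch, max_epochs=200, power=0.9):
    """'Poly' learning-rate decay applied per epoch, as described above."""
    return base_lr * (1.0 - epoch / max_epochs) ** power

# With base_lr = 1e-3, the schedule starts at 1e-3 and decays to 0 at epoch 200.
schedule = [poly_lr(1e-3, k) for k in range(200)]
```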
Neither weight decay regularization nor dropout is added, but the following random data augmentation is considered: mirror flipping, central rotation of maximum 10 degrees, scaling such that $\min(\text{width}, \text{height}) = \text{random}([\frac{2}{3},\frac{3}{2}]) \times 512$, and color jitter in the perceptually uniform \mbox{CIE~L*C*h} color space limited to L~$\pm 10$, C~$\pm 7$ and h~$\pm 30$ degrees, to keep natural colors. We trained the network on crops of $512 \times 512$ pixels, located randomly in scaled images. Validation is performed on $512 \times 512$ pixel patches, extracted from images scaled such that their $\min(\text{width}, \text{height})$ equals 512. For each model, we select the parameters of the best epoch according to a validation score defined as the mean of the intersection over union of the two teams, between the prediction and our approximate reference masks. Inference for testing is done on court images downsampled to $1024 \times 512$ and padded to preserve the aspect ratio. In our implementation, we adopted \mbox{$5$-D} embeddings, mainly because more dimensions a priori provide more capacity to capture/encode visual team characteristics unambiguously. We expect this capacity to become especially useful when the receptive field does not cover the whole scene. In that case, the embedding prediction in one pixel may not be able to rely on a teammate's appearance or on the absence of collision with an opponent embedding when those players are far and disconnected from the pixel of interest. The embeddings then have to be consistent across the scene, despite their relatively local receptive field. In other words, they have to capture local team characteristics unambiguously.
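The \mbox{CIE~L*C*h} color jitter can be sketched as follows, assuming the image has already been converted to CIE Lab (the RGB--Lab conversion itself is omitted; `jitter_lch` and `random_jitter` are hypothetical helper names, not the authors' code):

```python
import numpy as np

def jitter_lch(lab, dL=0.0, dC=0.0, dh_deg=0.0):
    """Apply an offset in CIE L*C*h to an (H, W, 3) Lab image.

    Lab -> LCh: C = sqrt(a^2 + b^2), h = atan2(b, a); the jittered
    chroma is clipped at 0 so the hue stays meaningful.
    """
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)
    h = np.arctan2(b, a) + np.deg2rad(dh_deg)
    C = np.maximum(C + dC, 0.0)
    return np.stack([L + dL, C * np.cos(h), C * np.sin(h)], axis=-1)

# Random per-image offsets in the ranges used for training (L +-10, C +-7, h +-30 deg).
rng = np.random.default_rng(0)
def random_jitter(lab):
    return jitter_lch(lab,
                      dL=rng.uniform(-10, 10),
                      dC=rng.uniform(-7, 7),
                      dh_deg=rng.uniform(-30, 30))
```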
In practice, \mbox{ICNet} builds a global receptive field, and our trials provided similar results with 1- to \mbox{5-D} embeddings. \section{Experimental validation} \label{sec:eval} To assess our method, this section first introduces an original dataset, and associated evaluation metrics. It then runs a \mbox{K-fold} cross-validation procedure, and compares the performance of our associative embedding team discrimination with that of a conventional color histogram clustering, applied on top of instance segmentation. \subsection{Dataset characteristics} \label{ssec:feet} To demonstrate our solution, we have considered a proprietary basketball dataset. It involves a large variety of games and sport halls: images come from 40 different games and 27 different arenas. The images show a wide variety of situations: occlusions between teammates and/or opponents, regular player distribution, absence or presence of all the players, images from training sessions and professional games with an audience, various game actions, still and moving players, presence of referees, managers, mascots, dynamic LED advertisements, photographers or other humans, various lighting conditions, and different image sizes (the smaller dimension is generally close to or greater than 1000 pixels). This dataset is composed of 648 images covering a bit more than half of the sport field. Each player has been manually annotated. Annotations considered in our work include a team label (Team~A vs. Team~B), and an approximate player mask. This mask has been derived from manual annotation of head, pelvis, and feet. It consists of seven ellipses approximately covering the head, the body (between head and pelvis), the pelvis, the legs (between pelvis and each foot), and the feet. Occlusions between ellipses of players located at different depths have been taken into account. Similarly to~\cite{Cioppa18}, our experiments reveal that the network can learn despite the coarseness of the masks.
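As an illustration of how such approximate masks can be built, the sketch below rasterizes a single ellipse whose major axis joins two annotated keypoints (a hypothetical helper; the exact ellipse sizes used for the annotations are not specified in the text):

```python
import numpy as np

def ellipse_mask(p, q, width, shape):
    """Boolean mask of an ellipse whose major axis joins keypoints p and q.

    p, q:  (x, y) keypoints, e.g. head and pelvis.
    width: minor-axis length (approximate body width), in pixels.
    shape: (H, W) of the output mask.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    center = (p + q) / 2.0
    a = np.linalg.norm(q - p) / 2.0  # semi-major axis
    b = width / 2.0                  # semi-minor axis
    theta = np.arctan2(q[1] - p[1], q[0] - p[0])
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xs - center[0], ys - center[1]
    # Rotate into the ellipse frame and test the canonical equation.
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

# Approximate "body" ellipse between a head and a pelvis keypoint.
mask = ellipse_mask(p=(20, 10), q=(20, 40), width=12, shape=(64, 64))
```

A full player mask would union seven such ellipses (head, body, pelvis, two legs, two feet) and handle depth-based occlusions between players.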
The size of players in the images feeding the network (scaling strategy in Section~\ref{ssec:implem}) is around $25 \times 75 \pm 15$ pixels. \subsection{Evaluation metrics} Our network enables player segmentation, as well as team discrimination. Evaluation metrics should thus reflect whether players have been properly detected, and whether teammates have received the same team label. Therefore, we consider the following counters and metrics, to be computed on a set of test images: \begin{itemize} \item $N_{miss}$: Number of missing players \item $N_{corr}$: Number of correct team associations \item $N_{err}$: Number of incorrect team associations \item Missed player rate, \begin{equation} R_{miss} = \frac{N_{miss}}{N_{corr}+N_{err}+N_{miss}} \end{equation} \item Correct team assignment rate, \begin{equation} R_{CTA} = \frac{N_{corr}}{N_{corr}+N_{err}} \end{equation} \end{itemize} We now explain how the outputs of our network, namely the player segmentation mask and the map of team labels derived from the embedding clusters, are turned into those evaluation metrics\footnote{Since accurate ground-truth segmentation masks are not available from the dataset (see Section~\ref{ssec:feet}), the segmentation quality cannot be assessed based on conventional intersection over union metrics.}. Given a reference segmentation mask and a team label for each player instance, a simple majority vote strategy is adopted. A player is considered to be detected when the majority of pixels in the player instance segmentation mask are part of the segmentation mask predicted by the network. In that case, the majority label observed in the instance mask defines the team of the player. In practice, since our ground-truth mask only provides a rough approximation of the actual player instance silhouette, we resort to the part of the instance mask that is the most relevant for team classification, \ie to the two ellipses that respectively cover the body and the pelvis area.
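The counters above translate directly into the two rates (a trivial helper, with made-up example counts):

```python
def detection_rates(n_miss, n_corr, n_err):
    """Compute the missed-player and correct-team-assignment rates above."""
    total = n_corr + n_err + n_miss
    r_miss = n_miss / total             # R_miss
    r_cta = n_corr / (n_corr + n_err)   # R_CTA
    return r_miss, r_cta

# E.g. 11 missed players, 81 correct and 8 wrong team labels:
r_miss, r_cta = detection_rates(11, 81, 8)
```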
Since pixels that are in the central part of the body and pelvis ellipses are less likely to be part of the background, only the pixels that are sufficiently close to the main principal axis of the body/pelvis shape are considered. (A distance threshold equal to one third of the maximal distance between the ellipse border and the principal axis has been adopted. Changing this threshold does not significantly impact the results.) \subsection{Results} \label{ssec:results} In order to validate the proposed team discrimination method with the available data, we consider a \mbox{K-fold} cross-validation framework. It partitions the \mbox{648-image} dataset into K disjoint subsets, named folds. Each \mbox{K-fold} iteration keeps one fold for the test, and uses the other folds for training and validation. Average and standard deviation metrics can then be computed based on the K iterations of the training/testing procedure. In our case, ten folds of approximately equal size have been considered. Moreover, to assess whether the model generalizes properly to new games and new arenas, we construct the folds so that each fold contains images from distinct games and/or arenas. Table~\ref{tab:game-kfold} lists the cross-game fold characteristics, and Table~\ref{tab:hall-kfold} the cross-arena fold characteristics. To put our results in perspective, we compare them to a baseline reference. Since most previous methods recognize teams based on color histograms~\cite{Dorazio09,Lu13,Tong11}, generally after team-specific training, we compare associative embeddings to a method that collects color histograms on player instances, before clustering them into two sets. In practice, as for the associative embedding evaluation, only the player pixels that are sufficiently close to the body/pelvis principal axis are considered to build the histogram in \mbox{RGB}, with 8 bins per dimension (\mbox{512-dimensional} histogram).
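The histogram extraction step of this baseline can be sketched as follows (our own NumPy illustration; the clustering itself, done with scikit-learn in the paper, is omitted here):

```python
import numpy as np

def rgb_histogram(pixels, bins=8):
    """512-dimensional RGB histogram (8 bins per channel) of selected pixels.

    pixels: (N, 3) array of RGB values in [0, 255].
    Returns a flat, L1-normalized histogram of length bins**3.
    """
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1)

# Histogram of 100 random "player" pixels.
rng = np.random.default_rng(0)
h = rgb_histogram(rng.integers(0, 256, size=(100, 3)))
```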
The adopted clustering is the~\cite{scikit-learn} implementation of the variational inference algorithm with a Dirichlet process prior~\cite{Blei06}, fitting at most two Gaussians representing our two clusters (two teams). This method has the advantage of being able to automatically reduce the number of prototypes, which is useful when fewer than two teams are visible in an image. \begin{table} \centering \begin{tabular}{|l|cccc|} \hline Fold & 1 .. 3 & 4 & 5 .. 9 & 10 \\ \hline \hline Train & 516 & 518 & 520 & 518 \\ Val & 66 & 64 & 64 & 66 \\ Test & 66 & 66 & 64 & 64 \\ \hline \end{tabular} \caption{Splits of the dataset used for cross-game validation. Each column corresponds to one (set of) folds, and lines define the number of training/validation/test samples. Validation and test sets contain the images from 4 games.} \label{tab:game-kfold} \end{table} \begin{table} \centering \begin{tabular}{|l|cccccc|} \hline Fold & 1 & 2 .. 5 & 6 & 7 & 8 .. 9 & 10 \\ \hline \hline Train & 514 & 516 & 518 & 522 & 524 & 518 \\ Val & 66 & 66 & 64 & 62 & 62 & 68 \\ Test & 68 & 66 & 66 & 64 & 62 & 62 \\ \hline \end{tabular} \caption{Splits of the dataset used for cross-arena validation. Each column corresponds to one (set of) folds, and lines define the number of training/validation/test samples. Validation and test sets contain the images from 2 or 3 halls.} \label{tab:hall-kfold} \vspace{-0.3em} \end{table} Results of cross-game validation are presented in Table~\ref{tab:game-eval}, while cross-validation on sport halls is presented in Table~\ref{tab:hall-eval}. Standard deviations are low, demonstrating the weak dependence on a specific set of training data. The rate of missed detections is about 11\%, which is acceptable considering that our backbone is the real-time \mbox{ICNet} model~\cite{Zhao18} and that indoor basketball images are challenging.
It could probably be improved with a finer tuning of hyperparameters, as well as more accurate segmentation masks and a formulation that involves a class for referees (see the failure case analysis below). More recent and effective segmentation networks could also be considered as long as they are compatible with associative embedding. In Figure~\ref{fig:img-validation}, we observe that players are generally well detected but roughly segmented, probably due to our approximate training masks. However, the segmentation masks are very clean compared to the background-subtracted foreground masks derived for such kinds of scenes (see for example \cite{Parisot17}). Therefore, they could advantageously replace those masks in algorithms using camera calibration to detect individual players from the segmentation mask~\cite{Alahi11,Carr12,Delannay09}. In terms of team assignment, \cite{Lu18} mentions that they cannot achieve good cross-game team assignment without fine-tuning. In comparison, our method reaches more than 90\% of correct team assignments when testing on games and sport halls that are not seen during training. The baseline Bayesian color histogram clustering only reaches 62\% of correct team assignments, which confirms that the team assignment task in the context of indoor sport is extremely difficult, as described in Section~\ref{sec:intro}. We get nearly identical results for cross-arena evaluation.
\begin{table} \centering \begin{tabular}{|l|cc|} \hline Method & $R_{miss}$ & $R_{CTA}$ \\ \hline \hline Associative Embedding & \multirow{2}{*}{0.11 $\pm$ 0.04} & 0.91 $\pm$ 0.04 \\ Color Histogram & & 0.62 $\pm$ 0.02 \\ \hline \end{tabular} \caption{Evaluation measures on the cross-game \mbox{K-fold}: mean and standard deviation of the missed player detection and correct team assignment rates, for 10 folds.} \label{tab:game-eval} \vspace{-0.3em} \end{table} \begin{table} \centering \begin{tabular}{|l|cc|} \hline Method & $R_{miss}$ & $R_{CTA}$ \\ \hline \hline Associative Embedding & \multirow{2}{*}{0.11 $\pm$ 0.06} & 0.91 $\pm$ 0.03 \\ Color Histogram & & 0.63 $\pm$ 0.02 \\ \hline \end{tabular} \caption{Evaluation measures on the cross-arena \mbox{K-fold}: mean and standard deviation of the missed player detection and correct team assignment rates, for 10 folds.} \label{tab:hall-eval} \end{table} Qualitative results are shown in Figure~\ref{fig:img-validation}. As written in Section~\ref{ssec:implem}, we intend to extract players only, excluding referees and other humans. Images belong to testing folds, meaning that they originate from games or arenas not seen during training. Team masks are drawn in red and blue. The first five rows in Figure~\ref{fig:img-validation} illustrate how well the proposed method can deal with indoor basketball conditions. Players in fast movement and low contrast are detected and correctly grouped into teams. Occlusions, LED advertisements, and artificial lighting are not a major problem. Associative embedding has a low sensitivity to high color similarities between background and foreground. Specific treacherous scenes with players of only one team and some other humans are correctly handled. We estimate that the isolated regions that could correspond to humans, extracted in addition to the reference instances, amount to about $10\%$ of the number of annotated players.
These detections come from referees and other unwanted persons on or close to the ground, and in certain cases from scenery elements. In basketball, the ratio of referees to players ranges from $20$ to $30\%$ (there are usually 2 or 3 referees on a complete field, against $5+5$ players). Thus, it is interesting to see that our \mbox{FCN} trained on players generally avoids referees and other people. However, this is a challenging task, as can be seen in the two prominent failure cases shown in the last two rows of Figure~\ref{fig:img-validation}, where referees' shirts or pants are visually similar to those of a team. In the first example, a referee is detected as a player and included in a team (referee on the right, under the basket), and a player is removed from the predicted player class, probably because he is seen as a referee by the network (background player beside a referee). In the second example, the dark pants of a referee and of a coach in the back of the court are assimilated to the team in black. This sample also presents a severe occlusion involving four players; inside and around this area, detection is inaccurate and the team assignment of the orange player mixed with players in black is lost.
\begin{figure*} \centering \vspace{-1em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_29_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={20mm 45mm 100mm 15mm},clip]{{figures/item_29_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={20mm 45mm 100mm 15mm},clip]{{figures/item_29_thr_0.5_fused_pred}.png} } \\ \vspace{-0.3em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_1_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={120mm 70mm 40mm 10mm},clip]{{figures/item_1_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={120mm 70mm 40mm 10mm},clip]{{figures/item_1_thr_0.5_fused_pred}.png} } \\ \vspace{-0.3em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_48_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={25mm 50mm 75mm 0mm},clip]{{figures/item_48_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={25mm 50mm 75mm 0mm},clip]{{figures/item_48_thr_0.5_fused_pred_RBOK}.png} } \\ \vspace{-0.3em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_30_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={65mm 60mm 95mm 20mm},clip]{{figures/item_30_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={65mm 60mm 95mm 20mm},clip]{{figures/item_30_thr_0.5_fused_pred_RBOK}.png} } \\ \vspace{-0.3em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_23_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={20mm 35mm 80mm 15mm},clip]{{figures/item_23_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={20mm 35mm 80mm 15mm},clip]{{figures/item_23_thr_0.5_fused_pred_RBOK}.png} } \\ \vspace{-0.3em} \noindent\rule{0.95\linewidth}{0.4pt} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_37_thr_0.5_picture}.jpg} }
\subfloat{ \includegraphics[width=0.3\linewidth, trim={10mm 50mm 90mm 0mm},clip]{{figures/item_37_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={10mm 50mm 90mm 0mm},clip]{{figures/item_37_thr_0.5_fused_pred_RBOK}.png} } \\ \vspace{-0.3em} \subfloat{ \includegraphics[width=0.3\linewidth]{{figures/item_53_thr_0.5_picture}.jpg} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={10mm 50mm 140mm 25mm},clip]{{figures/item_53_thr_0.5_fused_truth}.png} } \subfloat{ \includegraphics[width=0.3\linewidth, trim={10mm 50mm 140mm 25mm},clip]{{figures/item_53_thr_0.5_fused_pred}.png} } \caption{Team discrimination with associative embedding. From left to right: test image, zoomed reference masks and prediction. The first five rows present success cases, while the last two show failure cases. From top to bottom: running players; strong shadows; occlusions; court and teams share the same colors; only one team; confusion between players and referees; extreme occlusions. Please refer to the digital version of the paper for the colors and the ability to zoom in on details. } \label{fig:img-validation} \end{figure*} \section{Conclusion} \label{sec:conclu} Associative embedding is considered to address the team assignment problem in team sport competitions. It offers the advantage of discriminating teams in sport scenes, without requiring an impractical per-game training. Promising results are obtained on a challenging basketball dataset, with little tuning and only approximate player mask annotations. In this work, the embeddings come with a player segmentation mask from a relatively simple multi-scale CNN, rather than the stacked hourglass network considered in previous works~\cite{Law18, Newell17b, Newell17a}. Our work could be extended to support instance segmentation, by using either instance embeddings~\cite{Newell17a} or projective geometry~\cite{Alahi11,Carr12,Delannay09}.
Future investigations of interest include the explicit recognition of referees, a deeper analysis of the embedding distribution, and a more careful weighting of losses~\cite{Kendall18}.

{\small
\bibliographystyle{ieee}
}
\section*{Acknowledgment}
\input{07_acks}
\balance
\bibliographystyle{IEEEtran}

\section{Introduction}
Sports analytics has been gaining increased traction as detailed spatio-temporal player event and trajectory data become readily available, enabled by enhanced data acquisition systems~\cite{Assuncao19SportsAnalytics}. To acquire these data, practitioners rely on methods that range from manual annotation~\cite{Liu13Opta} to automated tracking systems that record player locations at high frequencies. Oftentimes, the collected sports data are large, complex, and heterogeneous, which presents interesting and unique research directions in information retrieval and visualization, but in particular in machine learning-based prediction~\cite{Brefeld17SISports}. The increase in high-fidelity sports data streams has been a boon for many prediction-based problems in sports, such as valuing players in soccer~\cite{Decroos19VAEP, Yurko2020LSTM}, detecting high-leverage moments in esports~\cite{Xenopoulos2020CSGOWPA}, or assessing decision making in basketball~\cite{cervone2014pointwise}.

The way in which the state of a game is represented is central to the aforementioned sports analytics efforts. Game states can contain both the \textit{global} game context, such as each team's score or the game time remaining, and \textit{local} context, like player locations stored in X/Y coordinates. Typically, a game state has been represented as a single vector, since a vector is a straightforward abstraction and readily applicable to many out-of-the-box data mining techniques. As spatio-temporal data, such as player actions and trajectories, become more prevalent, game state vectors are also increasingly including spatially-derived features. When constructing spatially-derived features, practitioners must enforce permutation invariance with respect to the ordering of players, since they often use a single vector as input.
Consequently, to create permutation invariant spatially-derived features, practitioners often turn to one of (1) a permutation invariant function, like the average, to aggregate player-specific information into global features~\cite{Xenopoulos2020CSGOWPA, Yang17ResultPred}, (2) a strict ordering of players through role assignment, such as assigning each player to a distinct game role~\cite{Makarov17PredictingWinner, Lucey13PlayerRoles, Sha17TreeAlignment}, or (3) keypoint-based feature construction, where features are ordered in relation to an important location on the field~\cite{Mehrasa2018Deep, Sicilia19MicroActions, Yurko2020LSTM}. Example keypoints include the ball-carrier in American football or the goal in soccer. Using this construction, features can then be defined as the closest player distance to the goal in soccer, second closest player distance to the ball, and so on. While the aforementioned methods are oftentimes readily interpretable, they discard information in a player's local neighborhood or require identification of a keypoint before preprocessing the data in order to anchor spatial information. Furthermore, although keypoints are usually identifiable in ball-based sports such as soccer or basketball, they are ambiguous for sports such as esports, or competitive video gaming, where no ball exists. To address these issues, we introduce a sport-agnostic, graph-based framework to represent game states. In our framework, a game state is represented by a fully-connected graph, where players constitute nodes. Edges between players can represent the distance between the players or be assigned a constant value. Using our graph representation of game states, we present permutation invariant graph neural networks to predict sports outcomes. We demonstrate our method's efficacy over traditional vector-based representations for prediction tasks in American football and esports. 
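To make the permutation-invariance requirement concrete, the following sketch (ours, with made-up player coordinates) shows two of the strategies above, mean-aggregation and keypoint-sorted distances, producing identical features under any reordering of the players:

```python
import numpy as np

def aggregate_features(players):
    """Strategy (1): collapse per-player features with a permutation
    invariant function (here, the mean over players)."""
    return players.mean(axis=0)

def keypoint_ordered_features(players, keypoint):
    """Strategy (3): sort per-player distances by proximity to a
    keypoint (e.g. the ball-carrier), yielding a fixed-length vector."""
    dists = np.linalg.norm(players - keypoint, axis=1)
    return np.sort(dists)

# Toy game state: 5 players with (x, y) coordinates and a keypoint.
players = np.array([[10., 2.], [4., 7.], [8., 8.], [1., 1.], [6., 3.]])
keypoint = np.array([5., 5.])

# Any permutation of the players yields the same feature vectors.
perm = np.random.default_rng(0).permutation(len(players))
assert np.allclose(aggregate_features(players),
                   aggregate_features(players[perm]))
assert np.allclose(keypoint_ordered_features(players, keypoint),
                   keypoint_ordered_features(players[perm], keypoint))
```

Both strategies are order-free, but note that the sorted-distance vector discards which player produced each distance, which is exactly the information loss discussed above.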
Our contributions are (1) a sports-agnostic graph representation of game states to facilitate prediction in sports, (2) a graph-based neural network architecture to predict outcomes in American football and esports, and (3) case studies that not only validate our models by identifying known relationships in sports but also demonstrate how our modeling approach enables ``what if'' analysis in sports. The rest of the paper is structured as follows. In Section~\ref{sec:related_work}, we review relevant literature on feature construction for sports prediction and on graphs in sports analytics. Section~\ref{sec:methods} presents our graph-based representation of game states, graph neural networks, and the model architectures we use. In Sections~\ref{sec:results} and \ref{sec:discussion}, we describe our results and present use cases of our method. In Section~\ref{sec:conclusion}, we conclude the paper.

\section{Related Work}
\label{sec:related_work}

\subsection{Feature Construction}
Feature construction is a fundamental aspect of sports analytics. With an influx of spatio-temporal data, game states increasingly include sport-specific spatial features, such as the closest player distance to the goal in soccer or the average player distance to the ball-carrier in American football. To craft these features, practitioners must identify a keypoint in the game. For example, to estimate the success of soccer passes, Power~et~al.~\cite{Power17Passing} calculate the speed, distance, and angle of the nearest defender to the passer. In this instance, the keypoint is the passer. Ruiz~et~al.~\cite{Ruiz17Leicester} present an expected goals model for soccer, where they use angle and distance to goal, making the defending team's goal the keypoint. Decroos~et~al.~\cite{Decroos19VAEP} build a scoring likelihood model where they include features such as distance and angle to goal, as well as distance covered during a soccer action.
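As a minimal illustration of such keypoint features, the sketch below computes distance and angle to the goal for a shot location; the pitch dimensions, goal keypoint, and exact feature definitions are our illustrative assumptions, not those of the cited models:

```python
import math

# Illustrative pitch coordinates (105 x 68 m); the goal center is a
# hypothetical keypoint at the right end of the pitch.
GOAL = (105.0, 34.0)

def goal_features(x, y):
    """Distance and angle to the goal keypoint for a shot at (x, y).

    Mirrors the style of keypoint features used in expected-goals
    models; the exact definitions in the cited works may differ."""
    dx, dy = GOAL[0] - x, GOAL[1] - y
    distance = math.hypot(dx, dy)
    angle = math.atan2(abs(dy), dx)  # 0 when shooting straight at goal
    return distance, angle

d, a = goal_features(94.0, 34.0)  # central shot, 11 m out
print(round(d, 1), round(a, 3))   # prints "11.0 0.0"
```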
We can also order players, and then use their features as input, if the ordering given any permutation of the players in a game is invariant. In Mehrasa~et~al.~\cite{Mehrasa2018Deep}, the authors propose a permutation invariant sorting scheme based on an anchor object. The first player in the sorting scheme is the anchor player. Then, the next player is the one closest to the anchor player, the next player is the one second-closest to the anchor player, and so on. Similarly, Sicilia~et~al.~\cite{Sicilia19MicroActions} order player coordinate features based on distance to the scoring basket. Likewise, Yurko~et~al.~\cite{Yurko2020LSTM} create features for each player based on their relative location and direction compared to the ball-carrier in American football. These features are then ordered based on their distance to the ball-carrier.

Another way to enforce permutation invariance on game state features is through role assignment. Using a role assignment scheme, features can then be tied to a particular position, e.g., $Forward_x$, $Forward_y$, and so on. Role assignment can work well in sports which maintain a rigid structure. For example, Makarov~et~al.~\cite{Makarov17PredictingWinner} use historical stats to predict a player's role in Defense of the Ancients (DOTA) 2, a popular esport where players generally play very specialized roles. Using these predicted roles, they then assign players in a team so that each role is occupied by a player. Lucey~et~al.~\cite{Lucey13PlayerRoles} introduce a role assignment method where the current player formation is compared against a formation template. Sha~et~al.~\cite{Sha17TreeAlignment} propose a tree-based method to estimate player roles which goes through a hierarchical approach of assigning roles based on a template, and then partitioning the data.

Lastly, one way to ensure permutation invariance is through global aggregations. Aggregations are common in esports, due to a lack of keypoints.
For example, Xenopoulos~et~al.~\cite{Xenopoulos2020CSGOWPA} aggregate team equipment values and minimum distances to bombsites for each game state in Counter-Strike. Yang~et~al.~\cite{Yang17ResultPred} aggregate each team's gold, experience, and deaths in DOTA to estimate win probability throughout the game. Makarov~et~al.~\cite{Makarov17PredictingWinner} consider the total number of healthy and damaged players, rankings of players, and total number of grenades remaining for each team to predict round winners in Counter-Strike.

\subsection{Graphs in Sports}
Graphs have found limited application in sports, with past works focusing primarily on network analysis or multi-agent trajectory prediction. Concerning network analysis, Passos~et~al.~\cite{Passos11Networks} use graphs, where nodes are players and edges represent the number of passes between the players, to represent passing networks in water polo. The authors then use this information to differentiate between successful and unsuccessful patterns of play. Gudmundsson and Horton~\cite{Gudmundsson17Spatiotemporal} provide a summary of network analysis in sports, along with a comprehensive review of spatio-temporal analysis in sports.

Graphs have also proven useful for multi-agent modeling in sports. Sports situations can be represented with fully-connected graphs using nodes as players. Kipf~et~al.~\cite{KipfFWWZ18} introduce a variational autoencoder model in which the encoder predicts the player interactions given the trajectories, and the decoder learns the dynamical model of the agents. Recently, more works have focused on attention-based mechanisms. Hoshen~et~al.~\cite{Hoshen17} introduce an attentional architecture to predict player movement that scales linearly with the number of agents. Yeh~et~al.~\cite{Yeh19MultiAgentGen} propose a graph variational recurrent neural network model to predict player trajectories.
Ding~et~al.~\cite{Ding20Attention} use a graph-based attention model to learn the dependency between agents and a temporal convolution network to support long trajectory histories. While understanding how players move is an important focus of sports analytics, predicting outcomes, such as who will win the game, has broad and established applications for teams assessing player value or for bookmakers setting odds. Our work combines the traditional sports analytics objective of predicting sports outcomes with the graph-based representation prevalent in multi-agent sports trajectory modeling.

\section{Graphs for Sports}
\label{sec:methods}

\subsection{Game State Graph Construction}
\label{sec:graph_construction}
Let $S_t$ be a game state which contains the complete game information at time $t$. For example, $S_t$ can contain global information, such as team scores and time remaining, as well as player-specific information like location, velocity, or other player metadata. Accordingly, $S_t$ is represented by the set $\{z_t, X_t\}$, where $z_t$ is a vector of global game state information and $X_t = \{x_t^1, x_t^2, ..., x_t^N\}$ is a set containing all player information, where $x_t^p$ is a vector containing the information for player $p$. $N$ is the number of players. Since $X_t$ is an unordered set, some consistent ordering is required to build a model which returns stable output across different permutations of the same input. As previously discussed in Section~\ref{sec:related_work}, practitioners typically perform some operation on $X_t$, such as taking the average of its features or ordering its elements, to construct permutation invariant features. However, we seek a representation that avoids upfront global feature aggregation or role assignment. Furthermore, we want to apply a permutation invariant function to such a representation. To do so, we can represent game states as graphs, and use them as inputs to permutation invariant graph neural networks.
Consider a graph $G_t = (V_t, E_t)$, where $V_t$ represents the nodes and $E_t$ the edges. We use $G_t$'s nodes to represent player information of game state $S_t$. Setting $V_t = X_t$, we ensure that each player is a node in $G_t$ with an associated feature vector $x^p_t$. To ensure that every player is connected to every other player, we let $e_{ij} = 1, \forall i,j$. This representation includes self-loops for every node. An alternate representation is to set edge weight $e_{ij}$ to the distance between player $i$ and player $j$. $e_{ij}$ does not necessarily need to be symmetric. For example, distances in esports are often asymmetric, as shown in~\cite{Xenopoulos2020CSGOWPA}. We show our game state and graph construction process in Figure~\ref{fig:graph_construction}. Our representation avoids any sport- or problem-specific preprocessing steps, like aggregations, keypoint definitions, or role assignment, since we only need to take tracking data and transform it into a graph.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sports_graph_rep.pdf}
\caption{Our game state representation transforms raw player tracking data into a fully-connected graph where each player is described by a node feature vector, compared to the standard approach of representing a game state as a vector where features are constructed using keypoints, role-assignment, or aggregations. Thus, our method does not require any problem-specific preprocessing steps.}
\label{fig:graph_construction}
\end{figure}

\subsection{Graph Attention Networks}
\label{sec:gat}
Using $G_t$ as input, we can apply graph neural networks to learn game outcomes from a given game state. Graph convolutional networks (GCNs)~\cite{Kipf17} and graph attention networks (GATs)~\cite{VelickovicCCRLB18} are popular graph neural network architectures.
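Before detailing these architectures, the game-state-to-graph construction above can be sketched as follows; the feature layout and the inverse-distance edge weighting (used by the GCN variant described later) are simplified illustrations, not the exact preprocessing code:

```python
import numpy as np

def build_game_state_graph(player_feats, positions, use_distance=True):
    """Build the fully connected game-state graph G_t.

    player_feats: (N, F) array, one feature vector x_t^p per player.
    positions:    (N, 2) array of player coordinates used for edges.
    Returns node matrix V (= player_feats) and edge-weight matrix E."""
    n = len(player_feats)
    V = np.asarray(player_feats, dtype=float)
    if not use_distance:
        return V, np.ones((n, n))  # e_ij = 1 for all i, j (with self-loops)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    E = 1.0 / (dist + 1e-6)       # inverse distance, as in the GCN variant
    np.fill_diagonal(E, 1.0)      # self-loop weight fixed at 1 (our choice)
    return V, E

positions = np.array([[0., 0.], [3., 4.], [6., 8.]])
feats = np.hstack([positions, np.array([[1.], [0.], [0.]])])  # e.g. ball-carrier flag
V, E = build_game_state_graph(feats, positions)
```

Note that no keypoint, role assignment, or aggregation is needed; the raw per-player vectors pass through unchanged as nodes.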
Both architectures accept a set of nodes as input, $\{h_1, h_2, ..., h_N\}, h_i \in \mathbb{R}^F$, where $N$ represents the number of nodes in the graph, and $F$ represents the number of features for each node. The output of both architectures is a set of node representations $\{h_1^{'}, h_2^{'}, ..., h_N^{'}\}, h_i^{'} \in \mathbb{R}^{K}$. We first parameterize a weight matrix $\mathbf{W} \in \mathbb{R}^{F \times K}$. To produce a set of transformed nodes, for each node we calculate
\begin{equation}
\label{eq:node_rep}
h_i^{'} = \sigma \Bigg( \sum_{j\in \mathcal{N}_i} e_{ij} h_j \mathbf{W} \Bigg)
\end{equation}
\noindent where $\sigma$ is an activation function, $\mathcal{N}_i$ represents the neighborhood of node $i$, and $e_{ij}$ represents the edge weight from node $j$ to node $i$. Effectively, a node is transformed as the weighted average of its neighbors, where the weights are produced from a function learned during model training. The node representations can be fed into another GAT layer or into a pooling layer which can downsample the graph by producing coarse representations. The entire set of node representations can also be pooled through a global average pool, whereby we produce a vector containing the average values of the $K$ features of each element $h_i^{'}$. Such an operation is permutation invariant.

The main difference between GCNs and GATs is that GATs estimate attention coefficients between neighboring nodes. To do so, we use a feedforward network parameterized by a weight vector $\mathbf{a} \in \mathbb{R}^{2 K}$. The attention coefficient $e_{ij}$, which estimates the importance of node $j$'s features to node $i$, is calculated as
\begin{equation}
\label{eq:attn}
e_{ij} = \frac{ \mathrm{exp}(\mathrm{LeakyReLU}(\mathbf{a}^T [h_i \mathbf{W} \| h_j \mathbf{W} ]))}{\sum_{k \in \mathcal{N}_i } \mathrm{exp}(\mathrm{LeakyReLU}(\mathbf{a}^T [h_i \mathbf{W} \| h_k \mathbf{W} ]))}
\end{equation}
\noindent where $\|$ represents a concatenation.
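A minimal NumPy sketch of a single-head GAT layer over a fully connected neighborhood, following the two equations above with $\sigma = \tanh$ and random values standing in for learned parameters (our illustration, not the Spektral implementation); the final assertion checks that the globally average-pooled output is invariant to node permutations:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0.0, x, alpha * x)

def softmax_rows(s):
    s = s - s.max(axis=1, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

def gat_layer(H, W, a):
    """Single-head GAT layer over a fully connected graph.

    Computes the attention coefficients e_ij, then the node update
    with sigma = tanh; H is (N, F), W is (F, K), a is (2K,)."""
    Z = H @ W                                  # h_j W for every node
    K = Z.shape[1]
    # a^T [h_i W || h_j W] splits into a left and a right contribution.
    scores = (Z @ a[:K])[:, None] + (Z @ a[K:])[None, :]
    E = softmax_rows(leaky_relu(scores))       # e_ij, each row sums to 1
    return np.tanh(E @ Z)                      # weighted-average node update

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 6))                   # 10 players, 6 features
W = rng.normal(size=(6, 4))
a = rng.normal(size=(8,))

pooled = gat_layer(H, W, a).mean(axis=0)       # global average pool
perm = rng.permutation(10)
assert np.allclose(pooled, gat_layer(H[perm], W, a).mean(axis=0))
```

The layer itself is permutation equivariant (permuting input nodes permutes output nodes identically), so the average pool on top yields a permutation invariant game-state embedding.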
Velickovic~et~al.~\cite{VelickovicCCRLB18} note that multi-head attention can be useful for stabilizing the self-attention process. To do so, one specifies $K$ independent attention mechanisms and performs the same transformation as in Equation~\ref{eq:node_rep}. We can create transformed node representations $h_i^{'}$ by doing
\begin{equation}
\label{eq:node_rep_multi}
h_i^{'} = \Bigg\|_{k=1}^K \sigma \Bigg( \sum_{j\in \mathcal{N}_i} e_{ij}^k h_j \mathbf{W} \Bigg)
\end{equation}
\noindent where $e_{ij}^k$ is the attention weight from the $k$-th attention mechanism. To implement GCNs, GATs, and average pooling, we use the Spektral library in Python~\cite{Spektral}.

\subsection{Candidate Models}
\label{sec:models}
To test if graph neural networks improve upon traditional sports analytics methodology, we consider a variety of models. Since a single feature vector is commonly used to represent sports game states, we structure our models in the same fashion, where we take a game state $G_t$ with associated outcome $Y_t$, and want to estimate some function $f$ where $f(G_t) = \widehat{Y_t}$. Our baseline model, referred to as the ``state model'', is based on such a representation. The input to our baseline model is a game state vector that contains global features, along with aggregated player features. Such a baseline model is common in sports analytics, and is used in~\cite{Yurko2020LSTM, Yang17ResultPred}.

We consider two graph-based models to compare against our baseline. The first is a graph attention network, as described in Section~\ref{sec:gat}. The second is a graph convolutional network (GCN)~\cite{Kipf17}. In our GCN model, nodes are transformed as shown in Equation~\ref{eq:node_rep}; however, $e_{ij}$ are \textit{not} estimated as in Equation~\ref{eq:attn}. Instead, $e_{ij}$ are set to the inverse of the distance between player $i$ and player $j$.
We use the inverse of the distance since, if node $j$ is closer to node $i$, it will have a higher weight when computing Equation~\ref{eq:node_rep}. Intuitively, we would expect proximal nodes to have more influence over each other. The graph input to the GCN architecture is still fully-connected. When using GAT-based models, we use one attention head. We explore the effects of varying the attention head parameter in Section~\ref{sec:attn_heads}. Finally, we also consider an architecture which takes both a game state's graph and vector representations as inputs. We visualize this architecture in Figure~\ref{fig:gat_model}. In total, we consider five models: a state model, a GAT-only model, a GCN-only model, a GAT + State model, and a GCN + State model.

\begin{figure}
\centering
\includegraphics[scale=0.69]{figures/GATStateModel.pdf}
\caption{The GAT branch of the combined GAT + State model considers a game state graph as input and outputs transformed node representations using estimated attention weights $e_{ij}$. These new node representations are then average-pooled. The state model branch is a simple feedforward network. The outputs of the GAT and state branches are concatenated and passed through a final dense layer to predict the sports outcome of interest.}
\label{fig:gat_model}
\end{figure}

\subsection{Sports Prediction Tasks}
We investigate graph-based game state representations across prediction tasks in two different sports. We focus on American football, a team- and ball-based sport much like soccer or basketball, as well as Counter-Strike: Global Offensive, a popular esport.

\subsubsection{Predicting Yards Gained in American Football}
Offensive American football plays typically involve passing or running the ball to advance it towards the defending team's endzone. When a player attempts to advance by running with the ball, the play is referred to as a ``rushing'' play.
Predicting rushing yards gained given the current alignment of players is a common task in sports analytics for American football. For example, Yurko~et~al.~\cite{Yurko2020LSTM} used yards gained as the target for their ball-carrier model. Using this model, they then valued ball-carriers on their yards gained above expectation. The problem formulation of predicting yards gained in American football is a regression task where we estimate $Y_t \in \mathbb{R}$, which represents the number of yards in the horizontal direction that the ball-carrier gains until the play ends, given a single game state $G_t$. We refer to this task as the ``NFL task.''

Our baseline game state vectors for the NFL task contain the down, distance to go, ball-carrier velocity and displacement, and 11 features containing the distance from the $i$-th closest defender to the ball-carrier. In the graph representation, every player node is described by a player's X/Y coordinates, velocity, displacement since the last state, difference in location and speed to the ball-carrier, average distance to other players, and flags for whether the player is on the offensive team and whether he is the ball-carrier.

\subsubsection{Estimating Win Probability in Esports}
Estimating win probability is a fundamental task in sports analytics, with particular importance to sports betting and player valuation. In sports betting, win probability models guide odds for live betting. For player valuation, many systems revolve around valuing players based on how their actions change a team's chance of winning or scoring~\cite{Xenopoulos2020CSGOWPA, Decroos19VAEP}. In this prediction task, we aim to predict round winners in the popular esport Counter-Strike: Global Offensive (CSGO), a five-on-five shooter where two sides, denoted T and CT, seek to eliminate each other; the T side aims to plant a bomb, and the CT side aims to defuse the bomb (if planted).
We refer to this task as the ``CSGO task.'' In a CSGO match, two teams play best-of-30 rounds on a ``map'', which is a virtual world. Each team plays the first 15 rounds as one of the T or CT sides, and then switches starting at the 16th round. At the beginning of every round, each side spends money earned in previous rounds to buy equipment and weapons. To win a round, the T side attempts to plant a bomb at one of two bombsites, and the CT side's goal is to defuse the bomb. Each side can also win a round by eliminating all members of the opposing side. When a player dies, they can no longer make an impact in the round.

Consider a game state $G_t^r$ which occurs in round $r$ at time $t$. Round $r$ is associated with an outcome, $Y_r$, which is equal to 1 if the CT side wins the round, and 0 if the CT side loses. The resulting prediction task is to estimate $P(Y_r = 1 | G_t^r)$. The formulation of predicting win probability in CSGO given a single game state is consistent with past win probability estimation in esports literature~\cite{Xenopoulos2020CSGOWPA, Yang17ResultPred, Makarov17PredictingWinner}.

Our baseline game state vectors for the CSGO task contain the time, starting equipment values, current equipment values, total HP and armor, and scores for each team. Every player node is described by a player's X/Y/Z coordinates, health and armor remaining, total equipment value and grenades remaining, distance to both bombsites, indicators for player side and for whether the player is alive, has a helmet, and has a defuse kit, and a one-hot encoding of the part of the map in which the player is located.

\subsubsection{Assessing Performance}
\label{sec:assessing_performance}
We assess model performance using mean squared error and mean absolute error for the NFL task (regression), and log loss and AUC for the CSGO task (classification). We use 70\% of game states for training, 10\% for model validation, and 20\% for testing in both of our prediction tasks.
For CSGO, we create a different model for each of the six maps, since each map is a different virtual world, unlike the standardized playing surface in most conventional sports.

\section{Results}
\label{sec:results}

\subsection{Data}
\label{sec:data}
For our NFL task, we use Next Gen Stats tracking data, which contains player and ball locations, from the first six weeks of the 2017 National Football League (NFL) season. The data was publicly released for the duration of the 2018 NFL Big Data Bowl. To create our data set of rushing plays, we remove plays with penalties, fumbles, and those involving special teams, like kickoffs and punts. We define a rushing play as starting when the rusher receives the handoff, and ending when the rusher is tackled, called out of bounds, or scores a touchdown. In total, we used 4,038 plays containing 129,859 game states.

For our classification task in CSGO, we use the csgo package from Xenopoulos~et~al.~\cite{Xenopoulos2020CSGOWPA} to parse 7,222 professional local area network (LAN) games from January 1st, 2017 to January 31st, 2020. To acquire the CSGO data, we downloaded game replay files (also known as demofiles) from a popular CSGO website, HLTV.org, which hosts competitive CSGO replays. We use LAN games since the environment is more controlled, as players are physically in the same space, and every player has low latency with the game server. A single game can contain performances on multiple maps, and each map performance is contained in a single demofile. We parsed 9,414 demofiles into JSON format at a rate of one game state per second. In total, we used 14,291,069 game states.

\subsection{Model Results}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{figures/nfl_play_sequence.pdf}
\caption{Left-to-right, top-to-bottom, a sequence of game states from an NFL rushing play. The rusher is red, offensive players are brown, and defending players are blue.
As the rusher moves into the unoccupied space at the bottom of the field, his expected ending yard line (red dashed line) intuitively increases by over five yards compared to the prediction at the beginning of the play. The faint red dashed line is the previous game state's expected ending yard line. We see that the states which provide large gains in expected yards over previous states are those where the rusher finds large amounts of uncontested space ahead of him.}
\label{fig:football_sequence}
\end{figure*}

\begin{table}[]
\caption{Graph neural network performance on prediction tasks for American football (NFL) and esports (CSGO). The combined GCN + State model performs best on the NFL task, and the GAT + State model, as described in Figure~\ref{fig:gat_model}, performs best on the CSGO task.}
\centering
\begin{tabular}{@{}rcccc@{}}
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\textit{NFL}} & \multicolumn{2}{c}{\textit{CSGO}} \\ \toprule
\multicolumn{1}{c}{\textbf{Model}} & \textbf{MSE} & \textbf{MAE} & \textbf{Log Loss} & \textbf{AUC} \\ \midrule
\textit{State} & 35.39 & 3.05 & 0.4351 & 0.8715 \\
\textit{GCN} & 33.40 & 3.02 & 0.6111 & 0.7399 \\
\textit{GAT} & 33.64 & 2.73 & 0.4465 & 0.8684 \\
\textit{GCN + State} & \textbf{30.62} & \textbf{2.68} & 0.4357 & 0.8718 \\
\textit{GAT + State} & 33.58 & 2.93 & \textbf{0.4276} & \textbf{0.8753} \\ \bottomrule
\end{tabular}
\label{tab:results}
\end{table}

In this section, we compare our candidate models quantitatively and qualitatively, not only by analyzing the performance metrics described in Section~\ref{sec:assessing_performance}, but also by confirming known relationships in American football and CSGO. Table~\ref{tab:results} presents the results of our model testing procedure. Overall, we see that graph architectures outperformed the more classical state-vector models.
We also see that in some instances, predefined weights using player distances work better than weights estimated using GATs, and vice versa, depending on the sport. Finally, we also see that our graph-based models encode known relationships in both American football and CSGO.

For the NFL task, due to the small number of NFL plays in our training set, we repeat the train/validation/test procedure outlined in Section~\ref{sec:models} 30 times, and we report the mean model performance for MSE and MAE in Table~\ref{tab:results}. We perform paired t-tests between each model and the baseline state model, where the null hypothesis is defined as $H_0: MSE_{state} - MSE_{model} = 0$ and the alternate hypothesis is defined as $H_a: MSE_{state} - MSE_{model} > 0$. There are significant differences between all of the models' MSEs and the state model's MSE, with the GCN + State model showing the largest test statistic ($t = 9.95$), and the GAT + State model showing the smallest ($2.71$). Of note, the state model only performed better in one trial compared to the GCN + State model. We show statistically significant improvements of 9\% over the state-of-the-art single-state input models described in Yurko~et~al.~\cite{Yurko2020LSTM}. They also describe a sequence-based model that uses multiple game states to predict rusher end yard position, which performs better than any single-state model. However, the feature construction in~\cite{Yurko2020LSTM} still relies on ordering and anchor objects, and we discuss in the next section how our approach may yield more interpretable results. The results from the NFL task suggest that using inter-player distances in a GCN, when paired with state vector information, may be beneficial over estimating player weights through a GAT-based model. The GCN took twice as long to train as the state model, and the GAT took almost three times as long as the state model.
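The paired, one-sided t-test used above can be computed directly from per-trial MSEs; the values below are made up for illustration and are not the paper's actual trial results:

```python
import math

def paired_t_one_sided(baseline_mse, model_mse):
    """Paired t-test of H0: mean(baseline - model) = 0 against
    Ha: mean(baseline - model) > 0, over repeated trials.

    Returns the t statistic; larger values favour the candidate model."""
    d = [b - m for b, m in zip(baseline_mse, model_mse)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-trial MSEs over 5 of the repeated splits.
state = [35.1, 36.0, 34.8, 35.7, 35.4]
gcn_state = [30.5, 31.2, 30.1, 30.9, 30.4]
t = paired_t_one_sided(state, gcn_state)
```

In practice, the resulting statistic is compared against a Student t distribution with $n - 1$ degrees of freedom to obtain a p-value; libraries such as SciPy's `ttest_rel` do both steps.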
Interestingly, we also created a GAT model for the NFL task which only included the ball-carrier and the defensive players. The aforementioned model outperformed the state model with an MSE of 34.32, which was significantly lower; however, it did not outperform any of the other candidate models.

For CSGO, we report the map-weighted performance mean, as some maps are more common in our data than others. Since we had millions of game states, we used one train/validation/test split rather than the 30-trial procedure for the NFL task, since there was considerably less sampling variability. In Table~\ref{tab:results}, we see that the combined GAT + State model outperformed the state model. Our solution improves upon the state of the art for the CSGO task in Xenopoulos~et~al.~\cite{Xenopoulos2020CSGOWPA} by 20\%. Additionally, the GAT model performed well in the CSGO task compared to the GCN model, indicating that estimating attention coefficients greatly aided in predicting CSGO round winners as opposed to using player distances. One of the main differences is that the GATs were able to correctly assign attention coefficients of near-zero to dead players. This behavior is intuitive, since dead players are unable to make an impact for the rest of a round.

We can also assess our proposed models for how well they capture well-known sports phenomena. For the following evaluation, we consider the GAT-only model; however, the other graph-based models also confirm the same phenomena. The first phenomenon we consider is the idea that American football rushers will often seek open parts of the field in order to advance the ball for longer before being stopped by the defending team. The intuition here is that the larger the unoccupied space ahead of the rusher, the greater distance we would expect him to run.
In Figure~\ref{fig:football_sequence}, we see a rushing play that starts near the 10-yard line, which is close to the defending team's endzone, meaning the team in red/orange has a scoring opportunity. We can observe the rusher (red) exploiting the gap in the defensive line (blue players) to advance, which intuitively increases his expected ending yard line. In fact, in the middle row of Figure~\ref{fig:football_sequence}, we see a large increase in the rusher's expected ending yard line, particularly in the last frame of this row, when the rusher clears the gap and is furthest away from defenders.

Win probability, the quantity we estimate in the CSGO task, is commonly used in player valuation models. Xenopoulos~et~al.~\cite{Xenopoulos2020CSGOWPA} valued player actions by the change in win probability that those actions incurred. One finding of the study was that kills are one of the most important actions in CSGO, as they greatly change a team's win probability. One of the downsides of the approach in~\cite{Xenopoulos2020CSGOWPA} was that beyond damage and kill events, the model produced relatively flat win probabilities for most of a round. Because of this problem, events that positioned a player to achieve a kill, such as movement across the map, or engagements that resulted in little damage (such as those displacing a player), were not explicitly modeled, and thus were difficult to value. Using our GAT model, we estimate win probability over the course of a CSGO round for both the CT and T sides in Figure~\ref{fig:csgo_win_probability}. Consistent with prior findings, we see that large jumps in win probability are due to kills (gray vertical lines). Additionally, we see far more variance in the win probability plots over the course of the round than in~\cite{Xenopoulos2020CSGOWPA}, which is due to our model more explicitly considering individual player locations, their equipment, and their relationships with other players.
\begin{figure} \centering \includegraphics[scale=0.55]{figures/win_prob_csgo.png} \caption{Our CSGO model closely tracks influential game events, such as kills (gray vertical lines), and improves upon previous CSGO win probability models, which often had significant portions of rounds with unchanging predictions due to an inability to account for player-specific positions and equipment. Thus, actions which do not result in a kill or damage, such as player movement, can now be valued.} \label{fig:csgo_win_probability} \end{figure} \subsection{Enabling ``What If'' Analysis in Sports} Understanding how model predictions change due to input perturbations is important for practitioners in sports analytics such as game analysts, coaches, players, or even curious fans. In both information-symmetric games like American football, where players know the locations of all other players, and asymmetric games like CSGO, where players only initially know the locations of their teammates, decision-making is fundamentally rooted in an understanding of players in the game space. One of the drawbacks to current keypoint-based models is that changes in player location only change the prediction if the relationship to the keypoint is changed. Accordingly, any type of ``what if'' analysis can be unsatisfactory to stakeholders since small changes in player location will do little to change model output. However, we know that relationships exist not only between a player and a keypoint, but also between players themselves. Since our graph construction ensures that every node is connected to every other, changes in one player's characteristics can propagate to all other player nodes. To illustrate the above issue, consider a model that takes a game state vector as input, like the one described in Section~\ref{sec:models}, which takes 11 features that denote the $i$-th defender's distance to the rusher in American football.
If we move a defender along a circle at a fixed radius from the rusher, the state-vector model output will be unchanged. However, we know that the positioning of the defender is incredibly important. For example, if the defender is behind the rusher, it is unlikely that the defender will continue to significantly contribute to the play. On the other hand, if the defender is in the rusher's running path, he has greater potential to impede the rusher's advancement. In the latter situation, if our model reflected real relationships, we would expect our predicted rusher gain to be smaller. Our model confirms this relationship, which we show in Figure~\ref{fig:football_what_if}. Since our defender's distance to the rusher is unchanged, the change in the rusher's expected gain is purely from the moved defender's changing relationship with other players. \begin{figure} \centering \includegraphics[scale=0.45]{figures/what_if_football.pdf} \caption{Our GAT model is able to consider how a player's relationship to other players, as opposed to a keypoint, changes the prediction. Moving the original defender along a circle at a fixed radius from the rusher, a state-vector model would show no difference, but our GAT model intuitively predicts the rusher has a lower expected gain (red dotted line) than the gain with the defender in the original position (faint red dotted line).} \label{fig:football_what_if} \end{figure} Using model output, coaches and analysts can begin to value the impact of player decisions and portray the implications to players. Consider a scenario where the original defender position in Figure~\ref{fig:football_what_if} reflects a missed tackle by that defender. One way to value the implication of the tackle is to use the outcome of the play. We know that outcomes such as touchdowns or the rusher's gain may be informative, but also reflect the entirety of the play and can be noisy.
Thus, without the GAT model, we may only be able to convey to the player that the implication of his missed tackle is that the rusher will advance. Such an insight is obvious. However, by using the GAT model, we are able to tell the player from Figure~\ref{fig:football_what_if} that the missed tackle caused the rusher's predicted gain to increase from an average of 3.54 yards (solid red line) to 4.28 yards (faint red line), which represents roughly a 20\% increase. With this insight, a player may be better able to consider the implication of the missed tackle. In CSGO, the CT side defends two bombsites to prevent the T side from planting the bomb. Accordingly, players often assume static positions and coordinate among themselves on how they will defend a bombsite. Prior CSGO win probability models did not explicitly update win probability based on player position, and the main determinants of win probability were players remaining, their health and their equipment. In Figure~\ref{fig:csgo_win_probability}, we see that although this remains true when using a GAT to predict CSGO win probability, there exists significant change in win probability between kills. These changes are driven primarily by player movement. Thus, it becomes possible for a CSGO analyst, coach or player to assess his or her position, which enables ``what if'' analysis for CSGO. Consider the five CT versus four T scenario posed in Figure~\ref{fig:csgo_what_if}. Here the two CT players in the bottom right of both frames are defending bombsite ``A''. In the left frame, which represents the original game state, both players are located side by side in a building that does not overlook the bombsite. We can manually explore different positional setups by moving players around, and search for setups which increase the CT side's win probability. We showed the situation in Figure~\ref{fig:csgo_what_if} to a coach and an analyst from two top CSGO teams (both teams were in the world top 30).
The feedback we received was that the increase in win probability was intuitive, as the players set up a ``crossfire'', which is when two view directions create a perpendicular angle. Additionally, the analyst remarked that the players in the second scenario have ``deeper map information'', as their lines of sight are larger and non-overlapping, as compared to the first scenario. \begin{figure} \centering \includegraphics[scale=0.45]{figures/csgo_what_if_analysis.pdf} \caption{Our GAT model can be used to enhance game analysis and coaching. Positioning is an important aspect of CSGO gameplay. Here, we see that with just a slightly different positioning, the CT side can increase their probability of winning the round by 3.5\%.} \label{fig:csgo_what_if} \end{figure} In practice, our model can drive a visual analytics system that allows one to position players on a playing surface or change player attributes, such as health or equipment, to answer questions such as ``how much does our chance of winning increase if player $A$ stood at spot $X$ with weapon $W$ as opposed to spot $B$''. These interactive systems are well received by the sports analytics community. For example, in the Chalkboarding system by Sha~et~al.~\cite{Sha18CHI}, users were able to engage in interactive analytics to discover the most advantageous scenarios to make a successful shot in basketball. \section{Discussion} \label{sec:discussion} \begin{figure*} \centering \includegraphics[scale=0.63]{figures/csgo_attn.pdf} \caption{Estimated attention coefficients vary across the different models. The GAT-only model is unable to account for the bomb plant and low time remaining in the ``defusing'' scenario.
Additionally, we see that in the GAT+State model, the players in the yellow bounded box have much higher average attention coefficients, potentially indicating that their engagement is high leverage.} \label{fig:csgo_attn} \end{figure*} \subsection{Visualizing Attention Coefficients} One benefit of using GATs to predict sports outcomes is a byproduct of the training process -- the attention coefficients described in Section~\ref{sec:gat}. We observed that the attention coefficients generated from the CSGO task reflect game phenomena. For example, if a player was dead (and thus, could not create an impact for the rest of the round), the estimated average attention coefficient between said player and other players was effectively near zero. However, interestingly, we observed that on average, T sided players had much higher attention coefficients ($e_{i,T} = 0.1707$) than CT players ($e_{i, CT} = 0.02924$). We reran our GAT model using only T and only CT players, and achieved an average log loss of 0.5609 for the T-only player model and 0.5714 for the CT-only player model. Thus, neither model performs well, indicating that both sides' information is still necessary to accurately predict round win probability. One explanation for the discrepancy in attention coefficients may be that if neither side performed any actions, the CT side would, by default, win the round, since the time would expire. Thus, the onus is on the T side to act. Attention coefficients also varied across models. In Figure~\ref{fig:csgo_attn}, we show the difference between the combined GAT + State model and the GAT-only model for two different scenarios in the same CSGO round. In the ``pre-plant'' scenario, the T side has not yet planted the bomb, and the situation is 2 CT versus 3 T. In the ``defusing'' scenario, all T players have been eliminated, and the last CT player has to defuse the bomb to win the round. 
In the ``defusing'' scenario, the GAT-only model estimates attention coefficients that are, on average, much lower for the remaining player than those of the GAT + State model. This is likely because the GAT-only model is unable to account for the little time remaining and the fact that the bomb is planted, information that is held in the state vector. Another difference is in the average attention coefficients of players in the upper-left (yellow box) portion of the map in the ``pre-plant'' scenario. In the GAT+State model, the T-sided player ``Ethan'' has an attention coefficient that is 50\% higher than the coefficient estimated by the GAT-only model. Even though it is not visible due to CT attention coefficients generally being small, the CT player also has a higher attention coefficient relative to the other CT player alive. Both the T and the CT player make up the majority of their respective team's total attention weight. This may indicate that the engagement taking place in the yellow-bounded box is important to the outcome of the round. The same set of CSGO domain experts from Section~\ref{sec:results} indicated that the engagement between Ethan and his opponent is important since it conveys important information to both teams: if the CT player wins the engagement, they will know that there are no other players. If the T player (Ethan) wins the engagement, then the situation becomes three T players versus one CT player. In general, a coach or analyst may use attention coefficients to find high-value engagements in order to highlight key moments for game review. Furthermore, one could use attention coefficients as input to a player valuation system. \subsection{Model Design Choices} \subsubsection{Multi-Head Attention} \label{sec:attn_heads} Velickovic~et~al.~\cite{VelickovicCCRLB18} suggest that employing multi-head learning may stabilize the self-attention learning process.
We report the results of varying the number of attention heads on the NFL prediction task in Table~\ref{tab:multi-attention}. For each number of attention heads, we present the average MSE over 30 independent train/validation/test splits on the American football task. We conduct a paired t-test to compare each model to the single-attention-head model, and report the test statistic and p-values. Overall, we see that for the NFL prediction task, a single attention head is enough to achieve strong performance. \begin{table}[] \caption{Performance of GAT model by number of attention heads on the NFL prediction task. We see no significant statistical differences between using 2, 4 or 8 attention heads compared to a single attention head, which suggests that a single attention head is sufficient for our NFL task.} \centering \begin{tabular}{@{}cccc@{}} \toprule \textbf{Attention Heads} & \textbf{MSE} & \textbf{t-statistic} & \textbf{p} \\ \midrule 1 & 30.04 & -- & -- \\ 2 & 29.15 & 0.38 & 0.705 \\ 4 & 35.00 & 2.01 & 0.054 \\ 8 & 33.98 & -1.86 & 0.073 \\ \bottomrule \end{tabular} \label{tab:multi-attention} \end{table} \subsubsection{Keypoint-Based Features} One of the benefits of using graph-based models to predict sports outcomes is that agents no longer need to be ordered to ensure a permutation-invariant output given the same sports game state. However, the question remains over the necessity of keypoint-based features. For example, in our NFL task, we use each player's difference in speed and position relative to the ball-carrier as features. To test model performance without keypoint-based features, we remove these features and rerun our GCN and GAT models on the NFL prediction task. Using a paired t-test, the performance of both the GAT ($t = -2.54$) and GCN ($t = -34.24$) models without keypoint-based features was significantly worse than that of the state-based model; thus, using keypoint-based features to represent player nodes is still important, particularly for American football.
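The paired comparison used above can be reproduced with an off-the-shelf test. In the sketch below, the per-split MSE values are made-up placeholders (the real ones would come from the 30 trials described above), so only the procedure, not the numbers, matches our experiments:

```python
import numpy as np
from scipy import stats

# Hypothetical per-split MSEs for two models evaluated on the SAME
# 30 train/validation/test splits -- placeholders, not the paper's data.
rng = np.random.default_rng(1)
mse_one_head = 30.0 + rng.normal(0.0, 3.0, size=30)
mse_two_heads = mse_one_head + rng.normal(-0.9, 1.5, size=30)

# Paired t-test: the splits are shared between models, so the
# observations form matched pairs rather than independent samples.
t_stat, p_value = stats.ttest_rel(mse_two_heads, mse_one_head)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The paired form is the right choice here because each split contributes one observation to both models; an unpaired test would ignore that shared split-to-split variability.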
\subsection{Limitations and Future Work} We see two major directions for future work -- model improvements and applications. One of the limitations of the current framework is that we use a single game state as input. While we improve upon the state of the art for single-state models, sequence-based models that consider multiple game states also perform well in sports prediction tasks. Yurko~et~al.~\cite{Yurko2020LSTM} showed how a recurrent neural network could consider sequences of game states represented as vectors to predict NFL outcomes. With this in mind, an interesting avenue to explore is whether sequences of game states can be represented as graphs. Yan~et~al.~\cite{Yan18} propose representing sequences of human pose through a single graph, where different edges connect nodes not only within the same frame, but also temporally. Another limitation of our modeling approach was our graph pooling layer. Although we used global average pooling, which condenses our set of transformed player vectors into a single vector, one may consider different layers that downsample the graph to a coarser granularity. Finally, future work should explore how our proposed framework performs on tasks in other sports. Our graph-based models can be used in a variety of applications in sports. One of the most promising is player valuation. Similar to Xenopoulos~et~al.~\cite{Xenopoulos2020CSGOWPA}, our model, which predicts CSGO round winners, can be used to value player actions. Additionally, like Yurko~et~al.~\cite{Yurko2020LSTM}, our model predicting the NFL rusher's ending yard line can be used to evaluate rusher performance. Aside from player valuation, we also see potential for our models to serve as the underpinning of visual analytics systems to analyze ``what if'' scenarios in sports.
Similar to the work by Sha~et~al.~\cite{Sha18CHI}, we see valuable future work in developing and evaluating an interface that stakeholders, such as coaches and players, can use to investigate different positional setups and the corresponding change in predicted outcome. \section{Conclusion} \label{sec:conclusion} This paper introduces a sports-agnostic, graph-based representation of sports game states. We use this representation as input to graph convolutional networks and evaluate our models both qualitatively and quantitatively. We show how our proposed networks provide statistically significant improvements over current state-of-the-art approaches for prediction tasks in American football and esports (9\% and 20\% test-set loss reduction, respectively). Furthermore, we assess our model's ability to represent well-known phenomena in sports, such as the advantageous effects of space in the NFL and high-impact events in CSGO. Besides improved prediction performance, one of the benefits of our modeling approach is that it enables the answering of ``what if'' questions and explicitly models interactions between players. We showcase our model's ability to assist sports analytics practitioners through examples in American football and esports. Furthermore, we visualize our model's estimated attention coefficients between players and show how the estimated coefficients reflect relevant game information.
\chapter*{Abstract} \noindent Climbing is a popular and growing sport, especially indoors, where climbers can train on man-made routes using artificial holds. Both strength and good technique are required to successfully reach the top of a climb, and often coaches work to improve technique so less strength is required, enabling a climber to ascend more difficult climbs. Various aspects of adding computer-interaction to climbing have been studied in recent years, but there is a large space for research into lightweight tools to aid recreational intermediate climbers, both to tackle trickier climbs and to improve their own technique. In this project, I explored which forms of data-capture and output-features could improve a climber's training, and analysed how climbers responded to viewing their data throughout a climbing session, then conducted a user-centred design to build a lightweight mobile application for intermediate climbers. A variety of hardware and software solutions were explored, tested and developed through a series of surveys, discussions, wizard-of-oz studies and prototyping, resulting in a system that most closely meets the needs of local indoor boulderers given the project's time scope. \noindent \begin{itemize} \item I spent over $60$ hours conducting in-field observations of climbers interacting with various prototypes. \item I iteratively developed an interactive mobile app that: \begin{itemize} \item can record, graph, and score the acceleration of a climber, as both a training tool and gamification incentive for good technique \item can link a video recording to the acceleration graph, to enable frame-by-frame inspection of weaknesses \item is fully approved and distributed on the Google Play Store and currently being regularly used by 15 local climbers. \end{itemize} \item I wrote over $1000$ lines of \verb|C#| source code, with a further $20,000$ lines of Unity code-files defining the graphical interface.
\item I conducted a final usability study, comprising a thematic analysis of forty minutes' worth of interview transcripts, to gain a deep understanding of the app's impact on the climbers using it, along with its benefits and limitations. \end{itemize} \chapter*{Acknowledgements} Firstly, a massive thanks must go to my dissertation supervisor Peter Bennett, for both aiding me greatly in the formation of the initial idea, and providing invaluable support throughout the execution of the project. I want to thank Beth, the department's Teaching Technologist, for lending me a second Android phone, which greatly sped up development of the app. To the three climbers who lent me hours of their time to test various iterations of the app, and the six climbers who enabled me to perform my final in-depth analysis: thank you! And of course my unending gratitude to the Bristol climbing community, for providing responses to my surveys, performing countless beta-tests, and giving me the feedback I needed to develop the app. \chapter{Interview Transcriptions} \label{appx:transcriptions} \section{Participant One} \input{transcriptions/P1_transcription.txt} \section{Participant Two} \input{transcriptions/P2_transcription.txt} \section{Participant Three} \input{transcriptions/P3_transcription.txt} \section{Participant Four} \input{transcriptions/P4_transcription.txt} \section{Participant Five} \input{transcriptions/P5_transcription.txt} \section{Participant Six} \input{transcriptions/P6_transcription.txt} \chapter{Conclusion} \label{chap:conclusion} \section{Main Achievements} The main achievement of this project was the development of the \verb|augKlimb| app. My iterative user-centric process involved over $60$ hours of in-field observations of (and discussions with) climbers interacting with various prototypes, culminating in a market-ready app that is published and deployed online.
For a final evaluation, I interviewed six climbers who had been using the app, and then conducted a thematic analysis of the interview transcripts, gaining a deep understanding of the app's impact on the climbers using it, along with its various use-cases, benefits and limitations. \section{Project Status} \subsection{App developed} Iterative design never truly ends, and the series of interviews I conducted at the end of this project highlighted many potential directions in which I could take the app in the future, which I intend to continue exploring even after the completion of this project - I go into more detail below. However, in order to conduct that final usability study, I drew a line and deployed a fully-working version of the app, which is on the Google Play Store, and is currently being used on a regular basis by around 15 local climbers, something I am very proud of. \subsection{Features} The app can record the acceleration of a climber, displaying it as a graph annotated with second-by-second smoothness ratings. A video can be optionally linked to the graph, providing context to the peaks and troughs, and enabling frame-by-frame playback of the climb. Although these are good features, and allow both deep and shallow levels of technique feedback, they are not as technically advanced as I was hoping to achieve when I first started this project. By restricting myself to a mobile phone as the device being used (a deliberate choice prompted by both my initial survey and my desire for the final product to be as accessible as possible), I was limited to only 2D video and 1D accelerometer data as possible inputs. In my rush to get a coded prototype out to testing, I didn't spend a lot of time experimenting with video-based analysis techniques, but built an accelerometer-based app, planning to revisit the possibility of CV at a later date.
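The exact smoothness metric used in \verb|augKlimb| is not spelled out here, but a rating of the kind described above can be sketched as follows: take the magnitude of each accelerometer sample, then map the variation within each one-second window to a score, with steadier movement scoring higher. The sampling rate and the scoring formula below are illustrative assumptions, not the app's actual implementation.

```python
import numpy as np

def smoothness_per_second(accel, rate_hz=50):
    """Hypothetical per-second smoothness rating from raw accelerometer data.

    accel: (n, 3) array of accelerometer samples (x, y, z).
    Returns one rating in (0, 1] per full second of data: the lower the
    variation in acceleration magnitude within that second, the higher
    (smoother) the rating.
    """
    mag = np.linalg.norm(accel, axis=1)          # per-sample magnitude
    n_seconds = len(mag) // rate_hz
    ratings = np.empty(n_seconds)
    for s in range(n_seconds):
        window = mag[s * rate_hz:(s + 1) * rate_hz]
        ratings[s] = 1.0 / (1.0 + window.std())  # steady -> close to 1
    return ratings
```

Under this scheme, a jerky lunge produces a spike in the magnitude signal and drags that second's rating down, while controlled movement keeps the rating near one.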
Inevitably, as I iterated through the development, effectively adding extra visualisations and analytics each time, the core essence of the app remained centred on the accelerometer as the primary method of input. Perhaps if I were to do this project again, I would spend more time at the beginning developing a CV-based analysis feature, which just feels more interesting and innovative than graphs and statistics based on acceleration data. However, I still believe that in the context of a user-centred design, and given the time constraints of the project, I made the right choices given what I knew at the time. \subsection{Future Plans} \subsubsection{Short-Term} The interviews and subsequent analysis acted as part of the Iterative Design cycle, so my immediate inclination is to act upon the easier-to-implement suggestions, and improve the app by adding an introductory guide, the ability to link data from repeated attempts at the same climb, and more connectivity over social media to enhance remote competition. \subsubsection{Mid-Term} The unexplored option of CV-analysis, potentially aided by cloud-computing, would be an interesting area for future development. Especially as the last major iterative loop was to add video-playback to the app, recording and importing videos are a part of the current user-pattern, and so gradually starting to use these to produce more statistics would be a good idea. One of the major issues with the current app is the difficulty in transferring and importing the video or climbing files between devices. This was mainly caused by the usage of Unity as the app-development tool. Although Unity was great for the quick iteration and testing of a simple app, now that I have a more refined idea of what the app's features and requirements are, I could potentially port it over to another platform that doesn't offer as much speed or iterative support, but instead offers better device-connectivity options.
This would also potentially open up the ability for the app to interface with wearables or smartwatches, a much-requested feature that was not possible with Unity. \subsubsection{Long-Term} When it comes to a more general direction in which to take the app, two potential suggestions came out of the interviews: \begin{itemize} \item Increase the analytical component of the app - add goals, technique drills and plans, long-term progression tracking, and other training-oriented features. \item Accentuate the capacity for gamification - add more social features, maybe even moving towards turning the app into a literal game, with points scored per climb and online connectivity with friends. \end{itemize} \section{Comparison to Original Aims} My general aim was to iteratively build a working and useful product, and then analyse how the data given to climbers was used, guided by a set of hypotheses. I managed to successfully meet both of these aims, although at times they had slight conflicts. \label{aimsconf} If developing the best app possible had been my only goal, then I would have included guides on how to interpret the data and examples of how the app can be used in different ways. However, this set of instructions would have severely limited my ability to analyse how the climbers themselves interpreted the data, and so the version of the app I used for the final study did not include them, which caused issues, as highlighted through the ``Ease-Of-Use'' and ``Complexity of analytics'' themes in the TA. From my original proposal, my more detailed aims were as follows: \begin{itemize} \item Must Have: \begin{itemize} \item At least 2 wizard-of-oz prototypes to test user interaction and preferences. \item A final product that implements some of the features at a low-fi level. \item Some form of testing and analysis of both prototypes and the final product. \item Full ethics and health-and-safety approval.
\end{itemize} \item Should Have: \begin{itemize} \item An analysis of the current needs and wants of intermediate-level climbers. \item A final product that fully works and is very useful for its intended purpose. \item Plenty of user testing to give both qualitative and quantitative results for both the prototypes and final design. \end{itemize} \item Could Have: \begin{itemize} \item Strong usage of User-centred-design techniques. \item Usage of a very novel technology/technologies as part of the final product. \item A final product that is ready for market. \item An excellent writeup analysing choices and mistakes made along the development process.
\chapter{Contextual Background} \label{chap:context} In this chapter, I aim to outline what bouldering is, why recreational intermediate boulderers were my target audience and test-population, the gaps in literature that this project aims to explore. I also detail the four hypotheses that guided the development of both the app and the study, and outline some of the key challenges this project faced. \section{The Problem Being Investigated} \subsection{Indoor Bouldering} \subsubsection{What} For the purpose of this project, I will be studying how interacting with data affects recreational indoor boulderers. Bouldering is a sub-genre of climbing where the routes are generally quite short (under 5m), so no ropes or gear are required, only a soft crash-mat for safety. \begin{figure}[h] \centering \includegraphics[width=4cm]{imgs/boulder.jpeg} \includegraphics[width=4cm]{imgs/wall.jpg} \caption{Example images of a bouldering wall} \label{fig:playstore} \end{figure} Many indoor bouldering ``gyms" have opened in the past decade, with the low requirements for kit and more social environment being mentioned as some reasons for this growth~\cite{socialclimb}. \subsubsection{Why} Studying an indoor variant of climbing was chosen as it is weather in-dependant (a useful factor for a studying being conducted in the winter-spring seasons). Also, it was a lot more accessible for both me and my potential testers, with three locations across Bristol, all within easy cycling distance of the university. This meant that many more test sessions could take place, between two and four three-hour sessions per week, which is much more than would have been possible with a drive out to a nearby rock-face. Another reason indoor climbing (and the bouldering sub-type in particular) was chosen is because it usually involves shorter, slightly harder routes, with participants often discussing the climbs between attempts. 
In addition, the shorter routes are easily visible from the ground, with fairly consistent lighting, potentially allowing a stationary camera-equipped device to observe the entire climb in-frame. \subsubsection{Grades} As a sport, the goal of climbing is to ascend (go up) or traverse (go across) a wall, from the ground to an end-point. Some of these routes are more difficult than others, and so they can be ``graded" in various ways to indicate to other climbers the relative difficulty. This difficulty is generally subjective, and is usually a combination of: the type and convenience of the holds, the distance between holds, and the incline of the wall. Some people use this as just an indicator for which climbs they are likely to have the most fun on, and other use this as a tool for measuring their progression, trying to climb the highest grade they can. Climbing routes that are easier than a persons maximum ability offers an opportunity to refine and hone strength and technique before attempting harder-graded climbs. There are a variety of ways of grading routes. Often bouldering uses a ``$V$" system, with $V0$ being easy beginner-orientated climbs, and $V7$ being an expert-level route. In the old outdoor bouldering, this was determined by discussion and opinions on various natural holes in rocks, but in modern indoor bouldering gyms, climbing routes are designed, or ``set" with all the hand- and foot-holds being made of a certain coloured plastic, and that colour will signify it's difficulty. \subsection{Intermediate-level Climbers} To further help narrow the scope of the project, I decided to aim a potential product or interactive device on a typical intermediate climber. 
This is because beginners often progress very quickly with just \textit{more time} spent climbing, and expert-level climbers often have their own coaches to aid in their growth, whilst a large number of intermediate climbers, who want to improve, but may not know what they need to work on to improve, and get no feedback from their current climbing. For many intermediate boulderers who are facing a plateau in their progression, they currently face two options: either to continue climbing a lot at a low grade, hoping they gradually build up the strength and technique required to break through and climb harder; or to pay (quite a lot of) money for a private coach who can analyse their technique and give them precise feedback on how to improve. Therefore from a climbing-improvement perspective, the product I was aiming to build could either incentivise climbing lower-grade with a high-volume, or give feedback on technique, or both. \section{Importance of Topic} There is currently no lightweight device, product or app in the market that is aimed at helping intermediate climbers see data and progress in their climbing. As will be further discussed in Chapter~\ref{chap:technical}, academic research has thus-far focused on very specialised hardware or custom-built climbing-walls, with little attention on something smaller or lightweight that is highly-accessible for every climber to use. In conducting a user-centred iterative design to discover what kind of data interaction is most useful for intermediate climbers, I have both built a product that is fit-for-purpose, and have begun exploring what is an as-of-yet under-studied area in Human-Computer Interaction: In-field user studies that aim to capture the view and use-patterns of the majority of boulderers. \section{Hypotheses} Along with developing a product, from a HCI perspective I also wanted to see how climbers interact with a more modern product or app. 
Many climbers either ``just climb", leaving their phone in the lockers, or, if they do track their climbing, simply list which climbs they have completed in a logbook or logging-app; there has been little research into the computerisation of mainstream climbing aids thus far. Does adding measurements and metrics aid a climber in the long run, either by making climbing more fun and therefore causing them to train more often, or by changing not the frequency but the effectiveness of the training itself? Instead of aimlessly climbing with a vague goal of ``getting better", does providing quantitative analysis of completed climbs give more focus to a climber's sessions? With these thoughts in mind, I formulated four key hypotheses to explore: \begin{enumerate} \item Augmenting a climbing session with a live-feed of data analytics will positively impact climbing technique. \item A lightweight, low-cost and simple-to-use product will be popular among intermediate climbers who are serious enough to want to improve, but not so serious they want to pay for coaching. \item Seeing a ``score" that rates climbing technique will enable gamification and fun, both for individuals and within groups. \item Providing more data to climbers will enable more focused progression tracking and goal-oriented training. \end{enumerate} \section{Central Challenges} One central challenge of the project was to iteratively develop a useful product that fits the needs of local indoor intermediate boulderers. The associated second central challenge was the required testing, analysis and understanding of how the augmentation of live data analytics can aid a climber's progress or enjoyment. With these initial goals, after months of study and through four major iteration-cycles, I developed an app that can record, share, and match accelerometer data and video recordings taken from a variety of devices at different times.
This was the third major challenge: alongside designing a product and investigating its impact on users, actually coding and publishing a feature-rich app that is reliable and fast on a range of devices was a significant undertaking. Sub-challenges included: \begin{enumerate} \item designing a series of metrics that are useful and consistently comparable between climbs, \item exploring the video-analysis of climbing technique, \item constant user testing and re-evaluation of goals, quickly developing features in time for thrice-weekly in-field testing sessions at the wall, \item ethical considerations of developing a product that advises people whilst they perform an inherently risky sport, both during the development of ideas and across the two full academic-ethics applications, \item running a series of longer-term user-analyses and interviews at the end of the project, to further explore the impact of using the completed app, then conducting a thematic analysis over the transcripts of the interviews. \end{enumerate} \chapter{Critical Evaluation} \label{chap:evaluation} Although the iterative design process can be seen as a never-ending spiral that continually refines a product, for the purposes of this project I stopped iterating and deployed a ``final" version of the app. This allowed me to spend the final two weeks assessing and evaluating, through a series of semi-structured interviews, both how well the app meets the original aims I set out to achieve as a product, and to what extent my HCI-based hypotheses were accurate. \section{Format of Final Evaluative Testing} \subsection{Qualitative vs Quantitative} Some previous research in the field has analysed the efficacy of climbing products quantitatively, by statistically validating predictions of competition placements~\cite{climbaxstudy} or measuring an increase in climbing ability~\cite{climbbsn}.
However, in the field of HCI, where usability and complex interactions between humans and technology are being examined, it is hard to reduce the data to numbers that can be statistically tested, leading to more qualitative methods being used~\cite{oro11911}. Also, for the purposes of the project, recording progression for a quantitative analysis would require much more time than was available, and the hypotheses laid out at the start of the project require the deeper understanding that only qualitative analysis can provide. \subsection{Thematic Analysis of Interview Data} \subsubsection{Why this method} Surveys were a useful source of information both for my initial requirements-gathering and for ongoing feedback points throughout the course of development, but the static format inhibits elaboration (which leads to richer data), even with the most open-ended of questions~\cite{ozoksurvey}. Therefore I opted to undertake a series of semi-structured interviews with a range of different users of the augKlimb app, and conduct a Thematic Analysis (TA) over the transcripts, a popular technique in recent HCI work~\cite{themanbrown}. Semi-structured interviews are open discussions, guided by a set of rough topics or questions, but which allow for the exploration of new ideas by following any leads that come up throughout the conversation, producing rich data~\cite{oro11911}. TA is a process of encoding qualitative information, and has been presented as the bridge between qualitative and quantitative methods~\cite{boyatzis1998transforming}. Thematic Analysis was developed in the field of psychology, and the most commonly cited methodology I found in HCI literature is the one laid out by Braun and Clarke~\cite{braunclarke06}.
The same authors later wrote a chapter detailing how to apply TA to interview data~\cite{brauminterviewta}, and this was the guide I followed whilst performing the analysis below. \clearpage \subsection{Braun \& Clarke's six phase approach to TA} Here is the outline of the TA approach, detailed by Braun \& Clarke, that I performed: \begin{quote} \begin{enumerate} \item \textbf{Familiarisation with the data:} reading and re-reading the data. \item \textbf{Coding:} generating succinct labels that identify important features of the data relevant to answering the research question; after coding the entire dataset, collating codes and relevant data extracts. \item \textbf{Searching for themes:} examining the codes and collated data to identify significant broader patterns of meaning; collating data relevant to each candidate theme. \item \textbf{Reviewing themes:} checking the candidate themes against the dataset, to determine that they tell a convincing story that answers the research question. Themes may be refined, split, combined, or discarded. \item \textbf{Defining and naming themes:} developing a detailed analysis of each theme; choosing an informative name for each theme. \item \textbf{Writing up:} weaving together the analytic narrative and data extracts; contextualising the analysis in relation to existing literature. \end{enumerate} \hspace*{\fill}(Direct quote from~\cite{brauminterviewta}) \end{quote} \section{Performing the Interviews} \subsection{Question Selection} The goals of my interviews were two-fold: to generally assess the usability and usefulness of the app, and also to assess my hypotheses.
With this in mind, I developed a list of nine questions, which started by asking the climbers about their personal climbing (to both provide context and get the conversation flowing), and then led on to how the app impacted their climbing and their experiences of using it. \begin{itemize} \item What do your climbing sessions usually include? \item Would you class your sessions as fun, training, or a mixture? \item How do you think using the app impacted your climbing session today? \item Do you see the app as a training tool or as gamification of (adding fun to) your climbs, or both? \item Do you think the app caused or helped improve your climbing? \item What was your favourite aspect? \item What was your least favourite aspect? \item Are you likely to continue using the app in the future? \item What feature(s) would you like to see extended or added? \end{itemize} These questions were only used as a guideline in the interviews, with more questions being asked to explore interesting points raised throughout the conversation. Although I aimed to obtain answers to all the questions, the wording and order of the questions varied, and often a participant would cover a question without prompting, during discussion leading on from another question. \subsection{Ethics} Because this was a distinct user-study, with different aims and data-collection methods from my initial study, I applied for a second full ethics approval, the paperwork for which can be found in Appendix \ref{appx:ethics2}. Privacy and anonymity concerns related to the recording of audio were the biggest challenge. These were addressed by transcribing the recorded audio files to text, then deleting the original recordings. A secure list of names was kept to enable the participants' right to withdraw, but any personally identifiable data was removed from the transcripts, and each participant was assigned a number, which is how I will refer to them throughout the analysis below.
\subsection{Participants} I interviewed two males and four females, three of whom were university students; all were between the ages of 20 and 30. It should be noted that this is not the most representative sample of the general climbing population, especially with regard to age, technology usage and climbing experience; many climbers are older and have been climbing for multiple decades. Future research could explore how different age-groups or differently-experienced climbers interact with data-led augmentation of climbing. \subsection{Transcription} As stated above, after recording the interviews, I transcribed the audio into text to enable the TA to be performed on the data. Due to both privacy concerns and the inaccuracies of speech-recognition software, I manually transcribed the forty minutes of audio. The full transcriptions from all six interviews can be found in Appendix~\ref{appx:transcriptions}. Although this process was painstakingly laborious, taking over eight hours of typing, it also helped to fulfil the first step of TA: familiarising myself with the data. \section{Thematic Analysis} \subsection{Familiarisation} Transcribing the audio began my familiarisation with the data, but as this was a mostly passive process, trying to type the words as quickly as I could hear them, it did not provide the analytical immersion required for TA. Therefore I repeatedly read through the transcripts, actively thinking about how the text applied to my research goals, and ``treating the data as \textit{data}''~\cite{brauminterviewta}, until I was fully engaged with the concepts highlighted through the discussion. \subsection{Coding} Leading on from the familiarisation, I began to systematically highlight and annotate the ideas and interesting points raised by my interviewees. Where similarities arose, I ensured that the same annotation was applied consistently: these annotations were then my \textit{codes}.
Multiple passes through the dataset were required as my analysis gradually became more developed, and more nuanced points were highlighted and re-discovered in previously annotated data. \subsection{Theme Identification} Patterns, known as ``themes" in TA terminology, gradually arose in the codes I was noting down. I kept these themes in mind as I read through the data set again, colour-coding codes that fell within the various meanings. After reviewing the themes, and the various codes that they collated, against the transcripts again, I defined and named them. The themes that arose, and a brief selection of the codes that characterised them, can be found in Table~\ref{tab:themes}. \begin{table}[h] \centering \begin{tabular}{|l|l|} \hline Route-difficulty & warming-up with easy climbs, technique focus on easy climbs, \\& repeating the same climb, attempts to climb at limits, \\& app more useful on easy climbs \\ \hline Seriousness of a climbing session & fun, training, gamification, social interaction, competition with self, \\& competition with others \\ \hline Complexity of analytics provided & quick performance feedback, detailed analytic feedback, \\& simple score, graph spikes, video to graph visualisation, \\& request for labelling technique as ``good or bad'' \\ \hline Ease of use & easy to use, simple, request for more instructions, \\& not wanting to look at the screen, too many button presses, \\& difficulty in connecting video, gui output is good, \\& linking repeated attempts of climbs \\ \hline Mobile phone as a form-factor & suggestion for wristband, instant display useful, lack of pockets \\ \hline \end{tabular} \caption{Themes identified in the thematic analysis, with a selection of their constituent codes.} \label{tab:themes} \end{table} These themes align quite closely with both my hypotheses and with the questions asked in the interviews.
There are two reasons for this: the interviews were conducted with guideline questions, so the discussions prompted from these guidelines naturally followed similar topics; and I also had my research aims in the back of my mind whilst performing both the coding and theme-collation steps, as recommended by the guide I was following~\cite{brauminterviewta}. \subsection{Discussion} I will now discuss what I learnt from this analysis, first in relation to each of the four hypotheses, and then on a theme-by-theme basis. \subsubsection{Relation to Hypotheses} Here are the four hypotheses laid out in Chapter~\ref{chap:context}: \begin{enumerate} \item Augmenting a climbing session with a live-feed of data analytics will positively impact climbing technique. \item A lightweight, low-cost and simple-to-use product will be popular among intermediate climbers who are serious enough to want to improve, but not so serious they want to pay for coaching. \item Seeing a ``score" that rates climbing technique will enable gamification and fun, both for individuals and within groups. \item Providing more data to climbers will enable more focused progression tracking and goal-oriented training. \end{enumerate} All of these hypotheses were at least partially confirmed during the testing: \begin{enumerate} \item Although the first hypothesis may have lent itself to a more quantitative analysis, I had to rely on the climbers' self-perception of whether their climbing improved. Although P6 did not feel that the app had impacted their ability, all the other interviewees said it had some form of impact: either in the short term (for example, P5 saying that the ``pressure" from knowing they were being recorded made them ``think a lot more" about smooth technique) or the long term (P1 saying they were ``climbing better" after using the app).
\item With all of the interviewees being within my target demographic, all saying they enjoyed using the app, and four out of six saying that they will definitely continue using the app in the future, it can be concluded that the app is popular among the type of climbers I was aiming to develop it for. \item The smoothness score definitely enabled fun, with P3 saying that they ``loved the gamification, wanting to get the scores as high or as low as possible, that is quite good fun", yet some interviewees who saw the app as more of an analytics tool ``hadn't really considered treating it like a game to try and get a better score"(P1). \item P2 in particular enjoyed the more analytic progression-tracking available, using the app mainly as a ``figure for performance" and to ``compare the two times I've managed to do those climbs". However, there were some limitations with the app's ease of use in this area, with P3 saying that although it was ``interesting to see how they compare" when tracking progression, it was ``easy to mix it up" when trying to scroll back and view a previous attempt at a climb. \end{enumerate} \subsubsection{Route-difficulty} A variety of comments were made in the interviews about how differently-graded routes were deliberately climbed at different times throughout a session. The average session seemed to consist mostly of ``warming up at the beginning with easier climbs, and then working up towards harder climbs"(P1). Across all participants, the app was used more often to focus on smoothness during easier climbing, partly because it was ``a very easy way to use the app"(P4).
Interestingly, for some interviewees, choosing to use the app seemed to impact the choice of route-difficulty, whilst conversely for others, the choice of climb impacted how the app was being used: when P3 decided to spend time using the app, they would select an easy route and ``keep doing the same climb" until they got the score as low as possible. Alternatively, P4 stated that ``when I'm warming up, I'll be looking at my technique, trying to do things slowly and smoothly, like that's when I'd really be looking at the app" to quickly determine the smoothness score. They would then ``try to remember those smooth movements when I move onto the harder climbs", but if they fell off or got stuck, they often ``wanted a full analysis with the video as well" to help them determine the weak points. Whenever these two different use-cases were mentioned, they were linked to the perceived difficulty of the route being attempted. \subsubsection{Seriousness of a climbing session} The interviewees had a range of views about whether their climbing sessions were for fun or training. Both P3 and P5 said their sessions were always just fun-orientated; P1 and P4 had similar views, in that they climbed ``mostly for fun but obviously ... I want to keep improving"(P1) and that ``the more you train the more fun it gets"(P3); P6 deliberately alternated, doing ``two training sessions and one fun session" per week; and P2 focused almost entirely on the ``training side". This correlated strongly with how they perceived the app: as either a training tool or as a gamifying aid to fun. Gamification has already been discussed above in the analysis of the third hypothesis, so I won't repeat it here.
An interesting result was that different people could use the app in exactly the same way (clicking, climbing, and interpreting the data identically), yet respond emotionally in very different manners: as a competitive fun element, or as a drive to improve their performance, depending upon their goals for the session. \subsubsection{Complexity of Analytics} The two different analytical uses of the app, covered both in Section~\ref{conflict} and briefly in the Route-difficulty theme above, were interesting. Despite the wide range of ways the app could be used, all interviewees described either: (1) quickly comparing just the smoothness score of multiple ascents, or (2) going into a detailed analysis with the graphs and video to determine why they struggled with a harder route. This suggests that users want either to interpret very complex analytics or to quickly view very simple analytics, but not anything in the middle. Also grouped within this theme were the codes relating to complaints and improvement-suggestions about the analytics sometimes being undecipherable, with P5 saying they didn't enjoy the smoothness score because they ``didn't really have an idea of where the scale went from and to on the rating", and P3 suggesting that a ``simpler 5-star rating" would be easier to use, as, being less experienced, they ``don't really know what some of the stats mean". This was good feedback, and I intend to implement the suggestions in future versions of the app, but as I will elaborate on in Section~\ref{aimsconf}, the test version of the app deliberately contained no guidelines, to allow me to examine how climbers naturally use those datapoints when given no prompting. \subsubsection{Ease of use} The app was generally described as ``pretty easy to use"(P2) by all interviewees, most ``liked the interface"(P6), and P5 even stated the ease-of-use as their favourite aspect.
However, some aspects and use-scenarios were found to be more difficult or not slick enough. As detailed above, a general guide on how to use the app would have been appreciated by most participants. P2 disliked how, because they jumped off the wall at the end of a climb, they had to ``delete the spike from the jumping" before an accurate smoothness score was shown, as ``it felt like there were too many buttons that I had to press every time", and suggested automatic detection of such a fall to speed up and ease how they used the app. The linking of devices, to transfer video or climb-files, was the most commonly reported issue, with it being ``quite complicated"(P1) to do so, a problem I explored in depth in Section~\ref{network} but, due to the limitations of Unity, was unable to find a more satisfactory solution to. \subsubsection{Mobile phone as a form-factor} Although they appreciated the use of a mobile phone for displaying the output in an interactive way, most of the interviewees highlighted the device's shortcomings as a data-collection tool. Particularly P6, who stated the phone was their least favourite part of the app, because ``I don't have any pockets on my clothing, so I find it quite hard to put it somewhere". They then went on to say ``what I'd like to see in the future is linking it to a wristband or a smartwatch or something", an idea echoed by P4, who said ``it would be great to have external sensors", and P2, who didn't like how a phone ``wobbled" in their pocket, and would prefer it ``if it was on an armband". P4 did actually use the phone attached to their body with an arm-strap, but said that the movements were sometimes ``too jolty", and next time wanted to either ``wear pockets ... to see what results it gives when I've got it attached to my torso", or try ``putting it on difficult limbs, to see if you're working harder on each hand for instance", interesting points I had not considered.
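P2's suggestion of automatically detecting the landing spike could be implemented with a simple backwards scan over the recorded acceleration magnitudes. The following is a minimal sketch of one possible approach, not the app's actual implementation; the function name, threshold and window size are all illustrative assumptions:

```python
def trim_landing_spike(samples, threshold=3.0, window=5):
    """Remove a trailing spike (e.g. the jump off the wall) from a list of
    acceleration magnitudes (in g). Scans backwards from the end of the
    recording and cuts everything after the last run of `window` consecutive
    sub-threshold ("calm") samples."""
    calm = 0
    cut = len(samples)
    for i in range(len(samples) - 1, -1, -1):
        if samples[i] < threshold:
            calm += 1
            if calm >= window:
                break  # found sustained calm movement; stop scanning
        else:
            calm = 0
            cut = i  # everything from here onwards is part of the spike
    return samples[:cut]
```

A recording that ends calmly is returned unchanged, so the trim would only fire when a genuine jump-off spike is present.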
\chapter{Project Execution} \label{chap:execution} \section{Introduction} The main activity of this project was the development of a mobile app that can usefully analyse intermediate climbers' technique and then provide them with various data whilst they climb, whether for training or fun. To do this, I performed a user-centred iterative design process, with four main iterations, and many smaller ones to guide the way. \subsection{User-Centred Iterative Design} Throughout the development of this project, I followed the guidelines of both the Iterative Design and User-Centred Design methodologies. \subsubsection{Iterative Design} Iterative Design is a methodology in which a cyclic process of prototyping, testing, analysis, and refinement of a product is followed; see Figure~\ref{fig:iter}. \begin{figure}[h] \centering \includegraphics[width=10cm]{imgs/iterativeloop.jpg} \caption{Diagram of the Iterative Development Model. \newline \textit{\small Westerhoff [Public domain], via Wikimedia Commons}} \label{fig:iter} \end{figure} After some initial plans have been made, a set of requirements is laid out, then a very basic form of the product is implemented, before being tested to gain feedback; this feedback is then evaluated to inform the next cycle's planning stage. This process repeats, gradually refining the product and gaining stronger insights into the users' requirements. In this project, four major iterations took place, and the majority of this chapter is dedicated to outlining them. \subsubsection{User-Centred Design} Because this project aimed to build a product that most closely matches the needs and wants of local recreational climbers, I decided to also use the User-Centred Design approach, which states that ``users should be involved throughout the project development process"~\cite{ISO9241-210}.
In order to do this, I met and discussed with users throughout every stage of the Iterative Design process detailed above, not just the testing phase. Therefore, within and alongside the larger iterations, minor iterative loops took place, where users were consulted repeatedly and their test-responses evaluated, with suggested changes implemented during the development of each major feature. \section{Iteration \#1: Wizard-Of-Oz Prototypes} In this section, I will describe my first large iteration, in which I gathered initial thoughts and requirements from climbers via a survey, and then trialled some of these in a series of wizard-of-oz (WOZ) tests to glean the probable requirements of the users. Then, after implementing a prototype app, I brought it back to the users again to test, evaluating their responses to carry me forward. \subsection{Initial Survey} To inform my first forays into the project, I released a questionnaire online. The ethics application for the survey (which also covered prototype testing) can be found in Appendix~\ref{appx:ethics1}, and the survey questions can be found in Appendix~\ref{appx:survey}. I shared links to this survey across a variety of local climbing club pages, and received 49 responses in total. It must be mentioned that approximately half of these responses came in after I had already started development of the app, so although they did not all contribute to my initial development, I regularly checked the survey results and took into account any new opinions. This is one benefit of an online survey: I got a much broader reach, quantity and range of responses than if I had performed a more traditional paper survey at a climbing wall.
\subsubsection{Survey Respondents} \begin{figure}[h] \centering \includegraphics[width=10cm]{imgs/surveydemographics.png} \caption{Demographics of Respondents to the Initial Survey} \label{fig:surveydemographics} \end{figure} Some statistics on the demographics of the respondents can be seen in Figure~\ref{fig:surveydemographics}. The male-female gender split is fairly consistent with the general indoor climbing population: 65\% male respondents compared to the 64\% stated in Rapelje's study on the demographics of climbers~\cite{climbing-sub-worlds}, and 65\% aged 19--24, compared to 60\% in that study. This was aided by efforts to spread the survey link across a range of local climbing pages and Facebook groups, not just the student groups, to get a more representative sample of views. With three-quarters of the respondents self-describing as intermediate climbers, the survey predominantly hit the target audience for the project. Although all the responses were taken into account, a heavier weighting was placed on the suggestions given by intermediates compared to those of lower and higher abilities. \subsection{Insights Gained From the Survey} \subsubsection{Equipment Used} To inform my decisions on what form my interactive device should take, I asked what items of equipment climbers often bring to the wall. Everyone stated that they bring their own shoes, 90\% stated they bring a chalk-bag (a small hip-bag that contains powdered chalk for drying fingers and improving grip), and 31\% said they bring a small brush for cleaning dirty holds. This provides a range of possible locations for a device: either inside the pocket of a chalk-bag, or attached to a brush, in keeping with the lightweight aims of the project. Only one respondent in this question included a phone in their list of equipment brought to the wall, and one even explicitly stated that they deliberately left their phone in their bag in order to ``get away" from it.
However, in later questions, an app (and thus a phone) was the most discussed potential device, with multiple suggestions that ``you would severely limit your possible audience by having the need for a device", as ``everyone has a phone", so I should ``stick with an app". \subsubsection{Communication With Others Whilst Climbing} Because a potential use of the device was to aid some of the common interactions that climbers have with friends at the wall, two of the questions in this initial survey asked what kind of things people wanted to hear whilst climbing, and what they often told their friends. The most common response was ``beta", which is a colloquial climbing term for advice on how body-positioning should be used to solve a tricky climb. This can vary a lot between climbers, with each climb usually having two or more potential solutions. Being able to ``read" a climb to determine this beta is a tough and much-sought-after skill within the climbing community, to the extent that it is difficult even for the very best in the sport, and thus definitely out of reach of any Artificial Intelligence (AI) I could hope to create within this project. An alternative, suggested by a few survey respondents, could have been to record (with either video or other sensors) a proficient climber climbing a route, and then to show that recording at a later date. However, this data-collection and -replay contradicted my initial aim of the device/app being usable anywhere by anyone: for a low-cost app to be usable everywhere, the requirement to visit every climbing centre every few months and re-record somebody climbing every route, just for the product to remain usable, was not feasible. Another common interaction highlighted by the survey was ``pointing out holds" to a climber, who is often so engrossed in the climb that they do not notice a certain location where they could put their hands or feet.
This has an obvious potential use-case for a device: using colour video and existing computer-vision algorithms for coloured-blob detection to find, and then audio-relay, the location of nearby holds to a climber, so this was selected to be examined further. \subsubsection{Requested Potential Features} Arguably the most useful question was the one asking \textit{``Are there any features you'd like to see in a training app or device?"}. This elicited a very wide range of suggestions, from yoga and meal plans to VR-viewing of someone climbing alongside. Many of these were either too simple (and could be achieved by existing apps, or an online search), or far too ambitious in scope. The most common request was a way to log climbs in some way; although a few logging apps already exist, this feature was definitely required in some form alongside whichever novel features were to be developed. Tracking progression and seeing gradual improvements over time is a typical characteristic of training in any sport, but instead of just listing climbs that had been completed, a few respondents suggested that other metrics could be tracked, such as height climbed, frequent muscle groups used, and weak points in technique that caused a climber to fall off at a certain point. Many respondents emphasised that ease-of-use was very important, so that they didn't waste climbing time clicking through the app: ``large buttons for chalky fingers" were requested, alongside ``a minimum of two clicks between opening the app and using it". The video-analysis of technique was mentioned by seven respondents, with a mixture of technique-tutoring, centre-of-gravity detection and limb-annotation suggested. Both centre-of-gravity detection and limb-annotation could feasibly be achieved with computer vision, so those were taken forward to the WOZ testing phase.
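To illustrate the principle behind the coloured-blob detection mentioned above (thresholding pixels near a target colour, then grouping adjacent matches into blobs and reporting their centroids), here is a simplified, dependency-free sketch. The real experiments used OpenCV's optimised routines on camera frames; this toy version, and all of its names and parameters, are illustrative assumptions only:

```python
from collections import deque

def find_colour_blobs(image, target, tol=30):
    """Find connected blobs of pixels near a target RGB colour.

    `image` is a 2D grid (list of rows) of (r, g, b) tuples; returns a list
    of (centroid_row, centroid_col, size) tuples, largest blob first.
    A simplified stand-in for OpenCV-style thresholding plus contour/blob
    detection.
    """
    h, w = len(image), len(image[0])
    # Threshold: mark pixels whose every channel is within `tol` of the target.
    mask = [[all(abs(c - t) <= tol for c, t in zip(px, target))
             for px in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill over 4-connected neighbours.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx, len(pixels)))
    return sorted(blobs, key=lambda b: -b[2])
```

The centroid of each blob is what a hold-announcing feature would relay to the climber; as noted above, lighting variation makes the fixed tolerance `tol` the weak point of this approach in practice.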
However, if beta-detection had been ruled out as too difficult for CV/AI, then giving accurate technique advice was even further out of the range of potential features: with the high complexity of movements in the sport, the years of learning required to achieve coaching qualifications, and the high risk of injury if the advice were incorrect, giving explicit advice about how to climb is not something that I wanted to go into with this project. One metric that was proposed was the analysis of efficiency or ``smoothness" in a climb. A commonly accepted signifier of ``good technique" is when a climber moves neatly and directly between successive holds, without any jerky motions or unnecessary readjustments~\cite{centreofmass}. An accelerometer recording could be used to determine this metric, and thus this suggestion was also brought forward to the next stage of testing. \subsection{Feasibility and Potential Limitation of Video Analysis} The initial survey showed interest in using CV either to give feedback about climbing technique, or to point out where nearby handholds were. To those ends, I began experimenting with using OpenCV to detect blobs in previous recordings of my own climbing. Although some small success was had in detecting the centre of body mass, and some nearby coloured handholds, the wide variety of lighting conditions and the noise from the low-resolution camera made the detections very inaccurate. Another issue was that the code analysed the video very slowly, even on my PC with a graphics card, so converting it to battery-efficient code that could run in real time on a mobile device would have been extremely difficult. Given the time-scope of this project, and the aim of iteratively developing something and analysing how climbers interacted with such a product, I was wary of spending too much time trying to get video-detection working.
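The smoothness metric proposed above can be made concrete in several ways; one plausible sketch scores a climb by the mean absolute jerk (the rate of change of the acceleration magnitude), so that a lower score means a smoother climb. The function name, sampling rate and formula here are my assumptions for illustration, not the app's exact implementation:

```python
import math

def smoothness_score(readings, rate_hz=50):
    """Score a climb's smoothness from raw accelerometer readings.

    `readings` is a list of (x, y, z) acceleration samples taken at
    `rate_hz` Hz. The score is the mean absolute jerk (change in
    acceleration magnitude per second): lower means smoother, so perfectly
    steady movement scores 0. Illustrative formula only.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in readings]
    if len(mags) < 2:
        return 0.0
    jerk = [abs(b - a) * rate_hz for a, b in zip(mags, mags[1:])]
    return sum(jerk) / len(jerk)
```

A lower-is-smoother convention is consistent with the interview accounts reported earlier, where climbers repeated a route trying to get the score ``as low as possible".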
\subsection{Wizard-Of-Oz Tests}
To weigh this video feature against using an accelerometer as the data input, a series of informal discussions and WOZ tests was performed at a climbing wall with three boulderers. This involved me writing a script and showing images of potential screen displays when the climbers performed an action or climbed. Knowing the probable abilities of the CV video-analysis (either labelling limbs and centre of mass on a video output, or stating nearby handholds during the climb), I acted out both of those features in a WOZ format, and both were judged potentially useful. However, the labelled-video feature was stated to be only slightly better than simply watching a video playback, and the inefficiency of setting up a phone to point at a wall so it could call out handholds with some delay did not excite the climbers. The WOZ prototype in which accelerometer data appeared on a graph with some form of ``smoothness rating" was received well, so I decided to explore that route through iterative testing and design, whilst also working on some of the video analysis in the background.
\section{Iteration \#2: Adding Accelerometer Data}
This section outlines my second large iteration. Having completed the first iteration with WOZ prototypes, and roughly knowing which features and form-factors the users wanted, I next had to design and implement a real-code prototype to test. To do this, I had to make some choices regarding the tools I would use, then build a minimal app that could record some data, before exploring different output options with my testers.
\subsection{Fundamental Early Decisions for Development}
\subsubsection{Platform Choice}
Despite researching a variety of different devices and form factors, one of the key aims of the project was always to make the final product as accessible as possible, so an OS-independent mobile app was decided upon after reviewing the survey results. By not using any extra equipment such as wristbands or 3D cameras, the scope and ability of the final product would be limited (by the lack of sensors and processing power); however, requiring nothing but a phone, which almost everyone owns, was a worthwhile sacrifice. As well as making user-testing much easier, this choice also helped narrow down the direction of the project: working solely within the limits of what a mobile app could achieve meant that the capabilities of that platform could be fully stretched and explored. The two inputs that mobiles can easily capture, and that would potentially be most useful for the analysis of climbing technique, are video recordings and accelerometer data. Video recordings can be analysed with various Computer Vision (CV) techniques, and accelerometer data can be shown on a graph and analysed statistically to provide various outputs.
\subsubsection{Tool Selection}
\paragraph{App-development tool}
Next was to find an app-development tool that is quick and easy to use (for rapid repeated iterations of the app), can easily import or link to the OpenCV library (for the CV aspect), and is platform-independent (so mobile phone owners can use the app irrespective of OS). Unity was chosen as it meets all three of those requirements, with a wide variety of ``Assets" (downloadable open-source libraries) that can be imported.
By providing a platform for creating simple user interfaces that can be exported to a range of platforms, with easy linking between the UI elements and back-end \verb|C#| code, Unity made quickly building and iterating over multiple app designs possible. Having also used Unity for games development in the past, I was very familiar with it, so compared with potentially wasting time learning other tools, it met all the above requirements with the added benefit of prior experience.
\paragraph{Version Control}
For version control I used \verb|git|, backed up on GitHub. Being a one-person project, I rarely bothered with the overhead of using different branches, but the ability to roll back to working code and releases, and to \verb|stash| and \verb|pop| various files at different times, was very useful during development. Once or twice a week, after a new feature or a big set of fixes had been added, I would up-version the app and release a compiled .apk binary to the project's GitHub page. This allowed easy re-installing of older versions when trying to locate a bug, as well as clear documentation of all fixes and features included in each minor version upgrade.
\subsection{First Minimalistic App: Recorder}
After the success of the accelerometer idea in the previous iteration's tests, I decided to build an app based around it, to use in some real in-field testing. This first app had just two buttons, START and STOP, which would respectively start and stop recording the mobile phone's raw accelerometer data into a \verb|.csv| file.
\subsubsection{Axes Selection}
The accelerometer's API allowed me to collect data from each of the individual axes, so it seemed like a good idea to use this to determine and compare vertical versus horizontal acceleration, a potentially useful feature.
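Whether per-axis or combined, the underlying reading is a 3-axis vector; the magnitude computation that the app ultimately recorded can be sketched as follows (illustrative Python; the app itself reads Unity's \verb|Input.acceleration| in C\#):

```python
import math

def acceleration_magnitude(x, y, z):
    """Magnitude of the raw 3-axis accelerometer vector, in g.

    Unlike the per-axis values, the magnitude does not depend on how the
    phone happens to be oriented in the climber's pocket.
    """
    return math.sqrt(x * x + y * y + z * z)

print(acceleration_magnitude(0.0, 0.0, 1.0))  # phone lying flat, at rest: 1 g
print(acceleration_magnitude(0.6, 0.0, 0.8))  # same phone at rest, tilted: still ~1 g
```

The orientation-independence shown by the second call is exactly why the magnitude, rather than the raw axes, turned out to be the useful quantity.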
However, after viewing the raw data output from the first climbing session, it became clear that the phone's axes would very rarely match up with the real-world axes in any meaningful way, and even some sort of calibration at the start of the climb would quickly become meaningless as the climber rotated and moved up the wall. Therefore, only the total magnitude of the vector sum of the acceleration measured along all three axes was used going forward.
\subsubsection{Mock Output}
During this first test, the lack of an in-app output was an obvious limitation, but after a climbing test session with just this feature, I was able to copy the data into spreadsheet software and graph the results, giving Figure~\ref{fig:twoclimbsgraph}.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{imgs/graph_two_climbs.PNG}
\caption{Overlaid Graphs of the Accelerometer Data Recorded During Two Different Climbs}
\label{fig:twoclimbsgraph}
\end{figure}
I then met up again with the climbers from that test day, and as we discussed the graphs we could recognise which climbers were climbing which climbs just from the spikes on each line; this was the first time I saw a clear correlation between the acceleration data and climbing style and ability, an exciting result.
\subsection{In-App Statistics}
The testers enjoyed viewing the graphs I had produced, so I started working on adding graphs to the app. Unfortunately, here I hit the first limitation of my chosen tool, Unity. Primarily designed as a games engine, Unity had no easy method of plotting graphs, and any Asset Store solutions were expensive. A solution involving drawing dots on the screen for every data point, and calculating the sizes and angles of lines connecting them, was possible, but I knew that would take a few days of coding, and I wanted to iron out any potential kinks in my data-collection methods before investing that much time.
Also, I had another testing day scheduled with my climbers, so I wanted to quickly code a simpler form of output to see how they interacted with the data. Calculating the minimum and maximum acceleration, and the time taken to climb, took only a few lines of code, so the second prototype of the app included a screen displaying those statistics after a climb. During the test session, both of the people I was climbing with enjoyed using their max acceleration in a fun and competitive manner: they took it in turns trying to produce as much acceleration, or ``power", as possible on certain moves, and also compared times whilst slowly doing easier climbs, trying to keep their max acceleration as low as possible. When the potential of a ``smoothness" score was brought up in discussion, they told me that a statistic other than just max acceleration, one more closely linked to actual climbing technique, would be very useful.
\subsection{Development of Smoothness Score}
Seeing a range of graphs in the spreadsheet software (similar to Figure~\ref{fig:twoclimbsgraph} but with many more climbs) was very interesting. Some climbs' graphs were consistently fluid with no big spikes, some were mostly flat with a few large spikes, and some had a lot of small spikes, which clearly showed that there were discernible differences between the accelerometer data produced by different climbs, something I intended to utilise. In discussion, some climbing coaches told me that they often ask clients to climb the same easy climb repeatedly, aiming for as fluid and smooth a motion as possible. Alongside this, some harder climbs occasionally call for bigger ``dynamic" moves, but accuracy in catching the holds without lots of small adjustments is still important for good technique.
The coaches I spoke to said they would generally judge a climb as having poor technique when many small, inefficient and jerky movements were made: the repeated re-gripping of holds can lead to early forearm tiring, and the uncontrolled and ungainly fluctuation of the body's overall momentum is very inefficient. One aim of good climbing technique is to prevent this early tiring and to reduce as much inefficiency in movement as possible, as these are often the factors that prevent the successful completion of a harder climb. However, because climbing technique and style vary greatly both between climbers and between climbs on a wall, there was no way to provide an accurate ``best" or target score, especially once variations in the accelerometer readings of different phones are taken into account. Instead, I opted to find some quantifiable metric that, as the testers had suggested, was more closely linked to climbing style than just minimum or maximum acceleration, yet had no absolute limit or goal, preserving the flexibility required between different climbs and different sensors.
\subsubsection{Four Alternatives}
Although previous studies~\cite{climbaxstudy, climbbsn} use machine-learning techniques to train classifiers that can predict ability, directly analysing the smoothness of graphed acceleration data to distinguish between climbing styles has not previously been attempted. I decided to test a range of different options on real climbing data, to determine which score was the closest representation of climbing style. To aid comparison between climbs, the metric should be able to distinguish between different styles: if a climber repeatedly tries to climb as statically and smoothly as possible, the score should reflect that by changing in a predictable manner.
\paragraph{Mean}
$$\bar{n} = \frac{\sum_{i=1}^{N} n_i}{N}$$
The simplest statistical measure was worth testing: a large number of high accelerations would raise the mean, which might show differences in climbing style.
\paragraph{Variance}
$$\frac{\sum_{i=1}^{N} (n_i - \bar{n})^2}{N-1}$$
Again, a classical statistical metric with potential: measuring how varied the accelerometer data-points are links to the discussion above of how spikier graphs often portray less smooth climbs.
\paragraph{Average squared difference between each consecutive data-point}
$$\frac{\sum_{i=1}^{N-1} (n_i - n_{i+1})^2}{N}$$
Taking variance a step further, calculating the pairwise differences in the dataset could give a good measure of how ``jumpy" the chart is. By measuring the differences in acceleration, this statistic approximates the average squared ``jerk", the time-derivative of the acceleration.
\paragraph{Single-Lagged Auto-Correlation}
$$\frac{\sum_{i=1}^{N-k}(n_{i} - \bar{n})(n_{i+k} - \bar{n})}{\sum_{i=1}^{N}(n_{i} - \bar{n})^{2}}$$
My idea for using this metric (with lag $k=1$) was that if each data-point is closely correlated with its successor, then the overall line should have fewer jolts and vary more smoothly.
\subsubsection{Comparative Testing}
I had three test climbers climb three different routes in either a deliberately static style, a deliberately dynamic style, or their usual style: a hybrid of the two. Using the data recorded from these climbs, I calculated each of the above metrics, shown in Table~\ref{tab:smoothnesses}.
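The four candidates follow directly from the formulas above; an illustrative Python version is shown below (the app itself computes its score in C\#, and the two recordings here are synthetic, not real climb data):

```python
def smoothness_metrics(a, lag=1):
    """The four candidate scores, following the formulas above."""
    n = len(a)
    mean = sum(a) / n
    variance = sum((v - mean) ** 2 for v in a) / (n - 1)
    var_diff = sum((a[i] - a[i + 1]) ** 2 for i in range(n - 1)) / n
    centred = [v - mean for v in a]
    autocorr = (sum(centred[i] * centred[i + lag] for i in range(n - lag))
                / sum(c * c for c in centred))
    return {"mean": mean, "variance": variance,
            "var-diff": var_diff, "lagged-autocorr": autocorr}

# Two synthetic recordings with the same average acceleration but
# different smoothness
smooth = [1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0, 0.9]
jerky = [1.0, 0.2, 1.8, 0.2, 1.8, 0.2, 1.8, 1.0]
print(smoothness_metrics(smooth)["variance"])  # ~0.006: small wobbles
print(smoothness_metrics(jerky)["variance"])   # ~0.549: much spikier climb
```

Note that both recordings have an identical mean, which already hints at why the mean alone cannot separate climbing styles.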
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline climb & climber & style & min & max & mean & var & var-diff & lagged-autocorr \\
\hline purple v0 route & 1 & static & 0.29 & 3.50 & 1.10 & 0.12 & 0.06 & 1617.90 \\
\hline purple v0 route & 3 & static & 0.07 & 6.39 & 1.09 & 0.16 & 0.12 & 1600.46 \\
\hline purple v0 route & 2 & dynamic & 0.16 & 10.96 & 1.11 & 0.39 & 0.18 & 2150.63 \\
\hline purple v0 route & 3 & dynamic & 0.20 & 9.76 & 1.14 & 0.40 & 0.16 & 1931.72 \\
\hline yellow v1 route & 1 & static & 0.21 & 2.20 & 1.06 & 0.05 & 0.02 & 795.96 \\
\hline yellow v1 route & 1 & hybrid & 0.06 & 11.41 & 1.06 & 0.17 & 0.11 & 2176.12 \\
\hline yellow v1 route & 1 & dynamic & 0.03 & 8.20 & 1.07 & 0.21 & 0.10 & 1862.94 \\
\hline pink v2 route & 2 & static & 0.45 & 2.74 & 1.04 & 0.04 & 0.03 & 793.59 \\
\hline pink v2 route & 2 & hybrid & 0.21 & 4.04 & 1.05 & 0.08 & 0.03 & 1939.77 \\
\hline pink v2 route & 2 & dynamic & 0.16 & 10.32 & 1.12 & 0.25 & 0.15 & 1826.36 \\
\hline
\end{tabular}
\caption{Comparison of Smoothness Score Candidates}
\label{tab:smoothnesses}
\end{table}
As can be seen, despite both the var-diff and lagged-autocorr algorithms showing some promise, the only metric that reliably distinguishes between the three styles of climbing is the variance, so this is the score I took forward.
\subsubsection{Confirmatory Testing of Variance}
In the app, I added a calculation at the end of the accelerometer recording that displays the variance on screen immediately upon completing a climb. When I took this back to the test-users, they agreed that this metric was much better for competing on a climb, both with themselves and with each other, to get ``as static a climb as possible", meeting the requirements for this score. One point they did raise was that comparing scores of, for example, $0.21234$ and $0.17018$ felt odd.
My testers found comparing and discussing numbers in the 20--100 range much easier and more natural, so in the next iteration of the app the variance was multiplied by one hundred and displayed as an integer.
\subsection{Graphing Acceleration Data}
Now that I had a working ``score" whose user interaction I could analyse, I moved on to the next feature my testers had requested: plotting and showing a graph of the accelerometer data in the app.
\subsubsection{Timing Issues}
For the first version of my graphs, I simply plotted a rudimentary scatter-graph: within a given box on the screen, for each data-point $n$ in the accelerometer recording, I drew a small circle $n$ pixels up, and spread the dots across so they stretched to fill the width of the box. Although this resulted in what looked like a line-chart with time as the x-axis, a few issues arose when I took this version of the app to be tested; the graphs being displayed didn't quite ``look right". Although the x-axis of the graph was supposed to represent time, because I was recording successive data-points rather than actual timestamps, the x-axis didn't truly represent time under the real-life stress-testing that climbing involved. What turned out to be happening was that Unity records incoming accelerometer data as fast as possible, which is exactly what a developer would want when using it as an input to a game, but this had the side-effect of the sampling interval varying slightly, in a usually unnoticeable manner. During at-home debugging, when I recorded a timestamp to the csv file alongside the accelerometer data, the intervals only varied from 0.004s to 0.006s; although this would need fixing for the accuracy of the graphs, it didn't explain why the graphs recorded while climbing were so inaccurate.
Taking this code back to the climbing wall a few days later, I discovered that during some more dynamic climbing moves, the differences between recordings reached up to 0.5s. Upon closer inspection, I saw that when the phone detected a very large acceleration along with a large rotation, it would try to change the display orientation, resulting in a brief pause in the recording of the accelerometer. To fix these issues, I forced the app to remain in portrait mode and re-wrote the recording function caller to ensure that exactly 20 samples per second would always be recorded. As well as fixing the consistency of the timings, I re-wrote the graph-drawing code to take into account the associated timestamp of each point.
\subsection{Viewing Previous Climbs}
Up until this point, the statistics and graphs of each climb had only been visible directly after a climb was completed; to compare climbs, the users had to memorise the previous climb's smoothness score and roughly how its graph looked, and then hope to improve. Viewing a list of all previous climbs, with associated smoothness scores and graphs, was the obvious (and much-requested) next feature to add. Because I had been saving the accelerometer recordings to the phone's persistent memory for debugging purposes, I could scrape that folder for all csv files and populate a scrolling list with a set of graphs and smoothness scores; see Figure~\ref{fig:scrollview} for the final display.
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{imgs/scrollview.jpg}
\caption{augKlimb's scrolling view-all-climbs page}
\label{fig:scrollview}
\end{figure}
\subsection{Viewing Individual Climbs}
While watching my testers use this scrolling view to compare previous climbs' graphs and smoothness scores, I noticed that they would occasionally touch a climb's graph with a finger.
When asked about this action, they told me that they had instinctively tapped, expecting to load that specific climb and view more information about it. I therefore added another page to the project (see Figure~\ref{fig:singleclimb}) to view an individual climb's graph in full-screen detail, with further statistics such as smoothness and time taken on the same screen.
\subsubsection{Horizontal Scale}
With the full-screen view of an individual climbing graph now possible, a discussion of scale was needed. Whereas in the scroll-view of all climbs the graphs were scaled to fit the whole data-set inside the given box (again see Figure~\ref{fig:scrollview}), one of the main expectations of the per-climb graph view was that graphs would be ``zoomed-in" to show more detail. After some testing, a scale of 100px per second was found to be optimal for viewing the acceleration spikes with enough clarity whilst not giving longer climbs an excessively long scroll.
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{imgs/singleclimb.PNG}
\caption{augKlimb's single-climb-viewing page}
\label{fig:singleclimb}
\end{figure}
\section{Iteration \#3: Adding Video Feedback}
In this section I outline the third major iteration cycle: the incorporation of video footage and frame-by-frame playback linked to the acceleration graphs created previously. One tester had suggested that, as it was now possible to scroll horizontally through a climb's graph, using that scrollbar to also view the frames of a connected video file could be a useful feature. One option would have been to use the phone to record the video and get the accelerometer data from a wristband or other device. However, in keeping with the theme of accessibility and using only a mobile app, I chose instead to connect two mobile phones, so that a friend with the app could video-record you as you climbed with the accelerometer feature.
\subsection{Networking Options} \label{network}
A variety of options for networking two phones together was explored, including Unity's built-in networking, Wifi-Direct, uploading to the cloud, and Bluetooth. These options had varying levels of support through Assets on Unity's Asset Store, but after looking through the reviews, most were deprecated, broken, or did not fit my needs. Those that potentially did cost upwards of \pounds50, which is not something I was willing to pay for a library that might not do what I needed; I would therefore have to code my own solution.
\subsubsection{Unity's Built-in Networking}
The only one of the above options provided in some way by Unity itself is its networking libraries. At first, their probable ease of use seemed promising, and despite having had some issues using them in the past (during my third-year Games Project), I knew how to connect various devices together. However, the UNet library provided by Unity had recently been deprecated, and its replacement networking service was still under development at the time of writing, so documentation was often unavailable or incorrect. Also, despite being able to link two phones at home, this presented an issue at the climbing walls, as the free WiFi provided there often blocked LAN connections between devices.
\subsubsection{Cloud}
Uploading to the cloud would require coding some sort of scalable server and storage, something I could certainly do, but that level of overhead for what should just be the sharing and syncing of files between two physically adjacent devices felt like an unnecessary amount of work and time. It would also have been very slow to upload and download a video over the web for every climb, whereas a solution that utilised the close physical proximity of the devices could transfer files much faster.
The requirement to add some sort of user profile, or a unique code to differentiate and request the correct download, was a big added complexity, though one that could have given the bonus of users' data being backed up to the web and usable in other ways, for example from a web platform. Although this may be something to look into in the future, a cloud server wasn't the solution to this specific problem, and was not really within the scope of this project.
\subsubsection{Wifi-Direct}
Wifi-Direct enables direct connection and file transfer between compatible devices. Although this seems like a great technology, and could be ideal for this specific use case, the few docs I could find online detailing the use of Wifi-Direct with Unity stated that many parts of the feature had been deprecated, and that it was closely linked to the now-obsolete Unity Networking API.
\subsubsection{Bluetooth \& NFC}
An idealised use-case for the app that presented itself in WOZ testing was to simply bump two mobile devices together, ``pairing" them using NFC, and then using the longer-range Bluetooth to transfer a sync command and any video or data files. Unfortunately, this uncovered a large issue with using Unity as my tool of choice: no support or libraries for either Bluetooth or NFC. Even the single free Asset online was broken, leaving me with one solution: to write my own \verb|Java| plugin using Android Studio to connect over NFC and handle Bluetooth file transfers, then expose public functions in a compiled \verb|.jar| file that the \verb|C#| Unity code could call. After spending a few days writing this code, it worked occasionally, but the interplay between the two sets of libraries, on a variety of Android devices, was just too unstable.
\subsubsection{Native Messaging}
Both Android and iOS provide easy APIs for sharing files via social media or email, which is utilised by the free NativeShare Asset I downloaded from the Unity Asset Store.
Using this library, the JSON representation of each climb could easily be shared between devices via a messaging app, which was enough of a solution to take to the testers and determine whether something more streamlined was needed. During testing, climbers could record a climb, view it, and then hit the Share button to send the file to a friend, who could import and view it on their own phone. Despite the time spent unsuccessfully trying to get Bluetooth working in my plugin, testers could use this same button to send climb files over Bluetooth instead of the messaging app: a slightly more convoluted approach than simply bumping two phones together, but one with messaging as a reliable fallback whenever Bluetooth stopped working, which it did occasionally on one of my test devices.
\subsection{Video Player}
Now that there was a way of sharing climb data files, it was time to add the video-scrolling feature. By shrinking the climbing-graph view down to half the page and adding one of Unity's VideoPlayer components in the top half of the screen, the video could play. Then, to set which frame of the video to display, the graph scrollbar's position could be queried and translated into a frame number, which was set on the player. Everything seemed to work well, and the testers really enjoyed it: I would record them climbing with their phone in their pocket, send them the video via a messaging app, and they would download and attach it, then view and scroll through the video frame-by-frame to see which spikes in the graph corresponded to which moves.
\subsubsection{Timing Issues}
However, the two recordings (video and accelerometer) had to be started at precisely the same time for the timings to match up.
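The scrubbing itself reduces to a linear mapping from scrollbar position to frame number, with an offset term for exactly this synchronisation problem; an illustrative sketch follows (hypothetical names and default values, not the app's actual C\# code against Unity's VideoPlayer):

```python
def scroll_to_frame(scroll_px, px_per_second=100, fps=30.0, offset_seconds=0.0):
    """Map a graph-scrollbar position (in pixels) to a video frame index.

    With the graph drawn at a fixed pixels-per-second scale the conversion
    is linear; offset_seconds compensates for the two recordings not having
    started at the same moment.
    """
    t = scroll_px / px_per_second + offset_seconds
    return max(0, round(t * fps))

print(scroll_to_frame(250))                      # 2.5s into the climb -> frame 75
print(scroll_to_frame(250, offset_seconds=1.0))  # video started 1s early -> frame 105
```

The `max(0, ...)` clamp simply stops a negative offset from requesting a frame before the start of the video.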
There was, at first, some potential in scraping the video file's metadata for a timestamp and matching it with the times in the recording; unfortunately, when videos are sent via the various messaging apps we were using, they are transcoded (to save data whilst sending, I assume), which strips all useful metadata and timestamps. Therefore, instead of automatically time-matching a list of recorded videos with a list of climbs, a video file would have to be manually selected and added to a climb to associate the two: an unwanted but necessary extra step for reliability. Also, a manual ``Video Offset" selector was added above the video player, so users could adjust the time-differential between graph and video if the automatic timing-detection failed to predict the correct difference after manual matching. To aid this automated system, any video recording made via the app saves the exact timestamp (to millisecond precision) into the video's filename, which is not always changed when messaging services transcode the video.
\subsubsection{Video Player Bugs}
Through this testing of a variety of video-sending methods, I discovered a bug: on some devices, some videos, in some orientations, with some video formats, would cause the video player to fail. This is one of the toughest bugs I have ever had to fix, as it was very nearly unreproducible: if I changed any one of the four conditions above, the bug would disappear, and if I changed the scaling settings in the \verb|VideoPlayer| component, a different combination of conditions would cause it to fail instead. Most often, the use-case that produced this bug was when a video recorded on a phone was matched with an externally-imported climb file. Unfortunately, that was a very common use-case for my testers.
Whether the video was recorded through the in-app video recorder or the phone's native camera app, it would usually fail; if that video was sent via a messaging app and then re-downloaded, it would work, and if it was rotated and then rotated back again, it would work. This was frustrating: surely, if a video was recorded on a device and playable by that device, Unity should have access to the codecs required to play it correctly? In total, around two weeks' worth of development time was spent just trying to figure out how to play video reliably, as it was very frustrating for testers to get down off a climb expecting to analyse the graphs and visualise the climb frame-by-frame, only to be greeted by a flashing screen. Eventually, I completely changed how the videos were played in Unity to solve the issue: instead of using a simple VideoPlayer component, I attached a second camera to the VideoPlayer API, then scaled and rendered that camera's view onto a mesh in the view. This more convoluted way of displaying video wasn't as smooth at playback, and was delayed when seeking a specific frame, especially on older devices, but the decision was that a flickering video feed was better than an unreliable one.
\subsection{Testing}
The testing phase of this major iteration was successful: users enjoyed viewing the video footage alongside the graphs, and made a few further suggestions for UI improvements for the next stage, but the app was nearly ready for deployment.
\section{Iteration \#4: Final Improvements and Deployment}
Now that the app was relatively stable, and the two big features (acceleration and video) were working well together, it was time to clean the code with a refactor, implement the list of small requests my testers had been suggesting, and then deploy the app by publishing it to the Play Store.
\subsection{Code Refactor}
Because I was starting to send climbing files between devices, I needed a more reliable way of exporting and parsing those files; with the added complexity of video paths and other information attached to them, the plain \verb|csv| files were no longer sufficient. Also, having gradually built the app up with one script file per view (for example, one script to handle recording and another to handle viewing), the code-base was becoming convoluted as I added more features and file information.
\subsubsection{Refactor}
Therefore, I completely refactored all the code, keeping only the minimum logic required for each view in its file and splitting all file-handling and graph-drawing methods into separate classes, along with creating a new \verb|ClimbData| dataclass to store the accelerometer data, title and associated video information. Now that all the data associated with a climb was contained in a single class, I could utilise Unity's built-in \verb|Serialization|, which can automatically convert a correctly set-up class directly into a \verb|json| file, and vice versa. This was complex to achieve, but it proved very useful later when adding more information to the \verb|ClimbData| class: I no longer had to maintain my own file-saving and file-parsing code, but could rely on the serialisation.
\subsubsection{Caching}
This refactor also gave me the chance to fix some performance issues that testers had experienced after using the app for multiple sessions: as the number of climbs increased, so did the time it took to populate the list of climbs, as each file had to be loaded, and each graph drawn, before that page of the app could load.
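The fix was a cached in-memory list of climbs whose updates are tied to disk writes; a minimal sketch of that caching behaviour is shown below (illustrative Python with hypothetical \verb|load_all|/\verb|save_one| callbacks standing in for the app's \verb|FileHandler|, not the real C\# API):

```python
class ClimbCache:
    """In-memory cache of climbs, coupled to persistent storage."""

    def __init__(self, load_all, save_one):
        self._load_all = load_all  # () -> climbs currently in persistent storage
        self._save_one = save_one  # climb -> write/overwrite its file on disk
        self._climbs = None        # None means "not yet initialised"

    def all(self):
        # Auto-initialisation: an empty cache populates itself from
        # persistent storage the first time it is queried.
        if self._climbs is None:
            self._climbs = list(self._load_all())
        return list(self._climbs)

    def save(self, climb):
        # Cache updates are coupled to saving, never to loading, and
        # duplicates (e.g. edits to an already-cached climb) are filtered out.
        self._save_one(climb)
        if self._climbs is None:
            self._climbs = list(self._load_all())
        if climb not in self._climbs:
            self._climbs.append(climb)

disk, writes = ["warm-up climb"], []
cache = ClimbCache(lambda: disk, writes.append)
print(cache.all())       # ['warm-up climb']: auto-initialised from "disk"
cache.save("new climb")
cache.save("new climb")  # an edit: re-written to disk, but not re-cached
print(cache.all())       # ['warm-up climb', 'new climb']
```

The second `save` call shows the duplicate check: the file is overwritten on disk, but the cache gains no second copy.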
Caching a list of \verb|ClimbData| objects prevented this repeated file-loading, but caused some issues with cache invalidation:
\begin{itemize}
\item When loading the app for the first time, all of the climb-files in persistent storage should be read and cached.
\item When a new climb is recorded, it should be saved to persistent storage and also cached.
\item When an external climb-file is imported, it is first loaded into memory, then should be saved to persistent storage and added to the cache.
\item When a climb-file is edited, it should be overwritten in persistent storage, but is already cached, so should not be re-added, to prevent duplicates.
\end{itemize}
This took a lot of working out to cover all the edge-cases, but the eventual solution involved tightly coupling cache-updating with the action of saving to disk, decoupling it from any loading functions, checking for duplicates before adding to the cache, and adding an auto-initialisation feature: if the cache is empty when queried, it contacts the \verb|FileHandler| and populates itself with a list of \verb|ClimbData| objects corresponding to what is in persistent storage.
\subsection{Testing-driven UI Improvements}
Alongside the development and testing of the video feature, my twice- or thrice-weekly testing sessions had provoked a range of smaller UI tweaks and improvement suggestions, all of which I implemented as I prepared the app for its first full deployment.
\subsubsection{Countdown}
One of the earliest of these was that starting to record as soon as the START button was pressed would capture the acceleration caused by the climber putting their phone in their pocket and walking up to the wall. To prevent this, a five-second countdown was introduced before recording commences, accompanied by a vibration, so that the climber has time to set up and knows when to begin climbing.
\subsubsection{Guides} A lot of questions were also asked when climbers first started using the app, so alongside a guide and FAQ on the website, I added clearer button names and brief descriptions at the bottom of each page to describe what happens with each feature. \subsubsection{Cropping} Sometimes when reaching the top of a climb, testers would just jump off and land on the floor. This created a large spike on the accelerations graph, and had a correspondingly large impact on the smoothness score. Thus a button was added to ``crop'' a climb's graph: the horizontal location of the scrollbar is detected, and after a confirmation from the user, any recorded climbing data after that point is removed. \subsubsection{Deleting} To keep down clutter, testers had previously requested the ability to delete old or mis-recorded climbs, so now that each climb had its own page, a button was added that removes the currently-open climb from the device. \subsubsection{Back Button} For Android, I removed the in-app back button, instead using the native OS's API to detect when the phone's back button had been pressed and returning the user to the previous screen, matching the behaviour users are accustomed to from other apps. \subsection{Publishing to Google Play} As I have neither a Mac nor an iPhone, despite my choice to use a platform-independent software tool I was only able to build for Android throughout this project. That meant that the natural distribution service to use was Google Play. Publishing to that store required a developer account, images of the app in use, various review procedures, and for me to write both a code of conduct and a privacy policy. This Play Store listing can be seen \href{https://play.google.com/store/apps/details?id=com.lukestorry.augKlimb}{here}, with a screenshot in Figure~\ref{fig:playstore}.
\begin{figure}[h] \centering \includegraphics[width=9cm]{imgs/playstorescreenshot.png} \caption{Screenshot of Play Store Listing} \label{fig:playstore} \end{figure} To ensure that the general public did not have access to the test versions of this app, but only participants who had consented to be a part of this study, I did not release a full production version of the app, but used Google's Limited Beta Rollout feature to provide beta access only to the testers I had given links to. \subsection{Easier testing} One obstacle that all my testers had faced throughout the project was how to install the app. With such an iterative development process, I was rebuilding the app every day and trying two or three different versions each week. This required either lending one of my mobile devices to the testers to use, or side-loading an \verb|.apk| file onto the users' phones, disabling various security measures to install the app each time. Having the app on Google Play not only made it a lot easier to upgrade the app on everyone's phones before each testing session, but the added legitimacy provided by not having to side-load the apk meant that a wider variety of people felt comfortable installing the app from a provided link. This increased the number of testers who were regularly using and testing the app from 4 to 15, providing a rich variety of feedback from regular users of all abilities, both through occasional conversations and through the feedback form linked from the homepage of the app. \subsection{Smaller Test-led Refinements} Having more testers, each using the app slightly differently, led to a wide range of feedback, advice, and suggested improvements. Also, the testing being done on an assortment of different devices highlighted a range of small bugs that were either non-existent or at least infrequent on my own devices.
Few of these are worth mentioning individually, usually just requiring corrections to button placement or changes to the configuration of the file browser. One bug caused a permissions error on Nexus devices, but after a week of helping that user manually re-enable the correct permissions, Unity released an update that fixed what turned out to be a bug in their engine. \subsubsection{Title Editing} Until this point, the title of a climb had just been the date and time that it was recorded. In the initial survey, a climb-logging app was requested, so to meet that demand I added the ability to edit this title to name the climb's grade, colour, and location. Some testers didn't use this feature, as they weren't interested in creating a list of climbs, just analysing the specific one they were working on at the time, but for others this added a nice way to scroll back through climbs from a previous session and easily find one to replicate or compare against. \subsubsection{Video Recorder} Despite previously having moved away from recording videos within the app in favour of the phone's normal camera app, some testers suggested that it would feel more integrated to have all of the recording features within the app. To this end, I updated the in-app camera feature to automatically save videos both to the phone's external camera-roll and to the app's persistent storage. \subsubsection{Colour-Grading the Graphs} One of the most common user-flows that I observed during testing was repeatedly climbing the same climb to try and get the lowest smoothness-score possible, then, during a rest-break, scrolling through and reviewing the graph for each climb to see which moves caused the largest spikes, and thus how to climb differently to reduce the score further. To make a more coherent link between the overall smoothness-score and the graphs, I implemented the suggestion of calculating per-second smoothness-scores and including those on the graphs.
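The per-second scoring idea can be sketched as below. The actual smoothness metric is defined earlier in this report; here, purely for illustration, the mean absolute deviation of the acceleration magnitude within each one-second window is assumed:

```python
# Illustrative sketch of per-second scoring over an accelerometer trace.
# The real smoothness metric is defined earlier in the report; here we
# assume, for illustration only, the mean absolute deviation of the
# acceleration magnitude within each one-second window.

def per_second_scores(samples, rate_hz):
    """samples: list of acceleration magnitudes; rate_hz: samples per second."""
    scores = []
    for start in range(0, len(samples), rate_hz):
        window = samples[start:start + rate_hz]
        mean = sum(window) / len(window)
        scores.append(sum(abs(s - mean) for s in window) / len(window))
    return scores
```

Each windowed score can then be drawn beside the corresponding second of the graph.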
However, despite some users appreciating the added clarity of the scores, others disliked the multitude of numbers everywhere and felt that the graph was becoming very cluttered: with axis labels, lines and now scores, it was hard to see what was going on. To reduce this clutter, I vastly reduced the font-size of the smoothness scores, and instead portrayed the varying per-second score by changing the darkness of the background (see Figure~\ref{fig:finalclimbview}). Users could now quickly scroll through the graph to the darkest section and see which portion of their climb was causing the smoothness score to be high. \begin{figure}[h] \centering \includegraphics[width=6cm]{imgs/finalclimbview.jpg} \caption{The final version of augKlimb's single-climb-viewing page} \label{fig:finalclimbview} \end{figure} \subsection{Conflicting Requests From Users} \label{conflict} Due to the variety of use-cases the app can support, a frequent side-effect of streamlining one user-flow was that other users would feed back that their preferred usage of the app had been hindered. The biggest example of this was which view to show the user upon completion of an accelerometer recording. Some users, who were using the graph to analyse their climbs or wanted to quickly link a video recording, wanted that page to show up immediately after a recording had finished, rather than having to go back to the list of climbs and select that climb's page. Others, who were just using the smoothness-score, only wanted to view that metric before being able to swiftly press the Record button again and continue climbing, and favoured the clean and simple UI that mode of operation provided, only occasionally going to look at the graphs to inspect weaknesses.
Differing proposals for improvements from both groups initially caused me to alter the app in each direction in turn; only when revisiting my changelog later did I notice that on a weekly basis I was switching back and forth between a very simplistic UI and one that provided a lot of data, dependent simply upon whose feedback I had been exposed to on those development and testing days. To solve this, I invited three testers who fell across the above range, and prompted a discussion between them around both what they would expect to see and what they wanted to see. A compromise was reached, where the initial recording page shows nothing but the time elapsed and the smoothness score (see Figure~\ref{fig:finalrecorderview}), with a shortcut link to that climb's specific page, which contains the full list of statistics, graphs, editing abilities and video-linking facilities. \begin{figure}[h] \centering \includegraphics[width=6cm]{imgs/finalrecorderview.jpg} \caption{The final version of augKlimb's post-recording view} \label{fig:finalrecorderview} \end{figure} \chapter*{Supporting Technologies} \begin{itemize} \item I used Unity to develop my app, as its simple UI builder, support for multiple mobile platforms and rich library of Asset addons made developing and iterating my designs much quicker and easier. \url{https://unity.com/} \item I used the free NativeCamera (\url{https://assetstore.unity.com/packages/tools/integration/native-camera-for-android-ios-117802}), NativeShare (\url{https://assetstore.unity.com/packages/tools/integration/native-share-for-android-ios-112731}), NativeGallery (\url{https://assetstore.unity.com/packages/tools/integration/native-gallery-for-android-ios-112630}), and SimpleFileBrowser (\url{https://assetstore.unity.com/packages/tools/gui/runtime-file-browser-113006}) Unity Assets to add those features to my app without spending time developing the code myself.
\item I used parts of the OpenCV Computer Vision Library to explore frame-by-frame video-analysis, mainly edge-detection and blob-detection. \\\url{https://opencv.org/} \\\url{https://assetstore.unity.com/packages/tools/integration/opencv-plus-unity-85928} \item I used an Android mobile device supplied by the Department, as well as my own Android device, to help build and debug the app during development. \end{itemize} \chapter{Technical Background} \label{chap:technical} \section{Previous Sports-Data-Augmentation Studies} \subsection{Impact of Feedback on Sport Progression} The impact of appropriate feedback on motor skill acquisition has been widely studied in the field of psycho-kinesiology~\cite{schmidt75aschema, schmidt2005motor}. One of the earliest reviews of the impact that more modern computer-based feedback could have on athletes' performance is Liebermann et al.'s \textit{``Advances in the application of information technology to sport performance''}~\cite{lieberreview}, which discusses how video-based review systems, eye-tracking technology, and force-plates could all be used by coaches to provide an athlete with ``sophisticated and objective'' feedback during a training session, leading to ``effective and efficient learning'' across a wide variety of sports. Although one of the aims of the project was to not require a coach, the clearly-defined link between high-quality feedback and learning speed presented by those two papers was promising, especially paired with the many studies contained within that review showing the efficacy of computer-aided feedback systems. In their study~\cite{bacafeedback}, Baca \& Kornfeind examined the use of advanced computerised ``Rapid Feedback Systems'' in elite training for rowing, table-tennis and biathlon, resulting in a set of considerations and guidelines for designing such a system.
Among others, these included: \begin{itemize} \item minimising the impact that the measurement system has on the athlete \item ensuring the system is mobile, and so usable at the training location, to prevent restriction to laboratory environments \item providing a fast, comprehensible and easily-decipherable GUI \item reducing the setup and training time required for proficient usage of the system \end{itemize} Again, this study included the interaction of coaches, but it provided a useful discussion on the efficacy of instant easy-to-use feedback systems, along with these guidelines for developing my product going forwards. \subsection{Video} Using video-playback to aid in the analysis of sports performance is a common traditional coaching technique, with studies spanning the past four decades~\cite{sportperformance86, groomcoachperceptions, groomvideo}. With the development of more advanced computer-aided systems, it became possible for coaches to manually augment the videos with annotations and markers~\cite{kinovea}, then interactively play back specific sections of the videos, which was shown by O'Donahue to have a beneficial effect on training and performance~\cite{odonovideo}. As Computer-Vision (CV) has become more sophisticated, the automation of some of these video-annotations has become more accurate and prevalent~\cite{cvinsport}, but currently seems limited to semi-accurate player-location-detection and fairly-inaccurate pose detection, which are very useful for overall analytics in some sports~\cite{pansiottenniscv}, but not for specific technique recommendations in others. \subsubsection{Marker-Based Motion Detection} Some systems, using multiple cameras along with reflective VICON markers placed on the body, can accurately detect movement. This has led to successful automation of technique-coaching in rowing~\cite{automaticrowingcoach}, yet probably has limited applicability for the average climber due to excessive setup and equipment costs.
\subsection{Wearables} The possibility that a single acceleration-recording wearable device could capture data rich enough to distinguish between levels of ability and technique was a very useful consideration in the early idea-development of my project. Ohgi~\cite{oghiswim} showed that by attaching a small tubular device to a swimmer's wrist, and graphing the tri-axial time-series acceleration data that was recorded, it was possible to determine the full path of the arm-stroke, and to distinguish between a stroke with good technique and a later, more fatigued stroke with worse technique. Building on this, in his review paper~\cite{callawayvideoacccomp} Callaway showed that for some sports, such as swimming, having multiple wearables on an athlete's body, providing accelerometer and gyroscopic measurements, can provide much better data and analytic feedback than the traditional video techniques. Instead of giving an overall impression of technique, measuring the angles, accelerations and velocities of various body-parts at different phases of a swimming-stroke could yield technique-adjustment recommendations, although that paper noted that the relative infancy of the measurement technologies available at the time caused accuracy issues, meaning these recommendations were not precise enough for real usage. \subsubsection{Mobiles} The use of mobile phone devices to provide data for sports technique analysis seems to be an area with very little research. A common use of mobile devices is using the accelerometer to provide pedometer functionality, counting footsteps, a feature that comes built-in with many modern smartphones. However, this has been shown to be very inaccurate without extensive calibration, and often varies greatly between the software and hardware used, with only GPS providing an accurate measurement of distance or speed travelled~\cite{pedometer}.
This lack of research into mobile-usage for sports analysis could be attributed to the generally elite-athlete-orientated nature of the field: very specialist hardware can often be used, as the recreational sportsperson is not usually the target population, as it is in this study. \section{Climbing-related Research} Although climbing has not seen as much research interest as many other sports, there have been some studies, looking into both data-collection on athletes and, in a couple of cases, beginning to explore the interaction of climbers with devices and software. \subsection{Data-Collection} \subsubsection{Computer Vision} Sibella et al.\ performed an analysis of twelve climbers on a specially-constructed 3m$\times$3m route, capturing their motion with reflective markers and a series of cameras~\cite{centreofmass}. By using computer vision to analyse the climbers' centres-of-mass, the force, agility and power of each participant could be determined, and further testing showed that the better climbers minimised power and climbed more efficiently than the less experienced in the test group. \subsubsection{Wearables} Multiple studies have looked into using wearable devices to record accelerometer data and correlate that data with various performance measures. ClimbBSN~\cite{climbbsn} uses a specially-designed ear-worn sensor to record tri-axis acceleration data, which is then analysed with a combination of Principal Component Analysis and Gaussian Mixture Models to present climbing profiles for the four test climbers. This was used to show that the data collected from the sensors correlates strongly with the grade at which the climber was proficient, and could therefore be used to track progression in the future. ClimbAX~\cite{climbaxstudy} is a similar system that uses wearable accelerometer-recording bands on all four limbs over a series of climbing sessions.
Data from many climbers across many climbs was analysed, leading to the training of a set of classifiers that could accurately assess performance across a range of aspects such as stability, speed, control and power. This led to successful predictions of results in a later climbing competition. The study presented itself as the first step towards an ``enthusiast-orientated coaching system'', and a startup company of the same name has been working on building that into a cloud analysis platform, but with no product released to market yet. It shows the potential of an accelerometer-based automatic analysis system, something that I decided to offer as an option in later wizard-of-oz studies. \subsection{Climbing-HCI Research} Various aspects of adding computer-interaction to climbing have been studied in recent years. \subsubsection{Visualisation} A prototype visualisation platform for the above ClimbAX system was developed, analysed and evaluated by Niederer et al.~\cite{niederervis}. Their logbook-style interactive web-app, which implemented and displayed the analyses developed in the ClimbAX study in a clean interface, alongside graphs of performance over time, was well-received in their usability study and reviews. Taking from this that usability and simplicity are key, alongside the participants' positive reviews of graphed accelerometer data, I nevertheless wanted to perform my user-centred iterative design from a relatively clean slate, and not allow these findings to bias my surveys and discussions with the testers I was working with. \subsubsection{Projectors} Kajastila et al.\ developed a system that uses an array of projectors and laser-sensors to illuminate a climbing wall~\cite{projectedclimbwall}. This succeeded in giving truly real-time feedback with no attached sensors: climbers could climb as normal but see games, statistics or timers projected above and around them.
However, the very specialist apparatus must be set up and calibrated for a specific wall with specific holds, meaning that although derivatives of the technology have been sold to various climbing centres, it is not feasible for an individual climber to use unless they travel to one of the few walls where it is set up. \subsubsection{Setters} One climbing-related HCI study that didn't directly interact with climbers was StrangeBeta~\cite{strangebeta}, which linked mathematical chaos with machine-learning to assist route-setters, the people who design and set indoor routes. This showed that augmenting normal climbing-related activities with data could create novel and more interesting routes, although some participants stated they didn't always appreciate input from the software. It also demonstrated the complexity of computerising climbing routes, requiring the development of a domain-specific-language and advanced parsers to allow the app to reason about and communicate hold-variety and -location. This intricacy (and occasional failure) of machine-learning systems working with the very complex range of climbs, even when limited to indoor routes, was a consideration that I was conscious of throughout my project. \subsubsection{Custom Holds} Edge~\cite{edgeinteractive} is a system involving custom climbing holds with built-in sound, light and haptic feedback mechanisms, linked to wrist- and ankle-bands. It guides the climber up the wall, indicating which limb should contact which part of which hold as the climber ascends, resulting in a variety of different routes for a single route set. \section{Other Climbing Technologies} \subsection{Products and Apps} \subsubsection{Logging} The traditional form of data-collection for climbing is the logbook: a small booklet that climbers can use to log each climb they successfully ascend, with notes on date, time, location, grade, type and ease.
This has been reflected in a multitude of mobile applications and websites that supply the same basic feature: to record and list climbs. Some of these apps also have graphing features, showing the progression of the average and maximum grades of climb summited over time~\cite{verticallife}. \subsubsection{Logging aided by accelerometer data} One interactive device that was lauded by the press upon its initial announcement was the Whipper~\cite{whipper}, a clip-on accelerometer linked to an app, which made over twice its funding goal on IndieGoGo but unfortunately never came to fruition. The basic idea was to augment logbook apps with more accurate data-collection through accelerometer, GPS and barometric (height) sensors. They also claimed to be able to automatically determine the name and grade of the climb just from this data, a tough task that many commentators believed led to the company's eventual downfall. Despite the rising popularity of fitness-tracking smartwatches from manufacturers such as Garmin and FitBit, these devices do not have any climbing-specific recording functionality other than logging time spent doing the activity. With Apple's latest smartwatch allowing app developers to access raw accelerometer data, some apps have been developed that use this to passively track a climbing session, recording the number of climbs and the time spent climbing versus resting~\cite{chalkprint}. \subsubsection{Computer-aided Technique Analysis} Lattice Training is a company set up by two climbing coaches and personal trainers, with the aim of assessing and planning climbing training with a rigorous data-led approach~\cite{lattice}. They use a variety of strength and endurance tests on specialist equipment, and after analysing the data of hundreds of climbers, have developed a series of statistical measures that can link each strength test to an expected grade of climbing ability, highlighting weaknesses to be worked on.
They have also released an accompanying mobile app for logging workouts. Their system has shown that by using data-collection for meaningful training over long periods of time, climbers can progress much more efficiently and quickly, but this expensive and intense product is too much for the average recreational climber. \subsection{Where AugKlimb fits in} The main aim of this project was to develop a product that fills the gap between the simplistic lists of climbs presented by the logging apps and the elite-level coaching or data analytics offered to athletes: something for the average recreational climber to use in a more casual, everyday fashion, but that still provides meaningful data augmentations.
\section{Introduction} Large-scale Transformer encoder-decoder models, which reportedly achieve high performance in question answering and machine translation, fields adjacent to dialogue \cite{wang2018glue,wang2019acl}, have been applied to chat dialogue systems \cite{Adiwardana2020TowardsChatbot,stephen2020blenderbot}. Two prominent chit-chat dialogue systems are based on the Transformer model: Meena by Google and BlenderBot by Facebook. Both are pre-trained on large numbers of response pairs collected from SNS replies or bulletin-board discussions. Meena is the first neural system to clearly surpass conventional rule-based systems. It uses only simple response pairs obtained from SNS, trained with an extremely large amount of computation. BlenderBot is a large-scale Transformer encoder-decoder model pre-trained with Reddit data and fine-tuned with corpora covering three skills (personality, empathy and knowledge) that the authors define as necessary for chit-chat dialogues. By training on high-quality data, BlenderBot subjectively outperformed Meena at the same model size with less computation, and showed performance similar to humans. \begin{table}[t] \centering \footnotesize \begin{tabular}{r|p{20em}} System& Let's talk about your favorite things. (早速ですがご趣味をうかがってもいいですか?)\\ User& I like traveling, especially traveling alone. (私の趣味は、旅行です!一人旅!)\\ System& Awesome! Do you generally travel domestically or overseas? (そうなんですね! 旅行は国内ですか?海外ですか?)\\ User& Overseas. I love going to Europe! (海外旅行のほうが多いです!ヨーロッパとか大好きなんです)\\ System& Europe sounds great. Which country have you been to? (ヨーロッパいいですね! 特にどの国に行かれましたか?)\\ User& Italy, many times to Venice. (何度も行ってるのは、イタリアのベネチアです)\\ System& I've never been to Venice. What's it like? (ベネチア行ったことないです! どんなところですか?)\\ User& It's a floating city, and the whole city is World Heritage. We can travel around by boats. (街全体が世界遺産の水上都市です。交通手段が船だけなんですよ)\\ System& That's interesting.
I want to visit there. The scenery looks really beautiful! (えー!すごいですね! 行ってみたいです! 景色も綺麗そうですね!)\\ User& Exactly! I took a lot of pictures. (まさに!景色も最高です!写真撮りまくりです!)\\ System& I like taking pictures, too. What kind do you take? (写真撮るの好きです! どんなジャンル撮られるんですか?)\\ User& I often take pictures of scenery when traveling. (写真楽しいですよね!私は旅行先で景色撮ることが多いです)\\ System& So do I. Have you traveled anywhere recently? (私も同じです。旅行先で景色撮ってます。 最近どこか行かれましたか?)\\ User& No, not really, although I did recently go to Kyoto. (最近はなかなか出かけてないですが、少し前に京都行きましたよ)\\ \end{tabular} \caption{Example of Japanese chit-chat with our Transformer-based model (trained with our favorite-things dialogue dataset)} \label{tab:hobbyist} \end{table} BlenderBot showed good performance in generating chit-chat responses, and its model behaviors have been analyzed; however, several issues remain unaddressed. The first issue is how the characteristics of the training corpus affect user impressions. Since conventional work examined only one setting of the fine-tuning datasets (even when mixing datasets) and used only a few overall evaluation metrics (e.g., ACUTE, SSA), the effects of varying the fine-tuning datasets on user impressions have not been examined. Another problem is that the system's effectiveness has only been verified for English, not for languages linguistically distant from English, such as Japanese. Due to differences in language resources and community sizes, non-English initiatives on pre-trained models are much less common than English ones. Multilingual BART (mBART) is one initiative that has used a multilingual corpus for simultaneous learning in multiple languages \cite{liu-etal-2020-multilingual-denoising}. Although it works well for languages with close linguistic characteristics, such as European languages, it has performed less favorably for linguistically distant languages such as Japanese (especially in the generation task).
In this study, we develop large-scale Transformer-based Japanese dialogue models and Japanese chit-chat datasets to examine the effectiveness of the Transformer-based approach for building chit-chat dialogue systems. We also analyze the relationship between user impressions and the fine-tuning strategies of the Transformer model, such as the characteristics and amount of the fine-tuning dataset, the model size, and the presence or absence of additional information. Since we expect these fine-tuning strategies to affect various impressions, we established multiple evaluation scales to conduct a multifaceted evaluation of the Transformer-based chat system. The contributions of this paper are as follows: \begin{itemize} \item We pre-trained, evaluated, and published a Japanese dialogue model trained with data comparable in scale to the SoTA systems in English\footnote{We are preparing to release the models and datasets.} (Table \ref{tab:hobbyist}). \item We created and published benchmark data for a chit-chat dialogue system in Japanese. \item We analyzed the impact of fine-tuning strategies (datasets used, model size, and the use of additional information) on subjective evaluations. \end{itemize} \begin{table*}[t] \centering \scalebox{1.0}[1.0]{ \begin{tabular}{rrrrrrrrr} Model name & Total Parameters & $V$ & $L_{enc}$ & $L_{dec}$ & $d$ & $h$ & Steps & PPL\\\hline 0.35B & 359,607,808 & 32K & 2 & 22 & 896 & 32 & 46K & 5.159 \\ 0.7B & 698,536,960 & 32K & 2 & 22 & 1280 & 32 & 48K & 5.033 \\ 1.1B & 1,065,683,200 & 32K & 2 & 22 & 1600 & 32 & 48K & 4.968 \\ 1.6B & 1,627,872,000 & 32K & 2 & 24 & 1920 & 32 & 48K & 4.924 \\\hline \end{tabular} } \caption{Training parameters of pre-trained models and perplexity on the validation set of our Twitter pre-training dataset for several models with given architecture settings.
Columns include the vocabulary size ($V$), number of encoder and decoder layers ($L_{enc}$, $L_{dec}$), embedding dimensionality ($d$), number of attention heads ($h$), and training steps.} \label{tab:pre-train} \end{table*} \section{Pre-training} \subsection{Model architecture} We generate responses with a standard Seq2Seq Transformer encoder-decoder architecture, as used in the {\it generative} model of BlenderBot \cite{stephen2020blenderbot}. We used the Sentencepiece tokenizer \cite{kudo-richardson-2018-sentencepiece}, as implemented in the official GitHub repository\footnote{https://github.com/google/sentencepiece}, to avoid unnecessary dependency on a specific fixed vocabulary. To examine the effect of model size in detail, we considered four model sizes: 0.35B, 0.7B, 1.1B, and 1.6B parameters. Although our largest model (1.6B parameters) is slightly smaller than the original BlenderBot 2.7B because of the limitations of our computational resources, we believe that the 1.6B model is sufficient to examine the effect of model size. \subsection{Pre-training dataset} For standard sentence classification or machine translation, plain text is commonly used for pre-training with denoising tasks that aim to recover original sentences from noise-added ones. Meena and BlenderBot, on the other hand, use a large amount of interactive data extracted from social networking sites or Reddit for pre-training, to learn the relationship between direct input contexts and target utterances. We follow this prior work and utilize a large amount of Twitter reply pairs, which are interactive data, for pre-training. Our data cleaning and setup procedures are as follows. First, we retrieved all the tweets from January 2016 to March 2018 of randomly sampled Japanese users. We then cleaned the tweets, normalizing the text and removing account names and emojis.
Then we removed the tweets that match the following conditions: \begin{itemize} \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm} \item Tweets that have another tweet with a cosine similarity of 0.9 or higher on the same day (tweets with fewer than 20 characters are not filtered). \item Retweets. \item Tweets that contain URLs. \item Tweets that contain parentheses (to exclude emoticons). \item Tweets whose author is a bot. \item Tweets that contain fewer than 30\% Hiragana and Katakana characters. \end{itemize} Next, we extracted the tweets in a reply relationship from the cleaned tweets and paired them as input contexts and target utterances. Using the tweet pairs, we extracted reply chains of tweets by extending each chain one tweet at a time from its root. We used the last tweet of a chain as the target utterance and the preceding tweets, starting from the root, as the input context. For example, if the chain is A-B-C-D, we used A-B, AB-C, and ABC-D, but not B-C. We obtained 2.1 billion (521 GB) pairs by this method. The average number of utterances in the input context was 2.91\footnote{The input context was calculated using 0.12\% of the total data.}, and the average number of characters was 62.3 for the input context and 20.3 for the target utterance. We built a Sentencepiece tokenization model \cite{kudo-richardson-2018-sentencepiece} using 20 million sentences sampled from the data of a QA community service called ``Oshiete goo!''\footnote{https://oshiete.goo.ne.jp/}, since these data cover the period from 2001 to 2019 and contain more recent topics than our pre-training data. \subsection{Training details} The model configurations are based on the 2.7-billion-parameter settings of Meena and BlenderBot, where the encoder has two layers and the decoder 22 or 24 layers. The dimensionality of the hidden layers was adjusted to avoid memory errors on the GPUs (V100 16GB) available at the AIST ABCI Cloud\footnote{https://abci.ai/ja/}, which is the computing resource we used.
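Two steps of the dataset construction described above, the tweet-level filtering and the expansion of a reply chain into context--target pairs, can be sketched as follows. The filtering heuristics are simplified here, and all function names are illustrative:

```python
# Simplified sketch of two steps from the dataset construction above:
# (1) a tweet filter (URL check and the >=30% hiragana/katakana rule), and
# (2) expanding a reply chain A-B-C-D into the (context, target) pairs
# A->B, AB->C, ABC->D. Function names are illustrative.

import re

KANA = re.compile(r"[\u3040-\u30ff]")        # hiragana + katakana block

def keep_tweet(text):
    if "http" in text:                        # drop tweets containing URLs
        return False
    if not text:
        return False
    kana_ratio = len(KANA.findall(text)) / len(text)
    return kana_ratio >= 0.3                  # keep if >=30% kana characters

def context_target_pairs(chain):
    """chain: utterances from the root tweet down to the leaf reply."""
    return [(chain[:i], chain[i]) for i in range(1, len(chain))]
```

Each extracted context list would then be joined into a single encoder input string during training.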
\tabref{tab:pre-train} shows the training parameters of each pre-training model used in this study, which are related to the model size. The other parameters are explored using Weights and Biases \cite{wandb} and set as follows. The dropout of the feed-forward layer and attention is set to 0.1. The learning rate is 1e-3, with 10000 warmup steps, and the maximum number of tokens per step is 2.4M. The objective function is label-smoothed cross entropy, which promotes early learning. Our computational resources were 400 V100 16GB cards. The maximum number of training steps was 48,000 for the 1.6B model; training stopped early at about 45,000 steps, which is almost equivalent to three epochs. The input format to the encoder connected the utterances in the input context of each pair with [SEP] tokens; no other information was added. The implementation uses the TransformerEncoderDecoderModel from fairseq\footnote{https://github.com/pytorch/fairseq}, originally developed for translation tasks. The data were too large to pre-train on as a single dataset; therefore, we grouped the reply pairs by their tweet dates and trained by data sharding. The validation data were set to the 3/28/2018 data. \section{Fine-tuning} \label{sec:fine-tuning} In this study, we created Japanese versions of similar corpora, using BlenderBot as a reference, and used them for fine-tuning. In this section, we describe the corpora used for fine-tuning, the format of the input information, and the detailed settings of the fine-tuning. \subsection{Fine-tuning dataset} For fine-tuning BlenderBot, \citet{stephen2020blenderbot} used PersonaChat, EmpatheticDialogues, and Wizard of Wikipedia as datasets that individually correspond to three abilities: personality, empathy, and knowledge, which should be possessed by a chit-chat system, and simultaneously used the BlendedSkillTalk dataset to integrate the abilities.
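The [SEP]-joined encoder input described above can be built with a one-liner; whether whitespace surrounds the token is an implementation detail not specified in the paper, so this sketch joins without spaces:

```python
SEP = "[SEP]"

def build_encoder_input(context_utterances):
    """Join the utterances of one input context with [SEP] tokens,
    matching the pre-training format (no extra information added)."""
    return SEP.join(context_utterances)

build_encoder_input(["Hello", "Hi, how are you?"])
# -> 'Hello[SEP]Hi, how are you?'
```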
For fine-tuning our Japanese version of BlenderBot, we developed Japanese versions of PersonaChat and EmpatheticDialogues and our original FavoriteThingsChat dataset. Although we also tried to construct a Japanese Wizard of Wikipedia, it proved very difficult for workers to conduct meaningful conversations. In the construction of the Wizard of Wikipedia dataset, the knowledge-offering interlocutor (wizard) of each dialogue can refer only to the first paragraph of Wikipedia pages, which gives a definition of the page content and is insufficient for the wizard to expand the dialogue meaningfully. We also examined translating the original English Wizard of Wikipedia into Japanese, but many topics differed from those that generally appear in Japanese conversations. After determining that it did not contribute to the learning, we abandoned the use of Wizard of Wikipedia. The unavailability of Wizard of Wikipedia greatly complicated building BlendedSkillTalk, which requires dialogue models trained on each of the three datasets. As an alternative to BlendedSkillTalk, we developed our original FavoriteThingsChat dataset, which also contains utterances displaying empathy, knowledge, and a consistent personality. The details of each corpus are described below. \subsubsection{PersonaChat(PC)} PersonaChat \cite{zhang2018acl} is a corpus where each speaker is given five profile sentences that regulate the speaker's features. Conversations are conducted based on the given profile sentence set, and the conversations of various speakers are collected in a pseudo manner. In this study, we constructed a Japanese version of PersonaChat by creating Japanese profile sentence sets and collecting conversations by Japanese speakers. Following a previous method \cite{zhang2018acl}, each Japanese profile sentence set combines five sentences of 30 characters or fewer. Cloud workers created 100 sets.
A trained annotator rewrote some sentences to remove similar structures or words. In the dialogue collection, the 100 obtained profile sentence sets were allocated to the cloud workers, and 5000 dialogues were collected. All the cloud workers engaged in chats about the content of the profile sentence sets given to them. The two speakers alternated, one utterance at a time; each utterance consisted of a maximum of 100 characters, and each dialogue contained a minimum of 12 and a maximum of 15 utterances (6-8 turns). The 5000 collected dialogues contained 61,794 utterances. \subsubsection{EmpatheticDialogues(ED)} EmpatheticDialogues is a dataset of dialogues collected in an open-domain one-on-one conversational setting where two people discuss a situation that happened to one of them, related to a given emotion \cite{rashkin2019empathetic}. \citet{rashkin2019empathetic} used crowdsourcing to collect 24,850 dialogues grounded in descriptions of situations in which a speaker expresses a feeling drawn from 32 emotion words. In this study, to make a Japanese version of EmpatheticDialogues, we translated the 32 English emotion words into Japanese, and Japanese speakers used them to construct situation sentences and dialogues. To reduce collection cost, each dialogue was not an actual dialogue between two workers; it was a pseudo dialogue written by a single worker. Each crowdworker referred to the translated list of emotions, created a context of 1-3 sentences based on an emotion, and wrote a text dialogue of four utterances by two persons (speaker and listener) who interact in that context. 20,000 dialogues and 80,000 pairs of utterances were collected. \subsubsection{FavoriteThingsChat dataset(Fav)} The FavoriteThingsChat dataset consists of intensively collected chats about the favorites of various people. Each of the 80 experiment participants talked with more than 60 of the other participants.
We collected 3480 dialogues and 123,069 utterances. Since the participants talk about their own actual favorites, they proactively show knowledge about their favorites and display empathy and interest in the interlocutor's favorites with a consistent personality. The range of topics is comparatively narrow, because topics are limited to the favorite things of each speaker and because the same 80 speakers repeatedly talk with one another. We expect that dialogues collected in such a ``high-density'' setting help the dialogue model acquire enough knowledge to discuss each topic in depth. In addition, each conversation continues for a relatively large number of turns (average of 35.3), which is a suitable setting for learning long conversations. We therefore expect that training on this dataset will improve dialogue impressions more than PersonaChat, where the number of turns is low and speakers role-play their personas, or EmpatheticDialogues, where dialogues have even fewer turns and much of each dialogue scene is discarded. \tabref{tab:favdial} shows an example of the collected dialogues. \begin{table}[t] \centering \footnotesize \begin{tabular}{r|p{20em}} Speaker & Utterance \\\hline 65 & Hello! (こんにちは)\\ 71 & Pleased to meet you! (よろしくお願いします!)\\ 67 &What do you do in your free time? (趣味はなんですか?)\\ 71 &I like traveling, watching movies and reading books! How about you? (私の趣味は、旅行と映画を観たり本を読んだりすることです! あなたの趣味は何ですか?)\\ 67 & I like watching baseball games and playing video games. Where have you traveled to recently? (趣味は野球観戦とゲームですねー 旅行は最近どちらに行かれましたか?)\\ 71& Do you have a favorite team to watch baseball games? (野球観戦は好きなチームがありますか?)\\ 67& For professional baseball, I like Orix. (プロ野球だとオリックスですねー)\\ 71& Recently, I went to Okinawa last July and Toyama in May! Orix is the team Ichiro used to play for, right? (最近は去年の7月に沖縄と5月に富山に行きました!オリックスは昔イチローがいたチームですね?)\\ 67&Yes! But when I started rooting for them, he was in the major leagues...
What kind of places did you visit in Toyama...?(そうです!ただ僕が応援し始めたときにはメジャーリーグに行っていましたねー 富山はどういったところ廻られたんですか…?) \\ 71& That was a long time ago! Do you go to see games at the stadium? (結構昔ですもんね!!!試合をドームとかに観に行ったりもされるんですか?)\\ 67 & Yes, I do. I didn't go to many games last year, but I think I went to about 10 games the year before last. (行きますねー 去年はあんまりでしたが、一昨年は10試合ほど行ったと思いますー)\\ 71 & In Toyama, I went to a park with tulips, the most beautiful Starbucks in Japan, and a glass museum! I also ate white shrimp and Toyama black (ramen)! (富山は、チューリップがある公園と、日本一?美しいスタバとガラスの美術館に行きました! あとは白エビ食べたり、富山ブラック(ラーメン)食べたりしました!)\\ \end{tabular} \caption{Example of our FavoriteThingsChat dataset} \label{tab:favdial} \end{table} \subsubsection{Mixed dataset(Mix)} In BlenderBot's generative model, several kinds of corpora are mixed for fine-tuning. No research has yet clarified whether each dataset achieves the interaction function it was designed for. Nor has research investigated which improves interaction performance more: using a single high-quality dataset, or increasing the overall quantity by adding other datasets. In this study, in addition to fine-tuning on each dataset alone, we compare the following two cases: one where the mixed dataset has the same quantity as each individual dataset, and another where all the datasets are combined in full. \subsection{Use of additional information in query sentences} \label{sec:addinfo} In response generation using the encoder-decoder model, when information is input to the encoder, additional information can also be input in the same text format alongside the dialogue's context. In this study, we analyzed the effect of the presence or absence of such additional information on the impressions of dialogues. Such information might improve the generation performance, but clinging to it can deleteriously affect long dialogues.
Below we show the information added to each dataset. PersonaChat, as in previous work \cite{zhang2018acl}, adds the set of profile sentences to the input to improve utterance consistency by grounding the generated responses in them. In EmpatheticDialogues, situation sentences and emotion words are added to input sentences, as in previous studies. The stability of utterance generation is expected to increase since the situation and the feeling to be generated are fixed \cite{rashkin2019empathetic}. In the FavoriteThingsChat, only the speaker's ID is added as information. In comparison with PersonaChat and EmpatheticDialogues, the effect is expected to be comparatively small, because a simple ID is not accompanied by concrete content. \begin{table}[h] \centering \small \begin{tabular}{p{\linewidth}} $<${\it Dataset name}$>$:[SEP]$<${\it Speaker ID}$>$[SEP][SPK1] $<${\it System Utt.}$>_{t-2}$[SEP][SPK2] $<$ {\it User Utt.}$>_{t-1}$ \end{tabular} \caption{Query sentence format input to the encoder} \label{tab:input_template} \end{table} \subsection{Fine-tuning training details} In fine-tuning, up to four utterances are used as the dialogue context, until the maximum character length reaches 128. As in the pre-training, Adafactor was used as the optimizer. The other parameters were changed from the pre-training settings: the learning rate was 1e-4, with 3000 warmup steps and a batch size of 256. With these settings, we trained up to 3000 steps (about 30 minutes with 128 V100 16GB cards) and kept the model that minimized the perplexity of the validation set. \section{Sentence generation settings} \paragraph{Decoding} \label{sec:decode} For decoding utterances from the model, we adopted the sample-and-rank scheme used in Meena \cite{Adiwardana2020TowardsChatbot}. In this method, the final output is the candidate with the lowest perplexity among $N$ candidate utterances generated independently by sampling.
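The sample-and-rank scheme can be sketched as follows; `toy_generate` and `toy_perplexity` are stand-ins for the real model calls and are assumptions of this sketch:

```python
import math
import random

def sample_and_rank(generate, perplexity, n=20, seed=0):
    """Sample-and-rank decoding: draw n candidates independently by
    sampling and return the one the model itself finds most likely
    (lowest perplexity)."""
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return min(candidates, key=perplexity)

# Toy stand-ins: sample word sequences and score them with a fake
# per-token log-probability table.
vocab = ["yes", "maybe", "certainly", "no"]
fake_logp = {"yes": -0.5, "no": -0.7, "maybe": -2.0, "certainly": -3.0}

def toy_generate(rng):
    return [rng.choice(vocab) for _ in range(3)]

def toy_perplexity(tokens):
    return math.exp(-sum(fake_logp[t] for t in tokens) / len(tokens))

best = sample_and_rank(toy_generate, toy_perplexity, n=10)
```

With a real model, `generate` would run temperature/nucleus sampling and `perplexity` would score the candidate under the same model.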
In our initial study, we used diverse beam search, as in BlenderBot. However, we found that the sample-and-rank method was more advantageous for expanding the topics, because diverse beam search often produced somewhat boring responses. We also introduced temperature $T$ when calculating the softmax, which controls the token output probability \cite{hinton2015distilling}. A temperature of $T=1$ results in normal sampling; the higher the temperature $T$, the more contextually unusual tokens (e.g., proper nouns) are generated, but contextually incorrect tokens also become more likely. Conversely, the lower the temperature $T$, the more likely safe and common words are to be selected. In addition, we used nucleus sampling to limit the number of sampled words by the cumulative probability density \cite{holtzman2019topp}. We used $top\_p=0.9$ and $T=1.0$ based on preliminary experiments. \paragraph{Filtering candidate utterances} In a preliminary experiment, we found that the models generate many repetitive utterances with almost identical content to utterances in the past context. To suppress such repetitive utterances, we filtered out candidate utterances whose similarity, computed with the Gestalt pattern matching algorithm\footnote{https://docs.python.org/ja/3/library/difflib.html}, to utterances in the context or to sentences (segmented by punctuation from the context utterances) exceeds the threshold $\sigma_r$. We set $\sigma_r$ to 0.5. \begin{table*}[t] \small \centering \begin{tabular}{r|l} Metric name & Questionnaire \\\hline Humanness & The system utterances were human-like and natural. \\ &(システムの発話は人間らしく自然だった)\\ Ease & Continuing the dialogue was easy.
\\&(簡単に対話を続けることができた)\\ Enjoyability & I enjoyed interacting with the system.\\ &(システムとの対話は楽しかった)\\ Empathetic&I was able to empathize with the system utterances.\\& (システムの発話に共感できた)\\ Attentiveness & The system was interested in me and was actively trying to talk with me.\\&(システムはあなたに興味を持って積極的に話そうとしていた)\\ Trust & I felt that what the system said was trustworthy.\\ &(システムの話したことは信頼できると感じた)\\ Personality & I could sense the system's personality and character.\\&(システムの個性・人となりが感じられた)\\ Agency & I felt that the system was speaking from its own perspective.\\&(システムは自身の考えをもって話していると感じた)\\ Topic & I felt that the system had a topic it wanted to discuss.\\&(システムには話したい話題があると感じた)\\ Emotion & I felt that the system had feelings.\\&(システムは感情を持っていると感じた)\\ Consistency&The system utterances were consistent and coherent.\\&(システムの発話は矛盾せず一貫していた)\\ Involvement&I was absorbed in this dialogue.\\&(この対話にのめりこめた)\\ Respeak&I want to talk with this system again.\\&(またこのシステムと話したい)\\\hline \end{tabular}\vspace{-0mm} \caption{Evaluation metrics}\vspace{0mm} \label{tab:criteria} \end{table*} \section{Experiment} As described in Section \ref{sec:fine-tuning}, we analyzed how the user's impressions of the dialogue changed depending on the dataset used for fine-tuning and inputting additional information to the encoder. We also analyzed the effects of the mixture of datasets and the model size on the overall performance. 
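The repetition filter described in the sentence-generation section — rejecting candidates whose Gestalt pattern-matching similarity to context utterances, or to their punctuation-split sentences, exceeds $\sigma_r = 0.5$ — can be sketched with Python's difflib, the module the paper cites (the helper names are assumptions of this sketch):

```python
import difflib
import re

SIGMA_R = 0.5  # similarity threshold from the paper

def is_repetitive(candidate, context_utterances):
    """True if the candidate is too close (ratio > SIGMA_R) to any past
    utterance or to any punctuation-delimited sentence of one."""
    units = list(context_utterances)
    for utt in context_utterances:
        units += [s for s in re.split(r"[。!?.!?]", utt) if s]
    return any(
        difflib.SequenceMatcher(None, candidate, u).ratio() > SIGMA_R
        for u in units
    )

def filter_candidates(candidates, context_utterances):
    """Keep only candidates that do not repeat the context."""
    return [c for c in candidates if not is_repetitive(c, context_utterances)]
```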
\begin{table*}[t] \centering \begin{tabular}{ccccccc} Fine-tuning corpus & Size & PC & ED & Fav & Mix\\\hline\hline Pre-trained & 1.6B & 38.45/39.50 & 27.35/28.03 & 29.35/33.41 & 31.65/33.67 \\\hline PC50k & 0.35B & 25.03/21.72 & 27.83/21.89 & 39.57/35.2 & 30.29/25.75 \\ PC50k & 0.7B & 23.06/19.77 & 24.11/19.41 & 35.25/31.30 & 27.08/23.07 \\ PC50k & 1.1B & 21.88/18.86 & 22.89/18.06 & 34.71/30.42 & 26.03/21.99 \\ PC50k & 1.6B & 21.32/18.35 & 22.15/17.58 & 34.06/29.58 & 25.38/21.39 \\\hline ED50k & 0.35B & 42.84/33.92 & 19.72/15.64 & 38.64/37.05 & 32.86/27.82 \\ ED50k & 0.7B & 39.15/30.50 & 17.81/14.13 & 35.99/34.09 & 30.12/25.25 \\ ED50k & 1.1B & 38.47/28.78 & 16.97/13.42 & 35.53/33.39 & 29.37/24.19 \\ ED50k & 1.6B & 34.22/28.26 & 16.21/13.05 & 31.05/32.26 & 26.52/23.54 \\\hline Fav50k & 0.35B & 44.97/42.13 & 31.37/27.48 & 21.74/21.07 & 31.41/29.19 \\ Fav50k & 0.7B & 41.50/39.34 & 28.46/25.12 & 19.97/19.60 & 28.79/27.05 \\ Fav50k & 1.1B & 39.83/35.91 & 26.85/23.11 & 19.12/18.54 & 27.47/25.05 \\ Fav50k & 1.6B & 37.23/34.79 & 25.26/22.21 & 19.05/17.94 & 26.30/24.21 \\\hline Mix50k & 0.35B & 28.91/24.3 & 21.43/17.15 & 23.25/23.11 & 24.53/21.55 \\ Mix50k & 0.7B & 26.27/22.00 & 19.23/15.43 & 21.36/21.20 & 22.29/19.56 \\ Mix50k & 1.1B & 25.04/21.01 & 18.24/14.57 & 20.35/20.23 & 21.22/18.61 \\ Mix50k & 1.6B & 24.21/20.43 & 17.58/14.20 & 19.83/19.60 & 20.55/18.09 \\\hline Mix150k & 0.35B & 25.64/21.84 & 20.10/15.91 & 22.19/21.54 & 22.69/19.8 \\ Mix150k & 0.7B & 23.52/20.00 & 18.00/14.33 & 20.48/20.02 & 20.71/18.13 \\ Mix150k & 1.1B & 22.35/19.04 & 17.04/13.50 & 19.53/19.00 & 19.68/17.19 \\ Mix150k & 1.6B & 21.69/18.46 & 16.41/13.09 & 18.94/18.24 & 19.05/16.61 \\\hline \end{tabular} \caption{Perplexity of compared models on each test dataset. 
Left values show the flat (no additional information) condition and right values the tagged (with additional information) condition.} \label{tab:ppl} \end{table*} \subsection{Experiment procedures for verification} This section explains the factors we verify regarding fine-tuning and the experiment procedures. \paragraph{Fine-tuning datasets} The utterances contained in each dataset have different properties depending on the dialogue function intended by the dataset. For example, EmpatheticDialogues is expected to have empathetic and emotional utterances, and PersonaChat to have questions and self-disclosing utterances about the interlocutors' personalities. These properties give users different dialogue impressions. We analyze how the user's impression of the dialogue with the model changes depending on the nature of the utterances in the fine-tuning datasets. First, we train a dialogue model using only the utterances contained in each dataset, without additional information such as profile sentences. We call this setting {\it flat}. Then we let the experiment participants interact with all the models and evaluate each dialogue using the 13 measures described below. We compare the average values of the 13 metrics among the models $v(m^{flat}_d, s) \in D$ to verify the overall performance of the models. We also compare the values of the 13 metrics with their averages for each fine-tuning dataset to verify whether each dataset contributes to the value of a particular scale. Note that, since we expected the range of values of each metric to differ, we calculate normalized values by subtracting the average value of each metric for each dataset from the values assigned to that metric, to reduce the effect of differences among the metrics themselves. If the difference for a metric has a large absolute value, a corpus-specific effect on that metric is being observed.
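One plausible reading of this normalization — subtracting each metric's average across datasets from every dataset's value for that metric — can be sketched as follows (the function name and the dict layout are assumptions of this sketch):

```python
def center_by_metric(scores):
    """scores: dataset -> {metric: mean rating}.

    Returns dataset -> {metric: rating minus that metric's average over
    all datasets}, removing scale differences between the metrics; a
    large absolute deviation flags a corpus-specific effect.
    """
    metrics = next(iter(scores.values())).keys()
    metric_avg = {
        m: sum(d[m] for d in scores.values()) / len(scores) for m in metrics
    }
    return {
        ds: {m: v - metric_avg[m] for m, v in d.items()}
        for ds, d in scores.items()
    }
```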
We performed the Friedman test for repeated measurements \cite{friedman1937test} on the differences of the normalized values of each metric for each dataset. For the datasets found to be significant, we perform the Wilcoxon signed-rank test \cite{wilcoxon1945signed} to examine the difference between each metric and the averaged scores. For multiple-testing correction, we adopt the BH procedure, which controls the false discovery rate \cite{benjamini1995fdrbh}. \paragraph{Using additional information} We analyzed the effect of the presence of additional information on the dialogue impressions, following Section \ref{sec:addinfo}. Even though using additional information may improve generation performance, it might also negatively impact long dialogues because of adherence to the additional information. We verify the effect of the additional information through almost the same process as the {\it flat} condition described above, the difference being that additional information is used in the fine-tuning ({\it tagged} condition). \paragraph{Mixing fine-tuning datasets} We investigated the impact of mixing multiple datasets on the overall model performance. We considered two cases. The first trains the model on the same amount of data as the individual datasets, to test the usefulness of mixing different types of dialogues. Although training on a wide range of data might improve the overall performance by making training more robust, when mixing high- and low-quality datasets, the performance might be improved only by the high-quality dataset. The second case simply increases the amount of data. Here we examined whether the performance improves with more data, even when low-quality data are included.
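Among the statistical tools above, the BH correction is easy to implement directly; this standalone sketch computes BH-adjusted p-values for the family of Wilcoxon tests (in practice, libraries such as statsmodels provide an equivalent):

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values, controlling the false
    discovery rate over a family of tests.

    Each raw p-value p_(i) with rank i (ascending) is scaled by m/i,
    then a running minimum from the largest rank downward enforces
    monotonicity of the adjusted values.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end  # 1-based ascending rank of p-value i
        prev = min(prev, pvalues[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```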
In addition, we mixed the datasets under two conditions: one adds a word that represents each corpus ("個性雑談" (meaning PersonaChat in Japanese), "共感雑談" (EmpatheticDialogues), and "趣味雑談" (FavoriteThingsChat)) at the beginning of each input sentence of the dataset, together with the additional information that corresponds to each dataset (we call this the {\it mixed-tagged} condition); the other only fine-tunes on utterances without any additional information ({\it mixed-flat}). In the inference for actual conversations under the mixed-tagged condition, we use the dataset type "趣味雑談" (FavoriteThingsChat) and randomly set IDs to be added based on the settings of FavoriteThingsChat, to minimize the additional information. \paragraph{Model size and fine-tuning datasets} In BlenderBot, the performance did not improve even when the model size increased. We investigated the effect of varying the model size and the training dataset on the performance. We used the average value of each measure and examined whether the evaluation values correlated with the model size for each dataset. \subsection{Evaluation metrics} \label{sec:criteria} \citet{fitrianie2020iva} conducted an extensive meta-survey of evaluation measures of interactive virtual agents and user interaction and classified those used in existing research. The classified evaluation measures contain various perspectives and are useful references for designing the evaluation metrics for dialogue systems in our study. However, since the scales are not for textual dialogues but for dialogues with CG-visualized virtual agents, they include many multi-modal factors such as appearance, while remaining rather coarse in terms of language ability. Therefore, we discarded, integrated, and divided the original measures to fit our research objectives. Our evaluation metrics are shown in \tabref{tab:criteria}.
\subsection{Collection of dialogue data} \subsubsection{Participants} In this study, we used crowdsourcing to conduct subjective evaluations. We recruited 32 Japanese-speaking crowdworkers from a Japanese crowdsourcing service called Lancers\footnote{https://www.lancers.jp/}. Only workers who had performed high-quality work were selected. The unit price was set at 300 yen (about three dollars) per dialogue task. All 32 workers were assigned to all 25 systems and collected one dialogue for each system. \subsubsection{Dialogue task settings} In this study, following a live dialogue system competition in Japan \cite{higashinaka2020dslc3}, the dialogue length was set to 15 utterances each by the system and the user. The conversation starts with a fixed phrase from the system: "Hello. Nice to meet you." After 15 utterances each, the following fixed phrase notifies the user of the dialogue's end: "Oh, I'm sorry. Our time is about up. Thank you for today." This closing phrase serves as the response to the user's 15th utterance, and the conversation is finished. After each conversation, a link to a Google Form for evaluating the conversation is displayed. Interaction evaluations are done on an 11-point Likert scale that ranges from 0 (completely disagree) to 10 (completely agree) for each item. The Telegram platform was used for the interactions. No particular constraints were placed on the workers' dialogue environment (PCs, smartphones, etc.).
\begin{table*}[h] \begin{minipage}{0.5\linewidth} \centering \begin{tabular}{c|rrr|r} Measure & ED & PC & Fav & Average\\\hline Naturalness & 5.81$\uparrow$ & 5.00 & 6.41 & 5.74\\ Ease & 5.97 & 6.12 & 7.00 & 6.36\\ Enjoyment & 5.16 & 5.50 & 6.97 &5.88\\ Empathy & 4.25 & 4.94 & 6.03 & 5.07\\ Attentiveness & {\bf 4.31}$\downarrow$ & 5.34 & {\bf 8.12}$\uparrow$ & 5.93\\ Trust & 4.22 & 4.09 & 5.62 & 4.65\\ Personality & 5.53 & 5.19 & 6.09 & 5.60\\ Agency & 5.78 & 5.00 & 6.22 & 5.67\\ Topic & 5.03 & 5.38 & 7.03 & 5.81\\ Emotion & 5.53$\uparrow$ & 4.66 & 5.69$\downarrow$ & 5.29\\ Consistency & 4.41$\uparrow$ & 3.25 & 4.81 & 4.16\\ Engagement & 4.94 & 4.78 & 5.59 & 5.10\\ Respeak & 4.88 & 4.59 & 5.94 & 5.14\\\hline Average & 5.06 & 4.91 & 6.27 & \end{tabular} \subcaption{Human evaluations of models without additional information ({\it flat} condition)} \label{tab:eval_flat} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \begin{tabular}{c|rrr|r} Measure & ED & PC & Fav & Average\\\hline Naturalness & 3.41 & 3.41$\downarrow$ & 7.09 &4.64\\ Ease & 4.12 & 3.81 & 7.12 &5.02\\ Enjoyment & 3.50 & 3.47 & 6.22$\downarrow$ &4.40\\ Empathy & 2.84 & 2.88 & 5.84 & 3.85\\ Attentiveness & {\bf 2.75}$\downarrow$ & 3.78 & 7.00 &4.51\\ Trust & 2.44 & 2.22 & 6.00 & 3.55\\ Personality & 3.66 & 3.53 & 6.06 &4.42\\ Agency & 4.12 & 3.47 & 6.50 &4.70\\ Topic & {\bf 5.00}$\uparrow$ & 5.03$\uparrow$ & 6.50 &5.51\\ Emotion & 2.97 & 3.44 & 5.91 &4.11\\ Consistency & 1.50$\downarrow$ & 2.81 & 5.91 &3.41\\ Engagement & 2.53 & 3.12 & 5.81 &3.82\\ Respeak & 2.78 & 3.00 & 5.88 &3.89\\\hline Average & 3.20 & 3.38 & 6.30 \end{tabular} \subcaption{Human evaluations of models with additional information ({\it tagged} condition)} \label{tab:eval_tagged} \end{minipage} \caption{ Human evaluations on multi-axial evaluation measures: Up arrows denote corresponding dataset significantly improved the evaluation metric, and down arrows denote decrease of metric (bold: $p<.05$, non-bold: $p<.1$).} 
\end{table*} \section{Results and analysis} \subsection{Automatic evaluation} \tabref{tab:pre-train} shows the perplexity of the pre-trained models on the validation set of the pre-training dataset. The perplexity decreases with larger models. \tabref{tab:ppl} shows the perplexity of each model on the test set of each fine-tuning dataset. For all fine-tuning datasets, except for the Pre-trained models, the larger the model size, the lower the perplexity, and the use of additional information improves the perplexity. \subsection{Human evaluation} \subsubsection{User impressions of fine-tuning datasets} We analyzed how the various datasets used for fine-tuning affected the user's impressions using the multi-axial evaluation scale. \tabref{tab:eval_flat} shows the evaluation results using only the dataset sentences ({\it flat} condition). \tabref{tab:eval_flat} shows that the ED dataset improved naturalness, emotion, and consistency but lowered attentiveness. Since ED has empathetic utterances for many kinds of situations and emotions, the ED-trained model lets users feel that the system shows natural and consistent emotions. However, since the dialogues in the ED dataset only include four utterances, the system barely develops dialogue topics and simply repeats empathetic utterances, which probably decreased attentiveness. In contrast, \tabref{tab:eval_flat} illustrates that PC has no significant effect on evoking specific impressions, including personality. \tabref{tab:eval_flat} also shows that Fav improved attentiveness scores but decreased emotion. This is because the Fav dataset consists of long dialogues that contain many questions, which effectively improve attentiveness. On the other hand, such questions seem less effective at expressing the speaker's own emotions. In terms of overall impression, the Fav dataset significantly outperforms the other two datasets.
\tabref{tab:eval_tagged} shows the evaluation results with additional information ({\it tagged} condition). The main difference from the flat condition is the large decrease in the average overall scores of ED and PC. The ED and PC datasets have concrete dialogue topics defined by profile sentences or situation information. Such information contributes to the improvement of topic scores, but in the actual dialogues, the systems with additional information frequently generate persistent utterances with almost the same content as the dialogue history. \begin{figure}[t] \centering \includegraphics[width=.95\linewidth]{addinfo.pdf} \caption{Mixing datasets\label{fig:addinfo}} \end{figure} \begin{figure*}[t] \includegraphics[width=1.05\linewidth]{modelsize_datatype.pdf} \caption{Variation of model size and datasets\label{fig:datatype}} \end{figure*} \subsubsection{Mixing datasets} We also tested whether mixing datasets improves performance more than using a single dataset. Figure \ref{fig:addinfo} shows the relationship between the corpus and the average evaluation value for each additional-information condition. In all cases, Mix50K $<$ Fav50K $<$ Mix150K. For the same amount of data, the performance is slightly degraded by mixing in datasets with low evaluations, such as ED and PC. On the other hand, when the total amount of data increased, the performance improved even when low-evaluation datasets were mixed in. With respect to the presence or absence of additional information, the evaluation of single datasets (especially ED and PC) tended to decrease with additional information, while the performance of mixed datasets improved with additional information. In the case of a mixture of different datasets, learning to distinguish the dataset types and specifying the highest-rated type during inference may have contributed to the performance improvement.
\subsubsection{Model size and datasets} The effect of model size for each corpus is shown in Figure \ref{fig:datatype}. For Fav50k and Mix150k, the model size is significantly correlated with the evaluation value. On the other hand, for ED50k, PC50k, and Mix50k, the correlation between the model size and the evaluation value was insignificant, indicating that the evaluation did not increase with the model size. In general, performance is expected to improve as model size increases, and indeed the perplexity improved with increasing model size for all the models in Mix and Fav; however, these objective results differed from the subjective evaluations. This suggests that in long dialogues, factors that strongly affect the impression cannot be measured by simple perplexity. \section{Conclusion} We developed the largest Transformer-based Japanese dialogue models and Japanese versions of PersonaChat and EmpatheticDialogues, which are widely used standard benchmark datasets for evaluating chit-chat models in English. We also evaluated and analyzed the effects of changes in the fine-tuning datasets, the model size, and the use of additional information on users' impressions of dialogues from multiple perspectives. Our results identified the following. The model performance and user impressions greatly depend on the sentences contained in the fine-tuning dataset, and this effect exists even when additional information (e.g., profile sentences) is not available. The use of additional information is intended to improve specific impressions, but it is not always beneficial. The relationship between model size and overall performance varies greatly depending on the type of fine-tuning corpus: we found that increasing the model size did not improve the PersonaChat and EmpatheticDialogues performance.
Future work will clarify how dialogue content and degrees of dialogue breakdown affect dialogue impressions. \section*{Acknowledgement} This work was supported by JSPS KAKENHI Grant Number 19H05693. \bibliographystyle{acl_natbib}
\section*{Abstract} We modeled the dynamics of a soccer match based on a network representation where players are nodes discretely clustered into homogeneous groups. Players were grouped by physical proximity, supported by the intuitive notion that competing and same-team players use relative position as a key tactical tool to contribute to the team's objectives. The model was applied to a set of matches from a major European national football league, with players' coordinates sampled at 10Hz, resulting in $\approx$ 60,000 network samples per match. We took an information theoretic approach to measuring distance between samples and used it as a proxy for the game dynamics. Significant correlations were found between measurements and key match events that are empirically known to result in players jostling for position, such as when striving to get unmarked or to mark. These events increase the information distance, while breaks in game play have the opposite effect. By analyzing the frequency spectrum of players' cluster transitions and their corresponding information distance, it is possible to build a comprehensive view of player's interactions, useful for training and strategy development. This analysis can be drilled down to the level of individual players by quantifying their contribution to cluster breakup and emergence, building an overall multi-level map that provides insights into the game dynamics, from the individual player, to the clusters of interacting players, all the way to the teams and their matches. \\ \\ \\ \end{@twocolumnfalse} ] \section{Introduction} \label{introduction} \noindent Complex systems, with time evolving interactions among its elements, abound in the social, biological and physical domains. In many of these systems, elements are clustered in groups that also undergo changes with time. A temporal, clustered network can be an appropriate representation of such a system. 
In this article we apply this representation to the sport of soccer. Soccer, like many other competitive team sports, can be seen as a social-biological complex system \cite{Ramos2020}. The domain dynamics of these sport modalities are neither fully random nor fully designed. This contributes decisively to their complexity. This is a property shared by many other complex systems that are subject to constrained random chance; we therefore believe the techniques and approaches researched for this article have potential application beyond sports. We use the term ``clustering'' to mean the set of disjoint non-empty subsets of nodes observed in the network at a given point in time. Some authors call it a ``partition''. These terms represent similar constructs, clustering being semantically associated with an emerging, bottom-up aggregation of nodes, while partition conveys the idea of a top-down driven process. In soccer there is not a single entity controlling group formation \cite{Ribeiro2019b}, at least not directly and in real time, so the former seems more appropriate. The soccer match is here represented as a succession of network observations where clusters are subsets of players, including the two football goal frames, resulting in a network with a maximum of 24 nodes concurrently active, plus substitutes \cite{Ramos2017}. While studying a soccer match as an evolving clustered network, we start from the proposition that players' spatial distribution on the pitch is the determining variable for clustering. Intuitively we could think that an optimal assignment of players to clusters would require a physical distance measure, weighting links by relative player distance. However, there are complicating factors to the usage of such a precise measurement, as the importance of inter-player distance is not independent of game play \cite{Ramos2018}.
It varies with pitch location, ball position, game rules, environmental conditions (such as playing surfaces or weather), or the relation between time and distance in dynamic game settings. All these contribute to the actual player's instantaneous grasp of his performance environment and perception of opportunity for action \cite{Araujo2016}. The network representations we are using for the present analysis were built with a different approach. Instead of inter-player links weighted by distance, players were clustered into homogeneous and disjoint groups connected by a single link \cite{Ramos2017}, using the formalism of hypergraphs \cite{Berge1973}. A hypergraph is characterized by having multiple nodes connected by a single link, in contrast with a traditional graph where links have a maximum of two endpoints. A set of nodes that share a link is called a simplex. The process to identify these sets is non-parametric and is explained in \cite{Ramos2017}. It guarantees that no node is closer to a node belonging to a different simplex than to its closest node in the same simplex. In the particular context of the present article, simplices are sets or clusters, and the collection of simplices observed in a single sample, a clustering. In the remainder of this document, we use the terms simplex and cluster interchangeably, all referring to the same construct: a group of players in articulated interaction and proximity. An example of the clustering process is illustrated in figure \ref{fig:7} in the appendix. \pagestyle{fancy} \fancyhf{} \chead{The Soccer Game, bit by bit} \rfoot{Page \thepage\ of \pageref{LastPage}} It could be argued that discretization and assignment of nodes to a pairwise disjoint family of sets would lead to a distorted representation of events on the pitch.
After all, players move freely in a Euclidean space and in continuous real time, while in the proposed representation time is discrete and players move on a lattice, understood not as a grid that spans the pitch but as the configuration space of all possible set arrangements \cite{Conway1999, Johnson2010}. Frequent observation, however, mitigates these effects. For example, peripheral players in a simplex will more easily transfer to a different simplex and, if frequently observed, any simplex changes will be quickly captured. Due to the high frequency characteristic of the network (10Hz), errors will smooth out as player simplices form and dissolve, establishing a bridge between the continuous domain of game play and the time-sliced network representation employed \cite{Johnson2016}. This discretization carries with it a significant advantage. We are no longer in a continuous domain, and the toolkit of information theory \cite{Cover2006} becomes available to us. In a discrete domain, information can be quantified for complexity, such as in the Kolmogorov complexity or the Shannon entropy \cite{Kolmogorov1968, C.E.Shannonvol.27pp.JulyOctober1948, Grunwald2008}. Similarly, two pieces of information can be compared for distance. We can determine how far apart or how close they are by the number of units of information that are needed to find one given the other. In this article the pieces of information are the individual clustering samples of the soccer match. Formally, a clustering is: \begin{multline} C = \{c_1, \cdots, c_k\}: \\ (c_i \cap c_j = \varnothing \;\; \forall \;( 1 \leq i, j \leq k\; \land \; i \neq j)) \land \; \cup _{i=1}^{\,k} \, c_i = V \label{eq:1} \end{multline} where $c$ are the disjoint subsets, $k$ the number of subsets, and $V$ the set of all nodes.
There are several methods to measure the inter-distance between clusterings, with varying properties, such as the Rand Index \cite{Rand1971}, the Adjusted Rand Index \cite{Hubert1985}, the Normalized Mutual Information \cite{Danon2005}, the van Dongen measure \cite{Dongen2000} and others. A thorough discussion of the major methods can be found in \cite{Vinh2010, Wagner2007, Meila2007}. We chose the Variation of Information $(\mathit{VI})$ \cite{Meila2007} to measure the information distance between samples and thus evaluate the change a clustered network experiences as a function of time. In a nutshell, $\mathit{VI}$ measures the amount of information gained or lost on every new network observation. If no changes in the clusters are observed, then there is no variation of information. As clusterings shift from one another, $\mathit{VI}$ increases. This is easy to visualize when considering the so-called confusion matrix \cite{Stehman1997} between clusterings at successive observations. This matrix describes the node spread, where each element represents the number of nodes moving from one cluster to another. If clusters are unchanged and keep their node affiliation, the confusion matrix will be a monomial matrix, $\mathit{VI}=0$, and we know exactly where each node ends up. But as the number of non-zero entries in the confusion matrix increases and their distribution tends to uniform, the uncertainty about each node's destination also increases. Consider as an example a cluster that splits in half versus another that sheds a single node. There is a higher uncertainty about each node's final destination in the former than in the latter. In simple terms, $\mathit{VI}$ measures this uncertainty. $\mathit{VI}$ has been applied in multiple contexts, for example to address the problem of clustering news published by online newspapers \cite{Rodrigues2013}.
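The split-versus-shed intuition can be checked with a short numerical sketch. It is illustrative only: it assumes clusterings represented as plain Python sets, not the processing pipeline used in this study. The conditional entropy of the confusion matrix measures the uncertainty about a node's destination, and $\mathit{VI}$ is the symmetric sum of the two conditional entropies, so the same ordering holds for the full metric.

```python
import math

def confusion(X, Y):
    """Confusion matrix: entry (i, j) counts the nodes shared by
    cluster i of X and cluster j of Y."""
    return [[len(x & y) for y in Y] for x in X]

def dest_uncertainty(M):
    """Conditional entropy H(Y|X) in bits: uncertainty about where a node
    of a source cluster ends up. Zero iff the matrix is monomial."""
    n = sum(map(sum, M))
    h = 0.0
    for row in M:
        total = sum(row)
        for m in row:
            if m:
                h -= (m / n) * math.log2(m / total)
    return h

nodes = set(range(8))
halved = confusion([nodes], [set(range(4)), set(range(4, 8))])  # split in half
shed = confusion([nodes], [set(range(7)), {7}])                 # shed one node

print(dest_uncertainty(halved))  # 1.0 bit: a fair coin per node
print(dest_uncertainty(shed))    # ~0.54 bits: far less uncertain
```

An unchanged clustering yields a monomial matrix and zero uncertainty, matching the $\mathit{VI}=0$ case described above.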
A practical illustration of how to compute $\mathit{VI}$ can be seen in tables \ref{tab:table1} and \ref{tab:table2} in the appendix. We have selected $\mathit{VI}$ as it is a true metric, respecting the triangle inequality, meaning that no indirect path is shorter than a direct one. This is important in analyzing the rate of change at multiple scales, avoiding the unreasonable possibility of having a greater rate of change for a given time interval when sampling the network at a lower rate. $\mathit{VI}$ also increases when fragmentation and merges occur in larger clusters, which intuitively relates to playing dynamics, given the rise in degrees of freedom experienced in larger groups of interacting players. Fundamentally, although in this article we consider $\mathit{VI}$ as a proxy for game dynamics, $\mathit{VI}$ itself is not a quantification of informational meaning or semantics, but simply a quantification of informational variation, or as Shannon puts it, ``semantic aspects of communication are irrelevant to the engineering problem'' \cite[p.1]{C.E.Shannonvol.27pp.JulyOctober1948}. In this article we consider a split of $\mathit{VI}$ into two terms. A clustering has a signature in the (multi)set of its clusters' sizes. We call it a formation, as it vaguely captures the popular notion of team match formation in soccer, although these concepts do not overlap. A formation, using the previous notation, is defined as: \begin{equation} \label{eq:2} F = \{\vert c_1\vert, \cdots, \vert c_k\vert \}: \sum_{i=1}^{k}\vert c_i\vert =\vert V \vert \end{equation} Using this construct we split $\mathit{VI}$ into two terms: \begin{enumerate} \item $\mathit{VI}_f$, which is the minimum amount of inherent change resulting from the evolving formation as described above, and \item the compositional $\mathit{VI}_c$ computed as $\mathit{VI}_c = \mathit{VI} - \mathit{VI}_f$, which is the additional information distance accrued on top of the minimum implied by the evolving formation.
\end{enumerate} To understand these constructs consider that for two consecutive clusterings to show a null $\mathit{VI}$ it is necessary, but not sufficient, that their formations are equal. In fact, the formations can be equal (which implies that $\mathit{VI}_f=0$), but the clusterings' transition can still show a positive $\mathit{VI}$, due to compositional changes (in which case $\mathit{VI}=\mathit{VI}_c>0$). Consider, as illustration, a clustering made up of $n$ clusters. For simplicity, consider they are all of the same size $s$, or formally $C^t = \{c^t_1, \cdots, c^t_{n}\} \land \vert c^t_k \vert = s$. Its formation is $F_{c^t}=\{s^n\}$. Comparing with another clustering $C^{t+\delta}$, also with $F_{c^{t+\delta}}=\{s^n\}$, we have: \begin{equation*} \begin{cases} \textit{VI} = 0 \iff \forall i \in \{1 \cdots n\} \; \exists! \; j \in \{1 \cdots n\} \; \mid c^t_i = c^{t+\delta}_j \\ \textit{VI} > 0, \; \textrm{otherwise} \end{cases} \end{equation*} Another example can be found in figure \ref{fig:7} in the appendix. There we can observe a transition from a formation $\{2^4,3^4,4\}$ in (a) to $\{2^6,3,4,5\}$ in (b). As these formations are not equal, $\mathit{VI}>0$; however, it is not the minimum for this transition. We can see that there is additional entropy, for instance in the restructuring of the 4-node simplex from players \{12, 20, 22, 21\} to \{9, 10, 14, 21\}, which the simple changes in formation would not necessarily require. We consider the usefulness of such a split analysis, guided by the intuition that the interplay of strategy, play patterns, set pieces, and individual player initiative \cite{Araujo2016, Ribeiro2019a} may drive $\mathit{VI_f}$ and $\mathit{VI_c}$ differently. Depending on the represented system, these two components can have different meanings. This is an open issue that we briefly touch upon but that deserves further research.
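The formation signature, and the fact that equal formations force $\mathit{VI}_f=0$ while compositional differences can still leave $\mathit{VI}=\mathit{VI}_c>0$, can be illustrated with a minimal sketch. The set-of-frozensets representation is an assumption for illustration, not the production data model:

```python
from collections import Counter

def formation(clustering):
    """Formation of a clustering: the multiset of its cluster sizes.
    E.g. Counter({2: 4, 3: 4, 4: 1}) encodes the formation {2^4, 3^4, 4}."""
    return Counter(len(c) for c in clustering)

A = [frozenset({1, 2}), frozenset({3, 4})]
B = [frozenset({1, 3}), frozenset({2, 4})]

assert formation(A) == formation(B)  # identical formations: VI_f = 0
assert set(A) != set(B)              # different compositions: VI = VI_c > 0
```

Here the transition from $A$ to $B$ carries a purely compositional information distance, the situation described in the text.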
While calculating the total $\mathit{VI}$ is computationally trivial if the network partition into clusters is known, finding $\mathit{VI_f}$ is not routinely tractable, as we need to find the minimum node change for the formation transition, an NP-hard problem, meaning that it is at least as hard to solve exactly as any problem solvable in non-deterministic polynomial time. We employ a heuristic developed previously to approximate it efficiently~\cite{Pereira2019a}. In the remainder of this document, we discuss the correlation of $\mathit{VI}$ and playing dynamics in section \ref{methods}, followed by section \ref{results}, describing the results obtained. We discuss these results in section \ref{discussion} and we conclude with directions for future research in section \ref{conclusion}. The main research question is whether $\mathit{VI}$ can be a faithful proxy for game dynamics, and we expect to confirm a strong correlation. We are also interested in how $\mathit{VI_f}$ and $\mathit{VI_c}$ contribute to total $\mathit{VI}$ and how they relate to game tactics and play development. \section{Methods} \label{methods} The proposed approach is applied to the analysis of a set of 9 soccer matches from the 2010-11 season of the English Premier League. Based on an information stream collected from real-time pitch-located raw video feed, each match is modeled as a high-resolution (10Hz) temporal hypernetwork with simplices as clusters~\cite{Ramos2017a, Ramos2017}, parsed by player proximity. The whole network is made up of up to 30 nodes (28 players and 2 football goals) of which only a maximum of 24 are present on the pitch at any given moment (11 players from each team and 2 goals). These nodes are clustered into a variable number of simplices, 10 times a second, based on the location data. The method used for clustering guarantees that a node and its closest node belong to the same simplex.
This implies that the smallest simplex has a minimum of 2 nodes, i.e., there are no isolated nodes. Although there may be occasions where a player is side-lined, this will be an exception, as the expectation at the top level of sports performance is that every single player has an active role in play, in relation to their teammates and their opponents. Although the football goals are obviously fixed on the pitch, there is no fixed frame of reference for the clustering process, and the relation between players and football goals, especially with the goalkeeper, is of particular importance, which justifies their inclusion. On average, considering a match, including extra time, we observed and measured the network $\approx$ 60,000 times. Each of these 60,000 samples is a clustering of the network. The measure used, $\mathit{VI}$, is a function that takes two clusterings as parameters and returns the information distance between the clusterings. $\mathit{VI}$ is computed as: \begin{equation} \label{eq:3} VI(X;Y) = - \sum_{i=1}^k\sum_{j=1}^lr_{ij}[\log_2(\frac{r_{ij}}{p_i})+\log_2(\frac{r_{ij}}{q_j})] \end{equation} where $X=\{x_1,\cdots, x_k\}$ and $Y=\{y_1,\cdots, y_l\}$ are clusterings of a given set $S$, with $n=\vert S \vert$, $k=\vert X \vert$, $l=\vert Y \vert$, $r_{ij}=\frac{\vert x_i \cap y_j \vert}{n}$, $p_i=\frac{\vert x_i \vert}{n}$ and $q_j=\frac{\vert y_j \vert}{n}$. From this equation it is easy to see that when the simplices in $X$ and $Y$ are the same, the result is zero, as $r_{ij}=p_i=q_j$. For empty intersections of pairwise simplices, $r_{ij}=0$, and although $\log(0)$ is not defined, applying l'H\^opital's rule we get a null contribution from these intersections to the overall $\mathit{VI}$. In summary, only pairwise non-disjoint, non-identical clusters contribute to the information distance.
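Equation \ref{eq:3} translates directly into code. The sketch below assumes clusterings given as collections of node sets (an illustrative representation, not the implementation used for the matches) and skips empty intersections, matching the null-contribution argument above:

```python
import math

def vi(X, Y):
    """Variation of Information (eq. 3) between two clusterings of the
    same node set, given as iterables of sets. Result in bits."""
    n = sum(len(c) for c in X)
    assert n == sum(len(c) for c in Y), "clusterings must cover the same nodes"
    d = 0.0
    for x in X:
        for y in Y:
            r = len(x & y) / n
            if r == 0.0:
                continue  # empty intersections contribute nothing
            p, q = len(x) / n, len(y) / n
            d -= r * (math.log2(r / p) + math.log2(r / q))
    return d

P = [{1, 2}, {3, 4}]
assert vi(P, P) == 0.0                             # identical clusterings
assert abs(vi(P, [{1, 3}, {2, 4}]) - 2.0) < 1e-12  # full reshuffle of 4 nodes
```

Since the function mirrors the symmetric formula, `vi(X, Y)` and `vi(Y, X)` agree, as required of a metric.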
$\mathit{VI}$ works as a distance metric for clusterings of the same set of nodes. In the model used to represent the soccer match, the set of nodes remains constant, except on substitutions and send-offs. However, the number of observations affected by these events is so low that we have ignored their contribution in the model. Using base 2 logarithms, $\mathit{VI}$ is measured in bits and describes the balance of information needed to determine one clustering from another. $\mathit{VI}$ is algorithmically simple (it can be computed in $\mathcal{O}(n + kl)$) and, as mentioned in section \ref{introduction}, it is a true metric \cite{Kraskov2005}, respecting positivity, symmetry, and the triangle inequality. Using the previous notation, for every individual player $p_{ij} \in \{x_i \cap y_j \}$ his contribution to the overall $\mathit{VI}$ is computed as: \begin{equation} \label{eq:4} \mathit{VI}^{p_{ij}} = -r_{ij}\frac{[\log_2(\frac{r_{ij}}{p_i})+\log_2(\frac{r_{ij}}{q_j})]}{\vert x_i \cap y_j\vert} \end{equation} which takes the contribution of pairwise simplices $x_i, y_j$ to the overall $\mathit{VI}$, and divides it in equal parts among all players $\in x_i \cap y_j$. Note that, in the particular case of the network that we built, all nodes/players are present in all observations and are members of one and only one simplex in any one observation. Equation \ref{eq:4} registers the contributions of players involved in their simplices when these change. The only exception is the case of a send-off or substitution, in which case the player no longer contributes to the dynamics of the match. The $\mathit{VI}$ of two clusterings ($X, Y$) of $S$ can only be zero if the clusterings are identical, i.e., every cluster of $X$ is also a cluster of $Y$ and vice versa. If this condition is not met then $\min(\mathit{VI}) \geq \frac{2}{n}$ \cite{Meila2007}, with $n = \vert S \vert$ still using the same notation.
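Equation \ref{eq:4} can be sketched the same way: each nonzero cell of the confusion matrix is split equally among the players in its intersection, so the per-player shares always sum back to the total $\mathit{VI}$. The set-based representation is again an illustrative assumption:

```python
import math

def player_vi(X, Y):
    """Per-player VI attribution (eq. 4): the contribution of each nonzero
    confusion cell, divided equally among the players it contains."""
    n = sum(len(c) for c in X)
    share = {p: 0.0 for c in X for p in c}
    for x in X:
        for y in Y:
            inter = x & y
            if not inter:
                continue
            r, p, q = len(inter) / n, len(x) / n, len(y) / n
            cell = -r * (math.log2(r / p) + math.log2(r / q))
            for player in inter:
                share[player] += cell / len(inter)
    return share

X = [{1, 2}, {3, 4, 5, 6}]
Y = [{1, 2}, {3, 4}, {5, 6}]
s = player_vi(X, Y)
assert s[1] == 0.0                            # untouched simplex: no contribution
assert abs(sum(s.values()) - 2 / 3) < 1e-12   # shares sum to the total VI
```

In the toy transition above, only the players of the splitting four-node simplex accumulate a share, which is the behavior the text describes for equation \ref{eq:4}.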
In the soccer match representation proposed in this article the number of nodes is fixed at 24 (barring any red cards), and thus $\min(\mathit{VI})=\frac{2}{24}=\frac{1}{12}$ every time there are any clustering changes. This is also $\min(\mathit{VI_f})$ under those conditions. $\mathit{VI}$ depends on the level of fragmentation on the pitch across observations, which intuitively reflects the situation of players jostling for position, but cannot exceed $\log_2(n)$ \cite{Meila2007}. These extreme values of $\mathit{VI}$ are, however, just boundaries that limit minima and maxima given any set of clusterings. In the present case, we have a minimum of 2 nodes per cluster, which implies a maximum of 12 clusters, resulting in $\max(\mathit{VI})=\log_2(12)=3.585$, which is attained when a clustering with a single cluster splits into 12 clusters with two nodes each, or vice-versa. In practice, the maximum $\mathit{VI}$ registered is substantially lower, with typical observed values of $\max(\mathit{VI}) \approx 1.2$, corresponding to the maximum distance between clusterings with $0.1s$ separation, or $\mathit{\dot{VI}} \approx 12$ bps $(\mathit{bits\cdot s^{-1}})$. As mentioned previously, the data used for this article were captured 10 times a second. A significant amount of sparsity, i.e. a large number of transitions without clustering changes, is observed at this frequency. This poses the question of the ideal sampling rate \cite{Moura2013}, given the dynamics of a soccer game, the capturing technology and the clustering methodology. The observed sparsity led us to adopt a set of measures in the results section ahead, to enhance analysis and observability. These included: \begin{itemize} \item the usage of differentials and measuring change in bps, denoted as $\mathit{\dot{VI}}$; \item the use of rolling averages for visualization and compatibility with the rate of change and play of a soccer match.
Results shown use 4s sample windows, except when noted; \item and, finally, we made use of cubic Hermite splines \cite{Neuman1978} to envelop $\mathit{\dot{VI}}$ maxima. Results use an inter-pivot distance that dynamically varies up to a maximum of 80s depending on the position of the observed value in the probability density function of $\mathit{\dot{VI}}$ (figure \ref{fig:0}). \end{itemize} \begin{figure*}[!ht] \centering \includegraphics[scale=0.90, trim= 4 4 4 4]{PdfCdf.png} \caption{Probability Density Function ($f(\dot{\mathit{VI}})$) and Cumulative Distribution Function ($F(\dot{\mathit{VI}})$) for all nine matches measured on a 4s moving average window. Games are color-coded. There is a consistency of patterns that likely mirrors energy expenditure and management throughout the game \cite{Osgnach2010}.} \label{fig:0} \end{figure*} \section{Findings} \label{results} Given that the space of all clusterings is substantial, corresponding to a lattice of over $4.4\times 10^{17}$ points (Bell number $B_{24}$), the number of unique clusterings we can observe is just a small fraction of this space, gated by the total number of samples collected (average $58283$, $\sigma=1336$). Assuming a random distribution, the probability of observing the same clustering, that is the same sets of simplices, is for all practical purposes nil when considering the space size. Obviously the real distribution is not random and is heavily conditioned by its prior state. But, when excluding consecutive observations, a significant level of clustering re-appearances still emerges (average $6.4\%$, $\sigma=0.5\%$), which, intuitively, can be interpreted as the influence of strategic design over match playing patterns \cite{Ramos2020}.
Having analyzed nine soccer matches of the 2010-11 season of the English Premier League at 10Hz, on a 40-sample moving average window (4s), we found that the average $\mathit{\dot{VI}}$ and the standard deviation for the whole match are consistent across matches, with a total average of $0.597$ bps, $\sigma=0.0369$. Considering that a typical player spends on average over half of his time standing or walking and only sprints ($>8.3\,m\cdot s^{-1}$) 1.4\% of the time \cite{Ferro2014}, 10Hz is a sampling frequency that often generates no clustering changes in consecutive samples. In fact, in almost 80\% of the network observations clusterings do not change. The per-match standard deviation of $\mathit{\dot{VI}}$ averages 1.30 bps, with a maximum of 1.37 and a minimum of 1.25 bps across all nine matches. A full report for all matches can be found in table \ref{tab:table3}. The dispersion of $\mathit{\dot{VI}}$, as measured by the coefficient of variation of all match observations, averages $218\%$, a reflection of the high activity level of the soccer game. We found no correlation among the time-ordered sets of $\mathit{VI}$ observations of the matches we have analysed. However, a similar $\mathit{\dot{VI}}$ average and dispersion is observed across matches. The probability density functions for all nine matches, which can be seen in figure \ref{fig:0}, are strikingly similar. There is a clear consistency of dynamics as measured by $\mathit{\dot{VI}}$, in which matches exhibit similar probabilities of finding given levels of dynamics. An explanation is players' regulation of exertion during the match to manage fatigue, particularly at the high intensity at which professional matches are played \cite{Sarmento2018}. \begin{figure*}[hbtp] \centering \includegraphics[scale=0.83, trim= 4 4 4 4]{LinReg.png} \caption{Linear regression of $\mathit{\dot{VI}}$ for all nine matches. Games are color-coded.
Only one match (purple line) has an average positive gradient.} \label{fig:1} \end{figure*} In 8 out of the 9 matches we examined, we observe a descending slope when the time-ordered $\mathit{\dot{VI}}$ set is linearly regressed, as seen in figure \ref{fig:1} ($p=0.0012$, $H_0:$ normal null average distribution, one-tailed). It is not a very pronounced slope. Two interpretations for this observation are increased fatigue as the match progresses on one hand, and adjusted tactics as a result of increased acquaintance with competitor behavior on the other. Similar observations have been previously reported \cite{Rampinini2009}. However, it is important to note that the same team plays in every match. A larger sample of matches, from a wider population, may offer more consistency to this pattern. At a sampling rate of 10Hz, $\mathit{VI_f}$ is the major contributor to the total $\mathit{VI}$. Typically $\frac{\mathit{VI_f}}{\mathit{VI_c}}\approx 5$. However, decreasing the sampling rate has a dramatic effect on this ratio. For example, sampling every second changes that ratio to $\frac{\mathit{VI_f}}{\mathit{VI_c}}\approx 1.7$. This could intuitively be expected. Formations and clusterings take time to evolve, but the former has a much more restricted space. The number of possible formations is given by the integer partition function, which gives $P_{24}=1575$, reducing to $320$ when considering that formations with isolated players are not allowed, while the space of clusterings, as referred to above, is given by the Bell number $B_{24} \approx 4.4\times 10^{17}$, which only reduces to $\approx 4.0 \times 10^{16}$ when excluding clusterings with singleton clusters. In practice we observed an average of $11070$, $\sigma=678$, unique clusterings, but only an average of $193$, $\sigma=29$, full formations per match (i.e., with 22 players on the pitch; red cards impact these results as they reduce the number of players, preventing clusterings from reappearing).
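The configuration-space sizes quoted above can be checked with standard recurrences. The sketch below is a verification aid only, using the block-containing-the-last-element decomposition for the (restricted) Bell numbers and a bounded-part recursion for integer partitions:

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def bell(n):
    """Bell number: set partitions of an n-element set."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

@lru_cache(None)
def bell_no_singletons(n):
    """Set partitions with every block of size >= 2 (no isolated nodes)."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    # the block containing the last element has size k >= 2
    return sum(comb(n - 1, k - 1) * bell_no_singletons(n - k)
               for k in range(2, n + 1))

def partitions(n, min_part=1):
    """Integer partitions of n with all parts >= min_part."""
    @lru_cache(None)
    def p(rest, largest):
        if rest == 0:
            return 1
        return sum(p(rest - k, k)
                   for k in range(min_part, min(rest, largest) + 1))
    return p(n, n)

print(partitions(24))              # 1575 formations, unconstrained
print(partitions(24, min_part=2))  # 320 without isolated players
print(bell(24))                    # ~4.4e17 clusterings
print(bell_no_singletons(24))      # ~4.0e16 without singleton clusters
```

The restriction to parts of size at least 2 removes exactly the formations with isolated players, which is why the count drops from 1575 to 320.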
Although $\mathit{VI_f}$ far outweighs $\mathit{VI_c}$ in its contribution to the information distance, the difference in maximum scores is much less dramatic, which points to contributions that are less frequent but equally impactful at certain moments of game play. \begin{figure*}[!hbtp] \centering \subfloat[][4s moving average window]{ \includegraphics[scale=0.40]{VifVic1.png} \label{sfig:21}} \subfloat[][No rolling average]{ \includegraphics[scale=0.4]{VifVic2.png} \label{sfig:22}} \caption{On a moving average with sample window of $4s$, $\dot{\mathit{VI}}_{f}$ has a $\approx 5$ times heavier influence on total $\dot{\mathit{VI}}$ than $\dot{\mathit{VI}}_{c}$ when sampled at 10Hz (\ref{sfig:21}). However, when looking at individual sample maxima, that difference almost disappears (\ref{sfig:22}). If we equate $\dot{\mathit{VI}}$ to energy expenditure, we can interpret this as energy management by the individual players, being judicious about their marking and unmarking efforts.} \label{fig:2} \end{figure*} This can be seen when comparing the envelope splines for the same match with and without a moving average ($4s$) at $80s$ pivot separation (see figure \ref{fig:2}). This trend can also be seen in the average of the coefficient of variation for $\dot{\mathit{VI}}_{c}$ and $\dot{\mathit{VI}}_{f}$, respectively $547\%$ and $227\%$. The impact of the sampling rate is sizable, and further exploration of the significance of $\mathit{VI_f}$ and $\mathit{VI_c}$ in the context of a soccer match warrants a deeper analysis of the interaction of the sampling rate, the game dynamics and the resulting $\mathit{VI}$. To validate $\mathit{VI}$ as an indicator of game dynamics, we searched for correlations between known moments of intensive player repositioning and peaks in the information distance.
To identify those moments in our datasets we made use of publicly available match commentary, as visual information was not available to us \begin{figure}[!h] \begin{center} \includegraphics[scale=0.50, trim={0, 0.2cm, 0, 0.5cm}]{CornerVI.png} \end{center} \caption{Conditional probability of having a peak ($\mathit{\ddot{VI}}=0$) if a corner is taken, $P(peak\vert corner)$, with peaks taken from the cubic Hermite spline using the inter-pivot distance to control the number of peaks obtained. Analysis performed with $\approx 24 ([23..26])$ peaks per match. Increasing the number of peaks increases $P(peak \cap corner)$, but $P(peak)$ as well.} \label{fig:6} \end{figure} for most matches. The time accuracy of these commentaries is restricted to a resolution of 60 seconds, leading to a potential error of $\pm 300$ observations, discounting other timing commentary errors. This mismatch between commentary and sampling resolution was addressed as described ahead. We collected timed tags for goals, red cards, corners and substitutions, among others. Out of these, only corners are intuitively associated with quick player re-positioning, which justifies their selection for analysis. It should be noted that there is no special reason to select corners except for the observation that if $\mathit{VI}$, as used in this article, is a good measure for game dynamics, then we should expect peaks when corners are taken, making their time correlation useful for validating the hypothesis that $\mathit{VI}$ is a good proxy for playing dynamics. In the 9 matches, we observed an average of $10$ corners per match, $\sigma=2.5$. To evaluate the performance of $\dot{\mathit{VI}}$ as a measure of game dynamics, knowing that corners should rate as moments of highly changeable player positioning, we computed the conditional probability $P(peak\mid corner)$ of observing a peak every time a corner is taken.
As match commentary resolution is 1 minute, $P(peak \cap corner)$ was measured at the real peak $\pm 30s$. This is contrasted with the probability of finding a random peak under the same conditions (see figure \ref{fig:6}). This analysis is done per match, as this probability is dependent on the number of peaks observed in the match and their time distribution. At $\pm 30s$ overlaps can occur, as successive corners are not infrequent. Peaks were collected from the Hermite splines, with inter-pivot distance adjusted to generate $\approx 20$ peaks. With one-minute resolution this still covers, assuming no peak overlaps, a little over 20\% of the whole match, which is confirmed by the results obtained for the probability of finding a peak at random. Using total $\mathit{VI}$, we were able to recognize 72.4\% of all corners. These results shore up the compelling observations recovered from the match $\dot{\mathit{VI}}$ graphs. $\mathit{VI}$, as used in this study, is clearly a proxy for game dynamics, understood as a rapid pace of inter-player relative displacement, i.e. without a fixed frame of reference. This is notably obvious during set pieces. Corners and free kicks invariably generate a spike in $\mathit{VI}$, especially supported by $\mathit{VI_f}$, which could indicate the execution of set routines. Conversely, other events, like substitutions or send-offs, generate pauses that are captured by a drop in $\mathit{VI}$. Examples can be seen in figure \ref{fig:3}, where $\mathit{VI}$ is plotted for a whole match, with vertical bars indicating the type and time of events. To analyse player contribution to the overall $\mathit{VI}$, we apply equation \ref{eq:4}. We consider the player's individual $\mathit{\dot{VI}}$, and his overall activity compared to the average $\mathit{\dot{VI}}$ per player. This may be useful to assess his activity during the match (figure \ref{fig:8}).
\begin{figure*} \centering \subfloat[][Match 1, 0-0]{ \includegraphics[scale=0.12]{TypGame1.png} \label{sfig:31}} \\ \subfloat[][Match 2, 2-2]{ \includegraphics[scale=0.12]{TypGame2.png} \label{sfig:32}} \\ \subfloat{ \includegraphics[scale=0.8]{Legendv2.png}} \caption{Plots for two matches where orange and purple points are, respectively, observations of $\dot{\mathit{VI}}_{f}$ and $\dot{\mathit{VI}}_{c}$ at each sample transition, and the colored lines the respective peak envelope. $\dot{\mathit{VI}}_{f}$ and $\dot{\mathit{VI}}_{c}$ seem to be heavily correlated with match events, such as corners, where a high level of player repositioning is expected, and player substitutions, usually associated with a trough in $\dot{\mathit{VI}}$. It is also visible at minute 92 in figure \ref{sfig:31} that the match virtually ``stopped'' during the send-off of two players from opposing teams.} \label{fig:3} \end{figure*} \begin{figure*}[h] \centering \includegraphics[scale=0.10]{player22b.png} \caption{$\mathit{\dot{VI}}$ for a single player, in a single match, with maxima envelope. His total $\mathit{\dot{VI}}$ is compared against the match average for the whole match on the red bar on the right-hand side of this plot. In this case, a center forward player is represented, showing a lower-than-average $\mathit{\dot{VI}}$, which may be expected, because a forward is typically less active than the other players during his team's defensive sub-phases of the match.} \label{fig:8} \end{figure*} We also introduce the concept of a simplex transition, a tuple of simplices $(c_i^{t}, c_j^{t+\delta})$ such that $(c_i^t \cap c_j^{t+\delta} \neq \varnothing) \land (c_i^t \neq c_j^{t+\delta})$, that, at successive observations, always involves the same players. \begin{figure*} \centering \includegraphics[scale=0.10]{vi_22.png} \caption{This chart shows the top ten simplex transitions player 22 of match 1 (figure \ref{sfig:31}) was involved in, as well as their formation.
His contribution to the match $\mathit{VI}$ resulting from participating in these simplex transitions is proportionally encoded in the area of the circle: a larger circle signifies a higher contribution. Each formation is coded in color and shade, with green and blue representing, respectively, home and visitor players, and the number of shades the number of participating players in the simplex. Each tick signals a transition and the match moment when it occurred, with a full match taking a full circle. The lower and upper semicircles describe, respectively, the formation of the prior (source) and immediately subsequent (destination) simplices, where the player was involved. Finally, simplices are identified by the participating players' numbers, with home players first, followed by visitors. Player 22 is a visiting forward, and as seen in the picture, is frequently observed alone (the single shade of blue in the semicircles) in a simplex with opposing back player(s), a typical pattern. Transition from formation $3-22$ to $3,12-22$, when home player 12 joins the simplex, has the highest accumulated $\mathit{VI}$ contribution from player 22. It occurs throughout the match, but with an emphasis on the first half of the opening 45 minutes. Player 22 is supported by a teammate in only two transitions out of the 10 represented. } \label{fig:4} \end{figure*} We visualize the type of transition, color coded to denote the number of home and visiting players involved. Each simplex transition plot is scaled by the overall $\mathit{VI}$ contribution for that set of transitions, and details when those transitions occurred (see figure \ref{fig:9}). Plotted in reference to a single player, the major simplex transitions he was involved in build a full view of the player activity during the match. This is depicted in figure \ref{fig:4}, where the visual representation of this view is detailed. An aggregation of all simplex transition charts provides a full view of a complete match. 
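The per-observation extraction behind these charts can be sketched in a few lines; the simplex identifiers and player labels below are purely illustrative, and the match-long aggregation of transitions involving the same player sets is omitted.

```python
def simplex_transitions(clustering_t, clustering_t1):
    """List pairs of simplices (c_i at t, c_j at t+delta) that share at least
    one player and are not identical, together with the shared players."""
    transitions = []
    for ci, players_i in clustering_t.items():
        for cj, players_j in clustering_t1.items():
            shared = players_i & players_j
            if shared and players_i != players_j:
                transitions.append((ci, cj, shared))
    return transitions

# toy example: home players h3, h5, h12 and visiting forward v22
t0 = {"A": frozenset({"h3", "v22"}), "B": frozenset({"h5", "h12"})}
t1 = {"C": frozenset({"h3", "h12", "v22"}), "D": frozenset({"h5"})}
for ci, cj, shared in simplex_transitions(t0, t1):
    print(ci, "->", cj, sorted(shared))
```

In this toy example, home player h12 joining the simplex of h3 and v22 produces a transition analogous to the $3-22 \rightarrow 3,12-22$ pattern discussed in figure \ref{fig:4}.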
\vspace{-2.5mm} \begin{figure*}[t] \centering \includegraphics[scale=0.10]{VI_allplayers.png} \caption{This chart uses the same symbolic elements as figure \ref{fig:4} but operates at a different level. Each circle represents the overall contribution to the match $\mathit{VI}$ of a whole transition and not just the player's contribution. Here we represent a match's top ten transitions. The encoded information in this and in figure \ref{fig:4} can be useful to study and train high frequency transitions that contribute significantly to playing dynamics.} \label{fig:9} \end{figure*} \section{Discussion} \label{discussion} In this section we discuss the relevance, principles, relationships and generalizations that can be derived from the results presented above. We cover eight major findings informed by expertise about the soccer game. \subsection{Information distance time series} $\mathit{\dot{VI}}$ is highly variable throughout a match. Even with a 4-second moving average sample we found an average $\mathit{\dot{VI}}$ coefficient of variation of $\approx 218\%$ across all nine games. As expected, we found no significant correlation among $\mathit{\dot{VI}}$ time series across matches, as every match is different. Although these findings confirm empirical expectations from a typical soccer match, they provide further evidence that $\mathit{\dot{VI}}$ reflects the game dynamics. \subsection{Information distance variability} When comparing different matches, we found consistent $\mathit{\dot{VI}}$ averages, with a coefficient of variation of the averages of $\approx 5\%$. The probability density function of a match's $\mathit{\dot{VI}}$ measurements is highly consistent across matches, as seen in figure \ref{fig:0}. We did not find matches where $\mathit{\dot{VI}}$ is consistently high or consistently low. 
All matches are official English Premier League games, usually played at a similar competitive level, so these results are not surprising if $\mathit{\dot{VI}}$ is indeed a proxy for game dynamics. \subsection{Information distance match trend} We observe a general decreasing trend in $\mathit{\dot{VI}}$ as the matches progress. When linearly regressed, eight out of nine games exhibit this trend. Player fatigue and inter-team tactical adjustments may be determining factors, although the evolving match score and its significance in the context of each team's general endeavors may play a role as well. \subsection{Information distance and event correlation} There is evidence that the peaks and troughs observed in the values of $\mathit{\dot{VI}}$ correlate with known events, such as corners, free kicks or substitutions, that similarly affect the game dynamics. \subsection{Match sampling frequency} Sampling at 10Hz generates $\approx 80\%$ of null $\mathit{\dot{VI}}$ measurements. A lower sampling rate may produce similar outcomes, resulting in a more efficient data capture and computing process. However, not all events develop on the same time scale, and further analysis would be required to fine tune the sampling rate to the specific analysis sought. \subsection{Meso-patterns distribution} We found that clusterings reappear throughout the matches with a probability ($0.064, \sigma = 0.005\%$) much higher than what would be expected by chance ($1.46\times 10^{-12}$). This can be interpreted as player dynamic placement on the pitch according to a game plan design. \subsection{$\mathit{\dot{VI}}$ components} We found that, at 10Hz, average $\dot{\mathit{VI}}_{f}$ is the main driver of total $\mathit{\dot{VI}}$, meaning that when clusterings change, players end up in the clusters that frequently minimize the information distance. However, if we inspect the maxima of these two components, we find that player repositioning within the clustering, i.e. 
$\dot{\mathit{VI}}_{c}$, sometimes contributes as much as $\dot{\mathit{VI}}_{f}$ to total $\mathit{\dot{VI}}$. A hypothesis to justify this observation is that players are judicious with their energy expenditure, while individual initiative can heavily impact game dynamics. \subsection{Multi-layer analysis} The proposed way of measuring the soccer game enables a multi-layer decomposition of its dynamics from macro level (a full match) to meso (clusters of players, transitions and teams), to micro (individual players), as exemplified by the information presented, respectively, in figures \ref{fig:3}, \ref{fig:9}, and \ref{fig:8}. This enriches the information that can be extracted: it helps to evaluate the dynamics generated by individual players, but also the cluster changes experienced during a clustering transition, shedding light on which sets of players are more prevalent, how they change and how they impact the overall $\mathit{\dot{VI}}$. \section{Final Remarks} \label{conclusion} The presented results endorse the status of $\mathit{\dot{VI}}$ as a measure of game dynamics. The fact that it captures with accuracy and precision well known moments of players jostling for position, such as when corners are taken, supports this interpretation. With error-free and detailed metadata, a more accurate analysis would be possible, especially with concurrent visualization and representation. The present work is based on prior data, captured and clustered independently, that abstract the reality of a soccer match. Based on the promise shown here by the variation of information as an analysis tool, the proposed methods could be valuable to evaluate different approaches to data capture, such as sampling rates, as well as different clustering methods and game representations, such as overlapping, distance-weighted networks, non-inertial frames of reference that accommodate ancillary factors, centroid based clustering, among many others. 
Although the soccer game was the subject matter of this article, we believe the principles and approaches used extend to other socio-biological systems with structural competing interactions, of which those found in competitive team sports are an example. This is left for future research. \section*{Appendix} \label{appendix} To illustrate how $\dot{\mathit{VI}}$ is computed, consider the two moments of a fictional match represented in figure \ref{fig:7}. \begin{figure*}[!h] \centering \subfloat[][Clustering at time t]{ \includegraphics[scale=0.45]{Toy1.png} \label{sfig:71}} \\ \subfloat[][Clustering at time t+0.9s]{ \includegraphics[scale=0.45]{Toy2.png} \label{sfig:72}} \\ \caption{Clusterings for two moments of a fictional match separated by 900ms. Cluster 1 (goal and goalkeeper of the red team) and Cluster 9 (goal and goalkeeper of the yellow team) are not visible. The clustering process ensures that a node and its closest neighbor are nodes of the same simplex. Home players are numbered in red circles, visitors in yellow. Blue hexagons identify the simplices. White lines are only used to identify simplex membership. The formation for (a) is $\{2^4,3^4,4\}$ and for (b) is $\{2^6,3,4,5\}$, which correspond to the row and column sums of the matrix in table \ref{tab:table1}.} \label{fig:7} \end{figure*} The corresponding confusion matrix, which describes the transition of nodes between simplices when going from moment $t$ to $t+0.9$s during the match, is given in table \ref{tab:table1}. 
\begin{table*}[!h] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \midrule \multicolumn{1}{|l|}{\textbf{Simplex}} & \textbf{1} & \textbf{2} & \textbf{10} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{9} \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 3 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule 4 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 \\ \midrule 6 & 0 & 0 & 0 & 2 & 1 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 7 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 \\ \midrule 8 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 1 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ \bottomrule \end{tabular}% \caption{Confusion matrix going from t to t+0.9s} \label{tab:table1}% \end{table*}% Null matrix elements, as well as unchanged simplices (simplices 1, 2 and 9), do not contribute to informational distance. The contribution of the others is computed according to equation \ref{eq:4}. The result is shown in table \ref{tab:table2}, where the contribution from each simplex transition can be seen. 
\begin{table*}[!h] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \midrule \multicolumn{1}{|l|}{\textbf{Simplex}} & \textbf{1} & \textbf{2} & \textbf{10} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{9} \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 3 & 0 & 0 & 0.092121 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule 4 & 0 & 0 & 0.110161 & 0 & 0 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.05188 & 0 \\ \midrule 6 & 0 & 0 & 0 & 0.048747 & 0.107707 & 0 & 0 & 0 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 7 & 0 & 0 & 0 & 0 & 0.107707 & 0.048747 & 0 & 0 & 0 \\ \midrule 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0.05188 & 0.166667 & 0 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \bottomrule \end{tabular}% \caption{Computing $\mathit{VI}$} \label{tab:table2}% \end{table*}% The end result is $\mathit{VI}=0.785615$ or, given that we are measuring a 0.9s interval, $\dot{\mathit{VI}}=\frac{0.785615}{0.9}=0.872905$ bps. 
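This appendix computation can be reproduced mechanically. The sketch below applies the per-cell decomposition of the variation of information (in bits) to the normalised confusion matrix of table \ref{tab:table1}, using $N = 24$ nodes in total; its per-cell values coincide with table \ref{tab:table2}, and it recovers the totals above.

```python
import numpy as np

# Confusion matrix of table 1 (rows: simplices at t; columns: simplices at
# t+0.9s, in the order 1, 2, 10, 11, 12, 13, 14, 15, 9)
M = np.array([
    [2, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 2, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 3, 0, 0, 0, 0, 0, 0],
    [0, 0, 2, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 3, 0],
    [0, 0, 0, 2, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 2, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 3, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 2],
], dtype=float)

N = M.sum()                       # 24 nodes in total
P = M / N
row, col = P.sum(axis=1), P.sum(axis=0)

VI = 0.0
for i in range(9):
    for j in range(9):
        if P[i, j] > 0:
            # per-cell contribution; zero for unchanged simplices such as 1, 2, 9
            VI -= P[i, j] * (np.log2(P[i, j] / row[i]) + np.log2(P[i, j] / col[j]))

print(round(VI, 6), round(VI / 0.9, 6))  # 0.785615 and 0.872905 bps
```

Null matrix elements and unchanged simplices contribute nothing, exactly as in table \ref{tab:table2}.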
\begin{table*}[!h] \begin{tabular}{|c|c|r|r|r|r|r|r|r|r|r|} \toprule \multicolumn{2}{|c|}{Match} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{6} & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{9} \\ \midrule \multicolumn{2}{|c|}{Result} & \multicolumn{1}{c|}{0-0} & \multicolumn{1}{c|}{2-1} & \multicolumn{1}{c|}{2-2} & \multicolumn{1}{c|}{1-0} & \multicolumn{1}{c|}{3-0} & \multicolumn{1}{c|}{1-0} & \multicolumn{1}{c|}{0-1} & \multicolumn{1}{c|}{2-1} & \multicolumn{1}{c|}{1-0} \\ \midrule \rowcolor[rgb]{ .851, .882, .949} & Avg & 0.544 & 0.591 & 0.631 & 0.665 & 0.622 & 0.573 & 0.568 & 0.599 & 0.581 \\ \rowcolor[rgb]{ .851, .882, .949} & $\sigma$ & 1.255 & 1.278 & 1.346 & 1.369 & 1.330 & 1.276 & 1.273 & 1.292 & 1.282 \\ \rowcolor[rgb]{ .851, .882, .949}\multirow{-2}[2]{*}{$\dot{\mathit{VI}}_t$} & a & -4.6E-4 & -6.0E-4 & -2.9E-4 & -9.9E-4 & 1.4E-4 & -1.2E-3 & -8.7E-4 & -1.3E-3 & -4.7E-4 \\ \midrule \multirow{3}[2]{*}{$\dot{\mathit{VI}}_h$} & Avg & 0.277 & 0.290 & 0.329 & 0.330 & 0.314 & 0.284 & 0.301 & 0.302 & 0.292 \\ & $\sigma$ & 0.702 & 0.691 & 0.774 & 0.756 & 0.746 & 0.696 & 0.739 & 0.717 & 0.711 \\ & a & -6.2E-5 & -4.2E-4 & 2.4E-4 & -3.6E-4 & -9.4E-5 & -6.2E-4 & 4.4E-4 & -6.3E-4 & -2.9E-4 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} & Avg & 0.267 & 0.301 & 0.303 & 0.335 & 0.308 & 0.289 & 0.267 & 0.301 & 0.289 \\ \rowcolor[rgb]{ .851, .882, .949} & $\sigma$ & 0.677 & 0.715 & 0.718 & 0.769 & 0.734 & 0.712 & 0.673 & 0.719 & 0.709 \\ \rowcolor[rgb]{ .851, .882, .949} \multirow{-2}[2]{*}{$\dot{\mathit{VI}}_v$} & a & -4.0E-4 & -1.8E-4 & -5.3E-4 & -6.2E-4 & 2.4E-4 & -6.2E-4 & -1.3E-3 & -6.6E-4 & -1.8E-4 \\ \midrule \multirow{3}[2]{*}{$\dot{\mathit{VI}}_c$} & Avg & 0.083 & 0.102 & 0.102 & 0.112 & 0.105 & 0.095 & 0.095 & 0.104 & 0.097 \\ & $\sigma$ & 0.496 & 0.542 & 0.559 & 0.582 & 0.561 & 0.529 & 0.531 & 0.550 & 0.539 \\ & a & -1.5E-6 & -2.5E-4 & 
-1.7E-4 & -3.4E-4 & -1.3E-4 & -1.7E-4 & -3.2E-4 & -2.3E-4 & -2.8E-5 \\ \midrule \rowcolor[rgb]{ .851, .882, .949} & Avg & 0.461 & 0.489 & 0.529 & 0.553 & 0.518 & 0.477 & 0.473 & 0.495 & 0.483 \\ \rowcolor[rgb]{ .851, .882, .949} & $\sigma$ & 1.100 & 1.110 & 1.173 & 1.181 & 1.154 & 1.110 & 1.113 & 1.116 & 1.113 \\ \rowcolor[rgb]{ .851, .882, .949} \multirow{-2}[2]{*}{$\dot{\mathit{VI}}_f$} & a & -4.6E-4 & -3.5E-4 & -1.2E-4 & -6.5E-4 & 2.6E-4 & -1.1E-3 & -5.4E-4 & -1.1E-3 & -4.5E-4 \\ \bottomrule \end{tabular}% \caption{Average (avg), standard deviation ($\sigma$), and linear regression slope (a) for $\dot{\mathit{VI}}$ results (Total, Home, Visitor, Compositional and Formation) for the nine matches used in this article} \label{tab:table3}% \end{table*}% \newpage \section*{Declarations} \subsection*{Funding} \begin{samepage} This project was partly supported by Fundação para a Ciência e Tecnologia through project UID/ Multi/ 04466/ 2019. R. J. Lopes was partly supported by the Fundação para a Ciência e Tecnologia, under Grant UID/50008/2020 to Instituto de Telecomunicações. D. Araújo was partly funded by Fundação para a Ciência e Tecnologia, grant number UIDB/00447/2020 attributed to CIPER – Centro Interdisciplinar para o Estudo da Performance Humana (unit 447). \end{samepage}
\section{Introduction} In this paper, we propose a simple analysis of a $D/GI/1$ vacation system with impatient customers. Such models may be used to describe many data switching systems in which data transmission must be completed within a very short time. Such systems may have to support periodic arrival streams and might also execute other tasks that come from other queues. Thus, while executing these secondary tasks, the server may be regarded as being on vacation. Some investigations have been published regarding the performance of vacation queueing systems with impatient customers. In the literature, such queueing systems are also referred to as systems with limited waiting time or limited sojourn time. Most of these works focus on $M/G/1$ queueing models with a general vacation distribution and a constant deadline. The study of these queueing systems was initiated by \cite{vanderDuynSchouten78}. The author derives the joint stationary distribution of the workload and the state of the server (available for service or on vacations). In \cite{TakineHasegawa90}, the authors have considered two $M/G/1$ queues with balking customers and deterministic deadlines on the waiting time and the sojourn time. They have obtained integral equations for the steady state probability distribution functions of the waiting times and the sojourn times. They expressed these equations in terms of the steady state probability distribution functions of the $M/G/1$ queue with vacations and without deadline. Recently, \cite{KatayamaTsuyoshi2011} has investigated the $ M/G/1 $ queue with multiple and single vacations, sojourn time limits and balking behavior. Explicit solutions for the stationary virtual waiting time distribution are derived under various assumptions on the service time distribution. 
The same author, in \cite{KatayamaTsuyoshi2012}, derives recursive equations in the case of deterministic service times for the steady-state distributions of the virtual waiting times in a $ M/G/1 $ queue with multiple and single vacations, sojourn time limits and balking behavior. In \cite{AltmanYechiali2006}, the authors have analyzed queueing models in which customers become impatient only when servers are on vacations. They have derived some performance measures for the $M/M/1$ and $M/G/1$ queues, for both multiple and single vacations. In the case of a single service discipline, we analyse a Lindley-type equation \cite{Lindley52} and the model may be reduced to the $D/GI/1 + D$ queue. A sufficient condition is given for the existence of the stationary waiting time distribution and an integral equation is established for both reneging and balking models in the case of a single service discipline. The stationary probability of rejection is also derived for both models. Using the Laplace transform to solve a differential equation, simple explicit solutions are given when vacation times are exponentially distributed, and when the service time distribution is either exponential or deterministic. This model was studied by \cite{Ghosal1963} and the more general case was studied, for example, by \cite{Daley65}, \cite{Baccelli84}, or \cite{Stanford79}. In \cite{Ghosal1963}, the author derives an integral equation for the stationary waiting time distribution based on the model introduced by \cite{Finch1960} (see also \cite{Ghosal1970}, Chapter 1, equation (1.4)). The author of \cite{Finch1960} proposes a correction in \cite{Finch1961}. In this paper, we take \cite{Finch1961} into account to rewrite the integral equation and solve it in a simple manner under various hypotheses. \vspace{0.5cm} The paper is organized as follows. In Section 2, we give a description of the reneging and balking models. In Section 3, we establish preliminary results. 
In Section 4, we give a sufficient condition for stability and derive integral equations for the stationary waiting time distribution and the stationary probability of rejection for the reneging and balking models. Section 5 derives an explicit solution of the integral equation in the case of exponentially distributed vacation times and deterministic service times. In Section 6, we focus on the solution of the integral equation when the service times and the vacation times are both exponentially distributed. \section{Model description and assumption} We consider a first-in-first-out single-server queueing system with single vacations in which customers are subject to a constant deadline $K > 0$ on the waiting time. A customer cannot wait more than $K$ time units in the queue. If he does not reach the server before a time $K$, he leaves the system and never returns. When he reaches the server, he remains until service completion. Customers arrive at periodic epochs, $T_n := nT$, $n \in \mathbb{N}$, and require service durations $\sigma_n$, $n \in \mathbb{N}$. We assume that the server is free initially, and the first customer begins to be served on arrival. Customers may renege from the queue or balk upon arrival. A balking customer does not join the queue at all, and a customer who reneges joins the queue but leaves without being served. We examine the single service discipline, i.e., after each service completion the server goes on vacation. At the end of its vacation period, the server begins serving if a customer is present; otherwise it remains idle until a new customer arrives. For $n > 0$, denote by $v_n$ a real-valued random variable representing the length of the $n$th vacation period. Both sequences $(v_n, n \geq 1)$ and $(\sigma_n , n \geq 0)$ are assumed to be independent and identically distributed non-negative random variables with distribution functions $V(x)$ and $B(x)$, respectively. 
All random variables are defined on the same probability space $(\Omega , \mathcal{F}, \mathbb{P})$. In practice, observable data may consist only of partial information. Suppose that we observe $ \{(T_n, \sigma_n), n \geq 0\} $ and that the vacation periods are not observable. In this setting, the workload process, which depends on $\{ \sigma_n, n \geq 0 \}$ and $\{ v_n, n \geq 1 \}$, cannot be anticipated, so that the balking model is not appropriate. It is then desirable to investigate the reneging behaviour of customers. In \cite{WardGlynn2005}, the authors have taken into consideration the nature of the observable data by supposing that the queue length is observable, and have developed performance measure approximations for both reneging and balking models. As observed by Baccelli et al. \cite{Baccelli84}, customers who renege from the queue do not influence the waiting time of served customers. Thus, many steady-state performance measures are identical for the reneging and balking models. \vspace{0.2cm} In what follows, Lindley-type recursive equations are formulated for both reneging and balking models under the assumption that we observe the arrivals and the service durations of customers but the vacation periods are not directly observable. This implies that the periodic arrival point process will be marked only by the sequence $\{ \sigma_n, n \geq 0 \}$. \subsection{The waiting time process for the reneging model} In the reneging model, all customers enter the system. If the waiting time of a customer upon his arrival exceeds his patience time $K$, he abandons the queue without being served. Let $w_n$ be the waiting time experienced by the $n$th customer, i.e., the time between his arrival and the instant at which he starts his service (if he is served). The idea is to express the waiting time of the $(n+1)$th customer in terms of that of the last served customer. 
\vspace{0.3cm} $\bullet$ If the $n$th customer joins the server ($ w_n < K $), at his departure time the server takes one vacation of length $v_{N(n+1)}$, where $N(n+1)$ is the number of served customers prior to the $(n+1)$th arrival. Since a customer cannot wait more than $K$ time units, the waiting time of the $(n+1)$th customer is given by \begin{eqnarray*} w_{n+1} = \min[K, (w_n + \sigma_n + v_{N(n+1)} - T)^+], \end{eqnarray*} where \begin{eqnarray} \label{equation0} N(n+1) = \sum_{k=0}^n \mathbf{1}_{(w_k < K)} \end{eqnarray} is the number of successfully served customers prior to the $(n+1)$th arrival. \vspace{0.5cm} $\bullet$ If the $n$th customer abandons the queue without being served ($w_n = K$) and the $(n-1)$th customer joins the server ($w_{n-1} < K $), the waiting time $w_{n+1}$ satisfies \begin{eqnarray*} w_{n+1} = \min[K, (w_{n-1} + \sigma_{n-1} + v_{N(n)} - 2T)^+]. \end{eqnarray*} The fact that the $n$th customer leaves the system without being served is expressed by \begin{eqnarray*} w_{n-1} + \sigma_{n-1} + v_{N(n)} - T \geq K.\vspace{0.5cm} \end{eqnarray*} $\bullet$ More generally, if $w_n = \ldots = w_{n-k+1} = K$ and $w_{n-k} < K$ for some $k = 0,1,\ldots, n$ we have \begin{eqnarray} \label{equation1} \left\{\begin{array}{lll} w_0 = 0\\ w_{n+1} = \min [ (w_{n-k} + \sigma_{n-k} + v_{N(n-k+1)} - (k+1)T)^+,K ],\;\; n \geq 0,\\ \end{array}\right. \end{eqnarray} where $k$ is the number of lost customers between the $(n-k)$th and the $(n+1)$th customer. Furthermore, we have the following inequalities for each lost customer \begin{equation} \label{equation2} w_{n-k} + \sigma_{n-k} + v_{N(n-k+1)} - (j+1)T \geq K, \quad j = 0, \ldots, k-1. \end{equation} \noindent Equations \eqref{equation1} and \eqref{equation2} are particular cases of Equations (2) and (3) in \cite{Daley65}, where the author studied the general queueing system $GI/G/1 + GI$. 
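Recursion \eqref{equation1}, together with the loss inequalities \eqref{equation2}, is straightforward to simulate by tracking the offered waiting time of each arrival: it is refreshed whenever a customer is served and decreases by $T$ for each lost customer. The sketch below is a minimal illustration; the exponential service and vacation times and all parameter values are assumptions chosen so that $\mathbb{P}(\sigma_0 + v_1 < T) > 0$, and the printed rejection probability is only an empirical estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 1.0, 1.5                  # inter-arrival time and deadline (illustrative)
mean_sigma, mean_v = 0.6, 0.5    # assumed exponential service and vacation means

n = 100_000
u = 0.0                          # offered waiting time of the current arrival
waits = np.empty(n)
for i in range(n):
    w = min(u, K)                # waiting times are capped at the deadline K
    waits[i] = w
    if w < K:                    # served: server is busy for sigma plus one vacation
        s = rng.exponential(mean_sigma) + rng.exponential(mean_v)
        u = max(0.0, w + s - T)
    else:                        # lost: offered wait shifts by one inter-arrival period
        u = max(0.0, u - T)

p_loss = float(np.mean(waits == K))
print(round(p_loss, 3))          # empirical probability of rejection
```

The two branches correspond exactly to the two bullet cases above, with one vacation drawn per served customer.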
\subsection{The workload process for the balking model} Let $\{ \tilde{w}_t, t\in\mathbb{R} \}$ be the workload process. The random variable $\tilde{w}_t$ represents the amount of work remaining to be done by the server at time $t$. By convention $\{ \tilde{w}_t, t\in\mathbb{R} \}$ will be taken right-continuous with left limit $\tilde{w}_{t^-}$ and $\tilde{w}_{0^-} = 0$. We define the workload sequence by $\tilde{w}_n = \tilde{w}_{T_n^-} $, for all $n \in \mathbb{N}$. Thus, the value $\tilde{w}_n$ of the workload just before time $T_n$ represents the time that the $n$th customer would have to wait to reach the server. The workload upon arrival of a customer is assumed to be known, hence a customer enters the system if and only if the workload upon his arrival is lower than his patience time $K$. If not, the customer does not enter and never returns. The server takes one vacation as soon as a customer completes his service. Consequently, vacation lengths are indexed as in (\ref{equation0}) (with $w_n$ replaced by $\tilde{w}_n$). The general case was studied by \cite{Baccelli84}, or \cite{Stanford79}. We have for $n \geq 0$ \begin{eqnarray} \label{equation3}\left\{\begin{array}{lll} \tilde{w}_0 = 0\\ \tilde{w}_{n+1} = \left[\tilde{w}_n + (\sigma_n + v_{N(n+1)})\mathbf{1}_{(\tilde{w}_n < K)} -T \right]^+ . \end{array}\right. \end{eqnarray} \noindent The above equation is similar to Equation (2.1) in \cite{Baccelli84}. \section{Preliminary results} In this section, we shall derive time-dependent integral equations for both the waiting time and the workload process. Let us first introduce a lemma. \vspace{0.3cm} Let $n$ and $p$ be two non-negative integers. Recall that $N(n+1) = \sum_{k=0}^n \mathbf{1}_{(w_k < K)}$ is the number of successfully served customers prior to $(n+1)T$ (for the reneging scenario). For $2 \leq p \leq n+1$, if $N(n+1) = p$ and $w_n < K$, then the $n$th customer is the $p$th served customer. 
After his service, the server takes its $p$th vacation period of length $v_p$. Thus, the event $\{w_n < K, N(n+1) = p \}$ is a function of $\sigma_0, \ldots , \sigma_{n-1}, v_1,\ldots, v_{p-1}$, and $v_p$ is independent of the event $\{w_n < K, N(n+1) = p \}$. If $p > n+1$ or $p = 1$, then $\{N(n+1) = p, w_n < K\} = \emptyset $. Let $\sigma(\sigma_0, \ldots , \sigma_{n-1},v_1, \ldots, v_p)$ be the $\sigma$-field generated by the random variables $\sigma_0, \ldots , \sigma_{n-1}, v_1, \ldots, v_p$. We have the following lemma. \begin{lem} \label{lemma1} Let $n \geq 1 $ and $p \geq 1$ be two non-negative integers. The events $\{w_n < K, N(n+1) = p\}$ and $\{\tilde{w}_n < K, N(n+1) = p\}$ are both $\sigma(\sigma_0, \ldots , \sigma_{n-1},v_1, \ldots, v_{p-1})$ measurable, and $v_p$ is independent of both events $\{w_n < K, N(n+1) = p\}$ and $\{\tilde{w}_n < K, N(n+1) = p\}$. \end{lem} Let $W_n(x)$ and $\tilde{W}_n(x)$, $x \in \mathbb{R}^+$ be the distribution functions of $w_n$ and $\tilde{w}_n$ respectively. For the reneging scenario, no customer can wait in the queue more than $K$ units of time. For all $n \geq 0$, $w_n$ is at most $K$ with probability one. Thus, for $0 \leq x < K$ we have \begin{equation*} \mathbb{P}( w_{n+1} \leq x) = \sum_{k=0}^n \mathbb{P}( w_{n+1} \leq x , w_{n-k} < K, w_{n-j} = K, j = 0, \ldots, k-1 ). \end{equation*} From Equations \eqref{equation1} and \eqref{equation2}, by conditioning first with respect to $w_{n-k}$, secondly, with respect to $N(n-k+1)$, and using Lemma~\ref{lemma1}, and the fact that both sequences $(\sigma_n, n \geq 0)$ and $(v_n, n \geq 1)$ are i.i.d. 
and mutually independent, yields for $0 \leq x < K$ \begin{align*} \begin{split} \mathbb{P}(w_{n+1} &\leq x , w_{n-k} < K, w_{n-j} = K, j = 0, \ldots, k-1)\\ \qquad &=\int_{0-}^{K-0} [G(a_k^x(w)) - G(b_k(w))]dW_{n-k}(w), \quad k = 0,\ldots, n, \end{split} \end{align*} \noindent where \begin{equation} \label{def suites a et b} \left\lbrace \begin{array}{lll} a_k^x(w) = x- w +(k+1)T, \; k \geq 0 \vspace{0.3cm}\\ b_0(w) = 0, \quad b_k(w) = K-w+kT, \; k \geq 1, \end{array} \right. \end{equation} \noindent and $G$ denotes the distribution function of $\sigma_0 + v_1$. The previous equation is written with the convention that $G(s) - G(u) = 0$ for $s-u \leq 0$. The distribution function of the waiting time for the reneging model thus satisfies the time-dependent integral equation, for $ 0 \leq x < K$, \begin{equation} \label{equation5} W_{n+1}(x) = \sum_{k=0}^n \int_{0^-}^{K-0} \left\lbrace G(a_k^x(w)) - G(b_k(w))\right\rbrace dW_{n-k}(w). \end{equation} If $x\geq K$, $W_{n+1}(x) = 1$. \vspace{0.5cm} \noindent Simple calculations yield for the balking scenario \begin{equation} \label{equation6} \tilde{W}_{n+1}(x) = \int_{0-}^{K-0} G(x - w + T)d\tilde{W}_n(w) + \int_{K-0}^{T+x}d\tilde{W}_n(w), \end{equation} with the convention that $G(u) = 0$ and $dW(u) = 0$ for $u < 0$. \section{Stability condition} For all $n \geq 1$ let $u_n = \mathbb{P}(w_n = 0)$ be the probability that the $n$th customer finds the server free (with $u_0 = 1$), and $f_n = \mathbb{P}(w_n = 0,w_{n-1} > 0, \ldots, w_1 > 0)$, the probability that the event $\{ w_n = 0 \}$ occurs for the first time at step $n$ (with $f_0 = 0$). For $k \geq 0$, let $\nu_0^k$ (with $\nu_{0}^{0} = 0$) be the number of entering customers in the $k$th busy period, that is, the period during which the server is either serving or on vacations. We denote by $\mu = \sum_{n \geq 1} n f_n$ the mean length of the renewal cycles. The following result is a corollary of Theorem 1 in \cite{Daley65}. 
\begin{cor} \label{thm1} For the $D/GI/1$ queue with single vacations, constant deadline and single service discipline, assuming that $\mathbb{P}(\sigma_0 + v_1 - T < 0) > 0$, the limiting waiting time distribution function $W$ exists. Moreover, if the distribution functions $B$ and $V$ are continuous, then $W$ satisfies for $0 \leq x < K$ \begin{equation} \label{equation7} W(x) = \int_{0^-}^{K-0} \sum_{n \geq 0}G_n^x(w) dW(w), \end{equation} where \begin{equation} \label{equation8} \sum_{n \geq 0}G_n^x(w) = \sum_{n \geq 0} \mathbb{P}(b_n(w) \leq \sigma_0 + v_1 \leq a_n^x(w)), \end{equation} \noindent and $\{ a_n^{x} \}_{n \geq 0}$ and $\{ b_n\}_{n \geq 0}$ are defined by Equation~\eqref{def suites a et b}. \vspace{0.3cm} \noindent The probability of rejection is given by \begin{equation} \label{equation9} B_K := \lim_{n \rightarrow \infty}\mathbb{P}(w_n = K) = \int_{0^-}^{K-0} \sum_{n=1}^{\infty} \left[ 1 - G(K - w + nT) \right]dW(w). \end{equation} \end{cor} \begin{rem} \label{rem1} Since the sequence $(v_n, n \geq 1)$ is i.i.d., the sequence $(w_n, n\geq 0 )$ has the same law as the sequence $ (z_n, n\geq 0)$, where $z_{n+1}$ is defined by $z_0 = 0$, $z_{n+1} = \min [ (z_{n-k} + \sigma_{n-k} + v_{n-k+1} - (k+1)T)^+,K ]$ for $n \geq 0$. This means that the model we propose in this paper coincides (in law) with the classical $D/GI/1 +D$ queueing model for the reneging scenario, in which the sequence of service durations $(s_n, n \geq 0)$ is defined by $s_n := \sigma_n + v_{n+1}$, $n \geq 0$. In other words, the non-observable data $(v_n, n \geq 1)$ may be regarded as a sequence of marks for the arrival process. \end{rem} \begin{rem} \label{rem2} As in Remark~\ref{rem1}, this model coincides (in law) with the model defined by $\tilde{z}_0 = 0$, $\tilde{z}_{n+1} = [\tilde{z}_n + (\sigma_n + v_{n+1})\mathbf{1}_{(\tilde{z}_n < K)} - T]^+$. Thus, it may be reduced to the $D/GI/1 + D$ queue for balking customers. 
\end{rem}

\begin{rem}
\label{rem3}
The $D/G/1 + D$ queue was studied by Ghosal in \cite{Ghosal1963}. The author derived an integral equation for the stationary waiting time based on the model introduced by Finch in \cite{Finch1960} (see also \cite{Ghosal1970}, Chapter 1, equation (1.4)), where the case $w_n=K$ is not taken into account (see \cite{Finch1961} for a correction). Thus, the integral equation derived in \cite{Ghosal1963} does not cover the case where customers leave the queue prematurely.
\end{rem}

\begin{lem}
\label{lemma2}
Under the condition $\mathbb{P}(\sigma_0 + v_1 < T) > 0$, we have $\mu < \infty $.
\end{lem}

\begin{proof}
The sequence $ (\tilde{z}_n, n\geq 0)$ defined in Remark~\ref{rem2} is a regenerative process with respect to the renewal sequence $ (\tau_0^k, k\geq 0)$, where $\tau_0^k$ is the number of customers entering during the $k$th busy period (with $\tau_0^0 = 0$), provided that $\mathbb{P}(\tau_0^1 < \infty) = 1$. Comparing $w_n$ with $\tilde{w}_n$ and using Remark~\ref{rem2}, we have $w_n \leq \tilde{w}_n \stackrel{\mathcal{L}}{=} \tilde{z}_n$ for all $n\geq 0$. Since $\mathbb{P}(v_1 + \sigma_0 < T) > 0$, the renewal sequence $(\tau_0^k, k \geq 0)$ is aperiodic. From Theorem 2.2 in~\cite{Asmussen} we have $\mu \leq \tilde{\mu} = \mathbb{E}(\tau_0^1) = (q_0)^{-1} < \infty$, where $q_0 := \lim_{n \rightarrow \infty}\mathbb{P}(\tilde{z}_n = 0) > 0$.
\end{proof}

\noindent We are now able to prove Corollary~\ref{thm1}.

\begin{proof}
\textit{Existence}. The first part of the proof is similar to that of Theorem 1 in \cite{Daley65}. For $ x \geq 0$, we introduce the function
\begin{equation*}
F_n(x) = \mathbb{P}(w_n \leq x, w_k > 0, k = 1, \ldots, n-1).
\end{equation*}
Then $F_n(0) = f_n$ and $F_n(\infty) = f_n + f_{n+1}+ \ldots$ .

\noindent One computes
\begin{align*}
\begin{array}{llll}
W_n(x) &= \sum_{k=0}^{n-1} \mathbb{P}(w_k = 0, w_{k+1} > 0, \ldots , w_n \leq x)\\
\\&=\sum_{k = 1}^n u_{n-k} F_k(x).
\vspace{0.5 cm}\\
\end{array}
\end{align*}
Since $\sum_{n \geq 1}f_n = 1$ and $(f_n ,n \geq 1)$ is aperiodic, Theorem 2.2 of \cite{Asmussen} yields
\begin{equation*}
\lim_{n \rightarrow \infty} u_n = \mu^{-1}.
\end{equation*}
Furthermore, since for all $x \in[0,K)$, $0 \leq F_n(x) \leq F_n(\infty)$, we obtain
\begin{equation*}
\sum_{n = 1}^{\infty}F_n(x) \leq \sum_{n = 1}^{\infty}F_n(\infty) = \sum_{n = 1}^{\infty} nf_n = \mu < \infty.
\end{equation*}
The series converges uniformly in $x \in [0,K)$. It follows from Theorem 1 p. 318 of~\cite{FellerI} that the sequence $(W_n(x))$ converges uniformly for $x \in [0,K)$ and therefore the limit function $W(x)$ is a distribution function.

\noindent \\
\textit{Limit value}. For any integer $m$, we introduce a subdivision consisting of continuity points of $W$,
\begin{eqnarray*}
0 = w_{m,0} < w_{m,1} < \ldots < w_{m,l_m} = K,
\end{eqnarray*}
and set $\Delta_m = \sup_{1 \leq j \leq l_m}(w_{m,j} - w_{m,j-1})$, such that $\lim_{m \rightarrow \infty} \Delta_m = 0$. For $x$ and $w$ $\in [0,K)$, define the sequence $(G^x_k)_{k \geq 0}$ by
\begin{eqnarray*}
\left\lbrace
\begin{array}{lll}
G_0^x(w) = G(a^x_0(w)),\vspace{0.3cm}\\
G_k^x(w) = G(a_k^x(w)) - G(b_k(w)), \;\; k \geq 1.
\end{array}
\right.
\end{eqnarray*}
The function $G_k^x$ depends on $T$, $K$ and $\sigma$; for the sake of simplicity, we omit these parameters. For all $k$, the functions $G_k^x(w)$ are continuous on $[0,K)$ uniformly over $x$. Thus, according to the definition of the Riemann--Stieltjes integral,
\begin{equation}
\label{equation10}
\lim_{n \rightarrow \infty} W_{n+1}(x) =\lim_{n \rightarrow \infty} \lim_{m \rightarrow \infty} \sum_{j=1}^{l_m}\sum_{k=0}^n G^x_k (w_{m,j-1})[W_{n-k}(w_{m,j}) - W_{n-k}(w_{m,j-1})].
\end{equation}
\noindent Define the sequence $(\alpha_{m,n})_{m \geq 0, n \geq 0}$ by
\begin{equation*}
\alpha_{m,n} = \sum_{j=1}^{l_m}\sum_{k=0}^n G^x_k(w_{m,j-1})[W_{n-k}(w_{m,j}) - W_{n-k}(w_{m,j-1})].
\end{equation*}
\noindent The sequence $(W_n)$ and the series $ \sum_{n=0}^{\infty} G^x_n(w) $ are uniformly convergent. Thus, according to Theorem 1 p. 318 in \cite{FellerI}, the sequence $(\alpha_{m,n})_{n \geq 0}$ converges uniformly over $m$ to
\begin{eqnarray*}
\sum_{j=1}^{l_m}\sum_{n=0}^{\infty} G^x_n (w_{m,j-1})[W(w_{m,j}) - W(w_{m,j-1})].
\end{eqnarray*}
\noindent Furthermore, the definition of the Riemann--Stieltjes integral gives the convergence of $(\alpha_{m,n})_{m \geq 0}$ to
\[ \int_{0^-}^{K-0} \sum_{k=0}^n G_k^x(w)dW_{n-k}(w).\]
\noindent Thus, we may interchange the limits in (\ref{equation10}), which yields (\ref{equation7}).

\noindent \\
\textit{Probability of rejection.} This probability is expressed as
\begin{align*}
\mathbb{P}(w_n = K) &= \sum_{k=1}^{n} \mathbb{P}(w_n = K, w_{n-k} < K, w_{n-j} = K, j = 1, \ldots , k-1)\\
& = \sum_{k=1}^{n} \int_{0^-}^{K-0} \left[ 1 - G(b_k(w)) \right]dW_{n-k}(w).
\end{align*}
\noindent Since $W(x)$ exists, (\ref{equation9}) is proved.
\end{proof}

\noindent Corollary~\ref{thm2} is similar to Corollary~\ref{thm1}. It establishes a sufficient condition for stability in the balking scenario and gives the stationary workload distribution together with the blocking probability $B_K := \underset{n \rightarrow \infty}{\lim} \mathbb{P}(\tilde{w}_n \geq K)$.

\begin{cor}
\label{thm2}
For the $D/GI/1$ queue with single vacations, constant deadline and single service discipline, assuming that $\mathbb{P}(v_1 + \sigma_0 < T) > 0 $, the limiting waiting time distribution function $\tilde{W}$ exists. Moreover, if the distribution function $G$ is continuous on $\mathbb{R}^+$, then $\tilde{W}$ satisfies
\begin{equation}
\label{equation01}
\tilde{W}(x) = \int_{0-}^{K-0} G(x - w + T)d\tilde{W}(w) + \int_{K-0}^{x+T} d\tilde{W}(w), \quad x \geq 0.
\end{equation}
\noindent The blocking probability is given by
\begin{equation*}
B_K = \int_{0^-}^{K-0} \left[ 1 - G(K - w + T) \right]d\tilde{W}(w) + \int_{K+T}^{\infty}d\tilde{W}(w).
\end{equation*}
\end{cor}

\begin{proof}
The first part of the proof is similar to that of Corollary~\ref{thm1}. The limit $\tilde{W}$ is obtained by a Helly--Bray type argument (see, for example,~\cite{LoeveI}).
\end{proof}

\begin{rem}
\label{rem4}
Integrating by parts the first term of Equation~\eqref{equation01} leads to
\[ G(x+T-K) - \int_{0^-}^{K-0} \tilde{W}(w)dG(T+x-w), \]
with the convention that $G(u) = 0$ and $d\tilde{W}(u) = 0$ for $u < 0$. This equation is the same as Equation (1) of Ghosal in \cite{Ghosal1963}, but the author does not consider the case where the previous customer abandons the queue without being served.
\end{rem}

\section{The case of deterministic service and exponentially distributed vacation times}

In this section, we analyse the reneging model under the assumption that customers require a deterministic service duration $\sigma > 0$ and the vacations are exponentially distributed with parameter $\lambda > 0$. Equation \eqref{equation7} becomes
\begin{equation}
\label{equation19}
W(x) = \int_{0^-}^{K-0} \sum_{n \geq 0} V_n^x(w) dW(w),
\end{equation}
\noindent where
\begin{equation*}
\sum_{n \geq 0} V_n^x(w) = \sum_{n \geq 0} \mathbb{P}(b_n(w) \leq v \leq a_n^x(w)),
\end{equation*}
\noindent and
\begin{eqnarray*}
\left\{\begin{array}{lll}
a_n^{x}(w) := x - \sigma - w + (n+1)T, \hspace{1.4 cm} n \geq 0, \vspace{0.3 cm}\\
b_0 := 0, \quad b_n(w) := K - \sigma - w + nT, \hspace{0.6 cm} n \geq 1.\\
\end{array}\right.
\vspace{0.3 cm}
\end{eqnarray*}
\noindent Substituting $V(x) = 1 - e^{-\lambda x}$, $x \geq 0$, in \eqref{equation19} gives, for $0 \leq x < K, \; 0 \leq w < K$,
\begin{equation}
\label{equation20}
\begin{split}
W(x) &= 1 - e^{-\lambda(x-\sigma +T)} + \alpha_{\lambda}[e^{-\lambda(K-\sigma)} - e^{-\lambda(x-\sigma+T)}] \\
& \quad + \int_{0^+}^{K-0} 1 - e^{-\lambda (x - w - \sigma + T)} dW(w) + \alpha_{\lambda} \int_{0^+}^{K-0} e^{-\lambda(K - w - \sigma)} - e^{-\lambda (x - w - \sigma + T)} dW(w),
\end{split}
\end{equation}
where $\alpha_{\lambda} = \dfrac{e^{-\lambda T}}{1 - e^{-\lambda T}}$. From Equation~\eqref{equation20} and Lebesgue's dominated convergence theorem it follows that
\begin{equation*}
\lim_{n \rightarrow \infty} \dfrac{W(x+h_n)-W(x)}{h_n} \leq \alpha_{\lambda} \lambda e^{\lambda(K+\sigma)},
\end{equation*}
where $ h_n \longrightarrow 0$ as $n \rightarrow \infty$. The distribution function $W$ is differentiable on $(0,K)$ and has a bounded derivative with a finite number of discontinuities (at the points $ x=0$ and $x=K$). Hence $W$ is absolutely continuous with respect to the Lebesgue measure and we denote by $f$ the probability density function (pdf) of $W$. Taking the derivative in Equation~\eqref{equation20} yields
\begin{equation}
\label{equation21}
f(x) = W(0)\alpha_{\lambda} e^{\lambda \sigma}\lambda e^{-\lambda x} + \lambda \alpha_{\lambda} e^{\lambda \sigma}\int_{0}^x e^{-\lambda(x - w)}f(w)dw, \quad 0 < x < K.
\end{equation}
\noindent Suppose that there exists a function $g$ satisfying, for all $x > 0$,
\begin{equation}
\label{equation22}
g(x) = G(0)\alpha_{\lambda} e^{\lambda \sigma}\lambda e^{-\lambda x} + \lambda \alpha_{\lambda} e^{\lambda \sigma}\int_{0}^x e^{-\lambda(x - w)}g(w)dw,
\end{equation}
\noindent where $G$ denotes the distribution function of $g$, and $W(0) = G(0)$.
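As a numerical sanity check, Equation \eqref{equation21} can also be solved directly by forward trapezoidal quadrature. The sketch below (Python; all parameter values are assumed for illustration, and $W(0)$ is treated as a free multiplicative constant) recovers the exponential form obtained analytically below in \eqref{equation23}.

```python
import math

# Illustrative parameters (assumed): Exp(lam) vacations, service sigma,
# inter-arrival time T, deadline K; W0 is only a multiplicative constant here.
lam, sigma, T, K, W0 = 1.0, 0.4, 1.0, 2.0, 1.0
alpha = math.exp(-lam * T) / (1.0 - math.exp(-lam * T))   # alpha_lambda
Lam = lam * alpha * math.exp(lam * sigma)                 # kernel multiplier

N = 1000
h = K / N
xs = [i * h for i in range(N + 1)]
f = [0.0] * (N + 1)
f[0] = W0 * Lam   # value of the Volterra equation at x = 0

# Forward trapezoidal solve of the Volterra equation of the second kind:
#   f(x) = W0*Lam*e^{-lam x} + Lam * int_0^x e^{-lam (x - w)} f(w) dw
for i in range(1, N + 1):
    x = xs[i]
    s = 0.5 * math.exp(-lam * x) * f[0]
    s += sum(math.exp(-lam * (x - xs[j])) * f[j] for j in range(1, i))
    f[i] = (W0 * Lam * math.exp(-lam * x) + Lam * h * s) / (1.0 - 0.5 * Lam * h)

# Compare with the exponential solution f(x) = W0*Lam*e^{(Lam - lam) x}
err = max(abs(f[i] - W0 * Lam * math.exp((Lam - lam) * xs[i]))
          for i in range(N + 1))
```

The maximal discrepancy between the quadrature solution and the exponential form is of the order of the $O(h^2)$ trapezoidal error.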
Taking the Laplace transform of (\ref{equation22}), we have
\begin{eqnarray*}
\Phi(\theta) = G(0)\alpha_{\lambda} e^{\lambda \sigma}\dfrac{\lambda}{\lambda + \theta} + \alpha_{\lambda} e^{\lambda \sigma}\dfrac{\lambda}{\lambda + \theta} \Phi(\theta),\quad \theta > \lambda(\alpha_{\lambda} e^{\lambda \sigma} - 1).
\end{eqnarray*}
\noindent Rewriting this last equation gives
\begin{eqnarray*}
\Phi(\theta) = \dfrac{G(0)\lambda \alpha_{\lambda} e^{\lambda \sigma} }{\theta - \lambda(\alpha_{\lambda} e^{\lambda \sigma} -1)}.
\end{eqnarray*}
\noindent Hence, by inversion we obtain
\begin{equation*}
g(x) = G(0)\lambda\alpha_{\lambda} e^{\lambda \sigma} e^{\lambda (\alpha_{\lambda} e^{\lambda \sigma} -1 )x}, \quad x > 0.
\end{equation*}
\noindent Identifying $f$ with $g$ on $(0,K)$ leads to
\begin{equation}
\label{equation23}
f(x) = W(0)\lambda\alpha_{\lambda} e^{\lambda \sigma} e^{\lambda (\alpha_{\lambda} e^{\lambda \sigma} -1 )x}, \quad 0 < x < K.
\end{equation}
\noindent The constant $W(0)$ is evaluated by the normalizing condition
\begin{eqnarray*}
W(0) + \int_0^K f(x)dx + B_K = 1,
\end{eqnarray*}
\noindent thus
\begin{align}
\label{equation24}
W(0) &= [1 - B_K] \left[ 1 + \int_{0}^K \lambda\alpha_{\lambda} e^{\lambda \sigma} e^{\lambda (\alpha_{\lambda} e^{\lambda \sigma} -1 )x} dx \right]^{-1} \nonumber \\
&= [1 -B_K] \left[ \dfrac{\alpha_{\lambda} e^{\lambda \sigma} - 1}{\alpha_{\lambda} e^{\lambda(K \alpha_{\lambda}e^{\lambda \sigma} - K + \sigma)} -1} \right],
\end{align}
\noindent where $B_K$ is the probability of rejection
\begin{equation}
\label{equation25}
B_K = \int_{0^-}^{K-0} \sum_{n=1}^{\infty} \left[ 1 - V(K - w - \sigma + nT) \right]dW(w).
\end{equation}
\noindent Substituting $V(x) = 1 - e^{-\lambda x}$ in (\ref{equation25}) gives
\begin{equation}
\label{equation26}
B_K = W(0)\alpha_{\lambda} e^{-\lambda(K-\sigma- K\alpha_{\lambda}e^{\lambda \sigma})}.
\end{equation}
\noindent We have the following Proposition.
\begin{prp}
In the $D/D/1$ queue with exponential vacation times and single service discipline, under the conditions of Corollary~\ref{thm1}, the density of $W$ on $(0,K)$ is
\begin{equation*}
f(x) = W(0)\lambda\alpha_{\lambda} e^{\lambda \sigma} e^{\lambda (\alpha_{\lambda} e^{\lambda \sigma} -1 )x},
\end{equation*}
\noindent where $W(0)$ and $B_K$ are given by (\ref{equation24}) and (\ref{equation26}), respectively.
\end{prp}

\begin{rem}
\label{rem5}
Equation (\ref{equation21}) is of the type
\begin{equation*}
f(x) = h(x) + \Lambda \int_{0}^x K(x-w)f(w)dw, \quad 0 < x < K,
\end{equation*}
\noindent which is the so-called Volterra equation. Applying the resolvent method, the iterated kernels are
\begin{equation*}
K_{n+1}(x,w) = \dfrac{(x-w)^n}{n!}e^{-\lambda(x-w)}, \quad n \geq 0.
\end{equation*}
\noindent Therefore, the resolvent kernel of (\ref{equation21}) is
\begin{equation*}
R(x,w;\Lambda) = e^{(\Lambda -\lambda)(x-w)},
\end{equation*}
\noindent and the solution is given by
\begin{equation*}
f(x) = h(x) + \Lambda \int_{0}^x e^{(\Lambda -\lambda)(x-w)} h(w)dw.
\end{equation*}
\noindent Carrying out this calculation yields (\ref{equation23}).
\end{rem}

We now focus on the time-dependent waiting time distribution.

\begin{prp}
For all $n \geq 0$ and $x \geq 0$, the time-dependent distribution of the waiting time for the balking model is given by
\begin{equation*}
\mathbb{P}(\tilde{w}_{n+1} > x) = \sum_{j=0}^n \binom{n}{j}K^{n-j}\lambda^{n-j}e^{-\lambda(x+(n+1)T - (n+1-j)\sigma)}.
\end{equation*}
\end{prp}

\begin{proof}
The proof is by induction. For $n = 0$, we have $\mathbb{P}(\tilde{w}_1 > x) = \mathbb{P}(\sigma + v_1 - T > x) = e^{-\lambda(x + T - \sigma)}$. Assume that $\mathbb{P}(\tilde{w}_{n} > x) = \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-1-j}\lambda^{n-1-j}e^{-\lambda(x+nT - (n-j)\sigma)}$ holds.
Then,
\begin{equation*}
\begin{split}
\mathbb{P}(\tilde{w}_{n+1} > x, \tilde{w}_n < K) &= \int_{0}^K \mathbb{P}(w + \sigma + v_{n+1} - T > x) \\
& \qquad \times\lambda \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-1-j}\lambda^{n-1-j}e^{-\lambda(w+nT - (n-j)\sigma)}\,dw\\
&= \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-j}\lambda^{n-j}e^{-\lambda(x + (n+1)T - (n+1-j)\sigma)}.
\end{split}
\end{equation*}
\noindent Moreover, since $\tilde{w}_{n+1} = (\tilde{w}_n - T)^+$ on the event $\{\tilde{w}_n \geq K\}$, the induction hypothesis applied at $x+T$ gives $\mathbb{P}(\tilde{w}_{n+1} > x, \tilde{w}_n \geq K) = \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-1-j}\lambda^{n-1-j}e^{-\lambda(x+(n+1)T - (n-j)\sigma)}$. Hence,
\begin{equation*}
\begin{split}
\mathbb{P}(\tilde{w}_{n+1} > x) &= \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-j}\lambda^{n-j}e^{-\lambda(x + (n+1)T - (n+1-j)\sigma)} \\
& \qquad + \sum_{j=0}^{n-1} \binom{n-1}{j}K^{n-1-j}\lambda^{n-1-j}e^{-\lambda(x+(n+1)T - (n-j)\sigma)} \\
& =\binom{n-1}{0}K^n \lambda^n e^{-\lambda(x+(n+1)T - (n+1)\sigma)} \\
& \qquad + \sum_{j=1}^{n-1} \binom{n-1}{j}K^{n-j}\lambda^{n-j}e^{-\lambda(x + (n+1)T - (n+1-j)\sigma)}\\
& \qquad + \sum_{j=0}^{n-2} \binom{n-1}{j}K^{n-1-j}\lambda^{n-1-j}e^{-\lambda(x+(n+1)T - (n-j)\sigma)}\\
& \qquad + \binom{n-1}{n-1}e^{-\lambda (x+(n+1)T - \sigma)}\\
&=\sum_{j=0}^n \binom{n}{j}K^{n-j}\lambda^{n-j}e^{-\lambda(x+(n+1)T - (n+1-j)\sigma)},
\end{split}
\end{equation*}
where the last equality follows from Pascal's rule.
\end{proof}

\section{The case of exponentially distributed service times and vacation times}

In this section, starting from \eqref{equation7}, we obtain a differential equation for the unknown pdf $f$ and solve it explicitly using the Laplace transform, as in the previous section.
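The distribution function $G$ of $\sigma_0 + v_1$ used throughout this section is the hypoexponential distribution (sum of two independent exponentials with distinct rates). As a quick sanity check, the closed-form expression given below can be verified numerically to be a valid distribution function; the rates $\lambda = 2$, $\mu = 1$ in the sketch are assumed for illustration only.

```python
import math

# Assumed illustrative rates with lam != mu:
# vacations ~ Exp(lam), services ~ Exp(mu)
lam, mu = 2.0, 1.0

def G(x):
    """Distribution function of sigma_0 + v_1 (hypoexponential), cf. the text."""
    if x < 0:
        return 0.0
    return (1.0 - mu / (mu - lam) * math.exp(-lam * x)
                - lam / (lam - mu) * math.exp(-mu * x))

xs = [0.01 * i for i in range(2001)]   # grid on [0, 20]
vals = [G(x) for x in xs]              # should be nondecreasing from 0 to ~1
```

For these rates, $G(0) = 0$, $G$ is nondecreasing on the grid, and $G(x) \to 1$ as $x$ grows, as required of a distribution function.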
Throughout this section, we assume that $B(x) = 1 - e^{-\mu x}$ and $V(x) = 1 - e^{-\lambda x}$, $x \geq 0$, with $\lambda >0$, $\mu > 0$ and $\lambda \neq \mu$. Let $G(x)$ be the distribution function of the random variable $\sigma_0 + v_1$, that is, for $ x \geq 0$,
\begin{equation*}
G(x) = 1 - \mu/(\mu - \lambda)e^{-\lambda x} - \lambda/(\lambda - \mu)e^{-\mu x}.
\end{equation*}
\noindent Equation \eqref{equation8} becomes
\begin{equation*}
\begin{split}
\sum_{n \geq 0} G^x_n(w) &= \mathbb{P}(\sigma_0 + v_1 \leq x -w +T)\\
& \quad +\sum_{n \geq 1}\mathbb{P}(K -w + nT \leq \sigma_0 + v_1 \leq x -w +(n+1)T) \\
&= 1 - \dfrac{\mu}{\mu - \lambda}e^{-\lambda(x - w + T)} - \dfrac{\lambda}{\lambda - \mu}e^{-\mu (x - w + T)} + \dfrac{\mu}{\mu - \lambda} \alpha_{\lambda} \left[e^{-\lambda(K - w)} - e^{-\lambda(x - w + T)} \right] \\
& \qquad + \dfrac{\lambda}{\lambda - \mu} \alpha_{\mu} \left[e^{-\mu(K - w)} - e^{-\mu(x - w + T)}\right] ,
\end{split}
\end{equation*}
\noindent where
\[ \alpha_{\lambda} = e^{-\lambda T}/(1 - e^{-\lambda T}) \qquad \mbox{and} \qquad \alpha_{\mu} = e^{-\mu T}/(1 - e^{-\mu T} ). \]
\noindent We now derive the stationary waiting time integral equation. Substituting the above equation in \eqref{equation7} yields, for $0 \leq x < K$,
\begin{equation}
\label{equation11}
\begin{split}
W(x) &= W(0)\sum_{n \geq 0} G^x_n(0) + \int_{0^+}^{K-0} \sum_{n \geq 0} G^x_n(w)dW(w)\\
&= W(0) \left\{ 1 - \dfrac{\mu}{\mu - \lambda} e^{-\lambda(x+T)} - \dfrac{\lambda}{\lambda - \mu}e^{-\mu(x+T)} + \dfrac{\mu}{\mu - \lambda} \alpha_{\lambda} \left( e^{-\lambda K} - e^{-\lambda (x+T)} \right) \right. \\
& \qquad \qquad \; \left.
+ \dfrac{\lambda}{\lambda - \mu} \alpha_{\mu} \left( e^{-\mu K} - e^{-\mu (x+T)} \right) \right\} \\
& \qquad + \int_{0^+}^{K-0}\left[ 1 - \dfrac{\mu}{\mu - \lambda} e^{-\lambda(x - w + T)} - \dfrac{\lambda}{\lambda - \mu}e^{-\mu(x - w + T)} \right] dW(w) \\
& \qquad + \dfrac{\mu}{\mu - \lambda} \alpha_{\lambda} \int_{0^+}^{K-0}\left[ e^{-\lambda(K-w)} - e^{-\lambda (x-w+T)} \right]dW(w)\\
& \qquad + \dfrac{\lambda}{\lambda - \mu} \alpha_{\mu}\int_{0^+}^{K-0}\left[ e^{-\mu (K-w)} - e^{-\mu (x-w+T)} \right] dW(w).
\end{split}
\end{equation}
\noindent Taking the derivative of \eqref{equation11} with respect to $x$ yields
\begin{equation}
\label{equation12}
\begin{split}
f(x) &= W(0) \left\lbrace \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} e^{-\lambda x} + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu} e^{-\mu x}\right\rbrace \\
&\quad + \quad \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} \int_{0}^{x} e^{-\lambda(x - w)} f(w)dw + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu}\int_{0}^{x} e^{-\mu(x - w)} f(w)dw.
\end{split}
\end{equation}
\noindent In what follows, we transform \eqref{equation12} into a second-order linear homogeneous differential equation.
Taking the derivative of Equation~\eqref{equation12} with respect to $x$ yields
\begin{equation}
\label{equation13}
\begin{split}
f'(x) &= -\lambda \left\lbrace W(0) \dfrac{\lambda \mu}{\mu - \lambda}\alpha_{\lambda}e^{-\lambda x} + \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} \int_{0}^x e^{-\lambda (x-w)}f(w)dw \right\rbrace \\
& \quad -\mu \left\lbrace W(0) \dfrac{\lambda \mu}{\lambda - \mu}\alpha_{\mu}e^{-\mu x} + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu} \int_{0}^x e^{-\mu (x-w)}f(w)dw \right\rbrace\\
& \quad + \dfrac{\lambda \mu}{\lambda - \mu}\alpha_{\mu} f(x)+ \dfrac{\lambda \mu}{\mu - \lambda}\alpha_{\lambda} f(x).\\
\end{split}
\end{equation}
\noindent Equation (\ref{equation13}) can be rewritten as
\begin{equation*}
\begin{split}
f'(x) &= \left( -\lambda - \mu + \dfrac{\lambda \mu}{\mu - \lambda}\alpha_{\lambda} + \dfrac{\lambda \mu}{\lambda - \mu}\alpha_{\mu} \right)f(x)\\
& \qquad+ \lambda \left\lbrace W(0) \dfrac{\lambda \mu}{\lambda - \mu}\alpha_{\mu}e^{-\mu x} + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu} \int_{0}^x e^{-\mu (x-w)}f(w)dw \right\rbrace\\
& \qquad + \mu \left\lbrace W(0) \dfrac{\lambda \mu}{\mu - \lambda}\alpha_{\lambda}e^{-\lambda x} + \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} \int_{0}^x e^{-\lambda (x-w)}f(w)dw \right\rbrace.
\end{split}
\end{equation*}
\noindent Differentiating the above equation leads to the following second-order homogeneous linear differential equation
\begin{equation}
\label{equation14}
f''(x) + A_{\lambda,\mu}f'(x) + B_{\lambda,\mu}f(x) = 0,
\end{equation}
\noindent where
\[ A_{\lambda,\mu} = \lambda + \mu - \dfrac{\lambda \mu}{\mu - \lambda}\alpha_{\lambda} - \dfrac{\lambda \mu}{\lambda - \mu}\alpha_{\mu} \qquad \mbox{and}\qquad B_{\lambda,\mu} = \lambda \mu - \dfrac{\lambda^2 \mu}{\lambda - \mu} \alpha_{\mu} - \dfrac{\lambda \mu^2}{\mu - \lambda}\alpha_{\lambda}. \]
\noindent To solve (\ref{equation14}), we apply the Laplace transform. The density $f$ has bounded support.
In order to invert the Laplace transform, we introduce a function $g$ with support $(0, \infty)$. Suppose that there exists a function $g$ which coincides with $f$ on $(0,K)$, so that it satisfies, for all $x > 0$,
\begin{equation*}
\begin{split}
g(x) &= G(0) \left\lbrace \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} e^{-\lambda x} + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu} e^{-\mu x} \right\rbrace + \dfrac{\lambda \mu}{\mu - \lambda} \alpha_{\lambda} \int_{0}^{x} e^{-\lambda(x - w)} g(w)dw \\
& \qquad + \dfrac{\lambda \mu}{\lambda - \mu} \alpha_{\mu}\int_{0}^{x} e^{-\mu(x - w)} g(w)dw,
\end{split}
\end{equation*}
\noindent where $G$ denotes the distribution function of $g$, and
\begin{equation}
\label{equation15}
g''(x) + A_{\lambda,\mu}g'(x) + B_{\lambda,\mu}g(x) = 0.
\end{equation}
\noindent Assume furthermore that $W(0) = G(0)$. Let $\Phi$ be the Laplace transform of $g$,
\begin{equation*}
\Phi(\theta) = \int_{0}^{\infty} e^{- \theta x} g(x) dx, \qquad \theta \in \mathbb{C}, \quad \mbox{Re}(\theta) \geq 0.
\end{equation*}
\noindent Taking the Laplace transform of (\ref{equation15}), we obtain
\begin{equation*}
(\theta^2 + A_{\lambda, \mu}\theta + B_{\lambda,\mu})\Phi(\theta) = (A_{\lambda,\mu} + \theta)g(0) + g'(0).
\end{equation*}
\noindent Rearranging the terms gives
\begin{equation}
\label{racine gamma}
\dfrac{\Phi(\theta)}{G(0)} = \dfrac{\theta \lambda \mu(\alpha_{\lambda} - \alpha_{\mu}) + \lambda \mu (\mu \alpha_{\lambda} - \lambda \alpha_{\mu}) }{\theta^2 (\mu - \lambda) + \theta \left[ (\mu ^2 - \lambda ^2) + \lambda \mu (\alpha_{\mu} - \alpha_{\lambda}) \right] + \lambda \mu \left[ \mu (1 - \alpha_{\lambda}) - \lambda (1 - \alpha_{\mu})\right]}.
\end{equation}
\noindent The denominator of $\dfrac{\Phi(\theta)}{G(0)}$ is clearly a polynomial of degree two. Its roots, denoted $\gamma_1$ and $\gamma_2$, have negative real parts.
Furthermore, the derivative of the denominator has a single zero, which is not a root of the denominator; thus the $\gamma_i$ ($i=1,2$) are simple roots. The numerator is a polynomial of degree one, and we have the partial fraction expansion
\begin{equation}
\label{equation16}
\dfrac{\Phi(\theta)}{G(0)} = \sum_{i=1}^2 \dfrac{C_i}{\theta - \gamma_i}.
\end{equation}
\noindent The constants $C_i$ ($i = 1,2$) are given by
\begin{eqnarray}
\label{constante C1}
C_1 = \lim_{\theta \rightarrow \gamma_1} \dfrac{\Phi(\theta)}{G(0)}(\theta - \gamma_1) = \dfrac{\gamma_1 \lambda \mu(\alpha_{\lambda} - \alpha_{\mu}) + \lambda \mu(\mu \alpha_{\lambda} - \lambda \alpha_{\mu}) }{(\mu - \lambda)(\gamma_1 - \gamma_2)},
\end{eqnarray}
\noindent and
\begin{eqnarray}
\label{constante C2}
C_2 = \lim_{\theta \rightarrow \gamma_2} \dfrac{\Phi(\theta)}{G(0)}(\theta - \gamma_2) = \dfrac{\gamma_2 \lambda \mu(\alpha_{\lambda} - \alpha_{\mu}) + \lambda \mu(\mu \alpha_{\lambda} - \lambda \alpha_{\mu}) }{(\mu - \lambda)(\gamma_2 - \gamma_1)}.
\end{eqnarray}
\noindent Inverting (\ref{equation16}) yields
\begin{eqnarray*}
g(x) = G(0)\sum_{i=1}^2 C_i e^{\gamma_i x}, \qquad x > 0 .
\end{eqnarray*}
\noindent Identifying $f(x)$ with $g(x)$ on $(0,K)$ gives
\begin{eqnarray*}
f(x) = W(0)\sum_{i=1}^2 C_i e^{\gamma_i x}.
\end{eqnarray*}
\noindent It remains to find the constant $W(0)$, which is done via the normalizing condition
\begin{eqnarray*}
W(0) + \int_{0}^K f(x) dx + B_K = 1.
\end{eqnarray*}
\noindent Therefore,
\begin{eqnarray}
\label{equation17}
W(0) = \left[ 1 - B_K \right]\left[ 1 + \int_{0}^K \sum_{i=1}^2 C_i e^{\gamma_i x} dx \right]^{-1}.
\end{eqnarray}
\noindent The probability of rejection is computed from (\ref{equation9}); we have
\begin{equation*}
\begin{split}
B_K &= W(0) \left\lbrace \dfrac{\mu}{\mu - \lambda}\alpha_{\lambda}e^{-\lambda K} + \dfrac{\lambda}{\lambda - \mu}\alpha_{\mu} e^{-\mu K} \right\rbrace \\
& \qquad + \dfrac{\mu}{\mu - \lambda}\alpha_{\lambda} e^{-\lambda K}\int_{0}^K e^{\lambda w} f(w) dw + \dfrac{\lambda}{\lambda - \mu}\alpha_{\mu} e^{-\mu K} \int_{0}^K e^{\mu w} f(w) dw.
\end{split}
\end{equation*}
\noindent Computing the integrals in the last equality gives
\begin{eqnarray*}
\int_{0}^K e^{\lambda w} f(w) dw = \dfrac{W(0)C_1}{\lambda + \gamma_1}\left[ e^{(\lambda + \gamma_1)K} - 1 \right] + \dfrac{W(0)C_2}{\lambda + \gamma_2}\left[ e^{(\lambda + \gamma_2)K} - 1 \right],
\end{eqnarray*}
\begin{eqnarray*}
\int_{0}^K e^{\mu w} f(w) dw = \dfrac{W(0)C_1}{\mu + \gamma_1}\left[ e^{(\mu + \gamma_1)K} - 1 \right] + \dfrac{W(0)C_2}{\mu + \gamma_2}\left[ e^{(\mu + \gamma_2)K} - 1 \right].
\end{eqnarray*}
\noindent Finally,
\begin{equation}
\label{equation18}
\begin{split}
B_K &= W(0) \dfrac{\mu}{\mu - \lambda}\alpha_{\lambda} e^{-\lambda K}\left\lbrace 1 + \dfrac{C_1}{\lambda + \gamma_1}\left[ e^{(\lambda + \gamma_1)K} - 1 \right] + \dfrac{C_2}{\lambda + \gamma_2}\left[ e^{(\lambda + \gamma_2)K} - 1 \right]\right\rbrace \\
& + W(0)\dfrac{\lambda}{\lambda - \mu}\alpha_{\mu} e^{-\mu K}\left\lbrace 1 + \dfrac{C_1}{\mu + \gamma_1}\left[ e^{(\mu + \gamma_1)K} - 1 \right] + \dfrac{C_2}{\mu + \gamma_2}\left[ e^{(\mu + \gamma_2)K} - 1 \right]\right\rbrace .
\end{split}
\end{equation}

\begin{rem}
Equation (\ref{equation14}) may be solved using the characteristic equation $t^2 + A_{\lambda, \mu}t + B_{\lambda, \mu} = 0$. The solution of (\ref{equation14}) is of the form $f(x) = C_1 e^{t_1 x} + C_2 e^{t_2 x}$, where $t_1 \neq t_2$ are the roots of the characteristic equation and $C_1$, $C_2$ are calculated from initial or boundary conditions.
\end{rem}

\noindent These conclusions are summarized in the following Proposition.

\begin{prp}
In the $D/M/1$ queue with exponential vacation times and single service discipline, under the conditions of Corollary~\ref{thm1}, the pdf of $W$ on $(0,K)$ is given by
\begin{equation*}
f(x) = W(0)\sum_{i=1}^2 C_i e^{\gamma_i x},
\end{equation*}
\noindent where $W(0)$ and $B_K$ are given by \eqref{equation17} and \eqref{equation18} respectively, the constants $C_1$ and $C_{2}$ are given by \eqref{constante C1} and \eqref{constante C2}, and $\gamma_{i}$, $i=1,2$, are the roots of the denominator in \eqref{racine gamma}.
\end{prp}
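For concreteness, the quantities appearing in the Proposition can be evaluated numerically. The sketch below (Python; the parameter values are assumed for illustration) computes $A_{\lambda,\mu}$, $B_{\lambda,\mu}$ and the characteristic roots, and verifies that, for these values, the roots are simple and have negative real parts, as claimed above.

```python
import cmath
import math

# Assumed illustrative parameters with lam != mu
lam, mu, T = 2.0, 1.0, 1.5
a_lam = math.exp(-lam * T) / (1.0 - math.exp(-lam * T))   # alpha_lambda
a_mu = math.exp(-mu * T) / (1.0 - math.exp(-mu * T))      # alpha_mu

# Coefficients of the ODE f'' + A f' + B f = 0, cf. (14)
A = lam + mu - lam * mu * a_lam / (mu - lam) - lam * mu * a_mu / (lam - mu)
B = (lam * mu - lam**2 * mu * a_mu / (lam - mu)
             - lam * mu**2 * a_lam / (mu - lam))

# Roots of the characteristic equation t^2 + A t + B = 0 (possibly complex)
disc = cmath.sqrt(A * A - 4.0 * B)
g1 = (-A + disc) / 2.0
g2 = (-A - disc) / 2.0
```

Since $A > 0$ and $B > 0$ for these parameter values, both roots indeed lie in the open left half-plane, which is consistent with $f$ being a bounded, integrable density on $(0,K)$.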
{ "attr-fineweb-edu": 1.371094, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUgFTxK7ICUuXemBTf
\section{Introduction\label{sec:Introduction}} Soccer is the most popular sport in the world, not only in terms of television audience (almost 4 billion followers in 200 countries) or players (more than 260 million)~\cite{Felix2020}, but also regarding research~\cite{kirkendall2020evolution}. Much of the research work proposed in recent years has focused on responding to the demand for applications capable of enriching the content of live broadcasts with augmented reality~\cite{goebert2020new} and applications aimed at analyzing and understanding the game~\cite{andrienko2019constructing,kapela2015real}. To tackle these high-level tasks, it is required to register the images in a model of the playing field~\cite{cuevas2020automatic} and/or calibrate the cameras~\cite{chen2018two,citraro2020real}. The strategies with these aims are typically based on the detection of key-points determined from the intersections between the line marks on the grass~\cite{yao2017fast}. These line marks can be of two types, straight lines and circles (seen as ellipses in the images). Therefore, the location of the line marks is a key stage in all these high-level tasks. However, existing methods exhibit shortcomings in difficult lighting conditions, require manual tuning or rely on assumptions about the straightness or distribution of line markings. In this paper we propose a novel method to detect the playing field line points and determine which of them belong to straight lines and which to ellipses. The quality of the results obtained has been assessed in a database composed of numerous annotated images taken from a wide variety of points of view, in different and challenging lighting conditions, and its usefulness has been demonstrated after being compared with other state-of-the-art methods. Although we focus on soccer videos, the proposed strategy can be adapted to other types of \textquotedblleft pitch sports\textquotedblright . 
\subsection{Contribution} The main contributions of this work are: \begin{itemize} \item Line mark segmentation using a stochastic watershed transformation. Many proposals for line marking detection start from a proto-edge-detection stage that yields many unwanted or duplicate edges. Unlike existing methods, we use the watershed transformation because it lends itself naturally to flood out irrelevant edges such as those due to the players or the ball. \item Seed placing strategy. Our proposal builds upon the stochastic watershed algorithm~\cite{angulo2007stochasticWatershed}, but our goal is different (line marking detection instead of region segmentation); hence, our method does not require the user to manually set a number of relevant regions to be sought and ensures quick convergence. \item Viewpoint-independent classification of line marks into the two kinds of primitive structures that appear in soccer fields: straight lines and ellipses. Previous works apply independent algorithms to detect straight lines and ellipses, while we perform a joint analysis that provides higher quality results. Additionally, unlike those works, we not only detect the center circle but also the penalty arcs. \end{itemize} \subsection{Organization} The paper is organized as follows: in section~\ref{sec:Related-work} we review existing algorithms for line marking detection. Sections~\ref{sec:Watershed-segmentation} and~\ref{sec:Line-classification} describe our proposal for line mark segmentation and classification, respectively. Experiment results are reported in section~\ref{sec:Results} and, finally, section~\ref{sec:Conclusions} presents the conclusions of the paper. \section{Related work\label{sec:Related-work}} To measure distances and speeds of players and/or ball throughout the matches, the images need to be registered to a model of the playing field. This is typically accomplished from sets of key-points that result from intersections between line marks. 
To bring out the white line marks from the rest of the elements in the playing field, some authors use edge detectors: the Sobel detector in~\cite{rao2015novel,zhang2015research}, the Canny detector in~\cite{direkoglu2018player,doria2021soccer}, or the Laplacian of Gaussian (LoG) detector in~\cite{szenberg2001automatic,bu2011automatic}. To obtain the points centered in the line marks, other authors use the Top-Hat transform~\cite{cuevas2020automatic,sun2009field,yang2017robust}. There are also works that apply combinations of morphological operations~\cite{aleman2014camera}. All these methods, although quick and simple, have the important drawback of the correct selection of a threshold. If the threshold value is too high, they are not able to detect the lines in the lower contrast areas of the image (heavily shadowed or brightly lit areas). On the other hand, if the threshold is too low, numerous false detections occur due to the grass texture and the presence of players on the pitch. \begin{figure}[tbh] \centering{}\includegraphics[width=1\columnwidth]{Figures/RelatedWork2}\caption{\label{fig:Related}Examples of typical line mark detections using the Hough transform superimposed on the original images.} \end{figure} Once the white lines mask is obtained, most strategies apply the Hough transform to detect the straight line marks~\cite{yao2017fast}. The Hough transform is computationally efficient and provides successful results in simple images. However, it is very sensitive to the presence of the aforementioned false detections and therefore results in numerous false lines. An additional drawback of Hough-based methods is the need to set the number of lines to consider. Typically, in images captured by the Master Camera\footnote{The master camera is the one used most of the time in soccer broadcasting and also the only one considered in most approaches in the literature. It is placed approximately on the extension of the halfway line. 
It performs pan, tilt, and zoom movements, but not rotations around its longitudinal axis (i.e., no roll).}, the maximum number of straight lines that can be seen is 10 (straight lines on each of the halves of the playing field). However, in many cases significantly fewer lines are visible. If the number of lines considered is too high, in images showing few straight lines several false detections are obtained (see left image in Fig.~\ref{fig:Related}). On the other hand, if the number of lines considered is too low, in images with many lines (e.g., images showing any of the goal areas), some lines are misdetected (see right image in Fig.~\ref{fig:Related}). To reduce these misdetections, some works have proposed the application of the Hough transform independently along small windows that cover the entire image~\cite{ali2012efficient}. A further drawback of Hough-based methods is that the images generally suffer from radial distortion. Consequently, duplicate lines are typically obtained (see the bottom line shown in the left image in Fig.~\ref{fig:Related}). Once the straight lines have been detected, they are typically classified according to their tilt~\cite{cuevas2020automatic,sadlier2005event}. However these analyses are limited to the cases in which the position of the camera used to acquired the images is known~\cite{jabri2011camera}. Alternatively, the lines are classified in only two sets (longitudinal and transverse) that are used to determine two vanishing points~\cite{hayet2005fast,homayounfar2017sports}. There are also strategies that focus on detecting the center circle of the playing field~\cite{aleman2014camera}. Some of them use a 6-dimensional Hough transform to detect ellipses~\cite{mukhopadhyay2015survey} since, due to the perspective of the images, the center circle is seen as an ellipse in the images. These strategies have very high computational and memory requirements~\cite{xu1990new,bergen1991probabilistic}. 
Additionally, since the Hough transform is applied on edge or Top-Hat images, their results are inaccurate because of the presence of players, billboards, etc. An additional limitation of these algorithms is that they cannot obtain successful results in images where the center circle does not appear complete. Alternatively, some authors have proposed strategies using Least Squares Fitting (LSF) methods~\cite{wu2019efficient}. However, since they are also very sensitive to the presence of data that do not belong to the ellipse~\cite{wan2003real}, they require complex steps to discard data from other lines, players, etc.~\cite{cuevas2020automatic}. Another important limitation of all these strategies is that none of them is able to detect the circles of the penalty areas.

\section{Watershed-based line mark segmentation\label{sec:Watershed-segmentation}}

We propose a completely automatic procedure based on stochastic watershed~\cite{angulo2007stochasticWatershed,malmberg2014exactStochastic} to detect the line markings in a soccer pitch. The usual purpose of a watershed transform is to segment regions in a grayscale image separated by higher ground~\cite{maxwell1870onHillsAndDales}, interpreting the levels of the image as terrain elevation. However, since every local minimum is such a region, the user has to manually determine how many significant regions there are in the image, and even small discontinuities in the ridges between regions can upset the results (see Fig.~\ref{fig:watershed-marker-position-sensitivity}).
\begin{figure}[tbh]
\centering{}\includegraphics[width=.6\columnwidth]{Figures/marker_position_sensitivity}\caption{\label{fig:watershed-marker-position-sensitivity}Examples of different segmentation results (bottom) depending on the position of the seeds (top).
Note that although a human observer interprets as clearly distinct areas those above and below the line marking, they are actually joined due to the interference of the player.} \end{figure} In our case, however, we are not interested in the regions themselves but in the lines that delimit them, and we do not care if a discontinuity in a line as the one shown in Fig.~\ref{fig:watershed-marker-position-sensitivity} joins two regions. The basic idea is that, since the line markings are brighter than their surroundings, if we manage to place \emph{numerous} markers well distributed throughout both sides of every line mark, the result will be an image whose watershed lines will include (almost) every portion of the line markings we want to detect, along with a lot of spurious lines due to having placed many more markers than there are regions. However, upon repetition of the experiment with different sets of markers, spurious lines will change, as illustrated in Fig.~\ref{fig:watershed-marker-position-sensitivity}, but true lines will arise again, and we will reliably detect them by averaging multiple experiments. \begin{figure}[tbh] \centering{}\includegraphics[width=.7\columnwidth]{Figures/proposal_iteration_example}\caption{\label{fig:our-proposal-one-iteration-example}Left to right, top to bottom: original image, $\protect\imagen$; field of play mask, $\protect\mascaracampo$; regions determined by a single experiment of stochastic watershed segmentation; corresponding boundaries.} \end{figure} Fig.~\ref{fig:our-proposal-one-iteration-example} shows an example of a single experiment whose results clearly exhibit most of the line markings we are looking for, where even lines interrupted by players (e.g., the penalty arc) are present at both sides of the interruption. Let $\maskgrayscale$ be the grayscale image to segment (obtained as described in section~\ref{sec:Preprocessing}), with a resolution of $\alto$ rows and $\ancho$ columns. 
A possible first approach is to simply generate sets of $\numseeds$ seeds uniformly distributed across the image: \begin{equation} \seedset_{\expindex}=\left\{ \individualseed_{\expindex,\seedindex}=\begin{bmatrix}\rowcoord_{\expindex,\seedindex}\\ \colcoord_{\expindex,\seedindex} \end{bmatrix}\colon\begin{array}{c} \rowcoord_{\expindex,\seedindex}=\uniformrv 1{\alto},\\ \colcoord_{\expindex,\seedindex}=\uniformrv 1{\ancho},\\ \seedindex\in\left\{ 1,2,\ldots,\numseeds\right\} \end{array}\right\} ,\label{eq:uniform-seed-generation} \end{equation} where $\rowcoord$ and $\colcoord$ are the row and column coordinates where each seed $\individualseed$ is placed, and $\expindex$ identifies each individual watershed experiment; let us notate the standard seeded watershed transform on a grayscale image $\maskgrayscale$ with the set of seeds $\seedset$ as $\seededWS{\maskgrayscale}{\seedset}$, yielding a binary image where the pixels corresponding to watershed lines are set to 1 and all others are set to 0. Thus, we can compute \begin{equation} \randomWatershedImage=\frac{1}{\numexperiments}\sum_{\expindex=1}^{\numexperiments}\seededWS{\maskgrayscale}{\seedset_{\expindex}}, \end{equation} where $\numexperiments$ is the number of experiments. We can interpret the value of each pixel of $\randomWatershedImage$ as akin to the probability of it being a true watershed line of $\maskgrayscale$. 
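This uniform-seeding scheme can be sketched as follows (an illustrative sketch, not the authors' implementation; it assumes scikit-image, whose seeded `watershed` with `watershed_line=True` marks boundary pixels with label 0):

```python
import numpy as np
from skimage.segmentation import watershed

def stochastic_watershed(g, n_seeds=200, n_experiments=20, rng=None):
    """Average the watershed-line indicator over repeated random seedings.

    g is the grayscale image G (line marks brighter than their
    surroundings, interpreted as terrain elevation).  Returns P, where
    P[r, c] approximates the probability that pixel (r, c) lies on a
    true watershed line of G.
    """
    rng = np.random.default_rng(rng)
    h, w = g.shape
    p = np.zeros((h, w), dtype=float)
    for _ in range(n_experiments):
        # One experiment: N seeds with uniform row/column coordinates.
        markers = np.zeros((h, w), dtype=np.int32)
        rows = rng.integers(0, h, size=n_seeds)
        cols = rng.integers(0, w, size=n_seeds)
        markers[rows, cols] = np.arange(1, n_seeds + 1)
        labels = watershed(g, markers, watershed_line=True)
        p += (labels == 0)  # pixels on the watershed lines of this run
    return p / n_experiments
```

Spurious boundaries change position from run to run and are averaged away, whereas true line marks are detected in almost every run and accumulate values close to 1.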
Since we are interested in robust line detection, the final boundary mask $\mascaralineas$ will only consider as positive detection pixels with values exceeding a threshold $\thresholdRWS$\footnote{We have used $\thresholdRWS=0.8$ throughout all the reported experiments to guarantee a robust detection, but it is not a sensitive parameter, the results are very similar for a wide range of values around the chosen one.} and located within a region of interest, in our case the playing field, denoted as $\mascaracampo$\footnote{There are many available methods to segment the playing field, i.e., the area covered by grass, in the image, namely~\cite{quilon2015unsupervised}.}: \begin{equation} \myfunc{\mascaralineas}{\rowcoord,\colcoord}=\begin{cases} 1, & \myfunc{\randomWatershedImage}{\rowcoord,\colcoord}\geq\thresholdRWS\land\myfunc{\mascaracampo}{\rowcoord,\colcoord}=1;\\ 0, & \myfunc{\randomWatershedImage}{\rowcoord,\colcoord}<\thresholdRWS\lor\myfunc{\mascaracampo}{\rowcoord,\colcoord}=0. \end{cases} \end{equation} \begin{figure}[tbh] \begin{centering} \includegraphics[width=.7\columnwidth]{Figures/uniform} \par\end{centering} \caption{\label{fig:uniform-distribution-watershed}Results applying stochastic watershed with uniform seed distribution for different numbers of seeds and experiments. Top left: original input; top right: few experiments and few seeds ($\protect\numexperiments=20,\protect\numseeds=200$); bottom left: few experiments and many seeds ($\protect\numexperiments=20,\protect\numseeds=1000$); bottom right: many experiments and few seeds ($\protect\numexperiments=200,\protect\numseeds=200$).} \end{figure} However, distributing seeds uniformly across both rows and columns does not guarantee that every region of the image is covered, as Fig.~\ref{fig:uniform-distribution-watershed} shows: using a relatively small number of seeds has a non-negligible chance of leaving regions uncovered, leading to misdetected lines. 
Line misdetection can be solved either by increasing the number of seeds or by increasing the number of experiments, but the former increases false detections and, while the latter does produce correct results, it does so at a significantly higher cost. To deal with these problems, we propose a windowed random seed generation which ensures that all the regions of the image contain seeds, attaining quick convergence without increasing the rate of false detections. Let us divide the input image $\maskgrayscale$ into a lattice of $\numseedsrows$ vertical divisions and $\numseedscols$ horizontal divisions that will delimit $\numseedsrows\times\numseedscols$ non-overlapping rectangular regions (w.l.o.g., let us assume $\alto$ and $\ancho$ are divisible by $\numseedsrows$ and $\numseedscols$ respectively to simplify notation); then we will place a single seed with uniform distribution into each of these regions. Thus, we replace equation~\ref{eq:uniform-seed-generation} with
\begin{equation}
\seedset_{\expindex}=\left\{ \individualseed_{\expindex,\seedindexrow,\seedindexcol}=\begin{bmatrix}\rowcoord_{\expindex,\seedindexrow,\seedindexcol}\\
\colcoord_{\expindex,\seedindexrow,\seedindexcol}
\end{bmatrix}\colon\begin{array}{c}
\rowcoord_{\expindex,\seedindexrow,\seedindexcol}=1+\modulo{\rowcoord_{\mathrm{o},\expindex}+\frac{\seedindexrow\alto}{\numseedsrows}+\uniformrv 1{\frac{\alto}{\numseedsrows}},\alto},\\
\colcoord_{\expindex,\seedindexrow,\seedindexcol}=1+\modulo{\colcoord_{\mathrm{o},\expindex}+\frac{\seedindexcol\ancho}{\numseedscols}+\uniformrv 1{\frac{\ancho}{\numseedscols}},\ancho},\\
\rowcoord_{\mathrm{o},\expindex}=\uniformrv 1{\frac{\alto}{\numseedsrows}},\quad\colcoord_{\mathrm{o},\expindex}=\uniformrv 1{\frac{\ancho}{\numseedscols}},\\
\seedindexrow\in\left\{ 0,1,\ldots,\numseedsrows-1\right\} ,\quad\seedindexcol\in\left\{ 0,1,\ldots,\numseedscols-1\right\}
\end{array}\right\} ,
\end{equation}
leaving the rest of the procedure unchanged.
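The windowed generator above can be sketched as follows (illustrative code; function and variable names are ours, not from any released implementation):

```python
import numpy as np

def windowed_seeds(h, w, p_rows, p_cols, rng=None):
    """One uniformly placed seed per cell of a p_rows x p_cols lattice.

    The lattice origin (r_o, c_o) is itself drawn at random for every
    experiment and coordinates wrap around (the modulo in the equation
    above), which removes the lattice-shaped pattern while guaranteeing
    that no window of the image is left without seeds.
    """
    rng = np.random.default_rng(rng)
    ch, cw = h // p_rows, w // p_cols     # cell size (h, w assumed divisible)
    r_o = rng.integers(0, ch)             # random lattice origin, per experiment
    c_o = rng.integers(0, cw)
    rows, cols = [], []
    for i in range(p_rows):
        for j in range(p_cols):
            rows.append((r_o + i * ch + rng.integers(0, ch)) % h)
            cols.append((c_o + j * cw + rng.integers(0, cw)) % w)
    return np.array(rows), np.array(cols)
```

These (row, column) pairs replace the uniform coordinates when building the marker image of each experiment; the distance between neighboring seeds is bounded by roughly twice the cell size.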
Fig.~\ref{fig:Results-old-windows} shows the results of this seeding method, where we can see that a small number of experiments ($\numexperiments=20$) already yields results comparable to those obtained with the uniform seed generator for $\numexperiments=200$ (shown in Fig.~\ref{fig:uniform-distribution-watershed}). Furthermore, we can also see that significantly increasing the number of experiments does not yield noticeable advantages. If the lattice used to generate seeds has the same origin in every experiment (e.g., $\rowcoord_{\mathrm{o},\expindex}=\colcoord_{\mathrm{o},\expindex}=0$), the procedure yields results which exhibit a faint yet definite pattern, roughly shaped like the lattice, in flat areas of $\maskgrayscale$, as illustrated in Fig.~\ref{fig:Results-old-windows}; although this pattern does not actually affect the results, it is easy to eliminate by using a different random origin for the lattice in each experiment, as proposed.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=.7\columnwidth]{Figures/oldandslidingwindows}
\par\end{centering}
\caption{\label{fig:Results-old-windows}Results applying stochastic watershed with windowed random seed generation. On the left, $\protect\numexperiments=20$; on the right, $\protect\numexperiments=200$. Rows: fixed lattice across experiments (top), proposed approach with different origin in each experiment (middle) and proposed approach after thresholding (bottom).}
\end{figure}
The proposed seed location algorithm guarantees a bounded distance between seeds, like the iterative Poisson disk sampling~\cite{bridson2007fastPoisson}, but it has a substantially lower cost and is more amenable to parallelization.

\subsection{Preprocessing\label{sec:Preprocessing}}

As explained before, it should reasonably be possible to use the brightness of the RGB input image $\imagen$ as the input $\maskgrayscale$ for the line detection stage because the line markings have a higher brightness than their surroundings.
However, in practice, this results in many false line detections due to playing field texture and capture noise. Therefore, before applying the watershed-based line detection we apply a simple pre-processing stage to enhance the input image and maximize the contrast of the white lines against the grass. \begin{figure}[tbh] \centering{}\includegraphics[width=.7\columnwidth]{Figures/local_thresholding}\caption{\label{fig:thresholding-illumination-change-1}Example of false contours due to abrupt illumination changes when performing a local thresholding operation (only the contours inside the pitch area are shown).} \end{figure} A simple solution is to apply a local thresholding operator~\cite{Bradley2007} to remove the noisy areas before applying the line detection stage. However, in the face of abrupt illumination changes this approach is less than ideal because it may create false contours, which will then be erroneously detected as lines (see Fig.~\ref{fig:thresholding-illumination-change-1}). To only preserve thin areas that are lighter than their surroundings in all directions, we use the Top-Hat transform~\cite{Meyer1979}: $\cosatophat X=X-X\circ\opening$, where $X$ is a grayscale image, $\circ$ denotes the opening morphological operation and $\opening$ is a structuring element whose diameter must be slightly greater than the maximum width of the features to be preserved (line marks in our case\footnote{In the images of the database we have used to assess the quality of the strategy (see Section~\ref{sec:Results}) the width of the lines is typically under 10 pixels. Therefore, a structuring element with a diameter of 11 pixels has been used.}). The grayscale image to apply the Top-Hat transform onto could just be the luminance of $\imagen$, but since the line markings are white, we profit from the fact that they must stand out in each of the R, G and B channels of $\imagen$. 
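This per-channel enhancement can be sketched with scikit-image (an illustrative sketch; the 11-pixel disk matches the structuring-element diameter discussed above):

```python
import numpy as np
from skimage.morphology import disk, white_tophat

def line_contrast(rgb):
    """Per-channel white Top-Hat, X - (X o B), combined by a pixel-wise
    minimum so that only features which are thin AND bright in all of
    R, G and B survive; non-white or wide bright features are discarded.
    """
    se = disk(5)  # 11-pixel diameter, slightly above the ~10 px line width
    tophats = [white_tophat(rgb[..., k], se) for k in range(3)]
    return np.minimum.reduce(tophats)
```

A thin white stroke keeps its full contrast in the output, while wide bright regions (sunlit patches, billboards) and colored features are suppressed.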
Thus, we use $\maskgrayscale=\myfunc{\min}{\cosatophat R,\cosatophat G,\cosatophat B}$, so that non-white features are discarded.

\section{Classification of line marks\label{sec:Line-classification}}

\begin{figure*}[tbh]
\begin{centering}
\includegraphics[width=1\textwidth]{Figures/Class_1}\caption{\label{fig:LineClassification}(a) Original image with detected line points in cyan. (b) Connected regions (one different color per region). (c) Ellipses obtained in the initial classification. (d) Straight lines obtained in the initial classification. (e) Initial classification: regions associated to ellipses in yellow, regions associated to straight lines in white, and discarded regions in red. (f) Straight lines and regions after the straight line merging step (the same color for regions associated with the same straight line). (g) Ellipses and regions after the ellipse merging step. (h) Final classification: regions associated to ellipses in yellow, regions associated to straight lines in white, and discarded regions in red. (i) Final straight lines with their associated regions in white and final ellipses with their corresponding regions in yellow.}
\par\end{centering}
\end{figure*}
The last stage of our strategy is in charge of determining which of the primitive structures in the playing field (straight line or ellipse) the detected line points belong to. To this end, we have developed a procedure based on fitting the detected line points to both types of primitive structures, which consists of the following steps:
\begin{enumerate}
\item Line point segmentation: The line points in $\mascaralineas$ are segmented into the set of $N_{R}$ connected regions, $R=\{r_{i}\}_{i=1}^{N_{R}}$, that results after removing line intersections (i.e., line points with more than two neighbors).
As an example, Fig.~\ref{fig:LineClassification} shows an image with the line points in $\mascaralineas$ (Fig.~\ref{fig:LineClassification}.a) that has resulted in the set of $N_{R}=26$ connected regions in Fig.~\ref{fig:LineClassification}.b.
\item Initial classification: Each region in $R$ is classified as part of an ellipse or part of one or more straight lines.
\begin{enumerate}
\item The least squares fitting algorithm in~\cite{fitzgibbon1999direct} is applied to obtain the ellipse that best fits each connected region. The root mean square error (RMSE) of this fit is used as a measure of the quality of the fit.
\item Deming regression~\cite{cornbleet1979incorrect} is used to obtain the straight line that best fits each connected region. If this fit is not accurate enough, the region is divided into the smallest set of subregions that allows each of them to be accurately fitted by a straight line. Given that regions corresponding to straight lines tend to be noisy, we have decided that a fit is accurate enough when the RMSE of the fit is lower than 2~px.
\item Each region or subregion is associated with the primitive structure (ellipse or line) that fits best (i.e., the one with the smallest RMSE).
\end{enumerate}
Smaller regions (e.g., fewer than 50 pixels) are not representative enough by themselves. Therefore, they are initially left unassigned to either primitive structure type. In the example of Fig.~\ref{fig:LineClassification} it can be seen that after applying this stage, 2 regions associated with ellipses (Fig.~\ref{fig:LineClassification}.c) and 18 regions associated with straight lines (Fig.~\ref{fig:LineClassification}.d) have been obtained. In addition, in Fig.~\ref{fig:LineClassification}.e it can be seen that 9 regions have been left unassigned (those depicted in red). Therefore, in this example we have gone from 26 to 29 regions, since 3 of the regions have been split to fit two straight lines each.
\item Straight line merging: Regions associated with straight lines are merged whenever, after merging, they still result in an accurate fit. Here, since some lines are represented by segments that span practically the entire image and are therefore noticeably affected by radial distortion, the criterion that determines whether a fit is accurate enough has been relaxed, allowing a maximum RMSE of 4~px. In this merging step, the small regions discarded in the previous step are also considered. As can be seen in Fig.~\ref{fig:LineClassification}.f, after applying this step, we have gone from 18 regions represented by 18 lines to 21 regions represented by 8 lines (3 of the previously discarded regions have been incorporated).
\item Ellipse merging: Regions associated with ellipses are merged if together they still result in a good fit (the RMSE resulting from the fit is at most 4~px). In this step, the possible fusion with regions previously discarded due to their size is also considered. Notably, this step considers not only unassigned or elliptical regions, but also regions hitherto classified as straight, because the curvature of an ellipse varies so much that parts of it may locally fit a straight line (for example, the region corresponding to the top of the central circle in the example in Fig.~\ref{fig:LineClassification}), and their correspondence with elliptical structures can only be determined when multiple regions are considered together. Fig.~\ref{fig:LineClassification}.g shows that after applying this step, the mentioned region has been associated with the ellipse corresponding to the central circle of the playing field.
\item Refinement: Ellipses and straight lines are refined by removing pixels that deviate significantly from the estimated models. The criterion used to determine which points fit well is the same as that used in steps 3 and 4 to perform region merging (i.e., pixels whose distance to the model is greater than 4~px are removed).
Fig.~\ref{fig:LineClassification}.h shows the final classification of the line points in $\mascaralineas$. Thanks to this last stage, small segments that do not actually belong to the geometric model have been discarded (for example, the leftmost piece of the set of regions associated with the upper touchline).
\end{enumerate}
The result in Fig.~\ref{fig:LineClassification}.i shows that the proposed strategy has correctly classified most of the line points detected in the previous stage into their corresponding ellipses (central circle and arc of the left area) and straight lines. This image also shows that 7 of the 8 straight lines have been correctly modeled; the only misdetection, due to its small size in this image, occurs on the upper line of the goal area.

\section{Results\label{sec:Results}}

\begin{figure*}[htb]
\centering{}\includegraphics[width=1\textwidth]{Figures/Res1-transposed}\caption{\label{fig:Some-representative-results}Some representative results obtained in the test sequences. (a)~Original images. (b)~Line mark segmentation: true positives in green, false negatives in blue, and false positives in red. (c)~Line mark classification: in white the points assigned to straight lines, in yellow the points assigned to elliptical lines, and in red the discarded points. (d)~Primitives: straight lines in blue, regions associated to straight lines in white, ellipses in purple, regions associated to ellipses in yellow.}
\end{figure*}
To assess the quality of the proposed strategy, we have created a new and public database named LaSoDa\footnote{http://www.gti.ssr.upm.es/data/}. As far as we know, there is only one other database that, like ours, allows the evaluation of the quality of algorithms to detect or segment line marks in images of soccer matches~\cite{homayounfar2017sports}.
However, this database presents two serious shortcomings: all its images have been acquired from the same viewpoint and similar wide-view angle, limiting the variety of its contents; and, crucially, many of the homography matrices provided in this database are too inaccurate to provide a good registration of a model of the pitch with the images, precluding a pixel-level evaluation of line mark detections. LaSoDa is composed of 60 annotated images from matches in five stadiums with different characteristics (e.g., positions of the cameras, view angles, grass colors) and light conditions (day and night). Its images cover all areas of the playing field, show five different zoom levels---from 1 (closest zoom) to 5 (widest zoom)---, have been acquired with four different types of cameras---master camera (MC), side camera (SC), end camera (EC), and aerial camera (AC)---, and include different and challenging lighting conditions (e.g., day and night matches, and some heavily shaded images). The quality of the results has been measured by the recall ($\recall$), precision ($\precision$), and F-score ($\fscore$) metrics, which are computed as:
\begin{equation}
\recall=\frac{\truepos}{\truepos+\falseneg},\;\precision=\frac{\truepos}{\truepos+\falsepos},\;\fscore=\frac{2\truepos}{2\truepos+\falsepos+\falseneg},
\end{equation}
where $\truepos$, $\falseneg$, and $\falsepos$ are, respectively, the amounts of true positives, false negatives and false positives.
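For reference, these pixel-level metrics can be computed from binary detection and ground-truth masks as a direct transcription of the equations above:

```python
import numpy as np

def prf(detected, ground_truth):
    """Recall, precision and F-score between two binary masks."""
    detected = detected.astype(bool)
    ground_truth = ground_truth.astype(bool)
    tp = np.count_nonzero(detected & ground_truth)
    fp = np.count_nonzero(detected & ~ground_truth)
    fn = np.count_nonzero(~detected & ground_truth)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * tp / (2 * tp + fp + fn)
    return recall, precision, f_score
```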
{ \setlength{\tabcolsep}{1.5pt} \renewcommand*{\arraystretch}{1.1} \footnotesize \begin{longtable}[c]{|ccc|ccc||ccc|ccc||ccc|ccc|} \caption{\label{tab:Summary-of-results}Results at pixel and object level} \tabularnewline \hline \multirow{3}{*}{\makecell{Image\\Id.}} & \multirow{3}{*}{\makecell{Camera\\Type}} & \multirow{3}{*}{\makecell{Zoom\\level}} & \multicolumn{3}{c||}{Segmentation} & \multicolumn{6}{c||}{Classification (px)} & \multicolumn{6}{c|}{Classification (obj)}\tabularnewline \cline{7-18} \cline{8-18} \cline{9-18} \cline{10-18} \cline{11-18} \cline{12-18} \cline{13-18} \cline{14-18} \cline{15-18} \cline{16-18} \cline{17-18} \cline{18-18} & & & \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Straight} & \multicolumn{3}{c||}{Elliptical} & \multicolumn{3}{c|}{Straight} & \multicolumn{3}{c|}{Elliptical}\tabularnewline \cline{4-18} \cline{5-18} \cline{6-18} \cline{7-18} \cline{8-18} \cline{9-18} \cline{10-18} \cline{11-18} \cline{12-18} \cline{13-18} \cline{14-18} \cline{15-18} \cline{16-18} \cline{17-18} \cline{18-18} & & & rec & pre & F & rec & pre & F & rec & pre & F & rec & pre & F & rec & pre & F\tabularnewline \endfirsthead \hline \multirow{3}{*}{\makecell{Image\\Id.}} & \multirow{3}{*}{\makecell{Camera\\Type}} & \multirow{3}{*}{\makecell{Zoom\\level}} & \multicolumn{3}{c||}{Segmentation} & \multicolumn{6}{c||}{Classification (px)} & \multicolumn{6}{c|}{Classification (obj)}\tabularnewline \cline{7-18} \cline{8-18} \cline{9-18} \cline{10-18} \cline{11-18} \cline{12-18} \cline{13-18} \cline{14-18} \cline{15-18} \cline{16-18} \cline{17-18} \cline{18-18} & & & \multicolumn{3}{c||}{} & \multicolumn{3}{c|}{Straight} & \multicolumn{3}{c||}{Elliptical} & \multicolumn{3}{c|}{Straight} & \multicolumn{3}{c|}{Elliptical}\tabularnewline \cline{4-18} \cline{5-18} \cline{6-18} \cline{7-18} \cline{8-18} \cline{9-18} \cline{10-18} \cline{11-18} \cline{12-18} \cline{13-18} \cline{14-18} \cline{15-18} \cline{16-18} \cline{17-18} \cline{18-18} & & & rec & pre & F & rec & pre 
& F & rec & pre & F & rec & pre & F & rec & pre & F\tabularnewline \endhead \hline 1--01 & MC & 2 & 0.90 & 0.99 & 0.94 & 0.78 & 1.00 & 0.88 & 1.00 & 1.00 & 1.00 & 0.75 & 1.00 & 0.86 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--02 & MC & 4 & 0.94 & 0.98 & 0.96 & 0.89 & 1.00 & 0.94 & 0.96 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--03 & EC & 1 & 0.93 & 1.00 & 0.96 & 0.81 & 1.00 & 0.90 & 0.85 & 1.00 & 0.92 & 0.75 & 0.82 & 0.78 & 0.67 & 1.00 & 0.80\tabularnewline \hline 1--04 & AC & 3 & 0.97 & 1.00 & 0.99 & 0.95 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--05 & MC & 4 & 0.99 & 0.97 & 0.98 & 0.98 & 1.00 & 0.99 & - & - & - & 1.00 & 1.00 & 1.00 & - & - & -\tabularnewline \hline 1--06 & MC & 2 & 0.87 & 0.99 & 0.93 & 0.78 & 1.00 & 0.87 & 0.98 & 1.00 & 0.99 & 1.00 & 0.83 & 0.91 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[0] \hline 1--07 & MC & 5 & 0.89 & 1.00 & 0.94 & 0.84 & 0.94 & 0.89 & 0.00 & 1.00 & 0.00 & 1.00 & 0.78 & 0.88 & 0.00 & 1.00 & 0.00\tabularnewline \hline 1--08 & MC & 3 & 0.96 & 0.99 & 0.97 & 0.95 & 1.00 & 0.97 & - & - & - & 1.00 & 1.00 & 1.00 & - & - & -\tabularnewline \hline 1--09 & MC & 3 & 0.97 & 1.00 & 0.98 & 0.96 & 1.00 & 0.98 & 0.76 & 1.00 & 0.86 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--10 & SC & 3 & 0.92 & 0.99 & 0.95 & 0.89 & 1.00 & 0.94 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--11 & AC & 3 & 1.00 & 1.00 & 1.00 & 0.98 & 1.00 & 0.99 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 1--12 & MC & 4 & 0.94 & 1.00 & 0.97 & 0.89 & 1.00 & 0.94 & 0.95 & 1.00 & 0.98 & 1.00 & 0.86 & 0.92 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[1] \hline 2--01 & MC & 3 & 0.97 & 0.98 & 0.98 & 0.91 & 0.96 & 0.93 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 2--02 & MC & 3 & 0.96 & 0.99 & 0.98 & 0.94 & 0.96 & 0.95 & 0.00 & 1.00 & 0.00 
& 1.00 & 0.80 & 0.89 & 0.00 & 1.00 & 0.00\tabularnewline \hline 2--03 & MC & 3 & 0.97 & 0.99 & 0.98 & 0.95 & 1.00 & 0.97 & 0.99 & 1.00 & 1.00 & 1.00 & 0.86 & 0.92 & 1.00 & 1.00 & 1.00\tabularnewline \hline 2--04 & EC & 4 & 0.98 & 0.98 & 0.98 & 0.97 & 1.00 & 0.98 & 0.97 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 2--05 & SC & 3 & 0.99 & 0.99 & 0.99 & 0.98 & 0.99 & 0.99 & 0.96 & 1.00 & 0.98 & 1.00 & 0.90 & 0.95 & 1.00 & 1.00 & 1.00\tabularnewline \hline 2--06 & SC & 3 & 0.98 & 0.99 & 0.99 & 0.97 & 1.00 & 0.98 & 0.98 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[0] \hline 2--07 & MC & 3 & 0.93 & 1.00 & 0.96 & 0.90 & 1.00 & 0.95 & 0.94 & 1.00 & 0.97 & 1.00 & 0.67 & 0.80 & 1.00 & 1.00 & 1.00\tabularnewline \hline 2--08 & MC & 2 & 0.95 & 0.99 & 0.97 & 0.96 & 0.96 & 0.96 & 0.00 & 1.00 & 0.00 & 1.00 & 0.80 & 0.89 & 0.00 & 1.00 & 0.00\tabularnewline \hline 2--09 & SC & 5 & 0.82 & 0.97 & 0.89 & 0.79 & 1.00 & 0.88 & - & - & - & 1.00 & 0.83 & 0.91 & - & - & -\tabularnewline \hline 2--10 & SC & 5 & 0.76 & 0.97 & 0.85 & 0.70 & 1.00 & 0.83 & - & - & - & 1.00 & 0.71 & 0.83 & - & - & -\tabularnewline \hline 2--11 & EC & 4 & 0.97 & 0.99 & 0.98 & 0.94 & 0.88 & 0.91 & 0.58 & 1.00 & 0.73 & 0.67 & 0.44 & 0.53 & 0.50 & 1.00 & 0.67\tabularnewline \hline 2--12 & SC & 1 & 0.96 & 0.97 & 0.97 & 0.93 & 0.97 & 0.95 & 0.94 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[1] \hline 3--01 & EC & 1 & 0.93 & 0.98 & 0.95 & 0.91 & 0.99 & 0.95 & 0.96 & 1.00 & 0.98 & 1.00 & 0.75 & 0.86 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--02 & MC & 2 & 0.99 & 0.99 & 0.99 & 0.97 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 0.90 & 0.95 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--03 & MC & 4 & 0.97 & 1.00 & 0.98 & 0.93 & 1.00 & 0.97 & - & - & - & 1.00 & 1.00 & 1.00 & - & - & -\tabularnewline \hline 3--04 & MC & 5 & 0.94 & 1.00 & 0.97 & 0.91 & 0.92 & 0.92 & 0.00 & 1.00 & 0.00 & 1.00 & 0.89 & 0.94 & 0.00 & 1.00 & 
0.00\tabularnewline \hline 3--05 & MC & 1 & 0.99 & 0.99 & 0.99 & 0.99 & 1.00 & 1.00 & 0.99 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--06 & MC & 4 & 0.95 & 0.98 & 0.96 & 0.94 & 1.00 & 0.97 & 0.94 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[0] \hline 3--07 & AC & 3 & 1.00 & 1.00 & 1.00 & 0.96 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 0.73 & 0.84 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--08 & MC & 1 & 0.99 & 0.99 & 0.99 & 0.99 & 1.00 & 1.00 & 0.96 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--09 & AC & 1 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.98 & 1.00 & 0.99 & 1.00 & 0.89 & 0.94 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--10 & SC & 5 & 0.97 & 0.99 & 0.98 & 0.95 & 0.96 & 0.96 & 0.00 & 1.00 & 0.00 & 1.00 & 0.82 & 0.90 & 0.00 & 1.00 & 0.00\tabularnewline \hline 3--11 & AC & 4 & 0.98 & 1.00 & 0.99 & 0.97 & 1.00 & 0.98 & 0.95 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 3--12 & AC & 1 & 0.98 & 1.00 & 0.99 & 0.97 & 1.00 & 0.98 & 0.99 & 1.00 & 0.99 & 0.78 & 1.00 & 0.88 & 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[1] \hline 4--01 & AC & 1 & 0.99 & 1.00 & 1.00 & 0.99 & 0.95 & 0.97 & 0.74 & 1.00 & 0.85 & 0.78 & 0.78 & 0.78 & 0.50 & 1.00 & 0.67\tabularnewline \hline 4--02 & AC & 3 & 1.00 & 1.00 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--03 & AC & 3 & 0.99 & 1.00 & 0.99 & 0.99 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--04 & SC & 3 & 0.94 & 0.99 & 0.96 & 0.93 & 1.00 & 0.96 & 0.90 & 1.00 & 0.95 & 1.00 & 0.89 & 0.94 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--05 & AC & 3 & 0.98 & 0.99 & 0.99 & 0.97 & 1.00 & 0.98 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--06 & SC & 2 & 0.98 & 0.99 & 0.99 & 0.97 & 1.00 & 0.98 & 0.97 & 1.00 & 0.99 & 0.80 & 1.00 & 0.89 
& 1.00 & 1.00 & 1.00\tabularnewline \pagebreak[0] \hline 4--07 & MC & 3 & 0.99 & 1.00 & 0.99 & 0.94 & 1.00 & 0.97 & 0.97 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--08 & AC & 4 & 0.94 & 1.00 & 0.97 & 0.96 & 0.98 & 0.97 & 0.00 & 1.00 & 0.00 & 1.00 & 0.67 & 0.80 & 0.00 & 1.00 & 0.00\tabularnewline \hline 4--09 & AC & 1 & 0.95 & 0.96 & 0.95 & 0.92 & 0.98 & 0.95 & 0.99 & 1.00 & 1.00 & 1.00 & 0.73 & 0.84 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--10 & AC & 3 & 1.00 & 1.00 & 1.00 & 0.98 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 0.86 & 1.00 & 0.92 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--11 & MC & 2 & 0.98 & 1.00 & 0.99 & 0.98 & 1.00 & 0.99 & 0.96 & 1.00 & 0.98 & 1.00 & 0.89 & 0.94 & 1.00 & 1.00 & 1.00\tabularnewline \hline 4--12 & AC & 1 & 0.99 & 0.99 & 0.99 & 0.97 & 0.98 & 0.98 & 0.85 & 1.00 & 0.92 & 1.00 & 0.91 & 0.95 & 0.67 & 1.00 & 0.80\tabularnewline \pagebreak[1] \hline 5--01 & MC & 2 & 0.99 & 0.99 & 0.99 & 0.99 & 1.00 & 0.99 & 0.95 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--02 & MC & 3 & 0.95 & 1.00 & 0.98 & 0.93 & 1.00 & 0.96 & 0.96 & 1.00 & 0.98 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--03 & MC & 3 & 0.98 & 0.98 & 0.98 & 0.97 & 1.00 & 0.98 & 0.96 & 0.99 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--04 & AC & 3 & 0.99 & 1.00 & 0.99 & 0.98 & 1.00 & 0.99 & 0.99 & 1.00 & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--05 & AC & 5 & 0.99 & 1.00 & 1.00 & 0.99 & 1.00 & 1.00 & - & - & - & 1.00 & 1.00 & 1.00 & - & - & -\tabularnewline \hline 5--06 & MC & 5 & 0.89 & 1.00 & 0.94 & 0.86 & 1.00 & 0.92 & - & - & - & 0.80 & 0.80 & 0.80 & - & - & -\tabularnewline \pagebreak[0] \hline 5--07 & SC & 5 & 0.78 & 0.99 & 0.87 & 0.75 & 0.99 & 0.85 & 0.00 & 1.00 & 0.00 & 0.71 & 0.83 & 0.77 & 0.00 & 1.00 & 0.00\tabularnewline \hline 5--08 & AC & 5 & 0.80 & 0.99 & 0.89 & 0.74 & 1.00 & 0.85 & 0.82 & 0.97 & 0.89 & 0.71 & 0.83 & 0.77 & 
1.00 & 1.00 & 1.00\tabularnewline \hline 5--09 & SC & 4 & 0.90 & 1.00 & 0.94 & 0.90 & 1.00 & 0.95 & 0.54 & 1.00 & 0.70 & 1.00 & 0.89 & 0.94 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--10 & SC & 3 & 0.88 & 1.00 & 0.93 & 0.87 & 1.00 & 0.93 & 0.57 & 1.00 & 0.72 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--11 & AC & 1 & 0.97 & 0.99 & 0.98 & 0.95 & 1.00 & 0.97 & 0.95 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\tabularnewline \hline 5--12 & AC & 4 & 0.93 & 1.00 & 0.97 & 0.91 & 1.00 & 0.95 & 1.00 & 1.00 & 1.00 & 0.67 & 1.00 & 0.80 & 1.00 & 1.00 & 1.00\tabularnewline \hline \multicolumn{3}{|c|}{Match 1} & 0.94 & 0.99 & 0.96 & 0.89 & 0.99 & 0.94 & 0.92 & 1.00 & 0.96 & 0.94 & 0.93 & 0.93 & 0.86 & 1.00 & 0.92\tabularnewline \hline \multicolumn{3}{|c|}{Match 2} & 0.95 & 0.99 & 0.97 & 0.92 & 0.97 & 0.94 & 0.82 & 1.00 & 0.90 & 0.97 & 0.81 & 0.88 & 0.77 & 1.00 & 0.87\tabularnewline \hline \multicolumn{3}{|c|}{Match 3} & 0.97 & 0.99 & 0.98 & 0.96 & 0.99 & 0.97 & 0.89 & 1.00 & 0.94 & 0.98 & 0.89 & 0.93 & 0.88 & 1.00 & 0.93\tabularnewline \hline \multicolumn{3}{|c|}{Match 4} & 0.98 & 0.99 & 0.98 & 0.97 & 0.99 & 0.98 & 0.89 & 1.00 & 0.94 & 0.95 & 0.88 & 0.92 & 0.81 & 1.00 & 0.90\tabularnewline \hline \multicolumn{3}{|c|}{Match 5} & 0.92 & 0.99 & 0.96 & 0.90 & 1.00 & 0.94 & 0.89 & 1.00 & 0.94 & 0.91 & 0.94 & 0.92 & 0.92 & 1.00 & 0.96\tabularnewline \hline \multicolumn{3}{|c|}{Total} & 0.95 & 0.99 & 0.97 & 0.93 & 0.99 & 0.96 & 0.88 & 1.00 & 0.93 & 0.95 & 0.89 & 0.92 & 0.85 & 1.00 & 0.92\tabularnewline \hline \end{longtable} \normalsize } \begin{figure*}[tbh] \centering{}\includegraphics[width=.9\textwidth]{Figures/Graphic2}\caption{\label{fig: Segmentation algorithms}Global quality results obtained with other state-of-the-art line mark segmentation algorithms: (a) Top-Hat transform, (b) Sobel edge detector, and (c) LoG edge detector.} \end{figure*} \begin{figure}[htb] \centering{}\includegraphics[width=.68\columnwidth]{Figures/TopHat2} 
\caption{\label{fig:Example-of-line}Example of line mark segmentation with the considered algorithms at different normalized threshold values: true positives in green, false negatives in blue, and false positives in red.} \end{figure} Table~\ref{tab:Summary-of-results} summarizes the quality measures obtained for each of the images in the proposed database using the provided ground truth for the mask of the playing field $\mascaracampo$, as well as the overall quality results for each match individually and for the entire database. In addition, Fig.~\ref{fig:Some-representative-results} shows the results for a representative image from each of the five matches in the database (the results corresponding to the rest of the images are available on the LaSoDa website). The ``Segmentation'' columns of Table~\ref{tab:Summary-of-results} report the pixel-level results of the segmentation of the line marks in bulk (i.e., before classification into straight or elliptical marks). These results show that the quality of the line mark segmentation is very high, with F-score values over 0.9 in most images and over 0.8 in the remaining few, yielding an overall F-score of 0.97. The few false positives are mainly due to players wearing white kits and to white elements of the goals, which show a similar appearance to that of the white lines on the pitch. False negatives correspond to line points occluded by players or that barely stand out from the grass because they are very far from the camera. To compare with the most commonly used methods for line detection, Fig.~\ref{fig: Segmentation algorithms} summarizes the global quality results obtained with three of the most popular algorithms used in other works, as indicated in section~\ref{sec:Related-work}, to highlight the line markings in soccer images: the Sobel edge detector, the LoG edge detector, and the Top-Hat transform.
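The pixel-level measures reported in the table follow the standard precision/recall/F-score definitions, computed only over pixels inside the playing-field mask $\mascaracampo$. A minimal sketch (function and variable names are ours, not taken from the paper's code):

```python
import numpy as np

def pixel_prf(pred_mask, gt_mask, field_mask):
    """Pixel-level precision, recall and F-score of a line-mark
    segmentation, restricted to the playing-field mask."""
    p = pred_mask & field_mask
    g = gt_mask & field_mask
    tp = np.count_nonzero(p & g)    # correctly detected line pixels
    fp = np.count_nonzero(p & ~g)   # spurious detections
    fn = np.count_nonzero(~p & g)   # missed line pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fscore = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
    return precision, recall, fscore

# toy example: 1 true positive, 1 false positive, 1 false negative
pred = np.array([[True, True, False], [False, False, False]])
gt = np.array([[True, False, False], [False, True, False]])
field = np.ones((2, 3), dtype=bool)
precision, recall, fscore = pixel_prf(pred, gt, field)
```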
We can see that none of them performs well across the whole dataset: no threshold value gives a really good balance between precision and recall in any of these methods. Fig.~\ref{fig:Example-of-line} compares the results obtained on an image where approximately half of the playing field is strongly sunlit, while the other half is in shadow. It can be seen that only our strategy is able to detect the line marks in all areas of the playing field, while at the same time yielding the fewest false positives. The ``Classification (px)'' columns of Table~\ref{tab:Summary-of-results} also report pixel-wise results, but after classification into straight and elliptical lines, showing that the quality remains very high after the line mark classification stage. As can be seen in Fig.~\ref{fig:Some-representative-results} and in the table, most false detections that resulted from the segmentation stage have been discarded. However, some correctly segmented points have also been discarded at this stage because they were not assigned to straight lines or ellipses, which causes a slight decrease in recall. This generally occurs with line segments that, because of their small size, have not been conclusively classified as any of the primitive structures. Finally, the ``Classification (obj)'' columns of Table~\ref{tab:Summary-of-results} report the object-level results corresponding to the primitive structures associated with the line points resulting from the classification (illustrated in the last column of Fig.~\ref{fig:Some-representative-results}). Out of the 402 straight lines featured in the whole database, only 22 have been misdetected. These non-detections are associated with the shortest line segments, as illustrated in Fig.~\ref{fig:Some-representative-results}.
On the other hand, 59 false detections have been obtained, which are overwhelmingly due to cases where only the straightest parts of ellipses (mainly on the penalty arc, which is small and frequently presents partial occlusion from players) are visible and have been wrongly classified as small straight lines. Regarding ellipses, out of the 71 that appear in the 60 images analyzed, 12 have been misdetected, but there are no false detections at all. These non-detections are associated with images where the detected line segments have not been long enough to determine that they fit an ellipse better than a straight line (an example of this can also be seen in the last of the images in Fig.~\ref{fig:Some-representative-results}). \begin{figure}[tbh] \centering{}\includegraphics[width=.5\columnwidth]{Figures/Graph}\caption{\label{fig:Quality-results-obtained}Straight line detection results obtained with the Hough transform.} \end{figure} Figure~\ref{fig:Quality-results-obtained} reports the global results of applying the Hough transform, a method widely used for straight line detection in the works reviewed in section~\ref{sec:Related-work}, to the points in $\mascaralineas$. As already stated in section~\ref{sec:Related-work}, this method requires the user to set a target number of lines to search for. These results show that setting a low number of target lines results in many misdetections; conversely, a high number of target lines results in a significant increase in the number of false detections. The best result has been obtained by configuring the method to obtain 8 lines per image. However, this globally optimal configuration still results in a relatively low F-score of 0.78, while our proposal reaches 0.90.
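The fixed-target-line configuration discussed above can be sketched with a minimal Hough accumulator: each candidate line pixel votes in a quantized $(\theta,\rho)$ space, and the $n$ strongest peaks are kept regardless of whether they correspond to real lines. This is an illustrative fragment (names, resolutions, and the assumption of non-negative pixel coordinates are ours), not the implementation used in the experiments:

```python
import numpy as np

def top_n_hough(points, n, n_theta=180, rho_res=1.0):
    """Minimal Hough-transform line detector that returns a fixed
    target number of lines n (the configuration compared in the text).
    points: (N, 2) array of (x, y) line-mask pixel coordinates,
    assumed non-negative (image coordinates)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    xs, ys = points[:, 0].astype(float), points[:, 1].astype(float)
    rho_max = np.hypot(xs.max(), ys.max())
    n_rho = int(np.ceil(2 * rho_max / rho_res)) + 1
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for t, th in enumerate(thetas):
        rhos = xs * np.cos(th) + ys * np.sin(th)
        bins = np.clip(np.round((rhos + rho_max) / rho_res).astype(int),
                       0, n_rho - 1)
        np.add.at(acc, (t, bins), 1)  # one vote per pixel per angle
    # keep the n strongest accumulator peaks, whatever their quality
    top = np.argsort(acc.ravel())[::-1][:n]
    ti, ri = np.unravel_index(top, acc.shape)
    return [(thetas[t], r * rho_res - rho_max) for t, r in zip(ti, ri)]

# synthetic horizontal line y = 5: the top peak should recover it
pts = np.array([[x, 5] for x in range(100)])
(theta, rho), = top_n_hough(pts, n=1)
```

The key weakness the text identifies is visible here: `n` is a hard-coded budget, so with `n` too low, real lines are missed, and with `n` too high, weak spurious peaks are returned as detections.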
\section{Conclusions\label{sec:Conclusions}} We have presented a novel strategy to segment and classify line markings on football pitches, irrespective of camera position or orientation, and validated its effectiveness on a variety of images from several stadiums. This strategy is based on the stochastic watershed transform, which is usually employed to segment regions rather than lines, but we have shown that, coupled with the seeding strategy we propose, it provides a robust way to segment the line markings from the playing field without assuming that the lines are straight or conform to any particular pattern. Specifically, our method is able to correctly segment the curved lines that radial distortion of broadcasting cameras typically produces in wide-angle takes, and to cope with the interference of players or the ball, both of which frequently cause errors in most methods based on the Hough transform. Following the segmentation of the lines, we have also proposed a simple method to classify the segmented pixels into straight lines and ellipses or parts thereof. This method is also robust against moderate radial distortion and outliers, and does not make assumptions about the viewpoint or the distribution of the markings, leaving those considerations for later higher-level classification and camera resection stages. To assess the quality of the proposed strategy, a new and public database (LaSoDa) has been developed. LaSoDa is composed of 60 annotated images from matches in stadiums with different characteristics (positions of the cameras, grass colors) and light conditions (natural and artificial). Some improvements can be considered for future work, for instance parallelization of the watershed transform, since the regular arrangement of seeds may allow different regions of the image to be computed independently, but the current proposal already constitutes a solid foundation for other stages in a sports video analysis pipeline to build upon.
\section*{Acknowledgments} This work has been partially supported by MCIN/AEI/10.13039/501100011033 of the Spanish Government [grant number PID2020-115132RB (SARAOS)]. \bibliographystyle{elsarticle-num}
\section{Summary} In this work, we proposed a framework for sports camera selection using Internet videos to address the data scarcity problem. With effective feature representation and data imputation, our method achieved state-of-the-art performance on a challenging soccer dataset. Moreover, some of our techniques, such as foreground feature extraction, are generic and can be applied to other applications. The proposed method mainly focuses on camera selection in single frames at the current stage. In the future, we would like to explore temporal information for camera selection. \noindent {\bf Acknowledgements:} This work was funded partially by the Natural Sciences and Engineering Research Council of Canada. We thank Peter Carr from Disney Research and Tian Qi Chen from the University of Toronto for discussions. {\small \bibliographystyle{ieee} \section{USL Broadcast Dataset} \begin{table*}[t] \centering \scalebox{0.9} { \begin{tabular}{| l |c| c| c | c| c | c | c |} \hline Dataset & Year& Game type & Length (min.) & \# Game & \# Camera & Camera type & Ground truth \\ \midrule APIDIS \cite{daniyal2011multi} & 2011& basketball & 15 & 1 & 5 & static & non-prof. \\ ADS \cite{chen2013computational} & 2013 & field hockey & 70 & 1 & 3 & PTZ & prof. \\ OMB \cite{foote2013one} & 2013 & basketball & $\sim 16$ & 1 & 2 & PTZ & non-prof. \\ VSR \cite{wang2014context} & 2014 & soccer & 9 & 1 & 20 & static & non-prof. \\ CDF \cite{chen2018camera} & 2018 & soccer & 47 & 1 & 3 & PTZ & hybrid \\ \hline USL (Ours) & 2018 & soccer & 94 + 108 & 1+12 & 3 & PTZ & prof. \\ \hline \end{tabular} } \caption{\textbf{Dataset comparison.} In our dataset, the $108$ minutes of data (column four) are sparsely sampled from a total of $1,080$ minutes.} \vspace{-0.15in} \label{table:dataset_cmp} \end{table*} We collect a dataset from the United Soccer League (USL) 2014 season. The dataset has one main game and twelve auxiliary games. The main game has six videos.
Two videos are from static cameras which look at the left and right parts of the playing field, respectively. Three other videos are from pan-tilt-zoom (PTZ) candidate cameras ($1280 \times 720$). Among them, one camera is located at mid-field, giving an overview of the game. The other two cameras are located behind the left and right goals, respectively, providing detailed views. Figure \ref{fig:camera_setting} visualizes the camera locations and shows image examples. The sixth video is the broadcast video composited by a professional director from the three PTZ videos. We manually remove commercials and \doubleQuote{replays} and synchronize this video with the other videos. The length of the main game is about 94 minutes. Our system only uses information from the three PTZ cameras to select the broadcast camera. Static cameras are only used in one of the baselines. Twelve auxiliary games are collected from Youtube. These games are hosted in the same stadium as the main game. They are typically 1.5 hours long. Unlike the main game, each auxiliary game only has a composited broadcast video ($640 \times 360$). Figure \ref{fig:main_idea}(bottom left) shows image examples from the auxiliary games. In the main game, we manually annotate the ball locations on the static cameras and the two detailed-view PTZ cameras, at 1fps. In the auxiliary games, we manually check the classified camera IDs around detected camera transition points (details in Section \ref{seq:implementation}). In all games, we detect bounding boxes of players and balls using a method similar to \cite{dai2016r}. Table \ref{table:dataset_cmp} compares our dataset with previous camera view selection datasets. To the best of our knowledge, ours is the first dataset with dense annotations for such long dynamic multi-camera image sequences. More dataset details are provided in the supplementary material.
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/camera_settings.png} \caption{Camera settings of the main game and image examples. Blue: static cameras; red: PTZ cameras. } \label{fig:camera_setting} \vspace{-0.15in} \end{figure} \section{Evaluation} We evaluate our method on the USL dataset. To effectively use the data, we test on the main game using 3-fold leave-one-sequence-out cross-validation. We use the camera selection from human operators as the ground truth and report the classification accuracy. We also report the prediction accuracy on the dataset from \cite{chen2018camera} (the first 4 datasets in Table \ref{table:dataset_cmp} are not publicly available). \begin{table*}[t] \centering \begin{tabular}{| l |ccc |c |ccc |c |} \hline & \multicolumn{4}{c|}{Main} & \multicolumn{4}{c|}{Main + aux.} \\ \cline{2-9} Feature & L & M & R & All & L & M & R & All \\ \midrule whole-image & 53.4 & 74.4 & 57.5 & 63.2 & 62.8 & 77.8 & 61.4 & 68.5 \\ foreground & 45.9 & 84.1 & 39.5 & 59.7 & 53.1 & \textbf{86.2} & 41.3 & 63.2 \\ \hline both & 58.3 & 78.0 & 58.7 & 66.5 & \textbf{70.0} & 85.2 & \textbf{68.9} & \textbf{75.9} \\ \hline \end{tabular} \vspace{1mm} \caption{Selection accuracy using different features and training data. \doubleQuote{Main} and \doubleQuote{Main+aux.} mean that the training data is from the main game only or is augmented with auxiliary videos, respectively. L, M and R represent the camera on the left, middle, and right side, respectively. The highest accuracy is highlighted in bold.} \label{table:main_result} \end{table*} For comparison, we also implement six baselines. \textbf{Baseline 1}: constantly select one camera with the highest prior in the training data. This baseline always selects the overview (middle) camera. \textbf{Baseline 2}: select the camera that is closest to the human-annotated ground-truth location of the ball.
\textbf{Baseline 3}: predict the camera using the team occupancy map introduced in \cite{bialkowski2013recognising}. The team occupancy map describes the player distribution on the playing ground using tracked players from the static cameras. \textbf{Automated director assistant (ADA)}~\cite{chen2013computational}: it learns a random forest classifier using player distribution and game flow at a time instance. Our implementation augments temporal information by concatenating the features in a 2-second sliding window, making the predictions more reliable. \textbf{C3D}~\cite{tran2015learning}: it is a deep CNN modified from the original C3D network. First, images from three cameras pass through the original C3D network, separately. Then their $fc6$ activations are concatenated and fed into a fully connected network ($1024 \times 32 \times 3$) with \textit{Softmax} loss. \textbf{RDT+CDF}~\cite{chen2018camera}: it uses the recurrent decision tree (RDT) method and a cumulative distribution function (CDF) regularization to predict camera selections in a sequence. Because \cite{chen2018camera} requires real-valued labels in training, we only compare with it on the dataset from \cite{chen2018camera}. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/acc_vs_event_2.pdf} \vspace{-0.2in} \caption{Prediction accuracy with and without auxiliary games (grouped by events).} \label{fig:accuracy_event} \vspace{-0.15in} \end{figure} \subsection{Main Results} Table \ref{table:main_result} shows the main results of our method. First, auxiliary data provides significant performance improvement (about 9.4\%). The improvement is from two stages: feature extraction and data imputation. Figure \ref{fig:accuracy_event} shows details of the improvement by separating these two stages and grouping the frames into different events. Overall, the main improvement is from the feature extraction stage (about 6.6\%). 
Data imputation provides an extra 2.8\% improvement, which is significant in \doubleQuote{throw in} and \doubleQuote{general play}. Second, the foreground feature improves performance, especially when the auxiliary games are used. The main reason might be that the foreground feature excludes the negative impact of backgrounds (\eg different fan groups, weather and light conditions) of auxiliary games. \begin{table}[t] \centering \scalebox{1.0}{ \begin{tabular}{| l | c |c |} \hline & {Accuracy (\%)} & $\Delta$ \\ \hline Constant selection & 40.9 & 35.0 \\ Closest to ball (GT.) & 37.6 & 38.3 \\ Team occupancy map \cite{bialkowski2013recognising} & 49.8 & 26.1 \\ ADA \cite{chen2013computational} & 54.1 & 21.8 \\ C3D \cite{tran2015learning} & 64.3 & 11.6 \\ \hline Both feature w/ aux. (Ours) & \textbf{75.9} & -- \\ Both feature w/o aux. & 66.5 & 9.4 \\ whole-image feature w/ aux. & 68.5 & 7.4 \\ foreground feature w/ aux. & 63.2 & 12.7 \\ \hline \end{tabular} } \caption{Comparison against various baselines and analysis of the effects of various components.} \label{table:vs_others} \vspace{-0.15in} \end{table} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/vs_rdt_2.pdf} \vspace{-0.2in} \caption{Camera selection on a 5-minute testing sequence. (a) Result of \cite{chen2018camera}. (b) Ours. The green line is from human operators and the red line is from algorithms. Best viewed in color.} \label{fig:vs_rdt} \vspace{-0.15in} \end{figure} Table \ref{table:vs_others} shows the comparison with the baselines. Our method outperforms all the methods by large margins. Baselines 1-3 do not work well on this dataset mainly because they omit the image content from the PTZ cameras. This result suggests that heuristic techniques using ball and player locations are not enough for dynamic PTZ camera selection. ADA \cite{chen2013computational} seems to have substantial challenges on this dataset. 
This is partly because hand-crafted features (such as player flow) are quite noisy when computed from fast-moving PTZ cameras. C3D \cite{tran2015learning} works reasonably well as it learns both appearance and motion features end-to-end. Its performance is slightly better than our whole-image-feature model. However, our full model is significantly more accurate (11.6 \%) than C3D. It is worth noting that training C3D with auxiliary data is very difficult because the input of C3D is consecutive frames. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/imputation_result_3.pdf} \caption{Imputation test on the main game data. (a),(b),(c) Color-coded visualization of imputed feature sequence. The x-axis is the feature dimension. The y-axis is the frame index. Colors visualize the feature values. Blue color blocks indicate the missing data. (d) The imputation accuracy as a function of the error thresholds. Best viewed in color.} \label{fig:imputation} \vspace{-0.1in} \end{figure} \vspace{-0.1in} \paragraph{Combined with a Temporal Model} To test the capability of our method with temporal models, we conducted experiments on the dataset from \cite{chen2018camera}. This dataset has 42 minutes of data (60 fps) for training and a 5-min sequence for testing. In the experiment, we feed the selection probability to the cumulative distribution function (CDF) method from \cite{chen2018camera}. The CDF method prevents too short (brief glimpse) and too long (monotonous selection) camera selections. The experiment shows our method is more accurate than \cite{chen2018camera} (70\% vs. 66\%). Figure \ref{fig:vs_rdt} shows a visualization of the camera selection. Video results are in the supplementary material for visual inspection. \subsection{Further Analysis} \paragraph{Data Imputation Accuracy} Because the missing data in the auxiliary videos has no ground truth, we analyze the accuracy of our data imputation method using the main game data.
We use the last $1,100$ frames as testing data by masking the features from the unselected cameras as missing data. A random survival forest model is trained from the rest of the data. The error is measured by the absolute error normalized by the range of the feature in each dimension. This error metric is a good indication of the performance of imputed data in the final model. The final model (\ie a random forest) uses the sign of the difference between the feature value and the decision boundary in internal nodes to guide the prediction. Figure \ref{fig:imputation}(a)(b)(c) visualizes the incomplete data, complete data and imputed data, respectively. The imputed data is visually similar to the ground truth. Figure \ref{fig:imputation}~(d) shows the imputation accuracy as a function of the error thresholds. When the error threshold is 0.2, about $80\%$ of the data are correctly predicted. Although the accuracy is tested on the main game, it suggests a reasonably good prediction on the auxiliary games. \begin{table*}[t] \parbox{.45\linewidth}{ \centering \begin{tabular}{| l | c |c |} \hline & {Acc. (\%)} & $\Delta$ \\ \midrule RSF & \textbf{75.9} & -- \\ \hline NN & 72.2 & 3.7\\ OptSpace \cite{keshavan2010matrix} & 68.6 & 7.3 \\ Autoencoder & 73.9 & 2.0 \\ \hline \end{tabular} \vspace{1mm} \caption{Comparison of RSF with alternatives.} \label{table:imputation} } \hspace{0.1in} \parbox{.5\linewidth}{ \centering \begin{tabular}{| l | c | c|c |c |} \hline & loc. & appe. & {Acc. (\%)} & $\Delta$ \\ \midrule SA heatmap & \checkmark & \checkmark & \textbf{59.7} & -- \\ \hline Avg pool. & & \checkmark & 41.8 & 17.9 \\ Max pool.
& & \checkmark & 42.4 & 17.3 \\ Heatmap in [8] & \checkmark & & 48.4 & 11.3 \\ \hline \end{tabular} \vspace{1mm} \caption{Comparison of SA heatmap with alternatives.} \label{table:vs_other_heatmap} } \end{table*} To evaluate the performance of RSF on the real data, we also imputed the missing values using nearest neighbor (NN), OptSpace \cite{keshavan2010matrix} and a neural autoencoder. Table \ref{table:imputation} shows that RSF outperforms all of them with a safe margin. \begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth]{figures/qualitative_result.jpg} \caption{\textbf{Qualitative results.} The ground truth row shows the ground truth image sequences (about 3 seconds). The predicted sequence row shows the predictions from our method (omitted if all predictions are correct such as in (a)). The contributing sequence row shows the most dominant training example in the leaf node. In each sequence, player trajectories are visualized on the playing field template. The ball locations and their trajectories (dashed lines) are overlaid on the original images. The red cross mark indicates incorrect predictions. Best viewed in color.} \label{fig:contributing} \vspace{-0.1in} \end{figure*} \vspace{-0.1in} \paragraph{Foreground Feature Aggregation} To compare the performance of the SA heatmap with other alternatives, we conducted experiments on the main game data. All the methods use the same appearance feature from the siamese network as input. Table \ref{table:vs_other_heatmap} shows that SA heatmap outperforms other alternatives mainly because it encodes both location and appearance information. \vspace{-0.1in} \paragraph{Qualitative Results} Figure \ref{fig:contributing} shows predicted image sequences with the ground truth and contributing sequences. The contributing sequence is from the most dominant contributing examples in the leaf nodes for each prediction. Figure \ref{fig:contributing}(b) (last column) shows an example of incorrect predictions. 
The ground-truth selection stays on the middle camera, while our prediction switches to the right camera. By inspecting the video, we found the human operator's selection has better temporal consistency while ours tends to provide more information in single frames. \vspace{-0.1in} \paragraph{Discussion} In real applications, more than three candidate cameras are used. However, we found most of the shots are from the three cameras that cover the left goal area, the middle field and the right goal area. We also qualitatively verified that the camera setting in the proposed dataset is representative of the soccer games from \cite{wang2014context} and \cite{homayounfar2017sports}. This indicates that our method can be applied to many real situations, especially in small-budget broadcasting. Although we collected the largest-ever training data from the Internet, the testing data is from one game. We mitigate this limitation by using dense testing (3-fold cross-validation). We leave large-scale camera selection as future work. \section{Introduction} The sports market in North America was worth \$69.3 billion in 2017 and is expected to reach \$78.5 billion by 2021. Our work focuses on soccer, which has about a 40\% share of the global sports market. As the biggest reason for market growth, media rights (game broadcasts and other sports media content) are projected to increase from \$19.1 billion in 2017 to \$22.7 billion in 2021 \cite{pwc2017at}. Computational broadcasting is a promising way to offer consumers various live game experiences and to decrease the cost of media production. Automatic camera selection is one of the key techniques in computational broadcasting. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/main_idea_2.jpg} \caption{Learning camera selection from Internet videos. The goal of our work is to select one camera from multiple candidate cameras for sports broadcast.
Ideally, a model is trained from a dataset that has all candidate views, such as the main game shown in the figure. However, it is hard to acquire this kind of data (including candidate videos and broadcasting videos) because it is generally not available to researchers (it is owned by broadcasting companies). Our method uses publicly available Internet videos as auxiliary data to train a model with state-of-the-art prediction accuracy. Best viewed in color.} \vspace{-0.15in} \label{fig:main_idea} \end{figure} Machine learning has produced impressive results on viewpoint selection. These include automatic \cite{chen2016learning,su2016activity,su2017making,hu2017deep360,chen2018camera} and semi-automatic \cite{foote2013one} methods from various inputs such as first-person cameras, static and pan-tilt-zoom (PTZ) cameras. The underlying assumption of these methods is that large amounts of training data are easily available. \textbf{Motivation} For sports camera selection, large amounts of training data are not directly available. As a result, most previous methods are trained on a single game (main game) because researchers cannot acquire the data, which is owned by broadcasting companies \cite{chen2013computational,chen2018camera}. On the other hand, broadcast videos are widely available on the Internet (\eg Youtube). These games (auxiliary games) provide a large number of positive examples. Using these Internet videos can scale up the training data with negligible cost. In practice, arbitrarily choosing auxiliary games does not necessarily improve performance, for example when the main games are from minor leagues while the auxiliary games are from premier leagues. Thus, the main game and the auxiliary games should be similar in terms of camera locations and the action of players. Although a universal camera selection model should be the final goal, a model for a specific team is also valuable. For example, teams in minor leagues can reduce the cost of live broadcasting for host games.
Targeting these applications, we constrain the main games and auxiliary games to be from the same stadium at the current stage. The main challenge of using auxiliary games is the missing views in the video composition. Omitting non-broadcast views is the default setting for TV channels and live streams on the Internet. As a result, the amounts of complete and incomplete data are highly unbalanced. To overcome this challenge, we introduce the random survival forest method (RSF) \cite{ishwaran2008random} from statistical learning to impute the missing data. To the best of our knowledge, we are the first to use Internet videos and RSF to solve camera selection problems. The second challenge stems from the potentially negative impact of background information in auxiliary games. Auxiliary games are very different in lighting, fan celebration and stadium decoration. In practice, camera operators are trained to capture interesting players and keep the ball visible \cite{owens2015television}. Inspired by this observation, we propose a spatial-appearance heatmap to jointly represent foreground object locations and appearances. Our main contributions are: (1) Using Internet data and random survival forests to address the data scarcity problem in camera selection for soccer games. (2) Proposing a spatial-appearance heatmap to effectively represent foreground objects. With these novel techniques, our method significantly outperforms state-of-the-art methods on a challenging soccer dataset. While we present results on soccer games, the technique developed in this work can apply to many other team sports such as basketball and hockey. \section{Method} \label{sec:method} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figures/overview_3.png} \vspace{-0.1in} \caption{Main components of our method. (a) Feature extraction. Two CNNs are used to extract whole-image and foreground features. (b) Training process.
We first extract features from both main game and auxiliary game frames. The feature of auxiliary games is imputed for the missing data. Both data are then used to train the final model. (c) Data imputation (Section \ref{sec:random_survival_forest}). Best viewed in color. } \label{fig:pipeline} \vspace{-0.15in} \end{figure*} \subsection{Problem Setup} We have two sources of data. One is from complete games which have videos and selections. Another is from auxiliary games which only have one broadcast video and selections. We model the problem as a classification task given hybrid data $D = \{D_{com}, D_{incom}\}$ in which $D_{com}$ is the \textit{complete} data and $D_{incom}$ is the \textit{incomplete} data. Let $D_{com} = \{X_{com}, Y\}$ where $X_{com}$ is the feature representation of all candidate views and $Y \in \{1,2,3\}$ is the corresponding label. $X$ can be an arbitrary feature representation for an image. Let $D_{incom} = \{\{X_{obs}, X_{mis}\}, Y\}$ where $X_{obs}$ is the \textit{observed} data and $X_{mis}$ is the \textit{missing} data (\eg unrecorded views). Our goal is to learn a classifier from the whole data to predict the best viewpoint from multiple candidate viewpoints (\eg an unseen $X_{com}$): \begin{equation} y_t = f(\mathbf{x}_t). \vspace{-0.05in} \end{equation} We do instantaneous single frame prediction and $\mathbf{x}_t$ is a feature representation from all camera views. During training, $\mathbf{x}_t$ is either a raw feature extracted from the main game, or a raw plus imputed feature from an auxiliary game. We only test on the main game. Our primary novelty is to use auxiliary data from the Internet which augments the training data with lots of positive examples. On the other hand, this choice creates considerable challenges because of the missing data. \vspace{-0.1in} \paragraph{Assumptions and Interpretation} Our method has three assumptions. 
First, ${X}_{imputed} = \{X_{obs}, \hat{X}_{mis}\}$ ($\hat{X}$ means the inferred values) and $X_{imputed}$ has a similar distribution to $X_{com}$. This assumption is reasonable since both types of games are collected from the same stadium with the same host team. Also, we expect the broadcast crew to have consistent behaviors across games to some extent. Second, images from different viewpoints are correlated at a particular time instance. Camera operators (from different viewpoints) cooperate to tell the story of the game. For example, often the focus of attention of the cameras is the same (\ie joint attention). In this case, the observed data $X_{obs}$ has a strong indication of the missing data $X_{mis}$. Third, our method models the viewpoint prediction problem as a single-frame prediction problem without using temporal information. Single-frame prediction is the focus of our work. We will briefly show the adaptation of our method to a temporal model in the experiment. \subsection{Random Survival Forest} \label{sec:random_survival_forest} With these assumptions, we first impute missing data in training. We randomly draw imputed data from the joint posterior distribution of the missing data given the observed data \cite{valdiviezo2015tree}. \begin{equation} X_{mis} \sim p(X_{mis}|X_{obs}, Y) \vspace{-0.05in} \end{equation} with \begin{equation} \label{equ:mis_obs} p(X_{mis}|X_{obs}, Y) =\int{p(X_{mis}|X_{obs}, \theta)p(\theta|X_{obs}, Y)} \text{d}\theta, \vspace{-0.05in} \end{equation} \noindent where $\theta$ is the model (decision trees in our method) and $Y$ is the label. Note that this process occurs in the training phase, so $Y$ is available. However, it is often difficult to draw from this predictive distribution due to the requirement of integrating over all $\theta$. Here we introduce random survival forests to simultaneously estimate $\theta$ and draw imputed values.
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/rst_2.png} \caption{A two-level random survival tree. Each row represents a three-dimensional feature. The first dimension of the fifth feature is imputed. $\tau$ is the decision boundary. Labels are omitted for clarity. Best viewed in color. } \label{fig:rst} \vspace{-0.15in} \end{figure} A random survival forest (RSF) is an ensemble of random survival trees, originally designed to identify complex relationships between long-term survival and attributes of persons (\eg body mass, kidney function and smoking). Each decision tree recursively splits the training data into sub-trees until the stopping criterion is satisfied. The statistics (\eg mean values of labels for regression) of the training examples in the leaf nodes are used as the prediction \cite{criminisi2012decision}. A survival tree imputes missing data as follows. \begin{enumerate} \item In internal nodes, only observed data is used to optimize tree parameters such as the decision boundary, by minimizing the cross-entropy loss. This step estimates the model $\theta$ from the distribution $p(\theta|X_{obs}, Y)$ in \eqref{equ:mis_obs}. \item To assign an example with missing data to the left or right sub-tree, the missing value is \doubleQuote{imputed} by drawing a random value from a uniform distribution $U(x|a,b)$, where $(a, b)$ are the lower/upper bounds of $X_{obs}$ in the target dimension. This step draws samples from $p(X_{mis}|X_{obs}, \theta)$ in \eqref{equ:mis_obs}. \item After the node splitting, imputed data are reset to missing and the process is repeated until terminal nodes are reached. \item Missing data in terminal nodes are then imputed using non-missing terminal-node data from all the trees. For categorical variables, a majority vote is used; for continuous variables, a mean value is used. \end{enumerate} Figure \ref{fig:rst} shows the data imputation in a two-level random survival tree. 
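Step 2 of this procedure can be sketched as follows. This is our own minimal illustration (the function name and the NaN-for-missing convention are ours, not the paper's):

```python
import numpy as np

def split_with_missing(x, tau, rng):
    """Route examples to the left (x <= tau) or right child of a node.

    Missing entries (NaN) are temporarily "imputed" by a draw from a
    uniform distribution over the observed range [a, b], as in step 2;
    the caller resets them to missing before recursing (step 3).
    """
    observed = x[~np.isnan(x)]
    a, b = observed.min(), observed.max()
    x_routed = np.where(np.isnan(x), rng.uniform(a, b, size=x.shape), x)
    return x_routed <= tau

rng = np.random.default_rng(0)
x = np.array([0.2, 0.9, np.nan, 0.5])  # one missing feature value
goes_left = split_with_missing(x, tau=0.6, rng=rng)
```

Observed values are routed deterministically; only the missing entry's assignment is random, drawn within the observed range $[0.2, 0.9]$.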
Specific details of RSF can be found in \cite{ishwaran2008random,tang2017random}. With the RSF method, we impute the missing data with substituted values to obtain the new data $\{X_{imputed}, Y\}$. To the best of our knowledge, we are the first to introduce RSF from statistical learning to solve vision problems. Moreover, we experimentally show that it outperforms other alternatives on our problem. \subsection{Foreground Feature} \label{subsec:sa_heatmap} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/heatmap_example_3.png} \caption{Spatial-appearance heatmap. Left: one player on a $4 \times 4$ grid; right: an example of detected objects and the corresponding heatmap.} \label{fig:heatmap} \vspace{-0.2in} \end{figure} Besides the whole-image feature from a CNN, we also represent the foreground objects in an image using a spatial-appearance (SA) heatmap, which encodes object appearances in a quantized image space. First, we quantize the image space into a $16 \times 9$ grid. Then, we represent the location of each player using five points (four corners and the center point) of its bounding box. Each point contributes \doubleQuote{heat} to the cell it falls in and to its neighboring cells. In a conventional heatmap, the \doubleQuote{heat} consists of pre-defined values such as the number of players \cite{chen2016learning}. In our heatmap, the \doubleQuote{heat} is the object appearance feature, which is learned from the data. Figure \ref{fig:heatmap} (left) illustrates how the SA heatmap is computed on a $4 \times 4$ grid. The bottom-right corner of the bounding box contributes weighted \doubleQuote{heats} to $C_1, C_2, C_3$ and $C_4$. The weights are the barycentric coordinates of the corner with respect to the four cell centers. We use the heatmap as input to train a binary classification CNN, and its second-to-last fully connected layer is used as the foreground feature. 
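The accumulation of appearance features into grid cells can be sketched as follows. This is our own illustration under assumed conventions (cell centers at half-integer grid offsets, illustrative names), not the paper's code:

```python
import numpy as np

def sa_heatmap(boxes, feats, img_w, img_h, gw=16, gh=9):
    """Accumulate each player's appearance feature into the heatmap.

    Five points per box (four corners and the center) each spread the
    player's feature vector over the four nearest cell centers with
    bilinear (barycentric) weights.  boxes: (N, 4) as (x1, y1, x2, y2);
    feats: (N, D).
    """
    d = feats.shape[1]
    H = np.zeros((gh, gw, d))
    for (x1, y1, x2, y2), f in zip(boxes, feats):
        pts = [(x1, y1), (x2, y1), (x1, y2), (x2, y2),
               ((x1 + x2) / 2, (y1 + y2) / 2)]
        for px, py in pts:
            # Continuous grid coordinates, clipped to the grid interior.
            gx = np.clip(px / img_w * gw - 0.5, 0, gw - 1)
            gy = np.clip(py / img_h * gh - 0.5, 0, gh - 1)
            i0, j0 = int(gy), int(gx)
            i1, j1 = min(i0 + 1, gh - 1), min(j0 + 1, gw - 1)
            wy, wx = gy - i0, gx - j0
            H[i0, j0] += (1 - wy) * (1 - wx) * f
            H[i0, j1] += (1 - wy) * wx * f
            H[i1, j0] += wy * (1 - wx) * f
            H[i1, j1] += wy * wx * f
    return H

# One player with a 3-dimensional appearance feature in a 1280x720 frame.
H = sa_heatmap(np.array([[100.0, 50.0, 200.0, 150.0]]),
               np.ones((1, 3)), img_w=1280, img_h=720)
```

Each of the five points deposits a total weight of 1 per feature dimension, so the heatmap mass equals five times the feature mass.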
\vspace{-0.1in} \paragraph{Appearance Feature Learning} Given the detected bounding boxes of the objects \cite{dai2016r}, we use a siamese network \cite{chopra2005learning} to learn object appearance features. We train the siamese network using player tracking information between frames, and extract features from image patches of players in testing. To train the network, we obtain positive (similar) examples from tracked players \cite{lu2013learning} in consecutive frames (\eg from frame 1 to frame 2). The underlying assumption is that tracked players in consecutive frames have similar appearance, pose and team membership. Any player not part of a track is likely to be dissimilar. The siamese network minimizes the contrastive loss \cite{hadsell2006dim}: \begin{equation} \label{equ:con_loss} \begin{array}{l} {L_c}({{\bf{x}}_i},{{\bf{x}}_j},{y_{i,j}}) = \\ {y_{i,j}}D{({{\bf{x}}_i},{{\bf{x}}_j})^2} + (1 - {y_{i,j}})\max {(\delta - D({{\bf{x}}_i},{{\bf{x}}_j}),0)^2}, \end{array} \vspace{-0.05in} \end{equation} \noindent where $\bf{x}_i$ and $\bf{x}_j$ are sub-images, $y_{i, j}$ are similar/dissimilar labels, $D(\cdot)$ is the $L_2$ distance and $\delta$ is a margin (1 in this work). The loss function minimizes the distance between paired examples when $y_{i,j} = 1$, and maximizes the distance up to the margin $\delta$ when $y_{i,j} = 0$. \section{Implementation} \label{seq:implementation} \paragraph{Label Estimation of Internet Videos} We pre-process Internet videos to obtain training labels. Given a raw video, we first detect shot boundaries using \cite{ekin2003automatic}. For simplicity, we call the consecutive frames at a shot boundary \textit{boundary frames}. Given the boundary frames, we train a CNN to classify their camera IDs into four categories (\ie left, middle, right and other-view). The other-view images are commercials, replay logos, or frames captured from other viewpoints. 
To train the camera-ID CNN, we first randomly sample 500 training frames from each PTZ video of the main game. For the other-view category, we sample the same number of images from a non-sports video. Then, we apply the trained model to classify the boundary frames. The classification result is manually checked and refined, and the refined boundary frames are used to re-train the CNN. This process is repeated for each video. After five games, the prediction accuracy is about $85\%$, which we found sufficient to lighten the workload of human annotation. Initialized by the CNN and then manually corrected, $1,634$ pairs of boundary frames are collected from twelve videos. \vspace{-0.1in} \paragraph{Feature Extraction} Each frame is represented by two types of features: the whole-image feature and the foreground feature. The whole-image feature (16 dimensions) comes from a binary classification CNN trained to classify whether an image is selected by human operators. The foreground feature (16 dimensions) is described in Section \ref{subsec:sa_heatmap}. We balance the number of positive and negative examples in training. For the main game, we choose the positive candidate view and one of the negative camera views at sampled times. For the auxiliary games, we randomly sample negative examples from the main game. \vspace{-0.1in} \paragraph{Data Imputation and Final Model Training} For data imputation, we randomly sample $4,000$ frames around camera shot boundaries (within 2 seconds). The imputed data are verified by a model trained from the complete examples (about $2,100$ examples passed verification). We use the random forest method to fuse the features from all candidate cameras, since it is relatively easy to train. For the final model, about $6,000$ examples are uniformly downsampled (1 fps) from the main game. The dimension of the feature is 96 ($16 \times 3 \times 2$ for two types of features from three candidate cameras). 
The parameters of the random forest are: 20 trees, maximum depth 20, and minimum leaf size 5. More details of the implementation are provided in the supplementary material. \section{Related Work} \textbf{Data Scarcity and Imputation} The availability of a large quantity of labeled training data is critical for successful learning methods, but this assumption is unrealistic for many tasks. As a result, many recent works have explored alternative training schemes, such as unsupervised learning \cite{zhou2017unsupervised}, and tasks where ground truth is very easy to acquire \cite{agrawal2016learning}. We follow this line of work with additional attention to data imputation approaches. Data imputation fills in missing data from existing data \cite{enders2010applied}. Missing data falls into three categories: missing at random (MAR), missing completely at random (MCAR) and missing not at random (MNAR). The missing data in our problem is MNAR because missingness is related to the value itself (all missing data is unselected by human operators). Our solution is adapted from the state-of-the-art random survival forests method \cite{ishwaran2008random,tang2017random}. \textbf{Camera Viewpoint Prediction} In single-camera systems, previous techniques have been proposed to predict camera angles for PTZ cameras~\cite{chen2016learning} and to generate natural-looking normal field-of-view (NFOV) video from $360^o$ panoramic views \cite{su2016activity,su2017making,hu2017deep360}. In multi-camera systems, camera viewpoint prediction methods select a subset of all available cameras~\cite{wang2008automatic,daniyal2011multi,tessens2014camera,arev2014automatic,fujisawa2015automatic,gaddam2014your,yus2015multicamba,wang2016personal,lefevre2018automatic}. In broadcast systems, semi-automatic \cite{foote2013one} and fully automatic systems have been developed in practice. 
For example, Chen \etal \cite{chen2013computational} proposed the automated director assistance (ADA) method, which recommends the best view from a large number of cameras using hand-crafted features for field hockey games. Chen \etal \cite{chen2018camera} modeled camera selection as a regression problem constrained by temporal smoothness. They proposed a cumulative distribution function (CDF) regularization to prevent too-short or too-long camera durations. However, their method requires a real-valued label (visual importance) for each candidate frame. Our problem belongs to the setting of multiple dynamic camera systems. \textbf{Video Analysis and Feature Representation} Team sports analysis has focused on player/ball tracking, activity recognition, localization, player movement forecasting and team formation identification~\cite{ibrahim2016hierarchical,felsen2017will,wei2014forecasting,lucey2013representing,thomas2017computer,lu2018lightweight,giancola2018soccernet}. For example, activity recognition models for events with well-defined group structures have been presented in \cite{ibrahim2016hierarchical}. Attention models have been used to detect key actors \cite{ramanathan_cvpr16} and localize actions (\eg who is the shooter) in basketball videos. Gaze information of players has been used to select proper camera views \cite{arev2014automatic} from first-person videos. Hand-crafted features \cite{chen2016learning,felsen2017will}, deep features \cite{tran2015learning} and semantic features \cite{bialkowski2016discovering} have been used to describe the evolution of multi-person sports for various tasks. Most deep features are extracted from the whole image using supervised learning \cite{chen2018camera}. On the other hand, object-level features (\eg image patches of players) are difficult to learn because of the lack of object-level annotations. Our object appearance features are learned from a siamese network \cite{chopra2005learning} without object-level annotations.
\section{Introduction} Consider a tournament on $n$ teams competing to win a single prize via $\binom{n}{2}$ pairwise matches. A tournament rule is a (possibly randomized) map from these $\binom{n}{2}$ matches to a single winner. In line with several recent works~\cite{AK10,APT09, ssw17,swzz20,dw21}, we study rules that satisfy some notion of fairness (that is, ``better'' teams should be more likely to win), and non-manipulability (that is, teams have no incentive to manipulate the matches). More specifically, prior work identifies \textit{Condorcet-consistence} (Definition \ref{condorcetconsistent}) as one desirable property of tournament rules: whenever an undefeated team exists, a Condorcet-consistent rule selects that team as the winner with probability $1$. Another desirable property is \textit{monotonicity} (Definition \ref{monotonedef}): no team can unilaterally increase the probability that it wins by throwing a single match. Arguably, any sensible tournament rule should at minimum satisfy these two basic properties, and numerous such simple rules exist. \cite{APT09,AK10} further considered the following type of deviation: what if the same company sponsors multiple teams in an eSports tournament, and wants to maximize the probability that one of them wins the top prize?\footnote{Similarly, perhaps there are multiple athletes representing the same country or university in a traditional sports tournament.} In principle, these teams might manipulate the outcomes of the matches they play amongst themselves in order to achieve this outcome. Specifically, they call a tournament rule $k$-Strongly-Non-Manipulable ($k$-SNM, Definition \ref{manipulatingatournament}), if no set of $k$ teams can successfully manipulate the $\binom{k}{2}$ pairwise matches amongst themselves to improve the probability that one of these $k$ teams wins the tournament. 
Unfortunately, even for $k=2$,~\cite{APT09, AK10} establish that no tournament rule is both Condorcet-consistent and $2$-SNM. This motivated recent work in~\cite{ssw17,swzz20,dw21} to design tournament rules which are Condorcet-consistent and \emph{as non-manipulable as possible}. Specifically,~\cite{ssw17} defines a tournament rule to be $k$-SNM-$\alpha$ if no set of $k$ teams can manipulate the $\binom{k}{2}$ pairwise matches amongst themselves to increase the total probability that any of these $k$ teams wins \emph{by more than $\alpha$} (see Definition~\ref{manipulatingatournament}). These works design several simple Condorcet-consistent and $2$-SNM-$1/3$ tournament rules, which is optimal for $k=2$~\cite{ssw17}. In fact, the state of affairs is now fairly advanced for $k=2$: each of~\cite{ssw17, swzz20, dw21} proposes a new $2$-SNM-$1/3$ tournament rule.~\cite{swzz20} considers a stronger fairness notion that they term Cover-consistence, and~\cite{dw21} considers probabilistic tournaments (see Section~\ref{sec:related} for further discussion). However, \emph{significantly} less is known for $k > 2$. Indeed, only~\cite{swzz20} analyzes manipulability for $k > 2$. They design a rule that is $k$-SNM-$2/3$ for all $k$, but that rule is non-monotone, and it is unknown whether their analysis of that rule is tight. Our main result provides a tight analysis of the manipulability of Randomized Death Match~\cite{dw21} when $k=3$. We remark that this is: a) the first tight analysis of the manipulability of any Condorcet-consistent tournament rule for $k>2$, b) the first analysis establishing a monotone tournament rule that is $k$-SNM-$\alpha$ for any $k > 2$ and $\alpha < 1$, and c) the strongest analysis to-date of any tournament rule (monotone or not) for $k=3$. We overview our main result in more detail in Section~\ref{sec:results} below.\\ \indent Beyond our main result, we further consider manipulations through a \emph{Sybil attack} (Definition \ref{sybil_attack_def}). 
As a motivating example, imagine that a tournament rule is used as a proxy for a voting rule to select a proposal (voters compare each pair of proposals head-to-head, and this constitutes the pairwise matches input to a tournament rule). A proposer may attempt to manipulate the protocol with a Sybil attack, by submitting numerous nearly-identical clones of the same proposal. This manipulates the original tournament, with a single node $u_1$ corresponding to the proposal, into a new one with additional nodes $u_2,\ldots, u_m$ corresponding to the Sybils. Each node $v \notin \{u_1,\ldots, u_m\}$ either beats all the Sybils, or none of them (because the Sybil proposals are essentially identical to the original). The questions then become: Can the proposer profitably manipulate the matches within the Sybils? Is it beneficial for a proposer to submit as many Sybils as possible? We first show that, when participating in Randomized Death Match, the Sybils cannot gain anything by manipulating the matches between them. Perhaps more surprisingly, we show that Randomized Death Match is \emph{Asymptotically Strongly Sybil-Proof}: as the number of Sybils approaches $\infty$, the collective probability that a Sybil wins RDM approaches \emph{zero} (unless the original proposal is a Condorcet winner, in which case the probability that a Sybil wins is equal to $1$, for any number of Sybils $> 0$). \subsection{Our Results}\label{sec:results} As previously noted, our main result is a tight analysis of the manipulability of Randomized Death Match (RDM) for coalitions of size $3$. Randomized Death Match is the following simple rule: pick two uniformly random teams who have not yet been eliminated, and eliminate the loser of their head-to-head match. \begin{inf_theorem} (See Theorem \ref{theorem2}) RDM is 3-SNM-$\frac{31}{60}$. RDM is not 3-SNM-$\alpha$ for $\alpha < \frac{31}{60}$. 
\end{inf_theorem} Recall that this is the first tight analysis of any Condorcet-consistent tournament rule for any $k>2$ and the first analysis establishing a monotone, Condorcet-consistent tournament rule that is $k$-SNM-$\alpha$ for any $k > 2$, $\alpha < 1$. Recall also that, previously, the smallest $\alpha$ for which a $3$-SNM-$\alpha$ (non-monotone) Condorcet-consistent tournament rule was known was $2/3$. Our second result concerns manipulation by Sybil attacks. A Sybil attack is one in which a team starts from a base tournament $T$ and adds some number $m-1$ of clones of itself to create a new tournament $T'$; the team can arbitrarily control the matches within its Sybils, but each Sybil beats exactly the same set of teams as the cloned team (see Definition \ref{sybil_attack_def}). We say that a tournament rule $r$ is \emph{Asymptotically Strongly Sybil-Proof} (Definition \ref{ASSP_def}) if for any tournament $T$ and team $u_1 \in T$ that is not a Condorcet winner, the maximum collective probability that a Sybil wins (under $r$) over all of $u_1$'s Sybil attacks with $m$ Sybils goes to 0 as $m$ goes to infinity. See Section \ref{sec:prelim} for a formal definition. \begin{inf_theorem}(See Theorem~\ref{theoremcopiesto0}) RDM is Asymptotically Strongly Sybil-Proof. \end{inf_theorem} \subsection{Technical Highlight} All prior work establishing that a particular tournament rule is $2$-SNM-$1/3$ follows a similar outline: for any $T$, cases where manipulating the $\{u,v\}$ match could potentially improve the chances of winning are coupled with two cases where manipulation cannot help. By using such a coupling argument, it is plausible that one can show that RDM is $3$-SNM-$(\frac{1}{2} + c)$ for a small constant $c$. However, given that Theorem~\ref{theorem2} establishes that RDM is $3$-SNM-$31/60$, it is hard to imagine that this coupling approach will be tractable to obtain the exact answer. 
Our approach is instead drastically different: we find a particular 5-team tournament, and a manipulation by $3$ teams that gains $31/60$, and directly prove that this must be the worst case. We implement our approach using a first-step analysis, thinking of the first match played in RDM on an $n$-team tournament as producing a distribution over $(n-1)$-team tournaments. The complete analysis inevitably requires some careful case analysis, but is tractable to execute fully by hand. Although this may no longer be the case for future work that considers larger $k$ or more sophisticated tournament rules, our approach will still be useful to limit the space of potential worst-case examples. \subsection{Related Work}\label{sec:related} There is a vast literature on tournament rules, both within Social Choice Theory, and within the broad CS community~\cite{Ban85,Fis77,FR92,LLB93,KSW16,KW15,Mil80,SW11}. The Handbook of Computational Social Choice provides an excellent survey of this broad field, which we cannot overview in its entirety~\cite{BCELP}. Our work considers the model initially posed in~\cite{AK10,APT09}, and continued in~\cite{dw21,ssw17,swzz20}, which we overview in more detail below. \cite{AK10,APT09} were the first to consider Tournament rules that are both Condorcet-consistent and $2$-SNM, and proved that no such rules exist. They further considered tournament rules that are $2$-SNM and approximately Condorcet-consistent.~\cite{ssw17} first proposed to consider tournament rules that are instead Condorcet-consistent and approximately $2$-SNM. 
Their work establishes that Randomized Single Elimination Bracket is $2$-SNM-$1/3$, and that this is tight.\footnote{Randomized Single Elimination Bracket iteratively places the teams, randomly, into a single-elimination bracket, and then `plays' all matches that would occur in this bracket to determine a winner.} \cite{swzz20} establish that Randomized King of the Hill (RKotH) is $2$-SNM-$1/3$,\footnote{Randomized King of the Hill iteratively picks a `prince', and eliminates all teams beaten by the prince, until only one team remains.} and~\cite{dw21} establish that Randomized Death Match is $2$-SNM-$1/3$.~\cite{swzz20} show further that RKotH satisfies a stronger fairness notion called Cover-consistence, and~\cite{dw21} extends their analysis to probabilistic tournaments. In summary, the state of affairs for $k=2$ is quite established: multiple $2$-SNM-$1/3$ tournament rules are known, and multiple different extensions beyond the initial model of~\cite{ssw17} are known. For $k>2$, however, significantly less is known.~\cite{ssw17} gives a simple example establishing that no rule is $k$-SNM-$\frac{k-1-\varepsilon}{2k-1}$ for any $\varepsilon > 0$, but no rules are known to match this bound for any $k > 2$. Indeed,~\cite{swzz20} shows that this bound is not tight, and proves a stronger lower bound for $k \rightarrow \infty$. For example, a corollary of their main result is that no $939$-SNM-$1/2$ tournament rule exists. They also design a non-monotone tournament rule that is $k$-SNM-$2/3$ for all $k$. Other than these results, there is no prior work for manipulating sets of size $k > 2$. In comparison, our work is the first to give a tight analysis of any Condorcet-consistent tournament rule for $k > 2$, and is the first proof that any monotone, Condorcet-consistent tournament rule is $k$-SNM-$\alpha$ for any $k > 2,\alpha < 1$. 
\\ \indent Regarding our study of Sybil attacks, similar clone manipulations have previously been considered in Social Choice Theory under the name of \emph{composition-consistency}. \cite{lll96} introduces the notion of a \emph{decomposition} of the teams in a tournament into components, where all the teams in a component are clones of each other with respect to the teams not in the component. \cite{lll96} defines a deterministic tournament rule to be \emph{composition-consistent} if it chooses the best teams from the best components\footnote{For a fully rigorous mathematical definition, see Definition 10 of \cite{lll96}}. In particular, composition-consistency implies that a losing team cannot win by introducing clones of itself or any other team. \cite{lll96} shows that the tournament rules Banks, Uncovered Set, TEQ, and Minimal Covering Set are \emph{composition-consistent}, while Top Cycle, Slater, and Copeland are not. Both computational and axiomatic aspects of \emph{composition-consistency} have been explored since. \cite{efs12} studies the structural properties of clone sets and their computational aspects in the context of voting preferences. In the context of probabilistic social choice, \cite{bbs16} gives probabilistic extensions of the axioms \emph{composition-consistency} and \emph{population-consistency} and uniquely characterizes the probabilistic social choice rules which satisfy both. In the context of scoring rules, \cite{o20} studies the incompatibility of \emph{composition-consistency} and \emph{reinforcement} (stronger than \emph{population-consistency}) and decomposes composition-consistency into four weaker axioms. In this work, we consider Sybil attacks on Randomized Death Match. 
Our study of Sybil attacks differs from prior work on the relevant notion of \emph{composition-consistency} in the following ways: (i) We focus on a randomized tournament rule (RDM), (ii) We study settings where the manipulator creates clones of themselves (i.e. not of other teams), (iii) We explore the asymptotic behavior of such manipulations (Definition \ref{ASSP_def}, Theorem \ref{theoremcopiesto0}). \subsection{Roadmap} Section \ref{sec:prelim} follows with definitions and preliminaries, and formally defines Randomized Death Match (RDM). Section \ref{examples} introduces some basic properties and examples for the RDM rule as well as a recap of previous work for two manipulators. Section \ref{rdm3} consists of a proof that the manipulability of 3 teams in RDM is at most $\frac{31}{60}$ and that this bound is tight. Section \ref{copiessection} consists of our main results regarding Sybil attacks on a tournament. Section ~\ref{sec:conclusion} concludes. \section{Preliminaries}\label{sec:prelim} In this section we introduce notation that we will use throughout the paper consistent with prior work in \cite{dw21,ssw17,swzz20}. \begin{definition}[Tournament] A (round robin) tournament $T$ on $n$ teams is a complete, directed graph on $n$ vertices whose edges denote the outcome of a match between two teams. Team $i$ beats team $j$ if the edge between them points from $i$ to $j$. \end{definition} \begin{definition}[Tournament Rule] A tournament rule $r$ is a function that maps tournaments $T$ to a distribution over teams, where $r_i(T) := \Pr(r(T) = i)$ denotes the probability that team $i$ is declared the winner of tournament $T$ under rule $r$. We use the shorthand $r_S(T) := \sum_{i\in S} r_i(T)$ to denote the probability that a team in $S$ is declared the winner of tournament $T$ under rule $r$. 
\end{definition} \begin{definition}[$S$-adjacent]\label{s_adjacent} Two tournaments $T,T'$ are $S$-adjacent if for all $i,j$ such that $\{i,j\} \not \subseteq S$, $i$ beats $j$ in $T$ if and only if $i$ beats $j$ in $T'$. \end{definition} In other words, two tournaments $T,T'$ are $S$-adjacent if the teams from $S$ can manipulate the outcomes of the matches between them in order to obtain a new tournament $T'$. \begin{definition}[Condorcet-Consistent]\label{condorcetconsistent} Team $i$ is a Condorcet winner of a tournament $T$ if $i$ beats every other team (under $T$). A tournament rule $r$ is Condorcet-consistent if for every tournament $T$ with a Condorcet winner $i$, $r_i(T) = 1$ (whenever $T$ has a Condorcet winner, that team wins with probability 1). \end{definition} \begin{definition}[Manipulating a Tournament]\label{manipulatingatournament} For a set $S$ of teams, a tournament $T$ and a tournament rule $r$, we define $\alpha_S^r(T)$ to be the maximum winning probability that $S$ can possibly gain by manipulating $T$ to an $S$-adjacent $T'$. That is: $$\alpha_S^r(T) = \max_{\textit{T': T' is S-adjacent to T}} \{r_{S}(T') - r_{S}(T)\}$$ For a tournament rule $r$, define $\alpha_{k,n}^r = \sup_{T,S: |S|= k, |T| = n} \{\alpha_S^r(T)\}$. Finally, define $$\alpha_{k}^r = \sup_{n \in \mathbb{N}}\alpha_{k,n}^r = \sup_{T,S: |S|= k} \{\alpha_S^r(T)\}$$ If $\alpha_k^r \leq \alpha$, we say that $r$ is $k$-Strongly-Non-Manipulable at probability $\alpha$, or $k$-SNM-$\alpha$. \end{definition} Intuitively, $\alpha_{k,n}^r$ is the maximum increase in collective winning probability that a group of $k$ teams can achieve by manipulating the matches between them, over tournaments with $n$ teams. Similarly, $\alpha_{k}^r$ is the maximum increase in winning probability that a group of $k$ teams can achieve by manipulating the matches between them, over all tournaments. \\ Two other naturally desirable properties of a tournament rule are monotonicity and anonymity. 
\begin{definition}[Monotone]\label{monotonedef} A tournament rule $r$ is monotone if whenever $T,T'$ are $\{u,v\}$-adjacent and $u$ beats $v$ in $T$, we have $r_{u}(T) \geq r_{u}(T')$. \end{definition} \begin{definition}[Anonymous]\label{anonimousdef} A tournament rule $r$ is anonymous if for every tournament $T$, every permutation $\sigma$, and all $i$, $r_{\sigma(i)}(\sigma(T)) = r_{i}(T)$. \end{definition} Below we define the tournament rule that is the focus of this work. \begin{definition}[Randomized Death Match] Given a tournament $T$ on $n$ teams, the Randomized Death Match rule (RDM) picks two uniformly random teams (without replacement) and plays their match. It then eliminates the loser and recurses on the remaining teams, for a total of $n-1$ rounds, until a single team remains, who is declared the winner. \end{definition} Below we define the notion of a \textit{Sybil attack} on a tournament $T$, and the property of \textit{Asymptotically Strongly Sybil-Proof} (ASSP) for a tournament rule $r$, both of which will be relevant in our discussion in Section~\ref{copiessection}. \begin{definition}[Sybil Attack]\label{sybil_attack_def} Given a tournament $T$, a team $u_1 \in T$ and an integer $m$, define $Syb(T,u_1,m)$ to be the set of tournaments $T'$ satisfying the following properties: \\ \indent 1. The set of teams in $T'$ consists of $u_2, \ldots, u_m$ and all teams in $T$. \\ \indent 2. If $a,b$ are teams in $T$, then $a$ beats $b$ in $T'$ if and only if $a$ beats $b$ in $T$. \\ \indent 3. If $a \neq u_1$ is a team in $T$ and $i \in [m]$, then $u_i$ beats $a$ in $T'$ if and only if $u_1$ beats $a$ in $T$. \\ \indent 4. The match between $u_i$ and $u_j$ can be arbitrary for each $i \neq j$. \end{definition} Intuitively, $Syb(T,u_1,m)$ is the set of all Sybil attacks of $u_1$ at $T$ with $m$ Sybils. Each Sybil attack is a tournament $T' \in Syb(T,u_1,m)$ obtained by starting from $T$ and creating $m$ Sybils of $u_1$ (while counting $u_1$ as a Sybil of itself). 
Each Sybil beats the same set of teams from $T \setminus u_1$, and the matches between the Sybils $u_1, \ldots, u_m$ can be arbitrary. Every possible realization of the matches between the Sybils gives rise to a new tournament $T' \in Syb(T,u_1, m)$ (implying that $Syb(T,u_1, m)$ contains $2^{\binom{m}{2}}$ tournaments). \begin{definition}[Asymptotically Strongly Sybil-Proof]\label{ASSP_def} A tournament rule $r$ is \textit{Asymptotically Strongly Sybil-Proof} (ASSP) if for any tournament $T$ and team $u_1 \in T$ which is not a Condorcet winner, $$\lim_{m \to \infty}\max_{T' \in Syb(T,u_1,m)} r_{u_1, \ldots, u_m}(T') = 0$$ \end{definition} Informally speaking, Definition \ref{ASSP_def} says that $r$ is ASSP if the probability that a Sybil wins in the most profitable Sybil attack on $T$ with $m$ Sybils goes to zero as $m$ goes to $\infty$. \section{Basic Properties of RDM and Examples}\label{examples} In this section we consider a few basic properties of RDM and several examples on small tournaments. We will refer to these examples in our analysis later. Throughout the paper we denote RDM by $r$, and it is the only tournament rule we consider. We next state the first-step analysis observation that will be central to our analysis throughout the paper. For the remainder of the section, for a match $e$, denote by $T|_e$ the tournament obtained from $T$ by eliminating the loser of $e$. Let $S|_e = S \setminus x$, where $x$ is the loser of $e$. Let $d_x$ denote the number of teams that $x$ loses to, and let $T \setminus x$ denote the tournament obtained after removing team $x$ from $T$. \begin{observation}[First-step analysis]\label{obs_fsa} Let $S$ be a subset of teams in a tournament $T$. 
Then $$r_{S}(T) = \frac{1}{\binom{n}{2}}\sum_{e}r_{S|_e}(T|_e) = \frac{1}{\binom{n}{2}}\sum_{x} d_x r_{S \setminus x}(T \setminus x)$$ (if $S = \{v\}$, then we define $r_{S \setminus v}(T \setminus v) = 0$, and if $x \not \in S$, then $S \setminus x = S$). \end{observation} \begin{proof} The first equality follows from the fact that after we play $e$ we are left with the tournament $T|_e$, and we sum over all possible $e$ in the first round. To prove the second equality, notice that for any $x$ the term $r_{S \setminus x}(T \setminus x)$ appears exactly $d_x$ times in $\sum_{e} r_{S|_e}(T|_e)$, because $x$ loses exactly $d_x$ matches. \end{proof}\\ As a first illustration of first-step analysis, we show that adding teams which lose to every other team does not change the probability distribution of the winner. \begin{lemma}\label{lemma1} Let $T$ be a tournament and suppose $u \in T$ loses to every other team. Then for all $v \neq u$, we have $r_v(T) = r_v(T \setminus u)$. \end{lemma} \begin{proof} We prove the statement by induction on $|T|$. If $|T| = 2$, then clearly $r_v(T) = r_v(T \setminus u) = 1$. Suppose the statement holds for all tournaments $T'$ with $|T'| < |T| = n$; we prove it for $T$. By first-step analysis (Observation \ref{obs_fsa}) we have that $$r_{v}(T) = \frac{1}{\binom{n}{2}}\sum_{e} r_{v}(T|_e) = \frac{1}{\binom{n}{2}} \sum_{x \neq v} d_x r_{v}(T \setminus x)$$ where team $x$ loses $d_x$ matches in $T$. By the inductive hypothesis we have that $r_{v}(T \setminus x) = r_{v}(T \setminus \{x,u\})$ for $x \neq u,v$, and $d_u = n-1$.
Thus, \begin{align*} r_{v}(T) &= \frac{1}{\binom{n}{2}} \sum_{x \neq v} d_x r_{v}(T \setminus x) =\\ &= \frac{1}{\binom{n}{2}}(\sum_{x \notin \{u,v\}} d_x r_{v}(T \setminus \{x,u\}) + (n-1)r_{v}(T \setminus u)) = \\ &= \frac{1}{\binom{n}{2}}(\binom{n-1}{2}r_{v}(T \setminus u) + (n-1)r_{v}(T \setminus u)) =r_{v}(T \setminus u) \\ \end{align*} where in the second to last line we used $\sum_{x \notin \{u,v\}} d_x r_{v}(T \setminus \{x,u\}) = \binom{n-1}{2}r_{v}(T \setminus u)$, which follows from first-step analysis (Observation \ref{obs_fsa}) and the fact that $x$ loses $d_x$ matches in $T \setminus u$ (as $u$ loses to every team in $T$). \end{proof}\\ As a natural consequence of Lemma \ref{lemma1}, we show that the most manipulable tournament on $n+1$ teams is at least as manipulable as the most manipulable tournament on $n$ teams. \begin{lemma}\label{lemma2} $\alpha_{k,n}^r \leq \alpha_{k,n+1}^r$ \end{lemma} \begin{proof} See Appendix~\ref{appendix_sec3} for a proof. \end{proof}\\ We now show another natural property of RDM, which generalizes Condorcet-consistency (Definition \ref{condorcetconsistent}): if a group of teams $S$ wins all of its matches against the remaining teams, then a team from $S$ always wins. \begin{lemma}\label{lemma3} Let $T$ be a tournament and $S \subseteq T$ a group of teams such that every team in $S$ beats every team in $T \setminus S$. Then, $r_{S}(T) = 1$. \end{lemma} \begin{proof} See Appendix~\ref{appendix_sec3} for a proof. \end{proof}\\ As a result of Lemma \ref{lemma3}, RDM is Condorcet-consistent. As expected, RDM is also monotone (see Definition \ref{monotonedef}). \begin{lemma}\label{rdm_monotone} RDM is monotone. \end{lemma} \begin{proof} See Appendix~\ref{appendix_sec3} for a proof. \end{proof}\\ Lemma \ref{lemma1} tells us that adding a team which loses to all other teams does not change each remaining team's probability of winning.
Lemmas \ref{lemma1}, \ref{lemma2}, \ref{lemma3}, and \ref{rdm_monotone} will be useful in our later analysis in Sections \ref{rdm3} and \ref{copiessection}. Now we consider a few examples of tournaments and illustrate the use of first-step analysis (Observation \ref{obs_fsa}) to compute the probability distribution of the winner in them. \begin{enumerate} \item Let $T = \{a,b,c\}$, where $a$ beats $b$, $b$ beats $c$ and $c$ beats $a$. By symmetry of RDM, we have $r_a(T) = r_b(T) = r_c(T) = \frac{1}{3}$. \item Let $T = \{a,b,c\}$ where $a$ beats $b$ and $c$. Then clearly, $r_a(T) = 1$ and $r_b(T) =r_c(T) = 0$. \item By Lemma \ref{lemma1}, it follows that the only tournament on 4 teams whose probability distribution cannot be reduced to a distribution on 3 teams is the following: $T = \{a_1,a_2,a_3,a_4\}$, where $a_i$ beats $a_{i+1}$ for $i = 1,2,3$, $a_4$ beats $a_1$, $a_1$ beats $a_3$ and $a_2$ beats $a_4$. By using what we computed in (1) and (2) combined with Lemma \ref{lemma1}, we get by first-step analysis \begin{align*} r_{a_1}(T) &= \frac{1}{6}(r_{a_1}(T \setminus a_2) +2 r_{a_1}(T \setminus a_3)+2 r_{a_1}(T \setminus a_4)) = \frac{1}{6}(\frac{1}{3} + \frac{2}{3} + 2) = \frac{1}{2}\\ r_{a_2}(T) &= \frac{1}{6}(r_{a_2}(T \setminus a_1) +2 r_{a_2}(T \setminus a_3)+2 r_{a_2}(T \setminus a_4)) =\frac{1}{6}(1 +\frac{2}{3}) = \frac{5}{18}\\ r_{a_3}(T) &= \frac{1}{6}(r_{a_3}(T \setminus a_1) +r_{a_3}(T \setminus a_2)+2 r_{a_3}(T \setminus a_4)) =\frac{1}{6}(\frac{1}{3}) = \frac{1}{18}\\ r_{a_4}(T) &= \frac{1}{6}(r_{a_4}(T \setminus a_1) +r_{a_4}(T \setminus a_2)+2 r_{a_4}(T \setminus a_3)) =\frac{1}{6}(\frac{1}{3} + \frac{2}{3}) = \frac{1}{6} \end{align*} \end{enumerate} The above examples are important in our analysis because: a) we will use them later for our lower bound example in Section \ref{lb}, and b) they give a short illustration of first-step analysis.
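The first-step analysis used in these computations can also be carried out mechanically. The following Python sketch (our own illustration; the function and variable names are not from the paper) computes the exact RDM winner distribution by conditioning on the uniformly random first match and recursing, and reproduces the distribution of example (3):

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations

def rdm_distribution(beats):
    """Exact RDM winner distribution via first-step analysis:
    condition on the uniformly random first match, eliminate the
    loser, and recurse.  `beats[x]` is the set of teams x beats."""
    @lru_cache(maxsize=None)
    def dist(remaining):
        if len(remaining) == 1:
            return {remaining[0]: Fraction(1)}
        pairs = list(combinations(remaining, 2))
        out = {t: Fraction(0) for t in remaining}
        for x, y in pairs:
            loser = y if y in beats[x] else x
            sub = dist(tuple(t for t in remaining if t != loser))
            for t, p in sub.items():
                out[t] += p / len(pairs)
        return out
    return dist(tuple(sorted(beats)))

# Example (3): a1 beats a2, a3; a2 beats a3, a4; a3 beats a4; a4 beats a1.
T = {"a1": {"a2", "a3"}, "a2": {"a3", "a4"}, "a3": {"a4"}, "a4": {"a1"}}
d = rdm_distribution(T)
# d["a1"], d["a2"], d["a3"], d["a4"] = 1/2, 5/18, 1/18, 1/6
```

Using exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity when matching the hand-computed values above.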
\\ \indent In the following subsection, we review prior results on 2-team manipulations in RDM, which will also be useful for our treatment of the main result in Section \ref{rdm3}. \subsection{Recap: Tight Bounds on 2-Team Manipulations in RDM} \cite{dw21} (Theorem 5.2) proves that RDM is 2-SNM-$\frac{1}{3}$ and that this bound is tight, namely $ \alpha_2^{RDM} = \frac{1}{3}$. We will rely on this result in Section \ref{rdm3}. \begin{theorem}\label{theorem1} (Theorem 5.2 in \cite{dw21}) $\alpha_2^{RDM} = \frac{1}{3}$ \end{theorem} \cite{ssw17} (Theorem 3.1) proves that the bound of $\frac{1}{3}$ is the best one can hope to achieve for a Condorcet-consistent rule. \begin{theorem}\label{theoremssw17} (Theorem 3.1 in \cite{ssw17}) There is no Condorcet-consistent tournament rule on $n$ players (for $n \geq 3$) that is 2-SNM-$\alpha$ for $\alpha < \frac{1}{3}$ \end{theorem} We record the following corollary, which will be useful in Section \ref{rdm3}. \begin{corollary}\label{corollary} Let $T$ be a tournament and $u,v \in T$ two teams such that there is at most one match in which a team in $\{u,v\}$ loses to a team in $T \setminus \{u,v\}$. Then $$r_{u,v}(T) \geq \frac{2}{3}$$ \end{corollary} \begin{proof} If $u$ and $v$ beat every team in $T \setminus \{u,v\}$, then by Lemma \ref{lemma3}, $r_{u,v}(T) = 1 \geq \frac{2}{3}$. WLOG suppose that there is some team $t$ which beats $u$, loses to $v$, and all other teams lose to both $u$ and $v$. Let $T'$ be $\{u,v\}$-adjacent to $T$ such that $v$ is a Condorcet winner in $T'$. Clearly we have $r_{u,v}(T') = 1$, as RDM is Condorcet-consistent. By Theorem \ref{theorem1} we have $r_{u,v}(T') -r_{u,v}(T) \leq \frac{1}{3}$. This implies that $r_{u,v}(T) \geq \frac{2}{3}$, as desired. \end{proof} \section{Main Result: $\alpha^{RDM}_3 = 31/60$}\label{rdm3} The goal of this section is to prove that no 3 teams can improve their probability of winning by more than $\frac{31}{60}$, and that this bound is tight.
We prove the following theorem. \begin{theorem}\label{theorem2} $\alpha_3^{RDM} = \frac{31}{60}$ \end{theorem} Our proof consists of two parts: \begin{itemize} \item Lower bound: $\alpha_3^{RDM} \geq \frac{31}{60}$, for which we provide a tournament $T$ and a set $S$ of size 3 which can manipulate to increase their probability by $\frac{31}{60}$ \item Upper bound: $\alpha_3^{RDM} \leq \frac{31}{60}$, for which we provide a proof that for any tournament $T$ no set $S$ of size 3 can increase their probability of winning by more than $\frac{31}{60}$, i.e. RDM is 3-SNM-$\frac{31}{60}$ \end{itemize} \subsection{Lower Bound}\label{lb} Let $r$ denote RDM. Denote by $B_x$ the set of teams which team $x$ beats. Consider the following tournament $T = \{u,v,w,a,b\}$ (shown in Figure \ref{fig:example_graph}): $$B_{a} = \{u,v,b\}, B_b = \{u,v\}, B_u = \{v,w\}, B_v = \{w\}, B_{w} = \{a,b\}$$ Let $S = \{u,v,w\}$. By first-step analysis (Observation \ref{obs_fsa}) and by using our knowledge from Section \ref{examples} for tournaments on 4 teams, we can write \begin{align*} r_{u,v,w}(T) &= \frac{1}{10}(3 r_{u,w}(T \setminus v) +2 r_{u,v}(T \setminus w) + 2 r_{u,v,w}(T \setminus b) + r_{u,v,w}(T \setminus a) + 2 r_{v,w}(T \setminus u)) = \\ &= \frac{1}{10} (3\times(\frac{1}{2}+\frac{1}{6}) + 2 \times 0 + 2 \times (\frac{5}{18}+\frac{1}{18}+\frac{1}{6}) + (\frac{5}{18}+\frac{1}{18}+\frac{1}{6}) + 2 \times (\frac{1}{2}+\frac{1}{6}))= \\ &= \frac{1}{10}(2+1 + \frac{1}{2} + \frac{4}{3}) = \frac{29}{60} \end{align*} Now suppose that $u$ and $v$ throw their matches with $w$, i.e., $T'$ is $S$-adjacent to $T$, where in $T'$, $w$ beats $u$ and $v$ and all other matches have the same outcomes as in $T$. Then, since $w$ is a Condorcet winner, $r_{u,v,w}(T') = r_{w}(T') = 1$. Therefore, $$\alpha_3^{RDM} \geq r_{u,v,w}(T')-r_{u,v,w}(T) = 1-\frac{29}{60} = \frac{31}{60}$$ Thus, $\alpha_3^{RDM} \geq \frac{31}{60}$ as desired.
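This calculation can also be checked mechanically. The self-contained Python sketch below (our own code, not part of the paper; all names are ours) computes exact RDM probabilities via the first-step recursion of Observation \ref{obs_fsa} and confirms both $r_{u,v,w}(T) = \frac{29}{60}$ and the gain of $\frac{31}{60}$ after $u$ and $v$ throw their matches to $w$:

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations

def rdm_distribution(beats):
    """Exact RDM winner distribution via the first-step recursion:
    average over the uniformly random first match, eliminating the
    loser each round.  `beats[x]` is the set of teams x beats."""
    @lru_cache(maxsize=None)
    def dist(remaining):
        if len(remaining) == 1:
            return {remaining[0]: Fraction(1)}
        pairs = list(combinations(remaining, 2))
        out = {t: Fraction(0) for t in remaining}
        for x, y in pairs:
            loser = y if y in beats[x] else x
            for t, p in dist(tuple(s for s in remaining if s != loser)).items():
                out[t] += p / len(pairs)
        return out
    return dist(tuple(sorted(beats)))

# The lower-bound tournament T on {u, v, w, a, b}.
T = {"a": {"u", "v", "b"}, "b": {"u", "v"},
     "u": {"v", "w"}, "v": {"w"}, "w": {"a", "b"}}
p = rdm_distribution(T)
before = p["u"] + p["v"] + p["w"]                    # 29/60

# T': u and v throw their matches with w; w becomes a Condorcet winner.
T2 = {"a": {"u", "v", "b"}, "b": {"u", "v"},
      "u": {"v"}, "v": set(), "w": {"a", "b", "u", "v"}}
after = sum(rdm_distribution(T2)[t] for t in "uvw")  # 1
gain = after - before                                # 31/60
```

The recursion is exponential in the number of teams, which is fine for verifying small worst-case examples like this 5-team tournament.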
\begin{figure}[htp] \centering \includegraphics[width=6cm]{example_graph.png} \caption{The tournament $T$ in which $S = \{u,v,w\}$ achieves a gain of $\frac{31}{60}$ by manipulation.} \label{fig:example_graph} \end{figure} \subsection{Upper Bound} Suppose we have a tournament $T$ on $n \geq 3$ vertices and $S = \{u,v,w\}$ is a set of 3 (distinct) teams, where $S$ will be the set of manipulator teams. Let $I$ be the set of matches in which a team from $S$ loses to a team from $T \setminus S$. Our proof of $\alpha^{RDM}_3 \leq \frac{31}{60}$ will use the following strategy: \begin{itemize} \item In Sections \ref{FSA_framework} and \ref{bounds_on_abc} we introduce the first-step analysis framework by considering possible cases for the first played match. In each of these cases the loser of the match is eliminated and we are left with a tournament with one less team. We pair each match in $T$ with its corresponding match in $T'$ and we bound the gains of manipulation in each of the following cases separately (these correspond to the terms $A$, $B$, and $C$, respectively, in the analysis in Section \ref{bounds_on_abc}). \begin{itemize} \item The first match is between two teams in $S$ (there are 3 such matches). \item The first match is between a team in $S$ and a team in $T \setminus S$, and the team from $S$ loses the match (there are $|I|$ such matches). \item The first match is any other match not covered by the above two cases. \end{itemize} \item In Section \ref{caseI4} we prove that if $|I| \leq 4$, then $\alpha^{RDM}_{S}(T) \leq \frac{31}{60}$ (i.e. the set of manipulators cannot gain more than $\frac{31}{60}$ by manipulating). \item In Section \ref{gen_upper_bound} we prove that if $T$ is the most manipulable tournament on $n$ vertices (i.e.
$\alpha^{RDM}_{S}(T) = \alpha_{3,n}^{RDM}$), then $\alpha^{RDM}_{S}(T) \leq \frac{|I| + 7}{3(|I| + 3)}$ \item In Section \ref{final_proof} we combine the above facts to finish the proof of Theorem \ref{theorem2} \end{itemize} We first introduce some notation that we will use throughout this section. Suppose that $T'$ and $T$ are $S$-adjacent. Recall from Section \ref{examples} that for a match $e = (i,j)$, $T|_{e}$ is the tournament obtained after eliminating the loser of $e$. Also, $d_x$ is the number of teams that a team $x$ loses to in $T$. For $x \in S$, let $\ell_x$ denote the number of matches $x$ loses against a team in $S$ when considered in $T$, and let $\ell'_x$ denote the number of matches that $x$ loses against a team in $S$ when considered in $T'$. Let $d^{*}_x$ denote the number of teams in $T \setminus S$ that $x$ loses to. Notice that since $T$ and $T'$ are $S$-adjacent, $x \in S$ loses to exactly $d^{*}_x$ teams in $T' \setminus S$ when considered in $T'$. Let $G = I \cup \{uv, vw, uw\}$ be the set of matches in which a team from $S$ loses. \subsubsection{The First-Step Analysis Framework}\label{FSA_framework} Notice that in the first round of RDM, a uniformly random match $e$ among the $\binom{n}{2}$ matches is chosen. If $e \in G$, then we are left with $T \setminus x$, where $x \in S$ is the team that loses in $e$. If $e \not \in G$, we are left with $T|_{e}$ and all teams in $S$ are still in the tournament. For each $x \in S$, there are $\ell_x$ matches in which it loses to a team from $S$ and $d^{*}_x$ matches in which it loses to a team from $T \setminus S$.
By considering each of these cases and using first-step analysis (Observation \ref{obs_fsa}), we have \begin{align*} r_{u,v,w}(T) &= \frac{1}{\binom{n}{2}} \Bigg[ \sum_{x \in \{u,v,w\}} \ell_x r_{\{u,v,w\} \setminus x}(T \setminus x) + d^{*}_u r_{v,w}(T \setminus u) + d^{*}_v r_{u,w}(T \setminus v) +d^{*}_w r_{v,u}(T \setminus w) \\&+ \sum_{e \notin G} r_{u,v,w}(T|_{e}) \Bigg] \end{align*} Since $T$ and $T'$ are $S$-adjacent, each $x\in S$ loses to exactly $d^{*}_x$ teams from $T' \setminus S$, and we can similarly write \begin{align*} r_{u,v,w}(T') &= \frac{1}{\binom{n}{2}}\Bigg[\sum_{x \in \{u,v,w\}} \ell'_x r_{\{u,v,w\} \setminus x}(T' \setminus x) + d^{*}_u r_{v,w}(T' \setminus u) + d^{*}_v r_{u,w}(T' \setminus v) + d^{*}_w r_{v,u}(T' \setminus w)\\ &+\sum_{e \notin G} r_{u,v,w}(T'|_{e})\Bigg]\\ \end{align*} By subtracting the above two expressions we get \begin{align*} &r_{u,v,w}(T') - r_{u,v,w}(T) = \\ &= \frac{1}{\binom{n}{2}} \Bigg[ \sum_{x \in \{u,v,w\}} \ell'_x r_{\{u,v,w\} \setminus x}(T' \setminus x) - \ell_x r_{\{u,v,w\}\setminus x}(T \setminus x) + d^{*}_u(r_{\{v,w\}}(T' \setminus u)- r_{\{v,w\}}(T \setminus u)) \\ &+ d^{*}_v(r_{\{u,w\}}(T' \setminus v)- r_{\{u,w\}}(T \setminus v)) + d^{*}_w(r_{\{v,u\}}(T' \setminus w)- r_{\{v,u\}}(T \setminus w)) \\ &+\sum_{e \notin G} r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \Bigg] \end{align*} Thus, \begin{equation}\label{fsa:1} r_{u,v,w}(T') - r_{u,v,w}(T) = \frac{1}{\binom{n}{2}}(A+B+C) \end{equation} where \begin{align*} A &= \sum_{x \in \{u,v,w\}} \ell'_x r_{\{u,v,w\} \setminus x}(T' \setminus x) - \ell_x r_{\{u,v,w\}\setminus x}(T \setminus x)\\ B &= \sum_{x \in S} d^{*}_x (r_{\{u,v,w\} \setminus x}(T' \setminus x)- r_{\{u,v,w\} \setminus x}(T \setminus x))\\ C &= \sum_{e \notin G} r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \end{align*} \subsubsection{Upper Bounds on $A,B$ and $C$}\label{bounds_on_abc} We now prove some bounds on the terms $A$, $B$ and $C$ (defined in Section \ref{FSA_framework}) which will be useful
later. Recall that $I$ denotes the set of matches in which a team from $S$ loses to a team from $T \setminus S$. We begin by bounding $A$ in the following lemma. \begin{lemma}\label{lemmaA} For all $S$-adjacent $T$ and $T'$, we have $A \leq \frac{7}{3}$. Moreover, if $|I| \leq 1$, then $A \leq 1$. \end{lemma} \begin{proof} See Appendix \ref{appendix_sec4} for a proof. \end{proof}\\ Next, we show the following bound on the term $B$. \begin{lemma}\label{lemmaB} For all $S$-adjacent $T,T'$ we have $$B \leq \frac{d^{*}_u + d^{*}_v + d^{*}_w}{3} = \frac{|I|}{3}$$ Moreover, if $|I| \leq 1$, then $B = 0$. \end{lemma} \begin{proof}See Appendix \ref{appendix_B} for a proof. \end{proof}\\ We introduce some more notation. For $n \in \mathbb{N}$, define $M_n(a_1,a_2,a_3)$ as the maximum winning probability gain that three teams $\{u,v,w\}$ can achieve by manipulation in a tournament $T$ of size $n$, in which there are exactly $a_i$ teams in $T \setminus S$ each of which beats exactly $i$ teams of $S$. Formally, \begin{align*} M_n(a_1,a_2,a_3) &= \max \Big\{r_{S}(T')-r_{S}(T) | T,T' \text{ are } S\text{-adjacent},|T|=n, |S|=3, \\ &\text{ $a_i$ teams in $T \setminus S$ beat exactly $i$ teams in $S$} \Big\} \end{align*} \indent Additionally, let $L_i$ be the set of teams in $T \setminus S$ each of which beats exactly $i$ teams in $S$. Let $Q$ be the set of matches in which two teams from the same $L_i$ play against each other or in which a team from $L_i$ loses to a team from $S$, for $i = 1,2,3$. Notice that $|Q| = 2a_1 + a_2 + \binom{a_1}{2} + \binom{a_2}{2} + \binom{a_3}{2}$ if there are $a_i$ teams in $T \setminus S$ each of which beats $i$ teams from $S$. \\ \indent With the new notation, we are now ready to prove a bound on the term $C$. Recall that $$C = \sum_{e \notin G} r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e})$$ where $G = I \cup \{uv, vw, uw\}$ is the set of matches in which a team from $S$ loses. Then we have the following bound on $C$.
\begin{lemma}\label{lemmaC} For all $S$-adjacent $T$ and $T'$ we have that $C$ is at most \begin{align*} &(2a_1 + \binom{a_1}{2})M_{n-1}(a_1-1,a_2,a_3)+ (a_2 + \binom{a_2}{2}) M_{n-1}(a_1,a_2-1,a_3) + \binom{a_3}{2}M_{n-1}(a_1,a_2,a_3-1)\\ &+\sum_{e \notin G \cup Q} r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \end{align*} \end{lemma} \begin{proof} See Appendix \ref{appendix_C} for a proof. \end{proof} \subsubsection{The Case $|I| \leq 4$}\label{caseI4} We summarize our claim when $|I| \leq 4$ in the following lemma. \begin{lemma}\label{lemmaI4} Let $T$ be a tournament, and $S$ a set of 3 teams. Suppose that there are at most 4 matches in which a team in $S$ loses to a team in $T \setminus S$ (i.e. $|I| \leq 4$). Then $\alpha^{RDM}_{S}(T) \leq \frac{31}{60}$. \end{lemma} \begin{proof} We will show that $M_n(a_1,a_2,a_3) \leq f(a_1,a_2,a_3)$ by induction on $n \in \mathbb{N}$ for the values of $(a_1,a_2,a_3)$ and $f(a_1,a_2,a_3)$ given in Table \ref{tab:upp_bounds} below. Notice that when there are at most 4 matches between a team in $S$ and a team in $T \setminus S$ in which the team from $S$ loses, we fall into one of the cases shown in the table for $(a_1,a_2,a_3)$. \setlength{\tabcolsep}{18pt} \renewcommand{\arraystretch}{1.6} \begin{table}[h] \scriptsize \centering \begin{tabular}{|c|c|} \hline $(a_1,a_2,a_3)$&$f(a_1,a_2,a_3)$\\ \hline (0,0,0)& 0\\ \hline (1,0,0)&$\frac{1}{6}$ \\ \hline (2,0,0) & $\frac{23}{60}$ \\ \hline (3,0,0) & $\frac{407}{900}$ \\ \hline (4,0,0) & $\frac{4499}{9450}$ \\ \hline (0,1,0)& $\frac{1}{2}$\\ \hline (0,2,0) & $\frac{31}{60}$ \\ \hline (1,1,0)& $\frac{1}{2}$\\ \hline (2,1,0)& $\frac{131}{260}$ \\ \hline (0,0,1)&0 \\ \hline (1,0,1)&$\frac{11}{27}$\\ \hline \end{tabular} \caption{Upper bounds on $M_n(a_1,a_2,a_3)$} \label{tab:upp_bounds} \end{table} \indent \textbf{1. Base case.} Our base case is when $n = 3$.
If we are in the case of 3 teams, then $S$ wins with probability 1, so the maximum gain $S$ can achieve by manipulation is clearly 0, which satisfies all of the bounds in the table. \\ \indent \textbf{2. Induction step.} Assume that $M_k(a_1,a_2,a_3) \leq f(a_1,a_2,a_3)$ holds for all $ k < n$ and $a_1,a_2,a_3$ as in Table \ref{tab:upp_bounds}. We will prove the statement for $k = n$. Notice that by Table \ref{tab:upp_bounds} it is clear that $f$ is monotone in each variable, i.e. if $a'_i \leq a_i$ for $i = 1,2,3$, then \begin{equation}\label{eq:monotonicity} f(a'_1,a'_2,a'_3) \leq f(a_1,a_2,a_3) \end{equation} Suppose that $e \notin G \cup Q$\footnote{Recall $G$ is the set of matches in which a team from $S$ loses, and $Q$ is the set of matches between two teams from the same $L_i$ or in which a team from $L_i$ loses to a team from $S$ (see discussion before Lemma \ref{lemmaC})}. Then since $e \notin G$, $S$ remains in $T|_{e}$. For a tournament $H$, define $t(H) = (a'_1,a'_2,a'_3)$, where in $H$ there are exactly $a'_i$ teams in $H \setminus \{u,v,w\}$ that beat exactly $i$ out of $\{u,v,w\}$. Clearly, we have $t(T|_e) = (a'_1, a'_2,a'_3)$, where $a'_i \leq a_i$.
By monotonicity of $f$ in (\ref{eq:monotonicity}) and the inductive hypothesis it follows that $$r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \leq M_{n-1}(a'_1,a'_2,a'_3) \leq f(a'_1,a'_2,a'_3) \leq f(a_1,a_2,a_3)$$ Since $|G \cup Q| = 3(1+a_1+a_2+a_3) + \binom{a_1}{2} + \binom{a_2}{2} +\binom{a_3}{2}$, we have that $$\sum_{e \notin G \cup Q} r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \leq (\binom{n}{2}-3(1+a_1+a_2+a_3)-\binom{a_1}{2}-\binom{a_2}{2}-\binom{a_3}{2})f(a_1,a_2,a_3)$$ Also, by the inductive hypothesis we have \begin{align*} M_{n-1}(a_1-1,a_2,a_3) &\leq f(a_1-1,a_2, a_3)\\ M_{n-1}(a_1,a_2-1,a_3) &\leq f(a_1,a_2-1, a_3)\\ M_{n-1}(a_1,a_2,a_3-1) &\leq f(a_1,a_2, a_3-1)\\ \end{align*} Therefore, by Lemma \ref{lemmaC} combined with the inequalities above, we have \begin{align*} C &\leq (2a_1 + \binom{a_1}{2})f(a_1-1,a_2,a_3)+ (a_2 + \binom{a_2}{2}) f(a_1,a_2-1,a_3) + \binom{a_3}{2}f(a_1,a_2,a_3-1) \\ &+(\binom{n}{2}-3(1+a_1+a_2+a_3)-\binom{a_1}{2}-\binom{a_2}{2}-\binom{a_3}{2})f(a_1,a_2,a_3) \end{align*} By Lemma \ref{lemmaB} we have $$B \leq \frac{d^{*}_u + d^{*}_v + d^{*}_w}{3} = \frac{a_1 + 2a_2 + 3a_3}{3} = \frac{|I|}{3}$$ Combining the above bounds and plugging into $(\ref{fsa:1})$, we get \begin{align*} &r_{u,v,w}(T') - r_{u,v,w}(T) \leq \frac{1}{\binom{n}{2}}(A+B+C) \leq \\ &\leq \frac{1}{\binom{n}{2}}\Bigg[A' + B'+(2a_1 + \binom{a_1}{2})f(a_1-1,a_2,a_3)+ (a_2 + \binom{a_2}{2}) f(a_1,a_2-1,a_3) \\ &+\binom{a_3}{2}f(a_1,a_2,a_3-1) + (\binom{n}{2}-3(1+a_1+a_2+a_3)-\binom{a_1}{2}-\binom{a_2}{2}-\binom{a_3}{2})f(a_1,a_2,a_3)\Bigg] \end{align*} where $A' = 1$ and $B' = 0$ if $|I| \leq 1$, and $A' = \frac{7}{3}$ and $B' = \frac{|I|}{3}$ if $|I| \geq 2$, by Lemma \ref{lemmaA} and Lemma \ref{lemmaB}.
As the RHS depends only on $(a_1,a_2,a_3)$, we can take the maximum over all tournaments on $n$ teams to get \begin{align*} M_n(a_1,a_2,a_3) &\leq \frac{1}{\binom{n}{2}}\Bigg[A' + B' +(2a_1 + \binom{a_1}{2})f(a_1-1,a_2,a_3)+ (a_2 + \binom{a_2}{2}) f(a_1,a_2-1,a_3) \\ &+\binom{a_3}{2}f(a_1,a_2,a_3-1) \\ &+ (\binom{n}{2}-3(1+a_1+a_2+a_3)-\binom{a_1}{2}-\binom{a_2}{2}-\binom{a_3}{2})f(a_1,a_2,a_3)\Bigg] \text{ ($\Delta$)} \end{align*} We now apply formula $(\Delta)$ to each of the cases in Table \ref{tab:upp_bounds}. We present the computations for $(a_1, a_2, a_3) \in \{(0,0,0), (1,0,0), (0,2,0)\}$ in the body to illustrate the method, and we defer the other cases from Table \ref{tab:upp_bounds} to Appendix \ref{appendix_sec4}. Note the manipulators can achieve $\frac{31}{60}$ only when $(a_1,a_2,a_3) = (0,2,0)$. \textbf{Case 1} $(a_1,a_2,a_3) = (0,0,0)$. By Lemma \ref{lemma3} it follows that $M_n(0,0,0) = 0 = f(0,0,0)$, as a team from $S$ wins with probability 1 regardless of the matches within $S$. \\ \textbf{Case 2} $(a_1,a_2,a_3) = (1,0,0)$. In this case $|I| = 1$. So by applying $\Delta$ with $A' = 1$ and $B' = 0$ we obtain $$M_n(1,0,0) \leq \frac{1}{\binom{n}{2}}(1 + 2 f(0,0,0) + (\binom{n}{2}-6)\frac{1}{6}) = \frac{1}{6} = f(1,0,0)$$ \\ \textbf{Case 3} $(a_1,a_2,a_3) = (0,2,0)$. Applying $\Delta$, we get \begin{align*} M_n(0,2,0) &\leq \frac{1}{\binom{n}{2}}(\frac{7}{3} + \frac{4}{3} + (2 + 1)f(0,1,0) + (\binom{n}{2}-10)f(0,2,0)) \\ &= \frac{1}{\binom{n}{2}}(\frac{11}{3} + \frac{3}{2} + (\binom{n}{2}-10)\frac{31}{60}) \\ &= \frac{31}{60} + \frac{1}{\binom{n}{2}}(\frac{22+9}{6} - \frac{31}{6}) = \frac{31}{60} = f(0,2,0) \end{align*} \textbf{Case 4} $(a_1,a_2,a_3) = (2,0,0)$. See Appendix~\ref{appendix_cases}. \textbf{Case 5} $(a_1,a_2,a_3) = (3,0,0)$. See Appendix~\ref{appendix_cases}. \textbf{Case 6} $(a_1,a_2,a_3) = (4,0,0)$. See Appendix~\ref{appendix_cases}. \textbf{Case 7} $(a_1,a_2,a_3) = (0,1,0)$. See Appendix~\ref{appendix_cases}.
\textbf{Case 8} $(a_1,a_2,a_3) = (1,1,0)$. See Appendix~\ref{appendix_cases}. \textbf{Case 9} $(a_1,a_2,a_3) = (2,1,0)$. See Appendix~\ref{appendix_cases}. \textbf{Case 10} $(a_1,a_2,a_3) = (0,0,1)$. See Appendix~\ref{appendix_cases}. \textbf{Case 11} $(a_1,a_2,a_3) = (1,0,1)$. See Appendix~\ref{appendix_cases}. This finishes the induction and the proof of the bounds in Table \ref{tab:upp_bounds}. Note that $f(a_1,a_2,a_3) \leq \frac{31}{60}$ for all $a_1,a_2,a_3$ in Table \ref{tab:upp_bounds}, and this bound is achieved when $(a_1,a_2,a_3) = (0,2,0)$, i.e. there are 2 teams that each beat exactly two teams of $S$, as is the case in the optimal example in Section \ref{lb}. Thus, we get that if there are at most 4 matches in which a team from $S$ loses to a team in $T \setminus S$, then $\alpha^{RDM}_S(T) \leq \frac{31}{60}$. This finishes the proof of the lemma. \end{proof} \subsubsection{General Upper Bound for the Most Manipulable Tournament}\label{gen_upper_bound} \begin{lemma}\label{lemmaI5} Suppose that $\alpha^{RDM}_{S}(T) = \alpha^{RDM}_{3,n}$. Let $I$ be the set of matches in which a team of $S$ loses to a team from $T \setminus S$. Then $$\alpha^{RDM}_{3,n}= \alpha^{RDM}_S(T) \leq \frac{|I|+7}{3(|I|+3)}$$ \end{lemma} \begin{proof} Let $T$ and $T'$ be $S$-adjacent tournaments on $n$ vertices such that $S = \{u,v,w\}$ and $$\alpha_{3,n}^{RDM} = \alpha^{RDM}_S(T) = r_S(T') - r_S(T)$$ I.e., $T$ is the ``worst'' example on $n$ vertices. By (\ref{fsa:1}) we have $$ \alpha_{3,n}^{RDM} = \frac{1}{\binom{n}{2}}(A+B+C) $$ where $A,B$ and $C$ were defined in Section \ref{FSA_framework}. By Lemma \ref{lemmaA} we have $$A \leq \frac{7}{3}$$ and by Lemma \ref{lemmaB} $$B \leq \frac{d^{*}_u + d^{*}_v + d^{*}_w}{3} = \frac{|I|}{3}$$ Let $e \notin G$. Notice that both $T'|_{e}$ and $T|_{e}$ are tournaments on $n-1$ vertices, and by definition of $G$, none of $u,v,w$ is eliminated in either $T'|_{e}$ or $T|_{e}$. Moreover, $T'|_{e}$ and $T|_{e}$ are $S$-adjacent.
Therefore, for every $e \notin G$, we have by Lemma \ref{lemma2} $$ r_{u,v,w}(T'|_{e})-r_{u,v,w}(T|_{e}) \leq \alpha_{3,n-1}^{RDM} \leq \alpha_{3,n}^{RDM} $$ By using the above on each term in $C$ and the fact that $|G| = 3 +|I|$, we get that $$ C \leq (\binom{n}{2} -(3 + |I|))\alpha_{3,n}^{RDM} $$ By using the above 3 bounds we get \begin{align*} \alpha_{3,n}^{RDM} &\leq \frac{1}{\binom{n}{2}} (\frac{7}{3}+\frac{|I|}{3} + (\binom{n}{2} -(3 + |I|))\alpha_{3,n}^{RDM}) \\ \iff (|I|+3)\alpha_{3,n}^{RDM} &\leq \frac{|I|+7}{3}\\ \iff \alpha_{3,n}^{RDM} = \alpha^{RDM}_S(T) &\leq \frac{|I|+7}{3(|I|+3)} \end{align*} as desired. \end{proof} \subsubsection{Proof of Theorem \ref{theorem2}}\label{final_proof} Suppose that $T$ is the most manipulable tournament on $n$ vertices, i.e. it satisfies $\alpha^{RDM}_{S}(T) = \alpha^{RDM}_{3,n}$. If $|I| \leq 4$, then by Lemma \ref{lemmaI4}, we have that $$\alpha_{3,n}^{RDM} = \alpha^{RDM}_S(T) \leq \frac{31}{60}$$ If $|I| \geq 5$, then by Lemma \ref{lemmaI5} $$\alpha_{3,n}^{RDM} = \alpha^{RDM}_S(T) \leq \frac{|I|+7}{3(|I|+3)} \leq \frac{5+7}{3(5+3)} = \frac{1}{2}$$ where above we used that $\frac{x+7}{3(x+3)}$ is decreasing for $x \geq 5$. Combining the above bounds, we obtain $\alpha_{3,n}^{RDM} \leq \frac{31}{60}$ for all $n \in \mathbb{N}$. Therefore, $$\alpha_{3}^{RDM} = \max_{n \in \mathbb{N}}\alpha_{3,n}^{RDM} \leq \frac{31}{60}$$ which proves the upper bound and finishes the proof of Theorem \ref{theorem2}. \section{Sybil Attacks on Tournaments}\label{copiessection} \subsection{Main Results on Sybil Attacks on Tournaments} Recall our motivation from the Introduction. Imagine that a tournament rule is used as a proxy for a voting rule to select a proposal. The proposals are compared head-to-head, and this constitutes the pairwise matches in the resulting tournament.
A proposer can try to manipulate the protocol with a Sybil attack and submit many nearly identical proposals with nearly equal strength relative to the other proposals. The proposer can choose to manipulate the outcomes of the head-to-head comparisons between two of his proposals in a way which maximizes the probability that a proposal of his is selected. In the tournament $T$ his proposal corresponds to a team $u_1$, and the tournament $T'$ resulting from the Sybil attack is a member of $Syb(T,u_1,m)$ (recall Definition \ref{sybil_attack_def}). The questions that we want to answer in this section are: (1) Can the Sybils manipulate their matches to successfully increase their collective probability of winning? and (2) Is it beneficial for the proposer to create as many Sybils as possible? The first question we are interested in is whether any group of Sybils can successfully manipulate to increase their probability of winning. It turns out that the answer is no. We first prove that the probability that a team which is not a Sybil wins does not depend on the matches between the Sybils. \begin{lemma}\label{lemmacopies} There exists a function $q$ that takes as input an integer $m$, a tournament $T$, a team $u_1 \in T$, and a team $v \in T \setminus u_1$ with the following property. For all $T' \in Syb(T,u_1,m)$, we have $$r_{v}(T') = q(m,T,u_1,v)$$ where the dependence on $u_1$ is encoded as the outcomes of its matches with the rest of $T$. \end{lemma} \begin{proof} See Appendix~\ref{appendix_sec5} for a full proof. \end{proof}\\ Note that by Lemma \ref{lemmacopies}, $r_v(T') = q(m,T,u_1,v)$ does not depend on which tournament $T' \in Syb(T, u_1,m)$ is chosen. Now, we prove our first promised result, namely that no number of Sybils in a Sybil attack can manipulate the matches between them to increase their probability of winning. \begin{theorem}\label{theorem4} Let $T$ be a tournament, $u_1 \in T$ a team, and $m$ an integer. Let $T_1' \in Syb(T,u_1,m)$. Let $S = \{u_1, \ldots, u_m\}$.
Then $$\alpha_{S}^{RDM}(T_1') = 0$$ \end{theorem} \begin{proof} Let $T_1'$ and $T_2'$ be $S$-adjacent. By Definition \ref{sybil_attack_def}, $T_2' \in Syb(T,u_1,m)$. Therefore by Lemma \ref{lemmacopies}, $r_v(T_1') = r_v(T_2') = q(m,T,u_1,v)$ for all $v \in T \setminus u_1$. Using this we obtain $$r_{S}(T_1') =1 - \sum_{v \in T \setminus u_1} r_{v}(T_1') = 1 - \sum_{v \in T \setminus u_1} r_{v}(T_2') = r_{S}(T_2')$$ Therefore, $r_{S}(T_1') = r_{S}(T_2')$ for all $S$-adjacent $T_1', T_2'$, which implies the desired result. \end{proof}\\ Theorem \ref{theorem4} says that no number of Sybils can manipulate to increase their collective probability of winning. This leaves the question of whether it is beneficial for the proposer to send many (nearly) identical proposals to the tournament to maximize the probability that a proposal of his is selected. We show that Randomized Death Match disincentivizes such behaviour. When $u_1$ is a Condorcet winner in $T$, then by Lemma \ref{lemma3} a Sybil will win with probability one. We prove that if $u_1$ is not a Condorcet winner in $T$, then as $m$ goes to infinity, the maximum probability (over all Sybil attacks of $u_1$) that any Sybil wins goes to 0. Equivalently, RDM is Asymptotically Strongly Sybil-Proof (recall Definition \ref{ASSP_def}).\\ \indent Before we state our second main theorem, let us recall some notation. Let $u_1 \in T$ be a team (which is not a Condorcet winner). Let $A$ be the set of teams in $T$ that $u_1$ beats, and $B$ the set of teams $u_1$ loses to. Let $T' \in Syb(T,u_1, m)$ and $v \in T \setminus u_1$. By Lemma \ref{lemmacopies}, $r_v(T') = q(m, T, u_1,v)$ for all $T' \in Syb(T,u_1,m)$.
Also, by Lemma \ref{lemmacopies}, \begin{equation}\label{prob_sybils} r_{u_1, \ldots, u_m}(T') = 1-\sum_{v \in T \setminus u_1} r_v(T') = 1-\sum_{v \in T \setminus u_1} q(m,T,u_1,v) \end{equation} and \begin{equation}\label{prob_A} r_{A}(T') = \sum_{v \in A}q(m,T,u_1,v) \end{equation} Note that the terms on the RHS of each of (\ref{prob_sybils}) and (\ref{prob_A}) depend only on $T, u_1$ and $m$. Thus, we can define the functions $$h(m,T,u_1) = r_{u_1, \ldots, u_m}(T')$$ $$g(m,T,u_1) = r_{A}(T')$$ and $$p(m,T,u_1) = h(m,T,u_1) + g(m,T,u_1)$$ (here $p(m,T,u_1)$ is the total collective probability that a Sybil or a team from $A$ wins). \\ \indent We are now ready to prove our second main result, namely that RDM is Asymptotically Strongly Sybil-Proof (Definition \ref{ASSP_def}). Before we present the result (Theorem \ref{theoremcopiesto0}) we will try to convey some intuition for why RDM should be ASSP. Observe that the only way a Sybil can win is when all the teams from $B$ are eliminated before all the Sybils. The last remaining team from $B$ can only be eliminated by a team from $A$, since every Sybil loses to every team in $B$. However, as $m$ increases there are more Sybils, and thus the teams from $A$ are intuitively more likely to all be eliminated before the teams from $B$. When there are no teams from $A$ left and at least one team from $B$ left, no Sybil can win. In fact, this intuition implies something stronger than RDM being ASSP: the collective winning probability of the Sybils and the teams from $A$ (denoted by $p(m,T,u_1)$) converges to 0 as $m$ goes to $\infty$ (or, equivalently, the probability that a team from $B$ wins goes to $1$). This intuition indirectly lies behind the technical details of the proof of Theorem \ref{theoremcopiesto0}. \begin{theorem}\label{theoremcopiesto0} Randomized Death Match is Asymptotically Strongly Sybil-Proof.
In fact, a stronger statement holds: if $u_1 \in T$ is not a Condorcet winner, then $$\lim_{m \to \infty} p(m,T,u_1) = 0$$ \end{theorem} \begin{proof}See Appendix~\ref{appendix_sec5} for a full proof. \end{proof} \\ \subsection{On a Counterexample to an Intuitive Claim} We will use Theorem \ref{theoremcopiesto0} to prove that RDM does not satisfy a stronger version of the monotonicity property in Definition \ref{monotonedef}. First, we give a generalization of the definition of monotonicity given in Section \ref{examples}. \begin{definition}[Strongly monotone] Let $r$ be a tournament rule. Let $T$ be a tournament and let $C \cup D$ be any splitting of the teams in $T$ into two disjoint sets. A tournament rule $r$ is \textit{strongly monotone} if for every $(u,v) \in C \times D$ and every $T'$ that is $\{u,v\}$-adjacent to $T$ such that $u$ beats $v$ in $T'$, we have $r_{C}(T') \geq r_{C}(T)$ \end{definition} Intuitively, $r$ is strongly monotone if flipping a match between a team from $C$ and a team from $D$ in favor of the team from $C$ makes $C$ better off. Notice that if $|C| = 1$ this is the usual definition of monotonicity (Definition \ref{monotonedef}), which is satisfied by RDM by Lemma \ref{rdm_monotone}. However, RDM is not strongly monotone, even though strong monotonicity may seem like an intuitive property to have. \begin{claim}\label{rdm_not_strongly_monotne} \label{claim_rdm_not_monotone} RDM is not strongly monotone \end{claim} \begin{proof} Suppose the contrary, i.e.\ that RDM is strongly monotone. Start with a 3-cycle tournament $T$ where $a_1$ beats $b$, $b$ beats $c$, and $c$ beats $a_1$. Let $T' \in Syb(T,a_1,m)$ be a Sybil attack of $a_1$ on $T$ with $m$ Sybils. Let the Sybils be $C = \{a_1,a_2, \ldots, a_m\}$ where $a_i$ beats $a_j$ in $T'$ for $i < j$. By Theorem \ref{theoremcopiesto0} we can take $m$ large enough so that $r_{C}(T') < \frac{1}{6}$. 
Suppose all Sybils in $C$ but $a_1$ throw all of their matches with $b$, and denote the resulting tournament by $T''$. Then, if RDM were strongly monotone we would have $r_{C}(T'') \leq r_{C}(T') < \frac{1}{6}$ (start with $T''$ and iteratively apply strong monotonicity). Note that starting from $T''$ and iteratively applying Lemma \ref{lemma1} to $a_{i}$ and removing $a_i$ from the tournament for $i = m,m-1, \ldots, 2$ we will obtain the tournament $T$, and the probability distribution over the winner in $T''$ will be the same as in $T$. Therefore, $r_{C}(T'') \geq r_{a_1}(T'') = r_{a_1}(T) = \frac{1}{3}$, but $r_{C}(T'') < \frac{1}{6}$, a contradiction with our assumption that RDM is strongly monotone. \end{proof} \section{Conclusion and Future Work}\label{sec:conclusion} We use a novel first-step analysis to nail down the minimal $\alpha$ such that RDM is $3$-SNM-$\alpha$. Specifically, our main result shows that $\alpha^{RDM}_3 = \frac{31}{60}$. Recall that this is the first tight analysis of any Condorcet-consistent tournament rule for any $k > 2$, and also the first monotone, Condorcet-consistent tournament rule that is $k$-SNM-$\alpha$ for any $k > 2,\alpha < 1$. We also initiate the study of manipulability via Sybil attacks, and prove that RDM is Asymptotically Strongly Sybil-Proof. Our technical approach opens up the possibility of analyzing the manipulability of RDM (or other tournament rules) whose worst-case examples are complicated-but-tractable. For example, it is unlikely that the elegant coupling arguments that work for $k=2$ will result in a tight bound of $31/60$, but our approach is able to drastically reduce the search space for a worst-case example, and a tractable case analysis confirms that a specific 5-team tournament is tight. Our approach can similarly be used to analyze the manipulability of RDM for $k > 3$, or other tournament rules. 
However, there are still significant technical barriers for future work to overcome in order to keep analysis tractable for large $k$, or for tournament rules with a more complicated recursive step. Still, our techniques provide a clear approach to such analyses that was previously non-existent. \bibliographystyle{alpha}
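As an empirical sanity check on the Sybil analysis above, the following Monte Carlo sketch simulates Randomized Death Match (our own illustration, not code from the paper): each round, two distinct remaining teams are drawn uniformly at random and the loser of their match is eliminated. On the 3-cycle it reproduces $r_{a_1}(T) = \frac{1}{3}$, and attacking with $m$ Sybils of $a_1$ (which, like $a_1$, beat $b$ and lose to $c$) shows the collective Sybil winning probability shrinking as $m$ grows, consistent with Theorem \ref{theoremcopiesto0}.

```python
import random

def rdm_winner(teams, beats, rng):
    # Randomized Death Match: repeatedly draw two distinct remaining teams
    # uniformly at random; the loser of their match is eliminated.
    alive = list(teams)
    while len(alive) > 1:
        a, b = rng.sample(alive, 2)
        alive.remove(a if (b, a) in beats else b)
    return alive[0]

def win_prob(targets, teams, beats, trials=20000, seed=7):
    # Monte Carlo estimate of the probability that the winner is in `targets`.
    rng = random.Random(seed)
    hits = sum(rdm_winner(teams, beats, rng) in targets for _ in range(trials))
    return hits / trials

def cycle_with_sybils(m):
    # 3-cycle a1 -> b -> c -> a1, with m Sybil copies of a1:
    # every Sybil beats b and loses to c; Sybil a_i beats a_j for i < j.
    sybils = [f"a{i}" for i in range(1, m + 1)]
    beats = {(s, "b") for s in sybils} | {("c", s) for s in sybils}
    beats |= {("b", "c")}
    beats |= {(sybils[i], sybils[j]) for i in range(m) for j in range(i + 1, m)}
    return sybils + ["b", "c"], set(sybils), beats
```

For $m=1$ the Sybil set is just $a_1$ and wins with probability close to $1/3$; for $m=16$ the simulated collective probability drops to roughly $0.10$.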
\section{Introduction} \label{sec:intro} This paper investigates strategies to recommend travel destinations for users who provided a list of preferred activities at \url{Booking.com}, a major online travel agent. This is a complex exploratory recommendation task characterized by predicting user preferences with a limited amount of noisy information. In addition, the industrial application setting comes with specific challenges for search and recommendation systems \citep{kohavi_kdd_2014}. To motivate our problem set-up, we introduce a service which allows users to find travel destinations based on their preferred activities, called \emph{destination finder}.\footnote{\url{http://www.booking.com/destinationfinder.html}}
\begin{figure*}[!tb]
\centerline{%
\includegraphics[width=0.75\textwidth]{Figure1.pdf}}
\caption{Example of \href{http://www.booking.com/destinationfinder.html}{destination finder} use: a user searching for `Nightlife' and `Beach' obtains a ranked list of recommended destinations (top 4 are shown).}
\label{fig:use_cases}
\end{figure*}
Consider a user who knows what activities she wants to do during her holidays, and is looking for travel destinations matching these activities. This process is a complex exploratory recommendation task in which users start by entering activities in the search box as shown in Figure~\ref{fig:use_cases}. The \href{http://www.booking.com/destinationfinder.html}{destination finder} service returns a ranked list of recommended destinations. The underlying data is based on reviews from users who have booked and stayed at a hotel at some destination in the past. After their stay, users are asked to endorse the destination with activities from a set of `endorsements'. Initially, the set of endorsements was extracted from users' free-text reviews using a topic-modeling technique such as LDA~\citep{blei_2003, noulas_2014}. Nowadays, the set of endorsements consists of $256$ activities such as `Beach,' `Nightlife,' `Shopping,' etc. 
These endorsements imply that a user liked a destination for particular characteristics. Two examples of the collected endorsements for two destinations, `Bangkok' and `London', are shown in Figure~\ref{fig:endors_example}. As an example of the multi-criteria endorsement data, consider three endorsements: $e_1$ = \emph{`Beach'}, $e_{2}$ = \emph{`Shopping'}, and $e_{3}$ = \emph{`Family Friendly'} and assume that a user $u_j$, after visiting a destination $d_k$ (e.g.\ `London'), provides the review $r_i(u_j,d_k)$ as: \begin{equation} \label{eq:review} r_i(u_j,d_k)=(0,1,0). \end{equation} This means our user endorses London for `Shopping' only. However, we cannot conclude that London is not `Family Friendly'. Thus, in contrast to the ratings data in a traditional recommender systems setup, negative user opinions are hidden. In addition, we are dealing with multi-criteria ranking data. In contrast, in classical formulations of Recommender Systems (RS), the recommendation problem relies on single \emph{ratings} ($R$) as a mechanism of capturing user ($U$) preferences for different items ($I$). The problem of estimating unknown ratings is formalized as follows: $F:U \times I \rightarrow R$. RS based on latent factor models have been effectively used to understand user interests and predict future actions~\citep{Agarwal_wsdm_2010,Agarwal_kdd_2010}. Such models work by projecting users and items into a lower-dimensional space, thereby grouping similar users and items together, and subsequently computing similarities between them. This approach can run into data sparsity problems, and into a continuous cold start problem when new items continuously appear. 
In multi-criteria RS~\cite{Adomavicius_2007, Adomavicius_2010, Lakiotaki_2011} the rating function has the following form: \begin{equation} \label{eq:mcrs} F:U \times I \rightarrow (r_0 \times r_1\dots \times r_n) \end{equation} The \emph{overall rating} $r_0$ for an item shows how well the user likes this item, while criteria ratings $r_1,\dots,r_n$ provide more insight and explain which aspects of the item she likes. MCRS predict the overall rating for an item based on past ratings, using both overall and individual criteria ratings, and recommend to users the item with the best overall score. According to~\cite{Adomavicius_2007}, there are two basic approaches to compute the final rating prediction in the case when the overall rating is known. In our work we consider a new type of input for RS, which is multi-criteria ranking data without an overall rating.
\begin{figure}[!tb]
\centerline{%
\includegraphics[width=\linewidth]{Figure2.pdf}}
\caption{The \href{http://www.booking.com/destinationfinder.html}{destination finder} endorsement pages of \href{http://www.booking.com/destinationfinder/cities/gb/london.html}{London} and \href{http://www.booking.com/destinationfinder/cities/th/bangkok.html}{Bangkok}.}
\label{fig:endors_example}
\end{figure}
There are a number of important challenges in working on a real-world application of travel recommendations. First, it is not easy to apply RS methods in large-scale industrial applications. A large-scale application of an unsupervised RS is presented in \citep{Hu:2014:SLT:2623330.2623338}, where the authors apply topic modeling techniques to discover user preferences for items in an online store. They apply Locality Sensitive Hashing techniques to overcome performance issues when computing recommendations. We should take into account the fact that if it is not fast, it is not working. 
Due to the volume of traffic, offline processing, done once for all users, comes at marginal costs, but online processing, done separately for each user, can be excessively expensive. Clearly, response times have to be sub-second, but even doubling the CPU or memory footprint comes at massive costs. Second, there is a continuous cold start problem. A large fraction of users has no prior interactions, making it impossible to use collaborative recommendation, or rely on history for recommendations. Moreover, for travel sites, even the more active users visit only a few times a year and have volatile needs or different personas (e.g., business and leisure trips), making their personal history a noisy signal at best. To summarize, our problem setup is the following: \textbf{(1)} we have a set of geographical destinations such as `Paris', `London', `Amsterdam' etc.; and \textbf{(2)} each destination was reviewed, via a set of endorsements, by users who visited it. Our main goal is to increase user engagement with the travel recommendations as an indicator of their interest in the suggested destinations. Our main research question is: \textsl{How to exploit multi-criteria rating data to rank travel destination recommendations?} Our main contributions are: \begin{itemize} \item we use multi-criteria rating data to rank a list of travel destinations; \item we set up a large-scale online A/B testing evaluation with live traffic to test our methods; \item we compare three different rankings against the industrial baseline and obtain a significant gain in user engagement in terms of conversion rates. \end{itemize} The remainder of the paper is organized as follows. In Section~\ref{sec:approach}, we introduce our strategies to rank destination recommendations. We present the results of our large-scale online A/B testing in Section~\ref{sec:experiment}. 
Finally, Section~\ref{sec:conclusion} concludes our work in this paper and highlights a few future directions. \section{Ranking Destination Recommendations} \label{sec:approach} In this section, we present our ranking approaches for recommendations of travel destinations. We first discuss our baseline, which is the current production system of the \href{http://www.booking.com/destinationfinder.html}{destination finder} at \url{Booking.com}. Then, we discuss our first two approaches, which are relatively straightforward and mainly used for comparison: the random ranking of destinations (Section~\ref{sec:random}), and the list of the most popular destinations (Section~\ref{sec:popularity}). Finally, we will discuss a Naive Bayes ranking approach to exploit the multi-criteria ranking data. \subsection{Booking.com Baseline} We use the currently live ranking method at \url{Booking.com}'s \href{http://www.booking.com/destinationfinder.html}{destination finder} as a main baseline. We are not able to disclose the details, but the baseline is an optimized machine learning approach, using the same endorsement data plus some extra features not available to our other approaches. We refer further to this method as `Baseline'. Next, we present two widely employed baselines, which we use to give an impression of how the baseline performs. Then we introduce an application of the Naive Bayes ranking approach to multi-criteria ranking. \subsection{Random Destination Ranking} \label{sec:random} We retrieve all destinations that are endorsed for at least one of the activities that the user is searching for. The retrieved list of destinations is randomly permuted and shown to users. We refer further to this method as `Random'. \subsection{Most Popular Destinations} \label{sec:popularity} A very straightforward and at the same time very strong baseline would be the method that shows users the most popular destinations based on their preferences~\cite{dean2013overview}. 
For example, if the user searches for the activity `Beach', we calculate the popularity rank score for a destination $d_i$ as the conditional probability: $P(\text{Beach}| d_i)$. If the user searches for a second endorsement, e.g.\ `Food', the ranking score for $d_i$ is calculated using a Naive Bayes assumption as: $P(\text{Beach}| d_i) \times P(\text{Food}| d_i)$. In general, if the user provides $n$ endorsements, $e_1,\ldots,e_n$, the ranking score for $d_i$ is $P(e_1|d_i)\times \ldots \times P(e_n|d_i)$. We refer further to this method as `Popularity'. \subsection{Naive Bayes Ranking Approach} As a primary ranking technique we use a Naive Bayes approach. We will describe its application to the multi-criteria ranking data (presented in Equation~\ref{eq:review}) with an example. Let us again consider a user searching for `Beach'. We need to return a ranked list of destinations. For instance, the ranking score for the destination `Miami' is calculated as \begin{equation}\small \label{eq:base_ranker} \begin{split} P(\text{Miami}, \text{Beach}) = P(\text{Miami}) \times P(\text{Beach} | \text{Miami}), \end{split} \end{equation} where $P(\text{Beach} | \text{Miami})$ is the probability that the destination Miami gets the endorsement `Beach'. $P(\text{Miami})$ describes our prior knowledge about Miami. In the simplest case this prior is the ratio of the number of endorsements for Miami to the total number of endorsements in our database. If the user searches for a second activity, e.g.\ `Food', the ranking score is calculated in the following way: \begin{equation}\small \label{eq:base_ranker_2} \begin{split} P(\text{Miami}, \text{Beach}, \text{Food}) = P(\text{Miami}) \times P(\text{Beach} | \text{Miami}) \\ \times P(\text{Food} | \text{Miami}) \end{split} \end{equation} If our user provides $n$ endorsements, Equation~\ref{eq:base_ranker_2} becomes a standard Naive Bayes formula. We refer further to this method as `Naive Bayes'. 
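Both rankers above reduce to products of simple count-based probabilities, so they can be sketched in a few lines (our own illustration; the variable names and the additive smoothing constant are assumptions, not details of the production system). Here `counts[d][e]` is the number of times destination `d` received endorsement `e`, and `use_prior` switches between the Popularity score and the Naive Bayes score with the prior $P(d)$:

```python
from math import log

def rank_destinations(counts, query, use_prior=True, alpha=1.0):
    # counts: {destination: {endorsement: count}}; query: list of endorsements.
    # Score(d) = [P(d)] * prod_e P(e | d), computed in log space for stability.
    # alpha is an (assumed) additive-smoothing constant for unseen endorsements.
    total = sum(sum(c.values()) for c in counts.values())
    vocab = {e for c in counts.values() for e in c}
    scored = []
    for d, c in counts.items():
        n_d = sum(c.values())
        score = log(n_d / total) if use_prior else 0.0
        for e in query:
            score += log((c.get(e, 0) + alpha) / (n_d + alpha * len(vocab)))
        scored.append((d, score))
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

The retrieval step (keeping only destinations endorsed for at least one queried activity) is omitted for brevity.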
\medskip To summarize, we described three strategies to rank travel destination recommendations: the random ranking, the popularity based ranking, and the Naive Bayes approach. These three approaches will be compared to each other and against the industrial baseline. Next, we will present our experimental pipeline which involves online A/B testing at the \href{http://www.booking.com/destinationfinder.html}{destination finder} service of \url{Booking.com}. \section{Experiments and Results} \label{sec:experiment} In this section we will describe our experimental setup and evaluation approach, and the results of the experiments. We perform experiments on users of \url{Booking.com} where an instance of the \href{http://www.booking.com/destinationfinder.html}{destination finder} is running in order to conduct an online evaluation. First, we will detail our online evaluation approach and used evaluation measures. Second, we will detail the experimental results. \subsection{Research Methodology} \label{sec:res_meth} We take advantage of a production A/B testing environment at \url{Booking.com}, which performs randomized controlled trials for the purpose of inferring causality. A/B testing randomly splits users to see either the baseline or the new variant version of the website, which allows to measure the impact of the new version directly on real users~\citep{koha:onli13,tang_kdd_2010,kohavi_kdd_2014}. As our primary \emph{evaluation metric} in the A/B test, we use conversion rate, which is the fraction of sessions which end with at least one clicked result~\citep{lalmas_2014}. As explained in the motivation, we are dealing with an exploratory task and therefore aim to increase customer engagement. An increase in conversion rate is a signal that users click on the suggested destinations and thus interact with the system. 
In order to determine whether a change in conversion rate is a random statistical fluctuation or a statistically significant change, we use the G-test statistic (G-test of goodness-of-fit). We consider the difference between the baseline and the newly proposed method significant when the G-test confidence level is larger than $90\%$. \subsection{Results} \label{sec:result} Conversion rate is the probability for a user to click at least once, which is a common metric for user engagement. We used it as the primary evaluation metric in our experimentation. Table~\ref{tab:res} shows the results of our A/B test. The production `Baseline' substantially outperforms the `Random' ranking with respect to conversion rate, and performs slightly (but not significantly) better than the `Popularity' approach. The `Naive Bayes' ranker significantly increases the conversion rate by 4.4\% compared to the production baseline. \begin{table*}[!tb] \caption{Results of the \href{http://www.booking.com/destinationfinder.html}{destination finder} online A/B testing based on the number of unique users and clickers. \strut} \label{tab:res} \centering \begin{tabularx}{0.7\linewidth}{X@{~}c@{~}c@{~}c@{~}c} \toprule Ranker type & Number of users & Conversion rate & G-test confidence \\ \midrule Baseline & 9,928 & 25.61\% $\pm$ 0.72\% & \\ Random & 10,079 & 24.46\% $\pm$ 0.71\% & 94\% \\ Popularity & 9,838 & 25.50\% $\pm$ 0.73\% & 41\% \\ Naive Bayes & 9,895 & \textbf{26.73\% $\pm$ 0.73\%} & 93\% \\ \bottomrule \end{tabularx} \end{table*} We achieved this substantial increase in conversion rate with a straightforward Naive Bayes ranker. Moreover, most computations can be done offline. Thus, our model could be trained on large data within reasonable time, and did not negatively impact wallclock and CPU time for the \href{http://www.booking.com/destinationfinder.html}{destination finder} web pages in the online A/B test. This is crucial for a webscale production environment \citep{kohavi_kdd_2014}. 
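For reference, the G-test of goodness-of-fit on a $2\times2$ table of clickers versus non-clickers per variant can be computed as below (our own illustration of the standard test, not \url{Booking.com}'s internal tooling); with one degree of freedom, $G > 2.706$ corresponds to a confidence level above $90\%$:

```python
from math import log

def g_statistic(table):
    # table: 2x2 observed counts [[clickers_A, non_clickers_A],
    #                             [clickers_B, non_clickers_B]].
    # G = 2 * sum O * ln(O / E), with E the usual independence expectation.
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    g = 0.0
    for i in range(2):
        for j in range(2):
            o = table[i][j]
            e = row[i] * col[j] / n
            if o > 0:
                g += o * log(o / e)
    return 2.0 * g

def significant_at_90(table):
    # The chi-square(df=1) upper 10% critical value is ~2.706.
    return g_statistic(table) > 2.706
```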
\medskip To summarize, we used three approaches to rank travel recommendations. We saw that the random and popularity-based rankings of destinations led to a decrease in user engagement, while the Naive Bayes approach leads to a significant engagement increase. \section{Conclusion and Discussion} \label{sec:conclusion} This paper reports on large-scale experiments with four different approaches to rank travel destination recommendations at \url{Booking.com}, a major online travel agent. We focused on a service called \href{http://www.booking.com/destinationfinder.html}{destination finder} where users can search for a suitable destination based on preferred activities. In order to build ranking models we used multi-criteria rating data in the form of endorsements provided by past users after visiting a booked place. We implemented three methods to rank travel destinations: Random, Most Popular, and Naive Bayes, and compared them to the current production baseline in \url{Booking.com}. We observed a significant increase in user engagement for the Naive Bayes ranking approach, as measured by the conversion rate. The simplicity of our recommendation models enables us to achieve this engagement without significantly increasing online CPU and memory usage. The experiments clearly demonstrate the value of multi-criteria ranking data in a real-world application. They also show that simple algorithmic approaches trained on large data sets can have very good real-life performance \citep{halevy_2009}. We are working on a number of extensions of the current work, in particular on contextual recommendation approaches that take into account the context of the user and the endorser, and on ways to detect user profiles from implicit contextual information. Initial experiments with contextualized recommendations show that this can lead to significant further improvements of user engagement. 
Some of the authors are involved in the organization of the TREC Contextual Suggestion Track \citep{dean2013overview,dean_hall_trec_ca_2014,trec:url}, and the use case of the \href{http://www.booking.com/destinationfinder.html}{destination finder} is part of TREC in 2015, where similar endorsements are collected. The resulting test collection can be used to evaluate destination and venue recommendation approaches. \medskip \subsection*{Acknowledgments} This work was done while the main author was an intern at \url{Booking.com}. We thank Lukas Vermeer and Athanasios Noulas for fruitful discussions at the early stage of this work. This research has been partly supported by STW and is part of the CAPA project.\footnote{\url{http://www.win.tue.nl/~mpechen/projects/capa/}} \renewcommand{\bibsection}{\section*{References}} \bibliographystyle{abbrvnat} \setlength{\bibhang}{1em} \setlength{\bibsep}{.5\itemsep}
\section{Introduction}\label{sec:intro} Modern online travel platforms (OTPs) offer a vast variety of products and services to travellers, such as flights, accommodations and ground transportation. Recently, offering a promotion such as a room upgrade, free meals, or discounted car rental has become a popular approach for offering customers more value for their money when completing a purchase. A promotion is expected to have a positive Average Treatment Effect (ATE) on outcomes like customer base growth or customer satisfaction, but can be viable only under set Return on Investment (ROI) constraints. ROI is a common measure in the field of marketing campaigns \cite{mishra2011current}, which represents the ratio of profit to investment as follows: \begin{equation} ROI = \frac{\Delta Revenue-\Delta Investment}{\Delta Investment} \end{equation} In order to ensure that a promotional campaign is sustainable over time, the target ROI is often restricted with a lower bound, such as $ROI\geq0$. This creates a trade-off between the increment in target outcome and the cost of offering the promotion. A good example of such a promotion could be offering free lunch as an incentive for booking an accommodation; the promotional campaign will be sustainable only if the incremental value in accommodation booking rate can cover the extra cost of providing the free lunch. By offering the promotion to a subset of customers, a company can provide greater value to each customer while maintaining financial constraints that keep the promotional campaign sustainable over time \cite{reutterer2006dynamic}. One popular approach for efficiently assigning promotions to a subset of customers is targeted personalization \cite{lin2017monetary}. 
In this approach, uplift models \cite{devriendt2018literature} are commonly used to estimate the expected causal effect of the treatment on the outcome in order to distinguish between the \textit{voluntary buyers} and the \textit{persuadables}, who would only purchase as a response to the incentive \cite{lai2006direct}. This can be achieved by estimating the Conditional Average Treatment Effect (CATE) of the promotion. $CATE_Y(x)$ is defined as the conditional increment in probability of completing the purchase $Pr(Y=1)$ \textit{caused} by the incentive, given the customer characteristics $x$. CATE estimation can be conducted via various meta-learning techniques \cite{gutierrez2017causal}, such as the two-models approach \cite{radcliffe2007using}, transformed outcome \cite{jaskowski2012uplift}, uplift trees \cite{rzepakowski2012decision}, and other meta-learners \cite{kunzel2019metalearners}. Some previous work aimed to address allocation of promotions by estimating the marginal effect \cite{zhao2019unified}, incorporating net value optimization within the meta-learner \cite{zhao2019uplift} and modeling the optimization as a min-flow problem \cite{makhijani2019lore}. Such estimations might be unreliable \cite{lewis2015unfavorable,deng2015diluted}, however, due to high variance in the training dataset, which is heavily dependent on a majority of non-transactional visits having no intent to purchase. In addition, offline solutions might be biased towards historical data and require dynamic calibration \cite{zhou2008budget} according to long-term changes and seasonality trends, which are particularly common in the travel industry \cite{bernardi2019150} and in promotional campaigns \cite{kim2006pay}. 
Dynamic adaptive strategy adjustments are a common practice in cases of budget constraints and have been found effective in ads management \cite{lin2017monetary}, influence maximization \cite{goldenberg2018timing, sela2018active}, marketing budget allocation \cite{badanidiyuru2012learning, lai2006direct, zhao2019unified} and discount personalization \cite{fischer2011dynamic}. In this paper we present a novel technique based on \textit{Retrospective Estimation}, a modeling approach that relies solely on data from positive outcome examples. This is extended by a method to dynamically maximize the target outcome of the treatment within ROI constraints. Our main contributions are: (1) Optimization framework setup for uplift within ROI constraints; (2) \textit{Retrospective Estimation} technique, relying only on data from positive examples; (3) Dynamic model calibration for online applications; and (4) Large scale experimental validation. The study proceeds as follows: The second section formalizes the problem we address and the constraints imposed by business and product requirements. The third section covers the suggested method for uplift modeling under ROI constraints, the retrospective estimation technique, and dynamic calibration of the method. The fourth section covers validation of the method (a Randomized Controlled Experiment), including details on data gathering, experimental setup and results. \section{Problem formulation} In this work, we focus on maximizing the overall number of customers completing a purchase, while deciding whether or not to offer the promotion in order to comply with the global constraint of $ROI\geq0$. In addition to $Y$ (a random variable representing the completion of a purchase), we consider the random variables $R$ and $C$, representing the monetary revenue associated with the purchase and the cost associated with the promotion, respectively. We assume that there is no revenue and no cost without a purchase. 
We adopt the Potential Outcomes framework \cite{imbens2010rubin}, which allows us to express the causal effects of the promotions on all three different outcomes: for a customer $i$, $Y_i(1)$ represents the potential purchase if $i$ is given the promotion ($T=1$), while $R_i(1)$ and $C_i(1)$ represent the potential revenue and costs. Likewise, $Y_i(0)$, $R_i(0)$, and $C_i(0)$ represent the potential outcomes if the promotion is not given. We can thus define the conditional average treatment effect on all these variables, for a given customer with pre-promotion covariates $x$, as follows: $CATE_Y(x) = \mathbf{E}(Y_i(1)-Y_i(0)|X_i=x)$, $CATE_R(x) =\mathbf{E}(R_i(1)-R_i(0)|X_i=x)$ and $CATE_C(x) = \mathbf{E}(C_i(1)-C_i(0)|X_i=x)$. We also define the profit outcome $\Pi_i=R_i-C_i$ and the loss as $\mathcal{L} = -\Pi$. Their associated $CATE_{\Pi}(x) = CATE_R(x)-CATE_C(x) = -CATE_{\mathcal{L}}(x)$ are simply the average increment in profit (or loss). Our goal is to find a function $F$ that, given $x$, decides whether the customer is offered the promotion or not, while maximizing the total incremental number of transactions under incremental ROI constraints. Therefore, our problem is to learn $F$, given a sample of customers $U$, where some received the promotion and others did not. Formally, our objective is to learn $F$ from tuples $(x_i,T_i,Y_i,R_i,C_i)$, $1 \leq i \leq n$, where $T_i=1$ if the customer received the promotion and $T_i=0$ otherwise. \subsection{Solution Framework}\label{sec:optimization} We model $F$ with a scoring function $g$ and a threshold $\theta$, outputting the decision as $\mathbbm{1}[g(x)\geq \theta]$. Function $g$ needs to represent the increment in probability of booking as well as the associated revenue and costs. 
By relying on existing data, we can reassign the promotions in $U$ by solving the following optimization problem: \begin{equation} \begin{array}{cc} \text{Maximize } \displaystyle \sum\limits_{i\in U} z_i \cdot CATE_Y(x_i)\\ \text{ subject to: } \displaystyle \sum\limits_{i\in U} z_i \cdot CATE_{\mathcal{L}}(x_i) \leq 0 \end{array} \end{equation} Here, $z_i\in \{0,1\}$ is the assignment variable indicating if customer $i$ is offered the promotion or not. The target function maximizes the total treatment effect, while the constraint specifies the condition of the incremental total loss being non-positive (equivalent to non-negative ROI), by comparing the incremental costs and revenues potentially incurred by each incentivized customer. This defines a \textit{Binary Knapsack Problem} with possibly negative utilities and weights. We derive that $CATE_Y\leq 0 \implies CATE_{\mathcal{L}}\geq0$, since the revenue can only come from extra purchases. Applying the transformation described in \cite{toth1990knapsack} results in the residual problem of a standard binary Knapsack Problem for customers with $CATE_Y > 0$ and $CATE_{\mathcal{L}} > 0$. It has a new constraint constant given by $-\sum_j CATE^j_{\mathcal{L}}$ for customers where $CATE^j_{Y}>0$ and $CATE^j_{\mathcal{L}}\leq0$. According to the greedy algorithm for the \textit{Fractional Knapsack Problem}, we can approximate the solution by sorting customers $U$ by ${utility_i}/{weight_i}$ descending. In our problem this ratio maps to ${CATE^i_{Y}}/{CATE^i_{\mathcal{L}}}$. All customers are treated in that order, until the constraint is no longer satisfied. Denoting $j$ as the last treated customer, we can set $g(x)={CATE_Y(x)}/{CATE_{\mathcal{L}}(x)}$ and the threshold $\theta^*={CATE^j_{Y}}/{CATE^j_{\mathcal{L}}}$. Assuming the sample $U$ is representative of future customers, we can use this rule to decide who to target with our promotion. 
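The greedy assignment above can be sketched as follows (our own illustration; inputs are per-customer estimates of $CATE_Y$ and $CATE_{\mathcal{L}}$). Customers with positive uplift and non-positive incremental loss are always treated and their surplus enlarges the budget; the remaining candidates are sorted by $CATE_Y/CATE_{\mathcal{L}}$ descending and added while the total incremental loss stays non-positive:

```python
def greedy_assignment(customers):
    # customers: list of (cate_y, cate_l) estimates, one per customer.
    # Returns the set of indices selected for treatment under sum(cate_l) <= 0.
    selected = set()
    budget = 0.0
    residual = []
    for i, (cy, cl) in enumerate(customers):
        if cy <= 0:
            continue  # cate_y <= 0 implies cate_l >= 0: never worth treating
        if cl <= 0:
            selected.add(i)      # positive uplift at no incremental loss
            budget += -cl        # frees up budget for loss-making treatments
        else:
            residual.append((cy / cl, i, cl))
    spent = 0.0
    for _, i, cl in sorted(residual, reverse=True):
        if spent + cl > budget:
            break
        selected.add(i)
        spent += cl
    return selected
```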
This framework reduces the problem to learning $\frac{CATE_Y(x)}{CATE_{\mathcal{L}}(x)}$; $\sign[CATE_Y(x)]$; and $\sign[CATE_{\mathcal{L}}(x)]$. The following sections explore different approaches. \section{Method for uplift modeling under ROI constraints}\label{sec:method} We performed a comparison of two common uplift modeling techniques and introduced two novel modeling approaches: \begin{itemize} \item \textbf{\textit{Two-Models}} \cite{radcliffe2007using}: a difference between two individual estimations of the effect under each of the treatments. \item \textbf{\textit{Transformed outcome}} \cite{jaskowski2012uplift}: direct estimation of $CATE_{Y}(x)$ with a single model. \item \textbf{\textit{Fractional Approximation}}: greedy sorting score relying on \textit{Two-Models} output: ${CATE_Y(x)}/{CATE_{\mathcal{L}}(x)}$. \item \textbf{\textit{Retrospective Estimation}} (\ref{sec:retro}): direct estimation of the greedy sorting score, relying only on positive examples. \end{itemize} All methods rely on data from an online randomized controlled experiment \cite{kohavi2007practical, kaufman2017democratizing}, which allows us to write the potential outcomes $\mathbb{E}[Y_i(1)|X_i=x]$ as $\mathbb{E}[Y_i|T_i=1,X_i=x]$ (and analogously for Revenue, Cost and Profit). These expectations are in turn estimated using machine learning. All methods are trained with underlying \textit{Xgboost} models \cite{chen2016xgboost} on data of more than 100 million interactions on the Booking.com website, collected over several weeks. The model performance was tuned with time-based cross validation and evaluated as described in section \ref{sec:eval}. 
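To make the \textit{Two-Models} baseline concrete, the sketch below fits one outcome model per treatment arm and returns the difference of their predictions as the $CATE_Y$ estimate. For a self-contained example we use empirical per-segment frequencies in place of the \textit{Xgboost} models used in practice (an assumption, not the production setup):

```python
from collections import Counter, defaultdict

def fit_rate(rows):
    # rows: list of (x, y) pairs with binary y; returns a function
    # x -> P(Y=1 | x), falling back to the global rate for unseen x.
    pos, tot = defaultdict(int), Counter()
    for x, y in rows:
        tot[x] += 1
        pos[x] += y
    global_rate = sum(pos.values()) / max(sum(tot.values()), 1)
    return lambda x: pos[x] / tot[x] if tot[x] else global_rate

def two_models_cate(data):
    # data: list of (x, t, y) triples from a randomized experiment.
    # CATE_Y(x) is estimated as P(Y=1 | x, T=1) - P(Y=1 | x, T=0).
    p1 = fit_rate([(x, y) for x, t, y in data if t == 1])
    p0 = fit_rate([(x, y) for x, t, y in data if t == 0])
    return lambda x: p1(x) - p0(x)
```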
With ${\hat{C}(x)} =\mathbb{E}[C |x,T=1,Y=1] $ and ${\hat{R}_i(x)} =\mathbb{E}[R |x,T=i,Y=1] $, the full greedy criterion $\frac{CATE_Y(x)}{CATE_{\mathcal{L}}(x)}$ can be written in terms of expectations as follows: \begin{equation} \label{eq:greedy} \frac{Pr(Y=1|x,T=1)-Pr(Y=1|x,T=0)} {Pr(Y=1|x,T=1)[{\hat{C}(x)}-{\hat{R}_1(x)}]+Pr(Y=1|x,T=0){\hat{R}_0(x)}} \end{equation} The \textbf{\textit{Fractional Approximation}} technique relies on direct estimations of all the probabilities and expectations in formula \ref{eq:greedy}. The \textbf{\textit{Two-Models}} and \textbf{\textit{Transformed Outcome}} methods, which estimate only the numerator of Eq. \ref{eq:greedy}, are used as benchmark solutions to the unconstrained problem. \subsection{Retrospective Estimation}\label{sec:retro} This technique relies on the following response distribution factorization: \begin{equation} Pr(Y=1|x, T=1) = \frac{Pr(T=1|x,Y=1)Pr(Y=1|x)}{Pr(T=1|x)} \end{equation} The denominator is the treatment propensity, which is $0.5$ by design in our randomized setup. This decomposition does not allow us to estimate $CATE_Y(x)$ directly, but instead provides an expression for the ratio between $Pr(Y=1|x,T=1)$ and $Pr(Y=1|x,T=0)$: \begin{equation} \frac{Pr(Y=1|x,T=1)}{Pr(Y=1|x,T=0)} = \frac{Pr(T=1|x,Y=1)}{1-Pr(T=1|x,Y=1)} = \frac{S(x)}{1-S(x)} \end{equation} Here $S(x) = Pr(T=1|x,Y=1)$ is the probability that a positive example resulted from the treatment $T=1$.
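Assembling the criterion of formula \ref{eq:greedy} from the component estimates is then mechanical; a sketch follows, where p1, p0 stand for the \textit{Two-Models} estimates of $Pr(Y=1|x,T=i)$ and c_hat, r1_hat, r0_hat for the fitted cost and revenue expectations (all names are illustrative).

```python
def fractional_score(p1, p0, c_hat, r1_hat, r0_hat):
    """Greedy sorting score CATE_Y(x) / CATE_L(x) of the Fractional
    Approximation method, assembled from component model outputs."""
    cate_y = p1 - p0                                   # incremental bookings
    cate_loss = p1 * (c_hat - r1_hat) + p0 * r0_hat    # incremental loss
    return cate_y / cate_loss
```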
This allows us to write the full greedy criterion in terms of $S(x)$: \begin{align} &\frac{CATE_Y(x)}{CATE_{\mathcal{L}}(x)}= \frac{2S(x)-1}{{S(x) [{\hat{C}(x)}-{\hat{R}_1(x)}] + [1-S(x)] {\hat{R}_0(x)}}}\\ &\sign[CATE_Y(x)]=\sign\left[S(x)-0.5\right]\\ &\sign[CATE_{\mathcal{L}}(x)]= \sign\left[\frac{{\hat{R}_1(x)}}{2{\hat{R}_1(x)}-{\hat{C}(x)}}-S(x)\right] \end{align} Estimating $S(x)$, $\hat{R}_1(x)$, $\hat{R}_0(x)$ and $\hat{C}(x)$ requires data only from customers where $Y=1$, making this approach robust to the noise in the general set of visitors (including scrapers and crawlers - a significant portion of traffic who rarely purchase) and less prone to class imbalance issues. It also requires a smaller transactional dataset, which is often easier to collect. Furthermore, assuming that the costs and revenues per booking are all independent of $x$ and that the revenue per booking is independent of treatment $T$ ($\hat{R}_i(x) = \hat{R}$ and $\hat{C}(x) = \hat{C}$), we can represent the ratio by equation \ref{eq:simplefrac}: \begin{equation} \label{eq:simplefrac} \frac{2S(x)-1}{S(x) [\hat{C}-\hat{R}] + [1-S(x)] \hat{R}} \end{equation} We can derive that $CATE_{\mathcal{L}}(x)>0$ if $S(x)<\frac{\hat{R}}{2\hat{R}-\hat{C}}$ and that $CATE_Y(x)>0$ if $S(x)>0.5$. This means that we can solve the greedy problem just by modeling $S(x)=Pr(T = 1|x, Y=1)$. \subsection{Dynamic calibration strategy} Fitting a function $g$ and a threshold $\theta$ offline provides a solution to the offline sample assignment. However, during an online campaign, the general traffic distribution might differ and therefore the optimal threshold $\theta^*$ can shift accordingly. We propose a simple curve fitting technique to address this problem. As seen in figure \ref{fig:roi}, we assume a non-linear monotonic relation between the portion of the exposed population $Q_{\theta}$ and the resulting $ROI$.
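Under the constant revenue/cost assumption, the whole decision reduces to one classifier output $S(x)$. A sketch of equation \ref{eq:simplefrac} and the two sign conditions follows; the constants $\hat{R}$ and $\hat{C}$ and the function names are illustrative.

```python
def retrospective_score(s, r_hat, c_hat):
    """Simplified greedy score from S(x) = Pr(T=1 | x, Y=1), a classifier
    fit on positive (booking) examples only."""
    return (2 * s - 1) / (s * (c_hat - r_hat) + (1 - s) * r_hat)

def uplift_and_loss_signs(s, r_hat, c_hat):
    """CATE_Y(x) > 0 iff S(x) > 0.5; CATE_L(x) > 0 iff S(x) < R/(2R - C)."""
    return s > 0.5, s < r_hat / (2 * r_hat - c_hat)
```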
We suggest modelling this relation as an exponential curve, learning the parameters $a$, $b$ and $c$ such that $\widehat{ROI}(Q)= ae^{bQ}+c$, using the Levenberg-Marquardt least-squares curve fitting algorithm \cite{more1978levenberg}, and then selecting a new $\theta^*$ such that $\widehat{ROI}(Q_{\theta^*}) = 0$. This parametric curve fitting method allows incorporating historical and up-to-date data in the ROI estimation, by refitting the curve at every selected time period. \subsection{Evaluating an Uplift Model: Qini and Qini-ROI curves}\label{sec:eval} The methods are compared with \textit{Qini curves} \cite{radcliffe2007using}, which represent the potential cumulative treatment effect achieved by targeting a selected top ranked population based on the model predictions (see figure \ref{fig:cate}). The \textit{Qini Coefficient} (also called \textit{AUUC} - Area Under the Uplift Curve) \cite{devriendt2018literature} is used to quantify and evaluate the overall quality of an uplift model. Following the same ranking procedure, we propose the \textit{Qini-ROI curve}, representing the expected ROI that can be achieved by targeting a selected portion of the top ranked population (see figure \ref{fig:roi}). According to the described optimization problem (\ref{sec:optimization}), $ROI\geq0$ is a binding constraint on the treatment assignment, therefore the optimal model threshold is at the operating point where the treatment effect is the highest and $ROI\geq0$ still holds. Thus, the models' performance is evaluated by the \textit{Maximal $ATE$ at $ROI\geq0$} metric. The \textit{AUUC} and \textit{Maximal targeted population at $ROI\geq0$} metrics are used to evaluate the class-separation and the population coverage of the methods.
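The recalibration step can be sketched as follows. The paper fits the curve with Levenberg-Marquardt (as implemented, e.g., in scipy.optimize.curve_fit); for a self-contained sketch we instead profile $b$ over a grid and solve $a$, $c$ by linear least squares, then invert the fitted curve for the zero-ROI quantile. The grid range and data are assumptions for illustration.

```python
import numpy as np

def fit_roi_curve(q, roi, b_grid=np.linspace(0.1, 6.0, 600)):
    """Fit ROI(Q) = a*exp(b*Q) + c by profiling b on a grid and solving
    (a, c) with linear least squares at each candidate b."""
    best = None
    for b in b_grid:
        X = np.column_stack([np.exp(b * q), np.ones_like(q)])
        (a, c), *_ = np.linalg.lstsq(X, roi, rcond=None)
        sse = np.sum((roi - X @ np.array([a, c])) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    _, a, b, c = best
    return a, b, c

def recalibrate_threshold(a, b, c):
    """New quantile Q* with predicted ROI(Q*) = 0: Q* = ln(-c/a) / b."""
    return np.log(-c / a) / b
```

Refitting on each period's data and moving the threshold to the score quantile $Q^*$ implements the dynamic calibration described above.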
\section{Experimental Study} \begin{figure*} \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{qini.pdf} \captionof{figure}{Qini curves comparison} \label{fig:cate} \end{minipage}% \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{roi.pdf} \captionof{figure}{Qini-ROI curves comparison} \label{fig:roi} \end{minipage} \end{figure*} \subsection{Randomized controlled trial of undiscriminated treatment assignment} \label{sec:randomized} In order to test the suggested method, we conducted a series of offline and online experiments. As a preceding step to our method, we conducted a randomized controlled trial to assess the potential impact of a given promotion. The experiment took the form of an online A/B test, in which the treatment group was offered a promotion and the control group was not. The experiment was conducted on the Booking.com website and lasted for several weeks. In our experiment, all of the customers in the treatment group received an offer of a free benefit as an incentive for booking an accommodation. The individual outcome metrics - completing a booking, promotion cost, and incremental revenue - were aggregated and compared between the control (no promotion) and treatment (free benefit) groups, producing an estimation of the $ATE$ and $ROI$ of the promotion treatment. The experiment resulted in a conclusively positive treatment effect \textbf{($ATE>0$)}, but with an \textbf{$ROI<0$} value, which is insufficient to justify continuing the promotion. \subsection{Uplift methods training and offline evaluation} The data gathered during the A/B test was used as training instances for uplift modeling. The dataset consisted of over 100 million website visit instances, associated with customers' current and historical search queries and transactions. Each instance was annotated with its associated transaction, revenue, and cost, where applicable.
We evaluated the models' performance on a validation set according to the metrics described in \ref{sec:eval}. Figure \ref{fig:cate} presents the comparison of the models' uplift performance; Figure \ref{fig:roi} presents the comparison of the models' ROI profile; key values are summarized in table \ref{tab:results}. The stars on each curve represent the best operating points (in terms of $ATE$) for each method, s.t. $ROI\geq0$. \begin{table}[b] \caption{Uplift methods' performance comparison} \label{tab:results} \begin{tabular}{lccc} \hline Method & AUUC & \specialcell{Max. Population\\at ROI=0} & \specialcell{Max. $ATE$ \\at ROI=0} \\ \hline Two-Models & 0.638 & 0\% & 0\%\\ Transformed Outcome & \textbf{0.912} & 2.4\% & 38.6\% \\ Fractional Approximation & 0.721 & \textbf{38\%} & 65.8\%\\ Retrospective Estimation & 0.806 & 31.4\% & \textbf{79.7\%}\\ \hline \end{tabular} \end{table} The \textit{Transformed Outcome} method achieves the highest $AUUC$ score (0.912); however, its $ROI=0$ intersection is only at 2.4\% of the population, achieving 38.6\% of the treatment effect relative to giving the promotion to all the users. The \textit{Two-Models} method has the highest potential treatment effect (109\%) in an unconstrained setup, but it performs poorly on the key metrics and does not have any operating point with $ROI\geq0$. The \textit{Fractional Approximation} method provides the highest population coverage (38\%) at $ROI\geq0$ and achieves 65.8\% of the full treatment effect. Recalling the objective function of the optimization problem, the \textit{Retrospective Estimation} method provides the best solution. It leads by a clear margin on maximal treatment effect, achieving 79.7\% of the possible uplift, and is a runner-up in both the AUUC (0.806) and Population at $ROI=0$ (31.4\%) metrics. It is worth highlighting that the \textit{Retrospective Estimation} performance relies solely on positive examples data.
\subsection{Randomized controlled trial with personalized treatment assignment} We recreated the randomized controlled trial setup (section \ref{sec:randomized}) as an online validation of our technique. For this iteration, visitor traffic was split into four randomly-assigned treatment groups: \begin{itemize} \item \textbf{A} - \textbf{\textit{Control Group}} - no promotion offering \item \textbf{B} - \textbf{\textit{Undiscriminated Treatment}} - promotion is offered to everyone \item \textbf{C} - \textbf{\textit{Personalized Treatment}} - promotion is offered to a population with $Score(x)\geq\theta$ \item \textbf{D} - \textbf{\textit{Dynamically Personalized Treatment}} - promotion is offered to a population with $Score(x)\geq\theta_{d}$ \end{itemize} We used the \textit{Retrospective Estimation} technique as the underlying personalized treatment assignment method in groups \textbf{C} and \textbf{D}, due to its superior performance on the target metric. The model score threshold ($\theta$) was selected based on offline evaluation of the model, such that $31.4\%$ of the population is expected to receive the promotion. We re-calibrated the threshold for the \textit{Dynamically Personalized Treatment} ($\theta_d$) after each predefined time period. A randomized controlled trial (over 8 update periods) was conducted under the same promotion conditions as in the first experiment (an offer of a free benefit as an incentive for an accommodation booking on the website). The goal of the experiment was to evaluate the incremental ATE and the ROI of the treatment groups compared to the control group \textbf{A}.
\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{dynamic_results_both_wide.pdf} \caption{Experimental results (achieved ROI and portion of ATE) of personalized promotions assignment} \label{fig:dynamic} \end{figure*} Figure \ref{fig:dynamic} presents the measured cumulative $ROI$ (dashed lines) and the relative $ATE$ (solid lines) of the treatments, where the \textit{diamond} markers ($\Diamond$) represent the dynamic calibration points during the experiment. Treatment \textbf{B} resulted in $ROI$ values below 0, similarly to the previous iteration of the experiment. Treatment \textbf{C} (the Personalized group) provided consistently high $ROI$ values ($ROI=0.36$ at the last time period) and its total cumulative uplift is 66\% of that achieved by the fully-treated group \textbf{B}. Treatment \textbf{D} starts with a high $ROI=0.3$ in the first period and converges to $ROI=0.24$ at the end of the experiment. Dynamic adjustment of the threshold in group \textbf{D} allowed the treatment to be exposed to 47\% of the population, achieving on average 74\% of the uplift achieved by treatment \textbf{B}, including 80\% during the last period. \section{Conclusions} In this work, we introduced an optimization framework that solves the personalized promotion assignment problem by reduction to the Knapsack Problem, and presented a novel method to estimate the required sorting score. Classical uplift models excelled in the AUUC metric, but under-performed in ROI-related measures as they do not account for the cost factor. Our novel \textit{Retrospective Estimation} technique relies solely on data from positive outcome examples, which makes its training easier and more scalable in big-data environments, especially for cases where only the outcomes' data is available. The method achieved 80\% of the possible uplift, by targeting only 31\% of the population and meeting the $ROI\geq0$ constraint during offline evaluation.
It outperforms the direct \textit{Fractional Approximation} model, potentially thanks to its robustness to noise in the training data, which opens an opportunity for further research on new applications. We suggested a dynamic calibration technique, which allows adjusting the model threshold and exposing a different portion of the population to the treatment, by relying on online performance. We evaluated our suggested techniques through a massive online randomized controlled trial and showed that a personalized treatment assignment can turn an under-performing promotions campaign into a campaign with a viable $ROI$ and significant uplift ($ATE=66\%$ of the possible effect). Applying the dynamic threshold calibration to online data improved the uplift of the treatment by up to 20\%, while staying within the constraints, and provided a long-term solution for a rapidly changing environment. Our contributions lay the foundation for future research applying the \textit{Retrospective Estimation} technique to other uplift modeling and personalization applications. This sets the groundwork for new optimization approaches and provides solid empirical evidence of the benefits of online personalized promotion allocation and dynamic calibration. It creates more value for the customers and increases the loyal customer base for the vendors and the platform. \begin{acks} We would like to thank the entire Value Team for their contribution to the project and to Adam Horowitz, Tamir Lahav, Guy Tsype, Pavel Levin, Irene Teinemaa and Nitzan Mekel-Bobrov for their support and insightful discussions. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Probabilistic forecasting of sporting events such as football matches has become an area of considerable interest in recent years. One reason for this is that forecasting can help inform gambling decisions and therefore has the potential to support the identification of profitable betting strategies. Probabilistic forecasting has also grown in popularity in the sports media. In some media outlets, for example, estimated probabilities are routinely disseminated in match previews and even in-play. An obvious implication of the growth of probabilistic forecasting in sport is the need for effective methods of forecast evaluation. This is particularly true in the case of gambling where `beating the bookmaker' is a difficult task which typically requires highly informative predictions. However, even when the forecasts are used for purposes other than gambling, there is often still an incentive for the forecasts to be informative, or at least to be perceived as such. There is therefore a need for objective measures of forecast performance. This paper is concerned with the question of how to evaluate probabilistic forecasts of events such as football matches with three or more possible outcomes. \par Evaluation of probabilistic forecasts is typically performed using scoring rules, functions of the forecast and corresponding outcome aimed at assessing forecast performance. A large number of scoring rules have been defined over the years and there is considerable debate surrounding which are the most appropriate. A common approach with which to differentiate candidate scoring rules is to identify desirable properties and favour scores that have them. There is often debate, however, surrounding which properties are (most) desirable and hence a lack of consensus remains. As a result, in fields such as weather forecasting, a wide range of different scores are often presented. 
\par One property of scoring rules that is perhaps the most widely agreed upon is called propriety. A score is proper if, in expectation, it favours a forecast that consists of the distribution from which the outcome is drawn, i.e.\ a perfect probabilistic forecast. This paper is concerned primarily with two more contentious properties. One of those properties is locality. A score is \emph{local} if it only considers the probability at the outcome and disregards the rest of the distribution. A \emph{non-local} score therefore takes at least some of the rest of the forecast distribution into account. The other property of interest concerns whether a scoring rule takes ordering into account. Events with discrete outcomes can be divided into two categories: nominal and ordinal. Ordinal events have a natural ordering. For example, a question on a survey asking an interviewee to rate a service might have a set of potential responses ranging from `very poor' to `very good'. It is clear that `very good' ranks higher than `good', whilst `good' ranks higher than `poor'. Nominal events, on the other hand, have no natural ordering. For example, there is no obvious way to rank a set of colours or nationalities. The outcomes of football matches can be considered to be ordinal (along with matches in other sports in which a draw is allowed). A home win is closer to a draw than it is to an away win. As such, there is a question of whether a scoring rule should take into account this ordering. A paper by Constantinou and Fenton argues that forecast probability placed on potential outcomes close to the actual outcome should be rewarded and therefore ordering should be taken into account (\cite{constantinou2012solving}). Therefore, if the match outcome is a home win, probability placed on a draw should be rewarded more than probability on an away win. Scoring rules that have this property are referred to as being `sensitive to distance'.
One scoring rule that has this property is the ranked probability score (RPS). Constantinou and Fenton therefore argue that the RPS is the appropriate score for the evaluation of probabilistic forecasts of football matches. As a result, the RPS has become perhaps the most popular and widely used scoring rule for this purpose. In this paper, the view that sensitivity to distance in a scoring rule is beneficial is disputed along with Constantinou and Fenton's suggestion that the RPS should be widely used to evaluate football forecasts. \par Three scoring rules are considered in this paper: the RPS, which is both non-local and sensitive to distance, the Brier score, which is non-local but insensitive to distance and the ignorance score, which is local and therefore also insensitive to distance. It is argued that the ignorance score is the most appropriate out of these three candidate scores and evidence is presented in the form of two experiments demonstrating that the ignorance score is able to identify a set of perfect forecasts quicker than the other two scoring rules. \par The question of how probabilistic forecasts of discrete events should be evaluated is one with a long history. An early contribution to the literature was the introduction of the Brier score (\cite{brier1950verification}). The Brier score considers the squared distance between the forecast probability and the outcome for each possible category in which the outcome could fall (the category in which the outcome falls is represented with a one and all other categories with a zero). Whilst the Brier score is most commonly applied to binary events, it was originally formulated more generally such that it can be extended to events with more than two possible outcomes. The ignorance score (\cite{good1992rational,roulston2002evaluating}), often referred to as the logarithmic score, takes a different approach by simply taking the logarithm of the probability placed on the outcome. 
The rationale behind the ignorance score lies in information theory and is closely related to other information measures such as the Kullback-Leibler Divergence (\cite{brocker2007scoring}). The ranked probability score (\cite{epstein1969scoring}) is closely related to the Brier score but compares the cumulative distribution function of the forecast and the outcome rather than the probability mass function. Other proposed scoring rules include the spherical score, which combines the probability placed on the outcome with a correction term to ensure that it is proper (\cite{friedman1983effective}), and the quadratic score, which simply takes the mean squared distance between the forecast and the outcome (\cite{selten1998axiomatic}). This paper, however, is concerned only with the ignorance score, Brier score and RPS. These scoring rules were chosen because we are principally interested in the properties of locality and sensitivity to distance. The RPS is both non-local and sensitive to distance, the Brier score is non-local and insensitive to distance and the ignorance score is local and insensitive to distance (the ignorance score is in fact the only local and proper scoring rule; \cite{bernardo1979expected}). \par A range of other properties of scoring rules have been proposed, many of which have been suggested as desirable in some way. Propriety, as mentioned above, is perhaps the most well known property and it stipulates that, in expectation, a scoring rule should favour the distribution from which the outcome was drawn over all others (\cite{brocker2007scoring}). Another property is locality. A score is local if only the probability at the outcome is taken into account (\cite{parry2012proper}).
Other properties of scoring rules include \emph{equitability}, whereby a score ascribes, in expectation, the same score to constant forecasts as to a random forecast; \emph{regularity}, whereby an infinite score is only ascribed to a forecast that places zero probability on the outcome (\cite{gneiting2007strictly}); and \emph{feasibility}, whereby bad scores are assigned to forecasts that give material probability to events that are highly unlikely (\cite{maynard}). \par A number of authors have commented on the value of sensitivity to distance in scoring rules. \cite{jose2009sensitivity} recommended the use of scoring rules that are sensitive to distance, including for forecasts of football matches. They provide generalisations of existing scoring rules to make them sensitive to distance. \cite{stael1970family} also recommended that scoring rules should be sensitive to distance and suggest a family of scoring rules based on the RPS that also have this property. \cite{murphy1970ranked} compared the formulation of the RPS and the Brier score and recommended that the RPS should at least be used alongside the Brier score when the event of interest is ordered. \cite{bernardo1979expected}, on the other hand, commented that ``when assessing the worthiness of a scientist's final conclusions, only the probability he attaches to a small interval containing the true value should be taken into account'', arguing for locality as a desirable property. \par There is a steadily increasing literature describing methodology for the construction of probabilistic forecasts of sporting events such as football matches (\cite{diniz2019comparing}). In many of these, scoring rules have been deployed to attempt to assess the quality of those forecasts. For example, \cite{forrest2005odds} use the Brier score to compare probabilistic forecasts derived from bookmakers' odds and from a statistical model.
\cite{spiegelhalter2009one} use the Brier score to assess the performance of their Premier League match predictions. The ranked probability score has also been widely used. For example, \cite{koopman2019forecasting} use the RPS to evaluate their dynamic multivariate model of football matches, \cite{baboota2019predictive} use the RPS to evaluate their machine learning approach to football prediction and \cite{schauberger2016modeling} use the RPS alongside cross-validation to select a tuning parameter in their model. The ignorance score appears to be less widely used than the Brier score and the RPS. \cite{diniz2019comparing} compare the performance of a number of predictive models using the ignorance score alongside the Brier score and the spherical score, whilst it has also been used by \cite{schmidt2008accuracy} alongside the Brier score to assess the probabilistic performance of a prediction market for the 2002 World Cup. \par This paper is organised as follows. In section~\ref{section:background}, formal definitions of the scoring rules and their properties are given. In section~\ref{section:rebuttal}, the arguments of Constantinou and Fenton are presented and disputed. In section~\ref{section:what_scoring_rules_for}, the question of how scoring rules are used in practice is discussed. The philosophical difference between a perfect and imperfect model scenario is discussed in section~\ref{section:perfect_imperfect}. The performance of the Brier score, Ignorance score and RPS is compared in a model selection experiment using examples from Constantinou and Fenton's paper in section~\ref{section:experiment_one}. A similar experiment is performed in section~\ref{section:experiment_two} using forecast probabilities derived from bookmakers' odds of actual matches. Finally, section~\ref{section:discussion} is used for discussion and conclusions.
\par \section{Background} \label{section:background} \subsection{Definitions of Scoring Rules} The three scoring rules considered in this paper are defined as follows. For an event with $r$ possible outcomes, let $p_{j}$ and $o_{j}$ be the forecast probability and outcome at position $j$, where the ordering of the positions is preserved. The Brier score, generalised for forecasts of events with $r$ possible outcomes, is defined as \begin{equation} \mathrm{Brier}=\sum_{i=1}^{r}(p_{i}-o_{i})^{2}. \end{equation} The ranked probability score is defined as \begin{equation} \mathrm{RPS}=\sum_{i=1}^{r-1}\left(\sum_{j=1}^{i}(p_{j}-o_{j})\right)^{2}. \end{equation} The ignorance score is defined as \begin{equation} \mathrm{IGN}=-\log_{2}(p(Y)) \end{equation} where $p(Y)$ is the probability placed on the outcome $Y$. \subsection{Properties of Scoring Rules} \label{section:properties_scoring_rules} Throughout a long history of research, a large number of properties of scoring rules have been defined. Here, those that are relevant to the arguments and experiments in this paper are described. \par Perhaps the most well known property of scoring rules is \emph{propriety}. A score is \emph{proper} if it is optimised, in expectation, with the distribution from which the outcome was drawn. As such, a proper scoring rule always favours a perfect probabilistic forecast in expectation. It is widely held that scoring rules that do not have this property should be dismissed (\cite{brocker2007scoring}). Each of the scoring rules described above is proper. A perfect probabilistic forecast is rarely, if ever, achievable in practice. Therefore, in expectation, whilst a proper scoring rule will always rank a perfect forecast more favourably than an imperfect one, different proper scores will often rank pairs of imperfect forecasts differently. \par Another property of scoring rules is locality. A score is \emph{local} if it only takes into account the probability at the outcome.
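The three definitions translate directly into code. A minimal sketch follows, with the outcome given as the index of the realised category and the ordering of positions preserved.

```python
import numpy as np

def brier(p, k):
    """Brier score: squared error over all r categories."""
    o = np.zeros(len(p)); o[k] = 1.0
    return np.sum((np.asarray(p) - o) ** 2)

def rps(p, k):
    """Ranked probability score: squared error of the cumulative
    distributions, hence sensitive to distance."""
    o = np.zeros(len(p)); o[k] = 1.0
    return np.sum(np.cumsum(np.asarray(p) - o)[:-1] ** 2)

def ignorance(p, k):
    """Ignorance score: local, depends only on the probability at the outcome."""
    return -np.log2(p[k])
```

For the forecast $(0.6, 0.3, 0.1)$ and a home win, these give $0.26$, $0.17$ and $-\log_2 0.6 \approx 0.74$ respectively; note that, unlike the other two, the RPS improves if probability on the away win is moved to the draw.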
If any of the rest of the distribution is taken into account by the score, it is non-local. The ignorance score is local whilst the Brier Score and RPS are both non-local. \par For discrete events, another property concerns whether the score takes into account the ordering of a set of potential outcomes. Scores that do this are defined as \emph{sensitive to distance}. For sporting events, for example, a draw and a home win can be considered to be closer together than a home win and an away win. A scoring rule that is sensitive to distance will therefore reward probability placed on an event closer to the actual outcome. Whilst the RPS is sensitive to distance, the ignorance and Brier Scores are not since they do not take into account the ordering of the possible outcomes. \par \section{A Rebuttal of Arguments in Favour of the RPS} \label{section:rebuttal} The popularity of the RPS for evaluating probabilistic forecasts of football matches is largely due to a paper written by Constantinou and Fenton, published in the Journal of Quantitative Analysis in Sports in 2012 (\cite{constantinou2012solving}). The crux of the argument in that paper is that probability placed on potential outcomes `close' to the actual outcome should be rewarded more than probability placed on those that are `further away'. If the home team is currently winning by one goal, it would take the away team to score one more goal for the match to end in a draw and two more goals for it to end in an away win and, therefore, the potential outcomes are, in a sense, ordered. The authors claim that, in light of this, only scoring rules that are sensitive to distance should be considered. A natural choice is therefore argued to be the RPS. \par With the aim of presenting further evidence towards the suitability of the RPS, Constantinou and Fenton define five hypothetical football matches, each with a specified outcome (i.e. a home win, a draw or an away win).
For each match, they define a competing pair of probabilistic forecasts and use general reasoning to argue that, given the defined outcome, one is more informative than the other. They then show that the RPS is the only scoring rule out of a number of candidates that assigns the best score to their favoured forecast in each case, and argue that this provides evidence of its suitability. We dispute the validity of this reasoning. We argue that the approach by which the performance of the scores is compared under a specific outcome of the match is flawed. Instead, scores should be compared by considering the underlying probability of each possible outcome, thereby taking into account the underlying probability distribution of the match. This is a much more difficult task. \par To provide a setting with which to illustrate the arguments of Constantinou and Fenton and to provide counterarguments, details of the five hypothetical matches used as examples in that paper are reproduced. The outcome of each match, the two forecasts and an indicator of Constantinou and Fenton's favoured forecast ($\alpha$ or $\beta$) are shown in table~\ref{table:constantinou_scenarios}. \begin{table}[] \begin{tabular}{|l|l|lll|l|l|} \hline Match & Forecast & p(H) & p(D) & p(A) & Result & `Best' Forecast \\ \hline 1 & $\alpha$ & 1 & 0 & 0 & H & $\alpha$ \\ & $\beta$ & 0.9 & 0.1 & 0 & & \\ \hline 2 & $\alpha$ & 0.8 & 0.1 & 0.1 & H & $\alpha$ \\ & $\beta$ & 0.5 & 0.25 & 0.25 & & \\ \hline 3 & $\alpha$ & 0.35 & 0.3 & 0.35 & D & $\alpha$ \\ & $\beta$ & 0.6 & 0.3 & 0.1 & & \\ \hline 4 & $\alpha$ & 0.6 & 0.25 & 0.15 & H & $\alpha$ \\ & $\beta$ & 0.6 & 0.15 & 0.25 & & \\ \hline 5 & $\alpha$ & 0.57 & 0.33 & 0.1 & H & $\alpha$ \\ & $\beta$ & 0.6 & 0.2 & 0.2 & & \\ \hline \end{tabular} \caption{Match examples defined by Constantinou and Fenton. In each case, forecast $\alpha$ is described as superior to forecast $\beta$ by the authors.} \label{table:constantinou_scenarios} \end{table} The reasoning given by the authors for favouring each forecast is as follows. For match one, forecast $\alpha$ predicts a home win with total certainty and must outperform any other forecast (including $\beta$). For match two, forecast $\alpha$ places more probability on the outcome than forecast $\beta$ and therefore provides the most informative forecast. For match three, the only match in which the outcome is defined to be a draw, it is argued that, although both forecasts place the same probability on the outcome, forecast $\alpha$ should be favoured because the probabilities placed on a home win and an away win are evenly distributed and therefore `more indicative of a draw'. For match four, it is argued that forecast $\alpha$ should be favoured because, although both forecasts place the same probability on the outcome, $\alpha$ places more probability `close' to the home win, i.e. on a draw, than $\beta$ and therefore is more favourable. Finally, for match five, which is described as the most contentious case, whilst $\beta$ places more probability on the outcome than $\alpha$, the authors argue that forecast $\alpha$ is, in fact, more desirable than $\beta$ because it is `more indicative of a home win' due to the greater probability placed on the draw. To provide further justification for favouring forecast $\alpha$, they give an example in which a gambler uses the forecast to inform a bet on the binary event of whether the match ends with any outcome other than an away win (commonly known as a lay bet). Since the sum of the probabilities on the home win and the draw is higher for forecast $\alpha$ than for forecast $\beta$, they suggest that $\alpha$ is more desirable for that purpose. \par Our main counterargument to the reasoning of Constantinou and Fenton concerns their assertion that forecast $\alpha$ outperforms forecast $\beta$ in each case.
In fact, it is impossible to say which of the two forecasts should be preferred in each case without considering the underlying probability distribution of the match (which is unknown in practice). Consider match one. Here, they argue that forecast $\alpha$ should be rewarded more than any other forecast since it predicts the outcome with absolute certainty. This seems entirely reasonable since no forecast is able to place more probability on the outcome. However, it does \emph{not} follow from this that $\alpha$ is the best forecast. To illustrate this, consider the case in which $\beta$ represents the true underlying probability distribution of the match; i.e. the match will end with a home win with probability $0.9$, a draw with probability $0.1$ and an away win with probability $0$. It is not contentious to state that $\beta$ is the best forecast in this setting and we argue that it would be deeply flawed to claim otherwise. Forecast $\alpha$ should not be considered to be the best forecast simply because the match happened to end in a home win (which would happen with 90 percent probability in this case). In a succession of football matches in which the underlying probability is represented by $\beta$ and the forecast is $\alpha$, a draw would eventually occur, with forecast $\alpha$ placing zero probability on that event. The same logic can be applied if the underlying probability distribution is represented by forecast $\alpha$, in which case, $\alpha$ can objectively be considered to be the best forecast. In summary, without knowing the underlying probability distribution of the match, the answer to the question of which forecast is best can only be `it depends'. In practice, of course, it is never possible to know the underlying distribution and therefore we cannot distinguish the performance of the two forecasts on the basis of a single match. 
\par The effect of the probability distribution of the match on the favoured forecast under each score is now demonstrated. For a given probability distribution, the expected scores of forecasts $\alpha$ and $\beta$ are calculated, in order to determine which is preferred by each scoring rule. This is repeated for a large number of randomly selected underlying probability distributions. The results are demonstrated for match five in figure~\ref{figure:match5_imperfect_scores}. Here, each dot represents a different probability distribution of the match with the probability of a home win and a draw on the $x$ and $y$ axes respectively. Each dot is coloured according to which of the two candidate forecasts is preferred under the three scoring rules. The colour scheme is defined in table~\ref{table:key_for_alpha_beta_plot}. For example, if a point is coloured blue, the RPS prefers $\alpha$ whilst the ignorance and Brier scores prefer $\beta$ under that distribution. Whilst the colour scheme might seem difficult to interpret at first, it becomes much clearer when it is considered that points coloured green, blue and red represent distributions in which only the ignorance, RPS and Brier score, respectively, prefer $\alpha$ and that the colours of overlapping regions are defined by mixing those colours. Note that, for this particular match, there are no green areas, that is, there are no underlying distributions in which the ignorance score prefers $\alpha$ and the RPS and Brier score prefer $\beta$.
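The expected-score comparison described above can be sketched in a few lines of code. The snippet below assumes the standard textbook forms of the three scoring rules for three ordered outcomes (home win, draw, away win); the two forecasts are those of match five. A candidate true distribution is drawn at random and each rule's preferred forecast is the one with the lower expected score under that distribution.

```python
import numpy as np

# Standard forms of the three scoring rules for the ordered outcomes
# (0 = home win, 1 = draw, 2 = away win); p is a forecast, j the outcome.
def ignorance(p, j):
    return -np.log2(p[j])

def brier(p, j):
    o = np.zeros(3)
    o[j] = 1.0
    return np.sum((p - o) ** 2)

def rps(p, j):
    o = np.zeros(3)
    o[j] = 1.0
    # mean squared difference of the cumulative distributions (K - 1 = 2)
    return np.sum((np.cumsum(p)[:-1] - np.cumsum(o)[:-1]) ** 2) / 2

def expected_score(score, forecast, truth):
    """Expectation of `score` for `forecast` when the outcome follows `truth`."""
    return sum(truth[j] * score(forecast, j) for j in range(3))

alpha = np.array([0.57, 0.33, 0.10])   # match five, forecast alpha
beta = np.array([0.60, 0.20, 0.20])    # match five, forecast beta

rng = np.random.default_rng(0)
for _ in range(5):
    truth = rng.dirichlet([1, 1, 1])   # a random underlying distribution
    preferred = {s.__name__: 'alpha'
                 if expected_score(s, alpha, truth) < expected_score(s, beta, truth)
                 else 'beta'
                 for s in (ignorance, brier, rps)}
    print(np.round(truth, 3), preferred)
```

For a distribution placing probability one on the home win, for example, the RPS prefers $\alpha$ whilst the ignorance and Brier scores prefer $\beta$, in line with the blue region in the bottom right of the figure.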
\par \begin{table}[!htb] \begin{tabular}{|l|c|c|c|} \hline Colour & Ignorance & RPS & Brier \\ \hline Green & $\alpha$ & $\beta$ & $\beta$ \\ Blue & $\beta$ & $\alpha$ & $\beta$ \\ Red & $\beta$ & $\beta$ & $\alpha$ \\ Turquoise & $\alpha$ & $\alpha$ & $\beta$ \\ Brown & $\alpha$ & $\beta$ & $\alpha$ \\ Purple & $\beta$ & $\alpha$ & $\alpha$ \\ Yellow & $\beta$ & $\beta$ & $\beta$ \\ Black & $\alpha$ & $\alpha$ & $\alpha$ \\ \hline \end{tabular} \caption{Colour scheme for figures~\ref{figure:match5_imperfect_scores} and~\ref{figure:match_1_to_4_imperfect_scores}.} \label{table:key_for_alpha_beta_plot} \end{table} The first conclusion to be drawn from figure~\ref{figure:match5_imperfect_scores} is that, clearly, as previously discussed, the forecast favoured by each scoring rule depends on the underlying probability distribution. Moreover, the choice of scoring rule impacts which of the two forecasts is preferred. We can look at each of the regions and try to understand how and why the three scoring rules differ. Consider the blue region in the bottom right of the figure. A point located in the very bottom right represents a probability distribution which places a probability of one on a home win and therefore zero on both a draw and an away win. Here, the RPS is the only score that favours $\alpha$ over $\beta$. This seems somewhat counterintuitive and can be argued to be a weakness of the scoring rule. In such a case, the RPS rewards probability placed on a draw, regardless of the fact that that outcome \emph{cannot} happen. The cost of doing this is that, out of the two forecasts, the one that places less probability on the outcome is favoured. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.35]{match5_imperfect_scores.jpg} \caption{Randomly chosen probability distributions of match five coloured according to which forecast ($\alpha$ or $\beta$) is preferred by each of the three scoring rules.
The colour scheme is described in table~\ref{table:key_for_alpha_beta_plot}.} \label{figure:match5_imperfect_scores} \end{figure} The same information as shown in figure~\ref{figure:match5_imperfect_scores} for match five is shown for matches one to four in figure~\ref{figure:match_1_to_4_imperfect_scores}. This reinforces the importance of the underlying probability distribution and how the choice of forecast depends heavily on the scoring rule. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.35]{match_1_to_4_imperfect_scores.jpg} \caption{Randomly chosen probability distributions of matches one to four coloured according to which forecast ($\alpha$ or $\beta$) is preferred by each of the three scoring rules. The colour scheme is described in table~\ref{table:key_for_alpha_beta_plot}.} \label{figure:match_1_to_4_imperfect_scores} \end{figure} In practice, scoring rules are usually used to assess the performance of forecasting \emph{systems} rather than individual forecasts. A forecasting system is a set of rules that is used to generate forecasts of different events in some common way. For example, a forecasting system might be built on the basis of an individual model, a combination of models or the judgement of a particular person and can be applied to generate forecasts of a range of events (e.g. football matches). Forecasting systems are then evaluated by taking the average score over many events according to some scoring rule. This provides a basis with which to select a forecasting system for the prediction of future events. \par Before moving on, it is of interest to address two particular points made by Constantinou and Fenton in favour of forecast $\alpha$ for match five. Firstly, they describe a situation in which the forecasts are used to inform a `lay' bet on an away win. 
They argue that, since the combined probability placed on a home win or a draw is higher for forecast $\alpha$ than for forecast $\beta$, $\alpha$ is a better forecast, given this outcome. There is a simple counterargument to this. If a gambler intends to use the forecasts to make lay bets such as the one described, the resulting binary forecasts formed by adding the home win and draw probabilities should be evaluated separately. This is because the new binary forecasts take a different form and have a different aim. It does not make sense during evaluation to attempt to pre-empt how the forecasts might be used to create other forecasts of a different nature. In fact, the original match outcome forecasts and the binary forecasts might even favour a different forecasting system. For example, one forecasting system might be poor at distinguishing a home win from a draw but good at estimating the probability of an away win. Tying one's hands to create and use a one-size-fits-all forecast seems unnecessary and counterproductive in this case. \par The second point of contention regards the `indicativeness of a home win' in match five. The authors argue that, despite the fact that forecast $\beta$ places more probability on the outcome than forecast $\alpha$, forecast $\alpha$ is more indicative of a home win, due to the increased probability placed on the draw. It should be noted here that, were the probability on the draw reduced to $0.3$ and the probability on the away win increased to $0.13$ (such that the probabilities still sum to one), the RPS would favour forecast $\beta$ and thus the `indicativeness' of a home win is somewhat arbitrary. \par The primary claim of Constantinou and Fenton is that probability placed on possible outcomes that are `close' to the actual outcome should be rewarded more than probability placed on outcomes that are `further away'. Furthermore, they argue that the RPS provides a scoring rule that does this and is therefore suitable for evaluating forecasts of football matches.
However, as described above, by not considering the underlying distribution of the match, it is not possible to state that one forecast is better than another and therefore the reasoning given in support of the RPS does not provide a compelling argument. We therefore consider the question of which forecast is assigned the best score, when conditioned on a single outcome, to be moot and we do not address it further. Instead, we define potential goals of using scoring rules and ask whether the sensitivity to distance property offered by the RPS has any value in achieving them. \par \section{What are Scoring Rules For?} \label{section:what_scoring_rules_for} The principal intention of this paper is to assess the value of scoring rules that are non-local and sensitive to distance in the context of forecasts of football matches. In order to attempt to assess the merits of these properties, it is useful to consider the aims behind the deployment of scoring rules. For the properties of interest to have value, there should be some practical benefit in terms of achieving those aims. Here, we discuss the aims behind the application of scoring rules with a view to assessing whether the non-local and sensitivity to distance properties help to achieve them. \par One obvious aim of scoring rules is to provide a means of comparison between competing forecasting systems. There are many contexts in which one might want to make such comparisons. One might have a finite set of competing probabilistic forecasts of the same events and be looking to determine which is the most informative. For example, a broadcaster may want to decide which forecasts are most useful to show in its sports coverage or a gambler may wish to decide which forecasting service to subscribe to in order to aid their betting decisions. A means of comparison can also be important in the context of model development.
A forecaster looking to improve the performance of their forecasting system by, for example, increasing the number of factors included in the model, may want to assess whether these changes result in improved forecasts. Parameter selection also falls under the umbrella of forecast comparison since each set of parameter values will lead to a different set of forecasts. Since parameters usually take continuous values, parameter selection can be considered to be a comparison between an infinite number of sets of forecasts. \par Whilst selecting one of two or more sets of forecasts may be considered to be important in a range of settings, this alone does not give an indication of the magnitude of the difference in skill. An additional question of interest concerns how much more informative one set of forecasts is over another and whether this difference is significant. Typically, two sets of forecasts are compared using the difference in their mean score (\cite{wheatcroft2019interpreting}). Resampling techniques can then be used to determine if that difference is significant. An interesting question concerns whether the difference in scores has an interpretation in terms of the relative performance of the forecasting systems of interest. In fact, to our knowledge, only one of the scoring rules considered in this paper has a useful interpretation when considered in this way and that is the ignorance score. The difference between the mean ignorance scores of two forecasting systems represents the difference in information provided by each one expressed in bits. This means that calculating $2$ to the power of the mean relative ignorance between two forecasting systems yields the mean factor (in the geometric sense) by which one system increases the probability placed on the outcome relative to the other. \par Whilst scoring rules are useful tools for evaluating and comparing forecasting systems, it is important to acknowledge their limitations.
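This interpretation is easily checked numerically. In the hypothetical example below, system two places exactly twice the probability on the outcome of each of three matches as system one, and the mean relative ignorance is therefore one bit:

```python
import numpy as np

# Probability placed on the observed outcome of three matches by two
# hypothetical forecasting systems; system two places twice as much.
p_sys1 = np.array([0.25, 0.40, 0.10])
p_sys2 = np.array([0.50, 0.80, 0.20])

# Mean relative ignorance of system one with respect to system two, in bits.
mean_rel_ign = np.mean(-np.log2(p_sys1) - (-np.log2(p_sys2)))
print(mean_rel_ign)       # ~1 bit
print(2 ** mean_rel_ign)  # ~2: the factor increase in outcome probability
```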
Often, a set of forecasts is used with a specific purpose in mind. In sports forecasting, this might be to aid a decision whether to place a bet on a certain outcome, whether to select a certain player for a match or which play to make during a game. It is therefore crucial to determine whether the forecasts are fit for that specific purpose. For example, a set of forecasts may successfully incorporate important information and therefore score better than alternative forecasts that do not incorporate this information yet still not be fit for a specific purpose. For instance, using a set of forecasts to choose whether to place bets may result in a substantial loss which would have been avoided had the bets not been placed at all. \par \section{Perfect and imperfect model scenarios} \label{section:perfect_imperfect} Philosophical approaches to the comparison of scoring rules typically consider two distinct settings: the perfect model scenario, in which one of the candidate forecasting systems coincides with the probability distribution that generated the outcome (often referred to as the data generating model (DGM)) and the imperfect model scenario, in which each candidate forecasting system is imperfect (\cite{judd2001indistinguishable_perfect,judd2004indistinguishable}). In the perfect model scenario, there should be no ambiguity as to which set of forecasts is most desirable; a perfect forecasting system is always better than an imperfect one. In the latter, on the other hand, there is no objectively `best' forecasting system and the question of which one is the most desirable is subjective. \par In the perfect model setting, there are two directly linked questions of interest. Firstly, `does the scoring rule always favour the perfect forecasting system in expectation?' Scoring rules that do this are called proper and it is generally considered that a chosen scoring rule should have this property (\cite{brocker2007scoring}).
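Propriety can be illustrated numerically. The sketch below assumes the standard forms of the three scores for three ordered outcomes and checks that, for an arbitrary true distribution, no randomly drawn forecast achieves a lower expected score than the truth itself:

```python
import numpy as np

# Standard forms of the scores for ordered outcomes 0, 1, 2.
def ignorance(p, j):
    return -np.log2(p[j])

def brier(p, j):
    o = np.zeros(3); o[j] = 1.0
    return np.sum((p - o) ** 2)

def rps(p, j):
    o = np.zeros(3); o[j] = 1.0
    return np.sum((np.cumsum(p)[:-1] - np.cumsum(o)[:-1]) ** 2) / 2

def expected_score(score, forecast, truth):
    return sum(truth[j] * score(forecast, j) for j in range(3))

rng = np.random.default_rng(1)
truth = np.array([0.5, 0.3, 0.2])
for score in (ignorance, brier, rps):
    # expected score of the truth versus many randomly drawn rival forecasts
    best_rival = min(expected_score(score, rng.dirichlet([1, 1, 1]), truth)
                     for _ in range(10000))
    assert expected_score(score, truth, truth) <= best_rival + 1e-12
    print(score.__name__, 'is not beaten by any sampled forecast')
```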
As discussed in section~\ref{section:properties_scoring_rules}, each of the three scoring rules considered in this paper is proper and thus they cannot be distinguished in this way. A closely linked means of comparison for scoring rules assumes that each one is proper and assesses how many past forecasts and outcomes are required to have a given probability of selecting the perfect forecasting system. Requiring fewer forecasts and outcomes to do this means that the information is used more efficiently and therefore that there is a better chance of selecting the best forecasting system for future events. This observation forms the basis of the experiments presented in this paper. \par In practice, one can never expect any of the candidate forecasting systems to be perfect and therefore the perfect model case is generally accepted to be only a theoretical construct. It can nonetheless be argued that the performance of scoring rules in this context is important. If, in expectation, a scoring rule does not favour a perfect forecasting system over all others, one should be uneasy about the ability of that scoring rule to favour useful imperfect forecasting systems over misleading ones. Similarly, the efficiency with which a scoring rule uses the information in past forecasts and outcomes ought to tell us something about how well it exploits the information provided to it. For example, in the context of non-local scoring rules that are sensitive to distance, if these properties are truly useful, we might expect that the extra information should make it possible to distinguish perfect and imperfect forecasting systems more quickly. \par In practical situations, since none of the candidate forecasting systems are expected to be perfect, all exercises in forecasting system selection fall into the imperfect category. In this setting, unlike the perfect model case, proper scores will often favour different imperfect forecasting systems.
Distinguishing the scoring rules is then a question of identifying which type of imperfect forecasts should be preferred. Other than the analysis demonstrated in figures~\ref{figure:match5_imperfect_scores} and~\ref{figure:match_1_to_4_imperfect_scores}, this question is left as future work. \par \section{Experiment one - Repeated outcomes of the same match} \label{section:experiment_one} In this experiment, the five pairs of forecasts defined by Constantinou and Fenton and shown in table~\ref{table:constantinou_scenarios} are used to assess the probability that each of the three candidate scoring rules identifies a perfect forecasting system over an imperfect one for a given number of past forecasts and outcomes. For a given pair of forecasts, define the outcomes of a series of $n$ matches by drawing from forecast $\alpha$ or forecast $\beta$ with equal probability $0.5$. Define two forecasting systems as follows. The \emph{perfect forecasting system} always knows from which of the two distributions the outcome is drawn and therefore always issues the correct distribution as the forecast. The \emph{imperfect forecasting system}, on the other hand, always issues the alternative distribution as the forecast. A scoring rule is defined to `select' a forecasting system if it is assigned the lowest mean score over $n$ forecasts. The probability that each scoring rule will select the perfect forecasting system is calculated for different values of $n$ and the experiment is carried out using each of the forecast pairs in table~\ref{table:constantinou_scenarios}. \subsection{Results} Perhaps the most interesting of the five examples defined by Constantinou and Fenton is match five. Here, both forecast $\alpha$ and forecast $\beta$ place similar probability on a home win but forecast $\alpha$ places more probability on the draw.
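A minimal Monte Carlo version of this experiment is sketched below for match five. The scoring rules are written out in their standard forms so that the snippet is self-contained, and the number of trials is kept small for speed; both can be increased for smoother estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def ignorance(p, j):
    return -np.log2(p[j]) if p[j] > 0 else np.inf

def brier(p, j):
    o = np.zeros(3); o[j] = 1.0
    return np.sum((p - o) ** 2)

def rps(p, j):
    o = np.zeros(3); o[j] = 1.0
    return np.sum((np.cumsum(p)[:-1] - np.cumsum(o)[:-1]) ** 2) / 2

alpha = np.array([0.57, 0.33, 0.10])   # match five
beta = np.array([0.60, 0.20, 0.20])

def prob_select_perfect(score, n, trials=500):
    """Estimate the probability that `score` assigns the lower mean score
    to the perfect forecasting system over n matches."""
    wins = 0
    for _ in range(trials):
        # Each outcome is drawn from alpha or beta with probability 0.5;
        # the perfect system issues the generating distribution and the
        # imperfect system issues the alternative.
        from_alpha = rng.random(n) < 0.5
        perfect = np.where(from_alpha[:, None], alpha, beta)
        imperfect = np.where(from_alpha[:, None], beta, alpha)
        # invert the cumulative distribution of each row to draw an outcome
        outcomes = np.minimum(
            (rng.random((n, 1)) > np.cumsum(perfect, axis=1)).sum(axis=1), 2)
        mean_perf = np.mean([score(p, j) for p, j in zip(perfect, outcomes)])
        mean_imp = np.mean([score(p, j) for p, j in zip(imperfect, outcomes)])
        wins += mean_perf < mean_imp
    return wins / trials

for score in (ignorance, brier, rps):
    print(score.__name__, [prob_select_perfect(score, n) for n in (4, 16, 64)])
```

The estimates can be compared with the curves for match five in figure~\ref{figure:prob_perfect_fun_n_match_5}.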
The probability of each scoring rule identifying the perfect forecasting system in this case is shown as a function of $n$ in figure~\ref{figure:prob_perfect_fun_n_match_5}. Here, the ignorance score outperforms both the Brier score and the RPS for almost every tested value of $n$, whilst there is little difference in the performance of the Brier score and RPS. \par The results for matches one to four are shown in figure~\ref{figure:prob_perfect_fun_n_match_1_to_4}. For match one, the ignorance score clearly outperforms both the Brier score and the RPS for relatively large values of $n$, whilst the difference is minimal for lower values. The non-monotonic nature of the probabilities under the RPS and Brier scores may seem surprising at first but, in fact, can easily be explained. All three scoring rules punish the imperfect forecasting system when the outcome is a draw, since the forecast in this case predicts a home win with certainty. The overall probability of a draw for a given realisation is $0.05$ since the probability that the outcome is drawn from $\beta$ is $0.5$ and the probability of a draw given $\beta$ is $0.1$. For all values of $n$ less than $20$, once a draw has occurred, no combination of other outcomes can result in the imperfect forecasting system being assigned a better mean score than the perfect forecasting system. When $n$ is greater than $20$, on the other hand, the imperfect forecasting system can still `recover' from such a situation as long as there is only one such occurrence. The probability for the ignorance score, by contrast, is monotonic because the ignorance score assigns an infinitely bad score to a forecast that places zero probability on an outcome. Therefore, once such a case has been observed, the imperfect forecasting system cannot achieve a better score than the perfect forecasting system and, since the probability of observing such a case increases with $n$, the probability is monotonically increasing.
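The monotonic curve for the ignorance score in match one can be made concrete: the probability that at least one draw occurs within $n$ realisations, after which the imperfect forecasting system can never recover, is $1-0.95^{n}$.

```python
# Match one: a draw occurs with overall probability 0.05 per realisation
# (probability 0.5 that the outcome is drawn from beta, times beta's draw
# probability of 0.1). Once a draw is observed, the imperfect system's mean
# ignorance score is infinite, so the probability that the ignorance score
# selects the perfect system is at least the probability of at least one draw:
for n in (1, 5, 20, 50, 100):
    print(n, round(1 - 0.95 ** n, 3))
```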
\par For match two, whilst the probabilities of selecting the perfect forecasting system are similar for each score, the ignorance score slightly outperforms the other two scores for all values of $n$. In terms of the Brier score and the RPS, neither appears to be systematically better than the other. \par For match three, the ignorance score tends to outperform the other two scores for all $n$ greater than three, whilst, again, there is no obvious systematic difference between the performance of the RPS and Brier scores. For very small $n$, the ignorance score achieves a lower probability of selecting the perfect forecasting system. However, caution should be applied in such cases since scoring rules are designed with a relatively large number of forecast pairs in mind, that is, if the aim were to apply them to small $n$, they might be designed differently. \par Match four provides perhaps the most interesting results. In this case, $\alpha$ and $\beta$ differ only in the probabilities placed on a draw and an away win. There is therefore only a small difference between the perfect and imperfect forecasting systems. This is reflected in the fact that the probability of choosing the perfect forecasting system increases relatively slowly with $n$. Here, whilst the ignorance and Brier scores perform similarly well, there is a distinct advantage for both over the RPS. Whilst the RPS performs relatively well for low $n$, the value of increasing $n$ is far lower than for the ignorance and Brier scores, i.e. the RPS does not make good use of the extra information provided by increasing the number of forecasts and outcomes. \par Overall, from these results, there is no evidence that the RPS outperforms either the Brier or ignorance scores and, in fact, there is some evidence that the opposite is true. The RPS does not typically make good use of additional sample members in comparison to the other two scores. 
Looking more closely at the results, the stark difference in performance in match four, and to some extent match five, suggests that the biggest difference in performance might be in cases in which the difference between the two candidate forecasts is relatively small, and this observation motivates the design of experiment two. Experiment one considers only a case with repeated forecasts from one of two candidate distributions. In practice, there is usually interest in forecasts of different events rather than a large number of realisations of the same event. In experiment two, the performance of the three candidate scoring rules is compared in the context of forecasts of a wide range of different football matches generated from actual bookmakers' odds. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{prob_perfect_fun_n_match_5.png} \caption{Probability of each scoring rule selecting the perfect forecasting system as a function of $n$ for match 5.} \label{figure:prob_perfect_fun_n_match_5} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{prob_perfect_fun_n_match_1_to_4.png} \caption{Probability of each scoring rule selecting the perfect forecasting system as a function of $n$ for matches 1 to 4.} \label{figure:prob_perfect_fun_n_match_1_to_4} \end{figure} \section{Experiment two - Forecasts based on match odds} \label{section:experiment_two} In experiment two, the aim is to assess the effectiveness of each scoring rule in terms of distinguishing a `perfect' forecasting system from an `imperfect' one in a more realistic setting in which each match has a different probability distribution. To do this, artificial pairs of forecasts are created in which one represents the true distribution and the other is imperfect. The probability that each scoring rule selects the set of true distributions is then estimated.
\par To obtain sets of forecasts that are realistic in terms of actual football matches, bookmakers' odds on past matches are converted into probabilistic forecasts. These are taken from the repository of football data at \url{football-data.co.uk}, which supplies free-to-access data from a range of European leagues. Details of the data and how the odds are used to generate probabilistic forecasts are given in the appendix. Odds from a total of 39,343 matches are available and form the basis of a set of candidate probability distributions. \par We seek $n$ pairs of forecasts such that one represents the true distribution of the outcome, and therefore a perfect forecast, whilst the other represents an imperfect forecast. In order to test the effect of different levels of imperfection, we define a method of controlling it. To create a perfect forecast and corresponding outcome, a distribution is randomly drawn from the candidate set and defined to be the perfect forecast. A random draw from that distribution is then taken and defined to be the outcome. Next, we seek an alternative, imperfect forecast from the candidate set. Here, we apply a condition on the similarity of the candidate forecasts with the perfect forecast. Let the perfect forecast be defined by $\{p_{h},p_{d},p_{a}\}$ where $p_{h}$, $p_{d}$ and $p_{a}$ represent the forecast probability of a home win, draw and away win respectively. For each forecast $\{\tilde{p}_{h},\tilde{p}_{d},\tilde{p}_{a}\}$ in the candidate set, define the `distance' from the true probability distribution to be \begin{equation} \epsilon=\frac{1}{3}(|\tilde{p}_{h}-p_{h}|+|\tilde{p}_{d}-p_{d}|+|\tilde{p}_{a}-p_{a}|). \end{equation} We define some threshold value $\delta$, find all forecasts for which $\epsilon$ is less than $\delta$ (excluding the perfect forecast itself) and randomly draw the imperfect forecast from that set. This process is repeated $n$ times such that there are a total of $n$ pairs of forecasts.
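The selection of an imperfect partner forecast can be sketched as follows. The candidate set here is synthetic (Dirichlet draws standing in for the odds-implied forecasts used in the paper), but the distance $\epsilon$ and the thresholding by $\delta$ follow the definitions above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for the candidate forecast distributions that, in the
# paper, are derived from bookmakers' odds.
candidates = rng.dirichlet([4, 3, 3], size=5000)

def pick_imperfect(perfect_idx, delta):
    """Draw an imperfect partner whose distance epsilon (the mean absolute
    difference across the three outcome probabilities) is below delta."""
    p = candidates[perfect_idx]
    eps = np.mean(np.abs(candidates - p), axis=1)
    pool = np.flatnonzero(eps < delta)
    pool = pool[pool != perfect_idx]     # exclude the perfect forecast itself
    return candidates[rng.choice(pool)]

perfect = candidates[0]
imperfect = pick_imperfect(0, delta=0.1)
outcome = rng.choice(3, p=perfect)       # the outcome is drawn from the truth
print(perfect.round(3), imperfect.round(3), outcome)
```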
We define the `perfect forecasting system' to be the system that always issues the perfect forecast from the pair and the `imperfect forecasting system' to be the system that always issues the alternative, imperfect forecast. The experiment is repeated for multiple values of $n$ and different levels of the parameter $\delta$, which governs the imperfection. \par \subsection{Results} The effect of different levels of imperfection, governed by the selected value of $\delta$, is demonstrated in figure~\ref{figure:model_error_plot}. Each blue dot represents a perfect forecast, with the $x$ and $y$ axes representing the probability of a home win and a draw respectively. The grey line links each of these with the corresponding imperfect forecast. Increasing the value of $\delta$ tends to result in more distinct pairs of forecasts and therefore a higher level of imperfection. \par The proportion of forecast pairs in which the perfect forecasting system is selected over the imperfect forecasting system is shown for each score and value of $\delta$ as a function of $\log_{2}(n)$ in figure~\ref{figure:prop_correct}. The red, blue and green lines represent this proportion for the Brier score, ignorance score and RPS respectively for the stated value of $\delta$. For higher levels of imperfection, that is when $\delta$ is high, there does not appear to be much difference in the performance of the scoring rules. However, for the lowest level of imperfection, in which $\delta=0.01$, there appears to be a notable difference, with the RPS outperformed by both the ignorance and Brier scores. From this graph alone, however, it is not clear whether these differences are statistically significant. Given that each of the scores is calculated on the same sets of forecast pairs, the scoring rules can be compared pairwise with a total of three different comparisons (ignorance vs RPS, ignorance vs Brier and RPS vs Brier).
These differences are shown as a function of $\log_{2}(n)$ in figure~\ref{figure:Diff_wins_error_bars}, with each panel representing a different value of $\delta$. The error bars represent $95$ percent resampling intervals of the mean difference and hence, if the intervals do not contain zero, there is a significant difference in the performance of that pair of scoring rules. \par For the two lowest levels of imperfection ($\delta=0.01$ and $\delta=0.025$), there is a clear hierarchy in terms of the efficacy of each score in identifying the perfect forecasting system. The ignorance score tends to outperform the Brier score which tends to outperform the RPS. This difference is most stark for larger values of $n$. For the two larger levels of imperfection ($\delta=0.05$ and $\delta=0.1$), the difference is less clear and, in general, there is no significant difference between the Brier score and the RPS. The ignorance score, on the other hand, still tends to perform significantly better than both other scores. These results therefore provide clear support for the ignorance score and little support for the RPS. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{model_error_plot.png} \caption{Examples of forecast pairs for different levels of imperfection. The probability placed on a home win and a draw is represented by the $x$ and $y$ axes respectively. 
The grey lines join pairs of forecasts for the purpose of the experiment.} \label{figure:model_error_plot} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{prop_correct.png} \caption{The proportion of cases in which the perfect forecasting system is selected by each scoring rule as a function of $\log_{2}(n)$ for different values of $\delta$.} \label{figure:prop_correct} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{Diff_wins_error_bars.png} \caption{Pairwise differences in the proportion of cases in which the perfect forecasting system is selected between the ignorance and RPS (blue), ignorance and Brier score (red) and Brier score and RPS (yellow) as a function of $\log_{2}(n)$ with 95 percent resampling intervals of the mean.} \label{figure:Diff_wins_error_bars} \end{figure} \section{Discussion} \label{section:discussion} The aim of this paper is to reopen the debate surrounding the use of scoring rules for evaluating the performance of probabilistic forecasts of football matches. The reasoning presented by Constantinou and Fenton supporting the use of the RPS over other scoring rules has been shown to be oversimplified and the conclusion questionable. With this in mind, two experiments have been conducted with the aim of assessing the performance of each scoring rule in the context of identifying a perfect forecasting system using a finite number of past forecasts and outcomes. The ignorance score has been found to outperform both the RPS and the Brier score whilst, to a lesser extent, the Brier score has been shown to perform better than the RPS in this context. \par The results in this paper may seem surprising at first. After all, both the Brier score and the RPS are non-local and take into account the entire forecast distribution rather than just the probability at the outcome, whilst the RPS is sensitive to distance and therefore also takes into account the ordering of the potential outcomes.
It would be easy to conclude from this that, since both scores take more of the distribution into account, they are more informative. However, it should be stressed that this would only be the case if those extra aspects are genuinely useful in terms of assessing the performance of the forecasts. In practice, we only ever gain limited knowledge regarding the true distribution, even once the outcome is revealed. If, for example, the outcome is a home win, this tells us little or nothing about the probability of a draw or an away win. In fact, knowing the outcome reveals relatively little about its probability, other than that it is greater than zero. Given this, we argue that the probability placed on potential outcomes that did not happen is irrelevant. We know nothing about the true probabilities and therefore cannot reward probability placed on such outcomes. On the other hand, we \emph{know} that the actual outcome occurred. Moreover, the more likely the forecast deemed that event to be, the better prepared we could have been for its occurrence. We therefore argue that the probability placed on the outcome can be the only aspect of interest in evaluating probabilistic forecasts. Given that the ignorance score is the only proper and local score (\cite{brocker2007scoring}), it is therefore a natural choice. \par In summary, this paper has both argued for and provided empirical evidence in favour of the ignorance score over both the Brier score and RPS. It should be noted, however, that this paper has only touched upon the question of which types of imperfect forecasts are favoured by different scores. Useful future work would be to better understand the circumstances in which different scoring rules favour different types of forecasts. A preference for the types of forecasts favoured by the Brier score or RPS would then need to be weighed up against the unfavourable results demonstrated in this paper.
Regardless, we hope that the arguments and results in this paper are successful in reopening the debate surrounding the choice of scoring rule for evaluating forecasts of football matches. From the evidence presented in this paper, we strongly recommend the ignorance score for this purpose. \par \newpage
\section*{Abstract} From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregation of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcomes of future games with a higher accuracy than its static counterparts. \newpage \section{Introduction}\label{sec:introduction} Ranking of individual players or teams in sports, both professional and amateur, is a tool for entertaining fans and developing sports business. Depending on the type of sports, different ranking systems are in use \cite{Stefani1997JAS}. A challenge in sports ranking is that it is often impossible for all the pairs of players or teams (we refer only to players in the following; the discussion also applies to team sports) to play against each other. This is the case for most individual sports and some team sports in which a league contains many teams, such as American college football and soccer at an international level. Then, the set of opponents differs across players, such that ranking players by simply counting the number of wins and losses is inappropriate. In this situation, several ranking systems based on networks have been proposed. A player is regarded as a node in a network, and a directed link from the winning player to the losing player (or the converse) represents the result of a single game.
Once the directed network of players is generated, ranking the players is equivalent to defining a centrality measure for the network. A crux in constructing a network-based ranking system is to let a player that beats a strong player gain a high score. Examples of network-based ranking systems include those derived from the Laplacian matrix of the network \cite{Daniels1969Biom,Moon1970SiamRev,Borm2002AOR,Saavedra2010PhysicaA}, the PageRank \cite{Radicchi2011PlosOne}, a random walk that is different from those implied by the Laplacian or PageRank \cite{Callaghan2004NAMS}, a combination of node degree and global structure of networks \cite{Herings2005SCW}, and the so-called win-lose score \cite{Park2005JSM}. Previous network-based ranking systems do not account for fluctuations in rankings. In fact, a player, even a history-making player, referred to as $X$, is often weak at the beginning of the career. Player $X$ may also be weak past the most brilliant period of the career, suggesting retirement in the near future. For other players, it is more rewarding to beat $X$ when $X$ is at peak performance than when $X$ is a novice, near retirement, or in a slump. It may be preferable to take into account the dynamics of players' strengths for defining a ranking system. In the present study, we extend the win-lose score, a network-based ranking system proposed by Park and Newman \cite{Park2005JSM}, to the dynamical case. Then, we apply the proposed ranking system to professional men's tennis data. In broader contexts, the current study is related to at least two other lines of research. First, a dynamic network-based ranking implies that we exploit the temporal information in the data, i.e., the times when games are played.
Therefore, such a ranking system is equivalent to a dynamic centrality measure for temporal networks, in which sequences of pairwise interaction events with time stamps are the building units of the network \cite{HolmeSaramaki2012PhysRep}. Although some centrality measures specialized for temporal networks have been proposed \cite{Tang2010SNS,Pan2011PRE,Grindrod2011PRE}, they are not for ranking purposes. In addition, they are constant-valued centrality measures for dynamic (i.e., temporal) data of pairwise interaction. In this context, we propose a dynamically changing centrality measure for temporal networks. Second, statistical approaches to sports ranking have a much longer history than network approaches. Representative statistical ranking systems include the Elo system \cite{Elo1978book} and the Bradley-Terry model (see \cite{Bradley1976Biom} for a review). Variants of these models have been used to construct dynamic ranking systems. The empirical Bayes framework naturally fits this problem \cite{Glickman1993thesis,Fahrmeir1994JASA,Glickman1999JRSSSC,Knorrheld2000Stat,Coulom2008LNCS,Herbrich2007NIPS}. Because the Bayesian estimators in these models cannot be obtained analytically, or even numerically owing to the computational cost, techniques for approximating them, such as the Gaussian assumption on the posterior distribution \cite{Glickman1999JRSSSC,Herbrich2007NIPS}, approximate message passing \cite{Herbrich2007NIPS}, and Kalman filtering \cite{Fahrmeir1994JASA,Glickman1999JRSSSC,Knorrheld2000Stat}, have been employed.
In general, the parameter set of a statistical ranking system that accounts for dynamics of players' strengths is composed of dynamically changing strength parameters for all the players and perhaps other auxiliary parameters. Therefore, the number of parameters to be statistically estimated may be large relative to the amount of data. In other words, the instantaneous ranks of players have to be estimated before the players play sufficiently many games with others under fixed strengths. Even under a Bayesian framework, in which updating of the parameter values is naturally implemented, it may be difficult to reliably estimate dynamic ranks of players owing to the relative paucity of data. In addition, in sports played by individuals, such as tennis, new players frequently enter and old or underperforming players leave. This factor also increases the number of parameters of a ranking system. In contrast, ours and other network-based ranking systems, both static and dynamic ones, are not founded on statistical methods. Network-based ranking systems can also be simpler and more transparent than their statistical counterparts. \section*{Results}\label{sec:results} \subsection*{Dynamic win-lose score}\label{sub:dwl} We extend the win-lose score \cite{Park2005JSM} (see Methods) to account for the fact that the strengths of players fluctuate over time. In the following, we refer to the win-lose score as the original win-lose score and the extended one as the dynamic win-lose score. The original win-lose score overestimates the real strength of a player $i$ when $i$ defeated an opponent $j$ that is now strong and was weak at the time of the match between $i$ and $j$. Because $j$ defeats many strong opponents afterward, $i$ unjustly receives many indirect wins through $j$.
The same logic also applies to other network-based static ranking systems \cite{Daniels1969Biom,Moon1970SiamRev,Borm2002AOR,Saavedra2010PhysicaA,Radicchi2011PlosOne,Callaghan2004NAMS,Herings2005SCW}. To remedy this feature, we make two assumptions. First, we assume that the increment of the win score of player $i$ through $i$'s winning against player $j$ depends on $j$'s win score at that moment. It does not explicitly depend on $j$'s score in the past or future. The same holds true for the lose score. Second, we assume that each player's win and lose scores decay exponentially in time. This assumption is also employed in a Bayesian dynamic ranking system \cite{Dixon1997ApplStat}. Let $A_{t_n}$ be the win-lose matrix for the games that occur at time $t_n$ ($1\le n\le n_{\max}$). In the analysis of the tennis data carried out in the following, the resolution of $t_n$ is equal to one day. Therefore, players' scores change even within a single tournament. If player $j$ wins against player $i$ at time $t_n$, we set the $(i,j)$ element of the matrix $A_{t_n}$ to be 1. All the other elements of $A_{t_n}$ are set to 0. We define the dynamic win score at time $t_n$ in vector form, denoted by $\bm w_{t_n}$, as follows: \begin{align} W_{t_n} =& A_{t_n} + e^{-\beta (t_n - t_{n-1})} \sum _ {m_n \in \{ 0,1 \}} \alpha^{m_n} A_{t_{n-1}}A_{t_n}^{m_n} \notag\\ &+ e^{-\beta (t_n - t_{n-2})} \sum _ {m_{n-1},m_n \in \{ 0,1 \}} \alpha^{m_{n-1}+m_n} A_{t_{n-2}}A_{t_{n-1}}^{m_{n-1}}A_{t_n}^{m_n} \notag\\ &+ \cdots + e^{-\beta (t_n - t_1)} \sum _ {m_2, \ldots, m_n \in \{ 0,1 \}} \alpha^{\sum_{i=2}^n m_i} A_{t_1}A_{t_2}^{m_2}\cdots A_{t_n}^{m_n} \label{eq:def of W_{t_n}} \end{align} and \begin{equation} \bm w_{t_n} = W_{t_n}^{\top}\bm 1, \label{eq:def of w_{t_n}} \end{equation} where $\alpha$ is the weight of the indirect win, the same as in the original win-lose score (Methods), and $\beta\ge 0$ represents the decay rate of the score.
The first term on the right-hand side of Eq.~\eqref{eq:def of W_{t_n}} (i.e., $A_{t_n}$) represents the effect of the direct win at time $t_n$. The second term consists of two contributions. For $m_n=0$, the quantity inside the summation represents the direct win at time $t_{n-1}$, which results in weight $e^{-\beta (t_n-t_{n-1})}$. For $m_n=1$, the quantity represents the indirect win. The ($i$, $j$) element of $A_{t_{n-1}}A_{t_n}$ is positive if and only if player $j$ wins against a player $k$ at time $t_n$ and $k$ wins against $i$ at time $t_{n-1}$. Player $j$ gains score $e^{-\beta (t_n-t_{n-1})} \alpha$ from this situation. For both cases $m_n=0$ and $m_n=1$, the $j$th column of the second term accounts for the score that player $j$ gains owing to wins at time $t_{n-1}$. The third term covers four cases. For $m_{n-1}=m_n=0$, the quantity inside the summation represents the direct win at $t_{n-2}$, resulting in weight $e^{-\beta (t_n-t_{n-2})}$. For $m_{n-1}=0$ and $m_n=1$, the quantity represents the indirect win based on the games at $t_{n-2}$ and $t_n$, resulting in weight $e^{-\beta (t_n-t_{n-2})}\alpha$. For $m_{n-1}=1$ and $m_n=0$, the quantity represents the indirect win based on the games at $t_{n-2}$ and $t_{n-1}$, resulting in weight $e^{-\beta (t_n-t_{n-2})}\alpha$. For $m_{n-1}=m_n=1$, the quantity represents the indirect win based on the games at $t_{n-2}$, $t_{n-1}$, and $t_n$, resulting in weight $e^{-\beta (t_n-t_{n-2})}\alpha^2$. In each of the four cases, the $j$th column of the third term accounts for the score that player $j$ gains owing to wins at time $t_{n-2}$. To see the difference between the original and dynamic win scores, consider the exemplary data with $N=3$ players shown in \FIG\ref{fig:N=3 example}.
The original win scores calculated from the aggregation of the data up to time $t_n$ ($n=1, 2$, and 3), denoted by $w_{t_n}(i)$ for player $i$, are given by \begin{equation} \begin{cases} w_{t_1}(1)=1,\\ w_{t_1}(2)=0,\\ w_{t_1}(3)=0, \end{cases} \begin{cases} w_{t_2}(1)=1+\alpha,\\ w_{t_2}(2)=1,\\ w_{t_2}(3)=0, \end{cases} \begin{cases} w_{t_3}(1)=1+\alpha+\alpha^2+\cdots,\\ w_{t_3}(2)=1+\alpha+\alpha^2+\cdots,\\ w_{t_3}(3)=1+\alpha+\alpha^2+\cdots. \end{cases} \end{equation} The scores of the three players are the same at $t=t_3$ because the aggregated network is symmetric (i.e., a directed cycle) if we discard the information about the time. The dynamic win scores for the same data are given by \begin{equation} \begin{cases} w_{t_1}(1)=1,\\ w_{t_1}(2)=0,\\ w_{t_1}(3)=0, \end{cases} \begin{cases} w_{t_2}(1)=e^{-\beta(t_2-t_1)},\\ w_{t_2}(2)=1,\\ w_{t_2}(3)=0, \end{cases} \begin{cases} w_{t_3}(1)=e^{-\beta(t_3-t_1)},\\ w_{t_3}(2)=e^{-\beta(t_3-t_2)},\\ w_{t_3}(3)=1+\alpha e^{-\beta(t_3-t_1)}. \end{cases} \end{equation} The score of player 1 at $t_2$ (i.e., $w_{t_2}(1)$) differs from the original win score in two aspects. First, it is discounted by a factor of $e^{-\beta(t_2-t_1)}$. Second, the value of $w_{t_2}(1)$ indicates that player 1 does not gain an indirect win. This is because player 2's win against player 3 occurred after player 1 defeated player 2. In contrast, player 3 gains an indirect win at $t=t_3$ because player 3 defeats player 1, who defeated player 2 before (i.e., at $t=t_1$). It should be noted that the win scores of the three players are different at $t=t_3$ although the aggregated network is symmetric. Equation~\eqref{eq:def of W_{t_n}} leads to \begin{align} W_{t_n} =& A_{t_n} + e^{-\beta (t_n - t_{n-1})} \left[ A_{t_{n-1}} + e^{-\beta (t_{n-1} - t_{n-2})} \sum_{m_{n-1} \in \{ 0,1 \}} \alpha^{m_{n-1}} A_{t_{n-2}}A_{t_{n-1}}^{m_{n-1}}\right.\notag\\ &+\cdots\notag\\ & \left.
+ e^{-\beta (t_{n-1} - t_1)} \sum_{m_2, \ldots, m_{n-1} \in \{ 0,1 \}} \alpha^{\sum _{i=2}^{n-1} m_i} A_{t_1}A_{t_2}^{m_2}\cdots A_{t_{n-1}}^{m_{n-1}} \right] \sum_{m_n \in \{ 0,1 \}} \alpha^{m_n} A_{t_n}^{m_n}\notag\\ =& A_{t_n} + e^{-\beta (t_n - t_{n-1})}W_{t_{n-1}}(I + \alpha A_{t_n}). \label{eq:convenient W_{t_n}} \end{align} Therefore, by combining Eqs.~\eqref{eq:def of w_{t_n}} and \eqref{eq:convenient W_{t_n}}, we obtain the update equation for the dynamic win score as follows: \begin{equation} \bm w_{t_n}=\begin{cases} A_{t_1}^{\top}\bm 1 & (n=1),\\ A_{t_n}^{\top} \bm 1 + e^{-\beta (t_n - t_{n-1})} (I + \alpha A_{t_n}^{\top})\bm w_{t_{n-1}} & (n>1). \end{cases} \label{eq:wtn} \end{equation} The dynamic lose score at time $t_n$ is denoted in vector form by $\bm \ell_{t_n}$. We obtain the update equation for $\bm \ell_{t_n}$ by replacing $A_{t_n}$ in \EQ\eqref{eq:wtn} by $A_{t_n}^{\top}$ as follows: \begin{equation} \bm \ell_{t_n}=\begin{cases} A_{t_1}\bm 1 & (n=1), \\ A_{t_n} \bm 1 + e^{-\beta (t_n - t_{n-1})} (I + \alpha A_{t_n})\bm \ell_{t_{n-1}} & (n>1).\end{cases} \label{eq:ltn} \end{equation} Finally, the dynamic win-lose score at time $t_n$, denoted by $\bm s_{t_n}$, is given by \begin{equation} \bm s_{t_n} = \bm w_{t_n} -\bm \ell_{t_n}. \end{equation} It should be noted that we do not treat retired players in special ways. Players' scores exponentially decay after retirement. \subsection*{Predictability}\label{sub:predictability} We apply the dynamic win-lose score to results of professional men's tennis. The nature of the data is described in Methods. In this section, we predict the outcomes of future games based on different ranking systems. The frequency of violations, whereby a lower ranked player wins against a higher ranked player in a game, quantifies the degree of predictability \cite{Martinich2002Inter,Bennaim2006JQAS}. 
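Before proceeding, we note that the online update equations for $\bm w_{t_n}$ and $\bm \ell_{t_n}$ derived above are straightforward to implement. The following sketch in Python with NumPy is ours and is only illustrative; it reproduces the three-player example (players relabelled 0--2, with games on days 0, 10, and 30).

```python
import numpy as np

def update_scores(w, l, games, dt, alpha=0.13, beta=1.0 / 365.0):
    """One step of the online update of the dynamic win and lose scores.

    w, l  : score vectors after the previous game day
    games : list of (winner, loser) index pairs played on this day
    dt    : days elapsed since the previous game day
    """
    n = len(w)
    A = np.zeros((n, n))                  # A[i, j] = 1 iff j beats i now
    for winner, loser in games:
        A[loser, winner] = 1.0
    decay = np.exp(-beta * dt)
    ones = np.ones(n)
    w_new = A.T @ ones + decay * (w + alpha * (A.T @ w))
    l_new = A @ ones + decay * (l + alpha * (A @ l))
    return w_new, l_new

# Three-player example: 0 beats 1 on day 0, 1 beats 2 on day 10,
# 2 beats 0 on day 30.
w = l = np.zeros(3)
for games, dt in [([(0, 1)], 0), ([(1, 2)], 10), ([(2, 0)], 20)]:
    w, l = update_scores(w, l, games, dt)
# w[2] is now 1 + alpha * exp(-beta * 30), as derived in the text.
```

The win-lose score on each game day is then $\bm s_{t_n} = \bm w_{t_n} - \bm \ell_{t_n}$; only the two current vectors need to be stored between game days, which is what makes the update online.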
In other literature, the retrodictive version of the frequency of violations is also used for assessing the performance of ranking systems \cite{Martinich2002Inter,Lundh2006JQAS,Park2010JSM,Coleman2005Interface}. We compare the predictability of the dynamic win-lose score, the original win-lose score \cite{Park2005JSM}, and the prestige score (Methods). The prestige score, proposed by Radicchi and applied to professional men's tennis data \cite{Radicchi2011PlosOne}, is a static ranking system and is a version of the PageRank originally proposed for ranking webpages \cite{Brin98}. We also implement a dynamic version of the prestige score (Methods) and compare its prediction performance with that of the dynamic win-lose score. We define the frequency of violations as follows. We calculate the score of each player at $t_n$ ($1\le n\le n_{\max}-1$) on the basis of the results up to $t_n$. For the original win-lose score and prestige score, we aggregate the directed links from $t=t_1$ to $t=t_n$ to construct a static network and calculate the players' scores. If the result of a game at $t_{n+1}$ is inconsistent with the calculated ranking, we count this as a violation. If the two players involved in the game at $t_{n+1}$ have exactly the same score, we count this as a tie irrespective of the result of the game. We define the prediction accuracy at the $N_{\rm gp}$th game as the fraction of correct predictions when the results of the games from $t=t_2$ through the $N_{\rm gp}$th game are predicted. The prediction accuracy is given by $\left(N_{\rm gp}^{\prime}-e-v\right)/\left(N_{\rm gp}^{\prime}-e\right)$, where $N_{\rm gp}^{\prime} (<N_{\rm gp})$ is the number of predicted games, $v$ is the number of violations, and $e$ is the number of ties. For the prestige score and its dynamic variant, we exclude the games in which either player plays for the first time because the score is not defined for players that have never played.
In this case, we increment $e$ by one. The original and dynamic win-lose scores can take negative values. Equations~\eqref{eq:def of w_{t_n}} and \eqref{eq:w original win score} guarantee that the initial score is equal to zero for all the players for the dynamic and original win-lose scores, respectively. Furthermore, any player has a zero win-lose score when the player plays a game for the first time. Even though we do not treat such a game as a tie unless both players involved in the game have zero scores, treating it as a tie barely affects the following results. The prediction accuracies for the dynamic win-lose score, original win-lose score, prestige score, and dynamic prestige score are shown in \FIGS\ref{fig:performance}(a), \ref{fig:performance}(b), \ref{fig:performance}(c), and \ref{fig:performance}(d), respectively, for various parameter values. Figure~\ref{fig:performance}(a) indicates that the prediction accuracy for the dynamic win-lose score is the largest for $\alpha=0.13$ except when the number of games (i.e., $N_{\rm gp}$) is small. The accuracy is insensitive to $\alpha$ when $0.08\le \alpha\le 0.2$. In this range of $\alpha$, we confirmed by additional numerical simulations that the results for $\beta=1/365$ and those for $\beta=0$ are indistinguishable. Therefore, we conclude that the prediction performance is robust to some extent with respect to $\alpha$ and $\beta$. We also confirmed that the accuracy monotonically increases between $\alpha\approx 0.03$ and $\alpha\approx 0.13$. However, for an unknown reason, the accuracy with $\alpha\approx 0.03$ is smaller than that with $\alpha=0$ (results not shown). Figure~\ref{fig:performance}(b) indicates that the prediction accuracy for the original win-lose score is larger for $\alpha=0$ than for $\alpha=0.004835$. The latter $\alpha$ value is very close to the upper limit calculated from the largest eigenvalue of $A$ (see subsection ``Parameter values'' in Methods).
We also found that the prediction accuracy monotonically decreases with $\alpha$. Nevertheless, except for small $N_{\rm gp}$, the accuracy of the original win-lose score with $\alpha=0$ is lower than that for the dynamic win-lose score with $\alpha=0$ and $0.08\le \alpha\le 0.2$ (\FIG\ref{fig:performance}(a)). Figure~\ref{fig:performance}(c) indicates that the prediction by the prestige score is better for a smaller value of $q$ (see Methods for the meaning of $q$). We confirmed that this is the case for other values of $q$ and that the results with $q\le 0.05$ differ little from those with $q=0.05$. Except for small $N_{\rm gp}$, the prediction accuracy with $q=0.05$ is lower than that for the dynamic win-lose score with $0.08\le \alpha\le 0.2$ (\FIG\ref{fig:performance}(a)). Figure~\ref{fig:performance}(d) indicates that the prediction by the dynamic variant of the prestige score is more accurate than that by the dynamic win-lose score, in particular for small $N_{\rm gp}$. Similar to the case of the original prestige score, the prediction accuracy decreases with $q$. The findings obtained from \FIG\ref{fig:performance} are summarized as follows. When $\alpha$ is between $\approx 0.08$ and $\approx 0.2$ and $\beta$ is between 0 and $1/365$, the dynamic win-lose score outperforms the original win-lose score and the prestige score in the prediction accuracy. For example, at the end of the data, the accuracy is equal to 0.659, 0.661, 0.661, and 0.659 for the dynamic win-lose score with ($\alpha$, $\beta$) $=$ (0.08, $1/365$), (0.1, $1/365$), (0.13, $1/365$), and (0.2, $1/365$), respectively, while it is equal to 0.623 for the original win-lose score with $\alpha=0$ and 0.631 for the prestige score with $q=0.05$. However, the accuracy for the dynamic variant of the prestige score with $q=0.05$ (i.e., 0.668) is slightly larger than the largest value obtained by the dynamic win-lose score.
We also compare the prediction accuracy for the dynamic win-lose score with that for the official Association of Tennis Professionals (ATP) rankings. Because the calculation of the ATP rankings involves relatively minor games that do not belong to the ATP World Tour tournaments used for \FIG\ref{fig:performance}, we use a different data set for the present comparison (see ``Data'' in Methods). The prediction accuracy at the end of the data is equal to 0.637 for the ATP rankings and 0.588, 0.629, 0.646, 0.650, and 0.649 for the dynamic win-lose score with ($\alpha$, $\beta$) $=$ (0.08, $1/365$), (0.1, $1/365$), (0.13, $1/365$), (0.17, $1/365$), and (0.2, $1/365$), respectively. The prediction accuracy for the dynamic win-lose score is larger than that for the ATP rankings in a wide range of $\alpha$ (i.e., $0.11\le \alpha\le 0.39$). \subsection*{Robustness against parameter variation}\label{sub:sensitivity} Figure~\ref{fig:performance}(a) indicates that the prediction accuracy for the dynamic win-lose score is robust against some variations in the $\alpha$ and $\beta$ values. In this section, we examine the robustness of the dynamic win-lose score more extensively by computing the rank correlation between the scores derived from different $\alpha$ and $\beta$ values. Kendall's tau is a standard method for quantifying rank correlation \cite{Kendall1938Biom}. In our data, the full ranking of all the players, to which Kendall's tau applies, contains players that appear in only a few games. In fact, most players are such players \cite{Radicchi2011PlosOne}, and their ranks are inherently unstable. In addition, it is usually the list of top ranked players that is of practical interest. Therefore, we use a generalized Kendall's tau for comparing top $k$ lists of the full ranking \cite{Fagin2003SIAMDM}. We denote the sets of the top $k$ players, i.e., the $k$ players with the largest scores, in the two full rankings by $\bm R_1$ and $\bm R_2$.
In general, $\bm R_1$ and $\bm R_2$ can be different. For an arbitrarily chosen pair of players $r_1$, $r_2$ $\in \bm R_1 \cup \bm R_2$, $r_1\neq r_2$, we set $\overline{K}_{r_1, r_2}(\bm R_1,\bm R_2)=1$ if (1) $r_1$ and $r_2$ appear in both top $k$ lists $\bm R_1$ and $\bm R_2$, and $r_1$ and $r_2$ are in the opposite order in the two top $k$ lists, (2) $r_1$ has a higher rank than $r_2$ in one of the top $k$ lists, and $r_2$, but not $r_1$, is contained in the other top $k$ list, or (3) $r_1$ exists only in one of the two top $k$ lists, and $r_2$ exists only in the other top $k$ list. Otherwise, we set $\overline{K}_{r_1, r_2}(\bm R_1,\bm R_2)=0$. $\overline{K}_{r_1, r_2}(\bm R_1,\bm R_2)$ is a penalty imposed on the inconsistency between the two top $k$ lists. We use the so-called optimistic variant of the Kendall distance $K_{\tau}^{(0)}(\bm R_1,\bm R_2)$ defined as follows \cite{Fagin2003SIAMDM}: \begin{equation} K_{\tau}^{(0)}(\bm R_1,\bm R_2) = \sum_{r_1,r_2 \in \bm R_1 \cup \bm R_2} \overline{K}_{r_1, r_2}(\bm R_1,\bm R_2). \end{equation} We normalize the distance between the two rankings as follows \cite{Mccown2007JCDL}: \begin{equation} K = 1 - \frac{K_{\tau}^{(0)}(\bm R_1,\bm R_2)}{k^2}. \end{equation} A larger value of $K$ indicates a higher correlation between the two top $k$ lists. It should be noted that $0\le K\le 1$. In particular, when there is no overlap between the two top $k$ lists, we obtain $K=0$. For the dynamic win-lose scores at $t_{n_{\max}}$, i.e., at the end of the entire period, we calculate $K$ with $k=300$ for different pairs of $\alpha$ and $\beta$ values. The results for $\beta=1/365$ and different values of $\alpha$ are shown in \FIG\ref{fig:robustness alpha}. The top $k$ lists are similar (i.e., $K \ge 0.85$) for any $\alpha$ larger than $\approx 0.06$. This finding is consistent with the fact that the prediction accuracy is high and robust when $\alpha$ falls between $\approx 0.08$ and $\approx 0.2$ (\FIG\ref{fig:performance}(a)).
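The penalty scheme and the normalised $K$ defined above can be implemented directly. The following Python sketch is ours (the function name and the toy lists are illustrative); it returns both $K_{\tau}^{(0)}$ and $K$, with position 0 denoting the best rank.

```python
from itertools import combinations

def top_k_similarity(list1, list2):
    """Optimistic Kendall distance K_tau^(0) between two top-k lists
    (Fagin et al., 2003) and the normalisation K = 1 - K_tau^(0)/k^2."""
    k = len(list1)
    pos1 = {r: i for i, r in enumerate(list1)}
    pos2 = {r: i for i, r in enumerate(list2)}
    penalty = 0
    for r1, r2 in combinations(sorted(set(list1) | set(list2)), 2):
        in1 = (r1 in pos1, r2 in pos1)
        in2 = (r1 in pos2, r2 in pos2)
        if in1 == (True, True) and in2 == (True, True):
            # case 1: both present in both lists, in opposite order
            if (pos1[r1] - pos1[r2]) * (pos2[r1] - pos2[r2]) < 0:
                penalty += 1
        elif in1 == (True, True) and in2 == (False, True):
            # case 2: r1 ranked above r2 in list1 but absent from list2
            if pos1[r1] < pos1[r2]:
                penalty += 1
        elif in1 == (True, True) and in2 == (True, False):
            if pos1[r2] < pos1[r1]:
                penalty += 1
        elif in2 == (True, True) and in1 == (False, True):
            if pos2[r1] < pos2[r2]:
                penalty += 1
        elif in2 == (True, True) and in1 == (True, False):
            if pos2[r2] < pos2[r1]:
                penalty += 1
        elif in1 == (True, False) and in2 == (False, True):
            # case 3: r1 only in list1 and r2 only in list2 (or vice versa)
            penalty += 1
        elif in1 == (False, True) and in2 == (True, False):
            penalty += 1
        # a pair appearing together in only one list, in consistent order,
        # incurs no penalty in the optimistic variant
    return penalty, 1.0 - penalty / k ** 2
```

For two disjoint top $k$ lists every cross pair falls under case (3), giving $K_{\tau}^{(0)}=k^2$ and hence $K=0$, consistent with the statement above.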
For fixed values of $\alpha$, the $K$ values between the ranking with $\beta=1/365$ and that with various values of $\beta$ are shown in \FIG\ref{fig:robustness alpha beta}. $K$ is almost unity at least in the range $0\le\beta\le 2/365$. Therefore, removing the assumption of the exponential decay of score in time (i.e., $\beta=0$) changes the top 300 list only slightly. This finding is consistent with the result that the prediction accuracy is almost the same between $\beta=0$ and $\beta=1/365$ if $0.1\le\alpha\le 0.2$ (see the previous subsection). Nevertheless, this observation does not imply that we can ignore the temporal aspect of the data. Keeping the order of the games contributes to the prediction performance, as suggested by the comparison between the prediction results for the dynamic (\FIG\ref{fig:performance}(a)) and original (\FIG\ref{fig:performance}(b)) win-lose scores. \subsection*{Dynamics of scores for individual players}\label{sub:dynamics individual} In contrast to the original win-lose score and prestige score, the dynamic win-lose score can track dynamics of the strength of each player. It should be noted that the summation of the scores over the individuals, i.e., $\sum_{i=1}^N s_{t_n}(i)$, depends on time. In particular, it grows almost exponentially for the parameter values with which the prediction accuracy is high (i.e., $\alpha$ larger than $\approx 0.08$), as shown in \FIG\ref{fig:sum scores}. $\sum_{i=1}^N s_{t_n}(i)$ increases with the number of games, or equivalently, with time because more recent players benefit more from indirect wins than older players do. The increase in $\sum_{i=1}^N s_{t_n}(i)$ is not due to the number of players or games observed per year; in fact, the latter numbers do not increase in time \cite{Radicchi2011PlosOne}. Therefore, for clarity, we normalize the win-lose score of each player by dividing it by the instantaneous $\sum_{i=1}^N s_{t_n}(i)$ value.
The time courses of the normalized win-lose scores for four renowned players are shown in \FIG\ref{fig:famous players}(a). We set $\alpha=0.13$ and $\beta=1/365$, for which the prediction is approximately the most accurate. The ATP rankings of the four players during the same period are shown in \FIG\ref{fig:famous players}(b) for comparison. The time courses of the dynamic win-lose score and those of the ATP rankings are similar. In particular, the times at which the strength of one player (e.g., Federer) begins to exceed that of another player (e.g., Agassi) are similar between \FIGS\ref{fig:famous players}(a) and \ref{fig:famous players}(b). Figure~\ref{fig:famous players} suggests that the dynamic win-lose score aptly captures the rises and falls of these players. \section*{Discussion} We extended the win-lose score for static sports networks \cite{Park2005JSM} to the case of dynamic networks. By assuming that the score decays exponentially in time, we could derive closed online update equations for the win and lose scores. The proposed dynamic win-lose score achieves a higher prediction accuracy than the original win-lose score and the prestige score. It is straightforward to extend the dynamic win-lose score to incorporate factors such as the importance of each tournament or game via modifications of the game matrix $A_{t_n}$. We also confirmed the robustness of the ranking against variation in the two parameter values in the model. Finally, the dynamic win-lose score is capable of tracking dynamics of players' strengths. It seems that network-based ranking systems are easier to understand and implement, and more scalable than those based on statistical methods. The dynamic win-lose score shares these desirable features with static network-based ranking systems. The applicability of the idea behind the dynamic win-lose score is not limited to the case of the win-lose score. In fact, we implemented a dynamic variant of the prestige score.
It even yielded a larger prediction accuracy than the dynamic win-lose score did. This result implies that the idea of network-based dynamic ranking systems may be a powerful approach to assessing strengths of sports players and teams, which fluctuate over time. The dynamic win-lose score is better than our version of the dynamic prestige score in that only the former allows for a set of closed online update equations. Establishing similar update equations for other network-based ranking systems such as the prestige score and the Laplacian centrality (see Introduction) is warranted for future work. Prospective results obtained through this line of research may also be useful in systematically deriving dynamic centrality measures for temporal networks in general. \section*{Methods} \subsection*{Park \& Newman's win-lose score}\label{sub:Park} The win-lose score by Park and Newman \cite{Park2005JSM} is a network-based static ranking system defined as follows. We assume $N$ players and denote by $A_{ij}$ ($1\le i, j\le N$) the number of times that player $j$ wins against player $i$ during the entire period. We let $\alpha$ ($0\le \alpha<1$) be a constant representing the weight of indirect wins. For example, if player $i$ wins against $j$ and $j$ wins against $k$, $i$ gains score 1 from the direct win against $j$ and score $\alpha$ from the indirect win against $k$. Therefore, $i$'s win score is equal to $1+\alpha$. If $k$ wins against yet another player $\ell$, $i$'s win score is altered to $1+\alpha+\alpha^2$.
The win scores of the players are given by \begin{align} W =& A + \alpha A^2 + \alpha^2 A^3 + \cdots\notag\\ =& A(I+\alpha A + \alpha^2 A^2 + \alpha^3 A^3 + \cdots)\notag\\ =& A(I - \alpha A)^{-1},\\ \bm w =& W^{\top} \bm 1 = (I-\alpha A^{\top})^{-1}A^{\top} \bm 1, \label{eq:w original win score} \end{align} where $W$ is the $N\times N$ matrix whose $(i,j)$ element represents the score that player $j$ obtains via direct and indirect wins against player $i$, $\bm w$ is the $N$ dimensional column vector whose $i$th element represents the win score of player $i$, and $\bm 1$ is the $N$ dimensional column vector defined by \begin{equation} \bm 1=(1\; 1\; \cdots \; 1)^{\top}. \end{equation} We similarly obtain the lose scores of the $N$ players in vector form by replacing $A$ with $A^{\top}$ as follows: \begin{equation} \bm \ell = (I-\alpha A)^{-1}A \bm 1. \end{equation} The total win-lose score is given in vector form by \begin{equation} \bm s = \bm w -\bm \ell. \end{equation} \subsection*{Prestige score} The prestige score of player $i$, denoted by $P_i$, is defined by \begin{equation} P_i = (1-q)\sum_{j=1}^N P_j\frac{\tilde{w}_{ji}}{s_j^{\rm out}} + \frac{q}{N} + \frac{1-q}{N}\sum_{j=1}^N P_j\delta (s_j^{\rm out})\quad (1\le i\le N), \label{eq:prestige score} \end{equation} where $q$ is a constant, $\tilde{w}_{ji}$ is the number of times player $i$ defeats player $j$ during the entire period (it should be noted that $\tilde{w}_{ji}$ has nothing to do with the win scores denoted by $\bm w$ in \EQS\eqref{eq:def of w_{t_n}} and \eqref{eq:w original win score}), $s_j^{\rm out}\equiv\sum_{i^{\prime}=1}^N \tilde{w}_{ji^{\prime}}$ is equal to the number of losses for player $j$, $\delta(s_j^{\rm out})=1$ if $s_j^{\rm out}=0$, and $\delta(s_j^{\rm out})=0$ if $s_j^{\rm out}\ge 1$. The normalization is given by $\sum_{i=1}^N P_i = 1$. We set $q=0.15$, as in \cite{Radicchi2011PlosOne}, and also $q=0.05$ and $q=0.30$. 
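The fixed point of the prestige equation can be computed by power iteration, exactly as for the PageRank. The following sketch is ours (the toy win matrix and the stopping rule are illustrative); players that have never lost are handled in the same way as dangling nodes in the PageRank, which is what the last term of the defining equation does.

```python
import numpy as np

def prestige(wins, q=0.15, tol=1e-12, max_iter=10000):
    """Prestige scores via power iteration.

    wins[j, i] = number of times player i defeated player j
                 (the matrix w-tilde in the text).
    """
    n = wins.shape[0]
    s_out = wins.sum(axis=1)          # number of losses of each player
    dangling = s_out == 0             # players that never lost
    # row-normalise; rows of never-losing players are all zero anyway
    T = wins / np.where(dangling, 1.0, s_out)[:, None]
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_new = (1 - q) * (T.T @ p) + q / n + (1 - q) / n * p[dangling].sum()
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p_new

# Toy example: player 0 beats player 1 twice; player 1 beats player 2 once.
wins = np.zeros((3, 3))
wins[1, 0] = 2.0
wins[2, 1] = 1.0
P = prestige(wins)
```

The normalization $\sum_i P_i = 1$ is preserved at every iteration because the mass of the never-losing players is redistributed uniformly, so no explicit renormalization step is needed.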
To define a dynamic variant of the prestige score, we let $\tilde{w}_{ji}$ used in \EQ\eqref{eq:prestige score} depend on time. We define $\tilde{w}_{ji}$ at time $t$ by \begin{equation} \tilde{w}_{ji}\equiv \sum_n A_{t_n}(j,i)e^{-\beta(t-t_n)}, \label{eq:tilde w(t)} \end{equation} where $A_{t_n}(j,i)$ is the $(j,i)$ element of the win-lose matrix $A_{t_n}$, and the summation over $n$ is taken over the games that occur before time $t$. Substituting \EQ\eqref{eq:tilde w(t)} in \EQ\eqref{eq:prestige score} yields the dynamic prestige score $P_i$ ($1\le i\le N$) at time $t$. We set $\beta=1/365$, which is the same value as that used for the dynamic win-lose score. \subsection*{Data} We collected the data from the website of ATP \cite{ATPWorldTour}. Except when we compared the prediction accuracy for the dynamic win-lose score with that for the ATP rankings, we used singles games in ATP World Tour tournaments recorded on this website. The data set contains 137842 singles games from December 1972 to May 2010 and involves 5039 players that participated in at least one game. Because the source of our data set is the same as that of Radicchi's data set \cite{Radicchi2011PlosOne} and the periods of the data are similar, the number of games contained in our data set and that in Radicchi's are close to each other. In the comparison between the dynamic win-lose score and the ATP rankings, we used all types of singles games recorded on the website of ATP. They include the games belonging to ATP Challenger Tours and ITF Futures tournaments in addition to ATP World Tour tournament games. We used this data set because it corresponds to the games on which the calculation of the ATP rankings is based. The ATP rankings are not available on a regular basis in the early years. Therefore, we used the data from July 23, 1984 to August 15, 2011. The data set contains 330796 games and involves 13077 players that participated in at least one game.
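The exponentially discounted win counts in \EQ\eqref{eq:tilde w(t)} can be sketched in Python as follows; the game list and player indices below are hypothetical and only illustrate the accumulation rule.

```python
import numpy as np

def decayed_win_matrix(games, t, n_players, beta=1 / 365):
    """Accumulate tilde_w[j, i] = sum_n A_{t_n}(j, i) * exp(-beta (t - t_n)).

    `games` is a hypothetical list of (t_n, winner, loser) triples with
    t_n <= t; each game adds its discounted weight to entry [loser, winner],
    since tilde_w[j, i] counts wins of player i over player j.
    """
    W = np.zeros((n_players, n_players))
    for t_n, winner, loser in games:
        W[loser, winner] += np.exp(-beta * (t - t_n))
    return W

# Two games between the same pair, exactly one year (365 days) apart.
games = [(0.0, 0, 1), (365.0, 1, 0)]
W = decayed_win_matrix(games, t=365.0, n_players=2)
# The year-old win is discounted to e^{-1} of the weight of a fresh one.
```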
\subsection*{Parameter values for the dynamic win-lose score}\label{sub:parameter choice} A guiding principle for setting the parameter values of a ranking system is to select the values that maximize the performance of prediction \cite{Dixon1997ApplStat,Knorrheld2000Stat}. Instead, we set $\alpha$ and $\beta$ as follows. In the original win-lose score, it is recommended that $\alpha$ be set to a value smaller than, but close to, the inverse of the largest eigenvalue of $A$ \cite{Park2005JSM}. If $\alpha$ exceeds this upper limit, the original win-lose score diverges. For our data, the upper limit according to this criterion is equal to $1/206.80=0.0048355$. However, the dynamic win-lose score converges irrespective of the values of $\alpha$ and $\beta$ for the following reason. For expository purposes, let us assign different nodes to the same player at different times $t_n$ ($1\le n\le n_{\max}$). Then, \EQ\eqref{eq:def of W_{t_n}} implies that any link in the network, which represents a game at time $t_n$, is directed from the winner at $t_n$ to the loser at $t_{n}$ or earlier times. Because there is no time-reversed link (i.e., from $t_n$ to $t_{n^{\prime}}$, where $t_n<t_{n^{\prime}}$) and any pair of players play at most once at any $t_n$, the network is acyclic. The upper limit of $\alpha$ is infinite when the network is acyclic \cite{Park2005JSM}. On the basis of this observation, we examine the behavior of the dynamic win-lose score for various values of $\alpha$. In the official ATP ranking, the score of a player is calculated from the player's performance in the last 52 weeks $\approx$ one year \cite{ATPWorldTour}. The results of the games in this time window contribute to the current ranking of the player with the same weight if the other conditions are equal. The dynamic win-lose score uses the results of all the games in the past, and the contribution of each game decays exponentially in time.
By equating the contribution of a single game in the two ranking systems, we assume $1\times 365 = \int_{0}^{\infty } e^{-\beta t}dt$, which leads to $\beta = 1/365$. In Results, we also investigated the robustness of the ranking results against variations in the $\alpha$ and $\beta$ values.
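The convergence argument above can also be checked numerically: the time-expanded network is acyclic, so its adjacency matrix is nilpotent and $(I-\alpha A)^{-1}$ exists for every $\alpha$. Below is a minimal Python sketch with a hypothetical strictly upper-triangular matrix standing in for an acyclic network.

```python
import numpy as np

# A strictly upper-triangular matrix is the adjacency matrix of an
# acyclic network: it is nilpotent, all eigenvalues are zero, and the
# geometric series I + alpha*A + alpha^2*A^2 + ... terminates.
A = np.triu(np.ones((4, 4)), k=1)
lam_max = np.abs(np.linalg.eigvals(A)).max()  # numerically ~0

# Even a huge alpha poses no problem: the win scores stay finite.
alpha = 100.0
w = np.linalg.solve(np.eye(4) - alpha * A.T, A.T @ np.ones(4))
```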
\section{Introduction}\label{sec1} Combining ranking results from different sources is a common problem. Well-known rank aggregation problems range from the election problem back in the 18th century \citep{borda1781memoire} to search engine results aggregation in modern times \citep{dwork2001rank}. In this paper, we tackle the problem of rank aggregation with relevant covariates of the ranked entities, as explained in detail in the following two applications. \begin{example}[NFL Quarterback Ranking]\label{ex2} During the National Football League (NFL) season, experts from different websites, such as \url{espn.com} and \url{nfl.com}, provide weekly ranking lists of players by position. For example, Table \ref{table1} shows the ranking lists of the NFL starting quarterbacks from 13 experts in week 12 of the 2014 season. The ranking lists can help football fans better predict the performance of the quarterbacks in the coming week and even place bets in online fantasy sports games. After collecting ranking lists from the experts, the websites mostly aggregate them using arithmetic means. Besides rankings, the summary statistics of the NFL players are also available online. For example, Table \ref{table2} shows the statistics of the ranked quarterbacks prior to week 12 of the 2014 season. Not surprisingly, in addition to watching football games, the experts may also use these summary statistics when ranking quarterbacks.
\vspace{3mm} \begin{table}[htb] \scriptsize \centering \caption{Ranking lists of NFL starting quarterbacks from 13 different experts, as of week 12 in the 2014 season.} \label{table1} \begin{tabular}{crrrrrrrrrrrrr} \toprule Player & $\tau_1$ & $\tau_2$ & $\tau_3$ & $\tau_4$ & $\tau_5$ & $\tau_6$ & $\tau_7$ & $\tau_8$ & $\tau_9$ & $\tau_{10}$ & $\tau_{11}$ & $\tau_{12}$ & $\tau_{13}$ \\ \midrule Andrew Luck & 1 & 1 & 1 & 3 & 3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ Aaron Rodgers & 2 & 3 & 4 & 2 & 1 & 2 & 3 & 3 & 2 & 2 & 3 & 4 & 3 \\ Peyton Manning & 3 & 2 & 5 & 4 & 2 & 3 & 2 & 2 & 3 & 4 & 4 & 2 & 2 \\ Tom Brady & 4 & 7 & 3 & 5 & 4 & 5 & 4 & 6 & 4 & 3 & 6 & 8 & 4 \\ Tony Romo & 9 & 5 & 6 & 1 & 5 & 4 & 5 & 4 & 5 & 5 & 7 & 6 & 6 \\ Drew Brees & 10 & 4 & 2 & 8 & 9 & 7 & 7 & 5 & 7 & 6 & 2 & 3 & 5 \\ Ben Roethlisberger & 6 & 8 & 7 & 7 & 7 & 6 & 6 & 10 & 6 & 7 & 5 & 7 & 7 \\ Ryan Tannehill & 5 & 6 & 13 & 6 & 11 & 8 & 8 & 7 & 9 & 9 & 8 & 5 & 8 \\ Matthew Stafford & 8 & 9 & 11 & 13 & 8 & 9 & 9 & 8 & 8 & 8 & 9 & 9 & 9 \\ Mark Sanchez & 22 & 10 & 9 & 9 & 16 & 10 & 10 & 9 & 10 & 10 & 12 & 12 & 12 \\ Russell Wilson & 12 & 13 & 17 & 10 & 10 & 12 & 11 & 12 & 11 & 12 & 11 & 14 & 15 \\ Philip Rivers & 7 & 14 & 15 & 20 & 6 & 17 & 17 & 11 & 16 & 15 & 14 & 10 & 10 \\ Cam Newton & 18 & 12 & 8 & 17 & 19 & 11 & 14 & 14 & 14 & 16 & 21 & 13 & 14 \\ Eli Manning & 17 & -- & 18 & 19 & 14 & 19 & 12 & 13 & 12 & 13 & 16 & 23 & 11 \\ Matt Ryan & 21 & 17 & 19 & 15 & 20 & 15 & 15 & 15 & 13 & 11 & 20 & 21 & 13 \\ Andy Dalton & 15 & -- & 14 & -- & 17 & 14 & 16 & 20 & 15 & 14 & 19 & 22 & 16 \\ Alex Smith & 16 & 11 & 21 & 16 & 18 & 18 & 18 & 16 & 20 & 21 & 13 & 11 & 17 \\ Colin Kaepernick & 11 & 16 & 16 & 11 & 12 & 16 & 21 & 17 & 19 & 18 & 22 & 16 & 21 \\ Joe Flacco & 24 & 15 & 12 & 14 & 24 & 13 & 13 & 18 & 18 & 20 & 15 & 15 & 19 \\ Jay Cutler & 13 & 18 & 10 & 12 & 13 & 21 & 19 & 19 & 17 & 17 & 23 & 20 & 18 \\ Josh McCown & 14 & 19 & 22 & 18 & 15 & 22 & 22 & 21 & 21 & 19 & 18 & 17 & 23 \\ Drew Stanton & 20
& 20 & -- & 22 & 22 & 20 & 20 & 23 & 22 & 22 & 10 & 19 & 20 \\ Teddy Bridgewater & 23 & 21 & 20 & 21 & 23 & 23 & 23 & 22 & 23 & 24 & 17 & 18 & 22 \\ Brian Hoyer & 19 & -- & -- & -- & 21 & 24 & 24 & 24 & 24 & 23 & 24 & 24 & 24 \\ \bottomrule \end{tabular} \begin{tablenotes} \item Source: \url{http://fantasy.nfl.com/research/rankings}, \url{http://www.fantasypros.com/nfl/rankings/qb.php}. \end{tablenotes} \end{table} \begin{table}[htb] \scriptsize \caption{Relevant statistics of the ranked quarterbacks, prior to week 12 of the 2014 NFL season. From left to right, the statistics stand for: number of games played; pass completion percentage; passing attempts per game; average passing yards per attempt; passing yards per game; touchdown percentage; interception percentage; running attempts per game; running yards per attempt; running yards per game; running first down percentage.} \centering \label{table2} \begin{adjustbox}{center} \begin{tabular}{lrrrrrrrrrrr} \toprule Player & G & Pct & Att & Avg & Yds & TD & Int & RAtt & RAvg & RYds & R1st \\ \midrule Andrew Luck & 11 & 63.40 & 42.20 & 7.80 & 331.00 & 6.30 & 2.20 & 4.20 & 4.20 & 17.50 & 30.40 \\ Aaron Rodgers & 11 & 66.70 & 31.10 & 8.60 & 268.80 & 8.80 & 0.90 & 2.50 & 6.40 & 16.20 & 50.00 \\ Peyton Manning & 11 & 68.10 & 40.20 & 8.00 & 323.50 & 7.70 & 2.00 & 1.50 & -0.50 & -0.70 & 0.00 \\ Tom Brady & 11 & 65.00 & 37.90 & 7.20 & 272.50 & 6.20 & 1.40 & 1.70 & 0.70 & 1.30 & 21.10 \\ Tony Romo & 10 & 68.80 & 29.50 & 8.50 & 251.90 & 7.50 & 2.00 & 1.50 & 2.50 & 3.70 & 20.00 \\ Drew Brees & 11 & 70.30 & 42.00 & 7.60 & 317.40 & 4.80 & 2.40 & 1.70 & 2.80 & 4.90 & 26.30 \\ Ben Roethlisberger & 11 & 68.30 & 37.50 & 7.90 & 297.30 & 5.80 & 1.50 & 1.90 & 1.10 & 2.10 & 19.00 \\ Ryan Tannehill & 11 & 66.10 & 35.40 & 6.60 & 234.70 & 5.10 & 2.10 & 3.70 & 6.70 & 25.10 & 36.60 \\ Matthew Stafford & 11 & 58.80 & 37.70 & 7.10 & 267.50 & 3.10 & 2.40 & 2.80 & 2.00 & 5.60 & 16.10 \\ Mark Sanchez & 4 & 62.30 & 36.50 & 8.10 & 296.80 & 4.80 & 4.10 & 3.50 & 0.60 & 2.00 & 7.10 \\ Russell Wilson &
11 & 63.60 & 28.50 & 7.10 & 202.70 & 4.50 & 1.60 & 7.60 & 7.70 & 58.50 & 45.20 \\ Philip Rivers & 11 & 68.30 & 33.00 & 7.80 & 257.70 & 6.10 & 2.50 & 2.50 & 2.50 & 6.40 & 25.00 \\ Cam Newton & 10 & 58.60 & 33.30 & 7.20 & 239.20 & 3.60 & 3.00 & 6.40 & 4.60 & 29.30 & 37.50 \\ Eli Manning & 11 & 62.30 & 36.90 & 7.00 & 257.50 & 5.20 & 3.00 & 0.80 & 3.80 & 3.10 & 33.30 \\ Matt Ryan & 11 & 65.10 & 38.50 & 7.20 & 278.70 & 4.50 & 2.10 & 1.60 & 4.30 & 7.10 & 33.30 \\ Andy Dalton & 11 & 62.40 & 30.70 & 7.10 & 219.40 & 3.60 & 3.00 & 3.80 & 2.50 & 9.50 & 33.30 \\ Alex Smith & 11 & 65.10 & 29.70 & 6.80 & 201.00 & 4.00 & 1.20 & 3.20 & 5.50 & 17.40 & 25.70 \\ Colin Kaepernick & 11 & 61.70 & 31.50 & 7.50 & 237.70 & 4.30 & 1.70 & 6.80 & 4.50 & 30.50 & 22.70 \\ Joe Flacco & 11 & 63.20 & 34.10 & 7.40 & 251.30 & 4.80 & 2.10 & 2.00 & 1.70 & 3.40 & 45.50 \\ Jay Cutler & 11 & 66.80 & 36.40 & 7.10 & 256.80 & 5.50 & 3.00 & 2.90 & 3.90 & 11.30 & 28.10 \\ Josh McCown & 6 & 60.40 & 30.30 & 7.40 & 225.00 & 3.80 & 4.40 & 2.70 & 5.80 & 15.30 & 50.00 \\ Drew Stanton & 6 & 53.60 & 25.20 & 7.10 & 178.20 & 3.30 & 2.00 & 3.00 & 2.00 & 6.00 & 22.20 \\ Teddy Bridgewater & 8 & 60.30 & 32.80 & 6.40 & 211.10 & 2.30 & 2.70 & 3.50 & 4.60 & 16.10 & 32.10 \\ Brian Hoyer & 11 & 55.90 & 33.20 & 7.80 & 260.40 & 3.00 & 2.20 & 1.80 & 0.90 & 1.50 & 20.00 \\ \bottomrule \end{tabular} \end{adjustbox} \begin{tablenotes} \item Source: \url{http://www.nfl.com/stats}. \end{tablenotes} \end{table} \end{example} In Example \ref{ex2}, the primary goal is to obtain an aggregated ranking list of all players, which is hoped to be more accurate than the list produced by the simple method using arithmetic means. In particular, we want to incorporate the covariates (i.e., the summary statistics here) of the players to improve the accuracy of rank aggregation. Moreover, according to Table \ref{table1}, most of the experts give very similar ranking lists, with a few exceptions such as experts 4 and 5.
Therefore, it is also important to discern the varying qualities of the rankers, in order to diminish the effect of low-quality rankers and make the aggregation results more robust. \begin{example}[Orthodontic treatment evaluation ranking]\label{ex1} In 2009, 69 orthodontic experts were invited by the School of Stomatology at Peking University to evaluate the post-treatment conditions of 108 medical cases \citep{song2015validation}. In order to make the evaluation easier for the experts, the cases were divided into 9 groups, each containing 12 cases. For each group of cases, each expert evaluated the conditions of all cases and provided a within-group ranking list, mostly based on their personal experiences and judgments of the patients' teeth records. In the meantime, using each case's plaster model, cephalometric radiograph and photograph, the School of Stomatology located key points, measured the distances and angles that are considered relevant features for diagnosis, and summarized these features in terms of the peer assessment rating (PAR) index \citep{richmond1992development}. Table \ref{table3} shows 15 of the 69 ranking lists for two groups, and Table \ref{table_cov} shows the corresponding features for these two groups.
\begin{table}[htb] \scriptsize \centering \caption{Ranking lists for Groups A and H, two of the 9 groups in Example 2} \label{table3} \begin{adjustbox}{center} \begin{tabular}{rrrrrrrrrrrrrrrrrrrrr} \toprule & $\tau_{1}$ & $\tau_{2}$ & $\tau_{3}$ & $\tau_{4}$ & $\tau_{5}$ & $\tau_{6}$ & $\tau_{7}$ & $\tau_{8}$ & $\tau_{9}$ & $\tau_{10}$ & $\tau_{11}$ & $\tau_{12}$ & $\tau_{13}$ & $\tau_{14}$ & $\tau_{15}$ \\ \midrule A1 & 1 & 3 & 5 & 2 & 4 & 1 & 1 & 2 & 5 & 5 & 10 & 8 & 2 & 4 & 2 \\ A2 & 11 & 5 & 10 & 9 & 9 & 12 & 9 & 7 & 11 & 12 & 4 & 7 & 5 & 6 & 5 \\ A3 & 6 & 10 & 8 & 11 & 11 & 8 & 11 & 8 & 12 & 9 & 6 & 11 & 12 & 11 & 11 \\ A4 & 3 & 2 & 4 & 3 & 1 & 4 & 2 & 10 & 1 & 6 & 8 & 2 & 1 & 1 & 1 \\ A5 & 9 & 4 & 7 & 5 & 6 & 6 & 6 & 5 & 3 & 3 & 2 & 5 & 11 & 7 & 9 \\ A6 & 10 & 9 & 3 & 6 & 5 & 11 & 5 & 9 & 6 & 7 & 3 & 1 & 6 & 8 & 7 \\ A7 & 8 & 8 & 11 & 7 & 12 & 9 & 12 & 11 & 8 & 10 & 7 & 9 & 8 & 12 & 12\\ A8 & 4 & 1 & 1 & 4 & 3 & 2 & 4 & 4 & 2 & 1 & 1 & 6 & 3 & 2 & 6 \\ A9 & 2 & 12 & 9 & 8 & 8 & 5 & 7 & 3 & 9 & 8 & 11 & 12 & 7 & 5 & 8 \\ A10 & 7 & 11 & 6 & 10 & 10 & 7 & 8 & 6 & 7 & 11 & 9 & 3 & 10 & 9 & 4 \\ A11 & 5 & 7 & 2 & 1 & 2 & 3 & 10 & 1 & 10 & 2 & 5 & 4 & 9 & 3 & 3 \\ A12 & 12 & 6 & 12 & 12 & 7 & 10 & 3 & 12 & 4 & 4 & 12 & 10 & 4 & 10 & 10 \\ \midrule H1 & 4 & 8 & 5 & 8 & 4 & 11 & 4 & 3 & 8 & 9 & 4 & 4 & 3 & 11 & 8 \\ H2 & 1 & 2 & 4 & 5 & 2 & 7 & 2 & 2 & 1 & 2 & 1 & 1 & 2 & 2 & 1 \\ H3 & 2 & 3 & 2 & 2 & 1 & 4 & 1 & 1 & 2 & 1 & 6 & 5 & 5 & 3 & 3 \\ H4 & 3 & 4 & 3 & 4 & 3 & 3 & 3 & 4 & 3 & 4 & 7 & 7 & 1 & 1 & 2 \\ H5 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 10 & 12 & 12 & 9 & 12 \\ H6 & 6 & 5 & 1 & 1 & 6 & 2 & 7 & 5 & 7 & 3 & 5 & 3 & 7 & 4 & 6 \\ H7 & 8 & 11 & 6 & 9 & 10 & 9 & 11 & 11 & 10 & 11 & 11 & 11 & 6 & 7 & 10 \\ H8 & 11 & 6 & 8 & 3 & 7 & 1 & 6 & 6 & 6 & 6 & 8 & 8 & 4 & 8 & 9 \\ H9 & 5 & 7 & 10 & 11 & 5 & 10 & 10 & 10 & 11 & 8 & 2 & 6 & 10 & 12 & 4 \\ H10 & 10 & 9 & 9 & 7 & 9 & 5 & 5 & 7 & 5 & 7 & 12 & 9 & 11 & 5 & 7 \\ H11 & 9 & 10 & 7 & 10 & 11 & 8 & 
9 & 8 & 9 & 10 & 9 & 10 & 8 & 6 & 11 \\ H12 & 7 & 1 & 11 & 6 & 8 & 6 & 8 & 9 & 4 & 5 & 3 & 2 & 9 & 10 & 5 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \end{example} \begin{table}[ht] \scriptsize \centering \caption{Below are 11 covariates measured based on the peer assessment rating (PAR) index. From left to right, the statistics stand for: Upper right segment; Upper anterior segment; Upper left segment; Lower right segment; Lower anterior segment; Lower left segment; Right buccal occlusion; Left buccal occlusion; Overjet; Overbite; Centerline.} \label{table_cov} \begin{tabular}{rrrrrrrrrrrr} \toprule & d1m & d2m & d3m & d4m & d5m & d6m & rbom & lbom & ojmm & obm & clm \\ \midrule A1 & 1.56 & 0.22 & 1.44 & 1.00 & 0.00 & 1.22 & 0.00 & 0.33 & 0.00 & 0.00 & 0.00 \\ A2 & 1.33 & 0.22 & 1.00 & 0.33 & 0.00 & 0.33 & 0.00 & 0.33 & 0.00 & 0.33 & 0.00 \\ A3 & 1.22 & 0.33 & 1.00 & 0.67 & 0.11 & 1.44 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ A4 & 0.00 & 0.00 & 0.11 & 1.78 & 0.22 & 1.89 & 0.33 & 0.67 & 0.00 & 0.00 & 0.00 \\ A5 & 1.33 & 0.22 & 0.78 & 1.22 & 0.11 & 1.67 & 0.33 & 0.00 & 0.78 & 0.00 & 0.00 \\ A6 & 1.11 & 0.56 & 1.78 & 0.89 & 0.22 & 0.89 & 0.67 & 1.00 & 0.78 & 0.00 & 0.00 \\ A7 & 1.22 & 0.67 & 1.89 & 0.89 & 0.11 & 1.00 & 0.67 & 0.33 & 0.67 & 0.00 & 0.00 \\ A8 & 1.44 & 0.22 & 1.56 & 0.89 & 0.22 & 0.56 & 2.00 & 2.00 & 0.00 & 0.00 & 0.00 \\ A9 & 1.11 & 0.33 & 1.22 & 0.44 & 0.00 & 1.00 & 2.33 & 0.67 & 0.00 & 0.00 & 0.00 \\ A10 & 0.67 & 0.11 & 0.89 & 0.11 & 0.00 & 0.00 & 0.67 & 1.00 & 0.00 & 0.67 & 0.00 \\ A11 & 0.67 & 0.89 & 1.00 & 0.67 & 1.33 & 2.44 & 1.33 & 1.00 & 0.11 & 0.00 & 0.67 \\ A12 & 0.67 & 0.11 & 0.22 & 1.00 & 0.00 & 0.56 & 0.33 & 1.33 & 0.00 & 0.33 & 0.00 \\ \midrule H1 & 0.67 & 0.22 & 0.78 & 1.67 & 0.56 & 0.78 & 0.67 & 0.00 & 0.78 & 0.00 & 0.00 \\ H2 & 1.56 & 0.56 & 0.22 & 0.44 & 0.00 & 0.11 & 0.00 & 0.67 & 0.00 & 0.00 & 0.00 \\ H3 & 0.56 & 0.22 & 1.00 & 0.33 & 0.11 & 0.78 & 0.00 & 0.67 & 0.00 & 0.33 & 0.00 \\ H4 & 0.56 & 0.22 & 0.67 & 0.44 & 0.11 & 0.44
& 0.67 & 1.00 & 0.00 & 0.00 & 0.00 \\ H5 & 1.22 & 0.33 & 0.67 & 0.44 & 0.00 & 0.33 & 1.00 & 0.67 & 0.33 & 0.00 & 0.00 \\ H6 & 0.56 & 0.11 & 1.33 & 1.22 & 0.00 & 1.33 & 1.00 & 0.67 & 0.22 & 0.00 & 0.00 \\ H7 & 0.56 & 0.33 & 0.78 & 0.78 & 0.00 & 1.22 & 2.00 & 1.33 & 0.44 & 0.33 & 0.00 \\ H8 & 0.78 & 0.22 & 1.56 & 0.89 & 0.00 & 0.33 & 1.67 & 2.00 & 0.00 & 0.00 & 0.00 \\ H9 & 0.44 & 0.22 & 1.00 & 0.00 & 0.11 & 0.11 & 1.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ H10 & 1.11 & 0.33 & 1.78 & 0.22 & 0.22 & 0.33 & 1.33 & 1.67 & 0.00 & 0.00 & 0.00 \\ H11 & 0.67 & 0.67 & 1.00 & 0.67 & 0.56 & 0.56 & 1.00 & 1.00 & 0.11 & 0.00 & 0.00 \\ H12 & 1.22 & 0.78 & 1.00 & 0.33 & 0.33 & 0.67 & 1.00 & 0.67 & 0.56 & 0.00 & 0.00 \\ \bottomrule \end{tabular} \end{table} The rank aggregation problem emerges naturally in Example \ref{ex1} because the average perception of experienced orthodontists is considered the cornerstone of systems for the evaluation of orthodontic treatment outcome, as described in \cite{song2014reliability}. However, Example \ref{ex1} contains many ``local'' rankings among non-overlapping subgroups, and thus differs from Example \ref{ex2} and most prevailing rank aggregation applications. Having been demonstrated to be associated with ranking outcomes by \cite{song2015validation}, the covariate information not only helps in improving ranking accuracy, but is also crucial for generating full ranking lists. Moreover, the individual reliability and overall consistency of these orthodontists (or rankers) are critical concerns prior to rank aggregation \citep{liu2012consistency, song2014reliability}. There could be heterogeneous qualities or opinions among rankers, as evidenced by the ranking discrepancies in Table \ref{table3}. For example, the ranking position of case A9 from the listed 15 experts ranges from 2 to 12. Therefore, Example \ref{ex1} presents a rank aggregation problem with covariate information and heterogeneous rankers.
\subsection*{Related work and main contributions} There are mainly two types of methods dealing with rank data. The first type tries to find an aggregated ranking list that is consistent with most input rankings according to some criteria. For example, \citet{borda1781memoire} aggregated rankings based on the arithmetic mean of ranking positions, commonly known as the Borda count; and \citet{van2000variants} studied several variants of the Borda count. \citet{dwork2001rank} proposed to aggregate rankings based on the stationary distributions of certain Markov chains, which are constructed heuristically based on the ranking lists; and \citet{deconde2006combining} and \citet{lin2010rank} extended this approach to fit more complicated situations. \citet{lin2009integration} obtained the aggregated ranking list by minimizing its total distance to all the input ranking lists, an idea that can be traced back to the Mallows model \citep{mallows1957non}. The second type of methods builds statistical models to characterize the data-generating process of the rank data and uses the estimated models to generate the aggregated ranking list \citep{critchlow1991probability, marden1996analyzing, alvo2014statistical}. The most popular model for rank data is the Thurstone order statistics model, which includes the Thurstone--Mosteller--Daniels model \citep{thurstone1927law, mosteller1951remarks, daniels1950rank} and the Plackett--Luce model \citep{luce1959, plackett1975} as special cases. Together with variants and extensions \citep{benter1994, bockenholt1992thurstonian}, the Thurstone model family has been successfully applied to a wide range of problems \citep[e.g.,][]{murphy2006, murphy2008jasa, johnson2002bayesian, graydavies2016}. Briefly, the Thurstone model assumes that there is an underlying evaluation score for each entity, whose noisy version determines the rankings.
In the Thurstone--Mosteller--Daniels and Plackett--Luce models, the noise terms are assumed to follow the normal and Gumbel distributions, respectively. The Plackett--Luce model can be equivalently viewed as a multistage model that models the ranking process sequentially, where each entity has a unique parameter representing its probability of being selected at each stage up to a normalizing constant. Challenges arise in the analysis of ranking data when (a) rankers are of different qualities or belong to different opinion groups; (b) covariate information is available for the rankers, the ranked entities, or both; and (c) there are incomplete ranking lists. \citet{murphy2006,murphy2008jasa,murphy2008aoas, murphy2010} developed the finite mixture of Plackett--Luce models and Benter models \citep{benter1994} to accommodate heterogeneous subgroups of rankers, where both the mixing proportion and group-specific parameters can depend on the covariates of rankers. \citet{bockenholt1993} introduced the finite mixture of Thurstone models to allow for heterogeneous subgroups of rankers; \citet{yu2000bayesian} attempted to incorporate the covariate information for both ranked entities and rankers; \citet{johnson2002bayesian} examined qualities of several known subgroups of rankers; and \citet{lee2014} represented qualities of rankers by letting them have different noise levels. See \citet{bockenholt2006} for a review of developments in Thurstonian-based analysis, as well as some further extensions. Recently, \citet{deng2014bayesian} proposed a Bayesian approach that can distinguish high-quality rankers from low-quality ones, and \citet{bhowmik2017} proposed a method that utilizes covariates of ranked entities to assess the qualities of all rankers.
We here employ the Thurstone--Mosteller--Daniels model and its extensions because they are flexible enough to deal with incomplete ranking lists and can provide a unified framework to accommodate covariate information of ranked entities, rankers with different qualities, and heterogeneous subgroups of rankers. In particular, we use the Dirichlet process prior for the mixture subgroups of rankers, which can automatically determine the total number of mixture components. Moreover, in contrast to focusing on inferring parameters of Thurstone models as in most previous studies, we focus mainly on the rank aggregation and the uncertainty evaluation of the resulting aggregated ranking lists. Computationally, the estimation for the Thurstone model is generally difficult due to the complicated form of the likelihood function, especially when there are a large number of ranked entities. To overcome the difficulty, \citet{maydeu1999thurstonian} transformed the estimation problem into one involving mean and covariance structures with dichotomous indicators, \citet{yao1999bayesian} proposed a Bayesian approach based on the Gibbs sampler, and \citet{johnson2013bayesian} advocated the JAGS software to implement the Bayesian posterior sampling. We here develop a parameter-expanded Gibbs sampler \citep{liu1999parameter}, which facilitates group moves of the latent variables, to further improve the computational efficiency. As demonstrated in the numerical studies, the improvement of the new sampler over the standard one is significant. The rest of this article is organized as follows. Section \ref{sec2} elaborates on our Bayesian models for rank data with covariates. Section \ref{sec3} provides details of our Markov chain Monte Carlo algorithms. Section \ref{agg_mcmc} introduces multiple analysis tools using MCMC samples. Section \ref{sec4} displays simulation results to validate our approaches. Section \ref{realData} describes the two real-data applications using the proposed methods.
Section \ref{sec6} concludes with a short discussion. \section{Bayesian models for rank data with covariates}\label{sec2} \subsection{Notation and definitions} Let $\mathcal{U}$ be the set of all entities under consideration, and let $n=|\mathcal{U}|$ be the total number of entities in $\mathcal{U}$. We use $i_1 \succ i_2$ to denote that entity $i_1$ is ranked higher than entity $i_2$. A \textit{ranking list} $\tau$ is a set of non-contradictory pairwise relations in $\mathcal{U}$, which gives rise to ordered preference lists for entities in $\mathcal{U}$. We call $\tau$ a \textit{full ranking list} if $\tau$ identifies all pairwise relations in $\mathcal{U}$, and otherwise a \textit{partial ranking list}. When $\tau$ is a full ranking list, we can equivalently write $\tau$ as $\tau=[i_1\succ i_2\succ \ldots \succ i_{n}]$ for notational simplicity, and further define $\tau(i)$ as the position of an entity $i\in \mathcal{U}$. Specifically, a highly ranked entity has a small-numbered position in the list, i.e., $\tau(i_1)<\tau(i_2)$ if and only if $i_1 \succ i_2$. Furthermore, for any vector $\bm{z} = (z_1, \ldots, z_n)' \in \mathbb{R}^n$, we use $\text{rank}(\bm{z}) = [i_1\succ i_2\succ \ldots \succ i_{n}]$ to denote the full ranking list of the $z_i$'s in decreasing order, i.e., $z_{i_1} \geq \ldots \geq z_{i_n}$. As introduced in Examples \ref{ex2} and \ref{ex1}, we also observe some covariates of the ranked entities. Let $\bm{x}_i \in \mathbb{R}^p$ be the $p$-dimensional covariate vector of ranked entity $i$, and let $\bm{X}=(\bm{x}_1,\bm{x}_2,\ldots,\bm{x}_n)'\in \mathbb{R}^{n\times p}$ be the covariate matrix for all $n$ entities. For clarification, in the following discussion we use index $i$ for ranked entities and index $j$ for rankers, with $n$ and $m$ denoting the total numbers of ranked entities and rankers, respectively.
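In code, the operator $\text{rank}(\bm{z})$ and the position map $\tau(i)$ defined above amount to sorting by decreasing score; the short Python sketch below (with illustrative scores) makes the notation concrete.

```python
import numpy as np

def rank_list(z):
    """rank(z): indices of the entities ordered by decreasing score."""
    return [int(i) for i in np.argsort(-np.asarray(z), kind="stable")]

def tau(order):
    """tau(i): the 1-based position of each entity in the full ranking list."""
    return {i: pos for pos, i in enumerate(order, start=1)}

order = rank_list([0.2, 1.5, -0.3])  # entity 1 has the highest score
```

Here $\tau(i_1)<\tau(i_2)$ if and only if $i_1\succ i_2$, matching the convention above.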
\subsection{Full ranking lists without covariates} Suppose we have $m$ full ranking lists $\tau_1,\tau_2,\ldots,\tau_m$ for entities in $\mathcal{U}=\{1,2,\ldots,n\}$. \citet{thurstone1927law} postulated that the ranking outcome $\tau_j$ is determined by $n$ latent variables $Z_{ij}$'s, for $1\leq i \leq n$, where $Z_{ij}$ represents ranker $j$'s evaluation score of the $i$th entity, and $Z_{i_1j}>Z_{i_2j}$ if and only if $i_1 \succ i_2$ for ranker $j$. Define $\bm{Z}_j=(Z_{1j},\ldots,Z_{nj})'$ as ranker $j$'s evaluations of all entities, and $\text{rank}(\bm{Z}_j)$ as the associated full ranking list based on $\bm{Z}_j$. Similar to Thurstone's assumption, we assume that $\bm{Z}_j$ follows a multivariate Gaussian distribution with mean $\bm{\mu}=(\mu_1,\ldots,\mu_n)'$ representing the underlying true scores of the ranked entities: \begin{equation} \begin{aligned} Z_{ij} & =\mu_{i}+\epsilon_{ij}, \ \quad \epsilon_{ij}\sim N(0,\sigma^2) & (1\leq i\leq n; 1\leq j\leq m)\\ \tau_j & =\text{rank}(\bm{Z}_j), & (1\leq j\leq m) \end{aligned} \label{eq1} \end{equation} where the $\epsilon_{ij}$'s are mutually independent. Because we only observe the ranking lists $\tau_j$'s, multiplying $(\bm{\mu},\sigma)$ by a constant or adding a constant to all the $\mu_i$'s does not influence the likelihood function. Therefore, to ensure identifiability of the parameters, we fix $\sigma^2=1$ and impose the constraint that $\bm{\mu}$ lies in the space $\bm{\Theta}=\{\bm{\mu}\in \mathbb{R}^n:\bm{1}'\bm{\mu}=0\}$. Model (\ref{eq1}) implies that the $\tau_j$'s are independent and identically distributed (i.i.d.) conditional on $\bm{\mu}$, so the likelihood function is $$ p(\tau_1,\cdots,\tau_m \mid \bm{\mu})=\prod_{j=1}^m p(\tau_j \mid \bm{\mu}) = \prod_{j=1}^m \int_{\mathbb{R}^n} p(\tau_j \mid \bm{Z}_j,\bm{\mu})p(\bm{Z}_j \mid \bm{\mu})\text{d}\bm{Z}_j, $$ where $p(\tau_j \mid \bm{Z}_j,\bm{\mu})=1_{\{\text{rank}(\bm{Z}_j)=\tau_j\}} $.
Specifically, for any possible full ranking list $\tau$ on $\mathcal{U}=\{1,2,\ldots,n\}$, the probability mass function is $$ P(\tau_j=\tau \mid \bm{\mu})=\int_{\mathbb{R}^n} 1\{\text{rank}(\bm{Z}_j)=\tau\}\cdot (2\pi)^{-n/2}e^{-\frac{1}{2}\|\bm{Z}_j-\bm{\mu}\|^2} \text{d}\bm{Z}_j. $$ Our goal is to generate an aggregated rank based on an estimate of $\bm{\mu}$ in model (\ref{eq1}). One approach is to use the maximum likelihood estimate (MLE) $\hat{\bm{\mu}}_m$ defined as \[\hat{\bm{\mu}}_m = \arg\max_{\bm{\mu}} \frac{1}{m} \sum_{j=1}^{m}\log p(\tau_j\mid\bm{\mu}).\] We have the following consistency result for $\hat{\bm{\mu}}_m$, with the proof deferred to the Supplementary Material. \begin{thm}\label{thm1} Let the true parameter value of model (\ref{eq1}) be $\bm{\mu}_0\in \bm{\Theta}$, and suppose we observe $\tau_1,\ldots, \tau_m$ generated from model (\ref{eq1}). Let $\hat{\bm{\mu}}_m$ be the MLE of $\bm{\mu}_0$. Then, for any $\epsilon>0$ and any compact set $K\subset \bm{\Theta}$, we have $$P\left(\{\|\hat{\bm{\mu}}_m-\bm{\mu}_0\|_2 \geq\epsilon\} \cap \{\hat{\bm{\mu}}_m\in K\}\right)\rightarrow 0,$$ as $m\rightarrow\infty$ with $n$ fixed. \end{thm} Alternatively, we can employ a Bayesian procedure, which makes it more convenient to incorporate prior information, quantify estimation uncertainties, and utilize efficient Markov chain Monte Carlo (MCMC) algorithms, including data augmentation \citep{tanner1987calculation} and parameter expansion strategies \citep{liu1999parameter}. With a reasonable prior, the posterior mean of $\bm{\mu}$ is also a consistent estimator under the same setting as in Theorem \ref{thm1}. Denote the prior of $\bm{\mu}$ by $p(\bm{\mu})$. The posterior distribution of $\bm{\mu}$ and $(\bm{Z}_1,\ldots,\bm{Z}_m)$ satisfies \begin{align*} p(\bm{\mu},\bm{Z}_1,\ldots,\bm{Z}_m\mid\tau_1,\cdots,\tau_m) & \propto p(\bm{\mu}) \cdot \prod_{j=1}^{m} p(\bm{Z}_j \mid \bm{\mu}) \cdot \prod_{j=1}^{m}1\{\tau_j = \text{rank}(\bm{Z}_j)\}.
\end{align*} We can then generate the aggregated ranking list as \begin{equation} \rho = \text{rank} \left( \tilde{\bm{\mu}} \right) = \text{rank} \left( (\tilde{\mu}_1, \tilde{\mu}_2, \ldots, \tilde{\mu}_n)' \right) \label{agg_barc} \end{equation} where the $\tilde{\mu}_i$'s are the posterior means of the $\mu_i$'s. Let $\bm{P}_n = \bm{I}_n - n^{-1}\bm{1}_n\bm{1}_n'$ denote the projection matrix that determines a mapping from $\mathbb{R}^n$ to $\bm{\Theta}$. We choose the prior of $\bm{\mu}$, which is restricted to the parameter space $\bm{\Theta}$, to be $\mathcal{N}\left(\bm{0},\sigma_{\mu}^2\bm{P}_n\right)$. The intuition for choosing this prior is that when $\bm{\mu}\sim \mathcal{N}(\bm{0}, \sigma_{\mu}^2 \bm{I}_n)$, we have $\bm{P}_n\bm{\mu}\in \bm{\Theta}$ and $\bm{P}_n\bm{\mu} \sim \mathcal{N}(\bm{0},\sigma_{\mu}^2\bm{P}_n)$. For computation, it is equivalent to using the prior $\bm{\mu}\sim \mathcal{N}(\bm{0}, \sigma_{\mu}^2 \bm{I}_n)$ and considering the posterior mean of $\bm{P}_n\bm{\mu}\equiv\bm{\mu}-\bar{\bm{\mu}}$, where $\bar{\bm{\mu}}=n^{-1}\sum_{i=1}^{n}\mu_i\bm{1}_n$. In other words, \begin{equation*} p_{\pi_1}(\bm{\mu}\mid\tau_1,\cdots,\tau_m)=p_{\pi_2}(\bm{\mu}-\bar{\bm{\mu}}\mid\tau_1,\cdots,\tau_m) \label{eq9} \end{equation*} where $\pi_1 = \mathcal{N}\left(\bm{0},\sigma_{\mu}^2\bm{P}_n\right)$ and $\pi_2 = \mathcal{N}(\bm{0}, \sigma_{\mu}^2 \bm{I}_n)$ denote the two priors of $\bm{\mu}$. More generally, although we restrict $\bm{\mu}$ to the parameter space $\bm{\Theta}$, we only need to specify a prior for the unconstrained $\bm{\mu}$ and make inference based on the posterior distribution of $\bm{\mu}-\bar{\bm{\mu}}$. Therefore, under this Bayesian formulation, it is straightforward to extend the model to incorporate covariate information, as we illustrate in the next subsection.
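A small Python simulation sketch of model (\ref{eq1}) and of the centering identity $\bm{P}_n\bm{\mu}=\bm{\mu}-\bar{\bm{\mu}}$ behind the constrained prior; the particular $\bm{\mu}$, sample size, and random seed below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model (1): Z_j = mu + N(0, I) noise, tau_j = rank(Z_j).
mu = np.array([1.0, 0.0, -1.0])          # lies in Theta: entries sum to 0
m = 2000
Z = mu[:, None] + rng.standard_normal((3, m))
taus = [tuple(int(i) for i in np.argsort(-Z[:, j])) for j in range(m)]

# With well-separated mu, the modal list recovers the true order 0 > 1 > 2.
modal = max(set(taus), key=taus.count)

# Prior restricted to Theta: projecting an unconstrained draw with
# P_n = I - (1/n) 1 1' is the same as subtracting its mean.
n = 3
Pn = np.eye(n) - np.ones((n, n)) / n
draw = rng.standard_normal(n)
```

The idempotence of $\bm{P}_n$ is what makes $\mathcal{N}(\bm{0},\sigma_\mu^2\bm{P}_n)$ a degenerate Gaussian supported on $\bm{\Theta}$.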
\subsection{Ranking lists with covariates}\label{sec:rank_cov} As in both examples, each ranked entity is associated with relevant covariates, which provide systematic information about how a ranker evaluates it. To incorporate the covariate information into model (\ref{eq1}), we assume that the score of entity $i$ depends linearly on the $p$-dimensional covariate vector $\bm{x}_i$, for $i=1,\ldots, n$. To avoid being too restrictive, we allow the intercept term for each entity to be different. In sum, we have the following over-parameterized model: \begin{equation} \begin{aligned} \mu_{i} &=\alpha_{i}+\bm{x}_{i}'\bm{\beta}, & (1\leq i\leq n)\\ Z_{ij} &=\mu_{i}+\epsilon_{ij}, \quad \epsilon_{ij}\sim \mathcal{N}(0,1), & (1\leq i\leq n; 1\leq j\leq m)\\ \tau_j &=\text{rank}(\bm{Z}_j), & (1\leq j\leq m) \end{aligned} \label{eq2} \end{equation} where the $\epsilon_{ij}$'s are mutually independent. Model (\ref{eq2}) is over-parameterized because $\bm{\mu}$ is invariant if we add a constant vector $\bm{c}$ to $\bm{\beta}$ and change $\alpha_i$ to $\alpha_i-\bm{x}_{i}'\bm{c}$. As a result, the parameters $\bm{\alpha}=(\alpha_1, \ldots, \alpha_n)$ and $\bm{\beta}$ are non-identifiable. However, the structure between $\bm{\mu}$ and $(\bm{\alpha},\bm{\beta})$ helps us construct informative priors on $\bm{\mu}$ that incorporate the covariate information. Intuitively, entities with similar $\bm{x}_{i}$'s should be close in the underlying $\mu_i$'s. This intuition is confirmed by Model (\ref{eq2}) with suitable priors on $(\bm{\alpha},\bm{\beta})$, because similar entities will have higher correlation among their $\mu_i$'s \textit{a priori}. Model (\ref{eq2}) can be helpful when the ranking information is weak and incomplete, and the covariate information is strongly related to the ranking mechanism. We further illustrate Model (\ref{eq2}) using the quarterback data in Example \ref{ex2}.
The unobserved variable $Z_{ij}$ represents ranker $j$'s evaluation of the performance of quarterback $i$. The expression $\alpha_{i}+\bm{x}_{i}'\bm{\beta}$ quantifies a hypothetically universal underlying ``quality'' of the quarterback, and each ranker evaluates it with a personal variation modeled by $\epsilon_{ij}$. The linear term $\bm{x}_{i}'\bm{\beta}$ can explain part of the performance, but there are many aspects in a football game that cannot be reflected through a linear combination of these summary statistics. The term $\alpha_i$ can capture the remaining ``random effect''. Without $\alpha_i$, Model (\ref{eq2}) reduces to a rank regression model in \citet{johnson2013bayesian}, which can be too restrictive in some applications. We set the prior $p(\bm{\alpha},\bm{\beta})\equiv p(\bm{\alpha})p(\bm{\beta})$, where $p(\bm{\alpha})$ is simply $\mathcal{N}(0,\sigma^2_\alpha I)$ and $p(\bm{\beta})$ is $\mathcal{N}(0,\sigma^2_\beta I)$. The hyper-parameters $\sigma_\alpha$ and $\sigma_\beta$ reflect the prior belief on the relevance of the covariate information to the ranking mechanism. Intuitively, the stronger the belief on the role of covariates, the smaller the chosen ratio $\sigma^2_{\alpha}/\sigma^2_{\beta}$ should be. We address the choice of hyper-parameters $(\sigma^2_\alpha, \sigma^2_\beta)$ in the simulation studies. With this prior, the posterior mean of $\bm{\mu}-\bar{\bm{\mu}} = \bm{\mu} - (n^{-1}\sum_{i=1}^{n}\mu_i)\bm{1}_n$ serves as our estimate of $\bm{\mu}\in \bm{\Theta}$. We refer to this Bayesian approach based on model \eqref{eq2} as BARC, standing for Bayesian aggregation of rank data with covariates. \subsection{Weighted rank aggregation for varying qualities of rankers} In practice, the rankers under consideration may differ in quality or reliability. In these cases, a weighted rank aggregation is often more appropriate, where each ranker $j$ has a weight $w_j$ reflecting the quality of its ranking list.
However, it is difficult to design a proper weighting scheme in practice, especially when little or no prior knowledge of the rankers is available. To deal with this difficulty, we incorporate weights into the variance parameters in our model, and infer them jointly with other parameters. More precisely, we model the ranker's quality by the precision of the noise, i.e., extending model (\ref{eq2}) to the following weighted version: \begin{align}\label{eq5} \mu_i & = \alpha_{i}+\bm{x}_{i}'\bm{\beta}, & (1\leq i\leq n) \nonumber \\ Z_{ij} & =\mu_i+\epsilon_{ij}, \ \quad \epsilon_{ij}\sim \mathcal{N}(0,w^{-1}_j), & (1\leq i\leq n; 1\leq j\leq m)\\ \tau_j & =\text{rank}(\bm{Z}_j), & (1\leq j\leq m) \nonumber \end{align} where the $\epsilon_{ij}$'s are mutually independent and $w_j>0$. Note that the variance of $\epsilon_{ij}$, which is the inverse of the ranker's reliability measure $w_j$, depends only on ranker $j$'s quality, but does not depend on entity $i$. The prior for the $w_j$'s can be any distribution bounded away from zero and infinity, such as uniform or truncated chi-square distributions. A more restrictive choice is to let the weights take on only a few discrete values. Our numerical study shows that the more restrictive prior specification for the weights can lead to a much less sticky MCMC sampler without compromising much the precision of the aggregated rank or the quality evaluation of the rankers. Specifically, we restrict $w_j$ to three different levels for reliable, mediocre and low-quality rankers. The corresponding weights for these rankers are 2, 1 and 0.5, respectively, with equal probabilities {\it a priori}, i.e., \begin{equation} P(w_j=0.5)=P(w_j=1)=P(w_j=2)=\frac{1}{3}, \quad (1\leq j\leq m) \end{equation} where the $w_j$'s are mutually independent. We call this weighted rank aggregation method BARCW, standing for Bayesian aggregation of rank data with entities' covariates and rankers' (unknown) weights.
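A minimal sketch of the BARCW generative model \eqref{eq5} with the three-level weight prior follows; the sizes and underlying scores are illustrative assumptions, and rankings are represented best-first.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 9                              # illustrative numbers of entities and rankers
mu = np.linspace(1.0, -1.0, n)           # centered underlying scores (illustrative)

# Discrete prior on ranker reliability: w_j in {0.5, 1, 2} with equal mass
w = rng.choice([0.5, 1.0, 2.0], size=m)

# Model (5): Z_ij = mu_i + eps_ij with eps_ij ~ N(0, 1/w_j)
Z = mu[None, :] + rng.standard_normal((m, n)) / np.sqrt(w)[:, None]
tau = np.argsort(-Z, axis=1)             # each row: entity indices, best first

print(w)
print(tau[0])
```

High-weight (reliable) rankers produce lists closer to the ordering of $\bm{\mu}$, since their noise variance $w_j^{-1}$ is smaller.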
\subsection{Ranker clustering via mixture model} Our previous models assume that the underlying score $\bm{\mu}$ is universal to all rankers, which can sometimes be too restrictive. \cite{bockenholt1993} and \cite{murphy2006,murphy2008jasa,murphy2008aoas} suggested that there are often several categories of voters with very different political opinions in an election, and subsequently a mixture model approach should be applied to cluster voters into subgroups. Differing from BARCW, which studies differences in rankers' reliabilities, this mixture model focuses on the heterogeneity in rankers' opinions while assuming that all rankers are equally reliable. A common issue in mixture models is to determine the number of mixture components. Here we employ the Dirichlet process mixture model, which overcomes this problem by defining mixture distributions with a countably infinite number of components via a Dirichlet process prior \citep{antoniak1974mixtures, ferguson1983bayesian}. We first extend Model (\ref{eq2}) so that the underlying score of entities is ranker-specific: \begin{equation} \begin{aligned} \bm{\mu}^{(j)} &= \bm{\alpha}^{(j)}+\bm{X}\bm{\beta}^{(j)}, & \quad (1\leq j\leq m)\\ \bm{Z}_{j} & = \bm{\mu}^{(j)} + \bm{\varepsilon}_j, \ \quad \bm{\varepsilon}_j \sim \mathcal{N}(\bm{0},\bm{I}_n), & \quad (1\leq j\leq m)\\ \tau_{j} &= \text{rank}\left(\bm{Z}_{j}\right), & \quad (1\leq j\leq m) \end{aligned} \label{dpm1} \end{equation} where $\bm{X} \in \mathbb{R}^{n\times p}$ is the covariate matrix for all ranked entities, $\bm{\mu}^{(j)}$ represents the underlying true score for ranker $j$, and $\bm{\varepsilon}_j$'s are jointly independent. We then assume that the distribution of $(\bm{\alpha}^{(j)},\bm{\beta}^{(j)})$ follows a Dirichlet process prior, i.e. 
\begin{align}\label{dpm2} (\bm{\alpha}^{(j)},\bm{\beta}^{(j)})\mid G \overset{iid}{\sim} G, \quad G &\sim DP(\gamma,G_0), \end{align} where $G_0$ defines a baseline distribution on $\mathbb{R}^n \times \mathbb{R}^p$ for the Dirichlet process prior, satisfying $E(G)=G_0$, and $\gamma$ is a concentration parameter. For ease of understanding, we can equivalently view model \eqref{dpm1}-\eqref{dpm2} as the limit of the following finite mixture model with $K$ components when $K\rightarrow \infty$: \begin{align}\label{mixture} \left(\pi_1, \ldots, \pi_K\right) & \sim \text{Dir}(\gamma/K, \ldots, \gamma/K), \nonumber\\ q_{j} \mid \bm{\pi} &\overset{iid}\sim \text{Multinomial}\left(\pi_1, \ldots, \pi_K\right), & (1\leq j\leq m) \nonumber\\ ( \bm{\alpha}^{\langle k\rangle},\bm{\beta}^{\langle k\rangle} ) &\overset{iid}{\sim} G_0, & (1 \leq k \leq K)\\ \bm{\mu}^{\langle k\rangle} & = \bm{\alpha}^{\langle k\rangle}+\bm{X}\bm{\beta}^{\langle k\rangle}, & (1 \leq k \leq K) \nonumber\\ \bm{Z}_j & = \bm{\mu}^{\langle q_j\rangle} + \bm{\varepsilon}_j, \quad \bm{\varepsilon}_j \sim \mathcal{N}(\bm{0}, \bm{I}_n), & \quad (1\leq j\leq m) \nonumber\\ \tau_{j} & = \text{rank}\left(\bm{Z}_{j}\right), & \quad (1\leq j\leq m) \nonumber \end{align} where the latent variable $q_j\in \{1,2,\ldots, K\}$ indicates the cluster allocation of ranker $j$, and $\bm{\mu}^{\langle k\rangle}$ corresponds to the common underlying score vector for rankers in cluster $k$. We choose the baseline distribution $G_0$ on $\mathbb{R}^n \times \mathbb{R}^p$ to be the product of two independent zero-mean Gaussian distributions with covariances $\sigma_{\alpha}^2\bm{I}_n$ and $\sigma_{\beta}^2\bm{I}_p$, i.e., $G_0 = \mathcal{N}(\bm{0}, \text{diag}(\sigma_{\alpha}^2\bm{I}_n, \sigma_{\beta}^2\bm{I}_p))$.
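The cluster allocations induced by this Dirichlet process prior can be simulated sequentially: each ranker joins an existing cluster with probability proportional to its current size, or opens a new cluster with probability proportional to $\gamma$. A minimal sketch (the function name and seed are our own, chosen for illustration):

```python
import random

def crp_allocations(m, gamma, seed=0):
    """Sample cluster allocations q_1,...,q_m induced by a DP(gamma, G0) prior."""
    rng = random.Random(seed)
    sizes = []                       # sizes[k] = number of rankers in cluster k
    q = []
    for j in range(m):
        # weights: existing cluster sizes, plus gamma for a new cluster
        weights = sizes + [gamma]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(1)          # open a new cluster
        else:
            sizes[k] += 1
        q.append(k)
    return q

q = crp_allocations(m=69, gamma=1.0)
print(len(set(q)))                   # number of distinct clusters
```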
Clearly, $G_0$ is the same as the prior distribution of $(\bm{\alpha},\bm{\beta})$ we use in the previous models, and the conjugacy between $G_0$ and the distribution of $\bm{Z}_j$'s leads to a straightforward Gibbs sampler as described in \cite{neal1992bayesian} and \cite{maceachern1994estimating}. The parameter $\gamma$ represents the degree of concentration of $G$ around $G_0$ and, thus, is related to the number of distinct clusters. According to the P{\'o}lya urn scheme representation of the Dirichlet process in \cite{blackwell1973ferguson}, the prior probability that the $m$th ranker forms a new cluster, distinct from the first $m-1$ rankers, is $\gamma/(m+\gamma-1)$. In addition, the expected number of clusters among $m$ rankers is $\sum_{j=1}^m\gamma/(j+\gamma-1)$ {\it a priori}. We discuss the sensitivity to this hyper-parameter in the simulation studies. Under this Dirichlet process mixture model, we are interested in rank aggregation within each cluster as well as rank aggregation across all clusters. The aggregated ranking in each cluster $k$ is determined by the order of $\bm{\mu}^{\langle k\rangle}$, or equivalently $\bm{\mu}^{(j)}$'s with cluster allocation $q_j = k$. The aggregated ranking list across all clusters depends on the underlying score of all rankers: \begin{equation} \rho =\text{rank}\left(m^{-1}\sum_{j=1}^m \tilde{\bm{\mu}}^{(j)} \right), \label{agg_barcm} \end{equation} where $\tilde{\bm{\mu}}^{(j)}$ is the posterior mean of $\bm{\mu}^{(j)}$ for each ranker $j$. We refer to this rank aggregation method as BARCM, standing for Bayesian Aggregation of Rank data with Covariates of entities and Mixture of rankers with different ranking opinions. \subsection{Extension to partial ranking lists} Models \eqref{eq1}, \eqref{eq2}, \eqref{eq5} and \eqref{dpm1}-\eqref{dpm2} can all be applied when the observations are partial ranking lists.
Because we define a ranking list as a set of non-contradictory pairwise relations among ranked entities, partial ranking lists appear when any of the pairwise relations is missing. Thus, besides the partial ranking list $\tau_j \ (1\leq j\leq m)$, we also observe the $\delta_j$'s, which indicate which pairwise relations are missing. Under our latent variable models, we write $\tau_j \simeq \text{rank}(\bm{Z}_j)$ if the partial ranking list $\tau_j$ is consistent with the full ranking list $\text{rank}(\bm{Z}_j)$. Our models, BARC, BARCW and BARCM, for the observed individual partial ranking lists are the same as in \eqref{eq2}, \eqref{eq5} and \eqref{dpm1}-\eqref{dpm2}, except that $\tau_j = \text{rank}(\bm{Z}_j)$ is replaced by $\tau_j \simeq \text{rank}(\bm{Z}_j)$. Let $\bm{\theta}_\delta$ and $\bm{\theta}_\tau$ denote the parameters for missing indicators $\delta_j$'s and ranking lists $\tau_j$'s, respectively. We can then write the likelihood of $(\delta_j,\tau_j)$ as \begin{align*} p(\delta_j, \tau_j \mid \bm{\theta}_\delta, \bm{\theta}_\tau, \bm{X}) = \sum_{r: r \simeq \tau_j} \int_{\mathbb{R}^n} p(\delta_j \mid r, \bm{Z}_j, \bm{\theta}_\delta, \bm{X}) 1\{r = \text{rank}(\bm{Z}_j) \} p(\bm{Z}_j \mid \bm{\theta}_\tau, \bm{X}) \text{d} \bm{Z}_j.
\end{align*} If the pairwise relations are missing at random, in the sense that $p(\delta_j \mid r, \bm{Z}_j, \bm{\theta}_\delta, \bm{X}) = p(\delta_j \mid \tilde{r}, \tilde{\bm{Z}}_j, \bm{\theta}_\delta, \bm{X})$ for all possible $(r, \bm{Z}_j, \tilde{r}, \tilde{\bm{Z}}_j)$ such that $r = \text{rank}(\bm{Z}_j) \simeq \tau_j$ and $\tilde{r}= \text{rank}(\tilde{\bm{Z}}_j) \simeq \tau_j$, then the likelihood of $(\delta_j,\tau_j)$ can be simplified as \begin{align*} p(\delta_j, \tau_j \mid \bm{\theta}_\delta, \bm{\theta}_\tau, \bm{X}) = p(\delta_j \mid \tau_j, \bm{\theta}_\delta, \bm{X}) \int_{\mathbb{R}^n} 1\{\tau_j \simeq \text{rank}(\bm{Z}_j) \} p(\bm{Z}_j \mid \bm{\theta}_\tau, \bm{X}) \text{d} \bm{Z}_j. \end{align*} If the priors for the parameters $\bm{\theta}_\delta$ and $\bm{\theta}_\tau$ are mutually independent, we can further ignore the $\delta_j$'s when conducting the Bayesian inference for the parameter $\bm{\theta}_\tau$ of the ranking mechanism. \section{MCMC computation with parameter expansion}\label{sec3} We use Gibbs sampling with parameter expansion \citep{liu1999parameter} in our Bayesian computation for the latent variable models with covariates. We start with model (\ref{eq2}) and then generalize this MCMC strategy to two extended models, (\ref{eq5}) and \eqref{dpm1}-\eqref{dpm2}. To simplify the notation, we define $\bm{Z} = (\bm{Z}_1,\ldots,\bm{Z}_m)\in \mathbb{R}^{n\times m}$, $\mathcal{T} = \{\tau_j\}_{j=1}^m$, $\bm{V} = (\bm{I}_n,\bm{X}) \in \mathbb{R}^{n\times (n+p)}$, and $\bm{\Lambda} = \text{diag}(\sigma_{\alpha}^{2}\bm{I}_{n}, \sigma_{\beta}^{2}\bm{I}_{p}) \in \mathbb{R}^{(n+p)\times(n+p)}$. \subsection{Parameter-expanded Gibbs Sampler} \label{step_zba} The most computationally expensive part in our model is to sample all the $Z_{ij}$'s from the truncated Gaussian distributions.
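This truncated Gaussian update can be sketched as follows for a single ranker. The sketch uses simple rejection sampling and a best-first rank convention, both illustrative simplifications; the entity labels and sizes are hypothetical.

```python
import numpy as np

def gibbs_update_Z(Z, mean, tau, rng):
    """One sweep over entities for one ranker: redraw each Z_i from N(mean_i, 1)
    truncated so that the ranking of Z stays consistent with tau.
    tau lists entity indices from best (largest Z) to worst."""
    pos = {entity: p for p, entity in enumerate(tau)}
    for i in range(len(Z)):
        p = pos[i]
        upper = Z[tau[p - 1]] if p > 0 else np.inf        # next better entity
        lower = Z[tau[p + 1]] if p + 1 < len(tau) else -np.inf
        while True:                                       # simple rejection step
            z = mean[i] + rng.standard_normal()
            if lower < z < upper:
                Z[i] = z
                break
    return Z

rng = np.random.default_rng(2)
tau = [2, 0, 1]                      # entity 2 ranked best, then 0, then 1
mean = np.zeros(3)
Z = np.array([0.0, -1.0, 1.0])       # any state consistent with tau
for _ in range(100):
    Z = gibbs_update_Z(Z, mean, tau, rng)
print(Z)
```

Each single-coordinate update respects the truncation points set by its current neighbors in $\tau$, so every sweep leaves the latent vector consistent with the observed ranking; more efficient inverse-CDF samplers can replace the rejection step.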
Furthermore, because $\bm{Z}$ and $(\bm{\alpha}, \bm{\beta})$ are intertwined due to the posited regression model, they tend to correlate highly, similar to the difficulty of the data augmentation method introduced by \cite{albert1993bayesian} for probit regression models. To speed up the algorithm, we follow Scheme 2 in \cite{liu1999parameter} and exploit a parameter-expanded data augmentation (PX-DA) algorithm. In particular, we introduce a group scale transformation of the ``missing data'' matrix $\bm{Z}$, the evaluation scores of all rankers for all ranked entities, indexed by a positive parameter $\theta$, i.e., $t_{\theta}(\bm{Z})\equiv \bm{Z}/\theta$. The PX-DA algorithm updates the missing data $\bm{Z}$ and the expanded parameters $(\theta, \bm{\alpha},\bm{\beta})$ iteratively as follows: \begin{enumerate} \item For $i=1,\ldots,n$ and $j=1,\ldots,m$, draw [$Z_{ij} \mid Z_{[-i],j},\bm{Z}_{[-j]},\bm{\alpha},\bm{\beta}$] from $\mathcal{N}(\alpha_i+\bm{x}_{i}'\bm{\beta},1)$ with truncation points determined by $Z_{[-i],j}$, such that $Z_{ij}$ falls in the correct position according to $\tau_j$, i.e., $\text{rank}(\bm{Z}_j)\simeq\tau_j$. \item Draw $\theta\sim p(\theta \mid \bm{Z},\mathcal{T})\propto p(t_\theta(\bm{Z}))|J_\theta(\bm{Z})|H(d\theta)$. Here, $|J_\theta(\bm{Z})|=\theta^{-nm}$ is the Jacobian of the scale transformation, $H(d\theta)=\theta^{-1}d\theta$ is the Haar measure on a scale group up to a constant, and $$ p(t_\theta(\bm{Z})) \propto \int p(t_\theta(\bm{Z}) \mid \bm{\alpha}, \bm{\beta})p(\bm{\alpha})p(\bm{\beta})\text{d}\bm{\alpha} \text{d}\bm{\beta} \propto \exp\left\{-\frac{S}{2\theta^2}\right\}, $$ is the marginal density of latent variables evaluated at $t_{\theta}(\bm{Z})$, where $$ S=\sum_{j=1}^{m}\bm{Z}_j'\bm{Z}_j-\sum_{j=1}^{m}\sum_{k=1}^{m}\bm{Z}_j'\bm{V}(\bm{\Lambda}^{-1}+m\bm{V}'\bm{V})^{-1}\bm{V}'\bm{Z}_k. $$ We can derive that $\theta^2\sim S/\chi^2_{nm}$.
\item Draw $(\bm{\alpha},\bm{\beta})\sim p(\bm{\alpha}, \bm{\beta} \mid t_\theta(\bm{Z}))\equiv \mathcal{N}\left(\hat{\bm{\eta}}/\theta,\hat{\bm{\Sigma}}\right)$, where $$\hat{\bm{\eta}}=(\bm{\Lambda}^{-1}+m\bm{V}'\bm{V})^{-1}\bm{V}'\sum_{j=1}^m\bm{Z}_j\ \ \text{ and } \ \ \hat{\bm{\Sigma}}=(\bm{\Lambda}^{-1}+m\bm{V}'\bm{V})^{-1}.$$ \end{enumerate} Below we give some intuition on why the PX-DA algorithm improves efficiency. Without Step 2 and with $t_{\theta}(\bm{Z})$ in Step 3 replaced by $\bm{Z}$, the algorithm reduces to the standard Gibbs sampler, which updates the missing data and parameters iteratively. The scale group move of $\bm{Z}$ under the usual Gibbs sampler is slow due to both the Gibbs update for $\bm{Z}$ in Step 1 and the high correlation between $\bm{Z}$ and $(\bm{\alpha}, \bm{\beta})$. To overcome this difficulty, the PX-DA algorithm introduces a scale transformation of $\bm{Z}$ to facilitate its group move and mitigate its correlation with $(\bm{\alpha}, \bm{\beta})$. To ensure the validity of the MCMC algorithm, the scale transformation parameter $\theta$ has to be drawn from a carefully specified distribution, such that the move is invariant under the target posterior distribution, i.e., $t_{\theta}(\bm{Z})$ follows the same distribution as the original $\bm{Z}$ under stationarity. To aid in understanding, we provide a proof in the Supplementary Material that the specified distribution of $\theta$ in Step 2 satisfies this property. \subsection{Gibbs sampler for BARCW} Under Model \eqref{eq5} for BARCW, the Gibbs steps for $\left[\bm{Z},\bm{\beta}, \bm{\alpha} \mid \mathcal{T}, \bm{W}\right]$ are very similar to those for $\left[\bm{Z},\bm{\beta}, \bm{\alpha} \mid \mathcal{T}\right]$ in the previous model for BARC, with details relegated to the Supplementary Material. The additional step is to draw $w_j$ given all other variables.
For $j=1,\ldots,m$, we draw the discrete random variable $w_j$ from the following conditional posterior probability mass function: \begin{align*} p(w_j\mid \bm{Z},\bm{w}_{[-j]},\bm{\alpha},\bm{\beta},\mathcal{T}) &\propto p(w_j)p(\bm{Z}\mid \bm{\alpha},\bm{\beta},\bm{w}) \\ &\propto w_j^{n\over 2}\exp\left(-\frac{w_j}{2}\sum_{i=1}^n \left(Z_{ij}-\bm{x}_{i}'\bm{\beta}-\alpha_i\right)^2\right ). \end{align*} \subsection{Gibbs sampler for BARCM} Under model \eqref{dpm1}-\eqref{dpm2} for BARCM, we first represent the parameters $\{\bm{\alpha}^{(j)}, \bm{\beta}^{(j)}\}_{j=1}^m$ by the cluster allocation vector $\bm{q} = (q_1,\ldots, q_m)$ and the cluster-wise parameters $\{\bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle}: k\in \{q_1,\ldots, q_m\}\}$, and then use the MCMC algorithm to sample $\bm{q}$, $(\bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle})$'s and $\bm{Z}= (\bm{Z}_1,\ldots,\bm{Z}_m)$. Let $\mathcal{A}_k(\bm{q}) = \{j \mid 1\leq j\leq m, q_j = k\}$ denote the set of rankers that belong to cluster $k$ given the cluster allocation $\bm{q}$. Due to the conjugacy between $G_0$ and the distribution of $\bm{Z}_j$'s, we can integrate out $(\bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle})$'s when sampling $\bm{q}$, and Gibbs sampling of $\bm{q}$ given $\bm{Z}$ follows from Algorithm 3 in \cite{neal2000markov}.
Specifically, the Gibbs steps are as follows: \begin{enumerate} \item For $j=1,\dots,m$, draw $q_j$ from \begin{eqnarray*} & & P\left(q_j = k\mid \bm{Z}, \bm{q}_{[-j]}, \mathcal{T}\right) \\ &\propto & P\left(q_j = k\mid \bm{q}_{[-j]}\right)\int p\left(\bm{Z}_{j}\mid \bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle}\right)p\left(\bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle} \mid \bm{Z}_{[-j]}\right) \text{d}\bm{\alpha}^{\langle k\rangle}\text{d}\bm{\beta}^{\langle k\rangle}\\ &\propto & P\left(q_j = k\mid \bm{q}_{[-j]}\right)\cdot\exp\left(-\frac{1}{2}S_{k}(\bm{q})+\frac{1}{2}S_{k}(\bm{q}_{[-j]})\right ), \end{eqnarray*} where \begin{equation*} S_{k}\left(\bm{q}\right) = \sum_{j\in\mathcal{A}_{k}(\bm{q})}\bm{Z}_{j}'\bm{Z}_{j}-\sum_{j\in\mathcal{A}_{k}(\bm{q})} \sum_{l\in\mathcal{A}_{k}(\bm{q})} \bm{Z}_j'\bm{V}\left(\bm{\Lambda}^{-1}+|\mathcal{A}_{k}(\bm{q})|\bm{V}'\bm{V}\right)^{-1}\bm{V}'\bm{Z}_l, \end{equation*} $|\mathcal{A}_{k}(\bm{q}_{[-j]})|$ denotes the number of rankers other than $j$ that are in cluster $k$, and $P\left(q_j \mid \bm{q}_{[-j]}\right)$ is determined as follows: \begin{eqnarray*} \text{If } k = q_i \text{ for some } i\neq j:\ P\left(q_j = k\mid \bm{q}_{[-j]}\right) &=& \frac{|\mathcal{A}_{k}(\bm{q}_{[-j]})|}{(m-1+\gamma)}\\ P\left(q_j \neq q_i \text{ for all } i\neq j\mid \bm{q}_{[-j]}\right) &=& \frac{\gamma}{(m-1+\gamma)}. \end{eqnarray*} \item For each $k\in \{q_1,\ldots,q_m\}$, we sample $[\bm{Z}_{\mathcal{A}_k(\bm{q})},\bm{\alpha}^{\langle k\rangle}, \bm{\beta}^{\langle k\rangle} \mid \mathcal{T}, \bm{q}]$ using Gibbs sampling steps very similar to those for $[\bm{Z},\bm{\alpha},\bm{\beta} \mid \mathcal{T}]$ in the BARC model, with details relegated to the Supplementary Material.
\end{enumerate} \section{Rank aggregation via MCMC samples}\label{agg_mcmc} Following the Bayesian computation in the previous section, we can obtain MCMC samples from the posterior distribution of $(\bm{\alpha}, \bm{\beta})$ under BARC or BARCW, and from the posterior distribution of $(\bm{\alpha}^{(j)}, \bm{\beta}^{(j)})$'s under BARCM. As described in (\ref{agg_barc}) and (\ref{agg_barcm}), we use the posterior mean of $\mu_i \equiv \alpha_i+\bm{x}_{i}'\bm{\beta}$'s to generate the aggregated ranking list in BARC and BARCW, and use the posterior mean of $m^{-1} \sum_{j=1}^{m} \mu_{i}^{(j)} = m^{-1}\sum_{k} |\mathcal{A}_{k}(\bm{q})|( \alpha_i^{\langle k\rangle}+\bm{x}_{i}'\bm{\beta}^{\langle k\rangle})$'s in BARCM. Moreover, we have some byproducts from the Bayesian inference besides the aggregated ranking lists, as illustrated below. \subsection{Probability interval for the aggregated ranking list} With existing rank aggregation methods, one usually seeks only a single aggregated ranking list and ignores the uncertainty of the aggregation result. When we observe $i \succ j$ in a single ranking list $\rho$, we cannot tell whether $i$ is much better than $j$ or the two are close. Bayesian inference provides a natural uncertainty measure for the ranking result. Under BARC or BARCW, suppose we have MCMC samples $\{\bm{\mu}^{[l]}\}_{l=1}^L$ from the posterior distribution $p(\bm{\mu}\mid \tau_1,\cdots,\tau_m)$. For each sample $\bm{\mu}^{[l]}$, we calculate a ranking list $ \rho^{[l]}=\text{rank}(\bm{\mu}^{[l]}). $ Let $\tau^{[l]}(i)$ denote the position of entity $i$ in ranking list $\rho^{[l]}$, and define the $(1-\alpha)$ probability interval of entity $i$'s rank as $$ \left(\tau^{LB}(i),\tau^{UB}(i)\right)=\left(\tau_{(\frac{\alpha}{2})}(i),\tau_{(1-\frac{\alpha}{2})}(i)\right), $$ where $\tau_{(\frac{\alpha}{2})}(i)$ and $\tau_{(1-\frac{\alpha}{2})}(i)$ are the $\frac{\alpha}{2}$th and $(1-\frac{\alpha}{2})$th sample quantiles of $\{ \tau^{[l]}(i)\}_{l=1}^{L}$.
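Given posterior draws of $\bm{\mu}$, these probability intervals can be computed as in the following sketch. It assumes the convention that rank 1 corresponds to the largest score, and all numbers are illustrative.

```python
import numpy as np

def rank_intervals(mu_samples, alpha=0.05):
    """mu_samples: (L, n) posterior draws of mu.
    Returns an (n, 2) array of lower/upper quantiles of each entity's rank,
    taking rank 1 as the entity with the largest score."""
    L, n = mu_samples.shape
    order = np.argsort(-mu_samples, axis=1)          # best entity first
    pos = np.empty((L, n), dtype=int)
    rows = np.arange(L)[:, None]
    pos[rows, order] = np.arange(1, n + 1)           # tau^[l](i) for each draw
    lo = np.quantile(pos, alpha / 2, axis=0)
    hi = np.quantile(pos, 1 - alpha / 2, axis=0)
    return np.column_stack([lo, hi])

rng = np.random.default_rng(3)
# Tight posterior around well-separated scores: intervals collapse to points
mu_draws = np.array([2.0, 1.0, -3.0]) + 0.1 * rng.standard_normal((500, 3))
print(rank_intervals(mu_draws))
```

Wider posteriors or closely tied entities produce wider intervals, which is exactly the uncertainty the point-estimate aggregation hides.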
The construction of credible intervals for entities' ranks under BARCM is very similar, and thus is omitted here. \subsection{Measurements of heterogeneous rankers} In BARCW and BARCM, we aim to learn the heterogeneity in rankers and subsequently improve and better interpret the rank aggregation results. Both methods deliver meaningful measures to detect heterogeneous rankers. In BARCW, we assume that all rankers share the same opinion and the samples from $p(\bm{w} \mid \mathcal{T})$ measure the reliability of the input rankers. In BARCM, we assume that there exist a few groups of rankers with different opinions, despite all being reliable rankers. The MCMC samples from $p(\bm{q} \mid \mathcal{T})$ estimate ranker clusters with different opinions. The number of clusters is determined by the number of distinct values in the cluster allocation $\bm{q}$. The opinion of rankers in cluster $k$ can be aggregated by the posterior means of $\alpha_i^{\langle k\rangle}+\bm{x}_{i}'\bm{\beta}^{\langle k\rangle}$'s. We compare both methods later in simulation studies and real applications. \subsection{Role of covariates in the ranking mechanism} As discussed in Section \ref{sec:rank_cov}, the interpretation of $\bm{\alpha}$ and $\bm{\beta}$ is difficult due to over-parameterization. However, noting that the $\alpha_i$'s are modeled as i.i.d.\ Gaussian random variables with mean zero \textit{a priori}, the posterior distribution of $\bm{\beta}$ still provides some meaningful information about the role of covariates in the ranking mechanism. Intuitively, for each ranked entity $i$, the projection $\bm{x}_i'\bm{\beta}$ can be seen as the part of the evaluation score $\mu_i$ linearly explained by the covariates, and $\alpha_i$ as the corresponding residual. The sign and magnitude of the coefficient $\beta_k$ indicate whether the $k$th covariate plays a positive or negative role in determining the ranking list, and how strong that role is.
In practice, we can incorporate nonlinear transformations of the original covariates to allow for a more flexible role of the covariates in explaining the ranking mechanism. \section{Simulation Studies}\label{sec4} We adopt the normalized \textit{Kendall tau distance} \citep{kendall1938new} between ranking lists to compare our methods with other rank aggregation methods. Another popular distance measure, the \textit{Spearman's footrule distance} \citep{diaconis1977spearman}, gives very similar results and is thus omitted here. \subsection{Comparison between BARC and other rank aggregation methods} Recall that $\mathcal{U}$ is the set $\{1,\ldots,n\}$ of entities, and entity $i$ has true value $\mu_i$. We generate $m$ full ranking lists $\{\tau_j\}_{j=1}^m$ via the following model: $$ \tau_j=\text{rank} (\bm{Z}_j), \quad \text{ where } \bm{Z}_{j} \overset{iid}{\sim} \mathcal{N}(\bm{\mu},\sigma^2\bm{I}_n). $$ We generate i.i.d. vectors $\bm{x}_{i} = (x_{i1}, \ldots, x_{ip})' \in \mathbb{R}^p$ from the multivariate normal distribution with mean 0 and covariance $\text{Cov}(x_{is},x_{it})=\rho^{|s-t|}$ for $1\leq s,t \leq p$, and examine three scenarios. In Scenario 1, the true difference between entities can be linearly explained by covariates. In Scenario 2, a linear combination of covariates can partially explain the ranking. In Scenario 3, the ranking mechanism is barely correlated with the covariates. \begin{enumerate} \item $\mu_i=\bm{x}_{i}'\bm{\beta}$, where $\bm{\beta}=(3,2,1,0.5)'$, $p=4$, and $\rho=0.2$. \item $\mu_i=\bm{x}_{i}'\bm{\beta}+\|\bm{x}_{i}\|^2$, where $\bm{\beta}=(3,2,1)'$, $p=3$, and $\rho=0.5$. \item $\mu_i=\|\bm{x}_{i}\|^2$, where $p=4$, and $\rho=0.5$. \end{enumerate} We first examine the impact of the noise level $\sigma$ on the performance of BARC and other rank aggregation methods. Fixing $n=50$ and $m=10$, we tried four different values of $\sigma$ ($=5,10,20,40$). For each configuration, we generated 500 simulated datasets.
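A minimal sketch of this data-generating process for Scenario 1 follows; the seed is illustrative, and rankings are represented as best-first orderings.

```python
import numpy as np

def simulate_scenario1(n=50, m=10, sigma=5.0, rho=0.2, seed=4):
    """Scenario 1: mu_i = x_i' beta with beta = (3, 2, 1, 0.5)'."""
    rng = np.random.default_rng(seed)
    p = 4
    # Covariance Cov(x_is, x_it) = rho^{|s-t|}
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.array([3.0, 2.0, 1.0, 0.5])
    mu = X @ beta
    # Each ranker: Z_j ~ N(mu, sigma^2 I_n), tau_j = rank(Z_j)
    Z = mu + sigma * rng.standard_normal((m, n))
    tau = np.argsort(-Z, axis=1)          # each ranker's list, best first
    return X, mu, tau

X, mu, tau = simulate_scenario1()
print(tau.shape)
```

Scenarios 2 and 3 differ only in how $\mu_i$ is built from $\bm{x}_i$ (adding or replacing the linear term with $\|\bm{x}_i\|^2$).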
We applied Borda Count (BC), Markov-Chain based methods (MC1, MC2, MC3), the Plackett--Luce based method (PL) and our BARC method. A brief review of the aforementioned methods can be found in the Supplementary Material. When utilizing BARC and its extensions, we input standardized covariates and set hyper-parameters $\sigma_\alpha=1$ and $\sigma_\beta=100$ unless otherwise stated. Intuitively, with a small $\sigma_\alpha$ and a large $\sigma_\beta$, BARC would exploit the role of covariates in rank aggregation. The Kendall's tau distances between the true rank and the aggregated ranks produced by the six methods, averaged over the 500 simulated datasets, are plotted against the noise level in Figure \ref{fig1}. We can observe that BARC uniformly outperformed the competing methods in Scenarios 1 and 2 when the linear combination of covariates is useful, and was competitive in Scenario 3. The PL method underperformed all other methods but MC1 due to its misspecified distributional assumption. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{sim-kd.pdf} \caption{Average distance between true rank and aggregated ranks of different methods. As the covariates become increasingly dissociated with the ranking mechanism from Scenarios 1 to 3, the advantage of BARC over existing methods shrank. Under these scenarios, the lines of MC2, MC3 and Borda Count overlap. In Scenario 3, the results of MC2, MC3, Borda Count and BARC are extremely close as the ranking is not linearly associated with the covariates. } \label{fig1} \end{figure} \subsection{Computational advantage of parameter expansion} Before we move to more complicated settings, we would like to use the above simulation to demonstrate the effectiveness of parameter expansion in dealing with rank data. We use Scenario 2 with noise level $\sigma=5$ as an illustration.
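The sample autocorrelation function used in this comparison can be computed as in the following sketch. The AR(1) trace mimicking a sticky chain is an illustrative assumption, not output from our sampler.

```python
import numpy as np

def sample_acf(x, max_lag=20):
    """Sample autocorrelation of a scalar MCMC trace at lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# An AR(1)-like trace mimicking a slowly mixing sampler (illustrative)
rng = np.random.default_rng(5)
trace = np.empty(5000)
trace[0] = 0.0
for t in range(1, 5000):
    trace[t] = 0.9 * trace[t - 1] + rng.standard_normal()
acf = sample_acf(trace)
print(acf[:3])
```

A faster-mixing chain, such as one produced by the PX-DA scheme, shows autocorrelations that decay toward zero within a few lags.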
Figure \ref{acf_px} shows that the Gibbs sampler with parameter expansion reduces the autocorrelation in the MCMC samples compared to the regular Gibbs sampler. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{acf_px.pdf} \includegraphics[width=0.4\textwidth]{acf_naive.pdf} \caption{The left panel is the autocorrelation plot of $\beta_1$ in the parameter-expanded Gibbs sampler; the right panel is the autocorrelation plot of $\beta_1$ in the regular Gibbs sampler.} \label{acf_px} \end{figure} \subsection{BARC with partial ranking lists} We further explore how BARC performs for aggregating partial ranking lists, where subgroups have no overlap with each other. This situation is similar to what we observe in Example \ref{ex1}. We simulated data from Scenario 2 with $n=80$, $m=10$ and $p=3$. We randomly divided these 80 entities into $k\,(=1,2,4,8,10,16)$ subgroups, each with size $n/k$. As $k$ increases, the pairwise comparison information decreases. For example, when $k=16$, we have only $5.06\%$ of all the pairwise comparisons in a partial ranking list. Figure \ref{fig3} displays the Kendall's tau distances between the true rank and the aggregated ranking lists inferred by BARC in different cases. BARC is quite robust with respect to partial ranking lists when unobserved pairwise comparisons are missing completely at random and the input ranking lists have moderate dependence on the available covariates. In contrast, the BARC method without covariates, denoted BAR in Figure \ref{fig3}, is susceptible to partial lists. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{sim-f-kd.pdf} \caption{ We applied BARC with two different settings for the hyper-parameter $\sigma_{\alpha}$ to aggregate partial ranking lists. For comparison, we also applied our method without using covariates, denoted BAR. With the help of covariates, BARC's performance was relatively unaffected by the increasing incompleteness of the ranking lists.
BARC is also robust to the hyper-parameter choice: the BARC lines with different values of $\sigma_{\alpha}$ (i.e., 0.5 and 1) were very close to each other. } \label{fig3} \end{figure} \subsection{BARCM for heterogeneous opinions in ranking lists}\label{sec:simu_barcm} In the real world, there can be a few groups of rankers with different opinions, despite all being reliable rankers. The Dirichlet process mixture model \eqref{dpm1}-\eqref{dpm2} clusters the rankers and can automatically determine the total number of clusters. Here, we use simulation to study the sensitivity of BARCM to the hyper-parameter $\gamma$ in the Dirichlet process prior. In addition, we explore the performance of BARCW under this misspecified model setting. We simulated under the BARC model with three mixture components. Mimicking the dataset in Example \ref{ex1}, we have $m=69$ rankers, $p=11$ covariates for each entity, and $n=108$ entities divided into 9 non-overlapping groups of equal size. The categories of rankers are generated with probabilities $\bm{\pi} = (0.5, 0.3, 0.2)$. The covariates $\bm{x}_i$'s are generated from a multivariate normal distribution with mean 0 and covariance $\text{Cov}(x_{is},x_{it})=(0.2)^{|s-t|}$, and the coefficients are generated from $\bm{\beta}^{\langle k \rangle}\overset{iid}{\sim} \mathcal{N}(0,\bm{I}_p)$ and $\alpha_i^{\langle k \rangle}\overset{iid}{\sim}\mathcal{N}(0,2^2)$ for $k=1,2,3$ and $i=1,\ldots,n$. The noise level is fixed at $\sigma=1$. Table \ref{acc_ncat} shows the average clustering accuracy under different values of $\gamma$. The clustering accuracy here is measured by the Rand index, which is the percentage of pairwise clustering decisions that are correct \citep{rand1971objective}. The hyper-parameter clearly impacts the number of clusters in the mixture model, but the results are quite robust in terms of clustering accuracy.
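The prior expected number of clusters, $\sum_{j=1}^m \gamma/(j+\gamma-1)$, can be evaluated directly for the $\gamma$ values considered here; a quick sketch with $m=69$:

```python
# Prior expected number of clusters under a DP(gamma) prior:
# E(#clusters) = sum_{j=1}^m gamma / (j + gamma - 1)
def expected_clusters(m, gamma):
    return sum(gamma / (j + gamma - 1) for j in range(1, m + 1))

m = 69
for gamma in (1 / m, m ** -0.5, 1.0, m ** 0.5):
    print(round(expected_clusters(m, gamma), 3))
```

These values range from barely above one cluster for $\gamma = m^{-1}$ to roughly nineteen for $\gamma = m^{1/2}$, which is why $\gamma$ mainly shifts the posterior number of clusters rather than the clustering accuracy.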
\begin{table}[htb] \caption{Clustering analysis under the heterogeneous setting: average clustering accuracy and number of clusters given by BARCM over 100 simulations under each $\gamma$ value. } \centering \label{acc_ncat} \begin{tabular}{|c|c|c|c|c|} \hline & $\gamma = m^{-1}$ & $\gamma = m^{-1/2}$ & $\gamma = 1$ & $\gamma = m^{1/2}$\\ \hline Clustering accuracy & 0.994 & 0.990 & 0.987 & 0.979\\ \hline Expected $\#$ of clusters {\it a posteriori} & 3.629 & 4.162 & 4.671 & 5.746\\ \hline Expected $\#$ of clusters {\it a priori} & 1.069 & 1.557 & 4.819 & 18.986\\ \hline \end{tabular} \end{table} We also applied BARCW to this simulated data set. Figure \ref{fig_barcm} shows that the minority opinions are down-weighted by BARCW, which assumes that all rankers share the same opinion. As a result, BARCW reinforces the majority's opinion in rank aggregation. In practice, we recommend applying BARCM to check whether there are several sizable ranker subgroups. By studying rankers' heterogeneity, we can better understand our ranking data even if we seek only one aggregated ranking list. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{sim-weights-cat.pdf} \caption{Box plot of weights by ranker categories given by BARCW in 100 simulations. The ranker category with the largest proportion is weighted higher than the other two.} \label{fig_barcm} \end{figure} \subsection{Robustness of BARCM and BARCW under homogeneous setting} In contrast to the simulation with heterogeneous ranker qualities or opinions, we also simulated the BARC model under the homogeneous setting to verify the robustness of BARCM and BARCW. The simulation is the same as in Section \ref{sec:simu_barcm} except that all rankers are from one component with equal qualities. Table \ref{acc_homo} shows the average clustering accuracy under different hyper-parameters. Figure \ref{fig_weights_homo} shows the histogram of the rankers' weights given by BARCW. 
Under this homogeneous setting, BARCM clustered the rankers into one group, and BARCW assigned the rankers' weights mostly near the maximum. \begin{table}[htb] \caption{Clustering analysis under the homogeneous setting: average clustering accuracy and number of clusters given by BARCM over 100 simulations under each $\gamma$ value.} \centering \label{acc_homo} \begin{tabular}{|c|c|c|c|c|} \hline & $\gamma = m^{-1}$ & $\gamma = m^{-1/2}$ & $\gamma = 1$ & $\gamma = m^{1/2}$\\ \hline Clustering accuracy & 1.000 & 0.999 & 0.998 & 0.993\\ \hline Expected $\#$ of clusters {\it a posteriori} & 1.007 & 1.033 & 1.057 & 1.215\\ \hline \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth]{weights-homo.pdf} \caption{Histogram of rankers' weights given by BARCW in 100 simulations under the homogeneous setting. Almost all the rankers are inferred to be reliable.} \label{fig_weights_homo} \end{figure} \section{Analyses of the Two Real Data Sets}\label{realData} \subsection{Aggregating NFL Quarterback rankings} Ranking NFL quarterbacks is a classic case where experts' ranking schemes are clearly related to some performance statistics of the players in their games. Information in Tables \ref{table1} and \ref{table2} enables us to generate aggregated lists using both rank data and the covariate information, as shown in Table \ref{table4}. For quarterbacks at the top and bottom of the list, these methods mostly agree with each other. Among all compared rank aggregation results, the PL method has the largest discrepancy with other methods, especially in the bottom half where the ranking uncertainty is large. Some diagnostic plots for MCMC convergence are provided in the Supplementary Material. \begin{table}[htb] \tiny \centering \caption{Rank aggregation results of NFL quarterbacks listed in the order of BARCW posterior means. 
The rankings are listed to the right of the underlying values given by the rank aggregation methods.} \label{table4} \begin{adjustbox}{center} \begin{tabular}{l|rr|rr|rr|rr|rr} \toprule Player & BARCW $\mu$ & Rank & BARC $\mu$ & Rank & BC & Rank & PL.$\gamma$ & Rank & MC3.$\pi$ & Rank \\ \midrule Andrew Luck & 6.518 & 1 & 6.069 & 1 & 1.286 & 1 & 0.361 & 1 & 0.207 & 1 \\ Aaron Rodgers & 4.76 & 2 & 4.635 & 2 & 2.571 & 2 & 0.195 & 2 & 0.137 & 2 \\ Peyton Manning & 4.466 & 3 & 4.39 & 3 & 3 & 3 & 0.171 & 3 & 0.12 & 3 \\ Tom Brady & 2.937 & 4 & 2.942 & 4 & 5.071 & 4 & 0.072 & 4 & 0.071 & 4 \\ Tony Romo & 2.805 & 5 & 2.744 & 5 & 5.214 & 5 & 0.062 & 5 & 0.07 & 5 \\ Drew Brees & 2.469 & 6 & 2.448 & 6 & 5.857 & 6 & 0.043 & 6 & 0.063 & 6 \\ Ben Roethlisberger & 2.149 & 7 & 2.094 & 7 & 6.571 & 7 & 0.035 & 7 & 0.052 & 7 \\ Ryan Tannehill & 1.435 & 8 & 1.342 & 8 & 8 & 8 & 0.020 & 8 & 0.04 & 8 \\ Matthew Stafford & 0.965 & 9 & 0.72 & 9 & 8.857 & 9 & 0.015 & 9 & 0.034 & 9 \\ Mark Sanchez & -0.005 & 10 & -0.098 & 10 & 11.5 & 10 & 0.005 & 11 & 0.023 & 10 \\ Russell Wilson & -0.716 & 11 & -0.496 & 11 & 12.214 & 11 & 0.005 & 10 & 0.021 & 11 \\ Philip Rivers & -0.93 & 12 & -0.602 & 12 & 13.214 & 12 & 0.003 & 12 & 0.02 & 12 \\ Cam Newton & -1.197 & 13 & -1.136 & 13 & 14.5 & 13 & 0.002 & 13 & 0.017 & 13 \\ Matt Ryan & -1.413 & 14 & -1.474 & 14 & 16.357 & 15 & 0.001 & 15 & 0.014 & 15 \\ Eli Manning & -1.474 & 15 & -1.497 & 15 & 16.071 & 14 & 0.002 & 14 & 0.014 & 14 \\ Alex Smith & -1.793 & 16 & -1.717 & 17 & 16.714 & 16 & 0.001 & 17 & 0.013 & 16 \\ Colin Kaepernick & -1.813 & 17 & -1.601 & 16 & 16.786 & 17 & 0.001 & 18 & 0.013 & 17 \\ Joe Flacco & -1.815 & 18 & -1.778 & 19 & 16.929 & 18 & 0.001 & 20 & 0.013 & 18 \\ Jay Cutler & -1.884 & 19 & -1.726 & 18 & 17.143 & 19 & 0.001 & 19 & 0.013 & 19 \\ Andy Dalton & -1.987 & 20 & -2.052 & 20 & 17.357 & 20 & 0.001 & 16 & 0.012 & 20 \\ Josh McCown & -2.733 & 21 & -2.645 & 21 & 19.5 & 21 & 0.001 & 21 & 0.01 & 21 \\ Drew Stanton & -2.812 & 
22 & 20.286 & 22 & 0.001 & 22 & 0.009 & 22 \\ Teddy Bridgewater & -3.476 & 23 & -3.378 & 23 & 21.714 & 23 & 0.000 & 23 & 0.008 & 23 \\ Brian Hoyer & -4.462 & 24 & -4.361 & 24 & 23.286 & 24 & 0.000 & 24 & 0.007 & 24 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} Figure \ref{fig10} shows the $95\%$ probability interval for each quarterback's rank under both BARC and BARCW. We can see that the interval width is large for mediocre quarterbacks, and that is exactly where a majority of discrepancies occurred among different rankers and different rank aggregation methods. The interval estimates of aggregated ranks can separate several elite quarterbacks from the others. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{rank-CI-BARC.pdf} \includegraphics[width=0.49\textwidth]{rank-CI-BARCW.pdf} \caption{Interval estimates of aggregated ranks of NFL quarterbacks, as of week 12 in the 2014 NFL season. The plot on the left is given by BARC, and the right one is from BARCW. The differences between these two results are very small. For example, BARCW separates the interval estimates of Matthew Stafford and Mark Sanchez after down-weighting a few rankers.} \label{fig10} \end{figure} All methods except BARCW assume equal reliability for all input lists. Figure \ref{fig6} shows the posterior boxplots and posterior means of the weights. Out of the 13 rankers, seven are inferred to have significantly higher quality than the other six rankers. We further validated our weight estimates using the prediction accuracy of each expert at the end of the season. Table \ref{pred_acc} shows the means and standard deviations of two well-separated groups of rankers. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth]{qb-weight.pdf} \caption{Boxplots of posterior samples of weights given by BARCW. Red crosses mark the posterior means of the weights. 
Black points are samples outside the range between the first and third quartiles of the posterior samples, and black lines are collapsed boxes when the interquartile range is 0. Seven experts are inferred to be reliable rankers.} \label{fig6} \end{figure} \begin{table}[htb] \caption{Summary of 13 experts' prediction accuracy evaluated after the 2014 NFL season. Throughout the season, \emph{FantasyPros.com} compares each expert's player preferences to the actual outcomes. The prediction accuracy is calculated based on the incremental fantasy points implied by the ranking lists.} \label{pred_acc} \centering \begin{tabular}{|c|c|c|} \hline & ``reliable'' rankers & ``unreliable'' rankers\\ \hline mean (accuracy) & 0.589 & 0.550\\ \hline sd (accuracy) & 0.013 & 0.027\\ \hline \end{tabular} \end{table} Figure \ref{fig7} gives us intuition about the role of covariates in our rank aggregation. TD and Int, which stand for the percentages of touchdowns and interceptions thrown when attempting to pass, are the most significant covariates; touchdowns have a positive effect, while interceptions have a negative one. Based on football common sense, touchdowns and interceptions can directly impact the result of a game. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{coefplot.pdf} \caption{Posterior mean and 95$\%$ probability interval of $\beta$ given by BARCW in aggregating quarterback rankings. Please refer to Table~\ref{table2} for the covariate information; each column of covariates is standardized when used in BARCW.} \label{fig7} \end{figure} \subsection{Aggregating orthodontics data} As mentioned in Section 1, the orthodontics data set contains 69 ranking lists for each of the 9 groups of the orthodontic cases. 
With ranking lists produced by a group of high-profile specialists, the rank aggregation problem emerges because the average perception of experienced orthodontists is considered the cornerstone of systems for the evaluation of orthodontic treatment outcome \citep{liu2012consistency, song2014reliability, song2015validation}. The covariates for these cases are objective assessments of their teeth. It is quite difficult to aggregate ranking lists of many non-overlapping subgroups, as covariates are the only source of information available for bridging different groups. In addition, Table \ref{table3} shows that the rankers did not have very similar opinions. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth]{distplot_mu.pdf} \caption{Kendall tau distances between the posterior means of the $\bm{\mu}^{(j)}$s based on the BARCM results. The ranking discrepancy increases as the color shifts from dark to light.} \label{corr} \end{figure} Previously, \cite{liu2012consistency} and \cite{song2014reliability} assessed the reliability and the overall consistency of these experienced orthodontists through simple statistics including Spearman's correlation among these highly incomplete ranking lists within each subgroup of cases. To gain a deeper understanding of these ranking lists, we first study the heterogeneity among rankers using BARCM. We applied the Dirichlet process mixture model with $\gamma=1$. The 69 experts are clustered into 24 subgroups. The sizes of the leading three clusters are $(19,9,6)$. Other clusters have fewer than 5 rankers each. Figure \ref{corr} shows the Kendall tau distances among the posterior means of the $\bm{\mu}^{(j)}$s, which are the underlying ranking criteria of all rankers. We see that almost half of the rankers cannot be grouped into sizable clusters, indicating that their opinions are closer to the baseline distribution in the Dirichlet process than to other rankers. 
Because a ranking list drawn from the baseline distribution is just noise, the rankers in the small groups either were unreliable rankers or used information other than the available covariates in their ranking systems. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{weights-cat.pdf} \caption{Boxplots of rankers' weights by clusters. The weight of ranker $j$ is given by the posterior mean of $w_j$ in BARCW. The clusters are estimated from BARCM and ordered by size. The sizes of clusters 1-3 are $(19,9,6)$, while ``other'' combines all rankers from the remaining 21 fragmented clusters.} \label{weights-cat} \end{figure} We subsequently applied BARCW to this data set. Figure \ref{weights-cat} shows the box plot of rankers' weights by their estimated clusters from BARCM. The rankers in the largest cluster are mostly considered reliable rankers, while the rankers in the small clusters are labeled as low-quality rankers. The lower weights imply noisier ranking evaluations, which explains why the small clusters are not merged into the big ones. The weights of rankers in clusters 2 and 3 fall in the middle. This result is similar to our demonstration using simulation in Section \ref{sec:simu_barcm}. Among clusters 1-3, BARCW tends to down-weight the minority opinions when heterogeneous opinions exist. Based on the results from BARCM and BARCW, we conclude that there are three ranking opinions among half of the experts, while the others have considerable discrepancy that can be attributed to low individual reliability. Finally, we use both BARCW and BARCM for rank aggregation. The key to aggregating these nine non-overlapping groups of patients is to determine the rank of patients' orthodontic conditions using, but not overly relying on, the covariates. Tables \ref{top} and \ref{bottom} show the top and bottom cases in the aggregated ranking lists. 
Recall that BARCM aggregates opinions of the whole sample by averaging over all clusters with their corresponding proportions. The results from BARCW and BARCM are quite consistent with each other although they employ different assumptions. The Kendall distance between these two aggregated lists is 0.047. This supports our conclusion that the rankers' discrepancy can be mostly explained by their heterogeneous reliability. \begin{table}[htb] \centering \caption{The five cases that are considered to have the best conditions based on rank aggregation.} \label{top} \begin{tabular}{rcccccc} \hline & BARC & BARCW & BARCM & Cl. 1 & Cl. 2 & Cl. 3 \\ \hline 1 & H2 & G7 & G7 & E2 & A1 & G7 \\ 2 & E2 & E2 & E2 & H3 & G7 & A1 \\ 3 & G7 & H2 & H2 & G7 & H2 & E2 \\ 4 & H3 & H3 & H3 & H2 & E2 & A4 \\ 5 & H4 & H4 & A1 & F8 & H4 & H4 \\ \hline \end{tabular} \end{table} \begin{table}[htb] \centering \caption{The five cases that are considered to have the worst conditions based on rank aggregation.} \label{bottom} \begin{tabular}{rcccccc} \hline & BARC & BARCW & BARCM & Cl. 1 & Cl. 2 & Cl. 3 \\ \hline 108 & F4 & F4 & F4 & H5 & F4 & F4 \\ 107 & H5 & H5 & F10 & F10 & F10 & F10 \\ 106 & F10 & F10 & H5 & F4 & H5 & E6 \\ 105 & E6 & E6 & E6 & D11 & D11 & H5 \\ 104 & D11 & D11 & D11 & E6 & E6 & E8 \\ \hline \end{tabular} \end{table} Figure \ref{coefplot2} shows the coefficient plot of $\bm{\beta}$ in BARCW. It illustrates the role of covariates in our rank aggregation, especially in positioning those non-overlapping groups. Among the covariates, overjet, overbite and centerline all measure certain types of overall displacement, and thus are generally considered to have a stronger negative effect compared to the other local displacements in this study. \begin{figure}[htb] \centering \includegraphics[width=0.55\textwidth]{coefplot2.pdf} \caption{Posterior mean and 95$\%$ probability interval of $\beta$ given by BARCW in aggregating orthodontics data. 
Please refer to Table~\ref{table_cov} for the covariate information; each column of covariates is standardized when used in BARCW.} \label{coefplot2} \end{figure} \section{Discussion}\label{sec6} We described three model-based Bayesian rank aggregation methods (BARC, BARCW, BARCM) for combining different ranking lists when some covariates for the entities in consideration are also observed. With the help of covariates, these methods can accommodate various types of input ranking lists, including highly incomplete ones. Under the assumption of a homogeneous ranking opinion, BARCW learns the qualities of rankers from data, and over-weights high-quality ones in rank aggregation. BARCM, on the other hand, studies the possibility of having heterogeneous opinion groups among rankers under the same framework. All three methods generate uncertainty measures for the aggregated ranks. Our simulation studies and real-data applications validate the importance of covariate information and our estimation of rankers' qualities and their heterogeneous opinions. We note that our methods consider only the covariate information of the ranked entities. It is of interest to further incorporate available covariate information of rankers, which can be helpful for detecting rankers' qualities and clustering rankers into subgroups with different opinions. We leave this extension of BARC for future work. The foundation of our rank data modeling is the Thurstone-Mosteller-Daniels model, which can be extended in many ways. Although these models have been around for a long time, the standard MCMC procedure for their Bayesian inference does not mix well for our real-data applications due to the entangled latent structure of the models. We took advantage of the conjugacy of Gaussian distributions and exploited a parameter-expanded Gibbs sampler to improve the computation efficiency. 
We can also speed up the MCMC algorithm through parallelization when there are many rankers, i.e., when $m$ is large. For all three models, BARC, BARCW and BARCM, we can parallelize the Gibbs steps for updating $\{\bm{Z}_j\}_{j=1}^m$ given $\bm{\mu}$, or equivalently $(\bm{\alpha}, \bm{\beta})$. However, this full Bayesian inference still has limitations in computational scalability when dealing with very large datasets such as those arising from voting. An interesting direction for future work is to develop approximate likelihoods and Bayesian priors for the BARC model family in ``big-data'' settings, enabling both efficient point estimation and approximate Bayesian inference. \bibliographystyle{imsart-nameyear}
\section{Related work} Although the focus of the football analytics community has been mostly on developing metrics for measuring chance quality, there has also been some work on quantifying other types of actions like passes. Power et al.~\cite{power2017not} objectively measure the expected risk and reward of each pass using spatio-temporal tracking data. Gyarmati and Stanojevic~\cite{gyarmati2016qpass} value each pass by quantifying the value of having the ball in the origin and destination location of the pass using event data. Although their approach is similar in spirit to ours, our proposed approach takes more contextual information into account to value the passes. Furthermore, there has also been some work in the sports analytics community on more general approaches that aim to quantify several different types of actions~\cite{cervone2014pointwise,decroos2018hattrics,decroos2017starss,liu2018deepreinforcementlearning,schulte2015qfunction}. Decroos et al.~\cite{decroos2017starss} compute the value of each on-the-ball action in football (e.g., a pass or a dribble) by computing the difference between the values of the post-action and pre-action game states. Their approach distributes the expected reward of a possession sequence across the constituting actions, whereas our approach computes the expected reward for each pass individually. Cervone et al.~\cite{cervone2014pointwise} propose a method to predict points and to value decisions in basketball. Liu and Schulte as well as Schulte et al.~\cite{liu2018deepreinforcementlearning,schulte2015qfunction} present a deep reinforcement learning approach to address a similar task for ice hockey. \section{Dataset} \label{section:dataset} Our dataset comprises game data for 9,061 games in the English Premier League, Spanish LaLiga, German 1. Bundesliga, Italian Serie A, French Ligue Un, Belgian Pro League and Dutch Eredivisie. 
The dataset spans the 2014/2015 through 2017/2018 seasons and was provided by SciSports' partner Wyscout.\footnote{\url{https://www.wyscout.com}} For each game, the dataset contains information on the players (i.e., name, date of birth and position) and the teams (i.e., starting lineup and substitutions) as well as play-by-play event data describing the most notable events that happened on the pitch. For each event, the dataset provides the following information: timestamp (i.e., half and time), team and player performing the event, type (e.g., pass or shot) and subtype (e.g., cross or high pass), and start and end location. Table~\ref{table:data-example} shows an excerpt from our dataset showing five consecutive passes. \begin{table}[htp!] \centering \caption{An excerpt from our dataset showing five consecutive passes.} \label{table:data-example} \begin{tabular}{rrrrrrrrrr} \toprule \textbf{half} & \textbf{time (s)} & \textbf{team} & \textbf{player} & \textbf{type} & \textbf{subtype} & \textbf{start\_x} & \textbf{end\_x} & \textbf{start\_y} & \textbf{end\_y} \tabularnewline \midrule 1 & 8.642 & 679 & 217031 & 8 & 85 & 58 & 66 & 34 & 9 \tabularnewline 1 & 10.167 & 679 & 86307 & 8 & 85 & 66 & 85 & 9 & 17 \tabularnewline 1 & 11.987 & 679 & 3443 & 8 & 85 & 85 & 90 & 17 & 25 \tabularnewline 1 & 13.681 & 679 & 4488 & 8 & 80 & 90 & 89 & 25 & 44 \tabularnewline 1 & 14.488 & 679 & 3682 & 8 & 85 & 89 & 81 & 44 & 39 \tabularnewline \bottomrule \end{tabular} \end{table} \section{Conclusion} This paper introduced a novel approach for measuring football players' on-the-ball contributions from passes using play-by-play event data collected during football games. Viewing a football game as a series of possession sequences, our approach values each pass by computing the difference between the values of its constituting possession sequence before and after the pass. 
To value a possession sequence, our approach combines a k-nearest-neighbor search with dynamic time warping, where the value of the possession sequence reflects its likeliness of yielding a goal. In the future, we aim to improve our approach by accounting for the strength of the opponent and more accurately valuing the possession sequences by taking more contextual information into account. To compute the similarities between the possession sequences, we also plan to investigate spatio-temporal convolution kernels (e.g.,~\cite{knauf2014spatio}) as an alternative for dynamic time warping and to explore more sophisticated techniques for clustering the possession sequences. \section{Introduction} The performances of individual football players in games are hard to quantify due to the low-scoring nature of football. During major tournaments like the FIFA World Cup, the organizers\footnote{\url{https://www.fifa.com/worldcup/statistics/}} and mainstream sports media report basic statistics like distance covered, number of assists, number of saves, number of goal attempts, and number of completed passes~\cite{burnton2018best}. While these statistics provide some insights into the performances of individual football players, they largely fail to account for the circumstances under which the actions were performed. For example, successfully completing a forward pass deep into the half of the opponent is both more difficult and more valuable than performing a backward pass on the own half without any pressure from the opponent whatsoever. In recent years, football analytics researchers and enthusiasts alike have proposed several performance metrics for individual players. Although the majority of these metrics focuses on measuring the quality of shots, there has been an increasing interest in quantifying other types of individual player actions~\cite{decroos2018hattrics,gyarmati2016qpass,power2017not}. 
This recent focus shift has been fueled by the availability of more extensive data and the observation that shots only constitute a small portion of the on-the-ball actions that football players perform during games~\cite{power2017not}. In the present work, we use a dataset comprising over twelve million on-the-ball actions of which only 2\% are shots. Instead, the majority of the on-the-ball actions are passes (75\%), dribbles (13\%), and set pieces (10\%). In this paper, we introduce a novel approach to measure football players' on-the-ball contributions from passes during games. Our approach measures the expected impact of each pass on the scoreline. We value a pass by computing the difference between the expected reward of the possession sequence constituting the pass both before and after the pass. That is, a pass receives a positive value if the expected reward of the possession sequence after the pass is higher than the expected reward before the pass. Our approach employs a k-nearest-neighbor search with dynamic time warping (DTW) as a distance function to determine the expected reward of a possession sequence. Our empirical evaluation on an extensive real-world dataset shows that our approach is capable of identifying different types of impactful players like the ball-playing defender Ragnar Klavan (Liverpool FC), the advanced playmaker Mesut \"{O}zil (Arsenal), and the deep-lying playmaker Toni Kroos (Real Madrid). \section{Approach} \label{section:approach} Valuing individual actions such as passes is challenging due to the low-scoring nature of football. Since football players get only a few occasions during games to earn reward from their passes (i.e., each time a goal is scored), we resort to computing the passes' expected rewards instead of distributing the actual rewards from goals across the preceding passes. More specifically, we compute the number of goals expected to arise from a given pass if that pass were repeated many times. 
To this end, we propose a four-step approach to measure football players' expected contributions from their passes during football games. In the remainder of this section, we discuss each step in turn. \subsection{Constructing possession sequences} We split the event stream for each game into a set of possession sequences, which are sequences of events where the same team remains in possession of the ball. The first possession sequence in each half starts with the kick-off. The end of one possession sequence and thus also the start of the following possession sequence is marked by one of the following events: a team losing possession (e.g., due to an inaccurate pass), a team scoring a goal, a team committing a foul (e.g., an offside pass), or the ball going out of play. \subsection{Labeling possession sequences} We label each possession sequence by computing its expected reward. When a possession sequence \textit{does not} result in a shot, the sequence receives a value of zero. When a possession sequence \textit{does} result in a shot, the sequence receives the expected-goals value of the shot. This value reflects how often the shot can be expected to yield a goal if the shot were repeated many times. For example, a shot having an expected-goals value of 0.13 is expected to translate into 13 goals if the shot were repeated 100 times. Building on earlier work, we train an expected-goals model to value shots~\cite{decroos2017predicting}. We represent each shot by its location on the pitch (i.e., x and y location), its distance to the center of the goal, and the angle between its location and the goal posts. We label the shots resulting in a goal as positive examples and all other shots as negative examples. We train a binary classification model that assigns a probability of scoring to each shot. 
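The geometric shot features described above can be computed from coordinates alone. The sketch below assumes a 105 m by 68 m pitch with the goal centred at (105, 34) and a standard 7.32 m goal mouth; the dataset's own coordinate conventions may differ, so these constants are assumptions:

```python
import math

GOAL_X, GOAL_Y = 105.0, 34.0  # assumed goal centre on a 105 x 68 pitch
GOAL_WIDTH = 7.32             # standard goal-mouth width in metres

def shot_features(x, y):
    """Distance to the centre of the goal and the angle (in radians)
    subtended by the two goal posts at the shot location."""
    dx = GOAL_X - x
    dy = GOAL_Y - y
    dist = math.hypot(dx, dy)
    angle = math.atan2(GOAL_WIDTH * dx,
                       dx ** 2 + dy ** 2 - (GOAL_WIDTH / 2) ** 2)
    return dist, angle

# a centred shot from the penalty spot (11 m out) versus one from 25 m
d_near, a_near = shot_features(94.0, 34.0)
d_far, a_far = shot_features(80.0, 34.0)
```

Closer, more central shots subtend a wider angle, which is the main geometric driver of the expected-goals value.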
\subsection{Valuing passes} \label{subsection:valuing-passes} \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{figures/spvmethod.png} \caption{Visualization of our approach to value passes, where we value the last pass of the possession sequence shown in green. We obtain a pass value of 0.15, which is the difference between the value of the possession subsequence after the pass (0.45) and the value of the possession subsequence before the pass (0.30).} \label{fig:method} \end{center} \vspace{-20pt} \end{figure} We split each possession sequence into a set of possession subsequences. Each subsequence starts with the same event as the original possession sequence and ends after one of the passes in that sequence. For example, a possession sequence consisting of pass 1, pass 2, a dribble, and pass 3 collapses into a set of three possession subsequences. The first subsequence consists of pass 1, the second consists of pass 1 and pass 2, and the third consists of pass 1, pass 2, the dribble, and pass 3. We value a given pass by computing the difference between the expected reward of the possession subsequence after that pass and the expected reward of the possession subsequence before that pass. Hence, the value of the pass reflects an increase or a decrease in expected reward. We assume that a team can only earn reward whenever it is in possession of the ball. If the pass is the first in its possession sequence, we set the expected reward of the possession subsequence before the pass to zero. If the pass is unsuccessful and thus marks the end of its possession subsequence, we set the expected reward of the possession subsequence after the pass to zero. We compute the expected reward of a possession subsequence by first performing a k-nearest-neighbors search and then averaging the labels of the k most-similar possession subsequences. 
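This bookkeeping reduces to a small function once the expected reward of a subsequence is available; a sketch, where the reward estimates would come from the k-nearest-neighbors search and the example numbers are the ones from the figure:

```python
def pass_value(reward_before, reward_after, is_first_pass, is_successful):
    """Value of a pass: expected reward of the possession subsequence
    after the pass minus that of the subsequence before it.
    Conventions from the text: the reward before the first pass is
    zero, and an unsuccessful pass ends its subsequence with reward
    zero."""
    before = 0.0 if is_first_pass else reward_before
    after = reward_after if is_successful else 0.0
    return after - before

# the worked example: 0.45 after the pass, 0.30 before, value 0.15
value = pass_value(0.30, 0.45, is_first_pass=False, is_successful=True)
```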
We use dynamic time warping (DTW) to measure the similarity between two possession subsequences~\cite{bemdt1994using}. We interpolate the possession subsequences and obtain the x and y coordinates at fixed one-second intervals. We first apply DTW to the x coordinates and y coordinates separately and then sum the differences in both dimensions. To speed up the k-nearest-neighbors search, we reduce the number of computations by first clustering the possession subsequences and then performing DTW within each cluster. We divide the pitch into a grid of cells, where each cell is 15 meters long and 17 meters wide. Hence, a default pitch of 105 meters long and 68 meters wide yields a 7-by-4 grid. We represent each cluster as an origin-destination pair and thus obtain 784 clusters (i.e., 28 origins $\times$ 28 destinations). We assign each possession subsequence to exactly one cluster based on its start and end location on the pitch. Figure~\ref{fig:method} shows a visualization of our approach for valuing passes. In this example, we aim to value the last pass in the possession sequence shown in green (top-left figure). First, we compute the value of the possession subsequence before the pass (top-right figure). We compute the average of the labels of the two nearest neighbors, which are 0.0 and 0.6, and obtain a value of 0.3. Second, we compute the value of the possession subsequence after the pass (bottom-left figure). We compute the average of the labels of the two nearest neighbors, which are 0.4 and 0.5, and obtain a value of 0.45. Third, we compute the difference between the value after the pass and the value before the pass to obtain a pass value of 0.15 (bottom-right figure). \subsection{Rating players} We rate a player by first summing the values of his passes for a given period of time (e.g., a game, a sequence of games or a season) and then normalizing the obtained sum per 90 minutes of play. 
We consider all types of passes, including open-play passes, goal kicks, corner kicks, and free kicks. \section{Experimental evaluation} \label{section:results} In this section, we present an experimental evaluation of our proposed approach. We introduce the datasets, present the methodology, investigate the impact of the parameters, and present results for the 2017/2018 season. \subsection{Datasets} We split the available data presented in Section~\ref{section:dataset} into three datasets: a train set, a validation set, and a test set. We respect the chronological order of the games. Our train set covers the 2014/2015 and 2015/2016 seasons, our validation set covers the 2016/2017 season, and our test set covers the 2017/2018 season. Table~\ref{tbl:datasets} shows the characteristics of our three datasets. \begin{table} \centering \caption{The characteristics of our three datasets.} \label{tbl:datasets} \begin{tabular}{lrrr} \toprule & \textbf{Train set} & \textbf{Validation set} & \textbf{Test set} \tabularnewline \midrule Games & {4,253} & {2,404} & {2,404} \tabularnewline Possession sequences & {1,878,593} & {972,526} & {970,303} \tabularnewline Passes & {3,425,285} & {1,998,533} & {2,023,730} \tabularnewline Shots & {95,381} & {53,617} & {54,311} \tabularnewline Goals & {9,853} & {5,868} & {5,762} \tabularnewline \bottomrule \end{tabular} \end{table} \subsection{Methodology} \label{subsection:methodology} We use the \texttt{XGBoost} algorithm to train the expected-goals model.\footnote{\url{https://xgboost.readthedocs.io/en/latest/}} After optimizing the parameters using a grid search, we set the number of estimators to 500, the learning rate to 0.01, and the maximum tree depth to 5. We use the dynamic time warping implementation provided by the \texttt{dtaidistance} library to compute the distances between the possession subsequences.\footnote{\url{https://github.com/wannesm/dtaidistance}} We do not restrict the warping paths in the distance computations. 
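The distance computation above (DTW on the interpolated x and y coordinates separately, with unrestricted warping paths, then summed) can be sketched with a plain dynamic program. We use the \texttt{dtaidistance} library in practice; the pure-Python version below, with absolute differences as the local cost, is only an illustrative stand-in:

```python
def dtw(a, b):
    """Unrestricted dynamic-time-warping distance between two 1-D
    coordinate series (absolute difference as local cost; the local
    cost is an illustrative choice)."""
    inf = float("inf")
    n, m = len(a), len(b)
    acc = [[inf] * (m + 1) for _ in range(n + 1)]
    acc[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i][j] = cost + min(acc[i - 1][j],      # stretch a
                                   acc[i][j - 1],      # stretch b
                                   acc[i - 1][j - 1])  # step both
    return acc[n][m]

def subsequence_distance(xs1, ys1, xs2, ys2):
    """Sum the per-dimension DTW differences, as in the text."""
    return dtw(xs1, xs2) + dtw(ys1, ys2)
```

Warping lets two trajectories of different durations match: a series that lingers in one location simply re-uses the corresponding point of the other series at no extra cost.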
Inspired by the work from Liu and Schulte on evaluating player performances in ice hockey, we evaluate our approach by predicting the outcomes of future games as we expect our pass values to be predictors of future performances~\cite{liu2018deepreinforcementlearning}. We predict the outcomes for 1,172 games in the English Premier League, Spanish LaLiga, German 1. Bundesliga, Italian Serie A and French Ligue Un. We only consider games involving teams for which player ratings are available for at least one player in each line (i.e., goalkeeper, defender, midfielder or striker). We assume that the number of goals scored by each team in each game is Poisson distributed~\cite{maher1982modelling}. We use the player ratings obtained on the validation set to determine the means of the Poisson random variables representing the expected number of goals scored by the teams in the games in the test set. We compute the Poisson means by summing the ratings for the players in the starting line-up. For players who played at least 900 minutes in the 2016/2017 season, we consider their actual contributions. For the remaining players, we use the average contribution of the team's players in the same line. Since the average reward gained from passes (i.e., 0.07 goals per team per game) only reflects around 5\% of the average reward gained during games (i.e., 1.42 goals per team per game), we transform the distribution over the total player ratings per team per game to follow a similar distribution as the average number of goals scored by each team in each game in the validation set. We compute the probabilities for a home win, a draw, and an away win using the Skellam distribution~\cite{karlis2008bayesian}. \subsection{Impact of the parameters} We now investigate how the clustering step impacts the results and what the optimal number of neighbors in the k-nearest-neighbors search is. 
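The outcome-prediction step in the methodology above can be sketched as follows. Given the two Poisson means, the home-win, draw, and away-win probabilities (equivalently obtainable from the Skellam distribution of the goal difference) can be computed by summing over a truncated goal grid; the truncation bound and function names are our own choices.

```python
from math import exp, factorial


def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)


def outcome_probs(mu_home, mu_away, max_goals=15):
    """P(home win), P(draw), P(away win) under independent Poisson goal
    counts; summing a truncated grid is numerically equivalent to using
    the Skellam distribution of the goal difference."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, mu_home) * poisson_pmf(a, mu_away)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away
```

For typical scoring rates (around 1.42 goals per team per game, as reported above), the truncation at 15 goals loses negligible probability mass.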
\subsubsection{Impact of the clustering step.} For an increasing number of possession sequences, performing the k-nearest-neighbors search quickly becomes prohibitively expensive. For example, obtaining results on our test set would require over 1.8 trillion distance computations (i.e., 1,878,593 possession sequences in the train set $\times$ 970,303 possession sequences in the test set). To reduce the number of distance computations, we exploit the observation that possession sequences starting or ending in entirely different locations on the pitch are unlikely to be similar. For example, a possession sequence starting in a team's penalty area is unlikely to be similar to a possession sequence starting in the opponent's penalty area. More specifically, as explained in Section~\ref{subsection:valuing-passes}, we first cluster the possession sequences according to their start and end locations and then perform the k-nearest-neighbors search within each cluster. To evaluate the impact of the clustering step on our results, we arbitrarily sample 100 games from the train set and 50 games from the validation set. The resulting train and validation subsets consist of 68,907 sequences and 35,291 sequences, respectively. Table~\ref{tbl:clustering} reports the total runtimes, the number of clusters, and the average cluster size for three settings: no clustering, clustering with grid cells of 15 by 17 meters, and clustering with grid cells of 5 by 4 meters. As expected, clustering the possession sequences speeds up our approach considerably. In addition, we also investigate the impact of the clustering step in a more qualitative fashion. We randomly sample three possession sequences and 100 games comprising 32,245 possession sequences from our training set. We perform a three-nearest-neighbors search in both the no-clustering setting and the clustering setting with grid cells of 15 by 17 meters. 
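The cell assignment behind this clustering is straightforward to sketch: the 15 m by 17 m cells tile the default 105 m by 68 m pitch exactly, giving the 7-by-4 grid of 28 cells and 784 origin-destination clusters described above. The helper names are ours.

```python
PITCH_LEN, PITCH_WID = 105.0, 68.0   # default pitch dimensions, metres
CELL_LEN, CELL_WID = 15.0, 17.0      # cell size used in our experiments

N_COLS = int(PITCH_LEN // CELL_LEN)  # 7 columns
N_ROWS = int(PITCH_WID // CELL_WID)  # 4 rows


def cell_of(x, y):
    """Index (0..27) of the grid cell containing pitch location (x, y)."""
    col = min(int(x // CELL_LEN), N_COLS - 1)  # clamp the far touchlines
    row = min(int(y // CELL_WID), N_ROWS - 1)
    return row * N_COLS + col


def cluster_of(sequence):
    """Cluster of a possession subsequence: its (origin, destination)
    cell pair, based on the first and last (x, y) location."""
    (x0, y0), (x1, y1) = sequence[0], sequence[-1]
    return cell_of(x0, y0), cell_of(x1, y1)
```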
Figure~\ref{fig:sequences_preclus} shows the three nearest neighbors for each of the three possession sequences in both settings, where the results for the clustering setting are shown on the left and the results for the no-clustering setting are shown on the right. Although the obtained possession sequences are different, the three-nearest-neighbors search obtains highly similar neighbors in both settings. \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[width=0.7\textwidth]{figures/sequences_preclus.png} \caption{Visualization of the three nearest neighbors obtained for three randomly sampled possession sequences in both the clustering setting with grid cells of 15 by 17 meters and the no-clustering setting. The results for the former setting are on the left, whereas the results for the latter setting are on the right.} \label{fig:sequences_preclus} \end{center} \end{figure} \begin{table}[H] \centering \caption{The characteristics of three settings: no clustering, clustering with grid cells of 15 by 17 meters, and clustering with grid cells of 5 by 4 meters.} \label{tbl:clustering} \begin{tabular}{lrrr} \toprule & \textbf{No clustering} & \textbf{Cell: 15$\times$17} & \textbf{Cell: 5$\times$4}\tabularnewline \midrule Total runtime & {270 minutes} & {12 minutes} & {150 minutes} \tabularnewline Number of clusters & 1 & 784 & 127,449 \tabularnewline Average cluster size & {68,907} & {87.89} & {0.54} \tabularnewline \bottomrule \end{tabular} \end{table} \subsubsection{Optimal number of neighbors in the k-nearest-neighbors search.} We investigate the optimal number of neighbors in the k-nearest-neighbors search to value passes. We try the following values for the parameter $k$: 1, 2, 5, 10, 20, 50 and 100. We predict the outcomes of the games in the test set as explained in Section~\ref{subsection:methodology}. Table~\ref{tbl:loglosses} shows the logarithmic losses for each of the values for $k$. We find that 10 is the optimal number of neighbors. 
In addition, we compare our approach to two baseline approaches. The first baseline approach is the pass accuracy. The second baseline approach is the prior distribution over the possible game outcomes, where we assign a probability of 48.42\% to a home win, 28.16\% to an away win, and 23.42\% to a draw. Our approach outperforms both baseline approaches. \subsection{Results} We now present the players who provided the highest contributions from passes during the 2017/2018 season. We present the overall ranking as well as the top-ranked players under the age of 21. Furthermore, we investigate the relationship between a player's average value per pass and his total number of passes per 90 minutes as well as the distribution of the player ratings per position. \begin{table}[H] \centering \caption{Logarithmic losses for predicting the games in the test set for different numbers of nearest neighbors $k$ in order of increasing logarithmic loss.} \label{tbl:loglosses} \begin{tabular}{lr} \toprule \textbf{Setting} & \textbf{Logarithmic loss} \tabularnewline \midrule $k=10$ & {1.0521} \tabularnewline $k=5$ & {1.0521} \tabularnewline $k=20$ & {1.0528} \tabularnewline $k=2$ & {1.0560} \tabularnewline $k=50$ & {1.0579} \tabularnewline $k=100$ & {1.0594} \tabularnewline $k=1$ & {1.0725} \tabularnewline Pass accuracy & {1.0800} \tabularnewline Prior distribution & {1.0860} \tabularnewline \bottomrule \end{tabular} \end{table} Following the experiments above, we set the number of nearest neighbors $k$ to 10 and perform clustering with grid cells of 15 meters by 17 meters. We compute the expected reward per 90 minutes for the players in the 2017/2018 season (i.e., the test set) and perform the k-nearest-neighbors search to value their passes on all other seasons (i.e., the train and validation set). Table~\ref{tbl:ranking-1718} shows the top-ten-ranked players who played at least 900 minutes during the 2017/2018 season in the English Premier League, Spanish LaLiga, German 1. 
Bundesliga, Italian Serie A, and French Ligue Un. Ragnar Klavan, who is a ball-playing defender for Liverpool FC, tops our ranking with an expected contribution per 90 minutes of 0.1133. Furthermore, Arsenal's advanced playmaker Mesut \"{O}zil ranks second, whereas Real Madrid's deep-lying playmaker Toni Kroos ranks third. Table~\ref{tbl:ranking-1718-talents} shows the top-five-ranked players under the age of 21 who played at least 900 minutes during the 2017/2018 season in Europe's top-five leagues, the Dutch Eredivisie or the Belgian Pro League. Teun Koopmeiners (AZ Alkmaar) tops our ranking with an expected contribution per 90 minutes of 0.0806. Furthermore, Real Madrid-loanee Martin {\O}degaard (SC Heerenveen) ranks second, whereas Nikola Milenkovi\'{c} (ACF Fiorentina) ranks third. \begin{table}[H] \vspace{-10pt} \centering \caption{The top-ten-ranked players who played at least 900 minutes during the 2017/2018 season in Europe's top-five leagues.} \label{tbl:ranking-1718} \begin{tabular}{rllr} \toprule \textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Contribution P90} \tabularnewline \midrule 1 & Ragnar Klavan & Liverpool FC & 0.1133 \tabularnewline 2 & Mesut \"{O}zil & Arsenal & 0.1034 \tabularnewline 3 & Toni Kroos & Real Madrid & 0.0943 \tabularnewline 4 & Manuel Lanzini & West Ham United & 0.0892 \tabularnewline 5 & Joan Jord\'{a}n & SD Eibar & 0.0830 \tabularnewline 6 & Esteban Granero & Espanyol & 0.0797 \tabularnewline 7 & Nuri Sahin & Borussia Dortmund & 0.0796 \tabularnewline 8 & Mahmoud Dahoud & Borussia Dortmund & 0.0775 \tabularnewline 9 & Granit Xhaka & Arsenal & 0.0774 \tabularnewline 10 & Faouzi Ghoulam & SSC Napoli & 0.0765 \tabularnewline \bottomrule \end{tabular} \vspace{-10pt} \end{table} \begin{table}[H] \vspace{-10pt} \centering \caption{The top-five-ranked players under the age of 21 who played at least 900 minutes during the 2017/2018 season in Europe's top-five leagues, the Dutch Eredivisie or the Belgian Pro League.} 
\label{tbl:ranking-1718-talents} \begin{tabular}{rllr} \toprule \textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Contribution P90} \tabularnewline \midrule 1 & Teun Koopmeiners & AZ Alkmaar & 0.0806 \tabularnewline 2 & Martin {\O}degaard & SC Heerenveen & 0.0639 \tabularnewline 3 & Nikola Milenkovi\'{c} & ACF Fiorentina & 0.0617 \tabularnewline 4 & Sander Berge & KRC Genk & 0.0601 \tabularnewline 5 & Maximilian W\"{o}ber & Ajax & 0.0599 \tabularnewline \bottomrule \end{tabular} \vspace{-10pt} \end{table} Figure~\ref{fig:top10scatter} shows whether players earn their pass contribution by performing many passes per 90 minutes or by performing high-value passes. The five players with the highest contribution per 90 minutes are highlighted in red. While Lanzini and Joan Jord\'{a}n do not perform many passes per 90 minutes, they obtain a rather high average value per pass. The dotted line drawn through Klavan contains all points with the same contribution per 90 minutes as him. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\textwidth]{figures/top10scatter.png} \caption{Scatter plot showing the correlation between a player's average value per pass and his total number of passes per 90 minutes. The five players with the highest contribution per 90 minutes are highlighted in red.} \label{fig:top10scatter} \end{center} \end{figure} Figure~\ref{fig:positions_kde} presents a comparison between a player's pass accuracy and pass contribution per 90 minutes. In terms of pass accuracy, forwards rate low as they typically perform passes in more crowded areas of the pitch, while goalkeepers rate high. In terms of pass contribution, goalkeepers rate low, while especially midfielders rate high. 
\begin{figure}[htb] \centering \subfloat{\includegraphics[width=.4\textwidth]{figures/Goalkeeper_kde.png}}\quad \subfloat{\includegraphics[width=.4\textwidth]{figures/Defender_kde.png}}\\ \subfloat{\includegraphics[width=.4\textwidth]{figures/Midfielder_kde.png}}\quad \subfloat{\includegraphics[width=.4\textwidth]{figures/Forward_kde.png}} \caption{Density plots per position showing the correlation between the pass accuracy and the pass contribution per 90 minutes.} \label{fig:positions_kde} \vspace{-10pt} \end{figure} \subsection{Application: Replacing Manuel Lanzini} We use our pass values to find a suitable replacement for Manuel Lanzini. The Argentine midfielder, who excelled at West Ham United throughout the 2017/2018 season, ruptured his right knee's anterior cruciate ligament while preparing for the 2018 FIFA World Cup. West Ham United are expected to sign a replacement for Lanzini, who will likely miss the entire 2018/2019 season. To address this task, we define a ``Lanzini similarity function'' that accounts for a player's pass contribution per 90 minutes, number of passes per 90 minutes and pass accuracy. We normalize each of the three pass metrics before feeding them into the similarity function. Manuel Lanzini achieves a high pass contribution per 90 minutes despite a low pass accuracy, which suggests that the midfielder prefers high-risk, high-value passes over low-risk, low-value passes. Table~\ref{tbl:lanzini} shows the five most-similar players to Lanzini born after July 1st, 1993, who played at least 900 minutes in the 2017/2018 season. Mahmoud Dahoud (Borussia Dortmund) tops the list ahead of Joan Jord\'{a}n (SD Eibar) and Naby Ke\"{i}ta (RB Leipzig), who moved to Liverpool during the summer of 2018. 
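The exact form of the similarity function is not spelled out above, so the following is one plausible instantiation: min-max-normalize the three pass metrics over the candidate pool and convert the Euclidean distance to the target player into a bounded similarity score. All names and the normalization/distance choices are illustrative assumptions.

```python
import math


def normalise(values):
    """Min-max normalisation of a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]


def similarity_to_target(players, target_name):
    """Rank players by closeness to a target player in normalised metric
    space. `players` maps a name to a tuple of
    (pass contribution per 90, passes per 90, pass accuracy)."""
    names = list(players)
    cols = list(zip(*(players[n] for n in names)))          # one list per metric
    normed = list(zip(*(normalise(list(c)) for c in cols))) # back to per-player
    vec = dict(zip(names, normed))
    t = vec[target_name]
    # Bounded similarity: 1 at identical vectors, 0 at opposite corners.
    scores = {n: 1.0 - math.dist(vec[n], t) / math.sqrt(len(t))
              for n in names if n != target_name}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```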
\begin{table}[H] \vspace{-10pt} \centering \caption{The five most-similar players to Manuel Lanzini born after July 1st, 1993, who played at least 900 minutes in the 2017/2018 season.} \label{tbl:lanzini} \begin{tabular}{rllr} \toprule \textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Similarity score} \tabularnewline \midrule 1 & Mahmoud Dahoud & Borussia Dortmund & 0.9955 \tabularnewline 2 & Joan Jord\'{a}n & SD Eibar & 0.9881 \tabularnewline 3 & Naby Ke\"{i}ta & RB Leipzig & 0.9794 \tabularnewline 4 & Dominik Kohr & Bayer 04 Leverkusen& 0.9717 \tabularnewline 5 & Medr\'{a}n & Deportivo Alav\'{e}s & 0.9591 \tabularnewline \bottomrule \end{tabular} \end{table}
\section{Conclusion} \label{sec:conclude} The motivation for {\textsc{\small Voyageur}}\ is based on the discrepancy between the needs of users searching for services and the current state of search engines. The ideas of {\textsc{\small Voyageur}}\ are applicable to many other verticals beyond travel. At the core of the technical challenges that {\textsc{\small Voyageur}}\ and systems like it need to address is the ability to discover and aggregate evidence from textual reviews in response to user queries. This is a technical challenge that draws upon techniques from NLP, IR and Database technologies. \section{Demo of {\textsc{\small Voyageur}}} \label{sec:demo} \section{Implementation of {\textsc{\small Voyageur}}} \label{sec:implementation} We briefly touch upon the technology underlying {\textsc{\small Voyageur}}. \vspace{-2mm} \subsection{A subjective database engine} As mentioned, the main challenge of building a successful experiential search engine is the modeling and querying of the \emph{subjective attributes}, where there is typically no ground truth to the values of such attributes. Examples of such attributes include the cleanliness of hotel rooms, quality of the food served, and cultural value of tourist attractions. They are not explicitly modeled in today's search engines and therefore not directly queryable. {\textsc{\small Voyageur}}\ is developed on top of {\textsc{\small OpineDB}}~\cite{opinedb19corr}, a \emph{subjective database engine}. {\textsc{\small OpineDB}}\ goes beyond traditional database engines by supporting the modeling, extraction, aggregating, and effective query processing of the subjective data. Next, we illustrate the key design elements of {\textsc{\small OpineDB}}\ by showcasing its application to hotel search in {\textsc{\small Voyageur}}. \smallskip \noindent \textbf{Data model, extraction, and aggregation. 
} The main challenge in modeling subjective attributes is the wide range of linguistic variations with which the attributes are described in text. Consider an attribute {\tt room\_quietness} of hotels. The review text can take various forms, such as (1) ``{\em the neighborhood seems very quiet at night}'', (2) ``{\em on busy street with traffic noise}'', or (3) ``{\em quiet and peaceful location}''. In addition, {\textsc{\small OpineDB}}\ needs to {\em aggregate} these phrases into a meaningful signal for answering queries, which may themselves include new linguistic variations. {\textsc{\small OpineDB}}\ models subjective attributes with a new data type called the {\em linguistic domain} and provides an aggregated view of the linguistic domain through a {\em marker summary}. The linguistic domain of an attribute contains all phrases describing the attribute in the reviews, e.g., ``quiet at night'', ``traffic noise'', and ``peaceful location''. A subset of phrases is then chosen as the {\em domain markers} (or {\em markers} for short) for each linguistic domain. The phrases are aggregated based on the markers to constitute the marker summary. For quietness, the markers might be $[$very\_quiet, average, noisy, very\_noisy$]$. To construct the marker summary of a hotel's quietness, {\textsc{\small OpineDB}}\ needs to assign each quietness phrase to its closest marker and compute the frequencies of the markers. For example, the summary $[$very\_quiet:20, average:70, noisy:30, very\_noisy:10$]$ for a hotel would represent that the hotel is closer to being {\tt average} in quietness than to the other markers. The linguistic domain is obtained by extracting phrases from reviews; various techniques are available for this task in opinion mining and sentiment analysis \cite{liu2012sentiment, absa}. The marker summaries are currently histograms computed from the extraction relations, but we can also leverage more complex aggregate functions. 
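To make the aggregation concrete, here is a toy sketch of building a marker summary. The keyword lexicon below is a stand-in we invented for the learned phrase-to-marker assignment; {\textsc{\small OpineDB}}\ matches phrases to markers with NLP similarity, not a hand-written table.

```python
from collections import Counter

MARKERS = ["very_quiet", "average", "noisy", "very_noisy"]

# Hypothetical keyword lexicon standing in for a learned
# phrase-to-marker similarity model.
LEXICON = {
    "quiet": "very_quiet", "peaceful": "very_quiet",
    "ok": "average", "average": "average",
    "noise": "noisy", "noisy": "noisy", "traffic": "noisy",
    "loud": "very_noisy", "unbearable": "very_noisy",
}


def closest_marker(phrase):
    """Assign an extracted phrase to its closest marker (toy version)."""
    for word in phrase.lower().split():
        if word in LEXICON:
            return LEXICON[word]
    return "average"  # fall back to the neutral marker


def marker_summary(phrases):
    """Aggregate extracted quietness phrases into a marker histogram."""
    counts = Counter(closest_marker(p) for p in phrases)
    return {m: counts.get(m, 0) for m in MARKERS}
```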
\smallskip \noindent \textbf{Query processing. } The query predicates from Box A are formulated as an SQL-like query for {\textsc{\small OpineDB}}\ to process. \vspace*{1mm} \begin{tabular}{p{1cm}p{3in}} {\bf select} & * \hspace{0.3cm}{\bf from} {\tt Hotels} $h$\\ {\bf where} & $\mathtt{price\_pn} \leq 350$ {\bf and} $\mathtt{price\_pn} \geq 200$ {\bf and} \\ & ``{\em quiet}'' {\bf and} ``{\em friendly staff}'' \vspace*{1mm} \end{tabular} Here, price\_pn is an objective attribute of the Hotels relation, while ``{\em quiet}'' and ``{\em friendly staff}'' are subjective predicates. {\textsc{\small OpineDB}}\ needs to interpret these predicates using the linguistic domains in order to find the best subjective attributes of the Hotels relation that can be used to answer them. In general, this is not a trivial matching problem since the query terms may not be directly modeled in the schema. For example, the user may ask for ``romantic hotels'', but the attribute for romance might not be in the schema. For such cases, {\textsc{\small OpineDB}}\ leverages a combination of NLP and IR techniques to find a best-effort reformulation of the query term into a combination of schema attributes. For example, for ``romantic hotels'', {\textsc{\small OpineDB}}\ will match it to a combination of ``exceptional service'' and ``luxurious bathrooms'', which are modeled by the schema. After computing the interpretation, {\textsc{\small OpineDB}}\ uses the marker summaries to compute a {\em membership score} for each pair of hotel and query predicate. Finally, {\textsc{\small OpineDB}}\ combines multiple predicates using a variant of fuzzy logic. 
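The membership scoring and predicate combination described above can be sketched as follows, under assumptions of ours: the membership score of a predicate is taken as a marker-weighted fraction of review mentions, and predicates are conjoined with the min t-norm as one standard fuzzy-logic choice. {\textsc{\small OpineDB}}'s actual scoring and fuzzy-logic variant are more elaborate.

```python
def membership(summary, weights):
    """Membership score of a predicate against a marker summary:
    the weighted fraction of review mentions supporting the predicate,
    with per-marker weights in [0, 1] (e.g., a 'quiet' predicate might
    weight very_quiet fully and average by a half)."""
    total = sum(summary.values())
    if total == 0:
        return 0.0
    return sum(weights.get(m, 0.0) * c for m, c in summary.items()) / total


def combine(scores):
    """Conjoin per-predicate membership scores. min is the Goedel t-norm,
    one standard fuzzy-logic conjunction; the variant used by the actual
    engine differs."""
    return min(scores)
```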
The approach consists of a \emph{filtering phase} and a \emph{ranking phase}. \smallskip \noindent \textbf{Filtering phase. } This phase constructs a set of candidate tip/fact sentences by applying filters and classifiers to all the review sentences. According to \cite{tipmining}, effective filters for tips include phrase patterns (e.g., sentences containing ``make sure to'') and part-of-speech tags (e.g., sentences starting with verbs). For constructing the candidate set of interesting facts, we select sentences that contain at least one \emph{informative token}, i.e., a word or short phrase frequently mentioned in reviews of the target entity but not frequently mentioned in reviews of similar entities. We also found that an interesting fact is more likely to appear in sentences with an extreme sentiment (very positive or negative), so we also apply sentiment analysis to select such sentences. For both tips and interesting facts, we further refine the candidate sets by removing duplicates (i.e., sentences of similar meaning) and unimportant sentences. We do so by applying TextRank \cite{textrank}, a classic algorithm for sentence summarization. \smallskip \noindent \textbf{Ranking phase. } Instead of simply selecting candidates from the tip and interesting-fact candidate sets, we implemented a novel ranking function for finding the candidates that best match the user's query predicates. The ranking function considers not only the \emph{significance} of a candidate, as computed in the filtering phase, but also the \emph{relevance} of the candidate to the query. Measuring the relevance is not trivial since the tips/interesting facts can use vocabularies different from the ones used in the query. In the previous example, a fact that matches the query ``near park'' is ``10 min walk to Presidio'', which shares no exact word with the query. The relevance function leverages a combination of NLP and IR techniques, analogous to query interpretation in {\textsc{\small OpineDB}}. 
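The informative-token filter above can be sketched as a relative-frequency ratio between the target entity's reviews and those of comparable entities. The threshold and smoothing floor below are arbitrary choices of ours, not the system's tuned values.

```python
from collections import Counter


def informative_tokens(target_reviews, background_reviews, min_ratio=3.0):
    """Tokens over-represented in the target entity's reviews relative to
    reviews of comparable entities, via a simple relative-frequency ratio."""
    def freqs(reviews):
        c = Counter(w for r in reviews for w in r.lower().split())
        total = sum(c.values())
        return {w: n / total for w, n in c.items()}

    tgt, bg = freqs(target_reviews), freqs(background_reviews)
    # Smoothing floor for tokens never seen in the background reviews.
    floor = 1.0 / (sum(len(r.split()) for r in background_reviews) + 1)
    return {w for w, f in tgt.items() if f / bg.get(w, floor) >= min_ratio}
```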
\vspace{-2mm} \subsection{Datasets and tools} The {\textsc{\small Voyageur}}\ demo will serve hotel, attraction, and restaurant search in the San Francisco area in a browser-based web app. We collected the underlying data from two datasets: the Google Local Reviews dataset\footnote{\url{http://cseweb.ucsd.edu/~jmcauley/datasets.html\#google\_local}} and the TripAdvisor dataset\footnote{\url{http://nemis.isti.cnr.it/~marcheggiani/datasets/}}. Our dataset consists of 18,500 reviews of 227 hotels, 6,256 reviews of 545 attractions, and 67,382 reviews of 4,007 restaurants. We implemented the front end of {\textsc{\small Voyageur}}\ using the JavaScript library React. \section{Introduction}\label{sec:intro} The rise of e-commerce enables us to plan many of our future experiences with online search engines. For example, sites for searching hotels, flights, restaurants, attractions or jobs bring a wealth of information to our fingertips, thereby giving us the ability to plan our vacations, restaurant gatherings, or future career moves. Unfortunately, current search engines completely ignore any experiential aspect of the plans they are helping you create. Instead, they are primarily database-backed interfaces that focus on searching based on objective attributes of services, such as price range, location, or cuisine. \smallskip \noindent \textbf{The need for experiential search. } An experiential search engine is based on the observation that, at a fundamental level, users seek to satisfy an experiential need. For example, a restaurant search is meant to fulfill a social purpose, be it romantic, work-related or a reunion with rowdy friends from college. A vacation trip plan is meant to accomplish a family or couple need such as a relaxing location with easy access to fun activities and highlights of local cuisine. 
To satisfy these needs effectively, the user should be able to search on experiential attributes, such as whether a hotel is romantic or has clean rooms, or whether a restaurant has a good view of the sunset or has a quiet ambience. Table \ref{tab:subjective} shows a snippet from a preliminary study that investigates which attributes users care about. We asked human workers on Amazon Mechanical Turk~\cite{mturk} to provide the most important criteria in making their decisions in 7 common verticals. We then conservatively judge whether each criterion is an experiential one or not. As the table shows, the majority of attributes of interest are experiential. \iffalse To better understand the importance of these experiential attributes, we conducted a user study on Amazon Mechanical Turk \cite{mturk} to determine the importance of these attributes when the users search for products or services. We asked human workers to provide the most important criteria in making their decisions in 7 different domains: hotel, restaurant, vacation, education, career, real Estate, and car. For each collected criterion, we manually judge whether it is an experiential one. This is done conservatively: any criterion that can be captured by any factual/objective attribute is not counted as experiential (e.g., the ``wifi'' attribute for hotels). \fi \setlength{\tabcolsep}{4pt} \begin{table}[!ht] \vspace{-1.5mm} \caption{Experiential attributes in different domains.} \label{tab:subjective} \vspace{-3mm} \begin{tabular}{l|c|l} \toprule {\bf Domain} & {\bf \%Exp. 
Attr} & {\bf Some examples} \\ \midrule Hotel & 69.0\% & cleanliness, food, comfortable \\ Restaurant & 64.3\% & food, ambiance, variety, service \\ Vacation & 82.6\% & weather, safety, culture, nightlife \\ College & 77.4\% & dorm quality, faculty, diversity \\ Home & 68.8\% & space, good schools, quiet, safe \\ Career & 65.8\% & work-life balance, colleagues, culture \\ Car & 56.0\% & comfortable, safety, reliability \\ \bottomrule \end{tabular} \vspace{-2mm} \end{table} \iffalse The result of the survey (Table \ref{tab:subjective}) is significant: for all the 7 domains, the majority of the desired properties are experiential. To our knowledge, online services for these domains today do not directly support search over these experiential attributes. As such, there is a significant gap between the search capabilities and the users' ultimate information needs. We believe that this observation raises an important challenge: {\em can we build, experiential search systems that help users create experiences that will result in more positive emotions?} \fi To the best of our knowledge, online services for these domains today lack support for directly searching over experiential attributes. The gap between a users' experiential needs and search capabilities raises an important challenge: {\em can we build search systems that place the experience the user is planning at the center of the search process?} \smallskip \noindent \textbf{Challenges. } Supporting search on experiential aspects of services is challenging for several reasons. First, the universe of experiential attributes is vast, their precise meaning is often vague, and they are expressed in text using many linguistic variations. The experience of ``quiet hotel rooms'' can be described simply as ``quiet room'' or ``we enjoyed the peaceful nights'' in hotel reviews. 
Second, by definition, experiential attributes of a service are {\em subjective} and personal, and database systems do not gracefully handle such data. Third, the experiential aspect of a service may depend on how they relate to other services. For example, a significant component of a hotel experience is whether it is close to the main destinations the user plans to visit. Finally, unlike objective attributes that can be faithfully provided by the service owner, users expect that the data for experiential attributes come from other customers. Currently, such data is expressed in text in online reviews and in social media. Booking sites have made significant efforts to aggregate and surface comments from reviews. Still, while these comments are visible when a user inspects a particular hotel or restaurant, users still cannot search by these attributes. This paper describes {\textsc{\small Voyageur}}, the first search engine that explicitly models experiential attributes of services and supports searching them. We chose to build {\textsc{\small Voyageur}}\ in the domain of travel because it is complex and highly experiential, but its ideas also apply to other verticals. The first idea underlying {\textsc{\small Voyageur}}\ is that the experiential aspects of the service under consideration need to be part of the database model and visible to the user. For example, when we model a hotel, we'll also consider the time it takes to get there from the airport and nearby activities that can be done before check-in in case of an early arrival. Furthermore, {\textsc{\small Voyageur}}\ will fuse information about multiple services. So the proximity of the hotel from the attractions of interest to the user is part of how the system models a hotel. Of course, while many of the common experiential aspects can be anticipated in advance, it is impractical that we can cover them all. 
Hence, the second main idea in {\textsc{\small Voyageur}}\ is that its schema should be easily extensible, it should be able to handle imprecise queries, and be able to fall back on unstructured data when its database model is insufficient. Even with the above two principles, {\textsc{\small Voyageur}}\ still faces the challenge of selecting which information to show to the user about a particular entity (e.g., hotel or attraction). Ideally, {\textsc{\small Voyageur}}\ should display to the user aspects of the entity that are most relevant to the decision she is trying to make. {\textsc{\small Voyageur}}\ includes algorithms for discovering items from reviews that best summarize an entity, highlight the most unique things about them, and useful actionable tips. \section{Overview of \textsc{VOYAGEUR}} \label{sec:overview} We illustrate the main ideas of {\textsc{\small Voyageur}}. Specifically, we show how {\textsc{\small Voyageur}}\ supports experiential queries and how these queries assist a user in selecting a hotel. \begin{figure*}[!ht] \vspace{-2mm} \centering \includegraphics[width=1.0\textwidth]{hotel_page_annotated.pdf} \vspace*{-6mm} \caption{A screenshot of the hotel recommendation screen of {\textsc{\small Voyageur}}. The user can interactively customize the search (Box A), to view interesting facts/tips (Box B) and a \textit{review summary} (Box C) of each recommended hotel and to check out a \textit{map view} (Box D) of the recommendations.} \label{fig:screenshot} \vspace{-5mm} \end{figure*} \smallskip \noindent \textbf{User scenario. } Elle Rios is a marketing executive living in Tokyo. She is planning a vacation to \textit{San Francisco} in early October. Her goal is to have a relaxing experience during the vacation. Her entire travel experience will be influenced by a variety of services, including flights, hotels, local attractions, and restaurants. 
Elle visits the {\textsc{\small Voyageur}}\ website and first enters the destination with the travel period. {\textsc{\small Voyageur}}\ then displays a series of screens with recommendations for each of these services. In searching for hotels, Elle's experiential goal to have a relaxing stay is achieved by a balance of her {\em objective} constraints (her budget for hotels is \textit{\$250-350 per night}) and {\em subjective} criteria; she is an introvert and she knows that {\em quiet} hotels with {\em friendly staff} will help her relax\footnote{Elle can also directly search for hotels with relaxing atmosphere in {\textsc{\small Voyageur}}.}. In addition, Elle also cares about whether the hotel is conveniently located for reaching the famous and historic attractions she wants to visit and the high-quality vegetarian restaurants she is interested in. Figure \ref{fig:screenshot} shows a screenshot of {\textsc{\small Voyageur}}, where Elle can plan a trip that satisfies her requirements. (The screens for attraction and restaurant search are similar). The screenshot shows how {\textsc{\small Voyageur}}\ emphasizes the experiential aspects of trip planning. First, {\textsc{\small Voyageur}}\ allows users to express their subjective criteria (in addition to their objective criteria) and generates recommendations accordingly through the \emph{experiential search} function (Box A). Second, {\textsc{\small Voyageur}}\ tailors the display of {\em interesting facts and tips} and {\em summary of reviews} based on the search criteria entered (Boxes B and C). Third, {\textsc{\small Voyageur}}\ supports a series of additional features, such as map view and travel wallet, to further improve the user's experience in the hotel search (Box D). The map view provides a holistic view of the trip by putting together all recommended entities of different types. 
The travel wallet, as we explain later, takes into consideration the user's travel history and preferences if the user chooses to share them with {\textsc{\small Voyageur}}. \smallskip \noindent \textbf{Experiential search. } Elle expresses her objective and experiential/subjective criteria as query predicates to {\textsc{\small Voyageur}}'s interface. While objective requirements like ``\$250 to 350 per night'' can be directly modeled and queried in a typical hotel database, answering predicates like \emph{``quiet''} and \emph{``friendly staff''} is challenging, as these are subjective terms that cannot be immediately modeled in a traditional database system. {\textsc{\small Voyageur}}\ addresses this challenge with a subjective database engine that explicitly models subjective attributes and answers subjective query predicates. {\textsc{\small Voyageur}}\ extracts subjective attributes such as {\it room quietness} and {\it staff quality} from hotel customer reviews, builds a summary of the variations of these terms, and then matches those attributes with the input query predicates. In Figure \ref{fig:screenshot}, {\textsc{\small Voyageur}}\ generates a ranked list of hotels by matching the query predicates specified by Elle with the subjective attributes extracted from the underlying hotel reviews. The review summaries (Box C) show that the selected hotels are clearly good matches. Specifically, {\textsc{\small Voyageur}}\ recommends Monte Cristo since 75\% of 200 reviewers agree that it is very quiet and it has friendly staff (not shown). Hotel Drisco, next on the list, is recommended because 68\% of 196 reviewers agree that it has friendly staff and it is also very quiet (not shown). \smallskip \noindent \textbf{Interesting facts and tips. } Along with the search results, {\textsc{\small Voyageur}}\ shows snippets of travel tips and/or interesting facts about each result (Box B) that it thinks are relevant to Elle.
An interesting fact typically highlights an unusual or unique experience about the service. For example, being very close to Presidio Park (one of the largest parks in San Francisco) is unique to Monte Cristo Inn and Hotel Drisco and is thus an interesting fact to show for each hotel. The fact that Monte Cristo Inn has a ``beautiful vintage building and furnishings'' is unique to Monte Cristo Inn alone. Such interesting facts can be important for decision making. They also enable Elle to better anticipate the type of experiences that will be encountered at the hotel~\cite{anticipation}. On the other hand, tips are snippets of information that propose a potential action the user may take to either avoid a negative experience or create a positive one~\cite{tipmining}. For example, a useful tip for a hotel may be that there is free parking two blocks away. While interesting facts and tips are useful, they are not always available for every service and can be incomplete. Existing work \cite{suggestion,tipmining} proposed mining useful travel tips from customer reviews with promising results. In {\textsc{\small Voyageur}}, we formulate the task of finding tips and interesting facts relevant to the user's query as a query--sentence matching problem. Our algorithms prefer to select sentence snippets from reviews to match the user's query. The challenge we face is that the user's query predicates and the tips/facts in reviews are described in different vocabularies and linguistic forms. Moreover, labeled data for the matching task is generally not available. Thus, novel techniques are needed to construct good matching functions between queries and sentence snippets. \smallskip \noindent \textbf{Review summarization. } The summary of reviews (Box C) provides Elle with an explanation of why a specific hotel is recommended and saves her from reading the repetitive and lengthy reviews.
{\textsc{\small Voyageur}}\ summarizes the reviews of each recommended hotel in two different formats: (1) statistical statements and (2) sample review snippets. For example, in Figure \ref{fig:screenshot}, {\textsc{\small Voyageur}}\ summarizes the room quietness attribute of Monte Cristo Inn with the statistical statement ``75\% of 200 reviews say it is very quiet'' and 3 randomly selected sample review snippets that match the \emph{quiet} requirement. \smallskip \noindent \textbf{Additional features. } The following features further improve Elle's ability to create a positive travel experience: \begin{itemize}\parskip=0pt \item \textbf{Map view. } In each recommendation screen, a map view (Box D of Figure \ref{fig:screenshot}) marks the locations of the recommended hotels. Whenever a hotel is selected, the map view is centered on the chosen hotel and shows the recommended local attractions and restaurants so the user can better plan how to travel between these places. \item \textbf{Travel wallet. } Users have the option of creating a \emph{travel wallet}, which is similar to the Wallet feature on many smartphones. It contains information about the user that she shares only when she chooses to. In the case of a travel wallet, this information records her travel preferences. The travel wallet can be created explicitly by answering questions or can be collected automatically from previous travels. The travel wallet is used by {\textsc{\small Voyageur}}\ to further personalize the search results. \item \textbf{Trip summary. } Finally, after making several choices (flight, hotel, attractions, etc.) through all the recommendation screens, the user can view a summary of the trip highlighting the key experiential components. In Elle's case, the summary includes a timeline with important dates, transportation methods to/from the airport, and tips/facts about the chosen hotel and each planned tourist attraction. \end{itemize} \section{Related work} \label{sec:related}
\section{Introduction} The selection of the 2021 Best FIFA Men's Player in early 2022 caused some controversy when it was revealed that Lionel Messi, who came in second with~44 votes, had not voted for Robert Lewandowski, who won the trophy with~48 votes. Lewandowski, on the other hand, had given votes to Jorginho, Messi, and Ronaldo, thus helping his closest contender and risking his own victory. Both Messi and Lewandowski were allowed to cast votes as the captains of their respective national teams. While we can only speculate about the reasons for the two players' voting behavior, it is evident that situations where there is at least a partial overlap between the set of voters and the set of candidates come with intrinsic incentive issues. Candidates who have a reasonable chance of winning may be motivated not to reveal their true opinion on who should win in order to increase their own chance of winning. This phenomenon is not limited to the selection of football players for an award but affects a wide range of areas from scientific peer review to the election of the pope. Incentive issues that arise when members of a group are selected by other members were first studied in a systematic way by \citet{holzman2013impartial} and \citet{alon2011sum}, who formalized the problem in terms of a directed graph in which vertices correspond to voters and a directed edge from one voter to another indicates that the former nominates the latter. A (deterministic) selection mechanism then takes such a nomination graph as input and returns one of its vertices. In order to allow voters to express their true opinions about other voters without having to worry about their own chance of selection, an important property of a selection mechanism is its impartiality: a mechanism is impartial if, for all nomination graphs, a change of the outgoing edges of some vertex $v$ does not change whether $v$ is selected or not. 
It is easy to see that a mechanism that selects a vertex with maximum indegree and breaks ties in some consistent way is not impartial: if ties are broken in favor of greater index, for example, a vertex with maximum indegree that currently nominates another vertex with maximum indegree but greater index has an incentive to instead nominate a different vertex. \citeauthor{holzman2013impartial} have shown that impartial mechanisms are in fact much more limited even in a setting where each voter casts exactly one vote, i.e.,\xspace where each vertex has outdegree one: for every impartial mechanism there is a nomination graph where it selects a vertex with indegree zero, or a nomination graph where it fails to select a vertex with indegree $n-1$, where $n$ is the number of voters. This shows in particular that the best multiplicative approximation guarantee for impartial mechanisms, i.e.,\xspace the worst case over all nomination graphs of the ratio between the maximum indegree and the indegree of the selected vertex is at least $n-1$. On the other hand, a multiplicative guarantee of $n-1$ is easy to obtain by always following the outgoing edge of a fixed vertex. As multiplicative guarantees do not allow for a meaningful distinction among deterministic impartial mechanisms, \citet{caragiannis2019impartial,caragiannis2021impartial} proposed to instead consider an additive guarantee, i.e.,\xspace the worst-case over all nomination graphs of the difference between the maximum indegree and the indegree of the selected vertex. A mechanism due to \citeauthor{holzman2013impartial}, majority with default, achieves an additive guarantee of~$\lceil n/2 \rceil$. \citet{caragiannis2019impartial,caragiannis2021impartial} further propose a randomized mechanism with an additive guarantee of $O(\sqrt{n})$. It remains open, however, whether there exists a \emph{deterministic} mechanism with a \emph{sublinear} additive guarantee. 
A sublinear additive guarantee is significant because it implies asymptotic multiplicative optimality as the maximum indegree goes to infinity. The setting studied by \citeauthor{holzman2013impartial}, where each vertex has outdegree one, is commonly referred to as the plurality setting. The impossibility results of \citeauthor{holzman2013impartial} regarding multiplicative guarantees carry over to the more general approval setting, where outdegrees can be arbitrary, but here even less is known about possible additive guarantees. While \citet{caragiannis2019impartial} have shown that deterministic impartial mechanisms cannot provide a better additive guarantee than~$3$, no mechanism is known that improves on the trivial guarantee of $n-1$ achieved by selecting a fixed vertex. \citeauthor{caragiannis2019impartial} also gave a randomized mechanism for the approval setting with an additive guarantee of $\Theta(n^{2/3} \ln^{1/3} n)$. \subsection{Our Contribution} We develop a new deterministic mechanism for impartial selection that is parameterized by a pair of thresholds on the indegrees of vertices in the graph. The mechanism seeks to select a vertex with large indegree, and to achieve impartiality it iteratively deletes outgoing edges from vertices in decreasing order of their indegrees, until only the outgoing edges of vertices with indegrees below the lower threshold remain. It then selects a vertex with maximum remaining indegree if that indegree is above the higher threshold, and otherwise does not select. Any ties are broken according to a fixed ordering of the vertices. We give a sufficient condition for choices of thresholds that guarantee impartiality. The iterative nature of the deletions requires a fairly intricate analysis but is key to achieving impartiality. The additive guarantee is then obtained for a good choice of thresholds, and the worst case is the one where the mechanism does not select. 
For instances with $n$ vertices and maximum outdegree at most $O(n^{\kappa})$, where $\kappa\in[0,1]$, the mechanism provides an additive guarantee of $O(n^{\frac{1+\kappa}{2}})$. This is the first sublinear bound for a deterministic mechanism and any $\kappa\in[0,1]$, and is sublinear for all $\kappa\in[0,1)$. For settings with constant maximum outdegree, which includes the setting of \citeauthor{holzman2013impartial} where all outdegrees are equal to one, our bound matches the best known bound of $O(\sqrt{n})$ for randomized mechanisms and outdegree one, due to \citet{caragiannis2019impartial}. When the maximum outdegree is unbounded, the bound becomes $O(n)$. This is of course trivial, as even a mechanism that never selects a vertex provides an additive guarantee of $n-1$. For a setting without abstentions, i.e.,\xspace with minimum outdegree one, the guarantee can be improved slightly to $n-2$ by following the outgoing edge of a fixed vertex. We show that both of these bounds are best possible by giving matching lower bounds. This improves on the only lower bound known prior to our work, again due to \citeauthor{caragiannis2019impartial}, which is equal to~$3$ and applies to the setting with abstentions and mechanisms that select a vertex for every graph. Just like the lower bounds regarding multiplicative guarantees for plurality, our lower bounds for approval are obtained through an axiomatic impossibility result. \citeauthor{holzman2013impartial} have shown that in the case of plurality, impartiality is incompatible with positive and negative unanimity. Here, positive unanimity requires that a vertex with the maximum possible indegree of $n-1$ must be selected, and negative unanimity that a vertex with indegree zero cannot be selected. 
In the case of approval, this impossibility can be strengthened even further: call a selection mechanism weakly unanimous if it selects a vertex with positive indegree whenever there exists a vertex with the maximum possible indegree of $n-1$; then weak unanimity and impartiality are incompatible. This result is obtained by analyzing the behavior of impartial mechanisms on a restricted class of graphs with a high degree of symmetry among vertices. Like \citeauthor{holzman2013impartial}, we can assume that isomorphic vertices are selected with equal probabilities by a randomized relaxation of a mechanism. Our result is then obtained by combining counting results for ordered partitions and an argument similar to Farkas' Lemma. \subsection{Related Work} Impartiality as a formal property of social and economic mechanisms was first considered by \citet{de2008impartial}, for the distribution of a divisible commodity among a set of individuals according to the individuals' subjective claims. \citet{holzman2013impartial} and \citet{alon2011sum} studied impartial selection in two different settings, plurality and approval, and established strong impossibility results regarding the ability of deterministic mechanisms to approximate the maximum indegree in a multiplicative sense. \citet{alon2011sum} also proposed randomized mechanisms for the selection of one or more vertices. \citet{fischer2015optimal} then obtained a randomized mechanism with the best possible multiplicative guarantee of~$2$ for the selection of a single vertex in the approval setting, and \citet{bjelde2017impartial} gave improved deterministic and randomized mechanisms for the selection of more than one vertex.
Starting from the observation that impossibility results for randomized mechanisms in particular are obtained from graphs with very small indegrees, \citet{bousquet2014near} developed a randomized mechanism that is optimal in the large-indegree limit, i.e.,\xspace that chooses a vertex with indegree arbitrarily close to the maximum indegree as the latter goes to infinity. \citet{caragiannis2019impartial,caragiannis2021impartial} used the same observation as motivation to study mechanisms with additive rather than multiplicative guarantees. They developed new mechanisms that achieve such guarantees, established a relatively small but nontrivial lower bound of~$3$ for deterministic mechanisms in the approval setting, and gave improved deterministic mechanisms for a setting with prior information. The axiomatic study of \citeauthor{holzman2013impartial} has been refined and extended in a number of ways, for example with a focus on symmetric mechanisms~\citep{mackenzie2015symmetry} and to the selection of more than one vertex~\citep{TaOh14a}. \citet{MacK20a} provided a detailed axiomatic analysis of mechanisms used in the papal conclave. Various selection mechanisms have also been proposed that are tailored to applications like peer review and exploit the particular preference and information structures of those applications~\citep{kurokawa2015impartial,xu2018strategyproof,aziz2019strategyproof,mattei2020peernomination}. Impartial mechanisms have finally been considered for other objectives, specifically for the maximization of progeny~\citep{babichenko2020incentive,zhang2021incentive} and for rank aggregation~\citep{kahng2018ranking}. The proof of our impossibility result uses a class of graphs constructed from ordered partitions of the set of vertices. 
The class has been studied previously~\citep[e.g.,\xspace][]{insko2017ordered,diagana2017some}, and some of its known properties including its lattice structure and the number of graphs isomorphic to each graph within the class are relevant to us. \section{Preliminaries} For $n\in \mathbb{N}$, let $\mathcal{G}_n = \left\{(N, E): N = \{1,2,\dots,n\}, E \subseteq (N \times N ) \setminus \bigcup_{v\in N}\{(v,v)\}\right\}$ be the set of directed graphs with $n$ vertices and no loops. Let $\mathcal{G} = \bigcup_{n\in \mathbb{N}} \mathcal{G}_n$. For $G=(N,E)\in\mathcal{G}$ and $v\in N$, let $N^+(v, G)=\{u\in N: (v,u) \in E\}$ be the out-neighborhood and $N^-(v, G)=\{u\in N:(u,v)\in E\}$ the in-neighborhood of~$v$ in~$G$. Let $\delta^+(v,G)=|N^+(v,G)|$ and $\delta^-(v,G)=|N^-(v,G)|$ denote the outdegree and indegree of~$v$ in~$G$, $\delta^-_S(v,G)=|\{u\in S: (u,v)\in E\}|$ the indegree of~$v$ from a particular subset $S\subseteq N$ of the vertices, and $\Delta(G) = \max_{v\in N} \delta^-(v, G)$ the maximum indegree of any vertex in $G$. When the graph is clear from the context, we will sometimes drop $G$ from the notation and write $N^+(v)$, $N^-(v)$, $\delta^+(v)$, $\delta^-(v)$, $\delta^-_S(v)$, and~$\Delta$. For $n,k\in\mathbb{N}$, let $\mathcal{G}^+_n = \{(N, E)\in \mathcal{G}_n: \delta^+(v)\geq 1 \text{ for every } v\in N\}$ be the set of graphs in $\mathcal{G}_n$ where all outdegrees are strictly positive, $\mathcal{G}_n(k)=\left\{(N, E)\in \mathcal{G}_n: \delta^+(v)\leq k \text{ for every } v\in N\right\}$ the set of graphs in $\mathcal{G}_n$ where outdegrees are at most $k$, and $\mathcal{G}^+_n(k) = \mathcal{G}^+_n \cap \mathcal{G}_n(k)$ for the set of graphs satisfying both conditions. Let $\mathcal{G}^+ = \bigcup_{n\in \mathbb{N}} \mathcal{G}^+_n,\ \mathcal{G}(k) = \bigcup_{n\in \mathbb{N}} \mathcal{G}_n(k)$, and $\mathcal{G}^+(k) = \bigcup_{n\in \mathbb{N}} \mathcal{G}^+_n(k)$. 
A (deterministic) \textit{selection mechanism} is then given by a family of functions $f : \mathcal{G}_n \to 2^N$ that maps each graph to a subset of its vertices, where we require throughout that $|f(G)|\leq 1$ for all $G\in\mathcal{G}$. In a slight abuse of notation, we will use $f$ to refer to both the mechanism and to individual functions from the family. Mechanism $f$ is \textit{impartial} on $\mathcal{G}'\subseteq \mathcal{G}$ if on this set of graphs the outgoing edges of a vertex have no influence on its selection, i.e.,\xspace if for every pair of graphs $G = (N, E)$ and $G' = (N, E')$ in $\mathcal{G}'$ and every $v\in N$, $f(G)\cap\{v\}=f(G')\cap\{v\}$ whenever $E\setminus(\{v\}\times N)=E'\setminus(\{v\}\times N)$. Mechanism $f$ is \textit{$\alpha$-additive} on $\mathcal{G}' \subseteq \mathcal{G}$, for $\alpha \geq 0$, if for every graph in $\mathcal{G}'$ the indegree of the choice of $f$ differs from the maximum indegree by at most $\alpha$, i.e.,\xspace if \[ \sup_{\substack{G\in \mathcal{G}'}} \Bigl\{ \Delta(G)-\delta^-(f(G), G) \Bigr\} \leq \alpha, \] where, in a slight abuse of notation, $\delta^-(S,G)=\sum_{v\in S}\delta^-(v,G)$. \section{Iterated Deletion of Nominations} When outdegrees are at most one, the following simple mechanism is $\lfloor n/2\rfloor$-additive: if there is a vertex with indegree at least $\lfloor n/2\rfloor+1$, select it; otherwise, do not select.\footnote{This mechanism is a simpler version of a mechanism of \citet{holzman2013impartial}, majority with default, which is required to always select and does so by singling out a default vertex whose outgoing edge is ignored and which is selected in the absence of a vertex with large indegree.} As there can be at most one vertex with degree $\lfloor n/2\rfloor+1$ or more and a vertex cannot influence its own indegree, the mechanism is clearly impartial. 
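As a quick illustration, the simple threshold mechanism just described is easy to sketch in Python (an illustrative sketch, not part of the formal development; we assume vertices are labeled $1,\dots,n$ and edges are given as ordered pairs $(u,v)$, meaning that $u$ nominates $v$):

```python
def simple_threshold_select(n, edges):
    """Impartial selection for graphs with outdegree at most one:
    select the unique vertex with indegree at least floor(n/2)+1,
    if such a vertex exists, and do not select otherwise."""
    indeg = [0] * (n + 1)  # vertices are labeled 1..n
    for (_, v) in edges:
        indeg[v] += 1
    t = n // 2 + 1  # threshold floor(n/2)+1
    winners = [v for v in range(1, n + 1) if indeg[v] >= t]
    # with outdegree at most one there are at most n edges in total,
    # so at most one vertex can clear the threshold
    return winners[0] if winners else None
```

Since a vertex cannot raise its own indegree by changing its outgoing edge, and at most one vertex can clear the threshold, whether a vertex is returned does not depend on its own nomination; this is exactly the impartiality argument given above.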
We will borrow from this mechanism the idea to impose a threshold on the minimum indegree a vertex needs to be selected, but will seek to lower the threshold in order to achieve a better additive guarantee and also to relax the constraint on the maximum outdegree. Of course, lower thresholds and larger outdegrees both mean that more and more vertices become eligible for selection, and we will no longer get impartiality for free. As a first step, it is instructive to again consider the outdegree-one case but to use a lower threshold $t=\lfloor n/3\rfloor+1$. This choice of the threshold means that up to two vertices can have indegrees above the threshold and thus be eligible for selection. To achieve impartiality it makes sense to delete the outgoing edges of such vertices. Unfortunately, selecting a vertex with maximum indegree among those that remain above~$t$ after deletion, while breaking ties in favor of a greater index,\footnote{More formally, for a graph $G=(N,E)$ and $u,v\in N$ with $\delta^-(u),\ \delta^-(v)\geq t$ and $\smash{\delta^-_{N\setminus \{v\}}(u)=\delta^-_{N\setminus \{u\}}(v)}$, the mechanism selects $\max\{u,v\}$.} is not impartial: whether a vertex $v$ ends up above or below $t$ may depend on whether another vertex $u$ does and thus whether the edge $(u,v)$ is deleted or not; the latter may in turn depend on the existence of the edge $(v,u)$. To make matters worse, which edges remain after deletion may also depend on whether edges are deleted simultaneously or in order, and on the particular order in which they are deleted. Both phenomena are illustrated in \autoref{fig:n/3-mechanism}. It turns out that for this particular mechanism impartiality can be restored if outgoing edges of the two vertices above the threshold are deleted iteratively, from large to small indegree and breaking ties in the same way as before, and a selection is only made if after deletion at least one vertex remains with indegree above a higher threshold of $T=t+1$. 
In addition to being impartial, the resulting mechanism is $(\lfloor n/3\rfloor+2)$-additive. \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.85] \Vertex[y=.9, Math, shape=circle, color=white, , size=.05, label=u, fontscale=1, position=above, distance=-.08cm]{A} \Vertex[x=1.4, y=.9, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{B} \Edge[Direct, color=black, lw=1pt, bend=-15](A)(B) \Edge[Direct, color=black, lw=1pt, bend=-15](B)(A) \draw[] (-.2,.1) -- (1.6,.1); \draw[] (-.2,.7) -- (1.6,.7); \draw[] (-.2,1.3) -- (1.6,1.3); \Vertex[x=3, y=.3, Math, shape=circle, color=black, , size=.05, label=u, fontscale=1, position=above, distance=-.08cm]{C} \Vertex[x=4.4, y=.9, Math, shape=circle, color=white, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{D} \Edge[Direct, color=black, lw=1pt](C)(D) \draw[] (2.8,.1) -- (4.6,.1); \draw[] (2.8,.7) -- (4.6,.7); \draw[] (2.8,1.3) -- (4.6,1.3); \Text[x=-.8, y=.4, fontsize=\scriptsize]{$t-1$} \Text[x=-.8, y=1, fontsize=\scriptsize]{$t$} \Vertex[x=9, y=.6, Math, shape=circle, color=black, , size=.05, label=u, fontscale=1, position=above, distance=-.08cm]{E} \Vertex[x=10.4, y=1.2, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{F} \Edge[Direct, color=black, lw=1pt, bend=-15](E)(F) \Edge[Direct, color=black, lw=1pt, bend=-15](F)(E) \draw[] (8.8,-.2) -- (10.6,-.2); \draw[] (8.8,.4) -- (10.6,.4); \draw[] (8.8,1) -- (10.6,1); \draw[] (8.8,1.6) -- (10.6,1.6); \Vertex[x=12, Math, shape=circle, color=black, , size=.05, label=u, fontscale=1, position=above, distance=-.08cm]{C} \Vertex[x=13.4, y=1.2, Math, shape=circle, color=white, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{D} \Edge[Direct, color=black, lw=1pt](C)(D) \draw[] (11.8,-.2) -- (13.6,-.2); \draw[] (11.8,.4) -- (13.6,.4); \draw[] (11.8,1) -- (13.6,1); \draw[] (11.8,1.6) -- (13.6,1.6); \Text[x=8.2, y=.1, fontsize=\scriptsize]{$t-1$} 
\Text[x=8.2, y=.7, fontsize=\scriptsize]{$t$} \Text[x=8.2, y=1.3, fontsize=\scriptsize]{$t+1$} \end{tikzpicture} \caption{A mechanism that deletes the outgoing edges of vertices above $t$ and selects a vertex with maximum remaining indegree above $t$ is not impartial. This is illustrated by the two diagrams on the left for a mechanism that deletes edges iteratively, but is true also for simultaneous deletion. A mechanism that simultaneously deletes the outgoing edges of vertices above $t$ and selects a vertex with maximum remaining indegree above $t+1$ is not impartial, as illustrated by the two diagrams on the right. All diagrams in this section place vertex~$x$ along the vertical axis according to~$\delta^-(x)$ and along the horizontal axis according to~$-x$. The selected vertex is the leftmost one among those that are highest, which corresponds to selecting a vertex with maximum indegree and breaking ties in favor of a greater index. A selected vertex is drawn in white. Edges incident to vertices not in the diagram are not shown.} \label{fig:n/3-mechanism} \end{figure} It is natural to ask whether the threshold can be lowered further while maintaining impartiality, and whether guarantees can be obtained in a similar fashion for graphs with larger outdegrees. The answer to the first question is not obvious, as the number of graphs that need to be considered to establish impartiality grows very quickly in the number of vertices eligible for selection. The obvious generalization of the mechanism with threshold $\lfloor n/2\rfloor+1$ to a setting with outdegrees at most~$k$, of selecting the unique vertex with indegree at least $\lfloor kn/2\rfloor+1$ if it exists and not selecting otherwise, is impartial and $\lfloor kn/2 \rfloor$-additive, but this guarantee is trivial when $k\geq 2$. Our main result answers both questions in the affirmative. 
It applies to settings with outdegree at most $k=O(n^{\kappa})$ for $\kappa\in[0,1]$ and provides a nontrivial guarantee when $\kappa\in[0,1)$. When~$k$ is constant the guarantee is $O(\sqrt{n})$, which matches the best guarantee known for randomized mechanisms and outdegree one~\citep{caragiannis2019impartial}. \begin{theorem} \label{thm:additive-ub} For every $n\in\mathbb{N}$, $\kappa\in[0,1]$, and $k=O(n^{\kappa})$, there exists an impartial and $O(n^{\frac{1+\kappa}{2}})$-additive mechanism on $\mathcal{G}_n(k)$. Specifically, for every $n\in\mathbb{N}$, there exists an impartial and $\sqrt{8n}$-additive mechanism on $\mathcal{G}_n(1)$. \end{theorem} The result is achieved by a mechanism we call the \emph{Twin Threshold Mechanism} and describe formally in \autoref{alg:TTM}. The mechanism iteratively deletes the outgoing edges from vertices with indegree above a first threshold $t$ from the highest to the lowest indegree and, in the end, selects the vertex with maximum remaining indegree as long as that indegree is above a second, higher threshold $T$. The parameters~$t$ and~$T$ will be chosen in order to achieve impartiality and obtain the desired bounds. Throughout the mechanism ties are broken as before, in favor of greater index. 
\begin{algorithm}[t] \SetAlgoNoLine \KwIn{Digraph $G=(N,E)\in \mathcal{G}_n$, thresholds $T,\ t\in\{1,\ldots,n-1\}$ with $t\leq T$.} \KwOut{Set $S\subseteq N$ of selected vertices with $|S|\leq 1$.} Initialize $i \xleftarrow{} 0$ and $d\xleftarrow{} \Delta$\; $D^i\xleftarrow{} \emptyset$ \tcp*{vertices with deleted outgoing edges in iteration $i$ or before} $\forall v\in N$, $\hat{\delta}^i(v) \xleftarrow{} \delta^-(v)$ \tcp*{indegree of $v$ omitting edges deleted up to~$i$} \While{$d\geq t$}{ \If{$\{u\in N\setminus D^i: \hat{\delta}^i(u)=d\} = \emptyset$}{ Update $d\xleftarrow{} d-1$ and {\bf continue} } Let $v=\max\{u\in N\setminus D^i: \hat{\delta}^i(u)=d\}$\; Update $\hat{\delta}^{i+1}(u) \xleftarrow{} \hat{\delta}^i(u)-1$ for every $u\in N^+(v)$ and $\hat{\delta}^{i+1}(u) \xleftarrow{} \hat{\delta}^i(u)$ for every $u\in N\setminus N^+(v)$ \tcp*{outgoing edges of $v$ are deleted} Update $D^{i+1} \xleftarrow{}D^i \cup \{v\}$ and $i \xleftarrow{} i+1$ } Let $I\xleftarrow{} i$\; \If{$\hat{\Delta} := \max_{v \in N}{\hat{\delta}^I(v)} \geq T$}{ {\bf Return} $S = \{\max\{v\in N: \hat{\delta}^I(v)=\hat{\Delta}\}\}$ } {\bf Return} $S = \emptyset$ \caption{Twin Threshold Mechanism} \label{alg:TTM} \end{algorithm} For any choice of the maximum outdegree~$k$ and the threshold parameters~$T$ and~$t$, the mechanism achieves its worst additive performance guarantee in cases where it does not select, and this guarantee can be obtained in a straightforward way by bounding the maximum indegree. The proof of impartiality, on the other hand, uses a relatively subtle argument to show that for certain values of $T$ and $t$, a vertex above the higher threshold~$T$ cannot influence whether another vertex ends up above or below the lower threshold~$t$ when edges have been deleted. Vertices above~$T$ then have no influence on the set of edges taken into account for selection, and since these are the only vertices that can potentially be selected impartiality follows. 
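For concreteness, the mechanism can be transcribed almost line by line into Python (a sketch under the same conventions as \autoref{alg:TTM}: vertices are labeled $1,\dots,n$, edges are ordered pairs $(u,v)$, and ties are broken in favor of greater index):

```python
def twin_threshold_select(n, edges, t, T):
    """Sketch of the Twin Threshold Mechanism (Algorithm 1).
    Returns the selected vertex, or None if no vertex is selected."""
    assert 1 <= t <= T <= n - 1
    deg = [0] * (n + 1)  # deg[v] plays the role of hat-delta(v)
    for (_, v) in edges:
        deg[v] += 1
    deleted = set()      # D^i: vertices whose outgoing edges were deleted
    d = max(deg[1:])     # current candidate indegree, starts at Delta
    while d >= t:
        candidates = [u for u in range(1, n + 1)
                      if u not in deleted and deg[u] == d]
        if not candidates:
            d -= 1
            continue
        v = max(candidates)   # tie-breaking in favor of greater index
        for (a, b) in edges:  # delete the outgoing edges of v
            if a == v:
                deg[b] -= 1
        deleted.add(v)
    best = max(range(1, n + 1), key=lambda u: (deg[u], u))
    return best if deg[best] >= T else None
```

Since indegrees only decrease during the loop, scanning $d$ downwards from $\Delta$ and restarting the scan at the same level after each deletion reproduces the iteration order of the pseudocode.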
In the following we will compare runs of the mechanism for different graphs, and denote by $\hat{\delta}^i(v,G)$, $D^i(G)$, and $I(G)$ the respective values of $\hat{\delta}^i(v)$, $D^i$, and $I$ when the mechanism is invoked with input graph $G$. We use $\chi$ to denote the indicator function for logical propositions, i.e.,\xspace $\chi(p)=1$ when proposition $p$ is true and $\chi(p)=0$ otherwise. For a graph~$G$ and a vertex~$v$ in~$G$ whose outgoing edges are deleted by the mechanism, we use~$i^{\star}(v,G)$ to denote the iteration in which this deletion takes place, such that $D^{i^{\star}(v,G)+1}(G)\setminus D^{i^{\star}(v,G)}(G) = \{v\}$. We use the convention that $i^{\star}(v,G)=I(G)$ if $v\not\in D^{I(G)}(G)$ to extend the function to all vertices. For a graph~$G$ and vertex~$v$, we write $\delta^{\star}(v,G)=\hat{\delta}^{i^{\star}(v,G)}(v,G)$ for the indegree of $v$, not taking into account any incoming edges deleted previously, at the last moment before the outgoing edges of~$v$ are deleted. When the graph $G$ is clear from the context, we again drop $G$ from the notation. It is clear from the definition of the mechanism that for any graph $G=(N,E)$ and vertex $v\in D^{I(G)}(G)$, \begin{equation}\label{eq:vertex-descent-characterization-iterations} \delta^-(v,G)-\delta^{\star}(v,G) = |\{u\in N^-(v,G): i^{\star}(u,G)< i^{\star}(v,G)\}|. \end{equation} When comparing tuples of the form $(\delta^{\star}(v), v)$, we use regular inequalities to denote lexicographic order. These comparisons are relevant to our analysis because, for any graph $G=(N,E)$ and vertices $u,v\in D^{I(G)}(G)$, \begin{equation}\label{eq:equivalence-iterations-indegrees} i^{\star}(u,G)<i^{\star}(v,G) \quad\text{if and only if}\quad (\delta^{\star}(u,G), u) > (\delta^{\star}(v,G), v). 
\end{equation} \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.88] \Vertex[y=1.95, Math, shape=circle, color=black, , size=.05, label=u_0, fontscale=1, position=above, distance=-.09cm]{A} \Vertex[x=1.4, y=1.95, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{B} \Vertex[x=2.1, y=1.95, Math, shape=circle, color=black, size=.05, label=u_1, fontscale=1, position=above, distance=-.09cm]{C} \Vertex[x=.7, y=.65, Math, shape=circle, color=black, size=.05, label=u_2, fontscale=1, position=above left, distance=-.16cm]{D} \Edge[Direct, color=black, lw=1pt](A)(B) \Edge[Direct, color=black, lw=1pt](C)(B) \Edge[Direct, color=black, lw=1pt](D)(B) \draw[] (-.2,-.2) -- (2.3,-.2); \draw[] (-.2,.45) -- (2.3,.45); \draw[] (-.2,1.1) -- (2.3,1.1); \draw[] (-.2,1.75) -- (2.3,1.75); \draw[] (-.2,2.4) -- (2.3,2.4); \Vertex[x=4, y=1.95, Math, shape=circle, color=black, , size=.05, label=u_0, fontscale=1, position=above, distance=-.09cm]{E} \Vertex[x=5.4, y=1.3, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{F} \Vertex[x=6.1, y=1.95, Math, shape=circle, color=black, size=.05, label=u_1, fontscale=1, position=above, distance=-.09cm]{G} \Vertex[x=4.7, y=.65, Math, shape=circle, color=black, size=.05, label=u_2, fontscale=1, position=above left, distance=-.16cm]{H} \Edge[Direct, color=black, lw=1pt](G)(F) \Edge[Direct, color=black, lw=1pt](H)(F) \draw[] (3.8,-.2) -- (6.3,-.2); \draw[] (3.8,.45) -- (6.3,.45); \draw[] (3.8,1.1) -- (6.3,1.1); \draw[] (3.8,1.75) -- (6.3,1.75); \draw[] (3.8,2.4) -- (6.3,2.4); \Vertex[x=8, y=1.95, Math, shape=circle, color=black, , size=.05, label=u_0, fontscale=1, position=above, distance=-.09cm]{I} \Vertex[x=9.4, y=.65, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{J} \Vertex[x=10.1, y=1.95, Math, shape=circle, color=black, size=.05, label=u_1, fontscale=1, position=above, distance=-.09cm]{K} 
\Vertex[x=8.7, y=.65, Math, shape=circle, color=black, size=.05, label=u_2, fontscale=1, position=above, distance=-.09cm]{L} \Edge[Direct, color=black, lw=1pt](L)(J) \draw[] (7.8,-.2) -- (10.3,-.2); \draw[] (7.8,.45) -- (10.3,.45); \draw[] (7.8,1.1) -- (10.3,1.1); \draw[] (7.8,1.75) -- (10.3,1.75); \draw[] (7.8,2.4) -- (10.3,2.4); \Vertex[x=12, y=1.95, Math, shape=circle, color=white, , size=.05, label=u_0, fontscale=1, position=above, distance=-.09cm]{M} \Vertex[x=13.4, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{N} \Vertex[x=14.1, y=1.95, Math, shape=circle, color=black, size=.05, label=u_1, fontscale=1, position=above, distance=-.09cm]{O} \Vertex[x=12.7, y=.65, Math, shape=circle, color=black, size=.05, label=u_2, fontscale=1, position=above, distance=-.09cm]{P} \draw[] (11.8,-.2) -- (14.3,-.2); \draw[] (11.8,.45) -- (14.3,.45); \draw[] (11.8,1.1) -- (14.3,1.1); \draw[] (11.8,1.75) -- (14.3,1.75); \draw[] (11.8,2.4) -- (14.3,2.4); \Text[x=1.05, y=-.6, fontsize=\scriptsize]{$i=0$} \Text[x=5.05, y=-.6, fontsize=\scriptsize]{$i=1$} \Text[x=9.05, y=-.6, fontsize=\scriptsize]{$i=2$} \Text[x=13.05, y=-.6, fontsize=\scriptsize]{$i=3$} \end{tikzpicture} \caption{Illustration of the edge deletion process in \autoref{alg:TTM}. We have assumed that all indegrees are above~$t$.} \label{fig:example-mechanism} \end{figure} \autoref{fig:example-mechanism} shows an example of the edge deletion process over four iterations of \autoref{alg:TTM}. Observe that for $j\in \{1,2\}$, $(\delta^-(v),v)>(\delta^-(u_j),u_j)$ but $(\delta^{\star}(v),v)<(\delta^{\star}(u_j),u_j)$. This is caused by a drop in the indegree of~$v$ over the course of the algorithm, and this drop occurs before the algorithm considers possible outgoing edges of~$v$ for deletion. For our analysis, it will be important to bound how much the indegree of a vertex~$v$ can drop before~$v$ loses its outgoing edges. 
The following lemma, which we prove in~\autoref{app:indegree-changes}, characterizes this quantity in terms of the indegrees of the in-neighbors of~$v$. \begin{lemma} \label{lem:indegree-changes} Let $G=(N,E)\in \mathcal{G}_n,\ v\in N$ and $(d,z)\in \mathbb{N}^2$ such that $(\delta^-(v),v)>(d, z)\geq (\delta^{\star}(v), v)$. Let $r =\delta^-(v)-d+\chi(v>z)$. Then there exist vertices $u_0,\dots,u_{r-1}$ such that for each $j\in\{0,1,\dots,r-1\}$, $(u_{j},v)\in E$ and $(\delta^{\star}(u_{j}),u_j)>(\delta^-(v)-j, v)$. Moreover, if $(d,z)=(\delta^{\star}(v),v)$, then for every vertex $u\in N^-(v)\setminus \{u_0,\dots,u_{r-1}\}$, $(\delta^{\star}(u),u)<(\delta^{\star}(v),v)$. \end{lemma} If we take $(d,z)=(\delta^{\star}(v),v)$, the lemma implies that for any vertex $v\in D^{I}$, \begin{equation}\label{eq:vertex-descent-characterization-indegrees} \delta^-(v)-\delta^{\star}(v) = |\{u\in N^-(v): (\delta^{\star}(u), u)>(\delta^{\star}(v)+1,v)\}|. \end{equation} In other words, if the indegree of a vertex~$v$ drops from $\delta^-(v)$ to $\delta^{\star}(v)=\delta^-(v)-r$ before the outgoing edges of~$v$ are deleted, there must be~$r$ vertices with edges to~$v$ that satisfy the following property: at least one of them, $u_0$, must have indegree high enough for its outgoing edges to be deleted before those of $v$, i.e.,\xspace $(\delta^{\star}(u_0),u_0)>(\delta^-(v), v)$; another vertex, $u_1$, must have indegree high enough for its outgoing edges to be deleted before those of $v$ after its indegree is reduced by one, i.e.,\xspace $(\delta^{\star}(u_1),u_1)>(\delta^-(v)-1, v)$; and so forth, up to $u_{r-1}$ with $(\delta^{\star}(u_{r-1}),u_{r-1})>(\delta^-(v)-(r-1), v)=(\delta^{\star}(v)+1, v)$. The other vertices $u$ with edges to $v$ must satisfy $(\delta^{\star}(u),u)<(\delta^{\star}(v),v)$. This is illustrated in \autoref{fig:lemma-1} for the case where the indegree of $v$ drops by $r=3$.
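The two characterizations above can be checked on a small instance. Since \autoref{alg:TTM} is not reproduced in this section, the following Python sketch only models the deletion loop, under the assumption — consistent with \eqref{eq:equivalence-iterations-indegrees} — that each iteration deletes the outgoing edges of the not-yet-processed vertex with lexicographically largest pair (current indegree, index) among those whose current indegree reaches the lower threshold; the helper name and the toy instance are ours, not part of the mechanism.

```python
# A sketch, not Algorithm 1 verbatim: each iteration deletes the outgoing
# edges of the unprocessed vertex with lexicographically largest
# (current indegree, index) among those with current indegree >= t.
def deletion_process(n, edges, t):
    """Return (i_star, delta_star) for every vertex whose edges get deleted."""
    E = set(edges)
    deleted = set()
    i_star, delta_star = {}, {}
    i = 0
    while True:
        indeg = {v: sum(1 for (_, w) in E if w == v) for v in range(n)}
        candidates = [v for v in range(n) if v not in deleted and indeg[v] >= t]
        if not candidates:
            return i_star, delta_star
        v = max(candidates, key=lambda x: (indeg[x], x))
        i_star[v], delta_star[v] = i, indeg[v]
        E = {(u, w) for (u, w) in E if u != v}  # drop outgoing edges of v
        deleted.add(v)
        i += 1

# Toy instance: vertex 0 has indegree 3, but its three in-neighbors 1, 2, 3
# (each boosted to indegree 3 by voters 4, 5, 6) are processed first, so the
# indegree of 0 drops from 3 to 0 before its own edges are deleted.
edges = {(1, 0), (2, 0), (3, 0)} | {(h, j) for h in (4, 5, 6) for j in (1, 2, 3)}
i_star, delta_star = deletion_process(7, edges, t=0)
```

On this instance $\delta^-(0)=3$ and $\delta^{\star}(0)=0$, and the drop of $3$ is matched by exactly three in-neighbors deleted in earlier iterations, as in \eqref{eq:vertex-descent-characterization-iterations}; the deletion order agrees with decreasing $(\delta^{\star}(v),v)$.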
When $(d,z)>(\delta^{\star}(v),v)$ it is enough to carry out the analysis for the first $r=\delta^-(v)-d+\chi(v>z)$ vertices $u_0,\ldots,u_{r-1}$. \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.88] \draw[] (-2,-.2) -- (2,-.2); \draw[] (-2,.4) -- (2,.4); \draw[] (-2,1) -- (2,1); \draw[] (-2,1.6) -- (2,1.6); \draw[] (-2,2.2) -- (2,2.2); \fill[color=gray] (-2,2.2) rectangle (2,2.8); \fill[color=gray] (-2,1.6) rectangle (-.1,2.2); \fill[color=gray,pattern=dots] (.1,1.6) rectangle (2,2.2); \fill[color=gray,pattern=dots] (-2,1) rectangle (-.1,1.6); \fill[color=gray,pattern=vertical lines] (.1,1) rectangle (2,1.6); \fill[color=gray,pattern=vertical lines] (-2,.4) rectangle (-.1,1); \Text[x=0, y=2.5]{\Large{\bf A}} \Text[x=-1, y=1.9]{\Large{\bf A}} \Text[x=-1, y=1.3]{\Large{\bf B}} \Text[x=1, y=1.9]{\Large{\bf B}} \Text[x=1, y=1.3]{\Large{\bf C}} \Text[x=-1, y=.7]{\Large{\bf C}} \Vertex[y=1.8, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.08cm]{A} \Vertex[Math, shape=circle, color=black, size=.05]{B} \Edge[Direct, style=dashed, color=black, lw=1pt](A)(B) \end{tikzpicture} \caption{Illustration of \autoref{lem:indegree-changes} for $r=3$. If the indegree of $v$ drops as shown by the dashed arrow, there must be a vertex with an edge to~$v$ in~$A$, another vertex with an edge to~$v$ in $A\cup B$, and a third vertex with an edge to~$v$ in $A\cup B\cup C$. Note that this exact condition is satisfied for the example of \autoref{fig:example-mechanism}.} \label{fig:lemma-1} \end{figure} To establish impartiality of the Twin Threshold Mechanism, we need to compare runs of the mechanism on graphs that differ in the outgoing edges of a single vertex. 
Intuitively, a change in the outgoing edges will make a difference to the outcome of the mechanism only if it affects the position of some other vertex relative to the lower threshold $t$ at the time that vertex is considered: If at that time the vertex is above the threshold its outgoing edges are deleted, otherwise the edges remain and are used in the decision of which vertex to select. We are thus interested in pairs of graphs $G_1$ and $G_2$ that differ only in the outgoing edges of a vertex $\tilde{v}$, and which contain a second vertex~$v\neq\tilde{v}$ such that $\delta^{\star}(v,G_1)>\delta^{\star}(v,G_2)$. Using \autoref{lem:indegree-changes}, we can derive conditions in terms of the indegrees of the in-neighbors of~$v$ under which this can happen. Moreover, we can show that one of two additional conditions must be satisfied: either (i)~$\tilde{v}$ has an edge to $v$ in $G_1$, or (ii)~there exists a vertex $u_0$ with an edge to~$v$ in both $G_1$ and $G_2$ such that $\delta^{\star}(u_0,G_1)<\delta^{\star}(u_0,G_2)$. We obtain the following lemma. \begin{lemma} \label{lem:indegree-changes-two-graphs} Let $G_1=(N,E_1),G_2=(N,E_2)\in \mathcal{G}_n$, $v,\ \tilde{v} \in N$ with $v \not= \tilde{v}$ be such that $E_1\setminus(\{ \tilde{v} \}\times N)=E_2\setminus (\{ \tilde{v} \}\times N),\ \delta^{\star}(v,G_1)>\delta^{\star}(v,G_2)$, and $\delta^{\star}(v,G_1)\geq t$. Consider $(d,z)\in \mathbb{N}^2$ such that $(\delta^{\star}(v,G_1),v) > (d, z) \geq (\delta^{\star}(v,G_2), v)$, and let $r=\delta^{\star}(v,G_1)-d+\chi(v>z)$. 
Then, there exist vertices $u_0,\ldots,u_{r-1}$ such that, for every $j\in\{1,\ldots,r-1\}$, \[ (u_j,v)\in E_1\cap E_2,\quad (\delta^{\star}(u_{j},G_2),u_j)> (\delta^{\star}(v,G_1)-j,v),\quad (\delta^{\star}(u_j,G_1),u_j)< (\delta^{\star}(v,G_1),v), \] and one of the following holds: \begin{enumerate}[label=(\roman*)] \item $(u_0,v)\in E_1\setminus E_2$ and if $(\delta^{\star}(v,G_1),v)<(\delta^-(\tilde{v},G_1),\tilde{v})$, taking $\tilde{r}= \delta^-(\tilde{v},G_1)-\delta^{\star}(v,G_1)+\chi(\tilde{v}>v)$ we have that there are vertices $\tilde{u}_0,\ldots,\tilde{u}_{\tilde{r}-1}$, none of them equal to $\tilde{v}$, such that $(\delta^{\star}(\tilde{u}_j,G_1),\tilde{u}_{j})> (\delta^-(\tilde{v},G_1)-j,\tilde{v})$ for every $j\in\{0,\ldots,\tilde{r}-1\}$; or \label{indegree-changes-two-graphs-alt1} \item $(\delta^{\star}(u_0,G_2),u_0)>\dots > (\delta^{\star}(u_{r-1},G_2),u_{r-1})$, $(u_0,v)\in E_1\cap E_2$, and $(\delta^{\star}(u_0,G_2),u_0)> (\delta^{\star}(v,G_1),v) > (\delta^{\star}(u_0,G_1),u_0)$. \label{indegree-changes-two-graphs-alt2} \end{enumerate} \end{lemma} We prove \autoref{lem:indegree-changes-two-graphs} in \autoref{app:indegree-changes-two-graphs} but provide some intuition for its correctness here. Assume that the indegree of a vertex $v$ drops from $\delta^-(v)$ to $\delta^{\star}(v)=\delta^-(v)-r$ before its outgoing edges are deleted. Then, by \autoref{lem:indegree-changes}, there must be $r$ in-neighbors of~$v$ whose indegrees are at least $\delta^{\star}(v)+1$ when their outgoing edges are deleted. 
Thus, if $v$, $G_1$, and $G_2$ are as in the statement of \autoref{lem:indegree-changes-two-graphs}, and we define $r_1=\delta^-(v, G_1)-\delta^{\star}(v,G_1)$ and $r_2=\delta^-(v, G_2)-\delta^{\star}(v,G_2)$, then $r_1$ in-neighbors of $v$ must have indegree at least $\delta^{\star}(v,G_1)+1$ in $G_1$ upon deletion of their outgoing edges, and $r_2$ in-neighbors of $v$ must have indegree at least $\delta^{\star}(v,G_2)+1$ in $G_2$ upon deletion of their outgoing edges. There are then two possible reasons for the difference between $\delta^{\star}(v,G_1)$ and $\delta^{\star}(v,G_2)$. The first, which can only occur when $r_1=r_2$, is given in Condition~\ref{indegree-changes-two-graphs-alt1} of the lemma: $\tilde{v}$ has an edge to $v$ in $G_1$ but not in $G_2$, while all the other indegrees remain the same. However, for this difference to have an impact, the outgoing edges of $\tilde{v}$ must be deleted after those of $v$, and \autoref{lem:indegree-changes} implies the existence of in-neighbors of $\tilde{v}$ with indegrees as shown. The other reason, which can happen when $r_1=r_2$ and necessarily happens otherwise, is that some in-neighbor $u_0$ of $v$ in both $G_1$ and $G_2$ loses its outgoing edges after $v$ when the input to the mechanism is $G_1$, but before $v$ when the input is $G_2$. This must happen due to a change in the indegree of~$u_0$ at the time its outgoing edges are deleted, i.e.,\xspace $(\delta^{\star}(u_0,G_2),u_0)> (\delta^{\star}(v,G_1),v) > (\delta^{\star}(u_0,G_1),u_0)$. This is captured in Condition~\ref{indegree-changes-two-graphs-alt2}. In both cases, \autoref{lem:indegree-changes} implies the existence of $\delta^{\star}(v,G_1)-\delta^{\star}(v,G_2)-1$ further in-neighbors of $v$ in both graphs, denoted as $u_1,\ldots, u_{r-1}$, which lose their outgoing edges after $v$ in $G_1$ but before $v$ in $G_2$.
When $(d,z)>(\delta^{\star}(v,G_2),v)$, it again suffices to carry out the analysis for a smaller subset of the vertices with edges to~$v$. \autoref{lem:indegree-changes-two-graphs} implies that whenever $G_1$ and $G_2$ differ only in the outgoing edges of a single vertex $\tilde{v}$, and $v$ is a different vertex with $\delta^{\star}(v,G_1)>\delta^{\star}(v,G_2)$, then either (i)~$\tilde{v}$ has an edge to $v$ in $G_1$, or (ii)~there exists a vertex $u_0$ with $\delta^{\star}(u_0,G_1)<\delta^{\star}(u_0,G_2)$. The fact that this relationship is the opposite of that for $v$ naturally suggests an iterative analysis, where the roles of $G_1$ and $G_2$ are exchanged in each iteration as long as Condition~\ref{indegree-changes-two-graphs-alt2} holds. Such an analysis leads to the following lemma, which establishes a sufficient condition for impartiality in terms of~$T$,~$t$, and~$k$. Impartiality for a particular choice of $T$ and $t$ that guarantees the bound of \autoref{thm:additive-ub} can then be obtained in a straightforward way. \begin{lemma} \label{lem:impartiality-additive-ub} For every $n,k \in \mathbb{N}$ with $k\leq n-1$, the Twin Threshold Mechanism with parameters $T$ and $t$ such that \[ \frac{1}{2}(T^2+3T+t-t^2) > k(n+2) \] is impartial on $\mathcal{G}_n(k)$. \end{lemma} \begin{proof} Let $f$ be the selection mechanism given by the Twin Threshold Mechanism with thresholds $T$ and $t$. Suppose that $f$ is not impartial; we will show that the inequality in the statement of the lemma is then violated. Specifically, let $n\in \mathbb{N},\ G=(N,E),G'=(N,E')\in \mathcal{G}_n$, and $\tilde{v}\in N$ such that $E\setminus(\{\tilde{v}\}\times N) = E'\setminus (\{\tilde{v}\}\times N)$ and $\tilde{v} \in f(G) \vartriangle f(G')$, i.e.,\xspace $\tilde{v}$ is selected only for one of these graphs. In particular, $\delta^-(\tilde{v},G)=\delta^-(\tilde{v},G') \geq T$.
For $\tilde{v}$ to be selected only for one of the graphs, there has to be a vertex $v\in N\setminus \{\tilde{v}\}$ whose vote is counted when the mechanism runs with input $G$ but not when it runs with input $G'$, or vice versa. Suppose w.l.o.g.\@ that the former holds and denote $v^0=v$. From \autoref{lem:indegree-changes-two-graphs} with $v=v^0$, $G_1=G,\ G_2=G'$, and $(d,z)=(t-1,v^0)$, we have that there are $r^0=\delta^{\star}(v^0,G)-(t-1)$ vertices $u^0_0,\ldots,u^0_{r^0-1}$ for which $(u^0_{j},v^0)\in E\cap E',\ (\delta^{\star}(u^0_{j},G'),u^0_{j})> (\delta^{\star}(v^0,G)-j,v^0),\ (\delta^{\star}(u^0_{j},G),u^0_{j})< (\delta^{\star}(v^0,G),v^0)$ for every $j \in \{1,\ldots, r^0-1\}$, and one of the conditions in the lemma holds. If Condition~\ref{indegree-changes-two-graphs-alt1} holds, we denote $m=0$. Otherwise, we have that $(\delta^{\star}(u^0_0,G'),u^0_0)> (\delta^{\star}(v^0,G),v^0) > (\delta^{\star}(u^0_0,G),u^0_0)$, thus we can define $v^1=u^0_0$ and apply \autoref{lem:indegree-changes-two-graphs} with $v=v^1$, $G_1=G',\ G_2=G,$ and $(d,z)=(\delta^{\star}(v^0,G),v^0)$. The argument can be repeated until Condition~\ref{indegree-changes-two-graphs-alt1} holds at some iteration, which we denote $m$. This necessarily happens because $n$ is finite, and we denote by $G^*\in \{G,G'\}$ the graph such that $(\tilde{v},v^m)\in G^*$. In particular, $G^*=G$ if $m$ is even and $G^*=G'$ if $m$ is odd.
For every iteration $\ell\in\{1,\ldots,m-1\}$ the following holds: There is a vertex $v^{\ell}=u^{\ell-1}_0$ and a strictly positive value $r^{\ell}\in \{\delta^{\star}(v^{\ell},G)-\delta^{\star}(v^{\ell-1},G')+1, \delta^{\star}(v^{\ell},G)-\delta^{\star}(v^{\ell-1},G')\}$ (if $\ell$ is even) or $r^{\ell}\in \{\delta^{\star}(v^{\ell},G')-\delta^{\star}(v^{\ell-1},G)+1, \delta^{\star}(v^{\ell},G')-\delta^{\star}(v^{\ell-1},G)\}$ (if $\ell$ is odd) such that there are vertices $u^{\ell}_0,\ldots,u^{\ell}_{r^{\ell}-1}$ such that for each $j \in \{1,\ldots,r^{\ell}-1\}$, \[ (u^{\ell}_{j},v^{\ell})\in E\cap E',\ (\delta^{\star}(u^{\ell}_{j},G'),u^{\ell}_j)> (\delta^{\star}(v^{\ell},G)-j,v^{\ell}) \text{ and } (\delta^{\star}(u^{\ell}_{j},G),u^{\ell}_j)< (\delta^{\star}(v^{\ell},G),v^{\ell}) \] if $\ell$ is even, and \[ (u^{\ell}_{j},v^{\ell})\in E\cap E',\ (\delta^{\star}(u^{\ell}_{j},G),u^{\ell}_j)> (\delta^{\star}(v^{\ell},G')-j,v^{\ell}) \text{ and } (\delta^{\star}(u^{\ell}_{j},G'),u^{\ell}_j)< (\delta^{\star}(v^{\ell},G'),v^{\ell}) \] if $\ell$ is odd. Furthermore, we claim that for every $\ell,\ell'\in\{0,\ldots,m-1\}$ and every $j \in \{0,\ldots,r^{\ell}-1\}$, $j' \in \{0,\ldots,r^{\ell'}-1\}$ with $(\ell,j)\not=(\ell',j')$ it holds $u^{\ell}_{j}\not= u^{\ell'}_{j'}$. 
In order to see this, we actually show the following properties, which directly imply the previous one: \[ (\delta^{\star}(u^{\ell}_j,G'),u^{\ell}_{j})>(\delta^{\star}(u^{\ell'}_{j'},G'),u^{\ell'}_{j'}) \quad \begin{aligned} &\text{for every } \ell\in \{2,\ldots,m\} \text{ even }, \ell'\in \{0,\ldots,\ell-1\},\\[-3pt] & j \in \{0,\ldots,r^{\ell}-1\}, j'\in \{0,\ldots,r^{\ell'}-1\}\text{ with } (\ell,j)\not=(\ell',j').\end{aligned} \] \[ (\delta^{\star}(u^{\ell}_j,G),u^{\ell}_{j})>(\delta^{\star}(u^{\ell'}_{j'},G),u^{\ell'}_{j'}) \quad \begin{aligned} &\text{for every } \ell\in \{1,\ldots,m\} \text{ odd }, \ell'\in \{0,\ldots,\ell-1\},\\[-3pt] & j \in \{0,\ldots,r^{\ell}-1\}, j'\in \{0,\ldots,r^{\ell'}-1\}\text{ with } (\ell,j)\not=(\ell',j').\end{aligned} \] We prove this by induction on $\ell$, distinguishing whether this is an even or odd value. First, let $\ell\in \{2,\ldots,m\}$ be an even value and note that \[ (\delta^{\star}(u^{\ell}_{j},G'),u^{\ell}_{j})>(\delta^{\star}(v^{\ell},G),v^{\ell})>(\delta^{\star}(v^{\ell-1},G'),v^{\ell-1}) \quad \text{for every }j\in \{0,\ldots,r^{\ell}-1\}. \] But from the definition of these vertices, $(\delta^{\star}(v^{\ell-1},G'),v^{\ell-1})>(\delta^{\star}(u^{\ell-1}_{j'},G'),u^{\ell-1}_{j'})$ for each $j'\in \{0,\ldots,r^{\ell-1}-1\}$. Moreover, we also have that $(\delta^{\star}(v^{\ell-1},G'),v^{\ell-1})>(\delta^{\star}(u^{\ell-2}_{j'},G'),u^{\ell-2}_{j'})$ for every $j'\in \{0,\ldots,r^{\ell-2}-1\}$ due to the chain of inequalities in Condition~\ref{indegree-changes-two-graphs-alt2} of \autoref{lem:indegree-changes-two-graphs}. Therefore, we conclude that for every even $\ell$ we have both \[ (\delta^{\star}(u^{\ell}_{j},G'),u^{\ell}_{j})>(\delta^{\star}(u^{\ell-1}_{j'},G'),u^{\ell-1}_{j'}) \quad \text{for every } j'\in \{0,\ldots,r^{\ell-1}-1\}, \] and \[ (\delta^{\star}(u^{\ell}_{j},G'),u^{\ell}_{j})>(\delta^{\star}(u^{\ell-2}_{j'},G'),u^{\ell-2}_{j'}) \quad \text{for every } j'\in \{0,\ldots,r^{\ell-2}-1\}.
\] This proves the claim for the base case $\ell=2$, and also implies that if it holds for every $\ell\leq \hat{\ell}$ with $\hat{\ell} \geq 2$ then it holds directly for $\hat{\ell}+2$ as well. For odd values of $\ell$, the claim follows from a completely analogous reasoning using graph $G$ instead of $G'$, with the only difference that, for the base case, $\ell'=\ell-1$ is the only possibility. If $\delta^{\star}(v^m,G^*)\geq\delta^-(\tilde{v},G)$, the indegrees (either in $G$ or $G'$) of the vertices $u^{\ell}_{j}$ for $\ell\in \{0,\ldots,m-1\}$ and $j\in \{0,\ldots,r^{\ell}-1\}$, plus the indegree of $\tilde{v}$, sum up to at least \begin{align*} & \sum_{\ell<m \text{ even}}\sum_{j=0}^{r^{\ell}-1}\delta^{\star}(u^{\ell}_{j},G') + \sum_{\ell<m \text{ odd}}\sum_{j=0}^{r^{\ell}-1}\delta^{\star}(u^{\ell}_{j},G) + \delta^-(\tilde{v},G) \\ & \geq \sum_{\ell<m \text{ even}}\sum_{j=0}^{r^{\ell}-1}(\delta^{\star}(v^{\ell},G)-j) + \sum_{\ell<m \text{ odd}}\sum_{j=0}^{r^{\ell}-1}(\delta^{\star}(v^{\ell},G')-j) + \delta^-(\tilde{v},G)\\ & \geq \sum_{j=0}^{\delta^{\star}(v^m,G)-t}(\delta^{\star}(v^m,G)-j) + \delta^-(\tilde{v},G) \geq \sum_{j=t}^{T}j + T = \frac{1}{2}(T^2+3T+t-t^2), \end{align*} where the second inequality uses that $\delta^{\star}(v^{\ell},G)\geq \delta^{\star}(u^{\ell}_0,G)=\delta^{\star}(v^{\ell+1},G')-r^{\ell+1}$ if $\ell$ is even, and $\delta^{\star}(v^{\ell},G')\geq \delta^{\star}(u^{\ell}_0,G')=\delta^{\star}(v^{\ell+1},G)-r^{\ell+1}$ if $\ell$ is odd, as stated in \autoref{lem:indegree-changes-two-graphs}. If $\delta^{\star}(v^m,G^*)<\delta^-(\tilde{v},G)$, using \autoref{lem:indegree-changes-two-graphs} with $\tilde{r}=\delta^-(\tilde{v},G)-\delta^{\star}(v^m,G^*) + \chi(\tilde{v}>v^m)$, we have that there are vertices $\tilde{u}_0,\ldots,\tilde{u}_{\tilde{r}-1}$, none of them equal to $\tilde{v}$, such that $\delta^{\star}(\tilde{u}_j,G^*)\geq \delta^-(\tilde{v},G)-j$ for every $j\in\{0,\ldots,\tilde{r}-1\}$. 
Therefore, even though the last inequality in the previous chain does not hold, we now have that the indegrees (either in $G$ or $G'$) of the vertices $u^{\ell}_{j}$ for $\ell\in \{0,\ldots,m-1\}$ and $j\in \{0,\ldots,r^{\ell}-1\}$, plus the indegrees of the vertices $\tilde{u}_0,\ldots,\tilde{u}_{\tilde{r}-1}$ and the indegree of $\tilde{v}$, sum up to at least \begin{align*} & \sum_{\ell<m \text{ even}}\sum_{j=0}^{r^{\ell}-1}\delta^{\star}(u^{\ell}_{j},G') + \sum_{\ell<m \text{ odd}}\sum_{j=0}^{r^{\ell}-1}\delta^{\star}(u^{\ell}_{j},G) + \sum_{j=0}^{\tilde{r}-1}\delta^{\star}(\tilde{u}_{j},G^*) + \delta^-(\tilde{v},G) \\ & \geq \sum_{\ell<m \text{ even}}\sum_{j=0}^{r^{\ell}-1}(\delta^{\star}(v^{\ell},G)-j) + \sum_{\ell<m \text{ odd}}\sum_{j=0}^{r^{\ell}-1}(\delta^{\star}(v^{\ell},G')-j) + \sum_{j=\delta^{\star}(v^m,G^*)+1}^{\delta^-(\tilde{v},G)}j + \delta^-(\tilde{v},G)\\ & \geq \sum_{j=0}^{\delta^{\star}(v^m,G)-t}(\delta^{\star}(v^m,G)-j) + \sum_{j=\delta^{\star}(v^m,G)+1}^{\delta^-(\tilde{v},G)}j + T\\ & \geq \sum_{j=t}^{T}j + T = \frac{1}{2}(T^2+3T+t-t^2). \end{align*} Since the sum of the indegrees is at most the maximum number $kn$ of edges, and since the indegrees of vertices summed over $G$ and $G'$ differ by at most $2k$, we conclude that \[ \frac{1}{2}(T^2+3T+t-t^2)\leq k(n+2). \tag*{\raisebox{-.5\baselineskip}{\qedhere}} \] \end{proof} \autoref{fig:lemma-3} illustrates this result by showing a situation where the outgoing edge of a vertex $\tilde{v}$ with $\delta^-(\tilde{v},G_1)=\delta^-(\tilde{v},G_2)=T$ determines whether $\delta^{\star}(v)\geq t$ for another vertex~$v$, and thus whether $\tilde{v}$ itself is selected or not. Note that in the example there exist vertices $w_j$ such that $\delta^-(w_j,G)\geq t+j$ for every $j\in\{0,\ldots, T-t\}$ and some $G\in \{G_1,G_2\}$. This property turns out to be universal and allows us to prove \autoref{lem:impartiality-additive-ub}. 
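The closed form $\sum_{j=t}^{T} j + T = \tfrac{1}{2}(T^2+3T+t-t^2)$ that concludes both chains of inequalities in the proof, as well as the sufficient condition of \autoref{lem:impartiality-additive-ub}, lends itself to a quick numerical sanity check. The snippet below is a verification aid only; it also checks, in exact integer arithmetic, the $k=1$ threshold choice $t=\lceil\sqrt{n}\rceil$ and $T=\lfloor\sqrt{t^2-t+2n+25/4}-1/2\rfloor$ used later in the proof of \autoref{thm:additive-ub}.

```python
import math

def closed_form(T, t):
    # T^2 + 3T and t - t^2 are both even, so the division is exact
    return (T * T + 3 * T + t - t * t) // 2

# Identity used at the end of both chains of inequalities above
for T in range(1, 40):
    for t in range(1, T + 1):
        assert sum(range(t, T + 1)) + T == closed_form(T, t)

# k = 1: with t = ceil(sqrt(n)), the floor defining T satisfies
# (2T + 1)^2 <= 4t^2 - 4t + 8n + 25, so T can be computed exactly
for n in range(1, 3000):
    t = math.isqrt(n - 1) + 1  # ceil(sqrt(n)) for n >= 1
    T = (math.isqrt(4 * t * t - 4 * t + 8 * n + 25) - 1) // 2
    assert closed_form(T, t) > n + 2       # the lemma's condition for k = 1
    assert (T + n // t - 2) ** 2 <= 8 * n  # alpha(n) <= sqrt(8n)
```

The integer reformulation of the floor avoids floating-point rounding near the boundary cases where $\sqrt{t^2-t+2n+25/4}-1/2$ is close to an integer.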
\autoref{fig:lemma-3} in fact shows a worst-case situation, in the sense that with fewer vertices than the ones depicted there the same situation cannot occur. We are now ready to prove \autoref{thm:additive-ub}. \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.88] \Vertex[x=1, y=2.25, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{A} \Vertex[x=1.5, y=2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{B} \Vertex[x=.5, y=1.5, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above left, distance=-.17cm]{C} \Vertex[y=.75, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above left, distance=-.17cm]{D} \Vertex[x=2, y=.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{E} \Edge[Direct, color=black, lw=1pt](B)(A) \Edge[Direct, color=black, lw=1pt](A)(C) \Edge[Direct, color=black, lw=1pt](C)(D) \Edge[Direct, color=black, lw=1pt](D)(E) \Edge[Direct, color=black, lw=1pt](E)(B) \draw[] (-.2,-.2) -- (2.2,-.2); \draw[] (-.2,.55) -- (2.2,.55); \draw[] (-.2,1.3) -- (2.2,1.3); \draw[] (-.2,2.05) -- (2.2,2.05); \draw[] (-.2,2.8) -- (2.2,2.8); \Vertex[x=4.1, y=2.25, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{F} \Vertex[x=4.6, y=2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{G} \Vertex[x=3.6, y=.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{H} \Vertex[x=3.1, y=.75, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{I} \Vertex[x=5.1, y=.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{J} \Edge[Direct, color=black, lw=1pt](G)(F) \Edge[Direct, color=black, 
lw=1pt](H)(I) \Edge[Direct, color=black, lw=1pt, bend=-25](I)(J) \Edge[Direct, color=black, lw=1pt](J)(G) \draw[] (2.9,-.2) -- (5.3,-.2); \draw[] (2.9,.55) -- (5.3,.55); \draw[] (2.9,1.3) -- (5.3,1.3); \draw[] (2.9,2.05) -- (5.3,2.05); \draw[] (2.9,2.8) -- (5.3,2.8); \Vertex[x=7.2, y=1.5, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{K} \Vertex[x=7.7, y=2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{L} \Vertex[x=6.7, y=.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{M} \Vertex[x=6.2, y=.75, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{N} \Vertex[x=8.2, y=.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{O} \Edge[Direct, color=black, lw=1pt](M)(N) \Edge[Direct, color=black, lw=1pt, bend=-25](N)(O) \Edge[Direct, color=black, lw=1pt](O)(L) \draw[] (6,-.2) -- (8.4,-.2); \draw[] (6,.55) -- (8.4,.55); \draw[] (6,1.3) -- (8.4,1.3); \draw[] (6,2.05) -- (8.4,2.05); \draw[] (6,2.8) -- (8.4,2.8); \Vertex[x=10.3, y=1.5, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{P} \Vertex[x=10.8, y=2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{Q} \Vertex[x=9.8, y=.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{R} \Vertex[x=9.3, y=.75, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{S} \Vertex[x=11.3, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{T} \Edge[Direct, color=black, lw=1pt](R)(S) \Edge[Direct, color=black, lw=1pt](T)(Q) \draw[] (9.1,-.2) -- (11.5,-.2); \draw[] (9.1,.55) -- (11.5,.55); \draw[] (9.1,1.3) -- 
(11.5,1.3); \draw[] (9.1,2.05) -- (11.5,2.05); \draw[] (9.1,2.8) -- (11.5,2.8); \Vertex[x=13.4, y=1.5, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{U} \Vertex[x=13.9, y=2.25, Math, shape=circle, color=white, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{V} \Vertex[x=12.9, y=.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{W} \Vertex[x=12.4, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{X} \Vertex[x=14.4, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{Y} \Edge[Direct, color=black, lw=1pt](Y)(V) \draw[] (12.2,-.2) -- (14.6,-.2); \draw[] (12.2,.55) -- (14.6,.55); \draw[] (12.2,1.3) -- (14.6,1.3); \draw[] (12.2,2.05) -- (14.6,2.05); \draw[] (12.2,2.8) -- (14.6,2.8); \Text[x=-.7, y=-.6, fontsize=\scriptsize]{$G_1$} \Text[x=-.8, y=.925, fontsize=\scriptsize]{$t$} \Text[x=-.8, y=2.425, fontsize=\scriptsize]{$T$} \Text[x=1, y=-.6, fontsize=\scriptsize]{$i=0$} \Text[x=4.1, y=-.6, fontsize=\scriptsize]{$i=1$} \Text[x=7.2, y=-.6, fontsize=\scriptsize]{$i=2$} \Text[x=10.3, y=-.6, fontsize=\scriptsize]{$i=3$} \Text[x=13.4, y=-.6, fontsize=\scriptsize]{$i=4$} \Vertex[x=1, y=-3, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{AA} \Vertex[x=1.5, y=-2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{AB} \Vertex[x=.5, y=-3, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{AC} \Vertex[y=-3.75, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above left, distance=-.17cm]{AD} \Vertex[x=2, y=-3.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{AE} \Edge[Direct, color=black, 
lw=1pt](AA)(AC) \Edge[Direct, color=black, lw=1pt](AC)(AD) \Edge[Direct, color=black, lw=1pt](AD)(AE) \Edge[Direct, color=black, lw=1pt](AE)(AB) \draw[] (-.2,-4.7) -- (2.2,-4.7); \draw[] (-.2,-3.95) -- (2.2,-3.95); \draw[] (-.2,-3.2) -- (2.2,-3.2); \draw[] (-.2,-2.45) -- (2.2,-2.45); \draw[] (-.2,-1.7) -- (2.2,-1.7); \Vertex[x=4.1, y=-3, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{AF} \Vertex[x=4.6, y=-2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{AG} \Vertex[x=3.6, y=-3, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{AH} \Vertex[x=3.1, y=-4.5, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{AI} \Vertex[x=5.1, y=-3.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{AJ} \Edge[Direct, color=black, lw=1pt](AF)(AH) \Edge[Direct, color=black, lw=1pt](AI)(AJ) \Edge[Direct, color=black, lw=1pt](AJ)(AG) \draw[] (2.9,-4.7) -- (5.3,-4.7); \draw[] (2.9,-3.95) -- (5.3,-3.95); \draw[] (2.9,-3.2) -- (5.3,-3.2); \draw[] (2.9,-2.45) -- (5.3,-2.45); \draw[] (2.9,-1.7) -- (5.3,-1.7); \Vertex[x=7.2, y=-3, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{AK} \Vertex[x=7.7, y=-2.25, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{AL} \Vertex[x=6.7, y=-3.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{AM} \Vertex[x=6.2, y=-4.5, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{AN} \Vertex[x=8.2, y=-3.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above right, distance=-.16cm]{AO} \Edge[Direct, color=black, lw=1pt](AN)(AO) \Edge[Direct, color=black, 
lw=1pt](AO)(AL) \draw[] (6,-4.7) -- (8.4,-4.7); \draw[] (6,-3.95) -- (8.4,-3.95); \draw[] (6,-3.2) -- (8.4,-3.2); \draw[] (6,-2.45) -- (8.4,-2.45); \draw[] (6,-1.7) -- (8.4,-1.7); \Vertex[x=10.3, y=-3, Math, shape=circle, color=black, , size=.05, label=u^3_1, fontscale=1, position=above, distance=-.1cm]{AP} \Vertex[x=10.8, y=-3, Math, shape=circle, color=black, size=.05, label=\tilde{v}, fontscale=1, position=above, distance=-.07cm]{AQ} \Vertex[x=9.8, y=-3.75, Math, shape=circle, color=black, size=.05, label=u^2_1, fontscale=1, position=above, distance=-.1cm]{AR} \Vertex[x=9.3, y=-4.5, Math, shape=circle, color=black, size=.05, label=u^1_1, fontscale=1, position=above, distance=-.1cm]{AS} \Vertex[x=11.3, y=-3.75, Math, shape=circle, color=black, size=.05, label=v, fontscale=1, position=above, distance=-.07cm]{AT} \Edge[Direct, color=black, lw=1pt](AS)(AT) \draw[] (9.1,-4.7) -- (11.5,-4.7); \draw[] (9.1,-3.95) -- (11.5,-3.95); \draw[] (9.1,-3.2) -- (11.5,-3.2); \draw[] (9.1,-2.45) -- (11.5,-2.45); \draw[] (9.1,-1.7) -- (11.5,-1.7); \Text[x=-.7, y=-5.1, fontsize=\scriptsize]{$G_2$} \Text[x=-.8, y=-3.575, fontsize=\scriptsize]{$t$} \Text[x=-.8, y=-2.075, fontsize=\scriptsize]{$T$} \Text[x=1, y=-5.1, fontsize=\scriptsize]{$i=0$} \Text[x=4.1, y=-5.1, fontsize=\scriptsize]{$i=1$} \Text[x=7.2, y=-5.1, fontsize=\scriptsize]{$i=2$} \Text[x=10.3, y=-5.1, fontsize=\scriptsize]{$i=3$} \end{tikzpicture} \caption{By changing its outgoing edge, $\tilde{v}$ is able to affect whether it is selected by the mechanism or not. \autoref{lem:impartiality-additive-ub} gives a condition over $T,\ t$, and $k$ such that this cannot happen.} \label{fig:lemma-3} \end{figure} \begin{proof}[Proof of \autoref{thm:additive-ub}] Let $f$ be the selection mechanism given by the Twin Threshold Mechanism with $T=\frac{5}{2}\sqrt{c}n^{\frac{1+\kappa}{2}}-1$ and $t=\frac{1}{2}\sqrt{c}n^{\frac{1+\kappa}{2}}$. Let $n\in \mathbb{N}_+,\ G\in \mathcal{G}_n(k)$ and $\kappa, c>0$ such that $k\leq cn^{\kappa}$. 
This implies that $k(n+2)\leq cn^{1+\kappa}+2cn^{\kappa}\leq 3cn^{1+\kappa}$, thus from \autoref{lem:impartiality-additive-ub} we have that a sufficient condition for impartiality is that \[ \frac{1}{2}(T^2+3T+t-t^2) > 3cn^{1+\kappa}. \] Replacing $T$ and $t$ yields \[ \frac{1}{2}(T^2+3T+t-t^2) = 3cn^{1+\kappa} + \frac{3}{2}\sqrt{c}n^{\frac{1+\kappa}{2}} - 1 > 3cn^{1+\kappa}, \] where the last inequality uses that $cn^{\kappa}\geq k$ and that $n, k\geq 1$. We conclude that the mechanism is impartial for these values of $T$ and $t$. To obtain the additive bound, first consider a graph $G=(N,E)$ such that the mechanism returns the empty set when run with this graph as input. Let $v^*$ be such that $\delta^-(v^*)=\Delta(G)$ and note that necessarily $\hat{\delta}^I(v^*)\leq T-1$. Since there are at most $\lfloor kn/t\rfloor$ vertices with indegree $t$ or higher, a maximum of $\lfloor kn/t\rfloor-1$ in-neighbors of $v^*$ have their outgoing edges deleted during the algorithm. Therefore, we conclude that $\Delta(G)\leq T+\lfloor kn/t \rfloor-2$. Consider now $G=(N,E)$ such that the mechanism returns a set $\{v\}$, and let $v^*$ be such that $\delta^-(v^*)=\Delta(G)$. Once again, a maximum of $\lfloor kn/t\rfloor-1$ in-neighbors of $v^*$ have their outgoing edges deleted during the algorithm. Using the fact that $\hat{\delta}^I(v^*)\leq \hat{\delta}^I(v)$ since $v$ is selected, we conclude that \[ \Delta(G)-\delta^-(v) \leq \left(\delta^-(v)+\left\lfloor \frac{kn}{t} \right\rfloor-1\right) -\delta^-(v) = \left\lfloor \frac{kn}{t}\right\rfloor-1. \] Since the value obtained in the former case is greater than or equal to the one obtained in the latter for any values of $T$ and $t$, we have that $f$ is $\alpha$-additive for $\alpha = T+\lfloor kn/t \rfloor-2$.
Therefore, for the specified values of $T$ and $t$ and given the upper bound on $k$, $f$ is $\alpha$-additive for \[ \alpha = \frac{5}{2}\sqrt{c}n^{\frac{1+\kappa}{2}}-1 + \left\lfloor \frac{2cn^{1+\kappa}}{\sqrt{c}n^{\frac{1+\kappa}{2}}}\right\rfloor -2 = \frac{5}{2}\sqrt{c}n^{\frac{1+\kappa}{2}}+ \left\lfloor 2\sqrt{c}n^{\frac{1+\kappa}{2}}\right\rfloor -3 = O(n^{\frac{1+\kappa}{2}}). \] We conclude that $f$ is $O(n^{\frac{1+\kappa}{2}})$-additive. When $k=1$, a more detailed analysis yields the bound of $\sqrt{8n}$. First observe that in this case, by \autoref{lem:impartiality-additive-ub}, impartiality holds when \[ \frac{1}{2}(T^2+3T+t-t^2)-(n+2) > 0 \] and thus when \[ T^2+3T-(t^2-t+2n+4) > 0. \] The left-hand side is equal to zero if and only if \[ T=\frac{-3 \pm \sqrt{4(t^2-t+2n+4)+9}}{2} = \pm \sqrt{t^2-t+2n+\frac{25}{4}} - \frac{3}{2}, \] and since $T$ has to be non-negative impartiality holds if \[ T > \max\left\{- \sqrt{t^2-t+2n+\frac{25}{4}} - \frac{3}{2}, \sqrt{t^2-t+2n+\frac{25}{4}} - \frac{3}{2}\right\}. \] This is trivially satisfied if we take \[ t=\lceil\sqrt{n}\rceil,\quad T = \left\lfloor \sqrt{\lceil\sqrt{n}\rceil^2 - \lceil\sqrt{n}\rceil +2n+\frac{25}{4}} -\frac{1}{2} \right\rfloor. \] As before, we know that given the thresholds $T$ and $t$, the mechanism is $\alpha$-additive for any $\alpha\geq\alpha(n):= T+\lfloor n/t \rfloor -2$. In order to obtain an upper bound on $\alpha(n)$, we start by bounding $T$ from above: \begin{align*} T = \left\lfloor \sqrt{\lceil\sqrt{n}\rceil^2 - \lceil\sqrt{n}\rceil +2n+\frac{25}{4}} - \frac{1}{2}\right\rfloor & \leq \sqrt{(\sqrt{n}+1)^2 - (\sqrt{n}+1) +2n+\frac{25}{4}} - \frac{1}{2}\\ & = \sqrt{3n+\sqrt{n}+\frac{25}{4}} - \frac{1}{2}, \end{align*} where the first inequality holds because $a^2-a$ is increasing for $a\geq 1/2$. 
Then \[ \alpha(n) = T+\lfloor n/t \rfloor-2 \leq \sqrt{3n+\sqrt{n}+\frac{25}{4}} - \frac{1}{2} +\left\lfloor \frac{n}{\lceil \sqrt{n}\rceil} \right\rfloor-2 \leq \sqrt{3n+\sqrt{n}+\frac{25}{4}} + \sqrt{n} -\frac{5}{2}. \] To see that the last expression is bounded from above by $\sqrt{8n}$, let $g(n)=\sqrt{8n}-\sqrt{3n+\sqrt{n}+\frac{25}{4}} - \sqrt{n} +\frac{5}{2}$ and observe that \[ g(1) = \sqrt{8} - \sqrt{3+\sqrt{1}+\frac{25}{4}} - \sqrt{1} +\frac{5}{2} = \sqrt{8} -\sqrt{\frac{41}{4}} + \frac{3}{2} \approx 1.13 >0. \] Moreover, \[ g'(n) = \frac{\sqrt{2}}{\sqrt{n}} - \frac{6\sqrt{n}+1}{2\sqrt{n}\sqrt{12n+4\sqrt{n}+25}} - \frac{1}{2\sqrt{n}} = \frac{4(2\sqrt{2}-1)\sqrt{12n+4\sqrt{n}+25}-24\sqrt{n}-4}{8\sqrt{n}\sqrt{12n+4\sqrt{n}+25}}, \] which is non-negative when $n\geq 1$. To see this, note that the denominator of the last expression is positive and that $4(2\sqrt{2}-1)>7$, so that $g'(n)\geq 0$ if \[ 7\sqrt{12n+4\sqrt{n}+25}\geq 24\sqrt{n}+4, \] i.e.,\xspace if \[ 12n+4\sqrt{n}+1209\geq 0. \] This clearly holds when $n\geq 1$. We conclude that $\alpha(n)\leq \sqrt{8n}$ for every $n\geq 1$, thus $f$ is $\sqrt{8n}$-additive. \end{proof} \section{A Tight Impossibility Result for Approval} So far, we have developed a new mechanism for impartial selection and have established an additive performance guarantee for the mechanism relative to the maximum outdegree in the graph. We will now take a closer look at the case where the maximum outdegree is unbounded, i.e.,\xspace at the approval setting. When applied to the approval setting, \autoref{thm:additive-ub} provides a performance guarantee of $O(n)$. As the maximum indegree in a graph with $n$ vertices is $n-1$, this bound is trivially achieved by any impartial mechanism including the mechanism that never selects. \citet{caragiannis2019impartial} have used a careful case analysis to show that deterministic impartial mechanisms cannot be better than $3$-additive.
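As a brief numerical aside (illustrative only, not part of any formal argument), the $\sqrt{8n}$ guarantee for $k=1$ derived in the preceding proof can be checked by brute force, using the thresholds chosen there:

```python
import math

def alpha_k1(n):
    """alpha(n) = T + floor(n/t) - 2 with the k = 1 thresholds
    t = ceil(sqrt(n)) and T = floor(sqrt(t^2 - t + 2n + 25/4) - 1/2)."""
    t = math.ceil(math.sqrt(n))
    T = math.floor(math.sqrt(t * t - t + 2 * n + 25 / 4) - 0.5)
    return T + n // t - 2

# the additive guarantee alpha(n) <= sqrt(8n) holds for every n checked
assert all(alpha_k1(n) <= math.sqrt(8 * n) for n in range(1, 5001))
```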
We show that the trivial upper bound of $n-1$ is in fact tight for all $n$, which means that the mechanism that never selects provides the best possible additive performance guarantee among all deterministic impartial mechanisms. Our result is in fact more general and again holds relative to the maximum outdegree~$k$. \begin{theorem} \label{thm:additive-lb} % Let $n\in\mathbb{N}$ and $k\leq n-1$. Let $f$ be an impartial deterministic selection mechanism such that $f$ is $\alpha$-additive on $\mathcal{G}_n(k)$. Then $\alpha\geq k$. In particular, if $f$ is $\alpha$-additive on $\mathcal{G}_n$, then $\alpha\geq n-1$. \end{theorem} In the practically relevant case where individuals are not allowed to abstain and the minimum outdegree is therefore at least~$1$, a small improvement can be obtained by selecting a vertex with an incoming edge from a fixed vertex, and again breaking ties by a fixed ordering of the vertices. The selected vertex then has indegree at least $1$, which for the approval setting implies ($n-2$)-additivity. This guarantee is again best possible. \begin{theorem} \label{thm:additive-lb-abstentions} % Let $n\in\mathbb{N}$ and $k\leq n-1$. Let $f$ be an impartial deterministic selection mechanism such that $f$ is $\alpha$-additive on $\mathcal{G}^+_n(k)$. Then $\alpha\geq k-1$. In particular, if $f$ is $\alpha$-additive on $\mathcal{G}^+_n$, then $\alpha\geq n-2$. \end{theorem} To prove both theorems we study the performance of impartial, but not necessarily deterministic, selection mechanisms on a particular class of graphs which in the case of $n$ vertices we denote by $\mathcal{G}^T_n$. For each $n\in\mathbb{N}$, a graph $G=(N,E)\in \mathcal{G}_n$ belongs to $\mathcal{G}^T_n$ if and only if there exists an $r$-partition of $N$ for some $r\geq 1$, which we denote by $(S_1,\ldots,S_{r})$, such that (i)~$u<v$ for every $u\in S_i$, and $v\in S_j$ with $i<j$, and (ii)~$E=\{(u,v)\in S_i\times S_j: i,j \in \{1,\ldots,r\}, i\leq j, u\not=v\}$. 
In other words, $\mathcal{G}^T_n$ contains all graphs obtained by taking an ordered partition of a set of $n$ unlabeled vertices and adding edges from each vertex to all other vertices in the same part and in greater parts. We will not be interested in isomorphic graphs within the class and thus only consider partitions of the vertices in increasing order. A graph in~$\mathcal{G}^T_n$ is thus characterized by the partition $(S_1,\ldots,S_{r})$, or by the tuple $(s_1,\ldots,s_r)$ where $s_i=|S_i|$ for each $i\in\{1,\ldots,r\}$. For a given graph $G\in\mathcal{G}^T_n$, we denote the former by $S(G)$, the latter by $s(G)$, and the length of $s(G)$ by $r(G)$. Finally, for $G\in \mathcal{G}^T_n$, let \[ \lambda_G=\frac{n!}{\prod_{i=1}^{r(G)}{(s(G))_i!}}, \] which is the number of graphs with $n$ vertices isomorphic to $G$. \autoref{fig:transitive-graphs-2-3} shows the graphs in $\mathcal{G}^T_2$ and $\mathcal{G}^T_3$, along with their tuple representation $s(G)$ and associated values $\lambda_G$. The sums, for $n\in\mathbb{N}$, of the values $\lambda_G$ for all graphs $G\in\mathcal{G}^T_n$ are known as Fubini numbers and count the number of weak orders on an $n$-element set. The following lemma establishes a property of Fubini numbers that is readily appreciated for the cases shown in \autoref{fig:transitive-graphs-2-3} but in fact holds for all~$n$. The property was known previously~\citep{diagana2017some}, but we provide an alternative proof in \autoref{app:odd-graphs} for the sake of completeness. \begin{lemma} \label{lem:odd-graphs} For every $n\in \mathbb{N},\ n\geq 1$, $\sum_{G\in \mathcal{G}^T_n}\lambda_G$ is an odd number.
\end{lemma} \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.88] \Vertex[y=1.5, Math, shape=circle, color=black, size=.05]{A} \Vertex[Math, shape=circle, color=black, size=.05]{B} \Edge[Direct, color=black, lw=1pt](A)(B) \Vertex[x=1.3, y=1.5, Math, shape=circle, color=black, size=.05]{C} \Vertex[x=1.3, Math, shape=circle, color=black, size=.05]{D} \Edge[Direct, color=black, lw=1pt, bend=-20](C)(D) \Edge[Direct, color=black, lw=1pt, bend=-20](D)(C) \Vertex[x=3.5, y=1.5, Math, shape=circle, color=black, size=.05]{E} \Vertex[x=2.6, Math, shape=circle, color=black, size=.05]{F} \Vertex[x=4.4, Math, shape=circle, color=black, size=.05]{G} \Edge[Direct, color=black, lw=1pt](E)(F) \Edge[Direct, color=black, lw=1pt](E)(G) \Edge[Direct, color=black, lw=1pt](F)(G) \Vertex[x=6.6, y=1.5, Math, shape=circle, color=black, size=.05]{H} \Vertex[x=5.7, Math, shape=circle, color=black, size=.05]{I} \Vertex[x=7.5, Math, shape=circle, color=black, size=.05]{J} \Edge[Direct, color=black, lw=1pt](H)(I) \Edge[Direct, color=black, lw=1pt](H)(J) \Edge[Direct, color=black, lw=1pt, bend=-20](I)(J) \Edge[Direct, color=black, lw=1pt, bend=-20](J)(I) \Vertex[x=9.7, y=1.5, Math, shape=circle, color=black, size=.05]{K} \Vertex[x=8.8, Math, shape=circle, color=black, size=.05]{L} \Vertex[x=10.6, Math, shape=circle, color=black, size=.05]{M} \Edge[Direct, color=black, lw=1pt, bend=-20](K)(L) \Edge[Direct, color=black, lw=1pt, bend=-20](L)(K) \Edge[Direct, color=black, lw=1pt](K)(M) \Edge[Direct, color=black, lw=1pt](L)(M) \Vertex[x=12.8, y=1.5, Math, shape=circle, color=black, size=.05]{N} \Vertex[x=11.9, Math, shape=circle, color=black, size=.05]{O} \Vertex[x=13.7, Math, shape=circle, color=black, size=.05]{P} \Edge[Direct, color=black, lw=1pt, bend=-20](N)(O) \Edge[Direct, color=black, lw=1pt, bend=-20](O)(N) \Edge[Direct, color=black, lw=1pt, bend=-20](N)(P) \Edge[Direct, color=black, lw=1pt, bend=-20](P)(N) \Edge[Direct, color=black, lw=1pt, bend=-20](O)(P) \Edge[Direct, color=black, 
lw=1pt, bend=-20](P)(O) \Text[x=-1.4, y=-.75]{$s(G)$} \Text[x=-1.4, y=-1.25]{$\lambda_G$} \Text[y=-.75]{$(1,1)$} \Text[y=-1.25]{$2$} \Text[x=1.3, y=-.75]{$(2)$} \Text[x=1.3, y=-1.25]{$1$} \Text[x=3.5, y=-.75]{$(1,1,1)$} \Text[x=3.5, y=-1.25]{$6$} \Text[x=6.6, y=-.75]{$(1,2)$} \Text[x=6.6, y=-1.25]{$3$} \Text[x=9.7, y=-.75]{$(2,1)$} \Text[x=9.7, y=-1.25]{$3$} \Text[x=12.8, y=-.75]{$(3)$} \Text[x=12.8, y=-1.25]{$1$} \end{tikzpicture} \caption{Graphs in $\mathcal{G}^T_2$ and $\mathcal{G}^T_3$.} \label{fig:transitive-graphs-2-3} \end{figure} For every pair of graphs $G,G'\in \mathcal{G}^T_n$ and $j\in\{2,\ldots,r(G)\}$, we say that there is a \textit{$j$-transition} from $G$ to $G'$ if $r(G)=r(G')+1$, $(s(G))_{j}=1$, and \[ (s(G'))_i =\left\{ \begin{array}{ll} (s(G))_i & \text{ if } i\leq j-2, \\ (s(G))_i +1 & \text{ if } i= j-1, \\ (s(G))_{i+1} & \text{ if } i\geq j. \end{array} \right. \] Intuitively, a $j$-transition can be obtained by changing the outgoing edges of the single vertex in the set $(S(G))_{j}$, including not only edges to vertices in $\bigcup_{i=j}^{r(G)}(S(G))_i$ but also to vertices in $(S(G))_{j-1}$. When the value of $j$ is not relevant in a particular context, we simply say that there is a transition from $G$ to $G'$ if there exists some $j\in\{2,\ldots,r(G)\}$ such that there is a $j$-transition from $G$ to $G'$. Observe that for every pair of graphs $G,G'\in \mathcal{G}^T_n$ there is at most one $j\in\{2,\ldots,r(G)\}$ such that there is a $j$-transition from $G$ to $G'$, and that if there is a transition from $G$ to $G'$, there cannot be a transition from $G'$ to $G$. This kind of relation between ordered partitions has been exploited by \citet{insko2017ordered} for studying an expansion of the determinant, giving rise to a partial order on~$\mathcal{G}_n^T$. 
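The quantities just introduced are easy to enumerate computationally. The following sketch (illustrative only) represents a graph $G\in\mathcal{G}^T_n$ by its tuple $s(G)$, computes $\lambda_G$, confirms for small $n$ the oddness of the Fubini numbers asserted in \autoref{lem:odd-graphs}, and verifies the coefficient identity $\lambda_{G'}\cdot(s(G'))_{j-1}=\lambda_G\cdot(s(G))_{j}$ that holds whenever there is a $j$-transition from $G$ to $G'$, which is used later in the proof of \autoref{lem:imposibility-selection}:

```python
from math import factorial

def compositions(n):
    """Ordered tuples of positive integers summing to n, i.e., the
    tuples s(G) for G in G^T_n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def lam(s):
    """lambda_G = n! / prod_i (s_i)! for s = s(G)."""
    out = factorial(sum(s))
    for x in s:
        out //= factorial(x)
    return out

def j_transition(s, j):
    """Tuple s(G') after the j-transition (j is 1-indexed, s_j = 1)."""
    assert 2 <= j <= len(s) and s[j - 1] == 1
    return s[:j - 2] + (s[j - 2] + 1,) + s[j:]

for n in range(1, 7):
    fubini = sum(lam(s) for s in compositions(n))
    assert fubini % 2 == 1  # the Fubini numbers are odd
    for s in compositions(n):
        for j in range(2, len(s) + 1):
            if s[j - 1] == 1:
                sp = j_transition(s, j)
                # coefficient identity behind the balancing argument
                assert lam(sp) * sp[j - 2] == lam(s) * s[j - 1]
```

For $n=2$ and $n=3$ the computed values $\lambda_G$ and their sums $3$ and $13$ agree with \autoref{fig:transitive-graphs-2-3}.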
In our context, it turns out to be relevant because whenever there is a $j$-transition from $G$ to $G'$, any impartial mechanism either selects the vertex in $(S(G))_{j}$ both in $G$ and $G'$, or in none of them. In the case of plurality, impartiality was shown by \citet{holzman2013impartial} to be incompatible with two further axioms: \textit{positive unanimity}, which requires for all $G=(N,E)\in\mathcal{G}(1)$ that $v\in f(G)$ if $\delta^-(v)=|N|-1$; and \textit{negative unanimity}, which requires for all $G=(N,E)\in\mathcal{G}(1)$ that $v\not\in f(G)$ if $\delta^-(v)=0$. This result holds even on a restricted class of graphs, consisting of a single cycle and additional vertices with edges onto that cycle, and has immediate and very strong implications on the best multiplicative approximation guarantee that an impartial mechanism can achieve. For additive performance guarantees, however, the incompatibility of impartiality with the other two axioms implies only a lower bound of $2$. We will see in the following that on the class $\mathcal{G}^T_n$, impartiality is incompatible with a single axiom that weakens positive and negative unanimity. Strong lower bounds regarding additive performance guarantees for approval then follow immediately. The class $\mathcal{G}^T_n$ is very different, and has to be very different, from the class of graphs used by \citeauthor{holzman2013impartial}, and will ultimately require a new analysis. We can, however, follow the approach of \citeauthor{holzman2013impartial} to consider randomized mechanisms rather than deterministic ones, which allows us without loss of generality to restrict attention to mechanisms that treat vertices symmetrically. A \textit{randomized selection mechanism} for $\mathcal{G}$ is given by a family of functions $f:\mathcal{G}_n\to [0,1]^n$ that maps each graph to a probability distribution on the set of its vertices, such that $\sum_{i=1}^n{(f(G))_i}\leq 1$ for every graph $G\in \mathcal{G}_n$. 
Analogously to the case of deterministic mechanisms, we say that a randomized selection mechanism $f$ is \emph{impartial} on $\mathcal{G}'\subseteq \mathcal{G}$ if for every pair of graphs $G = (N, E)$ and $G' = (N, E')$ in $\mathcal{G}'$ and every $v\in N$, $(f(G))_v = (f(G'))_v$ whenever $E \setminus (\{v\} \times N) = E' \setminus (\{v\} \times N)$. We say that a randomized selection mechanism $f$ satisfies \textit{weak unanimity} on $\mathcal{G}_n$ if for every $G=(N,E)\in \mathcal{G}_n$ such that $\delta^-(v)=n-1$ for some $v\in N$, \[ \sum_{u\in N: \delta^-(u)\geq 1}(f(G))_u\geq 1. \] In other words, weak unanimity requires that a vertex with positive indegree is chosen with probability~$1$ whenever there exists a vertex with indegree~$n-1$. We finally say that a randomized mechanism $f$ is \emph{symmetric} if it is invariant with respect to renaming of the vertices, i.e.,\xspace if for every $G = (N, E)\in \mathcal{G}$, every $v\in N$ and every permutation $\pi = (\pi_1,\ldots, \pi_{|N|})$ of $N$, $ (f(G_{\pi}))_{\pi_v} = (f(G))_v$, where $G_{\pi} = (N, E_{\pi})$ with $E_{\pi} = \{(\pi_u, \pi_v): (u, v) \in E\}$. For a given randomized mechanism $f$, we denote by $f_s$ the mechanism obtained by applying a random permutation $\pi$ to the vertices of the input graph, invoking $f$, and permuting the result by the inverse of $\pi$. Thus, for all $n\in \mathbb{N},\ G \in \mathcal{G}_n$, and $v \in N$, \[ (f_s(G))_v = \frac{1}{n!} \sum_{\pi\in \mathcal{S}_n}(f(G_{\pi}))_{\pi_v}, \] where $\mathcal{S}_n$ is the set of all permutations $\pi = (\pi_1,\ldots, \pi_n)$ of a set of $n$ elements. The following lemma, which we prove in \autoref{app:symmetry-axiom}, establishes that $f_s$ is symmetric for every randomized mechanism $f$ and inherits impartiality and weak unanimity from $f$. The lemma is a straightforward variant of a result of \citeauthor{holzman2013impartial} and will allow us to restrict attention to symmetric randomized mechanisms. 
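To make the symmetrization concrete, here is a small executable sketch (illustrative only; the toy mechanism below is our own construction, not one from the paper) of $(f_s(G))_v = \frac{1}{n!}\sum_{\pi}(f(G_{\pi}))_{\pi_v}$ on graphs with vertex set $\{0,\ldots,n-1\}$:

```python
from itertools import permutations

def toy_mechanism(edges, n):
    """Toy deterministic mechanism: select a maximum-indegree vertex,
    breaking ties by lowest index, with probability 1. It is neither
    impartial nor symmetric; it only serves to illustrate f_s."""
    indeg = [sum(1 for (_, v) in edges if v == w) for w in range(n)]
    winner = max(range(n), key=lambda w: (indeg[w], -w))
    return [1.0 if w == winner else 0.0 for w in range(n)]

def symmetrize(f, edges, n):
    """(f_s(G))_v = (1/n!) * sum over permutations pi of (f(G_pi))_{pi_v}."""
    probs = [0.0] * n
    perms = list(permutations(range(n)))
    for pi in perms:
        permuted = {(pi[u], pi[v]) for (u, v) in edges}
        out = f(permuted, n)
        for v in range(n):
            probs[v] += out[pi[v]]
    return [p / len(perms) for p in probs]

# vertex 2 has indegree 2 under every relabeling, so f_s selects it surely
assert symmetrize(toy_mechanism, {(0, 2), (1, 2)}, 3) == [0.0, 0.0, 1.0]

# on the 3-cycle all vertices look alike, so f_s splits evenly
cycle = symmetrize(toy_mechanism, {(0, 1), (1, 2), (2, 0)}, 3)
assert all(abs(p - 1 / 3) < 1e-12 for p in cycle)
```

Note how the second example shows the sense in which $f_s$ is symmetric even though the underlying $f$ breaks ties by index.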
\begin{lemma} \label{lem:symmetry-axiom} Let $f$ be a randomized selection mechanism that is impartial and weakly unanimous on $\mathcal{G}_n$. Then, $f_s$ is symmetric, impartial, and weakly unanimous on $\mathcal{G}_n$. \end{lemma} We are now ready to state our axiomatic impossibility result, which can be seen as a stronger version of that of \citeauthor{holzman2013impartial} for the case of unbounded outdegree. Both lower bounds follow easily from this result. \begin{lemma} \label{lem:imposibility-selection} For every $n\in\mathbb{N},\ n\geq 2$, there exists no randomized selection mechanism $f$ satisfying impartiality and weak unanimity on $\mathcal{G}^T_n$. \end{lemma} \begin{proof} Let $n\in \mathbb{N},\ n\geq 2$ and suppose that there exists a randomized selection mechanism $f$ satisfying impartiality and weak unanimity on $\mathcal{G}_n$. Since we can assume symmetry due to \autoref{lem:symmetry-axiom}, for each graph $G \in \mathcal{G}^T_n$ we have that for every $i\in \{1,\ldots, r(G)\}$ and every $u,v\in (S(G))_i$, $(f(G))_u=(f(G))_v$, and thus we denote this value simply as $(f(G))_i$. We consider for the proof an undirected graph $\mathcal{H}_n=(\mathcal{G}^T_n, \mathcal{F})$, such that for every pair of graphs $G,G'\in \mathcal{G}^T_n$ we have that $\{G,G'\}\in \mathcal{F}$ if and only if there is a transition from $G$ to $G'$ or from $G'$ to $G$. By definition of a transition, for each $\{G,G'\}\in \mathcal{F}$ we have $|r(G)-r(G')|=1$. Therefore, $\mathcal{H}_n$ is bipartite with partition $(L_n, R_n)$ where $L_n=\{G\in \mathcal{G}^T_n: r(G) \text{ is even}\}$ and $R_n=\{G\in \mathcal{G}^T_n: r(G) \text{ is odd}\}$. \autoref{fig:graphs-of-graphs-2-3-4} in \autoref{app:imposibility-selection} depicts the graphs $\mathcal{H}_2,\ \mathcal{H}_3$, and $\mathcal{H}_4$. For each graph $G \in \mathcal{G}^T_n$, we define $i(G)=2$ if $(s(G))_1=1$ and $i(G)=1$ otherwise. 
Let $G=(N,E)$ and $G'=(N,E')$ be two graphs in $\mathcal{G}^T_n$ such that there is a $j$-transition from $G$ to $G'$ for some $j\in\{2,\ldots,r(G)\}$. Denoting by $v$ the unique vertex in $(S(G))_{j}$, which is also in $S(G')_{j-1}$, we have that $E\setminus (\{v\}\times N) = E'\setminus (\{v\}\times N)$, and since $f$ is impartial, $(f(G))_{j} = (f(G'))_{j-1}$. Therefore, \begingroup \allowdisplaybreaks \begin{align*} \lambda_{G'} \cdot (s(G'))_{j-1} \cdot (f(G'))_{j-1} & = \frac{n!}{\prod_{i=1}^{r(G')}{(s(G'))_i!}} (s(G'))_{j-1} (f(G'))_{j-1}\\ & = \frac{n!}{((s(G'))_{j-1}-1)!\prod_{i\in\{1,\ldots,r(G')\}\setminus \{j-1\}}{(s(G'))_i!}} (f(G'))_{j-1}\\ & = \frac{n!}{((s(G))_{j-1})!\prod_{i\in\{1,\ldots,r(G)\}\setminus \{j-1,j\}}{(s(G))_i!}} (f(G))_{j}\\ & = \frac{n!}{\prod_{i\in\{1,\ldots,r(G)\}}{(s(G))_i!}} (s(G))_{j} \cdot (f(G))_{j}\\ & = \lambda_G \cdot (s(G))_{j}\cdot (f(G))_{j}. \end{align*} \endgroup The first two and last two equalities are obtained by replacing known expressions and simple calculations. The third equality comes from the fact that $(s(G'))_i=(s(G))_i$ for every $i\leq j-2$, $(s(G'))_{j-1}=(s(G))_{j-1}+1$, and $(s(G'))_i=(s(G))_{i+1}$ for every $i\geq j$. Moreover, observe that for each $G\in \mathcal{G}^T_n$ and for each $j\in \{i(G),\ldots,r(G)\}$, there exists exactly one $G'\in \mathcal{G}^T_n$ such that there is a $j$-transition from $G$ to $G'$ (if $(s(G))_j=1$) or a $(j+1)$-transition from $G'$ to $G$ (if $(s(G))_j\geq 2$). Therefore, \begin{equation}\label{eq:sums_bipartition} \sum_{G\in L_n}\sum_{i=i(G)}^{r(G)}{\lambda_G \cdot (s(G))_{i} \cdot (f(G))_{i}} = \sum_{G\in R_n}\sum_{i=i(G)}^{r(G)}{\lambda_G \cdot (s(G))_{i} \cdot (f(G))_{i}}. \end{equation} We now derive two important sets of inequalities. 
From the fact that $f$ is a selection mechanism, for each $G\in \mathcal{G}^T_n$ we have that \begin{equation} \sum_{i=i(G)}^{r(G)}{(s(G))_{i} \cdot (f(G))_{i}}\leq 1,\label{eq:probs-leq-1} \end{equation} where replacing 1 by $i(G)$ on the left-hand side is possible since it can only make the sum smaller and thus keeps the inequality. On the other hand, from the fact that $f$ satisfies weak unanimity, for each $G\in \mathcal{G}^T_n$ we have that \begin{equation} -\sum_{i=i(G)}^{r(G)}{(s(G))_{i} \cdot (f(G))_{i}}\leq -1,\label{eq:probs-geq-1} \end{equation} where we can omit the term for $i=1$ on the left-hand side whenever $(s(G))_1=1$, because the vertex in $(S(G))_1$ has indegree 0 in such case. In order to cancel out the left-hand sides of the previous inequalities, we assign a sign to each part of the bipartition of $\mathcal{H}_n$. Let $\text{sign}(L_n), \text{sign}(R_n)\in \{-1,1\}$ with $\text{sign}(L_n)\cdot \text{sign}(R_n) = -1$, and let $\text{sign}(G)=\text{sign}(L_n)$ for each $G\in L_n$ and $\text{sign}(G)=\text{sign}(R_n)$ for each $G\in R_n$. Summing up the inequalities \eqref{eq:probs-leq-1} multiplied by $\lambda_G$ for every $G\in \mathcal{G}^T_n$ with $\text{sign}(G)=1$ and the inequalities \eqref{eq:probs-geq-1} multiplied by $\lambda_G$ for every $G\in \mathcal{G}^T_n$ with $\text{sign}(G)=-1$, we obtain \[ \sum_{G\in \mathcal{G}^T_n}\sum_{i=i(G)}^{r(G)}{\text{sign}(G) \cdot \lambda_G \cdot (s(G))_{i} \cdot (f(G))_{i}} \leq \sum_{G\in \mathcal{G}^T_n} \text{sign}(G)\cdot \lambda_G. \] By \autoref{eq:sums_bipartition} the left-hand side is equal to~$0$. However, we know from \autoref{lem:odd-graphs} that $\sum_{G\in \mathcal{G}^T_n}\lambda_G$ is odd, so the right-hand side cannot be equal to $0$. For one of the two possible choices of $\text{sign}(L_n)$ and $\text{sign}(R_n)$ the right-hand side is negative, and we obtain a contradiction. 
We conclude that a randomized selection mechanism $f$ satisfying impartiality and weak unanimity on $\mathcal{G}_n$ cannot exist. \end{proof} As an illustration, the counterexamples constructed for $n=3$ and $n=4$ are shown in \autoref{fig:counterexample-weak-unanimity} in \autoref{app:imposibility-selection}. We are now ready to prove Theorems~\ref{thm:additive-lb} and~\ref{thm:additive-lb-abstentions}. In order to be able to apply \autoref{lem:imposibility-selection} to deterministic mechanisms, we need a simple definition. For a given deterministic selection mechanism $f:\mathcal{G}_n\to 2^N$, let $f_{\text{rand}}:\mathcal{G}_n\to [0,1]^n$ be the randomized selection mechanism such that $(f_{\text{rand}}(G))_v=1$ if $v\in f(G)$ and $(f_{\text{rand}}(G))_v=0$ otherwise. It is then easy to see that whenever $f$ is impartial, $f_{\text{rand}}$ is impartial as well. \begin{proof}[Proof of \autoref{thm:additive-lb}] \begin{algorithm}[t] \SetAlgoNoLine \KwIn{Digraph $G=(N,E)\in \mathcal{G}_{k+1}$, mechanism $f_{k}$ and integer $n\geq k+1$.} \KwOut{Set $S$ of selected vertices with $|S|\leq 1$.} Let $H=(N\cup N', E)$, where $N'=\bigcup_{j=1}^{n-k-1}\{u_j\}$\; {\bf Return} $f_{k}(H)$ \caption{Selection mechanism $f$ based on $f_{k}$.} \label{alg:add-isolated-vertices} \end{algorithm} The result is straightforward when $n=1$. Let $n\geq 2$ and $k \leq n-1$, and suppose that there is an impartial deterministic selection mechanism $f_{k}$ with \[ \Delta(G)-\delta^-(f_k(G), G) \leq k -1 \] for every $G\in \mathcal{G}_n(k)$. We define the deterministic selection mechanism $f$ based on $f_{k}$ as specified in \autoref{alg:add-isolated-vertices}. This mechanism receives a graph $G=(N,E)$ in $\mathcal{G}_{k+1}$ and adds new vertices, if necessary, so that the resulting graph has $n$ vertices. These vertices are isolated, in the sense that the edge set of the new graph $H$ is the same as that of $G$.
For every input graph $G$, the graph constructed belongs to $\mathcal{G}_n(k)$, so the mechanism finally applies $f_{k}$. We claim that $f_{\text{rand}}$ is impartial and weakly unanimous on $\mathcal{G}^T_{k+1}$. To see that $f_{\text{rand}}$ is weakly unanimous, observe that \[ \delta^-(v,H)=\delta^-(v,G) \text{ for every } v\in N, \text{ and } \delta^-(v,H)=0 \text{ for every } v\not\in N. \] For each $G\in \mathcal{G}^T_{k+1}$, we have that $\Delta(G)=k$ and thus $\Delta(H)=k$. From the fact that $f_{k}$ is $(k-1)$-additive on $\mathcal{G}_n(k)$, we conclude that $f$ returns a vertex $v^*$ of $H$ with $\delta^-(v^*,H)\geq k - (k-1) = 1$. This implies, in the first place, that $v^*\in N$, thus $f$ is indeed a selection mechanism on $\mathcal{G}^T_{k+1}$. Furthermore, we have $\delta^-(v^*,G)\geq 1$, thus \[ \sum_{v\in G: \delta^-(v)\geq 1}(f_{\text{rand}}(G))_v = 1, \] i.e.,\xspace $f_{\text{rand}}$ is weakly unanimous on $\mathcal{G}^T_{k+1}$. Impartiality of $f_{\text{rand}}$ is straightforward since $f_{k}$ is impartial and the set of edges is not modified in the mechanism. This contradicts \autoref{lem:imposibility-selection}, so we conclude that mechanism $f_{k}$ cannot exist. \end{proof} \begin{proof}[Proof of \autoref{thm:additive-lb-abstentions}] \begin{algorithm}[t] \SetAlgoNoLine \KwIn{Digraph $G=(N,E)\in \mathcal{G}_{k}$, mechanism $f^+_{k}$ and integer $n\geq k+1$.} \KwOut{Set $S$ of selected vertices with $|S|\leq 1$.} Let $H=(N\cup N', F)$, where $N'=\bigcup_{j=1}^{n-k}\{u_j\}$ and $F = E\cup (N'\times N) \cup (\{v\in N: \delta^+(v,G)=0\} \times \{u_1\})$\; {\bf Return} $f^+_{k}(H)$ \caption{Selection mechanism $f$ based on $f^+_{k}$.} \label{alg:add-inneighbors} \end{algorithm} The result is straightforward when $n\in \{1,2\}$. Let $n\in \mathbb{N}, n\geq 3$ and suppose that there is an impartial deterministic selection mechanism $f^+_{k}$ with \[ \Delta(G)-\delta^-(f^+_{k}(G), G) \leq k-2 \] for every $G\in \mathcal{G}^+_n(k)$. 
We define the deterministic selection mechanism $f$ based on $f^+_{k}$ as specified in \autoref{alg:add-inneighbors}. This mechanism receives a graph $G$ in $\mathcal{G}_{k}$ and adds new vertices $N'=\{u_1,\ldots,u_{n-k}\}$, as well as edges from each of these vertices to every vertex of $G$, and edges from every vertex of $G$ with outdegree zero to $u_1$. We first claim that for every input graph $G$, the graph $H$ constructed in the mechanism belongs to $\mathcal{G}^+_n(k)$. Indeed, since $G\in \mathcal{G}_{k}$ every node $v\in N$ satisfies $\delta^+(v,G)\leq k-1$. Moreover, an outgoing edge to $u_1$ is added for every $v$ with $\delta^+(v,G)=0$, thus $1\leq \delta^+(v,H)\leq k-1$ for each $v\in N$. On the other hand, each node in $N'$ has outdegree $k$. Therefore, the mechanism is well defined, in the sense that in its last step it applies $f^+_{k}$ to a graph in $\mathcal{G}^+_n(k)$. We claim that $f_{\text{rand}}$ is impartial and weakly unanimous on $\mathcal{G}^T_{k}$, which is a clear contradiction to \autoref{lem:imposibility-selection} and thus implies that mechanism $f^+_k$ cannot exist. We prove the claim in what follows. For each $G=(N,E)\in \mathcal{G}^T_{k}$, the set $\{v\in N: \delta^+(v)=0\}$ is equal to $(S(G))_{r(G)}$ if $(s(G))_{r(G)}=1$, or to the empty set, otherwise. Therefore, for $u_1,\ldots,u_{n-k}$ and $H$ as defined in the mechanism we have that $\delta^-(u_1,H)\leq 1,\ \delta^-(u_j,H)=0$ for every $j\in \{2,\ldots,n-k\}$, and $\delta^-(v,H)=\delta^-(v,G)+(n-k)$ for every $v\in N$, since every vertex of $N'$ has an edge to every vertex of $N$. Since $\Delta(G)=k-1$ from the definition of the set $\mathcal{G}^T_{k}$, we have that $\Delta(H)=n-1$. Using that $f^+_{k}$ is $(k-2)$-additive on $\mathcal{G}^+_n(k)$, we conclude that $f$ returns a vertex $v^*\in N\cup N'$ with $\delta^-(v^*,H) \geq (n-1)-(k-2) = n-k+1\geq 2$, where the last inequality uses $k\leq n-1$. This implies, in the first place, that $v^*\in N$, thus $f$ is indeed a selection mechanism on $\mathcal{G}^T_{k}$.
Furthermore, we have $\delta^-(v^*,G)\geq 1$, thus \[ \sum_{v\in G: \delta^-(v)\geq 1}(f_{\text{rand}}(G))_v = 1, \] i.e.,\xspace $f_{\text{rand}}$ is weakly unanimous on $\mathcal{G}^T_{k}$. To see that $f$ is impartial on $\mathcal{G}^T_{k}$, let $G=(N,E), G'=(N,E')\in \mathcal{G}^T_{k}$ and $\tilde{v}\in N$ be such that $E\setminus (\{\tilde{v}\}\times N) = E'\setminus (\{\tilde{v}\}\times N)$. Denoting by $F$ and $F'$ the edges of the graphs defined in the mechanism $f$ when run with input $G$ and $G'$, respectively, it is enough to show that $F\setminus (\{\tilde{v}\}\times (N\cup N')) = F'\setminus (\{\tilde{v}\}\times (N\cup N'))$, because this would imply \[ f(G)\cap \{\tilde{v}\} = f^+_k(N\cup N',F)\cap \{\tilde{v}\} = f^+_k(N\cup N',F')\cap \{\tilde{v}\} = f(G')\cap \{\tilde{v}\} \] where the second equality holds since $f^+_k$ is impartial on $\mathcal{G}^+_n(k)$ by hypothesis. Indeed, \begin{align*} F\setminus (\{\tilde{v}\}\times (N\cup N')) & = (E\cup (N'\times N) \cup (\{v\in N: \delta^+(v,G)=0\} \times \{u_1\})) \setminus (\{\tilde{v}\}\times (N\cup N'))\\ & = E\setminus (\{\tilde{v}\}\times N) \cup (N'\times N) \cup (\{v\in N\setminus \{\tilde{v}\}: \delta^+(v,G)=0\} \times \{u_1\})\\ & = E'\setminus (\{\tilde{v}\}\times N) \cup (N'\times N) \cup (\{v\in N\setminus \{\tilde{v}\}: \delta^+(v,G')=0\} \times \{u_1\})\\ & = (E'\cup (N'\times N) \cup (\{v\in N: \delta^+(v,G')=0\} \times \{u_1\})) \setminus (\{\tilde{v}\}\times (N\cup N'))\\ & = F'\setminus (\{\tilde{v}\}\times (N\cup N')), \end{align*} where the third equality comes from the fact that the outgoing edges of every vertex in $(N\cup N')\setminus \{\tilde{v}\}$ are the same in $G$ and $G'$. This implies that $f_{\text{rand}}$ is impartial as well, concluding the proof of the claim and the proof of the theorem. \end{proof} Theorems~\ref{thm:additive-lb} and~\ref{thm:additive-lb-abstentions} provide tight bounds for the approval setting but have very weak implications for plurality.
We end with a small but nontrivial lower bound for the latter, which applies to settings with and without abstentions. The proof of this result can be found in \autoref{app:additive-plurality-lb}. \begin{theorem} \label{thm:additive-plurality-lb} Let $n\in \mathbb{N}$ and let $f$ be an impartial deterministic selection mechanism such that $f$ is $\alpha$-additive on $\mathcal{G}^+(1)$. Then, $\alpha \geq 3$. \end{theorem}
\section{Introduction} \label{sec:introduction} Determining which National Basketball Association (NBA) players do the most to help their teams win games is perhaps the most natural question in basketball analytics. Traditionally, one quantifies the notion of helping teams win with a scoring statistic like points-per-game or true shooting percentage, a function of point differential like Adjusted Plus-Minus (see, e.g., \cite{Rosenbaum2004}, \cite{IlardiBarzilai2008}) and variants thereof, or some combination of box score statistics and pace of play like the Player Efficiency Rating (PER) of \cite{Hollinger2004}. While these metrics are informative, we observe that they ignore the \textit{context} in which players perform. As a result, they can artificially inflate the importance of performance in low-leverage situations, when the outcome of the game is essentially decided, while simultaneously deflating the importance of high-leverage performance, when the final outcome is still in question. For instance, point differential-based metrics model the home team's lead dropping from 5 points to 0 points in the last minute of the first half in exactly the same way that they model the home team's lead dropping from 30 points to 25 points in the last minute of the second half. In both of these scenarios, the home team's point differential is -5 points but, as we will see in Section~\ref{sec:win_probability}, the home team's chance of winning the game dropped from 72\% to 56\% in the first scenario while it remained constant at 100\% in the second. We argue that a player's performance in the second scenario has no impact on the final outcome and should therefore not be treated comparably to performance in the first. We address this issue by proposing a win probability framework and linear regression model to estimate each player's contribution to his team's overall chance of winning games. 
The use of win probability to evaluate the performance of professional athletes dates back at least to \cite{MillsMills1970}, who evaluated Major League Baseball players. As \cite{Studeman2004} observes, their Player Wins Average methodology has been re-introduced several times since, most notably as Win Probability Added (WPA). To compute WPA, one starts with an estimate of a team's probability of winning the game at various game states. For each plate appearance, one then credits the pitcher and batter with the resulting change in their respective team's win probability and then sums these contributions over the course of a season to determine how involved a player was in his team's wins (or losses). A natural extension of the WPA methodology to basketball would be to measure the change in the win probability from the time a player enters the game to the time he is substituted out of the game and then sum these changes over the course of a season. Such an extension is identical to the traditional plus-minus statistic except that it is computed on the scale of win probability instead of points scored. An inherent weakness of using plus-minus (on both the point and win probability scales) to assess a player's performance is that a player's plus-minus statistic necessarily depends on the contributions of his teammates and opponents. According to \cite{GramacyJensenTaddy2013}, since the quality of any individual player's pool of teammates and opponents can vary dramatically, ``the marginal plus-minus for individual players are inherently polluted.'' To overcome this difficulty, \cite{Rosenbaum2004} introduced Adjusted Plus-Minus to estimate the average number of points a player scores per 100 possessions after controlling for his opponents and teammates. To compute Adjusted Plus-Minus, one first breaks the game into several ``shifts,'' periods of play between substitutions, and measures both the point differential and total number of possessions in each shift. One then regresses the point differential per 100 possessions from the shift onto indicators corresponding to the ten players on the court.
One then regresses the point differential per 100 possessions from the shift onto indicators corresponding to the ten players on the court. We propose instead to regress the change in the home team's win probability during a shift onto signed indicators corresponding to the five home team players and five away team players in order to estimate each player's \textit{partial effect} on his team's chances of winning. Briefly, if we denote the change in the home team's win probability in the $i^{th}$ shift by $y_{i},$ we then model the expected change in win probability given the players on the court, as \begin{equation} \label{eq:general_model} E[y_{i} | \mathbf{h}_{i}, \mathbf{a}_{i}] = \mu_{i} + \theta_{h_{i1}} + \cdots + \theta_{h_{i5}} - \theta_{a_{i1}} - \cdots - \theta_{a_{i5}} \end{equation} where $\theta = \left( \theta_{1}, \ldots, \theta_{488}\right)^{\top}$ is the vector of player partial effects and $\mathbf{h}_{i} = \left\{ h_{i1}, \ldots, h_{i5} \right\}$ and $\mathbf{a}_{i} = \left\{ a_{i1}, \ldots, a_{i5} \right\}$ are indices on $\theta$ corresponding to the home team (h) and away team (a) players. The intercept term $\mu_{i}$ may depend on additional covariates, such as team indicators. Fitting the model in Equation~\ref{eq:general_model} is complicated by the fact that we have a relatively large number of covariates (viz. a total of 488 players in the 2013-2014 season) displaying a high degree of collinearity, since some players are frequently on the court together. This can lead to imprecise estimates of player partial effects with very large standard errors. Regularization, which shrinks the estimates of the components of $\theta$ towards zero, is therefore necessary to promote numerical stability for each partial effect estimate. 
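The covariate structure in Equation~\ref{eq:general_model} amounts to one signed indicator vector per shift. A minimal Python sketch (a toy 12-player league with hypothetical player indices, purely illustrative; not the actual 2013-14 data):

```python
import numpy as np

def shift_row(home_ids, away_ids, n_players):
    """Signed indicator row for one shift: +1 for each home player,
    -1 for each away player, and 0 for everyone off the court."""
    x = np.zeros(n_players)
    x[list(home_ids)] = 1.0
    x[list(away_ids)] = -1.0
    return x

# Toy example: players 0-4 at home, 5-9 away, players 10-11 on the bench.
home, away = [0, 1, 2, 3, 4], [5, 6, 7, 8, 9]
x = shift_row(home, away, n_players=12)
# x has five +1 entries, five -1 entries, and zeros elsewhere.
```

Stacking one such row per shift yields the design matrix; columns for players who frequently share the court are nearly collinear, which is exactly what motivates the regularization just described.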
We take a Bayesian approach, which involves specifying a \textit{prior} distribution with mode at zero on each partial effect and using play-by-play data from the 2013-14 season to update these priors to get a \textit{posterior} distribution of the partial effects. As \cite{KyungGillGhoshCasella2010} argue, the Bayesian formulation of regularized regression produces valid and tractable standard errors, unlike popular frequentist techniques like the lasso of \cite{Tibshirani1996}. This enables us to quantify the joint uncertainty of our partial effect estimates in a natural fashion. Our proposed methodology produces a \textit{retrospective} measure of individual player contributions and does not attempt to measure a player's latent ability or talent. Our estimates of player partial effect are context-dependent, making them unsuitable for forecasting future performance since the context in which a player plays can vary season-to-season and even week-to-week. Nevertheless, because our proposal is context-dependent, we feel that it provides a more appropriate accounting of what actually happened than existing player-evaluation metrics like PER and ESPN's Real Plus/Minus (RPM). Taken together with such existing metrics, our estimates of player effect can provide insight into whether coaches are dividing playing time most effectively and help us understand the extent to which a player's individual performance translates into wins for his team. The rest of this paper is organized as follows. We detail our data and regression model in Section~\ref{sec:data_model_methods} and describe our estimation of win probability in Section~\ref{sec:win_probability}. Section~\ref{sec:full_posterior_analysis} presents a full Bayesian analysis of the joint uncertainty about player effects. In Section~\ref{sec:comparing_players} we introduce \textit{leverage profiles} to measure the similarity between the contexts in which two players performed.
These profiles enable us to make meaningful comparisons of players based on their partial effect estimates. In keeping with the examples of other player evaluation metrics, we propose two rank-orderings of players using their partial effects. In Section~\ref{sec:impact_ranking}, we rank players on a team-by-team basis, allowing us to determine each player's relative value to his team. In Section~\ref{sec:impact_score}, we present a single ranking of all players which balances a player's partial effect against the posterior uncertainty in estimating his effect. We extend our analysis of player partial effects in Section~\ref{sec:lineup_comparison} to five-man lineups and consider how various lineups match up against each other. We conclude in Section~\ref{sec:discussion} with a discussion of our results and several extensions. \section{Data, Models, and Methods} \label{sec:data_model_methods} Like Adjusted Plus/Minus, we break each game into shifts: periods of play between successive substitutions when the set of ten players on the court is unchanged. During the 2013--2014 regular season, a typical game consisted of around 31 shifts. In order to determine which players are on the court during each shift, we use play-by-play data obtained from ESPN for 8365 of the 9840 (85\%) scheduled regular season games in the eight seasons between 2006 and 2014. The play-by-play data for the remaining 15\% of games were either incomplete or missing altogether. The majority of the missing games were from the first half of the time window considered. To the best of our knowledge, our dataset does not systematically exclude games from certain teams or certain types of games (early-season vs late-season, close game vs blow-out). Using the data from the 2006--2007 through 2012--2013 seasons, we estimate the home team's win probability as a function of its lead and the time elapsed.
With these win probability estimates, we then compute the change in the home team's win probability during each of $n = 35,799$ shifts in the 2013--2014 regular season. We denote the change in the home team's win probability during the $i^{th}$ shift by $y_{i}.$ This change in win probability can be directly attributed to the performance of the 10 players on the court during that shift. Thus, to measure each individual player's impact on the change in win probability, we regress $y_{i}$ onto indicator variables corresponding to which of the 488 players were on the court during the $i^{th}$ shift. We model \begin{equation} \label{eq:player_team_model} y_{i}|\mathbf{h}_{i}, \mathbf{a}_{i} = \mu + \theta_{h_{i1}} + \cdots + \theta_{h_{i5}} - \theta_{a_{i1}} - \cdots - \theta_{a_{i5}} + \tau_{H_{i}} - \tau_{A_{i}} + \sigma\varepsilon_{i}, \end{equation} where $\theta = \left( \theta_{1}, \ldots, \theta_{488}\right)^{\top}$ is the vector of partial effects for the 488 players, $\tau = \left(\tau_{1}, \ldots, \tau_{30}\right)^{\top}$ is a vector of partial effects for the 30 teams, $\mathbf{h}_{i} = \left\{ h_{i1}, \ldots, h_{i5} \right\}$ and $\mathbf{a}_{i} = \left\{ a_{i1}, \ldots, a_{i5} \right\}$ are indices on $\theta$ corresponding to the home team (h) and away team (a) players, $H_{i}$ and $A_{i}$ are indices on $\tau$ corresponding to which teams are playing in shift $i$, and the $\varepsilon_{i}$ are independent standard normal random variables. We view $\mu$ as a league-average ``home-court advantage'' and $\sigma$ as a measure of the variability in $y_{i}$ that arises from both the uncertainty in measuring $y_{i}$ and the inherent variability in win probability that cannot be explained by the performance of the players on the court. Since we are including team effects in Equation~\ref{eq:player_team_model}, each player's partial effect is measured relative to his team's average, so that players are not overly penalized (or rewarded) for the overall quality of their teams.
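Because teammates' columns in this design are highly collinear, fitting Equation~\ref{eq:player_team_model} requires shrinkage. As a rough frequentist stand-in for the Bayesian fit described in Section~\ref{sec:bayesian_approach}, the sketch below solves the corresponding $\ell_1$-penalized least squares problem by coordinate descent on simulated shifts (everything here is illustrative: the penalty level \texttt{lam} is arbitrary, and the intercept, team columns, and hyper-priors are omitted):

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: the scalar lasso solution."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual without player j
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b

# Simulated data: 200 shifts, 30 players, signed +/-1 on-court indicators.
rng = np.random.default_rng(0)
n, p = 200, 30
X = np.zeros((n, p))
for i in range(n):
    players = rng.choice(p, size=10, replace=False)
    X[i, players[:5]] = 1.0    # home five
    X[i, players[5:]] = -1.0   # away five
theta_true = rng.normal(0.0, 0.01, p)
y = X @ theta_true + rng.normal(0.0, 0.02, n)
theta_hat = lasso_cd(X, y, lam=0.05)
```

The soft-thresholding step is what pulls small, weakly identified effects exactly to zero; the paper's Gibbs sampler instead averages over the posterior rather than reporting this single penalized mode.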
\subsection{Estimation of Win Probability} \label{sec:win_probability} In order to fit such a regression model, we must begin with an estimate of the probability that the home team wins the game after leading by $L$ points after $T$ seconds, which we denote by $p_{T,L}.$ Estimating win probability at specific intermediate times during a game is not a new problem; indeed, \cite{Lindsey1963} estimated win probabilities in baseball in the 1960's and \cite{Stern1994} introduced a probit regression model to estimate $p_{T,L}.$ \cite{MayminMayminShen2012} expanded on that probit model to study when to take starters in foul trouble out of a game, \cite{Bashuk2012} considered empirical estimates of win probability to predict team performance in college basketball, and \cite{Pettigrew2015} recently introduced a parametric model to estimate win probability in hockey. Intuitively, we believe that $p_{T,L}$ is a smooth function of both $T$ and $L$; for a fixed lead, the win probability should be relatively constant for a small duration of time. By construction, the probit model of \cite{Stern1994} produces a smooth estimate of the win probability and the estimates based on all games from the 2006--07 to 2012--13 regular seasons are shown in Figure~\ref{fig:winProb}(A), where the color of the unit cell $[T,T+1] \times [L, L + 1]$ corresponds to the estimated value of $p_{T,L}.$ To get a sense of how well the probit estimates fit the observed data, we can compare them to the empirical estimates of $p_{T,L}$ given by the proportion of times that the home team has won after leading by $L$ points after $T$ seconds. The empirical estimates of $p_{T,L}$ are shown in Figure~\ref{fig:winProb}(B). \begin{figure}[!h] \centering \includegraphics{winProb} \caption{Various estimates of $p_{T,L}$. The probit estimates in (A), while smooth, do not agree with the empirical win probabilities shown in (B). 
Our estimates, shown in (C), are closer in value to the empirical estimates than are those in (A) but are much smoother than the empirical estimates.} \label{fig:winProb} \end{figure} We see immediately that the empirical estimates are rather different from the probit estimates: for positive $L$, the probit estimate of $p_{T,L}$ tends to be much smaller than the empirical estimate of $p_{T,L}$ and for negative $L$, the probit estimates tend to overestimate $p_{T,L}$. This discrepancy arises primarily because the probit model is fit using only data from the ends of the first three quarters and does not incorporate any other intermediate times. Additionally, the probit model imposes several rather strong assumptions about the evolution of the win probability as the game progresses. As a result, we find the empirical estimates much more compelling than the probit estimates. Despite this, we observe in Figure~\ref{fig:winProb}(B) that the empirical estimates are much less smooth than the probit estimates. Also worrying are the extreme and incongruous estimates near the edges of the colored region in Figure~\ref{fig:winProb}(B). For instance, the empirical estimates suggest that the home team will always win the game if they trailed by 18 points after five minutes of play. Upon further inspection, we find that the home team trailed by 18 points after five minutes exactly once in the seven season span from 2006 to 2013 and they happened to win that game. In other words, the empirical estimates are rather sensitive to small sample sizes, leading to extreme values which can heavily bias our response variables $y_{i}$ in Equation~\ref{eq:player_team_model}. To address these small sample issues in the empirical estimate, we propose a middle ground between the empirical and probit estimates.
In particular, we let $N_{T,L}$ be the number of games in which the home team has led by $\ell$ points after $t$ seconds where $T-h_{t} \leq t \leq T+h_{t}$ and $L-h_{l} \leq \ell \leq L+h_{l},$ where $h_{t}$ and $h_{l}$ are positive integers. We then let $n_{T,L}$ be the number of games which the home team won in this window and model $n_{T,L}$ as a Binomial$(N_{T,L}, p_{T,L})$ random variable. This modeling approach is based on the assumption that the win probability is relatively constant over a small window in the $(T,L)$-plane. The choice of $h_{t}$ and $h_{l}$ dictates how many game states' worth of information is used to estimate $p_{T,L}$ and larger choices of both will yield, in general, smoother estimates of $p_{T,L}.$ Since very few offensive possessions last six seconds or less and since no offensive possession can result in more than four points, we argue that the win probability should be relatively constant in the window $[T - 3, T + 3] \times [L - 2, L + 2]$ and we take $h_{t} = 3, h_{l} = 2.$ We place a conjugate Beta($\alpha_{T,L}, \beta_{T,L})$ prior on $p_{T,L}$ and estimate $p_{T,L}$ with the resulting posterior mean $\hat{p}_{T,L},$ given by $$ \hat{p}_{T,L} = \frac{n_{T,L} + \alpha_{T,L}}{N_{T,L} + \alpha_{T,L} + \beta_{T,L}}. $$ The value of $y_{i}$ in Equation~\ref{eq:player_team_model} is the difference between the estimated win probability at the end of the shift and at the start of the shift.
Based on the above expression, we can interpret $\alpha_{T,L}$ and $\beta_{T,L}$ as ``pseudo-wins'' and ``pseudo-losses'' added to the observed counts of home team wins and losses in the window $[T - 3, T + 3] \times [L - 2, L + 2].$ The addition of these ``pseudo-games'' tends to shrink the original empirical estimates of $p_{T,L}$ towards $\frac{\alpha_{T,L}}{\alpha_{T,L} + \beta_{T,L}}.$ To specify $\alpha_{T,L}$ and $\beta_{T,L},$ it is enough to describe how many pseudo-wins and pseudo-losses we add to each of the 35 unit cells $[t, t+1] \times [\ell, \ell + 1]$ in the window $[T - 3, T + 3] \times [L - 2, L + 2].$ We add a total of 10 pseudo-games to each unit cell, but the specific number of pseudo-wins depends on the value of $\ell.$ For $\ell < -20,$ we add 10 pseudo-losses and no pseudo-wins and for $\ell > 20,$ we add 10 pseudo-wins and no pseudo-losses. For the remaining values of $\ell,$ we add five pseudo-wins and five pseudo-losses. Since we add 10 pseudo-games to each cell, we add a total of $\alpha_{T,L} + \beta_{T,L} = 350$ pseudo-games to the window $[T - 3, T + 3] \times [L - 2, L + 2].$ We note that this procedure does not ensure that our estimated win probabilities are monotonic in lead and time. However, the empirical win probabilities are far from monotonic themselves, and our procedure does mitigate many of these departures by smoothing over the window $[T - 3, T + 3] \times [L - 2, L + 2].$ We find that for most combinations of $T$ and $L$, $N_{T,L}$ is much greater than 350; for instance, at $T = 423,$ we observe $N_{T,L} =$ 4018, 11375, 17724, 14588, and 5460 for $L =$ -10, -5, 0, 5, and 10, respectively.
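Assuming per-cell win and game counts are available as arrays indexed by time and (offset) lead, the smoothing scheme above can be sketched as follows (the array sizes and lead offset are illustrative choices; the window sizes and pseudo-count rules follow the text):

```python
import numpy as np

def smoothed_win_prob(wins, games, T, L, L_off=40, h_t=3, h_l=2):
    """Posterior-mean estimate of p_{T,L} from per-cell win/game counts.

    wins[t, l + L_off] and games[t, l + L_off] hold counts for the unit
    cell [t, t+1] x [l, l+1]; L_off shifts leads to array indices.
    Each cell in the window receives 10 pseudo-games: all losses if the
    lead is below -20, all wins if above +20, and a 5/5 split otherwise.
    """
    n = N = alpha = beta = 0.0
    for t in range(T - h_t, T + h_t + 1):
        for l in range(L - h_l, L + h_l + 1):
            n += wins[t, l + L_off]
            N += games[t, l + L_off]
            if l < -20:
                beta += 10.0
            elif l > 20:
                alpha += 10.0
            else:
                alpha += 5.0
                beta += 5.0
    return (n + alpha) / (N + alpha + beta)

# With no observed games, cells with moderate leads shrink to 0.5:
wins = np.zeros((2880, 81))
games = np.zeros((2880, 81))
p = smoothed_win_prob(wins, games, T=423, L=0)   # -> 0.5
```

Note that the 35 cells of the window contribute exactly $\alpha_{T,L} + \beta_{T,L} = 350$ pseudo-games, matching the text.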
In these cases, the value of $\hat{p}_{T,L}$ is driven more by the observed data than by the values of $\alpha_{T,L}$ and $\beta_{T,L}.$ Moreover, in such cases, the uncertainty of our estimate $\hat{p}_{T,L}$, which can be measured by the posterior standard deviation of $p_{T,L},$ is exceedingly small: for $T = 423$ and $-10 \leq L \leq 10,$ the posterior standard deviation of $p_{T,L}$ is between 0.003 and 0.007. When $N_{T,L}$ is comparable to or much smaller than 350, the values of $\alpha_{T,L}$ and $\beta_{T,L}$ exert more influence on the value of $\hat{p}_{T,L}.$ The increased influence of the prior on $\hat{p}_{T,L}$ in such rare game states helps smooth over the extreme discontinuities that are present in the empirical win probability estimates above. In these situations, there is a larger degree of uncertainty in our estimate of $\hat{p}_{T,L},$ but we find that the posterior standard deviation of $p_{T,L}$ never exceeds 0.035. The uncertainty in our estimation of $p_{T,L}$ leads to additional uncertainty in the $y_{i}$'s, akin to measurement error. The error term in Equation~\ref{eq:player_team_model} is meant to capture this additional uncertainty, as well as any inherent variation in the change in win probability unexplained by the players on the court. \subsection{Bayesian Linear Regression of Player Effects} \label{sec:bayesian_approach} As mentioned in Section~\ref{sec:introduction}, we take a Bayesian approach to fitting the model in Equation~\ref{eq:player_team_model}. Because we have a large number of covariates displaying a high degree of collinearity, a regularization prior that shrinks each component of $\theta$ towards zero is needed to promote stability for each partial effect. Popular choices of regularization priors on the components $\theta_{j}$ include a normal prior, which corresponds to an $\ell_{2}$ penalty, or a Laplace prior, which corresponds to an $\ell_{1}$ penalty.
\cite{ThomasVenturaJensenMa2013} also consider a Laplace-Gaussian prior, which combines both $\ell_{2}$ and $\ell_{1}$ penalties. Maximum a posteriori estimation with respect to these priors corresponds to ridge, lasso, and elastic net regression, respectively. Between the normal and Laplace priors, we choose the Laplace prior, which \cite{ThomasVenturaJensenMa2013} also used to derive rankings of National Hockey League players, since it tends to pull smaller partial effects towards zero faster than the normal prior, as noted by \cite{ParkCasella2008}. We are thus able to use the existing \texttt{R} implementation of \cite{ParkCasella2008}'s Gibbs sampler in the \texttt{monomvn} package. Though the elastic net is better suited than the lasso for regression problems in which there are groups of highly correlated predictors (\cite{ZouHastie2005}), there is no widely-available Gibbs sampler for it, and the computational challenge of implementing one offsets the additional benefit we would gain from the Laplace-Gaussian prior. We let $\mathbf{P}^{i}$ be a vector indicating which players are on the court during shift $i$ so that its $j^{th}$ entry, $\mathbf{P}^{i}_{j},$ is equal to 1 if player $j$ is on the home team, -1 if player $j$ is on the away team, and 0 otherwise. Similarly, we let $\mathbf{T}^{i}$ be a vector indicating which teams are playing during shift $i$ so that its $k^{th}$ entry, $\mathbf{T}^{i}_{k},$ is equal to 1 if team $k$ is the home team, -1 if team $k$ is the away team, and 0 otherwise. Conditional on $\mathbf{P}^{i}$ and $\mathbf{T}^{i},$ we model $$ y_{i}|\mathbf{P}^{i}, \mathbf{T}^{i} \sim N\left(\mu + \mathbf{P}^{i^{\top}}\theta + \mathbf{T}^{i^{\top}}\tau, \sigma^{2}\right).
$$ We place independent Laplacian priors on each component of $\theta$ and $\tau$, conditional on the corresponding noise parameter $\sigma^{2}.$ The conditional prior density of $\left(\theta, \tau \right)$ given $\sigma^{2}$ is given by $$ p(\theta, \tau | \sigma^{2}) \propto \left[\left( \frac{\lambda}{\sigma}\right)^{488} \times \exp{\left\{-\frac{\lambda}{2\sigma}\sum_{j = 1}^{488}{|\theta_{j}|}\right\}}\right] \times \left[ \left( \frac{\lambda}{\sigma}\right)^{30} \times \exp{\left\{-\frac{\lambda}{2\sigma}\sum_{k = 1}^{30}{|\tau_{k}|}\right\}}\right], $$ where $\lambda > 0$ is a sparsity parameter that governs how much each component of $\theta$ and $\tau$ is shrunk towards zero. We further place a flat prior on $\mu,$ a Gamma$(r,\delta)$ hyper-prior on $\lambda^{2},$ and non-informative hyper-priors on $\sigma^{2}, r,$ and $\delta.$ Because of the hierarchical structure of our model, the joint posterior distribution of $(\mu, \theta, \tau, \sigma^{2})$ is not analytically tractable and we must instead rely on a Markov Chain Monte Carlo (MCMC) simulation to estimate the posterior distribution. We use the Gibbs sampler described by \cite{ParkCasella2008} that is implemented in the \texttt{monomvn} package in \texttt{R}. We note that our prior specification is the default setting for this implementation. In specifying this regression model, we make several strong assumptions. First, we assume that the $y_{i}$'s are independent. Since it is generally not the case that all ten players are substituted out of the game at the end of the shift, it is reasonable to expect that there will be some autocorrelation structure among the $y_{i}$'s. Indeed, as seen in the autocorrelation plot in Figure~\ref{fig:hist_acf_winProb}(B), we observe a small amount of autocorrelation ($-0.1$) between $y_{i}$ and $y_{i+1}.$ We also observe that there is no significant autocorrelation at larger lags.
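The lag-$k$ autocorrelation check on the ordered $y_{i}$'s is straightforward to reproduce; a minimal sketch on toy series (not the actual shift data):

```python
import numpy as np

def lag_autocorr(y, k=1):
    """Sample correlation between y_i and y_{i+k}."""
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(y[:-k], y[k:])[0, 1])

# A linear trend is (approximately) perfectly autocorrelated at lag 1,
# while a strictly alternating series is perfectly anti-correlated.
lag_autocorr([1.0, 2.0, 3.0, 4.0, 5.0])        # close to 1.0
lag_autocorr([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # close to -1.0
```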
While the independence assumption is not technically correct, the lack of persistent autocorrelation and the relatively weak correlation between $y_{i}$ and $y_{i+1}$ make the assumption somewhat more palatable. \begin{figure}[!h] \centering \includegraphics{hist_acf_winProb} \caption{Histogram and autocorrelation plot of the $y_{i}$'s.} \label{fig:hist_acf_winProb} \end{figure} Our second modeling assumption is that, conditional on $\left(\mathbf{P}^{i}, \mathbf{T}^{i}\right),$ the $y_{i}$'s are Gaussian with constant variance. This conditional Gaussian assumption does not imply that the $y_{i}$'s are marginally Gaussian (which does not seem to be the case in Figure~\ref{fig:hist_acf_winProb}(A)). Despite the fact that we have 35,799 shifts in our dataset, we find that there are 29,453 unique combinations of ten players on the court. Thus, we only observe a few instances of each unique $\left(\mathbf{P}^{i}, \mathbf{T}^{i}\right),$ making it difficult to assess the conditional normality assumption directly. The limited number of each $\left(\mathbf{P}^{i}, \mathbf{T}^{i}\right)$ also makes it difficult to check the assumption of constant variance of the $y_{i}$'s conditional on $\left(\mathbf{P}^{i}, \mathbf{T}^{i}\right).$ In the Appendix, we explore several transformations and alternative specifications of the $y_{i}$'s, but do not find alternatives that match these assumptions better than our current specification. At this point, it is also worth mentioning that our model does not explicitly include the duration of each shift as a predictor, despite the fact that $y_{i}$ depends on shift length. Figure~\ref{fig:duration_y}(A) shows the change in win probability associated with varying shift durations and varying lead changes.
Quite clearly, we see that the curves in Figure~\ref{fig:duration_y}(A) are different, indicating a dependence between $y_{i}$ and shift duration, although we see in Figure~\ref{fig:duration_y}(B) that the overall correlation is quite small. On a conceptual level, a player's performance in a 15-second shift during which his team's win probability increases by 20\% has the same impact on his team's chances of winning as it would have had the shift lasted 30 seconds. Since our ultimate goal is to estimate each player's individual impact, as opposed to his playing-time-adjusted impact or per-minute impact, including shift duration as an additional predictor distorts the desired interpretation of player partial effects. In fact, we assert that the change in win probability as an outcome variable is the most natural way to account for the effect of shift duration on a player's overall impact on the court. \begin{figure}[!h] \centering \includegraphics{duration_y} \caption{Change in win probability plotted against shift duration} \label{fig:duration_y} \end{figure} \section{Full Posterior Analysis} \label{sec:full_posterior_analysis} We use the Gibbs sampler function \texttt{lasso} in the \texttt{monomvn} \texttt{R} package to obtain 1000 independent samples from the full posterior distribution of $\left(\mu, \theta, \tau, \sigma^{2}\right).$ With these samples, we can approximate the marginal posterior density of each player's partial effect using a standard kernel density estimator. Figure~\ref{fig:player_effect_density} shows the estimated posterior densities of the partial effects of several players.
\begin{figure}[!h] \centering \includegraphics[width = 4.5in]{player_effect_density} \caption{Approximate posterior densities of several players' partial effects.} \label{fig:player_effect_density} \end{figure} We see that these densities are almost entirely supported within the range [-0.02, 0.02], indicating that it is unlikely that any individual player, over the course of a single shift, is able to improve (or hurt) his team's chances of winning the game by more than a percentage point or two. This is partly due to our regularization prior, which tends to pull the components of $\theta$ and $\tau$ towards zero, and to the fact that the $y_{i}$'s are tightly concentrated near zero. Nevertheless, though our estimates of each player's partial effect are small, we still see considerable heterogeneity in the approximate posterior densities. Most strikingly, we see that the posterior distribution of Dirk Nowitzki's partial effect is mostly supported on the positive axis (in 991 out of our 1000 posterior samples, his effect is positive) while the posterior distribution of Alonzo Gee's partial effect is mostly supported on the negative axis (his partial effect is negative in 976 out of 1000 posterior samples). Intuitively, we can measure a player's ``value'' by his partial effect on his team's chances of winning. Among the players in Figure~\ref{fig:player_effect_density}, we see that Nowitzki was the most valuable since his density lies further to the right than any other player's. However, there is considerable overlap in the support of his density and that of Kevin Durant, making it difficult to determine who is decidedly the ``most valuable.'' Indeed, we find that Nowitzki's partial effect is greater than Kevin Durant's in 692 out of 1000 posterior samples. We also observe high similarity in the posterior densities of Durant and LeBron James, who finished first and second, respectively, in voting for the 2013-14 Most Valuable Player (MVP) award. 
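Posterior statements such as ``Nowitzki's partial effect is greater than Durant's in 692 out of 1000 samples'' are simple functionals of the paired posterior draws; a minimal sketch (simulated draws with hypothetical means and spreads, not the actual posterior):

```python
import numpy as np

def prob_greater(draws_a, draws_b):
    """Posterior probability that player A's partial effect exceeds
    player B's, estimated from paired posterior samples."""
    return float(np.mean(np.asarray(draws_a) > np.asarray(draws_b)))

# Simulated posterior draws; the means and standard deviation below are
# illustrative only.
rng = np.random.default_rng(1)
nowitzki = rng.normal(0.004, 0.003, 1000)
durant = rng.normal(0.003, 0.003, 1000)
prob_greater(nowitzki, durant)   # posterior P(Nowitzki > Durant)
```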
On closer inspection, we find that Durant's partial effect is greater than James' in only 554 of the 1000 posterior samples, indicating that, by the end of the 2013-14 regular season, Durant and James had very nearly the same impact on their teams' chances of winning, with Durant enjoying a rather slight advantage. In the context of the MVP award, then, our results would suggest that Durant is only slightly more deserving than James, but Nowitzki is more deserving than both Durant and James. We can also track how the posterior distribution of player partial effects evolves over the course of the season, which helps to determine how many games' worth of data is necessary to start differentiating the partial effects of different players. Figure~\ref{fig:weekly_comparison} shows the approximate posterior densities of Durant, Gee, James, and Nowitzki after weeks 1, 5, 10, 15, 20, and 25 of the season. \begin{figure}[!h] \centering \includegraphics{weekly_comparison} \caption{Approximate posterior densities of Kevin Durant's, Alonzo Gee's, LeBron James', and Dirk Nowitzki's partial effects as the season progresses} \label{fig:weekly_comparison} \end{figure} Through the first five weeks of the season, the posterior distributions of each player's partial effects are virtually identical. However, after ten weeks, we begin to see some separation, with Gee's density moving towards the left and Durant's density moving towards the right. This suggests that we need at least ten weeks' worth of data (approximately 30--35 games) in order to identify differences in player partial effects. We see a rather considerable gap between Durant's and James' densities by week 15 and we observe that Durant's partial effect is greater than James' in nearly 75\% of the posterior samples up to that time.
Over the next 10 weeks, though, this gap shrinks considerably: visually, the two posterior densities become increasingly indistinguishable and the proportion of posterior samples in which Durant's partial effect is greater than James' shrinks back towards 0.5. This mirrors the general consensus described by \cite{Ballentine2014} and \cite{Buckley2014} about how the race for the MVP award evolved: Durant was the clear front-runner for the MVP award by late January (approximately week 13 of the season) but many reporters declared the race much closer after James' historic performances in weeks 18 and 19 (including multiple 40-point performances and a 61-point performance against Charlotte). We also see that the separation between Nowitzki's density and Durant's density increases between weeks 15 and 20. \subsection{Comparing Players} \label{sec:comparing_players} Directly comparing partial effects for all pairs of players is complicated by the fact that players perform in different contexts. To determine which players are most comparable, we determine the total number of shifts each player played, his team's average win probability at the start of these shifts, and the average duration of these shifts. We call this information a player's \textit{leverage profile}. We then compute the Mahalanobis distance between the leverage profiles of each pair of players. Table~\ref{tab:player_compare} shows, for several players, the four players with the most similar leverage profiles, and Figure~\ref{fig:player_compare} shows comparison box plots of the posterior distribution of their partial effects. \begin{table}[!h] \centering \caption{Most similar leverage profiles.
Mahalanobis distance shown in parentheses} \label{tab:player_compare} \begin{tabular}{l|ll} \hline Player & ~ Similar Players & \\ \hline LeBron James & DeAndre Jordan (0.025) & Kevin Durant (0.055) \\ ~ & Blake Griffin (0.082) & Stephen Curry (0.204) \\ \hline Chris Paul & Shawn Marion (0.081) & Courtney Lee (0.103) \\ ~ & Terrence Ross (0.126) & Chris Bosh (0.141) \\ \hline Kyrie Irving & DeMarcus Cousins (0.080) & Tristan Thompson (0.087) \\ ~ & Brandon Bass (0.099) & Randy Foye (0.109) \\ \hline Zach Randolph & Jimmy Butler (0.020) & David West (0.045) \\ ~ & Mike Conley (0.063) & George Hill (0.073) \\\hline \end{tabular} \end{table} \begin{figure}[!h] \centering \includegraphics[width = 4.5in]{player_compare} \caption{Comparison box plots of partial effects of players with similar leverage profiles} \label{fig:player_compare} \end{figure} We see that the posterior distributions of partial effects for each player in Table~\ref{tab:player_compare} are well-separated from the posterior distribution of partial effects of the player with the most similar leverage profile. For instance, LeBron James' leverage profile is most similar to DeAndre Jordan's, but we see that James' posterior distribution is located to the right of Jordan's and we find that in 884 of the 1000 posterior samples, James' partial effect is greater than Jordan's. This suggests that while James and Jordan played in similar contexts, James' performance in these situations was more helpful to his team than Jordan's. \subsection{Team Effects} \label{sec:team_effects} Recall that the inclusion of team effects, $\tau,$ in Equation~\ref{eq:player_team_model} was to ensure that the partial effects of players were not overly deflated if they played on bad teams or overly inflated if they played on good teams. Figure~\ref{fig:team_effect} shows box plots of the posterior distribution of all team effects. 
\begin{figure}[!h] \centering \includegraphics[width = 4.5in]{team_effect} \caption{Comparison box plots of the posterior distribution of team effects.} \label{fig:team_effect} \end{figure} We see that the Milwaukee Bucks and Sacramento Kings have a noticeably negative effect. This suggests that opposing teams generally increased their chances of winning, regardless of which five Bucks or Kings players were on the court. This is in contrast with the San Antonio Spurs, whose team effect is substantially positive. Figure~\ref{fig:bucks_kings_spurs_comparison} shows comparative box plots of the posterior distribution of the partial effects for a few Bucks, Kings, and Spurs players. \begin{figure}[!h] \centering \includegraphics{bucks_kings_spurs_comparison} \caption{Comparison box plots of partial effects of selected Bucks, Kings, and Spurs players} \label{fig:bucks_kings_spurs_comparison} \end{figure} The fact that the posterior distributions of Isaiah Thomas', DeMarcus Cousins', and Khris Middleton's partial effects are predominantly concentrated on the positive axis indicates that their performance stood out despite the relatively poor quality of their team. On the other hand, the posterior distributions of Ben McLemore's, O.J. Mayo's, and Brandon Knight's partial effects are predominantly concentrated on the negative axis, indicating that their teams' already diminished chances of winning decreased when they were on the court. The fact that Manu Ginobili has such a large positive partial effect is especially noteworthy, given the Spurs' already large positive effect. \section{Impact Ranking} \label{sec:impact_ranking} Since we may view a player's partial effect as an indication of his value to his team, we can generate a rank-ordering of the players on each team based on their partial effects. Intuitively, we could rank all of the members of a particular team by the posterior mean or median of their partial effects. 
Such an approach, however, does not incorporate the joint uncertainty of the partial effects. Alternatively, for each team and each posterior sample of $\theta,$ we could rank the partial effects of all players on that team and then identify the rank-ordering with highest posterior frequency. Unfortunately, since there are over one trillion orderings of 15 players (the minimum number of players per team), such an approach would require an impractical number of posterior samples. Instead, we propose to average the player rankings over the 1000 posterior samples to get their \textbf{Impact Ranking}. Table~\ref{tab:impact_ranking} shows the Impact Ranking for the players on the San Antonio Spurs and the Miami Heat, with the most common starting lineup bolded and players who played very limited minutes starred. \begin{table}[!h] \centering \caption{Impact Ranking for San Antonio Spurs and Miami Heat players. For each player, we report both his salary and the approximate probability that his partial effect is greater than that of the player ranked immediately after him. Starred players played very limited minutes.} \label{tab:impact_ranking} \begin{tabular}{llcl} \\ \hline Rank&San Antonio Spurs&~~~~~~~~~&Miami Heat\\ \hline 1. & Manu Ginobili (\$7.5M, 0.719)& &\textbf{Chris Bosh} (\$19.1M, 0.650)\\ 2. & \textbf{Danny Green} (\$3.8M, 0.540) & & \textbf{LeBron James} (\$19.1M, 0.683) \\ 3. &Patty Mills (\$1.3M, 0.531)& & \textbf{Mario Chalmers} (\$4M, 0.577)\\ 4. &\textbf{Kawhi Leonard} (\$1.9M, 0.631) & & Ray Allen (\$3.2M, 0.571)\\ 5. &\textbf{Tiago Splitter} (\$10M, 0.510)& & Toney Douglas (\$1.6M, 0.486) \\ 6. &\textbf{Tony Parker} (\$12.5M, 0.554) & & Roger Mason Jr. (\$0.8M, 0.509) \\ 7. & \textbf{Tim Duncan} (\$10.4M, 0.518)& & Chris Andersen (\$1.4M, 0.518)\\ 8. & Damion James$^{*}$ (\$20K, 0.489)& & James Jones (\$1.5M, 0.520)\\ 9. & Boris Diaw (\$4.7M, 0.566)& & \textbf{Dwyane Wade} (\$18.7M, 0.515) \\ 10.
& Matt Bonner (\$3.9M, 0.582)& & DeAndre Liggins$^{*}$ (\$52K, 0.520)\\ 11. & Jeff Ayres (\$1.8M, 0.556)& & Norris Cole (\$1.1M, 0.570) \\ 12. & Nando de Colo (\$1.4M, 0.561)& & Justin Hamilton$^{*}$ (\$98K, 0.541) \\ 13. & Austin Daye (\$0.9M, 0.530)& & Michael Beasley (\$0.8M, 0.511)\\ 14. & Aron Baynes (\$0.8M, 0.513)& & Greg Oden (\$0.8M, 0.503)\\ 15. & Cory Joseph (\$1.1M, 0.583)& & Rashard Lewis (\$1.4M, 0.525)\\ 16. & Marco Belinelli (\$2.8M)& & \textbf{Shane Battier} (\$3.3M, 0.618) \\ 17. & & & Udonis Haslem (\$4.3M)\\ \hline \end{tabular} \end{table} In Table~\ref{tab:impact_ranking}, we see that the most impactful player for the Spurs, Manu Ginobili, is a bench player, while five of the next six most impactful players were the most common starters. This is in contrast to the Heat, for whom we only observe three starters in the top five most impactful players and a rather significant drop-off down to the remaining starters. For instance, Dwyane Wade was not nearly as impactful as several Heat bench players and Shane Battier was even less valuable than several players who had very limited minutes (DeAndre Liggins and Justin Hamilton) or limited roles (Greg Oden). This indicates that the Heat did not rely much on Wade or Battier to win games, despite their appearance in the starting lineup. We can further compare each player's salary to his impact ranking to get a sense of which players are being over- or under-valued by their teams. For instance, Patty Mills earned only \$1.3M, the eleventh highest salary on the Spurs, despite being the third most impactful player on the team. In contrast, Wade was the ninth most impactful player on the Heat, despite earning nearly \$15 million more than Mario Chalmers, who was the third most impactful player for the Heat.
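The rank-averaging behind Impact Ranking is straightforward to sketch. Here the roster size and posterior draws are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior draws: 1000 samples of partial effects for a
# five-player roster (the means are chosen for illustration only).
draws = rng.normal([0.006, 0.004, 0.001, -0.002, -0.005], 0.003, size=(1000, 5))

# Within each posterior sample, rank the team's players by partial effect
# (rank 1 = largest effect), then average those ranks across samples.
per_sample_rank = (-draws).argsort(axis=1).argsort(axis=1) + 1
impact_ranking = per_sample_rank.mean(axis=0)
order = np.argsort(impact_ranking)  # players from most to least impactful
```

Averaging the within-sample ranks, rather than ranking the posterior means, lets the joint uncertainty of the effects feed into the final ordering.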
\section{Impact Score} \label{sec:impact_score} A natural use of any player evaluation methodology is to generate a single ranking of all players and we could simply rank all players in the league according to the posterior mean of their partial effects. Unfortunately, since the mean can be influenced by a few very extreme values, such a ranking can overvalue players whose partial effects have large posterior variance. To try to account for the joint variability of player effects, we can rank the players' partial effect estimates in each of our 1000 simulated posterior samples. Then we could compute 95\% credible intervals for each player's partial effects-based rank. We find, however, that these intervals are rather long. For instance, we find that LeBron James had the largest partial effect among all players in only 11 of the 1000 posterior samples and the 95\% credible interval for his rank is $[3, 317].$ Similarly, we find that Kevin Durant also had the largest partial effect among all players in 11 of the 1000 posterior samples and the 95\% credible interval for his rank is $[2, 300].$ It turns out that Dirk Nowitzki had the largest partial effect in the greatest number of posterior samples (39 out of 1000) but the credible interval for his rank is $[1, 158].$ Given the considerable overlap in the posterior distributions of player partial effects as seen in Figure~\ref{fig:player_effect_density}, it is not surprising to see the large joint posterior variability in player partial effects reflected in the rather long credible intervals of each player's partial effects-based ranks. We instead propose to rank players according to their \textbf{Impact Score}, which we define as the ratio between the posterior mean and the posterior standard deviation of a player's partial effect. This definition is very similar to the Sharpe Ratio used in finance to examine the performance of an investment strategy. We may view Impact Score as a balance between a player's estimated ``risk'' (i.e.
uncertainty about his partial effect) and a player's estimated ``reward'' (i.e. average partial effect). As an example, we find that the posterior mean of Iman Shumpert's partial effect is less than the posterior mean of Chris Bosh's partial effect (0.0063 compared to 0.0069). We also find that the posterior standard deviation of Shumpert's partial effect is 0.0034 while it is 0.0039 for Bosh. Between the two players, Shumpert gets the edge in Impact Score rankings because we are less uncertain about his effect, despite him having a smaller average effect compared to Bosh. Table~\ref{tab:impact_score} shows the thirty players with largest Impact Scores. Somewhat unsurprisingly, we see a number of superstars in Table~\ref{tab:impact_score}. Patrick Patterson is a notable standout; as \cite{Cavan2014} and \cite{Lapin2014} note, he provided valuable three-point shooting and rebounding off the bench for the Toronto Raptors. \begin{table}[!h] \centering \caption{Players with the highest Impact Scores} \label{tab:impact_score} \begin{tabular}{lll} \\ \hline 1. Dirk Nowitzki (2.329)& ~~~~~~ & 16. Eric Bledsoe (1.274) \\ 2. Patrick Patterson (1.939)& ~~~~~~ & 17. Dwight Howard (1.273) \\ 3. Iman Shumpert (1.823) & ~~~~~~ & 18. Danny Green (1.214)\\ 4. Chris Bosh (1.802) & ~~~~~~ & 19. Deron Williams (1.212) \\ 5. Manu Ginobili (1.779) & ~~~~~~ & 20. Matt Barnes (1.206) \\ 6. James Harden (1.637) & ~~~~~~ & 21. Roy Hibbert (1.205) \\ 7. Chris Paul (1.588) & ~~~~~~ & 22. J.J. Redick (1.201) \\ 8. Zach Randolph (1.56) & ~~~~~~ & 23. Shaun Livingston (1.201) \\ 9. Joakim Noah (1.555) & ~~~~~~ & 24. Marcin Gortat (1.185)\\ 10. Stephen Curry (1.514) & ~~~~~~ & 25. Greivis Vasquez (1.175) \\ 11. Nene Hilario (1.474) & ~~~~~~ & 26. Blake Griffin (1.174)\\ 12. Andre Iguodala (1.445) & ~~~~~~ & 27. Anthony Tolliver (1.151) \\ 13. Kevin Durant (1.410) & ~~~~~~ & 28. LaMarcus Aldridge (1.140) \\ 14. LeBron James (1.324) & ~~~~~~ & 29. Courtney Lee (1.131) \\ 15. 
Isaiah Thomas (1.310) & ~~~~~~ & 30. Nate Robinson (1.126) \\ \hline \end{tabular} \end{table} It is important to note that our reported Impact Scores are subject to some degree of uncertainty, since we have to estimate the posterior mean and standard deviation of each player's partial effect. This uncertainty amounts to MCMC simulation variability and induces some uncertainty in the reported player rankings. In order to quantify the induced uncertainty explicitly, we could run our sampler several times, each time generating a draw of 1000 simulated posterior samples and ranking the players according to the resulting Impact Scores. We could then study the distribution of each player's ranking. While straightforward in principle, the computational burden of running our sampler sufficiently many times is rather impractical. Moreover, we suspect the simulation-to-simulation variability in Impact Scores is small. Since we are estimating the posterior mean and standard deviation of player partial effects with 1000 samples, we are reasonably certain that the estimated values are close to the true values. As a result, our reported Impact Scores are reasonably precise and we do not expect much variation in the player rankings. \subsection{Comparison of Impact Score to Other Metrics} \label{sec:comparison} \cite{Hollinger2004} introduced Player Efficiency Rating (PER) to ``sum up all [of] a player's positive accomplishments, subtract the negative accomplishments, and return a per-minute rating of a player's performance.'' Recently, ESPN introduced Real Plus-Minus (RPM) which improves on Adjusted Plus-Minus through a proprietary method that, according to \cite{Ilardi2014}, uses ``Bayesian priors, aging curves, score of the game and extensive out-of-sample testing.'' Figure~\ref{fig:impact_comparison} shows Impact Score plotted against PER and RPM. We note that of the 488 players in our data set, RPM was available for only 437.
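Given the simulated draws, Impact Score (the posterior mean of a player's partial effect divided by its posterior standard deviation) is a one-line computation. A sketch with made-up numbers in the spirit of the Shumpert and Bosh example:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 1000 posterior draws of partial effects for three players;
# the means and standard deviations are illustrative only.
draws = rng.normal([0.0069, 0.0063, 0.0010],
                   [0.0039, 0.0034, 0.0060], size=(1000, 3))

# Impact Score: posterior mean over posterior standard deviation, an
# estimated "reward" per unit of "risk," analogous to a Sharpe ratio.
impact_score = draws.mean(axis=0) / draws.std(axis=0)
```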
In Figure~\ref{fig:impact_comparison}, we have excluded the six players whose PER is greater than 33 or less than -3 so that the scale of the figure is not distorted by these extreme values. We find that the correlation between Impact Score and PER is somewhat moderate (correlation 0.226) and that Impact Score is much more highly correlated with RPM (correlation of 0.655). This is somewhat expected, since PER is essentially context-agnostic and RPM at least partially accounts for the context of player performance. To see this, we note that the number of points a player scores is a key ingredient in the PER computation. What is missing, however, is any consideration of \textit{when} those points were scored. RPM is more context-aware, as it considers the score of the game when evaluating player performance. However, since the RPM methodology is proprietary, the extent to which the context in which a player performs influences his final RPM value remains unclear. \begin{figure}[!h] \centering \includegraphics{impact_comparison} \caption{Comparison of Impact Score to PER (A) and RPM (B). We find that RPM is much more consistent with Impact Score than is PER, though there are still several inconsistencies in overall player evaluation. Note that PER is calibrated so the league average is 15.00} \label{fig:impact_comparison} \end{figure} As we noted in Section~\ref{sec:introduction}, metrics like PER and point-differential metrics can overvalue low-leverage performances. An extreme example of this is DeAndre Liggins' PER of 129.47. Liggins played in a single game during the 2013-14 regular season and in his 84 seconds of play, he made his only shot attempt and recorded a rebound. We note, however, that Liggins entered the game when his team had a 96.7\% chance of winning the game and his performance did not improve his team's chances of winning in any meaningful way.
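The Liggins example illustrates the down-weighting mechanism at the heart of the model: the same two-point swing moves win probability far less in a decided game than in a close one. A toy illustration (the logistic surface below is a stand-in, not the paper's estimator):

```python
import math

def toy_win_prob(lead, seconds_left):
    # Stand-in win probability surface; NOT the estimator used in the paper.
    return 1.0 / (1.0 + math.exp(-lead / (0.03 * seconds_left + 1.0)))

# A two-point swing with one minute left, in a close game vs. a blowout.
close_game_swing = toy_win_prob(2, 60) - toy_win_prob(0, 60)
blowout_swing = toy_win_prob(22, 60) - toy_win_prob(20, 60)
# blowout_swing is a tiny fraction of close_game_swing, so "garbage time"
# production contributes almost nothing to a player's partial effect.
```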
Figure~\ref{fig:start_winProb_comparison} plots each player's Impact Score, PER, and RPM against the average win probability of each player's shifts. In Figure~\ref{fig:start_winProb_comparison}, we have included the players with very negative PER values who were excluded from Figure~\ref{fig:impact_comparison}. \begin{figure}[!h] \centering \includegraphics{start_winProb_comparison} \caption{Impact Score, PER, and RPM plotted against average starting win probability. Note that RPM was unavailable for 51 players.} \label{fig:start_winProb_comparison} \end{figure} In Figure~\ref{fig:start_winProb_comparison}, we see that the average starting win probability for Chris Smith, Vander Blue, Tony Mitchell, and DeAndre Liggins was less than 0.2 or greater than 0.8, suggesting that they played primarily in low-leverage situations. We see that while their PERs ranged from -23 to 130, their Impact Scores are all very close to zero. This confirms that our methodology correctly values so-called ``garbage time'' performance. It is interesting to note that Hasheem Thabeet played when his team had, on average, above a 70\% chance of winning the game. His negative Impact Score is an indication that his performance generally hurt his team's chances of winning and we find that he had a negative partial effect in 678 of the 1000 posterior samples. While it is encouraging that there is at least some positive correlation between Impact Scores and PER, simply looking at the correlation is not particularly informative, as these metrics are measuring rather different quantities. Of greater interest, perhaps, is to see when PER and Impact Score agree and when they disagree. For instance, we find players like LeBron James, Chris Paul and Dirk Nowitzki who have both large PER values and large Impact Scores. The large PER values are driven by the fact that they efficiently accumulated more positive box-score statistics (e.g. points, assists, rebounds, etc.) than negative statistics (e.g.
turnovers and fouls) and the large Impact Scores indicate that their individual performances helped improve their team's chances of winning. On the other hand, Brook Lopez and Kyrie Irving have the ninth and twenty-ninth largest PER values but their rather middling Impact Scores suggest that, despite accumulating impressive individual statistics, their performances did not actually improve their teams' chances of winning. In contrast to Irving and Lopez, players like Iman Shumpert and Andre Iguodala have below-average PER values but rather large Impact Scores. Shumpert has a PER of 9.66, placing him in the bottom 25\% of the league, but has the fourth largest Impact Score. This suggests that even though Shumpert himself did not accumulate particularly impressive individual statistics, his team nevertheless improved its chances of winning when he was on the court. It is worth noting that Shumpert and Iguodala are regarded as top defensive players. As \cite{GoldsberryWeiss2013} remark, conventional basketball statistics tend to emphasize offensive performance since there are not nearly as many discrete defensive factors to record in a box score as there are offensive factors. As such, metrics like PER can be biased against defensive specialists. It is re-assuring, then, to see that Impact Score does not appear to be as biased against defensive players as PER. It is important to note that the fact that Shumpert and Iguodala have much larger Impact Scores than Lopez and Irving does not mean that Shumpert and Iguodala are inherently better players than Lopez and Irving. Rather, it means that Shumpert's and Iguodala's performances helped their teams much more than Irving's or Lopez's. One explanation for the discrepancies between Lopez and Irving's Impact Scores and PERs could be coaching decisions. 
The fact that Lopez and Irving were accumulating impressive individual statistics without improving their respective teams' chances of winning suggests that their coaches may not have been playing them at opportune times for their teams. In this way, when taken together with a metric like PER, Impact Score can provide a more complete accounting and evaluation of a player's performance. \subsection{Year-to-year correlation of Impact Score} \label{sec:year_to_year_impactScore} A natural question to ask about any player evaluation metric is how stable it is year-to-year. In other words, to what extent can we predict how a player ranks with respect to one metric in a season given his ranking in a previous season? Using play-by-play data from the 2012-13 regular season, we can fit a model similar to that in Equation~\ref{eq:player_team_model} and compute each player's Impact Score for that season. There were 389 players who played in both the 2012-13 and 2013-14 seasons and Figure~\ref{fig:impact2012_2013} plots these players' 2012-13 Impact Scores against their 2013-14 Impact Scores. \begin{figure}[!h] \centering \includegraphics[width = 4.5in]{impact2012_2013} \caption{Impact Scores in 2012 and 2013} \label{fig:impact2012_2013} \end{figure} We observe that the correlation between 2012-13 and 2013-14 Impact Scores is 0.242, indicating a rather moderate positive trend. We notice, however, that there are several players whose Impact Scores in 2012-13 are much different than their Impact Scores in 2013-14. For instance, Iman Shumpert's and Dirk Nowitzki's Impact Scores increased dramatically between the two seasons. At the other end of the spectrum, players like Larry Sanders and Tyson Chandler displayed sharp declines in their Impact Scores. On further inspection, we find that all of these players missed many games due to injury in the seasons when they had lower Impact Scores.
Upon their return from injury, they played fewer minutes while they continued to rehabilitate and re-adjust to playing at a high level. In short, the variation in the \textit{contexts} in which these players performed is reflected in the season-to-season variation in their Impact Score. Because it is context-dependent, we would not expect the year-to-year correlation for Impact Scores to be nearly as high as the year-to-year correlation for PER (correlation of 0.75), which attempts to provide a context-agnostic assessment of player contribution. Nevertheless, we may still assess the significance of the correlation we have observed using a permutation test. To simulate the distribution of the correlation between 2012-13 and 2013-14 Impact Scores, under the hypothesis that they are independent, we repeatedly permute the observed 2013-14 Impact Scores and compute the correlation between these permuted scores and the observed 2012-13 Impact Scores. Figure~\ref{fig:impactScore_cor2012_2013} shows a histogram of this null distribution based on 500,000 samples. \begin{figure}[!h] \centering \includegraphics[width = 4.5in]{impactScore_cor2012_2013} \caption{Null distribution of correlation between 2012-13 and 2013-14 Impact Scores under the hypothesis that they are independent. The observed correlation of 0.242 is shown in red} \label{fig:impactScore_cor2012_2013} \end{figure} We find that the observed correlation is significantly different than zero. This indicates that even though Impact Scores are inherently context-dependent, a player's Impact Score in one season is moderately predictive of his Impact Score in the next, barring any significant changes in the contexts in which he plays. \subsection{Multi-season Impact Score} Though the context in which players perform between seasons can be highly variable, it is arguably more stable across multiple seasons.
In light of this, we can re-fit our models using all of the play-by-play data from 2008-09 to 2010-11 and from 2011-12 to 2013-14, and estimate each player's partial effect separately in both time periods. Note that for each season considered, the change in win probability during a shift was estimated using data from all prior seasons. Somewhat surprisingly, we find that the posterior standard deviations of the player partial effects estimated over multiple seasons are not substantially smaller than when we consider one season at a time, despite having much more data. For instance, the posterior standard deviation of LeBron James' partial effect in the 2013-14 season is 0.0035 while it is 0.002 over the three-season span from 2008-09 to 2010-11. Table~\ref{tab:multi_year_impact} shows the top Impact Scores over these two three-season periods. \begin{table}[!h] \centering \caption{Impact Score computed over three-season windows} \label{tab:multi_year_impact} \begin{tabular}{ll} \hline 2008-09 -- 2010-11 & 2011-12 -- 2013-14 \\ \hline LeBron James (5.400) & LeBron James (3.085) \\ Dirk Nowitzki (3.758) & Chris Paul (3.041)\\ Chris Paul (3.247) & Amir Johnson (2.982)\\ Dwyane Wade (2.948) & Stephen Curry (2.919) \\ LaMarcus Aldridge (2.775) & Andre Iguodala (2.805)\\ Steve Nash (2.770) & Mike Dunleavy (2.790)\\ Tim Duncan (2.679) & Dirk Nowitzki (2.733) \\ Matt Bonner (2.178) & Kevin Durant (2.426) \\ Kevin Garnett (2.125) & Paul George (2.332)\\ \hline \end{tabular} \end{table} Quite clearly, LeBron James stands out rather prominently, especially in the 2008-2010 time period, as far and away the most impactful player over those three seasons. We note that James' 2013-14 Impact Score is much less than either of his multi-season Impact Scores. This indicates that while James may have been the most impactful player over the course of several seasons, in that particular season, he was not as impactful.
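A permutation test analogous to the one used for the single-season comparison applies equally well to multi-season scores. A sketch with simulated score vectors (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical Impact Scores for the same 389 players in two periods,
# generated with a modest built-in correlation.
scores_a = rng.normal(0.0, 1.0, size=389)
scores_b = 0.3 * scores_a + rng.normal(0.0, 1.0, size=389)

observed = np.corrcoef(scores_a, scores_b)[0, 1]

# Null distribution of the correlation under independence: repeatedly
# permute one score vector and recompute the correlation.
null = np.array([
    np.corrcoef(scores_a, rng.permutation(scores_b))[0, 1]
    for _ in range(5000)
])
p_value = float(np.mean(np.abs(null) >= abs(observed)))
```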
Figure~\ref{fig:multi_year_impact} shows the Impact Scores from 2011-2013 plotted against the Impact Scores from 2008-2010. The correlation between these scores is 0.45, which is larger than the year-to-year correlation in Impact Score. The players with discordant single season Impact Scores highlighted in Figure~\ref{fig:impact2012_2013} were all recovering from significant injuries that required them to miss many games and play restricted minutes for a good portion of the season. Since there are generally few injuries which span significant portions of multiple seasons, the contexts in which players perform tend to stabilize across several seasons. \begin{figure}[!h] \centering \includegraphics[width = 4.5in]{multi_year_impact} \caption{Impact Scores computed over 2008-2010 and 2011-2013} \label{fig:multi_year_impact} \end{figure} \section{Lineup comparison} \label{sec:lineup_comparison} As a further study of the full covariance structure of $\theta$ and $\tau,$ we can compare how different five-man lineups match up against each other. To simulate the posterior distribution of a five-man lineup's effect on its team's win probability, we simply sum the corresponding entries of each posterior sample of $\theta.$ With these samples, we can compute each lineup's Impact Score just as we did for players in Section~\ref{sec:impact_score}: we divide the posterior mean of the lineup's effect by the posterior standard deviation of its effect. Table~\ref{tab:lineup_impact_score} shows the ten lineups with the largest Impact Scores. \begin{table}[!h] \centering \caption{Lineups with the largest Impact Scores.} \label{tab:lineup_impact_score} \begin{tabular}{lll} \\ \hline ~~~~Lineup & Impact Score & Minutes\\ \hline 1.~ Stephen Curry, Klay Thompson, Andre Iguodala & 2.98 & 780.25 \\ ~~~~ David Lee, Andrew Bogut & ~ & ~ \\ \hline 2.~Chris Paul, J.J.
Redick, Matt Barnes & 2.88 & 88.57 \\ ~~~~Blake Griffin, DeAndre Jordan & & \\ \hline 3.~Stephen Curry, Klay Thompson, Andre Iguodala & 2.82 & 31.75 \\ ~~~~David Lee, Jermaine O'Neal & & \\ \hline 4.~George Hill, Lance Stephenson, Paul George & 2.58 & 1369.38 \\ ~~~~David West, Roy Hibbert & & \\ \hline 5.~Mario Chalmers, Ray Allen, LeBron James & 2.57 & 34.28 \\ ~~~~Chris Bosh, Chris Andersen & & \\ \hline 6.~Patrick Beverley, James Harden, Chandler Parsons & 2.51 & 589.97 \\ ~~~~Terrence Jones, Dwight Howard & & \\ \hline 7.~Mario Chalmers, Dwyane Wade, LeBron James & 2.46 & 26.2 \\ ~~~~Chris Bosh, Chris Andersen & & \\ \hline 8.~C.J. Watson, Lance Stephenson, Paul George & 2.42 & 118.27 \\ ~~~~David West, Roy Hibbert & & \\ \hline 9.~John Wall, Bradley Beal, Trevor Ariza & 2.38 & 384.03 \\ ~~~~Nene Hilario, Marcin Gortat & & \\ \hline 10.~Patrick Beverley, James Harden, Chandler Parsons & 2.38 & 65.58 \\ ~~~~Donatas Motiejunas, Dwight Howard & & \\ \hline \end{tabular} \end{table} We can also simulate draws from the posterior predictive distribution of the change in home team win probability for each home/away configuration of two five-man lineups using our posterior samples of $\left(\mu, \theta, \tau, \sigma^{2}\right).$ For a specific home/away configuration, we construct vectors of signed indicators, $\mathbf{P}^{*}$ and $\mathbf{T}^{*}$, to encode which players and teams we are pitting against one another. For each sample of $\left(\mu, \theta, \tau, \sigma^{2}\right),$ we compute $$ \mu + \mathbf{P}^{*\top}\theta + \mathbf{T}^{*\top}\tau + \sigma z $$ where $z \sim N(0,1),$ to simulate a sample from the posterior predictive distribution of the change in the home team's win probability for the given matchup. In particular, we consider pitting the lineup with the largest Impact Score (Stephen Curry, Klay Thompson, Andre Iguodala, David Lee, Andrew Bogut) against three different lineups: the lineup with second largest Impact Score (Chris Paul, J.J.
Redick, Matt Barnes, Blake Griffin, DeAndre Jordan), the lineup with the smallest Impact Score (Eric Maynor, Garrett Temple, Chris Singleton, Trevor Booker, Kevin Seraphin), and the lineup with the median Impact Score (Donald Sloan, Orlando Johnson, Solomon Hill, Lavoy Allen, Roy Hibbert). The median lineup's Impact Score is the median of all lineup Impact Scores. Figure~\ref{fig:lineup_compare} shows the posterior predictive densities of the change in win probability in a single shift when the lineup with largest Impact Score plays at home. \begin{figure}[!h] \centering \includegraphics[width = 4.5in]{lineup_compare} \caption{Posterior predictive densities of the change in win probability when the lineup with the largest Impact Score is matched up with three other lineups} \label{fig:lineup_compare} \end{figure} Unsurprisingly, when the lineup with largest Impact Score is pitted against the lineup with smallest Impact Score, the predicted change in win probability is positive about 65\% of the time and is greater than 0.1 just over 23\% of the time. It is also reassuring to see that the density corresponding to the matchup against the median lineup lies between the two extremes considered. Rather surprisingly, however, we find that when the lineup with largest Impact Score is pitted against the lineup with second largest Impact Score, the change in win probability is negative about 55\% of the time. We find that the posterior mean effect of the Paul-Redick-Barnes-Griffin-Jordan lineup is 0.0166 while the posterior mean effect of the Curry-Thompson-Iguodala-Lee-Bogut lineup is 0.0150. The difference in Impact Score is driven by the difference in the posterior standard deviation of each lineup's effect (0.0050 for Curry-Thompson-Iguodala-Lee-Bogut and 0.0058 for Paul-Redick-Barnes-Griffin-Jordan).
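The matchup simulation, $\mu + \mathbf{P}^{*\top}\theta + \mathbf{T}^{*\top}\tau + \sigma z$, can be sketched as follows; every numeric value below is a hypothetical stand-in for the corresponding posterior draws:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000  # number of posterior samples

# Hypothetical posterior draws of each model component for one matchup.
mu = rng.normal(0.004, 0.001, n)                # intercept
home_lineup = rng.normal(0.0150, 0.0050, n)     # summed theta, home five
away_lineup = rng.normal(0.0166, 0.0058, n)     # summed theta, away five
team_term = rng.normal(0.0, 0.0020, n)          # signed team effects
sigma = np.abs(rng.normal(0.08, 0.005, n))      # residual scale

# One posterior predictive draw per posterior sample:
# mu + P*'theta + T*'tau + sigma * z, with z ~ N(0, 1).
z = rng.standard_normal(n)
pred_change = mu + (home_lineup - away_lineup) + team_term + sigma * z

p_home_gain = float(np.mean(pred_change > 0))
```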
Because of the disparity in playing time (780.25 minutes vs 88.57 minutes), we are less uncertain about the effect of the Curry-Thompson-Iguodala-Lee-Bogut lineup and the additional certainty makes up for the smaller average effect. This highlights an important feature of Impact Score: it tries to balance the estimated effect against the uncertainty in this estimate. At this point, it is worth noting that while the change in win probability over the course of any shift is constrained to lie between -1 and 1, none of our modeling assumptions restrict the range of the predicted change in win probability in any of the match-ups considered to lie in this range. In particular, since we have a conditional normal model, it could be the case that the $\sigma z$ term pushes our prediction outside of the interval $[-1,1].$ In light of this, it is reassuring to find that the support of the posterior predictive distributions of the change in win probability in all of the match-ups considered is in $[-0.4, 0.4].$ \section{Discussion} \label{sec:discussion} In this paper, we have estimated each NBA player's effect on his team's chances of winning, after accounting for the contributions of his teammates and opponents. By focusing on win probability, our model simultaneously down-weights the importance of performance in low-leverage situations (``garbage time'') and up-weights the importance of high-leverage performance, in marked contrast to existing measures like PER which provide context-agnostic assessments of player performance. Since our estimates of player effects depend fundamentally on the context in which players perform, our estimates and derived metrics are necessarily retrospective in nature. As a result, our results do not display nearly as high of a year-to-year correlation as other metrics.
We would argue, however, that the somewhat lower year-to-year repeatability of our derived metrics is offset by the fact that they provide a much more complete accounting of how a player helped his team win in a particular season. When taken together with a metric like PER, our results enable us to determine whether the performance of a player who recorded impressive box-score totals actually improved his team's chances of winning the game. Ultimately, our model and derived metrics serve as a complement to existing measures of player performance and enable us to contextualize individual performances in a way that existing metrics alone cannot. We have introduced a new method for estimating the probability that a team wins the game as a function of its lead and how much time is remaining. Our win probability estimates can be viewed as a middle-ground between the empirical estimates, which display extreme discontinuity due to small sample size issues, and existing probit regression model estimates, which do not seem to fit the empirical observations well. Though our win probability estimates are generally quite precise, our choice of smoothing window $[T - 3, T + 3] \times [L - 2, L + 2]$ is admittedly rather simplistic. This is most pronounced near the end of the game, when a single possession can swing the outcome and it is less reasonable to expect that the win probability when leading by 2 points is similar to the win probability when trailing by 2 points. To deal with this, one could let the window over which we aggregate games vary with both time and lead instead of using a fixed window.
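The fixed-window estimator described here, pooling games over $[T - 3, T + 3] \times [L - 2, L + 2]$, can be sketched as follows (the win and game counts are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical counts of games and home wins, indexed by
# (seconds remaining, home lead) for leads from -30 to +30.
leads = np.arange(-30, 31)
n_seconds = 2880
games = rng.integers(1, 50, size=(n_seconds, leads.size))
wins = rng.binomial(games, 0.5)

def smoothed_win_prob(t, lead, half_t=3, half_l=2):
    """Empirical home win fraction pooled over [t-3, t+3] x [lead-2, lead+2]."""
    li = int(np.searchsorted(leads, lead))
    t0, t1 = max(t - half_t, 0), min(t + half_t + 1, n_seconds)
    l0, l1 = max(li - half_l, 0), min(li + half_l + 1, leads.size)
    return wins[t0:t1, l0:l1].sum() / games[t0:t1, l0:l1].sum()
```

Letting `half_t` and `half_l` vary with the game state is one way to implement the time- and lead-dependent windows suggested above.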
We also note that the choice of a hard threshold of $L = \pm 20$ in determining the number of pseudo-wins, $\alpha_{T,L},$ and pseudo-losses, $\beta_{T,L}$, to add is arbitrary and we could just as easily have selected $L = \pm 25$ or $\pm 30.$ Alternatively, $\alpha_{T,L}$ and $\beta_{T,L}$ could be selected at random from a specified distribution depending on the time and lead or we could place a further hyper-prior on $\left(\alpha_{T,L}, \beta_{T,L}\right).$ Unfortunately, estimates from the first approach may not be reproducible and explicitly computing the Bayes estimator of $p_{T,L}$ in the second approach can be difficult. While a more carefully constructed prior can, in principle, lead to estimates that more accurately reflect our subjective beliefs about how win probability evolves, one must take care not to select a prior that can overwhelm the observed data. Looking at our win probability estimates, we find that a unit change in time corresponds to a smaller change in win probability than a unit change in lead, especially near the end of close games. This can introduce a slight bias against players who are frequently substituted into games on defensive possessions and taken out of the game on offensive possessions, since such players will not be associated with large changes in win probability. One way to overcome this bias is to incorporate which team has possession of the ball into our win probability estimates. In principle, it would be straightforward to include possession information in our win probability estimates: first we bin the games based on home team lead, time remaining, and which team has possession, and then we apply our estimation procedure twice, once for when the home team has possession and once for when the away team has possession.
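One concrete, purely hypothetical version of the pseudo-count scheme: add Beta-prior pseudo-wins and pseudo-losses only beyond the $L = \pm 20$ threshold, shrinking estimates toward certainty in lopsided games (the actual $\alpha_{T,L}$ and $\beta_{T,L}$ depend on both time and lead and are not reproduced here):

```python
def posterior_win_prob(wins, games, lead, alpha=5.0, beta=1.0):
    # Hypothetical pseudo-count choice, for illustration only.
    if lead >= 20:
        a, b = alpha, beta       # pseudo-wins favor the leading home team
    elif lead <= -20:
        a, b = beta, alpha
    else:
        a, b = 1.0, 1.0          # uniform Beta(1, 1) prior in close games
    # Beta(a, b) prior with a Binomial likelihood gives this posterior mean.
    return (wins + a) / (games + a + b)
```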
Our omission of possession information is driven largely by our inability to reliably determine which team has possession on a second-by-second basis, owing to errors in the order in which plays are recorded in the play-by-play data we have used. In general, more sophisticated estimation of win probability remains an area of future exploration. Since our estimates of player effect are context-dependent, we have introduced leverage profiles as a way to determine which players' partial effects are most directly comparable. Though we have not done so in this paper, one could use leverage profiles to cluster players based on the situations in which they play. This could potentially provide insight into how coaches use various players around the league and lead to a more nuanced understanding of each player's role on his team. In keeping with the spirit of previous player-evaluation methods, we define two metrics, Impact Ranking and Impact Score, to determine a rank-ordering of players. Impact Ranking provides an in-team ranking of each player's partial effect, allowing us to determine whether a player's salary is commensurate with his overall contribution to his team's chances of winning games. Impact Score balances a player's estimated effect against the uncertainty in our estimate to generate a league-wide rank-ordering. We have found that any individual player's effect on his team's chances of winning during a single shift is small, generally less than 1\%. We moreover have found rather considerable overlaps in the posterior distribution of player partial effects. This suggests there is no single player who improves his team's chances of winning significantly more than the other players. That said, we are still able to distinguish clear differences in players' impacts. Somewhat surprisingly, we find that Dirk Nowitzki had a larger impact on his team's chances of winning than more prominent players like Kevin Durant and LeBron James.
We also found that Durant's and James' impacts were virtually indistinguishable. This is not to suggest that Nowitzki is a better or more talented player than Durant or James, per se. Rather, it indicates that Nowitzki's performance was much more important to his team's success than Durant's or James' performances were to their respective teams. There are several possible extensions and refinements to our proposed methodology. As mentioned earlier, our win probability estimation is admittedly simplistic and designing a more sophisticated procedure is an area for future work. It is also possible to include additional covariates in Equation~\ref{eq:player_team_model} to entertain two-way or three-way player interactions, in case there are any on-court synergies or mismatches amongst small groups of players. In its current form, Equation~\ref{eq:player_team_model} does not distinguish the uncertainty in estimating the $y_{i}$'s from the inherent variability in the change in win probability. It may be possible to separate these sources of variability by decomposing $\sigma,$ though care must be taken to ensure identifiability of the resulting model. Finally, rather than focusing on each player's overall impact, one could scale the predictors in Equation~\ref{eq:player_team_model} by the shift length and re-fit the model to estimate each player's per-minute impact on his team's chances of winning. \newpage
\section{Introduction} Soccer 2D Simulation league is one of the first robotic leagues in the RoboCup Competitions and a great environment for researchers to invent and apply intelligent algorithms and compete with the best researchers in the field \cite{abreu2012performance}. Numerous teams participate in the WorldCup competition annually, which has almost 40 major and junior leagues including simulation and real environments. Moreover, the Soccer 2D Simulation league has participants from various countries and universities. Among the most famous teams we can mention Helios \cite{helios2018}, Cyrus \cite{zarecyrus2017}\cite{zarecyrus2019}, Gliders \cite{gliders2019}, FRA-UNIted \cite{fra2020}, Namira \cite{asali2018namira}, and Razi \cite{noohpishehrazi}, which hold multiple titles from different RoboCup competitions.\\ Namira 2D Soccer Simulation team consists of current and former students of Shiraz University and Qazvin Islamic Azad University (QIAU). Some of the members previously worked as a team in the Shiraz \cite{asali2016shiraz} and Persian Gulf 2D Soccer Simulation Teams \cite{asali2017persian} in World Cup 2016 and 2017, joined by recently added students who study Software \& Hardware Engineering at Shiraz University and QIAU. In total, Namira's members have achieved $1^{st}$ place in the IranOpen 2016 technical challenge, $2^{nd}$ place in ShirazOpen 2018, $5^{th}$ place in the IranOpen 2016 and 2017 leagues, $6^{th}$ place in RoboCup WorldCup 2016, $8^{th}$ place in World Cup 2018 \cite{asali2018namira}, and a few other achievements \cite{zarecyrus2015}\cite{khayami2014cyrus}\cite{khayami2013m}. The team's research focus is on topics such as Noise Reduction, Opponent Modelling, Behavior \& Strategy Detection \& Selection, software development, Data Mining \& Analysis \cite{asali2016using}, and so on.\\ In this paper, we first introduce a noise reduction method for agents' localization based on Autoregressive Kalman Filtering. 
In sections three and four, we introduce two new software packages that benefit the 2D Soccer Simulation community as tools to analyze game data and work with it more easily. Finally, we discuss future work and conclude in section five. \section{Kalman Filtering for Localization} Kalman Filtering, also known as Linear Quadratic Estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each time frame. Kalman Filtering can be implemented and used in many different ways. In our research, we utilized an Autoregressive (AR) Kalman Filter to fit a model that considers location, velocity, and acceleration to estimate the location of the agents more accurately than what the agent already receives from the server. In this model, we assume the movement of the agent has time-varying velocity and acceleration. The location estimation process assigns more weight to the model than to the server data in order to handle the effect of acceleration changes in the agent's movement over time. As a result, when the agent receives noisy location data from the server, it compares the received data with its own model and adjusts the data so that large noise spikes are alleviated and the agent's belief about the world state does not change dramatically in a short amount of time. \\ As shown in the left image of Fig.~\ref{fig:exampleFig1}, we compared the x coordinate of an agent in the first 1000 cycles of a match, with observed, estimated, and real data depicted in the diagram. 
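A constant-acceleration Kalman filter of the kind described above can be sketched as follows. This is a minimal one-dimensional illustration; the process and observation noise values are illustrative assumptions, not the tuned parameters of our actual filter.

```python
import numpy as np

def kalman_track(observations, dt=1.0, q=1e-3, r=1.0):
    """Minimal 1-D constant-acceleration Kalman filter over a sequence
    of noisy x-coordinate observations.  State is (position, velocity,
    acceleration); q and r are illustrative noise levels."""
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]])           # constant-acceleration dynamics
    H = np.array([[1.0, 0.0, 0.0]])     # only position is observed
    Q = q * np.eye(3)
    R = np.array([[r]])
    x = np.zeros((3, 1))
    P = np.eye(3)
    estimates = []
    for z in observations:
        x = F @ x                        # predict state and covariance
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

In practice the same filter is run per coordinate, and its output is blended with the raw server observation as described above.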
To examine the difference more closely, the right image in Fig.~\ref{fig:exampleFig1} shows that the estimated x coordinate is almost always closer to the real data than the observation data coming from the server. The model smooths dramatic changes in coordinates and does not let the agent become overly confused about its own or other agents' locations. To check the error of our method, we have used the Gamma parameter of our Kalman Filter, and the results are drawn as a diagram in Fig.~\ref{fig:exampleFig2}. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{kalman1.eps} \caption{Comparison of real, observed and estimated data in 1000 game cycles} \label{fig:exampleFig1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{Gamma.eps} \caption{Convergence of Gamma parameter over time in Kalman Filter Localization} \label{fig:exampleFig2} \end{figure} \section{Log Analyzer} The process of analyzing soccer matches is vital for finding weaknesses and strengths; however, it is very time consuming without special tools. Traditionally, the most common way to check the performance of the team, especially after a change in the code, was for the programmer or other experts to observe the game logs. In some cases, the changes in the code are very subtle and we need more than a few matches to see the difference. In such cases, a great amount of time is needed to play multiple matches and analyze them one by one. Furthermore, altering one behavior may affect other behaviors in ways that may not be detectable by the expert at first glance. 
Consequently, we need a tool that helps us hold matches against different rivals, analyzes both teams' behaviors, and gives us feedback so that we can decide whether our changes had a positive or negative effect in comparison to our previous code versions.\\ Last year, our team proposed a Tournament Planning and Analyzing Software that had a limited built-in version of Log Analyzer which could be used only with our TPAS software. Therefore, we decided not only to improve Log Analyzer, but also to release a standalone version with new features. Both Log Analyzer and TPAS source codes and instructions are provided on GitHub. Log Analyzer is cross-platform and can be used on all operating systems, as it only takes game logs as input.\\ Log Analyzer has a built-in parser which is capable of parsing rcl and rcg log files, and it can return both Python objects and text files which contain match facts and other information that can be extracted from the game. The resulting information includes, but is not limited to, the numbers of shoots, passes, tackles, catches (by goalkeepers), turns, kicks, and so on. In some cases, like shoots or passes, it can also distinguish correct actions from wrong ones, and it calculates the accuracy of the action, which is quite useful. Furthermore, we can define a certain region of the field and get the number of each event there, which gives developers a free hand to focus on specific regions. It can also calculate all parameters individually for each player, which helps determine whether a target player performs well enough and which players play crucial roles in the game. The accuracy of action detection is almost 99 percent for most actions like shoot and pass. Another important statistic is possession, which indicates the percentage of ball possession for each team and player in different regions of the field. 
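As a sketch of how such a region-restricted possession share could be accumulated from parsed play-by-play data, consider the following. The `(cycle, team, x, y)` event format here is a hypothetical stand-in, not Log Analyzer's actual object model.

```python
def region_possession(events, region):
    """Ball-possession share per team inside a rectangular field region.
    `events` is a hypothetical parsed play-by-play: (cycle, team, x, y)
    tuples naming which team controls the ball and where."""
    (x_min, x_max), (y_min, y_max) = region
    counts = {}
    for _cycle, team, x, y in events:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            counts[team] = counts.get(team, 0) + 1
    total = sum(counts.values())
    # Return each team's fraction of in-region possession cycles.
    return {team: c / total for team, c in counts.items()} if total else {}
```

For example, restricting the region to one penalty area gives each team's share of ball control there.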
This greatly helps experts balance their defense, midfield, and offense regions, and much other information and knowledge can be gained from such data. It should be mentioned that Log Analyzer can be used either as a Python module or as a script to get reports, which again can be either Python objects or text files. \section{World Model Viewer (WMV)} The 2D Soccer Simulation environment is partially observable and agents receive noisy data from their observations. To check the difference between real and observed data, we typically have to print out the agent's data and read text outputs. A faster and more convenient way is to visualize the agents' observations, which was not possible with previous tools in the community. 2D Soccer Simulation teams usually use the Logplayer software to watch game log files, in which what they see is based on the real data, not on what the agents have access to. In order to view agents' world models without reinventing the wheel, we developed a software package that uses agents' world model data to generate a new log file for each player; we can then use Logplayer to see each player's belief state in each cycle of the game. \\ World Model Viewer (WMV) uses the Namira Log Analyzer Python module, Logplayer, and some other Python scripts to provide the desired data, namely the players' world models. We embedded some code in the agent's code to output its world model to a dedicated file and run a Python script to convert the output data into a readable file for Logplayer. Then, we can watch the game through the eyes of each agent by running its data file in Logplayer. \\ This method has its own pros and cons. We can use Logplayer's special features to see object details such as agents' velocity, position, size, type, etc. in a visual manner, which is preferable most of the time. Nevertheless, we generate a log file for each agent, 11 files in total, and we can only use this tool after the game is finished. 
We currently substitute real data when an agent has no data for a cycle, and we are working on making this more convenient for users. Figure~\ref{fig:exampleFig4} depicts an agent's world model compared to real data in an arbitrary cycle of the match. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{tools.eps} \caption{Difference between real and world model data for an agent} \label{fig:exampleFig4} \end{figure} \section{Conclusion and Future Work} This paper was a brief description of our scientific and technical efforts as the Namira 2D Soccer Simulation team. We described localization enhancement using Kalman Filtering and two novel software packages, Log Analyzer and World Model Viewer (WMV), which are freely accessible on GitHub. We are going to integrate our new software with the software we introduced in previous competitions and publish them as a package in the near future. We also need to improve our WMV software to provide an online view of the agent's world model in Logplayer. Implementing learning algorithms for behavior modelling and detection is another goal of the team for future competitions. \bibliographystyle{unsrt}
\section{Introduction} Sports tournaments are very popular events all over the world. A huge amount of money is involved in organizing these tournaments, and a lot of revenue is generated by selling tickets and broadcasting rights. Scheduling the matches is a very important aspect of these tournaments. A schedule specifies the sequence of matches played by each participating team along with the venues. In a Double Round-robin Tournament, every pair of participating teams plays exactly two matches, one at each of their home venues. The Traveling Tournament Problem (TTP) asks for a double round-robin schedule minimizing the total travel distance of the participating teams. The fairness condition imposes an upper bound $k$ on the maximum number of consecutive home or away matches played by any team. TTP was first introduced by Easton, Nemhauser, and Trick~\cite{easton2001traveling}. The problem bears some similarity with the Traveling Salesman Problem (TSP). In fact, a reduction of the unconstrained version of TTP (or $k=\infty$) from TSP was shown by Bhattacharyya \cite{bhattacharyya2016complexity}, proving the basic problem to be NP-hard. When the maximum permissible length of consecutive home or away matches is set to $3$, TTP was proven to be NP-Hard as well by Thielen and Westphal \cite{thielen2011complexity}. In this work, we ask the natural question: \emph{Is TTP NP-Hard for any fixed $k>3$}? \subsection{Problem Definition} Let $T$ be the set of teams with $|T|=n$. Let $\Delta$ be a square matrix of dimension $n \times n$ whose element in the $i^{th}$ row and $j^{th}$ column corresponds to the distance between the home venues of the $i^{th}$ and $j^{th}$ teams in $T$ for all $i,j\leq|T|$. $\Delta$ is symmetric with diagonal terms $0$, and all the distances in $\Delta$ satisfy the triangle inequality. 
\begin{definition} \textbf{Decision Version of TTP-$k$:} For a fixed natural number $k$, an even integer $n$, a given set of teams $T$ with $|T|=n$, a mutual distance matrix ($\Delta$), and a rational number $\delta$, is it possible to schedule a double round-robin tournament such that the total travel distance of the tournament is less than or equal to $\delta$, where no team has more than $k$ consecutive away matches or consecutive home matches and no team plays consecutive matches against the same opponent? \end{definition} \subsection{Previous Work} The Traveling Tournament Problem was first introduced by Easton, Nemhauser, and Trick~\cite{easton2001traveling}. Since then, most work has focused on finding approximation algorithms or heuristics for $k=3$ \cite{benoist2001lagrange,easton2002solving,anagnostopoulos2006simulated,di2007composite, miyashiro2012approximation}. Thielen and Westphal \cite{thielen2011complexity} proved NP-hardness for TTP-3. In a series of two papers \cite{imahori2010approximation,imahori20142}, Imahori, Matsui, and Miyahiro gave approximation algorithms for the unconstrained TTP. Bhattacharyya \cite{bhattacharyya2016complexity} complemented this by proving the NP-hardness of unconstrained TTP. For other values of $k$, only upper bound results are known. Approximation algorithms for TTP-2 were given by Thielen and Westphal \cite{thielen2010approximating} and improved by Xiao and Kou \cite{xiao2016improved}. For TTP-$k$ with $k>3$, approximation algorithms are given by Yamaguchi et al. \cite{yamaguchi2011improved} and Westphal and Noparlik \cite{westphal20145}. Many different problems related to sports scheduling have been thoroughly analyzed in the past. For detailed surveys and studies on round-robin tournament scheduling, the reader is referred to \cite{kendall2010scheduling,rasmussen2008round,trick1999challenge}. Graph-theoretic approaches for solving scheduling problems can be found in \cite{de1981scheduling,de1988some}. 
\subsection{Approach towards the Problem} In this work, we generalize the approach of Thielen and Westphal \cite{thielen2011complexity}, who showed the NP-Hardness of TTP-$3$. Like them, we show a reduction from the satisfiability problem. While \cite{thielen2011complexity} showed a reduction from $3$-SAT, we show a reduction from $k$-SAT here. However, the reduction shown here differs in a few crucial aspects. Firstly, the construction of the reduced TTP instance graph is different from that in~\cite{thielen2011complexity}. In order to accommodate trips of length $k$, a new graph in terms of vertices and edge weights is required. In addition, the trip structures and their synchronous assembly into the tournament schedule are different from those of TTP-$3$. The reconstruction of $k$-SAT with specific properties of clauses requires a different technique. \subsection{Result} Our main theorem is the following. \begin{theorem} \label{thm:main} TTP-$k$ for a fixed $k>3$ is strongly NP-Complete. \end{theorem} \section{Proof of Theorem~\ref{thm:main}} This reduction requires that the input instance of the satisfiability problem satisfies certain properties. The first step is to show that \emph{any} input instance can be transformed into an instance with the properties required for the reduction. The following notation is used throughout the paper. If $x\in\bool$ is a boolean variable, $\bar{x}$ denotes its complement, $\bar{x}=1\oplus x$. \begin{lemma} \label{thm:satmod} For a $k$-SAT instance $F_k$ with $t$ variables and $p$ clauses, there exists another $k$-SAT instance $F'_k$ with $t'$ variables and $p'$ clauses such that a satisfying assignment of $F_k$ implies a satisfying assignment of the variables of $F'_k$ and vice-versa. Moreover, the numbers of occurrences of all the $t'$ variables and their complements are equal in $F'_k$, $t'\leq \left(t+\frac{k+1}{2}\right)$, and $k\mid p'$, with $k,t,t',p,p' \in \mathbb{N}$. 
\end{lemma} \begin{proof} Let $x_i$ and $\bar{x}_i$ ($i \in \{1,\dots,t\}$) be the variables in $F_k$. Suppose $n_i$ and $\bar{n}_i$ are the numbers of occurrences of $x_i$ and $\bar{x}_i$ in $F_k$, respectively. Without loss of generality, it can be assumed that $n_i \leq \bar{n}_i$. To make $n_i=\bar{n}_i$, a few clauses are added to $F_k$, depending on whether $k$ is even or odd. \begin{enumerate} \item When $k$ is odd, clauses of the form $(x_i \lor x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k-1}{2}} \lor \bar{x}_{t+\frac{k-1}{2}})$ are added until $n_i=\bar{n}_i$ $\forall i \in \{1,\dots,t\}$, introducing $(k-1)/2$ new variables. After adding these clauses, the number of clauses is even due to the \textit{Handshaking Lemma}. Then, by adding at most $(k-1)/2$ pairs of clauses of the form $(x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k-1}{2}} \lor \bar{x}_{t+\frac{k-1}{2}} \lor x_{t+\frac{k+1}{2}})$ and $(x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k-1}{2}} \lor \bar{x}_{t+\frac{k-1}{2}} \lor \bar{x}_{t+\frac{k+1}{2}})$, the number of clauses can be made divisible by $k$ while keeping $n_i=\bar{n}_i, \forall i \in \{1,\dots,t+\frac{k+1}{2}\}$. \item When $k$ is even, by the \textit{Handshaking Lemma} there exist an even number of indices $j \in \{1,\dots,t\}$ such that $(n_j+\bar{n}_j)$ is odd, so $|n_j-\bar{n}_j|$ is odd. Without loss of generality, it can be assumed that $\bar{n}_j>n_j$. Identifying two indices of this kind, say $m$ and $n$, clauses of the form $(x_m \lor x_n \lor x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k}{2}-1} \lor \bar{x}_{t+\frac{k}{2}-1})$ are added until $|n_i-\bar{n}_i|$ is even $\forall i \in \{1,\dots,t\}$, introducing $(k/2-1)$ new variables. 
Now, $\forall j \in \{1,\dots,t\}$ such that $n_j \neq \bar{n}_j$, pairs of clauses of the forms $(x_j \lor x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k}{2}-1} \lor \bar{x}_{t+\frac{k}{2}-1} \lor x_{t+\frac{k}{2}})$ and $(x_j \lor x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k}{2}-1} \lor \bar{x}_{t+\frac{k}{2}-1} \lor \bar{x}_{t+\frac{k}{2}})$ are added until $n_j=\bar{n}_j$ $\forall j \in \{1,\dots,t\}$. Then, by adding at most $(k-1)$ clauses of the form $(x_{t+1} \lor \bar{x}_{t+1} \lor \dots \lor x_{t+\frac{k}{2}} \lor \bar{x}_{t+\frac{k}{2}})$, the number of clauses can be made divisible by $k$ while keeping $n_i=\bar{n}_i$ $\forall i \in \{1,\dots,t+\frac{k}{2}\}$. \end{enumerate} Let the resulting \textit{k-SAT} expression be $F'_k$ with $t'$ variables and $p'$ clauses, where $k\vert p'$. In both the cases explained above, the additional clauses always evaluate to true. So, $F'_k$ has a satisfying assignment of the variables $x_i$ for all $i \in \{1,\dots,t'\}$ if and only if $F_k$ has a satisfying assignment of the variables $x_i$ $\forall i \in \{1,\dots,t\}$. This proves the lemma. \end{proof} \subsection{TTP-$k$ is NP-Complete} The first step is to show that TTP-$k$ is indeed in NP. \begin{lemma} TTP-$k$ is in NP. \end{lemma} \begin{proof} In the decision version of \textit{TTP-$k$} with a given set $T$ of $n$ teams and the constraint $k$, it is verifiable in $O(n^2)$ time whether a schedule is valid and gives a total travel distance of at most $\delta$. This ensures the membership of \textit{TTP-$k$} in \textit{NP}. \end{proof} Next, a reduction from \textit{$k$-SAT} to \textit{TTP}-$k$ is shown. 
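The clause-padding of Lemma~\ref{thm:satmod} for the odd-$k$ case can be sketched as follows. This is a simplified illustration only: literals are signed integers, clauses are lists, and no attempt is made to match the lemma's exact clause counts.

```python
def balance_ksat(clauses, t, k):
    """Pad a k-SAT instance (odd k) so that every variable occurs
    equally often positively and negatively and the clause count is
    divisible by k.  Literals are signed integers: +i means x_i and
    -i means its complement."""
    assert k % 2 == 1
    # k-1 tautological padding literals over (k-1)/2 fresh variables.
    pad = [s * (t + j) for j in range(1, (k - 1) // 2 + 1) for s in (1, -1)]
    clauses = [list(c) for c in clauses]
    for i in range(1, t + 1):
        pos = sum(c.count(i) for c in clauses)
        neg = sum(c.count(-i) for c in clauses)
        lit = i if pos < neg else -i        # pad with the rarer literal
        for _ in range(abs(pos - neg)):
            clauses.append([lit] + pad)     # k literals in total
    extra = t + (k + 1) // 2                # one more fresh variable
    while len(clauses) % k != 0:            # pad in balanced pairs
        clauses.append(pad + [extra])
        clauses.append(pad + [-extra])
    return clauses
```

Every added clause contains a complementary pair of literals and is therefore always satisfied, so satisfiability is preserved, as in the lemma.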
For this, a special weighted graph $G=(V,E)$ is constructed, with one or more teams situated at each vertex of $G$ and a predefined value $\delta$ of total travel distance, such that there is a satisfying assignment of variables for the \textit{k-SAT} instance if and only if there is a \textit{TTP}-$k$ schedule between the teams in $G$ with total travel distance at most $\delta$. First, the input \textit{$k$-SAT} problem is modified using Lemma~\ref{thm:satmod} such that the resulting formula has the following properties. \begin{enumerate}[label=(\alph*)] \item There are $t$ variables $x_1, x_2,\dots, x_t$. \item The number of occurrences of $x_i$ is equal to the number of occurrences of $\bar{x}_i$: $n_i=\bar{n}_i$. \item The number of clauses $p$ is divisible by $k$: $k\vert p$. \end{enumerate} \subsubsection{The Construction} We start with the construction of the reduced instance graph $G$. Recall that $k$ is the upper bound on the number of consecutive home or away matches, and $n_i$ is the number of occurrences of the variable $x_i$. The main part of the graph is the union of $t$ sub-graphs $G_1,G_2,\cdots,G_t$, where $t$ is the number of variables in the (modified) input SAT instance. Each $G_i$ consists of $(k+1)n_i$ vertices. \begin{enumerate}[label=(\alph*)] \item For $j \in \{1,\dots,n_i\}$, there are $n_i$ vertices denoted by $x_{i,j}$ and $n_i$ vertices denoted by $\bar{x}_{i,j}$. \item For $j \in \{1,\dots,n_i\}$, the vertices $y_{i,j}$ form $n_i$ further vertices. \item For $j \in \{1,\dots,n_i\}$ and $l\in \{1,\cdots,k-2\}$, the vertices $w_{i,j}^l$ are the remaining $(k-2)n_i$ vertices. \end{enumerate} In addition, there are $(k-1)p+1$ vertices in the graph, where $p$ is the number of clauses of the (modified) input SAT instance. There is a central vertex $v$. Then there are $p$ vertices $c_m$ and $(k-2)p$ vertices $z_m^l$, for all $l \in \{1,\dots,k-2\}$ and all $m \in \{1,\dots,p\}$. 
For ease of explanation, we summarize the important parameters related to $G$. The total number of vertices in $G$ is $\left[\left(\sum_{i=1}^t(k+1)n_i\right)+(k-1)p+1\right]$. We also know that $\sum_{i=1}^{t}2n_i=kp$. So, the total number of vertices other than $v$ in $G$ is $\left(\frac{k(k+1)}{2}+k-1\right)p$, which we denote by $a$: \begin{equation*} a=\left(\frac{k(k+1)}{2}+k-1\right)p=p\left(\frac{k^2+3k-2}{2}\right) \end{equation*} \textsc{Weights of the edges}. Let $M=\Theta(a^5)$. First, weight $M$ is assigned to the edges from $v$ to the vertices $x_{i,j}$ and $\bar{x}_{i,j}$, weight $(M-2)$ to the edges from $v$ to $y_{i,j}$, and weight $(M-2k+4)$ to the edges from $v$ to $w^{k-2}_{i,j}$ of $G_i$, for every $j\in\{1,2,\dots,n_i\}$. Then $x_{i,j}$ is connected with $w^{k-2}_{i,j}$ through $k-3$ vertices serially, namely $w^{1}_{i,j},w^{2}_{i,j},\dots,w^{k-3}_{i,j}$, where each pair of consecutive vertices in this serial connection is connected by an edge of weight $2$. Then $\bar{x}_{i,j}$ is connected to $w^{1}_{i,q}$ by an edge of weight $2$, where $q=(j \bmod n_i)+1$, and $y_{i,j}$ is connected to both $x_{i,j}$ and $\bar{x}_{i,j}$ by edges of weight $2$ each, as described in Figure \ref{subgraph_1}. For the connection between the remaining vertices in $G$, first $x_{i,j}$ or $\bar{x}_{i,j}$ is connected to $c_m$ by an edge of weight $2$ if $x_i$ or $\bar{x}_i$ has its $j^{th}$ occurrence in the $m^{th}$ clause of the modified \textit{k-SAT} instance. Then $c_m$ is connected with $z^{k-2}_m$ through $k-3$ vertices serially, namely $z^1_m,z^2_m,\dots,z^{k-3}_m$, where each pair of consecutive vertices in this serial connection is connected by an edge of weight $2$. At last, $z^{k-2}_m$ and $c_m$ are connected to $v$ by edges of weights $(M-2k+6)$ and $(M+2)$, respectively, as described in Figure \ref{subgraph_2}. 
Formally, the weights of all the edges of $G$ are listed as follows: \begin{itemize} \item Weight$(w_{i,j}^r, w_{i,j}^s)=2|r-s|$ for all $i \in \{1,\dots,t\}$, $j \in \{1,\dots,n_i\}$, and $r,s \in \{1,\dots,k-2\}$. \item Weight$(x_{i,j}, w_{i,j}^{s})=2s$ for all $i \in \{1,\dots,t\}$, $j \in \{1,\dots,n_i\}$, and $s \in \{1,\dots,k-2\}$. \item Weight$(\bar{x}_{i,j},w_{i,q}^{s})=2s$ for all $i \in \{1,\dots,t\}$, $j \in \{1,\dots,n_i\}$, and $s \in \{1,\dots,k-2\}$, where $q=(j \bmod n_i)+1$. \item Weight$(x_{i,j}, y_{i,j})=$ Weight$(\bar{x}_{i,j}, y_{i,j})=2$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. \item Weight$(c_m,z_m^r)=2r$ for all $r \in \{1,\dots,k-2\}$ and $m \in \{1,\dots,p\}$. \item Weight$(z_m^r,z_m^s)=2|r-s|$ for all $r,s \in \{1,\dots,k-2\}$ and $m \in \{1,\dots,p\}$. \item Weight$(x_{i,j},c_m)=2$ if the $j^{th}$ occurrence of $x_i$ is present in the $m^{th}$ clause of the given \textit{k-SAT} expression. \item Weight$(\bar{x}_{i,j},c_m)=2$ if the $j^{th}$ occurrence of $\bar{x}_i$ is present in the $m^{th}$ clause of the given \textit{k-SAT} expression. \item Weight$(v,w_{i,j}^{k-2})=M-2k+4$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. \item Weight$(v,y_{i,j})=M-2$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. \item Weight$(v,x_{i,j})=M$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. \item Weight$(v,\bar{x}_{i,j})=M$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. \item Weight$(v,z_m^{k-2})=M-2k+6$ for all $m \in \{1,\dots,p\}$. \item Weight$(v,c_m)=M+2$ for all $m \in \{1,\dots,p\}$. \end{itemize} All other edges in the complete graph $G$ are given the maximum possible weights without violating the triangle inequality. \noindent\textsc{Creating the TTP-$k$ instance.} Now the teams are placed on the vertices of $G$ to construct the reduced instance. \begin{itemize} \item The total number of teams is $a^3+a$. 
\item At each vertex of $G$ except $v$, only one team is placed. This set of vertices, or teams, is denoted by $U$. \item $a^3$ teams are situated at $v$, the distance between them is $0$, and we call this set of vertices, or teams, $T$. \end{itemize} \begin{figure}[htb] \begin{center} \includegraphics[height=5in,width=4in,angle=0]{firstpic} \caption{Sub-graph of $G$ for the $i^{th}$ variable, where $d_1=M-2k+4$} \label{subgraph_1} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[height=5in,width=4in,angle=0]{secondfig} \caption{Sub-graph corresponding to the first clause of the form $C_1=(x_1 \lor \bar{x}_2 \lor \dots)$, where $d_2=M-2k+6$} \label{subgraph_2} \end{center} \end{figure} We fix \begin{equation*} \delta = M\left[\frac{4a^4}{k}\right]+pa^3(k^2-3k+6)+2a(a-1)k+\frac{4a^4}{k} \end{equation*} \begin{lemma} The weights of the edges of $G$ preserve the triangle inequality. \end{lemma} \begin{proof} For a tuple $(v,x_{i,j},y_{i,j})$ or $(v,\bar{x}_{i,j},y_{i,j})$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$, the triangle inequality is preserved, as \begin{eqnarray*} Weight(v,x_{i,j})=M=Weight(x_{i,j},y_{i,j})+Weight(v,y_{i,j})=2+(M-2)\\ Weight(v,\bar{x}_{i,j})=M=Weight(\bar{x}_{i,j},y_{i,j})+Weight(v,y_{i,j})=2+(M-2) \end{eqnarray*} For a tuple $(v,x_{i,j},w^{1}_{i,j})$ or $(v,\bar{x}_{i,j},w^{1}_{i,j})$ for all $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$, the triangle inequality is preserved, as \begin{eqnarray*} Weight(v,x_{i,j})=M=Weight(x_{i,j},w^{1}_{i,j})+Weight(v,w^{1}_{i,j})=2+(M-2)\\ Weight(v,\bar{x}_{i,j})=M=Weight(\bar{x}_{i,j},w^{1}_{i,j})+Weight(v,w^{1}_{i,j})=2+(M-2) \end{eqnarray*} For a tuple $(v,x_{i,j},c_m)$ or $(v,\bar{x}_{i,j},c_m)$, where the $j^{th}$ occurrence of $x_i$ or $\bar{x}_i$ is present in the $m^{th}$ clause of the given \textit{k-SAT} expression, the triangle inequality is preserved, as \begin{eqnarray*} Weight(v,c_m)=M+2=Weight(x_{i,j},c_m)+Weight(v,x_{i,j})=2+M\\ 
Weight(v,c_m)=M+2=Weight(\bar{x}_{i,j},c_m)+Weight(v,\bar{x}_{i,j})=2+M \end{eqnarray*} The triangle inequality for every other tuple of three vertices in $G$ follows from the above three cases, as the remaining weights are assigned the maximum possible values without violating the triangle inequality. \end{proof} \subsection{The Reduction} Having fixed the desired value of $\delta$, the only remaining part is the reduction. First, it is shown that a given satisfying assignment of variables in the \textit{k-SAT} instance implies a \textit{TTP-k} schedule of total travel distance at most $\delta$. The tours for the $a^3$ teams situated at $v$ are constructed, and it is also shown that these tours are so cheap in terms of travel distance that tours of similar structure must appear in any tournament whose total travel distance is to be as low as possible. So, $\frac{a}{k}$ node-disjoint tours are constructed for a team at $v$, in which all the vertices $x_{i,j}, \bar{x}_{i,j}, y_{i,j}, w^r_{i,j}, c_m, z^r_m$ are visited. First, observe that there is a satisfying assignment of the variables of the \textit{k-SAT} instance. Let us define two conditions via the value of a variable $b_m$ in the following manner: \begin{equation*} b_m=1 \implies \exists i \in \{1,\dots,t\}\mbox{ such that } x_i=1 \ \& \ x_i \ \mbox{appears in the} \ m^{th} \mbox{clause}. \end{equation*} \begin{equation*} b_m=0 \implies \exists i \in \{1,\dots,t\} \mbox{ such that } x_i=0 \ \& \ \bar{x}_i \ \mbox{appears in the} \ m^{th} \mbox{clause}. \end{equation*} For all $m \in \{1,\dots,p\}$, if $b_m=1$ then Weight$(x_{i,j},c_m)=2$. Now, to visit a team at $c_m$ only, a team at $v$ has to travel a distance of $2(M+2)$. But if it travels to a vertex $x_{i,j}$, then to $c_m$, then through $z_m^1$ to $z_m^{k-2}$, and returns to $v$, it travels the same distance, $2(M+2)$, and also visits $k$ vertices in a single trip, which is the desired situation here. 
Afterwards, in another trip to a vertex $\bar{x}_{i,j}$: instead of directly traveling to and from $\bar{x}_{i,j}$ only, i.e., a distance of $2M$, if the team travels first to $w_{i,q}^{k-2}$, then through all $w_{i,q}^s$ for $q=(j \bmod n_i)+1$ and $s \in \{1,\dots,k-3\}$ to $\bar{x}_{i,j}$ and $y_{i,j}$, and returns to $v$, the travel distance is the same $2M$, and again $k$ vertices are visited in a single trip. Multiple trips to these extra $(k-1)$ vertices would cost much more in comparison. Similar tours are taken when $b_m=0$, only interchanging $x_{i,j}$ and $y_{i,j}$ with $\bar{x}_{i,q}$ and $y_{i,q}$, respectively. This leaves $(k-2)p/2$ vertices of type $x_{i,j}$ to visit. As $k\vert p$, this can be done in $p(k-2)/2k$ trips, each of length $k$ and travel distance less than $2(M+k(k-1))$. So the total distance traveled by a team situated at $v$ is upper bounded by $\delta_1$, where \begin{eqnarray*} \delta_1&=&2(M+2)p+Mkp+M(k-2)p/k+(k-1)(k-2)p\\ &=&p\left[M\left(k+3-\frac{2}{k}\right)+k^2-3k+6\right] \end{eqnarray*} The number of tours involving distance $M$ is minimized by covering exactly $k$ vertices in each of the tours. Now these tours can be numbered from $1$ to $\frac{a}{k}$, and the vertices in each tour can be numbered from $1$ to $k$. So, all the $a$ vertices of $U$ are named $u_{i,j}$ for all $i \in \{1,\dots,k\}$ and $j \in \{1,\dots,\frac{a}{k}\}$, such that $u_{i,j}$ is the $i^{th}$ visited team of the $j^{th}$ tour. Also, the vertices in $T$ are partitioned into $a^2$ disjoint sets $T_1, T_2, \dots, T_{a^2}$, each of size $a$. Moreover, $T_q=\{t_{r,q}$ such that $ r \in \{1,\dots,a\}\}$. The tours by the teams in $T$ are now designed in such a way that $t_{1,1}$ takes tour number $1$, i.e., travels through $u_{1,1},u_{2,1},\dots,u_{k,1}$. Then $u_{1,1},u_{2,1},\dots,u_{k,1}$ visit $t_{1,1}$ in the same order. Similarly, for all $i \in \{2,\dots,k\}$, $t_{i,1}$ follows the same tour and is visited by the same teams as $t_{1,1}$, but with a time delay of $(i-1)$. 
This way, all the teams in $T_1$ first complete their visits to all the teams in tour $1$ and then get visited by them. Then $T_1$ starts tour $2$. In this way, the teams in $T_1$ play the teams in tours $1$ to $\frac{a}{2k}$ such that the teams in $T_1$ visit first and then get visited. Similarly, the teams in $T_q$ for all $q \in \{1,\dots,\frac{a^2}{2}\}$ follow a travel pattern like that of $T_1$. The matches between the teams in $T_q$ for all $q \in \{\frac{a^2}{2}+1,\dots,a^2\}$ and the teams in $U$ that are in tour $j$, for all $j \in \{\frac{a}{2k}+1,\dots,\frac{a}{k}\}$, are also played in a similar fashion, with the change that the teams in $U$ visit the teams in $T$ first and then get visited by the teams in $T$. This way, the sets $T$ and $U$ are both divided into two parts according to the teams they have finished playing. Formally, \begin{equation*} T_a=\bigcup\limits_{q=1}^{a^2/2} T_q \qquad T_b=\bigcup\limits_{q=a^2/2+1}^{a^2} T_q \qquad U_a=\bigcup\limits_{j=1}^{a/2k} S_j \qquad U_b=\bigcup\limits_{j=a/2k+1}^{a/k} S_j \end{equation*} Afterwards, by exchanging the roles of $T_a$ and $T_b$, matches between the teams in $T_a$ and $U_b$ are arranged, and similarly between the teams in $T_b$ and $U_a$. So, the main remaining part is the schedule of matches among the teams in $U$ and among the teams in $T$. For scheduling the matches among the teams in $U$, the teams in $U$ are categorized into $k$ categories depending on their position in the tours, i.e., for all $i \in \{1,\dots,k\}$, $U_i=\{u_{i,j}$ such that $j \in \{1,\dots,\frac{a}{k}\}\}$. For each $i$, the teams in $U_i$ play against the teams in $T$ in the same slots, but half of them play at home while the other half play away. More specifically, the teams in $U_i$ play the teams in $T$ exactly in the slots $(2i-1)$ to $(2a^3+2i-2)$. Keeping these busy slots in mind, the schedule of a single round-robin tournament among the teams in $U$ is designed using the canonical tournament introduced by de Werra \cite{de1988some}.
This is done by assigning a vertex to each of the teams in a $U_i$ in a special graph, as done in the canonical tournament design. This tournament structure ensures that each team plays every other team exactly once and that no team has a long sequence of home or away matches. Here, a match between $i$ and $j$ signifies a match between a team in $U_i$ and a team in $U_j$. At the end, the same tournament is repeated with the match venues swapped, i.e. nullifying the home-field advantage. Now the only remaining part is the scheduling of the matches among the teams in $T$. For all $t \in T$, let $d(t)$ denote the first slot in which team $t$ plays a team in $U$, and let $T_i=\{t \in T $ such that $ ((d(t)-1)\mbox{ mod } k)+1=i\}$ for all $i \in \{1,\dots,k\}$. Then every $T_i$ is partitioned into $2a^2$ groups of cardinality $\frac{a}{2k}$ such that $d(t_1)=d(t_2)$ for every two members $t_1,t_2$ of the same group. For every $T_i$, the matches between teams in different groups of $T_i$ are scheduled. Among the teams in $T_i$, $\frac{a}{k}$ will always be busy playing some teams of $U$. To handle this fact, two dummy groups $U_1$ and $U_2$, as defined before, are introduced that play with these busy $\frac{a}{k}$ teams of $T_i$. Then each group is treated as a team, and the canonical tournament structure is applied again, only skipping the day on which the two dummy groups meet. To schedule the matches between the members of two groups $g$ and $h$ of $T_i$ in $l$ rounds, where \begin{equation*} g=\{g_1,g_2,\dots,g_l\} \qquad h=\{h_1,h_2,\dots,h_l\} \end{equation*} the following is done. The $i^{th}$ round contains the matches between $h_j$ and $g_{((i+j) \mod l)+1}$ for all $i,j \in \{1,2,\dots,l\}$, with the game taking place at the home of $h_j$. This prevents long away trips and home stands for all the teams. Then the same schedule is repeated with the venues swapped.
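The pairing rule above lends itself to a direct check; a minimal sketch (1-indexed as in the text; the function name is our own) verifying that each round is a perfect matching between $h$ and $g$, and that every pair $(h_j,g_m)$ meets exactly once over the $l$ rounds:

```python
def group_rounds(l):
    """Round i pairs h_j with g_{((i+j) mod l) + 1}, for i, j in {1, ..., l}."""
    rounds = []
    for i in range(1, l + 1):
        rounds.append([(j, ((i + j) % l) + 1) for j in range(1, l + 1)])
    return rounds

# Each round is a perfect matching, and every (h_j, g_m) pair occurs once.
l = 4
seen = {}
for matches in group_rounds(l):
    assert sorted(g for _, g in matches) == list(range(1, l + 1))
    for pair in matches:
        seen[pair] = seen.get(pair, 0) + 1
assert len(seen) == l * l and all(c == 1 for c in seen.values())
```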
Matches between the teams of a group of $T_i$ and the teams of a dummy group have already been taken care of when the matches between $U$ and $T$ were designed. Now, in the scenario where the two dummy groups meet, there are two kinds of encounters, which differ in length. The encounter between two groups consists of $\frac{a}{k}$ slots, while the encounter between the two dummy groups, i.e. $U_1$ and $U_2$, consists of $a$ slots. The encounters between the groups of $T_i$ are scheduled in the usual way using $\frac{a}{k}$ slots, and the extra $\frac{(k-1)a}{k}$ slots are used to schedule matches between the teams in different $T_i$'s. To schedule matches between the teams in different $T_i$'s, each $T_i$ is first partitioned into two parts of equal size, namely $T_{i,1}$ and $T_{i,2}$. Now, considering each $T_{i,j}$ as a single team for all $i \in \{1,2,\dots,k\}$ and all $j \in \{1,2\}$, a canonical tournament is again applied to these teams, skipping the day on which $T_{i,1}$ encounters $T_{i,2}$ for all $i$. This scenario can be achieved by properly initializing the canonical tournament. Then the same schedule is repeated with the venues swapped to nullify the home advantage, as done in all the earlier canonical tournament structures. For a team situated at one of the vertices in $E \setminus \{v\}$ to visit the teams at $v$, it has to travel at most $\frac{2(M+2)a^3}{k}$ using $\frac{a^3}{k}$ trips of length $k$. So, the total travel distance of all the teams in $E \setminus \{v\}$ to the teams at $v$ can be bounded by $\delta_2$, where \begin{equation*} \delta_2 = \frac{2(M+2)a^4}{k} \end{equation*} As all the distances between the vertices in $E \setminus \{v\}$ in $G$ are less than or equal to $2k$, the travel between all the teams in $E \setminus \{v\}$ can be bounded above by $\delta_3$, where \begin{equation*} \delta_3 =2a(a-1)k \end{equation*} The teams in $T$ visit each other at zero cost, as they are situated at the same point.
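The algebra combining $\delta_1$, $\delta_2$ and $\delta_3$ can be spot-checked numerically; a small sketch (the sample values of $a$, $p$, $k$ and $M$ are our own, chosen only for illustration):

```python
# Spot-check: delta_1 * a^3 + delta_2 + delta_3 matches the expanded form
# of delta used in the reduction (sample parameter values only).
a, p, k, M = 96.0, 8.0, 4.0, 1e6

delta_1 = 2 * (M + 2) * p + M * k * p + M * (k - 2) * p / k + (k - 1) * (k - 2) * p
# delta_1 simplifies to p [ M (k + 3 - 2/k) + k^2 - 3k + 6 ]
assert abs(delta_1 - p * (M * (k + 3 - 2 / k) + k**2 - 3 * k + 6)) < 1e-6

delta_2 = 2 * (M + 2) * a**4 / k
delta_3 = 2 * a * (a - 1) * k

total = delta_1 * a**3 + delta_2 + delta_3
closed = (M * a**3 * (p * (k + 3 - 2 / k) + 2 * a / k)
          + p * a**3 * (k**2 - 3 * k + 6)
          + 2 * a * (a - 1) * k + 4 * a**4 / k)
assert abs(total - closed) < 1e-3 * total
```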
So, the total travel distance of the tournament is bounded by \begin{eqnarray*} &&\delta_1 \cdot a^3 + \delta_2 + \delta_3\\ &&=a^{3}p\left[M\left(k+3-\frac{2}{k}\right)+k^2-3k+6\right]+\frac{2(M+2)a^4}{k}+2a(a-1)k\\ &&=\delta \end{eqnarray*} Equivalently, \begin{eqnarray*} \delta&&=Ma^3\left[p\left(k+3-\frac{2}{k}\right)+\frac{2a}{k}\right]+pa^3(k^2-3k+6)+2a(a-1)k+\frac{4a^4}{k}\\ &&=M\left[\frac{4a^4}{k}\right]+pa^3(k^2-3k+6)+2a(a-1)k+\frac{4a^4}{k} \end{eqnarray*} This completes the first direction of the proof. \subsection{Proof of the other direction} For the other direction of the proof, it has to be shown that the \textit{k-SAT} instance being unsatisfiable $\imply$ a \textit{TTP-k} schedule of total travel distance less than $\delta$ is not possible. Here it is assumed that there is no satisfying assignment of the variables of the \textit{k-SAT} instance. In the forward direction of the proof explained above, it was shown that the proposed schedule is compact and gives an optimized value of travel distance for the tours of the teams in $T$ to the teams in $U$. But these tours are designed based on a truth assignment of the $x$ variables. The trip to a vertex $c_m$ goes through an $x_{i,j}$ or $\bar{x}_{i,j}$ that appears in the $m^{th}$ clause of the \textit{k-SAT} instance and is assigned the value $1$. Also, the other variable among these two covers $y_{i,j}$ and the $w^s_{i,j}$s together in another tour. Assume that there exist optimum tours similar to those in the forward direction of the proof, although there is no satisfying assignment of the variables of the \textit{k-SAT} instance. So, there is an optimum path through each $c_m$, for all $m\in\{1,\dots,p\}$, that includes a vertex $x_{i,j}$ or $\bar{x}_{i,j}$ present in the $m^{th}$ clause of the \textit{k-SAT} expression. Now, if the value $1$ is assigned to each of these variables, then the result must be an inconsistent assignment, as otherwise it would contradict the assumption.
This implies that $x_{i,j}=\bar{x}_{i,j}=1$ has been assigned for some $i \in \{1,\dots,t\}$ and $j \in \{1,\dots,n_i\}$. That means both $x_{i,j}$ and $\bar{x}_{i,j}$ are included in optimized tours from $v$ through some $c_m$ and $c_{m'}$, where $m,m'\in\{1,\dots,p\}$. Now, it will not be possible to design a trip from $v$ that includes $y_{i,j},w^1_{i,j},\dots,w^{k-2}_{i,j}$ for all $i \in \{1,\dots,k\}$ and $j \in \{1,\dots,\frac{a}{k}\}$ using the optimized trips. So, it is not possible to cover all the $c_m$, $y_{i,j}$ and $w^s_{i,j}$s, along with all the $x_{i,j}$s for all $i,j$, using these optimized tours. To cover all the vertices in $U$, each of the $a^3$ teams of $T$ has to make at least one extra tour to the vertices of $U$. These extra tours increase the total travel distance by at least $2 \cdot M \cdot a^3$. Since the total travel distance of the teams in $U$ for matches among themselves is $O(a^2)$, the tours of the teams of $T$ to those of $U$ dominate the total travel distance of the tournament. So, an increase in this part increases the total distance significantly, and for $M$ being $\Theta(a^5)$, the total travel distance will be more than $\delta$. This completes the other direction of the proof and the reduction. \bibliographystyle{unsrt}
\section{Introduction} Team INPUT was formed in August 2018. The team is unique in that many of its members have prior experience of participating in the RoboCup Junior Soccer Open. Our team consists of members from three junior teams: CatPot, INPUT, and CatBot. These teams have participated in world competitions, and INPUT in particular won the RoboCup 2017 Nagoya (Fig. \ref{fig:junior}). All members are graduates of the National Institute of Technology, Nagaoka College (NITNC). INPUT is one of NAZE's projects from the Nagaoka area, where NAZE and NITNC are located. Nagaoka is a manufacturing town, and we develop robots in collaboration with companies in the Nagaoka area. The RoboCup Japan Open 2019, held in Nagaoka City, was INPUT's first official competition (Fig. \ref{fig:jo2019}). In the first game of the tournament, all eight robots worked and successfully passed the ball. We were able to win one game. \begin{figure}[htbp] \centering \begin{tabular}{c} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.95\linewidth]{figure/junior.pdf} \caption{World Championship in the Junior League.} \label{fig:junior} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.95\linewidth]{figure/jo2019.pdf} \caption{RoboCup Japan Open 2019, Nagaoka.} \label{fig:jo2019} \end{minipage} \end{tabular} \end{figure} \section{Introducing Robot Generation v2019} This chapter describes the configuration of the v2019 robot used in the RoboCup Japan Open 2019, Nagaoka. This robot was the first robot developed by team INPUT. Fig. \ref{fig:robot2019} shows an overall view of the v2019 robot. \subsection{Mechanism Design} The drive system of this robot is composed of four motor units (Fig. \ref{fig:dribeunit2019}). These mechanical units were designed with replaceability and ease of repair in mind, such that a unit can be removed by unscrewing just three bolts.
In this drive mechanism, the rotation is decelerated and transmitted by an internal gear attached to the omni-wheel. Using an internal gear makes it possible to reduce the distance between the motor and the rotation axis of the wheel, and thus to lower the mounting position of the motor and the center of gravity of the robot. The reduction ratio of the internal gear is approximately 3.33:1, and the maximum tire speed is 1557 rpm. This robot is equipped with a deformable dribble mechanism, as shown in Fig. \ref{fig:dribbler2019}. The dribble unit is composed of two parts: the motor and the roller. The roller part is movable around the rotation axis M; when the ball comes into contact with the roller, the impact is released by the movement of the roller. For the kick mechanism of this robot, we selected the Super Stroke Solenoid (S-1012SS, Shindengen Mechatronics Co., Ltd.), which has sufficient output and a long stroke. A Teflon sheet was attached to the bottom of the part that contacts the ball and was used as a guide (Fig. \ref{fig:kicker2019}).
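As a rough consistency check of these drive figures, the implied top ground speed can be computed; a sketch (the 80 mm v2019 wheel diameter is taken from the wheel comparison table later in the paper):

```python
import math

wheel_diameter_m = 0.080   # v2019 omni-wheel diameter
max_wheel_rpm = 1557       # maximum tire speed after the ~3.33:1 reduction

# Ground speed = circumference * revolutions per second.
top_speed = math.pi * wheel_diameter_m * max_wheel_rpm / 60.0
print(f"top ground speed ~ {top_speed:.2f} m/s")  # about 6.5 m/s
```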
\begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/robot2019.pdf} \caption{Components of robot v2019.} \label{fig:robot2019} \end{figure} \begin{figure}[ttbp] \centering \includegraphics[width=0.5\linewidth]{figure/driveunit2019.pdf} \caption{Drive unit.} \label{fig:dribeunit2019} \end{figure} \begin{figure}[htbp] \centering \begin{tabular}{c} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.95\linewidth]{figure/dribbler2019.pdf} \caption{Dribble mechanism.} \label{fig:dribbler2019} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=0.95\linewidth]{figure/kicker2019.pdf} \caption{Kick mechanism.} \label{fig:kicker2019} \end{minipage} \end{tabular} \end{figure} \subsection{Circuit} The circuit of this robot consists of five parts: the main board, the voltage boost circuit, the motor driver for the dribbler, the ball detector, and the Li-Po battery. The main board contains the motor drivers for the omni-wheels and the communication modules. The details are shown in Fig. \ref{fig:circuit2019} and Table \ref{tab:circuit2019}. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/circuit.png} \caption{Circuit components.} \label{fig:circuit2019} \end{figure} \begin{table}[htbp] \centering \caption{List of circuit components and their details} \label{tab:circuit2019} \begin{tabularx}{\linewidth}{X|X} \hline Device & Description \\\hline Main board (Fig. \ref{fig:circuit2019}a) & The CPU is an STM32F407VGT6 running at 215 MHz. \\ & It contains 4 ESCON50/5 motor drivers. \\ & An XBee3 is used for communication with the PC. \\ & A DP83848 Ethernet PHY module is included for using the Wi-Fi router. \\ \hline Booster (Fig. \ref{fig:circuit2019}b) & The 4-cell Li-Po battery voltage is boosted to 175 V for the kicker. \\ \hline Motor driver for dribbler (Fig. \ref{fig:circuit2019}c) & An Nch MOSFET (EKI04027) DC motor driver. \\ \hline Ball detector (Fig.
\ref{fig:circuit2019}d) & Photo IC with optical switch functions (S8119) and an infrared LED. \\ \hline 5 GHz Wi-Fi router & WRH-583BK2-S, IEEE 802.11abgn 2.4/5 GHz wireless LAN \\ \hline Li-Po battery (Fig. \ref{fig:circuit2019}e) & Hyperion LiPo 4-cell 1800 mAh \\ \hline \end{tabularx} \end{table} The STM32 firmware was developed using the hardware abstraction layer (HAL) library. An XBee 2.4 GHz band radio was used to communicate with the PC; when the XBee was unavailable, 5 GHz Wi-Fi was used instead. The drive motors do not have external encoders, and the ESCON is voltage-controlled; the CPU conveys the target speed of each motor to the ESCON through pulse width modulation (PWM) signals. The lack of external encoders is due to space constraints inside the robot. The dribbler motor is driven by a single FET and can rotate only in the direction of the ball roll. Since the dribbler does not need to rotate in the counterclockwise (CCW) direction, a very simple circuit that rotates it only in the clockwise (CW) direction is used. The voltage boost circuit of this robot is based on that of Team Roots \cite{roots}; it boosts the voltage up to 175 V using an LT3750. The kicking force is adjusted by the switching speed of the IGBT. A ball sensor, shown in red in Fig. \ref{fig:dribbler2019}, is mounted on the dribble mechanism and is used to determine whether the ball is ready to be kicked. A photo IC (S8119) and an infrared LED are used to detect the presence of the ball in the dribble mechanism. The communication protocol for XBee is shown in Fig. \ref{fig:protocol2019}. XBee operates in the AT mode with a baud rate of 115200 bps. The speeds $v_x,v_y,v_{\theta}$ are received as floats. The 5 GHz Wi-Fi link receives User Datagram Protocol (UDP) packets using FreeRTOS and LwIP.
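The comma-separated UDP payload can be parsed in a few lines; a minimal sketch (field order $v_x$, $v_y$, $v_{\theta}$, kicker power, dribbler power, as described in this section; the function name and value types are our own assumption):

```python
def parse_command(payload: str):
    """Split a comma-separated robot command string into named fields."""
    vx, vy, vtheta, kicker, dribbler = payload.split(",")
    return {
        "vx": float(vx), "vy": float(vy), "vtheta": float(vtheta),
        "kicker": int(kicker), "dribbler": int(dribbler),
    }

cmd = parse_command("0.50,-1.20,0.10,3,1")
assert cmd["vx"] == 0.5 and cmd["dribbler"] == 1
```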
The Wi-Fi router assigns a local IP address to each robot, and a comma-separated string is sent to the specified IP address (robot). The contents of the packet are $v_x,v_y,v_{\theta}$, kicker power, and dribbler power. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/protocol2019.pdf} \caption{Communication protocol for XBee.} \label{fig:protocol2019} \end{figure} \section{Introducing Robot Generation v2022} While the v2019 robot had sufficient functionality to work in a match, it had a high center of mass (COM) that caused the robot to nearly fall over when accelerating from a standstill. We developed v2022, which solves this problem through improved hardware. In the following sections, we explain the solutions. \subsection{Improved Wheels for a Lower COM of Robot} To lower the COM, we focused on enlarging the space close to the ground in the configuration of the robot (ground area), as shown in Fig. \ref{fig:robot2021}(a). Heavy objects must be placed at a lower height to lower the COM, and sufficient space in the lower area is needed to achieve this. However, because this area is at a height where the ball can easily be touched, devices that directly touch the ball (solenoids for kicking, sensors, etc.) as well as the wheel motors for driving are placed here. In some teams, the wheel drive motor is placed at the top for space reasons and gears are used to transmit rotation to the wheels, resulting in a high COM. Therefore, to reduce the COM, it is necessary to save space for these devices. Hence, we first made the omni-wheel thinner to increase the available space. The bottleneck in making a thinner wheel lies in fastening the wheel to the motor shaft. Many teams insert a screw from the side of the wheel and press it against the shaft to fix the wheel.
However, the thinner the wheel, the less fastening force is available, because screws with a larger diameter cannot be used. This method also requires access to the screws on the side of the wheel when attaching and detaching it, which increases the assembly time. In Team TIGERs \cite{tigers2020}, the shaft is fixed with glue and the wheel is fixed with screws; however, the screws loosen due to vibration and rotation. Therefore, we developed an omni-wheel equipped with a mechanical locking mechanism, as shown in Fig. \ref{fig:robot2021}(b). The mechanical locking mechanism is a mechanical component that generates a force in the direction of diameter contraction by tightening a pair of tapered parts in the axial direction, thereby fixing the shaft and wheel by friction. It can also be detached repeatedly by applying a force in the axial direction with the removal screw. This enables fewer parts, a high fastening force, and simple attachment and removal. The basic characteristics of the developed omni-wheels are listed in Table \ref{tab:v2019v2022}. In v2022, the wheel is directly connected to the motor shaft to increase the space in the lower section (Fig. \ref{fig:robot2021}(a)). We used a Maxon motor EC 45 flat 70 W 24 V instead of the previous Maxon motor EC 45 flat 50 W 18 V because a larger torque is required. The circuit changes due to the change in motor are described in the next section. \begin{figure}[htbp] \centering \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/2021a.pdf}} \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/2021b.pdf}} \quad \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/2021c.pdf}} \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/2021d.pdf}} \caption{v2022 robot with the new Omni-Wheel.
(a) Ground area of robot, (b) Cross-section view of drive unit, (c) Wheel unit, (d) Isometric view of v2022.} \label{fig:robot2021} \end{figure} \begin{table}[htbp] \centering \caption{Comparison of the v2019 and v2022 omni-wheels} \label{tab:v2019v2022} \begin{tabularx}{\linewidth}{X|X|X} \hline & v2019 & v2022 \\\hline Diameter [mm] & 80 & 55 \\ Thickness [mm] & 16.5 & 11 \\ Height of COM [mm] & 49.2 & 39.7 \\ Material & A2017 & A2017 \\ \hline \end{tabularx} \end{table} \subsection{Change of battery and voltage boost circuit} The battery was changed to a 6-cell LiPo because the motor rating was changed to 24 V in v2022, as described in Section 4.1. The v2019 booster circuit was designed for a 4-cell LiPo battery and hence cannot be used in v2022. In v2022, the voltage booster circuit was redesigned to allow a maximum input voltage of 40 V, using an LT3751 as the boost control IC and an additional regulator. The input circuit to the solenoid and the adjustment of the kick force are the same as in v2019. Fig. \ref{fig:booster2021} shows the v2022 boost circuit. The occupied area was reduced by using a two-stage board. \begin{figure}[htbp] \centering \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/booster2021a.pdf}} \subfloat[][]{\includegraphics[width=0.45\linewidth]{figure/booster2021b.pdf}} \caption{v2022 Booster Circuit (a) separated, (b) stacked.} \label{fig:booster2021} \end{figure} \section{Software} \subsection{Structure} The structure of the AI software is illustrated in Fig. \ref{fig:softwarestruct}. It is divided into three major blocks: communication, strategy, and control. The system is event-driven, triggered by the frame information from ssl-vision. The entire sequence of operations, including the transmission from the trigger to the robot, is processed within one frame. First, the communication block receives the vision and referee information and sends a signal to the strategy block. This block also handles communication with the robots.
The strategy block determines the movement of the robots based on the situation of the game. The control block is responsible for path generation, acceleration/deceleration control, and yaw angle control to achieve the determined motion. The strategy block consists of four modules. The GameManager module updates the status of the game from the vision and referee information. The strategy module determines the combination of roles assigned to the robots and the priority of each role based on the situation of the match. The role module appropriately assigns roles to robots and determines the actions to be taken for each role. The skill module is a subdivision of roles and consists of the most basic descriptions. The control block consists of the PathPlan module and the MotionControl module. The PathPlan module generates paths for collision avoidance. The MotionControl module performs acceleration/deceleration control and posture control. The acceleration/deceleration control is based on the trapezoidal acceleration; the details are described in Section 5.4. Yaw angle control is performed using PD control. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/softwarestruct.pdf} \caption{Software structure overview.} \label{fig:softwarestruct} \end{figure} \subsection{Role} One of the six roles is assigned to each robot. The number of roles assigned and their priorities depend on the situation of the match. \begin{description} \item[Goalie] This role is always assigned the highest priority, and a robot assigned this role only works as a goalie. \item[Attacker] This is a role that involves actively approaching the ball and aiming to pass, clear, and shoot; the attacker role is never assigned to more than one robot. \item[Defender] The robots assigned this role move around the perimeter of their own penalty area to block the path of shots. The number of defenders assigned varies depending on the situation of the game.
\item[PassReceiver] This is a role that involves waiting for a pass at a location where it is most likely to be received. The number of pass receivers assigned varies depending on the match situation. \item[PassInterrupter] This player marks an opponent and intercepts passes. The number of pass interrupters assigned varies depending on the match situation. \item[Waiter] This is the lowest priority role, and involves moving to a larger space for a smooth transition to another role. \end{description} \subsection{Passing position} The best location for passing is determined by the potential method, with the field discretized by a grid. To determine the potential field, we compute the sum of several masks. A typical mask is shown in Fig. \ref{fig:potentialscore}a, which treats the ball position as a point light source and excludes the shadows cast by obstacles. The passing location is determined flexibly by computing the sum of such masks together with other simple gradients, as shown in Fig. \ref{fig:potentialscore}b. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/potentialscore.pdf} \caption{Potential score mask for determining the passing position. (a) Example of a mask, and (b) applies multiple masks.} \label{fig:potentialscore} \end{figure} \subsection{Acceleration/deceleration control} Speed control is based on trapezoidal acceleration/deceleration. With a simple trapezoidal profile, the speed remains low in the low-speed range and for short-distance movements. For this reason, the trapezoidal profile is cut at both ends, as shown in Fig. \ref{fig:acc}. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/trapezpidalacc.pdf} \caption{Trapezoidal acceleration/deceleration with both ends cut.} \label{fig:acc} \end{figure} \subsection{Framework} Qt was used as the software framework. The user interface developed in Qt is shown in Fig. \ref{fig:ui}.
Qt is used not only for the user interface, but also for UDP communication, serial communication for the XBee, and the core signal/slot mechanism. \begin{figure}[ttbp] \centering \includegraphics[width=0.9\linewidth]{figure/ui.pdf} \caption{Software user interface.} \label{fig:ui} \end{figure} \subsection{Development Environment} The software development was managed with GitLab. We used GitLab CI to automatically verify software builds and run tests. In addition, a system to generate code documentation was configured using Doxygen. \section{Introduction of robot development in cooperation with companies in the Nagaoka area} Our team of students from Nagaoka is building robots in collaboration with companies in the Nagaoka area. Fig. \ref{fig:discussion} shows a discussion between students and a company in Nagaoka. The new omni-wheel, described in Section 4.1, was developed through this collaboration. We also received support from a company in the Nagaoka area for the installation of cameras in our team's practice field. In general, SSL cameras are installed in a high position so that they can show a wide view of the field, but it was difficult for us to install a camera that high. Therefore, we set up a scaffold made of single pipes right next to the field and mounted the camera at a height of approximately 4 m, as shown in Fig. \ref{fig:camera}. \begin{figure}[ttbp] \centering \includegraphics[width=0.6\linewidth]{figure/discussions.pdf} \caption{Discussions with partner companies.} \label{fig:discussion} \end{figure} \begin{figure}[ttbp] \centering \includegraphics[width=0.6\linewidth]{figure/camera.pdf} \caption{Camera mounted using single pipe.} \label{fig:camera} \end{figure} \section{Acknowledgement} This robot was developed with the generous support and cooperation of NAZE and its member companies, who provided funding, design guidance, parts production, and field installation.
We would like to take this opportunity to express our deepest gratitude to Sanshin Co., NSS Co., and Ogawa Conveyor Corporation for their generous support.
\section{Introduction} Analyzing correlations among cricket teams of different eras has been a topic of interest for sports experts and journalists for decades. In this paper we study such influence (or interaction) by constructing the cross-correlation matrix $C$ \cite{plerou,fin,fin1,fin2,fin3,quantrans} formed from runs scored by teams over different time intervals, formally called a time series. We consider the time series of batting scores posted per innings by a team in all official ICC International Test matches played. Then we construct an ensemble of cross-correlation matrices corresponding to the Test data for that cricket team. We repeat the process for One Day International (ODI) and Indian Premier League (IPL) T20 cricket matches. We assume the correlations to be random and compare the fluctuation properties of $C$ with those of random matrices. Within the bounds imposed by the RMT model, the fluctuations of $C$ show excellent agreement with the ``universal'' results of the GUE \cite{mehta,ghosh,ghoshbook}, while the level density corresponds to the MP distribution \cite{marchenko}. This implies that the interactions in $C$ are random, or in simple words, not governed by any causality principle. However, outside these bounds, the eigenvalues of $C$ show departures from the RMT predictions, implying the influence of external non-random factors common to all matches played during this period. To understand this effect, we remove $k$ extreme bands from $C$ and perform the Kolmogorov-Smirnov (KS) test, and we observe a better agreement with the RMT predictions. We organize the paper as follows: After a brief description of the data analyzed in sub-section [\ref{sec:data}], we define the cross-correlation matrix in sub-section [\ref{sec:acm}]. Section [\ref{sec:rmt}] introduces our RMT model along with a brief proof of the MP distribution. We analyze our results and the corresponding RMT model in Section [\ref{sec:analysis}]. This is followed by concluding remarks.
\subsection{Data analysed} \label{sec:data} We construct three ensembles, corresponding to runs scored in Tests, ODIs and the Indian Premier League (IPL). \begin{itemize} \item The ODI ensemble comprises cross-correlation matrices constructed from runs scored by India, England, Australia, West Indies, South Africa, New Zealand, Pakistan and Sri Lanka in all official ICC One Day International matches played between 1971 and 2014. For each country we have a sequence of runs scored in both home and away matches. An ensemble of fifty-one $90\times 90$ matrices is constructed from the time series data. \item The Test ensemble comprises cross-correlation matrices constructed from runs scored by India, England, Australia, West Indies, South Africa, New Zealand, Pakistan and Sri Lanka. For each country we have a sequence of runs scored per innings (each match has a maximum of two innings) in both home and away matches. The Test scores have been taken for all matches played between England, Australia and South Africa between 1877 and 1909, and for all official ICC Test matches thereafter, till 2014. An ensemble of seventy $90 \times 90$ matrices is constructed from the time series data. \item The IPL ensemble comprises cross-correlation matrices constructed from runs scored by Chennai Super Kings, Rajasthan Royals, Royal Challengers Bangalore, Delhi Daredevils, Kings XI Punjab, Kolkata Knight Riders and Mumbai Indians in all official BCCI IPL T20 matches played between 2008 and 2014. For each team we have a sequence of batting scores posted per match. An ensemble of twenty-eight $20 \times 20$ matrices is constructed from the time series data.
\end{itemize} \subsection{Cross-correlation matrix} \label{sec:acm} The cross-correlation matrix $C$ is constructed from a given time series $X=\left\lbrace X(1),X(2),\ldots\right\rbrace$ by defining subsequences $X_{i}=\left\lbrace X(i),X(i+1),\ldots,X(N) \right\rbrace$ and $X_{j}=\left\lbrace X(j),X(j+1),\ldots,X(N-\Delta t) \right\rbrace$, separated by a ``lag'' $\Delta t = i-j$, $j<i$ and $i,j \in \mathbb{N}$. We then normalize the subsequences by defining \begin{equation} Y_i=\frac{X_i - \mu_{X_i}}{\sigma _{X_i}}. \end{equation} Finally, the cross-correlation matrix $C$ \cite{plerou} is defined as \begin{equation} C_{i,j}=\left< Y_i Y_j \right>, \end{equation} where $\mu_{X_i}$ and $\sigma_{X_{i}}$ are the sample mean and standard deviation of the subsequence $X_i$, respectively, and $\left<\ldots\right>$ denotes a time average over the period studied. This is the correlation coefficient between the subsequences $Y_i$ and $Y_j$, and it helps us understand the correlation between runs scored by a given team at different time intervals. The matrix elements lie between $-1$ and $1$, and the matrices so constructed are Hermitian. Now, we construct multiple matrices from a single time series, giving rise to an ensemble of matrices. Letting $C^{(1)}=C$ (as constructed above), we construct another matrix $C^{(2)}$ by removing the first $N$ elements of the time series considered and constructing the cross-correlation matrix by the method described above. We continue this process of construction till the length of the truncated time series becomes less than $N$. \section{Random Matrix Model} \label{sec:rmt} The Unitary Ensemble of random matrices is invariant under the unitary transformation $H\rightarrow W^{\dagger}HW$, where the ensemble is defined in the space $T_{2G}$ of Hermitian matrices and $W$ is any unitary matrix. Also, the various linearly independent elements of $H$ must be statistically independent \cite{mehta}.
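The ensemble construction above can be sketched numerically; a minimal pure-Python version (the matrix size $n$ and the helper names are our own; in the actual analysis the random input would be replaced by a team's score series):

```python
import math
import random

def corr(xs, ys):
    """Correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in xs) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in ys) / n)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (n * sx * sy)

def cross_corr_matrix(x, n):
    """n x n matrix C_{ij} of correlations between lagged windows of x."""
    N = len(x)
    C = [[1.0] * n for _ in range(n)]
    for i in range(1, n + 1):              # 1-indexed, as in the text
        for j in range(1, i):
            xi = x[i - 1:N]                # X_i = {X(i), ..., X(N)}
            xj = x[j - 1:N - (i - j)]      # X_j, truncated to len(X_i)
            C[i - 1][j - 1] = C[j - 1][i - 1] = corr(xi, xj)
    return C

random.seed(0)
C = cross_corr_matrix([random.gauss(0, 1) for _ in range(300)], 10)
# The matrix is symmetric with entries in [-1, 1].
assert all(abs(C[i][j] - C[j][i]) < 1e-12 for i in range(10) for j in range(10))
assert all(abs(C[i][j]) <= 1 + 1e-9 for i in range(10) for j in range(10))
```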
The joint probability distribution function of the eigenvalues $\{ x_1,x_2,\ldots,x_N \}$ is given by \begin{equation} \label{jpdf} P_{N\beta}(x_1,\ldots,x_N)=C_{N \beta} \, \prod_{j=1}^{N}x_{j}^{N\beta a}\exp\left(-N\beta b \sum_{j=1}^{N}x_j\right)\prod_{j<k}\left | x_j-x_k \right |^\beta, \end{equation} where $\beta=1,2$ and $4$ correspond to orthogonal (OE), unitary (UE) and symplectic (SE) ensembles respectively and $C_{N\beta}$ is the normalization constant \cite{mehta}. We define the $n$-point correlation function by \begin{equation} R_{n}^{(\beta)}(x_1,\ldots,x_n)=\frac{N!}{(N-n)!}\int dx_{n+1}\ldots\int dx_{N}P_{N\beta}(x_1,\ldots,x_N). \end{equation} This gives a hierarchy of equations \cite{ghoshbook} given by \begin{eqnarray} \label{hier} \beta R_{1}(x)\int\frac{R_{1}(y)}{(x-y)}dy+\frac{w'(x)}{w(x)}R_{1}(x)=0, \end{eqnarray} where \begin{equation} w(x)=x^{N\beta a}\exp[{-N\beta b x}]. \end{equation} \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{testld.pdf} \caption{Level density for averaged Test data with $k=5$. The solid line refers to the Marchenko-Pastur result (\ref{den(th)}) and the dashed line refers to the finite $N$ result, obtained by the polynomial method described in Section \ref{sec:rmt}. Here, $a=2.75$, $b=3.535$, $X_{-}=0.339601$ and $X_{+}=1.78204$ in (\ref{den(th)}). The largest eigenvalue is circled towards the end of the spectrum.}\label{ld} \end{figure} We solve the integral equation using the resolvent \begin{equation} G(z)=\int\frac{R_{1}(y)}{z-y}dy, \end{equation} which satisfies \begin{equation} G(x+i0)=\int\frac{R_{1}(y)}{x-y}dy-i\pi R_{1}(x). \end{equation} Multiplying Eq.(\ref{hier}) by $x/(z-x)$ and integrating over $x$ we get, after some elementary calculation, \begin{eqnarray} \label{den(th)} \rho(x)\equiv \frac{R_{1}(x)}{N} &=& \frac{b}{\pi x}\sqrt{(x-X_{-})(X_{+}-x)};\hspace{1cm} X_{-}<x<X_{+},\\ \nonumber &=& 0, \hspace{2cm}\textrm{otherwise.} \end{eqnarray} where \begin{equation} X_{\pm}=\frac{a+1}{b}\pm \frac{\sqrt{2a+1}}{b}.
\end{equation} For finite $N$, following the Dyson-Mehta method \cite{mehta}, we use \begin{equation} \label{den(fin)} \rho(x)=\frac{1}{N}\sum_{j=0}^{N-1}\phi^{2}_{j}(x),\hspace{1cm}\phi_{j}(x)=\sqrt{w(x)}P_{j}(x), \end{equation} where $P_{j}(x)$ are orthonormal polynomials which satisfy \begin{equation} \int_{X_{-}}^{X_{+}}P_{j}(x)P_{k}(x)w(x)dx=\delta_{j,k},\hspace{1cm}j,k\in \mathbb N. \end{equation} To understand the correlations in the system, we first need to unfold the eigenvalues, so as to eliminate the effect of the global level density on the fluctuations. The sequence of eigenvalues for each country is unfolded independently. The corresponding unfolded eigenvalues $y_k$ are given by\cite{verbaarschot}, \begin{equation}\label{unf} y_k=\int_{X_-}^{x_k}\rho(x)\mathrm{d}x, \end{equation} and the mean spacing of the unfolded eigenvalues $y_k$ is 1. We perform the unfolding using both (i) the theoretical level density (\ref{den(th)}) and (ii) numerical integration of the data, obtaining the best fit of the integrated density. \begin{figure}[H] \centering \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{testsd.pdf} \caption{Theoretical unfolding} \end{subfigure}\hfill \begin{subfigure}[b]{0.47\textwidth} \includegraphics[width=\textwidth]{testnumsd.pdf} \caption{Numerical unfolding} \end{subfigure} \caption{Nearest neighbour spacing distribution for mixed and averaged Test data obtained via numerical and theoretical unfolding (using the Marchenko-Pastur result (\ref{den(th)}) with $a=2.75$, $b=3.535$, $X_{-}=0.339601$ and $X_{+}=1.78204$). The solid line refers to the spacing distribution of the experimental data with $k=5$, the dotted line refers to the GUE result and the dashed line refers to the Poisson case.}\label{sd} \end{figure} For $\left \{S_i \,|\, S_i=y_{i+1}-y_i \right \}$ and $s_i=S_i/D$, where $y_i$ denote successive unfolded levels and $D$ is the average spacing, the level spacing distribution $p(s)ds$ is defined as the probability of finding an $s_i$ between $s$ and $s+ds$ \cite{mehta}.
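A numerical sketch of the theoretical unfolding of Eq.~(\ref{unf}) with the density (\ref{den(th)}) may read as follows; the overall factor $N$ (so that the mean spacing is 1, as stated above) and the grid-based quadrature are implementation choices of this sketch:

```python
import numpy as np

def mp_density(x, a, b):
    """Level density of eq. (den(th)); zero outside [X_-, X_+]."""
    Xm = ((a + 1) - np.sqrt(2 * a + 1)) / b
    Xp = ((a + 1) + np.sqrt(2 * a + 1)) / b
    x = np.asarray(x, dtype=float)
    rho = np.zeros_like(x)
    m = (x > Xm) & (x < Xp)
    rho[m] = b / (np.pi * x[m]) * np.sqrt((x[m] - Xm) * (Xp - x[m]))
    return rho, Xm, Xp

def unfold(eigs, a, b, grid=20001):
    """Theoretical unfolding, eq. (unf): integrated density up to each x_k.

    The cumulative integral is scaled by N = len(eigs) so that the mean
    spacing of the unfolded levels is 1, as stated in the text."""
    _, Xm, Xp = mp_density(np.array([1.0]), a, b)
    xs = np.linspace(Xm, Xp, grid)
    rho, _, _ = mp_density(xs, a, b)
    # trapezoidal cumulative integral of rho on the grid
    cdf = np.concatenate(([0.0], np.cumsum((rho[1:] + rho[:-1]) / 2 * np.diff(xs))))
    return len(eigs) * np.interp(np.sort(eigs), xs, cdf)

def spacings(unfolded):
    """Nearest-neighbour spacings s_i = S_i / D, with D the mean spacing."""
    S = np.diff(np.sort(unfolded))
    return S / S.mean()
```

With the Test-data parameters $a=2.75$, $b=3.535$, the edges evaluate to $X_{-}\approx 0.3396$ and $X_{+}\approx 1.7820$, and the density integrates to one over $[X_-,X_+]$.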
For no correlations between the levels, we have the Poisson distribution \begin{equation} \label{poisson} p(s)=\exp[-s], \end{equation} while for the GUE, we get Wigner's surmise \begin{equation} \label{guespacing} p(s)=\frac{32 s^2}{\pi^2} \exp \left[-\frac{4}{\pi} s^2\right]. \end{equation} We consider $8$ sequences of eigenvalues for the Test data obtained by ensemble averaging over each country. We unfold these sequences individually and average over the $8$ sequences of spacings. The result shows remarkable agreement with the GUE predictions (Fig. \ref{sd}). Upon mixing the eigenvalues of the Test data, we observe a Poisson distribution (Fig. \ref{sd}). \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{testnv.pdf} \caption{Number variance for the averaged and mixed Test data obtained via numerical unfolding over the spectra. The solid line refers to the GUE result (\ref{numbervariance}) and the dashed line refers to the Poisson case. The figure plots three cases: (i) averaged Test data with $k=5$ extreme diagonals removed, (ii) mixed Test data with $k=5$ extreme diagonals removed, and (iii) mixed Test data for the entire spectrum when no diagonals are removed from the matrices.}\label{nv} \end{figure} Another statistic considered is the linear statistic or the number variance. For $n_k$ unfolded levels in consecutive sequences of length $n$, we define the moments \cite{verbaarschot}, \begin{equation} M_p(n)=\frac{1}{N}\sum_{k=1}^{N}n_k^p, \end{equation} where $N$ is the number of sequences considered, each of length $n$, covering the entire spectrum. Then the number variance $\Sigma^2(n)$ is given by \begin{equation} \Sigma^2(n)=M_2(n)-n^2. \end{equation} For the GUE, the number variance is given by \cite{mehta}, \begin{equation} \label{numbervariance} \Sigma^2(n)=\frac{1}{\pi^2}\left(\ln(2\pi n)+ \gamma +1 \right), \end{equation} where $\gamma$ is the well-known Euler constant.
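The moments $M_p(n)$ and the number variance can be estimated from an unfolded spectrum as sketched below; the placement of the counting windows (evenly spaced over the spectrum) is an assumption of this sketch:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def number_variance(unfolded, n, offsets=200):
    """Empirical Sigma^2(n) = M_2(n) - n^2, with level counts taken in
    windows of length n placed across the unfolded spectrum."""
    y = np.sort(np.asarray(unfolded, dtype=float))
    starts = np.linspace(y[0], y[-1] - n, offsets)
    # number of unfolded levels falling in [start, start + n)
    counts = np.searchsorted(y, starts + n) - np.searchsorted(y, starts)
    return float(np.mean(counts.astype(float) ** 2) - n ** 2)

def sigma2_gue(n):
    """GUE prediction, eq. (numbervariance)."""
    return (np.log(2 * np.pi * n) + EULER_GAMMA + 1) / np.pi ** 2

def sigma2_poisson(n):
    """Uncorrelated (Poisson) levels: Sigma^2(n) = n."""
    return n
```

For uncorrelated levels (unit-rate Poisson process) the estimator returns $\Sigma^2(n)\approx n$, while for the GUE it should follow the slow logarithmic growth above.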
The number variance is known to be very sensitive for larger values of $n$ on account of spectral rigidity. Fig.~\ref{nv} shows a very good agreement of the experimental number variance of the Test data with the GUE result for the cases when $k=0$ and $k=5$ extreme diagonals are removed from both ends of the matrices involved in the calculation. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{testnumdel.pdf} \caption{The Dyson-Mehta least squares statistic for the averaged and mixed Test data with $k=5$ extreme diagonals removed from both ends of the matrices involved in the calculation, obtained via numerically unfolding the spectrum. The solid line refers to the GUE result (\ref{guedel}) and the dashed line refers to the result for the Poisson case (\ref{poisdel}).}\label{del} \end{figure} The other statistic considered is the Dyson-Mehta least squares statistic or the spectral rigidity statistic\cite{mehta}, which measures the long-range correlations and irregularity in the level series of the system by calculating the least squares deviation of the unfolding function from a straight line $y=a E + b$ over different ranges $L$. The statistic $\Delta(L)$ for $L=L_2-L_1$ is given by the integral, \begin{equation}\label{delta} \Delta (L)=\frac{1}{L}\int_{L_1}^{L_2}(N(E)-aE-b)^2dE, \end{equation} where $N(E)$ is the unfolding function. The mean value of the statistic for the GUE case is given by \cite{mehta}, \begin{equation}\label{guedel} \left \langle \Delta \right \rangle = \frac{1}{2 \pi^2} (\ln (2 \pi L) + \gamma - 5/4). \end{equation} For the Poisson case, the least squares statistic is given by \begin{equation}\label{poisdel} \left \langle \Delta \right \rangle = \frac{L}{15}. \end{equation} \section{Analysis} \label{sec:analysis} The problems that one encounters in the analysis of such data are as follows. 1. The finite length of the available time series introduces measurement noise. 2.
A longer time series will introduce more contributions from non-random events, which will affect the ``universality'' result but will provide information about the correlations among different time series. We study the RMT model defined by Eq.(\ref{jpdf}). We obtain the MP distribution (\ref{den(th)}) for the level density as $N\rightarrow \infty$. We observe that the level density of the eigenvalues of $C$ in the bulk shows a remarkable agreement with the MP distribution for all Test, ODI and IPL data. However, some large eigenvalues exist outside the bounds $[X_-, X_+]$. To ensure that these eigenvalues are not due to a finite $N$ effect, we obtain the level density for finite $N$. For this, we develop the corresponding orthonormal polynomials using the Gram-Schmidt method and, using Eq.(\ref{den(fin)}) for $N=10$, obtain the level density and compare it with the ensembles of cricketing data (Fig. \ref{ld}). We observe that the large eigenvalues still remain outside the bounds.\\ The next question is whether these large eigenvalues are non-random, in which case our RMT model will not only show disagreement with the level density but also ``spoil'' the RMT predictions. To verify this, we make an RMT analysis over the entire spectrum and compare its results with the truncated sparse matrix, which removes the large eigenvalues. The KS test shows that our level density and spacing distribution analysis is considerably hampered by the presence of these large eigenvalues, thereby confirming the existence of non-random long-range correlations. To track the level of non-randomness, we remove $k$ ($k\ll N$) extreme bands out of the $2N-1$ bands of the $N\times N$ matrices $C$ and perform the KS test. We perform numerical unfolding over the eigenvalues, where the integrated density of states is fitted with a polynomial. For ODI, where $N=90$, we obtain a p-value of $0.640311$ for the full spectrum and a p-value of $0.9025$ for the spectrum of the matrix with $k=15$.
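The KS comparison against the GUE surmise (\ref{guespacing}) needs its cumulative distribution; integrating $p(s)$ gives the closed form $F(s)=\mathrm{erf}(2s/\sqrt{\pi})-(4s/\pi)e^{-4s^2/\pi}$, which can be used as below (a sketch, not the exact pipeline behind the quoted p-values):

```python
import numpy as np
from math import erf, exp, pi

def wigner_gue_cdf(s):
    """CDF of the GUE Wigner surmise, obtained by integrating
    p(s) = (32 s^2 / pi^2) exp(-4 s^2 / pi):
    F(s) = erf(2 s / sqrt(pi)) - (4 s / pi) exp(-4 s^2 / pi)."""
    s = max(float(s), 0.0)
    return erf(2 * s / pi ** 0.5) - (4 * s / pi) * exp(-4 * s ** 2 / pi)

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov statistic D_n against a reference CDF."""
    xs = np.sort(np.asarray(sample, dtype=float))
    m = len(xs)
    F = np.array([cdf(v) for v in xs])
    d_plus = np.max(np.arange(1, m + 1) / m - F)
    d_minus = np.max(F - np.arange(0, m) / m)
    return float(max(d_plus, d_minus))
```

Feeding the unfolded nearest-neighbour spacings to `ks_statistic` gives the distance whose p-value is then read from the Kolmogorov distribution.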
For the Test data (again $N=90$), we obtain a p-value of $0.49$ for unfolding the full spectrum and a p-value of $0.855394$ when unfolding the spectrum of the matrix with $k=5$. Thus, by creating a sparse matrix, which removes the large eigenvalues, our results converge to the RMT predictions by $\approx 30\%$. This proves the existence of non-randomness in the system, introduced by elements $C_{ij}$ with $|i-j|\approx N$. We observe that as we increase the value of $k$, the largest eigenvalue in the spectrum gradually reduces and converges towards the bound imposed by the RMT model, as shown in Fig. \ref{maxeigs}. We then do theoretical unfolding on the new data and observe similar agreement on the KS test. For the number variance calculation, we first unfold the spectrum and calculate the number variance both within the bounds and over the entire spectrum. The former gives a good agreement with the GUE while the latter, as expected, shows deviation, pointing towards the presence of large eigenvalues which are due to correlation coefficients between runs scored over a long time gap. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{maxeigs.pdf} \caption{Largest eigenvalue in the averaged spectrum vs. $k$ for the Test, ODI and IPL data}\label{maxeigs} \end{figure} Finally, theoretical unfolding is performed over the spectra using Eqs.(\ref{unf}) and (\ref{den(th)}). The MP distribution parameters for the Test data ($k=5$) are given in Fig. \ref{sd}. For the ODI data ($k=15$), we have $a=2.475$, $b=3.15$, $X_{-}=0.328806$ and $X_{+}=1.87754$ as the optimal parameters for Eq.~(\ref{den(th)}). Lastly, we mix the levels obtained from the time series of all teams and observe a Poisson distribution (Fig. \ref{sd}). \section{Conclusion} \label{sec:conclusion} From the statistical analysis of Test, ODI and IPL data, we conclude that the eigenvalues of cross-correlation matrices display GUE universality.
The Test and ODI data are the only sets of data we found to be large enough to give results of the nature produced in this paper. Thus, even though the T20 results of the BCCI IPL matches are also considered, the small-$N$ effect is visible in our GUE results. We observe the Wigner surmise when we study the ensembles of different countries (in Tests and ODIs) or teams (IPL) separately. However, upon mixing the data of all countries, we get Poisson statistics, both for the spacing and the number variance. Here we may recall that while studying nuclear data statistics \cite{pandey}, eigenvalues with the same spin show GOE but mixed data give Poisson. To ensure that the large eigenvalues which lie outside the bounds are not due to the size of the matrices, we obtain the level density using the polynomial method for finite $N$. We observe that the large eigenvalues still lie well outside the bounds. Also, while numerically unfolding over the whole spectra (and not under the MP bound), we observe that the number variance shows a departure from GUE. However, by removing the long-range interaction terms from $C$, we observe a better agreement with RMT predictions, both for the level density as well as the spacing distribution and number variance. We believe that eigenvalues close to the upper bound still maintain randomness and any deviation is due to a temporal effect, for example, scores getting affected by a sudden burst of performance of an individual player over a tournament or bilateral series. The larger eigenvalues, however, are probably caused by more stable, non-random influences like the effect on cricketing performance due to the advent of new technology. However, this needs a thorough investigation. We wish to come back to this in a later publication. \section*{Acknowledgement} We acknowledge ESPN Cricinfo for providing us with the cricket data. \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.458008, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbRs5qsNCPep75OGm
\section{Introduction} \label{introduction} \review{A fundamental problem in Combinatorial Optimization is to partition a graph into several parts. There are many different versions of graph partitioning problems depending on the number of parts required, the type of weights on the edges or nodes, and the inclusion of several other constraints like restricting the number of nodes in each part. Usually, the objective of these problems is to divide the set of nodes into subsets with a strong internal connectivity and a weak connectivity between them. Most versions of these problems are known to be NP-hard. In this paper, a problem consisting of partitioning a complete or a general graph into a fixed number of subsets of nodes, such that their cardinalities differ by at most one and the total weight of each subset is bounded, is introduced. The objective is to minimize the total cost of the edges with end-nodes in the same subset. The motivation to state this problem was the realignment of the second category of the Ecuadorian football league.} The sports team realignment deals with partitioning a fixed number of professional sports teams into groups or divisions of similar characteristics, in which a tournament is played. Commonly, a geographical criterion is used to construct the divisions in order to minimize the sum of intra-divisional travel costs. The Ecuadorian football federation (FEF) adapted this criterion and imposes that the realignment of the second category of the professional football league must be made by considering provinces instead of teams. Thus, in the realignment, the provinces are divided into four geographical zones. In each zone, two \emph{subgroups} of teams are randomly constructed by considering the two best teams of each province, ensuring that two teams of the same province do not belong to the same subgroup, and that every subgroup has the same number of best and second-best teams whenever possible.
The eight subgroups play a Double Round Robin Tournament, where the champion of each subgroup and the four second-best teams with the highest scores advance to another stage of the championship. In 2014, FEF managers asked the authors of this work how to carry out the realignment of the second category of the Ecuadorian football league in an optimal way, \review{considering 21 provinces and 42 teams to be partitioned into 4 groups.} \review{Prior to this requirement, they made a realignment in which the total road travel distance was 39830.4 km. However, the optimal solution reduced this road travel distance by 1532.4 km, with the corresponding highly relevant logistical and economic benefits for the large number of people involved. A graphical representation is depicted in Figure \ref{figura_solucion_inicial}. In a preliminary work \cite{ISCO2016}, the problem was modeled as a $k$-clique partitioning problem with constraints on the sizes and weights of the cliques and formulated as an integer program. Unfortunately, there was an instance, related to a proposal to change the design of the realignment, that could not be solved to optimality. Consequently, the present paper aims to improve the previous practical results and provide more theoretical insight into this problem.}
\begin{figure}[!htb] \centering \includegraphics[scale=0.85]{ecuador.eps} \caption{Empirical vs optimal solution, 2014 edition} \label{figura_solucion_inicial} \end{figure} The proposal for realigning the second category of the Ecuadorian football league addressed in the previous work, and revisited in this paper, consists of directly making the divisions by using teams instead of provinces, as it is done in other international leagues. During the 2015 season of the second category of the Ecuadorian football league, 44 teams participated (22 provincial associations) and the set of teams was divided into $4$ groups with $5$ teams and $4$ groups with $6$ teams.
In the context of this problem, the distance between two teams is defined as the road trip distance between the venues of the teams. The strength or weakness of a team is quantified by means of a parameter that measures its football level considering the historical performance of each team. Thus, the problem studied in this paper consists of partitioning the set of teams into $k$ groups such that \review{the number of teams in each group differs at most by one, there exists a certain homogeneity of football performance among the teams in each class of the partition, and an objective function corresponding to the total intra-group geographical distance is minimized.} The sports team realignment problem has been modeled in different ways and for different leagues in other countries. A quadratic binary programming model is set up to divide 30 teams of the National Football League (NFL) in the United States into 6 compact divisions of 5 teams each \citep{Saltzman1996}. The results, obtained directly from a nonlinear programming solver, are considerably less expensive for the teams in terms of total intra-divisional travel, in comparison with the realignment of the 1995 edition of this league. On the other hand, \citet{MacDonaldPulleyblank2014} propose a geometric method to construct realignments for several sports leagues in the United States: NHL, MLB, NFL and NBA. The authors claim that with their approach they always find the optimal solution. To prove this, they solve mixed integer programming problems corresponding to practical instances, using CPLEX. In the case that it is possible to divide the teams into divisions of equal size, this problem can be modeled as a $k$\textit{-way equipartition problem}: given an undirected graph with $n$ nodes and edge costs, the problem consists of finding a $k$-partition of the set of nodes, all subsets of the same size, such that the total cost of the edges which have both endpoints in one of the subsets of the partition is minimized.
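To make the definition concrete, a minimum-cost $k$-way equipartition of a toy instance can be found by brute force, as in the sketch below (exhaustive enumeration is only viable for very small $n$, since the problem is NP-hard; the instance and helper names are hypothetical):

```python
from itertools import combinations

def k_way_equipartition(nodes, k, cost):
    """Exhaustive search for a minimum-cost k-way equipartition.

    `cost` maps a frozenset {i, j} to the edge cost.  Only viable for
    toy instances: the problem is NP-hard in general."""
    n = len(nodes)
    assert n % k == 0
    size = n // k

    def intra_cost(group):
        return sum(cost[frozenset(e)] for e in combinations(group, 2))

    best = (float("inf"), None)

    def rec(remaining, groups, acc):
        nonlocal best
        if acc >= best[0]:
            return                       # simple pruning bound
        if not remaining:
            best = (acc, groups)
            return
        first = min(remaining)           # anchor the smallest node: breaks symmetry
        rest = remaining - {first}
        for others in combinations(sorted(rest), size - 1):
            group = (first,) + others
            rec(rest - set(others), groups + [group], acc + intra_cost(group))

    rec(set(nodes), [], 0.0)
    return best
```

For six points on a line at positions $0,1,2,10,11,12$ and $k=2$, the search returns the two clusters $\{0,1,2\}$ and $\{10,11,12\}$ with total intra-divisional cost $8$.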
An example of this case was given by \citet{Mitchell2001}, who optimally solved the realignment of the NFL in the United States for $32$ teams and $8$ divisions by using a branch-and-cut algorithm. Moreover, the author shows that the 2002 edition of the NFL could have reduced the sum of intra-divisional travel distances by $45\%$. When $n\!\mod k \neq 0$, the sports team realignment problem can be modeled as a \textit{Clique Partitioning Problem} (CPP) with constraints on the sizes of the cliques \citep{Mitchell2007}, as we will see in the next section. The CPP has been extensively studied in the literature. This graph optimization problem was introduced by \citet{GROTSCHEL} to formulate a clustering problem. They studied it from a polyhedral point of view and their theoretical results have been used in a cutting plane algorithm that includes heuristic separation routines for some classes of facet-defining inequalities. \citet{Ferreira} analyzed the problem of partitioning a graph satisfying capacity constraints on the sum of the node weights in each subset of the partition. \citet{Jaehn} proposed a branch-and-bound algorithm where tighter bounds for each node in the search tree are reported. Additionally, a variant where constraints on the size of the cliques are introduced for the CPP is studied by \citet{LABBE}, and a competitive branch-and-cut algorithm for a particular problem based on it has been implemented \citep{LABBE2}. \review{Regarding applications of the Graph Partitioning Problem, it is widely known that the canonical application of this problem is the distribution of work to the processors of a parallel machine \citep{HENDRICKSON_2000}. Other well-known applications include VLSI circuit design \citep{VLSI_2011}, Mobile Wireless Communications \citep{Fairbrother_EtAl_2017} and the already mentioned sports team realignment \citep{Mitchell_2003}. For a complete survey of applications and recent advances in graph partitioning, see \cite{Buluc2016}.
} \review{This paper proposes two integer formulations for the $k$\textit{-way equipartition problem}, which are formally defined in Section \ref{formulacion}. Moreover, valid inequalities for the polytopes of these formulations are derived in Section \ref{valid_inequalities}, which are integrated into a Branch \& Bound \& Cut scheme to solve to optimality instances with 54 teams and, in particular, a hard real-world instance with 44 teams not solved in a previous work}. Additionally, a tabu search method for finding high-quality feasible solutions is shown in Section \ref{SectionHeuristic}. In Section \ref{SectionPracticalExperience}, the tabu search method and the usage of valid inequalities are integrated into several strategies to solve the real-world as well as the simulated instances. Finally, Section \ref{Conclusions} concludes the paper with some remarks. \section{Problem definition and integer programming formulations} \label{formulacion} By associating the venues of teams with the nodes of a graph, the distance between venues with costs on the edges, and football performance of the teams with weights on the nodes, \reviewd{a typical realignment problem} can be modeled as a $k$-way partitioning problem with constraints on the size (the number of nodes in each subset differs by at most one) and weight of the cliques (total sum of node weights in the clique). From now on, we refer to this problem as a \emph{balanced} $k$\emph{-\reviewd{way partitioning problem with weight constraints} (B$k$-WWC)}. Let $G=(V,E)$ be an undirected complete graph with node set $V=\{1,\ldots,n\}$, edge set $E=\left\{\{i,j\}:i,j\in V, i \neq j\right\}$, cost on the edges $d: E \longrightarrow \mathbb{R}^+$, weights on nodes $w:V \longrightarrow \mathbb{R}^+$ and a fixed number $k \geq 2$.
A $k$-\emph{partition} of $G$ is a collection of $k$ subgraphs $(V_1,E(V_1)),\ldots,(V_k,E(V_k))$ of $G$, where $V_i \neq \emptyset$ for all $i=1,\dots, k$, $V_i\cap V_j= \emptyset$ for all $i\neq j$, $\cup_{i=1}^k V_i=V$, and $E(V_i)$ is the set of edges with end nodes in $V_i$. \review{Note that all subgraphs $(V_i,E(V_i))$ are cliques since $G$ is a complete graph.} Moreover, let $W_L, W_U\in \mathbb{R}^+$, $W_L\leq W_U$, be the lower and upper bounds, respectively, for the weight of each clique (which is part of the input of our problem). The weight of a clique is the total sum of the node weights in the clique. Then, B$k$-WWC consists of finding a $k$-way partition such that \begin{align} & W_L \leq \sum_{j\in V_c}w_j \leq W_U,& \quad & \forall \; c=1,\ldots,k,\label{cond_pesos}\\ & \left| |V_i|-|V_j| \right| \leq 1,& \quad & \forall \; i,j = 1,2,\ldots,k,\quad i< j,\label{cond_nodos} \end{align} and the total edge cost over all cliques is minimized. In a previous work \citep{ISCO2016}, the NP-hardness of the B$k$-WWC was proved by a polynomial transformation from the 3-Equitable Coloring Problem. It is known that algorithms based on integer programming techniques are proved to be the best tools for dealing with problems such as B$k$-WWC. As it was mentioned in the introduction, another CPP was successfully addressed by \citet{LABBE2}, which used an integer programming formulation for the size-constrained clique partitioning problem given by \citet{LABBE}. In that formulation, a binary variable is defined for each edge. Let $x_{ij}$ be the variable associated to the edge $\{i,j\}$. Then, $x_{ij}=1$ if nodes $i$ and $j$ belong to the same clique, and $x_{ij}=0$ otherwise. 
The formulation is stated as follows: \begin{align} & \min \sum_{\{i,j\}\in E} d_{ij} x_{ij} \label{obj_1}\\ & \mbox{s.t.} \nonumber\\ & \hspace{0.4cm} x_{ij} + x_{jl} - x_{il} \leq 1,& \forall \; 1\leq i < j< l\leq n, \label{restr_11}\\ & \hspace{0.4cm} x_{ij} -x_{jl} +x_{il} \leq 1,& \forall \; 1\leq i < j< l\leq n, \label{restr_12}\\ & - x_{ij} +x_{jl} + x_{il} \leq 1,& \forall \; 1\leq i < j< l\leq n, \label{restr_13}\\ & F_L \leq 1 + \sum_{j:\{i,j\}\in E} x_{ij} \leq F_U,&\qquad \forall \; i\in V, \label{restr_2}\\ & \hspace{0.4cm} x_{ij} \in \{0,1\}, & \forall\; \{i,j\}\in E \label{restr_4} \end{align} The objective function \eqref{obj_1} seeks to minimize the total edge cost of the $k$ cliques. Constraints \eqref{restr_11}, \eqref{restr_12}, and \eqref{restr_13} are the so-called triangle inequalities introduced by \citet{GROTSCHEL}, which guarantee that if three nodes $i,j,l$ of $V_c$ are linked by two edges $\{i,j\}$ and $\{j,l\}$, then the third edge $\{i,l\}$ must be also included in the solution, for all $c$. Constraints \eqref{restr_2} ensure that the cardinality of each clique is between values $F_L$ and $F_U$, and constraints \eqref{restr_4} determine that all variables are binary. In our case, in order to model the B$k$-WWC, the cardinality of any two cliques must differ at most in one, i.e., from now on, $F_L := \left \lfloor n/k \right \rfloor\geq 2$ and $F_U := \left \lceil n/k \right \rceil$. Moreover, additional constraints that impose the weight requirements on each clique are included: \begin{equation} W_L \leq w_i + \sum_{j:\{i,j\}\in E} w_j x_{ij} \leq W_U, \qquad \forall\; i\in V, \label{restr_3} \end{equation} As the cardinality of each subset in the partition depends on $n$ and $k$, when $k$ divides $n$, the formulation \eqref{obj_1}-\eqref{restr_3} returns $k$ cliques with exactly $n/k$ nodes and the problem corresponds to the $k$-way equi-partition problem \cite{Mitchell2001}. 
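As a sanity check, when $k$ divides $n$ one can enumerate, for a tiny instance, all binary vectors satisfying the triangle inequalities \eqref{restr_11}-\eqref{restr_13} together with the cardinality constraints \eqref{restr_2}, and verify that they are exactly the balanced clique partitions (a sketch; the weight constraints are left inactive here):

```python
from itertools import combinations, product

n, k = 6, 2                      # k divides n, so every clique has n // k = 3 nodes
edges = list(combinations(range(n), 2))

def feasible(x):
    # triangle inequalities: the "same clique" relation must be transitive
    for i, j, l in combinations(range(n), 3):
        a, b, c = x[(i, j)], x[(j, l)], x[(i, l)]
        if a + b - c > 1 or a - b + c > 1 or -a + b + c > 1:
            return False
    # cardinality: 1 + sum_j x_ij must equal n // k for every node i
    for i in range(n):
        deg = sum(x[tuple(sorted((i, j)))] for j in range(n) if j != i)
        if 1 + deg != n // k:
            return False
    return True

solutions = [dict(zip(edges, bits))
             for bits in product((0, 1), repeat=len(edges))
             if feasible(dict(zip(edges, bits)))]

# A feasible vector encodes a partition of the 6 nodes into two triangles,
# and there are C(6, 3) / 2 = 10 such partitions.
print(len(solutions))            # -> 10
```

Every feasible vector selects exactly $6$ edges (two disjoint triangles), confirming that, for $k\mid n$, the triangle and cardinality constraints alone characterize the balanced clique partitions.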
In this case, constraints (\ref{restr_2}) can be rewritten as: \begin{equation} \sum_{j:\{i,j\}\in E} x_{ij} = \frac{n}{k} - 1, \qquad \forall \; i\in V, \label{restr_2x} \end{equation} When $k$ does not divide $n$, the previous constraints are not enough to guarantee that integer solutions represent partitions of $k$ cliques. \review{In fact, observe that formulation \eqref{obj_1}-\eqref{restr_3} may admit feasible solutions for different values of $k$; for example, consider as an instance of B$k$-WWC a complete graph with $n=23$ and $k=7$, which implies that $F_L=3$ and $F_U=4$. However, a feasible solution for this instance could be a partition consisting of one subset of nodes with cardinality equal to $3$ and five subsets with cardinality equal to $4$, which corresponds to a non desired value of $k=6$.} Two alternatives are proposed to overcome this issue. On the one hand, dummy nodes are added to the graph until the condition $n\!\mod k = 0$ is met, i.e.~ a set of dummy nodes $\tilde V$ of cardinality $k-(n\mod k)$ is defined. Moreover, for all $i\in \tilde V$, costs $d_{ij}=0$ for $j \in V \cup \tilde V$, and weights $w_i = 0$ are fixed. After that, $V$ must be updated with $V\cup \tilde V$, and the same for $E$. Finally, observe that two dummy nodes must not be assigned to the same clique in the partition. In order to avoid this impasse, the following constraint is considered: \begin{equation} \sum_{\{i,j\} \subset \tilde V} x_{ij}=0 \label{rest_ceros} \end{equation} We call $\mathcal{F}_1$ to the formulation of B$k$-WWC composed by the objective function \eqref{obj_1} and constraints \eqref{restr_11}-\eqref{restr_13}, \eqref{restr_4}-\eqref{rest_ceros}. \review{On the other hand}, an alternative to the inclusion of dummy nodes (as formulation $\mathcal{F}_1$ \reviewd{states}) is to consider a new constraint as follows. 
Note that in a balanced partition of a graph, there exist $r\in \mathbb{N}$ disjoint subsets of cardinality $\left \lceil n/k \right \rceil$ and $k-r$ disjoint subsets of cardinality $\left \lfloor n/k \right \rfloor$. \reviewd{That is, $n=r F_U + \left(k-r \right) F_L$ where $r= n-k F_L = n \!\mod k$}. \reviewd{Note also that the total number of intra-clique edges} is $r F_U (F_U - 1)/2 + (k - r) F_L(F_L-1)/2$. Let $\beta_{n,k}$ be the last number. It is easy to see that $\beta_{n,k} = \beta_{n,k'}$ implies $k = k'$, and therefore the following equality forces the partition to have size $k$: \begin{equation} \sum_{\{i,j\}\in E} x_{ij} = \beta_{n,k} \label{restr_5} \end{equation} We call $\mathcal{F}_2$ to the formulation composed by the objective function \eqref{obj_1} and constraints \eqref{restr_11}-\eqref{restr_3} and \eqref{restr_5}. In some situations, the graph $G$ could be a non-complete one. For example, in \reviewd{real-world instances}, there \reviewd{would be} an extra requirement where certain pairs of nodes must not participate in the same clique. This can be modeled by simply deleting the edges $\{u,v\}$ where $u$ and $v$ are those nodes that should be included in different cliques. Such a problem will be known as the \emph{generalized balanced $k$-\reviewd{way} clique partitioning with weight constraints} (GB$k$-WWC). The last problem is not harder to solve than B$k$-WWC.
In fact, given an instance of GB$k$-WWC defined by an arbitrary graph $G=(V,E)$, cost on the edges $d: E \longrightarrow \mathbb{R}^+$, weights on nodes $w:V \longrightarrow \mathbb{R}^+$, positive numbers $W_L$ and $W_U$, and a fixed integer number $k \geq 2$, an instance of B$k$-WWC can be constructed as follows: take a complete graph $G' = (V', E')$ with set of nodes $V'=V$, weights $w'_i=w_i ~~\forall i\in V$, numbers $W'_L=W_L$, $W'_U=W_U$, $k'=k$, and distances $$ d^{\prime}_{ij}= \begin{cases} d_{ij}, \mbox{ if } \{i,j\} \in E\\ M, \mbox{ if } \{i,j\} \in E' \backslash E \end{cases} $$ where $M$ is a big number. It is then obvious that GB$k$-WWC has an optimal solution if and only if the optimum of B$k$-WWC does not exceed $M$. In practice, one does not have to deal with $M$. As in the case of dummy nodes, considering a constraint $\sum_{\{i,j\} \in E' \backslash E} x_{ij} = 0$ is enough. \section{Valid inequalities} \label{valid_inequalities} \reviewd{Let $\mathcal{P}$ be the polytope defined by the convex hull of integer solutions of $\mathcal{F}_1$ (if $n\!\mod k = 0$) or $\mathcal{F}_2$ (otherwise).} $\mathcal{P}$ is uniquely determined by the parameters $n, k, W_L, W_U$ and $w_i$ for all $i\in V$. If the weight constraints are redundant, the dimension of this polytope is given by Theorem 3.1 of \citet{LABBE}, and the equations (\ref{restr_2x}) or (\ref{restr_5}) are enough to describe the minimal system of $\mathcal{P}$. On the other hand, the weight constraints could make this polytope empty.
In order to avoid these cases, a necessary condition on the weights is established in the following result: \begin{lemma} \label{condicion_modelo} A necessary condition for the feasibility of B$k$-WWC is \begin{eqnarray*} \max \left \{ 2, \left\lceil \frac{\sum_{i\in V}w_i}{W_U} \right\rceil \right \} \leq k \leq \min \left \{ \left \lfloor \frac{n}{2} \right \rfloor , \left\lfloor \frac{\sum_{i\in V}w_i}{W_L}\right\rfloor \right\} \end{eqnarray*} \end{lemma} \begin{proof} Observe that by assumption $k\geq 2$ and $F_L\geq 2$. On the other hand, taking in constraints \eqref{restr_3} one node $i_c$ of each clique $V_c$ and summing over $c$, \begin{align*} & \sum_{c=1}^k W_L \leq \sum_{c=1}^k \Big(w_{i_c} + \sum_{j:\{i_c,j\}\in E} w_j x_{i_c j}\Big) \leq \sum_{c=1}^k W_U,\\ &\sum_{c=1}^k W_L \leq \sum_{c=1}^k \sum_{i\in V_c} w_i \leq \sum_{c=1}^k W_U,\\ & k W_L \leq \sum_{i\in V} w_i \leq k W_U, \end{align*} from which the result follows. \end{proof} Observe that \reviewd{$\mathcal{P}$} is contained in the polytope given by \citet{LABBE}. Hence, linear relaxations of our formulations can be strengthened by means of known classes of valid and facet-defining inequalities described in previous works. This is the case of the 2-partition inequalities: \begin{equation} \label{part2eq} \sum_{i\in S} \sum_{j\in T} x_{ij} - \sum_{i_1, i_2\in S} x_{i_1 i_2} - \sum_{j_1, j_2\in T} x_{j_1 j_2} \leq |S|, \end{equation} for every two disjoint nonempty subsets $S, T$ of $V$ such that $|S| \leq |T|$ and $S \cap T = \emptyset$. These inequalities were introduced in the nineties by \citet{GROTSCHEL} for the Clique Partitioning Polytope and, in recent years, \citet{LABBE} explored these inequalities for their polytope. Based on the computational experiments reported in these preceding works, the usage of 2-partition inequalities as cuts showed good behavior and effectiveness in solving the IP model, and they will be considered in the present paper. \reviewd{In addition, new valid equations and inequalities arise from the introduction of the weight constraints \eqref{restr_3}.
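The validity of the 2-partition inequalities \eqref{part2eq} on integral solutions can be checked numerically, e.g.~with the following sketch (helper names are hypothetical):

```python
from itertools import combinations

def two_partition_lhs(x, S, T):
    """Left-hand side of the 2-partition inequality (eq. part2eq)."""
    cross = sum(x[frozenset((i, j))] for i in S for j in T)
    inner_S = sum(x[frozenset(e)] for e in combinations(sorted(S), 2))
    inner_T = sum(x[frozenset(e)] for e in combinations(sorted(T), 2))
    return cross - inner_S - inner_T

def partition_to_x(parts, n):
    """Characteristic vector x of a clique partition of {0, ..., n-1}."""
    block = {i: p for p, part in enumerate(parts) for i in part}
    return {frozenset((i, j)): int(block[i] == block[j])
            for i, j in combinations(range(n), 2)}

# Check the inequality on one partition of 9 nodes, for all pairs (S, T)
# with |S| < |T| and small cardinalities.
n = 9
x = partition_to_x([(0, 1, 2), (3, 4, 5), (6, 7, 8)], n)
for s_size in (1, 2):
    for S in combinations(range(n), s_size):
        rest = [v for v in range(n) if v not in S]
        for T in combinations(rest, s_size + 1):
            assert two_partition_lhs(x, set(S), set(T)) <= len(S)
```

The inequality can hold with equality (e.g.~$S=\{0\}$, $T=\{1,2\}$ above), which is why these cuts can support faces of the polytope.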
Below, three families of valid inequalities for $\mathcal{P}$ are proposed. For any $T \subset V$, define $w(T) = \sum_{i \in T} w_i$. Also, for a given $k$-partition $\pi=(V_1,\ldots,V_k)$ define $E_{\pi}(T) = \cup_{i=1}^k E(V_i \cap T)$. That is, $E_{\pi}(T)$ contains all the edges of $\pi$ with end nodes in $T$. Finally, any integer solution lying in a polyhedron $P$ is called a \emph{root} of $P$. \begin{proposition} Let $T \subset V$ such that $w(T) > W_U$. Then, the following $T$-Weight-Cover inequality is valid for $\mathcal{P}$: \[ \sum_{\{i,j\} \in E(T)} x_{ij} \leq \dfrac{(|T|-1)(|T|-2)}{2}. \] \end{proposition} \begin{proof} Let $x$ be a root of $\mathcal{P}$ representing a $k$-partition $\pi$. The left side of the inequality is $|E_{\pi}(T)| = \sum_{i=1}^k |V_i \cap T|(|V_i \cap T|-1)/2$. Since $w(T) > W_U$, all nodes in $T$ do not belong to the same clique in $\pi$. That is, $T$ has nodes from two or more cliques in $\pi$ and the largest value of $|E_{\pi}(T)|$ is reached when $T$ has $|T|-1$ nodes belonging to a clique, say $V_{i_1}$, and just one node belonging to another one, say $V_{i_2}$. In that case, $|E(V_{i_1})| = (|T|-1)(|T|-2)/2$ and $|E(V_{i_2})| = 0$. Therefore, $|E_{\pi}(T)| \leq (|T|-1)(|T|-2)/2$. \end{proof} \begin{corollary} \label{ZEROW} Let $\{i,j\} \in E$ such that $w_i + w_j > W_U$. Then, $x_{ij} = 0$ is a valid equation of $\mathcal{P}$. \end{corollary} \begin{proposition} Let $T \subset V$ such that $w(T) > W_L$, $|T| \leq F_L$ and $r = w(T) - W_L$. Then, for all $i\in T$, the following $(T,i)$-Weight-Lowerbound inequality is valid for $\mathcal{P}$: \[ w_i + \sum_{j \in T \setminus \{i\}} w_j x_{ij} + \sum_{j \in V \setminus T} (w_j + r) x_{ij} \geq w(T). \] \end{proposition} \begin{proof} Let $x$ be a root of $\mathcal{P}$ representing a $k$-partition $\pi$ and w.l.o.g.$\!$ suppose that $i \in V_1$. The left side of the inequality is $w(V_1) + r |V_1 \setminus T|$. 
If $V_1 \setminus T = \emptyset$, we have $T = V_1$ since $|T| \leq F_L \leq |V_1|$, and the inequality is valid. If $V_1 \setminus T \neq \emptyset$ then $w(V_1) + r |V_1 \setminus T| \geq w(V_1) + r \geq W_L + r = w(T)$. \end{proof} \begin{proposition} Let $T \subset V$ such that $w(T) < W_U$, $r = W_U - w(T)$ and $S = \{j \in V \setminus T : w_j > r\} \neq \emptyset$. Then, for all $i\in T$, the following $(T,i)$-Weight-Upperbound inequality is valid for $\mathcal{P}$: \[ w_i + \sum_{j \in T \setminus \{i\}} w_j x_{ij} + \sum_{j \in S} (w_j - r) x_{ij} \leq w(T). \] \end{proposition} \begin{proof} Let $x$ be a root of $\mathcal{P}$ representing a $k$-partition $\pi$ and w.l.o.g.$\!$ suppose that $i \in V_1$. The left side of the inequality is $w(T \cap V_1) + w(S \cap V_1) - r |S \cap V_1|$. If $S \cap V_1 = \emptyset$, we have $w(T \cap V_1) \leq w(T)$ and the inequality is valid. If $S \cap V_1 \neq \emptyset$, then $w(T \cap V_1) + w(S \cap V_1) - r |S \cap V_1| \leq w(V_1) - r |S \cap V_1| \leq W_U - r |S \cap V_1| \leq W_U - r = w(T)$. \end{proof} } \reviewd{ The previous results just give conditions for the inequalities to be valid but we are also interested in finding those inequalities that define faces of high dimension, preferably facets of $\mathcal{P}$, since one can reinforce linear relaxations with them. This fact would require a deeper polyhedral study of $\mathcal{P}$. However, for practical purposes, it is enough to propose necessary conditions that guarantee that faces defined by such inequalities have as many roots as possible (or at least be non-empty). These conditions can be further used for the proper separation of the inequalities involved. \begin{proposition} Let $\mathcal{F}$ be the face defined by a $T$-Weight-Cover inequality. If $\mathcal{F} \neq \emptyset$, then there exists $l \in T$ such that $w(T \setminus \{l\}) \leq W_U$. \end{proposition} \begin{proof} Let $x$ be a root of $\mathcal{F}$ representing a $k$-partition $\pi$. 
Since $E_{\pi}(T) = (|T|-1)(|T|-2)/2$, the partition $\pi$ restricted to $T$ must have two components: a clique of size $|T|-1$ and an isolated node $l$. Denote $T' = T \backslash \{l\}$. Clearly, $E_{\pi}(T) = E_{\pi}(T')$. Now, suppose that $w(T') > W_U$. Hence, the $T'$-Weight-Cover inequality is also valid and we obtain $E_{\pi}(T) = E_{\pi}(T') \leq (|T|-2)(|T|-3)/2 < (|T|-1)(|T|-2)/2$ which is absurd. \end{proof} The previous result suggests that, in order to obtain a good Weight-Cover inequality, we should impose that $w(T \setminus \{l\}) \leq W_U$ for all $l \in T$ (i.e.~ $T$ is ``minimal'' with respect to the condition $w(T) > W_U$). \begin{proposition} Let $\mathcal{F}$ be the face defined by a $(T,i)$-Weight-Lowerbound inequality. If $\mathcal{F} \neq \emptyset$, then $|T| \geq F_L-1$. \end{proposition} \begin{proof} Let $x$ be a root of $\mathcal{F}$ representing a $k$-partition $\pi$ and suppose that $i \in V_1$. Also, let $s = |V_1 \setminus T|$. We have $w(V_1) + rs = w(T)$. Therefore, $$s = \dfrac{w(T) - w(V_1)}{r} = \dfrac{w(T) - w(V_1)}{w(T) - W_L}.$$ Since $w(V_1) \geq W_L$, we have $s \leq 1$. Therefore, $|T| \geq |V_1| - s \geq |V_1| - 1 \geq F_L - 1$. \end{proof} This result simply suggests to consider only Weight-Lowerbound inequalities such that $|T| \geq F_L-1$. } Regarding the Weight-Upperbound inequalities, and for the sake of clarity, the roots of the faces defined by such inequalities are classified in two types. Let $\mathcal{F} \neq \emptyset$ be the face defined by a $(T,i)$-Weight-Upperbound inequality, let $x$ be a root of $\mathcal{F}$ representing a $k$-partition $\pi$ where w.l.o.g. $i \in V_1$. If $S \cap V_1 = \emptyset$, we say that $x$ is of \emph{Type 1}. Otherwise, $x$ is of \emph{Type 2}. Now, define $\Pi^t$ as the set of roots of $\mathcal{F}$ of Type $t$ where $t \in \{1, 2\}$. 
\begin{proposition} Let $\mathcal{F}$ be the face defined by a $(T,i)$-Weight-Upperbound inequality.\\ (i) If $\mathcal{F} \cap \Pi^1 \neq \emptyset$, then $|T| \leq F_U$.\\ (ii) If $\mathcal{F} \cap \Pi^2 \neq \emptyset$, then $|T| \geq F_L-1$. \label{Proposition6} \end{proposition} \begin{proof} We recall that $w((T \cup S) \cap V_1) - r |S \cap V_1| = w(T)$.\\ (i) If $S \cap V_1 = \emptyset$, we obtain $w(T \cap V_1) = w(T)$ implying that $T \subset V_1$. Therefore, $|T| \leq |V_1| \leq F_U$.\\ (ii) If $S \cap V_1 \neq \emptyset$, add $r |S \cap V_1| - W_U$ to both sides of the equation. We obtain $w((T \cup S) \cap V_1) - W_U = w(T) - W_U + r |S \cap V_1| = -r + r |S \cap V_1| = r(|S \cap V_1| - 1)$. Since $|S \cap V_1| \geq 1$, $w((T \cup S) \cap V_1) - W_U \geq 0$ and, therefore, $w((T \cup S) \cap V_1) \geq W_U$. On the other hand, $w(V_1) \leq W_U$ implying that $w((T \cup S) \cap V_1) = W_U$. Hence, $w(V_1) = W_U$, $V_1 \subset T \cup S$ and $|S \cap V_1| = 1$. Let $s$ be the unique element from $S \cap V_1$. Then, $V_1 \setminus \{s\} \subset T$. Therefore, $|T| \geq |V_1|-1 \geq F_L-1$. \end{proof} This result suggests to discard those Weight-Upperbound inequalities such that the condition $F_L-1 \leq |T| \leq F_U$ does not hold. \section{Deriving upper bounds: a tabu search} \label{SectionHeuristic} Consider an optimization problem where, given a graph $G = (V, E)$ and a number $k$, the objective is to obtain a partition $(V_1, \ldots, V_k)$ of the set of nodes such that $||V_i|-|V_j|| \leq 1$ for all $i \neq j$, and to minimize the number of edges in $\bigcup_{i=1}^k E(V_i)$. This problem, called $k$-ECP, is iteratively used by the state-of-the-art tabu search algorithm \textsc{TabuEqCol} for solving the \emph{Equitable Coloring Problem} \citep{TABUEQCOL,TABUIMPROVED}. The $k$-ECP is very related to the problem presented in this paper. 
In fact, it is a particular case of B$k$-WWC: simply consider a complete graph $G' = (V, E')$, $W_L = W_U = 0$, $w_i = 0$ for all $i \in V$, $d_{ij} = 1$ for all $\{i,j\} \in E$ and $d_{ij} = 0$ for all $\{i,j\} \in E' \backslash E$. \reviewd{In this section, we propose a two-phase algorithm based on \textsc{TabuEqCol} for solving B$k$-WWC which incorporates an additional mechanism to deal with weights.} \emph{Tabu search} is a metaheuristic method proposed by \citet{GLOVER}. Basically, it is a local search algorithm which is equipped with additional mechanisms that prevent from visiting a solution twice and getting stuck in a local optimum. The design of a tabu search algorithm involves to define the search space of feasible solutions, an objective function, the neighborhood of each solution, the stopping criterion, the aspiration criterion, the features and how to choose one of them to be stored in the tabu list and how to compute the tabu tenure. The reader is referred to the work by \citet{TABUEQCOL} for the definitions of these concepts and how to denote them. Below, the details of our algorithm are presented: \begin{itemize} \item \emph{Search space of solutions}. A solution $s$ is a partition $(V_1, \ldots, V_k)$ of the set of nodes such that $||V_i|-|V_j|| \leq 1$ for all $i \neq j$. For the sake of efficiency, solutions are stored in memory as tuples $(V_1, \ldots, V_k, W_1, \ldots, W_k, R^+, R^-)$ where $W_i = \sum_{v \in V_i} w_v$, $R^+ = \{ i : |V_i| = \lfloor n/k \rfloor+1\}$ and $R^- = \{ i : |V_i| = \lfloor n/k \rfloor\}$. \item \emph{Objective function}. For a given solution $s$, let $d(s)$ be the sum of the distances of every edge in $E(V_i)$ for all $i \in \{1,\ldots,k\}$ and $I(s) = \{ i : W_i < W_L \lor W_i > W_U\}$. The objective function is defined as $f(s) = d(s) + M|I(s)|$ where $M$ is a big value. Note that solutions satisfying $I(s) \neq \varnothing$ are feasible but penalized in $f(s)$. \item \emph{Stopping criterion}. 
The algorithm stops when a maximum number of iterations is reached. \item \emph{Aspiration criterion}. Let $s$ be the current solution and $s^*$ be the best known solution so far. When $f(s) < f(s^*)$, $s$ replaces $s^*$. \item \emph{Set of features}. A solution $s$ presents a feature $(v, i)$ if and only if $v \in V_i$. \item \emph{Initial solution}. For all $i \in \{1,\ldots,k\}$, do $V_i = \{ v_j : (j-1) \mod k = i-1 \}$. \item \emph{Neighborhood of a solution}. For a given solution $s = (V_1, \ldots, V_k$, $W_1, \ldots, W_k$, $R^+, R^-)$, a neighbor $s' = (V'_1, \ldots, V'_k$, $W'_1, \ldots, W'_k$, $R'^+, R'^-)$ of $s$ is generated with two schemes: \begin{itemize} \item \emph{1-move} (only applicable when $n$ does not divide $k$). For a given $v \in V_i$ such that $i \in R^+$ and a given $j \in R^-$, consider $s'$ such that node $v$ is moved from $V_i$ to $V_j$. Formally, $V'_j = V_j \cup \{ v\}$, $W'_j = W_j + w_v$, $V'_i = V_i \backslash \{v\}$, $W'_i = W_i - w_v$, $V'_l = V_l$ and $W'_l = W_l$ for all $l \in \{1,\ldots,k\} \backslash \{i, j\}$, $R'^+ = R^+ \cup \{j\} \backslash \{i\}$ and $R'^- = R^- \cup \{i\} \backslash \{j\}$. \item \emph{2-exchange}. For a given $v \in V_i$ and $u \in V_j$ such that $i < j$, consider $s'$ such that $v$ is moved to $V_j$ and $u$ is moved to $V_i$. Formally, $V'_j = (V_j \backslash \{u\}) \cup \{v\}$, $W'_j = W_j - w_u + w_v$, $V'_i = (V_i \backslash \{v\}) \cup \{u\}$, $W'_i = W_i - w_v + w_u$, $V'_l = V_l$ and $W'_l = W_l$ for all $l \in \{1,\ldots,k\} \backslash \{i, j\}$, $R'^+ = R^+$ and $R'^- = R^-$. \end{itemize} \item \emph{Selection of a feature to add in the tabu list}. Once a movement from $s$ to $s'$ is performed, $(v,i)$ is stored on the tabu list. \item \emph{Tabu tenure}. Once an element is added to the tabu list, it remains there for the next $t$ iterations, where $t$ is an integer randomly generated with a uniform distribution (one of the criteria used by \citet{TABUIMPROVED}). 
\end{itemize} Since one pretends the algorithm to be as fast as possible, the value of $f(s')$ should be computed by adding or subtracting the corresponding difference to $f(s)$. Also, $M$ should not be too high in order to avoid round-off errors. In our case, $M$ was set to $10000$. \reviewd{A difference between \textsc{TabuEqCol} and our algorithm is that we are interested in feasible solutions for the B$k$-WCC but \textsc{TabuEqCol} does not contemplate weight constraints. For that reason, the entire process is divided in two stages. The first one consists of searching a solution $s$ with $I(s) = \varnothing$ while the second one is focused on minimizing $d(s)$.} We observed that, during the first stage, the search needs to be more diversified. Therefore, different range of values for the tabu tenure in each stage are used. If $f(s) \geq M$ (1st. stage) then $t \in [5,40]$ and if $f(s) < M$ (2nd. stage) then $t \in [5,20]$. This method can also be used for obtaining feasible solutions of GB$k$-WWC: \reviewd{simply consider $d_{ij} = M$ for those edges $\{i,j\} \notin E$; if $f(s) < M$ then $s$ is a feasible solution of GB$k$-WWC.} However, if the density of edges in the graph $G$ is not high, it would be convenient to exploit the structure of $G$ in the computation of the neighborhood of a solution and tabu tenure, as in the case of \textsc{TabuEqCol}. \section{Computational experiments} \label{SectionPracticalExperience} In this section, some computational experiments are presented. They consist of comparing different ways of solving the B$k$-WWC, called \emph{Strategies} and denoted by $\mathcal{S}$. Comparisons are carried out over random instances and, at the end, the resolution of a real-world instance is addressed. The improvement in the results are shown incrementally. That is, each strategy outperforms the previous one. 
All the experiments were carried out on a machine equipped with an Intel i5 2.67GHz, 4Gb of memory, the operating system Ubuntu 16.4 and the solver GuRoBi 6.5. All instances as well as the source code of the implementation can be downloaded from: \begin{center} \texttt{http://www.fceia.unr.edu.ar/$\sim$daniel/stuff/football.zip} \end{center} Random instances were generated by computing the coordinates $(x,y)$ of $n$ points with an uniform distribution in the domain $[-100, 100] \times [-100, 100]$. Then, for every pair of points $i$, $j$, $d_{ij}$ is assigned the euclidean distance between both points. Weights $w_i$ are random values generated with a uniform distribution in the range $[0.1, 0.9]$, and $W_L = \mu (n / k) - \sigma$, $W_U = \mu (n / k) + \sigma$ where $\mu$ and $\sigma$ are the average and standard deviation of the weights of all points. Combinations of $n$ are $k$ were chosen so that $k$ does not divide $n$, similar to those values of real instances. Results of the experiments are summarized in Tables \ref{tab:1} to \ref{tab:5}, whose format are as follows: first and second columns display the number of the instance and its optimal value, and remaining columns show the number of $B\&B$ nodes evaluated and the time elapsed in seconds of the optimization. A time limit of one hour is imposed. For those instances that are not solved within this limit, the gap percentage between the best lower and upper bound found is reported. Last row displays the average over all instances, except for Tables \ref{tab:3} and \ref{tab:5} (marked with a dagger) where it shows the average over those instances solved by all strategies being compared (i.e.~ 21, 23, 24, 26, 28, 29 and 30 for Table \ref{tab:3}; 42, 43, 45, 47, 49 and 50 for Table \ref{tab:5}). Boldface indicates the best results.\\ \reviewd{\emph{$\mathcal{F}_1$ (with dummy nodes) vs. 
$\mathcal{F}_2$}.} Strategy 1 ($\mathcal{S}_1$) consists of the resolution of $\mathcal{F}_1$, after addition of $k-(n\!\mod k)$ dummy nodes, while $\mathcal{S}_2$ is the direct resolution of $\mathcal{F}_2$. Both use GuRoBi at its default settings. Instead of using a single constraint, such as \eqref{rest_ceros}, variables $x_{ij}$ are directly fixed to zero in a straightforward manner. In fact, there are three cases where $x_{ij}$ are set to zero: \begin{itemize} \item $i, j \in \tilde V$ (only when dummy nodes are present, see constraint \eqref{rest_ceros}). \item $\{i,j\} \in E' \backslash E$ (only when $G$ is not complete). \item $w_i + w_j > W_U$ (see Corollary \ref{ZEROW}). \end{itemize} Note that, according to the results reported in the tables, $\mathcal{S}_2$ performs better than $\mathcal{S}_1$ for larger instances. In particular, $\mathcal{S}_2$ solves instance 27 to optimality and reports better gaps than $\mathcal{S}_1$ for instances 22 and 25.\\ \reviewd{\emph{Tabu search vs. GuRoBi built-in heuristics}.} The next step is to use the tabu search method proposed in the previous section. This metaheuristic generates good initial solutions. In particular, it gives the optimal one for almost all instances given in the tables in a reasonable amount of iterations (the unique exception was instance 19 where it could not reach the optimal solution after 1000000 iterations, for two different seeds). Based on experimentation with several random instances and initial seeds, we obtained a formula by linear regression for the limit in the number of iterations needed: \[ it_{limit} = \left\lfloor e^{0.26571 n - 0.052978} \right\rfloor \] Now, in $\mathcal{S}_3$, $it_{limit}$ iterations of the tabu search are executed and the best solution found is provided as an initial integer solution to GuRoBi. Then, $\mathcal{F}_2$ is solved with GuRoBi primal heuristics turned off (Heuristics, PumpPasses and RINS parameters are set to zero). 
In order to make a fair comparison, time reported in tables includes time spent by tabu search. Clearly, $\mathcal{S}_3$ outperforms $\mathcal{S}_2$. In particular, $\mathcal{S}_3$ was able to solve instance 25 by optimality and presents a lower gap than $\mathcal{S}_2$ for instance 22.\\ \reviewd{\emph{Addition of triangular inequalities on demand}.} Formulation $\mathcal{F}_2$ (and also $\mathcal{F}_1$) has a $O(n^3)$ number of constraints due to triangular inequalities (precisely, $n(n-1)(n-2)/2$). Although in presence of variables set to zero some of them become redundant and one can omit them when performing the initial relaxation (e.g.\! for a given $i < j < l$, if $x_{ij}=0$ then inequalities \eqref{restr_11} and \eqref{restr_12} are redundant but \eqref{restr_13} is not), its number is still high. Since the number of variables is $O(n^2)$ it is expected that several triangular inequalities are not active in the fractional solution of the initial relaxation. We noticed that there exists a relationship between being active and the distances of positive variables in its support: the lower the value is $d_{ij} + d_{jl}$, the higher the probability of the inequality $x_{ij} + x_{jl} - x_{il} \leq 1$ is active. The following experiment reveals this relationship. \reviewd{ Let $\mathcal{T}$ be the set of triangular inequalities and let $\tilde d(t)$ be a value assigned to each $t \in \mathcal{T}$ as follows: $$\tilde d(t) = \begin{cases} d_{ij} + d_{jl}, &\textrm{when $t$ is a constr. \eqref{restr_11}},\\ d_{ij} + d_{il}, &\textrm{when $t$ is a constr. \eqref{restr_12}},\\ d_{jl} + d_{il}, &\textrm{when $t$ is a constr. \eqref{restr_13}}. \end{cases}$$ We first order all triangular inequalities according to the value $\tilde d(t)$ from lowest to highest and we make an equitable partition $I_1, I_2, \ldots, I_{10}$ of the set $\mathcal{T}$ in 10 deciles. 
That is, $\cup_{r=1}^{10} I_r = \mathcal{T}$ and $|I_r| = \lceil\frac{n+r-10}{10}\rceil$ for all $r = 1,\ldots,10$ where $d(t) \leq d(t')$ for $t \in I_r$ and $t' \in I_{r+1}$.} Then, we solve $\mathcal{F}_2$ without these inequalities in its initial relaxation and, whenever GuRobi obtains a solution (fractional or integer) violating some of them, they are added to the current relaxation: if the solution is fractional, they are added as cuts and if the solution is integer, they are added as lazy constraints. Histograms with the averages (over 10 instances of 44 nodes, each instance having $|\mathcal{T}|=39732$) of percentages of triangular inequalities generated per decile $I_r$ are shown in Figures \ref{fig:1} and \ref{fig:2}. In the former, only those inequalities added at root node of $B\&B$ tree are considered. In Figure \ref{fig:2}, all inequalities are counted. In particular, if the same inequality is generated in two different nodes, it is counted twice. Note that, at root node, most of the inequalities from $I_1$ and $I_2$ are generated. In addition, during the B\&B process, inequalities from $I_8$, $I_9$ and $I_{10}$ are rarely violated. Based on these observations and additional experimentation, \reviewd{we define the strategy} $\mathcal{S}_4$ as $\mathcal{S}_3$ with the following differences: \begin{itemize} \item Only inequalities from $I_1$ and $I_2$ are considered in the initial relaxation. \item Each time an integer solution is found, inequalities from $I_r$ with $r \in \{3,\ldots,10\}$ are checked and violated ones are added as lazy constraints. \item Each time a fractional solution is found, inequalities from $I_r$ with $r \in \{3,\ldots,m\}$ are checked and those that are violated by at least 0.1 units, are added as cuts. If the current node is root, $m = 10$, otherwise $m = 7$. \item Some GuRoBi cuts are disabled (Clique, FlowCover, FlowPath, Implied, MIPSep, Network and ModK). 
\end{itemize} Observe that $\mathcal{S}_4$ behaves much better than $\mathcal{S}_3$. This strategy needs less than the half of time used by the preceding one. In addition, it can solve instance 22 to optimality.\\ \reviewd{\emph{Separation of valid inequalities}. Here, we experiment with additional custom separation routines embedded in our code, where 2-partition inequalities and the new families of valid inequalities presented in Section \ref{valid_inequalities} are considered. Two experiments are carried out, detailed below. } \reviewd{ \noindent Experiment 1: In this experiment, we analyze the effectiveness in terms of reduction in the number of B\&B nodes when Weight-Cover, Weight-Lowerbound and Weight-Upperbound inequalities are used. We also gather helpful information that is further used for the design of the separation routines. For each family, random instances of 44 nodes are solved and, during the optimization, the inequalities satisfying the conditions given in Section \ref{valid_inequalities} are enumerated exhaustively and added when they are violated by an amount of at least $1\%$ of the r.h.s.\\ Regarding Weight-Cover inequalities, we restrict the enumeration to $|T| \in \{4,5\}$ since for $|T| = 3$ or $|T| \geq 6$ they are seldom violated. Regarding Weight-Upperbound inequalities, we impose an additional limit of 1500 nodes since its enumeration in each node consumes a fairly long time. This limit is reached on instances 22, 25 and 27.\\ Table \ref{tab:0} reports the total number of cuts generated and the number of B\&B nodes evaluated for each family of valid inequalities, and the relative gap when 1500 nodes are reached (only for Weight-Upperbound). The three columns entitled ``only GuRoBi'' display the same parameters (number of cuts, B\&B nodes and relative gap at 1500 nodes) generated by strategy $\mathcal{S}_4$. 
The last three rows show the averages over all instances, the averages over instances 21, 23, 24, 26, 28, 29 and 30 (marked with a dagger), and the average of gap over instances 22, 25 and 27 (marked with a double dagger).\\ We conclude that the addition of Weight-Cover and Weight-Upperbound cuts make a substantial reduction in the number of nodes, whereas Weight-Lowerbound cuts occur less frequently and the reduction in the number of nodes is marginal. We also noticed that violated Weight-Cover inequalities are usually composed of nodes $i$ with high values of the expression $w_i + \sum_{j \in V \setminus \{i\}} w_j x^*_{ij}$, where $x^*$ is the current fractional solution. However, violated Weight-Upperbound inequalities do not seem to have an obvious structure that can be exploited. We only consider Weight-Cover inequalities in the next experiment. } \reviewd{ \noindent Experiment 2: The goal is to compare the performance of $\mathcal{S}_4$ when different combinations of separation routines are used. For each combination, random instances of 48 and 54 nodes are solved (see Tables \ref{tab:4} and \ref{tab:5}). The execution of these routines is performed only when no triangular inequalities were generated for the current fractional solution, denoted by $x^*$. In particular, the separation of 2-partition inequalities is based on the procedure given by \citet{GROTSCHEL} and \citet{LABBE2}.} \begin{itemize} \item \emph{2-partition}. The following procedure is repeated for each $v \in V$. First, compute $W := \{u \in V \backslash \{v\} : x^*_{uv} \notin \mathbb{Z} \}$. If $|W| \leq 4$, then stop. Otherwise, set $F := \emptyset$ and repeat the following 5 times, whenever possible. Pick two random nodes $i, j$ from $W\backslash F$ and set $T := \{i,j\}$. Keep picking nodes $r$ from $W\backslash F$ such that $x^*_{rv} - \sum_{t\in T} x^*_{rt} > 0$ and add $r$ to $T$ until no more nodes are found. 
Then, check if the 2-partition inequality \eqref{part2eq} with $S=\{v\}$ and $T$ is violated and, in that case, add it as a cut. Finally, make $F := F \cup T$ (even when the inequality is not violated). The set $F$ (of ``forbidden nodes'') prevents from generating cuts with similar support. \item \emph{Weight-Cover}. First, order nodes $i \in V$ according to the value $q^*(i) := w_i + \sum_{j \in V \setminus \{i\}} w_j x^*_{ij}$ from highest to lowest, i.e.$\!$ $V = \{v_1, v_2, \ldots, v_n\}$ such that $q^*(v_j) \geq q^*(v_{j+1})$ for all $j$. For each $t \in \{4,5\}$ do the following. Consider every $T$ composed of $t - 2$ nodes from $\{v_1, v_2, \ldots, v_{t+2}\}$ and 2 more nodes from $V$ (note that $|T| = t$ and the procedure explores $O(n^2)$ combinations). If $w(T) > W_U$ and $w(T \setminus \{l\}) \leq W_U$ for all $l \in T$, check the $T$-Weight-Cover inequality. If it is violated, add it as a cut. \end{itemize} \reviewd{ In the given procedures, an inequality is considered violated if the amount of violation is at least $0.1$. As one can see from the tables, 2-partition together with Weight-Cover cuts is the best choice. We define the strategy $\mathcal{S}_5$ as $\mathcal{S}_4$ with both separation routines enabled.\\ } \review{\emph{Resolution of a real-world instance}. As mentioned in the introduction, in the zonal stage of the second category of the Ecuadorian football league, a championship including the two best teams of each provincial associations is played. The set of teams must be partitioned in 8 groups to play a Round Robin Tournament in each one of them. A regulation imposes that two teams of the same provincial association must not belong to the same group. During the 2015 season, $44$ teams (22 provincial associations) participated in the tournament, and they were divided in $4$ groups of $5$ teams and $4$ groups of $6$ teams.} \review{ The nodes of a graph are associated with the venues of the teams. 
We denote the nodes by $2i$ and $2i+1$, corresponding to the venues of the best two teams of each provincial association $i$, for all $i=0,1,\ldots,21$, and edge $\{i,j\}$ is included if and only if the teams associated to nodes $i,j$ could potencially belong to the same group. In order to satisfy the regulation mentioned before, edges of the form $\{2i,2i+1\}$ do not appear in the graph. Thus, our realignment instance consists of $44$ teams which must be partitioned in $8$ groups, and the graph $G = (V,E)$ is a particular complement of a matching, i.e.~ $V = \{0,\ldots,43\}$ and $E = \{ \{i,j\} : 0 \leq i < j \leq 43 \} \backslash \{ \{2i,2i+1\} : 0 \leq i \leq 21 \}$. } For solving our instance, we made a preliminary test of our two best strategies (i.e.~ $\mathcal{S}_4$ vs. $\mathcal{S}_5$). A time limit of one hour was imposed to them. None of them was able to solve the instance within this limit, but the relative gap reported was 3.22\% for $\mathcal{S}_4$ against 2.36\% for $\mathcal{S}_5$. Hence, $\mathcal{S}_5$ was chosen for solving the instance without time limit. Below, we resume some highlights about the optimization: \begin{itemize} \item \review{\emph{Instance:} $|V|=44$, $k = 8$, $W_L = 2.08412$ and $W_U = 4.14835$. } \item \emph{Iterations performed/time spent by tabu search}: 113352 iterations (6.8 sec.). \item \emph{Variables and constraints of the initial relaxation:} 913 vars., 7444 constr. \item \emph{Cuts generated}: Gomory (18), Cover (65), MIR (1620), GUB cover (98), Zero-half (1278), triangular (1935), 2-partition (3965), Weight-Cover (14). \item \emph{Nodes explored and total time elapsed}: 34573 (68374 sec.). \item \emph{Optimal value}: 21523 km, found by tabu search at iter. 3549 (0.21 sec.). \item \emph{Gap evolution:} 2.36\% after 1 hour, 1.08\% after 4 hours. \end{itemize} Since a Double Round Robin Tournament is considered, the total distance is 86092 km. 
\section{Introduction}
\label{sec:intro}
The tremendous growth of video data has resulted in a significant demand for tools that can accelerate and simplify the production of sports highlight packages for more effective browsing, searching, and content summarization. In a major professional golf tournament such as the Masters, for example, with 90 golfers playing multiple rounds over four days, video from every tee, every hole, and multiple camera angles can quickly add up to hundreds of hours of footage. Yet, most of the process for producing highlight reels is still manual, labor-intensive, and not scalable.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{example.jpg}
\end{center}
\vspace{-0.3cm}
\caption{The H5 system dashboard for auto-curation of sports highlights. Highlights are identified in near real-time (shown in the right panel) with an associated excitement level score. The user can click on the icons in the right panel to play the associated video in the center, along with the scores for each excitement measure. }
\label{fig:short}
\end{figure}
In this paper, we present a novel approach for auto-curating sports highlights, showcasing its application in extracting golf play highlights. Our approach uniquely fuses information from the {\em player}, {\em spectators}, and the {\em commentator} to determine a game's most exciting moments. More specifically, we measure the excitement level of video segments based on the following multimodal markers:
\vspace{-0.2cm}
\begin{itemize}[noitemsep]
\item {\bf Player reaction:} visual action recognition of the player's celebration (such as high fives or fist pumps);
\item {\bf Spectators:} audio measurement of crowd cheers;
\item {\bf Commentator:} excitement measure based on the commentator's tone of voice, as well as exciting words or expressions used, such as ``beautiful shot'';
\end{itemize}
\vspace{-0.2cm}
These indicators are used along with the detection of TV graphics (e.g., lower-third banners) and shot-boundary detection to accurately identify the start and end frames of key shot highlights, each with an overall excitement score. The selected segments are then added to an interactive dashboard for quick review and retrieval by a video editor or broadcast producer, speeding up the process by which these highlights can be shared with fans eager to see the latest action. Figure \ref{fig:short} shows the interface of our system, called High-Five ({\bf High}lights {\bf F}rom {\bf I}ntelligent {\bf V}ideo {\bf E}ngine), or H5 for short.

In our approach, we exploit how one modality can guide the learning of another, with the goal of reducing the cost of manual training data annotation. In particular, we show that we can use TV graphics and OCR as a proxy to build rich feature representations for player recognition from {\em unlabeled} video, without requiring costly training data annotation. Our audio-based classifiers also rely on feature representations learned from unlabeled video \cite{soundnetNIPS16}, and are used to constrain the training data collection of other modalities (e.g., we use the crowd cheer detector to select training data for player reaction recognition).

Personalized highlight extraction and retrieval is another unique feature of our system. By leveraging TV graphics and OCR, our method automatically gathers information about the player's name and the hole number. This metadata is matched with the relevant highlight segments, enabling searches like ``show me all highlights of player X at hole Y during the tournament'' and personalized highlight generation based on a viewer's favorite players.
In summary, the key {\bf contributions} of our work are listed below:
\vspace{-0.2cm}
\begin{itemize}[noitemsep]
\item We present a first-of-its-kind system for automatically extracting golf highlights by uniquely fusing multimodal excitement measures from the player, spectators, and commentator. In addition, by automatically extracting metadata via TV graphics and OCR, we allow personalized highlight retrieval or alerts based on player name, hole number, location, and time.
\item Novel techniques are introduced for learning our multimodal classifiers without requiring costly manual training data annotation. In particular, we build rich feature representations for player recognition without manually annotated training examples.
\item We provide an extensive evaluation of our work, showing the importance of each component in our proposed approach, and comparing our results with professionally curated highlights. Our system has been successfully demonstrated at a major golf tournament, processing live streams and extracting highlights from four channels during four consecutive days.
\end{itemize}
\vspace{-0.2cm}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.88\linewidth]{framework.jpg}
\end{center}
\caption{Our approach consists of applying multimodal (video, audio, text) marker detectors to measure the excitement levels of the player, spectators, and commentator in video segment proposals. The start/end frames of key shot highlights are accurately identified based on these markers, along with the detection of TV graphics and visual shot boundaries. The output highlight segments are associated with an overall excitement score as well as additional metadata such as the player name, hole number, shot information, location, and time.
}
\label{fig:framework}
\end{figure*}
\section{Related Work}
\label{sec:related}
\vspace{0.05in}
{\bf Video Summarization.} There is a long history of research on video summarization \cite{ma2002user, rav2006making,zhang2016video}, which aims to produce short videos or keyframes that summarize the main content of long full-length videos. Our work also aims at summarizing video content, but instead of optimizing for representativeness and diversity as traditional video summarization methods do, our goal is to find the highlights or exciting moments in the videos. A few recent methods address the problem of highlight detection in consumer videos \cite{sun2014ranking,yang2015unsupervised,yao2016highlight}. Our focus, instead, is on sports videos, which offer more structure and more objective metrics than unconstrained consumer videos.
\vspace{0.05in}
{\bf Sports Highlights Generation.} Several methods have been proposed to automatically extract highlights from sports videos based on audio and visual cues. Example approaches include the analysis of replays \cite{zhao2006highlight}, crowd cheering \cite{xiong2003audio}, motion features \cite{xiong2003generation}, and closed captioning \cite{zhang2002event}. More recently, Bettadapura et al. \cite{bettadapura2016leveraging} used contextual cues from the environment to understand the excitement levels within a basketball game. Tang and Boring \cite{tang2012epicplay} proposed to automatically produce highlights by analyzing social media services such as Twitter. Decroos et al. \cite{decroos2017predicting} developed a method for forecasting sports highlights to achieve more effective coverage of multiple games happening at the same time. Different from existing methods, our proposed approach offers a unique combination of excitement measures to produce highlights, including information from the {\em spectators}, the {\em commentator}, and the {\em player} reaction.
In addition, we enable personalized highlight generation or retrieval based on a viewer's favorite players.
\vspace{0.05in}
{\bf Self-Supervised Learning.} In recent years, there has been significant interest in methods that learn deep neural network classifiers without requiring a large number of manually annotated training examples. In particular, {\em self-supervised} learning approaches rely on auxiliary tasks for feature learning, leveraging sources of supervision that are usually available ``for free'' and in large quantities to regularize deep neural network models. Examples of auxiliary tasks include the prediction of ego-motion \cite{agrawal2015learning,jayaraman2015learning}, location and weather \cite{wang2016walk}, spatial context or patch layout \cite{noroozi2016unsupervised,pathak2016context}, image colorization \cite{zhang2016colorful}, and temporal coherency \cite{mobahi2009deep}. Aytar et al. \cite{soundnetNIPS16} explored the natural synchronization between vision and sound to learn an acoustic representation from {\em unlabeled} video. We leverage this work to build audio models for crowd cheering and commentator excitement with a few training examples, and use these classifiers to constrain the training data collection for player reaction recognition. More interestingly, we exploit the detection of TV graphics as a free supervisory signal to learn feature representations for player recognition from unlabeled video.
\section{Technical Approach}
\label{sec:system}
\subsection{Framework}
Our framework is illustrated in Figure \ref{fig:framework}. Given an input video feed, we extract in parallel four multimodal markers of potential interest: the player's celebration action (detected by a visual classifier), crowd cheer (with an audio classifier), and commentator excitement (detected by a combination of an audio classifier and a salient-keyword extractor applied after a speech-to-text component).
We employ the audience cheer detector for seeding a potential moment of interest. Our system then computes shot boundaries for that segment, as exemplified in Figure \ref{fig:startend}. The start of the segment is identified by graphic content overlaid on the video feed. By applying an OCR engine to the graphic, we can recognize the name of the player involved and the hole number, as well as additional metadata. The end of the segment is identified with standard visual shot-boundary detection applied in a window of a few seconds after the occurrence of the last excitement marker. Finally, we compute a combined excitement score for the segment proposal based on a combination of the individual markers. In the following we describe each component in detail.
\subsection{Audio-based Markers Detection}
\label{ssec:audio}
Crowd cheering is perhaps the most genuine form of approval of a player's shot within the context of any sport. Specifically in golf, we have observed that cheers almost always accompany important shots. Most importantly, crowd cheer can point to the fact that an important shot was just played (indicating the end of a highlight). Another important audio marker is excitement in the commentators' tone while describing a shot. Together, those two audio markers play an important role in determining the position and excitement level of a potential highlight clip. In this work, we leverage SoundNet \cite{soundnetNIPS16} to construct audio-based classifiers for both crowd cheering and commentator tone excitement. SoundNet uses a deep 1-D convolutional neural network architecture to learn representations of environmental sounds from nearly 2 million unlabeled videos. Specifically, we extract features from the {\it conv-5} layer to represent 6-second audio segments. The choice of the {\it conv-5} layer is based upon experiments and the superior results reported in \cite{soundnetNIPS16}. The dimensionality of the feature is 17,152.
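To make the feature-plus-linear-classifier recipe concrete, the sketch below (illustrative only, not our production code) trains such a detector with scikit-learn, assuming the conv-5 activations have already been extracted offline; the random arrays merely stand in for real SoundNet features:

```python
import numpy as np
from sklearn.svm import LinearSVC

FEATURE_DIM = 17152  # size of the SoundNet conv-5 feature used in this work

def train_audio_classifier(pos_feats, neg_feats):
    """Fit a linear SVM on precomputed conv-5 features
    (one row per 6-second audio segment)."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    return LinearSVC(C=1.0).fit(X, y)

# Random arrays stand in for real SoundNet activations here.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(20, FEATURE_DIM))
neg = rng.normal(-1.0, 1.0, size=(20, FEATURE_DIM))
clf = train_audio_classifier(pos, neg)
margin = clf.decision_function(pos[:1])  # positive margin -> cheer detected
```

Because the pre-trained representation is so rich, a simple linear decision boundary on top of it is often sufficient, which is what makes training with only a few dozen labeled segments feasible.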
One key advantage of using such a rich representation, pre-trained on millions of environmental sounds, is the ability to build powerful linear classifiers for cheer and commentator tone excitement detection with relatively few audio training examples, similarly to what has been observed for image classification \cite{razavian2014baseline} (for example, we started with 28 positive and 57 negative training samples for the audio-based commentator excitement classifier). We adopt an iterative refinement bootstrapping methodology to construct our audio-based classifiers. We learn an initial classifier with relatively few audio snippets and then perform bootstrapping on a distinct test set. This procedure is repeated to improve the accuracy at each iteration.
\vspace{-0.2cm}
\subsubsection{Crowd Cheer Detection}
\label{ssec:cheer}
Cheer samples from 2016 Masters replay videos, as well as examples of cheer obtained from YouTube, were used to train the audio cheer classifier using a linear SVM on top of deep features. For negative examples, we used audio tracks containing regular speech, music, and other kinds of non-cheer sounds found in Masters replays. In total, our final training set consisted of 156 positive and 193 negative samples (6 seconds each). The leave-one-out cross-validation accuracy on the training set was 99.4\%.
\vspace{-0.2cm}
\subsubsection{Commentator Excitement Detection}\label{ssec:tone}
We propose a novel commentator excitement measure based on voice tone and speech-to-text analysis.

{\bf Tone-based:} Besides recognizing crowd cheer, we employ the deep SoundNet audio features to model excitement in the commentators' tone. As above, we employ a linear SVM classifier for modeling. For negative examples, we used audio tracks containing regular speech, music, regular cheer (without commentator excitement), and other sounds from the 2016 Masters replays that do not contain an excited commentator.
In total, the training set for audio-based commentator excitement recognition consisted of 131 positive and 217 negative samples. The leave-one-out cross-validation accuracy on the training set was 81.3\%.

{\bf Text-based:} While the commentator's tone can say a lot about how excited they are while describing a particular shot, the level of their excitement can also be gauged from another source, that is, the expressions they use. We created a dictionary of 60 expressions (words and phrases) indicative of excitement (e.g., ``great shot'', ``fantastic'') and assign to each of them an excitement score ranging from 0 to 1. We use a speech-to-text service\footnote{\url{https://www.ibm.com/watson/developercloud/speech-to-text.html}} to obtain a transcript of the commentators' speech and create an excitement score as an aggregate of the scores of the individual expressions in it. When assigning a final excitement score to a highlight (as described in Section~\ref{ssec:fusion}), we average the tone-based and text-based commentator excitement to obtain the overall level of excitement of the commentator. The two scores, obtained from complementary sources of information, create a robust measure of commentator excitement, as exemplified in Figure~\ref{fig:commentator}.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{commentator.jpg}
\end{center}
\vspace{-0.3cm}
\caption{Commentator excitement score computation based on (i) audio tone analysis and (ii) speech-to-text analysis.}
\label{fig:commentator}
\end{figure*}
\subsection{Visual Marker Detection}
\label{ssec:visual}
\subsubsection{Player Reaction}
\label{ssec:action}
Understanding the reaction of a player is another important cue to determine an interesting moment of a game. In our work, we train an action recognizer to detect a player celebrating. To the best of our knowledge, measuring excitement from the player's reaction for highlight extraction has not been explored in previous work.
We adopt two strategies to reduce the cost of training data collection and annotation for action recognition. First, we use our audio-based classifiers (crowd cheer and commentator excitement) at a low threshold to select a subset of video segments for annotation, as in most cases the player celebration is accompanied by crowd cheer and/or commentator excitement. Second, inspired by \cite{ma2017less}, we use still images, which are much easier to annotate and allow training with fewer computational resources compared to video-based classifiers. Figure \ref{fig:action} shows examples of images used to train our model. At test time, the classifier is applied at every frame and the scores are aggregated for the highlight segment, as described in the next section. Initially, we trained a classifier with 574 positive examples and 563 negative examples. The positive examples were sampled from 2016 Masters replay videos and also from the web. The negative examples were randomly sampled from the Masters videos. We used the VGG-16 model \cite{simonyan2014very}, pre-trained on ImageNet, as our base model. The Caffe \cite{jia2014caffe} deep learning library was used to train the model with stochastic gradient descent, learning rate 0.001, momentum 0.9, and weight decay 0.0005. Then, we performed three rounds of hard negative mining on Masters videos from previous years, obtaining 2,906 positive examples and 6,744 negative ones. The classifier fine-tuned on this data achieved 88\% accuracy on a separate test set containing 460 positive and 858 negative images.
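As a small illustration of the test-time scheme just described, per-frame celebration scores can be pooled into a single segment-level measure; max-pooling over confidently positive frames is one plausible rule, shown here purely for illustration (the exact aggregation in our system is tuned on the training set):

```python
def aggregate_action_scores(frame_scores, threshold=0.5):
    """Pool per-frame celebration probabilities (frames sampled at 1 fps)
    into one segment-level action score. Frames below the confidence
    threshold are ignored; the max over the rest is returned."""
    positives = [s for s in frame_scores if s >= threshold]
    return max(positives) if positives else 0.0

# A 10-second window around a detected cheer, scored at 1 fps:
scores = [0.1, 0.2, 0.8, 0.95, 0.6, 0.3, 0.1, 0.05, 0.2, 0.4]
segment_score = aggregate_action_scores(scores)  # -> 0.95
```

Max-pooling makes the segment score robust to the many non-celebration frames that surround a brief fist pump or high five.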
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{action.jpg}
\end{center}
\vspace{-0.3cm}
\caption{Examples of still images used to train our action recognition model.}
\label{fig:action}
\end{figure}
\subsubsection{TV Graphics, OCR, and Shot-boundaries}
\label{ssec:shot}
In professional golf tournament broadcasts, a golf swing is generally preceded by a TV graphic with the name of the player about to hit the golf ball and other information about the shot. The detection of such markers is straightforward, as they appear in specific locations of the image and have distinct colors. We check for such colors in the vicinity of pre-defined image locations (which are fixed across all broadcasting channels) to determine the TV graphics bounding box. One could use a more general approach by training a TV graphics detector (for example, via Faster R-CNN \cite{renNIPS15fasterrcnn} or SSD \cite{liu2016ssd}); however, this was beyond the scope of this work. We then apply OCR (using the Tesseract engine \cite{smith2007overview}) within the detected region in order to extract metadata such as the name of the player and the hole number. This information is associated with the detected highlights, allowing personalized queries and highlight generation based on a viewer's favorite players. We also use standard shot-boundary detection based on color histograms \cite{amir2003ibm} as a visual marker to better determine the end of a highlight clip.
\subsection{Highlight Detection}
\label{ssec:fusion}
Figure \ref{fig:startend} illustrates how we incorporate multimodal markers to identify segments as potential highlights and assign excitement scores to them. The system starts by generating {\bf segment proposals} based on the crowd cheering marker. Specifically, crowd cheering detection is performed on a continuous segment of the stream, and positive scores are tapped to point to potentially important cheers in the audio.
Adjacent 6-second segments with positive scores are merged to mark the end of a bout of contiguous crowd cheer. Each distinct cheer marker is then evaluated as a potential candidate for a highlight using the presence of a TV graphics marker containing a player name and hole number within a preset duration threshold (set at 80 seconds). The beginning of the highlight is set as 5 seconds before the appearance of the TV graphics marker. In order to determine the end of the clip, we perform shot-boundary detection in a 5-second video segment starting from the end of the cheer marker. If a shot boundary is detected, the end of the segment is set at the shot change point. Segments thus obtained constitute valid highlight segment proposals for the system. The highest cheer score value among the adjacent segments that are merged is set as the crowd cheer marker score for a particular segment proposal. Once those baseline segment scores have been computed, we perform a further search to determine if the segment contains a player celebration action, excitement in the commentators' tone, or exciting words or expressions used to describe the shot. It is important to note that the cheer and commentator excitement predictions are performed on every 6-second audio segment tapped from the video stream. Similarly, the visual player celebration action recognition is performed on frames sampled at 1 fps. In order to determine the overall excitement level of a video segment, we incorporate the available evidence from all audio, visual, and text-based classifiers that fall within a segment proposal. Specifically, we aggregate and normalize the positive scores for these markers within a time-window around the detected crowd cheer marker. For player reaction, we set this window to be 15 seconds, while for audio commentator excitement the window was set to be 20 seconds.
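The proposal logic described above can be sketched as follows; function names and the exact merging rules are illustrative simplifications of our pipeline, not its literal implementation:

```python
SEGMENT_LEN = 6  # seconds per audio window

def cheer_proposals(cheer_scores):
    """Merge adjacent positive 6-second cheer windows into cheer events.

    cheer_scores: list of (window_start_time, score) for consecutive windows.
    Returns one (event_end_time, peak_score) per merged run of positives.
    """
    events, run = [], []
    for start, score in cheer_scores:
        if score > 0:
            run.append((start, score))
        elif run:
            events.append((run[-1][0] + SEGMENT_LEN, max(s for _, s in run)))
            run = []
    if run:
        events.append((run[-1][0] + SEGMENT_LEN, max(s for _, s in run)))
    return events

def make_highlight(graphics_time, cheer_end, shot_boundary=None,
                   max_gap=80, lead=5, tail=5):
    """Anchor a highlight around a cheer event, as in the text: valid only
    if a TV-graphics marker precedes the cheer by at most max_gap seconds;
    start `lead` seconds before the graphics; end at a shot boundary if one
    is found shortly after the cheer, else at cheer end + `tail`."""
    if not (0 <= cheer_end - graphics_time <= max_gap):
        return None
    end = shot_boundary if shot_boundary is not None else cheer_end + tail
    return (graphics_time - lead, end)

events = cheer_proposals([(0, 0), (6, 0.7), (12, 0.9), (18, 0)])  # -> [(18, 0.9)]
```

The peak score of the merged run becomes the crowd-cheer component of the proposal's score, matching the description above.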
Finally, we obtain the overall excitement score of a segment proposal using a linear fusion of the scores obtained from the crowd cheer, commentator excitement (audio- and text-based), and player celebration action markers. The weights for the crowd cheer, commentator excitement (audio and text), and player reaction components are set as 0.61, 0.13, 0.13, and 0.13, respectively. The search time-windows, segment duration thresholds, and weights for linear fusion were decided on the basis of analysis performed on the training set, which consists of the broadcast from the 2016 Masters tournament.
\begin{figure*}
\begin{center}
\includegraphics[width=0.77\linewidth]{startend.jpg}
\end{center}
\vspace{-0.3cm}
\caption{Demonstrating highlight clip start and end frame selection.}
\label{fig:startend}
\end{figure*}
\section{Self-Supervised Player Recognition}
\label{sec:zeroshot}
Automatic player detection and recognition can be a very powerful tool for generating personalized highlights when graphics are not available, as well as for performing analysis outside of the event broadcast itself. It could, for example, enable estimating the presence of a player in social media posts by recognizing his face. The task is, however, quite challenging. First, there are large variations in pose, illumination, resolution, occlusion (hats, sunglasses) and facial expressions, even for the same player, as visible in Figure \ref{fig:face}. Second, inter-player differences are limited, as many players wear extremely similar outfits, in particular hats, which occlude or obscure part of their face. Finally, a robust face recognition model requires large quantities of labeled data in order to achieve high levels of accuracy, which is often difficult to obtain and labor-intensive to annotate.
This allows us to generate a large set of training examples for each player, which can be used to train a face recognition classifier or to learn powerful face descriptors. We start by detecting faces within a temporal window after a graphic with a player name is found, using a Faster R-CNN detector \cite{renNIPS15fasterrcnn}. The assumption is that in the segment after the name of a player is displayed, his face will be visible multiple times in the video feed. Not all detected faces in that time window represent the player of interest. We therefore perform outlier removal, using geometric and clustering constraints. We assume the distribution of all detected faces to be bi-modal, with the largest cluster containing faces of the player of interest. Faces that are too small are discarded, and faces in a central position of the frame are given preference. Each face region is expanded by 40\% and rescaled to $224 \times 224$ pixels. Furthermore, at most one face per frame can belong to a given player. Given all the face candidates for a given player, we perform two-class k-means clustering on top of fc7 features extracted from a VGG Face network \cite{Parkhi15}, and keep only the faces belonging to the largest cluster, while respecting the geometric constraints, as the representative examples of the player's face. This process, working without supervision, allows us to collect a large quantity of training images for each player. We can then train a player face recognition model, which in our case consists of a VGG Face network fine-tuned by adding a softmax layer with one dimension per player. Figure \ref{fig:face}(b) shows an example subset of training faces automatically collected for Sergio Garcia from the 2016 Golf Masters broadcast. The system was able to collect hundreds of images with a large variety of poses and expressions for the same player. Two noisy examples are highlighted with red borders.
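The cluster-based filtering step can be sketched with scikit-learn as below; the toy features stand in for fc7 activations, and the geometric constraints are omitted for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

def keep_dominant_cluster(features):
    """Two-class k-means over face descriptors; keep the larger cluster,
    assumed to contain the player of interest (illustrative version of
    the outlier-removal step described in the text).

    features: (n_faces, d) array, e.g. fc7 activations of a face network.
    Returns the indices of the faces retained as training examples.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    dominant = np.bincount(labels).argmax()
    return np.where(labels == dominant)[0]

# Toy data: 30 "player" faces near one mode, 5 outliers near another.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (30, 64)), rng.normal(3, 0.1, (5, 64))])
kept = keep_dominant_cluster(feats)  # indices of the 30 inlier faces
```

The bi-modality assumption is what makes two clusters sufficient: one mode for the named player, one for everyone else captured in the window.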
While the purity of the training clusters is not perfect, as we will show in the experiments of Section \ref{ssec:resultface}, it still allowed us to learn a robust classifier with no explicit supervision.
\section{Experiments}
\label{sec:experiments}
\subsection{Experimental Setting}
\label{ssec:setting}
We evaluated our system in a real-world application, namely the 2017 Golf Masters tournament. We analyzed in near real-time the content of the four channels broadcasting simultaneously over the course of four consecutive days, from April 6th to April 9th, for a total of 124 hours of content\footnote{Video replays are publicly available at \url{http://www.masters.com/en_US/watch/index.html}}. Our system produced 741 highlights over all channels and days. The system ran on a Red Hat Linux box with two K40 GPUs. We extracted frames directly from the video stream at a rate of 1 fps, and audio in 6-second segments encoded as 16-bit PCM at a 22,050 Hz sampling rate. The cheer and commentator excitement detectors run in real time (1 second to process one second of content), the action detection takes 0.05 seconds per frame, and graphics detection with OCR takes 0.02 seconds per frame. The speech-to-text is the only component slower than real time, processing 6 seconds of content in 8 seconds, since we had to upload every audio chunk to an API service. In the following, we report experiments conducted after the event to quantitatively evaluate the performance of the system, both in terms of the overall quality of the produced highlights and the efficacy of its individual components. All training was performed on content from the 2016 Golf Masters broadcast, while testing was done on the last day of the 2017 tournament.
\subsection{Highlights Detection}
\label{ssec:resulthighlights}
Evaluating the quality of sports highlights is a challenging task, since a clearly defined ground truth does not exist.
Similarly to previous works in this field \cite{bettadapura2016leveraging}, we approached this problem by comparing the clips automatically generated by our system to two human-based references. The first is a human evaluation and ranking of the clips that we produced. The second is the collection of highlights professionally produced by the official Masters curators and published on their Twitter channel.
\subsubsection{Human Evaluation of Highlights Ranking}
We employed three persons in a user study to determine the quality of the top 120 highlight clips produced by our system from Day 4 of the Golf Masters. We asked each participant to assign a score to every clip on a scale from 0 to 5, with 0 meaning a clip without any interesting content, 1 meaning a highlight that is associated with the wrong player, and 2 to 5 meaning true highlights, 5 being the most exciting shots and 2 the least exciting (but still relevant) shots. We then averaged the scores of the three users for each clip. The resulting scores determined that 92.68\% of the clips produced by our system were legitimate highlights (scores 2 and above), while 7.32\% were mistakes. We also compared the rankings of the clips according to the scores of each individual component, as well as their fusion, to the ranking obtained through the users' votes. The performance of each ranking is computed at different depths $k$ with the normalized discounted cumulative gain (nDCG) metric, which is a standard retrieval measure computed as follows:
\[ \mathrm{nDCG}(k) = \frac{1}{Z} \sum^k_{i=1} \frac{2^{rel_i} - 1}{\log_2(i+1)} \]
where $rel_i$ is the relevance score assigned by the users to clip $i$ and $Z$ is a normalization factor ensuring that the perfect ranking produces an nDCG score of 1. In Figure \ref{fig:nDCG} we present the nDCG at different ranks. We notice that all components but the Commentator Excitement correctly identify the most exciting clip (at rank 1).
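For reference, the nDCG metric above can be implemented in a few lines; here $Z$ is taken as the DCG of the ideal (descending-relevance) ordering, which is how the perfect ranking scores exactly 1:

```python
import math

def dcg(rels, k):
    # 0-indexed position i corresponds to rank i+1, hence log2(i + 2)
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg(ranked_rels, k):
    """nDCG@k: DCG of the system's ranking divided by the DCG of the
    ideal descending-relevance ordering (the normalizer Z)."""
    ideal = dcg(sorted(ranked_rels, reverse=True), k)
    return dcg(ranked_rels, k) / ideal if ideal > 0 else 0.0

# Relevance scores (0-5 user ratings) in the order the system ranked the clips.
perfect = ndcg([5, 4, 3, 2], k=4)  # perfect ordering -> 1.0
swapped = ndcg([3, 5, 4, 2], k=4)  # imperfect ordering -> below 1.0
```
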
After that, only the Action component assigns the highest scores to the following top 5 clips. When considering the top 10 clips or more, the benefit of combining multiple modalities becomes apparent, as the Fusion nDCG curve remains consistently higher than that of each individual marker.
\subsubsection{Comparison with Official Masters Highlights}
The previous experiment confirmed the quality of the identified highlights as perceived by potential users of the system. We then compared H5-generated clips with highlights professionally created for the Masters, {\it Masters Moments}, available at their official Twitter page\footnote{\url{https://twitter.com/mastersmoments}}. There are a total of 116 highlight videos from the final day at the 2017 Masters. Each one covers a player's approach to a certain hole (e.g., Daniel Berger, 13th hole) and usually contains multiple shots that the player used to complete a particular hole. In contrast, each H5 highlight video is about a specific shot at a particular hole for a given player. In order to match the two sets of videos, we considered just the player names and hole numbers and ignored the shot numbers. After eliminating Masters Moments outside of the four channels we covered live during the tournament, and those for which there is no matching player graphics marker, we obtained 90 Masters Moments. In Table~\ref{tab:highlightsresults}, we report the Precision and Recall of matching clips over the top 120 highlights produced by the H5 Fusion system. We observe that approximately half of the clips overlap with Masters Moments. This leaves us with three sets of videos: one shared among the two sets (a gold standard of sorts), one unique to Masters Moments, and one unique to H5. We observed that by lowering the thresholds on our marker detectors, we can incorporate 90\% of the Masters Moments by producing more clips. Our system is therefore potentially capable of producing almost all of the professionally produced content.
We also wanted to investigate the quality of the clips discovered by the H5 system beyond what the official Masters channel produced. Highlight generation is a subjective task, and the official set may not comprehensively cover every player and every shot at the Masters. At the same time, some of the shots included in the official highlights may not necessarily be great ones, but strategically important in some way. While our previous experiment was aimed at understanding the coverage of our system vis-a-vis the official Masters highlights, we wondered if a golf aficionado would still find the remaining videos interesting (though not part of the official highlights). We therefore designed an experiment to quantitatively compare (a) H5 highlight clips that matched Masters Moments and (b) H5 highlight clips that did not match Masters Moments videos. In order to do so, we selected the 40 most highly ranked (by H5) videos from lists (a) and (b) respectively and performed a user study with three human participants familiar with golf. Participants were shown pairs of videos with roughly equivalent H5 scores/ranks (one from list (a) and the other from list (b) above) and were asked to label the more interesting video of the two, or report that they were equivalent. Majority voting over the users' votes determined the preferred video from each pair. From the results reported in Table \ref{tab:highlightsresults}, we observe that while the users' preference leans slightly toward videos in set (a), in almost half of the cases the highlights uniquely and originally produced by the H5 system were deemed equally if not more interesting. This reflects that the system was able to discover content that users find interesting and that goes beyond what was officially produced.
It is also interesting to notice that our system is agnostic with respect to the actual score action of a given play, that is, a highlight is detected even when the ball does not end up in the hole, but the shot is recognized as valuable by the crowd and/or commentator and players through their reactions to it. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{nDGC.png} \end{center} \caption{nDGC computed at different ranks for the individual components as well as the Fusion.} \vspace{-0.3cm} \label{fig:nDCG} \end{figure} \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Depth & 120 & 500 \\ \hline \hline Precision & 0.54 & 0.35 \\ \hline Recall & 0.4 & 0.9 \\ \hline \hline Matching Highlights Preference & 0.57 & - \\ \hline Non-Matching Highlights Preference & 0.33 & - \\ \hline Equivalent & 0.10 & - \\ \hline \hline \end{tabular} \vspace{0.2cm} \caption{Highlights detection performance. Comparison between the top $k$ ($k=120,500$) retrieved clips from our system and the official Masters Twitter highlights.} \label{tab:highlightsresults} \end{table} \subsection{Self-Supervised Recognition} \label{ssec:resultface} In order to test our self-supervised player recognition model we randomly selected a set of 10 players who participated in both the 2016 and the 2017 tournaments (shown in Figure \ref{fig:face} (a)). In Table \ref{tab:faceresults} we report the statistics of the number of training images that the system was able to automatically obtain in a self-supervised manner. For each player we obtain on average 280 images. Data augmentation in the form of random cropping and scaling was performed to make the distribution of examples uniform across players. Since there is no supervision in the training data collection process, some noise is bound to arise. We manually inspected the purity of each training cluster (where one cluster is the set of images representing one player) and found it to be 94.26\% on average.
Note that although we measured its presence, we did not correct for the training noise, since our method is fully self-supervised. The face recognition model was fine-tuned from a face VGG network with learning rate = 0.001, $\gamma = 0.1$, momentum = 0.9 and weight decay = 0.0005. The net converged after approximately 4K iterations with batch size 32. We evaluated the performance of the model on a set of images randomly sampled from Day 4 of the 2017 tournament and manually annotated with the identity of the 10 investigated players. Applying the classifier directly to the images achieved 66.47\% accuracy (note that random guess is 10\% in this case since we have 10 classes). We exploited the fact that the images come from video data to cluster temporally close images based on fc7 features and assigned to all images in a cluster the identity which received the highest number of predictions within the cluster. This process raised the performance to 81.12\%. Figure \ref{fig:face} (c) shows examples of correctly labeled test images of Sergio Garcia. Note the large variety of pose, illumination, occlusion and facial expressions. In row (d) we also show some examples of false negatives (bordered in orange) and false positives (in red). The net result of our framework is thus a self-supervised data-collection procedure which allows gathering large quantities of training data without the need for any annotation, which can be used to learn robust feature representations and face recognition models. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{face.jpg} \end{center} \vspace{-0.3cm} \caption{Self-supervised player face learning. (a) Examples of the 10 players used in the experiments. (b) Subset of the images automatically selected as training set (2016 Masters) for Sergio Garcia (note the diversity of pose, expression, occlusion, illumination, resolution). (c) Examples of test faces (2017 Masters) correctly recognized through self-supervised learning.
(d) Examples of False Negatives (in orange) and False Positives (in red).} \label{fig:face} \end{figure} \begin{table}[t] \centering \begin{tabular}{|c|c|} \hline Number of Players & 10\\ \hline \hline Number of Training Images & 2,806\\ \hline Training Clusters Purity & 94.26\%\\ \hline \hline Number of Test Images & 1,181\\ \hline Random Guess & 10.00\%\\ \hline Classifier Alone Accuracy & 66.47\%\\ \hline Classifier + Clustering Accuracy & \textbf{81.12\%}\\ \hline \hline \end{tabular} \vspace{0.2cm} \caption{Face classification performance.} \label{tab:faceresults} \end{table} \subsection{Discussion} While we have demonstrated our approach in golf, we believe our proposed techniques for modeling the excitement levels of the players, commentator, and spectators are general and can be extended to other sports. The way we determine the start of an event based on TV graphics is specific to golf, but that could be replaced by other markers in other sports. In tennis, for example, the start of an event could be obtained based on the detection of a serve by action recognition. The combination of multimodal excitement measures is crucial to determine the most exciting moments of a game. Crowd cheer is the most important marker, but alone cannot differentiate a hole-in-one or the final shot of the tournament from other equally loud events. In addition, we noticed several edge cases where non-exciting video segments had loud cheering from other holes. Our system correctly attenuates the highlight scores in such cases, due to the lack of player celebration and commentator excitement. We believe that other sources of excitement measures, such as player and crowd facial expressions, or information from social media could further enhance our system.
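As an aside, the cluster-level majority voting used in the self-supervised recognition experiments can be sketched as follows. This is a deliberate simplification: the actual system clusters temporally close images based on fc7 features, whereas this illustrative version groups frames by a simple timestamp gap, and all names are ours.

```python
from collections import Counter

# Illustrative sketch: group temporally close face images and relabel
# every image in a group with the identity predicted most often in it.
# Grouping by timestamp gap is a stand-in for the fc7-based clustering
# used in the actual system.

def smooth_labels(timestamps, predictions, max_gap=1.0):
    """timestamps: sorted frame times (seconds); predictions: per-frame
    identity predictions; returns majority-smoothed predictions."""
    smoothed = []
    cluster = [0]
    for i in range(1, len(timestamps) + 1):
        # Close the current cluster at the end or on a large time gap.
        if i == len(timestamps) or timestamps[i] - timestamps[i - 1] > max_gap:
            majority = Counter(predictions[j] for j in cluster).most_common(1)[0][0]
            smoothed.extend([majority] * len(cluster))
            cluster = [i] if i < len(timestamps) else []
        else:
            cluster.append(i)
    return smoothed
```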
The same approach used for self-supervised player recognition could also be applied for the detection of golf setup (player ready to hit the golf ball), using TV graphics as a proxy to crop positive examples based on person detection. This would generalize our approach to detect the start of an event without relying on TV graphics, and also help fix a few failure cases of consecutive shots for which a single TV graphics is present. \section{Conclusion} \label{sec:conclusion} We presented a novel approach for automatically extracting highlights from sports videos based on multimodal excitement measures, including audio analysis from the spectators and the commentator, and visual analysis of the players. Based on that, we developed a first-of-a-kind system for auto-curation of golf highlight packages, which was demonstrated in a major tournament, accurately extracting the start and end frames of key shot highlights over four days. We also exploited the correlation of different modalities to learn models with reduced cost in training data annotation. As next steps, we plan to demonstrate our approach in other sports such as tennis and produce more complex storytelling video summaries of the games. \small { \bibliographystyle{ieee}
\section{Introduction} \IEEEPARstart{T}{he} widespread availability of cameras has led to an enormous and ever-growing collection of unedited and unstructured videos generated by users around the world \cite{jiang2015super}. A popular domain corresponds to sports videos taken at public events and professional/amateur matches. These types of user-generated sports videos (UGSVs) are often lengthy with several uninteresting parts, and thus many of them are stored and are never reviewed. A convenient way to review, transfer, and share the video via channels such as social network services is to generate a summary of a UGSV that shows only the interesting parts or highlights. Automatic video summarization is a challenging problem that involves extracting semantics from video. Traditional user-generated video summarization methods target general videos in which contents are not limited to a specific domain. This is mainly because of the difficulty in extracting semantics from an unstructured video \cite{zha2007building}. As opposed to extracting semantics, these methods use low-level visual features and attempt to reduce visual redundancy using clustering-based approaches \cite{lienhart1999dynamic}. More recent user-generated video summarization methods use deep neural network-based features to extract higher-level semantics \cite{zhang2016video,otani2016videosum}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{overview.pdf} \caption{An overview of the proposed method to generate a summary of user-generated sports video (UGSV) based on players' actions.
Two types of features that represent players' actions, namely body joint-based and holistic features, are used to extract highlights from the original video.} \label{fig:overview} \end{figure} With respect to sports video and especially with respect to professional sports in broadcast TV programs, there exist a number of summarization methods that leverage editing conventions to extract high-level semantics by exploiting a knowledge of the specific sport \cite{li2010action,nitta2009automatic}. For example, broadcast sports video contains slow-motion replays \cite{pan2001detection}, narration and superimposed text \cite{ekin2003automatic}, and specific camera work \cite{xu2005hmm}. This type of video editing constitutes the basis for heuristic rules that aid in the determination of highlights (or certain interesting moments of a game such as a free kick in soccer or a pitch in baseball). Additionally, broadcast video is often edited by following the structure of the sport (i.e., ``downs'' in American football), and this constitutes another cue for summarization \cite{li2001event}. UGSV lies in-between general user-generated video and broadcast sports video. Given a specific sport, domain knowledge can be used to generate a UGSV summary. However, UGSV does not typically follow any editing convention or structure, and thus a different type of cue is required to grab the semantics. This paper describes a novel method for UGSV summarization. Our observation with respect to semantics extraction is that a game in most sports consists of a succession of players' actions, and thus the actions can be one of the most important cues to determine if a certain part of video is interesting or not. For example, a definitive smash in tennis is more likely to be enjoyed by tennis viewers than a repetitive ball exchange. Also, a feint in boxing might not be interesting by itself, but viewers would surely enjoy it if it is followed by an uppercut that knocks out the opponent.
Based on this observation, the proposed method uses players' actions to model the highlights of a sports game (Fig.~\ref{fig:overview}). Inspired by recent methods for action recognition in video, the proposed method uses a two-stream architecture that extracts two types of action features for action representation. One type involves players' body joint positions estimated in 2D or 3D (obtainable from depth maps). Body joint-based features provide a precise representation of actions. The other type involves holistic features that can be obtained with deep convolutional neural networks (CNNs) designed to extract spatio-temporal features from video. Holistic features help to capture actions in their context. Subsequently, long short-term memory (LSTM) is used to model the temporal dependencies of the extracted features for highlight classification. In our summaries, a highlight may contain one or more actions performed by the players. Several types of body joint-based and holistic features are comparatively evaluated for UGSV summarization. We consider the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. This work is an extension of our previous work in \cite{tejerodepablos2016human}. The main contributions of this work are as follows: \begin{itemize} \item A novel UGSV summarization method that determines highlights in a video by using players' action features and a deep neural network. \item A comparison of several action feature extraction methods, i.e., body joint features (RGB image-based 2D joint positions and depth map-based 3D joint positions) and holistic features (C3D \cite{tran2015learning} and CNN-ISA \cite{le2011learning}) to demonstrate their adequacy to model video highlights. \item A new UGSV dataset with 246 min of Kendo videos in which each second of a video has a label that indicates whether or not it is a part of a highlight. The labels were provided by annotators with and without experience in Kendo. 
\item Objective and subjective evaluations of the proposed method. Users with and without experience in Kendo were surveyed to investigate the adequacy of the proposed method with respect to individual needs. \end{itemize} \section{Related work} \label{sec:related} This section introduces existing video summarization methods in terms of the types of video (i.e., broadcast sports video and user-generated video). This section also reviews existing work in action recognition, which constitutes a key technique for modeling highlights in the proposed method. \subsection{Action recognition from video} Body-joint features are widely used for human action recognition because of their rich representation of human motion and their robustness to variability in human appearance \cite{biswas2011gesture}. However, they miss potential cues contained in the appearance of the scene. Holistic features, which focus more on the global appearance of the scene, have also been hand-crafted for action recognition \cite{sun2009action}, ranging from motion-energy images to silhouette-based images \cite{chen2007human,calderara2008action}. As shown in recent works \cite{tran2015learning,le2011learning,simonyan2014two,feichtenhofer2016spatiotemporal}, convolutional neural networks (CNN) have outperformed traditional methods as they are able to extract holistic action recognition features that are more reliable and generalizable than hand-crafted features. An example corresponds to three-dimensional convolutional neural networks (3D CNNs) that constitute an extension of CNNs applied to images (2D CNNs). While 2D CNNs perform only spatial operations in a single image, 3D CNNs also perform temporal operations while preserving temporal dependencies among the input video frames \cite{tran2015learning}. Le et al.~\cite{le2011learning} used a 3D CNN with independent subspace analysis (CNN-ISA) and a support vector machine (SVM) to recognize human actions from video.
Additionally, Tran et al.~\cite{tran2015learning} designed a CNN called C3D to extract video features that were subsequently fed to an SVM for action recognition. Another state-of-the-art CNN-based action recognition method employed two types of streams, namely a spatial \textit{appearance} stream and a temporal \textit{motion} stream \cite{simonyan2014two,feichtenhofer2016spatiotemporal}. Videos are decomposed into spatial and temporal components, i.e., into an RGB and optical flow representation of their frames, and fed into two separate 3D CNNs. Each stream separately provides a score for each possible action, and the scores from the two streams are later combined to obtain a final decision. This architecture is supported by the two-stream hypothesis of neuroscience in which the human visual system is composed of two different streams in the brain, namely the dorsal stream (spatial awareness and guidance of actions) and the ventral stream (object recognition and form representation) \cite{goodale1992separate}. In addition to RGB videos, other methods leverage depth maps obtained from commodity depth sensors (e.g.~Microsoft Kinect) to estimate the human 3D pose for action recognition \cite{xia2012view,martinez2014action,cai2016effective}. The third dimension provides robustness to occlusions and variations from the camera viewpoint. \subsection{Broadcast sports video summarization} Summarization of sports video focuses on extracting interesting moments (i.e., highlights) of a game. A major approach leverages editing conventions such as those present in broadcast TV programs. Editing conventions are common to almost all videos of a specific sport and allow automatic methods to extract high-level semantics \cite{choi2008spatio,chen2006semantic}. Ekin et al.~\cite{ekin2003automatic} summarized broadcast soccer games by leveraging predefined camera angles in edited video to detect soccer field elements (e.g., goal posts).
Similar work used slow-motion replays to determine key events in a game \cite{pan2001detection} and predefined camera motion patterns to find scenes in which players scored in basketball/soccer games \cite{xu2005hmm}. In addition to editing conventions, the structure of the sport also provides high-level semantics for summarization. Certain sports are structured in ``plays'' that are defined based on the rules of the sport and are often easily recognized in broadcast videos \cite{li2010action,tjondronegoro2004integrating,liang2004semantic}. For example, Li et al.~\cite{li2001event} summarized American football games by leveraging their turn-based structure and recognizing ``down'' scenes from the video. Other methods used metadata in sports videos \cite{nitta2009automatic,divakaran2003video} since it contains high-level descriptions (e.g., ``hits'' may be annotated in the metadata with their timestamps for a baseball game). A downside of these methods is that they cannot be applied to sports video without any editing conventions, structures, and metadata. Furthermore, they are based on heuristics, and thus it is difficult to generalize them to different sports. Existing work also proposed several methods that are not based on heuristics. These methods leverage variations between scenes that are found in broadcast video (e.g., the close-up in a goal celebration in soccer). Chen et al.~\cite{chen2008motion} detected intensity variations in color frames to segment relevant events to summarize broadcast videos of soccer, basketball, and tennis. Mendi et al.~\cite{mendi2013sports} detected the extrema in the optical flow of a video to extract the frames with the highest action content and construct a summary for broadcast rugby video. These methods can be more generally applied to broadcast videos, but they lack high-level semantics, and thus the extracted scenes do not always correspond to the highlights of the game. 
\subsection{User-generated video summarization} \label{sec:ugv} Sports video includes a somewhat universal criterion on the extent to which a ``play'' is interesting (e.g., a \textit{homerun} in a baseball game should be an interesting play for most viewers). In contrast, user-generated video in general does not have a clear and universal criterion to identify interesting moments. Additionally, neither editing conventions nor specific structures that can be used to grab high-level semantics can be leveraged \cite{hua2004optimization}. Hence, many video summarization methods for user-generated video are designed to reduce the redundancy of a lengthy original video as opposed to determining interesting moments. Traditional methods uniformly sample frames \cite{mills1992magnifier} or cluster them based on low-level features, such as color \cite{lienhart1999dynamic}, to extract a brief synopsis of a lengthy video. These methods do not extract highlights of the video, and therefore researchers proposed other types of summarization criteria such as important objects \cite{meng2016keyframes}, attention \cite{evangelopoulos2013multimodal}, interestingness \cite{peng2011editing}, and user preferences \cite{garciadelmolino2017active}. Recent methods use deep neural networks to automatically learn a criterion to model highlights. Yang et al.~\cite{yang2015unsupervised} extracted features from ground-truth video summaries to train a model for highlight detection. Otani et al.~\cite{otani2016video} use a set of both original videos and their textual summaries that are generated via majority voting by multiple annotators to train a model to find video highlights. Video titles \cite{song2015tvsum}, descriptions \cite{otani2016videosum}, and other side information \cite{yuan2017video} can also be used to learn a criterion to generate summaries. The aforementioned methods employed networks with CNNs and LSTMs, and this requires a large amount of data for training.
The generation of these types of large summarization datasets for training their network is non-viable for most researchers, and thus their models are built on pre-trained networks such as VGG \cite{simonyan2014very} and GoogLeNet \cite{szegedy2015going}. \section{UGSV summarization using action features} \label{sec:harsum} UGSV summarization inherits the intricacies of user-generated video summarization. The extraction of high-level semantics is not trivial in the absence of editing conventions. However, given a specific sport, it is possible to leverage domain knowledge to facilitate the extraction of high-level semantics. The idea in the present work for semantics extraction involves utilizing players' actions, as they are the main constituents of a game. Our previous work \cite{tejerodepablos2016human} applied an action recognition technique to sports videos to determine combinations of actions that interest viewers by using a hidden Markov model with Gaussian Mixture emissions. To the best of our knowledge, this work was the first to perform UGSV summarization based on players' actions. A major drawback of the previous work \cite{tejerodepablos2016human} involves the usage of the outputs of a classic action recognizer as features to determine the highlights of the UGSV. Moreover, in addition to the UGSV summarization dataset, the method also requires an action dataset of the sport to train the action recognizer. Another drawback of \cite{tejerodepablos2016human} is that it only uses features from 3D joint positions (that are estimated by, e.g.,~\cite{zhang2012microsoft}). They provide rich information on players' actions but miss other potential cues for summarization contained in the scene. Holistic features can compensate for such missing cues by, for example, modeling the context of an action. Also, holistic features are useful when the joint position estimation fails.
Hence, in this work, we hypothesize that features extracted from players' actions allow summarizing UGSV. The method in \cite{tejerodepablos2016human} is extended by employing a two-stream deep neural network \cite{simonyan2014two, feichtenhofer2016spatiotemporal}. Our new method considers two different types of inputs, namely RGB frames of video and body joint positions, and each of them is transformed through two separate neural networks (i.e., streams). These two streams are then fused to form a single action representation to determine the highlights. Our method does not require recognizing the actions explicitly, thus avoiding expensive human action annotation; and the proposed network is trained from the lower layers to the top layers by using a UGSV summarization dataset. Given the proposed method, it is necessary for the target sports to satisfy the following conditions: (1) a game consists of a series of recognizable actions performed by each player and (2) players are recorded from a close distance for joint position estimation. However, it is expected that the idea of using action recognition-related features for UGSV summarization is still valid for most types of sports. \subsection{Overview} In this work, UGSV summarization is formulated as a problem of classifying a video segment in the original video as interesting (and thus included in the summary) or uninteresting. A two-stream neural network is designed for this problem, and it is trained in a supervised manner with ground truth labels provided by multiple annotators. Figure \ref{fig:overview} shows an overview of the proposed method. The method first divides the original input video into a set $S = \{s_t\}$ of video segments in which RGB frames can be accompanied by their corresponding depth maps. A video segment $s_t$ is then fed into the two-stream network.
The body joint-based feature stream considers RGB frames (and depth maps) in $s_t$ to obtain body joint-based features $x_t$, and the holistic feature stream computes holistic features $y_t$ from the RGB frames. The former stream captures the players' motion in detail by explicitly estimating their body joint positions. The latter stream represents entire frames in the video segment, and this is helpful to encode, for example, the relationship between the players. The features $X = \{x_t\}$ and $Y = \{y_t\}$ are then used for highlight classification by considering the temporal dependencies among the video segments. The highlight summaries correspond to a concatenation of the segments that are classified as interesting. \subsection{Video segmentation} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{segmentation.pdf} \caption{In the video segmentation, a video segment $s_t$ contains frames in-between $t-1$ and $t+2$ sec. Each video segment overlaps with adjacent ones for 2 sec.} \label{fig:segmentation} \end{figure} Various methods have been proposed to segment a video (e.g., based on its content \cite{chen2008motion}). In the proposed method, the original input video of length $T$ seconds (sec) is uniformly segmented into multiple overlapping segments, so that subsequently a second $t$ of video can be represented by extracting action features from a video segment $s_t$, i.e., $S = \{s_t| t=1,\dots,T\}$. Thus, $T$ also corresponds to the number of video segments in $S$, and $s_t$ corresponds to the video segment that contains frames from sec $t-1$ to sec $t+\tau-1$. For a finer labeling of highlights, short video segments are required. We choose $\tau = 3$, for which adjacent video segments overlap by 2 sec as shown in Fig.~\ref{fig:segmentation}. Each segment $s_t$ may contain a different number of frames, especially when the input video is captured with an RGB-D camera (e.g., Microsoft Kinect) due to automatic exposure control.
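The segmentation scheme above can be sketched as follows. This is a minimal illustration with names of our own choosing; the clipping of boundary segments to the video extent is an assumed detail not specified in the text.

```python
# Illustrative sketch: uniform overlapping segmentation with tau = 3.
# Segment s_t spans seconds [t-1, t-1+tau], so with tau = 3 adjacent
# segments overlap by 2 seconds.

def segment_bounds(T, tau=3):
    """Returns (start, end) in seconds for each segment s_t, t = 1..T.
    Boundary segments are clipped to the video extent (assumption)."""
    bounds = []
    for t in range(1, T + 1):
        start = t - 1
        end = min(T, t - 1 + tau)
        bounds.append((start, end))
    return bounds
```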
\subsection{Body joint-based feature stream} \label{sec:jointfeat} In this stream (Fig.~\ref{fig:jointfeat}), a sequence of positions of the players' body joints (e.g., head, elbow, etc.) that represent the movement of the players irrespective of their appearance is used to obtain a detailed representation of players' actions. Specifically, two types of joint representations are employed in this work, namely 3D positions from depth maps or 2D positions from RGB frames. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{bjfeat.pdf} \caption{In the body joint-based feature stream, an LSTM is fed with the body joint positions estimated from players on each frame $u_t^f$ to model temporal dependencies and extract a feature vector $h_t$. Additionally, these body joint positions are also used to calculate an activity measure for all players $a_t$. The body joint-based feature vector is their concatenation $x_t$.} \label{fig:jointfeat} \end{figure} With respect to the 3D body joint positions, the skeleton tracker (e.g., \cite{zhang2012microsoft}) is used as in the previous work \cite{tejerodepablos2016human}, and it estimates 3D positions from depth maps. The 3D positions are usually represented in the camera coordinate system, and thus they are view-dependent, thereby introducing extra variations. Therefore, the 3D positions from the camera coordinate system are transformed to each player's coordinate system in which the origin corresponds to one of the body joints (e.g., torso). In the absence of depth maps (which is likely in current user-generated video), 2D body joint positions can still be estimated from RGB frames. Recent methods in human pose estimation leverage 2D CNNs to learn spatial relationships among human body parts and estimate 2D joint positions \cite{wei2016convolutional}. These types of 2D positions are not as robust to view variations as 3D positions.
However, they can be extracted from RGB frames alone without using depth maps. The given 2D body joint positions are also transformed to positions relative to the player's coordinate system to ensure that they are translation invariant. The use of an activity measure works positively while extracting highlights \cite{tejerodepablos2016human}. In order to calculate the activity measure of a certain player $q$ in the video segment $s$ (we omit subscript $t$ in this subsection for notation simplicity), the volume (or plane for the 2D case) around the player is divided into a certain number of regions, and the ratio $r_v$ of the number of frames in the video segment in which the joint $j$ falls into region $v$ is calculated. The activity measure $a_q$ is defined as the entropy obtained based on $r_v$. With respect to each joint $j$ in player $q$'s body, we compute the entropy as follows: \begin{equation} e_j = -\sum_v r_v \log(r_v). \end{equation} Then, we calculate the activity measure for player $q$ as follows: \begin{equation} a_q = \sum_{j=1}^J e_j. \end{equation} The activity measure for all players in a segment is calculated. More details on the activity measure can be found in \cite{tejerodepablos2016human}. Let $u^f_{qj} \in \mathbb{R}^3$ or $\mathbb{R}^2$ (a row vector) denote the 3D or 2D relative position of joint $j$ of player $q$ in frame $f$ of video segment $s$. Subsequently, given the number of players $Q$ and of estimated body joints $J$, the concatenation of the body joints of all players in frame $f$ is defined as follows: \begin{equation} u^f = (u^f_{11} \cdots u^f_{qj} \cdots u^f_{QJ}). \end{equation} As shown in Fig.~\ref{fig:jointfeat}, vectors $u^1$ to $u^F$ are passed through an LSTM to model the temporal dependencies of the joint positions of players' bodies in $s$. After feeding the last vector $u^F$, the hidden state vector $h$ of the LSTM is considered as a representation of $\{u^f\}$.
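A minimal sketch of the activity measure defined above. We assume the partition of the space around a player into regions is given by a caller-supplied per-frame region index for each joint; the names are illustrative.

```python
import math
from collections import Counter

# Illustrative sketch of the activity measure: r_v is the fraction of
# frames in which a joint falls into region v, e_j is the entropy of
# these fractions, and a_q sums the per-joint entropies for a player.

def joint_entropy(region_per_frame):
    """e_j: entropy of the region occupancy ratios r_v of one joint."""
    F = len(region_per_frame)
    counts = Counter(region_per_frame)
    return -sum((c / F) * math.log(c / F) for c in counts.values())

def activity_measure(joints_regions):
    """a_q: sum of per-joint entropies for one player.
    joints_regions[j] lists the region index of joint j in each frame."""
    return sum(joint_entropy(regions) for regions in joints_regions)
```

A joint that never leaves one region contributes zero entropy, so a motionless player gets a low activity measure, matching the intuition in the text.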
The state of the LSTM is reset to all zeros prior to feeding the next video segment. It is assumed that the number of players $Q$ does not change. However, some players can be out of the field-of-view of the camera. In this case, the corresponding elements in $u$ and $a$ are substituted with zeros. The proposed method represents a video segment $s$ by concatenating the LSTM output and the activity measure of all players in one vector as follows: \begin{equation} x = (h \;\; a), \end{equation} where $a$ denotes the concatenation of $(a_{1}\;\; \cdots\;\; a_{Q})$. \subsection{Holistic feature stream} \label{sec:holisticfeat} This stream encodes a video segment $s$ in a spatio-temporal representation. We rely on state-of-the-art 3D CNNs over RGB frames. Training a 3D CNN from scratch requires thousands of videos \cite{karpathy2014large} that are not available for the proposed task. Recent work on deep neural networks for computer vision \cite{tran2015learning, feichtenhofer2016spatiotemporal, zeng2016title} shows that the activations of an upper layer of a CNN are useful for other related tasks without requiring fine-tuning. Thus, a 3D CNN whose parameters are pre-trained with large-scale datasets can be used instead to leverage a huge amount of labeled training data \cite{jia2014caffe}. The proposed method utilizes a 3D CNN for action feature extraction pre-trained with a publicly available action recognition dataset, such as Sports-1M \cite{karpathy2014large}. Unlike our previous work \cite{tejerodepablos2016human}, which required classifying players' actions, it is not necessary to use a sport-specific action recognition dataset. Two types of holistic representations of video segments extracted using 3D CNNs are employed, namely CNN-ISA \cite{le2011learning} and C3D \cite{tran2015learning}.
Specifically, CNN-ISA provides a representation robust to local translation (e.g., small variations in players' or camera motion) while it is selective to frequency, rotation, and velocity of such motion. The details of CNN-ISA can be found in \cite{le2011learning}. CNN-ISA achieved state-of-the-art performance in well-known datasets for action recognition such as YouTube \cite{liu2009recognizing}, Hollywood2 \cite{marszalek2009actions}, and UCF sports \cite{rodriguez2008action}. Additionally, C3D features provide a representation of objects, scenes, and actions in a video. The network architecture and other details can be found in \cite{tran2015learning}. C3D pre-trained with the Sports-1M dataset achieved state-of-the-art performance on action recognition over the UCF101 dataset \cite{soomro2012ucf101}. This stream represents a video segment $s$ by using a holistic feature vector $y$ that corresponds to the output of one of the aforementioned 3D CNNs. \subsection{Highlight classification using LSTM} \begin{figure}[!t] \centering \includegraphics{hclass.pdf} \caption{The recurrent neural network architecture for highlight classification consists of a single LSTM layer and several fully-connected layers. The body joint-based features $x_t$ and holistic features $y_t$ extracted from video segment $s_t$ are input to calculate the probability $p_t$ that the segment is interesting.} \label{fig:classification} \end{figure} Figure \ref{fig:classification} shows the network architecture designed to extract highlights of UGSV using the features $x_t$ and $y_t$ from video segment $s_t$. The temporal dependencies among video segments are modeled using an LSTM, and the network outputs the probability $p_t$ that the video segment $s_t$ is interesting. First, the features are concatenated to form vector $z_t = (x_t \;\; y_t)$. Vector $z_t$ then goes through a fully-connected layer to reduce its dimensionality. 
It is assumed that interesting video segments are related to each other in time, in the same way a skillful boxer first feints a punch prior to hitting to generate an opening in the defense. Existing work in video summarization uses LSTMs to extract video highlights \cite{yang2015unsupervised} since it allows the modeling of temporal dependencies across longer time periods when compared to other methods \cite{yue2015beyond}. Following this concept, an LSTM layer is introduced to the network for highlight classification. The hidden state of the LSTM from each time step goes through two fully-connected layers, and this results in a final softmax activation of two units corresponding to ``interesting'' and ``uninteresting.'' The proposed method provides control over the length $L$ of the output summary. The softmax activation of the unit corresponding to ``interesting'' is considered as the probability $p_t \in [0,1]$ that segment $s_t$ is part of a highlight, and skimming curve formulation \cite{truong2007video} is applied to the sequence of probabilities by decreasing a threshold $\theta$ from 1 until a set of segments whose total length is highest below $L$ is determined (Fig.~\ref{fig:skimcurve}). The segments in which the probability exceeds $\theta$ are concatenated to generate the output summary in the temporal order. Hence, the resulting summary may contain multiple consecutive interesting segments. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{skim.pdf} \caption{A summary is generated by concatenating segments in which the probability $p_t \in [0,1]$ of being part of a highlight surpasses a certain threshold $\theta$. $\theta$ decreases from 1 until the desired summary length is reached.} \label{fig:skimcurve} \end{figure} \subsection{Network training} A pre-trained CNN is used in the holistic features stream (i.e., CNN-ISA or C3D) while the LSTMs and fully-connected layers are trained from scratch. 
Hence, during training, the parameters in the holistic feature stream (i.e., CNN layers) are fixed and those in the body joint-based feature stream (i.e., $\mathtt{lstm}_J$) and highlight classification (i.e., $\mathtt{fc1}$, $\mathtt{lstm}_H$, $\mathtt{fc2}$, and $\mathtt{fc3}$) are updated. The UGSV dataset contains video and ground truth labels $l_t \in \{0, 1\}$ for every second, where $l_t = 1$ implies that the period from $t$ sec to $t+1$ sec of the video is ``interesting'' and $l_t = 0$ otherwise. Label $l_t$ is assigned to its corresponding video segment $s_t$, which contains video second $t$ in its center. That is, with $\tau=3$, segment $s_t=\{t-1,t,t+1\}$ is assigned label $l_t$, segment $s_{t+1}=\{t,t+1,t+2\}$ is assigned label $l_{t+1}$, etc. (Fig.~\ref{fig:segmentation}) With respect to training, cross-entropy loss $\ell$ is used: \begin{equation} \ell = \sum l_t \log p_t. \end{equation} \section{Experiments} \label{sec:experiments} The proposed method is evaluated objectively and subjectively. With respect to the objective evaluation, the performance of the proposed method is compared while using different representations of the players' actions. Specifically, only body joint features (3D or 2D), only holistic motion features (CNN-ISA or C3D), and a combination of both features are evaluated. Subsequently, the completeness of the highlights of the generated summaries are examined. With respect to the subjective evaluation, users with and without experience in the sport are surveyed to study their opinions with respect to the summaries. \subsection{Implementation details} \label{sec:imple} For the evaluation, Kendo (Japanese fencing) was selected as an example of a sport. Kendo is a martial art featuring two players and a set of recognizable actions (e.g., attacking and parrying). 
We used the UGSV Kendo dataset in \cite{tejerodepablos2016human}, which contains 90 min of self-recorded Kendo matches divided in 10 RGB-D videos taken with a Microsoft Kinect v2, and extended it by adding 18 more self-recorded RGB-D Kendo videos. The total length of the videos is 246 min with a framerate of approximately 20 fps (since $\tau=3$~sec, $F=60$). The body joint-based feature stream was configured for $Q=2$ players since Kendo is a two-player sport. The tracker in \cite{zhang2012microsoft} was used as is (without additional training) to estimate $J=15$ 3D body joint positions from depth maps: \textit{head}, \textit{neck}, \textit{torso}, \textit{right shoulder}, \textit{right elbow}, \textit{right wrist}, \textit{left shoulder}, \textit{left elbow}, \textit{left wrist}, \textit{right hip}, \textit{right knee}, \textit{right ankle}, \textit{left hip}, \textit{left knee}, and \textit{left ankle}. In order to estimate the 2D positions of the players' joints from the RGB frames, the CNN-based method proposed by Linna et al.~\cite{linna2016real} was used. We pre-trained this joint estimation CNN with the human pose dataset used by Linna et al.~\cite{linna2016real}, and then we fine-tuned it with our extended UGSV Kendo video dataset. The network provides $J=13$ joints (that is the same as the 3D case with the exception of \textit{neck} and \textit{torso}). Therefore, the size of vector $u^f_t$ is $Q\times J\times3=90$ in the case of 3D positions and $Q\times J\times2=52$ in the case of 2D. Given that the size of $\mathtt{lstm}_J$ is the same as that of the input and that the size of $a_t$ is $Q=2$, the feature vector $x_t$ for the stream is $\in \mathbb{R}^{92}$ for 3D and $\in \mathbb{R}^{54}$ for 2D. 
\begin{table*}[t] \begin{center} \caption{Size of the learnable elements in the network with respect to the features used ($input\times output$).\newline Feature vector sizes are detailed in Section \ref{sec:imple})} \label{tab:netparam} \begin{tabular}{ r|c c c|c c|c c } \hline & \multicolumn{3}{c|}{Body joint-based features only} & \multicolumn{2}{c|}{Holistic features only} & \multicolumn{2}{c}{Body joint-based and holistic features}\\ & 3D joints & 2D joints & Action recognition & CNN-ISA & C3D & 3D joints + CNN-ISA & 2D joints + CNN-ISA\\ \hline $\mathtt{lstm}_J$ & $90\times90$ & $52\times52$ & --- & --- & --- & $90\times90$ & $52\times52$ \\ $\mathtt{fc1}$ & $92\times50$ & $54\times50$ & $402\times400$ & $400\times400$ & $4096\times400$ & $492\times400$ & $454\times400$ \\ $\mathtt{lstm}_H$ & $50\times50$ & $50\times50$ & $400\times400$ & $400\times400$ & $400\times400$ & $400\times400$ & $400\times400$ \\ $\mathtt{fc2}$ & $50\times20$ & $50\times20$ & $400\times100$ & $400\times100$ & $400\times100$ & $400\times100$ & $400\times100$ \\ $\mathtt{fc3}$ & $20\times2$ & $20\times$2 & $100\times2$ & $100\times2$ & $100\times2$ & $100\times2$ & $100\times2$ \\ \hline \end{tabular} \end{center} \end{table*} With respect to the holistic feature stream, either the CNN-ISA \cite{le2011learning} or C3D \cite{tran2015learning} networks were used. The UGSV Kendo dataset is not sufficiently large to train the CNNs from scratch, and thus networks pre-trained with an action recognition dataset were used. The CNN-ISA was trained in an unsupervised way with the Hollywood2 dataset that consists of 2859 videos \cite{marszalek2009actions}. For this network, we followed the configuration in \cite{wang2009evaluation}. We used a vector quantization representation of the extracted features with a codebook size of 400, thereby resulting in a feature vector $y_t \in \mathbb{R}^{400}$ for each segment $s_t$. 
The C3D was trained with the Sports-1M dataset \cite{karpathy2014large} that consisted of 1.1 million videos of sports activities. The C3D features were extracted as indicated in \cite{tran2015learning} by uniformly sub-sampling 16 frames out of approximately 60 frames in $s_t$ (the number of frames in $s_t$ may vary for different segments due to the variable framerate of Microsoft Kinect v2) and subsequently the activations from layer \texttt{fc6} (i.e., $y_t \in \mathbb{R}^{4096}$) were extracted. The proposed method was implemented in Chainer \cite{tokui2015chainer} running on Ubuntu Trusty (64 bit), installed in a computer with an Intel Core i7 processor and 32GB of RAM, and a GeForce GTX TITAN X graphics card. In average, it roughly took 300 min to train the network until convergence over our Kendo dataset. For testing, the average processing time of a video is approximately 5 sec (see Table \ref{tab:dataset} for video durations). The learning rate was calculated by the adaptive moment estimation algorithm (Adam) \cite{kingma2015adam} with $\alpha = 0.001$. Sigmoid activation was introduced after the fully-connected layers. Table \ref{tab:netparam} summarizes the number of learnable parameters for each layer, which varies based on the choice of features. \subsection{Results} For annotating the ground truth, 15 participants were invited and divided into two groups, namely experienced (\textit{E}, 5 people) and inexperienced (\textit{NE}, 10 people), based on their experience in the target sport (i.e., Kendo). It was assumed that the highlights preferred by the \textit{E} and \textit{NE} groups would exhibit significant variations, and an aim of the study included evaluating the extent to which the proposed method adapts to the needs of each group. For this, the participants annotated manually the highlights of the 28 videos. The ground truth labels of the videos were separately obtained for both E and NE groups. 
With respect to each one-second period $t$ of video, the ground truth label is $l_t=1$ if at least 40\% of the participants annotated it as interesting (i.e., 2 people in group \textit{E} and 4 people in group \textit{NE}). Otherwise, $l_t=0$. Due to group \textit{E}'s technical knowledge of Kendo, their highlights contain very specific actions (e.g., decisive strikes and counterattacks). Conversely, group \textit{NE} selected strikes as well as more general actions (e.g., parries and feints), and thus their labeled highlights are almost three times as long as group \textit{E}'s (please refer to the durations in Appendix \ref{sec:appendixa}). The network was separately trained with each group's ground truth labels in the leave-one-out (LOO) fashion, i.e., 27 videos were used for training and a summary of the remaining video was generated for evaluation purposes. The CNN for 2D pose estimation was trained independently prior to each experiment, it was fine-tuned with the 27 training videos in order to estimate the joints of the video used for evaluation. This process was repeated for each video and for each group \textit{E} and \textit{NE}, to result in 28 experienced summaries and 28 inexperienced summaries. The generated summaries had the same length $L$ as their respective ground truth for a fair comparison. Figure \ref{fig:skimcurve} illustrates a few examples of the frames of a video as well as highlight frames extracted by the proposed method (framed in orange). \subsubsection{Objective evaluation by segment f-score} \label{sec:interes} The ability of the proposed method to extract highlights was evaluated in terms of the f-score. 
In the proposed method, a one-second period of video is as follows: \begin{itemize} \item true positive (TP), if it is in the summary and $l_t = 1$, \item false positive (FP), if it is in the summary but $l_t = 0$, \item false negative (FN), if it is not in the summary but $l_t = 1$ \item true negative (TN), if it is not in the summary and $l_t = 0$. \end{itemize} The f-score is subsequently defined as follows: \begin{equation} \textrm{f-score}=\frac{2\textrm{TP}}{2\textrm{TP}+\textrm{FP}+\textrm{FN}} . \end{equation} Table \ref{tab:objecfeat} shows the f-scores for the summaries generated with the labels of both \textit{E} and \textit{NE} groups. In addition to the features described in Section \ref{sec:imple}, it includes the results of using the features from our previous work in UGSV summarization \cite{tejerodepablos2016human}. The features were obtained by feeding the 3D body joint representation of players' actions to the action recognition method in \cite{tejerodepablos2016flexible} and considering the action classification results. Additionally, the proposed architecture was also compared with that of the method used in the previous work \cite{tejerodepablos2016human} that uses a hidden Markov model with Gaussian mixture emission (GMM-HMM) over the same action recognition results mentioned above. Finally, the results of using $k$-means clustering are included, since $k$-means is widely accepted as a baseline for user-generated video summarization \cite{cong2012towards}. To implement the $k$-means clustering baseline, the video segments $S$ were clustered based on the concatenated features \textit{3D joints + CNN-ISA}, and the summary was created by concatenating in time the cluster centroids. The number of clusters for each video were configured such that the resulting summary length is equal to that of the ground truth. 
\begin{table}[!t] \begin{center} \caption{F-score comparison of different combinations of features and other UGSV summarization methods.} \label{tab:objecfeat} \begin{tabular}{ r|r|c|c } \hline \multicolumn{2}{c|}{Method} & Group \textit{E} & Group \textit{NE} \\ \hline \multirow{3}{*}{\parbox{2.3cm}{Body joint-based\newline features}} & 3D joints & 0.53 & 0.83 \\ & 2D joints & 0.45 & 0.77 \\ & Action recognition \cite{tejerodepablos2016human} & 0.48 & 0.76 \\ \hline \multirow{2}{*}{\parbox{2.3cm}{Holistic features}} & CNN-ISA & 0.50 & 0.79 \\ & C3D & 0.27 & 0.60 \\ \hline \multirow{2}{*}{\parbox{2.3cm}{Body joint-based\newline and holistic features}} & 3D joints + CNN-ISA & \textbf{0.58} & \textbf{0.85} \\ & 2D joints + CNN-ISA & 0.57 & 0.81 \\ \hline \hline \multirow{2}{*}{\parbox{2.3cm}{Other UGSV\newline summarization}} & $k$-means clustering & 0.28 & 0.61 \\ \cline{2-4} & Without $\mathtt{lstm}_H$ & 0.48 & 0.8 \\ \cline{2-4} & GoogLeNet and BiLSTM \cite{zhang2016video} & 0.27 & 0.65 \\ \cline{2-4} & GMM-HMM \cite{tejerodepablos2016human} & 0.44 & 0.79 \\ \hline \end{tabular} \end{center} \end{table} With respect to using a single feature (i.e.~3D joins, 2D joints, CNN-ISA, C3D, or action recognition), 3D joints obtain the best performance. Although C3D features perform well in action recognition in heterogeneous video \cite{tran2015learning}, the results were worse than that of other features in our summarization task. The dimensionality of the C3D features (4096) is significantly higher when compared to that of others, and thus our dataset may not be sufficient to train the network well. Fine-tuning C3D using the Kendo dataset might improve its performance. In contrast, CNN-ISA also uses RGB frames and obtains better results when compared to those of C3D, most likely due to the lower dimensionality of its features (400). This implies that it is also possible to obtain features from RGB frames that allow the modeling of UGSV highlights. 
The decrease in the performance of 2D joints with respect to 3D joints may indicate that view variations in the same pose negatively affect the body joint-based features stream. The action recognition feature had an intermediate performance. A potential reason is that the action recognition feature is based on a classic approach for classification, so useful cues contained in the 3D body joint positions degenerated in this process. From these results, the features that performed better for highlight classification correspond to CNN-ISA holistic features and 3D body joint-based features. Several state-of-the-art action recognition methods enjoy improvements in performance by combining handcrafted spatio-temporal features (e.g., dense trajectories) and those learned via CNNs \cite{tran2015learning, feichtenhofer2016spatiotemporal}. This is also true in the present work where a combination of CNN-ISA with 3D joints achieves the best performance. The combination of CNN-ISA with 2D joints also provides a considerable boost in performance and especially for the experienced summaries. This supports our hypothesis that a two-streams architecture also provides better results for UGSV summarization Finally, the lowest part of Table \ref{tab:objecfeat} shows the results of other summarization methods. The results of the proposed method outperform the results of previous works, as well as those of the clustering-based baseline. While clustering allows a wider variety of scenes in the summary, this is not a good strategy for UGSV summarization that follows a different criterion based on interestingness. To investigate the necessity of capturing temporal dependencies of our action features (3D joints + CNN-ISA), we replaced $\mathtt{lstm}_H$ in our network with a fully-connected layer of the same size (400$\times$400). 
This experiment allowed us to draw some interesting conclusions: Modeling the temporal relationship among sequential action features allows for an improved performance. Moreover, this improvement is more noticeable in the case of experienced users, because of their more elaborated labeling of interesting actions. Then, we compared the proposed method with the state-of-the-art summarization method of Zhang et al. \cite{zhang2016video}. Zhang et al. extract features from each frame with a pre-trained CNN for image recognition (i.e. GoogLeNet \cite{szegedy2015going}) and feeds those features to a bidirectional LSTM to model the likelihood of whether the frames should be included in the summary. As shown in Table \ref{tab:objecfeat}, in spite of the more sophisticated temporal modeling in \cite{zhang2016video}, the performance is lower than most feature combinations in our method. This is most likely due to the particularities of sports video; GoogLeNet, as a network pre-trained for image classification, may not be able to extract features that represent different actions of a sport. Moreover, whereas our features are extracted from a video segment (which contains several frames), features in \cite{zhang2016video} are extracted from a single frame, and thus cannot represent continuous motion. The proposed method also outperforms our previous work \cite{tejerodepablos2016human}, which used the classification results of an action recognition method to train a GMM-HMM for highlight modeling. We can conclude that it is not necessary to explicitly recognize players' actions for UGSV summarization, which may actually degrade performance when compared to that in the case of directly using action recognition features. \subsubsection{Objective evaluation by highlight completeness} \label{sec:complet} \begin{figure}[!t] \centering \includegraphics[scale=.5]{comple.pdf} \caption{Association of highlights with respect to the greedy algorithm. 
Each highlight in the ground truth is uniquely associated to a highlight in the generated summary (two summary highlights cannot share the same ground truth highlight). The completeness of a summary highlight corresponds to the percentage of overlap with the ground truth (0\% if unassociated).} \label{fig:comple} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=.5]{rec-pre_E.pdf} \includegraphics[scale=.5]{rec-pre_NE.pdf} \caption{Recall-precision curves for different completeness values (up: labels \textit{E}, down: labels \textit{NE}). The gap between the curves $C=50\%$ and $C=70\%$ shows that a significant number of the highlights are missing for a maximum of half the interesting segments. As $\theta$ varies, the appearing of incomplete highlights affects the association of highlights-ground truth, resulting in a jagged curve.} \label{fig:rec-pre} \end{figure} \begin{figure*}[!thp] \centering \includegraphics[scale=.5]{threshold.pdf} \caption{Original length: 10 min 40 sec. Summary length: 1 min. The highlights summary is generated by applying a threshold $\theta$ to the probability of interestingness $p$. Video segments with higher $p$ are extracted prior to segments with lower $p$, and thus in a few cases the beginning/end segments of the highlights are missing when compared to the ground truth.} \label{fig:threshold} \end{figure*} A highlight may consist of consecutive video segments. Hence, although missing a segment may not significantly impact the f-score, it affects the continuity of the video, and thereby the comprehensibility and the user experience of the summary. Given this, a criterion is defined to evaluate the completeness $c$ of an extracted highlight as the fraction of overlap between the extracted highlight and its associated ground truth highlight. 
The association between extracted and ground truth highlights is not trivial, and it was performed by using a greedy algorithm in which the total $c$ of all highlights is maximized (Fig. \ref{fig:comple}). An extracted highlight is considered as a TP if its completeness $c$ exceeds a certain percentage $C$\%, and based on this, the precision and recall of the highlights are calculated as follows: \begin{equation} \textrm{precision}=\frac{\textrm{TP}}{\textrm{TP}+\textrm{FP}} , \;\;\;\;\;\; \textrm{recall}=\frac{\textrm{TP}}{\textrm{TP}+\textrm{FN}} . \end{equation} In the experiment, the threshold $\theta$ varies from 0 to 1 over the probability $p$ to generate the recall-precision curve of group \textit{E} and \textit{NE}. Figure \ref{fig:rec-pre} shows the curves produced for $C = $ 50\%, 70\%, and 90\%. We observe that reducing $C$ to 50$\%$ significantly increases the number of complete highlights. The presence of incomplete highlights is attributed to the way highlights are extracted. First, the \textit{high $p$ segments} are extracted, and then the highlight is completed with \textit{low $p$ segments} as the threshold $\theta$ decreases (Fig. \ref{fig:threshold}). However, prior to the completion of a highlight, \textit{high $p$ segments} from other highlights are extracted and, in a few cases, the \textit{low $p$ segments} are never extracted. Specifically, the parts before and after an interesting Kendo technique normally correspond to \textit{low $p$ segments} since they are not present in every ground truth highlight annotated by the participants. The reason for the increased number of incomplete segments (less TP) in the \textit{NE} summaries is because the inexperienced group annotated a higher number of highlights. \subsubsection{Subjective evaluation} The same participants who annotated the original videos were asked to participate in a survey to assess their opinion on the ground truth and the generated summaries. 
The three videos with the highest, median and lowest f-scores (averaged over groups $E$ and $NE$) were selected. With respect to each video, participants were shown the ground truth and the summaries generated with the best feature combination (i.e., \textit{3D joints + CNN-ISA}) using both group \textit{E} and \textit{NE} labels. As a result, each participant watched 12 videos (3 f-scores $\times$ 4 video types). The participants were asked to: \begin{itemize} \item (Q1) assign a score in a Likert scale from 1 (very few highlights are interesting) to 5 (most highlights are interesting) based on their satisfaction with the contents of each of the 12 videos. \item (Q2) state their opinion on the videos and the criteria followed while assigning a score. \end{itemize} Table \ref{tab:subjec} shows the results of Q1 grouped by video type and video f-score. The scores are averaged for group \textit{E} and \textit{NE} separately. \begin{table*}[t] \begin{center} \caption{Subjective evaluation results with respect to the video type and f-score.\newline Each cell contains the mean $\pm$ the standard deviation of the scores (from 1 to 5).} \label{tab:subjec} \begin{tabular}{c|cccc|ccc} \hline \multirow{2}{*}{} & \multicolumn{4}{c|}{Video type} & \multicolumn{3}{c}{Video f-score} \\ & Ground truth \textit{E} & Ground truth \textit{NE} & Summary \textit{E} & Summary \textit{NE} & Highest & Median & Lowest \\ \hline Group \textit{E} & 3.2$\pm$0.99 & 3.07$\pm$1.04 & 2.6$\pm$1.23 & 2.73$\pm$0.87 & 3.3$\pm$0.95 & 2.85$\pm$0.97 & 2.55$\pm$1.18 \\ Group \textit{NE} & 3.57$\pm$0.72 & 3.5$\pm$1.07 & 3.2$\pm$0.83 & 2.9$\pm$0.97 & 3.48$\pm$0.83 & 3.03$\pm$0.91 & 3.38$\pm$0.95 \\ \hline \end{tabular} \end{center} \end{table*} In the context of Q1, with respect to the video type, both experienced and inexperienced participants assigned a higher score to the ground truth videos than to the generated summaries. 
This is because some of the summaries contain uninteresting video segments and also the completeness of the highlights is worse when compared to that of ground truth videos. The potential reasons as to why the ground truth videos did not obtain a perfect score are mainly attributed to the following two factors: (1) The ground truth summaries are created by combining labels from several participants via majority voting, and thus the original labels of each participant are lost. (2) The ground truth also contains incomplete highlights due to errors when the participants annotated the videos. Additionally, experienced participants preferred the \textit{NE} ground truth to the \textit{E} summaries plausibly because they do not find incomplete highlights interesting since context is missing. Conversely, inexperienced participants tend to appreciate the highlights from the experienced participants more than their own highlights. This is potentially because the highlights from the experienced participants are briefer and contain certain techniques (e.g. counterattacks) that make summaries more interesting when compared to those of the inexperienced participants. The results for Q1 in terms of the f-score type demonstrate the high correlation to the f-score (i.e., a video with a higher f-score tends to receive a higher subjective score). With respect to Q2, participants provided their opinion on the summaries. A few experienced participants found the highlights as too short and this even included complete highlights in the ground truth. This occurs because only the segments labeled as highlights by at least 40\% of the participants (i.e., 2 people in group \textit{E} and 4 people in group \textit{NE}) were included in the ground truth, and thus some labeled segments were left out. 
Inexperienced participants state the usefulness of the proposed method to extract highlights based on interesting actions as well as time saved by watching the highlights as opposed to the whole video. In addition, for a few inexperienced participants incomplete highlights make the summaries difficult to follow From this evaluation, we conclude that the labels from experienced users contain a better selection of Kendo techniques. Due to the negative impact of incomplete highlights on the summaries, it is necessary to consider extra temporal consistency in $p_t$. One possibility is to replace the skimming curve formulation-based highlight extraction with an algorithm that takes into account the completeness of the highlights. Also, another possibility is not to combine the labels of several participants since it introduces incomplete highlights (Section \ref{sec:complet}) and alters personal preferences. Thus, instead of combining labels from different participants, another possibility is to create personalized summaries with a higher quality ground truth or to include user profiles such as that proposed in \cite{nitta2009automatic}. \section{Conclusion} \label{sec:conclusions} This paper has described a novel method for automatic summarization of UGSV, especially demonstrating the results for Kendo (Japanese fencing) videos. Given the lack of editing conventions that permit the use of heuristics, a different cue, i.e., players' actions, is used to acquire high-level semantics from videos to generate a summary of highlights. The presented two-stream method combines body joint-based features and holistic features for highlights extraction. The best combination among the evaluated features corresponds to a combination of 3D body joint-based features and CNN-ISA features \cite{le2011learning}). 
In contrast to the previous work \cite{tejerodepablos2016human}, the results indicate that it is not necessary to explicitly recognize players' actions in order to determine highlights. Alternatively, deep neural networks are leveraged to extract a feature representation of players' actions and to model their temporal dependency. Specifically, LSTM is useful to model the temporal dependencies of the joint positions of players' bodies in each video segment as well as the highlights in the entire video. In order to generate appealing summaries, players' 3D body joint positions from depth maps offer the best performance. However, in the absence of depth maps, 2D body joint positions and holistic features extracted from RGB images are also used for summarization. The future work includes improving the architecture of the network and fine-tuning it in the end-to-end manner with a larger dataset to illustrate its potential performance. It also includes evaluating the method in the context of a wider variety of sports (e.g., boxing, fencing, and table tennis). \section*{Acknowledgment} This work was supported in part by JSPS KAKENHI Grant Number 16K16086. \appendices \section{} \label{sec:appendixa} Table \ref{tab:dataset} lists the duration of the videos in the dataset used in the experiments and their respective ground truth highlights as annotated by users. 
\begin{table}[!ht] \begin{center} \caption{Duration of the video dataset and ground truths.} \label{tab:dataset} \begin{tabular}{ r|c|c|c } \hline ID & Original video & Ground truth \textit{E} & Ground truth \textit{NE} \\ \hline \#1 & 10 min 48 sec & 1 min 11 sec & 2 min 21 sec \\ \#2 & 5 min 10 sec & 49 sec & 1 min 7 sec \\ \#3 & 5 min 18 sec & 1 min 9 sec & 1 min 58 sec \\ \#4 & 9 min 37 sec & 1 min 37 sec & 2 min 17 sec \\ \#5 & 9 min 59 sec & 2 min 33 sec & 2 min 42 sec \\ \#6 & 10 min 5 sec & 1 min 28 sec & 2 min 55 sec \\ \#7 & 10 min 3 sec & 48 sec & 1 min 45 sec \\ \#8 & 10 min 10 sec & 45 sec & 2 min 14 sec \\ \#9 & 5 min 17 sec & 32 sec & 1 min 14 sec \\ \#10 & 5 min 14 sec & 22 sec & 1 min 30 sec \\ \#11 & 4 min 58 sec & 53 sec & 1 min 50 sec \\ \#12 & 20 min 40 sec & 1 min 24 sec & 4 min 14 sec \\ \#13 & 10 min 15 sec & 53 sec & 2 min 50 sec \\ \#14 & 10 min 16 sec & 58 sec & 5 min 8 sec \\ \#15 & 10 min 37 sec & 47 sec & 2 min 44 sec \\ \#16 & 10 min 37 sec & 34 sec & 2 min 21 sec \\ \#17 & 5 min 14 sec & 16 sec & 1 min 44 sec \\ \#18 & 5 min 4 sec & 32 sec & 2 min 21 sec \\ \#19 & 10 min 57 sec & 38 sec & 2 min 11 sec \\ \#20 & 5 min 36 sec & 27 sec & 1 min 21 sec \\ \#21 & 5 min 36 sec & 33 sec & 1 min 35 sec \\ \#22 & 10 min 48 sec & 58 sec & 1 min 59 sec \\ \#23 & 9 min 44 sec & 1 min 11 sec & 2 min 48 sec \\ \#24 & 10 min 23 sec & 54 sec & 2 min 25 sec \\ \#25 & 10 min 7 sec & 28 sec & 1 min 57 sec \\ \#26 & 10 min 40 sec & 49 sec & 2 min 5 sec \\ \#27 & 4 min 59 sec & 33 sec & 2 min 13 sec \\ \#28 & 8 min 13 sec & 47 sec & 2 min 10 sec \\ \hline Total & 4 hours 6 min 11 sec & 24 min 49 sec & 1 hour 3 min 59 sec \\ \end{tabular} \end{center} \end{table} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 1.838867, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUaOnxK02iP4Wj9xVh
\section{Introduction}
\label{s:intro}
{\it Motivation}. Expedia (\url{www.expedia.com}) is the world's largest online travel agency. It has approximately 150 sites that operate in 70 countries around the world, with 50 million visitors a month and 200 mobile app downloads a minute. Expedia released a dataset (through the 2013 International Conference on Data Mining data competition) containing $52$ variables of user and hotel characteristics (e.g., click-through data, hotel characteristics, user's aggregate purchase history, competitor price information) from over $10$ million hotel search results collected over a window of the year $2013$. The Expedia digital marketing team intends to understand what visitors want to see in their Expedia.com search results, or in other words, which features of ``search result impressions'' lead to a purchase, with the goal of increasing user engagement, improving the travel experience, and raising the conversion or booking rate. These factors will ultimately be used to predict the needs of consumers--to personalize, target, and provide them with content that is contextually relevant. For this purpose we develop a scalable distributed algorithm that can mine search data from millions of travelers in a completely nonparametric manner to find the important features that best predict customers' likelihood to book a hotel--an important large-scale machine learning problem, which is the main focus of this paper. \vskip1em {\it The Volume Problem}. This kind of `tall' data structure, whose number of observations can run into the millions and billions, frequently arises in astronomy, marketing, neuroscience, e-commerce, and social networks. These massive datasets, {\it which cannot be stored or analyzed by a single computer `all-at-once' using standard data analysis software}, create a major bottleneck for statistical modeling and inference. There is currently no efficient and flexible statistical inference model available to address this problem.
We seek to develop a comprehensive framework that will allow data scientists to {\it systematically apply the tools and algorithms developed prior to the `age of big data' for massive data problems} - thus filling a significant gap in current practice. \vskip1em {\it The Variety Problem}. Another challenge is how to tackle the mixed data problem (Parzen and Mukhopadhyay, 2013) - one of the biggest unsolved problems of data science. The Expedia dataset contains variables of different types (e.g. continuous, categorical, discrete, etc.). The traditional statistical modeling approach develops tools that are specific to each data type. A few examples of traditional statistical measures for $(Y;X)$ data include: (1) Pearson's $\phi$-coefficient: $Y$ and $X$ both binary, (2) Wilcoxon Statistic: $Y$ binary and $X$ continuous, (3) Kruskal-Wallis Statistic: $Y$ discrete multinomial and $X$ continuous, and many more. Computational implementation of traditional statistical algorithms for heterogeneous large datasets (like the Expedia search data) thus becomes dauntingly complex, as it requires the user to supply the data-type information for each pair in order to calculate the proper statistic. To streamline this whole process we need to develop unified computing algorithms with automatic data-driven adjustments that yield appropriate statistical measures without demanding the data type information from the user. We call this new computing culture United Statistical Algorithms (Parzen and Mukhopadhyay, 2013). To achieve this goal we design a customized discrete orthonormal polynomial-based transformation, the LP-Transformation (Mukhopadhyay and Parzen, 2014), for any arbitrary random variable $X$, which can be viewed as a nonparametric data-adaptive generalization of Norbert Wiener's Hermite polynomial chaos-type representation (Wiener, 1938).
This easy-to-implement, LP-transformation-based approach allows us to simultaneously extend and integrate classical and modern statistical methods for nonparametric feature selection, thus providing the foundation to build \textit{increasingly automatic algorithms} for large complex datasets. \vskip1em {\it Scalability Issue}. Finally, the most crucial issue is to develop a scalable algorithm for large datasets like the Expedia example. With the evolution of big data structures, new processing capabilities relying on distributed, parallel processing have been developed for efficient data manipulation and analysis. Our technique addresses the question of how to develop a statistical inference framework for massive data that can fully exploit the power of parallel computing architecture and can be easily embedded into the MapReduce framework. We especially design the statistical `map' function and `reduce' function for massive data variable selection by integrating many modern statistical concepts and ideas introduced in Sections \ref{sec:formulation} and \ref{sec:theory}. Doing so allows for faster processing of big datasets while maintaining the ability to obtain accurate statistical inference without losing information - thus providing an effective and efficient strategy for big data analytics. The other appealing aspect of our distributed statistical modeling strategy is that it is equally applicable to small and big data. Our modeling approach is generic and unifies small and big data variable selection strategies. \vskip1em {\it Organization}. Motivated by the challenges of the Expedia case study, we design \texttt{MetaLP} to support fast statistical inference on very large datasets, providing scalability, parallelism, and automation. The article is organized as follows. Section \ref{sec:formulation} provides the basic statistical formulation and overview of the \texttt{MetaLP} algorithmic framework.
Section \ref{sec:theory} covers the individual elements of the distributed statistical learning framework in more detail, addresses the issue of heterogeneity in big data, and provides a concrete nonparametric parallelizable variable selection algorithm. Section \ref{sec:expedia} provides an in-depth analysis of the motivating Expedia dataset using the framework to conduct variable selection under different settings to determine which hotel and user characteristics influence the likelihood of booking a hotel. Section \ref{sec:paradox} provides two examples of how the \texttt{MetaLP} framework provides a new understanding and resolution for problems related to Simpson's Paradox and Stein's Paradox. Section \ref{sec:conclusion} provides some concluding remarks and discusses future directions of this work. Supplementary materials are also available discussing simulation study results, the relevance of \texttt{MetaLP} for small data, and the \texttt{R} scripts for MapReduce implementation. \vskip1em {\it Related Literature}. Several distributed learning schemes for big data have been proposed in the literature. These include a scalable bootstrap for massive data (Kleiner et al., 2014) to assess the quality of a given estimator, a resampling-based stochastic approximation (RSA) method (Liang et al., 2013) for the analysis of a Gaussian geostatistical model, the divide and recombine (D\&R) approach to the analysis of large complex data (Guha et al., 2012), and also parallel algorithms for large-scale parametric linear regression proposed by Lin and Xi (2011) and Chen and Xie (2014), among others.
\textit{Instead of developing problem-specific divide-and-combine techniques, we provide a general and systematic framework that can be adapted for different varieties of ``learning problems'' from a nonparametric perspective, which distinguishes our work from previous proposals.} A new data analysis paradigm, combining two key ideas: the LP united statistical algorithm (the science of mixed data modeling) and meta-analysis (the statistical basis of divide-and-combine), is proposed that can simultaneously perform nonparametric statistical \textit{modeling and inference} under a single framework in a computationally efficient way, thus addressing a problem with much broader scope and applicability. \section{Statistical Formulation of Big Data Analysis}\label{sec:formulation} Our research is motivated by a real business problem of optimizing personalized web marketing for Expedia, with the goal of improving customer experience and look-to-book ratios\footnote{The look-to-book ratio is the number of people who visit a travel agency or agency web site, compared to the number who actually make a purchase. This ratio is important to online travel agents such as \texttt{Priceline.com}, \texttt{Travelocity.com}, and \texttt{Expedia.com} for determining whether their websites are securing purchases.} by identifying the key factors that affect consumer choices. Nearly 10 million historical hotel search and click-through transaction records selected over a window of the year 2013 are available for analysis; they demonstrate the typical analytical challenges involved in big data modeling.
This prototypical digital marketing case study allows us to address the following more general data modeling challenge, which finds its applicability in many areas of modern data-driven science, engineering, and business:\vskip.5em {\it How can we design nonparametric distributed algorithms that work on large amounts of data (that cannot be stored or processed by just one machine/server node) to find the most important features that affect a certain outcome of interest?}\vskip.6em At face value, this might look like a simple two-sample inference problem that can be solved by some trivial generalization of existing `small-data' statistical methods, but in reality, this is not the case. In fact, we are not aware of any pragmatic statistical algorithm that can achieve a similar feat. In this article we perform a thorough investigation of the theoretical and practical challenges present in the Expedia case study, and more generally in big data analysis. We emphasize the role of statistics in big data analysis and provide an overview of the three main components of our statistical theory along with the modeling challenges they are designed to overcome. In what follows, we present the conceptual building blocks of \texttt{MetaLP}--a new large-scale distributed inference tool that allows big data users to run statistical procedures on large amounts of data. Figure \ref{fig:meta_framework2} outlines the architecture. \begin{figure} [h] \centering \includegraphics[height=\textheight,width=.4\textwidth,keepaspectratio,trim=2cm .5cm 2cm 1.5cm]{Meta_plot2.pdf} \caption{\texttt{MetaLP} based large-scale distributed statistical inference architecture.} \label{fig:meta_framework2} \end{figure} \subsection{Partitioning Massive Datasets} Dramatic increases in the size of datasets have created a major bottleneck for conducting statistical inference in a traditional ``centralized'' manner where we have access to the full data.
The first and quite natural idea to tackle the volume problem is to divide the big data into several smaller datasets, similar to modern parallel computing database systems like Hadoop and Spark, as illustrated in Figure \ref{fig:datatypes}. However, simply dividing the dataset does not allow data scientists to conquer the problem of big data analysis. There are many unsettled questions that we have to carefully address using proper statistical tools to arrive at an appropriate solution. Users must select a data partitioning scheme to split the original large data into several smaller parts and assign them to different nodes for processing. The most common technique is random partitioning. However, for other problems users can apply other strategies, like spatial or temporal partitioning, utilizing the inherent structure of the data. It may be the case that the original massive dataset is already partitioned by some natural grouping variable, in which case an algorithm that can accommodate pre-existing partitions is desirable. The number of partitions can also be defined by the user, who may consider a wide range of cost metrics including the number of processors required, CPU time, job latency, memory utilization, and more when making this decision. One important and often neglected issue associated with massive data partitioning for parallel processing is that the characteristics of the subpopulations created may vary greatly. This is known as heterogeneity (Higgins and Thompson, 2002), which will be thoroughly discussed in the next section and is an unavoidable obstacle for Divide-Conquer style inference models. Heterogeneity can certainly impact data-parallel inference, but the question is: can we design a method that is robust to the various data partitioning options by incorporating data-adaptive regularization?
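The random-partitioning step just described can be sketched in a few lines of standard-library Python. This is our own minimal sketch (the function name \texttt{random\_partition} is hypothetical, not part of any existing \texttt{MetaLP} release):

```python
import random

def random_partition(n_rows, k, seed=0):
    """Randomly assign n_rows observation indices to k subpopulations
    of (nearly) equal size -- the random-partition scheme."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    idx = list(range(n_rows))
    rng.shuffle(idx)
    # round-robin over the shuffled indices: sizes differ by at most 1
    return [idx[i::k] for i in range(k)]

# example: 10 observations spread across 3 worker nodes
parts = random_partition(10, 3)
```

Each index block would then be shipped to one node; a spatial or temporal scheme would simply replace the shuffle with a sort on the relevant key.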
Novel exploratory diagnostics along with $\tau^2$-regularization techniques are provided in Sections \ref{sec:I2} and \ref{sec:htau} to measure the severity of heterogeneity across subpopulations and adjust effect size estimates accordingly. \begin{figure}[!t] \centering \begin{subtable}{1\textwidth} \centering \caption*{Subpopulation 1} \begin{tabular}{|c | c c c|} \hline \texttt{booking\_bool} & \texttt{promotion\_flag} & \texttt{srch\_length\_of\_stay} & \texttt{price\_usd} \\\hline 1 & 0 & 2 & 164.59 \\ 0 & 1 & 7 & 284.48 \\ 0 & 1 & 7 & 123.98 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 4 & 200.46 \\ 1 & 0 & 1 & 194.34 \\ 0 & 1 & 3 & 371.27 \\ \hline \end{tabular} \end{subtable} \[\bullet\] \[\bullet\] \[\bullet\] \begin{subtable}{1\textwidth} \centering \caption*{Subpopulation $k$} \begin{tabular}{|c | c c c|} \hline \texttt{booking\_bool} & \texttt{promotion\_flag} & \texttt{srch\_length\_of\_stay} & \texttt{price\_usd} \\\hline 1 & 1 & 1 & 125.65 \\ 1 & 0 & 3 & 149.32 \\ 0 & 1 & 8 & 120.63 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 1 & 1 & 224.46 \\ 1 & 0 & 1 & 180.64 \\ 1 & 1 & 3 & 174.89 \\ \hline \end{tabular} \end{subtable} \vskip1em \caption{Illustration of a partitioned data set with $k$ subpopulations and various data types. A subset of the variables in the {\it Expedia} dataset are shown. The target variable $Y$ \texttt{booking\_bool}, indicates whether or not the hotel was booked. The three predictor variables shown are $X_1$ \texttt{promotion\_flag} (indicates if sale price promotion was displayed), $X_2$ \texttt{srch\_length\_of\_stay} (number of nights stayed searched), and $X_3$ \texttt{price\_usd} (displayed price of hotel).} \label{fig:datatypes} \vspace{-.75em} \end{figure} \subsection{LP Statistics for Mixed Data Problem} Massive datasets typically contain a multitude of data types, and the Expedia dataset is no exception. 
Figure \ref{fig:datatypes} shows three predictor variables in the Expedia dataset: \texttt{promotion\_flag} (binary), \texttt{srch\_length\_of\_stay} (discrete count), and \texttt{price\_usd} (continuous). Even for this illustrative situation, to construct `appropriate' statistical measures with the goal of identifying the important variables, traditional algorithmic approaches demand two pieces of information: (i) values and (ii) data-type information for every single variable $X_j$ present in each of the subpopulations or partitioned datasets, known as the `value-type' information-pair for statistical mining. This produces considerable complications in the computational implementation and creates serious roadblocks for building systematic and automatic algorithms for the Expedia analysis. Thus, the question of immediate concern is:\vskip.4em {\it How can we develop a unified computing formula with automatic built-in adjustments that yields appropriate statistical measures without requiring the data type information from the user?}\vskip.45em To tackle this `variety' or mixed-data problem we design a custom-constructed discrete orthonormal polynomial-based transformation, called LP-Transformation, that provides a generic and universal representation of any random variable, defined in Section \ref{sec:LP}. We use this transformation technique to represent the data in a new LP Hilbert space. This data-adaptive transformation will allow us to construct unified learning algorithms by compactly expressing them as inner products in the LP Hilbert space. 
\subsection{Combining Information via Confidence Distribution based Meta-analysis} Ultimately, the goal of having a distributed inference procedure critically depends on the question:\vskip.5em {\it How to judiciously combine the ``local'' LP-inferences executed in parallel by different servers to get the ``global'' inference for the original big data?}\vskip.55em To resolve this challenge, we make a novel connection with meta-analysis. Section \ref{sec:metaparallel} describes how we can use meta-analysis to parallelize the statistical inference process for massive datasets. Furthermore, instead of simply providing point estimates, we seek to provide a distribution estimator (analogous to the Bayesian posterior distribution) for the LP-statistics via a confidence distribution (CD) that contains information for virtually all types of statistical inference (e.g. estimation, hypothesis testing, confidence intervals, etc.). Section \ref{sec:CDmeta} discusses this strategy using CD-based meta-analysis, which plays a key role as a `Combiner' to integrate the local inferences and construct a comprehensive answer for the original data. These new connections allow data scientists to fully utilize the parallel processing power of large-scale clusters for designing unified and efficient big data statistical inference algorithms. \vskip1em To conclude, we have discussed the architectural overview of \texttt{MetaLP}, which addresses the challenge of developing an inference framework for data-intensive applications in a way that does not require any modifications of the core statistical principles that were developed for `small' data. Due to its simplicity and flexibility, data scientists can adapt this inference model and statistical computing philosophy to a significant number of big data problems. Next, we describe the theoretical underpinnings, algorithmic foundation, and implementation details of our data-parallel large-scale \texttt{MetaLP} inference model.
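The architecture of Figure \ref{fig:meta_framework2} can be summarized as a two-stage map/reduce skeleton. The following is an illustrative sketch only: the local statistic is simplified to a sample mean, whereas in \texttt{MetaLP} it would be an LP-statistic carrying its confidence distribution:

```python
from statistics import mean

def metalp_skeleton(partitions, local_stat):
    """Divide-Combine skeleton: apply local_stat to each subpopulation
    in parallel (the 'map' step), then merge the local estimates with
    weights proportional to subpopulation size (the 'reduce' step)."""
    local = [(local_stat(part), len(part)) for part in partitions]  # map
    n_total = sum(n for _, n in local)
    return sum(n * est for est, n in local) / n_total               # reduce

# toy check: the size-weighted combine recovers the full-data mean
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
combined = metalp_skeleton(partitions, mean)
```

The `map' step touches the raw data; the `reduce' step only sees one (estimate, size) pair per node, which is what makes the scheme embeddable in MapReduce.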
\setcounter{section}{2} \section{Elements of Distributed Statistical Learning}\label{sec:theory} In this section we introduce the key foundational concepts of our proposed nonparametric distributed statistical inference paradigm. The theory connects several classical and modern statistical ideas to develop a comprehensive inference framework. We highlight along the way how these new ideas and connections address the real challenges of big data analysis as noted in Section 2. \subsection{LP United Statistical Algorithm and Universal Representation} \label{sec:LP} An important challenge in analyzing large complex data is the data variety problem. Developing algorithms that are generally applicable and comparable across different data types (e.g. continuous, discrete, ordinal, nominal, etc.) is an open problem that has a direct implication for developing an automatic and unified computing formula for statistical data science. To address the data variety problem, or the mixed data problem, we introduce a new nonparametric statistical data modeling framework based on an LP approach to data analysis (Mukhopadhyay and Parzen, 2014). \vskip.5em \textit{Data Transformation and LP Hilbert Functional Space Representation}. Our approach relies on an alternative representation of the data in the LP Hilbert space, which will be defined shortly. The new representation shows how each explanatory variable, regardless of data type, can be represented as a linear combination of {\it data-adaptive} orthogonal LP basis functions. This data-driven transformation will allow us to construct unified learning algorithms in the LP Hilbert space. Many traditional and modern statistical measures developed for different data types can be compactly expressed as inner products in the LP space. The following is the fundamental result for the LP basis function representation.
\begin{theorem}[LP representation] Random variable $X$ (discrete or continuous) with finite variance admits the following decomposition: $X - \mathbb E(X) = \sum_{j>0} \,T_j(X;X)\, \mathbb E[X T_j(X;X)]$ with probability $1$. \end{theorem} The $T_j(X;X)$ for $j=1,2,\ldots$ are score functions constructed by Gram-Schmidt orthonormalization of the powers of $T_1(X;X)=\mathcal{Z}(F^{\rm{mid}}(X;X))$, where $\mathcal{Z}(X)=(X-\mathbb E[X])/\sigma(X)$, $\sigma^2(X) = \mbox{Var}(X)$, and the mid-distribution transformation of a random variable $X$ is defined as \begin{equation} F^{\rm{mid}}(x;X)=F(x;X)-.5p(x;X),\, p(x;X)=\Pr[X=x], \,F(x;X)=\Pr[X \leq x]. \end{equation} We construct the LP score functions on $0<u<1$ by letting $x=Q(u;X)$, where $Q(u;X)$ is the quantile function of the random variable $X$: \begin{equation} S_j(u;X)\,=\,T_j(Q(u;X);X) , ~\,\, Q(u;X)=\inf\{x:F(x) \geq u\}. \end{equation} \vskip.5em {\it Why is it called the LP-basis?} Note that our specially designed basis functions vary naturally according to data type, unlike the fixed Fourier and wavelet bases, as shown in Figure \ref{fig:score}. Note the interesting similarity between the shapes of the LP score functions and the shifted Legendre polynomials for the {\it continuous} feature \texttt{price\_usd}. In fact, as the number of atoms (\# distinct values) $A(X)$ of a random variable tends to infinity (moving from a discrete to a continuous data type), the shapes converge to the smooth Legendre polynomials. To emphasize this universal limiting shape we call it the {\bf L}egendre-{\bf P}olynomial-like ({\bf LP}) orthogonal basis. For any general $X$, LP-polynomials are piecewise-constant orthonormal functions over $[0,1]$, as shown in Figure \ref{fig:score}. This data-driven property makes the LP transformation uniquely advantageous in constructing a generic algorithmic framework to tackle the mixed-data problem. \vskip.5em \textit{Constructing Measures by LP Inner Product}.
Define the two-sample LP statistic for variable selection of a \textit{mixed random} variable $X$ (either continuous or discrete) based on our specially designed score functions: \begin{equation} \label{eq:LPm} \operatorname{LP}[j;X,Y]\,=\,\operatorname{Cor}[T_j(X;X),Y]\,=\,\mathbb E[T_j(X;X)T_1(Y;Y)]. \end{equation} To prove the second equality of Equation \eqref{eq:LPm} (which expresses our variable selection statistic as an LP-inner product measure), verify the following for $Y$ binary \[\mathcal{Z}(y;Y)\,=\,T_1(y;Y)\,=\, \left\{ \begin{array}{rl} -\sqrt{\dfrac{p}{1-p}} &\mbox{for $y=0$} \\ \sqrt{\dfrac{1-p}{p}} &\mbox{for $y=1$,} \end{array} \right.\] where $p=\Pr[Y=1]$. \vskip.5em \textit{LP statistic properties}. Using empirical process theory we can show that the sample LP-Fourier measures $\sqrt{n}\widetilde{\operatorname{LP}}[j;X,Y]$ are asymptotically i.i.d. standard normal (Mukhopadhyay and Parzen, 2014). As an example of the power of LP-unification, we describe $\operatorname{LP}[1;X,Y]$, which systematically reproduces all the traditional linear statistical variable selection measures for different data types of $X$. Note that the nonparametric Wilcoxon method to test the equality of two distributions can equivalently be represented as $\operatorname{Cor}(\mathbb{I}\{Y=1\},F^{\rm{mid}}(X;X))$, which leads to the following important alternative LP representation result. \begin{theorem} The two-sample Wilcoxon statistic $W$ can be computed as \begin{equation} W(X,Y)\,=\,\operatorname{LP}[1;X,Y]. \end{equation} \end{theorem} Our computing formula for the Wilcoxon statistic using $\operatorname{LP}[1;X,Y]$ offers automatic \textit{built-in adjustments for data with ties}; hence no further tuning is required. Furthermore, if $X$ and $Y$ are both binary (i.e.
they form a $2 \times 2$ table), then we have \begin{eqnarray} T_1(0;X)=-\sqrt{P_{2+}/P_{1+}}, & T_1(1;X)=\sqrt{P_{1+}/P_{2+}} \nonumber \\ T_1(0;Y)=-\sqrt{P_{+2}/P_{+1}}, & T_1(1;Y)=\sqrt{P_{+1}/P_{+2}}, \end{eqnarray} where $P_{i+}=\sum_j P_{ij}$ and $P_{+j}=\sum_i P_{ij}$, with $P_{ij}$ denoting the cell probabilities of the $2 \times 2$ table, and \begin{eqnarray} \label{eq:phi} \operatorname{LP}[1;X,Y]&=&\mathbb E[T_1(X;X) T_1(Y;Y)]\,=\,\sum_{i=1}^2 \sum_{j=1}^2 P_{ij} T_1(i-1;X)\,T_1(j-1;Y) \nonumber \\ &=&\big( P_{11}P_{22}-P_{12}P_{21}\big)/\big( P_{1+}P_{+1}P_{2+}P_{+2} \big)^{1/2}. \end{eqnarray} The following result summarizes the observation in \eqref{eq:phi}. \begin{theorem} For a $2 \times 2$ contingency table with Pearson correlation $\phi$ we have \begin{equation} \phi(X,Y)\,\,=\,\,\operatorname{LP}[1;X,Y]. \end{equation} \end{theorem} \vskip.2em \textit{Beyond Linearity}. Higher-order Wilcoxon statistics are LP-statistics based on the higher-order score functions $T_j(X;X)$, which detect \textit{distributional} differences, such as in variability, skewness, or tail behavior, between the two classes. The LP-statistics $\operatorname{LP}[j;X,Y], j>1$, capture how the distribution of a variable changes over classes in a systematic way, applicable to mixed data types. We are \textit{not} aware of any other statistical technique that can achieve similar goals. \vskip.2em To summarize, the remarkable property of LP-statistics is that they allow data scientists to write a \textit{single} computing formula for any variable $X$, irrespective of its data type, with a \textit{common} metric and asymptotic characteristics. This leads to a huge practical benefit in designing a unified method for combining `distributed local inferences' without requiring the data-type information for the variables in our partitioned dataset.
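To make the unification concrete, here is a minimal pure-Python sketch of the empirical first-order score $T_1$ and of $\operatorname{LP}[1;X,Y]$ (the helper names \texttt{t1\_scores} and \texttt{lp1} are ours). The very same two functions handle continuous, count, and binary inputs, and in the binary--binary case the result matches the $\phi$-coefficient formula in \eqref{eq:phi}:

```python
from math import sqrt

def t1_scores(x):
    """Empirical first LP score T1(x;X): the standardized
    mid-distribution transform F_mid(x) = F(x) - 0.5 * p(x)."""
    n = len(x)
    p = {v: x.count(v) / n for v in set(x)}                 # p(x), ties pooled
    fmid = {v: sum(q for u, q in p.items() if u < v) + 0.5 * p[v] for v in p}
    m = sum(p[v] * fmid[v] for v in p)                      # E[F_mid(X)]
    s = sqrt(sum(p[v] * (fmid[v] - m) ** 2 for v in p))     # sd of F_mid(X)
    return [(fmid[v] - m) / s for v in x]

def lp1(x, y):
    """LP[1;X,Y] = E[T1(X;X) T1(Y;Y)]: one formula for any data type."""
    return sum(a * b for a, b in zip(t1_scores(x), t1_scores(y))) / len(x)

# binary-binary case: LP[1] reproduces Pearson's phi coefficient
x = [0, 0, 1, 1, 1, 0, 1, 0]
y = [0, 1, 1, 1, 0, 0, 1, 0]
phi = (3/8 * 3/8 - 1/8 * 1/8) / sqrt(4/8 * 4/8 * 4/8 * 4/8)  # from the 2x2 table
```

Replacing \texttt{x} by, say, a list of prices leaves the call \texttt{lp1(x, y)} unchanged, which is the point of the unified formula; ties are adjusted automatically through the mid-distribution transform.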
\subsection{Meta-Analysis and Data-Parallelism}\label{sec:metaparallel} The objective of this section is to provide a new way of thinking about the problem: how to appropriately combine ``local'' inferences to derive reliable and robust conclusions for the original large dataset? This turns out to be one of the most crucial and heavily \textit{neglected} parts of data-intensive modeling, one that decides the fate of big data inference. Here we introduce the required statistical framework that can answer the key question: \textit{how to compute individual weights for each partitioned dataset?} Our framework adopts the concept of meta-analysis to provide a general recipe for constructing such algorithms for large-scale parallel computing. This will allow us to develop statistical algorithms that can judiciously balance computational speed and statistical accuracy in data analysis. \vskip.25em \textit{Brief Background on Meta-Analysis}. Meta-analysis (Hedges and Olkin, 1985) is a statistical technique, with origins in clinical settings, by which information from independent studies is assimilated. It was invented primarily to combat the problem of under-powered `small data' studies. A key benefit of this approach is the aggregation of information leading to a higher statistical power, as opposed to less precise inference derived from a single small data study. A huge amount of literature exists on meta-analysis, including a careful review of recent developments written by Sutton and Higgins (2008), which includes 281 references. \vskip.25em \textit{Relevance of Meta-analysis for big data inference?} First note that, unlike the classical situation, we don't have any low statistical power issue for big data problems. At the same time we are unable to analyze the whole dataset all-at-once using a single machine in a classical inferential setup.
Our novelty lies in \textit{recognizing} that meta-analytic logic provides a formal statistical framework to rigorously formulate the problem of combining distributed inferences. We apply meta-analysis from a completely different perspective and motivation, as a tool to facilitate distributed inference for very large datasets. This novel connection provides a statistically sound, powerful mechanism to combine `local' inferences by properly determining the `optimal' weighting strategy (Hedges and Olkin, 1985). We partition big data systematically into several subpopulations (small datasets) over a distributed database, estimate parameters of interest in each subpopulation separately, and then combine results using meta-analysis, as demonstrated in Figure \ref{fig:meta_framework2}. Thus meta-analysis provides a methodical way to pool the information from all of the small subpopulations and produce a singular powerful combined inference for the original large dataset. In some circumstances, the dataset may already be partitioned (each group could be an image or a large text document) and stored in different servers based on some reasonable partitioning scheme. Our distributed statistical framework can work with these predefined groupings as well, by combining them using the meta-analysis framework to arrive at the final combined inference. \vskip.25em We call this statistical framework, which utilizes both $\operatorname{LP}$ statistics and meta-analysis methodology, \texttt{MetaLP}; it consists of two parts: (i) the LP statistical map function or algorithm (that tackles the `variety' problem), and (ii) the meta-analysis methodology for merging the information from all subpopulations to get the final inference. \subsection{Confidence Distribution and LP Statistic Representation} \label{sec:CD} The Confidence Distribution (CD) is a distribution estimator, rather than a point or interval estimator, for a particular parameter of interest.
From the CD, all the traditional forms of statistical estimation and inference (e.g. point estimation, confidence intervals, hypothesis testing) can be produced. Hence CDs contain a wealth of information about parameters of interest, which makes them attractive for understanding key parameters. Moreover, CDs can be utilized within the meta-analysis framework, as we will show in the next section, which enables our algorithm to provide users with the variety of classical statistics mentioned previously. More specifically, the CD is a concept referring to a sample-dependent distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Xie and Singh (2013) gave a comprehensive review of the CD concept and emphasized that the CD is a frequentist statistical notion. Schweder and Hjort (2002) first defined the CD formally, and Singh, Xie, and Strawderman (2005) extended the CD concept to asymptotic confidence distributions (aCDs): \begin{definition} Suppose $\Theta$ is the parameter space of the unknown parameter of interest $\theta$, and $\omega$ is the sample space corresponding to data $\mathbf{X}_n=\{X_1,X_2,\ldots,X_n\}^T$. Then a function $H_n(\cdot)=H_n(\mathbf{X},\cdot)$ on $\omega \times \Theta \rightarrow [0,1]$ is a confidence distribution (CD) if: (i) for each given $\mathbf{X}_n\in \mathbf{\omega}$, $H_n(\cdot)$ is a continuous cumulative distribution function on $\Theta$; (ii) at the true parameter value $\theta=\theta_0$, $H_n(\theta_0)=H_n(\mathbf{X},\theta_0)$, as a function of the sample $\mathbf{X}_n$, follows the uniform distribution $U[0,1]$. The function $H_n(\cdot)$ is an asymptotic CD (aCD) if the $U[0,1]$ requirement holds only asymptotically for $n \rightarrow \infty$ and the continuity requirement on $H_n(\cdot)$ can be relaxed. \end{definition} By definition, the CD is a function of both a random sample and the parameter of interest.
Condition (i) requires that, for each sample, the CD be a distribution function on the parameter space. The $U[0,1]$ requirement in (ii) allows us to construct confidence intervals from a CD easily, meaning that $(H_n^{-1}(\alpha_1),H_n^{-1}(1-\alpha_2))$ is a $100(1-\alpha_1-\alpha_2)\%$ confidence interval for the parameter $\theta_0$ for any $\alpha_1>0$, $\alpha_2>0$, and $\alpha_1+\alpha_2 <1$. Generally, the CD can be easily derived from the {\it stochastic internal representation} (Parzen, 2013) of a random variable and a pivot $\Psi(S,\theta)$ whose distribution does not depend on the parameter $\theta$, where $\theta$ is the parameter of interest and $S$ is a statistic derived from the data. Here, we derive the CD for the LP statistic. Suppose $\widehat{\operatorname{LP}}[j;X,Y]$ is the estimated $j$th LP statistic for the predictor variable $X$ and binary response $Y$. The limiting asymptotic normality of the empirical LP-statistic can be compactly represented as: \begin{eqnarray} \operatorname{LP}[j;X,Y] \boldsymbol{\mathrel{\stretchto{\mid}{4ex}}} \widehat{\operatorname{LP}}[j;X,Y]~ &=& ~ \widehat{\operatorname{LP}}[j;X,Y] + \frac{Z}{\sqrt{n}}, \end{eqnarray} \noindent where $Z$ is a standard normal random variable. This is the stochastic internal representation (similar in spirit to a stochastic differential equation representation) of the LP statistic. Thus we have the following form of the confidence distribution, which is the cumulative distribution function of $\mathcal{N}\big(\widehat{\operatorname{LP}}[j;X,Y], 1/n \big)$: \begin{equation} \label{eq:cd} H_{\Phi}(\operatorname{LP}[j;X,Y])=\Phi\left( \sqrt{n}\left(\operatorname{LP}[j;X,Y]-\widehat{\operatorname{LP}}[j;X,Y]\right)\right). \end{equation} The above representation satisfies the conditions in the CD definition and therefore is the CD of $\operatorname{LP}[j;X,Y]$. Since our derivation is based on the assumption that $n \rightarrow \infty$, the CD just derived is an asymptotic CD.
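A small numerical sketch of this asymptotic CD, using the standard library's \texttt{NormalDist}; the estimate $\widehat{\operatorname{LP}}=0.1$ and $n=10{,}000$ below are made-up values, and the function names are ours:

```python
from statistics import NormalDist

def lp_cd(theta, lp_hat, n):
    """Asymptotic CD of the LP statistic:
    H(theta) = Phi( sqrt(n) * (theta - lp_hat) )."""
    return NormalDist().cdf(n ** 0.5 * (theta - lp_hat))

def lp_ci(lp_hat, n, level=0.95):
    """Equal-tailed confidence interval read off the CD:
    ( H^{-1}(alpha/2), H^{-1}(1 - alpha/2) )."""
    z = NormalDist().inv_cdf((1 + level) / 2)
    return lp_hat - z / n ** 0.5, lp_hat + z / n ** 0.5

# hypothetical estimate from one subpopulation
lo, hi = lp_ci(0.1, 10_000)            # 95% CI around lp_hat = 0.1
p_right = 1 - lp_cd(0.0, 0.1, 10_000)  # CD-based support for LP > 0
```

Point estimate, intervals of every level, and one-sided support are all read from the same $H$, which is what makes the CD a convenient carrier of `local' inference.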
\subsection{Confidence Distribution-based Meta-Analysis}\label{sec:CDmeta} Using the theory presented in Section \ref{sec:CD}, we can estimate the confidence distribution (CD) for the $\operatorname{LP}$ statistics for each of the $k$ subpopulations, $H(\operatorname{LP}_\ell[j;X,Y]),$ and the corresponding point estimators $\widehat{\operatorname{LP}}_\ell[j;X,Y]$ for $\ell=1,\ldots,k$. The next step of our \texttt{MetaLP} algorithm is to judiciously combine the information contained in the CDs for all subpopulations to arrive at the combined CD $H^{(c)}(\operatorname{LP}[j;X,Y])$ based on the whole dataset for that specific variable $X$. The framework relies on a confidence distribution-based unified approach to meta-analysis introduced by Singh, Xie, and Strawderman (2005). The combining function for CDs across $k$ different studies can be expressed as: \begin{equation} \label{eq:combinedcd} H^{(c)}(\operatorname{LP}[j;X,Y])~=~G_c \{g_c(H(\operatorname{LP}_1[j;X,Y]),\ldots,H(\operatorname{LP}_k[j;X,Y]))\}. \end{equation} The function $G_c$ is determined by the monotonic $g_c$ function defined as $$G_c(t)=P(g_c(U_1,\ldots,U_k)\le t),$$ in which $U_1,\ldots,U_k$ are independent $U[0,1]$ random variables. A popular and useful choice for $g_c$ is \begin{equation} \label{eq:com2} g_c(u_1,\ldots,u_k)=\alpha_1F_0^{-1}(u_1)+\ldots+\alpha_kF_0^{-1}(u_k), \end{equation} where $F_0(\cdot)$ is a given cumulative distribution function and $\alpha_\ell \ge 0$, with at least one $\alpha_\ell \ne 0$, are generic weights. $F_0(\cdot)$ could be any distribution function, which highlights the flexibility of the proposed framework. Hence the following theorem introduces a proposed form of the combined aCD for $\operatorname{LP}[j;X,Y]$.
\begin{theorem}\label{thm:fixed} Setting $F_0^{-1}(t) = \Phi^{-1}(t)$ and $\alpha_\ell = \sqrt{n_\ell}$, where $n_\ell$ is the size of subpopulation $\ell=1,\ldots,k$, the following combined aCD for $\operatorname{LP}[j;X,Y]$ follows: \begin{equation} H^{(c)}(\operatorname{LP}[j;X,Y]) = \Phi \left[ \left( \sum\limits_{\ell=1}^k n_\ell \right)^{1/2} \left(\operatorname{LP}[j;X,Y] - \widehat{\operatorname{LP}}^{(c)}[j;X,Y]\right) \right] \quad \mathrm{with} \end{equation} \begin{equation} \quad \widehat{\operatorname{LP}}^{(c)}[j;X,Y] = \frac{ \sum_{\ell=1}^{k} n_\ell \widehat{\operatorname{LP}}_\ell[j;X,Y]} { \sum_{\ell=1}^{k} n_\ell}\end{equation} where $\widehat{\operatorname{LP}}^{(c)}[j;X,Y]$ and $\left(\sum_{\ell=1}^k n_\ell\right)^{-1}$ are the mean and variance respectively of the combined aCD for $\operatorname{LP}[j;X,Y]$. \end{theorem} To prove this theorem, verify that replacing $H(\operatorname{LP}_\ell[j;X,Y])$ by \eqref{eq:cd} in Equation \eqref{eq:combinedcd}, along with the choice of combining function given in \eqref{eq:com2} with $F_0^{-1}(t) = \Phi^{-1}(t)$ and $\alpha_\ell = \sqrt{n_\ell}$, yields \begin{equation*} H^{(c)}(\operatorname{LP}[j;X,Y]) = \Phi \left[ \frac{1}{\sqrt {\sum_{\ell=1}^{k} n_\ell}} \sum\limits_{\ell=1}^k \sqrt{n_\ell} \frac{\operatorname{LP}[j;X,Y] - \widehat{\operatorname{LP}}_\ell[j;X,Y]}{1/\sqrt{n_\ell}}\right]. \end{equation*} \subsection{Diagnostic of Heterogeneity} \label{sec:I2} Our approach allows parallel data processing by dividing the original large dataset into several subpopulations and then finally combining all the information under a confidence distribution-based LP meta-analysis framework. One danger that can potentially arise from this Divide-Combine-Conquer style big data analysis is the heterogeneity problem. The root cause of this problem is different characteristics across the subpopulations. It is arguably one of the most significant, and often ignored, roadblocks for analyzing distributed data.
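Before turning to heterogeneity, the fixed-effect combination of Theorem \ref{thm:fixed} can be sketched numerically as follows (a hedged sketch; the subpopulation estimates and sizes are hypothetical):

```python
from statistics import NormalDist

def combine_fixed(lp_hats, sizes):
    """Theorem sketch: with F0 = Phi and alpha_l = sqrt(n_l), the
    combined aCD is N(lp_c, 1/sum(n_l)), where lp_c is the
    size-weighted mean of the subpopulation LP estimates."""
    n_total = sum(sizes)
    lp_c = sum(n * lp for n, lp in zip(sizes, lp_hats)) / n_total
    return NormalDist(mu=lp_c, sigma=(1.0 / n_total) ** 0.5)

# hypothetical subpopulation estimates and sizes
acd = combine_fixed([0.12, 0.08, 0.10], [100, 300, 600])
```

The combined mean is simply the $n_\ell$-weighted average of the subpopulation estimates, and the combined variance is $(\sum_\ell n_\ell)^{-1}$, as stated in the theorem.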
Failure to take heterogeneity into account can easily spoil the big data discovery process. Heterogeneity across subpopulations produces very different statistical estimates that may not faithfully reflect the original parent dataset. So it is of the utmost importance to diagnose and quantify the degree to which each variable suffers from heterogeneous subpopulation groupings. We resolve this challenge by using the $I^2$ statistic (Higgins and Thompson, 2002), which is introduced next. Define Cochran's Q statistic: \begin{equation} \label{eq:Q} Q\,=\, \sum_{\ell=1}^k \alpha_{\ell}\, \big(\widehat{\operatorname{LP}}_\ell [j;X,Y]-\widehat{\operatorname{LP}}^{(c)}[j;X,Y]\big)^2, \end{equation} where $\widehat{\operatorname{LP}}_\ell [j;X,Y]$ is the estimated $\operatorname{LP}$-statistic from subpopulation $\ell$, $\alpha_{\ell}$ is the weight for subpopulation $\ell$ as defined in Equation \eqref{eq:com2}, and $\widehat{\operatorname{LP}}^{(c)}[j;X,Y]$ is the combined meta-analysis estimator. Compute the $I^2$ statistic by \begin{equation} \label{eq:I2} I^2 = \left\{ \begin{array}{l l} \frac{Q-(k-1)}{Q} \times 100 \% & \quad \text{if $Q>(k-1)$;}\\ 0 & \quad \text{if $Q \leq (k-1)$,} \end{array} \right. \end{equation} where $k$ is the number of subpopulations. A rule of thumb in practice is that $0\% \leq I^2 \leq 40 \%$ indicates heterogeneity among subpopulations is not severe. \subsection {$\tau^2$ Regularization to Tackle Heterogeneity in Big Data} \label{sec:htau} The variations among the subpopulations impact $\operatorname{LP}$ statistic estimates, and these variations are not properly accounted for in the Theorem \ref{thm:fixed} model specification. This is especially severe for big data analysis, as it is very likely that a substantial number of variables may be affected by heterogeneity across subpopulations (however, the curse of heterogeneity can also be present in small data, as shown in Supplementary Section B).
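The $Q$ and $I^2$ diagnostics defined above can be sketched as follows (illustrative only; the estimates are hypothetical, and inverse-variance weights $n_\ell$ are assumed for $\alpha_\ell$):

```python
def cochran_q(lp_hats, weights, lp_c):
    """Cochran's Q from subpopulation estimates, weights, and the
    combined estimate lp_c."""
    return sum(w * (lp - lp_c) ** 2 for w, lp in zip(weights, lp_hats))

def i_squared(q, k):
    """I^2 heterogeneity index (in percent) from Q and k groups."""
    return 100.0 * (q - (k - 1)) / q if q > (k - 1) else 0.0

# hypothetical values: weights n_l and their weighted-mean combined estimate
q = cochran_q([0.12, 0.08, 0.10], [100, 300, 600], lp_c=0.096)
i2 = i_squared(q, k=3)   # here q < k - 1, so i2 = 0 (no severe heterogeneity)
```

By the rule of thumb above, an $I^2$ value above 40\% would flag the variable as suffering from heterogeneous subpopulation groupings.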
To better account for the heterogeneity in the distributed statistical inference framework, following Hedges and Olkin (1985, p. 123), we introduce an additional parameter ($\tau^2$) to account for uncertainty due to heterogeneity across subpopulations. A hierarchical structure is added in this model: \begin{equation} \label{eq:r1} \widehat{\operatorname{LP}}_\ell[j;X,Y]\; \boldsymbol{\mathrel{\stretchto{\mid}{4ex}}} \;\operatorname{LP}_\ell[j;X,Y],s_\ell ~\stackrel{\mathrm{iid}}{\sim}~ N(\operatorname{LP}_\ell[j;X,Y],s_\ell^2), \text{ and } ~~~\end{equation} \begin{equation} \operatorname{LP}_\ell[j;X,Y]\; \boldsymbol{\mathrel{\stretchto{\mid}{4ex}}} \; \operatorname{LP}[j;X,Y],\tau ~\stackrel{\mathrm{iid}}{\sim}~ N(\operatorname{LP}[j;X,Y],\tau^2), \quad \ell=1,\ldots,k. \end{equation} Under the new model specification, the CD of the $\operatorname{LP}$ statistic for the $\ell$-th group is $H(\operatorname{LP}_\ell[j;X,Y])=\Phi((\operatorname{LP}[j;X,Y]-\widehat{\operatorname{LP}}_\ell[j;X,Y])/(\tau^2 +s_\ell^2)^{1/2})$, where $s_\ell = 1/\sqrt{n_\ell}$. The following theorem provides the form of the combined aCD under this specification.
\begin{theorem} \label{thm:taulp} Setting $F_0^{-1}(t) = \Phi^{-1}(t)$ and $\alpha_\ell = 1/\sqrt{\tau^2+(1/n_\ell)}$, where $n_\ell$ is the size of subpopulation $\ell=1,\ldots,k$, the following combined aCD for $\operatorname{LP}[j;X,Y]$ follows: \begin{equation} \label{eq:CDcombine} H^{(c)}(\operatorname{LP}[j;X,Y]) = \Phi \left[ \left( \sum\limits_{\ell=1}^k \frac{1}{\tau^2+(1/n_\ell)} \right)^{1/2} (\operatorname{LP}[j;X,Y] - \widehat{\operatorname{LP}}^{(c)}[j;X,Y]) \right] \quad \mathrm{with} \end{equation} \begin{equation} \label{eq:taulp} \widehat{\operatorname{LP}}^{(c)}[j;X,Y] = \frac { \sum_{\ell=1}^{k} (\tau^2+(1/n_\ell))^{-1} \widehat{\operatorname{LP}}_\ell[j;X,Y]} { \sum_{\ell=1}^{k} (\tau^2+(1/n_\ell))^{-1}} \end{equation} where $\widehat{\operatorname{LP}}^{(c)}[j;X,Y]$ and $\big(\sum_{\ell=1}^{k} 1/(\tau^2+(1/n_\ell))\big)^{-1}$ are the mean and variance respectively of the combined aCD for $\operatorname{LP}[j;X,Y]$. \end{theorem} The proof is similar to that for Theorem \ref{thm:fixed}. The DerSimonian and Laird (DerSimonian and Laird, 1986) and restricted maximum likelihood estimators of the data-adaptive heterogeneity regularization parameter $\tau^2$ are given in Supplementary Section D. \section{Expedia Personalized Hotel Search Dataset}\label{sec:expedia} \texttt{MetaLP} is a generic distributed inference platform that can scale to very large datasets. Motivated by \texttt{MetaLP}, here we develop a \textit{model-free parallelizable two-sample algorithm} (assuming no model of any form and requiring no nonparametric smoothing) under the big data inference paradigm and apply it to the Expedia digital marketing problem.
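The $\tau^2$-regularized combination rule of Theorem \ref{thm:taulp}, used repeatedly in what follows, can be sketched as follows (hypothetical inputs; setting $\tau^2=0$ recovers the fixed-effect rule of Theorem \ref{thm:fixed}):

```python
from statistics import NormalDist

def combine_random(lp_hats, sizes, tau2):
    """Random-effects combination sketch: tau^2-regularized weights
    w_l = 1/(tau^2 + 1/n_l); the combined aCD is N(lp_c, 1/sum(w))."""
    w = [1.0 / (tau2 + 1.0 / n) for n in sizes]
    lp_c = sum(wi * lp for wi, lp in zip(w, lp_hats)) / sum(w)
    return NormalDist(mu=lp_c, sigma=(1.0 / sum(w)) ** 0.5)

# hypothetical inputs; tau2 > 0 widens the combined aCD, reflecting
# the extra between-subpopulation uncertainty
fixed = combine_random([0.12, 0.08, 0.10], [100, 300, 600], tau2=0.0)
regularized = combine_random([0.12, 0.08, 0.10], [100, 300, 600], tau2=0.01)
```

Note how a positive $\tau^2$ both flattens the weights across subpopulations and inflates the combined variance.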
Detailed discussion of each of the following components of our big data two-sample inference model is given in the next sections: \begin{itemize} \item (Section \ref{sec:data}) Data Description.\vskip.15em \item (Section \ref{sec:partition}) Data Partitioning.\vskip.15em \item (Section \ref{sec:LP-map}) LP Map Function.\vskip.15em \item (Section \ref{sec:heter}) Heterogeneity Diagnostic and Regularization.\vskip.15em \item (Section \ref{sec:reduce}) Meta Reducer via LP-Confidence Distribution.\vskip.15em \item (Section \ref{sec:robust}) Robustness Study. \end{itemize} \subsection{Data Description} \label{sec:data} Expedia provided this dataset of $10$ million hotel search results, collected over a window of the year $2013$. Online customers input a list of search criteria, such as length of stay, booking window, and number of children, to the Expedia website as shown in Figure \ref{fig:hotel}(a). Based on the search criteria and customer information, Expedia sends back a ranked list of available hotels that customers can book for their travels (see Figure \ref{fig:hotel}(b)). Given the list, customers' behaviors (click, book, or ignore) were then recorded by Expedia. The question of significant business value to the Expedia digital marketing team is: which factors of these ``search result impressions'' (search criteria, hotel characteristics, customer information, competitor information) are most closely related to booking behavior? The answer could be used to increase user engagement, travel experience, and search personalization.
\begin{figure}[tth] \centering \vskip.4em \includegraphics[scale=.31,trim=.1cm 0cm .1cm 0cm]{slide4_expedia.PNG}~~ \includegraphics[scale=0.32 ,trim=.1cm 0cm .1cm 0cm]{slide7_expedia.PNG} \vskip.8em\caption{(color online) Left: (a) Search window with search criteria variables; Right: (b) List of ranked hotels returned by Expedia with hotel characteristic variables.} \label{fig:hotel} \end{figure} \vskip.2em The dataset contains a huge number of observations ($399,344$ unique search lists and $9,917,530$ observations), each with 45 predictor variables of various data types (e.g. categorical, binary, and continuous). The dataset includes variables related to user characteristics (e.g. visitor location, search history, etc.), search criteria (e.g. length of stay, number of children, room count, etc.), static hotel information (e.g. star rating, hotel location, historic price, review scores, etc.), dynamic hotel information (e.g. current price, promotion flag, etc.), and competitors' information (e.g. price difference and availability), all of which may impact users' booking behaviors. The response variable, \texttt{booking\_bool}, is a binary variable that indicates whether the hotel was booked or not. The remaining columns contain the predictor variables described above. Descriptions of some representative variables and their data types are presented in Table \ref{tab:des}. A complete list of the variables can be found on \texttt{Kaggle}'s website.
\begin{landscape} \begin{table}[p] \begin{center} \begin{tabular}{ |l|l|l|l| } \hline Variable Category & Variables & Data Type & Description \\ \hline \multirow{4}{*}{User's Information} & \texttt{visitor\_location\_country\_id } & Discrete & The ID of the country in which the customer is located \\ & \texttt{visitor\_hist\_starrating} & Continuous & The mean star rating of hotels the customer has previously purchased \\ & \texttt{visitor\_hist\_adr\_usd } & Continuous & The mean price of the hotels the customer has previously purchased\\ & \texttt{orig\_destination\_distance } & Continuous & Physical distance between the hotel and the customer\\ \hline \multirow{6}{*}{Search Criteria} & \texttt{srch\_length\_of\_stay } & Discrete & Number of nights stay that was searched \\ & \texttt{srch\_booking\_window} & Discrete & Number of days in the future the hotel stay started \\ & \texttt{srch\_adults\_count} & Discrete & The number of adults specified in the hotel room\\ & \texttt{srch\_children\_count} & Discrete & The number of children specified in the hotel room\\ & \texttt{srch\_room\_count} & Discrete & Number of hotel rooms specified in the search\\ & \texttt{srch\_saturday\_night\_bool} & Binary & If the stay includes a Saturday night \\ \hline \multirow{7}{*}{Static hotel characteristics} & \texttt{prop\_country\_id } & Discrete & The ID of the country in which the hotel is located \\ & \texttt{prop\_starrating} & Discrete & The star rating of the hotel \\ & \texttt{prop\_review\_score} & Continuous & The mean customer review score for the hotel \\ & \texttt{prop\_location\_score1} & Continuous & Desirability of hotel location (1)\\ & \texttt{prop\_location\_score2} & Continuous & Desirability of hotel location (2)\\ & \texttt{prop\_log\_historical\_price} & Continuous & Mean price of the hotel over the last trading period \\ & \texttt{prop\_brand\_bool} & Binary & If independent or belongs to a hotel chain \\ \hline \multirow{3}{*}{Dynamic hotel
characteristics} & \texttt{price\_usd } & Continuous & Displayed price of the hotel for the given search \\ & \texttt{promotion\_flag} & Binary & If the hotel had a sale price promotion \\ & \texttt{gross\_booking\_usd} & Continuous & Total value of the transaction \\ \hline \multirow{3}{*}{Competitor's Information} & \texttt{comp1\_rate\_percent\_diff } & Continuous & The absolute percentage difference between Expedia's and competitor 1's price \\ & \texttt{comp1\_inv } & Binary & If competitor 1 has hotel availability \\ & \texttt{comp1\_rate} & Discrete & If Expedia has a lower/same/higher price than competitor 1 \\ \hline \multirow{2}{*}{Other Information} & \texttt{srch\_id } & Discrete & The ID of the search \\ & \texttt{site\_id } & Discrete & ID of the Expedia point of sale \\ \hline \multirow{1}{*}{Response Variables} & \texttt{booking\_bool} & Binary & If the hotel is booked \\ \hline \end{tabular} \end{center} \caption{Data description for the Expedia dataset. The column `Data Type' indicates the presence of the variety problem.} \label{tab:des} \end{table} \end{landscape} \subsection{Partition} \label{sec:partition} We consider two different partitioning schemes that are reasonable for the Expedia dataset: random partitioning, in which we get subpopulations with similar sizes that are relatively homogeneous, and predefined partitioning, in which the subpopulations are relatively heterogeneous with contrasting sizes. \vskip.4em {\bf Step 1.} We randomly assign search lists, which are collections of observations from the same search id in the dataset, to $200$ different subpopulations. Random assignment of search lists rather than individual observations ensures that sets of hotels viewed in the same search session are all contained in the same subpopulation. The number of subpopulations chosen here can be adapted to meet the processing and time requirements of different users (e.g.
users with more servers available may choose to increase the number of subpopulations to take advantage of the additional processing capability). \vskip.55em For example, we can randomly partition the dataset into $S=50,100,150,200,\ldots$ subpopulations. We show in Section \ref{sec:robust} that our method is robust to different numbers of subpopulations. There may be situations where we already have `natural' groupings in the dataset, which can be directly utilized as subpopulations. For example, consider the scenario where the available Expedia data are collected from different countries, indexed by the variable \texttt{visitor\_location\_country\_id}, an indicator of the visitor's location (country). In this setting, our framework can directly utilize these predetermined subpopulations for processing rather than having to pool the data together and randomly assign subpopulations. Often, this partition scheme may result in heterogeneous subpopulations. For example, in the Expedia dataset almost half of the observations are from country 207 (possibly the U.S.). Thus, extra steps must be taken to deal with this situation, as described in Section \ref{sec:heter}. \begin{figure}[h] \centering \includegraphics [scale=.6]{country_barplot.pdf} \caption{Barplot: Number of observations for the top 40 largest subpopulations from partitioning by \texttt{visitor\_location\_country\_id}} \label {fig:bar} \vspace{-1em} \end{figure} Figure \ref{fig:bar} shows the number of observations for the top 40 largest subpopulations from partitioning by \texttt{visitor\_location\_country\_id}. The top 3 largest groups contain 74\% of the total observations. Group 207 alone contains almost 50\% of the total observations. On the other hand, random partitioning results in a roughly equal sample size across subpopulations (about 49,587 observations each).
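Step 1 can be sketched as follows (a minimal sketch; the toy \texttt{srch\_id} values are hypothetical):

```python
import random

def partition_by_search(search_ids, S, seed=7):
    """Randomly assign whole search lists (all rows sharing one
    srch_id) to S subpopulations, so the hotels shown in one search
    session never get split across subpopulations."""
    rng = random.Random(seed)
    assignment = {sid: rng.randrange(S) for sid in sorted(set(search_ids))}
    groups = [[] for _ in range(S)]
    for row, sid in enumerate(search_ids):
        groups[assignment[sid]].append(row)
    return groups

# toy srch_id column: three search sessions over two subpopulations
groups = partition_by_search([11, 11, 12, 12, 12, 13], S=2)
```

Assigning at the search-list level (rather than the row level) is what keeps each search session intact, as required above.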
\subsection{LP Map Function} \label{sec:LP-map} We tackle the existing variety problem (see Table \ref{tab:des}) by developing automated mixed-data algorithms using LP-statistical data modeling tools. \vskip.4em {\bf Step 2.} Following the theory in Section \ref{sec:LP}, construct LP-score polynomials $T_{j}(x;X_i)$ for each variable based on each partitioned input dataset. Figure \ref{fig:score} shows the shapes of the LP-basis polynomials for the variables \texttt{srch\_length\_of\_stay} (discrete variable) and \texttt{price\_usd} (continuous variable). \vskip.4em {\bf Step 3.} Estimate the $\operatorname{LP}_{\ell}[j;X_i,Y]$ statistics (where $\operatorname{LP}_{\ell}[j;X_i,Y]$ denotes the $j$th LP statistic for the $i$th variable in the $\ell$th subpopulation): \begin{equation} \widehat{\operatorname{LP}}_{\ell}[j;X_i,Y]\,=\, n_{\ell}^{-1}\sum_{k=1}^{n_{\ell}} T_j(x_k;X_i)\,T_1(y_k;Y).\end{equation} {\bf Step 4.} Compute the corresponding LP-confidence distribution given by \[\Phi\left( \sqrt{n_{\ell}}\left(\operatorname{LP}_{\ell}[j;X_i,Y]-\widehat{\operatorname{LP}}_{\ell}[j;X_i,Y]\right)\right),\] for each of the $45$ variables across the $200$ random subpopulations (or $233$ predefined subpopulations defined by the grouping variable \texttt{visitor\_location\_country\_id}), where $i=1,\ldots,45$ and $\ell=1, \ldots, 200$ index the variable and the subpopulation respectively. The estimator values $\widehat{\operatorname{LP}}_{\ell}[j;X_i,Y]$ and the sizes $n_{\ell}$ (used to compute the standard deviations $1/\sqrt{n_{\ell}}$) are stored for use in the next step.
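Steps 2 and 3 can be sketched for the first-order score ($j=1$) as follows (a sketch based on the mid-distribution construction of the LP scores; the higher-order scores $T_j$, obtained by orthonormalization, are omitted):

```python
from collections import Counter

def t1_scores(values):
    """First-order LP score via the mid-distribution transform:
    T1(x) = (Fmid(x) - 1/2) / sd, with Fmid(x) = F(x) - p(x)/2 and
    sd^2 = (1 - sum_j p_j^3) / 12, the variance of Fmid(X)."""
    n = len(values)
    p = {v: c / n for v, c in Counter(values).items()}
    cum, fmid = 0.0, {}
    for v in sorted(p):
        fmid[v] = cum + p[v] / 2.0
        cum += p[v]
    sd = ((1.0 - sum(pv ** 3 for pv in p.values())) / 12.0) ** 0.5
    return [(fmid[v] - 0.5) / sd for v in values]

def lp1(x, y):
    """Empirical first-order LP statistic (Step 3 with j = 1)."""
    tx, ty = t1_scores(x), t1_scores(y)
    return sum(a * b for a, b in zip(tx, ty)) / len(x)
```

For two perfectly associated binary columns `lp1` returns $1$, and for independent columns it is near $0$, matching the interpretation of the LP statistic as a dependence measure.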
\begin{figure}[ttt] \centering \includegraphics[height=\textheight,width=.46\textwidth,keepaspectratio,trim=1.5cm .5cm .5cm 1cm]{score1.pdf}~~~~~~~ \includegraphics[height=\textheight,width=.46\textwidth,keepaspectratio,trim=.5cm .5cm 1.5cm 1cm]{score2.pdf} \caption{(a) Left panel shows the shape of the first four LP orthonormal score functions for the variable \texttt{srch\_length\_of\_stay}, which is a discrete random variable taking values $0,\ldots,8$; (b) Right: the shape of the LP basis for the continuous variable \texttt{price\_usd}. As the number of atoms (\# distinct values) of a random variable increases (moving from discrete to continuous data type), the shape of our custom-designed score polynomials automatically approaches (by construction) a universal shape, close to shifted {\bf L}egendre-{\bf P}olynomials over $(0,1)$.} \label{fig:score} \end{figure} \begin{figure}[h] \centering ~~~~~~~\includegraphics [scale=.7,trim=1cm 0cm 0cm 0cm]{p1.pdf} \caption{(color online) Histogram of the LP-statistic of the variable \texttt{price\_usd} based on the random partition and the \texttt{visitor\_location\_country\_id} partition.} \label {fig:LP} \end{figure} \subsection{Heterogeneity: Diagnostic and Regularization} \label{sec:heter} Figure \ref{fig:LP} shows the first-order LP-statistic of the variable \texttt{price\_usd} across different subpopulations based on the random and \texttt{visitor\_location\_country\_id} partition schemes. It is clear that the random partition produces relatively homogeneous LP-estimates, as the distribution is much more concentrated or clustered together. On the other hand, the \texttt{visitor\_location\_country\_id} partition results in more heterogeneous LP statistics, which is reflected in the histogram. In fact, the standard deviation of the LP statistics for the \texttt{visitor\_location\_country\_id} partition is about $15$ times larger than that of the random partition, which further highlights the underlying heterogeneity issue.
Thus care must be taken to incorporate this heterogeneity in a judicious manner that ensures consistent inference. We advocate the method described in Section \ref{sec:I2}. \vskip.4em {\bf Step 5.} Compute Cochran's Q-statistic using \eqref{eq:Q} and the $I^2$ heterogeneity index \eqref{eq:I2} based on $\operatorname{LP}_1[j;X_i,Y],\ldots, \operatorname{LP}_S[j;X_i,Y]$ for each $i$ and $j$. For the random partitioning scheme, our subpopulations are fairly homogeneous (with respect to all variables), as all $I^2$ statistics are below 40\% (see Figure \ref{fig:sitei}(a)); on the other hand, the \texttt{visitor\_location\_country\_id}-based predefined partitioning divides the data into heterogeneous subpopulations for some variables, as shown in Figure \ref{fig:sitei}(b) (i.e. some variables have $I^2$ values (red dots) outside of the permissible range of 0 to 40\%). \vskip.4em {\bf Step 6.} Compute the DerSimonian and Laird data-driven estimate \[\hat{\tau}_i^2 = \max \left\{ 0, \frac{Q_i-(k-1)} {n - \sum_\ell n_\ell ^{2} / n } \right\},~~~~~(i=1,\ldots,p).~~~\] One can also use other enhanced estimators like the restricted maximum likelihood (REML) estimator, as discussed in Supplementary Section D. The $I^2$ diagnostic \textit{after} $\tau^2$ regularization is shown in Figure \ref{fig:sitei}(b) (blue dots), which suggests that all $I^2$ values after regularization fall within the acceptable range of 0 to 40\%. These results suggest that our framework is able to deal with heterogeneity issues among subpopulations: we can perform $\tau^2$ regularization when subpopulations appear to be heterogeneous, which protects the validity of the meta-analysis approach.
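Step 6 can be sketched as follows (assuming inverse-variance weights $\alpha_\ell = n_\ell$ inside $Q$, consistent with the denominator of the estimator above; inputs are hypothetical):

```python
def dl_tau2(lp_hats, sizes):
    """DerSimonian-Laird sketch:
    tau2 = max(0, (Q - (k-1)) / (n - sum(n_l^2)/n)),
    with Q = sum n_l * (lp_l - lp_c)^2 and lp_c the size-weighted
    mean (each subpopulation variance is 1/n_l)."""
    k, n = len(sizes), sum(sizes)
    lp_c = sum(nl * lp for nl, lp in zip(sizes, lp_hats)) / n
    q = sum(nl * (lp - lp_c) ** 2 for nl, lp in zip(sizes, lp_hats))
    return max(0.0, (q - (k - 1)) / (n - sum(nl ** 2 for nl in sizes) / n))
```

When the subpopulation estimates agree, $Q < k-1$ and the estimate is truncated at zero, so the rule of Theorem \ref{thm:taulp} falls back to the fixed-effect combination.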
\begin{figure}[h] \centering \includegraphics[height=.45\textheight,width=.465\textwidth,keepaspectratio,trim=1cm .5cm .5cm 4cm]{ggplot2.pdf} \includegraphics[height=.48\textheight,width=.52\textwidth,keepaspectratio,trim=.5cm .5cm .5cm .8cm]{ggplot1.pdf}\vskip.4em \caption{(color online) (a) $I^2$ Diagnostic for randomly partitioned subpopulations; (b) Predetermined grouping: comparison of $I^2$ diagnostics before $\tau$ correction (red dots) and after $\tau$ correction (blue dots).} \label {fig:sitei} \end{figure} \subsection{Meta Reducer Step} \label{sec:reduce} This step combines estimates and confidence distributions of $\operatorname{LP}$ statistics from different subpopulations to estimate the combined confidence distribution of the $\operatorname{LP}$ statistic for each variable, as outlined in Section \ref{sec:htau}. \vskip.4em {\bf Step 7.} Use $\tau$-corrected weights to properly take the heterogeneity effect into account. Compute $\widehat{\operatorname{LP}}^{(c)}[j;X,Y]$ by \eqref{eq:taulp} and the corresponding LP-confidence distribution using Theorem \ref{thm:taulp}. \begin{figure}[!tth] \centering \vskip.4em \includegraphics[height=.45\textheight,width=.465\textwidth,keepaspectratio,trim=1.5cm .5cm .5cm .8cm]{LP_SITE.pdf} \includegraphics[height=.45\textheight,width=.500\textwidth,keepaspectratio,trim=.5cm .5cm .5cm .8cm]{comp.pdf} \caption{(color online) (a) Expedia Data: 95\% confidence intervals for each variable's $\operatorname{LP}$ statistic; (b) 95\% confidence intervals for random partitioning (black) and country ID partitioning (red).} \label{fig:sitelp} \end{figure} The results for random subpopulation assignment can be found in Figure \ref{fig:sitelp}(a). Variables with indexes $43$, $44$, and $45$ have highly significant positive relationships with \texttt{booking\_bool}, the binary response variable.
Those variables are \texttt{prop\_location\_score2}, the second score outlining the desirability of a hotel's location, \texttt{promotion\_flag}, whether the hotel had a sale price promotion specifically displayed, and \texttt{srch\_query\_affinity\_score}, the log of the probability a hotel will be clicked on in Internet searches. Three variables have highly negative impacts on hotel booking: \texttt{price\_usd}, the displayed price of the hotel for the given search, \texttt{srch\_length\_of\_stay}, the number of nights stay that was searched, and \texttt{srch\_booking\_window}, the number of days in the future the hotel stay started from the search date. Moreover, several variables' $\operatorname{LP}$ statistics have confidence intervals that include zero, which means those variables have an insignificant influence on hotel booking. The top five most influential variables in terms of the absolute value of the LP statistic estimates are \texttt{prop\_location\_score2}, \texttt{promotion\_flag}, \texttt{price\_usd}, \texttt{srch\_length\_of\_stay}, and \texttt{prop\_starrating}. If we apply the same algorithm to the predefined partitioned dataset and compare the top five influential variables with those from the randomly partitioned dataset, 4 out of 5 are the same, while \texttt{prop\_starrating} is replaced by \texttt{srch\_query\_affinity\_score}. A comprehensive comparison between the two lists is shown in Figure \ref{fig:sitelp}(b). It provides a comparison of 95\% confidence intervals for the $\operatorname{LP}$ statistics of each variable based on randomly assigned subpopulations and predefined subpopulations by visitor country. Here, it is shown how the heterogeneity from the predetermined subpopulations slightly impacts the statistical inference in terms of wider confidence intervals. However, across the majority of the variables, the inferences obtained from the randomly assigned subpopulations and the predetermined subpopulations are largely consistent.
The top five most influential variables from both partition schemes are reported in Table \ref{tab:top}. All variables selected by our algorithm are intuitive. For instance, \texttt{prop\_location\_score2} indicates the desirability of the hotel location: if the hotel's location has a higher desirability score, users tend to book the hotel more frequently. Likewise, \texttt{promotion\_flag} indicates whether the hotel has a special promotion; the probability that the hotel is booked will be higher if the hotel has a special discount or promotion. \begin{table}[tttt] \begin{center} \begin{tabular}{| l | l | l |} \hline Rank & Random partition & Predetermined partition \\ \hline 1 & \texttt{prop\_location\_score2} & \texttt{prop\_location\_score2} \\ \hline 2 & \texttt{promotion\_flag} & \texttt{promotion\_flag} \\ \hline 3 & \texttt{price\_usd} & \texttt{price\_usd} \\ \hline 4 & \texttt{srch\_length\_of\_stay} & \texttt{srch\_length\_of\_stay}\\ \hline 5 & \texttt{prop\_starrating} & \texttt{srch\_query\_affinity\_score} \\ \hline \end{tabular} \end{center} \caption{Top five influential variables by random partition and predetermined partition} \label{tab:top} \end{table} \begin{figure}[ttt] \centering \includegraphics [scale=.51]{meta_sub.pdf} \caption{LP statistics and 95\% confidence intervals for nine variables across different numbers of subpopulations (dotted line is at zero). } \label {fig:sub} \end{figure} \subsection{Robustness to Size and Number of Subpopulations} \label{sec:robust} Due to the different capabilities of the computing systems available to users, users may choose different sizes and numbers of subpopulations for distributed computing. This requires our algorithm to be robust to the number and size of subpopulations.
To assess this robustness, we randomly partition the whole dataset into $S=50,100,150,200,250,300,350,400,450,500$ subpopulations respectively, and then show that we are able to get consistent combined LP estimators even with big differences in the numbers and sizes of subpopulations. We examine \texttt{prop\_location\_score2}, \texttt{promotion\_flag}, and \texttt{price\_usd} as the three most influential variables; \texttt{srch\_children\_count}, \texttt{srch\_booking\_window}, and \texttt{srch\_room\_count} as moderately important variables; and \texttt{prop\_log\_historical\_price}, \texttt{orig\_destination\_distance}, and \texttt{comp1\_rate\_percent\_diff} as three less influential variables, and compute their combined LP-statistics and 95\% confidence intervals based on 10 different random partition schemes with different numbers and sizes of subpopulations. Figure \ref{fig:sub} suggests that the combined LP statistic estimates do not change dramatically as the number of subpopulations increases, evidence of stable estimation. \section{Understanding Simpson's and Stein's Paradox from the MetaLP Perspective} \label{sec:paradox} Heterogeneity is not solely a big data phenomenon; it can easily arise in a small data setup. In this section we present two smoking-gun examples--Simpson's Paradox and Stein's Paradox--where blind aggregation \textit{without paying attention to the underlying inherent heterogeneity} leads to a misleading conclusion. \subsection{Simpson's Paradox} \label{sec:simpson} Table \ref{tab:ucb1} shows the UC Berkeley admission rates (Bickel et al., 1975) by department and gender. Looking only at the university-level admission rates at the bottom of this table, there appears to be a significant difference in the admission rates for males, at 45\%, and females, at 30\%. However, the department-level data \textit{do not} appear to support the strong gender bias seen in the university-level data.
The real question at hand is whether \emph{there is a gender bias in university admissions}. We provide a concrete statistical solution to the question put forward by Pearl (2014) regarding the validity and applicability of traditional statistical tools in answering the real puzzle of Simpson's Paradox: ``So in what sense do B-K plots, or ellipsoids, or vectors display, or regressions etc. contribute to the puzzle? They don't. They can't. Why bring them up? Would anyone address the real puzzle? It is a puzzle that cannot be resolved in the language of traditional statistics.'' In particular, we will demonstrate how adopting the \texttt{MetaLP} modeling and combining strategy (which properly takes the existing heterogeneity into account) can resolve issues pertaining to Simpson's paradox (Simpson, 1951). This simple example teaches us that \textit{simply averaging} as a means of combining effect sizes is \textit{not appropriate}, irrespective of whether the data are small or big. The calculation of the weights \textit{must} take into account the underlying departure from homogeneity, which is ensured in the \texttt{MetaLP} distributed inference mechanism. Now we explain how this paradoxical reversal can be resolved using the \texttt{MetaLP} technology.
\begin{table} \centering \def\arraystretch{1.5} \begin{tabular}{l l l} Dept & Male & Female \\\hline A & 62\% (512 / 825) & \textbf{82\%} (89 / 108)\\ B & 63\% (353 / 560) & \textbf{68\%} (17 / 25)\\ C & \textbf{37\%} (120 / 325) & 34\% (202 / 593) \\ D & 33\% (138 / 417) & \textbf{35\%} (131 / 375) \\ E & \textbf{28\%} (53 / 191) & 24\% (94 / 393) \\ F & \textbf{6\%} (22 / 373) & 7\% (24 / 341) \\ All & \textbf{45\%} (1198 / 2691) & 30\% (557 / 1835) \\ \hline \end{tabular} \vskip1em \caption{UC Berkeley admission rates by gender by department (Bickel et al., 1975)} \label{tab:ucb1} \end{table} \begin{figure}[ttt] \centering \includegraphics[height=.45\textheight,width=.5\textwidth,keepaspectratio,trim=1.5cm .5cm .5cm .5cm]{LP_dist_UCB.pdf} \includegraphics[height=.45\textheight,width=.45\textwidth,keepaspectratio,trim=.8cm .5cm 2.5cm 1.5cm]{LP_ci_UCB.pdf} \caption{(color online) (a) Left: aCDs for linear $\operatorname{LP}$ statistics for UC Berkeley admission rates by gender (department level aCDs in black); (b) Right: 95\% confidence intervals for $\operatorname{LP}$ statistics for UC Berkeley admission rates by gender.} \label{fig:UCB1} \end{figure} As both admission ($Y$) and gender ($X$) are binary variables, we can compute at most one LP-orthogonal polynomial for each variable, $T_1(Y;Y)$ and $T_1(X;X)$; accordingly, we can compute only the first-order linear LP statistic $\operatorname{LP}[1;Y,X]$ for each department. Following Equation \eqref{eq:cd}, we derive and estimate the aCD for the $\operatorname{LP}$ statistic for each of the $6$ departments, $H(\operatorname{LP}_\ell[1;X,Y])$, $\ell=1,\ldots,6$, and for the aggregated university-level dataset, $H(\operatorname{LP}_a[1;X,Y])$. As noted in Section \ref{sec:CD}, the department-level aCDs are normally distributed with mean $\widehat{\operatorname{LP}}_\ell[1;X,Y]$ and variance $1/n_{\ell}$, where $n_{\ell}$ is the number of applicants to department $\ell$.
Similarly, the aggregated aCD is also normally distributed with mean $\widehat{\operatorname{LP}}_a[1;X,Y]$ and variance $1/n_a$, where $n_a$ is the number of applicants across all departments. We apply the heterogeneity-corrected \texttt{MetaLP} algorithm, following Theorem \ref{thm:taulp}, to estimate the combined aCD across all departments as follows: \begin{equation*} H^{(c)}(\operatorname{LP}[1;X,Y]) = \Phi \left[ \left( \sum\limits_{\ell=1}^6 \frac{1}{\tau^2+(1/n_{\ell})} \right)^{1/2} (\operatorname{LP}[1;X,Y] - \widehat{\operatorname{LP}}^{(c)}[1;X,Y]) \right] \quad \mathrm{with} \end{equation*} \begin{equation*} \widehat{\operatorname{LP}}^{(c)}[1;X,Y] = \frac { \sum_{\ell=1}^{6} (\tau^2+(1/n_{\ell}))^{-1} \widehat{\operatorname{LP}}_\ell[1;X,Y]} { \sum_{\ell=1}^{6} (\tau^2+(1/n_{\ell}))^{-1}} \end{equation*} where $\widehat{\operatorname{LP}}^{(c)}[1;X,Y]$ and $\big(\sum_{\ell=1}^{6} (\tau^2+(1/n_{\ell}))^{-1}\big)^{-1}$ are the mean and variance respectively of the meta-combined aCD for $\operatorname{LP}[1;X,Y]$. Here, the heterogeneity parameter $\tau^2$ is estimated using the restricted maximum likelihood formulation outlined in Supplementary Section D. Figure \ref{fig:UCB1}(a) displays the estimated aCDs for each department, for the aggregated data, and for the \texttt{MetaLP} method. First, note that the aggregated-data aCD is very different from the department-level aCDs, which is characteristic of the Simpson's paradox reversal phenomenon due to naive ``aggregation bias''. This is why the aggregated-data inference suggests a gender bias in admissions while the department-level data do not. Second, note that the aCD from the \texttt{MetaLP} method provides an estimate that falls more in line with the department-level aCDs. This highlights the advantage of the \texttt{MetaLP} meta-analysis framework for combining information in a judicious manner. Also, as mentioned in Section \ref{sec:CD}, all traditional forms of statistical inference (e.g.
point and interval estimation, hypothesis testing) can be derived from the aCD above. For example, we can test $H_0:\operatorname{LP}^{(c)}[1;X,Y]\le 0 $ (indicating no male preference in admissions) vs. $H_1:\operatorname{LP}^{(c)}[1;X,Y] > 0$ (indicating a male preference in admissions) using the aCD for $\operatorname{LP}^{(c)}[1;X,Y]$. The corresponding p-value for the test comes from the probability associated with the support of $H_0$, $C= (-\infty,0]$ (i.e., a ``high'' support value for $H_0$ leads to acceptance), following Xie and Singh (2013). Hence the p-value for the above test becomes $$\text{p-value} = H\big ( 0; \operatorname{LP}^{(c)}[1;X,Y] \big) = \Phi\left[\Big(\sum_{\ell=1}^{6} (\tau^2+(1/n_{\ell}))^{-1}\Big)^{1/2}\big(0-\widehat{\operatorname{LP}}^{(c)}[1;X,Y]\big)\right] \approx .81.$$ In this case the support of the LP-CD inference (also known as `belief' in the fiducial literature, Kendall and Stuart, 1974) is .81. Hence at the 5\% level of significance, we strongly accept $H_0$ and confirm that there is no evidence to support a significant gender bias favoring males in admissions using the \texttt{MetaLP} approach. In addition, we can also compute the 95\% confidence intervals for the $\operatorname{LP}$ statistics measuring the significance of the relationship between gender and admissions, as shown in Figure \ref{fig:UCB1}(b). Note the paradoxical reversal: $5$ out of the $6$ departments show no significant gender bias at the 5\% level of significance, as the confidence intervals include positive and negative values, while the confidence interval for the aggregated dataset indicates a significantly higher admission rate for males.
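As a computational sketch of this combination (not code from the paper): the department sizes below come from Table~\ref{tab:ucb1}, while the department-level $\operatorname{LP}$ estimates and the heterogeneity value $\tau^2$ are illustrative placeholders, since the text does not report them numerically.

```python
import math

# Applicants per department (Table 1: males + females).
n = [933, 585, 918, 792, 584, 714]
# Department-level LP[1;X,Y] estimates and the heterogeneity estimate
# tau2 are hypothetical placeholders, not values from the paper.
lp_hat = [-0.15, -0.03, 0.02, -0.01, 0.03, -0.01]
tau2 = 0.01

# Precision weights (tau^2 + 1/n_l)^{-1} and the precision-weighted mean.
w = [1.0 / (tau2 + 1.0 / nl) for nl in n]
lp_combined = sum(wi * li for wi, li in zip(w, lp_hat)) / sum(w)
sd_combined = (1.0 / sum(w)) ** 0.5  # combined aCD is N(lp_combined, 1/sum(w))

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# p-value for H0: LP <= 0 is the combined aCD evaluated at 0.
p_value = Phi((0.0 - lp_combined) / sd_combined)
```

With these placeholder inputs the combined estimate sits inside the range of the department-level estimates, and a p-value above one half supports $H_0$, mirroring the analysis above.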
On the other hand, note that the \texttt{MetaLP} approach resolves the paradox (which arises \textit{due to the failure to recognize} the presence of heterogeneity among the department-based admission patterns) and correctly concludes that no significant gender bias exists, as the confidence interval for the \texttt{MetaLP}-driven $\operatorname{LP}$ statistic includes the null value $0$. \subsection{Stein's Paradox} \label{sec:stein} Perhaps the most popular and classic dataset illustrating Stein's paradox is given in Table~\ref{tab:stein1}, which shows the batting averages of 18 major league players through their first $45$ official at-bats of the $1970$ season. The goal is to predict each player's batting average over the remainder of the season (comprising about $370$ more at-bats each) using only the data of the first $45$ at-bats. Stein's shrinkage estimator (James and Stein, 1961), which can be interpreted as an empirical Bayes estimator (Efron and Morris, 1975), turns out to be more than $3$ times as efficient as the MLE estimator. Here we provide a \texttt{MetaLP} approach to this problem by recognizing the ``parallel'' structure (18 parallel sub-populations) of baseball data, which fits nicely into the ``decentralized'' \texttt{MetaLP} modeling framework.
\begin{table}[th] \centering \def\arraystretch{1.2} \begin{tabular}{l l l l l l} Name & hits$/$AB & $\hat{\mu}_i^{(\operatorname{MLE})}$ & $\mu_i$ & $\hat{\mu}_i^{(JS)}$ & $\hat{\mu}_i^{(\operatorname{LP})}$\\\hline Clemente & $18/45$ & .400 & .346 & \textbf{.294} & .276 \\ F Robinson & $17/45$ & .378 & .298 & \textbf{.289} & .274 \\ F Howard & $16/45$ & .356 & .276 & .285 & \textbf{.272} \\ Johnstone & $15/45$ & .333 & .222 & .280 & \textbf{.270} \\ Berry & $14/45$ & .311 & .273 & \textbf{.275} & .268 \\ Spencer & $14/45$ & .311 & .270 & .275 & \textbf{.268} \\ Kessinger & $13/45$ & .289 & .263 & .270 & \textbf{.265} \\ L Alvarado & $12/45$ & .267 & .210 & .266 & \textbf{.263} \\ Santo & $11/45$ & .244 & .269 & \textbf{.261} & \textbf{.261} \\ Swoboda & $11/45$ & .244 & .230 & \textbf{.261} & \textbf{.261} \\ Unser & $10/45$ & .222 & .264 & .256 & \textbf{.258} \\ Williams & $10/45$ & .222 & .256 & \textbf{.256} & .258 \\ Scott & $10/45$ & .222 & .303 & .256 & \textbf{.258} \\ Petrocelli & $10/45$ & .222 & .264 & .256 & \textbf{.258} \\ E Rodriguez & $10/45$ & .222 & .226 & \textbf{.256} & .258 \\ Campaneris & $9/45$ & .200 & .286 & .252 & \textbf{.256} \\ Munson & $8/45$ & .178 & .316 & .247 & \textbf{.253} \\ Alvis & $7/45$ & .156 & .200 & \textbf{.242} & .251 \\ \hline \end{tabular} \vskip1em \caption{Batting averages $\hat{\mu}_i^{(\operatorname{MLE})}$ for 18 major league players early in the 1970 season; $\mu_i$ values are averages over the remainder of the season. The James-Stein estimates $\hat{\mu}_i^{(JS)}$ and MetaLP estimates $\hat{\mu}_i^{(\operatorname{LP})}$ provide much more accurate overall predictions for the $\mu_i$ values compared to MLE. MSE ratio for $\hat{\mu}_i^{(JS)}$ to $\hat{\mu}_i^{(\operatorname{MLE})}$ is $0.283$ and MSE ratio for $\hat{\mu}_i^{(\operatorname{LP})}$ to $\hat{\mu}_i^{(\operatorname{MLE})}$ is $0.293$ showing comparable efficiency.
} \label{tab:stein1} \end{table} We start by defining the variance-stabilized effect-size estimates for each group $$\widehat \theta_i = \sin^{-1}(2\hat{\mu}_i^{(\operatorname{MLE})}-1), \quad i=1,\ldots,18,$$ \noindent whose asymptotic distribution is normal with mean $\theta_i$ and variance $1/n_i$, where $n_i=45$ (for all $i$) is the number of at-bats for each player and $\hat{\mu}_i^{(\operatorname{MLE})}$ is the individual batting average for player $i$. Figure \ref{fig:batavg} provides some visual evidence of the heterogeneity between the studies. We apply a MetaLP procedure that incorporates inter-study variations and is applicable for unequal variance/sample size scenarios with no further adjustment. First, we estimate the weighted mean, $\hat{\theta}_{\mu}$, of the transformed batting averages with weights for each study $(\hat\tau_{{\rm DL}}^2 + n_i^{-1})^{-1}$, where $\hat\tau_{{\rm DL}}^2$ denotes the DerSimonian and Laird data-driven estimate given in Appendix D. The \texttt{MetaLP} estimators, $\hat{\theta}_i^{(\operatorname{LP})}$, are represented as a weighted average between the transformed batting averages and $\hat{\theta}_{\mu}$ as follows: $$\hat{\theta}_i^{(\operatorname{LP})} = \lambda\hat{\theta}_{\mu} + (1-\lambda)\widehat \theta_i, \quad i=1,\ldots,18,$$ \noindent where $\lambda = (n_i^{-1})/(\hat\tau_{{\rm DL}}^2 + n_i^{-1})$. Table~\ref{tab:stein1} shows that the MetaLP-based estimators are as good as the James--Stein empirical Bayes estimators for the baseball data. This stems from the simple fact that random-effects meta-analysis and Stein estimation are mathematically equivalent. Nevertheless, the frameworks of understanding and interpretation are different. Additionally, MetaLP is much more flexible and automatic in the sense that it works for `any' estimator (such as a mean, regression function, or classification probability) beyond mean and Gaussianity assumptions.
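This shrinkage can be reproduced numerically from Table~\ref{tab:stein1} alone. The sketch below uses the standard DerSimonian--Laird moment estimator for $\hat\tau_{{\rm DL}}^2$ (the Appendix D details are not reproduced here); with equal $n_i$ the weighted mean $\hat{\theta}_{\mu}$ coincides with the fixed-effect weighted mean.

```python
import math

# Hits in the first 45 at-bats for the 18 players of Table 2.
hits = [18, 17, 16, 15, 14, 14, 13, 12, 11, 11, 10, 10, 10, 10, 10, 9, 8, 7]
n_ab = 45

# Variance-stabilized effect sizes theta_i = arcsin(2*mu_i - 1), Var ~ 1/45.
theta = [math.asin(2.0 * h / n_ab - 1.0) for h in hits]
w_fixed = [n_ab] * len(theta)  # fixed-effect precisions n_i

# DerSimonian-Laird moment estimate of the between-player variance tau^2.
theta_bar = sum(w * t for w, t in zip(w_fixed, theta)) / sum(w_fixed)
Q = sum(w * (t - theta_bar) ** 2 for w, t in zip(w_fixed, theta))
c = sum(w_fixed) - sum(w * w for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - (len(theta) - 1)) / c)

# Shrink each theta_i toward the weighted mean (equal n_i, so theta_bar).
lam = (1.0 / n_ab) / (tau2 + 1.0 / n_ab)
theta_lp = [lam * theta_bar + (1.0 - lam) * t for t in theta]

# Back-transform to the batting-average scale.
mu_lp = [(math.sin(t) + 1.0) / 2.0 for t in theta_lp]
```

Running this recovers the $\hat{\mu}_i^{(\operatorname{LP})}$ column of Table~\ref{tab:stein1} to three decimals (about .276 for Clemente and .251 for Alvis).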
We feel the MetaLP viewpoint is also less mysterious and clearly highlights the core issue of heterogeneity. Our analysis indicates an exciting frontier of future research at the interface of MetaLP, Empirical Bayes, and Stein's Paradox to develop new theory of distributed massive data modeling. \begin{figure}[h] \centering \includegraphics[scale=.75]{batavg_transformedci.pdf} \caption{95\% confidence intervals for transformed batting averages, $\theta_i$, for each player. This plot clearly indicates the heterogeneity of the effect-size estimates.} \label{fig:batavg} \end{figure} \section{Final Remarks on Big Data Statistical Inference} \label{sec:conclusion} To address methodological and computational challenges for big data analysis, we have outlined a general theoretical foundation in this article, which we believe may provide the missing link between small data and big data science. Our research shows how traditional and modern `small' data modeling tools can be successfully adapted and judiciously connected for developing powerful big data analytic tools by leveraging state-of-the-art distributed computing environments. In particular, we have proposed a nonparametric two-sample inference algorithm that has the following two-fold practical significance for solving real-world data mining problems: (1) scalability for large data by exploiting distributed computing architectures using a confidence distribution based meta-analysis framework, and (2) automation for mixed data by using a united LP computing formula. Undoubtedly our theory can be adapted for other common data mining problems, and we are currently investigating how the proposed framework can be utilized to develop parallelizable regression and classification algorithms for big data.
\\ Instead of developing distributed versions of statistical algorithms on a case-by-case basis, here we develop a systematic and automatic strategy that will provide a generic platform to extend traditional and modern statistical modeling tools to large datasets using scalable distributed algorithms, thus addressing one of the biggest bottlenecks for data-intensive statistical inference. We believe this research is a great stepping stone towards developing a United Statistical Algorithm (Parzen and Mukhopadhyay, 2013) to bridge the increasing gap between the theory and practice of small and big data analysis. \section*{Acknowledgements} SM thanks William S. Cleveland for pointing out relevant literature and for many helpful comments. This research is partially supported by Fox 2014 young scholars interdisciplinary grant and by the Fox School PhD student research competition award. We would also like to thank the editor, associate editor and three anonymous referees whose comments and suggestions considerably improved the presentation of the paper. \newpage
\section{Introduction} The spatial locations of made and missed shot attempts in basketball are naturally modeled as a point process. The Poisson process and its inhomogeneous variant are popular choices to model point data in spatial and temporal settings. Inferring the latent intensity function, $\lambda(\cdot)$, is an effective way to characterize a Poisson process, and $\lambda(\cdot)$ itself is typically of interest. Nonparametric methods to fit intensity functions are often desirable due to their flexibility and expressiveness, and have been explored at length \cite{cox1955, moller1998log, diggle2013statistical}. Nonparametric intensity surfaces have been used in many applied settings, including density estimation \cite{adams-murray-mackay-2009c}, models for disease mapping \cite{benes2002bayesian}, and models of neural spiking \cite{cunningham2008inferring}. When data are related realizations of a Poisson process on the same space, we often seek the underlying structure that ties them together. In this paper, we present an unsupervised approach to extract features from instances of point processes for which the intensity surfaces vary from realization to realization, but are constructed from a common library. The main contribution of this paper is an unsupervised method that finds a low dimensional representation of related point processes. Focusing on the application of modeling basketball shot selection, we show that a matrix decomposition of Poisson process intensity surfaces can provide an interpretable feature space that parsimoniously describes the data. We examine the individual components of the matrix decomposition, as they provide an interesting quantitative summary of players' offensive tendencies. These summaries better characterize player types than any traditional categorization (e.g.~player position). One application of our method is personnel decisions. 
Our representation can be used to select sets of players with diverse offensive tendencies. This representation is then leveraged in a latent variable model to visualize a player's field goal percentage as a function of location on the court. \subsection{Related Work} Previously, \citet{adams-dahl-murray-2010a} developed a probabilistic matrix factorization method to predict score outcomes in NBA games. Their method incorporates external covariate information, though they do not model spatial effects or individual players. \citet{goldsberry} developed a framework for analyzing the defensive effect of NBA centers on shot frequency and shot efficiency. Their analysis is restricted, however, to a subset of players in a small part of the court near the basket. Libraries of spatial or temporal functions with a nonparametric prior have also been used to model neural data. \citet{cunningham2009factor} develop the Gaussian process factor analysis model to discover latent `neural trajectories' in high dimensional neural time-series. Though similar in spirit, our model includes a positivity constraint on the latent functions that fundamentally changes their behavior and interpretation. \section{Background} \label{sec:background} This section reviews the techniques used in our point process modeling method, including Gaussian processes (GPs), Poisson processes (PPs), log-Gaussian Cox processes (LGCPs) and non-negative matrix factorization (NMF). \subsection{Gaussian Processes} A Gaussian process is a stochastic process whose sample path,~${f_1, f_2 \dots \in \mathbb{R}}$, is normally distributed. GPs are frequently used as a probabilistic model over functions~${f: \mathcal{X} \rightarrow \mathbb{R}}$, where the realized value~${f_n \equiv f(x_n)}$ corresponds to a function evaluation at some point~${x_n \in \mathcal{X}}$.
The spatial covariance between two points in~$\mathcal{X}$ encodes prior beliefs about the function~$f$; covariances can encode beliefs about a wide range of properties, including differentiability, smoothness, and periodicity. As a concrete example, imagine a smooth function~${f: \mathbb{R}^2 \rightarrow \mathbb{R}}$ for which we have observed a set of locations~${x_1, \dots, x_N}$ and values~${f_1, \dots, f_N}$. We can model this `smooth' property by choosing a covariance function that results in smooth processes. For instance, the squared exponential covariance function \begin{eqnarray} \text{cov}(f_i, f_j) = k(x_i, x_j) = \sigma^2 \exp \left( -\frac{1}{2}\frac{||x_i - x_j||^2}{\phi^2} \right) \label{eq:squared-exp} \end{eqnarray} assumes the function $f$ is infinitely differentiable, with marginal variation $\sigma^2$ and length-scale $\phi$, which controls the expected number of direction changes the function exhibits. Because this covariance is strictly a function of the distance between two points in the space $\mathcal{X}$, the squared exponential covariance function is said to be stationary. We use this smoothness property to encode our inductive bias that shooting habits vary smoothly over the court space. For a more thorough treatment of Gaussian processes, see Rasmussen~\cite{rasmussen2006gaussian}. \subsection{Poisson Processes} A Poisson process is a completely spatially random point process on some space, $\mathcal{X}$, for which the number of points that end up in some set $A \subseteq \mathcal{X}$ is Poisson distributed. We will use an inhomogeneous Poisson process on a domain $\mathcal{X}$. That is, we will model the set of spatial points, $x_1, \dots, x_N$ with $x_n \in \mathcal{X}$, as a Poisson process with a non-negative intensity function~${\lambda(x): \mathcal{X} \rightarrow \mathbb{R}_+}$ (throughout this paper, $\mathbb{R}_+$ will indicate the union of the positive reals and zero).
This implies that for any set $A \subseteq \mathcal{X}$, the number of points that fall in $A$, $N_A$, will be Poisson distributed, \begin{eqnarray} N_A &\sim& \textrm{Poiss}\left( \int_A \lambda(dA) \right). \end{eqnarray} Furthermore, a Poisson process is `memoryless', meaning that $N_A$ is independent of $N_B$ for disjoint subsets $A$ and $B$. We signify that a set of points~${\mathbf x \equiv \{ x_1, \dots, x_N \}}$ follows a Poisson process as \begin{eqnarray} \mathbf x &\sim& \mathcal{PP}(\lambda(\cdot)). \end{eqnarray} One useful property of the Poisson process is the superposition theorem~\cite{kingman1992poisson}, which states that given a countable collection of independent Poisson processes $\mathbf x_1, \mathbf x_2, \dots$, each with measure $\lambda_1, \lambda_2, \dots$, their superposition is distributed as \begin{eqnarray} \bigcup_{k=1}^\infty \mathbf x_k &\sim& \mathcal{PP}\left( \sum_{k=1}^\infty \lambda_k \right). \end{eqnarray} Furthermore, note that each intensity function $\lambda_k$ can be scaled by some non-negative factor and remain a valid intensity function. The positive scalability of intensity functions and the superposition property of Poisson processes motivate the non-negative decomposition (Section~\ref{sec:nmf}) of a global Poisson process into simpler weighted sub-processes that can be shared between players. \subsection{Log-Gaussian Cox Processes} A log-Gaussian Cox process (LGCP) is a doubly-stochastic Poisson process with a spatially varying intensity function modeled as an exponentiated GP \begin{eqnarray} Z(\cdot) &\sim& \text{GP}(0, k(\cdot, \cdot) ) \\ \lambda(\cdot) &=& \exp( Z(\cdot) ) \\ x_1, \dots, x_N &\sim& \mathcal{PP}( \lambda(\cdot) ) \end{eqnarray} where doubly-stochastic refers to two levels of randomness: the random function $Z(\cdot)$ and the random point process with intensity $\lambda(\cdot)$.
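To make the generative model concrete, the following sketch (illustrative only, not the paper's code) simulates a discretized LGCP on a one-dimensional strip of tiles: draw $Z$ on the grid from a GP with the squared exponential covariance, exponentiate it, and sample per-tile Poisson counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tile centers on a toy 1D strip (the 2D court version is analogous).
xs = np.linspace(0.0, 35.0, 40)[:, None]

# Squared exponential covariance: marginal variance sigma2, length-scale phi.
sigma2, phi = 1.0, 5.0
d2 = (xs - xs.T) ** 2
K = sigma2 * np.exp(-0.5 * d2 / phi**2)

# Z ~ GP(0, K) on the grid (small jitter for a stable Cholesky factor),
# then lambda = exp(Z + z0) and per-tile Poisson counts.
z0 = 0.5  # bias term setting the mean log-intensity
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(xs)))
z = L @ rng.standard_normal(len(xs))
lam = np.exp(z + z0)
counts = rng.poisson(lam)
```

Because the intensity is an exponentiated smooth field, it is automatically non-negative and varies smoothly across neighboring tiles, which is exactly the inductive bias described above.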
\subsection{Non-Negative Matrix Factorization} \label{sec:nmf} Non-negative matrix factorization (NMF) is a dimensionality reduction technique that assumes some matrix $\mathbf \Lambda$ can be approximated by the product of two low-rank matrices \begin{eqnarray} \mathbf \Lambda = \mathbf W \mathbf B \end{eqnarray} where the matrix $\mathbf \Lambda \in \mathbb{R}_+^{N \times V}$ is composed of $N$ data points of length $V$, the basis matrix $\mathbf B \in \mathbb{R}_+^{K \times V}$ is composed of $K$ basis vectors, and the weight matrix $\mathbf W \in \mathbb{R}_+^{N \times K}$ is composed of the $N$ non-negative weight vectors that scale and linearly combine the basis vectors to reconstruct $\mathbf \Lambda$. Each vector can be reconstructed from the weights and the bases \begin{eqnarray} \mathbf \lambda_n = \sum_{k=1}^K W_{n,k} B_{k,:}. \end{eqnarray} The optimal matrices $\mathbf W^*$ and $\mathbf B^*$ are determined by an optimization procedure that minimizes $\ell(\cdot, \cdot)$, a measure of reconstruction error or divergence between $\mathbf W \mathbf B$ and $\mathbf \Lambda$ with the constraint that all elements remain non-negative: \begin{eqnarray} \mathbf W^*, \mathbf B^* &=& \argmin_{\mathbf W, \mathbf B \geq 0} \ell(\mathbf \Lambda, \mathbf W \mathbf B). \end{eqnarray} Different metrics will result in different procedures. For arbitrary matrices $\mathbf X$ and $\mathbf Y$, one option is the squared Frobenius norm, \begin{eqnarray} \ell_2(\mathbf X, \mathbf Y) &=& \sum_{i,j} (X_{ij} - Y_{ij})^2. \label{eq:frobenius} \end{eqnarray} Another choice is a matrix divergence metric \begin{align} \ell_{\text{KL}}(\mathbf X, \mathbf Y) &= \sum_{i,j} X_{ij} \log \frac{X_{ij}}{Y_{ij}} - X_{ij} + Y_{ij} \label{eq:kl} \end{align} which reduces to the Kullback-Leibler (KL) divergence when interpreting matrices $\mathbf X$ and $\mathbf Y$ as discrete distributions, i.e.,~$\sum_{ij} X_{ij} = \sum_{ij} Y_{ij} = 1$~\cite{lee}. 
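These objectives are typically minimized with the multiplicative updates of Lee and Seung; below is a minimal sketch for the KL loss on toy data (illustrative, not the implementation used in this paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_loss(X, Y):
    # Generalized KL divergence: sum of X log(X/Y) - X + Y.
    return float(np.sum(X * np.log(X / Y) - X + Y))

# Toy non-negative data matrix Lam and a rank-K factorization W @ B.
N, V, K = 12, 30, 3
Lam = rng.gamma(2.0, 1.0, size=(N, V)) + 1e-6
W = rng.random((N, K)) + 0.1
B = rng.random((K, V)) + 0.1

losses = []
for _ in range(50):
    R = Lam / (W @ B)                        # elementwise ratio term
    W *= (R @ B.T) / B.sum(axis=1)           # update weights
    R = Lam / (W @ B)
    B *= (W.T @ R) / W.sum(axis=0)[:, None]  # update basis
    losses.append(kl_loss(Lam, W @ B))
```

Each update provably does not increase the KL objective while keeping every entry non-negative, which is what makes this scheme a convenient default; the Frobenius loss has analogous multiplicative updates.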
Note that minimizing the divergence~$\ell_{\text{KL}}(\mathbf X, \mathbf Y)$ as a function of $\mathbf Y$ will yield a different result from optimizing over $\mathbf X$. The two loss functions lead to different properties of~$\mathbf W^*$ and~$\mathbf B^*$. To understand their inherent differences, note that the $\text{KL}$ loss function includes a $\log$ ratio term. This tends to disallow large \emph{ratios} between the original and reconstructed matrices, even in regions of low intensity. In fact, regions of low intensity can contribute more to the loss function than regions of high intensity if the ratio between them is large enough. The $\log$ ratio term is absent from the Frobenius loss function, which only disallows large \emph{differences}. This tends to favor the reconstruction of regions of larger intensity, leading to more basis vectors focused on those regions. Due to the positivity constraint, the basis $\mathbf B^*$ tends to be disjoint, exhibiting a more `parts-based' decomposition than other, less constrained matrix factorization methods, such as PCA. This is due to the restrictive property of the NMF decomposition that prevents negative bases from canceling out positive bases. In practice, this restriction eliminates a large swath of `optimal' factorizations with negative basis/weight pairs, leaving a sparser and often more interpretable basis~\cite{lee1999learning}. \section{Data} \label{sec:data} \begin{figure}[t!]
\centering \subfigure[points]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_charts.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_charts.pdf} \label{fig:raw-data} } \subfigure[grid]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_vectors.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_vectors.pdf} } \subfigure[LGCP]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_lgcp.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_lgcp.pdf} \label{fig:lgcps} } \subfigure[LGCP-NMF]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_nmf.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_nmf.pdf} \label{fig:nmf} } \caption{NBA player representations: (a) original point process data from two players, (b) discretized counts, (c) LGCP surfaces, and (d) NMF reconstructed surfaces ($K=10$). Made and missed shots are represented as blue circles and red $\times$'s, respectively. Some players have more data than others because only half of the stadiums had the tracking system in 2012-2013. } \vspace{-1em} \label{fig:first} \end{figure} \begin{figure}[t] \centering \vspace{-1em} \includegraphics[width=.28\columnwidth,page=5]{figs/empirical_cov.pdf} \includegraphics[width=.28\columnwidth,page=3]{figs/empirical_cov.pdf} \vspace{-1em} \caption{ Empirical spatial correlation in raw count data at two marked court locations. These data exhibit non-stationary correlation patterns, particularly among three-point shooters. This suggests a modeling mechanism to handle the global correlation. } \vspace{-1em} \label{fig:emp_cov} \end{figure} Our data consist of made and missed field goal attempt locations from roughly half of the games in the 2012-2013 NBA regular season. These data were collected by optical sensors as part of a program to introduce spatio-temporal information to basketball analytics.
We remove shooters with fewer than 50 field goal attempts, leaving a total of about 78,000 shots distributed among 335 unique NBA players. We model a player's shooting as a point process on the offensive half court, a 35 ft by 50 ft rectangle. We will index players with~${n \in \{1, \dots, N\}}$, and we will refer to the set of each player's shot attempts as~${\mathbf x_n = \{ x_{n,1}, \dots, x_{n,M_n} \}}$, where $M_n$ is the number of shots taken by player $n$, and $x_{n,m} \in [0,35]\times[0,50]$. When discussing shot outcomes, we will use~${y_{n,m} \in \{0,1\}}$ to indicate that the~$n$th player's~$m$th shot was made (1) or missed (0). Some raw data is graphically presented in Figure~\ref{fig:raw-data}. Our goal is to find a parsimonious, yet expressive representation of an NBA basketball player's shooting habits. \subsection{A Note on Non-Stationarity} As an exploratory data analysis step, we visualize the empirical spatial correlation of shot counts in a discretized space. We discretize the court into $V$ tiles, and compute $\mathbf X$ such that $\mathbf X_{n,v} = |\{ x_{n,i} : x_{n,i} \in v\}|$, the number of shots by player $n$ in tile $v$. The empirical correlation, depicted with respect to a few tiles in Figure~\ref{fig:emp_cov}, provides some intuition about the non-stationarity of the underlying intensity surfaces. Long range correlations exist in clearly non-stationary patterns, and this inductive bias is not captured by a stationary LGCP that merely assumes a locally smooth surface. This motivates the use of an additional method, such as NMF, to introduce global spatial patterns that attempt to learn this long range correlation. \section{Proposed Approach} \label{sec:approach} Our method ties together the two ideas, LGCPs and NMF, to extract spatial patterns from NBA shooting data. 
Given point process realizations for each of $N$ players, $\mathbf x_1, \dots, \mathbf x_N$, our procedure is \vspace{-.25cm} \begin{enumerate} \itemsep-2pt \item Construct the count matrix $\mathbf X_{n,v} = \#$ shots by player $n$ in tile $v$ on a discretized court. \item Fit an intensity surface $\lambda_n = (\lambda_{n,1}, \dots, \lambda_{n,V})^T$ for each player $n$ over the discretized court (LGCP). \item Construct the data matrix $\mathbf \Lambda = (\bar\lambda_1, \dots, \bar\lambda_N)^T$, where $\bar\lambda_n$ has been normalized to have unit volume. \item Find $\mathbf B, \mathbf W$ for some $K$ such that $\mathbf W \mathbf B \approx \mathbf \Lambda$, constraining all matrices to be non-negative (NMF). \end{enumerate} \vspace{-.2cm} This results in a spatial basis $\mathbf B$ and basis loadings for each individual player, $\mathbf w_n$. Due to the superposition property of Poisson processes and the non-negativity of the basis and loadings, the basis vectors can be interpreted as sub-intensity functions, or archetypal intensities used to construct each individual player. The linear weights for each player concisely summarize the spatial shooting habits of a player into a vector in $\mathbb{R}_+^K$. Though we have formulated a continuous model for conceptual simplicity, we discretize the court into~$V$ one-square-foot tiles to gain computational tractability in fitting the LGCP surfaces. We expect this tile size to capture all interesting spatial variation. Furthermore, the discretization maps each player into $\mathbb{R}_{+}^V$, providing the necessary input for NMF dimensionality reduction. 
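The four steps can be sketched end to end as follows. To keep the sketch self-contained, step 2's LGCP fit is replaced by a crude kernel-smoothing stand-in and step 4 uses Frobenius-loss multiplicative updates, so all sizes and smoothing choices below are illustrative rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1: count matrix X (N players x V tiles) from tile-indexed shots.
N, V, K = 20, 25, 4
shots = [rng.integers(0, V, size=int(rng.integers(50, 200))) for _ in range(N)]
X = np.zeros((N, V))
for i, s in enumerate(shots):
    np.add.at(X[i], s, 1.0)

# Step 2 (stand-in for the LGCP fit): smooth each player's counts.
kernel = np.array([0.25, 0.5, 0.25])
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, X) + 1e-6

# Step 3: normalize each intensity vector to unit volume.
Lam = smooth / smooth.sum(axis=1, keepdims=True)

# Step 4: non-negative factorization Lam ~ W @ B (Frobenius multiplicative updates).
W = rng.random((N, K)) + 0.1
B = rng.random((K, V)) + 0.1
err0 = np.linalg.norm(Lam - W @ B)
for _ in range(100):
    W *= (Lam @ B.T) / (W @ B @ B.T + 1e-12)
    B *= (W.T @ Lam) / (W.T @ W @ B + 1e-12)
err1 = np.linalg.norm(Lam - W @ B)
```

The rows of $\mathbf B$ then play the role of shared shot-type surfaces and each row of $\mathbf W$ is a player's $K$-dimensional summary.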
\subsection{Fitting the LGCPs} For each player's set of points, $\mathbf x_n$, the likelihood of the point process is discretely approximated as \vspace{-.15cm} \begin{eqnarray} p(\mathbf x_n | \lambda_n(\cdot)) &\approx& \prod_{v=1}^{V} p(\mathbf X_{n,v} | \Delta A \lambda_{n,v} ) \end{eqnarray} where, overloading notation, $\lambda_n(\cdot)$ is the exact intensity function, $\lambda_n$ is the discretized intensity function (vector), and $\Delta A$ is the area of each tile (implicitly one from now on). This approximation comes from the completely spatially random property of the Poisson process, allowing us to treat each tile independently. The probability of the count present in each tile is Poisson, with uniform intensity $\lambda_{n,v}$. Explicitly representing the Gaussian random field $\mathbf z_n$, the posterior is \begin{eqnarray} p(\mathbf z_n | \mathbf x_n) &\propto& p(\mathbf x_n | \mathbf z_n) p(\mathbf z_n) \\ &=& \prod_{v=1}^{V} e^{-\lambda_{n,v}} \frac{\lambda_{n,v}^{\mathbf X_{n,v}}}{\mathbf X_{n,v}!} \mathcal{N}( \mathbf z_n | 0, \mathbf K) \\ \lambda_{n} &=& \exp( \mathbf z_n + z_0 ) \end{eqnarray} where the prior over $\mathbf z_n$ is a mean zero normal with covariance~${\mathbf K_{v,u} = k(x_v, x_u)}$, determined by Equation \ref{eq:squared-exp}, and $z_0$ is a bias term that parameterizes the mean rate of the Poisson process. Samples of the posterior~$p(\lambda_n | \mathbf x_n)$ can be constructed by transforming samples of~$\mathbf z_n | \mathbf x_n$. To overcome the high correlation induced by the court's spatial structure, we employ elliptical slice sampling \cite{murray-adams-mackay-2010a} to approximate the posterior of $\lambda_n$ for each player, and subsequently store the posterior mean. \subsection{NMF Optimization} We now solve the optimization problem using techniques from \citet{lee} and \citet{brunet}, comparing the KL and Frobenius loss functions to highlight the difference between the resulting basis vectors.
\input{figs/nmf_weights.txt} \begin{figure}[t!] \vspace{0em} \centering \subfigure[Corner threes]{\label{fig:basis1}\includegraphics[width=.24\columnwidth,page=1]{figs/k_10.pdf}}\vspace{-.2em} \subfigure[Wing threes]{\label{fig:basis2}\includegraphics[width=.24\columnwidth, page=4]{figs/k_10.pdf}}\vspace{-.2em} \subfigure[Top of key threes]{\label{fig:basis3}\includegraphics[width=.24\columnwidth, page=10]{figs/k_10.pdf}}\vspace{-4pt} \subfigure[Long two-pointers]{\label{fig:basis4}\includegraphics[width=.24\columnwidth, page=8]{figs/k_10.pdf}}\vspace{-4pt} \caption{ A sample of basis vectors (surfaces) discovered by LGCP-NMF for $K=10$. Each basis surface is the normalized intensity function of a particular shot type, and players' shooting habits are a weighted combination of these shot types. Conditioned on a certain shot type (e.g. corner three), the intensity function acts as a density over shot locations, where red indicates likely locations. } \vspace{-1.5em} \label{fig:basis} \end{figure} \begin{figure}[h!] \centering \subfigure[LGCP-NMF (KL)]{\label{fig:brunet}\includegraphics[width=\columnwidth]{figs/basis_kl_row.pdf}}\vspace{-.5em} \subfigure[LGCP-NMF (Frobenius)]{\label{fig:lee}\includegraphics[width=\columnwidth]{figs/basis_frobenius_row.pdf}}\vspace{-.5em} \subfigure[Direct NMF (KL)]{\label{fig:raw}\includegraphics[width=\columnwidth]{figs/raw_counts_NMF.pdf}}\vspace{-.5em} \subfigure[LGCP-PCA]{\label{fig:pca}\includegraphics[width=\columnwidth]{figs/pca_basis_row.pdf}}\vspace{-.5em} \caption{Visual comparison of the basis resulting from various approaches to dimensionality reduction. The top two bases result from LGCP-NMF with the KL (top) and Frobenius (second) loss functions. The third row is the NMF basis applied to raw counts (no spatial continuity). The bottom row is the result of PCA applied to the LGCP intensity functions. LGCP-PCA fundamentally differs due to the negativity of the basis surfaces. Best viewed in color.
} \vspace{-1em} \label{fig:nmf-methods} \end{figure} \afterpage{\newpage} \subsection{Alternative Approaches} With the goal of discovering the shared structure among the collection of point processes, we can proceed in a few alternative directions. For instance, one could hand-select a spatial basis and directly fit weights for each individual point process, modeling the intensity as a weighted combination of these bases. However, this leads to multiple restrictions: firstly, choosing the spatial bases to cover the court is a highly subjective task (though there are situations where it would be desirable to have such control); secondly, these bases are unlikely to match the natural symmetries of the basketball court. In contrast, modeling the intensities with LGCP-NMF uncovers the natural symmetries of the game without user guidance. Another approach would be to directly factorize the raw shot count matrix $\mathbf X$. However, this method ignores spatial information, and essentially models the intensity surface as a set of $V$ independent parameters. Empirically, this method yields a poorer, more degenerate basis, which can be seen in Figure~\ref{fig:raw}. Furthermore, this is far less numerically stable, and jitter must be added to entries of $\mathbf \Lambda$ for convergence. Finally, another reasonable approach would apply PCA directly to the discretized LGCP intensity matrix $\mathbf \Lambda$, though as Figure \ref{fig:pca} demonstrates, the resulting mixed-sign decomposition leads to an unintuitive and visually uninterpretable basis. \section{Results} \label{sec:results} We graphically depict our point process data, LGCP representation, and LGCP-NMF reconstruction in Figure \ref{fig:first} for ${K=10}$. There is wide variation in shot selection among NBA players: some shooters specialize in certain types of shots, whereas others will shoot from many locations on the court.
Our method discovers basis vectors that correspond to visually interpretable shot types. Similar to the parts-based decomposition of human faces that NMF discovers in \citet{lee1999learning}, LGCP-NMF discovers a shots-based decomposition of NBA players. Setting ${K=10}$ and using the KL-based loss function, we display the resulting basis vectors in Figure~\ref{fig:basis}. One basis corresponds to corner three-point shots (Figure~\ref{fig:basis1}), while another corresponds to wing three-point shots (Figure~\ref{fig:basis2}), and yet another to top-of-the-key three-point shots (Figure~\ref{fig:basis3}). A comparison between the KL and Frobenius loss functions can be found in Figure~\ref{fig:nmf-methods}. Furthermore, the player-specific basis weights provide a concise characterization of their offensive habits. The weight $w_{n,k}$ can be interpreted as how often player $n$ takes shots of type $k$, which quantifies intuitions about player behavior. Table~\ref{tab:weights} compares normalized weights between a selection of players. Empirically, the KL-based NMF decomposition results in a more spatially diverse basis, whereas the Frobenius-based decomposition focuses on the region of high intensity near the basket at the expense of the rest of the court. This can be seen by comparing Figure~\ref{fig:brunet} (KL) to Figure~\ref{fig:lee} (Frobenius). We also compare the two LGCP-NMF decompositions to the NMF decomposition done directly on the matrix of counts, $\mathbf X$. The results in Figure~\ref{fig:raw} show a set of sparse basis vectors that are spatially unstructured. And lastly, we depict the PCA decomposition of the LGCP matrix $\mathbf \Lambda$ in Figure~\ref{fig:pca}. This yields the most colorful decomposition because the basis vectors and player weights are unconstrained real numbers. This renders the basis vectors uninterpretable as intensity functions.
Upon visual inspection, the corner three-point `feature' that is salient in the LGCP-NMF decompositions appears in five separate PCA vectors, some positive, some negative. This is the cancellation phenomenon that NMF avoids. We compare the fit of the low-rank NMF reconstructions and the original LGCPs on held-out test data in Figure~\ref{fig:test-points}. The NMF decomposition achieves superior predictive performance over the original independent LGCPs, in addition to its compressed representation and interpretable basis. \begin{figure}[t!] \centering \vspace{-1.1em} \includegraphics[width=.8\columnwidth]{figs/nmf_vs_lgcp_test_ll.pdf} \vspace{-1.2em} \caption{ Average player test data log likelihoods for LGCP-NMF with varying $K$ and for the independent LGCPs. For each fold, we held out 10\% of each player's shots, fit independent LGCPs, and ran NMF (using the KL-based loss function) for varying $K$. We display the average (across players) test log likelihood above. The predictive performance of our representation improves upon the high dimensional independent LGCPs, showing the importance of pooling information across players. } \vspace{-.7em} \label{fig:test-points} \end{figure} \section{From Shooting Frequency to Efficiency} \label{sec:application} Unadjusted field goal percentage, or the probability a player makes an attempted shot, is a statistic of interest when evaluating player value. This statistic, however, is spatially uninformed, and washes away important variation due to shooting circumstances. Leveraging the vocabulary of shot types provided by the basis vectors, we model a player's field goal percentage for each of the shot types. We decompose a player's field goal percentage into a weighted combination of $K$ basis field goal percentages, which provides a higher resolution summary of an offensive player's skills. Our aim is to estimate the probability of a made shot for each point in the offensive half court \emph{for each individual player}.
\subsection{Latent variable model} For player $n$, we model each shot event as \begin{align*} k_{n,i} &\sim \text{Mult}( \bar w_{n,:} ) && \text{ shot type } \\ \color{blue}{x_{n,i}| k_{n,i}} &\sim \text{Mult}(\bar B_{k_{n,i}}) && \text{ location } \\ \color{red}{y_{n,i} | k_{n,i}} &\sim \text{Bern}( \text{logit}^{-1}( \beta_{n,k_{n,i}} ) ) && \text{ outcome } \end{align*} where $\bar B_k \equiv B_k / \sum_v B_k(v)$ is the normalized basis, and the player weights $\bar w_{n,k}$ are adjusted to reflect the total mass of each unnormalized basis. NMF does not constrain each basis vector to have unit volume, so the volume of each basis vector is a meaningful quantity that corresponds to how common a shot type is. We transfer this information into the weights by setting \begin{align*} \bar w_{n,k} &= w_{n,k} \sum_v B_k(v). && \text{ adjusted basis loadings } \end{align*} We do not directly observe the shot type, $k$, only the shot location $x_{n,i}$. Omitting $n$ and $i$ to simplify notation, we can compute the predictive distribution \begin{align*} p(y | x) &= \sum_{k=1}^K {\color{red}p(y | k)} p(k | x) \\ &= \sum_{k=1}^K {\color{red}p(y | k)} \frac{ {\color{blue}p(x | k)} p(k) }{ \sum_{k'} {\color{blue}p(x | k')} p(k') } \end{align*} where the outcome distribution is red and the location distribution is blue for clarity. The shot type decomposition given by $\mathbf B$ provides a natural way to share information between shooters to reduce the variance in our estimated surfaces. We hierarchically model player probability parameters $\beta_{n,k}$ with respect to each shot type.
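A small numerical sketch of this mixture computation, with invented values for $\bar w$, $\bar B$, and $\beta$ ($K=2$ shot types over $V=3$ tiles, none taken from the fitted model), shows how the shot location shifts the inferred shot type and hence the predicted make probability:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def p_make_given_location(v, w_bar, B_bar, beta_n):
    """p(y=1 | x=v) = sum_k p(y=1 | k) p(k | x=v) for a single player.

    w_bar  : (K,) adjusted loadings, proportional to p(k)
    B_bar  : (K, V) normalized bases; row k is p(x | k)
    beta_n : (K,) the player's per-basis logit parameters
    """
    prior_k = w_bar / w_bar.sum()            # p(k)
    joint = prior_k * B_bar[:, v]            # p(k) p(x=v | k)
    post_k = joint / joint.sum()             # p(k | x=v), Bayes' rule
    return float(post_k @ sigmoid(beta_n))   # mix outcome probabilities

# Invented toy model: shot type 0 lives on tile 0, type 1 on tile 2.
B_bar = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.1, 0.8]])
w_bar = np.array([2.0, 1.0])                 # type 0 taken twice as often
beta = np.array([1.0, -1.0])                 # accurate at type 0, poor at type 1

p_tile0 = p_make_given_location(0, w_bar, B_bar, beta)
p_tile2 = p_make_given_location(2, w_bar, B_bar, beta)
```

From tile 0 the posterior over shot types concentrates on the accurate type 0, so the predicted make probability is higher than from tile 2.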
The prior over parameters is \begin{align*} \beta_{0,k} &\sim \mathcal{N}(0,\sigma_0^2) && \text{ diffuse global prior } \\ \sigma_{k}^2 &\sim \text{Inv-Gamma}(a, b) && \text{ basis variance } \\ \beta_{n,k} &\sim \mathcal{N}(\beta_{0,k}, \sigma_k^2) && \text{ player/basis params} \end{align*} where the global means, $\beta_{0,k}$, and variances, $\sigma_k^2$, are given diffuse priors, $\sigma_0^2 = 100$, and $a = b =.1$. The goal of this hierarchical prior structure is to share information between players about a particular shot type. Furthermore, it will shrink players with low sample sizes to the global mean. Some consequences of these modeling decisions will be discussed in Section~\ref{discussion}. \subsection{Inference} Gibbs sampling is performed to draw posterior samples of the $\beta$ and $\sigma^2$ parameters. To draw posterior samples of $\beta | \sigma^2, y$, we use elliptical slice sampling to exploit the normal prior placed on $\beta$. We can draw samples of $\sigma^2 | \beta, y$ directly due to conjugacy. \subsection{Results} \begin{figure}[t!] \vspace{-1.3em} \centering \subfigure[global mean]{\label{fig:global-mean} \includegraphics[width=.4\columnwidth, page=1]{figs/ind_lvm_hier_global.pdf} } \subfigure[posterior uncertainty]{ \includegraphics[width=.4\columnwidth, page=2]{figs/ind_lvm_hier_global.pdf} } \subfigure[]{ \includegraphics[width=.4\columnwidth, page=1]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.4\columnwidth, page=121]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.4\columnwidth, page=76]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.4\columnwidth, page=91]{figs/ind_lvm_hier_players.pdf} } \vspace{-1em} \caption{ (a) Global efficiency surface and (b) posterior uncertainty. (c-f) Spatial efficiency for a selection of players. Red indicates the highest field goal percentage and dark blue represents the lowest. 
Novak and Curry are known for their 3-point shooting, whereas James and Irving are known for efficiency near the basket. } \vspace{-1.2em} \label{fig:efficiency} \end{figure} We visualize the global mean field goal percentage surface, corresponding to the parameters $\beta_{0,k}$, in Figure~\ref{fig:global-mean}. Beside it, we show one standard deviation of posterior uncertainty in the mean surface. Below the global mean, we show a few examples of individual player field goal percentage surfaces. These visualizations allow us to compare players' efficiency with respect to regions of the court. For instance, our fit suggests that both Kyrie Irving and Steve Novak are below average from basis 4, the baseline jump shot, whereas Stephen Curry is an above average corner three-point shooter. This is valuable information for a defending player. More details about player field goal percentage surfaces and player parameter fits are available in the supplemental material. \section{Discussion} \label{discussion} We have presented a method that models related point processes using a constrained matrix decomposition of independently fit intensity surfaces. Our representation provides an accurate low dimensional summary of shooting habits and an intuitive basis that corresponds to shot types recognizable by basketball fans and coaches. After visualizing this basis and discussing some of its properties as a quantification of player habits, we then used the decomposition to form interpretable estimates of spatially varying shooting efficiency. We see a few directions for future work. Due to the relationship between KL-based NMF and some fully generative latent variable models, including the probabilistic latent semantic model \cite{ding2008equivalence} and latent Dirichlet allocation \cite{blei2003latent}, we are interested in jointly modeling the point process and intensity surface decomposition in a fully generative model.
This spatially informed LDA would model the non-stationary spatial structure the data exhibit within each non-negative basis surface, opening the door for a richer parameterization of offensive shooting habits that could include defensive effects. Furthermore, jointly modeling spatial field goal percentage and intensity can capture the correlation between player skill and shooting habits. Common intuition that players will take more shots from locations where they have more accuracy is missed in our treatment, yet modeling this effect may yield a more accurate characterization of a player's habits and ability. \section*{Acknowledgments} The authors would like to acknowledge the Harvard XY Hoops group, including Alex Franks, Alex D'Amour, Ryan Grossman, and Dan Cervone. We also acknowledge the HIPS lab and several referees for helpful suggestions and discussion, and STATS LLC for providing the data. To compare various NMF optimization procedures, the authors used the \texttt{R} package \texttt{NMF} \cite{r-nmf}. \section{Introduction} The spatial locations of made and missed shot attempts in basketball are naturally modeled as a point process. The Poisson process and its inhomogeneous variant are popular choices to model point data in spatial and temporal settings. Inferring the latent intensity function, $\lambda(\cdot)$, is an effective way to characterize a Poisson process, and $\lambda(\cdot)$ itself is typically of interest. Nonparametric methods to fit intensity functions are often desirable due to their flexibility and expressiveness, and have been explored at length \citep{cox1955, moller1998log, diggle2013statistical}. Nonparametric intensity surfaces have been used in many applied settings, including density estimation \citep{adams-murray-mackay-2009c}, models for disease mapping \citep{benes2002bayesian}, and models of neural spiking \citep{cunningham2008inferring}.
When data are related realizations of a Poisson process on the same space, we often seek the underlying structure that ties them together. In this paper, we present an unsupervised approach to extract features from instances of point processes for which the intensity surfaces vary from realization to realization, but are constructed from a common library. The main contribution of this paper is an unsupervised method that finds a low dimensional representation of related point processes. Focusing on the application of modeling basketball shot selection, we show that a matrix decomposition of Poisson process intensity surfaces can provide an interpretable feature space that parsimoniously describes the data. We examine the individual components of the matrix decomposition, as they provide an interesting quantitative summary of players' offensive tendencies. These summaries better characterize player types than any traditional categorization (e.g.~player position). One application of our method is personnel decisions. Our representation can be used to select sets of players with diverse offensive tendencies. This representation is then leveraged in a latent variable model to visualize a player's field goal percentage as a function of location on the court. \subsection{Related Work} Previously, \citet{adams-dahl-murray-2010a} developed a probabilistic matrix factorization method to predict score outcomes in NBA games. Their method incorporates external covariate information, though they do not model spatial effects or individual players. \citet{goldsberry} developed a framework for analyzing the defensive effect of NBA centers on shot frequency and shot efficiency. Their analysis is restricted, however, to a subset of players in a small part of the court near the basket. Libraries of spatial or temporal functions with a nonparametric prior have also been used to model neural data.
\citet{cunningham2009factor} develop the Gaussian process factor analysis model to discover latent `neural trajectories' in high dimensional neural time-series. Though similar in spirit, our model includes a positivity constraint on the latent functions that fundamentally changes their behavior and interpretation. \section{Background} \label{sec:background} This section reviews the techniques used in our point process modeling method, including Gaussian processes (GPs), Poisson processes (PPs), log-Gaussian Cox processes (LGCPs), and non-negative matrix factorization (NMF). \subsection{Gaussian Processes} A Gaussian process is a stochastic process whose sample path values, ${f_1, f_2, \dots \in \mathbb{R}}$, are jointly normally distributed. GPs are frequently used as a probabilistic model over functions~${f: \mathcal{X} \rightarrow \mathbb{R}}$, where the realized value ${f_n \equiv f(x_n)}$ corresponds to a function evaluation at some point~${x_n \in \mathcal{X}}$. The spatial covariance between two points in~$\mathcal{X}$ encodes prior beliefs about the function~$f$; covariances can encode beliefs about a wide range of properties, including differentiability, smoothness, and periodicity. As a concrete example, imagine a smooth function~${f: \mathbb{R}^2 \rightarrow \mathbb{R}}$ for which we have observed a set of locations~${x_1, \dots, x_N}$ and values~${f_1, \dots, f_N}$. We can model this `smooth' property by choosing a covariance function that results in smooth processes. For instance, the squared exponential covariance function \begin{eqnarray} \text{cov}(f_i, f_j) = k(x_i, x_j) = \sigma^2 \exp \left( -\frac{1}{2}\frac{||x_i - x_j||^2}{\phi^2} \right) \label{eq:squared-exp} \end{eqnarray} assumes the function $f$ is infinitely differentiable, with marginal variance $\sigma^2$ and length-scale $\phi$, which controls the expected number of direction changes the function exhibits.
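As a concrete numerical illustration (not part of the paper; the toy grid and the hyperparameter values $\sigma^2 = 1$, $\phi = 5$ are arbitrary choices), the squared exponential covariance can be evaluated on a discretized grid and used to draw a smooth random surface:

```python
import numpy as np

def squared_exp_cov(X1, X2, sigma2=1.0, phi=5.0):
    """k(x_i, x_j) = sigma^2 exp(-||x_i - x_j||^2 / (2 phi^2))."""
    # Pairwise squared Euclidean distances between rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / phi ** 2)

# Toy 10x10 grid of tile centers standing in for court locations (feet).
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])

K = squared_exp_cov(grid, grid)
# One sample path of the zero-mean GP; the jitter keeps the covariance
# numerically positive definite when sampling.
rng = np.random.default_rng(0)
z = rng.multivariate_normal(np.zeros(len(grid)), K + 1e-8 * np.eye(len(grid)))
```

Larger $\phi$ yields smoother draws, which is exactly the smoothness bias discussed in the text.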
Because this covariance is strictly a function of the distance between two points in the space $\mathcal{X}$, the squared exponential covariance function is said to be stationary. We use this smoothness property to encode our inductive bias that shooting habits vary smoothly over the court space. For a more thorough treatment of Gaussian processes, see \cite{rasmussen2006gaussian}. \subsection{Poisson Processes} A Poisson process is a completely spatially random point process on some space, $\mathcal{X}$, for which the number of points that end up in some set $A \subseteq \mathcal{X}$ is Poisson distributed. We will use an inhomogeneous Poisson process on a domain $\mathcal{X}$. That is, we will model the set of spatial points, $x_1, \dots, x_N$ with $x_n \in \mathcal{X}$, as a Poisson process with a non-negative intensity function~${\lambda(x): \mathcal{X} \rightarrow \mathbb{R}_+}$ (throughout this paper, $\mathbb{R}_+$ will indicate the union of the positive reals and zero). This implies that for any set $A \subseteq \mathcal{X}$, the number of points that fall in $A$, $N_A$, will be Poisson distributed, \begin{eqnarray} N_A &\sim& \textrm{Poiss}\left( \int_A \lambda(dA) \right). \end{eqnarray} Furthermore, a Poisson process is `memoryless', meaning that $N_A$ is independent of $N_B$ for disjoint subsets $A$ and $B$. We signify that a set of points~${\mathbf x \equiv \{ x_1, \dots, x_N \}}$ follows a Poisson process as \begin{eqnarray} \mathbf x &\sim& \mathcal{PP}(\lambda(\cdot)). \end{eqnarray} One useful property of the Poisson process is the superposition theorem~\citep{kingman1992poisson}, which states that given a countable collection of independent Poisson processes $\mathbf x_1, \mathbf x_2, \dots$, each with measure $\lambda_1, \lambda_2, \dots$, their superposition is distributed as \begin{eqnarray} \bigcup_{k=1}^\infty \mathbf x_k &\sim& \mathcal{PP}\left( \sum_{k=1}^\infty \lambda_k \right). 
\end{eqnarray} Furthermore, note that each intensity function $\lambda_k$ can be scaled by some non-negative factor and remain a valid intensity function. The positive scalability of intensity functions and the superposition property of Poisson processes motivate the non-negative decomposition (Section~\ref{sec:nmf}) of a global Poisson process into simpler weighted sub-processes that can be shared between players. \subsection{Log-Gaussian Cox Processes} A log-Gaussian Cox process (LGCP) is a doubly-stochastic Poisson process with a spatially varying intensity function modeled as an exponentiated GP \begin{eqnarray} Z(\cdot) &\sim& \text{GP}(0, k(\cdot, \cdot) ) \\ \lambda(\cdot) &=& \exp( Z(\cdot) ) \\ x_1, \dots, x_N &\sim& \mathcal{PP}( \lambda(\cdot) ) \end{eqnarray} where doubly-stochastic refers to two levels of randomness: the random function $Z(\cdot)$ and the random point process with intensity $\lambda(\cdot)$. \subsection{Non-Negative Matrix Factorization} \label{sec:nmf} Non-negative matrix factorization (NMF) is a dimensionality reduction technique that assumes some matrix $\mathbf \Lambda$ can be approximated by the product of two low-rank matrices \begin{eqnarray} \mathbf \Lambda \approx \mathbf W \mathbf B \end{eqnarray} where the matrix $\mathbf \Lambda \in \mathbb{R}_+^{N \times V}$ is composed of $N$ data points of length $V$, the basis matrix $\mathbf B \in \mathbb{R}_+^{K \times V}$ is composed of $K$ basis vectors, and the weight matrix $\mathbf W \in \mathbb{R}_+^{N \times K}$ is composed of the $N$ non-negative weight vectors that scale and linearly combine the basis vectors to reconstruct $\mathbf \Lambda$. Each vector can be reconstructed from the weights and the bases \begin{eqnarray} \lambda_n = \sum_{k=1}^K W_{n,k} B_{k,:}.
\end{eqnarray} The optimal matrices $\mathbf W^*$ and $\mathbf B^*$ are determined by an optimization procedure that minimizes $\ell(\cdot, \cdot)$, a measure of reconstruction error or divergence between $\mathbf W \mathbf B$ and $\mathbf \Lambda$ with the constraint that all elements remain non-negative: \begin{eqnarray} \mathbf W^*, \mathbf B^* &=& \argmin_{\mathbf W, \mathbf B \geq 0} \ell(\mathbf \Lambda, \mathbf W \mathbf B). \end{eqnarray} Different metrics will result in different procedures. For arbitrary matrices $\mathbf X$ and $\mathbf Y$, one option is the squared Frobenius norm, \begin{eqnarray} \ell_2(\mathbf X, \mathbf Y) &=& \sum_{i,j} (X_{ij} - Y_{ij})^2. \label{eq:frobenius} \end{eqnarray} Another choice is a matrix divergence metric \begin{align} \ell_{\text{KL}}(\mathbf X, \mathbf Y) &= \sum_{i,j} X_{ij} \log \frac{X_{ij}}{Y_{ij}} - X_{ij} + Y_{ij} \label{eq:kl} \end{align} which reduces to the Kullback-Leibler (KL) divergence when interpreting matrices $\mathbf X$ and $\mathbf Y$ as discrete distributions, i.e.,~$\sum_{ij} X_{ij} = \sum_{ij} Y_{ij} = 1$~\citep{lee}. Note that minimizing the divergence~$\ell_{\text{KL}}(\mathbf X, \mathbf Y)$ as a function of $\mathbf Y$ will yield a different result from optimizing over $\mathbf X$. The two loss functions lead to different properties of~$\mathbf W^*$ and~$\mathbf B^*$. To understand their inherent differences, note that the $\text{KL}$ loss function includes a $\log$ ratio term. This tends to disallow large \emph{ratios} between the original and reconstructed matrices, even in regions of low intensity. In fact, regions of low intensity can contribute more to the loss function than regions of high intensity if the ratio between them is large enough. The $\log$ ratio term is absent from the Frobenius loss function, which only disallows large \emph{differences}. This tends to favor the reconstruction of regions of larger intensity, leading to more basis vectors focused on those regions. 
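Either loss can be minimized with the multiplicative update rules of \citet{lee}. The following numpy sketch implements the KL-based updates on random toy data; it is an illustrative stand-in for the intensity matrices, not the paper's implementation:

```python
import numpy as np

def nmf_kl(Lam, K, iters=200, seed=0, eps=1e-10):
    """Multiplicative updates (Lee & Seung) for min_{W,B >= 0} l_KL(Lam, W B)."""
    rng = np.random.default_rng(seed)
    N, V = Lam.shape
    W = rng.random((N, K)) + eps
    B = rng.random((K, V)) + eps
    for _ in range(iters):
        R = Lam / (W @ B + eps)                      # elementwise ratio Lam / (WB)
        B *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)
        R = Lam / (W @ B + eps)
        W *= (R @ B.T) / (B.sum(axis=1)[None, :] + eps)
    return W, B

def kl_loss(X, Y, eps=1e-10):
    """The matrix divergence: sum_ij X log(X/Y) - X + Y."""
    return float(np.sum(X * np.log((X + eps) / (Y + eps)) - X + Y))

rng = np.random.default_rng(1)
Lam = rng.random((30, 50))     # toy N=30 "players" over V=50 "tiles"
W, B = nmf_kl(Lam, K=5)
```

Each update is guaranteed not to increase the loss, so the reconstruction improves monotonically from the random initialization while $\mathbf W$ and $\mathbf B$ stay non-negative.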
Due to the positivity constraint, the basis $\mathbf B^*$ tends to be disjoint, exhibiting a more `parts-based' decomposition than other, less constrained matrix factorization methods, such as PCA. This is due to the restrictive property of the NMF decomposition that prevents negative bases from canceling out positive bases. In practice, this restriction eliminates a large swath of `optimal' factorizations with negative basis/weight pairs, leaving a sparser and often more interpretable basis~\citep{lee1999learning}. \section{Data} \label{sec:data} \begin{figure}[t!] \centering \subfigure[points]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_charts.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_charts.pdf} \label{fig:raw-data} } \subfigure[grid]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_vectors.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_vectors.pdf} } \subfigure[LGCP]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_lgcp.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_lgcp.pdf} \label{fig:lgcps} } \subfigure[LGCP-NMF]{ \includegraphics[height=.23\columnwidth, page=5]{figs/all_shot_nmf.pdf} \includegraphics[height=.23\columnwidth, page=3]{figs/all_shot_nmf.pdf} \label{fig:nmf} } \caption{NBA player representations: (a) original point process data from two players, (b) discretized counts, (c) LGCP surfaces, and (d) NMF reconstructed surfaces ($K=10$). Made and missed shots are represented as blue circles and red $\times$'s, respectively. Some players have more data than others because only half of the stadiums had the tracking system in 2012-2013.
} \vspace{-1em} \label{fig:first} \end{figure} \begin{figure}[t] \centering \vspace{-1em} \includegraphics[width=.28\columnwidth,page=5]{figs/empirical_cov.pdf} \includegraphics[width=.28\columnwidth,page=3]{figs/empirical_cov.pdf} \vspace{-1em} \caption{ Empirical spatial correlation in raw count data at two marked court locations. These data exhibit non-stationary correlation patterns, particularly among three-point shooters. This suggests the need for a modeling mechanism that captures this global correlation. } \vspace{-1em} \label{fig:emp_cov} \end{figure} Our data consist of made and missed field goal attempt locations from roughly half of the games in the 2012-2013 NBA regular season. These data were collected by optical sensors as part of a program to introduce spatio-temporal information to basketball analytics. We remove shooters with fewer than 50 field goal attempts, leaving a total of about 78,000 shots distributed among 335 unique NBA players. We model a player's shooting as a point process on the offensive half court, a 35 ft by 50 ft rectangle. We will index players with~${n \in \{1, \dots, N\}}$, and we will refer to the set of each player's shot attempts as~${\mathbf x_n = \{ x_{n,1}, \dots, x_{n,M_n} \}}$, where $M_n$ is the number of shots taken by player $n$, and $x_{n,m} \in [0,35]\times[0,50]$. When discussing shot outcomes, we will use~${y_{n,m} \in \{0,1\}}$ to indicate that the~$n$th player's~$m$th shot was made (1) or missed (0). Some raw data are graphically presented in Figure~\ref{fig:raw-data}. Our goal is to find a parsimonious, yet expressive representation of an NBA basketball player's shooting habits. \subsection{A Note on Non-Stationarity} As an exploratory data analysis step, we visualize the empirical spatial correlation of shot counts in a discretized space. We discretize the court into $V$ tiles, and compute $\mathbf X$ such that $\mathbf X_{n,v} = |\{ x_{n,i} : x_{n,i} \in v\}|$, the number of shots by player $n$ in tile $v$.
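This discretization can be sketched directly (the shot coordinates below are fabricated; one-square-foot tiles on the 35 ft by 50 ft half court give $V = 35 \times 50 = 1750$):

```python
import numpy as np

def count_matrix(shots, nx=35, ny=50):
    """X[n, v] = number of player n's shots landing in 1-ft tile v."""
    X = np.zeros((len(shots), nx * ny))
    for n, pts in enumerate(shots):
        for x, y in pts:
            i = min(int(x), nx - 1)      # tile row, clipped to the court
            j = min(int(y), ny - 1)      # tile column
            X[n, i * ny + j] += 1        # flatten (i, j) into a single index v
    return X

# Fabricated data: 20 "players", each with 40 shots near a random spot.
rng = np.random.default_rng(0)
shots = [rng.normal(loc=rng.uniform(5.0, 30.0, size=2), scale=2.0,
                    size=(40, 2)).clip(0.0, 34.9)
         for _ in range(20)]
X = count_matrix(shots)
```

Column-wise correlations of such an $\mathbf X$ are what Figure~\ref{fig:emp_cov} visualizes for two reference tiles.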
The empirical correlation, depicted with respect to a few tiles in Figure~\ref{fig:emp_cov}, provides some intuition about the non-stationarity of the underlying intensity surfaces. Long range correlations exist in clearly non-stationary patterns, and this inductive bias is not captured by a stationary LGCP that merely assumes a locally smooth surface. This motivates the use of an additional method, such as NMF, to introduce global spatial patterns that attempt to learn this long range correlation. \section{Proposed Approach} \label{sec:approach} Our method ties together the two ideas, LGCPs and NMF, to extract spatial patterns from NBA shooting data. Given point process realizations for each of $N$ players, $\mathbf x_1, \dots, \mathbf x_N$, our procedure is \vspace{-.25cm} \begin{enumerate} \itemsep-2pt \item Construct the count matrix $\mathbf X_{n,v} = \#$ shots by player $n$ in tile $v$ on a discretized court. \item Fit an intensity surface $\lambda_n = (\lambda_{n,1}, \dots, \lambda_{n,V})^T$ for each player $n$ over the discretized court (LGCP). \item Construct the data matrix $\mathbf \Lambda = (\bar\lambda_1, \dots, \bar\lambda_N)^T$, where $\bar\lambda_n$ has been normalized to have unit volume. \item Find $\mathbf B, \mathbf W$ for some $K$ such that $\mathbf W \mathbf B \approx \mathbf \Lambda$, constraining all matrices to be non-negative (NMF). \end{enumerate} \vspace{-.2cm} This results in a spatial basis $\mathbf B$ and basis loadings for each individual player, $\mathbf w_n$. Due to the superposition property of Poisson processes and the non-negativity of the basis and loadings, the basis vectors can be interpreted as sub-intensity functions, or archetypal intensities used to construct each individual player. The linear weights for each player concisely summarize the spatial shooting habits of a player into a vector in $\mathbb{R}_+^K$. 
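Steps 3 and 4 above can be sketched end to end, with toy vectors standing in for the fitted LGCP intensities; for brevity this sketch uses Frobenius-loss multiplicative updates (the paper compares both losses):

```python
import numpy as np

def nmf_frobenius(Lam, K, iters=300, seed=0, eps=1e-10):
    """Multiplicative updates for min_{W,B >= 0} ||Lam - W B||_F^2."""
    rng = np.random.default_rng(seed)
    N, V = Lam.shape
    W = rng.random((N, K)) + eps
    B = rng.random((K, V)) + eps
    for _ in range(iters):
        B *= (W.T @ Lam) / (W.T @ W @ B + eps)
        W *= (Lam @ B.T) / (W @ B @ B.T + eps)
    return W, B

rng = np.random.default_rng(2)
raw = rng.random((40, 60)) ** 2                # toy "fitted intensity" vectors
Lam = raw / raw.sum(axis=1, keepdims=True)     # step 3: unit-volume rows
W, B = nmf_frobenius(Lam, K=6)                 # step 4: W B approximates Lam
```

Each row of $\mathbf W$ is then a player's $K$-dimensional summary, and each row of $\mathbf B$ a shared non-negative sub-intensity.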
Though we have formulated a continuous model for conceptual simplicity, we discretize the court into~$V$ one-square-foot tiles to gain computational tractability in fitting the LGCP surfaces. We expect this tile size to capture all interesting spatial variation. Furthermore, the discretization maps each player into $\mathbb{R}_{+}^V$, providing the necessary input for NMF dimensionality reduction. \subsection{Fitting the LGCPs} For each player's set of points, $\mathbf x_n$, the likelihood of the point process is discretely approximated as \vspace{-.15cm} \begin{eqnarray} p(\mathbf x_n | \lambda_n(\cdot)) &\approx& \prod_{v=1}^{V} p(\mathbf X_{n,v} | \Delta A \lambda_{n,v} ) \end{eqnarray} where, overloading notation, $\lambda_n(\cdot)$ is the exact intensity function, $\lambda_n$ is the discretized intensity function (vector), and $\Delta A$ is the area of each tile (implicitly one from now on). This approximation comes from the completely spatially random property of the Poisson process, allowing us to treat each tile independently. The probability of the count present in each tile is Poisson, with uniform intensity $\lambda_{n,v}$. Explicitly representing the Gaussian random field $\mathbf z_n$, the posterior is \begin{eqnarray} p(\mathbf z_n | \mathbf x_n) &\propto& p(\mathbf x_n | \mathbf z_n) p(\mathbf z_n) \\ &=& \prod_{v=1}^{V} e^{-\lambda_{n,v}} \frac{\lambda_{n,v}^{\mathbf X_{n,v}}}{\mathbf X_{n,v}!} \mathcal{N}( \mathbf z_n | 0, \mathbf K) \\ \lambda_{n} &=& \exp( \mathbf z_n + z_0 ) \end{eqnarray} where the prior over $\mathbf z_n$ is a mean zero normal with covariance~${\mathbf K_{v,u} = k(x_v, x_u)}$, determined by Equation \ref{eq:squared-exp}, and $z_0$ is a bias term that parameterizes the mean rate of the Poisson process. Samples of the posterior~$p(\lambda_n | \mathbf x_n)$ can be constructed by transforming samples of~$\mathbf z_n | \mathbf x_n$. 
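A minimal sketch of this inference step on a toy one-dimensional problem pairs the discretized Poisson log-likelihood with an elliptical slice sampling update \citep{murray-adams-mackay-2010a}; all sizes and hyperparameters below are illustrative, not the paper's settings:

```python
import numpy as np

def poisson_loglik(z, counts, z0=0.0):
    """log p(X | z) for the discretized LGCP: X_v ~ Poisson(exp(z_v + z0))."""
    # Drops the constant -log(X_v!) terms, which do not affect sampling.
    return float(np.sum(counts * (z + z0) - np.exp(z + z0)))

def elliptical_slice(z, prior_chol, loglik, rng):
    """One elliptical slice sampling transition targeting p(z) L(z)."""
    nu = prior_chol @ rng.standard_normal(len(z))   # prior draw defines the ellipse
    log_y = loglik(z) + np.log(rng.uniform())       # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        z_new = z * np.cos(theta) + nu * np.sin(theta)
        if loglik(z_new) > log_y:
            return z_new
        # Shrink the angle bracket toward 0 (the current state) and retry.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy problem: V=25 tiles on a line, smooth squared-exponential prior.
rng = np.random.default_rng(0)
V = 25
t = np.arange(V, dtype=float)
Kmat = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 3.0 ** 2) + 1e-6 * np.eye(V)
chol = np.linalg.cholesky(Kmat)
counts = rng.poisson(2.0, size=V)

z = np.zeros(V)
for _ in range(100):
    z = elliptical_slice(z, chol, lambda zz: poisson_loglik(zz, counts), rng)
lam_hat = np.exp(z)    # one posterior draw of the discretized intensity
```

The update requires no step-size tuning and always terminates, since shrinking the bracket toward $\theta = 0$ recovers the current state.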
To overcome the high correlation induced by the court's spatial structure, we employ elliptical slice sampling \citep{murray-adams-mackay-2010a} to approximate the posterior of $\lambda_n$ for each player, and subsequently store the posterior mean. \subsection{NMF Optimization} We now solve the optimization problem using techniques from \citet{lee} and \citet{brunet}, comparing the KL and Frobenius loss functions to highlight the difference between the resulting basis vectors. \input{figs/nmf_weights.txt} \begin{figure}[!] \vspace{0em} \centering \subfigure[Corner threes]{\label{fig:basis1}\includegraphics[width=.24\columnwidth,page=1]{figs/k_10.pdf}} \subfigure[Wing threes]{\label{fig:basis2}\includegraphics[width=.24\columnwidth, page=4]{figs/k_10.pdf}} \subfigure[Top of key threes]{\label{fig:basis3}\includegraphics[width=.24\columnwidth, page=10]{figs/k_10.pdf}} \subfigure[Long two-pointers]{\label{fig:basis4}\includegraphics[width=.24\columnwidth, page=8]{figs/k_10.pdf}} \caption{ A sample of basis vectors (surfaces) discovered by LGCP-NMF for $K=10$. Each basis surface is the normalized intensity function of a particular shot type, and players' shooting habits are a weighted combination of these shot types. Conditioned on a certain shot type (e.g.~a corner three), the intensity function acts as a density over shot locations, where red indicates likely locations. } \label{fig:basis} \end{figure} \begin{figure}[h!]
\centering \subfigure[LGCP-NMF (KL)]{\label{fig:brunet}\includegraphics[width=\columnwidth]{figs/basis_kl_row.pdf}}\vspace{-.5em} \subfigure[LGCP-NMF (Frobenius)]{\label{fig:lee}\includegraphics[width=\columnwidth]{figs/basis_frobenius_row.pdf}}\vspace{-.5em} \subfigure[Direct NMF (KL)]{\label{fig:raw}\includegraphics[width=\columnwidth]{figs/raw_counts_NMF.pdf}}\vspace{-.5em} \subfigure[LGCP-PCA]{\label{fig:pca}\includegraphics[width=\columnwidth]{figs/pca_basis_row.pdf}}\vspace{-.5em} \caption{Visual comparison of the basis resulting from various approaches to dimensionality reduction. The top two bases result from LGCP-NMF with the KL (top) and Frobenius (second) loss functions. The third row is the NMF basis applied to raw counts (no spatial continuity). The bottom row is the result of PCA applied to the LGCP intensity functions. LGCP-PCA fundamentally differs due to the negativity of the basis surfaces. Best viewed in color. } \vspace{-1em} \label{fig:nmf-methods} \end{figure} \afterpage{\newpage} \subsection{Alternative Approaches} With the goal of discovering the shared structure among the collection of point processes, we can proceed in a few alternative directions. For instance, one could hand-select a spatial basis and directly fit weights for each individual point process, modeling the intensity as a weighted combination of these bases. However, this leads to multiple restrictions: firstly, choosing the spatial bases to cover the court is a highly subjective task (though, there are situations where it would be desirable to have such control); secondly, these bases are unlikely to match the natural symmetries of the basketball court. In contrast, modeling the intensities with LGCP-NMF uncovers the natural symmetries of the game without user guidance. Another approach would be to directly factorize the raw shot count matrix $\mathbf X$. 
However, this method ignores spatial information, and essentially models the intensity surface as a set of $V$ independent parameters. Empirically, this method yields a poorer, more degenerate basis, which can be seen in Figure~\ref{fig:raw}. Furthermore, this is far less numerically stable, and jitter must be added to entries of $\mathbf \Lambda$ for convergence. Finally, another reasonable approach would apply PCA directly to the discretized LGCP intensity matrix $\mathbf \Lambda$, though as Figure \ref{fig:pca} demonstrates, the resulting mixed-sign decomposition leads to an unintuitive and visually uninterpretable basis \section{Results} \label{sec:results} We graphically depict our point process data, LGCP representation, and LGCP-NMF reconstruction in Figure \ref{fig:first} for ${K=10}$. There is wide variation in shot selection among NBA players - some shooters specialize in certain types of shots, whereas others will shoot from many locations on the court. Our method discovers basis vectors that correspond to visually interpretable shot types. Similar to the parts-based decomposition of human faces that NMF discovers in \citet{lee1999learning}, LGCP-NMF discovers a shots-based decomposition of NBA players. Setting ${K=10}$ and using the KL-based loss function, we display the resulting basis vectors in Figure~\ref{fig:basis}. One basis corresponds to corner three-point shots \ref{fig:basis1}, while another corresponds to wing three-point shots \ref{fig:basis2}, and yet another to top of the key three point shots \ref{fig:basis3}. A comparison between KL and Frobenius loss functions can be found in Figure~\ref{fig:nmf-methods}. Furthermore, the player specific basis weights provide a concise characterization of their offensive habits. The weight $w_{n,k}$ can be interpreted as the amount player $n$ takes shot type $k$, which quantifies intuitions about player behavior. Table~\ref{tab:weights} compares normalized weights between a selection of players. 
Empirically, the KL-based NMF decomposition results in a more spatially diverse basis, where the Frobenius-based decomposition focuses on the region of high intensity near the basket at the expense of the rest of the court. This can be seen by comparing Figure~\ref{fig:brunet} (KL) to Figure~\ref{fig:lee} (Frobenius). We also compare the two LGCP-NMF decompositions to the NMF decomposition done directly on the matrix of counts, $\mathbf X$. The results in Figure~\ref{fig:raw} show a set of sparse basis vectors that are spatially unstructured. And lastly, we depict the PCA decomposition of the LGCP matrix $\mathbf \Lambda$ in Figure~\ref{fig:pca}. This yields the most colorful decomposition because the basis vectors and player weights are unconstrained real numbers. This renders the basis vectors uninterpretable as intensity functions. Upon visual inspection, the corner three-point `feature' that is salient in the LGCP-NMF decompositions appears in five separate PCA vectors, some positive, some negative. This is the cancelation phenomenon that NMF avoids. We compare the fit of the low rank NMF reconstructions and the original LGCPs on held out test data in Figure~\ref{fig:test-points}. The NMF decomposition achieves superior predictive performance over the original independent LGCPs in addition to its compressed representation and interpretable basis. \begin{figure}[t!] \centering \vspace{-1.1em} \includegraphics[width=.8\columnwidth]{figs/nmf_vs_lgcp_test_ll.pdf} \vspace{-1.2em} \caption{ Average player test data log likelihoods for LGCP-NMF varying $K$ and independent LGCP. For each fold, we held out 10\% of each player's shots, fit independent LGCPs and ran NMF (using the KL-based loss function) for varying $K$. We display the average (across players) test log likelihood above. The predictive performance of our representation improves upon the high dimensional independent LGCPs, showing the importance of pooling information across players. 
} \vspace{-.7em} \label{fig:test-points} \end{figure} \section{From Shooting Frequency to Efficiency} \label{sec:application} Unadjusted field goal percentage, or the probability a player makes an attempted shot, is a statistic of interest when evaluating player value. This statistic, however, is spatially uninformed, and washes away important variation due to shooting circumstances. Leveraging the vocabulary of shot types provided by the basis vectors, we model a player's field goal percentage for each of the shot types. We decompose a player's field goal percentage into a weighted combination of $K$ basis field goal percentages, which provides a higher resolution summary of an offensive player's skills. Our aim is to estimate the probability of a made shot for each point in the offensive half court \emph{for each individual player}. \subsection{Latent variable model} For player $n$, we model each shot event as \begin{align*} k_{n,i} &\sim \text{Mult}( \bar w_{n,:} ) && \text{ shot type } \\ \color{blue}{x_{n,i}| k_{n,i}} &\sim \text{Mult}(\bar B_{k_{n,i}}) && \text{ location } \\ \color{red}{y_{n,i} | k_{n,i}} &\sim \text{Bern}( \text{logit}^{-1}( \beta_{n,k_{n,i}} ) ) && \text{ outcome } \end{align*} where $\bar B_k \equiv B_k / \sum_{k'} B_{k'}$ is the normalized basis, and the player weights $\bar w_{n,k}$ are adjusted to reflect the total mass of each unnormalized basis. NMF does not constrain each basis vector to a certain value, so the volume of each basis vector is a meaningful quantity that corresponds to how common a shot type is. We transfer this information into the weights by setting \begin{align*} \bar w_{n,k} &= w_{n,k} \sum_v B_k(v). && \text{ adjusted basis loadings } \end{align*} We do not directly observe the shot type, $k$, only the shot location $x_{n,i}$. 
Omitting $n$ and $i$ to simplify notation, we can compute the predictive distribution \begin{align*} p(y | x) &= \sum_{k=1}^K {\color{red}p(y | k)} p(k | x) \\ &= \sum_{k=1}^K {\color{red}p(y | k)} \frac{ {\color{blue}p(x | k)} p(k) }{ \sum_{k'} {\color{blue}p(x | k')} p(k') } \end{align*} where the outcome distribution is red and the location distribution is blue for clarity. The shot type decomposition given by $\mathbf B$ provides a natural way to share information between shooters to reduce the variance in our estimated surfaces. We hierarchically model player probability parameters $\beta_{n,k}$ with respect to each shot type. The prior over parameters is \begin{align*} \beta_{0,k} &\sim \mathcal{N}(0,\sigma_0^2) && \text{ diffuse global prior } \\ \sigma_{k}^2 &\sim \text{Inv-Gamma}(a, b) && \text{ basis variance } \\ \beta_{n,k} &\sim \mathcal{N}(\beta_{0,k}, \sigma_k^2) && \text{ player/basis params} \end{align*} where the global means, $\beta_{0,k}$, and variances, $\sigma_k^2$, are given diffuse priors, $\sigma_0^2 = 100$, and $a = b = 0.1$. The goal of this hierarchical prior structure is to share information between players about a particular shot type. Furthermore, it will shrink players with low sample sizes to the global mean. Some consequences of these modeling decisions will be discussed in Section~\ref{discussion}. \subsection{Inference} Gibbs sampling is performed to draw posterior samples of the $\beta$ and $\sigma^2$ parameters. To draw posterior samples of $\beta | \sigma^2, y$, we use elliptical slice sampling to exploit the normal prior placed on $\beta$. We can draw samples of $\sigma^2 | \beta, y$ directly due to conjugacy. \subsection{Results} \begin{figure}[t!]
\vspace{-1.3em} \centering \subfigure[global mean]{\label{fig:global-mean} \includegraphics[width=.3\columnwidth, page=1]{figs/ind_lvm_hier_global.pdf} } \subfigure[posterior uncertainty]{ \includegraphics[width=.3\columnwidth, page=2]{figs/ind_lvm_hier_global.pdf} } \subfigure[]{ \includegraphics[width=.3\columnwidth, page=1]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.3\columnwidth, page=121]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.3\columnwidth, page=76]{figs/ind_lvm_hier_players.pdf} } \subfigure[]{ \includegraphics[width=.3\columnwidth, page=91]{figs/ind_lvm_hier_players.pdf} } \vspace{-1em} \caption{ (a) Global efficiency surface and (b) posterior uncertainty. (c-f) Spatial efficiency for a selection of players. Red indicates the highest field goal percentage and dark blue represents the lowest. Novak and Curry are known for their 3-point shooting, whereas James and Irving are known for efficiency near the basket. } \vspace{-1.2em} \label{fig:efficiency} \end{figure} We visualize the global mean field goal percentage surface, corresponding parameters to $\beta_{0,k}$ in Figure~\ref{fig:global-mean}. Beside it, we show one standard deviation of posterior uncertainty in the mean surface. Below the global mean, we show a few examples of individual player field goal percentage surfaces. These visualizations allow us to compare players' efficiency with respect to regions of the court. For instance, our fit suggests that both Kyrie Irving and Steve Novak are below average from basis 4, the baseline jump shot, whereas Stephen Curry is an above average corner three point shooter. This is valuable information for a defending player. More details about player field goal percentage surfaces and player parameter fits are available in the supplemental material. 
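For a single player, the marginalization over the latent shot type in the efficiency model reduces to a small mixture computation. A Python sketch follows; the inputs are made-up stand-ins for the fitted $\bar w$, $\bar B$, and posterior $\beta$, and the function name is ours:

```python
import numpy as np

def shot_make_prob(v, w_bar, B_bar, beta):
    """p(made | location v) for one player, mixing per-type make
    probabilities over the posterior shot type.
    w_bar: (K,) adjusted loadings; B_bar: (K,V) row-normalized bases;
    beta: (K,) per-type logits. Illustrative inputs only."""
    lik = w_bar * B_bar[:, v]             # p(x|k) p(k), up to a constant
    p_k = lik / lik.sum()                 # posterior over shot types p(k|x)
    p_make = 1.0 / (1.0 + np.exp(-beta))  # logit^{-1}(beta_k)
    return float(p_k @ p_make)
```

The posterior over shot types weights each type's make probability, exactly as in the predictive distribution $p(y|x)$ derived above.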
\section{Discussion} \label{discussion} We have presented a method that models related point processes using a constrained matrix decomposition of independently fit intensity surfaces. Our representation provides an accurate low dimensional summary of shooting habits and an intuitive basis that corresponds to shot types recognizable by basketball fans and coaches. After visualizing this basis and discussing some of its properties as a quantification of player habits, we then used the decomposition to form interpretable estimates of spatially varying shooting efficiency. We see a few directions for future work. Due to the relationship between KL-based NMF and some fully generative latent variable models, including the probabilistic latent semantic model \citep{ding2008equivalence} and latent Dirichlet allocation \citep{blei2003latent}, we are interested in jointly modeling the point process and intensity surface decomposition in a fully generative model. This spatially informed LDA would model the non-stationary spatial structure the data exhibit within each non-negative basis surface, opening the door for a richer parameterization of offensive shooting habits that could include defensive effects. Furthermore, jointly modeling spatial field goal percentage and intensity can capture the correlation between player skill and shooting habits. Common intuition that players will take more shots from locations where they have more accuracy is missed in our treatment, yet modeling this effect may yield a more accurate characterization of a player's habits and ability. \section*{Acknowledgments} The authors would like to acknowledge the Harvard XY Hoops group, including Alex Franks, Alex D'Amour, Ryan Grossman, and Dan Cervone. We also acknowledge the HIPS lab and several referees for helpful suggestions and discussion, and STATS LLC for providing the data. To compare various NMF optimization procedures, the authors used the \texttt{R} package \texttt{NMF} \citep{r-nmf}.
\bibliographystyle{plainnat}
\section{Introduction} The current paper stems from our participation in the 2017 Machine Learning Journal (Springer) challenge on predicting outcomes of soccer matches from a range of leagues around the world (MLS challenge, in short). Details of the challenge and the data can be found in \citet{Berrar2017}. We consider two distinct modeling approaches for the task. The first approach focuses on modeling the probabilities of win, draw, or loss, using various extensions of Bradley-Terry models \citep{Bradley1952}. The second approach focuses on directly modeling the number of goals scored by each team in each match using a hierarchical Poisson log-linear model, building on the modeling frameworks in \citet{Maher1982}, \citet{Dixon1997}, \citet{Karlis2003} and \citet{Baio2010}. The performance of the various modeling approaches in predicting the outcomes of matches is assessed using a novel, context-specific framework for temporal validation that is found to deliver accurate estimates of the prediction error. The direct modeling of the outcomes using the various Bradley-Terry extensions and the modeling of match scores using the hierarchical Poisson log-linear model deliver similar performance in terms of predicting the outcome. The paper is structured as follows: Section \ref{data} briefly introduces the data, presents the necessary data-cleaning operations undertaken, and describes the various features that were extracted. Section \ref{bt} presents the various Bradley-Terry models and extensions we consider for the challenge and describes the associated estimation procedures. Section \ref{poisson} focuses on the hierarchical Poisson log-linear model and the Integrated Nested Laplace Approximations (INLA; \citealt{Rue2009}) of the posterior densities for the model parameters. Section \ref{validationsec} introduces the validation framework and the models are compared in terms of their predictive performance in Section \ref{results}. 
Section~\ref{conclusion} concludes with discussion and future directions. \section{Pre-processing and feature extraction} \label{data} \subsection{Data exploration} \label{features} The data contain matches from $52$ leagues, covering $35$ countries, for a varying number of seasons for each league. Nearly all leagues have data since 2008, with a few having data extending as far back as 2000. There are no cross-country leagues (e.g.~UEFA Champions League) or teams associated with different countries. The only way that teams move between leagues is within each country by either promotion or relegation. \begin{figure}[t] \centering \includegraphics[scale=0.5]{country-games2.pdf} \caption{Number of available matches per country in the data. } \label{fig-country} \end{figure} Figure~\ref{fig-country} shows the number of available matches for each country in the data set. England dominates the data in terms of matches recorded, with the available matches coming from 5 distinct leagues. The other highly-represented countries are Scotland with data from 4 distinct leagues, and European countries, such as Spain, Germany, Italy and France, most probably because they also have a high UEFA coefficient \citep{wiki:UEFA_coefficient}. \begin{figure}[t] \centering \includegraphics[scale=0.5]{goals-in-a-game2.pdf} \caption{The number of matches per number of goals scored by the home (dark grey) and away team (light grey). } \label{fig-goals} \end{figure} Figure~\ref{fig-goals} shows the number of matches per number of goals scored by the home (dark grey) and away (light grey) teams. Home teams appear to score more goals than away teams, with home teams having consistently higher frequencies for two or more goals and away teams having higher frequencies for no goal and one goal. Overall, home teams scored $304,918$ goals over the whole data set, whereas away teams scored $228,293$ goals. 
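These totals, along with the per-goal-count frequencies displayed in Figure~\ref{fig-goals}, are simple to recompute from a list of full-time scores. A minimal sketch with toy input (the function name and input format are ours, not part of the challenge data):

```python
from collections import Counter

def goal_summary(matches):
    """matches: list of (home_goals, away_goals) pairs.
    Returns per-side goal-count frequencies (as in Figure 2)
    and the total goals scored by home and away teams."""
    home_freq = Counter(h for h, _ in matches)
    away_freq = Counter(a for _, a in matches)
    home_total = sum(h for h, _ in matches)
    away_total = sum(a for _, a in matches)
    return home_freq, away_freq, (home_total, away_total)
```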
In Section 1 of the Supplementary Material, the trend shown in Figure~\ref{fig-goals} is also found to be present within each country, pointing towards the existence of a home advantage. \subsection{Data cleaning} Upon closer inspection of the original sequence of matches for the MLS challenge, we found and corrected the following three anomalies in the data. The complete set of matches from the 2015-2016 season of the Venezuelan league was duplicated in the data. We kept only one instance of these matches. Furthermore, $26$ matches from the 2013-2014 season of the Norwegian league were assigned the year 2014 in the date field instead of 2013. The dates for these matches were modified accordingly. Finally, one match in the 2013-2014 season of the Moroccan league (Raja Casablanca vs Maghrib de Fes) was assigned the month February in the date field instead of August. The date for this match was corrected, accordingly. \subsection{Feature extraction} The features that were extracted can be categorized into team-specific, match-specific and/or season-specific. Match-specific features were derived from the information available on each match. Season-specific features have the same value for all matches and teams in a season of a particular league, and differ only across seasons for the same league and across leagues. Table~\ref{tab-features} gives short names, descriptions, and ranges for the features that were extracted. Table~\ref{tab-artificial_features} gives an example of what values the features take for an artificial data set with observations on the first 3 matches of a season for team A playing all matches at home. The team-specific features are listed only for Team A to allow for easy tracking of their evolution. The features we extracted are proxies for a range of aspects of the game, and their choice was based on common sense and our understanding of what is important in soccer, and previous literature. 
Home (feature 1 in Table~\ref{tab-artificial_features}) can be used for including a home advantage in the models; newly promoted (feature 2 in Table~\ref{tab-artificial_features}) is used to account for the fact that a newly promoted team is typically weaker than the competition; days since previous match (feature 3 in Table~\ref{tab-artificial_features}) carries information regarding fatigue of the players and the team, overall; form (feature 4 in Table~\ref{tab-artificial_features}) is a proxy for whether a team is doing better or worse during a particular period in time compared to its general strength; matches played (feature 5 in Table~\ref{tab-artificial_features}) determines how far into the season a game occurs; points tally, goal difference, and points per match (features 6, 7 and 10 in Table~\ref{tab-artificial_features}) are measures of how well a team is doing so far in the season; goals scored per match and goals conceded per match (features 8 and 9 in Table~\ref{tab-artificial_features}) are measures of a team's attacking and defensive ability, respectively; previous season points tally and previous season goal difference (features 11 and 12 in Table~\ref{tab-artificial_features}) are measures of how well a team performed in the previous season, which can be a useful indicator of how well a team will perform in the early stages of a season when other features such as points tally do not carry much information; finally, team rankings (feature 13 in Table~\ref{tab-artificial_features}) refers to a variety of measures that rank teams based on their performance in previous matches, as detailed in Section 2 of the Supplementary Material. In order to avoid missing data in the features we extracted, we made the following conventions. The value of form for the first match of the season for each team was drawn from a Uniform distribution in $(0,1)$. 
The form for the second and third match were a third of the points in the first match, and a sixth of the total points in the first two matches, respectively. Days since previous match was left unspecified for the very first match of the team in the data. If the team was playing its first season then we treated it as being newly promoted. The previous season points tally was set to $15$ for newly promoted teams and to $65$ for newly relegated teams, and the previous season goal difference was set to $-35$ for newly promoted teams and $35$ for newly relegated teams. These values were set in an ad-hoc manner prior to estimation and validation, based on our sense and experience of what is a small or large value for the corresponding features. In principle, the choice of these values could be made more formally by minimizing a criterion of predictive quality, but we did not pursue this as it would complicate the estimation-prediction workflow described later in the paper and increase computational effort significantly without any guarantee of improving the predictive quality of the models. 
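As an illustration, the running team-specific features and the form conventions just described can be computed as follows. This is a Python sketch with names of our choosing; on the artificial Team A results of Table~\ref{tab-artificial_features} it reproduces the form, points tally, and points-per-match rows:

```python
import random

def points(gf, ga):
    """League points from a single result: 3 for a win, 1 for a draw."""
    return 3 if gf > ga else (1 if gf == ga else 0)

def season_features(results, seed=0):
    """Pre-match features for one team's season.
    results: chronological list of (goals_for, goals_against).
    Form before matches 1-3 is Uniform(0,1), pts/3, and pts/6
    respectively, then a ninth of the points from the last three
    matches. Goal difference before the first match is shown as '-'
    in Table 2; here we use 0 for simplicity."""
    rng = random.Random(seed)
    feats, pts_hist = [], []
    for m, (gf, ga) in enumerate(results):
        tally = sum(pts_hist)
        gd = sum(f - a for f, a in results[:m])
        if m == 0:
            form = rng.random()
        elif m == 1:
            form = pts_hist[0] / 3.0
        elif m == 2:
            form = sum(pts_hist) / 6.0
        else:
            form = sum(pts_hist[-3:]) / 9.0
        feats.append({"matches_played": m, "points_tally": tally,
                      "goal_difference": gd, "form": form,
                      "points_per_match": tally / m if m else 0.0})
        pts_hist.append(points(gf, ga))
    return feats
```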
\def\arraystretch{1.5} \begin{table*}[t] \caption{Short names, descriptions, and ranges for the features that were extracted.} \scriptsize \begin{center} \begin{tabular}{llll} \toprule Number & Short name & Description & Range \\ \midrule \multicolumn{4}{l}{Team-specific features} \\ \midrule 1 & Home & \noindent\parbox[t]{7cm}{$1$ if the team is playing at home, and $0$ otherwise} & $\{0, 1\}$ \\ 2 & Newly promoted & \noindent\parbox[t]{7cm}{$1$ if the team is newly promoted to the league for the current season, and $0$ otherwise} & $\{0, 1\}$ \\ 3 & Days since previous match & \noindent\parbox[t]{7cm}{number of days elapsed since the previous match of the team} & $\{1, 2, \ldots\}$ \\ 4 & Form & \noindent\parbox[t]{7cm}{a ninth of the total points gained in the last three matches in the current season} & $(0, 1)$ \\ 5 & Matches played & \noindent\parbox[t]{7cm}{number of matches played in the current season and before the current match} & $\{1, 2, \ldots\}$ \\ 6 & Points tally & \noindent\parbox[t]{7cm}{the points accumulated during the current season and before the current match} & $\{0, 1, \ldots\}$ \\ 7 & Goal difference & \noindent\parbox[t]{7cm}{the goals that a team has scored minus the goals that it has conceded over the current season and before the current match} & $\{\ldots, -1, 0, 1, \ldots\}$ \\ 8 & Goals scored per match & \noindent\parbox[t]{7cm}{total goals scored per match played over the current season and before the current match} & $\Re^{+}$ \\ 9 & Goals conceded per match & \noindent\parbox[t]{7cm}{total goals conceded per match over the current season and before the current match} & $\Re^{+}$ \\ 10 & Points per match & \noindent\parbox[t]{7cm}{total points gained per match played over the current season and before the current match} & $[0, 3]$ \\ 11 & Previous season points tally & \noindent\parbox[t]{7cm}{total points accumulated by the team in the previous season of the same league} & $\{0, 1, \ldots\}$ \\ 12 & Previous season goal difference &
\noindent\parbox[t]{7cm}{total goals scored minus total goals conceded for each team in the previous season of the same league} & $\{\ldots, -1, 0, 1, \ldots\}$ \\ 13 & Team rankings & \noindent\parbox[t]{7cm}{a variety of team rankings, based on historical observations; see Section~2 of the Supplementary Material} & $\Re$ \\ \midrule \multicolumn{4}{l}{Season-specific features} \\ \midrule 14 & Season & \noindent\parbox[t]{7cm}{the league season in which each match is played} & labels \\ 15 & Season window & \noindent\parbox[t]{7cm}{time period in calendar months of the league season} & labels \\ \midrule \multicolumn{4}{l}{Match-specific features} \\ \midrule 16 & Quarter & \noindent\parbox[t]{7cm}{quarter of the calendar year based on the match date} & labels \\ \bottomrule \end{tabular} \end{center} \label{tab-features} \end{table*} \def\arraystretch{1} \begin{table*}[t] \caption{Feature values for artificial data showing the first 3 matches of a season with team A playing all matches at home.} \scriptsize \begin{center} \begin{tabular}[t]{llll} \toprule & Match 1 & Match 2 & Match 3 \\ \cmidrule{2-4} \multicolumn{4}{l}{Match attributes and outcomes} \\ \midrule League & Country1 & Country1 & Country1 \\ Date & 2033-08-18 & 2033-08-21 & 2033-08-26 \\ Home team & team A & team A & team A \\ Away team & team B & team C & team D \\ Home score & 2 & 2 & 0 \\ Away score & 0 & 1 & 0 \\ \midrule \multicolumn{4}{l}{Team-specific features (Team A)} \\ \midrule Newly promoted & 0 & 0 & 0 \\ Days since previous match & 91 & 3 & 5 \\ Form & 0.5233 & 1 & 1 \\ Matches played & 0 & 1 & 2 \\ Points tally & 0 & 3 & 6 \\ Goal difference & - & 2 & 3 \\ Goals scored per match & 0 & 2 & 2 \\ Goals conceded per match & 0 & 0 & 0.5 \\ Points per match & 0 & 3 & 3 \\ Previous season points tally & 72 & 72 & 72 \\ Previous season goal difference & 45 & 45 & 45 \\ \midrule \multicolumn{4}{l}{Season-specific features} \\ \midrule Season & 33-34 & 33-34 & 33-34 \\ Season window & August-May &
August-May & August-May \\ \midrule \multicolumn{4}{l}{Match-specific features} \\ \midrule Quarter & 3 & 3 & 3 \\ \hline \end{tabular} \end{center} \label{tab-artificial_features} \end{table*} \section{Modeling outcomes} \label{bt} \subsection{Bradley-Terry models and extensions} The Bradley-Terry model \citep{Bradley1952} is commonly used to model paired comparisons, which often arise in competitive sport. For a binary win/loss outcome, let \[ y_{ijt} = \left\{ \begin{array}{cc} 1\,, & \text{ if team } i \text{ beats team } j \text{ at time } t \\ 0\,, & \text{ if team } j \text{ beats team } i \text{ at time } t \end{array} \right. \qquad (i,j =1,\dots,n; \ i\neq j; \ t \in \Re^+) \,, \] where $n$ is the number of teams present in the data. The Bradley-Terry model assumes that \begin{equation*} p(y_{ijt}=1) = \frac{\pi_i}{\pi_i+\pi_j} \,, \end{equation*} where $\pi_i=\exp (\lambda_i)$, and $\lambda_i$ is understood as the ``strength'' of team $i$. In the original Bradley-Terry formulation, $\lambda_i$ does not vary with time. For the purposes of the MLS challenge prediction task, we consider extensions of the original Bradley-Terry formulation where we allow $\lambda_i$ to depend on a $p$-vector of time-dependent features $\bm{x}_{it}$ for team $i$ at time $t$ as $\lambda_{it} = f(\bm{x}_{it})$ for some function $f(\cdot)$. Bradley-Terry models can also be equivalently written as linking the log-odds of a team winning to the difference in strength of the two teams competing. Some of the extensions below directly specify that difference. \subsubsection*{BL: Baseline} The simplest specification of all assumes that \begin{equation} \label{bl} \lambda_{it} = \beta h_{it}\ , \end{equation} where $h_{it} = 1$ if team $i$ is playing at home at time $t$, and $h_{it} = 0$ otherwise. The only parameter to estimate with this specification is $\beta$, which can be understood as the difference in strength when the team plays at home. 
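The basic model can be illustrated directly: with strengths on the log scale, the probability that team $i$ beats team $j$ is a logistic function of $\lambda_i - \lambda_j$. A minimal sketch (not the estimation code used for the challenge):

```python
import math

def bt_prob(lambda_i, lambda_j):
    """Basic Bradley-Terry: p(team i beats team j), with strengths
    on the log scale so that pi_i = exp(lambda_i) and
    p = pi_i / (pi_i + pi_j) = logistic(lambda_i - lambda_j)."""
    return 1.0 / (1.0 + math.exp(-(lambda_i - lambda_j)))
```

The win probabilities of the two teams sum to one, and only the strength difference matters, which is also why the strengths are identifiable only up to an additive constant.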
We use this model to establish a baseline to improve upon for the prediction task. \subsubsection*{CS: Constant strengths} This specification corresponds to the standard Bradley-Terry model with a home-field advantage, under which \begin{equation} \label{cs} \lambda_{it} = \alpha_i + \beta h_{it} \,. \end{equation} The above specification involves $n + 1$ parameters, where $n$ is the number of teams. The parameter $\alpha_i$ represents the time-invariant strength of the $i$th team. \subsubsection*{LF: Linear with features} Suppose now that we are given a vector of features $\bm{x}_{it}$ associated with team $i$ at time $t$. A simple way to model the team strengths $\lambda_{it}$ is to assume that they are a linear combination of the features. Hence, in this model we have \begin{equation} \label{lf} \lambda_{it}= \sum_{k=1}^{p} \beta_k x_{itk}\,, \end{equation} where $x_{itk}$ is the $k$th element of the feature vector $\bm{x}_{it}$. Note that the coefficients in the linear combination are shared between all teams, and so the number of parameters to estimate is $p$, where $p$ is the dimension of the feature vector. This specification is similar to the one implemented in the \texttt{R} package \texttt{BradleyTerry} \citep{Firth2005}, but without the team specific random effects. \subsubsection*{TVC: Time-varying coefficients} Some of the features we consider, like points tally season (feature 6 in Table~\ref{tab-features}) vary during the season. Ignoring any special circumstances such as teams being punished, the points accumulated by a team is a non-decreasing function of the number of matches the team has played. It is natural to assume that the contribution of points accumulated to the strength of a team is different at the beginning of the season than it is at the end. In order to account for such effects, the parameters for the corresponding features can be allowed to vary with the matches played. 
Specifically, the team strengths can be modeled as \begin{equation} \label{tvc} \lambda_{it}= \sum_{k \in \mathcal{V}} \gamma_k(m_{it}) x_{itk} + \sum_{k \notin \mathcal{V}} \beta_k x_{itk} \,, \end{equation} where $m_{it}$ denotes the number of matches that team $i$ has played within the current season at time $t$ and $\mathcal{V}$ denotes the set of coefficients that are allowed to vary with the matches played. The functions $\gamma_k(m_{it})$ can be modeled non-parametrically, but in the spirit of keeping the complexity low we instead set $\gamma_k(m_{it}) = \alpha_k + \beta_km_{it}$. With this specification for $\gamma_k(m_{it})$, TVC is equivalent to LF with the inclusion of an extra set of features $\{m_{it}x_{itk}\}_{k \in \mathcal{V}}$. \subsubsection*{AFD: Additive feature differences with time interactions} For the LF specification, the log-odds of team $i$ beating team $j$ is \begin{equation*} \lambda_{it} - \lambda_{jt} = \sum_{k=1}^p\beta_k(x_{itk} - x_{jtk}) \,. \end{equation*} Hence, the LF specification assumes that the difference in strength between the two teams is a linear combination of differences between the features of the teams. We can relax the assumption of linearity, and include non-linear time interactions, by instead assuming that each difference in features contributes to the difference in strengths through an arbitrary bivariate smooth function $g_k$ that depends on the feature difference and the number of matches played. We then arrive at the AFD specification, which can be written as \begin{equation} \label{afd} \lambda_{it} - \lambda_{jt} = \sum_{k \in \mathcal{V}} g_k(x_{itk} - x_{jtk}, m_{it}) + \sum_{k \notin \mathcal{V}} f_k(x_{itk} - x_{jtk}) \ , \end{equation} where for simplicity we take the number of matches played to be the number of matches played by the home team. 
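For a binary win/loss outcome, the LF specification is logistic regression on feature differences, and its negative log-likelihood can be sketched as follows. Draws, which are handled in the next subsection, are ignored here, and all names are illustrative:

```python
import numpy as np

def lf_win_prob(beta, x_home, x_away):
    """LF specification: p(home beats away) as a logistic function
    of the feature difference x_home - x_away (binary sketch)."""
    diff = np.asarray(x_home) - np.asarray(x_away)
    return 1.0 / (1.0 + np.exp(-(diff @ beta)))

def neg_log_lik(beta, X_home, X_away, y):
    """Negative Bernoulli log-likelihood over observed matches;
    y[i] = 1 if the home team won match i."""
    p = lf_win_prob(beta, X_home, X_away)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Minimizing `neg_log_lik` over $\beta$ with any standard optimizer recovers the shared coefficients; the feature vectors here would be the team-specific features of Table~\ref{tab-features}.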
\subsection{Handling draws} \label{sec-draws} The extra outcome of a draw in a soccer match can be accommodated within the Bradley-Terry formulation in two ways. The first is to treat win, loss and draw as multinomial ordered outcomes, in effect assuming that $\text{``win''} \succ \text{``draw''} \succ \text{``loss''}$, where $\succ$ denotes strong transitive preference. Then, the ordered outcomes can be modeled using cumulative link models \citep{Agresti2015} with the various strength specifications. Specifically, let \[ y_{ijt} = \begin{cases} 2\, , & \text{if team } i \text{ beats team } j \text{ at time } t \ , \\ 1\, , & \text{if team } i \text{ and } j \text{ draw} \text{ at time } t \ ,\\ 0\, , & \text{if team } j \text{ beats team } i \text{ at time } t \ . \end{cases} \] and assume that $y_{ijt}$ has \begin{equation} \label{ordinal} p(y_{ijt} \leq y ) = \frac{e^{\delta_{y} + \lambda_{it}}}{e^{\delta_y + \lambda_{it}} + e^{\lambda_{jt}}} \, , \end{equation} where $-\infty < \delta_0 \le \delta_1 < \delta_2 = \infty$, and $\delta_0, \delta_1$ are parameters to be estimated from the data. \citet{Cattelan2013} and \citet{Kiraly2017} use this approach for modeling soccer outcomes. Another possibility for handling draws is to use the \citet{Davidson1970} extension of the Bradley-Terry model, under which \begin{align*} p(y_{ijt} = 2 \,|\, y_{ijt} \ne 1) &= \frac{\pi_{it}}{\pi_{it} + \pi_{jt}}\,, \\ p(y_{ijt} = 1) &= \frac{\delta\sqrt{\pi_{it}\pi_{jt}}}{\pi_{it} + \pi_{jt} + \delta\sqrt{\pi_{it}\pi_{jt}}}\,, \\ p(y_{ijt} = 0 \,|\, y_{ijt} \ne 1) &= \frac{\pi_{jt}}{\pi_{it} + \pi_{jt}}\,, \end{align*} where $\delta$ is a parameter to be estimated from the data. \subsection{Estimation} \subsubsection*{Likelihood-based approaches} The parameters of the Bradley-Terry model extensions presented above can be estimated by maximizing the log-likelihood of the multinomial distribution.
The log-likelihood for the parameter vector $\bm{\theta}$ is \[ \ell(\bm{\theta}) = \sum_{\{i,j,t\} \in \mathcal{M}} \sum_{y} \mathbb{I}_{[y_{ijt}=y]} \log\Big(p(y_{ijt}=y)\Big)\, , \] where $\mathbb{I}_A$ takes the value $1$ if $A$ holds and $0$ otherwise, and $\mathcal{M}$ is the set of triplets $\{i,j,t\}$ corresponding to the matches whose outcomes have been observed. For estimating the functions involved in the AFD specification, we represent each $f_k$ using thin plate splines \citep{Wahba1990}, and enforce smoothness constraints on the estimate of $f_k$ by maximizing a penalized log-likelihood of the form \[ \ell^{\text{pen}}(\bm{\theta}) = \ell(\bm{\theta}) - k \bm{\theta}^\top P\bm{\theta}\, , \] where $P$ is a penalty matrix and $k$ is a tuning parameter. For penalized estimation we only consider ordinal models through the \texttt{R} package \texttt{mgcv} \citep{Wood2006}, and select $k$ by optimizing the Generalized Cross Validation criterion \citep{Golub1979}. Details on the fitting procedure for specifications like AFD and the implementation of thin plate spline regression in \texttt{mgcv} can be found in \cite{Wood2003}. The parameters of the Davidson extensions of the Bradley-Terry model are estimated by using the BFGS optimization algorithm \citep{Byrd1995} to minimize $-\ell(\bm{\theta})$. \subsubsection*{Identifiability} In the CS model, the team strengths are identifiable only up to an additive constant, because $\lambda_i - \lambda_j = (\lambda_i + d) - (\lambda_j + d)$ for any $d \in \Re$. This unidentifiability can be dealt with by setting the strength of an arbitrarily chosen team to zero. The CS model was fitted league-by-league with one identifiability constraint per league. The parameters $\delta_0$ and $\delta_1$ in (\ref{ordinal}) are identifiable only if the specification used for $\lambda_i - \lambda_j$ does not involve an intercept parameter.
An alternative is to include an intercept parameter in $\lambda_i-\lambda_j$ and fix $\delta_0$ at a chosen value. The estimated probabilities are invariant to these alternatives, and we use the latter simply because this is the default in the \texttt{mgcv} package. \subsubsection*{Other data-specific considerations} The parameters in the LF, TVC, and AFD specifications (which involve features) are shared across the leagues and matches in the data. For computational efficiency we restrict the fitting procedures to use the $20,000$ most recent matches, or fewer if fewer are available, at the time of the first match that a prediction needs to be made. The CS specification requires estimating the strength parameters directly. For computational efficiency, we estimate the strength parameters independently for each league within each country, and only consider matches that took place in the past calendar year from the date of the first match that a prediction needs to be made. \section{Modeling scores} \label{poisson} \subsection{Model structure} \label{sec-hpl} Every league consists of a number of teams $T$, playing against each other twice in a season (once at home and once away). We indicate the number of goals scored by the home and the away team in the $g$th match of the season ($g=1,\ldots,G$) as $y_{g1}$ and $y_{g2}$, respectively. The observed goal counts $y_{g1}$ and $y_{g2}$ are assumed to be realizations of conditionally independent random variables $Y_{g1}$ and $Y_{g2}$, respectively, with \begin{eqnarray*} Y_{gj} \mid \theta_{gj} \sim \mbox{Poisson}(\theta_{gj}) \ . \end{eqnarray*} The parameters $\theta_{g1}$ and $\theta_{g2}$ represent the {\em scoring intensity} in the $g$th match for the home and away team, respectively.
We assume that $\theta_{g1}$ and $\theta_{g2}$ are specified through the regression structures \begin{align} \begin{split} \eta_{g1} = \log(\theta_{g1}) = \sum_{k=1}^p \beta_k z_{g1k} + \alpha_{h_g} + \xi_{a_g} + \gamma_{h_g,\text{\textit{Sea}}_g} + \delta_{a_g,\text{\textit{Sea}}_g} \ , \\ \eta_{g2} = \log(\theta_{g2}) = \sum_{k=1}^p \beta_k z_{g2k} + \alpha_{a_g} + \xi_{h_g} + \gamma_{a_g,\text{\textit{Sea}}_g} + \delta_{h_g,\text{\textit{Sea}}_g} \ . \end{split} \label{linpred} \end{align} The indices $h_g$ and $a_g$ determine the home and away team for match $g$ respectively, with $h_g, a_g \in \{1, \ldots, T\}$. The parameters $\beta_1, \ldots, \beta_p$ represent the effects corresponding to the observed match- and team-specific features $z_{gj1},\ldots, z_{gjp}$, respectively, collected in a $G \times 2p$ matrix $\bm{Z}$. The other effects in the linear predictor $\eta_{gj}$ reflect assumptions of exchangeability across the teams involved in the matches. Specifically, $\alpha_t$ and $\xi_t$ represent the latent attacking and defensive ability of team $t$ and are assumed to be distributed as \[ \alpha_t \mid \sigma_\alpha \sim\mbox{Normal}(0,\sigma_\alpha^2) \qquad \mbox{and} \qquad \xi_t \mid \sigma_\xi \sim \mbox{Normal}(0,\sigma_\xi^2). \] We used vague log-Gamma priors on the precision parameters $\tau_\alpha=1/\sigma^2_\alpha$ and $\tau_\xi=1/\sigma^2_\xi$. 
In order to account for the time dynamics across the different seasons, we also include the latent interactions $\gamma_{ts}$ and $\delta_{ts}$ between the team-specific attacking and defensive strengths and the season $s \in \{ 1, \ldots, S \}$, which were modeled using autoregressive specifications with \[ \gamma_{t1} \mid \sigma_\varepsilon, \rho_\gamma \sim\mbox{Normal}\left(0,\sigma^2_\varepsilon(1-\rho_\gamma^2)\right), \quad \gamma_{ts}=\rho_\gamma\gamma_{t,s-1}+\varepsilon_{s}, \quad \varepsilon_s \mid \sigma_\varepsilon \sim\mbox{Normal}(0,\sigma_\varepsilon^2) \quad (s = 2, \ldots, S)\,, \] and \[\delta_{t1} \mid \sigma_\varepsilon, \rho_\delta \sim\mbox{Normal}\left(0,\sigma^2_\varepsilon(1-\rho_\delta^2)\right), \quad \delta_{ts}=\rho_\delta\delta_{t,s-1}+\varepsilon_{s}, \quad \varepsilon_s \mid \sigma_\varepsilon \sim\mbox{Normal}(0,\sigma_\varepsilon^2) \quad (s = 2, \ldots, S) \, . \] For the specification of prior distributions for the hyperparameters $\rho_\gamma, \rho_\delta, \sigma_\varepsilon$ we used the default settings of the \texttt{R-INLA} package \citep[version 17.6.20]{Lindgren2015}, which we also use to fit the model (see Subsection \ref{estpoisson}). Specifically, \texttt{R-INLA} sets vague Normal priors (centred at $0$ with large variance) on suitable transformations (e.g. log) of the hyperparameters with unbounded range. \subsection{Estimation} \label{estpoisson} The hierarchical Poisson log-linear model (HPL) of Subsection~\ref{sec-hpl} was fitted using INLA \citep{Rue2009}.
Specifically, INLA avoids time-consuming MCMC simulations by numerically approximating the posterior densities for the parameters of latent Gaussian models, which constitute a wide class of hierarchical models of the form \begin{eqnarray*} Y_i \mid \bm{\phi},\bm\psi & \sim & p(y_i\mid \bm\phi,\bm\psi) \ , \\ \bm\phi \mid \bm\psi & \sim & \mbox{Normal}\left(\bm{0},\bm{Q}^{-1}(\bm\psi)\right) \ , \\ \bm\psi & \sim & p(\bm{\psi}) \ , \end{eqnarray*} where $Y_i$ is the random variable corresponding to the observed response $y_i$, $\bm\phi$ is a set of parameters (which may have a large dimension) and $\bm\psi$ is a set of hyperparameters. The basic principle is to approximate the posterior densities for $\bm\psi$ and $\bm\phi$ using a series of nested Normal approximations. The algorithm uses numerical optimization to find the mode of the posterior, while the marginal posterior distributions are computed using numerical integration over the hyperparameters. The posterior densities for the parameters of the HPL model are computed on the available data for each league. To predict the outcome of a future match, we simulated $1000$ samples from the joint approximated predictive distribution of the number of goals $\tilde{Y}_1$, $\tilde{Y}_{2}$, scored in the future match by the home and away teams respectively, given features $\tilde{\bm{z}}_{j} = (\tilde{z}_{j1}, \ldots, \tilde{z}_{jp})^\top$. Sampling was done using the \texttt{inla.posterior.sample} method of the \texttt{R-INLA} package. The predictive distribution has a probability mass function of the form \[ p\left(\tilde{y}_{1}, \tilde{y}_{2} \mid \bm{y}_1, \bm{y}_2, \tilde{\bm{z}}_1, \tilde{\bm{z}}_2, \bm{Z} \right) = \int p\left(\tilde{y}_{1},\tilde{y}_{2} \mid \bm{\nu}, \tilde{\bm{z}}_1, \tilde{\bm{z}}_2 \right) p\left(\bm{\nu} \mid \bm{y}_1, \bm{y}_2, \bm{Z}\right)d\bm{\nu} \, , \] where the vector $\bm{\nu}$ collects all model parameters.
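The Monte Carlo step can be sketched as follows (hypothetical Python; in the paper the $1000$ joint draws are produced by \texttt{inla.posterior.sample} in R, whereas here they are faked from fixed, invented intensities purely to illustrate the computation of the outcome probabilities):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for 1000 joint posterior predictive draws of the two goal counts.
goals_home = rng.poisson(1.5, size=1000)   # hypothetical home intensity 1.5
goals_away = rng.poisson(1.1, size=1000)   # hypothetical away intensity 1.1

# Relative frequencies of home win, draw and loss across the draws.
p_win = np.mean(goals_home > goals_away)
p_draw = np.mean(goals_home == goals_away)
p_loss = np.mean(goals_home < goals_away)
```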
We then compute the relative frequencies of the events $\tilde{Y}_{1}> \tilde{Y}_{2}$, $\tilde{Y}_{1}=\tilde{Y}_{2}$, and $\tilde{Y}_{1}<\tilde{Y}_{2}$, which correspond to home win, draw, and loss respectively. \section{Validation framework} \label{validationsec} \subsection{MLS challenge} The MLS challenge consists of predicting the outcomes (win, draw, loss) of 206 soccer matches from 52 leagues that take place between 31st March 2017 and 10th April 2017. The prediction performance of each submission was assessed in terms of the average ranked probability score (see Subsection~\ref{sec-criteria}) over those matches. To predict the outcomes of these matches, the challenge participants have access to over 200,000 matches up to and including the 21st March 2017, which can be used to train a classifier. In order to guide the choice of the model that is best suited to make the final predictions, we designed a validation framework that emulates the requirements of the MLS Challenge. We evaluated the models in terms of the quality of future predictions, i.e.~predictions about matches that happen after the matches used for training. In particular, we estimated the model parameters using data from the period before 1st April of each available calendar year in the data, and examined the quality of predictions in the period between 1st and 7th April of that year. For 2017, we estimated the model parameters using data from the period before 14th March 2017, and examined the quality of predictions in the period between 14th and 21st March 2017. Figure~\ref{validation} is a pictorial representation of the validation framework, illustrating the sequence of experiments and the duration of their corresponding training and validation periods. \begin{figure}[t] \centering \includegraphics[scale=0.7]{valfrwk.pdf} \caption{The sequence of experiments that constitute the validation framework, visualizing their corresponding training and prediction periods. 
\label{validation}} \end{figure} \subsection{Validation criteria} \label{sec-criteria} The main predictive criterion we used in the validation framework is the ranked probability score, which is also the criterion that was used to determine the outcome of the challenge. Classification accuracy was also computed. \subsubsection*{Ranked probability score} Let $R$ be the number of possible outcomes (e.g.~$R = 3$ in soccer) and $\bm{p}$ be the $R$-vector of predicted probabilities with $j$-th component $p_j \in [0,1]$ and $p_1 + \ldots + p_R = 1$. Suppose that the observed outcomes are encoded in an $R$-vector $\bm{a}$ with $j$-th component $a_j \in \{0,1\}$ and $a_1 + \ldots + a_R = 1$. The ranked probability score is defined as \begin{equation} {\rm RPS} = \frac{1}{R-1} \sum_{i = 1}^{R-1}\left\{\sum_{j=1}^i\left(p_j - a_j\right)\right\}^2 \, . \end{equation} The ranked probability score was introduced by \citet{Epstein1969} \citep[see also,][for a general review of scoring rules]{Gneiting2007} and is a strictly proper probabilistic scoring rule, in the sense that the true odds minimize its expected value \citep{Murphy1969}. \subsubsection*{Classification accuracy} Classification accuracy measures how often the classifier makes the correct prediction, i.e. how many times the outcome with the maximum estimated probability of occurrence actually occurs.
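Both criteria are simple to compute; the following is a minimal sketch (in Python, for illustration):

```python
import numpy as np

def ranked_probability_score(p, a):
    """RPS for a single match, given the R-vector of predicted
    probabilities p and the one-hot vector of observed outcomes a."""
    p, a = np.asarray(p, float), np.asarray(a, float)
    R = len(p)
    partial = np.cumsum(p - a)[:-1]          # inner sums for i = 1, ..., R - 1
    return np.sum(partial ** 2) / (R - 1)

def classification_accuracy(p, a):
    """1 if the outcome with maximum predicted probability occurred."""
    return int(np.argmax(p) == np.argmax(a))
```

For instance, predicted probabilities $(0.8, 0.2, 0)$ against observed outcome $(1, 0, 0)$ give ${\rm RPS} = \{(0.8 - 1)^2 + (1 - 1)^2\}/2 = 0.02$.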
\begin{center} \begin{table*}[ht] \caption{Illustration of the calculation of the ranked probability score and classification accuracy on artificial data.} \hfill{} {\footnotesize \begin{tabular}{lllllllllll} \hline \multicolumn{3}{l}{Observed outcome} & \multicolumn{3}{l}{Predicted probabilities} & \multicolumn{3}{l}{Predicted outcome} & \multirow{2}{*}{RPS} & \multirow{2}{*}{Accuracy} \\ $a_1$ & $a_2$ & $a_3$ & $p_1$ & $p_2$ & $p_3$ & $o_1$ & $o_2$ & $o_3$ & & \\ \hline 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0.5 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0.8 & 0.2 & 0 & 1 & 0 & 0 & 0.02 & 1 \\ 0 & 1 & 0 & 0.33 & 0.33 & 0.34 & 0 & 0 & 1 & 0.11 & 0 \\ \hline \end{tabular}} \hfill{} \label{rpsexample} \end{table*} \end{center} Table~\ref{rpsexample} illustrates the calculations leading to the ranked probability score and classification accuracy for several combinations of $\bm{p}$ and $\bm{a}$. The left-most group of three columns gives the observed outcomes, the next group gives the predicted outcome probabilities, and the third gives the predicted outcomes using maximum probability allocation. The two right-most columns give the ranked probability scores and classification accuracies. As shown, a ranked probability score of zero indicates a perfect prediction (minimum error) and a ranked probability score of one indicates a completely wrong prediction (maximum error). The ranked probability score and classification accuracy for a particular experiment in the validation framework are computed by averaging their respective values over the matches in the prediction set. The uncertainty in the estimates from each experiment is quantified using leave-one-match-out jackknife \citep{efron1982}, as detailed in step~9 of Algorithm~\ref{validationalgo}. \subsection{Meta-analysis} The proposed validation framework consists of $K = 17$ experiments, one for each calendar year in the data.
Each experiment results in pairs of observations $(s_i, \hat{\sigma_i}^2)$, where $s_i$ is the ranked probability score or classification accuracy from the $i$th experiment, and $\hat{\sigma_i}^2$ is the associated jackknife estimate of its variance $(i = 1, \ldots , K)$. We synthesized the results of the experiments using meta-analysis \citep{DerSimonian1986}. Specifically, we make the working assumptions that the summary variances $\hat{\sigma_i}^2$ are estimated well enough to be considered as known, and that $s_1, \ldots, s_K$ are realizations of random variables $S_1, \ldots, S_K$, respectively, which are independent conditionally on independent random effects $U_1, \ldots, U_K$, with \[ {S_i \mid U_i} \sim \mbox{Normal} (\alpha + U_i, \hat{\sigma_i}^2) \, , \] and \[ U_i \sim \mbox{Normal} (0, \tau^2) \, . \] The parameter $\alpha$ is understood here as the overall ranked probability score or classification accuracy, after accounting for the heterogeneity between the experiments. The maximum likelihood estimate of the overall ranked probability score or classification accuracy is then the weighted average \[ \hat{\alpha} = \frac{\sum w_i s_i}{\sum w_i} \ , \] where $w_i = (\hat{\sigma_i}^2 + \hat{\tau}^2)^{-1}$ and $\hat{\tau}^2$ is the maximum likelihood estimate of $\tau^2$. The estimated standard error for the estimator of the overall score $\hat\alpha$ can be computed using the square root of the inverse Fisher information about $\alpha$, which ends up being $(\sum_{i = 1}^K w_i)^{-1/2}$. The assumptions of the random-effects meta-analysis model (independence, normality and fixed variances) are all subject to direct criticism for the validation framework depicted in Figure~\ref{validation} and the criteria we consider; for example, the training and validation sets defined in the sequence of experiments in Figure~\ref{validation} are overlapping and ordered in time, so the summaries resulting from the experiments are generally correlated.
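The pooling computation can be sketched as follows (hypothetical Python implementing the maximum likelihood fixed-point iteration for $\alpha$ and $\tau^2$; in practice an off-the-shelf routine, e.g. from the \texttt{metafor} R package, could be used instead):

```python
import numpy as np

def random_effects_ml(s, var, n_iter=200):
    """ML estimates for S_i ~ Normal(alpha, var_i + tau^2), var_i known.
    Returns (alpha_hat, tau2_hat, se_alpha), with se_alpha = (sum w_i)^{-1/2}."""
    s, var = np.asarray(s, float), np.asarray(var, float)
    tau2 = np.var(s)                              # crude starting value
    for _ in range(n_iter):                       # fixed-point iteration on the
        w = 1.0 / (var + tau2)                    # ML score equations
        alpha = np.sum(w * s) / np.sum(w)
        tau2 = max(0.0, np.sum(w**2 * ((s - alpha)**2 - var)) / np.sum(w**2))
    w = 1.0 / (var + tau2)
    alpha = np.sum(w * s) / np.sum(w)
    return alpha, tau2, float(np.sum(w)) ** -0.5
```

When the observed scores are more homogeneous than their known variances suggest, the heterogeneity estimate collapses to $\hat\tau^2 = 0$ and $\hat\alpha$ reduces to the fixed-effect weighted average.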
We proceed under the assumption that these departures are not severe enough to influence inference and conclusions about $\alpha$. \subsection{Implementation} Algorithm~\ref{validationalgo} is an implementation of the validation framework in pseudo-code. Each model is expected to have a training method which trains the model on data, and a prediction method which returns predicted outcome probabilities for the prediction set. We refer to these methods as {\tt train} and {\tt predict} in the pseudo-code. \begin{algorithm} [t!] \caption{Pseudo-code for the validation framework} \label{validationalgo} \footnotesize \begin{algorithmic}[1] \Require{ \Statex $\bm{x}_1, \ldots , \bm{x}_G$ \Comment{feature vectors for all $G$ matches in the data set} \Statex $d_1 \leq \ldots \leq d_G$ \Comment{$d_g$ is the match date of match $g \in \{1,\ldots,G\}$} \Statex $\bm{o}_1, \ldots , \bm{o}_G$ \Comment{match outcomes} \Statex train: $\{ \bm{x}_g, \bm{o}_g : g \in A\} \to f(\cdot) $ \Comment{Training algorithm} \Statex predict: $\{ \bm{x}_g : g \in B\}, f(\cdot) \to \{\bar{\bm{o}}_g: g \in B\}$ \Comment{Prediction algorithm} \Statex criterion: $\{ \bm{o}_g, \bar{\bm{o}}_g : g \in B \} \to \{v_g: g \in B\}$ \Comment{observation-wise criterion values} \Statex $D_1, \ldots, D_{T}$ \Comment{Cut-off dates for training for experiments} \Statex meta-analysis: $\{ s_i, \hat\sigma_i^2 : i \in \{1, \ldots, T\}\} \to \hat\alpha $ \Comment{Meta-analysis algorithm} } \Statex \Ensure{ $\hat\alpha$ \Comment{Overall validation metric} } \Statex \For{$i \gets 1 \textrm{ to } T$} \Let{$A$}{$\{g: d_g \le D_i\}$} \Let{$B$}{$\{g: D_i < d_g \le D_i + 10\text{ days}\}$} \Let{$n_B$}{$|B|$} \Let{$f(\cdot)$}{train($\{ \bm{x}_g, \bm{o}_g : g \in A \}$)} \Comment{fit the model} \Let{$\{ \bar{\bm{o}}_g: g\in B\}$}{predict($\{\bm{x}_g: g \in B\}, f(\cdot)$)} \Comment{get predictions} \Let{$\{v_g: g \in B\}$}{criterion$(\{\bm{o}_g, \bar{\bm{o}}_g: g \in B\})$} \Let{$s_i$}{$\frac{1}{n_B} \sum_{g \in B}
v_g$} \Let{$\hat\sigma_i^2$}{$\frac{n_B - 1}{n_B}\sum_{g \in B} \left( \frac{\sum_{h\in B \setminus \{g\}} v_h}{n_B - 1} - s_i \right)^2$} \EndFor \Statex \Let{$\hat\alpha$}{meta-analysis($\{s_i, \hat\sigma_i^2: i \in \{1, \ldots, T\}\}$)} \end{algorithmic} \end{algorithm} \section{Results} \label{results} In this section we compare the predictive performance of the various models we implemented as measured by the validation framework described in Section \ref{validationsec}. Table~\ref{modref} gives the details of each model in terms of features used, the handling of draws (ordinal and Davidson, as in Subsection~\ref{sec-draws}), the distribution whose parameters are modeled, and the estimation procedure that has been used. The sets of features that were used in the LF, TVC, AFD and HPL specifications in Table~\ref{modref} resulted from ad-hoc experimentation with different combinations of features in the LF specification. All instances of feature 13 refer to the least squares ordinal rank (see Subsection 2.5 of the supplementary material). The features used in the HPL specification in (\ref{linpred}) have been chosen prior to fitting to be home and newly promoted (features 1 and 2 in Table~\ref{tab-features}), the difference in form and points tally (features 4 and 6 in Table~\ref{tab-features}) between the two teams competing in match $g$, and season and quarter (features 15 and 16 in Table~\ref{tab-features}) for the season in which match $g$ takes place. \begin{table*}[ht] \caption{Description of each model in Section~\ref{bt} and Section~\ref{poisson} in terms of features used, the handling of draws, the distribution whose parameters are modeled, and the estimation procedure that was used. The suffix $(t)$ indicates features with coefficients varying with matches played (feature 5 in Table~\ref{tab-features}). The model indicated by $\dagger$ is the one we used to compute the probabilities for the submission to the MLS challenge.
The acronyms are as follows: BL: Baseline (home advantage); CS: Bradley-Terry with constant strengths; LF: Bradley-Terry with linear features; TVC: Bradley-Terry with time-varying coefficients; AFD: Bradley-Terry with additive feature differences and time interactions; HPL: Hierarchical Poisson log-linear model.} \vspace{0.2cm} \centering \footnotesize \begin{tabular}{rrrlr} \hline Model & Draws & Features & Distribution & Estimation \\ \hline BL (\ref{bl}) & Davidson & 1 & Multinomial & ML \\ BL (\ref{bl}) & Ordinal & 1 & Multinomial & ML \\ CS (\ref{cs}) & Davidson & 1 & Multinomial & ML \\ CS (\ref{cs}) & Ordinal & 1 & Multinomial & ML \\ LF (\ref{lf}) & Davidson & 1, 6, 7, 12, 13 & Multinomial & ML \\ LF (\ref{lf}) & Ordinal & 1, 6, 7, 12, 13 & Multinomial & ML \\ TVC (\ref{tvc}) & Davidson & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & ML \\ TVC (\ref{tvc}) & Ordinal & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & ML \\ AFD (\ref{afd}) & Davidson & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & MPL \\ HPL (\ref{linpred}) & & 1, 2, 4, 6, 15, 16 & Poisson & INLA \\ ($\dagger$) TVC (\ref{tvc}) & Ordinal & 1, 2, 3, 4, 6$(t)$, 7$(t)$, 11$(t)$ & Multinomial & ML \\ \hline \end{tabular} \label{modref} \end{table*} For each of the models in Table~\ref{modref}, Table~\ref{resultstab} presents the ranked probability score and classification accuracy as estimated from the validation framework in Algorithm~\ref{validationalgo}, and as calculated for the matches in the test set for the challenge. The results in Table~\ref{resultstab} are indicative of the good properties of the validation framework of Section~\ref{validationsec} in accurately estimating the performance of the classifier on unseen data. Specifically, and excluding the baseline model, the sample correlation between the overall ranked probability score and the average ranked probability score from the matches on the test set is $0.973$.
The classification accuracy seems to be underestimated by the validation framework. The TVC model that is indicated by $\dagger$ in Table~\ref{resultstab} is the model we used to compute the probabilities for our submission to the MLS challenge. Figure \ref{coefplots} shows the estimated time-varying coefficients for the TVC model. The remaining parameter estimates are $0.0410$ for the coefficient of form, $-0.0001$ for the coefficient of days since previous match, and $0.0386$ for the coefficient of newly promoted. Of all the features included, only goal difference and points tally last season had coefficients for which we found evidence of difference from zero when accounting for all other parameters in the model (the $p$-values from individual Wald tests are both less than $0.001$). After the completion of the MLS challenge we explored the potential of new models and achieved even smaller ranked probability scores than the one obtained from the TVC model. In particular, the best performing model is the HPL model in Subsection~\ref{sec-hpl} (starred in Table~\ref{resultstab}), followed by the AFD model, which achieves a marginally worse ranked probability score. It should be noted that the two LF models are simpler models that achieve performance close to that of HPL and AFD, without the inclusion of random effects, time-varying coefficients, or any non-parametric specifications. The direct comparison between the ordinal and Davidson extensions of Bradley-Terry type models indicates that the differences tend to be small, with the Davidson extensions appearing to perform better. \begin{figure}[t] \vspace{-0.5cm} \centering \includegraphics[scale=0.6]{coefs.pdf} \caption{Plots of the time-varying coefficients in the TVC model that is indicated by $\dagger$ in Table~\ref{resultstab}, which is the model we used to compute the probabilities for our submission to the MLS challenge.
} \label{coefplots} \end{figure} We also tested the performance of HPL in terms of predicting actual scores of matches using the validation framework, comparing to a baseline method that always predicts the average goals scored by home and away teams respectively in the training data it receives. Using root mean square error as an evaluation metric, HPL achieved a score of 1.0011 with estimated standard error 0.0077 compared to the baseline which achieved a score of 1.0331 with estimated standard error 0.0083. \begin{table}[ht] \caption{Ranked probability score and classification accuracy for the models in Table~\ref{modref}, as estimated from the validation framework of Section~\ref{validationsec} (standard errors are in parentheses) and from the matches in the test set of the challenge. The model indicated by $\dagger$ is the one we used to compute the probabilities for the submission to the MLS challenge, while the one indicated by $*$ is the one that achieves the lowest estimated ranked probability score.} \label{resultstab} \centering \footnotesize \begin{tabular}{rrrrrrrrr} \toprule & & & \multicolumn{3}{c}{Ranked probability score} & \multicolumn{3}{c}{Accuracy} \\ \cmidrule{3-8} & Model & Draws & \multicolumn{2}{c}{Validation} & Test & \multicolumn{2}{c}{Validation} & Test \\ \midrule & BL & Davidson & 0.2242 & (0.0024) & 0.2261 & 0.4472 & (0.0067) & 0.4515 \\ & BL & Ordinal & 0.2242 & (0.0024) & 0.2261 & 0.4472 & (0.0067) & 0.4515 \\ & CS & Davidson & 0.2112 & (0.0028) & 0.2128 & 0.4829 & (0.0073) & 0.5194 \\ & CS & Ordinal & 0.2114 & (0.0028) & 0.2129 & 0.4779 & (0.0074) & 0.4951 \\ & LF & Davidson & 0.2088 & (0.0026) & 0.2080 & 0.4849 & (0.0068) & 0.5049 \\ & LF & Ordinal & 0.2088 & (0.0026) & 0.2084 & 0.4847 & (0.0068) & 0.5146 \\ & TVC & Davidson & 0.2081 & (0.0026) & 0.2080 & 0.4898 & (0.0068) & 0.5049 \\ & TVC & Ordinal & 0.2083 & (0.0025) & 0.2080 & 0.4860 & (0.0068) & 0.5097 \\ & AFD & Ordinal & 0.2079 & (0.0026) & 0.2061 & 0.4837 & (0.0068) & 
0.5194 \\ $*$ & HPL & & 0.2073 & (0.0025) & 0.2047 & 0.4832 & (0.0067) & 0.5485 \\ $\dagger$ & TVC & Ordinal & 0.2085 & (0.0025) & 0.2087 & 0.4865 & (0.0068) & 0.5388 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions and discussion} \label{conclusion} We compared the performance of various extensions of Bradley-Terry models and a hierarchical log-linear Poisson model for the prediction of outcomes of soccer matches. The best performing Bradley-Terry model and the hierarchical log-linear Poisson model delivered similar performance, with the latter doing marginally better. Amongst the Bradley-Terry specifications, the best performing one is AFD, which models strength differences through a semi-parametric specification involving general smooth bivariate functions of features and season time. Similar but lower predictive performance was achieved by the Bradley-Terry specification that models team strength in terms of linear functions of season time. Overall, the inclusion of features delivered better predictive performance than the simpler Bradley-Terry specifications. In effect, information is gained by relaxing the assumption that each team has constant strength over the season and across feature values. The fact that the models with time-varying components performed best within the Bradley-Terry class of models indicates that enriching models with time-varying specifications can deliver substantial improvements in the prediction of soccer outcomes. All models considered in this paper have been evaluated using a novel, context-specific validation framework that accounts for the temporal dimension in the data and tests the methods under gradually increasing information for the training. The resulting experiments are then pooled together using meta-analysis in order to account for the differences in the uncertainty of the validation criterion values by weighting them accordingly.
The meta-analysis model we employed operates under the working assumption of independence between the estimated validation criterion values from each experiment. This is at best a crude assumption in cases like the above, where data for training may be shared between experiments. Furthermore, the validation framework was designed to explicitly estimate the performance of each method only for a pre-specified window of time in each league, which we have set close to the window where the MLS challenge submissions were being evaluated. As a result, the conclusions we present are not generalizable beyond the specific time window that was considered. Despite these shortcomings, the results in Table~\ref{resultstab} show that the validation framework delivered accurate estimates of the actual predictive performance of each method, as the estimated average predictive performances and the actual performances on the test set (containing matches between 31st March and 10th April, 2017) were very close. The main focus of this paper is to provide a workflow for predicting soccer outcomes, and to propose various alternative models for the task. Additional feature engineering and selection, and alternative fitting strategies can potentially increase performance and are worth pursuing. For example, ensemble methods aimed at improving predictive accuracy like calibration, boosting, bagging, or model averaging \cite[for an overview, see][]{Dietterich2000} could be utilized to boost the performance of the classifiers that were trained in this paper. A challenging aspect of modeling soccer outcomes is devising ways to borrow information across different leagues. The two best performing models (HPL and AFD) are extremes in this respect; HPL is trained on each league separately while AFD is trained on all leagues simultaneously, ignoring the league that teams belong to.
Further improvements in predictive quality can potentially be achieved by using a hierarchical model that takes into account the league that each team belongs to but also allows for sharing of information between leagues. \section{Supplementary material} The supplementary material document contains two sections. Section 1 provides plots of the number of matches per number of goals scored by the home and away teams, by country, for a variety of arbitrarily chosen countries. These plots provide evidence of a home advantage. Section 2 details approaches for obtaining team rankings (feature 13 in Table~\ref{tab-artificial_features}) based on the outcomes of the matches the teams have played so far. \section{Acknowledgements and authors' contributions} This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. The authors are grateful to Petros Dellaportas, David Firth, Istv\'{a}n Papp, Ricardo Silva, and Zhaozhi Qian for helpful discussions during the challenge. Alkeos Tsokos, Santhosh Narayanan and Ioannis Kosmidis have defined the various Bradley-Terry specifications, and devised and implemented the corresponding estimation procedures and the validation framework. Gianluca Baio developed the hierarchical Poisson log-linear model and the associated posterior inference procedures. Mihai Cucuringu did extensive work on feature extraction using ranking algorithms. Gavin Whitaker carried out core data wrangling tasks and, along with Franz Kir\'aly, worked on the initial data exploration and helped with the design of the estimation-prediction pipeline for the validation experiments. Franz Kir\'aly also contributed to the organisation of the team meetings and communication during the challenge. All authors have discussed and provided feedback on all aspects of the challenge, manuscript preparation and relevant data analyses.
\section{Introduction} The current paper stems from our participation in the 2017 Machine Learning Journal (Springer) challenge on predicting outcomes of soccer matches from a range of leagues around the world (MLS challenge, in short). Details of the challenge and the data can be found in \citet{Berrar2017}. We consider two distinct modeling approaches for the task. The first approach focuses on modeling the probabilities of win, draw, or loss, using various extensions of Bradley-Terry models \citep{Bradley1952}. The second approach focuses on directly modeling the number of goals scored by each team in each match using a hierarchical Poisson log-linear model, building on the modeling frameworks in \citet{Maher1982}, \citet{Dixon1997}, \citet{Karlis2003} and \citet{Baio2010}. The performance of the various modeling approaches in predicting the outcomes of matches is assessed using a novel, context-specific framework for temporal validation that is found to deliver accurate estimates of the prediction error. The direct modeling of the outcomes using the various Bradley-Terry extensions and the modeling of match scores using the hierarchical Poisson log-linear model deliver similar performance in terms of predicting the outcome. The paper is structured as follows: Section \ref{data} briefly introduces the data, presents the necessary data-cleaning operations undertaken, and describes the various features that were extracted. Section \ref{bt} presents the various Bradley-Terry models and extensions we consider for the challenge and describes the associated estimation procedures. Section \ref{poisson} focuses on the hierarchical Poisson log-linear model and the Integrated Nested Laplace Approximations (INLA; \citealt{Rue2009}) of the posterior densities for the model parameters. Section \ref{validationsec} introduces the validation framework and the models are compared in terms of their predictive performance in Section \ref{results}. 
Section~\ref{conclusion} concludes with discussion and future directions. \section{Pre-processing and feature extraction} \label{data} \subsection{Data exploration} \label{features} The data contain matches from $52$ leagues, covering $35$ countries, for a varying number of seasons for each league. Nearly all leagues have data since 2008, with a few having data extending as far back as 2000. There are no cross-country leagues (e.g.~UEFA Champions League) or teams associated with different countries. The only way that teams move between leagues is within each country by either promotion or relegation. \begin{figure}[t] \centering \includegraphics[scale=0.5]{country-games2.pdf} \caption{Number of available matches per country in the data. } \label{fig-country} \end{figure} Figure~\ref{fig-country} shows the number of available matches for each country in the data set. England dominates the data in terms of matches recorded, with the available matches coming from 5 distinct leagues. The other highly-represented countries are Scotland with data from 4 distinct leagues, and European countries, such as Spain, Germany, Italy and France, most probably because they also have a high UEFA coefficient \citep{wiki:UEFA_coefficient}. \begin{figure}[t] \centering \includegraphics[scale=0.5]{goals-in-a-game2.pdf} \caption{The number of matches per number of goals scored by the home (dark grey) and away team (light grey). } \label{fig-goals} \end{figure} Figure~\ref{fig-goals} shows the number of matches per number of goals scored by the home (dark grey) and away (light grey) teams. Home teams appear to score more goals than away teams, with home teams having consistently higher frequencies for two or more goals and away teams having higher frequencies for no goal and one goal. Overall, home teams scored $304,918$ goals over the whole data set, whereas away teams scored $228,293$ goals. 
In Section 1 of the Supplementary Material, the trend shown in Figure~\ref{fig-goals} is also found to be present within each country, pointing towards the existence of a home advantage. \subsection{Data cleaning} Upon closer inspection of the original sequence of matches for the MLS challenge, we found and corrected the following three anomalies in the data. The complete set of matches from the 2015-2016 season of the Venezuelan league was duplicated in the data. We kept only one instance of these matches. Furthermore, $26$ matches from the 2013-2014 season of the Norwegian league were assigned the year 2014 in the date field instead of 2013. The dates for these matches were modified accordingly. Finally, one match in the 2013-2014 season of the Moroccan league (Raja Casablanca vs Maghrib de Fes) was assigned the month February in the date field instead of August. The date for this match was corrected accordingly. \subsection{Feature extraction} The features that were extracted can be categorized into team-specific, match-specific and/or season-specific. Match-specific features were derived from the information available on each match. Season-specific features have the same value for all matches and teams in a season of a particular league, and differ only across seasons for the same league and across leagues. Table~\ref{tab-features} gives short names, descriptions, and ranges for the features that were extracted. Table~\ref{tab-artificial_features} gives an example of what values the features take for an artificial data set with observations on the first 3 matches of a season for team A playing all matches at home. The team-specific features are listed only for Team A to allow for easy tracking of their evolution. The features we extracted are proxies for a range of aspects of the game, and their choice was based on common sense and our understanding of what is important in soccer, and previous literature.
Home (feature 1 in Table~\ref{tab-artificial_features}) can be used for including a home advantage in the models; newly promoted (feature 2 in Table~\ref{tab-artificial_features}) is used to account for the fact that a newly promoted team is typically weaker than the competition; days since previous match (feature 3 in Table~\ref{tab-artificial_features}) carries information regarding fatigue of the players and the team, overall; form (feature 4 in Table~\ref{tab-artificial_features}) is a proxy for whether a team is doing better or worse during a particular period in time compared to its general strength; matches played (feature 5 in Table~\ref{tab-artificial_features}) determines how far into the season a game occurs; points tally, goal difference, and points per match (features 6, 7 and 10 in Table~\ref{tab-artificial_features}) are measures of how well a team is doing so far in the season; goals scored per match and goals conceded per match (features 8 and 9 in Table~\ref{tab-artificial_features}) are measures of a team's attacking and defensive ability, respectively; previous season points tally and previous season goal difference (features 11 and 12 in Table~\ref{tab-artificial_features}) are measures of how well a team performed in the previous season, which can be a useful indicator of how well a team will perform in the early stages of a season when other features such as points tally do not carry much information; finally, team rankings (feature 13 in Table~\ref{tab-artificial_features}) refers to a variety of measures that rank teams based on their performance in previous matches, as detailed in Section 2 of the Supplementary Material. In order to avoid missing data in the features we extracted, we made the following conventions. The value of form for the first match of the season for each team was drawn from a Uniform distribution in $(0,1)$. 
The forms for the second and third matches were a third of the points gained in the first match and a sixth of the total points gained in the first two matches, respectively. Days since previous match was left unspecified for the very first match of the team in the data. If the team was playing its first season then we treated it as being newly promoted. The previous season points tally was set to $15$ for newly promoted teams and to $65$ for newly relegated teams, and the previous season goal difference was set to $-35$ for newly promoted teams and $35$ for newly relegated teams. These values were set in an ad-hoc manner prior to estimation and validation, based on our sense and experience of what is a small or large value for the corresponding features. In principle, the choice of these values could be made more formally by minimizing a criterion of predictive quality, but we did not pursue this as it would complicate the estimation-prediction workflow described later in the paper and increase computational effort significantly without any guarantee of improving the predictive quality of the models.
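As an illustration of these conventions, the pre-match form and points tally can be computed as follows. This is a Python sketch with hypothetical function names, written for illustration; it is not the code used for the paper, which was written in \texttt{R}.

```python
import random

def match_points(goals_for, goals_against):
    # 3 points for a win, 1 for a draw, 0 for a loss
    if goals_for > goals_against:
        return 3
    return 1 if goals_for == goals_against else 0

def pre_match_form_and_tally(results, rng=None):
    """results: a team's (goals_for, goals_against) pairs in season order.
    Returns the pre-match (form, points tally) for each match, following
    the conventions described in the text for the first three matches."""
    rng = rng or random.Random(0)
    pts = [match_points(gf, ga) for gf, ga in results]
    out = []
    for m in range(len(results)):
        tally = sum(pts[:m])
        if m == 0:
            form = rng.random()            # drawn from Uniform(0, 1)
        elif m == 1:
            form = pts[0] / 3              # a third of match-1 points
        elif m == 2:
            form = sum(pts[:2]) / 6        # a sixth of first-two-match points
        else:
            form = sum(pts[m - 3:m]) / 9   # a ninth of last-three-match points
        out.append((form, tally))
    return out
```

For the artificial data of Table~\ref{tab-artificial_features}, the results $(2,0)$, $(2,1)$, $(0,0)$ give pre-match points tallies $0, 3, 6$ and forms $1$ and $1$ for the second and third matches, matching the table.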
\def\arraystretch{1.5} \begin{table*}[t] \caption{Short names, descriptions, and ranges for the features that were extracted.} \scriptsize \begin{center} \begin{tabular}{llll} \toprule Number & Short name & Description & Range \\ \midrule \multicolumn{4}{l}{Team-specific features} \\ \midrule 1 & Home & \noindent\parbox[t]{7cm}{$1$ if the team is playing at home, and $0$ otherwise} & $\{0, 1\}$ \\ 2 & Newly promoted & \noindent\parbox[t]{7cm}{$1$ if the team is newly promoted to the league for the current season, and $0$ otherwise} & $\{0, 1\}$ \\ 3 & Days since previous match & \noindent\parbox[t]{7cm}{number of days elapsed since the previous match of the team} & $\{1, 2, \ldots\}$ \\ 4 & Form & \noindent\parbox[t]{7cm}{a ninth of the total points gained in the last three matches in the current season} & $[0, 1]$ \\ 5 & Matches played & \noindent\parbox[t]{7cm}{number of matches played in the current season and before the current match} & $\{0, 1, \ldots\}$ \\ 6 & Points tally & \noindent\parbox[t]{7cm}{the points accumulated during the current season and before the current match} & $\{0, 1, \ldots\}$ \\ 7 & Goal difference & \noindent\parbox[t]{7cm}{the goals that a team has scored minus the goals that it has conceded over the current season and before the current match} & $\{\ldots, -1, 0, 1, \ldots\}$ \\ 8 & Goals scored per match & \noindent\parbox[t]{7cm}{total goals scored per match played over the current season and before the current match} & $\Re^{+}$ \\ 9 & Goals conceded per match & \noindent\parbox[t]{7cm}{total goals conceded per match over the current season and before the current match} & $\Re^{+}$ \\ 10 & Points per match & \noindent\parbox[t]{7cm}{total points gained per match played over the current season and before the current match} & $[0, 3]$ \\ 11 & Previous season points tally & \noindent\parbox[t]{7cm}{total points accumulated by the team in the previous season of the same league} & $\{0, 1, \ldots\}$ \\ 12 & Previous season goal difference &
\noindent\parbox[t]{7cm}{total goals scored minus total goals conceded for each team in the previous season of the same league} & $\{\ldots, -1, 0, 1, \ldots\}$ \\ 13 & Team rankings & \noindent\parbox[t]{7cm}{a variety of team rankings, based on historical observations; see Section~2 of the Supplementary Material} & $\Re$ \\ \midrule \multicolumn{4}{l}{Season-specific features} \\ \midrule 14 & Season & \noindent\parbox[t]{7cm}{the league season in which each match is played} & labels \\ 15 & Season window & \noindent\parbox[t]{7cm}{time period in calendar months of the league season} & labels \\ \midrule \multicolumn{4}{l}{Match-specific features} \\ \midrule 16 & Quarter & \noindent\parbox[t]{7cm}{quarter of the calendar year based on the match date} & labels \\ \bottomrule \end{tabular} \end{center} \label{tab-features} \end{table*} \def\arraystretch{1} \begin{table*}[t] \caption{Feature values for artificial data showing the first 3 matches of a season with team A playing all matches at home.} \scriptsize \begin{center} \begin{tabular}[t]{llll} \toprule & Match 1 & Match 2 & Match 3 \\ \cmidrule{2-4} \multicolumn{4}{l}{Match attributes and outcomes} \\ \midrule League & Country1 & Country1 & Country1 \\ Date & 2033-08-18 & 2033-08-21 & 2033-08-26 \\ Home team & team A & team A & team A \\ Away team & team B & team C & team D \\ Home score & 2 & 2 & 0 \\ Away score & 0 & 1 & 0 \\ \midrule \multicolumn{4}{l}{Team-specific features (Team A)} \\ \midrule Newly promoted & 0 & 0 & 0 \\ Days since previous match & 91 & 3 & 5 \\ Form & 0.5233 & 1 & 1 \\ Matches played & 0 & 1 & 2 \\ Points tally & 0 & 3 & 6 \\ Goal difference & - & 2 & 3 \\ Goals scored per match & 0 & 2 & 2 \\ Goals conceded per match & 0 & 0 & 0.5 \\ Points per match & 0 & 3 & 3 \\ Previous season points tally & 72 & 72 & 72 \\ Previous season goal difference & 45 & 45 & 45 \\ \midrule \multicolumn{4}{l}{Season-specific features} \\ \midrule Season & 33-34 & 33-34 & 33-34 \\ Season window & August-May &
August-May & August-May \\ \midrule \multicolumn{4}{l}{Match-specific features} \\ \midrule Quarter & 3 & 3 & 3 \\ \hline \end{tabular} \end{center} \label{tab-artificial_features} \end{table*} \section{Modeling outcomes} \label{bt} \subsection{Bradley-Terry models and extensions} The Bradley-Terry model \citep{Bradley1952} is commonly used to model paired comparisons, which often arise in competitive sport. For a binary win/loss outcome, let \[ y_{ijt} = \left\{ \begin{array}{cc} 1\,, & \text{ if team } i \text{ beats team } j \text{ at time } t \\ 0\,, & \text{ if team } j \text{ beats team } i \text{ at time } t \end{array} \right. \qquad (i,j =1,\dots,n; \ i\neq j; \ t \in \Re^+) \,, \] where $n$ is the number of teams present in the data. The Bradley-Terry model assumes that \begin{equation*} p(y_{ijt}=1) = \frac{\pi_i}{\pi_i+\pi_j} \,, \end{equation*} where $\pi_i=\exp (\lambda_i)$, and $\lambda_i$ is understood as the ``strength'' of team $i$. In the original Bradley-Terry formulation, $\lambda_i$ does not vary with time. For the purposes of the MLS challenge prediction task, we consider extensions of the original Bradley-Terry formulation where we allow $\lambda_i$ to depend on a $p$-vector of time-dependent features $\bm{x}_{it}$ for team $i$ at time $t$ as $\lambda_{it} = f(\bm{x}_{it})$ for some function $f(\cdot)$. Bradley-Terry models can also be equivalently written as linking the log-odds of a team winning to the difference in strength of the two teams competing. Some of the extensions below directly specify that difference. \subsubsection*{BL: Baseline} The simplest specification of all assumes that \begin{equation} \label{bl} \lambda_{it} = \beta h_{it}\ , \end{equation} where $h_{it} = 1$ if team $i$ is playing at home at time $t$, and $h_{it} = 0$ otherwise. The only parameter to estimate with this specification is $\beta$, which can be understood as the difference in strength when the team plays at home. 
We use this model to establish a baseline to improve upon for the prediction task. \subsubsection*{CS: Constant strengths} This specification corresponds to the standard Bradley-Terry model with a home-field advantage, under which \begin{equation} \label{cs} \lambda_{it} = \alpha_i + \beta h_{it} \,. \end{equation} The above specification involves $n + 1$ parameters, where $n$ is the number of teams. The parameter $\alpha_i$ represents the time-invariant strength of the $i$th team. \subsubsection*{LF: Linear with features} Suppose now that we are given a vector of features $\bm{x}_{it}$ associated with team $i$ at time $t$. A simple way to model the team strengths $\lambda_{it}$ is to assume that they are a linear combination of the features. Hence, in this model we have \begin{equation} \label{lf} \lambda_{it}= \sum_{k=1}^{p} \beta_k x_{itk}\,, \end{equation} where $x_{itk}$ is the $k$th element of the feature vector $\bm{x}_{it}$. Note that the coefficients in the linear combination are shared between all teams, and so the number of parameters to estimate is $p$, where $p$ is the dimension of the feature vector. This specification is similar to the one implemented in the \texttt{R} package \texttt{BradleyTerry} \citep{Firth2005}, but without the team-specific random effects. \subsubsection*{TVC: Time-varying coefficients} Some of the features we consider, like the points tally (feature 6 in Table~\ref{tab-features}), vary during the season. Ignoring any special circumstances, such as teams being penalized, the points accumulated by a team are a non-decreasing function of the number of matches the team has played. It is natural to assume that the contribution of the points accumulated to the strength of a team is different at the beginning of the season than it is at the end. In order to account for such effects, the parameters for the corresponding features can be allowed to vary with the matches played.
Specifically, the team strengths can be modeled as \begin{equation} \label{tvc} \lambda_{it}= \sum_{k \in \mathcal{V}} \gamma_k(m_{it}) x_{itk} + \sum_{k \notin \mathcal{V}} \beta_k x_{itk} \,, \end{equation} where $m_{it}$ denotes the number of matches that team $i$ has played within the current season at time $t$ and $\mathcal{V}$ denotes the set of coefficients that are allowed to vary with the matches played. The functions $\gamma_k(m_{it})$ can be modeled non-parametrically, but in the spirit of keeping the complexity low we instead set $\gamma_k(m_{it}) = \alpha_k + \beta_k m_{it}$. With this specification for $\gamma_k(m_{it})$, TVC is equivalent to LF with the inclusion of an extra set of features $\{m_{it}x_{itk}\}_{k \in \mathcal{V}}$. \subsubsection*{AFD: Additive feature differences with time interactions} For the LF specification, the log-odds of team $i$ beating team $j$ is \begin{equation*} \lambda_{it} - \lambda_{jt} = \sum_{k=1}^p\beta_k(x_{itk} - x_{jtk}) \,. \end{equation*} Hence, the LF specification assumes that the difference in strength between the two teams is a linear combination of differences between the features of the teams. We can relax the assumption of linearity, and include non-linear time interactions, by instead assuming that each feature difference in $\mathcal{V}$ contributes to the difference in strengths through an arbitrary bivariate smooth function $g_k$ of the feature difference and the number of matches played, while each remaining feature difference contributes through a univariate smooth function $f_k$. We then arrive at the AFD specification, which can be written as \begin{equation} \label{afd} \lambda_{it} - \lambda_{jt} = \sum_{k \in \mathcal{V}} g_k(x_{itk} - x_{jtk}, m_{it}) + \sum_{k \notin \mathcal{V}} f_k(x_{itk} - x_{jtk}) \ , \end{equation} where for simplicity we take the number of matches played to be the number of matches played by the home team.
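To make the specifications concrete, the win probability implied by the LF strengths can be sketched in a few lines of Python. This is illustrative code with made-up coefficients, not the estimation code used in the paper.

```python
import math

def strength(beta, x):
    # LF specification: strength is a linear combination of features
    return sum(b * xk for b, xk in zip(beta, x))

def p_win(beta, x_i, x_j):
    # Bradley-Terry probability that team i beats team j (draws ignored here)
    lam_i, lam_j = strength(beta, x_i), strength(beta, x_j)
    return 1.0 / (1.0 + math.exp(lam_j - lam_i))
```

With equal feature vectors the probability is $0.5$, and $p_{ij} + p_{ji} = 1$ by construction. Under TVC, appending the products $m_{it} x_{itk}$ for $k \in \mathcal{V}$ to the feature vector recovers the time-varying coefficients with no change to the function above.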
\subsection{Handling draws} \label{sec-draws} The extra outcome of a draw in a soccer match can be accommodated within the Bradley-Terry formulation in two ways. The first is to treat win, loss and draw as multinomial ordered outcomes, in effect assuming that $\text{``win''} \succ \text{``draw''} \succ \text{``loss''}$, where $\succ$ denotes strong transitive preference. Then, the ordered outcomes can be modeled using cumulative link models \citep{Agresti2015} with the various strength specifications. Specifically, let \[ y_{ijt} = \begin{cases} 2\, , & \text{if team } i \text{ beats team } j \text{ at time } t \ , \\ 1\, , & \text{if team } i \text{ and } j \text{ draw at time } t \ ,\\ 0\, , & \text{if team } j \text{ beats team } i \text{ at time } t \ . \end{cases} \] and assume that \begin{equation} \label{ordinal} p(y_{ijt} \leq y ) = \frac{e^{\delta_{y} + \lambda_{it}}}{e^{\delta_y + \lambda_{it}} + e^{\lambda_{jt}}} \, , \end{equation} where $-\infty < \delta_0 \le \delta_1 < \delta_2 = \infty$, and $\delta_0, \delta_1$ are parameters to be estimated from the data. \citet{Cattelan2013} and \citet{Kiraly2017} make use of this approach for modeling soccer outcomes. Another possibility for handling draws is to use the \citet{Davidson1970} extension of the Bradley-Terry model, under which \begin{align*} p(y_{ijt} = 2 \,|\, y_{ijt} \ne 1) &= \frac{\pi_{it}}{\pi_{it} + \pi_{jt}}\,, \\ p(y_{ijt} = 1) &= \frac{\delta\sqrt{\pi_{it}\pi_{jt}}}{\pi_{it} + \pi_{jt} + \delta\sqrt{\pi_{it}\pi_{jt}}}\,, \\ p(y_{ijt} = 0 \,|\, y_{ijt} \ne 1) &= \frac{\pi_{jt}}{\pi_{it} + \pi_{jt}}\,, \end{align*} where $\delta$ is a parameter to be estimated from the data. \subsection{Estimation} \subsubsection*{Likelihood-based approaches} The parameters of the Bradley-Terry model extensions presented above can be estimated by maximizing the log-likelihood of the multinomial distribution.
The log-likelihood for the parameter vector $\bm{\theta}$ is \[ \ell(\bm{\theta}) = \sum_{\{i,j,t\} \in \mathcal{M}} \sum_{y} \mathbb{I}_{[y_{ijt}=y]} \log\Big(p(y_{ijt}=y)\Big)\, , \] where $\mathbb{I}_{A}$ takes the value $1$ if $A$ holds and $0$ otherwise, and $\mathcal{M}$ is the set of triplets $\{i,j,t\}$ corresponding to the matches whose outcomes have been observed. For estimating the functions involved in the AFD specification, we represent each of the functions $f_k$ and $g_k$ using thin plate splines \citep{Wahba1990}, and enforce smoothness constraints on their estimates by maximizing a penalized log-likelihood of the form \[ \ell^{\text{pen}}(\bm{\theta}) = \ell(\bm{\theta}) - k \bm{\theta}^T P \bm{\theta}\, , \] where $P$ is a penalty matrix and $k$ is a tuning parameter. For penalized estimation we only consider ordinal models through the \texttt{R} package \texttt{mgcv} \citep{Wood2006}, and select $k$ by optimizing the Generalized Cross Validation criterion \citep{Golub1979}. Details on the fitting procedure for specifications like AFD and the implementation of thin plate spline regression in \texttt{mgcv} can be found in \cite{Wood2003}. The parameters of the Davidson extensions of the Bradley-Terry model are estimated by using the BFGS optimization algorithm \citep{Byrd1995} to minimize $-\ell(\bm{\theta})$. \subsubsection*{Identifiability} In the CS model, the team strengths are identifiable only up to an additive constant, because $\lambda_i - \lambda_j = (\lambda_i + d) - (\lambda_j + d)$ for any $d \in \Re$. This unidentifiability can be dealt with by setting the strength of an arbitrarily chosen team to zero. The CS model was fitted league-by-league with one identifiability constraint per league. The parameters $\delta_0$ and $\delta_1$ in (\ref{ordinal}) are identifiable only if the specification used for $\lambda_i - \lambda_j$ does not involve an intercept parameter.
An alternative is to include an intercept parameter in $\lambda_i-\lambda_j$ and fix $\delta_0$ at an arbitrary value. The estimated probabilities are invariant to these alternatives, and we use the latter simply because this is the default in the \texttt{mgcv} package. \subsubsection*{Other data-specific considerations} The parameters in the LF, TVC, and AFD specifications (which involve features) are shared across the leagues and matches in the data. For computational efficiency we restrict the fitting procedures to use the $20{,}000$ most recent matches, or fewer if fewer are available, at the time of the first match that a prediction needs to be made. The CS specification requires estimating the strength parameters directly. For computational efficiency, we estimate the strength parameters independently for each league within each country, and only consider matches that took place in the past calendar year from the date of the first match that a prediction needs to be made. \section{Modeling scores} \label{poisson} \subsection{Model structure} \label{sec-hpl} Every league consists of a number of teams $T$, playing against each other twice in a season (once at home and once away). We indicate the number of goals scored by the home and the away team in the $g$th match of the season ($g=1,\ldots,G$) as $y_{g1}$ and $y_{g2}$, respectively. The observed goal counts $y_{g1}$ and $y_{g2}$ are assumed to be realizations of conditionally independent random variables $Y_{g1}$ and $Y_{g2}$, respectively, with \begin{eqnarray*} Y_{gj} \mid \theta_{gj} \sim \mbox{Poisson}(\theta_{gj}) \ . \end{eqnarray*} The parameters $\theta_{g1}$ and $\theta_{g2}$ represent the {\em scoring intensity} in the $g$-th match for the home and away team, respectively.
We assume that $\theta_{g1}$ and $\theta_{g2}$ are specified through the regression structures \begin{align} \begin{split} \eta_{g1} = \log(\theta_{g1}) = \sum_{k=1}^p \beta_k z_{g1k} + \alpha_{h_g} + \xi_{a_g} + \gamma_{h_g,\text{\textit{Sea}}_g} + \delta_{a_g,\text{\textit{Sea}}_g} \ , \\ \eta_{g2} = \log(\theta_{g2}) = \sum_{k=1}^p \beta_k z_{g2k} + \alpha_{a_g} + \xi_{h_g} + \gamma_{a_g,\text{\textit{Sea}}_g} + \delta_{h_g,\text{\textit{Sea}}_g} \ . \end{split} \label{linpred} \end{align} The indices $h_g$ and $a_g$ determine the home and away team for match $g$ respectively, with $h_g, a_g \in \{1, \ldots, T\}$. The parameters $\beta_1, \ldots, \beta_p$ represent the effects corresponding to the observed match- and team-specific features $z_{gj1},\ldots, z_{gjp}$, respectively, collected in a $G \times 2p$ matrix $\bm{Z}$. The other effects in the linear predictor $\eta_{gj}$ reflect assumptions of exchangeability across the teams involved in the matches. Specifically, $\alpha_t$ and $\xi_t$ represent the latent attacking and defensive ability of team $t$ and are assumed to be distributed as \[ \alpha_t \mid \sigma_\alpha \sim\mbox{Normal}(0,\sigma_\alpha^2) \qquad \mbox{and} \qquad \xi_t \mid \sigma_\xi \sim \mbox{Normal}(0,\sigma_\xi^2). \] We used vague log-Gamma priors on the precision parameters $\tau_\alpha=1/\sigma^2_\alpha$ and $\tau_\xi=1/\sigma^2_\xi$. 
In order to account for the time dynamics across the different seasons, we also include the latent interactions $\gamma_{ts}$ and $\delta_{ts}$ between the team-specific attacking and defensive strengths and the season $s \in \{ 1, \ldots, S \}$, which were modeled using autoregressive specifications with \[ \gamma_{t1} \mid \sigma_\varepsilon, \rho_\gamma \sim\mbox{Normal}\left(0,\sigma^2_\varepsilon(1-\rho_\gamma^2)\right), \quad \gamma_{ts}=\rho_\gamma\gamma_{t,s-1}+\varepsilon_{s}, \quad \varepsilon_s \mid \sigma_\varepsilon \sim\mbox{Normal}(0,\sigma_\varepsilon^2) \quad (s = 2, \ldots, S)\,, \] and \[\delta_{t1} \mid \sigma_\varepsilon, \rho_\delta \sim\mbox{Normal}\left(0,\sigma^2_\varepsilon(1-\rho_\delta^2)\right), \quad \delta_{ts}=\rho_\delta\delta_{t,s-1}+\varepsilon_{s}, \quad \varepsilon_s \mid \sigma_\varepsilon \sim\mbox{Normal}(0,\sigma_\varepsilon^2) \quad (s = 2, \ldots, S) \, . \] For the specification of prior distributions for the hyperparameters $\rho_\gamma, \rho_\delta, \sigma_\epsilon$ we used the default settings of the \texttt{R-INLA} package \citep[version 17.6.20]{Lindgren2015}, which we also use to fit the model (see Subsection \ref{estpoisson}). Specifically, \texttt{R-INLA} sets vague Normal priors (centred at $0$ with large variance) on suitable transformations (e.g. log) of the hyperparameters with unbounded range. \subsection{Estimation} \label{estpoisson} The hierarchical Poisson log-linear model (HPL) of Subsection~\ref{sec-hpl} was fitted using INLA \citep{Rue2009}. 
Specifically, INLA avoids time-consuming MCMC simulations by numerically approximating the posterior densities for the parameters of latent Gaussian models, which constitute a wide class of hierarchical models of the form \begin{eqnarray*} Y_i \mid \bm{\phi},\bm\psi & \sim & p(y_i\mid \bm\phi,\bm\psi) \ , \\ \bm\phi \mid \bm\psi & \sim & \mbox{Normal}\left(\bm{0},\bm{Q}^{-1}(\bm\psi)\right) \ , \\ \bm\psi & \sim & p(\bm{\psi}) \ , \end{eqnarray*} where $Y_i$ is the random variable corresponding to the observed response $y_i$, $\bm\phi$ is a set of parameters (which may have a large dimension) and $\bm\psi$ is a set of hyperparameters. The basic principle is to approximate the posterior densities for $\bm\psi$ and $\bm\phi$ using a series of nested Normal approximations. The algorithm uses numerical optimization to find the mode of the posterior, while the marginal posterior distributions are computed using numerical integration over the hyperparameters. The posterior densities for the parameters of the HPL model are computed on the available data for each league. To predict the outcome of a future match, we simulated $1000$ samples from the joint approximated predictive distribution of the number of goals $\tilde{Y}_1$, $\tilde{Y}_{2}$, scored in the future match by the home and away teams respectively, given features $\tilde{\bm{z}}_{j} = (\tilde{z}_{j1}, \ldots, \tilde{z}_{jp})^\top$. Sampling was done using the \texttt{inla.posterior.sample} method of the \texttt{R-INLA} package. The predictive distribution has a probability mass function of the form \[ p\left(\tilde{y}_{1}, \tilde{y}_{2} \mid \bm{y}_1, \bm{y}_2, \tilde{\bm{z}}_1, \tilde{\bm{z}}_2, \bm{Z} \right) = \int p\left(\tilde{y}_{1},\tilde{y}_{2} \mid \bm{\nu}, \tilde{\bm{z}}_1, \tilde{\bm{z}}_2 \right) p\left(\bm{\nu} \mid \bm{y}_1, \bm{y}_2, \bm{Z}\right)d\bm{\nu} \, , \] where the vector $\bm{\nu}$ collects all model parameters.
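In practice, this integral is approximated by Monte Carlo: each posterior draw of the parameters yields a pair of intensities, from which goal counts are simulated and outcomes tabulated. The following is a minimal Python sketch with a hand-rolled Poisson sampler and illustrative fixed intensities; the paper does this in \texttt{R} via \texttt{inla.posterior.sample}.

```python
import math
import random

def draw_poisson(lam, rng):
    # Knuth's method for sampling a Poisson(lam) variate
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def outcome_probs(theta_samples, rng=None):
    """theta_samples: list of (theta_home, theta_away) posterior draws.
    Returns Monte Carlo estimates of (home win, draw, away win)."""
    rng = rng or random.Random(42)
    counts = [0, 0, 0]
    for th, ta in theta_samples:
        yh, ya = draw_poisson(th, rng), draw_poisson(ta, rng)
        counts[0 if yh > ya else (1 if yh == ya else 2)] += 1
    n = len(theta_samples)
    return tuple(c / n for c in counts)
```

With posterior draws of $(\theta_{g1}, \theta_{g2})$ in place of the fixed pair used here, the three relative frequencies estimate the probabilities of home win, draw, and loss.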
We then compute the relative frequencies of the events $\tilde{Y}_{1}> \tilde{Y}_{2}$, $\tilde{Y}_{1}=\tilde{Y}_{2}$, and $\tilde{Y}_{1}<\tilde{Y}_{2}$, which correspond to home win, draw, and loss respectively. \section{Validation framework} \label{validationsec} \subsection{MLS challenge} The MLS challenge consists of predicting the outcomes (win, draw, loss) of 206 soccer matches from 52 leagues that take place between 31st March 2017 and 10th April 2017. The prediction performance of each submission was assessed in terms of the average ranked probability score (see Subsection~\ref{sec-criteria}) over those matches. To predict the outcomes of these matches, the challenge participants have access to over 200,000 matches up to and including the 21st March 2017, which can be used to train a classifier. In order to guide the choice of the model that is best suited to make the final predictions, we designed a validation framework that emulates the requirements of the MLS Challenge. We evaluated the models in terms of the quality of future predictions, i.e.~predictions about matches that happen after the matches used for training. In particular, we estimated the model parameters using data from the period before 1st April of each available calendar year in the data, and examined the quality of predictions in the period between 1st and 7th April of that year. For 2017, we estimated the model parameters using data from the period before 14th March 2017, and examined the quality of predictions in the period between 14th and 21st March 2017. Figure~\ref{validation} is a pictorial representation of the validation framework, illustrating the sequence of experiments and the duration of their corresponding training and validation periods. \begin{figure}[t] \centering \includegraphics[scale=0.7]{valfrwk.pdf} \caption{The sequence of experiments that constitute the validation framework, visualizing their corresponding training and prediction periods. 
\label{validation}} \end{figure} \subsection{Validation criteria} \label{sec-criteria} The main predictive criterion we used in the validation framework is the ranked probability score, which is also the criterion that was used to determine the outcome of the challenge. Classification accuracy was also computed. \subsubsection*{Ranked probability score} Let $R$ be the number of possible outcomes (e.g.~$R = 3$ in soccer) and $\bm{p}$ be the $R$-vector of predicted probabilities with $j$-th component $p_j \in [0,1]$ and $p_1 + \ldots + p_R = 1$. Suppose that the observed outcomes are encoded in an $R$-vector $\bm{a}$ with $j$-th component $a_j \in \{0,1\}$ and $a_1 + \ldots + a_R = 1$. The ranked probability score is defined as \begin{equation} {\rm RPS} = \frac{1}{R-1} {\sum_{i = 1}^{R-1}\left\{\sum_{j=1}^i\left(p_j - a_j\right)\right\}^2} \, . \end{equation} The ranked probability score was introduced by \citet{Epstein1969} \citep[see also,][for a general review of scoring rules]{Gneiting2007} and is a strictly proper probabilistic scoring rule, in the sense that the true odds minimize its expected value \citep{Murphy1969}. \subsubsection*{Classification accuracy} Classification accuracy measures how often the classifier makes the correct prediction, i.e.~how many times the outcome with the maximum estimated probability of occurrence actually occurs.
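For concreteness, the ranked probability score defined above can be computed with a short function. The Python sketch below is ours, for illustration, and reproduces the RPS values in Table~\ref{rpsexample}.

```python
def rps(p, a):
    """Ranked probability score for predicted probabilities p and a
    one-hot vector a encoding the observed outcome (both of length R)."""
    R = len(p)
    cum_p = cum_a = total = 0.0
    for j in range(R - 1):
        cum_p += p[j]          # cumulative predicted probability
        cum_a += a[j]          # cumulative observed indicator
        total += (cum_p - cum_a) ** 2
    return total / (R - 1)
```

For instance, \texttt{rps([0.8, 0.2, 0], [1, 0, 0])} returns $0.02$ and \texttt{rps([0.33, 0.33, 0.34], [0, 1, 0])} returns approximately $0.11$.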
\begin{center} \begin{table*}[ht] \caption{Illustration of the calculation of the ranked probability score and classification accuracy on artificial data.} \hfill{} \footnotesize \begin{tabular}{lllllllllll} \hline \multicolumn{3}{l}{Observed outcome} & \multicolumn{3}{l}{Predicted probabilities} & \multicolumn{3}{l}{Predicted outcome} & \multirow{2}{*}{RPS} & \multirow{2}{*}{Accuracy} \\ $a_1$ & $a_2$ & $a_3$ & $p_1$ & $p_2$ & $p_3$ & $o_1$ & $o_2$ & $o_3$ & & \\ \hline 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0.5 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0.8 & 0.2 & 0 & 1 & 0 & 0 & 0.02 & 1 \\ 0 & 1 & 0 & 0.33 & 0.33 & 0.34 & 0 & 0 & 1 & 0.11 & 0 \\ \hline \end{tabular} \hfill{} \label{rpsexample} \end{table*} \end{center} Table~\ref{rpsexample} illustrates the calculations leading to the ranked probability score and classification accuracy for several combinations of $\bm{p}$ and $\bm{a}$. The left-most group of three columns gives the observed outcomes, the next group gives the predicted outcome probabilities, and the third gives the predicted outcomes using maximum probability allocation. The two columns on the right give the ranked probability scores and classification accuracies. As shown, a ranked probability score of zero indicates a perfect prediction (minimum error) and a ranked probability score of one indicates a completely wrong prediction (maximum error). The ranked probability score and classification accuracy for a particular experiment in the validation framework are computed by averaging over their respective values over the matches in the prediction set. The uncertainty in the estimates from each experiment is quantified using leave-one-match-out jackknife \citep{efron1982}, as detailed in step~9 of Algorithm~\ref{validationalgo}. \subsection{Meta-analysis} The proposed validation framework consists of $K = 17$ experiments, one for each calendar year in the data.
Each experiment results in pairs of observations $(s_i, \hat{\sigma_i}^2)$, where $s_i$ is the ranked probability score or classification accuracy from the $i$th experiment, and $\hat{\sigma_i}^2$ is the associated jackknife estimate of its variance $(i = 1, \ldots , K)$. We synthesized the results of the experiments using meta-analysis \citep{DerSimonian1986}. Specifically, we make the working assumptions that the summary variances $\hat{\sigma_i}^2$ are estimated well enough to be considered as known, and that $s_1, \ldots, s_K$ are realizations of random variables $S_1, \ldots, S_K$, respectively, which are independent conditionally on independent random effects $U_1, \ldots, U_K$, with \[ {S_i \mid U_i} \sim \mbox{Normal} (\alpha + U_i, \hat{\sigma_i}^2) \, , \] and \[ U_i \sim \mbox{Normal} (0, \tau^2) \, . \] The parameter $\alpha$ is understood here as the overall ranked probability score or classification accuracy, after accounting for the heterogeneity between the experiments. The maximum likelihood estimate of the overall ranked probability score or classification accuracy is then the weighted average \[ \hat{\alpha} = \frac{\sum w_i s_i}{\sum w_i} \ , \] where $w_i = (\hat{\sigma_i}^2 + \hat{\tau}^2)^{-1}$ and $\hat{\tau}^2$ is the maximum likelihood estimate of $\tau^2$. The estimated standard error for the estimator of the overall score $\hat\alpha$ can be computed using the square root of the inverse Fisher information about $\alpha$, which ends up being $(\sum_{i = 1}^K w_i)^{-1/2}$. The assumptions of the random-effects meta-analysis model (independence, normality and fixed variances) are all subject to direct criticism for the validation framework depicted in Figure~\ref{validation} and the criteria we consider; for example, the training and validation sets defined in the sequence of experiments in Figure~\ref{validation} are overlapping and ordered in time, so the summaries resulting from the experiments are generally correlated.
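Given the per-experiment summaries and an estimate of $\tau^2$, the weighted average $\hat\alpha$ and its standard error follow directly. A Python sketch (ours; $\tau^2$ is taken as given here, whereas in the text it is estimated by maximum likelihood):

```python
def overall_score(scores, variances, tau2):
    """Random-effects weighted average of per-experiment scores, with
    (assumed known) within-experiment variances; returns (alpha_hat, se)."""
    weights = [1.0 / (v + tau2) for v in variances]
    alpha = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    se = sum(weights) ** -0.5    # inverse Fisher information about alpha
    return alpha, se
```

With $\tau^2 = 0$ and equal variances this reduces to the simple mean of the per-experiment scores with the usual standard error.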
We proceed under the assumption that these departures are not severe enough to influence inference and conclusions about $\alpha$. \subsection{Implementation} Algorithm~\ref{validationalgo} is an implementation of the validation framework in pseudo-code. Each model is expected to have a training method which trains the model on data, and a prediction method which returns predicted outcome probabilities for the prediction set. We refer to these methods as {\tt train} and {\tt predict} in the pseudo-code. \begin{algorithm} [t!] \caption{Pseudo-code for the validation framework} \label{validationalgo} \footnotesize \begin{algorithmic}[1] \Require{ \Statex $\bm{x}_1, \ldots , \bm{x}_G$ \Comment{feature vectors for all $G$ matches in the data set} \Statex $d_1 \leq \ldots \leq d_G$ \Comment{$d_g$ is the match date of match $g \in \{1,\ldots,G\}$} \Statex $\bm{o}_1, \ldots , \bm{o}_G$ \Comment{match outcomes} \Statex train: $\{ \bm{x}_g, \bm{o}_g : g \in A\} \to f(\cdot) $ \Comment{Training algorithm} \Statex predict: $\{ \bm{x}_g : g \in B\}, f(\cdot) \to \{\bar{\bm{o}}_g: g \in B\}$ \Comment{Prediction algorithm} \Statex criterion: $\{ \bm{o}_g, \bar{\bm{o}}_g : g \in B \} \to \{v_g: g \in B\}$ \Comment{observation-wise criterion values} \Statex $D_1, \ldots, D_{T}$ \Comment{Cut-off dates for training for experiments} \Statex meta-analysis: $\{ s_i, \hat\sigma_i^2 : i \in \{1, \ldots, T\}\} \to \hat\alpha $ \Comment{Meta-analysis algorithm} } \Statex \Ensure{ $\hat\alpha$ \Comment{Overall validation metric} } \Statex \For{$i \gets 1 \textrm{ to } T$} \Let{$A$}{$\{g: d_g \le D_i\}$} \Let{$B$}{$\{g: D_i < d_g \le D_i + 10 \text{ days}\}$} \Let{$n_B$}{${\rm dim}(B)$} \Let{$f(\cdot)$}{train($\{ \bm{x}_g, \bm{o}_g : g \in A \}$)} \Comment{fit the model} \Let{$\{ \bar{\bm{o}}_g: g\in B\}$}{predict($\{\bm{x}_g: g \in B\}, f(\cdot)$)} \Comment{get predictions} \Let{$\{v_g: g \in B\}$}{criterion$(\{\bm{o}_g, \bar{\bm{o}}_g: g \in B\})$} \Let{$s_i$}{$\frac{1}{n_B} \sum_{g \in B}
v_g$} \Let{$\hat\sigma_i^2$}{$\frac{n_B}{n_B - 1}\sum_{g \in B} \left( \frac{\sum_{h\in B/\{g\}} v_h}{n_B - 1} - s_i \right)^2$} \EndFor \Statex \Let{$\hat\alpha$}{meta-analysis($\{s_i, \hat\sigma_i^2: i \in \{1, \ldots, T\}$)} \end{algorithmic} \end{algorithm} \section{Results} \label{results} In this section we compare the predictive performance of the various models we implemented as measured by the validation framework described in Section \ref{validationsec}. Table~\ref{modref} gives the details of each model in terms of features used, the handling of draws (ordinal and Davidson, as in Subsection~\ref{sec-draws}), the distribution whose parameters are modeled, and the estimation procedure that has been used. The sets of features that were used in the LF, TVC, AFD and HPL specifications in Table~\ref{modref} resulted from ad-hoc experimentation with different combinations of features in the LF specification. All instances of feature 13 refer to the least squares ordinal rank (see Subsection 2.5 of the supplementary material). The features used in the HPL specification in (\ref{linpred}) have been chosen prior to fitting to be home and newly promoted (features 1 and 2 in Table~\ref{tab-features}), the difference in form and points tally (features 4 and 6 in Table~\ref{tab-features}) between the two teams competing in match $g$, and season and quarter (features 15 and 16 in Table~\ref{tab-features}) for the season that match $g$ takes place. \begin{table*}[ht] \caption{Description of each model in Section~\ref{bt} and Section~\ref{poisson} in terms of features used, the handling of draws, the distribution whose parameters are modeled, and the estimation procedure that was used. The suffix $(t)$ indicates features with coefficients varying with matches played (feature 5 in Table~\ref{tab-features}). The model indicated by $\dagger$ is the one we used to compute the probabilities for the submission to the MLS challenge. 
The acronyms are as follows: BL: Baseline (home advantage); CS: Bradley-Terry with constant strengths; LF: Bradley-Terry with linear features; TVC: Bradley-Terry with time-varying coefficients; AFD: Bradley-Terry with additive feature differences and time interactions; HPL:Hierarchical Poisson log-linear model.} \vspace{0.2cm} \centering \footnotesize \begin{tabular}{rrrlr} \hline Model & Draws & Features & Distribution & Estimation \\ \hline BL (\ref{bl}) & Davidson & 1 & Multinomial & ML \\ BL (\ref{bl}) & Ordinal & 1 & Multinomial & ML \\ CS (\ref{cs}) & Davidson & 1 & Multinomial & ML \\ CS (\ref{cs}) & Ordinal & 1 & Multinomial & ML \\ LF (\ref{lf}) & Davidson & 1, 6, 7, 12, 13 & Multinomial & ML \\ LF (\ref{lf}) & Ordinal & 1, 6, 7, 12, 13 & Multinomial & ML \\ TVC (\ref{tvc}) & Davidson & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & ML \\ TVC (\ref{tvc}) & Ordinal & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & ML \\ AFD (\ref{afd}) & Davidson & 1, 6$(t)$, 7$(t)$, 12$(t)$, 13 & Multinomial & MPL \\ HPL (\ref{linpred}) & & 1, 2, 4, 6, 15, 16 & Poisson & INLA \\ ($\dagger$) TVC (\ref{tvc}) & Ordinal & 1, 2, 3, 4, 6$(t)$, 7$(t)$, 11$(t)$ & Multinomial & ML \\ \hline \end{tabular} \label{modref} \end{table*} For each of the models in Table~\ref{modref}, Table~\ref{resultstab} presents the ranked probability score and classification accuracy as estimated from the validation framework in Algorithm~\ref{validationalgo}, and as calculated for the matches in the test set for the challenge. The results in Table~\ref{resultstab} are indicative of the good properties of the validation framework of Section~\ref{validationsec} in accurately estimating the performance of the classifier on unseen data. Specifically, and excluding the baseline model, the sample correlation between overall ranked probability score and the average ranked probability score from the matches on the test set is $0.973$. 
The classification accuracy seems to be underestimated by the validation framework. The TVC model that is indicated by $\dagger$ in Table~\ref{resultstab} is the model we used to compute the probabilities for our submission to the MLS challenge. Figure \ref{coefplots} shows the estimated time-varying coefficients for the TVC model. The remaining parameter estimates are $0.0410$ for the coefficient of form, $-0.0001$ for the coefficient of days since previous match, and $0.0386$ for the coefficient of newly promoted. Of all the features included, only goal difference and point tally last season had coefficients for which we found evidence of difference from zero when accounting for all other parameters in the model (the $p$-values from individual Wald tests are both less than $0.001$). After the completion of the MLS challenge we explored the potential of new models and achieved even smaller ranked probability scores than the one obtained from the TVC model. In particular, the best performing model is the HPL model in Subsection~\ref{sec-hpl} (starred in Table~\ref{resultstab}), followed by the AFD model which achieves a marginally worse ranked probability score. It should be noted here that the LF models are two simpler models that achieve performance that is close to that of HPL and AFD, without the inclusion of random effects, time-varying coefficients, or any non-parametric specifications. The direct comparison between the ordinal and Davidson extensions of Bradley-Terry type models indicates that the differences tend to be small, with the Davidson extensions appearing to perform better. \begin{figure}[t] \vspace{-0.5cm} \centering \includegraphics[scale=0.6]{coefs.pdf} \caption{Plots of the time-varying coefficients in the TVC model that is indicated by $\dagger$ in Table~\ref{resultstab}, which is the model we used to compute the probabilities for our submission to the MLS challenge. 
} \label{coefplots} \end{figure} We also tested the performance of HPL in terms of predicting actual scores of matches using the validation framework, comparing to a baseline method that always predicts the average goals scored by home and away teams respectively in the training data it receives. Using root mean square error as an evaluation metric, HPL achieved a score of 1.0011 with estimated standard error 0.0077 compared to the baseline which achieved a score of 1.0331 with estimated standard error 0.0083. \begin{table}[ht] \caption{Ranked probability score and classification accuracy for the models in Table~\ref{modref}, as estimated from the validation framework of Section~\ref{validationsec} (standard errors are in parentheses) and from the matches in the test set of the challenge. The model indicated by $\dagger$ is the one we used to compute the probabilities for the submission to the MLS challenge, while the one indicated by $*$ is the one that achieves the lowest estimated ranked probability score.} \label{resultstab} \centering \footnotesize \begin{tabular}{rrrrrrrrr} \toprule & & & \multicolumn{3}{c}{Ranked probability score} & \multicolumn{3}{c}{Accuracy} \\ \cmidrule{3-8} & Model & Draws & \multicolumn{2}{c}{Validation} & Test & \multicolumn{2}{c}{Validation} & Test \\ \midrule & BL & Davidson & 0.2242 & (0.0024) & 0.2261 & 0.4472 & (0.0067) & 0.4515 \\ & BL & Ordinal & 0.2242 & (0.0024) & 0.2261 & 0.4472 & (0.0067) & 0.4515 \\ & CS & Davidson & 0.2112 & (0.0028) & 0.2128 & 0.4829 & (0.0073) & 0.5194 \\ & CS & Ordinal & 0.2114 & (0.0028) & 0.2129 & 0.4779 & (0.0074) & 0.4951 \\ & LF & Davidson & 0.2088 & (0.0026) & 0.2080 & 0.4849 & (0.0068) & 0.5049 \\ & LF & Ordinal & 0.2088 & (0.0026) & 0.2084 & 0.4847 & (0.0068) & 0.5146 \\ & TVC & Davidson & 0.2081 & (0.0026) & 0.2080 & 0.4898 & (0.0068) & 0.5049 \\ & TVC & Ordinal & 0.2083 & (0.0025) & 0.2080 & 0.4860 & (0.0068) & 0.5097 \\ & AFD & Ordinal & 0.2079 & (0.0026) & 0.2061 & 0.4837 & (0.0068) & 
0.5194 \\ $\star$ & HPL & & 0.2073 & (0.0025) & 0.2047 & 0.4832 & (0.0067) & 0.5485 \\ $\dagger$ & TVC & Ordinal & 0.2085 & (0.0025) & 0.2087 & 0.4865 & (0.0068) & 0.5388 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions and discussion} \label{conclusion} We compared the performance of various extensions of Bradley-Terry models and a hierarchical log-linear Poisson model for the prediction of outcomes of soccer matches. The best performing Bradley-Terry model and the hierachical log-linear Poisson model delivered similar performance, with the hierachical log-linear Poisson model doing marginally better. Amongst the Bradley-Terry specifications, the best performing one is AFD, which models strength differences through a semi-parametric specification involving general smooth bivariate functions of features and season time. Similar but lower predictive performance was achieved by the Bradley-Terry specification that models team strength in terms of linear functions of season time. Overall, the inclusion of features delivered better predictive performance than the simpler Bradley-Terry specifications. In effect, information is gained by relaxing the assumption that each team has constant strength over the season and across feature values. The fact that the models with time varying components performed best within the Bradley-Terry class of models indicates that enriching models with time-varying specifications can deliver substantial improvements in the prediction of soccer outcomes. All models considered in this paper have been evaluated using a novel, context-specific validation framework that accounts for the temporal dimension in the data and tests the methods under gradually increasing information for the training. The resulting experiments are then pooled together using meta-analysis in order to account for the differences in the uncertainty of the validation criterion values by weighing them accordingly. 
The meta analysis model we employed operates under the working assumption of independence between the estimated validation criterion values from each experiment. This is at best a crude assumption in cases like the above where data for training may be shared between experiments. Furthermore, the validation framework was designed to explicitly estimate the performance of each method only for a pre-specified window of time in each league, which we have set close to the window where the MLS challenge submissions were being evaluated. As a result, the conclusions we present are not generalizable beyond the specific time window that was considered. Despite these shortcomings, the results in Table~\ref{resultstab} show that the validation framework delivered accurate estimates of the actual predictive performance of each method, as the estimated average predictive performances and the actual performances on the test set (containing matches between 31st March and 10th April, 2017) were very close. The main focus of this paper is to provide a workflow for predicting soccer outcomes, and to propose various alternative models for the task. Additional feature engineering and selection, and alternative fitting strategies can potentially increase performance and are worth pursuing. For example, ensemble methods aimed at improving predictive accuracy like calibration, boosting, bagging, or model averaging \cite[for an overview, see][]{Dietterich2000} could be utilized to boost the performance of the classifiers that were trained in this paper. A challenging aspect of modeling soccer outcomes is devising ways to borrow information across different leagues. The two best performing models (HPL and AFD) are extremes in this respect; HPL is trained on each league separately while AFD is trained on all leagues simultaneously, ignoring the league that teams belong to. 
Further improvements in predictive quality can potentially be achieved by using a hierarchical model that takes into account which league teams belong to but also allows for sharing of information between leagues. \section{Supplementary material} The supplementary material document contains two sections. Section 1 provides plots of the number of matches per number of goals scored by the home and away teams, by country, for a variety of arbitrarily chosen countries. These plots provide evidence of a home advantage. Section 2 details approaches for obtaining team rankings (feature 13 in Table~\ref{tab-artificial_features}) based on the outcomes of the matches they played so far. \section{Acknowledgements and authors' contributions} This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. The authors are grateful to Petros Dellaportas, David Firth, Istv\'{a}n Papp, Ricardo Silva, and Zhaozhi Qian for helpful discussions during the challenge. Alkeos Tsokos, Santhosh Narayanan and Ioannis Kosmidis have defined the various Bradley-Terry specifications, and devised and implemented the corresponding estimation procedures and the validation framework. Gianluca Baio developed the hierarchical Poisson log-linear model and the associated posterior inference procedures. Mihai Cucuringu did extensive work on feature extraction using ranking algorithms. Gavin Whitaker carried out core data wrangling tasks and, along with Franz Kir\'aly, worked on the initial data exploration and helped with the design of the estimation-prediction pipeline for the validation experiments. Franz Kir\'aly also contributed to the organisation of the team meetings and communication during the challenge. All authors have discussed and provided feedback on all aspects of the challenge, manuscript preparation and relevant data analyses.
{ "attr-fineweb-edu": 1.421875, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdmI5qhLBINsxua_G
\section{Introduction} Au cours de l'évolution, la physiologie humaine a été optimisée pour couvrir de grandes distances afin de trouver suffisamment de nourriture pour soutenir le métabolisme du cerveau. La popularité du marathon chez les humains est un héritage de notre capacité à courir de longues distances en utilisant le métabolisme aérobie. Le nombre de participants au marathon de Londres est passé de 7 000 à 35 000 au cours des 30 dernières années et la participation aux courses sur route en général a augmenté de plus de 50\% au cours de la dernière décennie. Cette popularité est caractérisée par l'émergence de marathoniens amateurs. Ils disposent désormais de cardiofréquence mètres GPS et cherchent à courir dans une zone de vitesse ou de fréquence cardiaque (FC) sans bases théoriques et subissent une chute drastique de leur vitesse à partir du 25ème km. D'autres coureurs préfèrent courir "à la sensation" et un défi important est d'analyser et d'interpréter les séries temporelles de leurs réponses physiologiques pour les aider à améliorer leurs performances. A partir des années 1990, de nombreux auteurs ont mis en évidence le comportement fractal de données physiologiques \cite{abry10a, goldberger91a}. En 2005, une première analyse multifractale sur la FC de marathoniens a été effectuée avec la MMTO (méthode des maxima de la transformée en ondelettes) \cite{wesfreid05a}. Cette étude fût complétée en 2009 en utilisant deux autres méthodes d'analyse multifractale, basées sur les DFA (Detrended Fluctuation Analysis) et les coefficients d'ondelettes dominants appliquées sur des primitives des signaux \cite{wesfreid09a}. Les comparaisons entre ces résultats sont difficiles; en effet ces méthodes ne conduisent pas à des analyses associées au même exposant de régularité: la MMTO est adaptée au {\sl weak scaling exponent} \cite{Meyer}, la DFA au 2-exposant \cite{pLeaders}, et les coefficients dominants à l'exposant de Hölder \cite{jaffard04a}. 
De plus on ne dispose de résultats mathématiques généraux de validité que pour la méthode des coefficients dominants \cite{jaffard04a}. La comparaison de l'analyse de la FC dans la première moitié et le dernier quart du semi-marathon, lorsque le coureur a commencé à avoir un épuisement du glycogène (cause majeure de la diminution de la vitesse à la fin de la course) met en évidence un changement dans la dynamique de la FC \cite{wesfreid09a}. Cette étude a révélé que la fatigue affecte les propriétés de régularité du signal, ainsi qu'une diminution constante des rapports entre la vitesse, la cadence de pas, les réponses cardiorespiratoires (fréquence respiratoire, cardiaque, volume d'oxygène consommé) et le niveau de taux de perception de l'épuisement (Rate of Perception of Exhaustion (RPE)), selon l'échelle psychophysiologique de Borg. Nous confirmons et complétons ces études en montrant qu'une analyse multifractale basée sur les $p$-exposants permet de travailler directement sur les données (et non sur des primitives) et caractérise plus précisément les modifications physiologiques durant un marathon. \section{Analyse avec les $p$-leaders} \subsection{Exposants de régularités ponctuelles} Soient $X\in L^{\infty}_{loc}(\mathbb{R})$, $t_0 \in \mathbb{R}$ et $\alpha>0$; $X$ appartient à $C^\alpha (t_0)$ s'il existe un polynôme $P_{t_0}$ et $K,r>0$ tels que $\forall t\in [t_0-r,t_0+r]$, $|X(t)-P_{t_0}(t-t_0)| \leq K |t-t_0|^\alpha.$ L'exposant de Hölder ponctuel est $h_X (t_0)=\sup\{\alpha \ : \ X\in C^\alpha (t_0)\}$. Cet exposant n'est pas défini si $X$ n'est pas localement borné; on utilise alors la notion suivante adaptée au cadre $L^p$ \cite{jaffard05a}. 
Soient $p\in [1,+\infty[$, $X\in L^{p}_{loc}(\mathbb{R})$, $t_0\in \mathbb{R}$ et $\alpha\geq -1/p$; $X\in T^{p}_\alpha (t_0)$ s'il existe un polynôme $P_{t_0}$ , $K, R>0$ tels que \begin{equation*}\forall r \leq R, \; \left(\frac{1}{r} \int_{t_0-r}^{t_0+r}|X(t)-P_{t_0}(t-t_0)|^p dt\right)^{\frac{1}{p}} \leq K r^\alpha . \end{equation*} Le $p$-exposant de $X$ est $h^{(p)}_X (t_0)=\sup\{\alpha :\ X\in T^{p}_\alpha (t_0)\} .$ Lorsque $p=+\infty$, on retrouve l'exposant de Hölder. Cet exposants de régularité se caractérise par les coefficients d'ondelettes de la manière suivante. \subsection{Coefficients dominants et $p$-leaders} Soit $\psi$ une "ondelette", c'est-à-dire une fonction régulière et bien localisée telle que les $\{\psi_{j,k} (t) = 2^{j/2} \psi (2^j t-k)\}_{(j,k)\in\mathbb{Z}^2}$ forment une base orthonormée de $L^2 (\mathbb{R})$. Les coefficients d'ondelette de $X$ sont définis par $c_{j,k}=2^{j/2}\int_{\mathbb{R}} X(t) \psi_{j,k}(t) dt$. Ils permettent de déterminer la valeur de l'{\sl exposant de régularité uniforme} $H_{min}$ de $X$ qui est caractérisé par regression $\log$-$\log$ \begin{equation} \label{defhmain} \sup_{k} |c_{j,k}|\sim 2^{-H_{min}j} \end{equation} (dans la limite des petites échelles, c'est-à-dire quand $j \rightarrow + \infty$). Si $H_{min}>0$, alors $f$ est localement bornée et on peut déterminer l'exposant de Hölder ponctuel de $X$ de la façon suivante: On note un intervalle dyadique $\lambda=\lambda_{j,k}=[k2^{-j}, (k+1)2^{-j}[$ et la réunion avec ses deux voisins est $3\lambda=[(k-1)2^{-j}, (k+2)2^{-j}[$. Les coefficients dominants de $X$ sont définis par $l_{j,k}= \sup_{\lambda' \subset 3\lambda} |c_{\lambda'}|.$ Ils permettent de caractériser les exposants de Hölder ponctuels si $H_{min}>0$ puisque pour $j\rightarrow + \infty$, on a (pour les $\lambda_{j,k}$ contenant $x$) $l_{j,k} \sim 2^{-j h_X (x)}$, cf. \cite{abry15a}. 
Si $H_{min} < 0$, l'exposant de Hölder de $X$ n'est plus défini, mais, il est possible de travailler sur une intégrée fractionnaires de $X$ d'ordre $\gamma >-H_{min}$. Pour éviter une telle transformation, on peut utiliser le $p$-exposant pour des valeurs de $p$ telles que $X\in L^{p} (\mathbb{R})$. Pour déterminer la valeur d'un tel $p$, il suffit de vérifier que $\eta (p) >0$ où la \textit{fonction d'échelle ondelette} $\eta $ est définie par $2^{-j} \sum_{k} |c_{j,k}|^p\sim 2^{-\eta (p) j}$. Soit $X\in L_{loc}^{p}(\mathbb{R})$. Les $p$-leaders sont définis par $l_{j,k}^{(p)} = \left( \sum_{\lambda' \subset 3\lambda} |c_{\lambda'}|^p 2^{j-j'} \right)^{\frac{1}{p}}.$ Ils permettent de caractériser les $p$-exposants si $\eta (p)>0$; on a: $l_{j,k}^{(p)} \sim 2^{-j h_X^{(p)} (k2^{-j})}$ \cite{abry15a, jaffard05a}. \subsection{$p$-spectre multifractal} L'analyse multifractale a pour objet l'étude de signaux dont l'exposant de régularité ponctuelle varie fortement d'un point à un autre. Le $p$-spectre multifractal d'une fonction $X\in L_{loc}^{p} (\mathbb{R})$, est $D_X^{(p)} : H \mapsto \dim_{H}\left( \left\{ x\in\mathbb{R}\ : \ h_X^{(p)} (x)=H \right\} \right), $ où $\dim_{H}$ désigne la dimension de Hausdorff. Il est en général impossible de calculer point par point les $p$-exposants de données expérimentales. On estime plutôt le $p$-spectre de la façon suivante. Soit $S_X^{(p)} (j,q) = 2^{-j} \sum_{k} \left( l^{(p)}_{j,k}\right)^q.$ Si $S_p (j,q)\sim 2^{-j \zeta_X^{(p)} (q)}$, on appelle $\zeta_X^{(p)} (q)$ la $p$-fonction d'échelle. Sa transformée de Legendre $\mathcal{L}_X^{(p)} (H)=\inf_{q\in\mathbb{R}} \{1+qH-\zeta_X^{(p)} (q)\}$ permet d'estimer le $p$-spectre. En effet $D_X^{(p)} (H) \leq L_X^{(p)} (H)$ \cite{jaffard04a,jaffard05a}. \section{Analyse multifractale de la fréquence cardiaque} Nous analysons la fréquences cardiaque de 8 marathoniens (des hommes de la même tranche d'âge), cf. la Fig. \ref{Fig1}. 
Nous avons également considéré la cadence et l'accélération, mais elles ne permettent pas de mettre en évidence des paramètres pertinents pour la classification; cela peut être dû au fait que, contrairement à la cadence ou l'accélération, le coureur ne con\-trôle pas directement son rythme cardiaque. \begin{figure}[htb] \begin{center} \resizebox{80mm}{!}{ \includegraphics{dataphy.png}} \end{center} \legende{Fréquence cardiaque d'un marathonien.}\label{Fig1} \end{figure} \subsection{ Estimation du paramètre $H_{min}$} Le paramètre $H_{min}$, cf. \eqref{defhmain}, permet de déterminer si une analyse basée sur les coefficients dominants est possible, et nous verrons qu'il est également un paramètre de classification important. La Fig. \ref{Fig2} est un exemple d'estimations de cette valeur: la régression est effectuée entre $j=8$ et $j=11$ (soit environ entre $26$s et $3$min$25$s), échelles identifiées comme pertinentes pour les données physiologiques, cf. \cite{catrambone20a}. \begin{figure}[htb] \begin{center} \resizebox{50mm}{!}{ \includegraphics{HminFC.PNG}} \end{center} \legende{Estimation par régression log-log du $H_{min}$ d'une fréquence cardiaque. Le bon alignement des points montre que la régression linéaire permet d'estimer avec précision la valeur de $H_{min}$, et qu'il est négatif.}\label{Fig2} \end{figure} Pour la plupart des marathoniens, on obtient que $H_{min}<0$ (cf. la Table \ref{tab1}) ce qui justifie l'utilisation des $p$-leaders. Il est alors nécessaire de déterminer pour quelle valeurs de $p$ on a $\zeta(p)>0$, cf. les Fig. \ref{Fig3} et \ref{Fig4}. Suite à cette première analyse, nous allons effectuer une analyse basée sur les $p$-leaders de la fréquence cardiaque. \begin{figure}[htb] \begin{center} \resizebox{50mm}{!}{ \includegraphics{EchwavFc.PNG}} \end{center} \legende{Exemple de fonction d'échelle ondelette de la fréquence cardiaque. Elle permet de déterminer les $p$ tels que $\zeta(p)>0$. 
L'estimation du $p$ spectre est alors possible.}\label{Fig3} \end{figure} \subsection{Détermination des 1-spectres} Nous avons choisi de prendre $p=1$ et $p= 1,4$ pour le calcul des $p$-leaders car les données étudiées vérifient toujours $\zeta(p)>0$ pour ces valeurs, cf. la Fig. \ref{Fig3}. La Fig. \ref{Fig4} fournit un tel exemple d'estimation de $\eta (1)$. Il est alors justifié d'estimer le $1$-spectre multifractal pour les données physiologiques sur la fréquence cardiaque des marathoniens. Sur la Fig. \ref{Fig5}, on représente un exemple de $1$-spectre de Legendre. \begin{figure}[htb] \begin{center} \resizebox{50mm}{!}{ \includegraphics{struct1.png}} \end{center} \legende{Estimation par régression log-log de la fonction d'échelle ondelettes de la fréquence cardiaque pour $p=1$. La pente de la régression est positive. L'utilisation de $1$-leaders pour l'analyse multifractale est ainsi justifiée.}\label{Fig4} \end{figure} \begin{figure}[htb] \begin{center} \resizebox{70mm}{!}{ \includegraphics{scal1lead_1.png} \includegraphics{1spect_1.png}} \end{center} \legende{Estimations de la fonction d'échelle 1-leaders de la fréquence cardiaque (à gauche) et de sa transformée de Legendre (à droite). La partie gauche du spectre débute pour des exposants $H<0$ confirmant ainsi la présence de singularités d'exposant négatif dans le signal.}\label{Fig5} \end{figure} On effectue alors une classification des données sur les 8 marathoniens avec des valeurs $p=1$ et $p=1,4$ (cf. Table \ref{tab1}). 
\begin{table}[htb] \legende{Analyse multifractale de la fréquence cardiaque des marathoniens ($\widetilde {H}_{min}$ et $\widetilde{c}_1^p$ sont respectivement le $H_{min}$ et $c_1^p$ de la primitive du signal)}\label{tab1} \begin{center} \resizebox{88mm}{!}{ \begin{tabular}{||c||*{10}{m{1.2cm}|}|} \hline\hline $ $ & $H_{min}$ & $\widetilde{H}_{min}$ & $c_1^1$& $c_1^{1,4}$ & $\widetilde{c}_1^1$ & $\widetilde{c}_1^{1.4}$ \\ \hline M1 & $-0,2768$ & $0,7232$ & $0,8099$ & $0,8064$ & $1,8242$ & $1,8213$ \\ \hline M2 & $-0,0063$ & $0,9937$ & $0,4564$ & $0,4043$ & $1,3926$ & $1,3509$ \\ \hline M3 & $-0,0039$ & $0,9961$ & $0,6856$ & $0,6625$ & $1,6942$ & $1,6351$ \\ \hline M4 & $-0,1633$ & $0,8367$ & $0,6938$ & $0,6785$ & $1,6653$ & $1,6636$ \\ \hline M5 & $-0,2434$ & $0,7566$ & $0,5835$ & $0,5689$ & $1,5401$ & $1,5224$ \\ \hline M6 & $-0,3296$ & $0,6704$ & $0,5809$ & $0,5636$ & $1,5644$ & $1,5500$ \\ \hline M7 & $0,1099$ & $1,1099$ & $0,5652$ & $0,5483$ & $1,4754$ & $1,4379$ \\ \hline M8 & $-0,5380$ & $0,4620$ & $0,3382$ & $0,2977$ & $1,2588$ & $1,2086$ \\ \hline\hline \end{tabular}} \end{center} \end{table} \begin{figure}[htb] \begin{center} \resizebox{48mm}{!}{ \includegraphics{classFC.png}} \end{center} \legende{Représentation du couple $(H_{min}, c_1)$ déduits du 1-spectre de la fréquence cardiaque; $H_{min}$ apparaît comme le paramètre de classification le plus pertinent. Le point isolé à gauche correspond à M8, le coureur le plus entraîné.}\label{Fig6} \end{figure} Dans la Fig. \ref{Fig6}, on représente pour chaque marathonien les valeur $(H_{min}, c_1^1)$ où $c_1^p$ est la valeur de $H$ pour laquelle le maximum du $p$-spectre est atteint. Il correspond mathématiquement à l'exposant presque partout du signal. Les valeurs de $c_1^1$ restent très proches de $0,4$ tandis que celles du $H_{min}$ varient fortement, mettant en évidence des différences entre coureurs plus ou moins expérimentés. 
Les autres paramètres permettant de caractériser la forme des spectres ($c_2^p$ et $c_3^p$ \cite{abry10a}) ne s'avèrent pas pertinents pour la classification, ainsi nous ne les présenterons pas. Le coureur M8 qui présente le $H_{min}$ le plus petit est le seul coureur de \textit{trail}, et il a surpassé son précédent record; de plus, il est le plus entraîné avec une façon de courir (nombre de pas par minute et vitesse/accélération) très irrégulière. La Table \ref{tab1} montre que les valeurs de $c_1^p$ varient très peu en fonction de $p$, et que, pour des primitives, le décalage des paramètres $H_{min}$ et $c_1^p$ est d'environ 1, ce qui est caractéristique de signaux ne contenant que des singularités de type { \em cusp} (absence de chirps dans le signal), et montre le caractère intrinsèque de la valeur de $c_1^p$ pour ce type de données. Ces résultats sont confirmés en vérifiant que le spectre d'une primitive du signal se translate de $1$. \subsection{Classification des phases d'un marathon} Nous nous intéressons aux évolutions des paramètres multifractals lors du marathon. Vers le 25ème kilomètre (environ à $60\%$) les coureurs ressentent une pénibilité accrue sur l'échelle RPE Borg (Rate of Perceived Exertion) utilisée pour identifier l'intensité d'un exercice en fonction du ressenti. La Fig. \ref{Fig8} permet d'observer l'évolution des paramètres de multifractalité entre la première moitié du marathon et le dernier quart, mettant en évidence les différences de réactions physiologiques face à la fatigue ressenties à partir du 28ème kilomètre. 
\begin{figure}[htb] \begin{center} \resizebox{77mm}{!}{ \includegraphics{classFC2.png}} \end{center} \legende{Évolution de $(H_{min}, c_1^1)$ déduits du 1-spectre de la fréquence cardiaque entre le début (en bleu) et le dernier quart (en rouge) du marathon : les évolutions sont similaires sauf pour trois coureurs : M3 et M6 qui ont éprouvé de grandes difficultés et M7 qui est le moins expérimenté avec un temps de course nettement plus long et une façon de courir très régulière.}\label{Fig8} \end{figure} \section{Conclusion et perspectives} L'analyse au moyen de $p$-leaders du rythme cardiaque de marathoniens permet de compléter les analyses précédentes basées sur la MMTO, la DFA, ou les coefficients dominants. Sur le plan méthodologique, nous avons mis en évidence le fait qu'une analyse basée sur les 1-exposants permet de travailler directement sur les données et non sur des primitives. De plus l'analyse des différentes phases de la course permet de relier l'évolution des paramètres multifractals et la physiologie des coureurs : L'exposant $H_{min}$ évolue peu durant la course pour des coureurs expérimentés, mais diminue en fin de course pour les autres, ce que l'on peut interpréter comme une désorganisation dans leurs efforts. De plus, ces résultats confirment des analyses réalisées sur des matrices physiologiques complètes (cardio respiratoire), nécessitant des mesures coûteuses avec des analyseurs ne permettant pas de restituer des données en qualité suffisante comme le fait le cardiofréquence mètre mesurant chaque période cardiaque, avec une précision de 0.001s soit 0.01 hertz pour la fréquence cardiaque). Ces cardiofréquence mètres sont à présent largement utilisés par les coureurs, en revanche, le traitement des série temporelles post course se fait tout au plus avec la méthode DFA. Nous avons mentionné que l'analyse multifractale de la cadence et de l'accélération des coureurs ne fournit pas de paramètres pertinents pour la classification. 
Il est cependant possible qu'une analyse multivariée qui utilise conjointement ces données avec la fréquence cardiaque (en suivant les outils développés dans \cite{Seuret}) puisse fournir des informations plus riches. Ce point sera l'objet d'une étude future.
{ "attr-fineweb-edu": 1.822266, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUa5rxK1Thg9qFbS_m
\section{Model} \label{sec:cricketmodel}
\smallskip \noindent{\bf Intuition.} We shall introduce the {\em Latent Variable Model} here. It is motivated by the following minimal property that sports fans and pundits expect to hold for a variety of sports: the performance of a team is determined by the {\em team context}, which remains constant for the duration of the game, and the {\em game context}, which changes during the play of the game. In cricket, the {\em team context} may correspond to the players in the team, the coach, the mental state in which the players are, the importance of the game (e.g., the final of the World Cup), the venue of the game and its fan base, the pitch or ground condition, etc. The {\em game context} may correspond to the stage of the game, power plays, etc. Next, we formalize this seemingly universal intuition across sports in the context of cricket.
\smallskip \noindent{\bf Formalism.} The performance, as mentioned before, is measured by the runs scored and the wickets lost. To that end, we shall index an inning by $i$, and a ball within the inning by $j$. Let there be a total of $n$ innings and $b$ balls; within an LOI, $b = 300$. Let $X_{ij}$ denote the runs scored on ball $j$ within inning $i$. We shall model $X_{ij}$ as independent random variables with mean $m_{ij} = {\mathbb E}[X_{ij}]$. To capture the above-stated intuition, we state that \begin{align} m_{ij} & = f(\theta_i, \rho_j), \end{align} for all $i \in [n], j \in [b]$\footnote{We use notation $[x] = \{1,\dots, x\}$ for integer $x$.} where the {\em latent} parameter $\theta_i \in \Omega_1 = [r_1, s_1]^{d_1}$ captures the {\em team context}, the {\em latent} parameter $\rho_j \in \Omega_2 = [r_2, s_2]^{d_2}$ captures the {\em game context}, and the {\em latent} function $f: \Omega_1 \times \Omega_2 \to [0,\infty)$ captures the complexity of the underlying model. For simplicity and without loss of generality, we shall assume that $r_1=r_2 = 0$, $s_1 = s_2 = 1$, $d_1 = d_2 = d \geq 1$.
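As a concrete illustration (not part of the paper's pipeline), the following minimal Python sketch simulates per-ball runs from this generative model. The specific choice of $f$ — a shifted bilinear form, which is Lipschitz on $\Omega_1 \times \Omega_2$ — and the Poisson distribution for the per-ball runs are our illustrative assumptions, not implied by the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, d = 100, 300, 2              # innings, balls per inning (LOI), latent dimension

# Latent team and game contexts, drawn from Omega_1 = Omega_2 = [0, 1]^d.
theta = rng.uniform(size=(n, d))   # theta_i: team context, fixed within inning i
rho = rng.uniform(size=(b, d))     # rho_j: game context at ball j

# A hypothetical Lipschitz latent function f(theta, rho) = 0.5 + <theta, rho>,
# evaluated on all (i, j) pairs at once: m_ij = f(theta_i, rho_j).
m = 0.5 + theta @ rho.T

# Runs X_ij: independent draws with mean m_ij (Poisson chosen for illustration only).
X = rng.poisson(m)
```

With this bilinear choice of $f$, the mean matrix $[m_{ij}]$ has rank at most $d+1$ exactly — a special case of the approximate low-rank structure derived below for general Lipschitz $f$.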
We shall additionally impose that the latent function $f$ is {\em Lipschitz} continuous, but can be arbitrary otherwise. That is, there exists a constant $C_f > 0$ such that for all $\theta, \theta' \in \Omega_1$ and $\rho, \rho' \in \Omega_2$, \begin{align} |f(\theta, \rho) - f(\theta', \rho')| & \leq C_f \Big(\| \theta - \theta'\| + \| \rho - \rho'\|\Big). \end{align} From the modeling perspective, the restriction of Lipschitz continuity on $f$ is mild: it states that the performance of a team cannot change arbitrarily when the team context and/or the game context changes mildly, which makes sense. \smallskip \noindent{\bf Implication.} Consider the matrix of runs scored across all $n$ innings and all $b$ balls, $\boldsymbol{X} = [X_{ij}] \in {\mathbb N}^{n \times b}$. Therefore, ${\mathbb E}[\boldsymbol{X}] = \bm = [m_{ij}] = [f(\theta_i, \rho_j)]$. Define $Y_{ij} = \sum_{k=1}^j X_{ik}$ as the cumulative runs scored in inning $i$ up to and including the $j$th ball and let $\boldsymbol{Y} = [Y_{ij}] \in {\mathbb N}^{n \times b}$. Then, ${\mathbb E}[Y_{ij}] = \sum_{k=1}^j m_{ik}$. Also, let $\boldsymbol{M} = {\mathbb E}[\boldsymbol{Y}]$. \begin{proposition}\label{prop:low-rank} For any $\varepsilon > 0$, there exists $r = r(\varepsilon, d) \geq 1$ such that the matrix $\bfm$ is approximated by a matrix of rank $r(\varepsilon, d)$ with respect to entry-wise max-norm within $\varepsilon$. That is, there exists a matrix $\bfm^{(r)}$ of rank $r$ such that $\max_{i,j} |m_{ij} - m^{(r)}_{ij}| \leq \varepsilon$. Subsequently, the matrix $\boldsymbol{M}^{(r)} = [M^{(r)}_{ij}]$ where $M^{(r)}_{ij} = \sum_{k=1}^j m^{(r)}_{ik}$ is of rank at most $r$ and approximates $\boldsymbol{M}$ such that $\max_{ij} | M_{ij} - M^{(r)}_{ij}| \leq b \varepsilon$.
\end{proposition} For the proof of the Proposition, we refer the reader to the Appendix (Section \ref{sec:appendix_proof_prop}).\\ \smallskip \noindent{\bf Falsifiability.} Proposition \ref{prop:low-rank} suggests that if indeed our model assumption holds in practice, then the data matrix of score trajectories, $\boldsymbol{Y}$, ought to be well approximated by a low-rank matrix. Put another way, its spectrum ought to be concentrated on the top few principal components. This gives us a way to test the falsifiability of our model for the game of cricket, and more generally for any game of interest. In the context of cricket, we consider a matrix comprising over 4700 LOI innings from 1999 to 2017, i.e. as many rows and $300$ columns. Figure \ref{fig:singvals} shows the spectrum of the top 15\% singular values (sorted in descending order) for this matrix. Note that the matrix has been normalized to re-scale all scores to lie in the range $[-1, 1]$. It clearly supports the implication that most of the spectrum is concentrated in the top $8$ principal components. Indeed, we note that over 99.9\% of the ``energy'' in the matrix is captured by the top 8 singular values. We determine this by calculating the ratio of the sum of squares of the top 8 singular values to the sum of squares of all singular values of this matrix. \smallskip \noindent{\bf Implied linear relationship.} The approximately low-rank structure of the mean matrix $\bm$, and hence $\boldsymbol{M}$, per Proposition \ref{prop:low-rank} for the Latent Variable Model, and verified through data, suggests that if we choose a row (or inning) at random, then it is well approximated by a linear combination of {\em few} other rows. We formalize this implication by making the following assumption.
For a new cricket innings, the mean cumulative score performance $M \in \RealsP^b$ can be represented as a linear combination of the mean score performance of other existing innings: \begin{align} \label{eq:assumption2} M = \sum_{i=1}^{n} \beta_i M_i, \end{align} where $\beta_i \in \mathbb{R}$ are weights that depend on the inning of interest and $M_i = [M_{ij}]_{j \in [b]} \in \RealsP^b$. \begin{figure} \centering \includegraphics[width=0.2\textwidth]{singular-value-spectrum.png} \caption{The singular value spectrum for all innings in the dataset (matrix of dimensions 4700 $\times$ 300). Only showing the top 15\% singular values, in descending order.} \label{fig:singvals} \end{figure} \vspace{-.1in} \section{Target resetting} \label{sec:revised_target} \subsection{Algorithm: Rationale \& Details} \label{sec:revised_target_algo} \smallskip \noindent{\bf The question.} Cricket is fundamentally an {\em asymmetric} game in that team one bats for the first half and sets the target, and then the second team bats to chase the target within the allocated balls. Many a time (around 9\% of LOIs) the game gets interrupted in the second half due to rain (and other similar events), requiring the shortening of the game and revising the original target. Formally, let team one set a target of $t$ runs for team two to win. Suppose team two bats for $b' < b$ of the total balls when an interruption happens. Now team two can play till a total of $b_1$, $b' \leq b_1 \leq b$ balls, or an additional $b_1 - b'$ balls. Then, what should the target $t_1$ for team two be in $b_1$ balls? Clearly, $t_1 \leq t$. But, how much smaller? Specifically, there are two scenarios worth discussing: Scenario 1, when $b' = b_1$, i.e. no more play possible; and Scenario 2, when $b' < b_1 < b$, i.e. some more play possible but not all the way. \smallskip \noindent{\bf The ideal goal.} The ideal goal of the target resetting algorithm is that it helps counteract the impact of the intervention.
Put another way, by resetting the target, the outcome of the game should be as if the game was played for the full duration with the original target. If such a goal is achieved then the fraction of times the team batting second wins should remain the same as that when no intervention happens. \smallskip \noindent{\bf Failure of Duckworth-Lewis-Stern (DLS).} An algorithm for target resetting is, effectively, as essential as the rules for playing the game itself. The incumbent solution adopted in LOI cricket games is by Duckworth-Lewis-Stern \cite{dl1, dl2}. And indeed, the authors in \cite{dl2} argue that a target resetting algorithm ought to satisfy the invariance of the fraction of times the team batting second wins whether the intervention happens or not. Given that the DLS method has been used for more than a decade now, we have enough data. Specifically, we consider 1953 LOI games during the period from 2003 to 2017. From among these, 172 games were interrupted and the DLS method was applied to reset the target / decide the winner. Among them, 59\% were won by the team batting second. In contrast, for 1781 games that were completed without using the DLS method, the team batting second won 51\% of the games. Therefore, using the $\chi^2$ test of independence, we can reject the null hypothesis that {\em DLS does not create an advantage for the team batting first or second} with a p-value of $0.048$, which is statistically significant. Table \ref{table:dls} summarizes the data. This clearly motivates the need for a different target resetting method. \begin{table}[H] \centering \begin{tabular}{|| c | c | c | c ||} \hline & Batting First Won & Batting Second Won & Total \\ [0.5ex] \hline\hline No DLS & 865 & 916 & 1781 \\ \hline DLS & 70 & 102 & 172 \\ \hline \hline Total & 935 & 1018 & 1953 \\ \hline \end{tabular} \caption{Contingency table for the $\chi^2$ test of independence for 1953 LOI games that produced a result.
The null hypothesis is that the use of the DLS method has no effect on the distribution of games won by teams batting first or second. The p-value is 0.048, meaning that we can reject the null hypothesis.} \label{table:dls} \end{table} \noindent{\bf Scenario 1: No more play.} We start with the scenario where the team batting second plays for $b' < b$ balls when the interruption happens and no more play is possible, i.e. $b_1 = b'$. The question is, what should the target $t_1$ be for the team batting second to have achieved at this stage to win the game? Suppose the team batting second has made $t'$ runs losing $w'$ wickets at the intervention, $b'$. One way to decide whether the team has won or not is to utilize the forecasting algorithm proposed in Section \ref{sec:score_forecasting}. Specifically, forecast the score that the team batting second will make by the end of the innings, and if it gets to the target $t$, then the team batting second wins, else it loses. While this approach makes sense, from a cricketing or rule-setting perspective it has a flaw. For example, a good forecasting algorithm (such as ours) will forecast differently, and justifiably so, depending upon {\em how} the state of $t'$ runs and $w'$ wickets lost is reached in $b'$ balls. Therefore, it is likely that in one scenario, team two is declared the winner while in another scenario team two is declared the loser even though at $b'$ balls, in both scenarios, the state of the game is identical. To address this, we need to come up with a forecasting algorithm that is {\em agnostic} of the trajectory leading up to the current state. Formally, we wish to estimate the score at $b'$ balls, given that the target of $t$ is achieved by $b$ balls. Let $Y_{0j}$ denote the score at ball $j$ and $W_{0j}$ denote the wickets lost. Then, we wish to estimate ${\mathbb E}[Y_{0b'} | Y_{0b} = t]$ and ${\mathbb E}[W_{0b'} | Y_{0b} = t]$. And we wish to do so for any $b' < b$.
This estimation will not be impacted by the on-going innings or the team's performance leading up to $b'$ but instead by the collective historical data that captures the ``distribution'' of the game play. We propose to evaluate this using a non-parametric nearest-neighbor method. Specifically, given $n$ historical trajectories, for any $\Delta \geq 0$, let $ \mathbb{S}(t, \Delta) = \Big\{i \in [n]: Y_{ib} \in [t, t+\Delta]\Big\}.$ Then, for any $b' < b$, \begin{align} \hat{\mathbb{E}}[Y_{0b'} | Y_{0b} = t] \approx \frac{1}{|\mathbb{S}(t, \Delta)|} \sum_{i \in \mathbb{S}(t, \Delta)} Y_{ib'}. \nonumber \\ \hat{\mathbb{E}}[W_{0b'} | Y_{0b} = t] \approx \frac{1}{|\mathbb{S}(t, \Delta)|} \sum_{i \in \mathbb{S}(t, \Delta)} W_{ib'}. \label{eq:knn1} \end{align} This gives us the ``average path to victory'' both in terms of runs scored and wickets lost. By definition, it is monotone since we are taking the average of monotonic paths from actual historical innings. Now, to compare the score of $t'$ with $w'$ wickets at $b'$ balls with this target, we have to make sure that there is parity in terms of the number of wickets lost. For example, the estimator per \eqref{eq:knn1} may suggest a score of $\hat{t}$ with $\hat{w}$ wickets at $b'$ balls as needed for achieving target $t$ at $b$ balls. And it may be that $\hat{w} \neq w'$. Then, we need to {\em transform} the score $\hat{t}$ with $\hat{w}$ wickets to a score that is equivalent for $w'$ wickets. Such a transformation is not obvious at all. Next, we provide some details on the choice of such a transformation. \smallskip \noindent{\bf A non-linear wicket resource transformation.} We refer to $\boldsymbol{g}(\hat{t}, \hat{w}, \delta b', w')$ as the function needed to transform the score of $\hat{t}$ for the loss of $\hat{w}$ wickets with $\delta b' = b - b'$ balls remaining to an equivalent score for the loss of $w'$ wickets.
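The estimator in \eqref{eq:knn1} is a simple conditional average over neighboring historical innings; the following is a minimal sketch with synthetic trajectories (all parameters below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)
n, b = 2000, 300                  # historical innings and balls per inning (synthetic)

# Synthetic historical data: per-ball runs and a crude wickets process.
runs = rng.poisson(0.85, size=(n, b))
Y = np.cumsum(runs, axis=1)                       # score trajectories Y_ij
W = np.cumsum(rng.random((n, b)) < 0.02, axis=1)  # wicket trajectories W_ij
W = np.minimum(W, 10)

def average_path(Y, W, t, delta, b_prime):
    """Estimate E[Y_{0b'} | Y_{0b} = t] and E[W_{0b'} | Y_{0b} = t]
    by averaging historical innings whose final score lies in [t, t+delta]."""
    S = np.flatnonzero((Y[:, -1] >= t) & (Y[:, -1] <= t + delta))
    return Y[S, b_prime - 1].mean(), W[S, b_prime - 1].mean()

# Average path to victory at b' = 180 balls for a target of t = 260.
t_hat, w_hat = average_path(Y, W, t=260, delta=15, b_prime=180)
```

The returned pair is the average-path value $(\hat{t}, \hat{w})$ at $b'$ balls, which the transformation $\boldsymbol{g}$ then adjusts for wicket parity.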
$\boldsymbol{g}(\hat{t}, \hat{w}, \delta b', w')$ cannot be linear in terms of the wickets lost or balls remaining because the effect of wickets lost (or remaining) and balls consumed (or remaining) is not linear. For instance, the top few batters tend to play as specialist batsmen, and having lost 5 out of 10 wickets means the loss of more than 50\% of the batting resources. Similarly, in a typical cricket innings teams tend to accelerate more towards the latter half of an innings, meaning that at the halfway mark, i.e. $b/2$ balls, teams effectively consider more than half their innings still remaining. $\boldsymbol{g}(\hat{t}, \hat{w}, \delta b', w')$ can be determined in two ways: parametric or non-parametric. An example of a parametric model is the DLS method, which encodes information about a team's resources remaining as a function of the wickets and balls remaining. An alternate approach is to estimate the resources remaining using a non-parametric regression, e.g. random forest regression. In such a regression the wickets lost and balls remaining at each point in an innings, i.e. $w'$ and $\delta b' = b - b'$, are used as ``features'' and the percentage of the target met, i.e. $y'/t$ where $y'$ is the runs scored at $b'$ balls, is the output variable. The regression model can then help complete the entire table of resources achieved (or remaining). In this work we use the DLS table of resources remaining as a proxy for $\boldsymbol{g}(\hat{t}, \hat{w}, \delta b', w')$, but with corrections. Given that we have established that the DLS method introduces a bias in favor of the chasing team, we correct the ``resources remaining'' percentages in a data-driven manner for each game. Specifically, let the target be represented by $t$, the time of interest by $b'$, the runs scored as $\hat{t}$, wickets lost as $\hat{w}$, the desired number of wickets lost as $w'$, and the DLS resources remaining percentage be $d[\hat{w}][\delta b']$.
Then, our correction to the DLS entry would take the following form: $t' = \boldsymbol{g}(\hat{t}, \hat{w}, \delta b', w') = t \cdot \big(1 - (1 - \hat{t}/t) \frac{d[w'][\delta b']}{d[\hat{w}][\delta b']} \big)$, where $d[w'][\delta b']$ is the DLS resources remaining percentage for having lost $w'$ wickets with $\delta b' = b - b'$ balls remaining. That is, the runs still required, $t - \hat{t}$, are scaled by the ratio of the resources remaining with $w'$ versus $\hat{w}$ wickets lost, so that $t' = \hat{t}$ whenever $w' = \hat{w}$. \smallskip \noindent{\bf Scenario 2: Some more play.} We now consider the scenario where some more play will happen, i.e. $b' < b_1 < b$. Assume the team batting second has made $y'$ runs for the loss of $w'$ wickets at $b'$ balls. Using the average path to victory and the non-linear resource transformation, $\boldsymbol{g}(\cdot)$, we obtain a target, say $t'$, for $w'$ wickets lost at $b'$ balls for achieving the eventual target of $t$. Now, we need to produce the revised target that needs to be achieved at $b_1 < b$ balls by this team. The question is how to do it. One approach is to continue using the average path to victory as before and use it to compute the 10-wicket equivalent target at $b_1$ balls. But that is likely to be an {\em easier} target than what it should be. The key aspect that needs to be incorporated in setting such a revised target is what we call the target ``chasing'' mentality. In cricket (like most sports), it is well observed that teams play more aggressively near the end of the game. Therefore, by bringing the end of the game closer due to $b_1 < b$, we are effectively taking the team batting second from their normal ``chasing mentality'' at $b'$ balls closer to an end-of-the-game ``chasing mentality''. Therefore, it stands to reason that we should not simply utilize the average path to victory, which is calculated based on the normal ``chasing mentality''.
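As an aside, the wicket-resource transformation $\boldsymbol{g}$ described earlier admits a direct implementation; one consistent reading is that the runs still required are scaled by the ratio of resources remaining. In the sketch below the resource table is purely hypothetical, standing in for the corrected DLS table.

```python
# Hypothetical "resources remaining" percentages d[w][balls_remaining],
# standing in for the DLS table; the values are illustrative only.
D = {
    (3, 120): 32.5,   # 3 wickets down, 120 balls remaining
    (5, 120): 26.1,   # 5 wickets down, 120 balls remaining
}

def g(t_hat, w_hat, db, w_prime, t, d=D):
    """Transform a score of t_hat for the loss of w_hat wickets (db balls
    remaining) into an equivalent score for the loss of w_prime wickets,
    by scaling the runs still required by the resources-remaining ratio."""
    ratio = d[(w_prime, db)] / d[(w_hat, db)]
    return t * (1 - (1 - t_hat / t) * ratio)

# With more wickets lost, the equivalent par score should be higher.
t_eq = g(t_hat=160, w_hat=3, db=120, w_prime=5, t=260)
```

By construction, $g$ reduces to the identity when $w' = \hat{w}$, and demands a higher equivalent score when more wickets have been lost.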
We address this challenge by simply modifying the ``average path to victory'' calculation from $b'$ to $b_1$, or for $\delta b_1' = b_1 - b'$ balls, by effectively using historical trajectories from $b- \delta b_1'$ balls to $b$ balls rather than from $b'$ to $b_1$ balls to calculate the average path to victory value at $b_1$ balls. That is, we find all trajectories amongst the $n$ historical innings where at $b- \delta b_1'$ balls, roughly $w'$ wickets are lost. We then take the average over these trajectories and {\em stitch} it beyond $t'$ at $b'$ balls to obtain the target at $b_1 = b' + \delta b_1'$ balls. Formally, let \begin{align} \label{eq:end_overs_context_set} \mathbb{T}(\delta b_1', w', t, \Delta) = \Big\{i \in \mathbb{S}(t, \Delta): W_{i (b-\delta b_1')} \in [w'-1, w'+1]\Big\} \end{align} Then, to reset the target at $b_1 = b' + \delta b_1'$ balls starting with a target of $t'$ for the loss of $w'$ wickets at $b'$ balls, we estimate the increment in the target as \begin{align}\label{eq:end_overs_context_target} \delta t(b_1) & = \frac{1}{|\mathbb{T}(\delta b_1', w', t, \Delta)|} \sum_{i \in \mathbb{T}(\delta b_1', w', t, \Delta)} \big(Y_{ib} - Y_{i(b-\delta b_1')}\big). \end{align} That is, the revised target is $t' + \delta t(b_1)$ at $b_1 = b' + \delta b_1'$ balls, starting at the revised target of $t'$ with $w'$ wickets at $b'$ balls. \smallskip \noindent{\bf Summary of the algorithm.} \smallskip \noindent {\em Step 1. Average Path to Victory (Scenarios 1 and 2).} As outlined earlier, we first determine the set of ``neighbors'', $\mathbb{S}(t, \Delta)$, referred to as $\mathcal{S}$ for simplicity. Specifically, the neighbors are all innings where the final score $y_{ib} \geq t$. We then scale all trajectories in $\mathcal{S}$ such that they end exactly at the target, i.e. $y_{ij}^{s} = y_{ij} t / y_{ib}, \forall{i} \in \mathcal{S}$, and similarly for the wickets trajectories, $w_{ij}^{s} = w_{ij} t /y_{ib}, \forall{i} \in \mathcal{S}$.
We then compute the average path to victory in terms of runs and wickets: $$a_y[j] = \frac{1}{| \mathcal{S}|} \sum_{i \in \mathcal{S}} y_{ij}^{s}, \forall{j} \in [b], \quad a_w[j] = \frac{1}{| \mathcal{S} |} \sum_{i \in \mathcal{S}} w_{ij}^{s}, \forall{j} \in [b].$$ The $w'$ wicket equivalent target at $b'$, used in Scenario 1, is $t' = \boldsymbol{g}(a_y[b'], a_w[b'], \delta b', w')$, where $\delta b' = b - b'$ and $a_y[b'], a_w[b']$ are the average runs and wickets at $b'$. \smallskip \noindent {\em Step 2. Revised Target Trajectory (Scenario 2 only).} For Scenario 2, we now determine the revised target trajectory for the remainder of the innings, i.e. $\delta b_1' = b_1 - b'$. First, we determine the offset: the target, $t'$, at the point of intervention, $b'$. This $w'$-equivalent target at $b'$ is the starting point of the revised target path. What remains is to ``stitch'' the context from the last $\delta b_1'$ balls of historical innings in $\mathcal{S}$ to determine the revised 10-wicket target at $b_1$. This contextual path is defined in Equations (\ref{eq:end_overs_context_set}) and (\ref{eq:end_overs_context_target}), denoted by $\delta t(b_1)$, with $\mathbb{S}(t, \Delta)$ replaced, for simplicity, by $\mathcal{S}$, as defined in Step 1. Next, we scale this contextual path such that it never exceeds the original target, $t$. Specifically, $\delta t(b_1)^{s} = \delta t(b_1) \cdot \frac{t}{t' + \delta t(b)}$. Therefore, the revised target for each point $b_1$ where $b' \leq b_1 \leq b$ is determined by the following relationship: \begin{align} \label{eq:revisedtarget} t(b_1) = t' + \delta t(b_1)^{s} \end{align} In the event that $t(b_1) > t$, it can be corrected to lie below $t$ for $b_1 < b$. The correction should be a function of the balls remaining, $b - b_1$, to ensure a smooth increase to $t$ in $b_1 = b$ balls.\\ \smallskip \noindent {\em Step 3.
Deciding a Winner (Scenarios 1 and 2).} Let the team batting second have made $y'$ runs for the loss of $w'$ wickets at the point of intervention, $b'$. For both Scenarios 1 and 2, we now have the target trajectories. In Scenario 1, the team batting second is the winner if $y' \geq t'$. In Scenario 2, let the team batting second have made $y_1$ runs by the new maximum balls, $b_1$. The team batting second wins if $y_1 \geq t(b_1)$. In the event that in Scenario 2, there is another intervention at some point $b''$ where $b' < b'' < b_1$, and the innings is shortened even further to $b_2$ where $b'' \leq b_2 \leq b_1$, then we are back to the original Scenario 1 ($b_2 = b''$) or Scenario 2 ($b'' < b_2 < b_1$) with the maximum balls in the innings, $b$, replaced by $b_1$; the starting point moved from $0$ balls to $b'$ and the intervention point moved from $b'$ to $b''$. \noindent{\bf Fine prints.} Some remarks are in order. \smallskip \noindent {\em No First-Innings Bias.} An important feature of the algorithm is that the chasing team's target trajectory is not biased by the path taken by the team batting first. The only information from the first innings that is taken into account is the target, $t$ in $b$ balls. How the team batting first got to their final score should not bias the target computations for the chasing team. Therefore, we take no other information from the first innings into account. \smallskip \noindent{\em Neighbors.} We scale the ``neighbor'' target trajectories to end exactly at the target. Neighbors are those trajectories that end equal to or above the target, $t$, at the end of the innings. We justify this choice of ``neighbors'' in the Appendix (Section \ref{sec:dominance}). In rare instances, the team batting first will score an unprecedented amount of runs such that the number of past innings which end up exceeding the target are too few.
In such a circumstance or any other instance where the neighbors are too few, we propose using the set of \textit{all} innings in the dataset and scaling them up/down accordingly. \subsection{Experiments and Evaluation} \smallskip \noindent{\bf Comparison: Duckworth-Lewis-Stern.} Given that we have established that the DLS method is biased in favor of the teams batting second (see Section \ref{sec:revised_target_algo}), the natural comparison for our algorithm is the DLS method. \smallskip \noindent{\bf Performance Metrics.} The metrics under consideration are those that allow us to study the relative or absolute bias introduced by both algorithms. In Scenario 1 (as detailed next), the metric is simply the count (or proportion) of games won by the teams batting first or second with and without each of the methods. In Scenario 2 (also detailed below), we consider relative metrics which allow us to compare the bias of our algorithm's revised targets relative to the DLS method's revised targets. \smallskip \noindent{\bf Data and Setup.} In order to evaluate the performance of any target resetting algorithm such as our method, the ideal scenario would involve a trial of the method in actual cricket games. However, this is not possible. We must, instead, rely on designing retrospective experiments and simulations which can help with evaluating the method. For all experiments discussed in this section, we use a dataset of about 230 recent LOI games which produced results and lasted longer than 45 overs in the second innings, starting in the year 2013. For the selection of the neighboring trajectories in our algorithm, we use LOI innings from the year 1999 onward. We consider the following two setups: {\em Scenario 1}. In this experiment, we randomly split the candidate games into two sets: $\mathcal{S}_A$ and $\mathcal{S}_B$.
For games in $\mathcal{S}_A$, we introduce an arbitrary intervention between overs $20$ and $45$ and assume that no more play would have been possible. We ask both methods to declare a winner based on the state at the point of the arbitrary intervention. We determine the counts of teams batting first and second that are declared winners. For $\mathcal{S}_B$, we assume no interventions and determine the counts of teams batting first and second that actually won the game. Using these counts, we conduct a $\chi^2$ test of independence where the null hypothesis is that there is no difference in the proportions of games won by teams batting first or second with or without the target resetting algorithm. We note the p-value and then repeat the same experiment 500 times. Under the null hypothesis the p-value distribution ought to be uniformly distributed, by definition. {\em Scenario 2}. We have established that the DLS method is biased in favor of the team batting second when the target is revised (Scenario 2). It is impossible to use historical data for actual LOI cricket games to retrospectively determine how the outcome might have been different had a different revised target been proposed for the chasing team. Therefore, we use the following experiment with historical data to determine whether a candidate method is poised to have any effect on the outcome of games under Scenario 2: We introduce a hypothetical intervention at $b' = 180$ balls (30 overs). We then assign an arbitrary revised number of overs, $b_1 = 270$ balls (45 overs). For each innings, we determine the revised remaining target runs at $b_1$ using the DLS method and our algorithm. We determine the subset of historical innings where the same number of wickets were lost at $b - (b_1 - b')$ balls as those at the intervention for the innings under consideration, i.e. $w'$.
Next, we determine the proportion of innings in the subset where the revised remaining target runs were scored in the final $b_1 - b'$ balls, i.e. from ball $b - (b_1 - b')$ to ball $b$. For any method that claims to reduce the bias produced by the DLS method, we would expect the proportion of the subset of innings that achieve the revised remaining target to be \textit{lower} than that for the DLS method. Finally, we also determine the average percentage of the DLS revised target by which our algorithm's revised targets are greater or lower than the DLS revised targets. We determine this for each over (ball) between the intervention, $b'$, and the original maximum number of overs, $b$. \smallskip \noindent{\bf Results.} For the Scenario 1 experiment, where $b' = b_1$, the resulting distribution of p-values is shown in Figure \ref{fig:pvalsdls2}. We note that for our algorithm the distribution appears fairly uniform (mean = 0.49, median = 0.47) while for the DLS method there is a slight bias to the left (mean = 0.41, median = 0.39). This hints at the DLS method being biased under Scenario 1, similar to what was established in Section \ref{sec:revised_target_algo}, though there is little evidence to conclude that either method suffers from an obvious bias in this scenario. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{p-val-dist.png} \caption{Distribution of p-values for the bootstrapped Chi-Squared test of independence for experiments with Scenario 1.} \label{fig:pvalsdls2} \end{figure} Next, we consider the experiment for Scenario 2 where $b' < b_1 < b$, i.e. the chasing team is provided with a revised target to chase in fewer than the maximum number of overs.
Figure \ref{fig:prop_dls_ours} shows the distribution of the differences of proportions of the subset of the historical innings that exceed the revised remaining target prescribed by our algorithm vs the DLS method. It appears to suggest that at $b_1 = 270$ balls (45 overs) when the intervention is at $b' = 180$ balls (30 overs), our algorithm's revised targets are almost always higher than those proposed by the DLS method, which should reduce the bias in favor of chasing teams experienced in LOI games. We also conduct a paired-sample t-test for the two series produced by our algorithm and the DLS method. The test rejects the null hypothesis of the two series being statistically the same with a p-value of $10^{-33}$. \begin{figure} \centering \includegraphics[width=0.25\textwidth]{dls-vs-ours-proportions_final.png} \caption{\small Distribution of differences in proportions of historical innings that exceed the revised remaining number of runs prescribed by our algorithm vs the DLS method for over 230 recent LOI games. The mean of the differences is -3.3\% while the standard error of the mean is 0.1\%. } \label{fig:prop_dls_ours} \end{figure} Finally, Figure \ref{fig:prop_increase} shows that the average upwards shift in revised targets produced by our algorithm compared to the DLS method is fairly consistent across all revised maximum durations, $b_1$, where $(b' < b_1 < b)$. The plot shows the average percentage runs by which our algorithm's revised targets exceed those produced by the DLS method for all overs possible after the intervention. In this experiment, $b' = 30$ overs and $b_1$ is allowed to vary from over 31 to 49. The targets produced by both algorithms are the same for 50 overs. The plot underscores that our algorithm should be able to reduce the bias produced by the DLS method for all revised durations. However, note that the closer $b_1$ is to the original maximum duration, $b$, the smaller the percentage increase, which appeals to our intuition.
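The $\chi^2$ test of independence used in these experiments (and for Table \ref{table:dls}) is standard; the following self-contained check reproduces the test on the contingency table reported earlier, using only the Python standard library.

```python
from math import erf, sqrt

# Contingency table from Table 1: rows are (No DLS, DLS),
# columns are (batting first won, batting second won).
obs = [[865, 916], [70, 102]]

row = [sum(r) for r in obs]                 # [1781, 172]
col = [sum(c) for c in zip(*obs)]           # [935, 1018]
N = sum(row)                                # 1953

# Pearson chi-squared statistic against the independence hypothesis.
chi2 = sum((obs[i][j] - row[i] * col[j] / N) ** 2 / (row[i] * col[j] / N)
           for i in range(2) for j in range(2))

# For 1 degree of freedom, chi2 is the square of a standard normal,
# so the p-value is 2 * (1 - Phi(sqrt(chi2))).
p = 2 * (1 - 0.5 * (1 + erf(sqrt(chi2 / 2))))
print(round(chi2, 2), round(p, 3))   # p matches the ~0.048 reported above
```

The same computation is available as `scipy.stats.chi2_contingency` (with the continuity correction disabled).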
\begin{figure} \centering \includegraphics[width=0.25\textwidth]{prop_increase_over_dls2.png} \caption{\small The average percentage runs (with standard error bars) by which our algorithm's revised targets exceed those produced by the DLS method for all overs between $b'$ and $b$.} \label{fig:prop_increase} \end{figure} \subsubsection{Case Studies} \smallskip \noindent{\bf Scenario 1: $b_1 = b'$ (No More Play Possible after Intervention).} We consider the game played between Pakistan and New Zealand on 01/16/2018. At one stage New Zealand had lost three wickets for just 90 runs in 16 overs and 2 balls (98 balls) chasing 263. With Pakistan in the ascendancy then, it took a couple of good batting partnerships from New Zealand to score the win. Figure \ref{fig:revision1} shows the resulting revised targets that New Zealand would have to make to win the game if the game stopped at any point during the innings \textit{and} no more play was possible (Scenario 1). The dotted green line is the \textit{average path to victory} while the dotted-dashed gray one is the target if the same number of wickets were lost as in the actual innings, at each point. The actual innings is shown as the solid red line. The chasing team would be declared a winner whenever the red line was above the dotted gray line. As we would expect, the red line dips below the gray line starting around the 17th over mark, signaling that Pakistan would have been declared the winners if the game stopped at that time. However, around the 38th over, New Zealand were back in the ascendancy and the red line overtakes the gray line, indicating that after around the 38th over if the innings stopped and no more play was possible, then New Zealand would be declared winners. The gap between the red and gray lines is a proxy for how comfortably the ascendant team is ahead. These observations agree with the actual state of the game and point to a realistic target revision methodology.
Note that even though the game ended at the 275th ball point, we show the trajectories for the entire 50 over (300 ball) period. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{pak-nz-chase-revised.png} \caption{\small New Zealand vs Pakistan, 01/16/2018. New Zealand's actual innings (red), average path to victory (green) and revised target (dotted gray).} \label{fig:revision1} \end{figure} For another example of a famous game decided by the DLS method, please see the Appendix (Section \ref{sec:appendix_target_resetting_casestudies}).\\ \smallskip \noindent{\bf Scenario 2: $b' = 30$ overs, $b_1 = 45$ overs.} Next, we consider how our algorithm would work under Scenario 2, where the chasing team bats the first $b'$ balls assuming they will have all $b= 300$ balls (50 overs) to chase the target. However, there is an intervention at the $b'$ ball mark and only $b_1 < b$ balls are now possible. We use the World Cup 2011 Final between India and Sri Lanka to illustrate how our algorithm would work and its contrast to the DLS method. Figure \ref{fig:indsri_revised} shows the average path to victory in green for India when chasing the target of 275 in 50 overs (300 balls). We introduce a hypothetical intervention at the $b' = 180$ balls (30 overs) mark and assume that play will resume with a maximum of 270 balls (45 overs) possible instead of the original 300 balls (50 overs). Figure \ref{fig:indsri_revised} shows the original average path to victory (solid green line) while the dotted black line shows the targets had the revised maximum number of overs, $b_1$, been allowed to vary between $[35, 50]$. Finally, the equivalent of the black line is the dotted-dashed blue line, which shows the revised targets produced by the DLS method. The gap between the two methods is a feature of our algorithm.
It is a consequence of taking the context of the last few overs into account (as discussed in Section \ref{sec:revised_target_algo}), where teams can be expected to score significantly more runs towards the end of an innings. This ``contextual'' revision of the target is what allows our algorithm to correct for the bias introduced in favor of the chasing teams by the DLS method. \begin{figure} \centering \includegraphics[width=.35\textwidth]{ind_sri_revised_30_45.png} \caption{\small India vs Sri Lanka, World Cup 2011 Final. Average path to victory (solid green), our revised targets as the revised maximum overs $b_1$ varies over $[35, 50]$ (dotted black), and the corresponding DLS revised targets (dotted-dashed blue).} \label{fig:indsri_revised} \end{figure} \vspace{-.1in} \iffalse We now turn our attention to a related problem: given the current state of the second innings in an LOI cricket game, is it possible to determine which of the two teams, the one who batted first or the one currently batting to chase the target, is in the ascendency? In cricket, the problem is of great significance. Specifically, in games where it is no longer possible to play out the maximum allocation of overs in an innings, a solution to this problem can also help declare the winner of the game. These circumstances often arise in LOI cricket games, as outlined in Section \ref{sec:revised}. As an example, consider an LOI game where the team batting second was set a target of 250 runs in 50 overs (300 balls) but experienced a rain-interruption at the halfway mark (25 overs) when the score was 130 for the loss of 5 wickets. If there was no more play possible, which team should be declared a winner? If only 10 more overs were possible, instead of the originally remaining 25 overs, what should the new revised target be such that the adverse effect of the intervention, i.e. the loss of 15 overs, is nullified without leaving either of the two teams in a better or worse position than at the point of the intervention?
One may be tempted to use the forecasting algorithm introduced earlier in this work as a means to determine which team is in the ascendency based on whether the forecasted score at the end of the innings will reach the target, or not. However, that has two shortcomings: (i) it is not obvious how to use the forecasting algorithm to produce target revisions in the event that the innings resumes for a shortened length; (ii) the forecasting algorithm is likely to produce different forecast estimates based on \textit{how} the chasing team got to their state at the point of the intervention, which can be considered unsatisfactory and confusing by cricketers and audiences. Therefore, the goal is to produce revised targets for all possible numbers of remaining overs by taking into account only the current state, i.e. runs scored and wickets lost at the point of the intervention. The International Cricket Council currently uses the Duckworth-Lewis-Stern (DLS) method to produce revised targets and declare a winner (\cite{dl1, dl2}) in games where one (or both) innings cannot be completed due to weather, ground conditions etc. Specifically, assume for simplicity that originally the team batting first has made $r_1$ runs in $b (=300)$ balls. This means that the chasing team has to make $t = r_1 + 1$ runs in a maximum of $b$ balls. The chasing team also has a maximum of $w$ wickets ($w = 10$). However, there is an intervention at $b_1 < b$ balls when the chasing team has made $r_{b_1}$ runs for the loss of $w_{b_1}$ wickets. After the interruption it is only possible for the chasing team to face a maximum of $b_2$ balls where $b_1 \leq b_2 \leq b$. A target resetting algorithm such as the DLS method can help with the following scenarios: \begin{enumerate} \item \textbf{Scenario 1: No More Play Possible ($b_1=b_2$).}\label{scenario:1} If no more play was possible after the intervention at $b_1$ balls, decide whether the chasing team can be declared to have won, or lost, the game.
\item \textbf{Scenario 2: Innings is Shortened ($b_1 < b_2 < b$)}. \label{scenario:2} An intervention occurs at $b_1$ balls where $0 \leq b_1 < b$ balls, and only $b_2$ balls are now possible in the innings, where $b_1 < b_2 < b$. The chasing team has lost $w_{b_1}$ wickets at the intervention and is still allowed to use all their remaining wickets for the shortened chase. A target resetting method must provide revised targets for each possible value of $b_2 \in \{b_1 +1, \cdots , b - 1 \}$. \end{enumerate} In both scenarios above, for the first $b_1$ balls the chasing team is pursuing a target of $t$ in the maximum possible $b$ balls with $w$ wickets. In scenario \ref{scenario:1}, a winner is declared taking into account the minimum score the team is expected to have scored by the intervention ($b_1$) for the loss of $w_{b_1}$ wickets. In scenario \ref{scenario:2}, revised targets are determined that take into account both the wickets and the new maximum number of balls remaining. As argued in Section \ref{sec:revised}, the target revision cannot be a linear function of the number of overs and wickets completed (or remaining). This is because the chasing team has the same maximum wickets, $w$, for a shortened innings as it did for the original innings. Additionally, note that the loss of the first five wickets, for example, constitutes the loss of greater than 50\% of the batting resources for the batting teams. Therefore, the revised targets need to be scaled appropriately (non-linearly) to be considered equivalent. The DLS method is the incumbent and most widely adopted method to achieve this equivalence. \subsection{Is DLS \textit{fair}?} \label{sec:fairDls} The authors who introduced the DLS method argue in \cite{dl2} that it is \textit{fair} because the win/loss ratio between teams batting first or second is not statistically different in games that use the DLS method or not. Notably, their test was conducted on games during the early years of the DLS method.
Cricketers have long felt that the DLS method appears to introduce a bias in favor of the chasing team. To put this claim to the test, we perform the following experiment using actual LOI games: We conducted a Chi-Squared Test of Independence for LOI games spanning a recent fourteen year period from 2003-2017 (inclusive). This data includes 1953 LOI games which produced a result. Our null-hypothesis was that there is no statistical difference between the proportion of games won by teams chasing with or without the application of the DLS method. Of the 1953 games in the dataset, 172 were decided based on the DLS method. Among these games, 59\% were won by the chasing side. The remaining 1781 games were completed without using the DLS method, with the chasing team's winning proportion at 51\%. Therefore, at the 5\% significance level, we have enough statistical evidence to reject the null, with the p-value being 0.048. Table \ref{table:dls} summarizes these results. This experiment allows us to statistically confirm what cricketers have felt for a while: the DLS method appears to introduce a bias in favor of the chasing team. A target resetting method must not be responsible for such a bias. In what follows, we present an alternative to the DLS method which corrects for the bias in the DLS method and is based on the Latent Variable Model (LVM) of cricket innings introduced earlier in this work. \begin{table}[H] \centering \begin{tabular}{|| c | c | c | c ||} \hline & Batting First Won & Batting Second Won & Total \\ [0.5ex] \hline\hline No DLS & 865 & 916 & 1781 \\ \hline DLS & 70 & 102 & 172 \\ \hline \hline Total & 935 & 1018 & 1953 \\ \hline \end{tabular} \caption{Contingency table for the Chi-Squared test of independence for 1953 LOI games that produced a result. Games are split based on whether the team batting first or second won the game and across the games where DLS was used or not. The p-value is 0.048 and the null-hypothesis, i.e.
there is no difference in the distribution of games won by teams batting first or second when using the DLS method or not, can be rejected at the 5\% significance level. } \label{table:dls} \end{table} \subsection{Target Resetting as Conditional Expectation} Recall that in Section \ref{sec:cricketmodel} we modeled the forecast for the remainder of an innings as the estimate of the conditional expectation given the past: \begin{align*} \mathbb{E}[Y_{ij} | Y_{ib_1}, Y_{i, b_1 - 1}, \ldots , Y_{i1}], \forall{j > b_1}, \end{align*} where $b_1$ is the point of intervention and $b_1 < j \leq b$. However, this approach cannot be used to produce revised targets, i.e. when the duration of an innings needs to be shortened. The revised target should reflect a reasonable path to victory and not suffer from any bias from the way a team approaches its innings. It should, instead, only be a function of the final target set and the number of overs remaining in an innings. In other words, the target resetting needs to be agnostic of the team's current trajectory and the trajectory of runs in the first innings. In light of this discussion, we view target resetting as the estimation of a different conditional expectation: \begin{align} \label{eq:targetcondexp1} \mathbb{E}[Y_{ij} | Y_{ib} = t], \end{align} where $b_1 < j \leq b_2 \leq b$, $t$ is the target set by the team batting first, $b_1$ is the point of intervention and $b_2$ is the revised shortened duration of the innings. Unlike forecasting, (\ref{eq:targetcondexp1}) is the conditional distribution of the trajectory given a point in the future. We refer to $\mathbb{E}[Y_{ij} | Y_{ib} = t]$ as the ``average path to victory'' for the team batting second. It is effectively a trajectory which will lead the team to overhaul the target, in expectation, by the end of the full length of the innings.
\subsection{Connections to Regression} \label{sec:connection_regression} (\ref{eq:targetcondexp1}) is akin to solving a regression problem where $Y_{ib}$ is a \textit{feature} while $Y_{ij}$ is the output variable. As in least squares problems, (\ref{eq:targetcondexp1}) is the optimal solution when estimating a random variable (score at time $j$) given the model feature(s) (final score). Instead of using a parametric estimation method like least squares regression, we can use non-parametric Nearest Neighbors Regression to estimate $Y_{ij}$ conditioned on the final score. Define a set $\mathbb{S}$ as the nearest neighbors of the target trajectory: all trajectories (innings) in our dataset which end \textit{near} the target, $t$. We can then use the following relationship: \begin{align} \label{eq:knn1} \hat{\mathbb{E}}[Y_{ij} | Y_{ib} = t] \approx \frac{1}{|\mathbb{S}|} \sum_{i \in \mathbb{S}} Y_{ij} \end{align} There are several conceptual ways to compute (\ref{eq:knn1}): \begin{enumerate} \item \textbf{Ideally}, we would have access to a limitless amount of data or could simulate a limitless number of innings where the score reached by the end of the innings is exactly $t$, i.e. $Y_{ib} = t$. A sample from a set of all such innings could provide a way for us to estimate a \textit{typical} path leading to victory. However, this is unrealistic. Not only do we not have access to limitless data, we also cannot expect to use bootstrap-like approaches because our dataset might have no past innings which end at the target, $t$. \item \textbf{Approximate} approaches such as $k$-nearest neighbors can provide a data-driven way to generate a large enough set of innings which end up in the vicinity of the final target, $t$. However, similar to the issue stated above, our dataset might not have a large enough number of innings which end up near the target.
\end{enumerate} While the ideal approach is impractical, the data-driven approximate approach provides hope. To overcome the problem of not having enough innings which end up in the vicinity of the target, we use a data-driven approach which can create the \textit{illusion} of a large enough set of candidate innings (neighbors). This approach relies on transforming a large enough set of innings by scaling all candidate trajectories such that they all end {\em exactly} at the target. \subsection{Dominance Property} \label{sec:dominance} We first note that due to the monotonicity properties of runs scored in a cricket innings, we have that if $Y_{ib} > Y_{hb}$ then $Y_{ij} \geq Y_{hj}$ in distribution for two innings $i$ and $h$ and where $j \leq b$. Similarly, if $Y_{ib} < Y_{hb}$ then $Y_{ij} \leq Y_{hj}$ in distribution. We refer to this as the \textit{dominance property} of trajectories. This property tells us that if we have a set of neighbors of the target trajectory where the final score, $Y_{ib}$, is greater (less) than the target score, $t_i$, then the estimated target trajectory for balls $j < b$ will be an upper (lower) bound on the target we are estimating. This insight allows us to detail in Appendix (Section \ref{sec:dominancescaling}) why choosing trajectories that end {\em at or above} the target and then scaling them all to end exactly at the target is the correct choice for selecting a large enough set of neighbors for each target. \subsection{``Average Path to Victory''} Sections \ref{sec:connection_regression} and \ref{sec:dominance} provide us with the tools to create an \textit{illusion} of a large set of approximate trajectories which can then be used to compute a \textit{typical} trajectory ending in victory, i.e. ``average path to victory'', $\hat{\mathbb{E}}[Y_{ij} | Y_{ib} = t]$. Such a path can be computed by taking the sample average of the scaled (neighbor) innings. 
This is akin to an approximate nearest neighbor regression where the prediction is simply the sample average of all items in a set of approximate nearest neighbors. Alternatives such as a non-uniform weighted average can also be used instead. We note that an ``average wickets lost path'' can also be determined in an analogous manner, using the same neighboring innings but scaling each wickets lost path by the same factor as that for the runs. \subsection{Non-Linear Transformation, g} The ``average path to victory'' coupled with the ``average wickets lost path'' provide a prescriptive route to victory, on average. However, if the wickets lost during the innings of interest at every point during the innings are different compared to those prescribed by the ``average wickets lost path'', then we need to transform the ``average path to victory'' to reflect the equivalent runs to be scored when the exact same number of wickets are lost as in the actual innings. As an example, imagine that in the innings of interest the chasing team has made 135 runs for the loss of 3 wickets at the 25 over mark but the prescriptive average path to victory is represented by 125 runs for the loss of 2 wickets at the same point. We need to transform the 125 runs for the loss of 2 wickets to some higher number of runs for the loss of 3 wickets so that the 135 runs actually scored can be compared directly. Further, as argued earlier in this work, such a transformation is non-linear and depends on both the number of wickets lost and the number of balls (overs) remaining. We will refer to this transformation as the function $\boldsymbol{g}(\cdot)$ which takes as its input arguments the current score, corresponding wickets lost, balls remaining and the desired wickets lost to produce a transformed score for the loss of the desired number of wickets.
\subsection{Overs Remaining Context} \label{sec:oversremaningcontext} When considering a revised maximum number of overs, $b_2$, where the intervention happens at $b_1 < b_2$ and the original maximum number of balls available was $b$, the revised average runs and wickets paths to victory must ensure that the context of the final $b_2 - b_1$ balls, i.e. balls in the range $[b - (b_2 - b_1), b]$, is properly accounted for. Specifically, if the intervention happens at $b_1 = 180$ balls and instead of playing the originally remaining 120 balls, the chasing team now has only 60 balls to play, the average paths to victory (both runs and wickets) must use the last 60 balls, i.e. balls in range [240, 300], to provide the new revised path to victory. Furthermore, if the chasing team had lost $s_w$ wickets at the intervention, then the ``neighbors'' used to compute the revised average path to victory must be those innings that have exceeded the target \textit{and} lost $s_w$ wickets at $b - (b_2 - b_1)$ balls. This will allow us to properly take into account the mindset of teams when they have $b_2 - b_1$ balls left to chase a target after having lost $s_w$ wickets, instead of the originally available $b - b_1$ balls after the loss of $s_w$ wickets. \subsection{Algorithm} \label{sec:algorithm2} Our goal is to estimate the ``average path to victory'' and scale it to produce an equivalent trajectory for the wickets lost, $w_j$, in the actual innings by ball $j$. The $w_{j}$ wicket target is necessary to provide an equivalent point of comparison to determine whether the chasing team is in the ascendency, or not, at ball $j$ of the innings.
The inputs to the algorithm are: $y_{ij}:$ historical score trajectories in the dataset (donors); $w_{ij}:$ corresponding historical wickets lost trajectories in the dataset (donors); $t$: the original target; $b_1$: ball at intervention; $b_2$: new revised maximum number of balls; $b$: original maximum balls; $r, s$: current score and wickets lost at intervention for the innings under consideration. We also provide a non-linear transformation in the form of function $\boldsymbol{g}(\cdot)$, which transforms the current score to an equivalent score corresponding to a different number of wickets lost. $\boldsymbol{g}(\cdot)$ takes as input the current score, the corresponding wickets lost for that score, balls remaining, and the desired wickets lost to provide an equivalent score via a non-linear transformation. Our data-driven target resetting approach is summarized in Algorithm \ref{algo3}. \begin{algorithm}[H] \caption{Data-Driven Target Resetting} \label{algo3} \begin{enumerate} \item Let $\mathcal{S} = \{i: y_{ib} \geq t\}$ and let $r_i = \frac{y_{ib}}{t}, \forall{i} \in \mathcal{S}$. \item Let $y_{ij}' = y_{ij} / r_i, \forall{i} \in \mathcal{S}$ and $w_{ij}' = w_{ij} / r_i, \forall{i} \in \mathcal{S}$. \item Average path to victory (runs and wickets): \\ $a_y[j] = \frac{1}{| \mathcal{S} |} \sum_{i} y_{ij}', \forall{j} \in [b]$, \\ $a_w[j] = \frac{1}{| \mathcal{S} |} \sum_{i} w_{ij}', \forall{j} \in [b]$. \item Offset at Intervention: $o_y = \boldsymbol{g}(a_y[b_1], a_w[b_1], b - b_1, s)$. \item Let $\mathcal{S}_1 = \{i: i \in \mathcal{S}$, and $w_{ij} = s$ where $j = b - (b_2 - b_1) \}$. \item Same wickets lost context:\\ $a_y'[j] = \frac{1}{| \mathcal{S}_1 |} \sum_{i} y_{ij}', \forall{j} \in [b]$, \\ $a_w'[j] = \frac{1}{| \mathcal{S}_1 |} \sum_{i} w_{ij}', \forall{j} \in [b]$.
\item Remaining overs context:\\ $a_y^{''}[j] = a_y'[j] - a_y'[b - (b_2 - b_1)], \forall{j} \in [b_1, b_1 +1, \cdots , b]$, \\ $a_w^{''}[j] = a_w'[j] - a_w'[b - (b_2 - b_1)], \forall{j} \in [b_1, b_1 +1, \cdots , b]$. \item Another scaling based on the target: \\ $a_y^{''}[j] = a_y^{''}[j] \cdot (a_y^{''}[b] + o_y) / t, \forall{j} \in [b_1, b_1+1, \cdots , b]$, \\ $a_w^{''}[j] = a_w^{''}[j] \cdot (a_w^{''}[b] + s) / t, \forall{j} \in [b_1, b_1+1, \cdots , b]$. \item Revised average path to victory, starting at offset: \\ $\tilde{a}_y[j] = a_y^{''}[b - (b_2 - j)] + o_y, \forall{j} \in [b_1, b_1+1, \cdots , b_2]$, \\ $\tilde{a}_w[j] = a_w^{''}[b - (b_2 - j)] + s, \forall{j} \in [b_1, b_1+1, \cdots , b_2]$. \item Deciding a Winner: \begin{enumerate}[label=(\alph*)] \item Let $r_j, s_j$ be the runs scored and wickets lost by the team batting second at $b_1 \leq j \leq b_2$. \item Revised target for $s_j$ wickets lost (non-linear transformation):\\ $\tilde{t}_j[s_j] = \boldsymbol{g}(\tilde{a}_y[j], \tilde{a}_w[j], b_2 - j, s_j)$. \item Chasing team wins if $r_j > \tilde{t}_j[s_j]$. Chasing team loses if $r_j < \tilde{t}_j[s_j]$. It is a tie, i.e. no winner, otherwise. \end{enumerate} \end{enumerate} \end{algorithm} \subsection{Algorithm Fine Print} \subsubsection{No First-Innings Bias} An important feature of Algorithm \ref{algo3} is that the chasing team's target trajectory is not biased by the path taken by the team batting first. The only information from the first innings that is taken into account is the target, $t$ in $b$ balls. How the team batting first got to their final score should not bias the target computations for the chasing team. Therefore, we take no other information from the first innings into account. \subsubsection{Average Path to Victory} $a_y, a_w$ are the average paths to victory which denote the average prescribed path in terms of runs and wickets lost.
This should be viewed as the trajectory which, if followed exactly, would lead the chasing team to the target in exactly 50 overs (300 balls). This trajectory is simply the average of all innings in the filtered donor pool (nearest neighbors of the target). Note that this averaging relies on all innings in the donor pool to be scaled such that they reach the target in exactly 50 overs (300 balls). \subsubsection{No Neighbors Above the Target} In rare instances, the team batting first will score an unprecedented amount of runs such that the number of past innings which end up exceeding the target are too few. In such a circumstance or any other instance where the donor pool (neighbors) are too few, we propose using the set of \textit{all} innings in the dataset and scaling them up/down accordingly. We justify this recommendation in Section \ref{sec:dominancescaling}. \subsubsection{Non-Linear Transformation, $g$} A possible candidate for the non-linear transformation function, $\boldsymbol{g}(\cdot)$, is provided by the DLS method. The DLS method uses a table of ``resources remaining'' percentages for each possible tuple of \{wickets remaining, balls remaining\}, \cite{dl2}. We use the DLS table as a proxy for $\boldsymbol{g}(\cdot)$ , but with corrections. Given that we have established that the DLS method introduces a bias in favor of the chasing team, we correct the ``resources remaining'' percentages in a data-driven manner for each game. Specifically, let the target be represented by $t$, the average runs to be scored at the time of interest be $a_r$, the average wickets remaining at the time of interest be $a_w$, actual wickets remaining at the time of interest be $w_r$, balls remaining be $b_r$ and the DLS resources remaining percentage be $d_{a_w}[b_r]$. 
Then, our correction to the DLS entry would take the following form: $\boldsymbol{g}(a_r, a_w, b_r, w_r) = \frac{a_r} {1 - (1 - \frac{a_r}{t}) \cdot d_{w_r}[b_r] /d_{a_w}[b_r]}$, where $d_{w_r}[b_r]$ is the DLS resources remaining percentage for having $w_r$ wickets and $b_r$ balls remaining. \subsection{Examples of Revised Targets} \label{sec:targetexamples} We present the following revised target examples on actual games using our algorithm. \subsubsection{Scenario 1: $b_1 = b_2$ (No More Play Possible after Intervention)} We consider the game played between Pakistan and New Zealand on 01/16/2018. Pakistan batted first and set New Zealand a competitive target of 263 in 50 overs (300 balls). There was no intervention during the game and New Zealand was successfully able to chase the target down with 25 balls to spare for the loss of 5 wickets. However, at one stage New Zealand had lost three wickets for just 90 runs in 16 overs and 2 balls (98 balls). With Pakistan in the ascendency then, it took a couple of good batting partnerships from New Zealand to secure the win. We look at the trajectory of the revised targets for the New Zealand innings. Figure \ref{fig:revision1} shows the resulting revised targets that New Zealand would have had to make to win the game if the game stopped at any point during the innings \textit{and} no more play was possible (Scenario 1). The green trajectory is the \textit{average path to victory} while the dotted gray trajectory is the target if the same number of wickets were lost as in the actual innings, at each point. The actual innings is shown by the red line. For each ball, $1 \leq j \leq 300$, the chasing team would be declared a winner whenever the red line was above the dotted gray line, and no more play was possible. As we would expect, the red line lies below the gray line starting around the 17th over mark, signaling that Pakistan would have been declared the winners if the game had stopped at that time.
However, around the 38th over, New Zealand were back in the ascendency and the red line overtakes the gray line indicating that after around the 38th over if the innings stopped and no more play was possible, then New Zealand would be declared winners. The gap between the red and gray line is a proxy for how comfortably the ascendent team is ahead. These observations agree with the actual state of the game and point to a realistic target revision methodology. Note that even though the game ended at the 275th ball point, we show the trajectories for the entire 50 over (300 ball) period. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{pak-nz-chase-revised.png} \caption{New Zealand vs Pakistan, 01/16/2018. New Zealand's actual innings (red), average path to victory (green) and revised target (dotted gray).} \label{fig:revision1} \end{figure} Next, we consider the famous World Cup 2003 game between South Africa and Sri Lanka. South Africa were chasing a target of 269 in 50 overs (300 balls). Rain made an appearance during the second innings and it became clear that at the end of the 45th over (270th ball) of the chase, no more play would be possible. Mark Boucher, a senior player in the South African team, was provided the Duckworth-Lewis-Stern (DLS) par score for the end of the 45th over and hit six runs off the penultimate ball before the intervention (anticipating play to halt at the end of the over since it had started raining). With that six, South Africa had achieved the par score and Boucher blocked the next ball for zero runs (and importantly, did not lose his wicket). South Africa walked off confident that they had achieved the DLS-revised target to win the game. However, they were informed that the ``par'' score they were provided was the DLS-revised score to \textit{tie} the game and they needed one more than par to win! 
Unfortunately for South Africa, that tie meant they were knocked out of the World Cup, which they were also hosting, in the most cruel of manners, as noted by The Guardian (\cite{dls-par-1}). We also use Figure \ref{fig:saf-sri-revised} to illustrate how our method would have decided the result of the game at the \textit{actual} intervention which happened at the 45 over mark when no more play was possible. At precisely the 270 ball mark, the revised target score (for having lost 6 wickets) produced by our algorithm was 232. The score made by South Africa was 229 (for the loss of 6 wickets). Therefore, our algorithm would have declared Sri Lanka to be the winner--by a margin of two runs. This is an example that hints that the DLS method might be setting the revised target a touch too low (leading to a bias in favor of the chasing teams). \begin{figure} \centering \includegraphics[width=0.8\textwidth]{saf-sri-chase-revised-45.png} \caption{Actual intervention at 45 overs (270 balls), after which no more play was possible. South Africa's actual innings (red), average path to victory (green) and revised target (dotted gray).} \label{fig:saf-sri-revised} \end{figure} \subsubsection{Scenario 2: $b_1 = 30$ overs, $b_2 = 45$ overs} Next, we consider how our algorithm would work under Scenario 2, where the chasing team bats the first $b_1$ balls assuming they will have all $b= 300$ balls (50 overs) to chase the target. However, there is an intervention at the $b_1$ ball mark and only $b_2 < b$ balls are now possible. Our algorithm, like the DLS method, recommends a revised target for the chasing team. We use the World Cup 2011 Final between India and Sri Lanka to illustrate how our algorithm would work and its contrast to the DLS method. Figure \ref{fig:indsri_revised} shows the average path to victory in green for India when chasing the target of 275 in 50 overs (300 balls).
We introduce a hypothetical intervention at the $b_1 = 180$ balls (30 overs) mark and assume that play will resume with a maximum of 270 balls (45 overs) possible instead of the original 300 balls (50 overs). Figure \ref{fig:indsri_revised} shows in blue the revised average path to victory for India. Notice how the line in blue is not parallel to the green line--this is expected because the line in blue is encoding the final 15 overs context, which is comparable to overs 35-50 on the green line. The noticeable rise in the gradient of the blue line is indicative of the last-few-overs mindset, which would happen later if the innings was allowed its maximum 50 over (300 balls) duration. The black line shows the targets had the revised maximum number of overs, $b_2$, been allowed to vary between $[35, 50]$. This is in contrast to the blue trajectory which shows the revised average path to victory once the revised maximum number of overs, $b_2$, is fixed. It is no surprise then that the blue and black lines intersect at the $b_2 = 270$ balls (45 overs) mark in the situation shown in Figure \ref{fig:indsri_revised}. Finally, the equivalent of the black line is the one in magenta, which shows the revised targets produced by the DLS method. The slight upwards bias produced by our method is a feature of our algorithm which corrects for the high bias in favor of the chasing team by the DLS method when choosing the offset (at the intervention). The black line gets closer to the magenta line as the revised maximum number of overs increases, intersecting at the 50 overs (300 ball) mark. This makes sense because both algorithms must recover the original target if the innings was allowed to take its original maximum duration. While not always the case, we note that our algorithm tends to produce targets which lie above the DLS targets. We establish this via a statistical test in the next section.
\begin{figure} \centering \includegraphics[scale=1.5]{ind_sri_revised_30_45.png} \caption{India vs Sri Lanka, World Cup 2011 Final. Average path to victory (green), revised average path to victory for $b_2 = 45$ overs (blue), revised targets as $b_2$ varies over $[35, 50]$ (black), and DLS revised targets (magenta).} \label{fig:indsri_revised} \end{figure} \subsection{Reducing the DLS Bias} \label{sec:smallerBias} We can use historical data to simulate and compare the performance of the DLS method and our algorithm under Scenario 1: where $b_1 = b_2$ and the innings never resumes after the intervention. Using a dataset of about 500 LOI games (starting in 2003) where the second innings lasted longer than 45 overs, we randomly partition the dataset into two sets: $\mathbb{S}_A$ and $\mathbb{S}_B$. $\mathbb{S}_A$ contains all games where we assume the game takes its natural course and ends without application of a target revision method. For the games in $\mathbb{S}_B$ we introduce arbitrary interventions between overs 20-45 and ask the DLS method and our algorithm to declare a winner at that stage, assuming no more play would have been possible. We then conduct a Chi-Squared test of independence using the distribution of games won by teams batting first and second for games in $\mathbb{S}_A$ and $\mathbb{S}_B$. We note the p-value and then repeat the same experiment 1000 times. The resulting distribution of p-values is shown in Figure \ref{fig:pvalsdls2}. Under the null-hypothesis the p-value distribution ought to be uniformly distributed, by definition. We note that for our algorithm the distribution appears fairly uniform while for the DLS method there is a slight bias to the left. However, there is little evidence to conclude that either method suffers from an obvious bias in this scenario.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{p-val-dist.png} \caption{Distribution of p-values for the bootstrapped Chi-Squared test of independence across a dataset of 500 games randomly split into games with arbitrary application of the DLS method and our algorithm for a random subset of games in each of the 1000 iterations.} \label{fig:pvalsdls2} \end{figure} The above experiment shows that the DLS method (and our algorithm) are both unbiased in Scenario 1. However, the result of the Chi-Squared test in Section \ref{sec:fairDls} conclusively shows that a disproportionate number of teams batting second tend to win games where the DLS method is used to set a revised target. Therefore, the bias introduced by the DLS method (which we established in Section \ref{sec:fairDls}) must be coming from games in Scenario 2: where $b_2 > b_1$, i.e. the chasing team is provided with an actual revised target to chase in fewer than the maximum number of overs. It is impossible to use historical data for actual LOI cricket games where the DLS method was used to retrospectively determine how the outcome might have been different had a different revised target been proposed for the chasing team. Therefore, we use the following experiment with historical data to show that our algorithm is poised to reduce the bias in favor of the chasing teams that the DLS method suffers from. We use hundreds of innings in our dataset and introduce a hypothetical intervention at $b_1 = 180$ balls (30 overs). We then assign an arbitrary revised number of overs, $b_2 = 270$ balls (45 overs). For each innings, we determine the revised remaining target runs at $b_2$ using the DLS method and our algorithm. We also determine the subset of historical innings where the same number of wickets were lost at $b - (b_2 - b_1)$ balls as those at the intervention for the innings under consideration.
We then determine the proportion of innings in the subset where the revised remaining target runs were scored in the last $b - (b_2 - b_1)$ overs. For any method that claims to reduce the bias produced by the DLS method, we would expect the proportion of the subset of innings that achieve the revised remaining target to be \textit{lower} than that for the DLS method. Figure \ref{fig:prop_dls_ours} shows the distribution of the differences of proportions of the subset of the historical innings that exceed the revised remaining target prescribed by our algorithm vs the DLS method. The experiment was performed over 230 LOI games from the past few years. The mean of the differences is approximately -1\%. This appears to suggest that at $b_2 = 270$ balls (45 overs), when the intervention is at $b_1 = 180$ balls (30 overs), our algorithm's revised targets are higher, on average, than those proposed by the DLS method, which should reduce the bias in favor of chasing teams observed in LOI games. We also conduct a paired-sample t-test for the two series produced by our algorithm and the DLS method. The test rejects the null hypothesis of the two series being statistically the same with a p-value of $10^{-6}$. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{dls-vs-ours-proportions.png} \caption{Distribution of differences in proportions of historical innings that exceed the revised remaining number of runs prescribed by our algorithm vs the DLS method for over 230 LOI games. The mean of the differences is -1\% while the standard error of the mean is 0.01\%.} \label{fig:prop_dls_ours} \end{figure} Finally, Figure \ref{fig:prop_increase} shows, using the same innings as those used for the experiment summarized in Figure \ref{fig:prop_dls_ours}, that the average upward bias in revised targets produced by our algorithm compared to the DLS method is fairly consistent across all revised maximum durations, $b_2$, where $b_1 < b_2 < b$.
The plot shows the average percentage by which our algorithm's revised targets exceed those produced by the DLS method for all overs possible after the intervention. In this experiment, $b_1 = 30$ overs and $b_2$ is allowed to vary from over 31 to 49. The targets produced by both algorithms must be the same for 50 overs. The plot underscores that our algorithm should be able to reduce the bias produced by the DLS method for all revised durations. However, note that the closer $b_2$ is to the original maximum duration, $b$, the smaller the percentage increase, which agrees with intuition. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{prop_increase_over_dls.png} \caption{The average percentage by which our algorithm's revised targets exceed those produced by the DLS method across all possible revised maximum durations (overs) of the innings ($b_2 \in \{31, 32, \cdots , 49\}$) after the intervention at $b_1 = 30$ overs. We introduce an arbitrary intervention at the 30 over mark and determine the revised targets for the DLS method and our algorithm. The average percentage increase for each over is calculated over 250 recent games.} \label{fig:prop_increase} \end{figure} \fi \section{Algorithm} \label{sec:algorithm} \subsection{Inputs.} The multi-dimensional Robust Synthetic Control (mRSC) algorithm takes several inputs. For simplicity, we assume that there exist {\em two} metrics of interest. The same arguments extend naturally to any number of metrics of interest; we choose two to better illustrate the algorithm. The two metrics are represented by $N \text{ (donor units)} \times T \text{ (periods)}$ observation matrices denoted by $\boldsymbol{A}^1$ and $\boldsymbol{A}^2$. The corresponding unobserved mean matrices are denoted by $\boldsymbol{M}^1$ and $\boldsymbol{M}^2$. The intervention, i.e. the period after which we want to estimate the forecasts, is denoted by $T_0 < T$.
Therefore, the data used for training is represented by $\boldsymbol{A}^{1-}$ and $\boldsymbol{A}^{2-}$, which are the subsets of the observation matrices restricted to column indices in $[1, ... , T_0]$. The post-intervention matrices are similarly denoted by $\boldsymbol{A}^{1+}$ and $\boldsymbol{A}^{2+}$, which are the subsets of the observation matrices restricted to column indices in $[T_0+1, ... , T]$. The unobserved mean matrices $\boldsymbol{M}^{1-}$, $\boldsymbol{M}^{2-}$, $\boldsymbol{M}^{1+}$ and $\boldsymbol{M}^{2+}$ are defined similarly. The pre-intervention observations for our unit of interest are denoted by $y^1$ and $y^2$; given that we are interested in estimating the post-intervention future of this unit of interest, both $y^1$ and $y^2$ are of dimension $1 \times T_0$. The pre-intervention observation matrices are concatenated column-wise into a combined observation matrix, $\boldsymbol{Y}^-$, which has dimensions $N \times 2T_0$. Specifically, $\boldsymbol{Y}^- = [\boldsymbol{A}^{1-} \quad \boldsymbol{A}^{2-}]$. Similarly, we refer to the combined but unobserved mean matrix by $\boldsymbol{M}^- = [\boldsymbol{M}^{1-} \quad \boldsymbol{M}^{2-}]$. The post-intervention observation matrices are similarly concatenated and denoted by $\boldsymbol{Y}^+ = [\boldsymbol{A}^{1+} \quad \boldsymbol{A}^{2+}]$, which has dimensions $N \times 2(T - T_0)$. Finally, the pre-intervention observations from the unit of interest are denoted by $y^- = [y^1 \quad y^2]$. Given that the mRSC algorithm is a generalization of the Robust Synthetic Control (RSC) algorithm, it uses the same hyperparameters: (1) a thresholding hyperparameter $\mu \geq 0$, which thresholds the singular value spectrum of the observation matrix and serves as a knob to trade off between the bias and variance of the estimator \cite{rsc1}, and (2) a regularization hyperparameter $\eta \ge 0$ that controls the model complexity \cite{rsc1}.
There is one additional parameter used by the mRSC algorithm: (3) a loss-function weighting between the two metrics, $\delta_1, \delta_2$. This parameter helps generalize the regularized OLS regression step in the RSC algorithm \cite{rsc1} to a regularized Weighted Least Squares (WLS) step. If both metrics are of the same order of magnitude and importance, then $\delta_1 = \delta_2$. However, a different weighting can encode the relative importance (or a scale correction) between the two metrics. Effectively, this amounts to forming a diagonal matrix, $\Delta = \mathrm{diag}(1/\sqrt{\delta_1}, \dots , 1/\sqrt{\delta_2}, \dots)$, which post-multiplies the observation matrix $\boldsymbol{Y}^-$ and the unit of interest $y^-$. As in \cite{rsc1}, these parameters can be chosen via several data-driven techniques, e.g. cross-validation. \subsection{Missing Data} To further simplify the exposition of the mRSC algorithm, we assume no missing data. If we encounter missing data, it can be modeled as in the RSC algorithm with a probability of observation $p$; it is trivial to extend the mRSC algorithm to incorporate missing data just as in the vanilla RSC algorithm. \subsection{Robust Multi-Metric Algorithm.} \label{sec:robust_algo} Using the inputs, we are now ready to describe the mRSC algorithm. Recall that the goal is to estimate the future of the unit of interest. This estimated forecast is denoted by $\hat{m}_1^+$. Given that we have two metrics of interest, the first half of the estimate belongs to the first metric and the second half to the second metric. \begin{algorithm} \caption{Multi-dimensional Robust Synthetic Control (mRSC)}\label{euclid} \noindent \\ {\bf Step 1.
De-noising: singular value thresholding.} \\ \begin{enumerate} \item Weight the observation matrices and unit of interest by the corresponding weights, represented by the diagonal matrix $\Delta$: \begin{align} \boldsymbol{Y}^- &= \boldsymbol{Y}^- \Delta, \\ y^- &= y^- \Delta \end{align} \item Compute the singular value decomposition of $\boldsymbol{Y}^{-}$: \begin{align} \boldsymbol{Y}^- = \sum_{i=1}^{N} s_i u_i v_i^T. \end{align} \item Let $S = \{ i : s_i \geq \mu\}$ be the set of singular values above the threshold $\mu$. \item Define the estimator of $\boldsymbol{M}^-$ as \begin{align} \label{eq:p_hat} \widehat{\bM}^- & = \sum_{i \in S} s_i u_i v_i^T, \end{align} \end{enumerate} \noindent \\ {\bf Step 2. Regularized (Weighted) Least Squares.} \\ \begin{enumerate} \item For any $\eta \ge 0$, let \begin{align} \hat{\beta}(\eta) &= \argmin_{v \in \mathbb{R}^{N}} \norm{y^{-} - (\widehat{\bM}^{-})^T v}^2 + \eta \norm{v}^2 \label{eq:ls}. \end{align} \item Define the estimated (counterfactual) means for the treatment unit as \begin{align} \hat{m}_1^+ & = \boldsymbol{Y}^{+T} \hat{\beta}(\eta). \end{align} \end{enumerate} \end{algorithm} Finally, we note that all the features of the vanilla RSC algorithm extend naturally to the mRSC algorithm (see Section 3.4 of \cite{rsc1}). Specifically, all properties of RSC regarding solution interpretability, scalability, low-rank hypotheses and independence from using covariate information are also features of the mRSC algorithm. Effectively, the mRSC algorithm is a generalization of the RSC algorithm in that it incorporates learning based on multiple metrics of interest. \section{Cricket: Setting and Terminology} \label{sec:setting} We use this space to describe concretely the basics of the game of cricket relevant to the rest of this work. We consider the limited-overs international (LOI) format of the game: a LOI game is played between two teams, each of whom gets to bat once. 
Each team's turn to bat is referred to as an {\em innings}. Therefore, each LOI game is two-innings long. Each innings is allocated a maximum number of times that a ball can be bowled (similar to pitching in baseball) to the batters. A single such bowling event is called a {\em ball}. Six consecutive balls are referred to as an {\em over}. In this work we consider LOI games which mandate that an innings can last a maximum of 300 balls or, equivalently, 50 overs. The team batting second, also referred to as the {\em chasing} team, needs to make one more run than the team batting first to win the game. The total runs scored by a team is the cumulative sum of the runs scored on each of the {\em balls} in the innings. Both teams get a maximum of 10 {\em wickets} they can lose in an innings. The loss of a {\em wicket} is an important event in an innings: it signifies the event where a batter is declared {\em out} and is not allowed to play further role in the innings (as a batter). The loss of ten wickets means the team has no batters left to contribute to the innings. Therefore, an innings can end if maximum overs are bowled or ten wickets are lost, whichever happens earlier. As an example, in the ICC Cricket World Cup Final in 2015, New Zealand batted first and lost all ten wickets for a cumulative score of 183 in 45 overs (270 balls). Australia were set a target of 184 in a maximum of 50 overs but they chased it successfully in 33 overs and 1 ball (199 balls) for the loss of only 3 wickets. It is worth emphasizing that individuals comprising a team cannot change between innings. Everyone on the team can get to bat but only a few typically bowl. The consequence of this is that a team typically comprises several specialists: some playing primarily as batters, some primarily as bowlers with a few who are called upon to do both. Normally teams send their specialist batters to bat ahead of the bowling specialists. 
This implies that the batting strength of a team degrades non-linearly, i.e. the loss of five wickets implies that much more than 50\% of the batting strength has been exhausted. \subsection{Reduced Innings: Revised Targets} \label{sec:revised} A 50 over (300 ball) game of two innings can last about 8 hours. Often, weather conditions such as rain can prevent a game from lasting the entire 100 overs across two innings. This leads to several scenarios of interest to this work. One such scenario arises when the intervention, e.g. rain, occurs at some point during the second innings. While the target was set for a maximum of 50 overs (300 balls) and 10 wickets, it may not be possible to play the maximum number of overs. In such a situation, the target for the chasing team needs to be revised to reflect an equivalent target for the new reduced maximum number of overs. However, note that the chasing team is still allowed access to all 10 of their wickets. As discussed earlier, since batting capabilities do not scale linearly with the number of wickets, the target revision cannot be done in a simple linear manner. For example, if the team batting second was chasing 250 runs in 50 overs and can now only play a maximum of 25 overs, a revised target of 125 would not be considered equivalent, or {\em fair}, because the chasing team still gets to use all 10 of their wickets, meaning that they have a shorter period (25 overs) over which to spend the same number of batting resources (10 wickets). Much of this work aims to address the problem of target resetting in reduced innings. \subsection{Forecasting in retail} \label{sec:retail} We consider the problem of forecasting weekly sales in retail. Here, we highlight a key utility of mRSC over RSC in the presence of sparse data. More specifically, our results demonstrate that when the pre-intervention period (training set) is short, standard RSC methods fail to generalize well.
On the other hand, by using auxiliary information from other metrics, mRSC effectively ``augments'' the training data, which allows it to overcome the difficulty of extrapolating from small sample sizes. \paragraph{\bf Experimental setup.} We consider the Walmart dataset, which contains $T = 143$ weekly sales observations across $N = 45$ stores and $K = 81$ departments. We arbitrarily choose store one as the treatment unit, and introduce an ``artificial'' intervention at various points; this is done to study the effect of the pre-intervention period length on the predictive power of both the mRSC and RSC methods. In particular, we consider pre-intervention lengths of $15$, $43$, and $108$ weeks, representing small to large pre-intervention periods (roughly $10\%, 30\%$, and $75\%$ of the entire time horizon $T$, respectively). Further, we consider three department subsets (representing three different metric subgroups): Departments $\{2, 5, 6, 7, 14, 23, 46, 55\}$, $\{17, 21, 22, 32, 55\}$, and $\{3, 16, 31, 56\}$. \paragraph{\bf Results.} In Table \ref{table:avg_mse}, we show the effect of the pre-intervention length on the RSC and mRSC's ability to forecast. In particular, we compute the average pre-intervention (training) and post-intervention (testing) MSEs across each of the three departmental subgroups (as described above) for both methods and for varying pre-intervention lengths. Although the RSC method consistently achieves a smaller average pre-intervention error, the mRSC method consistently outperforms it in the post-intervention regime, especially when the pre-intervention stage is short.
This is in line with our theoretical findings of the post-intervention error behavior, as stated in Theorem \ref{thm:post_int}; i.e., the benefit of incorporating multiple relevant metrics is exhibited by the mRSC algorithm's ability to generalize in the post-intervention regime, where the prediction error decays by a factor of $\sqrt{K}$ faster than that of the RSC algorithm. We present Figure \ref{fig:walmart} to highlight two settings, departments 56 (left) and 22 (right), in which mRSC drastically outperforms RSC in extrapolating from a small training set ($T_0 = 15$ weeks). We highlight that the weekly sales axes between the subplots for each department, particularly department 56, are different; indeed, since the RSC algorithm was given so little training data, it predicted negative sales values for department 56 and, hence, we have used different sales axes ranges to underscore the prediction quality gap between the two methods. As seen from these plots, the RSC method struggles to extrapolate beyond the training period since the pre-intervention period is short. In general, the RSC method compensates for lack of data by overfitting to the pre-intervention observations and, thus, mistaking noise for signal (as seen also by the smaller pre-intervention error in Table \ref{table:avg_mse}). Meanwhile, the mRSC overcomes this challenge by incorporating sales information from other departments. By effectively augmenting the pre-intervention period, mRSC becomes robust to sparse data. However, it is worth noting that both methods are able to extrapolate well in the presence of sufficient data. \begin{figure}[H] \centering \subfigure[Dept. 56 (mRSC)]{\includegraphics[width=0.325\linewidth]{store1_dept56_15_mrsc.png}} \subfigure[Dept. 22 (mRSC)]{\includegraphics[width=0.325\linewidth]{store1_dept22_15_mrsc.png}} \\ \subfigure[Dept. 56 (RSC)]{\includegraphics[width=0.325\linewidth]{store1_dept56_15_rsc.png}} \subfigure[Dept.
22 (RSC)]{\includegraphics[width=0.325\linewidth]{store1_dept22_15_rsc.png}} \caption{mRSC (top) and RSC (bottom) forecasts for departments 56 (left) and 22 (right) of store 1 using $T_0 = 15$ weeks.} \label{fig:walmart} \end{figure} \begin{table} \centering \begin{tabular}{ c | c c | c c } \hline \multicolumn{1}{l}{} & \multicolumn{2}{c}{ Train Error ($10^6$)} & \multicolumn{2}{c}{ Test Error ($10^6$)} \\ \hline $T_0$ & RSC & mRSC & RSC & mRSC \\ \hline 10\% & {\bf 1.54} & 3.89 & 21.0 & {\bf 5.25} \\ 30\% & {\bf 2.21} & 3.51 & 19.4 & {\bf 4.62 } \\ 75\% & {\bf 4.22} & 5.33 & 3.32 & {\bf 2.48} \\ \hline 10\% & {\bf 0.67} & 2.61 & 14.4 & {\bf 2.48 } \\ 30\% & {\bf 0.79} & 1.21 & 2.13 & {\bf 1.97 } \\ 75\% & {\bf 1.18} & 2.78 & 1.31 & {\bf 0.77 } \\ \hline 10\% & {\bf 1.28} & 6.10 & 84.6 & {\bf 12.5 } \\ 30\% & {\bf 2.60} & 3.45 & {\bf 3.72 } & 4.13 \\ 75\% & {\bf 2.29} & 2.65 & 4.92 & {\bf 4.72} \\ \hline \end{tabular} \caption{Average pre-intervention (train) and post-intervention (test) MSE for the RSC and mRSC methods, for each of the three department subgroups (separated by horizontal lines) and varying pre-intervention lengths $T_0$.} \label{table:avg_mse} \end{table} \section{Proofs} \label{sec:proofs} \subsection{Proof of Proposition \ref{prop:low_rank_approx}} \begin{proof} We will construct a low-rank tensor $\mathcal{\bT}$ by partitioning the latent row and tube spaces of $\widetilde{M}$. Through this process, we will demonstrate that each frontal slice of $\mathcal{\bT}$ is a low-rank matrix, and only a subset of the $K$ frontal slices of $\mathcal{\bT}$ are distinct. Together, these observations establish the low-rank property of $\mathcal{\bT}$. Finally, we will complete the proof by showing that $\widetilde{M}$ is entry-wise arbitrarily close to $\mathcal{\bT}$. \\ {\bf Partitioning the latent space to construct $\mathcal{\bT}$.} Fix some $\delta_1, \delta_3 > 0$.
Since the latent row parameters $\theta_i$ come from a compact space $[0,1]^{d_1}$, we can construct a finite covering (partition) $P(\delta_1) \subset [0,1]^{d_1}$ such that for any $\theta_i \in [0,1]^{d_1}$, there exists a $\theta_{i'} \in P(\delta_1)$ satisfying $\norm{\theta_i - \theta_{i'}}_2 \le \delta_1$. By the same argument, we can construct a partitioning $P(\delta_3) \subset [0,1]^{d_3}$ such that $\norm{\omega_k - \omega_{k'}}_2 \le \delta_3$ for any $\omega_k \in [0,1]^{d_3}$ and some $\omega_{k'} \in P(\delta_3)$. By the Lipschitz property of $f$ and the compactness of the latent space, it follows that $\abs{P(\delta_1)} \le C_1 \cdot \delta_1^{-d_1}$, where $C_1$ is a constant that depends only on the space $[0,1]^{d_1}$, ambient dimension $d_1$, and Lipschitz constant $\mathcal{L}$. Similarly, $\abs{P(\delta_3)} \le C_3 \cdot \delta_3^{-d_3}$, where $C_3$ is a constant that depends only on $[0,1]^{d_3}$, $d_3$, and $\mathcal{L}$. \\ For each $\theta_i$, let $p_1(\theta_i)$ denote the unique element in $P(\delta_1)$ that is closest to $\theta_i$. At the same time, we define $p_3(\omega_k)$ as the corresponding element in $P(\delta_3)$ that is closest to $\omega_k$. We now construct our tensor $\mathcal{\bT} = [T_{ijk}]$ by defining its $(i,j,k)$-th entry as \begin{align*} T_{ijk} = f(p_1(\theta_i), \rho_j, p_3(\omega_k)) \end{align*} for all $i \in [N]$, $j \in [T]$, and $k \in [K]$. \\ {\bf Establishing the low-rank property of $\mathcal{\bT}$.} Let us fix a frontal slice $k \in [K]$. Consider any two rows of $\mathcal{\bT}_{\cdot, \cdot, k}$, say $i$ and $i'$. If $p_1(\theta_i) = p_1(\theta_{i'})$, then rows $i$ and $i'$ of $\mathcal{\bT}_{\cdot, \cdot, k}$ are identical. Hence, there are at most $\abs{P(\delta_1)}$ distinct rows in $\mathcal{\bT}_{\cdot, \cdot, k}$, i.e., $\text{rank}(\mathcal{\bT}_{\cdot, \cdot, k}) \le \abs{P(\delta_1)}$.
In words, each frontal slice of $\mathcal{\bT}$ is a low-rank matrix with its rank bounded above by $\abs{P(\delta_1)}$. \\ Now, consider any two frontal slices $k$ and $k'$ of $\mathcal{\bT}$. If $p_3(\omega_k) = p_3(\omega_{k'})$, then for all $i \in [N]$ and $j \in [T]$, we have \begin{align*} T_{ijk} &= f(p_1(\theta_i), \rho_j, p_3(\omega_k)) = f(p_1(\theta_i), \rho_j, p_3(\omega_{k'})) = T_{ijk'}. \end{align*} In words, the $k$-th frontal slice of $\mathcal{\bT}$ is equivalent to the $k'$-th frontal slice of $\mathcal{\bT}$. Hence, $\mathcal{\bT}$ has at most $\abs{P(\delta_3)}$ distinct frontal slices. \\ To recap, we have established that all of the frontal slices $\mathcal{\bT}_{\cdot, \cdot, k}$ of $\mathcal{\bT}$ are low-rank matrices, and only a subset of the frontal slices of $\mathcal{\bT}$ are distinct. Therefore, it follows that the rank of $\mathcal{\bT}$ (i.e., the smallest integer $r$ such that $\mathcal{\bT}$ can be expressed as a sum of rank one tensors), is bounded by the product of the maximum matrix rank of any slice $\mathcal{\bT}_{\cdot, \cdot, k}$ of $\mathcal{\bT}$ with the number of distinct slices in $\mathcal{\bT}$. More specifically, if we let $\delta = \delta_1 = \delta_3$, then \begin{align*} \text{rank}(\mathcal{\bT}) &\le \abs{P(\delta_1)} \cdot \abs{P(\delta_3)} \le C \cdot \delta^{-(d_1 + d_3)}, \end{align*} where $C$ is a constant that depends on the latent spaces $[0,1]^{d_1}$ and $[0,1]^{d_3}$, the dimensions $d_1$ and $d_3$, and the Lipschitz constant $\mathcal{L}$. We highlight that the bound on the tensor rank does not depend on the dimensions of $\mathcal{\bT}$. \\ {\bf $\widetilde{M}$ is well approximated by $\mathcal{\bT}$.} Here, we bound the maximum difference of any entry in $\widetilde{M} = [M_{ijk}]$ from $\mathcal{\bT} = [T_{ijk}]$. 
Using the Lipschitz property of $f$, for any $(i,j,k) \in [N] \times [T] \times [K]$, we obtain \begin{align*} \abs{M_{ijk} - T_{ijk}} &= \abs{ f(\theta_i, \rho_j, \omega_k) - f(p_1(\theta_i), \rho_j, p_3(\omega_k))} \\ &\le \mathcal{L} \cdot \left(\norm{\theta_i - p_1(\theta_i)}_2 + \norm{\omega_k - p_3(\omega_k)}_2 \right) \\ &\le \mathcal{L} \cdot (\delta_1 + \delta_3). \end{align*} This establishes that $\widetilde{M}$ is entry-wise arbitrarily close to $\mathcal{\bT}$. Setting $\delta = \delta_1 = \delta_3$ completes the proof. \end{proof} \subsection{Proof of Proposition \ref{prop:linear_comb}} \begin{proof} Recall the definition of $\boldsymbol{U}$ from \eqref{eq:low_rank_tensor}. By Property \ref{property:low_rank}, we have that $\text{dim}(\text{span}\{ \boldsymbol{U}_{1, \cdot}, \dots, \boldsymbol{U}_{N, \cdot} \}) = r$, where $\boldsymbol{U}_{i, \cdot}$ denotes the $i$-th row of $\boldsymbol{U}$. Since we are choosing the treatment unit uniformly at random amongst the $N$ possible indices (due to the re-indexing of units via a random permutation), the probability that $\boldsymbol{U}_{1, \cdot}$ is not a linear combination of the other rows is at most $r / N$. In light of this observation, we define $A = \{ \boldsymbol{U}_{1, \cdot} = \sum_{z=2}^N \beta^*_z \boldsymbol{U}_{z, \cdot} \}$ as the event where the first row of $\boldsymbol{U}$ is a linear combination of the other rows in $\boldsymbol{U}$. Then, using the arguments above, we have that $\mathbb{P} \{A \} \ge 1 - \frac{r}{N}$. \\ Now, suppose the event $A$ occurs. Then for all $j \in [T]$ and $k \in [K]$, \begin{align*} M_{1jk} &= \sum_{i=1}^r U_{1i} \cdot V_{ji} \cdot W_{ki} \\ &= \sum_{i=1}^r \left( \sum_{z=2}^N \beta^*_z \cdot U_{zi} \right) \cdot V_{ji} \cdot W_{ki} \\ &= \sum_{z=2}^N \beta^*_z \cdot \left( \sum_{i=1}^r U_{zi} \cdot V_{ji} \cdot W_{ki} \right) \\ &= \sum_{z=2}^N \beta^*_z \cdot M_{zjk}. \end{align*} This completes the proof.
\end{proof} \subsection{Proof of Theorem \ref{thm:pre_int}} \begin{proof} The proof follows from an immediate application of Theorem 3 of \cite{asss}. \end{proof} \subsection{Proof of Theorem \ref{thm:post_int}} \begin{proof} The proof follows from an immediate application of Theorem 5 of \cite{asss}. \end{proof} \section{Conclusion} {\bf Summary.} In this work, we focus on the problem of estimating the (robust) synthetic control and using it to forecast the future metric measurement evolution for a unit of interest under the assumption that a potential intervention has no statistical effect. Synthetic control (SC) \cite{abadie1, abadie2, abadie3}, robust synthetic control (RSC) \cite{rsc1}, and their variants perform poorly when the training (pre-intervention) data is too limited or too sparse. We introduce the multi-dimensional robust synthetic control (mRSC) algorithm, which overcomes this limitation and generalizes the RSC algorithm. This generalization allows us to present a natural and principled way to include multiple (related) metrics to assist with better inference. The latent variable model lies at the heart of the mRSC model and is a natural extension of the factor model, which is commonly assumed in the SC literature. Our algorithm exploits the proposed low-rank tensor structure to ``de-noise'' and then estimate the (linear) synthetic control via weighted least squares. This produces a consistent estimator where the post-intervention (testing) MSE decays to zero by a factor of $\sqrt{K}$ faster than that of the RSC algorithm. Through extensive experimentation using synthetically generated datasets and real-world data (in the context of retail), we confirm the theoretical properties of the mRSC algorithm. Finally, we consider the problem of forecasting scores in the game of cricket to illustrate the modeling prowess and predictive precision of the mRSC algorithm.
\vspace{10pt} \noindent {\bf Forecasting vis-\`{a}-vis Synthetic Control.} While observational studies and randomized controlled trials are concerned with estimating the unobserved counterfactuals for a unit of interest that experiences an intervention, how does one determine the performance of a counterfactual estimation method without access to the ground-truth? Although it is possible to use the pre-intervention data to cross-validate the performance of any estimation method, such a methodology ignores the period of interest: the post-intervention period. An alternate and more effective approach is to study the performance of an estimation method on units that do not experience the intervention, i.e., the {\em placebo} units. If the method is able to accurately estimate the observed post-intervention evolution of the placebo unit(s), it would be reasonable to assume that it would perform well in estimating the unobserved counterfactuals for the unit of interest. Therefore, in this work, we focus on evaluating the estimates of many such placebo units to establish the efficacy of our proposed method. Additionally, given that there is a temporal ordering of the data with clearly defined pre- and post-intervention period(s), our post-intervention estimation problem is akin to a forecasting problem; for units which do not experience any intervention, our goal is to accurately estimate the future and our estimates are evaluated against the observed data. This post-intervention period forecast accuracy becomes our primary metric of comparison and evaluation. Given the discussion above, the method presented in this work serves a dual purpose: (a) it can be used as a method to estimate the (synthetic) control for a unit of interest that experiences an intervention; (b) as long as the temporal or sequential dimension is relative and not absolute, it can be used as a method to forecast the future evolution of any unit of interest.
More precisely, this is only possible when the donor units have observations for both the past (pre-intervention period) {\em and} the future (post-intervention period), e.g., in the game of cricket, the donor pool comprises a large set of already completed innings. Therefore, the mRSC method presented in this work is not a general time series forecasting algorithm: it requires the future to be known for the donor units, which can then assist in estimating the counterfactual for the unit of interest. For more details on the contrast between related work on time series forecasting and the synthetic-control-based algorithm presented in this work, we refer the reader to Section \ref{sec:related}. \section{Introduction} \label{sec:intro} Quantifying the causal statistical effect of interventions is a problem of interest across a wide array of domains. From policy making to engineering and medicine, estimating the effect of an intervention is critical for innovation and understanding existing systems. In any setting, with or without an intervention, we only observe one set of outcomes. In causal analysis, the fundamental problem is that of estimating what wasn't observed, i.e., the {\em counterfactual}. We consider the setting of observational studies, where experimental studies to estimate the {\em counterfactual} are not feasible. In order to estimate the counterfactual, observational studies rely on the identification (or estimation) of a {\em control} unit. This can be achieved by relying on expert domain knowledge, or via techniques such as {\em matching} the target unit to existing control units (called {\em donors}) on covariate features or propensity scores. A popular data-driven approach to estimating the control unit is known as the synthetic control (SC) method \cite{abadie1, abadie2, abadie3}. SC assigns convex weights to the donors such that the resulting {\em synthetic} unit most closely matches the target unit according to a chosen metric of interest.
A generalization of this approach, known as robust synthetic control (RSC) \cite{rsc1}, removes the convexity constraint and guarantees a consistent estimator that is robust to missing data and noise. While SC, its many variants, and RSC exhibit attractive properties, they all suffer from poor estimation when the amount of training data (i.e., the length of the pre-intervention period) is small. In many scenarios with little pre-intervention data, however, data is often available for other metrics related to the metric of interest. For example, we might be interested in crime rates, while related data, such as median household income and high-school graduation rates, is also available. Therefore, one remedy to the limited pre-intervention data is to utilize data from multiple metrics. \subsection{Contributions} As the main contribution of this work, we address this challenge of poor counterfactual estimation due to limited pre-intervention data by providing a generalization of the RSC method, the mRSC, which incorporates multiple types of data in a principled manner. We show that our mRSC method is a natural consequence of the popular factor model. Through this connection, we provide a falsifiability test for mRSC in a data-driven manner. In particular, we establish that if the test is passed, then mRSC produces a consistent estimate of the mean measurements of the unit of interest (for all metrics) in the pre- and post-intervention periods. Further, the accuracy of estimating the control using mRSC improves relative to RSC as the number of relevant metrics increases.
Specifically, the Mean Squared Error (MSE) is decreased by factors of $K$ and $\sqrt{K}$ for the pre- and post-intervention estimates, respectively. All other properties of the RSC algorithm, namely robustness to missing data and noise, carry over. Finally, we conduct extensive experimentation to establish the efficacy of this generalized method in comparison to the RSC algorithm via synthetically generated datasets and two real-world case studies: product sales in retail and trajectory forecasting in the game of cricket. Next, we summarize these contributions in more detail. \medskip \noindent {\bf Model.} We consider a natural generalization of the factor (or latent variable) model considered in the literature, cf. \cite{rsc1}, where the measurement associated with a unit, at a given time, for a given metric, is a function of features associated with the unit, the time, and the metric. A special case of such models, the linear factor models, are the ``bread and butter'' models within the econometrics literature and are assumed in \cite{abadie1, abadie2, abadie3}. We argue that as long as the function is Lipschitz and the features belong to a compact domain, the resulting factor model can be well approximated by a low-rank 3-order tensor, with the three dimensions of the tensor corresponding to units, time, and metrics, respectively. Therefore, for simplicity of exposition, we focus on the setup where the underlying model is indeed a low-rank 3-order tensor. The observations are noisy versions of the ground-truth 3-order low-rank tensor with potentially missing values. These include pre-intervention data for all units and metrics, but post-intervention data only for the ``donor'' units. The goal is to learn the ``synthetic control'' across donor units and all metrics simultaneously using pre-intervention data, so that, using post-intervention data for the donor units and metrics, the learnt ``synthetic control'' can help estimate the ground-truth measurements for all metrics associated with the treatment unit post-intervention.
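To fix ideas, the observation model can be sketched as follows; this is our own illustration with arbitrary dimensions and parameter names, not a specification from the paper:

```python
import numpy as np

def generate_observations(N, T, K, r, sigma=0.1, p_obs=0.9, seed=0):
    """Sketch of the observation model: a low-rank 3-order mean tensor,
    additive noise, and independent masking of entries."""
    rng = np.random.default_rng(seed)
    # Low-rank mean tensor M (unit x time x metric) of rank at most r.
    U = rng.uniform(size=(N, r))
    V = rng.uniform(size=(T, r))
    W = rng.uniform(size=(K, r))
    M = np.einsum('nr,tr,kr->ntk', U, V, W)
    # Noisy observations, with entries missing independently w.p. 1 - p_obs.
    X = M + sigma * rng.normal(size=M.shape)
    mask = rng.uniform(size=M.shape) < p_obs
    X[~mask] = np.nan
    return M, X, mask
```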
\medskip \noindent {\bf Algorithm and analysis.} The synthetic control literature \cite{abadie1, abadie2, abadie3, rsc1} relies on the following key assumption: the treatment unit's measurements across time, for the metric of interest, can be expressed as a linear combination of the donor units' measurements. In this work, we start by arguing that we need not make this assumption, as it holds naturally under the low-rank 3-order tensor model. Furthermore, the low-rank 3-order tensor model suggests that the identical linear relationship holds across all metrics simultaneously. That is, in order to utilize measurements from other metrics to learn the synthetic control for a given metric and a given unit of interest, we can in effect treat the measurements for the other metrics as additional measurements for our metric of interest. In effect, the amount of pre-intervention measurements gets multiplied by the number of available metrics. The resulting algorithm is a natural adaptation of the RSC algorithm, but with multiple measurements, which we call the multi-dimensional robust synthetic control (mRSC). Using a recently developed method for analyzing error-in-variable regression in the high-dimensional setup, we derive finite-sample guarantees on the performance of our mRSC algorithm. We conclude that the mean squared error (MSE) for mRSC pre- and post-intervention decays faster than that of RSC by factors of $K$ and $\sqrt{K}$, respectively, where $K$ is the number of metrics available. This formalizes the intuition that the data from the other metrics effectively act as additional data for the metric of interest. In summary, mRSC provides a way to overcome limited pre-intervention data by simply utilizing pre-intervention data across many metrics.
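The two steps just described (de-noising the flattened donor data, then regressing the target's pre-intervention measurements on it) can be sketched as follows. This is an illustrative sketch under our own naming conventions, with the rank supplied by the caller; it is not the paper's exact algorithm specification:

```python
import numpy as np

def mrsc_fit(donor_tensor, target_pre, rank):
    """Illustrative sketch of the mRSC estimator.

    donor_tensor : (N-1, T, K) array of donor measurements.
    target_pre   : (T0, K) array of the target's pre-intervention data.
    rank         : number of singular values kept in the de-noising step.
    Returns the learned weights and the full (T, K) counterfactual estimate.
    """
    N1, T, K = donor_tensor.shape
    T0 = target_pre.shape[0]

    # Flatten the tensor by concatenating the K metric slices side-by-side.
    donor_flat = donor_tensor.transpose(0, 2, 1).reshape(N1, K * T)

    # Step 1: de-noise via hard singular value thresholding (keep top `rank`).
    U, s, Vt = np.linalg.svd(donor_flat, full_matrices=False)
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    M_hat_tensor = M_hat.reshape(N1, K, T).transpose(0, 2, 1)

    # Step 2: regress the target's pre-intervention data (K*T0 samples in
    # total, across all metrics) on the de-noised donor data.
    X = M_hat_tensor[:, :T0, :].reshape(N1, T0 * K).T
    y = target_pre.reshape(T0 * K)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Counterfactual estimate for the target unit, all metrics, all time.
    estimate = np.einsum('n,ntk->tk', beta, M_hat_tensor)
    return beta, estimate
```

Note how the regression pools all $K$ metrics: the design matrix has $K T_0$ rows rather than $T_0$, which is exactly the multiplication of pre-intervention measurements described above.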
\medskip \noindent {\bf Experiments: Synthetic data.} First, we utilize synthetically generated data per our factor model both to verify the tightness of the theoretical results and to understand when the diagnostic test declares mRSC applicable (or not). We argue that if mRSC is indeed to work successfully, then when we flatten the 3-order tensor of measurement data into a matrix by stacking the unit-by-time slices for the different metrics side-by-side (see Section \ref{sec:falsifiability} for details), the resulting matrix should have a similar (approximate) rank as the matrix with only the single metric of interest. We show that when the data is per our model, such a test ought to pass; and when the data is not from our model, the test fails. Next, we observe that mRSC performs as predicted by the theoretical results on experimental data. \medskip \noindent {\bf Experiments: Retail.} Next, we study the efficacy of mRSC on a real-world dataset of departmental product sales at Walmart, obtained from Kaggle \cite{kaggle-walmart}. It comprises sales data for several product-departments across 45 store locations and 143 weeks. The store locations are the units, the weeks represent time, and the product-departments represent the different metrics. Our goal is to forecast the sales of a given product-department at a given location after observing pre-intervention data for it, using data for all product-departments across all locations over both the pre- and post-intervention periods. We study the performance of mRSC and compare it with the RSC method across different intervention times and across different department-products as the metric of choice. Across all our experiments, we consistently find that when there is very little pre-intervention data, the mRSC method performs significantly better than the RSC method; however, when there is ample pre-intervention data for a given metric of interest, mRSC and RSC perform comparably.
This behavior is consistently observed and in line with our theoretical results: if pre-intervention data is very limited, then the use of data from multiple metrics via mRSC provides significant gains. \medskip \noindent {\bf Experiments: Cricket.} We consider the task of forecasting trajectories in the game of cricket. This is an ideal candidate for mRSC because the game has {\em two} metrics of interest: runs scored and wickets lost. We provide a brief introduction to cricket, as we do not expect readers to be familiar with the game, followed by an explanation of why trajectory forecasting is a perfect fit for the mRSC setup, and then a summary of our findings. \smallskip \noindent {\em Cricket 101.} Cricket, per a Google search for ``how many fans world-wide'', seems to be the second most popular sport in the world after soccer at the time of writing. It is played between two teams and is inherently asymmetric in that both teams take turns to bat and bowl. While there are multiple formats with varying durations, we focus on what is called the ``50-over Limited Overs International (LOI)'' format: each team bats for an ``inning'' of 50 overs, or 300 balls, during which, at any given time, two of its players (batsmen) are batting and trying to score runs, while the other team fields all of its 11 players, one of whom is bowling and the rest fielding, collectively trying to get the batsmen out or prevent them from scoring. Each batsman can get out at most once, and an inning ends when either the 300 balls are done or 10 batsmen get out. At the end, the team that makes more runs wins. \smallskip \noindent{\em Forecasting trajectory using mRSC.} As an important contribution, we utilize the mRSC algorithm to forecast the entire trajectory of a game using observations from the initial part of the game.
As the reader will notice, this use of mRSC for forecasting the trajectory of a game is generic and likely applicable to other games such as basketball, which would be an interesting future direction. To that end, we start by describing how mRSC can be utilized for forecasting the trajectory of an ``inning''. We view each inning as a unit, the balls as time, and scores and wickets as two different metrics of interest. The historical innings are effectively donor units. The unit or inning of interest, for which we may have observed scores/wickets up to some number of initial balls (or time), is the treatment unit, and the act of forecasting is simply coming up with a ``synthetic control'' for this unit and using it to estimate the score/wicket trajectory for future balls. And that's it. We collected data for historical LOIs over the period 1999-2017, comprising over 4700 innings. Each inning can be up to 300 balls long, with a score and wicket trajectory. We find that the approximately low-rank structure of the matrix with only scores (4700 by 300) and of that with both scores and wickets (4700 by 600) remains the same (see Figure \ref{fig:singvals}). This suggests that mRSC can be applicable in this scenario. Next, we evaluate the performance on more than 750 recent (2010 to 2017) LOI innings. For consistency of comparisons, we conduct a detailed study of forecasting at the 30-over mark, i.e., 180 balls (60\% of the inning). We measure performance through the Mean Absolute Percentage Error (MAPE) and the coefficient of determination ($R^2$). The median MAPE of our forecasts varies from $0.027$ or $2.7$\% for 5 overs ahead to $0.051$ or $5.1$\% for 20 overs ahead. While this, on its own, is significant, to get a sense of the {\em absolute} performance, we evaluate the $R^2$ of our forecasts. We find that the $R^2$ of our forecasts varies from $0.92$ for 5 overs to $0.79$ for 15 overs. That is, for 5-over forecasting, we can explain $92$\% of the ``variance'' in the data, which is surprisingly accurate.
Finally, we establish the value of the de-noising and regression steps in the mRSC algorithm. Through one important experiment, we also refute the commonly held belief that the donor pool should only comprise innings involving the same team for which we are forecasting: such a restriction leads to worse estimation. This is related to {\em Stein's Paradox} \cite{stein1}, which was observed in baseball. Finally, we show, using examples of actual cricket games, that the mRSC algorithm successfully captures and reacts to the nuances of the game of cricket. \subsection{Organization} The rest of this work is organized as follows: we review the relevant bodies of literature (Section \ref{sec:related}), followed by a detailed overview of the problem and our proposed model (Section \ref{sec:model}). Next, in Section \ref{sec:algorithm}, we describe the setting and the mRSC algorithm. We present the main theoretical results in Section \ref{sec:results}, followed by a synthetic experiment and a ``real-world'' retail case study comparing the mRSC algorithm to RSC in the presence of sparse data (Section \ref{sec:experiments}). Finally, in Section \ref{sec:cricket}, we present the complete case study for the game of cricket to demonstrate the versatility of the proposed model and its superior predictive performance. \section{Main Results} \label{sec:results} Here, we present our main results, which bound the pre- and post-intervention prediction errors of our algorithm. We begin, however, with a crucial observation on the underlying mean tensor $\widetilde{M}$, which justifies our algorithmic design. \begin{prop} \label{prop:linear_comb} Assume Property \ref{property:low_rank} holds.
Suppose the target unit is chosen uniformly at random amongst the $N$ units; equivalently, let the units be re-indexed as per some permutation chosen uniformly at random. Then, with probability $1-r/N$, there exists a $\beta^* \in \mathbb{R}^{(N-1)}$ such that the target unit (represented by index $1$) satisfies \begin{align} \label{eq:linear_comb} M_{1jk} &= \sum_{z=2}^{N} \beta^*_z \cdot M_{zjk}, \end{align} for all $j \in [T]$ and $k \in [K]$. \end{prop} Thus, under the low-rank property of $\widetilde{M}$ (Property \ref{property:low_rank}), the target/treatment unit is shown to be a linear combination of the donor units across all metrics with high probability. This is the key property that is necessary in all synthetic control-like settings, and it allows us to flatten our third-order tensor into a matrix in order to utilize information across multiple metrics since the target unit is a linear combination of the donor pool across \textit{all metrics}. More specifically, Proposition \ref{prop:linear_comb} establishes that for every metric $k \in [K]$, the first row $\widetilde{M}_{1, \cdot, k}$ of each lateral slice is a linear combination of the other rows within that slice with high probability. Therefore, we can flatten $\widetilde{M}$ into an $N \times KT$ matrix $\boldsymbol{M}$ by concatenating the $K$ slices $\widetilde{M}_{\cdot, \cdot, k}$ of $\widetilde{M}$, and still maintain the linear relationship between the first row of $\boldsymbol{M}$ and its other rows, i.e., \begin{align} \label{eq:flattened_mean_matrix} \boldsymbol{M} &= [\widetilde{M}_{\cdot, \cdot, 1}, \dots, \widetilde{M}_{\cdot, \cdot, K}]. \end{align} This linear relationship across all metrics allows us to combine the datasets from different metrics to effectively augment the pre-intervention period (Step 1 of Algorithm \ref{algorithm:mrsc}).
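As a quick numerical illustration of this flattening (our own sketch with invented dimensions), one can check that weights fitted on a single metric's slice of a low-rank tensor carry over to every other slice, and hence to the concatenated matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, K, r = 10, 8, 3, 2

# Build a rank-r 3-order tensor M (unit x time x metric).
A = rng.normal(size=(N, r))
B = rng.normal(size=(r, T))
C = rng.normal(size=(r, K))
M = np.einsum('nr,rt,rk->ntk', A, B, C)

# Fit weights expressing unit 0 via the other units on metric 0 alone;
# the SAME weights must then hold for every metric.
beta, *_ = np.linalg.lstsq(M[1:, :, 0].T, M[0, :, 0], rcond=None)
for k in range(K):
    assert np.allclose(beta @ M[1:, :, k], M[0, :, k], atol=1e-6)

# Hence flattening the K slices side-by-side preserves the relation.
M_flat = np.concatenate([M[:, :, k] for k in range(K)], axis=1)  # N x KT
assert np.allclose(beta @ M_flat[1:], M_flat[0], atol=1e-6)
```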
For the rest of this exposition, let $\boldsymbol{M}$ have the following SVD: \begin{align} \boldsymbol{M} &= \sum_{i=1}^r \tau_i \mu_i \nu_i^T, \end{align} where $\tau_i$ denote the singular values, and $\mu_i, \nu_i$ denote the left and right singular vectors, respectively. \subsection{Pre-intervention Error} The following statement bounds the pre-intervention error of the mRSC algorithm; again, we remind the reader that $K=1$ reduces to the RSC framework. \begin{theorem} \label{thm:pre_int} Let the algorithmic hyper-parameter $\Delta = \boldsymbol{I}$ (the identity matrix). Let $r$ and $\beta^*$ be defined as in \eqref{eq:low_rank_tensor} and \eqref{eq:linear_comb}, respectively. Suppose the following conditions hold: \begin{enumerate} \item Properties \ref{property:low_rank}, \ref{property:boundedness}, \ref{property:noise.1} for some $\alpha \ge 1$, \ref{property:noise.2}, and \ref{property:masking_noise}. \item The thresholding parameter $\lambda$ is chosen s.t. $\emph{rank}(\widehat{\bM}) = r$. \item $T_0 = \Theta(T)$. \item The target unit, unit one, is chosen uniformly at random amongst the $N$ units. \end{enumerate} Then with probability at least $1 - \frac{r}{N}$, \begin{align*} \frac{1}{K}\sum_{k=1}^K \emph{MSE}_{T_0}(\widehat{M}_1^{(k)}) &\le \frac{4 \sigma^2 r}{KT_0} + C_1 C(\alpha) \, \| \beta^*\|_1^2 \, \frac{ \log^2(KNT_0)}{\rho^2 KT_0} \, \Bigg(r + \frac{\Big( (KT_0)^2 \rho + KNT_0 \Big) \log^3(KNT_0)}{\rho^2 \tau_r^2} \Bigg), \end{align*} where $C_1 = (1 + \gamma + \Gamma + K_\alpha)^4$, $C(\alpha)$ is a positive constant that may depend on $\alpha \ge 1$, and $\tau_r$ is the $r$-th singular value of $\boldsymbol{M}$. \end{theorem} \subsection{Post-intervention Error} We now proceed to bound the post-intervention MSE of our proposed algorithm. 
We begin, however, by stating the model class under consideration: \begin{align*} \mathcal{F} &= \{ \beta \in \mathbb{R}^{N-1}: \| \beta\|_2 ~\le B, \, \| \beta\|_0 ~\le r \}, \end{align*} where $B$ is a positive constant. As is commonly assumed in generalization error analyses for regression problems, we consider candidate vectors $\beta \in \mathbb{R}^{N-1}$ that have bounded $\ell_2$-norm. Further, by Proposition 4 of \cite{asss}, if $\text{rank}(\widehat{\bM}) = r$, then for any $\widehat{\beta} \in \mathbb{R}^{N-1}$, there exists a $\beta \in \mathbb{R}^{N-1}$ such that $\widehat{\beta}^T \widehat{\bM} = \beta^T \widehat{\bM}$ and $\| \beta \|_0 \le r$; hence, we can restrict our model class to $r$-sparse linear predictors. Combining the above observations, we consider the collection of candidate regression vectors within $\mathcal{F}$, i.e., the subset of vectors in $\mathbb{R}^{N-1}$ that have bounded $\ell_2$-norm and are $r$-sparse. \vspace{10pt} \noindent \textit{Generating process.} Let us assume $\widetilde{M}$ is generated as per a linear LVM. Specifically, for all $(i,j,k) \in [N] \times [T] \times [K]$, \[ M_{ijk} = f(\theta_i, \rho_j, \omega_k) = \sum_{\ell = 1}^r \lambda_\ell \, \theta_{i, \ell} \, \rho_{j, \ell} \, \omega_{k, \ell}, \] where $\lambda_\ell \in \mathbb{R}, \theta_i \in [0,1]^{d_1}, \rho_j \in [0,1]^{d_2}$, and $\omega_k \in [0,1]^{d_3}$ for some $d_1, d_2, d_3 \ge r$, and $r$ is defined as in \eqref{eq:low_rank_tensor}. Further, we make the natural assumption that $\theta_i, \rho_j$, and $\omega_k$ are sampled i.i.d. from some underlying (unknown) distributions $\Theta, P$, and $\Omega$, respectively. This gives rise to the following statement. \begin{theorem} \label{thm:post_int} Let the conditions of Theorem \ref{thm:pre_int} hold. Further, let $\widehat{\beta} \in \mathcal{F}$. 
Then for any metric $k \in [K]$, \begin{align} \label{eq:post_int_error} \mathbb{E} \left[ \emph{MSE}_{T-T_0}(\widehat{M}_1^{(k)}) \right] &\le C_1 \mathbb{E} \left[ \frac{1}{K} \sum_{k'=1}^K \emph{MSE}_{T_0}(\widehat{M}_1^{(k')}) \right] + \frac{C_2 r^{3/2} \widehat{\alpha}^2}{\sqrt{KT_0}} \| \beta^*\|_1. \end{align} Here, $C_1$ denotes an absolute constant, $C_2 = C B^2 \Gamma$ for some $C > 0$, and $\widehat{\alpha}^2 = \mathbb{E}[ \| \widehat{\bM} \|_\max^2]$; lastly, we note that the expectation is taken with respect to the randomness in the data (i.e., measurement noise) and $\Theta, P, \Omega$. \end{theorem} In words, concatenating multiple relevant metrics effectively augments the pre-intervention period (number of training samples) by a factor of $K$. By Theorem \ref{thm:post_int}, this results in the post-intervention error decaying to zero by a factor of $\sqrt{K}$ faster, as evidenced by the generalization error term in \eqref{eq:post_int_error}. Further, we note that the first term on the right-hand side of \eqref{eq:post_int_error}, $(1/K) \sum_{k'=1}^K \text{MSE}_{T_0}(\widehat{M}_1^{(k')})$, corresponds to the pre-intervention error across all $K$ metrics, as defined in Theorem \ref{thm:pre_int}. \\ \noindent{\bf Implications.} The statement of Theorem \ref{thm:pre_int} requires that the {\em correct} number of singular values is retained by the mRSC algorithm. In settings where all $r$ singular values of $\boldsymbol{M}$ are roughly equal, i.e., \[ \tau_1 \approx \tau_2 \approx \dots \approx \tau_r = \Theta \Big( \sqrt{(KNT)/r} \Big),\] the pre-intervention prediction error vanishes as long as $KT_0$ scales faster than $\max(\sigma^2 r, \rho^{-4} r \log^5(N))$. Further, as long as $r = O(\log^{1/4} (N))$, the overall error vanishes with the same scaling of $KT_0$. However, the question remains: how does one find a good $r$ in practice?
The purpose of a generalization error bound, such as that implied by Theorem \ref{thm:post_int}, is precisely to help resolve such a dilemma. Specifically, it suggests that the overall error is at most the pre-intervention (training) error plus a generalization error term that scales as $r^{3/2}/\sqrt{KT_0}$. Therefore, one should choose the $r$ that minimizes this bound -- naturally, as $r$ increases, the pre-intervention error is likely to decrease, but the additional term $r^{3/2}/\sqrt{KT_0}$ will increase; therefore, a unique minimum (in terms of the value of $r$) exists, and it can be found in a data-driven manner. \subsection{Forecasting scores in cricket} \label{sec:cricket} We now focus our attention on another real-world problem: forecasting scores in the game of cricket. A priori, there is nothing obvious that indicates that this problem has any relationship to the model introduced in Section \ref{sec:model}. As an important contribution of this work, we show how cricket innings can be modeled as an instance of mRSC, and we conduct extensive experimentation to demonstrate mRSC's excellent predictive performance. As a starting point, we note that the mRSC setting is a natural candidate for modeling cricket because the game has two key metrics: {\em runs} and {\em wickets}. While the winner is the team that makes more total {\em runs}, the state of the game cannot be described by runs alone. Wickets lost are as important as runs scored in helping determine the current state of the game. Additionally, it is well understood among the followers and players of the game that the trajectories of runs scored {\em and} wickets lost are both crucial in determining how well a team can be expected to do for the remainder of the game. We formalize this intuition to model cricket innings and estimate the future total runs scored (and wickets lost) by a team using the mRSC algorithm. In what follows, we first describe the most important aspects of the game of cricket.
Next, we describe how to model an inning in cricket as an instance of mRSC. Finally, we use the mRSC algorithm to forecast scores in several hundred actual games to establish its predictive accuracy. In the process, we show that (a) both steps in the mRSC algorithm, i.e., de-noising and regression, are necessary when estimating the future scores in a cricket inning; (b) a learning algorithm that considers only runs, e.g., the RSC algorithm, would be insufficient and, thus, perform poorly because wickets lost are a critical component of the game; (c) somewhat counterintuitively, constraining the ``donor pool'' of innings to belong to the team of interest leads to poor predictive performance; and (d) using {\em real} examples, the mRSC model and algorithm are able to do justice in capturing the nuances of the game of cricket. \subsubsection{A primer on cricket.} Cricket is one of the most popular sports in the world. According to the BBC, over a billion people tuned in on television to watch India play Pakistan in the ICC Cricket World Cup 2015 (\cite{bbccricket, wapocricket}). It is a fundamentally different game compared to some of its most popular rivals. Unlike football (soccer), rugby, and basketball, cricket is an asymmetric game where both teams take turns to bat and bowl; and unlike baseball, cricket has far fewer innings, but each inning spans a significantly longer time. While cricket is played across three formats, the focus of this work is on the 50-over Limited Overs International (LOI) format, where each team only bats once and the winning team is the one that scores more total runs. A batting inning in a LOI cricket game is defined by the total number of times the ball is bowled to a batter (similar to a ``pitch'' in baseball). In a 50-over LOI inning, the maximum number of balls bowled is $300$. A group of six balls is called an ``over''. Therefore, for the format under consideration, an inning can last up to $50$ overs or $300$ balls.
Each batting team gets a budget of $10$ ``outs'', which are defined as the number of times the team loses a batter. In cricket, a batter getting ``out'' is also referred to as the team having lost a ``wicket''. Therefore, the state of a cricket inning is defined by the tuple of ({\em runs scored}, {\em wickets lost}) at each point in the inning. We shall call the sequence of such tuples the trajectory of an inning. In the event that the team loses all $10$ of its wickets before the maximum number of balls (300) are bowled, the inning ends immediately. The team batting second has to score one more run than the team batting first to win the game. As an example, in the ICC Cricket World Cup Final in 2015, New Zealand batted first and lost all ten wickets for a cumulative score of $183$ in 45 overs (270 balls). Australia were set a target of 184 in a maximum of 50 overs, but they chased it successfully in 33 overs and 1 ball (199 balls) for the loss of only 3 wickets. \subsubsection{The forecasting problem.} We consider the problem of {\em score trajectory forecasting}, where the goal is to estimate or predict the (score, wicket) tuple for all the {\em future} remaining balls of an ongoing inning. We wish to develop an entirely data-driven approach to address this problem while avoiding the complex models employed in some of the prior works (see Section \ref{sec:related}). Just as in the rest of this work, we are using the mRSC setting as a vehicle for {\em predictions}. \subsubsection{An mRSC model.} Given that the problem involves the twin metrics of runs and wickets, the vanilla RSC algorithm would be unsatisfactory in this setting. We cast this problem as an instance of the mRSC model outlined in Section \ref{sec:model}. However, it is not obvious how an inning in cricket can be reduced to an instance of the mRSC problem. We proceed to show that next.
\smallskip \noindent{\bf Intuition.} It is important to establish the validity of the {\em latent variable model} and, hence, the factor model or low-rank tensor model in this setting. It is motivated by the following minimal property that sports fans and pundits expect to hold for a variety of sports: the performance of a team is determined by (1) the {\em game context} that remains constant through the duration of the game and (2) the {\em within innings context} that changes during the innings. In cricket, the {\em game context} may correspond to the players, coach, the mental state of the players, the importance of the game (e.g., world cup final), the game venue and fan base, the pitch or ground condition, etc. At the same time, the {\em within innings context} may correspond to stage of the innings, power play, etc. Next, we formalize this seemingly universal intuition across sports in the context of cricket. \smallskip \noindent{\bf Formalism.} The performance, as mentioned before, is measured by the runs scored and the wickets lost. To that end, we shall index an inning by $i$, and ball within the inning by $j$. Let there be a total of $n$ innings and $b$ balls (within LOI, $b = 300$). Let $X_{ij}$ and $W_{ij}$ denote the runs scored and wickets lost on ball $j$ within inning $i$. We shall model $X_{ij}$ and $W_{ij}$ as independent random variables with mean $m_{ij} = {\mathbb E}[X_{ij}]$ and $\lambda_{ij} = {\mathbb E}[W_{ij}]$. 
Further, let \begin{align} m_{ij} & = f_{r}(\theta_i, \rho_j) \equiv f(\theta_i, \rho_j, \omega_r), \\ \lambda_{ij} & = f_{w}(\theta_i, \rho_j) \equiv f(\theta_i, \rho_j, \omega_w) \end{align} for all $i \in [n], j \in [b]$\footnote{We use notation $[x] = \{1,\dots, x\}$ for integer $x$.}, where the {\em latent} parameter $\theta_i \in \Omega_1 = [r_1, s_1]^{d_1}$ captures the {\em game context}; the {\em latent} parameter $\rho_j \in \Omega_2 = [r_2, s_2]^{d_2}$ captures the {\em within innings context}; and the {\em latent} functions $f_{r}, f_w: \Omega_1 \times \Omega_2 \to [0,\infty)$ capture the complexity of the underlying model for runs scored and wickets lost. These functions are thought of as coming from a class of parametric functions, formalized as $f_r(\cdot, \cdot) \equiv f(\cdot, \cdot, \omega_r)$ and $f_w(\cdot, \cdot) \equiv f(\cdot, \cdot, \omega_w)$ for some parameters $\omega_r, \omega_w \in \Omega_3 = [r_3, s_3]^{d_3}$. For simplicity and without loss of generality, we shall assume that $r_1=r_2 = r_3 = 0$, $s_1 = s_2 =s_3= 1$, $d_1 = d_2 = d_3 = d \geq 1$. Now this naturally fits the latent variable (factor) model we have considered in this work. Therefore, in principle, we can apply the mRSC approach for forecasting the run/wicket trajectory. \smallskip \noindent{\bf Diagnostic.} Proposition \ref{prop:low_rank_approx} suggests that if our model assumption indeed holds in practice, then the data matrices of score and wicket trajectories, as well as their combination, ought to be well approximated by low-rank matrices. Moreover, for the mRSC model to hold, we expect the ranks of all these matrices to be approximately equal, as discussed in Section \ref{sec:falsifiability}. For the low rank to manifest, each matrix's spectrum should be concentrated on the top few principal components. This gives us a way to test the falsifiability of our model for the game of cricket, and more generally for any game of interest.
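This diagnostic amounts to comparing the fraction of spectral ``energy'' captured by the top few singular values of each metric's matrix and of their concatenation. A minimal sketch (our own illustration; function and variable names are assumptions):

```python
import numpy as np

def energy_captured(matrix, top_k):
    """Fraction of the squared Frobenius norm captured by the top_k
    singular values of `matrix`."""
    s = np.linalg.svd(matrix, compute_uv=False)
    return np.sum(s[:top_k] ** 2) / np.sum(s ** 2)

def diagnostic(runs, wickets, top_k=8):
    """Spectral concentration of each metric matrix and of their
    side-by-side concatenation; all three should be comparable
    (and close to one) for the mRSC model to be plausible."""
    combined = np.concatenate([runs, wickets], axis=1)
    return (energy_captured(runs, top_k),
            energy_captured(wickets, top_k),
            energy_captured(combined, top_k))
```

For the cricket matrices considered next, the analogous computation gives over 99.5\% of the energy in the top 8 singular values for all three matrices.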
In the context of cricket, we consider matrices comprising 4700 LOI innings from 1999 to 2017, i.e., 4700 rows, with $300$ columns per metric. Figure \ref{fig:singvals} shows the spectrum of the top 50 singular values (sorted in descending order) for each matrix. Note that the magnitudes of the singular values are expected to differ between the two metrics owing to the assumed differences between the generating functions, $f_r$ and $f_w$. It is the relative proportions between the various singular values that we are interested in. The plots clearly support the implication that most of the spectrum is concentrated within the top few ($8$) principal components. The shapes of all three spectra are also nearly identical, the magnitude differences notwithstanding. Indeed, we note that over 99.5\% of the ``energy'' in each matrix is captured by the top 8 singular values. We determine this by calculating the ratio of the sum of squares of the top 8 singular values to the sum of squares of all singular values of each matrix. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{singular-value-spectrum-combined.png} \caption{The singular value spectrum for all innings in the dataset (the runs and wickets matrices of dimension 4700 $\times$ 300, and the combined matrix of dimension 4700 $\times$ 600). Only the top 50 singular values are shown, in descending order.} \label{fig:singvals} \end{figure} \subsubsection{Experiments and Evaluation} \label{sec:experimentalsetup} We describe the details of the experiments to evaluate the performance of the algorithm. \smallskip \noindent{\bf Data.} For all experiments described in the rest of this section, we consider the 750 most recent LOI innings spanning seven years (2010-2017) and introduce an arbitrary intervention at the 30 over mark (180 balls), unless stated otherwise. For each inning, forecasts for the entire remainder of the inning (i.e., 20 overs or 120 balls) are estimated using our mRSC algorithm.
For the donor pool, we take into consideration all LOI (first) innings from the year 1999 onwards, which comprises about 2400 innings. \smallskip \noindent{\bf Objective.} Even though the twin metrics, runs scored and wickets lost, are equally important in helping to determine the current state of the game and in forecasting the future, the key evaluation metric we will consider is the runs scored. We focus on this particular metric since the winner of the game is solely a function of the number of runs scored, i.e., if team A scores more runs than team B, then team A is declared the winner even if team A has lost more wickets. Therefore, while we use {\em both} runs and wickets to train the mRSC model, all evaluations will only focus on the total number of runs scored. \smallskip \noindent {\bf Evaluation metrics.} Given that the ground truth (i.e., the mean trajectory of runs scored and wickets lost) is latent and unobserved, we use the {\em actual} observations from each of these innings to measure our forecasting performance. We will use two metrics to statistically evaluate the performance of our forecast algorithm: \smallskip \noindent {\em MAPE.} We split the run forecast trajectories into four durations of increasing length: 5 overs (30-35 overs), 10 overs (30-40 overs), 15 overs (30-45 overs), and 20 overs (30-50 overs). Next, we compute the mean absolute percentage error (MAPE) in the range $[0, 1]$ for each inning and forecast period. MAPE helps us quantify the quality of the runs-forecasts. A distribution of these errors, along with the estimated mean and median values, is reported. \smallskip \noindent {\em $R^2$.} To statistically quantify how much of the variation in the data is captured by our forecast algorithm, we compute the $R^2$ statistic for the forecasts made at the following overs: $\{35, 40, 45\}$. For each of these points in the forecast trajectory, we compute the $R^2$ statistic over all innings considered.
The baseline for $R^2$ computations is the sample average (runs) of all innings in the donor pool at the corresponding overs, i.e., $\{35, 40, 45\}$. This is akin to the baseline used in computing the $R^2$ statistic in regression problems, which is simply the sample average of the output variables in the training data. \smallskip \noindent {\bf Comparison.} In order to establish the efficacy of the mRSC algorithm, we conduct the following comparisons to study the importance of each facet of the mRSC algorithm: \begin{enumerate} \item {\bf Criteria 1: value of the WLS}. comparison with an algorithm that uses the de-noised donor pool matrix (result of step 1 of the algorithm), but takes the sample average of the donor pool runs instead of a weighted least squared regression; \item {\bf Criteria 2: value of de-noising}. comparison with an algorithm that ignores the de-noising step (step 1 of the algorithm) and performs a WLS regression directly on the observed data in the donor pool; \item {\bf Criteria 3: value of using all data (Stein's Paradox)}. comparison with an algorithm that restricts the donor pool to belong to past innings played by the {\em same} team for which we are making the prediction. It is commonly assumed that in order to predict the future of a cricket innings, the ``context'' is represented by the identity of the team under consideration. This was also assumed about baseball and refuted (ref. Stein's Paradox \cite{stein1}). Filtering the donor pool to belong to innings played by the same team will allow us to test such an assumption. \item {\bf Criteria 4: value of wickets}. we look for examples of actual innings in cricket where considering a single metric (e.g., runs) would not properly do justice to the state of the innings and one would expect runs-based forecasts from the vanilla RSC algorithm to be inferior to the mRSC algorithm, which is capable of taking the loss of wickets into account as well. \item {\bf Criteria 5: value of our model}. 
Finally, and perhaps most significantly, we compare the algorithm's forecasts with the context and details of some selected actual games/innings. Such a comparison against actual game context and cricketing intuition helps determine whether our algorithm (and by extension, the mRSC model) is able to capture the nuances of the game. \end{enumerate} \subsubsection{Statistical Performance Evaluation} Figure \ref{fig:mapehist1} shows the distribution of the MAPE statistic for the runs-forecasts from our algorithm for each of the four forecast horizons: 5, 10, 15, and 20 overs, with the intervention at the 30th over mark. The concentration of mass at small values indicates that most errors are small. The MAPE statistic grows with the length of the forecast horizon, which is to be expected. The median MAPE from our algorithm over the longest forecast horizon (20 overs) is only about 5\% while the shortest-horizon forecasts have a median of about 2.5\%. In the cricket context, this should be considered excellent. Table \ref{table:mape1} notes the mean and median of the distribution of the MAPE statistic for our algorithm in comparison to the de-noised donor-pool averaging algorithm described in Criteria 1 in Section \ref{sec:experimentalsetup}. Our algorithm outperforms the donor-pool averaging algorithm for all forecast intervals on both the mean and median MAPE statistics. This establishes the clear value of Step 2 (regression) of the mRSC algorithm. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{forecast_errors_30ov_sing_vals-5.png} \caption{MAPE forecast error distributions for interventions made at the 30 over (180 balls) mark. Algorithm uses the top 5 singular values.} \label{fig:mapehist1} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{|| c | c c | c c ||} \hline Forecast & \multicolumn{2}{c |}{mRSC Algorithm} & \multicolumn{2}{c ||}{Donor-Pool Avg} \\ Interval & Mean & Median & Mean & Median \\ [0.5ex] \hline\hline 5 overs & 0.033 & 0.027 & 0.169 & 0.124\\ \hline 10 overs & 0.043 & 0.037 & 0.169 & 0.126\\ \hline 15 overs & 0.053 & 0.043 & 0.170 & 0.124 \\ \hline 20 overs & 0.062 & 0.051 & 0.173 & 0.121\\ \hline \end{tabular} \caption{Forecasts from the mRSC algorithm vs the donor pool averaging algorithm. MAPE forecast error means and medians for interventions made at the 30 over (180 balls) mark. Algorithm uses the top 7 singular values.} \label{table:mape1} \end{table} Tables \ref{table:r2-1} and \ref{table:r2-2} report the $R^2$ statistics for the 5, 10, and 15 over forecasts across all innings produced by our algorithm in comparison to an algorithm that performs the WLS regression step without de-noising (see Criteria 2 in Section \ref{sec:experimentalsetup}). Note that the intervention is at the 30th over mark in Table \ref{table:r2-1} and at the 10th over mark in Table \ref{table:r2-2}. Both tables show that the de-noising step can only be an advantage, as originally argued in \cite{rsc1}. However, the advantage of the de-noising step is more pronounced in situations with less training data, i.e., an earlier intervention. This is to be expected and is perfectly in line with the other experiments conducted in this work. The $R^2$ values indicate that our algorithm is able to capture most of the variability in the data and that there is significant benefit in using the mRSC framework. The decline in median $R^2$ values as we increase the forecast horizon from 5 overs (30 balls) to 15 overs (90 balls) is to be expected because forecast accuracy degrades with the length of the forecast horizon. \begin{table}[h!]
\centering \begin{tabular}{|| c | c c c ||} \hline Intervention: 30th ov & 35 ov & 40 ov & 45 ov\\ [0.5ex] \hline\hline mRSC Algorithm & 0.924 & 0.843 & 0.791\\ \hline Regression (noisy) & 0.921 & 0.839 & 0.787 \\ \hline \end{tabular} \caption{$R^2$ of the forecasts from the mRSC algorithm vs the one where no donor-pool de-noising is performed as a precursor to the WLS regression step. Forecasts at the 35th, 40th, and 45th overs with the intervention at the 30 over (180 ball) mark. Algorithm uses the top 7 singular values.} \label{table:r2-1} \end{table} \begin{table}[h!] \centering \begin{tabular}{|| c | c c c ||} \hline Intervention: 10th ov & 15 ov & 20 ov & 25 ov\\ [0.5ex] \hline\hline mRSC Algorithm & 0.69 & 0.39 & 0.15\\ \hline Regression (noisy) & 0.67 & 0.34 & 0.07\\ \hline \end{tabular} \caption{$R^2$ of the forecasts from the mRSC algorithm vs the one where no donor-pool de-noising is performed as a precursor to the WLS regression step. Forecasts at the 15th, 20th, and 25th overs with the intervention at the 10 over (60 ball) mark. Algorithm uses the top 7 singular values.} \label{table:r2-2} \end{table} \subsubsection{Stein's Paradox in cricket} \label{sec:donorpooleval} We now show that the famous statistical paradox attributed to Charles Stein also makes its appearance in cricket. Using baseball as an example, Stein's paradox claims that better forecasts of the future performance of a batter can be obtained by considering not just the batter's own past performance but also that of other batters, even if their performance might be independent of the batter under consideration \cite{stein1}. We use Table \ref{table:keyinsight2} to demonstrate a similar fact in cricket, using our forecast algorithm. Instead of considering all past innings in the donor pool, we use a donor pool comprising innings {\em only} from the same team as the one that played the innings we are forecasting.
Table \ref{table:keyinsight2} shows that the medians of the MAPE statistics over all forecast horizons are $\sim$ 3x lower when using all past innings in the donor pool compared to a team-specific donor pool. This experiment also validates our latent variable model, which allows a complex set of latent parameters to model cricket innings. \begin{table}[h!] \centering \begin{tabular}{|| c | c | c c c c ||} \hline Team & Donor Pool & 5ov & 10 ov & 15 ov & 20 ov \\[0.5ex] \hline\hline England & All innings & \textbf{0.025} & \textbf{0.031} & \textbf{0.039} & \textbf{0.043} \\ & Restricted & 0.065 & 0.097 & 0.113 & 0.127\\ \hline Pakistan & All innings & \textbf{0.029} & \textbf{0.039} & \textbf{0.046} & \textbf{0.052} \\ & Restricted & 0.072 & 0.104 & 0.113 & 0.119 \\ \hline India & All innings & \textbf{0.027} & \textbf{0.037} & \textbf{0.045} & \textbf{0.049} \\ & Restricted & 0.067 & 0.093 & 0.101 & 0.108 \\ \hline Australia & All innings & \textbf{0.027} & \textbf{0.040} & \textbf{0.044} & \textbf{0.053} \\ & Restricted & 0.063 & 0.084 & 0.094 & 0.104 \\ \hline \end{tabular} \caption{MAPE medians for innings played by specific teams on a donor pool with all innings compared to one which is restricted to only include past innings from the same team. The rest of the algorithm is exactly the same for both. The intervention takes place at the 30 over mark and the forecast horizons are 5, 10, 15, and 20 overs.} \label{table:keyinsight2} \end{table} \subsubsection{Capturing the effect of wickets: comparison to RSC} \label{sec:comparisonRSC} We first consider the Champions Trophy 2013 game between Sri Lanka and Australia, played on June 17, 2013 at the Kennington Oval in London. In the second innings, Australia were chasing 254 for victory in their maximum allocation of 50 overs. However, given the tournament context, Australia would have to successfully chase the target in 29 overs to have a shot at qualifying for the semi-finals.
In their attempt to reach the target as quickly as possible, Australia played aggressively and lost wickets at a higher rate than normal. They failed to qualify for the semi-finals and also fell short of the target altogether, getting bowled out for 233. At the 30th over mark, Australia's scorecard read 192 runs scored. Purely based on runs scored, that is a high score, and an algorithm which takes {\em only} runs into account, e.g., the vanilla RSC algorithm, would likely forecast that Australia would overhaul their target of 254 very comfortably. Indeed, the RSC algorithm forecasts that Australia would have reached their target in the 39th over. However, by that 30th over mark Australia had lost 8 wickets. This additional context provided by wickets lost is not captured by the RSC algorithm but is crucial in understanding the state of the game. In cricketing terms, 192-8 in 30 overs would not be considered an advantageous position when chasing a target of 254. It is more likely for a team to get bowled out, i.e., lose all 10 of their wickets, before reaching their target. We use the mRSC algorithm to study the effect of wickets. Using cross-validation, we find that the best ratio of objective function weights between runs and wickets is 1:4. Using this ratio of weights, the mRSC algorithm forecasts that Australia would be all out, i.e., lose all 10 wickets, and make 225 runs. This is very close to the actual final score of 233 all out. In cricketing terms, that is a far better forecast than that provided by the RSC algorithm, which made little cricketing sense. We now consider the game played between England and New Zealand in Southampton on June 14, 2015. Batting first, England were looking good to reach a score in excess of 350 by the 41st over, when they had only lost 5 wickets but already scored 283 runs.
However, from a position of ascendancy, their fortunes suddenly dipped and they ended up losing all five remaining wickets within the next four overs to get bowled out for 302. While it is hard to imagine any algorithm forecasting the sudden collapse, we use the 43rd over as a benchmark to compare the two algorithms. At that stage, England had progressed to 291 runs but lost 7 wickets. In cricketing terms, that is no longer a situation where one would expect the team to cross 350, as it had appeared only two overs earlier. The RSC algorithm does not grasp the change in the state of the game purely based on runs scored. It projects that England would still go on to make 359 runs. The mRSC algorithm, using a runs:wickets weight ratio of 1:2, forecasts a final score of 315 all-out. Once again, we notice that the mRSC algorithm is able to capture the effect of wickets, which is crucial in the game of cricket, to produce more accurate and {\em believable} forecasts. \subsubsection{Capturing cricketing sense: case studies} \label{sec:forecastcasestudies} We use examples from actual games to highlight important features of our forecast algorithm. \\ \\ \noindent \textbf{India vs Australia, World Cup Quarter Final 2011}. We use the ICC World Cup 2011 Quarter Final game between India and Australia. Batting first, Australia made 260 runs for the loss of 6 wickets in their allocated 50 overs. At the 25 over mark, Australia had made 116 runs for the loss of only two wickets. Australia's captain and best batsman, Ricky Ponting, was batting and looked set to lead Australia to a final score in excess of 275-280. However, India were able to claw their way back by taking two quick wickets between overs 30-35 and slowed Australia's progress. Eventually, that slowdown cost Australia a vital few runs and they ended with a good final score of 260--but it could have been better.
Figure \ref{fig:aus-ind-1} shows the actual innings and trajectory forecasts produced by our algorithm for interventions at the 25, 40 and 45 over marks. Notice that Figure \ref{fig:ind-aus-1-25} shows that our algorithm forecasted Australia would make more runs than they eventually ended up with. This is what one would expect to happen--we forecast based on the current state of the innings and not using future information. Note that the forecast trajectories at the 40 and 45 over interventions match exceptionally well with reality now that the algorithm has observed India's fightback. This is an important feature of the algorithm, and it happens because the pre-intervention fit changes for each of the three points of intervention. The events from overs 30-35 play a significant role in estimating a {\em different} synthetic control, $\widehat{\beta}$, for the interventions at overs 40 and 45 compared to the intervention at over 25 (when those events were yet to happen). Another feature worth highlighting is that the algorithm is able to project the rise in the rate of scoring towards the end of an innings--this is a well-known feature of cricket innings and any data-driven algorithm should naturally be able to bring this to light. Finally, note the smoothness of the forecasts (in blue) compared to the observations (in red). This is a key feature of our algorithm, which ``de-noises'' the data matrix to retain only the top few singular values to estimate the mean future trajectory. \begin{figure}[H] \centering \subfigure[25 over mark]{\label{fig:ind-aus-1-25}\includegraphics[width=0.325\textwidth]{ind-aus-1-25.png}} \subfigure[40 over mark]{\label{fig:ind-aus-1-40}\includegraphics[width=0.325\textwidth]{ind-aus-1-40.png}} \subfigure[45 over mark]{\label{fig:ind-aus-1-45}\includegraphics[width=0.325\textwidth]{ind-aus-1-45.png}} \caption{India vs Aus (WC 2011). First Innings (Australia batting). Interventions at the 25, 40 and 45 over marks.
Actual and Forecast trajectories with the 95\% uncertainty interval.} \label{fig:aus-ind-1} \end{figure} We now look at the second innings. Here, India were able to chase the target down and win the game with relative ease. Figure \ref{fig:aus-ind-2} shows the forecasts at the intervention points of 35, 40, and 45 overs for India's innings. The forecast trajectories are exceptionally close to reality and showcase the predictive ability of the algorithm. Once again, notice the rise in the scoring rate towards the late stages of the innings, similar to the first innings. We note that the flatlining of the actual score in the innings (red line) is simply due to the fact that India had won the game in the 48th over and no more runs were added. \\ \begin{figure}[H] \centering \subfigure[35 over mark]{\label{fig:ind-aus-2-25}\includegraphics[width=0.325\textwidth]{ind-aus-2-25.png}} \subfigure[40 over mark]{\label{fig:ind-aus-2-40}\includegraphics[width=0.325\textwidth]{ind-aus-2-40.png}} \subfigure[45 over mark]{\label{fig:ind-aus-2-45}\includegraphics[width=0.325\textwidth]{ind-aus-2-45.png}} \caption{India vs Aus (WC 2011). Second Innings (India batting). Interventions at the 35, 40 and 45 over marks. Actual and Forecast trajectories with the 95\% uncertainty interval.} \label{fig:aus-ind-2} \end{figure} \noindent \textbf{Zimbabwe vs Australia, Feb 4 2001.} Zimbabwe and Australia played a LOI game in Perth in 2001. Australia, then world champions, were considered too strong for Zimbabwe and, batting first, made a well-above-par 302 runs for the loss of only four wickets. The target was considered out of Zimbabwe's reach. Zimbabwe started poorly and had made only 91 for the loss of their top three batsmen by the 19th over. However, Stuart Carlisle and Grant Flower combined for a remarkable partnership to take Zimbabwe very close to the finish line. Eventually, Australia got both batsmen out just in the nick of time and ended up winning the game by just one run.
We show the forecast trajectories at the 35, 40 and 45 over marks--all during the Carlisle-Flower partnership. The forecasts track reality quite well. A key feature to highlight here is the smoothness of the forecasts (in blue) compared to reality (in red), a consequence of our algorithm ``de-noising'' the data matrix to retain only the top few singular values. The resulting smoothness reflects the mean effect we are trying to estimate, and it is no surprise that the forecast trajectories bring this feature to light. \begin{figure}[H] \centering \subfigure[35 over mark]{\label{fig:zim-aus-2-35}\includegraphics[width=0.325\textwidth]{zim-aus-2-35.png}} \subfigure[40 over mark]{\label{fig:zim-aus-2-40}\includegraphics[width=0.325\textwidth]{zim-aus-2-40.png}} \subfigure[45 over mark]{\label{fig:zim-aus-2-45}\includegraphics[width=0.325\textwidth]{zim-aus-2-45.png}} \caption{Zimbabwe vs Aus (2001). Second Innings (Zimbabwe batting). Interventions at the 35 over, 40 over and 45 over mark.} \label{fig:aus-zim-2} \end{figure} \section{Introduction} \label{sec:intro} Quantifying the causal effect of interventions is a problem of interest across a wide array of domains. From policy making to engineering and medicine, estimating the effect of an intervention is critical for innovation and for understanding existing systems. In any setting, with or without an intervention, we only observe one set of outcomes. In causal analysis, the fundamental problem is that of estimating what wasn't observed, i.e., the {\em counterfactual}. In order to estimate the counterfactual, observational studies rely on the identification (or estimation) of a {\em control} unit. This can be achieved by relying on expert domain knowledge, or via techniques such as {\em matching} the target unit to existing control units (called {\em donors}) on covariate features or propensity scores (see \cite{rubin1973, rosenbaum1983}).
A popular data-driven approach to estimating the control is known as the Synthetic Control (SC) method \cite{abadie1, abadie2, abadie3}. SC assigns convex weights to the donors such that the resulting {\em synthetic} unit most closely matches the target unit according to a chosen metric of interest. A generalization of this approach, Robust Synthetic Control (RSC) \cite{rsc1}, removes the convexity constraint and guarantees a consistent estimator that is robust to missing data and noise. While SC, its many variants, and RSC exhibit attractive properties, they all suffer from poor estimation when the amount of training data (i.e., the length of the pre-intervention period) is small (e.g., see Figure \ref{fig:walmart} in Section \ref{sec:retail}). In many of these scenarios with little pre-intervention data, there may be other data available which are related to the metric of interest. For example, we might be interested in crime-rate and might also have median household income and high-school graduation rate data available. Therefore, one remedy to the limited pre-intervention data is to utilize data from multiple metrics. \subsection{Contributions} As the main contribution of this work, we propose multi-dimensional Robust Synthetic Control (mRSC), a generalization of the unidimensional RSC. Unlike standard SC-like methods, mRSC incorporates multiple types of data in a principled manner to overcome the challenge of forecasting the counterfactual from limited pre-intervention data. We show that the mRSC method is a natural consequence of the popular factor model that is commonly utilized in the field of econometrics. Through this connection, we provide a data-driven falsifiability test for mRSC, which determines whether mRSC can be expected to produce a consistent estimate of the counterfactual evolution across all metrics for a target unit (exposed to intervention) in both the pre- and post-intervention stages (see Section \ref{sec:falsifiability}).
Further, we demonstrate that the prediction power of mRSC improves over RSC as the number of included relevant data types increases. Specifically, the mean-squared error (MSE) vanishes at a rate scaling with $\sqrt{K}$ for the post-intervention period, where $K$ denotes the number of metrics (see Theorem \ref{thm:post_int} in Section \ref{sec:results}); we highlight that $K=1$ reduces to the vanilla RSC framework. Stated another way, incorporating different types of relevant data (metrics) improves the generalization error since the amount of training data increases by a factor of $K$. Additionally, desirable properties of the RSC algorithm, namely robustness to missing data and noise, carry over to our mRSC framework. Finally, we conduct extensive experimentation to establish the efficacy of this generalized method in comparison to RSC via synthetically generated datasets and two real-world case-studies: product sales in retail and score trajectory forecasting in the game of cricket. We now summarize these contributions in greater detail. \medskip \noindent {\bf Model.} We consider a natural generalization of the factor (or latent variable) model considered in the literature, cf. \cite{rsc1}, where the measurement associated with a given unit at a given time for a given metric is a function of the features associated with the particular unit, time, and metric. A special case of such models, linear factor models, are the {\em bread and butter} models within the Econometrics literature, and are assumed in \cite{abadie1, abadie2, abadie3}. We argue that as long as the function is Lipschitz and the features belong to a compact domain, the resulting factor model can be well approximated by a low-rank third-order tensor (see Proposition \ref{prop:low_rank_approx} in Section \ref{sec:model_mean_tensor_assumptions}), with the dimensions of the tensor corresponding to the unit, time, and metric, respectively.
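To make the low-rank structure concrete, the following Python sketch generates a tensor from latent unit, time, and metric features and checks the rank of its flattened form. It is illustrative only: the trilinear form below is one simple instance of a linear factor model, and all dimensions and names are our own choices, not values prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K, r = 50, 40, 3, 4  # units, time steps, metrics, latent dimension

# Latent features drawn from a compact domain (hypothetical dimensions).
U = rng.uniform(-1, 1, (N, r))   # unit features
V = rng.uniform(-1, 1, (T, r))   # time features
W = rng.uniform(-1, 1, (K, r))   # metric features

# One instance of a linear factor model:
# M[n, t, k] = sum_j U[n, j] * V[t, j] * W[k, j].
M = np.einsum('nj,tj,kj->ntk', U, V, W)

# Flatten by stacking the unit-by-time slices across metrics: N x (T*K).
flat = np.concatenate([M[:, :, k] for k in range(K)], axis=1)

# The flattened matrix inherits the low rank of the latent dimension (at most r).
print(np.linalg.matrix_rank(flat))
```

The key observation the sketch illustrates is that each row of the flattened matrix is a linear combination of the same $r$ latent directions, so the rank cannot exceed $r$ regardless of how many metrics are stacked.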
Therefore, to simplify the exposition, we focus on the setup where the underlying model is indeed a low-rank tensor. We assume that our observations are a corrupted version of the ground-truth low-rank tensor with potentially missing values. More specifically, we assume we have access to pre-intervention data for all units and metrics, but only observe the post-intervention data for the donor units. Our goal is to estimate the ``synthetic control'' using all donor units and metrics in the pre-intervention period. The estimated synthetic control can then help estimate the {\em future} (post-intervention) ground-truth measurements for all metrics associated with the treatment unit. \medskip \noindent {\bf Algorithm and analysis.} The current SC literature \cite{abadie1, abadie2, abadie3, rsc1} relies on the following key assumption: for any metric of interest, the treatment unit's measurements across time can be expressed as a linear combination of the donor units' measurements. In this work, we start by arguing that we need not make this assumption, as it holds naturally under the low-rank tensor model (Proposition \ref{prop:linear_comb} in Section \ref{sec:results}). Furthermore, the low-rank tensor model suggests that the identical linear relationship holds across all metrics simultaneously with high probability. That is, in order to utilize measurements from other metrics to estimate the synthetic control for a given metric and unit of interest, we can effectively treat the measurements for the other metrics as additional measurements for our metric of interest! In other words, the number of pre-intervention measurements is essentially multiplied by the number of available metrics. The resulting mRSC algorithm is a natural extension of RSC but with multiple measurements. Using a recently developed method for analyzing error-in-variables regression in the high-dimensional regime, we derive finite-sample guarantees on the performance of the mRSC algorithm.
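The two-step structure -- de-noise the stacked donor matrix, then regress the treatment unit's stacked pre-intervention measurements on the de-noised donor rows -- can be sketched in Python as follows. This is a simplified illustration under our own naming: it uses ordinary least squares in place of the weighted variant and a fixed rank in place of the algorithm's data-driven threshold.

```python
import numpy as np

def mrsc_forecast(donors, treated_pre, T0, rank):
    """Simplified mRSC-style forecast.

    donors:      list of K matrices, each (N-1) x T (donor pool, one per metric).
    treated_pre: list of K vectors, each of length T0 (treated unit, pre-intervention).
    Returns a list of K length-T trajectories for the treated unit.
    """
    K = len(donors)
    T = donors[0].shape[1]
    stacked = np.concatenate(donors, axis=1)          # (N-1) x (K*T)

    # Step 1: de-noise via truncated SVD (keep the top `rank` components).
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    M_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

    # Step 2: regress the treated unit's stacked pre-intervention data on the
    # de-noised donor pre-intervention columns (ordinary least squares here).
    pre_cols = np.concatenate([M_hat[:, k * T:k * T + T0] for k in range(K)], axis=1)
    y = np.concatenate(treated_pre)
    beta, *_ = np.linalg.lstsq(pre_cols.T, y, rcond=None)

    # Forecast: apply the learned weights to each metric's de-noised slice.
    return [M_hat[:, k * T:(k + 1) * T].T @ beta for k in range(K)]
```

Note how the pre-intervention columns of {\em all} $K$ metrics enter one regression: this is exactly the sense in which the number of pre-intervention measurements is multiplied by $K$.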
We conclude that the post-intervention MSE for mRSC decays faster than that of RSC by a factor of $\sqrt{K}$, where $K$ is the number of available metrics (see Theorem \ref{thm:post_int} in Section \ref{sec:results}). This formalizes the intuition that data from other metrics can be treated as belonging to the original metric of interest. In summary, mRSC provides a way to overcome limited pre-intervention data by simply utilizing pre-intervention data from other metrics. \medskip \noindent {\bf Experiments: synthetic data.} To begin, we utilize synthetically generated data as per our factor model to verify both the tightness of our theoretical results and the utility of our diagnostic test, which evaluates when mRSC is applicable. We argue that flattening the third-order tensor of measurement data into a matrix (by stacking the unit-by-time slices across metrics) is valid if the resulting matrix exhibits a similar singular value spectrum (rank) as the matrix with only a single metric of interest (see Section \ref{sec:falsifiability}). Our results demonstrate that data generated as per our model pass the diagnostic test; the test, however, fails when the data do not come from our model. Finally, our empirical findings support our theoretical results. \medskip \noindent {\bf Experiments: retail.} Next, we study the efficacy of mRSC in a real-world case-study regarding weekly sales data across multiple departments at various Walmart stores (data obtained from Kaggle \cite{kaggle-walmart}). The store locations represent units, weeks represent time, and product-departments represent different metrics. Our goal is to forecast the sales of a given product-department at a given location using a subset of the weekly sales reports as the pre-intervention period. We study the performance of mRSC and compare it with RSC across different intervention times and different department-product subgroups, which represent the available metrics.
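A minimal version of the spectrum-comparison diagnostic can be written as follows. This is an illustrative Python sketch under our own naming; in particular, the number of top components and the tolerance `tol` are hypothetical choices, not values prescribed by the analysis.

```python
import numpy as np

def spectrum_share(mat, top):
    """Fraction of spectral energy (squared singular values) in the top components."""
    s = np.linalg.svd(mat, compute_uv=False)
    return (s[:top] ** 2).sum() / (s ** 2).sum()

def passes_diagnostic(metric_mats, top=5, tol=0.05):
    """Check whether flattening across metrics preserves the spectral profile.

    metric_mats: list of K matrices, each N x T (one per metric).
    The flattening is deemed valid if the stacked N x (K*T) matrix concentrates
    roughly the same energy in its top singular values as a single metric alone.
    """
    single = spectrum_share(metric_mats[0], top)
    stacked = spectrum_share(np.concatenate(metric_mats, axis=1), top)
    return abs(single - stacked) <= tol
```

When the metrics share the same latent unit factors (as in our model), stacking adds columns without adding new directions, so the energy profile is preserved and the test passes.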
Across all our experiments, we consistently find that mRSC significantly outperforms RSC when the pre-intervention data is small; the two methods perform comparably in the presence of substantial pre-intervention data. These empirical findings are in line with our theoretical results, i.e., in the presence of sparse training data, mRSC provides significant gains over RSC by utilizing information from auxiliary metrics. Specifically, Table \ref{table:avg_mse} demonstrates that the test prediction error of mRSC is 5-7x better compared to vanilla RSC when very limited pre-intervention data is available. Further, even when the pre-intervention data is significant, mRSC's test prediction error continues to outperform that of RSC. \medskip \noindent {\bf Experiments: cricket.} We consider the task of forecasting score trajectories in cricket. Since cricket has {\em two} metrics of interest, runs scored and wickets lost, it is an ideal setting in which to employ mRSC. We first provide a brief primer on cricket, as we do not expect the reader(s) to be familiar with the game. This is followed by an explanation as to why forecasting the score trajectory is a perfect fit for mRSC. We conclude with a summary of our findings. \medskip \noindent {\em Cricket 101.} Cricket, as per a Google Search on ``how many fans world-wide watch cricket'', is the second most popular sport after soccer. It is played between two teams and is inherently asymmetric in that both teams take turns to bat and bowl. Among the multiple formats with varying durations of a game, we focus on one format called ``50 overs Limited Over International (LOI)''. Here, each team bats for an ``innings'' of 50 overs or 300 balls, during which, at any given time, two of its players (batsmen) are batting to score runs. Meanwhile, the other team fields all of its 11 players, one of whom is bowling while the others are fielding, with the goal of getting the batsmen out or preventing the batsmen from scoring.
Each batsman can get out at most once, and an innings ends when either 300 balls are bowled or 10 batsmen are out. At the end, the team that scores more runs wins. For more details relevant to this work, refer to Section \ref{sec:cricket}. \medskip \noindent{\em Forecasting trajectory using mRSC.} As an important contribution, we utilize the mRSC algorithm to forecast an entire score trajectory of a partially observed cricket innings. As the reader will notice, the utilization of mRSC for forecasting the trajectory of a game is generic, and is likely to be applicable to other games such as basketball, which would be an interesting future direction. To that end, we start by describing how mRSC can be utilized for forecasting the score trajectory of an ``innings''. We view each innings as a unit, the balls as time, and the runs scored and wickets lost as two different metrics of interest. Consequently, past innings are effectively donor units. The innings of interest, for which we might have observed scores/wickets up to some number of balls (or time), is the treatment unit, and the act of forecasting is simply coming up with a ``synthetic control'' for this unit in order to estimate the score/wicket trajectory for the remainder of the game. We collect data for historical LOIs over the period 1999-2017, comprising over 4700 innings. Each innings can last up to 300 balls, with observations for both metrics: runs and wickets. We find that the approximately low-rank structure of the matrix with only scores (4700 by 300) and that of the matrix with both scores and wickets (4700 by 600) remains the same (see Figure \ref{fig:singvals}). This suggests that mRSC can be applicable in this scenario. Next, we evaluate the predictive performance on more than 750 recent (2010 to 2017) LOI innings. For consistency of comparisons, we conduct a detailed study for forecasting at the 30th over or 180 balls (i.e., 60\% of the innings). We measure performance through the mean-absolute-percentage-error (MAPE) and R-square ($R^2$).
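The two performance measures can be computed as follows -- a straightforward Python sketch in which the `baseline` argument encodes the donor-pool average described earlier, and the helper names are ours.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error of a forecast against the actual values."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(forecast - actual) / np.abs(actual))

def r_squared(actual, forecast, baseline):
    """R^2 of a forecast relative to a baseline prediction.

    Here the baseline is the donor-pool sample average at the corresponding
    over, analogous to using the training-sample mean in regression.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    ss_res = np.sum((actual - forecast) ** 2)
    ss_base = np.sum((actual - baseline) ** 2)
    return 1.0 - ss_res / ss_base
```

An $R^2$ of 1 means the forecast matches reality exactly, while 0 means it does no better than the donor-pool average.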
The median of the MAPE of our forecasts varies from $0.027$ or $2.7$\% for 5 overs ahead to $0.051$ or $5.1$\% for 20 overs ahead. While this is significant in its own right, to get a sense of the {\em absolute} performance, we evaluate the $R^2$ of our forecasts. We find that the $R^2$ of our forecasts varies from $0.92$ for 5 overs to $0.79$ for 15 overs. That is, for the 5 over forecast, we can explain $92$\% of the ``variance'' in the data, which is surprisingly accurate. Further, we establish the value of the de-noising and regression steps in the mRSC algorithm. Through one important experiment, we demonstrate that a commonly held belief---i.e., that the donor pool should only comprise innings involving the team we are interested in forecasting for---actually leads to worse estimation. This is related to {\em Stein's Paradox} \cite{stein1}, which was first observed in baseball. Finally, using examples of actual cricket games, we show that the mRSC algorithm successfully captures and reacts to nuances of cricket. \subsection{Related Works} \label{sec:related} There are three bodies of literature relevant to this work: matrix estimation for sequential data inference, synthetic control, and multi-dimensional causal inference. In this work, we rely on low-rank estimation, which is a key subproblem in the rich domain of matrix estimation. We refer the interested reader to \cite{lee2017, amjad2018, usvt, BorgsChayesLeeShah17} for extensive surveys of the matrix estimation literature. These works, in particular \cite{lee2017, amjad2018}, also use the latent variable model (LVM) which is at the heart of the mRSC model presented in this work. In the context of sequential data inference and forecasting, matrix estimation techniques have recently been proposed as important subroutines. Using low-rank factorizations to learn temporal relationships has been addressed in \cite{dhillon2016, xie16missingtimeseries, rallapalli2010, chen2005}.
However, these works make specific assumptions about the particular time series under consideration, which is a major difference from our work, where we make no assumptions about the specifics of the temporal evolution of the data. \cite{timeseriesMatrixEst, amjad2018} present algorithms closely related to the mRSC algorithm presented in this work, capturing relationships across both time and location without making assumptions about the particular form of temporal relationships. However, in \cite{timeseriesMatrixEst} the authors consider a single (one-dimensional) time series, convert it to a Page matrix, and then apply matrix estimation. The goal in that work is to estimate the future {\em only} using data from the single time series and up to the present time. In this work, the notion of time is relative, i.e., there is a temporal ordering but the temporal axis is not absolute; there are many other units (donors), and the estimation of the counterfactual happens longitudinally, i.e., as a function of all other units (donors). Finally, we note that in this work the donor units have all their \emph{future} observations realized, while that is certainly not the case in time series forecasting. The data-driven method to estimate ``synthetic control'' was originally proposed in \cite{abadie1, abadie2} (SCM). The robust synthetic control (RSC) algorithm was presented as a recent generalization of the SCM, making it robust to missing data and noise. For a detailed theoretical analysis of the RSC and an overview of the more recent developments of the synthetic control method, we refer the reader to \cite{rsc1}. A key limitation of the RSC and other SCM variants is that they are only concerned with a single metric of interest ($K = 1$) and perform poorly when the pre-intervention (training) period is short. The mRSC algorithm presented in this work generalizes the RSC and allows a principled way to handle multiple related metrics of interest ($K > 1$).
Causal inference has long been an interest for researchers from a wide array of communities, ranging from economics to machine learning (see \cite{pearl2009, rubin1974, rubin1973, rosenbaum1983} and the references therein). Recently there has been work in multidimensional causal inference, specifically building causal models that fuse multiple datasets collected under heterogeneous conditions. A general, non-parametric approach to fusing multiple datasets that handles biases has been presented in \cite{Bareinboim7345}. This body of work is similar in spirit to the contribution of our paper, namely generalizing robust synthetic control to utilize multiple metrics of interest. Forecasting the scores and, more specifically, the winner of a cricket game has also been of interest. Works such as \cite{Bailey2006} use multi-regression models to predict the winner of a cricket game. However, predicting the winner is not a goal of this work. Our work focuses on forecasting the future average trajectory of an innings given its partial observation. Some prior efforts in this domain include the WASP method \cite{wasp1}, which calculates the mean future scores via a dynamic programming model instead of learning these solely from past data like we do. Recent efforts, such as \cite{cricviz}, have also focused on forecasting trajectories of the remainder of the innings using complex models that quantify specific features of an innings/game that affect scoring and game outcomes. However, like WASP, these recent methods are also parametric, and their accuracy is a strict function of whether the model is accurately able to capture the complex and evolving nature of the game. In summary, all known prior works on score forecasting make parametric assumptions to develop their methods. In contrast, we make minimal modeling assumptions and instead rely on the generic {\em latent variable model} to use the mRSC algorithm to capture nuances of the game of cricket. 
\subsection{Organization} The rest of this work is organized as follows: we review the relevant bodies of literature next (Section \ref{sec:related}) followed by a detailed overview of our proposed model and problem statement (Section \ref{sec:model}). Next, in Section \ref{sec:algorithm}, we describe the mRSC algorithm, which is followed by a simple diagnostic test to determine the model applicability to a problem. We present the main theoretical results in Section \ref{sec:results} followed by a synthetic experiment and ``real-world'' retail case study to compare the mRSC algorithm to the RSC (Section \ref{sec:experiments}). Finally, in Section \ref{sec:cricket}, we present the complete case study for the game of cricket to demonstrate the versatility of the proposed model and the mRSC's superior predictive performance. \section{Algorithm} \label{sec:algorithm} \subsection{Setup} Suppose there are $K$ datasets corresponding to $K$ metrics of interest. For each $k \in [K]$, let $\boldsymbol{Z}^{(k)} \in \mathbb{R}^{(N-1) \times T}$ denote the $k$-th donor matrix, which contains information on the $k$-th metric for the entire donor pool and across the entire time horizon. We denote $X_1^{(k)} \in \mathbb{R}^{1 \times T_0}$ as the vector containing information on the $k$-th metric for the treatment unit (unit one), but only during the pre-intervention stage. \\ The mRSC algorithm, to be introduced in Section \ref{sec:robust_algo}, is a generalization of the RSC algorithm introduced in \cite{rsc1}. The RSC algorithm utilizes a hyper-parameter $\lambda \geq 0$, which thresholds the spectrum of the observation matrix; this value effectively serves as a knob to trade off between the bias and variance of the estimator \cite{rsc1}. The mRSC also utilizes such a hyper-parameter.
In addition, it utilizes \begin{align} \Delta = \text{diag}\Bigg(\underbrace{\frac{1}{\delta_1}, \dots, \frac{1}{\delta_1}}_{T_0}, \dots, \underbrace{\frac{1}{\delta_K}, \dots, \frac{1}{\delta_K}}_{T_0} \Bigg) \in \mathbb{R}^{KT_0 \times KT_0}, \end{align} which weighs the importance of the $K$ different metrics by simply multiplying the corresponding observations. For instance, the choice of $\delta_1 = \dots = \delta_K = 1$ renders all metrics equally important. On the other hand, if the only metric of interest is $k = 1$, then setting $\delta_1 = 1$ and letting $\delta_2, \dots, \delta_K \to \infty$ (so that the corresponding weights $1/\delta_k$ vanish) effectively reduces the mRSC algorithm to the RSC algorithm for metric one. The hyper-parameters can be chosen in a data-driven manner via standard machine learning techniques such as cross-validation. \subsection{Robust multi-metric algorithm} \label{sec:robust_algo} Given $\boldsymbol{Z}^{(k)}$ and $X_1^{(k)}$ for all $k \in [K]$, we are now ready to describe the mRSC algorithm. Recall that the goal is to estimate the post-intervention observations (in the absence of any treatment) for a particular unit of interest (unit one). This estimated forecast is denoted by $\widehat{M}_1^{(k)}$ for all metrics $k \in [K]$. \begin{algorithm} \caption{Multi Robust Synthetic Control (mRSC)} \label{algorithm:mrsc} \noindent \\ {\bf Step 1. Concatenation.} \begin{enumerate} \item Construct $\boldsymbol{Z} \in \mathbb{R}^{(N-1)\times KT}$ as the concatenation of $\boldsymbol{Z}^{(k)}$ for all $k \in [K]$. \item Construct $X_1 \in \mathbb{R}^{1 \times KT_0}$ as the concatenation of $X_1^{(k)}$ for all $k \in [K]$. \end{enumerate} \noindent \\ {\bf Step 2. De-noising.} \begin{enumerate} \item Compute the singular value decomposition (SVD) of $\boldsymbol{Z}$ (with any missing entries $\star$ set to zero): \begin{align} \boldsymbol{Z} &= \sum_{i=1}^{(N-1) \wedge KT} s_i u_i v_i^T. \end{align} \item Let $S = \{ i : s_i \geq \lambda\}$ be the set of singular values above the threshold $\lambda$.
\item Apply hard singular value thresholding to obtain \begin{align} \label{eq:hsvt} \widehat{\bM} & = \frac{1}{\widehat{\rho}} \sum_{i \in S} s_i u_i v_i^T. \end{align} Here, $\widehat{\rho} $ denotes the proportion of observed entries in $\boldsymbol{Z}$. Further, we partition $\widehat{\bM} = [\widehat{\bM}^{(1)}, \dots, \widehat{\bM}^{(K)}]$ into $K$ blocks of dimension $(N-1) \times T$. \end{enumerate} \noindent \\ {\bf Step 3. Weighted Least Squares.} \begin{enumerate} \item Construct $\widehat{\bM}_{T_0} \in \mathbb{R}^{(N-1)\times KT_0}$ as the concatenation of $\widehat{\bM}^{(k)}_{\cdot, j}$ for $j \in [T_0]$ and $k \in [K]$. \item Weight the donor matrix and treatment vector by $\Delta$, i.e., \begin{align} \widehat{\bM}_{T_0} &:= \widehat{\bM}_{T_0} \cdot \Delta \\ X_1 &:= X_1 \cdot \Delta. \end{align} \item Perform linear regression: \begin{align} \label{eq:linear_regression} \widehat{\beta} &\in \argmin_{v \in \mathbb{R}^{(N-1)}} \norm{ X_1 - v^T \widehat{\bM}_{T_0}}_2^2. \end{align} \item For every $k \in [K]$, define the corresponding estimated (counterfactual) means for the treatment unit as \begin{align} \widehat{M}_1^{(k)} &= \widehat{\beta}^T \widehat{\bM}^{(k)}. \end{align} \end{enumerate} \end{algorithm} \begin{remark} We note that all the features of the vanilla RSC algorithm naturally extend to the mRSC algorithm (see Section 3.4 of \cite{rsc1}). Specifically, properties of RSC regarding solution interpretability, scalability, and independence from using covariate information are also features of the mRSC algorithm. Effectively, the mRSC algorithm is a generalization of the RSC algorithm in that it incorporates learning based on multiple metrics of interest. \end{remark} \subsection{Diagnostic: rank preservation} \label{sec:falsifiability} While it may be tempting to include multiple metrics in any analysis, there must be some relationship between the metrics to allow for improved inference. 
In order to determine whether the additional metrics should be incorporated in the mRSC model, we propose a rank-preservation diagnostic. As discussed in Proposition \ref{prop:low_rank_approx}, the crucial assumption is that the mean data tensor is low-rank. For the mRSC algorithm to make sense, the row and column relationships must extend similarly across all metrics. Specifically, under a LVM, the latent row and column parameters, $\theta_i \in \mathbb{R}^{d_1}$ and $\rho_j \in \mathbb{R}^{d_2}$ (as described in Section \ref{sec:model_mean_tensor_assumptions}), must be the same for all metrics being considered. This implies that the rank of the matrices for each metric (frontal slices of the tensor) and their concatenation are identical. If, however, the row and column parameters vary with the metric, then the rank of the concatenated matrix may increase. \smallskip For concreteness, we present an analysis from a simple idealized experiment (detailed in Section \ref{sec:mrsc-synthetic}) with two metrics (metric1 and metric2) and matrix dimensions $N = 100, T = 120$. We compare the percentage of cumulative spectral energy contained in the top few singular values for metric1, metric2, and two of their combinations: one where the row and column parameters are held constant and another where the parameters are different. Table \ref{table:diagnostic} shows the resulting ranks of the mean matrices. We see that the rank of the combined metrics where the latent parameters are different is roughly twice that of metric1, metric2, and their combination with identical row and column parameters across metrics. This is an important diagnostic, which can be used to ascertain if the model assumptions are (approximately) satisfied by the data in any multi-metric setting. In real-world case studies where the latent variables and mean matrices are unknown, the same analysis can be conducted using the singular value spectra of the observed matrices.
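In practice, this diagnostic reduces to comparing effective ranks computed from singular value spectra. The following is a minimal NumPy sketch of one way to implement it; the spectral-energy cutoff \texttt{energy} and the assumption of fully observed (or imputed) matrices are illustrative choices of ours, not part of the mRSC specification.

```python
import numpy as np

def effective_rank(A, energy=0.95):
    """Number of singular values needed to capture an `energy` fraction
    of the squared spectral mass of the (fully observed) matrix A."""
    s = np.linalg.svd(A, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

def rank_preservation_diagnostic(metric_matrices, energy=0.95):
    """Compare each metric's effective rank with that of the concatenation.
    A combined rank close to the individual ranks suggests shared latent
    row/column parameters (the mRSC model applies); a combined rank close
    to their sum suggests the parameters vary with the metric."""
    individual = [effective_rank(Z, energy) for Z in metric_matrices]
    combined = effective_rank(np.hstack(metric_matrices), energy)
    return individual, combined
```

This mirrors the behavior reported in Table \ref{table:diagnostic}: concatenating metrics with shared parameters leaves the rank essentially unchanged, while concatenating metrics with different parameters roughly doubles it.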
We point to Figure \ref{fig:singvals} for a specific instance of this diagnostic using real-world noisy observation data. \begin{table}[h] \centering \begin{tabular}{|| c | c ||} \hline Matrix & Approx. Rank\\ [0.5ex] \hline\hline metric1 & 9\\ \hline metric2 & 9 \\ \hline combined & 9 \\ (\textbf{same} row and column params) & \\ \hline combined & 17 \\ (\textbf{different} row and column params) & \\ \hline \end{tabular} \caption{Ranks of the mean matrices for each metric and their concatenated matrices when the latent row and column parameters are held constant across metrics and otherwise.} \label{table:diagnostic} \end{table} \section{Proofs} \subsection{Proposition \ref{prop:low-rank}} \label{sec:appendix_proof_prop} \begin{prop*} For any $\varepsilon > 0$, there exists $r = r(\varepsilon, d) \geq 1$ such that matrix $\bfm$ is approximated by a matrix of rank $r(\varepsilon, d)$ with respect to entry-wise max-norm within $\varepsilon$. That is, there exists matrix $\bfm^{(r)}$ of rank $r$ such that $\max_{i,j} |m_{ij} - m^{(r)}_{ij}| \leq \varepsilon$. Subsequently, matrix $\boldsymbol{M}^{(r)} = [M^{(r)}_{ij}]$ where $M^{(r)}_{ij} = \sum_{k=1}^j m^{(r)}_{ik}$ is of rank at most $r$ and approximates $\boldsymbol{M}$ such that $\max_{ij} | M_{ij} - M^{(r)}_{ij}| \leq b \varepsilon$. \end{prop*} \begin{proof} The proof is a straightforward extension of the arguments in \cite{usvt} and \cite{amjadshah1}. First, we consider the matrix $\bfm = [m_{ij}], \forall{i} \in [n], \forall{j} \in [b]$. Recall that $m_{ij} = f(\theta_i, \rho_j)$ where $f(\cdot)$ is Lipschitz in its arguments with Lipschitz constant $C_f > 0$. We have assumed that the latent parameters belong to a compact space: $[0, 1]^d$, where $d \geq 1$. Given that the number of columns is typically much smaller than the number of rows, i.e.
$b \ll n$, we define a finite covering, $P(\eps)$, such that the following holds: for any $A \in P(\eps)$, whenever $\rho, \rho'$ are two points such that $\rho, \rho' \in A$ we have $| f(\theta_i, \rho) - f(\theta_i, \rho') | \leq \eps$. Due to the Lipschitzness of the function $f$ and the compactness of any finite subinterval of $[0,1]^d$, it can be shown that $| P(\eps) | \leq C(C_f, d) \eps^{-d}$, where $C(C_f, d)$ is a constant which depends only on the Lipschitz constant, $C_f$, and the dimension of the latent space, $d$ (see \cite{amjadshah1}). We now construct the matrix $\bfm^{(r)}$. For the latent feature $\rho_j$ corresponding to the column $j \in [b]$, find the closest element in $P(\eps)$ and let it be denoted by $p(\rho_j)$. Create the matrix $\bfm^{(r)} = [m_{ij}^{(r)}]$ where $m_{ij}^{(r)}= f(\theta_i, p(\rho_j))$. We note that $\text{rank}(\bfm^{(r)}) \leq r := | P(\eps) | \leq C(C_f, d) \eps^{-d} = r(C_f, d, \eps)$. By the manner in which we have constructed $P(\eps)$ and $\bfm^{(r)}$, we know that $|m_{ij} - m_{ij}^{(r)}| \leq \eps$ for every $(i,j)$. Therefore, $\max_{i,j} |m_{ij} - m^{(r)}_{ij}| \leq \varepsilon$, as required. Next, we consider the cumulative column matrix, $\boldsymbol{M}^{(r)}$. Note that each column, $j$, of $\boldsymbol{M}^{(r)}$ is generated by taking the sum of the columns $k$ of $\bfm^{(r)}$, where $1 \leq k \leq j$. Therefore, each column of $\boldsymbol{M}^{(r)}$ lies in the span of the columns of $\bfm^{(r)}$. This implies that $\text{rank}(\boldsymbol{M}^{(r)}) \leq \text{rank}(\bfm^{(r)}) \leq r(C_f, d, \eps)$. Finally, given that each entry $M_{ij}$ is the sum of at most $b$ entries $m_{ik}$, $1 \leq k \leq j \leq b$, it must be that $\max_{i,j} |M_{ij} - M^{(r)}_{ij}| \leq b \max_{i,j} |m_{ij} - m^{(r)}_{ij}| \leq b\varepsilon$.
\end{proof} \section{Dominance Property} \label{sec:dominance} We first note that due to the monotonicity properties of runs scored in a cricket innings, we have that if $Y_{ib} > Y_{hb}$ then $Y_{ij} \geq Y_{hj}$ in distribution for two innings $i$ and $h$, where $j \leq b$. Similarly, if $Y_{ib} < Y_{hb}$ then $Y_{ij} \leq Y_{hj}$ in distribution. We refer to this as the \textit{dominance property} of trajectories. This property tells us that if we have a set of neighbors of the target trajectory where the final score, $Y_{ib}$, is greater (less) than the target score, $t_i$, then the estimated target trajectory for balls $j < b$ will be an upper (lower) bound on the target we are estimating. In our target revision algorithm, we scale the ``neighbor'' trajectories to all end exactly at the target. If such a scaling is not done, then we expect that a set of neighbors where all innings end above (below) the target will introduce a bias in the estimated trajectory. In Figure \ref{fig:dominancescaling1} we show that this is indeed the case using the data from the second innings of a game played between Pakistan and New Zealand on 01/16/2018. Note that we use three types of ``neighbors'': innings in the dataset that end up (i) \textbf{above} the target, (ii) \textbf{below} the target and (iii) \textbf{all} innings. We also estimate the targets with and without any scaling. The results confirm our intuition: when no scaling is applied, the neighbors that end up above (below) the target produce an upper-bound (lower-bound) on the estimated target. The all-innings neighbors produce an estimate somewhere in between the upper and lower bounds. However, with scaling, all three sets of neighbors produce estimated paths close to each other and far away from the upper/lower bounds. Note, however, that the (scaled) all innings and (scaled) below innings both produce an estimated path which is higher than the trajectory produced by the above (scaled) neighbors.
This appears somewhat counterintuitive. The likely reason for this is that the below neighbor innings (and hence, the all neighbor innings) contain several innings where the chasing team ended up very far from the target, either because they got all-out early on in the innings or realized somewhere during the innings that they could not win and just batted out the rest of the innings without intending to overhaul the target (unfortunately, a common occurrence in past cricket games). When scaling, this introduces a disproportionate positive bias in such innings, causing both these sets of neighbors to produce a positively biased estimated target trajectory. The innings that end above the target do not suffer from these problems, and for them the scaling makes perfect sense. This experiment conclusively establishes that choosing as neighbors those innings whose final score is at least as high as the target, and then scaling them uniformly, is the correct algorithmic choice for producing an estimated target trajectory given the target. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{scaling-envelopes.png} \caption{Estimated target trajectories for various sets of ``neighbors'' and scaling.} \label{fig:dominancescaling1} \end{figure} \section{Forecast Algorithm: Case Studies Continued} \label{sec:appendix_forecastcasestudies} In the game considered in Section \ref{sec:forecastcasestudies} (India and Australia at the ICC World Cup 2011), India was able to chase the target down and win the game with relative ease. Figure \ref{fig:aus-ind-2} shows the forecasts at the intervention points of 35, 40 and 45 overs for India's innings. The forecast trajectories are exceptionally close to reality and showcase the predictive ability of the algorithm. Once again, notice the rise in score rate towards the late stages of the innings, similar to the first innings.
We note that the flatlining of the actual score in the innings (red line) is simply due to the fact that India had won the game in the 48th over and no more runs were added. \\ \begin{figure} \centering \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-25.png} \label{fig:ind-aus-2-25} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-40.png} \label{fig:ind-aus-2-40} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-45.png} \label{fig:ind-aus-2-45} \end{subfigure} \caption{India vs Aus (WC 2011). Second Innings (India batting). Interventions at the 35, 40 and 45 over marks. Actual and Forecast trajectories with the 95\% uncertainty interval.} \label{fig:aus-ind-2} \end{figure} \textbf{Zimbabwe vs Australia, Feb 4 2001.} Zimbabwe and Australia played a LOI game in Perth in 2001. Australia, world champions then, were considered too strong for Zimbabwe and, batting first, made a well-above-par 302 runs for the loss of only four wickets. The target was considered out of Zimbabwe's reach. Zimbabwe started poorly and had made only 91 for the loss of their top three batsmen by the 19th over. However, Stuart Carlisle and Grant Flower combined for a remarkable partnership to take Zimbabwe very close to the finish line. Eventually, Australia got both batsmen out just in the nick of time and ended up winning the game by just one run. We show the forecast trajectories at the 30, 35, 40 and 45 over marks---all during the Carlisle-Flower partnership. The forecasts track reality quite well. A key feature to highlight here is the smoothness of the forecasts (in blue) compared to reality (in red). This is a key feature of our algorithm which ``de-noises'' the data matrix to retain only the top few singular values.
The resulting smoothness is the mean effect we are trying to estimate and it is no surprise that the forecast trajectories bring this feature to light. \begin{figure} \centering \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-30.png} \label{fig:zim-aus-2-30} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-35.png} \label{fig:zim-aus-2-35} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-40.png} \label{fig:zim-aus-2-40} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-45.png} \label{fig:zim-aus-2-45} \end{subfigure} \caption{Zimbabwe vs Aus (2001). Second Innings (Zimbabwe batting). Interventions at the 30 over, 35 over, 40 over and 45 over mark.} \label{fig:aus-zim-2} \end{figure} \section{Target Resetting Algorithm: Case Studies Continued} \label{sec:appendix_target_resetting_casestudies} We consider the famous World Cup 2003 game between South Africa and Sri Lanka. South Africa were chasing a target of 269 in 50 overs (300 balls). Rain made an appearance during the second innings and it became clear that at the end of the 45th over (270th ball) of the chase, no more play would be possible. Mark Boucher, a senior player in the South African team, was provided the Duckworth-Lewis-Stern (DLS) par score for the end of the 45th over and hit six runs off the penultimate ball before the intervention (anticipating play to halt at the end of the over since it had started raining). With that six, South Africa had achieved the par score and Boucher blocked the next ball for zero runs (and importantly, did not lose his wicket). South Africa walked off confident that they had achieved the DLS-revised target to win the game. However, they were informed that the ``par'' score they were provided was the DLS-revised score to \textit{tie} the game and they needed one more than par to win! 
Unfortunately for South Africa, that tie meant they were knocked out of the World Cup, which they were also hosting, in the most cruel of manners, as noted by The Guardian (\cite{dls-par-1}). We also use Figure \ref{fig:saf-sri-revised} to illustrate how our method would have decided the result of the game at the \textit{actual} intervention, which happened at the 45 over mark when no more play was possible. At precisely the 270 ball mark, the revised target score (for having lost 6 wickets) produced by our algorithm was 232. The score made by South Africa was 229 (for the loss of 6 wickets). Therefore, our algorithm would have declared Sri Lanka to be the winner---by a margin of two runs. This is an example that hints that the DLS method might be setting the revised target a touch too low (leading to a bias in favor of the chasing teams). \begin{figure} \centering \includegraphics[width=0.3\textwidth]{saf-sri-chase-revised-45.png} \caption{Actual Intervention at 45 overs (270 balls) and no more play was possible after. South Africa's actual innings (solid red), average path to victory (dashed green) and revised target (dotted-dashed gray)} \label{fig:saf-sri-revised} \end{figure} \section{Problem Setup} \label{sec:model} Suppose there are $N$ units indexed by $i \in [N]$, across $T$ time periods indexed by $j \in [T]$, and $K$ metrics of interest indexed by $k \in [K]$. Let $M_{ijk}$ denote the ground-truth measurement of interest and $X_{ijk}$ the noisy observation of $M_{ijk}$. Without loss of generality, let us assume that our interest is in the measurement associated with unit $i=1$ and metric $k = 1$. Let $1 \leq T_0 < T$ represent the time instance at which unit one experiences an {\em intervention}. Our interest is to estimate the measurement evolution of metric one for unit one if {\em no intervention} occurred. To do so, we utilize the measurements associated with the ``donor'' units ($2 \le i \le N$), and possibly all metrics $k \in [K]$.
\subsection{Model description} For all $2 \le i \le N$, $j \in [T]$, and $k \in [K]$, we posit that \begin{align} \label{eq:donor_model} X_{ijk} &= M_{ijk} + \epsilon_{ijk}, \end{align} where $\epsilon_{ijk}$ denotes the observation noise. We assume that unit one obeys the same model relationship during the pre-intervention period across all metrics, i.e., for all $j \in [T_0]$ and $k \in [K]$, \begin{align} \label{eq:unit_one_model} X_{1jk} = M_{1jk} + \epsilon_{1jk}. \end{align} If unit one was never exposed to the treatment, then the relationship described by \eqref{eq:unit_one_model} would continue to hold during the post-intervention period, i.e., for $j \in \{T_0+1,\dots, T\}$. \subsection{Structural assumptions on mean tensor} \label{sec:model_mean_tensor_assumptions} Let $\widetilde{M} = [M_{ijk}] \in \mathbb{R}^{N \times T \times K}$ be a third-order tensor denoting the underlying, deterministic means in the absence of any treatment. We assume that $\widetilde{M}$ satisfies the following low-rank and boundedness properties: \begin{property} \label{property:low_rank} Let $\widetilde{M}$ be a third-order tensor with rank $r$, i.e., $r$ is the smallest integer such that the entries of $\widetilde{M}$ can be expressed as \begin{align} \label{eq:low_rank_tensor} M_{ijk} &= \sum_{z = 1}^r U_{i z} V_{j z} W_{k z}, \end{align} where $\boldsymbol{U} = [U_{ij}] \in \mathbb{R}^{N \times r}$, $\boldsymbol{V} = [V_{ij}] \in \mathbb{R}^{T \times r}$, $\boldsymbol{W} = [W_{ij}] \in \mathbb{R}^{K \times r}$. \end{property} \begin{property} \label{property:boundedness} There exists an absolute constant $\Gamma \ge 0$ such that $| M_{ijk} | \le \Gamma$ for all $(i,j,k) \in [N] \times [T] \times [K]$. 
\end{property} \smallskip \noindent {\em Why $\widetilde{M}$ should be low-rank.} A natural generalization of the typical {\em factor model}, which is commonly utilized in the Econometrics literature, is the generic latent variable model (LVM), which posits that \begin{align} \label{eq:lvm} M_{ijk} &= f(\theta_i, \rho_j, \omega_k). \end{align} Here, $\theta_i \in \mathbb{R}^{d_1}, \rho_j \in \mathbb{R}^{d_2}$, and $\omega_k \in \mathbb{R}^{d_3}$ are latent feature vectors capturing unit, time, and metric specific information, respectively, for some $d_1, d_2, d_3 \ge 1$; and the latent function $f: \mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \times \mathbb{R}^{d_3 } \to \mathbb{R}$ captures the model relationship. If $f$ is ``well-behaved'' and the latent spaces are compact, then it can be seen that $\widetilde{M}$ is approximately low-rank. This is made more rigorous by the following proposition. \begin{prop} \label{prop:low_rank_approx} Let $\widetilde{M}$ satisfy \eqref{eq:lvm}. Let $f$ be an $\mathcal{L}$-Lipschitz function with $\theta_i \in [0,1]^{d_1}, \rho_j \in [0,1]^{d_2}$, and $ \omega_k \in [0,1]^{d_3}$ for all $(i,j,k) \in [N] \times [T] \times [K]$. Then, for any $\delta > 0$, there exists a low-rank third-order tensor $\mathcal{\bT}$ of rank $r \le C \cdot \delta^{-(d_1 + d_3)}$ such that \begin{align*} \norm{\widetilde{M} - \mathcal{\bT}}_{\emph{max}} &\le 2 \mathcal{L} \delta. \end{align*} Here, $C$ is a constant that depends on the latent spaces $[0,1]^{d_1}$ and $[0,1]^{d_3}$, ambient dimensions $d_1$ and $d_3$, and Lipschitz constant $\mathcal{L}$. \end{prop} In Section \ref{sec:results}, we will demonstrate that the low-rank property of $\widetilde{M}$ is central to establishing that $\widetilde{M}$ satisfies the key property of synthetic control-like settings, i.e., the target unit can be expressed as a linear combination of the donor pool across \textit{all metrics} (see Proposition \ref{prop:linear_comb}). 
Hence, in effect, observations generated via a generic latent variable model, which encompasses a large class of models, naturally fit within our multi-dimensional synthetic control framework. \subsection{Structural assumptions on noise} Before we state the properties assumed on $\epsilon = [\epsilon_{ijk}] \in \mathbb{R}^{N \times T \times K}$, we first define an important class of random variables/vectors. \begin{definition} For any $\alpha \geq 1$, we define the $\psi_{\alpha}$-norm of a random variable $X$ as \begin{align} \label{eq:alpha_norm} \norm{X}_{\psi_{\alpha}} &= \inf \Big \{ t > 0: \mathbb{E} \exp(|X|^{\alpha} /t^{\alpha}) \le 2 \Big \}. \end{align} If $\norm{X}_{\psi_{\alpha}} < \infty$, we call $X$ a $\psi_{\alpha}$-random variable. More generally, we say $X$ in $\mathbb{R}^n$ is a $\psi_{\alpha}$-random vector if the one-dimensional marginals $\langle X, v \rangle$ are $\psi_{\alpha}$-random variables for all fixed vectors $v \in \mathbb{R}^n$. We define the $\psi_{\alpha}$-norm of the random vector $X \in \mathbb{R}^n$ as \begin{align} \label{eq:alpha_vector_norm} \norm{X}_{\psi_{\alpha}} &= \sup_{v \in \mathcal{S}^{n-1}} \norm{ \langle X, v \rangle }_{\psi_{\alpha}}, \end{align} where $\mathcal{S}^{n-1} := \{ v \in \mathbb{R}^n: \norm{v}_2 = 1\}$ denotes the unit sphere in $\mathbb{R}^n$ and $\langle \cdot, \cdot \rangle$ denotes the inner product. Note that $\alpha = 2$ and $\alpha = 1$ represent the class of sub-gaussian and sub-exponential random variables/vectors, respectively. \end{definition} We now impose the following structural assumptions on $\epsilon$. For notational convenience, we will denote $\epsilon_{\cdot, j, k} \in \mathbb{R}^N$ as the column fiber of $\epsilon$ at time $j$ and for metric $k$.
\begin{property} \label{property:noise.1} Let $\epsilon$ be a third-order tensor where its entries $\epsilon_{ijk}$ are mean-zero, $\psi_{\alpha}$-random variables (for some $\alpha \ge 1$) with variance $\sigma^2$, that are independent across time $j \in [T]$ and metrics $k \in [K]$, i.e., there exists an $\alpha \ge 1$ and $K_{\alpha} < \infty$ such that $\| \epsilon_{\cdot, j, k} \|_{\psi_{\alpha}} \le K_{\alpha}$ for all $j \in [T]$ and $k \in [K]$. \end{property} \begin{property} \label{property:noise.2} For all $j \in [T]$ and $k \in [K]$, let $\| \mathbb{E} [ \epsilon_{\cdot, j, k} \epsilon^T_{\cdot, j, k}] \| \le \gamma^2$. \end{property} \subsection{Missing data} In addition to noise perturbations, we allow for missing data within our donor pool of observations. In particular, we observe (a possibly sparse) donor pool tensor $\mathcal{\bZ} = [Z_{ijk}] \in \mathbb{R}^{(N-1) \times T \times K}$ where each entry $Z_{ijk}$ is observed with some probability $\rho \in (0,1]$, independent of other entries. This is made formal by the following property. \begin{property} \label{property:masking_noise} For all $(i,j,k) \in [N-1] \times [T] \times [K]$, \begin{align*} Z_{ijk} &= \begin{cases} X_{(i+1)jk} & \text{w.p. } \rho \\ \star & \text{w.p. } 1 - \rho \end{cases} \end{align*} is sampled independently. Here, $\star$ denotes an unknown value. \end{property} \subsection{Problem statement} In summary, we observe $\mathcal{\bZ}$, which represents the observations associated with the donor pool across the entire time horizon and for all metrics. However, we only observe the pre-intervention observations for unit one, i.e., $X_{1jk}$ for all $j \in [T_0]$ and $k \in [K]$. 
In order to investigate the effects of the treatment on unit one, we aim to construct a ``synthetic'' unit one to compute its counterfactual sequence of observations $M_{1jk}$ for all $j \in [T]$ and $k \in [K]$, with particular emphasis on the post-intervention period (namely, $T_0 < j \le T$), using only the observations described above. We will evaluate our algorithm based on its prediction error. More specifically, we assess the quality of our estimate $\widehat{M}^{(k)}_1 \in \mathbb{R}^{T}$ for any metric $k \in [K]$ in terms of its (1) pre-intervention mean-squared error (MSE) \begin{align} \label{eq:pre_int_mse} \text{MSE}_{T_0}(\widehat{M}^{(k)}_1) &= \frac{1}{T_0} \mathbb{E} \left[ \sum_{t =1}^{T_0} \left(\widehat{M}^{(k)}_{1t} - M_{1tk} \right)^2 \right], \end{align} and (2) post-intervention MSE \begin{align} \label{eq:post_int_mse} \text{MSE}_{T - T_0}(\widehat{M}^{(k)}_1) &= \frac{1}{T-T_0} \mathbb{E} \left[ \sum_{t = T_0+1}^T \left(\widehat{M}^{(k)}_{1t} - M_{1tk} \right)^2 \right]. \end{align} We summarize our model assumptions in Table \ref{table:model_assumptions}\footnote{With regard to Property \ref{property:masking_noise}, we specifically mean $Z_{ijk} = X_{ijk} \cdot \mathds{1}(\pi_{ijk} =1) + \star \cdot \mathds{1}(\pi_{ijk} =0)$.}.
\begin{table} \caption{Summary of Model Assumptions} \label{table:model_assumptions} \centering \begin{tabular}{ c c c c c } \toprule \multicolumn{2}{c}{Mean Tensor $\widetilde{M}$} & \multicolumn{2}{c}{Observation Noise $\epsilon$} & \multirow{2}{*}{Masking $\mathcal{\bZ}$} \\ \cmidrule{1-2} \cmidrule{3-4} Low-rank & Boundedness & $\psi_{\alpha}$-norm & Covariance & \\ \midrule $\text{rank}(\widetilde{M}) = r$ & $\left| M_{ijk} \right| \leq \Gamma$ & $\norm{\epsilon_{\cdot, j, k}}_{\psi_{\alpha}} \le K_{\alpha}$ & $\big\| \mathbb{E} [\epsilon_{\cdot, j, k} \epsilon^T_{\cdot, j, k}] \big\| \leq \gamma^2$ &$\pi_{ijk} \sim \text{Bernoulli}(\rho)$\\ Property \ref{property:low_rank} & Property \ref{property:boundedness} & Property \ref{property:noise.1} & Property \ref{property:noise.2} & Property \ref{property:masking_noise} \\ \bottomrule \end{tabular} \end{table} \vspace{10pt} \noindent {\bf Notation.} For any general $n_1 \times n_2 \times n_3$ real-valued third-order tensor $\mathcal{\bT}$, we denote $\mathcal{\bT}_{\cdot, j, k}, \mathcal{\bT}_{i, \cdot, k},$ and $\mathcal{\bT}_{i, j, \cdot}$ as the column, row, and tube fibers of $\mathcal{\bT}$, respectively. Similarly, we denote $\mathcal{\bT}_{i, \cdot, \cdot}, \mathcal{\bT}_{\cdot, j, \cdot}$, and $\mathcal{\bT}_{\cdot, \cdot, k}$ as the horizontal, lateral, and frontal slices of $\mathcal{\bT}$, respectively. We will denote the $n_1 \times n_2 \cdot n_3$ matrix $\boldsymbol{T}$ as the flattened version of $\mathcal{\bT}$, i.e., $\boldsymbol{T}$ is formed by concatenating the $n_3$ frontal slices $\mathcal{\bT}_{\cdot, \cdot, k}$ of $\mathcal{\bT}$. We define $\boldsymbol{T}_{i, \cdot}$ and $\boldsymbol{T}_{\cdot, j}$ as the $i$-th row and $j$-th column of $\boldsymbol{T}$, respectively. Finally, we denote $\text{poly}(\alpha_1, \dots, \alpha_n)$ as a function that scales at most polynomially in its arguments $\alpha_1, \dots, \alpha_n$.
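To make the setup concrete, the following is a minimal end-to-end simulation sketch: data are generated according to Properties \ref{property:low_rank}--\ref{property:masking_noise}, and unit one's counterfactual is then estimated via the concatenation, hard singular value thresholding, and least squares steps of Section \ref{sec:algorithm}. The dimensions, rank, Gaussian noise, and observation probability below are illustrative choices; for simplicity, the de-noising step retains the top $r$ singular values rather than tuning $\lambda$, and all metrics receive equal weight (i.e., $\Delta$ is the identity).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, T0, K, r = 100, 50, 30, 2, 3   # illustrative sizes; rank r (Property 1)
sigma, rho = 0.02, 0.95              # noise level and observation probability

# Low-rank mean tensor: M_ijk = sum_z U_iz V_jz W_kz (Property 1);
# entries are bounded by Gamma = r (Property 2).
U = rng.uniform(-1, 1, (N, r))
V = rng.uniform(-1, 1, (T, r))
W = rng.uniform(-1, 1, (K, r))
M = np.einsum('iz,jz,kz->ijk', U, V, W)

# Noisy observations (Properties 3-4; Gaussian noise is sub-gaussian).
X = M + sigma * rng.standard_normal((N, T, K))

# Donor tensor with independent Bernoulli(rho) masking (Property 5);
# np.nan plays the role of the unknown value "star".
pi = rng.random((N - 1, T, K)) < rho
Z = np.where(pi, X[1:], np.nan)

# --- mRSC estimate of unit one's counterfactual ---
# Step 1: flatten the K frontal slices into an (N-1) x KT donor matrix
# (metric-major column order) and concatenate unit one's K*T0
# pre-intervention observations in the same order.
Zf = Z.transpose(0, 2, 1).reshape(N - 1, K * T)
X1 = X[0, :T0, :].T.reshape(-1)

# Step 2: de-noise via hard singular value thresholding; missing entries
# are set to zero and the reconstruction is rescaled by 1/rho_hat.
mask = ~np.isnan(Zf)
rho_hat = mask.mean()
Us, s, Vt = np.linalg.svd(np.where(mask, Zf, 0.0), full_matrices=False)
M_hat = (Us[:, :r] * s[:r]) @ Vt[:r] / rho_hat

# Step 3: least squares of X1 on the pre-intervention columns of M_hat.
pre = np.concatenate([np.arange(k * T, k * T + T0) for k in range(K)])
beta, *_ = np.linalg.lstsq(M_hat[:, pre].T, X1, rcond=None)

# Counterfactual trajectory for unit one across all metrics; empirical
# post-intervention MSE for metric one (index 0) as in the MSE criterion.
M1_hat = beta @ M_hat
post_mse = np.mean((M1_hat[T0:T] - M[0, T0:, 0]) ** 2)
```

Running this yields a post-intervention error `post_mse` that is small relative to the scale of the means, illustrating how the low-rank structure allows the donor pool to reconstruct unit one's trajectory from short pre-intervention data pooled across metrics.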
\section{Experiments} \label{sec:experiments} We establish the validity of our mRSC algorithm in three settings: \begin{enumerate} \item \textbf{Idealized synthetic-data experiment}: using data generated by a known model (outlined in Section \ref{sec:model}), we conduct this experiment to empirically verify Theorems \ref{thm:pre_int} and \ref{thm:post_int}. In situations where the data-generating mechanism and {\em unobserved} means are known, we demonstrate that the mRSC algorithm outperforms the vanilla RSC algorithm \cite{rsc1} by achieving a lower forecasting prediction error. \item \textbf{Retail}: using Walmart sales data, we provide an instance of a ``real-world'' setting where mRSC outperforms RSC in forecasting future sales (the counterfactual). \item \textbf{Cricket}: considering the problem of forecasting scores in cricket, we first show that the trajectory of scores can be modeled as an instance of the mRSC model with multiple natural metrics of interest, and then, through extensive experimentation, demonstrate the predictive prowess of the algorithm, which is also successful in capturing the nuances of the game. \end{enumerate} \subsection{Idealized synthetic-data experiment} \label{sec:mrsc-synthetic} \paragraph{\bf Experimental setup.} We consider a setting where we have two metrics of interest, metricA and metricB. The data is generated as follows: we first sample sets of latent row and column parameters $\mathbb{S}_r, \mathbb{S}_c$, where $\mathbb{S}_r = \{s_k | s_k \sim \text{Uniform}(0, 1), 1 \leq k \leq 10 \}$ and $\mathbb{S}_c = \{s_k | s_k \sim \text{Uniform}(0, 1), 1 \leq k \leq 10 \}$. We then fix the latent row and column parameters, $\theta_i$ and $\rho_j$, by sampling (with replacement) from $\mathbb{S}_r$ and $\mathbb{S}_c$. Note that $1 \leq i \leq N$ and $1 \leq j \leq T$, where $N$ and $T$ represent the dimensions of the matrices for each metric. In this experiment, we fix $T = 50$ and vary $N$ in the range $[50, 500]$.
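The sampling scheme just described can be sketched as follows (an illustrative run with $N = 50$; the logistic mean function for metricA, defined in the next paragraph with $\alpha_a = 0.7$, is included for completeness):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 50

# Sample sets of 10 latent row/column parameters, then draw
# theta_i, rho_j from them with replacement (as in the text).
S_r = rng.uniform(0, 1, size=10)
S_c = rng.uniform(0, 1, size=10)
theta = rng.choice(S_r, size=N, replace=True)
rho = rng.choice(S_c, size=T, replace=True)

# Mean matrix for metricA via the logistic form f_a(theta, rho)
# given in the following paragraph (alpha_a = 0.7).
alpha_a = 0.7
M_a = 10.0 / (1.0 + np.exp(-theta[:, None] - rho[None, :]
                           - alpha_a * theta[:, None] * rho[None, :]))
```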
For each metric, we use functions $f_a(\theta_i, \rho_j)$ and $f_b(\theta_i, \rho_j)$ to generate the mean matrices $\boldsymbol{M}_a$ and $\boldsymbol{M}_b$. Specifically, \begin{align*} f_a(\theta_i, \rho_j) &= \frac{10}{1 + \exp(-\theta_i - \rho_j - (\alpha_a \theta_i \rho_j))}, \end{align*} where $\alpha_a = 0.7$. $f_b(\theta_i, \rho_j)$ is defined similarly but with $\alpha_b = 0.3$. We then let $\boldsymbol{M}_a = [m_{a, ij}] = [f_a(\theta_i, \rho_j)]$ and $\boldsymbol{M}_b = [m_{b, ij}] = [f_b(\theta_i, \rho_j)], \text{for } 1 \leq i \leq N, 1 \leq j \leq T$. Next, we generate the mean row of interest for each metric by using a fixed linear combination of the rows of the matrices $\boldsymbol{M}_a$ and $\boldsymbol{M}_b$, respectively. We append these rows of interest to the top of both matrices. We refer to them as rows $m_{a, 0}$ and $m_{b, 0}$, respectively. Independent Gaussian noise, $\mathcal{N}(0, 1)$, is then added to each entry of the matrices $\boldsymbol{M}_a$ and $\boldsymbol{M}_b$, including the mean rows of interest at the top. This results in the observation matrices $\boldsymbol{Y}_a$ and $\boldsymbol{Y}_b$ for metricA and metricB. Given the $(N+1) \times T$ matrices $\boldsymbol{Y}_a$ and $\boldsymbol{Y}_b$, and an intervention time $T_0 < T$, the goal is to estimate the unobserved rows $\hat{m}_{a, 0}$ and $\hat{m}_{b, 0}$ in the post-intervention period, i.e., for columns $j$ where $j > T_0$. We achieve this by using estimates provided by the following: \begin{enumerate} \item \textbf{mRSC} algorithm presented in Section \ref{sec:algorithm}. This algorithm combines the two $(N+1) \times T$ matrices into one matrix of dimensions $(N+1) \times 2T$. For the regression step we use equal weights for both metrics, given the form of the generating functions $f_a$ and $f_b$. \item \textbf{RSC} algorithm of \cite{rsc1} applied separately to $\boldsymbol{Y}_a$ and $\boldsymbol{Y}_b$ to generate estimates of each metric, independently.
\end{enumerate} We conduct the experiment 100 times for each combination of $N$ and $T$ and average the resulting RMSE scores for the forecasts. \paragraph{\bf Results.} Figure \ref{fig:mrsc-exp-synthetic} shows the results of the experiment. We note that for all levels of $N \in [50, 500]$, mRSC produces a lower RMSE value for the estimates of the first row of both metricA and metricB. This is perfectly in line with the expectations set by Theorems \ref{thm:pre_int} and \ref{thm:post_int}. Note that while the bounds in Theorem \ref{thm:post_int} improve by a factor of $\sqrt{2}$, it is an upper bound and we do not expect the RMSE values to necessarily shrink by that amount. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{mrsc-synthetic.png} \caption{RMSE for mRSC and RSC algorithms for metricA and metricB for the experiment described in Section \ref{sec:mrsc-synthetic}.} \label{fig:mrsc-exp-synthetic} \end{figure} \section{Cricket Experiments Continued} \subsection{Capturing Cricketing Sense: Case Studies} \label{sec:appendix_forecast_cricket} As a continuation of the case study described in Section \ref{sec:forecastcasestudies}, we provide two more examples. First, we consider the same game as that presented in Section \ref{sec:forecastcasestudies}: India vs Australia, 2011. However, we now look at the second innings. India was able to chase the target down and win the game with relative ease. Figure \ref{fig:aus-ind-2} shows the forecasts at the intervention points of 35, 40 and 45 overs for India's innings. The forecast trajectories are exceptionally close to reality and showcase the predictive ability of the algorithm. Once again, notice the rise in the scoring rate towards the late stages of the innings, similar to the first innings. We note that the flatlining of the actual score in the innings (red line) is simply due to the fact that India had won the game in the 48th over and no more runs were added.
\\ \begin{figure}[H] \centering \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-25.png} \label{fig:ind-aus-2-25} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-40.png} \label{fig:ind-aus-2-40} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{ind-aus-2-45.png} \label{fig:ind-aus-2-45} \end{subfigure} \caption{India vs Aus (WC 2011). Second Innings (India batting). Interventions at the 35, 40 and 45 over marks. Actual and Forecast trajectories with the 95\% uncertainty interval.} \label{fig:aus-ind-2} \end{figure} \textbf{Zimbabwe vs Australia, Feb 4 2001.} Zimbabwe and Australia played a LOI game in Perth in 2001. Australia, world champions at the time, were considered too strong for Zimbabwe and, batting first, made a well-above-par 302 runs for the loss of only four wickets. The target was considered out of Zimbabwe's reach. Zimbabwe started poorly and had made only 91 for the loss of their top three batsmen by the 19th over. However, Stuart Carlisle and Grant Flower combined for a remarkable partnership to take Zimbabwe very close to the finish line. Eventually, Australia got both batsmen out just in the nick of time and ended up winning the game by just one run. We show the forecast trajectories at the 35, 40 and 45 over marks--all during the Carlisle-Flower partnership. The forecasts track reality quite well. A key feature to highlight here is the smoothness of the forecasts (in blue) compared to reality (in red). This is a key feature of our algorithm, which ``de-noises'' the data matrix to retain only the top few singular values. The resulting smoothness is the mean effect we are trying to estimate, and it is no surprise that the forecast trajectories bring this feature to light.
\begin{figure}[H] \centering \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-35.png} \label{fig:zim-aus-2-35} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-40.png} \label{fig:zim-aus-2-40} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \includegraphics[width=\textwidth]{zim-aus-2-45.png} \label{fig:zim-aus-2-45} \end{subfigure} \caption{Zimbabwe vs Aus (2001). Second Innings (Zimbabwe batting). Interventions at the 35 over, 40 over and 45 over mark.} \label{fig:aus-zim-2} \end{figure} \section{Introduction} \label{sec:intro} Quantifying the causal statistical effect of interventions is a problem of interest across a wide array of domains. From policy to engineering to medicine, estimating the effect of an intervention is critical to innovation and to understanding existing systems. In any setting, with or without an intervention, we only get to observe one set of outcomes. In causal analysis, the fundamental problem is that of estimating what was not observed, i.e., the {\em counterfactual}. In order to estimate the counterfactual, observational studies rely on the identification (or estimation) of a {\em control} unit. This can be done by experts or via techniques such as {\em matching} the unit of interest to other units (called {\em donors}) on covariate features or propensity scores (see \cite{rubin1973, rosenbaum1983}). A popular data-driven approach to estimating the control unit is known as the Synthetic Control Method (SCM) \cite{abadie1, abadie2, abadie3}. SCM assigns convex weights to the donors such that the resulting {\em synthetic} unit most closely matches the unit of interest, according to a chosen metric of interest. A generalization of this approach, known as Robust Synthetic Control (RSC) \cite{rsc1}, removes the convexity constraint and guarantees a consistent estimator which is robust to missing data and noise.
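To make the contrast concrete, the convex-weight step of SCM can be sketched as a constrained least-squares problem; removing the constraints, as RSC does, reduces it to ordinary least squares on de-noised donors. A minimal illustration with hypothetical data (not the full SCM, which also matches on covariates):

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(Y_donors, y_target):
    """Convex donor weights: nonnegative, summing to one, minimizing
    the squared pre-intervention mismatch to the target unit."""
    n = Y_donors.shape[0]
    obj = lambda w: np.sum((w @ Y_donors - y_target) ** 2)
    res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

Y = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])  # two donor trajectories
y = np.array([2.0, 2.0, 2.0])                     # target = their average
w = scm_weights(Y, y)
```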
While the SCM, its many variants, and the RSC have been shown to have attractive properties, they all suffer from poor estimation when the amount of training data, i.e., the length of the pre-intervention period, is small. In many of these scenarios with little pre-intervention data, there may be other data available which is related to the type of data (or metric) of interest. For example, we might be interested in crime-rate and might also have median household income and high-school graduation rate data available. Therefore, one remedy to the limited pre-intervention data is to utilize data from multiple metrics. \subsection{Contributions} As the main contribution of this work, we propose a generalization of the unidimensional RSC to the multidimensional Robust Synthetic Control (mRSC). In the process, we address the challenge of poor estimation of the counterfactual due to limited pre-intervention data, since mRSC incorporates multiple types of data in a principled manner. We show that the mRSC method is a natural consequence of the popular factor model. Through this connection, we provide a falsifiability test for mRSC in a data-driven manner. In particular, we establish that if the test is passed, then mRSC can be expected to produce a consistent estimate of the mean unit of interest (for all metrics) in the pre- and post-intervention periods. Further, the accuracy of the control estimated using mRSC improves over that of RSC with an increasing number of different types of relevant data. Specifically, the Mean Squared Error (MSE) is decreased by factors of $K$ and $\sqrt{K}$ for the pre- and post-intervention estimates, respectively. All other properties of the RSC algorithm, namely robustness to missing data and noise, carry over.
Finally, we conduct extensive experimentation to establish the efficacy of this generalized method in comparison to the RSC algorithm via synthetically generated datasets and two real-world case studies: product sales in retail and score trajectory forecasting in the game of Cricket. Next, we summarize these contributions in more detail. \medskip \noindent {\bf Model.} We consider a natural generalization of the factor (or latent variable) model considered in the literature, cf. \cite{rsc1}, where the measurement associated with a given unit, at a given time, for a given metric is a function of features associated with the unit, time, and metric. A special case of such models, the linear factor models, are the {\em bread and butter} models within the Econometrics literature and are assumed in \cite{abadie1, abadie2, abadie3}. We argue that as long as the function is Lipschitz and the features belong to a compact domain, the resulting factor model can be well approximated by a low-rank 3-order tensor, with the three dimensions of the tensor corresponding to the unit, time, and metric, respectively. Therefore, for simplicity of exposition, we focus on the setup where the underlying model is indeed a low-rank 3-order tensor. The observations are noisy versions of the ground-truth 3-order low-rank tensor, with potentially missing values. These include pre-intervention data for all units and metrics, while post-intervention data is observed only for the ``donor'' units. The goal is to estimate the ``synthetic control'' using all donor units and all the metrics in the pre-intervention period. The estimated ``synthetic control'' can then help estimate the {\em future} (post-intervention) ground-truth measurements for all metrics associated with the treatment unit.
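A quick numerical illustration of this model (an assumed multiplicative mean function over latent features, not the paper's data): a tensor generated from latent unit/time/metric features is exactly low-rank, and flattening it across metrics leaves the rank unchanged, which is the property the falsifiability test checks.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K, r = 40, 30, 3, 4   # units, time, metrics, latent dimension

# Latent features on a compact domain and a multiplicative
# (hence Lipschitz on the unit cube) mean function.
U = rng.uniform(size=(N, r))
V = rng.uniform(size=(T, r))
W = rng.uniform(size=(K, r))

# M[i, j, k] = sum_q U[i,q] V[j,q] W[k,q]: an exactly rank-r tensor.
M = np.einsum("iq,jq,kq->ijk", U, V, W)

# Flatten by concatenating frontal slices; the flattened matrix has
# rank at most r, matching the rank of any single metric's slice.
flat = np.concatenate([M[:, :, k] for k in range(K)], axis=1)
rank_single = np.linalg.matrix_rank(M[:, :, 0])
rank_stacked = np.linalg.matrix_rank(flat)
```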
\medskip \noindent {\bf Algorithm and analysis.} The synthetic control literature \cite{abadie1, abadie2, abadie3, rsc1} relied on making the following key assumption: the treatment unit's measurements across time for the metric of interest can be expressed as a linear combination of the donor units' measurements. In this work, we start by arguing that we need not make this assumption, as it holds naturally under the low-rank 3-order tensor model. Furthermore, the low-rank 3-order tensor model suggests that the identical linear relationship holds across all metrics simultaneously. That is, in order to utilize measurements from other metrics to estimate the synthetic control for a given metric and a given unit of interest, we can in effect treat the measurements for other metrics as additional measurements for our metric of interest! In effect, the number of pre-intervention measurements gets multiplied by the number of available metrics. The resulting algorithm is a natural adaptation of the RSC algorithm but with multiple measurements, which we call multidimensional robust synthetic control (mRSC). Using a recently developed method for analyzing error-in-variables regression in the high-dimensional setting, we derive finite-sample guarantees on the performance of the mRSC algorithm. We conclude that the pre- and post-intervention mean-squared error (MSE) of mRSC decays faster than that of RSC by factors of $K$ and $\sqrt{K}$, respectively, where $K$ is the number of metrics available. This formalizes the intuition that the data from other metrics effectively act as if they belong to the same metric. In summary, mRSC provides a way to overcome limited pre-intervention data by simply utilizing pre-intervention data across other metrics.
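A minimal sketch of this procedure, stacking metrics, de-noising by hard singular value thresholding, and regressing the treated unit's stacked pre-intervention observations on the de-noised donors, is below. It assumes equal metric weights, a user-chosen rank $r$, and fully observed data; the paper's algorithm additionally handles missing entries.

```python
import numpy as np

def mrsc_fit(Y_metrics, y_treated_pre, T0, r):
    """Y_metrics: list of K donor matrices, each N x T.
    y_treated_pre: list of K length-T0 pre-intervention vectors for
    the treated unit. Returns donor weights and the de-noised matrix."""
    # Stack metrics side by side: N x (K*T) donor matrix.
    Y = np.concatenate(Y_metrics, axis=1)
    # De-noise by keeping the top-r singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    M_hat = (U[:, :r] * s[:r]) @ Vt[:r]
    # Pre-intervention columns of every metric, stacked: N x (K*T0).
    T = Y_metrics[0].shape[1]
    pre_cols = np.concatenate(
        [M_hat[:, k * T : k * T + T0] for k in range(len(Y_metrics))], axis=1)
    target = np.concatenate(y_treated_pre)
    # Least-squares donor weights (the regression step of RSC/mRSC).
    beta, *_ = np.linalg.lstsq(pre_cols.T, target, rcond=None)
    return beta, M_hat

# Rank-1 example with two metrics; the treated unit is the donor average.
Y1 = np.outer([1.0, 2.0, 3.0], np.ones(4))   # metric A donors (3 x 4)
Y2 = 2.0 * Y1                                # metric B donors
beta, M_hat = mrsc_fit([Y1, Y2],
                       [np.array([2.0, 2.0]), np.array([4.0, 4.0])],
                       T0=2, r=1)
forecast = beta @ M_hat   # counterfactual across both stacked metrics
```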
\medskip \noindent {\bf Experiments: Synthetic data.} To begin with, we utilize synthetically generated data per our factor model both to verify the tightness of the theoretical results and to understand the diagnostic test that determines whether mRSC is applicable (or not). First, we argue that if mRSC is to work successfully, then when we flatten the 3-order tensor of measurement data to a matrix by stacking the unit-by-time slices across different metrics, the resulting matrix should have a similar (approximate) rank as the matrix with only the single metric of interest (see Section \ref{sec:falsifiability} for details). We show that when the data is generated per our model, such a test ought to pass; when the data is not from our model, the test fails. Next, we observe that mRSC performs as predicted by the theoretical results on experimental data. \medskip \noindent {\bf Experiments: Retail.} Next, we study the efficacy of mRSC on a real-world dataset of departmental product sales of Walmart obtained from Kaggle \cite{kaggle-walmart}. It comprises sales data for several product-departments across 45 store locations and 143 weeks. The store locations are units, weeks represent time, and product-departments represent different metrics. Our goal is to forecast the sales of a given product-department at a given location after observing data for it pre-intervention, using data for all product-departments across all locations from both the pre- and post-intervention periods. We study the performance of mRSC and compare it with the RSC method across different intervention times and across different department-products as the metric of choice. Across all our experiments, we consistently find that when there is very little pre-intervention data, the mRSC method performs significantly better than the RSC method; however, when there is a lot of pre-intervention data for a given metric of interest, mRSC and RSC perform comparably.
This behavior is consistently observed and is in line with our theoretical results: if pre-intervention data is very limited, then the use of data from multiple metrics via mRSC provides significant gains. \medskip \noindent {\bf Experiments: Cricket.} We consider the task of forecasting score trajectories in the game of cricket. This is an ideal candidate for mRSC because the game has {\em two} metrics of interest: runs scored and wickets lost. We provide a brief introduction to Cricket, as we do not expect the reader to be familiar with the game, followed by why trajectory forecasting is a perfect fit for the mRSC setup, and then a summary of our findings. \smallskip \noindent {\em Cricket 101.} Cricket, per a Google Search for ``how many fans world-wide'', seems to be the world's second most popular sport after soccer at the time of writing. It is played between two teams and has an inherent asymmetry in that both teams take turns to bat and bowl. While there are multiple formats with varying durations, we focus on what is called the ``50 overs Limited Over International (LOI)'' format: each team bats for an ``inning'' of 50 overs, or 300 balls, during which at any given time two of its players (batsmen) are batting and trying to score runs, while the other team fields all of its 11 players, one of whom bowls while the others field, collectively trying to get the batsmen out or prevent them from scoring. Each batsman can get out at most once, and an inning ends when either the 300 balls are done or 10 batsmen get out; at the end, the team that makes more runs wins. \smallskip \noindent{\em Forecasting trajectory using mRSC.} As an important contribution, we utilize the mRSC algorithm for forecasting the entire trajectory of a game from observations of its initial part.
As the reader will notice, the use of mRSC for forecasting the trajectory of a game is generic and likely applicable to other games such as basketball, which would be an interesting future direction. To that end, we start by describing how mRSC can be utilized for forecasting the trajectory of an ``inning''. We view each inning as a unit, the balls as time, and scores and wickets as two different metrics of interest. The historical innings are effectively the donor units. The unit or inning of interest, for which we may have observed scores and wickets up to some number of initial balls (or time), is the treatment unit, and forecasting is simply a matter of coming up with a ``synthetic control'' for this unit and using it to estimate the score and wicket trajectories for future balls. And that's it. We collected data for historical LOIs over the period 1999--2017, covering over 4700 innings. Each inning can be up to 300 balls long, with score and wicket trajectories. We find that the approximately low-rank structure of the matrix with only scores (4700 by 300) and that of the matrix with both scores and wickets (4700 by 600) remain similar (see Figure \ref{fig:singvals}). This suggests that mRSC is applicable in this scenario. Next, we evaluate the performance on more than 750 recent (2010 to 2017) LOI innings. For consistency of comparisons, we conduct a detailed study of forecasting at the 30 over, or 180 ball, mark (i.e., 60\% of the inning). We measure performance through the Mean Absolute Percentage Error (MAPE) and R-squared ($R^2$). The median MAPE of our forecasts varies from $0.027$ or $2.7$\% for 5 overs ahead to $0.051$ or $5.1$\% for 20 overs ahead. While this on its own is significant, to get a sense of the {\em absolute} performance, we evaluate the $R^2$ of our forecasts. We find that the $R^2$ of our forecasts varies from $0.92$ for 5 overs to $0.79$ for 15 overs. That is, for 5-over forecasting, we can explain $92$\% of the ``variance'' in the data, which is surprisingly accurate.
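The two accuracy measures used here can be computed as follows (the arrays are hypothetical stand-ins, not the actual forecast data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error of a forecast."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(forecast - actual) / np.abs(actual))

def r_squared(actual, forecast):
    """Fraction of variance in the actuals explained by the forecasts."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    ss_res = np.sum((actual - forecast) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical end-of-innings scores vs. their forecasts.
actual = np.array([250.0, 300.0, 200.0, 275.0])
forecast = np.array([245.0, 310.0, 195.0, 280.0])
```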
Finally, we establish the value of the de-noising and regression steps in the mRSC algorithm. Through one important experiment, we also establish that the commonly held belief that the donor pool should comprise only innings involving the team whose score we are forecasting, in fact, leads to worse estimation. This is related to {\em Stein's Paradox} \cite{stein1}, which was observed in baseball. Finally, we show using examples of actual cricket games that the mRSC algorithm successfully captures and reacts to the nuances of the game of cricket. \subsection{Organization} The rest of this work is organized as follows: we review the relevant bodies of literature next (Section \ref{sec:related}), followed by a detailed overview of the problem and our proposed model (Section \ref{sec:model}). Next, in Section \ref{sec:algorithm}, we describe the setting and the mRSC algorithm, followed by a simple diagnostic test to determine the model's applicability to a problem. We present the main theoretical results in Section \ref{sec:results}, followed by a synthetic experiment and a ``real-world'' retail case study comparing the mRSC algorithm to RSC (Section \ref{sec:experiments}). Finally, in Section \ref{sec:cricket}, we present the complete case study for the game of cricket to demonstrate the versatility of the proposed model and its superior predictive performance.
\subsection{Conditional Autoregressive Prior for Player-Specific Coefficients} \label{subsec:CAR} Sharing information between players is critical for our estimation problem, but standard hierarchical models encode an assumption of exchangeability between units that is too strong for NBA players, even between those who are classified by the league as playing the same position. For instance, LeBron James is listed at the same position (small forward) as Steve Novak, despite the fact that James is one of the NBA's most prolific short-range scorers whereas Novak has not scored a layup since 2012. To model between-player variation more realistically, our hierarchical model shares information across players based on a localized notion of player similarity that we represent as an $L \times L$ binary adjacency matrix $\mathbf{H}$: $H_{\ell k} = 1$ if players $\ell$ and $k$ are similar to each other and $H_{\ell k} = 0$ otherwise. We determine similarity in a pre-processing step that compares the spatial distribution of where players spend time on the offensive half-court; see Appendix \ref{subsec:H} for exact details on specifying $\mathbf{H}$. Now let $\beta^{\ell}_{ji}$ be the $i$th component of $\boldsymbol{\beta}^{\ell}_j$, the vector of coefficients for the time-referenced covariates for player $\ell$'s hazard $j$ \eqref{hazard-equation}. Also let $\boldsymbol{\beta}_{ji}$ be the vector representing this component across all $L = 461$ players, $(\beta^{1}_{ji} \: \: \beta^{2}_{ji} \: \ldots \: \beta^{L}_{ji})'$. 
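While the exact construction of $\mathbf{H}$ is deferred to Appendix \ref{subsec:H}, the underlying idea, connecting players whose court-occupancy distributions are close, can be sketched as follows. The total-variation distance and $k$-nearest-neighbor rule here are illustrative assumptions, not the appendix's exact procedure:

```python
import numpy as np

def similarity_graph(occupancy, k=2):
    """occupancy: L x B matrix of court-occupancy histograms (rows sum
    to one). Connect each player to its k nearest neighbors in
    total-variation distance; symmetrize so similarity is mutual."""
    L = occupancy.shape[0]
    H = np.zeros((L, L), dtype=int)
    for i in range(L):
        d = 0.5 * np.abs(occupancy - occupancy[i]).sum(axis=1)
        d[i] = np.inf                      # exclude self
        for j in np.argsort(d)[:k]:
            H[i, j] = H[j, i] = 1
    return H

occ = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])  # 3 players, 2 bins
H = similarity_graph(occ, k=1)
```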
We assume independent conditional autoregressive (CAR) priors \cite{besag1974spatial} for $\boldsymbol{\beta}_{ji}$: \begin{align} \beta^{\ell}_{ji} | \beta^{(-\ell)}_{ji}, \tau_{\boldsymbol{\beta}_{ji}}^2 &\sim \mathcal{N} \left( \frac{1}{n_{\ell}} \sum_{k : H_{\ell k} = 1} \beta^{k}_{ji}, \frac{\tau_{\boldsymbol{\beta}_{ji}}^2}{n_{\ell}} \right) \nonumber \\ \tau^2_{\boldsymbol{\beta}_{ji}} &\sim \text{InvGam}(1, 1) \label{car} \end{align} where $n_{\ell} = \sum_{k} H_{\ell k}$. Similarly, let $\boldsymbol{\beta}_{\textrm{s} i} = (\beta^1_{\textrm{s} i} \: \: \beta^2_{\textrm{s} i} \: \ldots \: \beta^L_{\textrm{s} i})$ be the vector of the $i$th component of the shot probability model \eqref{shotprob} across players $1, \ldots, L$. We assume the same CAR prior \eqref{car} independently for each component $i$. While the inverse gamma prior for the $\tau^2_{*}$ terms seems very informative, we want to avoid very large or small values of $\tau^2_{*}$, corresponding to zero or full shrinkage (respectively), which we know are inappropriate for our model. Predictive performance for the zero-shrinkage model ($\tau^2_{*}$ very large) is shown in Table \ref{loglik-table}, whereas the full-shrinkage model ($\tau^2_{*} = 0$) does not allow parameters to differ by player identity, which precludes many of the inferences EPV was designed for. \subsection{Spatial Effects $\xi$} \label{subsec:spat_effects} Player-tracking data is a breakthrough because it allows us to model the fundamental spatial component of basketball. In our models, we incorporate the properties of court space in spatial effects $\xi_j^{\ell}, \tilde{\xi}_j^{\ell}, \xi_\textrm{s}^{\ell}$, which are unknown real-valued functions on $\mathbb{S}$, and therefore infinite-dimensional. We represent such spatial effects using Gaussian processes (see \citeasnoun{rasmussen2006gaussian} for an overview of modeling aspects of Gaussian processes).
Gaussian processes are usually specified by a mean function and a covariance function; this direct approach is computationally intractable for large data sets, as the computational cost of inference and of interpolating the surface at unobserved locations is $\mathcal{O}(n^3)$, where $n$ is the number of different points at which $\xi_j^{\ell}$ is observed (for many spatial effects $\xi_j^{\ell}$, the corresponding $n$ would be in the hundreds of thousands). We instead provide $\xi$ with a low-dimensional representation using functional bases \cite{higdon2002space,quinonero2005unifying}, which offers three important advantages. First, this representation is more computationally efficient for large data sets such as ours. Second, functional bases allow for a non-stationary covariance structure that reflects unique spatial dependence patterns on the basketball court. Finally, the finite basis representation allows us to apply the same between-player CAR prior to estimate each player's spatial effects. Our functional basis representation of a Gaussian process $\xi_j^{\ell}$ relies on $d$ deterministic basis functions $\phi_{j1}, \ldots, \phi_{jd}: \mathbb{S} \rightarrow \mathbb{R}$ such that for any $\mathbf{z} \in \mathbb{S}$, \begin{equation}\label{GP-basis} \xi_j^{\ell}(\mathbf{z}) = \sum_{i=1}^d w^{\ell}_{ji}\phi_{ji}(\mathbf{z}), \end{equation} where $\mathbf{w}^{\ell}_j = (w^{\ell}_{j1} \: \ldots \: w^{\ell}_{jd})'$ is a random vector of loadings, $\mathbf{w}^{\ell}_j \sim \mathcal{N}(\boldsymbol{\omega}^{\ell}_j, \boldsymbol{\Sigma}^{\ell}_j)$. Letting $\Phi_j(\mathbf{z}) = (\phi_{j1}(\mathbf{z}) \: \ldots \: \phi_{jd}(\mathbf{z}))'$, we can see that $\xi_j^{\ell}$ given by \eqref{GP-basis} is a Gaussian process with mean function $\Phi_j(\mathbf{z})'\boldsymbol{\omega}^{\ell}_j$ and covariance function $\text{Cov}[\xi_j^{\ell}(\mathbf{z}_1), \xi_j^{\ell}(\mathbf{z}_2)] = \Phi_j(\mathbf{z}_1)'\boldsymbol{\Sigma}^{\ell}_j\Phi_j(\mathbf{z}_2)$.
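For concreteness, the representation \eqref{GP-basis} reduces evaluating the surface and its covariance to $d$-dimensional linear algebra. A sketch with hypothetical Gaussian-bump bases on a roughly $47 \times 50$ ft half-court; these bases are illustrative stand-ins for the fitted $\phi_{ji}$:

```python
import numpy as np

d = 10
rng = np.random.default_rng(0)
centers = rng.uniform([0, 0], [47, 50], size=(d, 2))  # basis centers

def Phi(z, width=8.0):
    """Evaluate d hypothetical Gaussian-bump basis functions at z."""
    return np.exp(-np.sum((centers - z) ** 2, axis=1) / (2 * width ** 2))

w = rng.normal(size=d)            # loadings w ~ N(omega, Sigma)
Sigma = np.eye(d)

z1, z2 = np.array([10.0, 25.0]), np.array([30.0, 25.0])
xi_z1 = Phi(z1) @ w               # surface value xi(z1)
cov = Phi(z1) @ Sigma @ Phi(z2)   # Cov[xi(z1), xi(z2)]
```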
However, since the bases $\phi_{ji}$ are deterministic, each $\xi^{\ell}_j$ is represented as a $d$-dimensional parameter. Note that we also use \eqref{GP-basis} for the pass receiver spatial effects and the spatial effect term in the shot probability model, $\tilde{\xi}^{\ell}_j$ and $\xi_\textrm{s}^{\ell}$, respectively. For these terms we have associated bases $\tilde{\phi}_{ji}, \phi_{\textrm{s} i}$ and weights $\tilde{w}^{\ell}_{ji}, w^{\ell}_{\textrm{s} i}$. As our notation indicates, the basis functions $\Phi_j(\mathbf{z})$ differ for each macrotransition type but are constant across players, whereas the weight vectors $\mathbf{w}^{\ell}_j$ vary across both macrotransition types and players. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{graphics/bplots_3} \caption{The functional bases $\phi_{ji}$ for $i=1, \ldots, 10$ and $j$ corresponding to the shot-taking macrotransition, $j=5$. There is no statistical interpretation of the ordering of the bases; we have displayed them in rough order of the shot types represented, from close-range to long-range.} \label{bases} \end{figure} Using $d=10$, we determine the functional bases in a pre-processing step, discussed in Appendix \ref{subsec:psi}. These basis functions are interpretable as patterns/motifs that constitute players' decision-making tendencies as a function of space; please see Figure \ref{bases} for an example, or \citeasnoun{miller2013icml} for related work in a basketball context. Furthermore, we use a CAR model \eqref{car} to supply the prior mean and covariance matrix ($\boldsymbol{\omega}^{\ell}_j, \boldsymbol{\Sigma}^{\ell}_j$) for the weights: \begin{align} \boldsymbol{w}^{\ell}_j | \boldsymbol{w}^{-(\ell)}_j, \tau_{\mathbf{w}_j}^2 &\sim \mathcal{N} \left( \frac{1}{n_{\ell}} \sum_{k : H_{\ell k} = 1} \mathbf{w}^{k}_j, \frac{\tau_{\mathbf{w}_j}^2}{n_{\ell}} \mathbf{I}_d\right) \nonumber \\ \tau^2_{\mathbf{w}_{j}} &\sim \text{InvGam}(1, 1).
\label{car3} \end{align} As with \eqref{car}, we also use \eqref{car3} for terms $\tilde{\mathbf{w}}_{j}$ and $\mathbf{w}_\textrm{s}$. Combining the functional basis representation \eqref{GP-basis} with the between-player CAR prior \eqref{car3} for the weights, we get a prior representation for spatial effects $\xi^{\ell}_j, \tilde{\xi}^{\ell}_j, \xi^{\ell}_{\textrm{s}}$ that is low-dimensional and shares information both across space and between different players. \subsection{Parameter Estimation} \label{subsec:estimation} As discussed in Section \ref{sec:Macro}, calculating EPV requires the parameters that define the multiresolution transition models \ref{M3}--\ref{M4}---specifically, the hazards $\lambda_j^{\ell}$, shot probabilities $p^{\ell}$, and all parameters of the microtransition model \ref{M3}. We estimate these parameters in a Bayesian fashion, combining the likelihood of the observed optical tracking data with the prior structure discussed earlier in this section. Using our multiresolution models, we can write the likelihood for the full optical tracking data, indexed arbitrarily by $t$: \begin{align} \label{partial} \prod_{t} \mathbb{P}(Z_{t + \epsilon} | \mathcal{F}^{(Z)}_t) & = \Bigg( \overbrace{\prod_{t} \mathbb{P}(Z_{t + \epsilon}|M(t)^c, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c]}}^{L_{\text{mic}}} \Bigg) \Bigg( \overbrace{\prod_{t}\prod_{j=1}^6 \mathbb{P}(Z_{t + \epsilon}|M_j(t), C_{\delta_t}, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}^{L_{\text{rem}}}\Bigg) \nonumber \\ & \hspace{-2.5cm} \times \Bigg( \underbrace{\prod_{t} \mathbb{P}(M(t)^c | \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c]} \prod_{j=1}^6 \mathbb{P}(M_j(t)|\mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}_{L_{\text{entry}}} \Bigg) \Bigg( \underbrace{\prod_t \prod_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M_j(t)]}}_{L_{\text{exit}}} \Bigg). \end{align} The factorization used in \eqref{partial} highlights data features that inform different parameter groups:
$L_{\text{mic}}$ is the likelihood term corresponding to the microtransition model \ref{M3}, $L_{\text{entry}}$ the macrotransition entry model \ref{M1}, and $L_{\text{exit}}$ the macrotransition exit model \ref{M2}. The remaining term $L_{\text{rem}}$ is left unspecified and ignored during inference. Thus, $L_{\text{mic}}, L_{\text{entry}},$ and $L_{\text{exit}}$ can be thought of as partial likelihoods \cite{cox1975partial}, which under mild conditions lead to consistent and asymptotically well-behaved estimators \cite{wong1986theory}. When parameters in these partial likelihood terms are given prior distributions, as is the case for those comprising the hazards in the macrotransition entry model, as well as those in the shot probability model, the resulting inference is partially Bayesian \cite{cox1975note}. The microtransition partial likelihood term $L_{\text{mic}}$ factors by player: \begin{equation} L_{\text{mic}} \propto \prod_t \prod_{\ell = 1}^L \mathbb{P}(\mathbf{z}_{\ell}(t + \epsilon) | M(t)^c, \mathcal{F}^{(Z)}_t)^{\mathbf{1}[M(t)^c \text{ and } \ell \text{ on court at time } t]}. \label{Lmic} \end{equation}
We do not perform any hierarchical modeling for the parameters of the microtransition model---because this model only describes movement (not decision-making), the data for every player are informative enough to provide precise inference. Thus, microtransition models are fit in parallel using each player's data separately; this requires $L=461$ processors, each taking at most 18 hours at 2.50GHz clock speed, using 32GB of RAM. For the macrotransition entry term, we can write \begin{equation} L_{\text{entry}} \propto \prod_{\ell=1}^L \prod_{j=1}^6 L_{\text{entry}_j}^{\ell} (\lambda_j^{\ell}(\cdot)), \label{Lmac} \end{equation} recognizing that (for small $\epsilon$), \begin{align}\label{everything} L^{\ell}_{\text{entry}_j}(\lambda_j^{\ell}(\cdot)) &\propto \left(\prod_{\substack{t \: : \: M_j(t) \\ t \in \mathcal{T}^{\ell}}} \lambda^{\ell}_j(t) \right) \exp \left( - \sum_{t \in \mathcal{T}^{\ell}} \lambda^{\ell}_j(t) \right) \nonumber \\ \text{where}\hspace{1cm} \log(\lambda^{\ell}_j(t)) &= [\mathbf{W}_j^{\ell}(t)]'\boldsymbol{\beta}_j^{\ell} + \boldsymbol{\phi}_j(\mathbf{z}_{\ell}(t))'\mathbf{w}_j^{\ell} + \left(\tilde{\boldsymbol{\phi}}_j(\mathbf{z}_{j}(t) )' \tilde{\mathbf{w}}_j^{\ell}\mathbf{1}[j \leq 4]\right) \end{align} and $\mathcal{T}^{\ell}$ is the set of times $t$ for which player $\ell$ possesses the ball. Expression \eqref{everything} is the likelihood for a Poisson regression; combined with prior distributions \eqref{car}--\eqref{car3}, inference for $\boldsymbol{\beta}^{\ell}_j, \mathbf{w}_j^{\ell}, \tilde{\mathbf{w}}_j^{\ell}$ is thus given by a hierarchical Poisson regression. However, the size of our data makes implementing such a regression model computationally difficult, as the design matrix would have 30.4 million rows and a minimum of $L(p_j + d) \geq 5993$ columns, depending on macrotransition type. We perform this regression through the use of integrated nested Laplace approximations (INLA) \cite{rue2009approximate}.
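Because \eqref{everything} is a Poisson-regression likelihood, coefficient recovery can be sanity-checked on simulated event counts. The sketch below uses a synthetic design and plain gradient descent as a stand-in for the INLA fit; all sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the time-discretized hazard likelihood:
# log lambda_t = x_t' beta, with y_t the event count in (t, t + epsilon].
n, p = 5000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([-3.0, 0.5, -0.8])
y = rng.poisson(np.exp(X @ beta_true))

def neg_loglik(beta):
    # Poisson-regression negative log-likelihood: sum_t [exp(x'b) - y_t x'b]
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)

beta = np.zeros(p)
for _ in range(3000):                       # plain gradient descent
    beta -= 1e-4 * X.T @ (np.exp(X @ beta) - y)
```

The recovered `beta` sits close to `beta_true`; in the actual model the design rows would be the situational covariates and basis evaluations at each time point in $\mathcal{T}^{\ell}$.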
Each macrotransition type can be fit separately, and requires approximately 24 hours using a single 2.50GHz processor with 120GB of RAM. Recalling Section \ref{macro_exit}, the macrotransition exit model \ref{M2} is deterministic for all macrotransitions except shot attempts ($j=5$). Thus, $L_{\text{exit}}$ only provides information on the parameters of our shot probability model \eqref{shotprob}. Analogous to the Poisson model in \eqref{everything}, $L_{\text{exit}}$ is the likelihood of a logistic regression, which factors by player. We also use INLA to fit this hierarchical logistic regression model, though fewer computational resources are required as this likelihood only depends on time points where a shot is attempted, which is a much smaller subset of our data. \iffalse \begin{algorithm}[h!] \caption{Calculating EPV ($\nu_t$). } \label{alg:EPV} \begin{algorithmic} \Require{Player $\ell$ possess the ball at time $t$} \bigskip \Function{macro}{$\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta}$} \Comment{Simulates a possible macrotransition in $(s, s + \epsilon]$} \For{$j$ in $1, \ldots, 6$} \State Set $M_j(s) = 1$ with probability min$\{1, \lambda_j^{\ell}(s)\}$ \EndFor \If{$\sum_j M_j(s) > 1$} \State Keep only one $j$ such that $M_j(s) = 1$, choosing it proportional to $\lambda_j^{\ell}(s)$ \EndIf \State \Return{$\{M_j(s), j=1, \ldots, 6\}$} \EndFunction \bigskip \Function{EPVdraw}{$\mathcal{F}^{(Z)}_t, \boldsymbol{\Theta}$} \Comment{Gets EPV from single simulation of next macro} \State Initialize $s \leftarrow t$ \State Initialize $M_j(s) \leftarrow \textsc{macro}(\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta})$ \While{$M_j(s) = 0$ for all $j$} \State Draw $Z_{s+\epsilon} \sim \mathbb{P}(Z_{s + \epsilon} | M(s)^c, \mathcal{F}^{(Z)}_s)$ \State $\mathcal{F}^{(Z)}_{s + \epsilon} \leftarrow \{\mathcal{F}^{(Z)}_t, Z_{s + \epsilon}\}$ \State $s \leftarrow s + \epsilon$ \State $M_j(s) \leftarrow \textsc{macro}(\mathcal{F}^{(Z)}_s, \boldsymbol{\Theta})$ \EndWhile \State Draw 
$C_\delta \sim \mathbb{P}(C_\delta | M_j(s), \mathcal{F}^{(Z)}_s)$ \State $\nu_t \leftarrow \E[h(C_T)|C_\delta]$ \State \Return{$\nu_t$} \EndFunction \bigskip \Function{EPV}{$N, \mathcal{F}^{(Z)}_t, \boldsymbol{\Theta}$} \Comment{Averages over simulations of next macrotransition} \State Initialize $\nu_t \leftarrow 0$ \For{$i$ in $1, \ldots, N$} \State $\nu_t \leftarrow \nu_t + \textsc{EPVdraw}(\mathcal{F}^{(Z)}_t, \boldsymbol{\Theta})$ \EndFor \State \Return $\nu_t/N$ \EndFunction \end{algorithmic} \end{algorithm} \fi \end{document} \subsection{Time-Varying Covariates in Macrotransition Entry Model} \label{Covariates} As revealed in \eqref{hazard-equation}, the hazards $\lambda_j^{\ell}(t)$ are parameterized by spatial effects ($\xi_j^{\ell}$ and $\tilde{\xi}_j^{\ell}$ for pass events), as well as coefficients for situation covariates, $\boldsymbol{\beta}_j^{\ell}$. The covariates used may be different for each macrotransition $j$, but we assume for each macrotransition type the same covariates are used across players $\ell$. Among the covariates we consider, \texttt{dribble} is an indicator of whether the ballcarrier has started dribbling after receiving possession. \texttt{ndef} is the distance between the ballcarrier and his nearest defender (transformed to $\log(1 + d)$). \texttt{ball\_lastsec} records the distance traveled by the ball in the previous one second. \texttt{closeness} is a categorical variable giving the rank of the ballcarrier's teammates' distance to the ballcarrier. Lastly, \texttt{open} is a measure of how open a potential pass receiver is using a simple formula relating the positions of the defensive players to the vector connecting the ballcarrier with the potential pass recipient. For $j \leq 4$, the pass event macrotransitions, we use \texttt{dribble}, \texttt{ndef}, \texttt{closeness}, and \texttt{open}. For shot-taking and turnover events, \texttt{dribble}, \texttt{ndef}, and \texttt{ball\_lastsec} are included. 
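The geometric covariates above can be read directly off the tracking coordinates; a toy sketch, in which all coordinates and array names are invented for illustration:

```python
import numpy as np

# Hypothetical ballcarrier positions over three frames (25 Hz) and two
# current defender positions, in court feet.
ball_xy = np.array([[10.0, 20.0], [11.0, 20.5], [13.0, 22.0]])
defenders = np.array([[12.0, 21.0], [15.0, 25.0]])

# ndef: log(1 + distance) to the nearest defender at the current frame.
d = np.linalg.norm(defenders - ball_xy[-1], axis=1)
ndef = np.log1p(d.min())

# ball_lastsec: distance traveled by the ball over the trailing window
# (a full second would use 25 frames; we sum the frames available here).
steps = np.linalg.norm(np.diff(ball_xy, axis=0), axis=1)
ball_lastsec = steps.sum()
```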
Lastly, the shot probability model (which, from \eqref{shotprob}, has the same parameterization as the macrotransition model) uses \texttt{dribble} and \texttt{ndef} only. All models also include an intercept term. As discussed in Section \ref{subsec:CAR}, independent CAR priors are assumed for each coefficient in each macrotransition hazard model. \subsection{Player Similarity Matrix $\mathbf{H}$ for CAR Prior} \label{subsec:H} The hierarchical models used for parameters of the macrotransition entry model, discussed in Section \ref{subsec:CAR}, are based on the idea that players who share similar roles for their respective teams should behave similarly in the situations they face. Indeed, players' positions (point guard, power forward, etc.) encode their offensive responsibilities: point guards move and distribute the ball, small forwards penetrate and attack the basket, and shooting guards get open for three-point shots. Such responsibilities reflect spatiotemporal decision-making tendencies, and are therefore informative for our macrotransition entry model \eqref{hazard-def}--\eqref{hazard-equation}. Rather than use the labeled positions in our data, we define position as a distribution of a player's location during his time on the court. Specifically, we divide the offensive half of the court into 4-square-foot bins (575 total) and count, for each player, the number of data points for which he appears in each bin. Then we stack these counts together into an $L \times 575$ matrix (there are $L=461$ players in our data), denoted $\mathbf{G}$, and take the square root of all entries in $\mathbf{G}$ for normalization. We then perform non-negative matrix factorization (NMF) on $\mathbf{G}$ in order to obtain a low-dimensional representation of players' court occupancy that still reflects variation across players \cite{miller2013icml}.
Specifically, this involves solving: \begin{equation}\label{nmf} \hat{\mathbf{G}} = \underset{\mathbf{G}^*}{\text{argmin}}\{D(\mathbf{G}, \mathbf{G}^*)\}, \text{ subject to } \mathbf{G}^* = \left(\underset{L \times r}{\mathbf{U}}\right)\left(\underset{r \times 575}{\mathbf{V}}\right) \text{ and } U_{ij},V_{ij} \geq 0 \text{ for all } i,j, \end{equation} where $r$ is the rank of the approximation $\hat{\mathbf{G}}$ to $\mathbf{G}$ (we use $r=5$), and $D$ is some distance function; we use a Kullback--Leibler-type divergence $$D(\mathbf{G}, \mathbf{G}^*) = \sum_{i,j} G_{ij}\log \left( G_{ij}/G_{ij}^*\right) - G_{ij} + G_{ij}^*.$$ The rows of $\mathbf{V}$ are non-negative basis vectors for players' court occupancy distributions (plotted in Figure \ref{H_bases}) and the rows of $\mathbf{U}$ give the loadings for each player. With this factorization, $\mathbf{U}_i$ (the $i$th row of $\mathbf{U}$) provides player $i$'s ``position''---an $r$-dimensional summary of where he spends his time on the court. Moreover, the smaller the difference between two players' positions, $||\mathbf{U}_i - \mathbf{U}_j||$, the more alike are their roles on their respective teams, and the more similar we expect the parameters of their macrotransition models to be a priori. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{graphics/H_bases} \caption{The rows of $\mathbf{V}$ (plotted above for $r=5$) are bases for the players' court occupancy distribution. There is no interpretation to the ordering.} \label{H_bases} \end{figure} Formalizing this, we take the $L \times L$ matrix $\mathbf{H}$ to consist of 0s, then set $H_{ij} = 1$ if player $j$ is one of the eight closest players in our data to player $i$ using the distance $||\mathbf{U}_i - \mathbf{U}_j||$ (the cutoff of choosing the closest eight players is arbitrary). This construction of $\mathbf{H}$ does not guarantee symmetry, which is required for the CAR prior we use; thus we set $H_{ji} = 1$ if $H_{ij} = 1$.
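A minimal sketch of this construction, with sizes shrunk for illustration (20 players, 30 bins, $r = 5$, and three neighbors instead of eight): KL-divergence NMF via the standard Lee-Seung multiplicative updates, followed by the symmetrized nearest-neighbor adjacency:

```python
import numpy as np

rng = np.random.default_rng(1)

# Square-rooted synthetic occupancy counts (tiny offset avoids log(0)).
G = np.sqrt(rng.poisson(5.0, size=(20, 30)).astype(float)) + 1e-9
U = rng.uniform(0.5, 1.5, size=(20, 5))
V = rng.uniform(0.5, 1.5, size=(5, 30))

def kl(G, Gh):
    # The Kullback-Leibler-type divergence D(G, G*) from the text.
    return np.sum(G * np.log(G / Gh) - G + Gh)

d0 = kl(G, U @ V)
for _ in range(200):            # multiplicative updates preserve nonnegativity
    U *= (G / (U @ V)) @ V.T / V.sum(axis=1)
    V *= U.T @ (G / (U @ V)) / U.sum(axis=0)[:, None]

# H: mark each player's 3 nearest neighbors in loading space, then symmetrize.
D = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)
H = np.zeros_like(D, dtype=int)
for i in range(len(U)):
    H[i, np.argsort(D[i])[:3]] = 1
H = np.maximum(H, H.T)          # set H_ji = 1 whenever H_ij = 1
```

The multiplicative updates monotonically decrease the divergence, and the symmetrization step means a row of `H` can end up with more neighbors than the nominal cutoff.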
For instance, LeBron James' ``neighbors'' are (in no particular order): Andre Iguodala, Harrison Barnes, Paul George, Kobe Bryant, Evan Turner, Carmelo Anthony, Rodney Stuckey, Will Barton, and Rudy Gay (nine players rather than eight, since the symmetrization step can add neighbors). \subsection{Basis Functions for Spatial Effects $\xi$} \label{subsec:psi} Recalling \eqref{GP-basis}, for each player $\ell$ and macrotransition type $j$, we have $\xi_j^{\ell}(\mathbf{z}) = \sum_{i=1}^d w^{\ell}_{ji} \phi_{ji}(\mathbf{z})$, where $\{\phi_{ji}, i=1, \ldots, d\}$ are the basis functions for macrotransition $j$. During the inference discussed in Section \ref{sec:Computation}, these basis functions are assumed known. They are derived from a pre-processing step. Heuristically, they are constructed by approximately fitting a simplified macrotransition entry model with a stationary spatial effect for each player, then performing NMF to find a low-dimensional subspace (in this function space of spatial effects) that accurately captures the spatial dependence of players' macrotransition behavior. We now describe this process in greater detail. Each basis function $\phi_{ji}$ is itself represented as a linear combination of basis functions, \begin{equation} \label{phi_basis} \phi_{ji}(\mathbf{z}) = \sum_{k=1}^{d_0} v_{jik} \psi_k(\mathbf{z}), \end{equation} where $\{\psi_k, k=1, \ldots, d_0\}$ are basis functions (as the notation suggests, the same basis is used for all $j$, $i$). The basis functions $\{\psi_k, k=1, \ldots, d_0\}$ are induced by a triangular mesh of $d_0$ vertices (we use $d_0 = 383$) on the court space $\mathbb{S}$. In practice, the triangulation is defined on a larger region that includes $\mathbb{S}$, due to boundary effects. The mesh is formed by partitioning $\mathbb{S}$ into triangles, where any two triangles share at most one edge or corner; see Figure \ref{triangulation} for an illustration.
With some arbitrary ordering of the vertices of this mesh, $\psi_k:\mathbb{S} \rightarrow \mathbb{R}$ is the unique function taking value 0 at all vertices $\tilde{k} \neq k$, 1 at vertex $k$, and linearly interpolating between any two points within the same triangle used in the mesh construction. Thus, with this basis, $\phi_{ji}$ (and consequently, $\xi^{\ell}_j$) are piecewise linear within the triangles of the mesh. \begin{figure}[h] \centering \includegraphics[width=0.33\linewidth]{graphics/triangulation_2} \caption{Triangulation of $\mathbb{S}$ used to build the functional basis $\{\psi_k, k=1, \ldots, d_0\}$. Here, $d_0=383$.} \label{triangulation} \end{figure} This functional basis $\{\psi_k, k=1, \ldots, d_0\}$ is used by \citeasnoun{lindgren2011explicit}, who show that it can approximate a Gaussian random field with Mat\'ern covariance. Specifically, let $x(\mathbf{z}) = \sum_{k=1}^{d_0} v_k\psi_k(\mathbf{z})$ and assume $(v_1 \: \ldots \: v_{d_0})' = \mathbf{v} \sim \mathcal{N}(0, \boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2})$. The form of $\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2}$ is such that the covariance function of $x$ approximates a Mat\'ern covariance: \begin{equation} \label{gmrf} \text{Cov}[x(\mathbf{z}_1), x(\mathbf{z}_2)] = \boldsymbol{\psi}(\mathbf{z}_1)'\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2} \boldsymbol{\psi}(\mathbf{z}_2) \approx \frac{\sigma^2}{\Gamma(\nu)2^{\nu-1}}(\kappa ||\mathbf{z}_1 - \mathbf{z}_2||)^{\nu} K_{\nu}(\kappa ||\mathbf{z}_1 - \mathbf{z}_2||), \end{equation} where $\boldsymbol{\psi}(\mathbf{z}) = (\psi_1(\mathbf{z}) \: \ldots \: \psi_{d_0}(\mathbf{z}))'$. As discussed in Section \ref{subsec:spat_effects}, the functional basis representation of a Gaussian process offers computational advantages in that the infinite-dimensional field $x$ is given a $d_0$-dimensional representation, as $x$ is completely determined by $\mathbf{v}$.
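On any single triangle, the hat functions overlapping it can be evaluated through barycentric coordinates; a generic sketch (the triangle itself is made up):

```python
import numpy as np

# One mesh triangle with vertices A, B, C (illustrative coordinates).
tri = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])

def hat_values(p, tri):
    """Values of the three hat functions psi_A, psi_B, psi_C at point p."""
    A, B, C = tri
    # Barycentric coordinates of p with respect to B and C.
    lam = np.linalg.solve(np.column_stack([B - A, C - A]), p - A)
    return np.array([1.0 - lam.sum(), lam[0], lam[1]])

w = hat_values(np.array([1.0, 1.0]), tri)
```

Each hat function is 1 at its own vertex and 0 at the other two, and the three values at any interior point are its barycentric weights, so they always sum to 1 and interpolate linearly across the triangle.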
Furthermore, as discussed in \citeasnoun{lindgren2011explicit}, $\boldsymbol{\Sigma}_{\nu, \kappa, \sigma^2}^{-1}$ is sparse (\eqref{gmrf} is actually a Gaussian Markov random field (GMRF) approximation to $x$), offering additional computational savings \cite{rue2001fast}. The GMRF approximation given by \eqref{phi_basis}--\eqref{gmrf} is actually used in fitting the microtransition models for offensive players \eqref{micro}. We give the spatial innovation terms $\mu^{\ell}_x, \mu^{\ell}_y$ representations using the $\psi$ basis. Then, as mentioned in Section \ref{subsec:estimation}, \eqref{micro} is fit independently for each player in our data set using the software R-INLA. We also fit simplified versions of the macrotransition entry model, using the $\psi$ basis, in order to determine $\{v_{jik}, k=1, \ldots, d_0\}$, the loadings of the basis representation for $\phi$, \eqref{phi_basis}. This simplified model replaces the macrotransition hazards \eqref{hazard-equation} with \begin{equation} \label{hazard_preprocess} \log(\lambda^{\ell}_j(t)) = c_j^{\ell} + \sum_{k=1}^{d_0} u^{\ell}_{jk} \psi_k(\mathbf{z}^{\ell}(t)) + \mathbf{1}[j \leq 4]\sum_{k=1}^{d_0}\tilde{u}_{jk}^{\ell}\psi_k\left(\mathbf{z}_{j}(t)\right), \end{equation} thus omitting situational covariates ($\boldsymbol{\beta}^{\ell}_j$ in \eqref{hazard-equation}) and using the $\psi$ basis representation in place of $\xi_j^{\ell}$. Note that for pass events, like \eqref{hazard-equation}, we have an additional term based on the pass recipient's location, parameterized by $\{\tilde{u}^{\ell}_{jk}, k=1, \ldots, d_0\}$. As discussed in Section \ref{subsec:estimation}, parameters in \eqref{hazard_preprocess} can be estimated by running a Poisson regression. We perform this independently for all players $\ell$ and macrotransition types $j$ using the R-INLA software. 
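As a side reference, the Matérn covariance on the right-hand side of \eqref{gmrf} reduces to closed form at half-integer smoothness, where the Bessel function $K_\nu$ simplifies; the sketch below implements the two standard cases $\nu = 1/2$ and $\nu = 3/2$ (general $\nu$ would require a Bessel-K routine):

```python
import numpy as np

def matern(d, sigma2=1.0, kappa=1.0, nu=0.5):
    # Closed forms of the Matern covariance at half-integer smoothness.
    d = np.asarray(d, dtype=float)
    if nu == 0.5:                              # exponential covariance
        return sigma2 * np.exp(-kappa * d)
    if nu == 1.5:
        return sigma2 * (1.0 + kappa * d) * np.exp(-kappa * d)
    raise ValueError("only nu in {0.5, 1.5} in this sketch")
```

Both cases equal $\sigma^2$ at distance zero and decay monotonically, with the $\nu = 3/2$ case yielding smoother sample paths.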
Like the microtransition model, we fit \eqref{hazard_preprocess} separately for each player across $L = 461$ processors (each hazard type $j$ is run in serial), each requiring at most 32GB RAM and taking no more than 16 hours. For each macrotransition type $j$, point estimates $\hat{u}^{\ell}_{jk}$ are exponentiated\footnote{The reason for exponentiation is that estimates $\hat{u}^{\ell}_{jk}$ inform the log hazard, so exponentiation converts these estimates to a more natural scale of interest. Strong negative signals among the $\hat{u}^{\ell}_{jk}$ will move to 0 in the entries of $\mathbf{U}_j$ and not be very influential in the matrix factorization \eqref{U_factorization}, which is desirable for our purposes.}, so that $[\mathbf{U}_j]_{\ell k} = \exp(\hat{u}^{\ell}_{jk})$. We then perform NMF \eqref{nmf} on $\mathbf{U}_j$: \begin{equation} \label{U_factorization} \mathbf{U}_j \approx \left(\underset{L \times d}{\mathbf{Q}_j}\right)\left(\underset{d \times d_0}{\mathbf{V}_j}\right). \end{equation} Following the NMF example in Section \ref{subsec:H}, the rows of $\mathbf{V}_j$ are bases for the variation in coefficients $\{u^{\ell}_{jk}, k=1, \ldots, d_0\}$ across players $\ell$. As $1 \leq k \leq d_0$ indexes points on our court triangulation (Figure \ref{triangulation}), such bases reflect structured variation across space. We furthermore use these terms as the coefficients for \eqref{phi_basis}, the functional basis representation of $\phi_{ji}$, setting $v_{jik} = [\mathbf{V}_j]_{ik}$. Equivalently, we can summarize our spatial basis model as: \begin{equation} \label{all_bases} \xi^{\ell}_j (\mathbf{z}) = [\mathbf{w}^{\ell}_j]'\boldsymbol{\phi}_j(\mathbf{z}) = [\mathbf{w}^{\ell}_j]' \mathbf{V}_j \boldsymbol{\psi}(\mathbf{z}).
\end{equation} The preprocessing steps described in this section---fitting a simplified macrotransition entry model \eqref{hazard_preprocess} and performing NMF on the coefficient estimates \eqref{U_factorization}---provide us with basis functions $\phi_{ji}(\mathbf{z})$ that we treat as fixed and known during the modeling and inference discussed in Section \ref{sec:Computation}. Note that an expression analogous to \eqref{all_bases} is used for $\tilde{\xi}^{\ell}_j$ in terms of $\tilde{\mathbf{w}}^{\ell}_j$ and $\tilde{\mathbf{V}}_j$ for pass events; however, for the spatial effect $\xi^{\ell}_\textrm{s}$ in the shot probability model, we simply use $\mathbf{V}_5$. Thus, the basis functions for the shot probability model are the same as those for the shot-taking hazard model. \subsection{Calculating EPVA: Baseline EPV for League-Average Player} \label{subsec:EPVA} To calculate the baseline EPV for a league-average player possessing the ball in player $\ell$'s shoes, denoted $\nu_t^{r(\ell)}$ in \eqref{EPVA}, we start by considering an alternate version of the transition probability matrix between coarsened states $\mathbf{P}$. For each player $\ell_1, \ldots, \ell_5$ on offense, there is a disjoint subset of rows of $\mathbf{P}$, denoted $\mathbf{P}_{\ell_i}$, that correspond to possession states for player $\ell_i$. Each row of $\mathbf{P}_{\ell_i}$ is a probability distribution over transitions in $\mathcal{C}$ given possession in a particular state. Technically, since states in $\mathcal{C}_{\text{poss}}$ encode player identities, players on different teams do not share the same set of states to which they can individually transition with nonzero probability. To get around this, we remove the columns from each $\mathbf{P}_{\ell_i}$ corresponding to passes to players not on player $\ell_i$'s team, and reorder the remaining columns according to the position (guard, center, etc.) of the associated pass recipient.
Thus, the interpretation of transition distributions $\mathbf{P}_{\ell_i}$ across players $\ell_i$ is as consistent as possible. We create a baseline transition profile of a hypothetical league-average player by averaging these transition probabilities across all players: (with slight abuse of notation) let $\mathbf{P}_r = \sum_{\ell=1}^L \mathbf{P}_{\ell}/L$. Using this, we create a new transition probability matrix $\mathbf{P}_r(\ell)$ by replacing player $\ell$'s transition probabilities ($\mathbf{P}_{\ell}$) with the league-average player's ($\mathbf{P}_r$). The baseline (league-average) EPV at time $t$ is then found by evaluating $\nu^{r(\ell)}_t = \E_{\mathbf{P}_r(\ell)}[ X | C_t]$. Note that $\nu^{r(\ell)}_t$ depends only on the coarsened state $C_t$ at time $t$, rather than the full history of the possession, $\mathcal{F}^{(Z)}_t$, as in $\nu_t$ \eqref{epveqn}. This ``coarsened'' baseline $\nu_t^{r(\ell)}$ exploits the fact that, when averaging possessions over the entire season, the results are (in expectation) identical to using a full-resolution baseline EPV that assumes the corresponding multiresolution transition probability models for this hypothetical league-average player. \end{document} \subsection{Player-Tracking Data} In 2013 the National Basketball Association (NBA), in partnership with data provider STATS LLC, installed optical tracking systems in the arenas of all $30$ teams in the league. The systems track the exact two-dimensional locations of every player on the court (as well as the three-dimensional location of the ball) at a resolution of 25Hz, yielding over 1 billion space-time observations over the course of a full season. Consider, for example, the following possession recorded using this player tracking system. 
This is a specific Miami Heat possession against the Brooklyn Nets from the second quarter of a game on November 1, 2013, chosen arbitrarily among those during which LeBron James (widely considered the best NBA player as of 2014) handles the ball. \begin{figure}[h!] \centering \includegraphics[width=1.0\linewidth]{graphics/poss_path} \caption{Miami Heat possession against Brooklyn Nets. Norris Cole wanders into the perimeter (A) before driving toward the basket (B). Instead of taking the shot, he runs underneath the basket (C) and eventually passes to Rashard Lewis (D), who promptly passes to LeBron James (E). After entering the perimeter (F), James slips behind the defense (G) and scores an easy layup (H).} \label{heat_poss} \end{figure} In this particular possession, diagrammed in Figure \ref{heat_poss}, point guard Norris Cole begins with possession of the ball, crossing the halfcourt line (panel A). After waiting for his teammates to arrive in the offensive half of the court, Cole wanders gradually into the perimeter (inside the three point line), before attacking the basket through the left post. He draws two defenders, and while he appears to beat them to the basket (B), instead of attempting a layup he runs underneath the basket through to the right post (C). He is still being double-teamed, and at this point passes to Rashard Lewis (D), who is standing in the right wing three position. As defender Joe Johnson closes, Lewis passes to LeBron James, who is standing about 6 feet beyond the three point line and drawing the attention of Andray Blatche (E). James wanders slowly into the perimeter (F), until just behind the free throw line, at which point he breaks towards the basket. His rapid acceleration (G) splits the defense and gains him a clear lane to the basket. He successfully finishes with a layup (H), providing the Heat two points.
\subsection{Expected Possession Value} Such detailed data offer limitless analytical potential to basketball spectators and pose new methodological challenges to statisticians. Of the dizzying array of questions that could be asked of such data, we choose to focus this paper on one particularly compelling quantity of interest, which we call \textit{expected possession value} (EPV), defined as the expected number of points the offense will score on a particular possession conditional on that possession's evolution up to time $t$. For illustration, we plot the EPV curve corresponding to the example Heat possession in Figure \ref{heat_epv}, with EPV estimated using the methodology in this paper. We see several moments when the expected point yield of the possession, given its history, changes dramatically. For the first 2 seconds of the possession, EPV remains around 1. When Cole drives toward the basket, EPV rises until peaking at around 1.34 when Cole is right in front of the basket. As Cole dribbles past the basket (and his defenders continue pursuit), however, EPV falls rapidly, bottoming out at 0.77 before ``resetting'' to 1.00 with the pass to Rashard Lewis. The EPV increases slightly to 1.03 when the ball is then passed to James. As EPV is sensitive to small changes in players' exact locations, we see EPV rise slightly as James approaches the three point line and then dip slightly as he crosses it. Shortly afterwards, EPV rises suddenly as James breaks towards the basket, eluding the defense, and continues rising until he is beneath the basket, when an attempted layup boosts the EPV from 1.52 to 1.62. \begin{figure}[h!] \centering \includegraphics[width=1.0\linewidth]{graphics/ticker_1_edit} \caption{Estimated EPV over time for the possession shown in Figure \ref{heat_poss}. Changes in EPV are induced by changes in players' locations and dynamics of motion; macrotransitions such as passes and shot attempts produce immediate, sometimes rapid changes in EPV.
The black line slightly smooths EPV evaluations at each time point (gray dots), which are subject to Monte Carlo error.} \label{heat_epv} \end{figure} In this way, EPV corresponds naturally to a coach's or spectator's sense of how the actions that basketball players take in continuous time help or hurt their team's cause to score in the current possession, and quantifies this in units of expected points. EPV acts like a stock ticker, providing an instantaneous summary of the possession's eventual point value given all available information, much like a stock price values an asset based on speculation of future expectations. \subsection{Related Work and Contributions} Concepts similar to EPV, where final outcomes are modeled conditional on observed progress, have had statistical treatment in other sports, such as in-game win probability in baseball \cite{bukiet1997markov,yang2004two} and football \cite{lock2014using}, as well as in-possession point totals in football \cite{burke2010,goldner2012markov}. These previous efforts can be categorized into either marginal regression/classification approaches, where features of the current game state are mapped directly to expected outcomes, or process-based models that use a homogeneous Markov chain representation of the game to derive outcome distributions. Neither of these approaches is ideal for application to basketball. Marginal regression methodologies ignore the natural martingale structure of EPV, which is essential to its ``stock ticker'' interpretation. On the other hand, while Markov chain methodologies do maintain this ``stock ticker'' structure, applying them to basketball requires discretizing the data, introducing an onerous bias-variance-computation tradeoff that is not present for sports like baseball that are naturally discrete in time. 
To estimate EPV effectively, we introduce a novel multiresolution approach in which we model basketball possessions at two separate levels of resolution, one fully continuous and one highly coarsened. By coherently combining these models we are able to obtain EPV estimates that are reliable, sensitive, stochastically consistent, and computationally feasible (albeit intensive). While our methodology is motivated by basketball, we believe that this research can serve as an informative case study for analysts working in other application areas where continuous monitoring data are becoming widespread, including traffic monitoring \cite{ihler2006adaptive}, surveillance, and digital marketing \cite{shao2011data}, as well as other sports such as soccer and hockey \cite{thomas2013competing}. Section \ref{sec:Multiresolution} formally defines EPV within the context of a stochastic process for basketball, introducing the multiresolution modeling approach that makes EPV calculations tractable as averages over future paths of a stochastic process. Parameters for these models, which represent players' decision-making tendencies in various spatial and situational circumstances, are discussed in Section \ref{sec:Macro}. Section \ref{sec:Computation} discusses inference for these parameters using hierarchical models that share information between players and across space. We highlight results from actual NBA possessions in Section \ref{sec:Results}, and show how EPV provides useful new quantifications of player ability and skill. Section \ref{sec:Discussion} concludes with directions for further work. \end{document} \iffalse With its inherently discrete batter/pitcher dynamics, the sport of baseball has received tremendous focus from the statistics community over the past several decades.
Because the pitches thrown in a baseball game define a natural set of outcomes, such as a ball, strike, or home run, the entire sequence of a given baseball game may be modeled based on these individual events. Because a batter faces a given set of pitches (or, more coarsely, a set of `at bats') each with its own outcome, understanding batter characteristics boils down to characterizing a set of discrete events. Early examples of statistics' prominent role in baseball include \citeasnoun{efron1975data}, who look at shrinkage estimators of player batting averages, \citeasnoun{albright1993a-statistical}, who call into question the notion of streakiness in batting, and \citeasnoun{albert1994exploring}, who looks at the effect of situational covariates on hitting efficiency. While these references highlight statisticians' early foray into baseball analysis, analyses have projected even further back in time: because basic statistics have been kept from historical games, recent work has applied modern statistical techniques to this historical data to determine, for example, that the best pitcher in 1876 was Albert Spalding, with a 47-12 win-loss record \cite{james2010the-new-bill}. Baseball is not the only naturally discrete sport to attain significant attention from statisticians. Both golf \cite[e.g.][]{connolly2008skill, berry2001how-ferocious} and chess \cite[e.g.][]{draper1963does} have a history of being studied statistically. Another way in which sports have been studied more generally is to ignore within-game events, and focus solely on modeling the outcomes, or the winners, of the game. A common approach is that of paired comparisons \cite{bradley1952rank}, where each team has a latent skill parameter, and their odds of winning a game are proportional to their skill relative to their competitor's. This notion was later extended to ties \cite{davidson1970extending, rao1967ties} and subsequently to dynamic skill parameters \cite{glickman1999parameter}.
While these results have had significant impact on our understanding and prediction of sports outcomes, this focus on discrete events has limited our understanding of within-game processes in dynamic sports such as soccer, hockey, and basketball, which have received considerably less attention from the statistics community. The historical neglect of these sports stems largely from their continuous nature, which is not naturally quantified by a boxscore, or play-by-play description, of the game. With players interacting in real-time, it is inherently difficult to represent the characteristics of a game, or of individual players, with a small set of statistics. In contrast to baseball or chess, dynamic sports such as basketball consist of complex space-time interactions which \textit{create} discrete outcomes such as assists and points. In this way, basketball is like a high-speed chess match, where teams and players employ tactics which do not necessarily generate points immediately, but can yield higher-value opportunities several ``moves'' down the line. From this viewpoint, it becomes apparent that the decisive moment in a given possession may not have been the three-point shot which resulted in a point, but rather the pass that led to that shot, or even the preceding drive that scrambled the defense. This idea of players interacting and dynamically making decisions which result in creating value at the endpoint of a play differentiates dynamic sports, and hints at the need for more intricate statistical modeling which accounts for the micro-scale spatio-temporal flow inherent to these sports. Unfortunately, contemporary basketball analyses fail to account for this core idea. Despite many recent innovations, most advanced metrics (PER \cite{hollinger2004pro} and +/- variations \cite{omidiran2011pm}, for example) remain based on simple tallies relating to the terminal states of possessions like points, rebounds, and turnovers. 
Similarly in hockey, \citeasnoun{thomas2013competing} have discretized the game according to player line changes, considering outcomes as points, censored by player substitutions. While these approaches have shed light on their respective games, they are akin to analyzing a chess match based only on the move that resulted in checkmate, leaving unexplored the possibility that the key move occurred several turns before. This leaves a major gap to be filled, as an understanding of how players contribute to the whole possession---not just the events that end it---can be critical in evaluating players, assessing the quality of their decision-making, and predicting the success of particular in-game tactics. The major obstacle to closing this gap is the current inability to evaluate the individual tactical decisions that form the substructure of every possession of every basketball game. For example, there is no current method to estimate the value of a drive to the basket, or to compare the option of taking a contested shot to the option of passing to an open teammate. A major opportunity to fill this void is presented by recent data acquisition systems in sports arenas. In the NBA, for instance, starting with the $2013-2014$ season there are multiple cameras mounted in the rafters of every stadium, capturing video which is post-processed to find the spatial location of all players and the ball, $25$ times per second. Similar systems have existed in soccer and football stadiums for years. This optical player-tracking data opens up the opportunity to study the fine-scale spatio-temporal interactions of the players. However, because of the unique character of the data and underlying processes, new statistical methods are required to extract insight from these new data sources.
In this paper, we develop a coherent, quantitative representation of a whole possession that summarizes each moment of the possession in terms of the number of points the offense is expected to score---a quantity we call \textit{expected possession value}, or EPV. To capture the unique characteristics inherent in the endpoint-valued spatio-temporal process which characterizes basketball possessions, we employ a multiresolution semi-Markov process model to encode how ball handlers make decisions on the court based upon their unique skill, their location, and the locations of their teammates and opponents. By assigning a point value to every tactical option available to a player in a given instant, EPV allows for a decomposition of players' decisions, differentiating, for instance, between good shots and shots taken when a more favorable pass was available. We define EPV in Section \ref{sec:EPV}, and subsequently represent it as a multiresolution semi-Markov process in Section \ref{sec:Multiresolution}. The subsequent two sections break apart the multiresolution structure, detailing the macro- and micro-transition aspects of the model, respectively. In Section \ref{sec:Results} we highlight details of the method's computational implementation, demonstrating the resulting output. Lastly, in Section \ref{sec:Discussion} we conclude the work and highlight a variety of possible uses of the method, several of which are included in the Appendix.
\fi \section{Introduction} \label{sec:Intro} \subfile{EPV_intro.tex} \section{Multiresolution Modeling} \label{sec:Multiresolution} \subfile{multires2.tex} \section{Transition Model Specification} \label{sec:Macro} \subfile{EPV_macro.tex} \section{Hierarchical Modeling and Inference} \label{sec:Computation} \subfile{EPV_computation.tex} \section{Results} \label{sec:Results} \subfile{EPV_results.tex} \section{Discussion} \label{sec:Discussion} \subfile{EPV_discuss.tex} \newpage \subsection{Microtransition Model} The microtransition model describes player movement with the ballcarrier held constant. In the periods between transfers of ball possession (including passes, shots, and turnovers), all players on the court move in order to influence the character of the next ball movement (macrotransition). For instance, the ballcarrier might drive toward the basket to attempt a shot, or move laterally to gain separation from a defender, while his teammates move to position themselves for passes or rebounds, or to set screens and picks. The defense moves correspondingly, attempting to deter easy shot attempts or passes to certain players while simultaneously anticipating a possible turnover. Separate models are assumed for offensive and defensive players, as we shall describe. The motion of offensive players over a short time window is driven largely by the players' dynamics (velocity, acceleration, etc.). Let the location of offensive player $\ell$ ($\ell \in \{1, \ldots, L = 461\}$) at time $t$ be $\mathbf{z}^{\ell}(t) = (x^{\ell}(t), y^{\ell}(t))$. We then model movement in each of the $x$ and $y$ coordinates using \begin{align} x^{\ell}(t + \epsilon) &= x^{\ell}(t) + \alpha^{\ell}_x[x^{\ell}(t) - x^{\ell}(t - \epsilon)] + \eta^{\ell}_x(t) \label{micro} \end{align} (and analogously for $y^{\ell}(t)$).
This expression derives from a Taylor series expansion of the ballcarrier's position for each coordinate, such that $\alpha^{\ell}_x[x^{\ell}(t) - x^{\ell}(t - \epsilon)] \approx \epsilon x^{\ell \prime}(t)$, and $\eta^{\ell}_x(t)$ provides stochastic innovations representing the contribution of higher-order derivatives (acceleration, jerk, etc.). Because they are driven to score, players' dynamics on offense are nonstationary. When possessing the ball, most players accelerate toward the basket when beyond the three-point line, and decelerate when very close to the basket in order to attempt a shot. Also, players will accelerate away from the edges of the court as they approach these, in order to stay in bounds. To capture such behavior, we assume spatial structure for the innovations, $\eta^{\ell}_x(t) \sim \mathcal{N} (\mu^{\ell}_x(\mathbf{z}^{\ell}(t)), (\sigma^{\ell}_x)^2)$, where $\mu^{\ell}_x$ maps player $\ell$'s location on the court to an additive effect in \eqref{micro}, which has the interpretation of an acceleration effect; see Figure \ref{accelerations} for an example. \begin{figure}[h] \centering \begin{tabular}{cccc} \subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/parker_with}} & \subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/parker_without}} & \subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/howard_with}} & \subfloat[]{\includegraphics[width=0.23\linewidth]{graphics/howard_without}} \end{tabular} \caption{Acceleration fields $(\mu_x(\mathbf{z}(t)), \mu_y(\mathbf{z}(t)))$ for Tony Parker (a)--(b) and Dwight Howard (c)--(d) with and without ball possession. The arrows point in the direction of the acceleration at each point on the court's surface, and the size and color of the arrows are proportional to the magnitude of the acceleration. Comparing (a) and (c), for instance, we see that when both players possess the ball, Parker more frequently attacks the basket from outside the perimeter.
Howard does not accelerate to the basket from beyond the perimeter, and only tends to attack the basket inside the paint.} \label{accelerations} \end{figure} The defensive components of $\mathbb{P}(Z_{t+\epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$, corresponding to the positions of the five defenders, are easier to model conditional on the evolution of the offense's positions. Following \citeasnoun{franks2014defensive}, we assume each defender's position is centered on a linear combination of the basket's location, the ball's location, and the location of the offensive player he is guarding. \citeasnoun{franks2014defensive} use a hidden Markov model (HMM) based on this assumption to learn which offensive players each defender is guarding, such that, conditional on defender $\ell$ guarding offender $k$, his location $\mathbf{z}^{\ell}(t) = (x^{\ell}(t), y^{\ell}(t))$ should be normally distributed with mean $\mathbf{m}^k_{\text{opt}}(t) = 0.62\mathbf{z}^k(t) + 0.11\mathbf{z}_{\text{bask}} + 0.27\mathbf{z}_{\text{ball}}(t)$. Of course, the dynamics (velocity, etc.) of defensive players are still hugely informative for predicting their locations within a small time window. Thus our microtransition model for defender $\ell$ balances these dynamics with the mean path induced by the player he is guarding: \begin{align} x^{\ell}(t + \epsilon)|m^k_{\text{opt}, x}(t) & \sim \mathcal{N} \bigg( x^{\ell}(t) + a^{\ell}_x[x^{\ell}(t) - x^{\ell}(t-\epsilon)] \nonumber \\ & \hspace{-2cm} + b^{\ell}_x[m^k_{\text{opt}, x}(t + \epsilon) - m^k_{\text{opt}, x}(t)] + c_x^{\ell}[x^{\ell}(t) - m^k_{\text{opt}, x}(t + \epsilon)], (\tau^{\ell}_x)^2\bigg) \label{micro-defense} \end{align} and symmetrically in $y$.
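A minimal simulation sketch of the offensive microtransition \eqref{micro} and the defender anchor point $\mathbf{m}^k_{\text{opt}}$ of \citeasnoun{franks2014defensive} follows; the damping coefficient, noise scale, and acceleration field below are illustrative stand-ins for fitted values, not estimates from the tracking data.

```python
import random

def micro_step(x_now, x_prev, alpha, mu, sigma, rng):
    """One coordinate of the offensive microtransition:
    x(t + eps) = x(t) + alpha * [x(t) - x(t - eps)] + eta,
    with eta ~ N(mu(x(t)), sigma^2) supplying the acceleration effect."""
    return x_now + alpha * (x_now - x_prev) + rng.gauss(mu(x_now), sigma)

def defender_anchor(z_offender, z_basket, z_ball):
    """Mean defender location: 0.62 * offender + 0.11 * basket + 0.27 * ball."""
    return tuple(0.62 * o + 0.11 * bk + 0.27 * bl
                 for o, bk, bl in zip(z_offender, z_basket, z_ball))

rng = random.Random(0)
# Illustrative acceleration field: drift toward a basket at x = 5.25 feet.
mu = lambda x: 0.02 * (5.25 - x)
x_next = micro_step(x_now=30.0, x_prev=29.8, alpha=0.95, mu=mu, sigma=0.1, rng=rng)

# Defender guarding the ballcarrier (offender and ball coincide here).
anchor = defender_anchor((30.0, 20.0), (5.25, 25.0), (30.0, 20.0))
```

Assigning each defender to the offender minimizing $||\mathbf{z}^{\ell}(t) - \mathbf{m}^j_{\text{opt}}(t)||$ then amounts to an argmin over the five anchor points.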
Rather than implement the HMM procedure used in \citeasnoun{franks2014defensive}, we simply assume each defender is guarding at time $t$ whichever offensive player $j$ yields the smallest residual $||\mathbf{z}^{\ell}(t) - \mathbf{m}^j_{\text{opt}}(t)||$, noting that more than one defender may be guarding the same offender (as in a ``double team''). Thus, conditional on the locations of the offense at time $t+\epsilon$, \eqref{micro-defense} provides a distribution over the locations of the defense at $t + \epsilon$. \subsection{Macrotransition Entry Model} The macrotransition entry model $\mathbb{P}(M(t) | \mathcal{F}^{(Z)}_t)$ predicts ball movements that instantaneously shift the course of the possession---passes, shot attempts, and turnovers. As such, we consider a family of macrotransition entry models $\mathbb{P}(M_j(t) |\mathcal{F}^{(Z)}_t)$, where $j$ indexes the type of macrotransition corresponding to $M(t)$. There are six such types: four pass options (indexed, without loss of generality, $j \leq 4$), a shot attempt ($j = 5$), or a turnover $(j=6)$. Thus, $M_j(t)$ is the event that a macrotransition of type $j$ begins in the time window $(t, t + \epsilon]$, and $M(t) = \bigcup_{j=1}^6 M_j(t)$. Since macrotransition types are disjoint, we also know $\mathbb{P}(M(t) | \mathcal{F}^{(Z)}_t) = \sum_{j=1}^6 \mathbb{P}(M_j(t) | \mathcal{F}^{(Z)}_t)$. We parameterize the macrotransition entry models as competing risks \cite{prentice1978analysis}: assuming player $\ell$ possesses the ball at time $t > 0$ during a possession, denote \begin{equation}\label{hazard-def} \lambda^{\ell}_j (t) = \lim_{\epsilon \rightarrow 0} \frac{\mathbb{P}(M_j(t) |\mathcal{F}^{(Z)}_t)}{\epsilon} \end{equation} as the hazard for macrotransition $j$ at time $t$. 
We assume these are log-linear, \begin{equation}\label{hazard-equation} \log(\lambda^{\ell}_j(t)) = [\mathbf{W}_j^{\ell}(t)]'\boldsymbol{\beta}_j^{\ell} + \xi_j^{\ell}\left(\mathbf{z}^{\ell}(t)\right) + \left(\tilde{\xi}_j^{\ell}\left(\mathbf{z}_{j}(t)\right)\mathbf{1}[j \leq 4]\right), \end{equation} where $\mathbf{W}_j^{\ell}(t)$ is a $p_j \times 1$ vector of time-varying covariates, $\boldsymbol{\beta}_j^{\ell}$ a $p_j \times 1$ vector of coefficients, $\mathbf{z}^{\ell}(t)$ is the ballcarrier's 2D location on the court (denote the court space $\mathbb{S}$) at time $t$, and $\xi_j^{\ell}: \mathbb{S} \rightarrow \mathbb{R}$ is a mapping of the player's court location to an additive effect on the log-hazard, providing spatial variation. The last term in \eqref{hazard-equation} only appears for pass events $(j \leq 4)$ to incorporate the location of the receiving player for the corresponding pass: $\mathbf{z}_j(t)$ (which slightly abuses notation) provides his location on the court at time $t$, and $\tilde{\xi}_j^{\ell}$, analogously to $\xi_j^{\ell}$, maps this location to an additive effect on the log-hazard. The four different passing options are identified by the (basketball) position of the potential pass recipient: point guard (PG), shooting guard (SG), small forward (SF), power forward (PF), and center (C); as the ballcarrier occupies one of these five positions, four pass options remain. The macrotransition model \eqref{hazard-def}--\eqref{hazard-equation} represents the ballcarrier's decision-making process as an interpretable function of the unique basketball predicaments he faces.
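As a concrete sketch of \eqref{hazard-def}--\eqref{hazard-equation}, the following evaluates a shot-attempt hazard and the implied event probability over one $25$ Hz frame; all covariate values, coefficients, and spatial effects are hypothetical rather than fitted quantities.

```python
import math

def log_hazard(covariates, coefs, xi_ball, xi_receiver=None):
    """log lambda_j(t) = W'beta + xi_j(z_ball) [+ xi_tilde_j(z_receiver), passes only]."""
    lh = sum(w * b for w, b in zip(covariates, coefs)) + xi_ball
    if xi_receiver is not None:  # the extra spatial term enters only for pass events
        lh += xi_receiver
    return lh

# Shot attempt (j = 5): W = [log(1 + defender distance), dribbled indicator, intercept].
d_def = 4.0
W = [math.log(1 + d_def), 1.0, 1.0]
beta = [0.8, -0.3, -2.0]          # hypothetical coefficients
xi = 1.1                          # hypothetical spatial effect at z(t)

lam_shot = math.exp(log_hazard(W, beta, xi))
eps = 0.04                        # one frame at 25 Hz
p_shot_window = lam_shot * eps    # P(M_5(t)) ~ lambda * eps for small eps
```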
For example, in considering the hazard of a shot attempt, the time-varying covariates ($\mathbf{W}_j^{\ell}(t)$) we use are the distance between the ballcarrier and his nearest defender (transformed as $\log(1+d)$ to moderate the influence of extremely large or small observed distances), an indicator for whether the ballcarrier has dribbled since gaining possession, and a constant representing a baseline shooting rate (this is not time-varying)\footnote{Full details on all covariates used for all macrotransition types are included in Appendix \ref{Covariates}}. The spatial effects $\xi_j^{\ell}$ reveal locations where player $\ell$ is more/less likely to attempt a shot in a small time window, holding fixed the time-varying covariates $\mathbf{W}_j^{\ell}(t)$. Such spatial effects (illustrated in Figure \ref{fig:spatial_effects}) are well-known to be nonlinear in distance from the basket and asymmetric about the angle to the basket \cite{miller2013icml}. \begin{figure} \centering \begin{tabular}{cccc} \subfloat[$\xi_1, \tilde{\xi}_1$ (pass to PG)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass1_spatial_2}} & \subfloat[$\xi_2, \tilde{\xi}_2$ (pass to SG)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass2_spatial_2}} & \multirow{-2}[8]{*}{\subfloat[$\xi_5$ (shot-taking)]{\includegraphics[width=0.23\linewidth,height=0.17\textheight]{graphics/lebron_take_spatial_2}}} & \multirow{-2}[8]{*}{\subfloat[$\xi_6$ (turnover)]{\includegraphics[width=0.23\linewidth,height=0.17\textheight]{graphics/lebron_TO_spatial_2}}} \\ \subfloat[$\xi_3, \tilde{\xi}_3$ (pass to PF)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass3_spatial_2}} & \subfloat[$\xi_4, \tilde{\xi}_4$ (pass to C)]{\includegraphics[width=0.23\linewidth,height=0.08\textheight]{graphics/lebron_pass4_spatial_2}} & & \end{tabular} \caption[Sample plots of spatial effects]{Plots of estimated spatial effects $\xi$ for 
LeBron James as the ballcarrier. For instance, plot (c) reveals the largest effect on James' shot-taking hazard occurs near the basket, with noticeable positive effects also around the three point line (particularly in the ``corner 3'' shot areas). Plot (a) shows that he is more likely (per unit time) to pass to the point guard when at the top of the arc---more so when the point guard is positioned in the post area.} \label{fig:spatial_effects} \end{figure} All model components---the time-varying covariates, their coefficients, and the spatial effects $\xi, \tilde{\xi}$---differ across macrotransition types $j$ for the same ballcarrier $\ell$, as well as across all $L=461$ ballcarriers in the league during the 2013-14 season. This reflects the fact that players' decision-making tendencies and skills are unique; a player such as Dwight Howard will very rarely attempt a three point shot even if he is completely undefended, while someone like Stephen Curry will attempt a three point shot even when closely defended. \subsection{Macrotransition Exit Model} \label{macro_exit} Using the six macrotransition types introduced in the previous subsection, we can express the macrotransition exit model \ref{M2} when player $\ell$ has possession as \begin{align} \mathbb{P}(C_{\delta_t} | M(t), \mathcal{F}^{(Z)}_t) &= \sum_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t) \mathbb{P}(M_j(t) | M(t), \mathcal{F}^{(Z)}_t) \nonumber \\ &= \sum_{j=1}^6 \mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t) \frac{\lambda_j^{\ell}(t)}{\sum_{k=1}^6 \lambda^{\ell}_k(t)}, \label{exitmodel} \end{align} using the competing risks model for $M_j(t)$ given by \eqref{hazard-def}--\eqref{hazard-equation}. As the terms $\lambda^{\ell}_j(t)$ are supplied by \eqref{hazard-equation}, we focus on the macrotransition exit model conditional on the macrotransition type, $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$.
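At a fixed instant, the weighting in \eqref{exitmodel} reduces to normalizing the six hazards into macrotransition-type probabilities and averaging the exit values; the hazards and point values below are hypothetical.

```python
def macro_type_probs(hazards):
    """P(M_j(t) | M(t)) = lambda_j(t) / sum_k lambda_k(t)."""
    total = sum(hazards)
    return [h / total for h in hazards]

def expected_exit_value(hazards, exit_values):
    """Average of E[value | macrotransition type j], weighted by type probability."""
    return sum(p * v for p, v in zip(macro_type_probs(hazards), exit_values))

# Hypothetical hazards and values for [four pass options, shot attempt, turnover].
hazards = [0.20, 0.10, 0.05, 0.05, 0.50, 0.10]
values = [1.02, 0.98, 0.95, 1.10, 1.30, 0.00]   # points per possession by type

probs = macro_type_probs(hazards)
v_exit = expected_exit_value(hazards, values)
```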
For each $j=1, \ldots, 4$, $M_j(t)$ represents a pass-type macrotransition; therefore $C_{\delta_t}$ is a possession state $c' \in \mathcal{C}_{\text{poss}}$ for the player corresponding to pass option $j$. Thus, a model for $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ requires us to predict the state $c' \in \mathcal{C}_{\text{poss}}$ the $j$th pass target will occupy upon receiving the ball. Our approach is to simply assume $c'$ is given by the pass target's location at the time the pass begins. While this is naive and could be improved by further modeling, it is a reasonable approximation in practice, because with only seven court regions and two defensive spacings comprising $\mathcal{C}_{\text{poss}}$, the pass recipient's position in this space is unlikely to change during the time the pass is in flight, $\delta_t - t$ (a notable exception is the alley-oop pass, which leads the pass recipient from well outside the basket to a dunk or layup within the restricted area). Our approach thus collapses $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ to a single state in $\mathcal{C}_{\text{poss}}$, which corresponds to pass target $j$'s location at time $t$. When $j=5$, and a shot attempt occurs in $(t, t + \epsilon]$, $C_{\delta_t}$ is either a made/missed 2 point shot, or made/missed three point shot. For sufficiently small $\epsilon$, we observe at $Z_t$ whether the shot attempt in $(t, t + \epsilon]$ is a two- or three-point shot, therefore our task in providing $\mathbb{P}(C_{\delta_t} | M_j(t), \mathcal{F}^{(Z)}_t)$ is modeling the shot's probability of success. We provide a parametric shot probability model, which shares the same form as the macrotransition entry model \eqref{hazard-def}--\eqref{hazard-equation}, though we use a logit link function as we are modeling a probability instead of a hazard.
Specifically, for player $\ell$ attempting a shot at time $t$, let $p^{\ell}(t)$ represent the probability of the shot attempt being successful (resulting in a basket). We assume \begin{equation}\label{shotprob} \text{logit}(p^{\ell}(t)) = [\mathbf{W}_\textrm{s}^{\ell}(t)]'\boldsymbol{\beta}_\textrm{s}^{\ell} + \xi_\textrm{s}^{\ell}(\mathbf{z}^{\ell}(t)) \end{equation} with components in \eqref{shotprob} having the same interpretation as their $j$-indexed counterparts in the competing risks model \eqref{hazard-equation}; that is, $\mathbf{W}_\textrm{s}^{\ell}$ is a vector of time-varying covariates (we use distance to the nearest defender---transformed as $\log(1 + d)$---an indicator for whether the player has dribbled, and a constant to capture baseline shooting efficiency) with $\boldsymbol{\beta}_\textrm{s}^{\ell}$ a corresponding vector of coefficients, and $\xi_\textrm{s}^{\ell}$ a smooth spatial effect, as in \eqref{hazard-equation}. Lastly, when $j=6$ and $M_j(t)$ represents a turnover, $C_{\delta_t}$ is equal to the turnover state in $\mathcal{C}_{\text{end}}$ with probability 1. Note that the macrotransition exit model is mostly trivial when no player has ball possession at time $t$, since this implies $C_t \in \mathcal{C}_{\text{trans}} \cup \mathcal{C}_{\text{end}}$ and $\tau_t = t$. If $C_t \in \mathcal{C}_{\text{end}}$, then the possession is over and $C_{\delta_t} = C_t$. Otherwise, if $C_t \in \mathcal{C}_{\text{trans}}$ represents a pass attempt or turnover in progress, the following state $C_{\delta_t}$ is deterministic given $C_t$ (recall that the pass recipient and his location are encoded in the definition of pass attempt states in $\mathcal{C}_{\text{trans}}$). When $C_t$ represents a shot attempt in progress, the macrotransition exit model reduces to the shot probability model \eqref{shotprob}. 
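A sketch of the shot probability model \eqref{shotprob} with the stated covariates; the coefficients and spatial effects here are illustrative placeholders, not estimates.

```python
import math

def shot_probability(defender_dist, dribbled, beta, xi):
    """logit(p) = beta' [log(1 + d), 1{dribbled}, 1] + xi(z)."""
    W = [math.log(1.0 + defender_dist), float(dribbled), 1.0]
    logit_p = sum(w * b for w, b in zip(W, beta)) + xi
    return 1.0 / (1.0 + math.exp(-logit_p))

beta = [0.45, -0.10, -1.20]   # hypothetical coefficients
# Hypothetical spatial effects: positive near the rim, negative beyond the arc.
p_open_rim = shot_probability(defender_dist=6.0, dribbled=True, beta=beta, xi=1.0)
p_contested_three = shot_probability(defender_dist=1.0, dribbled=True, beta=beta, xi=-0.4)
```

More defender separation and a favorable spatial effect both push the logit, and hence the make probability, upward.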
Finally, when $C_t$ is a rebound in progress, we ignore full-resolution information and simply use the Markov transition probabilities from $\mathbf{P}$\footnote{Our rebounding model could be improved by using full-resolution spatiotemporal information, as players' reactions to the missed shot event are informative of who obtains the rebound.}. \subsection{Transition Probability Matrix for Coarsened Process} The last model necessary for calculating EPV is \ref{M4}, the transition probability matrix for the embedded Markov chain corresponding to the coarsened process $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$. This transition probability matrix is used to compute the term $\E[X | C_{\delta_t} = c]$ that appears in Theorem \ref{epvtheorem}. Recall that we denote the transition probability matrix as $\mathbf{P}$, where $P_{qr} = \mathbb{P}(C^{(i+1)} = c_r | C^{(i)} = c_q)$ for any $c_q, c_r \in \mathcal{C}$. Without any probabilistic structure assumed for $C^{(i)}$ other than the Markov property, for all $q,r$, the maximum likelihood estimator of $P_{qr}$ is the observed transition frequency, $\hat{P}_{qr} = \frac{N_{qr}}{\sum_{r'}N_{qr'}}$, where $N_{qr}$ counts the number of transitions $c_q \rightarrow c_r$. Of course, this estimator has undesirable performance if the number of visits to any particular state $c_q$ is small (for instance, Dwight Howard closely defended in the corner 3 region), as the estimated transition probabilities from that state may be degenerate. Under our multiresolution model for basketball possessions, however, expected transition counts between many coarsened states $C^{(i)}$ can be computed as summaries of our macrotransition models \ref{M1}--\ref{M2}. To show this, for any arbitrary $t > 0$ let $M_j^r(t)$ be the event $$M_j^r(t) = \{\mathbb{P}(M_j(t) \text{ and } C_{t + \epsilon} = c_r | \mathcal{F}^{(Z)}_t) > 0\}.$$ Thus $M_j^r(t)$ occurs if it is possible for a macrotransition of type $j$ into state $c_r$ to occur in $(t, t + \epsilon]$.
When applicable, we can use this to get the expected number of $c_q \rightarrow c_r$ transitions: \begin{equation} \label{eq:shrunken_tprob} \tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_q} \lambda^{\ell}_j(t)\mathbf{1}[M_j^r(t)]. \end{equation} When $c_q$ is a shot attempt state from $c_{q'} \in \mathcal{C}_{\text{poss}}$, \eqref{eq:shrunken_tprob} is adjusted using the shot probability model \eqref{shotprob}: $\tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_{q'}} \lambda^{\ell}_j(t)p(t)\mathbf{1}[M_j^r(t)]$ when $c_r$ represents an eventual made shot and $\tilde{N}_{qr} = \epsilon \sum_{t : C_t = c_{q'}} \lambda^{\ell}_j(t)(1 - p(t))\mathbf{1}[M_j^r(t)]$ when $c_r$ represents an eventual miss. By replacing raw counts with their expectations conditional on higher-resolution data, leveraging the hazards \eqref{eq:shrunken_tprob} provides a Rao-Blackwellized (unbiased, lower variance) alternative to counting observed transitions. Furthermore, due to the hierarchical parameterization of hazards $\lambda_j^{\ell}(t)$ (discussed in Section \ref{sec:Computation}), information is shared across space and player combinations so that estimated hazards are well-behaved even in situations without significant observed data. Thus, when $c_q \rightarrow c_r$ represents a macrotransition, we use $\tilde{N}_{qr}$ in place of $N_{qr}$ when calculating $\hat{P}_{qr}$. \end{document} \subsection{Possession case study}\label{subsec:study} After obtaining parameter estimates for the multiresolution transition models, we can calculate EPV using Theorem \ref{epvtheorem} and plot $\nu_t$ throughout the course of any possession in our data. We view such (estimated) EPV curves as the main contribution of our work, and their behavior and potential inferential value have been introduced in Section \ref{sec:Intro}. We illustrate this value by revisiting the possession highlighted in Figure \ref{heat_poss} through the lens of EPV.
Analysts may also find meaningful aggregations of EPV curves that summarize players' behavior over a possession, game, or season in terms of EPV. We offer two such aggregations in this section. \subsection{Predictive Performance of EPV} Before analyzing EPV estimates, it is essential to check that such estimates are properly calibrated \cite{gneiting2007probabilistic} and accurate enough to be useful to basketball analysts. Our paper introduces EPV, and as such there are no existing results to benchmark the predictive performance of our estimates. We can, however, compare the proposed implementation for estimating EPV with simpler models, based on lower-resolution information, to verify whether our multiresolution model captures meaningful features of our data. Assessing the predictive performance of an EPV estimator is difficult because the estimand is a curve whose length varies by possession. Moreover, we never observe any portion of this curve; we only know its endpoint. Therefore, rather than comparing estimated EPV curves between our method and alternative methods, we compare estimated transition probabilities. For any stochastically consistent EPV estimation method, if the predicted transitions are properly calibrated, then the derived EPV estimates should be as well. For the inference procedure in Section \ref{sec:Computation}, we use only 90\% of our data set for parameter inference, with the remaining 10\% used to evaluate the out-of-sample performance of our model. We also evaluated out-of-sample performance of simpler macrotransition entry/exit models, which use varying amounts of information from the data. Table~\ref{loglik-table} provides the out-of-sample log-likelihood for the macrotransition models applied to the 10\% of the data not used in model fitting for various simplified models.
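The comparison in Table~\ref{loglik-table} amounts to scoring each model's predicted event probabilities on held-out moments; a toy version with hypothetical predictions:

```python
import math

def heldout_loglik(probs, outcomes):
    """Sum of log-probabilities assigned to observed binary event indicators."""
    return sum(math.log(p if y else 1.0 - p) for p, y in zip(probs, outcomes))

# Hypothetical held-out shot-attempt indicators and two models' predictions.
outcomes = [1, 0, 0, 1, 0]
constant_model = [0.4, 0.4, 0.4, 0.4, 0.4]   # player-level constant hazard
spatial_model = [0.7, 0.2, 0.1, 0.8, 0.3]    # covariates + spatial effects

ll_constant = heldout_loglik(constant_model, outcomes)
ll_spatial = heldout_loglik(spatial_model, outcomes)
# The richer model places more probability on what actually happened.
```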
In particular, we start with the simple model employing constant hazards for each player/event type, then successively add situational covariates, spatial information, and finally full hierarchical priors. Without any shrinkage, our full model performs in some cases worse than a model with no spatial effects included, but with shrinkage, it consistently performs the best (highest log-likelihood) of the configurations compared. This behavior justifies the prior structure introduced in Section \ref{sec:Computation}. \begin{table}[ht] \centering \begin{tabular}{lrrrr} \toprule & \multicolumn{4}{c}{Model Terms} \\ \midrule Macro. type & Player & Covariates & Covariates + Spatial & Full \\ \midrule Pass1 & -29.4 & -27.7 & -27.2 & -26.4 \\ Pass2 & -24.5 & -23.7 & -23.2 & -22.2 \\ Pass3 & -26.3 & -25.2 & -25.3 & -23.9 \\ Pass4 & -20.4 & -20.4 & -24.5 & -18.9 \\ Shot Attempt & -48.9 & -46.4 & -40.9 & -40.7 \\ Made Basket & -6.6 & -6.6 & -5.6 & -5.2 \\ Turnover & -9.3 & -9.1 & -9.0 & -8.4 \\ \bottomrule \end{tabular} \caption{Out-of-sample log-likelihood (in thousands) for macrotransition entry/exit models under various model specifications. ``Player'' assumes constant hazards for each player/event type combination. ``Covariates'' augments this model with situational covariates, $\mathbf{W}^{\ell}_j(t)$ as given in \eqref{hazard-equation}. ``Covariates + Spatial'' adds a spatial effect, yielding \eqref{hazard-equation} in its entirety. Lastly, ``Full'' implements this model with the full hierarchical model discussed in Section \ref{sec:Computation}.} \label{loglik-table} \end{table} \subsection{Possession Inference from Multiresolution Transitions} Understanding the calculation of EPV in terms of multiresolution transitions is a valuable exercise for a basketball analyst, as these model components reveal precisely how the EPV estimate derives from the spatiotemporal circumstances of the time point considered.
Figure \ref{heat_detail} diagrams four moments during our example possession (introduced originally in Figures \ref{heat_poss} and \ref{heat_epv}) in terms of multiresolution transition probabilities. These diagrams illustrate Theorem \ref{epvtheorem} by showing EPV as a weighted average of the value of the next macrotransition. Potential ball movements representing macrotransitions are shown as arrows, with their respective values and probabilities graphically illustrated by color and line thickness (this information is also annotated explicitly). Microtransition distributions are also shown, indicating distributions of players' movement over the next two seconds. Note that the possession diagrammed here was omitted from the data used for parameter estimation. \captionsetup[subfigure]{labelformat=empty} \begin{figure}[h!] \centering \begin{tabular}{ccc} \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/ticker_2}} & \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_1_2}} & \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/legend}} \\ \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_2_2}} & \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_3_2}} & \subfloat[]{\includegraphics[width=0.325\textwidth]{graphics/micro_4_2}} \end{tabular} \caption{Detailed diagram of EPV as a function of multiresolution transition probabilities for four time points (labeled A,B,C,D) of the possession featured in Figures \ref{heat_poss}--\ref{heat_epv}. Two seconds of microtransitions are shaded (with forecasted positions for short time horizons darker) while macrotransitions are represented by arrows, using color and line thickness to encode the value (V) and probability (P) of such macrotransitions. 
The value and probability of the ``other'' category represents the case that no macrotransition occurs during the next two seconds.} \label{heat_detail} \end{figure} Analyzing Figure \ref{heat_detail}, we see that our model estimates largely agree with basketball intuition. For example, players are quite likely to take a shot when they are near to and/or moving towards the basket, as shown in panels A and D. Additionally, because LeBron James is a better shooter than Norris Cole, the value of his shot attempt is higher, even though in the snapshot in panel D he is much farther from the basket than Cole is in panel A. While the value of the shot attempt averages over future microtransitions, which may move the player closer to the basket, when macrotransition hazards are high this average is dominated by microtransitions on very short time scales. We also see Ray Allen, in the right corner 3, as consistently one of the most valuable pass options during this possession, particularly when he is being less closely defended as in panels A and D. In these panels, though, we never see an estimated probability of him receiving a pass above 0.05, most likely because he is being fairly closely defended for someone so far from the ball, and because there are always closer passing options for the ballcarrier. Similarly, while Chris Bosh does not move much during this possession, he is most valuable as a passing option in panel C where he is closest to the basket and without any defenders in his lane. From this, we see that the estimated probabilities and values of the macrotransitions highlighted in Figure \ref{heat_detail} match well with basketball intuition. The analysis presented here could be repeated on any of hundreds of thousands of possessions available in a season of optical tracking data. 
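The decomposition diagrammed in Figure \ref{heat_detail}, in which EPV is a probability-weighted average over the available macrotransitions plus an ``other'' term, can be sketched directly; the probabilities and values below are hypothetical, loosely echoing a snapshot like panel D.

```python
def epv_snapshot(options):
    """nu_t as a weighted average of option values; options are
    (probability, expected point value) pairs covering all outcomes."""
    total = sum(p for p, _ in options)
    assert abs(total - 1.0) < 1e-9, "option probabilities must sum to one"
    return sum(p * v for p, v in options)

# Hypothetical snapshot: a shot, three passes, a turnover, and "other"
# (no macrotransition within the horizon, valued at the continuation EPV).
options = [
    (0.30, 1.30),   # shot attempt
    (0.05, 1.10),   # pass to corner shooter
    (0.10, 0.95),   # pass to point guard
    (0.05, 0.90),   # pass to center
    (0.05, 0.00),   # turnover
    (0.45, 1.00),   # other
]
nu_t = epv_snapshot(options)
```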
EPV plots as in Figure \ref{heat_epv} and diagrams as in Figure \ref{heat_detail} provide powerful insight into how players' movements and decisions contribute value to their team's offense. With this insight, coaches and analysts can formulate strategies and offensive schemes that make optimal use of their players' ability---or, defensive strategies that best suppress the motifs and situations that generate value for the opposing offense. \subsection{EPV-Added} Aggregations of EPV estimates across possessions can yield useful summaries for player evaluation. For example, \textit{EPV-Added} (EPVA) quantifies a player's overall offensive value through his movements and decisions while handling the ball, relative to the estimated value contributed by a league-average player receiving ball possession in the same situations. The notion of \textit{relative} value is important because the martingale structure of EPV ($\nu_t$) prevents any meaningful aggregation of EPV across a specific player's possessions. $\E[\nu_t - \nu_{t + \epsilon}] = 0$ for all $t$, meaning that \textit{on average} EPV does not change during any specific player's ball handling. Thus, while we see the EPV skyrocket after LeBron James receives the ball and eventually attacks the basket in Figure \ref{heat_epv}, the definition of EPV prevents such increases from being observed on average. If player $\ell$ has possession of the ball starting at time $t_s$ and ending at $t_e$, the quantity $\nu_{t_e} - \nu_{t_s}^{r(\ell)}$ estimates the value contributed by player $\ell$ relative to the hypothetical league-average player during his ball possession (represented by $\nu_{t_s}^{r(\ell)}$).
We calculate EPVA for player $\ell$ (EPVA($\ell$)) by summing such differences over all of a player's touches (and dividing by the number of games played by player $\ell$ to provide standardization): \begin{equation}\label{EPVA} \text{EPVA}(\ell) = \frac{1}{\# \text{ games for $\ell$}}\sum_{\{t_s, t_e\} \in \mathcal{T}^{\ell}} \left( \nu_{t_e} - \nu_{t_s}^{r(\ell)} \right) \end{equation} where $\mathcal{T}^{\ell}$ contains all intervals of form $[t_s, t_e]$ that span player $\ell$'s ball possession. Specific details on calculating $\nu_t^{r(\ell)}$ are included in Appendix \ref{subsec:EPVA}. Averaging over games implicitly rewards players who have high usage, even if their value added per touch might be low. Often, one-dimensional offensive players accrue the most EPVA per touch since they only handle the ball when they are uniquely suited to scoring; for instance, some centers (such as the Clippers' DeAndre Jordan) only receive the ball right next to the basket, where their height offers a considerable advantage for scoring over other players in the league. Thus, averaging by game---not touch---balances players' efficiency per touch with their usage and importance in the offense. Depending on the context of the analysis, EPVA can also be adjusted to account for team pace (by normalizing per 100 possessions) or individual usage (by normalizing by player-touches). Table~\ref{epv_tab} provides a list of the top and bottom 10 ranked players by EPVA using our 2013-14 data. Generally, players with high EPVA effectively adapt their decision-making process to the spatiotemporal circumstances they inherit when gaining possession. They receive the ball in situations that are uniquely suited to their abilities, so that on average the rest of the league is less successful in these circumstances. Players with lower EPVA are not necessarily ``bad'' players in any conventional sense; their actions simply tend to lead to fewer points than other players' actions given the same options.
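The aggregation in \eqref{EPVA} is simple once per-touch EPV values have been computed. As an illustrative sketch only---the touch records and baseline values below are hypothetical stand-ins for model output---EPVA per game can be tallied as:

```python
from collections import defaultdict

def epv_added(touches, games_played):
    """EPV-Added per game, following the aggregation in Eq. (EPVA).

    `touches` lists (player, nu_repl_start, nu_end) tuples: nu_end is
    the EPV when the player's ball possession ends, and nu_repl_start
    is the league-average (replacement) baseline EPV at the moment he
    received the ball.  `games_played` maps player -> games.
    """
    totals = defaultdict(float)
    for player, nu_repl_start, nu_end in touches:
        totals[player] += nu_end - nu_repl_start
    return {p: v / games_played[p] for p, v in totals.items()}
```

Normalizing by games, rather than touches, follows the discussion above; dividing instead by touch counts would yield a per-touch efficiency measure.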
Of course, EPVA provides a limited view of a player's overall contributions since it does not quantify players' actions on defense, or other ways that a player may impact EPV while not possessing the ball (though EPVA could be extended to include these aspects). \begin{table}[ht] \centering \begin{tabular}{rlr} \toprule Rank & Player & EPVA \\ \midrule 1 & Kevin Durant & 3.26 \\ 2 & LeBron James & 2.96 \\ 3 & Jose Calderon & 2.79 \\ 4 & Dirk Nowitzki & 2.69 \\ 5 & Stephen Curry & 2.50 \\ 6 & Kyle Korver & 2.01 \\ 7 & Serge Ibaka & 1.70 \\ 8 & Channing Frye & 1.65 \\ 9 & Al Horford & 1.55 \\ 10 & Goran Dragic & 1.54 \\ \bottomrule \end{tabular} \quad \begin{tabular}{rlr} \toprule Rank & Player & EPVA \\ \midrule 277 & Zaza Pachulia & -1.55 \\ 278 & DeMarcus Cousins & -1.59 \\ 279 & Gordon Hayward & -1.61 \\ 280 & Jimmy Butler & -1.61 \\ 281 & Rodney Stuckey & -1.63 \\ 282 & Ersan Ilyasova & -1.89 \\ 283 & DeMar DeRozan & -2.03 \\ 284 & Rajon Rondo & -2.27 \\ 285 & Ricky Rubio & -2.36 \\ 286 & Rudy Gay & -2.59 \\ \bottomrule \end{tabular} \caption[]{Top/bottom 10 players by EPVA per game in 2013-14, minimum 500 touches in season.} \label{epv_tab} \end{table} As such, we stress that EPVA is not a ranking of the best and worst players in the NBA. Analysts should also be aware that the league-average player used as a baseline is completely hypothetical, and we heavily extrapolate our model output by considering value calculations that assume this nonexistent player possesses the ball in all the situations encountered by an actual NBA player. The extent to which such an extrapolation is valid is a judgment a basketball expert can make. Alternatively, one can consider EPV-added over \textit{specific} players (assuming player $\ell_2$ receives the ball in the same situations as player $\ell_1$), using the same framework developed for EPVA.
Such a quantity may actually be more useful, particularly if the players being compared play similar roles on their teams and face similar situations, so that the degree of extrapolation is minimized. \subsection{Shot Satisfaction} Aggregations of the individual components of our multiresolution transition models can also provide useful insights. For example, another player metric we consider is called \textit{shot satisfaction}. For each shot attempt a player takes, we wonder how satisfied the player is with his decision to shoot---what was the expected point value of his most reasonable passing option at the time of the shot? If, for a particular player, the EPV measured at his shot attempts is higher than the EPV conditioned on his possible passes at the same time points, then by shooting the player is usually making the best decision for his team. On the other hand, players with pass options at least as valuable as shots should regret their shot attempts (we define ``satisfaction'' as the opposite of regret), as passes in these situations have higher expected value. \begin{table}[h!] \centering \begin{tabular}{rlr} \toprule Rank & Player & Shot Satis. \\ \midrule 1 & Mason Plumlee & 0.35 \\ 2 & Pablo Prigioni & 0.31 \\ 3 & Mike Miller & 0.27 \\ 4 & Andre Drummond & 0.26 \\ 5 & Brandan Wright & 0.24 \\ 6 & DeAndre Jordan & 0.24 \\ 7 & Kyle Korver & 0.24 \\ 8 & Jose Calderon & 0.22 \\ 9 & Jodie Meeks & 0.22 \\ 10 & Anthony Tolliver & 0.22 \\ \bottomrule \end{tabular} \quad \begin{tabular}{rlr} \toprule Rank & Player & Shot Satis.
\\ \midrule 277 & Garrett Temple & -0.02 \\ 278 & Kevin Garnett & -0.02 \\ 279 & Shane Larkin & -0.02 \\ 280 & Tayshaun Prince & -0.03 \\ 281 & Dennis Schroder & -0.04 \\ 282 & LaMarcus Aldridge & -0.04 \\ 283 & Ricky Rubio & -0.04 \\ 284 & Roy Hibbert & -0.05 \\ 285 & Will Bynum & -0.05 \\ 286 & Darrell Arthur & -0.05 \\ \bottomrule \end{tabular} \caption[]{Top/bottom 10 players by shot satisfaction in 2013-14, minimum 500 touches in season.} \label{satis_tab} \end{table} Specifically, we calculate \begin{equation}\label{satisfaction} \text{SATIS}(\ell) = \frac{1}{|\mathcal{T}^{\ell}_{\text{shot}}|} \sum_{t \in \mathcal{T}^{\ell}_{\text{shot}}} \left( \nu_t - \E\left[X \mid \bigcup_{j=1}^4 M_j(t), \mathcal{F}^{(Z)}_t \right] \right) \end{equation} where $\mathcal{T}^{\ell}_{\text{shot}}$ indexes times a shot attempt occurs, $\{t : M_5(t) \}$, for player $\ell$. Recalling that macrotransitions $j=1, \ldots, 4$ correspond to pass events (and $j=5$ a shot attempt), $\bigcup_{j=1}^4 M_j(t)$ is equivalent to a pass happening in $(t, t + \epsilon]$. Unlike EPVA, shot satisfaction SATIS($\ell$) is expressed as an average per shot (not per game), which favors players such as three-point specialists, who often take fewer shots than their teammates, but do so in situations where their shot attempt is extremely valuable. Table \ref{satis_tab} provides the top/bottom 10 players in shot satisfaction for our 2013-14 data. While players who mainly attempt three-pointers (e.g. Miller, Korver) and/or shots near the basket (e.g. Plumlee, Jordan) have the most shot satisfaction, players who primarily take mid-range or long-range two-pointers (e.g. Aldridge, Garnett) or poor shooters (e.g. Rubio, Prince) have the least. However, because shot satisfaction numbers are mostly positive league-wide, players still shoot relatively efficiently---almost every player generally helps his team by shooting rather than passing in the same situations, though some players do so more than others.
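Given model estimates of the two conditional expectations in \eqref{satisfaction}, the metric itself is a per-shot average. A minimal sketch, with hypothetical value pairs standing in for model output:

```python
def shot_satisfaction(shots):
    """Average shot satisfaction for one player, per Eq. (satisfaction).

    `shots` lists (nu_t, pass_value) pairs over the player's shot
    attempts: nu_t is the EPV at the moment of the shot attempt, and
    pass_value is the expected point total conditioned on a pass
    occurring in (t, t + eps] instead.
    """
    if not shots:
        return 0.0
    return sum(nu - pass_value for nu, pass_value in shots) / len(shots)
```

A positive average indicates that, over the season, the player's shot attempts were worth more than his contemporaneous passing options.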
We stress that the two derived metrics given in this paper, EPVA and shot satisfaction, are simply examples of the kinds of analyses enabled by EPV. Conventional metrics currently used in basketball analysis do measure shot selection and efficiency, as well as passing rates and assists, yet EPVA and shot satisfaction are novel in analyzing these events in their spatiotemporal contexts. \end{document} \subsection{Estimator Criteria} \label{subsec:multiCrit} We have defined EPV in \eqref{epvdef} as an unobserved, theoretical quantity; one could thus imagine many different EPV estimators based on different models and/or information in the data. However, we believe that in order for EPV to achieve its full potential as a basis for high-resolution player and strategy evaluation, an EPV estimator should meet several criteria. First, we require that the EPV estimator be stochastically consistent. Recognizing that EPV is simply a conditional expectation, it is tempting to estimate EPV using a regression or classification approach that maps features from $\mathcal{F}^{(Z)}_t$ to an outcome space, $[0, 3]$ or $\{0, 2, 3\}$. Setting aside the fact that our data associate each possession outcome $X$ with process-valued inputs $Z$, and thus do not conform naturally to the input/output structure of such models, such an approach cannot guarantee that the estimator will have the (Kolmogorov) stochastic consistency inherent to theoretical EPV, which is essential to its ``stock ticker'' interpretation. Using a stochastically consistent EPV estimator guarantees that changes in the resulting EPV curve derive from players' on-court actions, rather than artifacts or inefficiencies of the data analysis. A stochastic process model for the evolution of a basketball possession guarantees such consistency. The second criterion that we require is that the estimator be sensitive to the fine-grained details of the data without incurring undue variance or computational complexity.
Applying a Markov chain-based estimation approach would require discretizing the data by mapping the observed spatial configuration $Z_t$ into a simplified summary $C_t$, violating this criterion by trading potentially useful information in the player tracking data for computational tractability. To develop methodology that meets both criteria, we note that the information-computation tradeoff in current process modeling strategies results from choosing a single level of resolution at which to model the possession and compute all expectations. In contrast, our method for estimating EPV combines models for the possession at two distinct levels of resolution, namely, a fully continuous model of player movement and actions, and a Markov chain model for a highly coarsened view of the possession. This multiresolution approach leverages the computational simplicity of a discrete Markov chain model while conditioning on exact spatial locations and high-resolution data features. \subsection{A Coarsened Process}\label{subsec:multiCoarse} The Markov chain portion of our method requires a coarsened view of the data. For all time $0 < t \leq T$ during a possession, let $C(\cdot)$ be a coarsening that maps $\mathcal{Z}$ to a finite set $\mathcal{C}$, and call $C_t = C(Z_t)$ the ``state'' of the possession. To make the Markovian assumption plausible, we populate the coarsened state space $\mathcal{C}$ with summaries of the full-resolution data so that transitions between these states represent meaningful events in a basketball possession---see Figure~\ref{fig:states} for an illustration. \begin{figure}[!ht] \input{EPV_coarsened_tikz.tex} \caption{Schematic of the coarsened possession process $C$, with states (rectangles) and possible state transitions (arrows) shown. The unshaded states in the first row compose $\mathcal{C}_{\text{poss}}$.
Here, states corresponding to distinct ballhandlers are grouped together (Player 1 through 5), and the discretized court in each group represents the player's coarsened position and defended state. The gray shaded rectangles are transition states, $\mathcal{C}_{\text{trans}}$, while the rectangles in the third row represent the end states, $\mathcal{C}_{\text{end}}$. Blue arrows represent possible macrotransition entrances (and red arrows, macrotransition exits) when Player 1 has the ball; these terms are introduced in Section \ref{sec:Macro}.} \label{fig:states} \end{figure} First, there are 3 ``bookkeeping'' states, denoted $\mathcal{C}_{\text{end}}$, that categorize the end of the possession, so that $C_T \in \mathcal{C}_{\text{end}}$ and for all $t < T, C_t \not \in \mathcal{C}_{\text{end}}$ (shown in the bottom row of Figure~\ref{fig:states}). These are $\mathcal{C}_{\text{end}} = $\{made 2 pt, made 3 pt, end of possession\}. These three states have associated point values of 2, 3, and 0, respectively (the generic possession end state can be reached by turnovers and defensive rebounds, which yield no points). This makes the possession point value $X$ a function of the final coarsened state $C_T$. Next, whenever a player possesses the ball at time $t$, we assume $C_t = (\text{ballcarrier ID at }t) \times (\text{court region at }t) \times (\text{defended at }t)$, having defined seven disjoint regions of the court and classifying a player as defended at time $t$ by whether there is a defender within 5 feet of him. The possible values of $C_t$, if a player possesses the ball at time $t$, thus live in $\mathcal{C}_{\text{poss}} = \{\text{player ID}\} \times \{\text{region ID}\} \times \{\mathbf{1}[\text{defended}]\}$. These states are represented by the unshaded portion of the top row of Figure~\ref{fig:states}, where the differently colored regions of the court diagrams reveal the court space discretization. 
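To make the construction of $\mathcal{C}_{\text{poss}}$ concrete, a toy coarsening map might look as follows. The 5-foot defended rule matches the definition above, but the two-region court split is a hypothetical simplification of the seven regions we actually use:

```python
import math

def coarsen(ballcarrier, positions, defenders):
    """Map a full-resolution snapshot to a state in C_poss.

    `positions[p]` gives the (x, y) location of offensive player p,
    and `defenders` lists defender (x, y) locations.  The returned
    state is (ballcarrier ID, court region, defended indicator).
    """
    x, y = positions[ballcarrier]
    # Toy two-region split; the paper uses seven disjoint regions.
    region = "paint" if x < 19 else "perimeter"
    defended = any(math.hypot(x - dx, y - dy) < 5.0
                   for dx, dy in defenders)
    return (ballcarrier, region, defended)
```

The true coarsening also routes non-possession moments to the transition and end states described below; this sketch covers only the ballhandling states.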
Finally, we define a set of states to indicate that an annotated basketball action from the full-resolution data $Z$ is currently in progress. These ``transition'' states encapsulate constrained motifs in a possession, for example, when the ball is in the air traveling between players in a pass attempt. Explicitly, denote $\mathcal{C}_{\text{trans}} = \{$shot attempt from $c \in \mathcal{C}_{\text{poss}}$, pass to $c' \in \mathcal{C}_{\text{poss}}$ from $c \in \mathcal{C}_{\text{poss}}$, turnover in progress, rebound in progress$\}$ (listed in the gray shaded portions of Figure~\ref{fig:states}). These transition states carry information about the possession path, such as the most recent ballcarrier, and the target of the pass, while the ball is in the air during shot attempts and passes\footnote{The reason we index transition states by the origin of the pass/shot attempt (and destination of the pass) is to preserve this information under a Markov assumption, where generic ``pass'' or ``shot'' states would inappropriately allow future states to be independent of the players involved in the shot or pass.}. Note that, by design, a possession must pass through a state in $\mathcal{C}_{\text{trans}}$ in order to reach a state in $\mathcal{C}_{\text{end}}$. For simplicity and due to limitations of the data, this construction of $\mathcal{C} = \mathcal{C}_{\text{poss}} \cup \mathcal{C}_{\text{trans}} \cup \mathcal{C}_{\text{end}}$ excludes several notable basketball events (such as fouls, violations, and other stoppages in play) and aggregates others (the data, for example, does not discriminate among steals, intercepted passes, or lost balls out of bounds). \subsection{Combining Resolutions}\label{subsec:multiTheory} We make several modeling assumptions about the processes $Z$ and $C$, which allow them to be combined into a coherent EPV estimator.
\begin{enumerate}[label=(A\arabic*)] \item $C$ is marginally semi-Markov.\label{A1} \end{enumerate} The semi-Markov assumption \ref{A1} guarantees that the embedded sequence of disjoint possession states $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$ is a Markov chain, which ensures that it is straightforward to compute $\E[X | C_t]$ using the associated transition probability matrix \cite{kemeny1976finite}. Next, we specify the relationship between coarsened and full-resolution conditioning. This first requires defining two additional time points which mark changes in the future evolution of the possession: \begin{align} \tau_t & = \begin{cases} \text{min} \{ s : s > t, C_s \in \mathcal{C}_{\text{trans}}\} & \text{if } C_t \in \mathcal{C}_{\text{poss}} \\ t & \text{if } C_t \not \in \mathcal{C}_{\text{poss}} \end{cases} \label{taudef} \\ \delta_t &= \text{min}\{s : s \geq \tau_t, C_s \not \in \mathcal{C}_{\text{trans}} \} \label{deltadef}. \end{align} Thus, assuming a player possesses the ball at time $t$, $\tau_t$ is the first time after $t$ he attempts a shot/pass or turns the ball over (entering a state in $\mathcal{C}_{\text{trans}}$), and $\delta_t$ is the endpoint of this shot/pass/turnover (leaving a state in $\mathcal{C}_{\text{trans}}$). We assume that passing through these transition states, $\mathcal{C}_{\text{trans}}$, \textit{decouples} the future of the possession after time $\delta_t$ from its history up to time $t$: \begin{enumerate}[label=(A\arabic*),resume] \item For all $s > \delta_t$ and $c \in \mathcal{C}$, $\mathbb{P}(C_s =c | C_{\delta_t}, \mathcal{F}^{(Z)}_t) = \mathbb{P}(C_s =c | C_{\delta_t})$. \label{A2} \end{enumerate} Intuitively, assumption \ref{A2} states that for predicting coarsened states beyond some point in the future $\delta_t$, all information in the possession history up to time $t$ is summarized by the distribution of $C_{\delta_t}$.
The dynamics of basketball make this assumption reasonable; when a player passes the ball or attempts a shot, this represents a structural transition in the basketball possession to which all players react. Their actions prior to this transition are not likely to influence their actions after this transition. Given $C_{\delta_t}$---which, for a pass at $\tau_t$ includes the pass recipient, his court region, and defensive pressure, and for a shot attempt at $\tau_t$ includes the shot outcome---data prior to the pass/shot attempt are not informative of the possession's future evolution. Together, these assumptions yield a simplified expression for \eqref{epvdef}, which combines contributions from full-resolution and coarsened views of the process. \begin{theorem}\label{epvtheorem} Under assumptions \ref{A1}--\ref{A2}, the full-resolution EPV $\nu_t$ can be rewritten: \begin{equation}\label{epveqn} \nu_t = \sum_{c \in \mathcal{C}} \E[X | C_{\delta_t} = c]\mathbb{P}(C_{\delta_t} = c | \mathcal{F}^{(Z)}_t). \end{equation} \end{theorem} \begin{remark} Although we have specified this result in terms of the specific coarsening defined in Section~\ref{subsec:multiCoarse}, we could substitute any coarsening for which \ref{A1}--\ref{A2} are well-defined and reasonably hold. We briefly discuss potential alternative coarsenings in Section \ref{sec:Discussion}. \end{remark} The proof of Theorem~\ref{epvtheorem} follows immediately from \ref{A1}--\ref{A2}, and is therefore omitted. Heuristically, \eqref{epveqn} expresses $\nu_t$ as the expectation given by a homogeneous Markov chain on $\mathcal{C}$ with a random starting point $C_{\delta_t}$, where only the starting point depends on the full-resolution information $\mathcal{F}^{(Z)}_t$.
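As a numerical illustration of \eqref{epveqn}, the sketch below computes $\E[X \mid C_{\delta_t} = c]$ by value iteration on a toy absorbing Markov chain, then averages these values against a distribution over $C_{\delta_t}$. The states, transition probabilities, and $C_{\delta_t}$ distribution are invented for illustration, standing in for the estimated transition matrix and the full-resolution forecast $\mathbb{P}(C_{\delta_t} = c \mid \mathcal{F}^{(Z)}_t)$:

```python
def absorption_values(P, values, n_iter=200):
    """Expected point total E[X | C = c] for each state of an
    absorbing Markov chain.  `P[c]` maps successor states to
    transition probabilities for each transient state c; states
    absent from P are absorbing, with point values in `values`.
    Value iteration converges because every possession path
    eventually reaches an end state.
    """
    EX = {c: values.get(c, 0.0) for c in set(P) | set(values)}
    for _ in range(n_iter):
        for c, successors in P.items():
            EX[c] = sum(p * EX[s] for s, p in successors.items())
    return EX

def epv(p_delta, EX):
    """nu_t = sum_c E[X | C_delta = c] * P(C_delta = c | F_t)."""
    return sum(p * EX[c] for c, p in p_delta.items())
```

In practice the chain lives on the full coarsened state space $\mathcal{C}$, and the distribution over $C_{\delta_t}$ comes from Monte Carlo forecasts of the full-resolution process over the short horizon $\delta_t$.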
This result illustrates the multiresolution conditioning scheme that makes our EPV approach computationally feasible: the term $\E[X | C_{\delta_t} = c]$ is easy to calculate using properties of Markov chains, and $\mathbb{P}(C_{\delta_t} | \mathcal{F}^{(Z)}_t)$ only requires forecasting the full-resolution data for a short period of time relative to \eqref{epvdef}, as $\delta_t \leq T$. \end{document} \iffalse When the coarsened process $C_t$ transitions from a state in $\mathcal{C}_{\text{poss}}$ to one in $\mathcal{C}_{\text{trans}}$, we call this transition between coarsened states a \textit{macrotransition}. \begin{definition} If $C_t \in \mathcal{C}_{\text{poss}}$ and $C_{t + \epsilon} \in \mathcal{C}_{\text{trans}}$, then $C_t \rightarrow C_{t + \epsilon}$ is a \textit{macrotransition}. \end{definition} Macrotransitions, which include all ball movements (passes, shot attempts, turnovers), mark large-scale shifts that form the basis of offensive basketball play. The term carries a double meaning, as a macrotransition describes both a transition among states in our coarsened process, $C_t \rightarrow C_{t + \epsilon}$, as well as a transition of ballcarrier identity on the basketball court. By construction, for a possession that is in a state in $\mathcal{C}_{\text{poss}}$ to proceed to a state in $\mathcal{C}_{\text{end}}$ or a state in $\mathcal{C}_{\text{poss}}$ corresponding to a different ballhandler, a macrotransition must occur as possession passes through a transition state in $\mathcal{C}_{\text{trans}}$ (see possible transition paths illustrated in Figure~\ref{fig:states}). This structure reveals that at any time $t$ during a possession, we are guaranteed to observe the \textit{exit state} of a future (or current, if $C_t \in \mathcal{C}_{\text{trans}}$) macrotransition. 
Specifically, let $\delta = \min\{s: s > t, C_{s-\epsilon} \in \mathcal{C}_{\text{trans}} \text{ and } C_s \not \in \mathcal{C}_{\text{trans}} \}$ denote the time the possession reaches the state \textit{after} the next (or current, if $C_t \in \mathcal{C}_{\text{trans}}$) macrotransition after time $t$. Thus, if the possession is currently in a macrotransition, $\delta$ is the first time at which a new possession or end state is occupied (ending the macrotransition), while if a player currently possesses the ball, $\delta$ is the time at which the possession reaches the exit state of a future macrotransition. $\delta$ is a bounded stopping time, so we can condition on $C_{\delta}$ to rewrite EPV \eqref{epvdef} as \begin{align} \nu_t &= \sum_{c \in \mathcal{C}} \E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t] \mathbb{P}(C_\delta = c | \mathcal{F}^{(Z)}_t). \label{EPVdecomp} \end{align} It is helpful to expand the second term in \eqref{EPVdecomp}, $\mathbb{P}(C_\delta = c|\mathcal{F}^{(Z)}_t)$, by conditioning on the start of the macrotransition that corresponds to the exit state $C_\delta$. Denote $M(t)$ as the event that a macrotransition begins in $(t, t + \epsilon]$, and let $\tau = \text{min}\{s: s >t, M(s)\}$ be the time at which the macrotransition ending in $C_\delta$ begins. Thus, $\tau$ and $\delta$ bookend the times during which the possession is in the next (or current, but ongoing) macrotransition, with $C_{\tau}$ being the state in $\mathcal{C}$ immediately \textit{prior} to the start of this macrotransition and $C_{\delta}$ the state immediately succeeding it. Like $\delta$, at any time $t < T$, $\tau$ is a bounded stopping time; however, note that if a macrotransition is in progress at time $t$ then $\tau < t$, and, having been observed, $\tau$ has a degenerate distribution. 
Defining $\tau$ allows us to write: \begin{align} \mathbb{P}(C_\delta = c|\mathcal{F}^{(Z)}_t) = \sum_{c \in \mathcal{C}} \int_{t}^{\infty} \int_{\mathcal{Z}}& \mathbb{P}(C_\delta = c | M(\tau), Z_\tau = z, \tau = s, \mathcal{F}^{(Z)}_t) \nonumber \\ & \times \mathbb{P}(M(\tau), Z_\tau = z, \tau = s | \mathcal{F}^{(Z)}_t) dz ds. \label{EPVmulti2} \end{align} We make one additional expansion to the terms we have introduced for calculating EPV. The second factor in \eqref{EPVmulti2}, $\mathbb{P}(M(\tau), Z_\tau=z,\tau=s|\mathcal{F}^{(Z)}_t)$, models the location and time of the next macrotransition---implicitly averaging over the intermediate path of the possession in the process. This is the critical piece of our multiresolution structure that connects the full-resolution process $Z$ to the coarsened process $C$, and the component of our model that fully utilizes multiresolution conditioning. We expand this term using our macro- and microtransition models. \begin{definition} The \textit{macrotransition model} is $\mathbb{P}(M(t)|\mathcal{F}^{(Z)}_t)$. \end{definition} \begin{definition} The \textit{microtransition model} is $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$, where $M(t)^c$ is the complement of $M(t)$. \textit{Microtransitions} are instantaneous changes in the full resolution data $Z_t \rightarrow Z_{t + \epsilon}$ over time windows where a macrotransition is not observed; thus, only location components (and not event annotations) change from $Z_t$ to $Z_{t + \epsilon}$. \end{definition} Multiresolution transition models allow us to sample from $\mathbb{P}(\tau, Z_{\tau}|\mathcal{F}^{(Z)}_t)$, enabling Monte Carlo evaluation of \eqref{EPVmulti2}. The basic idea is that we use the macrotransition model to draw from $\mathbb{P}(M(t)|\mathcal{F}^{(Z)}_t)$ and if $M(t)^c$ and no macrotransition occurs in $(t, t+\epsilon]$, we use the microtransition model to draw from $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$. 
Iterating this process, we alternate draws from the macro- and microtransition models until observing $(\tau, Z_{\tau})$---of course, this also yields $M(\tau)$ as a consequence of our definition of $\tau$. Parametric forms for these macro- and microtransition models are discussed explicitly in Sections \ref{sec:Macro} and \ref{sec:Micro} respectively, while Section \ref{sec:Computation} provides additional details on the Monte Carlo integration scheme. Expanding EPV by conditioning on intermediate values in principle does not ease the problem of its evaluation. However, several of the components we have introduced motivate reasonable conditional independence assumptions that simplify their evaluation. Only by writing EPV as an average over additional random variables defined in the probability space of our possession can we articulate such assumptions and leverage them to compute EPV. \subsection{Conditional Independence Assumptions} Our expansions of $\nu_t = \E[h(C_T) | \mathcal{F}^{(Z)}_t]$ introduced in the previous subsection \eqref{EPVdecomp}--\eqref{EPVmulti2} express EPV in terms of three probability models: \begin{align} \nu_t &= \sum_{c \in \mathcal{C}} E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t] \left( \int_{t}^{\infty} \int_{\mathcal{Z}} \mathbb{P}(C_\delta = c | M(\tau), Z_\tau = z, \tau = s, \mathcal{F}^{(Z)}_t) \right. \nonumber \\ & \hspace{2cm} \left. \times \mathbb{P}(M(\tau), Z_\tau = z, \tau = s | \mathcal{F}^{(Z)}_t) dz ds \right). \label{EPVmulti} \end{align} The multiresolution transition models sample from $\mathbb{P}(M(\tau), Z_{\tau}, \tau|\mathcal{F}^{(Z)}_t)$, eliminating the need to evaluate the third term in \eqref{EPVmulti} explicitly when computing $\nu_t$ via Monte Carlo. The second term in \eqref{EPVmulti} is actually quite easy to work with since $C_{\delta}$ is categorical, and given $Z_{\tau}$ the space of possible values it can take is relatively small.
This is due to the manner in which macrotransitions constrain the spatiotemporal evolution of the possession. Given $Z_{\tau}$, we can obtain the location and separation from the defense of all four possible pass recipients given a pass in $(\tau, \tau + \epsilon]$, so only a subset of states in $\mathcal{C}_{\text{poss}}$ are possible for $C_\delta$. Similarly, if a shot attempt occurs in this time window, $Z_{\tau}$ indicates whether a successful shot would yield 2 or 3 points, further subsetting the possible values of $C_\delta$. Modeling $C_\delta$ thus reduces to predicting the type of macrotransition corresponding to $M(\tau)$---a pass, shot attempt, or turnover. We discuss this in Section \ref{sec:Macro} in the context of our macrotransition model. The first term in \eqref{EPVmulti}, $E[h(C_T)|C_\delta = c, \mathcal{F}^{(Z)}_t]$, provides the expected point value of the possession given the (coarsened) result of the next macrotransition. Prima facie, this term seems as difficult to evaluate as EPV itself, as it has the same essential structure, requiring integration over the future trajectory of the possession after time $\delta$. However, we make a key assumption that frees the subsequent evolution of the possession, after time $\delta$, from dependence on the full-resolution history $\mathcal{F}^{(Z)}_t$: \begin{equation}\label{decoupling} \E[h(C_T)|C_\delta, \mathcal{F}^{(Z)}_t] = \E[h(C_T)|C_\delta] \end{equation} This assumption is intuitive for two reasons. First, by constraining the possession to follow a restricted spatiotemporal path, it is reasonable to assume that the macrotransition exit state itself contains sufficient information to characterize the future evolution of the system.
Secondly, because macrotransitions play out over much longer timescales than the resolution of the data (i.e., several seconds, as opposed to 1/25th of a second), it is reasonable to assume that fine-scale spatial detail before the start of the macrotransition has been ``mixed out'' by the time the macrotransition ends. An additional, reasonable conditional independence assumption is that the coarsened state sequence $C_t, t > 0$ is marginally a semi-Markov process; that is, denoting $\mathcal{F}^{(C)}_t = \sigma(\{C_s^{-1}, 0 \leq s \leq t\})$ as the history of the coarsened process, for all $t' > t$ and $c \in \mathcal{C}$, we assume $\mathbb{P}(C_{t'} = c | \mathcal{F}^{(C)}_t) = \mathbb{P}(C_{t'} = c | C_t)$. A semi-Markov process generalizes a continuous time Markov Chain in that sojourn times need not be exponentially distributed. We associate with this semi-Markov process an embedded discrete, homogeneous Markov Chain: denote $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$ as the sequence of consecutive states $c \in \mathcal{C}$ visited by $C_t$ during the possession $0 < t \leq T$. Thus, $C^{(K)} = C_T$, and $K$ records the length of the possession in terms of the number of transitions between states in $\mathcal{C}$, which like $T$ is random. Combining these assumptions, the first term in \eqref{EPVmulti}, $\E[h(C_T)|C_\delta, \mathcal{F}^{(Z)}_t]$, can be computed easily from the transition probability matrix of the homogeneous Markov chain embedded in $C_t$. As $C^{(K)}$ is an absorbing state, ending the possession, we can rewrite \eqref{decoupling} as $\E[h(C^{(K)})|C_\delta]$. This is easily obtained by solving a linear system of equations deriving from the transition probability matrix of $C^{(0)}, C^{(1)}, \ldots, C^{(K)}$. Estimating this transition probability matrix is also discussed in Section \ref{sec:Macro}, where we show that it actually derives from the macrotransition model. 
Compared to using discrete, homogeneous Markov chains alone to calculate EPV, the multiresolution approach we take ultimately leverages many of the same computational advantages while remaining attuned to the full-resolution data, responding smoothly as the possession evolves over space and time. \subsection{Consistency} To be useful for decision-making, an EPV estimator requires several properties: \begin{itemize} \item \textbf{Coherence}: Our estimand of interest is not just the point-by-point EPV at any given time during a possession, but a joint estimate of the whole EPV curve as a possession unfolds. We expect that the martingale relationship described in Section~\ref{sec:EPV} should hold for EPV estimates as well, so that marginalizing over the conditional distributions of future EPV estimates yields the EPV for the current situation. This prevents contradictions (e.g. Simpson's paradoxes) between EPV estimates provided for a single possession, which may arise from marginal estimates that do not enforce this coherence. \item \textbf{Interpretability}: The model used to compute the expectation should have interpretable components. This aids in model checking, communication with interested end-users, and is useful for computing meaningful summaries. \item \textbf{Estimability}: The model should have few enough degrees of freedom to estimate its parameters from real data. \item \textbf{Tractability}: Given estimates of model parameters, the expectation in \eqref{epvdef} should be computationally tractable. \item \textbf{Sensitivity}: Given estimates of model parameters, EPV should respond to full-resolution changes in spatial information. \end{itemize} \todo[inline]{Should the above be turned into a paragraph?} \todo[inline]{I think ``coherence'' actually belongs in the previous section defining EPV, motivating the stochastic process approach over regression/classification.
Then the remaining points could be worked in here as part of a discussion on how the multiresolution framework generalizes. It's a version of ``Markov Chain Regression'' and the ideas of added interpretability and tractability for averaging over stochastic process are very valuable as well.} The above properties can be difficult to obtain together. Note first that a coherent EPV estimator is not trivial to obtain. For example, regression or classification approaches taking player positions and momenta as features and predicting 0, 2, or 3 points as outcomes would provide no guarantees of coherence. A coherent EPV estimator requires the integral be computed with respect to a Kolmogorov consistent stochastic process on $Z_t$ -- in other words, a distribution over the full path of states that a possession can take. Given that the possession model is coherent, tractability trades off with interpretability and sensitivity. A coherent stochastic process is easiest to construct as a set of transition kernels between possession states in adjacent timesteps. While these transition kernels may be relatively simple on small timescales, a coherent estimator requires that the transition kernel for longer timescales be defined by the convolution of the transition kernels on its subintervals. This can complicate computation substantially because most realistic transition kernels do not have closed-form convolution distributions, and thus require manual computation of normalizing factors. This problem is compounded by the fact that the possession path is an exotic mixture of continuous (e.g., position) and discrete (e.g., ballhandler identity) components. In general, this computation requires summing across all nodes in a tree whose branches represent the distinct paths that the possession could take. The number of nodes in this tree scales exponentially in the number of possession states and is uncountably infinite if some aspect of the possession state is continuous.
Thus, defining a possession model to compute EPV requires restrictive assumptions on the stochastic process to obtain tractability. Previous work on win probability and point expectations in other sports has obtained computational tractability by modeling a lower resolution summary of the possession state as a homogeneous Markov chain. This reduces the computational complexity of the integral by a) reducing the breadth of the possession state tree by coarsening the state space, and b) reducing the effective depth of the possession state tree using recursive symmetries in the transition kernels that make the convolution kernel simple to compute. In particular, the Markov chain formulation reduces the integration complexity from exponential in the size of the state space to cubic. In baseball and football, these assumptions have been applied successfully because the games have a natural discrete structure that makes an interpretable coarsening simple to define. In addition, the coarsened states are defined at natural breakpoints in play (at-bats in baseball, or downs in football), so while conditional expectations taken with respect to this coarsened state space may not obtain the maximum level of resolution, they are still considered sensitive enough to provide useful summaries. Unfortunately, the homogeneous Markov chain approach is not as effective for basketball. Within a possession, there are no discretizations or breakpoints that define a natural coarsening, so there is no appropriate level of sensitivity coarser than full spatial resolution. This presents an untenable tradeoff. On the one hand, any homogeneous Markov chain defined at a coarser resolution averages together irrelevant spatial situations in defining EPV at a given moment---for example, by averaging in a path that begins with a pass to a teammate in a location that is far from where he is currently standing \todo{wording}.
On the other hand, any homogeneous Markov chain defined at the appropriate level of resolution would not be feasible to estimate or tractable to integrate. Modeling a coarsened possession process under less stringent assumptions, however, can be well-motivated. Note that EPV is an expectation of a function $h(\cdot)$ that only depends on a few aspects of the possession state at the end time $T$---for example, if the ball goes through the hoop at this time the point value $X$ does not depend on where any player but the shooter is standing. This suggests that we can obtain acceptable EPV estimates from a model that only describes the evolution of a coarsened possession process, so long as that coarsened process depends on a filtration that is richer than that generated by the homogeneous Markov process above. To satisfy the competing priorities of coherence, tractability, and interpretability together in one model, we simultaneously model the possession at two levels of resolution. We use a high-resolution model for $Z$ to capture fine details of the possession, and we use a low-resolution model for a coarsened process $C$ that we obtain by a deterministic simplification of the full-resolution process $Z$. Together, the high-resolution model can capture high-resolution motifs that affect the path of the possession on short time horizons, while the low-resolution model captures less detailed aspects of the possession that are sufficient to learn the value of the possession on longer time horizons. To make these simultaneous models coherent, we require an assumption that the full-resolution possession state becomes decoupled from the coarsened state at more distant timepoints by intervening ``large'' shifts in the possession state that we call \textit{macrotransitions}. Both the coarsening and the macrotransition set may be chosen to reflect the modeler's intuition about the structure of a possession.
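As a minimal illustration of the tractability that a coarsened Markov model buys (a hypothetical three-state toy chain, purely for illustration, not the possession model developed here): expected terminal values satisfy a linear fixed-point equation and can be computed without enumerating the exponentially many possession paths.

```python
def expected_terminal_value(p_stay, p_score, value_score, tol=1e-12):
    """Expected terminal value for a toy absorbing chain with one
    transient state: with prob. p_stay remain transient, with prob.
    p_score absorb with the given value, otherwise absorb with value 0.
    Solves v = p_stay * v + p_score * value_score by fixed-point iteration."""
    v = 0.0
    while True:
        v_new = p_stay * v + p_score * value_score
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

# Closed form for comparison: v = p_score * value_score / (1 - p_stay) = 1.25
print(expected_terminal_value(0.2, 0.5, 2.0))
```

The same linear-system structure underlies transition-matrix approaches in general, which is why the coarsened computation scales polynomially rather than with the number of paths.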
If the chosen coarsening and shifts respect the decoupling assumption, the methodology developed here computes EPV exactly. If the assumption only holds approximately for a given coarsening, the methodology here approximates the integral in \eqref{epvdef}. For concreteness, we begin by describing a coarsening and macrotransition set that we consider to be the simplest non-trivial specification for modeling a basketball possession. We then describe the decoupling assumption that ensures consistency between the high-resolution model for $Z$ and the low-resolution model for $C$ under general specifications. Thus, we can divide the computation of EPV into two parts: first, computing the distribution over the first macrotransition to be encountered after time $t$ using the macrotransition and microtransition models, and second, computing the conditional EPV given the coarsened endpoint of that macrotransition using a transition matrix that can be derived from the macrotransition model parameters. We now discuss in detail the specification and estimation of these models. When a possession moves out of a macrotransition state, we call this next state the macrotransition's \textit{exit state}. These are illustrated by the endpoints of the red arrows in Figure~\ref{fig:states}. As shown in the figure, for any macrotransition $c \in \mathcal{C}_{\text{trans}}$, the set of possible exit states is restricted. For example, a possession in the shot attempt state must exit to either the ``made 2pt'' state in $\mathcal{C}_{\text{end}}$ or the ``rebound in progress'' state in $\mathcal{C}_{\text{trans}}$. In the scheme we employ here, the exit state of a shot or rebound macrotransition is treated as random (denoted by the multiple red ``exit'' arrows in Figure~\ref{fig:states}), while the exit state of a pass macrotransition is set deterministically to the position of the receiver of the pass---note that by linking pairs of states in $\mathcal{C}_{\text{poss}}$, we assume the intended recipient of the pass is known in real-time. In future work, pass macrotransitions could also be modeled with random exit states, representing the possibility of a pass being intercepted, but the annotations in the current data do not enable this modeling approach at present. \todo{We need to make clear the connection between macrotransitions and the previous coarsened process} Macrotransitions induce a natural decomposition of a possession that conforms to basketball intuition.
Coaches generally draw up offensive schemes that focus on sequences of macrotransitions, with all action in between designed to provide a more favorable context in which to make the next macrotransition decision. Here we introduce notation for macrotransitions and specify a similar decomposition of \eqref{epvdef} that splits the expectation across the result of the next macrotransition. \textbf{Macrotransition type.} At any given moment when a player possesses the ball, there are six possible categories of macrotransition, corresponding to 4 pass options, a shot attempt, or a turnover, which we index by $j \in \{1, \ldots, 6\}$ (See the six blue arrows in Figure~\ref{fig:states}.). Without loss of generality, assume that $j \leq 4$ corresponds to pass events, $j=5$ is a shot attempt and $j=6$ a turnover. For a fixed $\epsilon$ (chosen to be 1/25, the temporal resolution of our data), let $M_j(t)$ be the event that a macrotransition of type $j$ begins in the time window $(t, t + \epsilon]$. Also, denote $M(t) = \bigcup_{j=1}^6 M_j(t)$ and $M(t)^c$ its complement. \textbf{Macrotransition times.} Using this notation, a basic decomposition of \eqref{epvdef} conditions on the result of the next macrotransition to give \begin{align} \nu_t &= \sum_{c' \in \mathcal{C}} \E[h(C_T)|C_\delta = c', \mathcal{F}^{(Z)}_t] \mathbb{P}(C_\delta = c' | \mathcal{F}^{(Z)}_t), \label{EPVdecomp} \end{align} where the exit time $\delta$ is implicitly integrated out. For modeling purposes, it is useful to explicitly express the distribution of the next exit state $C_\delta$ in terms of the macrotransition event that preceded it, $M(\tau)$. For full generality, we average over the time ($\tau$), type ($M_j(\tau)$), and spatial context ($Z_s$) of the macrotransition yielding exit state $C_\delta$.
This yields the expression \begin{align} \nu_t &= \sum_{c' \in \mathcal{C}} \sum_{j=1}^6 \int_\mathcal{T} \int_{\mathcal{Z}} \E[h(C_T)|C_\delta = c', M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t] \nonumber \\ & \hspace{2cm} \times \mathbb{P}(C_\delta = c' | M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t) \mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t) dzds. \label{EPVmulti} \end{align} Each factor in \eqref{EPVmulti} corresponds to an aspect of a possession that needs to be modeled under this decomposition. The third factor $\mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t)$ models the type, location, and time of the next macrotransition---implicitly averaging over the intermediate path of the possession in the process. This is the critical piece of our multiresolution structure that connects the full-resolution process $Z$ to the coarsened process $C$. The basic idea is that we use the \textit{macrotransition model} to draw from $\mathbb{P}(M_j(t)|\mathcal{F}^{(Z)}_t)$ (blue arrows in Figure~\ref{fig:states}); if instead $M(t)^c$ occurs---that is, no macrotransition begins in $(t, t + \epsilon]$---we use the \textit{microtransition model} to draw from $\mathbb{P}(Z_{t + \epsilon} | M(t)^c, \mathcal{F}^{(Z)}_t)$ and simulate the player locations in the full-resolution data at time $t + \epsilon$ (black loop arrow in Figure~\ref{fig:states}). As discussed in Section~\ref{sec:Computation}, repeatedly drawing alternately from the macro- and microtransition models allows us to sample from $\mathbb{P}(M_j(s), Z_s=z,\tau=s|\mathcal{F}^{(Z)}_t)$. The second term in \eqref{EPVmulti} is the conditional distribution of the exit state $C_{\delta}$ given the macrotransition $M_j(s)$, which is in some cases degenerate (red arrows in Figure~\ref{fig:states}).
Because there are different pass macrotransition events for each teammate, if $j$ corresponds to a pass, then $C_{\delta}$ is (with probability one) the possession state occupied by the player corresponding to the $j$th passing option at the time of the pass. Likewise, if $j$ is the turnover macrotransition, then $C_{\delta}$ is in the turnover state with probability one. Only if $j$ is a shot attempt is this distribution nontrivial; in the case of a shot attempt, $C_{\delta}$ may be a made 2 or 3 point basket, or a missed 2 or 3 point basket (the point value of the shot would be contained in the full-resolution data at the time of the shot attempt, $Z_s$). We thus require a model for shot success given the full-resolution data prior to the shot. As the parametric form of this model is similar to that of our macrotransition model, it is discussed in the context of the macrotransition model in Section~\ref{sec:Macro}. Lastly, the remaining factor $\E[h(C_T)|C_\delta = c', M_j(s), Z_s=z, \tau=s,\mathcal{F}^{(Z)}_t]$ is potentially the most complex because, without additional assumptions, it has the same essential structure as $\nu_t$ in \eqref{epvdef} itself. However, the structure of macrotransitions motivates assumptions that simplify this factor to provide computational tractability while conforming to basketball intuition. \fi
\section{TSP among $\alpha$-fat Regions} \subsection{Problem Definition and Background} The Euclidean TSP with neighborhoods (TSPN) is the following problem: Given a set $\mathcal{R}$ of $k$ regions (subsets of $\mathbb{R}^2$), find a shortest tour that visits at least one point from each region. Even for disjoint or connected regions, the TSPN does not admit a PTAS unless $P=NP$ \cite{safra}. Aiming for a PTAS under additional restrictions on the input, \cite{mitchell-ptas} and \cite{elb} require connected and disjoint regions, and both introduce a notion of $\alpha$-fatness. \begin{definition}[\cite{mitchell-ptas}] \label{def:afat-mit} A region $P$ of points in the plane is \textbf{$\boldsymbol{\alpha}$-fat}, if it contains a disk of diameter $\dfrac{\mathrm{diam}(P)}{\alpha}$. \end{definition} \begin{definition}[\cite{elb}] \label{def:afat-elb} A region $P$ in the plane is \textbf{$\boldsymbol{\alpha}$-fat$_{\boldsymbol{E}}$}, if for every disk $\Theta$, such that the center of $\Theta$ is contained in $P$ but $\Theta$ does not fully contain $P$, the area of the intersection $P \cap \Theta$ is at least $\frac{1}{\alpha}$ times the area of $\Theta$. \end{definition} For $\alpha$-fat$_E$ regions, Chan and Elbassioni \cite{elb-qptas} developed a quasi-polynomial time approximation scheme (even for a more general notion of fatness and in more general metric spaces). Mitchell \cite{mitchell-ptas} was the first to consider $\alpha$-fat regions. Bodlaender et al. \cite{grigoriev} introduced the notion of geographic clustering, where each region contains a square of size $q$ and has diameter at most $cq$ for a fixed constant $c$, which is a special case of $\alpha$-fat regions. They showed that the TSPN with geographic clustering admits a PTAS based on Arora's framework for the Euclidean TSP. 
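To make Definition~\ref{def:afat-mit} concrete, consider axis-aligned rectangles: a $w \times h$ rectangle has diameter $\sqrt{w^2 + h^2}$ and largest inscribed disk of diameter $\min(w, h)$, so the smallest admissible $\alpha$ is the ratio of the two. A small Python sketch (purely illustrative, not part of either cited paper):

```python
import math

def rect_fatness(w: float, h: float) -> float:
    """Smallest alpha for which a w-by-h axis-aligned rectangle is
    alpha-fat in the sense of Definition 1: it contains a disk of
    diameter diam/alpha, and the largest inscribed disk has
    diameter min(w, h)."""
    diam = math.hypot(w, h)   # diameter of the rectangle
    inscribed = min(w, h)     # diameter of the largest inscribed disk
    return diam / inscribed

# A square is sqrt(2)-fat; elongated rectangles need a larger alpha.
print(rect_fatness(1, 1))    # approx. 1.414
print(rect_fatness(1, 10))   # approx. 10.05
```

In particular, regions of bounded $\alpha$ cannot be arbitrarily elongated, which is what makes the lower bounds on tour length possible.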
In all cases, $\alpha$-fatness provides a lower bound (in terms of the regions' diameters) on the length of a tour visiting disjoint regions, but in the following, the second definition will turn out to be more useful. Throughout this paper, $\alpha \geq 1$ and $\varepsilon > 0$ will be constants. \subsection{Mitchell's Algorithm} The core of Mitchell's algorithm is dynamic programming, which requires certain restrictions on the space of solutions. To this end, Mitchell claims the following: There is an almost optimal tour (up to a factor of $1 + \varepsilon$) such that: \begin{enumerate}[(A)] \item The tour visits the minimum-diameter axis-aligned rectangle $R_0$ intersecting all regions, and therefore has to be located within a window $W_0$ of diameter $\mathcal{O}(\, \mathrm{diam}(R_0))$ intersecting $R_0$. We distinguish internal regions $\mathcal{R}_{W_0}$ that are entirely contained in $W_0$, and external regions. \item We can require the vertices of the tour to lie on a polynomial-size grid (in $k$ and $\frac{1}{\varepsilon}$) within this rectangle. \item The tour is a connected Eulerian graph fulfilling the ``$(m,M)$-guillotine property'' (which roughly states that there is a recursive decomposition of the bounding box of the tour by cutting it into subwindows such that the structure of internal regions and tour segments on the cut is of bounded complexity in $m$ and $M$), again at a loss of only a factor of $1 + \varepsilon$ for appropriately chosen $m$ and $M$. \item The tour obeys (B) and (C) simultaneously. \item The external regions can be dealt with efficiently, as there is only a polynomial number of ways for them to be visited by an $(m,M)$-guillotine tour (i.e., for every cut, there is a polynomial number of options for which regions will be visited on which side of it). \end{enumerate} Under these assumptions, Mitchell states a dynamic programming algorithm.
Starting with a window (axis-parallel rectangle) $W_0$, which is assumed to contain all edges of the tour, every subwindow $W$ defines several subproblems (see Figure~\ref{pic:sub}). The subproblems also enumerate all possible configurations of edge segments intersecting the window's boundary, connection patterns of these segments, internal (contained in $W_0$) and external (intersecting $\partial W_0$) regions to be visited inside and outside of the window, cuts (horizontal or vertical lines dividing $W$ into two subwindows) and configurations on the cut. For each cut, the subproblems to both sides will already have been solved through a bottom-up recursion, so we can select an optimal solution with compatible configurations. The optimum (shortest) solution for the subproblem (among all possible cuts) is stored and can be used for the next recursion level. \begin{figure}[hbt] \centering \includegraphics[width=7cm]{sub} \caption{Structure of a subproblem} \label{pic:sub} \end{figure} Assumption A is false and will be rectified in Section~\ref{sec:loc}, Lemma~\ref{lem:loc}. Statement B is correct. For the third statement, a stronger assumption on the regions can be used to mend the upper bound for the additional length incurred in Mitchell's construction; see Section~\ref{sec:guillotine}, Theorem~\ref{thm:charge}. Preserving connectivity in a graph with the guillotine property is difficult; it is not accounted for in \cite{mitchell-ptas}, and it is not clear how Mitchell's line of argument could accommodate it. We present a counterexample in Section~\ref{sec:cnn}, Figure~\ref{pic:region-span}. While not proven by Mitchell, statement D is still correct (if assumption C holds for the given tour); a technical argument will be sketched in Section~\ref{sec:grid}. The last statement is again false, but can be fixed using a different notion of $\alpha$-fatness, which we will show in Section~\ref{sec:ext}.
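As a rough illustration of why the number of subproblems stays polynomial (a simplified sketch that only counts windows, ignoring the boundary configurations that the actual subproblems also enumerate): with $g$ candidate grid coordinates per axis, there are only $\binom{g}{2}^2$ windows.

```python
from itertools import combinations

def grid_windows(g: int):
    """All axis-aligned windows whose sides lie on a g x g grid of
    candidate coordinates, as tuples (x1, x2, y1, y2) with x1 < x2, y1 < y2."""
    coords = range(g)
    return [(x1, x2, y1, y2)
            for x1, x2 in combinations(coords, 2)
            for y1, y2 in combinations(coords, 2)]

# The number of windows is (g choose 2)^2 -- polynomial in the grid size,
# so a bottom-up recursion over all windows and all cuts stays polynomial.
print(len(grid_windows(5)))  # (5*4/2)^2 = 100
```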
\subsection{Localization} \label{sec:loc} In {Mitchell}'s algorithm, the search for a (nearly) optimal tour among a set $\mathcal{R}$ of connected regions is restricted to a small neighborhood of the minimum-diameter axis-aligned rectangle $R_0$ that intersects all regions. \begin{claim}[{\cite[Lemma 2.4]{mitchell-ptas}}] There exists an optimal tour $T^*$ of the regions in $\mathcal{R}$ that lies within the ball $B(c_0, 2 \, \mathrm{diam} (R_0))$ of radius $2 \, \mathrm{diam} (R_0)$ around the center point $c_0$ of $R_0$. \end{claim} However, Figure~\ref{pic:loc} shows that in general, this is false: a nearly optimal tour need not be within $\mathcal{O}(\, \mathrm{diam} R_0)$ distance of $R_0$, even if the regions are $\alpha$-fat, disjoint and connected as in Figure~\ref{pic:loc}. The vicinity of $R_0$ only contains a $\sqrt{2}$-approximation of the optimum, which is instead found within $R_1$. \begin{figure}[hbt] \center \includegraphics[width=3.8cm]{tspn} \caption{Localization of an optimal tour} \label{pic:loc} \end{figure} We now show how this problem can be resolved: If an optimal tour intersected $R_0$, {Mitchell}'s lemma would be correct. He argues that, if some regions were to be visited far away from $R_0$, the path leading to them could be replaced by $\partial R_0$, which, due to the connectivity of the regions, must visit those regions. Otherwise, no region can be fully contained in $R_0$, so the same argument yields that every region must intersect $\partial R_0$, making $\, \mathrm{perim}(R_0) \leq 2\sqrt{2}\, \mathrm{diam}(R_0)$ an upper bound for the length $L^*$ of an optimal solution. Combining this with the fact that $L^* \geq 2\, \mathrm{diam}(R_0)$, $L^*$ is now known up to a constant factor. Now, there are two cases to consider: If there is a small region (of diameter at most $cL^*$ for a suitable constant $c$), an area of diameter $\mathcal{O}(L^*)$ around this region must contain an optimal tour. Otherwise, all regions have diameter greater than $cL^*$.
If the regions are required to be polygons, it is possible to limit the number of possible (approximate) locations of an optimal tour by adapting an approach by J. Gudmundsson and C. Levcopoulos \cite[Section 5.1]{gudm}, who show that in that case an optimal tour must be the boundary of a convex polygon. This additional structural information then allows them to deduce the existence of an optimal tour within distance $\mathcal{O}(L^*)$ of a vertex of one of the polygonal regions. Considering rectangles of the right size (since $L^*$ is known up to a constant factor) yields the following lemma: \begin{lemma} \label{lem:loc} For a set $\mathcal{R}$ of disjoint, connected polygons in the plane with a total of $n$ vertices, $\mathcal{O}(n)$ rectangles of size $\mathcal{O}(L^*)$ can be found in polynomial time, such that an optimal tour of length $L^*$ is contained in at least one of them. \end{lemma} \subsection{Guillotine subdivisions and the charging scheme} \label{sec:guillotine} If there were no further problems, a PTAS could be obtained by applying {Mitchell}'s algorithm to all rectangles from Lemma~\ref{lem:loc}. The main idea of the algorithm is to find a nearly optimal tour that satisfies the $(m, M)$-guillotine property, which will be defined in the following. Consider a polygonal planar embedding $S$ of a graph $G$ with edge set $E$ and a total length of $L$. Without loss of generality, let $E$ be a subset of the interior of the unit square $B$. Let $\mathcal{R}$ be a set of regions and $W_0$ the axis-aligned bounding box of an optimal tour (we can afford to enumerate all possibilities on a grid and get a $(1+\varepsilon)$-approximation of $W_0$); let $\mathcal{R}_{W_0}$ be the subset of regions that lie in the interior of $W_0$. \begin{definition}[\cite{mitchell-ptas}] A \textbf{window} is an axis-aligned rectangle $W \subseteq B$. Let $l$ be a horizontal or vertical line through the interior of $W$; then $l$ is called a \textbf{cut} for $W$.
The intersection $l \cap E \cap \, \mathrm{int}(W)$ consists of a set of subsegments of the restriction of $E$ to $W$. Let $p_1, \dots, p_\xi$ be the endpoints of these segments ordered along $l$. Then the \textbf{$\boldsymbol{m}$-span} $\sigma_m(l)$ of $l$ (with respect to $W$) is empty if $\xi \leq 2m-2$, and consists of the line segment $\overline{p_m p_{\xi-m+1}}$ otherwise (see Figure~\ref{pic:cuts}). A cut $l$ is \textbf{$\boldsymbol{m}$-good} with respect to $W$ and $E$ if $\sigma_m(l) \subseteq E$. \end{definition} \begin{figure}[hbt] \center \includegraphics[width=12cm]{cuts-tsp} \caption{A cut $l$ and its $m$-span for $m=1,2,3$. The cut is $3$-good, but not $2$-good.} \label{pic:cuts} \end{figure} Mitchell defines the $M$-region-span analogously: \begin{definition}[\cite{mitchell-ptas}] \label{def:m-region-span} The intersection $l \cap \mathcal{R}_{W_0} \cap \, \mathrm{int}(W)$ of a cut $l$ with the regions $\mathcal{R}_{W_0}$ restricted to $W$ consists of a set of subsegments of $l$. The \textbf{$\boldsymbol{M}$-region-span} $\Sigma_M(l)$ of $l$ is the line segment $\overline{p_M p_{\xi-M+1}}$ along $l$ from the $M$th entry point $p_M$, where $l$ enters the $M$th region of $\mathcal{R}_{W_0}$, to the $M$th-from-the-last exit point $p_{\xi-M+1}$, assuming that the number of intersected regions is $\xi > 2(M-1)$. Otherwise, the $M$-region-span is empty. \end{definition} \hspace*{-0.7cm} \begin{minipage}{14.8cm} \quad This definition is ambiguous if the regions are not required to be convex, because the order of the regions is unclear and there might be a number of points at which $l$ enters or exits the same region. For example, on the right, many of the line segments connecting two red dots could be the $1$-region-span according to this definition.
\end{minipage} \hfill \begin{minipage}{2cm} \centering \includegraphics[width=\linewidth]{region-span-amb} \end{minipage} Furthermore, {Mitchell}'s $M$-region-span does not ``behave well'' in the corresponding charging scheme. We propose the following alternative definition. Its benefits will become apparent in the proof of Theorem~\ref{thm:charge} and in Figure~\ref{pic:charging-problems}: \begin{definition} \label{def:region-span} The intersection $l \cap \mathcal{R}_{W_0} \cap \, \mathrm{int}(W)$ of a cut $l$ with the internal regions $\mathcal{R}_{W_0}$ restricted to $W$ consists of a (possibly empty) set of subsegments of $l$. Let $p_1, \dots, p_\xi$ be the endpoints of these segments that lie in $\mathrm{int}(W)$, ordered along $l$. Then the \textbf{$\boldsymbol{M}$-region-span} $\Sigma_M(l)$ of $l$ (with respect to $\mathcal{R}_{W_0}$ and $W$) is empty if $\xi \leq 2M-2$ and consists of the line segment $\overline{p_M p_{\xi-M+1}}$ otherwise (see Figure~\ref{pic:rcuts}). A cut $l$ is \textbf{$\boldsymbol{M}$-region-good} with respect to $W$, $\mathcal{R}_{W_0}$ and $E$ if $\Sigma_M(l) \subseteq E$.
\end{definition} \begin{figure}[hbt] \center \includegraphics[width=12cm]{region-cuts-tsp} \caption{A cut $l$ and its $M$-region-span (according to Definition~\ref{def:region-span}) for $M=1,2,3$.} \label{pic:rcuts} \end{figure} With either definition of the $M$-region-span, we can define a corresponding version of the $(m, M)$-guillotine property as follows: \begin{definition}[{\cite{mitchell-ptas}}] An edge set $E$ of a polygonal planar embedded graph satisfies the \textbf{$\boldsymbol{(m, M)}$-guillotine property} with respect to a window $W$ and regions $\mathcal{R}_{W_0}$, if one of the following conditions holds: \begin{itemize} \item No edge of $E$ lies completely in the interior of $W$, \textit{or} \item There is a cut $l$ of $W$ that is $m$-good with respect to $W$ and $E$ and $M$-region-good with respect to $W$, $\mathcal{R}_{W_0}$ and $E$, such that $l$ splits $W$ into two windows $W'$ and $W''$, for which $E$ recursively satisfies the $(m, M)$-guillotine property with respect to $W'$ resp. $W''$ and $\mathcal{R}_{W_0}$. \end{itemize} \end{definition} It is clear from this definition that transforming a tour into an edge set with this property will induce an additional length that depends both on the edges and the regions present. The crucial property of a tour connecting disjoint, $\alpha$-fat regions is that their number and diameter provide a lower bound on its length. 
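Once the crossing endpoints along a cut have been extracted and sorted, the span computations in the preceding definitions reduce to simple index arithmetic. A small sketch (sorted endpoint coordinates as input; illustrative only):

```python
def m_span(points, m):
    """Given the sorted endpoints p_1 <= ... <= p_xi of the subsegments
    that a cut induces (coordinates along the cut), return the m-span
    as the pair (p_m, p_{xi-m+1}), or None if xi <= 2m - 2 (empty span).
    The same rule computes the M-region-span from region endpoints."""
    xi = len(points)
    if xi <= 2 * m - 2:
        return None
    return (points[m - 1], points[xi - m])  # 1-based p_m and p_{xi - m + 1}

# Six crossing endpoints: the 1-span stretches over all of them,
# the 3-span only over the middle pair, and the 4-span is empty.
pts = [0.0, 1.0, 2.5, 4.0, 5.5, 7.0]
print(m_span(pts, 1))  # (0.0, 7.0)
print(m_span(pts, 3))  # (2.5, 4.0)
print(m_span(pts, 4))  # None
```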
It is worth noting that the following lemma holds for a tour among $\alpha$-fat regions in either Mitchell's (Definition~\ref{def:afat-mit}) or Elbassioni's and Fishkin's (Definition~\ref{def:afat-elb}) sense: \begin{lemma}[{\cite[Lemma 2.6]{mitchell-ptas}}] \label{lem:afat-mit} Let $\varepsilon > 0$, then there is a constant $C$ (that depends on $\varepsilon$ and $\alpha$), such that for every \textsc{TSPN}-tour $T^*$ of length $L^*$, connecting $k$ disjoint, connected, $\alpha$-fat ($\alpha$-fat$_E$) regions in the plane, $L^* \geq C \cdot \dfrac{\lambda(\mathcal{R}_{W_0})}{\log (\frac{k}{\varepsilon})}$, where $\lambda(\mathcal{R}_{W_0})$ is the sum of the diameters of the regions that are completely contained in the axis-aligned bounding box $W_0$ of $T^*$. \end{lemma} \cite{mitchell-ptas} provides a proof for this lemma with respect to $\alpha$-fat regions in the sense of Definition~\ref{def:afat-mit}, which can easily be adapted for $\alpha$-fat$_E$ regions as in Definition~\ref{def:afat-elb} (even without requiring connected regions). In the dynamic programming algorithm, $M$ can be chosen as $\mathcal{O}(\frac{1}{\varepsilon} \log (\frac{k}{\varepsilon}))$; therefore we can ``afford'' to construct additional edges of length $\mathcal{O}(\frac{\, \mathrm{diam}(P_i)}{M})$ for every $P_i \in \mathcal{R}_{W_0}$ and still obtain a $(1 + \mathcal{O}(\varepsilon))$-approximation algorithm. The following definitions were not explicitly given in \cite{mitchell-ptas} and are therefore adapted from the corresponding definitions in \cite{mitchell-tsp} for the standard TSP: \begin{definition} Let $l$ be a cut through window $W$, and $p$ a point on $l$, then $p$ is called \textbf{$\boldsymbol{m}$-dark} with respect to $W$ and an edge set $E$, if $p$ is contained in the $m$-span of the cut through $p$ that is orthogonal to $l$. 
Similarly, a point $p$ on a cut $l$ is said to be \textbf{$\boldsymbol{M}$-region-dark} if it is contained in the $M$-region-span of a cut through $p$ that is orthogonal to $l$. A segment on a cut $l$ is called \textbf{$\boldsymbol{m}$-dark} and \textbf{$\boldsymbol{M}$-region-dark}, respectively, if every point of it is. A cut $l$ is called \textbf{favorable} if the sum of the lengths of its $m$-dark and $M$-region-dark portions is at least as big as the sum of the lengths of its $m$-span and $M$-region-span. \end{definition} While our definition of the $M$-region-span removes the ambiguity and ensures the correctness of the proof techniques used by Mitchell, it yields a weaker (but correct) overall statement: \begin{theorem}[{Corrected version of \cite[Theorem 3.1]{mitchell-ptas}}] \label{thm:charge} Let $G$ be a planar embedded connected graph, with edge set $E$ consisting of line segments of total length $L$. Let $\mathcal{R}$ be a set of disjoint, polygonal, $\alpha$-fat regions and assume that $E \cap P_i \neq \emptyset$ for every $P_i \in \mathcal{R}$. Let $W_0$ be the axis-aligned bounding box of $E$. Then, for any positive integers $m$ and $M$, there exists an edge set $E' \supseteq E$ that obeys the $(m, M)$-guillotine property with respect to $W_0$ and regions $\mathcal{R}_{W_0}$, and for which the length of $E'$ is at most $L + \frac{\sqrt{2}}{m} L + \frac{\sqrt{2}}{M} \Lambda(\mathcal{R}_{W_0})$, where $\Lambda(\mathcal{R}_{W_0})$ is the sum of the perimeters of the regions in $\mathcal{R}_{W_0}$. \end{theorem}
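The $\sqrt{2}$ factors in this bound ultimately stem from comparing a segment's length to the perimeter of its axis-aligned bounding box, $2(|\Delta x| + |\Delta y|) \leq 2\sqrt{2}\sqrt{\Delta x^2 + \Delta y^2}$, with equality for $45^\circ$ segments. A quick numerical check (illustrative only):

```python
import math
import random

def bbox_perimeter_ratio(dx: float, dy: float) -> float:
    """Perimeter of the axis-aligned bounding box of a segment with
    displacement (dx, dy), divided by the segment's length."""
    length = math.hypot(dx, dy)
    return 2 * (abs(dx) + abs(dy)) / length

# The ratio is maximized (= 2*sqrt(2)) by 45-degree segments
# and never exceeds that value.
random.seed(0)
ratios = [bbox_perimeter_ratio(random.uniform(0.01, 1), random.uniform(0.01, 1))
          for _ in range(1000)]
print(max(ratios) <= 2 * math.sqrt(2) + 1e-12)                        # True
print(abs(bbox_perimeter_ratio(1.0, 1.0) - 2 * math.sqrt(2)) < 1e-12) # True
```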
To apply the lower bound on the optimum obtained from $\alpha$-fatness as in the original paper, a further restriction can be imposed on the regions -- that the ratio between the perimeter and diameter of regions is bounded by a constant, which is true, for example, for convex regions. Since polygonal regions only ensure that this ratio is bounded by $\mathcal{O}(n)$, this is a very restrictive assumption. Note further that this theorem, as well as \cite[Theorem 3.1]{mitchell-ptas}, does not establish the existence of a \emph{connected} edge set with the properties of $E'$; see Section~\ref{sec:cnn}. The proof relies on the following key lemma by Mitchell: \begin{lemma}[{\cite[Lemma 3.1]{mitchell-ptas}}] \label{lem:fav-cut} For any planar embedded graph $G$ with edge set $E$, any set of regions $\mathcal{R}_{W_0}$ and any window $W$, there is a favorable cut. \end{lemma} Now, given an edge set $E$ as in the theorem, we recursively find a favorable cut $l$, add the $m$-span and $M$-region-span to $E$ and proceed with the two subwindows, into which $l$ splits the current window. This procedure terminates, because the proof of Lemma~\ref{lem:wfav-cut} yields that the cut can always be chosen to be at one of finitely many candidate coordinates since we assume that all vertices of the tour lie on a grid. As for the additional length induced by the $m$-spans and $M$-region-spans, we know that it can be bounded by the length of the respective dark portions of the cuts in question. Since \cite{mitchell-ptas} omits some of the details, we will give them here. \begin{proof}[Theorem~\ref{thm:charge}] The charging scheme works as follows: Every edge and the boundary (in {Mitchell}'s version, diameter) of every region is split up into finitely many pieces, to each of which we assign a ``charge'' that specifies which multiple of the length of that segment was added to $E$ as part of $m$-spans and $M$-region-spans. 
If we can establish that the charge for every edge segment is at most $\frac{\sqrt{2}}{m}$ times its length, the charge for every region boundary is at most $\frac{\sqrt{2}}{M}$ times its length, and the $m$-span and $M$-region-span never get charged during the recursive process, we obtain the statement of Theorem~\ref{thm:charge}. Let $l$ be a favorable cut. The charging scheme for the edge set is described in \cite{mitchell-tsp}: For each $m$-dark portion of $l$, the $2m$ inner edge segments (the $m$ segments closest to the cut on each side) are each charged with $\frac{1}{2m}$ times the length of that portion. In the recursive procedure, each segment $e$ can be charged no more than once from each of the four sides of its axis-parallel bounding box, since in order for it to be charged, there have to be at least $m$ edges to the corresponding side of it, but there are fewer than $m$ edges between $e$ and any cut that charges it. Therefore, after placing a cut and charging $e$ from one side, there will be fewer than $m$ edges to the respective side of $e$ in the new subwindow, preventing it from being charged from that direction again. Thus, each side of the axis-parallel bounding box of the segment is charged at most $\frac{1}{2m}$ times its length, and since the perimeter of the bounding box is at most $2 \sqrt{2}$ times the length of the edge segment $e$, the total charge to $e$ is at most $\frac{\sqrt{2}}{m}$ times its length. The $m$-span and $M$-region-span never get charged, because after they are inserted, they are not in the interior of any of the windows which are considered afterwards. With Definition~\ref{def:region-span} of the $M$-region-span, it is possible to replace the regions by their boundaries (which form a polygonal edge set of total length $\Lambda(\mathcal{R}_{W_0})$) and to treat them the same way as the edge set $E$ (in particular, the $M$-region-span and $M$-region-dark parts become $M$-span and $M$-dark).
\end{proof} \begin{figure}[hbt] \centering \includegraphics[width=6cm]{charging-problem} \caption{Charge is proportional to perimeter} \label{pic:charging-problems} \end{figure} For {Mitchell}'s original definition (Definition~\ref{def:m-region-span}) of the $M$-region-span, a scenario as in Figure~\ref{pic:charging-problems} can become a problem. Every black line pictured is a favorable cut, every red line segment is $1$-region-dark on the respective cut (even if the window in question has been cut by the black line directly below and above already). The total length of the red line segments is, however, proportional to the perimeter, not the diameter, of the blue and green regions. Within one window, {Mitchell}'s statement holds; the problem with his definition is its lack of a monotone additive behavior: When cutting a window $W$ into two parts, the sum of the diameters of all relevant regions in $W$ might be less than the sums of the diameters of the relevant regions for each subproblem combined, and while no part of the diameter of a region is charged more than $\frac{\sqrt{2}}{M}$ times in each subproblem, the combined charge might still be greater than $\frac{\sqrt{2}}{M}$. \subsection{Grids and guillotines} \label{sec:grid} In order for the dynamic programming algorithm to work, the number of possible endpoints for an edge has to be restricted (for example, to a grid). In \cite{mitchell-ptas}, an optimal solution will thus first be moved to a fine grid through slight perturbation, and subsequently transformed into an $(m, M)$-guillotine subdivision. Mitchell claims that there is always a favorable cut that has grid coordinates, arguing that in the charging lemma (Lemma~\ref{lem:wfav-cut}), the functions considered are piecewise linear between grid points, and that therefore the maximum of such a function must be attained at a grid point. The proof fails to take into account that the function might be discontinuous (and not even semi-continuous) at grid points.
Even if this were true, it is not in general true in the Euclidean case (unlike the rectilinear case) that the $m$-span ends at a grid coordinate on the cut (for example, it could instead end at an interior point of an edge). A (slightly technical) solution to this uses a weaker version of this claim, i. e. that a favorable cut can always be found at a grid coordinate or at the midpoint between two consecutive grid points, which follows from the simple observation that when integrating affine functions over an interval, the sign of the integral is the same as the sign of the function value at the midpoint of the interval. This can be used in the proof of Lemma~\ref{lem:fav-cut} as follows: The existence of a favorable cut is shown via changing the order of integration -- then the integral of the length of the dark portions along the $x$-axis is the same as the integral of the length of the spans along the $y$-axis, and vice versa. Therefore, there is one axis such that there is more dark than spanned area in that direction, i. e. the total area of all dark points with respect to some horizontal (resp. vertical) cut is greater than the area of all points that are contained in the span of some horizontal (resp. vertical) cut. Thus, there has to be a single cut with that property as well: a favorable cut. If all previous edges and regions are restricted to the grid, the aforementioned observations imply that in particular, there is a favorable cut at a grid coordinate or in the center between two consecutive ones. It can then be shown that a non-empty $m$-span in an optimal solution always has a certain minimum length, or it contains a grid point. This observation allows us to slightly modify the edge set, so that a cut becomes $m$-good, while all edges still have grid endpoints. The $M$-region-span can be dealt with in a similar way. Moving it to the grid requires a slight change in the definition of the $(m, M)$-guillotine property, which will preserve its algorithmic properties.
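The observation about affine functions (the integral over an interval equals the interval length times the value at the midpoint, so both have the same sign) can be verified with exact rational arithmetic. The following sketch is illustrative rather than part of the argument; it checks exactness on a range of integer cells:

```python
from fractions import Fraction

def affine_integral(c, d, a, b):
    """Exact value of the integral of f(t) = c*t + d over [a, b]."""
    return Fraction(c) * (b * b - a * a) / 2 + Fraction(d) * (b - a)

def midpoint_value_times_width(c, d, a, b):
    """(b - a) * f((a + b)/2); exact for affine f, hence of the same sign
    as the integral over [a, b]."""
    mid = Fraction(a + b, 2)
    return (b - a) * (Fraction(c) * mid + Fraction(d))

# The midpoint rule is exact for affine integrands: check it on integer
# cells [a, a+1] for a range of slopes c and offsets d.
for c in range(-3, 4):
    for d in range(-3, 4):
        for a in range(-2, 3):
            assert affine_integral(c, d, a, a + 1) == \
                midpoint_value_times_width(c, d, a, a + 1)
```

In particular, if a piecewise linear function has positive integral over some grid cell, its value at the midpoint of that cell (a half-grid coordinate) is positive as well.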
However, there are further problems with the $M$-region-span, which will be explained in the following section. \subsection{Connectivity} \label{sec:cnn} To transform an optimal tour into an $(m, M)$-guillotine subdivision, the $m$-span and $M$-region-span of a favorable cut are inserted into the edge set through a recursive procedure. The $m$-span is always connected to the original edge set $E$, since its endpoints are intersection points of the cut with $E$. This is not true for the $M$-region-span, and in fact, it can be ``far away'' from $E$, as seen in Figure~\ref{pic:region-span}. The optimal tour (green) and the $1$-region-span $\Sigma_1(l)$ of the favorable cut $l$ (which is favorable, because the two grey squares at the top make a portion of $l$ with the same length as $\Sigma_1(l)$ $1$-region-dark) are far away from each other. Connecting it to the tour does not preserve the approximation ratio of $1+\varepsilon$. \begin{figure}[hbt] \centering \includegraphics[width=3cm]{mregionspan} \caption{Favorable cut and disconnected $1$-region-span} \label{pic:region-span} \end{figure} Note that, in the dynamic programming algorithm, we cannot afford to decide whether to connect the $M$-region-span of a cut to the edge set: If we choose not to connect it, we have to decide which subproblem is responsible for each region on the $M$-region-span, but this is exactly what was to be avoided by introducing it in the first place. On the other hand, if we do connect the $M$-region-span to the edge set, both its length and its possible interference with other subproblems have to be taken care of.
With the second definition of $\alpha$-fatness (Definition~\ref{def:afat-elb}), which implies the lower bounds mentioned in Section~\ref{sec:ext}, the length of a segment connecting the $M$-region-span to $E$ could be charged off to the length of the $M$-region-span itself, whereas Mitchell's definition of $\alpha$-fatness does not even guarantee this (because in the proof of the lower bound, we relied on the regions being contained in the bounding box of the tour). It is not clear whether the connection of tour and region-span intersects another subproblem, possibly violating the $(m, M)$-guillotine property there, therefore even for $\alpha$-fat$_E$ regions, this problem remains open. \subsection{External regions} \label{sec:ext} In addition to dealing with the internal regions $\mathcal{R}_{W_0}$, the dynamic programming algorithm has to determine how to visit external regions. {Mitchell}'s strategy is to enumerate all possible options, restricting the complexity with the following argument: Given a situation as in Figure~\ref{pic:sub}, with some external regions protruding from the outside into a subproblem $W$, we know that since there are only $\mathcal{O}(m)$ edges on each side of $W$, they can be split into $\mathcal{O}(m)$ intervals of regions, such that along the corresponding side of $W$, the regions are consecutive with no edges passing through $\partial W$ between them (e. g. the red, green and blue region in Figure~\ref{pic:sub}). It seems clear now that, in order for the green region to be visited by an edge outside of $W$, either the red or the blue region would have to be crossed as well. If this were true, it could be deduced that the set of regions in one of the intervals in question that are not visited outside $W$, and that thus $W$ is responsible for, is a connected subinterval, leading to $\mathcal{O}(n^2)$ possibilities for each interval and $\mathcal{O}(n)^{\mathcal{O}(m)}$ possibilities overall for each window $W$. 
This argument fails if the green region has a disconnected intersection with $W_0$. An example is given on the left in Figure~\ref{pic:ext}: An $(m, M)$-guillotine tour can visit any subset of the regions outside of $W$, thus there is no polynomial upper bound on the number of possibilities anymore. Mitchell's argument holds for convex regions, but as seen on the left in Figure~\ref{pic:ext}, in general it does not apply to disjoint, connected, $\alpha$-fat regions. The number of regions such that their intersection with $W_0$ (or even a slightly extended rectangle) is disconnected could be $\Theta(k)$, for example if the construction in Figure~\ref{pic:ext} is extended beyond the yellow region, which is possible, because the regions here actually become ``more fat'' as their size increases, i. e. $\alpha$ decreases and eventually converges to 2. \begin{figure}[hbt] \centering \includegraphics[width=3cm]{extern1} \hspace*{0.3cm} \includegraphics[width=5cm]{extern2} \caption{External regions} \label{pic:ext} \end{figure} It can be shown that in order for this to be a problem, the size of the regions has to increase exponentially due to the logarithmic lower bound in the packing lemma, and the fact that the boundary of $W_0$ is a tour of the external regions. One solution is therefore to restrict the diameter of the regions; many of the approximation algorithms for similar problems do in fact require the regions to have comparable diameter (\cite{elb2}, \cite{dumi}). Alternatively, requiring convexity solves the issue, but is quite a strong condition. Another option is using the notion of $\alpha$-fatness$_{E}$ from Definition~\ref{def:afat-elb} as established by K. Elbassioni, A. Fishkin, N. Mustafa and R. Sitters \cite{elb}.
This definition implies that a path connecting $k$ regions has a length of at least $(\frac{k}{\alpha}-1) \frac{\pi \delta}{4}$, where $\delta$ is the diameter of the smallest region \cite{elb}; summing a variant of this over diameter classes yields the same lower bound as for {Mitchell}'s definition of $\alpha$-fatness, up to a constant factor. Unlike {Mitchell}'s version, this definition however estimates the length of a tour in terms of the minimum diameter of the regions involved and can therefore be used to give a constant upper bound on the number of large external regions (see Figure~\ref{pic:ext}, on the right): since $\partial W_0$ is a path connecting them, the number of external regions with diameter $\geq \, \mathrm{diam}(W_0)$ is at most $\alpha(\frac{8\sqrt{2}}{\pi}+1)$. In both cases, the small external regions can be added to $\mathcal{R}_{W_0}$, since $\partial W_0$ is a tour of them, and an optimal tour has length at least $\frac{1}{\sqrt{2}}$ times the length of $\partial W_0$; thus these regions provide a lower bound on the length of $\partial W_0$, which in turn provides a lower bound on $L^*$. This way, the statement of Lemma~\ref{lem:afat-mit} (for which the fact that $W_0$ is the bounding box of the tour was exploited in the proof) remains intact with modified constants. For the large regions, we can afford to explicitly enumerate which subwindow should visit them. \subsection{Result} Overall, we have the following result: \begin{theorem} \label{thm:main} Let $\varepsilon > 0$ be fixed. Given a set of $k$ disjoint, connected, polygonal regions that are $\alpha$-fat$_E$, convex or $\alpha$-fat and of bounded diameter, with a total of $n$ vertices in the plane, we can find a connected, $(m, M)$-guillotine, Eulerian grid-rounded graph visiting all regions in time polynomial in the size of the grid, $n$, $k$, $2^M$ and $(nm)^m$. Among all such graphs, it will be shortest possible up to a factor of $1 + \varepsilon$.
\end{theorem} Here, grid-rounded means that all edge endpoints are on a grid of polynomial size, and that there is only a polynomial number of possible positions for every cut. Mitchell claims that for $M = \mathcal{O}(\frac{1}{\varepsilon} \log \frac{n}{\varepsilon})$ and $m = \mathcal{O}(\frac{1}{\varepsilon})$ and $\alpha$-fat regions, a connected $(m,M)$-guillotine graph is a $(1+\varepsilon)$-approximation of a tour; his proofs only apply to not necessarily connected graphs and regions with bounded perimeter-to-diameter ratio. In general, because of the connectivity problem in Section~\ref{sec:cnn} and some technical difficulties choosing an appropriate grid, it is not clear whether there is a grid of polynomial size, such that a graph with the properties of the theorem is a $(1+\varepsilon)$-approximation of an optimal TSPN tour. For unit disks and with a slightly modified definition of the guillotine property, there are $m$ and $M$ such that the guillotine subdivision is only by a factor of $(1+ \mathcal{O}(\varepsilon))$ longer than a tour; this will be shown in Theorem~\ref{thm:grid}. \section{Unit Disks} The criticism put forward in Section~\ref{sec:cnn} extends to a joint paper of {{Mitchell}} and Dumitrescu \cite{dumi}. Despite being published earlier than the PTAS candidate, the approach chosen there actually takes into account that the $M$-region-span (or $m$-disk-span in the notation of the paper) has to be connected to the edge set, and this is done at a sufficiently low cost. However, no proof is given that the edges added during this process preserve the $(m, M)$-guillotine property. In particular, even if a subproblem contains disks that do not intersect its boundary, the $M$-region-span might not visit one of them; if it always did, then we could add the connecting edge and guarantee that it remains within the same subproblem. With some additional effort, this problem can be avoided, as we will show now. 
\subsection{Preliminaries} Definition~\ref{def:afat-elb} yields a useful lower bound: \begin{lemma}[\cite{elb}] \label{lem:afat} A shortest path connecting $k$ disjoint, $\alpha$-fat$_E$ regions of diameter $\geq \delta$ has length at least $(\frac{k}{\alpha}-1) \frac{\pi \delta}{4}$. \end{lemma} All results apply not only to unit disks (for which $\alpha = 4$), but to disk-like regions: \begin{definition} A set of regions is \textbf{disk-like}, if all regions are disjoint and connected, and have comparable size (their diameters range between $d_1$ and $d_2$, which are constant), are $\alpha$-fat$_E$ or $\alpha$-fat for constant $\alpha$, and their perimeter-to-diameter ratio is bounded by a constant $r$. \end{definition} \subsection{Charging Scheme} \label{sec:charge} Let $\mathcal{D}$ denote the input set of $k$ disjoint unit disks. Throughout the rest of this paper, we will use a slightly modified version of the $(m, M)$-guillotine property: \begin{definition}[{\cite{mitchell-ptas}}] An edge set $E$ of a polygonal planar embedded graph satisfies the \textbf{$\boldsymbol{(m, M)}$-guillotine property} with respect to a window $W$ and regions $\mathcal{R}_{W_0}$, if one of the following conditions holds: \begin{enumerate} \item There is no edge in $E$ with its interior (i. e. the edge without its endpoints) completely contained in the interior of $W$. \item There is a cut $l$ of $W$ that is $m$-good with respect to $W$ and $E$ and $M$-region-good with respect to $W$, $\mathcal{R}_{W_0}$ and $E$, such that $l$ splits $W$ into two windows $W'$ and $W''$, for which $E$ recursively satisfies the $(m, M)$-guillotine property with respect to $W'$ resp. $W''$ and $\mathcal{R}_{W_0}$. \end{enumerate} \end{definition} The first case differs from Mitchell's definition, which only requires that no entire edge lies completely in the interior of $W$. 
However, with that definition, adding the $m$-span to the edge set does not reduce the complexity of the subproblem (possibilities for edge configurations on the boundary of the window), because then we would still have to know the positions of the edges that intersect the $m$-span. \begin{definition} Let $m$ and $M$ be fixed. Then a cut is called \textbf{$\boldsymbol{c}$-favorable}, if the sum of the lengths of its $m$-span and $M$-region-span is at most $c$ times the sum of the lengths of its $m$-dark and $M$-region-dark portions. A cut is \textbf{weakly $\boldsymbol{c}$-favorable}, if the sum of the lengths of its $m$-span and $M$-region-span is at most $c$ times the sum of the lengths of its $m$-dark and $M/2$-region-dark portions. \end{definition} For a cut $l$, define the following notation: \begin{itemize} \item $\sigma_m(l)$ -- length of the $m$-span \item $\Sigma_M(l)$ -- length of the $M$-region-span \item $\delta_m(l)$ -- length of the $m$-dark segments \item $\Delta_M(l)$ -- length of the $M$-region-dark segments \end{itemize} \begin{definition} Let $m$ and $M$ be fixed. A cut is called \textbf{weakly central}, if it is horizontal and has distance at least $\min\{2, h/4\}$ from the top and bottom edge of the window or it is vertical and has distance at least $\min\{2, w/4\}$ from the left and right edge of the window, where $w$ and $h$ denote the width and height of the window, respectively. It is \textbf{perfect}, if it is weakly $8$-favorable and at least one of the following holds: \begin{itemize} \item it is \textbf{central}, i. e. if it is a horizontal cut, it has distance at least 2 from its top and bottom edge; if it is a vertical cut, it has distance at least 2 from the left and right edge, \textit{or} \item it is weakly central and its $M$-region-span is empty. \end{itemize} \end{definition} Any central cut is weakly central, any $c$-favorable cut is weakly $c$-favorable. \begin{lemma} \label{lem:wfav-cut} Let $m$ and $M\geq 24$ be fixed. 
Every window has a perfect cut. \end{lemma} \begin{proof} Lemma 3.1 in \cite{mitchell-ptas} states that every window has a favorable cut. We can use the same techniques to show that almost every window has a central $2$-favorable cut: By definition, any point $p$ is in the $m$-span of a vertical cut, if and only if it is $m$-dark in a horizontal cut; analogously for regions. Therefore, $\int_x \sigma_m(l_x) + \Sigma_M(l_x) \,\mathrm{d} x = \int_y \delta_m(l_y) + \Delta_M(l_y) \,\mathrm{d} y$, where $l_x$ is the vertical cut with $x$-coordinate $x$ and $l_y$ is the horizontal cut with $y$-coordinate $y$; {without loss of generality} $\int_x \sigma_m(l_x) + \Sigma_M(l_x) \,\mathrm{d} x \leq \int_x \delta_m(l_x) + \Delta_M(l_x) \,\mathrm{d} x$. Then there is a $1$-favorable vertical cut, i. e. an $x$ such that $\sigma_m(l_x) + \Sigma_M(l_x) \leq \delta_m(l_x) + \Delta_M(l_x)$. Using Markov's inequality, we can also conclude that at least half of the vertical cuts are $2$-favorable. Therefore, if the window has width $\geq 8$, we can choose a central $2$-favorable cut. If the window has width $a < 8$, then the same argument yields that there is still a $2$-favorable cut $l_x$ with distance at least $a/4$ from the left and right edge of the window. If its $M$-region-span is empty, it is a perfect cut. Otherwise, there are at least $2M$ intersection points with disks, i. e. at least $M$ disks, each of which has to intersect this cut and thus have at least $a/4$ of its width within the window. Consider the interval $[x-a/8, x+a/8]$ of the window (and note that it has width $< 2$). Let $\mathcal{D}_x$ be the set of disks on $l_x$, then each of them must intersect $l_{x-a/8}$ or $l_{x+a/8}$ (or both). Therefore, at least $M/2$ disks of $\mathcal{D}_x$ intersect one of them, without loss of generality, $l_{x-a/8}$. This means that every cut with $x$-coordinate in $(x-a/8, x)$ has a total of $M$ intersection points with these disks. 
Let $y$ be the $y$-coordinate of the horizontal cut such that half of these intersection points are below and half of them above $l_y$. Then, the segment from $x-a/8$ to $x$ on $l_y$ is $M/2$-region-dark. On the other hand, since $a < 8$, any horizontal cut can intersect at most $k < 4(\frac{16}{\pi} + 1) < 25$ disks (because Lemma~\ref{lem:afat} implies $a \geq (\frac{k}{4} - 1) \frac{\pi}{2}$). Therefore, there are at most $48$ total intersection points with disks on the cut. As $M \geq 24$, $l_y$ has an empty $M$-region-span. For $l_y$, we now have $\sigma_m(l_y) \leq a$, $\Sigma_M(l_y) = 0$, $\delta_m(l_y) \geq 0$, and $\Delta_{M/2}(l_y) \geq a/8$. This implies that $l_y$ is weakly $8$-favorable. Finally, since there are at least $12$ disks above and below $l_y$ on $l_x$, Lemma~\ref{lem:afat} yields that $l_y$ has distance at least $(\frac{12}{4} - 1)\frac{2\pi}{4} = \pi > 2$ from the top and bottom of the window, so it is weakly central and therefore a perfect cut. \end{proof} \begin{theorem}[Connected guillotines] \label{thm:guillotines} Let $m \geq 32$ and $M \geq 24$ be fixed, and let $\mathcal{R}$ be a set of $k \geq 20$ unit disks. Let $L^*$ be the length of a shortest tour with edge set $E^*$ connecting them, and let $W_0$ be its axis-parallel bounding box. Then there exists a connected Eulerian $(m,M+24)$-guillotine subdivision with edge set $E' \cup E^*$ of length $(1 + \mathcal{O}(1/m) + \mathcal{O}(1/M)) L^*$ connecting all regions. \end{theorem} Using different constants for $m$ and $M$ is not necessary here. Note, however, that the algorithm has polynomial running time if $m \in \mathcal{O}(1)$ and $M \in \mathcal{O}(\log n)$, so choosing $M$ differently might help with different applications. \begin{proof}[Theorem~\ref{thm:guillotines}] We recursively partition $W_0$ using perfect cuts. These always exist by the previous lemma.
For each such cut, we add its $m$-span and $(M+24)$-region-span to the edge set (and not the $M$-region-span, because if a segment is added, we need a lower bound on the length of the $M$-region-span) as well as possibly an additional segment for connectivity (see Figure~\ref{pic:guill}). \begin{figure}[hbt] \centering \includegraphics[width=7cm]{perf-cuts} \caption{Adding the blue $1$-region-span $\Sigma_1(l)$ and the connecting segment (twice) makes $l$ $1$-region-good} \label{pic:guill} \end{figure} We refine Mitchell's charging scheme and assign to each point $x$ of an edge or the boundary of a disk a ``charge'' $c(x)$, such that the additional length incurred throughout the construction equals $\sum_{D \in \mathcal{D}} \int_{\partial D} c(x) \,\mathrm{d} x + \sum_{e \in E} \int_{e} c(x) \,\mathrm{d} x$. This charge will be piecewise constant. We will show that the charge on an edge segment is bounded by $C'/m$, and the charge of a disk boundary segment is bounded by $C/M$, for constants $C'$ and $C$. This proves the theorem, because $\sum_{D \in \mathcal{D}} \int_{\partial D} C/M \,\mathrm{d} x = C \cdot k 2\pi/M$, but $L^* \geq (k/4 - 1) \cdot 2\pi/4 \geq 2\pi$, hence $$C \cdot k 2\pi/M \leq \dfrac{8C \pi}{M} \left(\dfrac{4L^*}{\pi} + 1\right) = \dfrac{32C}{M} L^* + \dfrac{8C\pi}{M} \leq \dfrac{36C}{M} L^*$$ for the disks, and for the edges, $\sum_{e \in E^*} \int_{e} C'/m \,\mathrm{d} x = \frac{C'}{m} L^*$. From now on, a segment will refer to a disk boundary or edge segment. In the beginning, every segment has charge 0. Every charge that is applied to a segment gets added to its previous charge. We will distinguish direct and indirect charge, and show that each segment is directly charged at most 4 times, once from each axis-parallel direction. Indirect charge will be charge that is added to a segment in $E'$ that was constructed during the proof.
Since we cannot charge these segments, we pass their charge on: The new segments at some point were charged to a segment in $E^*$ or $\bigcup_{D \in \mathcal{D}} \partial D$, to which we add the new charge recursively. For example, in Figure~\ref{pic:guill}, the blue region-span cannot be charged by any cut, since it is on the boundary of a window. On the other hand, the connecting segment might be charged by a different cut during the construction. If the direct charge for inserting the blue edges was applied to the disks making (different) parts of the cut $1$-region-dark, and the connecting segment is later charged, we instead pass that charge on to the disks (each of them receives half the charge). Now, let $W$ be a window with perfect cut $l$. Then, we add its $m$-span and $(M+24)$-region-span to $E'$. Furthermore, if the $(M+24)$-region-span is non-empty, it is not necessarily connected to the tour. But in this case the cut is central. Therefore, no disk intersected by the $(M+24)$-region-span can intersect the boundary of the window ({without loss of generality} let $l$ be vertical: Then no disk can intersect the left or right boundary, because the cut is central, and the parts of the cut above and below the region-span intersect at least 12 disks each, therefore their length is at least 1). Connect the $(M+24)$-region-span to the closest point of the tour within the same window (which will have distance $\leq 2$, because all the disks are visited). Note that the $m$-span, by definition, is connected to the edge set, hence the connecting segments for the $(M+24)$-region-span are sufficient for connectivity. This makes the cut $m$-good and $(M+24)$-region-good and leaves the edge set connected, therefore recursive application will make the entire window $(m, M+24)$-guillotine. We have to show that this procedure terminates.
But this follows from the fact that all cuts are weakly central: Each recursion step reduces one coordinate of the window by at least 1 or $1/4$ of its width/height. We only add new edges in the interior of a subwindow when the $(M+24)$-region-span is non-empty; therefore, for small enough windows, we do not add edges to the interior of their subwindows. At that point, the minimum length, width and height of an edge inside of the window remain fixed, so at some point, all edges that lie completely in the interior of $W$ will be axis-parallel, and as soon as one coordinate gets small enough, all of them are parallel. But then we can cut between them, so each window only contains one such edge. And lastly, we can cut that edge in half. (There is probably a simpler argument.) There are separate arguments for the length charged to disks and edges. For each cut $l$, we have added a length of at most $\sigma_m(l) + \Sigma_M(l)$. We can charge it off as follows: There are $2m$ edge segments making a segment of length $\delta_m(l)$ $m$-dark. Charge each of them with $8 \cdot \frac{1}{2m}$. (More precisely, these are not necessarily the same edges or even connected, but there are edges of total length at least $2m \delta_m(l)$ within the same window such that their orthogonal projection onto $l$ intersects at most $m-1$ other edges in $E' \cup E^*$. We can charge all of those with $8 \cdot \frac{1}{2m}$.) Similarly, there are disk boundary segments of total length $M \cdot \Delta_{M/2}(l)$ making parts of the cut $M/2$-region-dark. We can charge each of them with $8/M$. In total, the charged length is $M \cdot \Delta_{M/2}(l) \cdot 8/M + 2m \cdot \delta_m(l) \cdot 8 \cdot \frac{1}{2m} = 8 (\Delta_{M/2}(l) + \delta_m(l)) \geq \sigma_m(l) + \Sigma_M(l)$, because the cut is weakly $8$-favorable. We also know that $\Sigma_{M+24}(l) + 2 \leq \Sigma_{M}(l)$, because the additional segments in $\Sigma_M(l)$ both visit 12 disks.
Therefore, $\sigma_m(l) + \Sigma_M(l) \geq \sigma_m(l) + \Sigma_{M + 24}(l) + 2$, which is an upper bound on the actual length of what we insert -- $m$-span, $(M+24)$-region-span, and possibly a connecting segment of length at most 2. So the charge indeed will be an upper bound on the additional length as described in the beginning of the proof. It remains to show that each segment is charged directly only a constant number of times, and that the total indirect charge is sufficiently small. For edge segments, there are at most $m-1$ other segments between a charged segment $e$ and the cut $l$. But the cut then becomes the boundary of the next subwindow. Therefore, this edge will not make a cut $l'$ between $e$ and $l$ $m$-dark, and will not be charged again from this direction. Since all cuts are axis-parallel, each edge is indeed only charged at most 4 times. For disks, the same argument works: Only the $M/2$ disks closest to a cut (and making it $M/2$-region-dark or even $M$-region-dark) in a given direction are charged, and each disk can only be among those and make the cut $M/2$-region-dark once for each direction. Finally, we have to take care of indirect charge. This applies to both disks and edges, because while our analysis shows that we can upper bound the additional length for the connecting segments by the $M$-region-span, the length of this span may be accounted for by $m$-dark and not by $M/2$-region-dark segments. But for each edge segment, the direct charge is at most $16/m$. The inserted segments of that total length might themselves be charged with $16/m$ again, adding a charge of $(16/m)^2$ to the original segment, and so on, yielding a geometric series. Therefore, the total charge is at most $\frac{16}{m-16} \leq 32/m$ for $m \geq 32$. For disks, the same analysis works: The direct charge is at most $8/M$, and since $m \geq 32$, the indirect charge is bounded by $8/M \cdot 32/m \leq 8/M$.
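The geometric-series bound can be checked numerically. The following sketch is an illustration under the charge bounds stated above, not part of the proof; it verifies that the series of passed-on charges sums to $16/(m-16)$ and that this is at most $32/m$ for every $m \geq 32$:

```python
def passed_on_charge(r, rounds=200):
    """Sum of the geometric series r + r^2 + ...: charge repeatedly passed on
    to newly inserted segments, each round shrinking by the factor r."""
    total, term = 0.0, r
    for _ in range(rounds):
        total += term
        term *= r
    return total

# Edge segments: direct charge at most 16/m per unit length; the closed form
# of the series is 16/(m - 16), which is at most 32/m exactly when m >= 32.
for m in range(32, 200):
    closed_form = 16 / (m - 16)
    assert abs(passed_on_charge(16 / m) - closed_form) < 1e-9
    assert closed_form <= 32 / m + 1e-12

# Disk boundaries: direct charge at most 8/M, indirect charge at most
# (8/M) * (32/m), which is at most 8/M for m >= 32.
M, m = 24, 32
assert (8 / M) * (32 / m) <= 8 / M
```

At $m = 32$ the two sides of the inequality coincide, so this is exactly the threshold at which the total charge stays within twice the direct charge.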
Finally, we duplicate each new edge segment to make the resulting graph Eulerian, increasing the additional length by a factor of 2. This concludes the proof. \end{proof} We did not use the fact that the regions are unit disks: It is sufficient to assume that they are disk-like and to modify the constants accordingly. \subsection{Grid} By computing a $(1+\varepsilon)$-approximation of a tour visiting the centers of the disks, we obtain a TSPN tour which is at most an additive $2k(1+\varepsilon)$ away from the optimum (for $k$ large, this is a constant-factor approximation algorithm and was analyzed in \cite{dumi}). If the tour has length at least $\frac{2k (1+\varepsilon)}{\varepsilon}$, this is a sufficiently good solution; otherwise, the disks are within a square of size $\lceil 3k/\varepsilon \rceil \times \lceil 3k/\varepsilon \rceil$, if $\varepsilon \leq 1/3$ and $k \geq 6$. Such a square can be found (or shown not to exist) in polynomial time. We then equip it with a regular rectilinear grid with edge length $\delta := (2 \lceil k/\varepsilon \rceil)^{-2}$. \begin{definition} \label{def:grid} An edge set is \textbf{grid-rounded} w. r. t. a grid $G$, if all edge endpoints are on the grid. A polygon is grid-rounded, if its boundary is grid-rounded. A set of regions is grid-rounded, if all regions are grid-rounded polygons. A coordinate (or axis-parallel line segment) is said to be a \textbf{half-grid coordinate} of $G$, if it is on the grid or in the middle between two consecutive grid points. \end{definition} At a cost of a factor $(1+\varepsilon)$, the instance can be grid-rounded, such that every disk center is on a grid point. The same can be done for an optimum solution, i. e. every edge should begin and end at a grid point. Such a solution is still feasible, if we replace each disk $D_i \in \mathcal{D}$ by the convex hull of the set of grid points $\Gamma_i$ it contains.
As this convex hull contains at least the diamond inscribed in the disk (a square of area 2), and it has rotational symmetry, it will still be $\alpha$-fat$_E$, for slightly larger $\alpha$. The previous theorem still holds for these regions (with modified constants). The regions $\textnormal{conv}(\Gamma_i)$ are polygonal. Therefore, in Lemma~\ref{lem:wfav-cut}, the functions $\delta_m, \Delta_M, \sigma_m$ and $\Sigma_M$ are piecewise linear, with discontinuities at grid points. This implies: \begin{lemma} \label{lem:grid-cut} Given grid-rounded disk-like regions and a window $W$ with half-grid coordinates, there is a perfect cut with half-grid coordinates. \end{lemma} \begin{proof} This follows from Lemma~\ref{lem:wfav-cut}, because the integrals involved can be replaced by ($\delta$ times) the sums of the function values at all half-grid points (that are not grid points). For piecewise linear functions, this sum is equal to the integral. \end{proof} If the $m$-span and $M$-region-span are inserted at half-grid cuts (with their endpoints not even at half-grid points), the solution does not remain on the grid. In particular, the connecting segment for the $M$-region-span could lead to discontinuities in $\sigma_m(l)$ and $\delta_m(l)$ at non-grid points, thus preventing the recursive application of this lemma. Therefore, it will be moved to the grid in the following. We can assume that every disk is visited by the endpoint of an edge that lies on the grid. This costs a factor of $(1+\varepsilon)$ and means that when restructuring the edge set, as long as the resulting graph remains Eulerian, connected and has the same (or more) edge endpoints, we can still extract a TSPN tour from it.
Therefore, the following lemma can be applied without taking the regions into account: \begin{lemma} \label{lem:span} Consider a regular rectilinear grid with edge length $\delta$ and a grid-rounded set $E$ of edges that is an optimum tour of the set of its edge endpoints, and a window $W$. Let $l$ be a cut in $W$, such that its $m$-span is non-empty and $l$ has distance at least $\delta$ from the boundary edges of the window that are parallel to $l$. If the $x$- or $y$-coordinate of $l$ (depending on its orientation) is a grid coordinate, and its $m$-span intersects at least 15 different edges in their interior (or has 16 intersection points with edges), then $\sigma_m(l) \geq \delta$. Furthermore, if $l$ is in the center between two consecutive grid coordinates, then $\sigma_m(l) \geq \delta$, if its $m$-span intersects at least 19 different edges (necessarily in their interior, because their endpoints can only be on the grid). \end{lemma} This is a grid-rounded version of Arora's patching lemma~\cite[Lemma 3]{arora}. The proof uses similar ideas and exploits the grid structure to show that in some cases, the patching construction actually decreases the tour length. \begin{proof} First, let $l$ be on the grid and without loss of generality vertical. If $l$ has 16 intersection points with edges, then either two of them are (different) grid points, and $\sigma_m(l) \geq \delta$, or $l$ intersects at least 15 different edges in their interior; therefore, it is sufficient to consider this case. If $\sigma_m(l) < \delta$, this configuration can never be optimal. This follows from the construction in Figure~\ref{pic:patching}: Expand the $m$-span to the nearest grid coordinates above and below, and then consider the box that has as left and right edge this expanded $m$-span translated by $\pm \delta$. This box will have height $\delta$ or $2\delta$ and width $2\delta$. 
If it has height $2\delta$, an additional length of $2 \delta$ is used to connect to the grid point in the center of the box. \begin{figure}[hbt] \center \includegraphics[width=4cm]{patching} \hspace*{0.3cm} \includegraphics[width=4cm]{patching2} \caption{Before and after applying the grid patching lemma to grid cuts} \label{pic:patching} \end{figure} For every edge intersecting the box, split it into different parts at the intersection points. The inner part (inside of the box) has length at least $\delta$, but it does not visit any new endpoints, and can therefore be removed. For the other parts, if we consider their second endpoint (not on the boundary of the box) fixed and choose their first endpoint among the points of the boundary of the box, they become shortest possible when connected in such a way that the endpoint is a vertex of the box or the segment is orthogonal to the boundary edge of the box. In both cases, the first endpoint is a grid point. Therefore, for these parts there is a grid point on the boundary of the box, such that the intersection point can be moved there without increasing the length of the edge set. This preserves connectivity and parity except possibly on the boundary of the box. Since it has perimeter $\leq 8 \delta$, edges of length $\leq 4 \delta$ can be used to correct parity. Overall, we have added edges of total length $8 \delta + 2 \delta + 4 \delta = 14 \delta$ and removed edges of length $\geq \delta$ for every edge intersecting the $m$-span in its interior. Hence, for an optimal tour, there can be at most 14 such edges. If $l$ is not on the grid, but at a half-grid coordinate, we can apply a similar argument and construction, see Figure~\ref{pic:patching2}. 
\begin{figure}[hbt] \center \includegraphics[width=4cm]{patching3} \hspace*{0.3cm} \includegraphics[width=4cm]{patching4} \caption{Before and after applying the grid patching lemma to half-grid cuts} \label{pic:patching2} \end{figure} The box has perimeter $\leq 6 \delta$ and for every edge intersecting it, the inner segment of length at least $\delta/2$ can be removed. There are no interior points to be visited, and correcting parity costs at most $3 \delta$. Therefore, there can be at most $\frac{3 \delta + 6 \delta}{\delta/2} = 18$ edges intersecting the $m$-span, if its length is less than $\delta$. \end{proof} For the $M$-region-span, no such lemma is needed, because if we insert it, we can also afford a connecting segment of length $2$. Therefore, it can be extended to the grid without increasing the length we used for the entire construction by more than a factor of $2$ (and actually, $1 + 2/\delta$) -- provided the cut is on the grid. Choosing only cuts on the grid is not sufficient, as the following example shows: Even without regions, there is no $1$-good cut with grid coordinates. \begin{figure}[hbt] \center \includegraphics[width=2cm]{grid-cut} \caption{No 1-good cut with grid coordinates} \label{pic:grid-cut} \end{figure} The recursive construction in Theorem~\ref{thm:guillotines} makes cuts $m$-good and $M$-region-good by inserting edges on the cut, which is not possible for a cut that is not at a grid coordinate. Since we cannot change the position of the regions, no modification of the edge set (preserving grid-roundedness) can make the cut $M$-region-good. Moving the cut to the grid is also not an option, since Figure~\ref{pic:grid-cut} shows that this is not always possible. Therefore, we cannot hope to find an $(m,M)$-guillotine subdivision with this construction. However, for algorithmic purposes, the main aim of the guillotine property was avoiding the enumeration of which subproblem is responsible for visiting which regions. 
What happens if we make them \emph{both} responsible for all regions in the $M$-region-span? First, note that ``smaller'' subproblems never rely on the fact that their containing windows are guillotine, because they do not yet know the respective cuts -- with the exception of the four cuts defining their boundaries. For these cuts, we can easily enumerate the possibilities ``visit all regions in the $M$-region-span'' and ``visit none of them''. Intuitively, it seems that this construction would significantly increase the length of the subdivision. But we know that those regions can be visited by a segment with at most the length of the $M$-region-span. More importantly, we know that they can be visited on each side of the cut by a segment such that their combined length is at most $2 \Delta_M(l)$, which we can afford by the charging scheme. Not both of these can necessarily be connected to $E$ within their containing window (but at least one), therefore we should add (or at least ``reserve'') an edge across the cut, i. e. only obtain an $(m+1, M+24)$-guillotine subdivision. To accommodate these changes, we redefine $M$-region-good and thus obtain a new $(m,M)$-guillotine property: \begin{definition} A cut $l$ is \textbf{$\boldsymbol{M}$-region-good} with respect to $W$, $\mathcal{R}_{W_0}$ and $E$, if there are no two regions $R, R' \in \mathcal{R}$ that intersect the $M$-region-span of $l$, but $E$ visits $R$ and $R'$ only on different sides of the cut. \end{definition} In other words, if $E$ does not visit $R$ on one side of the cut, it must visit all other regions that intersect the $M$-region-span on the other side of the cut. Here, ``side of the cut'' denotes a closed half-space; in particular, a cut that was $M$-region-good w. r. t. the previous definition remains $M$-region-good, since the $M$-region-span is in $E$ and thus $E$ visits all regions in the $M$-region-span on both sides of the cut. 
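Operationally, the redefined condition is a simple check. The sketch below uses a hypothetical encoding in which each region intersecting the $M$-region-span is mapped to the set of sides of the cut (\texttt{'L'}, \texttt{'R'}) on which $E$ visits it; a region visited on the cut itself lies in both closed half-spaces.

```python
def m_region_good(visited_sides):
    """Redefined M-region-good condition for a cut: no two regions that
    intersect the M-region-span are visited by E only on different
    sides of the cut.  `visited_sides` maps region -> subset of
    {'L', 'R'} (hypothetical encoding, for illustration only)."""
    one_sided = {frozenset(s) for s in visited_sides.values() if len(s) == 1}
    # The condition is violated exactly when some region is visited only on
    # the left and another only on the right; regions visited on both sides
    # never cause a violation.
    return len(one_sided) <= 1
```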
The following two lemmas show that this construction works -- for both edges and regions, we can replace the operations ``insert the span'' by one of the following in the charging scheme, and get the statement of the charging scheme (with modified constants) for grid-rounded subdivisions. \begin{lemma} \label{lem:m-good} Given a perfect half-grid cut $l$ in a window $W$ of width $\geq \delta$, and a connected Eulerian grid-rounded edge set $E$, there is a grid-rounded edge set $E'$ that differs from $E$ only at edges intersecting $W$, has the $(m', M')$-guillotine property outside of $W$ if $E$ does, is by at most an additive $\mathcal{O}(\sigma_m(l))$ longer than $E$, visits the same grid points as $E$ (or a superset of them), and is connected and Eulerian, such that $l$ is $(m+9)$-good w. r. t. $E'$ and $W$. \end{lemma} If the window has width $< \delta$, then the constructions in the proof might not be inside of the window. On the other hand, such a window is $(m,M)$-guillotine by definition, since it cannot contain (the entire interior of) grid edges in its interior. \begin{proof} Without loss of generality, let $l$ be a vertical cut. If $l$ is $(m+9)$-good, there is nothing to show. Otherwise, there are at least $19$ edges on the $m$-span, so it has length at least $\delta$ by Lemma~\ref{lem:span}, or $E$ can be made shorter by applying the construction there, thereby making the cut $(m+1)$-good. If $\sigma_m(l) \geq \delta$, there are two cases: If $l$ is not on the grid, as in Figure~\ref{pic:half-cut}, we insert an ``H'' shape, which has length $\leq 5 \delta + 2 \sigma_m(l) \in \mathcal{O}(\sigma_m(l))$. For all edges intersecting this H, the intersection point should be moved to a grid point without increasing the length or violating the guillotine property. 
\begin{figure}[hbt] \center \includegraphics[width=3.5cm]{half-cut} \hspace*{1cm} \includegraphics[width=3.5cm]{half-cut2} \caption{Construction for half-grid cuts} \label{pic:half-cut} \end{figure} To see that the length does not increase, let $p,q$ be the endpoints of an edge intersecting the H; then replacing the edge by segments from $p$ to the first intersection point $p_H$ and from the last intersection point $q_H$ to $q$ preserves connectivity (and parity can be corrected using edges of the H). Let $p$ be to the left of $l$; then $p_H$ is either on the left vertical edge or on the bar. In the latter case, moving the intersection point to the left endpoint of the bar only decreases the length of the segment. In the former case, $p_H$ is either a grid point or can be moved vertically on the H. In that case, either the edge from $p$ to $p_H$ is horizontal (so $p_H$ is a grid point because $p$ is), or there is a direction such that the angle at $p_H$ gets less acute when moving $p_H$, thereby making $(p, p_H)$ shorter. Whenever $(p, p_H)$ intersects a grid point, we subdivide it and continue with the segment containing $p_H$, thus ensuring that the edge set remains planar and $(m', M')$-guillotine, if $E$ is. The resulting graph is grid-rounded and $l$ is $m$-good, since the $m$-span only contains one point, and this point is part of the edge set. It might not be Eulerian, but the only points whose parity might have changed are on the H, hence duplicating some of its edges will make the edge set Eulerian again. \begin{figure}[hbt] \center \includegraphics[width=3.5cm]{span-bound} \hspace*{1cm} \includegraphics[width=3.5cm]{span-bound1} \caption{Construction for grid cuts} \label{pic:span-again} \end{figure} If the cut is at a grid coordinate, we increase the length of the $m$-span by at most $2 \delta$ as shown in Figure~\ref{pic:span-again} (so that it becomes grid-rounded) and then proceed analogously to the first case for all edges intersecting it. 
\end{proof} \begin{lemma} \label{lem:region-good} Given a perfect half-grid $(m+9)$-good cut $l$ in a window $W$, a connected Eulerian grid-rounded edge set $E$ and grid-rounded disk-like regions $\mathcal{R}$, there is a grid-rounded edge set $E'$ that differs from $E$ only in edges intersecting $W$, is $(m', M')$-guillotine outside $W$ if $E$ is, is by at most an additive $\mathcal{O}(\Sigma_M(l))$ longer than $E$, visits the same grid points as $E$ (or a superset of them), and is connected and Eulerian, such that $l$ is $(m+10)$-good and $(M+C)$-region-good w. r. t. $E'$, $\mathcal{R}$ and $W$. The constant $C$ here depends on the constants for the disk-like regions (Definition~\ref{def:grid}) and can be chosen as 24 for unit disks. \end{lemma} \begin{proof} Without loss of generality, let $l$ be vertical. If $l$ is at a grid coordinate, extend the $(M+C)$-region-span to the grid and move all intersection points with edges to the grid as in previous lemmas. Correct parity on the extended $(M+C)$-region-span. Otherwise, since the regions are polygonal, the set of regions in the $(M+C)$-region-span can be visited by a vertical segment of at most the same length either at the grid coordinate directly to the left or to the right of $l$. Insert this segment and possibly a connecting segment to the edge set (which might cross the cut) inside of $W$. For all edges intersecting the new vertical segment, proceed as before. For the connecting segment, note that it is not necessarily rectilinear. Therefore, if it intersects an edge, we can subdivide this edge and move the intersection point to a grid point. This costs at most $3\delta$, because there are 3 incident edges, and connects to the edge set -- hence it is sufficient to do this at most once. Since $\Sigma_M(l) \geq 2$ by choice of $C$, this is not too expensive, and can be done inside $W$. Again, correcting parity is only necessary on new segments. 
\end{proof} Applying these lemmas to the charging scheme yields that there is an $(m,M)$-guillotine grid-rounded subdivision that approximates a tour well, more precisely: \begin{theorem} \label{thm:grid} Let $\varepsilon > 0$. For every set $\mathcal{D}$ of $k \geq 8$ disjoint unit disks within a square of size $\lceil 3k/\varepsilon\rceil \times \lceil 3k/\varepsilon \rceil$, let $m = \max\set{\lceil \frac{1}{\varepsilon} \rceil, 8}$ and $M = \max \set{\lceil \frac{1}{\varepsilon} \log_2 (\frac{k}{\varepsilon}) \rceil, 32}$. Then, there is an edge set $E$ with the following properties: \begin{enumerate} \item It satisfies the (new) $(m+9,M+24)$-guillotine property. \item The endpoints of every edge are on a regular rectilinear grid with edge length $\delta = (2\lceil k/\varepsilon \rceil)^{-2}$. \item It visits at least one point from each of $\Gamma_1, \dots, \Gamma_k$, the grid points of the slightly perturbed, polygonal approximations of the disks. \item It is Eulerian and connected. \item The total length of all its segments is $(1+\mathcal{O}(\varepsilon))L^*$, where $L^*$ is the length of an optimum tour visiting $\mathcal{D}$ (or $\Gamma_1, \dots, \Gamma_k$, as both lengths only differ by a factor of at most $1 + \varepsilon$). \end{enumerate} \end{theorem} This theorem implies an approximation ratio of $(1+\mathcal{O}(\varepsilon))$ for Mitchell's dynamic programming algorithm for the TSPN with disjoint unit disks, if the grid and $m$ and $M$ are chosen as above, and for these parameters, such a subdivision can be found by Mitchell's algorithm (together with the refinements to preserve grid-roundedness) in polynomial time (Theorem~\ref{thm:main}). \begin{proof}[Proof of Theorem~\ref{thm:grid}] Using the two previous lemmas, one can construct $E'$ as follows: Starting with an optimal tour on the grid, recursively find a perfect half-grid cut, insert edges so that it becomes $(m+9)$-good and $(M+24)$-region-good, and continue with the new subwindows. 
The edge set remains a tour, and becomes $(m+9, M+24)$-guillotine. The increase in length can be bounded using the same charging scheme as in Theorem~\ref{thm:guillotines}. \end{proof} \section{Conclusion} The guillotine subdivision method of Mitchell \cite{mitchell-tsp, mitchell-ptas} can be used to derive a PTAS for the TSP with unit disk neighborhoods. All arguments carry over to disk-like regions, for which Mitchell's framework can be used to derive a PTAS as well. This includes geographic clustering as the special case when the regions are $\alpha$-fat (they could be $\alpha$-fat$_E$ instead). However, the approach of Bodlaender et al. \cite{grigoriev} based on curved dissection in Arora's PTAS for TSP \cite{arora} achieves a faster theoretical running time for disjoint connected regions with geographic clustering. Their algorithm, like Arora's, can be generalized to more than two dimensions. For $\alpha$-fat$_E$ regions, the best known results are the constant-factor approximation algorithm of \cite{elb} and the QPTAS of \cite{elb-qptas}. For $\alpha$-fat regions in Mitchell's sense, the existence of a PTAS remains open. The problem with external regions of Section~\ref{sec:ext} can be avoided by using $\alpha$-fat$_E$ or convex regions instead, the charging scheme (Section~\ref{sec:charge}) can be fixed by bounding the ratio of perimeter and diameter, the grid can be handled as in the unit disk case, and localization does not require any additional assumptions. However, even for those stronger conditions on the regions, it is unclear how to handle connectivity (Section~\ref{sec:cnn}) for neighborhoods of varying size. The length of the connecting segment can be bounded for $\alpha$-fat$_E$ regions as in the unit disk case, but it might still destroy the guillotine property of other windows. To our knowledge, no PTAS for any form of the TSP with neighborhoods of varying size exists. 
Mitchell's constant factor approximation algorithm for disjoint connected regions \cite{mitchell-cfa} relies on the PTAS for $\alpha$-fat regions, but only applies it to disjoint balls, which are $\alpha$-fat$_E$. Therefore, a constant factor approximation algorithm by Elbassioni et al. \cite{elb} can be used instead, so that the overall algorithm in \cite{mitchell-cfa} still works and yields a constant factor approximation for the TSP with general disjoint connected, and in particular $\alpha$-fat, regions.
\section*{Acknowledgments} This work is the result of fruitful interactions and discussions with the other project partners. We would like to thank Nicolas Buompane, Steve Butcher, Les Fairbrother, André Gueisbuhler, Sylvie Martinet and Rui Vinagre from the FIG, Benoit Cosandier, Jose Morato, Christophe Pittet, Pascal Rossier and Fabien Voumard from Longines, and Pascal Felber, Christopher Klahn, Rolf Klappert and Claudia Nash from the Université de Neuchâtel. This work was partly funded by Longines. A preliminary version of this work was presented at the 2017 MIT Sloan Sports Analytics Conference. \section{Conclusions and limitations} \label{sec:conclusion} We put the evaluation of international gymnastics judges on a strong mathematical footing using robust yet simple tools. This has led to a better assessment of current judges, and will improve judging in the future. It is clear that there are significant differences between the best and the worst judges; this in itself is not surprising, but we can now quantify this much more precisely than in the past. Our main contribution is a marking score that evaluates the accuracy of the marks given by judges. The marking score can be used across disciplines, apparatus and competitions. Its calculation is based on the intrinsic judging error variability estimated from prior data. Since athletes improve, and since Codes of Points are revised every four years, this intrinsic variability can and should be calibrated at the beginning of every Olympic cycle with data from the previous cycle. We calibrated our model using 2013--2016 data, and it should be recalibrated after the 2020 Tokyo Summer Olympics. The FIG can use the marking score to assign the best judges to the most important competitions. The marking score is also the central piece of our outlier detection technique highlighting evaluations far above or below what is expected from each judge. 
The marking score and outlier detection work in tandem: the more accurate a judge is in the long term, the harder it is for that judge to cheat without being caught, due to a low outlier detection threshold. The FIG classifies international gymnastics judges into four categories: Category 1, 2, 3 and 4. Only judges with a Category~1 brevet can be assigned to major international competitions. The classification is based on theoretical and practical examinations, with increasingly stringent thresholds for the higher categories. As an example, in men's artistic gymnastics \cite{Artistic:2017MenRules} the theoretical examination for the execution component consists in the evaluation of 30 routines, 5 per apparatus. Our statistical engine is much more precise than the FIG examinations because it tracks judges longitudinally in real conditions over thousands of evaluations. Our dataset is dominated by Category 1 judges, and even at this highest level it shows significant differences among judges. \subsection{Limitations of our approach: relative evaluations and control scores} \label{subsection:discussion} The first limitation of our approach is that judges are compared with each other and not based on their objective performance. An apparatus with only outstanding judges will trivially have half of them with a marking score below the median, and the same is true of an apparatus with only atrocious judges. From discussions with the FIG, no apparatus or discipline has the luxury of having only outstanding judges. We therefore proposed qualitative thresholds based on the fact that most judges are good, and a reward-based approach for the very best ones. The second limitation of our approach is its dependence on accurate control scores. Even though the Codes of Points are very objective in theory, in practice we must work with approximations of the true performance level, which remains unknown. 
This has implications for evaluating judges live during competitions and for training our model. During competitions, quick feedback is necessary, and we approximate the control score with the median judging mark. Relying on the median for a single performance or a small event such as an Olympic final can be misleading. A high marking score for a specific performance is not necessarily an indicator of a judging error but can also mean that the judge is accurate but out of consensus with the other inaccurate judges. The FIG typically relies on observers like outside panels and superior juries to obtain quick feedback during competitions. We do not report detailed results here, but our analysis shows that, as for reference judges, these observers are in the aggregate equal to or worse than regular panel judges, and giving them additional power is dangerous. Discrepancies between this outside panel and the regular panel should be viewed with circumspection. The best the FIG can do in this circumstance is to add these outside marks to the regular panel to increase its robustness until a more accurate control score is available post-competition. The Technical Committee (TC) of each discipline calculates control scores post-competition using video reviews. Each TC uses a different number of members, ranging from two to seven, to evaluate each performance. Furthermore, each TC uses a different aggregation technique: sometimes members verbally agree on a score and other times they take the average. Even with video review, the FIG cannot guarantee the accuracy and unbiasedness of the TC members: some of them might be friends with judges they are evaluating and know what marks they initially gave. We therefore suggested clear guidelines for the calculation of the control scores post-competition to make them as robust as possible in the future. This is paramount to guarantee the accuracy of our approach on a per routine and per competition basis. 
When we trained our model using 2013--2016 data, the FIG did not have control scores for every performance, and could not tell us under what conditions the available control scores had been derived. For this reason, we trained our model using the median of all the marks at our disposal. Considering the size of our dataset, this provides an excellent approximation of the intrinsic judging error variability. However, as for live evaluations during competitions, retrospective judge evaluations during the 2013--2016 Olympic cycle must be interpreted cautiously. While a longitudinal evaluation provides a very accurate view of the judges' performance, a bad evaluation for a specific event might indicate that the judge was accurate but out of consensus with other inaccurate judges. More precise control scores obtained by video review must once again be provided to settle the matter. \section{True performance quality and control scores in gymnastics} \label{sec:controlscore} The execution evaluation of a gymnastics routine is based on deductions precisely defined in the Code of Points of each apparatus\footnote{The 2017--2020 Codes of Points, their appendices and other documents related to rules for all the gymnastics disciplines are publicly available at https://www.gymnastics.sport/site/rules/rules.php. Competitions in our dataset were ruled by the 2013--2016 Codes of Points.}. The score of each judge can thus be compared to the theoretical \emph{true} performance of the gymnast. In practice the true performance level is unknown, and the FIG typically derives \emph{control scores} with outside judging panels and video reviews post-competition. Unfortunately, the FIG does not provide accurate control scores for every performance: the number of control scores and how they are obtained depends on the discipline and competition. 
Besides, even when a control score is available, the Codes of Points might be ambiguous or the quality of a performance element may land between two discrete values. This still results in an approximation of the true performance, albeit a very good one. Control scores derived post-competition can also be biased, for instance if people deriving them know who the panel judges are, and what marks they initially gave. For all these reasons, in our analysis, we train our model using the median judging mark of each performance as the control score. Whenever marks by reference judges, superior juries and post-competition reviews are available, we include them with execution panel judges and take the median mark over this enlarged panel, thereby increasing the accuracy of our proxy of the true performance quality. We discuss the implications of training our data with the median, and control scores in general, in Section~\ref{sec:conclusion}. \section{Data and judging in gymnastics} \label{sec:data} \begin{table*} \centering \begin{tabular}{lccc} \toprule &Typical panel & Number of & Number \\ Discipline & composition & performances & of marks \\ \midrule Acrobatic gymnastics & 4 E + 2 R & 756 & 4'870\\ Aerobic gymnastics & 4 E + 2 R & 938 & 6'072 \\ Artistic gymnastics & 5 E + 2 R & 11'940 & 78'696 \\ Rhythmic gymnastics & 5 E + 2 R & 2'841 & 19'052\\ Trampoline & 5 E & 1'986 & 9'654 \\ \bottomrule \end{tabular} \caption{Standard composition of the execution panel, number of performances and number of marks per discipline. \mbox{E = Execution judges;} R = Reference judges.} \label{tab:noJ} \end{table*} Gymnasts at the international level are evaluated by panels of judges for the difficulty, execution, and artistry components of their performances. The marks given by the judges are aggregated to generate the final scores and rankings of the gymnasts. The number of judges for each component and the aggregation method are specific to each discipline. 
In this article, we analyze the execution component of all the gymnastics disciplines: artistic gymnastics, acrobatic gymnastics, aerobic gymnastics, rhythmic gymnastics, and trampoline. We also evaluate artistry judges in acrobatic and aerobic gymnastics, but exclude difficulty judges from our analysis. Our dataset encompasses 21 international and continental competitions held during the 2013--2016 Olympic cycle culminating with the 2016 Rio Olympic Games. The execution of a gymnastics routine is evaluated by a panel of judges. Table~\ref{tab:noJ} summarizes the composition of the typical execution panel for each discipline\footnote{The execution panels do not always follow this typical composition: the qualifying phases in artistic and rhythmic gymnastics may include four execution judges instead of five, World Cup events and continental championships do not always feature reference judges, and aerobic and acrobatic gymnastics competitions can have larger execution panels.}. With the exception of trampoline, these panels include execution and reference judges. Execution and reference judges have different power and are selected differently, but they all judge the execution of the routines under the same conditions and using the same criteria. After the completion of a routine, each execution panel judge evaluates the performance by giving it a score between 0 and 10. Table~\ref{tab:noJ} includes the number of performances and judging marks per discipline in our dataset. The number of performances in an event is not always equal to the number of gymnasts. For instance, gymnasts who wish to qualify for the vault apparatus finals jump twice, each jump counting as a distinct performance in our analysis. The number of judging marks depends on the number of performances and the size of the judging panels. 
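The aggregation we use as a control score in our analysis (Section~\ref{sec:controlscore}) is simply the median over the enlarged panel; a minimal sketch, assuming marks on the 0--10 execution scale and panel compositions as in Table~\ref{tab:noJ}:

```python
from statistics import median

def control_score(execution_marks, reference_marks=(), extra_marks=()):
    """Median mark over the enlarged panel: execution judges plus, when
    available, reference judges, superior juries and post-competition
    reviews (sketch; `extra_marks` collects the latter two)."""
    panel = list(execution_marks) + list(reference_marks) + list(extra_marks)
    if not panel:
        raise ValueError("empty judging panel")
    return median(panel)
```

For a typical artistic gymnastics panel of five execution and two reference judges this is the median of seven marks, which is considerably more robust to a single aberrant mark than the mean.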
\section{Introduction} Gymnastic judges and judges from similar sports are susceptible to well-studied biases\footnote{Consult \textcite{Lan1970} for an initial comprehensive survey until 1970, and \textcite{Bar-Eli:2011} for a recent survey.}. \textcite{Ansorge:1988} detected a \emph{national bias} of artistic gymnastics judges at the 1984 Olympic Games: judges tend to give better marks to athletes from their home country while penalizing close competitors from other countries. National bias was subsequently detected in rhythmic gymnastics at the 2000 Olympic Games \cite{Popovic:2000}, and in numerous other sports such as figure skating \cite{Campbell:1996, Zitzewitz:2006}, Muay Thai boxing \cite{Myers:2006}, ski jumping \cite{Zitzewitz:2006}, diving \cite{Emerson:2009} and dressage \cite{Sandberg:2018}. \textcite{Plessner:1999} observed a \emph{serial position bias} in gymnastics experiments: a competitor performing and evaluated last gets better marks than when performing first. \textcite{Boenetal:2008} found a \emph{conformity bias} in gymnastics: open feedback causes judges to adapt their marks to those of the other judges of the panel. \textcite{Damisch:2006} found a \emph{sequential bias} in artistic gymnastics at the 2004 Olympic Games: the evaluation of a gymnast is likely more generous than expected if the preceding gymnast performed well. \textcite{Plessner:2005} showed in an experiment that still rings judges can make systematic errors based on their viewpoint. Biases observed in other sports might also occur in gymnastics as well. \textcite{Findlay:2004} found a \emph{reputation bias} in figure skating: judges overestimate the performance of athletes with a good reputation. \textcite{Price:2010} quantified the \emph{racial bias} of NBA officials against players of the opposite race, which was large enough to affect the outcome of basketball games. 
Interestingly, the racial bias of NBA officials subsequently disappeared, most probably due to the public awareness of the bias from the first study \cite{PPW2013}. The aforementioned biases are often unconscious and cannot always be entirely eliminated in practice. However, rule changes and monitoring from the Fédération Internationale de Gymnastique (FIG) as well as increased scrutiny induced by the media exposure of major gymnastics competitions make these biases reasonably small and tempered by mark aggregation. In fact, judging is much more about skill and training than bias: it is difficult to evaluate every single aspect of the complex movements that are part of a gymnastics routine, and unsurprisingly nearly all international judges are former gymnasts. This challenge has been known since at least the 1930s \cite{Zwarg1935}, and there is a large number of studies on the ability of judges to detect execution mistakes in gymnastic routines \cite{Ste-Marie:1999,Ste-Marie:2000,Pizzera:2012,Flessas:2015,Pizzera:2018}\footnote{Consult \textcite{Lan1970} for an initial comprehensive survey until 1970, and \textcite{Bar-Eli:2011} for a recent survey.}. In a nutshell, novice judges consult their scoring sheet much more often than experienced international judges, thus missing execution errors. Furthermore, international judges have superior perceptual anticipation, are better at detecting errors in their peripheral vision and, when they are former gymnasts, leverage their own sensorimotor experiences. Even among well-trained judges at the international level, there are significant differences: some judges are simply better than others. For this reason, the FIG has developed and used the Judge Evaluation Program (JEP) to assess the performance of judges during and after international competitions. The work on JEP was started in 2006 and the tool has grown iteratively since then. 
Despite its usefulness, JEP was partly designed with unsound and inaccurate mathematical tools, and was not always evaluating what it ought to evaluate. \subsection{Our contributions} In this article, we design and describe a toolbox to assess, as objectively as possible, the accuracy of international gymnastics judges using simple yet rigorous tools. This toolbox is now the core statistical engine of the new iteration of JEP\footnote{The new iteration of JEP was developed in collaboration with the FIG and the Longines watchmaker. It is a full software stack that handles all the interactions between the databases, our statistical engine, and a user-friendly front-end to generate statistics, recommendations and judging reports.} providing feedback to judges, executive committees and national federations. It is used to reward the best judges by selecting them for the most important competitions such as the Olympic Games. It finds judges performing below expectations so that corrective measures can be undertaken. It provides hints about inconsistencies and confusing items in the Codes of Points detailing how to evaluate each apparatus, as well as weaknesses in training and accreditation processes. In uncommon but important circumstances, it can uncover biased and cheating judges. The main tool we develop is a \emph{marking score} evaluating the accuracy of the marks given by a judge. We design the marking score such that it is unbiased with respect to the apparatus/discipline under evaluation and with respect to the skill level of the gymnasts. In other words, the main difficulty we overcome is as follows: a parallel bars judge giving 5.3 to a gymnast deserving 5.0 must be evaluated more generously than a vault judge giving 9.9 to a gymnast deserving 9.6, but how much more? To quantify this, we model the behavior of judges as heteroscedastic random variables using data from international and continental gymnastics competitions held during the 2013--2016 Olympic cycle.
The standard deviation of these random variables, describing the \emph{intrinsic judging error variability} of each discipline, decreases as the performance of the gymnasts improves, which allows us to quantify precisely how judges compare to their peers. To the best of our knowledge, this dependence between judging variability and performance quality has never been properly studied in any setting (sport or other). Besides allowing us to distinguish between accurate and erratic judges, we also use the marking score as the basic tool to detect outlier evaluations. The more accurate a judge is, the lower his/her outlier detection threshold. We then study \emph{ranking scores} quantifying to what extent judges rank gymnasts in the correct order. We analyzed different metrics to compare rankings, such as the generalized version of Kendall's $\tau$ distance \cite{Kumar:2010}. Depending on how these ranking scores are parametrized, they are either unfair by penalizing unlucky judges who blink at the wrong time, or correlated with our marking score and thus unnecessary. Since no approach was satisfactory, the FIG no longer uses ranks to monitor its judges\footnote{The previous iteration of \text{JEP}\xspace used a rudimentary ranking score.}. We made other interesting observations that led to recommendations and changes at the FIG during the course of this work. We show that so-called reference judges, hand-picked by the FIG and imparted with more power than regular panel judges, are not better than these regular panel judges in the aggregate. We thus recommended that the FIG stop granting more power to reference judges. We also show that women judges are significantly more accurate than men judges in artistic gymnastics and in trampoline, which has training and evaluation implications. This is the first of a series of three articles on sports judging.
In the second article~\cite{HM2018:nationalbias}, we refine national bias studies in gymnastics using the heteroscedastic behavior of the judging error of gymnastics judges. In the third article~\cite{HM2018:heteroscedasticity}, we show that this heteroscedastic judging error appears with a similar shape in other sports where panels of judges evaluate athletes objectively within a finite marking range. The remainder of this article is organized as follows. We present our dataset and describe the gymnastic judging system in Section~\ref{sec:data}. We then discuss true performance quality and control scores in gymnastics in Section~\ref{sec:controlscore}. We derive the marking score in Section~\ref{sec:mark}. In Section~\ref{sec:outlier}, we use the marking score to detect outliers. Section~\ref{sec:rank} discusses ranking scores and why we ultimately left them aside. We present interesting observations and discoveries in Section~\ref{sec:obs} and conclude in Section~\ref{sec:conclusion} by discussing the strengths and limitations of our approach. \section{Marking score} \label{sec:mark} We now derive a \emph{marking score} to evaluate the performance of gymnastics judges. We first describe our general approach using artistic gymnastics data in Section~\ref{subsection:artistic} and present results for the other gymnastics disciplines in Section~\ref{subsection:other}. Table~\ref{tab:notation} summarizes the notation we use in this section. \subsection{General approach applied to artistic gymnastics} \label{subsection:artistic} The marking score must have the following properties. First, it must not depend on the skill level of the gymnasts evaluated: a judge should be neither penalized nor advantaged if he judges an Olympic final with the world's best 8 gymnasts as opposed to a preliminary round with 200 gymnasts. Second, it must allow comparisons of judges across apparatus, disciplines, and competitions.
The marking score of a judge is thus based on three parameters: \begin{enumerate} \item{The control scores of the performances} \item{The marks given by the judge} \item{The apparatus / discipline} \end{enumerate} \begin{table} \centering \begin{tabular}{ll} \toprule $p$ & Performance $p$ \\ $\lambda_p$ & True quality level of Performance $p$ \\ $c_p$ & Control score of Performance $p$ \\ $j$ & Judge $j$ \\ $s_{p,j}$ & Mark of Judge $j$ for Performance $p$ \\ $\hat{e}_{p,j}$ & Judging discrepancy $s_{p,j} - c_p$ (approximates the judging error) \\ & \qquad of Judge $j$ for Performance $p$ \\ $m_{p,j}$ & Marking score for Performance $p$ by Judge $j$ \\ $M_{j}$ & Marking score of Judge $j$ \\ $d$ & Apparatus / Discipline $d$ \\ $\hat{\sigma}_d(c_p)$ & Intrinsic judging error variability of Discipline $d$ \\ $\alpha_d, \beta_d, \gamma_d$ & Parameters of Discipline $d$ \\ $n$ & Number of performances in an event\\ \bottomrule \end{tabular} \caption{Notation.} \label{tab:notation} \end{table} Let $s_{p, j}$ be the mark of Judge~$j$ for Performance $p$, and let $\hat{e}_{p,j}~\triangleq~s_{p, j}~-~c_p$ be the \emph{judging discrepancy} of Judge $j$ for Performance $p$. Since we use the median of the enlarged judging panel as the control score ($c_p \triangleq \underset{j}{\text{med}}(s_{p,j})$), thus as a proxy of the true performance level $\lambda_p$, it follows that $\hat{e}_{p,j}$ is a proxy of the \emph{judging error} of Judge $j$ for Performance~$p$. We emphasize once more that we discuss the advantages and drawbacks of using the median as control score in Section~\ref{sec:conclusion}. Figure~\ref{fig:D:ag} shows the distribution of $\hat{e}_{p, j}$ for artistic gymnastics. Our first observation is that judges are too severe as often as they are too generous, which is trivially true because we use the median as control score. The second observation is that the judging error is highly heteroscedastic.
Judges are much more accurate for the best performances, and simply using $\hat{e}_{p,j}$ underweights errors made for the best gymnasts. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{AG_differences.pdf} \caption{Distribution of the judging errors in artistic gymnastics. To improve the visibility, we aggregate the points on a $0.1 \times 0.1$ grid.} \label{fig:D:ag} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{AG_variances.pdf} \caption{Variance of judging error versus control score in artistic gymnastics.} \label{fig:var:ag} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{AG_standard_deviations.pdf} \caption{Standard deviation of judging error versus control score in artistic gymnastics, corresponding to the intrinsic judging error variability for this discipline.} \label{fig:sd:ag} \end{figure} Figures~\ref{fig:var:ag} and~\ref{fig:sd:ag} respectively show the sample variance and the sample standard deviation of the judging error $\hat{e}_{p,j}$ as a function of the control score $c_p$ for artistic gymnastics. In Figures~\ref{fig:var:ag}, \ref{fig:sd:ag} and all similar figures that follow, the frequency is the number of performances with a given control score, and the fitted curves are exponential weighted least-squares regressions of the data. In Figure~\ref{fig:var:ag}, we observe that the sample variance decreases almost linearly with the control score, except for the best performances for which it does not converge to zero. By inspection, the exponential regression of the standard deviation in Figure~\ref{fig:sd:ag} fits the data outstandingly well. The outliers correspond to the rare gymnasts who aborted or catastrophically missed their routine. The weighted root-mean-square deviation (RMSD) of the regression is 0.015, which is almost one order of magnitude smaller than the smallest deduction allowed by a judge.
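The weighted least-squares regression behind Figure~\ref{fig:sd:ag} can be reproduced with standard tools. The sketch below fits the exponential model $\alpha_d + \beta_d e^{\gamma_d c_p}$ to synthetic per-score aggregates; the control scores, sample standard deviations and frequencies are illustrative placeholders, not the FIG data.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(c, alpha, beta, gamma):
    # Exponential model for the intrinsic judging error variability:
    # sigma_d(c) = alpha_d + beta_d * exp(gamma_d * c)
    return alpha + beta * np.exp(gamma * c)

# Synthetic per-score aggregates standing in for the real data: control
# scores, the sample standard deviation of the judging error at each score,
# and the number of performances (used as regression weights).
rng = np.random.default_rng(0)
scores = np.linspace(6.0, 9.8, 20)
sample_sd = exp_model(scores, 0.02, 2.0, -0.35) + rng.normal(0.0, 0.005, 20)
freq = np.linspace(50, 400, 20)

# Weighted least squares: curve_fit treats `sigma` as per-point uncertainty,
# so control scores backed by more performances get more weight.
popt, _ = curve_fit(exp_model, scores, sample_sd,
                    p0=(0.0, 1.0, -0.3), sigma=1.0 / np.sqrt(freq))
alpha_d, beta_d, gamma_d = popt

def sigma_hat(c):
    # Floor the prediction at 0.05 as a fail-safe against extrapolating
    # to very low (or negative) variability for unseen high scores.
    return np.maximum(exp_model(c, alpha_d, beta_d, gamma_d), 0.05)
```

The fitted `sigma_hat` then plays the role of the intrinsic judging error variability: it decreases as the control score increases and never drops below the fail-safe floor.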
We use this exponential equation for our estimator of the standard deviation of the judging error $\hat{\sigma}_d(c_p)$, which we call the \emph{intrinsic judging error variability}. We can do the same analysis at the apparatus level. For example, Figures~\ref{fig:sd1}, \ref{fig:sd2}, \ref{fig:sd3} and \ref{fig:sd4} respectively show the intrinsic judging error variability (the weighted least-squares regression of the standard deviation of the judging error) for still rings, uneven bars, women's floor exercise and men's floor exercise. More generally, the estimator for $\hat{\sigma}_d(c_p)$ depends on the discipline (or apparatus) $d$ under evaluation and the control score $c_p$ of the performance, and is given by \begin{equation} \label{eq:s1} \hat{\sigma}_d(c_p) \triangleq \max(\alpha_d + \beta_d e^{\gamma_d c_p},0.05). \end{equation} For some apparatus like men's floor exercise in Figure~\ref{fig:sd4} the intrinsic judging error variability is linear within the data range. Since there is no mark close to 10 in our dataset, and since $\hat{\sigma}_d(c_p)$ becomes small for the best recorded performances, we can omit the mathematical ramifications of the bounded marking range. However, for apparatus such as women's floor exercise in Figure~\ref{fig:sd3}, the best fitted curves go to zero before 10. Since athletes might get higher marks than in our original data set in future competitions, we use $\max(\cdot,0.05)$ as a fail-safe mechanism to avoid comparing judges' marks to a very low and even negative extrapolated intrinsic error variability in the future. We emphasize that all the disciplines and apparatus we analyzed have highly accurate regressions. Besides acrobatic gymnastics, for which we do not have as much data, the worst weighted root-mean-square deviation is $\text{RMSD} \approx 0.04$. 
\begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_standard_deviations_M_SR.pdf} \caption{Standard deviation of judging error versus control score for still rings.} \label{fig:sd1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_standard_deviations_W_UB.pdf} \caption{Standard deviation of judging error versus control score for uneven bars.} \label{fig:sd2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_standard_deviations_W_FX.pdf} \caption{Standard deviation of judging error versus control score for women's floor exercise.} \label{fig:sd3} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_standard_deviations_M_FX.pdf} \caption{Standard deviation of judging error versus control score for men's floor exercise.} \label{fig:sd4} \end{figure} The marking score of Performance $p$ by Judge $j$ is \begin{equation} \label{eq:ms} m_{p,j} \triangleq \frac{\hat{e}_{p,j}}{\hat{\sigma}_d(c_p)} = \frac{s_{p,j} - c_p}{\hat{\sigma}_d(c_p)}. \end{equation} It expresses the judging error as a function of the standard deviation for a specific discipline and control score. The overall marking score for Judge $j$ is given by \begin{equation} \label{eq:4} M_j \triangleq \sqrt{E[m_{p,j}^2]}=\sqrt{\frac{1}{n}\sum_{p=1}^n m_{p,j}^2}. \end{equation} The marking score of a perfect judge is 0, and a judge whose judging error is always equal to the intrinsic judging error variability $\hat{\sigma}_d(c_p)$ has a marking score of 1.0. The mean squared error weights outliers heavily, which is desirable for evaluating judges. Figure~\ref{fig:boxplots:ag:all} shows the boxplots of the marking scores for all the judges for each apparatus in artistic gymnastics using the regression from Figure~\ref{fig:sd:ag}. The acronyms are defined in Table~\ref{tab:apparatus}. The first observation is that there are significant differences between apparatus. 
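Eqs.~\eqref{eq:ms} and~\eqref{eq:4} translate directly into code. The sketch below computes the per-performance and overall marking scores for a hypothetical judge, using a constant intrinsic variability of $0.2$ purely for illustration (the real estimator is the fitted exponential per apparatus).

```python
import numpy as np

def marking_scores(marks, controls, sigma_hat):
    # Per-performance marking scores m_{p,j} = (s_{p,j} - c_p) / sigma_hat(c_p)
    # and the overall marking score M_j, the root mean square of the m_{p,j}.
    marks = np.asarray(marks, dtype=float)
    controls = np.asarray(controls, dtype=float)
    m = (marks - controls) / sigma_hat(controls)
    M = float(np.sqrt(np.mean(m ** 2)))
    return m, M

# Hypothetical constant intrinsic variability of 0.2, for illustration only.
sigma_hat = lambda c: np.full_like(np.asarray(c, dtype=float), 0.2)

# Three performances: the judge is 0.1 too generous, 0.2 too severe, exact.
m, M = marking_scores([9.1, 8.4, 7.9], [9.0, 8.6, 7.9], sigma_hat)
```

Squaring the per-performance scores before averaging is what makes the overall score weight outliers heavily.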
Pommel horse, for instance, is intrinsically more difficult to judge accurately than vault and floor exercise. The FIG confirms that the alternative, i.e., that judges in pommel horse are less competent than judges in men's vault or men's floor exercise, is highly unlikely. The differences between floor and vault on one side and pommel horse on the other side were previously observed in punctual competitions \cite{university-2009,london-2011,Bucar:2013}. Note that the better accuracy of vault judges does not make it easier to rank the gymnasts since many gymnasts execute the same jumps at a similar performance level. \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_apparatus_boxplots_ALLformula.pdf} \caption{Distribution of the overall marking scores per artistic gymnastic apparatus using one overall formula. The acronyms are defined in Table~\ref{tab:apparatus}, and the numbers between brackets are the number of judges per apparatus in the dataset.} \label{fig:boxplots:ag:all} \end{figure} \begin{table}[H] \centering \begin{tabular}{ l l } \toprule Acronym & Apparatus \\ \midrule BB & Balance beam (women)\\ FX & Floor exercise (men and women)\\ HB & Horizontal bar (men)\\ PB & Parallel bars (men) \\ PH & Pommel horse (men)\\ SR & Still rings (men)\\ UB & Uneven bars (women)\\ VT & Vault (men and women)\\ \bottomrule \end{tabular} \caption{The artistic gymnastics apparatus and their acronyms.} \label{tab:apparatus} \end{table} \begin{table}[H] \centering \begin{tabular}{ l l } \toprule Acronym & Apparatus \\ \midrule DMT & Double mini-trampoline (men and women)\\ IND & Individual trampoline (men and women)\\ TUM & Tumbling (men and women)\\ \bottomrule \end{tabular} \caption{The trampoline apparatus and their acronyms.} \label{tab:gt:apparatus} \end{table} A highly desirable feature for the marking score is to be comparable between apparatus and disciplines, which proves difficult with one overall formula. 
The differences between apparatus make it challenging for the FIG to qualitatively assess how good the judges are and to convey this information unambiguously to the interested parties. We thus estimated the intrinsic judging error variability $\hat{\sigma}_d(c_p)$ for each apparatus (instead of grouping them together) and used the resulting regressions to recalculate the marking scores. The results, presented in Figure~\ref{fig:boxplots:ag:ind}, now show a good uniformity and make it simpler to compare judges from different apparatus with each other. A pommel horse judge with a marking score of 1.0 is average, and so is a vault judge with the same marking score. This has allowed us to define a single set of quantitative to qualitative thresholds applicable across all the gymnastics apparatus and disciplines. \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_apparatus_boxplots_deciles.pdf} \caption{Distribution of the overall marking scores per artistic gymnastic apparatus using an individual formula per apparatus. The acronyms are defined in Table~\ref{tab:apparatus}, and the numbers between brackets are the number of judges per apparatus in the dataset.} \label{fig:boxplots:ag:ind} \end{figure} \subsection{Other gymnastic disciplines} \label{subsection:other} We use the same approach for the other gymnastics disciplines. Figures~\ref{fig:sd:gr}, \ref{fig:sd:ac} and \ref{fig:sd:ae} respectively show the weighted least-squares regressions for rhythmic gymnastics, acrobatic gymnastics and aerobic gymnastics. We do not discuss the results at the apparatus level, although we found notable differences: group routines in rhythmic gymnastics are more difficult to judge than individual ones, and groups in acrobatic gymnastics are more difficult to judge than pairs. 
We also analyzed the artistry judges in acrobatic and aerobic gymnastics, and were surprised to observe that the heteroscedasticity of their judging error was almost the same as for execution judges. \begin{figure}[] \centering \includegraphics[width=\columnwidth]{GR_standard_deviations.pdf} \caption{Standard deviation of judging error versus control score in rhythmic gymnastics.} \label{fig:sd:gr} \end{figure} \begin{figure}[] \centering \includegraphics[width=\columnwidth]{AC_standard_deviations.pdf} \caption{Standard deviation of judging error versus control score in acrobatic gymnastics.} \label{fig:sd:ac} \end{figure} \begin{figure}[] \centering \includegraphics[width=\columnwidth]{AE_standard_deviations.pdf} \caption{Standard deviation of judging error versus control score in aerobic gymnastics.} \label{fig:sd:ae} \end{figure} Trampoline, shown in Figure~\ref{fig:sd:gt}, was the most puzzling discipline to tackle. The behavior on the left side of the plot is due to gymnasts who aborted their routine before completing all their jumps, for instance by losing balance and landing a jump outside the center of the trampoline. We solved the problem by fitting the curves based on the completed routines. The result is shown in Figure~\ref{fig:sd:gt:aborted}, with aborted routines represented with rings instead of filled circles. Again, the weighted RMSD is excellent. When calculating the marking score for trampoline judges, the marks of gymnasts who did not complete their exercise may be omitted. If they are accounted for, the estimator generously evaluates judges when gymnasts do not complete their routine, which results in a slightly improved overall marking score. The behavior observed in trampoline appears in other sports with aborted routines or low scores~\cite{HM2018:heteroscedasticity} and can be modeled with a concave parabola. This, however, decreases the accuracy of the regression for the best performances, which is undesirable.
\begin{figure}[] \centering \includegraphics[width=\columnwidth]{GT_standard_deviations00.pdf} \caption{Standard deviation of judging error versus control score in trampoline.} \label{fig:sd:gt} \end{figure} \begin{figure}[] \centering \includegraphics[width=\columnwidth]{GT_standard_deviations.pdf} \caption{Standard deviation of judging error versus control score in trampoline. The rings indicate aborted routines. Data from synchronized trampoline is removed.} \label{fig:sd:gt:aborted} \end{figure} Trampoline exhibits the largest differences between apparatus: tumbling is much more difficult to judge than individual trampoline, which in turn is much more difficult to judge than double mini-trampoline. The boxplots per trampoline apparatus in Figure~\ref{fig:boxplot:single:trampoline} clearly illustrate this (the acronyms are defined in Table~\ref{tab:gt:apparatus}). We thus use a different regression equation per apparatus. Finally, note that Figure~\ref{fig:sd:gt:aborted} excludes data from synchronized trampoline because its judging panels are partitioned in two halves, each monitoring a different gymnast. The subpanels (two judges each) are too small to derive accurate control scores. \begin{figure}[] \centering \includegraphics[width=\columnwidth]{GT_apparatus_boxplots_ALLformula_C.pdf} \caption{Distribution of the overall marking scores per trampoline apparatus using one overall formula. The acronyms are defined in Table~\ref{tab:gt:apparatus}, and the numbers between brackets are the number of judges per apparatus in the dataset.} \label{fig:boxplot:single:trampoline} \end{figure} \section{Observations, discoveries and recommendations} \label{sec:obs} During the course of this work we made interesting and sometimes surprising observations and discoveries that led to recommendations to the FIG. We summarize our observations about reference judges in Section~\ref{subsection:reference} and judging gender discrepancies in Section~\ref{subsection:gender}.
\subsection{Reference judges} \label{subsection:reference} In addition to the regular panel of execution judges, all the gymnastic disciplines except trampoline also have so-called \emph{reference judges}. In artistic and rhythmic gymnastics, there are two reference judges, and the aggregation process is as follows\footnote{Acrobatic and aerobic gymnastics have a similar process for execution and artistry judges.}. The execution panel score is the trimmed mean of the middle three of the five execution panel judges' marks, and the reference score is the arithmetic mean of the two reference judges' marks. If the gap between the execution panel score and the reference score exceeds a predefined tolerance threshold, and if the difference between the marks of both reference judges is below a second threshold, then the final execution score of the gymnast is the mean of the execution panel and reference scores. This makes reference judges dangerously powerful. \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{AG_reference_boxplots.pdf} \caption{Distribution of marking scores for Artistic Gymnastics execution panel and reference judges.} \label{fig:pr} \end{figure} At each competition, execution judges are randomly selected from a set of accredited judges submitted by the national federations. In contrast, reference judges are hand-picked by the FIG, and the additional power granted to them is based on the assumption that execution judges are sometimes incompetent or biased. To test this assumption, we compared the marking scores of the execution panel and reference judges. The results for artistic gymnastics are shown in Figure~\ref{fig:pr}\footnote{In Figure~\ref{fig:pr}, judges have at least one marking score per apparatus for which they evaluated gymnasts. A judge has two marking scores on a single apparatus when appearing on the regular execution panel and on the reference panel for different events.}.
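The aggregation rule described above can be sketched as follows. The two tolerance thresholds in this sketch are hypothetical placeholders, since the actual values are set by the FIG and depend on the apparatus.

```python
import statistics

def final_execution_score(panel_marks, reference_marks,
                          gap_tolerance=0.3, ref_agreement=0.3):
    # Panel score: trimmed mean of the middle three of the five panel marks.
    panel_score = statistics.mean(sorted(panel_marks)[1:-1])
    # Reference score: arithmetic mean of the two reference judges' marks.
    reference_score = statistics.mean(reference_marks)
    gap = abs(panel_score - reference_score)
    ref_gap = abs(reference_marks[0] - reference_marks[1])
    # Reference judges override the panel only when they disagree with the
    # panel beyond the tolerance while agreeing with each other.
    if gap > gap_tolerance and ref_gap < ref_agreement:
        return (panel_score + reference_score) / 2
    return panel_score
```

The override branch is what makes the two reference judges so powerful: whenever it triggers, their average carries the same weight as the entire execution panel.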
Although this is obvious by inspection, a two-sided Welch's $t$-test returned a $p$-value of 0.18 and we could not reject the null-hypothesis that both means are equal. We ran similar tests for the other gymnastics disciplines, and in all instances reference judges are either statistically indistinguishable from the execution panel judges, or worse. Having additional judges selected by the FIG is an excellent idea because it increases the size of the panels, thus making them more robust. However, we strongly recommended that the FIG does not grant more power to reference judges. They are not better in aggregate, and the small size of the reference panels further increases the likelihood that the errors they make have greater consequences. The FIG Technical Coordinator has recently proposed the adoption of our recommendation. \subsection{Gender discrepancies: women are more accurate judges than men} \label{subsection:gender} In artistic gymnastics, men apparatus are almost exclusively evaluated by men judges and women apparatus are almost exclusively evaluated by women judges. Figure~\ref{fig:boxplots:ag:all}, besides showing the differences between apparatus, also shows that the marking scores for women apparatus are lower than those of men apparatus. Figure~\ref{fig:mwartistics} formalizes this observation by directly comparing the marking scores of men and women judges in artistic gymnastics\footnote{In Figure~\ref{fig:mwartistics}, judges have one marking score per apparatus for which they evaluated gymnasts.}. The average woman evaluation is $\approx 15\%$ better than the average man evaluation. More formally, we ran a one-sided Welch's $t$-test with the null-hypothesis that the mean of the marking scores of men is smaller than or equal to the mean marking score of women. We obtained a $p$-value of $10^{-15}$, leading to the rejection of the null-hypothesis. 
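Such comparisons reduce to a standard two-sample test. The sketch below runs Welch's $t$-test on synthetic marking scores drawn from identical distributions, mirroring the situation where two groups of judges are statistically indistinguishable; the data are simulated placeholders, not the FIG results.

```python
import numpy as np
from scipy import stats

# Synthetic marking scores for 200 judges in one group and 60 in another,
# drawn from the same distribution: under the null hypothesis of equal
# means, a large p-value is expected. Simulated values, not FIG data.
rng = np.random.default_rng(42)
group_a = rng.normal(1.0, 0.25, 200)
group_b = rng.normal(1.0, 0.25, 60)

# Welch's t-test: `equal_var=False` drops the equal-variance assumption,
# which is appropriate when the two groups differ in size and composition.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
```

The same call with `alternative="greater"` or `alternative="less"` yields the one-sided variant used for the gender comparison below.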
\begin{figure}[] \centering \includegraphics[width=\columnwidth]{AG_MW_boxplots_ALLformula.pdf} \caption{Distribution of marking scores per gender in artistic gymnastics.} \label{fig:mwartistics} \end{figure} A first hypothesis that can explain this difference is that in artistic gymnastics, men's routines include ten elements, whereas women's routines include eight elements. Furthermore, the formation and accreditation process is different for men and women judges. Men, who must judge six apparatus, receive less training than women, who must only judge four. Some men judges also have a (maybe unjustified) reputation of laissez-faire, which contrasts with the precision required from women judges. \begin{figure}[] \centering \includegraphics[width=\columnwidth]{GT_MW_boxplots.pdf} \caption{Distribution of marking scores per gender in trampoline.} \label{fig:mwtrampoline} \end{figure} To obtain more insight, we compared women and men judges in trampoline, which has mixed judging panels as well as the same accreditation process and apparatus per gender. In other words, men and women judges in trampoline receive the same training and execute the same judging tasks. The results are shown in Figure~\ref{fig:mwtrampoline}. The difference between genders observed in artistic gymnastics is less pronounced but remains in trampoline: women judge more accurately than men. We suspect that an important contributor to this judging gender discrepancy in gymnastics is the larger pool of women practicing the sport, which increases the likelihood of having more good women judges at the top of the pyramid since nearly all judges are former gymnasts from different levels. As an illustration, a 2007 survey from USA Gymnastics reported four times more women gymnasts than men gymnasts in the USA \cite{USA2007survey}. A 2004 report from the ministère de la Jeunesse, des Sports et de la Vie Associative reported a similar ratio in France \cite{France2004survey}.
Accurate information on participation per gender is difficult to come by, but fragmentary results indicate a similar participation gender imbalance in trampoline \cite{Silva:2017}. On a different note, we did not observe any mixed-gender bias in trampoline, i.e., judges are not biased in favor of same-gender athletes. This is in opposition to other sports such as handball where gender bias by referees led to transgressive behaviors \cite{Souchon:2004}. In light of our gender analysis, we recommended that the FIG and its technical committees thoroughly review their processes to select, train and evaluate men judges in artistic gymnastics and trampoline. The marking score we developed provides valuable help for this task. \section{Outlier detection} \label{sec:outlier} We can use the marking score to signal judging marks that are improbably high or low, with an increased emphasis on outliers involving gymnasts of the judge's own nationality. Figure~\ref{fig:out1}, like Figure~\ref{fig:D:ag}, shows the judging errors for artistic gymnastics judges. Differences of more than two standard deviations ($2 \cdot \hat{\sigma}_d(c_p)$) away from the control score are marked in red\footnote{We use a different equation for $\hat{\sigma}_d(c_p)$ per apparatus.}. The problem with this approach is that a bad judge has a lot of outliers, and a great judge none. This is not what the FIG wants, because an erratic judge can be unbiased and a precise judge can be dishonest. Instead of using the same standard deviation for all the judges, we scale the standard deviation by the overall marking score of each judge, and flag the judging scores that satisfy \begin{equation} \label{eq:outlier} |\hat{e}_{p,j}| > \max(2\cdot\hat{\sigma}_d(c_p)\cdot M_j, 0.1). \end{equation} We use $\max(\cdot, 0.1)$ to ensure that a difference of 0.1 from the control score is never an outlier. The results are shown in Figure~\ref{fig:out2}.
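Eq.~\eqref{eq:outlier} is straightforward to implement. The sketch below flags outlier marks using a hypothetical constant intrinsic variability of $0.2$, illustrating how an accurate judge ($M_j = 0.5$) faces a tighter threshold than an erratic one ($M_j = 1.5$).

```python
import numpy as np

def flag_outliers(marks, controls, sigma_hat, M_j):
    # Flag marks with |s_{p,j} - c_p| > max(2 * sigma_hat(c_p) * M_j, 0.1);
    # a difference of at most 0.1 from the control score is never an outlier.
    marks = np.asarray(marks, dtype=float)
    controls = np.asarray(controls, dtype=float)
    discrepancy = np.abs(marks - controls)
    threshold = np.maximum(2.0 * sigma_hat(controls) * M_j, 0.1)
    return discrepancy > threshold

# Hypothetical constant intrinsic variability of 0.2, for illustration only.
sigma_hat = lambda c: np.full_like(np.asarray(c, dtype=float), 0.2)

# The same two marks, judged by an accurate judge (M_j = 0.5, threshold 0.2)
# and by an erratic judge (M_j = 1.5, threshold 0.6).
flags_accurate = flag_outliers([8.7, 8.25], [8.4, 8.3], sigma_hat, 0.5)
flags_erratic = flag_outliers([8.7, 8.25], [8.4, 8.3], sigma_hat, 1.5)
```

A deviation of 0.3 is flagged for the accurate judge but not for the erratic one, which is exactly the judge-relative behavior the rule is designed for.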
Eq.~\eqref{eq:outlier} flags $\approx 5\%$ of the marks, which is slightly more than what would be expected for a normal distribution. The advantage of the chosen approach is that it compares each judge to herself/himself, that is, it is more stringent for precise judges than for erratic judges. The disadvantage of the chosen approach is that one might think that a judge without outliers is good, which is false. The marking score and outlier detection work in tandem: a judge with a bad marking score is erratic, thus bad no matter how many outliers he/she has. It is important to note that we cannot infer conscious bias, chicanery or cheating from an outlier mark. A flagged evaluation can be a bad but honest mistake, caused by external factors, or even indicate that a judge is out of consensus with the other judges who might be wrong at the same time. Nevertheless, this information is useful for the FIG: performances with large discrepancies among panel judges systematically lead to careful video reviews post-competition. In egregious but very rare circumstances they may even result in sanctions by the FIG Disciplinary Commission. We present a comprehensive analysis of national bias in gymnastics in the second article of this series~\cite{HM2018:nationalbias}. \begin{figure}[h!] \vspace{-0mm} \centering \includegraphics[width=\columnwidth]{AG_outlier_all.pdf} \caption{Distribution of the judging errors in artistic gymnastics. Dots in red are more than two standard deviations ($2 \cdot \hat{\sigma}_d(c_p)$) away from the control score. To improve the visibility, we aggregate the points on a $0.1 \times 0.1$ grid and shift the outliers (red dots) by 0.05 on both axes.} \label{fig:out1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{AG_outlier_ind.pdf} \caption{Distribution of the judging errors in artistic gymnastics. Dots in red are more than $2 \cdot \hat{\sigma}_d(c_p) \cdot M_j$ away from the control score.
To improve the visibility, we aggregate the points on a $0.1 \times 0.1$ grid and shift the outliers (red dots) by 0.05 on both axes.} \label{fig:out2} \vspace{-0mm} \end{figure} \section{Ranking score} \label{sec:rank} The ranking of the gymnasts is determined by their scores, which are themselves aggregated from the marks given by the judges. The old iteration of JEP used a rudimentary \emph{ranking score} to evaluate to what extent judges ranked the best athletes in the right order. In a vacuum this makes sense: the FIG wants to select the most deserving gymnasts for the finals, and award the medals in the correct order. In this section we show that providing an objective assessment of the judges based on the order in which they rank the best athletes is problematic, and we recommended that the FIG stop using this approach. \begin{defn} Let $G = \{g_1, g_2, \dots, g_n\}$ be a set of $n$ gymnasts. A \emph{ranking} on $G$ is a sequence $r = a_1a_2a_3 \dots a_n$, $a_i \neq a_j$ $\forall i \neq j$, of all the elements of $G$ that defines a weak order on $G$. Alternatively, a ranking can be written as $r = (r_{g_1}, r_{g_2}, r_{g_3}, \dots )$, where $r_{g_1}$ is the rank of Gymnast $g_1$, $r_{g_2}$ is the rank of Gymnast $g_2$, and so on. \end{defn} The mathematical comparison of rankings is closely related to the analysis of voting systems and has a long and rich history dating back to the work of Ramon Llull in the 13th century. Two popular metrics on the set of weak orders are Kendall's $\tau$ distance \cite{Ken1938} and Spearman's footrule \cite{Spe1904}, both of which are within a constant fraction of each other \cite{Diaconis:1977}. In recent years, \textcite{Kumar:2010} generalized these two metrics by taking into account element weights, position weights, and element similarities. Their motivation was to find the ranking minimizing the distance to a set of search results from different search engines.
\begin{defn} Let $r$ be a ranking of $n$ competitors. Let $w = (w_1, \dots, w_n)$ be a vector of element weights. Let $\delta = (\delta_1, \dots, \delta_n)$ be a vector of position swap costs where $\delta_1 \triangleq 1$ and $\delta_i$ is the cost of swapping elements at positions $i-1$ and $i$ for $i\in\{2,3,\dots,n\}$. Let $p_i = \sum\limits_{j=1}^{i} \delta_j$ for $i\in\{1,2,\dots,n\}$. We define the mean cost of interchanging positions $i$ and $r_i$ by $\bar{p}_i = \frac{p_i - p_{r_i}}{i - r_i}$. Finally, let $D:\{1, \dots, n\} \times \{1, \dots, n\} \to \mathbb{R}_{\geq 0}$ be a metric and interpret $D(i, j) = D_{ij}$ as the cost of swapping elements $i$ and $j$. The generalized Kendall's $\tau$ distance \cite{Kumar:2010} is \begin{equation} \label{eq:gkt} K'^{\ast} = K'_{w,\delta,D}(r) = \sum_{s > t} w_s w_t \bar{p}_s \bar{p}_t D_{st} [r_s < r_t]. \end{equation} \end{defn} Note that $K'^{\ast}$ is the distance between $r$ and the identity ranking $id = (1, 2, 3, \dots)$. To calculate the distance between two rankings $r^1$ and $r^2$, we calculate $K'(r^1, r^2) = K'_{w,\delta,D}(r^1 \circ (r^2)^{-1})$, where $(r^2)^{-1}$ is the right inverse of $r^2$. These generalizations are natural for evaluating gymnastics judges: swapping the gold and silver medalists should be evaluated more harshly than inverting the ninth and tenth best gymnasts, but swapping the gold and silver medalists when their marks are 9.7 and 9.6 should be evaluated more leniently than if their marks are 9.7 and 8.7. \begin{table} \centering \begin{tabular}{ c | ccc } \toprule Parameter set & $w_i$ & $\delta_i$ & $D_{ij}$ \\ \midrule 1 & 1 & 1 & 1 \\ 2 & 1 & 1 & $|c_i-c_j|$ \\ 3 & 1 & $\frac{1}{i}$ & $|c_i-c_j|$ \\ \bottomrule \end{tabular} \caption{Parameters of the ranking scores for our simulations.} \label{tab:rs} \end{table} \begin{figure}[h!]
\centering \includegraphics[width=\columnwidth]{Simulation_set_1.pdf} \caption{Ranking score vs marking score for 1000 synthetic average judges and the first set of ranking score parameters from Table~\ref{tab:rs}. We aggregate the points on the x-axis to improve visibility.} \vspace{-0mm} \label{fig:sim1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{Simulation_set_2.pdf} \caption{Ranking score vs marking score for 1000 synthetic average judges and the second set of ranking score parameters from Table~\ref{tab:rs}. We aggregate the points to improve visibility.} \label{fig:sim2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{Simulation_set_3.pdf} \caption{Ranking score vs marking score for 1000 synthetic average judges and the third set of ranking score parameters from Table~\ref{tab:rs}. We aggregate the points to improve visibility.} \label{fig:sim3} \end{figure} To test the relevance of ranking scores as a measurement of judging accuracy, we ran several simulations to compare them to our marking score. As an example for this article, we use the men's floor exercise finals at the 2016 Rio Olympic Games. We first calculate the control scores $c_1, c_2, \dots, c_8$ of the eight finalists from the marks given by the seven execution judges (five panel judges and two reference judges). We then simulate the performance of 1000 average judges $j\in\{1,2,\dots,1000\}$ by randomly creating, for each of them, eight marks $s_{1,j}, s_{2,j}, \dots, s_{8,j}$ for the eight finalists using a normal distribution with mean $c_p$ and standard deviation $\hat{\sigma}_d(c_p)$ for $p\in\{1,2,\dots,8\}$. We then calculate, for each judge, the marking score as well as three ranking scores based on Eq.~(\ref{eq:gkt}) with the three different sets of parameters from Table~\ref{tab:rs}.
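The simulation loop just described can be sketched as follows. The control scores below are illustrative placeholders (not the actual Rio 2016 values), and \texttt{sigma\_d} is a toy stand-in for the fitted variability function $\hat{\sigma}_d$, which in the paper is estimated from the full marks database:

```python
import random

# Illustrative control scores for eight finalists (placeholders)
control = [9.6, 9.4, 9.3, 9.1, 9.0, 8.8, 8.7, 8.5]

def sigma_d(c):
    # Placeholder: judging variability grows as the control score decreases
    return 0.04 * (10.0 - c)

def simulate_judges(n_judges, seed=0):
    """Draw marks of synthetic average judges: N(c_p, sigma_d(c_p)) per finalist."""
    rng = random.Random(seed)
    return [[rng.gauss(c, sigma_d(c)) for c in control]
            for _ in range(n_judges)]

def ranking(marks):
    """Rank the performances by mark, best (highest mark) first."""
    order = sorted(range(len(marks)), key=lambda p: -marks[p])
    ranks = [0] * len(marks)
    for pos, p in enumerate(order):
        ranks[p] = pos + 1
    return ranks

judges = simulate_judges(1000)
```

Each synthetic judge then yields one marking score and, via the induced ranking, one ranking score per parameter set.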
Figures~\ref{fig:sim1}, \ref{fig:sim2} and \ref{fig:sim3} show the ranking score with respect to the marking score of the 1000 judges for the three parameter sets. The figures illustrate that the correlation between the ranking score and the marking score varies widely depending on the chosen parameters. The parameters used in Figure~\ref{fig:sim1} are those of the original version of Kendall's $\tau$ distance \cite{Ken1938}. This simply counts the number of bubble sort swaps required to transform one ranking into the other; swapping the first and second gymnasts separated by 0.1 point is equivalent to swapping the seventh and eighth gymnasts separated by 1.0 point. In Figure~\ref{fig:sim2}, the element swap costs vary $(D_{ij}=|c_i-c_j|)$. This decreases the penalty of swaps as the marks get closer to each other; in particular, swapping two gymnasts with the same control score $c_i=c_j$ incurs no penalty. This increases the correlation between the marking score and the ranking score, and both, to some extent, measure the same thing. In Figure~\ref{fig:sim3}, we also vary the position swap costs $(\delta_i=\frac{1}{i})$. This increases the importance of having the correct order as we move towards the gold medalist. The correlation between the marking score and the ranking score decreases, thus we penalize good judges who unluckily make mistakes in the wrong place, and reward erratic judges who somehow get the podium in the correct order. It is unclear how to parametrize the ranking score; it is either redundant with the marking score, or too uncorrelated to be of any practical value. The marking score already achieves our objectives. It is based on the theoretical performances of the gymnasts over hundreds of performances for each judge and reflects bias and cheating, as these involve changing the marks up or down for some of the performances.
Furthermore, the FIG is adamant that a theoretical judge who ranks all the gymnasts in the correct order but is either always too generous or too strict is not a good judge because he/she does not apply the Codes of Points properly. From these observations, the FIG stopped using ranking scores to monitor the accuracy of its judges.
\section{Introduction} In this article we consider the dual version of two results on polygonal billiards. We begin by describing these original results. The first result is about the dynamics of the so-called pedal map related to billiards in a triangle $P$. The three altitudes of $P$ intersect the opposite sides (or their extensions) in three points called the feet. These three points form the vertices of a new triangle $Q$ called the pedal triangle of the triangle $P$ (Figure 1). It is well known that for acute triangles the pedal triangle forms a period-three billiard orbit often referred to as the Fagnano orbit, i.e., the polygon $Q$ is inscribed in $P$ and satisfies the usual law of geometric optics (the angle of incidence equals the angle of reflection) or equivalently (this is a theorem) the pedal triangle has least perimeter among all inscribed triangles. The name Fagnano is used since in 1775 J.\ F.\ F.\ Fagnano gave the first proof of the variational characterization. In a sequence of elegant and entertaining articles, J.\ Kingston and J.\ Synge \cite{KS}, P.\ Lax \cite{L}, P.\ Ungar \cite{U}, and J.\ Alexander \cite{A} studied the dynamics of the pedal map given by iterating this process. The second result, due to DeTemple and Robertson \cite{DR}, is that a closed convex polygon $P$ is regular if and only if $P$ contains a periodic billiard path $Q$ similar to $P$. There is a dual notion to billiards, called dual or outer billiards. The game of dual billiards is played outside the billiard table. Suppose the table is a polygon $P$ \begin{figure}[h]\label{fig1} \centerline{\psfig{file=07-0236fig1.eps,height=50mm}} \caption{A pedal triangle.} \end{figure} and that $z$ is a point outside $P$ and not on the continuation of any of $P$'s sides. A line $L$ is a {\em support line} of $P$ if it intersects the boundary $\partial P$ of $P$ and $P$ lies entirely in one of the two regions into which the line $L$ divides the plane. 
There are two support lines to $P$ through $z$; choose the right one as viewed from $z$. If $z$ is not on the continuation of a side of the convex hull of $P$ then this support line intersects $P$ at a single point which we call the {\em support vertex} of $z$. Reflect $z$ in its support vertex to obtain $z$'s image under the dual billiard map denoted by $T$ (see Figure 2). \begin{figure}\label{fig2} \psfrag*{x}{\footnotesize$z$} \psfrag*{T(x)}{\footnotesize$Tz$} \psfrag*{T2(x)}{\footnotesize$\quad\quad T^2z$} \centerline{\psfig{file=07-0236fig2.eps,height=50mm}} \caption{The polygonal dual billiard map.} \end{figure} The map $T^n$ is defined for all $n$ at the point $z$ if none of its images belongs to the continuation of a side of the convex hull of the polygon. Dual billiards have been extensively studied by S.\ Tabachnikov (see \cite{T}--\cite{TD} and the references therein). Throughout the article we will identify the polygon having vertices $z_i \in \mathbb{C}$, $i=1,\dots,n$, ordered cyclically with the point $z=(z_1,\dots,z_n) \in \mathbb{C}^n$. In particular a polygon for us is an oriented object. The notion of polygon includes self-intersecting polygons (which we call star shaped polygons) and geometrically degenerate $n$-gons: those with a side of length 0, or an angle of $\pi$ or $2 \pi$ at a vertex. Corresponding to the fact that the Fagnano billiard orbit hits each side of the triangle, we call an $n$-periodic dual billiard orbit consisting of $n$ points consecutively reflected in the vertices $z_1,z_2,\dots,z_n$ a {\em Fagnano} dual billiard orbit. Fagnano orbits for dual billiards were introduced by Tabachnikov in \cite{T1}, where he studied a certain variational property analogous to one in the pedal case studied by Gutkin \cite{G}. Motivated by the notion of pedal triangles, we introduce here dedal $n$-gons.
An $n$-gon $Q$ is called a {\em dedal} $n$-gon of the $n$-gon $P$ if reflecting the vertex $w_i$ of $Q$ in the vertex $z_i$ of $P$ yields the vertex $w_{i+1}$. (Throughout the article all subscripts will be taken modulo $n$ without explicit mention.) There is no requirement that the sides of $Q$ touch $P$ only at a vertex. From the definition it is clear that if a Fagnano orbit of an $n$-gon $P$ exists then it is a dedal $n$-gon. A nondegenerate polygon can have a degenerate dedal polygon and vice-versa. Examples are shown in Figures 3 and 4. The degeneracy not shown cannot occur: i.e., it is impossible for two consecutive vertices of $P$ to coincide if $Q$ is nondegenerate. We will not dwell on this aspect. \begin{figure} \psfrag*{w1}{\footnotesize$w_1$} \psfrag*{w2}{\footnotesize$w_2$} \psfrag*{w3}{\footnotesize$w_3$} \psfrag*{w4}{\footnotesize$w_4$} \psfrag*{w5}{\footnotesize$w_5$} \psfrag*{z1}{\footnotesize$z_1$} \psfrag*{z2}{\footnotesize$z_2$} \psfrag*{z3}{\footnotesize$z_3$} \psfrag*{z4}{\footnotesize$z_4$} \psfrag*{z5}{\footnotesize$z_5$} \psfrag*{a}{\footnotesize$z_2=z_5$} \centerline{\psfig{file=07-0236fig3a.eps,height=50mm} \hspace{0.5cm} \psfig{file=07-0236fig3b.eps,height=50mm}} \caption{Nondegenerate dedal pentagons corresponding to degenerate pentagons, with angle $\pi$ and $2\pi$.} \end{figure}\label{fig6} \begin{figure}\label{f5} \psfrag*{w1}{\footnotesize$w_1$} \psfrag*{w2}{\footnotesize$w_2$} \psfrag*{w3}{\footnotesize$w_3$} \psfrag*{w4}{\footnotesize$w_4$} \psfrag*{w5}{\footnotesize$w_5$} \psfrag*{z1}{\footnotesize$z_1$} \psfrag*{z2}{\footnotesize$z_2$} \psfrag*{z3}{\footnotesize$z_3$} \psfrag*{z4}{\footnotesize$z_4$} \psfrag*{z5}{\footnotesize$z_5$} \psfrag*{aa}{\tiny$w_3=w_4$} \psfrag*{a}{\tiny$=z_3$} \centerline{\psfig{file=07-0236fig4a.eps,height=34mm} \hspace{0.4cm} \psfig{file=07-0236fig4b.eps,height=38mm} \hspace{0.4cm} \psfig{file=07-0236fig4c.eps,height=38mm}} \caption{Degenerate dedal polygons: loss of a vertex, angle $\pi$, and angle 
$2\pi$.} \end{figure} In this article we study the dedal $n$-gon(s) $Q$ of an $n$-gon $P$. Our main results are the following. If $n$ is odd then its dedal $n$-gon exists and is unique. For $n$ even we give a necessary and sufficient condition for the existence of dedal $n$-gons and describe the space of all dedal $n$-gons of $P$. Then we go on to characterize regular and affinely regular $n$-gons by similarity to their dedal $n$-gons. Finally we give a complete description of the dynamics of the {\em developing map} $\mu(Q) := P$. The proofs of all our results boil down to some linear algebra of the dedal map. After we wrote this article one of the anonymous referees pointed out that iteration of the developing map has already been studied by Berlekamp, Gilbert, and Sinden in 1965 \cite{BGS}. They answer a question they attribute to G.\ R.\ MacLane, namely, they prove that for almost every polygon $Q$ there exists an $M \ge 1$ such that $\mu^M(Q)$ is convex. Note that the image of a convex polygon is convex; thus this implies that for almost every $Q$ there exists an $M$ such that $\mu^m(Q)$ is convex for all $m \ge M$. \newpage Suppose $Q(w_1,\dots,w_n)$ is a dedal polygon of $P(z_1,\dots,z_n)$. The definition implies that $$ z_i = (w_i + w_{i+1})/2 $$ (see Figures 3, 4, and 5). The linear transformation $\mu: \mathbb{C}^n \to \mathbb{C}^n$ given by $\mu(w_1,\dots,w_n) = (z_1,\dots,z_n)$ is called the {\em developing map}. The characteristic polynomial of $\mu$ is $$(1- 2x)^n - (-1)^n.$$ Its eigenvalues are $(1 + q^i)/2$ for $i=0,1,\dots,n-1$, where $q := \exp(2\pi i/n)$. All the eigenvalues differ from zero except for the $(n/2)$th eigenvalue when $n$ is even. The $i$th eigenvector is $X_i := (1,q^i,q^{2i},\dots,q^{(n-1)i})$. The vectors $X_i$ form a basis of our space of polygons. If $i$ divides $n$ then $X_i$ is a polygon with $n/i$ sides. 
Nonetheless for the sake of clarity (avoiding stating special cases) we will think of this as an $n$-gon which is traced $i$ times; for example if $n=6$ then $X_2 = (1,q^2,q^4,q^6,q^8,q^{10}) = (1,q^2,q^4,1,q^2,q^4)$ traces the triangle $(1,q^2,q^4)$ twice. An exception to this rule is the case $n$ even and $i= n/2$. In this case $X_{n/2}$ is a segment which we do not consider as a polygon. The following proposition clarifies the existence of dedal polygons. \begin{proposition}\label{prop1} a) Suppose that $n \ge 3$ is odd. Then for any $n$-gon $P$ there is a unique dedal $n$-gon $Q$.\\ b) If $n \ge 3$ is even then dedal $n$-gons exist if and only if the vertices of $P$ satisfy $z_1 - z_2 + z_3 - \cdots - z_n = 0$. If this equation is satisfied then there is a unique dedal $n$-gon $Q_0$ in the space $X_{n/2}^\perp := \{z \in \mathbb{C}^n:\ z \cdot X_{n/2} = 0\}$. The set $D:=\{Q_0 + s X_{n/2}: \ s \in \mathbb{C}\}$ consists of the dedal $n$-gons of $P$. In particular for each $i \in \{1,\dots,n\}$ every point $w \in \mathbb{C}$ is the $i$th vertex of a unique dedal $n$-gon $Q_i(w)$. \end{proposition} \noindent We remark that the condition $z_1-z_2+ z_3 - \cdots - z_n = 0$ means that the center of mass of the even vertices coincides with the center of mass of the odd vertices. \begin{proofof}{Proposition \ref{prop1}} If $n \ge 3$ is odd then the map $\mu$ is invertible with $$ w_i = z_i - z_{i+1} + z_{i+2} - \cdots + z_{i-1}, $$ and thus the dedal polygon exists and is unique. On the other hand, if $n \ge 3$ is even then the map $\mu$ is not invertible. The kernel is one (complex) dimensional and is generated by the vector $X_{n/2} := (1,-1,1,-1,\dots,1,-1)$. Dedal polygons exist if and only if $z = (z_1,z_2,\dots,z_n)$ is in the range of $\mu$, i.e., the space spanned by the $X_i$ for $i \ne n/2$.
The range and kernel of $\mu$ are orthogonal since if $i \ne n/2$ then $$X_i \cdot X_{n/2} = (1 + q^{2i} + q^{4i} + \cdots + q^{(n-2)i}) - (q^i + q^{3i} + \cdots + q^{(n-1)i}) = 0-0 = 0.$$ Thus $z$ is in the range of $\mu$ if and only if it satisfies $z \cdot X_{n/2} = 0,$ or equivalently \begin{equation}\label{existence} z_1 - z_2 + z_3 - \cdots - z_n = 0. \end{equation} Alternatively, to see this note that \begin{eqnarray} &\hspace{-10pt}z_1 + z_3 + \cdots + z_{n-1} = \frac12 \left ( (w_1 + w_2) + (w_3 + w_4) + \cdots + (w_{n-1} + w_n) \right )\nonumber\\ & = \frac12 \left ( (w_2 + w_3) + (w_4 + w_5) + \cdots + (w_n + w_1) \right ) = z_2 + z_4 + \cdots + z_n.\nonumber \end{eqnarray} The uniqueness of $Q_0$ follows since the map $\mu$ is invertible on the space $X_{n/2}^\perp$. The statement about the set $D$ follows immediately since $X_{n/2}$ is the kernel of $\mu$. Let $Q_0 := (w_1^0,\dots,w_n^0)$. For each $i$, any point $w \in \mathbb{C}$ can be uniquely expressed as $w_i^0 + s$ for some $s \in \mathbb{C}$. \end{proofof} Suppose that a polygon $Q$ without self intersection is a dedal polygon of $P$. If $Q$ is convex then it is clearly a Fagnano orbit of $P$ since by convexity the polygon $P$ must be contained in $Q$. On the other hand if $Q$ is not convex then it cannot be a Fagnano orbit since $P$ cannot be contained in $Q$. In particular Fagnano orbits always exist for triangles, but not for polygons with more sides. An example of a pentagon without a Fagnano orbit is given in Figure 5. Although not every polygon has a Fagnano orbit, it does have a periodic orbit that is nearly as simple. Namely, consider the second iteration $T^2$ of the dual billiard map. Connecting the consecutive points of a periodic orbit of $T^2$ yields a polygon. Cutler has shown that the map $T^2$ has a periodic orbit which lies outside any compact neighborhood of $P$ and the polygon constructed from the orbit makes a single turn about $P$ \cite{T2}. 
The existence of a periodic orbit for the usual billiard in an arbitrary polygon remains open. \begin{figure} \psfrag*{w1}{\footnotesize$w_1$} \psfrag*{w2}{\footnotesize$w_2$} \psfrag*{w3}{\footnotesize$w_3$} \psfrag*{w4}{\footnotesize$w_4$} \psfrag*{w5}{\footnotesize$w_5$} \psfrag*{z1}{\footnotesize$z_1$} \psfrag*{z2}{\footnotesize$z_2$} \psfrag*{z3}{\footnotesize$z_3$} \psfrag*{z4}{\footnotesize$z_4$} \psfrag*{z5}{\footnotesize$z_5$} \centerline{\psfig{file=07-0236fig5.eps,height=50mm}} \caption{ A pentagon without a Fagnano orbit. The unique dedal pentagon is pictured.}\label{fig3} \end{figure} The set of polygons with center of mass at the origin is $$\mathsf{C} := \{(w_1,\dots,w_n) \in \mathbb{C}^n: \ w_1 + \cdots + w_n =~0\}.$$ The eigenvector $X_{0} = (1,1,\dots,1)$ represents a polygon which degenerates to a point. The set $\mathsf{C}$ is the orthogonal complement of $X_0$, i.e., $\mathsf{C} = \{w=(w_1,\dots,w_n) \in \mathbb{C}^n: \ w \cdot X_0 = 0\}$. We will express polygons in the eigenbasis, i.e., we write $P = \sum_{i=0}^{n-1} a_i X_i$; the coefficients $a_i$ are complex numbers. Since the map $\mu$ preserves the center of mass, throughout the rest of the article we assume that the center of mass is at the origin, i.e., $a_0=0$. $X_1$ and $X_{n-1}$ are the usual regular $n$-gon in counterclockwise and clockwise orientation (Figure 6a). If $i$ and $n$ are relatively prime and $i \not \in \{1,n-1\}$ then $X_i$ is star shaped and we also call it regular (Figures 6b and c). Finally if $i$ divides $n$ and $i \ne n/2$ then $X_i$ is naturally a regular $n/i$-gon which, as mentioned above, we will think of as (a multiple cover of) a regular $n$-gon. 
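Both maps in play here are easy to experiment with numerically. The sketch below (vertices as Python complex numbers, with an arbitrary example pentagon) implements the developing map $\mu$ and, for odd $n$, the inverse formula $w_i = z_i - z_{i+1} + z_{i+2} - \cdots + z_{i-1}$ from the proof of Proposition~\ref{prop1}:

```python
def develop(w):
    """mu: the i-th vertex of P is the midpoint of w_i and w_{i+1}."""
    n = len(w)
    return [(w[i] + w[(i + 1) % n]) / 2 for i in range(n)]

def dedal_odd(z):
    """Unique dedal polygon for odd n: alternating sums of the vertices."""
    n = len(z)
    assert n % 2 == 1, "the inverse formula only holds for odd n"
    return [sum((-1) ** k * z[(i + k) % n] for k in range(n))
            for i in range(n)]

# An arbitrary (nonregular) pentagon:
P = [0j, 2 + 0j, 3 + 2j, 1 + 3j, -1 + 1j]
Q = dedal_odd(P)
# Reflecting w_i in z_i indeed gives w_{i+1}, i.e. mu(Q) recovers P:
assert all(abs(a - b) < 1e-12 for a, b in zip(develop(Q), P))
```

For even $n$ the same \texttt{develop} applies, but as Proposition~\ref{prop1} shows, the alternating-sum inverse must be replaced by solving within $X_{n/2}^\perp$, since $\mu$ then has a nontrivial kernel.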
\begin{figure}\label{fig4} \psfrag*{a}{\footnotesize$(a)$} \psfrag*{b}{\footnotesize$(b)$} \psfrag*{c}{\footnotesize$(c)$} \centerline{\psfig{file=07-0236fig6.eps,height=40mm}} \caption{Up to orientation there are three regular 7-gons.} \end{figure} Two (unoriented) polygons are called {\em similar} if all corresponding angles are equal and all distances are increased (or decreased) in the same ratio. Since we study oriented ordered polygons we additionally want the marking of the vertices of $P$ and $Q$ to correspond, in which case we will call $P$ and $Q$ {\em $\star$-similar}. Thus two polygons $P = \sum_{i=1}^{n-1} b_i X_i$ and $Q = \sum_{i=1}^{n-1} a_i X_i$ are $\star$-similar if there exists a nonzero complex constant $\ell$ such that $b_i = \ell \, a_i$ for all $i$. We will also write this as $P = \ell \, Q$. Note that if $P$ and $Q$ are $\star$-similar then they are similar. On the other hand if $P$ and $Q$ are similar then $P$ is $\star$-similar to a cyclic permutation $Q^{(k)} := (w_k, w_{k+1},w_{k+2}, \dots, w_{k-1})$ of $Q$ or a cyclic permutation $\bar{Q}^{(k)} := (w_k, w_{k-1}, w_{k-2} ,\dots, w_{k+1})$ of $Q$ with the opposite orientation. In analogy to DeTemple and Robertson's result we have: \begin{theorem}\label{thm0} Fix $n \ge 3$. An $n$-gon $P$ is regular if and only if it has a dedal polygon $Q$ which is $\star$-similar to $P$. \end{theorem} Note that if $n$ is odd then $Q$ is the unique dedal polygon of $P$, while if $n$ is even then $Q$ is the unique dedal polygon $Q_0 \in X_{n/2}^\perp$. \begin{proof} Suppose $P$ is regular, i.e., there is a nonzero complex constant $\ell$ such that $P = \ell \, X_j$, where $j \ne n/2$ if $n$ is even. If $n$ is odd then let $Q$ be the unique dedal polygon of $P$. If $n$ is even then since $P$ is regular it satisfies (\ref{existence}). Thus $P$ has dedal polygons and we choose $Q = Q_0 \in X_{n/2}^\perp$. In both cases let $Q=\sum a_i X_i$. 
Since $P=\mu(Q) = \sum \frac{1+q^i}{2} a_i X_i$, we have $a_i = 0$ for $i \ne j$, i.e., $Q = a_j X_j$. Thus $Q$ is $\star$-similar to $P = \ell \, X_j$. Conversely suppose that $Q = \sum a_i X_i$ is a dedal polygon of $P = \sum b_i X_i$ and that $P$ and $Q$ are $\star$-similar, i.e., there is a nonzero complex constant $\ell$ such that $a_i = \ell \, b_i$. Since $P = \mu(Q)$ we have $\frac{1+q^i}{2} a_i = b_i$ for $i=1,\dots,n$. Combining this with the $\star$-similarity yields $\frac{1+q^i}{2} = 1/{\ell}$ (a constant) for each $i$ such that $a_i \ne 0$. It follows that $q^i = q^j$ if $a_i \ne 0$ and $a_j \ne 0$. Since $q^i \ne q^j$ for $i \ne j$ it follows that only a single $a_j$ is nonnull. Therefore $Q = a_j X_j$ and thus $P = \ell^{-1} \, a_j X_j$, and both are regular. Note that if $n$ is even then $i \ne n/2$, since if $i = n/2$ then $(1 + q^i)/2 = 0$ and thus $b_i = a_i = 0$. \end{proof} Of course we would like to have the analog of Theorem \ref{thm0} with the usual notion of similarity. We call an $n$-gon $P$ {\em affinely regular} if there exists a $j$ ($j \ne n/2$ when $n$ is even) such that $P$ is in the subspace $A_j$ generated by $X_j$ and $X_{n-j}$. This is equivalent to $P$ being the image of a regular $n$-gon by an affine transformation, which explains the name. \newcounter{Lcount} \begin{theorem}\label{thm1} a) Suppose $n \ge 3$ is odd. An $n$-gon $P$ is affinely regular if and only if it has a dedal polygon $Q$ which is similar to $P$.\\ b) If $n \ge 4$ is even then an $n$-gon $P$ appears in the following list of affinely regular polygons if and only if it has a dedal polygon $Q$ which is similar to $P$.
\begin{list}{\roman{Lcount})} {\usecounter{Lcount} \setlength{\rightmargin}{\leftmargin}} \item Regular $n$-gons \item Any $P \in A_j$ such that there exists $k \in \{1,\dots,n\} \setminus~\{n/2\}$ such that $n$ divides $j(2k-1)$ \item Any $P = b_j X_j + b_{n-j}X_{n-j} \in A_j$ such that there exists $k \in \{1,\dots,n\}$ with ${b_j}/{b_{n-j}} = \pm q^{j(k+3/2)}$ \end{list} \end{theorem} All triangles are affinely regular. Berlekamp et al.\ noticed that every triangle is similar to its dedal triangle \cite{BGS}. One would like to know if there are other special properties of the polygons in the list. \begin{proof} Suppose $Q^{(1)} = Q = \sum_{i=1}^{n-1} a_i X_i$. Then it is a simple exercise in linear algebra to verify that $Q^{(k)} = \sum a_i q^{i(k-1)} X_i$ and that $\bar{Q}^{(k)} = \sum a_{n-i} q^{i(k+1)} X_i$. Thus the similarity of $P$ to $Q$ is equivalent to the existence of a $k \in \{1,\dots,n\}$ and a nonzero complex constant $\ell$ such that $b_i = \ell \, a_iq^{i(k-1)}$ for $i = 1,\dots,n-1$ or $b_i = \ell \, a_{n-i}q^{i(k+1)}$ for $i = 1,\dots,n-1$. The structure of the proof is as follows. We group the even and odd cases together and start by proving that the various classes of affinely regular polygons are similar to their dedal polygons. Then we turn to the converse. Suppose that $n$ is odd and $P = b_j X_j + b_{n-j} X_{n-j}$ is affinely regular. Let $Q = Q^{(1)} = \sum a_i X_i$ be the unique dedal polygon of $P$. Since $P=\mu(Q) = \sum \frac{1+q^i}{2} a_i X_i$, we have $a_i = 0$ for $i \not \in \{j,n-j\}$. We claim that there is a nonzero complex constant $\ell$ such that $P = \ell \, Q^{((n+3)/2)}$. To see this note that $$ 1 + q^j = q^j(1 + q^{-j}) = q^{j(n+1)} ( 1 + q^{-j}) = q^{j(n+1)/2} q^{j(n+1)/2} ( 1 + q^{-j})$$ or $$\frac{1+q^j}{2q^{j(n+1)/2}} = \frac{1+q^{-j}}{2q^{-j(n+1)/2}}.$$ Thus $P = \ell \, Q^{((n+3)/2)}$ for $\ell := (1+q^j)/ (2q^{j(n+1)/2})$. The case $n$ is even and $P$ of class (ii) is similar.
Consider the dedal polygon $Q = Q_0 \in X_{n/2}^{\perp}$. Again $P = \mu(Q)$ implies that $a_i = 0$ if $i \not \in \{j,n-j\}$. We claim that there is a nonzero complex constant $\ell$ such that $P = \ell \, Q^{(k+1)}$. Since $n$ divides $j(2k-1)$ we have $$ 1 + q^j = q^j(1 + q^{-j}) = q^{j + j(2k-1)} ( 1 + q^{-j}) = q^{kj} q^{kj} ( 1 + q^{-j})$$ or $$\frac{1+q^j}{2q^{kj}} = \frac{1+q^{-j}}{2q^{-kj}}.$$ Thus $P = \ell \, Q^{(k+1)}$ for $\ell := (1+q^j)/(2q^{kj})$. The case $n$ even and $P$ regular has already been treated in Theorem \ref{thm0}; therefore it remains to treat case (iii) of $n$ even. We suppose $P$ is of this form and want to show that $P$ is similar to $\bar{Q}^{(k)}$, i.e., that there is a nonzero complex constant $\ell$ such that $b_i = \ell \, a_{n-i} q^{i(k+1)}$ for all $i$. As before $P = \mu(Q)$ implies that $a_i = 0$ if $i \not \in \{j,n-j\}$. Combining the two relations, $b_i = \frac{1+q^i}{2} a_i$ and $b_j/b_{n-j} = \pm q^{j(k+3/2)}$ yields $$ b_{j} = \pm q^{j(k+3/2)} b_{n-j} = \pm q^{j(k+3/2)} \frac{1+q^{-j}}{2} a_{n-j} = \pm q^{j(k+1)} \frac{q^{j/2}+q^{-j/2}}{2} a_{n-j}$$ and $$ b_{n-j} = \pm q^{-j(k+3/2)} b_j = \pm q^{-j(k+3/2)} \frac{1+q^j}{2} a_j = \pm q^{-j(k+1)} \frac{q^{j/2} +q^{-j/2}}{2} a_j.$$ Choosing $\ell := \pm (q^{j/2} + q^{-j/2}) /2 \in \R$ yields $P = \ell \, \bar{Q}^{(k)}$. We turn now to the converse. We will first prove that for $n$ even or odd if $Q = \sum a_i X_i$ is similar to $P = \sum b_i X_i$ then $P$ is affinely regular. We first treat that case when $P = \ell \, Q^{(k+1)}$ for some $k$ and $\ell \in \mathbb{C} \setminus \{0\}$, i.e., $b_i = \ell \, a_i q^{ik}$. Since $P = \mu(Q)$ we have $\frac{1+q^i}{2} a_i = b_i$ for $i=1,\dots,n-1$. Note that if $n$ is even then $(1 + q^{n/2})/2 = 0$, and thus $b_{n/2} = \frac{1+q^{n/2}}{2} a_{n/2} = 0$ and $a_{n/2} = \ell^{-1} q^{-nk/2} b_{n/2}$ is zero as well. 
For each $i$ such that $a_i \ne 0$, combining the two relations between $a_i$ and $b_i$ yields $q^{-ik} \frac{1+q^i}{2} = \ell$. If only a single $a_j$ is nonnull, then $Q$ is regular, and since $P$ is similar to $Q$ it is regular as well. Now suppose that $a_i$ and $a_j$ are nonnull. Then the previous equation implies that \begin{equation}(1+q^i)/(1+q^j) = q^{k(i-j)}.\label{1}\end{equation} Taking absolute values yields $|(1+q^i)|=|(1+q^j)|$, which implies $i = \pm j$. Thus $P$ is affinely regular. If $n$ is even we need to conclude more. Note that $(1+q^{-j})/(1+q^j) = q^{-j}$. Thus taking $i = -j$ in (\ref{1}) implies $1 = q^{j(1-2k)}$. Thus $j(2k-1)$ is a multiple of $n$, i.e., we are in case (ii) of the list. Finally suppose that $P= \mu(Q)$ and $Q$ are similar but have the opposite orientation, i.e., $P = \ell \, \bar{Q}^{(k)}$ for some $\ell \in \mathbb{C} \setminus \{0\}$, or equivalently $b_i = \ell \, a_{n-i} q^{i(k+1)}$. Combining this with $\frac{1+q^i}{2} a_i = b_i$ yields $\frac{1+q^{i}}{2} a_i = \ell \, a_{n-i} q^{i(k+1)}$. If $n$ is even and $i = n/2$ then this equation implies that $a_{n/2} = 0$. For all other cases it implies that $a_i$ and $a_{n-i}$ are simultaneously zero or nonzero. For any $i$ such that they are nonzero there are two such equations, and they imply \begin{equation}\label{2} \frac{a_i}{a_{n-i}} = \frac{2 \, \ell \, q^{i(k+1)}}{1+q^{i}} = \frac{1+ q^{-i}}{2 \, \ell \, q^{-i(k+1)}} . \end{equation} This in turn yields \begin{equation}\label{3} 4 \, \ell^2 = (1+q^i) \cdot (1 + q^{n-i}) = {2 + q^i + q^{n-i}} := f(i) . \end{equation} It is easy to see that $f(i) = f(j)$ if and only if $j=i$ or $j=n-i$. Since the left-hand side of Equation (\ref{3}) does not depend on $i$, there is exactly one pair $(a_j,a_{n-j})$ of nonzero coefficients, i.e., $Q$ is affinely regular. Since $P$ is similar to $Q$ it is also affinely regular. Finally suppose that $n$ is even.
Equations (\ref{2}) and (\ref{3}) imply that for each $j$ there exists $k \in \{1,2 \dots ,n\}$ such that $$\frac{b_j}{b_{n-j}} = \pm \frac{\sqrt{f(j)} q^{j(k+1)}}{1+q^{-j}} = \pm \sqrt{\frac{1+q^j}{1+q^{-j}}} q^{j(k+1)} = \pm q^{j(k+3/2)},$$ i.e., we are in case (iii) of the list. \end{proof} \begin{lemma}\label{n} If $Q$ is affinely regular then $\mu^{n}(Q)$ is $\star$-similar to $Q$. \end{lemma} \begin{proof} Suppose $Q = a_j X_j + a_{n-j}X_{n-j}$. Then $$\mu^{n}(Q) = \Big (\frac{1+q^j}{2} \Big )^{n} a_j X_j + \Big (\frac{1+q^{n-j}}{2} \Big )^{n} a_{n-j} X_{n-j}.$$ But \begin{eqnarray}\label{aha} (1+q^{n-j})^n &=& \sum_{k=0}^n {n \choose k} q^{(n-j)k} = \sum_{k=0}^n {n \choose n-k} q^{-jk} \\ &=& \sum_{i=0}^n {n \choose i} q^{-j(n-i)} = \sum_{i=0}^n {n \choose i} q^{ji} =(1+q^j)^n.\nn \end{eqnarray} Thus $\mu^n(Q) = \ell \, Q$ with $\ell = (\frac{1+q^j}{2})^n$. \end{proof} In analogy to the billiard results stated in the introduction we now study the dynamics of $\mu$. On the space $\mathsf{C}$ the dynamics are not very interesting: $\mu^m(Q) \to (0,\dots,0)$ as $m \to \infty$ for all $Q \in \mathsf{C}$. To get a somewhat more interesting behavior notice that $\star$-similarity is an equivalence relation. Let $[Q] := \{ \ell Q: \ \ell \in \mathbb{C} \setminus \{0\}\}$ denote the equivalence class of $Q$. By identifying $\star$-similar polygons we obtain the quotient space $\hat{\mathsf{C}} := \{[Q]: \ Q \in \mathsf{C} \}$. If $Q_1$ and $Q_2$ are $\star$-similar then so are $\mu(Q_1)$ and $\mu(Q_2)$, and thus the map $\mu$ defines a map $\hat{\mu}$ of $\hat{\mathsf{C}}$ to itself. We remark that the equivalence class $[Q]$ of a polygon $Q = \sum a_i X_i$ can be represented by the vector $(a_1,\dots,a_{n-1}) \in \mathbb{C}^{n-1} \setminus \{(0,\dots,0)\}$ with the identification $(a_1,\dots,a_{n-1}) \equiv (\ell a_1,\dots,\ell a_{n-1})$ with $\ell \in \mathbb{C} \setminus \{0\}$. 
Thus the quotient space $\hat{\mathsf{C}}$ is naturally identified with the complex projective space $\mathbb{CP}^{n-2}$. Let $\S$ be the unit sphere in $\mathsf{C}$, i.e., the set of all $Q = \sum a_i X_i \in \mathsf{C}$ such that $\sum |a_i|^2 = 1$. Each equivalence class $[Q]$ intersects the unit sphere $\S$ in a circle, i.e., $[Q] \cap \S = \{\ell Q: |\ell| = 1/(\sum |a_i|^2)^{1/2}\}$. We define ${{\rm{dist}}}([P],[Q]) = \inf \{ d(P_0,Q_0): \ P_0 \in [P] \cap \S, Q_0 \in [Q] \cap \S\}$, where $d((a_1,\dots,a_{n-1}),(b_1,\dots,b_{n-1})) = (\sum |a_i - b_i|^2)^{1/2}$ is the Euclidean distance on $\S$. It is a simple exercise to verify that this defines a metric on $\hat{\mathsf{C}}$. A $\hat{\mu}$-invariant set $A \subset \hat{\mathsf{C}}$ is an {\em exponential attractor} with basin $B$ for $\hat{\mu}$ if there exist $c,\gamma > 0$ such that $\rm{dist}(\hat{\mu}^m([Q]),A) \le c \exp(-\gamma m)$ for all $m \ge 0$ and all $[Q] \in B$. If $B = \hat{\mathsf{C}}$ then we say that $A$ is a {\em global} exponential attractor. For each $j \in \{1,\dots,\lceil n/2 \rceil -1\}$ let $B_j \subset \mathsf{C}$ be the subspace generated by $\{X_j,X_{j+1},\dots,X_{n-j}\}$. Let $B_{\lceil n/2 \rceil} := \emptyset$, and ${\rm{Aff}} := \cup_{j=1}^{\lceil n/2 \rceil - 1} {A_j}$. For any subset $D \subset \mathsf{C}$ let $\hat{D} := \{ [P] : P \in D\}$. We will only use this notation for sets $D$ which are maximal in the sense that if $\hat{D} = \hat{E}$ then $E \subset D$. \begin{theorem}\label{nn} For each $j \in \{1,\dots,\lceil n/2 \rceil -1\}$ the set $\hat{A_j}$ is an exponential attractor with basin $\hat{B_j}~\setminus~\hat{B_{j+1}}$ and thus $\hat{\rm{Aff}}$ is a global exponential attractor for $\hat{\mu}$. The map $\hat{\mu}$ is $n$-periodic (i.e., $\hat{\mu}^n = Id$) on each $\hat{A_j}$, and thus on $\hat{\rm{Aff}}$. \end{theorem} \begin{proof} Suppose $Q \in B_j \setminus B_{j+1}$.
Equation (\ref{aha}) implies that the polygon $\mu^m(Q) = \sum_{i=1}^{n-1} (\frac{1+q^i}{2})^m a_i X_i = \sum_{i=j}^{n-j} (\frac{1+q^i}{2})^m a_i X_i$ is $\star$-similar to $$ \left (\frac{2}{1+q^j} \right )^m \mu^m(Q) = a_j X_j + \sum_{i=j+1}^{n-j-1} \left (\frac{1+q^i}{1+q^j} \right )^m a_i X_i + \left ( \frac{1+q^{n-j}}{1+q^j} \right )^m a_{n-j} X_{n-j}. $$ Since $|(1+q^i)/(1+q^j)| < 1$ the terms in the middle sum are exponentially small. Thus $\hat{\mu}^m ([Q])$ is exponentially close to the equivalence class $[P_m] \in \hat{A_j}$ of the polygon $P_m := a_j X_j + \left [ (1+q^{n-j})/(1+q^j) \right ]^m a_{n-j} X_{n-j}$. The fact that $\hat{\rm{Aff}}$ is a global attractor follows from this since $\hat{\mathsf{C}}$ is the union of the $\hat{B_j}$. Lemma \ref{n} immediately implies that $\hat{\mu}$ is periodic on each $\hat{A_j}$. \end{proof} \paragraph{Acknowledgments.} Many thanks to Sergei Tabachnikov and the two anonymous referees for helpful remarks. \newpage
\section{Introduction} 3D human pose tracking has applications in surveillance~\cite{zheng2015scalable} and analysis of sport events~\cite{burenius20133d,kazemi2013multi}. Most existing approaches~\cite{iqbal2018dual,iqbal2018hand,DBLP:conf/bmvc/KostrikovG14,liu2011markerless,martinez2017simple,pavlakos2017coarse,tome2017lifting,mehta2017monocular,mehta2018single} address 3D human pose estimation from single images while multi-view 3D human pose estimation~\cite{burenius20133d,kazemi2013multi,belagiannis20143d,belagiannis20163d,elhayek2015efficient} remains less explored, as obtaining and maintaining a configuration of calibrated cameras is difficult and costly. However, in sports or surveillance, calibrated multi-camera setups are available and can be leveraged for accurate human pose estimation and tracking. Utilizing multiple views has several obvious advantages over monocular 3D human pose estimation: ambiguities introduced by foreshortening as well as body joint occlusions or motion blurs can be resolved using other views. Furthermore, human poses are estimated within a global coordinate system when using calibrated cameras. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{img/shelf.pdf} \caption{ Qualitative results on the Shelf~\cite{belagiannis20143d} dataset. } \label{fig:my_label} \end{figure} In this work we propose an iterative greedy matching algorithm based on epipolar geometry to approximately solve the k-partite matching problem of multiple human detections in multiple cameras. To this end we utilize a real-time 2D pose estimation framework and achieve very strong results on challenging multi-camera datasets. The common 3D space proves to be very robust for greedy tracking, resulting in a very efficient and well-performing algorithm. 
In contrast to previous works~\cite{burenius20133d,kazemi2013multi,pavlakos17harvesting,ershadi2018multiple}, our approach does not discretize the solution space but combines triangulation with an efficient pose association approach across camera views and time. Furthermore, our approach does not utilize individual shape models for each person~\cite{liu2011markerless}. We make the following contributions: (i) we present a greedy approach for 3D multi-person tracking from multiple calibrated cameras and show that our approach achieves state-of-the-art results. (ii) We provide extensive experiments on both 3D human pose estimation and on 3D human pose tracking on various multi-person multi-camera datasets. \section{Related Work} Significant progress has been made in pose estimation and pose tracking in recent years~\cite{cao2017realtime,doering2018joint,Iqbal_CVPR2017,xiao2018simple} and our model is built on advancements in the field of 2D multi-person pose estimation~\cite{cao2017realtime,chen2018cascaded,fieraru2018learning,guo2018multi,kocabas2018multiposenet,newell2017associative,rogez2019lcr,xiao2018simple}. For instance, part affinity fields~\cite{cao2017realtime} are 2D vector fields that represent associations between body joints which form limbs. It utilizes a greedy bottom-up approach to detect 2D human poses and is robust to early commitment. Furthermore, it decouples the runtime complexity from the number of people in the image, yielding real-time performance. There is extensive research in monocular 3D human pose estimation~\cite{iqbal2018dual,iqbal2018hand,DBLP:conf/bmvc/KostrikovG14,martinez2017simple,pavlakos2017coarse,tome2017lifting,mehta2017monocular,mehta2018single}. For instance, Martinez et al.~\cite{martinez2017simple} split the problem of inferring 3D human poses from single images into estimating a 2D human pose and then regressing the 3D pose on the low-dimensional 2D representation. 
Though 3D human pose estimation approaches from single images yield impressive results they do not generalize well to unconstrained data. While multiple views are used in~\cite{pavlakos17harvesting,rhodin2018learning} to guide the training for monocular 3D pose estimation, there are also approaches that use multiple views for inference. A common technique to estimate a single 3D human pose from multiple views is to extend the well-known pictorial structure model~\cite{felzenszwalb2005pictorial} to 3D~\cite{amin2013multi,bergtholdt2010study,burenius20133d,kazemi2013multi,pavlakos17harvesting}. Burenius et al.~\cite{burenius20133d} utilize a 2D part detector based on the HOG-descriptor~\cite{dalal2005histograms} while Kazemi et al.~\cite{kazemi2013multi} use random forests. Pavlakos et al.~\cite{pavlakos17harvesting} outperform all previous models by utilizing the stacked hourglass network~\cite{newell2016stacked} to extract human joint confidence maps from the camera views. However, these models have to discretize their solution space resulting in either a very coarse result or a very large state space making them impractical for estimating 3D poses of multiple people. Furthermore, they restrict their solution space to a 3D bounding volume around the subject which has to be known in advance. Estimating multiple humans from multiple views was first explored by Belagiannis et al.~\cite{belagiannis20143d,belagiannis20163d}. Instead of sampling from all possible translations and rotations they utilize a set of 3D body joint hypotheses which were obtained by triangulating 2D body part detections from different views. However, these methods rely on localizing bounding boxes using a person tracker for each individual in each frame to estimate the number of persons that has to be inferred from the common state space. 
This will work well in cases where individuals are completely visible in most frames but will run into issues when the pose is not completely visible in some cameras as shown in Figure \ref{fig:cmu_pizza}. A CNN-based approach was proposed by Elhayek et al.~\cite{elhayek2015efficient} where they fit articulated skeletons using 3D sums of Gaussians~\cite{stoll2011fast} and where body part detections are estimated using CNNs. However, the Gaussians and skeletons need to be initialized beforehand for each actor in the scene, similar to~\cite{liu2011markerless}. Fully connected pairwise conditional random fields~\cite{ershadi2018multiple} utilize approximate inference to extract multiple human poses where DeeperCut~\cite{insafutdinov2016deepercut} is used as 2D human pose estimation model. However, the search space has to be discretized and a fully connected graph has to be solved, which throttles inference speed. Our approach does not suffer from any of the aforementioned drawbacks as our model works off-the-shelf without the need of actor-specific body models or discretized state space and uses an efficient greedy approach for estimating 3D human poses. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{img/cmu_pizza1.pdf} \end{center} \caption{Challenging 3D reconstruction of 6 persons in the \textit{CMU Panoptic Dataset}~\cite{Joo_2015_ICCV} with significant occlusion and partial visibility of persons.} \label{fig:cmu_pizza} \end{figure} \section{Model} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{img/kpartite_greedy.eps} \caption{Estimating multiple people from multiple views can be formulated as k-partite graph partitioning where 2D human pose detections must be associated across multiple views. We employ a greedy approach to make the partitioning tractable. 
Given a set of 2D human pose detections on multiple views (a) we greedily match all detections on two images (b) where the weight between two detections is defined by the average epipolar distance of the two poses. Other views are then integrated iteratively where the weight is the average of the epipolar distance of the 2D detections in the new view and the already integrated 2D detections (c). 2D detections with the same color represent the same person. } \label{fig:kpartite_greedy} \end{figure} Our model consists of two parts: First, 3D human poses are estimated for each frame. Second, the estimated 3D human poses are greedily matched into tracks which is described in Section \ref{sec:tracking}. To remove outliers and to fill-in missing joints in some frames, a simple yet effective smoothing scheme is applied, which is also discussed in Section \ref{sec:tracking}. \subsection{3D Human Pose Estimation} \label{sec:pose3d} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{img/epi_lines.pdf} \end{center} \caption{Epipolar lines for two camera views of the UMPM Benchmark~\cite{HICV11:UMPM}. The blue and the red dot in image (a) are projected as blue (red) epipolar lines in the second image (b) while the orange and light-blue dot from image (b) are projected onto image (a). } \label{fig:epilines} \end{figure} First, 2D human poses are extracted for each camera separately. Several strong 2D multi-person pose estimation~\cite{cao2017realtime,chen2018cascaded,fieraru2018learning,guo2018multi,kocabas2018multiposenet,newell2017associative,rogez2019lcr,xiao2018simple} models have been proposed but in our baseline we utilize OpenPose~\cite{cao2017realtime} as it is well established and offers real-time capabilities. We denote the 2D human pose estimations as \begin{align} \big\{h_{i, k}\big\}_{i \in [1, N]}^{k \in [1, K_i]} \end{align} where $N$ is the number of calibrated cameras and $K_i$ the number of detected human poses for camera $i$. 
In order to estimate the 3D human poses from multiple cameras, we first associate the detections across all views as illustrated in Figure~\ref{fig:kpartite_greedy}. We denote the associated 2D human poses as $\mathcal{H}$ where $\vert \mathcal{H} \vert$ is the number of detected persons and $\mathcal{H}_m = \{h_{i,k}\}$ is the set of 2D human poses that are associated to person $m$. Once the poses are associated, we estimate the 3D human poses for all detected persons $m$ with $\vert \mathcal{H}_m \vert>1$ by triangulating the 2D joint positions. For the association, we select camera $i=1$ as starting point and choose all 2D human pose detections $h_{1,k}$ in this camera view as person candidates, i.e., $\mathcal{H}= \big\{ \{h_{1,k}\} \big\}$. We then iterate over the other cameras and greedily match their 2D detections with the current list of person candidates $\mathcal{H}$ using bi-partite matching~\cite{munkres1957algorithms}. The cost for assigning a pose $h_{i,k}$ to an existing person candidate $\mathcal{H}_m$ is given by \begin{align} \Phi(h_{i,k}, \mathcal{H}_m) = \frac{1}{\vert \mathcal{H}_m \vert \vert J_{kl}\vert} \sum_{h_{j,l} \in \mathcal{H}_m} \sum_{\iota\in J_{kl}} \phi(h_{i,k}(\iota), h_{j,l}(\iota)) \label{eq:epidistance_big} \end{align} where $h_{i,k}(\iota)$ denotes the 2D pixel location of joint $\iota$ of the 2D human pose $h_{i,k}$ and $J_{kl}$ is the set of joints that are visible for both poses $h_{i,k}$ and $h_{j,l}$. Note that the 2D human pose detections might not contain all $J$ joints due to occlusions or truncations. The distance between two joints in the respective cameras is defined by the distance between the epipolar lines and the joint locations: \begin{align} \phi(p_i, p_j) = \vert p_j^T F^{i,j}p_i \vert + \vert p_i^T F^{j,i}p_j \vert \label{eq:distance} \end{align} where $F^{i,j}$ is the fundamental matrix from camera $i$ to camera $j$. Figure \ref{fig:epilines} shows the epipolar lines for two joints.
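To make Equation~(\ref{eq:distance}) concrete, the following sketch evaluates the symmetric epipolar residual for a toy two-camera setup. The pure-translation fundamental matrix and all coordinates are assumptions chosen for illustration; note that with the standard convention $F^{j,i} = (F^{i,j})^T$ the two terms of $\phi$ coincide, so the residual is twice the epipolar constraint.

```python
import numpy as np

def epi_dist(p_i, p_j, F_ij):
    """Symmetric epipolar residual phi(p_i, p_j):
    |p_j^T F^{i,j} p_i| + |p_i^T F^{j,i} p_j| with F^{j,i} = (F^{i,j})^T."""
    a = np.append(p_i, 1.0)  # homogeneous coordinates
    b = np.append(p_j, 1.0)
    return abs(b @ F_ij @ a) + abs(a @ F_ij.T @ b)

# Toy geometry: two identical cameras translated along the x-axis,
# for which F = [t]_x with t = (1, 0, 0); corresponding points must
# then share the same y-coordinate.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

print(epi_dist(np.array([10.0, 5.0]), np.array([40.0, 5.0]), F))  # 0.0 (consistent pair)
print(epi_dist(np.array([10.0, 5.0]), np.array([40.0, 9.0]), F))  # 8.0 (violated constraint)
```

In a real multi-camera rig the fundamental matrices would be derived from the given calibration instead of being written down by hand.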
Using the cost function $\Phi(h_{i,k}, \mathcal{H}_m)$, we solve the bi-partite matching problem for each image $i$: \begin{equation} X^* = \underset{X}{ \mathrm{argmin}}\sum_{m=1}^{\vert \mathcal{H} \vert} \sum_{k=1}^{K_i} \Phi(h_{i,k}, \mathcal{H}_m) X_{k,m} \end{equation} where \begin{equation} \nonumber \sum_k X_{k,m} = 1 \; \forall m \quad\text{and}\quad \sum_m X_{k,m} =1 \; \forall k. \end{equation} $X_{k,m}^*=1$ if $h_{i,k}$ is associated to an existing person candidate $\mathcal{H}_m$ and it is zero otherwise. If $X_{k,m}^*=1$ and $\Phi(h_{i,k}, \mathcal{H}_m) < \theta$, the 2D detection $h_{i,k}$ is added to $\mathcal{H}_m$. If $\Phi(h_{i,k}, \mathcal{H}_m) \geq \theta$, $\{h_{i,k}\}$ is added as hypothesis for a new person to $\mathcal{H}$. Algorithm \ref{alg:poseestimation} summarizes the greedy approach for associating the human poses across views. \begin{algorithm}[t] \footnotesize \SetAlgoLined \KwResult{Associated 2D poses $\mathcal{H}$} $\mathcal{H}:= \big\{ \{h_{1,k}\} \big\}$ \; \For{\textrm{camera} $i \gets 2$ \KwTo $N$}{ \For{\textrm{pose} $k \gets 1$ \KwTo $K_i$ }{ \For{\textrm{hypothesis} $m \gets 1$ \KwTo $\vert\mathcal{H}\vert$}{ $C_{k,m} = \Phi(h_{i,k}, \mathcal{H}_m)$ \; } } $X^* = \underset{X}{ \mathrm{argmin}}\sum_{m=1}^{\vert \mathcal{H} \vert} \sum_{k=1}^{K_i} C_{k,m} X_{k,m}$ \; \For{$k,m$ \textbf{where} $X_{k,m}^*=1$}{ \eIf{$C_{k,m} < \theta$}{ $\mathcal{H}_m = \mathcal{H}_m \ \bigcup \ \{h_{i,k}\}$ \; }{ $\mathcal{H} = \mathcal{H} \ \bigcup \ \big\{ \{h_{i,k} \} \big\}$ \; } } } $\mathcal{H} = \mathcal{H} \setminus \mathcal{H}_m \ \forall m$ \textbf{where} $\vert\mathcal{H}_m\vert = 1$\; \caption{Solving the assignment problem for multiple 2D human pose detections in multiple cameras. $\Phi(h_{i,k}, \mathcal{H}_m)$ \eqref{eq:epidistance_big} is the assignment cost for assigning the 2D human pose $h_{i,k}$ to the person candidate $\mathcal{H}_m$. $X^*$ is a binary matrix obtained by solving the bi-partite matching problem. 
The last line in the algorithm ensures that all hypotheses that cannot be triangulated are removed. } \label{alg:poseestimation} \end{algorithm} \subsection{Tracking} \label{sec:tracking} For tracking, we use bipartite matching~\cite{munkres1957algorithms} similar to Section~\ref{sec:pose3d}. Assuming that we have already tracked the 3D human poses until frame $t-1$, we first estimate the 3D human poses for frame $t$ as described in Section~\ref{sec:pose3d}. The 3D human poses of frame $t$ are then associated to the 3D human poses of frame $t-1$ by bipartite matching. The assignment cost for two 3D human poses is in this case given by the average Euclidean distance between all joints that are present in both poses. In some cases, two poses do not have any overlapping valid joints due to noisy detections or truncations. The assignment cost is then calculated by projecting the mean of all valid joints of each pose onto the $xy$-plane, assuming that the $z$-axis is the normal of the ground plane, and taking the Euclidean distance between the projected points. As long as the distance between two matched poses is below a threshold $\tau$, they will be integrated into a common track. Otherwise, a new track is created. In our experiments we set $\tau=200mm$. Due to noisy detections, occlusions or motion blur, some joints or even full poses might be missing in some frames or noisy. We fill in missing joints by temporal averaging and we smooth each joint trajectory by a Gaussian kernel with standard deviation $\sigma$. This simple approach significantly boosts the performance of our model as we will show in Section \ref{sec:exp}. 
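The frame-to-frame association and the trajectory smoothing described above can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the helper names, the NaN convention for missing joints, and the toy poses are assumptions, while SciPy supplies the Hungarian solver and the Gaussian kernel.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.ndimage import gaussian_filter1d

TAU = 200.0  # matching threshold in mm

def pose_distance(P, Q):
    """Mean Euclidean distance over the joints valid in both poses;
    P, Q are (J, 3) arrays with NaN rows marking missing joints."""
    valid = ~(np.isnan(P[:, 0]) | np.isnan(Q[:, 0]))
    if valid.any():
        return np.linalg.norm(P[valid] - Q[valid], axis=1).mean()
    # no overlapping joints: compare mean joints projected on the xy-plane
    p, q = np.nanmean(P[:, :2], axis=0), np.nanmean(Q[:, :2], axis=0)
    return np.linalg.norm(p - q)

def match_frame(tracks, poses, tau=TAU):
    """Assign the 3D poses of frame t to the tracks ending at frame t-1."""
    if not tracks:
        tracks.extend([p] for p in poses)
        return
    C = np.array([[pose_distance(t[-1], p) for t in tracks] for p in poses])
    rows, cols = linear_sum_assignment(C)      # Hungarian algorithm
    assigned = set()
    for k, m in zip(rows, cols):
        if C[k, m] < tau:
            tracks[m].append(poses[k])         # extend the matched track
            assigned.add(k)
    for k in range(len(poses)):                # leftovers start new tracks
        if k not in assigned:
            tracks.append([poses[k]])

def smooth_track(track, sigma=2.0):
    """Gaussian smoothing of each joint trajectory along the time axis."""
    return gaussian_filter1d(np.stack(track), sigma, axis=0)
```

The filled-in joints of the actual method would additionally require temporal averaging over neighboring frames, which is omitted here for brevity.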
\section{Experiments} \begin{table}[t] \normalsize \begin{center} \begin{tabular}{c|| c c c | c c c c c} & \cite{burenius20133d}\textsuperscript{*} & \cite{kazemi2013multi}\textsuperscript{*} & \cite{pavlakos17harvesting}\textsuperscript{*} & \cite{belagiannis20143d} & \cite{belagiannis20163d} & \cite{ershadi2018multiple} & Ours & Ours\textsuperscript{+} \\ \hline ua & .60 & .89 & 1.0 & .68 & .98 & .97 & .99 & 1.0 \\ la & .35 & .68 & 1.0 & .56 & .72 & .95 & .99 & 1.0 \\ ul & 1.0 & 1.0 & 1.0 & .78 & .99 & 1.0 & .98 & .99 \\ ll & .90 & .99 & 1.0 & .70 & .92 & .98 & .93 & .997 \\ \hline avg & .71 & .89 & 1.0 & .68 & .90 & .98 & .97 & \textbf{.997} \\ \hline \end{tabular} \end{center} \caption{Quantitative comparison of methods for single human 3D pose estimation from multiple views on the KTH Football II~\cite{kazemi2013multi} dataset. The numbers are the PCP score in 3D with $\alpha=0.5$. Methods annotated with \textsuperscript{*} can only estimate single human poses, discretize the state space and rely on being provided with a tight 3D bounding box centered at the true 3D location of the person. \textit{Ours}\textsuperscript{+} and \textit{Ours} describe our method with and without track smoothing (Section \ref{sec:tracking}). \textit{ua} and \textit{la} show the scores for upper and lower arms, respectively, while \textit{ul} and \textit{ll} represent upper and lower legs.
} \label{tab:singlehuman} \end{table} \begin{table}[t] \normalsize \begin{center} \begin{tabular}{c|c c c | c c c | c c c | c c c | c c c } \multicolumn{13}{c}{\textbf{Campus dataset} $\ (\alpha = 0.5)$} \\ & \multicolumn{3}{c}{\cite{belagiannis20143d}} & \multicolumn{3}{c}{\cite{belagiannis20163d}} & \multicolumn{3}{c}{\cite{ershadi2018multiple}} & \multicolumn{3}{c}{Ours} & \multicolumn{3}{c}{Ours\textsuperscript{+}}\\ \hline Actor & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 \\ \hline ua & .83 & .90 & .78 & .97 & .97 & .90 & .97 & .94 & .93 & .86 & .97 & .91 & .99 & .98 & .98\\ la & .78 & .40 & .62 & .86 & .43 & .75 & .87 & .79 & .70 & .74 & .64 & .68 & .91 & .70 & .92 \\ ul & .86 & .74 & .83 & .93 & .75 & .92 & .94 & .99 & .88 & 1.0 & .99 & .99 & 1.0 & .98 & 1.0\\ ll & .91 & .89 & .70 & .97 & .89 & .76 & .97 & .95 & .81 & 1.0 & .98 & .99 & 1.0 & .98 & .99 \\ \hline avg & .85 & .73 & .73 & .93 & .76 & .83 & .94 & .93 & .85 & .90 & .90 & .89 & .98 & .91 & .98 \\ \hline avg\textsuperscript{*} & \multicolumn{3}{c|}{.77} & \multicolumn{3}{c|}{.84} & \multicolumn{3}{c|}{.91} & \multicolumn{3}{c|}{.90} & \multicolumn{3}{c}{\textbf{.96}}\\ \hline \multicolumn{13}{c}{} \\ \multicolumn{13}{c}{\textbf{Shelf dataset} $\ (\alpha = 0.5)$} \\ & \multicolumn{3}{c}{\cite{belagiannis20143d}} & \multicolumn{3}{c}{\cite{belagiannis20163d}} & \multicolumn{3}{c}{\cite{ershadi2018multiple}} & \multicolumn{3}{c}{Ours} & \multicolumn{3}{c}{Ours\textsuperscript{+}}\\ \hline Actor & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 \\ \hline ua & .72 & .80 & .91 & .82 & .83 & .93 & .93 & .78 & .94 & .99 & .93 & .97 & 1.0 & .97 & .97 \\ la & .61 & .44 & .89 & .82 & .83 & .93 & .83 & .33 & .90 & .97 & .57 & .95 & .99 & .64 & .96 \\ ul & .37 & .46 & .46 & .43 & .50 & .57 & .96 & .95 & .97 & .998 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ ll & .71 & .72 & .95 & .86 & .79 & .97 & .97 & .93 & .96 & .998 & .99 & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline avg & .60 & .61 & .80 & .73 & .74 & .85 &
.92 & .75 & .94 & .99 & .87 & .98 & .998 & .90 & .98 \\ \hline avg\textsuperscript{*} & \multicolumn{3}{c|}{.67} & \multicolumn{3}{c|}{.77} & \multicolumn{3}{c|}{.87} & \multicolumn{3}{c|}{.95} & \multicolumn{3}{c}{\textbf{.96}}\\ \hline \end{tabular} \end{center} \caption{Quantitative comparison of multi-person 3D pose estimation from multiple views on the evaluation frames of the annotated Campus~\cite{fleuret2007multicamera,belagiannis20143d} and Shelf dataset~\cite{belagiannis20143d}. The numbers are the PCP score in 3D with $\alpha=0.5$. \textit{Ours}\textsuperscript{+} and \textit{Ours} describe our method with and without track smoothing (Section \ref{sec:tracking}). We show results for each of the three actors separately as well as averaged for each method (\textit{average}\textsuperscript{*}). } \label{tab:multihumans} \end{table} \begin{table}[t] \normalsize \begin{center} \begin{tabular}{c| c c } & \multicolumn{2}{c}{Ours\textsuperscript{+}} \\ \hline Actor & 1 & 2 \\ \hline ua & .997 & .98 \\ la & .98 & .996 \\ ul & 1.0 & 1.0 \\ ll & .99 & .997 \\ \hline avg & 0.99 & 0.99 \\ \hline \end{tabular} \end{center} \caption{ Quantitative comparison of multi-person 3D pose estimation from multiple views on \textit{p2\_chair\_2} of the UMPM benchmark~\cite{HICV11:UMPM}. } \label{tab:umpm} \end{table} \label{sec:exp} We evaluate our approach on two human pose estimation tasks, single person 3D pose estimation and multi-person 3D pose estimation, and compare it to state-of-the-art methods. Percentage of correct parts (PCP) in 3D as described in \cite{burenius20133d} is used for evaluation. We evaluate on the limbs only as annotated head poses vary significantly throughout various datasets. In all experiments, the order in which the cameras are processed is given by the dataset. We then evaluate the tracking performance. The source code is made publicly available~\footnote{https://github.com/jutanke/mv3dpose}. 
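For reference, the 3D PCP score used in these experiments can be sketched as follows. This is a simplified reading of the metric in which a limb counts as correct if both estimated endpoints lie within $\alpha$ times the ground-truth limb length of the true endpoints; the exact matching protocol of \cite{burenius20133d} may differ in details, and the limb-array representation is an assumption.

```python
import numpy as np

def pcp3d(gt_limbs, est_limbs, alpha=0.5):
    """3D PCP: a limb is counted as correct if both estimated endpoints
    lie within alpha * (ground-truth limb length) of the true endpoints.
    gt_limbs, est_limbs: (L, 2, 3) arrays of limb endpoint pairs."""
    correct = 0
    for (s, e), (s_hat, e_hat) in zip(gt_limbs, est_limbs):
        thr = alpha * np.linalg.norm(s - e)  # threshold scales with limb length
        if np.linalg.norm(s - s_hat) <= thr and np.linalg.norm(e - e_hat) <= thr:
            correct += 1
    return correct / len(gt_limbs)
```

Averaging this score over the upper/lower arms and legs yields the per-limb rows reported in the tables.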
\subsection{Single Person 3D Pose Estimation} Naturally, the first works on 3D human pose estimation from multiple views cover only single humans. Typical methods~\cite{burenius20133d,kazemi2013multi,pavlakos17harvesting} find a solution over the complete discretized state space, which is intractable for multiple persons. However, we report their results for completeness. All models were evaluated on the complete first sequence of the second player of the KTH Football II~\cite{kazemi2013multi} dataset. Our results are reported in Table \ref{tab:singlehuman}. Our model outperforms all other multi-person approaches and gets close to the state-of-the-art for single human pose estimation~\cite{pavlakos17harvesting}, which makes strong assumptions and is much more constrained. Our model has the lowest accuracy for lower legs (\textit{ll}), which experience strong deformation and high movement speed. This can be mostly attributed to the 2D pose estimation framework, which confuses left and right under motion blur, as can be seen in Figure \ref{fig:earlycommit}. When smoothing the trajectory (Section \ref{sec:tracking}), such errors can be reduced. \subsection{Multi-Person 3D Pose Estimation} To evaluate our model on multi-person 3D pose estimation, we utilize the Campus~\cite{fleuret2007multicamera,belagiannis20143d}, Shelf~\cite{belagiannis20143d}, CMU Panoptic~\cite{Joo_2015_ICCV} and UMPM~\cite{HICV11:UMPM} datasets. The difficulty of the Campus dataset lies in its low resolution ($360\times 288$ pixels), which makes accurate joint detection hard. Furthermore, small errors in triangulation or detection will result in large PCP errors as the final score is calculated on the 3D joint locations. As in previous works~\cite{belagiannis20143d,belagiannis20163d}, we utilize frames $350-470$ and frames $650-750$ of the Campus dataset and frames $300-600$ for the Shelf dataset. Clutter and humans occluding each other make the Shelf dataset challenging.
Nevertheless, our model achieves state-of-the-art results on both datasets by a large margin, as can be seen in Table \ref{tab:multihumans}. Table \ref{tab:umpm} reports quantitative results on video \textit{p2\_chair\_2} of the UMPM~\cite{HICV11:UMPM} benchmark. A sample frame from this benchmark can be seen in Figure \ref{fig:epilines}. As the background is homogeneous and the human actors maintain a considerable distance to each other, the results of our method are quite strong. \subsection{Tracking} \begin{table}[t] \normalsize \begin{center} \begin{tabular}{ c| c | c } \hline & Ours & Ours\textsuperscript{+} \\ \hline 160422\_ultimatum1~\cite{Joo_2015_ICCV} & .89 & .89 \\ 160224\_haggling1~\cite{Joo_2015_ICCV} & .92 & .92 \\ 160906\_pizza1~\cite{Joo_2015_ICCV} & .92 & .93 \\ \hline \end{tabular} \end{center} \caption{Quantitative evaluation of multi-person 3D pose tracking on the CMU Panoptic dataset~\cite{Joo_2015_ICCV} using the MOTA~\cite{bernardin2006multiple} score. \textit{Ours}\textsuperscript{+} and \textit{Ours} describe our method with and without track smoothing (Section \ref{sec:tracking}). } \label{tab:mota} \end{table} For evaluating the tracking accuracy, we utilize the MOTA~\cite{bernardin2006multiple} score which provides a scalar value for the rate of false positives, false negatives, and identity switches of a track. Our model is evaluated on the CMU Panoptic dataset~\cite{Joo_2015_ICCV} which provides multiple interacting people in close proximity. We use videos \textit{160224\_haggling1} with three persons, \textit{160422\_ultimatum1} with up to seven persons, and \textit{160906\_pizza1} with six persons. For the video \textit{160422\_ultimatum1} we use frames $300$ to $3758$, for \textit{160906\_pizza1} we use frames $1000$ to $4458$ and for \textit{160224\_haggling1} we use frames $4209$ to $5315$ and $6440$ to $8200$. The first five HD cameras are used.
Our results are reported in Table \ref{tab:mota}, which shows that our approach yields strong tracking capabilities. \subsection{Effects of Smoothing} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{img/sigma.eps} \end{center} \caption{ PCP score for different smoothing values $\sigma$ for tracking on KTH Football II, Campus, and Shelf. If $\sigma$ is too small, the smoothing has little effect and coincides with the un-smoothed results. When the joint trajectories are smoothed too much, the PCP score drops as well, since the trajectories do not follow the original path anymore. (Larger PCP scores are better) } \label{fig:sigma} \end{figure} As can be seen in Table \ref{tab:singlehuman} and Table \ref{tab:multihumans}, the effects of smoothing can be significant, especially when detection and calibration are noisy, as is the case with the Campus and the KTH Football II dataset. In both datasets, 2D human pose detection is challenging due to low resolution (Campus) or strong motion blur (KTH Football II). Datasets with higher resolution and less motion blur, like the Shelf dataset, do not suffer from these problems as much and as such do not benefit the same way from track smoothing. However, a small gain can still be noted as smoothing also fills in joint detections that could not be triangulated. Figure \ref{fig:sigma} explores different $\sigma$ values for smoothing on the KTH Football II, Campus, and Shelf dataset. It can be seen that smoothing improves the performance regardless of the dataset but that too much smoothing obviously reduces the accuracy. We chose $\sigma=2$ for all our experiments except for the Campus dataset where we set $\sigma=4.2$. The reason for the higher value of $\sigma$ for the Campus dataset is the very low resolution of the images compared to the other datasets, which increases the noise of the estimated 3D joint position by triangulation.
\subsection{Effects of camera order} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{img/perm_var.eps} \end{center} \caption{ PCP score averaged over all subjects for all $120$ camera permutations of the Shelf dataset. The vertical line represents the mean value over all permutations while the dots represent each camera permutation. } \label{fig:permvar} \end{figure} So far we used the given camera order for each dataset, but the order in which views are greedily matched matters, and different orderings might lead to different results. To investigate the impact of the camera order, we evaluated our approach using all $120$ permutations of the $5$ cameras of the Shelf dataset. The results in Figure \ref{fig:permvar} show that the approach is very robust to the order of the camera views. \subsection{Early Commitment} \begin{figure}[t] \begin{center} \includegraphics[width=0.75\linewidth]{img/error.pdf} \end{center} \caption{ Issues with early commitment. As we utilize the 2D pose estimations directly, our method suffers when the predictions yield poor results. In this example the pose estimation model correctly estimates (a) and (c) but confuses left and right on (b) due to motion blur. The resulting 3D pose estimation (d) collapses into the centre of the person. The red limbs represent the right body side while blue limbs represent the left body side. } \label{fig:earlycommit} \end{figure} A failure case happens due to the early commitment of our algorithm with regards to the 2D pose estimation, as can be seen in Figure \ref{fig:earlycommit}. When the pose estimation is unsure about a pose, it still fully commits to its output and disregards uncertainty. This problem occurs due to motion blur, as the network has difficulties deciding between left and right in this case. As our pose estimation model has mostly seen forward-facing persons, it will be more inclined towards predicting a forward-facing person in case of uncertainty.
When left and right of a 2D prediction are incorrectly flipped in at least one of the views, the merged 3D prediction will collapse to the vertical line of the person, resulting in a poor 3D pose estimation. \section{Conclusion} In this work we presented a simple baseline approach for 3D human pose estimation and tracking from multiple calibrated cameras and evaluated it extensively on several 3D multi-camera datasets. Our approach achieves state-of-the-art results in multi-person 3D pose estimation while remaining sufficiently efficient for fast processing. Due to the model's simplicity, some common failure cases can be noted, which can be built upon in future work. For example, confidence maps provided by the 2D pose estimation model could be utilized to prevent left-right flips. Our approach may serve as a baseline for future work.
{ "attr-fineweb-edu": 1.649414, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUexjxK0iCl2n2ATdp
\section{Introduction} \label{sec:Intro} The traveling repairman problem with profits (TRPP) \cite{dewilde_heuristics_2013} is a general model that can be stated as follows. Let $G(V,E)$ be a complete weighted graph, where $V$ is the vertex set consisting of the depot $0$ and the customer set $V_c=\{1, 2, ..., n\}$, and $E=\{(i,j): i,j \in V \}$ is the edge set where each edge $(i,j)$ is associated with a symmetric weight $d_{i,j}=d_{j,i}$ (traveling time). A repairman begins his trip from the depot to collect a time-dependent revenue $p_i-l(i)$ via each visited customer and stops his travels when there is no positive revenue, where $p_i$ represents the profit and $l(i)$ is the waiting time for each customer $i$ ($l(0)=0$). Each customer can be visited at most once. The objective of the TRPP is to find an open Hamiltonian path such that the collected revenue $\sum_{i=0}^m [p_i-l(i)]^+$ is maximized, where $m$ is the number of visited customers and $[p_i-l(i)]^+$ is the larger value between $p_i-l(i)$ and $0$. The multiple traveling repairman problem with profits (MTRPP) generalizes the TRPP by considering multiple repairmen (servers or vehicles) to serve the customers. In the MTRPP, all the repairmen start their trips from the depot to collect a time-dependent revenue independently. Let $K\geq 1$ be the number of repairmen, then a formal solution $\varphi$ consists of $K$ Hamiltonian paths (or routes) $\{X_1, X_2, ..., X_{K}\}$, where each path $X_k=(x_0^k, x_1^k, ..., x_{m_k}^k)$ contains $m_k$ customers ($\bigcup\limits_{k=1}^{K} X_i \subseteq V$ and $X_i \cap X_j$=$\{0\},$ $i\neq j$, $\forall i,j \in \{1, 2, ..., K\}$). The objective function can be defined as follows. \begin{equation}\label{eq:definition} f(\varphi)=\sum_{k=1}^{K} \sum_{i=0}^{m_k} [p_{x_i^k}-l(x_i^k)]^+ \end{equation} The aim of the MTRPP is then to find the solution $\varphi^*$ with a maximal total collected revenue $f(\varphi^*)$. 
If none of the collected revenues $p_{x_i^k}-l(x_i^k)$ is negative, then Equation~(\ref{eq:definition}) can be rewritten as follows. \begin{equation}\label{eq:fast} f(\varphi)=\sum_{k=1}^{K} \sum_{i=0}^{m_k} p_{x_i^k}- \sum_{k=1}^{K} \sum_{i=1}^{m_k} (m_k-i+1)\cdot d_{x^k_{i-1}, x^k_{i}} \end{equation} Equation~(\ref{eq:fast}) is useful for designing the fast evaluation techniques of our search algorithm. Typical real-life applications of the MTRPP concern humanitarian and emergency relief logistics. For instance, in post-disaster relief operations, $K$ homogeneous rescue teams start their trips from the base to deliver emergency supplies and save the survivors of each damaged village or city. Assume that $p_i$ persons need to be rescued in village $i$ and that one person loses his life at each time step. The objective of the rescue teams is then to save as many lives as possible. This application scenario was also mentioned for the TRPP \cite{dewilde_heuristics_2013} with a single rescue team. Clearly, the MTRPP is a more suitable model for real situations where several rescue teams are needed. Existing work on solving the MTRPP as well as some related problems is briefly reviewed as follows. Regarding solution methods for the MTRPP, there are two practical algorithms in the literature. In 2019, Lu et al. \cite{lu2019memetic} proposed the first memetic algorithm (MA-MTRPP) to solve the MTRPP. This algorithm uses a randomized greedy construction phase for solution initialization, a variable neighborhood search for local optimization and a route-based crossover operator for solution recombination. MA-MTRPP was shown to be more efficient than the general CPLEX solver on the 240 benchmark instances introduced in that paper.
In the same year, Avci and Avci \cite{avci_adaptive_2019} developed a mixed-integer linear programming model and suggested an adaptive large neighborhood search approach (ALNS-MTRPP) for the MTRPP, which incorporates a couple of problem-specific destroy operators and two new randomized repair operators. The authors proposed another set of 230 benchmark instances and a greedy randomized adaptive search procedure with iterated local search (GRASP-ILS), which was used as a reference heuristic. According to the experimental results, ALNS-MTRPP appears to be more effective and time-efficient than GRASP-ILS for most instances. The closely related TRPP is a special case of the MTRPP with a single repairman ($K=1$). Several heuristic algorithms were proposed to solve the TRPP. In 2013, Dewilde et al. \cite{dewilde_heuristics_2013} first proposed a tabu search algorithm incorporating multiple neighborhoods and a greedy initialization procedure. In 2017, Avci and Avci \cite{avci_grasp_2017} suggested a greedy randomized adaptive search procedure combined with iterated local search, which outperformed the previous algorithms by updating 46 best-known results. In 2019, Lu et al. \cite{lu_hybrid_2019} introduced a population-based hybrid evolutionary search algorithm, which provided better results than the previous algorithms. In 2020, Pei et al. \cite{pei_solving_2020} developed a general variable neighborhood search approach integrating auxiliary data structures to improve the search efficiency. This algorithm dominated all the previous algorithms by updating 40 best-known results and matching the best-known results for the remaining instances. As we show in the current work, these auxiliary data structures can be beneficially extended to the MTRPP to design fast evaluation techniques for the generalized problem.
The team orienteering problem (TOP) \cite{chao1996team} is another related problem, in which a fixed number of homogeneous vehicles visit a subset of customers to maximize the collected profits under a traveling distance constraint. Different from the MTRPP, the profits of customers in the TOP are time-independent and there are distance constraints for the vehicles. Various solution methods were developed for the TOP, including local search algorithms \cite{vansteenwegen2009guided, hammami2020hybrid, tsakirakis2019similarity}, population-based algorithms \cite{bouly2010memetic, zettam2016novel, dang2013effective} and exact methods based on branch-and-price and the cutting plane technique \cite{boussier2007exact, bianchessi2018branch, el2016solving, poggi2010team}. The cumulative capacitated vehicle routing problem (CCVRP) is also related to the MTRPP, by considering capacity constraints for the $K$ repairmen (or vehicles). Popular algorithms for this problem include an evolutionary algorithm \cite{ngueveu2010effective}, an adaptive large neighborhood search heuristic \cite{ribeiro2012adaptive}, a two-phase metaheuristic \cite{ke2013two}, iterated greedy algorithms \cite{nucamendi2018cumulative}, and a branch-and-cut-and-price algorithm \cite{lysgaard2014branch}. The MTRPP (with multiple repairmen) is a more realistic model compared to the TRPP (with a single repairman) for real-life applications. However, only two principal heuristics have been designed for the MTRPP, contrary to the case of the TRPP, for which numerous solution methods exist. There is thus a need to enrich the solution tools for this relevant problem. Moreover, the two existing algorithms for the MTRPP are rather sophisticated and rely on many parameters (13 parameters for ALNS-MTRPP and 7 for MA-MTRPP). In addition, ALNS-MTRPP has difficulties in handling large-scale instances (e.g., it requires 3 hours to solve the 1000-customer instances).
In this work, we propose an easy-to-use (with only 3 parameters) and effective hybrid search algorithm based on the memetic framework to solve the MTRPP (named EHSA-MTRPP). We summarize the contributions as follows. First, we propose an original arc-based crossover (\emph{ABX}), which is inspired by experimental observations and backbone-based heuristics \cite{zhang2004configuration, wang2013backbone}. ABX is able to generate promising offspring solutions from high-quality parent solutions. Second, to ensure high computational efficiency, we introduce a series of data structures to reduce the complexity of neighborhood examination and prove that evaluating one neighboring solution in the underlying neighborhoods for the MTRPP can be performed in constant time. Third, we provide new lower bounds for 137 instances out of the 470 benchmark instances in the literature. These bounds can be used for future studies on the MTRPP. Finally, we will make the source code of the proposed algorithm publicly available to the community. People working on the MTRPP and related applications can use the code to solve their problems. This fills the gap that no code for solving the MTRPP is currently available. In the next section, we present the proposed algorithm. In Section~\ref{sec:Experi}, we show the experimental setup, parameter tuning and computational results, followed by an investigation of the key components of the algorithm in Section~\ref{sec:Analyse}. We draw conclusions and provide research perspectives in the last section (Section~\ref{sec:Conclu}).
\section{Method} \label{sec:Algo} \subsection{Main scheme}\label{subsec:Algo:main} The proposed hybrid search algorithm for the MTRPP is based on the framework of the memetic algorithm \cite{moscato1999memetic} and relies on five search components: a population initialization procedure (\emph{IniPool}), a variable neighborhood search procedure (\emph{VNS}) to perform the local refinement, a perturbation procedure (\emph{Spert}) to help escape from the local optimum, an arc-based crossover (\emph{ABX}) to generate high-quality offspring solutions and a pool updating procedure (\emph{UpdatingPool}) to manage the population with newly obtained solutions. \begin{algorithm}[!tb] \begin{small} \caption{The general scheme of the EHSA-MTRPP algorithm} \label{Algo:EHSA-MTRPP} \begin{algorithmic}[1] \renewcommand{\algorithmiccomment}[1]{\hfill\textrm{// \small{#1}}} \STATE \sf \textbf{Input}: Input graph $G(V,E)$, population size $Nump$, search limit $Limi$, objective function $f$ and maximum allowed time $T_{max}$ \STATE \textbf{Output}: Best found solution $\varphi^*$ \STATE /* \emph{IniPool} is used to generate initial population. */ \STATE /* \emph{VNS} is used to perform the local refinement. */ \STATE /* \emph{Spert} is used to modify (slightly) the input local optimum. */ \STATE /* \emph{ABX} is used to generate promising offspring solutions. */ \STATE /* \emph{UpdatingPool} is used to update the population. */ \STATE $P=\{\varphi_1, ... 
\varphi_{Nump}\} \leftarrow$ \emph{IniPool}() \COMMENT{See Section \ref{subsec:Algo:Greedy}} \FOR{$i \gets 1$ to $Nump$} \STATE $\varphi_i \leftarrow$ \emph{VNS}($\varphi_i$) \COMMENT{See Section \ref{subsec:Algo:VNS}} \ENDFOR \STATE $\varphi^*\leftarrow \arg\max\{f(\varphi_i), i=1, ..., Nump\} $ \WHILE{$T_{max}$ is not reached} \STATE $C\leftarrow 0$ \STATE ($\varphi_a, \varphi_b$)$\leftarrow$ RandomChoose($P$) \STATE $\varphi\leftarrow$ \emph{ABX($\varphi_a, \varphi_b$)} \COMMENT{See Section \ref{subsec:Algo:abx}} \STATE $\varphi_{lb}\leftarrow \varphi$ \REPEAT \STATE $\varphi \leftarrow$ \emph{VNS}($\varphi$) \IF{$f(\varphi)>f(\varphi_{lb})$} \STATE $\varphi_{lb} \leftarrow \varphi$ \STATE $C\leftarrow 0$ \ELSE \STATE $C\leftarrow C+1$ \ENDIF \STATE $\varphi\leftarrow$ \emph{Spert}($\varphi$) \COMMENT{See Section \ref{subsec:Algo:Sperb}} \UNTIL{$C\geq Limi$} \STATE \emph{UpdatingPool}($\varphi_{lb}, P$) \COMMENT{See Section \ref{subsec:Algo:updating}} \IF{$f(\varphi_{lb})>f(\varphi^*)$} \STATE $\varphi^* \leftarrow \varphi_{lb}$ \ENDIF \ENDWHILE \RETURN $\varphi^*$ \end{algorithmic} \end{small} \end{algorithm} Algorithm~\ref{Algo:EHSA-MTRPP} shows the general scheme of the EHSA-MTRPP algorithm. At first, the algorithm calls \emph{IniPool} (See Section~\ref{subsec:Algo:Greedy}) to create the population $P$, where each solution $\varphi_i$ is improved by \emph{VNS} (See Section~\ref{subsec:Algo:VNS}) and the best one is recorded in $\varphi^*$ (lines 8-12). Then the algorithm enters the main search procedure (lines 13-32). At each iteration of the while loop, we set $C$ to 0 (line 14), randomly choose two different solutions $\varphi_a$ and $\varphi_b$ from the population $P$ and generate an offspring solution $\varphi$ (lines 15-16) with \emph{ABX} (See Section~\ref{subsec:Algo:abx}). After recording $\varphi$ in $\varphi_{lb}$, the algorithm enters the inner loop (lines 18-27) to explore new solutions by iterating the \emph{VNS} procedure and the \emph{Spert} procedure.
In each iteration of the inner loop, the current solution $\varphi$ is first improved by \emph{VNS} (line 19) and then used to update the local best solution $\varphi_{lb}$. If $\varphi$ is better than $\varphi_{lb}$, $\varphi_{lb}$ is updated and the counter $C$ is reset to 0 (lines 20-22). Otherwise, $C$ is incremented by 1 (lines 23-25). Then the perturbation procedure \emph{Spert} is triggered to displace the search from the local optimum (line 26). The above procedures are repeated until $C$ reaches the search limit $Limi$ (line 27), indicating that the search is exhausted (and trapped in a deep local optimum). After the inner loop, the local best solution $\varphi_{lb}$ is used to update the population (line 28) and the best found solution $\varphi^*$ (lines 29-31). When the cutoff time $T_{max}$ is reached (line 13), the whole algorithm stops and returns the best recorded solution $\varphi^*$ (line 33). \subsection{Initial population}\label{subsec:Algo:Greedy} The initial population is filled with two types of solutions: half of them are created with a randomized construction method while the remaining solutions are generated with a greedy construction method. For the randomized construction method, we first create a giant tour with all the customers in a random order. Then we separate the giant tour into $K$ routes, where each route has the same number of customers. This leads to a complete solution $\varphi$. We also employed the greedy construction method of \cite{avci_adaptive_2019}. Starting from an empty solution $\varphi$ with $K$ routes and a vertex list $V_r=\{ 1, 2, ..., n\}$, the greedy construction method iteratively adds one vertex into the solution following a greedy randomized principle. At each step, we evaluate the objective variation of the solution $\varphi$ for each operation $Ope(v, k)$, which represents adding $v\in V_r$ to the route $k$.
Then we construct a candidate set $OPE_c$ consisting of the $q$ operations with the largest contributions to the objective value. Finally, a random operation $Ope(v, k)\in OPE_c$ is carried out to extend the partial solution and the vertex $v$ is removed from $V_r$. These steps are repeated until all the customers are added into the solution. The parameter $q$ is set to 3 here. For more details, please refer to \cite{avci_adaptive_2019}. \subsection{Solution improvement by variable neighborhood search}\label{subsec:Algo:VNS} For local optimization, we adopt the general Variable Neighborhood Search (VNS) method \cite{mladenovic1997variable}. Indeed, this method has proved to be quite successful for both the TRPP \cite{pei_solving_2020, lu_hybrid_2019, avci_grasp_2017} and the MTRPP \cite{lu2019memetic}. Our \emph{VNS} procedure for the MTRPP is presented in Algorithm~\ref{Algo:VNS}. \begin{algorithm}[!tb] \begin{small} \caption{Local optimization with VNS} \label{Algo:VNS} \begin{algorithmic}[1] \renewcommand{\algorithmiccomment}[1]{\hfill\textrm{// \small{#1}}} \STATE \sf \textbf{Input}: Objective function $f$ and current solution $\varphi$ \STATE \textbf{Output}: Local best solution $\varphi$ \STATE /* $N_1, N_2, N_3, N_4$ represent respectively $Swap$, $Insert$, $2\mbox{-}opt$ and $Or\mbox{-}opt$ neighborhoods. */ \STATE /* $N_5, N_6, N_7$ represent respectively $Inter\mbox{-}Swap$, $Inter\mbox{-}Insert$ and $Inter\mbox{-}2\mbox{-}opt$ neighborhoods. */ \STATE /* $N_{Add}, N_{Drop} $ denote $Add$ and $Drop$ neighborhoods.
*/ \REPEAT \STATE $\varphi^\prime\leftarrow$ $\varphi$ \STATE $S_N\leftarrow \{N_1,N_2,N_3,N_4,N_5, N_6, N_7\}$ \STATE $\varphi \leftarrow LocalSearch(\varphi, N_{Add})$ \WHILE{$S_N\neq\emptyset$} \STATE Randomly choose a neighborhood $N\in S_N$ \STATE $\varphi\leftarrow LocalSearch(\varphi, N)$ \STATE $\varphi \leftarrow LocalSearch(\varphi, N_{Drop})$ \STATE $S_N\leftarrow S_N\setminus \{N\}$ \ENDWHILE \UNTIL{$f(\varphi^\prime)\geq f(\varphi)$} \RETURN $\varphi$ \end{algorithmic} \end{small} \end{algorithm} In the outer loop (lines 6-16), we first initialize the recorded solution $\varphi^\prime$ with the current solution $\varphi$ and the neighborhood set $S_N$ with 7 different neighborhoods $N_1\mbox{-}N_7$ (lines 7-8). After a local search procedure based on $N_{Add}$ with the current solution (line 9), the search enters the inner loop to explore local best solutions by alternating different neighborhoods (lines 10-15). In each inner loop iteration, we randomly choose a neighborhood $N\in S_N$ and use it to carry out a local optimization from the current solution (lines 11-12). Then, an additional local optimization based on $N_{Drop}$ is performed and the neighborhood $N$ is removed from the neighborhood set $S_N$ (lines 13-14). When the neighborhood set $S_N$ has been fully explored ($S_N =\emptyset$), the inner loop ends. These steps are repeated until no improving solution exists in the neighborhoods (line 16). Finally, $\varphi$ is returned (line 17). Our \emph{VNS} procedure exploits three sets of 9 neighborhoods, seven of which are also employed in \cite{lu2019memetic, avci_adaptive_2019}. The first set of four neighborhoods changes the order of customers in one route: \begin{itemize} \item[$\bullet$] $Swap$ ($N_1$): Exchanging the visiting positions of two customers in one route. \item[$\bullet$] $Insert$ ($N_2$): Removing one customer from its position and inserting it between two adjacent nodes in the same route.
\item[$\bullet$] $2\mbox{-}opt$ ($N_3$): Removing two non-adjacent edges and replacing them with two new edges in the same route. \item[$\bullet$] $Or\mbox{-}opt$ ($N_4$): Removing a block of $h$ ($h=2,3$) consecutive customers from one route and inserting them between two adjacent nodes in the same route. \end{itemize} The second set of three neighborhoods is designed to change the customers between different routes: \begin{itemize} \item[$\bullet$] $Inter\mbox{-}Swap$ ($N_5$): Exchanging the positions of two customers in two different routes. \item[$\bullet$] $Inter\mbox{-}Insert$ ($N_6$): Removing one customer from one route and inserting it between two adjacent nodes in another route. \item[$\bullet$] $Inter\mbox{-}2\mbox{-}opt$ ($N_7$): Removing two edges from two different routes and replacing them with two new edges. A simple illustration is presented in Figure~\ref{fig:inter2opt}. \end{itemize} \begin{figure} \centering \includegraphics[scale=0.3]{EPS/2_inter2opt.png} \caption{Illustration of $Inter\mbox{-}2\mbox{-}opt$: supposing two routes $X_a$ (marked in blue) and $X_b$ (marked in orange), operating an $Inter\mbox{-}2\mbox{-}opt$ produces two new routes $X_a^\prime$ and $X_b^\prime$, where the blue dotted lines represent the edges to remove.} \label{fig:inter2opt} \end{figure} The third set of two neighborhoods changes the set of visited customers: \begin{itemize} \item[$\bullet$] $Add$ ($N_{Add}$): Adding one unselected customer to some position of some route. \item[$\bullet$] $Drop$ ($N_{Drop}$): Removing one customer from one route. \end{itemize} It is interesting to note that Pei et al. \cite{pei_solving_2020} introduced a series of data structures to realize fast evaluation of the neighboring solutions in the neighborhoods $N_1\mbox{-}N_4$, $N_{Add}$ and $N_{Drop}$ for solving the related TRPP.
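To give a flavor of the constant-time move evaluation that such data structures enable, the sketch below is our own simplified illustration (not the exact arrays of Pei et al.); it assumes a single route whose collected revenues all stay non-negative, so that Equation~(\ref{eq:fast}) applies, and evaluates every $Drop$ move in $O(1)$ each after an $O(m)$ prefix-sum precomputation:

```python
def route_value(route, dist, profit):
    # Per-route part of Equation (2), assuming all collected revenues
    # along the route are non-negative.
    m = len(route)
    total, prev = 0, 0
    for k, v in enumerate(route, start=1):
        total += profit[v] - (m - k + 1) * dist[prev][v]
        prev = v
    return total

def drop_deltas(route, dist, profit):
    # O(m) precomputation: D[i] = sum of the first i consecutive distances.
    m = len(route)
    D = [0] * (m + 1)
    prev = 0
    for k, v in enumerate(route, start=1):
        D[k] = D[k - 1] + dist[prev][v]
        prev = v
    # Each Drop move (removing the customer at position i) is then
    # evaluated in O(1) from D and the three arcs around position i.
    deltas = []
    for i in range(1, m + 1):
        xi = route[i - 1]
        prev = route[i - 2] if i > 1 else 0
        delta = -profit[xi] + D[i - 1] + (m - i + 1) * dist[prev][xi]
        if i < m:  # interior drop: bridge prev -> successor of xi
            nxt = route[i]
            delta += (m - i) * (dist[xi][nxt] - dist[prev][nxt])
        deltas.append(delta)
    return deltas
```

Exploring the whole $Drop$ neighborhood of a route thus costs $O(m)$ in total; the auxiliary arrays used in the algorithm generalize this idea to the other move types.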
Here, we extend their method to the neighborhoods\footnote{The neighborhoods $N_5\mbox{-}N_7$ are not applied and studied in \cite{pei_solving_2020}.} for the MTRPP. In practice, evaluating each neighboring solution in our algorithm can be finished in $O(1)$, which is more efficient than the reference algorithms in the literature \cite{lu2019memetic, avci_adaptive_2019}. The detailed comparisons of complexities for exploring different neighborhoods between the reference algorithms and the proposed algorithm are discussed in Section~\ref{subsec:Algo:Novel}. The complexities of exploring the aforementioned neighborhoods are summarized as follows. \begin{mypro}\label{pro:1} For the first set of four neighborhoods ($N_1\mbox{-}N_4$) and the third set of two neighborhoods ($N_{Add}$ and $N_{Drop}$) for the MTRPP, the complexity of evaluating each neighboring solution is $O(1)$. Let $n$ be the number of all customers and $m$ be the number of visited customers in the solution. The time complexities of exploring these neighborhoods are given as follows. \begin{itemize} \item[a)] Exploring the complete $Swap$ neighborhood requires $O(m^2)$. \item[b)] Exploring the complete $Insert$ neighborhood requires $O(m^2)$. \item[c)] Exploring the complete $2\mbox{-}opt$ neighborhood requires $O(m^2)$. \item[d)] Exploring the complete $Or\mbox{-}opt$ neighborhood requires $O(m^2\cdot h)$. \item[e)] Exploring the complete $Add$ neighborhood requires $O(m\cdot (n-m))$. \item[f)] Exploring the complete $Drop$ neighborhood requires $O(m)$. \end{itemize} \end{mypro} \begin{mypro}\label{pro:2} For the second set of three neighborhoods ($N_5\mbox{-}N_7$), evaluating each neighboring solution can be done in $O(1)$. Let m be the number of visited customers in the solution. The time complexities of exploring these neighborhoods are summarized as follows. \begin{itemize} \item[a)] Exploring the complete $Inter\mbox{-}Swap$ neighborhood can be finished in $O(m^2)$. 
\item[b)] Exploring the complete $Inter\mbox{-}Insert$ neighborhood can be finished in $O(m^2)$. \item[c)] Exploring the complete $Inter\mbox{-}2\mbox{-}opt$ neighborhood can be finished in $O(m^2)$. \end{itemize} \end{mypro} Detailed proofs of Propositions~\ref{pro:1} and \ref{pro:2} are presented in Appendix~\ref{app1:proof}. With Equation~(\ref{eq:fast}) and a special array in Equation~(\ref{eq:data}), we can efficiently explore the aforementioned neighborhoods. It is worth noting that the collected revenues $p_i-l(i)$ of some customers may become negative during the search process, while only non-negative revenues are taken into consideration by Equation~(\ref{eq:fast}). Therefore, a local optimization based on the $Drop$ operator (line 13 in Algorithm~\ref{Algo:VNS}) is performed after the other neighborhoods to eliminate such customers. \subsection{Perturbation procedure}\label{subsec:Algo:Sperb} In order to help the search escape from deep local optima, we apply the two operators $Insert$ and $Add$ to perturb the local optimum. We first perform the $Insert$ operation $St$ times, each time randomly choosing a route and inserting some customer at a random position in the route. For the $Add$ operation, we randomly add an unvisited customer to the tail of a random route; this procedure is repeated until all the unvisited customers are added to the solution. The parameter $St$ is determined by the experiments in Section~\ref{subsec:Experi:setup}. We also tested other perturbation methods, but the proposed method proved to be the best. \subsection{Arc-based crossover} \label{subsec:Algo:abx} Memetic algorithms employ crossovers to generate diversified offspring solutions from parent solutions at each generation. Generally, a meaningful crossover is expected to be able to inherit useful attributes of the parent solutions and maintain some diversity with respect to the parents \cite{hao2012memetic}.
Preliminary experiments showed that the same arcs frequently appear in high-quality solutions (See Section~\ref{subsec:Ana:rational}), which naturally encourages us to preserve these shared arcs (meaningful components) in the offspring solution\footnote{Note that the idea of preserving and transferring the shared components from the parents to the offspring is the basis of backbone-based crossovers \cite{zhang2004configuration, wang2013backbone}.}. Following this observation, we propose a dedicated arc-based crossover for the MTRPP. For a given solution $\varphi$ with $K$ paths $\{X_1, ..., X_K\}$, where each path $X_k=(x_0^k, ..., x_{m_k}^k )$ contains $m_k$ customers, the corresponding arc set $A$ is defined as follows: \begin{equation} A=\{(x_i^k,x_{i+1}^k): x_i^k, x_{i+1}^k\in X_k, i\in[0, m_k-1], k\in [1,K] \} \end{equation} Given two parent solutions $\varphi_s$ and $\varphi_t$, let $V_s$ and $V_t$ represent their sets of selected customers, respectively, and $A_s$ and $A_t$ represent their corresponding arc sets. The arc-based crossover first copies one parent solution (say $\varphi_s$) to the offspring solution, then randomly inserts fifty percent of the non-shared arcs of $\varphi_t$ ($A_t\setminus A_s$) into the offspring solution, and finally removes the duplicated vertices if needed. \begin{algorithm}[!tb] \begin{small} \caption{The arc-based crossover (\emph{ABX})} \label{Algo:ABX} \begin{algorithmic}[1] \renewcommand{\algorithmiccomment}[1]{\hfill\textrm{// \small{#1}}} \STATE \sf \textbf{Input}: Input graph $G(V,E)$, parent solutions $\varphi_s$ and $\varphi_t$ \STATE \textbf{Output}: Offspring solution $\varphi_o$ \STATE /* RandomSel-Half-Arcs randomly selects 50\% of arcs from a given set of arcs. */ \STATE /* $V_o$ is the set of the selected customers for $\varphi_o$. */ \STATE /* $V_f$ is the set of nodes that will not be removed or inserted at other positions in future operations.
*/ \STATE $\varphi_o\leftarrow \varphi_s$ \STATE $V_o\leftarrow V_s$ \STATE $V_{f}\leftarrow\emptyset$ \STATE $A_u\leftarrow$ RandomSel-Half-Arcs($A_t\setminus A_s$) \FOR {Each arc $(a,b)\in A_s\cap A_t$} \STATE $V_{f}\leftarrow V_{f}\cup \{a\}$ \STATE $V_{f}\leftarrow V_{f}\cup \{b\}$ \ENDFOR \FOR {Each arc $(a,b)\in A_u$} \IF{$a\notin V_{o}$ and $b\notin V_{o}$} \STATE Insert $(a,b)$ to the tail of some route in $\varphi_o$ \STATE $V_{o}\leftarrow V_{o}\cup \{a\}$ \STATE $V_{o}\leftarrow V_{o}\cup \{b\}$ \ELSIF {$a\in V_{o}$ and $b\notin V_{o}$} \STATE Insert $b$ to the position after $a$ in $\varphi_o$ \STATE $V_{o}\leftarrow V_{o}\cup \{b\}$ \ELSIF {$a\notin V_{o}$ and $b\in V_{o}$} \STATE Insert $a$ to the position before $b$ in $\varphi_o$ \STATE $V_{o}\leftarrow V_{o}\cup \{a\}$ \ELSIF{$b\notin V_f$ } \STATE Remove $b$ from $\varphi_o$ \STATE Insert $b$ to the position after $a$ in $\varphi_o$ \ELSIF{$a\notin V_f$ and $b\in V_f$ } \STATE Remove $a$ from $\varphi_o$ \STATE Insert $a$ to the position before $b$ in $\varphi_o$ \ENDIF \STATE $V_{f}\leftarrow V_{f}\cup \{a\}$ \STATE $V_{f}\leftarrow V_{f}\cup \{b\}$ \ENDFOR \RETURN $\varphi_o$ \end{algorithmic} \end{small} \end{algorithm} The proposed \emph{ABX} crossover is presented in Algorithm~\ref{Algo:ABX}. It starts by copying $\varphi_s$ to $\varphi_o$, copying $V_s$ to $V_o$, initializing $V_f$ as empty and generating an arc set $A_u$ by randomly selecting 50\% of the arcs from $A_t\setminus A_s$ (lines 6-9). To preserve the shared arcs $(a,b)\in A_s \cap A_t$ in the offspring solution $\varphi_o$, we then add the vertices of these arcs into the set $V_f$ (lines 10-13). The vertices in $V_f$ will not be considered in future operations.
After that, we insert each arc $(a,b)\in A_u$ into the offspring solution and remove the duplicated vertices (lines 14-34) according to the following conditions: \begin{itemize} \item[1)] If neither $a$ nor $b$ is included in $V_o$, the arc $(a,b)$ is added to the tail of some route and the two nodes are added into $V_o$ (lines 15-18). \item[2)] If only $a$ is contained in $V_o$, the node $b$ is inserted at the position after $a$ in $\varphi_o$ and is added into $V_o$ (lines 19-21). \item[3)] If only $b$ belongs to $V_o$, the node $a$ is inserted at the position before $b$ in $\varphi_o$ and is added into $V_o$ (lines 22-24). \item[4)] If the two nodes are already in $V_o$ and $b$ is not in $V_f$, we remove $b$ from $\varphi_o$ and insert it at the position after $a$ (lines 25-27). \item[5)] If the two nodes are already in $V_o$ and $a$ is not in $V_f$, we remove $a$ from $\varphi_o$ and insert it at the position before $b$ (lines 28-31). \end{itemize} Both $a$ and $b$ are added into the set $V_f$ after the aforementioned operations (lines 32-33), and the whole loop ends when all the arcs $(a,b)\in A_u$ are added into the offspring. Finally, the newly generated offspring $\varphi_o$ is returned (line 35). Figure~\ref{fig:abx} shows an illustrative example of the proposed crossover. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{EPS/abx.png} \caption{Illustration of \emph{ABX} on an 18-customer instance with two routes. $\varphi_s$ and $\varphi_t$ are two parent solutions, $\varphi_o$ is the solution copied from $\varphi_s$ and $\varphi_o^\prime$ is the generated offspring solution. $A_u$ is the set of arcs, which are randomly selected from the non-shared arcs of $\varphi_t$. $V\setminus V_i$ represents the set of unselected customers for solution $\varphi_i$, which can be $\varphi_s$, $\varphi_t$, $\varphi_o$ and $\varphi_o^\prime$.
The nodes of the shared arcs between $\varphi_s$ and $\varphi_t$ are marked in blue, while the nodes involved in insertions and removals are marked in red.} \label{fig:abx} \end{figure} \subsection{Pool updating} \label{subsec:Algo:updating} After the improvement of the offspring solution by the local refinement procedure, the population is updated with the improved offspring solution $\varphi_{lb}$ (See line 28 in Algorithm~\ref{Algo:EHSA-MTRPP}). In this work, we employ a simple strategy: if $\varphi_{lb}$ is different from all the solutions in the population and better than some solution in terms of the objective value, $\varphi_{lb}$ replaces the worst solution in the population. Otherwise, $\varphi_{lb}$ is abandoned. We also tested other population updating strategies such as a diversification-quality strategy\footnote{This strategy considers not only the quality of the newly obtained solution but also its average distance to the other solutions to determine whether to accept $\varphi_{lb}$ into the population.}. However, the simple updating strategy showed better performance. \subsection{Discussion}\label{subsec:Algo:Novel} EHSA-MTRPP distinguishes itself from the reference algorithms \cite{lu2019memetic, avci_adaptive_2019} in two aspects. First, EHSA-MTRPP is the first algorithm to employ fast neighborhood evaluation techniques in its local optimization procedure for the MTRPP. These evaluation techniques ensure a higher computational efficiency of neighborhood examination compared to the existing algorithms ALNS-MTRPP \cite{avci_adaptive_2019} and MA-MTRPP \cite{lu2019memetic}. To illustrate this point, Table~\ref{tab:sum:neighborhood} summarizes the different neighborhoods as well as the complexities of exploring each neighborhood in ALNS-MTRPP, MA-MTRPP and the proposed EHSA-MTRPP.
\begin{table}[htbp] \centering \caption{Summary of the neighborhood structures as well as their complexities in the reference algorithms and the proposed algorithm, where $n$ depicts the number of customers, $m$ is the number of the selected customers and $h$ is the number of consecutive customers in the block for $N_4$.} \label{tab:sum:neighborhood} \renewcommand{\arraystretch}{1.6} \setlength{\tabcolsep}{1.3mm} \begin{scriptsize} \begin{tabular}{llcclcclcc} \noalign{\smallskip} \hline \multirow{2}{*}{Neighborhood} & & \multicolumn{2}{c}{ALNS-MTRPP \cite{avci_adaptive_2019}} & & \multicolumn{2}{c}{MA-MTRPP \cite{lu2019memetic}} & & \multicolumn{2}{c}{EHSA-MTRPP} \\ \cline{3-4} \cline{6-7} \cline{9-10} & & Employment & Complexity & & Employment & Complexity & & Employment & Complexity \\ \hline $Swap$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $Insert$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $2\mbox{-}opt$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $Or\mbox{-}opt$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2\cdot h)$ \\ $Inter\mbox{-}Swap$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $Inter\mbox{-}Insert$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $Inter\mbox{-}2\mbox{-}opt$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \Checkmark & $O(m^2)$ \\ $Inter\mbox{-}Or\mbox{-}opt$ & & \Checkmark & $O(m^2\lg n)$ & &\Checkmark & $O(m^3)$ & & \XSolid & - \\ $Add$ & & \XSolid & - & &\XSolid & - & & \Checkmark & $O(m\cdot (n-m))$ \\ $Drop$ & & \XSolid & - & &\XSolid & - & & \Checkmark & $O(m)$ \\ $Double\mbox{-}bridge$ & & \XSolid & - & &\Checkmark & $O(m^3)$ & & \XSolid & - \\ \hline \end{tabular} \end{scriptsize} \label{tab:parameter} \end{table} From Table~\ref{tab:sum:neighborhood}, we clearly remark that the proposed 
algorithm explores the neighborhoods more efficiently. It is worth noting that we also proved that evaluating one neighboring solution in the $Inter\mbox{-}Or\mbox{-}opt$ and $Double\mbox{-}bridge$\footnote{$Double\mbox{-}bridge$ is a popular operator for the traveling salesman problem \cite{lin1973effective}.} based neighborhoods can be done in $O(1)$ with the help of Equation~(\ref{eq:fast}) and the auxiliary data structures. However, we observed that these neighborhoods are not helpful in improving the performance of our algorithm. Therefore, these two neighborhoods are not employed in the proposed algorithm. Additionally, the proposed algorithm adopts a dedicated arc-based crossover, which is able to generate new offspring solutions that inherit meaningful components (the shared arcs) while remaining diversified with respect to the parent solutions. MA-MTRPP \cite{lu2019memetic} applies a route-based crossover (RBX) \cite{potvin1996vehicle}, which simply copies one parent solution to the offspring solution, replaces some route of the offspring solution with a route from another parent solution and removes the duplicated vertices if needed. Our experiments and observations showed that the key components of the solutions are the `arcs' rather than the `routes', making ABX more appropriate than RBX for solving the MTRPP. Experimental results in Section~\ref{subsec:Analyse:rbx} confirm these observations and demonstrate the effectiveness of the proposed algorithm with ABX compared to its variant with RBX. \section{Computational results and comparative study}\label{sec:Experi} This section presents computational experiments on the benchmark instances in the literature to evaluate the EHSA-MTRPP algorithm.
\subsection{Instances, reference algorithms and parameter setting} \label{subsec:Experi:setup} Our computational experiments are based on two groups of 470 benchmark instances\footnote{The instances are collected from the authors of \cite{avci_adaptive_2019,lu2019memetic} and can be downloaded from \url{https://github.com/REN-Jintong/MTRPP}.}, including 230 instances proposed by Avci and Avci \cite{avci_adaptive_2019} (denoted by Ins\_Avci) and 240 instances proposed by Lu et al. \cite{lu2019memetic} (denoted by Ins\_Lu). The 230 Ins\_Avci instances are divided into 14 sets according to the number of customers and servers (repairmen). The first ten sets are converted from instances of the TRPP \cite{dewilde2013heuristics} ($n$=10, 20, 50, 100, 200) by considering two and three servers. As each set of TRPP instances is composed of 20 instances, the authors created $5\times 2 \times 20=200$ instances. The other four sets are of larger sizes, including 10 instances with 500 customers and 10 servers, 10 instances with 500 customers and 20 servers, 5 instances with 750 customers and 100 servers, and 5 instances with 1000 customers and 50 servers. The 240 Ins\_Lu instances are also based on instances of the TRPP \cite{dewilde2013heuristics}, and are divided into 12 sets with $n$=20, 50, 100, 200 and two, three and four servers. Unlike the Ins\_Avci instances, Lu et al. adjusted the profit of each customer so that a high-quality solution serves about 75\% to 95\% of all the customers. For this purpose, each customer $i$ is assigned a non-negative profit $p_i$, a random integer between $\lceil d_{0, i} \rceil$ and $\lceil \frac{n}{k} \times \frac{\sum_{(i,j)\in E} d_{i,j}}{|E|} \rceil$, where $n$ is the number of all customers and $k$ is the number of servers.
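The profit-generation rule of Lu et al. described above can be sketched as follows. The distance matrix and function names are ours, and a small guard is added for the corner case where $\lceil d_{0,i}\rceil$ exceeds the upper bound (the original generation procedure may handle this differently):

```python
import math
import random

def generate_profits(d, k, seed=0):
    """d: (n+1)x(n+1) symmetric distance matrix with vertex 0 as the depot;
    k: number of servers.  Returns a profit p_i for each customer i."""
    rng = random.Random(seed)
    n = len(d) - 1                                   # number of customers
    # Mean edge length over all vertex pairs of the complete graph.
    edges = [d[i][j] for i in range(len(d)) for j in range(i + 1, len(d))]
    upper = math.ceil((n / k) * (sum(edges) / len(edges)))
    profits = {}
    for i in range(1, n + 1):
        lo = math.ceil(d[0][i])
        # Guard (our assumption): a remote customer may have ceil(d_{0,i}) > upper.
        profits[i] = rng.randint(lo, max(lo, upper))
    return profits
```

The lower bound $\lceil d_{0,i}\rceil$ makes it profitable, in isolation, to serve a customer directly from the depot, while the upper bound ties the profits to the average travel cost per server.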
The proposed EHSA-MTRPP algorithm was programmed in C++ and compiled with the g++ 7.5.0 compiler and the -O3 optimization flag\footnote{The source code will be made available at \url{https://github.com/REN-Jintong/MTRPP} upon the publication of this work.}. The experiments were carried out on a computer with an Intel Xeon(R) E5-2695 processor (2.1 GHz CPU and 2 GB RAM), which is slower than the 2.4GHz computers used in the literature \cite{avci_adaptive_2019, lu2019memetic}. The reference results in the literature were reported in \cite{avci_adaptive_2019} for the 230 Ins\_Avci instances with the ALNS-MTRPP algorithm and in \cite{lu2019memetic} for the 240 Ins\_Lu instances with the MA-MTRPP algorithm. Unfortunately, the source codes of these reference algorithms are not available. Therefore, to ensure a fair comparison, we performed two different experiments on the two groups of instances. Following the experimental setup of \cite{avci_adaptive_2019, lu2019memetic}, EHSA-MTRPP was run 5 times independently with different seeds on each Ins\_Avci instance and 10 times on each Ins\_Lu instance. The stopping condition in the literature is a prefixed maximum number of iterations, while EHSA-MTRPP uses a maximum cutoff time. One notes that the average running time of ALNS-MTRPP \cite{avci_adaptive_2019} usually reaches several hours for the instances of large size and that the average time to reach the best found solution for MA-MTRPP \cite{lu2019memetic} is between 300 and 500 seconds for the 200-customer instances. For fair comparisons, we set our cutoff time $T_{max}$ to be twice the number of customers (in seconds). In practice, EHSA-MTRPP is able to attain better solutions than the reference algorithms in less time for most of the benchmark instances. To determine the parameters listed in Table~\ref{tab:input}, we employed the automatic parameter tuning tool Irace \cite{lopez2016irace}.
In this experiment, we selected 10 large and difficult instances as the training instances and set the maximum number of runs (tuning budget) to 2000. \begin{table}[htbp] \centering \caption{Parameters of EHSA-MTRPP tuned with the Irace package.} \label{tab:input} \renewcommand{\arraystretch}{1.6} \setlength{\tabcolsep}{2.8mm} \begin{scriptsize} \begin{tabular}{lp{0.35\columnwidth}cc} \noalign{\smallskip} \hline \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{Description} & \multicolumn{1}{c}{Type} & \multicolumn{1}{c}{Value range}\\ \hline $Limi$ & Search limit & Integer & [0, 30]\\ $St$ & Strength of the $Insert$ perturbation & Integer & [0, 100]\\ $Nump$ & Population size & Categorical & \{6, 8, 10, 20, 50, 100\}\\ \hline \end{tabular} \end{scriptsize} \label{tab:parameter} \end{table} According to the tuning experiment, the parameters determined by Irace are: $Limi=2$, $St=11$ and $Nump=10$. We used this parameter setting for EHSA-MTRPP in all our computational experiments. \subsection{Comparative studies}\label{subsec:Experi:com} This section presents the experimental results obtained by EHSA-MTRPP with respect to the reference algorithms \cite{lu2019memetic,avci_adaptive_2019} over the two groups of 470 benchmark instances. Table~\ref{tab:allresults:avci} lists the overall results of the reference algorithm ALNS-MTRPP and our EHSA-MTRPP algorithm over the 230 Ins\_Avci instances (better results are indicated in bold). Column `Problem set' shows the instances with different numbers of customers (Size) and servers (m). Column `UB' represents the best-known upper bounds of the instances \cite{avci_adaptive_2019, lu2019memetic} and column `Bestsofar' describes the best found solutions (lower bounds) in the literature. Columns `Best', `Average' and `Tavg' (columns 4-6) show respectively the best found results, the average found results and the average time to obtain the best found solutions for ALNS-MTRPP.
The following three columns report the same information for EHSA-MTRPP (all the aforementioned values are averaged over the instances of each set). Column `p-value' lists the results of the Wilcoxon signed rank tests on the best found results (column `Best') of ALNS-MTRPP and EHSA-MTRPP, where `NA' indicates no difference between the two groups of results. Column `$\delta$' presents the percentage improvement of the best objective value found by EHSA-MTRPP over that of ALNS-MTRPP. The last three columns list the number of instances for which our EHSA-MTRPP algorithm improved (`W'), matched (`M') or failed (`F') to attain the best found results reported in \cite{avci_adaptive_2019}. Finally, row `Avg.' depicts the average values of the corresponding indicators over all 230 Ins\_Avci instances. \begin{table}[htbp] \centering \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.5mm} \caption{Results of the reference algorithm ALNS-MTRPP \cite{avci_adaptive_2019} and EHSA-MTRPP on the 230 Ins\_Avci instances. Each instance was solved 5 times according to \cite{avci_adaptive_2019}. The data in column `Bestsofar' are the compiled results from ALNS-MTRPP and GRASP-ILS \cite{avci_adaptive_2019}.
The optimal solutions for the instances of `Size=10, k=2 and k=3' are known but their timing information is not available.}{\smallskip}\label{tab:allresults:avci} \resizebox{\textwidth}{35mm}{ \begin{tabular}{lllclllclllclllll} \toprule \multirow{2}{*}{Problem set}& \multirow{2}{*}{UB \cite{avci_adaptive_2019}}& \multirow{2}{*}{Bestsofar\cite{avci_adaptive_2019}} & & \multicolumn{3}{c}{ALNS-MTRPP} & & \multicolumn{3}{c}{EHSA-MTRPP} & & \multirow{2}{*}{p-value} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{W}& \multirow{2}{*}{M}& \multirow{2}{*}{F} \\ \cline{5-7} \cline{9-11} & & & &\multicolumn{1}{l}{Best}& \multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & \multicolumn{1}{l}{Best} &\multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & &\\ \midrule Size=10, m=2 & 2114.85 & 2114.85 & & 2114.85 & 2114.85 & 0.00& & 2114.85 & 2114.85 & 0.03& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=10, m=3 & 2230.60 & 2230.60 & & 2230.60 & 2230.60 & 0.00& & 2230.60 & 2230.60 & 0.03& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=20, m=2 & 9680.85 & 9074.60 & & 9074.60 & 9074.60 & 3.15& & 9074.60 & 9074.60 & 0.06& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=20, m=3 & 9994.85 & 9450.45 & & 9450.45 & 9450.45 & 3.10& & 9450.45 & 9450.45 & 0.06& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=50, m=2 & 57587.75 & 55469.15 & & 55469.15 & 55468.55 & 35.50& & 55469.15 & 55469.15 & 0.82& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=50, m=3 & 58821.75 & 57184.85 & & 57184.85 & 57184.45 & 30.85& & \textbf{57185.35} & \textbf{57185.35} & 0.78& & 3.17$\times 10^{-1}$ & 0.000874\% & 1 & 19 & 0 \\ Size=100, m=2 & 232351.15 & 226900.95 & & 226899.95 & 226895.80 & 346.45& & \textbf{226900.95} & \textbf{226900.47} & 22.96& & 1.02$\times 10^{-1}$ & 0.000441\% & 0 & 20 & 0 \\ Size=100, m=3 & 235956.05 & 231957.70 & & 231954.05 & 231947.30 & 551.05& & \textbf{231958.70} & \textbf{231954.23} & 29.80& & 1.80$\times 10^{-1}$ & 0.002005\% & 1 & 19 & 0 \\ Size=200, m=2 & 907250.15 & 893197.90 & & 893183.35 & 892864.45 & 
3600.00& & \textbf{893513.85} & \textbf{893374.88} & 263.23& & 1.20$\times 10^{-4}$ & 0.037002\% & 19 & 0 & 1 \\ Size=200, m=3 & 917633.35 & 907775.35 & & 907775.35 & 907611.55 & 3600.00& & \textbf{907950.35} & \textbf{907841.50} & 258.94& & 8.84$\times 10^{-5}$ & 0.019278\% & 20 & 0 & 0 \\ Size=500, m=10 & 1523086.90 & 1428729.50 & & 1428716.30 & 1422361.10 & 10800.00& & \textbf{1437256.40} & \textbf{1436265.76} & 898.97& & 5.06$\times 10^{-3}$ & 0.597746\% & 10 & 0 & 0 \\ Size=500, m=20 & 766209.10 & 692225.50 & & 692074.30 & 688804.60 & 10800.00& & \textbf{694406.60} & \textbf{694114.40} & 897.70& & 5.06$\times 10^{-3}$ & 0.337001\% & 10 & 0 & 0 \\ Size=750, m=100 & 4150788.40 & 4000423.40 & & 4000199.00 & 3966184.40 & 43200.00& & \textbf{4000585.60} & \textbf{4000541.60} & 1352.60& & 4.31$\times 10^{-2}$ & 0.009665\% & 5 & 0 & 0 \\ Size=1000, m=50 & 5402658.80 & 5189124.00 & & 5186645.80 & 5066567.40 & 43200.00& & \textbf{5191726.40} & \textbf{5191527.76} & 1757.60& & 4.31$\times 10^{-2}$ & 0.097955\% & 5 & 0 & 0 \\ \hline Avg. & 518837.49 & 500280.07 & & 500212.50 & 496401.17 & 3527.83& & \textbf{500848.55} & \textbf{500765.52} & 195.88& & & & & & \\ \hline Sum & & & & & & & & & & & & & & 71 & 158 & 1\\ \bottomrule \end{tabular} } \end{table} From row `Avg.' of Table~\ref{tab:allresults:avci}, we remark that EHSA-MTRPP outperforms ALNS-MTRPP concerning the best found results and the average found results. The two algorithms have the same performance for the first five sets of instances (instances of small sizes), while for the remaining nine sets of instances, EHSA-MTRPP performs better than the reference algorithm ALNS-MTRPP both in terms of solution quality (`Best' and `Average') and running time (column `Tavg'). In particular, the results of the Wilcoxon signed rank test (column `p-value') show that there exists a significant difference of the best found results between ALNS-MTRPP and EHSA-MTRPP over the last six set of instances (p-value$<$0.05). 
Overall, EHSA-MTRPP clearly dominates ALNS-MTRPP by updating the best records (new lower bounds) for 71 instances, matching the best-known results for 158 instances and only missing one best-known result. Using similar column headings as Table~\ref{tab:allresults:avci} (excluding column `Bestsofar'), Table~\ref{tab:allresults:lu} summarizes the overall results of MA-MTRPP and EHSA-MTRPP on the 240 Ins\_Lu instances. \begin{table}[htbp] \centering \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.5mm} \caption{Results of the reference algorithm MA-MTRPP \cite{lu2019memetic} and EHSA-MTRPP on the instances of Ins\_Lu. Each instance was solved 10 times according to \cite{lu2019memetic}. The optimal solutions for the instances of small size (`Size=20, k=2, 3 and 4') are known.}{\smallskip}\label{tab:allresults:lu} \begin{tiny} \begin{tabular}{llclllclllclllll} \toprule \multirow{2}{*}{Problem set}& \multirow{2}{*}{UB \cite{lu2019memetic}} & & \multicolumn{3}{c}{MA-MTRPP} & & \multicolumn{3}{c}{EHSA-MTRPP} & & \multirow{2}{*}{p-value} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{W}& \multirow{2}{*}{M}& \multirow{2}{*}{F} \\ \cline{4-6} \cline{8-10} & & &\multicolumn{1}{l}{Best}& \multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & \multicolumn{1}{l}{Best} &\multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & &\\ \midrule Size=20, m=2 & 3937.60 & & 3937.60 & 3937.60 & 1.31& & 3937.60 & 3937.60 & 0.05& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=20, m=3 & 2399.20 & & 2399.20 & 2399.20 & 1.29& & 2399.20 & 2399.20 & 0.05& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=20, m=4 & 1733.40 & & 1733.40 & 1733.40 & 1.23& & 1733.40 & 1733.40 & 0.06& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=50, m=2 & 29677.45 & & 27172.15 & 27172.15 & 7.38& & \textbf{27173.55} & \textbf{27173.55} & 1.02& & 1.09$\times 10^{-1}$ & 0.005152\% & 3 & 17 & 0 \\ Size=50, m=3 & 19464.60 & & 17523.55 & 17523.55 & 6.26& & 17523.55 & 17523.55 & 0.66& & NA & 0.000000\% & 0 & 20 & 0 \\ Size=50, m=4 & 
14805.10 & & 13049.05 & 13049.05 & 5.72& & \textbf{13049.25} & \textbf{13049.25} & 0.75& & 1.80$\times 10^{-1}$ & 0.001533\% & 2 & 18 & 0 \\ Size=100, m=2 & 120082.50 & & 113566.35 & 113560.76 & 46.13& & \textbf{113567.10} & \textbf{113566.60} & 21.44& & 1.80$\times 10^{-1}$ & 0.000660\% & 2 & 18 & 0 \\ Size=100, m=3 & 81703.80 & & 76976.35 & 76972.48 & 37.85& & \textbf{76976.65} & \textbf{76976.24} & 23.31& & 3.17$\times 10^{-1}$ & 0.000390\% & 1 & 19 & 0 \\ Size=100, m=4 & 61265.85 & & 57188.40 & 57186.69 & 32.55& & \textbf{57188.55} & \textbf{57188.53} & 21.07& & 3.17$\times 10^{-1}$ & 0.000262\% & 1 & 19 & 0 \\ Size=200, m=2 & 489139.40 & & 472301.40 & 472002.08 & 455.39& & \textbf{472499.25} & \textbf{472354.94} & 254.16& & 1.03$\times 10^{-4}$ & 0.041891\% & 19 & 0 & 1 \\ Size=200, m=3 & 333141.60 & & 321136.55 & 320912.21 & 358.27& & \textbf{321278.75} & \textbf{321175.57} & 245.66& & 8.86$\times 10^{-5}$ & 0.044280\% & 20 & 0 & 0 \\ Size=200, m=4 & 246431.05 & & 236694.15 & 236539.09 & 278.37& & \textbf{236805.20} & \textbf{236720.93} & 229.22& & 1.55$\times 10^{-4}$ & 0.046917\% & 18 & 1 & 1 \\ \hline Avg. & 116981.80 & & 111973.18 & 111915.69 & 102.64& & \textbf{112011.00} & \textbf{111983.28} & 66.45& & & & & & \\ \hline Sum & & & & & & & & & & & & & 66 & 172 & 2\\ \bottomrule \end{tabular} \end{tiny} \end{table} From row `Avg.' of Table~\ref{tab:allresults:lu}, one observes that EHSA-MTRPP achieves a better performance (column `Best' and `Average') than MA-MTRPP with a shorter average time (66.45 seconds for EHSA-MTRPP vs 102.64 seconds for MA-MTRPP). As to each set of instances, EHSA-MTRPP shows a better or equal performance concerning the best found results and average found results. In particular, the proposed algorithm outperforms MA-MTRPP on the last three sets of large instances confirmed by the Wilcoxon signed rank test (p-value $<$ 0.05). 
In addition, EHSA-MTRPP spends less time (column `Tavg') than MA-MTRPP to attain the best found solutions for each set of instances. Overall, EHSA-MTRPP updates 66 best records (new lower bounds), matches the best-known results for 172 instances and misses only two best-known results. In summary, EHSA-MTRPP provides much better results than the reference algorithms on the 470 benchmark instances by establishing 137 new record results (29\%) and matching the best-known results for 330 instances (70\%). The detailed comparisons between the reference algorithms \cite{lu2019memetic,avci_adaptive_2019} and EHSA-MTRPP are presented in Appendix~\ref{sec:Appendix}. \section{Additional results}\label{sec:Analyse} This section first presents additional results to demonstrate the important roles of the fast evaluation technique and the arc-based crossover in the proposed algorithm. Furthermore, we experimentally compare the proposed algorithm to its variant with RBX, and reveal the rationale behind the proposed crossover. \subsection{Influence of the fast evaluation technique in the neighborhood structure}\label{subsec:Analyse:fast} To investigate the influence of the fast evaluation technique on our algorithm, we create a variant of EHSA-MTRPP (named EHSA-MTRPP-NoFast) in which the fast evaluation technique is disabled. We employ the parameters of Section~\ref{subsec:Experi:setup} and run both EHSA-MTRPP-NoFast and EHSA-MTRPP 10 times independently on each instance of large size ($n\geq 200$). The cut-off time is also set to be twice the number of customers. Using similar column headings as Table~\ref{tab:allresults:lu}, Table~\ref{tab:nofast} gives the comparative results of EHSA-MTRPP-NoFast and EHSA-MTRPP over the instances of large size from Ins\_Avci and Ins\_Lu (better results are marked in bold).
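As a side note on the statistical methodology, the per-set p-values reported in the comparison tables come from Wilcoxon signed rank tests over paired per-instance best results. The sketch below implements an exact version of this test for small samples; the paper presumably relied on a standard statistical package, and the numeric values here are synthetic:

```python
from itertools import product

def _ranks(values):
    """Ranks 1..n of `values` (this sketch assumes no tied values)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rk, i in enumerate(order, start=1):
        ranks[i] = rk
    return ranks

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Suitable for per-set comparisons over up to ~20 instances; zero or
    tied differences are not handled in this sketch.
    """
    d = [b - a for a, b in zip(x, y)]
    ranks = _ranks([abs(v) for v in d])
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    n = len(d)
    mu = n * (n + 1) / 4
    # Null distribution of W+: each rank is positive or negative with
    # equal probability, so enumerate all 2^n sign patterns.
    ws = [sum(r for r, keep in zip(ranks, signs) if keep)
          for signs in product((0, 1), repeat=n)]
    p = sum(1 for w in ws if abs(w - mu) >= abs(w_plus - mu)) / len(ws)
    return w_plus, p

# Synthetic per-instance best objective values of two algorithms.
best_ref = [893100, 907500, 1428000, 692000, 4000100, 5186000]
best_new = [893500, 907675, 1437200, 694330, 4000585, 5191081]
w_plus, p_value = wilcoxon_signed_rank(best_ref, best_new)
```

When one algorithm wins on every paired instance, as in this synthetic example, the statistic reaches its maximum and the exact two-sided p-value is $2/2^n$, which explains why the smallest p-values in the tables shrink with the number of instances per set.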
\begin{table}[htbp] \centering \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.4mm} \caption{Results of EHSA-MTRPP-NoFast and EHSA-MTRPP on the benchmark instances of large size. Each instance is solved 10 times and the cutoff time is set to be twice the number of customers.}{\smallskip}\label{tab:nofast} \begin{tiny} \begin{tabular}{llclllclllclllll} \toprule \multirow{2}{*}{Problem set}& \multirow{2}{*}{UB} & & \multicolumn{3}{c}{EHSA-MTRPP-NoFast} & & \multicolumn{3}{c}{EHSA-MTRPP} & & \multirow{2}{*}{p-value} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{W}& \multirow{2}{*}{M}& \multirow{2}{*}{F} \\ \cline{4-6} \cline{8-10} & & &\multicolumn{1}{l}{Best}& \multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & \multicolumn{1}{l}{Best} &\multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & &\\ \midrule Ins\_Avci & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 907250.15 & & 893247.90 & 892891.18 & 169.58& & \textbf{893513.85} & \textbf{893374.88} & 263.23& & 1.03$\times 10^{-4}$ & 0.029773\% & 19 & 0 & 1 \\ Size=200, m=3 & 917633.35 & & 907732.95 & 907532.35 & 169.48& & \textbf{907950.35} & \textbf{907841.50} & 258.94& & 8.86$\times 10^{-5}$ & 0.023950\% & 20 & 0 & 0 \\ Size=500, m=10 & 1523086.90 & & 1431017.90 & 1428561.78 & 500.05& & \textbf{1437256.40} & \textbf{1436265.76} & 898.97& & 5.06$\times 10^{-3}$ & 0.435948\% & 10 & 0 & 0 \\ Size=500, m=20 & 766209.10 & & 692814.30 & 691839.16 & 500.21& & \textbf{694406.60} & \textbf{694114.40} & 897.70& & 5.06$\times 10^{-3}$ & 0.229831\% & 10 & 0 & 0 \\ Size=750, m=100 & 4150788.40 & & 3999139.60 & 3948547.72 & 754.74& & \textbf{4000585.60} & \textbf{4000541.60} & 1352.60& & 4.31$\times 10^{-2}$ & 0.036158\% & 5 & 0 & 0 \\ Size=1000, m=50 & 5402658.80 & & 5180266.20 & 5116501.24 & 1003.76& & \textbf{5191726.40} & \textbf{5191527.76} & 1757.60& & 4.31$\times 10^{-2}$ & 0.221228\% & 5 & 0 & 0 \\ \midrule Ins\_Lu & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 489139.40 & & 472354.10 
& 471965.12 & 325.62& & \textbf{472499.25} & \textbf{472354.94} & 254.16& & 7.80$\times 10^{-4}$ & 0.030729\% & 19 & 0 & 1 \\ Size=200, m=3 & 333141.60 & & 321165.95 & 320919.83 & 327.09& & \textbf{321278.75} & \textbf{321175.57} & 245.66& & 1.32$\times 10^{-4}$ & 0.035122\% & 19 & 1 & 0 \\ Size=200, m=4 & 246431.05 & & 236709.90 & 236544.92 & 330.46& & \textbf{236805.20} & \textbf{236720.93} & 229.22& & 1.32$\times 10^{-4}$ & 0.040260\% & 19 & 1 & 0 \\ \bottomrule \end{tabular} \end{tiny} \end{table} From columns `Best' and `Average' in Table~\ref{tab:nofast}, one can conclude that EHSA-MTRPP outperforms EHSA-MTRPP-NoFast for each set of instances, which is also confirmed by the Wilcoxon signed rank tests ($p$-value$<0.05$). \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{EPS/bar} \caption{The average ratio of the numbers of visited solutions of EHSA-MTRPP over EHSA-MTRPP-NoFast for 9 instances of different sizes. The name `Size\_index\_k' of each instance indicates respectively the number of customers, the instance index and the number of routes. Each instance is solved 10 times independently and the cut-off time is set to twice the number of customers.} \label{fig:bar} \end{figure} To illustrate the effectiveness of the fast evaluation technique, we carry out another experiment by running both algorithms independently 10 times on 9 instances of different sizes and recording the numbers of visited neighboring solutions. The cut-off time is also set to be twice the number of customers. Figure~\ref{fig:bar} shows the average ratio of the numbers of visited solutions of EHSA-MTRPP over EHSA-MTRPP-NoFast for the instances of different sizes. One observes that EHSA-MTRPP visits more neighboring solutions than EHSA-MTRPP-NoFast for all selected instances. The dominance of our algorithm with the fast evaluation technique becomes even clearer as the size of the instances increases.
In summary, the results in Table~\ref{tab:nofast} and in Figure~\ref{fig:bar} demonstrate that the fast evaluation technique helps the proposed algorithm to explore the search space more efficiently and indeed contributes to the performance of the proposed algorithm. \subsection{Influence of the crossover operator} \label{subsec:nox} This section explores the contributions of the arc-based crossover to our algorithm. We create an EHSA-MTRPP variant (ILS-MTRPP) by disabling ABX (lines 15-16) in Algorithm~\ref{Algo:EHSA-MTRPP}. Using the same experimental setup as in Section~\ref{subsec:Experi:setup}, another experiment is performed on the benchmark instances of large size, and the results are presented in Table~\ref{tab:nox} with the same column headings as Table~\ref{tab:nofast} (better results are marked in bold). \begin{table}[htbp] \centering \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.4mm} \caption{Results of ILS-MTRPP and EHSA-MTRPP on the instances of large size from the benchmark.
Each instance is solved 10 times and the cutoff time is set to be twice the number of customers.}{\smallskip}\label{tab:nox} \begin{tiny} \begin{tabular}{llclllclllclllll} \toprule \multirow{2}{*}{Problem set}& \multirow{2}{*}{UB} & & \multicolumn{3}{c}{ILS-MTRPP} & & \multicolumn{3}{c}{EHSA-MTRPP} & & \multirow{2}{*}{p-value} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{W}& \multirow{2}{*}{M}& \multirow{2}{*}{F} \\ \cline{4-6} \cline{8-10} & & &\multicolumn{1}{l}{Best}& \multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & \multicolumn{1}{l}{Best} &\multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & &\\ \midrule Ins\_Avci & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 907250.15 & & 893225.25 & 892997.62 & 106.16& & \textbf{893513.85} & \textbf{893374.88} & 263.23& & 8.86$\times 10^{-5}$ & 0.032310\% & 20 & 0 & 0 \\ Size=200, m=3 & 917633.35 & & 907796.10 & 907666.30 & 97.70& & \textbf{907950.35} & \textbf{907841.50} & 258.94& & 8.84$\times 10^{-5}$ & 0.016992\% & 20 & 0 & 0 \\ Size=500, m=10 & 1523086.90 & & 1435567.80 & 1434542.62 & 213.20& & \textbf{1437256.40} & \textbf{1436265.76} & 898.97& & 5.06$\times 10^{-3}$ & 0.117626\% & 10 & 0 & 0 \\ Size=500, m=20 & 766209.10 & & 693796.30 & 693456.04 & 215.12& & \textbf{694406.60} & \textbf{694114.40} & 897.70& & 5.06$\times 10^{-3}$ & 0.087965\% & 10 & 0 & 0 \\ Size=750, m=100 & 4150788.40 & & 4000456.40 & 4000414.68 & 320.81& & \textbf{4000585.60} & \textbf{4000541.60} & 1352.60& & 4.31$\times 10^{-2}$ & 0.003230\% & 5 & 0 & 0 \\ Size=1000, m=50 & 5402658.80 & & 5191550.60 & 5191326.92 & 520.71& & \textbf{5191726.40} & \textbf{5191527.76} & 1757.60& & 4.31$\times 10^{-2}$ & 0.003386\% & 5 & 0 & 0 \\ \midrule Ins\_Lu & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 489139.40 & & 472282.15 & 472016.28 & 198.16& & \textbf{472499.25} & \textbf{472354.94} & 254.16& & 2.93$\times 10^{-4}$ & 0.045968\% & 19 & 0 & 1 \\ Size=200, m=3 & 333141.60 & & 321140.15 & 320982.75 & 201.67& & 
\textbf{321278.75} & \textbf{321175.57} & 245.66& & 8.84$\times 10^{-5}$ & 0.043159\% & 20 & 0 & 0 \\ Size=200, m=4 & 246431.05 & & 236703.05 & 236592.84 & 199.80& & \textbf{236805.20} & \textbf{236720.93} & 229.22& & 8.84$\times 10^{-5}$ & 0.043155\% & 20 & 0 & 0 \\ \bottomrule \end{tabular} \end{tiny} \end{table} From Table~\ref{tab:nox}, one concludes that EHSA-MTRPP clearly dominates ILS-MTRPP in terms of columns `Best' and `Average' for each set of instances (missing only one instance in `Size=200, m=2' of Ins\_Lu). The dominance is confirmed by the results of the Wilcoxon signed rank tests ($p$-value$<0.05$). This experiment shows that ABX contributes positively to the performance of our EHSA-MTRPP algorithm. \subsection{Compared to the route-based crossover (RBX) in the literature}\label{subsec:Analyse:rbx} In this section, we compare the arc-based crossover with the route-based crossover, which is employed in the reference algorithm MA-MTRPP \cite{lu2019memetic}. We create an EHSA-MTRPP variant (EHSA-MTRPP-RBX) by replacing ABX with RBX (line 16 in Algorithm~\ref{Algo:EHSA-MTRPP}). On each instance of large size, both algorithms are independently run 10 times using the parameters of Section~\ref{subsec:Experi:setup}. The cut-off time is always set to be twice the number of customers. The results are summarized in Table~\ref{tab:rbx}, which uses the same column headings as Table~\ref{tab:nofast} (better results are indicated in bold). \begin{table}[htbp] \centering \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.4mm} \caption{Results of EHSA-MTRPP-RBX and EHSA-MTRPP on the instances of large size from the benchmark.
Each instance is solved 10 times and the cutoff time is set to be twice the number of customers.}{\smallskip}\label{tab:rbx} \begin{tiny} \begin{tabular}{llclllclllclllll} \toprule \multirow{2}{*}{Problem set}& \multirow{2}{*}{UB} & & \multicolumn{3}{c}{EHSA-MTRPP-RBX} & & \multicolumn{3}{c}{EHSA-MTRPP} & & \multirow{2}{*}{p-value} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{W}& \multirow{2}{*}{M}& \multirow{2}{*}{F} \\ \cline{4-6} \cline{8-10} & & &\multicolumn{1}{l}{Best}& \multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & \multicolumn{1}{l}{Best} &\multicolumn{1}{l}{Average} & \multicolumn{1}{l}{Tavg} & & &\\ \midrule Ins\_Avci & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 907250.15 & & 893420.60 & 893254.29 & 156.76& & \textbf{893513.85} & \textbf{893374.88} & 263.23& & 2.35$\times 10^{-4}$ & 0.010437\% & 19 & 0 & 1 \\ Size=200, m=3 & 917633.35 & & 907886.65 & 907784.43 & 154.88& & \textbf{907950.35} & \textbf{907841.50} & 258.94& & 1.28$\times 10^{-3}$ & 0.007016\% & 18 & 1 & 1 \\ Size=500, m=10 & 1523086.90 & & 1436503.90 & 1435606.38 & 379.31& & \textbf{1437256.40} & \textbf{1436265.76} & 898.97& & 5.06$\times 10^{-3}$ & 0.052384\% & 10 & 0 & 0 \\ Size=500, m=20 & 766209.10 & & 694289.30 & 693986.20 & 406.29& & \textbf{694406.60} & \textbf{694114.40} & 897.70& & 5.06$\times 10^{-3}$ & 0.016895\% & 10 & 0 & 0 \\ Size=750, m=100 & 4150788.40 & & 4000559.80 & 4000520.52 & 641.77& & \textbf{4000585.60} & \textbf{4000541.60} & 1352.60& & 2.25$\times 10^{-1}$ & 0.000645\% & 4 & 0 & 1 \\ Size=1000, m=50 & 5402658.80 & & 5191587.40 & 5191416.32 & 751.44& & \textbf{5191726.40} & \textbf{5191527.76} & 1757.60& & 4.31$\times 10^{-2}$ & 0.002677\% & 5 & 0 & 0 \\ \midrule Ins\_Lu & & & & & & & & & & & & & & & \\ \hline Size=200, m=2 & 489139.40 & & 472466.60 & 472286.13 & 300.66& & \textbf{472499.25} & \textbf{472354.94} & 254.16& & 3.76$\times 10^{-3}$ & 0.006911\% & 17 & 1 & 2 \\ Size=200, m=3 & 333141.60 & & 321230.30 & 321127.52 & 297.78& & 
\textbf{321278.75} & \textbf{321175.57} & 245.66& & 3.40$\times 10^{-4}$ & 0.015083\% & 18 & 1 & 1 \\ Size=200, m=4 & 246431.05 & & 236781.60 & 236703.33 & 302.65& & \textbf{236805.20} & \textbf{236720.93} & 229.22& & 7.37$\times 10^{-4}$ & 0.009967\% & 16 & 2 & 2 \\ \bottomrule \end{tabular} \end{tiny} \end{table} The results show that, except for a few cases, EHSA-MTRPP performs better than EHSA-MTRPP-RBX on all sets of instances in terms of the best found results (column `Best') and the average found results (column `Average'), and the Wilcoxon signed rank tests ($p$-value$<0.05$) indicate that there are significant differences for 8 sets of results (all except the instances of `Size=750, m=100'). This experiment confirms that the ABX crossover is more appropriate than RBX for the MTRPP. \subsection{Rationale behind the arc-based crossover} \label{subsec:Ana:rational} We experimentally investigate the rationale behind ABX by analyzing the structural similarities between high-quality solutions. For two given solutions $\varphi_1$ and $\varphi_2$ with their corresponding arc sets $A_1$ and $A_2$, their similarity is defined by $Sim(\varphi_1, \varphi_2)=\frac{|A_1\cap A_2|}{|A_1\cup A_2|}$. Generally, the larger the similarity between two solutions, the more arcs they share. We run the EHSA-MTRPP algorithm 100 times on each of the 16 selected instances while recording the best found solution of each run; the cut-off time per run is always set to twice the number of customers. For each instance, we calculate the maximum similarity (denoted $sim\_max$) between any two solutions by $sim\_max=\max_{1\leq i< j\leq 100} Sim(\varphi_i, \varphi_j)$, the minimum similarity (denoted $sim\_min$) by $sim\_min=\min_{1\leq i< j\leq 100} Sim(\varphi_i, \varphi_j)$ and the average similarity (denoted $sim\_avg$) by $sim\_avg=\frac{1}{4950}\cdot\sum_{1\leq i< j\leq 100} Sim(\varphi_i, \varphi_j)$.
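The similarity measure and the three statistics defined above can be sketched directly; representing a solution as a list of depot-started routes is our own assumption for illustration:

```python
from itertools import combinations

def arc_set(solution):
    """All directed arcs of a solution, given as a list of routes that
    each start at the depot (vertex 0)."""
    return {(a, b) for route in solution for a, b in zip(route, route[1:])}

def similarity(sol1, sol2):
    """Sim = (size of arc-set intersection) / (size of arc-set union)."""
    a1, a2 = arc_set(sol1), arc_set(sol2)
    return len(a1 & a2) / len(a1 | a2)

def similarity_stats(pool):
    """sim_max, sim_min and sim_avg over all pairs of a solution pool."""
    sims = [similarity(s, t) for s, t in combinations(pool, 2)]
    return max(sims), min(sims), sum(sims) / len(sims)
```

For a pool of 100 recorded solutions, `combinations` enumerates the $\binom{100}{2}=4950$ pairs that appear in the denominator of $sim\_avg$.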
Figure~\ref{fig:curve2} shows the results of the solution similarities for different instances. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{EPS/curve_dis2} \caption{The similarities between high-quality solutions for 16 instances. Each instance is solved 100 times independently with the cut-off time per run set to twice the number of customers.} \label{fig:curve2} \end{figure} From Figure~\ref{fig:curve2}, one observes that there is a high similarity between high-quality solutions. In particular, the maximum similarity among the 100 high-quality solutions is more than 0.6 and the average similarity between the solutions is over 0.4 for each instance. In other words, a large number of arcs frequently appear in high-quality solutions, which provides a solid foundation for the design of the arc-based crossover in this work. One notices that the maximum similarities for the six largest instances ($n\geq 500$) are not as high as those for the other instances ($n\leq 200$). This may be because the best solutions found for these difficult instances ($n\geq 500$) are still far from optimal. \section{Conclusions}\label{sec:Conclu} An effective hybrid search algorithm for the multiple traveling repairman problem with profit was proposed under the framework of the memetic algorithm. The proposed algorithm distinguishes itself from the existing algorithms by two key features, i.e., its fast neighborhood evaluation techniques designed to accelerate neighborhood examinations and its dedicated arc-based crossover able to generate diversified and meaningful offspring solutions. The assessment on the 470 benchmark instances from the literature showed that the proposed algorithm competes favorably with the existing algorithms by updating the best records (new lower bounds) for 137 instances (29\%) and matching the best-known results for 330 instances (70\%) within a reasonable time.
Additional experiments demonstrated that the fast evaluation technique and the arc-based crossover play positive roles in the performance of the algorithm. We analyzed both formally and experimentally the reduced complexities of the neighborhood examinations and provided experimental evidence (the high similarity between high-quality solutions) to support the design of the arc-based crossover. The source code of our algorithm will be made available upon the publication of this work. It can be used to solve practical applications and adapted to related problems. In the future, we would like to develop efficient algorithms based on the arc-based crossover for other related problems such as the team orienteering problem. \section*{Acknowledgement} This work was partially supported by the Shenzhen Science and Technology Innovation Commission (grant no. JCYJ20180508162601910), the National Key R\&D Program of China (grant no. 2020YFB1313300), and the Funding from the Shenzhen Institute of Artificial Intelligence and Robotics for Society (grant no. AC01202101036 and AC01202101009).
\section{Introduction} \label{sec:Intro} Professional European basketball courts are 28 meters long and 14 meters wide, and 10 players (plus 3 referees) interact on them following complex patterns that help them accomplish their goal, which can be either scoring or preventing the other team from scoring. These tactical plays include several kinds of movement, which might involve players moving together and close to each other, thus generating space and advantages. Given this scenario, and being aware that there is no established multi-camera array in these courts for tracking purposes (because of its cost and the low height of stadiums), broadcast cameras are the main source of basketball video content. Usually, these cameras are set in the middle of the court along the horizontal axis, and camera operators just perform some panning or zooming during the game from that same spot. Given this video feed, a tracking-by-detection algorithm is adopted: first, potential players are detected; then, features are extracted and compared, quantifying how much players in different frames resemble each other. Finally, players are tracked by building a matrix with all similarities and minimizing the total cost of assignments. Several kinds of features for establishing the similarities are evaluated:
\begin{itemize} \item Geometrical features, which might involve relative distances (in screen coordinates, expressed in pixels) between detected objects. \item Visual features, which may quantify how different boxes look alike by comparing RGB similarity metrics in small neighborhood patches. \item Deep learning features, which can be obtained by post-processing the output of a convolutional layer in a Deep Neural Network. \end{itemize} Besides, we show that the combination with classical Computer Vision techniques helps improve the trackers' overall performance. In particular, camera stabilization based on homography estimation leads to camera-motion-compensated sequences where the distances between corresponding players in consecutive frames are considerably reduced. The aim of this paper is to prove that deep learning features can be extracted and compared with ease, obtaining better results than with classical features. For this reason, an ablation study covering different tests in a given scenario is included. The remainder of this article is organized as follows: in Section \ref{sec:SoA}, related works are described.
Later on, in Section \ref{sec:Meth}, the presented methods are detailed, involving the main modules of player and pose detection, feature extraction and matching; moreover, camera stabilization techniques and pose models are considered. Results are shown and discussed in Section \ref{sec:Res}, and conclusions are drawn in the final Section \ref{sec:Conc}. \section{Related Work} \label{sec:SoA} Multi-object tracking in video has been and still is a very active research area in computer vision. One of the most used tracking strategies is the so-called tracking by detection, which involves a previous or simultaneous detection step~\cite{milan2015joint,ramanathan2016detecting,henschel2018fusion,girdhar2018detect,doering2018joint}. Some of these works use a CNN-based detector with a tracking step \cite{ramanathan2016detecting,girdhar2018detect}, while others are based on global optimization methods. Among them, a joint segmentation and tracking of multiple targets is proposed in \cite{milan2015joint}, while in \cite{henschel2018fusion} a full-body detector and a head detector are combined to boost the performance. The authors in \cite{doering2018joint} combine Convolutional Neural Networks (CNNs) and a Temporal-Flow-Fields-based method. Another family of tracking methods, which achieves a good compromise between accuracy and speed, is based on Discriminative Correlation Filters (DCF). These methods rely on a first stage where features are extracted, followed by the application of correlation filters. Initially, hand-crafted features like HoG were used; later on, different proposals used deep learning features extracted with pretrained networks (e.g. \cite{qi2016hedged}). The results are improved when learning the feature extraction network in an end-to-end fashion for tracking purposes \cite{wang2017dcfnet}. The latest trend is to train deep learning based tracking methods in an unsupervised manner \cite{wang2019learning,wang2019unsupervised}.
On the other hand, pose tracking refers in the literature to the task of estimating anatomical human keypoints and assigning unique labels to each keypoint across the frames of a video \cite{iqbal2017posetrack,insafutdinov2017arttrack}. This paper addresses the problem of tracking basketball players in broadcast videos. This is a challenging scenario where multiple occlusions are present, the resolution of the players is small and there is a high similarity between the different instances to track, especially among members of the same team. For a deeper review of player detection and tracking in sports, the interested reader is referred to the recent survey \cite{thomas2017computer}. The authors of \cite{senocak2018part} also consider basketball scenarios seen from a broadcast camera, and they deal with player identification. For that, they propose to use CNN features extracted at multiple scales and encoded in a Fisher vector. \section{Proposed Method and Assessment} \label{sec:Meth} In this Section, the implemented tracking-by-detection method is detailed. The associated generic pipeline can be seen in Figure \ref{fig:Pipe} and it follows the subsequent stages: \begin{enumerate} \item[A.] For each frame, the basketball court is detected, with the purpose of not taking fans and bench players into account in the following steps. Also, a camera stabilization step may be included; a discussion is provided about its usefulness for multi-tracking, since it reduces the distances of objects across frames. \item[B.] Players are detected, together with their pose, using a pretrained pose model, and bounding boxes are placed around all of them. \item[C.] Features are extracted from these bounding boxes in combination with pose information. Several choices are analyzed in terms of the features to be extracted. \item[D.]
By comparing features of all players in three consecutive frames (indicated by Frame N, N-1 and N-2, respectively, in Figure~\ref{fig:Pipe}) and using a customized version of the Hungarian algorithm, tracking associations are performed. \end{enumerate} \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{GenPipelineCut.png} \caption{Generic Pipeline: for each frame, players are detected (through pose models) and tracked (via feature extraction and matching).} \label{fig:Pipe} \end{figure*} \subsection{Pre-Processing} \subsubsection{Court Detection} Although court detection is not the main contribution of this research, the identification of visible court boundaries in the image is essential in order to filter out those candidates that are not actively taking part in the game (such as bench players or referees). It has to be mentioned that the basic filtering to be performed is designed for the vast majority of European courts, where court surroundings usually share the same color, and fans sit far from team benches. Knowing that in broadcast images the court appears as a trapezoid with some visible boundaries, line segments are detected by using a fast parameter-less method based on the \textit{a contrario} theory \cite{von2010lsd} (code available in \cite{von2012lsd}). Right after, segments with the same orientation and intersection at the boundaries of the image, as seen in Figure \ref{fig:Court}, are joined and considered as part of the same line; the dominant orientation will be considered as the one with the longest visible parts (proportional to the sum of the individual segments' lengths). However, given that basketball courts have many parallel lines (such as sidelines, corner three lines, paint sides, ...), several line candidates have to be tested in order to find the real court surroundings.
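As a toy illustration (our own sketch, not the authors' implementation), the dominant-orientation selection can be coded as follows: detected segments are grouped into quantized orientation bins, and the bin whose members have the greatest total length wins. The segment format and the number of bins are assumptions.

```python
import numpy as np

def dominant_orientation(segments, bins=36):
    """Group segments (each (x1, y1, x2, y2)) into quantized orientation
    bins and return the bin-center angle whose members have the greatest
    total length, as the text suggests (proportional to the sum of the
    individual segments' lengths)."""
    angles, lengths = [], []
    for x1, y1, x2, y2 in segments:
        angles.append(np.arctan2(y2 - y1, x2 - x1) % np.pi)  # undirected angle in [0, pi)
        lengths.append(np.hypot(x2 - x1, y2 - y1))
    idx = (np.array(angles) / np.pi * bins).astype(int) % bins
    totals = np.bincount(idx, weights=lengths, minlength=bins)
    return (int(np.argmax(totals)) + 0.5) * np.pi / bins  # radians

# Three near-horizontal segments outweigh one long vertical one.
segs = [(0, 0, 100, 2), (0, 10, 90, 11), (5, 20, 95, 22), (50, 0, 50, 120)]
angle = dominant_orientation(segs)
```

The same routine can be run once for sideline-like orientations and once for the visible baseline.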
Moreover, two dominant orientations are taken into account: (1) the ones belonging to sidelines (intersections at both left-right image boundaries), and (2) the one belonging to the visible baseline (both baselines cannot be seen at the same time if the camera shot is an average one). Given the non-complex scenario of European courts, color filtering is used in the HSV colorspace by checking contributions all over the image; in the case of Figure \ref{fig:Court}, court surroundings are blue and the court itself is a bright brown tonality. For a given dominant orientation, the subsequent steps are followed: \begin{enumerate} \item First, a line candidate with the dominant orientation is set at the top (in the case of sideline candidates) / left side (baseline candidates) of the image. \item Then, two parallel lines are set at a $\pm 25$ pixel distance with respect to the line candidate. \item Later on, and taking only the pixels comprised between the candidate line and the two parallel ones, the number of pixels that satisfy color conditions is computed for both sides independently. That is, if the candidate line is a potential sideline, the number of pixels is computed above and below it; instead, if the candidate line is a potential baseline, the number of pixels is computed at its left and its right. In the case of Figure \ref{fig:Court}, pixels with a Hue value between 120 and 150 degrees are the ones satisfying the filter conditions. \item The line candidate is moved 12 pixels towards the bottom (sidelines) / right side (baseline) of the image. The same procedure followed in Steps 2 and 3 is then applied again. \item Once all possible cases have been examined, the line candidate with the maximum difference between its above-below/left-right sides is set as the court limit. As can be seen in the given example, the best court delimiters are the lines that lie right at the boundary between the brown and blue regions.
\end{enumerate} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{EuropeanCourtCut.png} \caption{Court detection. (a) Different segments with the same orientation and intersections are joined; (b) Final segmentation result.} \label{fig:Court} \end{figure} \subsubsection{Camera Stabilization} In order to ease the tracking of the players, an additional camera stabilization step that removes the camera motion can be incorporated. Taking into account that its inclusion represents extra computations, in this paper an ablation study is provided to discuss the extent of its advantages. When included, the camera stabilization method and implementation in \cite{sanchez2017ipol} is used. It estimates a set of homographies, each of which is associated with a frame of the video and allows stabilizing it. Table~\ref{tab:stab} in Section~\ref{sec:Res} presents the quantitative results including it. \subsection{Player Detection} As mentioned in Section \ref{sec:Intro}, the presented tracker is based on multiple detections in each individual frame. More concretely, the implemented method relies on pose-model techniques \cite{poseModel1, poseModel2, cao2017realtime}, stemming from an implementation of the latter \cite{openPoseLib}. Basically, this method is a bottom-up approach that uses a Convolutional Neural Network to: (1) detect anatomical keypoints, (2) build limbs by joining keypoints, and (3) merge limbs into the visible person skeleton. Given a basketball frame, the output of the main inference pose function is a $25 \times 3$ vector for each player, with the position (in screen coordinates) of 25 keypoints, which belong to the main biometric human-body parts, together with a confidence score. Note that there might be situations where specific parts are not detected, resulting in unknown information in the corresponding entry of the pose vector of the whole skeleton.
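A minimal sketch (our own illustration, not the paper's code) of how such a $25 \times 3$ pose vector with missing entries can be handled: keypoints below a confidence threshold are discarded (reusing here the 0.3 value mentioned later for feature extraction is an assumption), and a box is fitted around the surviving keypoints.

```python
import numpy as np

CONF_THR = 0.3  # assumed here; the text uses this value at the feature-extraction stage

def bbox_from_pose(pose):
    """pose: (25, 3) array of (x, y, confidence); undetected parts carry
    zero confidence. Returns (xmin, ymin, xmax, ymax) over the reliable
    keypoints, or None if no part survives the threshold."""
    pts = pose[pose[:, 2] >= CONF_THR, :2]
    if pts.size == 0:
        return None
    (xmin, ymin), (xmax, ymax) = pts.min(axis=0), pts.max(axis=0)
    return float(xmin), float(ymin), float(xmax), float(ymax)

# Toy skeleton: two confident parts, all other entries left undetected (zeroed).
pose = np.zeros((25, 3))
pose[1] = [100, 50, 0.9]    # e.g. chest
pose[10] = [120, 180, 0.8]  # e.g. a knee
box = bbox_from_pose(pose)
```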
In addition, 26 heatmaps are returned, indicating the confidence of each part being at each particular pixel. By checking all the parts' positions and taking the minimum-maximum XY coordinates for each detected player, bounding boxes are placed around the respective players. \subsection{Feature Extraction} Once bounding boxes are obtained, their comparison must be performed in order to assign individual tracks to each box over time. With the purpose of quantifying this process, different approaches can be used whilst extracting features. In this subsection, all tested features used \textit{a posteriori} are explained. For the remaining part of this subsection, ${B_{t_1}}$ and ${B_{t_2}}$ are considered as two different bounding boxes, detected at $t_{1}$ and $t_{2}$ respectively. \subsubsection{Geometrical Features} This classical approach can be used to measure distances or overlapping between bounding boxes of different frames. If the frame rate of the video feed is not too low, it can be assumed that player movements between adjacent frames will not be large; for this reason, players can potentially be found at a similar position in screen coordinates in short time intervals, so the distance between bounding boxes' centroids can be used as a metric. That is, given $\mathbf{x}_{B_{t_1}}$ and $\mathbf{x}_{B_{t_2}}$ as the centroids of two bounding boxes, the normalized distance between centroids can be expressed as \begin{equation}\label{eq:Cd} C_d(B_{t_1},B_{t_2})=\frac{1}{\sqrt{w^2+h^2}}\|\mathbf{x}_{B_{t_1}}-\mathbf{x}_{B_{t_2}}\|, \end{equation} where $w$ and $h$ are the width and the height of the frame domain. Another similar metric that could be used is the intersection over union between boxes, but due to the fact that basketball courts are usually cluttered and players move fast and randomly, it is not useful for this paper's purposes.
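The normalized centroid distance above translates directly into code; a minimal sketch, where the default full-HD frame size (as in the dataset) is an assumption:

```python
import math

def centroid_cost(box1, box2, w=1920, h=1080):
    """Normalized distance between bounding-box centroids:
    C_d = ||c1 - c2|| / sqrt(w^2 + h^2), so the cost lies in [0, 1]."""
    x1, y1, x2, y2 = box1
    u1, v1, u2, v2 = box2
    cx1, cy1 = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    cx2, cy2 = (u1 + u2) / 2.0, (v1 + v2) / 2.0
    return math.hypot(cx1 - cx2, cy1 - cy2) / math.hypot(w, h)

# Two toy boxes whose centroids are 50 pixels apart.
cost = centroid_cost((0, 0, 10, 10), (30, 40, 40, 50))
```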
\subsubsection{Visual Features}\label{sec:vsim} Distances might help distinguish basic correspondences, but this simple metric does not take into account key aspects, such as the jersey color (which team players belong to) or their skin tone. For this reason, a color similarity can be implemented in order to deal with these situations. Moreover, in this specific case, knowing that body positions are already obtained, fair comparisons can be performed, where the color surroundings of each part are only compared to the neighborhood of the same part in another bounding box. Nevertheless, it has to be taken into account that only the pairs of anatomical keypoints present or detected in both $B_{t_1}$ and $B_{t_2}$ (denoted here as $\mathbf{p}^k_1$ and $\mathbf{p}^k_2$, respectively) will be used for the computation. The color and texture of a keypoint can be computed by centering a neighborhood around it. That is, let ${\cal{E}}$ be a squared neighborhood of 3$\times$3 pixels centered at $\mathbf{0}\in\mathbf{R}^2$. Then, \begin{equation}\label{eq:Cc} C_{c}(B_{t_1},B_{t_2})\!=\!\frac{1}{255 |S| \, |\cal{E}|}\!\!\sum_{k\in S}\!\sum_{\mathbf{y}\in{\cal{E}}}\!\|I_{t_1}(\mathbf{p}^k_1+\mathbf{y})-I_{t_2}(\mathbf{p}^k_2+\mathbf{y})\| \end{equation} where $S$ denotes the set of mentioned pairs of corresponding keypoints detected in both frames. \subsubsection{Deep Learning Features} \label{sec:DLF} Deep Learning (DL) is a broadly used machine learning technique with many possible applications, such as classification, segmentation or prediction. The basis of any DL model is a deep neural network formed by many layers. These networks serve to predict values from a given input. Convolutional Neural Networks (CNNs) are special cases in which weights at every layer are shared spatially across an image. This has the effect of reducing the number of parameters needed for a layer and gaining a certain robustness to translation in the image.
A CNN architecture is composed of several kinds of layers, with convolutional layers being the most important ones, but also including non-linear activation functions, biases, etc. This type of layer computes the response of several filters by convolving them with different image patches. The weights associated with these filters, and also the ones associated with the non-linear activation functions, are learnt during the training process (in a supervised or unsupervised way) in order to achieve maximum accuracy for the specific target task. It is well known that the first convolutional layers produce higher responses to low-level features such as edges, while later layers correlate with mid-, high- and global-level features associated with more semantic attributes. Bearing in mind that training a model from scratch is expensive, researchers use pretrained models and their corresponding weights for their purposes, for instance by fine-tuning the model (feeding the model with new data and adapting the previously obtained weights accordingly). In the presented experiments, the popular VGG-19 network \cite{simonyan2014very} is used for feature extraction, initialized with weights trained on the ImageNet dataset~\cite{imagenet_cvpr09}. The original model was trained for image classification, and its architecture consists of 5 blocks with at least 2 convolutional layers each, plus fully-connected layers at the end that output a class probability vector for each image. The network takes as input a $224 \times 224 \times 3$ image, and the output size of the second convolutional layer of each block is shown in Table \ref{tab:OutVGG}.
\begin{table}[ht] \centering \resizebox{0.35\textwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline \textbf{} & \textbf{Width} & \textbf{Height} & \textbf{Nº Filters} \\ \hline b2c2 & 112 & 112 & 128 \\ \hline b3c2 & 56 & 56 & 256 \\ \hline b4c2 & 28 & 28 & 512 \\ \hline b5c2 & 14 & 14 & 512 \\ \hline \end{tabular}} \caption{Output size of VGG-19 convolutional layers. In the first column, b stands for the block number and c stands for the convolutional layer number inside that block.} \label{tab:OutVGG} \end{table} In order to feed the network with an appropriately sized image, a basic procedure is followed, as seen in Figure \ref{fig:resizing}: considering that player boxes are usually taller than wide, and having the center of the bounding box, its height $H_{B_{t}}$ is checked. Then, a square image of $H_{B_{t}} \times H_{B_{t}} \times 3$ pixels is generated around the center of the bounding box; finally, this image is resized to the desired width and height ($224$ and $224$, respectively). In this way, the aspect ratio of the bounding box content does not change. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{ResizeCut.png} \caption{Player and Pose Detection: (a) random patch of an image containing a player, (b) detected pose through pretrained models, (c) black: bounding box fitting the player boundaries, pink: bounding box with the default $224 \times 224$ pixel resolution, (d) reshaped bounding box to be fed into VGG-19.} \label{fig:resizing} \end{figure} However, extracting deep learning features from the whole bounding box introduces noise into the feature vector, as part of it belongs to the background (e.g. the court). Therefore, features are only extracted at those pixels that belong to detected body parts, resulting in a quantized 1D vector with length equal to the number of filters. As detailed below, part positions have to be downscaled to the output size of the convolutional layer. Moreover, all feature vectors must be normalized with the L2 norm.
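The square-crop-and-resize step of Figure \ref{fig:resizing} can be sketched as follows (our own toy version: a nearest-neighbour resize stands in for whatever interpolation is actually used, and clamping the window to the frame borders is an assumption for boxes near the image edge):

```python
import numpy as np

def square_crop(frame, box, size=224):
    """Crop a square of side H (the box height) around the box center, so
    the player's aspect ratio is preserved, then resample to size x size
    with nearest-neighbour index sampling."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    half = (y2 - y1) // 2
    h, w = frame.shape[:2]
    # clamp the square window to the frame domain
    top, bottom = max(0, cy - half), min(h, cy + half)
    left, right = max(0, cx - half), min(w, cx + half)
    crop = frame[top:bottom, left:right]
    # nearest-neighbour resize via index sampling
    ys = np.linspace(0, crop.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, size).astype(int)
    return crop[np.ix_(ys, xs)]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
patch = square_crop(frame, (900, 400, 960, 600))  # 60x200 box -> 200x200 crop
```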
An example using the 10th convolutional layer of VGG-19 is shown in Figure \ref{fig:featExVGG}, where a $1 \times (25 \times 512)$ vector is obtained.\\ \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{PipelineActivationsCut.png} \caption{Feature Extraction of all body parts using the 10th convolutional layer of a VGG-19 network.} \label{fig:featExVGG} \end{figure*} Once all boxes have their corresponding feature vectors, the metric defined in \cite{wang2019learning} is used to quantify differences; in particular, the similarity between two feature vectors $f_{t_1,k}^{y_{t_1}}$ and $f_{t_2,k}^{y_{t_2}}$, belonging to bounding boxes detected in $t_{1}$ and $t_{2}$ respectively, can be defined as: \begin{equation}\label{eq:CsSing} Sim(f_{t_1,k}^{y_{t_1}},f_{t_2,k}^{y_{t_2}})=\frac{\exp(f_{t_1,k}^{y_{t_1}} \cdot f_{t_2,k}^{y_{t_2}})}{\sum_{y \in {\cal{E}}'} \exp(f_{t_1,k}^{y_{t_1}} \cdot f_{t_2,k}^{y})} \end{equation} where $k$ corresponds to the particular body part, and $y_{t_1}$ and $y_{t_2}$ to the pixel position inside the neighborhood placed around the keypoint. Therefore, the total cost taking all parts into account is defined as: \begin{equation}\label{eq:CsTot} C_{DL}(B_{t_1},B_{t_2})\!=\!\frac{1}{|S|}\!\!\sum_{k \in S} \!\max_{\substack{y_{t_1} \in {\cal{E}} \\ y_{t_2} \in {\cal{E}}'} }(Sim(f_{t_1,k}^{y_{t_1}},f_{t_2,k}^{y_{t_2}})) \end{equation} where ${\cal{S}}$ corresponds, once again, to the set of detected parts in both frames, and ${\cal{E}}$ and ${\cal{E}'}$ correspond to the sets of pixels in the neighborhoods placed around each keypoint. Nevertheless, two important remarks have to be pointed out: \begin{enumerate} \item Some of the Open Pose detected parts have low confidence. Given that, generally, there are more than 14 detected parts per player, all parts with a confidence lower than 0.3 are discarded and not taken into account when extracting features.
Hence, the subset ${\cal{S}}$ in Equations \ref{eq:Cc} and \ref{eq:CsTot} considers all detected parts in both bounding boxes that satisfy the mentioned confidence threshold. \item Convolutional layer outputs (as implemented in the VGG-19) decrease the spatial resolution of the input. Since non-integer positions are found when downscaling parts' locations (in the input image) to the corresponding resolution of the layer of interest, the features of the $2 \times 2$ closest pixels at that layer are considered. The final cost is then given by the most similar of these $2 \times 2$ candidate feature vectors to the target one. In Tables~\ref{tab:nonstab} and \ref{tab:stab} a discussion on the effect of this approximate location is included. \end{enumerate} \subsection{Matching} Having quantified all bounding boxes in terms of features, a cost matrix containing the similarity between pairs of bounding boxes is computed by combining the different extraction results. The suitability of the different types of features is evaluated by combining them with appropriate weights before building this matrix; in the presented experiments, the following weighted sum of different costs has been applied: \begin{equation}\label{eq:cost} C(B_{t_1},B_{t_2})= \alpha {C}_{Feat1}(B_{t_1},B_{t_2}) + (1-\alpha) {C}_{Feat2}(B_{t_1},B_{t_2}) \end{equation} where $C_{{Feat}1}$ refers to $C_d$ given by (\ref{eq:Cd}), ${C}_{{Feat}2}$ refers either to $C_{DL}$ in (\ref{eq:CsTot}) or $C_c$ in (\ref{eq:Cc}), and $\alpha\in [0,1]$. From this matrix, unique matchings between boxes of adjacent frames are computed by minimizing the overall assignment cost: \begin{enumerate} \item For each bounding box at time $t_{N}$, the two minimum association costs (and their labels) among all the boxes in $t_{N-1}$ are stored in an $A_{t_{N},t_{N-1}}$ matrix.
\item If there are repeated label associations, a decision has to be made: \begin{itemize} \item If the cost of one of the repeated associations is considerably smaller than the others (by more than 10\%), that box is matched with the one in the previous frame. \item If the costs of all the repeated associations are similar (within 10\%), the box with the largest difference between its first and second minimum costs is set as the match. \item In both cases, for all boxes that have not been assigned, the label of their second minimum cost is checked too. If there is no existing association with that specific label, a new match is set. \end{itemize} \item In order to provide the algorithm with some more robustness, the same procedure described in Steps 1 and 2 is repeated with boxes in $t_{N}$ and $t_{N-2}$. This results in an $A_{t_{N},t_{N-2}}$ matrix. \item For each single box, the minimum-cost assignment is checked in both $A_{t_{N},t_{N-1}}$ and $A_{t_{N},t_{N-2}}$, keeping the minimum as the final match. In this way, a 2-frame memory tolerance is introduced into the algorithm, and players that might be lost for one frame can be recovered in the following one. \item If there are still bounding boxes without assignments, new labels are generated, considering these as new players that appear on the scene. Final labels are converted into unique identifiers, which will later be used to compute performance metrics. \end{enumerate} \section{Results} \label{sec:Res} In this section, a detailed ablation of quantitative results is provided and discussed, comparing all the above-mentioned techniques and combinations. Besides, the content of the gathered dataset is explained. \subsection{Dataset} A dataset of 22 European single-camera basketball sequences has been used. Original videos have full-HD resolution ($1920 \times 1080$ pixels) and 25 frames per second but, in order to provide fair comparisons, only 4 frames are extracted per second.
The included sequences involve static offensive basketball motion, with several sets of screens/isolations; moreover, different jersey colors and skin tonalities are included. However, the court is the same European one in all situations, and there are no fast-break/transition plays, as in the case where all players run from one side to the other, because camera stabilization techniques do not handle these situations. The average duration of these sequences is 11.07 seconds, resulting in a total of 1019 frames. Ground truth data is attached in the given dataset, containing bounding boxes over each player and all 3 referees (taking the minimum visible X and Y coordinates of each individual) in every single frame (when visible); this results in a total of 11339 boxes. \subsection{Quantitative Results} Although it is not part of this article's contribution, a quantitative assessment of the detection method is shown in Table \ref{tab:DetPlay}, where it is compared to the performance of the state-of-the-art YOLO network \cite{redmon2016you}; for a fair comparison, only the \textit{person} detections within the court boundaries are kept in both cases. These detections can be seen in Figure \ref{fig:DetFig} with their corresponding ground truth boxes.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{Detections.png} \caption{Player detections (green boxes) together with their ground truth (blue boxes).} \label{fig:DetFig} \end{figure} \begin{table}[ht] \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline & \textbf{Precision} & \textbf{Recall} & \textbf{F1-Score} \\ \hline Open Pose & 0.9718 & 0.9243 & 0.9470 \\ \hline YOLO & 0.8401 & 0.9426 & 0.8876 \\ \hline \end{tabular}} \caption{Detection Results} \label{tab:DetPlay} \end{table} From now on, all quantitative tracking results will be expressed in terms of the Multiple Object Tracking Accuracy (MOTA) metric, which is defined in \cite{bernardin2008evaluating} as: $$ MOTA = 1-\frac{\sum_{t} (fp_{t} + m_{t} + mm_{t})}{\sum_{t} g_{t}},$$ where $fp_{t}$, $m_{t}$, $mm_{t}$ and $g_{t}$ denote, respectively, the false positives, misses, mismatches and the total number of ground truth boxes over the whole sequence. Another meaningful tracking metric that has been computed as well is the Multiple Object Tracking Precision (MOTP), which can be defined as: $$ MOTP = \frac{\sum_{i,t} IoU_{i,t}}{\sum_{t} c_{t}},$$ where $IoU_{i,t}$ and $\sum_{t} c_{t}$ correspond to the intersection over union between two boxes, and to the number of correct assignments through the sequence, respectively. The detected bounding boxes for all the upcoming experiments are the same ones (thus the intersection with the ground truth bounding boxes does not change either), and since the total number of instances is large, the MOTP result barely changes in all presented combinations of techniques: $0.6165 \pm 0.0218 $. \\ Starting only with DL features (that is, $\alpha=0$ in (\ref{eq:cost}) and ${C}_{{Feat}2}$ equal to $C_{DL}$), Table \ref{table:OutConv} shows the maximum MOTA metrics achieved after performing the extraction at the output of different convolutional layers.
As mentioned, a pretrained VGG-19 architecture is used, taking as output the result of the second convolutional layer of each block, from the second to the fifth block. The best MOTA results are obtained with the output of the fourth block, corresponding to the 10th convolutional layer of the overall architecture. For the remaining tests, all DL features will be based on this layer, which has an output of size $ 28 \times 28 \times 512 $. \begin{table}[ht] \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Layer} & b2c2 & b3c2 & b4c2 & b5c2 \\ \hline \textbf{MOTA} & 0.5396 & 0.5972 & \textbf{0.6369} & 0.6321 \\ \hline \end{tabular}} \caption{MOTA results obtained with $\alpha=0$ in (\ref{eq:cost}), ${C}_{{Feat}2}$ equal to $C_{DL}$, and by extracting features at the output of different convolutional layers.}\label{table:OutConv} \end{table} Having tried all possible weights in 0.05 intervals, Table \ref{tab:nonstab} shows the most significant MOTA results for a non-stabilized video sequence. In this experiment, a comparison between Geometrical and DL features is shown, with their performance on their own as well as their best weighted combination. Besides, as explained in Subsection \ref{sec:DLF}, when extracting DL features, three different tests have been performed regarding the neighborhood size. As can be seen in Table \ref{tab:nonstab}, DL features outperform Geometrical ones, especially in the case of a 2x2 neighborhood. By combining them, and giving more weight to the DL side, results are improved in all cases, thus indicating that the two types of features complement each other. In Table \ref{tab:stab} the same experiments are shown, but this time using a stabilized video sequence. In this case, the Geometrical performance outperforms Deep Learning, but, as mentioned, these metrics will drastically drop if the dataset sequences contain fast camera movements (or even large panning motions).
From both Tables \ref{tab:nonstab} and \ref{tab:stab} it can be deduced that the best filter size when extracting DL pose features is a 2x2 neighborhood. \textit{A priori}, one might think that a 3x3 neighborhood should work better, as it already includes the 2x2 one, but a 3x3 spatial neighborhood in the output of the 10th convolutional layer is equivalent to a $24 \times 24$ real neighborhood around the specific part in the image domain. Accordingly, adding these extra positions will include court pixels in all feature vectors, which might then produce a higher response in court-court comparisons, resulting in non-meaningful matches. \begin{table}[ht] \centering \resizebox{0.35\textwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline \textbf{Neighborhood} & ${\alpha}$ & 1-$\alpha$ & \textbf{MOTA} \\ \hline --- & 1 & 0 & 0.5689 \\ \hline 1x1 & 0 & 1 & 0.5923 \\ \hline 1x1 & 0.3 & 0.7 & 0.6289 \\ \hline 2x2 & 0 & 1 & 0.6369 \\ \hline 2x2 & 0.2 & 0.8 & \textbf{0.6529} \\ \hline 3x3 & 0 & 1 & 0.6171 \\ \hline 3x3 & 0.3 & 0.7 & 0.6444 \\ \hline \end{tabular}} \caption{Non-stabilized results obtained from only 4 video frames per second.} \label{tab:nonstab} \end{table} \begin{table}[ht] \centering \resizebox{0.35\textwidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline \textbf{Neighborhood} & $\alpha$ & 1-$\alpha$ & \textbf{MOTA} \\ \hline --- & 1 & 0 & 0.6506 \\ \hline 2x2 & 0 & 1 & 0.6369 \\ \hline 1x1 & 0.6 & 0.4 & 0.6752 \\ \hline 2x2 & 0.55 & 0.45 & \textbf{0.6825} \\ \hline 3x3 & 0.7 & 0.3 & 0.6781 \\ \hline \end{tabular}} \caption{Stabilized results, with the same 4 video frames per second and weights as in Table~\ref{tab:nonstab}.}\label{tab:stab} \end{table} Apart from comparing Geometrical and DL features through $C_d$ and the mentioned variants of $C_{DL}$, the effect of Visual features (the color similarity $C_c$ explained in Subsection \ref{sec:vsim}) is checked too.
In Table \ref{tab:ConDLFeat}, the best weighted combinations in terms of MOTA are shown for a non-stabilized and a stabilized video sequence. In both cases, DL features outperform color ones by a 3\% margin. The combination of all Geometrical, Visual, and DL features outperforms the other combinations, but only by 0.2\%, which comes at the cost of extra computation, so using only DL features is worthwhile. \\ \begin{table}[ht] \centering \resizebox{0.4\textwidth}{!}{ \begin{tabular}{|c|c|} \hline \textbf{Combination of Features} & \textbf{MOTA} \\ \hline Geometrical + Visual & 0.6233 \\ \hline Geometrical + VGG & 0.6529 \\ \hline Geometrical + Visual {[}Stab{]} & 0.6583 \\ \hline Geometrical + VGG {[}Stab{]} & 0.6825 \\ \hline Geometrical + VGG + Visual {[}Stab{]} & \textbf{0.6843} \\ \hline \end{tabular}} \caption{Effect of Visual and Deep Learning features in combination with Geometrical ones.} \label{tab:ConDLFeat} \end{table} To break down and evaluate the MOTA contribution of every single pose part, Table \ref{tab:partPerf} is provided; these results have been obtained with a 2x2 neighborhood around parts, and without combining with Geometrical features. As can be seen, there are essentially three clusters: \begin{enumerate} \item Discriminative features, above a 0.35 MOTA, that manage to track with decent performance using only a $1 \times 512$ feature vector per player. These parts (shoulders, chest and hip) outline the main shape of the human \textit{torso}, which coincides with the jersey-skin boundary in the case of players. \item Features with a MOTA between 0.20 and 0.35, which do not track players properly on their own, but whose contribution might help the discriminative ones reach higher performance metrics. These parts include skin pixels of basic articulations such as elbows, knees, and ankles. \item Specific parts that show almost no detail at coarse resolution, thus resulting in low MOTA performance. 
Eyes are an example: although people's eyes have many features that make them discriminative (such as shape, color, pupil size, or eyebrow length), players' eyes in the dataset images do not span more than a 5x5 pixel region, and they all look similar in shape and share a brown or darkish color. This leads to poor tracking when matching only these parts. \end{enumerate} \begin{table}[ht] \centering \begin{tabular}{|c|c|} \hline \textbf{Part} & \textbf{MOTA} \\ \hline Chest & 0.5349 \\ \hline L-Shoulder & 0.4726 \\ \hline R-Shoulder & 0.4707 \\ \hline R-Hip & 0.3961 \\ \hline Mid-Hip & 0.3956 \\ \hline L-Hip & 0.3867 \\ \hline L-Knee & 0.3156 \\ \hline R-Knee & 0.3062 \\ \hline L-Elbow & 0.2862 \\ \hline R-Elbow & 0.2545 \\ \hline R-Ankle & 0.2418 \\ \hline L-Ankle & 0.2407 \\ \hline L-Toes & 0.1935 \\ \hline R-Toes & 0.1920 \\ \hline L-Ear & 0.1348 \\ \hline L-Heel & 0.1259 \\ \hline L-Wrist & 0.1235 \\ \hline R-Heel & 0.1126 \\ \hline L-Mid-Foot & 0.1116 \\ \hline R-Wrist & 0.1111 \\ \hline R-Mid-Foot & 0.0964 \\ \hline L-Eye & 0.0916 \\ \hline Nose & 0.0771 \\ \hline R-Ear & 0.0677 \\ \hline R-Eye & 0.0655 \\ \hline \end{tabular} \caption{Individual Part Tracking Performance, obtained with $\alpha=0$ in (\ref{eq:cost}) and ${C}_{{Feat}2}$ equal to $C_{DL}$.} \label{tab:partPerf} \end{table} Given these clusters, three different tracking tests have been performed taking only some parts into account; in particular, keeping all body parts whose individual MOTA performance was higher than (1) 0.35, (2) 0.20, or (3) 0.10, corresponding to 6, 12, and 20 parts, respectively. Results are shown in Table \ref{tab:sepParts}, where it can be seen that the second and third clusters complement the top ones, while the bottom five parts actually cause a drop in MOTA. 
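For reference, the MOTA values reported throughout follow the standard CLEAR-MOT definition; a minimal sketch with hypothetical counts (not the evaluation code used here):

```python
def mota(misses, false_positives, id_switches, gt_objects):
    """MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects."""
    return 1.0 - (misses + false_positives + id_switches) / gt_objects

mota(10, 10, 5, 100)  # -> 0.75
```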
The drawback of this clustering is that it requires an analysis that cannot be performed at test time, and different video sequences (\textit{i.e.}, different sports) might lead to different per-part results. \begin{table}[ht] \centering \resizebox{0.4\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Part MOTA} & \textbf{Nº of Parts} & \textbf{Total MOTA} \\ \hline \textgreater 0.35 & 6 & 0.6105 \\ \hline \textgreater 0.20 & 12 & 0.6412 \\ \hline \textgreater 0.10 & 20 & \textbf{0.6423} \\ \hline all & 25 & 0.6369 \\ \hline \end{tabular}} \caption{Clustering Part Results ($\alpha=0$ and ${C}_{{Feat}2}=C_{DL}$).} \label{tab:sepParts} \end{table} A qualitative detection and tracking result (obtained with the best combination of Geometrical + Deep Learning features without camera stabilization) is displayed in Figure \ref{fig:FinRes}, where players are detected inside bounding boxes whose color indicates their ID; as can be seen, all 33 associations are done properly except for a missed player in the first frame and a mismatch between frames 2 and 3 (orange-green boxes). \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{SequenceBoxesCut.png} \caption{Obtained tracking and pose results in three consecutive frames, where each bounding box color represents a unique ID.} \label{fig:FinRes} \end{figure*} \section{Conclusions} \label{sec:Conc} In this article, a single-camera multi-tracker for basketball video sequences has been presented. Using a pretrained model to detect humans and their pose, an ablation study has been detailed in order to address the feature extraction process, considering three types of features: Geometrical, Visual and Deep Learning based. In particular, Deep Learning features have been extracted by combining pose information with the output of convolutional layers of a VGG-19 network, reaching a maximum MOTA performance of 0.6843. 
Several conclusions can be extracted from the presented experiments: \begin{itemize} \item In the case of VGG-19, DL features extracted from the 10th convolutional layer provide the best accuracy; moreover, placing a 2x2 neighborhood around downscaled body parts improves the tracking performance. \item Classical Computer Vision techniques such as camera stabilization can help improve the performance of Geometrical features, but they have drawbacks, such as the inability to generalize to all kinds of camera movement. \item DL features outperform Visual ones when combined with Geometrical information. Combining all of them does not yield a performance boost. \item When extracting pose features from convolutional layers, body parts that are not distinguishable at coarse resolution have a negative effect on the overall performance. \end{itemize} Future work could involve fine-tuning a given network in order to obtain weights specific to tracking. This training should be done in an unsupervised/self-supervised way, and a bigger dataset should be used, including different types of basketball courts and all kinds of plays. Moreover, if there is no need to label ground-truth data, this new model could also be trained with other sports' data, thus potentially creating a robust multi-sport tracker. \section*{Acknowledgments} The authors acknowledge partial support by MICINN/FEDER UE project, reference PGC2018-098625-B-I00, H2020-MSCA-RISE-2017 project, reference 777826 NoMADS and F.C. Barcelona's data support. \ifCLASSOPTIONcaptionsoff \newpage \fi
\subsection{Dataset} We trained \texttt{baller2vec}{} on a publicly available dataset of player and ball trajectories recorded from 631 National Basketball Association (NBA) games from the 2015-2016 season.\footnote{\url{https://github.com/linouk23/NBA-Player-Movements}} All 30 NBA teams and 450 different players were represented. Because transition sequences are a strategically important part of basketball, unlike prior work, e.g., \citet{felsen2018will, yeh2019diverse, zhan2018generating}, we did not terminate sequences on a change of possession, nor did we constrain ourselves to a fixed subset of sequences. Instead, each training sample was generated on the fly by first randomly sampling a game, and then randomly sampling a starting time from that game. The following four seconds of data were downsampled to 5 Hz from the original 25 Hz and used as the input. Because we did not terminate sequences on a change of possession, we could not normalize the direction of the court as was done in prior work \cite{felsen2018will, yeh2019diverse, zhan2018generating}. Instead, for each sampled sequence, we randomly (with a probability of 0.5) rotated the court 180\degree{} (because the court's direction is arbitrary), doubling the size of the dataset. We used a training/validation/test split of 569/30/32 games, respectively (i.e., 5\% of the games were used for testing, and 5\% of the remaining 95\% of games were used for validation). As a result, we had access to $\sim$82 million different (albeit overlapping) training sequences (569 games $\times$ 4 periods per game $\times$ 12 minutes per period $\times$ 60 seconds per minute $\times$ 25 Hz $\times$ 2 rotations), $\sim$800x the number of sequences used in prior work. 
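The 180\degree{} rotation augmentation can be sketched as follows; this is a minimal illustration assuming coordinates on a standard $94 \times 50$ ft NBA court, and the actual preprocessing code may differ:

```python
COURT_LENGTH, COURT_WIDTH = 94.0, 50.0  # standard NBA court, in feet

def rotate180(coords):
    """Rotate (x, y) positions 180 degrees about the court center.  During
    training this is applied with probability 0.5 per sampled sequence,
    since the court's direction is arbitrary."""
    return [(COURT_LENGTH - x, COURT_WIDTH - y) for (x, y) in coords]

rotate180([(0.0, 0.0)])  # -> [(94.0, 50.0)]
```

Applying the rotation twice recovers the original coordinates, so the augmentation exactly doubles the pool of distinct training sequences.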
For both the validation and test sets, $\sim$1,000 different, \textit{non-overlapping} sequences were selected for evaluation by dividing each game into $\lceil \frac{1,000}{N} \rceil$ non-overlapping chunks (where $N$ is the number of games), and using the starting four seconds from each chunk as the evaluation sequence. \subsection{Model}\label{sec:model} We trained separate models for \textbf{Task P}{} and \textbf{Task B}{}. For all experiments, we used a single Transformer architecture that was nearly identical to the original model described in \citet{vaswani2017attention}, with $d_{\text{model}} = 512$ (the dimension of the input and output of each Transformer layer), eight attention heads, $d_{\textrm{ff}} = 2048$ (the dimension of the inner feedforward layers), and six layers, although we did not use dropout. For \textit{both} \textbf{Task P}{} and \textbf{Task B}{}, the players \textit{and} the ball were included in the input, and both the players and the ball were embedded to 20-dimensional vectors. The input features for each player consisted of his identity, his $(x, y)$ coordinates on the court at each time step in the sequence, and a binary variable indicating the side of his frontcourt (i.e., the direction of his team's hoop).\footnote{We did not include team identity as an input variable because teams are collections of players and a coach, and coaches did not vary in the dataset because we only had access to half of one season of data; however, with additional seasons of data, we would include the coach as an input variable.} The input features for the ball were its $(x, y, \zeta)$ coordinates at each time step. The input features for the players and the ball were processed by separate, three-layer MLPs before being fed into the Transformer. Each MLP had 128, 256, and 512 nodes in its three layers, respectively, and a ReLU nonlinearity following each of the first two layers. 
For classification, a single linear layer was applied to the Transformer output followed by a softmax. For players, we binned an $11 \text{ ft} \times 11 \text{ ft}$ 2D Euclidean trajectory space into an $11 \times 11$ grid of $1 \text{ ft} \times 1 \text{ ft}$ squares for a total of 121 player trajectory labels. Similarly, for the ball, we binned a $19 \text{ ft} \times 19 \text{ ft} \times 19 \text{ ft}$ 3D Euclidean trajectory space into a $19 \times 19 \times 19$ grid of $1 \text{ ft} \times 1 \text{ ft} \times 1 \text{ ft}$ cubes for a total of 6,859 ball trajectory labels. We used the Adam optimizer \cite{kingma2014adam} with an initial learning rate of $10^{-6}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, and $\epsilon = 10^{-9}$ to update the model's parameters, of which there were $\sim$19/23 million for \textbf{Task P}{}/\textbf{Task B}{}, respectively. The learning rate was reduced to $10^{-7}$ after 20 consecutive epochs of the validation loss not improving. Models were implemented in PyTorch and trained on a single NVIDIA GTX 1080 Ti GPU for seven days ($\sim$650 epochs) where each epoch consisted of 20,000 training samples, and the validation set was used for early stopping. \subsection{Baselines} \begin{wraptable}{r}{0.4\linewidth} \vskip -0.26in \centering \caption{The perplexity per trajectory bin on the test set when using \texttt{baller2vec}{} vs. the marginal distribution of the trajectory bins in the training set (``Train'') for all predictions. \texttt{baller2vec}{} considerably reduces the uncertainty over the trajectory bins. } \begin{center} \begin{tabular}{lcc} \toprule & \texttt{baller2vec}{} & Train \\ \midrule \textbf{Task P} & 1.64 & 15.72 \\ \textbf{Task B} & 13.44 & 316.05 \\ \bottomrule \end{tabular} \label{tab:perplexity} \end{center} \vskip -0.1in \end{wraptable} As our naive baseline, we used the marginal distribution of the trajectory bins from the training set for all predictions. 
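The trajectory binning that defines these labels (Subsection \ref{sec:model}) can be sketched as follows for the $11 \times 11$ player grid; this is a minimal pure-Python illustration in which clamping out-of-range trajectories to the border cells is our own assumption:

```python
def bin_trajectory(dx, dy, n=11, cell=1.0):
    """Map a trajectory (dx, dy), in feet, to one of n * n class labels.
    The grid is centered on the agent, so (0, 0) falls in the middle cell."""
    half = n * cell / 2.0  # 5.5 ft for the 11 x 11 player grid
    col = min(max(int((dx + half) // cell), 0), n - 1)
    row = min(max(int((dy + half) // cell), 0), n - 1)
    return row * n + col

bin_trajectory(0.0, 0.0)  # -> 60, the central "stationary" cell
```

The ball's $19 \times 19 \times 19$ grid follows the same pattern with a third coordinate.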
For our strong baseline, we implemented a \texttt{baller2vec}{}-like graph recurrent neural network (GRNN) and trained it on \textbf{Task P} (code is available in the \texttt{baller2vec}{} repository).\footnote{We chose to implement our own strong baseline because \texttt{baller2vec}{} has far more parameters than models from prior work (e.g., $\sim$70x \citet{felsen2018will}).} Specifically, at each time step, the player and ball inputs were first processed using MLPs as in \texttt{baller2vec}{}, and these inputs were then fed into a graph neural network (GNN) similar to \citet{yeh2019diverse}. The node and edge functions of the GNN were each a Transformer-like feedforward network (TFF), i.e., $\text{TFF}(x) = \text{LN}(x + W_{2}\text{ReLU}(W_{1} x + b_{1}) + b_{2})$, where LN is Layer Normalization \cite{ba2016layer}, $W_{1}$ and $W_{2}$ are weight matrices, $b_{1}$ and $b_{2}$ are bias vectors, and ReLU is the rectifier activation function. For our RNN, we used a gated recurrent unit (GRU) RNN \cite{cho-etal-2014-learning} in which we replaced each of the six weight matrices of the GRU with a TFF. Each TFF had the same dimensions as the Transformer layers used in \texttt{baller2vec}{}. Our GRNN had $\sim$18M parameters, which is comparable to the $\sim$19M in \texttt{baller2vec}{}. We also trained our GRNN for seven days ($\sim$175 epochs). \subsection{Ablation studies} \begin{wraptable}{r}{0.61\linewidth} \vskip -0.55in \centering \caption{The average NLL (lower is better) on the \textbf{Task P}{} test set and seconds per training epoch (SPE) for \texttt{baller2vec}{} (\texttt{b2v}) and our GRNN. \texttt{baller2vec}{} trains $\sim$3.8 times faster per epoch compared to our GRNN, and \texttt{baller2vec}{} outperformed our GRNN by 10.5\% when given the same amount of training time. Even when only allowed to train for half (``0.5x'') and a quarter (``0.25x'') as long as our GRNN, \texttt{baller2vec}{} outperformed our GRNN by 9.1\% and 1.5\%, respectively. 
} \begin{center} \begin{tabular}{lcccc} \toprule & \texttt{b2v} & \texttt{b2v} (0.5x) & \texttt{b2v} (0.25x) & GRNN \\ \midrule NLL & 0.492 & 0.499 & 0.541 & 0.549 \\ SPE & $\sim$900 & $\sim$900 & $\sim$900 & $\sim$3,400 \\ \bottomrule \end{tabular} \vskip -0.2in \label{tab:grnn} \end{center} \end{wraptable} To assess the impacts of the multi-entity design and player embeddings of \texttt{baller2vec}{} on model performance, we trained three variations of our \textbf{Task P}{} model using: (1) one player in the input without player identity, (2) all 10 players in the input without player identity, and (3) all 10 players in the input with player identity. In experiments where player identity was not used, a single generic player embedding was used in place of the player identity embeddings. We also trained two variations of our \textbf{Task B}{} model: one with player identity and one without. Lastly, to determine the extent to which \texttt{baller2vec}{} uses historical information in its predictions, we compared the performance of our best \textbf{Task P}{} model on the full sequence test set with its performance on the test set when \textit{only predicting the trajectories for the first frame} (i.e., we applied the \textit{same} model to only the first frames of the test set). \subsection{Multi-entity sequences} \begin{wrapfigure}{r}{0.4\linewidth} \vskip -0.5in \includegraphics[width=\linewidth]{discrete_trajectory} \caption{An example of a binned trajectory. The agent's starting position is at the center of the grid, and the cell containing the agent's ending position is used as the label (of which there are $n^{2}$ possibilities). } \vskip -0.15in \label{fig:discrete} \end{wrapfigure} Let $A = \{1, 2, \dots, B\}$ be a set indexing $B$ entities and $P = \{p_{1}, p_{2}, \dots, p_{K}\} \subset A$ be the $K$ entities involved in a particular sequence. 
Further, let $Z_{t} = \{z_{t, 1}, z_{t, 2}, \dots, z_{t, K}\}$ be an \textit{unordered} set of $K$ feature vectors such that $z_{t,k}$ is the feature vector at time step $t$ for entity $p_{k}$. $\mathcal{Z} = (Z_{1}, Z_{2}, \dots, Z_{T})$ is thus an \textit{ordered} sequence of sets of feature vectors over $T$ time steps. When $K = 1$, $\mathcal{Z}$ is a sequence of individual feature vectors, which is the underlying data structure for many NLP problems. We now consider two different tasks: (1) sequential entity labeling, where each entity has its own label at each time step (which is conceptually similar to word-level language modeling), and (2) sequential labeling, where each time step has a single label (see Figure \ref{fig:baller2vec}). For (1), let $\mathcal{V} = (V_{1}, V_{2}, \dots, V_{T})$ be a sequence of sets of labels corresponding to $\mathcal{Z}$ such that $V_{t} = \{v_{t, 1}, v_{t, 2}, \dots, v_{t, K}\}$ and $v_{t, k}$ is the label at time step $t$ for the entity indexed by $k$. For (2), let $W = (w_{1}, w_{2}, \dots, w_{T})$ be a sequence of labels corresponding to $\mathcal{Z}$ where $w_{t}$ is the label at time step $t$. The goal is then to learn a function $f$ that maps a set of entities and their time-dependent feature vectors $\mathcal{Z}$ to a probability distribution over either (1) the entities' time-dependent labels $\mathcal{V}$ or (2) the sequence of labels $W$. \subsection{Multi-agent spatiotemporal modeling}\label{sec:masm} In the MASM setting, $P$ is a set of $K$ different agents and $C_{t} = \{(x_{t,1}, y_{t,1}), (x_{t,2}, y_{t,2}), \dots, (x_{t,K}, y_{t,K})\}$ is an unordered set of $K$ coordinate pairs such that $(x_{t,k}, y_{t,k})$ are the coordinates for agent $p_{k}$ at time step $t$. The ordered sequence of sets of coordinates $\mathcal{C} = (C_{1}, C_{2}, \dots, C_{T})$, together with $P$, thus defines the trajectories for the $K$ agents over $T$ time steps. 
We then define $z_{t,k}$ as: $z_{t,k} = g([e(p_{k}), x_{t,k}, y_{t,k}, h_{t,k}])$, where $g$ is a multilayer perceptron (MLP), $e$ is an agent embedding layer, and $h_{t,k}$ is a vector of optional contextual features for agent $p_{k}$ at time step $t$. The trajectory for agent $p_{k}$ at time step $t$ is defined as $(x_{t+1,k} - x_{t,k}, y_{t+1,k} - y_{t,k})$. Similar to \citet{zheng2016generating}, to fully capture the multimodal nature of the trajectory distributions, we binned the 2D Euclidean space into an $n \times n$ grid (Figure \ref{fig:discrete}) and treated the problem as a classification task. Therefore, $\mathcal{Z}$ has a corresponding sequence of sets of trajectory labels (i.e., $v_{t, k} = \text{Bin}(\Delta x_{t,k}, \Delta y_{t,k})$, so $v_{t, k}$ is an integer from one to $n^{2}$), and the loss for each sample in \textbf{Task P}{} is: $\mathcal{L} = \sum_{t=1}^{T} \sum_{k=1}^{K} -\ln(f(\mathcal{Z})_{t,k}[v_{t,k}])$, where $f(\mathcal{Z})_{t,k}[v_{t,k}]$ is the probability assigned to the trajectory label for agent $p_{k}$ at time step $t$ by $f$; i.e., the loss is the NLL of the data according to the model. For \textbf{Task B}{}, the loss for each sample is: $\mathcal{L} = \sum_{t=1}^{T} -\ln(f(\mathcal{Z})_{t}[w_{t}])$, where $f(\mathcal{Z})_{t}[w_{t}]$ is the probability assigned to the trajectory label for the ball at time step $t$ by $f$, and the labels correspond to a binned 3D Euclidean space (i.e., $w_{t} = \text{Bin}(\Delta x_{t}, \Delta y_{t}, \Delta \zeta_{t})$, so $w_{t}$ is an integer from one to $n^{3}$). \begin{figure}[h] \centering \subfigure{\includegraphics[width=\textwidth]{baller2vec}} \caption{An overview of our multi-entity Transformer, \texttt{baller2vec}{}. 
Each time step $t$ consists of an \textit{unordered} set $Z_{t}$ of entity feature vectors (colored circles) as the input, with either (\textbf{left}) a corresponding set $V_{t}$ of entity labels (colored diamonds) or (\textbf{right}) a single label $w_{t}$ (gray triangle) as the target. Matching colored circles/diamonds across time steps correspond to the same entity. In our experiments, each entity feature vector $z_{t,k}$ is produced by an MLP $g$ that takes a player's identity embedding $e(p_{k})$, raw court coordinates $(x_{t,k}, y_{t,k})$, and a binary variable indicating the player's frontcourt $h_{t,k}$ as input. Each entity label $v_{t,k}$ is an integer indexing the trajectory bin derived from the player's raw trajectory, while each $w_{t}$ is an integer indexing the ball's trajectory bin. } \vskip -0.15in \label{fig:baller2vec} \end{figure} \subsection{The multi-entity Transformer}\label{sec:multi_ent_trans} We now describe our multi-entity Transformer, \texttt{baller2vec}{} (Figure \ref{fig:baller2vec}). For NLP tasks, the Transformer self-attention mask $M$ takes the form of a $T \times T$ matrix (Figure \ref{fig:masks}) where $T$ is the length of the sequence. The element at $M_{t_{1},t_{2}}$ thus indicates whether or not the model can ``look'' at time step $t_{2}$ when processing time step $t_{1}$. Here, we generalize the standard Transformer to the multi-entity setting by employing a $T \times K \times T \times K$ mask \textit{tensor} where element $M_{t_{1},k_{1},t_{2},k_{2}}$ indicates whether or not the model can ``look'' at agent $p_{k_{2}}$ at time step $t_{2}$ when processing agent $p_{k_{1}}$ at time step $t_{1}$. Here, we mask all elements where $t_{2} > t_{1}$ and leave all remaining elements unmasked, i.e., \texttt{baller2vec}{} is a ``causal'' model. 
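A minimal pure-Python sketch of this causal rule, written directly in the flattened $TK \times TK$ form used by standard Transformer implementations (an illustration, not the actual code):

```python
def multi_entity_causal_mask(T, K):
    """TK x TK boolean mask: entry (t1 * K + k1, t2 * K + k2) is True
    (i.e. masked) iff t2 > t1, so every agent attends to every agent at
    the current and all past time steps, but never to the future."""
    size = T * K
    return [[t2_flat // K > t1_flat // K for t2_flat in range(size)]
            for t1_flat in range(size)]
```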
In practice, to be compatible with Transformer implementations in major deep learning libraries, we reshape $M$ into a $T K \times T K$ matrix (Figure \ref{fig:masks}), and the input to the Transformer is a matrix with shape $T K \times F$ where $F$ is the dimension of each $z_{t,k}$. \citet{Irie2019} observed that positional encoding \cite{vaswani2017attention} is not only unnecessary, but detrimental for Transformers that use a causal attention mask, so we do not use positional encoding with \texttt{baller2vec}{}. The remaining computations are identical to the standard Transformer (see code).\footnote{See also ``The Illustrated Transformer'' (\url{https://jalammar.github.io/illustrated-transformer/}) for an introduction to the architecture.} \begin{wrapfigure}{r}{0.6\linewidth} \centering \subfigure{\includegraphics[width=0.45\linewidth]{standard_mask}} \subfigure{\includegraphics[width=0.45\linewidth]{baller2vec_mask}} \vskip -0.1in \caption{\textbf{Left}: the standard self-attention mask matrix $M$. The element at $M_{t_{1},t_{2}}$ indicates whether or not the model can ``look'' at time step $t_{2}$ when processing time step $t_{1}$. \textbf{Right}: the matrix form of our multi-entity self-attention mask tensor. In tensor form, element $M_{t_{1},k_{1},t_{2},k_{2}}$ indicates whether or not the model can ``look'' at agent $p_{k_{2}}$ at time step $t_{2}$ when processing agent $p_{k_{1}}$ at time step $t_{1}$. In matrix form, this corresponds to element $M_{t_{1} K + k_{1}, t_{2} K + k_{2}}$ when using zero-based indexing. The $M$ shown here is for a static, fully connected graph, but other, potentially evolving network structures can be encoded in the attention mask tensor. 
} \vskip -0.65in \label{fig:masks} \end{wrapfigure} \subsection{Trajectory modeling in sports}\label{sec:related_traj} There is a rich literature on MASM, particularly in the context of sports, e.g., \citet{kim2010motion, zheng2016generating, le2017coordinated, le2017data, qi2020imitative, zhan2020learning}. Most relevant to our work is \citet{yeh2019diverse}, who used a variational recurrent neural network combined with a graph neural network to forecast trajectories in a multi-agent setting. Like their approach, our model is permutation equivariant with regard to the ordering of the agents; however, we use a multi-head attention mechanism to achieve this permutation equivariance while the permutation equivariance in \citet{yeh2019diverse} is provided by the graph neural network. Specifically, \citet{yeh2019diverse} define: $v \to e: \textbf{e}_{i,j} = f_{e}([\textbf{v}_{i}, \textbf{v}_{j}, \textbf{t}_{i, j}])$ and $e \to v: \textbf{o}_{i} = f_{v}(\sum_{j \in \mathcal{N}_{i}} [\mathbf{e}_{i, j}, \textbf{t}_{i}])$, where $\textbf{v}_{i}$ is the initial state of agent $i$, $\textbf{t}_{i, j}$ is an embedding for the edge between agents $i$ and $j$, $\textbf{e}_{i,j}$ is the representation for edge $(i, j)$, $\mathcal{N}_{i}$ is the neighborhood for agent $i$, $\textbf{t}_{i}$ is a node embedding for agent $i$, $\mathbf{o}_{i}$ is the output state for agent $i$, and $f_{e}$ and $f_{v}$ are deep neural networks. Assuming \textit{each individual player} is a different ``type'' in $f_{e}$ (i.e., attempting to maximize the level of personalization) would require $450^{2} =$ 202,500 (i.e., $B^{2}$) different $t_{i,j}$ edge embeddings, many of which would never be used during training and thus inevitably lead to poor out-of-sample performance. Reducing the number of type embeddings requires making assumptions about the nature of the relationships between nodes. 
By using a multi-head attention mechanism, \texttt{baller2vec}{} learns to integrate information about different agents in a highly flexible manner that is both agent and time-dependent, and can generalize to unseen agent combinations. The attention heads in \texttt{baller2vec}{} are somewhat analogous to edge types, but, importantly, they do not require a priori knowledge about the relationships between the players. Additionally, unlike recent works that use variational methods to train their generative models \cite{yeh2019diverse, felsen2018will, zhan2018generating}, we translate the multi-agent trajectory modeling problem into a classification task, which allows us to train our model by strictly maximizing the likelihood of the data. As a result, we do not make any assumptions about the distributions of the trajectories nor do we need to set any priors over latent variables. \citet{zheng2016generating} also predicted binned trajectories, but they used a recurrent convolutional neural network to predict the trajectory of a single player at a time at each time step. \subsection{Transformers for multi-agent spatiotemporal modeling} \citet{giuliari2020transformer} used a Transformer to forecast the trajectories of \textit{individual} pedestrians, i.e., the model does not consider interactions between individuals. \citet{YuMa2020Spatio} used \textit{separate} temporal and spatial Transformers to forecast the trajectories of multiple, interacting pedestrians. Specifically, the temporal Transformer processes the coordinates of each pedestrian \textit{independently} (i.e., it does not model interactions), while the spatial Transformer, which is inspired by Graph Attention Networks \cite{velivckovic2017graph}, processes the pedestrians \textit{independently at each time step}. 
\citet{sanford2020group} used a Transformer to classify on-the-ball events from sequences in soccer games; however, only the coordinates of the $K$-nearest players to the ball were included in the input (along with the ball's coordinates). Further, the \textit{order} of the included players was based on their average distance from the ball for a given temporal window, which can lead to specific players changing position in the input between temporal windows. As far as we are aware, \texttt{baller2vec} is the \textbf{first} Transformer capable of processing all agents \textit{simultaneously across time} without imposing an order on the agents. \subsection{\texttt{baller2vec}{} is an effective learning algorithm for multi-agent spatiotemporal modeling.}\label{sec:baller2vec_masm} \begin{wraptable}{r}{0.45\linewidth} \vskip -0.25in \centering \caption{The average NLL on the test set for each of the models in our ablation experiments (lower is better). For \textbf{Task P}{}, using all 10 players improved model performance by 18.0\%, while using player identity improved model performance by an additional 4.4\%. For \textbf{Task B}{}, using player identity improved model performance by 2.7\%. 1/10 indicates whether one or 10 players were used as input, respectively, while I/NI indicates whether or not player identity was used, respectively. } \begin{center} \begin{tabular}{lccc} \toprule Task & 1-NI & 10-NI & 10-I \\ \midrule \textbf{Task P}{} & 0.628 & 0.515 & 0.492 \\ \textbf{Task B}{} & N/A & 2.670 & 2.598 \\ \bottomrule \end{tabular} \label{tab:ablation} \end{center} \vskip -0.2in \end{wraptable} The average NLL on the test set for our best \textbf{Task P}{} model was 0.492, while the average NLL for our best \textbf{Task B}{} model was 2.598. 
In NLP, model performance is often expressed in terms of the perplexity per word, which, intuitively, is the number of faces on a fair die that has the same amount of uncertainty as the model per word (i.e., a uniform distribution over $M$ labels has a perplexity of $M$, so a model with a per word perplexity of six has the same average uncertainty as rolling a fair six-sided die). In our case, we consider the perplexity per trajectory bin, defined as: $PP = e^{\frac{1}{NTK} \sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{k=1}^{K} -\ln(p(v_{n, t, k}))}$, where $N$ is the number of sequences. Our best \textbf{Task P}{} model achieved a $PP$ of 1.64, i.e., \texttt{baller2vec}{} was, on average, as uncertain as rolling a 1.64-sided fair die (better than a coin flip) when predicting player trajectory bins (Table \ref{tab:perplexity}). For comparison, when using the distribution of the player trajectory bins in the training set as the predicted probabilities, the $PP$ on the test set was 15.72. Our best \textbf{Task B}{} model achieved a $PP$ of 13.44 (compared to 316.05 when using the training set distribution). Compared to our GRNN, \texttt{baller2vec}{} was $\sim$3.8 times faster and had a 10.5\% lower average NLL when given an equal amount of training time (Table \ref{tab:grnn}). Even when only given half as much training time as our GRNN, \texttt{baller2vec}{} had a 9.1\% lower average NLL. \subsection{\texttt{baller2vec}{} uses information about all players on the court through time, in addition to player identity, to model spatiotemporal dynamics.}\label{sec:baller2vec_ablation} Results for our ablation experiments can be seen in Table \ref{tab:ablation}. Including all 10 players in the input dramatically improved the performance of our \textbf{Task P}{} model by 18.0\% vs. only including a single player. Including player identity improved the model's performance a further 4.4\%. 
This stands in contrast to \citet{felsen2018will} where the inclusion of player identity led to slightly \textit{worse} model performance; a counterintuitive result given the range of skills among NBA players, but possibly a side effect of their role-alignment procedure. Additionally, when replacing the players in each test set sequence with random players, the performance of our best \textbf{Task P}{} model deteriorated by 6.2\% from 0.492 to 0.522. Interestingly, including player identity only improved our \textbf{Task B}{} model's performance by 2.7\%. Lastly, our best \textbf{Task P}{} model's performance on the full sequence test set (0.492) was 70.6\% better than its performance on the single frame test set (1.67), i.e., \texttt{baller2vec}{} is clearly using historical information to model the spatiotemporal dynamics of basketball. \vspace{-0.2in} \subsection{\texttt{baller2vec}{}'s learned player embeddings encode individual attributes.}\label{sec:baller2vec_embeddings} \begin{figure*}[t] \centering \vskip -0.1in \subfigure{\includegraphics[width=\textwidth]{player_embeddings}} \caption{As can be seen in this 2D UMAP of the player embeddings, by exclusively learning to predict the trajectory of the ball, \texttt{baller2vec}{} was able to infer idiosyncratic player attributes. The left-hand side of the plot contains tall post players (\mytriangle{white}, \mysquare{white}), e.g., Serge Ibaka, while the right-hand side of the plot contains shorter shooting guards (\mystar{white}) and point guards (+), e.g., Stephen Curry. The connecting transition region contains forwards (\mysquare{white}, \mycircle{white}) and other ``hybrid'' players, i.e., individuals possessing both guard and post skills, e.g., LeBron James. Further, players with similar defensive abilities, measured here by the cube root of the players' blocks per minute in the 2015-2016 season \cite{basketball2021stats}, cluster together. 
} \vskip -0.2in \label{fig:embeddings} \end{figure*} \begin{wrapfigure}{l}{0.4\linewidth} \centering \vskip -0.15in \includegraphics[width=\linewidth]{similar_players} \caption{Nearest neighbors in \texttt{baller2vec}{}'s embedding space are plausible doppelgängers, such as the explosive point guards Russell Westbrook and Derrick Rose, and seven-foot tall brothers Pau and Marc Gasol. Images credits can be found in Table \ref{tab:image_credits}. } \vskip -0.15in \label{fig:similar_players} \end{wrapfigure} Neural language models are widely known for their ability to encode semantic relationships between words and phrases as geometric relationships between embeddings—see, e.g., \citet{mikolov2013distributed, mikolov2013efficient, le2014distributed, sutskever2014sequence}. \citet{alcorn20182vec} observed a similar phenomenon in a baseball setting, where batters and pitchers with similar skills were found next to each other in the embedding space learned by a neural network trained to predict the outcome of an at-bat. A 2D UMAP \cite{mcinnes2018umap} of the player embeddings learned by \texttt{baller2vec}{} for \textbf{Task B}{} can be seen in Figure \ref{fig:embeddings}. Like \texttt{(batter|pitcher)2vec} \cite{alcorn20182vec}, \texttt{baller2vec}{} seems to encode skills and physical attributes in its player embeddings. Querying the nearest neighbors for individual players reveals further insights about the \texttt{baller2vec}{} embeddings. For example, the nearest neighbor for Russell Westbrook, an extremely athletic 6'3" point guard, is Derrick Rose, a 6'2" point guard also known for his athleticism (Figure \ref{fig:similar_players}). Amusingly, the nearest neighbor for Pau Gasol, a 7'1" center with a respectable shooting range, is his younger brother Marc Gasol, a 6'11" center, also with a respectable shooting range. 
\subsection{\texttt{baller2vec}{}'s predicted trajectory bin distributions depend on both the historical and current context.}\label{sec:baller2vec_trajs} Because \texttt{baller2vec}{} \textit{explicitly} models the distribution of the player trajectories (unlike variational methods), we can easily visualize how its predicted trajectory bin distributions shift in different situations. As can be seen in Figure \ref{fig:trajs}, \texttt{baller2vec}{}'s predicted trajectory bin distributions depend on both the historical and current context. When provided with limited historical information, \texttt{baller2vec}{} tends to be less certain about where the players might go. \texttt{baller2vec}{} also tends to be more certain when predicting trajectory bins at ``easy'' moments (e.g., a player moving into open space) vs. ``hard'' moments (e.g., an offensive player choosing which direction to move around a defender). \begin{wrapfigure}{l}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{traj_preds} \caption{\texttt{baller2vec}{}'s predicted trajectory bin distributions are affected by both the historical and current context. At $t = 1$, \texttt{baller2vec}{} is fairly uncertain about the target player's (\mytriangle{red}; $k = 8$) trajectory (left grid and dotted red line; the blue-bordered center cell is the ``stationary'' trajectory), with most of the probability mass divided between trajectories moving towards the ball handler's sideline (right grid; black = 1.0; white = 0.0). After observing a portion of the sequence ($t = 6$), \texttt{baller2vec}{} becomes very certain about the target player's trajectory ($f_{6,8}$), but when the player reaches a decision point ($t = 13$), \texttt{baller2vec}{} becomes split between trajectories (staying still or moving towards the top of the key). Additional examples can be found in Figure \ref{fig:traj_forecasts}.
\ball{} = ball, \mysquare{white} = offense, \mytriangle{gray} = defense, and $f_{t,k} = f(\mathcal{Z})_{t,k}$. } \vskip -1in \label{fig:trajs} \end{wrapfigure} \subsection{Attention heads in \texttt{baller2vec}{} appear to perform basketball-relevant functions.}\label{sec:baller2vec_attn} One intriguing property of the attention mechanism \cite{graves2013generating, graves2014neural, weston2014memory, bahdanau2014neural} is how, when visualized, the attention weights often seem to reveal how a model is ``thinking''. For example, \citet{vaswani2017attention} discovered examples of attention heads in their Transformer that appear to be performing various language understanding subtasks, such as anaphora resolution. As can be seen in Figure \ref{fig:attn}, some of the attention heads in \texttt{baller2vec}{} seem to be performing basketball understanding subtasks, such as keeping track of the ball handler's teammates, and anticipating who the ball handler will pass to, which, intuitively, help with our task of predicting the ball's trajectory. \section{Introduction} \input{Introduction} \section{Methods} \input{Methods} \section{Experiments} \input{Experiments} \section{Results} \input{Results} \section{Related Work} \input{Related} \section{Limitations}\label{sec:limitations} \input{Limitations} \section{Conclusion} \input{Conclusion} \section{Author Contributions} MAA conceived and implemented the architecture, designed and ran the experiments, and wrote the manuscript. AN partially funded MAA, provided the GPUs for the experiments, and commented on the manuscript. \section{Acknowledgements} We would like to thank Sudha Lakshmi, Katherine Silliman, Jan Van Haaren, Hans-Werner Van Wyk, and Eric Winsberg for their helpful suggestions on how to improve the manuscript.