\section{Introduction}\n\nMotivated by the desire for greater efficiency in drug development and the low success rates in confirmatory (Phase 3) studies, methodological research on adaptive designs and interest in their application has grown tremendously over the last 30 years. In an adaptive design, accumulating data can be used to modify the course of the trial. Several possible adaptations can be considered in interim analyses, for example, adaptive randomization for dose finding, dropping and\/or adding treatment arms, sample size re-estimation, and early stopping for safety, futility or efficacy, to name a few.\n\nValidity and integrity are two major considerations in adaptive designs (Dragalin, 2006). Because data from one stage of the trial can inform the design of future stages of the trial, careful steps need to be taken to maintain the validity of the trial, i.e., control of the Type I error probability and minimization of bias. To maintain trial integrity, it is important that all adaptations be pre-planned, prior to the unblinded examination of data, and that all trial personnel other than those responsible for making the adaptations are blind to the results of any interim analysis (Food and Drug Administration, 2019). It is also important to ensure consistency in trial conduct among the different stages.\n\nA general method for hypothesis testing in experiments with adaptive interim analyses based on combining stage-wise $p$-values was proposed by Bauer and K\\\"{o}hne (1994). 
The basic idea behind the construction of a combination test in a two-stage adaptive design is to transform the stage-wise test statistics to $p$-values, with independence of the $p$-values following from the conditional invariance principle (Brannath {\\it{et al}}., 2007, 2012; Wassmer and Brannath, 2016), regardless of the adaptation performed after the first stage. The principle holds as long as the null distribution of the first-stage $p$-value ($p_1$) as well as the conditional distribution of the second-stage $p$-value ($p_2$) given $p_1$ are stochastically larger than the $U(0,1)$ distribution (the so-called ``p-clud\" property). A specified combination function is used to combine the $p$-values obtained before and after the preplanned adaptation of the design into a single global test statistic. An extension of combination tests to allow more flexibility regarding the number of stages and the choice of decision boundaries was provided by Brannath {\\it{et al}}. (2002).\n\nIn dose-response studies, a component of the MCP-Mod procedure (Bretz {\\it{et al}}., 2005) has gained popularity for the purpose of detecting a proof-of-concept (PoC) signal in learning-phase trials. The procedure consists of specifying a set of candidate dose-response models, determining the optimal contrast statistic for each candidate model, and using the maximum contrast as the overall test statistic. Other authors have considered extensions of this procedure to adaptive dose-response designs. Miller (2010) investigated a two-stage adaptive dose-response design for PoC testing incorporating adaptation of the dosages, and possibly the contrast vectors. He developed an adaptive multiple contrast test (AMCT) that combines the multiple contrast test statistics across two stages under the assumption that the variance is known. Franchetti {\\it{et al}}. 
(2013) extended the MCP-Mod procedure to a two-stage dose-response design with a pre-specified rule of adding and\/or dropping dosage groups in Stage 2 based on the Stage 1 results. The PoC test uses Fisher's (1932) combination method to combine the two stage-wise $p$-values, each obtained by applying the MCP-Mod procedure to the data from each stage. This method includes a restrictive requirement of equal total sample sizes for each stage. Also, the authors claimed that the independence of the two stage-wise $p$-values is potentially compromised if the number of dosages used in Stage 2 is not the same as that used in Stage 1 and proposed a method for assigning weights to the different dosage groups to deal with this problem. We do not believe that such weighting is necessary as long as the statistic used to combine the stage-wise $p$-values (Fisher's, in this case) does not include weights that depend on the Stage 1 data.\n\nEarly work related to adaptive designs for dose-response testing includes a general procedure with multi-stage designs proposed by Bauer and R\\\"{o}hmel (1995), in which dosage adaptations were performed at interim analyses. Other goals of adaptive dose-response studies include determining if any dosage yields a clinically relevant benefit, estimating the dose-response curve, and selecting a target dosage for further study (Dragalin {\\it{et al}}., 2010). Several model-based adaptive dose-ranging designs that utilize principles of optimal experimental design to address these objectives were studied by Dragalin {\\it{et al}}. (2010). Bornkamp {\\it{et al}}. 
(2011) proposed a response-adaptive dose-finding design under model uncertainty, which uses a Bayesian approach to update the parameters of the candidate dose-response models and model probabilities at each interim analysis.\n\nIn this article, we propose new methods to address the specific objective of detecting a PoC signal in adaptive dose-response studies with normally-distributed outcomes. We extend the MCP-Mod procedure to include generalized multiple contrast tests (GMCTs; Ma and McDermott, 2020) and apply them to adaptive designs; we refer to these as adaptive generalized multiple contrast tests (AGMCTs). These tests are introduced in Section \\ref{Adaptive Generalized Multiple Contrast Tests}. In Section \\ref{Adaptive Multiple Contrast Test} we extend the AMCT of Miller (2010) to accommodate more flexible adaptations and to the important case where the variance is unknown using the conditional rejection probability (CRP) principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004). Numerical examples are provided in Section \\ref{Numerical Example} to illustrate the application of the AGMCTs and AMCT. In Section \\ref{Simulation studies}, we conduct simulation studies to evaluate the operating characteristics of the various methods as well as the corresponding tests for non-adaptive designs. The conclusions are given in Section \\ref{Conclusion}.\n\n\n\\section{Adaptive Generalized Multiple Contrast Tests}\\label{Adaptive Generalized Multiple Contrast Tests}\n\nIn this section, we propose a two-stage adaptive design in which we use data from Stage 1 to get a better sense of the true dose-response model and make adaptations to the design for Stage 2. We then use data from both Stage 1 and Stage 2 to perform an overall test to detect the PoC signal. The rationale is to overcome the problem of potential model misspecification at the design stage.\n\n\\subsection{General Procedure}\nWe consider the case of a normally distributed outcome variable. 
Suppose that there are $n_{i1}$ subjects in dosage group $i$ in Stage 1, $i=1,\\ldots,k_1$. Denote the first stage data as $\\pmb{Y}_1=(Y_{111},\\ldots,Y_{1 n_{11} 1},\\ldots, $ $Y_{k_1 1 1},\\ldots,Y_{k_1 n_{k_1 1}1})^\\prime.$\nThe statistical model is\n$$Y_{ij1}=\\mu_i +\\epsilon_{ij1},\\quad\\epsilon_{ij1}\\stackrel{iid}{\\sim} N(0, \\sigma^2),\\quad i=1,\\ldots, k_1,\\ j=1,\\ldots, n_{i1}.$$\nThe true mean configuration is postulated to follow some dose-response model $\\mu_i=f(d_i,\\pmb{\\theta})$, where $d_i$ is the dosage in the $i^{\\text{th}}$ group, $i=1,\\ldots,k_1$. The dose-response model is restricted to be of the form $f(\\cdot;\\pmb{\\theta})=\\theta_0+\\theta_1 f^0(\\cdot;\\pmb{\\theta}^0)$, where $f^0(\\cdot;\\pmb{\\theta}^0)$ is a standardized dose-response model indexed by a parameter vector $\\pmb{\\theta}^0$ (Thomas, 2017). A candidate set of $M$ dose-response models $f_m(\\cdot,\\pmb{\\theta})$, $m=1,\\ldots,M$, including values for $\\pmb{\\theta}$, is pre-specified. For each candidate model, an optimal contrast is determined to maximize the power to detect differences among the mean responses; the contrast coefficients are chosen to be perfectly correlated with the mean responses if that model is correct (Bretz {\\it{et al}}., 2005; Pinheiro {\\it{et al}}., 2014).\n\nFor each candidate model, the following hypothesis is tested:\n$$H_{0m 1}: \\sum_{i=1}^{k_1} c_{mi1} \\mu_i=0,\\quad\\text{vs.}\\quad H_{1m 1}: \\sum_{i=1}^{k_1} c_{mi1} \\mu_i>0,\\quad m=1,\\ldots, M,$$\nwhere $c_{m11},\\ldots, c_{mk_11}$ are the optimal contrast coefficients associated with the $m^{\\text{th}}$ candidate model in Stage 1. 
The multiple contrast test statistics are\n$$T_{m1}=\\sum_{i=1}^{k_1} c_{mi1} \\bar{Y}_{i1}\\Bigg\/\\left(S_1\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{mi1}^2}{n_{i1}}}\\right),\\quad m=1,\\ldots, M,$$\nwhere $\\bar{Y}_{i1}=\\sum_{j=1}^{n_{i1}} Y_{ij1}\/n_{i1}$ and the pooled variance estimator is $S_1^2=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij1}-\\bar{Y}_{i1})^2\/\\nu_1$, where $\\nu_1=\\sum_{i=1}^{k_1} n_{i1}-k_1$. The joint null distribution of $(T_{11},\\ldots,T_{M1})^\\prime$ is multivariate $t$ (with $\\nu_1$ degrees of freedom) with common denominator and correlation matrix having elements\n$$\\rho_{m m^{\\prime}1}=\\sum_{i=1}^{k_1}\\frac{c_{mi1} c_{m^{\\prime} i1}}{n_{i1}}\\Bigg\/\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{m i1}^2}{n_{i1}}\\sum_{i=1}^{k_1}\\frac{c_{m^{\\prime} i1}^2}{n_{i1}}}, \\quad m, m^{\\prime}=1,\\ldots, M.$$\n\nLet $p_{m1}=1-\\mathcal{T}_{\\nu_1} (T_{m1})$ be the $p$-values derived from $T_{m1}$, $m=1, \\ldots, M$, where $\\mathcal{T}_{\\nu_1} (\\cdot)$ is the cumulative distribution function of the $t$ distribution with $\\nu_1$ degrees of freedom. We consider three combination statistics to combine the $M$ dependent one-sided $p$-values in Stage 1 (Ma and McDermott, 2020):\n\\begin{enumerate}[(i)]\n\\item Tippett's (1931) combination statistic,\n$$\\Psi_{T1}=\\min_{1 \\leq m \\leq M} \\ p_{m1};$$\n\\item Fisher's (1932) combination statistic,\n$$\\Psi_{F1}=-2 \\sum_{m=1}^M \\log (p_{m1});$$\n\\item Inverse normal combination statistic (Stouffer, 1949),\n$$\\Psi_{N1}=\\sum_{m=1}^M \\Phi^{-1} (1-p_{m1}).$$\n\\end{enumerate}\n\n\nNote that the use of Tippett's combination statistic is equivalent to the original MCP-Mod procedure; the use of different combination statistics results in a generalization of the MCP-Mod procedure, yielding GMCTs (Ma and McDermott, 2020). When the $p$-values are independent, these statistics have simple null distributions. 
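As a concrete illustration, the Stage 1 statistics $T_{m1}$, their one-sided $p$-values, and the three combination statistics can be computed as follows. This is a minimal sketch in Python (the paper's computations use {\tt{R}}); the function name and inputs are illustrative, and with dependent $p$-values the combined statistics do not have their simple independent-case null distributions.

```python
import numpy as np
from scipy import stats

def stage1_gmct(groups, contrasts):
    """Stage 1 contrast statistics T_m1 and the three combination
    statistics for the M (dependent) one-sided p-values.
    groups:    list of 1-D response arrays, one per dosage group
    contrasts: (M, k1) array of optimal contrast coefficients"""
    k1 = len(groups)
    n = np.array([len(g) for g in groups])
    ybar = np.array([g.mean() for g in groups])
    nu1 = n.sum() - k1
    # pooled variance estimator S_1^2 with nu1 degrees of freedom
    s2 = sum(((g - g.mean()) ** 2).sum() for g in groups) / nu1
    C = np.asarray(contrasts, dtype=float)
    t = (C @ ybar) / np.sqrt(s2 * (C ** 2 / n).sum(axis=1))  # T_m1
    p = stats.t.sf(t, df=nu1)                                # one-sided p-values
    psi_tippett = p.min()                                    # Tippett
    psi_fisher = -2.0 * np.log(p).sum()                      # Fisher
    psi_invnorm = stats.norm.ppf(1.0 - p).sum()              # inverse normal
    return t, p, (psi_tippett, psi_fisher, psi_invnorm)
```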
In our case the $p$-values are dependent, but the correlations among $T_{11},\\ldots,T_{M1}$ are known. For Tippett's combination method, one can obtain multiplicity-adjusted $p$-values from $T_{m1}$, $m=1, \\ldots, M$, given the correlation structure using the {\\tt{mvtnorm}} package in {\\tt{R}}. A PoC signal is established in Stage 1 if the minimum adjusted $p$-value $p_{\\text{min, adj}1}< \\alpha$ (Bretz {\\it{et al}}., 2005). For Fisher's and the inverse normal combination methods, excellent approximations to the null distributions of $\\Psi_{F1}$ and $\\Psi_{N1}$ have been developed (Kost and McDermott, 2002), enabling computation of the overall $p$-value $p_1$ for Stage 1 using a GMCT (Ma and McDermott, 2020).\n\nAfter obtaining the Stage 1 data, we make design adaptations and determine the optimal contrasts for the updated models in Stage 2 (see Sections \\ref{Adapting the Candidate Dose-Response Models} and \\ref{Adapting the Dosage Groups} below). We then conduct a GMCT in Stage 2 and obtain the second-stage $p$-value $p_2$. Under the overall null hypothesis $H_0: \\mu_1= \\cdots =\\mu_{k^*}$, where $k^*$ is the total number of unique dosage groups in Stages 1 and 2 combined, the independence of the stage-wise $p$-values $p_1$ and $p_2$ can be established using the conditional invariance principle (Brannath {\\it{et al}}., 2007). 
To perform the overall PoC test in the two-stage adaptive design, we combine $p_1$ and $p_2$ using one of the above combination statistics.\n\nA procedure that ignores the adaptation, i.e., that simply pools the data from Stage 1 and Stage 2 and applies a GMCT to the pooled data as if no adaptation had been performed, would substantially increase the Type I error probability.\n\n\n\\subsection{Adapting the Candidate Dose-Response Models}\\label{Adapting the Candidate Dose-Response Models}\nHere and in Section \\ref{Adapting the Dosage Groups} below, we consider adaptations for the second stage that are arguably most relevant for PoC testing, namely those of the candidate dose-response models and the dosages to be studied. The choice of the candidate dose-response models and dosages for Stage 1 would depend on prior knowledge from pre-clinical or early-stage clinical experience with the investigative agent. If there is great uncertainty concerning the nature of the dose-response relationship, it would seem sensible to select a more diverse set of candidate dose-response models with pre-specified parameters when the trial begins.\n\nAfter collecting the Stage 1 data, these data can be used to estimate $\\pmb{\\theta}$ for each of the $M$ candidate dose-response models and adapt each of the models by substituting $\\pmb{\\hat{\\theta}}$ for the original specification (guess) of $\\pmb{\\theta}$. The optimal contrast vectors can be constructed for each of the updated models $f_m(\\cdot,\\pmb{\\hat{\\theta}})$, $m=1,\\ldots,M$, for use in Stage 2.\n\n\n\nA potential problem occurs when the true dose-response model differs markedly from some of the specified candidate models and if those candidate models are nonlinear models with several unknown parameters. In such cases there can be a failure to fit the models using the Stage 1 data. To handle this problem, one can consider fall-back approaches to determine the corresponding contrasts to be used in Stage 2. 
These include using isotonic regression (Robertson {\\it{et al}}., 1988), imposing reasonable bounds on the nonlinear parameters during model-fitting (as is done in the {\\tt{R}}-package {\\tt{DoseFinding}} to ensure the existence of the maximum likelihood estimates), and retaining the Stage 1 contrast for use in Stage 2. Different strategies can be used for different models in cases where more than one model cannot be fit using the Stage 1 data.\n\nSpecifically, consider the following 5 candidate dose-response models:\n\\begin{itemize}\n \\item[] $E_{\\max}$ model: $f_1(d,\\pmb{\\theta})=E_0+E_{\\max} d\/ (ED_{50}+d)$\n \\item[] Linear-log model: $f_2 (d,\\pmb{\\theta})=\\theta_0+\\theta_1\\log(5d+1)$\n \\item[] Linear model: $f_3 (d,\\pmb{\\theta})=\\theta_0+\\theta_1 d$\n \\item[] Quadratic model: $f_4 (d,\\pmb{\\theta})=\\theta_0+\\theta_1 d + \\theta_2 d^2$\n \\item[] Logistic model: $f_5 (d,\\pmb{\\theta})=E_0+E_{\\max} \/[1 + \\exp\\{(ED_{50}-d)\/ \\delta\\}]$\n\\end{itemize}\nAmong these 5 candidate models, the $E_{\\max}$ and Logistic models are the ones that may fail to converge since the others can be expressed as linear models in $d$ (or a simple function of $d$). A possible fall-back strategy could be as follows: if only one of the $E_{\\max}$ and Logistic models fails to converge in Stage 1, isotonic regression is used to generate the corresponding contrast for use in Stage 2; if both the $E_{\\max}$ and Logistic models fail to converge in Stage 1, then isotonic regression is used to generate the corresponding contrast for the Logistic model and the same contrast that was used in Stage 1 is used in Stage 2 for the $E_{\\max}$ model (see Section \\ref{Numerical Example, AGMCT} for a numerical example).\n\nAnother potential concern arises if the data from Stage 1 suggest that there is a negative dose-response relationship, i.e., that higher dosages are associated with worse outcomes. 
In this case, the adapted contrast associated with the linear model, say, in Stage 2 would be the negative of that used in Stage 1. If a similar dose-response pattern is observed in Stage 2, then the contrast associated with the linear model would incorrectly indicate (possibly strong) evidence against the null hypothesis. One way to avoid this problem would be to not adapt the dose-response models in such a case, but instead to consider adapting the dosage groups by retaining only dosages, if any, that appear to be associated with increasing sample means (see Section \\ref{Adapting the Dosage Groups} below).\n\nIdeally, of course, the measures taken to deal with the problems noted above (non-convergence of nonlinear models, a negative dose-response relationship) would be pre-specified prior to any examination of the data.\n\nOne could also consider different numbers of candidate models (or contrast vectors) in Stage 1 and Stage 2. One non-model-based option, for example, would be to use a single contrast in Stage 2 based on the sample means of the dosage groups from Stage 1. We found that this strategy, while intuitively appealing, yielded tests with reduced power, likely due to the reliance on a single contrast combined with the uncertainty associated with estimation of the means of each dosage group in Stage 1. One could also consider a small number of other contrasts based on values that are within the bounds of uncertainty reflected in the sample means, though how to choose these contrasts is somewhat arbitrary.\n\n\n\\subsection{Adapting the Dosage Groups}\\label{Adapting the Dosage Groups}\n\nAdaptation of the dosage groups in Stage 2, including the number of dosage groups, could also be considered. 
One would have to establish principles for adding and\/or dropping dosages; for example, dropping active dosages that appear to be less efficacious than placebo or that appear to be less efficacious than other active dosages, or adding a dosage (within a safe range) when there appears to be no indication of a dose-response relationship in Stage 1. Relevant discussion of these issues can be found in Bauer and R\\\"{o}hmel (1995), Miller (2010), and Franchetti {\\it{et al}}. (2013).\n\nTo illustrate this type of adaptation, we create an example dosage adaptation rule to drop the active dosage groups that appear to be less efficacious than placebo and the adjacent group. Suppose that there are $k_1$ dosage groups in Stage 1 and denote the dosage vector in Stage 1 as $\\pmb{d}_{\\text{Stage1}}=(d_{11},\\ldots, d_{k_11})^\\prime$, where $d_{11}=0$ (placebo group). We will select $k_2$ dosage groups from the $k_1$ Stage 1 dosage groups, $k_2\\leq k_1$. Denote the dosage vector in Stage 2 as $\\pmb{d}_{\\text{Stage2}}=(d_{12}, \\ldots, d_{k_22})^\\prime$, where $d_{12}=0$ (placebo group). The example dosage adaptation rule is as follows:\n\\begin{itemize}\n \\item[] \\textbf{Step 1}: Always select the placebo group to be included in Stage 2, i.e., $d_{12}=d_{11}=0$.\n\n \\item[] \\textbf{Step 2}: Consider the difference in the means between each active dosage group and the placebo group in Stage 1.\n\nDenote $\\hat{\\Delta}_{21}=\\bar{Y}_{21}-\\bar{Y}_{11},\\ldots,\\hat{\\Delta}_{k_11}=\\bar{Y}_{k_11}-\\bar{Y}_{11}$. 
If there exists dosage group(s) $i$, $i=2,\\ldots, k_1$, such that $\\hat{\\Delta}_{i 1}<-\\delta$, where $\\delta\\ge 0$, then we remove dosage(s) $d_{i 1}$ from consideration; however, if $\\hat{\\Delta}_{i 1}<-\\delta$ for all $i=2,\\ldots, k_1$, then we stop the trial at the interim analysis and fail to reject $H_0$.\n\n \\item[] \\textbf{Step 3}: Consider the differences in the means between two adjacent dosage groups among the remaining dosage groups, ordered from smallest to largest.\n\nAfter Steps 1 and 2, we have selected $d_{11}$ (placebo) into Stage 2 and have several remaining dosage groups $d_{\\tilde{2}1},\\ldots,d_{\\tilde{k}1}$, where $\\tilde{k}\\leq k_1$.\n\nWe first examine the difference in the means between dosages $d_{11}$ and $d_{\\tilde{2}1}$. If $\\hat{\\Delta}_{\\tilde{2}1}=\\bar{Y}_{\\tilde{2}1}-\\bar{Y}_{11} > -\\delta$, then $d_{\\tilde{2}1}$ is selected to be included in Stage 2, i.e., $d_{22}=d_{\\tilde{2}1}$; otherwise, $d_{\\tilde{2}1}$ is discarded and we proceed to the next possible dosage $d_{\\tilde{3}1}$.\n\nIf $d_{\\tilde{2}1}$ is selected to be included in Stage 2, then we proceed to compare the means between dosages $d_{\\tilde{2}1}$ and $d_{\\tilde{3}1}$. If $\\hat{\\Delta}_{\\tilde{3} \\tilde{2}}=\\bar{Y}_{\\tilde{3}1}-\\bar{Y}_{\\tilde{2}1}> -\\delta$, then $d_{\\tilde{3}1}$ is selected to be included in Stage 2, i.e., $d_{32}=d_{\\tilde{3}1}$; otherwise, $d_{\\tilde{3}1}$ is discarded. However, if $d_{\\tilde{2}1}$ is discarded, then the means should be compared between dosages $d_{11}$ and $d_{\\tilde{3}1}$, since these are now adjacent dosages among those remaining.\n\nThis procedure is repeated until the last possible dosage $d_{\\tilde{k}1}$ is reached and its associated mean is compared with that of the remaining adjacent dosage. 
This results in a final number $k_2 \\leq \\tilde{k}$ of dosage groups selected to be included in Stage 2, i.e., $\\pmb{d}_{\\text{Stage2}}=(d_{12}, \\ldots, d_{k_22})^\\prime$.\n\\end{itemize}\n\nHere we consider the adaptation threshold $\\delta=0$, which simply considers the difference between two sample means and retains the dosage with the larger sample mean. This threshold might be too strict since it does not consider the variability of the difference between two sample means. An alternative threshold could be $\\delta=\\sqrt{\\text{var}(\\bar{Y}_{i1}-\\bar{Y}_{i^\\prime 1})}$, $i,i^\\prime=1, \\ldots, k_1$, which retains a dosage with a mean that is no more than one standard error lower than the mean of the adjacent dosage (or placebo). Users are free to choose their own threshold $\\delta$ based on considerations specific to their problem.\n\nWe emphasize that this is just one possible rule to adapt the dosage groups for Stage 2, and this rule only considers dropping dosages at the end of Stage 1. One could consider different adaptation rules that allow adding and\/or dropping dosages at the end of Stage 1, i.e., $k_2$ does not need to be less than or equal to $k_1$, and some of the dosage groups selected in Stage 2 may differ from those included in Stage 1. Also, as in Miller (2010), such a rule is based on heuristic considerations and is relatively easy to communicate to non-statisticians. Mercier {\\it{et al}}. (2015) provide an approach to selecting dosages for Stage 2 based on the hypothetical dose-response shape (out of several pre-specified) that correlates highest with the data observed in Stage 1.\n\nOne can adapt both the candidate dose-response models and the dosage groups in Stage 2. The optimal contrast vectors for Stage 2 would then be determined by the updated candidate dose-response models with parameters $\\pmb{\\hat{\\theta}}$ and the adapted dosages $\\pmb{d}_{\\text{Stage2}}$. 
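The example dosage adaptation rule in Steps 1--3 can be sketched as follows; this is an illustrative Python implementation (function and argument names are not from the paper), with {\tt{delta}} playing the role of the threshold $\delta$.

```python
import numpy as np

def adapt_dosages(dosages, ybar, delta=0.0):
    """Example dosage adaptation rule (Steps 1-3).
    dosages: Stage 1 dosages, with dosages[0] = 0 (placebo)
    ybar:    Stage 1 sample means, in the same order as dosages
    Returns the Stage 2 dosage vector, or None if every active dosage
    falls more than delta below placebo (stop at the interim analysis
    and fail to reject H0)."""
    ybar = np.asarray(ybar, dtype=float)
    # Step 2: drop active dosages whose mean is more than delta below placebo
    remaining = [i for i in range(1, len(dosages)) if ybar[i] - ybar[0] >= -delta]
    if not remaining:
        return None
    # Steps 1 and 3: always keep placebo, then walk up the remaining
    # dosages, comparing each with the last dosage retained so far
    # (after a drop, the next comparison is against the last kept dosage)
    selected = [0]
    for i in remaining:
        if ybar[i] - ybar[selected[-1]] > -delta:
            selected.append(i)
    return [dosages[i] for i in selected]
```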
The overall $p$-value for Stage 2, $p_2$, would be obtained from a GMCT that uses the updated optimal contrast vectors. We incorporate this strategy in our simulation studies below. It should be noted that if one adapts only the candidate dose-response models and not the dosage groups, the contrasts for the Linear and Linear-log models would not change based on the Stage 1 data. This would not be the case if one also adapted the dosage groups.\n\n\\section{Adaptive Multiple Contrast Test}\\label{Adaptive Multiple Contrast Test}\n\n\\subsection{Known Variance Case}\\label{AMCT, Known Variance Case}\n\nInstead of combining the stage-wise $p$-values $p_1$ and $p_2$, each based on a GMCT, Miller (2010) suggested combining the test statistics for each candidate dose-response model across the two stages, and then deriving an overall $p$-value from a multiple contrast test applied to those statistics, assuming a known variance $\\sigma^2$. For each candidate model, we have\n$$Z_m=\\left(\\sum_{i=1}^{k_1} c_{mi1} \\bar{Y}_{i1}+\\sum_{i=1}^{k_2} c_{mi2} \\bar{Y}_{i2}\\right)\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1} \\frac{c^2_{mi1}}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c^2_{mi2}}{n_{i2}}},\\quad m=1,\\ldots,M.$$\nSince $k_2$, $c_{mi2}$, and $n_{i2}$ can depend on the interim data (adaptation), the null distribution of $Z_m$ is not standard normal in general.\n\nIn order to control the Type I error probability of the overall test, Miller (2010) applies a conditional error approach based on the conditional rejection probability (CRP) principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004). Computation of the conditional Type I error probability requires pre-specification of what Miller (2010) calls a ``base test\", i.e., pre-specified values for the contrast coefficients ($c^*_{mi2}$), number of dosage groups ($k_2^*$), and group sample sizes ($n^*_{i2}$) in Stage 2, $i=1, \\ldots, k^*_2$, $m=1,\\ldots, M$. There is no clear best strategy for choosing these pre-specified values. 
Miller (2010) considers an example where all possible Stage 2 designs can be enumerated and have $k_1=k_2$ and $n_{i1}=n_{i2}$, $i=1,\\ldots,k_1$, and the pre-specified values involving $c_{mi2}^*$, $i=1,\\ldots,k_2$, $m=1,\\ldots,M$, are averaged over the possible Stage 2 designs. More generally one cannot enumerate all possible Stage 2 designs, so in the development below we pre-specify $c^*_{mi2}=c_{mi1}$, $k^*_2=k_1$, and $n^*_{i2}=n_{i1}$, $i=1,\\ldots,k_2$, $m=1,\\ldots,M$. Since the dosages can also be adapted, we suggest pre-specifying $\\pmb{d}^*_{\\text{Stage2}}=\\pmb{d}_{\\text{Stage1}}=(d_{11},\\ldots,d_{k_1 1})^\\prime$. One can think of this ``base test\" as one that is based on a study that uses the same design in Stage 2 as was used in Stage 1.\n\nThe $Z$-statistics for the base test are\n$$Z^*_m=\\sum_{i=1}^{k_1} c_{mi1}\\left(\\bar{Y}_{i1}+\\bar{Y}_{i2}\\right)\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{mi1}^2 }{n_{i1}}},\\quad m=1,\\ldots,M.$$\nUnder $H_0$, the joint distribution of $\\pmb{Z}^*=(Z^*_1,\\ldots,Z^*_M)^\\prime$ is multivariate normal with mean $\\pmb{0}$ and covariance matrix $\\pmb{R}^*=(\\rho_{m m^{\\prime}1})$, $m, m^{\\prime}=1,\\ldots, M$. 
One can then obtain the non-adaptive $\\alpha$-level critical value $u^*_{1-\\alpha}$ based on the null distribution of $Z_{\\max}^*=\\max \\{\\pmb{Z}^*\\}$ using the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nIn order to obtain the conditional Type I error probability $A=P_{H_0}(Z^*_{\\max}\\ge u^*_{1-\\alpha}\\,|\\,\\pmb{Y}_1)$, where $\\pmb{Y}_1$ are the Stage 1 data, it can be seen that the conditional distribution of $\\pmb{Z}^*$ given $\\pmb{Y}_1=\\pmb{y}_1$ is multivariate normal with mean vector\n$$\\left(\\sum_{i=1}^{k_1} c_{1i1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{1i1}^2}{n_{i1}}},\\ldots,\\sum_{i=1}^{k_1} c_{Mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{Mi1}^2 }{n_{i1}}}\\right)^\\prime$$\nand covariance matrix $\\pmb{R_2}^*=\\pmb{R}^*\/2$, where $\\bar{y}_{i1}=\\sum_{j=1}^{n_{i1}} y_{ij1}\/n_{i1}$, $i=1,\\ldots,k_1$. Hence, the conditional Type I error probability is\n$$A=P_{H_0} (Z^*_{\\max}\\geq u^*_{1-\\alpha}\\,|\\,\\pmb{Y}_1)=1- P_{H_0} (\\pmb{Z}^* \\leq (u^*_{1-\\alpha},\\ldots,u^*_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1),$$\nwhich can be obtained using the {\\tt{pmvnorm}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nIn general, the interim analysis at the end of Stage 1 could yield adapted values of $c_{mi2}$, $k_2$, and $n_{i2}$ for Stage 2 and, hence, the adapted $Z$-statistics $Z_m$, $m=1, \\ldots, M$. Denote $\\pmb{Z}=(Z_1,\\ldots,Z_M)^\\prime$ and $Z_{\\max}=\\max \\{\\pmb{Z}\\}$. 
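The conditional error probability $A$ of the base test can be evaluated directly from this conditional multivariate normal distribution. The paper uses {\tt{pmvnorm}} in {\tt{R}}; below is a minimal Python sketch under the same formulas, with illustrative names.

```python
import numpy as np
from scipy import stats

def conditional_error_known_var(ybar1, n1, C1, sigma, u_star):
    """A = P_H0(Z*_max >= u* | Y1) for the known-variance base test:
    given Y1, Z* is multivariate normal with mean vector
    sum_i c_mi1 ybar_i1 / (sigma sqrt(2 sum_i c_mi1^2/n_i1))
    and covariance matrix R*/2.
    ybar1: Stage 1 group means; n1: group sizes; C1: (M, k1) contrasts"""
    C1 = np.asarray(C1, dtype=float)
    n1 = np.asarray(n1, dtype=float)
    denom = np.sqrt((C1 ** 2 / n1).sum(axis=1))
    mean = (C1 @ np.asarray(ybar1, dtype=float)) / (sigma * np.sqrt(2.0) * denom)
    R = (C1 / n1) @ C1.T / np.outer(denom, denom)   # correlation matrix R*
    mvn = stats.multivariate_normal(mean=mean, cov=R / 2.0, allow_singular=True)
    return 1.0 - mvn.cdf(np.full(C1.shape[0], float(u_star)))
```

For a single contrast ($M=1$) this reduces to $1-\Phi\{(u^*-\text{mean})\sqrt{2}\}$, which provides a simple check of the implementation.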
The adaptive critical value $\\tilde{u}_{1-\\alpha}$ can be obtained by solving the equation\n$$\\tilde{A}=P_{H_0}(Z_{\\max}\\geq\\tilde{u}_{1-\\alpha}\\,|\\,\\pmb{Y}_1)=1- P_{H_0}(\\pmb{Z} \\leq (\\tilde{u}_{1-\\alpha},\\ldots,\\tilde{u}_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1)= A,$$\nwhere the conditional distribution of $\\pmb{Z}$ given $\\pmb{Y}_1$ is multivariate normal with mean vector\n$$\\left(\\sum_{i=1}^{k_1} c_{1i1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{1i1}^2}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c_{1i2}^2}{n_{i2}}},\\ldots,\\sum_{i=1}^{k_1} c_{Mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{Mi1}^2}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c_{Mi2}^2}{n_{i2}}}\\right)^{\\prime}$$\nand covariance matrix $\\pmb{\\tilde{R}}=(\\text{cov}(Z_m,Z_{m^\\prime}\\,|\\,\\pmb{Y}_1))$, $m,m^\\prime=1,\\ldots,M$, where\n$$\\text{cov}(Z_m,Z_{m^\\prime}\\,|\\,\\pmb{Y}_1 )={\\sum_{i=1}^{k_2}}\\frac{c_{mi2} c_{m^\\prime i2}}{n_{i2}} \\Bigg\/\n\\sqrt{\\left( {\\sum_{i=1}^{k_1}}\\frac{c_{mi1}^2}{n_{i1}}+ {\\sum_{i=1}^{k_2}}\\frac{c_{mi2}^2}{n_{i2}}\\right)\\left( {\\sum_{i=1}^{k_1}}\\frac{c_{m^\\prime i1}^2}{n_{i1}}+ {\\sum_{i=1}^{k_2}} \\frac{c_{m^\\prime i2}^2}{n_{i2}}\\right)}.$$\nUse of $\\tilde{u}_{1-\\alpha}$ as the critical value for the AMCT controls the Type I error probability at level $\\alpha$ (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004; Miller, 2010).\n\n\\subsection{Unknown Variance Case}\\label{AMCT, Unknown Variance Case}\n\nMiller (2010) briefly discusses the possibility of extending the AMCT to accommodate estimation of the variance $\\sigma^2$, the complication being that the conditional Type I error probability depends on the unknown variance. Posch {\\it{et al}}. 
(2004) developed methods to calculate the conditional Type I error probability for the one sample $t$-test given the interim data, but the authors only consider the univariate case and the approach does not directly apply to either the single contrast test or the multiple contrast test.\n\nIn this subsection, we extend the AMCT to the unknown variance case by considering the combined $T$-statistics\n$$T_m=\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1}\\bar{Y}_{i1}+\\displaystyle{\\sum_{i=1}^{k_2}} c_{mi2} \\bar{Y}_{i2}}{S\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}} \\frac{c^2_{mi1}}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}}\\frac{c^2_{mi2}}{n_{i2}}}}=\\frac{\\sigma Z_m}{S},\\quad m= 1, \\ldots ,M,$$\nwhere the pooled variance estimator is\n$$S^2=\\left(\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij1}-\\bar{Y}_{i1})^2+\\sum_{i=1}^{k_2}\\sum_{j=1}^{n_{i2}} (Y_{ij2}-\\bar{Y}_{i2})^2\\right)\\Bigg\/\\left(\\sum_{i=1}^{k_1} n_{i1}-k_1+\\sum_{i=1}^{k_2} n_{i2}-k_2\\right).$$\nAs in Section \\ref{AMCT, Known Variance Case}, we pre-specify $c^*_{mi2}=c_{mi1}$, $k^*_2=k_1$, $n^*_{i2}=n_{i1}$, and $\\pmb{d}^*_{\\text{Stage2}}=\\pmb{d}_{\\text{Stage1}}$, $i=1,\\ldots,k^*_2$, $m=1,\\ldots,M$. The $T$-statistics for the base test are\n$$T^*_m=\\sum_{i=1}^{k_1} c_{mi1} (\\bar{Y}_{i1}+\\bar{Y}_{i2})\\Bigg\/S^*\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c^2_{mi1}}{n_{i1}}}=\\frac{\\sigma Z^*_m }{S^*},\\quad m=1,\\ldots,M,$$\nwhere\n$$S^{*2}=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}\\left[(Y_{ij1}-\\bar{Y}_{i1})^2+(Y_{ij2}-\\bar{Y}_{i2})^2\\right]\\Bigg\/(2\\nu_1),\\text{ where }\\nu_1=\\sum_{i=1}^{k_1} n_{i1}-k_1.$$\nSince $S^{*2}$ is independent of $Z^*_m$ and $2\\nu_1 S^{*2}\/\\sigma^2 \\sim \\chi^2_{2 \\nu_1}$, the null joint distribution of $\\pmb{T}^*=(T_1^*, \\ldots, T_M^*)^\\prime$ is multivariate $t$ with $2\\nu_1$ degrees of freedom and correlation matrix $\\pmb{R}^*$. 
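When a {\tt{qmvt}}-style routine is unavailable, the $(1-\alpha)$ quantile of $T^*_{\max}$ can be approximated by Monte Carlo, exploiting the common-denominator structure of the multivariate $t$ distribution. An illustrative sketch (names are not from the paper):

```python
import numpy as np

def mc_max_t_quantile(R, df, alpha=0.05, nsim=200_000, seed=1):
    """Monte Carlo approximation of the (1-alpha) quantile of max(T*),
    where T* is multivariate t with df degrees of freedom, correlation
    matrix R, and a single shared chi-square denominator."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(R, dtype=float))
    z = rng.standard_normal((nsim, L.shape[0])) @ L.T   # correlated N(0,1)
    s = np.sqrt(rng.chisquare(df, size=nsim) / df)      # shared denominator
    return np.quantile((z / s[:, None]).max(axis=1), 1.0 - alpha)
```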
The non-adaptive $\\alpha$-level critical value $c^*_{1-\\alpha}$ can then be obtained using the {\\tt{qmvt}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nThe main difficulty in the unknown variance case is that the approach outlined in Section \\ref{AMCT, Known Variance Case} cannot be employed because the conditional distribution of $T_m^*$ given $\\pmb{Y}_1$ is not central $t$ under $H_0$. We develop the conditional Type I error probability as follows. Denote\n$$T_m^*\\,|\\,\\pmb{Y}_1=\n\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1} (\\bar{y}_{i1}+\\bar{Y}_{i2})}{\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}}\\sqrt{ \\displaystyle{\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}}\\left\\{ (y_{ij1}-\\bar{y}_{i1})^2+ (Y_{ij2}-\\bar{Y}_{i2})^2\\right\\} \\Bigg\/\\nu_1}}=\\frac{U^*_m}{\\displaystyle{\\sqrt{\\frac{V^*}{\\nu_1}+q^*}}},\\quad m=1,\\ldots, M,$$\nwhere\n$$U_m^*=\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1} (\\bar{y}_{i1}+\\bar{Y}_{i2})}{\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}}},\\quad V^*=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij2}-\\bar{Y}_{i2})^2 \\Big\/\\sigma^2,$$\nand the constant\n$$q^*= {\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}} (y_{ij1}-\\bar{y}_{i1})^2\\Big\/(\\nu_1\\sigma^2).$$\n\nUnder $H_0$, the joint distribution of $(U_1^*,\\ldots,U_M^*)^\\prime$ is multivariate normal with mean vector $(b^*_1,\\ldots, b^*_M)^\\prime$ and variance-covariance matrix $\\pmb{R}^*$, where\n$$b^*_m=\\sum_{i=1}^{k_1} c_{mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c^2_{mi1}}{n_{i1}}},\\quad m=1,\\ldots,M.$$\nSince $V^*\\sim\\chi^2_{\\nu_1}$ and is independent of $(U_1^*, \\ldots,U_M^*)^\\prime$, the joint density function of $(U_1^*,\\ldots,U_M^*,V^*)^\\prime$ is\n\\begin{eqnarray*}\n& &f_{(U_1^*,\\ldots,U_M^*,V^*)}(u_1^*,\\ldots,u_M^*,v^*)=\\frac{1}{(2 \\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma(\\nu_1\/2) 2^{\\nu_1\/2}}\\times\n\\\\[0.5\\baselineskip]\n& 
&(v^*)^{\\nu_1\/2-1}e^{-v^*\/2}\\exp\\left \\{-\\frac{1}{2}(u^*_1-b^*_1,\\ldots,u^*_M-b^*_M)(\\pmb{R}^*)^{-1}(u^*_1-b^*_1,\\ldots,u^*_M-b^*_M)^\\prime\\right \\},\n\\end{eqnarray*}\nwhere $\\Gamma (\\cdot)$ is the Gamma function. Now make the transformation\n$$T_m^*\\,|\\,\\pmb{Y}_1=\\frac{U_m^*}{\\displaystyle{\\sqrt{\\frac{V^*}{\\nu_1}+q^*}}},\\quad m=1,\\ldots,M,\\quad\\text{and}\\quad\nW^*=V^*$$\nwith Jacobian $(W^*\/\\nu_1+q^*)^{M\/2}$. The joint density function of $\\pmb{T}^*\\,|\\,\\pmb{Y}_1$ is\n\\begin{eqnarray*}\n& &\\displaystyle{f_{\\pmb{T}^*\\,|\\,\\pmb{Y}_1}\\left((t_1^*,\\ldots,t_M^*)\\,|\\,\\pmb{y}_1 \\right)} \\\\\n&=&\\frac{1}{(2\\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma (\\nu_1\/2) 2^{\\nu_1\/2}}\n\\int_0^{+ \\infty}\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{M\/2}(w^*)^{\\nu_1\/2-1} e^{-w^*\/2}\\times \\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1,\\ldots,t_M^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_M\\right \\}(\\pmb{R}^*)^{-1} \\\\\n& &\\left \\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1,\\ldots,t_M^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_M\\right\\}^\\prime\\, \\Bigg ] dw^*\n\\end{eqnarray*}\nWe then obtain the conditional Type I error probability\n\\begin{eqnarray*}\nA&=&1- P_{H_0}\\left(\\pmb{T}^*\\leq (c^*_{1-\\alpha},\\ldots,c^*_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1\\right) \\\\\n&=&1- \\int\\cdots\\int_{(t_1^*,\\ldots, t_M^*) \\leq (c^*_{1-\\alpha},\\ldots,c^*_{1-\\alpha})} f_{\\pmb{T}^*\\,|\\,\\pmb{Y}_1}\\left((t_1^*,\\ldots,t_M^*)\\,|\\,\\pmb{y}_1\\right)\\ d t_1^* \\cdots d t_M^*.\n\\end{eqnarray*}\n\nAfter making the adaptations at the interim analysis, from the conditional distribution of $\\pmb{T}=(T_1, \\ldots, T_M)^\\prime$ given $\\pmb{Y}_1$, the adaptive critical value $\\tilde{c}_{1-\\alpha}$ can be determined as a solution to the following equation:\n\\begin{eqnarray*}\n\\tilde{A}&=&1-P_{H_0}\\left(\\pmb{T} \\leq 
(\\tilde{c}_{1-\\alpha},\\ldots,\\tilde{c}_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1\\right) \\\\\n&=&1- \\int\\cdots\\int_{(t_1,\\ldots,t_M) \\leq (\\tilde{c}_{1-\\alpha},\\ldots,\\tilde{c}_{1-\\alpha})} f_{\\pmb{T}\\,|\\,\\pmb{Y}_1}\\left((t_1,\\ldots,t_M)\\,|\\,\\pmb{y}_1\\right)\\ d t_1 \\cdots d t_M=A,\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\n&\\displaystyle f_{\\pmb{T}\\,|\\,\\pmb{Y}_1}\\left((t_1, \\ldots, t_M)\\,|\\,\\pmb{y}_1\\right)=\\displaystyle\\frac{1}{(2\\pi)^{M\/2} |\\pmb{\\tilde{R}}|^{1\/2}}\\displaystyle \\frac{1}{\\Gamma (\\nu_2\/2) 2^{\\nu_2\/2}}\\int_0^{+ \\infty}\\left(\\frac{w}{\\nu}+q\\right)^{M\/2} w^{\\nu_2\/2-1} e^{-w\/2}\\times \\\\\n&\\exp\\Bigg[-\\displaystyle\\frac{1}{2}\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1,\\ldots,t_M\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_M\\right\\}\\pmb{\\tilde{R}}^{-1} \\\\\n&\\displaystyle\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1,\\ldots,t_M\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_M\\right\\}^\\prime\\, \\Bigg] dw,\\\\\n&\\nu_2=\\displaystyle{\\sum_{i=1}^{k_2}} n_{i2}-k_2, \\quad \\nu=\\nu_1+\\nu_2, \\quad q=\\displaystyle{\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}} (y_{ij1}-\\bar{y}_{i1})^2\\Big\/(\\nu\\sigma^2),\n\\end{eqnarray*}\nand\n\\[b_m=\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}}\\frac{c^2_{mi2}}{n_{i2}}},\\quad m=1,\\ldots,M.\\]\n$H_0$ is rejected if $T_{\\max}=\\max \\{\\pmb{T}\\}\\geq\\tilde{c}_{1-\\alpha}$. 
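As a practical aside, the conditional error and the matching adaptive critical value can also be approximated by Monte Carlo instead of multidimensional quadrature, using the representation $T_m=U_m\big/\sqrt{W/\nu+q}$ with $U\sim N(\pmb{b},\pmb{R})$ and $W$ chi-squared. The sketch below draws the samples once, so the bisection acts on a fixed, monotone empirical tail probability; the inputs ($\pmb{b}$, $\pmb{R}$, $q$, degrees of freedom, and the base critical value) are illustrative assumptions, not values from the text.

```python
import numpy as np

def make_exceed(b, R, df, nu, q, n_draws=200_000, seed=7):
    """Return c -> Monte Carlo estimate of P(max_m U_m / sqrt(W/nu + q) > c)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(R)
    U = b + rng.standard_normal((n_draws, len(b))) @ L.T   # U ~ N(b, R)
    W = rng.chisquare(df, size=n_draws)                    # W ~ chi^2_df
    t_max = (U / np.sqrt(W / nu + q)[:, None]).max(axis=1)
    return lambda c: float((t_max > c).mean())

def solve_critical(exceed, A, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection: find c with exceed(c) = A (exceed is non-increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if exceed(mid) > A:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative two-contrast setting (all inputs assumed, not from the text)
b = np.array([0.5, 0.8])
R = np.array([[1.0, 0.977], [0.977, 1.0]])
exceed = make_exceed(b, R, df=115, nu=230, q=0.5)
A = exceed(1.732)                  # conditional error at a base critical value
c_tilde = solve_critical(exceed, A)  # recovers (approximately) that value
```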
Use of the critical value $\\tilde{c}_{1-\\alpha}$ provides control of the Type I error probability at level $\\alpha$ according to the CRP principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004).\n\n\n\\section{Numerical Example}\\label{Numerical Example}\n\n\n\\subsection{Adaptive Generalized Multiple Contrast Tests}\\label{Numerical Example, AGMCT}\n\nTo illustrate the adaptive generalized multiple contrast tests (AGMCTs), we generated a numerical example. The example data set is available as Supporting Information. Suppose that there are $k_1=5$ dosage groups in Stage 1, with $\\pmb{d}_{\\text{Stage1}}=(0,0.05,0.20,0.60,1.00)^\\prime$. The total sample sizes in two stages are the same ($N_1=N_2=120$) and the group sample sizes are equal in Stage 1 ($n_{11}=\\cdots =n_{51}=N_1\/5=24$). The $M=5$ candidate dose-response models with the original specifications of $\\pmb{\\theta}$ are shown in Table \\ref{Table.original candidate models}.\n\nWe assume that the true dose-response model is the $E_{\\max}$ 2 model:\n$$f_{E_{\\max}2}(d,\\pmb{\\theta})=E_0+E_{\\text{max}} d\/ (ED_{50}+d)=0.2+0.6 d\/(0.1+d).$$\nWe generate the Stage 1 data from a multivariate normal distribution with mean $f_{E_{\\max}2}(\\pmb{d}_{\\text{Stage1}},\\pmb{\\theta})=$ $(0.20, 0.40, 0.60, 0.71, 0.75)^\\prime$ and covariance matrix $\\sigma^2\\pmb{I}=1.478^2\\pmb{I}$. 
The sample mean and variance estimates from the Stage 1 data are $\\bar{\\pmb{y}}_1=(0.52, 0.47, 1.09, 1.70, 0.45)^\\prime$ and $s^2_1=1.58^2$, respectively.\n\nThe optimal contrast vectors in Stage 1 based on the $M=5$ candidate dose-response models in Table \\ref{Table.original candidate models} are as follows.\n\\begin{align*}\n&E_{\\max}: \\pmb{c}_{11}=(-0.64, -0.36, 0.06, 0.41, 0.53)^\\prime, \\\\\n&\\text{Linear-log}: \\pmb{c}_{21}=(-0.54, -0.39, -0.08, 0.37, 0.64)^\\prime, \\\\\n&\\text{Linear}: \\pmb{c}_{31}=(-0.44, -0.38, -0.20, 0.27, 0.74)^\\prime, \\\\\n&\\text{Quadratic}: \\pmb{c}_{41}=(-0.57, -0.36, 0.16, 0.71, 0.07)^\\prime, \\\\\n&\\text{Logistic}: \\pmb{c}_{51}=(-0.40, -0.39, -0.31, 0.50, 0.59)^\\prime.\\end{align*}\nAfter conducting three different GMCTs using Tippett's, Fisher's, and inverse normal combination statistics, we obtain the following Stage 1 p-values: $p_{T1}=0.005$, $p_{F1}=0.047$, and $p_{N1}=0.06$.\n\nWe then adapt the candidate dose-response models and the dosage groups. We fit the 5 original candidate dose-response models using the Stage 1 data. Unfortunately, the Logistic model failed to converge on a solution so we replaced it with isotonic regression. Also, we use the dosage adaptation rule described in Section \\ref{Adapting the Dosage Groups} with $\\delta=0$ to drop the active dosage groups that appear to be less efficacious than placebo or the adjacent dosage. 
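For equal group sizes, the optimal contrast of a candidate model reduces to the centered, unit-norm vector of model means at the design dosages. The sketch below reproduces two of the Stage 1 contrast vectors listed above; the Linear contrast needs no extra inputs, while the offset 0.2 inside the Linear-log model is our assumption for illustration.

```python
import numpy as np

def optimal_contrast(mu):
    """Optimal contrast under equal allocation: center the model means,
    then normalize to unit length."""
    c = mu - mu.mean()
    return c / np.linalg.norm(c)

d = np.array([0.0, 0.05, 0.20, 0.60, 1.00])     # Stage 1 dosages
c_linear = optimal_contrast(d)                   # Linear model
c_linlog = optimal_contrast(np.log(d + 0.2))     # Linear-log; offset assumed

# c_linear and c_linlog match the listed c_31 and c_21 to two decimals
```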
Finally, we obtain $k_2=3$ dosage groups in Stage 2: $\\pmb{d}_{\\text{Stage2}}=(0,0.20,0.60)^\\prime$ and $n_{12}=n_{22}=n_{32}=N_2\/3=40$.\n\nThe optimal contrast vectors in Stage 2 based on the adapted dose-response models and dosage groups are as follows:\n\\begin{align*}\n&E_{\\max}: \\pmb{c}_{12}=(-0.433, -0.383, 0.816)^\\prime,\\\\\n&\\text{Linear-log}: \\pmb{c}_{22}=(-0.707, 0.000, 0.707)^\\prime,\\\\\n&\\text{Linear}: \\pmb{c}_{32}=(-0.617, -0.154, 0.772)^\\prime, \\\\\n&\\text{Quadratic}: \\pmb{c}_{42}=(-0.766, 0.137, 0.629)^\\prime,\\\\\n&\\text{Isotonic regression}: \\pmb{c}_{52}=(-0.816, 0.408, 0.408)^\\prime.\\end{align*}\n\nThe Stage 2 data are then generated from a multivariate normal distribution with mean $f_{E_{\\max}2}(\\pmb{d}_{\\text{Stage2}},\\pmb{\\theta})=$ $(0.20, 0.60, 0.71)^\\prime$ and covariance matrix $\\sigma^2\\pmb{I}=1.478^2\\pmb{I}$. The sample mean and variance estimates from the Stage 2 data under adaptation are $\\bar{\\pmb{y}}_2=(-0.09, 0.77, 0.73)^\\prime$ and $s^2_2=1.52^2$, respectively. After conducting three different GMCTs using Tippett's, Fisher's, and inverse normal combination statistics, we obtain the following Stage 2 p-values: $p_{T2}=0.005$, $p_{F2}=0.008$, and $p_{N2}=0.008$. The p-values from Stage 1 and Stage 2 are then combined using Fisher's combination statistic and the inverse normal combination statistic. The combination statistics and resulting overall p-values are shown in Table \\ref{num.results AGMCT}.\n\n\\subsection{Adaptive Multiple Contrast Test}\\label{Numerical Example, AMCT}\n\n\\subsubsection{Known Variance Case}\nWe use the same simulated data as in Section \\ref{Numerical Example, AGMCT} to illustrate the adaptive multiple contrast test (AMCT) for the known variance case (for purposes of this illustration, we use $\\sigma^2=1.478^2$). We first obtain the non-adaptive critical value $u^*_{1-\\alpha}$. 
The joint null distribution of $\\pmb{Z}^*=(Z_1^*,\\ldots,Z_5^*)^\\prime$ is multivariate normal with mean $\\pmb{0}$ and covariance matrix $\\pmb{R}^*$, where\n\\begin{eqnarray*}\n\\pmb{R}^*=\\left(\\begin{array}{ccccc}\n1 &0.977 &0.912& 0.842& 0.896 \\\\\n0.977 &1& 0.977& 0.750& 0.956 \\\\\n0.912 &0.977& 1& 0.602& 0.957 \\\\\n0.842 &0.750& 0.602& 1& 0.715 \\\\\n0.896 &0.956& 0.957& 0.715& 1\n\\end{array}\\right).\n\\end{eqnarray*}\n\nThe value of $u^*_{1-\\alpha}$ is obtained using the {\\tt{qmvnorm}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}, resulting in $u^*_{1-\\alpha}=1.968$. We then calculate the conditional mean of $\\pmb{Z}^*$ given $\\pmb{Y}_1$,\n$$\\left(\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{1i1} \\bar{y}_{i1}}{\\sigma\\sqrt{2\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{1i1}^2}{n_{i1}}}},\\ldots,\n\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{Mi1} \\bar{y}_{i1}}{\\sigma\\sqrt{2\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{Mi1}^2}{n_{i1}}}}\\right)^\\prime=(1.19, 0.87, 0.42, 2.22, 0.92)^\\prime,$$\nand the conditional covariance matrix $\\pmb{R_2}^*=\\pmb{R}^*\/2$. 
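The {\tt{qmvnorm}}\/{\tt{pmvnorm}} computations can be cross-checked with plain Monte Carlo using the $\pmb{R}^*$ and conditional mean given above. The sketch below samples through an eigen-decomposition with clipping, since the correlation matrix rounded to three decimals may be slightly non-positive-definite (an assumption on our part); the resulting quantile and conditional error should be close to the values reported in the text.

```python
import numpy as np

R = np.array([[1.000, 0.977, 0.912, 0.842, 0.896],
              [0.977, 1.000, 0.977, 0.750, 0.956],
              [0.912, 0.977, 1.000, 0.602, 0.957],
              [0.842, 0.750, 0.602, 1.000, 0.715],
              [0.896, 0.956, 0.957, 0.715, 1.000]])
m = np.array([1.19, 0.87, 0.42, 2.22, 0.92])   # conditional mean of Z*|Y_1

rng = np.random.default_rng(0)
w, V = np.linalg.eigh(R)
L = V * np.sqrt(np.clip(w, 0.0, None))         # factor robust to rounding of R
E = rng.standard_normal((200_000, 5)) @ L.T    # E ~ N(0, R)

u = np.quantile(E.max(axis=1), 0.95)           # non-adaptive critical value
A = float(((m + E / np.sqrt(2)).max(axis=1) > u).mean())  # N(m, R/2) tail
```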
The conditional error is obtained using the {\tt{pmvnorm}} function in the {\tt{R}}-package {\tt{mvtnorm}} as\n$$A=1- P_{H_0}\left(\pmb{Z}^* \leq (u^*_{1-\alpha},\ldots, u^*_{1-\alpha})^{\prime}\,|\,\pmb{Y}_1\right)=0.64.$$\n\nAfter adapting the dose-response models and dosage groups as in Section \ref{Numerical Example, AGMCT} above, we obtain the conditional distribution of $\pmb{Z}\,|\,\pmb{Y}_1$, which is multivariate normal with mean\n$$\n\left(\frac{\displaystyle{\sum_{i=1}^{k_1}} c_{1i1} \bar{y}_{i1}}{\sigma\sqrt{\displaystyle{\sum_{i=1}^{k_1}}\frac{c_{1i1}^2}{n_{i1}}+\displaystyle{\sum_{i=1}^{k_2}}\frac{c_{1i2}^2}{n_{i2}}}},\ldots,\frac{\displaystyle{\sum_{i=1}^{k_1}} c_{Mi1} \bar{y}_{i1}}{\sigma\sqrt{\displaystyle{\sum_{i=1}^{k_1}}\frac{c_{Mi1}^2}{n_{i1}}+\displaystyle{\sum_{i=1}^{k_2}} \frac{c_{Mi2}^2}{n_{i2}}}}\right)^{\prime} \\[0.5\baselineskip]=(1.33, 0.98, 0.47, 2.48, 1.03)^{\prime}\n$$\nand covariance matrix\n\begin{eqnarray*}\n\pmb{\tilde{R}}=\left(\begin{array}{ccccc}\n0.375& 0.331& 0.358& 0.297& 0.199 \\\n0.331& 0.375& 0.368& 0.370& 0.325 \\\n0.358& 0.368& 0.375& 0.351& 0.283 \\\n0.297& 0.370& 0.351& 0.375& 0.352 \\\n0.199& 0.325& 0.283& 0.352& 0.375\n\end{array}\right).\n\end{eqnarray*}\n\nFinally, we obtain the adaptive critical value $\tilde{u}_{1-\alpha}=2.263$ and the combined test statistics $\pmb{Z}=(Z_1,\ldots, Z_M)^\prime=(2.22, 2.50, 1.78, 4.15, 2.83)^\prime$. We reject $H_0$ since $Z_{\max}=4.15 \geq \tilde{u}_{1-\alpha}$.\n\n\subsubsection{Unknown Variance Case}\n\nTo illustrate the AMCT in the unknown variance case (Section \ref{AMCT, Unknown Variance Case}), we use the same example data as in Section \ref{Numerical Example, AGMCT} for $M=2$ candidate dose-response models. Here, we only consider the $E_{\max}$ and Linear-log candidate dose-response models in Table \ref{Table.original candidate models}. 
Other settings are the same as in Section \\ref{Numerical Example, AGMCT}, including the optimal contrasts for\nboth Stage 1 and Stage 2, and the adapted dosage groups for Stage 2.\n\nWe first obtain the non-adaptive critical value $c^*_{1-\\alpha}$. The joint null distribution of $\\pmb{T}^*=(T_1^*,T_2^*)^\\prime$ is bivariate $t$ with degrees of freedom $2 \\nu_1$ and correlation matrix $\\pmb{R}^*$, where $\\nu_1=N_1-5=115$ and\n\\begin{eqnarray*}\n\\pmb{R}^*=\\left(\\begin{array}{cc}\n1 &0.977\\\\\n0.977 &1\n\\end{array}\\right).\n\\end{eqnarray*}\nThe value of $c^*_{1-\\alpha}$ is obtained using the {\\tt{qmvt}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}, resulting in $c^*_{1-\\alpha}=1.732$.\n\nWe then obtain the conditional error by numerically calculating the three-dimensional integral below using the {\\tt{adaptIntegrate}} function in the {\\tt{R}}-package {\\tt{cubature}}.\n\\begin{eqnarray*}\nA&=&1- \\frac{1}{(2\\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma (\\nu_1\/2) 2^{\\nu_1\/2}} \\int_0^{+ \\infty} \\int_{-\\infty}^{c^*_{1-\\alpha}} \\int_{-\\infty}^{c^*_{1-\\alpha}} \\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{M\/2}(w^*)^{\\nu_1\/2-1} e^{-w^*\/2}\\times\\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1, t_2^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_2\\right\\}(\\pmb{R}^*)^{-1} \\\\\n& & \\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1, t_2^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_2\\right\\}^\\prime\\, \\Bigg] dw^* \\ dt_1^* \\ dt_2^* = 0.198.\n\\end{eqnarray*}\n\nAfter adapting the dose-response models and dosage groups at the end of Stage 1, we consider the conditional distribution of $\\pmb{T}\\,|\\,\\pmb{Y}_1$. 
The adaptive critical value $\\tilde{c}_{1-\\alpha}$ can be obtained by solving the following equation using a bisection algorithm:\n\n\\begin{eqnarray*}\n\\tilde{A}&=& \\frac{1}{(2\\pi)^{M\/2} |\\pmb{\\tilde{R}}|^{1\/2}}\\frac{1}{\\Gamma (\\nu_2\/2) 2^{\\nu_2\/2}} \\int_0^{+ \\infty} \\int_{-\\infty}^{\\tilde{c}_{1-\\alpha}} \\int_{-\\infty}^{\\tilde{c}_{1-\\alpha}} \\left(\\frac{w}{\\nu}+q\\right)^{M\/2} w^{\\nu_2\/2-1} e^{-w\/2}\\times \\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1, t_2\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_2\\right\\}\\pmb{\\tilde{R}}^{-1} \\\\\n& & \\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1, t_2\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_2\\right\\}^\\prime\\, \\Bigg] dw \\ dt_1\\ dt_2 = A ,\n\\end{eqnarray*}\nwhere the covariance matrix $\\pmb{\\tilde{R}}$ is\n\\begin{eqnarray*}\n\\pmb{\\tilde{R}}=\\left(\\begin{array}{cc}\n0.375 &0.331\\\\\n0.331 &0.375\n\\end{array}\\right).\n\\end{eqnarray*}\n\nFinally, we obtain the adaptive critical value $\\tilde{c}_{1-\\alpha}=1.802$ with tolerance $10^{-7}$. The combined test statistics are $\\pmb{T}=(T_1,T_2)^\\prime=(2.11, 2.38)^\\prime$ and we reject $H_0$ since $T_{\\max}=2.38 \\geq \\tilde{c}_{1-\\alpha}$.\n\n\n\\section{Simulation studies}\\label{Simulation studies}\n\n\nIn this section, we conduct simulation studies to compare the operating characteristics of the AGMCTs with those of the AMCT in the setting of a design that adapts both the candidate dose-response models and the dosage groups based on data from Stage 1. We also compare these with the operating characteristics of the corresponding tests in a non-adaptive design.\n\nAssume $k_1=5$ and $\\pmb{d}_{\\text{Stage1}}=(0,0.05,0.20,0.60,1.00)^\\prime$. The total sample size is the same for each of the two stages ($N_1=N_2$) and the group sample sizes within each stage are equal, with $N_1=N_2=60$, 120, 180, and 240. 
The $M=5$ candidate dose-response models with the original specifications of $\\pmb{\\theta}$ are shown in Table \\ref{Table.original candidate models}. The outcome for each patient is distributed as $N(\\mu(d),\\sigma^2)$, where the true mean configuration $\\mu(d)$ follows one of the eight different dose-response models in Table \\ref{Sim.eight dose-response models}, and $\\sigma=1.478$. The dose-response curves for the five candidate models and the eight true dose-response models are shown in Figure \\ref{fig: candidate and true model}.\n\nFor the (true) $E_{\\max}$ 2 and Double-logistic models, the optimal contrasts are highly correlated with those of the candidate models. In contrast, for the (true) $E_{\\max}$ 3, Exponential 1, Exponential 2, Quadratic 2, Step and Truncated-logistic models, the optimal contrasts are not highly correlated with those of the candidate models (Figure \\ref{fig: True vs. candidate}).\n\nFor the AGMCTs, we use three GMCTs to combine the $M=5$ dependent $p$-values \\emph{within} each stage: Tippett's ($T$), Fisher's ($F$) and inverse normal ($N$) combination methods (Ma and McDermott, 2020). The same GMCT is used in both Stage 1 and Stage 2. To perform the overall test, only the inverse normal ($\\Psi_N$) combination statistic is used to combine $p_1$ and $p_2$ \\emph{across} stages since our preliminary simulation studies showed that, in general, using $\\Psi_N$ to combine $p_1$ and $p_2$ yielded greater power than using $\\Psi_F$. The reason for this is that under the alternative hypothesis, $p_1$ and $p_2$ both tend to be small and the rejection region of $\\Psi_N$ is larger than that of $\\Psi_F$ when $p_1$ and $p_2$ are both small (Wassmer and Brannath 2016, Section 6.2).\n\n\n\nFor the AGMCTs, we report the results of the operating characteristics for both the known and unknown variance cases. The results for the corresponding GMCTs in a non-adaptive design are also reported. 
For the AMCT, the simulation studies of the operating characteristics are presented only for the known variance case. The corresponding test in a non-adaptive design is just the MCP-Mod procedure, which is equivalent to the GMCT based on Tippett's combination method in a non-adaptive design.\n\nAll dosage adaptations are made according to the example rule described in Section \\ref{Adapting the Dosage Groups}. To deal with the problems outlined in Section \\ref{Adapting the Candidate Dose-Response Models} above, if only one of the $E_{\\max}$ and Logistic models fails to converge in Stage 1, isotonic regression is used to generate the corresponding contrast for use in Stage 2; if both the $E_{\\max}$ and Logistic models fail to converge in Stage 1, then isotonic regression is used to generate the corresponding contrast for the Logistic model and the same contrast that was used in Stage 1 is used in Stage 2 for the $E_{\\max}$ model. Also, if there is a negative dose-response relationship suggested by the Stage 1 data (i.e., a negative estimated slope in the Linear model), no adaptation of the dose-response models is performed for Stage 2 and we only adapt the dosage groups.\n\nAll estimated values of Type I error probability and power are based on 10,000 replications of the simulations. The Type I error probabilities for the AGMCTs and the AMCT (Tables \\ref{sim: type I error, known variance} and \\ref{sim: type I error, unknown variance} in the Appendix) agree with theory that the tests being considered all exhibit control of the Type I error probability at $\\alpha=0.05$; all values fall within the 95\\% confidence interval (0.0457, 0.0543).\n\nFor the known variance case, the power curves of the competing tests are shown in Figure \\ref{fig: Power AGMCTs and AMCT, known variance}. 
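The 95\% interval (0.0457, 0.0543) quoted above is the standard normal-approximation binomial bound around $\alpha=0.05$ for 10{,}000 Monte Carlo replications, which can be verified directly:

```python
from math import sqrt

alpha, n = 0.05, 10_000
half = 1.96 * sqrt(alpha * (1 - alpha) / n)   # normal-approx. half-width
lo, hi = round(alpha - half, 4), round(alpha + half, 4)
print(lo, hi)   # 0.0457 0.0543
```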
When the optimal contrasts associated with the true dose-response models are highly correlated with those of the candidate models ($E_{\\max}$ 2 and Double-logistic models), the AGMCTs and the AMCT are, in general, slightly less powerful than the corresponding tests in a non-adaptive design. When the optimal contrasts associated with the true dose-response models are not highly correlated with those of the candidate models ($E_{\\max}$ 3, Exponential 1, Exponential 2, Quadratic 2, Step and Truncated-logistic models), however, the AGMCTs and AMCT are more powerful than the corresponding tests in a non-adaptive design. Another observation is that the overall performance of the AMCT is the best among all the adaptive designs.\n\nFor the unknown variance case, the power curves of the competing tests are shown in Figure \\ref{fig: Power AGMCTs, unknown variance}. The overall results for these comparisons are very similar to those for the known variance case.\n\n\\section{Conclusion}\\label{Conclusion}\n\nIn this article, we extend the MCP-Mod procedure with GMCTs (Bretz {\\it{et al}}., 2005; Ma and McDermott, 2020) to two-stage adaptive designs. We perform a GMCT within each stage and combine the stage-wise $p$-values using a specified combination method to test the overall null hypothesis of no dose-response relationship. We also consider and extend an alternative AMCT approach proposed by Miller (2010), which uses the maximum standardized stratified contrast across Stage 1 and Stage 2 as the test statistic. One issue that deserves further exploration is how to best determine the ``base test\" for the AMCT. Our development in Sections \\ref{AMCT, Known Variance Case} and \\ref{AMCT, Unknown Variance Case} is based on pre-specification of the contrasts, number of candidate dose-response models, and group sample sizes to be the same in Stage 2 as they were in Stage 1. 
While this is not necessarily the best choice, in the absence of the ability to enumerate all possible two-stage designs being considered, it might be quite reasonable in practice. An issue that remains unresolved is that of efficiently computing the conditional error and adaptive critical value for the AMCT when the variance is unknown since these involve multidimensional integrals that can take a long time to compute.\n\nSimulation studies demonstrate that the AGMCTs and AMCT are generally more powerful for PoC testing than the corresponding tests in a non-adaptive design if the true dose-response model is, in a sense, not ``close'' to the models included in the initial candidate set. This might occur, for example, if the selection of the candidate set of dose-response models is not well informed by evidence from preclinical and early-phase studies. This is consistent with intuition: if the dose-response models are badly misspecified at the design stage, using data from Stage 1 to get a better sense of the true dose-response model and using data from both Stage 1 and Stage 2 to perform an overall test for $H_0$ should result in increased power. On the other hand, if the true dose-response model is ``close\" to the models specified in the initial candidate set, the non-adaptive design is sufficient to detect the PoC signal. In this case, the adaptive design does not provide any benefit and results in a small loss of efficiency.\n\nComparisons among the different AGMCTs and the AMCT did not reveal major differences in their operating characteristics in general. Differences among the AGMCTs tended to be larger in the setting of a non-adaptive design (Ma and McDermott, 2020). 
In principle, the AGMCTs proposed here for two-stage adaptive designs could be extended to multiple stages, although the circumstances under which that would be beneficial are not clear.\n\nFinally, we note that baseline covariates can easily be incorporated into the AGMCTs, as outlined in Section 2.3 of Ma and McDermott (2020).\\\\\n\n\\setstretch{1}\n\n\\makeatletter\n\\renewcommand{\\@biblabel}[1]{#1.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAdvances in maritime robotics over the last two decades have fostered an emergence of unmanned surface vehicles (USVs). These autonomous boats range from small vessels used for automated inspection of dangerous areas and automation of repetitive tasks like bathymetry or environmental control, to massive cargo and transport ships. This next stage of maritime automation holds a potential to transform maritime-related tasks and will likely impact the global economy. The safety of autonomous navigation systems hinges on their environment perception capability, in particular obstacle detection, which is responsible for timely reaction and collision avoidance.\n\n\nCameras as low-power and information rich sensors are particularly appealing due to their large success in perception for autonomous cars~\\cite{Cordts2016Cityscapesa,Chen2018Encoder}. However, recent works~\\cite{Bovcon2019Mastr,Bovcon2020MODS} have shown that methods developed for autonomous cars do not translate well to USVs due to the specifics of the maritime domain. As a result, several approaches that exploit the domain specifics for improved detection accuracy have been recently proposed~\\cite{Bovcon2021WaSR,Cane2019Evaluating,Yao2021Shoreline,Zust2021SLR}. Since everything but water can be an obstacle, classical detectors for individual obstacle classes cannot address all obstacle types. 
State-of-the-art methods~\cite{Bovcon2021WaSR} instead cast maritime obstacle detection as an anomaly segmentation problem by segmenting the image into the water, sky and obstacle categories.\n\begin{figure}[ht]\n\centering\n\includegraphics[width=\linewidth]{figures\/motivation.pdf}\n\caption{Single-frame obstacle detection methods (top right) struggle to distinguish between object reflections and true objects. However, reflections exhibit a distinctive temporal pattern compared to true objects (bottom left). WaSR-T (bottom right) considers the temporal context from recent frames to learn these patterns and increase segmentation robustness.}\n\label{fig:motivation}\n\end{figure}\n\nDespite significant advances reported in the recent maritime benchmark~\cite{Bovcon2020MODS}, the state-of-the-art is still challenged by the reflective properties of the water surface, which cause object reflections and sun glitter. In fact, given a single image, it is quite difficult to distinguish a reflected object or a spot of sun glitter from a true obstacle (Figure~\ref{fig:motivation}). This results in a number of false positive detections, which in practice lead to frequent and unnecessary slowdowns of the boat, rendering current camera-based obstacle detection methods impractical.\n\n\nWe note that while correctly classifying reflections from a single image is challenging, the problem might become simpler when considering the temporal context. As illustrated in Figure~\ref{fig:motivation}, due to water dynamics, the reflection appearance is not locally static, like that of an obstacle, but undergoes warped deformations. Based on this insight, we propose a new maritime obstacle segmentation network WaSR-T, which is our main contribution. WaSR-T introduces a new temporal context module that allows the network to extract the temporal context from a sequence of frames to differentiate objects from reflections. 
To the best of our knowledge, this is the first deep maritime obstacle detection architecture with a temporal component.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/architecture.pdf}\n\\caption{Overview of WaSR-T (left). Target frame and preceding context frames are fed into a shared encoder producing per-frame feature maps $X_F$ and $M_F$. The Temporal Context Module (right) extracts the temporal information from per-frame embeddings using a 3D convolution. The resulting temporal context embeddings $C_V$ are combined with target frame embeddings $X_V$ and fed into the decoder which predicts the target frame segmentation.}\n\\label{fig:architecture}\n\\end{figure*}\n\n\nWe also observe that the challenging maritime mirroring and glitter scenes are underrepresented in the standard training sets. We therefore extend the existing single-frame maritime segmentation training dataset MaSTr1325~\\cite{Bovcon2019Mastr} with corresponding preceding frames and introduce additional training images representing challenging reflection conditions, which is our secondary contribution. To maintain the notation convention, we name the extended dataset MaSTr1478. Experiments show that the dataset extension delivers significant performance improvement. Results on the recent maritime benchmark MODS~\\cite{Bovcon2020MODS} show that, compared to the single-frame WaSR~\\cite{Bovcon2021WaSR}, the proposed \\mbox{WaSR-T} reduces the number of false positive detections by 30\\% with a low computational overhead and sets a new state-of-the-art in maritime obstacle detection.\n\nIn summary, our main contributions are: (i) WaSR-T, a temporal extension of WaSR~\\cite{Bovcon2021WaSR} that leverages the temporal context for increased robustness and (ii) MaSTr1478, an extension of the existing single-frame training dataset~\\cite{Bovcon2019Mastr} with challenging reflection scenes that facilitates the training of temporal maritime segmentation networks. 
The new dataset and the WaSR-T source code will be publicly released to facilitate further research of temporal features in maritime obstacle detection.\n\n\n\n\n\n\n\n\\section{Related work}\n\n\nSemantic segmentation has become a common approach for obstacle detection in the marine domain~\\cite{Bovcon2019Mastr,Cane2019Evaluating,Bovcon2020MODS}, as it can address both dynamic (\\eg boats, swimmers, buoys) and static obstacles (\\eg shore, piers, floating fences) in a unified way by posing the problem as anomaly segmentation. Recently, several specialized networks for the marine domain have been proposed for this task~\\cite{Bovcon2021WaSR,Yao2021Shoreline,Qiao2022Automated}. These methods address reflections and increase detection robustness in multiple ways, including regularization techniques~\\cite{Yao2021Shoreline}, specialized loss functions~\\cite{Bovcon2021WaSR} and obstacle-oriented training regimes~\\cite{Zust2021SLR}. \n\nHowever, robustness to reflections is still lacking and causes comparatively low performance within the 15m area near the boat~\\cite{Bovcon2020MODS}, where segmentation errors are most critical. In practice, obstacle detection methods receive frames sequentially, thus the temporal component of the data is also available and could be used to distinguish between reflections and objects. So far, the additional temporal information has not yet been explored in context of maritime obstacle detection.\n\nIn other domains with similar access to sequential image data, effort has been made to harness the temporal information to improve the segmentation performance. Some approaches investigate the use of temporal information only during training to improve the temporal consistency of single-frame networks. 
\\cite{Varghese2021Unsupervised} and \\cite{Liu2020Efficient} achieve this by propagating the segmentation masks in consecutive frames by optical flow.\n\nIncorporating temporal information into the network for improved prediction has been explored as well,\nwith attention-based approaches being the most prevalent method. In video object segmentation~\cite{Oh2019Video,Li2020Fast,Duke2021SSTVOS} attention is used to aggregate the information from features and segmentation masks of previous ``memory\" frames based on the attention between the target and memory features. However, these methods are designed mainly for propagating initial segmentation masks of large foreground objects over the video sequence and are not directly suitable for general purpose discriminative semantic segmentation required for obstacle detection.\n\nSimilarly, in video semantic segmentation~\cite{Wang2021Temporal,Yuan2021CSANet} attention-based approaches are used to aggregate the temporal information from recent frames to improve general purpose semantic segmentation. \cite{Yuan2021CSANet} additionally introduces auxiliary losses, which guide the learning of attention based on inter-frame consistency. Instead of a global attention which aggregates information from semantically similar regions from past frames, we propose a convolutional approach to facilitate the learning of local temporal texture, which is characteristic of reflections.\n\n\section{Temporal context for obstacle detection}\n\n\nGiven a target frame $X \in \mathbb{R}^{3 \times H \times W}$, the task of the segmentation-based obstacle detection method is to predict the segmentation mask, \ie to classify each location in $X$ as either water, sky or obstacle. We propose using the temporal context to improve the prediction accuracy. Our network (Figure~\ref{fig:architecture}), denoted as \mbox{WaSR-T}, is based on the state-of-the-art single-frame network for maritime obstacle detection\nWaSR~\cite{Bovcon2021WaSR}. 
\nWe design WaSR-T to encode the discriminative temporal information about local appearance changes of a region over $T$ preceding context frames $M \in \mathbb{R}^{T \times 3 \times H \times W}$. The temporal context is added to the high-level features at the deepest level of the network as shown in Figure~\ref{fig:architecture}.\n\nFollowing \cite{Oh2019Video} and \cite{Wang2021Temporal}, the target and context frames are first individually encoded with a shared encoder network, producing per-frame feature maps $X_F \in \mathbb{R}^{N \times H \times W}$ and $M_F \in \mathbb{R}^{T \times N \times H \times W}$, where $N$ is the number of channels.\nThe Temporal Context Module (Section~\ref{sec:method\/temporal_descriptors}) then extracts discriminative temporal context embeddings from per-frame features. Finally, the temporal context embeddings are concatenated with target frame embeddings and fed into a decoder network. Following \cite{Bovcon2021WaSR}, the decoder gradually merges the information with multi-level features of the target frame (\ie skip connections) and outputs the final target frame segmentation.\n\n\n\subsection{Temporal Context Module}\label{sec:method\/temporal_descriptors}\n\nThe Temporal Context Module (TCM) extracts the temporal information from embeddings of the context and target frames and combines it with embeddings of the target frame using concatenation (Figure~\ref{fig:architecture}). For this reason, the number of input channels to the decoder doubles compared to the single-frame network. 
Thus, in order to preserve the structure and number of input channels to the decoder, TCM first reduces the dimensionality of per-frame feature maps $X_F$ and $M_F$ accordingly -- a shared $1 \\times 1$ convolutional layer is used to project the per-frame feature maps into $N\/2$-dimensional per-frame embeddings $X_V$ and $M_V$ as shown in Figure~\\ref{fig:architecture}.\n\nTo extract the temporal information from a sequence of frame embeddings, attention-based approaches~\\cite{Oh2019Video,Duke2021SSTVOS,Wang2021Temporal} are often utilized, as they allow aggregation of information from semantically similar regions across multiple frames to account for movement and appearance changes of objects. Reflections, however, often feature significant local texture changes, as demonstrated in Figure~\\ref{fig:motivation}.\nThus, instead of globally aligning semantically similar regions using attention mechanisms, we utilize a spatio-temporal convolution to extract the local texture changes.\n\nFirst, we stack the context and target frame embeddings $M_V$ and $X_V$ into a spatio-temporal context volume $C \\in \\mathbb{R}^{(T+1) \\times N\/2 \\times H \\times W}$. Then a 3D convolution layer is used to extract discriminative spatio-temporal features from $C$. To account for minor inter-frame object and camera movements, a kernel size of $(T+1) \\times 3 \\times 3$ is used to capture temporal information in a local spatial region around locations in the context volume. We apply padding in the spatial dimensions to preserve the spatial size of the output context features $C_V \\in \\mathbb{R}^{N\/2 \\times H \\times W}$.\n\n\\subsection{Efficient inference}\n\nDuring training, for each input image $X$, WaSR-T needs to extract all per-frame context embeddings $M_F$ in addition to target frame embeddings $X_V$. 
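The tensor shapes involved in the TCM can be sketched with a minimal NumPy toy example. The dimensions, `W_proj` (standing in for the shared $1 \times 1$ convolution) and `K` (standing in for the learned 3D kernel) are hypothetical random values for illustration, not the paper's implementation:

```python
import numpy as np

# Hypothetical tiny dimensions (the paper uses T=5 and deep feature maps).
T, N, H, W = 2, 8, 4, 4

rng = np.random.default_rng(0)
X_F = rng.standard_normal((N, H, W))        # target-frame features
M_F = rng.standard_normal((T, N, H, W))     # context-frame features

# Shared 1x1 convolution = per-pixel linear projection to N/2 channels.
W_proj = rng.standard_normal((N // 2, N))
X_V = np.einsum('cn,nhw->chw', W_proj, X_F)
M_V = np.einsum('cn,tnhw->tchw', W_proj, M_F)

# Stack into the spatio-temporal context volume C: (T+1, N/2, H, W).
C = np.concatenate([M_V, X_V[None]], axis=0)

# Naive 3D convolution with kernel (T+1) x 3 x 3: the temporal axis is
# consumed entirely; spatial padding of 1 preserves the H x W size.
K = rng.standard_normal((N // 2, T + 1, N // 2, 3, 3))  # (out, t, in, kh, kw)
Cp = np.pad(C, ((0, 0), (0, 0), (1, 1), (1, 1)))
C_V = np.zeros((N // 2, H, W))
for y in range(H):
    for x in range(W):
        patch = Cp[:, :, y:y + 3, x:x + 3]          # (T+1, N/2, 3, 3)
        C_V[:, y, x] = np.einsum('otckw,tckw->o', K, patch)

# Decoder input: target embeddings concatenated with the temporal context,
# restoring the original channel count N.
decoder_in = np.concatenate([X_V, C_V], axis=0)
print(decoder_in.shape)  # (8, 4, 4)
```

Note how the concatenation restores the channel count of the single-frame network, which is why the preceding $N/2$ projection is needed.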
However, during inference the frames are passed to the network sequentially, thus recent frame embeddings can be stored in memory and feature extraction only needs to be performed on the newly observed target frame. Specifically, WaSR-T stores a buffer of $T$ most recent frame embeddings $X_V$ in memory and uses them as the context frame embeddings $M_V$ in TCM. The memory buffer is initialized with $T$ copies of the $X_V$ embeddings of the first frame in the sequence. Using sequential inference, the efficiency of WaSR-T is not significantly impacted compared to single-frame methods, differing only due to the temporal context extraction in TCM.\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/cmp_real_compressed.pdf}\n\\caption{Qualitative results on MODS (top) and web-sourced sequences (bottom) reveal that in WaSR-T the handling of reflections and sun glitter is significantly improved compared to WaSR, resulting in a smaller number of FP detections and increased temporal consistency.}\n\\label{fig:cmp_mods}\n\\end{figure*}\n\n\\subsection{Implementation details}\n\nWaSR-T follows the architecture of WaSR~\\cite{Bovcon2021WaSR} and applies ResNet101 as the feature encoder. In a preliminary study we observed that in contrast to WaSR, the inertial measurements (IMU) do not bring improvements in our temporal extension. Therefore the IMU is not used in the decoder for simplicity. We apply the original WaSR training procedure, i.e., the water separation loss function, hyper-parameters, optimizers, learning rate schedule and image augmentation. We set the number of past frames in the temporal context module to $T=5$. Because of training memory constraints, the backbone gradients are restricted to the current and previous frame. 
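The sequential-inference buffering described above can be sketched in pure Python. This is a hypothetical helper (class name and semantics are our illustration, not the authors' code), assuming the buffer holds the $T$ most recent embeddings and is seeded with $T$ copies of the first frame's embedding:

```python
from collections import deque

class TemporalBuffer:
    """Rolling buffer of the T most recent frame embeddings.

    On the first frame, the buffer is filled with T copies of that frame's
    embedding, mirroring the initialization described in the text.
    """

    def __init__(self, T):
        self.T = T
        self.buf = deque(maxlen=T)

    def context(self, x_v):
        """Return the context M_V for the current frame, then store x_v."""
        if not self.buf:                  # first frame of the sequence
            self.buf.extend([x_v] * self.T)
        ctx = list(self.buf)
        self.buf.append(x_v)              # x_v becomes context for later frames
        return ctx

buf = TemporalBuffer(T=3)
print(buf.context('f0'))  # ['f0', 'f0', 'f0']
print(buf.context('f1'))  # ['f0', 'f0', 'f0']
print(buf.context('f2'))  # ['f0', 'f0', 'f1']
```

Since only the newly observed frame is passed through the encoder, the per-frame cost stays close to that of the single-frame network.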
WaSR-T is trained for 100 epochs on 2$\\times$NVIDIA Titan A100S GPUs with a minibatch size of 4 per GPU.\n\nThe networks in our experiments are trained on the training set Mastr1478 (Section~\\ref{sec:mastr1478}) and tested on the most recent maritime obstacle detection benchmark MODS~\\cite{Bovcon2020MODS}, which contains approximately 100 annotated sequences captured under various conditions. The evaluation protocol reflects the detection performance meaningful for practical USV navigation and separately evaluates the detection of obstacle-water edge for static obstacles and the detection of dynamic obstacles. The water-edge detection robustness ($\\mu_R$) is computed from the ground truth edge, while dynamic obstacle detection is evaluated in terms of true-positive (TP), false-positive (FP) and false-negative (FN) detections, and summarized by the F1 measure, precision (Pr) and recall (Re). A dynamic obstacle counts as detected (TP) if the coverage of the segmentation inside the ground truth bounding box is sufficient, otherwise the obstacle counts as undetected (FN). Predicted segmentations outside of the ground truth bounding boxes count as false positive detections. Detection performance is reported over the entire visible navigable area and separately within a 15m \\textit{danger zone} from the USV, where the detection performance is critical for immediate collision prevention.\n\n\\subsection{Temporally extended training dataset MaSTr1478}\\label{sec:mastr1478}\n\nTo facilitate the training of temporal networks, we extended the recent MaSTr1325~\\cite{Bovcon2019Mastr} dataset, which contains 1325 fully segmented images recorded by a USV. First, the dataset was extended by adding $T=5$ preceding frames for each annotated frame, to allow learning of the temporal context. We noticed that while MaSTr1325 is focused on the broader challenges in maritime segmentation, it contains relatively few examples of challenging reflections and glitter. 
We have thus extended the original dataset with an additional 153 images (including their preceding frames) and use the codename \\textit{MaSTr1478} for this new dataset. The additional images were obtained from online videos or newly recorded by us to represent difficult scenarios for current single-frame methods, where the temporal information is important for accurate prediction, such as object mirroring, reflections and sun glitter. Examples are shown in Figure~\\ref{fig:dataset}. The frames are labeled with per-pixel ground truth following~\\cite{Bovcon2019Mastr}. To emphasize the challenging conditions, training batches sample from the original MaSTr1325 images and the additional images with equal probability. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/ds_examples.jpg}\n\\caption{Examples of the additional training sequences in MaSTr1478 with object reflections, sun glitter and low-light conditions.}\n\\label{fig:dataset}\n\\end{figure}\n\n\n\n\\subsection{Comparison with state of the art}\n\nWe compare WaSR-T with single-frame state-of-the-art segmentation methods (DeepLabV3+~\\cite{Chen2018Encoder}, BiSeNet~\\cite{Yu2018Bisenet}, RefineNet~\\cite{Lin2017RefineNet}, WaSR~\\cite{Bovcon2021WaSR}), which scored as top performers on the recent maritime obstacle detection benchmark MODS~\\cite{Bovcon2020MODS}, as well as with state-of-the-art segmentation methods that rely on temporal information. For the latter, we considered the video object segmentation method STM~\\cite{Oh2019Video} and a recent video semantic segmentation method TMANet~\\cite{Wang2021Temporal}, which use memory attention to encode the temporal information from past frames. 
Since a relatively simple backbone is used in the original STM, we extended it to the same backbone and decoder architecture as used in WaSR~\\cite{Bovcon2021WaSR}.\n\n\n\nResults in Table~\\ref{tab:sota} show that multi-frame methods outperform the single-frame networks in detection precision (particularly within the danger zone), and except for TMANet, preserve a high recall.\nWaSR-T outperforms the original WaSR by 1.8 points in precision and 0.9 points in the overall F1, while substantially outperforming it within the danger zone, resulting in a 6.0-point F1 score improvement. This is primarily due to a reduction of false positives (see Figures \\ref{fig:cmp_mods} and \\ref{fig:cmp_davimar}), which is reflected in a 10.5-point improvement of the Pr score within the danger zone. \\mbox{WaSR-T} also outperforms the temporal state-of-the-art networks, especially inside the danger zone, resulting in an approximately 2-point improvement of the danger-zone F1 score. \n\nIn terms of speed, the new temporal module does not substantially increase the computation. The original WaSR runs at 15 FPS, while WaSR-T runs at approximately 10 FPS, which matches the sequence acquisition framerate.\n\nDespite the large improvements in robustness to reflections, WaSR-T still shares some limitations (\\eg detection of thin objects) with existing methods, as shown in Figure~\\ref{fig:cmp_failure}. For example, the temporal context is still not able to fully address reflections in rare situations where the water is completely still and the temporal texture changes cannot be observed. We aim to tackle these challenges in our future work. \n\n\\begin{table}[]\n \\centering\n \\caption{Comparison of SOTA single-frame and multi-frame methods on MODS in terms of water-edge detection robustness ($\\mu_R$), precision, recall and F1 score for obstacle detection. 
Danger-zone performance is reported in parentheses.}\n \\label{tab:sota}\n \\begin{tabular}{lcccc}\n \n method & $\\mu_R$ & Pr & Re & F1 \\\\\n \\midrule\n DeepLabV3+~\\cite{Chen2018Encoder} & 96.8 & 80.1 (18.6) & \\textbf{92.7} (\\textbf{98.4}) & 86.0 (31.3) \\\\\n BiSeNet~\\cite{Yu2018Bisenet} & 97.4 & 90.5 (53.7) & 89.9 (97.0) & 90.2 (69.1) \\\\\n RefineNet~\\cite{Lin2017RefineNet} & 97.3 & 89.0 (45.1) & 93.0 (98.1) & 91.0 (61.8) \\\\\n WaSR~\\cite{Bovcon2021WaSR} & 97.8 & 95.1 (80.3) & 91.9 (96.2) & 93.5 (87.6) \\\\\n \\midrule\n TMANet~\\cite{Wang2021Temporal} & 98.3 & 96.4 (90.0) & 85.1 (93.0) & 90.4 (91.5) \\\\\n STM~\\cite{Oh2019Video} & \\textbf{98.4} & 96.3 (86.2) & 92.5 (96.4) & \\textbf{94.4} (91.0) \\\\\n WaSR-T & \\textbf{98.4} & \\textbf{96.9} (\\textbf{90.8}) & 92.0 (96.5) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/cmp_davimar.pdf}\n\\caption{Qualitative results on challenging inland water sequences demonstrate large improvements of WaSR-T in terms of practical robustness to reflections.}\n\\label{fig:cmp_davimar}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/cmp_failure.pdf}\n\\caption{Failure cases of both methods include small objects hiding in reflections (column 1), reflections on very still water (column 2), thin objects (column 3) and challenging water-land boundaries (columns 4 and 5).}\n\\label{fig:cmp_failure}\n\\end{figure*}\n\n\\subsection{Analysis of the alternative temporal aggregation methods}\n\nNext, we analyzed alternatives to the feature fusion in the temporal context module proposed in Section~\\ref{sec:method\/temporal_descriptors}: (i) pixel-wise average pooling of temporal features (window size of $(T+1) \\times 1 \\times 1$) and (ii) local average pooling of temporal features ($(T+1) \\times 3 \\times 3$). 
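Both average-pooling baselines can be viewed as the TCM's 3D convolution with a fixed uniform kernel instead of learned weights. A minimal NumPy sketch of the pixel-wise variant (toy shapes and random values, not the paper's code) makes the equivalence explicit:

```python
import numpy as np

T, C, H, W = 5, 4, 3, 3  # toy sizes; C plays the role of the N/2 channels
rng = np.random.default_rng(0)
ctx = rng.standard_normal((T + 1, C, H, W))  # stacked spatio-temporal volume

# (i) pixel-wise average pooling: a (T+1) x 1 x 1 window over time only.
pooled = ctx.mean(axis=0)                    # -> (C, H, W)

# Equivalent fixed-kernel view: a (T+1) x 1 x 1 convolution whose weights
# are all 1/(T+1); the learned 3D convolution of the TCM generalizes this.
uniform = np.full(T + 1, 1.0 / (T + 1))
pooled_conv = np.einsum('t,tchw->chw', uniform, ctx)
assert np.allclose(pooled, pooled_conv)
print(pooled.shape)  # (4, 3, 3)
```

The local variant (ii) additionally averages over a $3 \times 3$ spatial window, i.e., a uniform $(T+1) \times 3 \times 3$ kernel.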
Table~\\ref{tab:temp_agg} shows that simple pixel-wise temporal average pooling of context features already improves the performance over single-frame inference (WaSR) by 0.8 points (overall) and 1.9 points (danger zone) in F1. Increasing the pooling window size to a local window does not improve performance. In contrast, the 3D convolution approach described in Section~\\ref{sec:method\/temporal_descriptors} is able to learn discriminative local temporal relations and increases the F1 by an additional 0.2 points overall, and by 3.5 points inside the danger zone. The improvement is primarily on account of a substantial reduction of false-positive detections.\n\\begin{table}[htb]\n \\centering\n \\caption{WaSR-T performance with different temporal aggregation methods in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. Performance inside the danger-zone is reported in parentheses.}\n \\label{tab:temp_agg}\n \\begin{tabular}{lccc}\n \n aggregation & $\\mu_R$ & FP & F1 \\\\\n \\midrule\n Single-frame & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n Avg pool ($(T+1) \\times 1 \\times 1$) & \\textbf{98.4} & 1771 (474) & 94.2 (90.1) \\\\\n Avg pool ($(T+1) \\times 3 \\times 3$) & 98.3 & 2152 (537) & 93.5 (89.2) \\\\\n \n \n 3D Convolution & \\textbf{98.4} & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\\subsection{Influence of the temporal and spatial context size}\n\nTo gain further insights, we analyzed the influence of temporal context module parameters, i.e., the temporal context length $T$ and spatial kernel size. 
Table~\\ref{tab:abl} shows that utilizing even a single temporal context frame (i.e., $T=1$) significantly improves the performance over single-frame inference ($T=0$) by decreasing the number of false positive detections by 30\\% overall and 39\\% inside the danger zone.\nIncreasing the temporal context length $T$ further brings consistent but smaller improvements in reduction of FP detections and danger-zone F1 scores.\n\n\nThe spatial context size, determined by the kernel size of the 3D convolution of the temporal context module, also importantly affects the performance. Using a $1 \\times 1$ spatial kernel encodes only pixel-wise temporal relations, which negatively impacts the performance inside the danger zone, within which the objects are typically large. Increasing the kernel size to $3 \\times 3$ addresses this issue, while the performance does not improve with further increases of the spatial context size.\n\\begin{table}[htb]\n \\centering\n \\caption{Influence of parameters in WaSR-T in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. Performance inside the danger-zone is reported in parentheses.}\n \\label{tab:abl}\n \\begin{tabular}{cccc}\n \n $T$ & $\\mu_R$ & FP & F1 \\\\\n \\midrule\n 0 & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n 1 & 98.4 & 1745 (383) & 94.2 (91.5) \\\\\n 3 & \\textbf{98.6} & 1606 (323) & 94.0 (92.6) \\\\\n 5 & 98.4 & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \\midrule\n kernel size & & & \\\\\n \\midrule\n $1 \\times 1$ & 98.1 & \\textbf{1456} (357) & \\textbf{94.6} (92.0) \\\\\n $3 \\times 3$ & \\textbf{98.4} & 1540 (\\textbf{261}) & 94.4 (\\textbf{93.6}) \\\\\n $5 \\times 5$ & 98.3 & 1639 (318) & 94.2 (92.6) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\n\\subsection{Influence of the extended MaSTr1478}\n\nFinally, several experiments were performed to evaluate the contribution of the extended training dataset MaSTr1478. 
In particular, we examined how much of the performance improvement is brought by the temporal extension and how much by the new scenes with reflections and glitter. The results in Table~\\ref{tab:wasr_comp} show that the single-frame WaSR does not benefit from the additional sequences in MaSTr1478. While the overall detection performance improves by 0.1 F1 points, the performance decreases by 0.6 points inside the danger zone. \nUsing only the temporally extended MaSTr1325 does not improve WaSR-T performance. However, when the new MaSTr1478 sequences are also considered, the performance improves substantially. We observe a 41\\% overall reduction in the number of FP detections and a 53\\% reduction of FPs inside the danger zone. Performance is thus increased by 1.0 F1 points overall and by 5.4 F1 points inside the danger zone. \n\nFigure~\\ref{fig:cmp_mods} provides qualitative results. In contrast to \\mbox{WaSR-T}, the single-frame WaSR is unable to correctly segment regions of water containing the reflections and glitter, despite using the reflection-specific training examples of MaSTr1478. We conclude that both the new scenes and the temporal extension allow learning of the temporal appearance in WaSR-T and are responsible for improved segmentation.\n\n\n\n\\begin{table}[htb]\n \\centering\n \\caption{Influence of training dataset extensions in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. 
Performance inside the danger-zone is reported in parentheses.}\n \\label{tab:wasr_comp}\n \\begin{tabular}{lccc}\n \n model & $\\mu_R$ & FP & F1 \\\\\n \\midrule\n WaSR (MaSTr1325) & 97.2 & 2625 (561) & 93.4 (88.2) \\\\\n WaSR (MaSTr1478) & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n WaSR-T (MaSTr1325) & 97.5 & 2273 (655) & 93.7 (87.3) \\\\\n WaSR-T (MaSTr1478) & \\textbf{98.4} & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\\section{Conclusion}\n\nWe presented WaSR-T, a novel maritime obstacle detection network that harnesses the temporal context to improve segmentation-based obstacle detection on water regions with ambiguous appearance. We also extended the well-known training dataset MaSTr1325~\\cite{Bovcon2019Mastr} by including preceding images for each training image and added 153 new training images with challenging scenes containing object mirroring and glitter -- the new dataset is called MaSTr1478. Experiments show that the new images and the temporal extension lead to a substantial improvement in maritime obstacle detection. WaSR-T outperforms single-frame maritime obstacle detection networks as well as other networks that use temporal contexts and sets a new state-of-the-art on the maritime obstacle detection benchmark MODS~\\cite{Bovcon2020MODS}. \n\n\\addtolength{\\textheight}{-12cm} \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe task of generating textual descriptions of images tests a machine's ability to understand visual data and interpret it in natural language. 
\nIt is a fundamental research problem lying at the intersection of natural language processing, computer vision, and cognitive science.\nFor example, single-image captioning~\\citep{farhadi2010every, kulkarni2013babytalk, vinyals2015show, xu2015show} has been extensively studied.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=.9\\linewidth]{images\/overview.pdf}\n \\caption{Overview of the visual comparison task and our motivation. The key is to understand both images and compare them. Explicit semantic structures can be compared between images and used to generate comparative descriptions aligned to the image saliency.}\n \\label{fig:task}\n\\end{figure}\n\nRecently, a new intriguing task, visual comparison, along with several benchmarks ~\\citep{jhamtani2018learning, tan2019expressing, park2019robust, forbes2019neural} has drawn increasing attention in the community.\nTo complete the task and generate comparative descriptions, a machine should understand the visual differences between a pair of images (see \\cref{fig:task}).\nPrevious methods~\\cite{jhamtani2018learning} often consider the pair of pre-trained visual features such as the ResNet features~\\cite{he2016deep} as a whole, and build end-to-end neural networks to predict the description of visual comparison directly.\nIn contrast, humans can easily reason about the visual components of a single image and describe the visual differences between two images based on their semantic understanding of each one. \nHumans do not need to look at thousands of image pairs to describe the difference of new image pairs, as they can leverage their understanding of single images for visual comparison. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=\\textwidth]{images\/model.pdf}\n\\caption{Our \\textsc{L2C} model. It consists of a segmentation encoder, a graph convolutional module, and an LSTM decoder with an auxiliary loss for single-image captioning. 
Details are in \\cref{sec:method}.}\n\\label{fig:model}\n\\end{figure*}\n\nTherefore, we believe that visual differences should be learned by understanding and comparing every single image's semantic representation.\nA most recent work~\\cite{zhang2020diagnosing} conceptually supports this argument, where they show that low-level ResNet visual features lead to poor generalization in vision-and-language navigation, and high-level semantic segmentation helps the agent generalize to unseen scenarios. \n\nMotivated by humans, we propose a Learning-to-Compare (\\textsc{L2C}) method that focuses on reasoning about the semantic structures of individual images and then compares the difference of the image pair. \nOur contributions are three-fold: \n\\begin{itemize}\n \\setlength\\itemsep{-0.2em}\n \\item We construct a structured image representation by leveraging image segmentation with a novel semantic pooling, and use graph convolutional networks to perform reasoning on these learned representations.\n \\item We utilize single-image captioning data to boost semantic understanding of each image with its language counterpart.\n \\item Our \\textsc{L2C} model outperforms the baseline on both automatic evaluation and human evaluation, and generalizes better on the testing image pairs.\n\\end{itemize}\n\n\\section{\\textsc{L2C} Model}\n\\label{sec:method}\nWe present a novel framework in \\cref{fig:model}, which consists of three main components. \nFirst, a \\emph{segmentation encoder} is used to extract structured visual features with strong semantic priors.\nThen, a \\emph{graph convolutional module} performs reasoning on the learned semantic representations. 
\nTo enhance the understanding of each image, we introduce a \\emph{single-image captioning auxiliary loss} to associate the single-image graph representation with the semantic meaning conveyed by its language counterpart.\nFinally, a decoder generates the visual descriptions comparing two images based on differences in graph representations. \nAll parameters are shared for both images and both tasks.\n\n\\subsection{Semantic Representation Construction}\nTo extract semantic visual features, we utilize pre-trained fully convolutional networks (FCN)~\\citep{long2015fully} with ResNet-101 as the backbone. \nAn image $\\mathcal{I}$ is fed into the ResNet backbone to produce a feature map $\\mathcal{F} \\in \\mathbb{R}^{D\\times H\\times W}$, which is then forwarded into an FCN head that generates a binary segmentation mask $\\mathcal{B}$ for the bird class. \nHowever, the shapes of these masks are variable for each image, and simple pooling methods such as average pooling and max pooling would lose information about the spatial relations within the mask.\n\nTo address this issue and enable efficient aggregation over the area of interest (the masked area), we add a module after the ResNet to cluster each pixel within the mask into $K$ classes. Feature map $\\mathcal{F}$ is forwarded through this pooling module to obtain a confidence map $\\mathcal{C} \\in \\mathbb{R}^{K\\times H\\times W}$, whose entry at each pixel is a $K$-dimensional vector that represents the probability distribution over the $K$ classes.\n\nThen a set of nodes $V = \\{v_1, ..., v_K\\}, v_k \\in \\mathbb{R}^D$ is constructed as follows: \n\\begin{equation}\n v_k = \\sum_{i, j} \\mathcal{F} \\odot \\mathcal{B} \\odot \\mathcal{C}_k\n\\end{equation}\nwhere $i = 1,\\dots,
H$, $j = 1,\\dots,W$, $\\mathcal{C}_k$ is the $k$-th probability map and $\\odot$ denotes element-wise multiplication.\n\nTo enforce local smoothness, i.e., that pixels in a neighborhood are more likely to belong to one class, we employ the total-variation norm as a regularization term:\n\\begin{equation}\n \\mathcal{L}_{TV} = \\sum_{i,j}|\\mathcal{C}_{i+1,j}-\\mathcal{C}_{i,j}|+|\\mathcal{C}_{i,j+1}-\\mathcal{C}_{i,j}|\n\\end{equation}\n\n\\subsection{Comparative Relational Reasoning}\nInspired by recent advances in visual reasoning and graph neural networks~\\citep{chen2018iterative, li2019visual}, we introduce a relational reasoning module to enhance the semantic representation of each image.\nA fully-connected visual semantic graph $G = (V, E)$ is built, where $V$ is the set of nodes, each containing a regional feature, and $E$ is constructed by measuring the pairwise affinity between every two nodes $v_i, v_j$ in a latent space:\n\\begin{equation}\n A(v_i, v_j) = (W_i v_i)^T (W_j v_j)\n\\end{equation}\nwhere $W_i, W_j$ are learnable matrices, and $A$ is the constructed adjacency matrix. \n\nWe apply Graph Convolutional Networks (GCN)~\\citep{kipf2016semi} to perform reasoning on the graph.\nAfter the GCN module, the output $V^o = \\{v_1^o, ..., v_K^o\\}, v_k^o \\in \\mathbb{R}^D$ will be a relationship-enhanced representation of a bird.\nFor the visual comparison task, we compute the difference of corresponding visual nodes from the two sets, denoted as $V^o_{diff} = \\{v_{diff,1}^o, ..., v_{diff,K}^o\\}, v_{diff,k}^o = v_{k,1}^o - v_{k, 2}^o \\in \\mathbb{R}^D$.\n\n\\subsection{Learning to Compare while Learning to Describe}\nAfter obtaining relation-enhanced semantic features, we use a Long Short-Term Memory (LSTM)~\\citep{hochreiter1997long} to generate captions. \nAs discussed in \\cref{sec:intro}, semantic understanding of each image is key to solving the task. 
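The semantic pooling and total-variation regularizer defined above can be sketched in NumPy. Toy dimensions and random values are used for illustration; in the actual model the mask $\mathcal{B}$ and confidence map $\mathcal{C}$ are produced by the learned FCN head and pooling module:

```python
import numpy as np

# Toy sizes: D feature channels, K semantic classes, H x W spatial map.
D, K, H, W = 6, 3, 4, 4
rng = np.random.default_rng(0)
F = rng.standard_normal((D, H, W))               # backbone feature map
B = (rng.random((H, W)) > 0.5).astype(float)     # binary bird mask
logits = rng.standard_normal((K, H, W))
C = np.exp(logits) / np.exp(logits).sum(axis=0)  # per-pixel class distribution

# Semantic pooling: v_k = sum_{i,j} F * B * C_k, giving K nodes of dim D.
V = np.einsum('dhw,hw,khw->kd', F, B, C)

# Total-variation regularizer on the confidence map (local smoothness).
L_tv = (np.abs(C[:, 1:, :] - C[:, :-1, :]).sum()
        + np.abs(C[:, :, 1:] - C[:, :, :-1]).sum())
print(V.shape)  # (3, 6)
```

Each of the $K$ node vectors aggregates features only from masked pixels, weighted by that class's confidence, so spatial structure inside the mask is preserved.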
However, there is no single dataset that contains both visual comparison and single-image annotations.\nHence, we leverage two datasets from similar domains to facilitate training. One is for visual comparison, and the other is for single-image captioning. Alternate training is utilized such that for each iteration, two mini-batches of images from both datasets are sampled independently and fed into the encoder to obtain visual representations $V^o$ (for single-image captioning) or $V^o_{diff}$ (for visual comparison).\n\nThe LSTM takes $V^o$ or $V^o_{diff}$ with the previous output word embedding $y_{t-1}$ as input, updates the hidden state from $h_{t-1}$ to $h_t$, and predicts the word for the next time step.\nThe generation process of bi-image comparison is learned by maximizing the log-likelihood of the predicted output sentence. The loss function is defined as follows:\n\\begin{equation}\n \\mathcal{L}_{diff}=-\\sum_t {\\log P(y_{t}|y_{1:t-1}, V^o_{diff})}\n\\end{equation}\nA similar loss is applied for learning single-image captioning:\n\\begin{equation}\n \\mathcal{L}_{single}=-\\sum_t {\\log P(y_{t}|y_{1:t-1}, V^o)}\n\\end{equation}\n\nOverall, the model is optimized with a mixture of cross-entropy losses and the total variation loss:\n\\begin{equation}\n \\begin{split}\n \\mathcal{L}_{loss} = \\mathcal{L}_{diff} + \\mathcal{L}_{single} + \\lambda \\mathcal{L}_{TV}\n \\end{split}\n\\end{equation}\nwhere $\\lambda$ is an adaptive factor that weighs the total variation loss.\n\n\n\\section{Experiments}\n\\subsection{Experimental Setup}\n\\paragraph{Datasets} \nThe Birds-to-Words (B2W) dataset has 3347 image pairs, and each has around 5 descriptions of visual differences. This leads to 12890\/1556\/1604 captions for train\/val\/test splits. Since B2W contains only visual comparisons, we use the CUB-200-2011 dataset (CUB)~\\citep{wah2011caltech}, which consists of single-image captions, as an auxiliary dataset to facilitate the training of semantic understanding. 
\nCUB has 8855\/2933 images of birds for train\/val splits, and each image has 10 captions.\n\n\\paragraph{Evaluation Metrics}\nPerformances are first evaluated on three automatic metrics\\footnote{\\url{https:\/\/www.nltk.org}}: BLEU-4~\\citep{papineni2002bleu}, ROUGE-L~\\citep{lin-2004-rouge}, and CIDEr-D~\\citep{vedantam2015cider}. Each generated description is compared to all five reference paragraphs. Note for this particular task, researchers observe that CIDEr-D is susceptible to common patterns in the data (See \\cref{tab:main} for proof), and ROUGE-L is anecdotally correlated with higher-quality descriptions (which is noted in previous work~\\citep{forbes2019neural}). Hence we consider ROUGE-L as the major metric for evaluating performances.\nWe then perform a human evaluation to further verify the performance.\n\n\\begin{table*}[t]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{8pt}\n\\begin{tabular}{l rrrrr rrrrr}\n\\toprule\n & \\multicolumn{3}{c}{\\textbf{Validation}} & \\multicolumn{3}{c}{\\textbf{Test}}\\\\\n\\cmidrule(lr){2-4} \\cmidrule(lr){5-7}\nModel & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ \\\\\n\\toprule\nMost Frequent & 20.0 & 31.0 & \\textbf{42.0} & 20.0 & 30.0 & \\textbf{43.0} \\\\\nText-Only & 14.0 & 36.0 & 5.0 & 14.0 & 36.0 & 7.0 \\\\\nNeural Naturalist & 24.0 & 46.0 & 28.0 & 22.0 & 43.0 & 25.0 \\\\\nCNN+LSTM & 25.1 & 43.4 & 10.2 & 24.9 & 43.2 & 9.9 \\\\\n\\midrule \n\\textsc{L2C} [B2W] & 31.9 & 45.7 & 15.2 & 31.3 & 45.3 & 15.1 \\\\\n\\textsc{L2C} [CUB+B2W] & \\textbf{32.3} & \\textbf{46.2} & 16.4 & \\textbf{31.8} & \\textbf{45.6} & 16.3 \\\\\n\\midrule\nHuman & 26.0 & 47.0 & 39.0 & 27.0 & 47.0 & 42.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Results for visual comparison on the Birds-to-Words dataset~\\citep{forbes2019neural}. 
\\textit{Most Frequent} produces only the most observed description in the dataset: ``the two animals appear to be exactly the same''. \\textit{Text-Only} samples captions from the training data according to their empirical distribution. \\textit{Neural Naturalist} is a transformer model from \\citet{forbes2019neural}. \\textit{CNN+LSTM} is a commonly-used CNN encoder and LSTM decoder model.\n}\n\\label{tab:main}\n\\end{table*}\n\n\n\\paragraph{Implementation Details}\nWe use Adam as the optimizer with an initial learning rate set to 1e-4. The pooling module to generate $K$ classes is composed of two convolutional layers and batch normalization, with kernel sizes 3 and 1, respectively. We set $K$ to 9 and $\\lambda$ to 1. The dimension of graph representations is 512. The hidden size of the decoder is also 512. The batch sizes of B2W and CUB are 16 and 128. Following the advice from~\\citep{forbes2019neural}, we report the results using models with the highest ROUGE-L on the validation set, since it could correlate better with high-quality outputs for this task.\n\n\n\\subsection{Automatic Evaluation}\nAs shown in \\cref{tab:main}, first, L2C[B2W] (trained with the visual comparison task only) outperforms baseline methods on BLEU-4 and ROUGE-L. Previous approaches and architectures failed to bring superior results by directly modeling the visual relationship on ResNet features.\nSecond, joint learning with single-image captioning (L2C[CUB+B2W]) helps improve semantic understanding and thus the overall performance of the model.\nFinally, our method also has a smaller gap between the validation and test sets compared to \\textit{neural naturalist}, indicating its potential capability to generalize to unseen samples.\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{c c|c|c}\n\\toprule\nChoice (\\%) & L2C & CNN+LSTM & Tie \\\\\n\\midrule\nScore & \\textbf{50.8} & 39.4 & 9.8 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Human evaluation results. 
We present workers with two generations by L2C and CNN+LSTM for each image pair and let them choose the better one.\n}\n\\label{tab:human}\n\\end{table}\n\n\\subsection{Human Evaluation}\nTo fully evaluate our model, we conduct a pairwise human evaluation on Amazon Mechanical Turk with 100 image pairs randomly sampled from the test set; each sample was assigned to 5 workers to reduce human variance. Following~\\citet{wang2018arel}, for each image pair, workers are presented with two paragraphs from different models and asked to choose the better one based on text quality\\footnote{We instruct the annotators to consider two perspectives: relevance (the text describes the context of two images) and expressiveness (grammatically and semantically correct).}. As shown in \\cref{tab:human}, \\textsc{L2C} outperforms \\textsc{CNN+LSTM}, which is consistent with the automatic metrics.\n\n\n\\subsection{Ablation Studies} \n\n\\paragraph{Effect of Individual Components}\nWe perform ablation studies to show the effectiveness of semantic pooling, total variation loss, and graph reasoning, as shown in \\cref{tab:ablation}.\nFirst, without semantic pooling, the model degrades to average pooling, and results show that semantic pooling can better preserve spatial relations in the visual representations. 
\nMoreover, the total variation loss can further boost the performance by injecting a local-smoothness prior.\nFinally, the results without the GCN are lower than those of the full L2C model, indicating that graph convolutions can efficiently model relations among visual regions.\n\n\\begin{table}[t]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{l rrr}\n\\toprule\n & \\multicolumn{3}{c}{\\textbf{Validation}}\\\\\n\\cmidrule(lr){2-4}\nModel & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ \\\\\n\\toprule\nL2C & \\textbf{31.9} & \\textbf{45.7} & \\textbf{15.2} \\\\\n\\midrule \n$-$ Semantic Pooling & 24.5 & 43.2 & 7.2 \\\\\n$-$ TV Loss & 29.3 & 44.8 & 13.6 \\\\\n$-$ GCN & 30.2 & 43.5 & 10.7 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Ablation study on the B2W dataset. We individually remove Semantic Pooling, the total variation (TV) loss, and the GCN to test their effects.\n}\n\\label{tab:ablation}\n\\end{table}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.8\\linewidth]{images\/robust.pdf}\n \\caption{Sensitivity test on the choice of $K$.}\n \\label{fig:robust}\n\\end{figure}\n\n\\paragraph{Sensitivity Test}\nWe analyze model performance while varying $K$ ($K$ is the number of classes for the confidence map $\\mathcal{C}$), as shown in \\cref{fig:robust}. Empirically, we find that the results are comparable when $K$ is small. \n\n\n\\section{Conclusion}\nIn this paper, we present a learning-to-compare framework for generating visual comparisons. \nOur segmentation encoder with semantic pooling and graph reasoning constructs structured image representations. \nWe also show that learning to describe visual differences benefits from understanding the semantics of each image.\n\n\\section*{Acknowledgments}\nThe research was partly sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF19-D-0001 for the Institute for Collaborative Biotechnologies. 
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{intro}}\nWR124 ($\\equiv$BAC209) is a Galactic massive star characterized by a very high heliocentric recession velocity of $\\sim$175 km~s$^{-1}$ \\citep{1982A&A...116...54S}, and it is considered to be among the fastest-moving massive stars in the Galaxy \\citep{1982A&A...114..135M}. It was classified by \\citet{1938PASP...50..350M} as a nitrogen-sequence Wolf-Rayet star (WN) and later as a Population I WN8 star \\citep{1969ApJ...157.1245S}. \\\\ \n \nWR stars are thought to represent a late stage in the evolution of stars more massive than 25~$M_{\\mathrm{\\sun}}$, and they are characterized by significant stellar winds with high mass-loss rates and terminal velocities. Many WRs are surrounded by nebular emission, in some cases belonging to a class of objects called \\textit{ring nebulae}. The structure of this type of nebula is attributed to a continual process of mass loss from the exciting WR star, which sweeps the surrounding interstellar gas into a shell \\citep{1965ApJ...142.1033J}. 
The study of nebulae around WRs gives us clues to the mass-loss history of massive stars, as well as to the chemical enrichment of the interstellar medium (ISM).\\\\\n\n\\begin{table*}[t]\n\t\t \\caption{Main physical parameters of WR\\,124 and M1-67.} \n\t\t\\label{table:parameter} \n\t\t\\centering \n \t\\begin{tabular}{l l l l}\t\\hline\n\t\t\t\t\\\\\n Object & Parameter & Value & Reference \\\\ \\hline \\hline \\\\\n\t\t\tWR\\,124 & ($\\alpha$,$\\delta$) (J2000) & (19:11:30.88, +16:51:38.16) & \\citet{1997ESASP1200.....P} \\\\ \t\n\t\t\t& Spectral type & WN 8 & \\citet{1969ApJ...157.1245S} \\\\\n\t\t & $v_{\\mathrm{\\infty}}$ (km~s$^{-1}$) & 710 & \\citet{2001NewAR..45..135V} \\\\\n\t\t & $T_{\\mathrm{eff}}$ (kK) & 44.7 & Hamann et al. (2006) \\\\\n\t\t\t& Distance (kpc) & 4-5 & Crawford \\& Barlow (1991) \\\\\n\t\t\t& $R_{\\mathrm{G}}$ (kpc) & 8-10 & Esteban et al. (1992) \\\\\n\t\t\t&\t $v_{\\mathrm{hel}}$ (km~s$^{-1}$) & 175 & Solf \\& Carsenty (1982) \\\\\n \t\t& $M_{\\mathrm{v}}$ (mag) & -7.22 & Hamann et al. (2006) \\\\\n & $E_{\\mathrm{B-V}}$ (mag) & 1.08 & Hamann et al. (2006) \\\\ \t \t\t\n \\\\ \n\t\t\tM1-67 & H${\\alpha}$ diameter (arcsec) & 110-120 & \\citet{1998ApJ...506L.127G} \\\\\n\t\t\t&\t $v_{\\mathrm{hel}}$ (km~s$^{-1}$) & 150-185 & \\citet{1981ApJ...249..586C} \\\\\n\t\t\t&\t $v_{\\mathrm{exp}}$ (km~s$^{-1}$) & 46 & Sirianni et al. (1998) \\\\\n\t\t\t& $M_{\\mathrm{ionized}}$ ($M_{\\mathrm{\\sun}}$) & 1.73 & \\citet{1998ApJ...506L.127G} \\\\\n\t\t\t\t\\hline\t\t\\\\ \n\t\t\t\\end{tabular}\n\t\t\\end{table*}\n\nM1-67 \\citep[$\\equiv$Sh2-80,][]{1959ApJS....4..257S} is a bright nebula surrounding WR124, and it shows a clumpy and irregular distribution of gas that is mostly condensed in bright knots and filaments. It was first reported by \\citet{1946PASP...58..305M} during an H${\\alpha}$ objective prism survey. The classification of the nebula and its distance have been subjects of debate over the years. 
Although it was first considered an H{\\sc ii} region \\citep{1959ApJS....4..257S}, the classification of M1-67 has alternated between a planetary nebula (PN) and a ring nebula. \\citet{1964PASP...76..241B} adopted a distance of 0.9~kpc and suggested that M1-67 might be a PN since both star and nebula have high radial velocities. Studies of optical, infrared, and radio data by \\citet{1975ApL....16..165C} prompted its classification as a ring nebula around a WR, with a distance of 4.33~kpc (in agreement with the estimates of \\citealt{1979RMxAA...4..271P}). Nevertheless, \\citet{1985A&A...145L..13V} supported the PN status based on the energy distribution in the far infrared. The issue was settled by \\citet{1991A&A...244..205E} and \\citet{1991A&A...249..518C}. The detailed abundance analysis of the nebula by \\citet{1991A&A...244..205E} revealed nitrogen enhancement and oxygen deficiency, which is typical of material ejected in a previous evolutionary phase, and pointed to a progenitor more massive than those usually associated with PN central stars. \\citet{1991A&A...249..518C} estimated a distance between 4 kpc and 5 kpc using the interstellar Na{\\sc i}D$_{2}$ absorption spectrum of the star, ruling out the PN nature. Recently, \\citet{2010ApJ...724L..90M} have used a comprehensive model of the nebular expansion to estimate a distance of 3.35~kpc. Currently, M1-67 is classified as an ejected-type WR ring nebula.\n\nAlthough M1-67 shows an apparent spherical symmetry, ground-based coronographic images revealed a bipolar structure \\citep{1995IAUS..163...78N}. The emission lines seem to be caused by condensations of gas in clumps and radial filaments \\citep{1998ApJ...506L.127G}. One of the most striking characteristics of the nebula is the virtual absence of optical oxygen emission lines \\citep{1978ApJ...219..914B,1991A&A...244..205E}. 
Nevertheless, \\citet{1981ApJ...249..586C} reported a bright spot of [O{\\sc iii}]$\\lambda$5007\\AA{} 15\\arcsec\\, to the NE of the central star. Spectroscopic investigations of the physical conditions and abundances of the nebular shell have shown that the ionized gas is nitrogen-enriched and oxygen-depleted, suggesting that O has been processed into N mainly via the ON cycle \\citep{1991A&A...244..205E}. This implies that M1-67 is almost completely composed of stellar material that is poorly mixed with the surrounding ISM. Long-slit spectroscopy of M1-67 established that the bulk of the nebula is expanding at $v_{\\mathrm{exp}}$=42-46~km~s$^{-1}$ \\citep{1982A&A...116...54S,1998A&A...335.1029S}, and \\citet{1981ApJ...249..586C} confirmed the high heliocentric velocity of the nebula, $v_{\\mathrm{hel}}$=150-185~km~s$^{-1}$, which is comparable to the velocity of the star. The main parameters of the central star, WR124, and the nebula, M1-67, are summarized in Table \\ref{table:parameter}. \n\nMany studies have tried to disentangle the geometry and dynamics of M1-67 and its interaction with WR124. \\citet{1982A&A...116...54S} proposed a simple expanding \\textquotedblleft empty\\textquotedblright{} shell with condensations of stellar material that was ejected by the high-velocity parent star; indeed, the leading edge of the shell is considerably brighter than the trailing part. \\citet{1998A&A...335.1029S} found two components in the environment of the central star and interpreted them as the consequence of two different events in the past: a spherical hollow shell 92\\arcsec\\, in diameter expanding at 46~km~s$^{-1}$ and a bipolar outflow with a semi-axis of 48\\arcsec\\, and a velocity of 88~km~s$^{-1}$ with respect to the expansion centre. On the other hand, some authors explained the asymmetry as the result of a possible low-mass companion of WR124 \\citep{1981ApJ...249..586C,1982A&A...114..135M}. 
\\\\\n\nDespite the important findings of recent years, some relevant aspects of the evolution and formation of the ring nebula associated with WR124 remain unknown. In particular, a 2D study of the ionization structure of the nebula covering all the morphologies and\/or structural components can shed light on the formation process of the nebula from the ejecta of the central star. The late spectral type of the ionizing WR star (WN8) also makes it a particularly interesting object of study, as does the degree of homogeneity in the chemical composition of its ejecta. \\\\\n\nTo do this, we included M1-67 in our programme of integral field spectroscopy (IFS) observations to compare the 2D structure with the integrated properties of certain selected areas and with models of WR evolution. The paper is organized as follows. First, we describe the observations and data reduction in Sect. \\ref{obsandred}. Then, we present the 2D results for the morphology, ionization structure, and kinematics in Sect. \\ref{2d}. In Sect. \\ref{1d} we show the physical conditions and chemical abundances of eight selected areas. We perform a study of M1-67 in the mid-infrared range by analysing the IRS spectrum and the 24$\\,\\mu$m MIPS image from Spitzer in Sect. \\ref{ir}. In Sect. \\ref{discussion} we discuss the chemical composition of M1-67, the observed structure, and its relation to the evolution of the central WR star. Finally, a summary of the main conclusions is given in Sect. \\ref{conclusions}.\n\n\n\n\\section{Observations and data reduction \\label{obsandred}} \n\\begin{figure*}\n\\centering\n\\includegraphics[width=14cm]{m167_INT.pdf}\n\\caption{Narrow-band image of M1-67 in H${\\alpha}$+continuum taken with the Wide Field Camera at the Isaac Newton Telescope. North is up and east left. 
Red hexagons show the two zones of our IFS observations: \\emph{Edge} to the NE (left) and \\emph{Centre} to the SW (right).}\n\\label{fig:rgb}\n\\end{figure*}\n\nThe observations were carried out on July 5, 2005, using the Potsdam Multi-Aperture Spectrograph instrument (PMAS) \\citep{2005PASP..117..620R} in PPAK mode (PMAS fibre Package, \\citealt{2006PASP..118..129K}) at the 3.5~m telescope of the Centro Astron\\'omico Hispano Alem\\'an (CAHA) at the observatory of Calar Alto (Almer\\'ia, Spain). \nThe PPAK fibre bundle consists of 382 fibres, each with a diameter of 2.7 arcsec. The 331 science fibres are concentrated in a hexagonal bundle covering a field of view (FoV) of $74\\arcsec \\times 65\\arcsec$. The surrounding sky is sampled by 36 fibres distributed in six bundles along a circle at about 90\\arcsec \\,from the centre. An additional 15 fibres are used for calibration purposes \\citep[see Fig. 5 in][]{2006PASP..118..129K}. We used the V300 grating, covering the spectral range from 3660 to 7040 \\AA{} with a dispersion of 1.67~\\AA{}\/pix, giving a spectral resolution of FWHM$\\sim$8.7 \\AA{} (R = $\\lambda\/\\Delta\\lambda\\sim$ 660) at 5577\\AA{}. The weather was photometric throughout the observations, with typical subarcsecond seeing. \\\\\n\nTo choose the regions of M1-67 to be mapped, we resorted to the narrow-band images observed by our group at the Isaac Newton Telescope (INT) with the Wide Field Camera (WFC). The first PPAK pointing (called \\emph{Centre}) was centred on the WR star and covers almost the whole nebula. The second zone (called \\emph{Edge}) was selected to study the NE edge of the object, containing both nebular emission and the surrounding medium. Both regions can be seen in Fig. \\ref{fig:rgb}. 
Table \\ref{table:log} shows the observational log for M1-67.\n\nBias frames, continuum and arc exposures, and one spectrophotometric standard star (Hz\\,44) were also acquired during the observations.\\\\\n\n\\begin{table*}\n\\caption{M1-67 PPAK observational log.} \n\\label{table:log} \n\\centering \n\\begin{tabular}{l c c c c c c }\n\\hline\nZone & Coordinates (J2000) & Grating & Spectral range & Exp. time & Airmass & Date \\\\\n& ($\\alpha$,$\\delta$) & & (\\AA{}) & (s) & &\\\\\n\\hline\\hline\n\\\\\nCentre & (19:11:30.9 , +16:51:39.2) & V300 & 3640-7040 & 3 $\\times$ 30 & 1.08 & July 5, 2005 \\\\\nEdge & (19:12:14.8 , +16:52:12.9) & V300 & 3640-7040 & 3 $\\times$ 450 & 1.07 & July 5, 2005 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\nThe data were reduced using the R3D software \\citep{2006AN....327..850S} in combination with IRAF\\footnote{The Image Reduction and Analysis Facility IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. Website: http:\/\/iraf.noao.edu\/.} and the Euro3D packages \\citep{2004AN....325..167S}. The reduction consisted of the standard steps for fibre-fed IFS observations.\n\nFirst, a master bias frame was created and subtracted from all the images. The different exposures taken at the same position on the sky were combined to reject cosmic rays using IRAF routines. A trace mask was generated from a continuum-illuminated exposure, identifying the location of each spectrum on the detector along the dispersion axis. Then, each spectrum was extracted from the science and standard star frames, co-adding the flux within an aperture of five pixels at the location of the spectral peak in the raw data using the tracing information, and storing it in a 2D image called row-stacked spectrum (RSS) \\citep{2004AN....325..167S}. 
We checked that, with this aperture, the contamination from flux coming from adjacent fibres was negligible \\citep{2004PASP..116..565B,2006AN....327..850S}. For a given aperture and $FWHM\\sim(0.5\\times~aperture)$, we found a level of cross-talk that was always $<$10$\\%$. This seems to be an acceptable compromise between maximizing the recovered flux and minimizing the cross-talk. \n\nDistortion and dispersion solutions were obtained using a He calibration-lamp exposure and applied to the science data to perform the wavelength calibration. The accuracy achieved was better than $\\sim$0.1~\\AA{} (rms) for the arc exposures. Corrections to minimize the differences in fibre-to-fibre transmission throughput were also applied, creating a fibre flat from the exposure of a continuum source. Observations of the spectrophotometric standard star Hz\\,44 were used to perform the flux calibration. \n\nThe sky emission was determined using the science data spectra obtained through the additional fibres that sample the sky. As explained above, the second pointing was made at the edge of M1-67, and some of its sky bundles are located within an area containing signal from the nebula. We inspected the 36 sky-fibres of each pointing, selecting those that did not show nebular emission. The spectra of all the selected fibres were averaged into a single spectrum, and a 2D spectrum was created by copying the combined spectrum into each fibre. These sky spectra were then subtracted from every science spectrum, each pointing with its own sky.\n\nFinally, considering the wavelength range and the airmass of the observations, and using \\citet{1982PASP...94..715F}, we estimated that offsets due to differential atmospheric refraction (DAR) were always smaller than one third of the fibre diameter. 
Correction for DAR was not necessary in our data.\n\n\n\n\\section{Two-dimensional analysis \\label{2d}} \nTo perform a detailed analysis of the 2D structure of the nebula, we built easy-to-use datacubes of the two reduced pointings with three dimensions: two spatial and one spectral. We studied all the interpolation routines included in the E3D software and verified which one best conserved the flux and the apparent morphology observed in the spaxels. Finally, we generated our cubes with the linear Delaunay interpolation and a pixel size of 1.5$\\times$1.5~arcsec$^{2}$.\\\\\n\nIn a first attempt to understand the morphology of the two observed zones, we extracted images by collapsing the cubes over different wavelength ranges. They are presented in Fig. \\ref{fig:morphology_all} on a logarithmic scale.\n\n\\begin{figure}\n\\includegraphics[width=9cm]{morph_O3.pdf}\n\\includegraphics[width=9cm]{morph_ha.pdf}\n\\includegraphics[width=9cm]{morph_S2.pdf}\n\\caption{Interpolated images of the two observed regions of M1-67: the edge pointing in the left column and the central one in the right. In each row we represent the flux (including continuum) integrated over a wavelength range. Top: range 5006\\AA{}-5014\\AA{} including [O{\\sc iii}]$\\lambda$5007\\AA{}. Middle: range 6562\\AA{}-6590\\AA{} including H${\\alpha}$ and [N{\\sc ii}]$\\lambda$6584\\AA{}. Bottom: range 6729\\AA{}-6737\\AA{} including [S{\\sc ii}]$\\lambda$6731\\AA{}. \nAll the maps are represented on logarithmic scales with units of $\\log$(erg~cm$^{-2}$~s$^{-1}$). The size of the hexagon side is 38\\arcsec. In all the maps, north is up and east to the left (see Fig. \\ref{fig:rgb}).}\n\\label{fig:morphology_all}\n\\end{figure}\n\nIn the 5006\\AA{}-5014\\AA{} range, which includes the [O{\\sc iii}]$\\lambda$5007\\AA{} line, no significant extended emission can be observed in either region, supporting previous studies that revealed no oxygen emission in M1-67. 
Several spots appear in the FoV (including the central WR124 star) with fluxes lower than\n$\\sim10^{-17}$~erg~cm$^{-2}$~s$^{-1}$, but we checked that their nature was not nebular. They are probably stars along our line of sight. We paid special attention to the spot described by \\citet{1981ApJ...249..586C} at 15\\arcsec\\ to the NE of the star. Although we can observe some emission, we cannot confirm that it comes from the nebula. A more detailed analysis of this spot is performed in Sect. \\ref{1d} by means of the integrated spectrum. \n\nAs for the other lines in the central pointing maps (H${\\alpha}$, [N{\\sc ii}], and [S{\\sc ii}]), most of the emission seems to be concentrated in at least five knots distributed in the NE-SW direction without a counterpart in the [O{\\sc iii}]$\\lambda$5007\\AA{} image. In addition, two regions with very faint surface brightness (or even no emission) can be seen on opposite sides (NW and SE). This orientation agrees with the bipolar structure observed by \\citet{1995IAUS..163...78N} and \\citet{1998A&A...335.1029S} in coronographic studies. H${\\alpha}$ and [S{\\sc ii}] emission show a discontinuity in the edge pointing of the nebula, with higher surface brightness in its SW area. When we move to the NE, the emission decreases until it disappears. The purple coloured area reaches non-negligible emission up to H${\\alpha}\\sim10^{-16}$~erg~cm$^{-2}$~s$^{-1}$ per pixel (1 pixel = 2.25~arcsec$^2$).\\\\\n\n\n\n\n\\subsection{2D study of the emission-line maps \\label{maps}}\nMaps were created from the cubes by fitting the emission lines in each spatial element following the methodology presented in \\citet{2012A&A...541A.119F}. Basically, we performed a Gaussian fit to the emission lines using our own routine, which returns maps of the flux, centre, and FWHM, among other properties. To prevent contamination by low signal-to-noise (S\/N) data, we masked out all pixels with S\/N lower than 5. 
During the creation of the S\/N masks, we visually inspected all the pixels, interactively rejecting those with non-nebular spectra or with contamination from the central WR and other field stars.\n\nFor both regions (centre and edge pointings), maps of parameters from the Gaussian fitting were generated for seven emission lines: H${\\gamma}$, H${\\beta}$, H${\\alpha}$, [N{\\sc ii}]$\\lambda \\lambda$6548,6584\\AA{}, and [S{\\sc ii}]$\\lambda \\lambda$6717,6731\\AA{}. Although our spectral range includes the [O{\\sc ii}]$\\lambda \\lambda$3726,3728\\AA{} lines, their automatic fit was not considered because these lines are faint and placed at the edge of the CCD, where the distortion correction bent and deformed them.\\\\\n\nAll the emission line maps were reddening-corrected using the reddening coefficient map, c(H${\\beta}$) (each pointing was corrected with its own c(H${\\beta}$) map). To determine this coefficient we resorted to the H${\\alpha}$\/H${\\beta}$ line ratio. We analysed the maps of the three Balmer lines detected in our wavelength spectral range (H${\\gamma}$, H${\\beta}$, and H${\\alpha}$) and decided to discard the H${\\gamma}$ flux because the S\/N was lower than 5 in $\\sim$20$\\%$ of the spaxels. However, we checked that both derivations were consistent in the spaxels with good S\/N. We used an intrinsic Balmer emission line ratio of H${\\alpha}$\/H${\\beta}$=3.03 obtained from the public software of \\citet{1995MNRAS.272...41S}, assuming Case B recombination with an electron density of $n_{\\mathrm{e}}\\sim1000$ cm$^{-3}$ \\citep{1991A&A...244..205E} and an electron temperature of $T_{\\mathrm{e}}\\sim 7000$ K (the mean value between the estimations of \\citealt{1978ApJ...219..914B} ($\\sim$7500 K) and \\citealt{1991A&A...244..205E} ($\\sim$6000 K)). Statistical frequency distributions of the reddening coefficient were also created for the two maps, taking the mean error ($\\sim$0.1) as the bin size. 
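The conversion from the observed Balmer decrement to c(H${\beta}$) described above follows from the standard reddening relation. The sketch below assumes the intrinsic ratio of 3.03 adopted in the text; the reddening-curve value f(H${\alpha}$) $\approx$ -0.33 (relative to H${\beta}$) and the function name are illustrative assumptions that depend on the adopted extinction law, not the routine actually used.

```python
import math

def c_hbeta(obs_ratio, intrinsic_ratio=3.03, f_halpha=-0.33):
    """Logarithmic extinction at Hbeta from the Balmer decrement.

    Assumes (Halpha/Hbeta)_obs = (Halpha/Hbeta)_int * 10**(-c * f(Halpha)),
    with f(Hbeta) = 0 by definition and f(Halpha) ~ -0.33 for a
    Cardelli-type curve (assumed value; it varies with the adopted law).
    """
    return math.log10(obs_ratio / intrinsic_ratio) / (-f_halpha)

# An observed decrement of ~12.8 gives c(Hbeta) ~ 1.9, close to the most
# probable value found for the central pointing.
c = c_hbeta(12.8)
```

An unreddened spectrum (observed ratio equal to the intrinsic 3.03) returns c(H${\beta}$)=0, as expected.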
Figure \\ref{fig:reddening} shows the spatial distribution of the two derived c(H${\\beta}$) maps and their corresponding histograms.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{chbeta_paper.pdf}\n\\caption{Spatial structure of the derived c(H${\\beta}$) maps and their corresponding statistical frequency distributions with a binning of 0.1. On the left the edge pointing and on the right the central one. Orientation and sizes of the maps are as in Fig. \\ref{fig:morphology_all}.}\n\\label{fig:reddening}\n\\end{figure}\n\nThe structure of the reddening map of the central region is mostly uniform, with values ranging from 1.3 to 2.5 and a mean value of $\\sim$1.85$\\pm$0.10; the histogram reveals that the most probable value of c(H${\\beta}$) in this zone is 1.90. Isolated pixels with very high or very low values have large errors in the coefficient estimation. Large holes in the map correspond to the masked pixels.\n\nThe derived reddening coefficient map of the edge pointing has a less homogeneous structure, with a c(H${\\beta}$) mean value of $\\sim$2.11$\\pm$0.08 over the 1.7-2.8 range. It is interesting to note that all the pixels with c(H${\\beta}$)$>$2.5 lie in the NW area, where the discontinuity of the H${\\alpha}$ image is observed (Fig. \\ref{fig:morphology_all}). In this region, pixels with c(H${\\beta}$)$>$2.5 were inspected individually; after checking the S\/N of the Balmer lines and the c(H${\\beta}$) errors, we decided not to mask them and to pay special attention to the rest of the properties derived there. We study this region in detail in the 1D analysis (Sect. \\ref{1d}). The statistical frequency distribution of the reddening coefficient for the edge pointing gives 2.0 as the most probable value. If we exclude values higher than 2.5, the distribution can be fitted by a Gaussian function. 
\n\nTo compare our results with the literature, we estimated the extinction as $A_{\\mathrm{v}}=2.145 \\times c(H{\\beta})$, using the \\citet{1989ApJ...345..245C} extinction law with $R_{\\mathrm{v}}=3.1$ and the colour excess as $E(B-V)=0.692 \\times c(H{\\beta})$. The mean values derived were $A_{\\mathrm{v}}$=3.9 and E(B-V)=1.3 for the central pointing, and $A_{\\mathrm{v}}$=4.5 and E(B-V)=1.5 for the edge. The reddening coefficients derived from our data are higher than those estimated by \\citet{1991A&A...244..205E} ($\\sim$1.35), whereas the $A_{\\mathrm{v}}\\sim3.8$ obtained by \\citet{1981ApJ...249..586C} and $E(B-V)\\sim1.35$ from \\citet{1975ApL....16..165C} agree with our values.\\\\\n\nThe electron density (n$_{\\mathrm{e}}$) maps were produced from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 ratios using the IRAF package TEMDEN based on a five-level statistical equilibrium model \\citep{1987JRASC..81..195D,1995PASP..107..896S}. Using these maps, we created statistical frequency distribution of the electron density with a binning of 100~cm$^{-3}$ (low density limit). They are shown with the derived n$_{\\mathrm{e}}$ maps in Fig. \\ref{fig:density}. In general terms, the values of n$_{\\mathrm{e}}$ presented in these maps are in good agreement with values reported in the literature. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{density_paper.pdf}\n\\caption{Electron density, n$_{\\mathrm{e}}$, maps derived from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 line ratios in units of cm$^{-3}$. Orientations and sizes as in Fig. \\ref{fig:morphology_all}. On the bottom the statistical frequency distributions with a binning of 100~cm$^{-3}$. The edge pointing is on the left, and the central on the right. The white lines across the maps (from NE to SW) represent the direction along which the cuts were extracted to study the radial variation of n$_{\\mathrm{e}}$. 
See text for details.}\n\\label{fig:density}\n\\end{figure}\n\nThe histogram for the central pointing shows pixels distributed over a wide range of densities, from $\\sim$200 to $\\sim$3000~cm$^{-3}$, with 1000~cm$^{-3}$ as the most probable n$_{\\mathrm{e}}$. The mean value of the distribution is 1008~cm$^{-3}$. Some isolated pixels appear very intense in the image, with densities as high as 3000~cm$^{-3}$, but with large errors. The density distribution follows the low-ionization emission elements, supporting the idea of a bipolar structure. It is interesting to note that the knots with the highest surface brightness correspond to the densest zones.\n\nThe histogram for the edge pointing shows a distribution close to a Gaussian centred on 500~cm$^{-3}$. The map ranges from 100~cm$^{-3}$ to 1000~cm$^{-3}$ with a mean value of 507~cm$^{-3}$. As in the other region, a few pixels (7) show higher densities, and we removed them from our estimations. The majority of the pixels with a high reddening coefficient (c(H${\\beta}$)$>$2.5) were rejected by the S\/N mask for the sulphur line, but the unmasked pixels present a mean density of 613~cm$^{-3}$.\n\nThe morphological analyses showed that the bright knots are aligned along a preferred NE-SW axis with a bipolar structure (see Fig. \\ref{fig:morphology_all}); to check that the electron density is related to the bipolarity, we performed a cut in the density maps along this direction (see the cuts in Fig. \\ref{fig:density}); the density profiles obtained are presented in Fig. \\ref{fig:dens_rad}. We performed four fits using the least-squares method: the first from the star towards the SW, the second from the star towards the NE including pixels from the two pointings (centre and edge), and the last two from the star towards the NE, but differentiating the two pointings (see Fig. \\ref{fig:dens_rad}). 
It can be seen that the density decreases as we move away from the WR star. In addition, the fits show a symmetric gradient in the central points with a tendency to flatten out towards the ends.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{neVSr.pdf}\n\\caption{Radial variation in electron density (in cm$^{-3}$) with distance (in pc) along the direction of bipolarity (from NE to SW). We consider negative radii from the star towards the NE and positive from the star towards the SW. Lines indicate least-squares fits: solid lines correspond to fits differentiating the two pointings, and the dashed line represents the fit along the star-NE direction including pixels from both pointings.}\n\\label{fig:dens_rad}\n\\end{figure}\n\n\n\n\\subsection{Emission line relations \\label{diagdiag}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{ratio_SH.pdf}\n\\includegraphics[width=9cm]{ratio_NS.pdf}\n\\includegraphics[width=9cm]{ratio_NH.pdf}\n\\caption{Derived maps of the emission line ratios of the two pointings: edge (left) and centre (right). Top: [S{\\sc ii}]$\\lambda \\lambda$6717,6731\/H${\\alpha}$. Middle: [N{\\sc ii}]$\\lambda $6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731. Bottom: [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$. Orientations and sizes as in Fig. \\ref{fig:morphology_all}.}\n\\label{fig:ratio_maps}\n\\end{figure}\n\nFigure \\ref{fig:ratio_maps} shows maps of the emission line ratios for the two pointings. Their mean values are summarized in Table \\ref{table:line_ratios}. All the intensities presented in the table and figure are reddening-corrected. \n\nIn both regions the [S{\\sc ii}]$\\lambda\\lambda$6717,6731\/H${\\alpha}$ map presents an inhomogeneous and patchy structure. [S{\\sc ii}] lines are fainter than H${\\alpha}$ in all the spaxels, with a maximum logarithmic ratio of -1.3 to the north of the edge pointing. 
Some isolated pixels show higher values, but they lie at the limits of the masked region and are therefore unreliable. \n\nThe distribution of the [N{\\sc ii}]$\\lambda $6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731 map presents a structure opposite to that of [S{\\sc ii}]\/H${\\alpha}$ in both regions. The [N{\\sc ii}] emission is stronger than [S{\\sc ii}], reaching $\\log$([N{\\sc ii}]\/[S{\\sc ii}])=1.4 in areas close to the ISM.\n\nStudying the [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$ maps led to more interesting results. The central pointing shows positive values, except in some regions in the direction of the bipolarity, where the [N{\\sc ii}] and H${\\alpha}$ fluxes are equal. In the edge pointing, regions with different ratios are clearly separated. In most of the pixels, [N{\\sc ii}]$\\ge$H${\\alpha}$, with the ratio increasing towards the outer edge. To the north, an area can be seen where H${\\alpha}$ $>$[N{\\sc ii}]; this region possesses the highest derived c(H${\\beta})$, and it was masked in the sulphur maps because of its low S\/N ($<$5). The NW area has the highest ratio, possibly produced by contamination from a nearby field star.\\\\\n\n\\begin{table}[!h]\n\\caption{Mean values of the emission line ratio maps.}\n\\label{table:line_ratios} \n\\centering \n\\begin{tabular}{l c c}\n\\hline\nLine ratios & Edge & Centre \\\\\n\\hline\n\\hline \\\\\n$\\log$([S{\\sc ii}]$\\lambda\\lambda$6717,6731\/H${\\alpha}$)& -1.03 & -1.01 \\\\\n$\\log$([N{\\sc ii}]$\\lambda$6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731)& 1.15 & 1.07 \\\\\n$\\log$([N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$)& 0.07 & 0.06 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nTo understand the differences in the [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$ ratio in the edge pointing, we complemented the study by generating statistical frequency distributions of the ratio map and plotting all the spaxels from the emission line maps in the [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ diagram (Fig. 
\\ref{fig:NHa}). The [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ diagram shown in Fig. \\ref{fig:NHa}a presents a double behaviour. We considered two lines with unity slope as upper and lower limits. Points above the upper line are pixels where the [N{\\sc ii}] emission is stronger than H${\\alpha}$, while those below the lower line show the opposite behaviour. Points between these two lines are pixels with $\\log$(H${\\alpha}$)=$\\log$([N{\\sc ii}]) $\\pm$ 0.05. Then, taking these limits into account, we located the points of the diagram in the PPAK FoV to identify their spatial positions (see Fig. \\ref{fig:NHa}b); they appear spatially grouped. The statistical frequency distribution of the [N{\\sc ii}]\/H${\\alpha}$ map shows a bimodal distribution, as we can see in Fig. \\ref{fig:NHa}c. When we identified the spaxels of the three regions defined above, we found that the left peak (centred at $\\sim$-0.3) includes all the points below the lower limit, and the right peak (centred at $\\sim$0.1) includes the points of the two other zones. We can conclude that at least two spatial regions exist in this pointing: one with [N{\\sc ii}]$\\geq$H${\\alpha}$ to the SW and another one to the north with [N{\\sc ii}]$<$H${\\alpha}$. All the pixels with c(H${\\beta}$)$>$2.5 are included in the second region, along with the spectra with very low S\/N in the sulphur lines.\\\\\n\nFor the central pointing, the same analysis shows that [N{\\sc ii}] follows H${\\alpha}$ for all the points, following a one-to-one relation with unity slope. Relations between the other lines were also studied by means of two diagrams ([N{\\sc ii}]$\\lambda$6584 vs. [S{\\sc ii}]$\\lambda\\lambda$6717,6731 and [S{\\sc ii}]$\\lambda\\lambda$6717,6731 vs. H${\\alpha}$), showing strong correlations in both pointings. 
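The three zones of the [N{\sc ii}]-H${\alpha}$ diagram described above amount to a simple per-spaxel classification by the logarithmic flux difference. A minimal sketch, with an illustrative function name and the $\pm$0.05 dex band from the text (not the actual analysis code):

```python
import math

def classify_pixel(f_nii, f_halpha, tol=0.05):
    """Place a spaxel in one of the three zones of the [N II]-Halpha diagram.

    Returns +1 when log([N II]6584) exceeds log(Halpha) by more than tol dex,
    -1 when Halpha dominates by the same margin, and 0 inside the
    +/- tol dex band around the one-to-one line.
    """
    diff = math.log10(f_nii) - math.log10(f_halpha)
    if diff > tol:
        return 1
    if diff < -tol:
        return -1
    return 0

# [N II] twice Halpha, Halpha twice [N II], and near-equal fluxes:
labels = [classify_pixel(n, h) for n, h in
          [(2.0e-16, 1.0e-16), (1.0e-16, 2.0e-16), (5.0e-16, 5.1e-16)]]
# -> [1, -1, 0]
```

Mapping the three labels back onto the FoV reproduces the kind of spatial grouping shown in the middle panel of the figure.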
The statistical frequency distributions of all the emission line ratios showed single peaks with distributions close to Gaussian functions, except for [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$ on the edge.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{NHa_paper.pdf}\n\\caption{Relations between [N{\\sc ii}] and H${\\alpha}$ for the edge pointing. Colours help us locate points spatially: red corresponds to points with $\\log$(H${\\alpha}$)=$\\log$([N{\\sc ii}]) $\\pm$ 0.05, blue to points with $\\log$(H${\\alpha}$)$>\\log$([N{\\sc ii}]), and green to points where $\\log$(H${\\alpha}$)$<\\log$([N{\\sc ii}]). From top to bottom: (a) $\\log$([N{\\sc ii}]$\\lambda$6584) vs. $\\log$(H${\\alpha}$). All the spaxels of the intensity maps (in units of $\\log$(erg~cm$^{-2}$~s$^{-1}$)) are represented in the diagram with crosses. Black lines with unitary slope represent the limits. (b) PPAK FoV of the edge pointing with the zones defined in plot \\emph{a}. (c) Statistical frequency distributions of the $\\log$([N{\\sc ii}]\/H${\\alpha}$) map. The black solid line represents the distribution of all the spaxels, and the coloured dashed lines represent the regions defined above. See text for details.}\n\\label{fig:NHa}\n\\end{figure}\n\n\n\n\n\\subsection{The radial velocity field \\label{kinematics}}\nLimitations of the instrument resolution prevented us from carrying out an exhaustive analysis of the kinematics of M1-67. Nevertheless, the resolution was sufficient to study the distribution of the radial velocity field and to relate it to the morphology and ionization structure.\n\nUsing the central wavelength of the Gaussian fit performed on the cubes, we created radial velocity maps for the two observed regions. Two corrections were applied to the measured radial velocities. 
First, we estimated the error in the wavelength calibration by comparing the wavelength of a sky emission line with its theoretical value; we obtained a difference of -0.303~\\AA{} ($\\sim$-16~km~s$^{-1}$ for [O{\\sc i}]$\\lambda$5577\\AA{}), and this zero point was added to the measured velocities. Then, we translated the maps into the local standard of rest (LSR) and corrected for the Earth's motion, taking the coordinates and universal time of the observations into account.\n\nWith the corrected radial velocity fields of H${\\alpha}$, we scaled the measured velocities using the overlapping region of the two pointings to avoid deviations. Then, we calculated the total mean velocity, obtaining a value of 139~km~s$^{-1}$, and we adopted it as the heliocentric velocity of the nebula. This velocity is in very good agreement with the 137~km~s$^{-1}$ obtained by \\citet{1998A&A...335.1029S}. We present the relative radial velocity field of H${\\alpha}$ for the two regions mosaicked in Fig. \\ref{fig:ha_velocity}.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{vradial.pdf}\n\\caption{Relative radial velocity field derived for H${\\alpha}$ in units of km~s$^{-1}$. Zero is the global mean velocity (139~km~s$^{-1}$), see text for details. The two pointings are mosaicked. The red cross marks the position of the central star. North is up and east left (see Fig. \\ref{fig:rgb}).}\n\\label{fig:ha_velocity}\n\\end{figure}\n\nPrevious kinematic studies \\citep{1982A&A...116...54S,1998A&A...335.1029S} have found two components (one redshifted and another blueshifted), supporting the idea of a shell in expansion. With the low resolution of our data we cannot resolve both components, and the velocity field shown is dominated by the radial velocity of the brightest knots, a kind of intensity-weighted radial velocity distribution. Despite the low resolution, a study of the overall structure of both regions can be carried out. 
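The zero-point correction described above follows from the classical Doppler relation $v = c\\,\\Delta\\lambda\/\\lambda$; a minimal check that the quoted $-0.303$~\\AA{} offset at the [O{\\sc i}]$\\lambda$5577\\AA{} sky line indeed corresponds to $\\sim-16$~km~s$^{-1}$ (function and variable names are illustrative):

```python
C_KMS = 299_792.458  # speed of light in km/s

def doppler_velocity(delta_lambda, lambda_ref):
    """Radial-velocity equivalent (km/s) of a wavelength offset (same units)."""
    return C_KMS * delta_lambda / lambda_ref

# Zero-point offset of -0.303 A measured at the [O I] 5577 A sky line
v_zero = doppler_velocity(-0.303, 5577.0)
```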
The gas of the nebula seems to move faster near the WR star, its relative velocity decreasing when moving away from the centre. The velocity field reverses this trend (increasing) in the \\textquotedblleft peculiar\\textquotedblright ~zone towards the north of the edge pointing, where other properties were also found to differ from the rest of the nebula.\n\n\nFigure \\ref{fig:hist_vr} shows the statistical frequency distributions of the radial velocity maps with a binning of 5~km~s$^{-1}$. To consolidate the differences found in the [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ diagram for the edge pointing (Fig. \\ref{fig:NHa}), we represented pixels from the two regions separately. The region where [N{\\sc ii}]$\\geq$H${\\alpha}$ presents a Gaussian distribution that covers a wide range in velocity, suggesting that some regions are moving away from us and others towards us. The distribution of the regions where the H${\\alpha}$ emission is higher than [N{\\sc ii}] is narrower, and it is centred near zero velocity.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{hist_vr.pdf}\n\\caption{Statistical frequency distributions of the radial velocity of H${\\alpha}$ relative to the heliocentric velocity, with a binning of 5~km~s$^{-1}$. The central region is on the top, with all the spaxels represented. On the bottom, the edge pointing: the black solid line represents all the pixels, the short-dashed red line the region where [N{\\sc ii}]$\\geq$H${\\alpha}$, and the long-dashed blue line the regions with [N{\\sc ii}]$<$H${\\alpha}$.}\n\\label{fig:hist_vr}\n\\end{figure}\n\n\n\n\n\\section{Properties of the integrated spectra \\label{1d}} \n\nWe created 1D spectra by combining fibres to describe the integrated properties of several interesting zones. Eight integrated spectra were generated over the two pointings (see Fig. 
\\ref{fig:integratedregions}).\\\\\n\nThe regions selected over the central pointing are:\n\n-\\textit{Region 1} (R1): examining the emission of the low ionization elements shown in Fig. \\ref{fig:morphology_all}, three bright knots appear to the south of the nebula. Eight fibres over these knots were selected and combined to create a single spectrum. The offset from the central star is $\\Delta\\alpha\\sim$4.05$\\arcsec$, $\\Delta\\delta\\sim$13.5$\\arcsec$.\n\n-\\textit{Region 2} (R2): we combined three spaxels to the north of the star coinciding with another isolated knot. The offset from the central star is $\\Delta\\alpha\\sim$1.35$\\arcsec$, $\\Delta\\delta\\sim$14.85$\\arcsec$.\n\n-\\textit{Region 3} (R3): we chose those fibres placed to the east of the star, in a zone where extended emission is seen in H${\\alpha}$, taking care not to include light from any star. The offset from the central star is $\\Delta\\alpha\\sim$12.15$\\arcsec$, $\\Delta\\delta\\sim$4.05$\\arcsec$.\n\n-\\textit{Region 4} (R4): we were interested in analysing a large region in the NW, masked in the 2D analysis, where all the emission line maps showed S\/N lower than 5. The fourth integrated spectrum was created there to check whether there is emission in this area. The offset from the central star is $\\Delta\\alpha\\sim$14.85$\\arcsec$, $\\Delta\\delta\\sim$14.85$\\arcsec$.\\\\\n\nThe regions selected over the edge pointing are:\n\n-\\textit{Region 5} (R5): nine fibres were selected in the south of the edge pointing, close to the discontinuity. This spectrum belongs to the region shown in Fig. \\ref{fig:NHa}b where [N{\\sc ii}] is stronger than H${\\alpha}$. The offset from the central star is $\\Delta\\alpha\\sim$31.05$\\arcsec$, $\\Delta\\delta\\sim$16.2$\\arcsec$.\n\n-\\textit{Region 6} (R6): we combined several spaxels at the SW limit of the FoV to check that [N{\\sc ii}]$\\sim$H${\\alpha}$ in this area, as we found in Sect. \\ref{diagdiag}. 
The offset from the central star is $\\Delta\\alpha\\sim$14.85$\\arcsec$, $\\Delta\\delta\\sim$20.25$\\arcsec$.\n\n-\\textit{Region 7} (R7): in a faint region to the north of the edge pointing where interesting properties were obtained in the 2D analysis: some pixels show c(H${\\beta}$)$>$2.5, the S\/N of the sulphur lines is very low (so they were masked in several maps), the [N{\\sc ii}]\/H${\\alpha}$ ratio has its minimum values, and the kinematic study revealed that here the radial velocity increases, contrary to the general trend. Seven spaxels were combined in this region to analyse the properties in detail. The offset from the central star is $\\Delta\\alpha\\sim$27$\\arcsec$, $\\Delta\\delta\\sim$40.5$\\arcsec$.\n\n-\\textit{Region 8} (R8): six fibres were selected on the left of the discontinuity to check whether this region has nebular emission. The offset from the central star is $\\Delta\\alpha\\sim$47.25$\\arcsec$, $\\Delta\\delta\\sim$52.65$\\arcsec$.\\\\\n\n \n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{integrated_regions.pdf}\n\\caption{H${\\alpha}$ images of the two areas of M1-67 observed with PPAK. Boxes represent the eight regions where the integrated spectra were generated. For the offsets of each region from the central star (green cross), see the text. Orientations and sizes are as in Fig. \\ref{fig:morphology_all}. Edge on the left and centre on the right.}\n\\label{fig:integratedregions}\n\\end{figure}\n\nIn addition, another three integrated spectra were extracted to perform several tests. From the central spaxel of the central pointing FoV, we obtained the spectrum of WR124 (\\textit{Region WR}). The other two were extracted at $\\sim$15\\arcsec\\ to the NE of the star (a region common to both pointings) to study the zone where \\citet{1981ApJ...249..586C} found emission in [O{\\sc iii}]$\\lambda$5007\\AA{} (\\textit{Regions S1 and S2} in the central and edge pointings, respectively). 
Figure \\ref{fig:integratedspectra} shows six representative 1D spectra from the 11 created. \\\\\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{integrated_all.pdf}\n\\caption{Examples of integrated spectra. From left to right and top to bottom: (a) Whole spectrum of \\textit{Region 3}. (b) Spectrum of \\textit{Region 5} in the range of H${\\alpha}$. (c) \\textit{Region 7}, in the same range as \\textit{b}, where the absence of sulphur lines can be seen. (d) Spectrum of \\textit{Region 5} centred on the [N{\\sc ii}]$\\lambda$5755\\AA{} emission line, used to calculate the electron temperature. (e) Zoom over the spectrum of \\textit{Region S2}, without any emission in the [O{\\sc iii}]$\\lambda$5007\\AA{} emission line. (f) Whole spectrum of WR124 obtained from the central spaxel. }\n\\label{fig:integratedspectra}\n\\end{figure*}\n\n\nFluxes of the main emission lines were measured by fitting Gaussian functions using SPLOT within IRAF. All the measured fluxes are in units of erg~cm$^{-2}$~s$^{-1}$ per fibre (area of fibre $\\sim$5.7~arcsec$^2$). Statistical errors were estimated using the formula presented in \\citet{2003MNRAS.346..105P}:\n\\begin{equation}\n\\sigma_{\\mathrm{1}}=\\sigma_{\\mathrm{c}} N^{1\/2} [1+EW\/(N\\Delta)]^{1\/2}\n\\end{equation}\nwhere $\\sigma_{\\mathrm{1}}$ represents the error in the observed line flux, $N$ is the number of pixels used to measure the line, $EW$ the line equivalent width, $\\sigma_{\\mathrm{c}}$ the standard deviation of the continuum close to the line of interest, and $\\Delta$ the dispersion in \\AA{}\/pix.\\\\\n\nWe derived the reddening coefficient, c(H${\\beta}$), from the H${\\alpha}$\/H${\\beta}$ and H${\\gamma}$\/H${\\beta}$ line ratios using the procedure described in Sect. \\ref{maps}. In \\textit{Region 7}, H${\\gamma}$ was measured with low S\/N, and only the other two Balmer lines were used to estimate c(H${\\beta}$). 
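The flux-error formula above translates directly into code; the numerical inputs in the sketch below are illustrative, not measured values from our spectra:

```python
import math

def line_flux_error(sigma_c, n_pix, ew, dispersion):
    """Statistical error of an emission-line flux (Perez-Montero & Diaz 2003).

    sigma_c    -- standard deviation of the continuum near the line
    n_pix      -- number of pixels used to measure the line
    ew         -- equivalent width of the line (Angstrom)
    dispersion -- spectral dispersion (Angstrom/pixel)
    """
    return sigma_c * math.sqrt(n_pix) * math.sqrt(1.0 + ew / (n_pix * dispersion))
```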
The reddening coefficients agree with the values obtained in the 2D study. Table \\ref{table:all_lines} lists the reddening-corrected fluxes of the emission lines measured in every zone, labelled with their standard identification. The third column reports the adopted reddening curve, using the extinction law of \\citet{1989ApJ...345..245C} with $R_{\\mathrm{V}}=3.1$. Errors in the emission line intensities were derived by propagating the observational errors in the fluxes and the uncertainties of the reddening constant. The estimated fluxes and errors were normalized to $F(H{\\beta})=100$. The values obtained for c(H${\\beta}$) are also presented in the last row of Table \\ref{table:all_lines}. \\\\\n\nFive integrated spectra deserve special attention. First, the R4 spectrum, created in the NW dark area of the central pointing, only showed three emission lines: H${\\alpha}$, [N{\\sc ii}]$\\lambda$6548\\AA{}, and [N{\\sc ii}]$\\lambda$6584\\AA{}. We deduce that, in areas outside the bipolar structure, a faint but non-negligible emission exists, coming from the nebular gas rather than the ISM. We estimated the H${\\beta}$ flux by means of the reddening coefficient: assuming c(H${\\beta}$)=1.87 (the mean value of the other integrated spectra of this pointing), we performed the inverse process of the extinction correction and obtained F(H${\\beta}$)=3.21$\\times$10$^{-16}$~erg~cm$^{-2}$~s$^{-1}$. \n\nThe spectrum of R8 does not show any emission, so physical and chemical properties could not be estimated there. We did not include this region in the tables. We extracted a spectrum of the WR star (\\textit{Region WR}), shown in Fig. \\ref{fig:integratedspectra}f, but we do not perform a detailed analysis of it here. Finally, the study performed over \\textit{Regions S1 and S2} revealed typical nebular spectra (very similar to R4), but we did not find any emission of the [O{\\sc iii}]$\\lambda$5007\\AA{} line, as seen in Fig. 
\\ref{fig:integratedspectra}e.\n\n\n\n\n\\subsection{Physical properties and chemical abundances \\label{prop_and_ab}}\nElectron density (n$_{\\mathrm{e}}$) was calculated from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 line ratio using the IRAF package TEMDEN. The derived density ranges from $\\sim$1500~cm$^{-3}$ near the star to $\\sim$650~cm$^{-3}$ towards the edge. These values are consistent with our 2D maps and with previous studies \\citep{1991A&A...244..205E,1998A&A...335.1029S}. \\\\\n\nElectron temperature, T$_{\\mathrm{e}}$, can be derived using the line ratio R$_{\\mathrm{N2}}$: \n\\begin{equation}\nR_{\\mathrm{N2}}={I([\\mathrm{N \\textsc{ii}}]\\lambda 6548)+I([\\mathrm{N \\textsc{ii}}]\\lambda 6584) \\over I([\\mathrm{N \\textsc{ii}}]\\lambda 5755)}.\n\\end{equation}\nThe [N{\\sc ii}]$\\lambda$5755\\AA{} auroral line, which appears close to the \\textquotedblleft sky\\textquotedblright\\, line Hg{\\sc i} 5770\\AA{}, was detected in two zones (R5 and R6). We measured this line again in the spectra before sky subtraction and concluded that the flux of the [N{\\sc ii}]$\\lambda$5755\\AA{} line in R6 is contaminated by the Hg{\\sc i} emission line and thus is not reliable. We obtained a direct estimate of T$_{\\mathrm{e}}$([N{\\sc ii}]) from R$_{\\mathrm{N2}}$ only for R5.\\\\\n\nTo reinforce the validity of the chemical abundance estimations and to provide ionization correction factors (ICFs) for those species whose ionization stages were not all observed in the optical spectrum, we performed photoionization models of R5. To do so, we used the code CLOUDY v.10 \\citep{1998PASP..110..761F}, assuming a central ionizing source from a WR star atmosphere \\citep{2002MNRAS.337.1309S} with $Z = 0.008$ and a stellar effective temperature of 45~000 K, which are, respectively, the closest values to the measured total metallicity of the gas and the estimated temperature of WR124 \\citep{2006A&A...457.1015H}. 
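The ratio R$_{\\mathrm{N2}}$ defined above is a simple function of the three de-reddened [N{\\sc ii}] intensities; a minimal sketch (the intensities below are arbitrary illustrative numbers, and the conversion of R$_{\\mathrm{N2}}$ to T$_{\\mathrm{e}}$ is then performed with TEMDEN, as in the text):

```python
def r_n2(i_6548, i_6584, i_5755):
    """Auroral-to-nebular [N II] line ratio used to derive the electron
    temperature; inputs are de-reddened line intensities."""
    return (i_6548 + i_6584) / i_5755

# Illustrative intensities on the F(Hbeta)=100 scale (not measured values)
ratio = r_n2(100.0, 300.0, 4.0)
```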
\n\nWe considered a spherical geometry, placing the gas at a distance of 1~pc from the star, and assumed a constant density of 700~cm$^{-3}$, a value similar to the one derived from the [S{\\sc ii}] emission lines. The model that best fits the emission line intensities of [O{\\sc ii}], [O{\\sc iii}], He{\\sc i}, [N{\\sc ii}], and [S{\\sc ii}] in R5 was obtained by varying the ionization parameter (U) and the relative chemical abundances of He, O, N, and S. The emission lines from this model are listed in Table \\ref{table:all_lines}, while the derived physical properties and the ionic and total chemical abundances are listed in Table \\ref{table:paramyabun}. The ICFs obtained were ICF(N$^{+}$)=1.21 and ICF(S$^{+}$)=1.58. Regarding the resulting geometry, the final radius is 1.22~pc, which is of the same order of magnitude as the apparent size of the nebula in the images. \\\\\n\nTo estimate chemical abundances, the electron density and electron temperature are required. We used T$_{\\mathrm{e}}$([N{\\sc ii}]) as the representative temperature of the low ionization ions, S$^{+}$, N$^{+}$, and O$^{+}$, and T$_{e}$([O{\\sc iii}]) for deriving the O$^{2+}$ and He$^{+}$ abundances. In those zones where the electron temperature was not calculated, we adopted the value of R5. In previous studies, T$_{\\mathrm{e}}$([N{\\sc ii}]) ranges from 5900~K \\citep{1998A&A...335.1029S} to 8000~K \\citep{1981ApJ...249..586C}; the assumption of T$_{\\mathrm{e}}$=8200~K may therefore lead to our abundances being underestimated. Since the photoionization model predicts T$_{\\mathrm{e}}$([N{\\sc ii}])$\\sim$8550~K and T$_{e}$([O{\\sc iii}])$\\sim$8330~K in R5, we considered T$_{\\mathrm{e}}$([N{\\sc ii}])$\\sim$T$_{e}$([O{\\sc iii}]) in the estimations. To infer abundances in R4 and R7, where sulphur lines were not measured, we adopted the electron density of R5, n$_{e}$=631~cm$^{-3}$. We checked that variations in density do not affect this estimation. 
\\\n\nIonic abundances were derived from the forbidden-to-hydrogen emission line ratios using the functional forms given by \\citet{2008MNRAS.383..209H}, which are based on the IRAF package IONIC. We used the equations from \\citet{2004ApJ...617...29O} to obtain the singly ionized helium abundance. To determine the total O\/H abundance we added the two ionic abundances (O\/H~$\\sim$ O$^{+}$\/H$^{+}$ + O$^{2+}$\/H$^{+}$). The total N\/H and S\/H abundances were inferred using the ICFs obtained in the photoionization model, X\/H$\\sim$(X$^{+}$\/H) $\\times$ ICF(X$^{+}$). In the case of helium, we used the relation between X(O$^{2+}$)=O$^{2+}$\/(O$^{2+}$+O$^{+}$) and ICF(He$^{+}$+He$^{++}$) from \\citet[Fig. 7]{2007ApJ...662...15I} and deduced that ICF(y$^{+}$)$\\gg$1. Since our helium measurements are uncertain, we do not venture to estimate the total helium abundances.\\\\\n\nIn R5 all the useful emission lines were measured and the abundances determined as explained above. In the rest of the regions, we did not measure all the lines necessary to calculate abundances, so we resorted to the empirical parameter N2S2 \\citep{2009MNRAS.398..949P} to estimate N\/O from the nitrogen and sulphur emission lines:\n\n\\begin{equation}\n\\log(N\/O)=1.26\\times N2S2 - 0.86\n\\end{equation}\nwhere\n\n\\begin{equation}\nN2S2=\\log \\left ({I([\\mathrm{N \\textsc{ii}}]\\lambda 6584) \\over I([\\mathrm{S\\textsc{ii}}] \\lambda\\lambda 6717,6731)} \\right).\n\\end{equation}\n\nBeforehand, we estimated N\/O in R5 with the N2S2 parameter and checked that the result was in good agreement with the value obtained with the direct method. In Table \\ref{table:paramyabun} we present all the ionic and total abundances, with their corresponding errors, derived for the integrated spectra. We discuss the results in Sect. 
\\ref{chemical}.\n\n\n\n\n\\section{Infrared study \\label{ir}}\nTo enhance the morphological and chemical analysis, a study in the mid-infrared was performed. We obtained IRS (Infrared Spectrograph, \\citealt{2004ApJS..154...18H}) data in mapping mode and the MIPS (Multiband Imaging Photometer, \\citealt{2004ApJS..154...25R}) 24$\\,\\mu$m image from the Spitzer Heritage Archive (SHA)\\footnote{Website: sha.ipac.caltech.edu\/applications\/Spitzer\/SHA}. M1-67 was previously studied in the infrared range by \\citet{1985A&A...145L..13V}, who presented the energy distribution of the central star WR\\,124 and flux densities, finding thermal emission of dust at T$_{\\mathrm{c}}\\sim$100~K.\\\\\n\nFigure \\ref{fig:spitz_24micr} shows the MIPS 24$\\mu$m image of M1-67. This image has already been presented by \\cite{2010MNRAS.405.1047G}. In a nebula, the 24$\\mu$m emission can mainly arise from two sources: the [O{\\sc iv}]25.90$\\mu$m line from highly ionized gas, or warm dust. Since M1-67 presents a low degree of ionization, we deduce that the observed emission shown in Fig. \\ref{fig:spitz_24micr} traces the warm dust distribution of the nebula. The emission has an elliptical shape along the NE-SW direction, in very good agreement with the bipolar axis observed in Fig. \\ref{fig:morphology_all}, thus suggesting that the structure is composed of a mixture of ionized gas and warm dust. Furthermore, an external and spherical structure can be seen extending around the ellipsoidal shell. This faint bubble is not seen in our optical images.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{spitzer24micr.pdf}\n\\caption{MIPS 24$\\mu$m image of M1-67. North is up and east left. Boxes indicate the two regions where IR spectra were obtained. Contours represent the H${\\alpha}$ emission derived from Fig. 
\\ref{fig:rgb}.}\n\\label{fig:spitz_24micr}\n\\end{figure}\n\nFor the spectroscopic observations with the low-resolution short-low (SL) and long-low (LL) modules, basic calibrated data (BCD, pipeline version 18.18) were processed and analysed with the CUBISM software \\citep{2007ApJ...656..770S}. Data were background-subtracted using averaged off-source observations and flux-calibrated. Bad pixels were removed with the automatic correction routine within CUBISM, and a datacube was assembled for each module. CUBISM allows the extraction of 1D spectra over polygonal apertures: given the different spatial coverage of the SL and LL modules, we chose two apertures (with an area of $\\sim$60~arcsec$^2$) on the outskirts of the nebula, observed by both modules. The spectra from the different modules were stitched together, ignoring the noisy region at the red end of each order. We called them Regions A and B (see Fig. \\ref{fig:spitz_24micr}). In Fig. \\ref{fig:spitz_spec} we present the spectrum obtained in Region B.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{spectrum_ir.pdf}\n\\caption{Infrared spectrum obtained in Region B. The most relevant lines are indicated.}\n\\label{fig:spitz_spec}\n\\end{figure}\n\nWe measured the most important lines by fitting Gaussian functions with IRAF. Errors were calculated as explained above (Sect. \\ref{1d}). Fluxes and their corresponding errors are presented for the two regions in Table \\ref{table:infrared}.\\\\\n\nAssuming the electron temperature of \\textit{Region 5} (T$_{e}$=8200~K) and an electron density n$_{e}$=600~cm$^{-3}$, we inferred the chemical abundances. To obtain the fluxes relative to H${\\beta}$ we used the theoretical ratio H(7-6)\/H${\\beta}$=0.0109 from \\citet{1995MNRAS.272...41S}. The ionic abundances Ne$^{+}$\/H$^{+}$, Ne$^{2+}$\/H$^{+}$, S$^{2+}$\/H$^{+}$, and S$^{3+}$\/H$^{+}$ were inferred by using the IRAF package IONIC. 
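Putting an infrared line on the F(H${\\beta}$)=100 scale via the theoretical H(7-6)\/H${\\beta}$ ratio quoted above can be sketched as follows; the input fluxes are taken for illustration from the Region A values of [Ne{\\sc ii}] and H(7-6) in Table \\ref{table:infrared}, and the function name is our own:

```python
H76_OVER_HBETA = 0.0109  # theoretical H(7-6)/Hbeta ratio (Storey & Hummer 1995)

def ir_line_relative_to_hbeta(f_line, f_h76):
    """Express an IR line flux relative to F(Hbeta)=100 via the H(7-6) line."""
    return f_line / f_h76 * H76_OVER_HBETA * 100.0

# Illustrative numbers: [Ne II] 12.81 um and H(7-6) fluxes in Region A
neii_rel = ir_line_relative_to_hbeta(121.7, 4.9)
```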
We estimated the total neon abundance by adding the two ionic abundances, Ne\/H$\\sim$Ne$^{+}$\/H$^{+}$+Ne$^{2+}$\/H$^{+}$. For deriving the total S\/H abundance we needed to add the S$^{+}$\/H$^{+}$ from the optical spectra. To do so, we compared the regions from which the IR and the optical 1D spectra were taken. Given the proximity between Region A and R3, we approximated the total sulphur abundance in spectrum A as S\/H$\\sim$(S$^{+}$\/H$^{+}$)$_{R3}$ + (S$^{2+}$\/H$^{+}$)$_{A}$. The 1D spectrum nearest to B is R4, but in R4 we did not measure sulphur lines. Since S$^{+}$\/H$^{+}$ is similar in all the integrated spectra, we considered the mean value, (S$^{+}$\/H$^{+}$)$_{mean}$=6.17, so that the total sulphur abundance in B can be written as S\/H$\\sim$(S$^{+}$\/H$^{+}$)$_{mean}$ + (S$^{2+}$\/H$^{+}$)$_{B}$. We assumed that S$^{3+}$\/H$^{+}$ is negligible. Results are presented in Table \\ref{table:abinfrared} and discussed in Sect. \\ref{chemical}.\n\n\n\\begin{table}[h!]\n\\caption{Lines measured over the two spectra studied in the infrared range. Integrated fluxes are in units of 10$^{-5}$~erg~cm$^{-2}$~s$^{-1}$. 
}\n\\label{table:infrared} \n\\centering \n\\begin{tabular}{l c c c}\n\\hline\n&&\\multicolumn{2}{c}{F($\\lambda$)} \\\\\n\\cline{3-4}\nLine & $\\lambda$~($\\mu$m) & Region A & Region B \\\\\n\\hline \\hline \\\\\n{[}S{\\sc iv}] & 10.51 & ... & 4.1 $\\pm$ 0.5 \\\\\nH(7-6) & 12.37 & 4.9 $\\pm$ 0.5 & 5.1 $\\pm$ 1.0 \\\\\n{[}Ne{\\sc ii}] & 12.81 & 121.7 $\\pm$ 3.2 & 105.3 $\\pm$ 3.5 \\\\\n{[}Ne{\\sc iii}] & 15.56 & 5.0 $\\pm$ 0.4 & 1.1 $\\pm$ 0.3 \\\\\n{[}S{\\sc iii}] & 18.71 & 133.9 $\\pm$ 5.5 & 99.2 $\\pm$ 1.9 \\\\\n{[}S{\\sc iii}] & 33.48 & 156.6 $\\pm$ 5.3 & 135.2 $\\pm$ 4.7 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[h!]\n\\caption{Ionic and total chemical abundances estimated in Regions A and B with infrared spectroscopy.}\n\\label{table:abinfrared} \n\\centering \n\\begin{tabular}{l c c}\n\\hline\n& Region A & Region B \\\\\n\\hline \\hline \\\\\n12+log(Ne$^{+}$\/H$^{+}$) & 7.56 $\\pm$ 0.04 & 7.47 $\\pm$ 0.08 \\\\\n12+log(Ne$^{2+}$\/H$^{+}$) & 5.85 $\\pm$ 0.05 & 5.18 $\\pm$ 0.16 \\\\\n12+log(S$^{2+}$(18.71$\\mu$m)\/H$^{+}$) & 6.59 $\\pm$ 0.09 & 6.44 $\\pm$ 0.17\\\\\n12+log(S$^{2+}$(33.48$\\mu$m)\/H$^{+}$) & 6.64 $\\pm$ 0.16 & 6.56 $\\pm$ 0.21\\\\\n12+log(S$^{3+}$\/H$^{+}$) & ... & 4.41 $\\pm$ 0.19 \\\\\n\\\\\n12+log(Ne\/H) & 7.57 $\\pm$ 0.04 & 7.48 $\\pm$ 0.08\\\\\n12+log(S\/H)$\\dagger$ & 6.72 $\\pm$ 0.07 & 6.63 $\\pm$ 0.11 \\\\\n\\hline\n\\end{tabular}\n \\begin{list}{}{}\n\t\t\t\\item {$\\dagger$} Assuming S$^{+}$\/H$^{+}$ from the optical spectroscopy.\\\\\n\t\t\\end{list}\n\n\\end{table}\n\n\n\n\n\\section{Discussion \\label{discussion}}\nWe included M1-67 in our IFS observational programme to provide answers to some questions that still surround this object: the degree of gas homogeneity (both kinematic and chemical), the stellar evolutionary phase in which the gas originated, the interaction with the ISM, the influence of the stellar spectral type at the WR stage, etc. 
To do this, we put together our results from the optical (1D + 2D) and infrared analyses, and we complemented them with theoretical models of stellar evolution and previous kinematic studies of this nebula.\n\n\n\\subsection{Chemical content of M1-67 \\label{chemical}}\nThe chemical abundances derived from the 1D optical and infrared studies, presented in Tables \\ref{table:abinfrared} and \\ref{table:paramyabun}, give us relevant information on the chemical content across the nebula. To compare the derived abundances with the expected ISM values at the location of the nebula, we use the solar values from \\citet{2009ARA&A..47..481A} as our primary reference. For the sake of consistency, as reference for M1-67 we consider here gas abundances derived following the same methodology, i.e., from H{\\sc ii} region collisional emission lines. We adopted the chemical abundances of the prototypical H{\\sc ii} region M\\,42 as a reference (\\citealt{2007A&A...465..207S,2011MNRAS.412.1367T} with t$^{2}$=0, and references therein). Then, we corrected the t$^{2}$=0 abundances for the effect of the radial abundance gradient of the Milky Way \\citep{2006ApJS..162..346R} at the galactocentric radius of M1-67\\footnote{We assumed R$_{G}\\sim$10 kpc as the galactocentric distance of the representative ISM at the location of M1-67 \\citep{1992A&A...259..629E}, taking the distance from the Sun to M\\,42 into account (d$\\sim$0.414\\,kpc, \\citealt{2007A&A...474..515M}).}. We considered the constant ratio $\\log \\mathrm{(Ne\/O)}$=-0.73$\\pm$0.08 since both elements are products of the same nucleosynthesis. 
After these corrections, the expected ISM abundances to be compared with M1-67 are 12+$\\log \\mathrm{(O\/H)}\\sim$8.42$\\pm$0.03, 12+$\\log \\mathrm{(N\/H)}\\sim$7.54$\\pm$0.09, 12+$\\log \\mathrm{(S\/H)}\\sim$6.99$\\pm$0.12, and 12+$\\log \\mathrm{(Ne\/H)}\\sim$7.69$\\pm$0.09.\\\\\n\n\nFirst of all, it can be observed that our derived oxygen abundances in R5 and R6 (12+$\\log \\mathrm{(O\/H)}\\sim$7.73$\\pm$0.06 and 7.67$\\pm$0.07, respectively) are substantially lower than expected, by factors of $\\sim$10 with respect to the solar reference and $\\sim$7 with respect to the ISM. This result implies that oxygen is strongly under-abundant in the M1-67 nebula. Comparing the derived N\/H abundance with the expected ISM value, we find that nitrogen is strongly enriched in M1-67 (factor $\\ge$ 6). \n\n\nOverall, this chemical composition can be seen in all the nebular regions observed; the N\/O ratio appears extremely high owing to the combined effect of nitrogen enhancement and oxygen deficiency. This can be understood if we assume that we are seeing regions composed of material processed in the CNO cycle. This result for the N\/O abundance is consistent with previous 1D studies \\citep{1991A&A...244..205E}, but here it has been extended across the whole (2D) nebular geometry and physical conditions. The only region where the N\/H abundance is close to the expected ISM value is R7 (the region with different properties in the 2D analysis, see Sect. \\ref{2d}).\n\nWe did not estimate the total helium abundances since our helium lines are very faint and the measurements uncertain. Nonetheless, given the low upper limit of the He{\\sc i} value ($<$0.03), the absence of He{\\sc ii}, and the ICF inferred from \\citet{2007ApJ...662...15I} (ICF(y$^{+}$)$\\gg$1), we deduce that in M1-67 the largest part of the helium is unseen and in neutral form. \\\\\n\nThe analysis of the chemical abundances obtained here is reinforced by the information derived from the infrared study. 
The infrared spectrum allowed us to derive the sulphur and neon abundances for the main ionic species, Ne$^{+}$, Ne$^{++}$, S$^{++}$, and S$^{3+}$. The total neon abundance derived in M1-67 is consistent, within the errors, with the expected ISM abundance for the two apertures (Table \\ref{table:abinfrared}). The noble gas neon is not expected to undergo nucleosynthetic transformation in the stellar interior, and its abundance should be preserved.\n\nIn the case of sulphur, the derivation of the total abundance requires the contribution of the optical S$^{+}$ to be added to the ionic fractions derived from the infrared. Under this approximation, the total S\/H abundance obtained is close to, though still slightly lower than, the expected ISM value at the galactocentric distance of M1-67. Thus we cannot rule out the possibility that the nebular material could be slightly sulphur-poor: either a certain degree of depletion onto dust or a nucleosynthetic origin (or both) could be at work, as reported for some planetary nebulae \\citep{2012ApJ...749...61H}.\\\\\n\nTaking the abundance ratios into account, we can obtain clear indications of the excitation degree of the nebula. The values N$^{+}$\/N~$\\sim$1 and O$^{+}$\/O$^{++}$~$>$1 from the optical, and the derived ratios of Ne$^{+}$\/Ne$^{++}$ and S$^{++}$\/S$^{3+}$ from the IR study, point to the very low ionization degree of the gas in M1-67. The ionization parameter obtained from the photoionization model of R5, $\\log\\mathrm{(U)}=-3.84$, is fully consistent with the very low excitation observed.\\\\\n\n\nTo provide a summary of the chemical abundances obtained across the nebula in the optical and infrared ranges, we have grouped regions with similar physical and chemical properties (whenever possible). 
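Combining ionic fractions expressed as 12+log(X$^{i}$\/H$^{+}$), as done here for sulphur, amounts to summing the linear abundances; a minimal sketch, using for illustration the adopted mean (S$^{+}$\/H$^{+}$)=6.17 together with the Region B S$^{2+}$ (18.71$\\mu$m) value from Table \\ref{table:abinfrared} (the function name is our own):

```python
import math

def total_abundance(*ionic_dex):
    """Total 12+log(X/H) from ionic abundances given as 12+log(X^i/H^+)."""
    linear_sum = sum(10.0 ** (a - 12.0) for a in ionic_dex)
    return 12.0 + math.log10(linear_sum)

# e.g. S/H ~ S+/H+ + S++/H+ with the dex values quoted in the text
s_total = total_abundance(6.17, 6.44)
```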
In Table \\ref{table:summary} we show the results: $<$1,2,3$>$ represents the average of R1, R2, and R3, $<$5,6$>$ the average of R5 and R6, and $<$A,B$>$ the average of zones A and B from the IR study. In these cases the corresponding parameters were estimated as the error-weighted mean of the zones. The last two columns list the expected ISM values and the solar abundances from \\citet{2009ARA&A..47..481A}, respectively.\n\n\\begin{table*}\n\t\t \\caption{Summary of inferred properties in M1-67.} \n\t\t\\label{table:summary} \n\t\t\\centering \n\t\t\\begin{tabular}{l c c c c c c c}\n\t\t\\hline\n\t\t& $<$1,2,3$>$ & 4 & $<$5,6$>$ & 7 & $<$A,B$>$ & ISM$^{a}$ & Solar$^{b}$ \\\\\n\t\t\\hline\n\t\t\\hline\n \t\t\\\\\n\t\tc(H${\\beta}$) & 1.87 $\\pm$ 0.01 & 1.87 $\\pm$ 0.01 & 1.90 $\\pm$ 0.02 & 2.15 $\\pm$ 0.04 & ... & ... & ... \\\\\n\t\tn$_{\\mathrm{e}}$([S{\\sc ii}]) (cm$^{-3}$)& 1581 $\\pm$ 49 & ... & 677 $\\pm$ 62 & ... & ... & ... & ...\\\\\n\t\t12+log(O\/H) & ... &... & 7.70 $\\pm$ 0.03 & ... & 8.28 $\\pm$ 0.09 $^{c}$ & 8.42$\\pm$ 0.03 & 8.69 $\\pm$ 0.05\\\\\n\t\t12+log(S\/H) & 6.35 $\\pm$ 0.02 & ... & 6.40 $\\pm$ 0.02 & ... & 6.69 $\\pm$ 0.04 & 6.99$\\pm$ 0.12 & 7.12 $\\pm$ 0.03\\\\\n\t\t12+log(N\/H) & 8.13 $\\pm$ 0.01 & 8.36 $\\pm$ 0.03 & 8.21 $\\pm$ 0.03 & 7.92 $\\pm$ 0.03 & ... & 7.54 $\\pm$ 0.09 & 7.83 $\\pm$ 0.05 \\\\\n\t\t12+log(Ne\/H) & ... & ... & ... & ... & 7.55 $\\pm$ 0.04 & 7.69 $\\pm$ 0.09 & 7.93 $\\pm$ 0.10\\\\\n\t\t$\\Delta$(log(N\/H))$^{d}$ & 0.59 $\\pm$ 0.09 & 0.82 $\\pm$ 0.09 & 0.67 $\\pm$ 0.10 & 0.38 $\\pm$ 0.09 & ... & ...& ... \\\\\n\t\t$\\Delta$(log(O\/H))$^{d}$& ... & ... & -0.72 $\\pm$ 0.04 & ... & -0.14 $\\pm$ 0.09 $^{c}$ & ... & ... \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n \\begin{list}{}{}\n\t\t\t\\item {$^{a}$} Expected ISM abundances at R$_{G}\\sim$10 kpc. 
\\\\\n\t\t\t\\item {$^{b}$} Solar abundances from \\citet{2009ARA&A..47..481A}.\\\\\n\t\t\t\\item {$^{c}$} Estimated assuming $\\log \\mathrm{(Ne\/O)}$=-0.73$\\pm$0.08.\\\\\t\n\t\t\t\\item {$^{d}$} Variations with respect to the expected ISM abundance. \\\\ \n\t\t\\end{list}\n\t\t\\end{table*}\n\n\n\n\n\n\\subsection{M1-67 structure \\label{structure}}\nAlthough the first observations of M1-67 showed a nearly spherical shape, the high contrast achieved by coronographic studies in the inner regions made a bipolar symmetry clearly visible \\citep{1995IAUS..163...78N}. Owing to the field of view of our PPAK observations, we cannot detect this bipolarity; however, the narrow-band images from the INT and the interpolated maps from PPAK (see Figs. \\ref{fig:rgb} and \\ref{fig:morphology_all}) show that the bright knots are aligned along a preferred axis with \\textquotedblleft holes\\textquotedblright ~in the perpendicular direction. The integrated spectrum of R4 confirms that the emission in the holes is very faint (i.e. H${\\beta}$ was not detected). Furthermore, the MIPS image from Spitzer (Fig. \\ref{fig:spitz_24micr}) also reveals the bipolar appearance at 24$\\mu$m, suggesting that the ionized gas is mixed with warm dust. We emphasize that the knots are not only regions with high surface brightness but also very dense areas where the [N{\\sc ii}]\/H${\\alpha}$ and [N{\\sc ii}]\/[S{\\sc ii}] ratios show the maximum values.\\\\\n\nWe support the idea of a preferred axis but, in any case, is the bipolarity the footprint of an ejection from the star? Looking at the radial velocity map of Fig. \\ref{fig:ha_velocity}, we can see that the velocity decreases when we move away from the centre (except in the far NE, where the gas has peculiar properties, see below). This agrees with the studies from \\citet{1981ApJ...249..586C}, who predict a faster movement near the star, and \\citet{1998A&A...335.1029S}, who discuss the idea of a bipolar outflow. 
The spatial distribution of the electron density shows similar behaviour: the mean values of maps and integrated spectra of the central pointing are higher than at the edge ($\\sim$1500~cm$^{-3}$ and $\\sim$650~cm$^{-3}$, respectively). The electron density also decreases along the radial cut seen in Fig. \\ref{fig:dens_rad}, with a symmetric gradient in two directions (from the centre to NE and to SW) and flattening towards the edges. Both analyses lead us to think that the preferred axis is not only morphological, but is also the footprint of a mechanism that expelled material in the past; the later interaction with the ISM then diluted and decelerated the gas.\\\\\n\nLeaving aside for a moment the discussion of bipolarity, there is another striking morphological feature in this object. The IR study at 24$\\mu$m reveals a spherical bubble surrounding the bipolar structure. Kinematic studies from \\citet{1998A&A...335.1029S} show two different motions in the environment of WR124: a bipolar outflow and an external spherical hollow shell expanding into the ISM. In our narrow-band images from the INT, this bubble is not detected, possibly because the material is diluted in the ISM and its emission is very weak in the optical range. A simple sketch of the proposed structure of M1-67 is presented in Figure \\ref{fig:sketch}: an inner region with bipolar or elliptical shape along the direction NE-SW surrounded by an external spherical bubble.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{sketch.pdf}\n\\caption{Sketch showing the structure of M1-67 around the central star WR124: the bipolar axis along the direction NE-SW and the spherical bubble.}\n\\label{fig:sketch}\n\\end{figure}\n\nThe study of the edge pointing region suggests that the gas in the NE possesses different properties from the gas of the bipolar outflow. 
In summary, the properties found for this region are a) the largest reddening coefficient of the nebula with c(H${\\beta})>$2.5; b) the only area where we measure [N{\\sc ii}]$<$H${\\alpha}$ and with the smallest N\/H abundance estimated, close to the solar neighbourhood value; c) an increase, rather than a decrease, in the relative radial velocity; d) absence of [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 emission lines; e) the minimum H${\\beta}$ flux measured (R7) and lack of [O{\\sc iii}]$\\lambda$5007 or helium lines. These properties are puzzling, but here we propose a possible scenario to explain the origin of this region. The nitrogen abundance of our \\textquotedblleft peculiar\\textquotedblright ~region points towards material not processed in the CNO cycle (e.g. ISM or MS bubble), while the morphology (Fig. \\ref{fig:rgb}) and kinematics (Fig. \\ref{fig:ha_velocity}) suggest that it does not belong to the bipolar ejection. When looking at the bow-shock simulations of \\citet{2003A&A...398..181V} and taking the external IR bubble into account, it is possible that the high velocity of the runaway WR124 causes a paraboloid-like bow shock around the star, sweeping up the surrounding medium, so that we are seeing the remains of this bow shock projected along our line of sight. We should bear in mind that the peculiar region is spatially close to the small reversed bow-shock-like structures found by \\citet{2001ApJ...562..753G} at the NE periphery of M1-67.\n\n\n\n\\subsection{M1-67: a consequence of the evolution of the central star WR124 \\label{evolution}}\nThe theory of evolution of massive stars can help us explain the observed structure. 
We compare the stellar parameters from the central star in M1-67 (effective temperature and luminosity from \\citealt{2006A&A...457.1015H}) with the stellar evolution models from STARS \\citep{1971MNRAS.151..351E,1995MNRAS.274..964P,2004MNRAS.353...87E}, \\citet{2003A&A...404..975M}, and the most recent models from \\citet{2012A&A...537A.146E} to estimate the initial mass of the WR star. Regardless of small discrepancies, all the models predict an initial mass for WR124 of 60-80~M$\\sun$ (J. Toal\\'a, \\textit{private communication}). The evolutionary scenario for a single massive star with 60~M$\\sun<$M$_{\\mathrm{i}}<$90~M$\\sun$ follows the sequence O-Of\/WNL$\\longleftrightarrow$LBV-WNL-WCL-SN \\citep{2011BSRSL..80..266M}. After spending a normal life as O stars on the main sequence (MS), they evolve towards cooler temperatures, becoming luminous blue variables (LBVs) \\citep{1994PASP..106.1025H}. These stars undergo extremely strong mass loss (up to 10$^{-3\\ldots -4}~$M$\\sun$~yr$^{-1}$) through winds and occasionally giant eruptions, and thus peel off parts of their stellar envelope to form small LBV nebulae (LBVN) \\citep{1995ApJ...448..788N}. LBV stars lose their mass so fast that they rapidly evolve away from the LBV stage to become WR stars. With an initial mass range of 60-80~M$\\sun$, we can infer that the central star in M1-67 experienced an LBV phase instead of a red or a yellow supergiant phase before becoming a WR star. This idea is in good agreement with previous studies of the nature of M1-67 based on different observational approaches: M1-67 is very likely the imprint of a previous LBV wind instead of a classical red super-giant (RSG) wind-blown nebula \\citep{1998ApJ...506L.127G, 2003A&A...398..181V}.\\\\\n\nThe spectral type of the central star (WN8) tells us that it is a \\textquotedblleft young\\textquotedblright ~Wolf-Rayet and that it has most likely entered the WR phase recently. 
Under this hypothesis, we propose that the WR winds have not had enough time to substantially interact with the previous nebular material and, therefore, the layers and observed features originate in stellar material ejected during the MS and\/or LBV phases. Considering a representative expansion velocity and the linear size of the nebula, we estimate that the ejection happened $\\sim$5$\\times$10$^{4}$~yr ago. This value is slightly higher than the LBV phase duration ($\\sim$1.3$\\times$10$^{4}$~yr, \\citealt{1996A&A...305..229G}), thus supporting the hypothesis that the star has recently entered the WR phase.\\\\\n\nTaking the physical sizes and morphologies from hydrodynamical simulations of a 60 M$\\sun$ star as a reference \\citep{1996A&A...305..229G}, it is possible that the external bubble of M1-67 contains material expelled during the MS phase, which is very tenuous in the optical because of the dilution with the ISM. \\citet{Castor1975} and \\citet{Weaver1977} both built models that derive analytical solutions for the dynamical evolution of shocked bubbles created by the interaction between the ISM and the stellar wind during the MS phase.\\\\ \n\nSeveral observational reasons have led us to think that the bipolar ejection (or axis of preference) is composed of material ejected during the LBV stage. First, the abundances in the knots along this axis show enrichment in nitrogen and deficiency in oxygen, a behaviour typical of CNO-processed material in phases after the MS stage\\footnote{We should bear in mind that models that include rotation \\citep{Meynet2005} and recent observations of O stars in the LMC \\citep{2012A&A...537A..79R} have revealed that some stars of the MS stage can also show CNO-processed material.}. 
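For concreteness, the kinematic age quoted above can be illustrated with representative values, assumed here only for illustration and not taken from our measurements: with a nebular radius of $R\\approx2$~pc and an expansion velocity of $v_{\\mathrm{exp}}\\approx40$~km~s$^{-1}$,\n\\begin{equation*}\nt_{\\mathrm{kin}}=\\frac{R}{v_{\\mathrm{exp}}}\\approx\\frac{2\\,\\mathrm{pc}\\times3.09\\times10^{13}\\,\\mathrm{km\\,pc^{-1}}}{40\\,\\mathrm{km\\,s^{-1}}\\times3.16\\times10^{7}\\,\\mathrm{s\\,yr^{-1}}}\\approx5\\times10^{4}\\,\\mathrm{yr}.\n\\end{equation*}\n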
It is common in observations of LBVN to find very intense [N{\\sc ii}] emission and absence of [O{\\sc iii}] \\citep{1995ApJ...448..788N,1998ApJ...503..278S,2002A&A...393..503W}, this being also indicative of a low effective temperature and low degree of excitation. Furthermore, many of these nebulae show clumpy radial structures (not multiple shells) and morphologies with preferred axes \\citep{1993ApJ...410L..35C}. The presence of a bipolar ejection in M1-67 enhances the similarity of the nebula to other LBVN, which almost all display some degree of bipolarity \\citep{1995ApJ...448..788N, 2001RvMA...14..261W}. In short, M1-67 shows the general properties of LBV nebulae: linear size, total ionized gas, velocity field, IR emission, chemical abundances, line intensities, and dynamical characteristics; this clearly points to an LBV progenitor \\citep[among others]{1995ApJ...448..788N, 2001ApJ...551..764L,2011BSRSL..80..440W}.\n\nThe idea of M1-67 being made up of material ejected during the LBV stage was suggested in the past by \\citet{1998A&A...335.1029S} based on the total mass of ionized gas, the expansion velocity, and the linear size of the nebula. \\citet{1998ApJ...506L.127G} also explain the clumpy appearance of M1-67 by assuming the interaction of winds in a previous LBV phase.\\\\\n\nOur study depicts M1-67 as a nebula with two regions: an external spherical bubble with material likely produced during the MS and an inner nearly elliptical region along the NE-SW direction produced by an ejection in the LBV phase. We are observing a WR nebula with an LBVN appearance.\\\\\n\n\n\n\n\\section{Summary and conclusions \\label{conclusions}}\n\\renewcommand {\\labelenumi} {\\arabic {enumi}$)$}\n\\renewcommand {\\labelenumii} {$\\bullet$}\nIn this work, we have presented the first integral field spectroscopy study of the ring nebula M1-67 around the Wolf-Rayet star WR124 in the optical range with PPAK. 
Two regions of the nebula were observed and analysed by means of 2D and 1D studies. We also obtained and analysed IR spectroscopic data and the MIPS 24$\\mu$m image of M1-67 from Spitzer. In the following, we present the main results derived from this work.\n\n\\begin{enumerate}\n\\item We obtained maps of the emission lines that allowed us to perform a detailed study of the 2D structure of the nebula:\n\\begin{enumerate}\n\\item Interpolated maps from the main emission lines show a clumpy structure with bright knots aligned along a preferred axis in the NE-SW direction. The [O{\\sc iii}]$\\lambda$5007\\AA{} emission is absent over the whole nebula.\n\\item The spatial distribution of the reddening coefficient, c(H${\\beta}$), presents slight variations between the two pointings. In the central region c(H${\\beta}$) ranges from 1.3 to 2.5 with a mean value of $\\sim$1.85, while in the edge pointing the mean is 2.11, ranging from 1.7 to 2.8.\n\\item Electron density maps, n$_{e}$, derived from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 ratio, show a non-uniform structure. Knots with higher surface brightness in H${\\alpha}$ possess the highest densities. We also find that the density decreases with increasing distance from the star, showing a symmetric gradient.\n\\item We analysed the ionization structure by means of line-ratio maps. In particular, the [N{\\sc ii}]\/H${\\alpha}$ map of the edge pointing field reveals two behaviours, thus defining two spatially well-delimited regions: one in the NE with [N{\\sc ii}]$<$H${\\alpha}$ and the second one with [N{\\sc ii}]$\\ge$H${\\alpha}$.\n\\item With radial velocity maps we studied the kinematics of the nebula. The derived heliocentric velocity for M1-67 is $\\sim$139~km~s$^{-1}$, in agreement with previous results. The relative radial velocity seems to decrease with distance from the central star along the preferred axis. 
\\\\\n\\end{enumerate}\n\n\\item We derived the physical parameters and chemical abundances of M1-67 using the integrated spectra of eight regions:\n\\begin{enumerate}\n\\item The electron densities inferred in the central region are higher than at the edge ($\\sim$1500~cm$^{-3}$ and $\\sim$650~cm$^{-3}$, respectively). This result agrees with the radial variations of the 2D study.\n\\item We derived an electron temperature of $\\sim$8200~K in R5 by using our measurement of the [N{\\sc ii}]$\\lambda$5755\\AA{} emission line.\n\\item The chemical abundances show, in all the studied areas, an enrichment in nitrogen and a deficiency in oxygen. The nitrogen enhancement in each region is different, suggesting an inhomogeneous chemical enrichment.\\\\\n\\end{enumerate}\n\n\\item The 24$\\mu$m image reveals an inner bipolar-like structure in the NE-SW direction and an outer faint spherical bubble interacting with the surrounding ISM. From the low-resolution mid-IR spectroscopic data, we measured the main emission lines and estimated ionic and total chemical abundances, verifying the low ionization degree of the gas. \\\\\n\n\\item Overall, this study revealed the clumpy structure of M1-67, with knots aligned along a preferred axis and with \\textquotedblleft holes\\textquotedblright~ along the perpendicular direction. The gas along this bipolar axis possesses a low ionization degree, and it is well mixed with warm dust. The optical analysis of these knots revealed chemical abundances typical of material processed in the CNO cycle, suggesting that the material comes from an evolved stage of the star. The radial variations in electron density and velocity indicate that the gas of the bipolar feature was ejected by the star. \\\\\n\n\\item A region placed to the NE of the nebula shows different kinematic, chemical, and morphological properties. 
We propose that this region comprises the remains of a bow shock caused by the runaway WR124, with ISM material mixed with the MS bubble.\\\\\n\\end{enumerate}\n\nBased on our observational results and taking theoretical models from the literature into account (e.g. \\citealt{1996A&A...305..229G}), we propose a scenario in which the central star has recently entered the WR phase. This implies that the interaction of WR winds with the previous surrounding material is not visible yet. After comparing our results with stellar evolution models and taking the inferred initial mass of the star (60~M$\\sun < $ M$_{i} <$ 80~M$\\sun$) into account, we deduced that the central star experienced an LBV stage before becoming a WR. The bipolar material observed belongs to an ejection during the LBV stage, since the morphology, kinematics, and chemistry are in good agreement with previous studies of LBV nebulae.\\\\\n\n\n\n\n\n\\begin{acknowledgements}\nThis work is supported by the Spanish Ministry of Science and Innovation (MICINN) under the grant BES-2008-008120. This work has been partially funded by the projects: AYA2010-21887-C04-01 of the Spanish PNAYA, CSD2006-00070 "1st Science with GTC" of the CONSOLIDER 2010 programme of the Spanish MICINN, and TIC114 of the Junta de Andaluc\'ia. We thank J. Toal\\'a for providing estimates of the initial mass of the WR star and for useful suggestions. We are also very grateful to M. Fern\\'andez-Lorenzo, A. Monreal-Ibero, K. 
Weis, and the ESTALLIDOS collaboration for their useful comments and scientific support.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nOpen source projects play an increasingly important role in modern software development.\nFor example, several open source projects are used daily by millions of users.\nAt the same time, it is very important to continually attract more participants and contributors to these projects, in order to increase the chances of long-term success~\\cite{comino2007}.\nIn particular, several channels can be used to promote open source software, helping to keep the interest of the community and also to attract new members.\n\nIn this article, we investigate the most common channels used by developers to promote open source projects.\nWe manually inspected a large set of popular projects on GitHub, which is the world's largest collection of open source software, with around 27 million users and 77 million repositories~\\cite{githubsearch}.\nOur contributions include: (i) data about the promotion channels frequently used by popular open source projects; (ii) a comparison of the use of promotion channels by popular projects and by random ones; and (iii) an analysis of the impact of promotion on Hacker News, a popular news aggregation site, on the popularity of the studied projects.\nOur findings help practitioners to understand the importance of using promotion channels in the open source development context.\n\n\\section{Study Design}\n\\label{sec:design}\n\nTo reveal the most common promotion channels used by developers, we manually inspected the documentation of the top-100 projects with the most stars on GitHub (stars is a popular feature to manifest interest or satisfaction with GitHub projects~\\cite{icsme2016}).\nWe restricted our analysis to popular projects because they have a large number of users and therefore need better and more efficient ways to communicate with users 
and also to attract new contributors.\n\nFigure~\\ref{fig:repos-overview} shows the distribution of the number of stars of the projects considered in this study.\nThis number ranges from 291,138 stars ({\\sc \\mbox{freeCodeCamp\/freeCodeCamp}}) to 23,322 stars ({\\sc \\mbox{tiimgreen\/github-cheat-sheet}}).\nThe considered projects are primarily developed in 17 programming languages; JavaScript is the most common one (40 projects), followed by Python (9 projects) and Go (5 projects).\nFurthermore, 14 projects only include markdown files for documentation purposes (e.g., projects with tutorials, books, awesome lists, etc).\nFinally, regarding the project owners, 69 are organizational accounts and 31 are user accounts.\\medskip\n\n\\begin{figure}[!ht]\n\t\\center\n\t\\includegraphics[width=0.65\\textwidth,keepaspectratio,trim={0 2em 0 2em},clip]{images\/repos_overview.pdf}\n\t\\caption{Number of GitHub stars of the analyzed projects}\n\t\\label{fig:repos-overview}\n\\end{figure}\n\nFor each of these 100 projects, the first author of this paper initially inspected their READMEs on GitHub to identify the channels used to promote the projects and to keep the users up-to-date with important information about them.\nFor example, the following sentence is available on the README of {\\sc adobe\/brackets}: \\aspas{\\it You can see some screenshots of Brackets on the \\underline{wiki}, intro videos on \\underline{YouTube}, and news on the Brackets \\underline{blog}}.\nIn this case, wiki and YouTube are used to support users, whereas blog is a channel used to disseminate news about {\\sc Brackets}.\nThus, only blog is considered a promotion channel in our study.\nNext, we inspected the projects' websites, for those projects that have one.\nWe navigated through the site pages, searching for more channels used to promote the projects.\n\nAfter this manual inspection, the following promotion channels emerged:\n\n\\begin{itemize}\n\t\\item {\\bf Blogs}, which are used, for example, 
to publish announcements of new software versions, upcoming events, and improvements.\n\n\t\\item {\\bf Events and Users Meetings:} Organizing events and supporting users meetings are other strategies commonly followed to promote projects. For events, the initiative usually comes from the development team or from the organization that supports the project, whereas for users meetings the initiative comes from the users themselves, usually from a specific region or country. We rely on Meetup (\\url{https:\/\/meetup.com}) to discover users meetings.\n\n\t\\item {\\bf Twitter, Facebook, and Google+}, which are also used to connect the projects to users. We considered only official accounts, which are explicitly advertised on the project documentation or are verified by the social network (e.g., \\url{https:\/\/support.twitter.com\/articles\/20174631}).\n\n\t\\item {\\bf Newsletter and RSS feeds:} Newsletters are e-mails with the most relevant news about the projects, while RSS feeds let users subscribe to project updates.\n\n\\end{itemize}\n\nIn addition, we found that developers use Q\\&A forums (e.g., StackOverflow), discussion groups (e.g., Google Groups), and messaging tools (e.g., IRC and Slack) to promote their projects.\nHowever, these channels are mostly used to discuss the projects and to provide answers to common questions raised by users.\nFor example, from the 155 topics opened in 2017 in the {\\sc adobe\/brackets} discussion group at Google Groups, only eight (5.1\\%) are related to announcements of new versions, mostly pre-releases for community testing.\nMoreover, from almost 500 topics on the {\\sc facebook\/react} official forum, we could not identify any announcement related to the project's development.\nThus, in this study, we do not consider forums, discussion groups, and messaging tools as promotion channels.\n\n\\section{Results}\n\\label{results}\n\n\n\\subsection{What are the most common promotion channels?}\n\\label{sec:results:rq1}\n\nFigure~\\ref{fig:rq1} presents the most common promotion channels used 
by the top-100 projects on GitHub.\nThe most common channel is Twitter, which is used by 56 projects.\nThe second one is Users Meetings (41 projects), followed by Blogs (38 projects), Events (33 projects), and RSS feeds (33 projects).\nThe least common channels are Facebook and Google+, which are used by 18 and 7 projects, respectively.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth,keepaspectratio,trim={0 1em 0 3em},clip]{images\/channels_binary.pdf}\n\t\\caption{Most common promotion channels}\n\t\\label{fig:rq1}\n\\end{figure}\n\nFigure~\\ref{fig:rq1_2} shows the distribution of the number of promotion channels per project.\nAlmost one third of the projects (32 projects) do not use any channel.\nBy contrast, more than half of the projects (55 projects) use at least two promotion channels.\nThe highest number of promotion channels is seven, which is the case of {\\sc \\mbox{facebook\/react}}, {\\sc \\mbox{facebook\/react-native}}, {\\sc \\mbox{meteor\/meteor}}, {\\sc \\mbox{golang\/go}}, {\\sc \\mbox{ionic-team\/ionic}}, {\\sc \\mbox{angular\/angular}}, and {\\sc adobe\/\\\\brackets}.\nWe also found that Blog and Twitter form the most frequent combination of channels (35 projects).\nOther frequent combinations include, for example, Blog and RSS (31 projects), Events and Users Meetings (31 projects), and Twitter, Events, and Users Meetings (31 projects).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.725\\textwidth,keepaspectratio,trim={0 1em 0 3em},clip]{images\/channels_binary_histogram.pdf}\n\t\\caption{Number of promotion channels per project}\n\t\\label{fig:rq1_2}\n\\end{figure}\n\n\n\\subsection{How often do developers promote their projects?}\n\\label{sec:results:rq2}\n\nIn this second question, we investigate how often developers promote their projects on blogs and social networks.\nFor blogs, we calculate the promotion frequency as the number of posts in the last 12 months.\nFor social networks, we could 
not retrieve all posts for all projects because their APIs restrict the search to a recent period (e.g., last seven days for Twitter and last 100 posts for Facebook).\nThus, in this case, we only classified each social network account into two distinct groups: active and inactive.\nAn {\\em active} account has at least three posts in the last three months; otherwise, it is considered an {\\em inactive} account.\nThis classification was performed by manually counting the number of posts on the social network pages.\n\nFigure~\\ref{fig:rq2} presents the distribution of the number of blog posts in the last 12 months.\nThe number ranges from 1 ({\\sc nylas\/nylas-mail}) to 1,300 ({\\sc freeCodeCamp\/freeCodeCamp}); the first, second, and third quartile values are 7, 19, and 54 posts, respectively.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=.325\\linewidth,keepaspectratio,trim={0 2em 0 2em},clip,page=2]{images\/activity_blog.pdf}\n\t\\caption{Distribution of the number of posts in the last 12 months (outliers are omitted)}\n\t\\label{fig:rq2}\n\\end{figure}\n\nTable~\\ref{tab:rq2:social} lists the activity status of the Twitter, Facebook, and Google+ accounts.\nWe found that 83.9\\% of the projects that use Twitter have an active account; 55.6\\% of the projects have an active Facebook account, and only 28.6\\% have an active Google+ account.\n\n\\begin{table}[!ht]\n \\caption{Active Twitter, Facebook, and Google+ accounts}\n \\label{tab:rq2:social}\n \\centering\n \\begin{tabular}{@{}ccrr@{}}\n \\toprule\n \\multicolumn{1}{c}{\\bf Channel} && \\multicolumn{1}{c}{\\bf Active (\\%)} & \\multicolumn{1}{c}{\\bf Inactive (\\%)} \\\\\n \\midrule\n Twitter && 47 (83.9\\%) & 9 (16.1\\%) \\\\\n Facebook && 10 (55.6\\%) & 8 (44.4\\%) \\\\\n Google+ && 2 (28.6\\%) & 5 (71.4\\%) \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nFinally, we investigate the characteristics of the user meeting groups promoted on Meetup (such meetings are the 3rd most common 
promotion channel studied in this article).\nA Meetup group is a local community of people that is responsible for organizing meeting events~\\cite{meetupgroup}. \nThese groups are identified by topics to help members find them. \nHere, we rely on these topics to collect meetups about the studied open source projects, along with their locations (i.e., city and country).\nFor example, the topic for {\\sc jquery\/jquery} is {\\em jquery}, and a summary of the meeting groups about this topic can be found at \\url{https:\/\/www.meetup.com\/topics\/jquery\/all}.\nFigure~\\ref{fig:rq3_meetups} presents the distribution of the number of groups, cities, and countries of the projects with meetings registered at Meetup. For groups, the values range from 2 to 2,261; for cities, the values range from 2 to 725; finally, for countries, the values range from 2 to 96. The maximum values always refer to {\\sc torvalds\/linux}. In other words, {\\sc torvalds\/linux} has 2,261 meetup groups, which are spread over 725 cities from 96 countries.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=2]{images\/meetups.pdf}\n\t\t\\caption{Groups}\n\t \\label{fig:rq3_meetups_sub1}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=3]{images\/meetups.pdf}\n\t\t\\caption{Cities}\n\t \\label{fig:rq3_meetups_sub2}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=4]{images\/meetups.pdf}\n\t\t\\caption{Countries}\n\t \\label{fig:rq3_meetups_sub3}\n\t\\end{subfigure}%\n\t\\caption{Number of groups, cities, and countries of the user meetings}\n\t\\label{fig:rq3_meetups}\n\\end{figure}\n\n\n\\subsection{How 
do popular and random projects differ in the usage of promotion channels?}\n\\label{sec:results:rq3}\n\nIn Section~\\ref{sec:results:rq1}, we investigated the most common promotion channels used by popular GitHub projects.\nIn this section, we contrast the usage of promotion channels by these projects and by a random sample of GitHub projects.\nFor this purpose, we randomly selected 100 projects from the top-5,000 repositories by number of stars and manually inspected their documentation using the same methodology reported in Section~\\ref{sec:design}.\nThe number of stars of this random sample ranges from 2,297 stars ({\\sc uber-archive\/image-diff}) to 22,558 ({\\sc vsouza\/awesome-ios}).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth,keepaspectratio,trim={0 1em 0 2em},clip]{images\/channels_random.pdf}\n\t\\caption{Most common promotion channels used by random projects}\n\t\\label{fig:rq3}\n\\end{figure}\n\nFigure~\\ref{fig:rq3} compares the usage of promotion channels by the random projects and by the most popular ones.\nIn the random sample, the number of projects using the investigated promotion channels is significantly lower than among the most popular ones.\nHowever, by applying Spearman's rank correlation test, we found a strong correlation between the number of projects using the promotion channels in each group ($\\rho =$ 0.904 and \\emph{p-value} $<$ 0.01).\nFor example, Twitter is also the most used promotion channel among the random projects (31 projects), followed by Blogs (17 projects) and RSS (13 projects).\nCompared to the most popular projects, Users Meetings and Newsletters are less common (13 and 6 projects, respectively).\nFinally, Facebook and Google+ also have a very limited usage (7 and 4 projects, respectively).\n\n\\subsection{What is the impact of promotion on Hacker News?}\n\\label{results:rq4}\n\nAfter publishing content on blogs, Twitter, etc., open source developers can also promote this content 
on social news aggregator sites. These sites aggregate content from distinct sources, easing viewing by a large audience.\nThe most popular and important example is Hacker News (\\url{https:\/\/news.ycombinator.com}), which is dedicated to Computer Science and related technology content. Hacker News posts include just a title and the URL of the promoted content (e.g.,~a blog post about a new version of an open source project). Any user registered on the site can post a link on Hacker News, i.e., the links are not necessarily posted by contributors of the promoted open source project. Other Hacker News users can discuss the posts and upvote\nthem. An upvote is similar to a {\\em like} in social networks; posts are listed on Hacker News according to the number of upvotes.\nIn this research question, we use Hacker News due to its popularity; posts that reach the front page of the site receive, for example, 10-100K page views in one or two days (\\url{https:\/\/goo.gl\/evyP4w}). Furthermore, Hacker News\nprovides a public API, which allows search and metadata collection.\n\nFor each popular project considered in our study (100 projects), we searched for Hacker News posts with a URL referencing the project sites or pages, including GitHub pages (READMEs, issues, etc). 
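This URL-based matching step can be sketched as follows. The post records and URL prefixes below are hypothetical examples assumed only for illustration; the actual collection queried the public Hacker News API.

```python
# Sketch of matching Hacker News posts to studied projects by URL prefix.
# Post records and URL prefixes are hypothetical; the study itself collected
# post metadata through the public Hacker News API.
posts = [
    {"title": "Swift is Open Source", "url": "https://github.com/apple/swift"},
    {"title": "Angular 2 Final Released", "url": "https://angular.io/blog/final"},
    {"title": "Unrelated story", "url": "https://example.com/article"},
]

# URL prefixes associated with each project (GitHub pages and project sites).
project_urls = {
    "apple/swift": ("https://github.com/apple/swift", "https://swift.org"),
    "angular/angular": ("https://github.com/angular/angular", "https://angular.io"),
}

def match_project(post):
    """Return the project a post refers to, or None if no URL prefix matches."""
    for project, prefixes in project_urls.items():
        if post["url"].startswith(prefixes):  # str.startswith accepts a tuple
            return project
    return None

matches = {p["title"]: match_project(p) for p in posts}
```

Counting the non-None values of such a mapping per project then yields per-project post counts of the kind reported for the studied repositories.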
As a result, we found 3,019 posts on Hacker News referencing content from 96 studied projects (i.e., only four projects are never referenced on Hacker News).\nFigure~\\ref{fig:rq4_overview} presents the distributions of the number of posts per project, upvotes, and comments.\nThe number of posts ranges from 1 to 298 posts per project ({\\sc rails\/rails}); the first, second, and third quartile values are 4, 10, and 43 posts, respectively.\nRegarding their upvotes, the most popular post is about {\\sc apple\/swift} (\\aspas{\\em Swift is Open Source}), with 1,824 upvotes; the quartile values are 2, 3, and 12 upvotes, respectively.\nFinally, the highest number of comments is 760, about a GitHub issue opened for Microsoft Visual Studio (\\aspas{\\em VS Code uses 13\\% CPU when idle due to blinking cursor rendering}); the quartile values are 0, 0, and 2 comments, respectively. \nThese results show that most Hacker News posts do not attract much attention. \nBy contrast, a small number of posts attract a lot of attention. For example, the top-10\\% posts have at least 132 upvotes. 
These posts are called {\\em successful posts} in this investigation.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=2]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Posts}\n\t \\label{fig:rq4_overview_sub1}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=4]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Upvotes}\n\t \\label{fig:rq4_overview_sub2}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=6]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Comments}\n\t \\label{fig:rq4_overview_sub3}\n\t\\end{subfigure}%\n\t\\caption{Number of posts, upvotes, and comments (outliers are omitted)}\n\t\\label{fig:rq4_overview}\n\\end{figure}\n\nFigure~\\ref{fig:rq4_stars_before_after} shows boxplots with the number of GitHub stars gained by projects covered by successful posts in the first three days before and after the publication date on Hacker News. The intention is to investigate the impact of a successful promotion on Hacker News by comparing the number of stars gained before and after each successful post. At the median, the projects covered by successful posts gained 74 stars in the first three days before their appearance on Hacker News; in the first three days after the publication, they gained 138 stars. This suggests that successful Hacker News posts have a positive impact on project popularity, as measured by GitHub stars.\nIndeed, the distributions are statistically different, according to the one-tailed variant of the Mann-Whitney U test (p-value $\\leq 0.05$).
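The effect-size step of this comparison can be sketched with a minimal pure-Python implementation of Cliff's delta; the star counts below are made-up sample data, not the study's measurements:

```python
# Cliff's delta: a non-parametric effect size comparing two samples.
#   delta = (#{pairs with x > y} - #{pairs with x < y}) / (n_x * n_y)
# The star counts below are illustrative, not the study's data.

def cliffs_delta(xs, ys):
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

stars_before = [10, 20, 30]   # stars gained in the 3 days before a post
stars_after = [25, 35, 45]    # stars gained in the 3 days after

d = cliffs_delta(stars_before, stars_after)
print(round(d, 3))  # negative: the "before" values tend to be smaller
```

Under the commonly used interpretation thresholds ($|d| < 0.147$ negligible, $< 0.33$ small, $< 0.474$ medium), the reported $d = -0.372$ corresponds to a medium effect.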
By computing Cliff's delta, we found a {\\em medium} effect size ($d = -0.372$).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth,keepaspectratio,trim={0 0 0 2em},clip,page=2]{images\/hn_stars_before_after.pdf}\n\t\\caption{Number of GitHub stars received by projects covered by successful Hacker News posts in the first three days before and after the post publication}\n\t\\label{fig:rq4_stars_before_after}\n\\end{figure}\n\nFinally, we inspected the title of each successful post, aiming to categorize its purpose. The most common category includes posts announcing new releases of open source projects (44.9\\%; e.g., \\aspas{\\em Angular 2 Final Released}). Other popular categories include posts promoting articles or reports about the projects (25.4\\%; e.g., \\aspas{\\em Vue.js vs.~React}), announcing the first release of a project (16.5\\%; e.g., \\aspas{\\em YouTube-dl: Open-source YouTube downloader}), highlighting new project features (10.6\\%; e.g., \\aspas{\\em Git and GitHub Integration Comes to Atom}), and announcing the open sourcing of products (1.6\\%; e.g., \\aspas{\\em Visual Studio Code is now open source}).\n\n\\section{Related Work}\n\\label{sec:related}\n\nAlthough open source software has been extensively studied in recent years, little is known about how developers promote their projects.\nThe main exception is a work conducted by Bianco et al.,
in which the authors analyze the marketing and communication strategies of three companies that develop open source software~\\cite{Bianco12}.\nBy means of interviews, they found that websites and product launch events are adopted by all three organizations; however, the organizations differ considerably in their use of other communication channels, particularly when promoting the projects in open source communities and among industrial users.\n\nMost communication channels investigated in this paper have been explored in other studies, but with different intentions.\nSinger et al. report a qualitative study focused on discovering the benefits that Twitter brings to developers~\\cite{singer2014}.\nThey found that Twitter adopters use it to stay aware of industry changes, for learning, and for building relationships.\nBy correlating the blogging and committing behavior of developers, Pagano and Maalej observed an intensive use of blogs, frequently detailing activities described shortly before in commit messages~\\cite{pagano2011}.\nBajic and Lyons analyze how software companies use social media techniques to gather feedback from users collectively~\\cite{Bajic2011}.\nTheir results suggest that startups use social media mainly for competitive advantage, whereas established organizations use it to monitor the buzz among their users.\nBy studying a successful software development company, Hansson et al. identified that user meetings and newsletters are used to include users and increase their participation in the development process~\\cite{Hansson2006}.\nFinally, Aniche et al. conducted a study to understand how developers use modern news aggregator sites (Reddit and Hacker News)~\\cite{aniche2018}.
According to their results, the two main reasons for posting links on these sites are to promote one's own work and to share relevant content.\n\n\\section{Conclusion and Practical Implications}\n\\label{sec:conclusion}\n\nIn this paper, we investigated the most common promotion channels used by popular GitHub projects. This investigation supports the following practical recommendations to open source project managers and leaders:\n\n\\begin{enumerate}\n\\item Promotion is an important aspect of open source project management, which should be emphasized by project leaders. For example, most popular GitHub projects (two thirds) use at least one promotion channel; half of the projects invest in two channels. By contrast, the use of promotion channels is less common among projects with lower popularity.\n\n\\item Open source project managers should consider the use of Twitter (47 projects among the top-100 most popular GitHub projects have active Twitter accounts), user meetings (which are organized or supported by 41 projects), and blogs (which are used by 38 projects).\n\n\\item Open source project managers should also consider promotion on social news aggregator sites. Successful posts on Hacker News may have an important impact on the popularity of GitHub projects. However, only 10\\% of the Hacker News posts about the studied projects have had some success.\n\\end{enumerate}\n\n\\section*{Acknowledgments}\n\n\\noindent Our research is supported by CAPES, FAPEMIG, and CNPq.\n\n\\small\n\\bibliographystyle{IEEEtran}\n