{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Extended Source Analysis\n",
    "\n",
    "The analysis of an extended source is performed similarly to the process described in the [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) tutorial.\n",
    "\n",
    "The only extra step needed is to create a two dimensional template that describes the object you are wanting to analyze.\n",
    "\n",
    "This template can be created from several different sources including maps from other observatories or you can create your own from scratch.\n",
    "\n",
    "There are a few simple rules you have to follow in creating your template but once you've done that, pretty much any shape is possible to model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "***"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This sample analysis is based on the Centaurus A analysis performed by the LAT team and described in [Abdo, A. A. et al. 2010, Science, 328, 728](http://adsabs.harvard.edu/abs/2010Sci...328..725F). At certain points we will refer to this article and its supplement as well as the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/).\n",
    "\n",
    "The goal of this tutorial is to reproduce the data analysis performed in this publication including calculating the spectral shape and fluxes of the central core of Cen A and the large radio lobes. This tutorial uses a [user-contributed tool](https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/) (`make4FGLxml.py`) and assumes you have the most recent [Fermitools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) installed.\n",
    "\n",
    "We will also make significant use of python, so you might want to familiarize yourself with python (there's a beginner's guide at http://wiki.python.org/moin/BeginnersGuide).\n",
    "\n",
    "This tutorial also assumes that you've gone through the non-python based [binned likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html) thread. It is also useful if you have gone through the [Likelihood Analysis with Python](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_tutorial.html) tutorial.\n",
    "\n",
    "This tutorial should take approximately 8 hours to complete (depending on your computer's speed) if you do everything, but there are some steps you can skip along the way which shaves off about 4 hours of that."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Get the data\n",
    "\n",
    "For this thread the original data were extracted from the [LAT data server](https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi) with the following selections (these selections are similar to those in the paper):\n",
    "\n",
    "```\n",
    "Search Center (RA,Dec) = (201.47,-42.97)\n",
    "Radius = 30 degrees\n",
    "Start Time (MET) = 239557420 seconds (2008-08-04T15:43:40)\n",
    "Stop Time (MET) = 265507200 seconds (2009-06-01T00:00:00)\n",
    "Minimum Energy = 300 MeV\n",
    "Maximum Energy = 300000 MeV\n",
    "```\n",
    "\n",
    "For more information on how to download LAT data please see the [Extract LAT Data](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extract_latdata.html) tutorial.\n",
    "\n",
    "These are the event files. Run the code cell below to retrieve them:\n",
    "```\n",
    "L1504211512544B65347F11_PH00.fits\n",
    "L1504211512544B65347F11_PH01.fits\n",
    "L1504211512544B65347F11_PH02.fits\n",
    "L1504211512544B65347F11_SC00.fits\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/L1504211512544B65347F11_PH00.fits\n",
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/L1504211512544B65347F11_PH01.fits\n",
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/L1504211512544B65347F11_PH02.fits\n",
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/L1504211512544B65347F11_SC00.fits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir data\n",
    "!mv *SC00.fits spacecraft.fits\n",
    "!mv *.fits ./data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You'll first need to make a file list with the names of your input event files:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls ./data/*PH*.fits > ./data/CenA.list\n",
    "!cat ./data/CenA.list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the following analysis we've assumed that you've named your list of data files `CenA.list` and renamed the spacecraft file (`L1504211512544B65347F11_SC00.fits`) to `spacecraft.fits`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Perform Event Selections\n",
    "\n",
    "We are going to follow the prescription described in the unbinned likelihood tutorial to filter and prepare our data set.\n",
    "\n",
    "First, execute **gtselect**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtselect evclass=128 evtype=3\n",
    "    @./data/CenA.list\n",
    "    ./data/CenA_filtered.fits\n",
    "    201.47\n",
    "    -42.97\n",
    "    10\n",
    "    239557420\n",
    "    265507200\n",
    "    300\n",
    "    300000\n",
    "    90"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, you need to run **gtmktime** with a similar filter as that used in the publication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtmktime\n",
    "    ./data/spacecraft.fits\n",
    "    (DATA_QUAL>0)&&(LAT_CONFIG==1)\n",
    "    no\n",
    "    ./data/CenA_filtered.fits\n",
    "    ./data/CenA_filtered_gti.fits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create and View a Counts Map\n",
    "\n",
    "Now we can go ahead and create a counts map of the region so we can visualize the lobes and see the core of CenA in gamma rays.\n",
    "\n",
    "We can't make any quantitative statements yet, since we haven't produced a template to model but we can at least look and see what we have."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtbin\n",
    "    CMAP\n",
    "    ./data/CenA_filtered_gti.fits\n",
    "    ./data/CenA_CMAP.fits\n",
    "    NONE\n",
    "    140\n",
    "    140\n",
    "    0.1\n",
    "    CEL\n",
    "    201.47\n",
    "    -42.97\n",
    "    0\n",
    "    AIT"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Open this FITS image up (`CenA_CMAP.fits`) in your favorite FITS viewer such as *ds9*. If you use the default settings, you'll see the following (using the 'heat' color map):\n",
    "\n",
    "<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP.png'>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Visualizing the lobes with this raw image is not easy. You can kind of see some emission in the center but not really resolve it.\n",
    "\n",
    "However, you can smooth this image in *ds9* by clicking the 'Analysis Menu' and then selecting 'Smooth'.\n",
    "\n",
    "This is better, but not quite good enough, so click 'Analysis' again and select 'Smooth Parameters'. This should bring up a dialog box where you can select the Kernel Radius. Set this to 9 and click 'Apply' and then close the dialog box.\n",
    "\n",
    "The resulting image is pretty washed out now, so click 'Color' and then select 'b'. If you play with the color scale by holding the right mouse button while dragging it around you can wind up with a smoothed count map that looks a little like the one in the publication:\n",
    "\n",
    "<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP_smooth.png'>"
   ]
  },
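  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you'd rather smooth the map in a script than in *ds9*, you can apply a Gaussian kernel with `scipy`. The cell below is an optional sketch, not part of the original thread: it demonstrates the smoothing on a synthetic array, and the kernel width of 3 pixels is an illustrative guess rather than a tuned match to the ds9 settings above. The same `gaussian_filter` call can be applied to the data array of `CenA_CMAP.fits`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.ndimage import gaussian_filter\n",
    "\n",
    "# Synthetic 140x140 'counts map': a single bright pixel on an empty background.\n",
    "counts = np.zeros((140, 140))\n",
    "counts[70, 70] = 100.0\n",
    "\n",
    "# Smooth with a Gaussian kernel; sigma is in pixels (0.1 deg per pixel here).\n",
    "smoothed = gaussian_filter(counts, sigma=3)\n",
    "\n",
    "# The kernel is normalized, so smoothing conserves the total counts.\n",
    "print(round(smoothed.sum(), 3))"
   ]
  },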
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We're not going to get something perfect here since we're manually playing with the color scaling and the paper used a sophisticated adaptive smoothing algorithm while we're just using the simple Gaussian algorithm included in *ds9*.\n",
    "\n",
    "However, you can already see the central core of CenA and some diffuse emission to the north and south of the core that kind of looks like extended emission.\n",
    "\n",
    "You can also see several point sources in the ROI that are other gamma-ray sources we'll need to model. Since Cen A is near the galactic plane, you can make out some of the galactic diffuse emission at the bottom of the image."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create a Template for the Extended Emission\n",
    "\n",
    "Now that we've convinced ourselves that there is extended lobe emission, we need to find a way to describe the spatial extent.\n",
    "\n",
    "In this case, we'll follow the example of the publication and use the WMAP k-band observations as a template. The exact WMAP template that was used in the publication is not available but we can get a very similar one from [NASA's SkyView](http://skyview.gsfc.nasa.gov/).\n",
    "\n",
    "Navigate to SkyView using and click on the [SkyView Query Form](http://skyview.gsfc.nasa.gov/cgi-bin/query.pl) link. We want to download the WMAP data in a region around Cen A, so input `Cen A` into the `Coordinates of Source` box of the form. Then, select `WMAP K` in the `Infrared CMB:` box.\n",
    "\n",
    "We need to match our ROI, so under `Common Options` input `140` as the `Image size (pixels)` and 14 as the `Image Size (degrees)`. Once you've input all of this, hit the `Submit Request` button."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/SkyView_Query.png'>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On the resulting page, you'll see an image of the WMAP data showing a fuzzy image of Cen A in the K-Band.\n",
    "\n",
    "This is nice, but what we really want is the FITS image, so click the FITS link and download it into your working directory.\n",
    "\n",
    "In the following section, we're going to assume that the SkyView image is called `skv.fits`, so rename the downloaded file accordingly.\n",
    "\n",
    "You can also download the WMAP image directly by running the code cell below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/skv.fits\n",
    "!mv *.fits ./data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here's what the downloaded WMAP K-band image looks like in ds9 using the 'heat' color map and square-root scaling without any smoothing:\n",
    "\n",
    "<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/wmap-k.png'>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The raw image by itself is not in the correct form for the analysis we want to do.\n",
    "\n",
    "First of all, we want to subtract out the central core of Cen A and we want to divide the map into two parts so that we can separately model the north and south lobes. Furthermore, there are a few things we need to do to make any map compatible with the Fermitools:\n",
    "\n",
    "* The template must be in J2000 coordinates; the likelihood code does not transform coordinates into J2000.\n",
    "* The background must be set to 0; the likelihood code treats any pixel greater than 0 as part of the template.\n",
    "* The total flux of the template must be normalized to 1."
   ]
  },
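  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The last two rules can be sketched in a few lines of NumPy. This toy example operates on a synthetic array rather than the WMAP map itself; the 0.1-degree pixel size matches the maps used in this thread, and the first rule (J2000) is a property of the FITS header rather than of the pixel values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "binsz = 0.1  # pixel size in degrees, as used throughout this thread\n",
    "\n",
    "# Toy 140x140 'template' with positive signal plus noisy background.\n",
    "rng = np.random.default_rng(0)\n",
    "template = rng.normal(0.2, 0.5, size=(140, 140))\n",
    "\n",
    "# Rule 2: set the background to exactly 0 (here, a simple threshold cut).\n",
    "template[template < 0.5] = 0.0\n",
    "\n",
    "# Rule 3: normalize the total flux to 1, weighting each pixel by its\n",
    "# solid angle in steradians: (pi/180)^2 * binsz^2.\n",
    "pixel_sr = (np.pi / 180.0)**2 * binsz**2\n",
    "template = template / (template.sum() * pixel_sr)\n",
    "\n",
    "# The integral over the template is now 1.\n",
    "print(round(template.sum() * pixel_sr, 6))"
   ]
  },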
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The first point is already taken care of since the WMAP image is in J2000 coordinates.\n",
    "\n",
    "We'll take care of the second point by setting all of the points below a certain value to 0. This isn't the best way to do it, but it's quick, which is what we're going for in this thread.\n",
    "\n",
    "In the final step, we'll integrate over the entire map to get our normalization factor and divide each pixel by this number.\n",
    "\n",
    "So let's get started!\n",
    "\n",
    "We're going to use `pyFits` to do this which is included in the Fermitools (you could use a similar tools like IDL or ftools if you wanted). You can find more details on pyFits at the [pyFits website](http://www.stsci.edu/resources/software_hardware/pyfits)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import astropy.io.fits as pyfits\n",
    "import numpy as np\n",
    "\n",
    "wmap_image = pyfits.open(\"./data/skv.fits\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can view the header of the image via the following command. The image is in the first HDU of the FITS file so you access the '0'th element of the `wmap_image` object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(wmap_image[0].header)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we do a rough background suppression by setting any pixels less than 0.5 mK to 0. We'll also save the resulting image to a file so we can see what we've done."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wmap_image[0].data[wmap_image[0].data < 0.5] = 0.0\n",
    "wmap_image.writeto('./data/CenA_wmap_k_above5.fits', overwrite=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you open up this image in *ds9*, you'll notice that there's still some background left in the south east portion of the image, so we'll get rid of this by setting any pixels that are more than 5 degrees away from the center to 0 and write this out to a file.\n",
    "\n",
    "Remember that the image is 14 degrees and has a pixel size of 0.1 degrees. We're going to create a 2-d array that's filled with the distance to the center pixel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = np.arange(-7,7,0.1)\n",
    "y = np.arange(-7,7,0.1)\n",
    "xx, yy = np.meshgrid(x, y, sparse=True)\n",
    "dist = np.sqrt(xx**2 + yy**2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, set all of the pixels in the wmap data where `dist` is greater than 5 to 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wmap_image[0].data[dist > 5] = 0.0\n",
    "wmap_image.writeto('./data/CenA_wmap_k_nobkgrnd.fits', overwrite=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The resulting image should only have emission from the radio galaxy left and the rest of the image set to 0.\n",
    "\n",
    "This is important because any non-zero pixels will be treated as part of the template by the likelihood code and will skew your results.\n",
    "\n",
    "Now we will continue on to subtract out the center 1 degree of the image so that the core of Cen A is not included in the template."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wmap_image[0].data[dist < 1.0] = 0.0\n",
    "wmap_image.writeto('./data/CenA_wmap_k_nocenter.fits', overwrite=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point, we want to divide up our map into north and south pieces and normalize each of them. We need to normalize the flux of each template that we're using to 1. This means doing a two dimensional integral over the image making sure to take into account the size of each bin.\n",
    "\n",
    "In practice, this means summing up all of the bins, multiplying this by (pi/180)^2 and multiplying this by the pixel area. We will now make the north map by setting all the south pixels to zero, normalize it and save it. We'll then close the image, open the image we saved before we did the zeroing out of the south pixels, and do the same for the south region."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wmap_image[0].data[0:70,0:140] = 0\n",
    "norm = np.sum(wmap_image[0].data) * (np.pi/180)**2 * (0.1**2)\n",
    "wmap_image[0].data = wmap_image[0].data / norm\n",
    "wmap_image.writeto('./data/CenA_wmap_k_nocenter_N.fits', overwrite=True)\n",
    "wmap_image.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now open the file back up and do the same for the south region."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wmap_image = pyfits.open('./data/CenA_wmap_k_nocenter.fits')\n",
    "wmap_image[0].data[70:140,0:140] = 0\n",
    "norm = np.sum(wmap_image[0].data) * (np.pi/180)**2 * (0.1**2)\n",
    "wmap_image[0].data = wmap_image[0].data / norm\n",
    "wmap_image.writeto('./data/CenA_wmap_k_nocenter_S.fits', overwrite=True)\n",
    "wmap_image.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the end, you should have two FITS files that look like the following images. The max value in the North template should be around 1670 and the max in the South should be around 900.\n",
    "\n",
    "If you are far away from these values, make sure you executed the above steps correctly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import HTML\n",
    "display(HTML(\"<table><tr><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/wmap-k-nocenter-S.png'></td><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/wmap-k-nocenter-N.png'></td></tr></table>\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create a 3D Counts Map and Compute the Livetime\n",
    "\n",
    "Now we can continue to follow the binned analysis prescription by creating a 3D counts map."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtbin\n",
    "    CCUBE\n",
    "    ./data/CenA_filtered_gti.fits\n",
    "    ./data/CenA_CCUBE.fits\n",
    "    NONE\n",
    "    140\n",
    "    140\n",
    "    0.1\n",
    "    CEL\n",
    "    201.47\n",
    "    -42.97\n",
    "    0\n",
    "    AIT\n",
    "    LOG\n",
    "    300\n",
    "    300000\n",
    "    30"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then compute the livetime. Remember that we applied a zenith cut to the data when we performed our event selections. You will need to apply the zenith cut to the exposure by including `zmax=90` as an argument on the command line. This will take something like 30 minutes to complete."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtltcube zmax=90\n",
    "    ./data/CenA_filtered_gti.fits\n",
    "    ./data/spacecraft.fits\n",
    "    ./data/CenA_ltcube.fits\n",
    "    0.025\n",
    "    1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generate XML Model File\n",
    "\n",
    "We have developed a model file (`CenA_model.xml`) based on the 4FGL catalog that works for this ROI and the length of time of the observations. If you want to develop your own, you should look at the other threads where this is done.\n",
    "\n",
    "Also make sure you have the most recent galactic diffuse and isotropic model files which can be downloaded [from this page](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html). They should also be in your Fermitools installation in the `$(FERMI_DIR)/refdata/fermi/galdiffuse` directory."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is a code cell that fetches the needed files:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_model.xml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mv *.xml ./data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you open `CenA_model.xml` up, you will see background sources, the galactic and extragalactic backgrounds, and the three sources of interest (`CenA`, `CenA_NorthLobe` and `CenA_SouthLobe`). Note that the Lobes reference the spatial models we created in the previous section (`CenA_wmap_k_nocenter_N.fits` and `CenA_wmap_k_nocenter_S.fits`).\n",
    "\n",
    "For these three sources we are fitting a powerlaw model (see the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/) for descriptions of the different spectral and spatial models). The core of Cen A will be modeled by a point source (`<spatialModel type=\"SkyDirFunction\">`) and the two lobes will be modeled using the templates we created from the WMAP sky map (`<spatialModel file=\"some_map.fits\" type=\"SpatialMap\">`)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Compute the Binned Exposure and Source Maps\n",
    "\n",
    "Now, we need to compute a binned exposure map. Make sure you tell it `none` when asked for a Counts cube so that you can choose the dimensions of the exposure map.\n",
    ">**Note**: If you get a `File not found` error when using CALDB to select your IRF, you will need to specify the IRF name. In this case we want to use the `P8R3_SOURCE_V2` IRF."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtexpcube2\n",
    "    ./data/CenA_ltcube.fits\n",
    "    none\n",
    "    ./data/CenA_BinnedExpMap.fits\n",
    "    P8R3_SOURCE_V2\n",
    "    400\n",
    "    400\n",
    "    0.1\n",
    "    201.47\n",
    "    -42.97\n",
    "    0\n",
    "    AIT\n",
    "    CEL\n",
    "    300\n",
    "    300000\n",
    "    30"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that we've matched the energy dimensions of our binned exposure map to our counts cube but we've made it 40 degrees square instead of 14. This is so that **gtsrcmaps** can calculate the exposure on sources not in our ROI that still might contribute to emission within our ROI.\n",
    "\n",
    "To achieve this, **gtsrcmaps** finds the farthest point from the center of our ROI (i.e. one of the corners, a distance of 7$\\cdot$sqrt(2) degrees from the center) adds 10 degrees to this and then convolves the model counts map that it creates with the PSF.\n",
    "\n",
    "Since the PSF at low energies (100s of MeV) is on the order of 4 - 10 degrees in radius, we need to add an additional 10 degrees to account for the convolution. The total distance from the center in our case is thus 7$\\cdot$sqrt(2) + 10 + (4-10) or approximately 24 - 30 degrees. This means the smallest square region that can contain this circle is approximately 34 - 43 degrees on a side ((radius/sqrt(2))$\\cdot$2)).\n",
    "\n",
    "Don't worry about getting it exactly right; **gtscrmaps** will fail with an error about 'emapbnds' if you input an exposure cube that's too small, and you can go back and make it larger.\n",
    "\n",
    "Now run **gtsrcmaps** to generate a source map using the model file we just created."
   ]
  },
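  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic above is easy to check directly. This cell just reproduces the numbers quoted in the text (the 10-degree padding and the 4 - 10 degree PSF radius); it is not a Fermitools calculation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "half_diag = 7 * np.sqrt(2)  # ROI center to a corner of the 14-degree map\n",
    "padding = 10.0              # margin gtsrcmaps adds before the PSF convolution\n",
    "\n",
    "for psf_radius in (4.0, 10.0):\n",
    "    radius = half_diag + padding + psf_radius\n",
    "    side = (radius / np.sqrt(2)) * 2  # square side quoted in the text\n",
    "    print(round(radius, 1), round(side, 1))"
   ]
  },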
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Note**: Make sure that, in the xml file (`CenA_model.xml`), you specify where your diffuse source files (`gll_iem_v07.fits` and `iso_P8R3_SOURCE_V2_v1.txt`) are. In most cases, this is `$FERMI_DIR/refdata/fermi/galdiffuse`."
   ]
  },
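  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rather than editing the paths by hand, you can patch the `file` attributes with a short script. The cell below is an illustrative sketch on a minimal stand-in XML string, not the real model: `CenA_model.xml` has many more sources, the isotropic component's file name lives on its spectrum (`FileFunction`) rather than its `spatialModel`, and the galdiffuse path must match your own installation. For the real file, use `ET.parse('./data/CenA_model.xml')` and `tree.write(...)` instead of `ET.fromstring`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import xml.etree.ElementTree as ET\n",
    "\n",
    "# Minimal stand-in for a diffuse entry in CenA_model.xml.\n",
    "xml_text = (\n",
    "    '<source_library>'\n",
    "    '<source name=\"gll_iem_v07\" type=\"DiffuseSource\">'\n",
    "    '<spatialModel file=\"gll_iem_v07.fits\" type=\"MapCubeFunction\"/>'\n",
    "    '</source>'\n",
    "    '</source_library>'\n",
    ")\n",
    "\n",
    "galdiffuse = '$(FERMI_DIR)/refdata/fermi/galdiffuse'  # adjust for your system\n",
    "\n",
    "root = ET.fromstring(xml_text)\n",
    "for spatial in root.iter('spatialModel'):\n",
    "    fname = spatial.get('file')\n",
    "    if fname and '/' not in fname:  # only bare file names need a path prefix\n",
    "        spatial.set('file', galdiffuse + '/' + fname)\n",
    "\n",
    "print(root.find('source/spatialModel').get('file'))"
   ]
  },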
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtsrcmaps\n",
    "    ./data/CenA_ltcube.fits\n",
    "    ./data/CenA_CCUBE.fits\n",
    "    ./data/CenA_model.xml\n",
    "    ./data/CenA_BinnedExpMap.fits\n",
    "    ./data/CenA_srcMaps.fits\n",
    "    CALDB"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Run the Likelihood Analysis\n",
    "\n",
    "It's time to actually run the likelihood analysis now.\n",
    "\n",
    "First, you need to import the pyLikelihood module and then the BinnedAnalysis functions.\n",
    "\n",
    "As discussed in the likelihood thread, it's best to do a two-pass fit here: the first with the MINUIT optimizer and a looser tolerance, and then finally with the NewMinuit optimizer and a tighter tolerance. This allows us to zero in on the best fit parameters.\n",
    "\n",
    "For more details on the pyLikelihood module, check out the [pyLikelihood Usage Notes](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_usage_notes.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pyLikelihood\n",
    "\n",
    "from BinnedAnalysis import *\n",
    "obs = BinnedObs(srcMaps='./data/CenA_srcMaps.fits',expCube='./data/CenA_ltcube.fits',binnedExpMap='./data/CenA_BinnedExpMap.fits',irfs='CALDB')\n",
    "like1 = BinnedAnalysis(obs,'./data/CenA_model.xml',optimizer='MINUIT')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set the tolerance to something large to start out with, do the first fit and save the results to a file.\n",
    "\n",
    "Here, we get the minimization object (like1obj) from the logLike object so that we can access it later. We pass this object to the fit routine so that it knows which fitting object to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like1.tol = 0.1\n",
    "like1obj = pyLike.Minuit(like1.logLike)\n",
    "like1.fit(verbosity=0,covar=True,optObject=like1obj)\n",
    "like1.logLike.writeXml('./data/CenA_fit1.xml')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check how the convergence of this first fit went."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like1obj.getQuality()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This output corresponds to the `MINUIT` fit quality.\n",
    "\n",
    "A \"good\" fit corresponds to a value of `fit quality = 3`; if you get a lower value, it is likely that there is a problem with the error matrix. According to the Minuit documentation possible values for \"fit quality\" are:\n",
    "\n",
    "* 0 - Error matrix not calculated at all\n",
    "* 1 - Diagonal approximation only, not accurate\n",
    "* 2 - Full matrix, but forced positive-definite (i.e. not accurate)\n",
    "* 3 - Full accurate covariance matrix (After MIGRAD, this is the indication of normal convergence.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, create a new BinnedAnalysis object using the same BinnedObs object from before and the results of the previous fit and tell it to use the NewMinuit optimizer. We also double check that the tolerance is what we want (1e-8) and then we fit the model and save the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like2 = BinnedAnalysis(obs,'./data/CenA_fit1.xml',optimizer='NewMinuit')\n",
    "like2.tol = 1e-8\n",
    "like2obj = pyLike.NewMinuit(like2.logLike)\n",
    "like2.fit(verbosity=0,covar=True,optObject=like2obj)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can check to see if NewMinuit converged like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(like2obj.getRetCode())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great, it converged this time, so let's save the fitted model. The return codes for NewMinuit are different than those for Minuit (it's a bit mask).\n",
    "\n",
    "The only thing you really need to know is that if this number is anything but `0`, the fit didn't converge and you have to keep trying."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like2.logLike.writeXml('./data/CenA_fit2.xml')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we've done the full fit we can verify that we've gotten values close to what's in the publication. First check the results for the core:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like2.Ts('Cen A')\n",
    "like2.model['Cen A']\n",
    "like2.flux('Cen A',emin=100,emax=100000)\n",
    "like2.fluxError('Cen A',emin=100,emax=100000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "help(like2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, the core is detected with high significance (TS = 325) and the fit resulted in a spectral index of $2.64 ± 0.1$. The integral flux above 100 MeV is $(1.3 ± 0.2) \\cdot 10^{-7}$. This matches well (within statistical errors) the index and flux reported in the article (index = 2.67 ± 0.10, flux = 1.50 +0.25/-0.22)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "like2.Ts('CenA_NorthLobe')\n",
    "like2.model['CenA_NorthLobe']\n",
    "like2.flux('CenA_NorthLobe',emin=100,emax=100000)\n",
    "like2.fluxError('CenA_NorthLobe',emin=100,emax=100000)\n",
    "like2.Ts('CenA_SouthLobe')\n",
    "like2.model['CenA_SouthLobe']\n",
    "like2.flux('CenA_SouthLobe',emin=100,emax=100000)\n",
    "like2.fluxError('CenA_SouthLobe',emin=100,emax=100000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The north lobe is only marginally detected, with a TS value of 10.6, while the south lobe is detected at a higher TS of 120.5. Both values are lower than those reported by the LAT team.\n",
    "\n",
    "The north lobe has a spectral index of 1.9 ± 0.3 and an integral flux of $(0.99 ± 0.86) \\cdot 10^{-8}$ photons cm$^{-2}$ s$^{-1}$. The south lobe has a spectral index of 2.6 ± 0.1 and an integral flux of $(1.6 ± 0.4) \\cdot 10^{-7}$ photons cm$^{-2}$ s$^{-1}$.\n",
    "\n",
    "The integral flux for the south lobe is still consistent with the value reported in the paper, $(1.09 +0.24/-0.21) \\cdot 10^{-7}$ photons cm$^{-2}$ s$^{-1}$, and its index is within statistical errors of the published value (2.60 +0.14/-0.15). The north lobe's significance is noticeably lower, as are its integral flux and index.\n",
    "\n",
    "The main reasons for any discrepancies are that we are using newly reconstructed data, different IRFs, different background models, a slightly different template than was used by the LAT team, and a different model file based on the 4FGL catalog.\n",
    "\n",
    "The fact that our results are still close indicates that the method is robust. If you get results that are outside of the errors of the thread, double check that you've followed the thread accurately."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create some Residual Maps\n",
    "\n",
    "It's easiest to visualize our results by generating some model maps based on the model we just created and then use these models to create residuals maps based on the actual counts map. We're going to create four different model maps:\n",
    "\n",
    "* `CenA_ModelMap_All.fits`: A map of the full model (nothing commented out)\n",
    "* `CenA_ModelMap_AllBkgrnd.fits`: A map of all of the background sources (comment out the Cen A sources)\n",
    "* `CenA_ModelMap_CenA.fits`: A map of just the Cen A sources (comment out everything but the Cen A sources)\n",
    "* `CenA_ModelMap_Diff.fits`: A map of just the diffuse background sources (comment out everything but the isotropic and galactic sources)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To do this, we'll need to edit the output xml model file (`CenA_fit2.xml`) each time and then run **gtmodel**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_fit2.xml\n",
    "!mv CenA_fit2.xml ./data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, **make sure that the xml points to the correct location for the diffuse source files**.\n",
    "\n",
    "Other than properly specifying the location of the diffuse sources, you don't need to edit the XML for the first one:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "gtmodel\n",
    "    ./data/CenA_srcMaps.fits\n",
    "    ./data/CenA_fit2.xml\n",
    "    ./data/CenA_ModelMap_All.fits\n",
    "    CALDB\n",
    "    ./data/CenA_ltcube.fits\n",
    "    ./data/CenA_BinnedExpMap.fits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the next file, comment out the Cen A sources in the xml file (you comment out text in xml using `<!--` and `-->` before and after the text you want to comment out) and run **gtmodel** again."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```xml\n",
    "<!-- <source name=\"Cen A\" type=\"PointSource\">\n",
    "    <spectrum type=\"PowerLaw\">\n",
    "      <parameter error=\"0.1143998413\" free=\"1\" max=\"10000\" min=\"0.0001\" name=\"Prefactor\" scale=\"1e-11\" value=\"1.354990763\" />\n",
    "      <parameter error=\"0.09903003339\" free=\"1\" max=\"10\" min=\"0\" name=\"Index\" scale=\"-1\" value=\"2.636336026\" />\n",
    "\n",
    ".\n",
    ".\n",
    ".\n",
    ".\n",
    ".\n",
    "\n",
    "    <spatialModel file=\"CenA_wmap_k_nocenter_S.fits\" map_based_integral=\"true\" type=\"SpatialMap\">\n",
    "      <parameter free=\"0\" max=\"1000\" min=\"0.001\" name=\"Prefactor\" scale=\"1\" value=\"1\" />\n",
    "    </spatialModel>\n",
    "  </source> -->\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This generates a model map without the CenA sources, leaving the background intact."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Make sure that CenA_fit2.xml has been edited accordingly before\n",
    "# you run this step.\n",
    "gtmodel\n",
    "    ./data/CenA_srcMaps.fits\n",
    "    ./data/CenA_fit2.xml\n",
    "    ./data/CenA_ModelMap_AllBkgrnd.fits\n",
    "    CALDB\n",
    "    ./data/CenA_ltcube.fits\n",
    "    ./data/CenA_BinnedExpMap.fits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Continue to comment out select parts of the model xml file according to the list above and run **gtmodel** afterwards to get the next two files."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`CenA_ModelMap_CenA.fits`: Comment out everything *but* the CenA sources in `CenA_fit2.xml`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Make sure that CenA_fit2.xml has been edited accordingly before\n",
    "# you run this step.\n",
    "gtmodel\n",
    "    ./data/CenA_srcMaps.fits\n",
    "    ./data/CenA_fit2.xml\n",
    "    ./data/CenA_ModelMap_CenA.fits\n",
    "    CALDB\n",
    "    ./data/CenA_ltcube.fits\n",
    "    ./data/CenA_BinnedExpMap.fits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`CenA_ModelMap_Diff.fits`: Comment out everything *but* the isotropic and galactic sources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Make sure that CenA_fit2.xml has been edited accordingly before\n",
    "# you run this step.\n",
    "gtmodel\n",
    "    ./data/CenA_srcMaps.fits\n",
    "    ./data/CenA_fit2.xml\n",
    "    ./data/CenA_ModelMap_Diff.fits\n",
    "    CALDB\n",
    "    ./data/CenA_ltcube.fits\n",
    "    ./data/CenA_BinnedExpMap.fits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You should end up with the four images below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import HTML\n",
    "display(HTML(\"<table><tr><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_ModelMap_All.png'></td><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_ModelMap_AllBkgrnd.png'></td></tr></table>\"))\n",
    "display(HTML(\"<table><tr><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_ModelMap_CenA.png'></td><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_ModelMap_Diffuse.png'></td></tr></table>\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll use the ftool **farith** to subtract these model maps from our data to create residual maps."
   ]
  },
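  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you would rather script the subtraction, the same pixel-by-pixel operation (`residual = counts - model`) can be done in Python with `astropy.io.fits` (the successor to pyfits). This is just a sketch, not the farith recipe itself: the counts-map name `CenA_CMAP.fits` is an assumed placeholder for whatever counts map you produced in the binned likelihood thread, and each call is guarded so it only runs if the model map exists."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of the farith-style subtraction done with astropy.io.fits.\n",
    "# NOTE: 'CenA_CMAP.fits' is an assumed filename -- substitute the\n",
    "# counts map you made in the binned likelihood thread.\n",
    "import os\n",
    "import numpy as np\n",
    "from astropy.io import fits\n",
    "\n",
    "def make_residual(counts_file, model_file, out_file):\n",
    "    \"\"\"Subtract a model map from the counts map and save the residual.\"\"\"\n",
    "    with fits.open(counts_file) as counts, fits.open(model_file) as model:\n",
    "        resid = counts[0].data.astype(np.float64) - model[0].data\n",
    "        # Reuse the counts map's header so the residual map keeps the\n",
    "        # same WCS as the input maps.\n",
    "        fits.PrimaryHDU(data=resid, header=counts[0].header).writeto(\n",
    "            out_file, overwrite=True)\n",
    "\n",
    "pairs = [('CenA_ModelMap_All.fits', 'CenA_CMAP_Resid.fits'),\n",
    "         ('CenA_ModelMap_AllBkgrnd.fits', 'CenA_CMAP_CenA.fits'),\n",
    "         ('CenA_ModelMap_CenA.fits', 'CenA_CMAP_Bkgrnd.fits'),\n",
    "         ('CenA_ModelMap_Diff.fits', 'CenA_CMAP_Sources.fits')]\n",
    "for model_map, out_map in pairs:\n",
    "    if os.path.exists('./data/' + model_map):\n",
    "        make_residual('./data/CenA_CMAP.fits', './data/' + model_map,\n",
    "                      './data/' + out_map)"
   ]
  },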
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(HTML(\"<table><tr><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP_Resid.png'><center>CenA_CMAP_Resid.png: Counts map minus all of the sources in the model showing that there's not really anything left after we subtract everything out.</center></td><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP_CenA.png'><center>CenA_CMAP_CenA.png: Everything subtracted except the Cen A core and lobes. You can really see the extent of the radio galaxy in this map.</center></td></tr></table>\"))\n",
    "display(HTML(\"<table><tr><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP_Bkgrnd.png'><center>CenA_CMAP_Bkgrnd.png: This shows a counts map with the Cen A core and lobes removed. The galactic and extragalactic emission dominates here but you can see a few individual point sources.</center></td><td><img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/extended/CenA_CMAP_Sources.png'><center>CenA_CMAP_Sources.png: This shows the region with the galactic and extragalactic emission removed so you can see the point sources along with the Cen A sources.</center></td></tr></table>\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Finished!\n",
    "\n",
    "So, it looks like we've generally reproduced the results from the Cen A paper.\n",
    "\n",
    "You can go on from here and produce a proper spectrum using the model you have produced here (see the python likelihood thread) and you can plot a profile if you like (try using pyfits and matplotlib, both included in the Fermitools).\n",
    "\n",
    "You can also create any type of template you wish now to analyze extended diffuse sources using arbitrary shapes (create them in python and use pyfits to save them). Just make sure you follow the prescription outlined above."
   ]
  }
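,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate that last point, here is a minimal sketch of building a template from scratch with numpy: a uniform disk, normalized so its pixels sum to one (a common convention for intensity templates; check the Cicerone for the exact requirements). The image size, disk radius, and header keywords are arbitrary illustration values, not the ones used for the Cen A lobes, and the FITS-writing step is shown in comments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal from-scratch template: a uniform disk. The image size and\n",
    "# radius below are arbitrary illustration values.\n",
    "import numpy as np\n",
    "\n",
    "npix = 200                    # template is npix x npix pixels\n",
    "radius_pix = 40.0             # disk radius in pixels\n",
    "y, x = np.mgrid[0:npix, 0:npix]\n",
    "r = np.hypot(x - npix / 2, y - npix / 2)\n",
    "template = (r <= radius_pix).astype(float)\n",
    "\n",
    "# Normalize so the pixels sum to one, a common convention for\n",
    "# intensity templates (check the Cicerone for the exact rules).\n",
    "template /= template.sum()\n",
    "\n",
    "# To feed this to gtlike, save it as a FITS image with a valid\n",
    "# celestial WCS, e.g. with astropy.io.fits:\n",
    "#   from astropy.io import fits\n",
    "#   hdu = fits.PrimaryHDU(template)\n",
    "#   hdu.header['CTYPE1'] = 'RA---TAN'  # plus CRVAL/CRPIX/CDELT, etc.\n",
    "#   hdu.writeto('disk_template.fits')"
   ]
  }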
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
