\documentclass{article}
\title{EUCALYPTUS DATA REDUCTION MANUAL}


\begin{document}

\maketitle

\textit{(Advanced users: look for IFU-specific instructions by searching for ***IFU***.)}


\vspace{1cm}

\textbf{SOAR\_IFU PACKAGE INSTALLATION INSTRUCTIONS}\\

The SOAR\_IFU package runs under IRAF (PC-IRAF V2.12.1) installed on the Linux Red Hat 7.2 platform.
Create a new \textit{login.cl} file and a new \textit{uparm} directory to run the SOAR\_IFU
package: in the home directory of the package, type \textbf{mkiraf}.\\

Set a few environment variables in your profile (\textit{.tcshrc}):

\begin{itemize}

\item \textbf{setenv SOAR\_IFU \textit{the\_home\_of\_the\_package}} 

\item \textbf{setenv ROOTSYS \$SOAR\_IFU}

\item \textbf{setenv LD\_LIBRARY\_PATH \$\{LD\_LIBRARY\_PATH\}:\$\{SOAR\_IFU\}\textit{/lib}}

\end{itemize}

\vspace{1cm}

\textbf{DATA REDUCTION PROCEDURE}

\begin{enumerate}


\item First, combine all your bias frames with ZEROCOMBINE. To do this, create a list containing all of
your bias frames.  For instance, if your bias frames are called \textit{bias????.fits}, type:
\\


\textbf{cl$>$ files bias*.fits $>$ biaslist}


The file biaslist will look like this:
\\

\textbf{cl$>$ head biaslist}
\texttt{
\begin{quote}
bias0001.fits\\
bias0002.fits\\
bias0003.fits\\
bias0004.fits\\
bias0005.fits\\
bias0006.fits\\
bias0007.fits\\
bias0008.fits\\
bias0009.fits\\
bias0010.fits\\
bias0011.fits\\
bias0012.fits
\end{quote}
}

Now, to combine all bias frames listed in biaslist, use ZEROCOMBINE with the following 
parameters:
\\

\textbf{cl$>$ epar zerocomb}
\texttt{
\begin{tabbing}
input = "@biaslist" \` List of zero level images to combine\\
(output = "Zero") \`     Output zero level name\\
(combine = "average")   \`   Type of combine operation\\
(reject = "avsigclip")   \`  Type of rejection\\
(ccdtype = "")        \`     CCD image type to combine\\
(process = no)       \`      Process images before combining?\\
(delete = no)          \`    Delete input images after combining?\\
(clobber = no)        \`     Clobber existing output image?\\
(scale = "none")    \`       Image scaling\\
(statsec = "")        \`     Image section for computing statistics\\
(nlow = 0)         \`        minmax: Number of low pixels to reject\\
(nhigh = 1)       \`         minmax: Number of high pixels to reject\\
(nkeep = 1)      \`       Minimum to keep (pos) or maximum to reject\\
(mclip = yes)    \`          Use median in sigma clipping algorithms?\\
(lsigma = 3.)     \`         Lower sigma clipping factor\\
(hsigma = 3.)       \`       Upper sigma clipping factor\\
(rdnoise = "0.")    \`       ccdclip: CCD readout noise (electrons)\\
(gain = "1.")         \`     ccdclip: CCD gain (electrons/DN)\\
(snoise = "0.")      \`      ccdclip: Sensitivity noise (fraction)\\
(pclip = -0.5)       \`      pclip: Percentile clipping parameter\\
(blank = 0.)        \`       Value if there are no pixels\\
(mode = "ql")
\end{tabbing}
}

(For a detailed discussion of the best parameter choices, read \emph{``A Guide to CCD Data
Reductions''} by Phil Massey -- http://iraf.noao.edu.)
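For illustration, the combining operation that ZEROCOMBINE performs with \texttt{combine = "average"} and \texttt{reject = "avsigclip"} can be sketched in plain Python. This is an illustrative stand-in only, not part of the package: it uses a robust median/MAD sigma estimate for the clipping in place of IRAF's noise-model sigma.

```python
from statistics import mean, median

def robust_sigma(vals):
    # 1.4826 * median absolute deviation approximates the Gaussian sigma
    m = median(vals)
    return 1.4826 * median(abs(v - m) for v in vals)

def sigclip_combine(stack, lsigma=3.0, hsigma=3.0):
    """Average a stack of frames pixel by pixel, rejecting values more than
    lsigma/hsigma robust sigmas from the median (cf. reject = "avsigclip",
    mclip = yes)."""
    ny, nx = len(stack[0]), len(stack[0][0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            vals = [frame[y][x] for frame in stack]
            m, s = median(vals), robust_sigma(vals)
            kept = [v for v in vals
                    if s == 0 or m - lsigma * s <= v <= m + hsigma * s]
            out[y][x] = mean(kept)
    return out

# Five tiny 1x2 "bias frames"; the last one has a cosmic-ray hit in pixel (0, 1).
frames = [[[100, 100]], [[100, 101]], [[100, 100]], [[100, 99]], [[100, 5000]]]
combined = sigclip_combine(frames)   # the 5000 outlier is rejected
```

With only a handful of frames, a robust sigma is essential: a sample standard deviation computed from data that include the outlier would be too large to reject it.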


\item You will need to make a list for each type of frame you took during the night: flats, sky flats,
NeAr arcs, and so on.  Create a list for each type in the same way as you did for the biases:

\textit{biaslist} - list of all bias frames\\
\textit{dirflatlist} - list of all direct flats (see below)\\
\textit{objlist}   - list of all object frames\\
\textit{domeflatlist}  - list of all dome flats\\
\textit{masklist}       - list of all mask spectra\\
\textit{neonlist}         - list of all neon spectra\\


\item Now you need to subtract the bias frames from all your other frames. We do this by using 
CCDPROC (load NOAO, IMRED, CCDRED to get access to CCDPROC). Then, run 
CCDPROC with the following parameters:
\\

\textbf{cl$>$ epar ccdproc}
\texttt{
\begin{tabbing}
images = "@dirflatlist,@objlist,@domeflatlist,@masklist,@neonlist"   \\
(output = "")      \` List of output CCD images \\
(ccdtype = " ")    \` CCD image type to correct \\
(max\_cache = 0)   \`  Maximum image caching memory (in Mbytes) \\
(noproc = no)      \` List processing steps only? \\
(fixpix = no)     \`  Fix bad CCD lines and columns? \\
(overscan = yes)   \` Apply overscan strip correction? \\
(trim = yes)       \` Trim the image? \\
(zerocor = yes)    \` Apply zero level correction? \\
(darkcor = no)     \` Apply dark count correction? \\
(flatcor = no)     \` Apply flat field correction? \\
(illumcor = no)   \`  Apply illumination correction? \\
(fringecor = no)  \`  Apply fringe correction? \\
(readcor = no)    \`  Convert zero level image to readout correction? \\
(scancor = no)    \`  Convert flat field image to scan correction? \\
(readaxis = "line")   \` Read out axis (column|line) \\
(fixfile = "")      \`File describing the bad lines and columns \\
(biassec = "[1:50,50:2850]")     \`Overscan strip image section \\
(trimsec = "[55:2095,50:2850]")  \`Trim data section \\
(zero = "Zero")              \`    Zero level calibration image \\
(dark = "Dark")              \`    Dark count calibration image \\ 
(flat = "CorrDirFlat")     \`      Flat field images \\
(illum = "")                  \`   Illumination correction images \\
(fringe = "")                \`    Fringe correction images \\
(minreplace = 1.)       \`         Minimum flat field value \\
(scantype = "shortscan")      \`   Scan type (shortscan|longscan) \\
(nscan = 1)                  \`    Number of short scan lines \\
(interactive = no)             \`  Fit overscan interactively? \\
(function = "legendre")     \`     Fitting function \\
(order = 1)       \`  Number of polynomial terms or spline pieces \\
(sample = "*")     \` Sample points to fit \\
(naverage = 1)    \`  Number of sample points to combine \\
(niterate = 1)    \`  Number of rejection iterations \\
(low\_reject = 3.)   \`Low sigma rejection factor \\
(high\_reject = 3.)  \`High sigma rejection factor \\
(grow = 0.)        \` Rejection growing radius \\
(mode = "ql")
\end{tabbing}
}
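Conceptually, the CCDPROC settings above do three things to each frame: fit and subtract the overscan level (\texttt{biassec}), trim to the data section (\texttt{trimsec}), and subtract the combined Zero frame. A simplified sketch in Python (a hypothetical helper, not the IRAF algorithm: it uses 0-based column slices instead of IRAF's 1-based image sections, a per-row mean in place of the order-1 Legendre overscan fit, and assumes the Zero frame is already trimmed):

```python
def ccdproc_like(frame, zero, biassec, trimsec):
    """Overscan-correct, trim, and bias-subtract one frame.
    biassec / trimsec are (start, stop) 0-based column slices;
    zero is the combined bias, already trimmed to the data section."""
    (b0, b1), (t0, t1) = biassec, trimsec
    out = []
    for row, zrow in zip(frame, zero):
        level = sum(row[b0:b1]) / (b1 - b0)      # overscan level for this row
        out.append([v - level - z for v, z in zip(row[t0:t1], zrow)])
    return out

# One-row toy frame: columns 0-1 are overscan (level 10), columns 2-3 are data.
frame = [[10, 10, 110, 120]]
zero = [[5, 5]]                                  # trimmed bias frame
print(ccdproc_like(frame, zero, (0, 2), (2, 4)))  # → [[95.0, 105.0]]
```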

\item (***IFU***) Now that you have subtracted the bias from all your other frames, you can
combine your direct flats.  Direct flats are flats taken by illuminating the CCD directly,
without passing the light through the fibers.  Combine the direct flats using FLATCOMBINE
with the following parameters:
\\

\textbf{cl$>$ epar flatcombine}
\texttt{
\begin{tabbing}
        input = "@dirflatlist" \` List of flat field images to combine\\
      (output = "DirFlat")     \` Output flat field root name\\
     (combine = "average")   \`   Type of combine operation\\
      (reject = "avsigclip")    \`Type of rejection\\
     (ccdtype = "")           \`  CCD image type to combine\\
     (process = no)         \`    Process images before combining?\\
     (subsets = no)         \`    Combine images by subset parameter?\\
      (delete = no)        \`     Delete input images after combining?\\
     (clobber = no)       \`      Clobber existing output image?\\
       (scale = "mode") \`        Image scaling\\
     (statsec = "")          \`   Image section for computing statistics\\
        (nlow = 0)           \`   minmax: Number of low pixels to reject\\
       (nhigh = 1)          \`    minmax: Number of high pixels to reject\\
       (nkeep = 1)         \`     Minimum to keep (pos) or maximum to reject (neg)\\
       (mclip = yes)       \`     Use median in sigma clipping algorithms?\\
      (lsigma = 3.)        \`     Lower sigma clipping factor\\
      (hsigma = 3.)       \`      Upper sigma clipping factor\\
     (rdnoise = "0.")     \`      ccdclip: CCD readout noise (electrons)\\
        (gain = "1.")       \`    ccdclip: CCD gain (electrons/DN)\\
      (snoise = "0.")     \`      ccdclip: Sensitivity noise (fraction)\\
       (pclip = -0.5)       \`    pclip: Percentile clipping parameter\\
       (blank = 1.)         \`    Value if there are no pixels\\
        (mode = "q")
\end{tabbing}
}

\item The direct flat does not provide a uniform illumination of your chip:
   large-scale variations will be visible, and you do not want to divide your
   other frames by this flat field yet.  Remove any large-scale variation
   using IMSURFIT or MKILLUMFLAT. The new image will have the information
   we need at this point: the pixel-to-pixel sensitivity variations.
   (If using IMSURFIT, do only step a; if using MKILLUMFLAT, do only step b.
   You do not need to do both.)\\
   
a) IMSURFIT will fit a smooth 2D surface (here a cubic spline) to your flat field image and write out
   a new image with the ratio of your flat field to the fit.
  The IMSURFIT parameters look like this:
\\

\textbf{cl$>$ epar imsurfit}
\texttt{
\begin{tabbing}
       input = "DirFlat"    \`   Input images to be fit \\
       output = "CorrDirFlat"  \` Output images\\
       xorder = 3              \` Order of function in x\\
       yorder = 3              \` Order of function in y\\
 (type\_output = "response")   \`  Type of output \\
    (function = "spline3")      \`Function to be fit\\
 (cross\_terms = yes)           \` Include cross-terms for polynomials?\\
     (xmedian = 1)            \`  X length of median box\\
     (ymedian = 1)          \`    Y length of median box\\
(median\_perce = 50.)        \`    Min. fraction of pixels in median box\\
       (lower = 0.)           \`  Lower limit for residuals\\
       (upper = 0.)          \`   Upper limit for residuals\\
       (ngrow = 0)           \`   Radius of region growing circle\\
       (niter = 0)             \` Maximum number of rejection cycles\\
     (regions = "all")     \`     Good regions \\
        (rows = "*")         \`   Rows to be fit\\
     (columns = "*")      \`      Columns to be fit\\
      (border = "50")     \`      Width of border to be fit\\
    (sections = )            \`   File name for sections list\\
      (circle = )            \`   Circle specifications\\
     (div\_min = INDEF)     \`     Division minimum for response output\\
        (mode = "ql")
\end{tabbing}
}

  Then, do the following:
  \\

\textbf{cl$>$ hedit CorrDirFlat CCDMEAN 1 upd+}\\


b) MKILLUMFLAT will smooth the flat field image and overwrite it with the ratio
   of the original flat field to the smoothed flat:\\
   
\textbf{cl$>$ imcopy DirFlat CorrDirFlat}\\

\textbf{cl$>$ epar mkillumflat}
\texttt{
\begin{tabbing}
        input = "CorrDirFlat"   \`    Input CCD flat field images \\
       output = " "       \`        Output images (same as input if none given)\\
     (ccdtype = " ")      \`        CCD image type to select\\
     (xboxmin = 5.)        \`       Minimum smoothing box size in x at edges\\
     (xboxmax = 0.25)      \`       Maximum smoothing box size in x\\
     (yboxmin = 5.)          \`     Minimum smoothing box size in y at edges\\
     (yboxmax = 0.25)     \`        Maximum smoothing box size in y\\
        (clip = yes)          \`    Clip input pixels?\\
    (lowsigma = 2.5)      \`        Low clipping sigma\\
   (highsigma = 2.5)      \`        High clipping sigma\\
   (divbyzero = 1.)         \`      Result for division by zero\\
     (ccdproc = "")        \`       CCD processing parameters\\
        (mode = "ql")           
\end{tabbing}
}
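Both variants implement the same idea: divide the flat by a smooth version of itself, so that only the small-scale (pixel-to-pixel) structure survives. A one-dimensional sketch in Python (illustrative only; IMSURFIT fits a 2D surface and MKILLUMFLAT uses a variable smoothing box, here replaced by a simple boxcar):

```python
def response(flat, box):
    """Divide a 1-D flat by a boxcar-smoothed copy of itself, keeping only
    pixel-to-pixel structure and removing the large-scale illumination."""
    n, out = len(flat), []
    for i in range(n):
        lo, hi = max(0, i - box // 2), min(n, i + box // 2 + 1)
        out.append(flat[i] / (sum(flat[lo:hi]) / (hi - lo)))
    return out

# A flat with a strong large-scale gradient but no pixel-to-pixel structure:
flat = [100, 110, 120, 130, 140]
resp = response(flat, 5)   # all values close to 1.0
```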

\item Next you must divide all the remaining frames by the ``corrected'' direct
   flat.  You can do this using CCDPROC again.  The parameters will be:
\\

\textbf{cl$>$ epar ccdproc}
\texttt{
\begin{tabbing}
       images = "@domeflatlist,@masklist,@neonlist,@objlist" \\
      (output = "")        \`       List of output CCD images\\
     (ccdtype = " ")         \`     CCD image type to correct\\
   (max\_cache = 0)       \`         Maximum image caching memory (in Mbytes)\\
      (noproc = no)        \`       List processing steps only?\\
      (fixpix = no)          \`     Fix bad CCD lines and columns?\\
    (overscan = no)     \`          Apply overscan strip correction?\\
        (trim = no)          \`     Trim the image?\\
     (zerocor = no)       \`        Apply zero level correction?\\
     (darkcor = no)        \`       Apply dark count correction?\\
     (flatcor = yes)         \`     Apply flat field correction?\\
    (illumcor = no)      \`         Apply illumination correction?\\
   (fringecor = no)     \`          Apply fringe correction?\\
     (readcor = no)     \`          Convert zero level image to readout correction?\\
     (scancor = no)      \`         Convert flat field image to scan correction? \\
    (readaxis = "line")   \`        Read out axis (column|line)\\
     (fixfile = "")           \`    File describing the bad lines and columns\\
     (biassec = "[1:50,50:2850]")  \` Overscan strip image section\\
     (trimsec = "[55:2095,50:2850]")  \` Trim data section\\
        (zero = "Zero")        \`   Zero level calibration image\\
        (dark = "Dark")        \`   Dark count calibration image\\
        (flat = "CorrDirFlat")   \` Flat field images\\
       (illum = "")           \`    Illumination correction images\\
      (fringe = "")           \`    Fringe correction images\\
  (minreplace = 1.)       \`        Minimum flat field value\\
    (scantype = "shortscan")    \`  Scan type (shortscan|longscan)\\
       (nscan = 1)        \`        Number of short scan lines \\
 (interactive = no)      \`         Fit overscan interactively?\\
    (function = "legendre")     \`  Fitting function\\
       (order = 1)           \`     Number of polynomial terms or spline pieces\\
      (sample = "*")       \`       Sample points to fit\\
    (naverage = 1)        \`        Number of sample points to combine\\
    (niterate = 1)           \`     Number of rejection iterations\\
  (low\_reject = 3.)        \`       Low sigma rejection factor\\
 (high\_reject = 3.)       \`        High sigma rejection factor\\
        (grow = 0.)       \`        Rejection growing radius \\
        (mode = "ql") 
\end{tabbing}
}

\item (***IFU***) Now we are ready to start working on the mask spectra.  First
   we must combine all the spectra taken at each mask position.  Use IMCOMBINE
   with the following parameters for each mask position:
\\

\textbf{cl$>$ epar imcombine}
\texttt{
\begin{tabbing}
        input = "mask-00-00-000?.fits"  \`List of images to combine\\
       output = "mask-00-00.fits" \` List of output images\\
     (headers = "")             \` List of header files (optional)\\
     (bpmasks = "")         \`     List of bad pixel masks (optional)\\
    (rejmasks = "")          \`    List of rejection masks (optional)\\
   (nrejmasks = "")        \`      List of number rejected masks (optional)\\
    (expmasks = "")        \`      List of exposure masks (optional)\\
      (sigmas = "")            \`  List of sigma images (optional)\\
     (logfile = "STDOUT")     \`   Log file \\
     (combine = "average")    \`   Type of combine operation\\
      (reject = "minmax")      \`  Type of rejection\\
     (project = no)          \`    Project highest dimension of input images?\\
     (outtype = "real")      \`    Output image pixel datatype\\
   (outlimits = "")             \` Output limits (x1 x2 y1 y2 ...)\\
     (offsets = "none")     \`     Input image offsets\\
    (masktype = "none")    \`      Mask type\\
   (maskvalue = 0.)          \`    Mask value\\
       (blank = 0.)           \`   Value if there are no pixels\\
       (scale = "none")   \`       Image scaling\\
        (zero = "none")      \`    Image zero point offset\\
      (weight = "none")    \`      Image weights\\
     (statsec = "")           \`   Image section for computing statistics\\
     (expname = "")        \`      Image header exposure time keyword\\
  (lthreshold = INDEF)    \`       Lower threshold\\
  (hthreshold = INDEF)      \`     Upper threshold\\
        (nlow = 0)        \`       minmax: Number of low pixels to reject\\
       (nhigh = 1)        \`       minmax: Number of high pixels to reject\\
       (nkeep = 1)        \`       Minimum to keep (pos) or maximum to reject (neg)\\
       (mclip = yes)      \`       Use median in sigma clipping algorithms?\\
      (lsigma = 3.)        \`      Lower sigma clipping factor\\
      (hsigma = 3.)       \`       Upper sigma clipping factor\\
     (rdnoise = "0.")     \`       ccdclip: CCD readout noise (electrons)\\
        (gain = "1.")       \`     ccdclip: CCD gain (electrons/DN)\\
      (snoise = "0.")      \`      ccdclip: Sensitivity noise (fraction)\\
    (sigscale = 0.1)      \`       Tolerance for sigma clipping scaling correction\\
       (pclip = -0.5)       \`     pclip: Percentile clipping parameter \\
        (grow = 0.)         \`     Radius (pixels) for neighbor rejection \\
        (mode = "ql")
\end{tabbing}
}

  This is IMCOMBINE's output:

  \texttt{combine = average, scale = none, zero = none,\\
  weight = none, reject = minmax, nlow = 0, \\
  nhigh = 1,  blank = 0.}
   \texttt{
\begin{quote} 
            Images\\
   mask-00-00-0001.fits\\
   mask-00-00-0002.fits\\
   mask-00-00-0003.fits\\
\end{quote}
}
\texttt{  Output image = mask-00-00.fits, ncombine = 3}\\


\item (***IFU***) Verify that APSCATTER, APDEF, and APFIND are set as follows:
\\

\textbf{cl$>$ epar apscat}
\texttt{
\begin{tabbing}
        input = " "          \`     List of input images to subtract scattered light \\
       output = " "          \`     List of output corrected images \\
   (apertures = "")        \`       Apertures \\
     (scatter = "")           \`    List of scattered light images (optional) \\
  (references = "")       \`        List of aperture reference images \\
 (interactive = yes)      \`        Run task interactively? \\
        (find = yes)         \`     Find apertures? \\
    (recenter = no)     \`          Recenter apertures? \\
      (resize = no)        \`       Resize apertures? \\
        (edit = yes)        \`      Edit apertures? \\
       (trace = yes)       \`       Trace apertures? \\
    (fittrace = yes)       \`       Fit the traced points interactively? \\
    (subtract = yes)    \`          Subtract scattered light? \\
      (smooth = no)      \`         Smooth scattered light along the dispersion? \\
  (fitscatter = yes)       \`       Fit scattered light interactively? \\
   (fitsmooth = no)       \`        Smooth the scattered light interactively?  \\
        (line = 1)             \`   Dispersion line \\
        (nsum = 10)       \`        Number of dispersion lines to sum or median \\ 
      (buffer = 1.)          \`     Buffer distance from apertures \\
     (apscat1 = "")       \`        Fitting parameters across the dispersion \\
     (apscat2 = "")       \`        Fitting parameters along the dispersion \\
        (mode = "ql")
\end{tabbing}
}


\textbf{cl$>$ epar apdef}
\texttt{
\begin{tabbing}
       (lower = -1.5)      \`      Lower aperture limit relative to center\\
       (upper = 1.5)       \`      Upper aperture limit relative to center\\
   (apidtable = "")        \`      Aperture ID table \\
  (b\_function = "chebyshev")   \`  Background function\\
     (b\_order = 1)       \`        Background function order\\
    (b\_sample = "-10:-6,6:10")  \` Background sample regions\\
  (b\_naverage = -3)        \`      Background average or median\\
  (b\_niterate = 0)          \`     Background rejection iterations\\
(b\_low\_reject = 3.)        \`      Background lower rejection sigma\\
(b\_high\_rejec = 3.)       \`       Background upper rejection sigma\\
      (b\_grow = 0.)            \`  Background rejection growing radius\\
        (mode = "ql")
\end{tabbing}
}

\textbf{cl$>$ epar apfind}
\texttt{
\begin{tabbing}
        input = " "           \`    List of input images\\
          (apertures = "")        \`       Apertures\\
  (references = "")        \`       Reference images \\
 (interactive = yes)       \`       Run task interactively?\\
        (find = yes)           \`   Find apertures?\\
    (recenter = no)        \`       Recenter apertures?\\
      (resize = no)          \`     Resize apertures?\\
        (edit = yes)          \`    Edit apertures? \\
        (line = INDEF)     \`       Dispersion line\\
        (nsum = 1)           \`     Number of dispersion lines to sum or median\\
       nfind = 102          \`     Number of apertures to be found automatically\\
      (minsep = 5.)         \`      Minimum separation between spectra\\
      (maxsep = 1000.)     \`       Maximum separation between spectra\\
       (order = "increasing")   \`  Order of apertures\\
        (mode = "ql")   
 \end{tabbing}
}       
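APFIND's job at this stage is to locate \texttt{nfind = 102} fiber peaks in the cross-dispersion profile, no closer together than \texttt{minsep}. The idea can be sketched in Python (a hypothetical simplification; the real task also centers the peaks and handles many more cases):

```python
def find_apertures(profile, nfind, minsep):
    """Return up to nfind local maxima of a cross-dispersion profile,
    keeping the strongest and enforcing a minimum separation
    (cf. APFIND's nfind / minsep parameters)."""
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i - 1] < profile[i] >= profile[i + 1]]
    peaks.sort(key=lambda i: profile[i], reverse=True)   # strongest first
    kept = []
    for p in peaks:
        if all(abs(p - q) >= minsep for q in kept):
            kept.append(p)
        if len(kept) == nfind:
            break
    return sorted(kept)                                  # order = "increasing"

profile = [0, 1, 0, 0, 5, 0, 0, 3, 0]      # three fiber peaks of unequal strength
print(find_apertures(profile, 2, 2))        # → [4, 7]
```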
                 
\item MASK will iterate over the masks, finding and tracing the apertures and
   subtracting scattered light:\\

\textbf{cl$>$ epar mask}
\texttt{
\begin{tabbing}
         nome = "mask"         \`   Base name for files\\
          ver = 1          \`       Number of vertical mask positions\\
          hor = 5          \`       Number of horizontal mask positions\\
          ext = "fits"      \`      Filetype of images\\
       colavg = 5         \`        Number of columns to average during fits\\
      colstep = 1          \`       Number of columns to jump from one fit to the \\
    apscatter = ""\\
        lista = ""\\
        (mode = "ql")
 \end{tabbing}
}       


Type \textbf{:go} and press \textbf{Enter}:

\texttt{mask-00-00\\
Find apertures for mask-00-00?  (yes):}

Press \textbf{Enter}:

\texttt{Number of apertures to be found automatically (102):}

Press \textbf{Enter} again:

\texttt{Edit apertures for mask-00-00?  (yes):}

Press \textbf{Enter}. A graphic window should now appear. Enlarge it
and position the mouse near the left edge (see figure \emph{mask1.png}). Then type \textbf{wxwx}
to zoom in the X direction (see figure \emph{mask2.png}).

Verify the correspondence between the peaks and the numbered markings on top (the numbers
   are not important, only the positions). Type \textbf{wr} to see the next peaks. If there is an
extra mark (see figure \emph{mask3.png}), position the mouse over it and press \textbf{d}. If, on the
   other hand, a mark is missing (see figure \emph{mask4.png}), position the mouse over the
   corresponding peak and press \textbf{m}.
   
When finished, press \textbf{q}.

\texttt{Trace apertures for mask-00-00?  (yes):}

Press \textbf{Enter}:

\texttt{Fit traced positions for mask-00-00 interactively?  (yes):}

Type \textbf{NO} (all uppercase) and press \textbf{Enter}:

\texttt{Write apertures for mask-00-00 to database?  (yes):}

Press \textbf{Enter}:

\texttt{Subtract scattered light in mask-00-00?  (yes):}

Press \textbf{Enter} again:

\texttt{Fit scattered light interactively?  (yes):}

Type \textbf{NO} (all uppercase) and press \textbf{Enter}. This will take a while. Pay attention, because
   the next question will appear in the text window, not in the graphic window:

\texttt{mask-00-01\\
Find apertures for mask-00-01?  (yes):}

Repeat the procedure above for the remaining masks.

\item To verify the previous step, do:\\

\textbf{cl$>$ apedit mask}\\
\texttt{Edit apertures for mask?  (yes):}

Press \textbf{Enter}. In the graphic window, position the mouse near the left edge then type
   \textbf{wxwxwxwx} to zoom in the X direction and \textbf{wr} to see the next peaks (see figure 
\emph{mask5.png}).

Press \textbf{q}.

\texttt{Write apertures for mask to database?  (yes):}

Type \textbf{NO} (all uppercase) and press \textbf{Enter}.
Another validation step is to extract and reconstruct the images of the masks with the IFU task:\\

\textbf{cl$>$ epar ifu}
\texttt{
\begin{tabbing}
        image = "mask-00-00"   \` Input frame \\
       (trref = "CorrDomeFlat.ql")  \` Fiber transmission calibration\\
       (imref = "Neon.ql")      \`  Calibration arc reference\\
       (apref = "mask")       \`    Fiber apertures reference\\
      (ccdcor = no)           \`    Run ccdproc on input images ? (yes/no)\\
     (extract = yes)         \`     Extract fibers ? (yes/no)\\
          (ql = yes)           \`   Quick look ? (yes/no)\\
        (tcor = no)           \`    Transmission correction? (yes/no)\\
        (dcor = no)         \`      Dispersion correction? (yes/no)\\
     (gencube = no)     \`          Generate datacube ? (yes/no)\\
    (ldisplay = yes)       \`       Display reconstructed image ? (yes/no)\\
     (ccdproc = "")        \`       ccdproc parameters (use :e to edit)\\
     (dispcor = "")         \`      dispcor parameters (use :e to edit)\\
        (mode = "ql")
 \end{tabbing}
}       

Type \textbf{:go} and press \textbf{Enter}. After a while, the mask pattern will appear. This can
   also be done with the other masks. Press \textbf{q} to quit.

\item CALFIB will do a non-linear Gaussian fit to find the center and width of the fibers
at each dispersion line. This step is not necessary when using quick-look mode:\\

\textbf{cl$>$ epar calfib}
\texttt{
\begin{tabbing}
         nome = "mask"        \`      Base name for files \\
          ver = 1            \`       Number of vertical mask positions \\
          hor = 5           \`        Number of horizontal mask positions \\
          ext = "fits"        \`      Filetype of images \\
       colavg = 5          \`         Number of columns to average during fits \\
      colstep = 1          \`         Number of columns to jump from one fit to the n\\
        lista = ""\\
        (mode = "ql")
 \end{tabbing}
}       
Type \textbf{:go} and press \textbf{Enter}.
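What ``center and width'' mean here can be illustrated with a moment-based estimate in Python (a simplified stand-in: CALFIB performs a genuine non-linear Gaussian fit, which intensity-weighted moments only approximate):

```python
from math import sqrt

def fiber_center_width(profile):
    """Intensity-weighted centroid and RMS width of a fiber's
    cross-dispersion profile (moment-based stand-in for a Gaussian fit)."""
    total = sum(profile)
    center = sum(i * v for i, v in enumerate(profile)) / total
    var = sum(v * (i - center) ** 2 for i, v in enumerate(profile)) / total
    return center, sqrt(var)

# Symmetric three-pixel profile: the centroid falls on the middle pixel.
center, width = fiber_center_width([1, 4, 1])
```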


\item Combine the dome flats to create the transmission correction frames:\\

\textbf{cl$>$ epar flatcombine}
\texttt{
\begin{tabbing}
        input = "@domeflatlist"  \` List of flat field images to combine \\
      (output = "DomeFlat")     \` Output flat field root name\\
     (combine = "average")     \`  Type of combine operation\\
      (reject = "avsigclip")    \` Type of rejection\\
     (ccdtype = "")            \`  CCD image type to combine\\
     (process = no)          \`    Process images before combining?\\
     (subsets = no)          \`    Combine images by subset parameter?\\
      (delete = no)            \`  Delete input images after combining?\\
     (clobber = no)           \`    Clobber existing output image?\\
       (scale = "mode")    \`      Image scaling\\
     (statsec = "")         \`     Image section for computing statistics\\
        (nlow = 0)          \`     minmax: Number of low pixels to reject\\
       (nhigh = 1)         \`      minmax: Number of high pixels to reject\\
       (nkeep = 1)        \`       Minimum to keep (pos) or maximum to reject (neg)\\
       (mclip = yes)      \`       Use median in sigma clipping algorithms?\\
      (lsigma = 3.)        \`      Lower sigma clipping factor\\
      (hsigma = 3.)       \`       Upper sigma clipping factor\\
     (rdnoise = "0.")    \`        ccdclip: CCD readout noise (electrons)\\
        (gain = "1.")       \`     ccdclip: CCD gain (electrons/DN)\\
      (snoise = "0.")     \`       ccdclip: Sensitivity noise (fraction)\\
       (pclip = -0.5)       \`     pclip: Percentile clipping parameter\\
       (blank = 1.)         \`     Value if there are no pixels\\
        (mode = "ql")   
 \end{tabbing}
}       
	

\item (***IFU***) Extract the spectra of all apertures from the DomeFlat frames.  They will
    be used later for the transmission corrections. Now we can use the IFU
    task in the SOAR\_IFU package, with the following parameters:\\

\textbf{cl$>$ epar ifu}
\texttt{
\begin{tabbing}
        image = "DomeFlat"     \`  Input CCD image\\
       (trref = "")           \`   Fiber transmission calibration - unmasked flat\\
       (imref = "")  \`Calibration arc reference\\
       (apref = "mask")       \`   Fiber apertures reference\\
      (ccdcor = no)             \` Run ccdproc on input images ? (yes/no)\\
     (extract = yes)            \` Extract fibers ? (yes/no)\\
          (ql = yes)          \`   Quick look ? (yes/no)\\
        (tcor = no)          \`    Transmission correction? (yes/no)\\
        (dcor = no)          \`    Dispersion correction? (yes/no)\\
     (gencube = no)      \`        Generate datacube ? (yes/no)\\
    (ldisplay = yes)        \`     Display reconstructed image ? (yes/no)\\
     (ccdproc = "")         \`     ccdproc parameters (use :e to edit)\\
     (dispcor = "")           \`   dispcor parameters (use :e to edit)\\
        (mode = "ql")
 \end{tabbing}
}       
	

\item (***IFU***) We do not want to use the spectral information of the white lamp flat,
    because the lamp is not spectrally flat. To flatten the spectral
    response, first integrate the light in each fiber over the whole
    spectrum using BLKAVG. Then replicate the result over the dispersion
    axis with BLKREP:\\
    
    \textbf{cl$>$ imhead DomeFlat.ql}\\
    \texttt{DomeFlat.ql[2041,510][real]: Flat Cup}
    
    \textbf{cl$>$ blkavg DomeFlat.ql CorrDomeFlat.ql 2041 1}
    
    \textbf{cl$>$ blkrep CorrDomeFlat.ql CorrDomeFlat.ql 2041 1}
    
    \textbf{cl$>$ imhead CorrDomeFlat.ql}\\
    \texttt{CorrDomeFlat.ql[2041,510][real]: Flat Cup}
    
    \textbf{cl$>$ imstat DomeFlat.ql,CorrDomeFlat.ql}\\
    \texttt{\#               IMAGE      NPIX      MEAN    STDDEV       MIN       MAX\\
              DomeFlat.ql   1040910    38886.     7808.     2509.    61462.\\
          CorrDomeFlat.ql   1040910    38886.     6178.    14315.    56706.\\
	}
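The BLKAVG + BLKREP pair thus replaces each fiber's spectrum with its own mean, repeated along the dispersion axis: the mean is preserved while the spectral shape is removed, exactly as the \textbf{imstat} comparison above shows. A minimal Python sketch of the same operation:

```python
def flatten_spectral_response(fibers):
    """Replace each fiber's spectrum (one list per fiber) by its mean value
    repeated along the dispersion axis -- the BLKAVG + BLKREP operation."""
    return [[sum(spec) / len(spec)] * len(spec) for spec in fibers]

fibers = [[1, 2, 3], [4, 4, 4]]
print(flatten_spectral_response(fibers))   # → [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
```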

\item Combine your neon frames with IMCOMBINE.


\item (***IFU***) Extract the neon spectra for all fibers using the task IFU again:\\

\textbf{cl$>$ epar ifu}
\texttt{
\begin{tabbing}
        image = "Neon"       \`   Input CCD image \\
       (trref = "CorrDomeFlat.ql")  \` Fiber transmission calibration\\
       (imref = "Neon.ql")   \`        Calibration arc reference \\
       (apref = "mask")      \`     Fiber apertures reference \\
      (ccdcor = no)            \`   Run ccdproc on input images ? (yes/no) \\
     (extract = yes)         \`     Extract fibers ? (yes/no) \\
          (ql = yes)           \`   Quick look ? (yes/no) \\
        (tcor = no)           \`    Transmission correction? (yes/no) \\
        (dcor = no)          \`     Dispersion correction? (yes/no) \\
     (gencube = no)     \`          Generate datacube ? (yes/no) \\
    (ldisplay = yes)       \`        Display reconstructed image ? (yes/no) \\
     (ccdproc = "")        \`       ccdproc parameters (use :e to edit) \\
     (dispcor = "")          \`     dispcor parameters (use :e to edit) \\
        (mode = "ql")           
 \end{tabbing}
}       

\item Find the wavelength solution for one of the apertures in the neon
    frames using IDENTIFY:\\

\textbf{cl$>$ epar identify}
\texttt{
\begin{tabbing}
       images = "Neon.ql"     \`  Images containing features to be identified \\
        (section = "line 1")       \`   Section to apply to two dimensional images\\
    (database = "database")     \`   Database in which to record feature data\\
   (coordlist = "linelists\$idhenear.dat")  \`  User coordinate list\\
       (units = "")             \`   Coordinate units\\
        (nsum = "10")      \`        Number of lines/columns/bands to sum in 2D imag\\
       (match = -3.)         \`      Coordinate list matching limit\\
 (maxfeatures = 50)      \`          Maximum number of features for automatic identi\\
      (zwidth = 100.)        \`      Zoom graph width in user units\\
       (ftype = "emission")  \`     Feature type\\
      (fwidth = 7.)            \`    Feature width in pixels\\
     (cradius = 5.)         \`       Centering radius in pixels\\
   (threshold = 0.)       \`         Feature threshold for centering\\
      (minsep = 2.)          \`     Minimum pixel separation\\
    (function = "spline3")     \`   Coordinate function\\
       (order = 3)            \`    Order of coordinate function\\
      (sample = "*")       \`       Coordinate sample regions\\
    (niterate = 0)           \`     Rejection iterations\\
  (low\_reject = 3.)        \`       Lower rejection sigma\\
 (high\_reject = 3.)      \`         Upper rejection sigma\\
        (grow = 0.)          \`     Rejection growing radius\\
   (autowrite = no)     \`          Automatically write to database\\
    (graphics = "stdgraph")    \`   Graphics output device\\
      (cursor = "")             \`  Graphics cursor input\\
     crval = "6450"      \`      Approximate coordinate (at reference pixel)\\
        cdelt = ".3"            \`   Approximate dispersion\\
        (aidpars = "")        \`       Automatic identification algorithm parameters\\
        (mode = "ql")
 \end{tabbing}
}       
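Under the hood, IDENTIFY fits a smooth pixel-to-wavelength function through the
marked arc lines. The sketch below illustrates the idea in numpy with made-up
(pixel, wavelength) pairs, not a real \emph{idhenear.dat} line list, and a cubic
polynomial in place of IDENTIFY's default \texttt{spline3} function:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical (pixel, wavelength) pairs for identified arc lines --
# illustrative values only, not taken from a real idhenear.dat list.
pixels = np.array([100.0, 400.0, 800.0, 1200.0, 1600.0])
waves = np.array([5902.1, 6059.6, 6272.4, 6488.4, 6707.6])

# Fit a low-order dispersion solution, as IDENTIFY does (IDENTIFY
# defaults to a spline; a cubic polynomial is the same idea).
solution = Polynomial.fit(pixels, waves, 3)

# A good solution reproduces the line positions almost exactly.
residuals = waves - solution(pixels)
print(residuals)
```

The fitted function then converts any pixel position along the aperture into a
wavelength, which is what DISPCOR later applies to every fiber.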

\item Now use REIDENTIFY and let it find the solution for all other apertures:\\

\textbf{cl$>$ epar reident}
\texttt{
\begin{tabbing}
    reference = "Neon.ql"    \`    Reference image \\
       images = "Neon.ql"  \`      Images to be reidentified \\
      (interactive = "yes")      \`     Interactive fitting? \\
     (section = "line 1")    \`    Section to apply to two dimensional images \\
      (newaps = no)       \`       Reidentify apertures in images not in reference \\
    (override = no)       \`       Override previous solutions? \\
       (refit = yes)          \`   Refit coordinate function?  \\
       (trace = yes)      \`       Trace reference image? \\
        (step = "10")    \`        Step in lines/columns/bands for tracing an image \\
        (nsum = "1")      \`       Number of lines/columns/bands to sum \\
       (shift = "INDEF")     \`    Shift to add to reference features (INDEF to search) \\
      (search = INDEF)    \`       Search radius \\
       (nlost = 3)            \`   Maximum number of features which may be lost \\
     (cradius = 7.)        \`      Centering radius \\
   (threshold = 100.)     \`       Feature threshold for centering \\
 (addfeatures = no)       \`       Add features from a line list? \\
   (coordlist = "linelists\$idhenear.dat") \` User coordinate list \\
       (match = -3.)       \`      Coordinate list matching limit \\
 (maxfeatures = 50)       \`       Maximum number of features for automatic identification \\
      (minsep = 2.)          \`    Minimum pixel separation  \\
    (database = "database")   \`   Database \\
    (logfiles = "Neon.log")   \`   List of log files \\
    (plotfile = "")        \`      Plot file for residuals \\
     (verbose = no)         \`     Verbose output? \\
    (graphics = "stdgraph")   \`   Graphics output device \\
      (cursor = "")        \`      Graphics cursor input \\
       answer = "YES"         \`    Fit dispersion function interactively? \\
        crval =              \`    Approximate coordinate (at reference pixel) \\
        cdelt =              \`    Approximate dispersion \\
     (aidpars = "")        \`     Automatic identification algorithm parameters\\
        (mode = "ql")       
 \end{tabbing}
}       

\item (***IFU***) To validate the previous step, apply the dispersion correction to the
    extracted neon frame using IFU:\\
	
\textbf{cl$>$ epar ifu}
\texttt{
\begin{tabbing}
        image = "Neon.ql"    \`       Input CCD image \\
       (trref = "CorrDomeFlat.ql") \` Fiber transmission calibration \\ 
       (imref = "Neon.ql")    \`      Calibration arc reference \\
       (apref = "mask")   \`       Fiber apertures reference \\
      (ccdcor = no)         \`     Run ccdproc on input images ? (yes/no) \\
     (extract = no)         \`     Extract fibers ? (yes/no) \\
          (ql = yes)       \`      Quick look ? (yes/no) \\
        (tcor = no)        \`      Transmission correction? (yes/no) \\
        (dcor = yes)     \`        Dispersion correction? (yes/no) \\
     (gencube = no)  \`            Generate datacube ? (yes/no) \\
    (ldisplay = yes)    \`         Display reconstructed image ? (yes/no) \\
     (ccdproc = "")     \`         ccdproc parameters (use :e to edit) \\
     (dispcor = "")      \`        dispcor parameters (use :e to edit) \\
        (mode = "ql")           
	 \end{tabbing}
}       
    Then use DISPLAY on the resulting image (\emph{Neon.ql.wl}). The arc lines,
    which run vertically across the frame, must now be perfectly straight.
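The straightness can also be checked numerically: after dispersion correction every
fiber row samples the same wavelength grid, so the centroid of an arc line should not
drift from row to row. A toy numpy illustration on a synthetic frame (all values below
are made up):

```python
import numpy as np

# Synthetic wavelength-rectified arc frame: every row (fiber) holds the
# same Gaussian emission line, as a well-corrected Neon.ql.wl should.
npix, nrows = 512, 50
x = np.arange(npix, dtype=float)
line = np.exp(-0.5 * ((x - 256.0) / 3.0) ** 2)
frame = np.tile(line, (nrows, 1))

# Flux-weighted centroid of the line in each row; if the line is truly
# straight, the centroids show no drift from row to row.
centroids = (frame @ x) / frame.sum(axis=1)
print(centroids.std())
```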
	
\item (***IFU***) Now, to extract and reconstruct the objects using IFU:\\

\textbf{cl$>$ epar ifu}
\texttt{
\begin{tabbing}
        image = "obj0001"     \`    Input frame\\
       (trref = "CorrDomeFlat.ql")  \` Fiber transmission calibration \\
       (imref = "Neon.ql")    \`    Calibration arc reference\\
       (apref = "mask")       \`    Fiber apertures reference\\
      (ccdcor = no)          \`     Run ccdproc on input images ? (yes/no)\\
     (extract = yes)     \`         Extract fibers ? (yes/no)\\
          (ql = yes)         \`     Quick look ? (yes/no)\\
        (tcor = no)         \`      Transmission correction? (yes/no)\\
        (dcor = yes)         \`     Dispersion correction? (yes/no)\\
     (gencube = no)          \`     Generate datacube ? (yes/no)\\
    (ldisplay = yes)            \`  Display reconstructed image ? (yes/no)\\
     (ccdproc = "")        \`       ccdproc parameters (use :e to edit)\\
     (dispcor = "")         \`      dispcor parameters (use :e to edit)\\
        (mode = "ql")
\end{tabbing}
}       

  The images generated by IFU in quick-look mode are:

  \emph{obj0001.ql}:              Extracted spectra\\
  \emph{obj0001.ql.tc}:           Corrected for fiber transmission\\
  \emph{obj0001.ql.tc.wl}:        Wavelength corrected\\
  \emph{obj0001.ql.tc.wl.ldisp}:  Reconstructed 2D image\\
  \emph{obj0001.ql.tc.wl.dc}:     3D datacube\\

  For a complete reduction, the \emph{ql} extension is replaced by \emph{lin}.
  All reduction steps except extraction are optional; when a step is skipped,
the corresponding extension (\emph{tc}, \emph{wl}, \emph{dc}, \emph{ldisp}) will be missing.


\item (***IFU***) To display an object that was previously extracted,
    use \mbox{LDISPLAY:}\\
    
\textbf{cl$>$ epar ldisplay}
\texttt{
\begin{tabbing}
        input = "obj0001.ql.tc.wl"\`    Input fiber file \\
      (output = "obj0001.ql.tc.wl.ldisp") \`   Lens array image\\
    (ldispdir = "soar\_ifu\$conf") \`   Directory for config files\\
       (lconf = "soar\_ifu.cfg") \`   File containing lens array geometry\\
          (w1 = INDEF)    \`         Starting wavelength\\
          (w2 = INDEF)     \`        Ending wavelength\\
    (imagecur = "")        \`        Image cursor input\\
        (mode = "ql")
\end{tabbing}
}      

Use the following commands with LDISPLAY:

\textbf{e} -- examine the spectrum of the lens under the cursor with SPLOT. Use \texttt{q} to
      return to LDISPLAY;\\
\textbf{w} -- implements a virtual wavelength filter on the reconstructed image, given the 
starting and ending wavelengths;\\
\textbf{q} -- exit LDISPLAY.
  


There is, at the moment, no dedicated task for handling the Eucalyptus 3D datacube, but the
user can perform all the usual IFU-related procedures with simple IRAF tasks. Some hints are
presented below.
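For instance, once the datacube pixels are loaded into an array (with any FITS reader),
spaxel spectra and collapsed images are simple slices. A minimal numpy sketch, with a toy
array standing in for a real \emph{.dc} file and an assumed (wavelength, y, x) axis order:

```python
import numpy as np

# Toy stand-in for an Eucalyptus 3D datacube; with a real .dc file the
# pixel array would be loaded first (e.g. with any FITS reader).  The
# (wavelength, y, x) axis order is an assumption for illustration.
nw, ny, nx = 200, 16, 32
cube = np.linspace(0.0, 1.0, nw * ny * nx).reshape(nw, ny, nx)

spec = cube[:, 8, 10]      # spectrum of the spaxel at (y, x) = (8, 10)
white = cube.sum(axis=0)   # white-light image over the full range

print(spec.shape, white.shape)
```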

\item (***IFU***) To create a white light 2D image, use \mbox{LDISPLAY:}\\

\textbf{cl$>$ epar ldisplay}
\texttt{
\begin{tabbing}
        input = "obj0001.ql.tc.wl"\`    Input fiber file \\
      (output = "white0001") \`   Lens array image\\
    (ldispdir = "soar\_ifu\$conf") \`   Directory for config files\\
       (lconf = "soar\_ifu.cfg") \`   File containing lens array geometry\\
          (w1 = INDEF)    \`         Starting wavelength\\
          (w2 = INDEF)     \`        Ending wavelength\\
    (imagecur = "")        \`        Image cursor input\\
        (mode = "ql")
\end{tabbing}
}      

The white light reconstructed image will be displayed. Once you exit LDISPLAY (\textbf{q})
the image will be saved with the name \emph{white0001.fits} (in this example).

To construct a passband image, change the \texttt{output} parameter in LDISPLAY
(otherwise your white light image will be overwritten) and set the wavelength limits (parameters
\texttt{w1} and \texttt{w2}). The images produced by LDISPLAY may be displayed (with the
DISPLAY task) and combined (with IMCOMBINE or IMARITH) as usual.
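What the \texttt{w1}/\texttt{w2} filter does before the geometric reconstruction can be
sketched in numpy on the row-stacked spectra; the array shape, wavelength zero point, and
dispersion below are illustrative assumptions, not Eucalyptus values:

```python
import numpy as np

# Toy stand-in for obj0001.ql.tc.wl: one row per fiber on a linear
# wavelength grid (fiber count, zero point, dispersion are made up).
nfibers, npix = 300, 500
spectra = np.ones((nfibers, npix))
waves = 6000.0 + 0.6 * np.arange(npix)

# Keep only the w1..w2 band and collapse it to one flux per fiber;
# LDISPLAY would then map these fluxes onto the lens-array geometry.
w1, w2 = 6100.0, 6150.0
band = (waves >= w1) & (waves <= w2)
flux = spectra[:, band].sum(axis=1)

print(flux.shape)
```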

\item (***IFU***) To examine the spectrum of each individual lens you may use LDISPLAY
as described above (item 21). A simple way to perform arithmetic operations on the spectra
of two or more lenses (to sum lenses 204 and 235, for instance) is with the task SARITH:

\textbf{cl$>$ epar sarith}
\texttt{
\begin{tabbing}
	input1 = "obj0001.ql.tc.wl[*,204]"   \`         List of input spectra\\
           op = "+"   \`          Operation\\
       input2 = "obj0001.ql.tc.wl[*,235]" \`          List of input spectra or constants\\
       output = "sum.fits"  \`        List of output spectra\\
          (w1 = INDEF)\`          Starting wavelength\\
          (w2 = INDEF) \`         Ending wavelength\\
   (apertures = "")    \`         List of input apertures or columns/lines\\
       (bands = "")    \`         List of input bands or lines/bands\\
       (beams = "")     \`        List of input beams or echelle orders\\
   (apmodulus = 0)      \`        Input aperture modulus (0=none)\\
     (reverse = no)    \`         Reverse order of operands in binary operation?\\
   (ignoreaps = yes)   \`         Ignore second operand aperture numbers?\\
      (format = "onedspec")\`     Output spectral format\\
    (renumber = no)    \`         Renumber output apertures?\\
      (offset = 0)     \`         Output aperture number offset\\
     (clobber = yes)    \`        Modify existing output images?\\
       (merge = no)     \`        Merge with existing output images?\\
       (rebin = yes)    \`        Rebin to exact wavelength region?\\
      (errval = 0.)     \`        Arithmetic error replacement value\\
     (verbose = yes)    \`        Print operations?\\
        (mode = "ql")           
\end{tabbing}
}
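Conceptually, this SARITH call just adds two rows of the extracted frame. A minimal numpy
sketch with a toy array (shapes and values are illustrative, and the 1-based aperture
numbering is an assumption):

```python
import numpy as np

# Toy stand-in for an extracted frame: one row per fiber, one column
# per pixel (shapes and values are illustrative, not real data).
nfibers, npix = 300, 1024
spectra = np.ones((nfibers, npix))
spectra[203] *= 2.0   # "lens 204" -- assuming 1-based aperture numbers
spectra[234] *= 3.0   # "lens 235"

# SARITH with op = "+" on apertures 204 and 235 amounts to:
summed = spectra[203] + spectra[234]
print(summed[:3])
```

SARITH additionally carries the dispersion information along, which a bare array sum does
not; that is why the task, rather than IMARITH on raw rows, is the recommended route.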

Another way (perhaps better suited to operations involving a large number of lenses) is to use
SCOPY to create an individual file for each spectrum:

\textbf{cl$>$ epar scopy}
\texttt{
\begin{tabbing}
  	input = "obj0001.ql.tc.wl"\` List of input spectra\\
       output = "sub"     \`       List of output spectra\\
          (w1 = INDEF)   \`       Starting wavelength\\
          (w2 = INDEF)  \`        Ending wavelength\\
   (apertures = "240-249")  \`        List of input apertures or columns/lines\\
       (bands = "")     \`        List of input bands or lines/bands\\
       (beams = "")     \`        List of beams or echelle orders\\
   (apmodulus = 0)      \`        Input aperture modulus (0=none)\\
      (format = "onedspec") \`    Output spectra format\\
    (renumber = no)        \`     Renumber output apertures?\\
      (offset = 0)      \`        Output aperture number offset\\
     (clobber = no)     \`        Modify existing output images?\\
       (merge = no)     \`        Merge with existing output images?\\
       (rebin = yes)    \`        Rebin to exact wavelength region?\\
     (verbose = yes)   \`         Print operations?\\
        (mode = "ql")           
\end{tabbing}
}

This last example will create 10 spectra (named \emph{sub.0240.fits} through \emph{sub.0249.fits}).
To create spectra for all lenses, set the parameter \texttt{apertures} to the null string.
Operations may now be applied to these spectra using SCOMBINE or SARITH.
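The row-splitting that SCOPY performs can be mimicked in numpy; the sketch below uses a
dictionary in place of the individual FITS files, and assumes the apertures number the rows
starting from 1:

```python
import numpy as np

# Toy multispec frame: one row per fiber (values are illustrative).
nfibers, npix = 300, 1024
spectra = np.arange(nfibers * npix, dtype=float).reshape(nfibers, npix)

# SCOPY with apertures = "240-249" writes ten 1D spectra; here a dict
# mimics the sub.0240 ... sub.0249 files, assuming apertures map to
# rows counted from 1.
subset = {f"sub.{ap:04d}": spectra[ap - 1] for ap in range(240, 250)}
print(len(subset), sorted(subset)[0])
```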


\end{enumerate}  


\end{document}