Reference Guide

Release

3.0

Date

Oct 04, 2022

Warning

This “Reference” is still a work in progress; some of the material is not organized, and several aspects of PAPI are not yet covered in sufficient detail.

This is a reference guide for developers of the application.

reduce.calDark — Dark combination

class reduce.calDark.MasterDark(file_list, temp_dir, output_filename='/tmp/mdark.fits', texp_scale=False, bpm=None, normalize=False, show_stats=False, no_type_checking=False)[source]

Create a master Dark from a list of dark files (single or MEF files); all must have the same properties (TEXP, NCOADDS, READMODE).

file_list: list

A list of dark files

temp_dir: str

Directory where temp files will be created

output_filename: str

Output filename of the master dark file to be created

texp_scale: bool

If true, scale the darks before the combination.

bpm: str

Bad pixel Map filename

normalize: bool

If True, a normalization to 1 second is done after the dark combination, i.e., the master dark will have the count level of a 1-second dark frame.

show_stats: bool

If True, some statistics will be shown

no_type_checking: bool

If True, the type of the input files (dark, flat, …) will not be checked

createMaster()[source]

Create a master DARK from the dark file list.

The method must be called only after the object was properly initialized.
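As a rough sketch of what createMaster() does (the actual PAPI implementation may differ; `combine_darks` and its parameters are illustrative names), the core combination can be expressed with NumPy:

```python
import numpy as np

def combine_darks(darks, exptimes=None, texp_scale=False, normalize=False):
    """Illustrative sketch: median-combine dark frames that share
    TEXP/NCOADDS/READMODE; optionally scale each by its exposure time
    first, or normalize the master to a 1-second count level."""
    stack = np.asarray(darks, dtype=float)
    if texp_scale and exptimes is not None:
        # scale every dark to counts per second before combining
        stack = stack / np.asarray(exptimes, dtype=float)[:, None, None]
    master = np.median(stack, axis=0)
    if normalize and exptimes is not None and not texp_scale:
        # master dark with the count level of a 1-second dark frame
        master = master / float(exptimes[0])
    return master
```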

reduce.calDark.main(arguments=None)[source]

reduce.calDarkModel — Dark model combination

class reduce.calDarkModel.MasterDarkModel(input_files, temp_dir='/tmp/', output_filename='/tmp/mdarkmodel.fits', bpm=None, show_stats=True)[source]

Class used to build and manage a master calibration dark model

As input, a series of dark exposures with a range of exposure times is given. A linear fit of data number versus exposure time is done at each pixel position. Each pixel position in the output map represents the slope of the fit done at that position and is thus the dark current expressed in units of data numbers per second.
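The per-pixel linear fit can be sketched with NumPy (`fit_dark_model` is a hypothetical name; the actual implementation may differ):

```python
import numpy as np

def fit_dark_model(dark_stack, exptimes):
    """Least-squares fit of data number vs. exposure time at every
    pixel, vectorized. Returns (bias, dark_current) maps, where the
    slope is the dark current in DN/sec."""
    t = np.asarray(exptimes, dtype=float)
    t_mean = t.mean()
    d_mean = dark_stack.mean(axis=0)
    # covariance of counts with exposure time, per pixel
    cov = ((t - t_mean)[:, None, None] * (dark_stack - d_mean)).sum(axis=0)
    var = ((t - t_mean) ** 2).sum()
    slope = cov / var                     # plane 1: dark current (DN/sec)
    intercept = d_mean - slope * t_mean   # plane 0: bias
    return intercept, slope
```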

input_files: list

A list of dark files

temp_dir: str

Directory where temporary files will be created

output_filename: str

Filename for the master dark model obtained

bpm: str

Input bad pixel mask or NULL (optional)

If no error, a FITS file (nx*ny) with 2 planes (extensions): plane 0 = bias, plane 1 = dark current in DN/sec

DARKCURRENT: the median dark current in data numbers per second, taken from the median value of the output dark current map.

  • Data model for MEF files (PANIC)

createDarkModel()[source]

Create a master DARK model from the dark file list

reduce.calDarkModel.main(arguments=None)[source]
reduce.calDarkModel.my_mode(data)[source]

A simple (efficient and precise?) way to find the mode of an array (not used)

reduce.calDomeFlat — Dome Flat combination

class reduce.calDomeFlat.MasterDomeFlat(input_files, temp_dir='/tmp/', output_filename='/tmp/mdflat.fits', normal=True, median_smooth=False)[source]

Class used to build and manage a master calibration dome flat.

Dome flats are not very good for low spatial frequency QE variation across the chip (large-scale variation), but are quite reasonable for high spatial frequency (small-scale variations).

  1. Check the EXPTIME, TYPE (dome) and FILTER of each flat frame

  2. Separate lamp ON/OFF dome flats

  3. Combine the LAMP-OFF flat frames

  4. Combine the LAMP-ON flat frames

  5. Subtract lampON-lampOFF (implicit dark subtraction)

  6. (optionally) Normalize the flat-field with median (robust estimator)

# NOTE: We do NOT subtract any MASTER_DARK; it is not required for DOME FLATS (the dark is removed implicitly because both ON/OFF flats are taken with the same exposure time)

  • Multiply by the BPM

  • Reject flat images whose background differs by more than 1% from the others, or by more than 3 sigma

  • Optional median smooth
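The ON/OFF combination at the heart of these steps can be sketched as follows (`make_dome_flat` is a hypothetical name; the real implementation may differ):

```python
import numpy as np

def make_dome_flat(on_frames, off_frames, normalize=True):
    """Median-combine the LAMP-ON and LAMP-OFF dome flats and subtract
    them; the dark is removed implicitly because both sets share the
    same exposure time. Optionally normalize with the median."""
    on = np.median(np.asarray(on_frames, dtype=float), axis=0)
    off = np.median(np.asarray(off_frames, dtype=float), axis=0)
    flat = on - off                    # lampON - lampOFF
    if normalize:
        flat = flat / np.median(flat)  # robust normalization
    return flat
```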

createMaster()[source]

Create a master Dome FLAT from the dome flat file list

reduce.calDomeFlat.main(arguments=None)[source]

reduce.calTwFlat — Twilight Flat combination

exception reduce.calTwFlat.ExError[source]
class reduce.calTwFlat.MasterTwilightFlat(flat_files, master_dark_model, master_dark_list, output_filename='/tmp/mtwflat.fits', lthr=10000, hthr=40000, bpm=None, normal=True, temp_dir='/tmp/', median_smooth=False)[source]

Class used to build and manage a master calibration twilight flat.

Twilight flats are quite good for low-spatial frequency QE variation across the chip (large scale variation), but not for high-spatial frequency (small scale variations).

1. Check the TYPE (twilight) and FILTER of each flat frame. If any frame in the list mismatches the FILTER, the master twflat will skip that frame and continue with the next ones. EXPTIMEs do not need to be the same, since EXPTIME scaling with the ‘mode’ is done

1.1: Check for over- or under-exposed frames

2. Subtract a proper MASTER_DARK; this is required for TWILIGHT FLATS because they may have different EXPTIMEs

3. Combine (with sigma-clip rejection) the dark-subtracted flat frames, scaling by ‘mode’

  4. Normalize the tw-flat by dividing by the mean value
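A minimal sketch of the dark subtraction, scaling, and sigma-clipped combination (`make_twilight_flat` is a hypothetical name; the median is used here as a stand-in for the mode):

```python
import numpy as np

def make_twilight_flat(frames, master_dark, nsigma=3.0):
    """Dark-subtract each frame, scale by a robust level estimate
    (median as a proxy for the mode), sigma-clip combine, and
    normalize by the mean value."""
    scaled = []
    for f in frames:
        d = np.asarray(f, dtype=float) - master_dark
        scaled.append(d / np.median(d))        # level (EXPTIME) scaling
    stack = np.array(scaled)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    # reject pixels deviating more than nsigma from the stack mean
    clipped = np.where(np.abs(stack - mean) > nsigma * std, np.nan, stack)
    flat = np.nanmean(clipped, axis=0)
    return flat / np.mean(flat)                # normalize by the mean
```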

Author:

JMIbanez, IAA-CSIC

createMaster()[source]

Create a master Tw FLAT from the flat file list

reduce.calTwFlat.main(arguments=None)[source]

reduce.calBPM — Bad Pixel Map generation

class reduce.calBPM.BadPixelMask(input_file, outputfile=None, lthr=4.0, hthr=4.0, temp_dir='/tmp', raw_flag=False)[source]

Generate a bad pixel mask from a list of dark-corrected dome flat images (extracted from the VIRCAM pipeline, vircam_genbpm).

A list of dark-corrected dome flat images is given. A master flat is created from all the input flats in the list, and each input flat is then divided by the master. Pixels that are above or below the threshold (in sigma) in the resulting image are marked as bad. Any pixel which has been marked as bad in more than a quarter of the input images is defined as bad in the output mask.

create()[source]
  1. Combine all of the dome flats into a master

  2. Divide the resulting image by its median -> normalized MASTER_FLAT

  3. Create and zero the rejection mask

  4. Loop over all input images:

    4.1 Divide each by the normalized master flat
    4.2 Divide the resulting image by its median
    4.3 Get the standard deviation of the image
    4.4 Define the bad pixels

  5. Go through the rejection mask and if a pixel has been marked bad more than a set number of times, then it is defined as bad

create_JM()[source]
  1. Combine all of the dome flats into a master

  2. Divide the resulting image by its median -> normalized MASTER_FLAT

  3. Create and zero the rejection mask

  4. Loop for all input images and divide each by the master flat

    4.1 Divide the resulting image by its median
    4.2 Get the standard deviation of the image
    4.3 Define the bad pixels

  5. Go through the rejection mask and if a pixel has been marked bad more than a set number of times, then it is defined as bad
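The rejection logic of the steps above can be sketched as follows (`make_bpm` is a hypothetical name and the thresholds are illustrative; the VIRCAM-derived code may differ in detail):

```python
import numpy as np

def make_bpm(flat_frames, nsigma=4.0, min_frac=0.25):
    """Build a normalized master flat, divide each input flat by it,
    flag outlier pixels, and mark as bad any pixel flagged in more
    than min_frac of the input frames."""
    stack = np.asarray(flat_frames, dtype=float)
    master = np.median(stack, axis=0)
    master /= np.median(master)                 # normalized MASTER_FLAT
    reject = np.zeros(master.shape, dtype=int)  # rejection mask
    for frame in stack:
        ratio = frame / master
        ratio /= np.median(ratio)               # divide by its median
        sigma = ratio.std()                     # standard deviation
        reject += (np.abs(ratio - 1.0) > nsigma * sigma)  # bad pixels
    # bad if flagged in more than a quarter of the inputs
    return (reject > min_frac * len(stack)).astype(np.uint8)
```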

reduce.calBPM.applyBPM(filename, master_bpm, output_filename, overwrite=False)[source]

Apply a BPM to an input file, setting the bad pixels to NaN.

filename: str

Input file to apply the BPM. MEF files are not supported yet.

master_bpm: str

The master BPM to be applied to the input file. Bad pixels are masked with 1’s and good pixels with 0’s.

output_filename: str

Filename of the new file created with bad pixels masked to NaN.

overwrite: bool

If True, the input file will be masked with NaN on bad pixels. If False, the input file will not be modified, and bad pixels will be masked in the output_filename.

output_filename: str

If successful, the output filename with the masked pixels.

  • add support for MEF files

reduce.calBPM.fixPix(im, mask, iraf=False)[source]

Applies a bad-pixel mask to the input image (im), creating an image whose masked values are replaced with a bilinear interpolation from nearby pixels. Probably only good for isolated bad pixels.

Usage:

fixed = fixpix(im, mask, [iraf=])

Inputs:

im = the image array
mask = an array that is True (or >0) where im contains bad pixels
iraf = True to use IRAF.fixpix; False to use numpy and a loop over all pixels (extremely slow)

Outputs:

fixed = the corrected image

v1.0.0 Michael S. Kelley, UCF, Jan 2008

v1.1.0 Added the option to use IRAF’s fixpix. MSK, UMD, 25 Apr 2011

  • The non-IRAF algorithm is extremely slow.

reduce.calBPM.main(arguments=None)[source]

reduce.calGainMap — Gain Map generation

class reduce.calGainMap.DomeGainMap(filelist, output_filename='/tmp/domeFlat.fits', bpm=None)[source]

Compute the gain map from a list of dome (lamp-on,lamp-off) frames

create()[source]

Creation of the Gain map

class reduce.calGainMap.GainMap(flatfield, output_filename='/tmp/gainmap.fits', bpm=None, do_normalization=True, mingain=0.5, maxgain=1.5, nxblock=16, nyblock=16, nsigma=5)[source]

Build a Gain Map from a Flat Field image (dome, twilight, science-sky)

JMIbanez, IAA-CSIC

create()[source]

Given a NOT normalized flat field, compute the gain map taking into account the input parameters and an optional Bad Pixel Map (bpm).

If successful, returns the output filename of the generated gain map image, where bad pixels = 0.0
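The gain-map computation can be sketched as follows (`make_gain_map` is a hypothetical name; the clipping on a per-block basis with nxblock/nyblock/nsigma is omitted here for brevity):

```python
import numpy as np

def make_gain_map(flat, mingain=0.5, maxgain=1.5, bpm=None):
    """Normalize a (non-normalized) flat field by its median and zero
    out pixels outside [mingain, maxgain] or flagged in an optional
    bad pixel mask; bad pixels = 0.0, as documented."""
    gain = np.asarray(flat, dtype=float) / np.median(flat)
    bad = (gain < mingain) | (gain > maxgain)
    if bpm is not None:
        bad |= (bpm > 0)
    gain[bad] = 0.0
    return gain
```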

class reduce.calGainMap.SkyGainMap(filelist, output_filename='/tmp/gainmap.fits', bpm=None, temp_dir='/tmp/')[source]

Compute the gain map from a list of sky frames

create()[source]

Creation of the Gain map

class reduce.calGainMap.TwlightGainMap(flats_filelist, master_dark_list, output_filename='/tmp/gainmap.fits', bpm=None, temp_dir='/tmp/')[source]

Compute the gain map from a list of twilight frames

create()[source]

Creation of the Gain map

reduce.calGainMap.get_dev(gain, naxis1, naxis2, chip, nx, ny)[source]
reduce.calGainMap.main(arguments=None)[source]

reduce.calCombineFF — Dome and sky master flats combination

A trick to combine domeFF and skyFF:

Often for a run you have dome flats with an accumulated number of electrons in the millions, but a poor match in illumination and color to the dark sky. You also have a limited number of twilight flats or dark-sky images that can be combined to make a dark-sky flat, but the total counts per pixel in either set of flats is not very high. A fairly standard procedure is to ‘median-smooth’ the dome and twilight or dark-sky flats. A median smoothing replaces each pixel with the median of the pixel values in a box of a given size on a side. The result is an image that has been smoothed on the scale of the smoothing box size. A procedure for taking advantage of the facts that the large-scale flat-field variation of the dark-sky flat matches that of the program frames and that the dome flats have very high S/N in each pixel goes as follows:

(a) Median-smooth the combined dark-sky flat; this improves the S/N and preserves the large-scale features of the flat.

(b) Median smooth the combined dome flats using the same filter size as was used for the dark-sky flat.

(c) Divide the combined dome flat by its median-smoothed version. The result is a frame that is flat on large scales but contains all the high spatial frequency flat-field information.

(d) Now multiply the smoothed dark-sky frame and the result of the division in the previous step. You now have a flat-field with the low spatial frequency properties of the dark-sky flat combined with the high S/N, high spatial frequency properties of the dome flat.
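Assuming SciPy's median filter for the smoothing, steps (a)–(d) can be sketched as follows (`combine_ff` and the box size are illustrative names, not the PAPI API):

```python
import numpy as np
from scipy.ndimage import median_filter

def combine_ff(dome_ff, sky_ff, box=15):
    """Combine a dome flat and a dark-sky flat:
    combFF = smooth(skyFF) * (domeFF / smooth(domeFF))."""
    sky_s = median_filter(sky_ff, size=box)    # step (a)
    dome_s = median_filter(dome_ff, size=box)  # step (b)
    high_freq = dome_ff / dome_s               # step (c)
    return sky_s * high_freq                   # step (d)
```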

reduce.calCombineFF.combineFF(domeFF, skyFF, combinedFF=None)[source]

Combine a dome FF and a sky FF in order to obtain a FF with the low spatial frequency properties of the dark-sky flat combined with the high S/N, high spatial frequency properties of the dome FF.

Basically:

skyFF’ = smooth(skyFF)
domeFF’ = smooth(domeFF)
domeFF’’ = domeFF / domeFF’
combFF = skyFF’ * domeFF’’

domeFF: str

Input filename of the dome FF

skyFF: str

Input filename of the sky FF

combinedFF: str

Output filename of the combined FF generated

If all was successful, the name of the output file is returned

reduce.calSuperFlat — Super master flats generation

class reduce.calSuperFlat.SuperSkyFlat(filelist, output_filename='/tmp/superFlat.fits', bpm=None, norm=True, temp_dir='/tmp/', median_smooth=False, norm_value='median', check=False)[source]

Class used to build a super sky Flat from a dither set of science frames containing objects.

filelist: list

A list of FITS files or a directory

output_filename: str

Output filename of the super flat to be created

If no error return 0

create()[source]

Create the super sky flat using sigma-clipping algorithm (and supporting MEF)

reduce.calSuperFlat.main(arguments=None)[source]

reduce.applyDarkFlat — Dark and Flat correction

class reduce.applyDarkFlat.ApplyDarkFlat(sci_raw_files, mdark=None, mflat=None, bpm=None, out_dir_='/tmp', bpm_action='none', force_apply=False, norm=False)[source]

Class used to subtract a master dark (or dark model), divide by a master flat field, and apply a BPM.

Applies a master dark, master flat and BPM to a list of non-calibrated science files. For each file processed, a new file is generated with the same filename but with the suffix ‘_DF.fits’ and/or ‘_BPM.fits’.

sci_raw_files: list

A list of science raw files to calibrate

mdark: str

Master dark to subtract; it can be a master dark model to produce a proper scaled master dark to subtract.

mflat: str

Master flat to divide by (not normalized !)

bpm: str

Input bad pixel mask or None

out_dir: str

Output directory where calibrated files will be created

bpm_action: str
Action to perform with BPM:
  • fix: fix the bad pixels

  • grab: set the bad pixels to ‘NaN’

  • none: do nothing with the BPM (default)

force_apply: bool

If True, no type checking against the data FITS header (IMAGETYPE) will be done.

norm: bool

If true, perform Flat-Field normalization (wrt median chip_1).

file_list

If no error, return the list of files generated as result of the current processing

apply()[source]

Applies master DARK and/or FLAT and/or BPM to the science file list. Both master DARK and FLAT are optional, i.e., each one can be applied even if the other is not present.

If the dark EXPTIME matches the science EXPTIME, a straight subtraction is done; otherwise, a Master_Dark_Model is required in order to compute a scaled dark.

Note: This routine works fine with MEF files, data cubes, and MEF cubes. This means that if sci_data is a cube (3D array), the dark is subtracted from each layer and the flat is also applied to each layer. Thus, the calibration works correctly for data cubes, no matter whether they are MEF or single-HDU FITS.

exception reduce.applyDarkFlat.ExError[source]

This class is for a general execution exception.

reduce.applyDarkFlat.fixpix(image_data, mask_data)[source]

Clean masked (bad) pixels from an input image. Each masked pixel is replaced by the median of the unmasked pixels in a 2D window centered on it. If all pixels in the window are masked, the window is increased in size until unmasked pixels are found.

image_data: the image array to fix

mask_data: an array that is True (or >0) where image contains bad pixels

The cleaned image array; otherwise an exception is raised.
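The growing-window median replacement can be sketched as follows (a simple per-pixel loop; the actual implementation may be vectorized differently):

```python
import numpy as np

def fixpix(image, mask):
    """Replace each masked pixel with the median of the unmasked
    pixels in a window centered on it, growing the window until
    unmasked pixels are found."""
    fixed = np.array(image, dtype=float)
    ny, nx = fixed.shape
    for y, x in zip(*np.where(mask > 0)):
        half = 1
        while True:
            y0, y1 = max(0, y - half), min(ny, y + half + 1)
            x0, x1 = max(0, x - half), min(nx, x + half + 1)
            win = np.asarray(image, dtype=float)[y0:y1, x0:x1]
            good = win[mask[y0:y1, x0:x1] == 0]   # unmasked neighbors
            if good.size:
                fixed[y, x] = np.median(good)
                break
            half += 1                             # grow the window
    return fixed
```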

reduce.applyDarkFlat.fixpix_old(im, mask, iraf=False)[source]

Applies a bad-pixel mask to the input image (im), creating an image whose masked values are replaced with a bilinear interpolation from nearby pixels. Probably only good for isolated bad pixels.

Usage:

fixed = fixpix(im, mask, [iraf=])

Inputs:

im = the image array
mask = an array that is True (or >0) where im contains bad pixels
iraf = True to use IRAF.fixpix; False to use numpy and a loop over all pixels (extremely slow)

Outputs:

fixed = the corrected image

v1.0.0 Michael S. Kelley, UCF, Jan 2008

v1.1.0 Added the option to use IRAF’s fixpix. MSK, UMD, 25 Apr 2011

  • The non-IRAF algorithm is extremely slow.

reduce.applyDarkFlat.main(arguments=None)[source]

reduce.NonLinearity — Non-Linearity correction

reduce.dxtak — Cross-talk correction

From “Characterization, Testing and Operation of Omega2000 Wide Field Infrared Camera”, Zoltan Kovacs et al.

Although bright stars can saturate the detector, resetting of the full array prevents this excess in the pixel values from causing any residual image effects in the following image of the dithering. Nevertheless, the saturated pixels generate crosstalk between the data transfer lines of the different channels of the quadrant in which they are situated. The data lines of the channels are organized in parallel, and there can be interference between the data lines transferring the high video signal and the neighbouring ones. As a result of this crosstalk, a series of spots, with distances of 128 pixels from each other, appears in the whole quadrant, corresponding to each channel. The average values of the spots were lower than the background signal, and their difference was a few percent, which is large enough to degrade the photometric correctness at the places where they are situated. These spots could not be measured in the raw images, but they were well discernible in the reduced frames (Fig. 9). This effect was a general feature of the operation of all the HAWAII-2 detectors we tested and should be considered in the choice of pointing positions for any field in future observations.

reduce.dxtalk.de_crosstalk_PANIC(in_image, out_image=None, overwrite=False)[source]

Remove cross-talk in PANIC single detectors (2kx2k). The frame structure of a full PANIC frame is as follows:

I--------I--------I
I        I        I
I   Q3   I   Q4   I
I        I        I
I--------I--------I
I        I        I
I   Q1   I   Q2   I
I        I        I
I--------I--------I

where each quadrant (Qn) is 2kx2k and has 32 horizontal stripes, each 64 pixels wide. So all the quadrants are processed in the same way.

The procedure does not need to read the DET_ID keyword to know the detector or how the stripes are distributed along the detector. All detectors have the same orientation of the stripes, so it does not matter which detector we are processing!

DET_ID = SG1 – Q1 - vertical stripes
DET_ID = SG2 – Q2 - vertical stripes
DET_ID = SG3 – Q3 - vertical stripes
DET_ID = SG4 – Q4 - vertical stripes

reduce.dxtalk.de_crosstalk_PANIC_H4RG(in_image, out_image=None, overwrite=False)[source]

Remove cross-talk in the PANIC H4RG detector (4kx4k). The H4RG is 4kx4k and has 64 horizontal stripes, each 64 pixels wide. So the full detector is processed in the same way.

The procedure does not need to read the DET_ID keyword to know the detector or how the stripes are distributed along the detector.
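As an illustrative sketch of stripe-based correction (`de_stripe` is a hypothetical name; the exact correction PAPI applies per stripe may differ), one can remove each readout stripe's median offset:

```python
import numpy as np

def de_stripe(data, stripe_height=64):
    """Split the detector into horizontal readout stripes and remove
    each stripe's median offset relative to the global median."""
    out = np.array(data, dtype=float)
    global_med = np.median(out)
    for y0 in range(0, out.shape[0], stripe_height):
        stripe = out[y0:y0 + stripe_height, :]   # view into out
        stripe -= np.median(stripe) - global_med # in-place correction
    return out
```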

reduce.dxtalk.de_crosstalk_PANIC_full_detector(in_image, out_image=None, overwrite=False)[source]

==> NOT USED !!

Remove cross-talk in PANIC images (4kx4k). The expected frame structure is as follows:

I--------I--------I
I        I        I
I   Q4   I   Q3   I
I        I        I
I--------I--------I
I        I        I
I   Q1   I   Q2   I
I        I        I
I--------I--------I

where each quadrant (Qn) is 2kx2k and has 32 horizontal stripes, each 64 pixels high. So all the quadrants are processed in the same way.

in_image: str

Filename of the FITS image from which to remove the crosstalk. It must be a single FITS file; MEF is not supported yet.

out_image: str

Filename of the output de-crosstalked image.

overwrite: bool

Flag to indicate if the out_image can be overwritten.

reduce.dxtalk.de_crosstalk_o2k(in_image, out_image=None, overwrite=False)[source]

Remove cross-talk in O2k images (2kx2k).

The expected image structure is as follows:

I--------I--------I
I        I        I
I   Q4   I   Q3   I
I        I        I
I--------I--------I
I        I        I
I   Q1   I   Q2   I
I        I        I
I--------I--------I

where each quadrant (Qn) is 1kx1k and has 8 horizontal (Q1, Q3) or vertical (Q2, Q4) stripes, each 128 pixels long (wide or high). So the quadrant pairs (Q1, Q3) and (Q2, Q4) are processed in the same way.

reduce.dxtalk.main(arguments=None)[source]
reduce.dxtalk.remove_crosstalk(in_image, out_image=None, overwrite=False)[source]

Remove cross-talk in O2k or PANIC images.

in_image: str

Input filename to be de-crosstalked

out_image: str

Output filename of the de-crosstalked image

overwrite: bool

If True, the input file ‘in_image’ will be overwritten; otherwise, the ‘out_image’ filename will be used as output.

If all was successful, the name of the output file is returned

astromatic.swarp — SWARP wrapper

A wrapper for SWARP (Astromatic.net, E.Bertin).

This wrapper allows you to configure SWARP, run it and get back its outputs without the need to edit SWARP configuration files. By default, configuration files are created on the fly, and SWARP is run silently via Python.

Tested on SWARP versions 2.17.x

class astromatic.swarp.SWARP[source]

A wrapper class to transparently use SWARP.

clean(config=True)[source]

Remove the generated SWARP files (if any). If config is True, remove generated configuration files.

run(file_list, updateconfig=True, clean=False, path=None)[source]

Run SWARP on a given list of FITS files; the list can also be a single file.

updateconfig: bool

If True (default), the configuration files will be updated before running SWARP.

clean: bool

If clean is True (default: False), configuration files (if any) will be deleted after SWARP terminates.

path: str

Path to the ‘swarp’ application (binary file) in the system.

setup(path=None)[source]

Look for the SWARP program (‘swarp’). If a full path is provided, only this path is checked. Raise a SWARPException if it fails. Return the program and version if it succeeds.
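The binary lookup that setup() performs can be sketched as follows (`find_swarp` is a hypothetical helper, not the PAPI API; the version query is omitted to avoid assuming SWARP's command-line flags):

```python
import shutil

def find_swarp(path=None):
    """Check a user-supplied path, or search the system PATH for the
    'swarp' binary; raise if it is absent or not executable."""
    prog = path if path else shutil.which("swarp")
    # shutil.which also validates a full path (exists and executable)
    if prog is None or shutil.which(prog) is None:
        raise RuntimeError("SWARP binary 'swarp' not found")
    return prog
```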

update_config()[source]

Update the configuration files according to the current in-memory SWARP configuration.

exception astromatic.swarp.SWARPException[source]

astromatic.scamp — SCAMP wrapper

A wrapper for SCAMP (Astromatic.net, E.Bertin).

This wrapper allows you to configure SCAMP, run it and get back its outputs without the need to edit SCAMP configuration files. By default, configuration files are created on the fly, and SCAMP is run silently via Python.

Tested on SCAMP versions 1.4.6 and 1.7.0

class astromatic.scamp.SCAMP[source]

A wrapper class to transparently use SCAMP.

clean(config=True, catalog=False, check=False)[source]

Remove the generated SCAMP files (if any). If config is True, remove generated configuration files. If catalog is True, remove the output catalog. If check is True, remove output check image.

run(catalog_list, updateconfig=True, clean=False, path=None)[source]

Run SCAMP on a given list of catalogs (.ldac files); the list can also be a single catalog.

updateconfig: bool

If True (default), the configuration files will be updated before running SCAMP.

clean: bool

If True (default: False), configuration files (if any) will be deleted after SCAMP terminates.

path: str

Path name to look for scamp binary file in the system.

SCAMP_Exception

Cannot run SCAMP

SCAMP_AccuracyException

SCAMP Warning/error: Significant inaccuracy likely to occur in projection.

setup(path=None)[source]

Look for the SCAMP program (‘scamp’). If a full path is provided, only this path is checked. Raise a SCAMP_Exception if it fails. Return the program and version if it succeeds.

update_config()[source]

Update the configuration files according to the current in-memory SCAMP configuration.

exception astromatic.scamp.SCAMP_AccuracyException[source]
exception astromatic.scamp.SCAMP_Exception[source]
astromatic.scamp.runCmd(str_cmd, p_shell=True)[source]

A wrapper to run system commands.

str_cmd: str

Command string to be executed in the shell

p_shell: bool

If True (default), the command will be executed through the shell and all stdout/stderr messages will be available. If False, an exception is the only way to find out about problems during the call.

0 if some error, or 1 if all was OK
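A wrapper with this behavior can be sketched with the subprocess module (`run_cmd` is an illustrative re-implementation, not the PAPI source):

```python
import subprocess

def run_cmd(str_cmd, p_shell=True):
    """Execute a command (through the shell by default) and return
    1 on success or 0 on error, matching the documented convention."""
    try:
        result = subprocess.run(str_cmd, shell=p_shell,
                                capture_output=True, text=True)
    except OSError:
        return 0  # e.g. the program could not be launched
    return 1 if result.returncode == 0 else 0
```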

TODO:
  • allow launching commands in the background

  • better error checking when shell=True