Rebase 33x (#1135)
* Update pyproject to use oldest-supported-numpy (#1118)

* Update pyproject to use oldest-supported-numpy

* Update bdist settings

* Update license year

* Add build-backend to pyproject.toml, to force PEP517 build (#1121)

Co-authored-by: Zane.Geiger <[email protected]>

* MVM: Ensure SVM-derived WCS gets used as input to MVM processing (#1125)

* make_poller_files.py: updated code to produce proper output when processing SVM-pipeline-updated flc/flt FITS files

* make_poller_files.py: renamed locate_fitspath_from_rootname() to locate_fitsfile(); refactored the subroutine to process full filenames as well as rootnames; updated docstrings and help text; updated the subroutine name in external calls in search_skyframes.py and make_custom_mosaic.py.

* end of day commit

* development ongoing

* development ongoing

* product.py: tweaked how layer_vals is declared.

* product.py: tweaked how sce_filename is declared

* Development milestone #1 (Add new functionality to MVM code: Add ability to process SVM-processed flc/flt files) complete.

* which_skycell.py: initial commit

* removed a couple of pdb imports

* poller_utils.py: removed pdb import statement

* assorted PEP8 and spelling fixes

* which_skycell.py: renamed subroutine 'identify_pc_sc' to 'report_skycells'

* which_skycell.py: minor tweak to docstring

Co-authored-by: Michael Dulude <[email protected]>

* which_skycell.py: fixed incorrectly set argument in file open command (#1128)

* Remove dependence on photutils private functions (#1127)

* Only deblend large sources (#1131)

* Generate empty source catalogs (#1122)

* Generate empty source catalogs

* Treat catalog rejection correctly

* Write out empty seg catalogs when rejecting

* Update verify_crthresh logic for rejecting catalogs

* Ensure empty catalogs always get created

* Ensure hapcut utilities will work on MVM exposure-level products (#1119)

* WIP
Removed the option of a single output MEF file for image cutouts.
The output cutout filename now contains "p####" vs "####" for consistency with
other MVM filenames.
Ensure proper treatment of WFC3/IR files, which have both a "fine" (default) and a
"coarse" plate scale, where only "coarse" appears in the input and output filenames.
Instead of adding the input filename to the PHDU header of the output file as a
HISTORY keyword, put this value in keyword ORIG_FLE in the PHDU.  This keyword
can already be found in the EHDUs.
Delete stale variables as soon as possible.

* Updates to process exposure-level images.
Group the input images by detector/filter.
RA/Dec decimal degree coordinates are truncated to four decimal places.
Fixed use of dash vs underscore when creating the output filenames.
Comments updated.

* Reworked the mvm_combine function to use dictionaries to keep track of the filter-level
and exposure-level files.  The img_combiner parameter is currently disabled.

* Added try/except blocks and additional comments.  Made sure variables are initialized
as necessary.

* Corrected typos in comments.

* Function mvm_analyze_wrapper determines the viability of an image for MVM processing (#1132)

* Added a new function, mvm_analyze_wrapper, which uses the functionality in the underlying
analyze_data routine to examine FITS keywords and determine the viability of the data for
use in MVM processing.  The mvm_analyze_wrapper works on a single image at a time.
Added an optional parameter to analyze_data so the routine can determine when MVM processing
is wanted, as MVM processing does not allow anything to be done for Grism/Prism images.
Updated docstrings.  (An illustrative sketch of this wrapper appears just after the
commit message below.)

* Changed logging level from DEBUG to NOTSET.

* Changed log setting to DEBUG in the creation of the logger, as well as setting the
level for the handlers to force out INFO messages.  Improved documentation of an input
parameter.  Made logic more explicit in the code handling Grism/Prism data.

* MVM: Make alignment to GAIA optional (#1133)

* hapmultisequencer.py: added logic in run_mvm_processing() to skip the run_align_to_gaia() step, controlled by a new optional argument 'skip_gaia_alignment'.

* runmultihap.py: added a new optional command-line switch to allow users to skip Gaia alignment of input images

* fixed some docstring input param definitions

* make_custom_mosaic.py: updated code to support new "skip_gaia_alignment" functionality in hapmultisequencer.

* minor docstring tweaks

* Minimize use of fitting for shift only (#1129)

* Improve alignment process

* Generate empty source catalogs

* Get logic right for skipping fits

* Update logic to keep up with #1122

* Made crclean efficient

* Use background value to replace CRs; add comments

* Update setup to explicitly require numpy

* Bump min Python to 3.7

* Revert to using current numpy only for build (#1124)

Co-authored-by: Zane Geiger <[email protected]>
Co-authored-by: Zane.Geiger <[email protected]>
Co-authored-by: Michael Dulude <[email protected]>
Co-authored-by: Michael Dulude <[email protected]>
Co-authored-by: Larry Bradley <[email protected]>
Co-authored-by: mdlpstsci <[email protected]>
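
An illustrative sketch of the mvm_analyze_wrapper contract described above (that file's
diff is not among those shown below, so the keyword names and the Grism/Prism filter list
here are assumptions; the real implementation delegates to drizzlepac's analyze_data
routine rather than reading the header directly):

    from astropy.io import fits

    # Hypothetical stand-in filter list; the real code may check more keywords.
    GRISM_PRISM_NAMES = ("G102", "G141", "G280", "G800L", "PR200L", "PR110L", "PR130L")

    def mvm_analyze_wrapper(input_filename):
        """Illustrative: return True when one exposure is viable for MVM processing."""
        header = fits.getheader(input_filename)
        # MVM processing does not allow anything to be done for Grism/Prism
        # images, so reject them outright based on the filter keywords.
        filters = {header.get(key, "") for key in ("FILTER", "FILTER1", "FILTER2")}
        return not filters.intersection(GRISM_PRISM_NAMES)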
7 people authored Sep 22, 2021
1 parent f2b9976 commit 6deca02
Showing 27 changed files with 1,490 additions and 354 deletions.
drizzlepac/devutils/search_skyframes.py (4 changes: 2 additions & 2 deletions)
@@ -101,7 +101,7 @@ def augment_results(results):
     # populate dateobs_list, path_list
     for idx in results.index:
         rootname = results.exposure[idx]
-        imgname = make_poller_files.locate_fitspath_from_rootname(rootname)
+        imgname = make_poller_files.locate_fitsfile(rootname)
         dateobs_list.append(fits.getval(imgname, "DATE-OBS"))
         path_list.append(imgname)

@@ -305,7 +305,7 @@ def make_footprint_fits_file(skycell_name, img_list, footprint_imgname):
     parser.add_argument('-f', '--spec', required=False, default="None",
                         help='Filter name(s) to search for. To search for ACS observations that use two '
                              'spectral elements, enter the names of both spectral elements in any order '
-                             'seperated by a dash. Example two-spectral element input: f606w-pol60v')
+                             'separated by a dash. Example two-spectral element input: f606w-pol60v')
     parser.add_argument('-m', '--master_observations_file', required=False,
                         default=os.getenv("ALL_EXP_FILE"),
                         help='Name of the master observations .csv file containing comma-separated columns '
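
Per the commit message, the renamed make_poller_files.locate_fitsfile() now accepts a full
filename as well as a rootname. A minimal sketch of that contract (hypothetical; the real
helper lives in make_poller_files.py, whose diff is not shown here, and the search paths
below are assumptions):

    import glob
    import os

    def locate_fitsfile(search_string, search_dir="."):
        """Illustrative: resolve a rootname or full filename to a FITS file path."""
        # A full filename that already exists can be returned as-is.
        if search_string.endswith(".fits") and os.path.exists(search_string):
            return search_string
        # Otherwise treat the input as a rootname and search for flc/flt products.
        for suffix in ("flc", "flt"):
            matches = glob.glob(os.path.join(search_dir, "**", f"{search_string}*_{suffix}.fits"),
                                recursive=True)
            if matches:
                return matches[0]
        return ""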
drizzlepac/hapmultisequencer.py (25 changes: 17 additions & 8 deletions)
@@ -231,9 +231,10 @@ def create_drizzle_products(total_obj_list, custom_limits=None):
 # ----------------------------------------------------------------------------------------------------------------------


-def run_mvm_processing(input_filename, diagnostic_mode=False, use_defaults_configs=True,
-                       input_custom_pars_file=None, output_custom_pars_file=None, phot_mode="both",
-                       custom_limits=None, output_file_prefix=None, log_level=logutil.logging.INFO):
+def run_mvm_processing(input_filename, skip_gaia_alignment=False, diagnostic_mode=False,
+                       use_defaults_configs=True, input_custom_pars_file=None, output_custom_pars_file=None,
+                       phot_mode="both", custom_limits=None, output_file_prefix=None,
+                       log_level=logutil.logging.INFO):

     """Run the HST Advanced Products (HAP) generation code. This routine is the sequencer or
     controller which invokes the high-level functionality to process the multi-visit data.
@@ -244,6 +244,10 @@ def run_mvm_processing(input_filename, diagnostic_mode=False, use_defaults_confi
         The 'poller file' where each line contains information regarding an exposures considered
         part of the multi-visit.

+    skip_gaia_alignment : bool, optional
+        Skip alignment of all input images to known Gaia/HSC sources in the input image footprint? If set to
+        'True', the existing input image alignment solution will be used instead. The default is False.
+
     diagnostic_mode : bool, optional
         Allows printing of additional diagnostic information to the log. Also, can turn on
         creation and use of pickled information.
@@ -355,11 +360,15 @@
         log.info("The configuration parameters have been read and applied to the drizzle objects.")

         # TODO: This is the place where updated WCS info is migrated from drizzlepac params to filter objects
-
-        reference_catalog = run_align_to_gaia(total_obj_list, custom_limits=custom_limits,
-                                              log_level=log_level, diagnostic_mode=diagnostic_mode)
-        if reference_catalog:
-            product_list += [reference_catalog]
+        if skip_gaia_alignment:
+            log.info("Gaia alignment step skipped. Existing input image alignment solution will be used instead.")
+        else:
+            reference_catalog = run_align_to_gaia(total_obj_list,
+                                                  custom_limits=custom_limits,
+                                                  log_level=log_level,
+                                                  diagnostic_mode=diagnostic_mode)
+            if reference_catalog:
+                product_list += [reference_catalog]

         # Run AstroDrizzle to produce drizzle-combined products
         log.info("\n{}: Create drizzled imagery products.".format(str(datetime.datetime.now())))
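
With the new keyword in place, a direct call that skips the Gaia alignment step might look
like this (illustrative; 'mvm_poller.out' is a placeholder poller file name):

    from drizzlepac import hapmultisequencer

    # Skip run_align_to_gaia() and keep the existing (e.g., SVM-derived)
    # alignment solution; all other parameters keep their defaults.
    hapmultisequencer.run_mvm_processing("mvm_poller.out", skip_gaia_alignment=True)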
drizzlepac/hapsequencer.py (116 changes: 41 additions & 75 deletions)
@@ -162,46 +162,6 @@ def create_catalog_products(total_obj_list, log_level, diagnostic_mode=False, ph
         # images and some of the measurements can be appended to the total catalog
         total_product_catalogs.identify(mask=total_product_obj.mask)

-        # Determine how to continue if "aperture" or "segment" fails to find sources for this total
-        # detection product - take into account the initial setting of phot_mode.
-        # If no sources were found by either the point or segmentation algorithms, go on to
-        # the next total detection product (detector) in the visit with the initially requested
-        # phot_mode. If the point or segmentation algorithms found sources, need to continue
-        # processing for that (those) algorithm(s) only.
-
-        # When both algorithms have been requested...
-        if input_phot_mode == 'both':
-            # If no sources found with either algorithm, skip to the next total detection product
-            if total_product_catalogs.catalogs['aperture'].sources is None and total_product_catalogs.catalogs['segment'].sources is None:
-                log.info("No sources found with Segmentation or Point algorithms for TDP {} - skip to next TDP".format(total_product_obj.drizzle_filename))
-                del total_product_catalogs.catalogs['aperture']
-                del total_product_catalogs.catalogs['segment']
-                continue
-
-            # Only point algorithm found sources, continue to the filter catalogs for just point
-            if total_product_catalogs.catalogs['aperture'].sources is not None and total_product_catalogs.catalogs['segment'].sources is None:
-                log.info("Sources only found with Point algorithm for TDP {} - phot_mode set only to POINT for this TDP".format(total_product_obj.drizzle_filename))
-                phot_mode = 'aperture'
-                del total_product_catalogs.catalogs['segment']
-
-            # Only segment algorithm found sources, continue to the filter catalogs for just segmentation
-            if total_product_catalogs.catalogs['aperture'].sources is None and total_product_catalogs.catalogs['segment'].sources is not None:
-                log.info("Sources only found with Segmentation algorithm for TDP {} - phot_mode set only to SEGMENT for this TDP".format(total_product_obj.drizzle_filename))
-                phot_mode = 'segment'
-                del total_product_catalogs.catalogs['aperture']
-
-        # Only requested the point algorithm
-        elif input_phot_mode == 'aperture':
-            if total_product_catalogs.catalogs['aperture'].sources is None:
-                del total_product_catalogs.catalogs['aperture']
-                continue
-
-        # Only requested the segmentation algorithm
-        elif input_phot_mode == 'segment':
-            if total_product_catalogs.catalogs['segment'].sources is None:
-                del total_product_catalogs.catalogs['segment']
-                continue
-
         # Build dictionary of total_product_catalogs.catalogs[*].sources to use for
         # filter photometric catalog generation
         sources_dict = {}
@@ -243,7 +203,6 @@ def create_catalog_products(total_obj_list, log_level, diagnostic_mode=False, ph
             # a filter "subset" table which will be combined with the total detection table.
             filter_name = filter_product_obj.filters
             filter_product_catalogs.measure(filter_name)
-
             log.info("Flagging sources in filter product catalog")
             filter_product_catalogs = run_sourcelist_flagging(filter_product_obj,
                                                               filter_product_catalogs,
@@ -355,30 +314,33 @@ def create_catalog_products(total_obj_list, log_level, diagnostic_mode=False, ph
         # rate of cosmic-ray contamination for the total detection product
         reject_catalogs = total_product_catalogs.verify_crthresh(n1_exposure_time)

-        if not reject_catalogs or diagnostic_mode:
-            for filter_product_obj in total_product_obj.fdp_list:
-                filter_product_catalogs = filter_catalogs[filter_product_obj.drizzle_filename]
-
-                # Now write the catalogs out for this filter product
-                log.info("Writing out filter product catalog")
-                # Write out photometric (filter) catalog(s)
-                filter_product_catalogs.write(reject_catalogs)
-
-                # append filter product catalogs to list
-                if phot_mode in ['aperture', 'both']:
-                    product_list.append(filter_product_obj.point_cat_filename)
-                if phot_mode in ['segment', 'both']:
-                    product_list.append(filter_product_obj.segment_cat_filename)
-
-            log.info("Writing out total product catalog")
-            # write out list(s) of identified sources
-            total_product_catalogs.write(reject_catalogs)
-
-            # append total product catalogs to manifest list
-            if phot_mode in ['aperture', 'both']:
-                product_list.append(total_product_obj.point_cat_filename)
-            if phot_mode in ['segment', 'both']:
-                product_list.append(total_product_obj.segment_cat_filename)
+        if diagnostic_mode:
+            # If diagnostic mode, we want to inspect the original full source catalogs
+            reject_catalogs = False
+
+        for filter_product_obj in total_product_obj.fdp_list:
+            filter_product_catalogs = filter_catalogs[filter_product_obj.drizzle_filename]
+
+            # Now write the catalogs out for this filter product
+            log.info("Writing out filter product catalog")
+            # Write out photometric (filter) catalog(s)
+            filter_product_catalogs.write(reject_catalogs)
+
+            # append filter product catalogs to list
+            if phot_mode in ['aperture', 'both']:
+                product_list.append(filter_product_obj.point_cat_filename)
+            if phot_mode in ['segment', 'both']:
+                product_list.append(filter_product_obj.segment_cat_filename)
+
+        log.info("Writing out total product catalog")
+        # write out list(s) of identified sources
+        total_product_catalogs.write(reject_catalogs)
+
+        # append total product catalogs to manifest list
+        if phot_mode in ['aperture', 'both']:
+            product_list.append(total_product_obj.point_cat_filename)
+        if phot_mode in ['segment', 'both']:
+            product_list.append(total_product_obj.segment_cat_filename)
     return product_list
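
The rework above always writes both the filter and total catalogs, with reject_catalogs
deciding what the files contain and diagnostic_mode forcing full catalogs out for
inspection. A minimal sketch of a write step honoring such a flag while still always
producing a file, in the spirit of the "Generate empty source catalogs" change
(illustrative names, not drizzlepac's catalog API):

    from astropy.table import Table

    def write_catalog(source_table, filename, reject_catalog):
        """Illustrative: always write a catalog file, emptied when rejected."""
        # A rejected catalog (e.g., one failing the verify_crthresh cosmic-ray
        # check) keeps its column structure but drops every row, so downstream
        # consumers always find a file with the expected schema.
        if reject_catalog:
            source_table = source_table[:0]
        source_table.write(filename, format="ascii.ecsv", overwrite=True)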


@@ -871,22 +833,26 @@ def run_sourcelist_flagging(filter_product_obj, filter_product_catalogs, log_lev
             pickle_out.close()
             log.info("Wrote hla_flag_filter param pickle file {} ".format(out_pickle_filename))
             # TODO: REMOVE ABOVE CODE ONCE FLAGGING PARAMS ARE OPTIMIZED
-
-        filter_product_catalogs.catalogs[cat_type].source_cat = hla_flag_filter.run_source_list_flagging(drizzled_image,
-                                                                                                          flt_list,
-                                                                                                          param_dict,
-                                                                                                          exptime,
-                                                                                                          plate_scale,
-                                                                                                          median_sky,
-                                                                                                          catalog_name,
-                                                                                                          catalog_data,
-                                                                                                          cat_type,
-                                                                                                          drz_root_dir,
-                                                                                                          filter_product_obj.hla_flag_msk,
-                                                                                                          ci_lookup_file_path,
-                                                                                                          output_custom_pars_file,
-                                                                                                          log_level,
-                                                                                                          diagnostic_mode)
+        if catalog_data is not None and len(catalog_data) > 0:
+            source_cat = hla_flag_filter.run_source_list_flagging(drizzled_image,
+                                                                  flt_list,
+                                                                  param_dict,
+                                                                  exptime,
+                                                                  plate_scale,
+                                                                  median_sky,
+                                                                  catalog_name,
+                                                                  catalog_data,
+                                                                  cat_type,
+                                                                  drz_root_dir,
+                                                                  filter_product_obj.hla_flag_msk,
+                                                                  ci_lookup_file_path,
+                                                                  output_custom_pars_file,
+                                                                  log_level,
+                                                                  diagnostic_mode)
+        else:
+            source_cat = catalog_data
+
+        filter_product_catalogs.catalogs[cat_type].source_cat = source_cat

     return filter_product_catalogs

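
The guard introduced above keeps hla_flag_filter from operating on empty catalogs, which
can now legitimately occur. The same pattern in isolation (illustrative; 'flagger' stands
in for hla_flag_filter.run_source_list_flagging):

    def flag_or_passthrough(catalog_data, flagger):
        """Illustrative: flag a catalog only when it has at least one source."""
        # A None or zero-length catalog is passed through unchanged.
        if catalog_data is not None and len(catalog_data) > 0:
            return flagger(catalog_data)
        return catalog_data

    # An empty catalog bypasses flagging entirely:
    assert flag_or_passthrough([], lambda cat: cat) == []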
(Diffs for the remaining 24 changed files are not shown.)
