Adding a function to create a log file and save cloud correction parameters in the file #260
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #260      +/-   ##
==========================================
- Coverage   77.23%   77.18%   -0.06%
==========================================
  Files          21       22       +1
  Lines        2614     2621       +7
==========================================
+ Hits         2019     2023       +4
- Misses        595      598       +3

☔ View full report in Codecov by Sentry.
I think you made it more complicated than necessary. You already have one instance of "logging" (logger) that can be used; you just need to print the cloud_correction parameters from the config file into the general log (and you can also print the MCP version if you want, but I think we should change that so it is done for all the scripts).
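A minimal sketch of this suggestion, assuming the script already defines a module-level logger and has the parsed YAML configuration available as config (the actual variable names in the PR may differ):

# Sketch only: reuse the script's existing logger instead of a separate log file.
import magicctapipe

cloud_cfg = config.get("cloud_correction", {})
logger.info(f"Cloud correction parameters: {cloud_cfg}")
logger.info(f"magicctapipe version: {magicctapipe.__version__}")  # optional, see note above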
@@ -248,7 +252,7 @@ def process_telescope_data(input_file, config, tel_id, camgeom, focal_eff):
    if assigned_tel_ids["LST-1"] == tel_id:
        conc_params = concentration_parameters(
            clean_camgeom, clean_image, hillas_params
-        )  # "For LST-1 we compute concentration from the cleaned image and for MAGIC from the full image to reproduce the current behaviour in the standard code
+        )  # "For LST-1 we compute concentration from the cleaned image and for MAGIC from the full image to reproduce the current behaviour in the standard code/home/natti/CTA/lodz_cluster/lstchain_10_11_test/scripts/new/outfiles/dl1_coincidence_stereo_0.75-0.85_03955
this looks like a copy/paste error
subarray_info.to_hdf(args.output_file)
logging.info(f"Correction parameters: {correction_params}")
logging.info(f"ctapipe version: {ctapipe.__version__}")
logging.info(f"magicctapipe version: {magicctapipe.__version__}")
why do you use a separate log? you have "logger" defined before and used just below this line. you can use it also to write the correction parameters and version of software
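i.e. the lines shown above could simply reuse the existing logger (a sketch of the suggested change):

logger.info(f"Correction parameters: {correction_params}")
logger.info(f"ctapipe version: {ctapipe.__version__}")
logger.info(f"magicctapipe version: {magicctapipe.__version__}")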
…ogger improvements, and units changes.
Hi @nzywucka, the latest changes seem to have gone into another file.
Hi @nzywucka
extracted_lidar_reports_test.csv: file with the cloud parameters from LIDAR
lst1_magic_cloud_correction.py: interpolation function has been added to the code
magicctapipe/resources/config.yaml
Outdated
@@ -102,7 +102,7 @@ event_coincidence:
stereo_reco:
-    quality_cuts: "(intensity > 50) & (width > 0)"
+    quality_cuts: "(intensity > 50) & (intensity > 80 or tel_id > 1) & (width > 0)" #"(intensity > 50) & (width > 0)"
this cut was used for tests. it should not go into the default input card
@@ -0,0 +1,7 @@
timestamp,base_height,top_height,transmission,lidar_zenith |
better to put this file in resources directory (where the input cards are)
also, is this file still used for something? the yaml file has a different format
magicctapipe/resources/config.yaml
Outdated
@@ -280,8 +280,8 @@ dl2_to_dl3:
cloud_correction:
    specify_cloud: yes
there is no "specify_cloud" option anymore and it is not used anywhere in the code. I think you can drop it for the moment
lst1_magic_cloud_correction.py: interpolation implementation has been added to the code.
lidar_reports_clouds_ST.03.16.yaml: configuration file with LIDAR reports of the ST.03.16 period added
lst1_magic_cloud_correction.py: additional cleaning procedure and improvements to the interpolation function added
magicctapipe/resources/config.yaml
Outdated
number_of_layers: 10

additional_lst_cleaning:
    picture_thresh: 10
the purpose of redoing the cleaning is to raise the thresholds by some factor, and the same factor should be applied to LST and MAGIC. So either the factor is determined automatically in the script (but this can be messy, because then one needs to process the simulations for each run even if the changes are small), or you specify only a single multiplication factor in the input card and this factor is applied to picture_thresh and boundary_thresh of both MAGIC and LST, while the rest of the settings are taken from the default config or from the file itself (at least for MAGIC, because for LST this can be more tricky).
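A rough sketch of the second option (the key name cloud_multiplication_factor is hypothetical, and the LST tailcuts_clean / MAGIC magic_clean sections are assumed to come from the standard config):

# Sketch only: key name and config sections are assumptions, not the PR's actual names.
cmf = config["cloud_correction"]["cloud_multiplication_factor"]

def scale_cleaning_thresholds(default_settings, factor):
    """Copy the default cleaning settings and scale only the two thresholds."""
    scaled = dict(default_settings)
    scaled["picture_thresh"] = default_settings["picture_thresh"] * factor
    scaled["boundary_thresh"] = default_settings["boundary_thresh"] * factor
    return scaled

config_clean_lst = scale_cleaning_thresholds(config["LST"]["tailcuts_clean"], cmf)
config_clean_magic = scale_cleaning_thresholds(config["MAGIC"]["magic_clean"], cmf)

This keeps min_number_picture_neighbours and the MAGIC timing settings at their defaults, in line with the comments below.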
@@ -0,0 +1,533 @@
units:
    timestamp: 'iso'
this file is for a very specific dataset, so there is no point in putting it into the general repository
lst1_magic_cloud_correction.py: update on the additional cleaning method
A tuple containing interpolated or nearest values for:
- base_height (float): The base height of the cloud layer in meters.
- top_height (float): The top height of the cloud layer in meters.
- vertical_transmission (float): Transmission factor of the cloud layer adjusted for vertical angle.
... for zenith angle
config_clean_LST = {
    key: (
        value * cmf
        if isinstance(value, (int, float)) and not isinstance(value, bool)
this way you will upscale all the parameters, including e.g. min_number_picture_neighbours that should not be touched. It is better and simpler to directly upscale only picture_thresh and boundary_thresh
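i.e. something along these lines (a sketch; default_lst_cleaning is a placeholder for the default LST tailcuts settings dict):

# Scale only the two thresholds; leave everything else (e.g. min_number_picture_neighbours) untouched.
config_clean_LST = dict(default_lst_cleaning)
config_clean_LST["picture_thresh"] *= cmf
config_clean_LST["boundary_thresh"] *= cmf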
# AC MAGIC
# Ignore runtime warnings appeared during the image cleaning
warnings.simplefilter("ignore", category=RuntimeWarning)
even though this is copied from magic_calib_to_dl1, I think it would be better to try running without this line and see whether any issues appear, because it might also hide other warnings related to different parts of the code
config_clean_magic = {
    key: (
        False
        if key == "find_hotpixels"
same as above: you should only scale picture_thresh and boundary_thresh, and not touch the other parameters
peak_time = dl1_images["peak_time"][index_img]
image /= trans_pixels

clean_image = image[clean_mask]
you apply the old mask first regardless of whether you do the additional cleaning a moment later or not. I think it is faster to move those 3 lines into an "else" branch after the next condition
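Roughly what is meant (a sketch; the flag, helper, and mask-column names are assumptions, not the PR's actual code):

image = dl1_images["image"][index_img]
peak_time = dl1_images["peak_time"][index_img]
image /= trans_pixels

if do_additional_cleaning:                               # hypothetical flag from the config
    clean_mask = additional_cleaning(image, peak_time)   # hypothetical re-cleaning step
else:
    clean_mask = dl1_images["image_mask"][index_img]     # reuse the stored mask only in this branch
clean_image = image[clean_mask]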
    unsuitable_mask=unsuitable_mask,
)

# n_pixels = np.count_nonzero(clean_mask)
some leftover commented out lines here
prefixed_conc_params = {
    f"concentration_{key}": value for key, value in conc_params.items()
}
event_params.update(prefixed_conc_params)
couldn't prefixed_conc_params be included directly in event_params a few lines above?
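i.e. for example (sketch):

event_params.update(
    {f"concentration_{key}": value for key, value in conc_params.items()}
)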
cloud_params = lidar_cloud_interpolation(
    mean_subrun_timestamp, max_gap_lidar_shots, lidar_report_file
)
this function can return None if there are no suitable LIDAR reports to interpolate from, which would crash the code below. A better way to handle this would be to introduce a new error code here:
https://github.com/cta-observatory/magic-cta-pipe/blob/73e4a3bf47ace47deb11d1d74a4caf0c37954841/magicctapipe/utils/error_codes.py
and throw it like here:
sys.exit(NO_EVENTS_WITHIN_MAXIMUM_DISTANCE)
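A sketch of that suggestion (NO_LIDAR_REPORT_AVAILABLE is a hypothetical new constant that would have to be added to error_codes.py):

import sys

from magicctapipe.utils.error_codes import NO_LIDAR_REPORT_AVAILABLE  # hypothetical new code

cloud_params = lidar_cloud_interpolation(
    mean_subrun_timestamp, max_gap_lidar_shots, lidar_report_file
)
if cloud_params is None:
    logger.error("No LIDAR report close enough in time to interpolate from. Exiting.")
    sys.exit(NO_LIDAR_REPORT_AVAILABLE)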
correction_params = config.get("cloud_correction", {})
max_gap_lidar_shots = u.Quantity(
    correction_params.get("max_gap_lidar_shots")
)  # set it to 900 seconds
the comment is misleading - the code will set it to whatever is in the input card
try:
    dl1_images = read_table(args.input_file, image_node_path)
except tables.NoSuchNodeError:
    logger.info(f"\nNo image for telescope with ID {tel_id}. Skipping.")
this skips the telescopes without stored images, which is very dangerous because images are stored separately in MAGIC and in LST files. So e.g. if you have an LST file without images but a MAGIC file with images, the script will silently do a MAGIC-only analysis, which is likely not what the user wanted.
So I would turn this into a fatal error.
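For example (a sketch; whether to reuse an existing error code or define a dedicated one is left open):

try:
    dl1_images = read_table(args.input_file, image_node_path)
except tables.NoSuchNodeError:
    logger.error(f"No images found for telescope with ID {tel_id}. Aborting.")
    sys.exit(1)  # or a dedicated code from magicctapipe/utils/error_codes.py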
logger.info(f"\nNo image for telescope with ID {tel_id}. Skipping.") | ||
dl1_images = None | ||
|
||
df = process_telescope_data( |
you pass the config to the function, so the parameters like tel_ids, nlayers, cmf should be recovered from the config inside this function instead of being passed additionally.
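Schematically (a sketch; the config keys for the telescope IDs, the number of layers, and the factor are assumptions based on the discussion above):

def process_telescope_data(input_file, config, tel_id, camgeom, focal_eff):
    correction_params = config.get("cloud_correction", {})
    assigned_tel_ids = config["mc_tel_ids"]                      # instead of an extra argument
    nlayers = correction_params.get("number_of_layers")          # instead of an extra argument
    cmf = correction_params.get("cloud_multiplication_factor")   # hypothetical key, see above
    ...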
subarray_info.to_hdf(output_file)

correction_params = config.get("cloud_correction", {})
you already extracted it in L583, why do you do it again?
cloud_params = lidar_cloud_interpolation(
    mean_subrun_timestamp, max_gap_lidar_shots, lidar_report_file
)
also I think it would be useful to print the cloud_params in the logger
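e.g. simply (sketch, using the script's existing logger):

logger.info(f"Cloud parameters used for the correction: {cloud_params}")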
Adding a function to create a log file and save cloud correction parameters in the file