PYde gauge: Routines for processing and analyzing tide gauge sea level data, generally for comparison with model output
Clone this repo to your local directory, then create your own branch to work in:

```
git clone https://github.com/clittleaer/pyde_gauge.git
cd pyde_gauge
git checkout -b <nameofyourbranch>
```
Included analysis scripts are: (1) simple comparisons of tide gauge and altimetry time series, sampled at the nearest grid point (global_alt_tg_comp_at_tgs.ipynb), and (2) similar comparisons of model and altimetry time series, sampled at the grid point nearest each pseudo-tide-gauge location (global_alt_hr_comp_at_pseudo_tgs.ipynb). The initial figures, and the included .pkl and .csv files, analyze 365 tide gauge locations and ~300 pseudo-TG locations. By default, the code uses a reduced set of four locations.
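The nearest-grid-point sampling that both comparison scripts rely on can be sketched with xarray; the coordinate and variable names below are illustrative, not the actual datasets':

```python
import numpy as np
import xarray as xr

# Hypothetical gridded SLA field (time x lat x lon); synthetic values.
lat = np.arange(-89.5, 90.0, 1.0)
lon = np.arange(0.5, 360.0, 1.0)
time = np.arange(12)
rng = np.random.default_rng(0)
sla = xr.DataArray(
    rng.standard_normal((time.size, lat.size, lon.size)),
    coords={"time": time, "lat": lat, "lon": lon},
    dims=("time", "lat", "lon"),
    name="sla",
)

# Vectorized pointwise selection: sample the grid at the point nearest
# each tide gauge location. Both indexers share the new "tg" dimension.
tg_lats = xr.DataArray([40.7, -33.9], dims="tg")
tg_lons = xr.DataArray([286.1, 18.4], dims="tg")
series = sla.sel(lat=tg_lats, lon=tg_lons, method="nearest")
# series has dims ("time", "tg"): one time series per gauge location.
```

Note that `.sel(method="nearest")` is nearest in coordinate space, not great-circle distance; the momlevel-based extraction in the processing notebooks handles true distances, which matters near the dateline and at high latitudes.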
1a. Run import_rlr_matlab.ipynb.
1b. Or, just use provided PSMSL_data and PSMSL_ids.csv files.
2a. Run tg_processing. The IB correction requires access to ERA-5 on the NCAR RDA server.
- Run alt_processing and cesm2_hr_processing using the .csv and .pkl files exported from step 2a. Both require access to NCAR-hosted datasets, for now.
- Send all outputs to separate .csv and .pkl files, then read those into the analysis scripts.
- Run the pseudo_tg_locations.ipynb notebook.
- Run alt_processing and cesm2_hr_processing again, but use the pseudo-TG .csv and .pkl files.
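For step 1b (or as a sanity check on step 1a), a monthly RLR file can also be read directly. This sketch assumes the standard PSMSL semicolon-separated monthly layout (decimal year; level in mm relative to the RLR datum; flag columns) with -99999 marking missing months; the sample data are made up, so check the real file conventions before relying on it:

```python
import io
import pandas as pd

# Minimal reader for a PSMSL monthly RLR file. Assumed layout:
# decimal year; sea level (mm, RLR datum); flag; quality flag,
# with -99999 as the missing-data sentinel.
def read_rlr_monthly(f):
    df = pd.read_csv(
        f, sep=";", header=None,
        names=["decimal_year", "level_mm", "flag", "quality"],
    )
    # Mask the PSMSL missing-data sentinel.
    df["level_mm"] = df["level_mm"].where(df["level_mm"] != -99999)
    return df

# Made-up three-month sample in the assumed format.
sample = io.StringIO(
    "1990.0417; 7012;0;000\n"
    "1990.1250; 7034;0;000\n"
    "1990.2083;-99999;0;000\n"
)
df = read_rlr_monthly(sample)
```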
- Import monthly tide gauges from PSMSL.
- IB “correction” (ERA-5 for now), if you have access to NCAR RDA data.
- Monthly processing: removal of the mean seasonal cycle.
- Extraction of the closest point (using momlevel) from the MEaSUREs 1/6-degree product.
- Removal of the global mean from altimetry.
- Standard time series analysis techniques, as for TGs.
- Pseudo-TG code, to sample coastal points from gridded datasets.
- Sorting along coastlines.
- Filtering problematic gauges (right now, using a momlevel threshold on the distance between tide gauge and altimetry/model point).
- Gridded analyses, regridding.
- CESM FOSI (multiple cycles, LR and HR) and coupled simulations.
- SEANOE dataset.
- Wrap and/or recode the RLR script from PSMSL.
- Long-period tide “corrections”.
- Uncertainty in IB/GM corrections (multiple datasets).
- Correcting for “global mean” terms, including fingerprints (using the Frederikse et al. 2021 dataset) and VLM (using, e.g., Hammond et al. 2021: https://doi.org/10.1029/2021JB022355).
- Careful vetting of coastal locations where it makes sense to compare with models/altimetry. Ideally, these are locations right along the coastline, not nestled back in embayments or upriver. My -- maybe idealistic -- vision is to have this determined by the spatial coherence of TGs (which, of course, will depend on the spatial/temporal scales you care about!).
- Clustering/spatial averaging/coherence.
- Time-mean quantities.
- SWOT.
- TG (UHSLC): I haven’t touched high-frequency (sub-monthly) data in a while.
- TG (NOAA).
- Altimetry (DUACS).
- MOM6.
- Generic CMIP6.
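The "remove the mean seasonal cycle" step listed above amounts to subtracting each calendar month's long-term mean. A minimal pandas sketch, with synthetic data standing in for a real gauge record:

```python
import numpy as np
import pandas as pd

# Synthetic monthly "sea level" series: a seasonal cycle plus noise.
idx = pd.date_range("2000-01-01", periods=48, freq="MS")
rng = np.random.default_rng(1)
sla = pd.Series(
    10 * np.sin(2 * np.pi * idx.month / 12) + rng.standard_normal(48),
    index=idx,
)

# Subtract each calendar month's long-term mean (the mean seasonal
# cycle), leaving monthly anomalies.
anom = sla - sla.groupby(sla.index.month).transform("mean")
```

By construction, the anomalies then average to zero within each calendar month.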
- I agree that knowing the cell depth would be useful. The analysis is fast, so one could simply repeat it, passing an array of "deptho" to get the depth at the selected locations.
- The call from momlevel to scikit-learn could be adapted to return all of the nearest-neighbor points within a threshold, in order to compute a mean/variance. A modest amount of work is needed, but it's not an intractable problem.
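The last point could look something like the following, using scikit-learn's BallTree directly (synthetic grid and illustrative names; momlevel's internals may differ):

```python
import numpy as np
from sklearn.neighbors import BallTree

EARTH_RADIUS_KM = 6371.0

# Synthetic scattered "grid" points (lat, lon in degrees) with values.
rng = np.random.default_rng(2)
grid_lat = rng.uniform(-60.0, 60.0, 20000)
grid_lon = rng.uniform(-180.0, 180.0, 20000)
grid_val = rng.standard_normal(20000)

# The haversine metric expects (lat, lon) pairs in radians.
tree = BallTree(np.deg2rad(np.c_[grid_lat, grid_lon]), metric="haversine")

# Instead of keeping only the single nearest point, take every grid
# point within a distance threshold of the tide gauge and summarize.
tg = np.deg2rad([[40.7, -74.0]])        # one tide gauge (lat, lon)
radius = 500.0 / EARTH_RADIUS_KM        # 500 km threshold, in radians
ind = tree.query_radius(tg, r=radius)[0]

if ind.size:                            # guard against an empty ball
    local_mean = grid_val[ind].mean()
    local_var = grid_val[ind].var()
```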