diff --git a/README.md b/README.md index 8db5a1948..2f208a245 100644 --- a/README.md +++ b/README.md @@ -20,6 +20,7 @@ and a link to the full documentation for each currently-supported POD. | [Diurnal Cycle of Precipitation](https://www.cgd.ucar.edu/cms/bundy/Projects/diagnostics/mdtf/mdtf_figures/MDTF_QBOi.EXP1.AMIP.001.save/precip_diurnal_cycle/precip_diurnal_cycle.html) | Rich Neale (NCAR) | | [Eulerian Storm Track](https://github.com/NOAA-GFDL/MDTF-diagnostics/blob/main/diagnostics/eulerian_storm_track/doc/eulerian_storm_track.rst) | James Booth (CUNY), Jeyavinoth Jeyaratnam | | [Extratropical Variance (EOF 500hPa Height)](https://www.cgd.ucar.edu/cms/bundy/Projects/diagnostics/mdtf/mdtf_figures/MDTF_QBOi.EXP1.AMIP.001.save/EOF_500hPa/EOF_500hPa.html) | CESM/AMWG (NCAR) | +| [Forcing Feedback Diagnostic](https://github.com/NOAA-GFDL/MDTF-diagnostics/blob/main/diagnostics/forcing_feedback/doc/forcing_feedback.rst) | Brian Soden (U. Miami), Ryan Kramer | | [Mixed Layer Depth](https://github.com/NOAA-GFDL/MDTF-diagnostics/blob/main/diagnostics/mixed_layer_depth/doc/mixed_layer_depth.rst) | Cecilia Bitz (U. Washington), Lettie Roach | | [MJO Propagation and Amplitude ](https://www.cgd.ucar.edu/cms/bundy/Projects/diagnostics/mdtf/mdtf_figures/MDTF_GFDL.CM4.c96L32.am4g10r8/MJO_prop_amp/MJO_prop_amp.html)| Xianan Jiang (UCLA) | | [MJO Spectra and Phasing](https://www.cgd.ucar.edu/cms/bundy/Projects/diagnostics/mdtf/mdtf_figures/MDTF_QBOi.EXP1.AMIP.001.save/MJO_suite/MJO_suite.html) | CESM/AMWG (NCAR) | diff --git a/diagnostics/forcing_feedback/doc/forcing_feedback.rst b/diagnostics/forcing_feedback/doc/forcing_feedback.rst new file mode 100644 index 000000000..b0eba25ce --- /dev/null +++ b/diagnostics/forcing_feedback/doc/forcing_feedback.rst @@ -0,0 +1,114 @@ +Forcing Feedback Diagnostic Package +============================================================ +Last update: 12/21/2023 + +The Forcing Feedback Diagnostic package evaluates a model's radiative forcing and radiative feedbacks. This is a common framework for understanding the radiative constraints on a model's climate sensitivity and is outlined in detail by :ref:`Sherwood et al. (2015) <1>`, among many others. To compute radiative feedbacks, anomalies of temperature, specific humidity and surface albedo are translated into radiative anomalies by multiplying them by radiative kernels developed from the CloudSat/CALIPSO Fluxes and Heating Rates product (:ref:`Kramer et al. 2019 <2>`). These radiative anomalies are regressed against the model's global-mean surface temperature anomalies to estimate the radiative feedbacks. Cloud radiative feedbacks are computed as the change in cloud radiative effects from the model's TOA radiative flux variables, corrected for cloud masking using the kernel-derived non-cloud radiative feedbacks (:ref:`Soden et al. 2008 <3>`). The Instantaneous Radiative Forcing is computed first under clear-sky conditions by subtracting kernel-derived clear-sky radiative feedbacks from the clear-sky TOA radiative imbalance diagnosed from the model's radiative flux variables. The all-sky Instantaneous Radiative Forcing is estimated by dividing the clear-sky Instantaneous Radiative Forcing by a cloud masking constant (:ref:`Kramer et al. 2021 <4>`). All radiative quantities in this package are defined at the TOA, and positive values represent an increase in net downwelling radiation, i.e., a radiative heating of the planet. 
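+As a schematic illustration of the kernel method summarized above, the sketch below uses synthetic data and
+placeholder array names (it is not the POD's implementation) to show the two core operations: converting a state
+anomaly into a TOA radiative anomaly with a radiative kernel, and regressing the global mean of that radiative
+anomaly against global-mean surface temperature to obtain a feedback in W/m2/K. In the POD itself these steps are
+carried out by the functions in forcing_feedback_util.py (e.g., fluxanom_calc_4D and feedback_regress).
+
+.. code-block:: python
+
+   import numpy as np
+
+   # Synthetic monthly anomalies on a (time, lev, lat, lon) grid -- placeholder values only
+   nt, nlev, nlat, nlon = 24, 17, 18, 36
+   rng = np.random.default_rng(0)
+   dta = rng.normal(size=(nt, nlev, nlat, nlon))     # air-temperature anomaly (K)
+   kern = -0.3 * np.ones((nlev, nlat, nlon))         # LW temperature kernel (W/m2/K per 100 hPa)
+   dp = np.full(nlev, 1000.0 / nlev)                 # layer pressure thickness (hPa)
+   dts_gm = rng.normal(size=nt)                      # global-mean surface temperature anomaly (K)
+
+   # 1) kernel x anomaly, weighted by layer thickness and vertically integrated
+   dR = np.nansum(kern[None, ...] * dta * dp[None, :, None, None] / 100.0, axis=1)
+
+   # 2) cosine-weighted global mean of the radiative anomaly
+   lat = np.linspace(-85.0, 85.0, nlat)
+   weights = np.cos(np.deg2rad(lat))
+   dR_gm = np.average(dR.mean(axis=-1), axis=-1, weights=weights)
+
+   # 3) feedback = regression slope of the radiative anomaly on global-mean dTs (W/m2/K)
+   lambda_t = np.polyfit(dts_gm, dR_gm, 1)[0]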
+ + +Contact info +------------ + +- PI of the project: Brian Soden, University of Miami (bsoden@rsmas.miami.edu); +- Current developer: Ryan Kramer (ryan.kramer@noaa.gov) + +Open source copyright agreement +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +This package is distributed under the LGPLv3 license (see LICENSE.txt). + +Functionality +------------- + +The package currently consists of: +- a Python script (forcing_feedback.py), which sets up the directories and calls the other scripts +- a Python script (forcing_feedback_kernelcalcs.py), which reads the data, performs the calculations and +saves the results to temporary netCDF files +- a Python script (forcing_feedback_plot.py), which reads in the temporary results and the +observational radiative forcing and feedbacks and creates the plots. Throughout the package, the scripts use Python +functions defined in a separate Python script (forcing_feedback_util.py). + +As a module of the MDTF code package, all scripts of this package can be found +under ``mdtf/MDTF_$ver/var_code/forcing_feedback`` +and pre-digested observational data and radiative kernels (in netcdf format) under +``mdtf/inputdata/obs_data/forcing_feedback`` +Place your input data at: ``mdtf/inputdata/model/$model_name/mon/`` + +Required programming language and libraries +------------------------------------------- + +Python is required to run the diagnostic. + +The part of the package written in Python requires the packages os, sys, numpy, xarray, scipy, matplotlib, +cartopy and dask. These Python packages are already included in the standard Anaconda installation. + +Required model output variables +------------------------------- + +The following 3-D (lat-lon-time), monthly model fields are required: +- surface skin temperature ("ts" in CMIP conventions) +- TOA incident shortwave radiation ("rsdt") +- TOA outgoing all-sky shortwave radiation ("rsut") +- TOA outgoing clear-sky shortwave radiation ("rsutcs") +- TOA outgoing all-sky longwave radiation ("rlut") +- TOA outgoing clear-sky longwave radiation ("rlutcs") +- Surface downwelling all-sky shortwave radiation ("rsds") +- Surface downwelling clear-sky shortwave radiation ("rsdscs") +- Surface upwelling all-sky shortwave radiation ("rsus") +- Surface upwelling clear-sky shortwave radiation ("rsuscs") + +The following 4-D (lat-lon-level-time), monthly model fields are required: +- Air temperature ("ta" in CMIP conventions) +- Specific humidity ("hus") + +The observational estimates (see below) are for 2003-2014. The start date is based on data availability while the end +date was selected to match the end date of relevant CMIP6 simulations. For an ideal comparison, +the model data used in this POD should cover the same period and have realistic, +historical forcing boundary conditions. However, this package will still have value as a "gut check" for any simulation, +especially with respect to radiative feedbacks, which often exhibit similar characteristics regardless of the +forcing scenario. + + +More about the diagnostic +------------------------- + +a) Choice of reference dataset +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +While total TOA radiative changes are directly observed, the radiative feedback and radiative forcing components are +not. Therefore, in this package the observational estimates of radiative feedbacks and radiative forcing are derived +by multiplying data from the ERA5 Reanalysis by the CloudSat/CALIPSO radiative kernels mentioned above. 
+Global-mean surface temperature anomalies from ERA5 are used to compute the radiative feedbacks from the kernel-derived +radiative anomalies as described above. To diagnose the instantaneous radiative forcing, the kernel-derived, +clear-sky estimates of radiative feedbacks are subtracted from a measure of the total radiative anomalies at the TOA. +For the observational dataset used here, that total radiative anomaly estimate is from CERES. +The methods for diagnosing these radiative changes in observations are outlined by :ref:`Kramer et al. 2021 <4>` +and :ref:`He et al. 2021 <5>`. + +References +---------- + + .. _1: + +1. Sherwood, S. C., Bony, S., Boucher, O., Bretherton, C., Forster, P. M., Gregory, J. M., & Stevens, B. (2015). +Adjustments in the Forcing-Feedback Framework for Understanding Climate Change. +*Bulletin of the American Meteorological Society*, **96** (2), 217–228. https://doi.org/10.1175/BAMS-D-13-00167.1 + + .. _2: + +2. Kramer, R. J., Matus, A. V., Soden, B. J., & L’Ecuyer, T. S. (2019). +Observation-Based Radiative Kernels From CloudSat/CALIPSO. *Journal of Geophysical Research: Atmospheres*, +2018JD029021. https://doi.org/10.1029/2018JD029021 + + .. _3: + +3. Soden, B. J., Held, I. M., Colman, R., Shell, K. M., Kiehl, J. T., & Shields, C. A. (2008). +Quantifying Climate Feedbacks Using Radiative Kernels. *Journal of Climate*, **21** (14), 3504–3520. +https://doi.org/10.1175/2007JCLI2110.1 + + .. _4: + +4. Kramer, R. J., He, H., Soden, B. J., Oreopoulos, L., Myhre, G., Forster, P. M., & Smith, C. J. +(2021). Observational Evidence of Increasing Global Radiative Forcing. *Geophys. Res. Lett.*, **48** (7), +e2020GL091585. https://doi.org/10.1029/2020GL091585 + + .. _5: diff --git a/diagnostics/forcing_feedback/forcing_feedback.html b/diagnostics/forcing_feedback/forcing_feedback.html new file mode 100644 index 000000000..e7addf9da --- /dev/null +++ b/diagnostics/forcing_feedback/forcing_feedback.html @@ -0,0 +1,67 @@ + + + +Forcing and Feedbacks + +

+The Forcing and Feedback module uses radiative kernels to compute the model's Instantaneous Radiative Forcing and the + Planck, lapse rate, water vapor, surface albedo and cloud radiative feedbacks. The total radiative change + (in W/m2/K) is also computed. These quantities are defined at the top-of-the-atmosphere (TOA) and decomposed + into longwave (LW) and shortwave (SW) components when applicable. Radiative feedbacks are diagnosed by regressing + local radiative anomalies against global-mean surface temperature. + +These results can be used to understand what is driving a model's particular climate sensitivity and, more broadly, + provide insight into how changes in the model's atmospheric state alter the model's energy budget. +

+ +

+ + + + + + + + + + + + + + + + + + + + + + + + +
<font color=navy> Forcing and Feedbacks +{CASENAME} vs. Observations +
Global-Mean Total Radiation Change +plot +
Global-Mean Instantaneous Radiative Forcing +plot +
Global-Mean Longwave Feedbacks +plot +
Global-Mean Shortwave Feedbacks +plot +
Comparison with CMIP6 Models +plot +
Map of Temperature Feedback +plot +
Map of Water Vapor Feedback +plot +
Map of Surface Albedo Feedback +plot +
Map of Cloud Feedback +plot +
Map of Total Radiation Change +plot +
Map of Instantaneous Radiative Forcing Trend +plot +
diff --git a/diagnostics/forcing_feedback/forcing_feedback.py b/diagnostics/forcing_feedback/forcing_feedback.py new file mode 100644 index 000000000..23b968bf0 --- /dev/null +++ b/diagnostics/forcing_feedback/forcing_feedback.py @@ -0,0 +1,77 @@ +# This file is part of the forcing_feedback module of the MDTF code package (see mdtf/MDTF_v2.0/LICENSE.txt) + +# ====================================================================== +# forcing_feedback.py +# +# Forcing Feedback Diagnostic Package +# +# The forcing feedback diagnostic package uses radiative kernels to compute radiative forcing and radiative +# feedback terms. +# +# Version 1 05-Sept-2023 Ryan Kramer (NOAA/GFDL prev. NASA GSFC/UMBC) +# PI: Brian Soden (University of Miami; bsoden@rsmas.miami.edu) +# Current developer: Ryan Kramer (ryan.kramer@noaa.gov) +# +# This package and the MDTF code package are distributed under the LGPLv3 license +# (see LICENSE.txt). +# +# +# As a module of the MDTF code package, all scripts of this package can be found under +# mdtf/MDTF_$ver/var_code/forcing_feedback** +# and pre-digested radiative kernels used in the calculations under +# mdtf/inputdata/obs_data/forcing_feedback +# (**$ver depends on the actual version of the MDTF code package) +# +# This package is written in Python 3 and requires the following Python packages: +# os,sys,numpy,xarray,scipy,matplotlib,cartopy,dask +# +# The following 4-D (lon-lat-pressure levels-time) monthly-mean model fields +# are required: +# (1) air temperature (units: K) +# (2) specific humidity (units: kg/kg) +# +# The following 3-D (lon-lat-time) monthly-mean model fields are required: +# (1) surface temperature (units: K) +# (2) TOA longwave and shortwave radiative flux diagnostics (TOA SW upwelling, TOA SW downwelling, etc.) +# for longwave and shortwave and for all-sky and clear-sky conditions when applicable +# (3) Surface shortwave radiative flux diagnostics (Surface SW Upwelling, Surface SW downwelling) +# for all-sky and clear-sky conditions +# +# +################################## +# forcing_feedback +# +# Created By: Ryan J. Kramer, Brian J. Soden +# +# +# This is the main script for the forcing_feedback Toolkit. With some user-specified details +# this Toolkit uses radiative kernels to compute instantaneous radiative forcing and radiative feedbacks +# for a single model. Further details are in the module's documentation at ../doc. +# +################################## + +import os + +if not os.path.isfile(os.environ["OBS_DATA"] + "/forcing_feedback_kernels.nc"): + print("Kernel file is missing. 
POD will not work!") + +else: + + try: + os.system("python " + os.environ["POD_HOME"] + "/" + "forcing_feedback_kernelcalcs.py") + print('Working Directory is ' + os.environ['WK_DIR']) + print('Forcing Feedback POD is executing') + except RuntimeError as e1: + print('WARNING', e1.errno, e1.strerror) + print("**************************************************") + print("Kernel calculations (forcing_feedback_kernelcalcs.py) are NOT Executing as Expected!") + + try: + os.system("python " + os.environ["POD_HOME"] + "/" + "forcing_feedback_plot.py") + print('Generating Forcing Feedback POD plots') + except RuntimeError as e2: + print('WARNING', e2.errno, e2.strerror) + print("**************************************************") + print("Plotting functions (forcing_feedback_plots.py) are NOT Executing as Expected!") + + print("Last log message by Forcing Feedback POD: finished executing") diff --git a/diagnostics/forcing_feedback/forcing_feedback_kernelcalcs.py b/diagnostics/forcing_feedback/forcing_feedback_kernelcalcs.py new file mode 100644 index 000000000..7bd0e4d46 --- /dev/null +++ b/diagnostics/forcing_feedback/forcing_feedback_kernelcalcs.py @@ -0,0 +1,347 @@ +# This file is part of the forcing_feedback module of the MDTF code package (see mdtf/MDTF_v2.0/LICENSE.txt) + +# ====================================================================== +# forcingfluxanom_kernelcalcs_final.py +# +# Called by forcingfluxanom_xr_final.py +# Reads in and (as needed) regrids radiative kernels, reads in model data, computes radiative flux kernel calculations +# +# Forcing Feedback Diagnositic Package +# +# This file is part of the Forcing and Feedback Diagnostic Package +# and the MDTF code package. See LICENSE.txt for the license. +# + +import os +import numpy as np +import xarray as xr +from scipy.interpolate import interp1d + +# Import functions specific to this toolkit +from forcing_feedback_util import var_anom4D +from forcing_feedback_util import fluxanom_calc_4D +from forcing_feedback_util import fluxanom_calc_3D +from forcing_feedback_util import esat_coef +from forcing_feedback_util import latlonr3_3D4D +from forcing_feedback_util import feedback_regress + +# ====================================================================== + +# Read in radiative kernels +kernnc = xr.open_dataset(os.environ["OBS_DATA"] + "/forcing_feedback_kernels.nc") +lat_kern = kernnc['latitude'].values +lon_kern = kernnc['longitude'].values +lev_kern = kernnc['plev'].values +time_kern_TOA = kernnc['time'].values +lw_t_kern_TOA = kernnc['lw_t'].values # LW all-sky air temperature kernel +lwclr_t_kern_TOA = kernnc['lwclr_t'].values # LW clear-sky air temperature kernel +lw_ts_kern_TOA = kernnc['lw_ts'].values # LW all-sky surface temperature kernel +lwclr_ts_kern_TOA = kernnc['lwclr_ts'].values # LW clear-sky surface temperature kernel +lw_q_kern_TOA = kernnc['lw_q'].values # LW all-sky water vapor radiative kernel +lwclr_q_kern_TOA = kernnc['lwclr_q'].values # LW clear-sky water vapor radiative kernel +sw_q_kern_TOA = kernnc['sw_q'].values # SW all-sky water vapor radiative kernel +swclr_q_kern_TOA = kernnc['swclr_q'].values # SW clear-sky water vapor radiative kernel +sw_a_kern_TOA = kernnc['sw_a'].values # SW all-sky surface albedo radiative kernel +swclr_a_kern_TOA = kernnc['swclr_a'].values # SW clear-sky surface albedo radiative kernel +ps_kern_TOA = kernnc['PS'].values # Radiative kernel surface pressure + +# close kernel file +kernnc.close() + +# Read in model output + +varnames = ["ta", "ts", "hus", "rsus", 
"rsds", "rsuscs", "rsdscs", "rsdt", "rsut", "rsutcs", "rlut", "rlutcs"] +model_mainvar_pert = {} +i = 0 +for varname in varnames: + nc = xr.open_dataset(os.environ[varname.upper() + "_FILE"]) + model_mainvar_pert[os.environ[varname + "_var"]] = nc[os.environ[varname + "_var"]] + if i == 0: + lat_model = nc[os.environ["lat_coord"]].values + lon_model = nc[os.environ["lon_coord"]].values + lev_model = nc[os.environ["plev_coord"]].values + time_model = nc[os.environ["time_coord"]].values + nc.close() + i += 1 +i = None + +model_mainvar_climo = model_mainvar_pert + +# To avoid issues with interpreting different model time formats, just create new variable. +time_model = np.arange(len(time_model)) + 1 + +# Regrid kernels using function to match horizontal grid of model/observational data +lw_t_kern_TOA = latlonr3_3D4D(lw_t_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ta_var"]]) +lwclr_t_kern_TOA = latlonr3_3D4D(lwclr_t_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ta_var"]]) +lw_ts_kern_TOA = latlonr3_3D4D(lw_ts_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ts_var"]]) +lwclr_ts_kern_TOA = latlonr3_3D4D(lwclr_ts_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ts_var"]]) +lw_q_kern_TOA = latlonr3_3D4D(lw_q_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ta_var"]]) +lwclr_q_kern_TOA = latlonr3_3D4D(lwclr_q_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ta_var"]]) +sw_q_kern_TOA = latlonr3_3D4D(sw_q_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, + model_mainvar_climo[os.environ["ta_var"]]) +swclr_q_kern_TOA = latlonr3_3D4D(swclr_q_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, \ + model_mainvar_climo[os.environ["ta_var"]]) +sw_a_kern_TOA = latlonr3_3D4D(sw_a_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, \ + model_mainvar_climo[os.environ["ts_var"]]) +swclr_a_kern_TOA = latlonr3_3D4D(swclr_a_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, \ + model_mainvar_climo[os.environ["ts_var"]]) +ps_kern_TOA = latlonr3_3D4D(ps_kern_TOA, lat_kern, lon_kern, lat_model, lon_model, \ + model_mainvar_climo[os.environ["ts_var"]]) + +# Kernels have now been regridded to the model/observation grid +lat_kern = lat_model +lon_kern = lon_model + +# If necessary, conduct vertical interpolation +for varname in varnames: + if len(model_mainvar_pert[os.environ[varname + "_var"]].shape) == 4: + # If vertical coordinates for kernel and model data differ, check if just a difference in units, + # check if one is flipped and finally, flip or interpolate as needed. + if not np.array_equal(lev_model, lev_kern) and not np.array_equal(lev_model / 100, lev_kern) \ + and not np.array_equal(lev_model * 100, lev_kern): + if np.array_equal(lev_model, lev_kern[::-1]) or np.array_equal(lev_model / 100, lev_kern[::-1]) \ + or np.array_equal(lev_model * 100, lev_kern[::-1]): + model_mainvar_pert[os.environ[varname + "_var"]] = \ + model_mainvar_pert[os.environ[varname + "_var"]][:, ::-1, ...] + model_mainvar_climo[os.environ[varname + "_var"]] = \ + model_mainvar_climo[os.environ[varname + "_var"]][:, ::-1, ...] 
+ else: + if np.max(lev_model) / np.max(lev_kern) > 10: + lev_model = lev_model / 100 # scale units down + if np.max(lev_model) / np.max(lev_kern) < 0.1: + lev_model = lev_model * 100 # scale units up + f = interp1d(np.log(lev_model), model_mainvar_pert[os.environ[varname + "_var"]], + axis=1, bounds_error=False, fill_value='extrapolate') + model_mainvar_pert[os.environ[varname + "_var"]] = f(np.log(lev_kern)) + f = None + f = interp1d(np.log(lev_model), model_mainvar_climo[os.environ[varname + "_var"]], + axis=1, bounds_error=False, fill_value='extrapolate') + model_mainvar_climo[os.environ[varname + "_var"]] = f(np.log(lev_kern)) + +# Pressure of upper boundary of each vertical layer +pt = (lev_kern[1:] + lev_kern[:-1]) / 2 +pt = np.append(pt, 0) +# Pressure of lower boundary of each vertical layer +pb = pt[:-1] +pb = np.insert(pb, 0, 1000) +# Pressure thickness of each vertical level +dp = pb - pt + +sk = lw_t_kern_TOA.shape +sp = model_mainvar_pert[os.environ["ta_var"]].shape + +# Determine thickness of lowest layer, dependent on local surface pressure +dp_sfc = dp[0] + (ps_kern_TOA - pb[0]) +dp_sfc[ps_kern_TOA >= pt[0]] = 0 + +# Repeat lowest layer thicknesses to match size of model data for kernel calculations +dp_sfc = np.repeat(dp_sfc[np.newaxis, :, :, :], int(sp[0] / 12), axis=0) + +# Compute lapse rate and uniform warming +ta_pert = np.asarray(model_mainvar_pert[os.environ["ta_var"]]) +ta_climo = np.asarray(model_mainvar_climo[os.environ["ta_var"]]) +ts_pert = np.asarray(model_mainvar_pert[os.environ["ts_var"]]) +ts_climo = np.asarray(model_mainvar_climo[os.environ["ts_var"]]) +s = ta_pert.shape + +pl_pert = np.repeat(ts_pert[:, np.newaxis, :, :], s[1], axis=1) +s = ta_climo.shape +pl_climo = np.repeat(ts_climo[:, np.newaxis, :, :], s[1], axis=1) + +ta_anom = var_anom4D(ta_pert, ta_climo) +pl_anom = var_anom4D(pl_pert, pl_climo) +lr_anom = ta_anom - pl_anom + +# Compute Planck Radiative Flux Anomalies +[fluxanom_pl_tot_TOA_tropo, fluxanom_pl_clr_TOA_tropo, fluxanom_pl_tot_TOA_strato, fluxanom_pl_clr_TOA_strato] = \ + fluxanom_calc_4D(pl_anom, lw_t_kern_TOA, lwclr_t_kern_TOA, dp_sfc, lev_kern, lat_kern, ps_kern_TOA) + +# Compute Surface Temperature (surface Planck) Flux Anomalies +[fluxanom_pl_sfc_tot_TOA, fluxanom_pl_sfc_clr_TOA] = fluxanom_calc_3D(ts_pert, ts_climo, lw_ts_kern_TOA, + lwclr_ts_kern_TOA) + +# Compute Lapse Rate Radiative Flux Anomalies +[fluxanom_lr_tot_TOA_tropo, fluxanom_lr_clr_TOA_tropo, fluxanom_lr_tot_TOA_strato, fluxanom_lr_clr_TOA_strato] = \ + fluxanom_calc_4D(lr_anom, lw_t_kern_TOA, lwclr_t_kern_TOA, dp_sfc, lev_kern, lat_kern, ps_kern_TOA) + +# Compute water vapor change +hus_pert = np.asarray(model_mainvar_pert[os.environ["hus_var"]]) +hus_pert[hus_pert < 0] = 0 +hus_climo = np.asarray(model_mainvar_climo[os.environ["hus_var"]]) +hus_climo[hus_climo < 0] = 0 + +# Calculations necessary to convert units of water vapor change to match kernels +shp = ta_climo.shape +ta_ratio_climo = np.squeeze(np.nanmean(np.reshape(ta_climo, (int(shp[0] / 12), 12, shp[1], shp[2], shp[3])), + axis=0)) +es_ratio = esat_coef(ta_ratio_climo + 1) / esat_coef(ta_ratio_climo) + +es_ratio_pert = np.reshape(np.repeat(es_ratio[np.newaxis, ...], int(ta_pert.shape[0] / 12), axis=0), + (ta_pert.shape[0], shp[1], shp[2], shp[3])) + +# Log of water vapor +q_pert = np.log(hus_pert) / (es_ratio_pert - 1) +es_ratio_climo = np.reshape(np.repeat(es_ratio[np.newaxis, ...], int(shp[0] / 12), axis=0), + (shp[0], shp[1], shp[2], shp[3])) + +q_climo = np.log(hus_climo) / (es_ratio_climo - 1)
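+# The two assignments above express specific humidity in the units assumed by the water-vapor kernels:
+# the natural log of q is divided by (esat(T+1K)/esat(T) - 1), an estimate of the fractional increase in
+# saturation vapor pressure for 1 K of warming at the climatological temperature. When var_anom4D takes
+# the perturbation-minus-climatology difference below, the humidity anomaly is expressed as an equivalent
+# temperature change at fixed relative humidity, which is the unit convention the radiative kernels expect.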
+es_ratio, ta_ratio_climo, hus_pert, hus_climo, ta_pert, ta_climo = None, None, None, None, None, None + +q_anom = var_anom4D(q_pert, q_climo) + +# Compute SW Water Vapor Radiative Flux Anomalies + +[fluxanom_sw_q_tot_TOA_tropo, fluxanom_sw_q_clr_TOA_tropo, fluxanom_sw_q_tot_TOA_strato, fluxanom_sw_q_clr_TOA_strato] \ + = fluxanom_calc_4D(q_anom, sw_q_kern_TOA, swclr_q_kern_TOA, dp_sfc, lev_kern, lat_kern, ps_kern_TOA) + +# Compute LW Water Vapor Radiative Flux Anomalies +[fluxanom_lw_q_tot_TOA_tropo, fluxanom_lw_q_clr_TOA_tropo, fluxanom_lw_q_tot_TOA_strato, fluxanom_lw_q_clr_TOA_strato] \ + = fluxanom_calc_4D(q_anom, lw_q_kern_TOA, lwclr_q_kern_TOA, dp_sfc, lev_kern, lat_kern, ps_kern_TOA) + +fluxanom_sw_q_tot_TOA_tropo[fluxanom_sw_q_tot_TOA_tropo == float('Inf')] = np.nan +fluxanom_sw_q_clr_TOA_tropo[fluxanom_sw_q_clr_TOA_tropo == float('Inf')] = np.nan +fluxanom_lw_q_tot_TOA_tropo[fluxanom_lw_q_tot_TOA_tropo == float('Inf')] = np.nan +fluxanom_lw_q_clr_TOA_tropo[fluxanom_lw_q_clr_TOA_tropo == float('Inf')] = np.nan +fluxanom_sw_q_tot_TOA_tropo[fluxanom_sw_q_tot_TOA_tropo == -float('Inf')] = np.nan +fluxanom_sw_q_clr_TOA_tropo[fluxanom_sw_q_clr_TOA_tropo == -float('Inf')] = np.nan +fluxanom_lw_q_tot_TOA_tropo[fluxanom_lw_q_tot_TOA_tropo == -float('Inf')] = np.nan +fluxanom_lw_q_clr_TOA_tropo[fluxanom_lw_q_clr_TOA_tropo == -float('Inf')] = np.nan + +fluxanom_sw_q_tot_TOA_strato[fluxanom_sw_q_tot_TOA_strato == float('Inf')] = np.nan +fluxanom_sw_q_clr_TOA_strato[fluxanom_sw_q_clr_TOA_strato == float('Inf')] = np.nan +fluxanom_lw_q_tot_TOA_strato[fluxanom_lw_q_tot_TOA_strato == float('Inf')] = np.nan +fluxanom_lw_q_clr_TOA_strato[fluxanom_lw_q_clr_TOA_strato == float('Inf')] = np.nan +fluxanom_sw_q_tot_TOA_strato[fluxanom_sw_q_tot_TOA_strato == -float('Inf')] = np.nan +fluxanom_sw_q_clr_TOA_strato[fluxanom_sw_q_clr_TOA_strato == -float('Inf')] = np.nan +fluxanom_lw_q_tot_TOA_strato[fluxanom_lw_q_tot_TOA_strato == -float('Inf')] = np.nan +fluxanom_lw_q_clr_TOA_strato[fluxanom_lw_q_clr_TOA_strato == -float('Inf')] = np.nan + +# Compute surface albedo change +alb_pert_tot = np.asarray(model_mainvar_pert[os.environ["rsus_var"]] / model_mainvar_pert[os.environ["rsds_var"]]) +alb_climo_tot = np.asarray(model_mainvar_climo[os.environ["rsus_var"]] / model_mainvar_climo[os.environ["rsds_var"]]) +alb_pert_clr = np.asarray(model_mainvar_pert[os.environ["rsuscs_var"]] / model_mainvar_pert[os.environ["rsdscs_var"]]) +alb_climo_clr = np.asarray( + model_mainvar_climo[os.environ["rsuscs_var"]] / model_mainvar_climo[os.environ["rsdscs_var"]]) + +alb_pert_tot[np.isinf(alb_pert_tot)] = 0 +alb_climo_tot[np.isinf(alb_climo_tot)] = 0 +alb_pert_clr[np.isinf(alb_pert_clr)] = 0 +alb_climo_clr[np.isinf(alb_climo_clr)] = 0 +alb_pert_tot[alb_pert_tot > 1] = 0 +alb_pert_tot[alb_pert_tot < 0] = 0 +alb_climo_tot[alb_climo_tot > 1] = 0 +alb_climo_tot[alb_climo_tot < 0] = 0 +alb_pert_clr[alb_pert_clr > 1] = 0 +alb_pert_clr[alb_pert_clr < 0] = 0 +alb_climo_clr[alb_climo_clr > 1] = 0 +alb_climo_clr[alb_climo_clr < 0] = 0 +# Compute Surface Albedo Radiative Flux Anomalies + + +[fluxanom_a_sfc_tot_TOA, fluxanom_a_sfc_clr_TOA] = \ + fluxanom_calc_3D(alb_pert_tot, alb_climo_tot, sw_a_kern_TOA / .01, swclr_a_kern_TOA / .01, alb_pert_clr, + alb_climo_clr) + +# Compute NET Radiative Flux Anomalies +Rtot_LW_TOA_pert = np.asarray(-model_mainvar_pert[os.environ["rlut_var"]]) +Rtot_SW_TOA_pert = np.asarray(model_mainvar_pert[os.environ["rsdt_var"]] - model_mainvar_pert[os.environ["rsut_var"]]) +Rtot_LW_TOA_climo = 
np.asarray(-model_mainvar_climo[os.environ["rlut_var"]]) +Rtot_SW_TOA_climo = np.asarray( + model_mainvar_climo[os.environ["rsdt_var"]] - model_mainvar_climo[os.environ["rsut_var"]]) +Rclr_LW_TOA_pert = np.asarray(-model_mainvar_pert[os.environ["rlutcs_var"]]) +Rclr_SW_TOA_pert = np.asarray(model_mainvar_pert[os.environ["rsdt_var"]] - model_mainvar_pert[os.environ["rsutcs_var"]]) +Rclr_LW_TOA_climo = np.asarray(-model_mainvar_climo[os.environ["rlutcs_var"]]) +Rclr_SW_TOA_climo = np.asarray( + model_mainvar_climo[os.environ["rsdt_var"]] - model_mainvar_climo[os.environ["rsutcs_var"]]) + +# TOA +[fluxanom_Rtot_LW_TOA, fluxanom_Rclr_LW_TOA] = \ + fluxanom_calc_3D(Rtot_LW_TOA_pert, Rtot_LW_TOA_climo, np.ones(lw_ts_kern_TOA.shape), + np.ones(lwclr_ts_kern_TOA.shape), + Rclr_LW_TOA_pert, Rclr_LW_TOA_climo) + +[fluxanom_Rtot_SW_TOA, fluxanom_Rclr_SW_TOA] = \ + fluxanom_calc_3D(Rtot_SW_TOA_pert, Rtot_SW_TOA_climo, np.ones(lw_ts_kern_TOA.shape), + np.ones(lwclr_ts_kern_TOA.shape), + Rclr_SW_TOA_pert, Rclr_SW_TOA_climo) +# TOA CRE +fluxanom_Rcre_LW_TOA = fluxanom_Rtot_LW_TOA - fluxanom_Rclr_LW_TOA +fluxanom_Rcre_SW_TOA = fluxanom_Rtot_SW_TOA - fluxanom_Rclr_SW_TOA + +# Compute Instantaneous Radiative Forcing as difference between NET Radiative Flux Anomalies and +# the sum of all individual radiative flux anomalies. Total-sky IRF computed as +# Clear-Sky IRF divided by cloud masking constant. NOTE, these cloud masking constants may not apply +# to user's specific model experiment. +IRF_lw_clr_TOA = fluxanom_Rclr_LW_TOA - fluxanom_pl_clr_TOA_tropo - fluxanom_lr_clr_TOA_tropo - \ + fluxanom_lw_q_clr_TOA_tropo - fluxanom_pl_sfc_clr_TOA - fluxanom_pl_clr_TOA_strato - \ + fluxanom_lr_clr_TOA_strato - fluxanom_lw_q_clr_TOA_strato +IRF_lw_tot_TOA = IRF_lw_clr_TOA / np.double(os.environ["LW_CLOUDMASK"]) +IRF_sw_clr_TOA = (fluxanom_Rclr_SW_TOA - fluxanom_sw_q_clr_TOA_tropo - fluxanom_a_sfc_clr_TOA - + fluxanom_sw_q_clr_TOA_strato) +IRF_sw_tot_TOA = IRF_sw_clr_TOA / np.double(os.environ["SW_CLOUDMASK"]) + +# Compute Cloud Radiative Flux Anomalies as dCRE minus correction for cloud masking using +# kernel-derived IRF and individual flux anomalies (See e.g. Soden et al. 
2008) + +fluxanom_lw_c_TOA = fluxanom_Rcre_LW_TOA - (fluxanom_pl_tot_TOA_tropo - fluxanom_pl_clr_TOA_tropo) \ + - (fluxanom_pl_tot_TOA_strato - fluxanom_pl_clr_TOA_strato) \ + - (fluxanom_pl_sfc_tot_TOA - fluxanom_pl_sfc_clr_TOA) \ + - (fluxanom_lr_tot_TOA_tropo - fluxanom_lr_clr_TOA_tropo) \ + - (fluxanom_lr_tot_TOA_strato - fluxanom_lr_clr_TOA_strato) \ + - (fluxanom_lw_q_tot_TOA_tropo - fluxanom_lw_q_clr_TOA_tropo) \ + - (fluxanom_lw_q_tot_TOA_strato - fluxanom_lw_q_clr_TOA_strato) \ + - (IRF_lw_tot_TOA - IRF_lw_clr_TOA) + +fluxanom_sw_c_TOA = fluxanom_Rcre_SW_TOA - (fluxanom_sw_q_tot_TOA_tropo - fluxanom_sw_q_clr_TOA_tropo) \ + - (fluxanom_sw_q_tot_TOA_strato - fluxanom_sw_q_clr_TOA_strato) \ + - (fluxanom_a_sfc_tot_TOA - fluxanom_a_sfc_clr_TOA) \ + - (IRF_sw_tot_TOA - IRF_sw_clr_TOA) + +# Regress radiative flux anomalies with global-mean dTs to compute feedbacks +# within feedback_regress function, results saved to a netcdf + +# Planck Feedback +feedback_regress(fluxanom_pl_tot_TOA_tropo + fluxanom_pl_sfc_tot_TOA, ts_pert, ts_climo, lat_kern, lon_kern, + 'Planck') + +# Lapse Rate Feedback +feedback_regress(fluxanom_lr_tot_TOA_tropo, ts_pert, ts_climo, lat_kern, lon_kern, 'LapseRate') + +# LW Water vapor Feedback +feedback_regress(fluxanom_lw_q_tot_TOA_tropo, ts_pert, ts_climo, lat_kern, lon_kern, 'LW_WaterVapor') + +# SW Water vapor Feedback +feedback_regress(fluxanom_sw_q_tot_TOA_tropo, ts_pert, ts_climo, lat_kern, lon_kern, 'SW_WaterVapor') + +# Surface Albedo Feedback +feedback_regress(fluxanom_a_sfc_tot_TOA, ts_pert, ts_climo, lat_kern, lon_kern, 'SfcAlbedo') + +# Total stratospheric Feedback +feedback_regress(fluxanom_pl_tot_TOA_strato + fluxanom_lr_tot_TOA_strato + fluxanom_lw_q_tot_TOA_strato + \ + fluxanom_sw_q_tot_TOA_strato, ts_pert, ts_climo, lat_kern, lon_kern, 'StratFB') + +# LW Cloud Feedback +feedback_regress(fluxanom_lw_c_TOA, ts_pert, ts_climo, lat_kern, lon_kern, 'LW_Cloud') + +# SW Cloud Feedback +feedback_regress(fluxanom_sw_c_TOA, ts_pert, ts_climo, lat_kern, lon_kern, 'SW_Cloud') + +# LW Total Radiative Anomalies +feedback_regress(fluxanom_Rtot_LW_TOA, ts_pert, ts_climo, lat_kern, lon_kern, 'LW_Rad') + +# SW Total Radiative Anomalies +feedback_regress(fluxanom_Rtot_SW_TOA, ts_pert, ts_climo, lat_kern, lon_kern, 'SW_Rad') + +shp = ts_pert.shape +xtime = np.repeat(np.repeat(np.arange(shp[0])[..., np.newaxis], shp[1], axis=1)[..., np.newaxis], shp[2], axis=2) +# LW IRF (trendlines, regressed against time) +feedback_regress(IRF_lw_tot_TOA, xtime, xtime * 0, lat_kern, lon_kern, 'LW_IRF') + +# SW IRF (trendlines, regressed against time) +feedback_regress(IRF_sw_tot_TOA, xtime, xtime * 0, lat_kern, lon_kern, 'SW_IRF') diff --git a/diagnostics/forcing_feedback/forcing_feedback_plot.py b/diagnostics/forcing_feedback/forcing_feedback_plot.py new file mode 100644 index 000000000..865c46def --- /dev/null +++ b/diagnostics/forcing_feedback/forcing_feedback_plot.py @@ -0,0 +1,251 @@ +# This file is part of the forcing_feedback module of the MDTF code package (see mdtf/MDTF_v2.0/LICENSE.txt) + +# ====================================================================== +# forcing_feedback_plot.py +# +# Called by forcing_feedback.py +# Reads in observations and temporary model results of individual 2D feedbacks and IRF, produces maps of each and +# a bar graph summarizing global-mean values. 
Also creates a dotplot, comparing results against the CMIP6 suite of models +# +# Forcing Feedback Diagnostic Package +# +# This file is part of the Forcing and Feedback Diagnostic Package +# and the MDTF code package. See LICENSE.txt for the license. +# +# + +import os +import numpy as np +import xarray as xr +import matplotlib as mpl + +mpl.use('Agg') +import matplotlib.pyplot as plt + +from forcing_feedback_util import globemean_2D +from forcing_feedback_util import bargraph_plotting +from forcing_feedback_util import map_plotting_4subs +from forcing_feedback_util import map_plotting_2subs + +# Read in observational data +nc_obs = xr.open_dataset(os.environ["OBS_DATA"] + "/forcing_feedback_obs.nc") +lat_obs = nc_obs.lat.values +lon_obs = nc_obs.lon.values +llons_obs, llats_obs = np.meshgrid(lon_obs, lat_obs) + +# Read in model results + +nc_pl = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_Planck.nc") +nc_lr = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_LapseRate.nc") +nc_lw_q = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_LW_WaterVapor.nc") +nc_sw_q = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_SW_WaterVapor.nc") +nc_alb = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_SfcAlbedo.nc") +nc_lw_c = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_LW_Cloud.nc") +nc_sw_c = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_SW_Cloud.nc") +nc_lw_irf = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_LW_IRF.nc") +nc_sw_irf = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_SW_IRF.nc") +nc_lw_netrad = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_LW_Rad.nc") +nc_sw_netrad = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_SW_Rad.nc") +nc_strat = xr.open_dataset(os.environ["WK_DIR"] + "/model/netCDF/fluxanom2D_StratFB.nc") + +lat_model = nc_sw_irf.lat.values +weights_model = np.cos(np.deg2rad(lat_model)) +weights_obs = np.cos(np.deg2rad(lat_obs)) + +# Global-mean barplot comparison +# Figure 1: Total Radiation +LW_RA_Model = globemean_2D(nc_lw_netrad.LW_Rad.values, weights_model) +SW_RA_Model = globemean_2D(nc_sw_netrad.SW_Rad.values, weights_model) +Net_RA_Model = LW_RA_Model + SW_RA_Model +bars1 = [Net_RA_Model, LW_RA_Model, SW_RA_Model] +LW_RA_Obs = globemean_2D(nc_obs.LW_Rad.values, weights_obs) +SW_RA_Obs = globemean_2D(nc_obs.SW_Rad.values, weights_obs) +Net_RA_Obs = LW_RA_Obs + SW_RA_Obs +bars2 = [Net_RA_Obs, LW_RA_Obs, SW_RA_Obs] +units = 'W/$m^2$/K' +legendnames = ['Net Radiation', 'LW Rad', 'SW Rad'] +filename = 'Rad' +bargraph_plotting(bars1, bars2, units, legendnames, filename) + +# Figure 2: IRF +LW_IRF_Model = 12 * globemean_2D(nc_lw_irf.LW_IRF.values, weights_model) # converting from W/m2/month to W/m2/yr +SW_IRF_Model = 12 * globemean_2D(nc_sw_irf.SW_IRF.values, weights_model) +Net_IRF_Model = LW_IRF_Model + SW_IRF_Model +bars1 = [Net_IRF_Model, LW_IRF_Model, SW_IRF_Model] +LW_IRF_Obs = 12 * globemean_2D(nc_obs.LW_IRF.values, weights_obs) +SW_IRF_Obs = 12 * globemean_2D(nc_obs.SW_IRF.values, weights_obs) +Net_IRF_Obs = LW_IRF_Obs + SW_IRF_Obs +bars2 = [Net_IRF_Obs, LW_IRF_Obs, SW_IRF_Obs] +units = 'W/$m^2/yr$' +legendnames = ['Net IRF', 'LW IRF', 'SW IRF'] +filename = 'IRF' +bargraph_plotting(bars1, bars2, units, legendnames, filename) + +# Figure 3: Longwave Radiative Feedbacks +PL_Model = globemean_2D(nc_pl.Planck.values, weights_model) +LR_Model = globemean_2D(nc_lr.LapseRate.values, 
weights_model) +LWWV_Model = globemean_2D(nc_lw_q.LW_WaterVapor.values, weights_model) +LWC_Model = globemean_2D(nc_lw_c.LW_Cloud.values, weights_model) +bars1 = [PL_Model, LR_Model, LWWV_Model, LWC_Model] +PL_Obs = globemean_2D(nc_obs.Planck.values, weights_obs) +LR_Obs = globemean_2D(nc_obs.LapseRate.values, weights_obs) +LWWV_Obs = globemean_2D(nc_obs.LW_WaterVapor.values, weights_obs) +LWC_Obs = globemean_2D(nc_obs.LW_Cloud.values, weights_obs) +bars2 = [PL_Obs, LR_Obs, LWWV_Obs, LWC_Obs] +units = 'W/$m^2$/K' +legendnames = ['Planck', 'Lapse Rate', 'LW Water Vapor', ' LW Cloud'] +filename = 'LWFB' +bargraph_plotting(bars1, bars2, units, legendnames, filename) + +# Figure 4: Shortwave Radiative Feedbacks +Alb_Model = globemean_2D(nc_alb.SfcAlbedo.values, weights_model) +SWWV_Model = globemean_2D(nc_sw_q.SW_WaterVapor.values, weights_model) +SWC_Model = globemean_2D(nc_sw_c.SW_Cloud.values, weights_model) +bars1 = [Alb_Model, SWWV_Model, SWC_Model] +Alb_Obs = globemean_2D(nc_obs.SfcAlbedo.values, weights_obs) +SWWV_Obs = globemean_2D(nc_obs.SW_WaterVapor.values, weights_obs) +SWC_Obs = globemean_2D(nc_obs.SW_Cloud.values, weights_obs) +bars2 = [Alb_Obs, SWWV_Obs, SWC_Obs] +units = 'W/$m^2$/K' +legendnames = ['Sfc. Albedo', 'SW Water Vapor', ' SW Cloud'] +filename = 'SWFB' +bargraph_plotting(bars1, bars2, units, legendnames, filename) + +Strat_Model = globemean_2D(nc_strat.StratFB.values, weights_model) +# Strat_Obs = globemean_2D(nc_obs.StratFB.values,weights_obs) + +# CMIP6 values. IRF already multiplied by 12 +CMIP6vals = np.loadtxt(os.environ["OBS_DATA"] + "/CldFB_MDTF.txt") + +# Create scatter plot with CMIP6 data. One iteration only +plt.figure(1) +f, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]}) +# Add in each CMIP6 value model-by-model +for m in range(CMIP6vals.shape[0] - 1): + ax1.scatter(np.arange(CMIP6vals.shape[1] - 1), CMIP6vals[m, :-1], c='k', label='_nolegend_') +# For legend purposes, add in last CMIP6 model separately +ax1.scatter(np.arange(CMIP6vals.shape[1] - 1), CMIP6vals[m + 1, :-1], c='k', label='CMIP6 (Hist. 
2003-2014)') +# Add in values from POD user's model +ax1.scatter(np.arange(CMIP6vals.shape[1] - 1), np.asarray([Net_RA_Model, LWC_Model + SWC_Model, + PL_Model + LR_Model + LWWV_Model + SWWV_Model + Alb_Model]), + c='b', label='Your Model') +# Add in Observational Values +ax1.scatter(np.arange(CMIP6vals.shape[1] - 1), np.asarray([Net_RA_Obs, + LWC_Obs + SWC_Obs, + PL_Obs + LR_Obs + LWWV_Obs + SWWV_Obs + Alb_Obs]), c='r', + label='Obs.') +# Creating axis labels for dotplot +ax1.set_ylabel('W/$m^2$/K') +xterms = ['$\Delta{R}_{tot}$', '$\lambda_{cloud}$', '$\lambda_{noncloud}$'] +ax1.set_xticks([r for r in range(CMIP6vals.shape[1] - 1)], xterms) +ax1.legend(loc='lower center') +for m in range(CMIP6vals.shape[0]): + ax2.scatter(1, CMIP6vals[m, -1], c='k', label='_nolegend_') +ax2.scatter(1, Net_IRF_Model, c='b', label='_nolegend_') +ax2.scatter(1, Net_IRF_Obs, c='r', label='_nolegend_') +ax2.set_ylabel('W/$m^2$/yr') +xterms = ['', 'IRF', ''] +ax2.set_xticks([r for r in range(len(xterms))], xterms) +plt.tight_layout() +plt.savefig(os.environ['WK_DIR'] + '/model/PS/forcing_feedback_CMIP6scatter.eps') +plt.close() + +if np.max(nc_sw_irf.lon.values) >= 300: # convert 0-360 lon to -180-180 lon for plotting + lon1 = np.mod((nc_sw_irf.lon.values + 180), 360) - 180 + lon1a = lon1[0:int(len(lon1) / 2)] + lon1b = lon1[int(len(lon1) / 2):] + lon_model = np.concatenate((lon1b, lon1a)) +else: + lon_model = nc_sw_irf.lon.values +llons_model, llats_model = np.meshgrid(lon_model, lat_model) + +lon_originalmodel = nc_sw_irf.lon.values + +# Produce maps of the radiative feedbacks and IRF trends, comparing model results to observations + +# Temperature Feedback +levels_1 = np.arange(-6, 6.0001, 1) +variablename_1 = 'Planck' +modelvariable_1 = nc_pl.Planck.values +obsvariable_1 = nc_obs.Planck.values +levels_2 = np.arange(-6, 6.0001, 1) +variablename_2 = 'Lapse Rate' +modelvariable_2 = nc_lr.LapseRate.values +obsvariable_2 = nc_obs.LapseRate.values +units = 'W/$m^2$/K' +filename = 'Temperature' +map_plotting_4subs(levels_1, levels_2, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, variablename_2, + modelvariable_2, obsvariable_2, units, filename) + +# Water Vapor Feedback +levels_1 = np.arange(-6, 6.0001, 1) +variablename_1 = 'LW Water Vapor' +modelvariable_1 = nc_lw_q.LW_WaterVapor.values +obsvariable_1 = nc_obs.LW_WaterVapor.values +levels_2 = np.arange(-1, 1.0001, 0.2) +variablename_2 = 'SW Water Vapor' +modelvariable_2 = nc_sw_q.SW_WaterVapor.values +obsvariable_2 = nc_obs.SW_WaterVapor.values +units = 'W/$m^2$/K' +filename = 'WaterVapor' +map_plotting_4subs(levels_1, levels_2, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, + variablename_2, modelvariable_2, obsvariable_2, units, filename) + +# Surface Albedo Feedback +levels_1 = np.arange(-6, 6.0001, 1) +variablename_1 = 'Sfc. 
Albedo' +modelvariable_1 = nc_alb.SfcAlbedo.values +obsvariable_1 = nc_obs.SfcAlbedo.values +units = 'W/$m^2$/K' +filename = 'SfcAlbedo' +map_plotting_2subs(levels_1, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, units, filename) + +# Cloud Feedback +levels_1 = np.arange(-16, 16.0001, 2) +variablename_1 = 'LW Cloud' +modelvariable_1 = nc_lw_c.LW_Cloud.values +obsvariable_1 = nc_obs.LW_Cloud.values +levels_2 = np.arange(-16, 16.0001, 2) +variablename_2 = 'SW Cloud' +modelvariable_2 = nc_sw_c.SW_Cloud.values +obsvariable_2 = nc_obs.SW_Cloud.values +units = 'W/$m^2$/K' +filename = 'Cloud' +map_plotting_4subs(levels_1, levels_2, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, + variablename_2, modelvariable_2, obsvariable_2, units, filename) + +# Rad Feedback +levels_1 = np.arange(-24, 24.0001, 2) +variablename_1 = 'LW Total Rad' +modelvariable_1 = nc_lw_netrad.LW_Rad.values +obsvariable_1 = nc_obs.LW_Rad.values +levels_2 = np.arange(-24, 24.0001, 2) +variablename_2 = 'SW Total Rad' +modelvariable_2 = nc_sw_netrad.SW_Rad.values +obsvariable_2 = nc_obs.SW_Rad.values +units = 'W/$m^2$/K' +filename = 'Rad' +map_plotting_4subs(levels_1, levels_2, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, + variablename_2, modelvariable_2, obsvariable_2, units, filename) + +# IRF Trend +# levels_1 = np.arange(-0.015,0.0150001,0.0015) +levels_1 = np.arange(-0.030, 0.0300001, 0.0030) +variablename_1 = 'LW IRF' +modelvariable_1 = 12 * nc_lw_irf.LW_IRF.values +obsvariable_1 = 12 * nc_obs.LW_IRF.values +# levels_2 = np.arange(-0.015,0.0150001,0.0015) +levels_2 = np.arange(-0.030, 0.0300001, 0.0030) +variablename_2 = 'SW IRF' +modelvariable_2 = 12 * nc_sw_irf.SW_IRF.values +obsvariable_2 = 12 * nc_obs.SW_IRF.values +units = 'W/$m^2$/yr' +filename = 'IRF' +map_plotting_4subs(levels_1, levels_2, variablename_1, modelvariable_1, lon_originalmodel, + lon_model, lat_model, lon_obs, lat_obs, obsvariable_1, + variablename_2, modelvariable_2, obsvariable_2, units, filename) diff --git a/diagnostics/forcing_feedback/forcing_feedback_util.py b/diagnostics/forcing_feedback/forcing_feedback_util.py new file mode 100644 index 000000000..6396bb862 --- /dev/null +++ b/diagnostics/forcing_feedback/forcing_feedback_util.py @@ -0,0 +1,600 @@ +# This file is part of the forcing_feedback module of the MDTF code package (see mdtf/MDTF_v2.0/LICENSE.txt) + +# ====================================================================== +# forcing_feedback_util.py +# +# Provides functions called by the forcing_feedback POD scripts +# +# This file is part of the Forcing Feedback Diagnostic Package and the +# MDTF code package. See LICENSE.txt for the license. 
+# +# Including: +# (1) fluxanom_calc_4D +# (2) fluxanom_calc_3D +# (3) esat_coef +# (4) latlonr3_3D4D +# (5) globemean_2D +# (6) globemean_3D +# (7) fluxanom_nc_create +# (8) feedback_regress +# (9) bargraph_plotting +# (10) map_plotting_4subs +# (11) map_plotting_2subs +# +# ====================================================================== +# Import standard Python packages + +import os +import numpy as np +import numpy.ma as ma +import dask.array as da +import xarray as xr +from scipy.interpolate import griddata +import matplotlib as mpl + +mpl.use('Agg') +import matplotlib.pyplot as plt +import matplotlib.ticker as mticker +import cartopy.crs as ccrs +from cartopy.mpl.gridliner import LATITUDE_FORMATTER +from cartopy.util import add_cyclic_point + + +# ======================================================= +# var_anom4D + +def var_anom4D(var_pert, var_base): + sp = var_pert.shape + sb = var_base.shape + + if len(sp) != 4 or len(sb) != 4: + print("An input variable is not 4D! Function will not execute") + else: + + # Prep variable to analyze on a monthly-basis + + var_pert_re = np.reshape(var_pert, (int(sp[0] / 12), 12, sp[1], sp[2], sp[3])) + var_base_re = np.reshape(var_base, (int(sb[0] / 12), 12, sb[1], sb[2], sb[3])) + var_base_tmean = np.repeat(np.squeeze(np.nanmean(var_base_re, axis=0))[np.newaxis, :, :, :], \ + int(sp[0] / 12), axis=0) + var_anom = da.from_array(var_pert_re - var_base_tmean, chunks=(5, 5, sp[1], sp[2], sp[3])) + + return var_anom + + +# ====================================================================== +# fluxanom_calc_4D + +def fluxanom_calc_4D(var_anom, tot_kern, clr_kern, dpsfc, levs, lats, psk): + """ + Computes anomalies of radiatively-relevant 4D climate variables and multiplies + by radiative kernel to convert to radiative flux change. 
Performs clear- and + all-sky calculations + + """ + + # Pressure of upper boundary of each vertical layer + pt = (levs[1:] + levs[:-1]) / 2 + pt = np.append(pt, 0) + # Pressure of lower boundary of each vertical layer + pb = pt[:-1] + pb = np.insert(pb, 0, 1000) + # Pressure thickness of each vertical level + dp = pb - pt + + sp = var_anom.shape + + # Create indices to separate troposphere from stratosphere + frac_tropo = np.zeros((sp[0], sp[1], sp[2] - 1, sp[3], sp[4])) + frac_strato = np.zeros((sp[0], sp[1], sp[2] - 1, sp[3], sp[4])) + tropopause = (100 + np.absolute(lats) * 200 / 90) + tropopause_mat = np.tile(tropopause, (sp[0], sp[1], sp[2] - 1, sp[4], 1)).transpose(0, 1, 2, 4, 3) + ptop_mat = np.tile(pt[1:], (sp[0], sp[1], sp[3], sp[4], 1)).transpose(0, 1, 4, 2, 3) + pbot_mat = np.tile(pb[1:], (sp[0], sp[1], sp[3], sp[4], 1)).transpose(0, 1, 4, 2, 3) + psk_mat = np.tile(psk, (sp[0], sp[2] - 1, 1, 1, 1)).transpose(0, 2, 1, 3, 4) + + frac_tropo[(pbot_mat <= psk_mat) & (ptop_mat >= tropopause_mat)] = 1 + frachold = (pbot_mat - tropopause_mat) / (pbot_mat - ptop_mat) + frac_tropo[(pbot_mat >= tropopause_mat) & (ptop_mat <= tropopause_mat)] = frachold[ + (pbot_mat >= tropopause_mat) & (ptop_mat <= tropopause_mat)] + frachold = (psk_mat - ptop_mat) / (pbot_mat - ptop_mat) + frac_tropo[(pbot_mat >= psk_mat) & (ptop_mat <= psk_mat)] = frachold[(pbot_mat >= psk_mat) & (ptop_mat <= psk_mat)] + + frachold = (tropopause_mat - ptop_mat) / (pbot_mat - ptop_mat) + frac_strato[(pbot_mat >= tropopause_mat) & (ptop_mat <= tropopause_mat)] = frachold[ + (pbot_mat >= tropopause_mat) & (ptop_mat <= tropopause_mat)] + frac_strato[(pbot_mat <= tropopause_mat) & (ptop_mat <= tropopause_mat)] = 1 + frachold, tropopause_mat, ptop_mat, pbot_mat, psk_mat = None, None, None, None, None + + if len(sp) != 5: + print("An input variable is not 4D! 
Function will not execute") + else: + + tot_kern = da.from_array(np.squeeze(np.repeat(tot_kern[np.newaxis, ...], sp[0], axis=0)), + chunks=(5, 5, sp[2], sp[3], sp[4])) + clr_kern = da.from_array(np.squeeze(np.repeat(clr_kern[np.newaxis, ...], sp[0], axis=0)), + chunks=(5, 5, sp[2], sp[3], sp[4])) + dp_mat = da.from_array(np.squeeze(np.repeat(np.repeat(np.repeat(np.repeat( \ + dp[np.newaxis, 1:], 12, axis=0)[np.newaxis, ...], sp[0], axis=0)[:, :, :, np.newaxis], \ + sp[3], axis=3)[:, :, :, :, np.newaxis], sp[4], axis=4)), + chunks=(5, 5, sp[2], sp[3], sp[4])) + frac_tropo = da.from_array(frac_tropo, chunks=(5, 5, sp[2], sp[3], sp[4])) + frac_strato = da.from_array(frac_strato, chunks=(5, 5, sp[2], sp[3], sp[4])) + dpsfc = da.from_array(dpsfc, chunks=(5, 5, sp[3], sp[4])) + + # Calculate flux anomaly for all levels except first level above surface - total-sky, troposphere + flux_tot_tropo = (tot_kern[:, :, 1:, :, :] * frac_tropo * dp_mat * var_anom[:, :, 1:, :, :] / 100) + flux_tot_strato = (tot_kern[:, :, 1:, :, :] * frac_strato * dp_mat * var_anom[:, :, 1:, :, :] / 100) + + # Calculate flux anomaly for level above surface + + flux_tot_tropo_bottom = (tot_kern[:, :, 0, :, :] \ + * dpsfc * var_anom[:, :, 0, :, :] / 100) + + flux_tot_strato_bottom = (tot_kern[:, :, 0, :, :] \ + * dpsfc * var_anom[:, :, 0, :, :] / 100) + + # Calculate flux anomaly for all levels except first level above surface - clear-sky, troposphere + flux_clr_tropo = (clr_kern[:, :, 1:, :, :] * frac_tropo * dp_mat * var_anom[:, :, 1:, :, :] / 100) + + flux_clr_strato = (clr_kern[:, :, 1:, :, :] * frac_strato * dp_mat * var_anom[:, :, 1:, :, :] / 100) + + # Calculate flux anomaly for level above surface + flux_clr_tropo_bottom = (clr_kern[:, :, 0, :, :] \ + * dpsfc * var_anom[:, :, 0, :, :] / 100) + + flux_clr_strato_bottom = (clr_kern[:, :, 0, :, :] \ + * dpsfc * var_anom[:, :, 0, :, :] / 100) + + frac_tropo, frac_strato = None, None + + # Reshape fluxanom variables and vertically integrate + flux_tot_tropo = da.append(flux_tot_tropo, flux_tot_tropo_bottom[:, :, np.newaxis, ...], axis=2) + flux_clr_tropo = da.append(flux_clr_tropo, flux_clr_tropo_bottom[:, :, np.newaxis, ...], axis=2) + flux_tot_strato = da.append(flux_tot_strato, flux_tot_strato_bottom[:, :, np.newaxis, ...], axis=2) + flux_clr_strato = da.append(flux_clr_strato, flux_clr_strato_bottom[:, :, np.newaxis, ...], axis=2) + + flux_tot_tropo = np.reshape(np.squeeze(np.nansum(flux_tot_tropo, axis=2)), (sp[0] * sp[1], sp[3], sp[4])) + flux_clr_tropo = np.reshape(np.squeeze(np.nansum(flux_clr_tropo, axis=2)), (sp[0] * sp[1], sp[3], sp[4])) + flux_tot_strato = np.reshape(np.squeeze(np.nansum(flux_tot_strato, axis=2)), (sp[0] * sp[1], sp[3], sp[4])) + flux_clr_strato = np.reshape(np.squeeze(np.nansum(flux_clr_strato, axis=2)), (sp[0] * sp[1], sp[3], sp[4])) + + return np.asarray(flux_tot_tropo), np.asarray(flux_clr_tropo), np.asarray(flux_tot_strato), np.asarray( + flux_clr_strato) + + +# ====================================================================== +# fluxanom_calc_3D + +def fluxanom_calc_3D(var_pert_tot, var_base_tot, tot_kern, clr_kern, var_pert_clr=None, var_base_clr=None): + """ + + Computes anomalies of radiatively-relevant 3D climate variable and multiplies + by radiative kernel to convert to radiative flux change. Clear- and all-sky calculations. + Note var_*_clr not always used. Specifically an option for clear-sky albedo calculations. 
+ + """ + + sp = var_pert_tot.shape + sb = var_base_tot.shape + skt = tot_kern.shape + skc = clr_kern.shape + + flux_sfc_tot = np.zeros((int(sp[0] / 12), 12, sp[1], sp[2])) + flux_sfc_clr = np.zeros((int(sp[0] / 12), 12, sp[1], sp[2])) + if len(skt) != 3 or len(skc) != 3 or len(sp) != 3 or len(sb) != 3: + print("An input variable is not 3D! Function will not execute") + else: + + # Prep variable to analyze on a monthly-basis + var_pert_tot_re = np.reshape(var_pert_tot, (int(sp[0] / 12), 12, sp[1], sp[2])) + var_base_tot_re = np.reshape(var_base_tot, (int(sb[0] / 12), 12, sb[1], sb[2])) + + if var_pert_clr is not None: + var_pert_clr_re = np.reshape(var_pert_clr, (int(sp[0] / 12), 12, sp[1], sp[2])) + if var_base_clr is not None: + var_base_clr_re = np.reshape(var_base_clr, (int(sb[0] / 12), 12, sb[1], sb[2])) + + for m in range(0, 12): + + # Conduct calculations by month, using m index to isolate data accordingly + # Create climatology by averaging all timesteps in the var_base variable + var_base_tot_m_tmean = np.squeeze(np.nanmean(var_base_tot_re[:, m, :, :], axis=0)) + var_pert_tot_m = np.squeeze(var_pert_tot_re[:, m, :, :]) + + if var_base_clr is not None: + var_base_clr_m_tmean = np.squeeze(np.mean(var_base_clr_re[:, m, :, :], axis=0)) + var_pert_clr_m = np.squeeze(var_pert_clr_re[:, m, :, :]) + + # Compute anomalies + var_tot_anom = var_pert_tot_m - np.repeat(var_base_tot_m_tmean[np.newaxis, :, :], int(sp[0] / 12), axis=0) + + if var_base_clr is not None: + var_clr_anom = var_pert_clr_m - np.repeat(var_base_clr_m_tmean[np.newaxis, :, :], int(sp[0] / 12), + axis=0) + + # Compute flux anomaly - total-sky + flux_sfc_tot[:, m, :, :] = np.squeeze( + np.repeat(tot_kern[np.newaxis, m, :, :], int(sp[0] / 12), axis=0)) * var_tot_anom + + # Compute flux anomaly - clear-sky + if var_base_clr is not None: + flux_sfc_clr[:, m, :, :] = np.squeeze( + np.repeat(clr_kern[np.newaxis, m, :, :], int(sp[0] / 12), axis=0)) * var_clr_anom + else: + flux_sfc_clr[:, m, :, :] = np.squeeze( + np.repeat(clr_kern[np.newaxis, m, :, :], int(sp[0] / 12), axis=0)) * var_tot_anom + + # Reshape flux anomalies + flux_sfc_tot = np.reshape(flux_sfc_tot, (sp[0], sp[1], sp[2])) + flux_sfc_clr = np.reshape(flux_sfc_clr, (sp[0], sp[1], sp[2])) + + return flux_sfc_tot, flux_sfc_clr + + +# ====================================================================== +# esat_coef + +def esat_coef(temp): + """ + + Computes the saturation vapor pressure coefficient necessary for water vapor + radiative flux calculations + + """ + + tc = temp - 273 + aw = np.array([6.11583699, 0.444606896, 0.143177157e-01, + 0.264224321e-03, 0.299291081e-05, 0.203154182e-07, + 0.702620698e-10, 0.379534310e-13, -0.321582393e-15]) + ai = np.array([6.11239921, 0.443987641, 0.142986287e-01, + 0.264847430e-03, 0.302950461e-05, 0.206739458e-07, + 0.640689451e-10, -0.952447341e-13, -0.976195544e-15]) + esat_water = aw[0] + esat_ice = ai[0] + + for z in range(1, 9): + esat_water = esat_water + aw[z] * (tc ** (z)) + esat_ice = esat_ice + ai[z] * (tc ** (z)) + + esat = esat_ice + b = np.where(tc > 0) + esat[b] = esat_water[b] + + return esat + + +# ====================================================================== +# latlonr3_3D4D +# + +def latlonr3_3D4D(variable, lat_start, lon_start, lat_end, lon_end, kern): + """ + + Reformats, reorders and regrids lat,lon so model data matches kernel data grid + + """ + + # Check if start and end lat are in similar order. If not, flip. 
+ if ((lat_start[0] > lat_start[-1]) and (lat_end[0] < lat_end[-1])) or \ + ((lat_start[0] < lat_start[-1]) and (lat_end[0] > lat_end[-1])): + lat_start = np.flipud(lat_start) + variable = variable[..., ::-1, :] + + # Check if start and end lon both are 0-360 or both -180-180. If not, make them the same + if ((np.max(lon_start) >= 300) and (np.max(lon_end) > 100 and np.max(lon_end) < 300)): + + lon1 = np.mod((lon_start + 180), 360) - 180 + + lon1a = lon1[0:int(len(lon1) / 2)] + lon1b = lon1[int(len(lon1) / 2):] + start1a = variable[..., 0:int(len(lon1) / 2)] + start1b = variable[..., int(len(lon1) / 2):] + lon_start = np.concatenate((lon1b, lon1a)) + variable = np.concatenate((start1b, start1a), axis=-1) + elif ((np.max(lon_start) > 100 and np.max(lon_start) < 300) and (np.max(lon_end) >= 300)): + lon1 = np.mod(lon_start, 360) + + lon1a = lon1[0:int(len(lon1) / 2)] + lon1b = lon1[int(len(lon1) / 2):] + start1a = variable[..., 0:int(len(lon1) / 2)] + start1b = variable[..., int(len(lon1) / 2):] + lon_start = np.concatenate((lon1b, lon1a)) + variable = np.concatenate((start1b, start1a), axis=-1) + + # If, after above change (or after skipping that step), start and end lon are in different order, flip. + if ((lon_start[0] > lon_start[-1]) and (lon_end[0] < lon_end[-1])) or \ + ((lon_start[0] < lon_start[-1]) and (lon_end[0] > lon_end[-1])): + lon_start = np.flipud(lon_start) + variable = variable[..., ::-1] + + # Now that latitudes and longitudes have similar order and format, regrid. + Y_start, X_start = np.meshgrid(lat_start, lon_start) + Y_kern, X_kern = np.meshgrid(lat_end, lon_end) + + if len(variable.shape) == 3: # For 3D data + shp_start = variable.shape + shp_kern = kern.shape + variable_new = np.empty((shp_start[0], shp_kern[1], shp_kern[2])) * np.nan + for kk in range(shp_start[0]): + variable_new[kk, :, :] = griddata((Y_start.flatten(), \ + X_start.flatten()), np.squeeze(variable[kk, :, :]).T.flatten(), \ + (Y_kern.flatten(), X_kern.flatten()), fill_value=np.nan).reshape( + shp_kern[2], shp_kern[1]).T + elif len(variable.shape) == 4: # For 4D data + shp_start = variable.shape + shp_kern = kern.shape + variable_new = np.empty((shp_start[0], shp_start[1], shp_kern[2], shp_kern[3])) * np.nan + for ll in range(shp_start[1]): + for kk in range(shp_start[0]): + variable_new[kk, ll, :, :] = griddata((Y_start.flatten(), + X_start.flatten()), + np.squeeze(variable[kk, ll, :, :]).T.flatten(), + (Y_kern.flatten(), X_kern.flatten()), fill_value=np.nan).reshape( + shp_kern[3], shp_kern[2]).T + + return variable_new + + +# ====================================================================== +# globemean_2D +# + +def globemean_2D(var, w): + """ + + Compute cosine weighted global-mean over a 2D variable + + """ + + var_mask = ma.masked_array(var, ~np.isfinite(var)) + var_mean = np.squeeze(np.average(np.nanmean(var_mask, axis=1), weights=w)) + + return var_mean + + +# ====================================================================== +# globemean_3D +# + +def globemean_3D(var, w): + """ + + Compute cosine weighted global-mean over a 3D variable + + """ + + var_mask = ma.masked_array(var, ~np.isfinite(var)) + var_mean = np.squeeze(np.average(np.nanmean(var_mask, axis=2), axis=1, weights=w)) + + return var_mean + + +# ====================================================================== +# fluxanom_nc_create +# + +def fluxanom_nc_create(variable, lat, lon, fbname): + """ + + Saves 2D feedback or forcing variables into a NetCDF + + """ + var = xr.DataArray(variable, coords=[lat, lon], dims=['lat', 'lon'], name=fbname) + 
var.to_netcdf(os.environ['WK_DIR'] + '/model/netCDF/fluxanom2D_' + fbname + '.nc') + + return None + + +# ====================================================================== +# feedback_regress +# +# Regeresses radiative flux anomalies with global-mean dTs to compute 2D feedback +def feedback_regress(fluxanom, tspert, tsclimo, lat, lon, fbname): + """ + + Regresses radiative flux anomalies with global-mean dTs to compute 2D feedback + + + """ + + sp = tspert.shape + sc = tsclimo.shape + + tsclimo_re = np.squeeze(np.nanmean(np.reshape(tsclimo, \ + (int(sc[0] / 12), 12, sc[1], sc[2])), axis=0)) + + tsanom = tspert - np.reshape(np.repeat(tsclimo_re[np.newaxis, ...], int(sp[0] / 12), \ + axis=0), (sp[0], sp[1], sp[2])) + + weights = np.repeat(np.cos(np.deg2rad(lat))[np.newaxis, :], sp[0], axis=0) + tsanom_globemean = globemean_3D(tsanom, weights) + tsanom_re = np.repeat(np.repeat(tsanom_globemean[:, np.newaxis], \ + sp[1], axis=1)[..., np.newaxis], sp[2], axis=2) + + tsanom_re_timemean = np.nanmean(tsanom_re, axis=0) + tsanom_re_std = np.nanstd(tsanom_re, axis=0) + fluxanom_timemean = np.nanmean(fluxanom, axis=0) + + n = np.sum(~np.isnan(tsanom_re), axis=0) + cov = np.nansum((tsanom_re - tsanom_re_timemean) * \ + (fluxanom - fluxanom_timemean), axis=0) / n + slopes = cov / (tsanom_re_std ** 2) + + fluxanom_nc_create(slopes, lat, lon, fbname) + + return slopes + + +# ====================================================================== +# bargraph_plotting +# + +def bargraph_plotting(model_bar, obs_bar, var_units, var_legnames, var_filename): + """ + + Used for plotting the first four figures generated by forcing_feedback_plots.py. + Shows global-mean results from the model and observations + + """ + + barWidth = 0.125 + r1 = np.arange(len(model_bar)) + r2 = [x + barWidth for x in r1] + plt.bar(r1, model_bar, color='blue', width=barWidth, edgecolor='white', \ + label='Model') + plt.bar(r2, obs_bar, color='red', width=barWidth, edgecolor='white', \ + label='Observations') + plt.axhline(0, color='black', lw=1) + plt.ylabel(var_units) + plt.xticks([r + barWidth for r in range(len(model_bar))], var_legnames) + plt.legend(loc="upper right") + plt.savefig(os.environ['WK_DIR'] + '/model/PS/forcing_feedback_globemean_' + var_filename + '.eps') + plt.close() + + return None + + +# ====================================================================== +# map_plotting_4subs +# +# Function for producing figured with 4 subplot maps generated by +# forcing_feedback_plots.py + +def map_plotting_4subs(cbar_levs1, cbar_levs2, var1_name, var1_model, \ + model_origlon, lon_m, lat_m, lon_o, lat_o, var1_obs, var2_name, \ + var2_model, var2_obs, var_units, var_filename): + """ + + Function for producing figured with 4 subplot maps generated by + forcing_feedback_plots.py + + + """ + + fig, axs = plt.subplots(2, 2, subplot_kw=dict(projection= \ + ccrs.PlateCarree(central_longitude=180)), figsize=(8, 8)) + + axs[0, 0].set_title(var1_name + ' - Model') + if np.max(model_origlon) > 300: # convert 0-360 lon to -180-180 lon for plotting + start1a = var1_model[..., 0:int(len(model_origlon) / 2)] + start1b = var1_model[..., int(len(model_origlon) / 2):] + var1_model = np.concatenate((start1b, start1a), axis=1) + var1_model, lon_m180 = add_cyclic_point(var1_model, coord=lon_m) + cs = axs[0, 0].contourf(lon_m180, lat_m, var1_model, cmap=plt.cm.RdBu_r, \ + transform=ccrs.PlateCarree(), vmin=cbar_levs1[0], \ + vmax=cbar_levs1[-1], levels=cbar_levs1, extend='both') + axs[0, 0].coastlines() + g1 = axs[0, 
0].gridlines(linestyle=':') + g1.xlines = False + g1.ylabels_left = True + g1.ylocator = mticker.FixedLocator(np.arange(-60, 61, 30)) + g1.yformatter = LATITUDE_FORMATTER + if not np.all(cbar_levs1 == cbar_levs2): + cbar = plt.colorbar(cs, ax=axs[0, 0], orientation='horizontal', aspect=25) + cbar.set_label(var_units) + + axs[0, 1].set_title(var1_name + ' - Obs.') + var1_obs, lon_o180 = add_cyclic_point(var1_obs, coord=lon_o) + cs = axs[0, 1].contourf(lon_o180, lat_o, var1_obs, cmap=plt.cm.RdBu_r, \ + transform=ccrs.PlateCarree(), vmin=cbar_levs1[0], \ + vmax=cbar_levs1[-1], levels=cbar_levs1, extend='both') + axs[0, 1].coastlines() + g1 = axs[0, 1].gridlines(linestyle=':') + g1.xlines = False + if np.all(cbar_levs1 == cbar_levs2) == False: + cbar = plt.colorbar(cs, ax=axs[0, 1], orientation='horizontal', aspect=25) + cbar.set_label(var_units) + + axs[1, 0].set_title(var2_name + ' - Model') + if np.max(model_origlon) > 300: # convert 0-360 lon to -180-180 lon for plotting + start1a = var2_model[..., 0:int(len(model_origlon) / 2)] + start1b = var2_model[..., int(len(model_origlon) / 2):] + var2_model = np.concatenate((start1b, start1a), axis=1) + var2_model, lon_m180 = add_cyclic_point(var2_model, coord=lon_m) + cs = axs[1, 0].contourf(lon_m180, lat_m, var2_model, cmap=plt.cm.RdBu_r, \ + transform=ccrs.PlateCarree(), vmin=cbar_levs2[0], \ + vmax=cbar_levs2[-1], levels=cbar_levs2, extend='both') + axs[1, 0].coastlines() + g1 = axs[1, 0].gridlines(linestyle=':') + g1.xlines = False + g1.ylabels_left = True + g1.ylocator = mticker.FixedLocator(np.arange(-60, 61, 30)) + g1.yformatter = LATITUDE_FORMATTER + if not np.all(cbar_levs1 == cbar_levs2): + cbar = plt.colorbar(cs, ax=axs[1, 0], orientation='horizontal', aspect=25) + cbar.set_label(var_units) + + axs[1, 1].set_title(var2_name + ' - Obs.') + var2_obs, lon_o180 = add_cyclic_point(var2_obs, coord=lon_o) + cs = axs[1, 1].contourf(lon_o180, lat_o, var2_obs, cmap=plt.cm.RdBu_r, \ + transform=ccrs.PlateCarree(), vmin=cbar_levs2[0], \ + vmax=cbar_levs2[-1], levels=cbar_levs2, extend='both') + axs[1, 1].coastlines() + g1 = axs[1, 1].gridlines(linestyle=':') + g1.xlines = False + if not np.all(cbar_levs1 == cbar_levs2): + cbar = plt.colorbar(cs, ax=axs[1, 1], orientation='horizontal', aspect=25) + cbar.set_label(var_units) + + if np.all(cbar_levs1 == cbar_levs2): + cbar = plt.colorbar(cs, ax=axs.ravel(), orientation='horizontal', aspect=25) + cbar.set_label(var_units) + plt.savefig(os.environ['WK_DIR'] + '/model/PS/forcing_feedback_maps_' + \ + var_filename + '.eps', bbox_inches='tight') + plt.close() + + return None + + +# ====================================================================== +# map_plotting_2subs +# +# Function for producing figured with 2 subplot maps generated by +# forcing_feedback_plots.py + +def map_plotting_2subs(cbar_levs, var_name, var_model, + model_origlon, lon_m, lat_m, lon_o, lat_o, var_obs, + var_units, var_filename): + """ + + Function for producing figured with 2 subplot maps generated by + forcing_feedback_plots.py + + """ + + fig, axs = plt.subplots(1, 2, subplot_kw=dict(projection= \ + ccrs.PlateCarree(central_longitude=180))) + + axs[0].set_title(var_name + ' - Model') + if np.max(model_origlon) > 300: # convert 0-360 lon to -180-180 lon for plotting + start1a = var_model[..., 0:int(len(model_origlon) / 2)] + start1b = var_model[..., int(len(model_origlon) / 2):] + var_model = np.concatenate((start1b, start1a), axis=1) + var_model, lon_m180 = add_cyclic_point(var_model, coord=lon_m) + axs[0].contourf(lon_m180, 
lat_m, var_model, cmap=plt.cm.RdBu_r,
+                      transform=ccrs.PlateCarree(), vmin=cbar_levs[0],
+                      vmax=cbar_levs[-1], levels=cbar_levs, extend='both')
+    axs[0].coastlines()
+    g1 = axs[0].gridlines(linestyle=':')
+    g1.xlines = False
+    g1.ylabels_left = True
+    g1.ylocator = mticker.FixedLocator(np.arange(-60, 61, 30))
+    g1.yformatter = LATITUDE_FORMATTER
+
+    axs[1].set_title(var_name + ' - Obs.')
+    var_obs, lon_o180 = add_cyclic_point(var_obs, coord=lon_o)
+    cs = axs[1].contourf(lon_o180, lat_o, var_obs, cmap=plt.cm.RdBu_r,
+                         transform=ccrs.PlateCarree(), vmin=cbar_levs[0],
+                         vmax=cbar_levs[-1], levels=cbar_levs, extend='both')
+    axs[1].coastlines()
+    g1 = axs[1].gridlines(linestyle=':')
+    g1.xlines = False
+
+    cbar = plt.colorbar(cs, ax=axs.ravel(), orientation='horizontal', aspect=25)
+    cbar.set_label(var_units)
+    plt.savefig(os.environ['WK_DIR'] + '/model/PS/forcing_feedback_maps_' +
+                var_filename + '.eps', bbox_inches='tight')
+    plt.close()
+
+    return None
diff --git a/diagnostics/forcing_feedback/obs_processing_script.py b/diagnostics/forcing_feedback/obs_processing_script.py
new file mode 100644
index 000000000..8e5d71aac
--- /dev/null
+++ b/diagnostics/forcing_feedback/obs_processing_script.py
@@ -0,0 +1,214 @@
+import sys
+import xarray as xr
+import pandas as pd
+import numpy as np
+
+np.set_printoptions(threshold=sys.maxsize)
+import numpy.ma as ma
+
+
+# ======================================================================
+# globemean_3D
+#
+# Compute cosine weighted global-mean
+def globemean_3D(var, w):
+    var_mask = ma.masked_array(var, ~np.isfinite(var))
+    var_mean = np.squeeze(np.average(np.nanmean(var_mask, axis=2), axis=1, weights=w))
+
+    return var_mean
+
+
+def globemean_2D(var, w):
+    var_mask = ma.masked_array(var, ~np.isfinite(var))
+    var_mean = np.squeeze(np.average(np.nanmean(var_mask, axis=1), weights=w))
+
+    return var_mean
+
+
+# ======================================================================
+# feedback_regress
+#
+# Regresses radiative flux anomalies with global-mean dTs to compute 2D feedback
+def feedback_regress(fluxanom, tspert, tsclimo, lat, lon, fbname):
+    sp = tspert.shape
+    sc = tsclimo.shape
+
+    tsclimo_re = np.squeeze(np.nanmean(np.reshape(tsclimo, (int(sc[0] / 12), 12, sc[1], sc[2])), axis=0))
+
+    tsanom = tspert - np.reshape(np.repeat(tsclimo_re[np.newaxis, ...], int(sp[0] / 12),
+                                           axis=0), (sp[0], sp[1], sp[2]))
+
+    weights = np.repeat(np.cos(np.deg2rad(lat))[np.newaxis, :], sp[0], axis=0)
+    tsanom_globemean = globemean_3D(tsanom, weights)
+    tsanom_re = np.repeat(np.repeat(tsanom_globemean[:, np.newaxis], sp[1], axis=1)[..., np.newaxis], sp[2], axis=2)
+
+    tsanom_re_timemean = np.nanmean(tsanom_re, axis=0)
+    tsanom_re_std = np.nanstd(tsanom_re, axis=0)
+    fluxanom_timemean = np.nanmean(fluxanom, axis=0)
+
+    n = np.sum(~np.isnan(tsanom_re), axis=0)
+    cov = np.nansum((tsanom_re - tsanom_re_timemean) *
+                    (fluxanom - fluxanom_timemean), axis=0) / n
+    slopes = cov / (tsanom_re_std ** 2)
+
+    slopes = xr.DataArray(slopes, coords=[lat, lon], dims=['lat', 'lon'], name=fbname)
+
+    return slopes
+
+
+kname = 'CloudSat'
+regime = 'TOA'
+
+# Read in reanalysis data and kernel-derived radiative anomalies and forcing,
+# which were computed using a slightly modified version of the MDTF "Forcing and Feedback"
+# module code. Then, regress against global-mean surface temperature, when applicable,
+# to compute radiative feedbacks using the "feedback_regress" function detailed above.
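+# At each grid point the feedback is the regression slope
+#     lambda(lat, lon) = cov(R'(t, lat, lon), Ts'_gm(t)) / var(Ts'_gm(t))   [W m-2 per K]
+# where R' is the monthly radiative flux anomaly and Ts'_gm is the global-mean surface
+# temperature anomaly; this ratio is exactly what feedback_regress() computes above.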
+# For Instantaneous Radiative Forcing, we just regress against time (trend)
+
+# Surface temperature
+
+ts_GISS_anoms = xr.open_dataset(
+    '/discover/nobackup/projects/geos5rad/rjkramer/GISStemp/gistemp1200_GHCNv4_ERSSTv5_MDTF.nc')
+ts_GISS_anoms = ts_GISS_anoms.assign_coords(
+    time=pd.date_range('1880-01', periods=len(ts_GISS_anoms.time.values), freq='M'))
+ts_GISS_anoms = ts_GISS_anoms.sel(time=slice('2003-01', '2014-12'))
+ts_GISS_anoms_climo = ts_GISS_anoms.sel(time=slice('2003-01', '2014-12'))
+ts_GISS_anoms_climo_climatology = ts_GISS_anoms_climo.groupby('time.month').mean('time')
+ts_GISS_anoms_anoms = ts_GISS_anoms.groupby('time.month') - ts_GISS_anoms_climo_climatology
+ts_pert = ts_GISS_anoms_anoms.tempanomaly.values
+ts_climo = ts_GISS_anoms_anoms.tempanomaly.values
+
+# Planck radiative anomalies
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_planck_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_pl_era = xr.open_dataset(inputfile)
+nc_pl_era = nc_pl_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_pl_era.time.values), freq='M')).sel(
+    time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_pl_era.fluxanom_pl_sfc_tot.values + nc_pl_era.fluxanom_pl_trop_tot.values + nc_pl_era.fluxanom_pl_strat_tot.values
+fb_pl_era = feedback_regress(varhold, ts_pert, ts_climo, nc_pl_era.latitude.values, nc_pl_era.longitude.values,
+                             'Planck')
+varhold = None
+
+# Lapse Rate radiative anomalies
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_lapserate_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_lr_era = xr.open_dataset(inputfile)
+nc_lr_era = nc_lr_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_lr_era.time.values), freq='M')).sel(
+    time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_lr_era.fluxanom_lr_trop_tot.values + nc_lr_era.fluxanom_lr_strat_tot.values
+fb_lr_era = feedback_regress(varhold, ts_pert, ts_climo, nc_lr_era.latitude.values, nc_lr_era.longitude.values,
+                             'LapseRate')
+varhold = None
+
+# Water vapor radiative anomalies
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_watervapor_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_wv_era = xr.open_dataset(inputfile)
+nc_wv_era = nc_wv_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_wv_era.time.values),
+                                                       freq='M')).sel(time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_wv_era.fluxanom_lw_q_trop_tot.values + nc_wv_era.fluxanom_lw_q_strat_tot.values
+fb_lw_q_era = feedback_regress(varhold, ts_pert, ts_climo, nc_wv_era.latitude.values, nc_wv_era.longitude.values,
+                               'LW_WaterVapor')
+varhold = None
+varhold = nc_wv_era.fluxanom_sw_q_trop_tot.values + nc_wv_era.fluxanom_sw_q_strat_tot.values
+fb_sw_q_era = feedback_regress(varhold, ts_pert, ts_climo, nc_wv_era.latitude.values, nc_wv_era.longitude.values,
+                               'SW_WaterVapor')
+varhold = None
+
+# Cloud radiative anomalies
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_clouds_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_c_era = xr.open_dataset(inputfile)
+nc_c_era = nc_c_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_c_era.time.values), freq='M')).sel(
+    time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_c_era.fluxanom_lw_c.values
+fb_lw_c_era = feedback_regress(varhold, ts_pert, ts_climo, nc_c_era.latitude.values, nc_c_era.longitude.values,
+                               'LW_Cloud')
+varhold = None
+varhold = nc_c_era.fluxanom_sw_c.values
+fb_sw_c_era = feedback_regress(varhold, ts_pert, ts_climo, nc_c_era.latitude.values, nc_c_era.longitude.values,
+                               'SW_Cloud')
+varhold = None
+
+# Surface Albedo radiative anomalies
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_albedo_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_a_era = xr.open_dataset(inputfile)
+nc_a_era = nc_a_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_a_era.time.values), freq='M')).sel(
+    time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_a_era.fluxanom_a_sfc_tot.values
+fb_a_era = feedback_regress(varhold, ts_pert, ts_climo, nc_a_era.latitude.values, nc_a_era.longitude.values,
+                            'SfcAlbedo')
+varhold = None
+
+# Total TOA radiative imbalance
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_netrad_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_netrad_era = xr.open_dataset(inputfile)
+nc_netrad_era = nc_netrad_era.assign_coords(
+    time=pd.date_range('2003-01', periods=len(nc_netrad_era.time.values), freq='M')).sel(
+    time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_netrad_era.netrad_lw_tot.values
+fb_lw_netrad_era = feedback_regress(varhold, ts_pert, ts_climo, nc_netrad_era.latitude.values,
+                                    nc_netrad_era.longitude.values, 'LW_Rad')
+varhold = None
+varhold = nc_netrad_era.netrad_sw_tot.values
+fb_sw_netrad_era = feedback_regress(varhold, ts_pert, ts_climo, nc_netrad_era.latitude.values,
+                                    nc_netrad_era.longitude.values, 'SW_Rad')
+varhold = None
+
+# Instantaneous Radiative Forcing
+inputfile = ('/gpfsm/dnb32/rjkramer/MERRAfb/results/' + regime + '_Fluxanom_IRF_Results_K' + kname +
+             '_MERRAfb_wCERES_retry.nc')
+nc_IRF_era = xr.open_dataset(inputfile)
+nc_IRF_era = nc_IRF_era.assign_coords(time=pd.date_range('2003-01', periods=len(nc_IRF_era.time.values),
+                                                         freq='M')).sel(time=slice('2003-01', '2014-12'))
+inputfile = None
+
+varhold = nc_IRF_era.IRF_lw_tot.values
+shp = ts_pert.shape
+xtime = np.repeat(np.repeat(np.arange(shp[0])[..., np.newaxis], shp[1], axis=1)[..., np.newaxis], shp[2], axis=2)
+fb_lw_IRF_era = feedback_regress(varhold, xtime, xtime * 0, nc_IRF_era.latitude.values, nc_IRF_era.longitude.values,
+                                 'LW_IRF')
+varhold = None
+varhold = nc_IRF_era.IRF_sw_tot.values
+fb_sw_IRF_era = feedback_regress(varhold, xtime, xtime * 0, nc_IRF_era.latitude.values, nc_IRF_era.longitude.values,
+                                 'SW_IRF')
+varhold = None
+
+# Save 2D Radiative feedbacks (W/m2/K) and IRF trends (W/m2/month) to a netcdf for use in the MDTF Module
+ds = fb_pl_era.to_dataset()
+ds['LapseRate'] = fb_lr_era
+ds['LW_WaterVapor'] = fb_lw_q_era
+ds['SW_WaterVapor'] = fb_sw_q_era
+ds['SfcAlbedo'] = fb_a_era
+ds['LW_Cloud'] = fb_lw_c_era
+ds['SW_Cloud'] = fb_sw_c_era
+ds['LW_IRF'] = fb_lw_IRF_era  # W/m2/month for now. converted to W/m2/yr later in POD
+ds['SW_IRF'] = fb_sw_IRF_era  # W/m2/month for now. 
converted to W/m2/yr later in POD +ds['LW_Rad'] = fb_lw_netrad_era +ds['SW_Rad'] = fb_sw_netrad_era + +ds.to_netcdf('forcing_feedback_obs_MERRA2.nc') + +weights = np.cos(np.deg2rad(nc_IRF_era.latitude.values)) +print('Rad') +print(globemean_2D(fb_lw_netrad_era + fb_sw_netrad_era, weights)) +print('CldFB') +print(globemean_2D(fb_lw_c_era + fb_sw_c_era, weights)) +print('LR FB') +print(globemean_2D(fb_lr_era, weights)) +print('IRF') +print(globemean_2D(fb_lw_IRF_era + fb_sw_IRF_era, weights)) diff --git a/diagnostics/forcing_feedback/settings.jsonc b/diagnostics/forcing_feedback/settings.jsonc new file mode 100644 index 000000000..0deac150f --- /dev/null +++ b/diagnostics/forcing_feedback/settings.jsonc @@ -0,0 +1,111 @@ +{ + "settings" : { + "driver" : "forcing_feedback.py", + "long_name" : "Radiative Forcing and Feedback Diagnostics", + "realm" : "atmos", + "runtime_requirements": { + "python3": ["os", "numpy", "xarray", "pandas", "netCDF4", "scipy", "matplotlib", "cartopy"] + }, + "pod_env_vars" : { + "LW_CLOUDMASK": 1.24, + "SW_CLOUDMASK": 2.43 + } + }, + "data": { + "format": "any_netcdf_classic", + "rename_dimensions": false, + "rename_variables": false, + "multi_file_ok": false, + "frequency": "mon", + "min_frequency": "mon", + "max_frequency": "mon", + "min_duration": "5yr", + "max_duration": "any" + }, + "dimensions": { + "lat": {"standard_name": "latitude"}, + "lon": {"standard_name": "longitude"}, + "plev": { + "standard_name": "air_pressure", + "units": "Pa", + "positive": "down", + "axis": "Z" + }, + "time": {"standard_name": "time"} + }, + "varlist" : { + "ts": { + "standard_name": "surface_temperature", + "units": "K", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + + }, + "ta": { + "standard_name": "air_temperature", + "units": "K", + "dimensions" : ["time", "plev", "lat", "lon"], + "freq": "mon" + }, + "hus": { + "standard_name": "specific_humidity", + "units": "1", + "dimensions" : ["time", "plev", "lat", "lon"], + "freq": "mon" + }, + "rsus": { + "standard_name": "surface_upwelling_shortwave_flux_in_air", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsuscs": { + "standard_name": "surface_upwelling_shortwave_flux_in_air_assuming_clear_sky", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsds": { + "standard_name": "surface_downwelling_shortwave_flux_in_air", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsdscs": { + "standard_name": "surface_downwelling_shortwave_flux_in_air_assuming_clear_sky", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsdt": { + "standard_name": "toa_incoming_shortwave_flux", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsut": { + "standard_name": "toa_outgoing_shortwave_flux", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rsutcs": { + "standard_name": "toa_outgoing_shortwave_flux_assuming_clear_sky", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rlut": { + "standard_name": "toa_outgoing_longwave_flux", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + }, + "rlutcs": { + "standard_name": "toa_outgoing_longwave_flux_assuming_clear_sky", + "units": "W m-2", + "dimensions" : ["time", "lat", "lon"], + "freq": "mon" + } + } +}
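+// Usage note (illustrative sketch, not code from this POD): the constants in "pod_env_vars"
+// above are meant to be read by the POD scripts at runtime via environment variables, the same
+// way the plotting utilities read os.environ['WK_DIR'] for output paths. For example, in Python:
+//     lw_mask = float(os.environ['LW_CLOUDMASK'])   # hypothetical variable names for illustration
+//     sw_mask = float(os.environ['SW_CLOUDMASK'])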